\begin{document}
\begin{abstract}
We consider a class of endomorphisms which contains a set of piecewise partially hyperbolic dynamics semi-conjugated to non-uniformly expanding maps. The considered transformations preserve a foliation which is almost everywhere uniformly contracted, with possible discontinuity sets parallel to the contracting direction. We apply the spectral gap property and the $\zeta$-H\"older regularity of the disintegration of the physical measure to prove a quantitative statistical stability statement. More precisely, under deterministic perturbations of the system of size $\delta$, we show that the physical measure varies continuously with respect to a strong $L^\infty$-like norm. Moreover, we prove that for certain interesting classes of perturbations its modulus of continuity is $O(\delta^\zeta \log \delta)$.
\end{abstract}
\title[Quantitative Stability for Equilibrium States of Skew Products]{Quantitative Statistical Stability for Equilibrium States of Piecewise Partially Hyperbolic Maps.}
\author[Rafael A. Bilbao]{Rafael A. Bilbao}
\author[Ricardo Bioni]{Ricardo Bioni}
\author[Rafael Lucena]{Rafael Lucena}
\date{\today }
\keywords{Statistical Stability, Transfer
Operator, Equilibrium States, Skew Product.}
\address[Rafael A. Bilbao]{Universidad Pedag\'ogica y Tecnol\'ogica de Colombia, Avenida Central del Norte 39-115, Sede Central Tunja, Boyac\'a, 150003, Colombia.}
\email{[email protected]}
\address[Ricardo Bioni]{Rua Costa Bastos, 34, Santa Teresa, Rio de Janeiro-Brasil}
\email{[email protected]}
\address[Rafael Lucena]{Universidade Federal de Alagoas, Instituto de Matemática - UFAL, Av. Lourival Melo Mota, S/N
Tabuleiro dos Martins, Maceio - AL, 57072-900, Brasil}
\email{[email protected]}
\urladdr{www.im.ufal.br/professor/rafaellucena}
\maketitle
\section{Introduction}
The understanding of how statistical properties change when a system is perturbed is of great interest for both pure and applied mathematics. When a certain property of the system varies continuously under deterministic or even stochastic modifications, we say that it is \textit{statistically stable}. The main motivation for this sort of feature is the study of how uncertainty affects the quantitative and qualitative measurements of the system.
The fundamental aspects of the asymptotic behavior of a system are determined by its invariant measure, which makes the study of its statistical stability highly relevant. From a functional analytic point of view, the problem becomes to understand how the eigenvectors of the transfer operator associated with unitary eigenvalues vary when the system changes. Among the important tools available to reach this end, a remarkable one is the Keller-Liverani stability result (see \cite{KL}). But it is not always suitable. In fact, in the absence of the hypotheses required to apply \cite{KL} in the environment introduced by \cite{GLU} for Lorenz-like systems, S. Galatolo and R. Lucena defined a class of anisotropic spaces appropriate to work with the limitations at hand. A few years later, the general ideas behind the spaces of \cite{GLU} proved fruitful for studying the dynamical features of other classes of skew products, especially their statistical stability. Indeed, \cite{GIND} used the mentioned spaces to establish a quantitative statistical stability and convergence to equilibrium result for maps with indifferent fixed points, while \cite{Gjep} used them to obtain the same results for a class of partially hyperbolic skew products. Other results reaching statistical stability statements are \cite{AS}, \cite{JS2}, \cite{JFA}, \cite{JFAMV}, \cite{WBV}, \cite{BGK}, \cite{RJM}, \cite{KKL}, \cite{WSSVS} and \cite{BR}.
While the uniformly hyperbolic scenario is well understood, the overall picture for partially hyperbolic systems is far from complete, especially when non-invertible maps and discontinuities are allowed. Works in this direction are \cite{Gjep} and \cite{TC}, where the former allows discontinuities and the latter is restricted to smooth invertible systems.
In this paper we obtain quantitative results on the statistical stability, under deterministic perturbations, of the unique physical measure (an equilibrium state if $F$ is continuous, see Theorem \ref{belongss}) in the anisotropic space $S^\infty$. We reach this result by showing that an \textbf{admissible} \textbf{$R(\delta)$-perturbation} (see Definitions \ref{add} and \ref{UFL}) induces a \textbf{$(R(\delta), \zeta)$-family of operators} (Definition \ref{UF}), to which we apply some results on the regularity of the invariant measure obtained in \cite{RRR}, as well as some generalized ideas from \cite{GLU}. Contrasting with others, this result is stronger in some directions. For instance, compared with \cite{TC}, here we deal with discontinuities and non-invertible maps. Moreover, our weak norm, in which the stability statement is established (see Equation (\ref{stabll}) of Theorem \ref{htyttigu}), is stronger than the $L^1$-like norm used in \cite{GIND}, \cite{Gjep} and \cite{GLU} (for instance, see Theorem 9.2 of \cite{GLU}) to obtain this sort of result.
\noindent\textbf{Statement of the Main Results.} Here we state the main results of this work. We order the theorems so as to clarify the purpose of the paper and to show how the main results (Theorems \ref{htyttigu} and \ref{htyttigui}) are reached. The hypotheses (f1), (f2), (f3), (G1) and (G2) on the system $F$ will be stated in Section \ref{sec1}.
The first result guarantees existence and uniqueness of a physical measure for $F$ in the space $S^{\infty}$, which is an equilibrium state if $F$ is continuous. More precisely, once the system $(f,m_1)$ is fixed, $F:=(f,G)$ has a unique invariant measure in the space $S^\infty$. Moreover, from a functional point of view, for a given \textbf{admissible $R(\delta)$-perturbation} (see Definitions \ref{add} and \ref{UFL}) $\{F_{\delta}\}_{\delta \in [0,1)}$, the function $$\delta \longmapsto F_{\delta} \longmapsto \func {F}_{\delta*} \longmapsto \mu _\delta $$ turns out to be well defined (where $\func {F}_{\delta*}$ is the transfer operator of $F_{\delta}$), which is important for the subsequent results, Theorems \ref{htyttigu} and \ref{htyttigui}. Its proof is given in Section \ref{sofkjsdkgfhksjfgd}.
\begin{athm}
\label{belongss} The system $F$ has a unique invariant probability $\mu_0 \in S^{\infty }$. If $F$ is continuous, $\mu_0$ is an equilibrium state.
\end{athm}
The following Theorem \ref{dlogd} establishes a general and quantitative relation between a stochastic perturbation $\{\func{L}_{\delta }\}_{\delta \in \left[0, 1 \right)}$, called a \textbf{$(R(\delta), \zeta)$-family of operators} (see Definition \ref{UF}), and the variation of the induced family of fixed points, $\{\mu_{\delta }\}_{\delta \in [0,1)}$. It states that the function $\delta \longmapsto \mu_{\delta }$, given by $$\delta \longmapsto \func{L}_{\delta } \longmapsto \mu_{\delta },$$ varies continuously at $0$ with respect to a general weak norm $||\cdot||_w$, and it gives an explicit bound for its modulus of continuity: $O(R(\delta)^\zeta \log \delta )$.
\begin{athm} [Quantitative stability for stochastic perturbations] \label{dlogd}
Suppose $\{\func{L}_{\delta }\}_{\delta \in \left[0, 1 \right)}$ is a $(R(\delta),\zeta)$-family of operators (Definition \ref{UF}), where $\mu_{0}$ is the unique fixed point of $\func{L}_{0}$ in $B_{w}$ and $\mu_{\delta }$ is a fixed point of $\func{L} _{\delta }$. Then, there exists $\delta _0 \in (0,1)$ such that for all $\delta \in [0,\delta _0)$, it holds
\begin{equation*}
||\mu_{\delta }-\mu_{0}||_{w}\leq O(R(\delta)^\zeta \log \delta ).
\end{equation*}
\end{athm}
The next result, Theorem \ref{rr}, yields that every admissible $R(\delta)$-perturbation $\{F_\delta\}_{\delta \in [0,1)}$ induces a well defined function $$\delta \longmapsto F_\delta \longmapsto \func{L}_\delta \longmapsto \mu_{\delta },$$where $\func{L}_\delta = \func{F}_{\delta *}$ (the transfer operator of $F_\delta$). This allows us to apply Theorem \ref{dlogd} to obtain the main result of this work, Theorem \ref{htyttigu}, stated below.
\begin{athm}\label{rr}
Let $\{F_\delta\}_{\delta \in [0,1)}$ be an admissible $R(\delta)$-perturbation (satisfying both Definitions \ref{add} and \ref{UFL}) and let $\{\func{F_{\delta}{_*}}\}_{\delta \in [0,1)}$ be the induced family of transfer operators. Then, $\{\func {F_{\delta}{_*}}\}_{\delta \in [0,1)}$ is a $(R(\delta),\zeta)$-family of operators with weak space $(\mathcal{L}^{\infty}, || \cdot ||_\infty)$ and strong space $(S^\infty,||\cdot||_{S^\infty})$.
\end{athm}
The next result, Theorem \ref{htyttigu}, gives a relation between the deterministic perturbations called \textbf{admissible $R(\delta)$-perturbations} and the variation of the induced family of physical measures, $\{\mu_{\delta }\}_{\delta \in [0,1)}$. Moreover, it estimates the modulus of continuity of the induced function $\delta \longmapsto \mu_{\delta }$, given by $$\delta \longmapsto F_\delta \longmapsto \func{F}_{\delta *} \longmapsto \mu_{\delta },$$ with respect to the norm $||\cdot||_\infty$, showing that this modulus is of the order of $R(\delta)^\zeta \log \delta$.
\begin{athm}[Quantitative stability for deterministic perturbations]
Let $\{F_{\delta }\}_{\delta \in [0,1)} $ be an admissible $R(\delta)$-perturbation (satisfying both Definitions \ref{add} and \ref{UFL}). Denote by $\mu_\delta$ the invariant measure of $F_\delta$ in $S^\infty$, for all $\delta$. Then, there exists $\delta _0 \in (0,\delta_1)$ such that for all $\delta \in [0,\delta _0)$, it holds
\begin{equation}\label{stabll}
||\mu_{\delta }-\mu_{0}||_{\infty}\leq O(R(\delta)^\zeta \log \delta ).
\end{equation}
\label{htyttigu}
\end{athm}
Many interesting perturbations of $F$ ensure the existence of a linear $R(\delta)$ (see Definition \ref{UFL}); for instance, perturbations with respect to topologies defined on the set of skew products, induced by the $C^r$ topologies. Therefore, if the function $R(\delta)$ is of the type $$R(\delta)=K_5 \delta,$$ for all $\delta$ and a constant $K_5$, we immediately get the following result.
\begin{athm}[Quantitative stability for deterministic perturbations with a linear $R({\Greekmath 010E})$]
Let $\{F_{\delta }\}_{\delta \in [0,1)} $ be an admissible $R(\delta)$-perturbation, where $R(\delta)$ is defined by $R(\delta)=K_5\delta$. Denote by $\mu_\delta$ the unique invariant probability of $F_\delta$ in $S^\infty$, for all $\delta$. Then, there exists $\delta _0 \in (0,\delta_1)$ such that for all $\delta \in [0,\delta _0)$, it holds\footnote{A question to be answered is: is $O(\delta^\zeta \log \delta )$ an optimal modulus of continuity?}
\begin{equation}\label{stablli}
||\mu_{\delta }-\mu_{0}||_{\infty}\leq O(\delta^\zeta \log \delta ).
\end{equation}
\label{htyttigui}
\end{athm}
The following result is the main ingredient to reach Theorem \ref{htyttigu}. It states that the function $$\delta \longmapsto |\mu_{\delta }|_\zeta,$$ given by the family of $F_\delta$-invariant probabilities $\{\mu_{\delta }\}_{\delta \in [0,1)}$ induced by an admissible perturbation (Definition \ref{add}) $\{F_{\delta }\}_{\delta \in [0,1)} $, is uniformly bounded.
\begin{athm}
\label{thshgf}
Let $\{F_{\delta }\}_{\delta \in [0,1)}$ be an \textbf{admissible perturbation} (Definition \ref{add}) and let $\mu_{\delta }$ be the unique $F_\delta$-invariant probability in $S^\infty$. Then, there exists $B_u>0$ such that
\begin{equation*}
|\mu_{\delta }|_\zeta\leq B_u,
\end{equation*}for all $\delta \in[0,1)$.
\end{athm}
\noindent\textbf{Plan of the paper.} The paper is structured as follows:
\begin{itemize}
\item Section \ref{sec1}: we introduce the kind of systems considered in the paper. Essentially, it is a class of systems which contains a set of piecewise partially hyperbolic dynamics ($F(x,y)=(f(x), G(x,y))$) with a non-uniformly expanding base map, $f$, and whose fibers are uniformly contracted $m_1$-a.e., where $m_1$ is an $f$-invariant equilibrium state;
\item Section \ref{seccc}: we expose some tools and preliminary results which are already established. Most of them are from \cite{GLU} and \cite{RRR}. We use them to introduce the functional spaces used in the paper and discussed in the previous paragraphs;
\item Section \ref{jsdhjfnsd}: we introduce the sort of stochastic perturbations (\textbf{$(R(\delta),\zeta)$-families of operators}, see Definition \ref{UF}) we are going to work with and lay the basis to obtain Theorem \ref{dlogd};
\item Section \ref{kjrthkje}: we introduce the sort of deterministic perturbations (\textbf{admissible} perturbations and \textbf{$R(\delta)$-perturbations}) we are going to work with and lay the basis to obtain Theorems \ref{htyttigu} and \ref{thshgf};
\item Section \ref{sofkjsdkgfhksjfgd}: we prove Theorem \ref{belongss};
\item Section \ref{kdjfhksdjfhksdf}: we prove Theorem \ref{dlogd};
\item Section \ref{kkdjfkshfdsdfsttr}: we prove Theorems \ref{rr}, \ref{htyttigu} and \ref{htyttigui};
\item Section \ref{ieutiet}: we prove Theorem \ref{thshgf}.
\end{itemize}
\noindent\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Acknowledgments.} We are thankful to Stefano Galatolo and Isaia Nisoli for all valuable comments and fruitful discussions regarding this work.
This work was partially supported by Alagoas Research Foundation-FAPEAL (Brazil) Grants 60030 000587/2016, CNPq (Brazil) Grants 300398/2016-6, CAPES (Brazil) Grants 99999.014021/2013-07 and EU Marie-Curie IRSES Brazilian-European partnership in
Dynamical Systems (FP7-PEOPLE- 2012-IRSES 318999 BREUDS).
\section{Settings\label{sec1}}
Fix a compact and connected Riemannian manifold $M$, equipped with its Riemannian metric $d_1$. For the sake of simplicity, we suppose that $\diam(M) = 1$; this is not restrictive but avoids multiplicative constants. Moreover, consider a compact metric space $(K,d_2)$, endowed with its Borel $\sigma$-algebra $\mathcal{B}$. We set $\Sigma := M\times K$ and endow this space with the metric $d_1 + d_2$.
\subsection{Contracting Fiber Maps with Non-Uniformly Expanding Base\label{sec2}}
Let $F$ be the map $F:\Sigma \longrightarrow \Sigma$ given by
\begin{equation}\label{cccccc}
F(x,z)=(f(x),G(x,z)),
\end{equation}where $G: \Sigma \longrightarrow K$ and $f:M\longrightarrow M$ are measurable maps satisfying what follows.
\subsubsection{Hypothesis on $f$}\label{hf}
Suppose that $f:M \longrightarrow M$ is a local diffeomorphism and assume that there is a continuous function $L:M\longrightarrow\mathbb{R}$, s.t. for every $x \in M$ there exists a neighbourhood $U_x$, of $x$, so that $f_x:=f|_{U_x}: U_x \longrightarrow f(U_x)$ is invertible and $$d_1(f_x^{-1}(y), f_x^{-1}(z)) \leq L(x)d_1(y, z), \ \ \forall y,z \in f(U_x).$$
In particular, $\#f^{-1}(x)$ is constant for all $x \in M$. We set $\deg(f):=\#f^{-1}(x)$, the degree of $f$.
Denote by
\begin{equation}\label{ro}
\rho(\gamma) := \dfrac{1}{|\det (f^{'}(\gamma))|},
\end{equation}where $\det (f^{'})$ is the Jacobian of $f$ with respect to its equilibrium state $m_1$.
Suppose that there is an open region $\mathcal{A} \subset M$ and constants $\sigma >1$ and $L\geq 1$ such that
\begin{enumerate}
\item[(f1)] $L(x) \leq L$ for every $x \in \mathcal{A}$ and $L(x) < \sigma ^{-1}$ for every $x \in \mathcal{A}^c$. Moreover, $L$ is close enough to $1$: the precise estimate for $L$ is given in equation (\ref{kdljfhkdjfkasd});
\item[(f2)] There exists a finite covering $\mathcal{U}$ of $M$, by open domains of injectivity for $f$, such that $\mathcal{A}$ can be covered by $q<\deg(f)$ of these domains.
\end{enumerate}
Denote by $H_\zeta$ the set of $\zeta$-H\"older functions $h:M \longrightarrow \mathbb{R}$, i.e., if we define $$H_\zeta(h) := \sup _{x\neq y} \dfrac{|h(x) - h(y)|}{d_1(x,y)^\zeta},$$then
$$H_\zeta:= \{ h:M \longrightarrow \mathbb{R}: H_\zeta(h) < \infty\}.$$
Next, (f3) is an open condition relative to the H\"older norm, and equation (\ref{f32}) means that $\rho$ belongs to a small cone of H\"older continuous functions (see \cite{VAC}).
\begin{enumerate}
\item[(f3)] There exists a sufficiently small $\epsilon _\rho >0$ s.t.
\begin{equation}\label{f31}
\sup \log (\rho) - \inf \log (\rho) <\epsilon _\rho;
\end{equation}and
\begin{equation}\label{f32}
H_\zeta (\rho) < \epsilon_\rho \inf \rho.
\end{equation}
\end{enumerate}
Precisely, we suppose the constants $\epsilon _\rho$ and $L$ satisfy the condition
\begin{equation}\label{kdljfhkdjfkasd}
\exp{\epsilon_\rho} \cdot \left( \dfrac{(\deg(f) - q)\sigma ^{-\alpha} + qL^\alpha[1+(L-1)^\alpha] }{\deg(f)}\right)< 1.
\end{equation}
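To give a sense of the room this condition leaves, here is a purely illustrative numerical check (the values are our choice and play no role in what follows): taking $\deg(f)=4$, $q=1$, $\sigma=2$, $\alpha=1/2$, $L=1.05$ and $\epsilon_\rho=0.01$,
\begin{equation*}
\exp{\epsilon_\rho} \cdot \left( \dfrac{3\cdot 2^{-1/2} + 1.05^{1/2}\,[1+0.05^{1/2}] }{4}\right)\approx 0.85< 1,
\end{equation*}so (\ref{kdljfhkdjfkasd}) is comfortably satisfied.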
According to \cite{VAC}, such a map $f:M \longrightarrow M$ (satisfying (f1), (f2) and (f3)) has an invariant probability $m_1$ of maximal entropy, absolutely continuous with respect to a conformal measure, and its Perron-Frobenius operator with respect to $m_1$, $\func{P}_f: L^1_{m_1} \longrightarrow L^1_{m_1}$, defined for $\varphi \in L^1_{m_1}$ by $\func{P}_f (\varphi)(x) = \sum _{i=1}^{\deg(f)}{\varphi (x_i)\rho (x_i)}$ ($x_i$ is the $i$-th pre-image of $x$, $i=1, \cdots, \deg(f)$), satisfies the following three results, whose proofs can be found in \cite{RRR}.
\begin{theorem}\label{loiub}
There exist $0< r<1$ and $D>0$ s.t. for all $\varphi \in H_\zeta$, with $\int{\varphi}dm_1 =0$, it holds $$|\func{P_f}^n(\varphi)|_\zeta \leq Dr^n|\varphi|_\zeta \ \ \forall \ n \geq 1,$$where $|\varphi|_\zeta := H_\zeta (\varphi) + |\varphi|_{\infty} $ for all $\varphi \in H_\zeta$.
\end{theorem}
\begin{remark}\label{chkjg}
By (f2) (see \cite{RRR}), there exists a finite family, $\mathcal{P}$, of pairwise disjoint open sets, $P_1, \cdots, P_{\deg{(f)}}$, s.t. $\bigcup_{i=1}^{\deg{(f)}} P_i=M$ $m_1$-a.e., and $f|_{P_{i}}:P_i \longrightarrow f(P_i)$ is a diffeomorphism for all $i=1, \cdots, \deg{(f)}$. Moreover, $f(P_i)=M$ $m_1$-a.e., for all $i=1, \cdots, \deg(f)$. Therefore, it holds that $$\func{P}_f(\varphi)(x) = \sum _{i=1}^{\deg{(f)}} {\varphi (x_i)\rho (x_i)}\chi_{f(P_{i})}(x),$$ for $m_1$-a.e. $x \in M$, where $$\rho_i(\gamma):= \frac{1}{|\det (f_i^{'}(\gamma))|}$$ and $f_i = f|_{P_i}$. This expression will be used later on.
\end{remark}
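Purely to fix ideas (this toy case plays no role in the sequel), consider $M=S^1$, $f(x)=2x \pmod 1$ and $m_1$ the Lebesgue measure. Then $\deg(f)=2$, $\mathcal{A}=\emptyset$, $\rho\equiv 1/2$, one may take $P_1=(0,1/2)$ and $P_2=(1/2,1)$, and the expression above reads
\begin{equation*}
\func{P}_f(\varphi)(x)=\frac{1}{2}\,\varphi\!\left(\frac{x}{2}\right)+\frac{1}{2}\,\varphi\!\left(\frac{x+1}{2}\right)
\end{equation*}for $m_1$-a.e. $x\in S^1$.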
\begin{theorem}\label{asewqtw} (Lasota-Yorke inequality) There exist $k\in \mathbb{N}$, $0<\beta _{0}<1$ and $C>0$ such that, for all $g\in H_\zeta$, it holds
\begin{equation}
|\func{P}_{f}^{k}g|_{\zeta}\leq \beta _{0}|g|_{\zeta}+C|g|_{\infty}, \label{LY1}
\end{equation}where $|g|_\zeta := H_\zeta (g) + |g|_{\infty}$.
\end{theorem}
\begin{corollary}\label{irytrtrte}
There exist constants $B_3>0$, $C_2>0$ and $0<\beta_2<1$ such that for all $g \in H_\zeta$, and all $n \geq 1$, it holds
\begin{equation}
|\func{P}_{f}^{n}g|_{\zeta} \leq B_3 \beta _2 ^n | g|_{\zeta} + C_2|g|_{\infty},
\label{lasotaiiii}
\end{equation}where $|g|_\zeta := H_\zeta (g) + |g|_{\infty}$.
\end{corollary}
\subsubsection{Hypothesis on $G$}
We suppose that $G: \Sigma \longrightarrow K$ satisfies:
\begin{enumerate}
\item [(G1)] $G$ is uniformly contracting on $m_1$-a.e. vertical fiber, $\gamma_x :=\{x\}\times K$.
\end{enumerate}Precisely, there is $0< \alpha <1$ such that for $m_1$-a.e. $x\in M$ it holds
\begin{equation}
d_2(G(x,z_{1}),G(x,z_2))\leq \alpha d_2(z_{1},z_{2}), \quad \forall
z_{1},z_{2}\in K. \label{contracting1}
\end{equation}We denote the set of all vertical fibers $\gamma_x$ by $\mathcal{F}^s$: $$\mathcal{F}^s:= \{\gamma _x:=\{ x\}\times K; x \in M \} .$$ When no confusion is possible, the elements of $\mathcal{F}^s$ will be denoted simply by $\gamma$, instead of $\gamma _x$.
\begin{enumerate}
\item [(G2)] Let $P_1, \cdots, P_{\deg(f)}$ be the partition of $M$ given in Remark \ref{chkjg} and $\zeta \leq 1$. Suppose that
\begin{equation}\label{oityy}
|G_i|_\zeta:= \sup _y\sup_{x_1, x_2 \in P_i} \dfrac{d_2(G(x_1,y), G(x_2,y))}{d_1(x_1,x_2)^\zeta}< \infty.
\end{equation}
\end{enumerate}Denote by $|G|_\zeta$ the constant
\begin{equation}\label{jdhfjdh}
|G|_\zeta := \max_{i=1, \cdots, \deg(f)} \{|G_i|_\zeta\}.
\end{equation}
\begin{remark}
The condition (G2) means that $G$ can be discontinuous on the sets $\partial P_i \times K$, for all $i=1, \cdots, \deg(f)$, where $\partial P_i$ denotes the boundary of $P_i$.
\end{remark}
\subsection{Examples}\label{ooisidrosr}
\begin{example}\label{sesprowerpo}
Let $f_0: \mathbb{T}^d \longrightarrow \mathbb{T}^d$ be a linear expanding map. Fix a covering $\mathcal{P}$ and an atom $P_1 \in \mathcal{P}$ which contains a periodic point (possibly a fixed point) $p$. Then, consider a perturbation $f$ of $f_0$, inside $P_1$, by a pitchfork bifurcation, in such a way that $p$ becomes a saddle for $f$. Therefore, $f$ coincides with $f_0$ on $P_1^c$, where we have uniform expansion. The perturbation can be made in such a way that (f1) is satisfied, i.e., $f$ is never too contracting in $P_1$ and is still topologically mixing. Note that a small perturbation with the previous properties may not exist. If it does, then (f3) is satisfied. In this case, $m_1$ is absolutely continuous with respect to the Lebesgue measure, which is an expanding conformal measure, positive on open sets. Hence, there can be no periodic attractors.
\end{example}
\begin{example}\label{sesprowerpoo}
In the previous example, assume that $f_0$ is diagonalisable, with eigenvalues $1< 1+ a<\lambda$, associated to $e_1, e_2$ respectively, and that $x_0$ is a fixed point. Fix $a,\epsilon> 0$ such that $\log(\frac{1+ a}{1- a})<\epsilon$ and
\begin{equation*}
\exp{\epsilon}\left(\frac{(\deg(f_0)- 1)(1+ a)^{-\alpha}+ (1/(1- a))^{\alpha}[1+ (a/(1- a))^\alpha]}{\deg(f_0)}\right)< 1.
\end{equation*}
Note that any smaller $a> 0$ will still satisfy these equations.
Let $\mathcal U$ be a finite covering of $M$ by open domains of injectivity for $f$. Redefining sets in $\mathcal U$, we may assume $x_0= (m_0, n_0)$ belongs to exactly one such domain $U$. Let $r> 0$ be small enough that $B_{2r}(x_0)\subset U$. Define $\rho=\eta_r\ast g$, where $\eta_r(z)= (1/r^2)\eta(z/r)$, $\eta$ being the standard mollifier, and
\begin{equation*}
g(m,n)=\begin{cases}
\lambda(1- a),&\text{if }(m, n)\in B_r(x_0);\\
\lambda(1+ a),&\text{otherwise.}
\end{cases}
\end{equation*}
Finally, define a perturbation $f$ of $f_0$ by
\begin{equation*}
f(m, n)= (m_0+\lambda (m- m_0),n_0+(\rho(m, n)/\lambda) (n- n_0)).
\end{equation*}
Then $x_0$ is a saddle point of $f$ and the desired conditions are satisfied for $\mathcal A= B_{2r}(x_0)$, $L= 1/(1- a)$ and $\sigma= 1+ 2a$. The only non-trivial condition is (f3). To show it, note that
\begin{equation*}
\rho(x)-\rho(y)=\int_S \frac{2a}{\lambda(1- a^2)}\eta_r(z)\,dz-\int_{S'}\frac{2a}{\lambda(1- a^2)}\eta_r(z)\,dz,
\end{equation*}
where $S=\{z\in\mathbb R^2: x- z\in B_r(x_0), y- z\notin B_r(x_0)\}$ and $S'=\{z\in\mathbb R^2: y- z\in B_r(x_0), x- z\notin B_r(x_0)\}$. Take $x, y\in\mathbb R^2$ and write $|x- y|= qr$, $A_q=\{z\in\mathbb R^2: 1- q< |z|< 1\}$. We have
\begin{equation*}
\frac{|\rho(x)-\rho(y)|}{|x- y|^\zeta}\leq\frac{2a\eta_r(S)}{\lambda(1- a^2)q^\zeta r^\zeta}\leq\frac{2a\eta(A_q)/q^\zeta}{\lambda(1- a^2)}.
\end{equation*}
Since $N=\sup_{q> 0}\eta(A_q)/q^\zeta< +\infty$, we can take $a$ so small that $2aN/(1- a)<\epsilon$, and therefore $H_\zeta(\rho)<\epsilon\inf\rho$.
\end{example}
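The examples above concern the base map $f$. For the fiber dynamics, a minimal and purely illustrative choice satisfying (G1) and (G2) is the following: take $K=[0,1]$, fix a constant $0<\alpha<1/2$ and $\zeta$-H\"older functions $\tau_i:P_i\longrightarrow [0,1]$, and set
\begin{equation*}
G(x,y)=\alpha\big(y+\tau_i(x)\big), \qquad x\in P_i
\end{equation*}
(defined arbitrarily on the remaining $m_1$-null set). Then (G1) holds with contraction rate $\alpha$, $|G_i|_\zeta=\alpha H_\zeta(\tau_i)$, and $G$ is in general discontinuous on the sets $\partial P_i\times K$, as allowed by (G2).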
\section{Preliminaries}\label{seccc}
Throughout this section, we present some preliminary results which are already established and construct the functional analytic framework suitable for our approach. Some of the results presented here are from \cite{RRR} and \cite{GLU}. For the reader's convenience, we provide the proofs of the main ones.
\subsection{Weak and Strong Spaces}
\subsubsection{$L^{\infty}$-like spaces.}\label{spa}
In this subsection, we construct the vector spaces of signed measures we are going to work with. In particular, we define the norm used on the left hand side of equation (\ref{stabll}).
To make this possible, we briefly recall some facts on disintegration of measures, state Rokhlin's Disintegration Theorem and fix some notation.
\subsubsection*{Rokhlin's Disintegration Theorem}
Consider a probability space $(\Sigma,\mathcal{B}, \mu)$ and a partition $\Gamma$ of $\Sigma$ into measurable sets $\gamma \in \mathcal{B}$. Denote by $\pi : \Sigma \longrightarrow \Gamma$ the projection that associates to each point $x \in \Sigma$ the element $\gamma _x$ of $\Gamma$ which contains $x$, i.e., $\pi(x) = \gamma _x$. Let $\widehat{\mathcal{B}}$ be the $\sigma$-algebra of $\Gamma$ provided by $\pi$. Precisely, a subset $\mathcal{Q} \subset \Gamma$ is measurable if, and only if, $\pi^{-1}(\mathcal{Q}) \in \mathcal{B}$. We define the \textit{quotient} measure $\mu _x$ on $\Gamma$ by $\mu _x(\mathcal{Q})= \mu(\pi ^{-1}(\mathcal{Q}))$.
The proof of the following theorem can be found in \cite{Kva}, Theorem
5.1.11 (items a), b) and c)) and Proposition 5.1.7 (item d)).
\begin{theorem}
(Rokhlin's Disintegration Theorem) Suppose that $\Sigma $ is a complete and
separable metric space, $\Gamma $ is a measurable partition
of $\Sigma $ and $\mu $ is a probability on $\Sigma $. Then, $\mu $ admits a disintegration relative to $\Gamma $, i.e., a family $\{\mu _{\gamma}\}_{\gamma \in \Gamma }$ of probabilities on $\Sigma $ and a quotient measure $\mu _{x}$ as above, such that:
\begin{enumerate}
\item[(a)] $\mu _\gamma (\gamma)=1$ for $\mu _x$-a.e. $\gamma \in \Gamma$;
\item[(b)] for every measurable set $E\subset \Sigma $, the function $\Gamma \longrightarrow \mathbb{R}$ defined by $\gamma \longmapsto \mu _{\gamma}(E)$ is measurable;
\item[(c)] for every measurable set $E\subset \Sigma $, it holds $\mu (E)=\int{\mu _{\gamma }(E)}d\mu _{x}(\gamma )$.
\label{rok}
\item [(d)] If the $\sigma $-algebra $\mathcal{B}$ on $\Sigma $ has a countable generator, then the disintegration is unique in the following sense. If $(\{\mu _{\gamma }^{\prime }\}_{\gamma \in \Gamma },\mu _{x})$ is another disintegration of the measure $\mu $ relative to $\Gamma $, then $\mu_{\gamma }=\mu _{\gamma }^{\prime }$, for $\mu _{x}$-almost every $\gamma \in \Gamma $.
\end{enumerate}
\end{theorem}
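As a simple running illustration (our choice, used only to fix ideas in the next subsections), let $\Gamma=\mathcal{F}^s$ be the partition of $\Sigma=M\times K$ into vertical fibers and let $\mu=m_1\times \nu$ be a product measure, where $\nu$ is a Borel probability on $K$. A disintegration of $\mu$ is then given by the quotient measure $\mu_x=m_1$ (identifying $\Gamma$ with $M$) together with the conditional probabilities $\mu_\gamma$, each of which is the copy of $\nu$ on the fiber $\gamma=\{x\}\times K$; item (c) reduces to Fubini's theorem,
\begin{equation*}
\mu(E)=\int_M \nu\big(\{y\in K:(x,y)\in E\}\big)\,dm_1(x).
\end{equation*}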
\subsubsection{The $\mathcal{L}^{\infty}$ and $S^\infty$ spaces}
Let $\mathcal{SB}(\Sigma )$ be the space of Borel signed measures on $\Sigma : = M \times K$. Given $\mu \in \mathcal{SB}(\Sigma )$, denote by $\mu ^{+}$ and $\mu ^{-}$ the positive and the negative parts of its Jordan decomposition, $\mu =\mu^{+}-\mu ^{-}$ (see Remark \ref{ghtyhh}). Let $\pi _{x}:\Sigma \longrightarrow M$ be the projection defined by $\pi_x (x,y)=x$ and denote by $\pi _{x\ast }:\mathcal{SB}(\Sigma )\rightarrow \mathcal{SB}(M)$ the pushforward map associated to $\pi _{x}$. Denote by $\mathcal{AB}$ the set of signed measures $\mu \in \mathcal{SB}(\Sigma )$ such that its associated positive and negative marginal measures, $\pi _{x\ast }\mu ^{+}$ and $\pi _{x\ast }\mu ^{-},$ are absolutely continuous with respect to $m_{1}$, i.e.,
\begin{equation*}
\mathcal{AB}=\{\mu \in \mathcal{SB}(\Sigma ):\pi _{x\ast }\mu ^{+}<<m_{1}\ \ \text{and}\ \ \pi _{x\ast }\mu ^{-}<<m_{1}\}. \label{thespace1}
\end{equation*}
Given a \emph{probability measure} $\mu \in \mathcal{AB}$ on $\Sigma $, Theorem \ref{rok} describes a disintegration $\left( \{\mu _{\gamma}\}_{\gamma },\mu _{x}\right) $ along $\mathcal{F}^{s}$ by a family $\{\mu _{\gamma }\}_{\gamma }$ of probability measures on the stable leaves\footnote{In the following, to simplify notations, when no confusion is possible we will indicate the generic leaf or its coordinate by $\gamma $.} and, since $\mu \in \mathcal{AB}$, $\mu _{x}$ can be identified with a non-negative marginal density $\phi _{x}:M\longrightarrow \mathbb{R}$, defined almost everywhere, with $|\phi _{x}|_{1}=1$. For a general (non normalized) positive measure $\mu \in \mathcal{AB}$ we can define its disintegration in the same way. In this case, the $\mu _{\gamma }$ are still probability measures, $\phi _{x}$ is still defined and $|\phi _{x}|_{1}=\mu (\Sigma )$.
\begin{definition}
Let $\pi _{y}:\Sigma \longrightarrow K$ be the projection defined by $\pi _{y}(x,y)=y$. Let $\gamma \in \mathcal{F}^{s}$, consider $\pi_{\gamma ,y}:\gamma \longrightarrow K$, the restriction of the map $\pi_{y}:\Sigma \longrightarrow K$ to the vertical leaf $\gamma $, and the associated pushforward map $\pi _{\gamma ,y\ast }$. Given a positive measure $\mu \in \mathcal{AB}$ and its disintegration along the stable leaves $\mathcal{F}^{s}$, $\left( \{\mu _{\gamma }\}_{\gamma },\mu _{x}=\phi_{x}m_{1}\right) $, we define the \textbf{restriction of $\mu $ on $\gamma $}, denoted by $\mu |_{\gamma }$, as the positive measure on $K$ (not on the leaf $\gamma $) defined, for every measurable set $A\subset K$, by
\begin{equation*}
\mu |_{\gamma }(A)=\pi _{\gamma ,y\ast }(\phi _{x}(\gamma )\mu _{\gamma})(A).
\end{equation*}
For a given signed measure $\mu \in \mathcal{AB}$ and its Jordan decomposition $\mu =\mu ^{+}-\mu ^{-}$, define the \textbf{restriction of $\mu $ on $\gamma $} by
\begin{equation*}
\mu |_{\gamma }=\mu ^{+}|_{\gamma }-\mu ^{-}|_{\gamma }.
\end{equation*}
\label{restrictionmeasure}
\end{definition}
\begin{remark}
\label{ghtyhh}As proved in Appendix 2 of \cite{GLU}, the restriction $\mu |_{\gamma }$ does not depend on the decomposition. Precisely, if $\mu=\mu _{1}-\mu _{2}$, where $\mu _{1}$ and $\mu _{2}$ are any positive measures, then $\mu |_{\gamma }=\mu _{1}|_{\gamma }-\mu _{2}|_{\gamma }$ for $m_{1}$-a.e. $\gamma \in M$.
\end{remark}
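For the product measure of the previous illustration, and more generally for $\mu=(\phi\, m_1)\times\nu$ with $\phi\geq 0$ a density on $M$ (again a choice made only for illustration), the definition above gives
\begin{equation*}
\mu|_{\gamma_x}=\phi(x)\,\nu \qquad \text{for } m_1\text{-a.e. } x\in M,
\end{equation*}a measure on $K$ whose total mass is the value of the marginal density at $x$.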
Let $(X,d)$ be a compact metric space, $g:X\longrightarrow \mathbb{R}$ be a $\zeta$-H\"older function and let $H_\zeta(g)$ be its best $\zeta$-H\"older constant, i.e.,
\begin{equation}\label{lipsc}
\displaystyle{H_\zeta(g)=\sup_{x,y\in X,x\neq y}\left\{ \dfrac{|g(x)-g(y)|}{d(x,y)^\zeta}\right\} }.
\end{equation}
In what follows, we present a generalization of the Wasserstein-Kantorovich-like metric given in \cite{GLU} and \cite{GP}.
\begin{definition}
Given two signed measures $\mu $ and $\nu $ on $X,$ we define a \textbf{Wasserstein-Kantorovich-like} distance between $\mu $ and $\nu $ by
\begin{equation*}
W_{1}^{\zeta}(\mu ,\nu )=\sup_{H_\zeta(g)\leq 1,|g|_{\infty }\leq 1}\left\vert \int {g}d\mu -\int {g}d\nu \right\vert .
\end{equation*}
\label{wasserstein}
\end{definition}Since the constant $\zeta$ is fixed, from now on we denote
\begin{equation}
||\mu ||_{W}:=W_{1}^{\zeta}(0,\mu ), \label{WW}
\end{equation}and observe that $||\cdot ||_{W}$ defines a norm on the vector space of signed measures defined on a compact metric space. It is worth remarking that this norm is equivalent to the dual of the $\zeta$-H\"older norm.
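As a simple illustration of this distance (not needed elsewhere), for two Dirac masses $\delta_a,\delta_b$ on $X$, testing against functions $g$ with $H_\zeta(g)\leq 1$ and $|g|_\infty\leq 1$ one checks that
\begin{equation*}
W_1^{\zeta}(\delta_a,\delta_b)=\min\{2,\ d(a,b)^{\zeta}\},
\end{equation*}so nearby points have nearby Dirac masses in $||\cdot||_W$, at a $\zeta$-H\"older rate.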
\begin{definition}
Let $\mathcal{L}^{\infty }\subseteq \mathcal{AB}(\Sigma )$ be defined as
\begin{equation*}
\mathcal{L}^{\infty }=\left\{ \mu \in \mathcal{AB}:\esssup W_{1}^{\zeta}(\mu^{+}|_{\gamma },\mu ^{-}|_{\gamma })<\infty \right\},
\end{equation*}
where the essential supremum is taken over $M$ with respect to $m_{1}$. Define the function $||\cdot ||_{\infty }:\mathcal{L}^{\infty}\longrightarrow \mathbb{R}$ by
\begin{equation*}
||\mu ||_{\infty }=\esssup W_{1}^{\zeta}(\mu ^{+}|_{\gamma },\mu ^{-}|_{\gamma}).
\end{equation*}
\end{definition}
Finally, consider the following set of signed measures on $\Sigma $
\begin{equation}\label{sinfi}
S^{\infty }=\left\{ \mu \in \mathcal{L}^{\infty };\phi _{x}\in H_\zeta \right\},
\end{equation}
and the function $||\cdot ||_{S^{\infty }}:S^{\infty }\longrightarrow \mathbb{R}$ defined by
\begin{equation*}
||\mu ||_{S^{\infty }}=|\phi _{x}|_{\zeta}+||\mu ||_{\infty }.
\end{equation*}
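For the illustrative product measure $\mu=(\phi\, m_1)\times\nu$ considered above, with $\nu$ a probability on $K$ and $\phi\geq 0$ a $\zeta$-H\"older density, one has $\mu|_{\gamma_x}=\phi(x)\nu$ and hence
\begin{equation*}
||\mu||_{\infty}=\esssup_{x\in M} \phi(x)\,||\nu||_W=|\phi|_{\infty}
\qquad\text{and}\qquad
||\mu||_{S^{\infty}}=|\phi|_{\zeta}+|\phi|_{\infty},
\end{equation*}since $||\nu||_W=1$ for any probability $\nu$ (test against $g\equiv 1$).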
The proof of the next proposition is straightforward and can be found in
\cite{L}.
\begin{proposition}
$\left( \mathcal{L}^{\infty },||\cdot ||_{\infty }\right) $ and $\left(
S^{\infty },||\cdot||_{S^{\infty }}\right) $ are normed vector spaces.
\end{proposition}
\subsection{The transfer operator associated to $F$}
In this section, we consider the transfer operator associated to skew product maps as defined in Section \ref{sec1}, acting on the disintegrated measure spaces defined in Section \ref{spa}. For such transfer operators and measures, a kind of Perron-Frobenius formula holds, somewhat similar to the one used for one-dimensional maps.
Consider the pushforward map (also known as the ``transfer operator'') $\func{F}_{\ast }$ associated with $F$, defined by
\begin{equation*}
\lbrack \func{F}_{\ast }\mu ](E)=\mu (F^{-1}(E)),
\end{equation*}
for each signed measure $\mu \in \mathcal{SB}(\Sigma )$ and for each measurable set $E\subset \Sigma $, where $\Sigma:=M\times K$.
The proofs of the following three results can be found in \cite{RRR}.
\begin{lemma}
\label{transformula}For every probability $\mu \in \mathcal{AB}$ disintegrated by $(\{\mu _{\gamma }\}_{\gamma },\phi _x)$, the disintegration $(\{(\func{F}_{\ast }\mu )_{\gamma }\}_{\gamma },(\func{F}_{\ast }\mu )_{x})$ of the pushforward $\func{F}_{\ast }\mu $ satisfies the following relations
\begin{equation}
(\func{F}_{\ast }\mu )_{x}=\func{P}_{f}(\phi _x)m_{1} \label{1}
\end{equation}
and
\begin{equation}
(\func{F}_{\ast }\mu )_{\gamma }=\nu _{\gamma }:=\frac{1}{\func{P}_{f}(\phi_x)(\gamma )}\sum_{i=1}^{\deg(f)}{\frac{\phi _x}{|\det Df_{i}|}\circ f_{i}^{-1}(\gamma )\cdot \chi _{f_{i}(P_{i})}(\gamma )\cdot \func{F}_{\ast}\mu _{f_{i}^{-1}(\gamma )}} \label{2}
\end{equation}
when $\func{P}_{f}(\phi _x)(\gamma )\neq 0$. Otherwise, if $\func{P}_{f}(\phi _x)(\gamma )=0$, then $\nu _{\gamma }$ is the Lebesgue measure on $\gamma $ (the expression $\displaystyle{\frac{\phi _x}{|\det Df_{i}|}\circ f_{i}^{-1}(\gamma )\cdot \frac{\chi _{f_{i}(P_{i})}(\gamma )}{\func{P}_{f}(\phi _x)(\gamma )}\cdot \func{F}_{\ast }\mu _{f_{i}^{-1}(\gamma )}}$ is understood to be zero outside $f_{i}(P_{i})$ for all $i=1,\cdots ,\deg(f)$).
Here and above, $\chi _{A}$ is the characteristic function of the set $A$.
\end{lemma}
\begin{proposition}
\label{niceformulaab}Let $\gamma \in \mathcal{F}^{s}$ be a stable leaf. Let us define the map $F_{\gamma }:K\longrightarrow K$ by
\begin{equation}\label{ritiruwt}
F_{\gamma }=\pi _{y}\circ F|_{\gamma }\circ \pi _{\gamma ,y}^{-1}.
\end{equation}
Then, for each $\mu \in \mathcal{L}^{\infty}$ and for almost all $\gamma \in M $ (interpreted as the quotient space of leaves) it holds
\begin{equation}
(\func{F}_{\ast }\mu )|_{\gamma }=\sum_{i=1}^{\deg(f)}{\func{F}_{\gamma _i \ast }\mu |_{\gamma _i }\rho _i(\gamma _i)\chi _{f_{i}(P_{i})}(\gamma )}\ \ m_{1}\text{-a.e.}\ \ \gamma \in M \label{niceformulaa}
\end{equation}
where $\func{F}_{\gamma_i \ast }$ is the pushforward map associated to $\func{F}_{\gamma_i}$, $\gamma _i = f_{i}^{-1}(\gamma )$ when $\gamma \in f_i (P_i)$ and $\rho_i(\gamma)= \dfrac{1}{|\det (f_i^{'}(\gamma))|}$, where $f_i = f|_{P_i}$.
\end{proposition}
Sometimes (see also Remark \ref{chkjg}) it will be convenient to use the following expression for $(\func{F}_{\ast } \mu )|_{\gamma }$:
\begin{corollary}\label{oierew}
For each $\mu \in \mathcal{L}^{\infty}$ it holds
\begin{equation}
(\func{F}_{\ast }\mu )|_{\gamma }=\sum_{i=1}^{\deg(f)}{\func{F}_{\gamma _i \ast }\mu |_{\gamma _i }\rho _i(\gamma _i)}\ \ m_{1}\text{-a.e.}\ \ \gamma \in M, \label{niceformulaaareer}
\end{equation}
where $\gamma _i$ is the $i$-th pre-image of $\gamma$, $i=1,\cdots, \deg(f)$.
\end{corollary}
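In the toy case of a degree-two base map with $\rho_i\equiv 1/2$ (for instance the doubling map recalled after Remark \ref{chkjg}), the formula above simply averages the two fibered pushforwards:
\begin{equation*}
(\func{F}_{\ast }\mu )|_{\gamma }=\frac{1}{2}\func{F}_{\gamma_1\ast}\mu|_{\gamma_1}+\frac{1}{2}\func{F}_{\gamma_2\ast}\mu|_{\gamma_2}.
\end{equation*}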
\subsection{Basic properties of the norms and convergence to equilibrium}
In this section, we list properties of the norms and their behavior with respect to the transfer operator. In particular, item (1) below shows continuity and weak contraction for the transfer operator $\func {F}_*$ with respect to the norm $||\cdot||_\infty$. Items (2) and (3) provide Lasota-Yorke inequalities for the norms $||\cdot||_\infty$ and $||\cdot||_{S^\infty}$, which show a regularizing property of the transfer operator with respect to these norms. Such inequalities are also usually called Doeblin-Fortet inequalities.
\begin{enumerate}
\item (Weak Contraction for $||\cdot||_\infty$) If $\mu \in \mathcal{L}^{\infty}$ then
\begin{equation}\label{weakcontral11234}
||\func{F}_{\ast }\mu ||_{\infty}\leq ||\mu ||_{\infty};
\end{equation}
\item (Lasota-Yorke inequality for $S^{\infty}$)
There exist $A$, $B_{2}>0$ and $\lambda <1$ ($\lambda = \beta _2$ of Corollary \ref{irytrtrte}) such that, for all $\mu \in S^{\infty}$, it holds
\begin{equation}
||\func{F}_{\ast }^{n}\mu ||_{S^{\infty}}\leq A\lambda ^{n}||\mu||_{S^{\infty}}+B_{2}||\mu ||_{\infty},\ \ \forall n\geq 1; \label{lasotaoscilation2}
\end{equation}
\item For every signed measure $\mu \in \mathcal{L}^{\infty}$ it holds
\begin{equation}\label{nicecoro}
||\func{F}_{\ast }^{n}\mu ||_{\infty}\leq (\alpha ^\zeta) ^{n}||\mu ||_{\infty}+\overline{\alpha }|\phi _{x}|_{\infty},
\end{equation}
where $\overline{\alpha }=\frac{1 }{1-\alpha^\zeta }$;
\item For every signed measure $\mu$ on $K$ such that $\mu(K)=0$, it holds
\begin{equation}\label{nicecoroo}
||\func{F}_{\gamma *} \mu ||_{W}\leq \alpha ^\zeta ||\mu ||_{W},
\end{equation}where $F_\gamma$ is defined in equation (\ref{ritiruwt}).
\end{enumerate}
\subsection{Convergence to equilibrium}\label{invt}
Let $X$ be a compact metric space. Consider the space $\mathcal{SB}(X)$ of
signed Borel measures on $X$. In the following, we consider two further normed vector spaces of signed Borel measures on $X$: the spaces $(B_{s},||~||_{s})\subseteq (B_{w},||~||_{w})\subseteq \mathcal{SB}(X)$ with norms satisfying
\begin{equation*}
||~||_{w}\leq ||~||_{s}.
\end{equation*}
We say that a Markov operator
\begin{equation*}
\func{L}:B_{w}\longrightarrow B_{w}
\end{equation*}
has convergence to equilibrium with speed at least $\Phi $ and with respect to the norms $||\cdot ||_{s}$ and $||\cdot ||_{w}$, if for each $\mu \in \mathcal{V}_{s}$, where
\begin{equation}
\mathcal{V}_{s}=\{\mu \in B_{s},\mu (X)=0\} \label{vs}
\end{equation}
is the space of zero-average measures, it holds
\begin{equation*}
||\func{L}^{n}(\mu )||_{w}\leq \Phi (n)||\mu ||_{s}, \label{wwe}
\end{equation*}
where $\Phi (n)\longrightarrow 0$ as $n\longrightarrow \infty $.
Let us consider the set of zero average measures in $S^{\infty}$ defined by
\begin{equation}
\mathcal{V}_{s}=\{\mu \in S^{\infty}:\mu (\Sigma )=0\}. \label{mathV}
\end{equation}
Note that, for all $\mu \in \mathcal{V}_{s}$, we have $\pi _{x\ast }\mu(M)=0$. Moreover, since $\pi _{x\ast }\mu =\phi _{x}m_{1}$ ($\phi_{x}=\phi _{x}^{+}-\phi _{x}^{-}$), we have $\displaystyle{\int_{M}{\phi_{x}}dm_{1}=0}$. This allows us to apply Theorem \ref{loiub} in the proof of the next proposition.
\begin{theorem}[Exponential convergence to equilibrium]
\label{5.8} There exist $D_{2}\in \mathbb{R}$ and $0<\beta _{1}<1$ such that for every signed measure $\mu \in \mathcal{V}_{s}$, it holds
\begin{equation*}
||\func{F}_{\ast }^{n}\mu ||_{\infty}\leq D_{2}\beta _{1}^{n}||\mu ||_{S^{\infty}},
\end{equation*}
for all $n\geq 1$, where $\beta _{1}=\max \{\sqrt{r},\sqrt{\alpha^\zeta }\}$ and $D_{2}=(\sqrt{\alpha^\zeta }^{\,-1}+\overline{\alpha }D \sqrt{r}^{\,-1})$.\label{quasiquasiquasi}
\end{theorem}
\begin{proof}
In this proof, to simplify the notation, we denote the constant $\alpha^\zeta$ just by $\alpha$.
Given $\mu \in \mathcal{V}_s$ and denoting $\phi _{x}=\phi _{x}^{+}-\phi_{x}^{-}$, it holds that $\int {\phi }_{x}dm _1=0$. Moreover, Theorem \ref{loiub} yields $|\func{P}_{f}^{n}(\phi _{x})|_{\zeta}\leq Dr^{n}|\phi _{x}|_{\zeta}$ for all $n\geq 1$; then (since ${|\phi_x|}_{\infty}\le {\|\mu\|}_{\infty}$) $|\func{P}_{f}^{n}(\phi _{x})|_{\zeta}\leq Dr^{n}||\mu||_{S^{\infty}}$, for all $n\geq 1$.
Let $l$ and $0\leq d\leq 1$ be the quotient and the remainder of the division of $n$ by $2$, i.e., $n=2l+d$; thus $l=\frac{n-d}{2}$. By equation (\ref{weakcontral11234}) we have $||\func{F}_{\ast }^{n}\mu ||_{\infty}\leq ||\mu||_{\infty}$ for all $n$, and $||\mu ||_{\infty}\leq ||\mu ||_{S^{\infty}}$. Hence, by equation (\ref{nicecoro}), it holds (below, set $\beta _{1}=\max \{\sqrt{r},\sqrt{\alpha^\zeta }\}$)
\begin{eqnarray*}
||\func{F}_{\ast }^{n}\mu ||_{\infty} &= &||\func{F}_{\ast }^{2l+d}\mu ||_{\infty}
\\
&\leq &(\alpha ^\zeta) ^{l}||\func{F}_{\ast }^{l+d}\mu ||_{\infty}+\overline{\alpha }\left\vert \dfrac{d(\pi _{x\ast }(\func{F}_{\ast}^{l+d}\mu ))}{dm_{1}}\right\vert _{\infty} \\
&\leq &(\alpha^\zeta) ^{l}||\mu ||_{\infty}+\overline{\alpha }|\func{P}_{f}^{l}(\phi_{x})|_{\infty} \\
&\leq &(\sqrt{\alpha^\zeta }^{\,-1}+\overline{\alpha }D \sqrt{r}^{\,-1})\beta _{1}^{n}||\mu ||_{S^{\infty}}
\\
&\leq &D_{2}\beta _{1}^{n}||\mu ||_{S^{\infty}},
\end{eqnarray*}
where $D_{2}=(\sqrt{\alpha^\zeta }^{\,-1}+\overline{\alpha }D \sqrt{r}^{\,-1})$.
\end{proof}
\subsection{H\"older-Measures}
In this section we define the $\zeta$-H\"older constant of a signed measure on $\Sigma$.
We apply the fact that $G$ satisfies property (G2) (all previous results do not depend on (G2)). Moreover, besides satisfying equation (\ref{kdljfhkdjfkasd}), the constant $L$ mentioned in (f1) and (f3) is also supposed to be close enough to $1$ so that $(\alpha \cdot L)^\zeta<1$ (or $\alpha$ is close enough to $0$). This is clearly satisfied by Example \ref{sesprowerpo} and hence by Example \ref{sesprowerpoo} of Section \ref{ooisidrosr}.
We have seen that a positive measure on $M \times K$ can be disintegrated along the stable leaves $\mathcal{F}^s$ in such a way that we may regard it as a family of positive measures on $K$, $\{\mu |_\gamma\}_{\gamma \in \mathcal{F}^s }$. Since there is a one-to-one correspondence between $\mathcal{F}^s$ and $M$, this defines a path $M \longmapsto \mathcal{SB}(K)$ in the space of positive measures on $K$, where $\mathcal{SB}(K)$ is endowed with the Wasserstein-Kantorovich-like metric (see Definition \ref{wasserstein}).
It will be convenient to use a functional notation and denote such a path by $\Gamma_{\mu } : M \longrightarrow \mathcal{SB}(K)$, defined almost everywhere by $\Gamma_{\mu } (\gamma) = \mu|_\gamma$, where $(\{\mu _{\gamma }\}_{\gamma \in M},\phi_{x})$ is some disintegration of $\mu$.
However, since such a disintegration is defined only $\widehat{\mu}$-a.e. $\gamma \in M$, the path $\Gamma_\mu$ is not unique. For this reason we define more precisely $\Gamma_{\mu } $ as the class of almost everywhere equivalent paths corresponding to $\mu$.
\begin{definition}
Consider a positive Borel measure $\mu$ on $M \times K$ and a disintegration $\omega=(\{\mu _{\gamma }\}_{\gamma \in M},\phi_x)$, where $\{\mu _{\gamma }\}_{\gamma \in M }$ is a family of probabilities on $M \times K$ defined $\widehat{\mu}$-a.e. $\gamma \in M$ (where $\widehat{\mu} := \pi_x{_*}\mu=\phi _x m_1$) and $\phi_x:M\longrightarrow \mathbb{R}$ is a non-negative marginal density. Denote by $\Gamma_{\mu }$ the class of equivalent paths associated to $\mu$,
\begin{equation*}
\Gamma_{\mu }=\{ \Gamma^\omega_{\mu }\}_\omega,
\end{equation*}
where $\omega$ ranges over all the possible disintegrations of $\mu$ and $\Gamma^\omega_{\mu }: M\longrightarrow \mathcal{SB}(K)$ is the map associated to a given disintegration $\omega$:
$$\Gamma^\omega_{\mu }(\gamma )=\mu |_{\gamma } = \pi _{\gamma, y \ast} (\phi _x(\gamma)\mu _\gamma) .$$
\end{definition}Let us denote by $I_{\Gamma_{\mu }^\omega } \left( \subset M\right)$ the set on which $\Gamma_{\mu }^\omega $ is defined.
\begin{definition}For a given $0<\zeta <1$, a disintegration $\omega$ of $\mu$ and its functional representation $\Gamma_{\mu }^\omega $, we define the \textbf{$\zeta$-H\"older constant of $\mu$ associated to $\omega$} by
\begin{equation}\label{Lips1}
|\mu|_\zeta ^\omega := \esssup _{\gamma_1, \gamma_2 \in I_{\Gamma_{\mu }^\omega}} \left\{ \dfrac{||\mu|_{\gamma _1}- \mu|_{\gamma _2}||_W}{d_1 (\gamma _1, \gamma _2)^\zeta}\right\}.
\end{equation}Finally, we define the \textbf{$\zeta$-H\"older constant} of the positive measure $\mu$ by
\begin{equation}\label{Lips2}
|\mu|_\zeta :=\displaystyle{\inf_{ \Gamma_{\mu }^\omega \in \Gamma_{\mu } }\{|\mu|_\zeta ^\omega\}}.
\end{equation}
\label{Lips3}
\end{definition}
\begin{remark}
When no confusion is possible, to simplify the notation, we denote $\Gamma_{\mu }^\omega (\gamma )$ just by $\mu |_{\gamma } $.
\end{remark}
\begin{definition}
From Definition \ref{Lips3} we define the set of $\zeta$-H\"older positive measures $\mathcal{H} _\zeta^{+}$ as
\begin{equation}
\mathcal{H} _\zeta^{+}=\{\mu \in \mathcal{AB}:\mu \geq 0,|\mu |_\zeta <\infty \}.
\end{equation}
\end{definition}
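For instance, the illustrative product measure $\mu=(\phi\, m_1)\times\nu$ used above (with $\phi\geq 0$ a $\zeta$-H\"older density and $\nu$ a probability on $K$) belongs to $\mathcal{H}_\zeta^+$: for the natural disintegration one has $\mu|_{\gamma_1}-\mu|_{\gamma_2}=(\phi(\gamma_1)-\phi(\gamma_2))\nu$, hence
\begin{equation*}
|\mu|_\zeta\leq \esssup_{\gamma_1\neq\gamma_2}\frac{|\phi(\gamma_1)-\phi(\gamma_2)|\,||\nu||_W}{d_1(\gamma_1,\gamma_2)^\zeta}\leq H_\zeta(\phi)<\infty.
\end{equation*}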
For the next lemma, given a path $\Gamma _\mu$ which represents the measure $\mu$, we define, for each $\gamma \in I_{\Gamma_{\mu }^\omega }\subset M$, the map
\begin{equation}
\mu _F(\gamma) := \func{F_\gamma }_*\mu|_\gamma,
\end{equation}where $F_\gamma :K \longrightarrow K$ is defined as
\begin{equation}\label{poier}
F_\gamma (y) = \pi_y \circ F \circ {(\pi _y|_\gamma)} ^{-1}(y)
\end{equation}and $\pi_y : M\times K \longrightarrow K$ is the projection $\pi_y(x,y)=y$.
\begin{lemma}\label{apppoas}
Suppose that $F:\Sigma \longrightarrow \Sigma$ satisfies (G1) and (G2). Then, for all $\mu \in \mathcal{H} _\zeta^{+} $ which satisfy $\phi _x = 1$ $m_1$-a.e., it holds $$||\func{F}_{x \ast }\mu |_{x } - \func{F}_{y \ast }\mu |_{y }||_W \leq \alpha^\zeta |\mu|_\zeta d_1(x, y)^\zeta + |G|_\zeta d_1(x, y)^\zeta ||\mu ||_\infty,$$ for all $x,y \in P_i$ and all $i=1, \cdots, \deg(f)$.
\end{lemma}
\begin{proof}
Since $(\mu|_x - \mu|_y)(K)=0$ (because $\phi _x = 1$ $m_1$-a.e.), by equation (\ref{nicecoroo}) it holds
\begin{eqnarray*}
||\func{F}_{x \ast }\mu |_{x } - \func{F}_{y \ast }\mu |_{y }||_W &\leq & ||\func{F}_{x \ast }\mu |_{x } - \func{F}_{x \ast }\mu |_{y }||_W + ||\func{F}_{x \ast }\mu |_{y } - \func{F}_{y \ast }\mu |_{y }||_W
\\&\leq & \alpha^\zeta||\mu |_{x } - \mu |_{y }||_W + ||\func{F}_{x \ast }\mu |_{y } - \func{F}_{y \ast }\mu |_{y }||_W
\\&\leq & \alpha^\zeta |\mu|_\zeta d_1(x,y)^\zeta + ||\func{F}_{x \ast }\mu |_{y } - \func{F}_{y \ast }\mu |_{y }||_W.
\end{eqnarray*}Let us estimate the second summand $||\func{F}_{x \ast }\mu |_{y } - \func{F}_{y \ast }\mu |_{y }||_W$. To do it, let $g:K \longrightarrow \mathbb{R}$ be a $\zeta$-H\"older function s.t. $H_\zeta(g), |g|_\infty \leq 1$. By equation (\ref{poier}), we get
\begin{align*}
\begin{split}
\left|\int gd(\func{F}_{x\ast}\mu|_y)-\int gd(\func{F}_{y\ast}\mu|_y) \right|&=\left|\int\!{g(G(x,z))}d(\mu|_y)(z)\right.\\
&\qquad\left.-\int\!{g(G(y,z))}d(\mu|_y)(z) \right|
\end{split}
\\&\leq\int{d_2\left(G(x,z),G(y,z)\right)}d(\mu|_y)(z)
\\&\leq|G|_\zeta d_1(x,y)^\zeta \int{1}d(\mu|_y)(z)
\\&\leq|G|_\zeta d_1(x,y)^\zeta ||\mu|_y||_W.
\end{align*}Thus, taking the supremum over $g$ and the essential supremum over $y$, we get
\begin{equation*}
||\func{F}_{x \ast }\mu |_{y } - \func{F}_{y \ast }\mu |_{y }||_W \leq|G|_\zeta d_1(x,y)^\zeta ||\mu||_\infty.\qedhere
\end{equation*}
\end{proof}
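\begin{remark}
As a purely illustrative example of Lemma \ref{apppoas}, suppose that $K \subset \mathbb{R}$ and that the fibre maps are affine, $G(x,y)={\Greekmath 010B} y + u(x)$, where $0<{\Greekmath 010B}<1$ and $u:M\longrightarrow \mathbb{R}$ is ${\Greekmath 0110}$-H\"older, assuming that such a $G$ satisfies (G1) and (G2) and that $|G|_{\Greekmath 0110}$ coincides with the ${\Greekmath 0110}$-H\"older constant $H_{\Greekmath 0110}(u)$ of $u$. In this case the estimate of the lemma reads, for $x,y \in P_i$,
\begin{equation*}
||\func{F}_{x \ast }{\Greekmath 0116} |_{x } - \func{F}_{y \ast }{\Greekmath 0116} |_{y }||_W \leq {\Greekmath 010B}^{\Greekmath 0110} |{\Greekmath 0116}|_{\Greekmath 0110} d_1(x, y)^{\Greekmath 0110} + H_{\Greekmath 0110}(u)\, d_1(x, y)^{\Greekmath 0110}\, ||{\Greekmath 0116} ||_\infty :
\end{equation*}
the first term accounts for the contraction of the fibres, while the second one measures how regularly the fibre maps depend on the base point.
\end{remark}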
For the next proposition and henceforth, for a given path $\Gamma _{\Greekmath 0116} ^{\Greekmath 0121} \in \Gamma_{ {\Greekmath 0116} }$ (associated with the disintegration ${\Greekmath 0121} = (\{{\Greekmath 0116} _{\Greekmath 010D}\}_{\Greekmath 010D}, {\Greekmath 011E} _x)$ of ${\Greekmath 0116}$), unless written otherwise, we consider the particular path $\Gamma_{\func{F_*}{\Greekmath 0116}} ^{\Greekmath 0121} \in \Gamma_{\func{F_*}{\Greekmath 0116}}$ defined by Corollary \ref{oierew} through the expression
\begin{equation}
\Gamma_{\func{F_*}{\Greekmath 0116}} ^{\Greekmath 0121} ({\Greekmath 010D})=\sum_{i=1}^{\deg(f)}{\func{F}
_{{\Greekmath 010D} _i \ast }\Gamma _{\Greekmath 0116} ^{\Greekmath 0121} ({\Greekmath 010D}_i){\Greekmath 011A} _i({\Greekmath 010D} _i)}\ \ m_{1}
\mathnormal{-a.e.}\ \ {\Greekmath 010D} \in M. \label{niceformulaaareer}
\end{equation}Recall that $\Gamma_{{\Greekmath 0116}} ^{\Greekmath 0121} ({\Greekmath 010D}) = {\Greekmath 0116}|_{\Greekmath 010D}:= {\Greekmath 0119}_{y*}({\Greekmath 011E}_{x}({\Greekmath 010D}){\Greekmath 0116} _{\Greekmath 010D})$ and in particular $\Gamma_{\func{F_*}{\Greekmath 0116}} ^{\Greekmath 0121} ({\Greekmath 010D}) = (\func{F_*}{\Greekmath 0116})|_{\Greekmath 010D} = {\Greekmath 0119}_{y*}(\func{P}_f{\Greekmath 011E}_x({\Greekmath 010D}){\Greekmath 0116} _{\Greekmath 010D})$, where ${\Greekmath 011E}_x = \dfrac{d {\Greekmath 0119} _{x*} {\Greekmath 0116}}{dm_1}$ and $\func{P}_f$ is the Perron-Frobenius operator of $f$.
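\begin{remark}
To fix ideas, we spell out formula (\ref{niceformulaaareer}) in the simplest illustrative case $\deg(f)=2$ with both branches onto, so that, as in the proof of Theorem \ref{UF2ass} below, the weights ${\Greekmath 011A} _i$ reduce to the reciprocal of the Jacobian of $f$. Writing ${\Greekmath 010D}_i = f_i ^{-1}({\Greekmath 010D})$, $i=1,2$, for the two preimages of ${\Greekmath 010D}$, the path takes the form
\begin{equation*}
\Gamma_{\func{F_*}{\Greekmath 0116}} ^{\Greekmath 0121} ({\Greekmath 010D})= \frac{\func{F}_{{\Greekmath 010D} _1 \ast }\Gamma _{\Greekmath 0116} ^{\Greekmath 0121} ({\Greekmath 010D}_1)}{|\det Df({\Greekmath 010D} _1)|} + \frac{\func{F}_{{\Greekmath 010D} _2 \ast }\Gamma _{\Greekmath 0116} ^{\Greekmath 0121} ({\Greekmath 010D}_2)}{|\det Df({\Greekmath 010D} _2)|}\ \ m_{1}\mathnormal{-a.e.}\ \ {\Greekmath 010D} \in M,
\end{equation*}
that is, the fibre measures sitting over the preimages of ${\Greekmath 010D}$ are pushed forward by the corresponding fibre maps and weighted by the Jacobian of $f$, in complete analogy with the usual Perron-Frobenius operator of $f$.
\end{remark}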
\begin{proposition}\label{iuaswdas}
If $F:\Sigma \longrightarrow \Sigma$ satisfies (f1), (f2), (f3), (G1), (G2) and $({\Greekmath 010B} \cdot L)^{\Greekmath 0110}<1$, then there exist $0<{\Greekmath 010C}<1$ and $D_2 >0$ such that, for all ${\Greekmath 0116} \in \mathcal{H} _{\Greekmath 0110}^{+} $ which satisfy ${\Greekmath 011E} _x = 1$ $m_1$-a.e. and for all $\Gamma ^{\Greekmath 0121} _{\Greekmath 0116} \in \Gamma _{\Greekmath 0116}$, it holds $$|\Gamma_{\func{F_*}{\Greekmath 0116}} ^{\Greekmath 0121}|_{{\Greekmath 0110}} \leq {\Greekmath 010C} |\Gamma_{{\Greekmath 0116}}^{\Greekmath 0121}|_{\Greekmath 0110} + D_2||{\Greekmath 0116}||_\infty,$$ where ${\Greekmath 010C}:= ({\Greekmath 010B} L)^{\Greekmath 0110}$ and $D_2:=\{{\Greekmath 010F} _{\Greekmath 011A} L^{\Greekmath 0110} + |G|_ {\Greekmath 0110} L^{\Greekmath 0110}\}$.
\end{proposition}
\begin{corollary}\label{kjdfhkkhfdjfh}
Suppose that $F:\Sigma \longrightarrow \Sigma$ satisfies (f1), (f2), (f3), (G1), (G2) and $({\Greekmath 010B} \cdot L)^{\Greekmath 0110}<1$. Then, for all ${\Greekmath 0116} \in \mathcal{H} _{\Greekmath 0110}^{+} $ which satisfy ${\Greekmath 011E} _x = 1$ $m_1$-a.e. and $||\func{F_*}{\Greekmath 0116}||_\infty \leq ||{\Greekmath 0116}||_\infty$, it holds
\begin{equation}\label{erkjwr}
|\Gamma_{\func{F_*}^n{\Greekmath 0116}}^{\Greekmath 0121}|_{{\Greekmath 0110}} \leq {\Greekmath 010C}^n |\Gamma _{\Greekmath 0116}^{\Greekmath 0121}|_{\Greekmath 0110} + \dfrac{D_2}{1-{\Greekmath 010C}}||{\Greekmath 0116}||_\infty,
\end{equation}
for all $n\geq 1$, where ${\Greekmath 010C}$ and $D_2$ are from Proposition \ref{iuaswdas}.
\end{corollary}
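\begin{remark}
For the reader's convenience, we sketch the standard iteration behind Corollary \ref{kjdfhkkhfdjfh}. Applying Proposition \ref{iuaswdas} to ${\Greekmath 0116}, \func{F_*}{\Greekmath 0116}, \cdots, \func{F_*}^{n-1}{\Greekmath 0116}$ (with the particular paths fixed above) and using that $||\func{F_*}^{k}{\Greekmath 0116}||_\infty \leq ||{\Greekmath 0116}||_\infty$ for every $k$, we get
\begin{equation*}
|\Gamma_{\func{F_*}^n{\Greekmath 0116}}^{\Greekmath 0121}|_{{\Greekmath 0110}} \leq {\Greekmath 010C}^n |\Gamma _{\Greekmath 0116}^{\Greekmath 0121}|_{\Greekmath 0110} + D_2 \left( 1+{\Greekmath 010C}+\cdots +{\Greekmath 010C}^{n-1}\right) ||{\Greekmath 0116}||_\infty \leq {\Greekmath 010C}^n |\Gamma _{\Greekmath 0116}^{\Greekmath 0121}|_{\Greekmath 0110} + \dfrac{D_2}{1-{\Greekmath 010C}}||{\Greekmath 0116}||_\infty,
\end{equation*}
which is exactly inequality (\ref{erkjwr}).
\end{remark}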
\begin{remark}\label{kjedhkfjhksjdf}
Taking the infimum over all paths $\Gamma_{ {\Greekmath 0116} } ^{\Greekmath 0121} \in \Gamma_{ {\Greekmath 0116} }$ and all $\Gamma_{\func{F_*}^n{\Greekmath 0116}}^{\Greekmath 0121} \in \Gamma_{\func{F_*}^n{\Greekmath 0116}}$ on both sides of inequality (\ref{erkjwr}), we get
\begin{equation}\label{fljghlfjdgkdg}
|\func{F_*}^n{\Greekmath 0116}|_{{\Greekmath 0110}} \leq {\Greekmath 010C}^n |{\Greekmath 0116}|_{\Greekmath 0110} + \dfrac{D_2}{1-{\Greekmath 010C}}||{\Greekmath 0116}||_\infty.
\end{equation}The above Equation (\ref{fljghlfjdgkdg}) will give a uniform bound (see the proof of Theorem \ref{thshgf} in Section \ref{ieutiet}) for the H\"older constant of the measure $\func {F_*}^{n} m$, for all $n$, where $m$ is defined as the product $m=m_1 \times {\Greekmath 0117}$, for a fixed probability measure ${\Greekmath 0117}$ on $K$. This uniform bound will be useful later on.
\end{remark}
\begin{remark}\label{riirorpdf}
Consider the probability measure $m$ defined in Remark \ref{kjedhkfjhksjdf}, i.e., $m=m_1 \times {\Greekmath 0117}$, where ${\Greekmath 0117}$ is a given probability measure on $K$ and $m_1$ is the $f$-invariant measure fixed in the subsection \ref{hf}. Besides that, consider its trivial disintegration ${\Greekmath 0121}_0 =(\{m_{{\Greekmath 010D}} \}_{{\Greekmath 010D}}, {\Greekmath 011E}_x)$, given by $m_{\Greekmath 010D} = \func{{\Greekmath 0119} _{y,{\Greekmath 010D}}^{-1}{_*}}{\Greekmath 0117}$, for all ${\Greekmath 010D}$ and ${\Greekmath 011E} _x \equiv 1$. According to this definition, it holds that
\begin{equation*}
m|_{\Greekmath 010D} = {\Greekmath 0117}, \ \ \forall \ {\Greekmath 010D}.
\end{equation*}In other words, the path $\Gamma ^{{\Greekmath 0121} _0}_m$ is constant: $\Gamma ^{{\Greekmath 0121} _0}_m ({\Greekmath 010D})= {\Greekmath 0117}$ for all ${\Greekmath 010D}$. Moreover, for each $n \in \mathbb{N}$, let ${\Greekmath 0121}_n$ be the particular disintegration of the measure $\func{F{_\ast }}^nm$ defined from ${\Greekmath 0121}_0$ as an application of Lemma \ref{transformula}, and consider the path $\Gamma^{{\Greekmath 0121}_{n}}_{\func{F{_\ast }}^n m}$ associated with this disintegration. By Proposition \ref{niceformulaab}, we have
\begin{equation}
\Gamma^{{\Greekmath 0121}_{n}}_{\func{F{_\ast }} ^n m} ({\Greekmath 010D}) =\sum_{i=1}^{q}{\dfrac{\func{F^n
_{f_{i}^{-n}({\Greekmath 010D} )}}_{\ast }{\Greekmath 0117}}{|\det Df^n_{i}\circ f_{i}^{-n}({\Greekmath 010D} )|}{\Greekmath 011F} _{f^n_i(P _{i})}({\Greekmath 010D} )}\ \ m_1-\hbox{a.e} \ \ {\Greekmath 010D} \in M, \label{niceformulaaw}
\end{equation}where $P_i$, $i=1, \cdots, q=q(n)$, ranges over the partition $\mathcal{P}^{(n)}$ defined in the following way: for all $n \geq 1$, let $\mathcal{P}^{(n)}$ be the partition of $I$ s.t. $\mathcal{P}^{(n)}(x) = \mathcal{P}^{(n)}(y)$ if and only if $\mathcal{P}^{(1)}(f^j (x)) = \mathcal{P}^{(1)}(f^j(y))$ for all $j = 0, \cdots , n-1$, where $\mathcal{P}^{(1)} = \mathcal{P}$ (see remark \ref{chkjg}). This path will be used in section \ref{ieutiet}.
\end{remark}
The following result is an estimate on the regularity of the physical measure of $F$. This sort of result has many applications and can also be found in \cite{GLU}, \cite{RRR} and \cite{LiLu}; in \cite{LiLu} the authors reach an analogous result for random dynamical systems.
\begin{theorem}
Suppose that $F:\Sigma \longrightarrow \Sigma$ satisfies (f1), (f2), (f3), (G1), (G2) and $({\Greekmath 010B} \cdot L)^{\Greekmath 0110}<1$ and consider the unique $F$-invariant probability ${\Greekmath 0116} _{0}\in S^\infty$. Then ${\Greekmath 0116} _{0}\in \mathcal{H} _{\Greekmath 0110}^{+}$ and
\begin{equation*}
|{\Greekmath 0116} _{0}|_{\Greekmath 0110} \leq \dfrac{D_2}{1-{\Greekmath 010C}},
\end{equation*}where $D_2$ and ${\Greekmath 010C}$ are from Proposition \ref{iuaswdas}.
\label{regg}
\end{theorem}
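\begin{remark}
Heuristically, the bound of Theorem \ref{regg} can be guessed from Remark \ref{kjedhkfjhksjdf}: if one knew a priori that $|{\Greekmath 0116} _0|_{\Greekmath 0110}<\infty$, then applying (\ref{fljghlfjdgkdg}) to ${\Greekmath 0116} _0$ and using $\func{F_*}^n {\Greekmath 0116} _0={\Greekmath 0116} _0$ would give
\begin{equation*}
|{\Greekmath 0116} _0|_{\Greekmath 0110} = |\func{F_*}^n {\Greekmath 0116} _0|_{\Greekmath 0110} \leq {\Greekmath 010C}^n |{\Greekmath 0116} _0|_{\Greekmath 0110} + \dfrac{D_2}{1-{\Greekmath 010C}}||{\Greekmath 0116} _0||_\infty \leq {\Greekmath 010C}^n |{\Greekmath 0116} _0|_{\Greekmath 0110} + \dfrac{D_2}{1-{\Greekmath 010C}},
\end{equation*}
since $||{\Greekmath 0116} _0||_\infty \leq 1$ (see Section \ref{sofkjsdkgfhksjfgd}), and letting $n \rightarrow \infty$ would yield the stated estimate. The actual proof avoids this a priori assumption by iterating the measure $m$ of Remark \ref{riirorpdf}; see the proof of Theorem \ref{thshgf} in Section \ref{ieutiet}, where the argument is carried out uniformly in the perturbation parameter.
\end{remark}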
\section{Quantitative stability of stochastic perturbations}\label{jsdhjfnsd}
In this section we present a general {\em quantitative} result relating the {\em continuity} of the fixed points of an \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{$(R({\Greekmath 010E}), {\Greekmath 0110})$-family of operators} (Definition \ref{UF}) to the {\em convergence to equilibrium}.
In the following definition, for all ${\Greekmath 010E} \in [0,1) $, let $\func{L_{\Greekmath 010E}}$ be a linear operator acting on two vector subspaces of signed measures on $X$, $\func{L_{\Greekmath 010E}}:(B_{s}, ||\cdot||_{s} ) \longrightarrow (B_{s}, ||\cdot||_{s} )$ and $\func{L_{\Greekmath 010E}}: (B_{w}, ||\cdot||_{w} ) \longrightarrow (B_{w}, ||\cdot||_{w} )$, endowed with two norms, the strong norm $||\cdot||_{s}$ on $B_{s},$ and the weak
norm $||\cdot||_{w}$ on $B_{w}$, such that $||\cdot||_{s}\geq ||\cdot||_{w}$. Suppose that,
\begin{equation*}
B_{s}\mathcal{\subseteq }B_{w}\mathcal{\subseteq }\mathcal{SB}(X),
\end{equation*} where $\mathcal{SB}(X)$ denotes the space of Borel signed measures on $X$.
\begin{definition}\label{UF}
A one-parameter family of transfer operators $\{\func{L} _{{\Greekmath 010E} }\}_{{\Greekmath 010E} \in \left[ 0,1 \right)} $ is said to be a \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{$(R({\Greekmath 010E}),{\Greekmath 0110})$-family of operators} with respect to the weak space $(B_{w}, ||\cdot||_{w} )$ and the strong space $(B_{s}, ||\cdot||_{s} )$, with $||\cdot||_{s}\geq ||\cdot||_{w}$, if it satisfies
\begin{enumerate}
\item [\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{P1}] There is $C\in \mathbb{R}^+$ and a real valued function ${\Greekmath 010E} \longmapsto R({\Greekmath 010E}) \in \mathbb{R}^+$ such that $$\lim_{{\Greekmath 010E} \rightarrow 0^+} {R({\Greekmath 010E})\log ({\Greekmath 010E})}=0$$ and
\begin{equation*}
||(\func{L} _{0}-\func{L} _{{\Greekmath 010E} }){\Greekmath 0116}_{{\Greekmath 010E} }||_{w}\leq R({\Greekmath 010E}) ^{\Greekmath 0110} C,
\end{equation*}where ${\Greekmath 0116}_{{\Greekmath 010E} }$ is the fixed probability measure of $\func{L} _{{\Greekmath 010E} }$ introduced in P2 below;
\item [\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{P2}] Let ${\Greekmath 0116}_{{\Greekmath 010E} }\in B_{s}$ be a probability measure fixed under the operator $\func{L} _{{\Greekmath 010E} }$. Suppose there is $M>0$ such that for all ${\Greekmath 010E} \in [0,1)$, it holds $$||{\Greekmath 0116}_{{\Greekmath 010E}}||_{s}\leq M;$$
\item[\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{P3}] $\func{L} _{0}$ has exponential convergence to equilibrium with
respect to the norms $||\cdot||_{s}$ and $||\cdot||_{w}$: there exist $0<{\Greekmath 011A}_2 <1$ and $C_2>0$ such that $$\forall \ {\Greekmath 0116} \in \mathcal{V}_s: =\{{\Greekmath 0116} \in B_{s}: {\Greekmath 0116}(X)=0 \}$$ it holds $$||\func{L}^{n}_0 {\Greekmath 0116}||_{w} \leq {\Greekmath 011A} _2 ^n C_2 ||{\Greekmath 0116}||_{s};$$
\item[\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{P4}] The iterates of the operators are uniformly bounded for the weak norm: there exists $M_2 >0$ such that, for all ${\Greekmath 010E} \in [0,1)$, all $n \in \mathbb{N}$ and all ${\Greekmath 0117} \in B_{s}$, it holds $$||\func{L} _{{\Greekmath 010E}
} ^{ n}{\Greekmath 0117}||_{w}\leq M_{2}||{\Greekmath 0117}||_{w}.$$
\end{enumerate}
\end{definition}
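\begin{remark}
In the application we have in mind, the abstract objects of Definition \ref{UF} will be instantiated, roughly speaking, as
\begin{equation*}
\func{L}_{\Greekmath 010E} = \func{F_{\Greekmath 010E}{_\ast }}, \qquad (B_s, ||\cdot||_s)=(S^\infty, ||\cdot||_{S^\infty}), \qquad ||\cdot||_w = ||\cdot||_\infty,
\end{equation*}
with $R({\Greekmath 010E})$ as in Definition \ref{UFL}; we refer to the proof of Theorem \ref{rr} in Section \ref{kkdjfkshfdsdfsttr} for the precise verification of P1--P4 in this setting.
\end{remark}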
To prove the main result of this section (Theorem \ref{dlogd}), we state a general lemma on the stability of fixed points of operators satisfying certain assumptions. Consider two operators $\func{L} _{0}$ and $\func{L} _{{\Greekmath 010E} }$ preserving a normed
space of signed measures $\mathcal{B\subseteq }\mathcal{SB}(X)$ with norm $||\cdot ||_{\mathcal{B}
}$. Suppose that $f_{0},$ $f_{{\Greekmath 010E} }\in \mathcal{B}$ are fixed points of $\func{L} _{0}$ and $\func{L} _{{\Greekmath 010E} }$, respectively.
\begin{lemma}
\label{gen}Suppose that:
\begin{enumerate}
\item[a)] $||\func{L} _{{\Greekmath 010E} }f_{{\Greekmath 010E} }-\func{L} _{0}f_{{\Greekmath 010E} }||_{\mathcal{B}}<\infty $;
\item[b)] For all $i\geq 1$, $\func{L} _{0}^{ i}$ is continuous on $\mathcal{B}$: for each $ i \geq 1$, $\exists
\,C_{i}~s.t.~\forall g\in \mathcal{B},~||\func{L} _{0}^{ i}g||_{\mathcal{B}}\leq
C_{i}||g||_{\mathcal{B}}.$
\end{enumerate}
Then, for each $N \geq 1$, it holds
\begin{equation*}
||f_{{\Greekmath 010E} }-f_{0}||_{\mathcal{B}}\leq ||\func{L} _{0}^{ N}(f_{{\Greekmath 010E} }-f_{0})||_{
\mathcal{B}}+||\func{L} _{{\Greekmath 010E} } f_{{\Greekmath 010E} }-\func{L} _{0}f_{{\Greekmath 010E} }||_{\mathcal{B}
}\sum_{i\in \lbrack 0,N-1]}C_{i}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is a direct computation. First note that,
\begin{eqnarray*}
||f_{{\Greekmath 010E} }-f_{0}||_{\mathcal{B}} &\leq &||\func{L}_{{\Greekmath 010E} }^{N}f_{{\Greekmath 010E}
}-\func{L}_{0}^{ N}f_{0}||_{\mathcal{B}} \\
&\leq &||\func{L}_{0}^{ N}f_{0}-\func{L}_{0}^{ N}f_{{\Greekmath 010E} }||_{\mathcal{B}
}+||\func{L}_{0}^{ N}f_{{\Greekmath 010E} }-\func{L}_{{\Greekmath 010E} }^{ N}f_{{\Greekmath 010E} }||_{\mathcal{B}} \\
&\leq &||\func{L}_{0}^{N}(f_{0}-f_{{\Greekmath 010E} })||_{\mathcal{B}}+||\func{L}_{0}^{ N}f_{{\Greekmath 010E}
}-\func{L}_{{\Greekmath 010E} }^{N}f_{{\Greekmath 010E} }||_{\mathcal{B}}.
\end{eqnarray*}
Moreover,
\begin{equation*}
\func{L}_{0}^{ N}-\func{L}_{{\Greekmath 010E} }^{ N}=\sum_{k=1}^{N}\func{L}_{0}^{(N-k)}(\func{L}_{0}-\func{L} _{{\Greekmath 010E}
})\func{L}_{{\Greekmath 010E} }^{ (k-1)}
\end{equation*}
hence
\begin{eqnarray*}
(\func{L}_{0}^{ N}-\func{L}_{{\Greekmath 010E} }^{ N})f_{{\Greekmath 010E}} &=&\sum_{k=1}^{ N}\func{L}_{0}^{ (N-k)}(\func{L}_{0}-\func{L}_{{\Greekmath 010E}
})\func{L}_{{\Greekmath 010E} }^{ (k-1)}f_{{\Greekmath 010E} } \\
&=&\sum_{k=1}^{N}\func{L}_{0}^{ (N-k)}(\func{L}_{0}-\func{L} _{{\Greekmath 010E} })f_{{\Greekmath 010E} }
\end{eqnarray*}
where in the last equality we used that $f_{{\Greekmath 010E} }$ is a fixed point of $\func{L} _{{\Greekmath 010E} }$. By item b), we have
\begin{eqnarray*}
||(\func{L}_{0}^{ N}-\func{L}_{{\Greekmath 010E} }^{ N})f_{{\Greekmath 010E} }||_{\mathcal{B}} &\leq
&\sum_{k=1}^{N}C_{N-k}||(\func{L}_{0}-\func{L} _{{\Greekmath 010E} })f_{{\Greekmath 010E} }||_{\mathcal{B}} \\
&\leq &||(\func{L}_{0}-\func{L}_{{\Greekmath 010E} })f_{{\Greekmath 010E} }||_{\mathcal{B}}\sum_{i\in \lbrack
0,N-1]}C_{i}
\end{eqnarray*}
and then
\begin{equation*}
||f_{{\Greekmath 010E} }-f_{0}||_{\mathcal{B}}\leq ||\func{L}_{0}^{ N}(f_{0}-f_{{\Greekmath 010E} })||_{
\mathcal{B}}+||(\func{L} _{0}-\func{L} _{{\Greekmath 010E} })f_{{\Greekmath 010E} }||_{\mathcal{B}}\sum_{i\in
\lbrack 0,N-1]}C_{i}.
\end{equation*}
\end{proof}
\section{Quantitative stability of deterministic perturbations}\label{kjrthkje}
In this section we consider families of maps, as defined in Section \ref{sec2}, which give rise to families of piecewise partially hyperbolic maps (see Definitions \ref{add} and \ref{UFL}).
For families which satisfy both Definitions \ref{add} and \ref{UFL}, we prove that the invariant measure varies continuously as the map is perturbed, with modulus of continuity $R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E}$. We remark that the perturbations defined here are quite natural and cover the classical $C^r$-perturbations in the set of skew-product maps. Moreover, this sort of perturbation allows $R({\Greekmath 010E})$ to be linear: $R({\Greekmath 010E}) = K_6 {\Greekmath 010E}$ for all ${\Greekmath 010E}$, where $K_6$ is a constant.
\begin{remark}
A straightforward computation yields $||\cdot ||_W \leq ||\cdot||_\infty$. Then, supposing that $\{F_{\Greekmath 010E}\}_{{\Greekmath 010E} \in [0,1)}$ satisfies the hypotheses of Theorem \ref{htyttigu}, it holds $$||{\Greekmath 0116}_{{\Greekmath 010E} }-{\Greekmath 0116}_{0}||_{W}\leq AR({\Greekmath 010E})^{\Greekmath 0110} \log {\Greekmath 010E} ,$$for some $A>0$. Therefore, for every ${\Greekmath 0110}$-H\"older function $g:\Sigma \longrightarrow \mathbb{R}$, the following estimate holds $$\left|\int{g}d{\Greekmath 0116}_{\Greekmath 010E} - \int{g}d{\Greekmath 0116}_0\right| \leq A ||g||_{{\Greekmath 0110}} R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E},$$where $||g||_{{\Greekmath 0110}} = ||g||_\infty + H_{\Greekmath 0110}(g)$ (see equation (\ref{lipsc}) for the definition of $H_{\Greekmath 0110}(g)$). Thus, for every ${\Greekmath 0110}$-H\"older function $g:\Sigma \longrightarrow\mathbb{R}$, the limit $\displaystyle{\lim _{{\Greekmath 010E} \longrightarrow 0} {\int{g}d{\Greekmath 0116}_{\Greekmath 010E}} = \int{g}d{\Greekmath 0116}_0}$ holds, with rate of convergence of order at most $R({\Greekmath 010E})^{\Greekmath 0110} \log {\Greekmath 010E}$.
\end{remark}
\begin{definition}\label{add}
A family of maps $\{F_{\Greekmath 010E}\}_{{\Greekmath 010E} \in [0,1)}$, which satisfies (f1), (f2), (f3), (G1) and (G2), is said to be an \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{admissible perturbation} if the following assumptions hold:
\begin{enumerate}
\item [(A1):] There exist constants $D>0$ and $0<{\Greekmath 0115}<1$ such that for all $
g \in H_{\Greekmath 0110}$, for all ${\Greekmath 010E} \in [0,1)$ and all $n \geq 1$, it holds
\begin{equation}
|\func{P}_{f_{\Greekmath 010E}}^{n}g|_{{\Greekmath 0110}} \leq D {\Greekmath 0115} ^n | g|_{{\Greekmath 0110}} + D|g|_{\infty},
\end{equation}where $|g|_{\Greekmath 0110} := H_{\Greekmath 0110} (g) + |g|_{\infty}$ and $\func{P}_{f_{{\Greekmath 010E} }}$ is the Perron-Frobenius operator of $f_{{\Greekmath 010E} }$;
\end{enumerate}
\begin{enumerate}
\item [(A2):] Set $ {\Greekmath 010C}_{\Greekmath 010E}:= ({\Greekmath 010B}_{\Greekmath 010E} L_{\Greekmath 010E})^{\Greekmath 0110}$ and $D_{2, {\Greekmath 010E}}:=\{{\Greekmath 010F} _{{\Greekmath 011A}, {\Greekmath 010E}} L_{\Greekmath 010E}^{\Greekmath 0110} + |G_{\Greekmath 010E}|_ {\Greekmath 0110} L_{\Greekmath 010E}^{\Greekmath 0110}\}$. Suppose that, $$\sup _ {\Greekmath 010E} {\Greekmath 010C}_{\Greekmath 010E} <1$$and $$\sup _ {\Greekmath 010E} D_{2, {\Greekmath 010E}} < \infty.$$
\end{enumerate}
\end{definition}
\begin{definition}\label{UFL}
A family (see definition \ref{add}) $\{F_{\Greekmath 010E}\}_{{\Greekmath 010E} \in [0,1)}$ is said to be a \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{$R({\Greekmath 010E})$-perturbation} if $F_{\Greekmath 010E}$ satisfies the following assumptions:
\begin{enumerate}
\item [(U1):] There exists a small enough ${\Greekmath 010E} _1 $ such that for all ${\Greekmath 010E} \in (0, {\Greekmath 010E}_1)$ it holds
\begin{enumerate}
\item $ K_5:=\displaystyle{\sup _{(0, {\Greekmath 010E}_1)} \sup \dfrac{1}{\det Df_{{\Greekmath 010E}} \cdot \det Df_{0}} }< \infty$;
\item $\displaystyle{ \deg (f_{\Greekmath 010E})=q:=\deg(f_0)}$, for all ${\Greekmath 010E} \in (0, {\Greekmath 010E}_1)$.
\end{enumerate}
\end{enumerate}
\begin{enumerate}
\item [(U2):] There exists a real valued function\footnote{The condition $\lim_{{\Greekmath 010E} \rightarrow 0^+} {R({\Greekmath 010E})\log ({\Greekmath 010E})}=0$ is required to ensure the continuity of the family of measures $\{{\Greekmath 0116}_{ {\Greekmath 010E}}\}_{{\Greekmath 010E} \in [0,1)}$ at $0$. However, Theorem \ref{htyttigu} (equation (\ref{stabll})) would still hold without this hypothesis.} ${\Greekmath 010E} \longmapsto R({\Greekmath 010E}) \in \mathbb{R}^+$ such that $$\lim_{{\Greekmath 010E} \rightarrow 0^+} {R({\Greekmath 010E})\log ({\Greekmath 010E})}=0$$ and the following three conditions hold:
\begin{enumerate}
\item [(U2.1)]
$\displaystyle{\sum_{i=1}^{q}
\left\vert \dfrac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}
-\dfrac{1}{\det Df_{0}({\Greekmath 010D} _{0,i})}\right\vert \leq R({\Greekmath 010E})}$;
\end{enumerate}
\begin{enumerate}
\item [(U2.2)]
$\esssup _{\Greekmath 010D} \max_{i=1, \cdots, q} d_1({\Greekmath 010D} _{0,i},{\Greekmath 010D} _{{\Greekmath 010E},i}) \leq R({\Greekmath 010E});$
\end{enumerate}
\begin{enumerate}
\item [(U2.3)] $G_0$ and $G_{\Greekmath 010E}$ are $R({\Greekmath 010E})$-close in the $\sup$ norm: for all ${\Greekmath 010E}$ $$|G_{0}(x,y)-G_{{\Greekmath 010E} }(x,y)|\leq R({\Greekmath 010E}).$$
\end{enumerate}
\end{enumerate}
\end{definition}
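\begin{remark}
A simple illustrative family fulfilling conditions (U1) and (U2) of Definition \ref{UFL} is obtained by perturbing only the fibre dynamics: take $f_{\Greekmath 010E} = f_0$ for every ${\Greekmath 010E}$ and
\begin{equation*}
F_{\Greekmath 010E} (x,y)=(f_0(x), G_0(x,y)+{\Greekmath 010E} h(x,y)),
\end{equation*}
where $h$ is a bounded ${\Greekmath 0110}$-H\"older function and ${\Greekmath 010E}$ is small enough that $G_{\Greekmath 010E} := G_0+{\Greekmath 010E} h$ still satisfies (G1) and (G2). Since the base map does not change, the preimages ${\Greekmath 010D} _{{\Greekmath 010E} ,i}$ and the Jacobians do not depend on ${\Greekmath 010E}$, so the left hand sides of (U2.1) and (U2.2) vanish and (U1) holds as long as $\sup 1/(\det Df_0)^2<\infty$, while (U2.3) holds with $R({\Greekmath 010E})=\max \{1, |h|_\infty \}\, {\Greekmath 010E}$, which is linear in ${\Greekmath 010E}$ as discussed at the beginning of this section.
\end{remark}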
\begin{theorem}\label{UF2ass}
Let $\{F_{\Greekmath 010E} \}_{{\Greekmath 010E} \in [0,1)}$ be a $R({\Greekmath 010E})$-perturbation (definition \ref{UFL}). Denote by $\func{F_{\Greekmath 010E}{_\ast}}$ their transfer operators and by $
{\Greekmath 0116}_{{\Greekmath 010E} }$ their fixed points (probabilities) in $S^\infty$. Suppose that the family $\{{\Greekmath 0116}_{{\Greekmath 010E} }\}_{{\Greekmath 010E} \in [0,1)}$ satisfies
$$|{\Greekmath 0116}_{{\Greekmath 010E} }|_{\Greekmath 0110} \leq B_u,$$for all ${\Greekmath 010E} \in [0,{\Greekmath 010E}_1)$.
Then, there is a constant $C_{1}$ such that
$$
||(\func{F_0{_\ast }}-\func{F_{\Greekmath 010E}{_\ast }}){\Greekmath 0116}_{{\Greekmath 010E} }||_{{\infty}}\leq
C_{1}R({\Greekmath 010E})^{\Greekmath 0110},$$for all ${\Greekmath 010E} \in [0,{\Greekmath 010E}_1)$, where ${\Greekmath 010E} _1$ and $R({\Greekmath 010E})$ come from Definition \ref{UFL} and $C_1:=|G_0|_{\Greekmath 0110} + 1 + K_5 + B_u $, with $K_5$ as in (U1).
\end{theorem}
\begin{proof}
Let us estimate
\begin{equation}\label{12112}
||(\func{F_0{_\ast }}-\func{F_{\Greekmath 010E}{_\ast }}){\Greekmath 0116}_{{\Greekmath 010E} }||_{\infty}= \esssup_{M}||(\func{F_0{_\ast }}{\Greekmath 0116}_{{\Greekmath 010E} })|_{{\Greekmath 010D} }-(\func{F_{\Greekmath 010E}{_\ast }}{\Greekmath 0116}_{{\Greekmath 010E} })|_{{\Greekmath 010D}}||_{W}.
\end{equation}
Denote by $f_{{\Greekmath 010E},i}$, with $1\leq i\leq q$, the branches of $f_{{\Greekmath 010E}}$ defined on the sets $P_{i} \in \mathcal{P}$, $f_{{\Greekmath 010E},i}=f_{{\Greekmath 010E} }|_{P_{i}}$. Moreover, denote ${\Greekmath 010D}_{{\Greekmath 010E}, i}:= f^{-1}_{{\Greekmath 010E}, i} ({\Greekmath 010D})$, for all ${\Greekmath 010D} \in M$, and recall that, by (U2.2), $R({\Greekmath 010E})$ is such that
\begin{equation}
d_1({\Greekmath 010D} _{0,i},{\Greekmath 010D} _{{\Greekmath 010E},i}) \leq R({\Greekmath 010E}).
\end{equation}We also recall that (item (b) of (U3)), the $\deg (f_{\Greekmath 010E}) =q$ for all ${\Greekmath 010E} \in [0,{\Greekmath 010E}_1)$, $f_{\Greekmath 010E}(\mathcal{P}_i)=M $ for all $i$ and all ${\Greekmath 010E}$. Moreover, the partition $\mathcal{P}$ depends on ${\Greekmath 010E}$ but, since this is not important on the following proof (because of the property $f_{\Greekmath 010E}(\mathcal{P}_i)=M $ mentioned above), we do not consider the parameter ${\Greekmath 010E}$.
Thus, denoting $\func{F}_{{\Greekmath 010E},{\Greekmath 010D}_{{\Greekmath 010E},i}}:=\func{F}_{{\Greekmath 010E} ,f_{{\Greekmath 010E} ,i}^{-1}({\Greekmath 010D} )}$, we get
\[
({\func{F_0{_\ast }}}{\Greekmath 0116} -{\func{F_{\Greekmath 010E}{_\ast }}}{\Greekmath 0116} )|_{{\Greekmath 010D} }=\sum_{i=1}^{q}
\frac{{\func{F}_{0,{\Greekmath 010D} _{0,i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\sum_{i=1}^{q}
\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{{\Greekmath 010E},i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})},\ \ {\Greekmath 0116} _{x}-a.e.\ {\Greekmath 010D} \in M.
\]
Then, we have
\begin{equation*}
||(\func{F_0{_\ast }}-\func{F_{\Greekmath 010E}{_\ast }}){\Greekmath 0116}_{{\Greekmath 010E} }||_{\infty} \leq \func{I} + \func{II},
\end{equation*}where
\begin{equation}\label{I}
\func{I} := \esssup_{M}\left\vert \left\vert \sum_{i=1}^{q}
\frac{{\func{F}_{0,{\Greekmath 010D} _{0,i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\sum_{i=1}^{q}
\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert \right\vert _{W}
\end{equation}
and
\begin{equation} \label{II}
\func{II} := \esssup_{M}\left\vert \left\vert \sum_{i=1}^{q}
\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}
-\sum_{i=1}^{q}
\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{{\Greekmath 010E},i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert \right\vert _{W}.
\end{equation}
Let us estimate $\func{I}$ of equation (\ref{I}). By the triangle inequality, we have
$$ \func{I} \leq \func{I}_a + \func{I}_b,$$ where
\begin{equation}
\func{I}_a := \left\vert \left\vert \sum_{i=1}^{q}
\frac{{\func{F}_{0,{\Greekmath 010D} _{0,i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\sum_{i=1}^{q}\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}\right\vert \right\vert _{W}
\end{equation}and
\begin{equation}
\func{I}_b := \left\vert \left\vert \sum_{i=1}^{q}
\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\sum_{i=1}^{q}\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert \right\vert _{W}.
\end{equation}The summands will be treated separately.
For $\func{I}_a $ (here we use the technical assumption $R({\Greekmath 010E}) \leq 1$, which implies that $R({\Greekmath 010E}) \leq R({\Greekmath 010E})^{\Greekmath 0110}$), arguing as in the proof of Lemma \ref{apppoas} and using (U2.2), (U2.3), $\sum_{i=1}^{q} 1/\det Df_{0}({\Greekmath 010D} _{0,i})=1$ and $||{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}||_W \leq ||{\Greekmath 0116}||_\infty \leq 1$ (${\Greekmath 0116} ={\Greekmath 0116} _{\Greekmath 010E}$ is an $F_{\Greekmath 010E}$-invariant probability measure in $S^\infty$, cf. Section \ref{sofkjsdkgfhksjfgd}), we have
\begin{eqnarray*}
\func{I}_a &\leq & \sum_{i=1}^{q}
\left\vert \left\vert \frac{{\func{F}_{0,{\Greekmath 010D} _{0,i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}\right\vert \right\vert _{W}
\\&\leq & \sum_{i=1}^{q}
\frac{\left\vert \left\vert ({\func{F}_{0,{\Greekmath 010D} _{0,i} }{_\ast }}- \func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }){\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}\right\vert \right\vert _{W}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
\\&\leq & \sum_{i=1}^{q}
\frac{\left\vert \left\vert ({\func{F}_{0,{\Greekmath 010D} _{0,i} }{_\ast }}- \func{F}_{0,{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }){\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}\right\vert \right\vert _{W}}{\det Df_{0}({\Greekmath 010D} _{0,i})} + \sum_{i=1}^{q}
\frac{\left\vert \left\vert ({\func{F}_{0,{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}- \func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }){\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}\right\vert \right\vert _{W}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
\\&\leq & \left(\sum_{i=1}^{q}
\frac{1}{\det Df_{0}({\Greekmath 010D} _{0,i})}\right) |G_0|_{\Greekmath 0110} R({\Greekmath 010E})^{\Greekmath 0110} ||{\Greekmath 0116} ||_\infty+ \left(\sum_{i=1}^{q}
\frac{1}{\det Df_{0}({\Greekmath 010D} _{0,i})}\right) R({\Greekmath 010E}) ||{\Greekmath 0116} ||_\infty
\\&\leq & |G_0|_{\Greekmath 0110} R({\Greekmath 010E})^{\Greekmath 0110} + R({\Greekmath 010E})
\\&\leq & R({\Greekmath 010E})^{\Greekmath 0110} (|G_0|_{\Greekmath 0110} +1).
\end{eqnarray*}For $\func{I}_b$, we have (by (U2.1) and item (a) of (U1))
\begin{eqnarray*}
\func{I}_b &\leq& \sum_{i=1}^{q}
\left\vert \left\vert \frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert \right\vert _{W}
\\&\leq& \sum_{i=1}^{q}
\left\vert \frac{1}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert \left\vert \left\vert {\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}\right\vert \right \vert _{W}
\\&\leq& \sup \dfrac{1}{\det Df_{{\Greekmath 010E}} \cdot \det Df_{0}} \sum_{i=1}^{q}
\left\vert \frac{1}{\det Df_{0}({\Greekmath 010D} _{0,i})}
-\frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert
\\&\leq& K_5 R({\Greekmath 010E})^{\Greekmath 0110}.
\end{eqnarray*}Let us estimate $\func{II}$ (note that $\sum_{i=1}^{q} \left\vert \frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert =1$, since $m_1$ is supposed to be $f_{\Greekmath 010E}$ invariant for all ${\Greekmath 010E}$),
\begin{eqnarray*}
\func{II} &\leq& \sum_{i=1}^{q}
\left\vert \left\vert \frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}
-\frac{{\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}{\Greekmath 0116} |_{{\Greekmath 010D} _{{\Greekmath 010E},i}}}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert \right\vert _{W}
\\&\leq& \sum_{i=1}^{q} \left\vert \frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert
\left\vert \left\vert {\func{F}_{{\Greekmath 010E},{\Greekmath 010D} _{{\Greekmath 010E},i} }{_\ast }}({\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}-{\Greekmath 0116} |_{{\Greekmath 010D} _{{\Greekmath 010E},i}})\right\vert \right\vert _{W}
\\&\leq& \sum_{i=1}^{q} \left\vert \frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert
\left\vert \left\vert {\Greekmath 0116} |_{{\Greekmath 010D} _{0,i}}-{\Greekmath 0116} |_{{\Greekmath 010D} _{{\Greekmath 010E},i}}\right\vert \right\vert _{W}
\\&\leq& \sum_{i=1}^{q} \left\vert \frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert
d_1({\Greekmath 010D} _{{\Greekmath 010E},i},{\Greekmath 010D} _{0,i})^{\Greekmath 0110}|{\Greekmath 0116}|_{\Greekmath 0110}
\\&\leq& \sum_{i=1}^{q} \left\vert \frac{1}{\det Df_{{\Greekmath 010E}}({\Greekmath 010D} _{{\Greekmath 010E},i})}\right\vert
R({\Greekmath 010E}) ^{\Greekmath 0110} |{\Greekmath 0116}|_{\Greekmath 0110}
\\&\leq& R({\Greekmath 010E}) ^{\Greekmath 0110} B_u.
\end{eqnarray*}Since ${\Greekmath 0110} <1$ and ${\Greekmath 010E} <1$, we have ${\Greekmath 010E} \leq {\Greekmath 010E} ^{\Greekmath 0110}$. Thus, all these facts yield
\begin{eqnarray*}
||(\func{F_0{_\ast }}-\func{F_{\Greekmath 010E}{_\ast }}){\Greekmath 0116}_{{\Greekmath 010E} }||_{\infty} & \leq & \func{I} + \func{II}
\\& \leq & \func{I}_a + \func{I}_b + \func{II}
\\& \leq & R({\Greekmath 010E})^{\Greekmath 0110} (|G_0|_{\Greekmath 0110} +1) + K_5 R({\Greekmath 010E})^{\Greekmath 0110} + R({\Greekmath 010E}) ^{\Greekmath 0110} B_u
\\& \leq &C_1 R({\Greekmath 010E}) ^{\Greekmath 0110},
\end{eqnarray*}where $C_1=|G_0|_{\Greekmath 0110} + 1 + K_5 + B_u.$
\end{proof}
\section{Proof of Theorem \ref{belongss}}\label{sofkjsdkgfhksjfgd}
First of all, let us prove the existence and uniqueness of an $F$-invariant measure in $S^\infty$.
Lemma \ref{kjdhkskjfkjskdjf} below ensures the existence and uniqueness of an $F$-invariant measure which projects onto $m_1$. Since its proof follows by standard arguments (see \cite{AP}, for instance), we omit it.
\begin{lemma}\label{kjdhkskjfkjskdjf}
There exists a unique measure ${\Greekmath 0116}_0$ on $M \times K$ such that for every continuous function ${\Greekmath 0120} \in C^0 (M \times K)$ it holds
\begin{equation}
\lim_{n \rightarrow \infty} {\int{\inf_{{\Greekmath 010D} \times K} {\Greekmath 0120} \circ F^n }dm_1({\Greekmath 010D})}= \lim_{n \rightarrow \infty} {\int{\sup_{{\Greekmath 010D} \times K} {\Greekmath 0120} \circ F^n}dm_1 ({\Greekmath 010D})}=\int {{\Greekmath 0120}}d{\Greekmath 0116}_0.
\end{equation}Moreover, the measure ${\Greekmath 0116}_0$ is $F$-invariant and ${\Greekmath 0119}_x{_\ast}{\Greekmath 0116}_0 = m_1$.
\end{lemma}
Let ${\Greekmath 0116} _{0}$ be the $F$-invariant measure such that ${\Greekmath 0119} _{x\ast }{\Greekmath 0116} _{0}=m_1$ (which exists by Lemma \ref{kjdhkskjfkjskdjf}), where $1$ is the unique $f$-invariant density in $H_{\Greekmath 0110}$. Suppose that $g:K\longrightarrow \mathbb{R}$ is a ${\Greekmath 0110}$-H\"older function such that $|g|_{\infty }\leq 1$ and $H_{\Greekmath 0110}(g)\leq 1$. Then, it holds $\left\vert \int {g} d({\Greekmath 0116} _{0}|_{{\Greekmath 010D} })\right\vert \leq |g|_{\infty }\leq 1$. Hence, ${\Greekmath 0116} _{0}\in \mathcal{L}^{\infty }$. Since $\dfrac{d{\Greekmath 0119}_{x*}{\Greekmath 0116}_0}{dm_1} \equiv 1$, we have ${\Greekmath 0116}_0 \in S^\infty$.
The uniqueness follows directly from Theorem \ref{5.8}, since the difference between two probabilities (${\Greekmath 0116} _1 - {\Greekmath 0116}_0$) is a zero average signed measure.
\begin{definition}
Let $F:\Sigma \longrightarrow \Sigma$ be a continuous map, where $\Sigma=M\times K$ is a compact set and $F(x,y)=(f(x), G(x,y))$, with $f:M\longrightarrow M$ and $G(x, \cdot ): K\longrightarrow K$ for all $x\in M$. A set $E\subset \Sigma$ is an $(n,{\Greekmath 0122})$-spanning set if for every $(x_0, y_0)\in \Sigma$ there exists $(x_1,y_1)\in E$ such that, for all $j\in \{0,1,...,n-1\}$,
\begin{align*}
d(F^{j}(x_0,y_0),F^{j}(x_1, y_1))&= d((f^{j}(x_0), G_{x_0}^{j}(y_0)),(f^{j}(x_1), G_{x_1}^{j}(y_1)))\\
& =d_1(f^{j}(x_0), f^{j}(x_1)) + d_2(G_{x_0}^{j}(y_0), G_{x_1}^{j}(y_1))\\
& < {\Greekmath 0122}.
\end{align*}
where $d_1$ and $d_2$ are the metrics on $M$ and $K$, respectively. For ${\Greekmath 0127} \in C^{0}(M\times K, \mathbb{R})$ (the space of continuous real valued functions), define the {\bf topological pressure} of ${\Greekmath 0127}$ by
$$
P_{t}(F,{\Greekmath 0127}) := \lim_{{\Greekmath 0122} \to 0} \limsup_{n \to \infty}\dfrac{1}{n} \log \inf_{E \subset \Sigma} \biggl( \sum_{(x,y)\in E}\exp (S_{n}{\Greekmath 0127}(x,y))\biggr)
$$
where $S_{n}({\Greekmath 0127})(x,y):=\sum_{j=0}^{n-1} {\Greekmath 0127}(F^{j}(x,y))=\sum^{n-1}_{j=0}{\Greekmath 0127}(f^{j}(x), G^{j}_{x}(y))$ and the infimum is taken over all $(n,{\Greekmath 0122})$-spanning subsets $E$ of $\Sigma$.
\end{definition}
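\begin{remark}
As a basic illustration of the above definition, take ${\Greekmath 0127} \equiv 0$. Then $S_n {\Greekmath 0127} \equiv 0$, each summand equals $1$, and the definition reduces to
\begin{equation*}
P_{t}(F,0) = \lim_{{\Greekmath 0122} \to 0} \limsup_{n \to \infty}\dfrac{1}{n} \log \inf_{E \subset \Sigma} \# E,
\end{equation*}
where $\# E$ denotes the cardinality of $E$ and the infimum is taken over the $(n,{\Greekmath 0122})$-spanning subsets of $\Sigma$; that is, $P_t(F,0)$ is the topological entropy $h_{top}(F)$, and in this case the variational principle below is the usual variational principle for the entropy.
\end{remark}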
It is known that the variational principle holds, i.e.,
\begin{equation}
\label{variaprinciple}
P_{t}(F,{\Greekmath 0127}) = \sup_{{\Greekmath 0116} \in \mathcal{M}^{1}_{F}(M\times K)} \biggl\{h_{{\Greekmath 0116}}(F) + \int {\Greekmath 0127} d{\Greekmath 0116} \biggr \}
\end{equation}
where $\mathcal{M}^{1}_{F}(M\times K)$ denotes the set of probability measures ${\Greekmath 0116}$ that are invariant under $F$ (${\Greekmath 0116} \circ F^{-1}={\Greekmath 0116}$).
On the other hand, let ${\Greekmath 0127}^{\ast}\in C^{0}(M,\mathbb{R})$ and define
\[
\begin{array}{cccc}
{\Greekmath 0127}\ : & \! M\times K & \! \longrightarrow
& \! \mathbb{R} \\
& \! (x,y) & \! \longmapsto
& \! {\Greekmath 0127}(x,y):={\Greekmath 0127}^{\ast}(x)
\end{array}
\]
we have that ${\Greekmath 0127}\in C^{0}(M\times K, \mathbb{R})$. Now, let $\mathcal{M}^{1}_{m_1}(M\times K)$ be the collection of all probability measures ${\Greekmath 0116}$ on $(M\times K, \mathcal{B})$ such that
\[
{\Greekmath 0119}_{x\ast}{\Greekmath 0116} = {\Greekmath 0116}\circ {\Greekmath 0119}^{-1}_{x}=m_1
\]
where ${\Greekmath 0119}_x:M \times K \rightarrow M$ is the first projection (${\Greekmath 0119}_x(x,y)=x$). Then, Theorem \ref{rok} (Rokhlin's Disintegration Theorem) provides a disintegration $\left( \{{\Greekmath 0116} _{{\Greekmath 010D}}\}_{{\Greekmath 010D} }, m_1\right) $ of ${\Greekmath 0116}$, so that
\begin{align*}
\int_{M\times K} {\Greekmath 0127} d{\Greekmath 0116} & = \int_M \int_K {\Greekmath 0127}({\Greekmath 010D},y)d{\Greekmath 0116}_{\Greekmath 010D}(y) dm_1({\Greekmath 010D})\\
& = \int_M \int_K {\Greekmath 0127}^\ast({\Greekmath 010D}) d{\Greekmath 0116}_{\Greekmath 010D}(y)dm_1({\Greekmath 010D})\\
& = \int_M {\Greekmath 0127}^\ast({\Greekmath 010D}) dm_1({\Greekmath 010D}) < \infty.
\end{align*}
If $E\subset M\times K$ is an $(n,{\Greekmath 0122})$-spanning set, then, by the definition of the metric $d$, the set $E^\ast =\{x\in M: (x,y)\in E \mbox{ for some } y \in K\}$ is an $(n,{\Greekmath 0122})$-spanning set for the system $f:M\longrightarrow M$. Hence, by the definition of the topological pressure, we get
\begin{equation}
\label{equaPfPF1}
P_t(f, {\Greekmath 0127}^{\ast})\leq P_{t}(F, {\Greekmath 0127}).
\end{equation}
For the other inequality, we will use the following result.
\begin{theorem}[Ledrappier-Walters Formula]
\label{thLed-Walters}
Let $\Hat{X}, X$ be compact metric spaces and let $\Hat{T}:\Hat{X}\longrightarrow \Hat{X}$, $T:X\longrightarrow X$ and $\Hat{{\Greekmath 0119}}:\Hat{X}\longrightarrow X$ be continuous maps such that $\Hat{{\Greekmath 0119}}$ is surjective and $\Hat{{\Greekmath 0119}}\circ \Hat{T} = T\circ \Hat{{\Greekmath 0119}}$. Then
$$
\sup_{\Hat{{\Greekmath 0117}}; \Hat{{\Greekmath 0119}}_{\ast}\Hat{{\Greekmath 0117}}={\Greekmath 0117}}h_{\Hat{{\Greekmath 0117}}}(\Hat{T})= h_{{\Greekmath 0117}}(T) + \int h_{top}(\Hat{T}, \Hat{{\Greekmath 0119}}^{-1}(y)) d{\Greekmath 0117}(y).
$$
\end{theorem}
Since $G(x, \cdot):K\longrightarrow K$ is a uniform contraction for every $x\in M$, we have that $h_{top}(F, {\Greekmath 0119}_x^{-1}(x))=0$ for every $x\in M$. Then, by Theorem \ref{thLed-Walters}, we obtain
\begin{equation}
\label{equahfHF}
h_{{\Greekmath 0116}}(F)=h_{m_1}(f)
\end{equation}
for every $m_1\in \mathcal{M}_{f}(M)$ and every ${\Greekmath 0116}\in \mathcal{M}_{F}(M\times K)$ such that ${\Greekmath 0119}_{x\ast}{\Greekmath 0116}=m_1$; indeed, $h_{{\Greekmath 0116}}(F)\geq h_{m_1}(f)$ because $(f,m_1)$ is a measurable factor of $(F,{\Greekmath 0116})$, while the reverse inequality follows from the formula above. Therefore, by (\ref{variaprinciple}) and (\ref{equahfHF}), we get
\begin{equation}
\label{equaPFpf2}
P_{t}(F,{\Greekmath 0127})\leq P_{t}(f,{\Greekmath 0127}^\ast).
\end{equation}
Combining (\ref{equaPfPF1}) and (\ref{equaPFpf2}), we obtain
\begin{equation}
\label{equaPFpf3}
P_{t}(F,{\Greekmath 0127})= P_{t}(f,{\Greekmath 0127}^\ast)
\end{equation}
\begin{proposition}
\label{propestadoequil}
A measure $m_1\in \mathcal{M}_{f}(M)$ is an equilibrium state for $(f, {\Greekmath 0127}^\ast)$ if and only if any ${\Greekmath 0116}\in \mathcal{M}_{F}(M\times K)$ such that $m_1={\Greekmath 0119}_{x\ast} {\Greekmath 0116}$ is an equilibrium state for $(F,{\Greekmath 0127})$. Moreover, if $m_1$ is the unique equilibrium state for $(f, {\Greekmath 0127}^\ast)$, then the corresponding ${\Greekmath 0116}$ is the unique equilibrium state for $(F,{\Greekmath 0127})$.
\end{proposition}
\begin{proof}
The first part follows from (\ref{equahfHF}) and (\ref{equaPFpf3}). For the second part, we use Lemma \ref{kjdhkskjfkjskdjf}.
\end{proof}
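\begin{remark}
To illustrate Proposition \ref{propestadoequil} outside the specific setting of this paper, suppose (purely as an example) that $f:M\longrightarrow M$ is a $C^2$ expanding map and that ${\Greekmath 0127}^\ast = -\log |\det Df|$. It is classical that this potential has a unique equilibrium state, namely the absolutely continuous $f$-invariant probability $m_1$. By Proposition \ref{propestadoequil}, the $F$-invariant measure ${\Greekmath 0116}$ with ${\Greekmath 0119}_{x\ast}{\Greekmath 0116}=m_1$, given by Lemma \ref{kjdhkskjfkjskdjf}, is then the unique equilibrium state of $(F,{\Greekmath 0127})$ for the lifted potential ${\Greekmath 0127}(x,y)={\Greekmath 0127}^\ast (x)$.
\end{remark}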
\section{Proof of Theorem \ref{dlogd}}\label{kdjfhksdjfhksdf}
\begin{proof} (of theorem \ref{dlogd})
First note that, if ${\Greekmath 010E} > 0$ is small enough, then ${\Greekmath 010E} \leq -{\Greekmath 010E} \log {{\Greekmath 010E}} $. Moreover, $x -1 \leq \lfloor x \rfloor$, for all $x \in \mathbb{R}$.
By P1,
\begin{equation*}
||\func{L}_{{\Greekmath 010E} }{\Greekmath 0116}_{{\Greekmath 010E} }-\func{L}_{0}{\Greekmath 0116}_{{\Greekmath 010E} }||_{w}\leq R({\Greekmath 010E}) ^{\Greekmath 0110} C
\end{equation*}
(see Lemma \ref{gen}, item a) ) and P4 yields $C_{i}\leq M_{2}.$
Hence, by Lemma \ref{gen} we have
\begin{equation*}
||{\Greekmath 0116}_{{\Greekmath 010E} }-{\Greekmath 0116}_{0 }||_{w}\leq R({\Greekmath 010E}) ^{\Greekmath 0110} CM_{2}N+||\func{L}_{0}^{ N}({\Greekmath 0116}_{0}-{\Greekmath 0116}_{{\Greekmath 010E}})||_{w}.
\end{equation*}
By the exponential convergence to equilibrium of $\func{L}_{0}$ (P3), there exist $0<{\Greekmath 011A} _2 <1$ and $C_2 >0$ such that (recalling that
by P2 $||({\Greekmath 0116}_{{\Greekmath 010E}}-{\Greekmath 0116}_{0})||_{s}\leq 2M$)
\begin{eqnarray*}
||\func{L} _{0}^{ N}({\Greekmath 0116}_{{\Greekmath 010E} }-{\Greekmath 0116}_{0})||_{w} &\leq &C_{2}{\Greekmath 011A} _{2}^{N}||({\Greekmath 0116}_{{\Greekmath 010E} }-{\Greekmath 0116}_{0 })||_{s} \\
&\leq &2C_{2}{\Greekmath 011A} _{2}^{N}M
\end{eqnarray*}
hence
\begin{equation*}
||{\Greekmath 0116}_{{\Greekmath 010E} }-{\Greekmath 0116}_{0}||_{w}\leq R({\Greekmath 010E}) ^{\Greekmath 0110} CM_{2}N+2C_{2}{\Greekmath 011A} _{2}^{N}M.
\end{equation*}
Choosing $N=\left\lfloor \frac{\log {\Greekmath 010E} }{\log {\Greekmath 011A} _{2}}\right\rfloor $, we have
\begin{eqnarray*}
||{\Greekmath 0116}_{{\Greekmath 010E} }-{\Greekmath 0116}_{0} ||_{w} &\leq &R({\Greekmath 010E}) ^{\Greekmath 0110} CM_{2}\left\lfloor \frac{
\log {\Greekmath 010E} }{\log {\Greekmath 011A} _{2}}\right\rfloor +2C_{2}{\Greekmath 011A} _{2}^{\left\lfloor
\frac{\log {\Greekmath 010E} }{\log {\Greekmath 011A} _{2}}\right\rfloor }M \\
&\leq &R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E} CM_{2}\frac{1}{\log {\Greekmath 011A} _{2}}+2C_{2}{\Greekmath 011A} _2 ^ { \frac{\log {\Greekmath 010E} }{\log {\Greekmath 011A} _{2}} -1} M \\
&\leq & R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E} CM_{2}\frac{1}{\log {\Greekmath 011A} _{2}}+\frac{2C_{2}{\Greekmath 011A} _2 ^ { \frac{\log {\Greekmath 010E} }{\log {\Greekmath 011A} _{2}}} M}{{\Greekmath 011A} _2} \\
&\leq & R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E} CM_{2}\frac{1}{\log {\Greekmath 011A} _{2}}+\frac{2C_{2}{\Greekmath 010E} M}{{\Greekmath 011A} _2} \\
&\leq & R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E} CM_{2}\frac{1}{\log {\Greekmath 011A} _{2}}-\frac{2C_{2}{\Greekmath 010E} \log {\Greekmath 010E} M}{{\Greekmath 011A} _2}\\
&\leq & R({\Greekmath 010E}) ^{\Greekmath 0110} \log {\Greekmath 010E} \left( \frac{CM_{2}}{\log {\Greekmath 011A} _{2}}-\frac{2C_{2} M}{{\Greekmath 011A} _2} \right).
\notag
\end{eqnarray*}
\end{proof}
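The choice $N=\left\lfloor \frac{\log {\Greekmath 010E} }{\log {\Greekmath 011A} _{2}}\right\rfloor $ made above can also be checked numerically. The small script below is a purely illustrative sketch (the constants are arbitrary and have nothing to do with the ones of the paper, and we take the linear case $R({\Greekmath 010E})={\Greekmath 010E}$): it evaluates the two summands of the intermediate bound $R({\Greekmath 010E}) ^{\Greekmath 0110} CM_{2}N+2C_{2}{\Greekmath 011A} _{2}^{N}M$ at this value of $N$ and confirms that the second one is of order ${\Greekmath 010E}$, hence absorbed by the $R({\Greekmath 010E})^{\Greekmath 0110} \log {\Greekmath 010E}$ term.
\begin{verbatim}
# Purely illustrative sketch: arbitrary constants, linear R(delta) = delta.
import math

beta, C, M2, C2, M, rho2 = 0.5, 1.0, 1.0, 1.0, 1.0, 0.5
R = lambda d: d

delta = 1e-3
N = math.floor(math.log(delta) / math.log(rho2))   # N = 9 for these values

term1 = R(delta) ** beta * C * M2 * N              # main term, ~ R(delta)^beta * |log delta|
term2 = 2 * C2 * rho2 ** N * M                     # bounded by 2 * C2 * M * delta / rho2

print(N, term1, term2, 2 * C2 * M * delta / rho2)  # 9  0.2846...  0.0039...  0.004
\end{verbatim}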
\section{Proof of Theorems \ref{rr}, \ref{htyttigu} and \ref{htyttigui}}\label{kkdjfkshfdsdfsttr}
Before establishing Theorem \ref{htyttigu}, we need to prove some results. The proof of Theorem \ref{thshgf}, used here, is postponed to Section \ref{ieutiet}.
\begin{proof}(of theorem \ref{rr})
We need to prove that $\{F_{\Greekmath 010E}\}_{{\Greekmath 010E} \in [0,1)}$ satisfies P1, P2, P3 and P4 of Definition \ref{UF}. To prove P2, note that, by (A1) and equation (\ref{weakcontral11234}), we have
\begin{eqnarray*}
||\func{F {_{\Greekmath 010E}} {_*} ^n} {\Greekmath 0116}_{{\Greekmath 010E} }||_{S^\infty} &=& |\func {P}_{f_{{\Greekmath 010E} }} ^n {\Greekmath 011E} _x |_{{\Greekmath 0110}} + ||\func{F_{\Greekmath 010E}{_\ast }} ^n {\Greekmath 0116}||_{\infty} \\&\leq & D {\Greekmath 0115} ^n |{\Greekmath 011E} _x|_{{\Greekmath 0110}} + D|{\Greekmath 011E} _x|_\infty + || {\Greekmath 0116}||_{\infty} \\&\leq & D {\Greekmath 0115} ^n ||{\Greekmath 0116}||_{S^\infty} + (D +1) || {\Greekmath 0116}||_{\infty}.
\end{eqnarray*}Therefore, if ${\Greekmath 0116}_{{\Greekmath 010E} }$ is a fixed probability measure for the operator $\func{F_{\Greekmath 010E} {_*}}$, by the above inequality we get P2 with $M=D+1$.
A direct application of Theorem \ref{UF2ass} and Theorem \ref{thshgf} gives P1. The items P3 and P4 follow, respectively, from Proposition \ref{5.8} and equation (\ref{weakcontral11234}) applied to each $F_{\Greekmath 010E}$.
\end{proof}
\begin{proof}(of theorems \ref{htyttigu} and \ref{htyttigui})
We directly apply the above results together with Theorem \ref{dlogd}, and the proof of Theorem \ref{htyttigu} is done. Theorem \ref{htyttigui} is obtained by replacing $R({\Greekmath 010E})$ with its expression $R({\Greekmath 010E})=K_6 {\Greekmath 010E}$.
\end{proof}
\section{Proof of Theorem \ref{thshgf}}\label{ieutiet}
First we need a preliminary lemma.
\begin{lemma} Let $\{F_{\Greekmath 010E}\}_{{\Greekmath 010E} \in [0,1)}$ be an admissible $R({\Greekmath 010E})$-perturbation. Then, there exist uniform constants $0<{\Greekmath 010C}_u<1$ and $D_{2,u}>0$ such that, for every ${\Greekmath 0116} \in \mathcal{H} _{\Greekmath 0110}^{+}$ which satisfies ${\Greekmath 011E} _x = 1$ $m_1$-a.e., it holds
\begin{equation}\label{er}
|\Gamma_{\func{F_{\Greekmath 010E} {_*}}^n{\Greekmath 0116}}^{\Greekmath 0121}|_{{\Greekmath 0110}} \leq {\Greekmath 010C}_u ^n |\Gamma _{\Greekmath 0116}^{\Greekmath 0121}|_{\Greekmath 0110} + \dfrac{D_{2,u}}{1-{\Greekmath 010C}_u}||{\Greekmath 0116}||_\infty,
\end{equation}for all ${\Greekmath 010E} \in [0,1)$ and all $n \geq 0$.
\label{las123rtryrdfd2}
\end{lemma}
\begin{proof}
We apply Corollary \ref{kjdfhkkhfdjfh} to each $F_{\Greekmath 010E}$ and obtain,
\begin{equation*}
|\func{F_{\Greekmath 010E}{_\ast} {\Greekmath 0116}}|_{\Greekmath 0110} \leq {\Greekmath 010C}_{\Greekmath 010E} |{\Greekmath 0116} |_{\Greekmath 0110} + D_{2,{\Greekmath 010E}} ||{\Greekmath 0116}||_{\infty}, \ \forall {\Greekmath 010E} \in [0,1), \label{lasotaingt234dffggdgh2}
\end{equation*}where ${\Greekmath 010C}_{\Greekmath 010E}:= ({\Greekmath 010B}_{\Greekmath 010E} L_{\Greekmath 010E})^{\Greekmath 0110}$ and $D_{2,{\Greekmath 010E}}:=\{{\Greekmath 010F} _{{\Greekmath 011A},{\Greekmath 010E}} L_{\Greekmath 010E}^{\Greekmath 0110} + |G_{\Greekmath 010E}|_ {\Greekmath 0110} L_{\Greekmath 010E}^{\Greekmath 0110}\}$.
By (A2), we define ${\Greekmath 010C}_u:= \displaystyle{\sup _ {\Greekmath 010E} {\Greekmath 010C}_{\Greekmath 010E}}$ and $D_{2,u}:= \displaystyle{\sup_{\Greekmath 010E} D_{2,{\Greekmath 010E}}}$, and the result is established.
\end{proof}
\begin{proof}(of Theorem \ref{thshgf})
Consider the path $\Gamma^{{\Greekmath 0121}_n}_{\func{F_{\Greekmath 010E}{_\ast }}^n m}$, defined in Remark \ref{riirorpdf}, which represents the measure $\func{F_{\Greekmath 010E} {_\ast }}^nm$.
According to Theorem \ref{belongss}, let ${\Greekmath 0116} _{{\Greekmath 010E}}\in S^\infty$ be the unique $F_{\Greekmath 010E}$-invariant probability measure in $S^\infty$. Consider the measure $m$, defined in Remark \ref{riirorpdf} and its iterates $\func{F_{\Greekmath 010E}{_\ast }}^n
(m)$. By Theorem \ref{5.8}, these iterates converge to ${\Greekmath 0116} _{{\Greekmath 010E}}$
in $\mathcal{L}^{\infty }$. It implies that the sequence $\{\Gamma_{\func{F_{\Greekmath 010E}{_\ast }}^n(m)} ^{{\Greekmath 0121} _n}\}_{n}$ converges $m_1$-a.e. to $\Gamma_{{\Greekmath 0116} _{{\Greekmath 010E}}}^{\Greekmath 0121}\in \Gamma_{{\Greekmath 0116}_{{\Greekmath 010E}} }$ (in $\mathcal{SB}(K)$ with respect to the metric defined in definition \ref{wasserstein}), where $\Gamma_{{\Greekmath 0116} _{{\Greekmath 010E}}}^{\Greekmath 0121}$ is a path given by the Rokhlin Disintegration Theorem and $\{\Gamma_{\func{F_{\Greekmath 010E}{_\ast }}^n(m)} ^{{\Greekmath 0121}_n}\}_{n}$ is given by equation (\ref{niceformulaaw}). It implies that $\{\Gamma_{\func{F_{\Greekmath 010E}{_\ast }}^n(m)} ^{{\Greekmath 0121}_n}\}_{n}$ converges pointwise to $\Gamma_{{\Greekmath 0116} _{ {\Greekmath 010E}}}^{\Greekmath 0121}$ on a full measure set $\widehat{M_{\Greekmath 010E}}\subset M$. Let us denote $
\Gamma_{n,{\Greekmath 010E}}:=\Gamma^{{\Greekmath 0121}_n}_{\func{F_{\Greekmath 010E}{_\ast }}^n(m)}|_{
\widehat{M_{\Greekmath 010E}}}$ and $\Gamma_{\Greekmath 010E}:=\Gamma^{\Greekmath 0121} _{{\Greekmath 0116} _{{\Greekmath 010E}}}|_{\widehat{M_{\Greekmath 010E}}}$. Since $\{\Gamma_{n,{\Greekmath 010E}} \}_n $ converges pointwise to $\Gamma _{\Greekmath 010E}$, the uniform bound of Lemma \ref{las123rtryrdfd2} passes to the limit. Indeed, let $x,y \in \widehat{M_{\Greekmath 010E}}$. Then,
\begin{eqnarray*}
\lim _{n \longrightarrow \infty} {\dfrac{||\Gamma_{n,{\Greekmath 010E}} (x) - \Gamma _{n, {\Greekmath 010E}}(y)||_W}{d_1(x,y)^{\Greekmath 0110}}} &= & \dfrac{||\Gamma _{\Greekmath 010E} (x) - \Gamma _{\Greekmath 010E} (y)||_W}{d_1(x,y)^{\Greekmath 0110}}.
\end{eqnarray*} On the other hand, by Lemma \ref{las123rtryrdfd2} (recall that the path $\Gamma ^{{\Greekmath 0121} _0}_m$ is constant, so that $|\Gamma ^{{\Greekmath 0121} _0}_m|_{\Greekmath 0110} =0$), the quantity inside the limit on the left hand side is bounded by $|\Gamma_{n, {\Greekmath 010E}}|_{\Greekmath 0110} \leq \dfrac{D_u}{1-{\Greekmath 010C}_u}$ for all $n\geq 1$. Then,
\begin{eqnarray*}
\dfrac{||\Gamma _{\Greekmath 010E} (x) - \Gamma _{\Greekmath 010E} (y)||_W}{d_1(x,y)^{\Greekmath 0110}}&\leq & \dfrac{D_{u}}{1-{\Greekmath 010C} _u}.
\end{eqnarray*} Thus, $|\Gamma^{\Greekmath 0121}_{{\Greekmath 0116} _{{\Greekmath 010E}}}|_{\Greekmath 0110} \leq\dfrac{D_{u}}{1-{\Greekmath 010C} _u}$ and taking the infimum we get $|{\Greekmath 0116} _{{\Greekmath 010E}}|_{\Greekmath 0110} \leq \dfrac{D_{u}}{1-{\Greekmath 010C} _u}$. We finish the proof defining $B_u:=\dfrac{D_{u}}{1-{\Greekmath 010C} _u}$.
\end{proof}
\end{document}
|
\begin{document}
\title{Connectivity keeping trees in 3-connected bipartite graphs with girth conditions
\footnote{The research is supported by National Natural Science Foundation of China (12261086).}}
\author{Qing Yang, Yingzhi Tian\footnote{Corresponding author. E-mail: [email protected] (Y. Tian).} \\
{\small College of Mathematics and System Sciences, Xinjiang
University, Urumqi, Xinjiang, 830046, PR China}}
\date{}
\maketitle
\noindent{\bf Abstract } Luo, Tian and Wu conjectured in 2022 that for any tree $T$ with bipartition $X$ and $Y$, every $k$-connected bipartite graph $G$ with $\delta(G) \geq k + t$, where $t = \max\{|X|,|Y |\}$, contains a subtree $T' \cong T$ such that $G-V(T')$ remains $k$-connected. This conjecture has been proved for caterpillars and spiders when $k\leq 3$; and for paths with odd order. In this paper, we prove that this conjecture holds if $G$ is a bipartite graph with $g(G)\geq diam(T)-1$ and $k\leq 3$, where $g(G)$ and $diam(T)$ denote the girth of $G$ and the diameter of $T$, respectively.
\noindent{\bf Keywords:} Connectivity; Bipartite graphs; Trees; Girth; Diameter
\section{Introduction}
In this paper, all graphs are finite, undirected without multiple edges and loops. Let $G$ be a graph with \emph{vertex set} $V(G)$ and \emph{edge set} $V(G)$. The \emph{girth} of $G$, denoted by $g(G)$, is the length of a smallest cycle in $G$. The \emph{diameter} of $G$, denoted by $diam(G)$, is the maximum distance for every pair of vertices in $G$. Assume $diam(G) = 0$ if $|V(G)| = 1$. For a vertex $v\in V(G)$, the \emph{neighborhood} of $v$ in $G$, denoted by $N_{G}(v)$, is the set of vertices in $G$ adjacent to $v$. The degree $d_G(v)$ of $v$ in $G$ is $|N_{G}(v)|$. The \emph{minimum degree} $\delta(G)$ of $G$ is min$_{v\in V(G)}d_G(v)$. For graph-theoretical terminologies and notation not defined here, we follow \cite{Bondy}.
The $connectivity$ of $G$, denoted by $\kappa(G)$, is the minimum size of a vertex set $U\subseteq V(G)$ such that $G-U$ is disconnected or has only one vertex. A graph $G$ is \emph{$k$-connected} if $\kappa(G)\geq k$.
In 1972, Chartrand, Kaugars and Lick \cite{Chartrand} proved that every $k$-connected graph $G$ with $\delta(G) \geq \lfloor \frac{3k}{2}\rfloor $contains a vertex $v$ such that $\kappa(G -\{v\})\geq k$. Fujita and Kawarabayashi \cite{Fujita} showed every $k$-connected graph $G$ with $\delta(G) \geq \lfloor \frac{3k}{2}\rfloor +2$ contains an edge $uv$ such that $G -\{u,v\}$ remains $k$-connected. Meanwhile, they conjectured that every $k$-connected graph $G$ with $\delta(G) \geq \lfloor \frac{3k}{2}\rfloor+ f_{k}(m)-1$ contains a connected subgraph $H$ of order $m$ such that $\kappa(G-V(H))\geq k$, where $f_{k}(m)$ is non-negative. Mader confirmed the conjecture and proposed the following conjecture in \cite{Mader1}.
\begin{conj}(Mader \cite{Mader1})
For every tree $T$ of order $m$, every $k$-connected graph $G$ with $\delta(G) \geq \lfloor \frac{3k}{2}\rfloor+ m-1$ contains a subtree $T'\cong T$ such that $G-V (T')$ is still $k$-connected.
\end{conj}
In the same paper, Mader confirmed this conjecture for paths. Concerning to this conjecture, Mader \cite{Mader2} also proved that this conjecture holds if $\delta(G) \geq 2(k-1+m)^2+m-1$. For $k=1$, the result in \cite{Diwan} showed that Conjecture 1 is true. For $k=2$, Conjecture 1 was verified when $T$ is a special tree: stars, path-double-stars, quasi-monotone caterpillars, caterpillars and spiders et al. [5-7, 9, 13-14]. In 2022, Hong and Liu \cite{Hong} proved that Mader's conjecture holds when $k=2$ and $k=3$. Conjecture 1 is still open when $k\geq 4$.
For bipartite graphs, Luo, Tian and Wu \cite{Luo} proved the following theorem, which relaxes the minimum degree bound in Conjecture 1.
\begin{thm} (Luo, Tian and Wu \cite{Luo})
Every $k$-connected bipartite graph $G$ with $\delta(G) \geq k + m$ contains a path $P$ of order $m$ such that $G-V (P)$ remains $k$-connected.
\end{thm}
Analogue to Conjecture 1, for bipartite graphs, the authors in \cite{Luo} proposed the following conjecture.
\begin{conj} (Luo, Tian and Wu \cite{Luo})
For every tree $T$ with bipartition $X$ and $Y$, every $k$-connected bipartite graph $G$ with $\delta(G) \geq k + t$, where $t = \max\{|X|, |Y |\}$, contains a subtree $T'\cong T$ such that $\kappa(G -V (T')) \geq k$.
\end{conj}
Zhang \cite{Zhang} confirmed Conjecture 2 for caterpillars when $k\leq 2$. Yang and Tian \cite{Yang1,Yang2}confirmed Conjecture 2 for caterpillars and spiders when $k\leq 3$; and for paths with odd order.
In the next section, we will present notations, terminology and some lemmas used in the proofs of main results. Section 3 show that Conjecture 2 holds if $G$ is a bipartite graph with $g(G)\geq diam(T)-1$ and $k\leq 3$.
\section{Preliminaries}
For a subgraph $H\subseteq G$, define $\delta_{G}(H)=\min_{v\in V(H)}d_{G}(v)$. While $\delta(H)$ denotes the minimum degree of the graph $H$. The \emph{induced subgraph} of a vertex set $U\subseteq V(G)$ in $G$, denoted by $G[U]$, is the graph with vertex set $U$, where two vertices in $U$ are adjacent if and only if they are adjacent in $G$. And $G-U$ is the induced graph $G[V(G)\backslash U)]$. The \emph{edge-induced subgraph} of an edge set $F\subseteq E(G)$ in $G$, denoted by $G[F]$, is the subgraph of $G$ whose edge set is $F$ and whose vertex set consists of all ends of edges in $F$.
The \emph{distance} between two vertices $u$ and $v$ in a connected graph $G$, denoted by $d_{G}(u, v)$, is the length of a shortest path between them. The \emph{eccentricity} of $v$ in $G$, denoted by $ecc_{G}(v)$, is defined as $max_{w\in V(G)}d_{G}(v, w)$.
A \emph{$(u, v)$-path} $P$ is a path with ends $u$ and $v$. For any $u',v'\in V(P)$, $u'Pv'$ is the subpath of $P$ between $u'$ and $v'$. For two subsets $V_{1}$ and $V_{2}$ of $V (G)$, the $(V_{1}, V_{2})$-path is a path with one end in $V_{1}$, the other end in $V_{2}$, but internal vertices not in $V_{1}\cup V_{2}$. We use $(v, V_{2} )$-path for $(\{v\}, V_{2})$-path.
The $subdivision$ for an edge $uv$ in a graph $G$ is the deletion of $uv$ from $G$ and the addition of two edges $uw$ and $wv$ along with the new vertex $w$. The subdivision of the graph $G$ is derived from $G$ by a sequence of subdivisions for some edges in $G$. Let $H$ be a subdivision of some simple 3-connected graph. An \emph{ear} of $H$ is a maximal path $P$ whose each internal vertex has degree 2 in $H$.
The following Lemmas will be used to construct trees in our proof.
\begin{lem} (Yang and Tian \cite{Yang1})
Let $T=(X,Y)$ be a tree and let $S$ be a subtree of $T$. If a bipartite graph $G$ contains a subtree $S'\cong S $ such that $d_{G}(v)\geq t$ for any $v\in V(G)\setminus V(S')$ and for any $v\in V(S')$ with $d_{S}(\phi ^{-1}(v))<d_{T}(\phi ^{-1}(v))$, where $\phi$ is an isomorphism from $V(S)$ to $V(S')$ and $t=\max\{|X|,|Y|\}$, then $G$ contains a subtree $T'\cong T$ such that $S'\subseteq T'$.
\end{lem}
\begin{lem}(Yang and Tian \cite{Yang1})
Let $T=(X,Y)$ be a tree and let $S$ be a subtree obtained from $T$ by deleting some leaves of $T$. If a bipartite graph $G$ contains a subtree $S'\cong S $ such that $d_{G}(v)\geq t$ for any $v\in V(S')$ with $d_{S}(\phi ^{-1}(v))<d_{T}(\phi ^{-1}(v))$, where $\phi$ is an isomorphism from $V(S)$ to $V(S')$ and $t=\max\{|X|,|Y|\}$, then $G$ contains a subtree $T'\cong T$ such that $S'\subseteq T'$.
\end{lem}
\section{Results}
\begin{thm}
Let $T$ be a tree with bipartition $X$ and $Y$. Every 3-connected bipartite graph $G$ with $\delta(G)\geq t+3$ and $g(G)\geq diam(T)-1$, where $t=\max\{|X|,|Y|\}$, contains a subtree $T'\cong T$ such that $G-V(T')$ is still 3-connected.
\end{thm}
\noindent{\bf Proof.} Since $\delta(G)\geq t+3$, there exists a subtree $T'\cong T$ in $G$ by Lemma 2.1. Let $B$ be a subdivision of some simple 3-connected graph in $G-V(T')$. The existence of $B$ is assured by $\delta(G-V(T'))\geq 3$. We choose $T'$ and $B$ such that $n(B)$ is as large as possible and $|V(B)|$ is as small as possible, where $n(B)=|\{v\in V(B)|d_{B}(v)\geq 3\}|$. We denote $$V_{1}=\{v\in V(B)|d_{B}(v)\geq 3\}$$ and $V_{2}=V(B)\backslash V_{1}$. Let $H=G-(V(T')\cup V(B))$. Suppose $V(H)=\emptyset$. Then $\delta(B)\geq t+3-t=3$, $B=G-V(T')$ is 3-connected, and the theorem holds. Thus, we assume $V(H)\neq\emptyset$ and will complete the proof by a series of claims.
\noindent{\bf Claim 1.} For any $v\in V(T'\cup H)$, if $H\cup T'-\{v\}$ contains a subtree $T''\cong T$, then $$|N_{G}(v)\cap V(B)|\leq 2.$$
Suppose, to the contrary, that $u\in V(T'\cup H)$ is a vertex such that $H\cup T'-\{u\}$ contains a subtree $T''\cong T$ and $|N_{G}(u)\cap V(B)|\geq 3.$ Suppose there exists an ear $Q$ such that $N_{G}(u) \cap V(B) \subseteq V(Q)$. Let $Q = u_{1}u_{2}\cdots u_{s}$, $a=\min\{i: 1\leq i\leq s~and~ u_{i} \in N_{G}(u)\}$ and $b=\max\{i: 1\leq i\leq s ~and~ u_{i} \in N_{G}(u)\}$. Since $G$ is a bipartite graph and $|N_{G}(u)\cap V(B)|\geq 3$, we get $s\geq 5$ and $b-a\geq 4$. We denote $$B'=G[(E(B)\setminus\{u_{a}u_{a+1},u_{a+1}u_{a+2},\cdots,u_{b-1}u_{b}\})\cup \{uu_{a},uu_{b}\}].$$ Then $B'$ is a subdivision of some simple 3-connected graph with $n(B')= n(B)$ and $|V(B)|-|V(B')|=b-a-2\geq 1$, which is a contradiction to the choices of $T'$ and $B$. Thus, no ear of $B$ contains $N_{G}(u) \cap V(B)$. Then there exist three $(u, V_1)$-paths $Q_{1}$, $Q_{2}$, $Q_{3}$ whose internal vertices all lie in $V(B)$. We denote $$B''=G[E(B)\cup E(Q_{1})\cup E(Q_{2})\cup E(Q_{3})].$$ Then $B''$ is still a subdivision of some simple 3-connected graph, but $n(B'') > n(B)$, a contradiction.
\noindent{\bf Claim 2.} $|N_{G}(v)\cap V(B)|\leq 2$ for any $v\in V(H)$.
Since $T'\subseteq H\cup T'-\{v\}$ for any $v\in V(H)$, Claim 1 implies Claim 2.
For any $v\in V(H)$, we have $|N_{G}(v)\cap V(T')|\leq t$. Thus, $\delta (H)\geq t+3-t-2=1$ by Claim 2.
\noindent{\bf Claim 3.} $|N_{G}(v)\cap V(B)|\leq 2$ for any $v\in V(T')$.
If $t=1$, then $T\cong K_1$ or $K_2$, and the claim holds by $\delta (H)\geq 1$. Thus, we assume $t\geq2$ in the following. Let $S$ be the subtree obtained from $T'$ by deleting all leaves of $T'$. Then $V(S)\neq \emptyset$ and $g(G)\geq diam (S)+1$ by $diam (S)=diam(T')-2$. Suppose, to the contrary, that the claim is false. Let us consider the following two cases.
\noindent{\bf Case 1.} $|N_{G}(v)\cap V(B)|\leq 2$ for any $v\in V(S)$.
Since the claim is assumed false while Case 1 holds, there exists a leaf $v$ of $T'$ such that $|N_{G}(v)\cap V(B)|\geq 3$. Let $v'$ be the neighbor of $v$ in $T'$. Since $v'\in V(S)$, we see that $$|N_{G}(v')\cap V(H)|\geq d_{G}(v')-|N_{G}(v')\cap V(B)|-|N_{G}(v')\cap V(T')|\geq t+3-2-t\geq1.$$ Thus, there exists a vertex $v''\in N_{G}(v')\cap V(H)$. Let $T_{1}=G[E(T'-\{v\})\cup \{v'v''\}]$. Then $T_{1}\cong T$ and $T_{1}\subseteq H\cup T'-\{v\}$, but $|N_{G}(v)\cap V(B)|\geq 3$, a contradiction to Claim 1.
\noindent{\bf Case 2.} $|N_{G}(v)\cap V(B)|\geq 3$ for some $v\in V(S)$.
Let $w\in V(S)$ be such that $|N_{G}(w)\cap V(B)|\geq 3$. We regard $S$ as a rooted tree at $w$ and denote by $C_{S}(v)$ the set of children of a vertex $v$ in $S$. Suppose $G[V(H)\cup (V(T')\backslash\{w\})]$ contains a subtree $S^{*}$ such that $S^{*}\cong S'$ (where $S'$ is a subtree obtained from $T'$ by deleting some leaves of $T'$) and $u\in V(H)$ for any $u\in V(S^{*})$ with $d_{S'}(\phi ^{-1}(u))<d_{T'}(\phi ^{-1}(u))$ (where $\phi$ is an isomorphism from $V(S')$ to $V(S^{*})$). Then, by $|N_{G}(h)\backslash(V(B)\cup \{w\})|\geq t+3-2-1=t$ for any $h\in V(H)$, we can extend $S^{*}$ to a tree $T^{*}\cong T$ in $H\cup T'-\{w\}$ by Lemma 2.2. But $|N_{G}(w)\cap V(B)|\geq 3$, a contradiction to Claim 1. In the following, we only need to find some $S^{*}$ as above.
Suppose there exists a vertex $u'\in V(H)$ such that $C_{S}(w)\subseteq N_{G}(u')$. Then, we can employ $G[ E(T'-\{w\})\cup\{u'u''|u''\in C_{S}(w)\}]$ as a desired subtree $S^{*}$. Thus, we assume that $C_{S}(w)\nsubseteq N_{G}(h)$ for any $h\in V(H)$. Let $x_{0}\in V(H)$ such that $x_{0}$ and $w$ are in the same partition of the graph $G$. We denote $C_{S}(w)\backslash N_{G}(x_{0})=\{w_{1},w_{2},\cdots,w_{p}\}$. Since $|N_{G}(x_{0})\cap V(B)|\leq 2$ and $|N_{G}(x_{0})\cap V(T')|\leq t-p$, we have $$|N_{G}(x_{0})\cap V(H)|=d_{G}(x_{0})-|N_{G}(x_{0})\cap V(B)|-|N_{G}(x_{0})\cap V(T')|\geq p+1.$$
Assume $x_{1},x_{2},\cdots,x_{p}\in N_{H}(x_{0})$. Let us consider the following three subcases.
\noindent{\bf Subcase 2.1} $ecc_{S}(w)\leq1$.
If $ecc_{S}(w)=0$, then $S$ is an isolated vertex and $T'$ is a star. Since $V(H)\neq\emptyset$, any single vertex of $H$ can serve as a desired subtree $S^{*}$ in $G[V(H)\cup (V(T')\backslash\{w\})]$. Thus, assume $ecc_{S}(w)=1$.
Suppose $diam(S)=1$. Then $S$ is a path of order 2 and $T'$ is a double-star. By $\delta(H)\geq 1$, there exists an edge $h_{1}h_{2} \in E(H)$. Thus, the subtree defined by $G[\{h_{1}h_{2}\}]$ can be employed as a desired subtree $S^{*}$.
Suppose $diam(S)=2$. Then, the subtree defined by $G[E(T'-\{w,w_{1},w_{2},\cdots,w_{p}\})\cup \{x_{0}v|v\in C_{S}(w)\cap N_{G}(x_{0})\}\cup \{x_{0}x_{i}|1\leq i\leq p\}]$ can be employed as a desired subtree $S^{*}$.
\noindent{\bf Subcase 2.2} $ecc_{S}(w)=2$.
Let $|C_{S}(w_{i})\backslash N_{G}(x_{i})|=q_{i}$ for each $i\in \{1,2,\cdots,p\}$. Since $G$ is a bipartite graph, we have $$|N_{G}(x_{i})\cap V(H)|=d_{G}(x_{i})-|N_{G}(x_{i})\cap V(B)|-|N_{G}(x_{i})\cap V(T')|\geq q_{i}+1.$$ We denote $y_{i,1},y_{i,2},\cdots,y_{i,q_{i}}\in N_{G}(x_{i})\cap (V(H)\backslash\{x_{0}\})$ for any $i\in \{1,2,\cdots,p\}$. Since $G$ is a bipartite graph, we see that $$\{x_{1},x_{2},\cdots,x_{p}\}\cap\{y_{i,1},y_{i,2},\cdots,y_{i,q_{i}}\}=\emptyset,$$ for any $i\in \{1,2,\cdots,p\}$.
Suppose $diam(S)=2$. Then $S$ is a path of order $3$. Since $|N_{G}(h)\cap V(B)|\leq 2$ and $|N_{G}(h)\cap V(T')|\leq t$ for any $h\in V(H)$, each component of $H$ has at least three vertices. Thus, any path of order 3 in $H$ can serve as $S^*$.
Suppose $diam(S)=3$. Then, there is at most one $i$ such that $C_{S}(w_{i})\neq \emptyset$. Without loss of generality, assume that $C_{S}(w_{1})\neq \emptyset$. Then, the subtree defined by $G[E(T'-\{w,w_{1},w_{2},\cdots,w_{p}\}\cup C_{S}(w_{1})\backslash N_{G}(x_{1}))\cup \{x_{0}v|v\in C_{S}(w)\cap N_{G}(x_{0})\}\cup \{x_{0}x_{i}|1\leq i\leq p\}\cup\{x_{1}v|v\in C_{S}(w_{1})\cap N_{G}(x_{1})\}\cup \{x_{1}y_{1,j}|1\leq j\leq q_{1}\}]$ can be employed as a desired subtree $S^{*}$.
Suppose $diam(S)=4$. By the girth condition, we see that $$\{y_{i,1},y_{i,2},\cdots,y_{i,q_{i}}\}\cap\{y_{i',1},y_{i',2},\cdots,y_{i',q_{i'}}\}=\emptyset$$ for any pair of $i$ and $i'$ with $q_{i}\geq 1$ and $q_{i'}\geq 1$. Then, the subtree defined by $G[E(T'-U)\cup \{x_{0}v|v\in C_{S}(w)\cap N_{G}(x_{0})\}\cup \{x_{0}x_{i}|1\leq i\leq p\}\cup (\cup_{1\leq i\leq p}(\{x_{i}v|v\in C_{S}(w_{i})\cap N_{G}(x_{i})\}\cup \{x_{i}y_{i,j}|1\leq j\leq q_{i}\}))]$ can be employed as a desired subtree $S^{*}$, where $U=\{w,w_{1},w_{2},\cdots,w_{p}\}\cup (\cup_{1\leq i\leq p}C_{S}(w_{i})\backslash N_{G}(x_{i}) )$.
\noindent{\bf Subcase 2.3} $ecc_{S}(w)\geq3$.
By inductively applying similar manipulations to descendants of $x_{0}$, we can finally obtain a desired subtree $S^{*}$. Note that in each extension process, disjointness of the sets of new children for descendants of $x_{0}$ is guaranteed by the girth condition $g(G)\geq diam(S)+1$ and by the properties of bipartite graphs. Thus Claim 3 holds.
From the above claims, we have $|N_{G}(v)\cap V(B)|\leq 2$ for any $v\in V(H\cup T')$.
\noindent{\bf Claim 4.} $V_{2}\neq \emptyset$.
Suppose $V_{2}= \emptyset$. Then $\delta (B)\geq 3$ and $B$ is 3-connected. For any $v\in V(H)$, there exist three $(v,B)$-paths $P_{1}$, $P_{2}$, $P_{3}$. We choose $v$ and $P_{1}$, $P_{2}$, $P_{3}$ such that $\sum^{3}_{i=1}|V(P_{i})|$ is as small as possible. We denote $B_{1}=G[E(B)\cup E(P_{1})\cup E(P_{2}) \cup E(P_{3})]$. Then $B_{1}$ is still a subdivision of some simple 3-connected graph by $\delta (B)\geq 3$, but $n(B_{1})>n(B)$.
Suppose $\delta (G-V(B_{1}))\geq t$. By Lemma 2.1, there exists a subtree $T''\cong T$ in $G-V(B_{1})$, but $n(B_{1})>n(B)$, which is a contradiction to the choices of $T'$ and $B$.
Suppose $\delta (G-V(B_{1}))\leq t-1$. Then, there exists a vertex $z\in V(G) \backslash V(B_{1})$ such that $|N_{G}(z)\cap (V(G)\setminus V(B_{1}))|\leq t-1$. Thus $|N_{G}(z)\cap V(B_{1})|=d_{G}(z)-|N_{G}(z)\cap (V(G)\setminus V(B_{1}))|\geq 4$. Let $z_{i}\in N_{G}(z)\cap V(B_{1})$ such that $z_{i}\notin \{z_{1}, \cdots, z_{i-1}\}$ and the distance from $z_{i}$ to $B$ in $B_{1}-\{z_{1}, \cdots, z_{i-1}\}$ is as small as possible for $i=1,2,3$. Then, we can obtain three $(z, B)$-paths $P'_{1}$, $P'_{2}$, $P'_{3}$ in $G[V (B_{1})\cup\{z\}]$ using edges $zz_{1}, zz_{2}, zz_{3}$ such that $\sum^{3}_{i=1}|V(P_{i})|> \sum^{3}_{i=1}|V(P'_{i})|$, a contradiction. Thus Claim 4 holds.
Since $G$ is 3-connected and $V_{2}\neq \emptyset$, there exists a shortest $(V_{2},B)$-path $Q$ which is edge-disjoint with $B$. Let $B_{2}=G[E(B)\cup E(Q)]$ and $Q=v_{1}\cdots v_{r}$, where $v_{1}\in V_{2}$. Then $B_{2}$ is still a subdivision of some simple 3-connected graph, but $n(B_{2})>n(B)$.
Suppose $\delta (G-V(B_{2}))\geq t$. By Lemma 2.1, $G-V(B_{2})$ contains a subtree isomorphic to $T$, but $n(B_{2})>n(B)$, which is a contradiction to the choices of $T'$ and $B$.
Suppose $\delta (G-V(B_{2}))\leq t-1$. Let $u\in V(G)\backslash V(B_{2})$ such that $|N_{G}(u)\cap (V(G)\setminus V(B_{2}))|\leq t-1$. Then $|N_{G}(u)\cap V(B_{2})|=d_{G}(u)-|N_{G}(u)\cap (V(G)\setminus V(B_{2}))|\geq 4$. Since $|N_{G}(v)\cap V(B)|\leq 2$ for any $v\in V(H\cup T')$, we have $|N_{G}(u)\cap V(B)|\leq 2$. Thus, $|N_{G}(u)\cap V(Q)|\geq 2$. Further, $|N_{G}(u)\cap (V(Q)\backslash\{v_{1},v_{r}\})|=2$, since $G$ is a bipartite graph and $Q$ is the shortest $(V_{2},B)$-path. Thus, $|N_{G}(u)\cap (V(B)\backslash\{v_{1},v_{r}\})|= 2$ and $|N_{G}(u)\cap \{v_{1},v_{r}\}|=0$. Assume that $u_{1}$ is a neighbor of $u$ nearest to $v_{1}$ in $V(Q)$ and $u_{2}$ is a neighbor of $u$ in $V(B)$. Then $Q'=v_{1}Qu_{1}uu_{2}$ is a shorter $(V_{2},B)$-path than $Q$, which is a contradiction to the choice of $Q$. The proof is thus complete. $\Box$
The proof of Theorem 3.2 is similar to that of Theorem 3.1. In fact, we only need to replace $B$ in the proof of Theorem 3.1 by a maximum connected component (respectively, a maximum block). We therefore omit the detailed proof here.
\begin{thm}
Let $T$ be a tree with bipartition $X$ and $Y$. Every connected bipartite graph (respectively, 2-connected bipartite graph) $G$ with $\delta(G)\geq t+1$ (respectively, $\delta(G)\geq t+2$) and $g(G)\geq diam(T)-1$, where $t=\max\{|X|,|Y|\}$, contains a subtree $T'\cong T$ such that $G-V(T')$ is still connected (respectively, 2-connected).
\end{thm}
Since every cycle in a bipartite graph has even length, any connected bipartite graph $G$ satisfies $g(G)\geq 4$. Thus, the following result is obtained as a corollary of Theorems 3.1 and 3.2.
\begin{cor}
For any tree $T=(X,Y)$ with diameter at most 5, every $k (\leq 3)$-connected bipartite graph $G$ with $\delta(G)\geq t+k$, where $t=\max\{|X|,|Y|\}$, contains a subtree $T'\cong T$ such that $G-V(T')$ is still $k$-connected.
\end{cor}
\end{document}
\begin{document}
\frontmatter
\title{ON THE SYMMETRIC HOMOLOGY OF ALGEBRAS}
\disscopyright
\begin{abstract}
\noindent
The theory of symmetric homology, in which the symmetric groups
$\Sigma_k^\mathrm{op}$, for $k \geq 0$, play the role that the cyclic
groups do in cyclic homology, begins with the definition of the category
$\Delta S$, containing the simplicial category $\Delta$ as subcategory.
Symmetric homology of a unital algebra, $A$, over a commutative ground ring, $k$,
is defined using derived functors and the symmetric bar construction of
Fiedorowicz. If $A = k[G]$ is a group ring, then $HS_*(k[G])$ is related to stable
homotopy theory.
Two chain complexes that compute $HS_*(A)$ are constructed, both making
use of a symmetric monoidal category $\Delta S_+$ containing $\Delta S$, which
also permits homology operations to be defined on $HS_*(A)$.
Two spectral sequences are found that aid in computing symmetric homology.
In the second spectral sequence, the complex $Sym_*^{(p)}$ is constructed. This
complex turns out to be isomorphic to the suspension of the cycle-free chessboard
complex, $\Omega_{p+1}$, of Vre\'{c}ica and \v{Z}ivaljevi\'{c}. Recent results
on the connectivity of $\Omega_n$ imply finite-dimensionality of the symmetric
homology groups of finite-dimensional algebras.
Finally, an explicit
partial resolution is presented, permitting the calculation of $HS_0(A)$ and
$HS_1(A)$ for a finite-dimensional algebra $A$.
\end{abstract}
\dedication{
{\it To my Mother, Crystal, whose steadfast encouragement spurred me to
begin this journey.}
{\it To my Wife, Megan, whose love and support served as fuel for the journey.}
{\it To my son, Joshua Leighton, and my daughter, Holley Anne, who will
dream their own dreams and embark on their own journeys someday.}
}
\begin{acknowl}
\noindent
First, I would like to express my gratitude to my advisor, Zbigniew Fiedorowicz;
without his hands-on guidance this dissertation would not be possible.
Zig's extensive knowledge of the field and its literature, his incredible insight
into the problems, his extreme patience with me as I took the time to convince
myself of the validity of his suggestions, and his sacrifice of time and energy
proofing my work are qualities that I have come to admire greatly.
I would also like to thank Dan Burghelea, who sparked my interest in algebraic
topology in the summer of 2003, as he turned my interest in mathematical coding
towards an application involving cyclic homology. I would be remiss not to
acknowledge the support and contributions of many more
faculty of the Ohio State University: Roy Joshua, Thomas Kerler, Sergei Chmutov,
Ian Leary, Mike Davis, Atabey Kaygun,
and Markus Linckelmann (presently at the University of Aberdeen).
A very special thanks goes out to Birgit Richter (Universit\"at Hamburg) whose
work in the field has served as inspiration. Moreover, after she graciously took the
time to proof an early draft and suggest many improvements, she was (still!) willing
to write a letter of recommendation on my behalf.
I am also grateful to Sini\v{s}a Vre\'{c}ica (Belgrade University) and
Rade \v{Z}ivaljevi\'{c}
(Math. Inst. SANU, Belgrade), whose results about cycle-free chessboard
complexes contributed substantially to my research; Robert Lewis (Fordham
University), whose program \verb+Fermat+ and helpful suggestions aided in computations;
Fernando Muro (Universitat de Barcelona), for his answer to my question about
$\pi_2^s(B\Gamma)$
and pointing out the paper by Brown and Loday;
Rainer Vogt (Universit\"at Osnabr\"uck) for pointing out the papers by Kapranov
and Manin, as well as Dold's work on the universal coefficient theorem;
and to Peter May and others at The
University of Chicago, for helpful comments and suggestions.
In conclusion, I must acknowledge that it takes a village to raise a thesis, and I
am deeply indebted to all those who have lent a hand along the way.
\end{acknowl}
\begin{vita}
\begin{datelist}
\item[April 30, 1979]{Born - Bellaire, OH, USA}
\item[May 27, 2002]{B.A. Mathematics (Computer Science minor)}
\item[May 27, 2002]{B.Mus. Music Composition}
\item[June 2002 - May 2007] VIGRE Fellow, The Ohio State University
\item[June 2007 - August 2008] Graduate Teaching Assistant, OSU
\end{datelist}
\begin{publist}
\begin{itemize}
\item
S. Ault and Z. Fiedorowicz. \emph{Symmetric homology of algebras}. Preprint on
arXiv, arXiv:0708.1575v4 [math.AT] (11-5-07).
\item
R. Joshua and S. Ault. \emph{Extension of Stanley’s Algorithm for group imbeddings}.
Preprint. Available at: \url{http://www.math.ohio-state.edu/~ault/Stan9.pdf}
(5-25-08).
\end{itemize}
\end{publist}
\begin{fieldsstudy}
\majorfield{Mathematics}
\specialization{Algebraic Topology}
\begin{studieslist}
\studyitem{Symmetric Homology}{Dr. Zbigniew Fiedorowicz}
\studyitem{Computational Methods in Algebraic Geometry}{Dr. Roy Joshua}
\studyitem{Computational Methods in Cyclic Homology}{Dr. Dan Burghelea}
\end{studieslist}
\end{fieldsstudy}
\end{vita}
\tableofcontents
\listoffigures
\listoftables
\begin{tabular}{ll}
$\#S$ &= The number of elements in the set $S$.\\
$N\mathscr{C}$ &= The nerve of the category $\mathscr{C}$ (as a simplicial set).\\
$B\mathscr{C}$ &= The geometric realization of the category $\mathscr{C}$. (\textit{i.e.},
$|N\mathscr{C}|$.)\\
$E_*G$ &= The standard resolution of the group $G$ (as a simplicial set).\\
$EG$ &= Contractible space on which $G$ acts.\\
$\mathrm{Mor}\mathscr{C}$ &= The class of all morphisms of the category
$\mathscr{C}$.\\
$\mathrm{Mor}_{\mathscr{C}}(X, Y)$ &= The set of all morphisms in
$\mathscr{C}$ from $X$ to $Y$.\\
$\mathrm{Obj}\mathscr{C}$ &= The class of all objects of the category
$\mathscr{C}$.\\
$S_n$ &= Symmetric group on the letters $\{1, 2, \ldots, n\}$. \\
$\Sigma_n$ &= Symmetric group on the letters $\{0, 1, \ldots, n-1\}$.\\
$\Delta$ &= The simplicial category.\\
$\textbf{Sets}$ &= The category of sets and set maps.\\
$\textbf{SimpSets}$ &= The category of simplicial sets and simplicial set maps.\\
$\textbf{Mon}$ &= The category of monoids and monoid maps.\\
$k$-\textbf{Mod} &= The category of left $k$-modules, for a ring $k$.\\
$k$-\textbf{Alg} &= The category of $k$-algebras and algebra homomorphisms.\\
$k$-\textbf{SimpMod} &= The category of simplicial left $k$-modules and $k$-linear chain maps.\\
$k$-\textbf{Complexes} &= The category of complexes of $k$-modules and chain maps.\\
$\textbf{Cat}$ &= The category of small categories and functors.\\
$IS_\lambda$ &= The identity representation of a subgroup $S_\lambda$ of $S_n$.\\
$AS_\lambda$ &= The alternating (sign) representation of a subgroup
$S_\lambda$ of $S_n$.\\
$R \uparrow G$ &= Representation of $G$ induced from a representation of
a subgroup.\\
$A^{\otimes n}$ &= The $n$-fold tensor product of the algebra $A$ over its ground ring.\\
\end{tabular}
\mainmatter
\parindent=0pt
\chapter{PRELIMINARIES AND DEFINITIONS}\label{chap.pre_def}
\section{The Category $\Delta S$}\label{sec.deltas}
Denote by $[n]$ the ordered set $\{0, 1, \ldots, n\}$. The category $\Delta S$
has as objects the sets $[n]$ for $n \geq 0$; its morphisms are pairs $(\phi,
g)$, where $\phi : [n] \to [m]$ is a non-decreasing map of sets (\textit{i.e.}, a
morphism in $\Delta$), and $g \in \Sigma_{n+1}^{\mathrm{op}}$. The element $g$
represents
an automorphism of $[n]$, and as a set map, takes $i \in [n]$ to $g^{-1}(i)$.
Indeed, a morphism $(\phi, g): [n] \to [m]$ of $\Delta S$ may be represented as a
diagram:
\[
\begin{diagram}
\node{[n]}
\\
\node{[n]}
\arrow{n,l}{g}
\arrow{e,t}{\phi}
\node{[m]}
\end{diagram}
\]
Equivalently, a morphism in $\Delta S$ is a morphism in $\Delta$ together with
a total ordering of the domain $[n]$.
Composition of morphisms is achieved as in~\cite{FL}, namely, for $(\psi, h) : [p] \to [n]$
and $(\phi, g) : [n] \to [m]$,
\[
(\phi, g) \circ (\psi, h) := (\phi \cdot g^*(\psi), \psi^*(g) \cdot h),
\]
where $g^*(\psi) : [p] \to [n]$ is the morphism of $\Delta$ defined by sending the first
$\#\psi^{-1}(g(0))$ points of $[p]$ to $0$, the next $\#\psi^{-1}(g(1))$ points
to $1$, etc. Note, if $\#\psi^{-1}(g(i)) = 0$, then the point $i \in [n]$ is not
hit. $\psi^*(g) \in \Sigma_{p+1}^{\mathrm{op}}$ is determined as follows: For each
$i \in [n]$, $(g^*(\psi))^{-1}(g^{-1}(i))$ has the same number of elements as
$\psi^{-1}(i)$. If these sets are non-empty, take the order-preserving bijection
$(g^*(\psi))^{-1}(g^{-1}(i)) \to \psi^{-1}(i)$.
It is often helpful to represent morphisms of $\Delta S$ as diagrams of points
and lines, indicating images of set maps. Using these diagrams, it is easy to
see how $g^*(\psi)$ and $\psi^*(g)$ are related to $(\psi, g)$ (see
Figure~\ref{diag.morphism}).
\begin{figure}\label{diag.morphism}
\end{figure}
\begin{rmk}
Observe that the properties of $g^*(\phi)$ and $\phi^*(g)$ stated in
Prop.~1.6 of~\cite{FL} are
formally rather similar to the properties of exponents. Indeed, if we denote:
\[
g^\phi := \phi^*(g), \qquad\qquad \phi^g := g^*(\phi),
\]
then Prop. 1.6 becomes:
\begin{prop}\label{prop.comp}
For $g, h \in G_n$ and $\phi, \psi \in \mathrm{Mor}{\Delta}$,
\[
\begin{array}{ll}
(1.h)' \quad& g^{\phi \psi} = (g^\phi)^{\psi} \\
(1.v)' \quad& \phi^{gh} = (\phi^g)^h \\
(2.h)' \quad& (\phi\psi)^g = \phi^g \psi^{(g^\phi)} \\
(2.v)' \quad& (gh)^\phi = g^\phi h^{(\phi^g)} \\
(3.h)' \quad& g^{\mathrm{id_n}} = g, \qquad 1^\phi = 1 \\
(3.v)' \quad& \phi^1 = \phi, \qquad \mathrm{id_n}^g = \mathrm{id_n} \\
\end{array}
\]
\end{prop}
In what follows, the exponent notation may be used interchangeably with the standard
notation.
\end{rmk}
The above construction for $\Delta S$ shows that the family of groups $\{
\Sigma_{n+1}\}_{n \geq 0}$ forms a crossed simplicial group in the sense of Def. 1.1
of~\cite{FL}. The inclusion $\Delta \hookrightarrow \Delta S$ is given by
$\phi \mapsto (\phi, 1)$, where $1$ is the identity element of $\Sigma_{n+1}^
{\mathrm{op}}$, and $[n]$ is the domain of $\phi$. For each $n$, let $\tau_n$
be the $(n+1)$-cycle $(0, n, n-1, \ldots, 1) \in \Sigma_{n+1}^\mathrm{op}$. Thus,
the subgroup generated by $\tau_n$ is isomorphic to $\big(\mathbb{Z} / (n+1)\mathbb{Z}\big)^\mathrm{op}
= \mathbb{Z} / (n+1)\mathbb{Z}$. We may define the category $\Delta C$ as the subcategory
of $\Delta S$ consisting of all objects $[n]$ for $n \geq 0$, together with those
morphisms $(\phi, g)$ of $\Delta S$ for which $g = \tau_n^i$ for some $i$ (cf.~\cite{L}).
In this way, we get a natural chain of inclusions,
\[
\Delta \hookrightarrow \Delta C \hookrightarrow \Delta S
\]
An equivalent characterization of $\Delta S$ comes from Pirashvili (cf.~\cite{P}),
as the category $\mathcal{F}(\mathrm{as})$ of `non-commutative' sets. The objects are
sets $\underline{n} := \{1, 2, \ldots, n\}$ for $n \geq 0$. By convention,
$\underline{0}$ is the empty set. A morphism in
$\mathrm{Mor}_{\mathcal{F}(\mathrm{as})}
(\underline{n}, \underline{m})$ consists of a map (of sets) $f : \underline{n}
\to \underline{m}$ together with a total ordering, $\Pi_j$, on $f^{-1}(j)$
for all $j \in \underline{m}$. In such a case, denote by $\Pi$ the partial
order generated by all $\Pi_j$.
If $(f, \Pi) : \underline{n} \to \underline{m}$ and
$(g, \Psi) : \underline{m} \to \underline{p}$, their composition will be
$(gf, \Phi)$, where $\Phi_j$ is the total ordering on $(gf)^{-1}(j)$ (for all $j \in
\underline{p}$) induced by $\Pi$ and $\Psi$. Explicitly, for each pair $i_1, i_2 \in
(gf)^{-1}(j)$, we have
\begin{center}
$i_1 < i_2$ under $\Phi$ if and only if [$f(i_1) < f(i_2)$ under
$\Psi$] or [$f(i_1) = f(i_2)$ and $i_1 < i_2$ under $\Pi$].
\end{center}
For example, let $f : \underline{9} \to \underline{5}$ be given by:
\[
f \;:\; \left\{
\begin{array}{ll}
1, 5, 8 &\mapsto 1\\
2, 7 &\mapsto 2\\
3, 9 &\mapsto 3\\
4 &\mapsto 4\\
6 &\mapsto 5
\end{array}
\right.
\]
Let the preordering $\Pi$ on pre-image sets be defined by: $8 < 1 < 5$, $2 < 7$, and
$9 < 3$.
Let $g : \underline{5} \to \underline{3}$ be given by:
\[
g \;:\; \left\{
\begin{array}{ll}
1, 2 &\mapsto 2\\
3, 4 &\mapsto 1\\
5 &\mapsto 3
\end{array}
\right.
\]
Let the preordering $\Psi$ on pre-image sets be defined by: $1 < 2$ and
$3 < 4$.
Then, the composition $(g, \Psi)(f, \Pi) = (gf, \Phi)$ will consist of the map $gf$:
\[
gf \;:\; \left\{
\begin{array}{ll}
1,2,5,7,8 &\mapsto 2\\
3,4,9 &\mapsto 1\\
6 &\mapsto 3
\end{array}
\right.
\]
and the corresponding preordering $\Phi$, defined by: $9 < 3 < 4$ and $8 < 1 < 5 < 2 < 7$.
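As a quick check of the composition rule: $8 < 2$ under $\Phi$, since $f(8) = 1 < 2 = f(2)$ under $\Psi$; while $1 < 5$ under $\Phi$, since $f(1) = f(5) = 1$ and $1 < 5$ under $\Pi$.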
There is an obvious inclusion of categories,
$\Delta S \hookrightarrow \mathcal{F}(\mathrm{as})$, taking $[n]$ to $\underline{n+1}$,
but there is no object of $\Delta S$ that maps to $\underline{0}$. It will be
useful to define $\Delta S_+ \supset \Delta S$ which is isomorphic to
$\mathcal{F}(\mathrm{as})$:
\begin{definition}
$\Delta S_+$ is the category consisting of all objects and morphisms of
$\Delta S$, with the additional object $[-1]$, representing the empty set,
and a unique morphism $\iota_n : [-1] \to [n]$ for each $n \geq -1$.
\end{definition}
\begin{rmk}
Pirashvili's construction is a special case of a more general construction
due to May and Thomason \cite{MT}. This construction associates to any topological operad
$\{\mathcal{C}(n)\}_{n \geq 0}$ a topological category $\widehat{\mathcal{C}}$ together
with a functor $\widehat{\mathcal{C}} \to \mathcal{F}$, where $\mathcal{F}$ is the
category of finite sets, such that the inverse image of any function
$f: \underline{m} \to \underline{n}$ is the space
\[
\prod_{i=1}^n \mathcal{C}(\# f^{-1}(i) ).
\]
Composition in $\widehat{\mathcal{C}}$
is defined using the composition of the operad. May and Thomason refer to
$\widehat{\mathcal{C}}$
as the {\it category of operators} associated to $\mathcal{C}$. They were interested
in the case of an $E_\infty$ operad, but their construction evidently works for any
operad. The category of operators associated to the discrete $A_\infty$ operad
$\mathcal{A}ss$, which parametrizes monoid structures, is precisely Pirashvili's
construction of $\mathcal{F}(as)$, {\it i.e.} $\Delta S_+$. (See
Chapter~\ref{chap.prod} for more on operads.)
\end{rmk}
One very useful advantage in enlarging our category $\Delta S$ to $\Delta S_+$
is the added structure inherent in $\Delta S_+$.
\begin{prop}\label{prop.deltaSpermutative}
$\Delta S_+$ is a permutative category.
\end{prop}
\begin{proof}
Recall from Def.~4.1 of~\cite{M3} that a \textit{permutative category} is a
category $\mathscr{C}$ with the following additional structure:
\begin{itemize}
\item an associative bifunctor $\odot : \mathscr{C} \times \mathscr{C} \to \mathscr{C}$;
\item a \textit{unit} object $e \in \mathrm{Obj}\mathscr{C}$, which acts as a two-sided identity
for $\odot$;
\item a natural transformation \mbox{$\gamma : \odot \to \odot T$}, where
\mbox{$T : \mathscr{C} \times \mathscr{C} \to \mathscr{C}$} is the transposition functor
\mbox{$(A,B) \mapsto (B,A)$}, such that
$\gamma^2 = \mathrm{id}$,
$\gamma_{(A,e)} = \gamma_{(e, A)} = \mathrm{Id}_A$ for all objects $A$,
and the following diagram is commutative for all objects $A, B, C$:
\end{itemize}
\[
\begin{diagram}
\node{A \odot B \odot C}
\arrow[2]{e,t}{\gamma_{A\odot B, C}}
\arrow{se,l}{\mathrm{Id}\odot \gamma_{B,C}}
\node[2]{C \odot A \odot B}\\
\node[2]{A \odot C \odot B}
\arrow{ne,r}{\gamma_{A,C} \odot \mathrm{Id}}
\end{diagram}
\]
For $\Delta S_+$, let $\odot$ be the functor defined on objects by:
\[
[n] \odot [m] := [n+m+1], \qquad \textrm{(disjoint union of sets)},
\]
and for morphisms $(\phi, g) : [n] \to [n']$, $(\psi, h) :[m] \to [m']$,
\[
(\phi,g) \odot (\psi,h) = (\eta, k) : [n+m+1] \to [n'+m'+1],
\]
where
\[
\eta \;:\; i \mapsto
\left\{
\begin{array}{lll}
\phi(i), &\qquad& \textrm{for $i=0,\ldots, n$}\\
\psi(i-n-1) + (n'+1), &\qquad& \textrm{for $i=n+1, \ldots, n+m+1$}.
\end{array}
\right.
\]
and
\[
k \; : \; i \mapsto
\left\{
\begin{array}{lll}
g^{-1}(i), &\qquad& \textrm{for $i=0,\ldots, n$}\\
h^{-1}(i-n-1) + (n+1), &\qquad& \textrm{for $i=n+1, \ldots, n+m+1$}.
\end{array}
\right.
\]
In short, $(\phi,g) \odot (\psi, h)$ is just the morphism $(\phi, g)$ acting on
the first $n+1$ points of $[n+m+1]$, and $(\psi, h)$ acting on the remaining points.
The unit object will be $[-1] = \emptyset$. $\odot$ is clearly associative, and
$[-1]$ acts as two-sided identity.
Finally, define $\gamma_{n,m} : [n] \odot [m] \to [m] \odot [n]$ to be the identity on
objects, and on morphisms to be precomposition with the block transposition
$\beta_{n,m} : [n+m+1] \to [n+m+1]$. That is, $\beta(i) = i + m+1$, if $i \leq n$, and
$\beta(i) = i-n-1$, if $i > n$.
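For instance, $\beta_{1,2} : [4] \to [4]$ is the permutation $0 \mapsto 3$, $1 \mapsto 4$, $2 \mapsto 0$, $3 \mapsto 1$, $4 \mapsto 2$, which moves the block $\{0,1\}$ to the end, past the block $\{2,3,4\}$.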
$\gamma_{m,n} \gamma_{n,m} = \mathrm{id}$, which is true since $\beta_{m,n} \beta_{n,m} =
\mathrm{id}$, and $\gamma_{m, -1}$ is precomposition by $\beta_{m, -1}$, which is clearly
the identity (similarly for $\gamma_{-1, m}$).
For $[n], [m], [p]$, we have the following commutative diagram:
\[
\begin{diagram}
\node{ [n] \odot [m+p+1] }
\arrow{e,t}{=}
\arrow{s,r}{\mathrm{id}\odot \gamma}
\node{ [n+m+1] \odot [p] }
\arrow{e,t}{ \gamma }
\node{ [p] \odot [n+m+1] }
\arrow{s,l}{ = }
\\
\node{ [n] \odot [p+m+1] }
\arrow{e,t}{ = }
\node{ [n+p+1] \odot [m] }
\arrow{e,t}{ \gamma \odot \mathrm{id} }
\node{ [p+n+1] \odot [m] }
\end{diagram}
\]
This diagram commutes because the block transposition $\beta_{n+m+1,p}$ can be accomplished
by first transposing the blocks $\{n+1, \ldots, n+m+1\}$ and $\{n+m+2, \ldots, n+m+p+2\}$ while
keeping the block $\{0, \ldots, n\}$ fixed, then transposing the blocks $\{0, \ldots, n\}$
and $\{n+1, \ldots, n+p+1\}$ while keeping the block $\{n+p+2,\ldots,n+p+m+2\}$ fixed.
\end{proof}
For the purposes of computation, a morphism $\alpha : [n] \to [m]$ of $\Delta S$
may be conveniently represented as a tensor product of monomials in the variables $\{x_0,
x_1, \ldots, x_n\}$. Let $\alpha = (\phi, g)$, with $\phi \in \mathrm{Mor}_
\Delta([n],[m])$ and $g \in \Sigma_{n+1}^\mathrm{op}$. The tensor representation
of $\alpha$ will have $m + 1$ tensor factors, indexed by $0, 1, \ldots, m$. Each $x_i$ will occur exactly once,
in the order $x_{g(0)}, x_{g(1)}, \ldots, x_{g(n)}$. The $i^{th}$ tensor factor
consists of the product of $\#\phi^{-1}(i)$ variables, with the convention that
the empty product will be denoted $1$. Thus, the $i^{th}$ tensor factor
records the total ordering of $\phi^{-1}(i)$.
As an example, the tensor representation of
the morphism depicted in Fig.~\ref{diag.morphism-tensor} is
$x_1x_0 \otimes x_3x_4 \otimes 1 \otimes x_2$.
\begin{figure}\label{diag.morphism-tensor}
\end{figure}
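Concretely, this tensor encodes the morphism $[4] \to [3]$ with $\phi^{-1}(0)=\{0,1\}$ (ordered with $1$ before $0$), $\phi^{-1}(1)=\{3,4\}$ (ordered with $3$ before $4$), $\phi^{-1}(2)=\emptyset$, and $\phi^{-1}(3)=\{2\}$, so the underlying $\Delta$-morphism and the orderings of its fibres are completely determined by the tensor.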
With this notation, the
composition of two morphisms $\alpha = X_0 \otimes X_1 \otimes \ldots \otimes X_m
: [n] \to [m]$ and $\beta = Y_0 \otimes Y_1 \otimes \ldots \otimes Y_n : [p] \to [n]$
is given by:
\[
\alpha \cdot \beta = Z_0 \otimes Z_1 \otimes \ldots \otimes Z_m,
\]
where $Z_i$ is determined by replacing each variable in the monomial $X_i =
x_{j_1} \ldots x_{j_s}$ in $\alpha$ by the corresponding monomials $Y_{j_k}$ in
$\beta$. So, $Z_i = Y_{j_1} \ldots Y_{j_s}$. Thus, for example,
\[
x_4 x_0 \otimes 1 \otimes x_2 x_3 \otimes x_1 \;\cdot\;
x_1 x_6 x_0 \otimes x_7 x_4 \otimes 1 \otimes x_3 \otimes x_2 x_5 \;=\;
x_2 x_5 x_1 x_6 x_0 \otimes 1 \otimes x_3 \otimes x_7 x_4.
\]
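In this example the first factor is a morphism $[4] \to [3]$, the second a morphism $[7] \to [4]$, and their composite a morphism $[7] \to [3]$.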
\section{The Symmetric Bar Construction}\label{sec.symbar}
\begin{definition}\label{def.symbar}
Let $A$ be an associative, unital algebra over a commutative ground ring $k$.
Following~\cite{F}, define a (covariant) functor $B_*^{sym}A : \Delta S \to
k$-\textbf{Mod} by:
\[
B_n^{sym}A := B_*^{sym}A[n] := A^{\otimes n+1}
\]
\[
B_*^{sym}A(\alpha) : (a_0 \otimes a_1 \otimes \ldots \otimes a_n) \mapsto
\alpha(a_0, \ldots, a_n),
\]
where $\alpha : [n] \to [m]$ is represented in tensor notation, and evaluation
at $(a_0, \ldots, a_n)$ simply amounts to substituting each $a_i$ for $x_i$
and multiplying the resulting monomials in $A$. If the pre-image
$\alpha^{-1}(i)$ is empty, then the unit of $A$ is inserted.
\end{definition}
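For example, if $\alpha = x_1x_0 \otimes 1 \otimes x_2 : [2] \to [2]$, then
\[
B_*^{sym}A(\alpha)(a_0 \otimes a_1 \otimes a_2) = a_1a_0 \otimes 1_A \otimes a_2 .
\]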
\begin{rmk}
Fiedorowicz~\cite{F} defines the symmetric bar construction functor for morphisms
$\alpha = (\phi, g)$, where $\phi \in \mathrm{Mor}\Delta([n],[m])$ and $g \in
\Sigma_{n+1}^{\mathrm{op}}$, via
\begin{eqnarray*}
B_*^{sym}A(\phi)(a_0 \otimes a_1 \otimes \ldots \otimes a_n) &=&
\left( \prod_{a_i \in \phi^{-1}(0)} a_i \right)\otimes \ldots
\otimes \left( \prod_{a_i \in \phi^{-1}(n)} a_i \right)\\
B_*^{sym}A(g)(a_0 \otimes a_1 \otimes \ldots \otimes a_n) &=&
a_{g^{-1}(0)} \otimes a_{g^{-1}(1)} \otimes \ldots \otimes a_{g^{-1}(n)}
\end{eqnarray*}
However, in order for this to be consistent with the earlier notation, we should
require $B_*^{sym}A(g)$ to permute the tensor factors in the inverse sense:
\[
B_*^{sym}A(g)(a_0 \otimes a_1 \otimes \ldots \otimes a_n) =
a_{g(0)} \otimes a_{g(1)} \otimes \ldots \otimes a_{g(n)}.
\]
\end{rmk}
\begin{prop}\label{prop.symbar-natural}
The symmetric bar construction $B_*^{sym}A$ is natural in $A$.
\end{prop}
\begin{proof}
If $f : A \to B$ is a morphism of
$k$-algebras (sending $1_A \mapsto 1_{B}$), then there is a family of induced $k$-linear maps
$B_n^{sym}f : B_n^{sym}A \to B_n^{sym}B$ defined by
\[
(B_n^{sym}f)( a_0 \otimes a_1 \otimes \ldots \otimes a_n )
= f(a_0) \otimes f(a_1) \otimes \ldots \otimes f(a_n)
\]
It is easily verified that the square below commutes for each $\Delta S$
morphism $\phi : [n] \to [m]$.
\[
\begin{diagram}
\node{[n]}
\arrow{s,r}{\phi}
\node{A^{\otimes(n+1)}}
\arrow{s,l}{B_*^{sym}A(\phi)}
\arrow{e,t}{f^{\otimes(n+1)}}
\node{B^{\otimes(n+1)}}
\arrow{s,r}{B_*^{sym}B(\phi)}
\\
\node{[m]}
\node{A^{\otimes(m+1)}}
\arrow{e,t}{f^{\otimes(m+1)}}
\node{B^{\otimes(m+1)}}
\end{diagram}
\]
\end{proof}
Note that $B_*^{sym}A$ can be regarded as a simplicial \mbox{$k$-module} (\textit{i.e.},
a functor $\Delta^{\mathrm{op}} \to k$-\textbf{Mod}) via the chain of
functors:
\begin{equation}\label{eq.delta-C-S-chain}
\Delta^{\mathrm{op}} \hookrightarrow \Delta C^{\mathrm{op}}
\stackrel{\cong}{\to} \Delta C \hookrightarrow \Delta S.
\end{equation}
Here, the isomorphism $D : \Delta C^{\mathrm{op}} \to \Delta C$ is the standard
duality (see~\cite{L}), which is defined on generators by:
\[
\left\{
\begin{array}{llll}
D(d_i) &=& \sigma_i, &(0 \leq i \leq n-1)\\
D(d_n) &=& \sigma_0 \cdot \tau^{-1} & \\
D(s_i) &=& \delta_{i+1}, &(0 \leq i \leq n-1)\\
D(t) &=& \tau^{-1} &
\end{array}
\right.
\]
\begin{rmk}\label{rmk.cyclic}
Fiedorowicz showed in~\cite{F} that $B_*^{sym}A \cdot D = B_*^{cyc}A$,
the \textit{cyclic bar construction} (cf.~\cite{L}). By duality of $\Delta C$,
it is equivalent to use the functor $B_*^{sym}A$, restricted to morphisms of
$\Delta C$ in order to do computations of $HC_*(A)$.
\end{rmk}
\section{Definition of Symmetric Homology}\label{sec.symhom}
\begin{definition}\label{def.C-modules}
For a category $\mathscr{C}$, a covariant functor $F : \mathscr{C} \to
\mbox{\textrm{$k$-\textbf{Mod}}}$ will be called a \mbox{$\mathscr{C}$-module}. Similarly,
a contravariant functor $G : \mathscr{C} \to \mbox{\textrm{$k$-\textbf{Mod}}}$ will be
called a $\mathscr{C}^\mathrm{op}$-module (since $G^\mathrm{op} : \mathscr{C}^\mathrm{op}
\to \mbox{\textrm{$k$-\textbf{Mod}}}$ is covariant).
\end{definition}
\begin{definition}\label{def.categorical_tensor}
For a category $\mathscr{C}$, if $N$ is a \mbox{$\mathscr{C}$-module} and
$M$ is a \mbox{$\mathscr{C}^{\mathrm{op}}$-module}, define the tensor product
(over $\mathscr{C}$) thus:
\[
M \otimes_{\mathscr{C}} N := \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}
M(X) \otimes_k N(X) / \approx,
\]
where the equivalence $\approx$ is generated by the following: For every morphism
$f \in \mathrm{Mor}_{\mathscr{C}}(X, Y)$, and every $x \in N(X)$ and $y \in M(Y)$,
we have $ y \otimes f_*(x) \approx f^*(y) \otimes x$.
\end{definition}
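For instance, if $\mathscr{C}$ has a single object $\ast$ with $\mathrm{Mor}_{\mathscr{C}}(\ast, \ast) = G$ a group, then a $\mathscr{C}$-module is a left $k[G]$-module, a $\mathscr{C}^{\mathrm{op}}$-module is a right $k[G]$-module, and $M \otimes_{\mathscr{C}} N$ is the usual tensor product $M \otimes_{k[G]} N$.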
Note, MacLane defines the tensor product as a \textit{coend}:
\[
M \otimes_\mathscr{C} N := \int^X (MX) \otimes (NX),
\]
where we consider
$(M-)\otimes(N-)$ as a bifunctor $\mathscr{C}^\mathrm{op} \times \mathscr{C}
\to \mathscr{L}$, for a given cocomplete category $\mathscr{L}$ equipped with
a functor $\otimes : \mathscr{L} \times \mathscr{L} \to \mathscr{L}$ (see~\cite{ML}).
When $\mathscr{L} = $ \mbox{$k$-\textbf{Mod}}, this construction yields the same as that above.
Alternatively, consider $k[\mathrm{Mor}\mathscr{C}]$, the free \mbox{$k$-module} with
basis $\mathrm{Mor}\mathscr{C}$. We may define a ring structure on
$k[\mathrm{Mor}\mathscr{C}]$ by defining products of basis elements thus:
\[
f \cdot g :=
\left\{
\begin{array}{ll}
f \circ g, &\textrm{if $f$ is composable with $g$} \\
0, &\textrm{otherwise}
\end{array}
\right.
\]
Note, $k[\mathrm{Mor}\mathscr{C}]$ will in general not have a unit, but only
\textit{local units}.
Indeed, for any finitely generated $k$-submodule $M \subseteq k[\mathrm{Mor}\mathscr{C}]$ with generating set $\{v_1, \ldots,
v_t\}$, each $v_i$ is the sum of only finitely many terms $c_{ij} f_j$, with
$f_j \in \mathrm{Mor}\mathscr{C}$. Let $\{f_1, f_2, \ldots, f_n\}$ be the set
of those morphisms that occur in any of the $v_i$'s. Then there is an element
in $k[\mathrm{Mor}\mathscr{C}]$ that acts as a two-sided unit
for any element of $M$, namely: $\sum \mathrm{id}_{X}$, where the sum extends
over all
those $X \in \mathrm{Obj}\mathscr{C}$ that appear as domains or codomains of the
$f_i$'s.
Now, the category of \mbox{$\mathscr{C}$-modules} (with natural transformations as
morphisms) is equivalent to the category of left \mbox{$k[\mathrm{Mor}\mathscr{C}]$-modules}.
The correspondence is as follows: For a \mbox{$\mathscr{C}$-module} $M$, let $\overline{M}$
be the \mbox{$k$-module} $\bigoplus_{X \in \mathrm{Obj}\mathscr{C}} MX$. For a morphism
$f : X \to Y$ in $\mathrm{Mor}\mathscr{C}$ and a homogeneous $x \in \overline{M}$
(\text{i.e.}, $x \in MW$ for some $W \in \mathrm{Obj}\mathscr{C}$), put:
\[
f . x :=
\left\{
\begin{array}{ll}
f_*(x), &\textrm{if $x \in MX$} \\
0, &\textrm{otherwise}
\end{array}
\right.
\]
Extend the action of $f$ to arbitrary $v \in \overline{M}$ by linearity. This formula
provides a module structure, since if $x \in MX$ and $X \stackrel{g}{\to} Y
\stackrel{f}{\to} Z$ is a chain of morphisms in $\mathscr{C}$, we have
\[
(f \cdot g). x = (fg).x= (fg)_*(x) = f_*\big(g_*(x)\big) = f.\big(g_*(x)\big)
= f.(g.x).
\]
Similarly, the category of \mbox{$\mathscr{C}^\mathrm{op}$-modules} is equivalent to
the category of right $k[\mathrm{Mor}\mathscr{C}]$-modules, with action
$y \cdot f = f^*(y)$ for any $y \in Y$ and $f \in
\mathrm{Mor}_\mathscr{C}(X, Y)$.
Under this equivalence, the tensor product $M \otimes_{\mathscr{C}} N$ construction
is simply the standard tensor product $\overline{M}
\otimes_{k[\mathrm{Mor}\mathscr{C}]} \overline{N}$, and thus we can define the modules
\[
\mathrm{Tor}_n^\mathscr{C}(M, N) :=
\mathrm{Tor}_n^{k[\mathrm{Mor}\mathscr{C}]}\left(\overline{M}, \overline{N}\right).
\]
Note, it is also possible to define $\mathrm{Tor}_n^\mathscr{C}(M, N)$ directly as
the derived functors of the categorical tensor product construction
(see~\cite{FL}).
The trivial \mbox{$\mathscr{C}$-module}, denoted
by $\underline{k}$, is the functor $\mathscr{C} \to k$-\textbf{Mod} which takes
every object
to $k$ and every morphism to $\mathrm{id}_k$. Under the above equivalence,
this becomes the trivial left \mbox{$k[\mathrm{Mor}\mathscr{C}]$-module}
$\overline{k} = \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}k$.
We also denote by $\underline{k}$ the trivial \mbox{$\mathscr{C}^\mathrm{op}$-module}.
\begin{definition}
The \textbf{symmetric homology} of an associative, unital $k$-algebra, $A$,
is denoted $HS_*(A)$, and is defined as:
\[
HS_*(A) := \mathrm{Tor}_*^{\Delta S}\left(\underline{k},B_*^{sym}A\right)
\]
\end{definition}
\begin{rmk}
Note, the existing literature based on the work of Connes, Loday and Quillen
consistently defines the categorical tensor product in the reverse sense:
$N \otimes_{\mathscr{C}} M$ is the direct sum of copies of $NX \otimes_k MX$
modded out by the equivalence $x \otimes f^*(y) \approx f_*(x) \otimes y$ for
all $\mathscr{C}$-morphisms $f : X \to Y$. In this context, $N$ is covariant,
while $M$ is contravariant. I chose to follow the convention of Pirashvili
and Richter \cite{PR} in writing tensor products as $M \otimes_{\mathscr{C}} N$ so
that the equivalence $\xi : $\mbox{$\mathscr{C}$-\textbf{Mod}} $\to$
\mbox{$k[\mathrm{Mor}\mathscr{C}]$-\textbf{Mod}} passes to tensor products in a
straightforward way: $\xi( M \otimes_{\mathscr{C}} N ) =
\xi(M) \otimes_{k[\mathrm{Mor}\mathscr{C}]} \xi(N)$.
\end{rmk}
\begin{rmk}
Since
\[
\underline{k}\otimes_{\Delta S} M \cong \colim_{\Delta S}M,
\]
for any $\Delta S$-module $M$,
we can alternatively describe symmetric homology as derived functors
of $\colim$:
\[
HS_i(A) = {\colim_{\Delta S}}^{(i)}B_*^{sym}A.
\]
(To see the relation with higher colimits, we need to tensor a projective
resolution of $B_*^{sym}A$ with $\underline{k}$).
\end{rmk}
\section{The Standard Resolution}\label{sec.stdres}
Let $\mathscr{C}$ be a category. Henceforth, we shall use interchangeably the
notion of \mbox{$\mathscr{C}$-module} and
\mbox{$k[\mathrm{Mor}\mathscr{C}]$-module}, under
the equivalence mentioned in section~\ref{sec.symhom}.
The rank $1$ free \mbox{$\mathscr{C}$-module} is
$k[\mathrm{Mor}\mathscr{C}]$, with the left action of composition of morphisms.
Now as $k$-module, $k[\mathrm{Mor}\mathscr{C}]$ decomposes into the direct sum,
\[
k[\mathrm{Mor}\mathscr{C}] = \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}
\left( \bigoplus_{Y \in \mathrm{Obj}\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(X, Y)\right] \right).
\]
By abuse of notation, denote $\displaystyle{\bigoplus_{Y \in \mathrm{Obj}\mathscr{C}}}
k\left[\mathrm{Mor}_\mathscr{C}(X, Y)\right]$ by
$k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right]$.
So there is a direct sum decomposition of $k[\mathrm{Mor}\mathscr{C}]$ as a left \mbox{$\mathscr{C}$-module},
\[
k[\mathrm{Mor}\mathscr{C}] = \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right].
\]
Thus, the submodules $k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right]$ are projective left
\mbox{$\mathscr{C}$-modules}.
Similarly, $k[\mathrm{Mor}\mathscr{C}]$ is the rank $1$ free right
\mbox{$\mathscr{C}$-module}, with right action of pre-composition of morphisms, and as such,
decomposes as:
\[
k[\mathrm{Mor}\mathscr{C}] = \bigoplus_{Y \in \mathrm{Obj}\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right]
\]
Again, the notation $k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right]$ is shorthand
for $\displaystyle{\bigoplus_{X \in \mathrm{Obj}\mathscr{C}}}
k\left[\mathrm{Mor}_\mathscr{C}(X, Y)\right]$.
The submodules $k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right]$ are projective as right
\mbox{$\mathscr{C}$-modules}.
It will also be important to note that
$k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right] \otimes_{\mathscr{C}} N
\cong N(Y)$ as $k$-module via the evaluation map $f \otimes y \mapsto f_*(y)$.
Similarly, $M \otimes_{\mathscr{C}} k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right]
\cong M(X)$.
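To spell out the first of these identifications: by the defining relation of $\otimes_{\mathscr{C}}$, every generator $f \otimes y$ with $f \in \mathrm{Mor}_{\mathscr{C}}(X, Y)$ and $y \in N(X)$ satisfies
\[
f \otimes y = f^*(\mathrm{id}_Y) \otimes y \approx \mathrm{id}_Y \otimes f_*(y),
\]
so every element is equivalent to one of the form $\mathrm{id}_Y \otimes z$ with $z \in N(Y)$, and the evaluation map identifies it with $z$.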
Following Quillen (\cite{Q}, Section~1), we make the following definition:
\begin{definition}
Given a functor $F : \mathscr{B} \to \mathscr{C}$ and a fixed object $Y$ in $\mathscr{C}$,
let $F / Y$ denote the category whose objects are pairs $(X, \phi)$ where $X$ is an
object of $\mathscr{B}$ and $\phi : FX \to Y$ is a morphism in $\mathscr{C}$. A morphism
from $(X, \phi)$ to $(X', \phi')$ is a morphism $\psi : X \to X'$ such that
$\phi' \circ F\psi= \phi$. When $F$ is the identity functor on $\mathscr{C}$, this
construction is called the over-category (objects over $Y$), and is denoted by
$\mathscr{C} / Y$.
Dually, let $Y \setminus F$ denote the category whose objects are pairs $(X, \phi)$ for
$X$ in $\mathscr{B}$ and $\phi : Y \to FX$ in $\mathscr{C}$. Here, a morphism
from $(X, \phi)$ to $(X', \phi')$ is a morphism $\psi : X \to X'$ such that
$F\psi \circ \phi = \phi'$.
When $F$ is the identity functor,
this is called the under-category (objects under $Y$), and is denoted by $Y \setminus
\mathscr{C}$.
\end{definition}
Given a functor $F : \mathscr{B} \to \mathscr{C}$ of small categories
define a functor $(F/-) : \mathscr{C} \to \textbf{Cat}$ as follows:
The object $Y$ is sent to the category $F / Y$. If $\nu : Y \to Y'$ is a morphism
in $\mathscr{C}$, the functor $(F / \nu) : F / Y \to F / Y'$ is defined on objects by
$(X, \phi) \mapsto (X, \nu \phi)$. For a morphism $\psi : (X, \phi) \to (X', \phi')$
in $F / Y$, $\psi$ may also represent a morphism in $F / Y'$,
since $\phi' F\psi =\phi \implies \nu\phi' F\psi = \nu\phi$ (as morphisms in $\mathscr{C}$).
Again, we may dualize to obtain a \textit{contravariant} functor $(-\setminus F) :
\mathscr{C} \to \textbf{Cat}$, defined on objects by $Y \mapsto Y\setminus F$, and if
$\nu : Y \to Y'$, then $(\nu \setminus F)$ is a functor $(Y' \setminus F) \to
(Y \setminus F)$ which takes $(X, \phi)$ to $(X, \phi\nu)$.
Thus, $(F / -)$ is a $\mathscr{C}$-category, and $(- \setminus F)$ is a $\mathscr{C}^
\mathrm{op}$-category. In what follows, we shall assume $F$ is the identity functor
of $\mathscr{C}$.
As noted in~\cite{FL}, the nerve of $(\mathscr{C} / -)$
is a simplicial $\mathscr{C}$-set, and the complex $L_*$, given by:
\[
L_n := k\left[ N(\mathscr{C}/-)_n \right]
\]
is a resolution by projective \mbox{$\mathscr{C}$-modules} of the trivial
\mbox{$\mathscr{C}$-
module}, $k$. Here, the boundary map is $\partial := \sum_i (-1)^i d_i$, where the
$d_i$'s come from the simplicial structure of the nerve of
$(\mathscr{C}/-)$.
For the definition of $HS_*(A)$, we
shall be more interested in the dual construction, which yields a resolution by projective
\mbox{$\mathscr{C}^\mathrm{op}$-modules} of the trivial
\mbox{$\mathscr{C}^\mathrm{op}$-module},
$k$. Explicitly, define the complex $\overline{L}_*$ by:
\[
\overline{L}_n := k\left[ N(- \setminus\mathscr{C})_n \right]
\]
\[
\overline{L}_n(C) := k\left[ \{C \stackrel{g}{\to} A_0 \stackrel{f_1}{\to} A_1
\stackrel{f_2}{\to} \ldots \stackrel{f_n}{\to} A_n \} \right]
\]
For completeness, we shall provide a proof of:
\begin{prop}\label{prop.contractibility_under-category}
$\overline{L}_*$ is a resolution of $k$ by projective
\mbox{$\mathscr{C}^\mathrm{op}$-modules}.
\end{prop}
\begin{proof}
Fix $C \in \mathrm{Obj}\mathscr{C}$. Let $\epsilon : \overline{L}_0(C) \to k$ be
the map defined on generators by
\[
\epsilon( C \to A_0 ) := 1_k.
\]
We shall show the complex
\[
k \stackrel{\epsilon}{\leftarrow} \overline{L}_0(C)
\stackrel{\partial}{\leftarrow} \overline{L}_1(C) \stackrel{\partial}{\leftarrow}
\ldots
\]
is chain homotopic to the $0$ complex. Explicitly, define
\[
h_{-1} : 1 \mapsto (C \stackrel{id}{\to} C)
\]
\[
h_n : (C \to A_0 \to \ldots \to A_n) \mapsto (C \stackrel{id}{\to} C \to A_0 \to
\ldots \to A_n), \qquad \textrm{for $n \geq 0$}
\]
We have $\epsilon h_{-1} (1) = \epsilon(C \to C) = 1$, so $\epsilon h_{-1} = \mathrm{id}$.
Next, in degree $0$,
\[
(\partial h_0 + h_{-1} \epsilon)( C \to A_0 ) = \partial ( C \to C \to A_0) +
h_{-1} ( 1 )
\]
\[
= (C \to A_0) - (C \to C) + (C \to C)
\]
\[
= (C \to A_0).
\]
Finally, let $n \geq 1$.
\[
(\partial h_n + h_{n-1} \partial)(C \to A_0 \to \ldots \to A_n)
\]
\[
= \partial(C \to C \to A_0 \to \ldots \to A_n) +
h_{n-1} \sum_{i = 0}^{n} (-1)^i(C \to A_0 \to \ldots \widehat{A_i} \ldots \to A_n),
\]
where $\widehat{A_i}$ means to omit the object $A_i$ by composing the map with target $A_i$
with the map with source $A_i$.
\[
= (C \to A_0 \to \ldots \to A_n) + \sum_{i=0}^n (-1)^{i+1} (C \to C \to A_0 \to \ldots
\widehat{A_i} \ldots \to A_n) \,+
\]
\[
\qquad\sum_{i = 0}^{n} (-1)^i(C \to C \to A_0 \to \ldots \widehat{A_i} \ldots \to A_n)
\]
\[
= (C \to A_0 \to \ldots \to A_n).
\]
Hence, $\partial h_n + h_{n-1} \partial = \mathrm{id}$, and so $h$ determines a
chain homotopy $0 \simeq \mathrm{id}$, proving that the complex is contractible.
Next, we show that the \mbox{$\mathscr{C}^\mathrm{op}$-module} $\overline{L}_n$ is projective.
Indeed,
\[
\overline{L}_n = \bigoplus_{C_n} k\left[\mathrm{Mor}_\mathscr{C}( -, A_0)\right],
\]
where the direct sum is indexed over the set $C_n$ of all chains
\mbox{$A_0 \to A_1 \to \ldots \to A_n$}. As
we have seen above, $k\left[\mathrm{Mor}_\mathscr{C}(-,A_0)\right]$ is projective as
\mbox{$\mathscr{C}^\mathrm{op}$-module}, therefore $\overline{L}_n$ is projective.
\end{proof}
Thus, we may compute $HS_*(A)$ as the homology groups of the following complex:
\begin{equation}\label{symhomcomplex}
0 \longleftarrow
\overline{L}_0 \otimes_{\Delta S} B_*^{sym}A \longleftarrow
\overline{L}_1 \otimes_{\Delta S} B_*^{sym}A \longleftarrow
\overline{L}_2 \otimes_{\Delta S} B_*^{sym}A \longleftarrow
\ldots
\end{equation}
\begin{cor}\label{cor.SymHomComplex}
For an associative, unital $k$-algebra $A$,
\[
HS_*(A) = H_*\left( k[N(- \setminus \Delta S)] \otimes_{\Delta S} B_*^{sym}A;\,k \right)
\]
\end{cor}
\begin{rmk}\label{rmk.HC}
By remark~\ref{rmk.cyclic}, it is clear that the related complex
$k[N(- \setminus \Delta C)] \otimes_{\Delta C} B_*^{sym}A$ computes
$HC_*(A)$.
\end{rmk}
\begin{rmk}\label{rmk.uniqueChain}
Observe that every element of $\overline{L}_n \otimes_{\Delta S} B^{sym}_*A$
is equivalent to one in which the first morphism of the $\overline{L}_n$
factor is an identity:
\[
[p]\stackrel{\alpha}{\to}
[q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\, (y_0 \otimes \ldots \otimes y_p)
\]
\[
= \alpha^*\big(
[q_0]\stackrel{\mathrm{id}_{[q_0]}}{\longrightarrow}[q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]\big)
\,\otimes\, (y_0 \otimes \ldots \otimes y_p)
\]
\[
\approx
[q_0]\stackrel{\mathrm{id}_{[q_0]}}{\longrightarrow}[q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,\alpha_*(y_0, \ldots, y_p)
\]
Thus, we may consider $\overline{L}_n \otimes_{\Delta S} B_*^{sym}A$ to be the
\mbox{$k$-module} generated by the elements
\begin{equation}\label{chain_elem}
\left\{
[q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,
(y_0 \otimes \ldots \otimes y_{q_0})
\right\},
\end{equation}
where the tensor product is now over $k$.
The face maps
$d_0, d_1, \ldots, d_n$ are defined on generators by:
\[
d_0\left( [q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,(y_0 \otimes \ldots \otimes y_{q_0})\right)
\,=\,
[q_1]\stackrel{\beta_2}{\to}
\ldots\stackrel{\beta_n}{\to}[q_n]\,\otimes\,
\beta_1(y_0, \ldots, y_{q_0}),
\]
\[
d_j\left([q_0]\stackrel{\beta_1}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,(y_0 \otimes \ldots \otimes y_{q_0})
\right) \,=
\]
\[
[q_0]\stackrel{\beta_1}{\to}
\ldots \to [q_{j-1}] \stackrel{\beta_{j+1}\beta_{j}}{\longrightarrow} [q_{j+1}] \to \ldots
\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,
(y_0 \otimes \ldots \otimes y_{q_0}) , \;\; (0 < j < n),
\]
\[
d_n\left( [q_0]\stackrel{\beta_1}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,(y_0 \otimes \ldots \otimes y_{q_0})\right)
\,=\,
[q_0]\stackrel{\beta_1}{\to}
\ldots\stackrel{\beta_{n-1}}{\to}[q_{n-1}]\,\otimes\,
(y_0 \otimes \ldots \otimes y_{q_0}).
\]
\end{rmk}
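For instance, applying these formulas to the $1$-chain $[1]\stackrel{\beta_1}{\to}[0] \,\otimes\, (y_0 \otimes y_1)$ with $\beta_1 = x_0x_1$, the face map $d_0$ yields $[0]\,\otimes\, y_0y_1$, while $d_1$ yields $[1]\,\otimes\,(y_0 \otimes y_1)$.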
We now have enough tools to compute $HS_*(k)$. First, we need to show:
\begin{lemma}\label{lem.DeltaScontractible}
$N(\Delta S)$ is a contractible complex.
\end{lemma}
\begin{proof}
Define a functor $\mathscr{F} : \Delta S \to \Delta S$ by
\[
\mathscr{F} : [n] \mapsto [0] \odot [n],
\]
\[
\mathscr{F} : f \mapsto \mathrm{id}_{[0]} \odot f,
\]
using the multiplication $\odot$ defined in section~\ref{sec.deltas}.
There is a natural transformation $\mathrm{id}_{\Delta S} \to \mathscr{F}$
given by the following commutative diagram for each $f : [m] \to [n]$:
\[
\begin{diagram}
\node{ [m] }
\arrow{e,t}{f}
\arrow{s,l}{\delta_0}
\node{ [n] }
\arrow{s,r}{\delta_0}\\
\node{ [m+1] }
\arrow{e,t}{\mathrm{id} \odot f}
\node{ [n+1] }
\end{diagram}
\]
Here, $\delta^{(k)}_j : [k-1] \to [k]$ is the $\Delta$ morphism that misses the point
$j \in [k]$.
Consider the constant functor $\Delta S \stackrel{[0]}{\to} \Delta S$ that sends
all objects to
$[0]$ and all morphisms to $\mathrm{id}_{[0]}$. There is a natural transformation
$[0] \to \mathscr{F}$ given by the following commutative diagram for each
$f : [m] \to [n]$.
\[
\begin{diagram}
\node{ [0] }
\arrow{e,t}{\mathrm{id}}
\arrow{s,l}{0_0}
\node{ [0] }
\arrow{s,r}{0_0}\\
\node{ [m+1] }
\arrow{e,t}{\mathrm{id} \odot f}
\node{ [n+1] }
\end{diagram}
\]
Here, $0^{(k)}_j : [0] \to [k]$ is the morphism that sends the point $0$ to
$j \in [k]$.
Natural transformations induce homotopy equivalences (see~\cite{Se} or
Prop.~1.2 of~\cite{Q}), so in
particular, the identity map on $N(\Delta S)$ is homotopic to the map that
sends $N(\Delta S)$ to the nerve of a trivial category. Thus, $N(\Delta S)$
is contractible.
\end{proof}
\begin{cor}\label{cor.HS_of_k}
The symmetric homology of the ground ring $k$ is isomorphic to $k$, concentrated
in degree $0$.
\end{cor}
\begin{proof}
By Cor.~\ref{cor.SymHomComplex} and Remark~\ref{rmk.uniqueChain}, $HS_*(k)$ is
the homology of the chain complex generated (freely) over $k$ by the chains
\[
\left\{
[q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\,
(1 \otimes \ldots \otimes 1)
\right\},
\]
where $\beta_i \in \mathrm{Mor}_{\Delta S}\left( [q_{i-1}], [q_i] \right)$. Each
chain $[q_0] \to [q_1] \to \ldots \to [q_n]\,\otimes\,
(1 \otimes \ldots \otimes 1)$ may be identified with the chain
$[q_0] \to [q_1] \to \ldots \to [q_n]$ of $N(\Delta S)$, and clearly this
defines a chain isomorphism with the complex $k[N(\Delta S)]$. The result now
follows from Lemma~\ref{lem.DeltaScontractible}.
\end{proof}
\section{Tensor Algebras}\label{sec.tensoralg}
For a general $k$-algebra $A$, the standard resolution is often too difficult
to work with. In the following chapters, we shall see some methods of reducing
the size of the standard resolution.
In order to prove the results of chapter~\ref{chap.alt}, it is necessary
to prove these results first for the special case of tensor algebras. Indeed,
tensor algebra arguments are also key in the proof of Fiedorowicz's Theorem
(Thm.~1({\it i}) of~\cite{F}) about the symmetric homology of group algebras.
Let $T : k$-\textbf{Alg} $\to k$-\textbf{Alg} be the functor
sending an algebra $A$ to the tensor algebra generated by $A$.
\[
TA := k \oplus A \oplus A^{\otimes 2}
\oplus A^{\otimes 3} \oplus \ldots
\]
The functor $T$ takes an algebra homomorphism
$A \stackrel{f}{\to} B$ to the induced homomorphism $Tf$ defined on generators by:
\[
Tf( a_0 \otimes a_1 \otimes \ldots \otimes a_k ) = f(a_0) \otimes f(a_1) \otimes
\ldots \otimes f(a_k).
\]
There is an algebra homomorphism $\theta : TA \to A$, defined by multiplying tensor factors:
\[
\theta( a_0 \otimes a_1 \otimes \ldots \otimes a_k ) := a_0a_1 \cdots a_k.
\]
In fact, $\theta$ defines a natural transformation $T \to \mathrm{id}$, as can
be verified by the following commutative diagram (valid for all $A \stackrel{f}{\to}
B$ in $k$-\textbf{Alg}).
\[
\begin{diagram}
\node{TA}
\arrow{e,t}{\theta_A}
\arrow{s,l}{Tf}
\node{A}
\arrow{s,r}{f}\\
\node{TB}
\arrow{e,t}{\theta_B}
\node{B}
\end{diagram}
\]
\[
\begin{diagram}
\node{a_0 \otimes \ldots \otimes a_k}
\arrow[2]{e,t,T}{\theta_A}
\arrow{s,l,T}{Tf}
\node[2]{a_0\cdots a_k}
\arrow{s,r,T}{f}\\
\node{f(a_0) \otimes \ldots \otimes f(a_k)}
\arrow{e,t,T}{\theta_B}
\node{f(a_0)\cdots f(a_k)}
\arrow{e,t,=}{}
\node{f(a_0\cdots a_k)}
\end{diagram}
\]
We shall also make use of a $k$-module homomorphism $h$ sending the
algebra $A$ identically onto the summand $A$ of $TA$. This map is a natural
transformation from the forgetful functor $U : k$-\textbf{Alg} $\to k$-\textbf{Mod}
to the functor $UT$.
\[
\begin{diagram}
\node{UA}
\arrow{s,l}{Uf}
\arrow{e,t}{h_A}
\node{UTA}
\arrow{s,r}{UTf}\\
\node{UB}
\arrow{e,t}{h_B}
\node{UTB}
\end{diagram}
\]
Henceforth, context will make it clear whether we are working with algebras or
underlying $k$-modules, and so the functor $U$ shall be omitted.
Denote by $\mathscr{Y}_*A$, the complex $k[ N(- \setminus \Delta S) ]
\otimes_{\Delta S} B_*^{sym}A$ of Cor.~\ref{cor.SymHomComplex}.
\begin{prop}\label{prop.Y-functor}
The assignment $A \mapsto \mathscr{Y}_*A$ is functorial.
\end{prop}
\begin{proof}
We have to say what happens to morphisms. If $f : A \to B$ is a morphism of
$k$-algebras (sending $1_A \mapsto 1_B$), then there is an induced chain map
\[
\mathrm{id} \otimes B_*^{sym}f \;:\; k[ N(- \setminus \Delta S) ]\otimes_{\Delta S}
B_*^{sym}A \to k[ N(- \setminus \Delta S) ]\otimes_{\Delta S} B_*^{sym}B,
\]
defined on $k$-chains by:
\begin{eqnarray*}
[n] \to [p_0] \to [p_1] \to \ldots \to [p_k] \otimes \left(
a_0 \otimes \ldots \otimes a_n\right)
\\
\mapsto \;\;
[n] \to [p_0] \to [p_1] \to \ldots \to [p_k] \otimes \left(
f(a_0) \otimes \ldots \otimes f(a_n)\right)
\end{eqnarray*}
$B_*^{sym}f$ is a natural transformation by Prop.~\ref{prop.symbar-natural}.
\end{proof}
For a general
$k$-algebra $A$, resolve $A$ by tensor algebras. The resulting
exact sequence may be regarded as a $k$-complex, with $TA$ placed in
degree $0$.
\begin{equation}\label{eq.res_tensor_alg}
0 \gets A \stackrel{\theta_A}{\gets} TA \stackrel{\theta_1}{\gets} T^2A
\stackrel{\theta_2}{\gets} \ldots
\end{equation}
The maps $\theta_n$ for $n \geq 1$ are defined in terms of face maps:
\begin{equation}\label{eq.theta_n}
\theta_n := \sum_{i = 0}^{n} (-1)^i T^{n-i}\theta_{T^iA}.
\end{equation}
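For instance, unwinding the formula in low degrees gives
\[
\theta_1 = T\theta_A - \theta_{TA} : T^2A \to TA,
\qquad
\theta_2 = T^2\theta_A - T\theta_{TA} + \theta_{T^2A} : T^3A \to T^2A,
\]
and the relation $\theta_A \circ \theta_1 = \theta_A \circ T\theta_A - \theta_A \circ \theta_{TA} = 0$ is exactly the naturality square for $\theta$ applied to the map $\theta_A : TA \to A$.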
In Section~\ref{sec.deltas_plus}, we shall need to use an important property of the maps
$\theta_n$:
\begin{prop}\label{prop.naturality-of-theta_n}
$\theta_n$ defines a natural transformation $T^{n+1} \to T^n$.
\end{prop}
\begin{proof}
The map $\theta_n$ depends on the algebra $A$. When we wish to distinguish
which algebra is associated with $\theta_n$, we shall use the notation $\theta_n(A)$.
Now, let $f : A \to B$ be any unital map of algebras. Consider the diagram below:
\[
\begin{diagram}
\node{T^{n+1}A}
\arrow{e,t}{\theta_n(A)}
\arrow{s,l}{T^{n+1}f}
\node{T^nA}
\arrow{s,r}{T^nf}\\
\node{T^{n+1}B}
\arrow{e,t}{\theta_n(B)}
\node{T^nB}
\end{diagram}
\]
We must show this diagram commutes. Now, $\displaystyle{\theta_n(A) = \sum_{i = 0}^{n} (-1)^i
T^{n-i}\theta_{T^iA}}$. Let $0 \leq i \leq n$, and consider the following diagram:
\[
\begin{diagram}
\node{T(T^i A)}
\arrow{e,t}{\theta_{T^i A}}
\arrow{s,l}{T(T^i f)}
\node{T^i A}
\arrow{s,r}{T^i f}\\
\node{T(T^i B)}
\arrow{e,t}{\theta_{T^i B}}
\node{T^i B}
\end{diagram}
\]
This diagram commutes by naturality of $\theta$. Now, apply the functor $T^{n-i}$
to each object and morphism to get the corresponding commutative diagram for the
$i^{th}$ face map of $\theta_n$.
\multiply \dgARROWLENGTH by2
\[
\begin{diagram}
\node{T^{n+1} A}
\arrow{e,t}{T^{n-i}\theta_{T^i A}}
\arrow{s,l}{T^{n+1} f}
\node{T^n A}
\arrow{s,r}{T^n f}\\
\node{T^{n+1} B}
\arrow{e,t}{T^{n-i}\theta_{T^i B}}
\node{T^n B}
\end{diagram}
\]
\divide \dgARROWLENGTH by2
This proves each face map is natural, so the differential $\theta_n$ is natural.
\end{proof}
\begin{rmk}
Note that the complex~(\ref{eq.res_tensor_alg}) is nothing more than the complex
associated to May's 2-sided bar construction
$B_*(T, T, A)$ (See chapter 9 of~\cite{M}). If we denote by $A_0$ the
chain complex consisting of $A$ in degree $0$ and $0$ in higher degrees,
then there is a homotopy $h_n : B_n(T, T, A) \to
B_{n+1}(T, T, A)$ that establishes
a strong deformation retract $B_*(T, T, A) \to A_0$.
In fact, the homotopy maps are given by $h_n := h_{T^{n+1}A}$, where
$h$ is the natural transformation $U \to UT$ given above.
\end{rmk}
For each $q \geq 0$, if we
apply the functor $\mathscr{Y}_q$ to the complex~(\ref{eq.res_tensor_alg}),
we obtain the sequence below:
\begin{equation}\label{eq.ex-seq-TA}
\begin{diagram}
\node{0}
\node{\mathscr{Y}_qA}
\arrow{w,t}{ }
\node{\mathscr{Y}_qTA}
\arrow{w,t}{\mathscr{Y}_q\theta_{A}}
\node{\mathscr{Y}_qT^2A}
\arrow{w,t}{\mathscr{Y}_q\theta_1}
\node{\mathscr{Y}_qT^3A}
\arrow{w,t}{\mathscr{Y}_q\theta_2}
\node{\cdots}
\arrow{w,t}{ }
\end{diagram}
\end{equation}
This sequence is exact via the induced homotopy $\mathscr{Y}_q h_*$.
Denote by $d_i(A)$ the $i^{th}$ differential map of $\mathscr{Y}_*A$. When the
context is clear,
the differential will simply be written $d_i$.
Now, the bigraded module $\left\{\mathscr{Y}_q T^{p+1}A\right\}_{p,q \geq 0}$
is not quite a double complex, since the induced maps
$\mathscr{Y}_* T^{p+1}A \to \mathscr{Y}_*T^p A$ are chain maps (the
corresponding squares commute). In order
to make the squares of the bigraded module anti-commute, introduce the
sign $(-1)^p$ on each vertical differential. Call the resulting double complex
$\mathscr{T}_{*,*}$.
\begin{equation}\label{eq.scriptT}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}_2TA}
\arrow{s,l}{d_2}
\node{\mathscr{Y}_2T^2A}
\arrow{w,t}{\mathscr{Y}_2\theta_1}
\arrow{s,l}{-d_2}
\node{\mathscr{Y}_2T^3A}
\arrow{w,t}{\mathscr{Y}_2\theta_2}
\arrow{s,l}{d_2}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_1TA}
\arrow{s,l}{d_1}
\node{\mathscr{Y}_1T^2A}
\arrow{w,t}{\mathscr{Y}_1\theta_1}
\arrow{s,l}{-d_1}
\node{\mathscr{Y}_1T^3A}
\arrow{w,t}{\mathscr{Y}_1\theta_2}
\arrow{s,l}{d_1}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_0TA}
\node{\mathscr{Y}_0T^2A}
\arrow{w,t}{\mathscr{Y}_0\theta_1}
\node{\mathscr{Y}_0T^3A}
\arrow{w,t}{\mathscr{Y}_0\theta_2}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
Consider a second double complex, $\mathscr{A}_{*,*}$, consisting of the complex
$\mathscr{Y}_*A$ as the $0^{th}$ column, and $0$ in all positive columns.
\begin{equation}\label{eq.scriptA}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}_2A}
\arrow{s,l}{d_2}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_1A}
\arrow{s,l}{d_1}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_0A}
\node{0}
\arrow{w}
\node{0}
\arrow{w}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
\begin{theorem}\label{thm.doublecomplexiso}
There is a map of double complexes, $\Theta_{*,*} : \mathscr{T}_{*,*} \to
\mathscr{A}_{*,*}$ inducing isomorphism in homology
\[
H_*\left( \mathrm{Tot}(\mathscr{T});\,k\right) \to
H_*\left( \mathrm{Tot}(\mathscr{A});\, k\right)
\]
\end{theorem}
\begin{proof}
The map $\Theta_{*,*}$ is defined as:
\[
\Theta_{p,q} := \left\{\begin{array}{ll}
\mathscr{Y}_q\theta_A, & p = 0 \\
0, & p > 0
\end{array}\right.
\]
This map is easily verified to be a map of double complexes, since most components
of $\mathscr{A}_{*,*}$ are $0$. On the $0^{th}$ column, we just verify that
$d_q(A) \circ \mathscr{Y}_q\theta_A = \mathscr{Y}_{q-1}\theta_A
\circ d_q(TA)$, but this
follows since $\mathscr{Y}_*$ is functorial ($\mathscr{Y}_*\theta_A$ is
a chain map). The isomorphism follows from the exactness of the
sequence~(\ref{eq.ex-seq-TA}).
\end{proof}
\begin{rmk}\label{rmk.HS-A-bicomplex}
Observe that
\[
H_*\left( \mathrm{Tot}(\mathscr{A});\, k\right)
= H_*\left(\mathscr{Y}_*A;\,k\right) = HS_*(A).
\]
\end{rmk}
This permits the computation of symmetric homology of any given algebra $A$ in
terms of tensor algebras:
\begin{cor}\label{cor.HS_A_via-tensoralgebras}
For an associative, unital $k$-algebra $A$,
\[
HS_*(A) \cong H_*\left( \mathrm{Tot}(\mathscr{T});\,k \right),
\]
where $\mathscr{T}_{*,*}$ is the double complex $\{ \mathscr{Y}_q T^{p+1}A \}_{p,q \geq 0}$.
\end{cor}
The following lemma shows why it is advantageous to work with tensor algebras.
\begin{lemma}\label{lem.tensor_alg_constr}
For a unital, associative $k$-algebra $A$,
there is an isomorphism of $k$-complexes:
\begin{equation}\label{eq.YA-decomp}
\mathscr{Y}_*TA \cong \bigoplus_{n\geq -1} Y_n,
\end{equation}
where
\[
Y_n = \left\{\begin{array}{ll}
k\left[N(\Delta S)\right], &n = -1\\
k\left[N([n] \setminus \Delta S)\right]
\otimes_{k\Sigma_{n+1}^\mathrm{op}} A^{\otimes(n+1)},
&n \geq 0
\end{array}\right.
\]
Moreover, the differential respects the direct sum decomposition.
\end{lemma}
\begin{proof}
Any generator of $\mathscr{Y}_*TA$
has the form
\[
[p] \stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes u,
\]
where
\[
u = \left(\bigotimes_{a\in A_0}a\right) \otimes
\left(\bigotimes_{a\in A_1}a\right) \otimes \ldots \otimes
\left(\bigotimes_{a\in A_p}a\right),
\]
and $A_0, A_1, \ldots, A_p$ are finite ordered lists of elements of $A$. Indeed,
each $A_j$ may be thought of as an element of $A^{m_j}$ (set product). If $A_j = \emptyset$,
then $m_j = 0$, and we use the convention that an empty tensor product is
equal to $1 \in k$. We say that the corresponding tensor factor is
{\it trivial}. (Caution, $1_A \in A$ is not considered trivial, since it
has degree $1$ in the tensor algebra.) Now,
let $m = \left(\sum m_j\right) - 1$. We shall use the convention that
$A^0 = \emptyset$. Let
\[
\pi \;:\; A^{m_0} \times A^{m_1} \times \ldots \times
A^{m_p} \longrightarrow A^{m+1}
\]
be the evident isomorphism. Let $A_m = \pi( A_0, A_1, \ldots, A_p )$.
{\bf Case 1.} If $u$ is
non-trivial (\textit{i.e.}, $A_m \neq \emptyset$), then construct the element
\[
u' = \bigotimes_{a \in A_m}a
\]
Next, construct a $\Delta$-morphism $\zeta_u : [m] \to [p]$ as follows: $\zeta_u$
sends the points $0, 1, \ldots, m_0-1$ onto $0$, then
sends the points $m_0, m_0 + 1, \ldots, m_0+m_1 -1$ onto $1$, etc. It should
be clear that $(\zeta_u)_*(u') = u$. An example will clarify. Suppose
\[
u = (a_0 \otimes a_1) \otimes 1 \otimes (a_2 \otimes a_3 \otimes a_4) \in
(A \otimes A) \oplus k \oplus (A \otimes A \otimes A).
\]
Then
\[
u' = a_0 \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4,
\]
and $\zeta_u : [4] \to [2]$ has
preimages: $\zeta_u^{-1}(0) = \{0, 1\}$, $\zeta_u^{-1}(1) = \emptyset$, $\zeta_u^{-1}(2)
= \{2, 3, 4\}$ (or, in tensor notation, $\zeta_u = x_0x_1 \otimes 1 \otimes x_2x_3x_4$).
Note, the elements $a_i$ need not be distinct.
Then under the $\Delta S$-equivalence,
\[
[p] \stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes u =
[p] \stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes (\zeta_u)_*(u')
\]
\[
\approx
[m] \stackrel{\alpha\zeta_u}{\to} [q_0] \to \ldots \to [q_n] \otimes u'
\]
The assignment is well-defined with respect to
the $\Delta S$-equivalence since the total number of non-trivial tensor
factors in $u$ is the same as the total number of non-trivial tensor factors
in $\phi_*(u)$ for any $\phi \in \mathrm{Mor}\Delta S$. It is this property
of tensor algebras that is essential in making the proof work.
Note that the only equivalence that persists after rewriting the generators
is invariance under the symmetric group action:
\[
[m] \stackrel{\alpha\sigma}{\to}
[q_0] \to \ldots \to [q_n] \otimes u' \approx
[m] \stackrel{\alpha}{\to}
[q_0] \to \ldots \to [q_n] \otimes \sigma_*(u'), \;\; \textrm{for $\sigma \in
\Sigma_{m+1}^\mathrm{op}$}
\]
This shows that any such non-trivial element in $\mathscr{Y}_*TA$ may be written
uniquely as an element of
\[
k\left[N([m] \setminus \Delta S)\right] \otimes_{k\Sigma_{m+1}^\mathrm{op}}
A^{\otimes(m+1)}.
\]
{\bf Case 2.} If $u$ is
trivial (\textit{i.e.}, $A_m = \emptyset$), then
\[
[p] \stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes u
= [p] \stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes 1_k^{\otimes(p + 1)}.
\]
This element is equivalent to:
\[
[q_0] \stackrel{\mathrm{id}}{\to} [q_0] \to \ldots \to [q_n] \otimes
1_k^{\otimes(q_0 + 1)},
\]
and so this element can be identified with $[q_0] \to \ldots \to [q_n] \in
k\left[N(\Delta S)\right]$.
Thus, the isomorphism~(\ref{eq.YA-decomp}) is proven. Note, the fact that the total number
of non-trivial tensor factors is preserved under $\Delta S$ morphisms also proves that
the differential respects the direct sum decomposition.
\end{proof}
\section{Symmetric Homology with Coefficients}\label{sec.symhom_coeff}
Following the conventions for Hochschild and cyclic homology in
Loday~\cite{L}, when we need to indicate explicitly the ground ring $k$
over which we compute symmetric homology of $A$, we shall use the notation:
\[
HS_*(A\;|\;k)
\]
Furthermore, since the notion ``$\Delta S$-module'' does not explicitly state the
ground ring, we shall use the bulkier ``$\Delta S$-module over $k$'' when the
context is ambiguous.
If $\mathscr{Y}_*$ is a complex that computes symmetric homology of
the algebra $A$ over $k$, we may
make the following definition:
\begin{definition}\label{def.HS-with-coeff}
The symmetric homology of $A$ over $k$, with coefficients in a left
$k$-module $M$ is
\[
HS_*(A;M) := H_*( \mathscr{Y}_* \otimes_k M)
\]
\end{definition}
Note, this definition is independent of the particular choice of complex $\mathscr{Y}_*$,
so we shall generally use the complex $\mathscr{Y}_*A =
k[ N(- \setminus \Delta S) ]
\otimes_{\Delta S} B_*^{sym}A$ of Cor.~\ref{cor.SymHomComplex} in this section.
\begin{prop}\label{prop.M-flat}
If $M$ is flat over $k$, then
\[
HS_*(A ; M) \cong HS_*(A) \otimes_k M
\]
\end{prop}
\begin{proof}
Since $M$ is $k$-flat, the functor $ - \otimes_k M$ is exact, and so
commutes with homology functors. In particular,
\[
H_n(\mathscr{Y}_*A \otimes_k M) \cong H_n(\mathscr{Y}_*A) \otimes_k M
\]
\end{proof}
\begin{cor}\label{cor.HS-Q-Z_p}
For any $\mathbb{Z}$-algebra $A$, $HS_n(A ; \mathbb{Q}) \cong HS_n(A) \otimes_{\mathbb{Z}}
\mathbb{Q}$.
\end{cor}
\begin{lemma}\label{lem.YA-flat}
{\it i.} If $A$ is a flat $k$-algebra, then $\mathscr{Y}_nA$ is flat for each $n$.

{\it ii.} If $A$ is a projective $k$-algebra, then $\mathscr{Y}_nA$ is projective for
each $n$.
\end{lemma}
\begin{proof}
By Remark~\ref{rmk.uniqueChain} we may identify:
\[
\mathscr{Y}_nA \cong \bigoplus_{m \geq 0}\left(
k[ N([m] \setminus\Delta S)_{n-1} ] \otimes_k A^{\otimes(m+1)}\right).
\]
Note, $k[ N([m] \setminus\Delta S)_{n-1} ]$ is free, so if $A$ is
flat, then $\mathscr{Y}_nA$ is a direct sum of modules that are
tensor products of free with flat modules, hence $\mathscr{Y}_nA$ is flat.
Similarly, if $A$ is projective, $\mathscr{Y}_nA$ is also, since tensor products
and direct sums of projectives are projective.
\end{proof}
\begin{prop}\label{prop.HS-A-B}
If $B$ is a commutative $k$-algebra, then there is an
isomorphism
\[
HS_*(A \otimes_k B \;|\; B) \cong HS_*(A ; B)
\]
\end{prop}
\begin{proof}
Here, we are viewing $A \otimes_k B$ as a $B$-algebra via the inclusion
$B \cong 1_A \otimes_k B \hookrightarrow A \otimes_k B$. Observe, there
is an isomorphism
\[
(A \otimes_k B) \otimes_B (A \otimes_k B) \stackrel{\cong}{\longrightarrow}
A \otimes_k A \otimes_k (B \otimes_B B)\stackrel{\cong}{\longrightarrow}
(A \otimes_k A) \otimes_k B.
\]
Iterating this for $n$-fold tensors of $A \otimes_k B$,
\[
\underbrace{(A \otimes_k B) \otimes_B \ldots \otimes_B (A \otimes_k B)}_{n}
\cong \underbrace{A \otimes_k \ldots \otimes_k A}_n \otimes_k B
\]
This shows that the $\Delta S$-module over $B$, $B_*^{sym}(A \otimes_k B)$,
is isomorphic as a $k$-module to \mbox{$(B_*^{sym}A) \otimes_k B$}.
The proposition now follows essentially by definition. Let $\overline{L}_*$
be the resolution of $\underline{k}$ by projective $\Delta S^\mathrm{op}$-modules
(over $k$) given by $\overline{L}_* = k[N(- \setminus \Delta S)]$. Then, if we take
tensor products (over $k$) with the algebra $B$, we obtain
\[
\overline{L}_* \otimes_k B \cong B[N(- \setminus \Delta S)],
\]
which is a projective resolution of the trivial $\Delta S^\mathrm{op}$-module over $B$,
$\underline{B}$. Thus,
\begin{equation}\label{eq.H_AotimesB_B}
HS_*(A \otimes_k B \;|\; B) = H_*\left( (\overline{L}_* \otimes_k B)
\otimes_{B[\mathrm{Mor}\Delta S]}
B_*^{sym}(A \otimes_k B);\;B \right)
\end{equation}
On the chain level, there are isomorphisms:
\[
(\overline{L}_* \otimes_k B)
\otimes_{B[\mathrm{Mor}\Delta S]}
B_*^{sym}(A \otimes_k B) \cong (\overline{L}_* \otimes_k B)
\otimes_{B[\mathrm{Mor}\Delta S]}
(B_*^{sym}A \otimes_k B)
\]
\begin{equation}\label{eq.H_AotimesB_B-red}
\cong (\overline{L}_* \otimes_{k[\mathrm{Mor}\Delta S]}
B_*^{sym}A) \otimes_k B
\end{equation}
The complex~(\ref{eq.H_AotimesB_B-red}) computes $HS_*( A ; B)$
by definition.
\end{proof}
\begin{rmk}
Since $HS_*(A \;|\; k) = HS_*(A \otimes_k k\;|\; k)$, Prop.~\ref{prop.HS-A-B} allows
us to identify $HS_*(A \;|\; k)$ with $HS_*(A ; k)$.
\end{rmk}
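For instance, combining Prop.~\ref{prop.HS-A-B} with Prop.~\ref{prop.M-flat}: taking
$B = k[t]$, which is free (hence flat) over $k$, and using $A \otimes_k k[t] \cong A[t]$,
we obtain
\[
HS_*\left(A[t] \;|\; k[t]\right) \cong HS_*\left(A ; k[t]\right) \cong HS_*(A) \otimes_k k[t].
\]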
The construction $HS_*(A ; -)$ is a covariant functor, as
is immediately seen on the chain level. Moreover, Prop.~\ref{prop.Y-functor}
implies $HS_*( - ; X)$ is a covariant functor for any left $k$-module, $X$.
\begin{prop}\label{prop.les-for-HS}
Suppose $0 \to X \to Y \to Z \to 0$ is a short exact sequence of left $k$-modules,
and suppose $A$ is a flat $k$-algebra.
Then there is an induced long exact sequence in symmetric homology:
\begin{equation}\label{eq.les_HS}
\ldots \to HS_n(A ; X) \to HS_n(A ; Y) \to HS_n(A ; Z) \to HS_{n-1}(A ; X) \to \ldots
\end{equation}
Moreover, a map of short exact sequences, $(\alpha, \beta, \gamma)$, as in the
diagram below, induces a map of the corresponding long exact sequences
(commutative ladder)
\begin{equation}\label{eq.ses-morphism}
\begin{diagram}
\node{0}
\arrow{e}
\node{X}
\arrow{e}
\arrow{s,l}{\alpha}
\node{Y}
\arrow{e}
\arrow{s,l}{\beta}
\node{Z}
\arrow{e}
\arrow{s,l}{\gamma}
\node{0}
\\
\node{0}
\arrow{e}
\node{X'}
\arrow{e}
\node{Y'}
\arrow{e}
\node{Z'}
\arrow{e}
\node{0}
\end{diagram}
\end{equation}
\end{prop}
\begin{proof}
By Lemma~\ref{lem.YA-flat}, the hypothesis $A$ is flat implies that the following
is an exact sequence of chain complexes:
\[
0 \to \mathscr{Y}_*A \otimes_k X \to \mathscr{Y}_*A \otimes_k Y \to
\mathscr{Y}_*A \otimes_k Z \to 0.
\]
This induces a long exact sequence in homology
\[
\ldots \to H_n(\mathscr{Y}_*A \otimes_k X) \to H_n(\mathscr{Y}_*A \otimes_k Y) \to
H_n(\mathscr{Y}_*A \otimes_k Z) \to H_{n-1}(\mathscr{Y}_*A \otimes_k X)
\to \ldots
\]
as required.
Now let $(\alpha, \beta, \gamma)$ be a morphism of short exact sequences, as
in diagram~(\ref{eq.ses-morphism}).
Consider the diagram,
\begin{equation}\label{eq.les-morphism}
\begin{diagram}
\node{ \vdots }
\arrow{s}
\node{ \vdots }
\arrow{s}
\\
\node{ HS_n(A ; X) }
\arrow{s}
\arrow{e,t}{ \alpha_* }
\node{ HS_n(A ; X') }
\arrow{s}
\\
\node{ HS_n(A ; Y) }
\arrow{s}
\arrow{e,t}{ \beta_* }
\node{ HS_n(A ; Y') }
\arrow{s}
\\
\node{ HS_n(A ; Z) }
\arrow{s,l}{\partial}
\arrow{e,t}{ \gamma_* }
\node{ HS_n(A ; Z') }
\arrow{s,l}{\partial'}
\\
\node{ HS_{n-1}(A ; X) }
\arrow{s}
\arrow{e,t}{ \alpha_* }
\node{ HS_{n-1}(A ; X') }
\arrow{s}
\\
\node{ \vdots }
\node{ \vdots }
\end{diagram}
\end{equation}
Since $HS_n(A ; -)$ is functorial, the upper two squares of the diagram commute.
Commutativity of the lower square follows from the naturality of the connecting
homomorphism in the snake lemma.
\end{proof}
\begin{rmk}\label{rmk.les_functors}
Any family of additive covariant functors $\{T_n\}$ between two abelian categories
is said to be a {\it long exact sequence of functors} if it takes
short exact sequences to long exact sequences such as~(\ref{eq.les_HS})
and morphisms of short exact
sequences to commutative ladders of long exact sequences such as~(\ref{eq.les-morphism}).
See~\cite{D},
Definition~1.1 and also~\cite{Mc}, section~12.1. The content of
Prop.~\ref{prop.les-for-HS} is that for $A$ flat, $\{HS_n(A ; - )\}_{n \in \mathbb{Z}}$ is a
long exact sequence of functors.
\end{rmk}
We now state the {\it Universal Coefficient Theorem for symmetric homology}.
\begin{theorem}\label{thm.univ.coeff.}
If $A$ is a flat $k$-algebra, and $B$ is a commutative $k$-algebra, then there is
a spectral sequence with
\[
E_2^{p,q} := \mathrm{Tor}^k_p\left( HS_q( A \;|\; k) , B \right) \Rightarrow
HS_*( A ; B).
\]
\end{theorem}
\begin{proof}
Let $T_q : k\textrm{-$\mathbf{Mod}$} \to k\textrm{-$\mathbf{Mod}$}$ be the functor
$HS_q( A ; - )$. Observe, since $A$ is flat, $\{T_q\}$ is a long exact sequence
of additive covariant functors
(Rmk.~\ref{rmk.les_functors} and Prop.~\ref{prop.les-for-HS}); $T_q = 0$ for
sufficiently small $q$ (indeed, for
$q < 0$); and $T_q$ commutes with arbitrary direct sums, since tensoring and
taking homology always commutes with direct sums.
Hence, by the Universal Coefficient Theorem of Dold (2.12 of~\cite{D}. See also
McCleary~\cite{Mc}, Thm.~12.11), there is a spectral sequence
with
\[
E_2^{p,q} := \mathrm{Tor}^k_p\left( T_q(k) , B \right) \Rightarrow
T_*(B).
\]
\end{proof}
As an immediate consequence, we have the following result.
\begin{cor}\label{cor.iso_in_HS_with_coeff.}
If $f : A \to A'$ is a $k$-algebra map between flat algebras which induces
an isomorphism in symmetric homology, $HS_*(A) \stackrel{\cong}{\to} HS_*(A')$,
then for a commutative $k$-algebra $B$, the map $f \otimes \mathrm{id}_B$ induces
an isomorphism $HS_*(A;B) \stackrel{\cong}{\to} HS_*(A' ; B)$.
\end{cor}
Under stronger hypotheses, the universal coefficient spectral sequence
reduces to short exact sequences. Recall some notions of ring theory
(cf.\ the article Homological Algebra: Categories of Modules (200:K), Vol. 1,
pp. 755--757 of~\cite{I}). A commutative ring $k$ is said to have
{\it global dimension $\leq n$} if for all \mbox{$k$-modules} $X$ and $Y$,
$\mathrm{Ext}_k^m(X,Y) = 0$
for $m > n$. $k$ is said to have {\it weak global dimension $\leq n$} if for
all \mbox{$k$-modules} $X$ and $Y$, $\mathrm{Tor}_m^k(X, Y) = 0$ for $m>n$.
Note, the weak global dimension of a ring is less than or equal to its
global dimension, with equality holding for Noetherian rings but not in
general. A ring is said to be {\it hereditary} if all submodules of projective
modules are projective, and this is equivalent to the global dimension of
the ring being no greater than $1$.
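For example, a field has global dimension $0$, while $\mathbb{Z}$ (or, more generally,
any principal ideal domain that is not a field) is hereditary of global dimension $1$,
since every submodule of a free module over a principal ideal domain is free.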
\begin{theorem}\label{thm.univ.coeff.ses}
If $k$ has weak global dimension $\leq 1$,
then the spectral sequence
of Thm.~\ref{thm.univ.coeff.} reduces to
short exact sequences,
\begin{equation}\label{eq.UCTses}
0 \longrightarrow HS_n(A\;|\;k) \otimes_k B \longrightarrow
HS_n(A ; B) \longrightarrow \mathrm{Tor}^k_1( HS_{n-1}(A \;|\;k), B)
\longrightarrow 0.
\end{equation}
Moreover, if $k$ is hereditary and $A$ is projective over $k$, then
these sequences split (unnaturally).
\end{theorem}
\begin{proof}
Assume first that $k$ has weak global dimension $\leq 1$.
So $\mathrm{Tor}_p^k(T_q(k), B) = 0$ for all $p > 1$. Following Dold's
argument (Corollary~2.13 of~\cite{D}), we obtain the required exact sequences,
\[
0 \longrightarrow T_n(k)\otimes_k B \longrightarrow
T_n(B) \longrightarrow \mathrm{Tor}^k_1( T_{n-1}(k), B )
\longrightarrow 0.
\]
Assume further that $k$ is hereditary and $A$ is projective.
Then by Lemma~\ref{lem.YA-flat}, $\mathscr{Y}_nA$ is projective for each $n$.
Theorem~8.22 of Rotman~\cite{R3} then gives us the desired splitting.
\end{proof}
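For instance, take $k = \mathbb{Z}$ and $B = \mathbb{Z}/p\mathbb{Z}$ for a prime $p$.
Since $\mathbb{Z}$ is hereditary of (weak) global dimension $1$, for a
$\mathbb{Z}$-algebra $A$ that is free as a $\mathbb{Z}$-module the theorem yields
split short exact sequences of the familiar form
\[
0 \longrightarrow HS_n(A\;|\;\mathbb{Z}) \otimes_{\mathbb{Z}} \mathbb{Z}/p\mathbb{Z}
\longrightarrow HS_n(A ; \mathbb{Z}/p\mathbb{Z}) \longrightarrow
\mathrm{Tor}^{\mathbb{Z}}_1\left( HS_{n-1}(A \;|\;\mathbb{Z}), \mathbb{Z}/p\mathbb{Z}\right)
\longrightarrow 0.
\]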
\begin{rmk}
The proof given above also proves UCT for cyclic homology. A partial result
along these lines exists in Loday (\cite{L}, 2.1.16).
There, he shows
$HC_*(A\;|\; k) \otimes_k K \cong HC_*(A \;|\; K)$
and \mbox{$HH_*(A\;|\; k) \otimes_k K \cong$} \mbox{$HH_*(A \;|\; K)$} in the case
that $K$
is a localization of $k$, and $A$ is a $K$-module, flat over $k$. I am
not aware of a statement of UCT for cyclic or Hochschild homology in its full
generality in the literature.
\end{rmk}
For the remainder of this section, we shall obtain a converse to
Cor.~\ref{cor.iso_in_HS_with_coeff.} in the case $k = \mathbb{Z}$.
\begin{theorem}\label{thm.conv.HS_iso}
Let $f : A \to A'$ be an algebra map between torsion-free $\mathbb{Z}$-algebras.
Suppose for $B = \mathbb{Q}$ and $B = \mathbb{Z}/p\mathbb{Z}$ for
any prime $p$, the map
$f \otimes \mathrm{id}_B$ induces an isomorphism
$HS_*(A ; B) \to HS_*(A' ; B)$. Then $f$ also induces an isomorphism
$HS_*(A) \stackrel{\cong}{\to} HS_*(A')$.
\end{theorem}
First, note that Prop.~\ref{prop.les-for-HS} allows one to construct the Bockstein
homomorphisms
\[
\beta_n : HS_n(A ; Z) \to HS_{n-1}(A ; X)
\]
associated to a short exact sequence of $k$-modules, $0 \to X \to Y \to Z \to 0$,
as long as $A$ is flat over $k$. These Bocksteins are natural in the following
sense:
\begin{lemma}\label{lem.bockstein}
Suppose $f : A \to A'$ is a map of flat $k$-algebras, and $0 \to X \to Y \to Z \to 0$
is a short exact sequence of left $k$-modules. Then the following diagram
is commutative for each $n$:
\[
\begin{diagram}
\node{HS_n(A; Z)}
\arrow{e,t}{\beta}
\arrow{s,l}{f_*}
\node{HS_{n-1}(A; X)}
\arrow{s,l}{f_*}
\\
\node{HS_n(A'; Z)}
\arrow{e,t}{\beta'}
\node{HS_{n-1}(A'; X)}
\end{diagram}
\]
Moreover if the induced map $f_*: HS_*(A;W) \to HS_*(A';W)$
is an isomorphism for any two of $W=X$, $W=Y$, $W=Z$, then it is an isomorphism
for the third.
\end{lemma}
\begin{proof}
Since $A$ and $A'$ are flat, both of the following sequences of complexes are exact:
\[
0 \to \mathscr{Y}_*A \otimes_k X \to \mathscr{Y}_*A \otimes_k Y \to
\mathscr{Y}_*A \otimes_k Z \to 0.
\]
\[
0 \to \mathscr{Y}_*A' \otimes_k X \to \mathscr{Y}_*A' \otimes_k Y \to
\mathscr{Y}_*A' \otimes_k Z \to 0.
\]
The map $\mathscr{Y}_*A \to \mathscr{Y}_*A'$ induces a map of short
exact sequences, hence induces a commutative ladder of long exact
sequences of homology groups. In particular, the squares involving the
boundary maps (Bocksteins) must commute.
Now, assuming further that $f_*$ induces isomorphisms $HS_*(A;W) \to HS_*(A';W)$
for any two of $W = X$, $W = Y$, $W = Z$, let $V$ be the third module. The
$5$-lemma implies isomorphisms $HS_n(A; V) \stackrel{\cong}{\longrightarrow}
HS_n(A'; V)$ for each $n$.
\end{proof}
We shall now proceed with the proof of Thm.~\ref{thm.conv.HS_iso}.
All tensor products will be over $\mathbb{Z}$ for the rest of this section.
\begin{proof}
Let $A$ and $A'$ be $\mathbb{Z}$-algebras that are torsion-free as $\mathbb{Z}$-modules. Over $\mathbb{Z}$, torsion-free implies
flat. Let $f : A \to A'$ be an algebra map inducing isomorphism in
symmetric homology with coefficients in $\mathbb{Q}$ and also in $\mathbb{Z}/p\mathbb{Z}$
for any prime $p$.
For $m \geq 2$, there is a short exact sequence,
\[
0 \longrightarrow \mathbb{Z}/p^{m-1}\mathbb{Z} \stackrel{p}{\longrightarrow} \mathbb{Z}/p^m\mathbb{Z}
\longrightarrow \mathbb{Z}/p\mathbb{Z} \longrightarrow 0.
\]
Consider first the case $m = 2$. Since $HS_*(A ; \mathbb{Z}/p\mathbb{Z}) \to HS_*(A' ; \mathbb{Z}/p\mathbb{Z})$
is an isomorphism, Lemma~\ref{lem.bockstein} implies the induced map
is an isomorphism for the middle term:
\begin{equation}\label{eq.HS_A_Zp2}
f_* : HS_*(A; \mathbb{Z}/p^2\mathbb{Z}) \stackrel{\cong}{\longrightarrow} HS_*(A'; \mathbb{Z}/p^2\mathbb{Z})
\end{equation}
(Note, all maps induced by $f$ on symmetric homology will be denoted by
$f_*$.)
For the inductive step, fix $m > 2$ and suppose $f$ induces an isomorphism in
symmetric homology,
$f_* : HS_*(A; \mathbb{Z}/p^{m-1}\mathbb{Z}) \stackrel{\cong}{\longrightarrow} HS_*(A'; \mathbb{Z}/p^{m-1}\mathbb{Z})$.
Again, Lemma~\ref{lem.bockstein} implies the induced map is an isomorphism
on the middle term.
\begin{equation}\label{eq.HS_A_Zpm}
f_* : HS_*(A; \mathbb{Z}/p^{m}\mathbb{Z}) \stackrel{\cong}{\longrightarrow} HS_*(A'; \mathbb{Z}/p^{m}\mathbb{Z})
\end{equation}
Denote $\displaystyle{\mathbb{Z} / p^\infty \mathbb{Z} := \lim_{\longrightarrow} \mathbb{Z}/ p^m\mathbb{Z}}$. Note, this
is a {\it direct limit} in the sense that it is a colimit over a directed
system. The direct limit functor is exact (Prop.~5.3 of~\cite{S2}),
so the maps $HS_n(A ; \mathbb{Z} / p^\infty \mathbb{Z}) \to HS_n(A' ; \mathbb{Z} / p^\infty \mathbb{Z})$ induced
by $f$ are isomorphisms, given by the chain of isomorphisms below:
\[
HS_n(A; \mathbb{Z} / p^\infty \mathbb{Z})
\cong
H_n(\lim_{\longrightarrow} \mathscr{Y}_*A \otimes \mathbb{Z}/p^m\mathbb{Z})
\cong
\lim_{\longrightarrow}H_n(\mathscr{Y}_*A \otimes \mathbb{Z}/p^m\mathbb{Z})
\stackrel{f_*}{\longrightarrow}
\]
\[
\qquad\qquad\qquad
\lim_{\longrightarrow}H_n(\mathscr{Y}_*A' \otimes \mathbb{Z}/p^m\mathbb{Z})
\cong
H_n(\lim_{\longrightarrow} \mathscr{Y}_*A' \otimes \mathbb{Z}/p^m\mathbb{Z})
\cong
HS_n(A'; \mathbb{Z} / p^\infty \mathbb{Z})
\]
(Note, $f_*$ here stands for $\displaystyle{\lim_{\longrightarrow}H_n(\mathscr{Y}_*f \otimes
\mathrm{id})}$.)
Finally, consider the short exact sequence of abelian groups,
\[
0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q} \longrightarrow
\bigoplus_{\textrm{$p$ prime}} \mathbb{Z} / p^\infty \mathbb{Z} \longrightarrow 0
\]
The isomorphism $f_* : HS_*(A; \mathbb{Z} / p^\infty \mathbb{Z}) \to HS_*(A'; \mathbb{Z} / p^\infty \mathbb{Z})$
passes to direct sums, giving isomorphisms for each $n$,
\[
f_* : HS_{n}\left(A; \bigoplus_p \mathbb{Z}/p^\infty \mathbb{Z}\right) \stackrel{\cong}
{\longrightarrow}
HS_{n}\left(A'; \bigoplus_p \mathbb{Z}/p^\infty \mathbb{Z}\right).
\]
Together with the assumption that $HS_*(A; \mathbb{Q}) \to HS_*(A';\mathbb{Q})$
is an isomorphism, another appeal to Lemma~\ref{lem.bockstein} gives the
required isomorphism in symmetric homology induced by $f$:
\[
f_* : HS_n(A) \stackrel{\cong}{\longrightarrow} HS_n(A')
\]
\end{proof}
\begin{rmk}
Theorem~\ref{thm.conv.HS_iso} may be useful for determining integral
symmetric homology, since rational computations are generally simpler (see
Section~\ref{sec.char0}),
and computations mod $p$ may be made easier due to the presence of
additional structure, namely homology operations (see Chapter~\ref{chap.prod}).
\end{rmk}
Finally, we state a result along the lines of McCleary~\cite{Mc}, Thm.~10.3. Denote
the torsion submodule of the graded module
$HS_*(A; X)$ by $\tau\left(HS_*(A ; X) \right)$.
\begin{theorem}\label{thm.bockstein_spec_seq}
Suppose $A$ is free of finite rank over $\mathbb{Z}$. Then there is a singly-graded
spectral sequence with
\[
E_*^1 := HS_*( A ; \mathbb{Z}/p\mathbb{Z}) \Rightarrow \left(HS_*(A)/\tau\left(HS_*(A) \right)\right)
\otimes \mathbb{Z}/p\mathbb{Z},
\]
with differential map $d^1 = \beta$, the standard Bockstein map
associated to $0 \to \mathbb{Z} / p\mathbb{Z} \to \mathbb{Z} / p^2\mathbb{Z} \to \mathbb{Z} / p\mathbb{Z} \to 0$. Moreover,
the convergence is strong.
\end{theorem}
The proof McCleary gives on p.~459 carries over to our case intact.
All that is required for this proof is that each $H_n(\mathscr{Y}_*A)$ be a
finitely-generated abelian group. The hypothesis that $A$ is finitely-generated,
coupled with a result of Chapter~\ref{chap.spec_seq2}, namely Cor.~\ref{cor.fin-gen},
guarantees this. Note, over $\mathbb{Z}$, {\it free of finite rank} is equivalent
to {\it flat and finitely-generated}.
Theorem~\ref{thm.bockstein_spec_seq} is a version of the Bockstein spectral
sequence for symmetric homology.
\section{Symmetric Homology of Monoid Algebras}\label{sec.symhommonoid}
The symmetric homology for the case of a monoid algebra
$A = k[M]$ has been studied by Fiedorowicz in~\cite{F}. In the most general
formulation (Prop. 1.3 of~\cite{F}), we have:
\begin{theorem}\label{thm.HS_monoidalgebra}
\[
HS_*(k[M]) \cong H_*\left(B(C_{\infty}, C_1, M); k\right),
\]
where $C_1$ is the little $1$-cubes monad,
$C_{\infty}$ is the little
$\infty$-cubes monad, and $B(C_{\infty}, C_1, M)$ is May's functorial
2-sided bar construction (see~\cite{M}).
\end{theorem}
The proof makes use of a variant of the
symmetric bar construction:
\begin{definition}
Let $M$ be a monoid. Define a functor $B_*^{sym}M : \Delta S \to \textbf{Sets}$ by:
\[
B_n^{sym}M := B_*^{sym}M[n] := M^{n+1}, \; \textrm{(set product)}
\]
\[
B_*^{sym}M(\alpha) : (m_0, \ldots, m_n) \mapsto
\alpha(m_0, \ldots, m_n), \qquad \textrm{for $\alpha \in \mathrm{Mor}
\Delta S$}.
\]
where $\alpha : [n] \to [k]$ is represented in tensor notation, and evaluation
at $(m_0, \ldots, m_n)$ is as in definition~\ref{def.symbar}. (This makes sense, as $M$
is closed under multiplication).
\end{definition}
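For instance, assuming the same tensor notation and evaluation convention as in
definition~\ref{def.symbar}, the morphism $\alpha = x_1x_0 \otimes x_2 : [2] \to [1]$
acts on $B_2^{sym}M$ by
\[
B_*^{sym}M(\alpha) : (m_0, m_1, m_2) \longmapsto (m_1m_0,\; m_2).
\]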
\begin{definition}
For a $\mathscr{C}$-set $X$ and $\mathscr{C}^\mathrm{op}$-set $Y$, define the
$\mathscr{C}$-equivariant set product:
\[
Y \times_\mathscr{C} X := \left(\coprod_{C \in \mathrm{Obj}\mathscr{C}}
Y(C) \times X(C)\right) / \approx,
\]
where the equivalence $\approx$ is generated by the following: For every morphism
$f \in \mathrm{Mor}_{\mathscr{C}}(C, D)$, and every $x \in X(C)$ and $y \in Y(D)$,
we have $\big(y, f_*(x)\big) \approx \big(f^*(y), x\big)$.
\end{definition}
Note that $B_*^{sym}M$ is a $\Delta S$-set, and also a simplicial set, via the
chain of functors in section~\ref{sec.symbar}.
Let $\mathscr{X}_* := N(- \setminus \Delta S) \times_{\Delta S} B^{sym}_*M$.
\begin{prop}\label{prop.SymHomComplexMonoid}
$\mathscr{X}_*$ is a simplicial set whose homology computes $HS_*(k[M])$.
\end{prop}
\begin{proof}
It is clear that $\mathscr{X}_*$ is a simplicial set. The standard construction
for finding the homology of a simplicial set is to create the complex $k[\mathscr{X}_*]$,
with face maps induced by the face maps of $\mathscr{X}_*$. Since $M$ is a $k$-basis for
$k[M]$, $B^{sym}_*M$ acts as a $k$-basis for $B^{sym}_*k[M]$. Then, observe that
$k[ N( - \setminus \Delta S) \times_{\Delta S} B_*^{sym}M ] =
k[N(-\setminus \Delta S)] \otimes_{\Delta S} B_*^{sym}k[M]$.
\end{proof}
If $M = JX_+$ is a free monoid on a generating set $X$, then $k[M] = T(X)$, the
(free) tensor algebra over $k$ on the set $X$. In this case, we have the following:
\begin{lemma}\label{lem.HS_tensoralg}
\[
HS_*\left(T(X)\right) \cong H_*\left( \coprod_{n\geq -1} \widetilde{X}_n; k\right),
\]
where
\[
\widetilde{X}_n = \left\{\begin{array}{ll}
N(\Delta S), &n = -1\\
N([n] \setminus \Delta S)
\times_{\Sigma_{n+1}^\mathrm{op}} X^{n+1},
&n \geq 0
\end{array}\right.
\]
\end{lemma}
\begin{proof}
This is a consequence of Lemma~\ref{lem.tensor_alg_constr} when the tensor algebra
is free, generated by $X = \{ x_i \;|\; i \in \mathscr{I} \}$. By the lemma,
we obtain a decomposition
\[
\mathscr{Y}_*T(X) \cong k\left[N(\Delta S)\right] \oplus
\left(\bigoplus_{n \geq 0} k\left[N([n] \setminus \Delta S)\right]
\otimes_{k\Sigma_{n+1}^\mathrm{op}} k[X^{n+1}]\right),
\]
\[
\cong k\left[ N(\Delta S) \amalg \coprod_{n \geq 0}
N([n] \setminus \Delta S) \times_{\Sigma_{n+1}^\mathrm{op}} X^{n+1} \right],
\]
computing $HS_*\left( T(X) \right)$.
\end{proof}
\begin{rmk}
This proves Thm.~\ref{thm.HS_monoidalgebra} in the special case that $M$ is a
free monoid.
\end{rmk}
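As a quick illustration of Lemma~\ref{lem.HS_tensoralg}, if $X = \{x\}$ is a single
point, then $T(X) \cong k[x]$ and each set $X^{n+1}$ is a point, so the lemma reads
\[
HS_*\left(k[x]\right) \cong H_*\left( N(\Delta S) \amalg \coprod_{n \geq 0}
N([n] \setminus \Delta S)/\Sigma_{n+1}^\mathrm{op};\; k\right).
\]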
If $M$ is a group, $\Gamma$, then Fiedorowicz~\cite{F} found:
\begin{theorem}\label{thm.HS_group}
\[
HS_*(k[\Gamma]) \cong H_*\left(\Omega\Omega^{\infty}S^{\infty}(B\Gamma); k\right)
\]
\end{theorem}
This final formulation shows in particular that $HS_*$ is a non-trivial theory. While
it is true that $H_*(\Omega^{\infty}S^{\infty}(X)) = H_*(QX)$ is well understood,
the same cannot be said of the homology of $\Omega\Omega^{\infty}S^{\infty}X$.
Indeed, May states that $H_*(QX)$ may be regarded as the
free allowable Hopf algebra with conjugation over the Dyer-Lashof algebra and
dual of the Steenrod algebra (See~\cite{CLM}, preface to chapter 1, and also
Lemma~4.10). Cohen and Peterson~\cite{CP} found the homology of
$\Omega\Omega^{\infty}S^{\infty}X$,
where $X = S^0$, the zero-sphere, but there is little hope of extending
this result to arbitrary $X$ using the same methods.
We shall have more to say about $HS_1(k[\Gamma])$ in section~\ref{sec.2-torsion}.
\chapter{ALTERNATIVE RESOLUTIONS}\label{chap.alt}
\section{Symmetric Homology Using $\Delta S_+$}\label{sec.deltas_plus}
In this section, we shall show that replacing $\Delta S$ by $\Delta S_+$ in an
appropriate way does not affect the computation of $HS_*$.
\begin{definition}\label{def.symbar_plus}
For an associative, unital algebra, $A$, over a commutative ground ring $k$,
define a functor $B_*^{sym_+}A : \Delta S_+ \to
k$-\textbf{Mod} by:
\[
\left\{
\begin{array}{lll}
B_n^{sym_+}A &:=& B_*^{sym_+}A[n] := A^{\otimes (n+1)} \\
B_{-1}^{sym_+}A &:=& k,
\end{array}
\right.
\]
\[
\left\{
\begin{array}{ll}
B_*^{sym_+}A(\alpha) : (a_0 \otimes a_1 \otimes \ldots \otimes a_n) \mapsto
\alpha(a_0, \ldots, a_n), \;&\textrm{for $\alpha \in \mathrm{Mor}\Delta S$},\\
B_*^{sym_+}A(\iota_n) : \lambda \mapsto \lambda(1_A \otimes \ldots \otimes 1_A),
\;&(\lambda \in k).
\end{array}
\right.
\]
\end{definition}
Consider the functor $\mathscr{Y}_*^+ : k$-\textbf{Alg} $ \to k$-\textbf{complexes}
given by:
\begin{equation}\label{eq.deltasplus}
\mathscr{Y}_*^+A \;:=\; k[ N(- \setminus \Delta S_+) ]\otimes_{\Delta S_+} B_*^{sym_+}A.
\end{equation}
\[
\mathscr{Y}_*^+f = \mathrm{id} \otimes B_*^{sym_+}f
\]
The functoriality of $\mathscr{Y}_*^+$ depends on the naturality of $B_*^{sym_+}f$,
which follows from the naturality of $B_*^{sym}f$ in most cases. The only case
to check is on a morphism $[-1] \stackrel{\iota_m}{\to} [m]$. For the object $[-1]$,
$B_{-1}^{sym_+}f$ will be the identity map of $k$.
\[
\begin{diagram}
\node{k}
\arrow{s,l}{B_*^{sym_+}A(\iota_m)}
\arrow{e,t}{\mathrm{id}}
\node{k}
\arrow{s,r}{B_*^{sym_+}B(\iota_m)}
\\
\node{A^{\otimes(m+1)}}
\arrow{e,t}{f^{\otimes(m+1)}}
\node{B^{\otimes(m+1)}}
\end{diagram}
\]
The commutativity of this diagram is clear, since $f(1) = 1$, and $B_*^{sym_+}A(\iota_m)$,
$B_*^{sym_+}B(\iota_m)$ are simply the unit maps.
Note, the differential of $\mathscr{Y}_*^+A$ will be denoted $d_*(A)$. As before,
when the context is clear, the differential will simply be denoted $d_*$.
Our goal is to prove the following:
\begin{theorem}\label{thm.SymHom_plusComplex}
For an associative, unital $k$-algebra $A$,
\[
HS_*(A) = H_*\left( \mathscr{Y}_*^+A;\,k \right)
\]
\end{theorem}
As the preliminary step, we shall prove the theorem in the special case of
tensor algebras. We shall need an analogue of Lemma~\ref{lem.tensor_alg_constr}
for $\Delta S_+$.
\begin{lemma}\label{lem.tensor_alg_constr-plus}
For a unital, associative $k$-algebra $A$,
there is an isomorphism of $k$-complexes:
\begin{equation}\label{eq.YplusA-decomp}
\mathscr{Y}^+_*TA \cong \bigoplus_{n\geq -1} Y^+_n,
\end{equation}
where
\[
Y^+_n = \left\{\begin{array}{ll}
k\left[N(\Delta S_+)\right], &n = -1\\
k\left[N([n] \setminus \Delta S_+)\right]
\otimes_{k\Sigma_{n+1}} A^{\otimes(n+1)},
&n \geq 0
\end{array}\right.
\]
Moreover, the differential respects the direct sum decomposition.
\end{lemma}
\begin{proof}
The proof follows verbatim as the proof of Lemma~\ref{lem.tensor_alg_constr}, only
with $\Delta S$ replaced with $\Delta S_+$ throughout.
\end{proof}
\begin{lemma}\label{lem.Y-to-Y-plus}
There is a chain map $J_A : \mathscr{Y}_*A \to \mathscr{Y}_*^+A$, which is
natural in $A$.
\end{lemma}
\begin{proof}
First observe that the inclusion $\Delta S
\hookrightarrow \Delta S_+$ induces an inclusion of nerves:
\[
N(-\setminus \Delta S)
\hookrightarrow
N(-\setminus \Delta S_+),
\]
which in turn induces the chain map
\[
k\left[ N(-\setminus \Delta S) \right] \otimes_{\Delta S} B_*^{sym}A
\longrightarrow
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym}A
\]
$k\left[ N(-\setminus \Delta S_+) \right]$ is a right $\Delta S$-module
as well as a right $\Delta S_+$-module. Similarly, $B_*^{sym_+}A$ is
both a left $\Delta S$-module and a left $\Delta S_+$-module. There is
a natural transformation \mbox{$B_*^{sym}A \to B_*^{sym_+}A$}, again induced by
inclusion of categories $\Delta S \hookrightarrow \Delta S_+$, hence there is
a chain map
\[
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym}A
\longrightarrow
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym_+}A.
\]
Finally, pass to tensors over $\Delta S_+$:
\[
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym_+}A
\longrightarrow
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S_+} B_*^{sym_+}A.
\]
The composition gives a chain map $J_A : \mathscr{Y}_*A \to \mathscr{Y}_*^+A$. We
must verify that $J$ is a natural transformation $\mathscr{Y}_* \to \mathscr{Y}_*^+$.
Suppose $f : A \to B$ is a unital algebra map.
\[
\begin{diagram}
\node{k\left[ N(-\setminus \Delta S) \right] \otimes_{\Delta S} B_*^{sym}A}
\arrow{s,l}{\mathscr{Y}_*f}
\arrow{e,t}{J_A}
\node{k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S_+} B_*^{sym_+}A}
\arrow{s,r}{\mathscr{Y}_*^+f}
\\
\node{k\left[ N(-\setminus \Delta S) \right] \otimes_{\Delta S} B_*^{sym}B}
\arrow{e,t}{J_B}
\node{k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S_+} B_*^{sym_+}B}
\end{diagram}
\]
Let $y = [p]\to[q_0] \to \ldots \to [q_n] \otimes (a_0 \otimes \ldots \otimes a_p)$
be an $n$-chain of $k\left[ N(-\setminus \Delta S) \right] \otimes_{\Delta S} B_*^{sym}A$.
Then $J_A(y)$ has the same form in
$k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S_+} B_*^{sym_+}A$.
So,
\[
\mathscr{Y}_*^+f\left( J_A(y) \right) =
[p]\to[q_0] \to \ldots \to [q_n] \otimes \left(f(a_0) \otimes \ldots \otimes f(a_p)\right)
= J_B\left( \mathscr{Y}_*f(y) \right)
\]
\end{proof}
Our goal now will be to show the following:
\begin{theorem}\label{thm.J-iso}
For a unital, associative $k$-algebra $A$, the chain map
\[
J_A : \mathscr{Y}_*A \to \mathscr{Y}^+_*A
\]
induces an isomorphism on homology
\[
H_*\left(\mathscr{Y}_*A;\,k\right) \stackrel{\cong}{\longrightarrow}
H_*\left(\mathscr{Y}^+_*A;\,k\right)
\]
\end{theorem}
\begin{lemma}\label{lem.SymHom_plusComplex-tensalg}
For a unital, associative $k$-algebra $A$, the chain map
\[
J_{TA} : \mathscr{Y}_*TA \longrightarrow \mathscr{Y}_*^+TA
\]
induces an isomorphism in homology, hence
\[
HS_*(TA) = H_*\left( \mathscr{Y}_*^+TA;\,k \right)
\]
\end{lemma}
\begin{proof}
\[
HS_*(TA) = H_*\left( \mathscr{Y}_*TA;\,k \right), \quad\textrm{by definition.}
\]
There is a commutative square of complexes:
\[
\begin{diagram}
\node{\mathscr{Y}_*TA}
\arrow{e,t}{J_{TA}}
\node{\mathscr{Y}_*^+TA}\\
\node{\bigoplus_{n\geq -1} Y_n}
\arrow{n,l}{\cong}
\arrow{e,t}{j_*}
\node{\bigoplus_{n \geq -1} Y_n^+}
\arrow{n,l}{\cong}
\end{diagram}
\]
The isomorphisms on the left and right follow from Lemmas~\ref{lem.tensor_alg_constr}
and~\ref{lem.tensor_alg_constr-plus}. The map $j_*$ is defined as follows. For $n = -1$,
\begin{equation}\label{eq.NDeltaS-to-NDeltaS-plus}
j_* : k\left[N(\Delta S)\right] \to k\left[N(\Delta S_+)\right]
\end{equation}
is induced by inclusion of categories $\Delta S \hookrightarrow \Delta S_+$.
For $n \geq 0$,
\begin{equation}\label{eq.Y_n-to-Y_n-plus}
j_* :
k\left[N([n] \setminus \Delta S)\right]
\otimes_{k\Sigma_{n+1}} A^{\otimes(n+1)} \longrightarrow
k\left[N([n] \setminus \Delta S_+)\right]
\otimes_{k\Sigma_{n+1}} A^{\otimes(n+1)}
\end{equation}
is again induced by inclusion of categories.
Observe that $N(\Delta S_+)$ is
contractible, since $[-1] \in \mathrm{Obj}(\Delta S_+)$
is initial. Thus by Lemma~\ref{lem.DeltaScontractible},
the map $j_*$ of~(\ref{eq.NDeltaS-to-NDeltaS-plus})
is a homotopy equivalence. Now, for $n \geq 0$,
there is equality $N([n] \setminus \Delta S_+) = N([n] \setminus \Delta S)$, since
there are no morphisms $[n] \to [-1]$ for $n \geq 0$, so~(\ref{eq.Y_n-to-Y_n-plus}),
and therefore $j_*$, is a homotopy equivalence. This implies
that $J_{TA}$ must also be a homotopy equivalence.
\end{proof}
\begin{rmk}\label{rmk.cyclic-departure}
Observe, this lemma provides our first major departure from the theory of
cyclic homology. The proof above
would not work over the categories $\Delta C$ and $\Delta C_+$, as $N(\Delta C)$
is not contractible.
\end{rmk}
Consider a double complex $\mathscr{T}^+_{*,*}$, the analogue of
complex~(\ref{eq.scriptT}) for $\Delta S_+$.
\begin{equation}\label{eq.scriptT-plus}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}^+_2TA}
\arrow{s,l}{d_2}
\node{\mathscr{Y}^+_2T^2A}
\arrow{w,t}{\mathscr{Y}^+_2\theta_1}
\arrow{s,l}{-d_2}
\node{\mathscr{Y}^+_2T^3A}
\arrow{w,t}{\mathscr{Y}^+_2\theta_2}
\arrow{s,l}{d_2}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_1TA}
\arrow{s,l}{d_1}
\node{\mathscr{Y}^+_1T^2A}
\arrow{w,t}{\mathscr{Y}^+_1\theta_1}
\arrow{s,l}{-d_1}
\node{\mathscr{Y}^+_1T^3A}
\arrow{w,t}{\mathscr{Y}^+_1\theta_2}
\arrow{s,l}{d_1}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_0TA}
\node{\mathscr{Y}^+_0T^2A}
\arrow{w,t}{\mathscr{Y}^+_0\theta_1}
\node{\mathscr{Y}^+_0T^3A}
\arrow{w,t}{\mathscr{Y}^+_0\theta_2}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
The maps $\theta_1, \theta_2, \ldots$ are defined by formula~(\ref{eq.theta_n}).
Consider a second double complex, $\mathscr{A}^+_{*,*}$, the analogue of
complex~(\ref{eq.scriptA}) for $\Delta S_+$. It consists
of the complex
$\mathscr{Y}^+_*A$ as the $0^{th}$ column, and $0$ in all positive columns.
\begin{equation}\label{eq.scriptA-plus}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}^+_2A}
\arrow{s,l}{d_2}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_1A}
\arrow{s,l}{d_1}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_0A}
\node{0}
\arrow{w}
\node{0}
\arrow{w}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
We may think of each double complex construction as a functor:
\[
\begin{array}{l}
A \mapsto \mathscr{A}_{*,*}(A)\\
A \mapsto \mathscr{T}_{*,*}(A)\\
A \mapsto \mathscr{A}^+_{*,*}(A)\\
A \mapsto \mathscr{T}^+_{*,*}(A)
\end{array}
\]
Each functor takes unital morphisms of algebras to maps of double complexes in the
obvious way -- for example if $f : A \to B$, then the induced map
$\mathscr{T}_{*,*}(A) \to \mathscr{T}_{*,*}(B)$ is defined on the $(p,q)$-component
by the map $\mathscr{Y}_qT^{p+1}f$. The induced map commutes with vertical
differentials of $\mathscr{A}_{*,*}$ and $\mathscr{T}_{*,*}$ (resp.,
$\mathscr{A}^+_{*,*}$ and $\mathscr{T}^+_{*,*}$)
by naturality of $\mathscr{Y}_*$ (resp. $\mathscr{Y}_*^+$), and it commutes
with the horizontal differentials of $\mathscr{T}_{*,*}$ and $\mathscr{T}^+_{*,*}$
by naturality of $\theta_n$ (see Prop.~\ref{prop.naturality-of-theta_n}).
The map $J$ induces a natural transformation
\mbox{$J_{*,*} : \mathscr{A}_{*,*} \to \mathscr{A}^+_{*,*}$}, defined by
\[
J_{p,*}(A) = \left\{\begin{array}{ll}
J_A : \mathscr{Y}_*A \to \mathscr{Y}^+_*A, & p = 0\\
0, & p > 0 \end{array}\right.
\]
Define a map of bigraded modules, \mbox{$K_{*,*}(A) : \mathscr{T}_{*,*}(A)
\to \mathscr{T}^+_{*,*}(A)$} by:
\[
K_{p,*}(A) = J_{T^{p+1}A} : \mathscr{Y}_*T^{p+1}A \to \mathscr{Y}^+_*T^{p+1}A
\]
Now, $K_{*,*}(A)$ commutes with the vertical differentials because each
$J_{T^{p+1}A}$ is a chain map. $K_{*,*}(A)$ commutes with the horizontal
differentials because of naturality of $J$. Finally, $K_{*,*}$ defines
a natural transformation $\mathscr{T}_{*,*} \to \mathscr{T}^+_{*,*}$,
again by naturality of $J$.
\multiply \dgARROWLENGTH by2
\[
\begin{diagram}
\node{A}
\arrow{s,l}{f}
\node{\mathscr{Y}_qT^{p+1}A}
\arrow{s,l}{\mathscr{Y}_qT^{p+1}f}
\arrow{e,tb}{K_{p,q}(A)}{= J_{T^{p+1}A}}
\node{\mathscr{Y}^+_qT^{p+1}A}
\arrow{s,r}{\mathscr{Y}^+_qT^{p+1}f}\\
\node{B}
\node{\mathscr{Y}_qT^{p+1}B}
\arrow{e,tb}{K_{p,q}(B)}{= J_{T^{p+1}B}}
\node{\mathscr{Y}^+_qT^{p+1}B}
\end{diagram}
\]
\divide \dgARROWLENGTH by2
Recall by Thm~\ref{thm.doublecomplexiso}, there is a map of double complexes,
\[
\Theta_{*,*}(A) : \mathscr{T}_{*,*}(A) \longrightarrow \mathscr{A}_{*,*}(A)
\]
inducing an isomorphism in homology of the total complexes. Observe that
$\Theta_{*,*}$ provides a natural transformation $\mathscr{T}_{*,*}
\to \mathscr{A}_{*,*}$, since $\Theta_{*,*}(A)$ is defined in terms of
$\theta_A$, which is natural in $A$. We shall need the analogous statement for
the double complexes $\mathscr{T}^+_{*,*}$ and $\mathscr{A}^+_{*,*}$.
\begin{theorem}\label{thm.doublecomplexiso-plus}
For any unital associative algebra, $A$,
there is a map of double complexes, $\Theta^+_{*,*}(A) : \mathscr{T}^+_{*,*}(A) \to
\mathscr{A}^+_{*,*}(A)$ inducing isomorphism in homology
\[
H_*\left( \mathrm{Tot}(\mathscr{T}^+(A));\,k\right) \to
H_*\left( \mathrm{Tot}(\mathscr{A}^+(A));\, k\right)
\]
Moreover, $\Theta^+_{*,*}$ provides a natural transformation $\mathscr{T}^+_{*,*}
\to \mathscr{A}^+_{*,*}$.
\end{theorem}
\begin{proof}
The map $\Theta^+_{*,*}(A)$ is defined as:
\[
\Theta^+_{p,q}(A) := \left\{\begin{array}{ll}
\mathscr{Y}^+_q\theta_A, & p = 0 \\
0, & p > 0
\end{array}\right.
\]
This map is a map of double complexes by functoriality of $\mathscr{Y}^+_*$,
and the isomorphism follows from the exactness of the analogue of
sequence~(\ref{eq.ex-seq-TA}) for $\mathscr{Y}^+_q$, which is again established via the
induced homotopy $\mathscr{Y}^+_q h_*$. Naturality of $\Theta^+_{*,*}$ follows from naturality
of $\theta$.
\end{proof}
\begin{lemma}\label{lem.comm-diag-functors}
The following diagram of functors and transformations is commutative.
\begin{equation}\label{eq.comm-diag-functors}
\begin{diagram}
\node{\mathscr{T}_{*,*}}
\arrow{s,l}{K_{*,*}}
\arrow{e,t}{\Theta_{*,*}}
\node{\mathscr{A}_{*,*}}
\arrow{s,r}{J_{*,*}}\\
\node{\mathscr{T}^+_{*,*}}
\arrow{e,t}{\Theta^+_{*,*}}
\node{\mathscr{A}^+_{*,*}}
\end{diagram}
\end{equation}
\end{lemma}
\begin{proof}
It suffices to fix an algebra $A$ and examine only the $(p,q)$-components.
\begin{equation}\label{eq.comm-diag-Apq}
\begin{diagram}
\node{\mathscr{T}_{p,q}(A)}
\arrow{s,l}{K_{p,q}(A)}
\arrow{e,t}{\Theta_{p,q}(A)}
\node{\mathscr{A}_{p,q}(A)}
\arrow{s,r}{J_{p,q}(A)}\\
\node{\mathscr{T}^+_{p,q}(A)}
\arrow{e,t}{\Theta^+_{p,q}(A)}
\node{\mathscr{A}^+_{p,q}(A)}
\end{diagram}
\end{equation}
If $p > 0$, then the right hand side of~(\ref{eq.comm-diag-Apq}) is trivial, so
we may assume $p = 0$. In this case, diagram~(\ref{eq.comm-diag-Apq}) becomes:
\begin{equation}\label{eq.comm-diag-A0q}
\begin{diagram}
\node{\mathscr{Y}_qTA}
\arrow{s,l}{(J_{TA})_q}
\arrow{e,t}{\mathscr{Y}_q\theta_A}
\node{\mathscr{Y}_qA}
\arrow{s,r}{(J_A)_q}\\
\node{\mathscr{Y}^+_qTA}
\arrow{e,t}{\mathscr{Y}^+_q\theta_A}
\node{\mathscr{Y}^+_qA}
\end{diagram}
\end{equation}
This diagram commutes because of naturality of $J$.
\end{proof}
To any double complex $\mathscr{B}_{*,*}$ over $k$, we may associate two spectral sequences:
$(E_{I}\mathscr{B})_{*,*}$, obtained by first taking vertical homology, then
horizontal; and $(E_{II}\mathscr{B})_{*,*}$, obtained by first taking horizontal homology,
then vertical. In the case that $\mathscr{B}_{*,*}$ lies entirely within the first quadrant,
both spectral sequences converge to $H_*\left( \mathrm{Tot}(\mathscr{B});\,k
\right)$ (See~\cite{Mc}, Section~2.4).
Maps of double complexes induce maps of spectral sequences, $E_{I}$ and $E_{II}$,
respectively.
Fix the algebra $A$, and consider the commutative diagram of spectral sequences
induced by diagram~(\ref{eq.comm-diag-functors}).
The induced maps will be indicated by an overline, and explicit mention of
the algebra $A$ is suppressed for brevity of notation.
\begin{equation}\label{eq.comm-diag-SpSeqII}
\begin{diagram}
\node{E_{II} \mathscr{T}}
\arrow{s,l}{\overline{K}}
\arrow{e,t}{\overline{\Theta}}
\node{E_{II} \mathscr{A}}
\arrow{s,r}{\overline{J}}\\
\node{E_{II} \mathscr{T}^+}
\arrow{e,t}{\overline{\Theta^+}}
\node{E_{II} \mathscr{A}^+}
\end{diagram}
\end{equation}
Now, by Thm.~\ref{thm.doublecomplexiso} and Thm.~\ref{thm.doublecomplexiso-plus}, we
know that $\Theta_{*,*}$ and $\Theta^+_{*,*}$ induce isomorphisms on total homology, so
$\overline{\Theta}$ and $\overline{\Theta^+}$ also induce isomorphisms on the limit
term of the spectral sequences. In fact, $\overline{\Theta}^r$ and
$\overline{\Theta^+}^r$ are isomorphisms $(E_{II} \mathscr{T})^r \to
(E_{II} \mathscr{A})^r$ and $(E_{II} \mathscr{T}^+)^r \to
(E_{II} \mathscr{A}^+)^r$, respectively, for $r \geq 1$. This is because taking horizontal homology
of $\mathscr{T}_{*,*}$ (resp. $\mathscr{T}^+_{*,*}$) kills all components in positive
columns, leaving only
the $0^{th}$ column, which is chain-isomorphic to the $0^{th}$ column of
$\mathscr{A}_{*,*}$ (resp. $\mathscr{A}^+_{*,*}$). On the other hand,
taking horizontal homology of $\mathscr{A}_{*,*}$ (resp. $\mathscr{A}^+_{*,*}$) does not
change that double complex.
Consider a second diagram of spectral sequences, with induced maps indicated by
a hat.
\begin{equation}\label{eq.comm-diag-SpSeqI}
\begin{diagram}
\node{E_{I} \mathscr{T}}
\arrow{s,l}{\widehat{K}}
\arrow{e,t}{\widehat{\Theta}}
\node{E_{I} \mathscr{A}}
\arrow{s,r}{\widehat{J}}\\
\node{E_{I} \mathscr{T}^+}
\arrow{e,t}{\widehat{\Theta^+}}
\node{E_{I} \mathscr{A}^+}
\end{diagram}
\end{equation}
Now the map $\widehat{K}$ induces an isomorphism on the limit terms of the sequences
$E_{I} \mathscr{T}$ and $E_{I} \mathscr{T}^+$ as a result of
Lemma~\ref{lem.SymHom_plusComplex-tensalg}. As before, $\widehat{K}^r$ is
an isomorphism for $r \geq 1$.
Now, since $H_*\left(\mathrm{Tot}(\mathscr{A});\,k\right) = H_*\left(\mathscr{Y}_*A;\,k
\right)$
and $H_*\left(\mathrm{Tot}(\mathscr{A}^+);\,k\right) = H_*\left( \mathscr{Y}_*^+A;\,k \right)$,
we can put together a chain of isomorphisms
\[
\begin{diagram}
\node{H_*\left(\mathscr{Y}_*A;\,k\right) \cong \left(E_{II} \mathscr{A}\right)^{\infty}_*}
\node{\left(E_{II} \mathscr{T}\right)^{\infty}_*
\cong \left(E_{I} \mathscr{T}\right)^{\infty}_*}
\arrow{w,tb}{\overline{\Theta}^{\infty}}{\cong}
\arrow{e,tb}{\widehat{K}^{\infty}_*}{\cong}
\node{\left(E_{I} \mathscr{T}^+\right)^{\infty}_*}
\end{diagram}
\]
\begin{equation}\label{eq.long-iso-YtoYplus}
\begin{diagram}
\node{\cong \left(E_{II} \mathscr{T}^+\right)^{\infty}_*}
\arrow{e,tb}{(\overline{\Theta^+})^{\infty}_*}{\cong}
\node{\left(E_{II} \mathscr{A}^+\right)^{\infty}_*
\cong H_*\left( \mathscr{Y}_*^+A;\,k \right)}
\end{diagram}
\end{equation}
Commutativity of Diagram~(\ref{eq.comm-diag-functors}) ensures that
the composition of maps in Diagram~(\ref{eq.long-iso-YtoYplus}) is the map
induced by $J_A$, hence proving Thm.~\ref{thm.J-iso}.
As a direct consequence, $HS_*(A) \cong H_*\left( \mathscr{Y}_*^+A;\,k \right)$,
proving Thm.~\ref{thm.SymHom_plusComplex}.
\section{The Category $\mathrm{Epi}\Delta S$ and a Smaller Resolution}\label{sec.epideltas}
The complex~(\ref{symhomcomplex}) is extremely large and unwieldy for computation.
Fortunately, when the algebra $A$ is equipped with an augmentation,
$\epsilon : A \to k$,~(\ref{symhomcomplex}) is
chain homotopy equivalent to a much smaller subcomplex. Let
$I$ be the augmentation ideal, and let $\eta \,:\, k \to A$ be the unit map.
Since $\epsilon \eta = \mathrm{id}_k$, the following short exact
sequence splits (as $k$-modules):
\[
0 \to I \to A \stackrel{\epsilon}{\to} k \to 0,
\]
and every $x \in A$ can be written uniquely as $x = a + \eta(\lambda)$ for some $a \in I$,
$\lambda \in k$. This property will allow $B_n^{sym_+}A$ to be decomposed in a
useful way.
\begin{definition}\label{def.B_JA}
Suppose $J \subset [n]$. Define
\[
B_{n,J}A := B_0 \otimes B_1 \otimes \ldots \otimes B_n, \quad \textrm{where}\;\;
B_j = \left\{\begin{array}{ll}
I & \textrm{if $j \in J$}\\
\eta(k) & \textrm{if $j \notin J$}
\end{array}\right.
\]
Define $B_{-1,\emptyset}A = k$.
\end{definition}
\begin{lemma}\label{lem.I-decomp}
For each $n \geq -1$, there is a direct sum decomposition of $k$-modules
\[
B_n^{sym_+}A \cong \bigoplus_{J \subset [n]} B_{n,J}A.
\]
\end{lemma}
\begin{proof}
The splitting of the unit map $\eta$ implies that $A \cong \eta(k) \oplus I$
as $k$-modules. So, for $n \geq 0$,
\[
B_n^{sym_+}A = (\eta(k) \oplus I)^{\otimes(n+1)}
\cong \bigoplus_{J \subset [n]} B_{n,J}A.
\]
For $n = -1$, $B_{-1}^{sym_+}A = k = B_{-1,\emptyset}A$. (Recall, $[-1] = \emptyset$).
\end{proof}
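For instance, when $n = 1$, Definition~\ref{def.B_JA} and Lemma~\ref{lem.I-decomp} give
\[
B_1^{sym_+}A = A \otimes A \cong \big(\eta(k) \otimes \eta(k)\big) \oplus \big(I \otimes \eta(k)\big)
\oplus \big(\eta(k) \otimes I\big) \oplus \big(I \otimes I\big),
\]
the summands corresponding to $J = \emptyset$, $\{0\}$, $\{1\}$, and $\{0,1\}$, respectively.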
\begin{definition}\label{def.basictensors}
A {\it basic tensor} is any tensor $w_0 \otimes w_1 \otimes \ldots \otimes w_n$,
where each $w_j$ is in $I$ or is equal to the unit of $A$. Call a tensor factor
$w_j$ \textit{trivial} if it is the unit of $A$; otherwise,
the factor is called \textit{non-trivial}. If all factors of a basic tensor
are trivial, then the tensor is called trivial, otherwise non-trivial.
\end{definition}
It will become convenient to include the object $[-1]$ in $\Delta$. Let $\Delta_+$
be the category with objects $[-1], [0], [1], [2], \ldots$, whose morphisms are all
those of $\Delta$ together with $\iota_n : [-1] \to [n]$ for $n \geq -1$. $\Delta_+$
may be thought of as the subcategory of $\Delta S_+$ consisting of all non-decreasing
set maps.
For a basic tensor $Y \in B_n^{sym_+}A$, we shall define a map
$\delta_Y \in \mathrm{Mor}\Delta_+$ as follows:
If $Y$ is trivial, let $\delta_Y = \iota_n$. Otherwise,
$Y$ has $\overline{n} + 1$ non-trivial factors for some
$\overline{n} \geq 0$. Define $\delta_Y : [\overline{n}] \to [n]$ to be the
unique injective map
that sends each point $0, 1, \ldots, \overline{n}$ to a point $p \in [n]$ such that $Y$ is
non-trivial at \mbox{factor $p$}.
Let $\overline{Y}$ be the tensor obtained from
$Y$ by omitting all trivial factors if such exist, or $\overline{Y} := 1$ if $Y$ is
trivial. Note, $\overline{Y}$ is the unique basic tensor
such that $(\delta_Y)_*(\overline{Y}) = Y$.
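To illustrate, take the basic tensor $Y = 1_A \otimes a \otimes 1_A \otimes b \in
B_3^{sym_+}A$ with $a, b \in I$. Then $\overline{n} = 1$, $\overline{Y} = a \otimes b$,
and $\delta_Y : [1] \to [3]$ is the injection written in tensor notation as
$1 \otimes x_0 \otimes 1 \otimes x_1$, so that indeed
\[
(\delta_Y)_*(\overline{Y}) = 1_A \otimes a \otimes 1_A \otimes b = Y.
\]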
\begin{prop}\label{prop.BsymI}
Any chain $[q]\to [q_0] \to \ldots \to [q_n] \otimes Y \;\in\;
k[N(-\setminus \Delta S_+)] \otimes_{\Delta S_+} B_*^{sym_+}A$, where $Y$ is a basic
tensor, is equivalent to a chain
$[\overline{q}] \to [q_0] \to \ldots \to [q_n] \otimes \overline{Y}$,
where either $\overline{Y}$ has no trivial factors or
$\overline{Y} = 1$ and $\overline{q} = -1$.
\end{prop}
\begin{proof}
Let $\delta_Y$ and $\overline{Y}$ be defined as above, and let
$[\overline{q}]$ be the source of $\delta_Y$.
\[
[q] \stackrel{\phi}{\to} [q_0] \to \ldots \to [q_n] \otimes Y \;=\;
[q] \stackrel{\phi}{\to} [q_0] \to \ldots \to [q_n] \otimes (\delta_Y)_*(\overline{Y})
\]
\[
\approx \; [\overline{q}] \stackrel{\phi\delta_Y}{\to} [q_0] \to \ldots \to [q_n]
\otimes \overline{Y}
\]
\end{proof}
Next, we turn our attention to the morphisms in the chain. Our goal is to reduce to
those chains that involve only epimorphisms.
\begin{definition}
Let $\mathscr{C}$ be a category.
The category $\mathrm{Epi}\mathscr{C}$ (resp. $\mathrm{Mono}\mathscr{C}$) is the
subcategory
of $\mathscr{C}$ consisting of
the same objects as $\mathscr{C}$ and only those morphisms $f \in \mathrm{Mor}
\mathscr{C}$
that are epimorphisms (resp. monomorphisms).
\end{definition}
Note, a morphism $\alpha = (\phi, g) \in \mathrm{Mor}\Delta S_+$ is epic, resp. monic, if
and only if $\phi$ is epic, resp. monic, as morphism in $\Delta_+$.
\begin{prop}\label{prop.decomp}
Any morphism $\alpha \in \mathrm{Mor}\Delta S_+$ decomposes
uniquely as $(\eta, \mathrm{id}) \circ \gamma$, where
$\gamma \in \mathrm{Mor}(\mathrm{Epi}\Delta S_+)$ and $\eta \in \mathrm{Mor}
(\mathrm{Mono}\Delta_+)$.
\end{prop}
\begin{proof}
Suppose $\alpha$ has source $[-1]$ and target $[n]$. Then $\alpha = \iota_n$ is
the only possibility, and this decomposes as $\iota_n \circ \mathrm{id}_{[-1]}$.
Now suppose the source of $\alpha$ is $[p]$ for some $p \geq 0$.
Write $\alpha = (\phi, g)$, with $\phi \in \mathrm{Mor}\Delta$ and
$g \in \Sigma_{p+1}^{\mathrm{op}}$. We shall decompose
$\phi$ as follows:
For $\phi : [p] \to [r]$, suppose the image of $\phi$ consists of $q+1$ distinct points of $[r]$.
Let $\pi : [p] \to [q]$ be the epimorphism induced by $\phi$, relabeling the points hit while
maintaining their order, and let $\eta$ be the obvious order-preserving monomorphism
$[q] \to [r]$ onto the image of $\phi$, so that $\eta \pi = \phi$
as morphisms in $\Delta$. To get the required decomposition in $\Delta S$, use:
$\alpha = (\eta, \mathrm{id}) \circ (\pi, g)$.
Now, if $(\xi, \mathrm{id})\circ (\psi, h)$ is also a decomposition, with $\xi$
monic and $\psi$ epic, then
\[
(\xi, \mathrm{id})\circ (\psi, h) = (\eta, \mathrm{id}) \circ (\pi, g)
\]
\[
(\xi, \mathrm{id}) \circ (\psi, g^{-1}h) = (\eta, \mathrm{id}) \circ(\pi, \mathrm{id})
\]
\[
(\xi \psi, g^{-1}h) = (\eta\pi, \mathrm{id}),
\]
proving $g = h$. Uniqueness will then follow from uniqueness of such decompositions
entirely within the category $\Delta$. The latter follows from Theorem B.2
of~\cite{L}, since any monomorphism (resp. epimorphism) of $\Delta$ can be built
up (uniquely) as compositions of $\delta_{i}$ (resp. $\sigma_{i}$).
\end{proof}
Explicitly, if $\alpha = X_0 \otimes X_1 \otimes \ldots \otimes X_m : [n] \to [m]$, with
$X_i \neq 1$ for $i = j_0, j_1, \ldots j_k$, then $\mathrm{im}(\alpha)$ is
isomorphic to the object $[k]$. The surjection onto $[k]$ is
$X_{j_0} \otimes \ldots \otimes X_{j_k}$.
The $\Delta$ injection $[k] \hookrightarrow [m]$ is $Z_0 \otimes \ldots \otimes Z_m$,
where $Z_i = 1$ if
$i \neq j_0, j_1, \ldots j_k$ and for $i = j_0, \ldots, j_k$, the monomials
$Z_i$ are the symbols $x_0, x_1, \ldots, x_k$, in that order. For example,
\[
x_2 x_3 \otimes 1 \otimes x_1 \otimes 1 \otimes x_0 =
x_0 \otimes 1 \otimes x_1 \otimes 1 \otimes x_2 \;\cdot\; x_2 x_3 \otimes x_1 \otimes x_0
\]
When morphisms are not labelled,
we shall write:
\[
[p] \to [r] \;=\; [p] \twoheadrightarrow \mathrm{im}([p] \to [r]) \hookrightarrow [r].
\]
For any $p \geq -1$, if $[p] \stackrel{\beta}{\to} [r_1] \stackrel{\alpha}{\to} [r_2]$,
then there is an induced
map
\[
\mathrm{im}([p] \to [r_1]) \stackrel{\overline{\alpha}}{\to}
\mathrm{im}([p] \to [r_2])
\]
making the diagram commute:
\begin{equation}\label{eq.epidiagram}
\begin{diagram}
\node{ [r_1] }
\arrow[2]{e,t}{ \alpha }
\node[2]{ [r_2] }
\\
\node[2]{ [p] }
\arrow{nw,b}{ \beta }
\arrow{ne,b}{ \alpha\beta }
\arrow{sw,t,A}{ \pi_1 }
\arrow{se,t,A}{ \pi_2 }
\\
\node{ \mathrm{im}([p] \to [r_1]) }
\arrow[2]{n,l,J}{ \eta_1 }
\arrow[2]{e,b,A}{ \overline{\alpha} }
\node[2]{ \mathrm{im}([p] \to [r_2]) }
\arrow[2]{n,r,J}{ \eta_2 }
\end{diagram}
\end{equation}
$\overline{\alpha}$ is the epimorphism induced from the map $\alpha \eta_1$. Furthermore,
for morphisms
\[
[p] \to [r_1] \stackrel{\alpha_1}{\to} [r_2] \stackrel{\alpha_2}{\to} [r_3],
\]
we have:
\[
\overline{\alpha_2 \alpha_1} = \overline{\alpha_2} \circ \overline{\alpha_1},
\]
\textit{i.e.}, the epimorphism construction is a functor $( [p] \setminus \Delta S_+ ) \to
( [p] \setminus \mathrm{Epi}\Delta S_+ )$.
Define a variant of the symmetric bar construction:
\begin{definition}
$B_*^{sym_+}I : \mathrm{Epi}\Delta S_+ \to k$-\textbf{Mod} is the functor defined
by:
\[
\left\{
\begin{array}{lll}
B_n^{sym_+}I &:=& I^{\otimes n+1}, \quad n \geq 0, \\
B_{-1}^{sym_+}I &:=& k,
\end{array}
\right.
\]
\[
B_*^{sym_+}I(\alpha) : (a_0 \otimes a_1 \otimes \ldots \otimes a_n) \mapsto
\alpha(a_0, \ldots, a_n), \;\textrm{for $\alpha \in
\mathrm{Mor}(\mathrm{Epi}\Delta S_+)$}
\]
\end{definition}
This definition makes sense, since the only epimorphism with source $[-1]$ is
$\iota_{-1} = \mathrm{id}_{[-1]}$, sending $B_{-1}^{sym_+}I = k$ identically
to itself. For $p \geq 0$, an epimorphism $\alpha : [p] \to [q]$ in $\mathrm{Epi}\Delta S_+$
hits every factor position of its target, so no unit element of $I$ is needed to form
$\alpha(a_0, \ldots, a_n)$; furthermore, each tensor factor of the image is a product of
elements of the ideal $I$, hence again lies in $I$.
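To illustrate the action of an epimorphism (using the monomial notation for morphisms of
$\Delta S$ employed in the example above, and arbitrary elements $a_0, a_1, a_2 \in I$): the
morphism $\alpha = x_0x_2 \otimes x_1 : [2] \to [1]$ is epic, and
\[
B_*^{sym_+}I(\alpha) : a_0 \otimes a_1 \otimes a_2 \;\mapsto\; a_0a_2 \otimes a_1 \;\in\; I^{\otimes 2};
\]
every tensor factor of the image is a non-empty product of elements of $I$.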
Consider the simplicial $k$-module:
\begin{equation}\label{epiDeltaS_complex}
\mathscr{Y}^{epi}_*A :=
k[ N(- \setminus \mathrm{Epi}\Delta S_+) ] \otimes_{\mathrm{Epi}\Delta S_+} B_*^{sym_+}I
\end{equation}
There is an obvious inclusion,
\[
f : \mathscr{Y}^{epi}_*A \longrightarrow \mathscr{Y}^+_*A
\]
Define a chain map, $g$, in the opposite direction as follows.
First, by prop.~\ref{prop.BsymI} and observations above, we only need to
define $g$ on the chains $[q] \to [q_0] \to \ldots \to [q_n] \otimes Y$
where $Y \in B_*^{sym_+}I$ already. In this case, define:
\[
g( [q] \to [q_0] \to \ldots \to [q_n] \otimes Y )
\]
\[
=
\left\{\begin{array}{ll}
{[-1] \twoheadrightarrow [-1] \twoheadrightarrow \ldots
\twoheadrightarrow [-1] \otimes 1,} & \quad \textrm{$Y$ trivial},\\
{[q] \twoheadrightarrow \mathrm{im}([q] \to [q_0])
\twoheadrightarrow \mathrm{im}([q] \to [q_{1}]) \twoheadrightarrow \ldots
\twoheadrightarrow \mathrm{im}([q] \to [q_n]) \otimes Y,} &
\quad \textrm{$Y$ non-trivial}
\end{array}\right.
\]
I claim $g$ is well-defined. Indeed, if $Y\in B_q^{sym_+}I$ is trivial,
then $q$ must be $-1$. If $\psi : [-1] \to [q']$ is any
morphism of $\Delta S_+$, then we know $\psi = \iota_{q'}$, and $(\iota_{q'})_*(Y)$
is still a trivial tensor. We have equivalent chains:
\[
[-1] \to [q_0] \to \ldots \to [q_n] \otimes 1 \approx
[q'] \to [q_0] \to \ldots \to [q_n] \otimes 1^{\otimes(q' + 1)}
\]
Applying $g$ to the chain on the left results in a chain of identity
maps,
\[
[-1] \twoheadrightarrow [-1] \twoheadrightarrow \ldots \twoheadrightarrow [-1]
\otimes Y.
\]
In order to apply $g$ to the chain
on the right, we must first put it into the correct form. Since $1^{\otimes(q' + 1)}$
is trivial, we must use $\delta = \iota_{q'}$ to rewrite the chain. But what results
is exactly the chain on the left, so $g$ is well-defined in this case.
Suppose now that $q \geq 0$ and $Y \in B_q^{sym_+}I$. Let $\psi : [q] \to [q']$
be any morphism of $\Delta S_+$. Since $q \geq 0$, $\psi \in \mathrm{Mor}\Delta S$.
We have equivalent chains:
\[
[q] \stackrel{\phi \psi}{\to} [q_0] \stackrel{\alpha_1}{\to} \ldots
\stackrel{\alpha_n}{\to} [q_n]
\otimes Y \;\approx\;
[q'] \stackrel{\phi}{\to} [q_0] \stackrel{\alpha_1}{\to} \ldots
\stackrel{\alpha_n}{\to} [q_n] \otimes \psi_*(Y).
\]
Applying $g$ on the left hand side yields
\begin{equation}\label{eq.lhs}
[q] \stackrel{\overline{\phi\psi}}{\twoheadrightarrow} \mathrm{im}(\phi\psi)
\twoheadrightarrow \ldots \twoheadrightarrow
\mathrm{im}([q] \to [q_n])
\otimes Y.
\end{equation}
Consider the chain on the right-hand side. If $\psi$ happens to be an epimorphism,
then $\psi_*(Y) \in B_{q'}^{sym_+}I$, and we may apply $g$ directly to this chain.
In this case, we get:
\begin{equation}\label{eq.rhs}
[q'] \stackrel{\overline{\phi}}{\twoheadrightarrow} \mathrm{im}(\phi)
\twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q'] \to [q_n])
\otimes \psi_*(Y)
\end{equation}
Now, since $\psi$ is epic, $\mathrm{im}(\phi\psi) = \mathrm{im}(\phi)$. Moreover,
$\mathrm{im}(\alpha_k\ldots\alpha_1\phi\psi) =\mathrm{im}(\alpha_k\ldots\alpha_1\phi)$
for each $k = 1, 2, \ldots, n$, and the induced morphisms are equal:
\[
\left(\mathrm{im}([q'] \to [q_k]) \twoheadrightarrow
\mathrm{im}([q'] \to [q_{k+1}])\right) =
\left(\mathrm{im}([q] \to [q_k]) \twoheadrightarrow \mathrm{im}([q] \to [q_{k+1}])\right)
\]
Hence, the chain~(\ref{eq.rhs}) is equal to:
\[
[q'] \stackrel{\overline{\phi}}{\twoheadrightarrow} \mathrm{im}([q] \to [q_0])
\twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q] \to [q_n])
\otimes \psi_*(Y)
\]
\[
\approx [q] \stackrel{\overline{\phi}\circ \psi}{\twoheadrightarrow}
\mathrm{im}([q] \to [q_0]) \twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q] \to [q_n])
\otimes Y
\]
Thus, since $\psi = \overline{\psi}$ and $\overline{\phi}\overline{\psi} =
\overline{\phi} \circ \overline{\psi}$, the chains~(\ref{eq.lhs}) and~(\ref{eq.rhs})
are equivalent.
Suppose now that $\psi$ is not epimorphic. Use Prop.~\ref{prop.decomp} to write
$\psi = \eta\pi$ for $\pi \in \mathrm{Mor}(\mathrm{Epi}\Delta S_+)$ and $\eta \in \mathrm{Mor}(\mathrm{Mono}\Delta_+)$.
By the previous case, it is clear that we may assume $\pi = \mathrm{id}$, so that
$\psi$ is a monomorphism of $\Delta_+$. In this case, $\psi_*(Y)$ may have trivial
tensor factors. Now, $g$ is only defined for chains in which the factor
$Y \in B_n^{sym_+}A$ is a basic
tensor having no trivial factors, so we must use Prop.~\ref{prop.BsymI}
to rewrite the chain as:
\[
[\overline{q}] \to [q_0] \to \ldots \to [q_n] \otimes \overline{\psi_*(Y)}.
\]
Since $Y$ is in $B_q^{sym_+}I$ and $\psi$ is a monomorphism of $\Delta$, we have
$\overline{\psi_*(Y)} = Y$, and $\delta_{\psi_*(Y)} = \psi$, by uniqueness of the
decomposition. Thus, when we apply $g$ to this chain,
we must apply it to the equivalent chain:
\[
[q] \stackrel{\phi\psi}{\to} [q_0] \to \ldots \to [q_n] \otimes Y.
\]
This shows that $g$ is well-defined.
Now, $gf = \mathrm{id}$, since if $[p] \to [r]$ is in $\mathrm{Mor}(\mathrm{Epi}\Delta S_+)$,
then the epimorphism construction $[p] \twoheadrightarrow \mathrm{im}([p] \to [r])$ is
just the original morphism.
\begin{prop}
$fg \simeq \mathrm{id}.$
\end{prop}
\begin{proof}
In what follows, we assume $Y$ is a basic tensor in $B_q^{sym}I$.
Define a presimplicial homotopy $h$ from $fg$ to $\mathrm{id}$ as follows:
\[
h_j^{(n)}([q] \to [q_0] \to \ldots \to [q_n] \otimes Y) \;:=
\]
\[
[q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow \ldots
\twoheadrightarrow \mathrm{im}([q] \to [q_j]) \hookrightarrow [q_j] \to
\ldots \to [q_n]
\otimes Y.
\]
$h_j$ is well-defined by the functorial properties of the
epimorphism construction.
Suppose $0 \leq i < j \leq n$. We have $d_i h_j = h_{j-1} d_i$, since $d_i$ on the
right hand side reduces the number of nodes to the left of $[q_j]$ by one. We also
use the functoriality of the epimorphism construction here.
Suppose $1 \leq j+1 < i \leq n+1$. Then $d_i h_j = h_j d_{i-1}$, since $h_j$ on the left
hand side shifts all nodes to the right of (and including) $[q_j]$ to the right by one.
Suppose $0 < i \leq n$. First apply $d_ih_i$ to an arbitrary chain.
\[
\begin{diagram}
\node{ [q] \to [q_0] \to \ldots \to [q_n] \otimes Y }
\arrow{s,l,T}{ h_i }
\\
\node{ [q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q] \to [q_{i-1}]) \twoheadrightarrow
\mathrm{im}([q] \to [q_i]) \hookrightarrow [q_i] \to \ldots \to [q_n]
\otimes Y }
\arrow{s,l,T}{ d_i }
\\
\node{ [q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow \ldots
\twoheadrightarrow \mathrm{im}([q] \to [q_{i-1}]) \to [q_i] \to \ldots \to [q_n]
\otimes Y }
\end{diagram}
\]
Apply $d_ih_{i-1}$ to the same chain.
\[
\begin{diagram}
\node{ [q] \to [q_0] \to \ldots \to [q_n] \otimes Y }
\arrow{s,l,T}{ h_{i-1} }
\\
\node{ [q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q] \to [q_{i-1}]) \hookrightarrow [q_{i-1}]
\to [q_i] \to \ldots \to [q_n] \otimes Y }
\arrow{s,l,T}{ d_i }
\\
\node{ [q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow \ldots
\twoheadrightarrow \mathrm{im}([q] \to [q_{i-1}]) \to [q_i] \to \ldots \to [q_n]
\otimes Y }
\end{diagram}
\]
The fact that the composition of
$\mathrm{im}([q] \to [q_{i-1}]) \to [q_{i-1}] \to [q_i]$ is equal to the composition
of $\mathrm{im}([q] \to [q_{i-1}]) \to \mathrm{im}([q] \to [q_i]) \to [q_i] $ follows
from the commutativity of the
outside square of diagram~\ref{eq.epidiagram}.
Thus, $d_i h_i = d_i h_{i-1}$.
Finally,
\[
d_0 h_0 ([q] \to [q_0] \to \ldots \to [q_n] \otimes Y) =
d_0\big([q]\twoheadrightarrow \mathrm{im}([q] \to [q_0]) \hookrightarrow [q_0] \to
\ldots \to [q_n] \otimes Y \big)
\]
\[
= [q] \to [q_0] \to \ldots \to [q_n] \otimes Y,
\]
and
\[
d_{n+1} h_n ([q] \to [q_0] \to \ldots \to [q_n] \otimes Y)
\]
\[
= d_{n+1}\big([q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q] \to [q_n]) \hookrightarrow [q_n]
\otimes Y \big)
\]
\[
= [q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow
\ldots \twoheadrightarrow \mathrm{im}([q] \to [q_n]) \otimes Y
\]
\[
= g([q] \to [q_0] \to \ldots \to [q_n] \otimes Y)
\]
Hence, $\mathrm{id} \simeq fg$, as required.
\end{proof}
\begin{prop}\label{prop.epi}
If $A$ has augmentation ideal $I$, then
\[
HS_*(A) = H_*\left(\mathscr{Y}^{epi}_*A;\,k\right) =
H_*\left(k[ N(- \setminus \mathrm{Epi}\Delta S_+) ] \otimes_{\mathrm{Epi}\Delta S_+}
B_*^{sym_+}I;\,k \right).
\]
\end{prop}
\begin{proof}
The complex~(\ref{epiDeltaS_complex}) has been shown to be chain homotopy equivalent to
the complex $\mathscr{Y}^+_*A$, which by Thm.~\ref{thm.SymHom_plusComplex},
computes $HS_*(A)$.
\end{proof}
\begin{rmk}
The condition that $A$ have an augmentation ideal may be lifted (as Richter conjectures),
if it can be shown that $N(\mathrm{Epi}\Delta S)$ is contractible. As partial
progress along these lines, it can be shown that $N(\mathrm{Epi}\Delta S)$ is
simply-connected.
\end{rmk}
\chapter{A SPECTRAL SEQUENCE FOR $HS_*(A)$}\label{chap.specseq}
\section{Filtering by Number of Strict Epimorphisms}\label{sec.specseq}
In this chapter, fix a unital associative algebra $A$ over commutative
ground ring $k$. We also assume $A$ comes equipped with an augmentation,
and denote the augmentation ideal by $I$.
Let $\mathscr{Y}^{epi}_*A$ be the complex~(\ref{epiDeltaS_complex}).
Since $A$ is fixed, it will suffice to use
the notation $\mathscr{Y}^{epi}_*$ in place of $\mathscr{Y}^{epi}_*A$.
As we have seen above in Section~\ref{sec.epideltas}, $H_*(\mathscr{Y}^{epi}) = HS_*(A)$.
Consider a
filtration of $\mathscr{Y}^{epi}_*$
by number of strict epimorphisms, or \textit{jumps}:
\[
\mathscr{F}_p\mathscr{Y}^{epi}_q \qquad \textrm{is generated by}
\]
\[
\left\{[m_0] \to [m_{1}] \to \ldots \to [m_q] \otimes Y,\, \textrm{where
$m_{i-1} > m_{i}$ for no more than $p$ distinct values of $i$}\right\}.
\]
The face maps of $\mathscr{Y}^{epi}_*$ only delete morphisms or compose morphisms,
so this filtration is compatible with the differential of $\mathscr{Y}^{epi}_*$.
The filtration quotients are easily described:
\[
E^0_{p,q} := \mathscr{F}_p\mathscr{Y}^{epi}_q /
\mathscr{F}_{p-1}\mathscr{Y}^{epi}_q \qquad
\textrm{is generated by}
\]
\[
\left\{[m_0]\to [m_1] \to \ldots \to [m_q] \otimes Y,\, \textrm{where
$m_{i-1} > m_{i}$ for exactly $p$ distinct values of $i$}\right\}.
\]
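For instance (a small illustration, not needed in what follows), the chain
\[
[2] \twoheadrightarrow [1] \stackrel{\cong}{\to} [1] \twoheadrightarrow [0] \otimes Y,
\qquad Y \in I^{\otimes 3},
\]
in which the middle morphism is an isomorphism of $[1]$, has $q = 3$ and exactly two jumps;
it therefore lies in $\mathscr{F}_2\mathscr{Y}^{epi}_3$ but not in
$\mathscr{F}_1\mathscr{Y}^{epi}_3$, and so determines a generator of $E^0_{2,3}$.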
The induced differential on $E^0_{p,q}$ is of bidegree $(0,-1)$, so we may form
a spectral sequence with $E^1_{p,q} = H_{p+q}( E^0_{p,*} )$ (cf. \cite{Mc},
\cite{S}).
\begin{lemma}\label{lem.E1_term}
There are chain maps (one for each $p$):
\[
E^0_{p,*} \to \bigoplus_{m_0 > \ldots > m_p} \bigg(
I^{\otimes(m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big]
\otimes_{k\Sigma_{m_p + 1}}
E_*\Sigma_{m_p + 1}
\bigg),
\]
inducing isomorphisms in homology:
\[
E^1_{p,q} \cong \bigoplus_{m_0 > \ldots > m_p} H_q\bigg( \Sigma_{m_p+1}^{\mathrm{op}}
\; ; \; I^{\otimes(m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big]
\bigg).
\]
Here, we use the convention that $I^{\otimes 0} = k$, and $\Sigma_0 \cong 1$, the
trivial group.
\end{lemma}
We will begin by defining two related chain complexes:
Denote by $\mathscr{B}_*^{(m_0, \ldots, m_p)}$, the chain complex:
\[
\bigoplus \Big(
[m_0] \stackrel{\cong}{\to} \ldots \stackrel{\cong}{\to} [m_{0}]
\searrow [m_{1}] \stackrel{\cong}{\to} \ldots
\stackrel{\cong}{\to} [m_{p-1}] \searrow [m_p] \otimes
I^{\otimes(m_0+1)}
\Big),
\]
where the sum extends over all such chains that begin with $0$ or more isomorphisms
of $[m_0]$, followed by a strict epimorphism
\mbox{$[m_{0}] \twoheadrightarrow [m_{1}]$}, followed by $0$ or more isomorphisms of
$[m_{1}]$, followed by a strict epimorphism \mbox{$[m_{1}] \twoheadrightarrow [m_{2}]$},
etc., and the last morphism must be a strict epimorphism \mbox{$[m_{p-1}]
\twoheadrightarrow [m_{p}]$}.
$\mathscr{B}_*^{(m_0, \ldots, m_p)}$ is a subcomplex of $E^0_{p,*}$ with the
same induced differential, and there is a $\Sigma^{\mathrm{op}}_{m_p+1}$-action given by
postcomposing the final strict epimorphism of a chain with $g \in \Sigma^{\mathrm{op}}_{m_p+1}$,
regarded as an automorphism of $[m_p]$.
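For example (again only as an illustration), take $p = 1$ and $(m_0, m_1) = (2, 1)$. Then
$\mathscr{B}_1^{(2,1)}$ is spanned by chains $[2] \twoheadrightarrow [1] \otimes Y$ with
$Y \in I^{\otimes 3}$, a degree-$2$ generator has the form
$[2] \stackrel{g}{\to} [2] \twoheadrightarrow [1] \otimes Y$ for an isomorphism $g$ of $[2]$
(an element of $\Sigma_3^{\mathrm{op}}$), and the $\Sigma_2^{\mathrm{op}}$-action postcomposes
the final strict epimorphism with an automorphism of $[1]$.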
Denote by $\mathscr{M}_*^{(m_0,\ldots, m_p)}$, the
chain complex consisting of $0$ in degrees different from $p$, and
\[
\mathscr{M}_p^{(m_0, \ldots, m_p)} \;:=\; I^{\otimes(m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big],
\]
the coefficient group that shows up in Lemma~\ref{lem.E1_term}. This complex
has trivial differential.
Now, $\mathscr{B}_p^{(m_0, \ldots, m_p)}$ is generated by elements of the form
\[
[m_0] \searrow [m_{1}] \searrow \ldots \searrow [m_p] \otimes Y,
\]
where the chain consists entirely of strict epimorphisms of $\Delta S$. Observe that
\[
\mathscr{B}_p^{(m_0, \ldots, m_p)} =
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{0}], [m_1] ) \big] \otimes \ldots \otimes
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big]
\otimes
I^{\otimes(m_0+1)}
\]
\begin{equation}\label{eq.B_p}
\cong
I^{\otimes(m_0+1)} \otimes
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{0}], [m_1] ) \big] \otimes \ldots \otimes
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big]
\end{equation}
as \mbox{$k$-module}. Now, each $k\big[\mathrm{Epi}_{\Delta S_+}
( [m], [n] ) \big]$ is a
\mbox{$(k\Sigma^\mathrm{op}_{n+1})$-$(k\Sigma^\mathrm{op}_{m+1})$-bimodule}.
View $\sigma \in \Sigma_{n+1}^\mathrm{op} = \mathrm{Aut}_{\Delta S_+}([n])$ and
$\tau \in \Sigma_{m+1}^\mathrm{op} = \mathrm{Aut}_{\Delta S_+}([m])$ as automorphisms. Then
the action of $\sigma$, resp., $\tau$, is by postcomposition, resp., precomposition. The
bimodule structure,
$(\sigma . \phi) . \tau = \sigma . (\phi . \tau)$,
follows easily from associativity of composition in $\Delta S_+$,
\mbox{$(\sigma \circ \phi)\circ \tau = \sigma \circ (\phi \circ \tau)$}.
We shall use the equivalent interpretation of $k\big[\mathrm{Epi}_{\Delta S_+}
( [m], [n] ) \big]$ as \mbox{$(k\Sigma_{m+1})$-$(k\Sigma_{n+1})$-bimodule}.
Explicitly, an element of $\mathrm{Epi}_{\Delta S_+}([m],[n])$ is a pair $(\psi, g)$, with
$\psi \in \mathrm{Epi}_{\Delta_+}([m],[n])$ and $g \in \Sigma_{m+1}^\mathrm{op}$, so for
$\tau \in \Sigma_{m+1}$ and $\sigma \in \Sigma_{n+1}$,
\[
(\psi, g).\sigma \;=\; (\mathrm{id}, \sigma) \circ (\psi, g) \;=\;
(\psi^\sigma, \sigma^\psi \circ g) \;=\; (\psi^\sigma, g\sigma^\psi),
\]
\[
\tau.(\psi, g) \;=\; (\psi, g) \circ (\mathrm{id}, \tau) \;=\;
(\psi, g \circ \tau) \;=\; (\psi, \tau g).
\]
Also, since $B_*^{sym_+}I$ is a \mbox{$\Delta S_+$-module}, we may view it as a right
\mbox{$\Delta S_+^\mathrm{op}$-module}, hence $B_{m_0}^{sym_+}I = I^{\otimes(m_0+1)}$ is a
right \mbox{$k\Sigma_{m_0+1}$-module}.
With this in mind,~(\ref{eq.B_p}) becomes a \mbox{$k\Sigma^\mathrm{op}_{m_p + 1}$-module},
where the action is the right action of $k\Sigma_{m_p+1}$ on the last tensor
factor by postcomposition, and the isomorphism given above respects this action.
Consider the $k$-module:
\begin{equation}\label{eq.M_alt}
M \;:=\; I^{\otimes(m_0+1)} \otimes_{kG_0}
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{0}], [m_1] ) \big]
\otimes_{kG_{1}} \ldots \otimes_{kG_{p-1}}
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big],
\end{equation}
where $G_i$ is the group $\Sigma_{m_i + 1}$. I claim that $M$ is isomorphic
to $\mathscr{M}_p^{(m_0, \ldots, m_p)}$ as $k$-module. Indeed, any element
\[
Y \otimes (\psi_p, g_p)\otimes \ldots \otimes (\psi_1, g_1)
\]
in $M$ is equivalent to one in which all $g_i$ are identities by writing $(\psi_p, g_p) =
g_p.(\psi_p, \mathrm{id})$ then commuting $g_p$ over the tensor to the left and iterating
this process to the leftmost tensor factor. Thus, we may write the element uniquely as
\[
Z \otimes \phi_1 \otimes \ldots \otimes \phi_p,
\]
where all tensors are now over $k$, and all morphisms are in $\mathrm{Epi}\Delta S_+$.
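In the simplest case $p = 1$ (recorded here only to illustrate the rewriting), an element
$Y \otimes (\psi, g)$ of $I^{\otimes(m_0+1)} \otimes_{kG_0}
k\big[\mathrm{Epi}_{\Delta S_+}([m_0],[m_1])\big]$ satisfies
\[
Y \otimes (\psi, g) \;=\; Y \otimes g.(\psi, \mathrm{id}) \;=\; Y.g \otimes (\psi, \mathrm{id}),
\]
so it is represented uniquely by the pair consisting of $Y.g$ and $\psi$.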
This isomorphism also allows us to view $\mathscr{M}_p^{(m_0, \ldots, m_p)}$ as a
\mbox{$\Sigma^\mathrm{op}_{m_p+1}$-module}.
The action is defined as the right action of $\Sigma_{m_p+1}$ on the
tensor factor $k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] )\big]$. We
then use the isomorphism to express this action in terms of
$\mathscr{M}_p^{(m_0, \ldots, m_p)}$.
Let $\gamma_*$ be a chain map $\mathscr{B}_*^{(m_0, \ldots, m_p)} \to
\mathscr{M}_*^{(m_0, \ldots, m_p)}$ defined as the zero map
in degrees different from $p$,
and the canonical map
\[
I^{\otimes(m_0+1)} \otimes k\big[\mathrm{Epi}_{\Delta S_+}( [m_0], [m_1] ) \big]
\otimes \ldots
\otimes k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big]
\longrightarrow
\]
\[
I^{\otimes(m_0+1)} \otimes_{kG_0}
k\big[\mathrm{Epi}_{\Delta S_+}( [m_0], [m_1] ) \big] \otimes_{kG_1} \ldots
\otimes_{kG_{p-1}} k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big],
\]
in degree $p$. $\gamma_*$ is $\Sigma_{m_p + 1}^\mathrm{op}$-equivariant due to
an elementary property of bimodules:
\begin{prop}
Suppose $R$ and $S$ are $k$-algebras, $A$ is a right \mbox{$S$-module}, and $B$ is an
\mbox{$S$-$R$-bimodule}, then the canonical map $A \otimes_k B \to A \otimes_S B$ is a map
of right $R$-modules.
\end{prop}
Our aim now is to show that $\gamma_*$ induces an isomorphism in homology.
\begin{prop}\label{prop.gamma_iso}
$\gamma_*$ induces an isomorphism
\[
H_*\big( \mathscr{B}_*^{(m_0,\ldots, m_p)} \big) \longrightarrow
H_*\big( \mathscr{M}_*^{(m_0,\ldots, m_p)} \big)
\]
\end{prop}
\begin{proof}
We shall prove this by induction on $p$.
Suppose $p=0$. Observe that
\[
\mathscr{B}_n^{(m_0)} = \left\{
\begin{array}{ll}
I^{\otimes(m_0+1)}, &\qquad n = 0 \\
0, &\qquad n > 0.
\end{array}\right.
\]
Moreover, $\gamma_*$ is the identity $\mathscr{B}_*^{(m_0)} \to \mathscr{M}_*^{(m_0)}$.
Next, for the induction step, we assume $\gamma_* : \mathscr{B}_*^{(m_0, \ldots, m_{p-1})}
\to \mathscr{M}_*^{(m_0, \ldots, m_{p-1})}$ induces an isomorphism in homology for any
string of $p$ numbers $m_0 > m_1 > \ldots > m_{p-1}$. Now assume $m_p < m_{p-1}$.
Let $G = \Sigma_{m_{p-1}+1}$. As graded $k$-module, there is a
degree-preserving isomorphism:
\begin{equation}\label{eq.B_tensor_iso}
\theta_* \;:\; \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]
\longrightarrow
\mathscr{B}_*^{(m_0, \ldots, m_{p-1}, m_p)}
\end{equation}
where the degree of an element $u \otimes (g_0, \ldots, g_n) \otimes (g, \phi)$
is defined recursively (note, all elements of $\mathscr{B}_*^{(m_0)}$
are of degree $0$):
\[
deg\left(u \otimes (g_0, \ldots, g_n) \otimes (g, \phi)\right) := deg(u) + n + 1.
\]
Here, we are using the resolution $E_*G$ of $k$ as $kG$-module
defined by $E_nG = k\big[ \prod^{n+1}G \big]$,
with $G$-action $g . (g_0, g_1, \ldots, g_n) = (gg_0, g_1, \ldots, g_n)$, and face
maps
\[
\partial_i(g_0, g_1, \ldots, g_n) = \left\{\begin{array}{ll}
(g_0, \ldots, g_ig_{i+1}, \ldots, g_n), & 0 \leq i < n\\
(g_0, g_1, \ldots, g_{n-1}), & i = n
\end{array}\right.
\]
$\theta_*$ is defined on generators by:
\[
\theta_* \;:\; u \otimes (g_0, g_1, \ldots, g_n) \otimes (g, \phi) \mapsto
\]
\[
(u.g_0) \,\ast\, [m_{p-1}] \stackrel{ g_1 }{\longrightarrow} [m_{p-1}]
\stackrel{ g_2 }{\longrightarrow} \ldots \stackrel{ g_n }
{\longrightarrow} [m_{p-1}] \stackrel { (\phi, g) }{\longrightarrow} [m_p],
\]
where $u . g_0$ is the right action defined above for
$\mathscr{B}_*^{(m_0, \ldots, m_p)}$, and
$v \ast [n] \to \ldots \to [m]$ is the concatenation of chains ($v$ must have final
target $[n]$). $\theta_*$ is well-defined since for $h \in G$,
\[
u . h \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \stackrel{\theta_*}{\mapsto}
\]
\[
\big((u.h).g_0\big) \,\ast\, [m_{p-1}] \stackrel{ g_1 }{\longrightarrow} [m_{p-1}]
\stackrel{ g_2 }{\longrightarrow} \ldots \stackrel{ g_n }
{\longrightarrow} [m_{p-1}] \stackrel { (\phi, g) }{\longrightarrow} [m_p],
\]
while on the other hand,
\[
u \otimes h .(g_0, g_1, \ldots, g_n) \otimes (g, \phi) \;=\;
u \otimes (hg_0, g_1, \ldots, g_n) \otimes (g, \phi) \stackrel{\theta_*}{\mapsto}
\]
\[
\left(u.(hg_0)\right) \,\ast\, [m_{p-1}] \stackrel{ g_1 }{\longrightarrow} [m_{p-1}]
\stackrel{ g_2 }{\longrightarrow} \ldots \stackrel{ g_n }
{\longrightarrow} [m_{p-1}] \stackrel { (\phi, g) }{\longrightarrow} [m_p].
\]
These two images agree, since $(u.h).g_0 = u.(hg_0)$; hence $\theta_*$ is well-defined.
If we define a right action of $\Sigma_{m_p+1}$ on
$k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]$ via
\[
(g, \phi) . h \;\mapsto \; \big( gh^{\phi}, \phi^h \big),
\]
then $\theta_*$ is a map of right $k\Sigma_{m_p+1}$-modules, since the action defined above
simply amounts to post-composition of the morphism $(\phi, g)$ with $h$.
$\theta_*$ has a two-sided $\Sigma_{m_p+1}^{\mathrm{op}}$-equivariant inverse, defined by:
\[
u \ast [m_{p-1}] \stackrel{g_1}{\to} [m_{p-1}] \stackrel{g_2}{\to} \ldots
\stackrel{g_n}{\to} [m_{p-1}] \stackrel{ (\phi, g) }{\to} [m_p]
\]
\[
\mapsto \; u \otimes (\mathrm{id}, g_1, g_2, \ldots, g_n)
\otimes (g, \phi).
\]
Observe that $\mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]$ is a tensor product of
two chain complexes, and thus a chain complex in its own right. The differential
is given by:
\[
\partial\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big)
\;=\; \partial(u) \otimes (g_0, \ldots, g_n) \otimes (g, \phi) +
(-1)^{deg(u)} u \otimes \partial\big((g_0, \ldots, g_n) \otimes (g, \phi)\big).
\]
Note, the $n^{th}$ face map of $E_nG \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]$ is defined by:
\[
\partial_n\big( (g_0, \ldots, g_n) \otimes (g, \phi) \big)
= (g_0, \ldots, g_{n-1}) \otimes ( g_ng, \phi).
\]
We shall verify that $\theta_*$ is a chain map with respect to this differential.
Let $u \otimes (g_0, \ldots, g_n) \otimes (g, \phi)$ be a chain
with $deg(u) = p$. Denote by $\partial_i$, $(0\leq i \leq p+n+1)$, the $i^{th}$
face map in either chain complex. If $i < p$, then clearly $\partial_i \theta_* =
\theta_* \partial_i$, since this face map acts only on $u$.
Suppose now that $i = p$, and let $(\psi, h)$ be the final morphism in the chain $u$.
\[
\big(\ldots \to [m_{p-2}] \stackrel{(\psi, h)}{\longrightarrow} [m_{p-1}]\big) \otimes
(g_0, g_1, \ldots, g_n) \otimes (g, \phi)
\]
\[
\stackrel{\partial_p}{\mapsto}\quad \big(\ldots \to [m_{p-2}] \stackrel{(\psi, h)}
{\longrightarrow} [m_{p-1}]\big) \otimes ( g_0g_1, g_2, \ldots, g_n) \otimes (g, \phi)
\]
\multiply \dgARROWLENGTH by3
\divide \dgARROWLENGTH by2
\[
\stackrel{\theta_*}{\mapsto}\quad \left(\ldots \to
\begin{diagram}
\node{ [m_{p-2}] }
\arrow{e,t}{ (\psi, h).(g_0g_1) }
\node{ [m_{p-1}] }
\end{diagram}
\stackrel{ g_2 }{\longrightarrow} \ldots \stackrel{ g_n }
{\longrightarrow} [m_{p-1}] \stackrel{ (\phi, g) }{\longrightarrow} [m_p]\right).
\]
On the other hand,
\[
\big(\ldots \to [m_{p-2}] \stackrel{(\psi, h)}{\longrightarrow} [m_{p-1}]\big) \otimes
(g_0, g_1, \ldots, g_n) \otimes (g, \phi)
\]
\[
\stackrel{\theta_*}{\mapsto}\quad \left(\ldots \to
\begin{diagram}
\node{ [m_{p-2}] }
\arrow{e,t}{ (\psi, h).g_0 }
\node{ [m_{p-1}] }
\end{diagram}
\stackrel{ g_1 }{\longrightarrow} \ldots
\stackrel{ g_n }{\longrightarrow} [m_{p-1}] \stackrel{(\phi, g)}
{\longrightarrow} [m_p]\right)
\]
\[
\stackrel{\partial_p}{\mapsto}\quad \left( \ldots \to
\begin{diagram}
\node{ [m_{p-2}] }
\arrow{e,t}{ \big((\psi, h).g_0\big).g_1 }
\node{ [m_{p-1}] }
\end{diagram}
\stackrel{ g_2 }{\longrightarrow}
\ldots
\stackrel{ g_n }{\longrightarrow} [m_{p-1}] \stackrel{(\phi, g)}
{\longrightarrow} [m_p]\right).
\]
\multiply \dgARROWLENGTH by2
\divide \dgARROWLENGTH by3
Next, suppose $i = p + j$ for some $1 \leq j < n$. In this case, $\partial_i$ has
the effect of combining $g_i$ and $g_{i+1}$ into $g_ig_{i+1}$, for either chain, so
clearly $\theta_*\partial_i = \partial_i\theta_*$.
Finally, for $i = p+n$,
\[
\theta_*\partial_{p+n}\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big)
\]
\[
= \theta_*\big( u \otimes (g_0, \ldots, g_{n-1}) \otimes (g_ng,\phi) \big)
\]
\[
= (u.g_0) \ast [m_{p-1}] \stackrel{g_1}{\to}
\ldots \stackrel{ g_{n-1} }{\longrightarrow} [m_{p-1}]
\stackrel{ (\phi,\, g_ng) }{\longrightarrow} [m_p],
\]
while
\[
\partial_{p+n}\theta_*\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big)
\]
\[
= \partial_{p+n}\big( (u.g_0) \ast [m_{p-1}] \stackrel{g_1}{\to}
\ldots \stackrel{ g_n }{\to} [m_{p-1}] \stackrel{ (\phi,\, g) }{\longrightarrow} [m_p] \big)
\]
\[
= (u.g_0) \ast [m_{p-1}] \stackrel{g_1}{\to}
\ldots \stackrel{ g_{n-1} }{\to} [m_{p-1}]
\stackrel{ g_n.(\phi,\, g) }{\longrightarrow} [m_p]
\]
\[
= (u.g_0) \ast [m_{p-1}] \stackrel{g_1}{\to}
\ldots \stackrel{ g_{n-1} }{\to} [m_{p-1}]
\stackrel{ (\phi,\, g_ng) }{\longrightarrow} [m_p]
\]
Hence, the map $\theta_*$ is a chain isomorphism.
The next step in this proof is to prove a chain homotopy equivalence,
\[
\mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]
\]
\[
\stackrel{\simeq}{\longrightarrow} \;
\mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes
k\big[\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]
\]
To that end, we shall define chain maps $F_*$ and $G_*$ between the two complexes. Let
\[
\mathscr{U}_* \;:=\; \mathscr{B}_*^{(m_0, \ldots, m_{p-1})}, \;\textrm{and} \qquad
S \;:=\; \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p]).
\]
Define
\[
F_* \;:\; \mathscr{U}_* \otimes_{kG} E_*G \otimes k[ G \times S ]
\longrightarrow \mathscr{U}_* \otimes k[S],
\]
\[
F_*\big( u \otimes (g_0) \otimes (g, \phi) \big) \;:=\;
u.(g_0g) \otimes \phi,
\]
\[
F_*\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big) \;:=\; 0,
\qquad \textrm{if $n > 0$}.
\]
The fact that $F_*$ is well-defined is trivial to verify (we only need to check
for $n=0$, since otherwise $F_* = 0$):
\[
u.h \otimes (g_0) \otimes (g, \phi) \mapsto (u.h).(g_0g) \otimes \phi =
u.(hg_0g) \otimes \phi,
\]
while
\[
u \otimes h.(g_0) \otimes (g, \phi) = u \otimes (hg_0) \otimes (g, \phi)
\mapsto u.(hg_0g) \otimes \phi.
\]
Next, let
\[
G_* \;:\; \mathscr{U}_* \otimes k[S] \to \mathscr{U}_* \otimes_{kG} E_*G \otimes
k[ G \times S ]
\]
be the composite
\[
\mathscr{U}_* \otimes k[S] \,\stackrel{\cong}{\to}\, \mathscr{U}_* \otimes_{kG} k[G]
\otimes k[S] \,=\, \mathscr{U}_* \otimes_{kG} E_0G \otimes k[S]
\]
\[
\stackrel{j}{\rightarrow}\,\mathscr{U}_* \otimes_{kG} E_0G \otimes k[ G \times S ]
\,\stackrel{inc}{\longrightarrow}\,
\mathscr{U}_* \otimes_{kG} E_*G \otimes k[ G \times S ],
\]
where $j$ is induced by the map sending a generator $\phi \in S$ to
$(\mathrm{id}, \phi) \in G \times S$, and
$inc$ is induced by the inclusion $E_0G \hookrightarrow E_*G$.
Observe,
\[
F_*G_*( u \otimes \phi) \;=\; F_*\big( u \otimes (\mathrm{id}) \otimes
(\mathrm{id}, \phi)\big) \;=\; (u.\mathrm{id}) \otimes \phi.
\]
Thus, $F_*G_*$ is the identity. I claim $G_*F_* \simeq \mathrm{id}$.
The desired homotopy will be given by:
\[
h_* \;:\; u \otimes (g_0, \ldots, g_n) \otimes (g, \phi)
\mapsto (-1)^{deg(u) + n}u \otimes (g_0, \ldots, g_n, g)\otimes
(\mathrm{id}, \phi).
\]
First, observe:
\[
G_*F_*\big( u \otimes (g_0) \otimes (g, \phi) \big) \,=\,
G_*( u.(g_0g) \otimes \phi ) \,=\, u.(g_0g) \otimes (\mathrm{id}) \otimes
(\mathrm{id}, \phi),
\]
\[
G_*F_*\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big) \,=\, G_*(0)
\,=\, 0, \qquad \textrm{for $n>0$}.
\]
Hence, there are two cases we must explore. For $n=0$,
\[
h\partial\big( u \otimes (g_0) \otimes (g, \phi) \big)
\;=\; h\big( \partial u \otimes (g_0) \otimes (g, \phi) \big)
\]
\[
=\; (-1)^{deg(u)-1} \partial u \otimes (g_0, g) \otimes (\mathrm{id}, \phi)
\]
On the other hand,
\[
\partial h\big( u \otimes (g_0) \otimes (g, \phi) \big)
\;=\; \partial \big( (-1)^{deg(u)}u \otimes (g_0, g) \otimes (\mathrm{id}, \phi)\big)
\]
\[
=\; (-1)^{deg(u)} \partial u \otimes (g_0, g) \otimes (\mathrm{id}, \phi)
+ (-1)^{2 deg(u)} u \otimes (g_0g) \otimes (\mathrm{id}, \phi)
- (-1)^{2 deg(u)} u \otimes (g_0) \otimes (g, \phi)
\]
\[
=\; (-1)^{deg(u)} \partial u \otimes (g_0, g) \otimes (\mathrm{id}, \phi)
+ u \otimes (g_0g) \otimes (\mathrm{id}, \phi)
- u \otimes (g_0) \otimes (g, \phi)
\]
So,
\[
\big(h\partial + \partial h\big)\big( u \otimes (g_0) \otimes (g, \phi) \big)
= u.(g_0g) \otimes (\mathrm{id}) \otimes (\mathrm{id}, \phi) -
u \otimes (g_0) \otimes (g, \phi)
\]
\[
= \big(G_*F_* - \mathrm{id}\big)\big( u \otimes (g_0) \otimes (g, \phi) \big).
\]
The case $n>0$ is handled similarly:
\[
u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \quad\stackrel{\partial}{\mapsto}
\]
\[
\partial u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \,+
\]
\[
(-1)^{deg(u)}\Big[\sum_{j=0}^{n - 1} \Big((-1)^j u \otimes (g_0, \ldots,
g_jg_{j+1}, \ldots, g_n) \otimes (g, \phi)\Big) \,+
\]
\[
(-1)^n u \otimes (g_0, \ldots, g_{n-1}) \otimes (g_n g, \phi)\Big]
\]
\[
\stackrel{h}{\mapsto}\quad (-1)^{deg(u) + n - 1}
\partial u \otimes (g_0, \ldots, g_n, g) \otimes (\mathrm{id}, \phi) \,+
\]
\[
(-1)^{2deg(u) + n - 1}\sum_{j=0}^{n-1} \Big((-1)^j u \otimes (g_0, \ldots,
g_jg_{j+1}, \ldots, g_n, g) \otimes (\mathrm{id}, \phi)\Big) \,+
\]
\[
(-1)^{2deg(u) + 2n - 1} u \otimes (g_0, \ldots, g_{n-1},
g_n g) \otimes (\mathrm{id},
\phi)
\]
\[
= -(-1)^{deg(u) + n}
\partial u \otimes (g_0, \ldots, g_n, g) \otimes (\mathrm{id}, \phi) \,+
\]
\[
\sum_{j=0}^{n-1} \Big((-1)^{j+n-1} u \otimes (g_0, \ldots,
g_jg_{j+1}, \ldots, g_n, g) \otimes (\mathrm{id}, \phi)\Big) \,+
\]
\begin{equation}\label{eq.lhs_hom_tensor}
(-1)u \otimes (g_0, \ldots, g_{n-1}, g_n g) \otimes (\mathrm{id}, \phi).
\end{equation}
On the other hand,
\[
u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \;\stackrel{h}{\mapsto}\;
(-1)^{deg(u) + n}u \otimes (g_0, \ldots, g_n , g) \otimes (\mathrm{id}, \phi)
\]
\[
\stackrel{\partial}{\mapsto} \quad(-1)^{deg(u) + n}\Big[\partial u \otimes
(g_0, \ldots, g_n, g) \otimes (\mathrm{id}, \phi) \,+
\]
\[
\sum_{j=0}^{n-1} \Big( (-1)^{deg(u) + j} u \otimes
(g_0, \ldots, g_jg_{j+1}, \ldots, g_n,
g) \otimes (\mathrm{id}, \phi)\Big)\,+
\]
\[
(-1)^{deg(u)+n}u \otimes (g_0, \ldots, g_{n-1}, g_ng) \otimes (\mathrm{id}, \phi)
\,+\, (-1)^{deg(u)+n+1} u \otimes (g_0, \ldots, g_n) \otimes (g, \phi)\Big]
\]
\[
=\;(-1)^{deg(u) + n}\partial u \otimes
(g_0, \ldots, g_n, g) \otimes (\mathrm{id}, \phi) \,+
\]
\[
\sum_{j=0}^{n-1} \Big( (-1)^{j+n} u \otimes
(g_0, \ldots, g_jg_{j+1}, \ldots, g_n, g) \otimes (\mathrm{id}, \phi)\Big)\,+
\]
\begin{equation}\label{eq.rhs_hom_tensor}
u \otimes (g_0, \ldots, g_{n-1}, g_ng) \otimes (\mathrm{id}, \phi)
\,-\, u \otimes (g_0, \ldots, g_n) \otimes (g, \phi).
\end{equation}
Now, adding eq.~(\ref{eq.rhs_hom_tensor}) to eq.~(\ref{eq.lhs_hom_tensor}) yields
$(-1)u \otimes (g_0, \ldots, g_n) \otimes (g, \phi)$, proving the relation
\[
h\partial + \partial h = G_*F_* - \mathrm{id}, \qquad \textrm{for $n>0$.}
\]
To complete the proof, simply observe that every map in the following is either a
chain isomorphism or a homotopy equivalence (each of which is also $\Sigma_{m_p+1}
^\mathrm{op}$-equivariant):
\begin{equation}\label{eq.chain_of_equiv}
\begin{diagram}
\node{ \mathscr{B}_*^{(m_0, \ldots, m_p)} }
\arrow{s,lr}{\cong}{\theta_*^{-1}}\\
\node{ \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] }
\arrow{s,lr}{\simeq}{F_*}\\
\node{ \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes
k\big[\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] }
\arrow{s,lr}{\simeq}{\gamma_*, \;\textrm{by inductive hypothesis}}\\
\node{ \mathscr{M}_*^{(m_0, \ldots, m_{p-1})} \otimes
k\big[\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] }
\arrow{s,l}{=}\\
\node{ I^{\otimes (m_0+1)} \otimes k\Big[ \prod_{i=1}^{p-1}
\mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big]
\otimes
k\big[\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] }
\arrow{s,l}{\cong}\\
\node{ I^{\otimes (m_0+1)} \otimes k\Big[ \prod_{i=1}^{p}
\mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big] }
\arrow{s,l}{=}\\
\node{ \mathscr{M}_*^{(m_0, \ldots, m_p)} }
\end{diagram}
\end{equation}
We must verify that this composition is indeed the map $\gamma_*$. Denote by
$\gamma'_*$, the composition defined by (\ref{eq.chain_of_equiv}).
If $u \in
\mathscr{B}_*^{(m_0, \ldots, m_p)}$ has degree greater than $p$, then there is some
isomorphism $g$ showing up in the chain. If $g$ is an isomorphism of any $[m_i]$ for
$i < p-1$, then $\gamma'_*(u) = 0$ since then the tensor factor of $u$ in
$\mathscr{B}_*^{(m_0, \ldots, m_{p-1})}$ would have degree greater than $p-1$, hence
$\gamma_*$ would send this element to $0$ in $\mathscr{M}_*^{(m_0, \ldots, m_{p-1})}$.
If, on the other hand, $g$ is an isomorphism of $[m_{p-1}]$, then $F_*\theta_*^{-1}(u)
= 0$, since there would be a factor in $E_*G$ of degree greater than $0$. Thus,
$\gamma'_*(u) = 0$ for any $u$ of degree different from $p$.
Now, if $u$ is of degree $p$,
\[
u \;=\; Y \otimes (\psi_1, g_1) \otimes (\psi_2, g_2) \otimes \ldots \otimes
(\psi_p, g_p)
\]
\[
\stackrel{\theta_*^{-1}}{\mapsto} \quad
\big[Y \otimes (\psi_1, g_1) \otimes \ldots \otimes (\psi_{p-1}, g_{p-1}) \big] \otimes
(\mathrm{id}) \otimes (g_p, \psi_p)
\]
\[
\stackrel{F_*}{\mapsto} \quad
\big[Y \otimes (\psi_1, g_1) \otimes \ldots \otimes (\psi_{p-1}, g_{p-1}).g_p \big]
\otimes \psi_p
\]
It should be clear that applying $\gamma_*$ to the $\mathscr{B}_*^{(m_0, \ldots,
m_{p-1})}$-factor of the tensor product would have the same effect as $\gamma_*$
on the original chain, $u$.
\end{proof}
Now, we may prove Lemma~\ref{lem.E1_term}. Let $G = \Sigma_{m_p + 1}$. Observe,
\[
E^0_{p,q} \cong \bigoplus_{m_0 > \ldots > m_p}
\bigoplus_{s+t = q} \mathscr{B}_s^{(m_0, \ldots, m_p)}
\otimes_{kG} E_tG,
\]
with differential corresponding exactly to the vertical differential defined for $E^0$.
Note, the outer direct sum respects the differential $d^0$, so the $E^1$ term is given by:
\begin{equation}\label{eq.E1_expression}
E^1_{p,q} = H_{p+q}(E^0_{p,*}) \cong
\bigoplus_{m_0 > \ldots > m_p} H_{p+q}\big(
\mathscr{B}_*^{(m_0, \ldots, m_p)}
\otimes_{kG} E_*G\big),
\end{equation}
where we view $\mathscr{B}_*^{(m_0, \ldots, m_p)} \otimes_{kG} E_*G$ as a double complex.
In what follows, let $(m_0, \ldots, m_p)$ be fixed.
In order to take the homology of the double complex, we set up another spectral
sequence. From the discussion above, the total differential is given by
\[
\partial_{total} = d^{v} + d^{h}, \quad \textrm{where}
\]
\[
d^{h}\big(u \otimes (g_0, \ldots, g_t)\big)
\;:=\; \partial_{B}(u) \otimes (g_0, \ldots, g_t), \quad \textrm{and}
\]
\[
d^{v}\big(u \otimes (g_0, \ldots, g_t)\big)
\;:=\; (-1)^{deg(u)} u \otimes \partial_{E}(g_0, \ldots, g_t),
\]
where $\partial_B$ and $\partial_E$ are the differentials previously mentioned for
$\mathscr{B}_*^{(m_0, \ldots, m_p)}$ and $E_*G$, respectively.
Thus, there is a spectral sequence $\{\overline{E}^r_{*,*}, d^r\}$ with
\[
\overline{E}^2 \cong H_{*,*}\Big( H\big( \mathscr{B}_*^{(m_0, \ldots, m_p)}
\otimes_{kG} E_*G,\, d^h \big),\,
d^v \Big),
\]
in the notation of McCleary (see~\cite{Mc}, Thm 3.10). Since this is a
first quadrant spectral sequence,
it must converge to $H_*\big(\mathscr{B}_*^
{(m_0, \ldots, m_p)}\otimes_{kG} E_*G\big)$. Let us examine what happens after taking
the horizontal differential. Let $t$ be fixed:
\[
\overline{E}^1_{*,t} = H_*\big( \mathscr{B}_*^{(m_0, \ldots, m_p)}
\otimes_{kG} E_tG,\, d^h \big)
\]
\[
\cong H_*( \mathscr{B}_*^{(m_0, \ldots, m_p)})\otimes_{kG} E_tG,
\]
since $E_tG$ is flat as left $kG$-module (in fact, $E_tG$ is free). Then,
by Prop.~\ref{prop.gamma_iso},
\[
\overline{E}^1_{*,t} \cong H_*( \mathscr{M}_*^{(m_0, \ldots, m_p)})\otimes_{kG} E_tG,
\]
\[
= \left\{\begin{array}{ll}
I^{\otimes (m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big]
\otimes_{kG} E_tG, &
\textrm{in degree $p$}\\
0, & \textrm{in degrees different from $p$}
\end{array}\right.
\]
So, the only groups that survive are concentrated in column $p$. Taking the vertical
differential now amounts to obtaining the $G^\mathrm{op}$-equivariant homology of
\[
I^{\otimes (m_0+1)} \otimes k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}],
[m_i]\big)\Big],
\]
so
\[
\overline{E}^2_{p,t} \cong H_t\bigg( G^{\mathrm{op}}
\; ; \; I^{\otimes (m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big] \bigg).
\]
Since $\overline{E}^2_{s,t} = 0$ for $s \neq p$, the sequence collapses here. Thus,
\[
H_{p+q}\big( \mathscr{B}_*^{(m_0, \ldots, m_p)} \otimes_{kG} E_*G\big) \cong
H_q\bigg( G^{\mathrm{op}} \; ; \; I^{\otimes (m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big] \bigg)
\]
Putting this information back into eq.~(\ref{eq.E1_expression}), we obtain the desired
isomorphism:
\[
E^1_{p,q} \cong
\bigoplus_{m_0 > \ldots > m_p} H_q\bigg( G^{\mathrm{op}} \; ; \; I^{\otimes (m_0+1)}
\otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big] \bigg).
\]
A final piece of information needed in order to use Lemma~\ref{lem.E1_term} for
computation is a
description of the horizontal differential $d^1_{p,q}$ on $E^1_{p,q}$. This map
is induced from the differential $d$ on $\mathscr{Y}_*$, and reduces the filtration
degree by $1$. Thus, it is the sum of face maps that combine strict epimorphisms.
Let
\[
[u] \in \bigoplus H_q\bigg( \Sigma_{m_p+1}^{\mathrm{op}}
\; ; \; I^{\otimes (m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big] \bigg)
\]
be represented by a chain:
\[
u = Y \otimes (\phi_1, \phi_2, \ldots, \phi_p) \otimes (g_0, \ldots, g_q).
\]
Then, the face maps are defined by:
\[
\partial_0(u) = (\phi_1)_*(Y) \otimes (\phi_2, \ldots, \phi_p) \otimes (g_0, \ldots, g_q),
\]
\[
\partial_i(u) = Y \otimes (\phi_1, \ldots, \phi_{i+1}\phi_i, \ldots, \phi_p)
\otimes (g_0, \ldots, g_q), \quad \textrm{for $0 < i < p$}.
\]
The last face map has the effect of removing the morphism $\phi_p$ by iteratively commuting
it past any group elements to
the right of it.
\[
\partial_p(u) = Y \otimes (\phi_1, \ldots, \phi_{p-1}) \otimes (g'_0, \ldots, g'_q),
\]
where
\[
g'_i = g_i^{\phi_p^{g_0g_1\ldots g_{i-1}}}.
\]
Note that $\partial_p$ involves a change of group from $\Sigma_{m_p+1}$ to $\Sigma_{m_{p-1}+1}$.
\begin{prop}
The spectral sequence $E^r_{p,q}$ above collapses at $r = 2$.
\end{prop}
\begin{proof}
This proof relies on the fact that the differential $d$ on $\mathscr{Y}_*$ cannot
reduce the filtration degree by more than $1$. Explicitly, we shall show that
$d^r \,:\, E^r_{p,q} \to E^r_{p-r,q+r-1}$ is trivial for $r \geq 2$.
$d^r$ is induced by $d$ in the following way. Let $Z_p^r = \{ x \in \mathscr{F}_p
\mathscr{Y}_* \,|\, d(x) \in \mathscr{F}_{p-r}\mathscr{Y}_* \}$. Then
$E^r_{p,*} = Z_p^r/(Z_{p-1}^{r-1} + dZ_{p+r-1}^{r-1})$. Now, $d$ maps
\[
Z_p^r \to Z_{p-r}^r
\]
and
\[
Z_{p-1}^{r-1} + dZ_{p+r-1}^{r-1} \to dZ_{p-1}^{r-1}.
\]
Hence, there is an induced map $\overline{d}$ making the square below commute. $d^r$
is obtained as the composition of $\overline{d}$ with a projection onto
$E^r_{p-r,*}$.
\[
\begin{diagram}
\node{Z_p^r}
\arrow{r,t}{d}
\arrow{s,l,A}{\pi_1}
\node{Z_{p-r}^r}
\arrow{s,r,A}{\pi_2}\\
\node{Z_p^r/(Z_{p-1}^{r-1} + dZ_{p+r-1}^{r-1})}
\arrow{s,=}
\arrow{e,t}{\overline{d}}
\node{Z_{p-r}^r/dZ_{p-1}^{r-1}}
\arrow{s,r,A}{\pi'}\\
\node{E_{p,*}^r}
\arrow{se,t}{d^r}
\node{Z_{p-r}^r/(Z_{p-r-1}^{r-1} + dZ_{p-1}^{r-1})}
\arrow{s,=}\\
\node[2]{E_{p-r,*}^r}
\end{diagram}
\]
In our case, $x \in Z_p^r$ is a sum of the form
\[
x = \sum_{q \geq 0} a_i\left( Y \otimes (f_1, f_2, \ldots, f_q) \right),
\]
where $a_i \neq 0$ for only finitely many $i$, and the sum extends over all symbols
$Y \otimes (f_1, f_2, \ldots, f_q)$ with $Y \in B_*^{sym}I$, the $f_j \in
\mathrm{Epi}\Delta S_+$ composable morphisms, and at most $p$ of the $f_j$ maps are
strict epimorphisms. The image of $x$ under $\pi_1$ looks like
\[
\pi_1(x) = \sum_{q \geq 0} a_i\left[ Y \otimes (f_1, f_2, \ldots, f_q) \right],
\]
where exactly $p$ of the $f_j$ maps are strictly epic. There are, of course,
other relations present as well -- those arising from modding out by $dZ_{p+r-1}^{r-1}$.
Consider, $\overline{d}\pi_1(x)$. This should be the result of lifting $\pi_1(x)$
to a representative in $Z_p^r$, then applying $\pi_2 \circ d$. One such representative
is:
\[
y = \sum_{q \geq 0} a_i\left( Y \otimes (f_1, f_2, \ldots, f_q) \right),
\]
in which each symbol $Y \otimes (f_1, f_2, \ldots, f_q)$ has exactly $p$
strict epimorphisms. Now, $d(y)$ is a sum
\[
d(y) = \sum_{q \geq 0} b_i\left( Z \otimes ( g_1, g_2, \ldots, g_{q-1}) \right),
\]
where each symbol $Z \otimes ( g_1, g_2, \ldots, g_{q-1})$ has either $p$ or $p-1$
strict epimorphisms, since $d$ only combines two morphisms at a time. Thus, if
$r \geq 2$, then $d(y) \in Z_{p-r}^r \Rightarrow d(y) = 0$. But then,
$\overline{d}\pi_1(x) = \pi_2d(y) = 0$, and $d^r = \pi'\overline{d}$ is the
zero map.
\end{proof}
\section{Implications in Characteristic 0}\label{sec.char0}
In this section, we shall assume that $k$ is a field of characteristic 0.
Then for any finite group $G$ and $kG$-module $M$, $H_q( G, M ) = 0$
for all $q > 0$ (see~\cite{B}, for example). Thus, by Lemma~\ref{lem.E1_term},
the $E^1$ term of the spectral sequence is concentrated in row $0$, and
\[
E^1_{p,0} \cong \bigoplus_{m_0 > \ldots > m_p} \bigg( I^{\otimes (m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big] \bigg)/
\Sigma_{m_p+1}^{\mathrm{op}},
\]
that is, the group of co-invariants of the coefficient group, under the right-action
of $\Sigma_{m_p+1}$.
Since $E^1$ is concentrated on a row, the spectral sequence collapses at this term.
Hence for the $k$-algebra $A$, with augmentation ideal $I$,
\begin{equation}\label{eq.symhom-char0}
HS_*(A) =
H_*\bigg( \bigoplus_{p\geq 0}
\bigoplus_{m_0 > \ldots > m_p}\Big(I^{\otimes (m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big]\Big) /
\Sigma_{m_p+1}^{\mathrm{op}},\; d^1\bigg).
\end{equation}
This complex is still rather unwieldy as the $E^1$ term is infinitely generated in each
degree. In the next chapter, we shall see another spectral sequence that is more
computationally useful.
\chapter{A SECOND SPECTRAL SEQUENCE}\label{chap.spec_seq2}
\section{Filtering by Degree}\label{sec.filtdeg}
Again, we shall assume $A$ is a $k$-algebra equipped with an augmentation,
and whose augmentation ideal is $I$. Assume further that $I$ is free as
$k$-module, with countable basis $X$. Let $\mathscr{Y}_*^{epi}$ be the
complex~(\ref{epiDeltaS_complex}), with differential $d = \sum (-1)^i \partial_i$.
It will become convenient to use a \textit{reduced} version of the $B_*^{sym_+}I$
functor, induced by the inclusion $\Delta S \hookrightarrow \Delta S_+$.
\[
B_n^{sym}I \;:= \; I^{\otimes(n+1)}, \quad \textrm{for $n \geq 0$.}
\]
Let
\begin{equation}\label{eq.reduced_Y}
\widetilde{\mathscr{Y}}_* = \bigoplus_{q \geq 0} \;\bigoplus_{m_0 \geq
\ldots \geq m_q}
k\big[ [m_0] \twoheadrightarrow [m_1] \twoheadrightarrow \ldots
\twoheadrightarrow [m_q] \big] \otimes B_{m_0}^{sym}I.
\end{equation}
Observe that there is a splitting of $\mathscr{Y}^{epi}_*$ as:
\[
\mathscr{Y}^{epi}_* \cong \widetilde{\mathscr{Y}}_* \oplus k[ N(\ast) ],
\]
where $\ast$ is the trivial subcategory of $\mathrm{Epi}\Delta S_+$ consisting
of the object $[-1]$ and morphism $\mathrm{id}_{[-1]}$. The fact that $I$ is an
ideal ensures that this splitting passes to homology. Hence, we have:
\[
HS_*(A) \;\cong\; H_*(\widetilde{\mathscr{Y}}_*) \oplus k_0,
\]
where $k_0$ is the graded $k$-module consisting of $k$ concentrated in degree $0$.
Now, since $I = k[X]$ as \mbox{$k$-module}, $B_n^{sym}I =
k[X]^{\otimes(n+1)}$.
Thus, as \mbox{$k$-module}, $\widetilde{\mathscr{Y}}_*$ is generated by elements of
the form:
\[
\left( [m_0] \twoheadrightarrow [m_1] \twoheadrightarrow \ldots
\twoheadrightarrow [m_p] \right) \otimes (x_0 \otimes x_1 \otimes \ldots
\otimes x_{m_0}), \quad x_i \in X.
\]
Using an isomorphism analogous
to that of eq.~(\ref{eq.B_p}), we may write:
\[
\widetilde{\mathscr{Y}}_q = \bigoplus_{m_0 \geq 0} \;\bigoplus_{m_0 \geq
\ldots \geq m_q} B_{m_0}^{sym}I \otimes
k\Big[\prod_{i=1}^q \mathrm{Epi}_{\Delta S}\big([m_{i-1}], [m_i]\big)\Big].
\]
\[
\cong \; \bigoplus_{m_0 \geq 0} \;\bigoplus_{m_0 \geq
\ldots \geq m_q} k[X]^{\otimes(m_0+1)} \otimes
k\Big[\prod_{i=1}^q \mathrm{Epi}_{\Delta S}\big([m_{i-1}], [m_i]\big)\Big].
\]
The face maps are given explicitly below:
\[
\partial_0( Y \otimes f_1 \otimes f_2 \otimes \ldots \otimes f_q )
= f_1(Y) \otimes f_2 \otimes \ldots \otimes f_q,
\]
\[
\partial_i( Y \otimes f_1 \otimes \ldots \otimes f_q )
= Y \otimes f_1 \otimes \ldots \otimes (f_{i+1}f_i) \otimes \ldots \otimes f_q,
\]
\[
\partial_{q}(Y \otimes f_1 \otimes \ldots \otimes f_{q-1} \otimes f_q )
= Y \otimes f_1 \otimes \ldots \otimes f_{q-1}.
\]
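As a small illustration of these face maps (with $a, b, c \in X$ and using the monomial
notation for morphisms of $\Delta S$): take $q = 1$, $Y = a \otimes b \otimes c \in B_2^{sym}I$
and $f_1 = x_0x_2 \otimes x_1 : [2] \twoheadrightarrow [1]$. Then
\[
\partial_0( Y \otimes f_1 ) = f_1(Y) = ac \otimes b, \qquad
\partial_1( Y \otimes f_1 ) = Y,
\]
so that $d( Y \otimes f_1 ) = ac \otimes b \,-\, a \otimes b \otimes c$.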
Consider a filtration
$\mathscr{G}_*$ of
$\widetilde{\mathscr{Y}}_*$ by degree of $Y \in B^{sym}_*I$:
\[
\mathscr{G}_p\widetilde{\mathscr{Y}}_q = \bigoplus_{p \geq m_0 \geq 0}
\;\bigoplus_{m_0 \geq
\ldots \geq m_q} k[X]^{\otimes(m_0+1)} \otimes
k\Big[\prod_{i=1}^q \mathrm{Epi}_{\Delta S}\big([m_{i-1}], [m_i]\big)\Big].
\]
The face maps $\partial_i$ for $i > 0$ do not affect the degree of $Y \in
k[X]^{\otimes(m_0+1)}$. Only $\partial_0$ needs to be checked. Since all morphisms are epic,
$\partial_0$ can only reduce the degree of $Y$. Thus, $\mathscr{G}_*$ is compatible
with the differential $d$. The filtration quotients are:
\[
E^0_{p,q} =
\;\bigoplus_{p = m_0 \geq
\ldots \geq m_q} k[X]^{\otimes(p+1)} \otimes
k\Big[\prod_{i=1}^q \mathrm{Epi}_{\Delta S}\big([m_{i-1}], [m_i]\big)\Big].
\]
The induced differential $d^0$ on $E^0$ differs from $d$ only when $m_0 > m_1$. Indeed,
\[
d^0 = \left\{\begin{array}{ll}
d, \quad & m_0 = m_1,\\
d - \partial_0, \quad & m_0 > m_1.
\end{array}\right.
\]
$E^0$ splits into a direct sum indexed by the $\Sigma_{p+1}$-orbits of the tuples
$(x_0, \ldots, x_p) \in X^{p+1}$. For $u \in X^{p+1}$, let
$C_u$ be the set of all distinct permutations of $u$. Then,
\[
E^0_{p,q} =
\;
\bigoplus_{u \in X^{p+1}/\Sigma_{p+1}} \bigg(
\bigoplus_{p = m_0 \geq \ldots \geq m_q}
\bigoplus_{w \in C_u}
w \otimes k\Big[
\prod_{i=1}^q \mathrm{Epi}_{\Delta S}\big([m_{i-1}], [m_i]\big)\Big] \bigg).
\]
Before proceeding with the main theorem of this chapter, we must define four related
categories
$\widetilde{\mathcal{S}}_p$, $\widetilde{\mathcal{S}}'_p$, $\mathcal{S}_p$, and
$\mathcal{S}'_p$. In the definitions
that follow, let $\{z_0, z_1, \ldots, z_p\}$ be a set of independent
indeterminates over $k$.
\begin{definition}
$\widetilde{\mathcal{S}}_p$ is the category with objects formal tensor products
$Z_0 \otimes \ldots \otimes Z_s$, where each $Z_i$ is a non-empty product of $z_i$'s,
and every one of $z_0, z_1, \ldots, z_p$ occurs exactly once in the tensor product.
There is a unique morphism
\[
Z_0 \otimes \ldots \otimes Z_s \to Z'_0 \otimes \ldots \otimes Z'_t,
\]
if and only if the tensor factors of the latter are products of the factors of
the former in some order. In such a case,
there is a unique $\beta \in \mathrm{Epi}\Delta S$ so that
$\beta_*(Z_0 \otimes \ldots \otimes Z_s) = Z'_0 \otimes \ldots \otimes Z'_t$.
\end{definition}
$\widetilde{\mathcal{S}}_p$ has initial objects $\sigma(z_0 \otimes z_1
\otimes \ldots \otimes z_p)$, for $\sigma \in \Sigma^{\mathrm{op}}_{p+1}$,
so $N\widetilde{\mathcal{S}}_p$ is a contractible complex. Let $\widetilde{\mathcal{S}}'_p$
be the full subcategory of $\widetilde{\mathcal{S}}_p$ with all objects
$\sigma(z_0 \otimes \ldots \otimes z_p)$ deleted.
Let $\mathcal{S}_p$ be a skeletal category equivalent to $\widetilde{\mathcal{S}}_p$.
In fact, we make $\mathcal{S}_p$ the quotient category, identifying each
object \mbox{$Z_0 \otimes \ldots \otimes Z_s$} with any permutation of its tensor
factors, and identifying morphisms $\phi$ and $\psi$ if their source and target
are equivalent. This
category has nerve $N\mathcal{S}_p$ homotopy-equivalent to $N\widetilde{\mathcal{S}}_p$
(see Prop.~2.1 in~\cite{Se}, for example). Now, $\mathcal{S}_p$ is a poset with
unique initial object, $z_0 \otimes \ldots \otimes z_p$. Let $\mathcal{S}'_p$
be the full subcategory (subposet) of $\mathcal{S}_p$ obtained by deleting
the object $z_0 \otimes \ldots \otimes z_p$. Clearly, $\mathcal{S}'_p$ is a
skeletal category equivalent to $\widetilde{\mathcal{S}}'_p$.
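To illustrate these four categories in the smallest non-trivial case (this example is not used
later), take $p = 1$. The category $\widetilde{\mathcal{S}}_1$ has the four objects
$z_0 \otimes z_1$, $z_1 \otimes z_0$, $z_0z_1$ and $z_1z_0$, with a unique morphism from each
two-factor object to each one-factor object (and the two two-factor objects uniquely isomorphic
to one another), while $\widetilde{\mathcal{S}}'_1$ is the full subcategory on $z_0z_1$ and
$z_1z_0$. Passing to the quotient, $\mathcal{S}_1$ is the poset with initial object
$z_0 \otimes z_1$ and two incomparable maximal objects $z_0z_1$ and $z_1z_0$, and
$\mathcal{S}'_1$ is the discrete subposet $\{z_0z_1, z_1z_0\}$.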
\begin{theorem}\label{thm.E1_NS}
There is spectral sequence converging weakly to $\widetilde{H}S_*(A)$ with
\[
E^1_{p,q} \cong \bigoplus_{u\in X^{p+1}/\Sigma_{p+1}}
H_{p+q}(EG_{u}\ltimes_{G_u}
|N\mathcal{S}_p/N\mathcal{S}'_p|;k),
\]
where $G_{u}$ is the isotropy subgroup of the orbit
$u \in X^{p+1}/\Sigma_{p+1}$.
\end{theorem}
Recall, for a group $G$, right $G$-space $X$, and left $G$-space $Y$,
$X \ltimes_G Y$ denotes the \textit{equivariant half-smash product}. If
$\ast$ is a chosen basepoint for
$Y$ having trivial $G$-action, then
\[
X \ltimes_G Y := (X \times_G Y)/(X \times_G \ast) = X \times Y/\approx,
\]
with equivalence relation defined by $(x.g , y) \approx (x, g.y)$ and
$(x, \ast) \approx (x', \ast)$ for all $x, x' \in X$, $y \in Y$ and $g \in G$
(cf.~\cite{M4}).
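For example (a trivial special case, recorded only to fix the definition), when $G$ is the
trivial group the relation reduces to $(x, \ast) \approx (x', \ast)$, and $X \ltimes_G Y$ is the
ordinary half-smash $(X \times Y)/(X \times \ast)$.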
In our case, $X$ is of the form $EG$, with canonical underlying complex $E_*G$.
In chapter~\ref{chap.specseq}, we took $E_*G$ to have
a \textit{left} \mbox{$G$-action},
$g.(g_0, g_1, \ldots, g_n) = (gg_0, g_1, \ldots, g_n)$, but
$E_*G$ also has a \textit{right} \mbox{$G$-action}, $(g_0, g_1, \ldots, g_n).g =
(g_0, g_1, \ldots, g_ng)$. It is this latter action that we shall use in the
definitions of \mbox{$EG_u \ltimes_{G_u} |N\mathcal{S}_p/
N\mathcal{S}'_p|$} and \mbox{$EG_u \ltimes_{G_u} |N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p|$}.
Observe, both $N\widetilde{\mathcal{S}}_p$ and $N\widetilde{\mathcal{S}}'_p$ carry
a left $\Sigma_{p+1}$-action (hence also a $G_u$-action) given by viewing $\sigma \in
\Sigma_{p+1}$ as the $\Delta S$-isomorphism
$\overline{\sigma} \in \Sigma_{p+1}^\mathrm{op}$, then pre-composing with the
$Z_i$'s (regarded as
morphisms written in tensor notation). The result of the composition should be expressed
in tensor notation in order to be consistent.
\[
\sigma.(Z_0 \stackrel{\phi_1}{\to} Z_1 \stackrel{\phi_2}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q) :=
Z_0\overline{\sigma} \stackrel{\phi_1}{\to} Z_1\overline{\sigma} \stackrel{\phi_2}{\to}
\ldots \stackrel{\phi_q}{\to} Z_q\overline{\sigma}.
\]
Define for each $u \in X^{p+1}/\Sigma_{p+1}$, the following subcomplex of $E^0_{p,*}$:
\[
\mathscr{M}_u \;:=\;\bigoplus_{p = m_0 \geq \ldots \geq m_q}
\bigoplus_{w \in C_u}
w \otimes k\Big[
\prod_{i=1}^q \mathrm{Epi}_{\Delta S}\big([m_{i-1}], [m_i]\big)\Big]
\]
\begin{lemma}\label{lem.G_u-identification}
There is a chain-isomorphism
\[
\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big)/G_u
\stackrel{\cong}{\to} \mathscr{M}_u
\]
\end{lemma}
\begin{proof}
The forward map is given on generators by:
\[
(Z_0 \stackrel{\phi_1}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q) \;\mapsto\;
Z_0(u) \otimes \phi_1 \otimes \ldots \otimes \phi_q.
\]
This map is well-defined, since if $g \in G_u$, then
\[
g.(Z_0 \stackrel{\phi_1}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q) =
(Z_0\overline{g} \stackrel{\phi_1}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q\overline{g})
\;\mapsto
\]
\[
Z_0\overline{g}(u) \otimes \phi_1 \otimes \ldots \otimes \phi_q
= Z_0(u) \otimes \phi_1 \otimes \ldots \otimes \phi_q
\]
For the opposite direction, we begin with a generator of the form
\begin{equation}\label{eq.w_element}
w \otimes \phi_1 \otimes \ldots \otimes \phi_q, \quad w \in C_u.
\end{equation}
Let $\tau \in \Sigma_{p+1}$ so that $w = \overline{\tau}(u)$. Then the image
of~(\ref{eq.w_element}) is defined to be
\begin{equation}\label{eq.image-w_element}
\overline{\tau} \stackrel{\phi_1}{\to} \phi_1\overline{\tau}
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\ldots\phi_1\overline{\tau}.
\end{equation}
We must check that this definition does not depend on choice of $\tau$. Indeed, if
$w = \overline{\sigma}(u)$ also, then $u = \overline{\sigma}^{-1}\overline{\tau}(u)$,
hence $\tau\sigma^{-1} \in G_u$. Thus,
\[
\overline{\sigma} \stackrel{\phi_1}{\to} \phi_1\overline{\sigma}
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\ldots\phi_1\overline{\sigma}
\;\approx\;\tau\sigma^{-1} . (\overline{\sigma} \stackrel{\phi_1}{\to}
\phi_1\overline{\sigma}
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\ldots\phi_1\overline{\sigma})
\]
\[
=\; \overline{\sigma}(\overline{\sigma}^{-1}\overline{\tau}) \stackrel{\phi_1}{\to}
\phi_1\overline{\sigma}(\overline{\sigma}^{-1}\overline{\tau})
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\ldots\phi_1\overline{\sigma}(\overline{\sigma}^{-1}\overline{\tau})
\;=\;\overline{\tau} \stackrel{\phi_1}{\to} \phi_1\overline{\tau}
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\ldots\phi_1\overline{\tau}.
\]
The maps are clearly inverse to one another. All that remains is to verify that
these are chain maps. For $i>0$, the face maps $\partial_i$ simply compose the
maps $\phi_{i+1}$ and $\phi_i$ in either chain complex, so only the zeroth face
map needs to be checked. First, for the forward map, assume $\phi_1$ is an
isomorphism.
\[
(Z_0 \stackrel{\phi_1}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q) \;\mapsto\;
Z_0(u) \otimes \phi_1 \otimes \ldots \otimes \phi_q \;\stackrel{\partial_0}{\mapsto}\;
\phi_1Z_0(u) \otimes \phi_2 \otimes \ldots \otimes \phi_q,
\]
while
\[
(Z_0 \stackrel{\phi_1}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q)\;\stackrel{\partial_0}{\mapsto}\;
(Z_1 \stackrel{\phi_2}{\to} \ldots
\stackrel{\phi_q}{\to} Z_q)\;\mapsto\;
Z_1(u) \otimes \phi_2 \otimes \ldots \otimes \phi_q.
\]
The two results agree since $\phi_1Z_0 = Z_1$. If $\phi_1$ is not an isomorphism,
then it must be a strict epimorphism, and so $Z_0(u) \otimes \phi_1 \otimes \ldots
\otimes \phi_q = 0$ in $E^0$. On the other hand,
the chain $Z_1 \to \ldots \to Z_q$ lies in
$N\widetilde{\mathcal{S}}'_p$, and hence is also identified with $0$.
In the reverse direction, assume
$w = \overline{\tau}(u)$ as above, and let $\phi_1$ be an isomorphism.
\[
w \otimes \phi_1 \otimes \ldots \otimes \phi_q \;\mapsto\;
(\overline{\tau} \stackrel{\phi_1}{\to} \phi_1\overline{\tau}
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\ldots\phi_1\overline{\tau}) \;\stackrel{\partial_0}{\mapsto}\;
(\phi_1\overline{\tau} \stackrel{\phi_2}{\to} \phi_2\phi_1\overline{\tau}
\stackrel{\phi_3}{\to}
\ldots \stackrel{\phi_q}{\to} \phi_q\ldots\phi_1\overline{\tau}),
\]
while
\[
w \otimes \phi_1 \otimes \ldots \otimes \phi_q \;\stackrel{\partial_0}{\mapsto}\;
\phi_1(w) \otimes \phi_2 \otimes \ldots \otimes \phi_q \;\mapsto\;
(\phi_1\overline{\tau} \stackrel{\phi_2}{\to} \phi_2\phi_1\overline{\tau}
\stackrel{\phi_3}{\to}
\ldots \stackrel{\phi_q}{\to} \phi_q\ldots\phi_1\overline{\tau}).
\]
The rightmost expression results from the observation that if $w = \overline{\tau}(u)$,
then \mbox{$\phi_1(w) = \phi_1\overline{\tau}(u)$}. Now, if $\phi_1$ is a
strict epimorphism, then both results are $0$ for similar reasons as above.
\end{proof}
Using this lemma, we identify $\mathscr{M}_u$ with the orbit complex
$\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big)/G_u$. Now,
the complex $N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p$ is a
free $G_u$-complex, so we have an isomorphism:
\[
H_*\Big(\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big)/G_u\Big)
\cong H^{G_u}_*\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big),
\]
(\textit{i.e.}, the right-hand side is $G_u$-equivariant homology; see~\cite{B} for details).
Then, by definition,
\[
H^{G_u}_*\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big) =
H_*(G_u, N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p),
\]
which may be computed using the free resolution $E_*G_u$ of $k$ as a right
$G_u$-module. The resulting complex $k[E_*G_u] \otimes_{kG_u}
k[N\widetilde{\mathcal{S}}_p]/k[N\widetilde{\mathcal{S}}'_p]$ is a double complex
isomorphic to the quotient of two double complexes, namely:
\[
\big(k[E_*G_u] \otimes_{kG_u} k[N\widetilde{\mathcal{S}}_p]\big)/
\big(k[E_*G_u] \otimes_{kG_u} k[N\widetilde{\mathcal{S}}'_p]\big)
\]
\[
\cong k\Big[ \big(E_*G_u \times_{G_u} N\widetilde{\mathcal{S}}_p\big)/
\big(E_*G_u \times_{G_u} N\widetilde{\mathcal{S}}'_p\big)\Big].
\]
This last complex may be identified
with the complex of simplicial chains of the space
\[
\big(EG_u \times_{G_u} |N\widetilde{\mathcal{S}}_p|\big)/
\big(EG_u \times_{G_u} |N\widetilde{\mathcal{S}}'_p|\big)
\]
\[
\cong EG_u \ltimes_{G_u} |N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p|.
\]
The last piece of the puzzle involves simplifying the spaces
$|N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p|$. Since $\mathcal{S}$
is a skeletal subcategory of $\widetilde{\mathcal{S}}$, there is an
equivalence of categories $\widetilde{\mathcal{S}} \simeq \mathcal{S}$, inducing
a homotopy equivalence of complexes (hence also of spaces)
$|N\widetilde{\mathcal{S}}| \simeq |N\mathcal{S}|$. Note that $N\mathcal{S}$
inherits a $G_u$-action from $N\widetilde{\mathcal{S}}$, and the map
$\widetilde{\mathcal{S}} \to \mathcal{S}$ is $G_u$-equivariant. Consider the fibration
\[
X \to EG \times_{G} X \to BG
\]
associated to a group $G$ and path-connected $G$-space $X$. The resulting homotopy
sequence breaks up
into isomorphisms \mbox{$0 \to \pi_i(X)
\stackrel{\cong}{\to} \pi_i(EG \times_G X) \to 0$} for $i \geq 2$ and a
short exact sequence \mbox{$0 \to \pi_1(X) \to \pi_1(EG \times_G X) \to G \to 0$}. If
there is
a $G$-equivariant weak equivalence \mbox{$f : X \to Y$} for a path-connected $G$-space $Y$, then for
$i \geq 2$, we have
isomorphisms
\[
\pi_i(EG \times_{G} X) \gets \pi_i(X) \stackrel{f_*}{\to}
\pi_i(Y) \to \pi_i(EG \times_G Y),
\]
and a diagram corresponding to $i = 1$:
\[
\begin{diagram}
\node{0}
\arrow{e}
\arrow{s,=}
\node{\pi_1(X)}
\arrow{s,l}{f_*}
\arrow{e}
\node{\pi_1(EG \times_G X)}
\arrow{s,r}{(\mathrm{id} \times f)_*}
\arrow{e}
\node{G}
\arrow{s,=}
\arrow{e}
\node{0}
\arrow{s,=}
\\
\node{0}
\arrow{e}
\node{\pi_1(Y)}
\arrow{e}
\node{\pi_1(EG \times_G Y)}
\arrow{e}
\node{G}
\arrow{e}
\node{0}
\end{diagram}
\]
Thus, there is a weak equivalence $EG \times_G X \to EG \times_G Y$. In our case, we
wish to obtain weak equivalences:
\[
EG_u \times_{G_u} |N\widetilde{\mathcal{S}}_p| \to EG_u \times_{G_u} |N\mathcal{S}_p|
\]
and
\[
EG_u \times_{G_u} |N\widetilde{\mathcal{S}}'_p| \to EG_u \times_{G_u} |N\mathcal{S}'_p|,
\]
inducing a weak equivalence
\[
EG_u \ltimes_{G_u} |N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p|
\to
EG_u \ltimes_{G_u} |N\mathcal{S}_p/N\mathcal{S}'_p|.
\]
This will follow as long as the spaces $|N\widetilde{\mathcal{S}}'_p|$ and
$|N\mathcal{S}'_p|$ are path-connected. (Note, $|N\widetilde{\mathcal{S}}_p|$ and
$|N\mathcal{S}_p|$ are path-connected
because they are contractible). In fact, we need only check $|N\mathcal{S}'_p|$,
since this space is homotopy-equivalent to $|N\widetilde{\mathcal{S}}'_p|$.
\begin{lemma}\label{lemma.NS-path-connected}
For $p > 2$, $|N\mathcal{S}'_p|$ is path-connected.
\end{lemma}
\begin{proof}
Assume $p > 2$ and
let $W_0 := z_0z_1 \otimes z_2 \otimes \ldots \otimes z_p$. This represents
a vertex of $N\mathcal{S}'_p$. Suppose
$W = Z_0 \otimes \ldots \otimes Z_i'z_0z_1Z_i'' \otimes \ldots \otimes Z_s$.
Then there is a morphism $W_0 \to W$, hence an edge between $W_0$ and $W$.
Next, suppose $W = Z_0 \otimes \ldots \otimes Z_i'z_0Z_i''z_1Z_i''' \otimes \ldots
\otimes Z_s$. There is a path:
\[
\begin{diagram}
\node{Z_0 \otimes \ldots \otimes Z_i'z_0Z_i''z_1Z_i''' \otimes \ldots
\otimes Z_s}
\arrow{s}\\
\node{Z_0Z_1 \ldots Z_i'z_0Z_i''z_1Z_i''' \ldots Z_s}\\
\node{Z_0Z_1 \ldots Z_i' \otimes z_0 \otimes Z_i''z_1Z_i''' \ldots Z_s}
\arrow{n}
\arrow{s}\\
\node{z_0 \otimes Z_0Z_1 \ldots Z_i'Z_i''z_1Z_i''' \ldots Z_s}\\
\node{z_0 \otimes Z_0Z_1 \ldots Z_i'Z_i'' \otimes z_1Z_i''' \ldots Z_s}
\arrow{n}
\arrow{s}\\
\node{z_0z_1Z_i''' \ldots Z_s \otimes Z_0Z_1 \ldots Z_i'Z_i''}\\
\node{W_0}
\arrow{n}
\end{diagram}
\]
Similarly, if $W = Z_0 \otimes \ldots \otimes Z_i'z_1Z_i''z_0Z_i''' \otimes \ldots
\otimes Z_s$, there is a path to $W_0$. Finally, if
$W = Z_0 \otimes \ldots \otimes Z_s$ with $z_0$ occurring in $Z_i$ and
$z_1$ occurring in $Z_j$ for $i \neq j$, there is an edge to some
$W'$ in which $Z_iZ_j$ occurs, and thus a path to $W_0$.
\end{proof}
The above discussion coupled with Lemma~\ref{lem.G_u-identification} produces the
required isomorphism in homology, hence proving Thm.~\ref{thm.E1_NS} for $p > 2$:
\[
E^1_{p,q} \;=\; \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}} H_{p+q}(\mathscr{M}_u)
\;\cong\; \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}H_{p+q}
\left( EG_u \ltimes_{G_u}
|N\widetilde{\mathcal{S}}_p / N\widetilde{\mathcal{S}}'_p |; k \right)
\]
\[
\cong\;
\bigoplus_{u \in X^{p+1}/\Sigma_{p+1}} H_{p+q}\left(
EG_u \ltimes_{G_u} |N\mathcal{S}_p/N\mathcal{S}'_p|; k\right).
\]
The cases $p = 0, 1$ and $2$ are handled individually:
Observe that $|N\widetilde{\mathcal{S}}'_0|$ and
$|N\mathcal{S}'_0|$ are empty spaces, since $\widetilde{\mathcal{S}}'_0$
has no objects.
\[
EG_u \times_{G_u} |N\widetilde{\mathcal{S}}'_0|
= EG_u \times_{G_u} |N\mathcal{S}'_0| = \emptyset.
\]
Furthermore, for $p = 0$ each group $G_u$ is trivial, being a subgroup of $\Sigma_1$. Thus,
\[
H_q\left( EG_u \ltimes_{G_u}|N\widetilde{\mathcal{S}}_0 /
N\widetilde{\mathcal{S}}'_0 |; k \right)
= H_q\left( |N\widetilde{\mathcal{S}}_0|; k \right)
\]
\[
\cong
H_q\left( |N\mathcal{S}_0|; k \right)
= H_q\left(EG_u \ltimes_{G_u} |N\mathcal{S}_0/N\mathcal{S}'_0|; k\right),
\]
completing the theorem for $p=0$.
Next, since $|N\widetilde{\mathcal{S}}'_1|$ is homeomorphic to
$|N\mathcal{S}'_1|$, each space consisting of the two discrete points $z_0z_1$ and
$z_1z_0$,
the theorem is true for $p = 1$ as well.
For $p=2$, observe that $|N\widetilde{\mathcal{S}}'_2|$ has two connected
components, $\widetilde{U}_1$ and $\widetilde{U}_2$ that are interchanged by any
odd permutation $\sigma \in \Sigma_3$. Similarly, $|N\mathcal{S}'_2|$ consists
of two connected components, $U_1$ and $U_2$, interchanged by any odd permutation
of $\Sigma_3$. Now, restricted to the alternating group, $A\Sigma_3$, we certainly
have weak equivalences for any subgroup $H_u \subset A\Sigma_3$,
\[
EH_u \times_{H_u} \widetilde{U}_1 \stackrel{\simeq}{\longrightarrow}
EH_u \times_{H_u} U_1,
\]
\[
EH_u \times_{H_u} \widetilde{U}_2 \stackrel{\simeq}{\longrightarrow}
EH_u \times_{H_u} U_2.
\]
The action of an odd permutation induces equivariant homeomorphisms
\[
\widetilde{U}_1 \stackrel{\cong}{\longrightarrow} \widetilde{U}_2,
\]
\[
U_1 \stackrel{\cong}{\longrightarrow} U_2,
\]
and so if we have a subgroup $G_u \subset \Sigma_3$ generated by $H_u \subset
A\Sigma_3$ and a transposition, then the two connected components are
identified in an \mbox{$A\Sigma_3$-equivariant} manner. Thus, if $G_u$ contains
a transposition,
\[
EG_u \times_{G_u} |N\widetilde{\mathcal{S}}'_2| \cong
EH_u \times_{H_u} \widetilde{U}_1 \simeq EH_u \times_{H_u} U_1
\cong EG_u \times_{G_u} |N\mathcal{S}'_2|.
\]
This completes the case $p=2$ and the proof of Thm.~\ref{thm.E1_NS}.
\begin{cor}\label{cor.square-zero}
If the augmentation ideal $I$ of $A$ satisfies $I^2 = 0$, then
\[
HS_n(A) \;\cong\; \bigoplus_{p \geq 0} \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}
H_{n}(EG_u \ltimes_{G_u} N\mathcal{S}_p/N\mathcal{S}'_p; k).
\]
\end{cor}
\begin{proof}
This follows from consideration of the original $E^0$ term of the spectral
sequence.
$E^0$ is generated by chains $Y \otimes \phi_0 \otimes \ldots \otimes \phi_n$,
with induced differential $d^0$, agreeing with the differential $d$ of
$\widetilde{\mathscr{Y}}_*$ when $\phi_0$ is an isomorphism. When $\phi_0$ is a strict
epimorphism, however, $d^0 = d - \partial_0$. But if $\phi_0$ is strictly epic,
then
\[
\partial_0( Y \otimes \phi_0 \otimes \ldots \otimes \phi_n)
= (\phi_0)_*(Y) \otimes \phi_1 \otimes \ldots \otimes \phi_n
\;=\; 0,
\]
since $(\phi_0)_*(Y)$ would have at least one tensor factor that is the product of
two or more elements of $I$. Hence $d^0$ agrees with $d$ also in the case that
$\phi_0$ is strictly epic. Since $d^0$ agrees with $d$ on all elements, the
spectral sequence collapses at level 1.
\end{proof}
\section{The complex $Sym_*^{(p)}$}\label{sec.sym-p}
Note that, for $p > 0$, there are homotopy equivalences
\[
|N\mathcal{S}_p/N\mathcal{S}'_p| \simeq |N\mathcal{S}_p|
\vee S|N\mathcal{S}'_p| \simeq S|N\mathcal{S}'_p|,
\]
since $|N\mathcal{S}_p|$ is contractible. The space $|N\mathcal{S}_p|$ is
obtained from a disjoint union of $(p+1)!$ $p$-cubes by identifying certain faces.
Geometric analysis of
$S|N\mathcal{S}'_p|$, however, seems quite difficult. Fortunately, there is an even
smaller chain complex that computes the homology of $|N\mathcal{S}_p/N\mathcal{S}'_p|$.
\begin{definition}\label{def.sym_complex}
Let $p \geq 0$ and impose an equivalence relation on $k\big[\mathrm{Epi}_{\Delta S}
([p], [q])\big]$ generated by:
\[
Z_0 \otimes \ldots \otimes Z_i \otimes Z_{i+1} \otimes \ldots \otimes Z_q
\approx
(-1)^{ab} Z_0 \otimes \ldots \otimes Z_{i+1} \otimes Z_{i} \otimes \ldots \otimes Z_q,
\]
where $Z_0 \otimes \ldots \otimes Z_q$ is a morphism expressed in tensor notation, and
$a = deg(Z_i) := |Z_i| - 1$, $b = deg(Z_{i+1}) := |Z_{i+1}| - 1$. Here, $deg(Z)$ is one less than
the number of factors of the monomial $Z$. Indeed,
if $Z = z_{i_0}z_{i_1} \ldots z_{i_s}$, then $deg(Z) = s$.
The complex $Sym_*^{(p)}$ is then defined by:
\begin{equation}\label{eq.def_Sym}
Sym_i^{(p)} \;:=\; k\big[\mathrm{Epi}_{\Delta S}([p], [p-i])\big]/\approx.
\end{equation}
The face maps will be defined recursively. On monomials,
\begin{equation}\label{eq.Sym-differential}
\partial_i(z_{j_0} \ldots z_{j_s}) = \left\{\begin{array}{ll}
0, & i < 0,\\
z_{j_0} \ldots z_{j_i} \otimes z_{j_{i+1}} \ldots z_{j_s}, & 0 \leq i < s,\\
0, & i \geq s.
\end{array}\right.
\end{equation}
Then, extend $\partial_i$ to tensor products via:
\begin{equation}\label{eq.face_i-tensor}
\partial_i(W \otimes V) = \partial_i(W) \otimes V + W \otimes \partial_{i-deg(W)}(V).
\end{equation}
In the above formula, $W$ and $V$ are formal tensors in
$k\big[\mathrm{Epi}_{\Delta S}([p], [q])\big]$, and
\[
deg(W) = deg(W_0 \otimes \ldots \otimes W_t) \;:=\; \sum_{k=0}^t deg(W_k).
\]
The boundary map $Sym_n^{(p)} \to Sym_{n-1}^{(p)}$
is then
\[
d_n = \sum_{i=0}^n (-1)^i \partial_i = \sum_{i=0}^{n-1} (-1)^i \partial_i,
\]
the two sums agreeing because every element of $Sym_n^{(p)}$ has total degree $n$,
and $\partial_i$ vanishes on tensors of degree at most $i$.
\end{definition}
\begin{rmk}
Applying $\partial_i$ to any
formal tensor yields at most a single formal tensor, since in eq.~(\ref{eq.face_i-tensor})
at most one of the two terms is non-zero.
\end{rmk}
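For example, for the generator $z_0z_1 \otimes z_2z_3 \in Sym_2^{(3)}$,
formulas~(\ref{eq.Sym-differential}) and~(\ref{eq.face_i-tensor}) give
\[
\partial_0(z_0z_1 \otimes z_2z_3) = z_0 \otimes z_1 \otimes z_2z_3, \qquad
\partial_1(z_0z_1 \otimes z_2z_3) = z_0z_1 \otimes z_2 \otimes z_3,
\]
so that $d_2(z_0z_1 \otimes z_2z_3) = z_0 \otimes z_1 \otimes z_2z_3
- z_0z_1 \otimes z_2 \otimes z_3$ in $Sym_1^{(3)}$.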
\begin{rmk}\label{rmk.action}
There is an action $\Sigma_{p+1} \times Sym_i^{(p)} \to Sym_i^{(p)}$, given by permuting the
formal indeterminates $z_i$. Furthermore, this action is compatible with the differential.
\end{rmk}
\begin{lemma}\label{lem.NS-Sym-homotopy}
$Sym_*^{(p)}$ is chain-homotopy equivalent to $k[N\mathcal{S}_p]/k[N\mathcal{S}'_p]$.
\end{lemma}
\begin{proof}
Let $v_0$ represent the common initial vertex of the $p$-cubes making up $N\mathcal{S}_p$.
Then, as a cell-complex, $N\mathcal{S}_p$ consists of $v_0$ together with all corners of
the various $p$-cubes, together with $i$-cells for each $i$-face of the cubes. Thus,
$N\mathcal{S}_p$ consists of $(p+1)!$ $p$-cells with attaching maps
\mbox{$\partial I^p \to (N\mathcal{S}_p)^{p-1}$} defined according to the face maps for
$N\mathcal{S}_p$ given above. Presently, I shall provide an explicit construction.
Label each top-dimensional cell with the permutation
induced on $\{0, 1, \ldots, p\}$ by the final vertex, $z_{i_0}z_{i_1}\ldots z_{i_p}$.
On a given $p$-cell, for each vertex $Z_0 \otimes \ldots \otimes Z_s$, there is an
ordering of the tensor
factors so that \mbox{$Z_0 \otimes \ldots \otimes Z_s \to z_{i_0}z_{i_1}\ldots z_{i_p}$}
preserves the order of formal indeterminates $z_i$. Rewrite each vertex of this $p$-cell
in this order. Now, any chain
\[
(z_{i_0} \otimes z_{i_1} \otimes \ldots \otimes z_{i_p}) \to \ldots \to
z_{i_0}z_{i_1}\ldots z_{i_p}
\]
is obtained by choosing the order in which to combine the factors. In fact,
the $p$-chains
are in bijection with the elements of $S_{p}$. A given permutation
\mbox{$\{1,2, \ldots, p\}\mapsto \{ j_1, j_2, \ldots, j_p \}$} represents the
chain obtained by erasing the tensor product symbols one at a time: at the
$r$-th step, we ``erase'' the tensor product symbol in position $j_r$, that is,
the symbol between the $(j_r-1)$-st and $j_r$-th indeterminates in the fixed order.
We shall declare that
the {\it natural} order of combining the factors will be
the one that always combines the last two:
\[
(z_{i_0} \otimes \ldots \otimes z_{i_{p-1}} \otimes z_{i_p}) \to
(z_{i_0} \otimes \ldots \otimes z_{i_{p-1}}z_{i_p}) \to
(z_{i_0} \otimes \ldots \otimes z_{i_{p-2}}z_{i_{p-1}}z_{i_p}) \to
\ldots.
\]
This corresponds to a permutation $\rho := \{1,\ldots, p\} \mapsto
\{p, p-1, \ldots, 2, 1\}$, and this chain will be regarded as {\it positive}.
A chain $C_\sigma$, corresponding to another permutation, $\sigma$, will be
regarded as positive or negative depending on the sign of the permutation
$\sigma\rho^{-1}$. Finally, the entire $p$-cell should be identified with the sum
\[
\sum_{\sigma \in S_p} sign(\sigma\rho^{-1}) C_\sigma
\]
It is this sign convention that
permits the inner faces of the cube to cancel appropriately in the boundary maps.
Thus we have a map on the top-dimensional chains:
\[
\theta_p : Sym_p^{(p)} \to \big(k[N\mathcal{S}_p]/k[N\mathcal{S}'_p]\big)_p.
\]
Define $\theta_*$ for arbitrary $k$-cells by sending $Z_0 \otimes \ldots
\otimes Z_{p-k}$ to the sum of $k$-length chains with source $z_0 \otimes
\ldots \otimes z_p$ and target $Z_0 \otimes \ldots \otimes Z_{p-k}$ with signs determined
by the natural order of erasing tensor product symbols of $z_0 \otimes \ldots \otimes z_p$,
excluding those tensor product symbols that never get erased.
The following example
should clarify the point. $W = z_3z_0 \otimes z_1 \otimes z_2z_4$ is a $2$-cell of
$Sym_*^{(4)}$. $W$ is obtained from $z_0 \otimes z_1 \otimes z_2 \otimes z_3
\otimes z_4 = z_3 \otimes z_0 \otimes z_1 \otimes z_2 \otimes z_4$ by combining
factors in some order. There are only $2$ erasable tensor product symbols in
this example. The natural order (last to first) corresponds to the chain:
\[
z_3 \otimes z_0 \otimes z_1 \otimes z_2 \otimes z_4
\to z_3 \otimes z_0 \otimes z_1 \otimes z_2z_4
\to z_3z_0 \otimes z_1 \otimes z_2z_4.
\]
So, this chain shows up in $\theta_*(W)$ with positive sign, whereas the chain
\[
z_3 \otimes z_0 \otimes z_1 \otimes z_2 \otimes z_4
\to z_3z_0 \otimes z_1 \otimes z_2 \otimes z_4
\to z_3z_0 \otimes z_1 \otimes z_2z_4
\]
shows up with a negative sign.
Now, $\theta_*$ is easily seen to be a chain map $Sym_*^{(p)} \to
k[N\mathcal{S}_p]/k[N\mathcal{S}'_p]$. Geometrically, $\theta_*$ has
the effect of subdividing a cell-complex (defined with cubical cells) into
a simplicial space.
\end{proof}
As an example, consider $|N\mathcal{S}_2|$. There are $6$ \mbox{$2$-cells},
each represented by a copy of $I^2$. The \mbox{$2$-cell} labelled by the permutation
\mbox{$\{0,1,2\} \mapsto \{1,0,2\}$}
consists of the chains
\[
z_1 \otimes z_0 \otimes z_2 \to z_1 \otimes z_0z_2 \to z_1z_0z_2
\]
and
\[
-(z_1 \otimes z_0 \otimes z_2 \to z_1z_0 \otimes z_2 \to z_1z_0z_2).
\]
Hence, the boundary is the sum of \mbox{$1$-chains}:
\[
(z_1 \otimes z_0z_2 \to z_1z_0z_2) -
(z_1 \otimes z_0 \otimes z_2 \to z_1z_0z_2)
+ (z_1 \otimes z_0 \otimes z_2 \to z_1 \otimes z_0z_2)\qquad\qquad\qquad
\]
\[
\qquad\qquad\qquad-(z_1z_0 \otimes z_2 \to z_1z_0z_2) +
(z_1 \otimes z_0 \otimes z_2 \to z_1z_0z_2)
- (z_1 \otimes z_0 \otimes z_2 \to z_1z_0 \otimes z_2)
\]
\[
= (z_1 \otimes z_0z_2 \to z_1z_0z_2) + (z_1 \otimes z_0 \otimes z_2 \to z_1 \otimes z_0z_2)
\qquad\qquad\qquad\qquad\qquad\qquad
\]
\[
\qquad\qquad\qquad
- (z_1z_0 \otimes z_2 \to z_1z_0z_2) - (z_1 \otimes z_0 \otimes z_2 \to z_1z_0 \otimes z_2).
\]
These \mbox{$1$-chains} correspond to the $4$ edges of the square.
Thus, in our example this $2$-cell
of $|N\mathcal{S}_2|$ will correspond to \mbox{$z_1z_0z_2 \in Sym_2^{(2)}$}, and
its boundary in $|N\mathcal{S}_2/N\mathcal{S}'_2|$ will consist of the two edges adjacent
to $z_0 \otimes z_1 \otimes z_2$ with appropriate signs:
\[
(z_0 \otimes z_1 \otimes z_2 \to z_1 \otimes z_0z_2)
- (z_0 \otimes z_1 \otimes z_2 \to z_1z_0 \otimes z_2).
\]
The corresponding boundary in $Sym_1^{(2)}$ will be:
\mbox{$(z_1 \otimes z_0z_2) - (z_1z_0 \otimes z_2)$}, matching the differential
already defined on $Sym_*^{(p)}$.
See Figs.~4.1 and~4.2.
\begin{figure}\label{fig.NS_2}
\end{figure}
\begin{figure}\label{fig.sym2}
\end{figure}
Now, with one piece of new notation, we may re-interpret Thm.~\ref{thm.E1_NS}.
\begin{definition}\label{def.ltimescirc}
Let $G$ be a group. Let $k_0$ be the chain complex consisting of $k$ concentrated
in degree $0$, with trivial $G$-action.
If $X_*$ is a right $G$-complex, $Y_*$ is a left $G$-complex with $k_0 \hookrightarrow
Y_*$ as a $G$-subcomplex, then define the \textit{equivariant half-smash tensor product}
of the two complexes:
\[
X_* \textrm{\textcircled{$\ltimes$}}_G Y_* := \left(X_* \otimes_{G}
Y_*\right)/\left(X_* \otimes_{G} k_0\right)
\]
\end{definition}
\begin{cor}\label{cor.E1_Sym}
There is a spectral sequence converging weakly to $\widetilde{H}S_*(A)$ with
\[
E^1_{p,q} \cong
\bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}
H_{p+q}\left(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p)}; k\right),
\]
where $G_{u}$ is the isotropy subgroup of the orbit
$u \in X^{p+1}/\Sigma_{p+1}$.
\end{cor}
\section{Algebra Structure of $Sym_*$}\label{sec.sym_alg}
We may consider $Sym_* := \bigoplus_{p \geq 0} Sym_*^{(p)}$ as a bigraded
differential algebra, where
$bideg(W) = (p+1, i)$ for $W \in Sym_i^{(p)}$. The product
\[
\boxtimes : Sym_i^{(p)} \otimes Sym_j^{(q)} \to Sym_{i+j}^{(p+q+1)}
\]
is defined by:
\[
W \boxtimes V := W \otimes V',
\]
where $V'$ is obtained from $V$ by replacing each formal indeterminate $z_r$ by $z_{r+p+1}$ for
\mbox{$0 \leq r \leq q$}. Eq.~\ref{eq.face_i-tensor} then implies:
\begin{equation}\label{eq.partialboxtimes}
d( W \boxtimes V ) = d(W) \boxtimes V + (-1)^{bideg(W)_2}W \boxtimes d(V),
\end{equation}
where $bideg(W)_2$ is the second component of $bideg(W)$.
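For example, if $W = z_0z_1 \in Sym_1^{(1)}$ and $V = z_0 \otimes z_1 \in Sym_0^{(1)}$,
then $V' = z_2 \otimes z_3$ and
\[
W \boxtimes V = z_0z_1 \otimes z_2 \otimes z_3 \in Sym_1^{(3)},
\]
an element of bidegree $(4, 1)$.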
\begin{prop}\label{prop.boxtimes}
The product $\boxtimes$ induces a well-defined product on homology. Furthermore, this product (on both
the chain level and homology level) is skew commutative in a twisted sense:
\[
W \boxtimes V = (-1)^{ij}\tau(V \boxtimes W),
\]
where $bideg(W) = (p+1, i)$, $bideg(V) = (q+1, j)$, and $\tau$ is the permutation sending
\[
\{0, 1, \ldots, q, q+1, q+2, \ldots, p+q, p+q+1\} \mapsto \{p+1, p+2, \ldots, p+q+1, 0,
1, \ldots, p-1, p \}
\]
In fact, $\tau$ is nothing more than the block transformation $\beta_{q,p}$ defined in
section~\ref{sec.deltas}.
\end{prop}
\begin{proof}
By Eq.~\ref{eq.partialboxtimes}, the product of two cycles is again a cycle, and the
product of a
cycle with a boundary is a boundary, hence there is an induced product on homology.
Now, suppose $W \in Sym_i^{(p)}$ and $V \in Sym_j^{(q)}$. So,
\[
W = Y_0 \otimes Y_1 \otimes \ldots \otimes Y_{p-i},
\]
\[
V = Z_0 \otimes Z_1 \otimes \ldots \otimes Z_{q-j}.
\]
\begin{equation}\label{eq.boxtimesformula}
V \boxtimes W = V \otimes W' = (-1)^\alpha W' \otimes V,
\end{equation}
where $W'$ is related to $W$ by replacing each $z_r$ by $z_{r+q+1}$. The exponent $\alpha$ is
determined by the relations in $Sym_{i+j}^{(p+q+1)}$:
\[
\alpha = \big[deg(Z_0) + \ldots + deg(Z_{q-j})\big]\big[deg(Y_0) + \ldots + deg(Y_{p-i})\big]
\]
\[
= deg(V)deg(W).
\]
Observe that $deg(W)$ is exactly $i$ and $deg(V) = j$. Indeed,
$W \in \mathrm{Epi}_{\Delta S}([p], [p-i])$, so there are exactly $i$ distinct
positions where one could insert a tensor product symbol. That is, there are $i$
{\it cut points} in $W$. Since $deg(W) = \sum deg(Y_k)$, and $deg(Y_k)$ is one less
than the number of factors in $Y_k$, it follows that $deg(Y_k)$ is exactly the
number of cut points in $Y_k$. Hence, $deg(W) = i$.
Next, apply the block transformation $\tau$ to eq.~\ref{eq.boxtimesformula} to obtain
\[
\tau(V \boxtimes W) = (-1)^\alpha \tau(W' \otimes V) = (-1)^\alpha W \otimes V' = (-1)^\alpha W
\boxtimes V,
\]
where $V'$ is obtained by replacing $z_r$ by $z_{r+p+1}$ in $V$.
\end{proof}
\section{Computer Calculations}\label{sec.comp_cal}
In principle, the homology of $Sym_*^{(p)}$ may be found by using a computer. In fact,
we have the following results up to $p = 7$:
\begin{theorem}\label{thm.poincare_sym_complex}
For $0 \leq p \leq 7$, the groups $H_*(Sym_*^{(p)})$ are free abelian and have
Poincar\'e polynomials $P_p(t) := P\left(H_*(Sym_*^{(p)}); t\right)$:
\[
P_0(t) = 1,
\]
\[
P_1(t) = t,
\]
\[
P_2(t) = t + 2t^2,
\]
\[
P_3(t) = 7t^2 + 6t^3,
\]
\[
P_4(t) = 43t^3 + 24t^4,
\]
\[
P_5(t) = t^3 + 272t^4 + 120t^5,
\]
\[
P_6(t) = 36t^4 + 1847t^5 + 720t^6,
\]
\[
P_7(t) = 829t^5 + 13710t^6 + 5040t^7.
\]
\end{theorem}
\begin{proof}
These computations were performed using scripts written for the computer algebra
systems \verb|GAP|~\cite{GAP4} and \verb|Octave|~\cite{E}. See Appendix~\ref{app.comp}
for more detail about the scripts used to calculate these polynomials.
\end{proof}
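The scripts themselves are described in Appendix~\ref{app.comp}. Purely as an
illustration of how such a calculation can be organized (and not a transcript of
those scripts), the following minimal Python sketch enumerates a canonical basis of
$Sym_*^{(p)}$, assembles the boundary matrices over $\mathbb{Q}$, and reads off the
Betti numbers; all function names are ours, and the brute-force enumeration is only
practical for very small $p$.
\begin{verbatim}
from itertools import permutations, combinations
from fractions import Fraction

def canonical(blocks):
    # Bubble-sort the blocks (tuples of indices) into lexicographic order,
    # tracking the sign (-1)^{deg(a)deg(b)} for each adjacent swap (deg = len - 1).
    blocks, sign = list(blocks), 1
    for i in range(len(blocks)):
        for j in range(len(blocks) - 1 - i):
            if blocks[j] > blocks[j + 1]:
                sign *= (-1) ** ((len(blocks[j]) - 1) * (len(blocks[j + 1]) - 1))
                blocks[j], blocks[j + 1] = blocks[j + 1], blocks[j]
    return tuple(blocks), sign

def basis(p, n):
    # Canonical representatives of the generators of Sym_n^{(p)}: partitions of
    # {0,...,p} into p-n+1 ordered blocks, one representative per sign-class.
    out = set()
    for perm in permutations(range(p + 1)):
        for cuts in combinations(range(1, p + 1), p - n):
            pieces = tuple(perm[a:b] for a, b in zip((0,) + cuts, cuts + (p + 1,)))
            out.add(canonical(pieces)[0])
    return sorted(out)

def boundary(blocks):
    # d(W) = sum_i (-1)^i partial_i(W); partial_i splits W at its i-th cut point.
    terms, i = {}, 0
    for k, B in enumerate(blocks):
        for j in range(len(B) - 1):          # cut points inside the block B
            split = blocks[:k] + (B[:j + 1], B[j + 1:]) + blocks[k + 1:]
            can, s = canonical(split)
            terms[can] = terms.get(can, 0) + (-1) ** i * s
            i += 1
    return terms

def rank(rows):
    # Rank over Q via Gaussian elimination with exact arithmetic.
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def betti(p):
    bases = [basis(p, n) for n in range(p + 1)]
    ranks = [0] * (p + 2)   # ranks[n] = rank of d_n : Sym_n -> Sym_{n-1}
    for n in range(1, p + 1):
        idx = {w: j for j, w in enumerate(bases[n - 1])}
        rows = []
        for w in bases[n]:
            row = [0] * len(bases[n - 1])
            for v, c in boundary(w).items():
                row[idx[v]] += c
            rows.append(row)
        ranks[n] = rank(rows)
    return [len(bases[n]) - ranks[n] - ranks[n + 1] for n in range(p + 1)]

print(betti(2))   # expected output: [0, 1, 2], matching P_2(t) = t + 2t^2
\end{verbatim}
The enumeration grows factorially with $p$ (there are $(p+1)!$ top-dimensional
generators alone), which is why the dedicated scripts of Appendix~\ref{app.comp}
are used for the larger cases reported above.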
\section{Representation Theory of $H_*(Sym_*^{(p)})$}\label{sec.rep_sym}
By remark~\ref{rmk.action}, the groups $H_i(Sym_*^{(p)}; k)$ are
\mbox{$k\Sigma_{p+1}$-modules},
so it seems natural to investigate the irreducible representations comprising these
modules.
\begin{prop}
Let $C_{p+1} \hookrightarrow \Sigma_{p+1}$ be the cyclic group of order $p+1$, embedded
into the symmetric group as the subgroup generated by the permutation
$\tau_p := (0, p, p-1, \ldots, 1)$.
Then there is a $\Sigma_{p+1}$-isomorphism:
\[
H_p(Sym_*^{(p)}) \cong AC_{p+1} \uparrow \Sigma_{p+1},
\]
{\it i.e.}, the alternating representation of the cyclic group, induced up to
the symmetric group. Note, for $p$ even, $AC_{p+1}$ coincides with the trivial
representation $IC_{p+1}$.
Moreover, $H_p(Sym_*^{(p)})$ is
generated by the elements $\sigma(b_p)$, for the distinct cosets $\sigma C_{p+1}$,
where $b_p$ is the element:
\[
b_p := \sum_{j = 0}^p (-1)^{jp} \tau_p^j(z_0z_1 \ldots z_p).
\]
\end{prop}
\begin{proof}
Let $w$ be a general element of $Sym_p^{(p)}$.
\[
w = \sum_{\sigma \in \Sigma_{p+1}} c_\sigma \sigma(z_0z_1 \ldots z_p),
\]
where $c_\sigma$ are constants in $k$. $H_p(Sym_*^{(p)})$ consists of
those $w$ such that $d(w) = 0$. That is,
\[
0 = \sum_{\sigma \in \Sigma_{p+1}} c_\sigma \sigma\sum_{i=0}^{p-1}
(-1)^i (z_0 \ldots z_i \otimes z_{i+1} \ldots z_p)
\]
\begin{equation}\label{eq.sum_sigma_zero}
= \sum_{\sigma \in \Sigma_{p+1}} \sum_{i=0}^{p-1}
(-1)^i c_\sigma \sigma(z_0 \ldots z_i \otimes z_{i+1} \ldots z_p).
\end{equation}
Now for fixed $\sigma$, the terms corresponding to
$\sigma(z_0 \ldots z_i \otimes z_{i+1} \ldots z_p)$
occur in pairs in the above formula. The obvious term of the pair is
\[
(-1)^i c_{\sigma} \sigma(z_0 \ldots z_i \otimes z_{i+1} \ldots z_p).
\]
Not so obviously, the second term of the pair is
\[
(-1)^{(p-i-1)i}(-1)^{p-i-1} c_{\rho} \rho
(z_0 \ldots z_{p-i-1} \otimes z_{p-i} \ldots z_p),
\]
where $\rho = \sigma \tau_p^{p-i}$. Thus, if $d(w) = 0$, then
\[
(-1)^i c_\sigma + (-1)^{(p-i-1)(i+1)}c_\rho = 0,
\]
\[
c_\rho = (-1)^{(p-i-1)(i+1) + (i+1)}c_\sigma = (-1)^{(p-i)(i+1)}c_\sigma.
\]
Set $j = p-i$, so that
\[
c_\rho = (-1)^{j(p-j+1)}c_\sigma = (-1)^{jp -j^2 +j}c_\sigma = (-1)^{jp}c_\sigma.
\]
This proves that the only restrictions on the coefficients $c_\sigma$ are that
the coefficients corresponding to $\sigma, \sigma \tau_p,
\sigma \tau_p^2, \ldots$ agree up to sign, with the signs
alternating when $p$ is odd and all equal when $p$ is even. Clearly,
the elements $\sigma(b_p)$, for the distinct cosets $\sigma C_{p+1}$, represent an
independent set of generators over $k$ for $H_p(Sym_*^{(p)})$.
Observe that $b_p$ is invariant under the action of $sign(\tau_p)\tau_p$, and so
$b_p$ generates an alternating representation $A C_{p+1}$ over $k$.
Induced up to
$\Sigma_{p+1}$, we obtain the representation $AC_{p+1} \uparrow \Sigma_{p+1}$
of dimension $(p+1)!/(p+1) = p!$,
generated by the elements $\sigma(b_p)$ as in the proposition.
\end{proof}
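For example, $b_1 = z_0z_1 - z_1z_0$ spans the one-dimensional group
$H_1(Sym_*^{(1)})$ (indeed $d(b_1) = z_0 \otimes z_1 - z_1 \otimes z_0 = 0$), while
for $p = 2$ we have $b_2 = z_0z_1z_2 + z_1z_2z_0 + z_2z_0z_1$, and $b_2$ together
with $\sigma(b_2)$, for $\sigma$ any transposition, generates the two-dimensional
group $H_2(Sym_*^{(2)})$, in agreement with Thm.~\ref{thm.poincare_sym_complex}.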
\begin{definition}
For a given proper partition $\lambda = [\lambda_0, \lambda_1, \lambda_2, \ldots, \lambda_s]$
of the integer $p+1$ (so that $\lambda_0 + \lambda_1 + \ldots + \lambda_s = p+1$),
an element $W$ of $Sym_*^{(p)}$ will be designated as {\it type $\lambda$} if it is equivalent
to $\pm(Y_0 \otimes Y_1 \otimes Y_2 \otimes \ldots \otimes Y_s)$ with
$deg(Y_i) = \lambda_i - 1$. That is, each $Y_i$ has $\lambda_i$ factors.
The notation $Sym_\lambda^{(p)}$ or $Sym_\lambda$ will denote the $k$-submodule of
$Sym_{p-s}^{(p)}$
generated by all elements of type $\lambda$.
\end{definition}
In what follows, $|\lambda|$ will refer to the number of components of $\lambda$.
The action of $\Sigma_{p+1}$ leaves $Sym_\lambda$ invariant for any given $\lambda$, so
there is a decomposition
\[
Sym_{p-s}^{(p)} = \bigoplus_{\lambda \vdash (p+1), |\lambda| = s+1} Sym_{\lambda}
\]
as $k\Sigma_{p+1}$-module.
\begin{prop}
For a given proper partition $\lambda \vdash (p+1)$,
{\it (a)} $Sym_\lambda$ contains exactly one
alternating representation $A\Sigma_{p+1}$ iff $\lambda$ contains no repeated components.
{\it (b)} $Sym_\lambda$ contains exactly one trivial representation
$I\Sigma_{p+1}$ iff $\lambda$ contains no repeated even components.
\end{prop}
\begin{proof}
$Sym_\lambda$ is a quotient of the regular representation, since it is the image of the
$\Sigma_{p+1}$-map
\[
\pi_\lambda \;:\; k\Sigma_{p+1} \to Sym_\lambda
\]
\[
\sigma \mapsto \psi_\lambda \overline{\sigma},
\]
where $\overline{\sigma} \in \Sigma_{p+1}^{\mathrm{op}}$ is the $\Delta S$
automorphism of $[p]$ corresponding to $\sigma$ and $\psi_\lambda$ is a $\Delta$
morphism $[p] \to [\,|\lambda| - 1\,]$ that sends the points $0, \ldots, \lambda_0-1$
to $0$, the points $\lambda_0, \ldots, \lambda_0 + \lambda_1 -1$ to $1$, and
so on. Hence, there can be at most $1$ copy of $A\Sigma_{p+1}$ and at most
$1$ copy of $I\Sigma_{p+1}$ in $Sym_\lambda$.
Let $W$ be the ``standard'' element of $Sym_\lambda$. That is, the indeterminates
$z_i$ occur in $W$ in numerical order.
$A\Sigma_{p+1}$ exists in $Sym_\lambda$ iff the element
\[
V = \sum_{\sigma \in \Sigma_{p+1}} sign(\sigma)\sigma(W)
\]
is non-zero.
Suppose that some component of $\lambda$ is repeated, say $\lambda_i = \lambda_{i+1} = \ell$.
If $W = Y_0 \otimes Y_1 \otimes \ldots \otimes Y_s$, then $deg(Y_i) = deg(Y_{i+1}) = \ell-1$.
Now, we know that
\[
W = (-1)^{deg(Y_i)deg(Y_{i+1})} Y_0 \otimes \ldots \otimes Y_{i+1} \otimes Y_i \otimes
\ldots Y_s
\]
\[
= (-1)^{\ell^2 - 2\ell + 1}\alpha(W)
\]
\[
= -(-1)^{\ell} \alpha(W),
\]
for the permutation $\alpha \in \Sigma_{p+1}$ that exchanges the indices of indeterminates in
$Y_i$ with those in $Y_{i+1}$ in an order-preserving way. In $V$, the term $\alpha(W)$ shows
up with sign $sign(\alpha) = (-1)^{\ell^2} = (-1)^\ell$, thus cancelling with $W$. Hence,
$V = 0$, and no alternating representation exists.
If, on the other hand, no component of $\lambda$ is repeated, then no term $W$ can be
equivalent to $\pm \alpha(W)$ for $\alpha \neq \mathrm{id}$, so $V$ survives as the generator
of $A\Sigma_{p+1}$ in $Sym_\lambda$.
A similar analysis applies for trivial representations. This time, we examine
\[
U = \sum_{\sigma \in \Sigma_{p+1}} \sigma(W),
\]
which would be a generator for $I\Sigma_{p+1}$ if it were non-zero.
As before, if there is a repeated component, $\lambda_i = \lambda_{i+1} = \ell$, then
\[
W = (-1)^{\ell - 1} \alpha(W).
\]
However, this time, $W$ cancels with $\alpha(W)$ only if $\ell - 1$ is odd, that is,
only if $\lambda_i = \lambda_{i+1}$ is even. Thus if every repeated component of $\lambda$
is odd, or if all the $\lambda_i$ are distinct, then the element $U$ must be non-zero.
\end{proof}
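For example, for $p = 3$ the partition $\lambda = [2,2]$ has a repeated even
component, so $Sym_{[2,2]}$ contains neither an alternating nor a trivial
representation, whereas for $p = 5$ the partition $\lambda = [3,3]$ repeats only the
odd component $3$, so $Sym_{[3,3]}$ contains a trivial representation but no
alternating one.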
\begin{prop}\label{prop.alternating_reps}
$H_i(Sym_*^{(p)})$ contains an alternating representation for each partition
$\lambda \vdash (p+1)$ with \mbox{$|\lambda| = p-i+1$} such that no component of
$\lambda$ is repeated.
\end{prop}
\begin{proof}
This proposition will follow from the fact that $d(V) = 0$ for any generator
$V$ of an alternating representation in $Sym_\lambda$. Then, by Schur's Lemma,
the alternating representation must survive at the homology level.
Let $V$ be the generator mentioned above,
\[
V = \sum_{\sigma \in \Sigma_{p+1}} sign(\sigma)\sigma(W).
\]
$d(V)$ consists of terms $\partial_j(\sigma(W)) = \sigma(\partial_j(W))$ along with
appropriate signs.
For a given, $j$, write
\begin{equation}\label{eq.d_jW}
\partial_j(W) = (-1)^{a + \ell} Y_0 \otimes \ldots \otimes Y_i\{0, \ldots, \ell\}
\otimes Y_i\{\ell+1, \ldots, m\} \otimes \ldots \otimes Y_s,
\end{equation}
where if $Y = z_{i_0}z_{i_1} \ldots z_{i_r}$, then the notation $Y\{s, \ldots, t\}$ refers
to the monomial $z_{i_t}z_{i_{t+1}} \ldots z_{i_s}$, assuming $0 \leq s \leq t \leq r$.
In the above expression,
$a = deg(Y_0) + \ldots + deg(Y_{i-1})$.
Now, we may use the relations in $Sym_*$ to rewrite eq.~\ref{eq.d_jW} as
\begin{equation}
(-1)^{(a + \ell) + \ell(m - \ell - 1)}Y_0 \otimes \ldots \otimes Y_i\{\ell+1, \ldots, m\}
\otimes Y_i\{0, \ldots, \ell\} \otimes \ldots \otimes Y_s.
\end{equation}
Let $\alpha$ be the block permutation that relabels indices thus:
\begin{equation}\label{eq.d_jW-alpha}
(-1)^{a + m\ell - \ell^2}\alpha\big(Y_0 \otimes \ldots \otimes
Y_i\{0, \ldots, m-\ell-1\}
\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes Y_s\big)
\end{equation}
Now, The above tensor product also occurs in $\partial_{j'}\big(sign(\alpha)\alpha(W)\big)$
for some $j'$.
This term looks like:
\begin{equation}
sign(\alpha)(-1)^{a + m-\ell-1}
\alpha\big(Y_0 \otimes \ldots \otimes
Y_i\{0, \ldots, m-\ell-1\}
\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes Y_s\big)
\end{equation}
\begin{equation}
= (-1)^{(m-\ell)(\ell + 1)+a + m-\ell-1}
\alpha\big(Y_0 \otimes \ldots \otimes
Y_i\{0, \ldots, m-\ell-1\}
\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes Y_s\big)
\end{equation}
\begin{equation}\label{eq.alpha-d_jprime}
= (-1)^{m\ell - \ell^2 + a - 1}
\alpha\big(Y_0 \otimes \ldots \otimes
Y_i\{0, \ldots, m-\ell-1\}
\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes Y_s\big)
\end{equation}
Compare the sign of eq.~\ref{eq.alpha-d_jprime} with that of eq.~\ref{eq.d_jW-alpha}.
I claim the signs are opposite, which would mean the two terms cancel each other
in $d(V)$. Indeed, all that we must show is that
\[
(a + m\ell - \ell^2) + (m\ell - \ell^2 + a - 1) \equiv 1\; \mathrm{mod}\; 2,
\]
which is obviously true.
\end{proof}
By Proposition~\ref{prop.alternating_reps}, it is clear that if $p+1$ is a triangular
number ({\it i.e.}, $p+1$ is of the form $r(r+1)/2$ for some positive integer $r$),
then the lowest dimension in which an alternating representation may occur
is $p + 1 - r$, corresponding
to the partition $\lambda = [r, r-1, \ldots, 2, 1]$. A
little algebra yields the following statement for any $p$:
\begin{cor}\label{cor.lowest_alternating_reps}
$H_{p+1-r}(Sym_*^{(p)})$ contains an alternating representation, where
\[
r = \lfloor \sqrt{2p + 9/4} - 1/2 \rfloor.
\]
Moreover, there are no alternating representations present in $H_i(Sym_*^{(p)})$ for $i \leq p-r$.
\end{cor}
\begin{proof}
Simply solve $p+1 = r(r+1)/2$ for $r$, and note that the increase in $r$ occurs
exactly when $p$ hits the next triangular number.
\end{proof}
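Explicitly, $p+1 = r(r+1)/2$ is equivalent to $r^2 + r - 2(p+1) = 0$, whose positive
root is
\[
r = \frac{-1 + \sqrt{1 + 8(p+1)}}{2} = \sqrt{2p + \tfrac{9}{4}} - \tfrac{1}{2},
\]
and taking the floor selects the largest integer $r$ with $r(r+1)/2 \leq p+1$.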
There is not much known about the other irreducible representations occurring in
the homology groups of $Sym_*^{(p)}$; however, computational evidence shows that
$H_i(Sym_*^{(p)})$ contains no trivial representation,
$I\Sigma_{p+1}$, for \mbox{$i \leq p-r$} ($r$ as in Cor.~\ref{cor.lowest_alternating_reps}) up to
$p = 50$.
\section{Connectivity of $Sym_*^{(p)}$}\label{sec.connectivity_sym}
Quite recently, Vre\'cica and \v{Z}ivaljevi\'c~\cite{VZ} observed that the
complex $Sym_*^{(p)}$ is isomorphic to
the suspension of the cycle-free chessboard complex $\Omega_{p+1}$ (in fact,
the isomorphism takes the form $k\left[S\Omega_{p+1}^+\right] \to Sym_*^{(p)}$, where
$\Omega_{p+1}^+$ is the augmented complex).
The $m$-chains of the complex $\Omega_n$ are generated by lists
\[
L = \{ (i_0, j_0), (i_1, j_1), \ldots, (i_m, j_m) \},
\]
where $1 \leq i_0 < i_1 < \ldots < i_m \leq n$, all $1 \leq j_s \leq n$ are distinct
integers, and the list $L$ is {\it cycle-free}. It may be easier to say what it means for
$L$ not to be cycle free: $L$ is not
cycle-free if there exists a subset $L_c \subset L$ and re-ordering of $L_c$ so that
\[
L_c = \{ (\ell_0, \ell_1), (\ell_1, \ell_2), \ldots, (\ell_{t-1}, \ell_t),
(\ell_t, \ell_0) \}.
\]
The differential of $\Omega_n$ is defined on generators by:
\[
d\big( \{ (i_0, j_0), \ldots, (i_m, j_m) \} \big)
:= \sum_{s = 0}^{m} (-1)^s \{ (i_0, j_0), \ldots, (i_{s-1}, j_{s-1}),
(i_{s+1}, j_{s+1}), \ldots, (i_m, j_m) \}.
\]
For completeness, an explicit isomorphism shall be provided:
\begin{prop}\label{prop.iso_omega_sym}
Let $\Omega^+_n$ denote the augmented cycle-free $(n \times n)$-chessboard complex,
where the unique $(-1)$-chain is represented by the empty $n \times n$
chessboard, and the boundary map on $0$-chains takes a vertex to the
unique $(-1)$-chain.
For each $p \geq 0$, there is a chain isomorphism
\[
\omega_* : k\left[S\Omega^+_{p+1}\right] \to Sym_*^{(p)}
\]
\end{prop}
\begin{proof}
Note that we may define $m$-chains of $\Omega_{p+1}$ as cycle-free lists
\[
L = \{ (i_0, j_0), (i_1, j_1), \ldots, (i_m, j_m) \},
\]
with no requirement on the order of $\{ i_0, i_1, \ldots, i_m \}$, under the
equivalence relation:
\[
\{ (i_{\sigma^{-1}(0)}, j_{\sigma^{-1}(0)}), \ldots,
(i_{\sigma^{-1}(m)}, j_{\sigma^{-1}(m)}) \} \approx
sign(\sigma)\{ (i_0, j_0), \ldots, (i_m, j_m) \},
\]
for $\sigma \in \Sigma_{m+1}$.
Suppose $L$ is an $(m+1)$-chain of $S\Omega^+_{p+1}$ ({\it i.e.} an $m$-chain
of $\Omega^+_{p+1}$).
Call a subset $L' \subset L$ a {\it queue} if
there is a reordering of $L'$ such that
\[
L' = \{(\ell_0, \ell_1), (\ell_1, \ell_2), \ldots, (\ell_{t-1}, \ell_t)\},
\]
and $L'$ is called a {\it maximal queue} if it is not properly contained in
any other queue.
Since $L$ is supposed to be cycle-free, we can partition
$L$ into some number of maximal queues, $L_1', L_2', \ldots, L_q'$. Let
$\sigma$ be a permutation representing the re-ordering of $L$ into
maximal ordered queues.
\[
L \approx sign(\sigma)\{ (\ell^{(1)}_0, \ell^{(1)}_1),
(\ell^{(1)}_1, \ell^{(1)}_2), \ldots,
(\ell^{(1)}_{t_1-1}, \ell^{(1)}_{t_1}),
\ldots,
(\ell^{(q)}_0, \ell^{(q)}_1),
(\ell^{(q)}_1, \ell^{(q)}_2), \ldots,
(\ell^{(q)}_{t_q-1}, \ell^{(q)}_{t_q}) \}
\]
Each maximal ordered queue will correspond to a monomial of formal
indeterminates $z_i$. The correspondence is as follows:
\begin{equation}\label{eq.monomial_correspondence}
\{ (\ell_0, \ell_1), (\ell_1, \ell_2), \ldots,
(\ell_{t-1}, \ell_{t}) \} \mapsto z_{\ell_0-1}z_{\ell_1-1}\cdots
z_{\ell_{t}-1}.
\end{equation}
For each maximal ordered queue, $L'_s$,
denote the monomial obtained by formula~(\ref{eq.monomial_correspondence}) by $Z_s$.
Let $k_1, k_2, \ldots, k_u$ be the numbers in $\{0, 1, 2, \ldots, p\}$
such that $k_{r} + 1$ does not appear in any pair $(i_s, j_s) \in L$.
Now we may define $\omega_*$ on $L = L'_1 \cup L'_2 \cup \ldots \cup L'_q$.
\[
\omega_{m+1}(L) := Z_1 \otimes Z_2 \otimes \ldots \otimes Z_q \otimes
z_{k_1} \otimes z_{k_2} \otimes \ldots \otimes z_{k_u}.
\]
Observe that if $L = \emptyset$ is the $(-1)$-chain of $\Omega^+_{p+1}$, then there are no
maximal queues in $L$, and so
\[
\omega_0(\emptyset) = z_0 \otimes z_1 \otimes \ldots \otimes z_p.
\]
$\omega_*$ is a (well-defined) chain map due to the equivalence relations present
in $Sym_*^{(p)}$
(See formulas~(\ref{eq.def_Sym}),~(\ref{eq.Sym-differential}),
and~(\ref{eq.face_i-tensor})). To see that $\omega_*$ is an isomorphism,
it suffices to exhibit an
inverse. To each monomial
$Z = z_{i_0}z_{i_1}\cdots z_{i_t}$ with $t > 0$, there is an associated
ordered queue $L' = \{ (i_0 + 1, i_1 + 1), (i_1 + 1, i_2 + 1), \ldots,
(i_{t-1} +1, i_t + 1) \}$. If the monomial is a singleton, $Z = z_{i_0}$, the
associated ordered queue will be the empty set.
Now, given a generator $Z_1 \otimes Z_2 \otimes
\ldots \otimes Z_q \in Sym_*^{(p)}$, map it to the list
$L := L'_1 \cup L'_2 \cup \ldots \cup L'_q$, preserving the original order of indices.
\end{proof}
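For instance, take $p = 2$, so that $\Omega^+_3$ is the augmented cycle-free
$(3 \times 3)$-chessboard complex. The $0$-chain $L = \{(1,2)\}$ consists of a single
maximal queue, and the only index of $\{1,2,3\}$ not appearing in $L$ is $3$, so
$\omega_1(L) = z_0z_1 \otimes z_2$; the $1$-chain $L = \{(1,2),(2,3)\}$ is a single
maximal queue using every index, so $\omega_2(L) = z_0z_1z_2$.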
\begin{theorem}\label{thm.connectivity}
$Sym_*^{(p)}$ is $\lfloor\frac{2}{3}(p-1)\rfloor$-connected.
\end{theorem}
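The bound is consistent with the computations of Thm.~\ref{thm.poincare_sym_complex}:
for instance, for $p = 7$ it asserts $4$-connectivity, and indeed $P_7(t)$ has no
terms below degree $5$, while for $p = 5$ it asserts only $2$-connectivity, matching
the non-zero $t^3$ term of $P_5(t)$.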
This remarkable fact yields the following useful corollaries:
\begin{cor}\label{cor.finitely-generated}
The spectral sequences of Thm.~\ref{thm.E1_NS} and Cor.~\ref{cor.E1_Sym} converge
strongly to $\widetilde{H}S_*(A)$.
\end{cor}
\begin{proof}
This relies on the fact that the connectivity of the complexes $Sym_*^{(p)}$ is
a non-decreasing function of $p$.
Fix $n \geq 0$, and consider the component of $E^1$ residing at position $p, q$ for
$p + q = n$,
\[
\bigoplus_{u}H_n(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p)}).
\]
A priori, the induced differentials whose sources are $E^1_{p,q}, E^2_{p,q},
E^3_{p,q}, \ldots$
will have as targets
certain subquotients of
\[
\bigoplus_{u}H_{n-1}(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+1)}),\;
\bigoplus_{u}H_{n-1}(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+2)}),
\]
\[
\bigoplus_{u}H_{n-1}(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+3)}),\ldots
\]
Now, if $n-1 < \lfloor (2/3)(p+k-1)\rfloor$ for some $k \geq 0$, then for $K > k$,
we have
\[
H_{n-1}(Sym_*^{(p + K)}) = 0,
\]
hence also,
\[
H_{n-1}(E_*G_{u} \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+K)}) = 0,
\]
using the fibration mentioned in the proof of Thm.~\ref{thm.E1_NS} and
the Hurewicz Theorem.
Thus, the induced differential $d^k$ is zero for all $k \geq K$.
On the other hand, the induced differentials whose targets are $E^1_{p,q},
E^2_{p,q}, E^3_{p,q}, \ldots$ must be zero after stage $p$, since there are
no non-zero components with $p < 0$.
\end{proof}
\begin{cor}\label{cor.trunc-isomorphism}
For each $i \geq 0$, there is a positive integer $N_i$ so that if
$p \geq N_i$, there is an isomorphism
\[
H_i(\mathscr{G}_p\widetilde{\mathscr{Y}}_*) \cong \widetilde{H}S_i(A).
\]
\end{cor}
\begin{cor}\label{cor.fin-gen}
If $A$ is finitely-generated over a Noetherian ground ring $k$, then
$HS_*(A)$ is finitely-generated over $k$ in each degree.
\end{cor}
\begin{proof}
Examination
of the $E^1$ term shows that the $n^{th}$ reduced symmetric homology group of $A$ is a
subquotient of:
\[
\bigoplus_{p \geq 0} \bigoplus_{u\in X^{p+1}/\Sigma_{p+1}}
H_n(E_*G_{u}\textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p)} ; k)
\]
Each $H_n(E_*G_{u}\textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p)} ; k)$ is a finitely-generated
$k$-module. The inner sum is finite as long as $X$ is finite.
Thm.~\ref{thm.connectivity} shows the
outer sum is finite as well.
\end{proof}
The bounds on connectivity are conjectured to be tight. This is certainly true for
$p \equiv 1$ (mod $3$), based on Thm.~16 of~\cite{VZ}.
Corollary~12 of the same paper establishes the following result:
\[
\mathrm{Either}\qquad H_{2k}(Sym_*^{(3k-1)}) \neq 0 \quad \mathrm{or}
\quad H_{2k}(Sym_*^{(3k)}) \neq 0.
\]
For $k \leq 2$, both statements are true.
When the latter condition is true, this gives a tight bound on connectivity for
$p \equiv 0$ (mod $3$). When only the former is known, it does not yield a tight
bound; what is needed in that case is that
$H_{2k-1}(Sym_*^{(3k-1)})$ be non-zero, which does hold for $k = 1, 2$, where we have computed
the integral homology:
\[
H_1(Sym_*^{(2)}) = \mathbb{Z} \quad \mathrm{and} \quad H_3(Sym_*^{(5)}) = \mathbb{Z}.
\]
\section{Filtering $Sym_*^{(p)}$ by partition types}\label{sec.filterSym}
In section~\ref{sec.rep_sym}, we saw that $Sym_*^{(n)}$ decomposes over $k\Sigma_{n+1}$
as a direct sum of the submodules $Sym_\lambda$ for partitions $\lambda \vdash (n+1)$.
Filter $Sym_*^{(n)}$ by the size of the largest component of the partition.
\[
\mathscr{F}_pSym_q^{(n)} := \bigoplus_{\lambda \vdash (n+1),
|\lambda| = n+1-(p+q), \lambda_0 \leq p+1}
Sym_{\lambda},
\]
where $\lambda = [\lambda_0, \lambda_1, \ldots, \lambda_{n-q}]$, is written so that
$\lambda_0 \geq
\lambda_1 \geq \ldots \geq \lambda_{n-q}$. The differential of $Sym_*^{(n)}$ respects
this filtering, since it can only
reduce the size of partition components. With respect to this filtering, we have
an $E^0$ term for a spectral sequence:
\[
E_{p,q}^0 \cong \bigoplus_{\lambda \vdash (n+1), |\lambda| = n+1-(p+q),\lambda_0 = p+1}
Sym_{\lambda}.
\]
The vertical differential $d^0$ is induced from $d$ by keeping only those
terms of $d(W)$ whose largest partition component agrees with that of $W$.
\chapter{$E_{\infty}$-STRUCTURE AND HOMOLOGY OPERATIONS}\label{chap.prod}
\section{Definitions}\label{sec.homopdefs}
In this chapter, homology operations are defined for $HS_*(A)$,
following May~\cite{M2}. Let $\mathscr{Y}_*^+A$ be
the simplicial $k$-module of section~\ref{sec.deltas_plus}.
The key is to show that $\mathscr{Y}_*^+A$ admits the structure of an
$E_\infty$-algebra.
This will be accomplished using various guises
of the Barratt-Eccles operad. While our final goal is to produce an
action on the level of $k$-complexes, we must induce the structure from the
level of categories and through simplicial $k$-modules. Finally, we may
use Definitions~2.1 and~2.2 of~\cite{M2} to define homology operations at
the level of $k$-complexes.
\begin{rmk}
Throughout this chapter, we shall fix $S_n$ to be the symmetric group
on the letters $\{1, 2, \ldots, n\}$, given by permutations $\sigma$
that act on the {\it left} of lists of size $n$. {\it i.e.,}
\[
\sigma.(i_1, i_2, \ldots i_n) = \left(i_{\sigma^{-1}(1)}, i_{\sigma^{-1}(2)},
\ldots, i_{\sigma^{-1}(n)}\right),
\]
so that $(\sigma \tau).L = \sigma.(\tau.L)$ for all $\sigma, \tau \in S_n$,
and an $n$-element list, $L$.
\end{rmk}
In this chapter, we make use of operad structures in various categories,
so for completeness, formal definitions of {\it symmetric monoidal category}
as well as {\it operad}, {\it operad-algebra}, and {\it operad-module}
will be given below.
\begin{definition}\label{def.symmoncat}
A category $\mathscr{C}$ is symmetric monoidal if there is a bifunctor
$\odot : \mathscr{C} \times \mathscr{C} \to \mathscr{C}$ together with:
1. A natural isomorphism,
\[
a : \odot ( \mathrm{id}_\mathscr{C} \times \odot ) \to \odot( \odot \times
\mathrm{id}_\mathscr{C} )
\]
satisfying the MacLane pentagon condition (commutativity of the following
diagram for all objects $A$, $B$, $C$, $D$).
\[
\begin{diagram}
\node[2]{(A \odot B) \odot (C \odot D)}
\arrow{se,t}{a_{A \odot B, C, D} }\\
\node{A\odot\left(B\odot(C\odot D)\right)}
\arrow{ne,t}{a_{A, B, C\odot D} }
\arrow{s,l}{\mathrm{id} \odot a_{B, C, D} }
\node[2]{\left( (A \odot B) \odot C \right) \odot D}\\
\node{A \odot \left( (B \odot C) \odot D\right)}
\arrow[2]{e,t}{a_{A, B \odot C, D} }
\node[2]{\left( A \odot (B \odot C)\right)\odot D}
\arrow{n,r}{a_{A, B, C} \odot\mathrm{id} }
\end{diagram}
\]
2. A \textit{unit} object $e \in \mathrm{Obj}_\mathscr{C}$, together with
natural isomorphisms
\[
\ell : e \odot \mathrm{id}_{\mathscr{C}} \to \mathrm{id}_\mathscr{C}
\]
\[
r : \mathrm{id}_{\mathscr{C}} \odot e \to \mathrm{id}_\mathscr{C}
\]
making the following diagram commute for all objects $A$ and $B$:
\[
\begin{diagram}
\node{A \odot (e \odot B)}
\arrow[2]{e,t}{a_{A,e,B}}
\arrow{se,b}{\mathrm{id} \odot \ell}
\node[2]{(A \odot e) \odot B}
\arrow{sw,b}{r \odot \mathrm{id}}\\
\node[2]{A \odot B}
\end{diagram}
\]
3. A natural transformation \mbox{$s : \odot \to \odot T$}, where
\mbox{$T : \mathscr{C} \times \mathscr{C} \to \mathscr{C}$} is the transposition functor
\mbox{$(A,B) \mapsto (B,A)$}, such that $s^2 = \mathrm{id}$ and
$s$ satisfies the {\it hexagon identity} (commutativity of the following
diagram for all objects $A$, $B$, $C$).
\[
\begin{diagram}
\node[2]{A \odot (B \odot C)}
\arrow{sw,t}{\mathrm{id} \odot s_{B,C} }
\arrow{se,t}{a_{A,B,C}}\\
\node{A \odot (C \odot B)}
\arrow{s,l}{a_{A, C, B}}
\node[2]{(A \odot B) \odot C}
\arrow{s,r}{s_{A\odot B, C} }\\
\node{(A \odot C) \odot B}
\arrow{se,b}{s_{A,C} \odot \mathrm{id}}
\node[2]{C \odot (A \odot B)}
\arrow{sw,b}{a_{C, A, B} }\\
\node[2]{(C \odot A) \odot B}
\end{diagram}
\]
Observe that if $a_{A,B,C}$, $\ell_A$ and $r_A$ are identity morphisms, then
this definition reduces to that of a permutative category.
\end{definition}
Let $\mathbf{S}$ denote the
{\it symmetric groupoid}, regarded as a category. For our purposes, we may label the objects
by $\underline{0}, \underline{1}, \underline{2}, \ldots$, and the only morphisms
of $\mathbf{S}$ are automorphisms, $\mathrm{Aut}_{\mathbf{S}}(\underline{n}) :=
S_n$, the symmetric group on $n$ letters.
\begin{definition}\label{def.operad}
Suppose $\mathscr{C}$ is a symmetric monoidal category, with unit $e$.
An {\it operad} $\mathscr{P}$ in the category $\mathscr{C}$ is a functor
\[
\mathscr{P} : \mathbf{S}^\mathrm{op} \to \mathscr{C},
\]
with $\mathscr{P}(0) = e$, together with the following data:
1. Morphisms $\gamma_{k, j_1, \ldots, j_k} : \mathscr{P}(k) \odot \mathscr{P}(j_1)
\odot \ldots \odot \mathscr{P}(j_k) \to \mathscr{P}(j)$, where
$j = \sum j_s$. For brevity, we denote these morphisms simply by $\gamma$.
The morphisms $\gamma$ should satisfy the following associativity condition.
The diagram below is commutative for
all $k \geq 0$, $j_s \geq 0$, $i_r \geq 0$.
Here, $T$ is a map that permutes the components of the product in the specified way,
using the symmetric transformation $s$ of $\mathscr{C}$. Coherence of $s$
guarantees that this is a well-defined map.
{\bf Associativity:}
\[
\begin{diagram}
\node{\mathscr{P}(k) \odot \bigodot_{s = 1}^{k} \mathscr{P}(j_s)
\odot \bigodot_{r = 1}^{j} \mathscr{P}(i_r)}
\arrow{e,t}{T}
\arrow{s,l}{\gamma \odot \mathrm{id}^{\odot j}}
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^{k}
\left( \mathscr{P}(j_s) \odot \bigodot_{r = j_1 + \ldots + j_{s-1} + 1}^{j_1 + \ldots
+ j_s} \mathscr{P}(i_r) \right)}
\arrow{s,r}{\mathrm{id} \odot \gamma^{\odot k}}\\
\node{\mathscr{P}(j) \odot \bigodot_{r = 1}^{j} \mathscr{P}(i_r) }
\arrow{s,l}{\gamma}
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^{k} \mathscr{P}\left( \sum_{r=j_1 + \ldots
+ j_{s-1} + 1}^{j_1 + \ldots + j_s} i_r \right) }
\arrow{s,r}{\gamma}\\
\node{\mathscr{P}\left(\sum_{r=1}^{j} i_r \right) }
\arrow{e,=}
\node{\mathscr{P}\left(\sum_{r=1}^{j_1 + \ldots + j_k} i_r \right) }
\end{diagram}
\]
2. A {\it Unit} morphism \mbox{$\eta : e \to \mathscr{P}(1)$} making the following
diagrams commute:
{\bf Left Unit Condition:}
\[
\begin{diagram}
\node{e \odot \mathscr{P}(j)}
\arrow{e,t}{\eta \odot \mathrm{id}}
\arrow{se,b}{\ell}
\node{\mathscr{P}(1) \odot \mathscr{P}(j)}
\arrow{s,r}{\gamma}\\
\node[2]{\mathscr{P}(j)}
\end{diagram}
\]
{\bf Right Unit Condition:}
\[
\begin{diagram}
\node{\mathscr{P}(j) \odot e^{\odot j}}
\arrow{e,t}{\mathrm{id} \odot \eta^{\odot j}}
\arrow{se,b}{r^j}
\node{\mathscr{P}(j) \odot \mathscr{P}(1)^{\odot j}}
\arrow{s,r}{\gamma}
\\
\node[2]{\mathscr{P}(j)}
\end{diagram}
\]
Here, $r^j$ is the {\it iterated} right unit map defined recursively (for an
object $A$ of $\mathscr{C}$):
\[
r_A^j := \left\{\begin{array}{ll}
r_A, & j=1\\
r_A^{j-1}\left(r_A \odot \mathrm{id}^{\odot(j-1)}\right),
& j > 1
\end{array}\right.
\]
3. The right action of $S_n$ on $\mathscr{P}(n)$ for each $n$
must satisfy the following {\it equivariance
conditions}. Both diagrams below are commutative for
all \mbox{$k \geq 0$}, \mbox{$j_s \geq 0$}, \mbox{$(j = \sum j_s)$},
\mbox{$\sigma \in S_k^\mathrm{op}$},
and \mbox{$\tau_s \in S_{j_s}^\mathrm{op}$}.
Here, $T_{\sigma}$ is
a morphism that permutes the components of the product in the specified way,
using the symmetric transformation $s$.
$\sigma\{j_1, \ldots, j_k\}$ denotes the permutation of $j$ letters which permutes
the $k$ blocks of letters (of sizes $j_1$, $j_2$, \ldots $j_k$) according to
$\sigma$, and \mbox{$\tau_1 \oplus \ldots \oplus \tau_k$} denotes the image of
\mbox{$(\tau_1, \ldots, \tau_k)$} under the evident inclusion
\mbox{$S_{j_1}^\mathrm{op} \times \ldots \times S_{j_k}^\mathrm{op} \hookrightarrow
S_j^\mathrm{op}$}.
{\bf Equivariance Condition A:}
\[
\begin{diagram}
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^k \mathscr{P}(j_s)}
\arrow{e,t}{\mathrm{id} \odot T_{\sigma}}
\arrow{s,l}{\sigma \odot \mathrm{id}^{\odot k}}
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^k \mathscr{P}\left(j_{\sigma^{-1}(s)}\right)}
\arrow{s,r}{\gamma}\\
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^k \mathscr{P}(j_s)}
\arrow{se,l}{\gamma}
\node{\mathscr{P}(j)}
\arrow{s,b}{\sigma\{j_1, \ldots, j_k\}}\\
\node[2]{\mathscr{P}(j)}
\end{diagram}
\]
{\bf Equivariance Condition B:}
\[
\begin{diagram}
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^k \mathscr{P}(j_s)}
\arrow{e,t}{\gamma}
\arrow{s,l}{\mathrm{id} \odot \tau_1 \odot \ldots \odot \tau_k}
\node{\mathscr{P}(j)}
\arrow{s,r}{\tau_1 \oplus \ldots \oplus \tau_k}\\
\node{\mathscr{P}(k) \odot \bigodot_{s=1}^k \mathscr{P}(j_s)}
\arrow{e,t}{\gamma}
\node{\mathscr{P}(j)}
\end{diagram}
\]
\end{definition}
\begin{definition}\label{def.operad-algebra}
For a symmetric monoidal category $\mathscr{C}$ with product $\odot$
and an operad $\mathscr{P}$ over $\mathscr{C}$, a
{\it $\mathscr{P}$-algebra}
structure on an object $X$ in $\mathscr{C}$ is defined by a
family of maps
\[
\chi : \mathscr{P}(n) \odot_{S_n} X^{\odot n} \to X,
\]
which are compatible with the multiplication, unit maps, and equivariance
conditions of $\mathscr{P}$. Note, the symbol $\odot_{S_n}$ denotes
an internal equivariance condition:
\[
\chi(\pi.\sigma \odot x_1 \odot \ldots \odot x_n)
= \chi(\pi \odot x_{\sigma^{-1}(1)} \odot \ldots \odot x_{\sigma^{-1}(n)})
\]
If $X$ is a $\mathscr{P}$-algebra, we will say that $\mathscr{P}$ acts on $X$.
\end{definition}
\begin{definition}\label{def.operad-module}
Let $\mathscr{P}$ be an operad over the symmetric monoidal category $\mathscr{C}$,
and let $\mathscr{M}$ be a functor \mbox{$\mathbf{S}^\mathrm{op} \to \mathscr{C}$}.
A {\it (left) \mbox{$\mathscr{P}$-module}} structure on
$\mathscr{M}$ is a collection of structure maps,
\[
\mu : \mathscr{P}(n) \odot \mathscr{M}(j_1) \odot \ldots \odot \mathscr{M}(j_n)
\longrightarrow \mathscr{M}(j_1 + \ldots + j_n),
\]
satisfying the evident compatibility relations with the operad multiplication of
$\mathscr{P}$. For the precise definition, see~\cite{KM}.
\end{definition}
In the course of this chapter, it shall become necessary to induce structures
up from small categories to simplicial sets, then to simplicial $k$-modules, and
finally to $k$-complexes. Each of these categories is symmetric monoidal.
For notational convenience, all operads, operad-algebras,
and operad-modules will carry a subscript denoting the ambient category over which
the structure is defined:
\begin{center}
\begin{tabular}{|ll|l|l|}
\hline
Category & & Sym. Mon. Product & Notation\\
\hline
Small categories & $\mathbf{Cat}$ & $\times$ (product of categories) &
$\mathscr{P}_\mathrm{cat}$ \\
Simplicial sets & $\mathbf{SimpSet}$ & $\times$
(degree-wise set product) & $\mathscr{P}_\mathrm{ss}$ \\
Simplicial $k$-modules & $k$-$\mathbf{SimpMod}$ & $\widehat{\otimes}$ (degree-wise
tensor product) & $\mathscr{P}_\mathrm{sm}$ \\
$k$-complexes & $k$-$\mathbf{Complexes}$ & $\otimes$ (tensor prod. of chain complexes)
& $\mathscr{P}_\mathrm{ch}$ \\
\hline
\end{tabular}
\end{center}
\begin{rmk}
The notation $\widehat{\otimes}$, appearing in Richter~\cite{R}, is useful for
indicating degree-wise tensoring of graded modules:
\[
\left(A_* \,\widehat{\otimes}\, B_*\right)_n :=
A_n \otimes_k B_n,
\]
as opposed to the standard tensor product (over $k$) of complexes:
\[
\left(\mathscr{A}_* \otimes \mathscr{B}_*\right)_n := \bigoplus_{p+q=n}
\mathscr{A}_p \otimes_k \mathscr{B}_q
\]
\end{rmk}
Furthermore, we are interested in certain functors from one category to the next in
the list. These functors preserve the symmetric monoidal structure in a sense we
will make precise in Section~\ref{sec.op-mod-structure} -- hence, it will follow
that these functors send operads to operads, operad-modules to operad-modules,
and operad-algebras to operad-algebras.
\[
\begin{diagram}
\node{ (\mathbf{Cat}, \times) }
\arrow{s,r}{N \quad \textrm{(Nerve of categories)}}
\\
\node{ (\mathbf{SimpSet}, \times) }
\arrow{s,r}{k[ - ] \quad \textrm{($k$-linearization)}}
\\
\node{ (\textrm{$k$-$\mathbf{SimpMod}$}, \widehat{\otimes} ) }
\arrow{s,r}{\mathcal{N} \quad \textrm{(Normalization functor)}}
\\
\node{ (\textrm{$k$-$\mathbf{Complexes}$}, \otimes )}
\end{diagram}
\]
\begin{rmk}
Note, the normalization functor $\mathcal{N}$ is one direction of the
Dold-Kan correspondence between simplicial modules and complexes.
\end{rmk}
\begin{rmk}
The ultimate goal of this chapter is to construct an $E_\infty$ structure on
the chain complex associated with $\mathscr{Y}_*^+A$, {\it i.e.} an
action by an $E_\infty$-operad.
While we could define the notion of $E_{\infty}$-operad over general
categories, it would require extra structure on the ambient symmetric
monoidal category $(\mathscr{C}, \odot)$ -- which the examples above possess.
To avoid needless technicalities, we shall instead define versions of the
Barratt-Eccles operad over each of our ambient categories, and take for
granted that they are all $E_{\infty}$-operads.
For a more general discussion of $E_\infty$-operads and algebras in
the category of chain complexes, see~\cite{MSS}.
\end{rmk}
\begin{definition}\label{def.operadD}
$\mathscr{D}_{\mathrm{cat}}$ is the operad $\{\mathscr{D}_{\mathrm{cat}}(m)\}$ in
the category \textbf{Cat},
where $\mathscr{D}_{\mathrm{cat}}(0) = \ast$,
$\mathscr{D}_{\mathrm{cat}}(m)$ is the category whose objects are the elements of
$S_m$, and for each pair of
objects, $\sigma, \tau$, we have $\mathrm{Mor}(\sigma, \tau) = \{ \tau\sigma^{-1} \}$.
The structure map (multiplication) $\delta$ in $\mathscr{D}_{\mathrm{cat}}$ is a
functor defined on objects by:
\[
\delta \;:\; \mathscr{D}_{\mathrm{cat}}(m) \times \mathscr{D}_{\mathrm{cat}}(k_1)
\times \ldots \times \mathscr{D}_{\mathrm{cat}}(k_m) \longrightarrow
\mathscr{D}_{\mathrm{cat}}(k), \;\;\textit{where $k = \sum k_i$}
\]
\[
(\sigma, \tau_1, \ldots, \tau_m) \mapsto \sigma\{k_1, \ldots, k_m\} (\tau_1
\oplus \ldots \oplus \tau_m)
\]
Here, $\tau_1 \oplus \ldots \oplus \tau_m \in S_k$ and
$\sigma\{k_1, \ldots, k_m\} \in S_k$ are defined as in Definition~\ref{def.operad}.
The functor takes the unique morphism
\[
(\sigma, \tau_1, \ldots, \tau_m) \to (\rho, \psi_1, \ldots, \psi_m)
\]
to the unique morphism
\[
\sigma\{k_1, \ldots, k_m\}(\tau_1 \oplus \ldots \oplus \tau_m) \to
\rho\{k_1, \ldots, k_m\}(\psi_1 \oplus\ldots \oplus \psi_m).
\]
The action of $S_m^{\mathrm{op}}$ on objects of
$\mathscr{D}_{\mathrm{cat}}(m)$ is given by right
multiplication.
\end{definition}
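For example, between any two objects of $\mathscr{D}_{\mathrm{cat}}(m)$ there is
exactly one morphism, and composition is forced: the unique morphism
$\sigma \to \tau$ is $\tau\sigma^{-1}$, and composing with the unique morphism
$\tau \to \rho$ gives
\[
(\rho\tau^{-1})(\tau\sigma^{-1}) = \rho\sigma^{-1},
\]
the unique morphism $\sigma \to \rho$. Thus $\mathscr{D}_{\mathrm{cat}}(m)$ is the
category with object set $S_m$ in which all objects are uniquely isomorphic, which
is why its nerve (and hence each $\mathscr{D}_{\mathrm{ch}}(m)$ considered below)
is contractible.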
\begin{rmk}\label{rmk.Barratt-Eccles}
We are following the notation of May for our operad
$\mathscr{D}_{\mathrm{cat}}$ (See~\cite{M3}, Lemmas~4.3, 4.8). May's
notation for $\mathscr{D}_{\mathrm{cat}}(m)$ is $\widetilde{\Sigma}_m$,
and he defines the related operad $\mathscr{D}$ over the category of spaces, as
the geometric realization of the nerve of $\widetilde{\Sigma}$.
The nerve of $\mathscr{D}_{\mathrm{cat}}$ is generally known
in the literature as the {\it Barratt-Eccles operad} (See~\cite{BE}, where
the notation for $N\mathscr{D}_{\mathrm{cat}}$ is $\Gamma$).
\end{rmk}
\section{Operad-Module Structure}\label{sec.op-mod-structure}
In order to best define the $E_\infty$ structure of $\mathscr{Y}_*^+A$,
we will begin with an {\it operad-module} structure over the category
of small categories,
then induce this structure up to the category of \mbox{$k$-complexes}.
\begin{definition}
Define for each $m \geq 0$, a category,
\[
\mathscr{K}_{\mathrm{cat}}(m) := [m-1] \setminus \Delta S_+ = \underline{m} \setminus
\mathcal{F}(as),
\]
(See Section~\ref{sec.deltas} for a definition of $\mathcal{F}(as)$.)
\end{definition}
Identifying $\Delta S_+$ with $\mathcal{F}(as)$, we see
that the morphism $(\phi, g)$ of $\Delta S_+$ consists of the
set map $\phi$, precomposed with $g^{-1}$ in order to indicate the total
ordering on all preimage sets. Thus, precomposition with symmetric
group elements defines a {\it right} $S_m$-action on objects
of $\underline{m} \setminus \mathcal{F}(as)$. When writing morphisms
of $\mathcal{F}(as)$, we may avoid confusion by writing the automorphisms
as elements of the symmetric group as opposed to its opposite group, with
the understanding that $g \in \mathrm{Mor}\mathcal{F}(as)$ corresponds
to $g^{-1} \in \mathrm{Mor}\Delta S_+$.
\[
\mathscr{K}_{\mathrm{cat}}(m) \times S_m \to \mathscr{K}_{\mathrm{cat}}(m)
\]
\[
(\phi, g).h := (\phi, gh)
\]
Let $m, j_1, j_2, \ldots, j_m \geq 0$, and let $j = \sum j_s$.
We shall define a family of functors,
\begin{equation}\label{eq.mu}
\mu = \mu_{m,j_1,\ldots,j_m} : \mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\mathscr{K}_{\mathrm{cat}}(j_s) \longrightarrow \mathscr{K}_{\mathrm{cat}}(j).
\end{equation}
Assume that the pairs of morphisms $f_i$, $g_i$ ($1 \leq i \leq m$) have specified
sources and targets:
\[
\underline{j_i} \stackrel{f_i}{\to} \underline{p_i} \stackrel{g_i}{\to}
\underline{q_i}.
\]
$\mu$ is defined on objects by:
\begin{equation}\label{eq.mu-objects}
\mu( \sigma, f_1, f_2, \ldots, f_m ) :=
{\sigma\{p_1, p_2, \ldots, p_m\}}
(f_1 \odot f_2 \odot \ldots \odot f_m).
\end{equation}
For compactness of notation, denote
\[
\underline{m} := \{1, \ldots, m\},
\]
as an ordered list, along with a left $S_m$-action,
\[
\tau \underline{m} := \{\tau^{-1}(1), \ldots, \tau^{-1}(m)\}.
\]
Then for any permutation, $\tau \underline{m}$, of the ordered list
$\underline{m}$, and any list of $m$ numbers, $\{j_1, \ldots, j_m\}$,
denote
\[
j_{\tau \underline{m}} := \{j_{\tau^{-1}(1)}, \ldots, j_{\tau^{-1}(m)}\}.
\]
Furthermore, if $f_1, f_2, \ldots, f_m \in \mathrm{Mor}\mathcal{F}(as)$, denote
\[
f_{\tau \underline{m}}^\odot := f_{\tau^{-1}(1)} \odot \ldots \odot f_{\tau^{-1}(m)}
\]
so in particular, we may write:
\[
\mu( \sigma, f_1, f_2, \ldots, f_m ) =
\sigma\{p_{\underline{m}}\}f^{\odot}_{\underline{m}}
\]
Using this notation, define $\mu$ on morphisms by:
\begin{equation}\label{eq.mu-morphisms}
\begin{diagram}
\node{ (\sigma, f_1, \ldots, f_m) }
\arrow{s,r}{ \tau\sigma^{-1} \times g_1 \times \ldots \times g_m }
\arrow{e,t}{ \mu }
\node{ \sigma\{p_{\underline{m}}\}f_{\underline{m}}^\odot }
\arrow{s,r}{ (\tau\sigma^{-1})\{q_{\sigma \underline{m}} \}
g_{\sigma \underline{m}}^\odot }
\\
\node{ (\tau, g_1f_1, \ldots, g_mf_m) }
\arrow{e,t}{ \mu }
\node{ \tau\{q_{\underline{m}}\}
(g_{\underline{m}}f_{\underline{m}})^\odot }
\end{diagram}
\end{equation}
It is useful to note three properties of the block permutations and the symmetric
monoidal product in the category $\coprod_{m \geq 0} \mathscr{K}_{\mathrm{cat}}(m)$.
\begin{prop}\label{prop.properties-blockperm-odot}
1. For $\sigma, \tau \in S_m$, and non-negative $p_1, p_2, \ldots, p_m$,
\[
(\sigma\tau)\{ p_{\underline{m}} \} = \sigma\{p_{\tau \underline{m}}\}
\tau\{p_{\underline{m}}\}.
\]
2. For $\sigma \in S_m$, and morphisms $g_i \in \mathrm{Mor}_{\mathcal{F}(as)}
(\underline{p_i}, \underline{q_i})$, $(1 \leq i \leq m)$,
\[
\sigma\{q_{\underline{m}}\}g_{\underline{m}}^\odot =
g_{\sigma \underline{m}}^\odot \sigma\{p_{\underline{m}}\}.
\]
3. For morphisms $\underline{j_i} \stackrel{f_i}{\to} \underline{p_i}
\stackrel{g_i}{\to} \underline{q_i}$,
\[
(g_{\underline{m}}f_{\underline{m}})^\odot = g_{\underline{m}}^\odot
f_{\underline{m}}^\odot
\]
\end{prop}
\begin{proof}
Property 1 is a standard composition property of block permutations.
See~\cite{MSS}, p. 41, for example. Property 2 expresses the fact that
it does not matter whether we apply the morphisms $g_i$ to blocks first, then
permute those blocks, or permute the blocks first, then apply $g_i$ to
the corresponding permuted block. Finally, property 3 is a result of
functoriality of the product $\odot$.
\end{proof}
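As a quick check of Property 1, take $m = 2$, let $\sigma = \tau$ be the
non-identity element of $S_2$, and let $p_1, p_2 \geq 0$. Since $\sigma\tau =
\mathrm{id}$, Property 1 reads
\[
\mathrm{id}_{S_{p_1 + p_2}} = (\sigma\tau)\{p_1, p_2\} =
\sigma\{p_2, p_1\}\,\tau\{p_1, p_2\};
\]
that is, the block permutation interchanging consecutive blocks of sizes $p_1$ and
$p_2$ and the one interchanging blocks of sizes $p_2$ and $p_1$ are mutually
inverse.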
Using these properties, it is straightforward to verify that $\mu$ is a
functor. We shall show that $\mu$ defines a
$\mathscr{D}_{\mathrm{cat}}$-module structure on
$\mathscr{K}_{\mathrm{cat}}$.
First observe that if $(\mathscr{B}, \odot)$ is a permutative category,
then $\mathscr{B}$ admits the structure of $E_\infty$-algebra.
We may express this structure using the $E_\infty$-operad,
$\mathscr{D}_{\mathrm{cat}}$. The structure map is
a family of functors
\[
\theta : \mathscr{D}_{\mathrm{cat}}(m) \times \mathscr{B}^m \to \mathscr{B},
\]
given on objects $C_i \in \mathrm{Obj}\mathscr{B}$ by:
\[
\theta(\sigma, C_1, \ldots, C_m) := C_{\sigma^{-1}(1)} \odot \ldots \odot
C_{\sigma^{-1}(m)},
\]
and on morphisms $(\sigma, C_1, \ldots, C_m) \to (\tau, D_1, \dots, D_m)$ by:
\[
\begin{diagram}
\node{ (\sigma, C_1, \ldots, C_m) }
\arrow{e,t}{ \theta }
\arrow[2]{s,l}{ \tau \sigma^{-1} \times f_1 \times \ldots \times f_m }
\node{ C_{\sigma^{-1}(1)} \odot \ldots \odot C_{\sigma^{-1}(m)} }
\arrow{s,r}{ f_{\sigma^{-1}(1)} \odot \ldots \odot f_{\sigma^{-1}(m)} }
\\
\node[2]{ D_{\sigma^{-1}(1)} \odot \ldots \odot D_{\sigma^{-1}(m)} }
\arrow{s,lr}{ \cong }{ T_{\tau\sigma^{-1}} }
\\
\node{ (\tau, D_1, \dots, D_m) }
\arrow{e,t}{ \theta }
\node{ D_{\tau^{-1}(1)} \odot \ldots \odot D_{\tau^{-1}(m)} }
\end{diagram}
\]
We just need to verify that $\theta$ satisfies the expected equivariance
condition. Recall, the required condition is
\[
\theta(\sigma \tau, C_1, \ldots, C_m) = \theta\left(\sigma, C_{\tau^{-1}(1)},
\ldots, C_{\tau^{-1}(m)}\right)
\]
This is easily verified on objects, as the left-hand side evaluates as:
\[
\theta(\sigma \tau, C_1, \ldots, C_m) =
C_{(\sigma\tau)^{-1}(1)} \odot \ldots \odot C_{(\sigma\tau)^{-1}(m)},
\]
while the right-hand side evaluates as:
\[
\theta\left(\sigma, C_{\tau^{-1}(1)},
\ldots, C_{\tau^{-1}(m)}\right) = C_{\tau^{-1}\left(\sigma^{-1}(1)\right)} \odot
\ldots \odot C_{\tau^{-1}\left(\sigma^{-1}(m)\right)}
\]
\[
= C_{(\sigma\tau)^{-1}(1)} \odot \ldots \odot C_{(\sigma\tau)^{-1}(m)}.
\]
Now, let $M$ be a monoid with unit, $1$. Let \mbox{$\mathscr{X}^+_*M :=
N(- \setminus \Delta S_+) \times_{\Delta S_+} B^{sym_+}_*M$}. This is analogous to
the construction $\mathscr{X}_*$ of section~\ref{sec.symhommonoid}.
\begin{prop}\label{prop.X-perm-cat}
$\mathscr{X}^+_*M$ is the nerve of a permutative category.
\end{prop}
\begin{proof}
Consider a category $\mathcal{T}M$ whose objects are the elements of
$\coprod_{n \geq 0} M^n$,
where $M^0$ is understood to be the set consisting of the empty tuple, $\{()\}$.
Morphisms of $\mathcal{T}M$ consist of the morphisms of $\Delta S_+$,
reinterpreted as
follows: A morphism $f : [p] \to [q]$ in $\Delta S$ will be considered a morphism
$(m_0, m_1, \ldots , m_p) \to f(m_0, m_1, \ldots, m_p) \in M^{q+1}$. The unique morphism
$\iota_n$ will be considered a morphism $() \to (1, 1, \ldots, 1) \in M^{n+1}$.
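For instance, in the tensor notation of section~\ref{sec.deltas}, the morphism
$x_0x_1 \otimes x_2 : [2] \to [1]$ is reinterpreted as the map
$(m_0, m_1, m_2) \mapsto (m_0m_1, m_2)$, while $x_2x_1x_0 : [2] \to [0]$ becomes
$(m_0, m_1, m_2) \mapsto (m_2m_1m_0)$.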
The nerve of $\mathcal{T}M$ consists of chains,
\[
(m_0, \ldots, m_n)
\stackrel{ f_1 }{\to}
f_1(m_0, \ldots, m_n)
\stackrel{ f_2 }{\to}
\ldots
\stackrel{ f_i }{\to}
f_i \ldots f_1(m_0, \ldots, m_n)
\]
This chain can be rewritten uniquely as an element of $M^{n+1}$ together with a chain
in $N\Delta S$.
\[
\left( [n] \stackrel{f_1}{\to} [n_1] \stackrel{f_2}{\to} \ldots
\stackrel{f_i}{\to} [n_i]\,,\,(m_0, \ldots, m_n) \right),
\]
which in turn is uniquely identified with an element of $\mathscr{X}^+_iM$:
\[
\left( [n] \stackrel{\mathrm{id}}{\to} [n]\stackrel{f_1}{\to}
[n_1] \stackrel{f_2}{\to} \ldots
\stackrel{f_i}{\to} [n_i]\,,\,(m_0, \ldots, m_n) \right)
\]
Clearly, since any element of $\mathscr{X}^+_*M$ may be written so that the
first morphism of the chain component is the identity,
$N(\mathcal{T}M)$ can be identified with $\mathscr{X}^+_*M$.
Now, we show that $\mathcal{T}M$ is permutative. Define
the product on objects:
\[
(m_0, \ldots, m_p) \odot (n_0, \ldots, n_q) := (m_0, \ldots, m_p, n_0, \ldots, n_q),
\]
and for morphisms $f, g \in \mathrm{Mor}\mathcal{T}M$,
simply use $f \odot g$
as defined for $\Delta S_+$ in section~\ref{sec.deltas}. Associativity is strict, since
it is induced by the associativity of $\odot$ in $\Delta S_+$.
There is also a strict unit, the empty tuple, $()$.
The natural transposition ({\it i.e.}, $\gamma : \odot \to \odot T$ of the
definition given in Prop~\ref{prop.deltaSpermutative}) is defined on objects by:
\[
\gamma : (m_0, \ldots, m_p) \odot (n_0, \ldots, n_q) \to (n_0, \ldots, n_q)
\odot (m_0, \ldots, m_p)
\]
\[
\gamma = \beta_{p,q} \qquad \textrm{(the block transformation morphism of
section~\ref{sec.deltas}).}
\]
Suppose we have a morphism in the product category,
\[
(f,g) : \left((m_0, \ldots, m_p), (n_0, \ldots, n_q)\right)
\to \left( (m'_0, \ldots, m'_s), (n'_0, \ldots, n'_t)\right)
\]
Then it is easy to verify that the map $\gamma$ defined above makes the following
diagram commutative (showing $\gamma$ is a natural transformation).
\[
\begin{diagram}
\node{ (m_0, \ldots, m_p, n_0, \ldots, n_q) }
\arrow{e,t}{\gamma = \beta_{p,q}}
\arrow{s,r}{f \odot g}
\node{ (n_0, \ldots, n_q, m_0, \ldots, m_p) }
\arrow{s,r}{g \odot f}
\\
\node{ (m'_0, \ldots, m'_s, n'_0, \ldots, n'_t) }
\arrow{e,t}{\gamma = \beta_{s,t}}
\node{ (n'_0, \ldots, n'_t, m'_0, \ldots, m'_s) }
\end{diagram}
\]
The coherence conditions are satisfied for $\gamma$ in the same way as in
$\Delta S_+$.
\end{proof}
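For example, on objects, $(m_0) \odot (n_0, n_1) = (m_0, n_0, n_1)$, and the
transposition $\gamma = \beta_{0,1}$ carries this object to
$(n_0, n_1, m_0) = (n_0, n_1) \odot (m_0)$, as required.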
In particular, given any monoid $M$, the category $\mathcal{T}M$
constructed in the proof of Prop.~\ref{prop.X-perm-cat} has the structure
of $E_\infty$-algebra.
\begin{lemma}\label{lem.D-module_str_on_K}
$\mathscr{K}_{\mathrm{cat}}$ has the structure of a
\mbox{$\mathscr{D}_{\mathrm{cat}}$-module}.
\end{lemma}
\begin{proof}
Let $X = \{x_i\}_{i \geq 1}$ be a countable set of formal independent
indeterminates, and $J(X_+)$ the free monoid on the set
$X_+ := X \cup \{1\}$. Since the category
$\mathcal{T}J(X_+)$ is permutative, there is an
$E_\infty$-algebra structure
\[
\theta : \mathscr{D}_{\mathrm{cat}}(m) \times
\left[\mathcal{T}J(X_+)\right]^m
\to \mathcal{T}J(X_+)
\]
We can identify:
\[
\coprod_{m \geq 0} \left(\mathscr{K}_{\mathrm{cat}}(m) \times_{S_m} X^m \right)
= \mathcal{T}J(X_+),
\]
via the map
\[
(f, x_{i_1}, x_{i_2}, \ldots, x_{i_m}) \mapsto f(x_{i_1}, x_{i_2}, \ldots,
x_{i_m}).
\]
The fact that $J(X_+)$ is the free monoid on $X_+$ ensures that there is
a map in the inverse direction, well-defined up to action of the symmetric
group $S_m$. Note, the action of $S_m$ on $X^m$ is by permutation of the
components $\{x_{i_1}, x_{i_2}, \ldots, x_{i_m} \}$.
Next, let $m, j_1, j_2, \ldots, j_m \geq 0$, and let $j = \sum j_s$.
Furthermore, let
\[
X_s = (x_{j_1 + j_2 + \ldots + j_{s-1} + 1} , \ldots, x_{j_1 + j_2 + \ldots + j_s}).
\]
Define a functor $\alpha = \alpha_{m,j_1,\ldots,j_m}$:
\[
\alpha : \mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\mathscr{K}_{\mathrm{cat}}(j_s) \longrightarrow
\mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\left(\mathscr{K}_{\mathrm{cat}}(j_s) \times_{S_{j_s}} X^{j_s} \right)
\]
\[
\alpha\left( \sigma, f_1, f_2, \ldots f_m \right)
= \left(\sigma,\; \prod_{s=1}^m
(f_s, X_s) \right).
\]
$\alpha$ takes a morphism
\[
\tau\sigma^{-1} \times g_1 \times \ldots \times g_m \in
\mathrm{Mor}\left(\mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\mathscr{K}_{\mathrm{cat}}(j_s)\right)
\]
to the morphism
\[
\tau\sigma^{-1} \times (g_1 \times \mathrm{id}^{j_1}) \times \ldots \times
(g_m \times \mathrm{id}^{j_m}) \in
\mathrm{Mor}\left(\mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\left(\mathscr{K}_{\mathrm{cat}}(j_s) \times_{S_{j_s}} X^{j_s} \right)\right).
\]
Let $inc$ be the inclusion of categories:
\[
inc : \mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\left(\mathscr{K}_{\mathrm{cat}}(j_s) \times_{S_{j_s}} X^{j_s} \right)
\longrightarrow
\mathscr{D}_{\mathrm{cat}}(m) \times \left[
\coprod_{i \geq 0} \left(\mathscr{K}_{\mathrm{cat}}(i) \times_{S_i} X^i \right)
\right]^m,
\]
induced by the evident inclusion, for each $s$:
\[
\mathscr{K}_{\mathrm{cat}}(j_s) \times_{S_{j_s}} X^{j_s} \hookrightarrow
\coprod_{i \geq 0} \left(\mathscr{K}_{\mathrm{cat}}(i) \times_{S_i}
X^i \right)
\]
Let $\alpha_0$ be the functor
\[
\alpha_0 : \mathscr{K}_{\mathrm{cat}}(j) \longrightarrow
\mathscr{K}_{\mathrm{cat}}(j) \times_{S_{j}} X^{j}
\]
\[
\alpha_0(f) = (f, x_1, x_2, \ldots, x_j),
\]
and $inc_0$ be the inclusion $\mathscr{K}_{\mathrm{cat}}(j) \times_{S_{j}} X^{j}
\hookrightarrow \displaystyle{\coprod_{i \geq 0} \left(\mathscr{K}_{\mathrm{cat}}(i) \times_{S_i}
X^i \right)}$.
Next, consider the following diagram. The top row is the map $\mu$ of
(\ref{eq.mu}), and the bottom row is the
operad-algebra structure map for $\mathcal{T}J(X_+)$.
\begin{equation}\label{eq.operad-module-comm-diag}
\begin{diagram}
\node{ \mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\mathscr{K}_{\mathrm{cat}}(j_s) }
\arrow{e,t}{\mu}
\arrow{s,l}{\alpha}
\node{ \mathscr{K}_{\mathrm{cat}}(j) }
\arrow{s,r}{\alpha_0}
\\
\node{ \mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\left(\mathscr{K}_{\mathrm{cat}}(j_s) \times_{S_{j_s}} X^{j_s} \right) }
\arrow{s,l}{inc}
\node{ \mathscr{K}_{\mathrm{cat}}(j) \times_{S_{j}} X^{j} }
\arrow{s,r}{inc_0}
\\
\node{ \mathscr{D}_{\mathrm{cat}}(m) \times \left[
\coprod_{i \geq 0} \left(\mathscr{K}_{\mathrm{cat}}(i) \times_{S_i} X^i \right)
\right]^m }
\arrow{e,t}{\theta}
\node{ \coprod_{i \geq 0} \left(\mathscr{K}_{\mathrm{cat}}(i) \times_{S_i}
X^i\right) }
\end{diagram}
\end{equation}
I claim that this diagram commutes. Let $w := (\sigma, f_1, \ldots, f_m) \in
\mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m
\mathscr{K}_{\mathrm{cat}}(j_s)$ be arbitrary. Following the left-hand column
of the diagram, we obtain the element
\begin{equation}\label{eq.alphaw}
\alpha(w) = \left(\sigma,\; \prod_{s=1}^m
(f_s, X_s) \right).
\end{equation}
It is important to note that the list of all $x_r$ that occur in
expression~(\ref{eq.alphaw}) is exactly $\{x_1, x_2, \ldots, x_j\}$
with no repeats, up to permutations by $S_{j_1} \times \ldots \times S_{j_m}$.
Thus, after applying $\theta$, the result is an element of the form:
\begin{equation}\label{eq.theta_alphaw}
\theta\alpha(w) = (F, x_1, x_2, \ldots, x_j),
\end{equation}
up to permutations by $S_j$. Thus, $\theta\alpha(w)$ is in the image of
$\alpha_0$, say $\theta\alpha(w) = \alpha_0(v)$. All that remains is to show that
$\mu(w) = v$. Let us examine closely what the morphism $F$ in
formula~(\ref{eq.theta_alphaw}) must be. $\alpha(w)$ is
identified with the element
\[
\left(\sigma,\; \prod_{s=1}^m
f_s(X_s) \right)
\]
of $\displaystyle{\mathscr{D}_{\mathrm{cat}}(m) \times \left[\mathcal{T}J(X_+)\right]^m}$,
and $\theta$ sends this element to
\begin{equation}\label{eq.theta_alphaw2}
\theta\alpha(w) = \bigodot_{s=1}^m f_{\sigma^{-1}(s)}(X_{\sigma^{-1}(s)}).
\end{equation}
$\theta\alpha(w)$ is interpreted in $\mathscr{K}_{\mathrm{cat}}(j) \times_{S_{j}}
X^{j}$ as follows: Begin with the tuple
$(x_1, x_2, \ldots, x_j)$. This tuple is divided into blocks, $(X_1, \ldots, X_m)$.
Apply $f_1 \odot \ldots \odot f_m$ to obtain
\[
f_1(X_1) \odot \ldots \odot
f_m(X_m).
\]
\begin{figure}
\caption{Evaluation of $\theta\alpha(w)$: the tuple $(x_1, \ldots, x_j)$ is divided
into the blocks $(X_1, \ldots, X_m)$, the morphism $f_1 \odot \ldots \odot f_m$ is
applied block-wise, and the block permutation $\sigma\{p_1, \ldots, p_m\}$ then
reorders the result.}
\label{diag.theta_alphaw}
\end{figure}
Finally, apply the block permutation $\sigma\{p_1, \ldots, p_m\}$ to get the
correct order in the result (see Fig.~\ref{diag.theta_alphaw}). This shows that
$F = {\sigma\{p_1, \ldots, p_m\}}(f_1 \odot \ldots \odot f_m)$, as required.
Now that we have the diagram~(\ref{eq.operad-module-comm-diag}), it is
straightforward to show that $\mu$ satisfies the associativity condition
for an operad-module structure map. Essentially, associativity is induced
by the associativity condition of the algebra structure map $\theta$.
All that remains is to verify the unit and equivariance conditions.
It is trivial to verify the unit condition on the level of objects. The
unit object of $\mathscr{D}_{\mathrm{cat}}(1)$ is the identity of $S_1$.
According to formula~(\ref{eq.mu-morphisms}), we obtain the following
diagram for morphisms $f : \underline{j} \to \underline{p}$
and $g : \underline{p} \to \underline{q}$. Clearly, the right-hand
column is identical to the morphism $f \stackrel{g}{\to} gf$.
\[
\begin{diagram}
\node{ (\mathrm{id}_{S_1}, f) }
\arrow{s,r}{ \mathrm{id}_{S_1} \times g}
\arrow{e,t}{\mu}
\node{ ({\mathrm{id}_{S_p}}) f }
\arrow{s,r}{ ({\mathrm{id}_{S_q}}) g }
\\
\node{ (\mathrm{id}_{S_1}, gf) }
\arrow{e,t}{\mu}
\node{ ({\mathrm{id}_{S_q}}) gf }
\end{diagram}
\]
(Note, there is no corresponding right unit condition in an operad-module
structure.)
Now, specify
the right-action of $\rho \in S_j$ on $\mathscr{K}_{\mathrm{cat}}(j)$ as
precomposition by $ {\rho}$. That is, $f . \rho :=
f {\rho}$ for
$f \in \underline{j} \setminus \mathcal{F}(as)$. A routine check verifies the
equivariance on the level of objects.
Let $f_i \in \mathscr{K}_{\mathrm{cat}}(j_i)$ have
specified source and target:
\[
\underline{j_i} \stackrel{f_i}{\to} \underline{p_i}.
\]
Equivariance A:
\[
\begin{diagram}
\node{ (\sigma, f_1, \ldots, f_m) }
\arrow[2]{e,t,T}{ \mathrm{id} \times T_{\tau} }
\arrow{s,l,T}{ \tau \times \mathrm{id}^{m} }
\node[2]{(\sigma, f_{\tau^{-1}(1)}, \ldots, f_{\tau^{-1}(m)}) }
\arrow{s,r,T}{ \mu }
\\
\node{ (\sigma\tau, f_1, \ldots, f_m) }
\arrow{s,l,T}{ \mu }
\node[2]{ \sigma\{p_{\tau \underline{m}} \}
f_{\tau \underline{m}}^\odot }
\arrow{s,b,T}{ \tau\{ j_{\underline{m}} \} }
\\
\node{ (\sigma\tau)\{p_{\underline{m}}\} f_{\underline{m}}^\odot }
\arrow{e,=}
\node{ \sigma\{ p_{\tau \underline{m}} \}\tau\{ p_{\underline{m}} \}
f_{\underline{m}}^\odot }
\arrow{e,=}
\node{ \sigma\{ p_{\tau \underline{m}} \} f_{\tau\underline{m}}^\odot
\tau\{ j_{\underline{m}} \} }
\end{diagram}
\]
Equivariance B:
\[
\begin{diagram}
\node{(\sigma, f_1, \ldots, f_m)}
\arrow[2]{e,t,T}{\mu}
\arrow{s,l,T}{\mathrm{id} \times \tau_1 \times \ldots \times \tau_m}
\node[2]{\sigma\{p_{\underline{m}}\}f_{\underline{m}}^\odot}
\arrow{s,r,T}{\tau_1 \oplus \ldots \oplus \tau_m}\\
\node{(\sigma, f_1\tau_1, \ldots, f_m\tau_m)}
\arrow{e,t,T}{\mu}
\node{\sigma\{p_{\underline{m}}\} (f_{\underline{m}}
\tau_{\underline{m}})^\odot }
\arrow{e,=}
\node{ \sigma\{p_{\underline{m}}\} f_{\underline{m}}^\odot
\tau_{\underline{m}}^\odot }
\end{diagram}
\]
\end{proof}
\begin{rmk}\label{rmk.K-pseuo-operad}
It turns out that $\mathscr{K}_{\mathrm{cat}}$ is in fact a pseudo-operad.
Recall from~\cite{MSS} that a pseudo-operad is a `non-unitary' operad. That is,
there are multiplication maps that satisfy operad associativity, and actions by
the symmetric groups that satisfy operad equivariance conditions, but there is
no requirement concerning a left or right unit map.
The multiplication is defined as the composition:
\[
\mathscr{K}_{\mathrm{cat}}(m) \times \prod_{s=1}^m \mathscr{K}_{\mathrm{cat}}(j_s)
\stackrel{\pi \times \mathrm{id}^m}{\longrightarrow}
\mathscr{D}_{\mathrm{cat}}(m) \times \prod_{s=1}^m \mathscr{K}_{\mathrm{cat}}(j_s)
\stackrel{\mu}{\longrightarrow}
\mathscr{K}_{\mathrm{cat}}(j_1 + \ldots + j_m),
\]
where $\pi : \mathscr{K}_{\mathrm{cat}}(m) \to \mathscr{D}_{\mathrm{cat}}(m)$
is the projection functor defined as isolating the group element (automorphism)
of an $\mathcal{F}(as)$ morphism:
\[
(\phi, g) \stackrel{\pi}{\mapsto} g
\]
Indeed, $\pi$ defines a covariant isomorphism
of the subcategory
$\mathrm{Aut}\left(\underline{m} \setminus \mathcal{F}(as)\right)$ onto
$\mathscr{D}_{\mathrm{cat}}(m)$.
\end{rmk}
Now that we have a $\mathscr{D}_{\mathrm{cat}}$-module structure on
$\mathscr{K}_{\mathrm{cat}}$, we
shall proceed in steps to induce this structure to an analogous operad-module
structure on the level of $k$-complexes.
First, we shall require the definition
of {\it lax symmetric monoidal functor}. The following appears in~\cite{T}, as
well as~\cite{R2}.
\begin{definition}
Let $\mathscr{C}$, resp. $\mathscr{C}'$, be a symmetric monoidal category with
multiplication $\odot$, resp. $\boxdot$. Denote the associativity
maps in $\mathscr{C}$, resp. $\mathscr{C}'$ by $a$, resp. $a'$, and the commutation
maps by $s$, resp. $s'$. A functor $F : \mathscr{C} \to
\mathscr{C}'$ is a lax symmetric monoidal functor if there are natural maps
\[
f : FA \,\boxdot\, FB \to F(A \odot B)
\]
such that the following diagrams are commutative:
\begin{equation}\label{eq.lax-assoc}
\begin{diagram}
\node{FA \,\boxdot\, (FB \,\boxdot\, FC)}
\arrow{e,t}{\mathrm{id} \,\boxdot\, f}
\arrow{s,l}{a'}
\node{FA \,\boxdot\, F(B\odot C)}
\arrow{e,t}{f}
\node{F\left(A \odot (B \odot C)\right)}
\arrow{s,r}{Fa}
\\
\node{(FA \,\boxdot\, FB) \,\boxdot\, FC}
\arrow{e,t}{f \,\boxdot\, \mathrm{id}}
\node{F(A \odot B) \,\boxdot\, FC}
\arrow{e,t}{f}
\node{F\left((A \odot B) \odot C\right)}
\end{diagram}
\end{equation}
\begin{equation}\label{eq.lax-comm}
\begin{diagram}
\node{FA \,\boxdot\, FB}
\arrow{e,t}{f}
\arrow{s,l}{s'}
\node{F(A \odot B)}
\arrow{s,r}{Fs}
\\
\node{FB \,\boxdot\, FA}
\arrow{e,t}{f}
\node{F(B \odot A)}
\end{diagram}
\end{equation}
If the transformation $f$ is a natural isomorphism, then the functor $F$ is
called {\it strong symmetric monoidal}.
\end{definition}
Observe that the functor \mbox{$N : \mathbf{Cat} \to \mathbf{SimpSet}$}
is strong symmetric monoidal with associated natural map, $S_*$:
\[
S_* : N \mathscr{A}_{\mathrm{cat}} \times N \mathscr{B}_{\mathrm{cat}}
\longrightarrow N\left(\mathscr{A}_{\mathrm{cat}} \times
\mathscr{B}_{\mathrm{cat}}\right),
\]
defined on $n$-chains:
\[
\left( A_0 \stackrel{f_1}{\to} A_1 \stackrel{f_2}{\to} \ldots
\stackrel{f_n}{\to} A_n, \;
B_0 \stackrel{g_1}{\to} B_1 \stackrel{g_2}{\to} \ldots
\stackrel{g_n}{\to} B_n \right)
\stackrel{S_n}{\mapsto}
(A_0, B_0) \stackrel{f_1 \times g_1}{\longrightarrow} \ldots
\stackrel{f_n \times g_n}{\longrightarrow}(A_n, B_n)
\]
The $k$-linearization functor, \mbox{$k[ - ] : \mathbf{SimpSet} \to
\textrm{$k$-\textbf{SimpMod}}$} is also strong symmetric monoidal,
with associated natural map,
\[
k[ \mathscr{A}_{\mathrm{ss}} ] \,\widehat{\otimes}\, k[ \mathscr{B}_{\mathrm{ss}} ]
\longrightarrow k[ \mathscr{A}_{\mathrm{ss}} \times
\mathscr{B}_{\mathrm{ss}} ],
\]
defined degree-wise:
\[
k[ (\mathscr{A}_{\mathrm{ss}})_n ] \otimes k[ (\mathscr{B}_{\mathrm{ss}})_n ]
\stackrel{\cong}{\longrightarrow}
k[ (\mathscr{A}_{\mathrm{ss}})_n \times (\mathscr{B}_{\mathrm{ss}})_n ]
=
k[ (\mathscr{A}_{\mathrm{ss}} \times \mathscr{B}_{\mathrm{ss}})_n ].
\]
Finally, the normalization functor, \mbox{$\mathcal{N} : \textrm{$k$-\textbf{SimpMod}}
\to \textrm{$k$-$\mathbf{Complexes}$}$} is lax symmetric monoidal,
with associated natural map $f$ being the Eilenberg-Zilber shuffle map (see~\cite{R}).
\[
Sh : \mathcal{N}\mathscr{A}_{\mathrm{sm}} \otimes \mathcal{N}\mathscr{B}_{\mathrm{sm}}
\longrightarrow \mathcal{N}\left(\mathscr{A}_{\mathrm{sm}} \,\widehat{\otimes}\,
\mathscr{B}_{\mathrm{sm}}\right)
\]
Now, define the versions of $\mathscr{D}_{\mathrm{cat}}$ and
$\mathscr{K}_{\mathrm{cat}}$ over the various symmetric monoidal categories we
are considering:
\[
\begin{array}{lcl}
\mathscr{D}_{\mathrm{ss}} &=& N\mathscr{D}_{\mathrm{cat}}\\
\mathscr{D}_{\mathrm{sm}} &=& k\left[\mathscr{D}_{\mathrm{ss}}\right]\\
\mathscr{D}_{\mathrm{ch}} &=& \mathcal{N}\mathscr{D}_{\mathrm{sm}}
\end{array}
\qquad\qquad\qquad
\begin{array}{lcl}
\mathscr{K}_{\mathrm{ss}} &=& N\mathscr{K}_{\mathrm{cat}}\\
\mathscr{K}_{\mathrm{sm}} &=& k\left[\mathscr{K}_{\mathrm{ss}}\right]\\
\mathscr{K}_{\mathrm{ch}} &=& \mathcal{N}\mathscr{K}_{\mathrm{sm}}
\end{array}
\]
\begin{lemma}\label{lem.lax-sym-mon-functor}
Let $(\mathscr{C}, \odot, e)$ and $(\mathscr{C}', \boxdot, e')$ be symmetric monoidal
categories, and $F : \mathscr{C} \to \mathscr{C}'$ a
lax symmetric monoidal functor with associated natural transformation $f$ such
that $e' = Fe$.
1. If $\mathscr{P}$ is an operad over $\mathscr{C}$, then
$F\mathscr{P}$ is an operad over $\mathscr{C}'$.
2. If $\mathscr{P}$ is an operad and $\mathscr{M}$ is a $\mathscr{P}$-module
over $\mathscr{C}$, then $F\mathscr{M}$ is an $F\mathscr{P}$-module over
$\mathscr{C}'$.
3. If $\mathscr{P}$ is an operad over $\mathscr{C}$ and $Z \in
\mathrm{Obj}\mathscr{C}$ is a $\mathscr{P}$-algebra, then
$FZ$ is an $F\mathscr{P}$-algebra over $\mathscr{C}'$.
\end{lemma}
\begin{proof}
Note that properties~(\ref{eq.lax-assoc}) and~(\ref{eq.lax-comm}) imply that
the associativity map $a'$ and symmetry map $s'$ of $\mathscr{C}'$ may
be viewed as induced by the associativity map $a$ and symmetry
map $s$ of $\mathscr{C}$. That is, all symmetric monoidal structure is
carried by $F$ from $\mathscr{C}$ to $\mathscr{C}'$.
Denote by $f^m$ the
natural transformation induced by $f$ on $m+1$ components:
\[
f^m : FA_0 \boxdot FA_1 \boxdot \ldots \boxdot FA_m \to
F(A_0 \odot A_1 \odot \ldots \odot A_m).
\]
Technically, we should write parentheses to
indicate associativity in the source and target of $f^m$, but
property~(\ref{eq.lax-assoc}) of the functor and the MacLane Pentagon
diagram of Def.~\ref{def.symmoncat} make this unnecessary.
Let $\mathscr{P}$ have structure map $\gamma$. Define the structure
map $\gamma'$ for $F\mathscr{P}$:
\[
\gamma' := F\gamma \circ f^{m} :
F\mathscr{P}(m) \boxdot F\mathscr{P}(j_1) \boxdot \ldots
\boxdot F\mathscr{P}(j_m) \longrightarrow
F\mathscr{P}(j_1 + \ldots + j_m).
\]
Let $e$ (resp. $e'$) be the unit object of $\mathscr{C}$ (resp. $\mathscr{C}'$).
Then $e' = Fe$. If $\mathscr{P}$ has unit map $\eta : e \to \mathscr{P}(1)$,
then define the unit map of $F\mathscr{P}$ by
\[
\eta' := F\eta : e' \to F\mathscr{P}(1).
\]
The action of $S_n$ on $F\mathscr{P}$ is defined by viewing $F\mathscr{P}$
as a functor $\mathbf{S}^{\mathrm{op}} \to \mathscr{C}'$:
\[
\mathbf{S}^{\mathrm{op}} \stackrel{\mathscr{P}}{\longrightarrow}
\mathscr{C} \stackrel{F}{\longrightarrow} \mathscr{C}'
\]
It is straightforward to verify that the proposed structure on $F\mathscr{P}$
defines an operad, though the required commutative diagrams are too large to
reproduce here.
Assertions {\it 2} and {\it 3} are proved similarly.
\end{proof}
\begin{cor}\label{cor.K_ch}
$\mu$ induces a multiplication map $\widetilde{\mu}_*$ on the
level of chain complexes, making $\mathscr{K}_{\mathrm{ch}}$ into a
$\mathscr{D}_{\mathrm{ch}}$-module.
\[
\widetilde{\mu}_* : \mathscr{D}_{\mathrm{ch}}(m) \otimes
\mathscr{K}_{\mathrm{ch}}(j_1)
\otimes \ldots \otimes \mathscr{K}_{\mathrm{ch}}(j_m) \to
\mathscr{K}_{\mathrm{ch}}(j_1 + \ldots + j_m).
\]
\end{cor}
\begin{proof}
Since $\mathscr{K}_{\mathrm{cat}}$ is a
$\mathscr{D}_{\mathrm{cat}}$-module, and
the functors $N$, $k[ - ]$ and $\mathcal{N}$ are symmetric monoidal
(the first two in the strong sense, the third in the lax sense), this
result follows immediately from Lemma~\ref{lem.lax-sym-mon-functor}.
\end{proof}
\section{$E_{\infty}$-Algebra Structure}\label{sec.alg-structure}
In this section we use the operad-module structure defined in the
previous section to induce a related operad-algebra structure.
\begin{definition}\label{def.distributive}
Suppose $(\mathscr{C}, \odot)$ is a cocomplete symmetric monoidal category.
We say that {\it $\odot$
distributes over colimits}, or that $\mathscr{C}$ is {\it distributive}, if the natural map
\[
\colim_{i \in \mathscr{I}} (B \odot C_i) \longrightarrow
B \odot \colim_{i \in \mathscr{I}} C_i
\]
is an isomorphism.
\end{definition}
\begin{rmk}
All of the ambient categories we consider in this chapter, $\mathbf{Cat}$,
$\mathbf{SimpSet}$,
$k$-$\mathbf{SimpMod}$, and
$k$-$\mathbf{Complexes}$, are cocomplete and distributive.
\end{rmk}
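One way to see this: for $\mathbf{Cat}$ and $\mathbf{SimpSet}$ the product
$B \times -$ is a left adjoint (both categories are cartesian closed), while for
simplicial $k$-modules and $k$-complexes colimits are computed degree-wise and
tensoring with a fixed $k$-module preserves them in each degree. Over $k$-modules,
for instance,
\[
\colim_{i \in \mathscr{I}} (B \otimes_k C_i) \;\cong\;
B \otimes_k \colim_{i \in \mathscr{I}} C_i.
\]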
\begin{lemma}\label{lem.operad-algebra}
Suppose $(\mathscr{C}, \odot)$ is a cocomplete distributive symmetric monoidal
category,
$\mathscr{P}$ is an operad over $\mathscr{C}$, $\mathscr{L}$ is
a left $\mathscr{P}$-module, and $Z \in \mathrm{Obj}\mathscr{C}$. Then
\[
\mathscr{L} \langle Z \rangle :=
\coprod_{m \geq 0} \mathscr{L}(m) \odot_{S_m} Z^{\odot m}
\]
admits the structure of a $\mathscr{P}$-algebra.
\end{lemma}
\begin{rmk}
The notation $\mathscr{L} \langle Z \rangle$ appears in Kapranov and
Manin~\cite{KM}, and is also present in~\cite{MSS} as the {\it Schur
functor} of an operad (\cite{MSS}, Def~1.24),
$\mathcal{S}_{\mathscr{L}}(Z)$.
\end{rmk}
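For a familiar example over the category of $k$-modules, take
$\mathscr{L}(m) = k[S_m]$ with its right regular $S_m$-action (the underlying
collection of the associativity operad). Then
\[
\mathscr{L} \langle Z \rangle = \bigoplus_{m \geq 0} k[S_m] \otimes_{S_m}
Z^{\otimes m} \;\cong\; \bigoplus_{m \geq 0} Z^{\otimes m},
\]
the underlying $k$-module of the tensor algebra on $Z$.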
\begin{proof}
What we are looking for is an equivariant map
\[
\mathscr{P}\left\langle \mathscr{L} \langle Z \rangle \right\rangle
\to \mathscr{L} \langle Z \rangle.
\]
That is, a map
\[
\coprod_{n \geq 0} \mathscr{P}(n) \odot_{S_n} \left[
\coprod_{m \geq 0} \mathscr{L}(m) \odot_{S_m} Z^{\odot m} \right]^{\odot n}
\longrightarrow \coprod_{m \geq 0} \mathscr{L}(m) \odot_{S_m} Z^{\odot m},
\]
satisfying the required associativity conditions for an operad-algebra
structure.
Observe that the equivariant product $\odot_{S_m}$ may be constructed
as the coequalizer of the maps corresponding to the $S_m$-action. That is,
\[
\mathscr{L}(m) \odot_{S_m} Z^{\odot m} =
\coeq_{\sigma \in S_m} \left\{ \sigma^{-1} \odot \sigma \right\},
\]
where \mbox{$\sigma^{-1} \odot \sigma : \mathscr{L}(m) \odot Z^{\odot m}
\to \mathscr{L}(m) \odot Z^{\odot m}$} is given by right action of
$\sigma^{-1}$ on $\mathscr{L}(m)$ and by permutation of the factors of
$Z^{\odot m}$ by $\sigma$ (See~\cite{MSS}, formula~(1.11)). Thus,
$\mathscr{L}\langle Z \rangle$ may be expressed as a (small) colimit.
Since we presuppose that $\odot$ distributes over all small colimits, it suffices
to fix an integer $n$ as well as $n$ integers $m_1$, $m_2$, \ldots, $m_n$, and
examine the following diagram:
\[
\begin{diagram}
\node{ \mathscr{P}(n) \odot \left( \left[\mathscr{L}(m_1) \odot
Z^{\odot m_1}\right] \odot \ldots \odot \left[\mathscr{L}(m_n)
\odot Z^{\odot m_n}\right]\right) }
\arrow{s,r}{T}
\\
\node{ \mathscr{P}(n) \odot \left(\mathscr{L}(m_1) \odot
\ldots \odot \mathscr{L}(m_n)\right) \odot \left(
Z^{\odot m_1} \odot \ldots \odot Z^{\odot m_n}\right) }
\arrow{s,r}{ \mu \odot \mathrm{id} }
\\
\node{ \mathscr{L}(m_1 + \ldots + m_n) \odot \left(
Z^{\odot m_1} \odot \ldots \odot Z^{\odot m_n}\right) }
\arrow{s,r}{a}
\\
\node{ \mathscr{L}(m_1 + \ldots + m_n) \odot Z^{\odot (m_1 + \ldots + m_n)} }
\end{diagram}
\]
In this diagram, $T$ is the evident shuffling of components so that the
components $\mathscr{L}(m_i)$ are grouped together, $\mu$ is the
operad-module structure map
for $\mathscr{L}$, and $a$ stands for the various associativity maps that are
required to obtain the final form. This composition defines a family of maps
\[
\eta : \mathscr{P}(n) \odot \bigodot_{i=1}^{n}
\left[\mathscr{L}(m_i) \odot Z^{\odot m_i}\right]
\longrightarrow \mathscr{L}(m_1 + \ldots + m_n) \odot
Z^{\odot (m_1 + \ldots + m_n)}.
\]
The maps $\eta$ pass to $S_{m_i}$-equivalence classes, producing a family of maps:
\[
\overline{\eta} : \mathscr{P}(n) \odot \bigodot_{i=1}^{n}
\left[\mathscr{L}(m_i) \odot_{S_{m_i}} Z^{\odot m_i}\right]
\longrightarrow \mathscr{L}(m_1 + \ldots + m_n) \odot_{S_{m_1} \times \ldots
\times S_{m_n}}
Z^{\odot (m_1 + \ldots + m_n)}.
\]
Let $p$ be the evident projection map for a right
\mbox{$S_{m_1 + \ldots + m_n}$-object} $M$ and left
\mbox{$S_{m_1 + \ldots + m_n}$-object} $N$,
\[
M \odot_{S_{m_1} \times \ldots \times S_{m_n}} N
\stackrel{p}{\longrightarrow}
M \odot_{S_{m_1 + \ldots + m_n}} N.
\]
Define the family of maps, $\chi := p \circ \overline{\eta}$,
\[
\chi : \mathscr{P}(n) \odot \bigodot_{i=1}^{n}
\left[\mathscr{L}(m_i) \odot_{S_{m_i}} Z^{\odot m_i}\right]
\longrightarrow \mathscr{L}(m_1 + \ldots + m_n) \odot_{S_{m_1 + \ldots
+ m_n}}
Z^{\odot (m_1 + \ldots + m_n)}.
\]
That is, we have a structure map, $\chi$, defined for each $n$:
\[
\chi : \mathscr{P}(n) \odot \left[\mathscr{L} \langle Z \rangle \right]^{\odot n}
\to \mathscr{L} \langle Z \rangle.
\]
Since $\mathscr{L}$ is a left $\mathscr{P}$-module, $\chi$ is compatible
with the multiplication maps of $\mathscr{P}$.
Equivariance follows from external
equivariance conditions on $\mathscr{L}$ as $\mathscr{P}$-module, together
with the internal equivariance relations present in
\[
\mathscr{L}(m_1 + \ldots + m_n) \odot_{S_{m_1 + \ldots + m_n}}
Z^{\odot (m_1 + \ldots + m_n)},
\]
inducing the required operad-algebra structure map
\[
\chi : \mathscr{P}(n) \odot_{S_n} \left[\mathscr{L} \langle Z \rangle \right]^{\odot n}
\to \mathscr{L} \langle Z \rangle.
\]
\end{proof}
Let $A$ be an associative, unital algebra over $k$.
Let \mbox{$\mathscr{Y}^+_*A = k[N(- \setminus \Delta S_+)]\otimes_{\Delta S_+}
B^{sym_+}_*A $} be the complex from section~\ref{sec.deltas_plus}, regarded
as a simplicial $k$-module. Observe, we may identify:
\[
k[N(- \setminus \Delta S_+)] \otimes_{\mathrm{Aut}\Delta S_+} B^{sym_+}_*A
= \bigoplus_{n \geq 0}
\mathscr{K}_{\mathrm{sm}}(n) \,\widehat{\otimes}_{S_n} A^{\widehat{\otimes}\, n}
= \mathscr{K}_{\mathrm{sm}} \langle A \rangle.
\]
(Note, $A$ is regarded as a trivial simplicial object, with all faces and
degeneracies being identities.)
\begin{lemma}\label{lem.hatK-operadalgebra}
$\mathscr{K}_{\mathrm{sm}} \langle A \rangle$ has the structure of an
$E_\infty$-algebra over the category of simplicial $k$-modules,
\[
\chi : \mathscr{D}_{\mathrm{sm}} \left\langle \mathscr{K}_{\mathrm{sm}}
\langle A \rangle \right\rangle \longrightarrow
\mathscr{K}_{\mathrm{sm}} \langle A \rangle
\]
\end{lemma}
\begin{proof}
This follows immediately from Lemma~\ref{lem.operad-algebra} and the fact that
$k$-$\mathbf{SimpMod}$ is cocomplete and distributive.
\end{proof}
\begin{rmk}
The fact that $\mathscr{K}_{\mathrm{cat}}$ is a pseudo-operad
(See Remark~\ref{rmk.K-pseuo-operad}) implies that
$\mathscr{K}_{\mathrm{ss}}$ and $\mathscr{K}_{\mathrm{sm}}$ are also
pseudo-operads (c.f.~Lemma~\ref{lem.lax-sym-mon-functor}). Now, the properties of
the Schur functor do not depend on existence of a right unit map
for $\mathscr{K}_{\mathrm{sm}}$, so we could conclude immediately that
$\mathcal{S}_{\mathscr{K}_{\mathrm{sm}}}(A) =
\mathscr{K}_{\mathrm{sm}} \langle A \rangle$
is a `pseudo-operad'-algebra over $\mathscr{K}_{\mathrm{sm}}$; however, the preceding
proof requires a bit less machinery.
\end{rmk}
\begin{lemma}\label{lem.Y-operadalgebra}
The $\mathscr{D}_{\mathrm{sm}}$-algebra structure on $\mathscr{K}_{\mathrm{sm}}
\langle A \rangle$
induces a quotient $\mathscr{D}_{\mathrm{sm}}$-algebra structure on $\mathscr{Y}_*^+A$,
\[
\overline{\chi} : \mathscr{D}_{\mathrm{sm}}
\left\langle \mathscr{Y}_*^+A \right\rangle \longrightarrow
\mathscr{Y}_*^+A
\]
That is, $\mathscr{Y}_*^+A$ is an $E_{\infty}$-algebra over the category of
simplicial $k$-modules.
\end{lemma}
\begin{proof}
We must verify that the structure map $\chi$ from Lemma~\ref{lem.hatK-operadalgebra}
is well-defined on equivalence classes in $\mathscr{Y}_*^+A$.
Let $\overline{\chi}$ be defined
by applying $\chi$ to a representative, so that we obtain a map
\[
\overline{\chi} : \mathscr{D}_{\mathrm{sm}}(n) \,\widehat{\otimes}_{S_n}
\left(\mathscr{Y}_*^+A\right)^{\widehat{\otimes}\, n}
\to \mathscr{Y}_*^+A.
\]
It suffices to check $\overline{\chi}$ is well-defined on $0$-chains.
Let $f_i$, $g_i$, $1 \leq i \leq n$,
be morphisms of $\mathcal{F}(as)$ with specified sources and targets:
\[
\underline{m_i} \stackrel{f_i}{\to} \underline{p_i}
\stackrel{g_i}{\to} \underline{q_i}.
\]
Let $V_i$ be a simple tensor of $A^{\otimes m_i}$, that is, $V_i := a_1 \otimes
a_2 \otimes \ldots \otimes a_{m_i}$ for some $a_s \in A$. Let $\sigma \in S_n$.
Consider the $0$-chain of
$\mathscr{D}_{\mathrm{sm}}(n) \,\widehat{\otimes}_{S_n}
\left(\mathscr{Y}_*^+A\right)^{\widehat{\otimes}\, n}$:
\begin{equation}\label{eq.D-algebra-invarianceL}
\sigma \,\widehat{\otimes}\, \left( g_1f_1 \otimes V_1 \right) \,\widehat{\otimes}\, \ldots
\,\widehat{\otimes}\,
\left( g_nf_n \otimes V_n \right).
\end{equation}
The map $\overline{\chi}$ sends this to:
\[
\sigma\{q_{\underline{n}}\}(g_{\underline{n}}f_{\underline{n}})^\odot
\otimes V_{\underline{n}}^\otimes,
\]
where $V_{\underline{n}}^\otimes := V_1 \otimes \ldots \otimes V_n \in
A^{\otimes(m_1 + \ldots + m_n)}$.
On the other hand, the element~(\ref{eq.D-algebra-invarianceL}) is equivalent under
$\mathrm{Mor}\left(\mathcal{F}(as)\right)$-equivariance to:
\begin{equation}\label{eq.D-algebra-invarianceR}
\sigma \,\widehat{\otimes}\, \left( g_1 \otimes f_1(V_1) \right) \,\widehat{\otimes}\, \ldots
\,\widehat{\otimes}\,
\left( g_n \otimes f_n(V_n) \right),
\end{equation}
and $\overline{\chi}$ sends this to:
\[
\sigma\{q_{\underline{n}}\}g_{\underline{n}}^\odot \otimes
\left[f_{\underline{n}}(V_{\underline{n}})\right]^\otimes
\]
\[
= \sigma\{q_{\underline{n}}\}g_{\underline{n}}^\odot \otimes
\left[(f_{\underline{n}})^\odot\right](V_{\underline{n}}^\otimes)
\]
\[
\approx \sigma\{q_{\underline{n}}\}g_{\underline{n}}^\odot
f_{\underline{n}}^\odot \otimes V_{\underline{n}}^\otimes
\]
\[
= \sigma\{q_{\underline{n}}\}(g_{\underline{n}}f_{\underline{n}})^\odot
\otimes V_{\underline{n}}^\otimes.
\]
This proves $\overline{\chi}$ is well defined, and so $\mathscr{Y}_*^+A$
admits the structure of an $E_\infty$-algebra over the category of
simplicial $k$-modules.
\end{proof}
\begin{theorem}\label{thm.Y-operadalgebra-complexes}
The $\mathscr{D}_{\mathrm{sm}}$-algebra structure on $\mathscr{Y}_*^+A$ induces
a $\mathscr{D}_{\mathrm{ch}}$-algebra structure on $\mathcal{N}\mathscr{Y}_*^+A$
(as $k$-complex).
\end{theorem}
\begin{proof}
Again, since the normalization functor $\mathcal{N}$ is lax symmetric monoidal,
the operad-algebra structure map of $\mathscr{Y}_*^+A$ induces
an operad-algebra structure map over $k$-$\mathbf{Complexes}$ (by
Lemma~\ref{lem.lax-sym-mon-functor}):
\[
\widetilde{\chi} : \mathscr{D}_{\mathrm{ch}}(n) \otimes_{S_n}
\left(\mathcal{N}\mathscr{Y}_*^+A\right)^{\otimes n}
\to \mathcal{N}\mathscr{Y}_*^+A.
\]
\end{proof}
\begin{cor}\label{cor.pontryagin}
$HS_*(A)$ admits a Pontryagin product, giving it the structure of graded
commutative and associative algebra.
\end{cor}
\begin{proof}
The product is induced on the chain level by first choosing any $0$-chain, $c$,
in $\mathscr{D}_{\mathrm{ch}}(2)$ representing the generator
$1 \in H_0\left(\mathscr{D}_{\mathrm{ch}}(2)\right) = k$, and then taking
the composite,
\[
k \otimes \mathcal{N}\mathscr{Y}_*^+A \otimes \mathcal{N}\mathscr{Y}_*^+A
\longrightarrow \mathscr{D}_{\mathrm{ch}}(2) \otimes
\mathcal{N}\mathscr{Y}_*^+A \otimes \mathcal{N}\mathscr{Y}_*^+A
\]
\[
\longrightarrow \mathscr{D}_{\mathrm{ch}}(2) \otimes_{k[S_2]}
\mathcal{N}\mathscr{Y}_*^+A \otimes \mathcal{N}\mathscr{Y}_*^+A
\stackrel{\widetilde{\chi}}{\longrightarrow} \mathcal{N}\mathscr{Y}_*^+A
\]
That is, for homology classes $[x]$ and $[y]$ in $HS_*(A)$, the product
is defined by
\begin{equation}\label{eq.pontryagin-product}
[x] \cdot [y] := \left[\widetilde{\chi}(c \otimes x \otimes y)\right]
\end{equation}
Now, the choice of $c$ does not matter, since $\mathscr{D}_{\mathrm{ch}}(2)$
is contractible. Indeed, since each $\mathscr{D}_{\mathrm{ch}}(p)$ is
contractible, Thm.~\ref{thm.Y-operadalgebra-complexes} shows that
$\mathcal{N}\mathscr{Y}_*^+A$ is a {\it homotopy-everything} complex, analogous
to the homotopy-everything spaces of Boardman and Vogt~\cite{BV}.
Thus, the product~(\ref{eq.pontryagin-product}) is associative and
commutative in the graded sense on the level of homology (see also
May~\cite{M}, p.~3).
\end{proof}
\begin{rmk}
The Pontryagin product of Cor.~\ref{cor.pontryagin} is directly
related to the algebra structure on the complexes $Sym_*^{(p)}$ of
Chapter~\ref{chap.spec_seq2}. Indeed,
if $A$ has augmentation ideal $I$ which is free and has countable rank over
$k$, and $I^2 = 0$, then by Cor.~\ref{cor.square-zero},
the spectral sequence collapses at the $E_1$ stage, giving:
\[
HS_n(A) \;\cong\; \bigoplus_{p \geq 0} \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}
H_{n}(EG_u \ltimes_{G_u} Sym_*^{(p)}; k),
\]
and the product structure of $Sym_*^{(p)}$ may be viewed as a restriction
of the algebra structure of $HS_*(A)$ to the free orbits.
\end{rmk}
\section{Homology Operations}\label{sec.homology-operations}
Recall that for a commutative ring $k$ and a cyclic group $\pi$ of order $p$, there is a
periodic resolution of $k$ by free $k\pi$-modules (cf.~\cite{M2},~\cite{B}):
\begin{definition}\label{def.W-complex}
Let $\tau$ be a generator of $\pi$. Let $N = 1 + \tau + \ldots + \tau^{p-1}$. Define
a $k\pi$-complex, $(W, d)$ by:
$W_i$ is the free $k\pi$-module
on the generator $e_i$, for each $i \geq 0$, with differential:
\[
\left\{
\begin{array}{lll}
d(e_{2i+1}) &=& (\tau - 1)e_{2i} \\
d(e_{2i}) &=& Ne_{2i-1}
\end{array}
\right.
\]
$W$ also has the structure of a $k\pi$-coalgebra, with augmentation $\epsilon$ and
coproduct $\psi$:
\[
\epsilon(\tau^j e_0) = 1
\]
\[
\left\{
\begin{array}{lll}
\psi(e_{2i+1}) &=& \displaystyle{\sum_{j+k = i} e_{2j} \otimes e_{2k+1} \;+\;
\sum_{j+k = i} e_{2j+1} \otimes \tau e_{2k}} \\
\psi(e_{2i}) &=& \displaystyle{\sum_{j+k = i} e_{2j} \otimes e_{2k} \;+\;
\sum_{j+k = i-1} \left(\sum_{0 \leq r < s < p} \tau^r e_{2j+1} \otimes
\tau^s e_{2k+1}\right)}
\end{array}
\right.
\]
\end{definition}
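As a quick check that $(W, d)$ is a complex, note that $\tau N = N\tau = N$ in
$k\pi$, so that $(\tau - 1)N = N(\tau - 1) = 0$, and hence (for the indices where
both formulas apply)
\[
d^2(e_{2i+1}) = (\tau - 1)N e_{2i-1} = 0,
\qquad
d^2(e_{2i}) = N(\tau - 1) e_{2i-2} = 0.
\]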
In what follows, we shall specialize to the case where $p$ is prime and $k = \mathbb{Z}/p\mathbb{Z}$ (as a ring).
Let $\pi = C_p$ (as group), and denote by
$W$ the standard resolution of $k$ by $k\pi$-modules, as in definition~\ref{def.W-complex}.
Recall, $\mathscr{D}_{\mathrm{ch}}(p)$ is a contractible $k$-complex on which
$S_p$ acts freely. Embed $\pi \hookrightarrow S_p$ by $\tau \mapsto (1, p, p-1, \ldots, 2)$.
Clearly $\pi$ acts freely on $\mathscr{D}_{\mathrm{ch}}(p)$ as well. Thus,
there exists a homotopy equivalence $\xi : W \to \mathscr{D}_{\mathrm{ch}}(p)$.
Observe that the complex $\mathcal{N}\mathscr{Y}_*^+A$ computes $HS_*(A)$, since it
is defined as the quotient of $\mathscr{Y}_*^+A$ by degeneracies. By
Thm.~\ref{thm.Y-operadalgebra-complexes},
$\mathcal{N}\mathscr{Y}_*^+A$ has the structure of an $E_{\infty}$-algebra,
so by results of May, if \mbox{$x \in
H_*(\mathcal{N}\mathscr{Y}_*^+A) = HS_*(A)$}, then $e_i \otimes x^{\otimes p}$ is a
well-defined
element of $H_*\left(W \otimes_{k\pi} (\mathcal{N}\mathscr{Y}_*^+A)^{\otimes p}\right)$,
where $e_i$ is the distinguished generator of $W_i$. We then use the homotopy equivalence
$\xi : W \to \mathscr{D}_{\mathrm{ch}}(p)$ and $\mathscr{D}_{\mathrm{ch}}$-algebra structure
of $\mathcal{N}\mathscr{Y}_*^+A$ to produce the required map.
Define $\kappa$ as the composition:
\[
\begin{diagram}
\node{ H_*\left(W \otimes_{k\pi} (\mathcal{N}\mathscr{Y}_*^+A)^{\otimes p}\right) }
\arrow{e,tb}{ H(\xi \otimes \mathrm{id}^p) }{ \cong }
\node{ H_*\left(\mathscr{D}_{\mathrm{ch}}(p) \otimes_{k\pi}
(\mathcal{N}\mathscr{Y}_*^+A)^{\otimes p}\right) }
\arrow{e,t}{ H(\widetilde{\chi}) }
\node{ H_*(\mathcal{N}\mathscr{Y}_*^+A)}
\end{diagram}
\]
This gives a way of defining homology (Steenrod) operations
on $HS_*(A)$. Following definition 2.2
of~\cite{M2}, first define the maps $D_i$. For $x \in HS_q(A)$ and
$i \geq 0$, define
\[
D_i(x) := \kappa(e_i \otimes x^{\otimes p}) \in HS_{pq + i}(A).
\]
\begin{definition}\label{def.homology-operations}
If $p=2$, define:
\[
P_s : HS_q(A) \to HS_{q+s}(A)
\]
\[
P_s(x) = \left\{
\begin{array}{ll}
0 \; &\textrm{if $s < q$}\\
D_{s-q}(x) \; &\textrm{if $s \geq q$}
\end{array}
\right.
\]
If $p > 2$ ({\it i.e.}, an odd prime), let
\[
\nu(q) = (-1)^{s+\frac{q(q-1)(p-1)}{4}}\Big[\Big( \frac{p-1}{2} \Big)!\Big]^q,
\]
and define:
\[
P_s : HS_q(A) \to HS_{q+2s(p-1)}(A)
\]
\[
P_s(x) = \left\{
\begin{array}{ll}
0 \; &\textrm{if $2s < q$}\\
\nu(q) D_{(2s-q)(p-1)}(x) \; &\textrm{if $2s \geq q$}
\end{array}
\right.
\]
\[
\beta P_s : HS_q(A) \to HS_{q+2s(p-1)-1}(A)
\]
\[
\beta P_s(x) = \left\{
\begin{array}{ll}
0 \; &\textrm{if $2s \leq q$}\\
\nu(q) D_{(2s-q)(p-1)-1}(x) \; &\textrm{if $2s > q$}
\end{array}
\right.
\]
\end{definition}
Note, the definition of $\nu(q)$ given here differs from that given in~\cite{M2} by
the sign $(-1)^s$ in order that all constants be collected into the term $\nu(q)$.
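For instance, when $p = 2$ and $x \in HS_q(A)$, the top operation is
$P_q(x) = D_0(x) = \kappa(e_0 \otimes x \otimes x)$. Since $\xi(e_0)$ is a
$0$-cycle representing a generator of
$H_0\left(\mathscr{D}_{\mathrm{ch}}(2)\right) = k$, this is (for a suitably
normalized $\xi$) the Pontryagin square $[x] \cdot [x]$ of
Cor.~\ref{cor.pontryagin}, in analogy with the classical identity
$Sq^q(x) = x^2$.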
\chapter{LOW-DEGREE SYMMETRIC HOMOLOGY}\label{chap.ldsymhom}
\section{Partial Resolution}\label{sec.partres}
As before, $k$ is a commutative ground ring.
In this chapter, we find an explicit partial resolution of $\underline{k}$ by
projective $\Delta S^\mathrm{op}$-modules, allowing the computation of $HS_0(A)$ and
$HS_1(A)$ for a unital associative $k$-algebra $A$.
\begin{theorem}\label{thm.partial_resolution}
$HS_i(A)$ for $i=0,1$ is the homology of the following partial chain
complex
\[
0\longleftarrow A \stackrel{\partial_1}{\longleftarrow} A\otimes A\otimes A
\stackrel{\partial_2}{\longleftarrow}(A\otimes A\otimes A\otimes A)\oplus A,
\]
where
\[
\partial_1 : a\otimes b\otimes c \mapsto abc - cba,
\]
\[
\partial_2 : \left\{
\begin{array}{lll}
a\otimes b\otimes c\otimes d &\mapsto& ab\otimes c\otimes d +
d\otimes ca\otimes b + bca\otimes 1\otimes d + d\otimes bc\otimes a,\\
a &\mapsto& 1\otimes a\otimes 1.
\end{array}
\right.
\]
\end{theorem}
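As a simple illustration, take $A = k$ itself. Then
$\partial_1(a \otimes b \otimes c) = abc - cba = 0$, so $HS_0(k) \cong k$; and the
summand $A$ in degree $2$ maps onto $A \otimes A \otimes A \cong k$ via
$a \mapsto 1 \otimes a \otimes 1$, so that $\mathrm{ker}\,\partial_1 =
\mathrm{im}\,\partial_2$ and $HS_1(k) = 0$.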
The proof will proceed in stages from the lemmas below.
\begin{lemma}\label{lem.0-stage}
For each $n \geq 0$,
\[
0 \gets k \stackrel{\epsilon}{\gets} k\big[\mathrm{Mor}_{\Delta S}([n], [0])\big]
\stackrel{\rho}{\gets} k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]
\]
is exact, where $\epsilon$ is defined by $\epsilon(\phi) = 1$ for any morphism
$\phi : [n] \to [0]$, and $\rho$ is defined by $\rho(\psi) =
(x_0x_1x_2)\circ\psi - (x_2x_1x_0)\circ\psi$
for any morphism $\psi : [n] \to [2]$. Note, $x_0x_1x_2$ and $x_2x_1x_0$ are
$\Delta S$ morphisms $[2] \to [0]$ written in tensor notation (see
section~\ref{sec.deltas}).
\end{lemma}
\begin{proof}
Clearly, $\epsilon$ is surjective. Now, $\epsilon\rho = 0$, since $\rho(\psi)$
consists of two morphisms with opposite signs.
Let $\phi_0 = x_0x_1 \ldots x_n : [n] \to [0]$.
The kernel of $\epsilon$ is spanned by elements $\phi - \phi_0$ for $\phi \in
\mathrm{Mor}_{\Delta S}([n],[0])$. So, it suffices to show that the
sub-module of
$k\big[\mathrm{Mor}_{\Delta S}([n], [0])\big]$ generated by
$(x_0x_1x_2)\psi - (x_2x_1x_0)\psi$ (for $\psi : [n] \to [2]$) contains all of the
elements $\phi - \phi_0$. In other words, it suffices to find a sequence
\[
\phi =: \phi_k,\; \phi_{k-1},\; \ldots,\; \phi_2,\; \phi_1,\; \phi_0
\]
so that each $\phi_i$ is obtained from $\phi_{i+1}$ by reversing the order of
3 blocks, $XYZ \to ZYX$. Note, $X$, $Y$, or $Z$ can be empty.
Let $\phi = x_{i_0}x_{i_1}\ldots x_{i_n}$. If $\phi = \phi_0$, we may
stop here. Otherwise, we produce a sequence ending in $\phi_0$ by way of
two types of rearrangements.
Type I:
\[
x_{i_0}x_{i_1}\ldots x_{i_n} \leadsto x_{i_n}x_{i_0}x_{i_1} \ldots x_{i_{n-1}}.
\]
Type II:
\[
x_{i_0}x_{i_1} \ldots x_{i_{k-1}}x_{i_k}x_{i_{k+1}}\ldots x_{i_n}
\leadsto x_{i_{k+1}}\ldots x_{i_n}x_{i_k}x_{i_0}x_{i_1} \ldots x_{i_{k-1}}.
\]
In fact, it will be sufficient to use a more specialized version of the Type II
rearrangement.
Type II$'$:
\[
x_{i_0}x_{i_1} \ldots x_{i_{k-1}}x_{i_k}x_{k+1}\ldots x_n
\leadsto x_{k+1}\ldots x_n x_{i_k}x_{i_0}x_{i_1} \ldots x_{i_{k-1}},
\]
where $i_k \neq k$.
Beginning with $\phi$, perform Type I rearrangements until the final variable
is $x_n$. For convenience of notation, let this new monomial be
$x_{j_0}x_{j_1} \ldots x_{j_n}$. Of course, $j_n = n$. If $j_k = k$ for
all $k = 0, 1, \ldots, n$, then we are done. Otherwise,
there will be a number $k$ such that $j_k \neq k$ but $j_{k+1} = k + 1, \ldots,
j_n = n$. Perform a Type II$'$ rearrangement with $j_{k}$ as pivot, followed by
enough Type I rearrangements to make the final variable $x_n$ again. The net
result of such a combination is that the ending block $x_{k+1}x_{k+2}\ldots x_n$
remains fixed while the beginning block $x_{j_0}x_{j_1}\ldots x_{j_{k}}$
becomes $x_{j_k}x_{j_0} \ldots x_{j_{k-1}}$. It is clear that applying this
combination repeatedly eventually yields a monomial
$x_{\ell_0}x_{\ell_1}\ldots x_{\ell_{k-1}} x_k x_{k+1} \ldots x_n$. After finitely
many steps, we obtain $\phi_0$.
\end{proof}
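For example, with $n = 2$ and $\phi = x_2x_1x_0$, a single rearrangement
$XYZ \to ZYX$ (with $X = x_0$, $Y = x_1$, $Z = x_2$) suffices:
\[
\phi - \phi_0 = x_2x_1x_0 - x_0x_1x_2 = -\rho(x_0 \otimes x_1 \otimes x_2).
\]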
Let $\mathscr{B}_n = \{ x_{i_0}x_{i_1}\ldots x_{i_{k-1}} \otimes x_{i_k} \otimes
x_{k+1}x_{k+2} \ldots x_{n} \;:\; k \geq 1, i_k \neq k \}$. $k[\mathscr{B}_n]$ is
a free submodule of $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ of rank
$(n+1)! - 1$. This count is obtained by observing that
$\{ x_{i_0} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes
x_{k+1} \ldots x_{n} \;:\; k = c, i_k \neq k \}$ has exactly $c \cdot c! = (c+1)! - c!$
elements, then adding the telescoping sum from $c = 1$ to $n$.
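For example, $\mathscr{B}_2$ consists of the $3! - 1 = 5$ elements
\[
x_1 \otimes x_0 \otimes x_2, \quad
x_1x_2 \otimes x_0 \otimes 1, \quad
x_2x_1 \otimes x_0 \otimes 1, \quad
x_0x_2 \otimes x_1 \otimes 1, \quad
x_2x_0 \otimes x_1 \otimes 1,
\]
one with $k = 1$ and four with $k = 2$.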
\begin{cor}\label{cor.B_n}
When restricted to $k[\mathscr{B}_n]$, the map $\rho$ of Lemma~\ref{lem.0-stage} is
surjective onto the kernel of $\epsilon$.
\end{cor}
\begin{proof}
In the proof of Lemma~\ref{lem.0-stage}, the Type I rearrangements correspond to
the image of elements $x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1$.
Note, we did not need $i_n = n$ in any such rearrangement.
The Type II$'$ rearrangements correspond to the image of elements
$x_{i_0} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes x_{k+1} \ldots x_{n}$, for
$k \geq 1$ and $i_k \neq k$.
\end{proof}
\begin{lemma}\label{lem.rank}
$\#\mathrm{Mor}_{\Delta S}([n], [m]) = (m+n+1)!/m!$, so
$k\big[\mathrm{Mor}_{\Delta S}([n], [m])\big]$ is a free $k$-module of
rank $(m+n+1)!/m!$.
\end{lemma}
\begin{proof}
A morphism $\phi : [n] \to [m]$ of $\Delta S$ is nothing more than an assignment
of $n+1$ objects into $m+1$ compartments, along with a total ordering of the
original $n+1$ objects, hence:
\[
\#\mathrm{Mor}_{\Delta S}([n], [m]) = \binom{m+n+1}{m}(n+1)! = \frac{(m+n+1)!}{m!}.
\]
\end{proof}
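For instance, $\#\mathrm{Mor}_{\Delta S}([1], [0]) = 2!/0! = 2$, the morphisms
written $x_0x_1$ and $x_1x_0$ in tensor notation, while
$\#\mathrm{Mor}_{\Delta S}([0], [1]) = 2!/1! = 2$, namely $x_0 \otimes 1$ and
$1 \otimes x_0$.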
\begin{lemma}\label{lem.rho-iso}
$\rho|_{k[\mathscr{B}_n]}$ is an isomorphism $k[\mathscr{B}_n] \cong \mathrm{ker}
\,\epsilon$.
\end{lemma}
\begin{proof}
Since the rank of $k\big[\mathrm{Mor}_{\Delta S}([n], [0])\big]$ is $(n+1)!$, the
rank of the kernel of $\epsilon$ is $(n+1)! - 1$, which is also the rank of
$k[\mathscr{B}_n]$. By Corollary~\ref{cor.B_n}, $\rho|_{k[\mathscr{B}_n]}$ maps
onto $\mathrm{ker}\,\epsilon$, and a surjective homomorphism between free modules
of the same finite rank over a commutative ring is an isomorphism.
\end{proof}
\begin{lemma}\label{lem.4-term-relation}
The relations of the form:
\begin{equation}\label{eq.4-term}
XY \otimes Z \otimes W + W \otimes ZX \otimes Y + YZX \otimes 1 \otimes W
+ W \otimes YZ \otimes X \approx 0
\end{equation}
\begin{equation}\label{eq.1-term}
\qquad \mathrm{and} \qquad
1 \otimes X \otimes 1 \approx 0
\end{equation}
collapse $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ onto $k[\mathscr{B}_n]$.
\end{lemma}
\begin{proof}
This proof proceeds in multiple steps.
{\bf Step 1.}
\begin{equation}\label{eq.step1}
X \otimes 1 \otimes 1 \approx 1 \otimes X \otimes 1 \approx 1 \otimes 1 \otimes X
\approx 0.
\end{equation}
$1 \otimes X \otimes 1 \approx 0$ follows directly from Eq.~\ref{eq.1-term}. Letting
$X = Y = W = 1$ in Eq.~\ref{eq.4-term} yields
\[
3(1 \otimes Z \otimes 1) + Z \otimes 1 \otimes 1 \approx 0 \; \Rightarrow\;
Z \otimes 1 \otimes 1 \approx 0.
Z \otimes 1 \otimes 1 \approx 0.
\]
Then, $X = Z = W = 1$ in Eq.~\ref{eq.4-term} produces
\[
2(Y \otimes 1 \otimes 1) + 1 \otimes 1 \otimes Y + 1 \otimes Y \otimes 1
\approx 0 \; \Rightarrow\; 1 \otimes 1 \otimes Y \approx 0.
\]
{\bf Step 2.}
\begin{equation}\label{eq.step2}
1 \otimes X \otimes Y + 1 \otimes Y \otimes X \approx 0.
\end{equation}
Let $Z = W = 1$ in Eq.~\ref{eq.4-term}. Then
\[
XY \otimes 1 \otimes 1 + 1 \otimes X \otimes Y + YX \otimes 1 \otimes 1
+ 1 \otimes Y \otimes X \approx 0
\]
\[
\Rightarrow\; 1 \otimes X \otimes Y + 1 \otimes Y \otimes X \approx 0,
\qquad \textrm{by step 1.}
\]
{\bf Step 3. [Degeneracy Relations]}
\begin{equation}\label{eq.step3}
X \otimes Y \otimes 1 \approx X \otimes 1 \otimes Y \approx 1 \otimes X \otimes Y.
\end{equation}
Let $X = W = 1$ in Eq.~\ref{eq.4-term}.
\[
Y \otimes Z \otimes 1 + 1 \otimes Z \otimes Y + YZ \otimes 1 \otimes 1
+ 1 \otimes YZ \otimes 1 \approx 0
\]
\[
\Rightarrow\; Y \otimes Z \otimes 1 + 1 \otimes Z \otimes Y \approx 0,
\qquad \textrm{by step 1.}
\]
\begin{equation}\label{eq.step3a}
\Rightarrow\; Y \otimes Z \otimes 1 - 1 \otimes Y \otimes Z \approx 0,
\qquad \textrm{by step 2.}
\end{equation}
Next, let $X = Y = 1$ in Eq.~\ref{eq.4-term}.
\[
1 \otimes Z \otimes W + W \otimes Z \otimes 1 + Z \otimes 1 \otimes W
+ W \otimes Z \otimes 1 \approx 0
\]
\[
\Rightarrow\; 1 \otimes Z \otimes W + 2(1 \otimes W \otimes Z)
+ Z \otimes 1 \otimes W \approx 0,
\qquad \textrm{by Eq.~\ref{eq.step3a},}
\]
\[
\Rightarrow\; 1 \otimes Z \otimes W - 2(1 \otimes Z \otimes W)
+ Z \otimes 1 \otimes W \approx 0,
\qquad \textrm{by step 2,}
\]
\[
\Rightarrow\; Z \otimes 1 \otimes W - 1 \otimes Z \otimes W \approx 0.
\]
{\bf Step 4. [Sign Relation]}
\begin{equation}\label{eq.step4}
X \otimes Y \otimes Z + Z \otimes Y \otimes X \approx 0
\end{equation}
Let $Y = 1$ in Eq.~\ref{eq.4-term}.
\[
X \otimes Z \otimes W + W \otimes ZX \otimes 1 + ZX \otimes 1 \otimes W
+ W \otimes Z \otimes X \approx 0,
\]
\[
\Rightarrow\; X \otimes Z \otimes W + 1 \otimes W \otimes ZX + 1 \otimes ZX \otimes W
+ W \otimes Z \otimes X \approx 0,
\qquad \textrm{by step 3,}
\]
\[
\Rightarrow\; X \otimes Z \otimes W + W \otimes Z \otimes X \approx 0,
\qquad \textrm{by step 2.}
\]
{\bf Step 5. [Hochschild Relation]}
\begin{equation}\label{eq.step5}
XY \otimes Z \otimes 1 - X \otimes YZ \otimes 1 + ZX \otimes Y \otimes 1
\approx 0.
\end{equation}
Let $W = 1$ in Eq.~\ref{eq.4-term}.
\[
XY \otimes Z \otimes 1 + 1 \otimes ZX \otimes Y + YZX \otimes 1 \otimes 1
+ 1 \otimes YZ \otimes X \approx 0,
\]
\[
\Rightarrow\; XY \otimes Z \otimes 1 + ZX \otimes Y \otimes 1 + 0
- X \otimes YZ \otimes 1 \approx 0,
\qquad \textrm{by steps 1, 3 and 4}.
\]
{\bf Step 6. [Cyclic Relation]}
\begin{equation}\label{eq.step6}
\sum_{j = 0}^n \tau_n^j\big(x_{i_0}x_{i_1} \ldots
x_{i_{n-1}} \otimes x_{i_n} \otimes 1\big) \approx 0,
\end{equation}
where $\tau_n \in \Sigma_{n+1}$ is the $(n+1)$-cycle $(0, n, n-1, \ldots, 2, 1)$,
which acts by permuting the indices.
For $n = 0$, there are no such relations (indeed, no relations at all). For $n=1$,
the cyclic relation takes the form $x_0 \otimes x_1 \otimes 1 + x_1 \otimes x_0
\otimes 1 \approx 0$, which follows from degeneracy and sign relations.
Assume now that $n \geq 2$. For each $k = 1, 2, \ldots, n-1$, define:
\[
\left\{\begin{array}{ll}
A_k := & x_{i_0}x_{i_1}\ldots x_{i_{k-1}},\\
B_k := & x_{i_k},\\
C_k := & x_{i_{k+1}} \ldots x_{i_n}.
\end{array}\right.
\]
By the Hochschild relation,
\[
0 \approx \sum_{k=1}^{n-1} (A_kB_k \otimes C_k \otimes 1 -
A_k \otimes B_kC_k \otimes 1 + C_kA_k \otimes B_k \otimes 1).
\]
But for $k \leq n-2$,
\[
A_kB_k \otimes C_k \otimes 1 = A_{k+1} \otimes B_{k+1}C_{k+1} \otimes 1
= x_{i_0} \ldots x_{i_k} \otimes x_{i_{k+1}}\ldots x_{i_n} \otimes 1.
\]
Thus, after some cancellation:
\[
0 \approx - A_1 \otimes B_1 C_1 \otimes 1 + A_{n-1}B_{n-1} \otimes C_{n-1}
\otimes 1 + \sum_{k=1}^{n-1} C_kA_k \otimes B_k \otimes 1
\]
\[
= - (x_{i_0} \otimes x_{i_1} \ldots x_{i_n} \otimes 1)
+ (x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1)
+ \sum_{k=1}^{n-1} x_{i_{k+1}} \ldots x_{i_n}x_{i_0} \ldots x_{i_{k-1}}
\otimes x_{i_k} \otimes 1
\]
\[
= (x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1)
+ (x_{i_1} \ldots x_{i_n} \otimes x_{i_0} \otimes 1)
+ \sum_{k=1}^{n-1} x_{i_{k+1}} \ldots x_{i_n}x_{i_0} \ldots x_{i_{k-1}}
\otimes x_{i_k} \otimes 1,
\]
by sign and degeneracy relations. This last expression is precisely the
relation Eq.~\ref{eq.step6}.
{\bf Step 7.}
Every element of the form $X \otimes Y \otimes 1$ is equivalent
to a linear combination of elements of $\mathscr{B}_n$.
To prove this, we shall induct on the size of $Y$. Suppose $Y$ consists of
a single variable. That is, $X \otimes Y \otimes 1 =
x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1$. Now, if $i_n \neq n$,
we are done. Otherwise, we use the cyclic relation to write
\[
x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1 \approx
-\sum_{j = 1}^n \tau_n^j\big(x_{i_0} \ldots
x_{i_{n-1}} \otimes x_{i_n} \otimes 1\big).
\]
Now suppose $k \geq 1$ and any element $Z \otimes W \otimes 1$ with $|W| = k$
is equivalent to an element of $k[\mathscr{B}_n]$. Consider $X \otimes Y \otimes 1
= x_{i_0} \ldots x_{i_{n-k-1}} \otimes x_{i_{n-k}} \ldots x_{i_n} \otimes 1$.
Let
\[
\left\{\begin{array}{ll}
A_k := & x_{i_0}x_{i_1}\ldots x_{i_{n-k-1}},\\
B_k := & x_{i_{n-k}} \ldots x_{i_{n-1}},\\
C_k := & x_{i_n}.
\end{array}\right.
\]
Then, by the Hochschild relation,
\[
X \otimes Y \otimes 1 = A_k \otimes B_kC_k \otimes 1
\approx A_kB_k \otimes C_k \otimes 1 + C_kA_k \otimes B_k \otimes 1.
\]
But since $|C_k| = 1$ and $|B_k| = k$, this last expression is equivalent to an element
of $k[\mathscr{B}_n]$.
{\bf Step 8. [Modified Cyclic Relation]}
\begin{equation}\label{eq.step8}
\sum_{j = 0}^k \tau_k^j\big(x_{i_0}x_{i_1} \ldots
x_{i_{k-1}} \otimes x_{i_k} \otimes x_{i_{k+1}}\ldots x_{i_n}\big) \approx 0
\pmod{k\big[\{A \otimes B \otimes 1\}\big]}.
\end{equation}
Note, the $(k+1)$-cycle $\tau_k$ permutes the indices $i_0, i_1, \ldots, i_k$, and
fixes the rest.
First, we show that $X \otimes Y \otimes W + Y \otimes X \otimes W \approx 0
\pmod{k\big[\{A \otimes B \otimes 1\}\big]}$. Indeed, if we let $Z = 1$ in
Eq.~\ref{eq.4-term}, then
\[
XY \otimes 1 \otimes W + W \otimes X \otimes Y + YX \otimes 1 \otimes W
+ W \otimes Y \otimes X \approx 0,
\]
\[
\Rightarrow\; -(X \otimes Y \otimes W + Y \otimes X \otimes W)
+ XY \otimes 1 \otimes W + YX \otimes 1 \otimes W \approx 0,
\qquad \textrm{by step 4},
\]
\begin{equation}\label{eq.step8a}
\Rightarrow\; X \otimes Y \otimes W + Y \otimes X \otimes W
\approx XY \otimes W \otimes 1 + YX \otimes W \otimes 1,
\qquad \textrm{by step 3}.
\end{equation}
Now, we have $X \otimes Y \otimes W + Y \otimes X \otimes W
\approx 0 \pmod{k\big[\{A \otimes B \otimes 1\}\big]}$. Note that
this last expression can be used to prove the modified cyclic relation for $k=1$.
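Explicitly, for $k = 1$ the modified cyclic relation is the statement that
$x_{i_0} \otimes x_{i_1} \otimes x_{i_2} \cdots x_{i_n}
+ x_{i_1} \otimes x_{i_0} \otimes x_{i_2} \cdots x_{i_n} \approx 0
\pmod{k\big[\{A \otimes B \otimes 1\}\big]}$, which is the relation just established
with $X = x_{i_0}$, $Y = x_{i_1}$ and $W = x_{i_2} \cdots x_{i_n}$.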
Next, rewrite
Eq.~\ref{eq.4-term}:
\[
XY \otimes Z \otimes W + W \otimes ZX \otimes Y + YZX \otimes 1 \otimes W
+ W \otimes YZ \otimes X \approx 0,
\]
\[
\Rightarrow\; XY \otimes Z \otimes W - Y \otimes ZX \otimes W
+ YZX \otimes W \otimes 1 - X \otimes YZ \otimes W \approx 0,
\qquad \textrm{by steps 3 and 4},
\]
\[
\Rightarrow\; XY \otimes Z \otimes W + ZX \otimes Y \otimes W
+ YZX \otimes W \otimes 1 - X \otimes YZ \otimes W \approx 0\pmod{k\big[
\{A \otimes B \otimes 1\}\big]},
\]
using the relation $X \otimes Y \otimes W + Y \otimes X \otimes W
\approx 0 \pmod{k\big[\{A \otimes B \otimes 1\}\big]}$ proven above.
\begin{equation}\label{eq.step8b}
\Rightarrow\; XY \otimes Z \otimes W - X \otimes YZ \otimes W
+ ZX \otimes Y \otimes W \approx 0 \pmod{k\big[\{A \otimes B \otimes 1\}\big]}.
\end{equation}
Eq.~\ref{eq.step8b} is a modified Hochschild relation, and we can use it in the same
way we used the Hochschild relation in step 6. Assume $k \geq 2$, and define
for $j = 1, 2, \ldots k-1$:
\[
\left\{\begin{array}{ll}
A_j := & x_{i_0}x_{i_1}\ldots x_{i_{j-1}},\\
B_j := & x_{i_j},\\
C_j := & x_{i_{j+1}} \ldots x_{i_k}.
\end{array}\right.
\]
Using the modified Hochschild relation together with the observation that
for $j \leq k-2$,
\[
A_jB_j \otimes C_j \otimes W = A_{j+1} \otimes B_{j+1}C_{j+1} \otimes W,
\]
we finally arrive at the sum:
\[
0 \approx - A_1 \otimes B_1 C_1 \otimes W + A_{k-1}B_{k-1} \otimes C_{k-1}
\otimes W + \sum_{j=1}^{k-1} C_jA_j \otimes B_j \otimes W
\pmod{k\big[\{A \otimes B \otimes 1\}\big]}
\]
\[
\equiv - (x_{i_0} \otimes x_{i_1} \ldots x_{i_k} \otimes W)
+ (x_{i_0} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes W)
+ \sum_{j=1}^{k-1} x_{i_{j+1}} \ldots x_{i_k}x_{i_0} \ldots x_{i_{j-1}}
\otimes x_{i_j} \otimes W
\]
\[
\equiv (x_{i_1} \ldots x_{i_k} \otimes x_{i_0} \otimes W)
+ (x_{i_0} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes W)
+ \sum_{j=1}^{k-1} x_{i_{j+1}} \ldots x_{i_k}x_{i_0} \ldots x_{i_{j-1}}
\otimes x_{i_j} \otimes W.
\]
{\bf Step 9.}
Every element of the form $X \otimes Y \otimes x_n$ is equivalent
to an element of $k[\mathscr{B}_n]$.
We shall use the modified cyclic relation and modified Hochschild relation in
a similar way as cyclic and Hochschild relations were used in step 7. Again we
induct on the size of $Y$. If $|Y| = 1$, then
\[
X \otimes Y \otimes x_n = x_{i_0} \ldots x_{i_{n-2}} \otimes x_{i_{n-1}} \otimes
x_{n}.
\]
If $i_{n-1} \neq n-1$, then we are done. Otherwise, use the modified cyclic
relation to re-express $X \otimes Y \otimes x_n$ as a sum of elements of
$k[\mathscr{B}_n]$, modulo $k\big[\{A \otimes B \otimes 1\}\big]$. Of course, by step
7, all elements $A \otimes B \otimes 1$ are also in $k[\mathscr{B}_n]$.
Next, suppose $k \geq 1$ and any element $Z \otimes W \otimes x_n$ with $|W| = k$
is equivalent to an element of $k[\mathscr{B}_n]$. Consider $X \otimes Y \otimes x_n
= x_{i_0} \ldots x_{i_{n-k-2}} \otimes x_{i_{n-k-1}} \ldots x_{i_{n-1}} \otimes x_n$.
Using the modified Hochschild relation with
\[
\left\{\begin{array}{ll}
A_k := & x_{i_0}x_{i_1}\ldots x_{i_{n-k-2}},\\
B_k := & x_{i_{n-k-1}} \ldots x_{i_{n-2}},\\
C_k := & x_{i_{n-1}}.
\end{array}\right.
\]
we obtain:
\[
X \otimes Y \otimes x_n = A_k \otimes B_kC_k \otimes x_n
\approx A_kB_k \otimes C_k \otimes x_n + C_kA_k \otimes B_k \otimes x_n
\pmod{k\big[\{A \otimes B \otimes 1\}\big]}.
\]
But since $|C_k| = 1$ and $|B_k| = k$, this last expression is equivalent to an element
of $k[\mathscr{B}_n]$.
{\bf Step 10.}
Every element of $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ is equivalent
to a linear combination of elements from the set
\begin{equation}\label{eq.step10}
\mathscr{C}_n := \{X \otimes x_{i_n} \otimes 1 \;|\; i_n \neq n\} \cup
\{X \otimes x_{i_{n-1}} \otimes x_n \;|\; i_{n-1} \neq n-1\} \cup
\{X \otimes Y \otimes Zx_n \;|\; |Z| \geq 1\}
\end{equation}
Note, the $k$-module generated by $\mathscr{C}_n$ contains $k[\mathscr{B}_n]$.
Let $X \otimes Y \otimes Z$ be an arbitrary element of
$k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$. If $|X| = 0$, $|Y|=0$, or
$|Z| = 0$, then the degeneracy relations imply that this element is
equivalent to an element of the form $X' \otimes Y' \otimes 1$. Step 7 implies
this element is equivalent to one in $k[\mathscr{B}_n]$, hence also
in $k[\mathscr{C}_n]$.
Suppose now that $|X|, |Y|, |Z| \geq 1$.
If $x_n$ occurs in $X$, use the relation $X \otimes Y \otimes W \approx
-Y \otimes X \otimes W \pmod{k\big[\{A \otimes B \otimes 1\}\big]}$
to ensure that $x_n$ occurs in the middle factor. If $x_n$ occurs
in the $Z$, use the sign relation and the above relation to put $x_n$ into
the middle factor. In any case, it suffices to assume our element has
the form:
\[
X \otimes Ux_nV \otimes Z.
\]
By the modified Hochschild relation,
\[
X \otimes Ux_nV \otimes Z
\approx XUx_n \otimes V \otimes Z + VX \otimes Ux_n \otimes Z,
\pmod{k\big[\{A \otimes B \otimes 1\}\big]},
\]
\[
\approx -Z \otimes V \otimes XUx_n + Z \otimes VX \otimes Ux_n.
\]
The first term is certainly in $k[\mathscr{C}_n]$, since $|X| \geq 1$.
If $|U| > 0$, the second term also lies in $k[\mathscr{C}_n]$. If,
however, $|U| = 0$, then use step 9 to re-express $Z \otimes VX \otimes x_n$
as an element of $k[\mathscr{B}_n]$.
Observe that Step 10 proves Lemma~\ref{lem.4-term-relation} for $n = 0, 1, 2$,
since in these cases, any elements that fall within the set
$\{X \otimes Y \otimes Zx_n \;|\; |Z| \geq 1\}$ must have either $|X| = 0$
or $|Y| = 0$, hence are equivalent via the degeneracy relation to elements of
the form $A \otimes B \otimes 1$.
In what follows, assume $n \geq 3$.
{\bf Step 11.}
\begin{equation}\label{eq.step11}
W \otimes Z \otimes Xx_n \approx W \otimes x_nZ \otimes X
\pmod{k\big[\{A \otimes B \otimes 1\}\cup\{A \otimes B \otimes x_n\}\big]}.
\end{equation}
Let $Y = x_n$ in Eq.~\ref{eq.4-term}.
\[
Xx_n \otimes Z \otimes W + W \otimes ZX \otimes x_n
+ x_nZX \otimes 1 \otimes W + W \otimes x_nZ \otimes X \approx 0,
\]
\[
\Rightarrow\; W \otimes Z \otimes Xx_n \approx
W \otimes ZX \otimes x_n + x_nZX \otimes W \otimes 1 + W \otimes x_nZ \otimes X,
\]
\[
\approx W \otimes x_nZ \otimes X \pmod{k\big[\{A \otimes B \otimes 1\}\cup\{A \otimes B
\otimes x_n\}\big]}.
\]
{\bf Step 12.}
Every element of $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ is equivalent
to a linear combination of elements from the set
\begin{equation}\label{eq.step12}
\mathscr{D}_n := \{X \otimes x_{i_{n-2}} \otimes x_{n-1}x_{n} \;|\; i_{n-2} \neq n-2\}
\cup
\{X \otimes Y \otimes Zx_{n-1}x_n \;|\; |Z| \geq 1\},
\end{equation}
modulo $k\big[\{A \otimes B \otimes 1\}\cup\{A \otimes B \otimes x_n\}\big]$.
Let $X \otimes Y \otimes Z$ be an arbitrary element of
$k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$. Locate $x_{n-1}$ and use the
techniques of Step 10 to re-express $X \otimes Y \otimes Z$ as a linear
combination of terms of the form:
\[
W_j \otimes Z_j \otimes X_jx_{n-1},
\]
modulo $k\big[\{A \otimes B \otimes 1\}]$.
Now, for each $j$, we want to re-express
$W_j \otimes Z_j \otimes X_jx_{n-1}$ as a linear combination of tensors in which $x_n$ occurs
only in the second factor. If $x_n$ occurs in $W_j$, then we just use the
modified cyclic relation:
\[
W_j \otimes Z_j \otimes X_jx_{n-1} \approx
-Z_j \otimes W_j \otimes X_jx_{n-1}
\pmod{k\big[\{A \otimes B \otimes 1\}]}.
\]
If $x_n$ occurs in $X_j$, then first use
Eq.~\ref{eq.4-term} with $Y = x_{n-1}$:
\[
Xx_{n-1} \otimes Z \otimes W + W \otimes ZX \otimes x_{n-1}
+ x_{n-1}ZX \otimes 1 \otimes W + W \otimes x_{n-1}Z \otimes X \approx 0,
\]
\[
\Rightarrow\; W \otimes Z \otimes Xx_{n-1} \approx W \otimes ZX \otimes x_{n-1}
+ x_{n-1}ZX \otimes W \otimes 1 + W \otimes x_{n-1}Z \otimes X,
\]
\[
\approx W \otimes ZX \otimes x_{n-1}
+ W \otimes x_{n-1}Z \otimes X \pmod{k\big[\{A \otimes B \otimes 1\}]},
\]
\[
\approx W \otimes ZX \otimes x_{n-1}
+ Wx_{n-1} \otimes Z \otimes X + ZW \otimes x_{n-1} \otimes X
\pmod{k\big[\{A \otimes B \otimes 1\}]},
\]
\[
\approx W \otimes ZX \otimes x_{n-1}
+ Z \otimes X \otimes Wx_{n-1} - ZW \otimes X \otimes x_{n-1},
\pmod{k\big[\{A \otimes B \otimes 1\}]}.
\]
Thus, we can express our original element $X \otimes Y \otimes Z$ as a linear
combination of elements of the form:
\[
X' \otimes U'x_nV' \otimes Z'x_{n-1},
\]
modulo $k\big[\{A \otimes B \otimes 1\}]$. Use the modified Hochschild relation
to obtain
\[
X' \otimes U'x_nV' \otimes Z'x_{n-1} \approx
X'U' \otimes x_nV' \otimes Z'x_{n-1} + x_nV'X' \otimes U' \otimes Z'x_{n-1}
\pmod{k\big[\{A \otimes B \otimes 1\}]}.
\]
\[
\approx
X'U' \otimes x_nV' \otimes Z'x_{n-1} - U' \otimes x_nV'X' \otimes Z'x_{n-1}
\pmod{k\big[\{A \otimes B \otimes 1\}]}.
\]
\[
\approx
X'U' \otimes V' \otimes Z'x_{n-1}x_n - U' \otimes V'X' \otimes Z'x_{n-1}x_n
\pmod{k\big[\{A \otimes B \otimes 1\}\cup\{A \otimes B \otimes x_n\}\big]},
\]
by step 11. If $|Z'| \geq 1$, then we are done, otherwise, we have elements
of the form $X'' \otimes Y'' \otimes x_{n-1}x_n$. Use an induction
argument analogous to that in step 9 to re-express this type of element as
a linear combination of elements of the form:
\[
U \otimes x_{i_{n-2}} \otimes x_{n-1}x_n, \quad i_{n-2} \neq n-2,
\pmod{k\big[\{A \otimes B \otimes 1\}\big]}.
\]
{\bf Step 13.}
Every element of $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ is equivalent
to an element of $k[\mathscr{B}_n]$.
We shall use an iterative re-writing procedure. First of all, define
\[
\mathscr{B}_n^{j} := \{ A \otimes x_{i_{n-j}} \otimes x_{n-j+1} \ldots
x_n \;|\; i_{n-j} \neq n-j\},
\]
\[
\mathscr{C}_n^{j} := \{ A \otimes B \otimes Cx_{n-j+1} \ldots
x_n \;|\; |C| \geq 1\}.
\]
Now clearly, $\mathscr{B}_n = \bigcup_{j=0}^{n-1} \mathscr{B}_n^{j}$.
By steps 10 and 12, we can reduce any arbitrary element $X \otimes Y \otimes Z$
to linear combinations of elements in $\mathscr{B}_n^0 \cup \mathscr{B}_n^1 \cup
\mathscr{B}_n^2 \cup \mathscr{C}_n^2$. Suppose now that we have reduced elements
to linear combinations from $\mathscr{B}_n^0 \cup \mathscr{B}_n^1 \cup \ldots
\cup \mathscr{B}_n^j \cup \mathscr{C}_n^j$, for some $j \geq 2$. I claim any
element of $\mathscr{C}_n^j$ can be re-expressed as a linear combination
from $\mathscr{B}_n^0 \cup \mathscr{B}_n^1 \cup \ldots
\cup \mathscr{B}_n^{j+1} \cup \mathscr{C}_n^{j+1}$.
Let $X \otimes Y \otimes Zx_{n-j+1} \ldots x_n$, with $|Z| \geq 1$. Let
$w := x_{n-j+1} \ldots x_n$. We may now think of $X \otimes Y \otimes Zw$
as consisting of the variables $x_0, x_1, \ldots, x_{n-j}, w$, hence, by
step 12, we may re-express this element as a linear combination of elements
from the set
\[
\{ X \otimes x_{i_{n-j-1}} \otimes x_{n-j}w \;|\; i_{n-j-1} \neq n-j-1\}
\cup \{ X \otimes Y \otimes Zx_{n-j}w \;|\; |Z| \geq 1 \},
\]
modulo $k\big[\{A \otimes B \otimes 1\}\cup\{A \otimes B \otimes w\}\big]$.
Of course, this implies the element may be written as a linear combination
of elements from $\mathscr{B}_n^{j+1} \cup \mathscr{C}_n^{j+1}$, modulo
$k\big[\{A \otimes B \otimes 1\}\cup\{A \otimes B \otimes x_{n-j+1}\ldots
x_n\}\big]$. Since $\{A \otimes B \otimes x_{n-j+1}x_{n-j+2}\ldots
x_n\} \subset \mathscr{C}_n^{j-1}$, the inductive hypothesis ensures that
$\{A \otimes B \otimes x_{n-j+1}\ldots
x_n\} \subset \mathscr{B}_n^0 \cup \ldots \cup \mathscr{B}_n^{j}$. This
completes the inductive step.
After a finite number of iterations, then, we can re-express any element
$X \otimes Y \otimes Z$ as a linear combination from the set
$\mathscr{B}_n^0 \cup \ldots \cup \mathscr{B}_n^{n-1} \cup
\mathscr{C}_n^{n-1}
= \mathscr{B}_n \cup \mathscr{C}_n^{n-1}$. But $\mathscr{C}_n^{n-1} =
\{ A \otimes B \otimes Cx_{2} \ldots
x_n \;|\; |C| \geq 1\}$. Any element from this set has either $|A| = 0$ or
$|B| = 0$, therefore is equivalent to an element from $k[\mathscr{B}_n]$
already.
\end{proof}
\begin{cor}\label{cor.k-contains-one-half}
If $\frac{1}{2} \in k$, then the four-term relation
\begin{equation}\label{eq.4-term_cor}
XY \otimes Z \otimes W + W \otimes ZX \otimes Y + YZX \otimes 1 \otimes W
+ W \otimes YZ \otimes X \approx 0
\end{equation}
is sufficient to collapse $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ onto
$k[\mathscr{B}_n]$.
\end{cor}
\begin{proof}
We only need to modify step 1 of the previous proof:
{\bf Step 1$'$.}
\[
X \otimes 1 \otimes 1 \approx 1 \otimes X \otimes 1 \approx
1 \otimes 1 \otimes X \approx 0.
\]
Letting three variables at a time equal $1$ in Eq.~\ref{eq.4-term_cor},
\begin{equation}\label{eq.step1_1}
2(1 \otimes 1 \otimes W) + 2(W \otimes 1 \otimes 1) \approx 0,
\quad \textrm{when $X = Y = Z = 1$.}
\end{equation}
\begin{equation}\label{eq.step1_2}
3(1 \otimes Z \otimes 1) + Z \otimes 1 \otimes 1 \approx 0,
\quad \textrm{when $X = Y = W = 1$.}
\end{equation}
\begin{equation}\label{eq.step1_3}
2(Y \otimes 1 \otimes 1) + 1 \otimes Y \otimes 1 + 1 \otimes 1 \otimes Y
\approx 0, \quad \textrm{when $X = Z = W = 1$.}
\end{equation}
Now, replace $W$ with $X$ in Eq.~\ref{eq.step1_1}, $Z$ with $X$ in
Eq.~\ref{eq.step1_2}, and $Y$ with $X$ in Eq.~\ref{eq.step1_3}. Then
\[
-3\big[2(1 \otimes 1 \otimes X) + 2(X \otimes 1 \otimes 1)\big]
-2\big[3(1 \otimes X \otimes 1) + X \otimes 1 \otimes 1\big]
+6\big[2(X \otimes 1 \otimes 1) + 1 \otimes X \otimes 1 + 1 \otimes 1 \otimes X\big]
\]
\[
= 4(X \otimes 1 \otimes 1).
\]
\[
\Rightarrow\; X \otimes 1 \otimes 1 \approx 0,
\]
as long as $2$ is invertible in $k$. Next, by Eq.~\ref{eq.step1_1},
$2(1 \otimes 1 \otimes X) \approx 0 \Rightarrow 1 \otimes 1 \otimes X
\approx 0$. Finally, Eq.~\ref{eq.step1_3} gives $1 \otimes X \otimes 1 \approx 0$.
\end{proof}
Lemma~\ref{lem.4-term-relation}, together with Lemma~\ref{lem.rho-iso} and
Lemma~\ref{lem.0-stage}, shows that the following sequence is exact:
\[
0 \gets k \stackrel{\epsilon}{\gets} k\big[\mathrm{Mor}_{\Delta S}([n], [0])\big]
\stackrel{\rho}{\gets} k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]\qquad\qquad\qquad
\]
\begin{equation}\label{eq.part_res}
\hspace{2.5in} \stackrel{(\alpha, \beta)}{\longleftarrow}
k\big[\mathrm{Mor}_{\Delta S}([n], [3])\big] \oplus
k\big[\mathrm{Mor}_{\Delta S}([n], [0])\big],
\end{equation}
where $\alpha : k\big[\mathrm{Mor}_{\Delta S}([n], [3])\big]
\to k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ is induced by
\[
x_0x_1 \otimes x_2 \otimes x_3 + x_3 \otimes x_2x_0 \otimes x_1
+ x_1x_2x_0 \otimes 1 \otimes x_3 + x_3 \otimes x_1x_2 \otimes x_0,
\]
and $\beta : k\big[\mathrm{Mor}_{\Delta S}([n], [0])\big]
\to k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ is induced by $1 \otimes x_0 \otimes 1$.
This holds for all $n \geq 0$, so the following is a partial resolution of $k$ by
projective \mbox{$\Delta S^\mathrm{op}$-modules}:
\[
0 \gets k \stackrel{\epsilon}{\gets} k\big[\mathrm{Mor}_{\Delta S}(-, [0])\big]
\stackrel{\rho^*}{\gets} k\big[\mathrm{Mor}_{\Delta S}(-, [2])\big]
\stackrel{(\alpha^*, \beta^*)}{\longleftarrow}
k\big[\mathrm{Mor}_{\Delta S}(-, [3])\big] \oplus
k\big[\mathrm{Mor}_{\Delta S}(-, [0])\big]
\]
Hence, we may compute
$HS_0(A)$ and $HS_1(A)$ as the homology groups of the following complex:
\[
0 \gets
k\big[\mathrm{Mor}_{\Delta S}(-, [0])\big]
\otimes_{\Delta S} B_*^{sym}A \stackrel{\rho\otimes\mathrm{id}}{\longleftarrow}
k\big[\mathrm{Mor}_{\Delta S}(-, [2])\big]
\otimes_{\Delta S} B_*^{sym}A \stackrel{(\alpha, \beta)\otimes\mathrm{id}}
{\longleftarrow}
\]
\[
\Big(k\big[\mathrm{Mor}_{\Delta S}(-, [3])\big] \oplus
k\big[\mathrm{Mor}_{\Delta S}(-, [0])\big]\Big)
\otimes_{\Delta S} B_*^{sym}A.
\]
This complex is isomorphic to the one from Thm.~\ref{thm.partial_resolution},
via the evaluation map
\[
k\big[\mathrm{Mor}_{\Delta S}(-, [p])\big]
\otimes_{\Delta S} B_*^{sym}A \stackrel{\cong}{\to} B_p^{sym}A.
\]
\section{Low-degree Computations of $HS_*(A)$}\label{sec.low-dim-comp}
\begin{theorem}\label{thm.HS_0}
For a unital associative algebra $A$ over commutative ground ring $k$,
\[
HS_0(A) \cong A/([A,A]),
\]
where $([A,A])$ is the ideal generated by the commutator submodule $[A,A]$.
\end{theorem}
\begin{proof}
By Thm.~\ref{thm.partial_resolution}, $HS_0(A) \cong A/k\big[\{abc-cba\}\big]$
as $k$-module. But $k\big[\{abc-cba\}\big]$ is an ideal of $A$: if
$x \in A$, then
\[
x(abc - cba) = \big((xa)(b)(c) - (c)(b)(xa)\big) + \big((c)(bx)(a) - (a)(bx)(c)\big)
+ \big((a)(b)(xc) - (xc)(b)(a)\big) \in k\big[\{abc-cba\}\big],
\]
and the analogous identity
\[
(abc - cba)x = \big((a)(b)(cx) - (cx)(b)(a)\big) + \big((c)(xb)(a) - (a)(xb)(c)\big)
+ \big((ax)(b)(c) - (c)(b)(ax)\big)
\]
gives closure under right multiplication.
Since every commutator $ab - ba = (a)(1)(b) - (b)(1)(a)$ lies in $k\big[\{abc-cba\}\big]$,
and the latter is an ideal, we get $([A,A]) \subset k\big[\{abc-cba\}\big]$. Conversely,
$k\big[\{abc-cba\}\big] \subset ([A,A])$ since
\[
abc - cba = a(bc-cb) + a(cb) - (cb)a.
\]
\end{proof}
\begin{cor}
If $A$ is commutative, then $HS_0(A) \cong A$.
\end{cor}
Note that Theorem~\ref{thm.HS_0} implies that symmetric homology does not
preserve Morita equivalence, since for $n>1$,
\[
HS_0\left(M_n(A)\right) = M_n(A)/\left([M_n(A),M_n(A)]\right) = 0.
\]
This implies $HS_*\left(M_n(A)\right) = 0$, since for any
$x \in HS_q\left(M_n(A)\right)$, $x = 1 \cdot x = 0 \cdot x = 0$, via the
Pontryagin product of Cor.~\ref{cor.pontryagin}, while in general
$HS_0(A) = A/([A,A]) \neq 0$.
By working with the complex in Thm.~\ref{thm.partial_resolution}, an explicit
formula for the product $HS_0(A) \otimes HS_1(A) \to HS_1(A)$ can be
determined.
\begin{prop}\label{prop.module-structure}
For a unital associative algebra $A$ over commutative ground ring $k$,
$HS_1(A)$ is a left $HS_0(A)$-module, via
\[
A/([A,A]) \otimes HS_1(A) \longrightarrow HS_1(A)
\]
\[
[a] \otimes [x\otimes y \otimes z] \mapsto
[ax \otimes y \otimes z] - [x \otimes ya \otimes z] + [x \otimes y \otimes az]
\]
(Here, elements of $HS_1(A)$ are represented as equivalence classes of elements
in $A \otimes A \otimes A$,
via the complex from Thm.~\ref{thm.partial_resolution}.)
Moreover, there is a right module structure
\[
HS_1(A) \otimes HS_0(A) \longrightarrow HS_1(A)
\]
\[
[x\otimes y \otimes z] \otimes [a] \mapsto
[xa \otimes y \otimes z] - [x \otimes ay \otimes z] + [x \otimes y \otimes za],
\]
and the two actions are equal.
\end{prop}
\begin{proof}
Define the products on the chain level.
For $a \in A$ and $x \otimes y \otimes z \in A \otimes A \otimes A$, put
\begin{equation}\label{eq.left-HS0-action-on-HS1}
a . (x \otimes y \otimes z) := ax \otimes y \otimes z
- x \otimes ya \otimes z
+ x \otimes y \otimes az
\end{equation}
\begin{equation}\label{eq.right-HS0-action-on-HS1}
(x \otimes y \otimes z).a := xa \otimes y \otimes z
- x \otimes ay \otimes z
+ x \otimes y \otimes za
\end{equation}
There are a few details to verify:
1. Formula~(\ref{eq.left-HS0-action-on-HS1}) gives an action
\[
A \otimes HS_1(A) \to HS_1(A)
\]
The product is unital, since
\[
1. [x\otimes y \otimes z] =
[x \otimes y \otimes z] - [x \otimes y \otimes z] + [x \otimes y \otimes z]
= [x \otimes y \otimes z].
\]
Similarly, Formula~(\ref{eq.right-HS0-action-on-HS1}) is a unital product.
Associativity follows by examining the following expression (on the level of chains):
\[
(ab).(x\otimes y \otimes z) - a.\left(b.(x\otimes y \otimes z)\right)
\]
\[
= (abx \otimes y \otimes z - x \otimes yab \otimes z + x \otimes y \otimes abz)
- \left( \begin{array}{c}
abx \otimes y \otimes z - bx \otimes ya \otimes z + bx \otimes y \otimes az\\
-ax \otimes yb \otimes z + x \otimes yba \otimes z - x \otimes yb \otimes az\\
+ax \otimes y \otimes bz - x \otimes ya \otimes bz + x \otimes y \otimes abz
\end{array}\right)
\]
\[
= - x \otimes yab \otimes z + bx \otimes ya \otimes z - bx \otimes y \otimes az
+ax \otimes yb \otimes z - x \otimes yba \otimes z
\]
\begin{equation}\label{eq.associator}
\qquad\qquad+ x \otimes yb \otimes az
-ax \otimes y \otimes bz + x \otimes ya \otimes bz.
\end{equation}
We may view the variables $a$, $b$, $x$, $y$ and $z$
in Eq.~(\ref{eq.associator}) as formal variables,
and hence the expression itself may be regarded as a chain in
$k\left[ \mathrm{Mor}_{\Delta S}([4],[2]) \right]$ of the complex~(\ref{eq.part_res}).
Now, the differential $\rho$ of complex~(\ref{eq.part_res}) applied to
Eq.~(\ref{eq.associator}) results in:
\begin{equation}\label{eq.d_associator}
\left(\begin{array}{c}
-xyabz + bxyaz - bxyaz
+axybz - xybaz + xybaz
-axybz + xyabz\\
+zyabx - zyabx + azybx
-zybax + zybax - azybx
+bzyax - bzyax
\end{array}\right).
\end{equation}
All terms cancel, showing that Eq.~(\ref{eq.associator}) is in the kernel of $\rho$.
Exactness of complex~(\ref{eq.part_res}) implies that Eq.~(\ref{eq.associator})
is in the image of $(\alpha, \beta)$. Thus, there is a $2$-chain
$C \in A^{\otimes 4} \oplus A$
such that $\partial_2(C) = (ab).(x\otimes y \otimes z) -
a.\left(b.(x\otimes y \otimes z)\right)$, and on the level of homology,
\[
(ab).[x\otimes y \otimes z] = a.\left(b.[x\otimes y \otimes z]\right)
\]
The associativity of Formula~(\ref{eq.right-HS0-action-on-HS1}) is proven in
the same way.
2. Formula~(\ref{eq.left-HS0-action-on-HS1}) induces an action
\[
HS_0(A) \otimes HS_1(A) \to HS_1(A)
\]
It is sufficient to show that if $u$ is a $1$-cycle, then $(ab-ba).u$
is a boundary for any $a, b \in A$. Consider the following element.
\[
w := (ab - ba).(x \otimes y \otimes z) - (a \otimes b \otimes 1).
\partial_1(x \otimes y \otimes z)
=(ab - ba).(x \otimes y \otimes z) - (a \otimes b \otimes 1).(xyz-zyx)
\]
Expanding,
\[
w = \left(\begin{array}{c}
abx \otimes y \otimes z - x \otimes yab \otimes z + x \otimes y \otimes abz\\
- bax \otimes y \otimes z + x \otimes yba \otimes z - x \otimes y \otimes baz\\
-axyz \otimes b \otimes 1 + a \otimes xyzb \otimes 1 - a \otimes b \otimes xyz\\
+azyx \otimes b \otimes 1 - a \otimes zyxb \otimes 1 + a \otimes b \otimes zyx
\end{array}\right).
\]
Consider all variables as formal, so $w \in
k\left[ \mathrm{Mor}_{\Delta S}([4],[2]) \right]$. A routine verification shows
that $\rho(w) = 0$, hence by exactness, $w$ is a boundary. Now, any $1$-cycle
$u$ is a $k$-linear combination of elements of the form $x \otimes y \otimes z$,
so the chain
\[
(ab - ba).u - (a \otimes b \otimes 1)\partial_1(u) = (ab- ba).u
\]
is a boundary.
We show that Formula~(\ref{eq.right-HS0-action-on-HS1}) induces an action
\[
HS_1(A) \otimes HS_0(A) \to HS_1(A)
\]
in the analogous way, using
\[
v := (x \otimes y \otimes z).(ab - ba) -
\partial_1(x \otimes y \otimes z).(a \otimes b \otimes 1)
\]
Lastly, we prove that the product structures are equal.
Consider the following element.
\[
t := a.(x \otimes y \otimes z) - (x \otimes y \otimes z).a -
a \otimes \partial_1(x \otimes y \otimes z) \otimes 1
\]
\[
= a.(x \otimes y \otimes z) - (x \otimes y \otimes z).a -
a \otimes xyz \otimes 1 + a \otimes zyx \otimes 1
\]
It can be verified that $\rho(t) = 0$, proving $t \in
k\left[ \mathrm{Mor}_{\Delta S}([3],[2]) \right]$ is a boundary. If $u$ is a
$1$-cycle, then
\[
a.u - u.a - a \otimes \partial_1(u) \otimes 1 = a.u - u.a
\]
is a boundary, which shows that $a.[u] = [u].a$ in $HS_1(A)$.
\end{proof}
Note, we expect the product structure given above to agree with the Pontryagin
product, but this has not been verified yet due to time constraints.
Using \verb|GAP|, we have made the following explicit computations of degree 1
integral symmetric homology. The $HS_0(A)$-module structure is also displayed.
\begin{center}
\begin{tabular}{l|l|l}
$A$ & $HS_1(A \;|\; \mathbb{Z})$ & $HS_0(A)$-module structure\\
\hline
$\mathbb{Z}[t]/(t^2)$ & $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ & Generated by $u$ with $2u=0$\\
$\mathbb{Z}[t]/(t^3)$ & $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ & Generated by $u$ with $2u=0$ and $t^2u=0$\\
$\mathbb{Z}[t]/(t^4)$ & $(\mathbb{Z}/2\mathbb{Z})^4$ & Generated by $u$ with $2u=0$\\
$\mathbb{Z}[t]/(t^5)$ & $(\mathbb{Z}/2\mathbb{Z})^4$ & \\
$\mathbb{Z}[t]/(t^6)$ & $(\mathbb{Z}/2\mathbb{Z})^6$ & \\
\hline
$\mathbb{Z}[C_2]$ & $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ & Generated by $u$ with $2u=0$\\
$\mathbb{Z}[C_3]$ & $0$ & \\
$\mathbb{Z}[C_4]$ & $(\mathbb{Z}/2\mathbb{Z})^4$ & Generated by $u$ with $2u=0$\\
$\mathbb{Z}[C_5]$ & $0$ & \\
$\mathbb{Z}[C_6]$ & $(\mathbb{Z}/2\mathbb{Z})^6$ & \\
\hline
\end{tabular}
\end{center}
Based on these calculations, we conjecture:
\begin{conj}
\[
HS_1\big(k[t]/(t^n)\big) = \left\{\begin{array}{ll}
(k/2k)^n, & \textrm{if $n \geq 0$ is even,}\\
(k/2k)^{n-1}, & \textrm{if $n \geq 1$ is odd.}
\end{array}\right.
\]
\end{conj}
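For $k = \mathbb{Z}$ the conjectured values agree with the table above: $n = 2, 3$ give
$(\mathbb{Z}/2\mathbb{Z})^2$, $n = 4, 5$ give $(\mathbb{Z}/2\mathbb{Z})^4$, and
$n = 6$ gives $(\mathbb{Z}/2\mathbb{Z})^6$.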
\begin{rmk}
The computations of $HS_1\big(\mathbb{Z}[C_n]\big)$ are consistent with those of
Brown and Loday~\cite{BL}. See section~\ref{sec.2-torsion} for a more detailed
treatment of $HS_1$ for group rings.
\end{rmk}
Additionally, $HS_1$ has been computed for the following examples. These
computations were done using \verb|GAP| in some cases; in others ({\it e.g.} when the
algebra has dimension greater than $6$ over $\mathbb{Z}$),
\verb|Fermat|~\cite{LEW} computations on sparse matrices
were used in conjunction with the \verb|GAP| scripts.
\begin{center}
\begin{tabular}{l|l}
$A$ & $HS_1(A \;|\; \mathbb{Z})$\\
\hline
$\mathbb{Z}[t,u]/(t^2, u^2)$ & $\mathbb{Z} \oplus (\mathbb{Z}/2\mathbb{Z})^{11}$\\
$\mathbb{Z}[t,u]/(t^3, u^2)$ & $\mathbb{Z}^2 \oplus (\mathbb{Z}/2\mathbb{Z})^{11} \oplus \mathbb{Z}/6\mathbb{Z}$\\
$\mathbb{Z}[t,u]/(t^3, u^2, t^2u)$ & $\mathbb{Z}^2 \oplus (\mathbb{Z}/2\mathbb{Z})^{10}$\\
$\mathbb{Z}[t,u]/(t^3, u^3)$ & $\mathbb{Z}^4 \oplus (\mathbb{Z}/2\mathbb{Z})^7 \oplus (\mathbb{Z}/6\mathbb{Z})^5$\\
$\mathbb{Z}[t,u]/(t^2, u^4)$ & $\mathbb{Z}^3 \oplus (\mathbb{Z}/2\mathbb{Z})^{20} \oplus \mathbb{Z}/4\mathbb{Z}$\\
$\mathbb{Z}[t,u,v]/(t^2, u^2, v^2)$ & $\mathbb{Z}^6 \oplus (\mathbb{Z}/2\mathbb{Z})^{42}$\\
$\mathbb{Z}[t,u]/(t^4, u^3)$ & $\mathbb{Z}^6 \oplus (\mathbb{Z}/2\mathbb{Z})^{19} \oplus \mathbb{Z}/6\mathbb{Z} \oplus (\mathbb{Z}/12\mathbb{Z})^2$\\
$\mathbb{Z}[t,u,v]/(t^2, u^2, v^3)$ & $\mathbb{Z}^{11} \oplus (\mathbb{Z}/2\mathbb{Z})^{45} \oplus (\mathbb{Z}/6\mathbb{Z})^4$\\
$\mathbb{Z}[i,j,k], i^2=j^2=k^2=ijk=-1$ & $(\mathbb{Z}/2\mathbb{Z})^8$\\
$\mathbb{Z}[C_2 \times C_2]$ & $(\mathbb{Z}/2\mathbb{Z})^{12}$\\
$\mathbb{Z}[C_3 \times C_2]$ & $(\mathbb{Z}/2\mathbb{Z})^{6}$\\
$\mathbb{Z}[C_3 \times C_3]$ & $(\mathbb{Z}/3\mathbb{Z})^{9}$\\
$\mathbb{Z}[S_3]$ & $(\mathbb{Z}/2\mathbb{Z})^2$\\
\hline
\end{tabular}
\end{center}
\section{Splittings of the Partial Resolution}\label{sec.splittings}
Under certain circumstances, the partial complex in Thm.\ref{thm.partial_resolution}
splits as a direct sum of smaller complexes. This observation becomes increasingly
important as the dimension of the algebra increases. Indeed, some of the computations
of the previous section were done using splittings.
\begin{definition}
For $A = k[M]$, the monoid algebra of a commutative monoid $M$, and $u \in M$, define
$\big(A^{\otimes n}\big)_u$ to be the $k$-submodule of $A^{\otimes n}$ spanned by the
simple tensors of monomials whose product is $u$:
\[
\big(A^{\otimes n}\big)_u := k\big[\{ m_1 \otimes m_2 \otimes \ldots \otimes m_n
\;|\; m_i \in M, \;\; m_1m_2\cdot \ldots \cdot m_n = u \}\big]
\]
\end{definition}
\begin{prop}
If $A = k[M]$ for a commutative monoid $M$, then the complex in
Thm.\ref{thm.partial_resolution}
splits as a direct sum of complexes
\begin{equation}\label{eq.u-homology}
0\longleftarrow (A)_u \stackrel{\partial_1}{\longleftarrow}
(A\otimes A\otimes A)_u
\stackrel{\partial_2}{\longleftarrow}
(A\otimes A\otimes A\otimes A)_u\oplus (A)_u,
\end{equation}
where $u$ ranges over the elements of $M$. For each $u$, the
homology groups of Eq.~(\ref{eq.u-homology}) will be called the $u$-layered
symmetric homology of $A$, denoted $HS_i(A)_u$.
Thus, for $i = 0,1$, we have:
\[
HS_i(A) \cong \bigoplus_{u \in M} HS_i(A)_u.
\]
\end{prop}
\begin{proof}
Since $M$ is a commutative monoid, there are direct sum decompositions as
\mbox{$k$-module}:
\[
A^{\otimes n} = \bigoplus_{u \in M} \big( A^{\otimes n}\big)_u.
\]
The maps $\partial_1$ and $\partial_2$ preserve the products of tensor factors,
so the inclusions
$\big(A^{\otimes n}\big)_u \hookrightarrow A^{\otimes n}$ induce maps of complexes,
hence the complex itself splits as a direct sum.
\end{proof}
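For example, for $A = k[C_2]$ with $C_2 = \{1, g\}$, the layer $\big(A^{\otimes 3}\big)_g$
is the free $k$-module on the simple tensors
$1 \otimes 1 \otimes g$, $1 \otimes g \otimes 1$, $g \otimes 1 \otimes 1$ and
$g \otimes g \otimes g$.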
We may use layers to investigate the symmetric homology of $k[t]$. This algebra
is the monoid algebra of the free monoid $\{1, t, t^2, t^3, \ldots \}$ on one generator.
Now, the $t^m$-layer
symmetric homology of $k[t]$ will be the same as the $t^m$-layer symmetric homology
of $k[M^{m+2}_{m+1}]$, where $M^p_q$ denotes the cyclic monoid generated by a variable
$s$ with the property that $s^p = s^q$. Using this observation and subsequent
computation, we conjecture:
\begin{conj}\label{conj.HS_1freemonoid}
\[
HS_1\big(k[t]\big)_{t^m} = \left\{\begin{array}{ll}
0 & m = 0, 1\\
k/2k, & m \geq 2\\
\end{array}\right.
\]
\end{conj}
This conjecture has been verified up to $m = 18$, in the case $k = \mathbb{Z}$.
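Note that, together with the layer decomposition above, Conjecture~\ref{conj.HS_1freemonoid}
would give $HS_1\big(k[t]\big) \cong \bigoplus_{m \geq 2} k/2k$.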
\section{2-torsion in $HS_1$}\label{sec.2-torsion}
The occurrence of 2-torsion in $HS_1(A)$ for the examples considered
in sections~\ref{sec.low-dim-comp} and \ref{sec.splittings} comes as no surprise,
based on Thm.~\ref{thm.HS_group}. First consider the following chain
of isomorphisms:
\[
\pi_{2}^s(B\Gamma) = \pi_2\big(\Omega^{\infty}S^{\infty}(B\Gamma)\big)
\cong \pi_{1}\big(\Omega\Omega^{\infty}S^{\infty}(B\Gamma)\big)
\]
\[
\cong \pi_{1}\big(\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)\big)
\stackrel{h}{\to} H_1\big(\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)\big).
\]
Here, $\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)$ denotes the component of the
constant loop, and $h$ is the Hurewicz homomorphism,
which is an isomorphism since
$\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)$ is path-connected and $\pi_1$ is
abelian. On the other hand, by Thm.~\ref{thm.HS_group},
\[
HS_1(k[\Gamma]) \cong H_1\big(\Omega\Omega^{\infty}S^{\infty}(B\Gamma); k\big)
\cong H_1\big(\Omega\Omega^{\infty}S^{\infty}(B\Gamma)\big) \otimes k.
\]
(All tensor products will be over
$\mathbb{Z}$ in this section.) Now $\Omega\Omega^{\infty}S^{\infty}(B\Gamma)$
consists of isomorphic copies of $\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)$, one for
each element of $\pi_0\big(\Omega\Omega^{\infty}S^{\infty}(B\Gamma)\big) \cong \pi_1^s(B\Gamma)
\cong \Gamma/[\Gamma, \Gamma]$, so we may write
\[
H_1\big(\Omega\Omega^{\infty}S^{\infty}(B\Gamma)\big) \otimes k
\cong
H_1\big(\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)\big) \otimes
k\big[ \Gamma/[\Gamma, \Gamma] \big].
\]
Thus, we obtain the result:
\begin{cor}\label{cor.stablegrouphomotopy}
If $\Gamma$ is a group, then
\[
HS_1(k[\Gamma]) \cong \pi_{2}^s(B\Gamma) \otimes
k\big[ \Gamma/[\Gamma, \Gamma] \big].
\]
\end{cor}
Now, by results of Brown and Loday~\cite{BL}, if $\Gamma$ is abelian, then
$\pi_2^s(B\Gamma)$ is the reduced tensor square. That is,
\[
\pi_2^s(B\Gamma) = \Gamma\, \widetilde{\wedge} \,\Gamma =
\left(\Gamma \otimes \Gamma\right)/\approx,
\]
where $g \otimes h \approx - h \otimes g$ for all $g, h \in \Gamma$. (This construction
is notated with multiplicative group action in~\cite{BL}, since they deal with the
more general case of non-abelian groups.) So in particular,
if $\Gamma = C_n$, the cyclic group of order $n$, we have
\[
\pi_2^s(BC_n) = \left\{\begin{array}{ll}
\mathbb{Z}/2\mathbb{Z} & \textrm{$n$ even.}\\
0 & \textrm{$n$ odd.}
\end{array}\right.
\]
\begin{cor}\label{cor.HS_1-C_n}
\[
HS_1(k[C_n]) \cong \left\{\begin{array}{ll}
(k/2k)^n & \textrm{$n$ even,}\\
0 & \textrm{$n$ odd.}
\end{array}\right.
\]
\end{cor}
\begin{proof}
The result follows from Cor.~\ref{cor.stablegrouphomotopy}, since
$k\big[ C_n/[C_n, C_n] \big] \cong k[C_n] \cong k^n$ as a $k$-module.
\end{proof}
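For $k = \mathbb{Z}$ this recovers the values computed in Section~\ref{sec.low-dim-comp};
for instance, $HS_1(\mathbb{Z}[C_2]) \cong (\mathbb{Z}/2\mathbb{Z})^2$ and
$HS_1(\mathbb{Z}[C_4]) \cong (\mathbb{Z}/2\mathbb{Z})^4$.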
\section{Relations to Cyclic Homology}\label{sec.cyc-homology}
The relation between the symmetric bar construction and the cyclic bar
construction arising from the chain of inclusions~(\ref{eq.delta-C-S-chain})
gives rise to a natural map
\begin{equation}\label{eq.HCtoHS}
HC_*(A) \to HS_*(A)
\end{equation}
Indeed, by remark~\ref{rmk.HC},
we may define cyclic homology via:
\[
HC_*(A) = \mathrm{Tor}_*^{\Delta C}( \underline{k}, B_*^{sym}A ).
\]
Using the partial complex of Thm.~\ref{thm.partial_resolution}, and an
analogous one for computing cyclic homology (c.f.~\cite{L}, p. 59), the
map~(\ref{eq.HCtoHS}) for degrees $0$ and $1$ is induced by the following
partial chain map:
\[
\begin{diagram}
\node{ 0 }
\node{ A }
\arrow{w}
\arrow{s,r}{ \gamma_0 = \mathrm{id} }
\node{ A \otimes A }
\arrow{w,t}{ \partial_1^C }
\arrow{s,r}{ \gamma_1 }
\node{ A^{\otimes 3} \oplus A }
\arrow{w,t}{ \partial_2^C }
\arrow{s,r}{ \gamma_2 }
\\
\node{ 0 }
\node{ A }
\arrow{w}
\node{ A^{\otimes 3} }
\arrow{w,t}{ \partial_1^S }
\node{ A^{\otimes 4} \oplus A }
\arrow{w,t}{ \partial_2^S }
\end{diagram}
\]
In this diagram, the boundary maps in the upper row are defined as follows:
\[
\partial_1^C : a \otimes b \mapsto ab - ba
\]
\[
\partial_2^C : \left\{\begin{array}{ll}
a \otimes b \otimes c &\mapsto ab \otimes c - a \otimes bc + ca \otimes b\\
a &\mapsto 1 \otimes a - a \otimes 1
\end{array}\right.
\]
The boundary maps in the lower row are defined as in Thm.~\ref{thm.partial_resolution}.
\[
\partial_1^S : a \otimes b \otimes c \mapsto abc - cba
\]
\[
\partial_2^S : \left\{\begin{array}{ll}
a \otimes b \otimes c \otimes d &\mapsto ab \otimes c \otimes d
+ d \otimes ca \otimes b + bca \otimes 1 \otimes d
+ d \otimes bc \otimes a\\
a &\mapsto 1 \otimes a \otimes 1
\end{array}\right.
\]
The partial chain map is given in degree 1 by $\gamma_1(a \otimes b) := a \otimes b
\otimes 1$. In degree 2, $\gamma_2$ is defined on the summand $A^{\otimes 3}$ via
\[
a \otimes b \otimes c \mapsto (a \otimes b \otimes c \otimes 1
- 1 \otimes a \otimes bc \otimes 1
+ 1 \otimes ca \otimes b \otimes 1
+ 1 \otimes 1 \otimes abc \otimes 1
- b \otimes ca \otimes 1 \otimes 1)
- 2abc - cab,
\]
and on the summand $A$ via
\[
a \mapsto (-1 \otimes 1 \otimes a \otimes 1) + (4a).
\]
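As a quick consistency check, on the summand $A$ one computes
\[
\partial_2^S\big(\gamma_2(a)\big) = -\big(3(1 \otimes a \otimes 1) + a \otimes 1 \otimes 1\big)
+ 4(1 \otimes a \otimes 1) = 1 \otimes a \otimes 1 - a \otimes 1 \otimes 1
= \gamma_1\big(\partial_2^C(a)\big).
\]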
\backmatter
\appendix
\chapter{SCRIPTS USED FOR COMPUTER CALCULATIONS}\label{app.comp}
The computer algebra systems \verb+GAP+, \verb+Octave+ and \verb+Fermat+
were used to verify proposed theorems and also to obtain some concrete
computations of symmetric homology for some small algebras. Below is a link to
a tar-file containing the scripts that were created in the course of writing this
dissertation, as well as its \LaTeX\ source.
\begin{center}
\url{http://arxiv.org/e-print/0807.4521v1}
\end{center}
The tar-file contains the following files:
\begin{itemize}
\item \verb+Basic.g+ \quad - Some elementary functions, necessary for some
functions in \verb+DeltaS.g+
\item \verb+HomAlg.g+ \quad - Homological Algebra functions, such as computation
of homology groups for chain complexes.
\item \verb+Fermat.g+ \quad - Functions necessary to invoke \verb+Fermat+
for fast sparse matrix computations.
\item \verb+fermattogap+, \verb+gaptofermat+ \quad - Auxiliary text files for
use when invoking \verb+Fermat+ from \verb+GAP+.
\item \verb+DeltaS.g+ \quad - This is the main repository of scripts used to
compute various quantities associated with the category $\Delta S$, including
$HS_1(A)$ for finite-dimensional algebras $A$.
\end{itemize}
In order to use the functions of \verb+DeltaS.g+, simply copy the above files into
the working directory (such as \verb+~/gap/+), invoke \verb+GAP+, then read
in \verb+DeltaS.g+ at the prompt. The dependent modules will automatically be
loaded (hence they must be present in the same directory as \verb+DeltaS.g+).
Note, most of the computations involving homology require substantial memory
to run. I recommend calling \verb+GAP+ with the command line option
``\verb+-o +{\it mem}'', where {\it mem} is the amount of memory to be allocated
to this instance of \verb+GAP+. All computations done in this dissertation can
be accomplished by allocating 20 gigabytes of memory. The following provides a
few examples of using the functions of \verb+DeltaS.g+
\begin{verbatim}
[ault@math gap]$ gap -o 20g
gap> Read("DeltaS.g");
gap>
gap> ## Number of morphisms [6] --> [4]
gap> SizeDeltaS( 6, 4 );
1663200
gap>
gap> ## Generate the set of morphisms of Delta S, [2] --> [2]
gap> EnumerateDeltaS( 2, 2 );
[ [ [ 0, 1, 2 ], [ ], [ ] ], [ [ 0, 2, 1 ], [ ], [ ] ],
[ [ 1, 0, 2 ], [ ], [ ] ], [ [ 1, 2, 0 ], [ ], [ ] ],
[ [ 2, 0, 1 ], [ ], [ ] ], [ [ 2, 1, 0 ], [ ], [ ] ],
[ [ 0, 1 ], [ 2 ], [ ] ], [ [ 0, 2 ], [ 1 ], [ ] ],
[ [ 1, 0 ], [ 2 ], [ ] ], [ [ 1, 2 ], [ 0 ], [ ] ],
[ [ 2, 0 ], [ 1 ], [ ] ], [ [ 2, 1 ], [ 0 ], [ ] ],
[ [ 0, 1 ], [ ], [ 2 ] ], [ [ 0, 2 ], [ ], [ 1 ] ],
[ [ 1, 0 ], [ ], [ 2 ] ], [ [ 1, 2 ], [ ], [ 0 ] ],
[ [ 2, 0 ], [ ], [ 1 ] ], [ [ 2, 1 ], [ ], [ 0 ] ],
[ [ 0 ], [ 1, 2 ], [ ] ], [ [ 0 ], [ 2, 1 ], [ ] ],
[ [ 1 ], [ 0, 2 ], [ ] ], [ [ 1 ], [ 2, 0 ], [ ] ],
[ [ 2 ], [ 0, 1 ], [ ] ], [ [ 2 ], [ 1, 0 ], [ ] ],
[ [ 0 ], [ 1 ], [ 2 ] ], [ [ 0 ], [ 2 ], [ 1 ] ], [ [ 1 ], [ 0 ], [ 2 ] ],
[ [ 1 ], [ 2 ], [ 0 ] ], [ [ 2 ], [ 0 ], [ 1 ] ], [ [ 2 ], [ 1 ], [ 0 ] ],
[ [ 0 ], [ ], [ 1, 2 ] ], [ [ 0 ], [ ], [ 2, 1 ] ],
[ [ 1 ], [ ], [ 0, 2 ] ], [ [ 1 ], [ ], [ 2, 0 ] ],
[ [ 2 ], [ ], [ 0, 1 ] ], [ [ 2 ], [ ], [ 1, 0 ] ],
[ [ ], [ 0, 1, 2 ], [ ] ], [ [ ], [ 0, 2, 1 ], [ ] ],
[ [ ], [ 1, 0, 2 ], [ ] ], [ [ ], [ 1, 2, 0 ], [ ] ],
[ [ ], [ 2, 0, 1 ], [ ] ], [ [ ], [ 2, 1, 0 ], [ ] ],
[ [ ], [ 0, 1 ], [ 2 ] ], [ [ ], [ 0, 2 ], [ 1 ] ],
[ [ ], [ 1, 0 ], [ 2 ] ], [ [ ], [ 1, 2 ], [ 0 ] ],
[ [ ], [ 2, 0 ], [ 1 ] ], [ [ ], [ 2, 1 ], [ 0 ] ],
[ [ ], [ 0 ], [ 1, 2 ] ], [ [ ], [ 0 ], [ 2, 1 ] ],
[ [ ], [ 1 ], [ 0, 2 ] ], [ [ ], [ 1 ], [ 2, 0 ] ],
[ [ ], [ 2 ], [ 0, 1 ] ], [ [ ], [ 2 ], [ 1, 0 ] ],
[ [ ], [ ], [ 0, 1, 2 ] ], [ [ ], [ ], [ 0, 2, 1 ] ],
[ [ ], [ ], [ 1, 0, 2 ] ], [ [ ], [ ], [ 1, 2, 0 ] ],
[ [ ], [ ], [ 2, 0, 1 ] ], [ [ ], [ ], [ 2, 1, 0 ] ] ]
gap>
gap> ## Generate only the epimorphisms [2] -->> [2]
gap> EnumerateDeltaS( 2, 2 : epi );
[ [ [ 0 ], [ 1 ], [ 2 ] ], [ [ 0 ], [ 2 ], [ 1 ] ],
[ [ 1 ], [ 0 ], [ 2 ] ], [ [ 1 ], [ 2 ], [ 0 ] ],
[ [ 2 ], [ 0 ], [ 1 ] ], [ [ 2 ], [ 1 ], [ 0 ] ] ]
gap>
gap> ## Compose two morphisms of Delta S.
gap> a := Random(EnumerateDeltaS(4,3));
[ [ 0 ], [ 2, 4, 1 ], [ ], [ 3 ] ]
gap> b := Random(EnumerateDeltaS(3,2));
[ [ ], [ 3, 0, 2 ], [ 1 ] ]
gap> MultDeltaS(b, a);
[ [ ], [ 3, 0 ], [ 2, 4, 1 ] ]
gap> MultDeltaS(a, b);
Maps incomposeable
[ ]
gap>
gap> ## Examples of using morphisms of Delta S to act on simple tensors
gap> A := TruncPolyAlg([3,2]);
<algebra of dimension 6 over Rationals>
gap> ## TruncPolyAlg is defined in Basic.g
gap> ## TruncPolyAlg([i_1, i_2, ..., i_n]) is generated by
gap> ## x_1, x_2, ..., x_n, under the relation (x_j)^(i_j) = 0.
gap> g := GeneratorsOfLeftModule(A);
[ X^[ 0, 0 ], X^[ 0, 1 ], X^[ 1, 0 ], X^[ 1, 1 ], X^[ 2, 0 ], X^[ 2, 1 ] ]
gap> x := g[2]; y := g[3];
X^[ 0, 1 ]
X^[ 1, 0 ]
gap> v := [ x*y, 1, y^2 ];
gap> ## v represents the simple tensor xy \otimes 1 \otimes y^2.
[ X^[ 1, 1 ], 1, X^[ 2, 0 ] ]
gap> ActByDeltaS( v, [[2], [], [0], [1]] );
[ X^[ 2, 0 ], 1, X^[ 1, 1 ], 1 ]
gap> ActByDeltaS( v, [[2], [0,1]] );
[ X^[ 2, 0 ], X^[ 1, 1 ] ]
gap> ActByDeltaS( v, [[2,0], [1]] );
[ 0*X^[ 0, 0 ], 1 ]
gap>
gap> ## Symmetric monoidal product on DeltaS_+
gap> a := Random(EnumerateDeltaS(4,2));
[ [ ], [ 2, 1, 0 ], [ 3, 4 ] ]
gap> b := Random(EnumerateDeltaS(3,3));
[ [ ], [ ], [ ], [ 1, 3, 2, 0 ] ]
gap> MonoidProductDeltaS(a, b);
[ [ ], [ 2, 1, 0 ], [ 3, 4 ], [ ], [ ], [ ], [ 6, 8, 7, 5 ] ]
gap> MonoidProductDeltaS(b, a);
[ [ ], [ ], [ ], [ 1, 3, 2, 0 ], [ ], [ 6, 5, 4 ], [ 7, 8 ] ]
gap> MonoidProductDeltaS(a, []);
[ [ ], [ 2, 1, 0 ], [ 3, 4 ] ]
gap>
gap> ## Symmetric Homology of the algebra A, in degrees 0 and 1.
gap> SymHomUnitalAlg(A);
[ [ 0, 0, 0, 0, 0, 0 ], [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 0, 0 ] ]
gap> ## '0' represents a factor of Z, while a non-zero p represents
gap> ## a factor of Z/pZ.
gap>
gap> ## Using layers to compute symmetric homology
gap> C2 := CyclicGroup(2);
<pc group of size 2 with 1 generators>
gap> A := GroupRing(Rationals, DirectProduct(C2, C2));
<algebra-with-one over Rationals, with 2 generators>
gap> ## First, a direct computation without layers:
gap> SymHomUnitalAlg(A);
[ [ 0, 0, 0, 0 ], [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ] ]
gap> ## Next, compute HS_0(A)_u and HS_1(A)_u for each generator u.
gap> g := GeneratorsOfLeftModule(A);
[ (1)*<identity> of ..., (1)*f2, (1)*f1, (1)*f1*f2 ]
gap> SymHomUnitalAlgLayered(A, g[1]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> SymHomUnitalAlgLayered(A, g[2]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> SymHomUnitalAlgLayered(A, g[3]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> SymHomUnitalAlgLayered(A, g[4]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> ## Computing HS_1( Z[t] ) by layers:
gap> SymHomFreeMonoid(0,10);
HS_1(k[t])_{t^0} : [ ]
HS_1(k[t])_{t^1} : [ ]
HS_1(k[t])_{t^2} : [ 2 ]
HS_1(k[t])_{t^3} : [ 2 ]
HS_1(k[t])_{t^4} : [ 2 ]
HS_1(k[t])_{t^5} : [ 2 ]
HS_1(k[t])_{t^6} : [ 2 ]
HS_1(k[t])_{t^7} : [ 2 ]
HS_1(k[t])_{t^8} : [ 2 ]
HS_1(k[t])_{t^9} : [ 2 ]
HS_1(k[t])_{t^10} : [ 2 ]
gap> ## Poincare polynomial of Sym_*^{(p)} for small p.
gap> ## There is a check for torsion, using a call to Fermat
gap> ## to find Smith Normal Form of the differential matrices.
gap> PoincarePolynomialSymComplex(2);
C_0 Dimension: 1
C_1 Dimension: 6
C_2 Dimension: 6
D_1
SNF(D_1)
D_2
SNF(D_2)
2*t^2+t
gap> PoincarePolynomialSymComplex(5);
C_0 Dimension: 1
C_1 Dimension: 30
C_2 Dimension: 300
C_3 Dimension: 1200
C_4 Dimension: 1800
C_5 Dimension: 720
D_1
SNF(D_1)
D_2
SNF(D_2)
D_3
SNF(D_3)
D_4
SNF(D_4)
D_5
SNF(D_5)
120*t^5+272*t^4+t^3
\end{verbatim}
\end{document}
\begin{document}
\begin{abstract}
Let $\mathbb{C}_q$ be the quantum torus associated with the $d \times d$ matrix $q = (q_{ij})$,
where $q_{ij}$ are roots of unity with $q_{ii} = 1$ and $q_{ij}^{-1} = q_{ji}$ for all $1 \leq i, j \leq d.$
Let $\mathrm{Der}(\mathbb{C}_q)$ be the Lie algebra of all the derivations of $\mathbb{C}_q$. In this paper
we define the Lie algebra $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$ and classify its irreducible modules
with finite dimensional weight spaces. These modules under certain
conditions turn out to be of the form $V \otimes \mathbb{C}_q$,
where $V$ is a finite dimensional irreducible $gl_d$-module.
\end{abstract}
\title[]{The irreducible modules for the derivations of the rational quantum torus}
\author[Rao]{S. Eswara Rao}
\address{School of Mathematics, Tata Institute of Fundamental Research,
Homi Bhabha Road, Mumbai 400005, India}
\email[S. Eswara Rao]{[email protected]}
\author[Batra]{Punita Batra}
\address{Department of Mathematics, Harish-Chandra Research Institute,
Chhatnag
Road, Jhunsi, Allahabad
211019 INDIA}
\email[Punita Batra]{[email protected]}
\author[Sharma]{Sachin S. Sharma}
\address{School of Mathematics, Tata Institute of Fundamental Research,
Homi Bhabha Road, Mumbai 400005, India}
\email[Sachin S. Sharma]{[email protected]}
\subjclass{17B65, 17B66, 17B68}
\keywords{Quantum torus, derivations, irreducible module, weight module.}
\maketitle
\section{Introduction}
Let $A = \mathbb{C}[t_1^{\pm 1},\cdots,t_d^{\pm 1}]$ be the Laurent polynomial ring in $d$
commuting variables and let $\mathrm{Der}(A)$ be the Lie algebra of all derivations of $A$,
often referred to as the Lie algebra of diffeomorphisms of the $d$-dimensional torus.
G.~Shen \cite{SGY} and Rao \cite{RE1} gave several irreducible representations of $\mathrm{Der}(A)$. In a paper by
Jiang and Meng \cite{JM}, it has been proved that the classification of
irreducible integrable modules of the full toroidal Lie algebra
can be reduced to the classification of irreducible
$\mathrm{Der}(A) \ltimes A$-modules.
Rao \cite{RE} has given a classification of irreducible modules for
$\mathrm{Der}(A) \ltimes A$ with finite dimensional weight spaces and associative action of $A$.
These modules are of the form $V \otimes A$, where $V$ is a finite dimensional irreducible
$gl_d$-module. Building on the work of \cite{JM} and \cite{RE}, Rao and Jiang \cite{RJ} gave
the classification of the irreducible integrable modules for the full toroidal Lie algebra.
Let $\mathbb{C}_q$ be a quantum torus associated with the $d \times d$ matrix $q = (q_{ij})$, where
$q_{ij}$ are roots of unity with $q_{ii} = 1$, $q_{ij}^{-1} = q_{ji}$ for all $1 \leq i, j \leq d.$
Let $\mathrm{Der}(\mathbb{C}_q)$ be the Lie algebra of all the derivations of $\mathbb{C}_q$. In \cite{ST} W.~Lin and
S.~Tan defined a functor from $gl_d$-modules to $\mathrm{Der}(\mathbb{C}_q)$-modules. They proved that
for a finite dimensional irreducible $gl_d$-module $V$, $V \otimes \mathbb{C}_q$ is a completely
reducible $\mathrm{Der}(\mathbb{C}_q)$-module except in finitely many cases, and hence generalised
Rao's work \cite{RE1} to the quantum case. Liu and Zhao \cite{KZ} completed the study of these
modules by proving that the ``function $g(s)$'' defined in \cite{ST} can be
taken to be the constant function $1$, making the structure of these modules completely clear.
In this paper we study the representations of the Lie algebra $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$.
We now give more details of the paper. Let $W$ be the Lie algebra of the
outer derivations of $\mathbb{C}_q$. Let $\mathbb{C}_q^{(1)}$ and $\mathbb{C}_q^{(2)}$ be the copies of
$\mathbb{C}_q$ contained in the Lie algebra $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$ such that $[\mathbb{C}_q^{(1)},\mathbb{C}_q^{(2)}] = 0$,
$\mathbb{C}_q^{(1)} \cap \mathbb{C}_q^{(2)} = Z(\mathbb{C}_q)$ and $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q = W \ltimes (\mathbb{C}_q^{(1)} + \mathbb{C}_q^{(2)})$.
The $\mathrm{Der}(\mathbb{C}_q)$-module $V \otimes \mathbb{C}_q$
is an irreducible $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$-module with associative
action of $\mathbb{C}_q^{(1)}$ and anti-associative action of $\mathbb{C}_q^{(2)}$ (Proposition \ref{prop1}).
The action is associative on the intersection $Z(\mathbb{C}_q)$.
The main goal of this paper is to prove the converse of Proposition \ref{prop1} (Theorem \ref{prop3}).
The paper is organised as follows.
In section \ref{sec1} we begin with the definition and properties of the quantum torus $\mathbb{C}_q$.
We define the action of $\mathrm{Der}(\mathbb{C}_q)$ on $\mathbb{C}_q$ and the bracket operations on $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$
(Prop.~\ref{pr1} and Prop.~\ref{pr2}). Sections \ref{sec2} and \ref{sec3}
are devoted to the proof of Theorem \ref{prop3}. In section \ref{sec2} we compute
the actions of the outer derivations of $\mathrm{Der}(\mathbb{C}_q)$ and of $\mathbb{C}_q$ on $V^{\prime}$. In
section \ref{sec3} we derive the action of the inner derivations on $V^{\prime}$ and complete
the proof.
\section{Preliminaries}\label{sec1}
Let $q = (q_{ij})_{d \times d}$ be any $d \times d$ matrix with nonzero complex entries
satisfying $q_{ii} = 1$, $q_{ij}^{-1} = q_{ji}$, $q_{ij}$ are roots of unity for all $1 \leq i, j \leq d.$
Let us consider the non-commutative Laurent polynomial ring
$S_{[d]} = \mathbb{C}[t_{1}^{\pm 1},\cdots,t_{d}^{\pm 1}]$. Let $J_q$ be the two sided ideal of
$S_{[d]}$ generated by the elements $\{ t_i t_j - q_{ij}t_j t_i, \; t_i t_i^{-1} -1, \; t_i^{-1}
t_i - 1 \,\, \forall \, \,1 \leq i, j \leq d \}$.
Let $\mathbb{C}_q = S_{[d]} / J_q$. Then $\mathbb{C}_q$ is called the quantum torus associated with the matrix $q$. The
matrix $q$ is called the quantum torus matrix.
For $n = (n_1, \cdots , n_d) \in \mathbb{Z}^d$, let $t^{n} = t_{1}^{n_1} \cdots t_{d}^{n_d}$.
Define $\sigma , f : \mathbb{Z}^d \times \mathbb{Z}^d \rightarrow \mathbb{C}^{\ast}$ by
$$\sigma(n, m) = \prod_{1 \leq i < j \leq d}{q_{ji}^{n_j m_i}}, \, \,f(n, m) = \sigma(n, m) \sigma(m,n)^{-1}.$$
Then one has the following results \cite{KRY} (a small worked instance is given after the list):
\begin{enumerate}
\item $\sigma(n+m, s+r) = \sigma(n, s)\sigma(n, r)\sigma(m, s) \sigma(m, r).$
\item $f(n, m) = f(m, n)^{-1} , \, \, f(n, n) = f(n, -n) = 1.$
\item $ f(n+m, s+r) = f(n, s)f(n, r)f(m, s)f(m, r).$
\item $ t^{n}t^{m} = \sigma(n, m) t^{n+m}, \, \, [t^n, t^m] = (\sigma(n, m) - \sigma(m, n))t^{n+m},\\
t^{n}t^{m} = f(n,m)t^{m}t^{n}, \, \, \forall \, n, m, r, s \in \mathbb{Z}^d .$
\end{enumerate}
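As an illustration (with the parameters chosen only for concreteness), take $d = 2$ and let $q_{21} = \zeta$ be a primitive $N$-th root of unity, so that $q_{12} = \zeta^{-1}$. For $n = (n_1,n_2)$ and $m = (m_1,m_2)$ the definitions above give
$$\sigma(n,m) = \zeta^{\,n_2 m_1}, \qquad f(n,m) = \zeta^{\,n_2 m_1 - n_1 m_2},$$
so that
$$t^{n}t^{m} = \zeta^{\,n_2 m_1}\, t^{n+m} = f(n,m)\, t^{m}t^{n},$$
in agreement with item $(4)$; for example, $t_2 t_1 = \zeta\, t_1 t_2$ recovers the defining relation $t_1 t_2 = q_{12}\, t_2 t_1$ of $\mathbb{C}_q$.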
For $f$, let $\mathrm{rad}(f)$ denote the radical of $f$, which is defined by
$$\mathrm{rad}(f) = \{n\in \mathbb{Z}^d : f(n, m) = 1 \,\, \forall \, m \in \mathbb{Z}^d \} .$$
It is easy to see that $\mathrm{rad}(f)$ is a subgroup of $\mathbb{Z}^d$. As $\mathbb{C}_q$ is
$\mathbb{Z}^d$-graded, we define derivations $\partial_1, \partial_2, \cdots, \partial_d$ satisfying
$$\partial_i (t^n) = n_i t^n \,\,\mathrm{for}\,\, n = (n_1, n_2, \cdots, n_d) \in \mathbb{Z}^d.$$
The inner derivations are given by $\mathrm{ad} \,t^n (t^m) = (\sigma(n, m) - \sigma(m, n))t^{n+m}$.
Note that for $n \in \mathrm{rad}(f)$, $\mathrm{ad}\, t^n = 0$. For $u = (u_1, u_2,\cdots, u_d)\in \mathbb{C}^d$,
define $D(u, r) = t^{r}\sum_{i = 1}^{d}{u_i \partial_i}$.
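In the two-variable illustration above (with $q_{21}=\zeta$ a primitive $N$-th root of unity), $f(n,m) = \zeta^{\,n_2 m_1 - n_1 m_2}$ equals $1$ for every $m$ exactly when $N$ divides both $n_1$ and $n_2$, so $\mathrm{rad}(f) = N\mathbb{Z} \oplus N\mathbb{Z}$. For such $n$ one has $\sigma(n,m) = \sigma(m,n)$ for all $m$, and hence $\mathrm{ad}\,t^{n} = 0$, consistently with the remark above.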
Let $\mathrm{Der}(\mathbb{C}_q)$ be the space of all derivations of $\mathbb{C}_q$. Let $\mathrm{Der}(\mathbb{C}_q)_n$ denote
the set of homogeneous derivations of $\mathbb{C}_q$ of degree $n$. Then we have the following lemma:
\begin{lem}[\cite{KRY}, Lemma 2.48]
\begin{enumerate}
\item $\mathrm{Der}(\mathbb{C}_q) = \bigoplus_{n \in \mathbb{Z}^d} \mathrm{Der}(\mathbb{C}_q)_n $
\item \begin{equation*}
\mathrm{Der}(\mathbb{C}_q)_n = \left\{
\begin{array}{l l}
\mathbb{C}\, \mathrm{ad} \, t^n & \quad \mathrm{if} \,\,\, n \notin \mathrm{rad} (f) \\
\oplus_{i =1}^{d}{\mathbb{C} \, t^{n} \partial_i} & \quad \mathrm{if} \,\, \,n \in \mathrm{rad} (f) . \\
\end{array} \right.
\end{equation*}
\end{enumerate}
\end{lem}
The space $\mathrm{Der}(\mathbb{C}_q)$ is a Lie algebra with the following bracket operations:
\begin{enumerate}
\item $[\mathrm{ad}\,t^s, \mathrm{ad}\,t^r] = (\sigma(s,r) - \sigma(r,s))\, \mathrm{ad}\, t^{s+r}, \, \, \forall\, r,s \notin \mathrm{rad}(f);$
\item $[D(u,r),\mathrm{ad}\,t^s] = (u,s)\sigma(r,s)\, \mathrm{ad}\,t^{r+s} ,\, \, \forall \, r \in \mathrm{rad}(f), s \notin \mathrm{rad}(f), u\in \mathbb{C}^d;$
\item $[D(u,r),D(u^{\prime},r^{\prime})] = D(w,r+r^{\prime}), \,\, \forall \, r, r^{\prime} \in \mathrm{rad}(f) ,
u,u^{\prime} \in \mathbb{C}^d$, where $w = \sigma(r,r^{\prime})((u,r^{\prime})u^{\prime}-(u^{\prime},r)u).$
\end{enumerate}
\begin{prop} \label{pr1}
$\mathbb{C}_q$ is a $\mathrm{Der}(\mathbb{C}_q)$-module with the following action:
\begin{enumerate}
\item $D(u,r).t^n = (u,n)\sigma(r,n)t^{r+n}, \,\, \forall \,\, r\in \mathrm{rad}(f), n\in \mathbb{Z}^d, u\in \mathbb{C}^d;$
\item $\mathrm{ad}\, t^s .t^n = (\sigma(s,n)-\sigma(n,s))t^{s+n}, \,\, \forall \,\,s\notin \mathrm{rad}(f), n\in \mathbb{Z}^d .$
\end{enumerate}
\end{prop}
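As a quick consistency check (an added illustration, not needed in the sequel), one can verify directly that $D(u,r)$ acts by derivations on $\mathbb{C}_q$. Indeed,
$$D(u,r).(t^{n}t^{m}) = \sigma(n,m)(u,n+m)\sigma(r,n+m)\,t^{r+n+m},$$
while
$$(D(u,r).t^{n})\,t^{m} + t^{n}\,(D(u,r).t^{m}) = \big[(u,n)\sigma(r,n)\sigma(r+n,m) + (u,m)\sigma(r,m)\sigma(n,r+m)\big]t^{r+n+m};$$
expanding by the bi-multiplicativity of $\sigma$ and using $\sigma(n,r) = \sigma(r,n)$ (which holds because $r \in \mathrm{rad}(f)$), both sides reduce to $(u,n+m)\,\sigma(r,n)\sigma(r,m)\sigma(n,m)\,t^{r+n+m}$.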
Consider the space $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$. We denote its element $(T,t^{n})$ by $T+t^{n}$, where
$T \in \mathrm{Der}(\mathbb{C}_q)$ and $t^{n} \in \mathbb{C}_q$ for $n \in \mathbb{Z}^d$.
\begin{prop}\label{pr2}
$\mathrm{Der}(\mathbb{C}_q)\ltimes \mathbb{C}_q$ is a Lie algebra with the following brackets:
\begin{enumerate}
\item $[D(u,r),t^n] = (u,n)\sigma(r,n)t^{r+n}, \,\, \forall \, r\in \mathrm{rad}(f), n\in \mathbb{Z}^d, u\in \mathbb{C}^d;$
\item $[\mathrm{ad}\, t^s, t^n] = (\sigma(s,n)-\sigma(n,s))t^{s+n}, \,\, \forall\, s\notin \mathrm{rad}(f), n\in \mathbb{Z}^d;$
\item $[t^{m},t^{n}] = (\sigma(m,n) - \sigma(n,m))t^{m+n} , \,\, \forall \,\,m,n \in \mathbb{Z}^d .$
\end{enumerate}
\end{prop}
Let $W$ denote the Lie subalgebra of $\mathrm{Der}(\mathbb{C}_q)$ generated by the elements $D(u,r)$, where
$u \in \mathbb{C}^d$ and $r\in \mathrm{rad}(f)$. Let $\mathfrak{g} := \mathrm{Der}(\mathbb{C}_q)\ltimes \mathbb{C}_q$ and consider the Lie subalgebras
$\mathbb{C}_q^{(1)} = \mathbb{C}_q$ and $\mathbb{C}_q^{(2)} = \mathrm{span} \{\mathrm{ad}\,(t^{n}) - t^{n} \mid n\in \mathbb{Z}^{d}\}$.
Let $\tilde{\mathfrak{h}} = \{ D(u,0) : u\in \mathbb{C}^d\} \oplus \mathbb{C}$; then $\tilde{\mathfrak{h}}$ is a maximal abelian subalgebra of $\mathfrak{g}$.
It is easy to see that $\mathfrak{g} = W \ltimes (\mathbb{C}_q^{(1)} + \mathbb{C}_q^{(2)})$. As $\mathbb{C}_q^{(2)}$ is isomorphic to $\mathbb{C}_q$
via the Lie algebra isomorphism $\mathrm{ad}\,(t^{n}) - t^{n} \mapsto t^{n}$, we see that $\mathbb{C}_q^{(2)}$ carries
an associative algebra structure given by $(\mathrm{ad}\,(t^{n}) - t^{n})(\mathrm{ad}\,(t^{m}) - t^{m}) =
\sigma(n,m)(\mathrm{ad}\,(t^{n+m}) - t^{n+m})$.
Let $V$ be a finite dimensional irreducible $gl_d$-module and $\alpha \in \mathbb{C}^{d}$. Liu and Zhao \cite{KZ}
(see also \cite{ST}) proved that $M^\alpha(V) = V \otimes \mathbb{C}_q$ is a $\mathrm{Der}(\mathbb{C}_q)$-module with the following actions:
\begin{enumerate}
\item $\mathrm{ad}\,t^{s}v(n) = (\sigma(s,n) - \sigma(n,s))v(n+s)$ ;
\item $D(u,r)v(n) = \sigma(r,n)((u,n+\alpha) + r u^{T})v(r+n)$,
\end{enumerate}
where $r u^{T} = \sum_{i,j}{r_i u_jE_{ij}}$ for
$r = (r_1,\cdots,r_d) \in \mathbb{Z}^d$ and $u = (u_1,\cdots,u_d)^{T}$,
and $v(n) := v \otimes t^{n} \in V(n) := V \otimes t^{n}$, with $n \in \mathbb{Z}^d, s\notin \mathrm{rad}(f), r\in \mathrm{rad}(f), u, \alpha \in \mathbb{C}^d$.
Let $V$ denote an irreducible finite dimensional $gl_d$-module. Then we have the following proposition:
\begin{prop}\label{prop1}
$V \otimes \mathbb{C}_q$ is an irreducible $\mathrm{Der}(\mathbb{C}_q) \ltimes \mathbb{C}_q$-module with the following actions:
\begin{enumerate}
\item $\mathrm{ad}\,t^{s}v(n) = (\sigma(s,n) - \sigma(n,s))v(n+s);$
\item $D(u,r)v(n) = \sigma(r,n)((u,n+\alpha) + r u^{T})v(r+n);$
\item $t^m v(n) = \sigma(m,n)v(m+n),$
\end{enumerate}
where $v(n) := v \otimes t^{n} \in V(n) := V \otimes t^{n}$,
$s\notin \mathrm{rad}(f)$, $r\in \mathrm{rad}(f)$, $u, \alpha \in \mathbb{C}^d$, and $m,n \in \mathbb{Z}^d$.
\end{prop}
\begin{proof}
It is a routine check to show that $V \otimes \mathbb{C}_q$ is a $\mathfrak{g}$-module.
To prove the irreducibility of $V^{\prime} = V \otimes \mathbb{C}_q$, let $M$ be a nonzero
submodule of $V^{\prime}$. Since $M$ is a weight module, we have
$M = \oplus_{n\in \mathbb{Z}^d}{M_n \otimes t^{n}}$, where $M_n = \{v \in V: v \otimes t^{n} \in M\}$.
For any nonzero vector $v \in M_n$, consider the $\mathbb{C}_q$-action on it.
As $\mathbb{C}_q$ is an irreducible $\mathbb{C}_q$-module with the action $t^{n}.t^{m} = \sigma(n,m)t^{n+m}$,
we have $v \otimes \mathbb{C}_q \subseteq M$. It follows that $M_n$ is independent of $n$, so let $M_n = \bar{V} \subseteq V$.
But as $D(u,r)v \otimes t^{n} \in M$ for $v \in \bar{V}$, it follows that $\bar{V}$ is a nonzero $gl_d$-submodule
of $V$. So $\bar{V} = V$, as $V$ is an irreducible $gl_d$-module, and hence $M = V^{\prime}$.
\end{proof}
\begin{rem}
The irreducibility of the $\mathfrak{g}$-module $V \otimes \mathbb{C}_q$ also follows from Proposition 4.1 of \cite{GLKZ}, by
considering $V \otimes \mathbb{C}_q$ as a $W \ltimes \mathbb{C}_q^{(1)}$-module.
\end{rem}
We will denote the $\mathfrak{g}$-module of Proposition \ref{prop1} by $F^{\alpha}(V)$. It is
easy to see that the actions of $\mathbb{C}_q^{(1)}$ and $\mathbb{C}_q^{(2)}$ on $F^{\alpha}(V)$ are associative $(x(yv)= (xy)v)$ and
anti-associative $(x(yv)= -(yx)v)$, respectively; the computation below illustrates the associative case.
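Indeed, for the $\mathbb{C}_q^{(1)}$-action of item (3) of Proposition \ref{prop1}, associativity amounts to the identity
\begin{align*}
t^{m}\big(t^{n}v(k)\big) &= \sigma(n,k)\,\sigma(m,n+k)\,v(m+n+k) = \sigma(m,n)\,\sigma(m,k)\,\sigma(n,k)\,v(m+n+k)\\
&= \sigma(m,n)\,\sigma(m+n,k)\,v(m+n+k) = (t^{m}t^{n})\,v(k),
\end{align*}
which follows from the bi-multiplicativity of $\sigma$; the anti-associativity of the $\mathbb{C}_q^{(2)}$-action is checked by a similar computation.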
Our main aim in this paper is to prove the converse of Proposition \ref{prop1}, which is the following:
\begin{thm}\label{prop3}
Let $V^{\prime}$ be an irreducible $\mathbb{Z}^d$-graded $\mathfrak{g}$-module with finite dimensional weight
spaces with respect to $\tilde{\mathfrak{h}}$, with associative $\mathbb{C}_q^{(1)}$-action and anti-associative $\mathbb{C}_q^{(2)}$-action, and
on which $t^{0}$ acts as $1$. Then $V^{\prime} \cong F^{\alpha}(V)$ for some $\alpha \in \mathbb{C}^{d}$ and some finite dimensional irreducible
$gl_d$-module $V$.
\end{thm}
\section{The action of $D(u,r)$ and $\mathbb{C}_q$ on $V^{\prime}$}\label{sec2}
Let $U(\mathfrak{g})$ denote the universal enveloping algebra of $\mathfrak{g}$. Let $L(\mathfrak{g})$
be the two sided ideal of $U(\mathfrak{g})$ generated by $\{ t^{m}t^{n} - \sigma(m,n)t^{m+n},\;
(\mathrm{ad}\,(t^{n}) - t^{n})(\mathrm{ad}\,(t^{m}) - t^{m}) + \sigma(m,n)(\mathrm{ad}\,t^{m+n} - t^{m+n}),\; t^{0}-1 \;:\; m,n \in \mathbb{Z}^{d}\}$.
Throughout this section $V^{\prime}$ will be as in
Theorem \ref{prop3}. Therefore
$L(\mathfrak{g})$ acts trivially on $V^{\prime}$, and $V^{\prime}$
is a $U(\mathfrak{g})/L(\mathfrak{g})$-module. Let $V^{\prime} = \oplus_{r\in \mathbb{Z}^d}{V^{\prime}_r}$ be its weight space decomposition, with
$V^{\prime}_r = \{v\in V^{\prime}: D(u,0)v = (u,r+\alpha)v \,\, \forall \, u\in \mathbb{C}^d \} $ for some fixed $\alpha \in \mathbb{C}^{d}$.
\begin{prop}\label{thm1}
For $r, s \in \mathbb{Z}^{d}$, we have
$$[t^{-s}\mathrm{ad}\,t^{s},t^{-r}\mathrm{ad}\,{t^{r}}]=0 \,\,\mathrm{on} \,\,V^{\prime}.$$
\end{prop}
\begin{proof}
First, using the associativity of the $\mathbb{C}_q^{(1)}$-action and the anti-associativity of the $\mathbb{C}_q^{(2)}$-action, we get
\begin{align}\label{neq}
\mathrm{ad}\,t^{r}\mathrm{ad}\,t^{s} - (t^{r}\mathrm{ad}\,t^{s}+ t^{s}\mathrm{ad}\,t^{r}) +
\sigma(s,r)\,\mathrm{ad}\,t^{r+s} = 0 \,\, \mathrm{on} \,\, V^{\prime} .
\end{align}
Let us consider
\begin{align*}
[t^{-s}\mathrm{ad}\,t^{s}&,t^{-r}\mathrm{ad}\,t^{r}] \\
&= [t^{-s}\mathrm{ad}\,t^{s},t^{-r}]\mathrm{ad}\,t^{r} + t^{-r}[t^{-s}\mathrm{ad}\,t^{s},\mathrm{ad}\,t^{r}]\\
&= [t^{-s},t^{-r}]\mathrm{ad}\,t^{s}\mathrm{ad}\,t^{r} + t^{-s}[\mathrm{ad}\,t^{s},t^{-r}]\mathrm{ad}\,t^{r} \\
& + t^{-r}[t^{-s},\mathrm{ad}\,t^{r}]\mathrm{ad}\,t^{s} + t^{-r}t^{-s}[\mathrm{ad}\,t^{s},\mathrm{ad}\,t^{r}]\\
&= (\sigma(s,r)-\sigma(r,s))t^{-(s+r)}(t^{s}\mathrm{ad}\,t^{r} + t^{r}\mathrm{ad}\,t^{s} - \sigma(r,s)\mathrm{ad}\,t^{r+s})\\
&+ t^{-s}(\sigma(s,-r) - \sigma(-r,s))t^{s-r}\mathrm{ad}\,t^{r} \\
&- t^{-r}(\sigma(r,-s) - \sigma(-s,r))t^{r-s}\mathrm{ad}\,t^{s}\\
&+ \sigma(r,s)t^{-(s+r)}(\sigma(s,r)-\sigma(r,s))\mathrm{ad}\,t^{r+s}\\
&= (\sigma(s,r)-\sigma(r,s))(\sigma(-(r+s),s)t^{-r}\mathrm{ad}\,t^{r} +\sigma(-(r+s),r)t^{-s}\mathrm{ad}\,t^{s}\\
&- \sigma(r,s)t^{-(r+s)}\mathrm{ad}\,t^{r+s}) + (\sigma(s,-r) - \sigma(-r,s))\sigma(-s,s-r)t^{-r}\mathrm{ad}\,t^{r}\\
&- (\sigma(r,-s) - \sigma(-s,r))\sigma(-r,r-s)t^{-s}\mathrm{ad}\,t^{s} \\
& +\sigma(r,s)(\sigma(s,r)-\sigma(r,s))t^{-(s+r)}\mathrm{ad}\,t^{r+s}\\
&=0 .
\end{align*}
\end{proof}
We state the following lemma from \cite{HM}, which will be used later.
\begin{lem}[\cite{HM}, Prop.~19.1(b)]\label{thm3}
Let $\mathfrak{g}^{\prime}$ be a Lie algebra, not necessarily finite dimensional, and let
$(V_{1},\rho)$ be an irreducible finite dimensional module for $\mathfrak{g}^{\prime}$,
so that we have a map $\rho: \mathfrak{g}^{\prime} \rightarrow \mathrm{End}\,(V_1)$. Then $\rho(\mathfrak{g}^{\prime})$
is a reductive Lie algebra with at most one dimensional center.
\end{lem}
Let $U_1 = U(\mathfrak{g})/L(\mathfrak{g})$ and define $T^{\prime}(u,r) = t^{-r}D(u,r) - \sigma(-r,r)D(u,0)$ as
an element of $U_1$ for $r\in \mathrm{rad}(f)$, $u\in \mathbb{C}^d$. Let $T^{\prime}$ be the subspace generated by the $T^{\prime}(u,r)$ for all
$u$ and $r$. Let $\mathfrak{g}^{\prime}$ be the Lie subalgebra generated by the $T^{\prime}(u,r)$ and the $t^{-s}\mathrm{ad}\,t^{s}$ for
all $u \in \mathbb{C}^d$, $r \in \mathrm{rad}(f)$ and $s\notin \mathrm{rad}(f)$. Let $I$ be the subalgebra of $\mathfrak{g}^{\prime}$ generated
by the elements of the form $t^{-s}\mathrm{ad}\,t^{s}$. Then we have the following proposition:
\begin{prop}\label{pr4}
$I$ is an abelian ideal of $\mathfrak{g}^{\prime}$.
\end{prop}
\begin{proof}
It follows from Proposition \ref{thm1} that $I$ is an abelian subalgebra. To prove that
$I$ is an ideal of $\mathfrak{g}^{\prime}$ we need to prove that $[T^{\prime}(u,r),t^{-s}\mathrm{ad}\,t^{s}] \in I$.
So consider
\begin{align*}
[T^{\prime}(u,r)&,t^{-s}\mathrm{ad}\,t^{s}] \\
&= [T^{\prime}(u,r),t^{-s}]\mathrm{ad}\,t^{s} + t^{-s}[T^{\prime}(u,r), \mathrm{ad}\,t^{s}] \\
&= t^{-s}[T^{\prime}(u,r), \mathrm{ad}\,t^{s}] \,\,(\mbox{as}\,\, [T^{\prime}(u,r),t^{-s}] =0 , \,\,\mbox{see}
\,\,\mbox{Prop.}\,\, \ref{prop0}(5))\\
&= t^{-s}[t^{-r}D(u,r),\mathrm{ad}\,t^{s}] - t^{-s}\sigma(-r,r)[D(u,0),\mathrm{ad}\,t^{s}]\\
&= t^{-s}[t^{-r},\mathrm{ad}\,t^{s}]D(u,r) +t^{-s}t^{-r} [D(u,r),\mathrm{ad}\,t^{s}] \\
&- t^{-s}\sigma(-r,r)(u,s)\mathrm{ad}\,t^{s}.
\end{align*}
As $r \in \mathrm{rad}(f)$, we have $[t^{-r},\mathrm{ad}\,t^{s}] = 0$,
so we get
\begin{align*}
[T^{\prime}(u,r),t^{-s}\mathrm{ad}\,t^{s}] &= \sigma(s,r)(u,s)\sigma(r,s)t^{-(s+r)}\mathrm{ad}\,t^{r+s} \\
&- \sigma(-r,r)(u,s)t^{-s}\mathrm{ad}\,t^{s} \in I.
\end{align*}
This completes the proof.
\end{proof}
\begin{prop}\label{prop0}
\begin{enumerate}
\item $[T^{\prime}(u,r),T^{\prime}(v,s)] = (v,r)\sigma(-s,s)T^{\prime}(u,r)\\
-(u,s)\sigma(-r,r)T^{\prime}(v,s) + \sigma(s,r)T^{\prime}(w,r+s)$,\\
where $w = \sigma(r,s)[(u,s)v-(v,r)u]$; in particular $T^{\prime}$ is a Lie subalgebra.
\item $[D(v,0),T^{\prime}(u,r)] = 0$.
\item Let $V^{\prime} = \oplus_{r\in \mathbb{Z}^d}{V^{\prime}}_r$ be the weight space decomposition. Then
each $V^{\prime}_r$ is $T^{\prime}$-invariant.
\item Each $V^{\prime}_r$ is an irreducible $T^{\prime}$-module.
\item ${V^{\prime}}_r \cong {V^{\prime}}_s$ as $T^{\prime}$-modules.
\end{enumerate}
\end{prop}
\begin{proof}
The proofs of $(1)$, $(2)$ and $(3)$ are the same as in \cite{RE} (Proposition 3.2).
For $(4)$, let $U(\mathfrak{g}) = \oplus_{r\in \mathbb{Z}^d}{U_{r}}$, where $U_r = \{v\in U(\mathfrak{g}):[D(u,0),v] = (u,r)v
\, \forall \, u\in \mathbb{C}^d \}$. As $V^{\prime}$ is an irreducible $\mathfrak{g}$-module, again by using the same argument
as in Proposition 3.2 of \cite{RE} we get that $V^{\prime}_r$ is
an irreducible $U_0$-module, and every element of $U_0$ can be written as a linear combination of elements of the form
$t^{-r_1}D(u,r_1)\cdots t^{-r_k}D(u,r_k)t^{-s_1}\mathrm{ad}\, t^{s_1}\cdots t^{-s_n}\mathrm{ad}\,t^{s_n}$.
So $U_0$ is generated by the elements of the form $T^{\prime}(u,r)$ and $t^{-s}\mathrm{ad}\,t^{s}$, where $r\in \mathrm{rad}(f)$
and $s \notin \mathrm{rad}(f)$. Now we apply Lemma \ref{thm3}, taking $\mathfrak{g}^{\prime}$ to be the Lie algebra generated by the
$T^{\prime}(u,r)$ and the $t^{-s}\mathrm{ad}\,t^{s}$, and $V_1 = V^{\prime}_r$. As the elements of the form $t^{-s}\mathrm{ad}\,t^{s}$ form an
abelian ideal (Prop.~\ref{pr4}), it follows from Lemma \ref{thm3} that these elements
must lie in
the center of $\rho(\mathfrak{g}^{\prime})$, which is at most one dimensional. Consequently $t^{-s}\mathrm{ad}\,t^{s}$
acts as a scalar on $V_r^{\prime}$, and hence $V_r^{\prime}$ is an irreducible $T^{\prime}$-module.\\
Now let us prove $(5)$.
We have $t^{s-r}V^{\prime}_{r} \subseteq V^{\prime}_{s}$, and since
$$V^{\prime}_{r} = t^{(r-s)}t^{(s-r)}V^{\prime}_{r} \subseteq t^{(r-s)}V^{\prime}_{s} \subseteq V^{\prime}_{r},$$
we get $V^{\prime}_{r} = t^{r-s}V^{\prime}_{s}$. Define $\psi : V^{\prime}_{r} \rightarrow V^{\prime}_{s}$ by
$\psi(v) = t^{(s-r)}v $. Note that $\psi$ is bijective, since $t^{(r-s)}t^{(s-r)}$ acts as the nonzero scalar $\sigma(r-s,s-r)$;
to see that it is a $T^{\prime}$-module homomorphism, we need to show that $[t^{(s-r)},T^{\prime}(u,m)] = 0$, which follows
from a straightforward calculation.
\end{proof}
Now, by Liu and Zhao \cite{KZ}, $\mathrm{rad}(f) = m_1\mathbb{Z} e_1 \oplus \cdots \oplus m_d \mathbb{Z} e_d$ for
some $0 \neq m_i \in \mathbb{Z}$, $1\leq i \leq d$. Note that this result is true only because all the entries of
the matrix $q$ are roots of unity. Consider the Laurent polynomial ring associated with
$\mathrm{rad}(f)$, which is equal to $\mathbb{C}[t_1^{\pm m_1},\cdots,t_d^{\pm m_d}] = Z(\mathbb{C}_q)$, the center of $\mathbb{C}_q$.
Let $A = \mathbb{C}[s_1^{\pm1},\cdots, s_d^{\pm1}]$ be a Laurent polynomial ring, where $s_i = t_i^{m_i}$ for
$1 \leq i \leq d$. To avoid notational confusion, we will use the notation $d(u,r)$ for the derivations of
$\mathrm{Der}(A)$, where $d(u,r) = t^{r}\sum_{i = 1}^{d}{u_i \partial_i}$, $u = (u_1, u_2,\cdots, u_d)\in \mathbb{C}^d$
and $r \in \mathbb{Z}^d$. Recall that $W$ denotes the Lie subalgebra of $\mathrm{Der}(\mathbb{C}_q)$ generated by the elements $D(u,r)$ with
$u \in \mathbb{C}^d$, $r\in \mathrm{rad}(f)$.
We have the following proposition:
\begin{prop}
$\mathrm{Der}(A)\ltimes A \cong W \ltimes Z(\mathbb{C}_q)$, via the map $\phi$ defined by
$\phi(d(u,r)+t^s ) = \sqrt{\sigma(r,r)}\,D(u,r)+\sqrt{\sigma(s,s)}\,t^s$.
\end{prop}
\begin{proof}
See Lemma 2.3 of \cite{GLKZ}.
\end{proof}
Using the above proposition and \cite{RE}, we see that $T^{\prime}/I_2^{\prime} \cong gl_d$, where $I_2^{\prime}$
is the ideal of $T^{\prime}$ spanned by the elements $T^{\prime}(u,r,n_1,n_2)$ defined as follows:
$T^{\prime}(u,r,n_1,n_2) = T^{\prime}(u,r) - T^{\prime}(u,r + n_1) - T^{\prime}(u,r+ n_2) + T^{\prime}(u,r + n_1 + n_2)$.
Recall from \cite{RE} that the Lie subalgebra $T$ spanned by the elements $T(u,r) = t^{-r}d(u,r) - d(u,0)$
is isomorphic to $T^{\prime}$ under the map $\phi$. Using this isomorphism
$\phi$ we see that $\phi(T(u,r)) = \sigma(r,r)T^{\prime}(u,r)$, so $\phi^{-1}(T^{\prime}(u,r)) = \sigma(r,r)^{-1}T(u,r) =
\sigma(-r,r)T(u,r)$. As $T^{\prime}/I_2^{\prime} \cong gl_d$, by the same argument as in \cite{RE}
we have $V^{\prime}_{r} \cong V \otimes t^{r}$, where $V$ is a finite dimensional irreducible representation of $gl_d$.
So we have $V^{\prime} \cong V \otimes \mathbb{C}_q$. The isomorphism from $T^{\prime}/I_2^{\prime}$ to $gl_d$ is given by the
map $\pi^{\prime}$ defined by $\pi^{\prime}(T^{\prime}(e_i, e_j)) = \sigma(e_j,e_i)^{-1}E_{ji} = E_{ji}$, as $\sigma(e_i,e_j) = 1$.
Now let us calculate the action of $D(u,r)$ on $V^{\prime}_n = V \otimes t^{n} := V^{\prime}(n)$.
First consider
\begin{align*}
T^{\prime}(u,r)v(n) &= \sigma(-r,r)T(u,r)v(n)\\
&=\sigma(-r,r)\sum_{i,j}{u_i r_jT(e_i, e_j)v(n)} \\
&= \sigma(-r,r)\sum_{i,j}{u_i r_j E_{ji}v(n)}.
\end{align*}
So we have
$$t^{-r}D(u,r)v(n) = \sigma(-r,r)D(u,0)v(n) + \sigma(-r,r)\sum_{i,j}{u_i r_j E_{ji}}v(n).$$
Multiplying by $ t^{r}$ we get
\begin{align*}
\sigma(-r,r)D(u,r)v(n) &= \sigma(-r,r)(u,n+\alpha)\sigma(r,n)v(n+r)\\
&+\sigma(-r,r)\Big(\sum_{i,j}{u_i r_j E_{ji}}\Big)\sigma(r,n)v(n+r).
\end{align*}
So we get
$$ D(u,r)v(n) = \sigma(r,n)[(u,n + \alpha) + r u^{T}]v(n+r).$$
Now, by Proposition \ref{prop0}, we have $V^{\prime}_{r} \cong V^{\prime}_{s}$ as $T^{\prime}$-modules.
We identify $V^{\prime}_{s}$ with $t^{s}(V^{\prime}_{0})$, i.e., $t^{s}(V^{\prime}_{0}) = V^{\prime}_{s}$ for all
$s \in \mathbb{Z}^d$. Now consider
\begin{align*}
t^{m}t^{n}v(0) &= t^{m}v(n) \,\,\mbox{(by the identification)},\\
\sigma(m,n)t^{m+n}v(0) &= t^{m}v(n) ,\\
\sigma(m,n)v(m+n) &= t^{m}v(n) .
\end{align*}
So far we have proved the following:
\begin{prop}
Let $V^{\prime}$ be an irreducible $\mathbb{Z}^d$-graded $\mathfrak{g}$-module with finite dimensional weight
spaces with respect to $\tilde{\mathfrak{h}}$, with associative $\mathbb{C}_q^{(1)}$-action and anti-associative $\mathbb{C}_q^{(2)}$-action.
Then $V^{\prime} \cong V \otimes \mathbb{C}_q$, where $V$ is a finite dimensional irreducible $gl_d$-module.
The actions of $D(u,r)$ and $\mathbb{C}_q$ on $V^{\prime}$
are given by the following:
\begin{enumerate}
\item $D(u,r)v(n) = \sigma(r,n)((u,n+\alpha) + r u^{T})v(r+n)$, for some fixed $\alpha \in \mathbb{C}^{d};$
\item $t^{m}v(n) = \sigma(m,n)v(m+n)$ , where $u \in \mathbb{C}^d, m,n \in \mathbb{Z}^d, r\in \mathrm{rad}(f)$, and $ v(n) = v \otimes t^{n}$.
\end{enumerate}
\end{prop}
\section{The $\mathrm{ad}$ action on $V^{\prime}$ and the proof of Theorem \ref{prop3}}\label{sec3}
To complete the proof of Theorem \ref{prop3} we need to determine the
action of the inner derivations $\mathrm{ad}\,t^{s}$ on $V^{\prime}(n)$, which will be done in this section.
By Lemma \ref{thm3}, $t^{-s}\mathrm{ad}\,t^{s}$ acts as a scalar on $V^{\prime}(n)$;
let $t^{-s}\mathrm{ad}\,t^{s}v(n) = \lambda(s,n)v(n)$.
\begin{prop}
$\lambda(s,r)= f(r,s)\lambda(s,0) + \sigma(-s,s)[1 - f(r,s)]$.
\end{prop}
\begin{proof}
Using
\begin{align*}
t^{-s}\mathrm{ad} \,t^{s}t^{r}v(n) &= (t^{r}t^{-s}\mathrm{ad} \,t^{s} + [t^{-s}\mathrm{ad} \,t^{s},t^{r}])v(n),
\end{align*}
we get the desired identity for $\lambda(s,r)$.
\end{proof}
\begin{prop}
$\mathrm{ad}\, t^{s}v(n) = (\sigma(s,n) - g(s)\sigma(n,s))v(n+s)$, where
$ g(s) = -\sigma(s,s)\lambda(s,0) +1 $.
\end{prop}
\begin{proof}
By the above proposition we have
\begin{align*}
t^{-s}\mathrm{ad}\,t^{s}v(n) = [f(n,s)\lambda(s,0) + \sigma(-s,s)(1 - f(n,s))]v(n).
\end{align*}
Multiplying by $t^{s}$ and using the relation $t^{m}t^{n} = \sigma(m,n)t^{m+n}$, we get
\begin{align*}
\mathrm{ad}\,t^{s}v(n) &= [\sigma(s,s)f(n,s)\lambda(s,0) + (1 - f(n,s))]\sigma(s,n)v(n+s)\\
&= [\sigma(s,n) - g(s)\sigma(n,s)]v(n+s).
\end{align*}
\end{proof}
\begin{prop}
Let $g(s) = -\sigma(s,s)\lambda(s,0) +1 $ for $s \notin \mathrm{rad}(f)$ and $g(s)=1$ otherwise. Then
$g(s+r) = g(s)g(r) \,\forall \, s,r \in \mathbb{Z}^d$.
\end{prop}
\begin{proof}
Using Proposition \ref{thm1}, it is a straightforward
calculation to show that $g$ satisfies the desired condition.
\end{proof}
Now denote the $\mathrm{Der}(\mathbb{C}_q)$-module $V^{\prime}$ with the above action by $G^{\alpha}_{g}(V)$.
Lin and Tan \cite{ST} proved that $V \otimes \mathbb{C}_q = V(\psi,b) \otimes \mathbb{C}_q$
is completely reducible as a $\mathrm{Der}(\mathbb{C}_q)$-module unless $(\psi,b) = (\delta_k , k)$, $1 \leq k \leq d-1$, or
$(\psi,b) = (0,b)$, with the following actions:
\begin{enumerate}
\item $D(u,r)v(n) = \sigma(r,n)((u,n+\alpha) + r u^{T})v(r+n);$
\item $\mathrm{ad}\,t^{s}v(n) = (\sigma(s,n)g(s) - \sigma(n,s))v(n+s),$
\end{enumerate}
where $g(s+r) = g(s)g(r) \,\forall \, s,r \in \mathbb{Z}^d$ and $g(r)=1 \, \forall \,\, r\in \mathrm{rad}(f)$. We denote these
$\mathrm{Der}(\mathbb{C}_q)$-modules by $F_{g}^{\alpha}(V)$. Then we have the following proposition:
\begin{prop}\label{prop8}
$G^{\alpha}_{g}(V) \cong F_{g^{-1}}^{\alpha}(V)$ as $\mathrm{Der}(\mathbb{C}_q)$-modules.
\end{prop}
\begin{proof}
First we note that $g(s) \neq 0$ for all $s \in \mathbb{Z}^d$. Now define a map
$\Psi :G^{\alpha}_{g}(V) \rightarrow F_{g^{-1}}^{\alpha}(V)$ by
$\Psi(v(n)) = g(n)^{-1}v(n)$. It is then easy to prove the following:
\begin{enumerate}
\item $\Psi(D(u,r)v(n)) = D(u,r)\Psi(v(n))$;
\item $\Psi(\mathrm{ad} \, t^{s}v(n)) = \mathrm{ad} \, t^{s} \Psi(v(n))$.
\end{enumerate}
\end{proof}
To prove Theorem \ref{prop3} we have to prove that $G_{g}^{\alpha}(V) \cong F_{l}^{\beta}(V)$, where
$\beta \in \mathbb{C}^d$ and $l$ is the constant function $1$ on $\mathbb{Z}^d$. To prove this we invoke the following result of \cite{KZ}:
\begin{thm}[Theorem 3.1, \cite{KZ}]\label{thm5}
Let $g: \mathbb{Z}^d \rightarrow \mathbb{C}^{*}$ be a function satisfying $g(m)g(n) = g(m+n)$ and $g(r) = 1$ for
any $m,n \in \mathbb{Z}^d$, $r\in \mathrm{rad}(f)$. Let $V$ be a $gl_d$-module. Then there exists $\beta \in \mathbb{C}^d$
such that $F_{g}^{\alpha}(V) \cong F_{l}^{\beta}(V)$ as $\mathrm{Der}(\mathbb{C}_q)$-modules, where $l$ denotes the
constant function which maps all the elements of $\mathbb{Z}^d$ to $1$.
\end{thm}
So using Theorem \ref{thm5} and Proposition \ref{prop8} we get $G_{g}^{\alpha}(V) \cong F_{l}^{\beta}(V)$, and
this completes the proof of Theorem \ref{prop3}.
\begin{rem}
After finishing this work, we came across the paper by Liu and Zhao, ``Irreducible Harish-Chandra modules over
the derivation algebras of rational quantum tori'', Glasgow Mathematical Journal, 2013, where they
consider a smaller Lie algebra.
\end{rem}
\noindent\textbf{Acknowledgement.}
We thank the anonymous referee for invaluable comments and suggestions without which
our paper would not be the same.
\end{document}
\begin{document}
\title{Aspects of the Category $\mathsf{SKB}$ of Skew Braces}
\author{Dominique Bourn}
\address{Univ. Littoral C\^ote d'Opale, UR 2597, LMPA,
Laboratoire de Math\'ematiques Pures et Appliqu\'ees Joseph Liouville,
F-62100 Calais, France}
\email{[email protected]}
\author{Alberto Facchini}
\address{Dipartimento di Matematica ``Tullio Levi-Civita'', Universit\`a di
Padova, 35121 Padova, Italy}
\email{[email protected]}
\thanks{The second author is partially supported by Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (Progetto di ricerca di rilevante interesse nazionale ``Categories, Algebras: Ring-Theoretical and Homological Approaches (CARTHA)''), Fondazione Cariverona (Research project ``Reducing complexity in algebra, logic, combinatorics - REDCOM'' within the framework of the programme Ricerca Scientifica di Eccellenza 2018), and the Department of Mathematics ``Tullio Levi-Civita'' of the University of Padua (Research programme DOR1828909 ``Anelli e categorie di moduli'').}
\author{Mara Pompili}
\address{Dipartimento di Matematica ``Tullio Levi-Civita'', Universit\`a di
Padova, 35121 Padova, Italy}
\email{[email protected]}
\keywords{Brace; Skew brace; Yang-Baxter equation; Mal'tsev category; Protomodular category. \\ {\small 2020 {\it Mathematics Subject Classification.} Primary 16T25, 18E13, 20N99.}
}
\begin{abstract} We examine the pointed protomodular category $\mathsf{SKB}$ of left skew braces. We study the notion of commutator of ideals in a left skew brace. Notice that in the literature, ``product'' of ideals of skew braces is often considered. We show that Huq=Smith for left skew braces. Finally, we give a set of generators for the commutator of two ideals, and prove that every ideal of a left skew brace has a centralizer.
\end{abstract}
\maketitle
\section*{Introduction}
Braces appear in connections to the study of set-theoretic solutions of the Yang-Baxter equation.
A {\em set-theoretic solution of the Yang-Baxter equation} is a pair $(X,r)$, where $X$ is a set, $r\colon X\times X\to X\times X$ is a bijection, and $(r\times\mbox{\rm id})(\mbox{\rm id}\times r)(r\times\mbox{\rm id})=(\mbox{\rm id}\times r)(r\times\mbox{\rm id})(\mbox{\rm id}\times r)$
\cite{23}. Set-theoretic solutions of the Yang-Baxter equation appear, for instance, in the study of representations of braid groups, and form a category {\bf SYBE}, whose objects are these pairs $(X,r)$, and morphisms $f\colon (X,r)\to(X',r')$ are the mappings $f\colon X\to X'$ that make the diagram
$$\xymatrix{
X\times X\ar[r]^{f\times f}\ar[d]^r&X'\times X'\ar[d]_{r'}\\
X\times X\ar[r]_{f\times f}&X'\times X'}$$ commute.
One way to produce set-theoretic solutions of the Yang-Baxter equation is using left skew braces.
\noindent\textbf{Definition}
{\rm \cite{GV} A {\sl (left) skew brace} is
a triple $(A, *,\circ)$, where $(A,*) $ and $(A,\circ)$ are groups (not necessarily abelian) such that \begin{equation}a\circ (b * c) = (a\circ b)*a^{-*}* ( a\circ c)\label{lsb}\tag{B}\end{equation}
for every $a,b,c\in A$. Here $a^{-*}$ denotes the inverse of $a$ in the group $(A,*)$. The inverse of $a$ in the group $(A,\circ)$ will be denoted by $a^{-\circ}$.}
A brace is sometimes seen as an algebraic structure similar to that of a ring, with distributivity warped in some sense. But a better description of a brace is probably that of an algebraic structure with two group structures out of phase with each other.
For every left skew brace $(A, *,\circ)$, the mapping $$r \colon A\times A \to A \times A,\quad r(x,y) = (x^{-*} *(x\circ y),(x^{-*} *(x\circ y))^{-\circ}\circ x\circ y),$$
is a non-degenerate set-theoretic solution of the Yang-Baxter equation (\cite[Theorem~3.1]{GV} and \cite[p.~96]{KSV}). Here ``non-degenerate'' means that the mappings $\pi_1r(x_0,-)\colon A\to A$ and $\pi_2r(-,y_0)\colon A\to A$ are bijections for every $x_0\in A$ and every $y_0\in A$.
The simplest examples of left skew braces are:
(1) For any associative ring $(R,+, \cdot)$, the Jacobson radical $(J(R),+, \circ)$, where $\circ$ is the operation on $J(R)$ defined by $x\circ y=xy+x+y$ for every $x,y\in J(R)$.
(2) For any group $(G,*)$, the left skew braces $(G,*,*)$ and $(G,*,*^{op})$.
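Both examples can be checked directly against (\ref{lsb}). For example (1), with $*={}+{}$ and $a^{-*}=-a$, one computes
$$a\circ (b + c) = a(b+c)+a+b+c = ab+ac+a+b+c = (a\circ b)+(-a)+(a\circ c),$$
which is exactly (\ref{lsb}). For example (2), the skew brace $(G,*,*)$, the set-theoretic solution $r$ recalled above reduces to $r(x,y) = (y,\; y^{-*}*x*y)$, the solution by conjugation.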
Several non-trivial examples of skew braces can be found in \cite{SV}. A complete classification of braces of low cardinality has been obtained via computer \cite{KSV}.
A homomorphism of skew braces is a mapping which is a group homomorphism for both the operations. This defines the category $\mathsf{SKB}$ of skew braces.
From \cite{GV}, we know that in a skew brace the units of the two groups coincide. So, $\mathsf{SKB}$ appears as a fully faithful subcategory $\mathsf{SKB}\hookrightarrow \mathsf{DiGp}$ of the category $\mathsf{DiGp}$ of digroups, where a digroup is a triple $(G,*,\circ)$ of a set $G$ endowed with two group structures with same unit. This notion was introduced in \cite{Normal} and devised during discussions between the first author and G. Janelidze.
There are two forgetful functors $U_i:\mathsf{DiGp} \to \mathsf{Gp}, \; i\in\{0,1\},$ associating respectively the first and the second group structures. They both reflect isomorphisms. Since $U_0$ is left exact and reflects isomorphisms, it naturally allows the lifting of the protomodular aspects of the category $\mathsf{Gp}$ of groups to the category $\mathsf{DiGp}$. In turn, the left exact fully faithful embedding $\mathsf{SKB}\hookrightarrow \mathsf{DiGp}$ makes $\mathsf{SKB}$ a pointed protomodular category. The protomodular axiom was introduced in \cite{B0} in order to extract the essence of the homological constructions and in particular to induce an {\em intrinsic notion of
exact sequence}.
In this paper, after recalling the basic facts about protomodular categories, we study the ``protomodular aspects'' of left skew braces, in particular in relation to the category of digroups. We study the notion of commutator of ideals in a left skew brace (in the literature, ``product'' of ideals of skew braces is often considered). We show that Huq=Smith for left skew braces. Notice that Huq${}\ne{}$Smith for digroups and near-rings \cite{Commutators}. We give a set of generators for the commutator of two ideals, and prove that every ideal of a left skew brace has a centralizer.
\section{Basic recalls on protomodular categories}
In this work, any category $ \mathbb{E} $ will be supposed finitely complete, which implies that it has a terminal object $1$. The terminal map from $X$ is denoted $\tau_X: X\to 1$. Given any map $f: X\to Y$, the equivalence relation $R[f]$ on $X$ is produced by the pullback of $f$ along itself. The map $f$ is said to be a {\em regular epimorphism} in $ \mathbb{E} $ when $f$ is the quotient of $R[f]$. When it is the case, we denote it by a double head arrow $X \twoheadrightarrow Y$.
\subsection{Pointed protomodular categories}
The category $ \mathbb{E} $ is said to be pointed when the terminal object $1$ is initial as well. Let us recall that a pointed category $ \mathbb{A} $ is additive if and only if, given any split epimorphism $f: X \rightleftarrows Y, \; fs=1_Y$, the following downward pullback:
$$
\xymatrix@=20pt{
\ensuremath{\mathrm{Ker}} f \ar@{ >->}[rr]^{k_f} \ar@<-4pt>[d]_{} && X \ar@<-4pt>[d]_{f} && \\
1 \ar@{ >->}[rr]_{0_Y} \ar@{ >->}[u]_{0_{K}} & & Y \ar@{ >->}[u]_{s} }
$$
is an upward pushout, namely if and only if $X$ is the direct sum (= coproduct) of $Y$ and $ \ensuremath{\mathrm{Ker}} f$. Let us recall the following:
\begin{definition}{\rm \cite{B0}
A pointed category $ \mathbb{E} $ is said to be {\em protomodular} when, given any split epimorphism as above, the pair $(k_f,s)$ of monomorphisms is jointly strongly epic.}
\end{definition}
This means that the only subobject $u:U \rightarrowtail X$ containing the pair $(k_f,s)$ of subobjects is, up to isomorphism, $1_X$. It implies that, given any pair $(f,g): X \rightrightarrows Z$ of arrows which are equalized by $k_f$ and $s$, they are necessarily equal (take the equalizer of this pair). Pulling back the split epimorphisms along the initial map $0_Y: 1 \rightarrowtail Y$ being a left exact process, the previous definition is equivalent to saying that this process reflects isomorphisms.
The category $\mathsf{Gp}$ of groups is clearly pointed protomodular. This is the case of the category $\mathsf{Rng}$ of rings as well, and more generally, given a commutative ring $R$, of any category $R$-$\mathsf{Alg}$ of any given kind of $R$-algebras without unit, possibly non-associative. This is in particular the case of the category $R$-$\mathsf{Lie}$ of Lie $R$-algebras. Even for $R$ a non-commutative ring, in which case $R$-algebras have a more complex behaviour (they are usually called $R$-rings, see \cite[p.~36]{Bergman} or \cite[p.~52]{libromio}), one has that the category $R$-$\mathsf{Rng}$ of $R$-rings is pointed protomodular, as can be seen from the fact that the forgetful functor $R$-$\mathsf{Rng}\to Ab$ reflects isomorphisms and $Ab$ is protomodular.
The pointed protomodular axiom implies that the category $ \mathbb{E} $ shares with the category $\mathsf{Gp}$ of groups the following well-known {\em Five Principles}:\\
(1) a morphism $f$ is a monomorphism if and only if its kernel $ \ensuremath{\mathrm{Ker}} f$ is trivial \cite{B0};\\
(2) any regular epimorphism is the cokernel of its kernel, in other words any regular epimorphism produces an exact sequence, which determines \emph{an intrinsic notion of exact sequences} in $ \mathbb{E} $ \cite{B0};\\
(3) there is a specific class of monomorphisms $u:U \rightarrowtail X$, the \emph{normal monomorphisms} \cite{B1}, see the next section;\\
(4) there is an intrinsic notion of abelian object \cite{B1}, see section \ref{abob};\\
(5) any reflexive relation in $ \mathbb{E} $ is an equivalence relation, i.e. the category $ \mathbb{E} $ is a Mal'tsev one \cite{B2}.
So, according to Principle (1), a pointed protomodular category is characterized by the validity of the \emph{split short five lemma}. Generally, Principle (5) is not directly exploited in $\mathsf{Gp}$; we shall show in Section \ref{asc} how importantly it works out inside a pointed protomodular category $ \mathbb{E} $. Pointed protomodular varieties of universal algebras are characterized in \cite{BJ}.
\subsection{Normal monomorphisms}
\begin{definition}{\rm \cite{B1}
In any category $ \mathbb{E} $, given a pair $(u,R)$ of a monomorphism $u:U \rightarrowtail X$ and an equivalence relation $R$ on $X$, the monomorphism $u$ is said to be {\em normal to $R$ }when the equivalence relation $u^{-1}(R)$ is the indiscrete equivalence relation $\nabla_X=R[\tau_X]$ on $X$ and, moreover, any commutative square in the following induced diagram is a pullback:}
$$\xymatrix@=3pt{
{U\times U\;} \ar@<-2ex>[ddd]_{d_0^U} \ar@{>->}[rrrrr]^{\check u} \ar@<2ex>[ddd]^{d_1^U} &&&&& {R\;} \ar@<-2ex>[ddd]_{d_0^R} \ar@<2ex>[ddd]^{d_1^R} \\
&&&&\\
&&&&\\
{U\;} \ar@{>->}[rrrrr]_{u} \ar[uuu]|{s_0^U} &&&&& X \ar[uuu]|{s_0^R}
}
$$
\end{definition}
In the category $Set$, provided that $U\neq \emptyset$, these two properties characterize the equivalence classes of $R$. By the Yoneda embedding, this implies the following:
\begin{proposition}
Given any equivalence relation $R$ on an object $X$ in a category $ \mathbb{E} $, for any map $x: 1\to X$, the following upper monomorphism $\check x=d_1^R.\bar x$ is normal to $R$:
$$\xymatrix@=3pt{
{I_R^x\;} \ar[ddd] \ar@{>->}[rrrrr]^{\bar x} &&&&& {R\;} \ar[ddd]_{d_0^R} \ar[rrrrr]^{d_1^R} &&&&& X\\
&&&&\\
&&&&\\
{1\;} \ar@{>->}[rrrrr]_{x} &&&&& X
}
$$
\end{proposition}
In a pointed category $ \mathbb{E} $, taking the initial map $0_X: 1 \rightarrowtail X$ gives rise to a monomorphism $\iota_R: I_R \rightarrowtail X$ which is normal to $R$. This construction produces a preorder mapping $\iota^X: \ensuremath{\mathrm{Equ}}_X \mathbb{E} \to \mathsf{Mon}_X \mathbb{E} $ from the preorder of the equivalence relations on $X$ to the preorder of subobjects of $X$ which preserves intersections. Starting with any map $f: X\to Y$, we get $I_{R[f]}= \ensuremath{\mathrm{Ker}} f$, which says that any kernel map $k_f$ is normal to $R[f]$. Principle (3) above is a consequence of the fact \cite{B1} that in a protomodular category a monomorphism is normal to at most one equivalence relation (up to isomorphism). So being normal, for a monomorphism $u$, becomes a property in this kind of categories. This is equivalent to saying that the preorder homomorphism $\iota^X: \ensuremath{\mathrm{Equ}}_X \mathbb{E} \to \mathsf{Mon}_X \mathbb{E} $ reflects inclusions; so, the preorder $\mathsf{Norm}_X$ of normal subobjects of $X$ is just the image $\iota^X( \ensuremath{\mathrm{Equ}}_X)\subset \mathsf{Mon}_X$.
\subsection{Regular context}
Let us recall from \cite{Barr} the following:
\begin{definition} {\rm
A category $ \mathbb{E} $ is {\em regular }when it satisfies the two first conditions, and {\em exact} when it satisfies all the three conditions:\\
(1) regular epimorphisms are stable under pullbacks;\\
(2) any kernel equivalence relation $R[f]$ has a quotient $q_f$;\\
(3) any equivalence relation $R$ is a kernel equivalence relation.}
\end{definition}
Then, in the regular context, given any map $f: X\to Y$, the following canonical factorization $m$ is necessarily a monomorphism:
$$\xymatrix@=3pt{
&&& \mathsf{Im}_f \ar@{ >.>}[dddrrr]^{m} \\
&&&&\\
&&&&\\
{X\;} \ar@{->>}[rrruuu]^{q_f} \ar[rrrrrr]_{f} &&&&&& Y
}
$$
This produces a canonical decomposition of the map $f$ in a monomorphism and a regular epimorphism which is stable under pullbacks. Now, given any regular epimorphism $f:X \twoheadrightarrow Y$ and any subobject $u:U \rightarrowtail X$, the {\em direct image} $f(u): f(U) \rightarrowtail Y$ of $u$ along the regular epimorphism $f$ is given by $f(U)=\mathsf{Im}_{f.u} \rightarrowtail Y$.
Any variety in the sense of Universal Algebra is exact and regular epimorphisms coincide with surjective homomorphisms.
\subsection{Homological categories}
The significance of pointed protomodular categories grows up in the regular context since, in this context, the split short five lemma can be extended to any exact sequence. Furthermore, the $3\times 3$ lemma, Noether isomorphisms and snake lemma hold; they are all collected in \cite{BB}. This is the reason why a regular pointed protomodular category $ \mathbb{E} $ is called \emph{homological}.
\section{Protomodular aspects of skew braces}
\subsection{Digroups}
From \cite{Normal}, we get the characterization of normal monomorphisms in $\mathsf{DiGp}$:
\begin{proposition}\label{normal1}
A suboject $i: (G,*,\circ) \rightarrowtail (K,*,\circ)$ is normal in the category $\mathsf{DiGp}$ if and only if the three following conditions hold:\\
{\rm (1)} $i:(G,*) \rightarrowtail (K,*)$ is normal in $\mathsf{Gp}$,\\{\rm (2)} $i:(G,\circ) \rightarrowtail (K,\circ)$ is normal in $\mathsf{Gp}$,\\
{\rm (3)} for all $(x,y)\in K\times K$, $x^{-*}*y\in G$ if and only if $x^{-\circ}\circ y\in G$.
\end{proposition}
\subsection{Skew braces}
The following observation is very important:
\begin{proposition}
Let $(G,*,\circ)$ be any skew brace. Consider the mapping $\lambda:G\times G\to G$ defined by $\lambda(a,u)=a^{-*}*(a\circ u)$. Then:\\
{\rm (1)} each $\lambda_{a}=\lambda(a,-)$ lies in $\mathsf{Aut} (G,*)$ and the mapping $a\mapsto\lambda_a$ is a group homomorphism $(G,\circ)\to \mathsf{Aut} (G,*)$; this condition is equivalent to {\rm (\ref{lsb})};\\
{\rm (2)} we have \begin{equation}\lambda({a^{-\circ}},u)=(a^{-\circ})^{-*}*(a^{-\circ}\circ u)=a^{-\circ}\circ(a*u).\label{doublech}\end{equation}
\end{proposition}
\proof
For (1), see \cite{GV}. For (2), applying (\ref{lsb}) we get $a^{-\circ}\circ(a*u)=(a^{-\circ}\circ a)*(a^{-\circ})^{-*}*(a^{-\circ}\circ u)=
(a^{-\circ})^{-*}*(a^{-\circ}\circ u)=\lambda({a^{-\circ}},u)$.
\endproof
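For the two skew braces associated with a group $(G,*)$ in the introduction, $\lambda$ is easily made explicit: for $(G,*,*)$ one gets $\lambda_a=\mbox{\rm id}_G$ for every $a$, while for $(G,*,*^{op})$ one gets $\lambda_a(u)=a^{-*}*u*a$, conjugation by $a$.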
\subsection{First properties of skew braces}
The following observation is straightforward:
\begin{proposition}
$\mathsf{SKB}$ is a Birkhoff subcategory of $\mathsf{DiGp}$.
\end{proposition}
This means that any subobject of a skew brace in $\mathsf{DiGp}$ is a skew brace and that, given any surjective homomorphism $f: X \twoheadrightarrow Y$ in $\mathsf{DiGp}$, the digroup $Y$ is a skew brace as soon as so is $X$. In this way, any equivalence relation $R$ in $\mathsf{DiGp}$ on a skew brace $X$ actually lies in $\mathsf{SKB}$ since it determines a subobject $R\subset X\times X$ in $\mathsf{DiGp}$ and, moreover, its quotient in $\mathsf{SKB}$ is its quotient in $\mathsf{DiGp}$. The first part of this last sentence implies that any normal subobject $u:U \rightarrowtail X$ in $\mathsf{DiGp}$ with $X\in \mathsf{SKB}$ is normal in $\mathsf{SKB}$.
We are now going to show that the normal subobjects in $\mathsf{SKB}$ coincide with the ideals of \cite{GV}.
\begin{proposition}\label{normal2}
A subobject $i: (G,*,\circ ) \rightarrowtail (K,*,\circ )$ is normal in the category $\mathsf{SKB}$ if and only if the three following conditions hold:\\
$(1)$ $i: (G,*) \rightarrowtail (K,*)$ is normal in $\mathsf{Gp}$,\\
$(2)$ $i: (G,\circ ) \rightarrowtail (K,\circ )$ is normal in $\mathsf{Gp}$,\\
$(3')$ $\lambda_x(G)=G$ for all $x\in K$.
\end{proposition}
\proof
Suppose (1) and (2). We are going to show $(3)\iff (3')$, with $(3)$ given in Proposition \ref{normal1}.\\
(i) $x^{-\circ}\circ y\in G\Rightarrow x^{-*}*y\in G$ if and only if $\lambda_{x}(G)\subset G$, setting $y=x\circ u, \; u\in G$.\\
(ii) from (\ref{doublech}):
$x^{-*}*y\in G \Rightarrow x^{-\circ}\circ y\in G$ if and only if $\lambda_{x^{-\circ}}(G)\subset G$, setting $y=x*u, \; u\in G$.\\
Finally $\lambda_{x}(G)\subset G$ for all $x$ is equivalent to $\lambda_{x}(G)=G$.
\endproof
\begin{corollary}
A subobject $i: (G,*,\circ ) \rightarrowtail (K,*,\circ )$ is normal in the category $\mathsf{SKB}$ if and only if it is an ideal in the sense of \cite{GV}, namely is such that:\\
1) $i: (G,\circ ) \rightarrowtail (K,\circ )$ is normal, 2) $G*a=a*G$ for all $a\in K$, 3) $\lambda_{a}(G)\subset G$ for all $a\in K$.
\end{corollary}
\proof
Straightforward.
\endproof
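As an illustration, consider the left skew brace $(J(R),+,\circ)$ of the introduction. One can check that every two-sided ideal $I$ of $R$ contained in $J(R)$ is an ideal in the above sense: conditions 1) and 2) follow from $I$ being a two-sided ideal and $+$ being commutative, and for condition 3) one computes $\lambda_a(u)=-a+(a\circ u)=au+u\in I$ for every $a\in J(R)$ and $u\in I$.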
Being a variety in the sense of Universal Algebra, $\mathsf{SKB}$ is finitely cocomplete; accordingly it has binary sums (called coproducts as well). So, $\mathsf{SKB}$ is a semi-abelian category according to the definition introduced in \cite{JMT}:
\begin{definition}
{\rm A pointed category $ \mathbb{E} $ is said to be {\em semi-abelian} when it is protomodular, exact and has finite sums.}
\end{definition}
From the same \cite{JMT}, let us recall the following observation which explains the choice of the terminology: a pointed category $ \mathbb{E} $ is abelian if and only if both $ \mathbb{E} $ and $ \mathbb{E} ^{op}$ are semi-abelian.
\subsection{Internal skew braces}
Given any category $ \mathbb{E} $, the notion of internal group, digroup and skew brace is straightforward, determining the categories $\mathsf{Gp} \mathbb{E} $, $\mathsf{DiGp} \mathbb{E} $ and $\mathsf{SKB} \mathbb{E} $. Since $\mathsf{Gp} \mathbb{E} $ is protomodular, so are the two others. An important case is produced with $ \mathbb{E} =Top$ the category of topological spaces. Although $Top$ is not a regular category, the category $\mathsf{Gp} Top$ is regular, the regular epimorphisms being the open surjective homomorphisms. So $\mathsf{Gp} Top$ is homological but not semi-abelian.
Now let $f: X\to Y$ be any map in $\mathsf{DiGp} Top$. Let us show that $R[f]$ has a quotient in $\mathsf{DiGp} Top$. Take its quotient $q_{R[f]}: X \twoheadrightarrow Q_f$ in $\mathsf{DiGp}$, then endow $Q_f$ with the quotient topology with respect to $R[f]$; then $q_{R[f]}$ is an open surjective homomorphism since so is $U_0(q_{R[f]})$. Accordingly, a regular epimorphism in $\mathsf{DiGp} Top$ is again an open surjective homomorphism. Moreover this same functor $U_0: \mathsf{DiGp} Top \to \mathsf{Gp} Top$ being left exact and reflecting the homeomorphic isomorphisms, it reflects the regular epimorphisms; so, these regular epimorphisms in $\mathsf{DiGp} Top$ are stable under pullbacks. Accordingly the category $\mathsf{DiGp} Top$ is regular. Similarly the category $\mathsf{SKB} Top$ is homological as well, without being semi-abelian. As any category of topological semi-abelian algebras, both $\mathsf{DiGp} Top$ and $\mathsf{SKB} Top$ are finitely cocomplete, see \cite{BC}.
\section{Skew braces and their commutators}
\subsection{Protomodular aspects}
\subsubsection{Commutative pairs of subobjects, abelian objects}\label{abob}
Given any pointed category $ \mathbb{E} $, the protomodular axiom applies to the following specific downward pullback:
$$
\xymatrix@=20pt{
X \ar@{ >->}[rr]^{r_X} \ar@<-4pt>[d]_{} && X\times Y \ar@<-4pt>[d]_{p_Y} && \\
1 \ar@{ >->}[rr]_{0_Y} \ar@{ >->}[u]_{0_{K}} & & Y \ar@{ >->}[u]_{l_Y} }
$$
where the monomorphisms are the canonical inclusions. This is the definition of a \emph{unital category} \cite{B2}. In this kind of categories there is an intrisic notion of \emph{commutative pair of subobjects}:
\begin{definition} {\rm
Let $ \mathbb{E} $ be a unital category.
Given a pair $(u,v)$ of subobjects of $X$, we say that the subobjects $u$ and $v$ {\em cooperate} (or {\em commute}) when there is a (necessarily unique) map $\varphi $, called the {\em cooperator} of the pair $(u,v)$, making the following diagram commute:
$$
\xymatrix@=20pt{
& U \ar@{ >->}[dl]_{l_U} \ar@{ >->}[dr]^{u} & & \\
U \times V \ar@{.>}[rr]_{\varphi} && X \\
& V \ar@{ >->}[ul]^{r_V} \ar@{ >->}[ur]_{v} & &
}
$$
We denote this situation by $[u,v]=0$. A subobject $u:U \rightarrowtail X$ is {\em central} when $[u,1_{X}]=0$. An object $X$ is {\em commutative} when $[1_{X},1_{X}]=0$.}
\end{definition}
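In $\mathsf{Gp}$, for instance, a pair $(u,v)$ of subgroups $U,V\leq X$ cooperates exactly when $a*b=b*a$ for all $a\in U$ and $b\in V$, the cooperator being $\varphi(a,b)=a*b$; accordingly a subobject is central precisely when it lies in the centre of $X$.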
Clearly $[1_{X},1_{X}]=0$ gives $X$ a structure of internal unitary magma, which, $ \mathbb{E} $ being unital, is necessarily underlying an internal commutative monoid structure. When $ \mathbb{E} $ is protomodular, this is actually an internal abelian group structure, so that we call $X$ an abelian object \cite{B1}. This gives rise to a fully faithful subcategory $Ab( \mathbb{E} )\hookrightarrow \mathbb{E} $, which is additive and stable under finite limits in $ \mathbb{E} $. From that we can derive:
\begin{proposition}\cite{B1}
A pointed protomodular category $ \mathbb{E} $ is additive if and only if any monomorphism is normal.
\end{proposition}
\subsubsection{Connected pairs $(R,S)$ of equivalence relations}
Since a protomodular category is necessarily a Mal'tsev one, we can transfer the following notions. Given any pair $(R,S)$ of equivalence relations on the object $X$ in $ \mathbb{E} $, take the following rightward and downward pullback:
$$
\xymatrix@=30pt
{
R {\overrightarrow\times}_{\!\! X} S \ar[r]^{p_S} \ar[d]_{p_R} & S \ar[d]_{d_0^S} \ar@<+1,ex>@{ >->}[l]^{r_S} \\
R \ar[r]^{d_1^R} \ar@<-1,ex>@{ >->}[u]_{l_R} & X \ar@<+1,ex>@{ >->}[l]^{s_0^R} \ar@<-1,ex>@{ >->}[u]_{s_0^S}
}
$$
where $l_R$ and $r_S$ are the sections induced by the maps $s_0^R$ and $s_0^S$. Let us recall the following definition from \cite{BG1}:
\begin{definition} {\rm
In a Mal'tsev category $ \mathbb{E} $, the pair $(R,S)$ is said to be {\em connected} when there is a (necessarily unique) morphism
$$
p : R {\overrightarrow\times}_{\!\! X} S \rightarrow X,\; xRySz\mapsto p(xRySz)
$$
such that $pr_S=d_1^S$ and $pl_R=d_0^R$, namely such that the following identities hold: $p(xRySy)=x$ and $p(yRySz)=z$. This morphism $p$ is then called the \emph{connector} of the pair, and we denote the situation by $[R,S]=0$.}
\end{definition}
From \cite{BG2}, let us recall that:
\begin{lemma}\label{func}
Let $ \mathbb{E} $ be a Mal'tsev category, $f: X\to Y$ any map, $(R,S)$ any pair of equivalence relations on $X$, $(\bar R,\bar S)$ any pair of equivalence relations on $Y$ such that $R\subset f^{-1}(\bar R)$ and $S\subset f^{-1}(\bar S)$. Suppose moreover that $[R,S]=0$ and $[\bar R,\bar S]=0$. Then the following diagram necessarily commutes:
$$
\xymatrix@=30pt
{
R {\overrightarrow\times}_{\!\! X} S \ar[rr]^{\tilde f} \ar[d]_{p_{(R,S)}} && \bar R {\overrightarrow\times}_{\!\! Y} \bar S \ar[d]^{p_{(\bar R,\bar S)}} \\
X \ar[rr]_{f} && Y
}
$$
where $\tilde f$ is the natural factorization induced by $R\subset f^{-1}(\bar R)$ and $S\subset f^{-1}(\bar S)$.
\end{lemma}
A pointed Mal'tsev category is necessarily unital. From \cite{BG1}, in any pointed Mal'tsev category $ \mathbb{E} $, we have necessarily
\begin{equation}[R,S]=0 \;\; \Rightarrow \;\;\ [I_R,I_S]=0\label{h=s}\end{equation}
In this way, the ``Smith commutation'' \cite{S1976} implies the ``Huq commutation'' \cite{H}.
\subsection{Huq=Smith}
The converse is not necessarily true, even if $ \mathbb{E} $ is pointed protomodular, see Proposition \ref{notsh} below. When this is the case, we say that $ \mathbb{E} $ satisfies the {\rm (Huq=Smith)} condition. Any pointed strongly protomodular category satisfies {\rm (Huq=Smith)}, see \cite{BG1}. {\rm (Huq=Smith)} is true for $\mathsf{Gp}$ by the following straightforward:
\begin{proposition}\label{SHGp}
Let $(R,S)$ be a pair of equivalence relations in $\mathsf{Gp}$ on the group $(G,*)$. The following conditions are equivalent:\\
{\rm (1)} $[I_R,I_S]=0$;\\
{\rm (2)} $p(x,y,z)=x*y^{-1}*z$ defines a group homomorphism $p: R {\overrightarrow\times}_{\!\! G} S \to G$;\\
{\rm (3)} $[R,S]=0$.
\end{proposition}
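For instance, for $R=S=\nabla_G$ the indiscrete equivalence relation, we have $I_{\nabla_G}=G$ and $R {\overrightarrow\times}_{\!\! G} S=G\times G\times G$, and the proposition says that $[\nabla_G,\nabla_G]=0$ if and only if $p(x,y,z)=x*y^{-1}*z$ is a homomorphism on $G\times G\times G$, i.e.\ if and only if the group $G$ is abelian.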
\begin{proposition}\label{notsh}
The category $\mathsf{DiGp}$ of digroups does not satisfy {\rm (Huq=Smith)}.
\end{proposition}
\proof
We can use the counterexample introduced in \cite{Normal} for another purpose. Start with an abelian group $(A,+)$ and an element $a$ such that $-a\neq a$. Then define $\theta: A\times A \to A\times A$ as the involutive bijection which leaves fixed every element $(x,y)$ except $(a,a)$, which is exchanged with $(-a,a)$. Then define the group structure $(A\times A,\circ)$ on $A\times A$ as the transform of $(A\times A,+)$ along $\theta$. So, we get:
$$ (x,z)\circ(x',z')=\theta(\theta(x,z)+\theta(x',z'))$$
Clearly we have $(a,a)^{-\circ}=(a,-a)$. Since the second projection $\pi:A\times A \to A$ is such that $\pi\theta=\pi$, we get a digroup homomorphism $\pi: (A\times A,+,\circ)\to (A,+,+)$ whose kernel map is, up to isomorphism, $\iota_A: (A,+,+) \rightarrowtail (A\times A,+,\circ)$ defined by $\iota_A(x)=(x,0)$. The commutativity of the law $+$ makes $[\iota_A,\iota_A]=0$ inside $\mathsf{DiGp}$. We are going to show that, however, we do not have $[R[\pi],R[\pi]]=0$. If it were the case, according to the previous proposition and considering the images by $U_0$ and $U_1$ of the desired ternary operation, we should have, for any triple $(x,y)R[\pi](x',y)R[\pi](x'',y)$:
$$(x,y)-(x',y)+(x'',y)=(x,y)\circ(x',y) ^{-\circ}\circ(x'',y)$$
namely $(x,y)\circ(x',y) ^{-\circ}\circ(x'',y)=(x-x'+x'',y)$.
Now take $y=a=x'$ and $a\neq x \neq -a$. Then we get:\\ $(x,a)\circ(a,a) ^{-\circ}\circ(x'',a)=(x,a)\circ(a,-a)\circ(x'',a)=(x+a,0)\circ(x'',a)$\\ $=(x+a+x'',a)$, if moreover $a\neq x'' \neq -a$. Now, clearly we get $x+a+x''\neq x-a+x''$ since $a\neq -a$.
\endproof
However we have the following very general observation:
\begin{proposition}
Let $ \mathbb{E} $ be any pointed Mal'tsev category satisfying {\rm (Huq=Smith)}. Then so is any functor category $\mathcal F( \mathbb{C} , \mathbb{E} )$.
\end{proposition}
\proof
Let $(R,S)$ be a pair of equivalence relations on an object $F\in \mathcal F( \mathbb{C} , \mathbb{E} )$. We have $[R,S]=0$ if and only if for each object $C\in \mathbb{C} $ we have $[R(C),S(C)]=0$ since, by Lemma \ref{func}, the naturality follows. In the same way, if $(u,v)$ is a pair of subfunctors of $F$, we have $[u,v]=0$ if and only if for each object $C\in \mathbb{C} $ we have $[u(C),v(C)]=0$. Suppose now that $ \mathbb{E} $ satisfies {\rm (Huq=Smith)}, and that $[I_R,I_S]=0$. So, for each object $C\in \mathbb{C} $ we have $[I_R(C),I_S(C)]=0$, which implies $[R(C),S(C)]=0$. Accordingly $[R,S]=0$.
\endproof
Let $ \mathbb{T} $ be any finitary algebraic theory, and denote by $ \mathbb{T} ( \mathbb{E} )$ the category of internal $ \mathbb{T} $-algebras in $ \mathbb{E} $. Let us recall that, given any variety of algebras $ \mathbb{V} ( \mathbb{T} )$, we have a {\em Yoneda embedding for the internal $ \mathbb{T} $-algebras}, namely a left exact fully faithful factorization of the Yoneda embedding for $ \mathbb{E} $:
\[\xymatrix@C=2pc@R=2pc{ \mathbb{T} ( \mathbb{E} ) \ar@{-->}[rr]^{\bar Y_{ \mathbb{T} }} \ar[d]_{\mathcal U_{ \mathbb{T} }} && \mathcal F(\mathbb E^{op}, \mathbb{V} ( \mathbb{T} )) \ar[d]^{\mathcal F( \mathbb E^{op}, \mathcal U)} \\
\mathbb{E} \ar[rr]_Y && \mathcal F(\mathbb E^{op},Set)}\]
where $\mathcal U: \mathbb{V} ( \mathbb{T} ) \to Set$ is the canonical forgetful functor.
\begin{theorem}\label{TTE}
Let $ \mathbb{T} $ be any finitary algebraic theory
such that the associated variety of algebras $ \mathbb{V} ( \mathbb{T} )$ is pointed protomodular. If $ \mathbb{V} ( \mathbb{T} )$ satisfies {\rm (Huq=Smith)}, so does
any category $ \mathbb{T} ( \mathbb{E} )$.
\end{theorem}
\proof
If $ \mathbb{V} ( \mathbb{T} )$ satisfies {\rm (Huq=Smith)}, so does $\mathcal F(\mathbb E^{op}, \mathbb{V} ( \mathbb{T} ))$ by the previous proposition. Accordingly, $\bar Y_{ \mathbb{T} }$ being left exact and fully faithful, so does $ \mathbb{T} ( \mathbb{E} )$.
\endproof
\subsection{Any category $\mathsf{SKB} \mathbb{E} $ does satisfy {\rm (Huq=Smith)}}
\begin{proposition}\label{U,V}
Given any pair $(U,V)$ of subobjects of $X$ in $\mathsf{SKB}$, the following conditions are equivalent:\\
{\rm (1)} $[U,V]=0$;\\
{\rm (2)} for all $(u,v)\in U\times V$, we get $u\circ v=u*v$ and this restriction is commutative;\\
{\rm (3)} for all $(u,v)\in U\times V, \; \lambda_u(v)=v$, $[U_0(U),U_0(V)]=0$ and $[U_1(U),U_1(V)]=0$.\\
Accordingly, an abelian object in $\mathsf{SKB}$ is necessarily of the form $(A,+,+)$ with $(A,+)$ abelian.
\end{proposition}
\proof
Straightforward, setting $\varphi(u,v)=u+v$ and using an Eckmann-Hilton argument.
\endproof
\begin{proposition}[$\mathsf{SKB}$ does satisfy {\rm (Huq=Smith)}]
Let $R$ and $S$ be two equivalence relations on an object $X\in \mathsf{SKB}$. The following conditions are equivalent:\\
{\rm (1)} $[I_R,I_S]=0$;\\
{\rm (2)} $[U_0(I_R),U_0(I_S)]=0$, $[U_1(I_R),U_1(I_S)]=0$ and
$x*y^{-*}*z=x\circ y^{-\circ}\circ z$ for all $xRySz$;\\
{\rm (3)} $[R,S]=0$.
\end{proposition}
\proof
The identity $x*y^{-*}*z=x\circ y^{-\circ}\circ z$ is equivalent to
$$y^{-\circ}\circ z=x^{-\circ}\circ(x*y^{-*}*z)=(x^{-\circ}\circ x)*(x^{-\circ})^{-*}*(x^{-\circ}\circ(y^{-*}*z))=(x^{-\circ})^{-*}*(x^{-\circ}\circ(y^{-*}*z)),$$ which, in turn, is equivalent to $$\lambda_{x^{-\circ}}(y^{-*}*z)=y^{-\circ}\circ z.$$
Suppose $xRySy$. Setting $z=y*v,\; v\in I_S$, this is equivalent to $\lambda_{x^{-\circ}}(v)=y^{-\circ}\circ (y*v)=\lambda_{y^{-\circ}}(v)$ by (\ref{doublech}). This in turn is equivalent to $\lambda_{y}\circ \lambda_{x^{-\circ}}(v)=\lambda_{y\circ x^{-\circ}}(v)=v$, $v\in I_S$.
Setting $y=u \circ x ,\; u\in I_R$, this is equivalent to $\lambda_u(v)=v$, $(u,v)\in I_R\times I_S$.
Now, by Proposition \ref{U,V}, $[I_R,I_S]=0$ is equivalent to: for all $(u,v)\in I_R\times I_S$, we get $\lambda_u(v)=v$, $[U_0(U),U_0(V)]=0$ and $[U_1(U),U_1(V)]=0$. So we get $[1) \iff 2)]$.
Suppose (2). From $[U_0(U),U_0(V)]=0$, we know by Proposition \ref{U,V} that $p(x,y,z)=x*y^{-*}*z$ is a group homomorphism $(R {\overrightarrow\times}_X S,*)\to (X,*)$, and from $[U_1(U),U_1(V)]=0$ that $q(x,y,z)=x\circ y^{-\circ}\circ z$ is a group homomorphism $(R {\overrightarrow\times}_X S,\circ)\to (X,\circ)$. If $p=q$, this produces the desired $R {\overrightarrow\times}_{\!\! X} S \to X$ in $\mathsf{SKB}$ showing that $[R,S]=0$. Whence $[(2)\Rightarrow (3)]$. We have already noticed that the last implication $[(3)\Rightarrow (1)]$ holds in any pointed category.
\endproof
According to Theorem \ref{TTE}, we get the following:
\begin{corollary}\label{SHGpE}
Given any category $ \mathbb{E} $, the category $\mathsf{SKB} \mathbb{E} $ satisfies {\rm (Huq=Smith)}. This is the case in particular for the category $\mathsf{SKB} Top$ of topological skew braces.
\end{corollary}
\subsection{Homological aspects of commutators}
\subsubsection{Abstract Huq commutator}
Suppose now that $ \mathbb{E} $ is any finitely cocomplete regular unital category.
In this setting, we gave in \cite{B10}, for any pair $u: U \rightarrowtail X$, $v : V \rightarrowtail X$ of subobjects, the construction of a regular epimorphism $\psi_{(u,v)}$ which universally makes their direct images cooperate. Indeed, consider the following diagram, where $Q[u,v]$ is the colimit of the plain arrows:
$$
\xymatrix@=25pt{
& U \ar@{ >->}[dl]_{l_U} \ar@{ >->}[dr]^{u} & & \\
U \times V \ar@{.>}[r]_{\bar{\psi}_{(u,v)}} & Q[u,v] & X \ar@{.>}[l]^{\psi_{(u,v)}} \\
& V \ar@{ >->}[ul]^{r_V} \ar@{ >->}[ur]_{v} & &
}
$$
The map $\psi_{(u,v)}$ is necessarily a regular epimorphism and the map $\bar{\psi}_{(u,v)}$ induces the cooperator of the direct images of the pair $(u,v)$ along $\psi_{(u,v)}$. This regular epimorphism $\psi_{(u,v)}$ measures the lack of cooperation of the pair $(u,v)$ in the sense that the map $\psi_{(u,v)}$ is an isomorphism if and only if $[u,v]=0$. We then get a symmetric tensor product: $I_{R[\psi_{(-,-)}]}: \mathsf{Mon}_X\times \mathsf{Mon}_X \to \mathsf{Mon}_X$ of preordered sets.
Since the map $\psi_{(u,v)}$ is a regular epimorphism, its distance from being an isomorphism is its distance from being a monomorphism, which is measured by the kernel equivalence relation $R[\psi_{(u,v)}]$. Accordingly, in the homological context, it is meaningful to introduce the following definition, see also \cite{MM}:
\begin{definition}
Given any finitely cocomplete homological category $ \mathbb{E} $ and any pair $(u,v)$ of subobjects of $X$, their abstract Huq commutator {\rm $[u,v]$} is defined as $I_{R[\psi_{(u,v)}]}$ or equivalently as the kernel map $k_{\psi_{(u,v)}}$.
\end{definition}
By this universal definition, in the category $\mathsf{Gp}$, this $[u,v]$ coincides with the usual $[U,V]$.
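To illustrate this in a familiar situation (a sketch for $\mathsf{Gp}$, with $U$ and $V$ normal subgroups of $X$): two subgroups cooperate in a quotient of $X$ precisely when their images commute elementwise, so the universal regular epimorphism above is
$$\psi_{(u,v)}\colon X\twoheadrightarrow X/[U,V],$$
and the kernel map $k_{\psi_{(u,v)}}$ is the inclusion $[U,V]\rightarrowtail X$.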
\subsubsection{Abstract Smith commutator}\label{asc}
Suppose $ \mathbb{E} $ is a regular category. Then, given any regular epimorphism $f: X \twoheadrightarrow Y$ and any equivalence relation $R$ on $X$, the direct image $f(R) \rightarrowtail Y\times Y$ of $R \rightarrowtail X\times X$ along the regular epimorphism $f\times f: X\times X \twoheadrightarrow Y\times Y$ is reflexive and symmetric, but generally not transitive. Now, when $ \mathbb{E} $ is a regular Mal'tsev category, this direct image $f(R)$, being a reflexive relation, is an equivalence relation.
Suppose moreover that $ \mathbb{E} $ is finitely cocomplete. Let $(R,S)$ be a pair of equivalence relations on $X$,
and consider the following diagram, where $Q[R,S]$ is the colimit of the plain arrows:
$$
\xymatrix@=25pt{
& R \ar@{ >->}[dl]_{l_R} \ar[dr]^{d_{0,R}} & & \\
R \times_X S \ar@{.>}[r]_{\bar{\chi}_{(R,S)}} & Q[R,S] & X \ar@{.>}[l]^{\chi_{(R,S)}} \\
& S \ar@{ >->}[ul]^{r_S} \ar[ur]_{d_{1,S}} & &
}
$$
Notice that, here, in consideration of the pullback defining $R \overrightarrow{\times}_X S$, the roles of the projections $d_0$ and $d_1$ have been interchanged. This map $\chi_{(R,S)}$ measures the lack of connection between $R$ and $S$, see \cite{B10}:
\begin{theorem}
Let $ \mathbb{E} $ be a finitely cocomplete regular Mal'tsev category. Then the map $\chi_{(R,S)}$ is a regular epimorphism and is the universal one which makes the direct images $\chi_{(R,S)}(R)$ and $\chi_{(R,S)}(S)$ connected. The equivalence relations $R$ and $S$ are connected (i.e. $[R,S]=0$) if and only if $\chi_{(R,S)}$ is an isomorphism.
\end{theorem}
Since the map $\chi_{(R,S)}$ is a regular epimorphism, its distance from being an isomorphism is its distance from being a monomorphism, which is exactly measured by its kernel equivalence relation $R[\chi_{(R,S)}]$. Accordingly, we give the following definition:
\begin{definition}
Let $ \mathbb{E} $ be any finitely cocomplete regular Mal'tsev category. Given any pair $(R,S)$ of equivalence relations on $X$, their abstract Smith commutator $[R,S]$ is defined as the kernel equivalence relation $R[\chi_{(R,S)}]$ of the map $\chi_{(R,S)}$.
\end{definition}
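As a basic illustration of this definition: for any equivalence relation $S$ on $X$, the pair $(\Delta_X,S)$ is always connected, a connector being given by
$$p\colon \Delta_X\times_X S\to X, \qquad p(x,x,z)=z;$$
accordingly $\chi_{(\Delta_X,S)}$ is an isomorphism and $[\Delta_X,S]=\Delta_X$.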
In this way, we define a symmetric tensor product $[-,-]=R[\chi_{(-,-)}]: \ensuremath{\mathrm{Equ}}_X\times \ensuremath{\mathrm{Equ}}_X \to \ensuremath{\mathrm{Equ}}_X$ of preordered sets. It is clear that, with this definition, we get $[R,S]=0$ in the sense of connected pairs if and only if $[R,S]=\Delta_X$ (the identity equivalence relation on $X$) in the sense of this new definition. This is coherent since $\Delta_X$ is effectively the $0$ of the preorder $ \ensuremath{\mathrm{Equ}}_X$. Let us recall the following:
\begin{proposition}\label{dim}
Let $ \mathbb{E} $ be a pointed regular Mal'tsev category. Let $f : X \twoheadrightarrow Y$ be a regular epimorphism and $R$ an equivalence relation on $X$. Then the direct image $f(I_R)$ of the normal subobject $I_R$ along $f$ is $I_{f(R)}$.
\end{proposition}
From that, we can assert the following:
\begin{proposition}
Let $ \mathbb{E} $ be a finitely cocomplete homological category. Given any pair $(R,S)$ of equivalence relations on $X$, we have {\em $[I_R,I_S] \subset I_{[R,S]}$}.
\end{proposition}
\proof
From (\ref{h=s}), we get $$ [\chi_{(R,S)}(R),\chi_{(R,S)}(S)]=0 \;\; \Rightarrow \;\; [I_{\chi_{(R,S)}(R)},I_{\chi_{(R,S)}(S)}]=0 $$
By the previous proposition we have: $$0=[I_{\chi_{(R,S)}(R)},I_{\chi_{(R,S)}(S)}]=[ \chi_{(R,S)}(I_R),\chi_{(R,S)}(I_S)].$$
Accordingly, by the universal property of the regular epimorphism $\psi_{(I_R,I_S)}$ we get a factorization:
$$
\xymatrix@=20pt{
X \ar@{->>}[rr]^{\psi_{(I_R,I_S)}} \ar@{->>}[rrd]_{\chi_{(R,S)}}&& Q[I_R,I_S] \ar@{.>}[d]\\
&& Q[R,S]
}
$$
which shows that $[I_R,I_S]\subset I_{[R,S]}$.
\endproof
\begin{theorem}
In a finitely cocomplete homological category $ \mathbb{E} $
the following conditions are equivalent:\\
{\rm (1)} $ \mathbb{E} $ satisfies {\rm (Huq=Smith)};\\
{\rm (2)} {\em $[I_R,I_S]= I_{[R,S]}$} for any pair $(R,S)$ of equivalence relations on $X$.
Under any of these conditions, the regular epimorphisms $\chi_{(R,S)}$ and $\psi_{(I_R,I_S)}$ do coincide.
\end{theorem}
\proof
Suppose (2). Then $[I_R,I_S]=0$ means that $\psi_{(I_R,I_S)}$ is an isomorphism, so that $0=[I_R,I_S]= I_{[R,S]}$. In a homological category $I_{[R,S]}=0$ is equivalent to $[R,S]=0$. Conversely, suppose (1). We have to find a factorization:
$$
\xymatrix@=25pt{
X \ar@{->>}[rr]^{\psi_{(I_R,I_S)}} \ar@{->>}[rrd]_{\chi_{(R,S)}}&& Q[I_R,I_S] \\
&& Q[R,S] \ar@{.>}[u]
}
$$
namely to show that $[\psi_{(I_R,I_S)}(R),\psi_{(I_R,I_S)}(S)]=0$. By (1) this is equivalent to $0=[I_{\psi_{(I_R,I_S)}(R)},I_{\psi_{(I_R,I_S)}(S)}]$, namely to $0=[\psi_{(I_R,I_S)}(I_R),\psi_{(I_R,I_S)}(I_S)]$ by Proposition \ref{dim}. This is true by the universal property of the regular epimorphism $\psi_{(I_R,I_S)}$.
\endproof
\subsection{Skew braces and their commutators}
Since the categories $\mathsf{SKB}$ and $\mathsf{SKB} Top$ are finitely cocomplete homological categories, all the results of the previous section concerning commutators do apply and, in particular, thanks to the {\rm (Huq=Smith)} condition, the two notions of commutator are equivalent. It remains now to make explicit the description of the Huq commutator.
We will determine a set of generators for the Huq commutator of two ideals in a skew brace:
\begin{proposition}\label{2.5} If $I$ and $J$ are two ideals of a left skew brace $(A,*,\circ)$, their Huq commutator $[I,J]$ is the ideal of $A$ generated by the union of the following three sets: \\ \noindent {\rm (1)} the set $\{\, i\circ j\circ(j\circ i)^{-\circ}\mid i\in I,\ j\in J\,\}$, (which generates the commutator $[I,J]_{(A,\circ)}$ of the normal subgroups $I$ and $J$ of the group $(A,\circ)$); \\ \noindent {\rm (2)} the set $\{\, i* j*(j*i)^{-*}\mid i\in I,\ j\in J\,\}$, (which generates the commutator $[I,J]_{(A,*)}$ of the normal subgroups $I$ and $J$ of the group $(A,*))$; and \\ \noindent {\rm (3)} the set $\{\,(i\circ j)*(i* j)^{-*}
\mid i\in I,\ j\in J\,\}$.\end{proposition}
\begin{proof} Assume that the mapping $\mu\colon I\times J\to A/K$, $\mu(i,j)=i*j*K$ is a skew brace morphism for some ideal $K$ of $A$. Then $$\begin{array}{l} (i\circ j)\circ K=(i\circ K)\circ(j\circ K)=(i* K)\circ(j*K)=\\ \qquad=\mu(i,1)\circ\mu(1,j)=\mu((i,1)\circ(1,j))=\mu(i,j)=\mu((1,j)\circ(i,1))=\\ \qquad=\mu(1,j)\circ\mu(i,1)=(j*K)\circ(i*K)=(j\circ K)\circ(i\circ K)=(j\circ i)\circ K.\end{array}$$ This proves that the set $(1)$ is contained in $K$.
Similarly, $$\begin{array}{l}(i* j)* K=(i* K)*(j* K)=\mu(i,1)*\mu(1,j)=\mu((i,1)*(1,j))=\mu(i,j)=\\ \qquad=\mu((1,j)*(i,1))=\mu(1,j)*\mu(i,1)=(j*K)*(i*K)=(j*i)*K.\end{array}$$ Thus the set $(2)$ is contained in $K$.
Also, $$\begin{array}{l}(i\circ j)*K=(i\circ j)\circ K=(i\circ K)\circ(j\circ K)=(i*K)\circ(j*K)= \\ \qquad=\mu(i,1)\circ\mu(1,j)=\mu((i,1)\circ(1,j))=\mu(i,j)=\mu((i,1)*(1,j))=\\ \qquad=\mu(i,1)*\mu(1,j)=(i*K)*(j*K)=(i*j)*K.\end{array}$$ Hence the set (3) is also contained in $K$.
Conversely, let $K$ be the ideal of $A$ generated by the union of the three sets. It is then very easy to check that the mapping $\mu\colon I\times J\to A/K$, $\mu(i,j)=i*j*K$ is a skew brace morphism.\end{proof}
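As a simple sanity check (not needed in what follows): when the skew brace is trivial, i.e.\ $\circ=*$, we have $\lambda_a=\mathrm{id}$ for every $a\in A$, the set {\rm (3)} reduces to $\{1\}$ and the sets {\rm (1)} and {\rm (2)} coincide, so that
$$[I,J]=[I,J]_{(A,*)},$$
the usual commutator of the two normal subgroups $I$ and $J$ of the group $(A,*)$.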
In the literature, great attention has been paid to the study of the product $I\cdot J$ of two ideals $I,J$ of a (left skew) brace $(A,*,\circ)$. This product is taken with respect to the operation $\cdot$ on the brace $A$ defined, for every $x,y\in A$, by $x\cdot y=y^{-*}*\lambda_x(y)$. Then, for every $i\in I$ and $j\in J$, $i\cdot j=j^{-*}*\lambda_i(j)=j^{-*}*i^{-*}*(i\circ j)=(i*j)^{-*}*(i\circ j)$. Hence the ideal of $A$ generated by the set $I\cdot J$ of all products $i\cdot j$ coincides with the ideal of $A$ generated by the set (3) in the statement of Proposition~\ref{2.5}.
Clearly, for a left skew brace $A$, the Huq commutator $[I,J]$ is equal to the Huq commutator $[J,I]$. Also, $I\cdot J=(J\cdot I)^{-*}$, so that the left annihilator of $I$ in $(A,\cdot)$ is equal to the right annihilator of $I$ in $(A,\cdot)$. Moreover, the condition ``$I\cdot J=0$'' can be equivalently expressed as ``$J$ is contained in the kernel of the group homomorphism $\lambda|^I\colon (A,\circ)\to\mathsf{Aut}(I,*)$''.
\begin{proposition} For an ideal $I$ of a left skew brace $A$, there is a largest ideal of $A$ that centralizes $I$ (the {\em centralizer} of $I$).\end{proposition}
\begin{proof} The zero ideal centralizes $I$ and the union of a chain of ideals that centralize $I$ centralizes $I$. Hence there is a maximal element in the set of all the ideals of $A$ that centralize $I$. Now if $J_1$ and $J_2$ are two ideals of $A$, then $J_1*J_2=J_1\circ J_2$ is the join of $\{J_1,J_2\}$ in the lattice of all ideals of $A$. Now $J_1$ centralizes $I$ if and only if (1) $J_1\subseteq C_{(A,*)}(I)$, the centralizer of the normal subgroup $I$ in the group $(A,*)$; (2) $J_1\subseteq C_{(A,\circ)}(I)$, the centralizer of the normal subgroup $I$ in the group $(A,\circ)$; and (3) $J_1$ is contained in the kernel of the group morphism $\lambda|^I\colon (A,\circ)\to\mathsf{Aut}(I,*)$, which is a normal subgroup of $(A,\circ)$. Similarly for $J_2$. Hence if both $J_1$ and $J_2$ centralize $I$, then $J_1*J_2\subseteq C_{(A,*)}(I)$, and $J_1\circ J_2\subseteq C_{(A,\circ)}(I)\cap\ker\lambda|^I$. Therefore $J_1*J_2=J_1\circ J_2$ centralizes $I$. It follows that the set of all the ideals of $A$ that centralize $I$ is a lattice. Hence the maximal element in the set of all the ideals of $A$ that centralize $I$ is the largest element in that set.\end{proof}
In particular, the centralizer of the improper ideal of a left skew brace $A$ is the {\em center} of $A$.
A description of the free left skew brace over a set $X$ is available, in a language very different from ours, in \cite{Orza}.
\end{document}
|
\begin{document}
\subjclass[2010]{76W05}
\keywords{compressible MHD system, global solution.}
\title[compressible magnetohydrodynamic system]{On some large global solutions for the compressible magnetohydrodynamic system}
\author[J. Li]{Jinlu Li}
\address{School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou 341000, China}
\email{[email protected]}
\author[Y. Yu]{Yanghai Yu}
\address{School of Mathematics and Statistics, Anhui Normal University, Wuhu, Anhui, 241002, China}
\email{[email protected]}
\author[W. Zhu]{Weipeng Zhu}
\address{Department of Mathematics, Sun Yat-sen University, Guangzhou, 510275, China}
\email{[email protected]}
\begin{abstract}
In this paper we consider the global well-posedness of compressible magnetohydrodynamic system in $\mathbb{R}^d$ with $d\geq2$, in the framework of the critical Besov spaces. We can show that if the initial data, the shear viscosity and the magnetic diffusion coefficient are small comparing with the volume viscosity, then compressible magnetohydrodynamic system has a unique global solution.
\end{abstract}
\maketitle
\section{Introduction}
The present paper is devoted to the equations of magnetohydrodynamics (MHD) which describe the motion of electrically conducting fluids in the presence of a magnetic field. The barotropic compressible magnetohydrodynamic system can be written as
\begin{equation}\label{1.1}\begin{cases}
\partial_t\rho+\mathrm{div}(\rho u)=0,\; &\mathrm{in}\quad \mathbb{R}^+\times\mathbb{R}^d\\
\partial_t(\rho u)+\mathrm{div}(\rho u\otimes u)+\nabla P(\rho)=b\cdot \nabla b-\frac12\nabla(|b|^2)+\mu \Delta u
\\ \qquad+\nabla ((\mu+\lambda)\mathrm{div} u),\; &\mathrm{in}\quad \mathbb{R}^+\times\mathbb{R}^d\\
\partial_tb+(\mathrm{div} u)b+u\cdot \nabla b-b\cdot \nabla u-\nu\Delta b=0,\; &\mathrm{in}\quad \mathbb{R}^+\times\mathbb{R}^d\\
\mathrm{div} b=0,\; &\mathrm{in}\quad \mathbb{R}^+\times\mathbb{R}^d
\end{cases}\end{equation}
where $\rho=\rho(t, x)\in \mathbb{R}^+$ denotes the density, $u=u(t,x)\in\mathbb{R}^d$ and $b=b(t,x)\in\mathbb{R}^d$ stand for the velocity field and the magnetic field, respectively. The barotropic assumption means that the pressure $P=P(\rho)$ is given and assumed to be strictly increasing. The constant $\nu > 0$ is the resistivity acting as the magnetic diffusion coefficient of the magnetic field. The shear and volume viscosity coefficients $\mu$ and $\lambda$ are constant and fulfill the standard strong parabolicity assumption:
\begin{equation}
\mu>0, \qquad \kappa=\lambda+2\mu>0.
\end{equation}
To complete the system (1.1), the initial data are supplemented by
\begin{align}\label{1.2}
(u,b,\rho)(t,x)|_{t=0}=(u_{0}(x),b_{0}(x),\rho_{0}(x))
\end{align}
and also, as the space variable tends to infinity, we assume
\begin{align}\label{1.3}
\lim\limits_{|x|\to\infty}\rho_0(x)=1.
\end{align}
The MHD system arises in various contexts such as the evolution and dynamics of astrophysical objects, thermonuclear fusion, metallurgy and semiconductor crystal growth, see for example \cite{Cabannes1970,Landau1960}. Roughly speaking, the system \eqref{1.1} is a coupling of the compressible Navier-Stokes equations with the equations for the magnetic field (heat equations). On the other hand, notice that when $b \equiv 0$, system \eqref{1.1} reduces to the usual compressible Navier-Stokes system for barotropic fluids. Due to its physical importance, complexity, rich phenomena and mathematical challenges, there is a huge literature on the study of the compressible MHD problem \eqref{1.1} by many physicists and mathematicians, see for example \cite{Cabannes1970,Chen1 2002,Chen2 2003,Ducomet2006,Fan1 2007,Fan2 2008,Fan3 2009,Hu1 2008,Hu2 2010,Kawashima1984,Landau1960,Strohmer1990,Umeda 1984,vol 1972,Wang 2010,Zhang 2010} and the references therein. Now, we briefly recall some results concerned
with the multi-dimensional compressible MHD equations in the absence of vacuum, which are most closely related to our problem. Kawashima \cite{Kawashima1984} established the local and global well-posedness of the solutions to
the compressible MHD equations when the initial density is strictly positive; see also Vol'pert-Khudiaev \cite{Vol1972} and Strohmer \cite{Strohmer1990} for local existence results. To capture the scaling invariance of the system, Danchin first introduced in his series of papers \cite{Danchin 2000,Danchin1 2001,Danchin2 2001,Danchin2007,Danchin2016} the ``Critical Besov Spaces'', which were inspired by the corresponding efforts on the incompressible Navier-Stokes equations. Recently, Danchin et al.\ proved in \cite{Danchin 2017} that the compressible Navier-Stokes system converges to the homogeneous incompressible one when the volume viscosity is large. Motivated by this, the main goal of the present paper is to extend this result from the compressible Navier-Stokes system to the MHD system. That is, we will prove the global existence of strong solutions to \eqref{1.1} for a class of large initial data. We notice that if $\kappa$ tends to $+\infty$, then the velocity field and the magnetic field will satisfy the incompressible MHD system:
\begin{equation}\label{1.4}\begin{cases}
\partial_tU+U\cdot\nabla U-\mu \Delta U+\nabla \Pi-B\cdot \nabla B-\frac12\nabla(|B|^2)=0,\\
\partial_tB+U\cdot \nabla B-B\cdot \nabla U-\nu \Delta B=0,\\
\mathrm{div} U=\mathrm{div} B=0, \qquad (U,B)|_{t=0}=(U_0,B_0)
\end{cases}\end{equation}
with $U_0=\mathcal{P} u_0$ and $B_0=b_0$. Here, the projectors $\mathcal{P}$ and $\mathcal{Q}$ are defined by
$$\mathcal{P}:=\mathrm{Id}+(-\Delta)^{-1}\nabla \mathrm{div}, \qquad \mathcal{Q}:=-(-\Delta)^{-1}\nabla \mathrm{div}.$$
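For later reference, let us also record the Fourier description of these projectors (a routine check, under the usual convention $\widehat{\partial_k f}(\xi)=i\xi_k\hat{f}(\xi)$):
$$\widehat{\mathcal{Q} u}(\xi)=\frac{\xi\,(\xi\cdot \hat{u}(\xi))}{|\xi|^2}, \qquad \widehat{\mathcal{P} u}(\xi)=\hat{u}(\xi)-\frac{\xi\,(\xi\cdot \hat{u}(\xi))}{|\xi|^2},$$
so that $\mathcal{P}+\mathcal{Q}=\mathrm{Id}$, $\mathrm{div}\, \mathcal{P} u=0$ and $\mathcal{Q} u$ is a gradient.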
Our main result can be stated as follows:
\begin{theo}\label{th1}
Assume that $d\geq 2$, $u_0\in \dot{B}^{\frac d2-1}_{2,1}(\mathbb{R}^d)$ and $a_0:=\rho_0-1\in \dot{B}^{\frac d2-1}_{2,1}(\mathbb{R}^d)\cap \dot{B}^{\frac d2}_{2,1}(\mathbb{R}^d)$. Suppose that \eqref{1.4} generates a unique global solution $(U,B)\in \mathcal{C}(\mathbb{R}^+;\dot{B}^{\frac d2-1}_{2,1}(\mathbb{R}^d))$ satisfying $U_0:=\mathcal{P} u_0$ and $B_0=b_0$. Let $C$ be a large universal constant and denote
\begin{align}\begin{split}\label{1.5}
&M:=||U,B||_{L^\infty(\mathbb{R}^+;\dot{B}^{\frac d2-1}_{2,1})}+||U_t,B_t,\mu\nabla^2 U,\nu\nabla^2 B||_{L^1(\mathbb{R}^+;\dot{B}^{\frac d2-1}_{2,1})},
\\&D_0:=Ce^{C(1+\mu^{-1}+\nu^{-1})(M+1)^2}\big(||a_0,\mathcal{Q} v_0||_{\dot{B}^{\frac d2-1}_{2,1}}+\kappa||a_0||_{\dot{B}^{\frac d2}_{2,1}}+1\big), \quad
\\&\delta_0:=Ce^{2C(1+\mu^{-2}+\nu^{-2})(M+1)^2}\big(\kappa^{-1}D^2_0+\kappa^{-\frac12}D_0\big).
\end{split}\end{align}
If $\kappa$ is large enough and $||a_0||_{\dot{B}^{\frac d2}_{2,1}}$ is small enough such that
$$\kappa^{-1}D_0\ll1, \quad \delta_0(\frac1\mu+\frac1\nu+1)\leq \frac12, $$
then \eqref{1.1} has a unique global-in-time solution $(\rho,u,b)$ which satisfies
\begin{align}\label{1.6}\begin{split}
&u,b\in \mathcal{C}(\mathbb{R}^+;\dot{B}^{\frac d2-1}_{2,1})\cap L^1(\mathbb{R}^+;\dot{B}^{\frac d2+1}_{2,1}),\\&
a:=\rho-1\in \mathcal{C}(\mathbb{R}^+;\dot{B}^{\frac d2-1}_{2,1}\cap \dot{B}^{\frac d2}_{2,1})\cap L^2(\mathbb{R}^+;\dot{B}^{\frac d2}_{2,1}).
\end{split}\end{align}
\end{theo}
\begin{rema}
If $d=2$, according to Proposition \ref{pr-mhd}, we can set
$$M:=C||U_0,B_0||_{\dot{B}^0_{2,1}}\exp\Big(C(\frac{1}{\mu^4}+\frac{1}{\nu^4})||U_0,B_0||^4_{L^2}\Big).$$
From Theorem \ref{th1}, we deduce that the system \eqref{1.1} has a unique global-in-time solution without any smallness condition on the initial data. On the other hand, our result improves the previous one due to Danchin et al., who considered the compressible Navier-Stokes system in \cite{Danchin 2017}.
\end{rema}
\section{Littlewood-Paley analysis}
In this section, we recall the Littlewood-Paley theory, the definition of homogeneous Besov spaces and some useful properties.
First, let us introduce the Littlewood-Paley decomposition. Choose a radial function $\varphi\in \mathcal{S}(\mathbb{R}^d)$ supported in $\tilde{\mathcal{C}}=\{\xi\in\mathbb{R}^d:\frac34\leq |\xi|\leq \frac83\}$ such that
\begin{align*}
\sum_{j\in \mathbb{Z}}\varphi(2^{-j}\xi)=1 \quad \mathrm{for} \ \mathrm{all} \ \xi\neq0.
\end{align*}
The frequency localization operators $\dot{\Delta}_j$ and $\dot{S}_j$ are defined by
\begin{align*}
\dot{\Delta}_jf=\varphi(2^{-j}D)f=\mathcal{F}^{-1}(\varphi(2^{-j}\cdot)\mathcal{F}f), \quad \dot{S}_jf=\sum_{k\leq j-1}\dot{\Delta}_kf \quad \mathrm{for} \quad j\in\mathbb{Z}.
\end{align*}
With a suitable choice of $\varphi$, one can easily verify that
\begin{align*}
\dot{\Delta}_j\dot{\Delta}_kf=0 \quad \mathrm{if} \quad |j-k|\geq2, \quad \dot{\Delta}_j(\dot{S}_{k-1}f\dot{\Delta}_kf)=0 \quad \mathrm{if} \quad |j-k|\geq5.
\end{align*}
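Let us briefly indicate why the first identity holds (a routine support argument): the Fourier transform of $\dot{\Delta}_jf$ is supported in the annulus $\{\frac34 2^j\leq|\xi|\leq\frac83 2^j\}$, and if $k\geq j+2$ then
$$\frac34\, 2^{k}\geq 3\cdot 2^{j}>\frac83\, 2^{j},$$
so the two annuli are disjoint and hence $\dot{\Delta}_j\dot{\Delta}_kf=0$.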
Now, we introduce the definition of the homogeneous Besov space. We denote by $\mathcal{Z}'(\mathbb{R}^d)$ the dual space of $\mathcal{Z}(\mathbb{R}^d)=\{f\in \mathcal{S}(\mathbb{R}^d);D^{\alpha}\hat{f}(0)=0,\ \forall \alpha\in \mathbb{N}^d\}$, which can be identified with the quotient space $\mathcal{S}'(\mathbb{R}^d)/\mathcal{P}$, where $\mathcal{P}$ denotes the space of polynomials. The formal equality $f=\sum\limits_{j\in \mathbb{Z}}\dot{\Delta}_jf$ holds true for $f\in\mathcal{Z}'(\mathbb{R}^d)$ and is called the homogeneous Littlewood-Paley decomposition.
The operators $\dot{\Delta}_j$ allow us to recall the definition of the homogeneous Besov space (see \cite{Bahouri2011}).
\begin{defi}
Let $s\in \mathbb{R}$, $1\leq p,r\leq \infty$. The homogeneous Besov space $\dot{B}^s_{p,r}$ is defined by
\begin{align*}
\dot{B}^s_{p,r}=\{f\in \mathcal{Z}'(\mathbb{R}^d):||f||_{\dot{B}^s_{p,r}}<+\infty\},
\end{align*}
where
\begin{align*}
||f||_{\dot{B}^s_{p,r}}\triangleq \Big|\Big|(2^{ks}||\dot{\Delta}_k f||_{L^p})_{k}\Big|\Big|_{\ell^r}.
\end{align*}
\end{defi}
\begin{rema}\label{re1}
Let $\mathcal{C}'$ be an annulus and $(u_j)_{j\in \mathbb{Z}}$ be a sequence of functions such that
$$Supp\ \hat{u}_j\subset 2^j\mathcal{C}' \quad and \quad ||(2^{js}||u_j||_{L^p})_{j\in \mathbb{Z}}||_{\ell^r}<\infty.$$
There exists a constant $C_s$ depending on $s$ such that
$$||u||_{\dot{B}^s_{p,r}}\leq C_s||(2^{js}||u_j||_{L^p})_{j\in \mathbb{Z}}||_{\ell^r}.$$
\end{rema}
Next, we collect from \cite{Bahouri2011} some useful lemmas about how products act on homogeneous Besov spaces.
\begin{lemm}\label{le1}
Let $s_1,s_2\leq \frac d2$, $s_1+s_2>0$ and $(f,g)\in\dot{B}^{s_1}_{2,1}(\mathbb{R}^d)\times\dot{B}^{s_2}_{2,1}(\mathbb{R}^d)$. Then we have
\begin{align*}
||fg||_{\dot{B}^{s_1+s_2-\frac d2}_{2,1}}&\leq C||f||_{\dot{B}^{s_1}_{2,1}}||g||_{\dot{B}^{s_2}_{2,1}}.
\end{align*}
\end{lemm}
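A typical instance of Lemma \ref{le1} used below is the choice $s_1=\frac d2$, $s_2=\frac d2-1$ (legitimate for $d\geq2$, since then $s_1+s_2=d-1>0$), which gives
$$||fg||_{\dot{B}^{\frac d2-1}_{2,1}}\leq C||f||_{\dot{B}^{\frac d2}_{2,1}}||g||_{\dot{B}^{\frac d2-1}_{2,1}}.$$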
\begin{lemm}\label{le2}
Assume that $F\in W^{[s]+2,\infty}_{loc}(\mathbb{R})$ with $F(0)=0$. Then for any $f\in L^\infty(\mathbb{R}^d)\cap \dot{B}^s_{2,1}(\mathbb{R}^d)$, we have
$$||F(f)||_{\dot{B}^s_{2,1}}\leq C(||f||_{L^\infty})||f||_{\dot{B}^s_{2,1}}.$$
\end{lemm}
\begin{lemm}\label{le3} For $(p,r_1,r_2,r)\in[1,\infty]^4$, $s_1\neq s_2$ and $\theta\in(0,1)$, the following interpolation inequality holds
$$\|u\|_{\dot{B}_{p,r}^{\theta s_1+(1-\theta)s_2}}\leq C\|u\|^{\theta}_{\dot{B}_{p,r_1}^{s_1}}\|u\|^{1-\theta}_{\dot{B}_{p,r_2}^{s_2}}.$$
\end{lemm}
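For instance, Lemma \ref{le3} with $\theta=\frac12$, $s_1=\frac d2-1$ and $s_2=\frac d2+1$ yields
$$\|a\|_{\dot{B}_{2,1}^{\frac d2}}\leq C\|a\|^{\frac12}_{\dot{B}_{2,1}^{\frac d2-1}}\|a\|^{\frac12}_{\dot{B}_{2,1}^{\frac d2+1}},$$
an inequality that will be used below.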
\begin{prop}\label{pr-mhd}
Let $U_0,B_0\in \dot{B}^{0}_{2,1}(\mathbb{R}^2)$ with $\mathrm{div} U_0=\mathrm{div} B_0=0$. Then there exists a unique solution to \eqref{1.4} such that
$$U,B\in \mathcal{C}(\mathbb{R}^+;\dot{B}^0_{2,1}(\mathbb{R}^2))\cap L^1(\mathbb{R}^+;\dot{B}^2_{2,1}(\mathbb{R}^2)).$$
Furthermore, there exists some universal constant $C$ such that, for all $T\geq 0$,
\begin{align*}
& ||U,B||_{L^\infty_T(\dot{B}^0_{2,1})}+||U_t,B_t,\mu \nabla^2 U,\nu\nabla^2 B||_{L^1_T(\dot{B}^0_{2,1})}\\& \qquad \leq C||U_0,B_0||_{\dot{B}^0_{2,1}}\exp\Big(C(\frac{1}{\mu^4}+\frac{1}{\nu^4})||U_0,B_0||^4_{L^2}\Big).
\end{align*}
\end{prop}
\begin{proof}
For any $t\in[0,T]$, the standard energy balance reads:
\begin{align*}
||U(t)||^2_{L^2}+||B(t)||^2_{L^2}+2\mu\int^t_0||\nabla U||^2_{L^2}+2\nu\int^t_0||\nabla B||^2_{L^2}=||U_0||^2_{L^2}+||B_0||^2_{L^2},
\end{align*}
which implies for all $T\geq 0$,
\begin{align}\label{2.1}
\mu^{\frac14}||U||_{L^4_T(\dot{B}^{\frac12}_{2,1})}+\nu^\frac14||B||_{L^4_T(\dot{B}^{\frac12}_{2,1})}\leq C||U_0,B_0||_{L^2}.
\end{align}
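Let us sketch how \eqref{2.1} follows from the energy balance (a standard interpolation argument): by Lemma \ref{le3}, together with $L^2\hookrightarrow\dot{B}^{0}_{2,\infty}$ and $||U||_{\dot{B}^{1}_{2,\infty}}\leq C||\nabla U||_{L^2}$, one has $||U||_{\dot{B}^{\frac12}_{2,1}}\leq C||U||^{\frac12}_{L^2}||\nabla U||^{\frac12}_{L^2}$, whence
$$\mu||U||^4_{L^4_T(\dot{B}^{\frac12}_{2,1})}\leq C||U||^2_{L^\infty_T(L^2)}\,\mu||\nabla U||^2_{L^2_T(L^2)}\leq C||U_0,B_0||^4_{L^2},$$
and similarly for $B$ with $\nu$ in place of $\mu$.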
From the estimates of the Stokes system in homogeneous Besov spaces, we have
\begin{align}\label{2.2}\begin{split}
&\quad ||U,B||_{L^\infty_T(\dot{B}^0_{2,1})}+||U_t,B_t,\mu \nabla^2 U,\nu \nabla^2 B||_{L^1_T(\dot{B}^0_{2,1})}
\\& \leq C\big(||U_0,B_0||_{\dot{B}^0_{2,1}}+||U\cdot \nabla U-B\cdot \nabla B||_{L^1_T(\dot{B}^0_{2,1})}+||B\cdot \nabla U-U\cdot \nabla B||_{L^1_T(\dot{B}^0_{2,1})}\big).
\end{split}\end{align}
In view of the interpolation inequality and Young's inequality, we deduce that
\begin{align}\label{2.3}\begin{split}
||U\cdot\nabla U||_{L^1_T(\dot{B}^0_{2,1})}&\leq C\int^T_0||U||_{\dot{B}^\frac12_{2,1}}||\nabla U||_{\dot{B}^\frac12_{2,1}}\mathrm{d} t
\\&\leq \int^T_0||U||_{\dot{B}^\frac12_{2,1}}||\nabla U||^{\frac14}_{\dot{B}^{-1}_{2,1}}||\nabla U||^{\frac34}_{\dot{B}^{1}_{2,1}}\mathrm{d} t
\\&\leq \frac{C}{\varepsilon^3\mu^3}\int^T_0||U||^{4}_{\dot{B}^{\frac12}_{2,1}}||U||_{\dot{B}^0_{2,1}}\mathrm{d} t+\varepsilon \mu||\nabla^2U||_{L^1_T(\dot{B}^0_{2,1})}.
\end{split}\end{align}
By a similar argument as in \eqref{2.3}, we obtain
\begin{align}\begin{split}\label{2.4}
&||B\cdot\nabla B||_{L^1_T(\dot{B}^0_{2,1})}\leq \frac{C}{\varepsilon^3\nu^3}\int^T_0||B||^{4}_{\dot{B}^{\frac12}_{2,1}}||B||_{\dot{B}^0_{2,1}}\mathrm{d} t+\varepsilon \nu||\nabla^2 B||_{L^1_T(\dot{B}^0_{2,1})},\\&
||U\cdot\nabla B||_{L^1_T(\dot{B}^0_{2,1})}\leq \frac{C}{\varepsilon^3\nu^3}\int^T_0||U||^{4}_{\dot{B}^{\frac12}_{2,1}}||B||_{\dot{B}^0_{2,1}}\mathrm{d} t+\varepsilon \nu||\nabla^2 B||_{L^1_T(\dot{B}^0_{2,1})},\\&
||B\cdot\nabla U||_{L^1_T(\dot{B}^0_{2,1})}
\leq \frac{C}{\varepsilon^3\mu^3}\int^T_0||B||^{4}_{\dot{B}^{\frac12}_{2,1}}||U||_{\dot{B}^0_{2,1}}\mathrm{d} t+\varepsilon \mu||\nabla^2U||_{L^1_T(\dot{B}^0_{2,1})}.
\end{split}\end{align}
Therefore, combining \eqref{2.2}-\eqref{2.4} and choosing $\varepsilon$ small enough, we find that
\begin{align*}\begin{split}
&\quad ||U,B||_{L^\infty_T(\dot{B}^0_{2,1})}+||U_t,B_t,\mu \nabla^2 U,\nu\nabla^2 B||_{L^1_T(\dot{B}^0_{2,1})}\\&\leq C\big((\frac{1}{\mu^3}+\frac{1}{\nu^3})\int^T_0(||U||^{4}_{\dot{B}^{\frac12}_{2,1}}+||B||^{4}_{\dot{B}^{\frac12}_{2,1}})(||U||_{\dot{B}^0_{2,1}}+||B||_{\dot{B}^0_{2,1}})\mathrm{d} t+||U_0,B_0||_{\dot{B}^0_{2,1}}\big).
\end{split}\end{align*}
The desired result of this proposition then follows from Gronwall's inequality and \eqref{2.1}.
\end{proof}
\section{The proof of the main results}
In this section, we shall give the main details of the proof of Theorem \ref{th1}. Our main idea basically follows the recent work \cite{Danchin 2017}.
Setting $a=\rho-1$, we infer from \eqref{1.1} that
\begin{equation}\label{3.1}\begin{cases}
\partial_ta+\mathrm{div}(a u)+\mathrm{div} u=0,\\
\partial_tu+u\cdot\nabla u+P'(1+a)\nabla a-b\cdot \nabla b+\frac12\nabla(|b|^2)-\mu \Delta u-\nabla ((\mu+\lambda)\mathrm{div} u)\\
\qquad =-a(u_t+u\cdot\nabla u),\\
\partial_tb+(\mathrm{div} u)b+u\cdot \nabla b-b\cdot \nabla u-\nu\Delta b=0,\\
\mathrm{div} b=0.
\end{cases}\end{equation}
Before continue on, we recall the following local well-posedness of the system \eqref{3.1}.
\begin{theo} \cite{Li 2017}\label{the1.0} Assume that the initial data $(a_0:=\rho_0-1, u_0,b_0)$ satisfy ${\rm{div}}b_0 = 0$ and
$$(a_0,u_0,b_0)\in\dot{B}_{2,1}^{\frac{d}{2}}\times\dot{B}_{2,1}^{\frac{d}{2}-1}\times\dot{B}_{2,1}^{\frac{d}{2}-1}.$$
If, in addition, $\inf_{x\in\mathbb{R}^d}a_0(x)>-1$, then there exists some time $T > 0$ such that
the system \eqref{3.1} has a unique local solution $(a, u,b)$ on $[0,T]\times\mathbb{R}^d$ which belongs to the function space
$$E_T:=\widetilde{\mathcal{C}}([0,T];\dot{B}_{2,1}^{\frac{d}{2}})\times(\widetilde{\mathcal{C}}([0,T];\dot{B}_{2,1}^{\frac{d}{2}-1})\cap L_T^1\dot{B}_{2,1}^{\frac{d}{2}+1})^{2d},$$
where $\widetilde{\mathcal{C}}([0,T];\dot{B}_{q,1}^{s}):=\mathcal{C}([0,T];\dot{B}_{q,1}^{s})\cap \widetilde{L}^{\infty}([0,T];\dot{B}_{q,1}^{s})$ with $s\in\mathbb{R}$ and $1\leq q\leq\infty$.
\end{theo}
We set $$v=u-U\quad\mbox{and}\quad c=b-B.$$
From the very beginning, the potential part $\mathcal{Q} v$ and the divergence-free part $\mathcal{P} v$ of $v$ are treated separately. Applying $\mathcal{Q}$ to the velocity equation of \eqref{3.1} and noticing that $\mathcal{Q} v=\mathcal{Q} u$ yields
\begin{align}\label{3.2}
\partial_t(\mathcal{Q} v)+\mathcal{Q}((v+U)\cdot \nabla \mathcal{Q} v)-\kappa\Delta \mathcal{Q} v+\nabla a=-\mathcal{Q}(aU_t+av_t)-\mathcal{Q} R_1,
\end{align}
where, denoting $k(a)=P'(1+a)-1$,
\begin{align}\begin{split}\label{3.2-1}
R_1&=(1+a)(v+U)\cdot \nabla \mathcal{P} v+(1+a)(v+U)\cdot \nabla U+a(v+U)\cdot \nabla \mathcal{Q} v
\\&\quad +k(a)\nabla a-(B+c)\cdot\nabla (B+c)+\frac12\nabla (|B+c|^2).
\end{split}\end{align}
In view of the density equation of \eqref{3.1} and using $u=\mathcal{Q} v+\mathcal{P} v+U$, we find that $a$ satisfies
\begin{align}\label{3.3}
\partial_ta+(v+U)\cdot \nabla a+\mathrm{div} \mathcal{Q} v=-a\mathrm{div} \mathcal{Q} v.
\end{align}
Because $\mathcal{P} U=U$ and $\mathcal{P}(\mathcal{Q} v\cdot \nabla \mathcal{Q} v)=\mathcal{P}(a\nabla a)=0$, applying $\mathcal{P}$ to the velocity equation of \eqref{3.1}, we discover that
\begin{align}\label{3.4}
\partial_t(\mathcal{P} v)+\mathcal{P}((v+U)\cdot \nabla \mathcal{P} v)-\mu\Delta \mathcal{P} v=-\mathcal{P}(aU_t+av_t+a\nabla a)-\mathcal{P} R_2,
\end{align}
where
\begin{align}\label{3.4-1}\begin{split}
R_2&=(1+a)(v+U)\cdot \nabla \mathcal{Q} v+(1+a)v\cdot\nabla U+a(v+U)\cdot \nabla \mathcal{P} v
\\& \quad +aU\cdot \nabla U-(B+c)\cdot\nabla c-c\cdot\nabla B
\\&=(1+a)\mathcal{P} v\cdot\nabla (U+\mathcal{Q} v)+(1+a)U\cdot\nabla \mathcal{Q} v+(1+a)\mathcal{Q} v\cdot \nabla U
\\& \quad +a(v+U)\cdot \nabla \mathcal{P} v+aU\cdot\nabla U+a\mathcal{Q} v\cdot\nabla \mathcal{Q} v-(B+c)\cdot\nabla c-c\cdot\nabla B.
\end{split}\end{align}
According to the magnetic equation of \eqref{3.1}, we can show that $c$ satisfies
\begin{align}\label{3.5}
\partial_tc+(v+U)\cdot \nabla c-\nu\Delta c=-R_3,
\end{align}
where
\begin{align}\label{3.5-1}
R_3=(\mathrm{div} \mathcal{Q} v)B+(\mathrm{div} \mathcal{Q} v)c+v\cdot \nabla B-(B+c)\cdot \nabla v-c\cdot \nabla U.
\end{align}
In the sequel, we denote by $a^\ell$ and $a^h$ the low and high frequency parts of $a$, defined as
\begin{align*}
a^\ell=\sum_{2^{j}\kappa\leq 1}\dot{\Delta}_ja, \qquad a^h=\sum_{2^{j}\kappa> 1}\dot{\Delta}_ja,
\end{align*}
and set
\begin{align*}
&X_d(T)=||\mathcal{Q} v,a,\kappa \nabla a||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||\mathcal{Q} v_t+\nabla a,\kappa \nabla^2\mathcal{Q} v,\kappa\nabla^2a^\ell,\nabla a^h||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})},
\\& Y_d(T)=Y_{d,1}(T)+Y_{d,2}(T):=||\mathcal{P} v,c||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||\mathcal{P} v_t,c_t,\mu \nabla^2\mathcal{P} v,\nu\nabla^2c||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})},
\\& Z_d(T)=||U,B||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||U_t,B_t,\mu\nabla^2 U,\nu\nabla^2 B||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})},
\\& X_d(0)=||a_0,\mathcal{Q} v_0||_{\dot{B}^{\frac d2-1}_{2,1}}+\kappa||a_0||_{\dot{B}^{\frac d2}_{2,1}}.
\end{align*}
It is easy to show that
\begin{align}\label{lll0}
Z_d(T)\leq M \quad \mathrm{for} \ \mathrm{all} \ T>0.
\end{align}
We concentrate our attention on the proof of global-in-time a priori estimates, as the local well-posedness issue has been ensured by Theorem \ref{the1.0}. We claim that if $\kappa$ is large enough then one may find some (large) $D$ and (small) $\delta$ so that, for all $T<T^*$,
\begin{align}\begin{cases}\label{lll}
X_d(T)\leq D, \quad Y_d(T)\leq \delta, \quad \kappa^{-1}D\ll1,
\\\delta(\frac1\mu+\frac1\nu+1)\leq 1, \quad D\geq (M+1), \quad ||a(t,\cdot)||_{L^\infty}\leq \frac12.
\end{cases}\end{align}
\text{\bf Step 1. Estimate on the terms $\mathcal{P} v$ and $c$.}
We first consider the estimates for $\mathcal{P} v$. Applying $\dot{\Delta}_j$ to \eqref{3.4}, taking the $L^2$ inner product with $\dot{\Delta}_j \mathcal{P} v$ then using that $\mathcal{P}^2=\mathcal{P}$, we deduce that
\begin{align}\label{3.9}
\frac12\frac{\mathrm{d}}{\mathrm{d} t}||\dot{\Delta}_j\mathcal{P} v||^2_{L^2}&+\mu||\nabla \dot{\Delta}_j\mathcal{P} v||^2_{L^2}=\int_{\mathbb{R}^d}([v+U,\dot{\Delta}_j]\cdot \nabla \mathcal{P} v)\cdot \dot{\Delta}_j \mathcal{P} v\mathrm{d} x
\\&-\int_{\mathbb{R}^d}\dot{\Delta}_j(aU_t+av_t+a\nabla a+R_2)\cdot \dot{\Delta}_j\mathcal{P} v\mathrm{d} x-\frac12\int_{\mathbb{R}^d}|\dot{\Delta}_j\mathcal{P} v|^2\mathrm{div} v\mathrm{d} x.
\end{align}
According to the commutator estimates of Lemma 2.100 in \cite{Bahouri2011}, the commutator term may be estimated as follows:
\begin{align}\label{lyz}
2^{j(\frac d2-1)}||[v+U,\dot{\Delta}_j]\cdot \nabla \mathcal{P} v||_{L^2}\leq Cc_j||\nabla (v+U)||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}, \quad\mbox{with}\quad ||c_j||_{l^1}=1.
\end{align}
Now, multiplying both sides of \eqref{3.9} by $2^{j(\frac d2-1)}$ and summing up over $j\in\mathbb{Z}$, using Lemma \ref{le1} and \eqref{lyz}, we obtain that
\begin{align}\label{l-1}\begin{split}
& \qquad ||\mathcal{P} v||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+\mu||\nabla^2\mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}\leq C \int^T_0||\nabla (v+U)||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\& +C||a(U_t+\mathcal{P} v_t+(\mathcal{Q} v_t+\nabla a))||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}+C||R_2||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}.
\end{split}\end{align}
In order to bound $||\mathcal{P} v_t||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}$, we infer from \eqref{3.4} and \eqref{l-1} that
\begin{align}\label{l-2}\begin{split}
& ||\mathcal{P} v||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||\mathcal{P} v_t,\mu\nabla^2\mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}\leq C\int^T_0||\nabla (v+U)||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\& \quad +C\int^T_0||(v+U)\cdot \nabla \mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t+C\int^T_0||a(U_t+\mathcal{P} v_t+(\mathcal{Q} v_t+\nabla a))||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\& \quad+C\int^T_0||R_2||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t.
\end{split}\end{align}
Next, we will estimate the Besov norms of the right-hand side of \eqref{l-2}. For the second term of the right-hand side of \eqref{l-2}, we can infer from Lemma \ref{le1} that
\begin{align}\label{l-3}\begin{split}
&\qquad ||(v+U)\cdot \nabla \mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\int^T_0||\mathcal{Q} v+U||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t
\\&\leq C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t
\\&\quad+\frac{C}{\varepsilon\mu}\int^T_0||(\mathcal{Q} v,U)||^2_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t+C\mu\varepsilon||\nabla^2\mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\varepsilon Y_d(T)+\frac{C}{\varepsilon\mu}\int^T_0||(\mathcal{Q} v,U)||^2_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t.
\end{split}\end{align}
For the third term of the right-hand side of \eqref{l-2}, we can infer from Lemma \ref{le1} that
\begin{align}\begin{split}
&\qquad \int^T_0||a(U_t+\mathcal{P} v_t+(\mathcal{Q} v_t+\nabla a))||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\& \leq C||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})}(||U_t||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}+||\mathcal{P} v_t||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}+||\mathcal{Q} v_t+\nabla a||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})})
\\&\leq C\kappa^{-1}X_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big).
\end{split}\end{align}
For the last term of the right-hand side of \eqref{l-2}, in view of Lemma \ref{le1}, we estimate its various contributions as follows:
\begin{align}\label{l-4}\begin{split}
&\qquad||(1+a)\mathcal{P} v\cdot\nabla (U+\mathcal{Q} v)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C\int^T_0(1+||a||_{\dot{B}^{\frac d2}_{2,1}})||U+\mathcal{Q} v||_{\dot{B}^{\frac d2+1}_{2,1}}||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}} \mathrm{d} t,
\end{split}\end{align}
\begin{align}\label{l-5}\begin{split}
&\qquad ||(1+a)(\mathcal{Q} v\cdot\nabla U+U\cdot \nabla \mathcal{Q} v)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C (1+||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})})
||\mathcal{Q} v||_{L^2_T(\dot{B}^{\frac d2}_{2,1})}||U||_{L^2_T(\dot{B}^{\frac d2}_{2,1})}
\\&\leq C(1+\kappa^{-1}X_d(T))\kappa^{-\frac12}\mu^{-\frac12}X_d(T)Z_d(T),
\end{split}\end{align}
\begin{align}\label{l-6}\begin{split}
& \qquad ||a(v+U)\cdot \nabla \mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})}||v+U||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}||\mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}
\\&\leq C\kappa^{-1}\mu^{-1}X_d(T)Y_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big),
\end{split}\end{align}
\begin{align}\label{l-7}\begin{split}
& \qquad ||aU\cdot\nabla U+a\mathcal{Q} v\cdot\nabla\mathcal{Q} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})}(||U||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}||U||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}
+||\mathcal{Q} v||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}||\mathcal{Q} v||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})})
\\& \leq C\kappa^{-1}X_d(T)\Big(\mu^{-1}Z^2_d(T)+\kappa^{-1}X^2_d(T)\Big),
\end{split}\end{align}
\begin{align}\label{l-8}\begin{split}
&\qquad||(B+c)\cdot\nabla c+c\cdot\nabla B||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C\int^T_0||c||_{\dot{B}^{\frac d2-1}_{2,1}}||c||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\int^T_0||B||_{\dot{B}^{\frac d2}_{2,1}}||c||_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t
\\&\leq C\int^T_0||c||_{\dot{B}^{\frac d2-1}_{2,1}}||c||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\varepsilon\nu ||c||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}+\frac{C}{\varepsilon\nu}\int^T_0||B||^2_{\dot{B}^{\frac d2}_{2,1}}||c||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\&\leq C\int^T_0||c||_{\dot{B}^{\frac d2-1}_{2,1}}||c||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\varepsilon Y_d(T)
+\frac{C}{\varepsilon\nu}\int^T_0||B||^2_{\dot{B}^{\frac d2}_{2,1}}||c||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t.
\end{split}\end{align}
Therefore, summing up \eqref{l-2}-\eqref{l-8}, we obtain
\begin{align}\label{y-1}\begin{split}
&\qquad ||\mathcal{P} v||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||\mathcal{P} v_t,\mu\nabla^2\mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\& \leq C\varepsilon Y_d(T)+C\kappa^{-1}X_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big)
\\& \quad +C(1+\kappa^{-1}X_d(T))\kappa^{-\frac12}\mu^{-\frac12}X_d(T)Z_d(T)+C\kappa^{-1}X_d(T)\Big(\mu^{-1}Z^2_d(T)+\kappa^{-1}X^2_d(T)\Big)
\\&\quad +C\kappa^{-1}\mu^{-1}X_d(T)Y_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big)
\\&\quad +C \int^T_0\Big((1+||a||_{\dot{B}^{\frac d2}_{2,1}})||(\mathcal{P} v,\mathcal{Q} v,U,c)||_{\dot{B}^{\frac d2+1}_{2,1}}+\frac{1}{\varepsilon\mu}||(\mathcal{Q} v,U)||^2_{\dot{B}^{\frac d2}_{2,1}}+\frac{1}{\varepsilon\nu}||B||^2_{\dot{B}^{\frac d2}_{2,1}}\Big)Y_{d,1}\mathrm{d} t.
\end{split}\end{align}
Now, we estimate the terms involving $c$. By a similar argument as in \eqref{l-2} and \eqref{l-3}, we infer from \eqref{3.5} that
\begin{align}\label{l-9}\begin{split}
& \qquad ||c||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||(c_t,\nu\nabla^2c)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq ||b_0-B_0||_{\dot{B}^{\frac d2-1}_{2,1}}+C \int^T_0||(\mathcal{P} v,\mathcal{Q} v,U)||_{\dot{B}^{\frac d2+1}_{2,1}}||c||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\& \quad +C||(v+U)\cdot \nabla c||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}+C||R_3||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}.
\end{split}\end{align}
For the last two terms of the right-hand side of \eqref{l-9}, according to Lemma \ref{le1}, we can handle them as follows:
\begin{align}\label{l-9.5}\begin{split}
&\qquad ||(v+U)\cdot \nabla c||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||c||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\int^T_0||\mathcal{Q} v+U||_{\dot{B}^{\frac d2}_{2,1}}||c||_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t
\\&\leq C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||c||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t
+\frac{C}{\varepsilon\nu}\int^T_0||(\mathcal{Q} v,U)||^2_{\dot{B}^{\frac d2}_{2,1}}||c||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t+C\nu\varepsilon||c||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}
\\&\leq C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||c||_{\dot{B}^{\frac d2+1}_{2,1}}\mathrm{d} t+C\varepsilon Y_d(T)+\frac{C}{\varepsilon\nu}\int^T_0||(\mathcal{Q} v,U)||^2_{\dot{B}^{\frac d2}_{2,1}}||c||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t.
\end{split}\end{align}
\begin{align}\label{l-10}
||(\mathrm{div} \mathcal{Q} v)c-c\cdot \nabla v-c\cdot \nabla U||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}\leq C\int^T_0||(U,\mathcal{P} v,\mathcal{Q} v)||_{\dot{B}^{\frac d2+1}_{2,1}}||c||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t,
\end{align}
\begin{align}\begin{split}\label{l-11}
&\qquad ||(\mathrm{div} \mathcal{Q} v)B+v\cdot \nabla B-B\cdot \nabla v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C\int^T_0||\mathcal{Q} v||_{\dot{B}^{\frac d2}_{2,1}}||B||_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t+C\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2}_{2,1}}||B||_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t
\\&\leq C\int^T_0||\mathcal{Q} v||_{\dot{B}^{\frac d2}_{2,1}}||B||_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t+\frac{C}{\varepsilon\nu}\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||B||^2_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t
+C\varepsilon\nu||\mathcal{P} v||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}
\\&\leq C\kappa^{-\frac12}\nu^{-\frac12}X_d(T)Z_d(T)+C\varepsilon Y_d(T)+\frac{C}{\varepsilon\nu}\int^T_0||\mathcal{P} v||_{\dot{B}^{\frac d2-1}_{2,1}}||B||^2_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t.
\end{split}\end{align}
Hence, collecting the estimates \eqref{l-9}-\eqref{l-11}, we get
\begin{align}\label{y-2}\begin{split}
& \qquad ||c||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||(c_t,\nu\nabla^2c)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq ||b_0-B_0||_{\dot{B}^{\frac d2-1}_{2,1}}+\frac{C}{\nu}Y^2_d(T)+C\varepsilon Y_d(T) +C\kappa^{-\frac12}\nu^{-\frac12}X_d(T)Z_d(T)
\\& \quad+C \int^T_0\Big(||(\mathcal{P} v,\mathcal{Q} v,U,c)||_{\dot{B}^{\frac d2+1}_{2,1}}+\frac{1}{\varepsilon\nu}||(\mathcal{Q} v,U,B)||^2_{\dot{B}^{\frac d2}_{2,1}}\Big)Y_{d}\mathrm{d} t.
\end{split}\end{align}
Then, combining \eqref{y-1} and \eqref{y-2} and choosing $\varepsilon$ small enough, we can conclude from Gronwall's inequality that
\begin{align}\label{ly1}\begin{split}
Y_d(T)&\leq Ce^{C||\mathcal{P} v,\mathcal{Q} v,U,c||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}+(\frac C\mu+\frac C\nu)||(\mathcal{Q} v,U,B)||^2_{L^1_T(\dot{B}^{\frac d2}_{2,1})}}\Big\{\kappa^{-1}X_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big)
\\& \quad +(1+\kappa^{-1}X_d(T))\kappa^{-\frac12}\mu^{-\frac12}X_d(T)Z_d(T)+\kappa^{-1}X_d(T)\Big(\mu^{-1}Z^2_d(T)+\kappa^{-1}X^2_d(T)\Big)
\\&\quad +\kappa^{-1}\mu^{-1}X_d(T)Y_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big)+\kappa^{-\frac12}\nu^{-\frac12}X_d(T)Z_d(T)\Big\}.
\end{split}\end{align}
\text{\bf Step 2. Estimate on the terms $\mathcal{Q} v$ and $a$.}
Now, applying $\dot{\Delta}_j$ to \eqref{3.3} and \eqref{3.2} yields that
\begin{align}\label{ll-1}\begin{cases}
\partial_ta_j+(v+U)\cdot \nabla a_j+\mathrm{div} \mathcal{Q} v_j=g_j,\\
\partial_t\mathcal{Q} v_j+\mathcal{Q}((v+U)\cdot \nabla \mathcal{Q} v_j)-\kappa \Delta \mathcal{Q} v_j+\nabla a_j=f_j,
\end{cases}\end{align}
where
\begin{align}\lambdabel{ll-2}\begin{split}
&a_j=\dot{\mathrm{div}elta}_ja, \quad \mathcal{Q} v_j=\dot{\mathrm{div}elta}_j\mathcal{Q} v, \quad g_j=-\dot{\mathrm{div}elta}_j(a \mathrm{div} \mathcal{Q} v)-[\dot{\mathrm{div}elta}_j,(v+U)]\cdot \nabla a,
\\& f_j=-\dot{\mathrm{div}elta}_j\mathcal{Q}(aU_t+av_t)-\dot{\mathrm{div}elta}_j\mathcal{Q} R_1-\mathcal{Q}[\dot{\mathrm{div}elta}_j,(v+U)]\cdot \nabla \mathcal{Q} v.
\end{split}\end{align}
We take the $L^2$ inner product of the first equation of \eqref{ll-1} with $a_j$ and of the second equation of \eqref{ll-1} with $\mathcal{Q} v_j$ to obtain
\begin{align}\label{ll-3}\begin{cases}
\frac12\frac{\mathrm{d}}{\mathrm{d} t}||a_j||^2_{L^2}+(a_j,\mathrm{div} \mathcal{Q} v_j)=\frac12(\mathrm{div} v, a^2_j)+(g_j,a_j), \\
\frac12\frac{\mathrm{d}}{\mathrm{d} t}||\mathcal{Q} v_j||^2_{L^2}+\kappa||\nabla \mathcal{Q} v_j||^2_{L^2}-(a_j,\mathrm{div} \mathcal{Q} v_j)=\frac12(\mathrm{div} v,|\mathcal{Q} v_j|^2)+(f_j,\mathcal{Q} v_j).
\end{cases}\end{align}
We next want to estimate $||\nabla a_j||^2_{L^2}$. From the first equation of \eqref{ll-1}, we have
\begin{align}\label{ll-4}
\partial_t\nabla a_j+(v+U)\cdot \nabla \nabla a_j+\nabla \mathrm{div} \mathcal{Q} v_j=\nabla g_j-\nabla(v+U)\cdot \nabla a_j.
\end{align}
From \eqref{ll-4} and the second equation of \eqref{ll-1}, taking $L^2$ inner products, we obtain
\begin{align}\label{ll-5}\begin{cases}
\frac12\frac{\mathrm{d}}{\mathrm{d} t}||\nabla a_j||^2_{L^2}+((v+U)\cdot \nabla \nabla a_j, \nabla a_j)+(\nabla \mathrm{div} \mathcal{Q} v_j,\nabla a_j)\\
\qquad =(\nabla g_j-\nabla (v+U)\cdot \nabla a_j,\nabla a_j),\\
\frac{\mathrm{d}}{\mathrm{d} t}(\mathcal{Q} v_j,\nabla a_j)+(v+U,\nabla (\mathcal{Q} v_j\cdot \nabla a_j))-\kappa(\Delta \mathcal{Q} v_j, \nabla a_j)+||\nabla a_j||^2_{L^2}
\\ \qquad +(\nabla \mathrm{div} \mathcal{Q} v_j,\mathcal{Q} v_j)=(\nabla g_j-\nabla (v+U)\cdot \nabla a_j,\mathcal{Q} v_j)+(f_j,\nabla a_j).
\end{cases}\end{align}
Noticing that $(\nabla \mathrm{div} \mathcal{Q} v_j,\nabla a_j)=(\Delta \mathcal{Q} v_j, \nabla a_j)$ and $\Delta \mathcal{Q} v_j=\nabla \mathrm{div} \mathcal{Q} v_j$, we get
\begin{align}\label{ll-6}\begin{split}
& \quad \frac12\frac{\mathrm{d}}{\mathrm{d} t}(\kappa||\nabla a_j||^2_{L^2}+2(\mathcal{Q} v_j\cdot \nabla a_j))+(||\nabla a_j||^2_{L^2}-||\nabla \mathcal{Q} v_j||^2_{L^2})
\\&=(\frac12\kappa|\nabla a_j|^2+\mathcal{Q} v_j\cdot \nabla a_j,\mathrm{div} v)+\kappa(\nabla g_j-\nabla(v+U)\cdot \nabla a_j,\nabla a_j)
\\&\quad +(\nabla g_j-\nabla (v+U)\cdot \nabla a_j,\mathcal{Q} v_j)+(f_j,\nabla a_j).
\end{split}\end{align}
Multiplying \eqref{ll-6} by $\kappa$ and adding twice \eqref{ll-3} yields
\begin{align}\label{ll-7}\begin{split}
&\frac12\frac{\mathrm{d}}{\mathrm{d} t}\mathcal{L}^2_j+\kappa(||\nabla \mathcal{Q} v_j||^2_{L^2}+||\nabla a_j||^2_{L^2})
\\&\quad =\int_{\mathbb{R}^d}(2g_ja_j+2f_j\cdot \mathcal{Q} v_j+\kappa^2\nabla g_j\cdot \nabla a_j+\kappa\nabla g_j\cdot\mathcal{Q} v_j+ \kappa f_j\cdot\nabla a_j)\mathrm{d} x
\\& \qquad +\frac12\int_{\mathbb{R}^d}\mathcal{L}^2_j\mathrm{div} v\mathrm{d} x-\kappa\int_{\mathbb{R}^d}(\nabla(v+U)\cdot\nabla a_j)\cdot(\kappa\nabla a_j+\mathcal{Q} v_j)\mathrm{d} x,
\end{split}\end{align}
with
\begin{align}\label{ll-8}\begin{split}
\mathcal{L}^2_j&=\int_{\mathbb{R}^d}(2a^2_j+2|\mathcal{Q} v_j|^2+2\kappa\mathcal{Q} v_j\cdot \nabla a_j+|\kappa\nabla a_j|^2)\mathrm{d} x
\\&=\int_{\mathbb{R}^d}(2a^2_j+|\mathcal{Q} v_j|^2+|\mathcal{Q} v_j+ \kappa\nabla a_j|^2)\mathrm{d} x\approx ||(\mathcal{Q} v_j,a_j,\kappa \nabla a_j)||^2_{L^2}.
\end{split}\end{align}
By \eqref{ll-8}, we obtain
\begin{align*}
\kappa(||\nabla \mathcal{Q} v_j||^2_{L^2}+||\nabla a_j||^2_{L^2})\geq c\min(\kappa 2^{2j},\kappa^{-1})\mathcal{L}^2_j,
\end{align*}
which along with \eqref{ll-7} yields
\begin{align}\label{ll-8.1}\begin{split}
\frac12\frac{\mathrm{d}}{\mathrm{d} t}\mathcal{L}^2_j+c\min(\kappa 2^{2j},\kappa^{-1})\mathcal{L}^2_j\leq& (\frac12||\mathrm{div} v||_{L^\infty}+||\nabla (v+U)||_{L^\infty})\mathcal{L}^2_j
\\&+C||(g_j,f_j,\kappa \nabla g_j)||_{L^2}\mathcal{L}_j.
\end{split}\end{align}
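For completeness, let us indicate why the lower bound stated before \eqref{ll-8.1} holds (a consequence of Bernstein's inequalities, the Fourier supports of $a_j$ and $\mathcal{Q} v_j$ being contained in an annulus of size $2^j$, see \cite{Bahouri2011}): since $||\nabla a_j||_{L^2}\geq c2^j||a_j||_{L^2}$ and $||\nabla \mathcal{Q} v_j||_{L^2}\geq c2^j||\mathcal{Q} v_j||_{L^2}$, we have
$$\kappa(||\nabla \mathcal{Q} v_j||^2_{L^2}+||\nabla a_j||^2_{L^2})\geq c\big(\kappa 2^{2j}(||\mathcal{Q} v_j||^2_{L^2}+||a_j||^2_{L^2})+\kappa^{-1}||\kappa\nabla a_j||^2_{L^2}\big)\geq c\min(\kappa 2^{2j},\kappa^{-1})\mathcal{L}^2_j,$$
in view of \eqref{ll-8}.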
Multiplying both sides of \eqref{ll-8.1} by $2^{j(\frac d2-1)}$ and then summing up over $j\in\mathbb{Z}$, we infer from Remark \ref{re1} that
\begin{align}\label{ll-9}\begin{split}
&||(a,\kappa\nabla a,\mathcal{Q} v)||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}+||(\kappa \nabla^2\mathcal{Q} v,\kappa\nabla^2a^\ell,\nabla a^h)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C ||(a,\kappa\nabla a,\mathcal{Q} v)(0)||_{\dot{B}^{\frac d2-1}_{2,1}}+C\int^T_0||(v,U)||_{\dot{B}^{\frac d2+1}_{2,1}}||(a,\kappa \nabla a,\mathcal{Q} v)||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\&\quad +C\int^T_0\sum_{j\in \mathbb{Z}}2^{j(\frac d2-1)}||(g_j,f_j,\kappa\nabla g_j)||_{L^2}\mathrm{d} t.
\end{split}\end{align}
Combining the estimates
\begin{align*}
&||a\mathrm{div} \mathcal{Q} v,\kappa\nabla(a\mathrm{div} \mathcal{Q} v)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}\leq C\int^T_0||\mathrm{div} \mathcal{Q} v||_{\dot{B}^{\frac d2}_{2,1}}||a,\kappa\nabla a||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t,
\end{align*}
and
\begin{align*}
& \qquad\int^T_0\sum_j2^{j(\frac d2-1)}||[\dot{\Delta}_j,(v+U)]\nabla a,\kappa\nabla([\dot{\Delta}_j,(v+U)]\nabla a)||_{L^2}\mathrm{d} t
\\&\leq C\int^T_0||\nabla (v+U)||_{\dot{B}^{\frac d2}_{2,1}}||a,\kappa\nabla a||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t,
\end{align*}
we have
\begin{align}\label{ll-10}
\int^T_0\sum_{j\in \mathbb{Z}}2^{j(\frac d2-1)}||(g_j,\kappa\nabla g_j)||_{L^2}\mathrm{d} t\leq C\int^T_0||(v,U)||_{\dot{B}^{\frac d2+1}_{2,1}}||(a,\kappa \nabla a)||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t.
\end{align}
Next, we will estimate the last term $\int^T_0\sum_{j\in \mathbb{Z}}2^{j(\frac d2-1)}||f_j||_{L^2}\mathrm{d} t$. According to Lemmas \ref{le1}-\ref{le2} and the commutator estimates of Lemma 2.100 in \cite{Bahouri2011}, we have
\begin{align}\label{ll-11}\begin{split}
&\qquad ||(1+a)(v+U)\cdot \nabla (\mathcal{P} v+U)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C(1+||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})})||(\mathcal{P} v,U)||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})} ||(\nabla \mathcal{P} v,\nabla U)||_{L^1_T(\dot{B}^{\frac d2}_{2,1})}
\\& \quad +C\int^T_0(1+||a||_{\dot{B}^{\frac d2}_{2,1}})||(\nabla \mathcal{P} v,\nabla U)||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{Q} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t
\\&\leq C\big(1+\kappa^{-1}X_d(T)\big)\mu^{-1}\big(Y^2_d(T)+Z^2_d(T)\big)
\\& \quad +C\int^T_0(1+||a||_{\dot{B}^{\frac d2}_{2,1}})||(\nabla \mathcal{P} v,\nabla U)||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{Q} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t,
\end{split}\end{align}
\begin{align}\label{ll-12}\begin{split}
||a(v+U)\cdot\nabla \mathcal{Q} v||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}&\leq C||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})}||(v,U)||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}||\nabla \mathcal{Q} v||_{L^1_T(\dot{B}^{\frac d2}_{2,1})}
\\&\leq C\kappa^{-2}X^2_d(T)\big(X_d(T)+Y_d(T)+Z_d(T)\big),
\end{split}\end{align}
\begin{align}\label{ll-13}\begin{split}
||k(a)\nabla a||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}&\leq C\int^T_0||a||^2_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t\leq C\int^T_0(||a^\ell||^2_{\dot{B}^{\frac d2}_{2,1}}+||a^h||^2_{\dot{B}^{\frac d2}_{2,1}})\mathrm{d} t
\\&\leq C\int^T_0(||a^\ell||_{\dot{B}^{\frac d2-1}_{2,1}}||a^\ell||_{\dot{B}^{\frac d2+1}_{2,1}}+||a^h||^2_{\dot{B}^{\frac d2}_{2,1}})\mathrm{d} t\leq C\kappa^{-1}X^2_d(T),
\end{split}\end{align}
\begin{align}\label{ll-14}
\int^T_0\sum_{j\in\mathbb{Z}}2^{j(\frac d2-1)}||[\dot{\Delta}_j,v+U]\nabla \mathcal{Q} v||_{L^2}\mathrm{d} t\leq C\int^T_0||\nabla (v+U)||_{\dot{B}^{\frac d2}_{2,1}}||\mathcal{Q} v||_{\dot{B}^{\frac d2-1}_{2,1}}\mathrm{d} t.
\end{align}
\begin{align}\label{ll-15}\begin{split}
&\qquad ||aU_t||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}+||av_t||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C||(U_t,\mathcal{P} v_t,\mathcal{Q} v_t+\nabla a)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})}+C\int^T_0||a||^2_{\dot{B}^{\frac d2}_{2,1}}\mathrm{d} t
\\&\leq C\kappa^{-1}X_d(T)\Big(X_d(T)+Y_d(T)+Z_d(T)\Big),
\end{split}\end{align}
\begin{align}\label{ll-16}\begin{split}
&\qquad ||\frac12\nabla (|B+c|^2)-(B+c)\cdot\nabla (B+c)||_{L^1_T(\dot{B}^{\frac d2-1}_{2,1})}
\\&\leq C||(B,c)||_{L^\infty_T(\dot{B}^{\frac d2-1}_{2,1})}||(B,c)||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}\leq \frac{C}{\nu}\big(Z^2_d(T)+ Y^2_d(T)\big).
\end{split}\end{align}
Therefore, collecting \eqref{ll-9}-\eqref{ll-16}, we have
\begin{align}\label{ly2}\begin{split}
X_d(T)&\leq Ce^{C(1+||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})})||\mathcal{P} v,\mathcal{Q} v,U||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}}\Big\{X_d(0)
\\& \quad +C\big(1+\kappa^{-1}X_d(T)\big)(\mu^{-1}+\nu^{-1})\big(Y^2_d(T)+Z^2_d(T)\big)
\\& \quad +C\big(\kappa^{-2}X^2_d(T)+\kappa^{-1}X_d(T)\big)\big(X_d(T)+Y_d(T)+Z_d(T)\big)\Big\}.
\end{split}\end{align}
By \eqref{lll0} and \eqref{lll}, we can deduce that
\begin{align}\begin{split}\label{lll-1}
(1+||a||_{L^\infty_T(\dot{B}^{\frac d2}_{2,1})})||\mathcal{P} v,\mathcal{Q} v,U,c||_{L^1_T(\dot{B}^{\frac d2+1}_{2,1})}&\leq (1+\kappa^{-1}D)(\kappa^{-1}D+\mu^{-1}M+\mu^{-1}\delta+\nu^{-1}\delta)
\\&\leq 2(1+\mu^{-1}+\nu^{-1})(M+1),
\end{split}\end{align}
and
\begin{align}\begin{split}\label{lll-2}
(\mu^{-1}+\nu^{-1})||(\mathcal{Q} v,U,B)||^2_{L^1_T(\dot{B}^{\frac d2}_{2,1})}&\leq (\mu^{-1}+\nu^{-1})\big(\kappa^{-1}D^2+(\mu^{-1}+\nu^{-1})M^2\big)
\\&\leq (1+\mu^{-2}+\nu^{-2})(M+1)^2.
\end{split}\end{align}
According to \eqref{lll}, \eqref{ly1} and \eqref{ly2}--\eqref{lll-2}, we have
\begin{align}\label{ll-17}\begin{split}
Y_d(T)&\leq Ce^{C(1+\mu^{-2}+\nu^{-2})(M+1)^2}\big(
\kappa^{-1}D^2+\kappa^{-\frac12}(\mu^{-\frac12}+\nu^{-\frac12})DM
+\kappa^{-1}\mu^{-1}DM^2
\\&\quad +\kappa^{-1}\mu^{-1}D^2\delta\big)
\\&\leq Ce^{C(1+\mu^{-2}+\nu^{-2})(M+1)^2}\big(\kappa^{-1}D^2+\kappa^{-\frac12}D\big),
\end{split}\end{align}
and
\begin{align}\label{ll-18}\begin{split}
X_d(T)&\leq Ce^{C(1+\mu^{-1}+\nu^{-1})(M+1)}\big(X_d(0)+(\mu^{-1}+\nu^{-1})(1+M^2)+M+1\big)
\\&\leq Ce^{C(1+\mu^{-1}+\nu^{-1})(M+1)^2}\big(X_d(0)+1\big),
\end{split}\end{align}
for a suitably large (universal) constant $C$. So it is natural to take first
\begin{align}\label{ll-19}
D:=Ce^{C(1+\mu^{-2}+\nu^{-2})(M+1)^2}\big(X_d(0)+1\big),
\end{align}
and then to set
\begin{align}\label{ll-20}
\delta=Ce^{2C(1+\mu^{-2}+\nu^{-2})(M+1)^2}\big(\kappa^{-1}D^2+\kappa^{-\frac12}D\big),
\end{align}
for a suitable large (universal) constant $C$. It is easy to prove that $||a(t,\cdotot)||_{L^\infty}\leq C\kappa^{-1}D$. Therefore, if we make the assumption that $\kappa$ is large enough such that
$$\kappa^{-1}D\ll1, \quad \delta(\fracac1\mu+\fracac1\nu+1)\leq \fracac12, $$
then we deduce from \eqref{ll-17}-\eqref{ll-20} that the desired result \eqref{lll}.
{\bf Proof of Theorem 1.1.}\quad First, Theorem \ref{the1.0} implies that there exists a unique
maximal solution $(a,u,b)$ to \eqref{3.1} which belongs to $\widetilde{\mathcal{C}}([0,T];\dot{B}_{2,1}^{\frac{d}{2}})\times(\widetilde{\mathcal{C}}([0,T];\dot{B}_{2,1}^{\frac{d}{2}-1})\cap L_T^1\dot{B}_{2,1}^{\frac{d}{2}+1})^{2d}$ on some time interval $[0, T^*)$. With the global a priori estimates \eqref{lll0} and \eqref{lll} at hand, one concludes that $T^*=+\infty$. Indeed, assume by contradiction that $T^*<\infty$. Applying \eqref{lll0} and \eqref{lll} for all $t < T^*$ yields
\begin{eqnarray}\label{Equ4.18}
||a,u,b||_{L^\infty_{T^*}(\dot{B}^{\frac d2-1}_{2,1})}&\leq&C<\infty.
\end{eqnarray}
Then, for all $t_0 \in[0, T^*)$, one can solve \eqref{3.1} starting from the data $(a,u,b)(t_0)$ at time $t = t_0$ and, according to Theorem \ref{the1.0}, obtain a solution on the interval $[t_0, T + t_0]$ with $T$ independent of $t_0$. Choosing $t_0 > T^*- T$ thus shows that the solution can be continued beyond $T^*$, a contradiction.
\vspace*{1em}
\noindent\textbf{Acknowledgements.} This work was partially supported by NSFC (No. 11361004).
\end{document}
\begin{document}
\title{Einstein metrics with anisotropic boundary behaviour}
In recent years the relation between complete, infinite volume,
Einstein metrics and the geometry of their boundary at infinity has
been intensively studied, especially since the advent of the physical
AdS/CFT correspondence.
In all the previous examples of this correspondence, the Einstein
metrics at infinity are supposed to be asymptotic to some fixed
model---a symmetric space of noncompact type $G/K$. Here we shall
restrict to the rank one case, where the examples are asymptotically
real, complex or quaternionic hyperbolic metrics. The corresponding
geometries at infinity (``parabolic geometries'' modelled on $G/P$,
where $P$ is a minimal parabolic subgroup of $G$) are conformal
metrics, CR structures or quaternionic-contact structures. In this
article, we introduce a new class of examples, which are no longer
asymptotic to a symmetric space. Actually the model at infinity is
still given by a homogeneous Einstein space, which may vary from point
to point on the boundary at infinity.
This phenomenon cannot occur in the most classical examples (real or
complex hyperbolic spaces), because the algebraic structure at
infinity (abelian group or Heisenberg group) has no deformation. But
such deformations exist for the quaternionic Heisenberg group (except
in dimension 7), and even in the 15-dimensional octonionic case. So
these are the two cases on which this article shall focus. In the
parabolic geometry language, these are the two cases where non-regular
examples exist.
More concretely, the basic quaternionic example is the sphere
$S^{4m-1}$, with its $(4m-4)$-dimensional distribution $\mathcal{D}$, and the
octonionic example is the sphere $S^{15}$ with a $8$-dimensional
distribution $\mathcal{D}$. At each point $x$ of the sphere, there is an
induced nilpotent Lie algebra structure on $\mathfrak{n}_x=\mathcal{D}_x\oplus T_xS/\mathcal{D}_x$,
given by the projection on $T_xS/\mathcal{D}_x$ of the bracket of two vector
fields $X,Y\in \mathcal{D}_x$. It was proved in \cite{Biq00} that small deformations of
$\mathcal{D}$, such that $\mathfrak{n}_x$ remains the quaternionic Heisenberg algebra
for all $x$, are boundaries at infinity of complete Einstein metrics
on the ball. This regularity assumption (that is, keeping the
isomorphism type of the algebra $\mathfrak{n}_x$ fixed) imposes a strong
differential system on $\mathcal{D}$: it was shown in \cite{Biq00} that such
quaternionic-contact structures exist in abundance, but there is no
octonionic example \cite{Yam93}.
In this article, we relax the regularity assumption in these two
cases. There is a beautiful family of examples, already known in the
literature: the homogeneous Einstein metrics of Heber \cite{Heb98}. In
the upper half-space model, each hyperbolic space is identified with the
solvable group $S=AN$, with boundary at infinity the Heisenberg group
$N$ (where $G=KAN$ is the Iwasawa decomposition). Then Heber proved
that every deformation of $S$ carries a unique homogeneous Einstein
metric. In particular, we can associate to a deformation of the
nilpotent Lie algebra $\mathfrak{n}$ the homogeneous Einstein metric on the
corresponding solvable group $S=AN$.
\begin{theo}\label{th:main}
Let $n=4m-1\geq 11$ in the quaternionic case, or $n=15$ in the
octonionic case. Any small deformation of the $(4m-4)$-dimensional (in
the quaternionic case) or $8$-dimensional (in the octonionic case)
distribution of $S^n$ is the boundary at infinity of a complete
Einstein metric on the ball $B^{n+1}$.
At each point $x\in S^n$, the Einstein metric is asymptotic to Heber's
homogeneous metric on the solvable group associated to the nilpotent
algebra $\mathfrak{n}_x$.
\end{theo}
The meaning of the theorem is that all deformations of the
distributions on the boundary of the rank one symmetric spaces can be
interpreted as boundaries at infinity of Einstein metrics, but maybe
with an anisotropic behaviour (the asymptotics depends on the
direction). This gives new examples in the quaternionic case, for
dimension at least 11, and in the octonionic case.
The asymptotic condition means that there is some inhomogeneous
rescaling of the metric near the boundary point $x$ which converges to
Heber's metric, see remark \ref{rema:asympt-solv} for details.
The relation between the regular examples and the new examples is
perhaps best understood by remembering that the Einstein metrics
associated to quaternionic-contact structures in dimension at least 11
are actually quaternionic-Kähler \cite{Biq02}, so they keep the holonomy
$Sp_mSp_1$ of the hyperbolic space. This condition distinguishes
exactly the regular case:
\begin{coro}
In the quaternionic case, for $m\geq 3$, the Einstein metric
constructed by the previous theorem is quaternionic-Kähler if and
only if the distribution on $S^{4m-1}$ is regular (that is, is a
quaternionic-contact structure).
\end{coro}
The corollary follows from the fact that the boundary at infinity of a
quaternionic-Kähler metric must be a quaternionic-contact structure
\cite{Biq00}.
There is a similar, but obvious, story in the octonionic case. The
Cayley plane has holonomy $Spin_9$. If the Einstein metric keeps the
$Spin_9$ condition, it is well-known that it is the hyperbolic metric
($Spin_9$ metrics are locally symmetric). On the other hand, a regular
distribution of dimension 8 on $S^{15}$ must be standard. So we have a
(trivial) example of the equivalence of the holonomy condition on the
Einstein metric with the regularity condition on the boundary.
The article has two parts. The first part is algebraic, and consists
in the construction of an approximate Einstein metric near the
boundary at infinity. The new point here is that the model is not
explicit: it is the solution of algebraic equations giving conformal
structures on the distribution $\mathcal{D}$ and on the quotient
$TS/\mathcal{D}$. These equations have a nice interpretation in terms of a
stronger geometric structure, a quaternionic (or octonionic) structure
on $\mathcal{D}$, to which we add a gauge condition that enables us to find a
unique solution. This additional structure should be useful in future
applications, in particular if one wishes to work out a
Fefferman-Graham type development of the Einstein metric.
The second part is analytic, and consists in deforming an approximate
Einstein metric into a solution of the equations. This relies
basically on a deformation argument, which requires understanding the
analytic properties of the deformation operator. If one has a good
understanding of the analysis for the models (Einstein metrics on
solvable groups), then one can probably use microlocal analysis to
glue together the inverses of the deformation operator into the
required parametrix. However here we prefer to avoid the analysis on
these solvable groups, since more direct methods give the required
result. Nevertheless, it is clear that the more sophisticated
microlocal analysis may be required in further developments of the
theory.
\section{Algebraic considerations}
Let $V_1$ and $V_2$ be vector spaces of dimensions $4m-4$ and $3$ (in
the quaternionic case) or of dimensions $8$ and $7$ (in the octonionic
case). A formal Levi bracket is an element $\ell$ of $W = \land^2 V_1^*
\otimes V_2$. This bracket makes
\begin{equation}
\mathfrak{n} = V_1 \oplus V_2\label{eq:15}
\end{equation}
into a two-graded nilpotent Lie algebra (as the Jacobi identity is
trivially satisfied). The corresponding Lie group $N$ will then carry
an invariant distribution $\mathcal{D}$ of the same rank as $V_1$. Consider the
Lie bracket $[,]$ on sections of $\mathcal{D}$. This is a differential
bracket, but the differential part of it only maps into
$\mathcal{D}$. Hence the map \begin{eqnarray*}
\mathcal{L} : \land^2 \Gamma(\mathcal{D}) &\to & \Gamma(TX/\mathcal{D}) \\
\mathcal{L} (X,Y) &=& [X,Y]/\mathcal{D} \end{eqnarray*} is an algebraic map, i.e. a section
of $\land^2 \mathcal{D}^* \otimes (TN/\mathcal{D})$. If we designate by $L$ the group
$GL(V_1) \times GL(V_2)$, then $\mathcal{D}$ and $TN/\mathcal{D}$ are bundles
associated to an $L$-principal bundle $E \to N$.
In this set-up, $\mathcal{L}$ corresponds to an $L$-equivariant map
$f_{\mathcal{L}}$ from $E$ to $W$. Designate by $E_p$ the fibre of $E$ at
$p \in N$. By construction, $\ell$ is in the image $f_{\mathcal{L}}(E_p)$ for
all $p$, and this image consists precisely of the $L$-orbit of $\ell$ in
$W$.
Under the identifications $V_1 = \mathbb{H}^{m-1}$ and $V_2 =\im
\mathbb{H}$, the quaternionic standard Levi bracket $\kappa$ is given by the
choice of a hermitian metric $h$ on $V_1$; in this case, $\kappa$ is simply
the imaginary part of $h$.
Similarly, the standard octonionic bracket (also designated $\kappa$) is
also defined by identifications $V_1 = \mathbb{O}$, $V_2 = \im \mathbb{O}$
and a choice of $h$.
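For concreteness, let us spell out the quaternionic case (this is the standard construction, only recalled here for the reader's convenience): writing $h(x,y)=\sum_{i=1}^{m-1}\bar x_i y_i\in\mathbb{H}$ for $x,y\in V_1=\mathbb{H}^{m-1}$, one has $h(y,x)=\overline{h(x,y)}$, hence
\begin{eqnarray*}
\kappa(x,y) = \im h(x,y) \in \im\mathbb{H}=V_2
\end{eqnarray*}
is antisymmetric in $(x,y)$ and therefore defines an element of $\land^2V_1^*\otimes V_2$, as required.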
In general, $\kappa$ is only defined up to $L$-action; but as $\ell$ is
only defined up to $L$-action, we will assume our choice of $\kappa$ is
fixed.
We identify $G_0$ as the stabiliser of $\kappa$ in $L$; this can be seen as
the group that stabilises the quaternionic or octonionic structure. In
the quaternionic case, \begin{eqnarray*} G_0 = \mathbb{R}_+^* Sp(1)Sp(m-1) ,
\end{eqnarray*} while in the octonionic case, \begin{eqnarray*} G_0 = \mathbb{R}_+^* Spin(7). \end{eqnarray*}
In general, we consider a manifold $X^n$ of dimension $n=4m-1$ in the
quaternionic case, or $n=15$ in the octonionic case, and a
distribution $\mathcal{D}\subset TX$ of dimension equal to the rank of $V_1$. At each
point $x$ of $X$, the image in $TX/\mathcal{D}$ of the bracket $[X_1,X_2]$ of
two vector fields in $\mathcal{D}$ is an algebraic map, $\mathcal{L}_x\in \Lambda^2\mathcal{D}_x^*\otimes(T_xX/\mathcal{D}_x)$.
The main result of this section is:
\begin{prop}\label{prop:constr-ae}
There exists an $L$-invariant open set $U\subset W$ (that is, an open set of
$L$-orbits), containing $\kappa$, with the following property. If $X^n$
has a distribution $\mathcal{D}$ such that for every $x\in X$ the induced
bracket $\mathcal{L}_x\in U$, then there exist metrics $\eta^2$ and $\gamma$ on $TX/\mathcal{D}$
and $\mathcal{D}$, such that, choosing any splitting $TX=\mathcal{D}\oplus V$, the metric
\begin{equation}
g = \frac{dt^2+\eta^2}{t^2} + \frac \gamma t \label{eq:14}
\end{equation}
on $\mathbb{R}_+^*\times X$ is asymptotically Einstein when $t\to0$:
\begin{equation}
\Ric(g)=\lambda g + O(t^{\frac 12}) ,\label{eq:16}
\end{equation}
where $\lambda=-m-2$ in the quaternionic case, $\lambda=-9$ in the octonionic
case. Moreover, this choice of $\eta$ and $\gamma$ is unique, up to the
conformal transformation: \begin{eqnarray*} (\eta^2, \gamma) \to ( f^2 \eta^2, f \gamma) \end{eqnarray*} for $f$
a strictly positive function $X \to \mathbb{R}$.
\end{prop}
In this statement, it is important to note that the asymptotic
behaviour (\ref{eq:16}) does not depend on the choice of splitting
$TX=\mathcal{D}\oplus V$.
\begin{rema}\label{rema:solvable}
In the special case where $X$ is the nilpotent group $N$ associated
to an algebraic bracket $\ell\in \Lambda^2V_1^*\otimes V_2$, and the distribution
$\mathcal{D}$ is the associated distribution, then the splitting
(\ref{eq:15}) gives a canonical choice for $V$. Then the metric
(\ref{eq:14}) is an invariant metric on the solvable group
$S=\mathbb{R}_+^*\ltimes N$. Being asymptotically Einstein when $t\to0$ and
invariant implies that it is exactly Einstein. This is the metric
constructed by Heber \cite{Heb98} on $S$. The proof of the
proposition will give another construction of this metric, at least
for small deformations of the distribution. Conversely, it also
follows from our proof that the open set $U$ can be taken equal to
the set of brackets $\ell$ such that an Einstein metric exists on the
associated solvable group.
\end{rema}
\begin{rema}\label{rema:asympt-solv}
In general, to each point $x\in X$ is associated a nilpotent group
$N_x$ and a solvable group $S_x=\mathbb{R}_+^*\ltimes N_x$. We are going to see
the relation between the asymptotically Einstein metric
(\ref{eq:16}) and $S_x$. To simplify notation, let us consider only
the quaternionic case (but the octonionic case is similar). Near $x$
we choose coordinates $(x_1,\dots,x_n)$ on $X$ such that $\mathcal{D}_x$ is
generated by the vector fields $(\frac \partial{\partial x_4},\dots,\frac \partial{\partial
x_n})$. The distribution $\mathcal{D}$ is given by the kernel of three
1-forms, $\eta_1$, $\eta_2$ and $\eta_3$, and we can suppose that at the
point $x$ one has $\eta_i=dx_i$. Then we consider the homothety
\begin{equation}
\label{eq:17}
h_r(t,x_1,\dots,x_n) = (rt,rx_1,rx_2,rx_3,\sqrt r x_4,\dots,\sqrt r x_n) .
\end{equation}
Note $\eta_i=\eta_i^jdx_j$, with $\eta_i^j(0)=\delta_i^j$. Then one has
$$ \bar \eta_i:=\lim_{r\to0}\frac 1{r} h_r^*\eta_i = dx_i + \sum_{j,k=4}^n x_k
\frac{\partial\eta_i^j}{\partial x_k}(0) dx_j . $$ The three forms $\bar \eta_1$, $\bar
\eta_2$ and $\bar \eta_3$ are homogeneous, and define exactly the horizontal
distribution of the nilpotent group $N_x$. Denote $\bar \gamma:=\gamma(0)$,
then, when $r\to0$, one obtains the limit
$$ h_r^* g \longrightarrow \frac{dt^2+\bar \eta^2}{t^2} + \frac{\bar \gamma}t . $$
This is an invariant metric on the solvable group $S_x$, and more
precisely it is the homogeneous Einstein metric on $S_x$ mentioned in
the previous remark. This justifies the statement in Theorem
\ref{th:main} that at each point the constructed metric is asymptotic
to the corresponding Heber's metric.
\end{rema}
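As a quick sanity check of the normalisations used in the previous remark (elementary, and only recorded for the reader's convenience), the homothety (\ref{eq:17}) preserves each piece of the model metric (\ref{eq:14}) separately:
\begin{eqnarray*}
h_r^*\Big(\frac{dt^2}{t^2}\Big) &=& \frac{r^2\,dt^2}{(rt)^2}\;=\;\frac{dt^2}{t^2}, \\
h_r^*\Big(\frac{dx_i^2}{t^2}\Big) &=& \frac{r^2\,dx_i^2}{(rt)^2}\;=\;\frac{dx_i^2}{t^2} \qquad (i\leq 3), \\
h_r^*\Big(\frac{dx_j^2}{t}\Big) &=& \frac{r\,dx_j^2}{rt}\;=\;\frac{dx_j^2}{t} \qquad\ (j\geq 4),
\end{eqnarray*}
which explains why the anisotropic weights $(r,r,r,\sqrt r,\dots,\sqrt r)$ are the right ones for the rescaled metrics $h_r^*g$ to admit a limit of the form $\frac{dt^2+\bar\eta^2}{t^2}+\frac{\bar\gamma}t$.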
\begin{proof}[Proof of proposition \ref{prop:constr-ae}]
The uniqueness comes from remark \ref{rema:asympt-solv} and the
uniqueness of the homogeneous Einstein metric on $S_x$ proved by
Heber. Later in the paper, a weaker uniqueness will also be proved.
We will calculate the Ricci tensor of the metric (\ref{eq:14}) as a
function of $\gamma$ and $\eta$. The calculation is local, so we can
choose orthonormal frames $\{\check{X}_i\}$ and $\{\check{Y}_i\}$ of
$(TX/ \mathcal{D})$ and $\mathcal{D}$, respectively.
On $M = \mathbb{R}_+^* \times X$, we can define an orthonormal frame via: \begin{eqnarray*}
X_0 &=& t \frac{\partial}{\partial t}, \\
X_i &=& t \check{X}_i, \\
Y_i &=& \sqrt{t} \check{Y}_i. \end{eqnarray*} Let $O(a)$ denote sections of
$TM$ whose norm (under $g$) tends to zero at least as fast as
$t^a$. Then we may calculate the Lie brackets of the above frame
elements: \begin{eqnarray*}
[X_0, X_i] &=& X_i, \\[0.cm]
[X_0, Y_i] &=& \frac{1}{2} Y_i, \\[0.cm]
[X_i, X_j] &=& O(1), \\[0.cm]
[X_i, Y_j] &=& O(1/2), \\[0.cm]
[Y_i, Y_j] &=& \mc{L}_{ij} + O(1/2), \\[0.cm]
\end{eqnarray*} where $\mc{L}_{ij} = \mc{L} (Y_i, Y_j)$. In future, we will denote by
$\mc{L}_{ij}^k$ the $k$ component of $\mc{L}_{ij}$ -- i.e. \begin{eqnarray*} \mc{L}_{ij}^k
= g(\mc{L}_{ij}, X_k), \end{eqnarray*} and $\mc{L}^k_i$ will be the section of
$\mathcal{D}$ defined by: \begin{eqnarray*} g (\mc{L}^k_i, Y_j) = \mc{L}^{k}_{ij}. \end{eqnarray*}
Now let $\nabla$ be the Levi-Civita connection of $g$. We can
calculate $\nabla$ by using the Koszul formula: \begin{eqnarray*}
2g(\nabla_X Y, Z) &=& X \cdot g(Y,Z) + Y \cdot g(X,Z) - Z \cdot g(X,Y) \\
&& +\ g([X,Y],Z) - g([X,Z],Y) - g([Y,Z],X). \end{eqnarray*} Since our frame
elements are orthonormal, the formula reduces to \begin{eqnarray*} 2g(\nabla_X Y,
Z) &=& g([X,Y],Z) - g([X,Z],Y) - g([Y,Z],X), \end{eqnarray*} giving: \begin{eqnarray*}
\nabla_{X_0} X_0 &=& 0, \\
\nabla_{X_0} X_i = \nabla_{X_0} Y_i &=& 0, \\
\nabla_{X_i} X_0 &=& - X_i, \\
\nabla_{Y_i} X_0 &=& - \frac{1}{2} Y_i, \\
\nabla_{X_i} X_j &=& \delta_{ij} X_0 + O(1/2), \\
\nabla_{X_i} Y_j = \nabla_{Y_j} X_i &=& -\frac{1}{2} \mathcal{L}^i_j + O(1/2), \\
\nabla_{Y_i} Y_j = \nabla_{Y_j} Y_i &=& \frac{1}{2} \mc{L}_{ij} +
\frac{1}{2} \delta_{ij} X_0 + O(1/2). \end{eqnarray*}
So in this frame, $\nabla = d + A + O(1/2)$ where $A \in \Gamma(T^*X
\otimes \End TX)$ is independent of $t$. In detail: \begin{eqnarray*}
A(X_0) &=& 0, \\
A(X_i) &:& \left\{
\begin{array}{ccl}
X_0 &\to& - X_i \\
X_j &\to& \delta_{ij} X_0 \\
Y_j &\to& - \frac{1}{2} \mc{L}^i_j,
\end{array}
\right.
\\ \\
A(Y_i) &:& \left\{
\begin{array}{ccl}
X_0 &\to& - \frac{1}{2} Y_i \\
X_j &\to& - \frac{1}{2} \mc{L}^j_i \\
Y_j &\to& \frac{1}{2} \mc{L}_{ij} + \frac{1}{2} \delta_{ij} X_0.
\end{array}
\right. \end{eqnarray*} In this frame, define $dA(X,Y) = X \cdot A(Y) - Y \cdot
A(X) - A([X,Y])$. Note that differentiating $A$ in the $X_0$ direction
gives zero, while differentiating $A$ in the direction of $X_i$ or $Y_i$
picks up a $t$ or $\sqrt{t}$ term, and hence becomes $O(1/2)$. Thus $d
A(X,Y) = -A([X,Y]) + O(1/2)$.
The curvature $R$ of $\nabla$ is $d A + [A,A]$, which immediately implies that
\begin{eqnarray*}
R_{X_0, X_i} &=& -A(X_i) + O(1/2), \\
R_{X_0, Y_i} &=& -\frac{1}{2} A(Y_i) + O(1/2), \\
R_{X_i, X_j} &=& [A(X_i), A(X_j) ] + O(1/2), \\
R_{X_i, Y_j} &=& [A(X_i), A(Y_j) ] + O(1/2), \\
R_{Y_i, Y_j} &=& - A(\mc{L}_{ij}) + [A(Y_i), A(Y_j) ] + O(1/2). \\
\end{eqnarray*}
The commutator terms are given by:
\begin{eqnarray*}
[A(X_i), A(X_j) ] &:& \left\{
\begin{array}{ccl}
X_0 & \to & 0 \\
X_k & \to & \delta_{ik} X_j - \delta_{jk} X_i \\
Y_k & \to & \frac{1}{4} \left( \mc{L}_{\mc{L}^j_k}^i - \mc{L}_{\mc{L}^i_k}^j \right),
\end{array}
\right.
\\ \\[0.cm]
[A(X_i), A(Y_j) ] &:& \left\{
\begin{array}{ccl}
X_0 & \to & - \frac{1}{4} \mc{L}^i_j \\
X_k & \to & \frac{1}{4} \mc{L}_{\mc{L}_j^k}^i + \frac{1}{2} \delta_{ik} Y_j \\
Y_k & \to & - \frac{1}{2} \delta_{jk} X_i + \frac{1}{4} \mc{L}_{j\mc{L}^i_k} , \\
\end{array}
\right.
\\ \\[0.cm]
[A(Y_i), A(Y_j) ] &:& \left\{
\begin{array}{ccl}
X_0 & \to & \frac{1}{2} \mc{L}_{ji} \\
X_k & \to & \frac{1}{4} \left( \mc{L}_{j\mc{L}^k_i} - \mc{L}_{i\mc{L}^k_j} \right) + \frac{1}{2} \mc{L}_{ji}^k X_0 \\
Y_k & \to & \frac{1}{4} \left( \mc{L}^{\mc{L}_{ik}}_j - \mc{L}^{\mc{L}_{jk}}_i + \delta_{ik} Y_j - \delta_{jk} Y_i \right).
\end{array}
\right.
\end{eqnarray*}
Now we need to take the Ricci-trace of this expression: \begin{eqnarray*}
\Ric_{X_0, X_0} &=& \sum_i g(X_i, R_{X_i,X_0} X_0 ) + \sum_i g (Y_i, R_{Y_i,X_0} X_0 ) \\
&=& \sum_i g(X_i,-X_i) + \sum_i g(Y_i,-\frac{1}{4} Y_i) + O(1/2)\\
&=& \lambda + O(1/2). \end{eqnarray*} Here $\lambda$ is equal to $-3 - (4m-4)/4 =
-m-2$ in the quaternionic case, and $-7 -8/4 = -9 $ in the octonionic
case. The cross-terms of the Ricci curvature all vanish: \begin{eqnarray*}
\Ric_{X_0, X_i} &=& \sum_j g(X_j, R_{X_j,X_0} X_i ) + \sum_j g (Y_j, R_{Y_j,X_0} X_i ) \\
&=& \sum_j g(X_j, \delta_{ij} X_0) + \sum_j \frac{1}{4}\mc{L}_{jj} + O(1/2) \\
&=& O(1/2), \\
\Ric_{X_0, Y_i} &=& \sum_j g(X_j, R_{X_j,X_0} Y_i ) + \sum_j g (Y_j, R_{Y_j,X_0} Y_i ) \\
&=& O(1/2), \\
\Ric_{X_i, Y_j} &=& g(X_0, R_{X_0, X_i} Y_j) + \sum_k g(X_k, R_{X_k,X_i} Y_j ) + \sum_k g (Y_k, R_{Y_k,X_i} Y_j ) \\
&=& O(1/2), \end{eqnarray*} the last two expressions vanishing because they are
sums of terms of type $g(X,Y)$ with $X \perp Y$. Next, the $\mathcal{D} \times
\mathcal{D}$ term is: \begin{eqnarray*}
\Ric_{X_i, X_j} &=& g(X_0, R_{X_0,X_i} X_j) + \sum_k g(X_k, R_{X_k,X_i} X_j) + \sum_k g(Y_k, R_{Y_k,X_i} X_j) \\
&=& - \delta_{ij} + \delta_{ij} - \sum_k \delta_{ij} g(X_k, X_k) - \sum_k \frac{1}{2} \delta_{ij} g(Y_k, Y_k) \\
&& - \sum_k \frac{1}{4} g(Y_k, \mc{L}^i_{\mc{L}^j_k}) + O(1/2). \end{eqnarray*} In the
quaternionic case, this is \begin{eqnarray} \label{qua:one} \Ric_{X_i, X_j} =
\lambda \delta_{ij} + (1-m) \delta_{ij} + \sum_{k=1}^{4m-4}
\frac{1}{4} \mc{L}^i_{k \mc{L}^j_k} + O(1/2). \end{eqnarray} In the octonionic case,
this is \begin{eqnarray} \label{oct:one} \Ric_{X_i, X_j} = \lambda \delta_{ij} -2
\delta_{ij} + \sum_{k=1}^8 \frac{1}{4} \mc{L}^i_{k \mc{L}^j_k} + O(1/2).
\end{eqnarray} Finally the $(TX/\mathcal{D}) \times (TX/\mathcal{D})$ term is \begin{eqnarray*}
\Ric_{Y_i, Y_j} &=& g(X_0, R_{X_0,Y_i} Y_j) + \sum_k g(X_k, R_{X_k,Y_i} Y_j) + \sum_k g(Y_k, R_{Y_k,Y_i} Y_j) \\
&=& - \frac{1}{4} \delta_{ij} + \sum_k \left(\frac{1}{4} \mc{L}_{i\mc{L}_{j}^k}^k- \frac{1}{2}\delta_{ij} \right) + \frac{1}{4} \left( \delta_{ij} + \sum_k 3 \mc{L}_{ik}^{\mc{L}_{kj}} -\delta_{ij} \right) \\
&& + O(1/2) \\
&=& -\frac{1}{2} \sum_k \delta_{ij} + \frac{1}{4} \sum_k (2
\mc{L}_{ik}^{\mc{L}_{kj}} - \delta_{ij}) + O(1/2), \end{eqnarray*} since \begin{eqnarray*} \sum_k
\mc{L}_{ik}^{\mc{L}_{kj}} = \sum_{kp} \mc{L}_{ik}^p \mc{L}_{kj}^p = \sum_{pk}
\mc{L}_{ip}^k \mc{L}_{pj}^k = \sum_k -\mc{L}_{i \mc{L}_{j}^k}^k. \end{eqnarray*} In the
quaternionic case, the curvature is \begin{eqnarray} \label{qua:two} \Ric_{Y_i,
Y_j} = \lambda \delta_{ij} + \frac{3}{2} \delta_{ij} + \frac{1}{2}
\sum_{k = 1}^{4m-4} \mc{L}_{ik}^{\mc{L}_{kj}} + O(1/2). \end{eqnarray} In the
octonionic case, it is: \begin{eqnarray} \label{oct:two} \Ric_{Y_i, Y_j} =
\lambda \delta_{ij} + \frac{7}{2} \delta_{ij} + \frac{1}{2} \sum_{k =
1}^{8} \mc{L}_{ik}^{\mc{L}_{kj}} + O(1/2). \end{eqnarray}
Now $(M,g)$ is asymptotically Einstein if $\Ric_{X_i, X_j} = \lambda \delta_{ij}
+ O(1/2)$ and $\Ric_{Y_i, Y_j} = \lambda \delta_{ij} + O(1/2)$. From now on, we
will use the Einstein summation convention, where any repeated index
is summed over. Then the equations (\ref{qua:one}) and (\ref{qua:two})
imply that in the quaternionic case, we must have:
\begin{eqnarray} \label{qua:con}
\begin{array}{rcl}
\mc{L}_{ij}^k \mc{L}_{op}^q \gamma^{io} \gamma^{jp} &=& 4(m-1) \eta^{kq}, \\
\mc{L}_{ij}^k \mc{L}_{op}^q \gamma^{io} \eta_{kq} &=& 3 \gamma_{jp},
\end{array}
\end{eqnarray}
while equations (\ref{oct:one}) and (\ref{oct:two}) imply that in the
octonionic case, we must have:
\begin{eqnarray} \label{oct:con}
\begin{array}{rcl}
\mc{L}_{ij}^k \mc{L}_{op}^q \gamma^{io} \gamma^{jp} &=& 8 \eta^{kq}, \\
\mc{L}_{ij}^k \mc{L}_{op}^q \gamma^{io} \eta_{kq} &=& 7 \gamma_{jp}.
\end{array}
\end{eqnarray}
For an $\ell$ sufficiently close to $\kappa$, these equations can be solved
(see Theorem \ref{infi:ein:theo}), and the solution is unique up to
conformal transformations.
\end{proof}
A natural question is whether, as in the quaternionic-contact case,
the conformal class $(\eta,\gamma)$ comes with a quaternionic structure on
$\mathcal{D}$ and $TX/\mathcal{D}$. The same applies for the octonionic-contact
structures, of course. We propose here a construction, where instead
of looking only for a conformal class, one constructs directly a
quaternionic or octonionic structure. As a byproduct, the system
(\ref{qua:con}) or (\ref{oct:con}) is interpreted in a natural way,
see (\ref{al:be:eq}), and existence of a solution is provided.
The automorphism group of these structures is $G_0$, which is
contained in the conformal automorphism group \begin{eqnarray*} G' = \mathbb{R}_+^* \times
SO(\eta) \times SO(\gamma) \end{eqnarray*} of $(\eta,\gamma)$. Thus it seems that to get the
quaternionic/octonionic structures on the manifold, we need to impose
extra equations beyond (\ref{qua:con}) and (\ref{oct:con}).
These can best be understood by looking at the normality $\partial^*$
operator described in \cite{CartEquiv}, \cite{TCPG} and
\cite{capslo}. It is an algebraic Lie algebra co-differential, which
extends naturally to a bundle operator on associated bundles. Let $X$
be a quaternionic- or octonionic-contact manifold, $\mc{K}$ the
corresponding Levi-bracket, and $\mathcal{M}$ any section of $\land^2
\mathcal{D}^* \otimes (TX/\mathcal{D})$. Then $\partial^* \mathcal{M} = \alpha \mathcal{M} \oplus \beta \mathcal{M}$
where \begin{eqnarray} \label{al:be:k}
\begin{array}{rcl}
(\alpha \mathcal{M})^r_q &=& (\gamma^{jr} \gamma^{ip} \eta_{ko}) (\mathcal{M}_{ij}^k \mathcal{K}_{pq}^o), \\
(\beta \mathcal{M})_r^k &=& - \frac{1}{2} (\eta_{or} \gamma^{ip} \gamma^{jq}) (\mathcal{M}_{ij}^k \mathcal{K}_{pq}^o),
\end{array}
\end{eqnarray}
Einstein summation over repeated indexes being assumed. Note that these expressions are invariant under conformal transformations $(\eta, \gamma) \to (f^2 \eta, f \gamma)$.
If we apply $\alpha$ and $\beta$ to $\mc{K}$ itself, we get:
\begin{lemm} \label{hom:model}
In the quaternionic-contact case:
\begin{eqnarray*}
(\alpha \mathcal{K}) &=& 3 Id_{\mathcal{D}}\\
-2(\beta \mathcal{K}) &=& 4(m-1) Id_{TX/\mathcal{D}},
\end{eqnarray*}
while in the octonionic-contact case:
\begin{eqnarray*}
(\alpha \mathcal{K}) &=& 7 Id_{\mathcal{D}}\\
-2(\beta \mathcal{K}) &=& 8 Id_{TX/\mathcal{D}},
\end{eqnarray*}
the same numbers as in equations (\ref{qua:con}) and (\ref{oct:con}).
\end{lemm}
\begin{proof}
Fix $\eta$ and $\gamma$, and pick local orthonormal sections $\{I_1, \cdots, I_p\}$
of $TX / \mathcal{D}$, where $p = 3$ in the quaternionic case, and $p =
7$ in the octonionic case. These all correspond to complex
structures on $\mathcal{D}$. Then for $X,Y \in \Gamma(\mathcal{D})$, $\mc{K}(X,Y)$ can be
written as: \begin{eqnarray*} \mc{K}(X,Y) = \sum_{i=1}^p \gamma(I_i (X),Y) I_i. \end{eqnarray*} By
extension, define $I_0$ to be the identity transformation of
$\mathcal{D}$. Now pick local orthonormal sections $\{Y_1, \cdots Y_q\}$ of
$\mathcal{D}$, chosen so that $I_i Y_j$ is orthogonal to all $I_k Y_l$
whenever $i \neq k$ or $j \neq l$. This is possible, as $\gamma$ must be
hermitian with respect to these complex structures. Here, $q = m-1$
for quaternionic structures, and $q =1$ for octonionic structures.
Again, we may rewrite $\mc{K}$ as: \begin{eqnarray*} \mc{K} = \sum_{i=0,j,k=1}^{i,j=p,k=q}
-(I_j I_i Y_k)^*\otimes (I_iY_k)^* \otimes I_j. \end{eqnarray*} If we raise and lower all
indexes with $\eta$ and $\gamma$, we get $\mc{K}^*$, which is \begin{eqnarray*} \mc{K}^* =
\sum_{i=0,j,k=1}^{i,j=p,k=q} -(I_j I_i Y_k)\otimes (I_iY_k) \otimes (I_j)^*. \end{eqnarray*}
Now $\alpha \mc{K}$ involves taking the trace of $\mc{K}$ and $\mc{K}^*$ over one of
the $\mathcal{D}$ components and over the $TX/\mathcal{D}$ components. The
trace over the $TX/\mathcal{D}$ component is trivial; and if $\llcorner$
denotes contraction between a space and its dual, \begin{eqnarray*}
\alpha \mc{K} &=& \sum_{k,o = 1}^{q} \sum_{j,r = 1}^{p} \sum_{i,l =0}^p \left( (I_j I_i Y_k)^* \llcorner (I_r I_l Y_o) \right) \left(I_j \llcorner I_r^* \right) (I_i Y_k)^* \otimes (I_l Y_o) \\
&=& \sum_{k,o = 1}^{q} \sum_{j,r = 1}^{p} \sum_{i,l =0}^p \left( (I_j I_i Y_k)^* \llcorner (I_r I_l Y_o) \right) \delta_{jr} (I_i Y_k)^* \otimes (I_l Y_o) \\
&=& \sum_{k,o = 1}^{q} \sum_{j = 1}^{p} \sum_{i,l =0}^p \left( (I_j I_i Y_k)^* \llcorner (I_j I_l Y_o) \right) (I_i Y_k)^* \otimes (I_l Y_o) \\
&=& \sum_{k,o = 1}^{q} \sum_{j = 1}^{p} \sum_{i,l =0}^p \delta_{il} \delta_{ko} (I_i Y_k)^* \otimes (I_l Y_o) \\
&=& \sum_{j=1}^p \sum_{i=0}^p \sum_{k=1}^q (I_i Y_k)^* \otimes (I_i Y_k) \\
&=& p Id_{\mathcal{D}}. \end{eqnarray*} The $-2\beta \mc{K}$ term is the contraction of $\mc{K}$
and $\mc{K}^*$ over both their $\mathcal{D}$ components; it is \begin{eqnarray*}
-2 \beta \mc{K} &=& \sum_{k,o = 1}^{q} \sum_{j,r = 1}^{p} \sum_{i,l =0}^p \left( (I_j I_i Y_k)^* \llcorner (I_r I_l Y_o) \right) \left( (I_i Y_k)^* \llcorner (I_l Y_o) \right) I_j \otimes I_r^* \\
&=& \sum_{k,o = 1}^{q} \sum_{j,r = 1}^{p} \sum_{i,l =0}^p \left( (I_j I_i Y_k)^* \llcorner (I_r I_l Y_o) \right) (\delta_{ko} \delta_{il}) I_j \otimes I_r^* \\
&=& \sum_{k = 1}^{q} \sum_{i =0}^p \sum_{j,r = 1}^{p} \left( (I_j I_i Y_k)^* \llcorner (I_r I_i Y_k) \right) I_j \otimes I_r^* \\
&=& \sum_{k = 1}^{q} \sum_{i =0}^p \sum_{j,r = 1}^{p} \delta_{jr} I_j \otimes I_r^* \\
&=& \sum_{k = 1}^{q} \sum_{i =0}^p \sum_{j = 1}^{p} I_j \otimes I_j^* \\
&=& q (p+1) Id_{TX/\mathcal{D}}. \end{eqnarray*} Then substituting in the values for
$p$ and $q$ gives the result.
\end{proof}
Now if $\mathcal{N}$ is a section of $\land^2 \mathcal{D}^* \otimes
(TX/\mathcal{D})$, we may use it in equations (\ref{al:be:k}) instead of
$\mc{K}$; in that case, define
\begin{eqnarray*}
(\alpha_{\mathcal{N}} \mathcal{M})^r_q &=& (\gamma^{jr} \gamma^{ip} \eta_{ko}) (\mathcal{M}_{ij}^k {\mathcal{N}}_{pq}^o), \\
(\beta_{\mathcal{N}} \mathcal{M})_r^k &=& - \frac{1}{2} (\eta_{or} \gamma^{ip}
\gamma^{jq}) (\mathcal{M}_{ij}^k {\mathcal{N}}_{pq}^o). \end{eqnarray*}
Similarly, though $\mathcal{K}$ defines the conformal class of $(\eta,
\gamma)$ (through the reduction to structure group $G_0 \subset G'$),
there is no reason to require that $\mathcal{K}$ be the Levi-bracket of the
distribution $\mathcal{D}$. Given $(\eta,\gamma)$ on a general manifold
with distribution $\mathcal{D}$ of correct dimension and co-dimension, they
define a (local) class of compatible brackets $\mathcal{K}$ of
quaternionic-contact or octonionic-contact type. Then the equations
(\ref{qua:con}) and (\ref{oct:con}) can be rewritten as saying that we
must find $(\eta,\gamma)$ such that for any $\mathcal{K}$ compatible with
them, \begin{eqnarray}
\label{alph:eq}\alpha_{\mathcal{K}} \mc{K} &=& \alpha_{\mc{L}} \mc{L}, \\
\beta_{\mathcal{K}} \mc{K} &=& \beta_{\mc{L}} \mc{L}, \end{eqnarray} or, more compactly, \begin{eqnarray}
\label{al:be:eq}
\partial^*_{\mc{K}} \mc{K} = \partial^*_{\mc{L}} \mc{L}.
\end{eqnarray}
It is easy to see that these equations are conformally invariant.
\begin{rem}
It is useful to compare these equations with those defining a
`Damek-Ricci' space (this is a subclass of Heber's metrics, see
\cite{DamekRicci}). For any section $Z$ of $TX/\mathcal{D}$, we may define
an endomorphism $J_Z$ of $\mathcal{D}$ by \begin{eqnarray*} \gamma (J_Z X, Y) = \eta^2
(Z, \mathcal{L}(X,Y)), \end{eqnarray*} for sections $X$ and $Y$ of $\mathcal{D}$. Then $X$
is asymptotically Damek-Ricci if $J_Z^2 = -\eta^2(Z,Z)\,Id_{\mathcal{D}}$. Now if
$\{Z_j\}$ is a local orthonormal frame for $TX / \mathcal{D}$, then we
may rewrite $\alpha_{\mc{L}} \mc{L}$ once more as \begin{eqnarray*}
\alpha_{\mc{L}} \mc{L} &=& \Tr_{\gamma} \big( \sum_{jk} \eta^2(Z_j,Z_k) \eta^2(Z_j, \mc{L}) \otimes \eta^2(Z_k, \mc{L}) \big) \\
&=& \Tr_{\gamma} \big( \sum_{j} J_{Z_j} \otimes J_{Z_j} \big) \\
&=& - \sum_j J_{Z_j}^2. \end{eqnarray*} Since $Z_j$ has unit norm, $J_{Z_j}^2 = -
Id_{\mathcal{D}}$, and Damek-Ricci spaces must solve equation
(\ref{alph:eq}). Similarly, for $Z$ and $Z'$ sections of $TX /
\mathcal{D}$, \begin{eqnarray*} -2 \eta^2 ( (\beta_{\mc{L}} \mc{L}) (Z), Z') &=& \Tr_{\gamma}
\ \Tr_{\gamma} \ J_{Z'} \otimes J_{Z}. \end{eqnarray*} Since this must be
symmetric, its values are determined by taking $Z = Z'$; in which
case it is $4(m-1) \eta^2(Z,Z)$ in the quaternionic case, and
$8\eta^2(Z,Z)$ in the octonionic one. Consequently Damek-Ricci
spaces are special solutions of equation (\ref{al:be:eq}), as are
any spaces that are asymptotically Damek-Ricci (i.e. spaces with
$\mc{L}$, $\gamma$ and $\eta^2$ such that the relation $J_Z^2 = -
\eta^2(Z,Z)\,Id_{\mathcal{D}}$ holds).
\end{rem}
It is still somewhat unsatisfactory that there is a large class of $\mathcal{K}$ compatible with a given $\mathcal{L}$. It would be better
to have a procedure that fixes $\mathcal{K}$ uniquely (and hence the
quaternionic/octonionic structure, as well as $(\eta,\gamma)$).
In the quaternionic case, the dimension of $L$ (the full graded
automorphism group) is $(4m-4)^2 + 3^2 = 16m^2 - 32m + 25$, while the
group $G'$ is of dimension $(4m-4)(4m-5)/2 + 3 + 1 = 8m^2 -18m + 14$
and $G_0$ is of dimension $ (2m-2)(2m-1)/2 + 3 + 1 = 2m^2 - 3m +5$.
Looking at equation (\ref{al:be:eq}), one can see that $\alpha_{\mc{L}} \mc{L}$
takes values in the $\gamma$-symmetric component of $\mathcal{D} \otimes \mathcal{D}^*$,
while $\beta_{\mc{L}} \mc{L}$ takes values in the $\eta$-symmetric component of
$(TX/\mathcal{D}) \otimes (TX/\mathcal{D})^*$. These values are not completely
independent, however: the $\gamma$ trace of $\alpha_{\mc{L}} \mc{L}$ is the complete
trace of $\mc{L}$ with itself, as is the $\eta$ trace of $-2 \beta_{\mc{L}}
\mc{L}$. Hence there is one extra relation, giving a total of
$(4m-4)(4m-3)/2 + 6 -1 = 8m^2 - 14 m +11$ independent equations --
just the right amount to reduce the structure group from $L$ to $G'$.
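For the reader's convenience, the count can be checked directly (elementary arithmetic):
\begin{eqnarray*}
\dim L-\dim G' &=& (16m^2-32m+25)-(8m^2-18m+14) \;=\; 8m^2-14m+11,\\
\frac{(4m-4)(4m-3)}2+6-1 &=& (8m^2-14m+6)+5 \;=\; 8m^2-14m+11,
\end{eqnarray*}
so the number of independent equations matches the number of degrees of freedom to be removed, as claimed.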
Now let us consider a slight deformation of a quaternionic-contact
structure; where $\mc{L} = \mc{K} + \epsilon \mathcal{M}$. Re-writing equation
(\ref{al:be:eq}): \begin{eqnarray*}
0 &=& \partial^*_{\mc{L}} \mc{L} - \partial^*_{\mc{K}} \mc{K}\\
&=& \epsilon \left(\partial_{\mathcal{M}}^* \mc{K} + \partial_{\mc{K}}^* \mathcal{M}\right) + O(\epsilon^2). \end{eqnarray*}
The $\epsilon$ term is the symmetric part of $\partial_{\mc{K}}^* \mathcal{M}$; so, to first
order, the requirement is that $\partial^*_{\mc{K}} \mc{L}$ be completely
anti-symmetric. A method for fixing $\mc{K}$ is suggested by the following
lemma:
\begin{lemm}
The equation $\partial^*_{\mc{K}} \mathcal{M} = 0$ consists of $16 m^2 - 32m + 24 $
independent equations, which is exactly enough to restrict the
structure group from $L$ to $G_0$.
\end{lemm}
\begin{proof}
The operator $\partial^*$ takes values in $\mathcal{D} \otimes \mathcal{D}^* \oplus (TX/\mathcal{D})
\otimes (TX/\mathcal{D})^*$. This bundle may be identified with $E(\mathfrak{l})$,
where $\mathfrak{l}$ is the Lie algebra of $L$. The bracket $\mc{K}$
defines a reduction to the structure group $G_0$ and hence
$E_0$, a $G_0$-principal bundle. This defines the vector bundle
$E_0(\mathfrak{g}_0)$, with $\mathfrak{g}_0$ the Lie algebra of
$G_0$. The inclusion $E_0 \subset E$ defines an inclusion of this
bundle into $E(\mathfrak{l})$. Then paper \cite{capslo} implies that the
image of $\partial^*$ is transverse to $E_0(\mathfrak{g}_0)$, giving us our
dimensionality result.
\end{proof}
So a natural candidate for a condition fixing $\mc{K}$ is one whose linearization
close to a quaternionic-contact structure requires that the
anti-symmetric part of $\partial_{\mc{K}}^* \mathcal{M}$ vanishes.
The simplest such condition is to simply require that the anti-symmetric part of $\partial_{\mc{K}}^* \mathcal{L}$ vanishes. Thus:
\begin{defi}[Compatibility]
The algebraic bracket $\mathcal{K}$, a section of $\land^2 \mathcal{D}^* \otimes
(TX/\mathcal{D})$, is compatible with the Levi bracket $\mc{L}$ if:
\begin{eqnarray*}gin{enumerate}
\item $\mathcal{K}$ is of quaternionic-contact or octonionic-contact type
-- hence the dimension and co-dimension of $\mathcal{D}$ are correct, and
$\mc{K}$ defines a pair of metrics $(\eta,\gamma)$ up to conformal
transformations,
\item $\partial^*_{\mc{K}} \mc{K} = \partial^*_{\mc{L}} \mc{L}$,
\item $\partial^*_{\mc{K}} \mc{L}$ is symmetric.
\end{enumerate}
\end{defi}
Now, this definition is similar to, but not identical with, the condition
for non-regular two-graded geometries laid out in \cite{metwograd};
indeed, the condition there (that $\partial^*_{\mc{K}} \mc{L} = 0$) is precisely the
infinitesimal version of the above.
Recall that in general, $\mathcal{L}$ is defined by an $L$-equivariant map
$f_{\mc{L}}$ from $E$ to $W = \land^2 V_1^* \otimes V_2$. This allows us to phrase
our general result:
\begin{theo} \label{infi:ein:theo} Let $X$ be a manifold with
distribution $\mathcal{D} \subset TX$ of the right dimension and
co-dimension. Then there is an open set $U \subset W$, containing the
standard bracket $\kappa$, such that for all points $x \in X$ where
$f(E_x)$ intersects $U$, there exists a locally unique, continuously
defined, choice of compatible algebraic bracket $\mathcal{K}_x$.
\end{theo}
\begin{proof}
We need to prove the existence and uniqueness of a compatible
$\mathcal{K}$. This is a purely algebraic construction, so we may work at
a point. If we choose the natural bracket $\kappa$ to be fixed in $\land^2
V_1^* \otimes V_2$, define $\theta$ as the map $\land^2 V_1^* \otimes V_2 \to \mathfrak{s}$,
\begin{eqnarray} \label{theta:equations} \theta = \frac{1}{2} \left(\partial^*_{\ell} \ell +
\partial^*_{\kappa} \ell - \partial^*_\ell {\kappa} \right) - \partial_{\kappa}^* {\kappa}. \end{eqnarray} Note that the
first $\ell$ term must be symmetric, while the other two $\ell$ terms
together are anti-symmetric, so there is no overlap between
them. Another important fact is that the Lie algebra $\mathfrak{g}_0$ has
one symmetric part (the grading element) and the rest is
anti-symmetric. We already know that the image of $\partial^*_{\ell} \ell$ is of
co-dimension one in the symmetric part of $\mathfrak{s}$; its image is
precisely the part transverse to the grading element $(2Id,
Id)$. Now $\partial^*_{\kappa} \ell - \partial^*_\ell {\kappa}$ is simply the anti-symmetric part
of $2\partial^*_{\kappa} \ell$. We know that $2\partial^*_{\kappa} \ell$ must be transverse to
$\mathfrak{g}_0$, and hence so is its anti-symmetric part. Consequently
$\theta$ maps into $\mathfrak{l} / \mathfrak{g}_0$.
Now if $f_{\mc{L}}(E_x)$ intersects the zero set of $\theta$, then there is
a point $p \in E_x$ such that $\theta(f_{\mc{L}}(p)) = 0$. Then if we
define $\mathcal{K}_x$ by the property that $f_{\mathcal{K}}(p) = {\kappa}$, we
will get the vanishing of the bundle version of equation
(\ref{theta:equations}). Hence this $\mathcal{K}$ will be compatible.
So what we need to show is that the $L$-orbit of the zero set of $\theta$
contains an open set $U$ around ${\kappa}$. Now consider the map $\Theta: L \times
W \to \mathfrak{s}$, \begin{eqnarray*} \Theta(s,\ell) = \theta(s \cdot \ell), \end{eqnarray*} where $s \cdot \ell$ denotes the
action of $s \in L$ on $\ell$. We wish to calculate the derivative of
this map in the $L$ directions around the point $({\kappa},Id)$. Let $s \in
\mathfrak{l}$; then a short calculation demonstrates that this
derivative is \begin{eqnarray*} D_{\Theta}(s)({\kappa},Id) = \partial^*_{\kappa} ( \partial_{\kappa} s) \end{eqnarray*} where
\begin{eqnarray*} (\partial_{\kappa} s )(x,y) = {\kappa}(s(x),y) + {\kappa}(x,s(y)) - s({\kappa}(x,y)) \end{eqnarray*}
(see \cite{metwograd} for more details of how this is
derived). Paper \cite{capslo} then demonstrates that $\partial_{\kappa}^* \partial_{\kappa}$
is an invertible map from the image of $\partial^*$ to itself, with kernel
equal to $\mathfrak{g}_0$. An extra subtlety is needed to demonstrate that
result, namely the vanishing of the first cohomology groups
$H^{(1)}(\mathfrak{g}^+,\mathfrak{g})$ in homogeneity zero, see \cite{capslo} and
\cite{metwograd}. But Kostant's proof of the Bott-Borel-Weil theorem
(\cite{Kostant}) shows that this is indeed the case in our situation.
Hence, under the action of $L$, $\theta(s \cdot {\kappa})$ must trace out an open
neighbourhood of zero in $\mathfrak{l} / \mathfrak{g}_0$. This property must
extend to points $\ell$ close to ${\kappa}$ by the implicit function
theorem, defining our set $U$.
Now let $\ell$ be in $U$ intersected with the zero set of $\theta$. If $\ell$
is close enough to ${\kappa}$ (possibly restricting $U$ to a smaller open
subset), we know that if $B_\ell\subset L$ is defined such that $\theta(b \cdot \ell ) =
0$ for all $b \in B_\ell$, then $B_\ell$ must be of the same dimension as
$\mathfrak{g}_0$ (at least around the identity in $L$). However, if $g \in
G_0$, then \begin{eqnarray*}
\theta(g \cdot \ell) &=& \frac{1}{2} \left(\partial^*_{g \cdot \ell} g \cdot \ell + \partial^*_{\kappa} (g \cdot \ell) - \partial^*_{(g \cdot \ell)} {\kappa} \right) - \partial_{\kappa}^* {\kappa} \\
&=& \frac{1}{2} \left(\partial^*_{g \cdot \ell} g \cdot \ell + g \cdot (\partial^*_{\kappa} \ell - \partial^*_\ell {\kappa}) \right) - \partial_{\kappa}^* {\kappa} \\
&=& \frac{1}{2} (\partial^*_{\ell} \ell) - \partial_{\kappa}^* {\kappa} = 0,\\
\end{eqnarray*} since $g$ is a conformal transformation, commutes with $\partial^*$,
and $\partial^*_{\kappa} \ell - \partial^*_\ell {\kappa} = 0$ by the assumption $\theta(\ell) = 0$.
Hence around the identity, dimension count implies that $B_\ell$ is
precisely the group $G_0$. Action by $G_0$ preserves ${\kappa}$, so does
not affect the value of $\mathcal{K}_x$. Consequently, the choice of
$\mathcal{K}_x$ is locally unique for $\ell \in U$.
\end{proof}
\begin{rem}
As noted before, the condition that $\partial^*_{\mc{K}} \mc{L}$ be symmetric can
be replaced with any other condition that approximates the one above
to first order. There are more natural candidates for that --
involving, for instance, the decomposition of the partial trace
$\mc{L}_{ij}^k \mc{L}_{lo}^r \gamma^{il}$ into irreducible $G_0$ components,
and the vanishing of one of these components. But since we have been
unable to find a direct use of such a result (it affects the
curvature of the asymptotically Einstein metric, but it is not clear
exactly how), we have kept the simpler condition in this paper.
\end{rem}
\section{Construction of the Einstein metrics}
\label{sec:constr-einst-metr}
In this section, we prove theorem \ref{th:main}, along the lines of
\cite{Biq00}. Because we restrict to the case of small deformations of
the model hyperbolic metric, we are able to give a short direct proof,
in which the main step is a uniform estimate for the norm of the
inverse of the linearization.
We start with the quaternionic or octonionic hyperbolic space $M$, whose
metric in polar coordinates is expressed in both cases by
\begin{equation}
\label{eq:1}
g_0 = dr^2 + \sinh^2(\tfrac r2)\gamma_0 + \sinh^2(r) \eta_0^2 .
\end{equation}
Here $\eta_0$ is a 1-form on $S^{4m-1}$ (resp. $S^{15}$) with values in
$\mathbb{R}^3$ (resp. $\mathbb{R}^7$), and $\gamma_0$ is the induced metric on the
$4(m-1)$-dimensional (resp. $8$-dimensional) distribution $\mathcal{D}_0$ of
$S^n$.
We will need the mean curvature $H_0(r)=\partial_r\log v$ of the spheres
$r=\mathrm{cst}$, where $v$ is the volume element. It is given by
$H_0(r)=2(m-1)\coth(\frac r2)+3\coth(r)$ in the quaternionic case, or
$H_0(r)=4\coth(\frac r2)+7\coth(r)$ in the octonionic case. Also we
denote by $$\mathcal{H}=\lim_{r\to\infty}H_0(r) $$ the limit at infinity, so that
$\mathcal{H}=2m+1$ in the quaternionic case and $\mathcal{H}=11$ in the octonionic
case.
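As a quick check of these formulas (elementary, and only included for convenience): in the quaternionic case the volume element of (\ref{eq:1}) is $v=\sinh^{4(m-1)}(\tfrac r2)\,\sinh^3(r)\,v_0$ with $v_0$ independent of $r$, so
\begin{eqnarray*}
H_0(r)=\partial_r\log v = 4(m-1)\cdot\tfrac12\coth(\tfrac r2)+3\coth(r)=2(m-1)\coth(\tfrac r2)+3\coth(r),
\end{eqnarray*}
which tends to $2(m-1)+3=2m+1=\mathcal{H}$ as $r\to\infty$; similarly, in the octonionic case $H_0(r)=8\cdot\tfrac12\coth(\tfrac r2)+7\coth(r)=4\coth(\tfrac r2)+7\coth(r)\to 4+7=11$.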
Suppose that we have now a small perturbation $\mathcal{D}$ of the
distribution $\mathcal{D}_0$. From proposition \ref{prop:constr-ae} we have
constructed $(\gamma,\eta)$ with $\mathcal{D}=\ker \eta$ such that the metric
\begin{equation}
\label{eq:2}
g_\mathcal{D} = dr^2 + \sinh^2(\tfrac r2)\gamma + \sinh^2(r) \eta^2
\end{equation}
is asymptotically Einstein :
\begin{equation}
\label{eq:3}
\Ric(g_\mathcal{D})-\lambda g_\mathcal{D} = O(e^{-\frac r2}) ,
\end{equation}
with $\lambda=-m-2$ (resp. $\lambda=-9$). Here the norms are with respect to
$g_\mathcal{D}$. Actually, in the proof of proposition \ref{prop:constr-ae},
we proved more, namely that there is an expansion of the curvature,
\begin{equation}
\label{eq:18}
R = R_0 + e^{-\frac r2} R_1 + e^{-r} R_2 + \cdots ,
\end{equation}
where the terms $R_i$ do not depend on $r$, the term $R_1$ depends on
one derivative of the bracket $\mathcal{L}$ on the boundary, and the other
terms depend on two derivatives of $\mathcal{L}$. This immediately implies
\begin{equation}
\label{eq:4}
|\nabla^k(\Ric(g_\mathcal{D})-\lambda g_\mathcal{D})| \leq c_k e^{-\frac r2} \text{ for all }k,
\end{equation}
where $c_k$ can be made small if $\mathcal{L}$ is $C^{k+2}$ close to the
standard bracket.
Of course, the formula (\ref{eq:2}) does not give a smooth metric at
the origin. To remedy this, we choose a cutoff function $\chi(r)$, such
that $\chi(r)=1$ for $r\geq R+1$ and $\chi(r)=0$ for $r\leq R-1$. Then we define
\begin{equation}
\label{eq:5}
g = \chi g_\mathcal{D} + (1-\chi) g_0 .
\end{equation}
The metric $g$ is a global filling of $(\gamma,\eta)$ in the ball.
The first observation is that the metrics $g_\mathcal{D}$ have uniform geometry:
\begin{lemm}\label{lem:uniform-geom}
Suppose $k\geq 2$. For $\mathcal{D}$ varying in a fixed $C^{k+1}$ neighbourhood
of $\mathcal{D}_0$, the sectional curvature of $g$ is negative, and the
curvature of $g$ and its covariant derivatives up to order $k-2$ are
uniformly bounded.
\end{lemm}
\begin{proof}
A $C^{k+1}$ control of $\mathcal{D}$ gives a $C^k$ control of the conformal
metric $(\eta,\gamma)$, since one derivative is needed to calculate the Levi
bracket and $(\eta,\gamma)$ is then obtained as the solution of algebraic
equations. Therefore we have a $C^k$ control on the coefficients of
$g$. The lemma then follows from the form (\ref{eq:18}) of the
curvature.
\end{proof}
This implies that balls for the metrics $g_\mathcal{D}$ are uniformly
comparable with Euclidean balls. Then the Hölder norm of a function
$f$ is defined as the supremum of the Hölder norms of $f$ on each ball
of radius $1$.
The analysis of the Einstein equation requires the use of weighted
Hölder spaces. Our weight function will be
\begin{equation}
\label{eq:6}
w(r) = \cosh(r)^\delta
\end{equation}
and we then define the weighted Hölder space $C^{k,\alpha}_\delta=w^{-1}C^{k,\alpha}$.
Of course, from the initial estimate (\ref{eq:3}), the weight we are
interested in is $\delta=\frac 12$.
As in \cite[chapter I]{Biq00}, the Einstein metric will be constructed as a
solution $h$ of the equation
\begin{equation}
\label{eq:7}
\Phi^g(h):=\Ric(h)-\lambda h+\delta_h^*(\delta_gh+\tfrac 12 d \Tr_gh) = 0,
\end{equation}
and we require that $h$ is asymptotic to $g$ in the sense that
\begin{equation}
\label{eq:8}
h-g \in C^{2,\alpha}_{1/2}.
\end{equation}
Indeed, by \cite[lemma I.1.4]{Biq00}, a solution $h$ of $\Phi^g(h)=0$
then satisfies $\delta_gh+\tfrac 12 d \Tr_gh=0$ and $\Ric(h)=\lambda h$. Given
lemma \ref{lem:uniform-geom} (in particular, the negative curvature of
$g$ implies that the linearization of $\Phi^g$ has no $L^2$ kernel), the
proof in \cite{Biq00} applies and proves that if the data $(\gamma,\eta)$ is
sufficiently close to $(\gamma_0,\eta_0)$ in $C^{2,\alpha}$ norm, that is if $\mathcal{D}$
is sufficiently close to $\mathcal{D}_0$ in $C^{3,\alpha}$ norm, then one can find
a solution $h$ of (\ref{eq:7}), if one has a uniform bound on the
inverse of the linearization of $\Phi^g$. This is provided by:
\begin{lemm}\label{lemm:P_invertible}
Suppose that $\frac 12(\mathcal{H}-\sqrt{\mathcal{H}^2-8})<\delta<\frac
12(\mathcal{H}+\sqrt{\mathcal{H}^2-8})$. For $\mathcal{D}$ sufficiently close to $\mathcal{D}_0$ in
$C^{3,\alpha}$ norm, the linearization
$P_g=d_g\Phi^g:C^{2,\alpha}_\delta(\Sym^2T^*M)\to C^\alpha_\delta(\Sym^2T^*M)$ is invertible
and the norm of the inverse is uniformly bounded.
\end{lemm}
From the value of $\mathcal{H}$, we check that the weight $\delta=\frac 12$ indeed
satisfies the hypothesis, so theorem \ref{th:main} follows from the
lemma.
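Explicitly (an elementary check): $\delta=\tfrac12$ lies above the lower endpoint if and only if $\mathcal{H}-1<\sqrt{\mathcal{H}^2-8}$, that is
\begin{eqnarray*}
(\mathcal{H}-1)^2<\mathcal{H}^2-8 \quad\Longleftrightarrow\quad 2\mathcal{H}>9,
\end{eqnarray*}
which holds since $\mathcal{H}=2m+1\geq 5$ in the quaternionic case and $\mathcal{H}=11$ in the octonionic case; the upper endpoint condition $\tfrac12<\tfrac12(\mathcal{H}+\sqrt{\mathcal{H}^2-8})$ is immediate.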
So we now concentrate on the proof of the lemma. One has
$$P_g = \tfrac 12 \nabla^*\nabla - \Rr_g . $$
The property of the curvature term $\Rr_g$ we need is the following
\cite[lemmas I.4.1 and I.4.2]{Biq00}: for the hyperbolic metric $g_0$,
the largest eigenvalue of $\Rr_{g_0}$ is equal to $1$ (instead of $4$
in \cite{Biq00}, because here we normalize the sectional
curvature of $g_0$ in $[-1,-\frac 14]$ instead of $[-4,-1]$). This
immediately implies that, for $\mathcal{D}$ close enough to $\mathcal{D}_0$ in $C^3$
norm, one has
\begin{equation}
\label{eq:9}
\Rr_g \leq 1 + \epsilon .
\end{equation}
For the function $w$ depending on $r$ only, one has
\begin{equation}
\label{eq:10}
\Delta w = - \partial_r^2w - H(r) \partial_rw,
\end{equation}
where $H(r)=\partial_r\log v$ is the mean curvature. For the metric $g$ given by
(\ref{eq:5}), the mean curvature $H(r)$ coincides with $H_0(r)$ for
$r\geq R+1$ or $r\leq R-1$, and for $R-1\leq r\leq R+1$ we get $|H(r)-H_0(r)|\leq \epsilon$
if we suppose $(\gamma,\eta)$ close enough to $(\gamma_0,\eta_0)$.
An easy calculation gives, for the hyperbolic metric,
\begin{equation}
\label{eq:11}
-\frac{\Delta w}w-2\frac{|dw|^2}{w^2} =
\delta \left( \mathcal{H}-\delta + \frac{\dim \mathcal{D}}{2\cosh r} + \frac{\delta+1}{\cosh^2r} \right).
\end{equation}
It follows that, for the metric $g$, if $\mathcal{D}$ is sufficiently close to
$\mathcal{D}_0$,
\begin{equation}
\label{eq:12}
-\frac{\Delta w}w-2\frac{|dw|^2}{w^2} \geq \delta (\mathcal{H}-\delta-\epsilon).
\end{equation}
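For the reader who wishes to verify (\ref{eq:11}) (an elementary computation, only sketched here): for $w=\cosh(r)^\delta$ one has $\partial_rw/w=\delta\tanh r$ and $\partial_r^2w/w=\delta\big((\delta-1)\tanh^2r+1\big)$, so by (\ref{eq:10}),
\begin{eqnarray*}
-\frac{\Delta w}w-2\frac{|dw|^2}{w^2}
&=& \delta\Big(1+(\delta-1)\tanh^2r+H_0(r)\tanh r-2\delta\tanh^2r\Big)\\
&=& \delta\Big(\mathcal{H}-\delta+\frac{\dim\mathcal{D}}{2\cosh r}+\frac{\delta+1}{\cosh^2r}\Big),
\end{eqnarray*}
the last line using $\tanh^2r=1-\cosh^{-2}r$, $\coth(r)\tanh(r)=1$ and $\coth(\tfrac r2)\tanh(r)=1+\cosh^{-1}r$.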
Using this property of the weight function $w$, we can now establish
lemma \ref{lemm:P_invertible} using the maximum principle. From Kato's
inequality,
$$ \langle u,\nabla^*\nabla u\rangle = |u|\Delta|u| + |\nabla u|^2 - |d|u| |^2 \geq |u|\Delta|u| . $$
Using the formula
\begin{align*}
w\Delta|u| &= \Delta(w|u| )-w|u|\big(\frac{\Delta w}w+2\frac{|dw|^2}{|w|^2}\big)
+2\big\langle\frac{dw}w,d(w|u| )\big\rangle\\
&\geq \Delta(w|u| )+\delta (\mathcal{H}-\delta-\epsilon)w|u|+2\big\langle\frac{dw}w,d(w|u| )\big\rangle
\end{align*}
it follows from (\ref{eq:9}) that
\begin{equation}
\label{eq:13}
w|Pu| \geq \tfrac 12 \Delta(w|u| )+ \big( \tfrac 12 \delta(\mathcal{H}-\delta-\epsilon) - 1 - \epsilon \big) w|u| + \big\langle\frac{dw}w,d(w|u| )\big\rangle .
\end{equation}
Let $A=\tfrac 12 \delta(\mathcal{H}-\delta-\epsilon) - 1 - \epsilon$. If $\delta$ satisfies the hypothesis
of lemma \ref{lemm:P_invertible}, then one can choose $\epsilon$ sufficiently
small so that $A>0$. Then by the maximum principle applied to $w|u|$,
it follows that
\begin{equation}
\label{eq:13b}
\sup (w|u| ) \leq A^{-1} \sup (w|Pu| ).
\end{equation}
(A priori we cannot apply the maximum principle to $w|u|$ since it need
not go to zero at infinity, but we can apply it for $w=(\cosh
r)^{\delta'}$ for any $\delta'<\delta$; then taking $\delta' \to \delta$ gives the estimate).
From this estimate, it is immediate that if $v\in C^\alpha_\delta$, then one can
solve $Pu=v$ with $u\in C^0_\delta$ and $\|u\|_{C^0_\delta}\leq A^{-1}\|v\|_{C^0_\delta}$. It
remains to obtain a bound on higher derivatives, but from the uniform
geometry lemma \ref{lem:uniform-geom}, applying the usual elliptic
estimate in each ball, one obtains a constant $C$ such that
$$ \|u\|_{C^{2,\alpha}_\delta} \leq C \big( \|Pu\|_{C^\alpha_\delta} + \|u\|_{C^0_\delta} \big) \leq
C(1+A^{-1})\|Pu\|_{C^\alpha_\delta} $$
which is the required estimate.
\begin{rem}
The previous lemma does not give an optimal interval of weights for
the isomorphism. In \cite{Biq00} the optimal interval for $g_0$ is
calculated; using microlocal analysis, it is proved in \cite{BM}
that the same interval holds if the distribution $\mathcal{D}$ is
quaternionic-contact (the regular case). In general, the optimal
interval may depend on the supremum of the eigenvalues of the
curvatures $\Rr_x$, where $\Rr_x$ is the curvature of the homogeneous
Einstein model attached to the point $x$ of the boundary.
\end{rem}
\end{document}
\begin{document}
\title{Optimal Sorting with Persistent Comparison Errors\thanks{Research supported by SNF (project number 200021\_165524).}}
\pagestyle{empty}
\begin{abstract}
We consider the problem of sorting $n$ elements in the case of \emph{persistent} comparison errors. In this model (Braverman and Mossel, SODA'08), each comparison between two elements can be wrong with some fixed (small) probability $p$, and \emph{comparisons cannot be repeated}.
Sorting perfectly in this model is impossible, and the objective is to minimize the \emph{dislocation} of each element in the output sequence, that is, the difference between its true rank and its position. Existing lower bounds for this problem show that no algorithm can guarantee, with high probability, \emph{maximum dislocation} and \emph{total dislocation} better than $\Omega(\log n)$ and $\Omega(n)$, respectively, regardless of its running time.
In this paper, we present the first \emph{$O(n\log n)$-time} sorting algorithm that guarantees both \emph{$O(\log n)$ maximum dislocation} and \emph{$O(n)$ total dislocation} with high probability. Besides improving over the previous state-of-the-art algorithms -- the best known algorithm had running time $\tilde{O}(n^{3/2})$ -- our result indicates that comparison errors do not make the problem computationally more difficult: a sequence with the best possible dislocation can be obtained in $O(n\log n)$ time and, even without comparison errors, $\Omega(n\log n)$ time is necessary to guarantee such dislocation bounds.
In order to achieve this optimal result, we solve two sub-problems, and the respective methods have their own merits for further application.
One is how to locate a position in which to insert an element in an almost-sorted sequence having $O(\log n)$ maximum dislocation in such a way that the dislocation of the resulting sequence will still be $O(\log n)$.
The other is how to simultaneously insert $m$ elements into an almost sorted sequence of $m$ different elements, such that the resulting sequence of $2m$ elements remains almost sorted.
\end{abstract}
\pagestyle{plain}
\setcounter{page}{1}
\section{Introduction}\label{sec-introduction}
We study the problem of \emph{sorting} $n$ distinct elements under \emph{persistent} random comparison \emph{errors}. This problem arises naturally when sorting is applied to real-life scenarios. For example, one could use experts to compare items, with each comparison being performed by one expert. As these operations are typically expensive, one cannot repeat them, and the result may sometimes be erroneous. Still, one would like to reconstruct from this information the correct (or a nearly correct) order of the elements.
In this classical model, which \emph{does not allow resampling},
each comparison is wrong with some fixed (small) probability $p$, and correct with probability $1-p$.\footnote{As in previous works, we assume $p<1/32$ though the results hold for $p<1/16$.} The comparison errors are independent over all possible pairs of elements, but they are persistent: Repeating the same comparison several times is useless since the result is always the same, i.e., always wrong or always correct.
Because of errors, it is impossible to sort correctly and therefore, one seeks to return a ``nearly sorted'' sequence, that is, a sequence where the elements are ``close'' to their correct positions.
To measure the quality of an output sequence in terms of sortedness, a common way is to consider the \emph{dislocation} of an element, which is the difference between its position in the output and its position in the correctly sorted sequence. In particular, one can consider the \emph{maximum dislocation} of any element in the sequence or the \emph{total dislocation} of the sequence, i.e., the sum of the dislocations of all $n$ elements.
Note that sorting with persistent errors as above is much more difficult than the case in which comparisons can be repeated, where a trivial $O(n\log^2 n)$ time solution is enough to sort perfectly with high probability (simply repeat each comparison $O(\log n)$ times and take the majority of the results). Instead, in the model with persistent errors, it is impossible to sort perfectly as no algorithm can achieve a maximum dislocation that is smaller than $\Omega(\log n)$ w.h.p., or total dislocation smaller than $\Omega(n)$ in expectation \cite{geissmann_et_al}.
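To make the contrast concrete, the following minimal sketch (ours, with illustrative names; not an algorithm from the literature discussed below) implements this trivial strategy for the model \emph{with} resampling: every noisy comparison is repeated $\Theta(\log n)$ times and the majority outcome is fed to a standard sorting routine.
\begin{verbatim}
import random
from functools import cmp_to_key

def majority_compare(x, y, noisy_less, reps):
    # Repeat the noisy comparison `reps` times and take the majority vote.
    # Only meaningful when errors are independent across repetitions,
    # i.e. in the model that allows resampling.
    votes = sum(1 if noisy_less(x, y) else -1 for _ in range(reps))
    return -1 if votes > 0 else 1

def sort_with_resampling(items, noisy_less, c=5):
    # Theta(log n) repetitions per comparison => O(n log^2 n) comparisons
    # overall, and the output is perfectly sorted with high probability.
    n = max(len(items), 2)
    reps = c * n.bit_length()          # Theta(log n)
    cmp = lambda x, y: majority_compare(x, y, noisy_less, reps)
    return sorted(items, key=cmp_to_key(cmp))

if __name__ == "__main__":
    p = 0.05                           # error probability of one comparison
    noisy_less = lambda x, y: (x < y) if random.random() > p else not (x < y)
    print(sort_with_resampling(list(range(20, 0, -1)), noisy_less))
\end{verbatim}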
Such a problem has been extensively studied in the literature, and several algorithms have been devised with the goal of sorting \emph{quickly} with small dislocation (see Table~\ref{tb-recurrent}).
Unfortunately, even though all the algorithms achieve the best possible maximum dislocation of $\Theta(\log n)$, they
use a truly superlinear number of comparisons (specifically, $\Omega(n^{c})$ with $c\geq 1.5$), and/or require a significant amount of time (namely, $O(n^{3+c})$, where $c$ is a large constant that depends on $p$).
This naturally suggests the following question:
\begin{quote}
\emph{What is the time complexity of sorting optimally with persistent errors?}
\end{quote}
\noindent In this work, we answer this basic question by showing the following result:
\begin{quote}
\emph{There exists an algorithm with \textbf{optimal running time} $O(n\log n)$ which achieves simultaneously \textbf{optimal maximum dislocation} $O(\log n)$ and \textbf{optimal total dislocation} $O(n)$, both \textbf{with high probability}.}
\end{quote}
The dislocation guarantees of our algorithm are optimal, due to the lower bound of \cite{geissmann_et_al}, while the existence of an algorithm achieving a maximum dislocation of $d = O(\log n)$ in time $T(n) = o(n \log n)$ would immediately imply the existence of an algorithm that sorts $n$ elements in $T(n) + O(n \log \log n) = o(n \log n)$ time, even in the absence of comparison errors, thus contradicting the classical $\Omega(n \log n)$ lower bound for comparison-based algorithms.\footnote{Indeed, once the approximately sorted sequence $S$ is computed, it suffices to apply any $O(n \log n)$ sorting algorithm on the first $m=2\max\{d, \log n\}$ elements of $S$, in order to select the smallest $m/2$ elements. Removing those elements from $S$ and repeating this procedure $\frac{n}{m}$ times, allows to sort in $T(n) + O(\frac{n}{m} \cdot m \log m)$ time.}
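To make the reduction in the footnote concrete, the following Python sketch (ours, not the authors' code) repeatedly applies an error-free sorting routine to blocks of $m = 2\max\{d, \log n\}$ elements in order to recover the exact order from a sequence with maximum dislocation at most $d$:
\begin{verbatim}
import math

def finish_sort(S, d):
    # S is approximately sorted with maximum dislocation <= d;
    # comparisons performed here are assumed to be error-free
    m = 2 * max(d, int(math.log2(len(S))))
    out, rest = [], list(S)
    while rest:
        block = sorted(rest[:m])     # exact sort of the first m elements
        out.extend(block[:m // 2])   # their smallest half is smallest overall
        rest = block[m // 2:] + rest[m:]   # remove them and repeat, as in the footnote
    return out
\end{verbatim}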
Along the way to our result, we consider the problem of \emph{searching with persistent errors}, defined as follows:
\begin{quote}\emph{We are given an approximately sorted sequence $S$, and an additional element $x \not\in S$.
The goal is to compute, under persistent comparison errors, an \emph{approximate rank} (position) of $x$ which differs from the true rank of $x$ in $S$ by a \emph{small} additive error.}
\end{quote}
For this problem, we show an algorithm that requires $O(\log n)$ time to compute, w.h.p., an approximate rank
that differs from the true rank of $x$ by at most $O(\max\{d,\log n\})$, where $d$ is the maximum dislocation of $S$.
For $d=\Omega(\log n)$ this allows us to insert $x$ into $S$ without any asymptotic increase of the maximum (and total) dislocation in the resulting sequence. Notice that, if $d$ is also in $O(n^{1-\epsilon})$ for some constant $\epsilon > 0$, this is essentially the best we can hope for, as an easy decision-tree lower bound shows that any algorithm must require $\Omega(\log n)$ time.
Finally, we remark that \cite{Klein2011} considered the variant in which the original sequence is \emph{sorted}, and the algorithm must compute the correct rank. For this problem, they present an algorithm that runs in $O(\log n \cdot \log \log n)$ time and succeeds with probability $1-f(p)$, with $f(p)$ vanishing as $p$ goes to $0$. As a by-product of our result, we can obtain the optimal $O(\log n)$ running time with essentially the same success probability.
\begin{table}
\centering
\begin{tabular}{|c|cc|c|}
\multicolumn{4}{c}{\textbf{Upper bounds}} \\[\medskipamount] \hline
\textbf{Running Time} & \textbf{Max Dislocation} & \textbf{Tot Dislocation} & \textbf{Reference} \\ \hline
$O(n^{3+c})$ & $O(\log n)$ w.h.p. & $O(n)$ w.h.p. & \cite{Braverman2008} \\
$O(n^2)$ & $O(\log n)$ w.h.p. & $O(n\log n)$ w.h.p. & \cite{Klein2011}
\\
$O(n^2)$ & $O(\log n)$ w.h.p. & $O(n)$ exp. & \cite{geissmann_et_al} \\
$\tilde{O}(n^{3/2})$ & $O(\log n)$ w.h.p. & $O(n)$ exp. & \cite{newwindowsort} \\ \hline \hline
$O(n\log n)$ & $O(\log n)$ w.h.p. & $O(n)$ w.h.p. & \textbf{this work} \\ \hline
\multicolumn{4}{c}{} \\[0pt]
\multicolumn{4}{c}{\textbf{Lower bounds}} \\[\medskipamount] \hline
Any & $\Omega(\log n)$ w.h.p. & $\Omega(n)$ exp. &\cite{geissmann_et_al} \\ \hline
\end{tabular}
\caption{The existing approximate sorting algorithms and our result.
The constant $c$ in the exponent of the running time of \cite{Braverman2008} depends on the error probability $p$ and it is typically quite large.
We write $\Omega(f(n))$ w.h.p. (resp. exp.) to mean that no algorithm can achieve dislocation $o(f(n))$ with high probability (resp. in expectation).}
\label{tb-recurrent}
\end{table}
\subsection{Main Intuition and Techniques}
\paragraph*{Approximate Sorting}
In order to convey the main intuitions behind our $O(n \log n)$-time optimal-dislocation approximate sorting algorithm, we consider the following ideal scenario:
we already have a perfectly sorted sequence $A$ containing a random half of the elements in our input sequence $S$ and we, somehow, also know the position in which each element $x \in S\setminus A$ should be inserted into $A$ so that the resulting sequence is also sorted (i.e., the \emph{rank} of $x$ in $A$).
If these positions alternate with the elements of $A$, then, to obtain a sorted version of $S$, it suffices to \emph{merge} $A$ and $S \setminus A$, i.e., to simultaneously insert all the elements of $S \setminus A$ into their respective positions of $A$.
Unfortunately, we are far from this ideal scenario for several reasons: first of all, multiple, say $\delta$, elements in $S \setminus A$
might have the same rank in $A$. Since we do not know the order in which those elements should appear, this will already increase the dislocation of the merged sequence to $\Omega(\delta)$. Moreover, due to the lower bound of \cite{geissmann_et_al}, we are not actually able to obtain a perfectly sorted version of $A$ and we are forced to work with a permutation of $A$ having dislocation $d = \Omega(\log n)$, implying that the natural bound on the resulting dislocation can be as large as $d \cdot \delta$. This is bad news, as one can show that $\delta = \Omega(\log n)$.
However, it turns out that the number of elements in $S \setminus A$ whose positions lie in an $O(\log n)$-wide interval of $A$
is still $O(\log n)$, w.h.p., implying that the final dislocation of the merged sequence is just $O(\log n)$.
But how do we obtain the approximately sorted sequence $A$ in the first place? We could just recursively apply the above strategy on the (unsorted) elements of $A$, except that this would cause a blow-up in the resulting dislocation due to the constant hidden by the big-O notation.
We therefore interleave merge steps with invocations of (a modified version of) the sorting algorithm of \cite{geissmann_et_al}, which essentially reduces the dislocation by a constant factor, so that the increase in the worst-case dislocation will be only an \emph{additive} constant per recursive step.
An additional complication is due to the fact that we are not able to compute the exact ranks in $A$ of the elements in $S\setminus A$. We therefore have to deal, once again, with approximations that are computed using the other main contribution of this paper: \emph{noisy binary search trees}, whose key ideas are described in the following.
\paragraph*{Noisy Binary Search}
As a key ingredient of our approximate sorting algorithm, we need to \emph{merge} an almost-sorted sequence with a set of elements, without causing any substantial increase in the final dislocation.
More precisely, if we are given a sequence $S$ with dislocation $d$ and an element $x$, we want to compute an \emph{approximate rank} of $x$ in $S$, i.e., a position that differs by at most $O(\max\{d, \log n\})$ from the position that $x$ would occupy if the elements $S \cup \{ x \}$ were perfectly sorted.
As a comparison, this same problem has been solved optimally in $O(\log n)$ time in the easier case in which errors are not persistent and $S$ is already sorted \cite{Feige1994}.
The idea of \cite{Feige1994} is to locate the correct position of $x$ using a binary decision tree: ideally each vertex $v$ of the tree
tests whether $x$ appears to belong to a certain \emph{interval} of $S$ and, depending on the results,
one of the children of $v$ is considered next.
As these intervals become narrower as we move from the root towards the leaves, which are in a one-to-one correspondence with positions of $S$, we eventually discover the correct rank of $x$ in $S$.
In order to cope with failures, this process is allowed to \emph{backtrack} when inconsistent comparisons are observed, thus repeating some of the comparisons involving ancestors of $v$. Moreover,
to guarantee that the result will be correct with high probability, a logarithmic number of consistent comparisons with a leaf are needed before the algorithm terminates.
Notice how this process heavily depends on the fact that it is possible to gather more information on the true relative position of $x$ by repeating a comparison multiple times (in fact, it is easy to design a simple $O(\log^2 n)$-time algorithm by exploiting this fact). Unfortunately, this is not the case anymore when errors are \emph{persistent}.
To overcome this problem we design a \emph{noisy binary search tree} in which the intervals element $x$ is compared with are, in a sense, \emph{dynamic}, i.e., they grow every time the associated vertex is visited.
This, in turn, is a source of other difficulties: first, the intervals of the descendants of $v$ also need to be updated. Moreover, we can obtain inconsistent answers not only due to the erroneous comparisons, but also due to the fact that an interval that initially did not contain $x$ might now become too large. Finally, since intervals overlap, we might end up repeating the same comparison even when two different vertices of the tree are involved.
We overcome these problems by using two search trees that initially consist of disjoint intervals of $S$, selected in a way that ensures that all the bad-behaving vertices are confined to only one of the two trees.
\subsection{Related works}\label{sec:related}
Sorting with \emph{persistent errors} has been studied in several works, starting from \cite{Braverman2008} who presented the first algorithm achieving optimal dislocation (matching lower bounds appeared only recently in \cite{geissmann_et_al}).
The algorithm in \cite{Braverman2008} uses only $O(n\log n)$ comparisons, but unfortunately its running time $O(n^{3+c})$ is quite large. For example, for a success probability of $1-1/n$, the analysis in \cite{Braverman2008} yields $c=\frac{110525}{(1/2-p)^4}$. On the contrary, all subsequent faster algorithms \cite{Klein2011,geissmann_et_al,newwindowsort} -- see Table~\ref{tb-recurrent} -- use a number of comparisons which is asymptotically equal to their respective running time.
Other works considered error models in which repeating comparisons is expensive. For example, \cite{braverman2016parallel} studied algorithms which use a \emph{bounded number of rounds} for some ``easier'' versions of sorting (e.g., distinguishing the top $k$ elements from the others). In each round, a fresh set of comparison results is generated, and each round consists of $\delta \cdot n$ comparisons. They evaluate the algorithm's performance by estimating the number
of ``misclassified'' elements and also consider a variant in which errors now correspond to missing comparison results.
In general, sorting in the presence of errors seems to be computationally more difficult than the error-free counterpart. For instance, \cite{ajtai2016sorting} provides algorithms using \emph{subquadratic} time (and number of comparisons) when errors occur only between elements whose difference is at most some fixed threshold. Also, \cite{Damaschke16} gives a \emph{subquadratic} time algorithm when the number $k$ of errors is known in advance.
As mentioned above, an easier error model is the one with \emph{non-persistent} errors,
meaning that the same comparison can be \emph{repeated} and the errors are independent, and happen with some probability $p<1/2$.
In this model it is possible to sort $n$ elements in time $O(n\log(n/q))$, where $1-q$ is the success probability of the algorithm \cite{Feige1994} (see also \cite{alonso,hadji} for the analysis of the classical Quicksort and recursive Mergesort algorithms in this error model).
More generally, computing with errors is often considered in the framework of a two-person game called \emph{R\'{e}nyi-Ulam Game} (see e.g. the survey \cite{Pelc02} and the monograph \cite{Cicalese13}).
\subsection{Paper Organization}
The paper is organized as follows: in Section~\ref{sec:preliminaries} we give some preliminary definitions; then, in Section~\ref{sec:noisy_binary_search}, we present our noisy binary search algorithm, which will be used in Section~\ref{sec-optimal-sorting} to design an optimal randomized sorting algorithm.
The proof of correctness of this algorithm will make use of an improved analysis of the sorting algorithm of \cite{geissmann_et_al}, which we discuss in Section~\ref{sec:windowsort}.
Finally, in Section~\ref{sec:derand}, we briefly discuss how our sorting algorithm can be adapted so that it does not require any external source of randomness. Proofs that rely only on arguments unrelated to the details of our algorithms are deferred to the appendix.
\section{Preliminaries}
\label{sec:preliminaries}
According to our error model, elements possess a true total order; however, this order can only be observed through noisy comparisons. In the following, given two distinct elements $x$ and $y$, we will write $x \prec y$ (resp. $x \succ y$) to
mean that $x$ is smaller (resp. larger) than $y$ according to the true order, and $x<y$ (resp. $x>y$) to mean that $x$
appears to be smaller (resp. larger) than $y$ according to the observed comparison result.
Given a sequence or a set of elements $A$ and an element $x$ (not necessarily in $A$), we define $\rank(x, A) = |\{ y \in A : y \prec x\}|$ to be the \emph{true rank} of element $x$ in $A$ (notice that ranks start from $0$).
Moreover, if $A$ is a sequence and $x \in A$, we denote by $\pos(x, A) \in [0, |A|-1]$ the \emph{position} of $x$ in $A$ (notice that positions are also indexed from $0$), so that the \emph{dislocation} of $x$ in $A$ is
$\disl(x,A) = |\pos(x,A)-\rank(x,A)|$, and the \emph{maximum dislocation} of the sequence $A$ is $\disl(A) = \max_{x \in A} \disl(x,A)$.
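The following Python snippet (our own illustration, not part of the paper) makes these notions concrete; the built-in comparison \texttt{<} plays the role of the true order $\prec$:
\begin{verbatim}
def rank(x, A):
    # true rank of x in A: number of elements of A truly smaller than x
    return sum(1 for y in A if y < x)

def dislocation(x, A):
    # |pos(x, A) - rank(x, A)| for an element x of the sequence A
    return abs(A.index(x) - rank(x, A))

def max_dislocation(A):
    return max(dislocation(x, A) for x in A)

def total_dislocation(A):
    return sum(dislocation(x, A) for x in A)

# the sequence <2, 0, 1, 3> has maximum dislocation 2 and total dislocation 4
print(max_dislocation([2, 0, 1, 3]), total_dislocation([2, 0, 1, 3]))
\end{verbatim}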
\noindent For $z \in \mathbb{R}$, we write $\ln z$ and $\log z$ to refer to the natural and the binary logarithm of $z$, respectively.
\section{Noisy Binary Search}
\label{sec:noisy_binary_search}
Given a sequence $S = \langle s_0, \dots, s_{n-1} \rangle$ of $n$ elements with maximum dislocation $d \ge \log n$, and an element $x$ not in the sequence, we want to compute in time $O(\log n)$ an \emph{approximate rank} of $x$ in $S$, that is, a position at which to insert $x$ in $S$ while preserving an $O(d)$ upper bound on the dislocation of the resulting sequence.
More precisely, we want to compute an index $r_x$ such that $|r_x - \rank(x,S)|=O(d)$, in the presence of persistent comparison errors: Errors between $x$ and the elements in $S$ happen independently with probability $p$, and whether the comparison between $x$ and an element $y \in S$ is correct or erroneous does not depend on the position of $y$ in $S$, nor on the actual permutation of the sorted elements induced by their order in $S$ (i.e., we are not allowed to pick the order of the elements in $S$ as a function of the errors). We do not impose any restriction on the errors for comparisons that do not involve $x$.
In the following, we will show an algorithm that computes such a rank $r_x$ in time $O(\log n)$.
This immediately implies that $O(\log n)$ time also suffices to insert $x$ into $S$
so that the resulting sequence $\langle s_0, \dots, s_{r_x-1}, x, s_{r_x}, \dots, s_{n-1}\rangle$ still has maximum dislocation $O(d)$.
\begin{remark}
Notice that the $O(\log n)$ running time is asymptotically optimal for all $d=O(n^{1-\epsilon})$ and any constant $\epsilon>0$, since an $\Omega(\log n - \log d) = \Omega(\log n)$ decision-tree lower bound holds even in the absence of comparison errors.
\end{remark}
In the following, for the sake of simplicity, we let $c = 10^3$ and we assume that $n = 2 c d \cdot 2^h - 1$ for some non-negative integer $h$. Moreover, we focus on $p \le \frac{1}{32}$ even though this restriction can be easily removed to handle all $p < \frac{1}{2}$, as we argue at the end of the section.
We consider the set $\{0, \dots, n\}$ of the possible ranks of $x$ in $S$
and we subdivide them into
$2 \cdot 2^h$ ordered \emph{groups} $g_0, g_1, \dots$ each containing $cd$ contiguous positions, namely, group $g_i$ contains positions $c i d$, \dots, $c (i+1) d -1$.
Then, we further partition these $2 \cdot 2^h$ groups into two ordered sets $G_0$ and $G_1$, where $G_0$ contains the groups $g_i$ with even $i$ ($i \equiv 0 \pmod{2}$) and $G_1$ the groups $g_i$ with odd $i$ ($i \equiv 1 \pmod{2}$). Notice that $|G_0| =|G_1|= 2^h$. In the next section, for each $G_j$, we shall define a \emph{noisy binary search tree} $T_j$, which will be the main ingredient of our algorithm.
\subsection{Constructing $T_0$ and $T_1$}
Let us consider a fixed $j \in \{0,1\}$ and define $\eta = 2 \lceil \log n \rceil$.
The tree $T_j$ is a binary tree of height $h + \eta$ in which the first $h+1$ levels (i.e., those containing vertices at depths $0$ to $h$) are complete and the last $\eta$ levels consist of $2^h$ paths of $\eta$ vertices, each emanating from a distinct vertex on the $(h+1)$-th level.
We index the leaves of the resulting tree from $0$ to $2^h-1$,
we use $h(v)$ to denote the depth of vertex $v$ in $T_j$,
and we refer to the vertices $v$ at depth $h(v) \ge h$ as \emph{path-vertices}.
Each vertex $v$ of the tree is associated with one \emph{interval} $I(v)$, i.e., a set of contiguous positions, as follows: for a leaf $v$ having index $i$, $I(v)$ consists of the positions in $g_{2i+j}$; for a non-leaf path-vertex $v$ having $u$ as its only child, we set $I(v)=I(u)$; finally, for an internal vertex $v$ having $u$ and $w$ as its left and right children, respectively, we define $I(v)$ as the interval containing all the positions between $\min I(u)$ and $\max I(w)$ (inclusive).
\begin{figure}
\caption{An example of the noisy tree $T_0$. On the left side the shared pointers $L(\cdot)$ and $R(\cdot)$ are shown. Notice how $L(r)$ (and, in general, all the $L(\cdot)$ pointers on the leftmost side of the tree) points to the special $-\infty$ element. Good vertices are shown in black while bad vertices are white. Notice that, since $i^* \in I(w)$, we have $T^* = T_0$ and hence all the depicted vertices are either good or bad.}
\label{fig:noisy_tree}
\end{figure}
Moreover, each vertex $v$ of the tree has a reference to two \emph{shared pointers} $L(v)$ and $R(v)$ to positions in $\{0, \dots, n\} \setminus \bigcup_{g_i \in G_j} g_i$. Intuitively, $L(v)$ (resp. $R(v)$) will always point to positions of $S$ occupied by elements that are \emph{smaller} (resp. \emph{larger}) than all the elements $s_i$ with $i \in I(v)$.
For each leaf $v$, let $L(v)$ initially point to $\min I(v) - d - 1$ and $R(v)$ initially point to $\max I(v) + d$.
A non-leaf path-vertex $v$ shares both its pointers with the corresponding pointers of its only child, while a non-path vertex $v$ shares its left pointer $L(v)$ with the left pointer of its left child, and its right pointer $R(v)$ with the right pointer of its right child. See Figure~\ref{fig:noisy_tree} for an example.
Notice that we sometimes allow $L(v)$ to point to negative positions and $R(v)$ to point to positions that are larger than $n-1$. In the following we consider all the elements $s_i$ with $i < 0$ (resp. $i \ge n$) to be copies of a special $-\infty$ (resp. $+\infty$) element such that $-\infty \prec x$ and $-\infty < x$ in every observed comparison (resp. $+\infty \succ x$ and $+\infty > x$).
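For concreteness, the following sketch (ours; it uses the parameters $c$, $d$, $j$ of this section and identifies an internal vertex with the range of leaf indices below it) shows how $I(v)$ and the initial shared pointers of a leaf can be computed on the fly:
\begin{verbatim}
def leaf_interval(i, j, c, d):
    # I(v) for the leaf of T_j with index i, i.e., the positions of group g_{2i+j}
    g = 2 * i + j
    return range(c * d * g, c * d * (g + 1))

def interval(lo, hi, j, c, d):
    # I(v) for a vertex whose descendant leaves have indices lo, ..., hi-1
    return range(leaf_interval(lo, j, c, d)[0],
                 leaf_interval(hi - 1, j, c, d)[-1] + 1)

def initial_pointers(i, j, c, d):
    # initial values of the shared pointers L and R of the leaf with index i
    I = leaf_interval(i, j, c, d)
    return I[0] - d - 1, I[-1] + d
\end{verbatim}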
\subsection{Walking on $T_j$}
The algorithm will perform a discrete-time random walk on each $T_j$.
Before describing such a walk in more detail, it is useful to define the following operation:
\begin{definition}[test operation]\label{def:test}
A \emph{test} of an element $x$ with a vertex $v$ is performed by (i) comparing $x$ with the elements $s_{L(v)}$ and $s_{R(v)}$, (ii) decrementing $L(v)$ by $1$ and, (iii) incrementing $R(v)$ by $1$. The tests succeeds if the observed comparison results are $x > s_{L(v)}$ and $x < s_{R(v)}$, otherwise the test fails.
\end{definition}
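In code, the test operation can be sketched as follows (our rendering: the shared pointers are stored as one-element lists so that several vertices can mutate the same pointer, \texttt{observed\_less(a, b)} returns the persistent, possibly erroneous outcome of comparing $a$ with $b$, and out-of-range indices are assumed to be mapped to the $\pm\infty$ sentinels):
\begin{verbatim}
def test(x, L, R, S, observed_less):
    # (i) compare x with s_{L(v)} and s_{R(v)}, then (ii)-(iii) move the pointers outwards
    left, right = S[L[0]], S[R[0]]
    L[0] -= 1
    R[0] += 1
    return observed_less(left, x) and observed_less(x, right)
\end{verbatim}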
The walk on $T_j$ proceeds as follows. At time $0$, i.e., before the first step, the \emph{current} vertex $v$ coincides with the root $r$ of $T_j$.
Then, at each time step, we \emph{walk} from the current vertex $v$ to the next vertex as follows:
\begin{enumerate}
\item We test $x$ with all the children of $v$ and, if \emph{exactly one} of these tests succeeds, we\emph{ walk to the corresponding child}.
\item Otherwise, if \emph{all the tests fail}, we\emph{ walk to the parent} of $v$, if it exists.
\end{enumerate}
In the remaining cases, we ``walk'' from $v$ to itself.
\noindent
We fix an upper bound $\tau = 240 \lfloor \log n \rfloor$ on the total number of steps we perform. The walk stops as soon as one of the following two conditions is met:
\begin{description}
\item[Success:] The current vertex $v$ is a leaf of $T_j$, in which case we return $v$;
\item[Timeout:] The $\tau$-th time step is completed and the success condition is not met.
\end{description}
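Putting the pieces together, the walk can be sketched as follows (again our own rendering, under the hypothetical interface \texttt{v.children}, \texttt{v.parent}, \texttt{v.is\_leaf} for the vertices of $T_j$, with \texttt{test(x, u)} performing the operation of Definition~\ref{def:test}):
\begin{verbatim}
import math

def walk(x, root, n, test):
    tau = 240 * int(math.log2(n))       # upper bound on the number of steps
    v = root
    for _ in range(tau):
        winners = [u for u in v.children if test(x, u)]
        if len(winners) == 1:
            v = winners[0]              # exactly one test succeeded: walk to that child
        elif not winners and v.parent is not None:
            v = v.parent                # all tests failed: walk to the parent
        # in the remaining cases we stay at v
        if v.is_leaf:
            return v                    # success
    return None                         # timeout
\end{verbatim}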
\subsection{Analysis}
Let $i^* = \rank(x, S)$, and let
$T^*$ be the unique tree in $\{ T_0, T_1 \}$ such that
$i^*$ belongs to the interval of a leaf in $T^*$, and let $T'$ be the other tree.
\begin{definition}[good/bad vertex]
We say that a vertex $v$ of $T^*$ is \emph{good} if $i^* \in I(v)$ and \emph{bad} if either $i^* < \min I(v) - cd$ or $i^* > \max I(v) + cd$.
\end{definition}
Notice that in $T^*$ all the vertices are either good or bad, that the intervals corresponding to vertices at the same depth in $T^*$ are pairwise disjoint, and that the set of good vertices is exactly a root-to-leaf path.
Moreover, all path-vertices of $T'$ are either bad or neither good nor bad.
In both $T'$ and $T^*$ all the children of a bad vertex are also bad.
\begin{lemma}
\label{lemma:test_good}
A test on a good vertex succeeds with probability at least $1-2p$.
\end{lemma}
\begin{proof}
Let $v$ be a good vertex.
Since $i^* \in I(v)$ and the pointer $L(v)$ only gets decremented, we have $L(v) \le \min I(v) - d - 1 \le i^* -d - 1$. Since $S$ has dislocation at most $d$, $\rank(s_{L(v)}, S) \leq L(v) + d \le i^*-1$.
This implies that $s_{L(v)} \prec x$ and hence the probability of observing $s_{L(v)} > x$ during the test is at most $p$.
Similarly, $R(v)$ is only incremented during the walk and hence $R(v) \ge \max I(v) + d \ge i^* + d$.
Since $S$ has dislocation at most $d$, this implies that $\rank(s_{R(v)}, S) \ge R(v)-d \ge i^*$ and, in turn, that $s_{R(v)} \succ x$. Therefore, $s_{R(v)} < x$ is observed with probability at most $p$.
By the union bound, $s_{L(v)} < x < s_{R(v)}$ with probability at least $1-2p$.
\end{proof}
\begin{lemma}
\label{lemma:test_bad}
A test on a bad vertex succeeds with probability at most $p$.
\end{lemma}
\begin{proof}
Let $v$ be a bad vertex and notice that at most $\tau - 1 \le 240 \log n - 1 \le 240 d - 1 < (c-2)d - 1$ tests have been performed before the current test.
We have that either $i^* < \min I(v) - cd$ or $i^* > \max I(v) + cd$.
In the former case, we have $L(v) \ge \min I(v) - d - 1 - (\tau -1) \ge \min I(v) - d - (c-2)d > (i^* + cd) - (c-1)d = i^* + d$. Since $S$ has dislocation at most $d$, this implies $\rank(s_{L(v)}, S) \geq L(v) -d \ge i^*$. Therefore, $s_{L(v)} \succ x$ and thus $s_{L(v)} > x$ is also observed with probability at least $1-p$, causing the test to fail.
Similarly, in the latter case, we have $R(v) \le \max I(v) + d + (\tau-1) \le \max I(v) + d + (c-2)d - 1 < (i^* - cd) + (c-1)d - 1 = i^* - d - 1$.
Therefore, $\rank(s_{R(v)}, S) \le i^* - 1$, implying $s_{R(v)} \prec x$, and hence $s_{R(v)} < x$ is observed with probability at least $1-p$, causing the test to fail.
\end{proof}
\noindent We say that a step on $T_j$ from vertex $v$ to vertex $u$ is \emph{improving} iff either:
\begin{itemize}
\item $u$ is a good vertex and $h(u) > h(v)$ (this implies that $v$ is also good); or
\item $v$ is a bad vertex and $h(u) < h(v)$.
\end{itemize}
Intuitively, each improving step is making progress towards identifying the interval containing the true rank $i^*$ of $x$, while each non-improving step undoes the progress of at most one improving step.
\begin{lemma}
\label{lemma:improving_step}
Each step performed during the walk on $T^*$ is improving with probability at least $1-3p$.
\end{lemma}
\begin{proof}
Consider a generic step performed during the walk on $T^*$ from a vertex $v$.
If $v$ is a good vertex, then exactly one child $u$ of $v$ in $T^*$ is good (notice that $v$ cannot be a leaf).
By Lemma~\ref{lemma:test_good}, the test on $u$ succeeds with probability at least $1-2p$. Moreover, there can be at most one other child $w \neq u$ of $v$ in $T^*$. If $w$ exists, then it must be bad and, by Lemma~\ref{lemma:test_bad}, the test on $w$ fails with probability at least $1-p$.
By using the union bound on the complementary probabilities, we have that the process walks from $v$ to $u$ with probability at least $1-3p$.
If $v$ is a bad vertex, then all of its children are also bad. Since $v$ has at most $2$ children and, by Lemma~\ref{lemma:test_bad}, a test on a bad vertex fails with probability at least $1-p$, we have that all the tests fail with probability at least $1-2p$. In this case the process walks from $v$ to the parent of $v$ (notice that, since $v$ is bad, it cannot be the root of $T^*$).
\end{proof}
Since the walk on $T_j$ takes at most $\tau$ steps, any two distinct shared pointers $L(v)$ and $R(u)$ initially differ by at least $cd - 2d - 1 > (c-3)d$ positions, each step increases/decreases at most $4$ pointers (as it performs at most $2$ tests), and $\tau = 240 \lfloor \log n \rfloor < \frac{c-3}{4} d$, we can conclude that no two distinct pointers will ever point to the same position. This, in turn, implies:
\begin{observation}
\label{obs:independent_comparisons}
Element $x$ is compared to each element in $S$ at most once.
\end{observation}
The following lemmas show that the walk on $T^*$ is likely to return a vertex corresponding to a good interval, while the walk on $T' \neq T^*$ is likely to either time out or return a non-bad vertex, i.e., a vertex whose corresponding interval contains positions that are close to the true rank of $x$ in $S$.
\begin{lemma}
\label{lemma:timeout}
The walk on $T^*$ times out with probability at most $e \cdot n^{-6}$.
\end{lemma}
\begin{proof}
For $t=1,\dots, \tau$, let $X_t$ be an indicator random variable that is equal to $1$ iff the $t$-th step of the walk on $T^*$ is improving. If the $t$-th step is not performed then let $X_t = 1$.
Notice that if, at any time $t'$ during the walk, the number $X^{(t')} = \sum_{t=1}^{t'} X_t$ of improving steps exceeds the number of non-improving steps by at least $h + \eta$, then the success condition is met.
This means that a necessary condition for the walk to time out is $X^{(\tau)} - (\tau - X^{(\tau)}) < h + \eta$, which is equivalent to $X^{(\tau)} < \frac{h + \eta + \tau}{2}$.
By Observation~\ref{obs:independent_comparisons} and by Lemma~\ref{lemma:improving_step}, we know that
each $X_t$ corresponding to a performed step satisfies $P(X_t = 1) \ge 1-3p > 1 - \frac{1}{10}$ (since $p \le \frac{1}{32}$), regardless of whether the other steps are improving.
We can therefore consider the following experiment: at every time step $t=1,\dots, \tau$ we flip a coin that is heads with probability $q = \frac{9}{10}$, we let $Y_t = 1$ if this happens and $Y_t = 0$ otherwise.
Clearly, the probability that $X^{(\tau)} < \frac{h + \eta + \tau}{2}$ is at most the probability that
$Y^{(\tau)} = \sum_{t=1}^{\tau} Y_t < \frac{h + \eta + \tau}{2}$.
By noticing that $\tau > 3 (h + \eta )$, and by using the Chernoff bound:
\begin{multline*}
\Pr\left(X^{(\tau)} < \frac{h + \eta + \tau}{2}\right)
\le \Pr\left(Y^{(\tau)} < \frac{h + \eta + \tau}{2}\right)
\le \Pr\left(Y^{(\tau)} < \frac{2\tau}{3}\right)
= \Pr\left(Y < \frac{2 \mathbb{E}[Y]}{3q} \right) \\
< \Pr\left(Y < \left(1- \frac{1}{4}\right) \mathbb{E}[Y] \right)
\le e^{-\frac{\tau q}{32}}
< e^{-\frac{\tau}{40}}
\le e^{1-6 \log n} < e \cdot n^{-6}.
\end{multline*}
\end{proof}
\begin{lemma}
\label{lemma:bad_vertex_returned}
A walk on $T_j$ returns a bad vertex with probability at most $240 n^{-7}$.
\end{lemma}
\begin{proof}
Notice that, in order to return a bad vertex $v$, the walk must
first reach a vertex $u$ at depth $h(u)=h$, and then traverse the $\eta$ vertices of the path rooted in $u$ having $v$ as its other endpoint.
We now bound the probability that, once the walk reaches $u$, it will also reach $v$ before walking back to the parent of $u$.
Notice that all the vertices in the path from $u$ to $v$ are associated to the same interval, and hence they are all bad.
Since a test on a bad vertex succeeds with probability at most $p$ (see Lemma~\ref{lemma:test_bad}) and tests are independent (see Observation~\ref{obs:independent_comparisons}), the sought probability can be upper-bounded by considering a random walk on $\{0 ,\dots, \eta+1 \}$ that: (i) starts from $1$, (ii) has one absorbing barrier on $0$ and another on $\eta+1$, and (iii) for any state $i \in [1, \eta]$ has a probability of transitioning to state $i+1$ of $p$ and to state $i-1$ of $1-p$.
Here state $0$ corresponds to the parent of $u$, and state $i$ for $i>0$ corresponds to the vertex of the $u$--$v$ path at depth $h+i-1$ (so that state $1$ corresponds to $u$ and state $\eta+1$ corresponds to $v$).
The probability of reaching $v$ in $\tau$ steps is at most the probability of being absorbed in state $\eta+1$ (in any number of steps), which is (see, e.g., \cite[pp.~344--346]{feller1957introduction}):
\[
\frac{ \frac{1-p}{p} - 1 }{ (\frac{1-p}{p})^{\eta+1} - 1 } \le
\left( \frac{p}{1-p} \right)^\eta \le
\left( \frac{1}{31} \right)^{2\log n} <
\left( \frac{1}{2^4} \right)^{2\log n}
= 2^{-8 \log n} = n^{-8},
\]
where we used the fact that $p \le \frac{1}{32}$.
Since the walk on $T_j$ can reach a vertex at depth $h$ at most $\tau$ times, by the union bound we have that the probability of returning a bad interval is at most $\tau n^{-8} \le 240 n^{-8} \log n \le 240 n^{-7}$.
\end{proof}
\noindent We are now ready to prove the main result of this section.
\begin{theorem}
\label{thm:noisy_binary_search}
Let $S$ be a sequence of $n$ elements having maximum dislocation at most $d \ge \log n$ and let $x \not\in S$.
Under our error model, an index $r_x$ such that $r_x \in [ \rank(x,S) - \alpha d, \rank(x,S) + \alpha d]$ can be found in $O(\log n)$ time with probability at least $1- O(n^{-6})$, where $\alpha > 1$ is an absolute constant.
\end{theorem}
\begin{proof}
We compute the index $r_x$ by performing two random walks on $T_0$ and on $T_1$, respectively.
If any of the walks returns a vertex $v$, then we return any position in the interval $I(v)$ associated with $v$. If both walks time out, we return an arbitrary position.
From Lemma~\ref{lemma:timeout}, the probability that both walks time out is at most $\frac{e}{n^6}$ (as the walk on $T^*$ times out with at most this probability).
Moreover, by Lemma~\ref{lemma:bad_vertex_returned}, the probability that at least one of the two walks returns a bad vertex is at most $\frac{480}{n^7}$.
By the union bound, we have that with probability at least $1-\frac{480}{n^7} - \frac{e}{n^6} = 1 - O(\frac{1}{n^6})$, the vertex $v$ exists and is not bad. In this case, using the fact that $r_x \in [\min I(v), \max I(v)]$ and that $\max I(v) - \min I(v) < cd$, we have:
\[
i^* \ge \min I(v) - cd > \max I(v) - 2cd \ge r_x - 2cd,
\]
and
\[
i^* \le \max I(v) + cd < \min I(v) + 2cd \le r_x + 2cd.
\]
To conclude the proof it suffices to notice that the random walk requires at most $\tau = O(\log n)$ steps, that each step requires constant time, and that it is not necessary to explicitly construct $T_0$ and $T_1$ beforehand.
Instead, it suffices to maintain a partial tree consisting of all the vertices visited by the random walk so far: vertices (and the corresponding pointers) are \emph{lazily} created and appended to the existing tree whenever the walk visits them for the first time.
\end{proof}
To conclude this section, we remark that our assumption that $p \le \frac{1}{32}$ can be easily relaxed to handle any constant error probability $p < \frac{1}{2}$. This can be done by modifying the test operation so that, when $x$ is tested with a vertex $v$, the majority result of the comparisons between $x$ and the set $\{ s_{L(v)}, s_{L(v)-1}, \dots, s_{L(v)-k+1} \}$ (resp.
$x$ and the set $\{ s_{R(v)}, s_{R(v)+1}, \dots, s_{R(v)+k-1} \}$) of $k$ elements is considered, where $k$ is a constant that only depends on $p$.
Consistently, the pointers $L(v)$ and $R(v)$ are shifted by $k$ positions, and the group size is increased to $k \cdot c$ to ensure that Observation~\ref{obs:independent_comparisons} still holds. Notice how our description for $p \le \frac{1}{32}$ corresponds exactly to the case $k=1$.
The only difference in the statement of Theorem~\ref{thm:noisy_binary_search} is that $\alpha$ is no longer an absolute constant; rather, it depends (only) on the value of $p$.
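A sketch of this generalized test (ours, using the same one-element-list convention for the shared pointers as in our earlier sketch) could look as follows:
\begin{verbatim}
def majority_test(x, L, R, S, k, observed_less):
    # majority vote over k consecutive elements on each side,
    # then shift both pointers by k positions
    left_wins  = sum(observed_less(S[L[0] - t], x) for t in range(k))
    right_wins = sum(observed_less(x, S[R[0] + t]) for t in range(k))
    L[0] -= k
    R[0] += k
    return 2 * left_wins > k and 2 * right_wins > k
\end{verbatim}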
\section{Optimal Sorting Algorithm}\label{sec-optimal-sorting}
\subsection{The algorithm}
\label{sec:rifflesort}
We will present an optimal approximate sorting algorithm that, given a sequence $S$ of $n$ elements, computes, in $O(n\log n)$ worst-case time, a permutation of $S$ having maximum dislocation $O(\log n)$ and total dislocation $O(n)$, w.h.p.
In order to avoid being distracted by roundings, we assume that $n$ is a power of $2$ (this assumption can be easily removed by padding the sequence $S$ with dummy $+\infty$ elements).
Our algorithm will make use of the noisy binary search of Section~\ref{sec:noisy_binary_search} and of algorithm \texttt{WindowSort}\xspace presented in \cite{geissmann_et_al}. For our purposes, we need the following \emph{stronger} version of the original analysis in \cite{geissmann_et_al}, in which the bound on the total dislocation was only given in expectation:
\begin{restatable}{theorem}{windowsortthm}
\label{thm:windowsort}
Consider a set of $n$ elements that are subject to random persistent comparison errors.
For any dislocation $d$, and for any (adversarially chosen) permutation $S$ of these elements whose dislocation is at most $d$,
$\texttt{WindowSort}\xspace(S,d)$ requires $O(n d)$ worst-case time and computes, with probability at least $1-\frac{1}{n^4}$, a permutation of $S$ having maximum dislocation at most $c_p \cdot \min\{d, \log n \}$ and total dislocation at most $c_p \cdot n$, where $c_p$ is a constant depending only on the error probability $p < \frac{1}{16}$.
\end{restatable}
We prove this theorem in Section~\ref{sec:windowsort}. Notice that \texttt{WindowSort}\xspace also works in a stronger error model in which the input permutation $S$ can be chosen adversarially after the comparison errors
between all pairs of elements have been randomly fixed, as long as the maximum dislocation of $S$ is at most $d$.
In the remainder of this section, we assume $p<1/32$ in order to be consistent with Section~\ref{sec:noisy_binary_search}, though both the above theorem and the algorithm we are going to present will only require $p<1/16$.
Based on the binary search in Section~\ref{sec:noisy_binary_search}, we define an operation that allows us to add a linear number of elements to an almost-sorted sequence without any asymptotic increase in the resulting dislocation, as we will formally prove in the sequel.
More precisely, if $A$ and $B$ are two disjoint subsets of $S$, we denote by $\texttt{Merge}\xspace(A,B)$ the sequence obtained as follows:
\begin{itemize}
\item For each element $x \in B$, compute an index $r_x$ such that $|\rank(x, A) - r_x| \le \alpha d$.
This can be done using the noisy binary search of Section~\ref{sec:noisy_binary_search}, which succeeds with probability at least $1-\frac{1}{|A|^6}$.
\item Insert \emph{simultaneously} all the elements $x \in B$ into $A$ in their computed positions $r_x$, breaking ties arbitrarily. Return the resulting sequence.
\end{itemize}
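A possible rendering of this operation is sketched below (our code; \texttt{noisy\_rank(x, A)} stands for the noisy binary search of Section~\ref{sec:noisy_binary_search} and is assumed rather than implemented here):
\begin{verbatim}
def merge(A, B, noisy_rank):
    # simultaneously insert every x in B at its computed (approximate) rank in A
    ranked = sorted(((noisy_rank(x, A), x) for x in B), key=lambda t: t[0])
    out, b = [], 0
    for i, a in enumerate(A):
        while b < len(ranked) and ranked[b][0] <= i:
            out.append(ranked[b][1])   # ties are broken arbitrarily
            b += 1
        out.append(a)
    out.extend(x for _, x in ranked[b:])   # elements ranked past the end of A
    return out
\end{verbatim}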
Our sorting algorithm, which we call \texttt{RiffleSort}\xspace (see the pseudocode in Algorithm~\ref{alg:rifflesort}), works as follows.
For $k = \frac{\log n}{2}$, we first partition $S$ into $k+1$ subsets $T_0, T_1, \dots, T_{k}$ as follows. Each $T_i$, with $1 \leq i \leq k$, contains $2^{i-1} \sqrt{n}$ elements chosen uniformly at random from $S \setminus (T_{i+1}\cup T_{i+2}\cup \cdots \cup T_k)$,
and $T_0 = S \setminus (T_{1}\cup T_{2}\cup \cdots \cup T_k)$ contains the leftover $n - \sqrt{n} \sum_{i=1}^k 2^{i-1} = \sqrt{n}$ elements.
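A possible way to generate this partition is sketched below (our code, assuming for simplicity that $n$ is a power of $4$, so that $k$ and $\sqrt{n}$ are integers):
\begin{verbatim}
import math, random

def partition(S):
    n = len(S)
    k = int(math.log2(n)) // 2
    rest = list(S)
    T = [None] * (k + 1)
    for i in range(k, 0, -1):                  # choose T_k, T_{k-1}, ..., T_1
        size = (2 ** (i - 1)) * math.isqrt(n)
        T[i] = [rest.pop(random.randrange(len(rest))) for _ in range(size)]
    T[0] = rest                                # the leftover sqrt(n) elements
    return T
\end{verbatim}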
As its first step, \texttt{RiffleSort}\xspace will approximately sort $T_0$ using \texttt{WindowSort}\xspace, and then it will alternate merge operations with calls to \texttt{WindowSort}\xspace. While the merge operations allow us to iteratively grow the set of approximately sorted elements to ultimately include all the elements in $S$, each operation also worsens the dislocation by a constant factor.
This is a problem since the rate at which the dislocation increases is faster than the rate at which new elements are inserted. The role of the sorting operations is exactly to circumvent this issue: each \texttt{WindowSort}\xspace call locally rearranges the elements, so that all newly inserted elements are now closer to their intended position, resulting in a dislocation increase that is only an additive constant.
The corresponding pseudocode is shown in Algorithm~\ref{alg:rifflesort}, in which $\gamma \ge \max\{202 \alpha, 909 \}$ is an absolute constant.
\begin{algorithm}[t]
$T_0, T_1, \dots, T_{k} \gets$ partition of $S$ computed as explained in Section~\ref{sec:rifflesort}\;
$S_0 \gets \texttt{WindowSort}\xspace(T_0, \sqrt{n})$\;
\ForEach{$i=1,\dots,k+1$}
{
$S_i \gets \texttt{Merge}\xspace(S_{i-1}, T_{i-1})$\;
$S_i \gets \texttt{WindowSort}\xspace(S_i, \gamma \cdot c_p \cdot \log n)$\;
}
\Return $S_{k+1}$\;
\caption{\texttt{RiffleSort}\xspace\unskip(S)}
\label{alg:rifflesort}
\end{algorithm}
\subsection{Analysis}
\begin{lemma}
\label{lemma:rifflesort_runtime}
The worst-case running time of Algorithm~\ref{alg:rifflesort} is $O(n \log n)$.
\end{lemma}
\begin{proof}
Clearly, the random partition $T_0, \dots, T_k$ can be computed in time $O(n \log n)$,\footnote{The exact complexity of this step depends on whether we are allowed to generate a uniformly random integer in a range in $O(1)$ time.
If this is not the case, then integers can be generated bit-by-bit using rejection. It is possible to show that the total number of required random bits will be at most $O(n)$ with probability at least $1-n^{-2}$ (see Section~\ref{sec:derand_partitioning}). To maintain a worst-case upper bound on the running time also in the unlikely event that $O(n \log n)$ bits do not suffice, we can simply stop the algorithm and return any arbitrary permutation of $S$. This will not affect the high-probability bounds presented in the sequel.} and the first call to \texttt{WindowSort}\xspace requires time $O(|T_0| \cdot \sqrt{n}) = O(n)$ (see Theorem~\ref{thm:windowsort}). We can therefore restrict our attention to the generic $i$-th iteration of the for loop.
The call to $\texttt{Merge}\xspace(S_{i-1}, T_{i-1})$ can be performed in $O(|S_i| \log n)$ time since, for each $x \in T_{i-1}$, the required approximation of
$\rank(x, S_{i-1})$ can be computed in time $O(\log |T_{i-1}|)$ and $|T_{i-1}|< |S_i| \le n$, while inserting the elements into $S_{i-1}$ at their computed ranks requires time linear in $|S_{i-1}| + |T_{i-1}| = |S_i|$.
The subsequent execution of \texttt{WindowSort}\xspace with $d = O(\log n)$ requires time $O(|S_i| \log n)$, where the hidden constant does not depend on $i$.
Therefore, for a suitable constant $c$, the time spent in the $i$-th iteration is at most $c |S_i| \log n$, and the total running time of Algorithm~\ref{alg:rifflesort} can be upper bounded by:
\[
c \sum_{i=1}^{k+1} |S_i| \log n \le c \sqrt{n} \log n \cdot \sum_{i=1}^{k+1} 2^i
< 2^{k+2} c \sqrt{n} \log n
= 4 c n \log n.
\]
This completes the proof.
\end{proof}
The following lemma, that concerns a thought experiment involving urns and randomly drawn balls, is instrumental to bounding the dislocation of the sequences returned by the $\texttt{Merge}\xspace$ operations. Since it can be proved using arguments that do not depend on the details of \texttt{RiffleSort}\xspace, we postpone its proof to the appendix.
\begin{lemma}
\label{lemma:draw_no_long_monocolor}
Consider an urn containing $N=2M$ balls, $M$ of which are white and the remaining $M$ are black.
Balls are iteratively drawn from the urn without replacement until the urn is empty.
If $N$ is sufficiently large and $ 9 \log N \le k \le \frac{N}{16}$ holds, the probability that some contiguous subsequence of at least $100k$ drawn balls contains $k$ or fewer white balls is at most $N^{-6}$.
\end{lemma}
We can now show that, if $A$ and $B$ contain randomly selected elements, the sequence returned by $\texttt{Merge}\xspace(A,B)$ is likely to have a dislocation that is at most a constant factor larger than the dislocation of $A$:
\begin{lemma}
\label{lemma:merge_constant_disl_increase}
Let $A$ be a sequence containing $m$ randomly chosen elements from $S$ and having maximum dislocation at most $d$, with $\log n \le d = o(m)$.
Let $B$ be a set of $m$ randomly chosen elements from $S \setminus A$.
Then, for a suitable constant $\gamma$, and for large enough values of $m$, $\texttt{Merge}\xspace(A,B)$ has maximum dislocation at most $\gamma d$ with probability at least $1-m^{-4}$.
\end{lemma}
\begin{proof}
Let $\beta = \max\{\alpha, 9/2 \}$, $S'=\texttt{Merge}\xspace(A,B)$, and $S^* = \langle s_0^*, s_1^*, \dots, s_{2m-1}^* \rangle$ be the sequence obtained by sorting $S'$ according to the true order of its elements.
Assume that:
\begin{itemize}
\item all the approximate ranks $r_x$, for $x \in B$, are such that $|r_x - \rank(x,A)| \le \beta d$; and
\item all the contiguous subsequences of $S^*$ containing up to $2 \beta d+2$ elements in $A$ have length at most $200 \beta d + 200$.
\end{itemize}
We will show in the sequel that the above assumptions are likely to hold.
Pick any element $x \in S'$.
We will show that our assumptions imply that the dislocation of $x$ in $S'$ is at most $\gamma d$.
An element $y \in B$ can affect the final dislocation of $x$ in $S'$ only if one of the following two (mutually exclusive) conditions holds: (i) $y \prec x$ and $r_y \ge r_x$, or (ii) $y \succ x$ and $r_y \le r_x$.
All the remaining elements in $B$ will be placed in the correct relative order w.r.t.\ $x$ in $S'$, and hence they do not affect the final dislocation of $x$.
If (i) holds, we have:
\[
r_x - \beta d \le r_y - \beta d \le \rank(y, A) \le \rank(x, A) \le r_x + \beta d,
\]
while, if (ii) holds, we have:
\[
r_x -\beta d \le \rank(x, A) \le \rank(y, A) \le r_y + \beta d \le r_x + \beta d,
\]
and hence, all the elements $y \in B$ that can affect the dislocation of $x$ in $S'$ are contained in the set
$Y = \{ y \in B : r_x - \beta d \le \rank(y, A) \le r_x + \beta d\}$.
We now upper bound the cardinality of $Y$.
Let $y^-$ be the $(r_x - \beta d - 1)$-th element of $A$;
if no such element exists, then let $y^- = s^*_0$.
Similarly, let $y^+$ be the $(r_x + \beta d)$-th element of $A$; if no such element exists, then let $y^+ = s^*_{2m-1}$.
Due to our choice of $y^-$ and $y^+$ we have that $\forall y \in Y,
y^- \preceq y \preceq y^+$, implying that all the elements in $Y$ appear in the contiguous subsequence $\overline{S}$ of $S^*$ having $y^-$ and $y^+$ as its endpoints.
Since no more than $2 \beta d+2$ elements of $A$ belong to $\overline{S}$ , our assumption guarantees that $\overline{S}$ contains at most $200 \beta d+200$ elements.
This implies that the dislocation of $x$ in $S'$ is at most $\beta d + |Y| \le \beta d + |\overline{S}| \le 201 \beta d + 200 \le \gamma d$, where the last inequality holds for large enough $n$ once we choose $\gamma = 202 \beta$.
To conclude the proof, we need to show that our assumptions hold with probability at least $1-m^{-4}$.
Regarding the first assumption, for $x\in B$, a noisy binary search returns a rank $r_x$ such that
$|r_x - \rank(x,A)| \le \alpha d \le \beta d$ with probability at least $1 - O(\frac{1}{m^6})$. Therefore the probability that the assumption holds is at least $1-O(\frac{1}{m^5})$.
Regarding our second assumption, notice that, since the elements in $A$ and $B$ are randomly selected from $S$, we can relate their distribution in $S^*$
with the distribution of the drawn balls in the urn experiment of Lemma~\ref{lemma:draw_no_long_monocolor}: the urn contains $N=2m$ balls, each corresponding to an element in $A \cup B$; a ball is white if it corresponds to one of the $M=m$ elements of $A$, while a black ball corresponds to one of the $M=m$ elements of $B$.
If the assumption does not hold, then there exists a contiguous subsequence of $S^*$ of at least $200 \beta d+200$ elements that contains at most $2 \beta d+2$ elements from $A$. By Lemma~\ref{lemma:draw_no_long_monocolor} with $k=2 \beta d+2$, this happens with probability at most $(2m)^{-6}$ (for sufficiently large values of $n$).
The claim follows by using the union bound.
\end{proof}
We can now use Lemma~\ref{lemma:merge_constant_disl_increase} and Theorem~\ref{thm:windowsort} together to derive an upper bound on the final dislocation of the sequence returned by Algorithm~\ref{alg:rifflesort}.
\begin{lemma}
\label{lemma:rifflesort_dislocation}
The sequence returned by Algorithm~\ref{alg:rifflesort} has maximum dislocation $O(\log n)$ and total dislocation $O(n)$ with probability $1- \frac{1}{n\sqrt{n}}$.
\end{lemma}
\begin{proof}
For $i = 1, \dots, k+1$, we say that the $i$-th iteration of Algorithm~\ref{alg:rifflesort} is \emph{good} if the sequence $S_i$ computed at its end
has both (i) maximum dislocation at most $c_p \log n$, and (ii) total dislocation at most $c_p |S_i|$.
As a corner case, we say that iteration $0$ is good if the sequence $S_0$ also satisfies conditions (i) and (ii) above.
We now focus on a generic iteration $i\ge 1$ and show that, assuming that iteration $i-1$ is good, iteration $i$ is also good with probability at least $1-\frac{1}{n^2}$.
Since iteration $0$ is good with probability at least $1-\frac{1}{|S_0|^4} = 1- \frac{1}{n^2}$ and there are $k + 1 = O(\log n)$ other iterations, the claim will follow by using the union bound.
Since iteration $i-1$ was good, the sequence $S_{i-1}$ has maximum dislocation $c_p \log n$
and hence the call to $\texttt{Merge}\xspace(S_{i-1}, T_{i-1})$ returns a sequence with maximum dislocation at most $\gamma c_p \log n$ with probability at least $1-\frac{1}{|T_{i-1}|^4} \ge 1- \frac{1}{n^2}$.
If this is indeed the case, we have that the sequence $S_i$ returned by the subsequent call to \texttt{WindowSort}\xspace satisfies (i) and (ii) with probability at least $1 - \frac{1}{|S_{i}|^4} \ge 1 - \frac{1}{n^2}$. The claim follows by using the union bound and by noticing that the returned sequence is exactly $S_{k+1}$.
\end{proof}
We have therefore proved the following result, which follows directly from Lemma~\ref{lemma:rifflesort_dislocation} and Lemma~\ref{lemma:rifflesort_runtime}:
\begin{theorem}
\texttt{RiffleSort}\xspace is a randomized algorithm that approximately sorts, in $O(n \log n)$ worst-case time, $n$ elements subject to random persistent comparison errors so that the maximum (resp. total) dislocation of the resulting sequence is $O(\log n)$ (resp. $O(n)$), w.h.p.
\end{theorem}
\section{WindowSort}\label{sec:windowsort}
To make our algorithm description self-contained, and in order to prove Theorem~\ref{thm:windowsort} (thus strengthening the result of \cite{geissmann_et_al}), Algorithm~\ref{alg:windowsort} reproduces (a slightly modified version of) the pseudocode of \texttt{WindowSort}\xspace.
\texttt{WindowSort}\xspace receives as input a sequence $S$ of $n$ elements, and an additional upper bound $d$ on the initial dislocation of $S$. The original \texttt{WindowSort}\xspace algorithm corresponds to the case $d=n$.
\begin{algorithm}[t]
$S_{2d} \gets S$\;
\ForEach{$w=2d, d, d/2, \dots, 2$}
{
Let $\langle x_0, \dots, x_{n-1} \rangle$ be the elements in $S_{w}$\;
\ForEach{$x_i \in S_w$}
{
$\score_w(x_i) \gets \max\{0, i - w\} + |\{ x_j \, : \, x_j < x_i,\ |j-i| \le w \}|$
}
$S_{ w/2 } \gets $ sort the elements of $S_{w}$ by non-decreasing value of $\score_w( \cdot )$\;
}
\Return $S_1$
\caption{\texttt{WindowSort}\xspace\unskip($S,d$)}
\label{alg:windowsort}
\end{algorithm}
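For reference, a direct Python transcription of the pseudocode above could look as follows (ours; \texttt{observed\_less(a, b)} returns the persistent outcome of comparing $a$ with $b$, and $d$ is assumed to be a power of two):
\begin{verbatim}
def window_sort(S, d, observed_less):
    cur = list(S)
    w = 2 * d
    while w >= 2:                              # window sizes 2d, d, d/2, ..., 2
        n = len(cur)
        scores = []
        for i, x in enumerate(cur):
            wins = sum(1 for j in range(max(0, i - w), min(n, i + w + 1))
                       if j != i and observed_less(cur[j], x))
            scores.append(max(0, i - w) + wins)
        order = sorted(range(n), key=lambda i: scores[i])   # non-decreasing score
        cur = [cur[i] for i in order]
        w //= 2
    return cur                                 # this is S_1
\end{verbatim}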
\texttt{WindowSort}\xspace iteratively computes a collection $\{S_{2d},S_{d}, S_{d/2},\dots, S_1\}$ of permutations of $S$:
at every step, \texttt{WindowSort}\xspace maintains a \emph{window size} $w$ and builds $S_{w}$ from $S_{2w}$.
Intuitively, for the algorithm to be successful, we would like $S_w$ to be a permutation of $S$ having maximum dislocation at most $w/2$, w.h.p. Even though this is true in the beginning (since we initially set $w=2d$), this property only holds up to a certain window size $w^*=\Theta(\log n)$.
Nevertheless, it is still possible to show that the maximum dislocation of the returned sequence is $\Theta(\log n)$ and that the
expected dislocation of each element is constant.
We summarize the above discussion in the following lemma, which follows from the same arguments used in the proofs of Theorems~9 and 14 in \cite{geissmann_et_al}:
\begin{lemma}
\label{lemma:windowsort}
Let $S$ be a sequence of $n$ elements having maximum dislocation at most $d$. Then, with probability $1-\frac{1}{n^5}$, the following properties hold: (i) there exists a window size $w^* = \Theta(\log n)$ such that $\disl(S_{w^*})=O(\log n)$; and (ii) $\mathbb{E}[\disl(x,S_1)] = O(1)$.
All the hidden constants depend only on $p$.
\end{lemma}
In the following, we shall prove that the total dislocation of the sequence returned by \texttt{WindowSort}\xspace is linear with high probability.
We start by providing an upper bound to the change in position of an element between different iterations of \texttt{WindowSort}\xspace.
This will also immediately imply that an element can only move by at most $O(w^*)$ positions between $S_{w^*}$ and $S_1$.
\begin{lemma}\label{lem:move}
For every $x \in S$, $| pos(x, S_{w}) - pos(x, S_{w'}) | \le 4 |w-w'|$.
\end{lemma}
\begin{proof}
Without loss of generality let $w' < w$ (the case $w'>w$ is symmetric, and the case $w'=w$ is trivial).
Consider a generic iteration of \texttt{WindowSort}\xspace corresponding to window size $w'' < w$.
Let $i=\pos(x, S_{w''})$ and $\Delta_{w''} = |\pos(x, S_{w''}) - \pos(x, S_{w''/2})|$.
\noindent For every element $y$ such that $pos(y, S_{w''}) < i-2w''$ we have:
\[
\score_{w''}(x) \ge i - w'' > \pos(y, S_{w''}) + w'' \ge \score_{w''}(y),
\]
implying that $pos(x, S_{w''/2}) \ge i-2w''$.
Similarly, for every element $y \in S$ such that $pos(y, S_{w''}) > i + 2w''$:
\[
\score_{w''}(x) \le i+w'' < \pos(y, S_{w''})-w'' \le \score_{w''}(y),
\]
showing that the position of $x$ in $S_{w''/2}$ is at most $i+2w''$.
We conclude that $\Delta_{w''} \le 2w''$, which allows us to write:
\begin{multline*}
|\pos(x, S_{w}) - \pos(x, S_{w'}) | \le \Delta_{w} + \Delta_{w/2} + \Delta_{w/4} + \dots + \Delta_{2w'} \\
\le 2w + w + w/2 + \dots + 4w' = 4w - (2w' + w' + w'/2 + \dots)
= 4w - 4w'.
\end{multline*}
\end{proof}
The previous lemma also allows us to show that, once the window size $w$ is sufficiently small, the final position of an element only depends on a small subset of nearby elements. This, in turn, will imply that, once $S_w$ is fixed, the final positions of distant elements in $S_1$ are conditionally independent. The above property is formally shown in the following:
\begin{lemma}\label{lem:dependent}
Let $x \in S$.
Given $S_{w}$, $pos(x, S_1)$ only depends on comparisons involving elements in positions $pos(x, S_1)-6w, \dots, pos(x, S_1)+6w$ in $S_{w}$.
\end{lemma}
\begin{proof}
Let $w' = w/2^t$, for $t=1, 2, \dots$, and let $S_{w'}=\langle x_0,\dots,x_{n-1}\rangle$.
We prove by induction on $t$ that the fact that $x_i$ occupies position $i$ in $S_{w'}$ only depends on the comparison results involving elements in positions $i-r_t, \dots, i+r_t$ in $S_w$, where $r_t = 6 w (1 - 2^{-t} )$.
Base case $t=1$: by Lemma~\ref{lem:move} we have that $ i - 2w \le \pos(x_i, S_{w}) \le i + 2w$.
Therefore, $x_i$ can only be compared to elements $x_j$ such that $i - 3w \le \pos(x_j, S_{w}) \le i + 3w$, and the claim follows since $r_1 = 3w$.
Suppose now that the claim is true for some $t \ge 1$.
We show that the claim also holds for $t+1$.
Indeed, by Lemma~\ref{lem:move}, we have that $ i - 2w/2^t \le pos(x_i, S_{w/2^t}) \le i + 2w/2^t$
and hence it is only compared to $x_j$s such that
$i - 3w/2^t \le pos(x_j, S_{w/2^t}) \le i + 3 w/2^t$.
By the induction hypothesis, the positions of these elements in $S_{w/2^t}$ only depend on comparisons involving elements whose positions in $S_w$ are within distance
$3w/2^t + 6w(1 - 2^{-t})
= 6w - 3w/2^t
= 6w (1-2^{-(t+1)}) = r_{t+1}$ of $i$, which proves the claim.
\end{proof}
\noindent We are finally ready to prove Theorem~\ref{thm:windowsort}, used in Section~\ref{sec:rifflesort}, which we restate here for convenience.
\windowsortthm*
\begin{proof}
First of all, notice that Lemma~\ref{lem:move} ensures that the final dislocation of each element in $S_1$ will be at most $d + 4w$, where $w=2d$ is the initial window size of \texttt{WindowSort}\xspace, thus implying the $c_p \cdot d$ bound on the final maximum dislocation.
We will condition on the event that for some $w^* = \Theta( \log n)$, $S_{w^*}$ has maximum dislocation $\delta = O(\log n)$. Let $G$ be the indicator random variable that describes this event, i.e., $G=1$ if the event happens.
Clearly, by Lemma~\ref{lemma:windowsort},
we have that $P(G=1) \ge 1 - \frac{1}{n^5}$, implying that the final maximum dislocation will be at most $\delta + 4 w^* = O(\log n)$ with the same probability. Therefore, we now only focus on bounding the total dislocation of $S_1$.
Let $S_1= \langle x_0,\dots,x_{n-1}\rangle$
and observe that $\mathbb{E}[\disl(x_i,S_1)\mid G=1] = O(1)$. Indeed, by Lemma~\ref{lemma:windowsort},
$\mathbb{E}[\disl(x_i,S_1)] \le c$ for all $x_i \in S$ and some constant $c>0$ depending only on $p$. Therefore:
\begin{align*}
c & \ge \mathbb{E}[\disl(x_i, S_1)] = \mathbb{E}[\disl(x_i,S_1)\mid G=1]\cdot P(G=1) + \mathbb{E}[\disl(x_i,S_1)\mid G=0]\cdot P(G=0)\\
&\ge \mathbb{E}[\disl(x_i,S_1)\mid G=1]\cdot \left(1-\frac{1}{n^5}\right),
\end{align*}
implying that $\mathbb{E}[\disl(x_i,S_1)\mid G=1] \le \frac{32}{31}c$ for all $n \ge 2$.
We partition the elements of $S$ into $k = 20w^* + 4\delta$ sets, $P^{(0)},\dots,P^{(k-1)}$,
such that $P^{(j)} = \{x \in S \, : \, \rank(x, S) = j \pmod{k} \}$.
We now show that the final positions of two elements belonging to the same set are conditionally independent on $G=1$.
Indeed, assuming $G=1$ and using Lemma~\ref{lem:move}, we have:
\[
|\pos(x, S_1) - \rank(x, S)| \le |\pos(x, S_1) - \pos(x, S_{w^*})| + |\pos(x, S_{w^*}) - \rank(x, S)| < 4{w^*} + \delta,
\]
and, using again that $G=1$ together with Lemma~\ref{lem:dependent}, we know that
$\pos(x, S_1)$ only depends on comparisons between elements in $\{ y \in S \, : \, | \pos(x, S_1) - \pos(y, S_{w^*})| \le 6{w^*} \}$ in $S_{w^*}$ which, in turn,
is a subset of $\{ y \in S \, : \, |pos(x, S_1) - \rank(y, S) | \le 6{w^*} + \delta \}$.
Combining the previous properties, we have that $pos(x, S_1)$
only depends on $\{ y \in S \, : \, |\rank(x, S) - \rank(y, S) | < 10{w^*} + 2\delta \}$,
implying that two elements belonging to the same set depend on different comparisons results.
Observe now that each set has size at least $\lfloor \frac{n}{k} \rfloor $ and at most $\lceil \frac{n}{k} \rceil = O(\frac{n}{\log n})$, define
$D^{(j)} = \sum_{x_i\in P^{(j)}} \disl(x_i,S_1)$ to be the total dislocation of all elements in $P^{(j)}$, and let $\mu^{(j)} =
\mathbb{E}[{D^{(j)}\mid G=1}] = \sum_{x \in P^{(j)}} \mathbb{E}[\disl(x, S_1) \mid G=1] \le \frac{32 c n}{31 k}$.
We will use Hoeffding's inequality to prove that $D^{(j)}\le \frac{2 c n}{k}$ with high probability.
Hoeffding's inequality is as follows: for independent random variables $X_1,\dots,X_n$, such that $X_i$ is in $[a_i,b_i]$, and $X = \sum_i X_i$,
\[
P(X - \mathbb{E}[X]\ge t) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i-a_i)^2}\right)\, .
\]
\noindent Hence, since each $\disl(x_i, S_1)$ is between $0$ and $\delta + 4w^* = O(\log n)$ when $G=1$, we have:
\[
P\left(D^{(j)} - \mu^{(j)} \ge \frac{30 c n}{31 k} \, \Big| \, G=1 \right) \le
\exp\left(-\frac{1800}{961} \cdot \frac{c^2 n^2 / k^2}{ \lceil \frac{n}{k} \rceil (\delta+ 4w^*)^2}\right)
= \exp \left(-\Omega\left(\frac{n}{\log^3 n}\right)\right).
\]
Finally, by using the union bound over all $k = \Theta(\log n)$ sets and on the event $G=0$, we get that
$\disl(S_1) \ge 2 c n$
with probability at most $ O(\log n) \cdot \exp \left(-\Omega\left(\frac{n}{\log^3 n}\right)\right) + \frac{1}{n^5}$, which is at most $\frac{1}{n^4}$ for sufficiently large values of $n$.
\end{proof}
\section{Derandomization}
\label{sec:derand}
\subsection{Partitioning $S$}
\label{sec:derand_partitioning}
In order to run \texttt{RiffleSort}\xspace we need to partition the input sequence $S$ into a collection of random sets $T_0, T_1, \dots, T_k$ where $k= \frac{\log n}{2}$ and each $T_i$ contains $m = \sqrt{n} \cdot 2^{i-1}$ elements that are chosen uniformly at random from
the $n - \sqrt{n} \sum_{j=i+1}^k 2^{j-1} = 2 m$ elements in $S \setminus \bigcup_{j=i+1}^k T_j$.
Notice also that this is the only step in the algorithm that is randomized.
To obtain a version of \texttt{RiffleSort}\xspace that does not require any external source of randomness, i.e., that depends only on the input sequence and on the comparison results,
we will generate such a partition by exploiting the intrinsic random nature of the comparison results.
We start by showing that, with probability at least $1-\frac{1}{n^3}$, the partition $T_0, \dots, T_k$ can be found in $O(n)$ time using only $O(n)$ random bits.
To this aim it suffices to show that, given a set $A$ of $2N$ elements, a random set $B \subset A$ of $N$ elements can be
found in $O(N)$ time using $O(N)$ random bits, with probability at least $1-\frac{1}{N^7}$.
We construct $B$ as follows:
\begin{itemize}
\item For each element of $A$ perform a coin-flip. Let $C$ be the set of all the elements whose corresponding coin flip is ``heads''.
\item If $|C|=N$, return $B = C$. Otherwise, if $|C|<N$, randomly select a set $D$ of $N-|C|$ elements from $A \setminus C$ and return $B = C \cup D$.
Finally, if $|C|>N$, randomly select a set $D$ of $|C|-N$ elements from $C$ and return $B= C \setminus D$.
\end{itemize}
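For concreteness, the procedure above can be sketched as follows. This is only an illustrative sketch: the function name \texttt{select\_half} and the use of Python's \texttt{random} module in place of the almost-fair coin flips of Section~\ref{sec:derand} are our own choices, and the elements of $A$ are assumed to be distinct and hashable.
\begin{verbatim}
import random

def select_half(A, rng=random):
    # A is assumed to contain 2N distinct elements; return a uniformly
    # random subset B with |B| = N, following the coin-flip construction.
    N = len(A) // 2
    C = [a for a in A if rng.random() < 0.5]   # one coin flip per element
    if len(C) == N:
        return set(C)
    if len(C) < N:
        # Top up C with N - |C| elements chosen u.a.r. from A \ C.
        Cset = set(C)
        rest = [a for a in A if a not in Cset]
        return Cset | set(rng.sample(rest, N - len(C)))
    # |C| > N: discard |C| - N elements chosen u.a.r. from C.
    return set(C) - set(rng.sample(C, len(C) - N))
\end{verbatim}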
Standard techniques show that this method of selecting $B$ is unbiased, i.e., all the sets $B \subset A$ of $N$ elements are returned with equal probability. We therefore move the proof of the following lemma to the Appendix.
\begin{lemma}
\label{lemma:random_selection_unbiased}
For any set $X \subset A$ of $N$ elements, $\Pr(B = X) =\binom{2N}{N}^{-1}$.
\end{lemma}
We now show an upper bound on the number of random bits required.
\begin{lemma}
For sufficiently large values of $N$, the number of random bits required to select $B$ u.a.r. is at most $2N$ with probability at least $1 - N^{-7}$.
\end{lemma}
\begin{proof}
Clearly, $C$ can be built using $3N$ random bits.
Let $m=|C|$, since $E[|C|]=N$, by Hoeffding's inequality we have:
\[
\Pr(| m - N | \ge 2 \sqrt{N \ln N} ) \le 2 e^{-8 \ln N} \le 2N^{-8}.
\]
This implies that, with probability at least $1-2N^{-8}$, the size of $C$ differs from $N$ by at most $2 \sqrt{N \ln N}$, so the set $D$ contains at most $2 \sqrt{N \ln N}$ elements. Hence, $O(\sqrt{N} \cdot \mathrm{polylog} N)$ random bits suffice to select $D$ (using, e.g., a simple rejection strategy), and therefore $B$, with probability at least $1-N^{-8}$. The claim follows by using the union bound.
\end{proof}
From the above lemmas, it immediately follows that all the sets $T_0, \dots, T_k$ can be constructed in time $O(\sum_{i=0}^k |T_i|) = O(n)$ using at most $4n$ random bits with probability at least $1- n^{-\frac{7}{2}} \cdot \log n \ge 1 - n^{-3}$, for sufficiently large values of $n$.
\subsection{Derandomized RiffleSort}
As shown in \cite{newwindowsort}, it is possible to simulate ``almost-fair'' coin flips by xor-ing together a sufficiently large number of comparison results. Indeed, we can associate the two possible results of a comparison with the values $0$ and $1$, so that each comparison behaves as a Bernoulli random variable whose parameter is either $p$ or $1-p$.
We can then use the following fact: let $c_1, \dots, c_k$ be $k = \Theta(\log n)$ independent Bernoulli random variables such that $P(c_i=1) \in \{p, 1-p\} \; \forall i=1,\dots,k$, then $| P(c_1 \oplus c_2 \oplus \dots \oplus c_k = 0) - \frac{1}{2}| \le \frac{1}{n^4}$.
Therefore, if we consider the set $A$ containing the first $9 k$ elements from $S$ and we compare each element in $A$ to all the elements in $S \setminus A$, we obtain a collection of $9 k (n - k) \ge 8 k n$ comparison results (for sufficiently large values of $n$) from which we can generate $8n$ almost-fair coin flips.
With probability at least $1 - \frac{8 k n }{n^4} - n^{-3} > 1 - \frac{1}{n^2}$ these almost-fair coin flips behave exactly as unbiased random bits, and they suffice to select a partition $T_0, \dots, T_k$ of $S \setminus A$.\footnote{This is true even if up to $|S \setminus A| - 1$ additional $+\infty$ elements are added to $S \setminus A$, as described in Section~\ref{sec:rifflesort}.}
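To make the parameter choice concrete, the bias of such an XOR is given exactly by the piling-up lemma, so the number of comparison results needed per coin flip can be computed as in the sketch below (the function names are our own, and $p$ is assumed to be a constant strictly between $0$ and $1/2$).
\begin{verbatim}
from functools import reduce
from operator import xor

def almost_fair_bit(comparison_bits):
    # XOR of k independent 0/1 comparison results, each equal to 1 with
    # probability p or 1 - p.
    return reduce(xor, comparison_bits, 0)

def xor_bias(p, k):
    # Piling-up lemma: |P(XOR of k bits = 1) - 1/2| = (1/2) * (2|1/2 - p|)^k.
    return 0.5 * (2.0 * abs(0.5 - p)) ** k

def bits_per_flip(p, n):
    # Smallest k with bias at most 1/n^4; k = Theta(log n) for constant p.
    k = 1
    while xor_bias(p, k) > n ** -4:
        k += 1
    return k
\end{verbatim}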
It is now possible to use \texttt{RiffleSort}\xspace on $S \setminus A$ to obtain a sequence $S'$ having maximum dislocation $d=O(\log n)$ and linear dislocation $O(n)$ (from Lemma~\ref{lemma:rifflesort_runtime} and Lemma~\ref{lemma:rifflesort_dislocation} this requires time $O(n \log n)$ and succeeds with probability at least $1- |S \setminus A|^{-\frac{3}{2}} > 1 - 3 n^{-\frac{3}{2}}$ since $|S \setminus A| \ge \frac{n}{2}$).
What is left to do is to reinsert all the elements of $A$ into $S'$ without causing any asymptotic increase in the total and in the maximum dislocation. While one might be tempted to use the result of Section~\ref{sec-introduction}, this is not actually possible since the errors between the elements in $A$ and the elements in $S'$ now depend on the permutation $S'$.
However, a simple (but slower) strategy, which is similar to the one used in \cite{newwindowsort}, works even when the sequence $S'$ is adversarially chosen as a function of the errors, as long as its maximum dislocation is at most $d$.
Suppose that we have a guess $\tilde{r}$ on $\rank(x, S')$, we can determine whether $\tilde{r}$ is a good estimate on $\rank(x, S')$
by comparing $x$ with all the elements in positions from $\tilde{r} - cd$ to $\tilde{r} + cd -1$ in $S'$ and counting the number $m$ of \emph{mismatches}: a mismatch is an element $y$ such that either (i) $\pos(y, S') < \tilde{r}$ and $y > x$, or (ii) $\pos(y, S') \ge \tilde{r}$ and $y < x$.
Suppose that our guess of $\tilde{r}$ is much smaller than the true rank of $x$, say $\tilde{r} < \rank(x, S') - c d$ for a sufficiently large constant $c$, then
all the elements $y$ such that $\tilde{r} + d \le \rank(y, S') < \tilde{r} + (c-1)d$
are in $\{ z \in S' \, : \, \tilde{r} \le \pos(z, S') < \tilde{r} + cd \}$.
Since $x \succ y$, we have that the observed comparison result is $x > y$ with probability at least $1-p$, and
hence the expected number of mismatches $m$ is at least $(c-2)d(1-p)$, and a Chernoff bound can be used to show that with probability $1-\frac{1}{n^4}$, $m$ will exceed $\frac{1}{2}(c-2)d(1-p) \ge \frac{1}{3} c d$.
A symmetric argument holds for the case in which $\tilde{r} \ge \rank(x, S') + c d$.
On the contrary, if $\rank(x, S') - d \le \tilde{r} < \rank(x, S') + d$,
all the elements $y$ such that
either $\tilde{r} - (c-2) d \le \rank(y, S') < \tilde{r} -2d$
or $\tilde{r} + 2 d \le \rank(y, S') < \tilde{r} + (c-2) d$
are in the correct relative order w.r.t. $x$ in $S'$.
This implies that the expected number of mismatches $m'$ between $x$ and all such elements $y$ is at most $2(c-4)d p$, and that $m' \le 4(c-4)dp$ with probability at least $1-\frac{1}{n^4}$, which in turn implies that $m \le m' + 4d \le 4(c-4)d p + 4d < \frac{1}{3}c d$ with at least the same probability.
Therefore, to compute a $r_x$ satisfying $|r_x - \rank(x, S')| = O(d)$, it suffices to count the number of mismatches for $\tilde{r} = 0, 2d, 4d, \dots $ and to select the value of $\tilde{r}$ minimizing their number.
The total time required to compute all $r_x$ for $x \in A$ is therefore $|A| \cdot O(\frac{n}{d} \cdot d) = O(n \log n)$, and the success probability is at least $1 - O(\frac{n}{d} \cdot |A|) \cdot \frac{1}{n^4} \ge 1 - \frac{1}{n^2}$, for sufficiently large values of $n$.
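A minimal sketch of this mismatch-counting search is given below. The function and parameter names are our own, \texttt{less\_than} stands for the (noisy) comparator, and boundary handling at the two ends of $S'$ is simplified by skipping out-of-range positions.
\begin{verbatim}
def estimate_rank(x, S_prime, d, c, less_than):
    # Try candidate ranks r = 0, 2d, 4d, ... and return the one with the
    # fewest mismatches in the window of positions [r - c*d, r + c*d).
    n = len(S_prime)
    best_r, best_mismatches = 0, float("inf")
    for r in range(0, n + 1, 2 * d):
        mismatches = 0
        for pos in range(max(0, r - c * d), min(n, r + c * d)):
            y = S_prime[pos]
            if pos < r and less_than(x, y):      # y before r but observed larger than x
                mismatches += 1
            elif pos >= r and less_than(y, x):   # y at/after r but observed smaller than x
                mismatches += 1
        if mismatches < best_mismatches:
            best_r, best_mismatches = r, mismatches
    return best_r
\end{verbatim}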
Combining this with the success probability of \texttt{RiffleSort}\xspace, we obtain an overall success probability of at least $1 - \frac{1}{n}$.
Finally, since the set $A$ only contains $O(\log n)$ elements, simultaneously reinserting them in $S'$ affects the maximum dislocation of $S'$ by at most an $O(\log n)$ additive term. Moreover, their combined contribution to the total dislocation is at most $O(\log^2 n)$.
\appendix
\section{Omitted Proofs}
\subsection*{Proof of Lemma~\ref{lemma:draw_no_long_monocolor}}
\begin{proof}[\unskip\nopunct]
Let $b_i$ be the color of the $i$-th drawn ball.
We consider the sequence of drawn balls and,
for any position $n$, we bound the distance between the position of the $n$-th ball and the position of the $k$-th closest white ball. Suppose that $n \le M = \frac{N}{2}$, as otherwise we can apply similar arguments by considering the drawn balls in reverse order. We distinguish two cases.
If $n \le \frac{M}{4}$ then, for any $j \le \frac{M}{4}$, the probability that $b_{n+j}$ is white, regardless of the colors of the other balls in $b_n, \dots, b_{M/2}$, is at least:
\[
\frac{M-(n+j)}{2M-(n+j)}
= \frac{1}{2} - \frac{n+j}{4M-2(n+j)}
\ge \frac{1}{2} - \frac{M}{2 \cdot 3M} = \frac{1}{3} > \frac{1}{16}.
\]
If $\frac{M}{4} \le n \le M$, then let $X$ be the number of white balls in $\{b_1, \dots, b_n \}$.
Since $X$ is distributed as a hypergeometric random variable of parameters $N$, $M$, and $n$ we have that $\mathbb{E}[X] = \frac{nM}{N} = \frac{n}{2}$.
By using the tail bound (see, e.g., \cite{skala2013hypergeometric}) $\Pr(X \ge \mathbb{E}[X] + tn) \le e^{-2t^2 n}$ for $t \ge 0$, we obtain (for sufficiently large values of $N$):
\begin{align*}
\Pr\left(X \ge \frac{3M}{4}\right) &=
\Pr\left(X \ge \frac{M}{2} + \frac{M}{4}\right) \le
\Pr\left(X \ge \frac{n}{2} + 2 \sqrt{n \log n}\right) \\
&\le e^{-8 \log n} < n^{-8} \le
2^{24} N^{-8}.
\end{align*}
Assume now that $X < \frac{3M}{4}$, which happens with probability at least $1- \frac{2^{24}}{N^8}$.
In this case, for any $j \le \frac{M}{8}$, the probability that $b_{n+j}$ is white, regardless of the colors of the other balls in $b_n, \dots, b_{(9/8) M}$, is at least:
\[
\frac{M-(\frac{3M}{4}+j)}{2M-(n+j)}
\ge \frac{M-\frac{7M}{8}}{2M}
= \frac{1}{16}.
\]
Therefore, in both the first and the second case, as long as our assumption holds, the probability that at most $k$ balls in $b_{n}, \dots, b_{n+100k}$ are white is at most:
\begin{multline*}
\sum_{j=0}^{k} \binom{100k}{j} \left( \frac{1}{16} \right)^j \left( \frac{15}{16} \right)^{100k-j}
\le (k+1) \binom{100k}{k} \left(\frac{15}{16} \right)^{100k} \\
\le (k+1) \left(\frac{100ek}{k}\right)^k \left(\frac{15}{16}\right)^{100k}
\le (k+1) \left(100e \left(\frac{15}{16}\right)^{100} \right)^k
< \frac{k+1}{2^k},
\end{multline*}
where we used the inequality $\binom{\eta}{\kappa} \le \left( \frac{e \eta}{\kappa} \right)^\kappa$.
By using the union bound over all $n \le M$, and accounting for the event $X \ge \frac{3M}{4}$ whenever $\frac{M}{4} \le n \le M$, we can upper bound the sought probability as:
\[
N \left( \frac{k+1}{2^k} + 2^{24} N^{-8} \right)
\le N \left( \frac{N}{N^9} + 2^{24} N^{-8} \right)
\le (1+2^{24}) N^{-7} \le N^{-6},
\]
where the last inequality holds for sufficiently large values of $N$.
\end{proof}
\subsection*{Proof of Lemma~\ref{lemma:random_selection_unbiased}}
\begin{proof}[\unskip\nopunct]
For each $k \in [ -N, N ]$, we define a collection $\mathcal{Y}_k$ of sets as follows:
If $k < 0$, $\mathcal{Y}_k$ contains all the sets $Y \subset X$ such that $| X \setminus Y | = |k|$.
If $k \ge 0$, $\mathcal{Y}_k$ contains all the sets $Y \supseteq X$ such that $| Y \setminus X | = |k|$.
Notice that, depending on the value of $k$, $\mathcal{Y}_k$ can be obtained by either selecting $|k|$ elements to remove from $X$, or by selecting $k$ elements to add to $X$ from $A \setminus X$. Therefore, $|\mathcal{Y}_k| = \binom{N}{|k|}$.
Defining $m=|C|$, we have:
\begin{align*}
\Pr(B=X) &= \sum_{k=-N}^N \Pr(m=N+k) \cdot \Pr(B = X \mid m=N+k) \\
&= 2^{-2N} \sum_{k=-N}^N \binom{2N}{N+k} \sum_{Y \in \mathcal{Y}_k} \Pr(C = Y \mid m=N+k) \Pr(B=X \mid C = Y) \\
&= 2^{-2N} \sum_{k=-N}^N \binom{2N}{N+k} \sum_{Y \in \mathcal{Y}_k} \frac{1}{\binom{2N}{N+k}} \frac{1}{\binom{N+|k|}{|k|}}
= 2^{-2N} \sum_{k=-N}^N \frac{|\mathcal{Y}_k|}{\binom{N+|k|}{|k|}} \\
&= 2^{-2N} \sum_{k=-N}^N \frac{\binom{N}{|k|}}{\binom{N+|k|}{|k|}}
= 2^{-2N} \sum_{k=-N}^N \frac{N! N! |k|!}{ (N+|k|)! (N-|k|)! |k|!} \\
& = 2^{-2N} \sum_{k=-N}^N \frac{N! N!}{ (N+k)! (N-k)!}
= 2^{-2N} \sum_{j=0}^{2N} \frac{N! N!}{ j! (2N-j)!}\\
&= 2^{-2N} \frac{N! N!}{(2N)!} \sum_{j=0}^{2N} \frac{(2N)!}{ j! (2N-j)!} = \frac{N! N!}{(2N)!} = \frac{1}{\binom{2N}{N}}.
\end{align*}
\end{proof}
\end{document}
\begin{document}
\title{Leakage mitigation for quantum error correction using a mixed qubit scheme}
\author{Natalie C. Brown}
\affiliation{School of Physics,
Georgia Institute of Technology, Atlanta, GA, USA}
\author{Kenneth R. Brown}
\affiliation{School of Physics,
Georgia Institute of Technology, Atlanta, GA, USA}
\affiliation{Schools of Chemistry and Biochemistry and Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA}
\affiliation{Departments of Electrical and Computer Engineering, Chemistry and Physics, Duke University, Durham, NC, USA }
\date{\today}
\begin{abstract}
Leakage errors take qubits out of the computational subspace and will accumulate if not addressed. A leaked qubit will reduce the effectiveness of quantum error correction protocols due to the cost of implementing leakage reduction circuits and the harm caused by interacting leaked states with qubit states. Ion trap qubits driven by Raman gates have a natural choice between qubits encoded in magnetically insensitive hyperfine states that can leak and qubits encoded in magnetically sensitive Zeeman states of the electron spin that cannot leak. In our previous work, we compared these two qubits in the context of the toric code with a depolarizing leakage error model and found that, for magnetic field noise with a standard deviation less than 32 $\mu$G, the $^{174}$Yb$^+$ Zeeman qubit outperforms the $^{171}$Yb$^+$ hyperfine qubit. Here we examine a physically motivated leakage error model based on ions interacting via the M$\o$lmer-S$\o$rensen gate. We find that this greatly improves the performance of hyperfine qubits, but the Zeeman qubits are more effective for magnetic field noise with a standard deviation less than 10 $\mu$G. At these low magnetic fields, we find that the best choice is a mixed qubit scheme where the hyperfine qubits are the ancilla and the leakage is handled without the need for an additional leakage reduction circuit.
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
We have yet to discover the perfect qubit. Every known qubit candidate comes with assets and liabilities. Recently, there has been a growing interest in combining different qubit types in an effort to amplify these desirable attributes and suppress the undesirable noise. Such mixed qubit architectures look promising, addressing a wide range of issues such as cooling, crosstalk, and leakage \cite{inlek2017multispecies, tan2015multi, schaetz2015quantum,ballance2015hybrid, wang2017single, barrett2003sympathetic, negnevitsky2018repeated}.
Qubits based on clock states are often favored in trapped-ion quantum computers \cite{brown2016co, brown2011single, ballance2016high, blinov2004quantum}. Hyperfine qubits based on clock transitions suffer from virtually no memory errors because clock states
have a second-order dependence on magnetic field. However, there exist additional energy states, resulting from Zeeman splittings, that are outside the defined computational subspace. These energy states can be accessed through leakage errors.
Leakage errors are especially damaging. If left untreated they corrupt data and render error correction syndromes useless. Even so, standard error correction schemes are not equipped to handle such errors on their own. Additional leakage reducing circuits (LRC) are required to convert leakage errors into Pauli errors before they can be corrected \cite{aliferis2005fault, fowler, suchara}.
Zeeman qubits are also a viable candidate for ion trap quantum computing \cite{lucas2004isotope, keselman2011high, poschinger2009coherent, ratschbacher2013decoherence}. While they suffer from a first-order dependence on magnetic fields and thus have more dephasing noise than hyperfine qubits, they have no additional energy states that lead to leakage. Thus the tradeoff is clear: they suffer more memory errors but do not suffer from leakage errors.
In our previous work \cite{brown2018comparing}, we studied two specific types of qubits: $^{171}$Yb$^+$ hyperfine qubits and $^{174}$Yb$^+$ Zeeman qubits. We assessed the performance of a surface code built on each type of qubit, comparing the two different error models: one with leakage but no memory errors (hyperfine) and one with large memory errors but no leakage (Zeeman). We found that in certain magnetic field regimes, the Zeeman qubit's memory error can be suppressed enough that a surface code built on this type of qubit outperforms one built on a hyperfine system.
In this work, we study the performance of the surface code on a mixed qubit platform. Using $^{171}$Yb$^+$ hyperfine qubits for our ancilla and $^{174}$Yb$^+$ Zeeman qubits for our data, we reduce the potential for leakage errors at the cost of increasing memory errors. We simulate two different leakage models: a worst case stochastic model in which leaked qubits completely depolarize unleaked qubits they interact with and a M$\o$lmer-S$\o$renson model which captures the effects of leakage during a M$\o$lmer-S$\o$renson gate. We find that in certain magnetic field regimes there is an improvement in the logical error rate of the surface code compared to the performance on either a pure hyperfine or Zeeman system. A surface code built on the mixed qubit architecture can effectively handle leakage without the use of a LRC.
\section{Atomic Structure of Yb isotopes}
At the root of this study, we are investigating the performance of the surface code when the ancilla and the data qubits have two very different error models.
Since much of our work relies on errors that are very specific to the atomic structure of the qubits involved, we briefly outline the atomic structure of the two Ytterbium isotopes used in our simulation: $^{171}$Yb$^{+}$ ($I=1/2$) and $^{174}$Yb$^{+}$ ($I=0$).
The half integer nuclear spin of $^{171}$Yb$^{+}$ gives the well known hyperfine splitting responsible for the clock states $\ket{ F = 0 , m_F = 0 }$ and $\ket{ F = 1 , m_F = 0 }$ that are often used to define a qubit. Because of their second-order magnetic field dependence, qubits based on this transition have virtually no idle errors. At a finite magnetic field with a stability of $10$
$\mu$G, the probability of a phase error for a hyperfine qubit is a factor of $10^{15}$ smaller than the probability of a phase error for a Zeeman qubit \cite{brown2018comparing}. However, there exist additional energy states resulting from the Zeeman effect, $\ket{ F = 1 , m_F = -1 }$ and $\ket{ F = 1 , m_F = +1 }$. So the computational space defining the qubit is smaller than that of the physical system, leading to the possibility of leakage errors. $^{171}$Yb$^{+}$ is a good example to study since the leakage space has the same dimension as the qubit space. The rate of leakage into and out of the computational space will then be equal. Other ions with a spin $1/2$ nucleus will also benefit from this symmetry (e.g. $^{133}$Ba$^+$ \cite{hucul2017spectroscopy}). Ions with larger nuclear spins, like $^{43}$Ca$^+$, will suffer from larger leakage rates due to the existence of a larger leakage space \cite{ballance2016high}.
By contrast, $^{174}$Yb$^{+}$ has a zero nuclear spin. Thus the only energy states in the $S_{1/2}$ manifold are the two states resulting from Zeeman splitting ($\ket{ F = 1/2 , m_F = -1/2 }$ and $\ket{ F = 1/2 , m_F = +1/2 }$) and it is these states that define the qubit. This is a double-edged sword. On the one hand, since there are only two states available, there is no possibility for leakage. On the other, these states have a first-order dependence on magnetic field and thus will be highly susceptible to dephasing errors caused by fluctuations in the trap.
It is worth noting that in each isotope, there exist higher-level leakage states in the D and F manifolds, but these states are quickly repumped back down to the ground state and are ignored in our analysis.
\section{Error Model}
\subsection{Sources of Physical Errors}
Raman transitions are a leading candidate for gate implementations in ion trap quantum computers. In the limit of no technical noise, the main source of error will arise from spontaneous scattering \cite{wineland, toolbox, ozeri2005hyperfine, uys, Ozeri}. While spontaneous scattering does not favor any particular state, the atomic structure will affect how the scattering manifests. Raman scattering from these gates leaves the qubit in a different energy state. Depending on the atomic structure of the qubits, this leads to either Pauli $\hat{X}$ or $\hat{Y}$ type errors, or leakage errors. For hyperfine qubits, half of this scattering will result in leakage whilst for a Zeeman qubit, all the scattering results in Pauli type errors. Rayleigh scattering from these gates leaves the qubit in the same energy state but adds a phase. If the scattering from the two qubit levels is approximately equal, the scattering amplitudes can either destructively interfere leading to negligible errors (as is the case for $^{171}$Yb$^{+}$ ), or constructively interfere, leading to significant dephasing errors (as is the case for $^{174}$Yb$^{+}$ ) \cite{ozeri2005hyperfine, uys, Ozeri}. In the latter case, the probability of error resulting from Rayleigh scattering is approximately equal to that of Raman scattering.
Another source of noise arises from magnetic field fluctuations in the trap. For the Zeeman qubit, the probability of error arising from the first-order effect grows quadratically with increasing field fluctuations. For the hyperfine qubit, the errors arising from the second-order effect grows quartically. For mean field fluctuations of higher than $10^{-4}$ G, the probability of error resulting from first-order effects is above 1$\%$, the threshold error value of the surface code \cite{dennis2002topological, raussendorf2007fault, wang2009threshold}. However, even in these highly unstable fields, the probability of errors from the second-order effect is well below the threshold value \cite{brown2018comparing}. The noise resulting from these fields is significant for the Zeeman qubits and inconsequential for the hyperfine qubits.
In our simulation, we vary the probability of scattering while considering a static error arising from the magnetic field. Based on the calculations of \cite{brown2018comparing}, we modeled the effects of scattering with the error channels:
\begin{equation}
\mathcal{E}_{h}(\rho) = (1-\frac{p_s}{2})\rho+\frac{p_s}{8}\hat{X}\rho \hat{X}+\frac{p_s}{8}\hat{Y}\rho \hat{Y} +\frac{p_s}{4}\hat{L}\rho \hat{L}
\end{equation}
\begin{equation}
\mathcal{E}_{Z}(\rho) = (1-p_s)\rho+\frac{p_s}{4}\hat{X}\rho \hat{X}+\frac{p_s}{4}\hat{Y}\rho \hat{Y} +\frac{p_s}{2}\hat{Z}\rho \hat{Z}
\end{equation}
where $\mathcal{E}_{h}(\rho)$ and $\mathcal{E}_{Z}(\rho)$ are the error channels for the hyperfine and Zeeman qubits respectively and $p_s$ is the scattering error rate, which quantifies the overall error due to scattering. For the chosen detunings, the Raman and Rayleigh scattering lead to equal error on the Zeeman qubit, but the hyperfine qubit experiences negligible decoherence due to Rayleigh scattering. This leads to hyperfine qubits having half the scattering error, since the qubit subspace occupies half the physical subspace. We expect $p_s = 9.76 \times 10^{-6}$ for one-qubit gates and $p_s = 25.2 \times 10^{-5}$ for two-qubit gates. For a more detailed discussion of how these errors manifest for the particular qubits, please refer to \cite{brown2018comparing}.
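As an illustration of how these channels enter a stochastic error simulation, the sketch below samples one error label per gate with the weights given above. The function names, the use of Python, and the treatment of leakage as an explicit label \texttt{'L'} are our own simplifications and are not the simulator used in this work.
\begin{verbatim}
import random

def sample_hyperfine_error(p_s, rng=random):
    # Weights of E_h: I with 1 - p_s/2, X and Y with p_s/8 each,
    # leakage ('L') with p_s/4.
    r = rng.random()
    if r < 1.0 - p_s / 2:
        return "I"
    if r < 1.0 - p_s / 2 + p_s / 8:
        return "X"
    if r < 1.0 - p_s / 2 + p_s / 4:
        return "Y"
    return "L"

def sample_zeeman_error(p_s, rng=random):
    # Weights of E_Z: I with 1 - p_s, X and Y with p_s/4 each, Z with p_s/2.
    r = rng.random()
    if r < 1.0 - p_s:
        return "I"
    if r < 1.0 - p_s + p_s / 4:
        return "X"
    if r < 1.0 - p_s + p_s / 2:
        return "Y"
    return "Z"
\end{verbatim}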
\subsection{Leakage Models}
While our error model is motivated by the physical error rates of the two ions considered, a more general view of our model is a system with one-sided leakage. We define one-sided leakage as a system where only one of the qubits involved in a CNOT gate is able to leak. Because one-sided leakage could model the behavior of physical systems other than ion traps, we looked at two different leakage models: depolarizing and M$\o$lmer-S$\o$rensen.
The depolarizing leakage model has been used in numerous leakage simulations \cite{fowler, suchara, brown2018comparing, ghosh2013understanding}. In this model, when a leaked qubit interacts with an unleaked qubit via a CNOT, the leaked qubit remains in the leaked state while the latter is depolarized. This model is a worst-case stochastic model which may be applied to a variety of systems including superconductors \cite{ghosh2013understanding} and quantum dots \cite{fong2011universal, mehl2015fault}.
The second leakage model is specific to ion traps. It aims to capture how a leaked qubit behaves in a M$\o$lmer-S$\o$rensen (MS) gate \cite{sorensen1999quantum}. The MS gate utilizes the motion of the ion crystal to couple the ions together. Two laser beams, detuned off resonance close to the blue and red sidebands, drive the system, causing both ions involved in the gate to change their state collectively \cite{haffner2008quantum, blatt2008entangled}. In a leaked ion, the spacing between the qubit energy state and the leakage energy state is large compared to the spacing between the collective motional modes of the crystal. This causes the lasers to be much farther off resonance and both on the same side of the carrier transition. The leaked ion is then only weakly displaced. Thus, when an MS gate is performed with a leaked ion, no entanglement is generated \cite{mike}.
The full CNOT gate involves several more single qubit gates that still get applied whether or not the MS gate failed \cite{maslov2017basic}. If the control leaked, the target undergoes a $X(-\pi/2)$ rotation. If the target leaked, the control undergoes a $Z(-\pi/2)$ rotation. We simulate this by applying a Pauli-twirl approximation which gives the channels:
\begin{equation}
\mathcal{E}_{bit}(\rho) = \frac{1}{2}\rho +\frac{1}{2}\hat{X}\rho \hat{X} \\
\end{equation}
\begin{equation}
\mathcal{E}_{phase}(\rho) = \frac{1}{2}\rho +\frac{1}{2}\hat{Z}\rho \hat{Z} \\
\end{equation}
as used in \cite{mike}. We applied $\mathcal{E}_{bit}(\rho)$ to the target if the control has leaked and $\mathcal{E}_{phase}(\rho)$ to the control if the target has leaked. In our one-sided leakage model this translates to applying $\mathcal{E}_{bit}(\rho)$ to our data qubits if an ancilla is leaked during our $X$ stabilizer syndrome extraction or $\mathcal{E}_{phase}(\rho)$ to the data if an ancilla leaked during our $Z$ stabilizer extraction.
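A minimal sketch of how this conditional, twirled error can be applied in a one-sided leakage simulation is shown below; the interface and names are our own assumptions.
\begin{verbatim}
import random

def ms_leakage_error_on_data(ancilla_leaked, stabilizer_type, rng=random):
    # Pauli-twirled effect on the data qubit of a CNOT whose ancilla has
    # leaked: E_bit (X with probability 1/2) during X-stabilizer extraction,
    # E_phase (Z with probability 1/2) during Z-stabilizer extraction.
    if not ancilla_leaked:
        return "I"
    flip = "X" if stabilizer_type == "X" else "Z"
    return flip if rng.random() < 0.5 else "I"
\end{verbatim}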
We make several assumptions in both our leakage models. First we assume that leakage is only caused by spontaneous scattering from the gates and thus initialization of the qubit does not cause leakage. Typically, ions are initialized using optical pumping techniques which do not result in leakage. This assumption has also been made in other leakage studies \cite{fowler}. Second, we assume that a leaked qubit has a probability to return to the computational subspace equal to the probability that it leaked out. This is again motivated by physical scattering events and has been modeled in several other studies \cite{brown2018comparing, fowler, suchara, mike}. Finally, we assume a leaked qubit remains leaked until it leaks back to the computational space or is reinitialized.
\section{Surface code Simulation}
Topological surface codes are a leading candidate for quantum error correction (QEC), due to their high thresholds and single ancilla extraction \cite{trout2018simulating, PhysRevA.90.032326, dennis2002topological, kitaev2002classical, kitaev1997quantum}. However, standard topological codes alone are incapable of handling leakage errors. If left unhandled, the performance of the surface code suffers dramatically from the correlated errors produced from a single leakage error \cite{fowler, suchara, brown2018comparing, aliferis2005fault, mike}. Fortunately, Aliferis and Terhal showed that a threshold exists for the surface code in the presence of leakage if one incorporates leakage reducing circuits (LRC) \cite{aliferis2005fault}. Typically, these LRCs involve teleporting or swapping leaked qubits with an auxiliary qubit. While LRCs are effective for reducing leakage, they come at a cost. Implementing even the simplest LRC involves incorporating more gates and thus adds more potential fault locations.
\begin{figure}
\caption{Mixed species surface code layout. Hyperfine ($^{171}$Yb$^+$) qubits serve as the ancilla and Zeeman ($^{174}$Yb$^+$) qubits serve as the data.}
\label{YbModelSurf}
\end{figure}
In a surface code, qubits are arranged on a lattice and function either as data qubits, which encode the information, or ancilla qubits, which are used to measure error syndromes (see Fig. \ref{YbModelSurf}). In the standard surface code, syndrome extraction is accomplished by performing four CNOT gates between each data and ancilla qubit and then measuring the ancilla (see Fig. \ref{circuits}). Minimum weight perfect matching is done to infer the most probable error given the observed syndrome.
In a surface code built on only Zeeman qubits, this standard syndrome extraction is all that is needed to detect and correct errors. In a surface code built on only hyperfine qubits, a LRC must be implemented to convert leakage errors into Pauli errors.
The simplest method for implementing a LRC is to add a SWAP gate at the end of the syndrome extraction circuit. Data and ancilla qubits swap their roles, and thus every leaked qubit gets reinitialized within at most two error correction cycles. Thanks to gate identities, this amounts to adding one extra CNOT to the error correction cycle (see Fig. \ref{circuits}). We refer to this LRC as the SWAP-LRC.
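Schematically, one syndrome extraction cycle per stabilizer can be summarized as in the sketch below. This is only a sketch with names of our own choosing; it omits the CNOT ordering and directions and the basis changes that distinguish $X$ from $Z$ stabilizers, and the data qubit acted on by the extra CNOT is an arbitrary choice made here for illustration.
\begin{verbatim}
def syndrome_extraction_gates(ancilla, data_neighbors, use_swap_lrc=False):
    # Four CNOTs between the ancilla and its neighboring data qubits,
    # followed by an ancilla measurement.  With the SWAP-LRC, the SWAP is
    # folded (via gate identities) into a single additional CNOT.
    gates = [("CNOT", ancilla, d) for d in data_neighbors]
    if use_swap_lrc:
        gates.append(("CNOT", ancilla, data_neighbors[-1]))  # arbitrary target
    gates.append(("MEASURE", ancilla))
    return gates
\end{verbatim}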
\begin{figure}
\caption{The top circuit is the standard syndrome extraction circuit for the surface code. The bottom circuit is the standard circuit with a SWAP-LRC implemented at the end. Both the homogenous Zeeman system and the mixed species system utilize the top circuit. The homogenous hyperfine system requires the LRC to handle leakage errors.}
\label{circuits}
\end{figure}
In our simulation, we assign the role of data to the $^{174}$Yb$^+$ Zeeman qubits and the role of ancilla to $^{171}$Yb$^+$ hyperfine qubits. Since data qubits cannot leak, there is no need for a LRC. In fact, leaked qubits can live for at most one error correction cycle. This is already an improvement over the pure hyperfine system where leaked qubits can live twice as long.
Furthermore, when a leaked qubit enters a CNOT gate, the other qubit involved incurs some error, as dictated by the error models discussed above. For a pure hyperfine system there are potentially four such corrupt gates, because data qubits can leak and leakage is not necessarily eliminated every cycle. For the mixed species system, there are only three such corrupt gates since only ancilla can leak and we assumed initialization does not cause leakage.
While the advantages of the mixed species system over a hyperfine system are immediately clear, they come at a cost. While we no longer require a LRC to handle leakage errors, we have effectively traded in half our leakage errors, which vary with the scattering rate, for constant memory errors. Still, memory errors manifest as Pauli $\hat{Z}$ errors which we can correct without additional overhead and, compared to a pure Zeeman system, the mixed species system will incur half the memory errors due to the symmetry of the surface code.
\section{Results and Discussion}
Implementing the two different error models for the hyperfine and Zeeman qubits discussed above, we examined the performance of the surface code built on this mixed species structure and compared it to the performance of the pure hyperfine and Zeeman systems. In each simulation, we varied the probability of a spontaneous scattering event ($p_s$) while applying a constant magnetic field error probability ($p_M$). We simulated the effects of both the depolarizing and MS leakage models and looked at a range of magnetic field stabilities (see Table \ref{table1}) to get a grasp on where the trade off between leakage errors and memory errors might lie.
\begin{figure}
\caption{Distance comparison for the depolarizing leakage model of distance 3, 5, 7. The solid and dashed blue lines represent the Zeeman and mixed species systems respectively, stabilized to $10$ $\mu$G standard deviation from the mean magnetic field per two qubit gate. The solid black line represents the hyperfine system with the SWAP-LRC implemented. The logical error rate ($p_L$) is proportional to $p_s^{\lceil \frac{d}{4} \rceil}$.}
\label{dist_comp_DP}
\end{figure}
\begin{figure}
\caption{Distance comparison for the MS leakage model of distance 3, 5, 7. The solid and dashed blue lines represent the Zeeman and mixed species systems respectively, stabilized to $10$ $\mu$G standard deviation from the mean magnetic field per two qubit gate. The solid black line represents the hyperfine system with the SWAP-LRC implemented. The logical error rate ($p_L$) is proportional to $p_s^{\lceil \frac{d}{2} \rceil}$.}
\label{dist_comp_MS}
\end{figure}
\subsection{Leakage effects}
In the depolarizing leakage model, a single leakage error on an ancilla can cause a two-qubit error chain by depolarizing its neighboring data qubits. These hook errors reduce the code's effective distance by half \cite{fowler, mike}, see Fig. \ref{dist_comp_DP}. In the hyperfine system, these ancilla qubits then get swapped and reassigned as data qubits. Leaked data qubits will corrupt ancilla qubits, leading to measurement errors.
In the MS leakage model, leakage errors on ancilla can cause errors on data qubits that are of the same type as that stabilizer. All potential error outcomes are either a single-qubit or two-qubit error, up to a stabilizer. Thus any set of at most $\lfloor \frac{d-1}{2} \rfloor$ physical errors does not produce a logical error and the effective code distance is maintained, see Fig. \ref{dist_comp_MS}. Leakage errors on data can produce many time-correlated errors, but they will not produce any additional space-correlated errors, since a leaked data qubit cannot spread errors to ancilla that will then propagate to other data qubits \cite{mike}.
In the mixed species system, there are fewer time- and space-correlated errors for two reasons: leakage can only live for one cycle, and leaked ancilla never get swapped with data qubits. This is independent of the leakage error model used. So we expect the mixed species model to outperform the hyperfine system if the memory errors can be suppressed.
For the depolarizing model (Fig. \ref{dist_comp_DP}), ancilla leakage is so damaging that the mixed species system, even with half the potential for leakage errors, suffers a logical error rate suppression proportional only to $\lceil \frac{d}{4} \rceil \log(p_s)$, i.e., $p_L \propto p_s^{\lceil \frac{d}{4} \rceil}$. While in certain magnetic field regimes this removal of potential leakage errors is enough to beat the hyperfine system, the mixed species model will \textit{almost} never be able to do better than the Zeeman system in the same error regime. Having half the memory errors is not enough to compensate for the damage leakage can cause. Of course, this all rests on the effects from the magnetic field, which we will discuss in detail later.
For the MS model (Fig. \ref{dist_comp_MS}), leakage is much less damaging and we see every system behaves fault tolerantly. In this leakage model, the mixed species system has the lowest logical error rate. It beats the hyperfine system for the same reasons as the depolarizing model (i.e. less leakage and shorter lived leakage) and it beats the Zeeman system because the structure of the leakage errors imposed by the MS model makes leakage errors more comparable to memory errors. In fact, leakage errors are less damaging than two-qubit dephasing errors. While they cause errors on other qubits, the structure of the MS leakage model restricts these errors to be the same as the stabilizer. In the Zeeman model, this is not true for all ancilla; $Z$ type ancilla will have this advantage but for $X$ type ancilla, dephasing errors will cause measurement errors. Because the mixed species system suffers less of these dephasing errors, in no magnetic field regime will the pure Zeeman system outperform the mixed species system.
\subsection{Memory effects}
For both leakage models, when the main source of error arises from spontaneous scattering ($p_s > p_M$), we see an improvement in the logical error rate as the scattering probability decreases. Once the scattering rate decreases below the static memory error probability ($p_s < p_M$), the logical error rate plateaus as memory errors dominate. The hyperfine system is immune to these memory errors and so its performance is the same for every magnetic field stability.
Table \ref{table1} lists the values of the static $p_M$ applied in our simulation.
\begin{figure}
\caption{Comparison of the different schemes for a distance-3 surface code using the depolarizing leakage model. The solid and dashed colored lines represent the Zeeman and mixed species systems respectively. The solid black line shows the performance of the hyperfine system with the SWAP-LRC implemented. The color of the line indicates the standard deviation from the mean magnetic field per two qubit gate: $100$ $\mu$G (red), $32$ $\mu$G (green), $10$ $\mu$G (blue) and $1$ $\mu$G (purple).}
\label{d=3_DP}
\end{figure}
\begin{figure}
\caption{Comparison of the different schemes for a distance-3 surface code using the MS leakage model. The solid and dashed colored lines represent the Zeeman and mixed species systems respectively. The solid black line shows the performance of the hyperfine system with the SWAP-LRC. The color of the line indicates the standard deviation from the mean magnetic field per two qubit gate: $100$ $\mu$G (red), $32$ $\mu$G (green), $10$ $\mu$G (blue) and $1$ $\mu$G (purple).}
\label{d=3_MS}
\end{figure}
\begin{center}
\begin{table}
\begin{tabular}{ | c | c | }
\hline
S. D. ($\mu$G) & $p_M$ \\ \hhline{|=|=|}
$\sigma=100$ & $7.75 \times 10^{-3}$ \\ \hline
$\sigma=32$ & $7.75 \times 10^{-4}$ \\ \hline
$\sigma=10$ & $7.75 \times 10^{-5}$ \\ \hline
$\sigma=1$ & $7.75 \times 10^{-6}$ \\
\hline
\end{tabular}
\caption{A list of error probabilities caused by the first-order Zeeman effect ($^{174}$Yb$^+$). $\sigma$ is the standard deviation from the mean magnetic field per two qubit gate in $\mu$G.}
\label{table1}
\end{table}
\end{center}
The logical error rate of a distance-3 surface code using the depolarizing leakage model can be seen in Fig. \ref{d=3_DP}. When $p_M > p_s$, the performance of the surface code is limited by the amount of memory errors incurred. Since the Zeeman system suffers the most from these errors, it has the worst logical error rate of the three systems. The mixed species system suffers half as many memory errors and thus will always be better than the Zeeman system, but worse than the hyperfine system in \textit{most} of this regime.
When $p_M < p_s$, the performance of the surface code is limited by the amount of leakage incurred. Since the hyperfine system suffers the most from leakage, it has the worst logical error rate. The mixed species code will always be better than the hyperfine system but always worse than the Zeeman system in this regime.
There is a small range when $p_M > p_s$ in which the mixed species system has the lowest logical error rate. In the depolarizing leakage model, leakage errors cause more damage than memory errors. The hyperfine system not only has more potential for leakage, it also has more fault locations due to the extra gate needed for the SWAP-LRC. There is a small range for $p_s$ in which the total probability of a logical error caused by a single leakage event in the hyperfine system is \textit{higher} than the probability of a logical error caused by either a single leakage or two memory errors in the mixed species system. When this is true, the mixed species system outperforms the hyperfine system.
The logical error rate of a distance-3 surface code using the MS leakage model can be seen in Fig. \ref{d=3_MS}. In this leakage model, memory errors are more damaging than leakage errors. Thus there is no magnetic field regime in which the pure Zeeman system will outperform the mixed species system. When $p_M > p_s$, the hyperfine system will have the lowest logical error rate.
In fact, we have the opposite of the situation in the depolarizing model: there is a small regime when $p_s > p_M$ in which the probability of a logical error caused by two leakage errors in the hyperfine system is \textit{lower} than the probability of a logical error caused by two leakage errors or two memory errors in the mixed species system. Since memory errors are more damaging, the magnetic field stability required to suppress the memory errors enough to see an advantage in using a Zeeman qubit is higher than for the depolarizing leakage model. For the errors we are interested in, the magnetic field stability required for the Zeeman qubits becomes stricter than our previous estimates with this error model \cite{brown2018comparing}.
For the ions considered, the total scattering probability for a two qubit gate was calculated to be $2.5 \times 10^{-4}$ \cite{brown2018comparing}. In these calculations we assumed the gates were driven by co-propagating linearly polarized Raman beams with a laser frequency of $355$ nm and a two qubit gate time of $200$ $\mu$s. These parameters minimize spontaneous scattering and reflect parameters used in recent experiments \cite{linke2017fault, leung2018robust, debnath2016demonstration, fallek2016transport}.
For this realistic total scattering probability ($p_s = 2.5 \times 10^{-4}$), in each leakage model there is a magnetic field regime where the mixed species system outperforms both homogenous systems. For the depolarizing model, this is a narrow window near a stability of 32 $\mu$G. Below this value, the homogenous Zeeman qubit yields better performance. For the M$\o$lmer-S$\o$rensen leakage model, leakage is less damaging and a lower memory error is required to outperform the homogenous hyperfine qubit. Below 10 $\mu$G the Zeeman and mixed species systems outperform the pure hyperfine system, with the mixed species system achieving roughly half the logical error rate of the Zeeman system, primarily because hyperfine qubits have a lower overall error rate after scattering than Zeeman qubits. Zeeman qubits have already been realized in fields stabilized to 10 nG, well below either model's requirement \cite{ruster2016long}.
\section{Conclusions}
In this work we have shown an advantage of mixing qubit types together in order to limit the effects of leakage. The advantage of using mixed-species depends on the details of how leaked qubits interact with qubits in the computational subspace. There are other advantages that a mixed species platform could provide.
In our simulations we did not take into account the different state preparation and measurement (SPAM) errors associated with the two different types of qubits. Hyperfine qubits typically have fewer SPAM errors as they can be measured reliably using state-selective fluorescence
\cite{crain2019high, noek2013high}. For the typical magnetic field strengths used in ion trap quantum computing, the frequency separation between the Zeeman qubit states (typically $8.2$ - $20$ MHz) is smaller than the natural $P$ level spectral width of $19.6$ MHz \cite{PhysRevA.82.063419}. State-selective fluorescence cannot be directly applied in this case and the qubit must first be shelved to a different energy level before it can be measured \cite{toolbox}. In our mixed species scheme, the qubits that get measured often (ancilla) correspond to the qubits that are easy to measure (hyperfine).
Another intrinsic advantage of the mixed species system is its ability to limit crosstalk. Because the qubits are no longer identical, laser spillover onto adjacent ions is much less of a problem. Here the isotopic separation only reduces crosstalk, but by using distinct species (e.g. Be$^+$ and Ca$^+$ \cite{ballance2015hybrid, negnevitsky2018repeated}, Mg$^+$ and Be$^+$ \cite{barrett2003sympathetic}, Yb$^+$ and Ba$^+$ \cite{wang2017single}) crosstalk could be eliminated. Mixed species systems could also help with cooling issues by allowing Doppler cooling without damaging the data.
Our results also emphasize the importance of leakage models. For the depolarizing leakage model, leaked ancilla are so damaging that a single physical leakage error can lead to a logical error. For the MS leakage model, the Pauli twirl approximation gives a convenient result that makes ancilla leakage less dangerous than stochastic Pauli errors. The way in which leakage is modeled can also determine which surface code is best suited to handle the correlated errors associated with leakage \cite{mike}.
Our results show that the Zeeman and mixed species systems will outperform the homogenous hyperfine system for stable magnetic fields. For the depolarizing leakage error model and stable magnetic fields, the homogenous Zeeman system outperforms the mixed species system except for a small region of parameter space. For the MS leakage error model, the magnetic field must be more stable, but then the mixed species system outperforms the Zeeman system for all scattering error rates.
These results highlight the fact that ancilla leakage is more dangerous than data leakage. It is natural to wonder why we used hyperfine qubits as ancilla and Zeeman qubits as data and not the other way around. While ancilla leakage is more damaging, the standard error correction circuit naturally removes it without the need to implement any LRCs. If a data qubit leaks, while it might not be as damaging in any given error correction cycle, something \textit{must} be done to remove the leakage or else it will continue to wreak havoc. At the circuit level, this means implementing an LRC. Adding a SWAP-LRC at the end to reduce the data leakage would mean the following error correction round would result in leaky ancilla. Reversing the roles of the hyperfine and Zeeman qubits not only requires additional gates for the LRC, it would also result in leakage living twice as long. Leaked ancilla would be able to live on as leaked data before being removed.
The periodic boundary conditions of the toric code help with the implementation of the SWAP-LRC. The periodicity guarantees that every qubit will have a qubit to swap with at the end of the cycle. While such boundary conditions could be implemented on modular architectures \cite{NickersonNatComm2013} and single ion chains \cite{trout2018simulating}, the mixed species system is not restricted by these boundary conditions and could be easily implemented on any planar architecture suited for the surface code \cite{LekitscheSciAdv2017}. To implement the SWAP-LRC on a plane, additional qubits could be added to the boundary and swapped up and down every other cycle \cite{suchara, ghosh2015leakage}.
In our study, we did not consider any other LRC implementations. We chose to look at the SWAP-LRC since it requires the least amount of overhead. We also did not consider any physical methods for leakage removal, which could in practice remove population from the leaked qubit states. Our aim was to demonstrate the effectiveness of a surface code with leakage errors but no LRCs.
Leakage errors are a fundamental limiting error in ion trap quantum computers made with hyperfine qubits. Even in systems built on microwave gates \cite{harty2016high, lekitsch2017blueprint, ospelkaus2011microwave}, which do not suffer from the spontaneous scattering effects, background gas collisions can cause leakage. Leakage is a damaging error that needs special consideration when designing new systems.
Memory errors are also a limiting error but pose more of a technical challenge. Improvements in field stability will further suppress the rate of memory errors incurred on a system. This is an active area of research where magnetic field stability continues to improve \cite{PhysRevX.7.031050}.
For near term experiments, we do not anticipate leakage being the main source of error. The probability of leakage errors is low. There is more technical noise to overcome before we see the effects of leakage dominate. But when constructing large scale fault tolerant devices, we must consider the tradeoffs between overhead of handling such errors and mitigating their effects through design. We expect to see many other advantages for mixing qubit types in the future.
\section{Acknowledgments}
We thank Michael Newman for useful discussions on the effects of leakage and Muyuan Li and Dripto Deboy for insights on gate errors and surface code simulation. We also thank Andrew Cross and Martin Suchara for providing the toric code simulator, with permission from IBM. This work was supported by the Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity through Army Research Office (ARO) contract W911NF-10-1-0231, ARO MURI on Modular Quantum Systems W911NF-16-1-0349, and EPiQC - a National Science Foundation Expedition in Computing 1730104.
\end{document}
\begin{document}
\begin{center}
{\Large\bf Relative Entropy and Single Qubit
Holevo-Schumacher-Westmoreland Channel Capacity}\\
{\normalsize John Cortese}\\
{\small\it Institute for Quantum Information\\
Physics Department,
California Institute of Technology 103-33,\\
Pasadena, CA 91125 U.S.A.}
\\[4mm]
\date{today}
\end{center}
\begin{center}
\today
\end{center}
\begin{abstract}
The relative entropy description of Holevo-Schumacher-Westmoreland
(HSW) classical channel capacities is applied
to single qubit quantum channels. A simple formula for the relative entropy
of qubit density matrices in the Bloch sphere representation is derived.
The formula is combined with
the King-Ruskai-Szarek-Werner qubit channel ellipsoid picture
to analyze several unital and non-unital qubit
channels in detail.
An alternate proof is presented
that the optimal HSW signalling states for
single qubit unital channels are those states
with minimal channel output
entropy. The derivation is
based on symmetries of the relative
entropy formula, and the
King-Ruskai-Szarek-Werner qubit channel ellipsoid picture.
A proof is given that the average output density matrix of
any set of optimal HSW signalling states for a
( qubit or non-qubit ) quantum channel is unique.
\end{abstract}
\tableofcontents
\pagebreak
\section{Introduction}
In 1999, Benjamin Schumacher and Michael Westmoreland published a paper entitled
\linebreak
{\em Optimal Signal Ensembles} \cite{Schumacher99}
that elegantly described the classical (product
state) channel capacity of quantum channels in terms of a function known as the
relative entropy.
Building upon this view, we study single qubit channels, adding the
following two items to the Schumacher-Westmoreland analysis.
I) A detailed understanding of the convex hull shape of the set of
quantum states
output by a channel. (The fact that the set is convex has been known for
some time, but the detailed nature of the convex geometry was unknown
until recently.)
II) A useful mathematical representation (formula) for the relative entropy
function, $\mathcal{D}(\, \rho \, \| \, \phi \, )$, when both $\rho$
and $\phi$ are single qubit density matrices.
For single qubit channels, the work of King, Ruskai, Szarek, and Werner
has provided a concise description of the convex hull set~\cite{Ruskai99a,rsw}.
In this paper, we
derive a useful formula for the relative entropy between qubit density matrices.
Combining this formula with the KRSW convexity information, we present
from a relative entropy perspective
several results, some previously known, and others new, related to
the (product state) classical channel capacity of quantum channels.
These include :
I) The average output density matrix for {\em any} optimal set of signalling
states that achieves the maximum classical channel capacity for a quantum channel
is unique. For single qubit unital channels, Donald's equality leads to a symmetry which
tells us this average density matrix
must be $\frac{1}{2} \; \mathcal{I}$.
This fact about the average density matrix
allows us to conclude for unital qubit channels
that the optimum signalling states are a subset of the states with minimum output
von Neumann entropy, as previously shown in \cite{Ruskai99a}.
This symmetry also allows us to see why
only two orthogonal signalling states are needed to achieve the
optimum classical channel capacity for single qubit unital channels,
and why the a priori probabilities for these two signalling
states are $\frac{1}{2}$.
II) The single qubit relative entropy formula allows us to understand
geometrically why
the a priori probabilities for optimum signalling states for
non-unital single qubit channels
are not equal.
III) Examples of channels which require non-orthogonal signalling states
to achieve
optimal classical channel capacity are given. Such channels have been
found before.
Here these channels are presented in a geometrical fashion based
on the relative entropy formula
derived in Appendix A.
\section{Background}
\subsection{Classical Communication over Classical and Quantum Channels}
\setlength{\unitlength}{0.00033300in}
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}
\ifx\x\y
\gdef\SetFigFont#1#2#3{
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}
\fi
\fi\endgroup
\begin{picture}(15904,5856)(301,-7288)
\thicklines
\put(5551,-4486){\framebox(1725,975){}}
\put(8401,-4561){\framebox(2550,1050){}}
\put(7276,-3886){\vector( 1, 0){1050}}
\put(10951,-3961){\vector( 1, 0){675}}
\put(14401,-3961){\vector( 1, 0){1750}}
\put(4176,-3811){\vector( 1, 0){1375}}
\put(1801,-3811){\vector( 1, 0){2375}}
\put(4176,-6211){\framebox(10175,4725){}}
\put(11701,-4561){\framebox(1850,1050){}}
\put(13526,-3961){\vector( 1, 0){775}}
\put(15301,-2536){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Classical}}}
\put(15301,-3106){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Outputs }}}
\put(15526,-5026){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}$Y_j$}}}
\put(751,-4951){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}$X_i$}}}
\put(6826,-2461){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Quantum Channel Domain}}}
\put(12001,-4336){\makebox(0,0)[lb]{\smash{\SetFigFont{5}{6.0}{rm}( POVM ) }}}
\put(376,-3886){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Inputs}}}
\put(6826,-7261){\makebox(0,0)[lb]{\smash{\SetFigFont{14}{16.8}{rm}Classical Channel }}}
\put(301,-3361){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Classical }}}
\put(5701,-3961){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Encode }}}
\put(6076,-4336){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}$\psi_i$}}}
\put(8701,-3886){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Quantum }}}
\put(8701,-4336){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Channel}}}
\put(12001,-3886){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{rm}Decode}}}
\end{picture}
\begin{center}
Figure 1:
Transmission of classical information through a quantum channel.
\end{center}
This paper discusses the transmission
of classical information over quantum channels
with no prior entanglement between the sender (Alice) and the
recipient (Bob). In such a scenario, classical information is encoded
into a set of quantum states $\psi_i$. These states are transmitted
over a quantum channel. The perturbations encountered by the signals
while transiting the channel are described using the Kraus
representation formalism. A receiver at the channel output measures
the perturbed quantum states using a POVM set. The resulting classical
measurement outcomes represent the extraction of classical information
from the channel output quantum states.
There are two common criteria for measuring the quality
of the transmission
of classical information over a channel, regardless of whether the
channel is classical or quantum. These criteria are
the (Product State) Channel Capacity\cite{Holevo98a,Hausladen96a,Schumacher97c}
and the Probability of Error (Pe)\cite{Fuchs96a}. In this paper, we shall
focus on the first criterion, the Classical Information Capacity of a
Quantum Channel, $\mathcal{C}$.
In determining the classical channel capacity,
we typically have an input signal constellation consisting of
classical signals $x_i$\cite{Cover91a}. The classical channel capacity
$\mathcal{C}$ is defined as \cite{Cover91a}:
$$
\mathcal{C} \;=\;
\mbox{\Large Max}
_{\{all \;possible\; p(x_i)\} } \quad H(X) \;-\; H(\,X \, | \, Y \,)
$$
Here $H(X)$ is the Shannon entropy for the discrete random variable $X$.
\linebreak
$X \;\equiv \; \{ \; p_i \,=\, prob(x_i) \; \}\, , \, i \, = \, 1 \,,\, \cdots \,,\, N$.
The Shannon entropy $H(X)$ is defined as
$H(X) \;=\; - \, \sum_{i=1}^N \; p_i \, \log ( \,p_i \, )$.
For conditional random variables, we denote
the probability of the random variable
$X$ given $Y$ as $p( X | Y )$. The corresponding
conditional Shannon entropy is defined as
$H(X|Y) \;=\; - \,
\sum_{i=1}^{N_X} \,
\sum_{j=1}^{N_Y} \; p( x_i \, , \, y_j \, ) \; \log [ \,p( \, x_i \,|\, y_j \, )\, ] $.
Our entropy calculations shall be in bits, so $\log_2$ is used.
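As a concrete illustration of these definitions (not taken from any of the references above), the short Python sketch below evaluates $H(X)$, $H(X|Y)$ and their difference in bits for a binary symmetric channel with an assumed crossover probability $f=0.1$ and equiprobable inputs; all parameter values are placeholders chosen only for the example.
\begin{verbatim}
# Sketch: Shannon entropies in bits for a toy binary symmetric channel
# with assumed crossover probability f and equiprobable inputs.
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

f = 0.1                                      # assumed crossover probability
p_xy = np.array([[0.5*(1-f), 0.5*f],         # joint distribution p(x_i, y_j)
                 [0.5*f, 0.5*(1-f)]])
p_x = p_xy.sum(axis=1)                       # marginal p(x_i)
p_y = p_xy.sum(axis=0)                       # marginal p(y_j)

# H(X|Y) = -sum_ij p(x_i,y_j) log2 p(x_i|y_j)
H_X_given_Y = -np.sum(p_xy * np.log2(p_xy / p_y))
print(H(p_x) - H_X_given_Y)                  # approx 0.531 bits for f = 0.1
\end{verbatim}
For this symmetric example the equiprobable input distribution already attains the maximum, so the printed value is the classical channel capacity $\mathcal{C}$.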
Suppose we have $|X|$ linearly independent and equiprobable input signals
$x_i$, and possible output signals $y_j$, with $|Y| \geq |X|$.
If there is no noise in the channel,
then $\mathcal{C} \;=\; \log(|X|)$. Noise in the channel increases the
uncertainty in X given the channel output Y, and thus noise increases
$H(\,X \, | \, Y)$, thereby decreasing $\mathcal{C}$ for fixed $ H(X)$.
Geometrically, the presence of random channel noise causes the channel
mapping $x_i \;\rightarrow\;y_j$ to change from a noiseless
one-to-one relationship, to a stochastic map. We say the possible
channel mappings of $x_i$ diffuse, occupying a region $\theta_i$
instead of a single unique state $y_j$.
As long as the regions $\theta_{i}$ have
disjoint support, the receiver can use Y to distinguish
which X was sent. In this disjoint support
case, $ H(\,X \, | \, Y) \;\approx \; 0$
and $\mathcal{C} \;\approx\; H(X)$. This picture is frequently referred
to as sphere packing, since we view the diffused output signals as
roughly a sphere around the point in the output space where the
signals would
have been deposited had the channel introduced no perturbations. The
greater the channel noise, the greater the radius of the spheres. If
these spheres can be packed into a specified volume without
significant overlap, then the decoder can distinguish the input state
transmitted by determining which output sphere the decoded
signal falls into.
For sending classical information over a quantum channel, we adhere to
the same picture. We seek to maximize $H(\, X\, )$ and
minimize $H(\, X\,|\, Y\, )$, in
order to maximize the channel capacity $\mathcal{C}$. We encode each
classical input signal state $\{ \, x_i\, \} $ into a corresponding
quantum state $\psi_i$. Sending $\psi_i$ through the channel,
the POVM decoder seeks to predict which $x_i$ was originally sent.
Similar to the classical picture, the quantum channel will diffuse or
smear out the density matrix $\rho_i$ corresponding to the quantum
state $\psi_i$ as the quantum state passes through the channel.
The resulting channel output density matrix $\mathcal{E}(\rho_i)$ will have
support over a subspace $\phi_i$. As long as all the regions $\phi_i$ have
disjoint support,
the POVM based decoder will be able to distinguish which quantum state
$\rho_i$ entered the channel, and hence $H(X|Y) \; \approx \; 0$,
yielding $\mathcal{C} \;\approx\; H(X)$.
For the classical capacity of quantum channels, we
encode classical binary data into quantum states.
The product state classical capacity for a quantum channel
maximizes channel
throughput by encoding a long block of $m$
classical bits $x_i$ into a tensor product of $n$ single qubit
quantum states $\psi_j$, with the encoding chosen to maximize the
(product state) classical channel capacity.
$$
\{ x_1, x_2, \; \cdots \; x_m \} \;\rightarrow \;
\psi_1 \;\otimes\; \psi_2 \;\otimes\; \cdots \; \otimes \; \psi_n
$$
It has been widely conjectured, but not proven,
that the product state classical channel capacity of a quantum channel
is the classical capacity of a quantum channel.
The Holevo-Schumacher-Westmoreland Theorem tells us
that the classical product state channel capacity
using the above encoding scheme is given by the Holevo
quantity \mbox{\Large $\chi$} of the output signal ensemble,
maximized over a single copy of all
possible input signal ensembles $\{ p_i\,,\, \rho_i\}$\cite{Nielsen00a}.
$$ \mathcal{C}_1 \;=\; \mbox{\Large Max}_{\{all \;possible\; p_i \;and \; \rho_i \} } \quad
\mbox{\huge $\chi$}_{output} $$
$$ \qquad \;=\;
\mbox{\Large Max}_{\{all \;possible\; p_i \;and \; \rho_i \} } \quad
\mathcal{S} \left ( \; \mathcal{E} \left (\sum_{i} \; p_i \, \rho_i \right )
\; \right ) \;-\; \sum_{i} \;
p_i \, \mathcal{S}\left ( \; \mathcal{E}\left ( \rho_i \right ) \; \right ) $$
$\mathcal{S}(\,\cdot\,)$ above is the von Neumann entropy.
The symbol $\mathcal{E}(\rho)$
represents the output density matrix obtained by presenting
the density matrix $\rho$ at the channel input. Furthermore,
the input signals $\rho_i$ can be
chosen to be pure states without affecting the
maximization\cite{Nielsen00a}.
Hereafter we shall call $\mathcal{C}_1$ defined above the
Holevo-Schumacher-Westmoreland (HSW) channel capacity.
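As a small numerical aside (not part of the HSW development itself), the sketch below evaluates the Holevo quantity $\chi$ of a two-state qubit output ensemble from von Neumann entropies; the ensemble chosen is an arbitrary illustration, not an optimal one.
\begin{verbatim}
# Sketch: Holevo quantity chi of a qubit output ensemble, in bits.
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def bloch_to_rho(w):
    """Density matrix (I + w.sigma)/2 for a Bloch vector w."""
    wx, wy, wz = w
    return 0.5 * np.array([[1 + wz, wx - 1j*wy],
                           [wx + 1j*wy, 1 - wz]])

p = [0.5, 0.5]                               # assumed a priori probabilities
outputs = [bloch_to_rho([0.0, 0.0, 0.6]),    # assumed channel outputs E(rho_i)
           bloch_to_rho([0.0, 0.0, -0.2])]
rho_avg = sum(pi * r for pi, r in zip(p, outputs))
chi = von_neumann_entropy(rho_avg) \
      - sum(pi * von_neumann_entropy(r) for pi, r in zip(p, outputs))
print(chi)                                   # Holevo quantity of this ensemble
\end{verbatim}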
\subsection{Relative Entropy and HSW Channel Capacity}
\label{schuwest}
An alternate, but equivalent, description of HSW channel capacity
can be made using relative entropy\cite{Schumacher99}.
The relative entropy $\mathcal{D}$ of two
density matrices, $\varrho$ and $\phi$, is defined
as \cite{Schumacher99,Nielsen00a,Schumacher00a,Petz93} :
$$
\mathcal{D}( \, \varrho \, \| \, \phi \, ) \;=\;
Tr \left [ \, \varrho \, \log ( \, \varrho \, ) \;-\; \varrho \, \log( \,\phi \,)
\right ]
$$
Here $Tr[\,\cdot\,]$ is the trace operator.
Klein's inequality tells us that $\mathcal{D} \; \geq \; 0$,
with $\mathcal{D} \; \equiv \; 0$ iff $\varrho \;\equiv\;\phi$
\cite{Nielsen00a}. Note that we shall usually take our
logarithms to be base 2.
To see how to represent $\chi$ in terms of $\mathcal{D}$,
consider the optimal signalling state ensemble
$\{ \, p_k \,,\, \varrho_k \;=\; \mathcal{E}(\varphi_k)\; \}$.
Define $\varrho$ as $\sum_k \; p_k \, \varrho_k$.
Consider the following sum :
$$
\sum_k \; p_k \; \mathcal{D}( \, \varrho_k \, \| \, \varrho \, ) \;=\;
\sum_k \; \left \{ p_k \; Tr[ \, \varrho_k \, \log( \, \varrho_k \, ) \; ]
\;- \; p_k \; Tr[ \; \varrho_k \; \log(\,\varrho \, ) \; ] \right \}
$$
$$
=\quad \sum_k \; \left \{ p_k \;
Tr \left [ \; \varrho_k \; \log(\,\varrho_k \,) \; \right ] \;\right \}
\;- \;
Tr \left [ \; \sum_k \; \left \{ p_k \; \varrho_k \, \log(\, \varrho\, ) \;
\right \} \right ]
$$
$$
\;=\; \sum_k \; \left \{ p_k \; Tr[ \; \varrho_k \, \log( \, \varrho_k \, ) \; ] \;\right \}
\;- \; Tr \left [ \; \varrho \, \log(\varrho) \; \right ] \quad=\quad
\mathcal{S}(\, \varrho \, ) \;-\; \sum_k \; p_k \;
\mathcal{S}( \, \varrho_k\, ) \;=\; \chi
$$
Thus, the HSW capacity $\mathcal{C}_1$ can be written as
$$
\mathcal{C}_1 \;=\;
\mbox{\Large Max}_{[all \;possible\; \{p_k\;,\; \varphi_k \} ] } \quad
\sum_k \; p_k \; \mathcal{D}
\left ( \, \mathcal{E}(\varphi_k) \, || \, \mathcal{E}( \varphi )\, \right )
$$
\noindent
where the $\varphi_k$ are the quantum states input to the channel and
$\varphi \;=\; \sum_k \, p_k \, \varphi_k$.
We call an ensemble of channel output states
$\{ \, p_k \,,\, \varrho_k \;=\; \mathcal{E}(\varphi_k)\; \}$ an optimal
ensemble if this ensemble achieves $\mathcal{C}_1$.
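The algebraic identity just derived can be spot-checked numerically. The sketch below (an illustration, not drawn from the references) computes $\mathcal{D}(\varrho_k \| \varrho)$ directly from the matrix definition and verifies that $\sum_k p_k \, \mathcal{D}(\varrho_k \| \varrho)$ agrees with $\chi$ for an arbitrarily chosen mixed-state ensemble.
\begin{verbatim}
# Sketch: verify sum_k p_k D(rho_k || rho) = chi for a sample qubit ensemble.
import numpy as np

def log2m(rho):
    """Matrix logarithm base 2 of a full-rank Hermitian matrix."""
    evals, vecs = np.linalg.eigh(rho)
    return vecs @ np.diag(np.log2(evals)) @ vecs.conj().T

def D(rho, phi):
    """D(rho||phi) = Tr[rho log2 rho - rho log2 phi]."""
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(phi)))))

def S(rho):
    evals = np.linalg.eigvalsh(rho)
    return float(-np.sum(evals * np.log2(evals)))

def bloch_to_rho(w):
    wx, wy, wz = w
    return 0.5 * np.array([[1 + wz, wx - 1j*wy], [wx + 1j*wy, 1 - wz]])

p = [0.3, 0.7]                                         # assumed ensemble
rhos = [bloch_to_rho([0.5, 0.0, 0.3]), bloch_to_rho([-0.2, 0.4, 0.1])]
rho_bar = sum(pi * r for pi, r in zip(p, rhos))
lhs = sum(pi * D(r, rho_bar) for pi, r in zip(p, rhos))
chi = S(rho_bar) - sum(pi * S(r) for pi, r in zip(p, rhos))
print(lhs, chi)                # the two values agree to numerical precision
\end{verbatim}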
Schumacher and Westmoreland proved the following five
properties related to optimal ensembles{{\cal A}l I}te{Schumacher99}.
\label{pgschuwest}
I) $\mathcal{D}( \, \varrho_k \, \| \, \varrho \, ) \;=\; \mathcal{C}_1 \quad \forall \varrho_k$ in the optimal ensemble, and
$\varrho \;=\; \sum p_k \, \varrho_k$.
II)
$\mathcal{D}( \, \xi \, \| \, \varrho \, ) \;\leq\; \mathcal{C}_1$
where
$\{ \, p_k \,,\, \varrho_k \;=\; \mathcal{E}(\varphi_k)\; ,\;
\varrho \;=\; \sum p_k \, \varrho_k \; \}$ is an optimal ensemble, and
$\xi$ is {\em any} permissible channel output density matrix.
III) There exists at least one optimal ensemble
$\{ \, p_k \,,\, \varrho_k \;=\; \mathcal{E}(\varphi_k)\; \}$ that
achieves $\mathcal{C}_1$.
IV) Let $\mathcal{A}$ be the set of possible channel output states
for a channel
$\mathcal{E}$ corresponding to pure state inputs. Define
$\mathcal{B}$ as the convex hull of the set of states
$\mathcal{A}$. Then for $\varrho \;\in \mathcal{A}$ and
$\xi \;\in $ $\mathcal{B}\;\equiv\; $ the convex hull of
$\mathcal{A}$, we
have \footnote{This result was originally derived in \cite{ohya}.} :
$$
\mathcal{C}_1 \;=\;
\mbox{\Large Min}
_{\, \xi \, } \quad \quad
\mbox{\Large Max}
_{\,\varrho\,} \quad \quad
\mathcal{D} \left ( \; \varrho\; \| \; \xi \;\right )
$$
V) For every $\xi$ that satisfies the minimization in IV) above, there
exists an optimum signalling ensemble $\{ \; p_k \;,\; \rho_k \; \}$
such that $\xi \; \equiv \; \sum_k \; p_k \; \rho_k$.
\subsection{The King - Ruskai - Szarek - Werner Qubit Channel Representation}
In this paper, we are primarily concerned with qubit channels, namely
$\mathcal{E}(\varphi) \;=\; \varrho$, where $\varphi$ and $\varrho$ are qubit
density matrices. Several authors \cite{Ruskai99a,rsw} have developed a nice
picture of single qubit maps. Recall that single qubit density matrices can
be written in
the Bloch sphere representation.
Let the density matrices $\varrho$ and $\varphi$ have
the respective Bloch sphere representations :
$$
\varphi \; = \; \frac{1}{2} \; ( \mathcal{I} \; + \;
\vec{\mathcal{W}}_{\varphi}
\bullet \vec{\sigma} ) \; \quad \quad and \quad \quad
\; \varrho\; = \; \frac{1}{2} \;
( \mathcal{I} \; + \; \vec{\mathcal{W}_{\varrho}} \bullet \vec{\sigma} )
$$
The symbol $ \vec{\sigma}$ means the vector of $2 \times 2$ Pauli matrices
$$ \vec{\sigma} \; = \;
\bmatrix{ \sigma_x \cr \sigma_y \cr \sigma_z }
\;\;\;\;\; where
\;\;\;\;\; \sigma_x \;=\; \bmatrix{ 0 & 1 \cr 1 & 0 },
\;\;\;\;\; \sigma_y \;=\; \bmatrix{ 0 & -i \cr i & 0 },
\;\;\;\;\; \sigma_z \;=\; \bmatrix{ 1 & 0 \cr 0 & -1 }
\;.$$
The Bloch vectors $\vec{\mathcal{W}}$ are real
three dimensional vectors
that have magnitude equal to one when representing a pure state density matrix,
and magnitude less than one for a mixed (non-pure) density matrix.
The King - Ruskai et al. qubit channel representation describes
the channel as a mapping of input to output Bloch vectors.
$$
\bmatrix{ 1 \cr \widetilde{W_x} \cr \widetilde{W_y} \cr \widetilde{W_z} }
\;=\;
\bmatrix{ 1 \quad & 0 \quad & 0 \quad & 0 \quad \cr
t_x & \lambda_x & 0 & 0 \cr
t_y & 0 & \lambda_y & 0 \cr
t_z & 0 & 0 & \lambda_z }
\quad
\bmatrix{ 1 \cr W_x \cr W_y \cr W_z }
$$
\label{channeldef}
All qubit channels have such a representation.
The representation is unique up to a unitary rotation, and
hence requires a choice of basis.
The $t_k$ and $\lambda_k$ are real parameters which must satisfy certain
constraints in order to ensure the matrix above represents a completely positive
qubit map. ( Please see King - Ruskai for more details\cite{Ruskai99a}. )
From the King - Ruskai et al. qubit channel representation, we
see that
$\widetilde{\mathcal{W}_k} \;=\; t_k \;+\; \lambda_k \; \mathcal{W}_k$
or
$$
\mathcal{W}_k\;=\;
\frac{ \; \widetilde{\mathcal{W}_k} \;-\; t_k \;}{\; \lambda_k \;}
$$
It has been shown that $\mathcal{C}_1$ can
always be achieved using only pure input states\cite{Schumacher99}.
Therefore, all input signalling Bloch vectors obey
$\; \| \, \vec{\mathcal{W}} \, \| \;=\; 1.$
Thus $\| \, \vec{\mathcal{W}} \, \|^2 \;=\; 1$, and
$ \| \, \vec{\mathcal{W}} \, \|^2 \;=\; 1 \;=\; \mathcal{W}_x ^2 \;+\;
\mathcal{W}_y ^2 \;+\; \mathcal{W}_z ^2\; $ implies
$$
\left ( \; \frac{ \widetilde{\mathcal{W}_x} \; - \; t_x }{\lambda_x} \; \right ) ^2 \;+\;
\left ( \; \frac{ \widetilde{\mathcal{W}_y} \; - \; t_y }{\lambda_y} \; \right ) ^2 \;+\;
\left ( \; \frac{ \widetilde{\mathcal{W}_z} \; - \; t_z }{\lambda_z} \; \right ) ^2 \;=\; 1
$$
The set of possible channel output states we shall be
interested in is the set of channel outputs corresponding
to pure state channel inputs. This set of states was
defined as $\mathcal{A}$ in section 2.2, and
is the \emph{surface} of the ellipsoid shown above. The convex hull of
the set of states $\mathcal{A}$ is the solid ellipsoid,
defined as the set of $\widetilde{\vec{\mathcal{W}}}$ such that
$$
\left ( \; \frac{ \widetilde{\mathcal{W}_x} \; - \; t_x }{\lambda_x} \; \right ) ^2 \;+\;
\left ( \; \frac{ \widetilde{\mathcal{W}_y} \; - \; t_y }{\lambda_y} \; \right ) ^2 \;+\;
\left ( \; \frac{ \widetilde{\mathcal{W}_z} \; - \; t_z }{\lambda_z} \; \right ) ^2
\;\leq \; 1
$$
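The action of the channel on Bloch vectors is easy to exercise numerically. The following sketch (with placeholder channel parameters; complete positivity of the chosen $t_k$, $\lambda_k$ is assumed rather than checked) applies the affine map above to random pure-state inputs and confirms that the outputs satisfy the ellipsoid equation.
\begin{verbatim}
# Sketch: KRSW affine action on Bloch vectors and the output ellipsoid check.
import numpy as np

t = np.array([0.1, 0.0, 0.2])        # assumed shifts (t_x, t_y, t_z)
lam = np.array([0.3, 0.3, 0.5])      # assumed scalings (lambda_x, ..., lambda_z)

def channel_bloch(w_in):
    """Output Bloch vector t_k + lambda_k * W_k, componentwise."""
    return t + lam * np.asarray(w_in, dtype=float)

rng = np.random.default_rng(0)
w_in = rng.normal(size=(5, 3))
w_in /= np.linalg.norm(w_in, axis=1, keepdims=True)   # pure inputs: |W| = 1

for w in w_in:
    w_out = channel_bloch(w)
    print(np.sum(((w_out - t) / lam) ** 2))           # each value equals 1.0
\end{verbatim}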
\section{Relative Entropy In The Bloch Sphere Representation}
The key formula we shall use extensively is the relative entropy in the
Bloch sphere representation.
Here $\rho$ and $\phi$ have the respective Bloch sphere representations :
$$
\rho \; = \; \frac{1}{2} \; ( \mathcal{I} \; + \; \vec{\mathcal{W}} \bullet \vec{\sigma} ) \; \; \; \; \; \; \phi \; = \; \frac{1}{2} \; ( \mathcal{I} \; + \; \vec{\mathcal{V}} \bullet \vec{\sigma} )
$$
We define $\cos(\theta)$ as :
$$
\cos(\theta) \;\;=\;\;
\frac{\vec{\mathcal{W}}\bullet \vec{\mathcal{V}}}{ \; r \; q \; }
\;\;\;\;\; where \;\;\;\;\;
r \;=\; \sqrt { \vec{\mathcal{W}} \bullet \vec{\mathcal{W}}}
\;\;\; and \;\;\; q \;=\; \sqrt { \vec{\mathcal{V}} \bullet \vec{\mathcal{V}} }
\;.
$$
In Appendix A, we prove the following formula for the relative entropy
$\mathcal{D}(\, \rho \, \| \, \phi \, ) $ of two single qubit
density matrices $\rho$ and $\phi$ with the Bloch sphere representations
given above.
$$
\mathcal{D}(\, \rho \, \| \, \phi \, ) \;=\;
\frac{1}{2} \log_2 \left ( 1\;-\;r^2 \right ) \;+\;
\frac{r}{2} \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 \left ( 1 \;-\; q^2\; \right )
\;-\; \frac{\; \vec{\mathcal{W}} \bullet \vec{\mathcal{V}} \; }
{\; 2 \; q\; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
$$
\;=\;
\frac{1}{2} \log_2 \left ( 1\;-\;r^2 \right ) \;+\;
\frac{r}{2} \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 \left ( 1 \;-\; q^2\; \right )
\;-\; \frac{r \; \cos ( \theta ) }
{\; 2 \; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
where $\theta$ is the angle between $\vec{\mathcal{W}}$
and $\vec{\mathcal{V}}$, and $r$ and $q$ are as defined above.
When $\phi$ in $\mathcal{D}( \, \rho \, \| \, \phi \, )$ is the maximally
mixed state $\phi \;=\; \frac{1}{2} \, \mathcal{I}$, we have $q\, = \, 0$, and
$\mathcal{D}( \, \rho \,\| \,\phi \,)$ becomes the radially symmetric function
$$
\mathcal{D}( \, \rho \, \| \, \phi \, ) \;=\;
\mathcal{D} \left ( \, \rho \, \| \, \frac{1}{2} \, \mathcal{I} \, \right ) \;=\;
\frac{1}{2} \; \log_2 \left ( 1 - r^2 \right ) \;+ \;
\frac{r}{2} \; \log_2 \left ( \frac{1 + r}{1 - r } \right )
\;=\; 1 \, - \, \mathcal{S}(\,\rho\,)\;.
$$
It is shown in Appendix A that
$\mathcal{D} \left ( \, \rho \, \| \, \frac{1}{2} \; \mathcal{I} \, \right ) \;=\; 1 \;-\; \mathcal{S}(\rho)$,
where $\mathcal{S}(\rho)$ is the von Neumann entropy of $\rho$.
In what follows, we shall often
write $\mathcal{D}(\, \rho \, \| \, \phi \, )$ as
$\mathcal{D}(\, \vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )$,
where $\vec{\mathcal{W}}$ and $\vec{\mathcal{V}}$ are
the Bloch sphere vectors for $\rho$ and $\phi$ respectively.
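As a consistency check (not part of the Appendix A derivation), the sketch below implements the Bloch-form expression for $\mathcal{D}(\vec{\mathcal{W}} \| \vec{\mathcal{V}})$ and compares it with the matrix definition $Tr[\rho \log_2 \rho - \rho \log_2 \phi]$ for randomly chosen mixed states; the random vectors are illustrative only.
\begin{verbatim}
# Sketch: Bloch-form relative entropy versus the direct matrix definition.
import numpy as np

def D_bloch(w, v):
    w, v = np.asarray(w, float), np.asarray(v, float)
    r, q = np.linalg.norm(w), np.linalg.norm(v)
    return (0.5*np.log2(1 - r**2) + 0.5*r*np.log2((1 + r)/(1 - r))
            - 0.5*np.log2(1 - q**2)
            - (w @ v)/(2*q) * np.log2((1 + q)/(1 - q)))

def D_matrix(w, v):
    def rho_of(b):
        bx, by, bz = b
        return 0.5*np.array([[1 + bz, bx - 1j*by], [bx + 1j*by, 1 - bz]])
    def log2m(m):
        e, u = np.linalg.eigh(m)
        return u @ np.diag(np.log2(e)) @ u.conj().T
    rho, phi = rho_of(w), rho_of(v)
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(phi)))))

rng = np.random.default_rng(1)
for _ in range(3):
    w = 0.9 * rng.uniform(-1, 1, 3) / np.sqrt(3)      # keep |W| < 1
    v = 0.9 * rng.uniform(-1, 1, 3) / np.sqrt(3)      # keep |V| < 1
    print(D_bloch(w, v), D_matrix(w, v))              # each pair agrees
\end{verbatim}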
In what follows, we shall graphically determine the HSW channel
capacity from the intersection of contours of constant relative entropy
with the channel ellipsoid. To that end, and to help build intuition
regarding channel parameter tradeoffs, it is advantageous
to obtain a rough idea of how the contours of constant
relative entropy $\mathcal{D}(\, \rho \, \| \, \phi \, )$ behave,
for fixed $\phi$, as $\rho$ is varied.
Furthermore, it will turn out that due to symmetries in the relative
entropy, we
frequently will only need to understand the relative entropy behavior
in a plane
of the Bloch sphere, which we choose to be the Bloch X-Y plane.
In Figure 2, we plot a few contour lines for
$\mathcal{D}( \, \rho \,\| \, \phi \,=\, \frac{1}{2} \;
\mathcal{I} \, )$ in the X-Y Bloch sphere plane.
In the figures that follow, we shall mark the location of
$\phi$ with an asterisk.
The contour values for $\mathcal{D}( \, \rho \,\| \, \phi \, )$ are
shown in the plot title. The smallest value of
$\mathcal{D}(\, \rho \, \| \, \phi \,)$ corresponds
to the contour closest to the location of $\phi$.
The largest value of
$\mathcal{D}(\, \rho \, \| \, \phi \, )$
corresponds to the outermost contour.
For
$\phi \;=\; \frac{1}{2} \, \mathcal{I}$, the location of
$\phi$ is the Bloch sphere
origin.
\begin{center}
\includegraphics*[angle=-90,scale=0.65]{q0.ps}
\end{center}
\nopagebreak
\begin{center}
Figure 2:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \mathcal{I}$.
\end{center}
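For readers who wish to regenerate plots of this kind, the short matplotlib sketch below draws contours of $\mathcal{D}(\rho \| \frac{1}{2}\mathcal{I})$ over the Bloch X-Y plane using the radially symmetric formula above; the contour levels are arbitrary choices, not the values used in Figure 2.
\begin{verbatim}
# Sketch: contours of D(rho || I/2) in the Bloch X-Y plane (q = 0 case).
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-0.99, 0.99, 301),
                   np.linspace(-0.99, 0.99, 301))
r = np.sqrt(x**2 + y**2)
r = np.where(r < 0.995, r, np.nan)       # restrict to the Bloch disc
with np.errstate(divide="ignore", invalid="ignore"):
    D = 0.5*np.log2(1 - r**2) + 0.5*r*np.log2((1 + r)/(1 - r))
plt.contour(x, y, D, levels=[0.05, 0.1, 0.2, 0.4, 0.7])
plt.plot(0, 0, "k*")                     # location of phi = I/2 (the asterisk)
plt.gca().set_aspect("equal")
plt.show()
\end{verbatim}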
As an example of how these contour lines change as $\phi$
moves away from the maximally mixed state
$\phi \;=\; \frac{1}{2} \, \mathcal{I} $,
or equivalently as q becomes non-zero, we give
contour plots below for $q \; \ne \;0$. We let
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I} \;+\; q \; \sigma_y \; \}$ with
corresponding Bloch
vector $\vec{\mathcal{V}} \;=\; \bmatrix{ 0 \cr q \cr 0}$.
The asterisk in these plots denotes the location of $\vec{\mathcal{V}}$.
The dashed outer contour is at a radius equal to one, indicating
where the pure states lie.
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q1.ps}
\end{center}
\begin{center}
Figure 3:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.1 \, \sigma_y \; \}$.
\end{center}
\enlargethispage{0.2in}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q2.ps}
\end{center}
\begin{center}
Figure 4:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.2 \, \sigma_y \; \}$.
\end{center}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q3.ps}
\end{center}
\begin{center}
Figure 5:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.3 \, \sigma_y \; \}$.
\end{center}
\enlargethispage{0.2in}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q4.ps}
\end{center}
\begin{center}
Figure 6:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.4 \, \sigma_y \; \}$.
\end{center}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q5.ps}
\end{center}
\begin{center}
Figure 7:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.5 \, \sigma_y \; \}$.
\end{center}
\enlargethispage{0.2in}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q6.ps}
\end{center}
\begin{center}
Figure 8:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.6 \, \sigma_y \; \}$.
\end{center}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q7.ps}
\end{center}
\begin{center}
Figure 9:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.7 \, \sigma_y \; \}$.
\end{center}
\enlargethispage{0.2in}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q8.ps}
\end{center}
\begin{center}
Figure 10:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.8 \, \sigma_y \; \}$.
\end{center}
\begin{center}
\includegraphics*[angle=-90,scale=0.54]{q9.ps}
\end{center}
\begin{center}
Figure 11:
Contours of constant relative entropy $\mathcal{D}(\rho \| \phi )$
as a function of $\rho$ in the Bloch sphere X-Y plane
for the fixed density matrix
$\phi \;=\; \frac{1}{2} \; \{ \; \mathcal{I}\; + \; 0.9 \, \sigma_y \; \}$.
\end{center}
The two dimensional plots of $\mathcal{D}( \, \rho \, \| \, \phi \, )$
shown above tell us about the {\em three dimensional} nature of
$\mathcal{D}( \, \rho \, \| \, \phi \, )$. To see why, first note that we
can always rotate the Bloch sphere X-Y-Z axes to arrange for
$\phi \; \equiv \; \vec{\mathcal{V}} \; \rightarrow \; \vec{q}$ to lie on the Y
axis, as the density matrices $\phi$ are shown in Figures 2 through 11
above.
Second, recall that our description of
$\mathcal{D}( \, \rho \, \| \, \phi \, )$
is a function of the three variables
$\{ \, r \,,\, q\,,\, \theta \, \}$ only, which were defined above as the
length of the Bloch vectors corresponding to the density matrices
$\rho$ and $\phi$ respectively, and the angle between these Bloch
vectors.
$$
\mathcal{D}( \, \rho \, \| \, \phi \, ) \;\equiv\; f( \, r
\,,\, q\,,\, \theta \, )
$$
This means the two dimensional curves of constant
$\mathcal{D}( \, \rho \, \| \, \phi \, )$
can be rotated about the Y axis as surfaces of
revolution, to yield three dimensional surfaces of constant
$\mathcal{D}( \, \rho \, \| \, \phi \, )$. (In these two and
three dimensional plots, the first argument of
$\mathcal{D}( \, \rho \, \| \, \phi \, )$,
$\rho$, is being varied, while the second argument, $\phi$, is being held
fixed at a point on the Y axis.)
Our two dimensional plots above give us a good idea of the
three dimensional behavior of the surfaces of constant relative entropy
about the density matrix $\phi$ occupying the second slot in
$\mathcal{D}( \, \cdots \, \| \, \cdots \, )$.
A picture emerges of
slightly warped ``eggshells'' nested like Russian dolls inside each other,
{\em roughly} centered on $\phi$.
A mental picture of the behavior of
$\mathcal{D}( \, \rho \, \| \, \phi \, )$
is useful because in what follows we shall
superimpose the KRSW channel ellipsoid(s) onto Figures 2 through 11 above.
By moving $\vec{\mathcal{V}}$ (the asterisk) around
in these pictures, we shall adjust the contours of constant
$\mathcal{D}( \, \rho \, \| \, \phi \, )$,
and thereby {\em graphically} determine the HSW channel capacity,
optimum (output) signalling states, and corresponding a priori
signalling probabilities. The resulting intuition we gain from these
pictures will help us understand channel parameter tradeoffs.
\section{Linear Channels}
Recall the KRSW specification of a qubit channel in terms of the six
real parameters
$\{ \; t_x \,,\, t_y \,,\, t_z \,,\, \lambda_x \,,\, \lambda_y \,,\, \lambda_z \; \}$
as defined on page \pageref{channeldef} of this paper.
A linear channel is one where
$\lambda_x \;=\; \lambda_y \;=\; 0$, but
$\lambda_z \;\neq\; 0$. The shift quantities $\{ \, t_k \, \}$ can be any real
number, up to the limits imposed by the requirement that the map be
completely positive. For more details on the complete positivity
requirements of qubit maps, please see \cite{rsw}.
A linear channel is a simple system that illustrates the basic ideas behind
our graphical approach to determining the HSW channel capacity $\mathcal{C}_1$.
Recall the relative entropy formulation for $\mathcal{C}_1$.
$$
\mathcal{C}_1 \;=\;
\mbox{\Large Max}_{[all \;possible\; \{p_k\;,\; \varphi_k \} ]} \quad
\sum_k \; p_k \; \mathcal{D} \left ( \, \mathcal{E}(\, \varphi_k \,) \,
|| \, \mathcal{E}( \, \varphi \, ) \, \right )
$$
\noindent
where the $\varphi_k$ are the quantum states input to the channel and
$\varphi \;=\; \sum_k \, p_k \, \varphi_k$.
We call an ensemble of states
$\{ \, p_k \,,\, \varrho_k \;=\; \mathcal{E}(\varphi_k)\; \}$ an optimal
ensemble if this ensemble achieves $\mathcal{C}_1$.
As discussed on page \pageref{pgschuwest},
Schumacher and Westmoreland showed the above maximization to determine
$\mathcal{C}_1$ is equivalent to the following min-max criterion :
$$
\mathcal{C}_1 \;=\;
\mbox{\Large Min}
_{\, \phi\, } \quad \quad
\mbox{\Large Max}
_{\,\varrho_k\,} \quad \quad
\mathcal{D} \left ( \; \varrho_k \; \| \; \phi \;\right )
$$
\noindent
where $\varrho_k$ is a density matrix on the surface of the channel
ellipsoid, and $\phi$ is a density matrix in the convex hull
of the channel ellipsoid. For the linear channel,
the channel ellipsoid is a line
segment of length $2 \; \lambda_z$ centered on
$\{\; t_x \,,\, t_y \,,\, t_z \; \}$.
Thus, both $\varrho_k$ and $\phi$ must lie somewhere along this line segment.
Furthermore, Schumacher and Westmoreland tell us that
$\phi$ must be expressible as a convex combination of the $\varrho_k$ which
satisfy the above min-max\cite{Schumacher99}.
To graphically implement the min-max criterion, we overlay the channel
ellipsoid on the contour plots of relative entropy previously found.
We wish to determine the location of the optimum $\phi$ and the
optimum relative entropy contour that achieves the min-max.
The generic overlap scenarios are shown below, labeled Cases 1 - 5.
From our plots of relative entropy, we know that contours of
relative entropy are {\em roughly} circular about $\phi$. We denote
the location of $\phi$ below by an asterisk ( {\textbf *} ).
The permissible $\varrho_k$ are those density matrices at the intersection
of the relative entropy contour and the channel ellipsoid, here a line
segment.
Let us examine the five cases shown below, seeking the optimum
$\phi$ and the relative entropy contour corresponding to $\mathcal{C}_1$
(the circles below), by eliminating those cases
which do not make sense in light of the minimization-maximization above.
\begin{center}
\setlength{\unitlength}{0.00033300in}
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}
\ifx\x\y
\gdef\SetFigFont#1#2#3{
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}
\fi
\fi\endgroup
\begin{picture}(12691,3339)(368,-5863)
\thicklines
\put(1126,-3961){\circle{1500}}
\put(7126,-3961){\circle{1500}}
\put(9901,-3886){\circle{1500}}
\put(12301,-3961){\circle{1500}}
\put(4351,-3961){\circle{2348}}
\put(1501,-2986){\line( 0,-1){1875}}
\put(9901,-2761){\line( 0,-1){2400}}
\put(12301,-3211){\line( 0,-1){1500}}
\put(4351,-3511){\line( 0,-1){900}}
\put(7201,-2536){\line( 0,-1){1875}}
\put(976,-5761){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}Case 1 }}}
\put(3901,-5836){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}Case 2}}}
\put(7201,-5761){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}Case 3}}}
\put(9901,-5761){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}Case 4}}}
\put(12301,-5761){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}Case 5}}}
\put(1126,-3961){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}*}}}
\put(7201,-3961){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}*}}}
\put(9901,-4036){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}*}}}
\put(12301,-4036){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}*}}}
\put(4351,-4036){\makebox(0,0)[b]{\smash{\SetFigFont{14}{16.8}{rm}*}}}
\end{picture}
\end{center}
\begin{center}
Figure 12:
Scenarios for the intersection of the optimum
relative \linebreak entropy contour with a linear channel ellipsoid.
\end{center}
Case 1 is not an acceptable
configuration because $\phi$ does not lie inside the channel
ellipsoid, meaning for the linear channel, $\phi$ does not lie
on the line segment. Case 2 is not acceptable because there are no
permissible $\varrho_k$, since the relative entropy contour does not intersect
the channel ellipsoid line segment anywhere. Case 3 is not acceptable because
Schumacher and Westmoreland tell us that $\phi$ must be expressible
as a convex combination of the $\varrho_k$ density matrices which satisfy the
above min-max requirement. There is only one permissible $\varrho_k$ density
matrix in Case 3, and since, as seen in the diagram for Case 3,
$\phi\;\neq\; \varrho_1$, we do not have an
acceptable configuration. Case 4 at first appears acceptable. However,
here we do not achieve the maximization in the min-max relation,
since we can do better by using a relative entropy contour with a larger
radius. Case 5 is the ideal situation. The relative entropy contour
intersects both of the line segment endpoints. Taking a larger radius relative
entropy contour does not give us permissible $\varrho_k$, since we would
obtain Case 2 with a larger radius. For Case 5, if we moved $\phi$ as we
increased the relative entropy contour, we would
obtain Case 3, again an unacceptable configuration. In Case 5, using
the two $\varrho_k$ that lie at the intersection
of the relative entropy contour and
the channel ellipsoid line segment, we can form a convex combination of
these $\varrho_k$ that equals $\phi$. Case 5 is the best we can do, meaning
Case 5 yields the largest radius relative entropy contour which satisfies
the Schumacher-Westmoreland requirements. The
value of this largest-radius relative entropy contour is the HSW channel
capacity we seek, $\mathcal{C}_1$.
We now restate Case 5 in Bloch vector notation.
We shall associate
the Bloch vector $\vec{\mathcal{V}}$ with $\phi$, and
the Bloch vectors $\vec{\mathcal{W}}_k$ with the $\varrho_k$ density
matrices.
For the linear channel, from our analysis above which resulted in
Case 5, we know that $\vec{\mathcal{V}}$ must lie on the line segment
between the two endpoint vectors
$\vec{\mathcal{W}}_{+}$ and
$\vec{\mathcal{W}}_{-}$.
(Note that from here on,
we shall drop the tilde $\widetilde{\;}$
we were previously using to denote
channel output Bloch vectors, as almost all the
Bloch vectors we shall talk about below are channel output Bloch
vectors. The few instances when this is not the case shall be obvious.)
For a general linear channel, the KRSW ellipsoid channel parameters satisfy
\begin{center}
$\{ \; t_x \,\neq\, 0
\;,\; t_y \,\neq\, 0\;,\;
t_z \,\neq\, 0\;,\;
\lambda_x \, = \, 0\;,\;
\lambda_y \, = \, 0\;,\;
\lambda_z \,\neq\, 0\; \}$.
\end{center}
Thus, we can explicitly determine the Bloch vectors
$\vec{\mathcal{W}}_{+}$ and
$\vec{\mathcal{W}}_{-}$, which we write below.
$$
\rho_{+} \;\rightarrow \;
\vec{\mathcal{W}}_{+} \;=\;
\bmatrix{ t_x \cr t_y \cr t_z \; + \; \lambda_z }
,\;\;\; and \;\;\;
\rho_{-} \;\rightarrow \;
\vec{\mathcal{W}}_{-} \;=\;
\bmatrix{ t_x \cr t_y \cr t_z \; - \; \lambda_z }
$$
Note that the $t_k$ and $\lambda_z$ are real numbers,
and any of them may be negative.
The Bloch vector $\vec{\mathcal{V}}$ however requires more work.
We parameterize the Bloch sphere vector
$\vec{\mathcal{V}}$
corresponding to $\phi$ by the real number
$\alpha$, specifying a position for
$\vec{\mathcal{V}}$ along the line segment between
$\vec{\mathcal{W}}_{+}$ and $\vec{\mathcal{W}}_{-}$.
$$
\phi \; \rightarrow \; \vec{\mathcal{V}} \;=\;
\bmatrix{ t_x \cr t_y \cr t_z \; + \; \alpha \, \lambda_z }
$$
Here $\alpha \; \in \; [-1,1]$.
Now recall that the Schumacher-Westmoreland
maximal distance property
( see property $\#$ I
in Section \ref{schuwest} )
tells us that $D(\rho_{+} || \phi) \;=\; D(\rho_{-} || \phi)$.
To find $\vec{\mathcal{V}}$,
we shall apply the formula we have derived
for relative entropy in the Bloch
representation to $D(\rho_{+} || \phi) \;=\; D(\rho_{-} || \phi)$,
and solve for $\alpha$. The details are in Appendix B.
\subsection{A Simple Linear Channel Example}
To illustrate the ideas presented above,
we take as a simple example the linear channel with
channel parameters :
$\{ \;
t_x\;=\;0\;,\;
t_y\;=\;0 \;,\;
t_z\;=\;0.2\;,\;
\lambda_x\;=\;0 \;,\;
\lambda_y\;=\;0 \;,\;
\lambda_z\;=\;0.4\; \}$.
Because the channel is linear with
$t_x\;=\;t_y\;=\;0$, we shall be able to easily
solve for $\vec{\mathcal{V}}$ and $\mathcal{C}_1$.
We define the real numbers $r_{+}$ and $r_{-}$ as the Euclidean distance
in the Bloch sphere from the Bloch sphere origin to the
Bloch vectors $\vec{\mathcal{W}}_{+}$ and $\vec{\mathcal{W}}_{-}$.
That is, $r_{+}$ and $r_{-}$ are the magnitudes of the Bloch vectors
$\vec{\mathcal{W}}_{+}$ and $\vec{\mathcal{W}}_{-}$ defined above.
For the channel parameter numbers given, we find
$r_{+} \;=\; |\; 0.2 \;+\; 0.4\;|\;=\; 0.6$
and $r_{-} \;=\; | \; 0.2 \;-\; 0.4\;|\;=\; 0.2$.
We similarly define $q$ to be the magnitude of the Bloch
vector $\vec{\mathcal{V}}$.
To find $\vec{\mathcal{V}}$,
we shall apply the formula we have derived
for relative entropy in the Bloch
representation to
$D(\rho_{+} || \phi) \;=\; D(\rho_{-} || \phi)$, or in Bloch sphere
notation,
$D(\vec{\mathcal{W}}_{+} \, || \, \vec{\mathcal{V}}\,) \;=\;
D(\vec{\mathcal{W}}_{-} \, || \, \vec{\mathcal{V}}\,)$.
The formula for relative entropy derived in Appendix A is :
$$
\mathcal{D}(\, \varrho_k \, \| \, \phi \, ) \;=\;
\frac{1}{2} \, \log_2 \left ( 1\;-\;r_k^2 \right ) \;+\;
\frac{r_k}{2} \, \log_2 \left ( \frac{1\;+\;r_k}{1\;-\;r_k} \right )
\;-\;
\frac{1}{2}
\; \log_2 \left ( 1 \;-\; q^2\; \right )
\;-\; \frac{\; \vec{\mathcal{W}}_k \, \bullet \, \vec{\mathcal{V}} \; }
{\; 2 \; q\; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
$$
\;=\;
\frac{1}{2} \, \log_2 \left ( 1\;-\;r_k^2 \right ) \;+\;
\frac{r_k}{2} \, \log_2 \left ( \frac{1\;+\;r_k}{1\;-\;r_k} \right )
\;-\;
\frac{1}{2}
\; \log_2 \left ( 1 \;-\; q^2\; \right )
\;-\; \frac{r_k \; \cos ( \theta_k ) }
{\; 2 \; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
where $\theta_k$ is the angle between $\vec{\mathcal{W}}_k$
and $\vec{\mathcal{V}}$.
Intuitively, one notes that the nearly circular
relative entropy contours about $\phi \;\equiv\; \vec{\mathcal{V}}$
tell us that $\vec{\mathcal{V}} \; \approx \;
\frac{\; \vec{\mathcal{W}}_{+} \;+\; \vec{\mathcal{W}}_{-} \; }{2}$.
Given the channel parameter numbers, this fact about
$\vec{\mathcal{V}}$,
together with the linear nature of the channel ellipsoid,
tells us that $\theta_{+} \;=\; 0$ and
$\theta_{-} \;=\; \pi$, so that
$\cos( \, \theta_{+}\, ) \;=\; 1$ and
$\cos( \, \theta_{-}\, ) \;=\; -1$.
Using this information about the
$\theta_k$, and the identity
$$
\tanh^{(-1)} [ \; x \; ] \;=\;
\frac{1}{2} \, \ln \left ( \frac{1\;+\;x}{1\;-\;x} \right )
$$
the relative entropy equality relation
between the two endpoints of the linear channel can be solved for q.
$$
q_{optimum} \;=\; \tanh \left [ \;
\frac{ \frac{1}{2} \; \ln \left [ \frac{1 - r_{+}^2}{1 - r_{-}^2} \right ] \;+\;
r_{+} \; \tanh^{(-1)} [ r_{+} ] \;- \; r_{-} \; \tanh^{(-1)} [ r_{-} ] }{r_{+} \;+\;r_{-} } \; \right ] \;=\; 0.2125.
$$
Thus,
$$
\vec{\mathcal{W}}_{+} \;=\; \bmatrix{ 0 \cr 0 \cr 0.6 }
,\;\;\;\vec{\mathcal{W}}_{-} \;=\; \bmatrix{ 0 \cr 0 \cr -0.2 }, \;\;\; and\;\;\;
\vec{\mathcal{V}} \;=\; \bmatrix{ 0 \cr 0 \cr 0.2125 }
$$
The corresponding density matrices are :
$$
\rho_{+} \;=\; \frac{1}{2} \; (\; \mathcal{I} \;+\; \vec{\mathcal{W}_{+}}
\bullet \vec{\sigma} \; ) ,
\;\;\;\rho_{-} \;=\; \frac{1}{2} \; (\; \mathcal{I} \;+\; \vec{\mathcal{W}_{-}}
\bullet \vec{\sigma} \; ) ,
\;\;\;\phi \;=\; \frac{1}{2} \; (\; \mathcal{I} \;+\; \vec{\mathcal{V}}
\bullet \vec{\sigma} \; )
$$
This yields
$\mathcal{D}(\, \rho_{+} \, \| \, \phi \, ) \;=\;
\mathcal{D}( \, \rho_{-} \, \| \, \phi \, ) \;=\; 0.1246$.
Thus, the HSW channel capacity $\mathcal{C}_1$ is 0.1246.
The location of the two density matrices
$\rho_{+}$ and $\rho_{-}$ are shown in Figure 13
below as {\textbf O}.
Furthermore, the Schumacher-Westmoreland analysis tells us that the two states
$\rho_{+}$ and $\rho_{-}$ must average to $\phi$, in the sense that if $p_{+}$
and $p_{-}$ are the a priori probabilities of the two output signal states, then
$p_{+} \; \rho_{+} \;+\; p_{-} \; \rho_{-} \;=\; \phi$. In our Bloch sphere
notation, this relationship becomes
$p_{+} \; \vec{\mathcal{W}}_{+} \;+\; p_{-} \;
\vec{\mathcal{W}}_{-} \;=\; \vec{\mathcal{V}}$. The asterisk
({\textbf * }) in Figure 13
below shows the position of $ \vec{\mathcal{V}}$.
Another relation relating the a priori probabilities
$p_{+}$ and $p_{-}$ is $p_{+} \;+\; p_{-} \;=\;1$.
Using these two equations, we can solve for
the a priori probabilities
$p_{+}$ and $p_{-}$. For our example,
$$
p_{+} \vec{\mathcal{W}}_{+} \;+\; p_{-} \vec{\mathcal{W}}_{-}
\;=\; p_{+} \bmatrix{ 0 \cr 0 \cr 0.6 } \;+\;
p_{-} \bmatrix{ 0 \cr 0 \cr -0.2 }
\;=\; \vec{\mathcal{V}} \;=\; \bmatrix{ 0 \cr 0 \cr 0.2125 }
$$
Solving for $p_{+}$ and $p_{-}$ yields
$p_{+} \;=\; 0.5156$ and $p_{-} \;=\; 0.4844$.
Note that here we have found the optimum {\em output} signal states
$\rho_{+}$ and $\rho_{-}$. From these one can find the optimum
{\em input} signal states by finding the states $\varphi_{+}$ and
$\varphi_{-}$ which map to the respective optimum output states
$\rho_{+}$ and $\rho_{-}$. In our example above, these are
$\varphi_{+} \; \rightarrow \; \vec{\mathcal{W}}_{+}^{Input} \;=\; \bmatrix{ 0 \cr 0 \cr 1 }$ and
$\varphi_{-} \; \rightarrow \; \vec{\mathcal{W}}_{-}^{Input} \;=\;
\bmatrix{ 0 \cr 0 \cr -1 }$.
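The numbers quoted above can be reproduced with a few lines of Python; the sketch below is only a numerical restatement of the formulas already given (the closed-form expression for $q$ and the Bloch-form relative entropy), specialized to this example.
\begin{verbatim}
# Sketch: the simple linear channel example t_z = 0.2, lambda_z = 0.4.
import numpy as np

t_z, lam_z = 0.2, 0.4
wp, wm = t_z + lam_z, t_z - lam_z          # W_{+,z} = 0.6,  W_{-,z} = -0.2
r_p, r_m = abs(wp), abs(wm)

# Closed-form q on the Z axis (cos(theta_+) = +1, cos(theta_-) = -1).
q = np.tanh((0.5*np.log((1 - r_p**2)/(1 - r_m**2))
             + r_p*np.arctanh(r_p) - r_m*np.arctanh(r_m)) / (r_p + r_m))

def D(r, q, cos_theta):
    """Bloch-form relative entropy of Appendix A, in bits."""
    return (0.5*np.log2(1 - r**2) + 0.5*r*np.log2((1 + r)/(1 - r))
            - 0.5*np.log2(1 - q**2)
            - 0.5*r*cos_theta*np.log2((1 + q)/(1 - q)))

C1 = D(r_p, q, +1.0)
p_plus = (q - wm) / (wp - wm)              # solve p+ W+ + p- W- = V on Z axis
print(q, C1, D(r_m, q, -1.0), p_plus)      # ~0.2125, ~0.1246, ~0.1246, ~0.5156
\end{verbatim}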
\begin{center}
\includegraphics*[angle=-90,scale=0.6]{linear.ps}
\end{center}
\begin{center}
Figure 13:
The intersection in the Bloch sphere X-Z plane of
a linear channel ellipsoid and the optimum relative
entropy contour. The optimum output signal
states are shown as {\textbf O}.
\end{center}
For the general linear channel, where any or all of the $t_k$
can be non-zero, we can reduce the capacity calculation to
the solution of a single, one dimensional transcendental equation.
( Please see Appendix B for the full derivation. )
Define
$$
r_{+}^2 \; =\; t_x^2 \;+\; t_y^2 \;+\; ( \; t_z \;+\; \lambda_z \; )^2
$$
$$
q^2 \; =\; t_x^2 \;+\; t_y^2 \;+\; ( \; t_z \;+\; \beta \, \lambda_z \; )^2
$$
$$
r_{-}^2 \; =\; t_x^2 \;+\; t_y^2 \;+\; ( \; t_z \;-\; \lambda_z \; )^2
$$
The two quantities $r_{+}$ and $r_{-}$ are the Euclidean
distances from the Bloch sphere origin to the signalling
states $\rho_{+} \;\equiv \; \vec{\mathcal{W}}_{+}$ and
$\rho_{-} \;\equiv \; \vec{\mathcal{W}}_{-}$ respectively.
The quantity $q$ is the Euclidean distance from
the Bloch sphere origin to the density matrix
$\phi \;\equiv \; \vec{\mathcal{V}}$.
We define the three Bloch vectors
$\vec{r}_{+}$, $\vec{q}$ and $\vec{r}_{-}$ in Figure 14 below, and
refer to their respective magnitudes as
$r_{+}$, $q$, and $r_{-}$.
\begin{center}
\setlength{\unitlength}{0.00033300in}
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}
\ifx\x\y
\gdef\SetFigFont#1#2#3{
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}
\fi
\fi\endgroup
\begin{picture}(9000,6738)(1201,-7015)
\thicklines
\put(1801,-3961){\line( 2, 1){7780}}
\put(1801,-3961){\line( 5,-2){7758.621}}
\put(1801,-3961){\line( 6, 1){7783.784}}
\put(9571,-101){\line( 0,-1){7000}}
\put(1201,-4561){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.0}{rm}Bloch}}}
\put(1201,-4956){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.0}{rm}Sphere}}}
\put(1201,-5351){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.0}{rm}Origin}}}
\put(10201,-2761){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.0}{rm}q}}}
\put(10201,-361){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.0}{rm}$r_{+}$}}}
\put(10201,-6961){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.0}{rm}$r_{-}$}}}
\end{picture}
\end{center}
\begin{center}
Figure 14:
Definition of the Bloch vectors
$\vec{r}_{+}$,
$\vec{q}$, and
$\vec{r}_{-}$ used in the derivation below.
\end{center}
We solve the transcendental equation below for $\beta$.
$$
\frac{ 4 \; \lambda_z \; ( t_z \;+\; \beta \lambda_z \;) \;
\tanh^{(-1)} ( \, q \, ) \; }{q} \;=\;
2 \; r_{+} \; \tanh^{(-1)} ( r_{+} ) \; - \; 2 \,
r_{-} \; \tanh^{(-1)} ( r_{-} ) \;+\; \ln ( \, 1\, - \, r_{+}^2 \, ) \;
- \; \ln(\, 1 \, - \, r_{-}^2 \, )
$$
Note that $q$ is a function
of $\beta$, while $r_{+}$ and $r_{-}$ are not. Thus, the
right hand side remains constant while $\beta$ is varied. The smooth
nature of the left hand side as a function of $\beta$ allows a solution
for $\beta$ to be found fairly easily.
As in our simpler linear channel example above, we have
$$
\vec{\mathcal{W}_{+}} \;=\; \bmatrix{ t_x \cr t_y \cr t_z \;+\; \lambda_z }
,\;\;\;
\vec{\mathcal{W}_{-}} \;=\; \bmatrix{ t_x \cr t_y \cr t_z \;-\; \lambda_z },
\;\;\; and\;\;\;
\vec{\mathcal{V}} \;=\; \bmatrix{ t_x \cr t_y \cr t_z + \beta \, \lambda_z }
\;.
$$
where $\beta \; \in ( \,-1,1\,)$. The corresponding density
matrices are :
$$
\rho_{+} \;=\; \frac{1}{2} \; (\; \mathcal{I} \;+\; \vec{\mathcal{W}_{+}}
\bullet \vec{\sigma} \; ) ,
\;\;\;\rho_{-} \;=\; \frac{1}{2} \; (\; \mathcal{I} \;+\; \vec{\mathcal{W}_{-}}
\bullet \vec{\sigma} \; ) ,
\;\;\;\phi \;=\; \frac{1}{2} \; (\; \mathcal{I} \;+\; \vec{\mathcal{V}}
\bullet \vec{\sigma} \; )
$$
The channel capacity $\mathcal{C}_{1}$ is found from the relations
$$
D(\rho_{+} || \phi ) \;=\; D(\rho_{-} || \phi ) \;=\; \chi_{optimum}
\;=\; \mathcal{C}_{1}
\;.
$$
The a priori signaling probabilities are found by solving the
simultaneous probability equations $p_{+} \;+\; p_{-} \;=\;1$, and
$$
p_{+} \vec{\mathcal{W}_{+}} \;+\; p_{-} \vec{\mathcal{W}_{-}}
\;=\; p_{+} \bmatrix{ t_x \cr t_y \cr t_z \;+\; \lambda_z } \;+\;
p_{-} \bmatrix{ t_x \cr t_y \cr t_z \;-\; \lambda_z }
\;=\; \vec{\mathcal{V}} \;=\; \bmatrix{ t_x \cr t_y \cr t_z \;+\; \beta
\, \lambda_z }
$$
This leads to a second probability equation of
$p_{+} \;-\; p_{-} \;=\;\beta$, yielding:
$$
p_{+} \;=\; \frac{ 1 \;+\; \beta }{2}
\;\;\;\;\;\;\;
\;\;\;\;\;\;\;
and
\;\;\;\;\;\;\;
\;\;\;\;\;\;\;
p_{-} \;=\; \frac{ 1 \;-\; \beta }{2}
$$
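A minimal numerical sketch of this procedure is given below; it solves the transcendental equation for $\beta$ by bisection and then evaluates $\mathcal{C}_1$ and the a priori probabilities. For concreteness it uses the parameter values of the example in the following subsection; the bisection tolerance and bracketing interval are implementation choices, not part of the derivation.
\begin{verbatim}
# Sketch: general linear channel, solve for beta, then C_1 and p_{+,-}.
import numpy as np

t = np.array([0.1, 0.2, 0.3])              # example shifts
lam_z = 0.4                                # example lambda_z
e_z = np.array([0.0, 0.0, 1.0])

w_p, w_m = t + lam_z*e_z, t - lam_z*e_z    # W_+ and W_-
r_p, r_m = np.linalg.norm(w_p), np.linalg.norm(w_m)

rhs = (2*r_p*np.arctanh(r_p) - 2*r_m*np.arctanh(r_m)
       + np.log(1 - r_p**2) - np.log(1 - r_m**2))

def f(beta):                               # left hand side minus right hand side
    q = np.linalg.norm(t + beta*lam_z*e_z)
    return 4*lam_z*(t[2] + beta*lam_z)*np.arctanh(q)/q - rhs

a, b = -0.999, 0.999                       # bracket; f changes sign on (-1, 1)
for _ in range(60):                        # plain bisection
    m = 0.5*(a + b)
    a, b = (a, m) if f(a)*f(m) <= 0 else (m, b)
beta = 0.5*(a + b)

def D(w, v):                               # Bloch-form relative entropy (bits)
    r, q = np.linalg.norm(w), np.linalg.norm(v)
    return (0.5*np.log2(1 - r**2) + 0.5*r*np.log2((1 + r)/(1 - r))
            - 0.5*np.log2(1 - q**2) - (w @ v)/(2*q)*np.log2((1 + q)/(1 - q)))

v_opt = t + beta*lam_z*e_z
print(beta, D(w_p, v_opt), D(w_m, v_opt), (1 + beta)/2, (1 - beta)/2)
\end{verbatim}
For the parameter set of the next subsection this should reproduce $\beta \approx 0.053$, $\mathcal{C}_1 \approx 0.1365$ and $p_{+} \approx 0.527$.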
\subsection{A More General Linear Channel Example}
In the simple linear channel example above, we used
$\{ \; t_x \,=\, t_y \,=\, 0\; , \; \lambda_x \, = \, \lambda_y \, =\; 0\;\}$.
This choice yielded a
rotational symmetry about the Z - axis which assured us the
location of the optimum average output density matrix
$\rho \;=\; p_{+} \, \rho_{+} \;+\; p_{-} \, \rho_{-}$
was on the Z - axis. We used this fact to advantage in predicting the
angles $\theta_{\{+\,,\,-\}}$,
where $\theta_{\{+\,,\,-\}}$ was the angle between
$\vec{\mathcal{W}}_{\{+\,,\,-\}}$
and $\vec{\mathcal{V}}$.
Since we knew $\vec{\mathcal{W}}_{\{+\,,\,-\}}$
lay on the Z - axis, we found
$\theta_{+} \;=\; 0$ and
$\theta_{-} \;=\; \pi$, simplifying the
$\cos\left (\,\theta_{\{+\,,\,-\}}\, \right )$ terms in the relative entropy
expressions for
$D(\rho_{+} || \phi)$ and $D(\rho_{-} || \phi)$.
In general, we do not have values of $\pm \, 1$ for
$\cos\left (\,\theta_{\{+\,,\,-\}} \, \right )$, and this complicates finding
a solution for the linear channel relation
$D(\rho_{+} || \phi) \;=\; D(\rho_{-} || \phi)$.
A more general linear channel example is one where the
parameters $\{ \; t_x \;,\; \; t_y \;,\; \; t_z \; \}$
are all non-zero.
Consider the parameter set
$\{ \;
t_x \, = \, 0.1, \;
t_y \, = \, 0.2, \;
t_z \, = \,0.3, \;
\lambda_x \, = \, 0,\;
\lambda_y \, = \, 0,\;
\lambda_z \, = \, 0.4\; \}$.
Solving the transcendental equation derived in Appendix B yields
$\beta \;=\; 0.0534$ and
$\vec{\mathcal{V}} \;=\; \bmatrix{ 0.1 \cr 0.2 \cr 0.3214 }$.
Using the density matrix $\phi$ calculated from the Bloch vector
$\vec{\mathcal{V}}$ gives us a HSW channel capacity
$\mathcal{C}_1$ of
$D(\rho_{+} || \phi ) \;=\; D(\rho_{-} || \phi ) \;=\; 0.1365$.
As discussed above, $p_{+} \;+\; p_{-} \;=\;1$,
and $p_{+} \;-\; p_{-} \;=\;\beta$.
Solving for $p_{+}$ and $p_{-}$ yields
$p_{+} \;=\; 0.5267$ and $p_{-} \;=\; 0.4733$.
The optimum {\em input} Bloch vectors are :
$$
\varphi_{+} \; \rightarrow \; \vec{\mathcal{W}}_{+}^{Input} \;=\; \bmatrix{ 0 \cr 0 \cr 1 } \qquad and \qquad
\varphi_{-} \; \rightarrow \; \vec{\mathcal{W}}_{-}^{Input} \;=\;
\bmatrix{ 0 \cr 0 \cr -1 }\; .
$$
The optimum {\em output } Bloch vectors are :
$$
\rho_+ \;=\; \mathcal{E}(\, \varphi_+\, )
\; \rightarrow \; \vec{\mathcal{W}}_{+}^{Output} \;=\; \bmatrix{ 0.1 \cr 0.2 \cr 0.7 } \qquad and \qquad
\rho_- \;=\; \mathcal{E}(\, \varphi_- \, )
\; \rightarrow \; \vec{\mathcal{W}}_{-}^{Output} \;=\;
\bmatrix{ 0.1 \cr 0.2 \cr -0.1 }\; .
$$
Below we show in Figure 15 and Figure 16 the
$\{x,z\}$ and $\{y,z\}$ slices of the {\em linear}
channel ellipsoid. One
can see that the relative entropy curve
$D( \, \rho \, \| \, \phi \, ) \;=\; \mathcal{C}_1 \;=\; 0.1365$ touches
the ellipsoid at two locations in both cross sections. (The $\{x,y\}$
cross section is trivial.)
\begin{center}
\includegraphics*[angle=-90,scale=0.53]{qxz.ps}
\end{center}
\begin{center}
Figure 15:
The intersection in the Bloch sphere X-Z plane of
a linear channel ellipsoid and the optimum relative
entropy contour. The optimum output signal
states are shown as {\textbf O}.
\end{center}
\begin{center}
\includegraphics*[angle=-90,scale=0.53]{qyz.ps}
\end{center}
\begin{center}
Figure 16:
The intersection in the Bloch sphere Y-Z plane of
a linear channel ellipsoid and the optimum relative
entropy contour. The optimum output signal
states are shown as {\textbf O}.
\end{center}
\section{Planar Channels}
A planar channel is a quantum channel where
two $\lambda_k$ are non-zero, and one $\lambda_k$ is zero.
For a planar channel, the $\{ \; t_k \; \}$ can have any values
allowed by complete
positivity. A planar channel restricts the possible output
density matrices to lie in the plane in the Bloch sphere
which is specified by the non-zero $\lambda_k$.
In comparison to the linear channels discussed above, the
planar channel's additional output degree of freedom (two non-zero
$\lambda_k$ versus a single non-zero $\lambda_k$ for a linear channel)
means that a slightly different approach to determining $\mathcal{C}_1$ than
that used for linear channels must be developed. As for linear channels,
we seek
to find the optimum density matrix $\phi \;\equiv \; \vec{\mathcal{V}}$
interior to the ellipsoid
which minimizes the distance to the
most "distant", in a relative entropy sense, point(s) on the ellipsoid
surface.
We shall find the optimum $\vec{\mathcal{V}}$ in two ways : graphically and
iteratively.
Both approaches utilize the following theorem from Schumacher and
Westmoreland\cite{Schumacher99}.
{\em Theorem : }
$$
\mathcal{C}_1 \quad = \quad Min_{\phi}
\quad Max_{\rho} \quad D(\, \rho \, \| \, \phi \, )
$$
The maximum is taken over the {\em surface} of the ellipsoid, and the
minimum is taken over the {\em interior} of the ellipsoid.
In order to apply the min max formula above for $\mathcal{C}_1$ for
planar channels, we need a result about the uniqueness of
the average output ensemble density matrix
$\rho \;=\; \sum_k \; p_k \, \rho_k$ for different optimal
ensembles $\{ \; p_k \;,\; \rho_k \; \}$.
\subsection{Uniqueness Of The Average Output Ensemble Density Matrix}
The question we address is whether, if there exist two optimum
signalling ensembles $\{ \; p_k \;,\; \rho_k \; \}$ and
$\{ \; p'_k \;,\; \rho'_k \; \}$ of channel output states,
the two resulting average density matrices,
$\rho \;=\; \sum_k \; p_k \, \rho_k$ and
$\rho' \;=\; \sum_k \; p'_k \, \rho'_k$, are equal.
{\em Theorem : } The density matrix $\phi$ which achieves
the minimum in the min-max formula above for $\mathcal{C}_1$ is unique.
{\em Proof :}
From property V in Section 2.2, we know the
optimum $\phi$ which attains the minimum above
must correspond to the average of a set of signal states of an optimum
signalling ensemble. We shall prove the uniqueness of $\phi$ by
postulating there are two optimum signal ensembles, with possibly
different average density matrices, $\sigma$ and $\xi$. We will then prove
that $\sigma$ must equal $\xi$, thereby implying $\phi$ is unique.
Let $\{ \, \alpha_i \,,\, \rho_i \, \}$ be an optimum signal ensemble,
with probabilities $\alpha_i$ and density matrices $\rho_i$, where
$\alpha_i \;\geq \; 0$ and $\sum_i \; \alpha_i \;=\; 1$.
Define $\sigma \;=\; \sum_i \; \alpha_i \; \rho_i$.
By property I in Section 2.2, we know that
$ \mathcal{D}(\, \rho_i \, \| \, \sigma \, ) \;=\; \chi_{optimum} \;=\;
\mathcal{C}_1 \quad \forall \; i$.
Now consider a second, optimum signal ensemble
$\{ \, \beta_j \,,\, \phi_j \, \}$
differing in at least one
density matrix $\rho_i$ and/or one probability $\alpha_i$ from the
optimum ensemble $\{ \, \alpha_i \,,\, \rho_i \, \}$.
Define $\xi \;=\; \sum_j \; \beta_j \; \phi_j$.
Consider the quantity
$\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, )$.
Let us apply Donald's equality, which is discussed in Appendix C.
$$
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;=\;
\mathcal{D}( \, \sigma \, \| \, \xi \, ) \;+\;
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \sigma \, )
$$
Since
$\mathcal{D}( \, \rho_i \, \| \, \sigma \, ) \;=\; \chi_{optimum} \quad \forall \; i$, and $\sum_i \; \alpha_i \;=\; 1$,
we obtain :
$$
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;=\;
\mathcal{D}( \, \sigma \, \| \, \xi \, ) \;+\; \chi_{optimum}
$$
From property II in Section 2.2,
since $\xi$ is the average of a set of optimal signal
states $\{ \, \beta_j \,,\, \phi_j \, \}$, we know that
$\mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;\leq\; \chi_{optimum} \; \forall \, i$.
Thus
$\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, )
\;\leq\;\chi_{optimum}$.
Combining this inequality constraint on
$\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, )$
with what we know about
$\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, )$
from Donald's equality, we obtain the two relations :
$$
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;=\;
\mathcal{D}( \, \sigma \, \| \, \xi \, ) \;+\; \chi_{optimum}
\quad \quad and \quad \quad
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;\leq\;
\chi_{optimum}
$$
From Klein's inequality,
we know that
$\mathcal{D}( \, \sigma \, \| \, \xi \, )\; \geq \; 0$, with equality
iff $\sigma \, \equiv \, \xi$.
Thus, the only way the equation
$$
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;=\;
\mathcal{D}( \, \sigma \, \| \, \xi \, ) \;+\; \chi_{optimum}
$$
can be satisfied is if
we have $\sigma \, \equiv \, \xi$, for then
$\mathcal{D}( \, \sigma \, \| \, \xi \, )\; = \; 0$
and we have
$$
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;=\;
\mathcal{D}( \, \sigma \, \| \, \xi \, ) \;+\; \chi_{optimum}
\;=\; \chi_{optimum}
$$
and
$$
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \xi \, ) \;=\;
\sum_i \; \alpha_i \; \mathcal{D}( \, \rho_i \, \| \, \sigma \, ) \;=\;
\sum_i \; \alpha_i \; \chi_{optimum} \;=\; \chi_{optimum}
$$
Therefore, only in the case where $\sigma \, \equiv \, \xi$ can
Donald's equality and the inequality above both be satisfied.
Since $\sigma$ and $\xi$ were the average output density
matrices for two different, but
arbitrary optimum signalling ensembles, we conclude
the average density matrices of all optimum signalling
ensembles must be equal, thereby implying $\phi$ is unique.
$\bigtriangleup$ - {\em End of Proof}.
Note that although we are primarily concerned with qubit channels in this
paper, only generic properties of the relative entropy were used in the
above proof of uniqueness, and therefore the result holds for {\em all}
channels.
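Donald's equality used in the proof can also be spot-checked numerically; the sketch below does so for a randomly generated qubit ensemble and an arbitrary reference state, with all states and weights chosen only for illustration.
\begin{verbatim}
# Sketch: numerical check of Donald's equality,
#   sum_i a_i D(rho_i||xi) = D(sigma||xi) + sum_i a_i D(rho_i||sigma).
import numpy as np

def rho_of(b):
    bx, by, bz = b
    return 0.5*np.array([[1 + bz, bx - 1j*by], [bx + 1j*by, 1 - bz]])

def D(rho, phi):
    def log2m(m):
        e, u = np.linalg.eigh(m)
        return u @ np.diag(np.log2(e)) @ u.conj().T
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(phi)))))

rng = np.random.default_rng(7)
a = rng.dirichlet(np.ones(3))                         # weights a_i
rhos = [rho_of(0.8*w/np.linalg.norm(w))               # mixed states, |W| = 0.8
        for w in rng.normal(size=(3, 3))]
sigma = sum(ai*r for ai, r in zip(a, rhos))
xi = rho_of([0.1, -0.2, 0.3])                         # arbitrary mixed state
lhs = sum(ai*D(r, xi) for ai, r in zip(a, rhos))
rhs = D(sigma, xi) + sum(ai*D(r, sigma) for ai, r in zip(a, rhos))
print(lhs, rhs)                                       # the two sides agree
\end{verbatim}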
\subsection{Graphical Channel Optimization Procedure}
We shall now describe a graphical technique for finding
$\phi_{optimum} \;\equiv \; \vec{\mathcal{V}}_{optimum}$.
Recall the contour surfaces of constant relative entropy
for various values of $\vec{\mathcal{V}}$ shown previously.
We seek to adjust the location of $\vec{\mathcal{V}}$ inside the
channel ellipsoid such that the largest possible contour value
$\mathcal{D}_{max} \;=\; \mathcal{D}(\, \vec{\mathcal{W}} \,\|\, \vec{\mathcal{V}} \, )$
touches the ellipsoid surface, and the
remainder of the $\mathcal{D}_{max}$ contour surface lies entirely outside the
channel ellipsoid. Our linear channel example illustrated this idea.
In that example, the $\mathcal{D}_{max}$ contour
intersects the ``ellipsoid'' at $r_{+}$ and
$r_{-}$, and otherwise lies outside the line segment
between $r_{+}$ and $r_{-}$ representing the
convex hull of $\mathcal{A}$.
(Recall from the discussion of the Schumacher and
Westmoreland paper in Section 2.2 that the points on the
ellipsoid surface were defined as the set
$\mathcal{A}$, and the interior of the
ellipsoid, where $\vec{\mathcal{V}}$
lives, is the convex hull of $\mathcal{A}$.)
A good place to start is with
$\vec{\mathcal{V}}_{initial} \;=\; \bmatrix{ t_x \cr t_y \cr t_z }$.
We then ``tweak'' $\vec{\mathcal{V}}$
as described above to find $\vec{\mathcal{V}}_{optimum}$.
Note that $\vec{\mathcal{V}}_{optimum}$
should be near $\vec{\mathcal{V}}_{initial}$ because of
the {\em almost} radial symmetry of $\mathcal{D}$ about
$\vec{\mathcal{V}}$ as seen in Figures 2 through 11.
This technique is graphically implementing
property IV
in Section 2.2. In Bloch sphere notation, we have :
$$
\mathcal{C}_1 \;=\;
\mbox{\Large Min}
_{\vec{\mathcal{V}}} \quad \quad
\mbox{\Large Max}
_{\vec{\mathcal{W}}} \quad \quad
\mathcal{D} \left ( \; \vec{\mathcal{W}} \; \| \; \vec{\mathcal{V}}\;\right )
$$
\noindent
where $\vec{\mathcal{W}}$ is on the channel ellipsoid surface
and $\vec{\mathcal{V}}$ is in the interior of the ellipsoid.
Moving $\vec{\mathcal{V}}$ away from the optimum position described above
causes a larger contour value of $\mathcal{D}$ to intersect the
channel ellipsoid surface, thereby {\em increasing}
$\; Max _{\vec{\mathcal{W}}} \quad
\mathcal{D} \left ( \; \vec{\mathcal{W}} \; \| \; \vec{\mathcal{V}}\;\right )$.
Yet $\vec{\mathcal{V}}$ should be adjusted to {\em minimize}
$\; Max _{\vec{\mathcal{W}}} \quad
\mathcal{D} \left ( \; \vec{\mathcal{W}} \; \| \; \vec{\mathcal{V}}\;\right )$.
\subsection{Iterative Channel Optimization Procedure}
For the iterative treatment, we outline an
algorithm which converges to $\vec{\mathcal{V}}_{optimum}$.
First, we need a lemma.
{\em Lemma :} Let $\vec{\mathcal{V}}$ and $\vec{\mathcal{W}}$
be any two Bloch sphere vectors. Define a third Bloch sphere vector
$\vec{\mathcal{U}}$ as :
$$
\vec{\mathcal{U}} \;=\; ( \; 1 \; - \alpha \; ) \; \vec{\mathcal{W}} \;+\;
\alpha \; \vec{\mathcal{V}}
$$
where $\alpha \;\in \; (0,1)$.
Then
$$
\mathcal{D}(\, \vec{\mathcal{W}} \, \| \, \vec{\mathcal{U}} \, ) \; < \;
\mathcal{D}( \, \vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )
$$
{\em Proof :} By the joint convexity property of the relative entropy
\cite{Nielsen00a} :
$$
\mathcal{D}( \, \{\, \alpha \,\rho_1 +\, ( \, 1 \,-\, \alpha \,) \, \rho_2 \,
\}\, \| \, \{\,
\alpha \,\phi_1 +\, ( \, 1 \,-\, \alpha \,) \, \phi_2 \, \} \,) \;
\leq \;
\alpha \; \mathcal{D}( \,\rho_1 \, \| \, \phi_1 \, ) \; + \;
( \, 1 \, - \, \alpha \, ) \; \mathcal{D}( \,\rho_2 \, \| \, \phi_2 \, )
$$
where $\alpha \;\in \; (0,1)$.
Let $\rho_1 \,=\, \rho_2 \,\equiv \, \vec{\mathcal{W}}$,
$\phi_1 \,\equiv \, \vec{\mathcal{V}}$ and
$\phi_2 \,\equiv \, \vec{\mathcal{W}}\, $ with
$\, \vec{\mathcal{U}} \,=\, ( \; 1 \; - \alpha \; ) \; \vec{\mathcal{W}} \;+\;
\alpha \; \vec{\mathcal{V}}$.
We obtain :
$$
\mathcal{D}(\, \vec{\mathcal{W}} \, \| \, \vec{\mathcal{U}} \, ) \; = \;
\mathcal{D}( \, \vec{\mathcal{W}} \, \| \,
\alpha \,\vec{\mathcal{V}} +\, ( \, 1 \,-\, \alpha\,) \,
\vec{\mathcal{W}} \, ) \;
\leq \;
\alpha \; \mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, ) \;
\; + \;( \, 1 \, - \, \alpha \, )
\; \mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{W}} \, ) \;
$$
But $\mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{W}} \, ) \; = \; 0$,
by Klein's inequality \cite{Nielsen00a}. Thus,
$$
\mathcal{D}(\, \vec{\mathcal{W}} \, \| \, \vec{\mathcal{U}} \, ) \; \leq \;
\alpha \; \mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )
\; < \;
\mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )
$$
since $\alpha \;\in \; (0,1)$.
$\bigtriangleup$ - {\em End of Proof}.
We use the lemma above to guide us in iteratively adjusting $\vec{\mathcal{V}}$
to converge towards $\vec{\mathcal{V}}_{optimum}$.
Consider
$\mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )$, where
$\vec{\mathcal{W}} \,\in\, \mathcal{A}$ and
$\vec{\mathcal{V}} \;\in \; \mathcal{B} \;\equiv $
the convex hull of $\mathcal{A}$.
We seek to find $\mathcal{C}_1 $ in an iterative fashion.
We do this by holding $\vec{\mathcal{V}}$ fixed, and
finding one of the
$\vec{\mathcal{W}'} \,\in\, \mathcal{A}$
which maximizes
$\mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )$.
From our lemma above, if we now move $\vec{\mathcal{V}}$ towards
$\vec{\mathcal{W}'}$, we shall cause
$\mathcal{D}_{max}(\, \vec{\mathcal{V}}\, ) \,=\, Max_{\vec{\mathcal{W}}} \;
\mathcal{D}( \,\vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, )$ to
decrease. We steadily decrease
$\mathcal{D}_{max}( \,\vec{\mathcal{V}} \, )$
in this manner until we reach a point
where any movement of
$\vec{\mathcal{V}}$ will increase
$\mathcal{D}_{max}( \,\vec{\mathcal{V}} \, )$.
Our uniqueness theorem above tells us there is only one
$\vec{\mathcal{V}}_{optimum}$. Our lemma above tells us we cannot
become stuck in a local minimum while moving towards
$\vec{\mathcal{V}}_{optimum}$. Thus, when we reach the point where
any movement of $\vec{\mathcal{V}}$ will increase
$\mathcal{D}_{max}( \,\vec{\mathcal{V}} \, )$,
we are done and have found
$\vec{\mathcal{V}}_{final } \;=\; \vec{\mathcal{V}}_{optimum}$.
To summarize, we find the optimum $\vec{\mathcal{V}}$ using the following algorithm.
1) Generate a random starting point
$\vec{\mathcal{V}}_{initial}$
in the interior of the ellipsoid ( $\in \; \mathcal{B}$ ).
( In actuality, since the contour surfaces of constant relative entropy
are {\em roughly} spherical about $\vec{\mathcal{V}}$, a good place to
start is
$\vec{\mathcal{V}}_{initial} \;=\; \bmatrix{ t_x \cr t_y \cr t_z }$ .)
2) Determine the set of points $\{ \; \vec{\mathcal{W}'}\; \}$
on the ellipsoid surface most distant, in a relative
entropy sense, from our
$\vec{\mathcal{V}}$.
This maximal distance is
$\mathcal{D}_{max}( \, \vec{\mathcal{V}} \, )$ defined above as
$\mathcal{D}_{max}( \, \vec{\mathcal{V}} \, )
\;=\; Max_{ \vec{\mathcal{W}'}} \;
\mathcal{D}( \, \vec{\mathcal{W}'} \, \| \,
\vec{\mathcal{V}}\, )$.
3) Choose at random one Bloch sphere vector from our
maximal set of points $\{ \; \vec{\mathcal{W}'}\; \}$.
Call this selected point $\widehat{\vec{\mathcal{W}'}}$.
In the 3 real dimensional Bloch sphere space,
make a small step from $\vec{\mathcal{V}}$ towards the surface point
vector, $\widehat{\vec{\mathcal{W}'}}$.
That is, update $\vec{\mathcal{V}}$ as follows :
$$
\vec{\mathcal{V}}_{new} \;=\;
( \,1\;-\; \epsilon\, ) \, \vec{\mathcal{V}}_{old} \;+\;
\epsilon\, \, \widehat{\vec{\mathcal{W}'}}
$$
4) Loop by going back to step 2) above, using our new, updated
$\vec{\mathcal{V}}_{new}$, and continue to loop until
$\mathcal{D}_{max}$ is no longer changing.
This algorithm converges
to $\phi_{optimum} \;\equiv\; \vec{\mathcal{V}}_{optimum}$, because we
steadily proceed downhill minimizing
$Max _{\vec{\mathcal{W}}} \quad
\mathcal{D} \left ( \; \vec{\mathcal{W}} \; \| \; \vec{\mathcal{V}}\;\right )$,
and our lemma above tells us we can never get stuck in a local minimum.
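For concreteness, the following is a minimal numerical sketch of this iteration
(ours, not the author's original code). It assumes the closed-form qubit relative
entropy derived in Appendix A, represents the set $\mathcal{A}$ by the channel images
of a finite grid of pure input states, and uses a small fixed step size; all function
names are ours.
\begin{verbatim}
import numpy as np

def rel_entropy_many(W, v):
    # D(w||v), in bits, for each row w of W, via the Appendix A closed form.
    r = np.linalg.norm(W, axis=1)
    q = np.linalg.norm(v)
    d = 0.5 * ((1 + r) * np.log2(1 + r)
               + (1 - r) * np.log2(np.clip(1 - r, 1e-300, None)))
    if q > 1e-12:
        d += (-0.5 * np.log2(1 - q * q)
              - (W @ v) / (2 * q) * np.log2((1 + q) / (1 - q)))
    return d

def ellipsoid_points(t, lam, n_theta=60, n_phi=120):
    # Channel images of a grid of pure input states (a stand-in for the set A).
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    TH, PH = np.meshgrid(th, ph)
    n = np.stack([np.sin(TH) * np.cos(PH),
                  np.sin(TH) * np.sin(PH),
                  np.cos(TH)], axis=-1).reshape(-1, 3)
    return np.asarray(t, float) + n * np.asarray(lam, float)

def hsw_capacity(t, lam, eps=0.02, tol=1e-7, max_iter=5000):
    pts = ellipsoid_points(t, lam)
    v = np.asarray(t, float)                 # step 1: V_initial = (t_x, t_y, t_z)
    d_old = np.inf
    for _ in range(max_iter):
        d = rel_entropy_many(pts, v)
        k = int(np.argmax(d))                # step 2: most distant surface point
        if abs(d_old - d[k]) < tol:          # step 4: stop once D_max settles
            break
        d_old = d[k]
        v = (1.0 - eps) * v + eps * pts[k]   # step 3: small step towards it
    return d_old, v

# Planar channel of the next subsection; the returned D_max should approach the
# capacity value quoted there, up to grid resolution and step size.
print(hsw_capacity([0.3, 0.1, 0.0], [0.4, 0.5, 0.0]))
\end{verbatim}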
\subsection{Planar Channel Example}
We demonstrate the iterative algorithm above with a planar channel example.
Let
\linebreak
$\{ \;
t_x \;= \; 0.3\; ,\;
t_y \;= \; 0.1\; ,\;
t_z \;= \; 0\; , \,\;
\lambda_x \;= \; 0.4\; ,\;
\lambda_y \;= \; 0.5\; , \;
\lambda_z \;= \; 0\;\}$.
The iterative algorithm outlined above yields
$\vec{\mathcal{V}}\;=\; \bmatrix{ 0.3209 \cr 0.1112 \cr 0 }$
and a HSW channel capacity
$\mathcal{C}_1 \;=\; \mathcal{D}_{optimum} \;=\; 0.1994$.
Shown below in Figure 17
is a plot of the planar channel ellipsoid (the inner
curve), and the curve of constant relative entropy
$\mathcal{D}( \, \rho \, \| \, \phi \, ) \; = \; \mathcal{D}_{optimum}$ centered
at $\vec{\mathcal{V}}$, which is marked with an asterisk {\textbf *}.
One can see that the $\mathcal{D}_{max}$ curve
intersects the ellipsoid curve at two points, marked with {\textbf O},
and these two points are the optimum channel output signals $\rho_i$.
The optimum input and output signalling states for this channel
were determined as described in Appendix E and are :
$$
P_1 \;=\; 0.4869 , \quad \quad
\vec{W}_1^{Input} \;=\; \bmatrix{ -0.0207 \cr -0.9998 \cr 0 }, \quad \quad
\vec{W}_1^{Output} \;=\; \bmatrix{ 0.2917 \cr -0.3999 \cr 0 }
\;.
$$
$$
P_2 \;=\; 0.5131, \quad \quad
\vec{W}_2^{Input} \;=\; \bmatrix{ 0.1215 \cr 0.9926 \cr 0 } , \quad \quad
\vec{W}_2^{Output} \;=\; \bmatrix{ 0.3486 \cr 0.5963 \cr 0 }
\;.
$$
These signal states yield an average channel output Bloch vector $\vec{\mathcal{V}}$ of
$$
\vec{\mathcal{V}} \;=\;
P_1 \, \cdot \, \vec{W}_1^{Output} \;+\; P_2 \, \cdot \, \vec{W}_2^{Output}
\;=\; \bmatrix{ 0.3209 \cr 0.1113 \cr 0 }\;
.
$$
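These numbers are easy to cross-check (the snippet below is an illustrative sketch,
not from the original text): pushing the two quoted input Bloch vectors through the
ellipsoid map $\vec{\mathcal{W}}_k^{Output} \;=\; \vec{t} \;+\; \vec{\lambda} \circ
\vec{\mathcal{W}}_k^{Input}$ (componentwise), averaging with the quoted a priori
probabilities, and evaluating the Appendix A relative entropy formula reproduces
$\vec{\mathcal{V}}$ and shows both output signals sitting at relative entropy distance
$\approx \mathcal{C}_1$ from it.
\begin{verbatim}
import numpy as np

t   = np.array([0.3, 0.1, 0.0])
lam = np.array([0.4, 0.5, 0.0])

P    = [0.4869, 0.5131]
Win  = [np.array([-0.0207, -0.9998, 0.0]), np.array([0.1215, 0.9926, 0.0])]
Wout = [t + lam * w for w in Win]        # ellipsoid map of the input states
V    = P[0] * Wout[0] + P[1] * Wout[1]   # average output Bloch vector
print(Wout[0], Wout[1], V)               # ~ (0.292,-0.400,0), (0.349,0.596,0), (0.321,0.111,0)

def rel_entropy(w, v):                   # Appendix A closed form, in bits (r, q < 1 here)
    r, q = np.linalg.norm(w), np.linalg.norm(v)
    return (0.5 * np.log2(1 - r**2) + (r / 2) * np.log2((1 + r) / (1 - r))
            - 0.5 * np.log2(1 - q**2)
            - np.dot(w, v) / (2 * q) * np.log2((1 + q) / (1 - q)))

print(rel_entropy(Wout[0], V), rel_entropy(Wout[1], V))   # both come out near 0.199
\end{verbatim}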
Figure 17 below shows the location of the channel ellipsoid ( the inner
dashed curve ), the contour of constant relative entropy
( the solid curve ) for $\mathcal{D} \, =\, 0.1994$,
the location of the two optimum input pure states $\rho_i^{Input}$,
(the two {\textbf O} states on the circle of radius one), and the two
optimum output signal states $\rho_i^{Output}$,
also denoted by {\textbf O}, on the channel
ellipsoid curve. Note that the optimum input signalling
states are non-orthogonal.
\begin{center}
\includegraphics*[angle=-90,scale=0.6]{figure17.ps}
\end{center}
\begin{center}
Figure 17:
The intersection in the Bloch sphere X-Y plane of
a planar channel ellipsoid (the inner dashed curve)
and the optimum relative
entropy contour (the solid curve). The two
optimum input signal states (on the outer bold dashed
Bloch sphere boundary curve) and the two optimum output signal
states (on the channel ellipsoid {\em and} the optimum relative entropy
contour curve) are shown as {\textbf O}.
\end{center}
Another useful picture is how the relative entropy changes as we make
our way around the channel ellipsoid. We consider the Bloch X-Y plane
in polar coordinates $\{ \, r \, , \, \theta \, \}$, where
we measure the angle $\theta$ with respect to the
origin of the Bloch X-Y plane axes.
( Note that $\theta$ only fully
ranges over $[0 , 2 \pi ]$ when the origin of
the Bloch sphere lies inside the channel ellipsoid. )
In Figure 18 below we plot $\mathcal{D}$ as a function of this angle.
The horizontal line at the
top of the plot is the channel capacity $\mathcal{C}_1 \;=\; 0.1994$.
Note that the two relative entropy peaks
correspond to the locations of the two output optimum signalling states.
\begin{center}
\includegraphics*[angle=-90,scale=0.6]{figure18.ps}
\end{center}
\begin{center}
Figure 18:
The change in $\mathcal{D}( \; \rho \; \| \; \phi \,\equiv \, $
{\textbf *} ) as we move $\rho$ around the channel ellipsoid.
The angle theta is with respect to the Bloch sphere origin.
\end{center}
For this channel, the optimum channel capacity is achieved using an
ensemble consisting of only
two signalling states. Davies' theorem tells us that for
single qubit channels, an optimum ensemble need contain at most four
signalling states.
Using the notation of \cite{King01},
we call $C_2$ the optimum output $\mathcal{C}_1$ HSW channel
capacity attainable
using only two input signalling states,
$C_3$ is the optimum output $\mathcal{C}_1$ HSW channel capacity
attainable using only
three input signalling states, and
$C_4$ is the optimum output $\mathcal{C}_1$ HSW channel capacity
attainable using only
four input signalling states. Thus, for this channel, we see that
$C_2 \;=\; C_3 \;=\; C_4$. That is, for this channel,
allowing more than two signalling
states in your optimal ensemble does not yield additional channel
capacity over an optimal ensemble with just two signalling states.
\section{Unital Channels}
Unital channels are quantum channels that map the identity to
the identity : $\mathcal{E}(\mathcal{I}) \;=\; \mathcal{I}$. Due to this
behavior, unital channels possess certain symmetries. In the ellipsoid
picture, King and Ruskai \cite{Ruskai99a} have shown that for unital
channels, the $\{\,t_k\,\}$ are zero. This yields an ellipsoid centered
at the origin of the Bloch sphere. The resulting symmetry of such an
ellipsoid will allow us to draw powerful conclusions.
First, recall that we know there
exists at least one optimal signal ensemble, $\{\, p_i \,,\,\rho_i\, \}$,
which attains the HSW channel capacity $\mathcal{C}_1$.
( See property III
in Section \ref{schuwest}. )
Now consider the
symmetry evident in the formula we have derived for the relative entropy
for two single qubit density operators. We have :
$$
\mathcal{D}(\, \rho \, \| \, \phi \, ) \;=\;
\mathcal{D} ( \, \vec{\mathcal{W}} \, \| \, \vec{\mathcal{V}} \, ) \;
= \; f( \, r \, , \, q \, , \, \theta \, )
$$
where $r \;=\; \| \, \vec{\mathcal{W}} \, \|$,
$q \;=\; \| \, \vec{\mathcal{V}} \, \|$, and
$\theta$ is the angle between
$\vec{\mathcal{W}}$ and $\vec{\mathcal{V}}$. Thus,
if $\rho_i \, \in \, \mathcal{A}$ and $\phi\,\in\,\mathcal{B}$,
with
$\mathcal{D}(\, \rho_i \, \| \, \sigma \,)\;=\;
\mathcal{D}(\, \vec{\mathcal{W}}_i \, \| \, \vec{\mathcal{V}}\,)\;=\;
\chi_{optimum} \;=\; \mathcal{C}_1
$,
then acting in $\mathcal{R}^3$, reflecting
$\rho_i \, \equiv \, \vec{\mathcal{W}}_i$ and
$\sigma \, \equiv \, \vec{\mathcal{V}}$ through the Bloch sphere origin to
obtain
$\rho_i' \, \equiv \, \vec{\mathcal{W}}'_i$ and
$\sigma' \, \equiv \, \vec{\mathcal{V}}'$, yields elements
of $\mathcal{A}$ and $\mathcal{B}$ respectively.
Furthermore, these transformed density matrices will also satisfy
$
\mathcal{D}(\, \rho_i' \, \| \, \sigma' \,)\;=\;
\mathcal{D}(\, \vec{\mathcal{W}}'_i \, \| \, \vec{\mathcal{V}}'\,)\;=\;
\chi_{optimum} \;=\; \mathcal{C}_1
$,
because $r$, $q$, and $\theta$ remain the same when we reflect through the
Bloch sphere origin. That is,
the symmetry of the
unital channel ellipsoid about the Bloch sphere origin, corresponding
to the density matrix $\frac{1}{2} \, \mathcal{I}$, together with the
symmetry present in the qubit relative entropy formula yields a symmetry
for the optimal signal ensemble $\{\, p_i \,,\,\rho_i\, \}$, where
$\sigma \;=\; \sum_i \, p_i \, \rho_i $, or equivalently
$\vec{\mathcal{V}} \;=\; \sum_i \, p_i \, \vec{\mathcal{W}}_i$.
This symmetry indicates that for every optimal signal ensemble
$\{\, p_i \,,\,\rho_i\, \}$, there exists another ensemble,
$\{\, p'_i \,,\,\rho'_i\, \}$, obtained by reflection through the Bloch
sphere origin. Since we know there exists at least one optimal
signal ensemble, we must conclude that if
$\sigma \;=\; \sum_i \, p_i \, \rho_i \;\neq \; \frac{1}{2} \,
\mathcal{I}$, then two optimal ensembles exist with
$\sigma \;\neq \; \sigma'$. However, by our uniqueness proof above, we are
assured that
$\sigma \;=\; \sum_i \, p_i \, \rho_i$ is a unique density matrix,
regardless of the states $\{ \, p_i \, , \, \rho_i \, \}$ used, as long
as the
states $\{ \, p_i \, , \, \rho_i \, \}$ are an optimal ensemble.
Thus we must conclude that
$\sigma \;=\; \sum_i \, p_i \, \rho_i \;\equiv \; \frac{1}{2} \, \mathcal{I}$,
since only the density matrix $\frac{1}{2} \, \mathcal{I}$ maps into itself upon
reflection through the Bloch sphere origin.
Summarizing these observations, we can state the following.
{\em Theorem :}
For all unital qubit channels, and all optimal signal ensembles
$\{\, p_i \,,\,\rho_i\, \}$, the average density matrix
$\sigma \;=\; \sum_i \, p_i \, \rho_i \;\equiv \; \frac{1}{2}
\,\mathcal{I}$.
In Appendix A, it is shown that
$$
\mathcal{D} \left (\, \rho \, \| \, \frac{1}{2} \,\mathcal{I}\, \right )\;=\;
1 \;-\; \mathcal{S}(\,\rho\, )
$$
where $\mathcal{S}(\,\rho\, )$ is the von Neumann entropy of the
density matrix $\rho$. Thus, our relation for the HSW channel capacity
$\mathcal{C}_1$ becomes :
$$
\mathcal{C}_1 \;=\;
\sum_i \; p_i \, \mathcal{D} \left ( \, \rho_i \, \| \, \frac{1}{2} \,\mathcal{I}
\,\right) \;=\; 1 \;-\; \sum_i \;p_i \, \mathcal{S}(\,\rho_i\, )
$$
To maximize $\mathcal{C}_1$, we seek to minimize
$\sum_i \; p_i \, \mathcal{S}(\,\rho_i\, )$, subject to the constraint that the
$\rho_i$ satisfy $\sum_i \; p_i \, \rho_i\;=\; \frac{1}{2} \,
\mathcal{I}$, for some set of a priori probabilities $\{\, p_i\, \}$.
Recall that $\mathcal{S}(\,\rho\, ) \;\equiv\; \mathcal{S}(\,r\, )$
is a strictly decreasing function of $r$, where
$r$ is the magnitude of the Bloch vector
corresponding to $\rho$.
(Please see the plot below of
$\mathcal{S}(\,\rho\, ) \;\equiv\; \mathcal{S}(\,r\, )$.)
\begin{center}
\includegraphics*[angle=-90,scale=0.6]
{entropyIII.ps}
\end{center}
\begin{center}
Figure 19:
The Von Neumann entropy $\mathcal{S}(\rho)$ for a single qubit $\rho$
\linebreak
as a function of the Bloch sphere radius $ r \;\in [0,1]$.
\end{center}
Thus we seek to find a set of $\rho_i$ which lie
most distant, in terms of {\em Euclidean} distance in $\mathcal{R}^3$,
from the ellipsoid origin, and for which a convex combination
of these states equals the Bloch sphere origin.
Let us examine a few special cases. For the unital channel
ellipsoid, consider the case where the major axis is unique in
length, and has total length $2 \, \left | \, \lambda^{major \, axis} \, \right |$. Let
$\rho_{+}$ and $\rho_{-}$ be the states lying at the end of the major
axis. By the symmetry of the ellipsoid, we have
$$
\frac{1}{2} \, \rho_{+} \;+\;
\frac{1}{2} \, \rho_{-} \;=\; \
\frac{1}{2} \; \mathcal{I}
$$
Furthermore, the magnitude of the corresponding Bloch sphere vectors
$r_{+} \;=\; \| \, \vec{\mathcal{W}}_{+}\,\|$ and
$r_{-} \;=\; \| \, \vec{\mathcal{W}}_{-}\,\|$ are equal,
$r_{+} \;=\; r_{-} \;=\;
\left | \, \lambda^{major \, axis} \, \right |$.
Above, we use $|\cdots|$ around $\lambda^{major \, axis}$
because
$\lambda^{major \, axis}$ can be a negative quantity in the King - Ruskai
et al. formalism.
Using this value of $r\,=\,r_{+}\,=\,r_{-}$ yields for $\mathcal{C}_1$ :
$$
\mathcal{C}_1 \;=\; 1 \;-\;
2 \; \left ( \; \frac{1}{2} \; \mathcal{S}(r) \; \right ) \;=\;
1 \;-\;
\mathcal{S}\left (
\; \left | \, \lambda^{major \, axis} \, \right | \; \right )
\; .
$$
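As a small worked instance of this formula (the numerical value of
$\lambda^{major \, axis}$ below is an arbitrary illustration, not a channel discussed
in this paper):
\begin{verbatim}
import numpy as np

def S(r):    # von Neumann entropy (bits) of a qubit with Bloch radius r
    p = (1 + r) / 2
    return -sum(x * np.log2(x) for x in (p, 1 - p) if x > 0)

lam_major = 0.8                       # illustrative value only
print(1 - S(abs(lam_major)))          # C_1 = 1 - S(|lambda_major|) ~ 0.531
\end{verbatim}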
If the major axis is not the unique axis of maximal length, then any
set of convex probabilities and states $\{\,p_i\,,\, \rho_i\,\}$ such that
the states lie on the major {\em surface} and
$\; \sum_i \, p_i \, \rho_i \;\equiv \; \frac{1}{2} \, \mathcal{I}\; $
will suffice.
Thus we reach the same conclusion obtained by King and Ruskai in an
earlier paper \cite{Ruskai99a}.
Summarizing, we can state the following.
{\em Theorem :}
The optimum output signalling states for unital qubit channels correspond
to the minimum output von Neumann entropy states.
Furthermore, we can also conclude :
{\em Theorem :}
For unital qubit channels, the channel capacities consisting of
signal state ensembles with two, three and four signalling states are
equal.
Furthermore, the optimum HSW channel capacity can be attained with a,
possibly non-unique, pair
of equiprobable ($p_1 \,=\, p_2 \,=\, \frac{1}{2}$)
signalling states arranged opposite one another with
respect to the Bloch sphere origin.
{\em Proof :}
Using the notation above, $C_2 \;=\; C_3 \;=\; C_4$. From the
geometry of the centered channel ellipsoid, we can
always use just two signalling states with the minimum output entropy
to convexly reach $\frac{1}{2} \; \mathcal{I}$. Thus, utilizing
more than two signaling
states will not yield any channel capacity improvement beyond using two
signalling states. The equiprobable nature of the two signalling states
derives from the symmetry of the signalling states on the channel
ellipsoid, in that one signalling state being the reflection of the
other signalling state through the Bloch sphere origin means the states may be
symmetrically added to yield an average state corresponding to the Bloch
sphere origin. It is this reflection symmetry which makes the two
signalling states equiprobable.
$\bigtriangleup$ - {\em End of Proof}.
The last three theorems were previously proven by King and Ruskai in
Section 2.3 of \cite{Ruskai99a}. Here we have merely shown their results
in the relative entropy picture.
\subsection{The Depolarizing Channel}
The depolarizing channel is a unital channel with $\{\, t_k\,=\, 0\,\}$ and
$\{\, \lambda_k\,=\, \frac{4\,x\,-\,1}{3}\,\}$, as discussed in more detail
in Appendix D. The parameter $x \; \in \; [\,0\,,\,1\,]$.
Using the analysis above, we can conclude that :
$$
\mathcal{C}_1 \;=\; 1 \;-\;
2 \; \left ( \; \frac{1}{2} \; \mathcal{S}(r) \; \right ) \;=\;
1 \;-\;
\mathcal{S}\left (
\; \left | \, \lambda^{major \, axis} \, \right | \; \right )
\;=\; 1 \;-\;
\mathcal{S}\left (
\; \left | \, \frac{ 4 \, x \;-\; 1}{3}\, \right | \; \right )
$$
$$
\;=\;
\frac{\;1 \;+\; \left | \, \frac{ 4 \, x \;-\; 1}{3}\, \right | \; }{2}
\; \log_{2} \left (\;1 \;+\; \left | \, \frac{ 4 \, x \;-\; 1}{3}\,
\right | \; \right)
\;+\;
\frac{\;1 \;-\; \left | \, \frac{ 4 \, x \;-\; 1}{3}\, \right | \; }{2}
\; \log_{2} \left (\;1 \;-\; \left | \, \frac{ 4 \, x \;-\; 1}{3}\,
\right | \; \right)
$$
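The curve plotted below can be generated directly from this expression; the following
sketch (ours, not from the paper) evaluates
$\mathcal{C}_1 \;=\; 1 \;-\; \mathcal{S}( \, | \, (4x-1)/3 \, | \, )$ at a few sample
values of $x$.
\begin{verbatim}
import numpy as np

def S(r):
    p = (1 + r) / 2
    return -sum(x * np.log2(x) for x in (p, 1 - p) if x > 0)

def C1_depolarizing(x):
    return 1 - S(abs((4 * x - 1) / 3))

for x in (0.0, 0.25, 0.5, 1.0):
    print(x, C1_depolarizing(x))
# x = 0.25 gives lambda = 0 and C_1 = 0; x = 1 gives lambda = 1 and C_1 = 1 bit.
\end{verbatim}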
We plot $\mathcal{C}_1$ below.
\begin{center}
\includegraphics*[angle=-90,scale=0.6]
{depolarizeIInoast.ps}
\end{center}
\begin{center}
Figure 20:
The Holevo-Schumacher-Westmoreland classical channel
capacity for the depolarizing channel as a function of
the depolarizing channel parameter {\em x}.
\end{center}
\subsection{The Two Pauli Channel}
The Two Pauli channel is a unital channel with $\{\, t_k\,=\, 0\, \}$ and
$\{\lambda_x\,=\, \lambda_y\,=\, x\,\}$, and
$\{\lambda_z\,=\, \, 2\,x\,-\,1\,\}$,
as discussed in more detail
in Appendix D. The parameter $x \; \in \; [\,0\,,\,1\,]$.
The determination of the major axis/surface is tricky due
to the need to take into account the {\em absolute value}
of the $\lambda_k$. We plot
below the absolute value of the $\lambda_k$. The dotted curve below
corresponds to the absolute value of $\lambda_x$ and $\lambda_y$.
The V-shaped solid curve corresponds to the absolute value of
$\lambda_z$.
\begin{center}
\includegraphics*[angle=-90,scale=0.6]
{lambda-for-two-pauli.ps}
\end{center}
\begin{center}
Figure 21:
Calculating the length of the major axis of
the channel ellipsoid for the two pauli channel as a function of the two
pauli channel parameter {\em x}.
\end{center}
The intersection point occurs at $x \,=\, \frac{1}{3}$. Thus
$\lambda_z$ is the major axis for $x \,\leq\,\frac{1}{3}$ and the
$\{\, \lambda_x \,, \,\lambda_y\,\}$ surface is the major axis surface
for $x \,\geq\,\frac{1}{3}$. The Bloch sphere radius
corresponding to the minimum entropy states is $1 \, - \, 2 \,x$ for
$x \,\leq\,\frac{1}{3}$ and $x$ for $x \,\geq\,\frac{1}{3}$.
Using our analysis above,
we can conclude that
for $x \, \leq \, \frac{1}{3}$, we have :
$$
\mathcal{C}_1 \;=\; 1 \;-\;
2 \; \left ( \; \frac{1}{2} \; \mathcal{S}(r) \; \right ) \;=\;
1 \;-\;
\mathcal{S}\left (
\; \left | \, \lambda_z \, \right | \; \right )
\;=\; 1 \;-\;
\mathcal{S}\left ( \; 1 \, - \, 2 \, x \, \; \right )
$$
$$
\;=\; 1
\;+\;
x \; \log_{2} \left (\; x \; \right)
\;+\;
( \;1 \;-\; x \; ) \;
\; \log_{2} \left (\;1 \;-\; x \; \right)
$$
while for $x \, \geq \, \frac{1}{3}$, we have :
$$
\mathcal{C}_1 \;=\; 1 \;-\;
2 \; \left ( \; \frac{1}{2} \; \mathcal{S}(r) \; \right ) \;=\;
1 \;-\;
\mathcal{S}\left (
\; \left | \, \lambda_x \, \right | \; \right )
\;=\; 1 \;-\;
\mathcal{S} ( \; x \; )
$$
$$
\;=\;
\frac{\;1 \;+\; x \; }{2}
\; \log_{2} (\; 1 \;+\; x \; )
\;+\;
\frac{\;1 \;-\; x \; }{2}\;
\; \log_{2} (\;1 \;-\;x \; )
$$
We plot
$\mathcal{C}_1$ below, using
the appropriate expression in each of the two ranges of $x$.
\begin{center}
\includegraphics*[angle=-90,scale=0.6]
{twopaulichinoast.ps}
\end{center}
\begin{center}
Figure 22:
The Holevo-Schumacher-Westmoreland classical channel
capacity for the two pauli channel as a function of
the two pauli channel parameter {\em x}.
\end{center}
Note the symmetry evident in the plots. Examining our graph above
for $\mathcal{C}_1$, one sees that
for $ 0 \, \leq \, \alpha \,\leq \, \frac{1}{3}$,
we have $\mathcal{C}_1( \, \frac{1}{3} \,-\, \alpha \,) \;\equiv \;
\mathcal{C}_1( \, \frac{1}{3} \,+\, 2 \, \alpha \,)$. This symmetry is
also readily seen from the relations for $\mathcal{C}_1$ in the two
allowed ranges of $x$ (less than and greater than $\frac{1}{3}$).
For $ x \, \le \, \frac{1}{3}$, setting $ x \;=\; \frac{1}{3} \,-\, \alpha$,
$$
\mathcal{C}_1^{-} (\alpha) \;=\; 1 \, + \,
\frac{ 1 - 3 \alpha }{3} \; \log_2 \left ( \; \frac{ 1-3\alpha}{3} \right )
\;+\;
\frac{2+3\alpha}{3} \log_2 \left ( \frac{ 2+3\alpha}{3} \right )
$$
For $ x \, \ge \, \frac{1}{3}$, setting $ x \;=\; \frac{1}{3} \,+\,2 \alpha$,
$$
\mathcal{C}_1^{+} (\alpha) \;=\;
\frac{ 4 + 6 \alpha }{6} \; \log_2 \left ( \; \frac{ 4+6\alpha}{3} \right )
\;+\;
\frac{2-6\alpha}{6} \log_2 \left ( \frac{ 2-6\alpha}{3} \right )
$$
$$
\;=\;
\left ( \; \frac{ 4 + 6 \alpha }{6} \;
\;+\;
\frac{2-6\alpha}{6}
\right ) \; +\;
\frac{ 4 + 6 \alpha }{6} \; \log_2 \left ( \; \frac{ 2+3\alpha}{3} \right )
\;+\;
\frac{2-6\alpha}{6} \log_2 \left ( \frac{ 1-3\alpha}{3} \right )
$$
$$
\;=\;
1 \; +\;
\frac{ 2 + 3 \alpha }{3} \; \log_2 \left ( \; \frac{ 2+3\alpha}{3} \right )
\;+\;
\frac{1-3\alpha}{3} \log_2 \left ( \frac{ 1-3\alpha}{3} \right )
\;=\;
\mathcal{C}_1^{-} (\alpha)
$$
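This symmetry is also easy to confirm numerically. The sketch below (ours, not from
the paper) computes $\mathcal{C}_1$ through the minimum output entropy radius described
above and checks that $\mathcal{C}_1(\frac{1}{3} - \alpha)$ and
$\mathcal{C}_1(\frac{1}{3} + 2\alpha)$ coincide.
\begin{verbatim}
import numpy as np

def S(r):
    p = (1 + r) / 2
    return -sum(x * np.log2(x) for x in (p, 1 - p) if x > 0)

def C1_two_pauli(x):
    # Bloch radius of the minimum output entropy states (see above)
    r = 1 - 2 * x if x <= 1.0 / 3.0 else x
    return 1 - S(r)

for alpha in (0.05, 0.1, 0.2, 1.0 / 3.0):
    print(C1_two_pauli(1.0 / 3.0 - alpha), C1_two_pauli(1.0 / 3.0 + 2 * alpha))
# each printed pair coincides
\end{verbatim}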
\section{Non-Unital Channels}
Non-unital channels are generically more difficult to analyze due to
the fact that one or more of the $\{\,t_k\,\}$ can be non-zero.
This allows the average density
matrix $\rho \,=\, \sum_i \, p_i \, \rho_i$ for
an optimal signal ensemble $\{\,p_i\,,\,\rho_i\,\}$
to move away from the Bloch sphere
origin $\rho \,=\, \frac{1}{2} \, \mathcal{I} \;\equiv\;
\vec{\mathcal{V}} \;=\; \bmatrix{ 0 \cr 0 \cr 0 }$.
However, there still remains the symmetry present in the qubit form of the
relative entropy formula, namely that
$\mathcal{D}(\,\rho\,\| \, \phi\,) \,=\,
\mathcal{D}(\,\vec{\mathcal{W}} \,\| \, \vec{\mathcal{V}} \,) \,=\,
f(\,r\,,\,q\,,\,\theta\,)$,
where $r \;=\; \| \, \vec{\mathcal{W}} \, \|$,
$q \;=\; \| \, \vec{\mathcal{V}} \, \|$, and
$\theta$ is the angle between
$\vec{\mathcal{W}}$ and $\vec{\mathcal{V}}$. The fact that the qubit
relative entropy depends only on $r$, $q$, and $\theta$ yields a
symmetry which can be
used to advantage in analyzing non-unital channels, as our last example
will demonstrate.
\subsection{The Amplitude Damping Channel}
The amplitude damping channel is a non-unital channel with
$\{\, t_x\,=\, t_y\,=\, 0\, \}$ and
\linebreak
$\{\, t_z\,=\, 1 \, - \, \xi \, \}$.
The $\lambda_k$ are
$\{\lambda_x\,=\, \lambda_y\,=\, \sqrt{ \xi} \,\}$, and
$\{\lambda_z\,=\, \, \xi \,\}$,
where $\xi$ is the channel parameter, $\xi \; \in \; [\,0\,,\,1\,]$.
The amplitude damping channel is
discussed in more detail
in Appendix D.
The determination of the major axis/surface reduces to an analysis
in either the X-Y or X-Z Bloch sphere {\em plane} because of
symmetries of the channel ellipsoid {\em and} the relative entropy
formula for qubit density matrices. Since the relative entropy formula
depends only on the $r$, $q$ and $\theta$ quantities which were defined above,
by examining contour {\em curves} of relative entropy in
the X-Z plane, we can create a {\em surface} of constant relative entropy
in the three dimensional X-Y-Z Bloch sphere space by the solid of revolution
technique. That is, we shall revolve our X-Z contour curves about the
axis of symmetry, here the Z-axis. Now the channel ellipsoid in this
case is also rotationally symmetric about the Z-axis, because
$t_x \,=\, t_y \,=\,0$ and
$\lambda_x \,=\, \lambda_y$. Thus optimum signal {\em points}
(points on the
channel ellipsoid surface which have maximal relative entropy distance
from the average signal density matrix), in the X-Z plane, will become
{\em circles} of optimal signals in the full three dimensional
Bloch sphere picture
after the revolution about the Z - axis is completed.
Therefore, due to the simultaneous rotational symmetry
about the Bloch sphere Z axis of the relative
entropy formula (for qubits) and the channel ellipsoid, a full three
dimensional analysis of the amplitude damping channel reduces to a
much easier, yet equivalent, two dimensional analysis in the Bloch X-Z plane.
To illustrate these ideas, we take a specific instance of the
amplitude damping channel with $\xi \,=\, 0.36$. Then
$\{\, t_x\,=\, t_y\,=\, 0\, \}$ and
$\{\, t_z\,=\, 0.64\, \}$.
The $\lambda_k$ are
$\{\lambda_x\,=\, \lambda_y\,=\, 0.6 \,\}$, and
$\{\lambda_z\,=\, \, 0.36 \,\}$. In this case
$\mathcal{C}_1 \,=\, 0.3600$ is achieved with two equiprobable
signalling states. The optimum average density matrix has
Bloch vector $\vec{\mathcal{V}} \,=\, \bmatrix{ 0 \cr 0 \cr 0.7126}$, and
is shown with an asterisk in the plots below.
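These quoted values can be checked against the Appendix A formula. The sketch below
(ours; the channel parameters are simply restated from this example) scans the channel
ellipse in the X-Z plane and confirms that the largest relative entropy distance from
$\vec{\mathcal{V}}$ comes out close to the quoted $\mathcal{C}_1 \;=\; 0.3600$.
\begin{verbatim}
import numpy as np

t, lam = np.array([0.0, 0.0, 0.64]), np.array([0.6, 0.6, 0.36])
V = np.array([0.0, 0.0, 0.7126])

def xlog2(x):
    return x * np.log2(x) if x > 0 else 0.0

def rel_entropy(w, v):                       # Appendix A closed form, in bits
    r, q = np.linalg.norm(w), np.linalg.norm(v)
    d = 0.5 * (xlog2(1 + r) + xlog2(1 - r))
    d += (-0.5 * np.log2(1 - q * q)
          - np.dot(w, v) / (2 * q) * np.log2((1 + q) / (1 - q)))
    return d

alphas = np.linspace(0.0, np.pi, 2001)       # pure input states in the X-Z plane
outs = [t + lam * np.array([np.sin(a), 0.0, np.cos(a)]) for a in alphas]
d = [rel_entropy(w, V) for w in outs]
k = int(np.argmax(d))
print(d[k], outs[k])                         # ~ 0.36 at W ~ (0.59, 0, 0.71)
\end{verbatim}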
In the first plot, we show the X(horizontal)-Z(vertical) Bloch sphere
plane. The outer
bold dotted ring is the pure state boundary, with
Bloch vector magnitude equal to one. The inner dashed circle is the channel
ellipsoid. The middle solid contour is the curve of constant relative entropy,
equal to 0.3600, and
centered at $\vec{\mathcal{V}}$. This relative entropy contour in the X-Z
plane contacts the channel ellipsoid at two {\em symmetrical} points,
indicated in the plot as {\textbf O}. Note
that these two contact points, and the location of $\vec{\mathcal{V}}$,
all lie on a perfectly horizontal line. The fact that the
line is horizontal is due to the fact that the two optimum signalling
states in the X-Z plane are symmetric about the Z axis. The point
$\vec{\mathcal{V}}$ is simply the two optimal output signal points
average. The corresponding optimal input signals are shown as
{\textbf O}'s on the outer bold dotted pure state boundary
semicircular curve.
\begin{center}
\includegraphics*[angle=-90,scale=0.6]
{ampdampV.ps}
\end{center}
\begin{center}
Figure 23:
The intersection in the Bloch sphere X-Z plane of
the amplitude damping channel ellipsoid (the inner dashed curve)
and the optimum relative
entropy contour (the solid curve). The two
optimum input signal states (on the outer bold dashed
Bloch sphere boundary curve) and the two optimum output signal
states (on the channel ellipsoid {\em and} the optimum relative entropy
contour curve) are shown as {\textbf O}.
\end{center}
Note that the optimum input signalling states are nonorthogonal.
Furthermore, this analysis tells us that
$C_2 \;=\; C_3 \;=\; C_4$. For the amplitude damping channel,
there is no advantage to using more than two signals
in the optimum signalling ensemble.
The following is a picture similar to those we have
done for the planar channels we
examined. We plot the magnitude of the relative entropy as one
moves around the channel ellipsoid in the X-Z plane. The angle $\theta$ is
with respect to the Bloch sphere origin (i.e., the X-Z plane origin).
\begin{center}
\includegraphics*[angle=-90,scale=0.6]
{figure24.ps}
\end{center}
\begin{center}
Figure 24:
The change in $\mathcal{D}( \; \rho \; \| \; \phi \,\equiv \, $
{\textbf *} ) as we move $\rho$ around the channel ellipsoid.
The angle theta is with respect to the Bloch sphere origin.
\end{center}
Thus, the rotational
symmetry about the Z-axis of the relative entropy formula, coupled with
the same Z - axis rotational symmetry of the amplitude damping channel
ellipsoid, yields a complete
understanding of the behavior of the amplitude damping channel
with just a simple two dimensional analysis.
\section{Summary and Conclusions}
In this paper, we have derived a formula for the relative entropy
of two single qubit density matrices. By combining our relative
entropy formula with the King-Ruskai et al. ellipsoid picture of qubit
channels, we can use the
Schumacher-Westmoreland relative entropy approach to classical
HSW channel capacity to analyze unital and non-unital
single qubit channels in detail.
The following observation also emerges from the examples and
analyses above. In numerical simulations by this author and others,
it was noted that the a priori probabilities of the
optimum signalling states for non-unital qubit channels were in general,
approximately, but not exactly, equal. For example, consider the case of linear channels,
where the optimum HSW channel capacity is achieved with two signalling states.
In our first linear channel example, one signalling
state had an a priori probability of 0.5156 and the other signalling
state had an a priori probability of 0.4844. Similarly, in our second
linear channel example, the respective a priori probabilities were
0.5267 and 0.4733. These asymmetries in the a priori probabilities
are due to the fact that $\mathcal{D}$
is not purely a radial function of distance from
$\vec{\mathcal{V}}_{optimum}$. The relative entropy contours shown in
Figures 2 through 11 are moderately, but not exactly, circular
about $\vec{\mathcal{V}}_{optimum}$. This slight radial asymmetry leads to
a priori signal probabilities that are approximately, but not exactly, equal.
Thus, a graphical estimate of the a priori signal probabilities can be
made by observing the degree of asymmetry of the optimum relative entropy
contour about $\vec{\mathcal{V}}_{optimum}$.
In conclusion, the analysis above yields
a geometric picture which we hope will lead
to future insights into the transmission of classical information
over single qubit channels.
\section{Acknowledgments}
The author would like to thank Patrick Hayden, Charlene Ahn,
Sumit Daftuar and John Preskill
for helpful comments on a draft of this paper.
The author would also like to thank Beth Ruskai for many
interesting conversations on the classical channel capacity of
qubit channels.
\begin{appendix}
\section{Appendix A - The Derivation Of The Bloch Sphere Relative
Entropy Formula }
The relative entropy of two density matrices $\varrho$ and $\psi$ is defined
to be
$$
\mathcal{D} ( \; \varrho \, || \; \psi\;) \;=\;
Tr [ \; \varrho \, ( \; \log_2(\varrho) \;-\; \log_2(\psi) \;) \; ]
$$
Our main interest is when both $\varrho$ and $\psi$ are qubit
density operators. In that case, $\varrho$ and $\psi$ can be written using
the Bloch sphere representation.
$$
\varrho \; = \; \frac{1}{2} \; \left ( \, \mathcal{I} \; + \; \vec{\mathcal{W}} \bullet \vec{\sigma} \, \right ) \quad \quad \quad \quad \; \psi \; = \; \frac{1}{2} \; \left ( \, \mathcal{I} \; + \; \vec{\mathcal{V}} \bullet \vec{\sigma} \, \right)
$$
To simplify notation below, we define
$$
r \;=\; \sqrt { \vec{\mathcal{W}} \bullet \vec{\mathcal{W}}}
\quad \quad and \quad \quad q \;=\; \sqrt { \vec{\mathcal{V}} \bullet \vec{\mathcal{V}} }
$$
We shall also define $\cos(\theta)$ as :
$$
\cos(\theta) \;\;=\;\;
\frac{\vec{\mathcal{W}}\bullet \vec{\mathcal{V}}}{ \; r \; q \; }
$$
where $r$ and $q$ are as above.
The symbol $ \vec{\sigma}$ means the vector of $2 \times 2$ Pauli matrices
$$ \vec{\sigma} \; = \;
\bmatrix{ \sigma_x \cr \sigma_y \cr \sigma_z }
\;\;\;\;\; where
\;\;\;\;\; \sigma_x \;=\; \bmatrix{ 0 & 1 \cr 1 & 0 },
\;\;\;\;\; \sigma_y \;=\; \bmatrix{ 0 & -i \cr i & 0 },
\;\;\;\;\; \sigma_z \;=\; \bmatrix{ 1 & 0 \cr 0 & -1 }
$$
The Bloch vectors $\vec{\mathcal{W}}$ and $\vec{\mathcal{V}}$ are real,
three dimensional vectors
which have magnitude equal to one when representing a pure state density matrix,
and magnitude less than one for a mixed (non-pure) density matrix.
The density matrices for $\varrho$ and $\psi$ in terms of their Bloch
vectors are :
$$
\varrho \;=\; \left[ \begin {array}{cc}
\frac{1}{2}+\frac{1}{2}\,{\it w_3} & \frac{1}{2}\,{\it w_1}-\frac{1}{2}\,i\,{\it w_2} \\
\frac{1}{2}\,{\it w_1}+\frac{1}{2}\,i\,{\it w_2} & \frac{1}{2}-\frac{1}{2}\,{\it w_3}
\end {array} \right]
$$
$$
\psi \;=\; \left[ \begin {array}{cc}
\frac{1}{2}+\frac{1}{2}\,{\it v_3} & \frac{1}{2}\,{\it v_1}-\frac{1}{2}\,i\,{\it v_2} \\
\frac{1}{2}\,{\it v_1}+\frac{1}{2}\,i\,{\it v_2} & \frac{1}{2}-\frac{1}{2}\,{\it v_3}
\end {array} \right]
$$
We shall prove the following formula in two ways, an algebraic proof and
a brute force proof. We conclude Appendix A with some alternate
representations of
this formula.
$$
\mathcal{D}(\, \varrho \, \| \, \psi \, ) \;=\;
\mathcal{D}_1 \;-\; \mathcal{D}_2
\;=\;
\frac{1}{2} \log_2 \left ( 1\;-\;r^2 \right ) \;+\;
\frac{r}{2} \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 \left ( 1 \;-\; q^2\; \right )
\;-\; \frac{\; \vec{\mathcal{W}} \bullet \vec{\mathcal{V}} \; }
{\; 2 \; q\; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
$$
\;=\;
\frac{1}{2} \log_2 \left ( 1\;-\;r^2 \right ) \;+\;
\frac{r}{2} \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 \left ( 1 \;-\; q^2\; \right )
\;-\; \frac{r \; \cos ( \theta ) }
{\; 2 \; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
where $\theta$ is the angle between $\vec{\mathcal{W}}$
and $\vec{\mathcal{V}}$.
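Before turning to the proofs, here is a small numerical sanity check (a sketch, not
part of the paper): it compares the closed form above against the defining expression
$\mathcal{D}(\varrho\|\psi) \;=\; Tr[\,\varrho\,(\log_2\varrho - \log_2\psi)\,]$
evaluated by explicit eigendecomposition, for random mixed qubit states.
\begin{verbatim}
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def rho_of(w):
    return 0.5 * (np.eye(2, dtype=complex) + sum(a * s for a, s in zip(w, PAULI)))

def log2m(m):
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.log2(vals)) @ vecs.conj().T

def D_matrix(w, v):
    rho, psi = rho_of(w), rho_of(v)
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(psi)))))

def D_bloch(w, v):
    r, q = np.linalg.norm(w), np.linalg.norm(v)
    return (0.5 * np.log2(1 - r**2) + (r / 2) * np.log2((1 + r) / (1 - r))
            - 0.5 * np.log2(1 - q**2)
            - np.dot(w, v) / (2 * q) * np.log2((1 + q) / (1 - q)))

rng = np.random.default_rng(1)
for _ in range(5):
    w, v = rng.uniform(-0.57, 0.57, 3), rng.uniform(-0.57, 0.57, 3)
    print(D_matrix(w, v), D_bloch(w, v))     # the two columns agree
\end{verbatim}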
\subsection{Proof I : The Algebraic Proof }
$$
\mathcal{D}( \; \varrho \, || \; \psi\;) \;=\; Tr [ \; \varrho \, ( \; \log_2(\varrho) \;-\;
\log_2(\psi) \;) \; ]
$$
Recall the following Taylor series, valid for $\| \, x\,\| \;< \, 1$.
$$
\ln( \;1\;+\; x\; ) \;=\; - \; \sum_{n=1}^{\infty} \; \frac{\;\left ( \; - x \;
\right ) ^n}{n}
\;=\; x \;-\; \frac{x^2}{2} \;+\; \frac{x^3}{3} \;-\; \frac{x^4}{4}
\;+\; \frac{x^5}{5} \;-\; \frac{x^6}{6} \;+\; \frac{x^7}{7} \;-\; \cdots
$$
$$
\ln( \;1\;-\; x\; ) \;=\; - \; \sum_{n=1}^{\infty} \; \frac{x^n}{n}
\;=\; - \; x \;-\; \frac{x^2}{2} \;-\; \frac{x^3}{3} \;-\; \frac{x^4}{4}
\;-\; \frac{x^5}{5} \;-\; \frac{x^6}{6} \;-\; \frac{x^7}{7} \;-\; \cdots
$$
Combining these two Taylor series yields another Taylor expansion
we shall be interested in :
$$
\frac{1}{2} \; \left \{ \;
\ln( \;1\;+\; x\; ) \;-\; \ln( \;1\;-\; x\; ) \; \right \} \;=\;
\frac{1}{2} \; \ln \left ( \;\frac{\; 1\;+\; x\;}{ \;1\;-\; x\; } \; \right )
$$
$$
\;=\; - \; \sum_{n=1}^{\infty} \; \frac{(-x)^n}{n}
\quad-\quad \left ( \; -\; \sum_{n=1}^{\infty} \; \frac{x^n}{n} \; \right )
\;=\; x \;+\; \frac{x^3}{3} \;+\; \frac{x^5}{5} \;+\; \frac{x^7}{7}
\;+\; \frac{x^9}{9} \;+\; \cdots
$$
A different combination of the first two Taylor series above yields
yet another Taylor expansion we shall be interested in :
$$
\frac{1}{2} \; \left \{ \;
\ln( \;1\;+\; x\; ) \;+\; \ln( \;1\;-\; x\; ) \; \right \} \;=\;
\frac{1}{2} \; \ln \left [ \; 1\;-\; x^2 \; \right ]
$$
$$
\;=\; - \; \sum_{n=1}^{\infty} \; \frac{(-x)^n}{n}
\quad+\quad \left ( \; -\; \sum_{n=1}^{\infty} \; \frac{x^n}{n} \; \right ) \;
\;=\; - \; \frac{x^2}{2} \;-\; \frac{x^4}{4} \;-\; \frac{x^6}{6}
\;-\; \frac{x^8}{8} \;-\; \cdots
$$
Consider $\log( \; \varrho \; )$ with the Bloch sphere representation for
$\varrho$.
$$
\varrho \; = \; \frac{1}{2} \; \left (
\mathcal{I} \; + \; \vec{\mathcal{W}} \bullet \vec{\sigma} \right )
$$
We obtain, using the expansion given above for $\log ( \,1\,+\,x\,)$,
$$
\log( \; \varrho \; ) \;=\; \log \left [ \; \frac{1}{2} \, \left ( \; \mathcal{I}
\;+\; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \; \right ]
\;=\; \log \left [ \; \frac{1}{2} \; \right ] \;+\; \log \left [ \; \mathcal{I}
\;+\; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ]
\;=\; \log \left [ \; \frac{1}{2} \; \right ] \;-\;
\sum_{n=1}^{\infty} \; \frac{
\; \left ( \; - \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right )^n \;}
{n}
$$
Recall that
$\left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right )^2 \; =\; r^2$,
where $r \;=\; \sqrt { \vec{\mathcal{W}} \bullet \vec{\mathcal{W}}}$.
Thus we have for even $n$,
$\left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right )^n
\; =\; r^{n}$, while for odd $n$ we have
$\quad \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right )^n
\; =\; r^{n\,-\,1} \quad \vec{\mathcal{W}} \bullet \vec{\sigma}$.
The expression for $\log ( \;\varrho \; )$ then becomes
$$
\log ( \;\varrho \; ) \;=\;
\log \left [ \; \frac{1}{2} \; \right ] \quad - \quad
\sum_{n=1}^{\infty} \; \frac{
\; \left ( \; - \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right )^n \;}{n}
$$
$$
\;=\;
\log \left [ \; \frac{1}{2} \; \right ] \quad + \quad
\; \vec{\mathcal{W}} \bullet \vec{\sigma}
\; - \; \frac{r^2}{2} \;+\;
\; \frac{r^2}{3} \; \vec{\mathcal{W}} \bullet \vec{\sigma} \;
\; - \; \frac{r^4}{4} \;+\;
\; \frac{r^4}{5} \; \vec{\mathcal{W}} \bullet \vec{\sigma} \;
\; - \; \frac{r^6}{6} \;+\;
\; \frac{r^6}{7} \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; - \; \cdots
$$
$$
\;=\;
\log \left [ \; \frac{1}{2} \; \right ] \quad + \quad
\; \frac{ \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; }{r} \; \left ( \;
\; r
\; + \; \frac{r^3}{3}
\; + \; \frac{r^5}{5}
\; + \; \frac{r^7}{7} \;+\; \cdots \right )
\;+\;
\left (
\; - \; \frac{r^2}{2} \;
\; - \; \frac{r^4}{4} \;
\; - \; \frac{r^6}{6} \;
\; - \; \frac{r^8}{8} \;
\; - \; \frac{r^{10}}{10} \;-\; \cdots \; \right )
$$
$$
\;=\;
\log \left [ \; \frac{1}{2} \; \right ] \quad + \quad
\frac{ \; \vec{\mathcal{W}} \bullet \vec{\sigma} \;}{\; 2 \; r\; } \;
\log \left [ \; \frac{ \;1\;+\; r\;}{ \;1\;-\; r\;} \; \right ] \;
\; + \; \frac{1}{2} \; \log \left [ \;1\;-\; r^2\; \right ] \;
$$
To evaluate $Tr \left [ \; \varrho \; \log ( \;\varrho \; )\; \right ]$ we again
use the Bloch sphere representation for $\varrho$.
$$
\varrho \; = \; \frac{1}{2} \; \left ( \; \mathcal{I} \; + \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right )
$$
We write
$$
Tr \left [ \; \varrho \; \log ( \;\varrho \; ) \; \right ] \;=\;
\frac{1}{2} \; Tr \left [ \; \mathcal{I} \;\bullet \; \log ( \;\varrho \; ) \;
\right ]
\; + \; \frac{1}{2} \;
Tr \left [ \; \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \; \log ( \;\varrho \; )\; \right ]
$$
Using our results above,
$$
\frac{1}{2} \;
Tr \left [ \; \mathcal{I} \;\bullet \; \log ( \;\varrho \; ) \; \right ]
\; =\;
\log \left [ \;\frac{1}{2} \; \right ] \;+\;
\frac{1}{2} \; \log \left [ \;1\;-\; r^2\; \right ]
$$
since
$Tr[\;\mathcal{I}\;] \;=\; 2$ and
$Tr[\;\sigma_x\;] \;=\; Tr[\;\sigma_y\;] \;=\; Tr[\;\sigma_z\;] \;=\; 0$.
Similarly,
$$
Tr \left [ \; \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \; \log ( \;\varrho \; )\; \right ]
\; =\;
\frac{\; \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) ^2 \; }{r}
\; \log \left [ \; \frac{ \;1\;+\; r\;}{ \;1\;-\; r\;} \; \right ] \;
\; =\; r\; \log \left [ \; \frac{ \;1\;+\; r\;}{ \;1\;-\; r\;} \; \right ] \;
$$
where we again used the fact
$Tr[\;\mathcal{I}\;] \;=\; 2$ and
$Tr[\;\sigma_x\;] \;=\; Tr[\;\sigma_y\;] \;=\; Tr[\;\sigma_z\;] \;=\; 0$.
Putting all the pieces together yields :
$$
Tr \left [ \; \varrho \; \log ( \;\varrho \; ) \; \right ] \;=\;
\frac{1}{2} \; Tr \left [ \; \mathcal{I} \;\bullet \; \log ( \;\varrho \; ) \;
\right ]
\; + \; \frac{1}{2} \;
Tr \left [ \; \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \; \log ( \;\varrho \; )\; \right ]
$$
$$
\; =\;
\log \left [ \,\frac{1}{2} \, \right ] \;+\;
\frac{1}{2} \; \log \left [ \;1\;-\; r^2\; \right ]
\; +\; \frac{r}{2} \; \log \left [ \; \frac{ \;1\;+\; r\;}{ \;1\;-\; r\;} \; \right ] \;
$$
To evaluate $ Tr[ \; \varrho \; \log ( \;\psi \; ) \; ]$, we follow a similar
path and use the Bloch sphere representation for $\psi$ of
$$
\psi \; = \; \frac{1}{2} \; \left ( \mathcal{I} \; + \; \vec{\mathcal{V}}
\bullet \vec{\sigma} \right )
$$
The expression for $\log ( \;\psi\; )$ then becomes
$$
\log ( \;\psi\; )
\;=\;
\log \left [ \;\frac{1}{2} \; \right ] \;+\;
\frac{ \; \vec{\mathcal{V}} \bullet \vec{\sigma} \;}{\; 2 \; q\; } \;
\log \left [ \; \frac{ \;1\;+\; q\;}{ \;1\;-\; q\;} \; \right ] \;
\; + \; \frac{1}{2} \; \log \left [ \;1\;-\; q^2\; \right ] \;
$$
Using our results above,
$$
\frac{1}{2} \;
Tr \left [ \; \mathcal{I} \;\bullet \; \log ( \;\psi \; ) \; \right ]
\; =\;
\log \left [ \;\frac{1}{2} \; \right ] \;+\;
\frac{1}{2} \; \log \left [ \;1\;-\; q^2\; \right ]
\;=\;
- \, \log [\,2\, ] \;+\; \frac{1}{2} \; \log \left [ \;1\;-\; q^2\; \right ]
$$
$$
Tr \left [ \; \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \;
\log ( \;\psi \; )\; \right ]
\; =\;
\frac{\; \vec{\mathcal{W}} \bullet \vec{\mathcal{V}} \; }{q}
\; \log \left [ \; \frac{ \;1\;+\; q\;}{ \;1\;-\; q\;} \; \right ] \;
\; =\; r \; \cos ( \; \theta \; ) \; \log \left [ \; \frac{ \;1\;+\; q\;}{ \;1\;-\; q\;} \; \right ] \;
$$
where we again used the fact
$Tr[\;\mathcal{I}\;] \;=\; 2$ and
$Tr[\;\sigma_x\;] \;=\; Tr[\;\sigma_y\;] \;=\; Tr[\;\sigma_z\;] \;=\; 0$.
We also used the fact that
$$
\left ( \; \vec{\mathcal{V}} \bullet \vec{\sigma} \; \right ) \;
\left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \;
= \;
\left ( \; \vec{\mathcal{V}} \bullet \vec{\mathcal{W}} \; \right ) \; \mathcal{I}
\;+\;
\left ( \; \vec{\mathcal{V}} \times \vec{\mathcal{W}} \; \right ) \;
\bullet \vec{\sigma}
$$
and therefore
$$
Tr \left [ \; \left ( \; \vec{\mathcal{V}} \bullet \vec{\sigma} \; \right ) \;
\left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \; \right ]
\; = \;
Tr \left [ \; \left ( \;
\vec{\mathcal{V}} \bullet \vec{\mathcal{W}} \; \right ) \; \mathcal{I} \; \right ]
\;+\;
Tr \left [ \; \left ( \; \vec{\mathcal{V}} \times \vec{\mathcal{W}} \;
\right ) \; \bullet \vec{\sigma} \; \right ]
$$
$$
\; = \;
\left ( \; \vec{\mathcal{V}} \bullet \vec{\mathcal{W}} \; \right ) \;
Tr \left [ \; \mathcal{I} \; \right ]
\;+\;
\left ( \; \vec{\mathcal{V}} \times \vec{\mathcal{W}} \;
\right ) \; \bullet \; Tr \left [ \; \vec{\sigma} \; \right ]
\; = \;
2 \; \vec{\mathcal{V}} \bullet \vec{\mathcal{W}}
$$
Assembling the pieces :
$$
Tr[ \; \varrho \; \log ( \;\psi \; ) \; ] \;=\;
\frac{1}{2} \; Tr[\; \mathcal{I} \;\bullet \; \log ( \;\psi \; ) \;]\;
+ \; \frac{1}{2} \;
Tr \left [ \; \left ( \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; \right ) \; \log ( \;\psi \; )\; \right ]
$$
$$
\; =\;
\log \left [ \; \frac{1}{2} \; \right ] \quad + \quad
\frac{1}{2} \; \log \left [ \;1\;-\; q^2\; \right ]
\; +\; \frac{r}{2} \; \cos ( \; \theta \; ) \; \log \left [ \; \frac{ \;1\;+\; q\;}{ \;1\;-\; q\;} \; \right ] \;
$$
Using these pieces, we obtain our final formula :
$$
\mathcal{D}( \; \varrho \, || \; \psi\;) \;=\; Tr [ \; \varrho \, ( \; \log_2(\varrho) \;-\; \log_2(\psi) \;
) \; ]
$$
$$
\; =\;
\log_2 \left [ \; \frac{1}{2} \; \right ] \quad + \quad
\frac{1}{2} \; \log_2 \left [ \;1\;-\; r^2\; \right ]
\; +\; \frac{r}{2} \; \log_2 \left [ \; \frac{ \;1\;+\; r\;}{ \;1\;-\; r\;} \; \right ] \;
$$
$$
- \; \log_2 \left [ \; \frac{1}{2} \; \right ]
\; -\; \frac{1}{2} \; \log_2 \left [ \;1\;-\; q^2\; \right ]
\; -\; \frac{r}{2} \; \cos ( \; \theta \; ) \; \log_2 \left [ \; \frac{ \;1\;+\; q\;}{ \;1\;-\; q\;} \; \right ] \;
$$
$$
\; =\;
\frac{1}{2} \; \log_2 \left [ \;1\;-\; r^2\; \right ]
\; +\; \frac{r}{2} \; \log_2 \left [ \; \frac{ \;1\;+\; r\;}{ \;1\;-\; r\;} \; \right ]
\; -\; \frac{1}{2} \; \log_2 \left [ \;1\;-\; q^2\; \right ]
\; -\; \frac{r}{2} \; \cos ( \; \theta \; ) \; \log_2 \left [ \; \frac{ \;1\;+\; q\;}{ \;1\;-\; q\;} \; \right ] \;
$$
which is our desired formula.
$\bigtriangleup$ - {\em End of Proof I}.
\subsection{Proof II : The Brute Force Proof }
The density matrices for $\varrho$ and $\psi$ in terms of their Bloch
vectors are :
$$
\varrho \;=\; \left[ \begin {array}{cc}
\frac{1}{2}+\frac{1}{2}\,{\it w_3} & \frac{1}{2}\,{\it w_1}-\frac{1}{2}\,i\,{\it w_2} \\
\frac{1}{2}\,{\it w_1}+\frac{1}{2}\,i\,{\it w_2} & \frac{1}{2}-\frac{1}{2}\,{\it w_3}
\end {array} \right]
$$
$$
\psi \;=\; \left[ \begin {array}{cc}
\frac{1}{2}+\frac{1}{2}\,{\it v_3} & \frac{1}{2}\,{\it v_1}-\frac{1}{2}\,i\,{\it v_2} \\
\frac{1}{2}\,{\it v_1}+\frac{1}{2}\,i\,{\it v_2} & \frac{1}{2}-\frac{1}{2}\,{\it v_3}
\end {array} \right]
$$
The eigenvalues of these two density matrices are :
$$
\lambda_{\varrho}^{(1)} \;=\; \frac{1}{2}\; +\; \frac{1}{2}\,\sqrt {{{\it w_2}}^{2}+{{\it w_3}}^{2}+{{\it w_1}}^{2}} \;=\; \frac{1 \;+\; r}{2}
$$
$$
\lambda_{\varrho}^{(2)} \;=\; \frac{1}{2}\;- \;\frac{1}{2}\,\sqrt {{{\it w_2}}^{2}+{{\it w_3}}^{2}+{{\it w_1}}^{2}} \;=\; \frac{1 \;-\; r}{2}
$$
$$
\lambda_{\psi}^{(1)} \;=\; \frac{1}{2}\; +\; \frac{1}{2}\,\sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it v_1}}^{2}} \;=\; \frac{1 \;+\; q}{2}
$$
$$
\lambda_{\psi}^{(2)} \;=\; \frac{1}{2}\;- \;\frac{1}{2}\,\sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it v_1}}^{2}} \;=\; \frac{1 \;-\; q}{2}
$$
We shall also be interested in the two eigenvectors of $\psi$. In terms of
the components $v_j$ of $\vec{\mathcal{V}}$ and of
$q \;=\; \sqrt{{{\it v_1}}^{2}+{{\it v_2}}^{2}+{{\it v_3}}^{2}}$, these are :
$$
\ket{e_1} \;=\; N_1 \;
\bmatrix{1 \cr {\frac { \left( \, q \;-\; {\it v_3} \, \right) \left( \, {\it v_1} \;+\; i\,{\it v_2} \, \right) }{{{\it v_1}}^{2}\;+\;{{\it v_2}}^{2}}}}
$$
where $N_1$ is the normalization constant given below.
$$
N_1 \;=\; \sqrt{ \frac{\; q \;+\; {\it v_3}\; }{2\,q} }
$$
Similarly,
$$
\ket{e_2} \;=\; N_2 \;
\bmatrix{ {-\,\frac { \left( \, q \;-\; {\it v_3} \, \right) \left( \, {\it v_1} \;-\; i\,{\it v_2} \, \right) }{{{\it v_1}}^{2}\;+\;{{\it v_2}}^{2}}} \cr 1}
$$
$$
N_2 \; =\;
\sqrt{ \frac{\; q \;+\; {\it v_3}\; }{2\,q} }
$$
We wish to derive a formula for $\mathcal{D}(\varrho \, \| \, \psi)$ in terms of
the Bloch sphere vectors $\vec{\mathcal{W}}$ and
$\vec{\mathcal{V}}$. We do this by
breaking $\mathcal{D}(\varrho \, \| \, \psi)$
up into two terms, $\mathcal{D}_1$ and
$\mathcal{D}_2$.
$$
\mathcal{D}(\varrho \, \| \, \psi) \;=\; \mathcal{D}_1 \;-\; \mathcal{D}_2
$$
We expand $\mathcal{D}_1$ using our knowledge of the eigenvalues of $\varrho$.
$$
\mathcal{D}_1\;=\; Tr [ \;\varrho \; \log_2(\varrho) \; ]
$$
$$
\;=\;
\lambda_{\varrho}^{(1)} \; \log_2( \lambda_{\varrho}^{(1)} ) \; + \;
\lambda_{\varrho}^{(2)} \; \log_2( \lambda_{\varrho}^{(2)} )
$$
$$
\;=\;
\left ( \frac{ \;1\;+\;r\;}{\;2\;} \right )
\log_2 \left ( \frac{ \;1\;+\;r\;}{\;2\;} \right ) \;\; + \;\;
\left ( \frac{ \;1\;-\;r\;}{\;2\;} \right )
\log_2 \left ( \frac{ \;1\;-\;r\;}{\;2\;} \right )
$$
$$
\;=\; -\;\; 1 \;+\;
\left ( \frac{ \;1\;+\;r\;}{\;2\;} \right )
\log_2 \left ( 1\;+\;r \right ) \;\; + \;\;
\left ( \frac{ \;1\;-\;r\;}{\;2\;} \right )
\log_2 \left ( 1\;-\;r \right )
$$
One notes that $\mathcal{D}_1\;=\; -\mathcal{S}(\varrho)$, where
$\mathcal{S}(\varrho)$ is the von
Neumann entropy of the density matrix $\varrho$.
The second term, $\mathcal{D}_2$, is
$\mathcal{D}_2\;=\; Tr [ \;\varrho \; \log_2(\psi) \; ]$.
We evaluate $\mathcal{D}_2$ in the basis which diagonalizes $\psi$.
$$
\mathcal{D}_2\;=\; Tr [ \;\varrho \; \log_2(\psi) \; ] \;=\;
\log_2( \lambda_{\psi}^{(1)} ) \;
Tr \left [ \; \varrho \; \ket{e_1}\bra{e_1} \; \right ]\; + \;
\log_2( \lambda_{\psi}^{(2)} ) \;
Tr \left [ \; \varrho \; \ket{e_2}\bra{e_2} \; \right ]
$$
We use the Bloch sphere representation for $\varrho$ in the expression
for $\mathcal{D}_2$.
$$
\varrho \; = \; \frac{1}{2} \;
( \; \mathcal{I} \; + \; \vec{\mathcal{W}} \bullet \vec{\sigma} \; )
$$
$$
\mathcal{D}_2\;=\; Tr [ \;\varrho \; \log_2(\psi) \; ] \;=\;
$$
$$
\frac{1}{2} \; \log_2( \lambda_{\psi}^{(1)} ) \left [
\; Tr[ \; \ket{e_1}\bra{e_1} \; ]\; + \;
\; \sum_{i} \; w_i \; Tr[ \; \sigma_i \; \ket{e_1}\bra{e_1} \; ] \right ]
\;+\;
$$
$$
\frac{1}{2} \; \log_2( \lambda_{\psi}^{(2)} ) \left [
\; Tr[ \; \ket{e_2}\bra{e_2} \; ]\; + \;
\; \sum_{i} \; w_i \; Tr[ \; \sigma_i \; \ket{e_2}\bra{e_2} \; ] \right ]
$$
First note that
$Tr[ \; \ket{e_1}\bra{e_1} \; ]\; = \; Tr[ \; \ket{e_2}\bra{e_2} \; ]\; = \;1$
since the $\ket{e_{j}}\bra{e_{j}}$ are rank one projectors built from the normalized eigenvectors $\ket{e_{j}}$.
Next define
$$
\alpha_{i}^{(j)} \;=\; Tr[ \sigma_i \; \ket{e_j}\bra{e_j} ]
\;=\; \bra{e_j} \, \sigma_i \; \ket{e_j}
$$
Evaluating these six ( i = 1,2,3 and j = 1,2 ) constants yields :
$$
\alpha_1^{(1)} \;=\;
{\frac {{\it v_1}\, \left( \sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^{2}
}+{\it v_3} \right) }{{{\it v_1}}^{2}+\sqrt {{{\it v_2}}^{2}+{{\it
v_3}}^{2}+{{\it v_1}
}^{2}}{\it v_3}+{{\it v_2}}^{2}+{{\it v_3}}^{2}}}
\;=\;
\frac{ v_1 \; ( \; q \;+\; v_3\;)}{q^2 \;+\;
q\,v_3} \;=\; \frac{v_1}{q}
$$
$$
\alpha_2^{(1)} \;=\;
{\frac {{\it v_2}\, \left( \sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^{2}
}+{\it v_3} \right) }{{{\it v_1}}^{2}+\sqrt {{{\it v_2}}^{2}+{{\it
v_3}}^{2}+{{\it v_1}
}^{2}}{\it v_3}+{{\it v_2}}^{2}+{{\it v_3}}^{2}}}
\;=\;
\frac{ v_2 \; ( \; q \;+\; v_3\;)}{q^2 \;+\;
q\,v_3} \;=\; \frac{v_2}{q}
$$
$$
\alpha_3^{(1)} \;=\;
{\frac { \left( \sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^{2}}+{\it v_3}
\right) {\it v_3}}{{{\it v_1}}^{2}+\sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^
{2}}{\it v_3}+{{\it v_2}}^{2}+{{\it v_3}}^{2}}}
\;=\;
\frac{ v_3 \; ( \; q \;+\; v_3\;)}{q^2 \;+\;
q\,v_3} \;=\; \frac{v_3}{q}
$$
$$
\alpha_1^{(2)} \;=\;
{\frac {{\it v_1}\, \left( \sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^{2}
}-{\it v_3} \right) }{-{{\it v_1}}^{2}+\sqrt {{{\it v_2}}^{2}+{{\it
v_3}}^{2}+{{\it v_1
}}^{2}}{\it v_3}-{{\it v_2}}^{2}-{{\it v_3}}^{2}}}
\;=\;
- \frac{ v_1 \; ( \; q \;-\; v_3\;)}{q^2 \;-\;
q\,v_3} \;=\; -\frac{v_1}{q}
$$
$$
\alpha_2^{(2)} \;=\;
{\frac {{\it v_2}\, \left( \sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^{2}
}-{\it v_3} \right) }{-{{\it v_1}}^{2}+\sqrt {{{\it v_2}}^{2}+{{\it
v_3}}^{2}+{{\it v_1
}}^{2}}{\it v_3}-{{\it v_2}}^{2}-{{\it v_3}}^{2}}}
\;=\;
- \frac{ v_2 \; ( \; q \;-\; v_3\;)}{q^2 \;-\;
q\,v_3} \;=\; - \frac{v_2}{q}
$$
$$
\alpha_3^{(2)} \;=\;
{\frac { \left( \sqrt {{{\it v_2}}^{2}+{{\it v_3}}^{2}+{{\it
v_1}}^{2}}-{\it v_3}
\right) {\it v_3}}{-{{\it v_1}}^{2}+\sqrt {{{\it v_2}}^{2}+{{\it
v_3}}^{2}+{{\it v_1}}
^{2}}{\it v_3}-{{\it v_2}}^{2}-{{\it v_3}}^{2}}}
\;=\;
- \frac{ v_3 \; ( \; q \;-\; v_3\;)}{q^2 \;-\;
q\,v_3} \;=\; - \frac{v_3}{q}
$$
Putting it all together yields :
$$
\mathcal{D}_2\;=\; Tr [ \;\varrho \; \log_2(\psi) \; ]
$$
$$
\;=\; \frac{1}{2} \; \log_2 \left ( \lambda_{\psi}^{(1)} \right ) \left [
\; 1\; + \;
\; \sum_{i} \; w_i \, \alpha_{i}^{(1)} \; \right ]
\;+\;
\frac{1}{2} \; \log_2 \left ( \lambda_{\psi}^{(2)} \right ) \left [
\; 1\; + \;
\; \sum_{i} \; w_i \, \alpha_{i}^{(2)} \; \right ]
$$
$$
\;=\; \frac{1}{2} \; \left [ \; 1\; + \;
\; \sum_{i} \; w_i \, \frac{v_i}{q} \; \right ]
\; \log_2 \left ( \lambda_{\psi}^{(1)} \right )
\;+\;
\frac{1}{2} \; \left [ \; 1\; + \;
\; \sum_{i} \; w_i \, \frac{-\,v_i}{q} \; \right ]
\; \log_2 \left ( \lambda_{\psi}^{(2)} \right )
$$
$$
\;=\; \frac{1}{2}
\; \left [
\; 1\; + \; \; \frac{ \vec{\mathcal{W}} \bullet \vec{\mathcal{V}}}{q} \; \right ]
\; \log_2 \left ( \lambda_{\psi}^{(1)} \right )
\;+\;
\frac{1}{2}
\; \left [
\; 1\; - \;
\; \frac{ \vec{\mathcal{W}} \bullet \vec{\mathcal{V}}}{q} \; \right ]
\; \log_2 \left ( \lambda_{\psi}^{(2)} \right )
$$
Plugging in for the eigenvalues $\lambda_{\psi}^{(1)}$ and
$\lambda_{\psi}^{(2)}$ which we found above yields :
$$
\mathcal{D}_2\;=\; Tr [ \;\varrho \; \log_2(\psi) \; ]
$$
$$
\;=\; \frac{1}{2}
\; \left [
\; 1\; + \; \; \frac{ \vec{\mathcal{W}} \bullet \vec{\mathcal{V}}}{q} \; \right ]
\; \log_2 \left ( \frac{ 1 \;+\; q}{2} \right )
\;+\;
\frac{1}{2}
\; \left [
\; 1\; - \;
\; \frac{ \vec{\mathcal{W}} \bullet \vec{\mathcal{V}}}{q} \; \right ]
\; \log_2 \left ( \frac{ 1 \;-\; q}{2} \right )
$$
$$
\;=\; \frac{1}{2}
\; \log_2 ( 1 \;-\; q^2\; ) \;-\; 1
\;+\; \frac{\; \vec{\mathcal{W}} \bullet \vec{\mathcal{V}} \; }
{\; 2 \; q\; } \; \log_2 \left ( \frac{ 1 \;+\; q }{1 \;-\; q} \right)
$$
Putting all the pieces together to
obtain $\mathcal{D}( \, \varrho \,\| \, \psi \, )$, we find
$$
\mathcal{D}( \, \varrho \, \| \, \psi \, ) \;=\;
\mathcal{D}_1 \;-\; \mathcal{D}_2
\;=\;
\frac{1}{2} \log_2 ( 1\;-\;r^2 ) \;+\;
\frac{r}{2} \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 ( 1 \;-\; q^2\; )
\;-\; \frac{\; \vec{\mathcal{W}} \bullet \vec{\mathcal{V}} \; }
{\; 2 \; q\; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
$$
\;=\;
\frac{1}{2} \, \log_2 ( 1\;-\;r^2 ) \;+\;
\frac{r}{2} \, \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 ( 1 \;-\; q^2\; )
\;-\; \frac{r \; \cos ( \theta ) }
{\; 2 \; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right ) .
$$
where $\theta$ is the angle between $\vec{\mathcal{W}}$
and $\vec{\mathcal{V}}$.
$\bigtriangleup$ - {\em End of Proof II}.
Ordinarily, $\mathcal{D}( \rho \| \phi ) \;\neq \;\mathcal{D}( \phi \| \rho )$.
However, when $r$ = $q$, we can see from the above formula that
$\mathcal{D}( \rho \| \phi ) \;=\;\mathcal{D}( \phi \| \rho )$.
A few special cases of
$\mathcal{D}( \rho \| \phi )$ are worth examining.
Consider the case when $\phi \;=\; \frac{1}{2} \; \mathcal{I}$.
In this case, $q$ = 0, and
$$
\mathcal{D}( \; \rho \; || \; \phi \; ) \;=\;
\frac{1}{2} \; \log_2 \left ( 1 - r^2 \right ) \;+ \;
\frac{r}{2} \; \log_2 \left ( \frac{1 + r}{1 - r } \right )
$$
$$
\;=\; \frac{ 1 \;+\; r}{2} \; \log_2 \left ( \frac{ 1 \;+\; r}{2} \; \right )
\;+\; \frac{ 1 \;+\; r}{2} \;
\;+\; \frac{ 1 \;-\; r}{2} \; \log_2 \left ( \frac{ 1 \;-\; r}{2} \; \right )
\;+\; \frac{ 1 \;-\; r}{2} \;
\;=\; 1 \;-\; \mathcal{S}(\rho)
$$
Thus,
$\mathcal{D}(\; \rho \; || \; \frac{1}{2} \; \mathcal{I} \; ) \;=\; 1 \;-\; \mathcal{S}(\rho)$,
where $\mathcal{S}(\rho)$ is the von Neumann entropy of $\rho$, the first density matrix
in the relative entropy function.
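
As a quick numerical sanity check of the closed form above, the short Python sketch
below (an added illustration, not part of the MAPLE/MATLAB code described in Appendix E;
the two Bloch vectors are arbitrary test values) compares the formula against a direct
matrix evaluation of $Tr[\rho\log_2\rho]\,-\,Tr[\rho\log_2\phi]$.
\begin{verbatim}
# Sanity check of the closed-form qubit relative entropy derived above.
# The Bloch vectors below are arbitrary test values (an illustration only).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_state(v):
    # density matrix with Bloch vector v (|v| < 1)
    return 0.5*(np.eye(2) + v[0]*sx + v[1]*sy + v[2]*sz)

def log2m(m):
    # matrix log base 2 of a Hermitian positive definite matrix
    w, u = np.linalg.eigh(m)
    return u @ np.diag(np.log2(w)) @ u.conj().T

w_vec = np.array([0.3, 0.1, 0.4])     # Bloch vector of rho
v_vec = np.array([-0.2, 0.5, 0.1])    # Bloch vector of phi
r, q = np.linalg.norm(w_vec), np.linalg.norm(v_vec)
cos_theta = np.dot(w_vec, v_vec)/(r*q)

rho, phi = bloch_state(w_vec), bloch_state(v_vec)
direct = np.trace(rho @ (log2m(rho) - log2m(phi))).real

closed = (0.5*np.log2(1 - r**2) + 0.5*r*np.log2((1 + r)/(1 - r))
          - 0.5*np.log2(1 - q**2)
          - 0.5*r*cos_theta*np.log2((1 + q)/(1 - q)))

print(direct, closed)   # the two values agree to machine precision
\end{verbatim}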
\section{Appendix B - The Derivation Of The Linear Channel
Transcendental Equation}
In this appendix, we derive the transcendental equation for determining the
optimum position of the average density matrix for a linear channel.
The picture of the quantities we shall define shortly is below.
\begin{center}
[Figure 25 (line drawing): the vectors $\vec{r}_{+}$, $\vec{q}$ and $\vec{r}_{-}$
drawn from the Bloch sphere origin, with the angle $\theta_{+}$ between
$\vec{q}$ and $\vec{r}_{+}$ and the angle $\theta_{-}$ between $\vec{q}$
and $\vec{r}_{-}$.]
\end{center}
\begin{center}
Figure 25:
Definition of the Bloch vectors
$\vec{r}_{+}$,
$\vec{q}$, and
$\vec{r}_{-}$ used in the derivation below.
\end{center}
We assume that in general all the $t_k\,\neq\,0$. We also assume the
linear channel is oriented in the z direction, so that
$\lambda_x \,=\, \lambda_y \,=\, 0$, but
$\lambda_z \,\neq\, 0$. We define
$$
A \;=\; t_x^2 \,+\, t_y^2 \,+\, ( \, t_z \,+\, \lambda_z \,)^2 \;=\; r_{+}^2
$$
$$
B \;=\; t_x^2 \,+\, t_y^2 \,+\, ( \, t_z \,+\, \beta \, \lambda_z \,)^2
\;=\; q( \, \beta \, )^2
$$
$$
C \;=\; t_x^2 \,+\, t_y^2 \,+\, ( \, t_z \,-\, \lambda_z \,)^2 \;=\; r_{-}^2
$$
The three quantities above are respectively
the squared distances from the Bloch
sphere origin to $r_{+}$, to the optimum point {\em q} we seek, and to
$\,r_{-}$.
We define the three Bloch vectors
$\vec{r}_{+}$, $\vec{q}$ and $\vec{r}_{-}$ in Figure 25 above, and
refer to their respective magnitudes as
$r_{+}$, $q$, and $r_{-}$.
Here $\beta \, \in [-1,1]$, so that $q$ can range along the entire line
segment between $r_{+}$ and $r_{-}$.
As discussed in the Linear Channels section of this paper,
the condition on $q$ is that
$\mathcal{D}(\, r_{+} \, \| \, q \, ) \;= \; \mathcal{D}(\, r_{-} \, \| \, q \, )$.
Now recall that
$$
\mathcal{D}( \, r\, \|\,q\,)\;=\;
\frac{1}{2} \, \log_2 ( 1\;-\;r^2 ) \;+\;
\frac{r}{2} \, \log_2 \left ( \frac{1\;+\;r}{1\;-\;r} \right )
\;-\;
\frac{1}{2}
\; \log_2 ( 1 \;-\; q^2\; )
\;-\; \frac{r \; \cos ( \theta ) }
{\; 2 \; } \; \log_2 \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
where $\theta$ is the angle between the Bloch vectors $\vec{r}$ and $\vec{q}$. To determine $\theta$, we use
the law of cosines. If $\theta $ is the angle between sides a and b of
a triangle with sides a, b and c, then we have :
$$
\cos(\theta) \;=\; \frac{ a^2 \;+\; b^2 \; -\; c^2}{2 \, a \, b }
$$
Our condition
$\mathcal{D}(\, r_{+} \, \| \, q \, ) \;= \; \mathcal{D}(\, r_{-} \, \| \, q \, )$ becomes :
$$
\frac{1}{2} \, \log ( 1\;-\;r_{+}^2 ) \;+\;
\frac{r_{+}}{2} \, \log \left ( \frac{1\;+\;r_{+}}{1\;-\;r_{+}} \right )
\;-\; \frac{r_{+} \; \cos ( \theta_{+} ) }
{\; 2 \; } \; \, \log \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
$$
\;=\;
\frac{1}{2} \, \log ( 1\;-\;r_{-}^2 ) \;+\;
\frac{r_{-}}{2} \, \log \left ( \frac{1\;+\;r_{-}}{1\;-\;r_{-}} \right )
\;-\; \frac{r_{-} \; \cos ( \theta_{-} ) }
{\; 2 \; } \; \log \left ( \frac{ 1 \;+\; q}{1 \;-\; q} \right)
$$
where we canceled the term $\frac{1}{2} \, \log ( 1 \,-\, q^2 )$, which appears identically on both
sides, and converted all logs from base 2 to natural logs by multiplying
both sides by $\log(2)$.
Determining $\theta_{+}$ and $\theta_{-}$, we find :
$$
\cos(\theta_{+} ) \;=\; \frac{ r_{+}^2 \;+\; q^2 \; -\; ( \, ( \, 1 \, -\,
\beta \, ) \, \lambda_z \, ) ^2}{2 \, q\, r_{+} \, }
$$
$$
\cos(\theta_{-} ) \;=\; \frac{ r_{-}^2 \;+\; q^2 \; -\; ( \, ( \, 1 \, +\,
\beta \, ) \, \lambda_z \, ) ^2}{2 \, q \, r_{-} \, }
$$
Next, recall the identity
$$
\tanh^{(-1)} [ \; x \; ] \;=\;
\frac{1}{2} \, \log \left ( \frac{1\;+\;x}{1\;-\;x} \right )
$$
Using this identity for arctanh, our relative entropy equality relation
between the two endpoints of the linear channel becomes :
$$
\frac{1}{2} \log ( 1 \,-\, A ) \, +\,
\sqrt{A} \, \tanh^{(-1)} \left ( \, \sqrt{A} \, \right )
\,-\, \frac { \sqrt{A} \; ( \, A \, + \, B \, - \,
(( 1 \, - \, \beta)\, \lambda_z)^2\, )}{2 \, \sqrt{A \, B } } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
$$
\;=\;
\frac{1}{2} \, \log ( 1 \,-\, C ) \, +\,
\sqrt{C} \, \tanh^{(-1)} \left ( \, \sqrt{C} \, \right )
\,-\, \frac { \sqrt{C} \, ( \, C \,+ \, B \, - \, ((
1\, + \, \beta)\, \lambda_z)^2\, ) }{2\sqrt{B \, C } } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
We can cancel the common term $\left[ \, B \,-\, (1+\beta^{2})\,\lambda_z^{2} \, \right] \tanh^{(-1)}\left ( \sqrt{B} \right ) / \left( 2\,\sqrt{B} \right)$, which appears on both sides, to obtain
$$
\frac{1}{2} \, \log ( 1 \,-\, A ) \, +\,
\sqrt{A} \, \tanh^{(-1)} \left ( \, \sqrt{A} \, \right )
\,-\, \frac { ( \, A \, + \,
\, 2 \, \beta\, \lambda_z^2\, )}{2 \, \sqrt {B} } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
$$
\;=\;
\frac{1}{2} \, \log ( 1 \,-\, C ) \, +\,
\sqrt{C} \, \tanh^{(-1)} \left ( \, \sqrt{C} \, \right )
\,-\, \frac { ( \, C \, - \, 2 \, \beta\, \lambda_z^2\, ) }{2 \, \sqrt{B} } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
which in turn becomes :
$$
\frac{1}{2} \, \log ( 1 \,-\, A ) \, +\,
\sqrt{A} \, \tanh^{(-1)} \left ( \, \sqrt{A} \, \right )
\,-\, \frac { ( \, A \, - \, C \, + \,
\, 4 \, \beta\, \lambda_z^2\, )}{2 \, \sqrt {B} } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
$$
\;=\;
\frac{1}{2} \, \log ( 1 \,-\, C ) \, +\,
\sqrt{C} \, \tanh^{(-1)} \left ( \, \sqrt{C} \, \right )
$$
Using our definitions above for A and C, we find that
$A \,-\, C \;=\; 4 \, \lambda_z \, t_z$. Substituting this into
the relation immediately above yields :
$$
\frac{1}{2} \, \log ( 1 \,-\, A ) \, +\,
\sqrt{A} \, \tanh^{(-1)} \left ( \, \sqrt{A} \, \right )
\,-\, \frac { ( \, 4 \, \lambda_z \, t_z \, + \,
\, 4 \, \beta\, \lambda_z^2\, )}{2 \, \sqrt {B} } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
$$
\;=\;
\frac{1}{2} \log ( 1 \,-\, C ) \, +\,
\sqrt{C} \, \tanh^{(-1)} \left ( \, \sqrt{C} \, \right )
$$
which we rearrange into our final answer :
$$
\frac { 4 \, \lambda_z \,( \, t_z \, + \,
\, \beta\, \lambda_z\, )}{\sqrt {B} } \;
\tanh^{(-1)}\left ( \sqrt{B} \right )
$$
$$
=\;
\log ( 1 \,-\, A )
\;-\;
\log ( 1 \,-\, C )
\; +\;
2 \, \sqrt{A} \, \tanh^{(-1)} \left ( \, \sqrt{A} \, \right )
\; -\;
2 \, \sqrt{C} \, \tanh^{(-1)} \left ( \, \sqrt{C} \, \right )
$$
Note that B is a function of $\beta$, so the entire dependence on
$\beta$ lies to the left of the equality sign in the expression above.
All terms on the right hand side are functions of the $\{t_k\}$
and $\lambda_z$, so the right hand side is a constant while we vary
$\beta$.
Since all the functions of $\beta$ on the left hand side are smooth
functions, the search for the optimum $\beta$ (and hence the optimum $q$),
although transcendental, is well behaved and fairly easy.
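
To illustrate how tame the root search is, here is a short Python sketch (an added
illustration, not the original MAPLE code; the channel parameters $t_k$ and $\lambda_z$
are arbitrary test values chosen so that $A$, $B$ and $C$ stay below one) which scans
for a sign change of the difference of the two sides and refines it with Brent's method.
\begin{verbatim}
# Root-finding sketch for the transcendental equation above (illustration only;
# the t_k and lambda_z values are arbitrary, chosen so that A, B, C < 1).
import numpy as np
from scipy.optimize import brentq

tx, ty, tz, lz = 0.1, 0.0, 0.2, 0.5

A = tx**2 + ty**2 + (tz + lz)**2
C = tx**2 + ty**2 + (tz - lz)**2
B = lambda beta: tx**2 + ty**2 + (tz + beta*lz)**2

rhs = (np.log(1 - A) - np.log(1 - C)
       + 2*np.sqrt(A)*np.arctanh(np.sqrt(A))
       - 2*np.sqrt(C)*np.arctanh(np.sqrt(C)))

def f(beta):
    b = B(beta)
    return 4*lz*(tz + beta*lz)/np.sqrt(b)*np.arctanh(np.sqrt(b)) - rhs

# scan [-1, 1] for a bracketing sign change, then refine with Brent's method
grid = np.linspace(-0.999, 0.999, 2001)
vals = [f(b) for b in grid]
for b0, b1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if v0*v1 < 0:
        beta_opt = brentq(f, b0, b1)
        print("beta =", beta_opt, " q =", np.sqrt(B(beta_opt)))
        break
\end{verbatim}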
\section{Appendix C - Donald's Equality}
We prove Donald's Equality below \cite{Donald}. Let $\rho_i$ be a set
of density matrices with a priori probabilities $\alpha_i$, so that
$\alpha_i \;\geq \;0 \; \forall \; i$ and $\sum_i \; \alpha_i \;=\; 1$.
Let $\phi$ be any density matrix, and define
$\sigma\;=\; \sum_i\; \alpha_i \; \rho_i$. Then :
$$
\sum_i \; \alpha_i \; D(\, \rho_i \, \| \, \phi \, ) \;=\;
D(\,\sigma\,\|\,\phi\,) \;+\; \sum_i \; \alpha_i \; D(\, \rho_i \, \| \, \sigma\,)
$$
{\em Proof :}
$$
\sum_i \; \alpha_i \; D(\, \rho_i \, \| \, \phi \, ) \;=\;
\sum_i \; \alpha_i \; \left \{ \; Tr [ \, \rho_i \, \log( \, \rho_i \,)\, ] \;-\;
Tr [ \, \rho_i \, \log( \, \phi \,) \, ] \; \right \}
$$
$$
\;=\; \sum_i \; \alpha_i \; \left \{ \; Tr [ \, \rho_i \, \log( \, \rho_i \,)\, ]
\; \right \} \;-\; Tr [ \, \sigma \, \log( \, \phi \,) \, ]
$$
$$
\; =\; \left \{ \; Tr [ \, \sigma \, \log( \, \sigma \,) \, ] \;-\;
\; Tr [ \, \sigma \, \log( \, \sigma \, ) \, ] \; \right \} \;
-\; Tr [ \, \sigma \, \log( \, \phi \,) \, ] \; +\;
\sum_i \; \alpha_i \; Tr [ \, \rho_i \, \log( \, \rho_i \,)\, ] \;
$$
$$
=\;
D(\, \sigma \, \| \, \phi \,) \;
\;-\; Tr [ \, \sigma \, \log( \, \sigma \, ) \, ] \;
\;+\;
\sum_i \; \alpha_i \; Tr [ \, \rho_i \, \log( \, \rho_i \,)\, ] \;
$$
$$
\;=\;
D(\, \sigma \, \| \, \phi \,) \;
\;+\;
\sum_i \; \alpha_i \; \left \{ \;
Tr [ \, \rho_i \, \log( \, \rho_i \,)\, ] \;
\;-\; Tr [ \, \rho_i \, \log( \, \sigma \, ) \, ] \; \right \}
$$
$$
\; =\;
D(\, \sigma \, \| \, \phi \,) \;
\;+\;
\sum_i \; \alpha_i \; D( \, \rho_i \, \| \, \sigma \, ) \;
$$
$\bigtriangleup$ - {\em End of Proof}.
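
The identity is also easy to confirm numerically; the following Python sketch (an added
illustration, using randomly generated qubit density matrices and weights) checks it to
machine precision.
\begin{verbatim}
# Numerical spot check of Donald's equality (illustration only), using
# randomly generated full-rank qubit density matrices and random weights.
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(dim=2):
    g = rng.normal(size=(dim, dim)) + 1j*rng.normal(size=(dim, dim))
    m = g @ g.conj().T
    return m/np.trace(m).real

def rel_ent(a, b):
    # D(a||b) = Tr[a log a] - Tr[a log b]; natural log (the base only rescales)
    def logm(m):
        w, u = np.linalg.eigh(m)
        return u @ np.diag(np.log(w)) @ u.conj().T
    return np.trace(a @ (logm(a) - logm(b))).real

rhos = [random_density_matrix() for _ in range(3)]
alphas = rng.random(3); alphas /= alphas.sum()
phi = random_density_matrix()
sigma = sum(a*r for a, r in zip(alphas, rhos))

lhs = sum(a*rel_ent(r, phi) for a, r in zip(alphas, rhos))
rhs = rel_ent(sigma, phi) + sum(a*rel_ent(r, sigma) for a, r in zip(alphas, rhos))
print(lhs, rhs)   # the two values agree to machine precision
\end{verbatim}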
\section{Appendix D - Quantum Channel Descriptions}
The Kraus quantum channel representation is given by the set of
Kraus matrices $\mathcal{A} \;=\; \{\;A_i\;\}$
which represent the channel dynamics via the relation :
$$
\mathcal{E}(\rho) \;=\; \sum_i \; A_i \; \rho \; A_{i}^{\dagger}
$$
The normalization requirement for the Kraus matrices is :
$$\sum_i \; A_{i}^{\dagger} \; A_i \;=\; I $$
A channel is unital if it maps the identity to the identity.
This requirement becomes, upon setting $\rho$ = I :
$$\sum_i \; A_i \; \rho \; A_{i}^{\dagger} \;=\;
\sum_i \; A_i \; A_{i}^{\dagger} \;=\; I $$
Each set of Kraus operators, $\{\;A_i\;\}$, can be mapped
to a set of King-Ruskai-Szarek-Werner ellipsoid channel
parameters $\{\;t_k\,,\,\lambda_k\;\}$, where $k=1,2,3$.
{\bf The Two Pauli Channel Kraus Representation}
$$ A_1 \;=\; \bmatrix{ \sqrt{x} \quad 0 \cr 0 \quad \sqrt{x} } \qquad
A_2 \;=\; \sqrt { \; \frac{1\,-\,x}{2} \; } \; \sigma_x \;=\;
\bmatrix{ \; 0 \qquad \qquad \sqrt{ \frac{1 \, - \, x}{2}} \cr
\sqrt{ \frac{1 \, - \, x}{2}} \qquad \qquad 0 \; } $$
$$ A_3 \;=\; - \;i\; \sqrt { \; \frac{1\,-\,x}{2} \; } \quad \sigma_y \;=\;
\bmatrix{ 0 \qquad \qquad -\; \sqrt{ \frac{1 \, - \, x}{2}} \cr
\sqrt{ \frac{1 \, - \, x}{2}} \qquad \qquad 0 } $$
In words, the channel leaves the qubit transiting the channel
alone with probability $x$, and does a $ \sigma_x$
on the qubit with probability $\frac{1 \, - \, x}{2}$ or
does a $ \sigma_y$
on the qubit with probability $\frac{1 \, - \, x}{2}$.
The Two Pauli channel is a unital channel.
The corresponding King-Ruskai-Szarek-Werner ellipsoid channel
parameters are
$t_x \;=\; t_y \;=\; t_z \;=\;0$, and
$\lambda_x \;=\; \lambda_y \;=\; x$, while
$\lambda_z \;=\; 2\,x\,-\,1$. \cite{Ruskai99a}
Here $x \; \in \; [\,0\,,\,1\,]$.
{\bf The Depolarization Channel Kraus Representation}
$$ A_1 \;=\; \bmatrix{ \sqrt{x} \quad 0 \cr 0 \quad \sqrt{x} } \qquad
A_2 \;=\; \sqrt { \; \frac{1\,-\,x}{3} \; } \quad \sigma_x \;=\;
\bmatrix{ 0 \qquad \qquad \sqrt{ \frac{1 \, - \, x}{3}} \cr
\sqrt{ \frac{1 \, - \, x}{3}} \qquad \qquad 0 } $$
$$ A_3 \;=\; - \;i\; \sqrt { \; \frac{1\,-\,x}{3} \; } \quad \sigma_y \;=\;
\bmatrix{ 0 \qquad \qquad -\; \sqrt{ \frac{1 \, - \, x}{3}} \cr
\sqrt{ \frac{1 \, - \, x}{3}} \qquad \qquad 0 } $$
$$ A_4 \;=\; \sqrt { \; \frac{1\,-\,x}{3} \; } \quad \sigma_z \;=\;
\bmatrix{ \sqrt{ \frac{1 \, - \, x}{3}} \qquad \qquad 0 \cr
0 \qquad \qquad - \; \sqrt{\frac{1 \, - \, x}{3}} } $$
In words, the channel leaves the qubit transiting the channel
alone with probability $x$, and does a $ \sigma_x$
on the qubit with probability $\frac{1 \, - \, x}{3}$,
does a $ \sigma_y$
on the qubit with probability $\frac{1 \, - \, x}{3}$,
or does a $ \sigma_z$
on the qubit with probability $\frac{1 \, - \, x}{3}$.
The Depolarization channel is a unital channel.
The corresponding King-Ruskai-Szarek-Werner ellipsoid
channel parameters are
$t_x \;=\; t_y \;=\; t_z \;=\; 0$, and
$\lambda_x \;=\; \lambda_y \;=\; \lambda_z \;=\; \frac{4\,x \;-\;1}{3}$.
\cite{Ruskai99a}
Again $x \; \in \; [\,0\,,\,1\,]$.
{\bf The Amplitude Damping Channel Kraus Representation}
$$ A_1 \;=\; \bmatrix{ \sqrt{x} \quad 0 \cr 0 \quad 1 } \qquad
A_2 \;=\;
\bmatrix{ 0 \qquad \qquad 0 \cr
\sqrt{ 1 \, - \, x} \qquad \qquad 0 } $$
In this scenario, the channel leaves untouched a spin down qubit.
For a spin up qubit, with probability $x$ it leaves the qubit alone, while
with probability 1 - $x$ the channel flips the spin from up to down.
Thus, when $x$ = 0, every qubit emerging from the channel is in the spin
down state. The Amplitude Damping channel is {\em not} a unital channel.
The corresponding King-Ruskai-Szarek-Werner ellipsoid channel
parameters are $t_x \;=\; 0$,
$t_y \;=\; 0$,
$t_z \;=\;1\,-\, x$,
$\lambda_x \;=\; \sqrt{x}$,
$\lambda_y \;=\; \sqrt{x}$, and
$\lambda_z \;=\; x$. \cite{Ruskai99a}
Again $x \; \in \; [\,0\,,\,1\,]$.
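
The Kraus and King-Ruskai-Szarek-Werner descriptions above can be cross-checked
numerically. The Python sketch below (an added illustration; the value of $x$ and the
input Bloch vector are arbitrary) applies the Two Pauli channel in Kraus form, verifies
trace preservation, and compares the output Bloch vector with the one predicted by the
quoted parameters.
\begin{verbatim}
# Cross-check (illustration only) of the Two Pauli channel: Kraus action versus
# the quoted King-Ruskai-Szarek-Werner parameters t = 0, (x, x, 2x-1).
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def two_pauli_kraus(x):
    return [np.sqrt(x)*I2,
            np.sqrt((1 - x)/2)*sx,
            -1j*np.sqrt((1 - x)/2)*sy]

def apply_channel(kraus, rho):
    return sum(a @ rho @ a.conj().T for a in kraus)

def bloch(rho):
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

x = 0.7
r_in = np.array([0.2, -0.3, 0.5])                   # arbitrary input Bloch vector
rho_in = 0.5*(I2 + r_in[0]*sx + r_in[1]*sy + r_in[2]*sz)

kraus = two_pauli_kraus(x)
print(sum(a.conj().T @ a for a in kraus).round(12)) # trace preservation: identity
print(bloch(apply_channel(kraus, rho_in)))          # output Bloch vector
print([x*r_in[0], x*r_in[1], (2*x - 1)*r_in[2]])    # prediction from the parameters
\end{verbatim}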
\section{Appendix E - Numerical Analysis Of Optimal Signal Ensembles Using MAPLE and MATLAB}
The iterative, relative entropy based
algorithm outlined above was implemented
in MAPLE, and provided the plots and numbers cited in this paper.
In addition, numerical answers were verified using a brute force
algorithm based on MATLAB's Optimization Toolbox. The MATLAB optimization
criterion was the channel output Holevo $\chi$ quantity. Input qubit
ensembles of two, three and four states were used. After channel
evolution, the output ensemble Holevo $\chi$ was calculated. With this
function specified as the quantity to be maximized, the MATLAB Toolbox varied the
parameters of the ensemble input pure qubit states and the states'
corresponding a priori probabilities. Pure state qubits
were represented as :
$$
| \, \psi \, \rangle \;=\; \bmatrix{ \; \alpha \; \cr
\; \sqrt{1 \,-\,\alpha^2} \; e^{i\, \theta} \; }
$$
thereby requiring two parameters, $\{ \, \alpha \, , \, \theta \, \}$,
for each input qubit state.
Thus a two state input qubit ensemble required an optimization over a space
of dimension five, when the a priori probabilities are included. Three and
four state ensembles required optimization over spaces of dimension
eight and eleven respectively.
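
For concreteness, the following Python sketch (an added illustration, not the original
MAPLE/MATLAB code; the channel parameter and the ensemble values are arbitrary) evaluates
the quantity that was maximized, the Holevo $\chi$ of the channel output ensemble, for a
two state input ensemble parametrized as above.
\begin{verbatim}
# Minimal sketch (illustration only) of the maximized objective: the Holevo chi
# of the channel output ensemble, for pure input states parametrized as above.
import numpy as np

def pure_state(alpha, theta):
    # |psi> = (alpha, sqrt(1 - alpha^2) e^{i theta}) as a density matrix
    v = np.array([alpha, np.sqrt(1 - alpha**2)*np.exp(1j*theta)])
    return np.outer(v, v.conj())

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w*np.log2(w)))

def apply_channel(kraus, rho):
    return sum(a @ rho @ a.conj().T for a in kraus)

def holevo_chi(kraus, states, probs):
    outs = [apply_channel(kraus, s) for s in states]
    avg = sum(p*o for p, o in zip(probs, outs))
    return von_neumann_entropy(avg) - sum(p*von_neumann_entropy(o)
                                          for p, o in zip(probs, outs))

# amplitude damping channel with parameter x, Kraus form as in Appendix D
x = 0.6
kraus = [np.array([[np.sqrt(x), 0], [0, 1]], dtype=complex),
         np.array([[0, 0], [np.sqrt(1 - x), 0]], dtype=complex)]

states = [pure_state(0.9, 0.0), pure_state(0.3, 1.2)]   # arbitrary two-state ensemble
probs = [0.5, 0.5]
print(holevo_chi(kraus, states, probs))
\end{verbatim}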
\end{appendix}
\end{document}
\begin{document}
\title{Decaying positive global solutions of second order difference equations with mean curvature operator
\\[-2mm]}
\author{Zuzana Do\v sl\'a\footnote{Corresponding author. Email: [email protected]}, Serena Matucci and Pavel \v Reh\'ak
}
\date{}
\maketitle \vspace*{-1.1cm}
\begin{center}
\begin{minipage}{10cm}
\begin{center}
\noindent
\textit{\small
Department of Mathematics and Statistics \\
Masaryk University, Brno, Czech Republic\\
\texttt{\small [email protected]}\\
Department of Mathematics and Informatics\\
University of Florence, Florence, Italy\\
\texttt{\small [email protected]} \\
Inst. of Math., FME, Brno University of Technology\\
Technick\'a 2,
CZ--61669 Brno,
Czech Republic
\texttt{\small [email protected]}
}
\end{center}
\end{minipage}
\end{center}
\begin{center}
to appear in \emph{Electronic Journal of Qualitative Theory of Differential Equations}
\end{center}
\begin{abstract}
A boundary value problem on an unbounded domain,
associated to difference equations with the Euclidean mean curvature
operator, is considered. The existence of solutions which are positive on the whole domain and decaying at infinity is examined by proving new Sturm comparison theorems for linear difference equations and using a fixed point approach based on a linearization device.
The process of discretization of the boundary value problem on the unbounded domain is examined, and some discrepancies between the discrete and the continuous case are pointed out, too.
\end{abstract}
\begin{center}
\textit{Dedicated to the 75th birthday of Professor Jeff Webb}
\end{center}
\noindent\textbf{MSC 2010:} 39A22, 39A05, 39A12.
\noindent\textbf{Keywords:} second order nonlinear difference equations, Euclidean
mean curvature operator, boundary value problems, decaying solutions, recessive solutions, comparison theorems.
\noindent
\section{Introduction}
In this paper we study the boundary value problem (BVP) on the half-line for the difference equation with the Euclidean mean curvature operator
\begin{equation}
\Delta\left(a_k\frac{\Delta x_k}{\sqrt{1+(\Delta x_k)^2}}\right)
+b_kF(x_{k+1})=0,
\label{E}
\end{equation}
subject to the conditions
\begin{equation}
x_m=c,\quad x_k>0,\quad \Delta x_k\le 0,
\quad \displaystyle \lim_{k\to\infty}x_k=0,
\label{BVP}
\end{equation}
where $m \in \Z^{+}=\N \cup \{0\}$, $k\in\Z_m:=\{k \in \Z: \, k \geq m\}$ and $c\in(0,\infty)$.
Throughout the paper the following conditions are assumed:
\begin{itemize}
\item[(H$_{1}$)] The sequence $a$ satisfies $a_k>0$ for $k\in\Z_m$ and
\begin{equation*}
\label{sum-a}
\sum_{j=m}^{\infty}\frac{1}{a_j}<\infty.
\end{equation*}
\item[(H$_{2}$)] The sequence $b$ satisfies $b_k\ge 0$ for $k\in\Z_m$ and
\begin{equation*}
\label{sum-a-b}
\sum_{j=m}^{\infty}b_j\sum_{i=j}^{\infty}\frac{1}{a_i}<\infty.
\end{equation*}
\item[(H$_{3}$)] The function $F$ is continuous on $\R$, $F(u)u>0$ for $ u\ne 0$, and
\begin{equation}
\label{F}
\lim_{u\to 0+}\frac{F(u)}{u}<\infty.
\end{equation}
\end{itemize}
When modeling real life phenomena, boundary value problems for second order differential equations play an important role. The BVP \eqref{E}-\eqref{BVP} originates from the
discretisation process used in the search for radial solutions, which are globally positive and decaying,
of PDEs with the Euclidean mean curvature operator.
By globally positive solutions we mean solutions which are positive on the whole domain $\mathbb{Z}_{m}$. The Euclidean mean curvature operator arises in the study of some fluid mechanics problems, in particular
capillarity-type phenomena for compressible and incompressible fluids.
Recently, discrete BVPs, associated to equation \eqref{E}, have been widely studied, both in bounded and unbounded domains, see, e.g., \cite{abgo} and references therein.
Many of these papers can be seen as a finite dimensional variant of results established in the continuous case. For instance, we refer to \cite{Mawhin07,Mawhin08,MawhinTaylor,Mawhin2013} for BVPs involving mean curvature operators in Euclidean and Minkowski spaces, both in the continuous and in the discrete case. Other results in this direction are in \cite{BC2008,BC2009}, in which the multiplicity of solutions of certain BVPs involving the $p$-Laplacian is examined. Finally, in \cite{JDEA2016,DS} for second order equations with $p$-Laplacian the existence of globally positive decaying Kneser solutions, that is solutions $x$ such that $x_n>0$, $\Delta x_n<0$ for $n\geq 1$ and $\lim_{n\to\infty}x_n=0$, is examined.
Several approaches have been used in literature for treating the above problems. Especially, we refer to variational methods \cite{Mawhin2012}, the critical point theory \cite{BC2009} and fixed point theorems on cones \cite{Webb06,Webb17}.
Here, we extend to second order difference equations with Euclidean mean curvature some results on globally positive decaying Kneser solutions stated in \cite{JDEA2016} for equations with $p$-Laplacian and $b_n<0$.
This paper is motivated also by \cite{Trieste}, in which BVPs for a differential equation with the Euclidean mean curvature operator on the half-line $[1,\infty)$ have been studied subject to the boundary conditions $x(1)=1$ and $\lim_{t\to\infty}x(t)=0$. The study in \cite{Trieste} is accomplished by using a linearization device and some properties of principal solutions of certain disconjugate second-order linear differential equations. Here, we consider the discrete setting of the problem studied in \cite{Trieste}. However, the discrete analogue presented here requires a different technique. This is caused by a different behavior of decaying solutions as well as by peculiarities of the discrete setting which lead to a modified fixed point approach. Jointly with this, we prove new Sturm comparison theorems and new properties of recessive solutions for linear difference equations.
Our existence result is based on a fixed point theorem for operators defined in a Fr\'echet space by a Schauder linearization device. This method originated in \cite{CFM}, was later extended to the discrete case in \cite{MMR2}, and has recently been developed in \cite{Fixed}. This tool does not require the explicit form of the fixed point operator $T$ and simplifies the check of the topological properties of $T$ on the unbounded domain, since these properties become an immediate consequence of \textit{a-priori} bounds for an associated linear equation. These bounds are obtained in an implicit form by means of the concept of recessive solutions for second order linear equations. The main properties and results which are needed in our arguments are presented in Sections \ref{S2} and \ref{S3}. In Section \ref{S4} the solvability of the BVP (\ref{E})-(\ref{BVP}) is given, assuming some implicit conditions on the sequences $a$ and $b$.
Several effective criteria are given, too. These criteria are obtained by considering
suitable linear equations which can be viewed as Sturm majorants of the auxiliary linearized equation.
In Section \ref{S5} we compare our results with those stated in the continuous case in \cite{Trieste}.
Throughout the paper we emphasize some discrepancies which arise between the continuous case and the discrete one.
\section{Discrete versus continuous decay}\label{S2}
Several properties in the discrete setting have no continuous analogue. For instance, for a positive sequence $x$ we always have
\[
\frac{\Delta x_k}{x_k}=\frac{x_{k+1}}{x_k}-1>-1.
\]
In the continuous case, obviously, this does not occur in general, and the decay can be completely different. For example, if $x(t)=e^{-2t}$ then $x'(t)/x(t)=-2$ for all $t$. Further, the ratio $x'/x$ can be also unbounded from below, as the function $x(t)=\mathrm{e}^{-\mathrm{e}^t}$ shows.
Another interesting observation is the following. If two positive continuous functions $x,y$ satisfy the inequality
\[
\frac{x'(t)}{x(t)}\le M\frac{y'(t)}{y(t)}, \ \ t\geq t_0,
\]
then there exists $K>0$ such that $x(t)\le Ky^M(t)$ for $t\geq t_0$.
This is not true in the discrete case, as the following example illustrates.
\begin{example}
Consider the sequences $x,y$ given by
\[
x_k=\frac{1}{2^{2^k}}, \quad y_k=\frac{1}{2^{2^{k+2}}}.
\]
Then
\[
\frac{x_{k+1}}{x_k}=\frac{1}{2^{2^k}}, \quad \frac{y_{k+1}}{y_k}=\frac{1}{2^{2^{k+2}}},
\]
and
\[
\frac{\Delta x_{k}}{x_k}= \frac{1}{2^{2^k}}-1\leq \frac{1}{2}-1=- \frac{1}{2}\leq \frac{1}{2}\left( \frac{1}{2^{2^{k+2}}}-1\right)= \frac{1}{2}\frac{\Delta y_{k}}{y_k}
\]
On the other hand, the inequality $x_k\le K y_k^{1/2}$ is false for every value of $K>0$. Indeed,
\[
\frac{x_k}{\sqrt{y_k}}= \frac{2^{2^{k+1}}}{2^{2^{k}}}=2^{2^{k}}
\]
which is clearly unbounded.
\end{example}
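
For the reader's convenience, the following short Python sketch (an added illustration,
not part of the argument) checks both claims of the example for the first few indices
in exact rational arithmetic.
\begin{verbatim}
# Check of the example above in exact rational arithmetic (illustration only):
# the ratio inequality holds with M = 1/2, yet x_k / sqrt(y_k) = 2**(2**k).
from fractions import Fraction

for k in range(1, 6):
    x_k  = Fraction(1, 2**(2**k))
    x_k1 = Fraction(1, 2**(2**(k + 1)))
    y_k  = Fraction(1, 2**(2**(k + 2)))
    y_k1 = Fraction(1, 2**(2**(k + 3)))
    lhs = x_k1/x_k - 1                      # Delta x_k / x_k
    rhs = Fraction(1, 2)*(y_k1/y_k - 1)     # (1/2) Delta y_k / y_k
    print(k, lhs <= rhs, 2**(2**k))         # inequality True; x_k/sqrt(y_k) blows up
\end{verbatim}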
The situation in the discrete case is described in the following two lemmas.
\begin{lemma}
\label{L:A}
Let $x,y$ be positive sequences on $\Z_m$ such that $M\in(0,1)$ exists, satisfying
\begin{equation}\label{xy}
\frac{\Delta x_k}{x_k}\le M\frac{\Delta y_k}{y_k}
\end{equation}
for $k\in\Z_m$. Then $1+M\Delta y_k/y_k>0$ for $k\in\Z_m$, and
$$
x_k\le x_m\prod_{j=m}^{k-1}\left(1+M\frac{\Delta y_j}{y_j} \right).
$$
\end{lemma}
\begin{proof}
First of all note that, from $M \in (0,1)$ and the positivity of $y$, we have
\[
1+M\frac{\Delta y_k}{y_k}=1+M\frac{y_{k+1}}{y_k}-M>0, \quad k \in \Z_m.
\]
From \eqref{xy} we get
\[
\frac{x_{k+1}}{x_k}\le 1+M\frac{\Delta y_k}{y_k},
\]
and taking the product from $m$ to $k-1$, $k>m$, we obtain
$$
\frac{x_k}{x_m}=\frac{x_{m+1}}{x_m}\,\frac{x_{m+2}}{x_{m+1}}\cdots\frac{x_k}{x_{k-1}}\le \prod_{j=m}^{k-1}\left(1+M\frac{\Delta y_j}{y_j} \right).
$$
\end{proof}
From the classical theory of infinite products (see for instance \cite{Knopp}) the infinite product $P=\prod_{k=m}^\infty (1+q_k)$
of real numbers is said to \emph{converge} if there is $N\in\Z_m$ such that $1 +q_k \neq 0$ for $k\geq N$ and
\[
P_n=\prod_{k=N}^n (1 +q_k)
\] has a \emph{finite and nonzero} limit as $n \to \infty$.
In case $-1<q_k\leq0$, $\{P_n\}$ is a positive nonincreasing sequence, thus $P$ being \emph{divergent} (not converging to a nonzero number) means that
\begin{equation}\label{limP}
\lim _{n\to\infty} \prod_{k=N}^n (1 +q_k) =0.
\end{equation}
Moreover, the convergence of $P$ is equivalent to the convergence of the series
$\sum_{k=N}^\infty \ln (1+q_k)$ and this is equivalent to the convergence of the series
$\sum_{k=N}^\infty q_k$. Indeed,
if $\sum_{k=m}^{\infty} q_k$ is convergent, then
$\lim_{k\to\infty} q_k=0$ and hence,
\[
\lim_{k\to\infty} \frac{\ln (1+q_k)}{q_k}=1,
\]
i.e., $\ln(1+q_k)\sim q_k$ as $k\to\infty$.
Since summing preserves asymptotic equivalence, we get that $\sum_{k=m}^{\infty}\ln(1+q_k)$ converges. Similarly, we obtain the opposite direction.
Therefore, in case $-1<q_k\leq0$, \eqref{limP} holds
if and only if $\sum_{k=N}^\infty q_k$ diverges to $-\infty$.
The following holds.
\begin{lemma} \label{L:B}
Let $y$ be a positive nonincreasing sequence on $\Z_m$ such that
$\lim_{k\to\infty}y_k=0$. Then, for any $M\in(0,1)$,
$$
\lim_{k\to\infty}\prod_{j=m}^{k}\left(1+M\frac{\Delta y_j}{y_j} \right)=0.
$$
\end{lemma}
\begin{proof}
From the theory of infinite products it is sufficient to show that
\begin{equation}
\sum_{j=m}^{\infty}\frac{\Delta y_j}{y_j}=-\infty.
\label{div}
\end{equation}
We distinguish two cases:\\
1) there exists $N>0$ such that $y_{k+1}/y_k\ge N$ for $k\in\Z_m$;\\
2) $\inf_{k\in\Z_m}y_{k+1}/y_k=0$.
As for the former case, from the Lagrange mean value theorem, we have
$$
-\Delta \ln y_k=-\frac{\Delta y_k}{\xi_k}\le-\frac{\Delta y_k}{y_{k+1}}=
-\frac{\Delta y_k}{y_k}\cdot\frac{y_k}{y_{k+1}}
\le -\frac{\Delta y_k}{Ny_k},
$$
where $\xi_k$ is such that $y_{k+1}\le\xi_k\le y_k$ for $k\in\Z_m$. Summing the above inequality from $m$ to $n-1$, $n>m$, we get
$$
\ln y_m-\ln y_n \le -\frac{1}{N}\sum_{j=m}^{n-1}\frac{\Delta y_j}{y_j}.
$$
Since $\lim_{n\to\infty}y_n=0$, letting $n \to \infty$ we get (\ref{div}).
Next we deal with the case $\inf_{k\in\Z_m}y_{k+1}/y_k=0$. This is equivalent to
\[\liminf_{k\to\infty}\frac{\Delta y_{k}}{y_k}=\liminf_{k\to\infty} \frac{y_{k+1}}{y_k}-1=-1,\]
which implies (\ref{div}),
since
$\sum_{j=m}^{k}\Delta y_j/y_j$ is negative nonincreasing.
\end{proof}
\section{A Sturm-type comparison theorem for linear equations}\label{S3}
The main idea of our approach is based on an application
of a fixed point theorem and on global monotonicity properties of recessive solutions
of linear equations. To this goal, in this section we prove a new Sturm-type comparison theorem for linear difference equations.
Consider the linear equation
\begin{equation} \label{Lmin}
\Delta(r_k\Delta y_k)+p_k y_{k+1}=0,
\end{equation}
where $p_k\ge 0$ and $r_k>0$ on $\Z_m$.
We say that a solution $y$ of equation \eqref{Lmin} has a generalized zero in $n$
if either $y_n= 0$ or $y_{n-1}y_{n}< 0$, see e.g. \cite{Agarwal, abe:dis}.
A (nontrivial) solution $y$ of \eqref{Lmin} is said to be \textit{nonoscillatory}
if $y_ky_{k+1}>0$ for all large $k$. Equation \eqref{Lmin} is said to be \textit{nonoscillatory}
if all its nontrivial solutions are nonoscillatory.
It is well known that, by the Sturm type separation theorem, the nonoscillation of \eqref{Lmin}
is equivalent to the existence of a nonoscillatory
solution, see e.g. \cite[Theorem~1.4.4]{abgo}, \cite{abe:dis}.
If \eqref{Lmin} is nonoscillatory, then there exists a nontrivial solution $u$, uniquely determined up to a constant factor,
such that
$$
\lim_{k\to\infty}\frac{u_k}{y_k}=0,
$$
where $y$ denotes an arbitrary nontrivial solution of \eqref{Lmin}, linearly
independent of $u$. The solution $u$ is called a \textit{recessive solution} and $y$ a \textit{dominant solution}, see e.g. \cite{ahlbrandt}.
Recessive solutions can be characterized in the following ways (both these properties are proved in \cite{ahlbrandt}):\\
(i)
A solution $u$ of \eqref{Lmin} is recessive if and only if
\begin{equation*}\label{int_char}
\sum_{j=m}^{\infty}\frac{1}{r_ju_ju_{j+1}}=\infty.
\end{equation*}
(ii)
For a recessive solution $u$ of \eqref{Lmin} and any linearly independent solution $y$ (i.e. dominant solution) of \eqref{Lmin}, one has
\begin{equation} \label{minimal}
\frac{\Delta u_k}{u_k}< \frac{\Delta y_k}{y_k} \qquad\text{ eventually.}
\end{equation}
Along with equation \eqref{Lmin} consider
the equation
\begin{equation} \label{Lmaj}
\Delta(R_k\Delta x_k)+P_k x_{k+1}=0
\end{equation}
where $P_k\ge p_k\ge 0$ and $0<R_k\le r_k$ on $\Z_m$; equation \eqref{Lmaj} is said to be a \textit{Sturm majorant} of \eqref{Lmin}.
From \cite[Lemma~1.7.2]{abgo}, it follows that if \eqref{Lmaj} is nonoscillatory, then \eqref{Lmin} is nonoscillatory as well. In this section we always assume that \eqref{Lmaj} is nonoscillatory.
The following two propositions are slight modifications of results in \cite{dos-reh}. They are preparatory to the main comparison result.
\begin{proposition}[{\cite[Lemma 2]{dos-reh}}] \label{P:0}
Let $x$ be a positive solution of \eqref{Lmaj} on $\Z_m$ and $y$ be a solution
of \eqref{Lmin} such that $y_m>0$ and $r_m\Delta y_m/y_m\ge R_m\Delta x_m/x_m$. Then
$$
y_k>0 \quad\text{ and } \quad \frac{r_k\Delta y_k}{y_k}\ge\frac{R_k\Delta x_k}{x_k}, \ \text{ for } k\in\Z_m.
$$
Moreover, if $y,\bar y$ are solutions of \eqref{Lmin} such that $y_k>0$, $k\in\Z_{m}$, and $\bar y_m>0$,
$\Delta \bar y_m/\bar y_m>\Delta y_m/y_m$, then
$$
\bar y_k>0 \quad\text{ and } \quad \frac{\Delta\bar y_k}{\bar y_k}>\frac{\Delta y_k}{y_k}, \ \text{ for } k\in\Z_m.
$$
\end{proposition}
\begin{proposition}[{\cite[Theorem 3]{dos-reh}}] \label{Prop1}
If a recessive solution $v$ of \eqref{Lmin} has a generalized zero in $N\in \Z_m$
and has no generalized zero in $(N,\infty)$, then any solution of \eqref{Lmaj} has a generalized zero in
$(N-1,\infty)$.
\end{proposition}
The following lemma is an improved version of \cite[Theorem 1]{dos-reh}.
\begin{lemma} \label{L:0.5}
Let $u, v$ be recessive solutions of \eqref{Lmin} and \eqref{Lmaj}, respectively, satisfying $u_k>0, v_k>0$ for $k\in\Z_m$.
Then
\begin{equation}\label{hallo_Zuzana}
\frac{r_k\Delta u_k}{u_k}\le \frac{R_k\Delta v_k}{v_k} \quad \text{ for } k\in\Z_m.
\end{equation}
\end{lemma}
\begin{proof}
By contradiction, assume that there exists $N\in\Z_m$ such that $r_N\Delta u_N/u_N> R_N\Delta v_N/v_N$. Let $y$ be a solution of
\eqref{Lmin} satisfying $y_N >0$ and $r_N\Delta y_N/y_N=R_N\Delta v_N/v_N$. Then $r_N\Delta y_N/y_N<r_N\Delta u_N/u_N$ (which implies that $y$ is linearly independent of $u$), and from Proposition~\ref{P:0} we get
$y_k>0$, $\Delta y_k/y_k<\Delta u_k/u_k$ for $k\in\Z_N$, which contradicts \eqref{minimal}.
\end{proof}
\begin{lemma}
\label{L:1}
Let $x$ be a positive solution of \eqref{Lmaj} on $\Z_m$. Then there exists a recessive solution
$u$ of \eqref{Lmin}, which is positive on $\Z_m$.
\end{lemma}
\begin{proof}
Let $u$ be a recessive solution of \eqref{Lmin}, whose existence is guaranteed by nonoscillation of majorant equation \eqref{Lmaj}. By contradiction, assume that there exists $N\in\Z_m$ such that
\[
u_N \neq 0, \quad u_N u_{N+1}\le 0.
\]
Then $u$ cannot have a generalized zero in $(N+1,\infty)$. Indeed, if $u$ has a generalized zero in $M\in\Z_{N+2}$, then
by the Sturm comparison theorem on a finite interval (see e.g., \cite[Theorem~1.4.3]{abgo}, \cite[Theorem~1.2]{abe:dis}), every solution of \eqref{Lmaj} has a generalized zero in $(N,M]$, which is a contradiction with the positivity of $x$.
Applying now Proposition~\ref{Prop1}, we get that any solution of \eqref{Lmaj} has a generalized zero in $(N,\infty)$ which again contradicts the positivity of $x$ on $\Z_m$.
\end{proof}
The next theorem is, in fact, the main statement of this section and it plays an important role in the proof of Theorem~\ref{Tmain}.
\begin{theorem} \label{Trec}
Let $x$ be a positive solution of \eqref{Lmaj} on $\Z_m$.
Then there is a recessive solution $u$ of \eqref{Lmin}, which is positive on $\Z_m$ and satisfies
\begin{equation} \label{ruuRxx}
\frac{r_k\Delta u_k}{u_k}\le\frac{R_k\Delta x_k}{x_k},\ \ k\in\Z_m.
\end{equation}
In addition, if $x$ is decreasing (nonincreasing) on $\Z_m$, then $u$ is decreasing (nonincreasing) on $\Z_m$.
\end{theorem}
\begin{proof}
Let $x$ be a positive solution of \eqref{Lmaj} on $\Z_m$. From Lemma~\ref{L:1}, there exist a recessive solution $u$ of \eqref{Lmin} and a recessive solution $v$ of \eqref{Lmaj}, which are both positive on $\Z_m$.
We claim that
\begin{equation} \label{hallo_Serena}
\frac{\Delta v_k}{v_k}\le\frac{\Delta x_k}{x_k} \quad \text{for } k \in\Z_m.
\end{equation}
Indeed, suppose by contradiction that there is $N\in\Z_m$ such that $\Delta x_N/x_N<\Delta v_N/v_N$. Then, in view of Proposition~\ref{P:0},
$\Delta x_k/x_k<\Delta v_k/v_k$ for $k\in\Z_N$, which contradicts \eqref{minimal}.
Combining \eqref{hallo_Serena} and \eqref{hallo_Zuzana},
we obtain \eqref{ruuRxx}. The last assertion of the statement is an immediate consequence of \eqref{ruuRxx}.
\end{proof}
Taking $p=P$ and $r=R$ in Theorem~\ref{Trec}, we get the following corollary.
\begin{corollary} \label{corol}
If \eqref{Lmaj} has a positive decreasing (nonincreasing) solution on $\Z_m$, then there exists a recessive solution of \eqref{Lmaj} which is positive decreasing (nonincreasing) on $\Z_m$.
\end{corollary}
We close this section by the following characterization of the asymptotic behavior of recessive solutions which will be used later.
\begin{lemma}
\label{L:4}
Let
$$
\sum_{j=m}^\infty \frac{1}{r_j}<\infty\ \ \text{and}\ \
\sum_{j=m}^{\infty}p_j\sum_{i=j+1}^{\infty} \frac{1}{r_i}<\infty.
$$
Then \eqref{Lmin} is nonoscillatory. Moreover, for every $d\neq 0$, \eqref{Lmin} has an eventually positive,
nonincreasing recessive solution $u$, tending to zero and satisfying
$$
\lim_{k\to\infty} \frac{u_k}{\sum_{j=k}^{\infty}r_j^{-1}}=d.
$$
\end{lemma}
\begin{proof} It follows from \cite[Lemma 2.1 and Corollary 3.6]{cdm-ade}. More precisely,
the result \cite[Lemma~2.1]{cdm-ade} guarantees $\lim_{k\to\infty}r_k\Delta u_k=-d<0$. Now, from the discrete L'Hospital rule, we get
$$
\lim_{k\to\infty}\frac{u_k}{\sum_{j=k}^{\infty}r_j^{-1}}=
\lim_{k\to\infty}\frac{\Delta u_k}{-r_k^{-1}}=d.
$$
\end{proof}
\section{Main result: solvability of BVP}\label{S4}
Our main result is the following.
\begin{theorem}\label{Tmain}
Let (H$_{i}$), i=1,2,3, be satisfied and
\begin{equation}\label{Lc}
L_c=\sup_{u\in(0,c]}\frac{F(u)}{u}.
\end{equation}
If the linear difference equation
\begin{equation}
\label{L2}
\Delta\left(\frac{a_k}{\sqrt{1+c^2}}\,\Delta z_k \right)
+L_c b_k z_{k+1}=0,
\end{equation}
has a positive decreasing solution on $\Z_m$, then BVP \eqref{E}-\eqref{BVP} has at least one solution.
\end{theorem}
Effective criteria, ensuring the existence of a positive decreasing solution of \eqref{L2}, are given at the end of this section.
\vskip2mm
From this theorem and its proof we get the following.
\begin{corollary}\label{c:1}
Let (H$_{i}$), i=1,2,3, be satisfied. If \eqref{L2} has a positive decreasing solution on $\Z_m$ for $c=c_0>0$, then \eqref{E}-\eqref{BVP} has at least one solution for every $c\in (0, c_0]$.
\end{corollary}
\vskip4mm
To prove Theorem \ref{Tmain}, we use a fixed point approach,
based on the Schauder-Tychonoff theorem on the Fr\'echet space
\[
\mathbb X=\{u: \Z_m \to \R\}
\]
of all sequences defined on $\Z_m$, endowed with the topology
of pointwise convergence on $\mathbb{Z}_{m}$.
The use of the Fr\'{e}chet space $\mathbb{X}$, instead of a
suitable Banach space, is advantageous especially for the compactness test. Even if this is true also in the
continuous case, in the discrete case the situation is even more simple, since any bounded set in $\mathbb{X}$ is
relatively compact from the discrete
Arzel\`{a}-Ascoli theorem. We recall that a set $\Omega \subset \mathbb{X}$ is bounded if the sequences in $\Omega$
are equibounded on every compact subset of $\Z_{m}$. The compactness test is therefore very
simple just owing to the topology of $\mathbb{X}$, while in discrete Banach spaces it can
require some checks which are not always immediate.
Notice that, if
$\Omega\subset$ $\mathbb{X}$ is bounded, then $\Omega^{\Delta}=\{\Delta
u,\,u\in\Omega\}$ is bounded, too. This is a significant discrepancy between the continuous and the discrete case; such a property can simplify the solvability of discrete boundary value problems associated to equations of order two or higher with respect to the continuous counterpart because \textit{a-priori} bounds for the first difference
\[
\Delta x_n=x_{n+1}-x_n
\]
are a direct consequence of \textit{a-priori} bounds for $x_n$, and similarly for higher order differences.
\vskip2mm
In \cite[Theorem 2.1]{MMR2}, the authors proved an existence result for BVPs associated to functional difference equations in Fr\'echet spaces (see also \cite[Corollary 2.6]{MMR2}, \cite[Theorem 4]{Fixed} and remarks therein). That result is a discrete counterpart of an existence result stated in \cite[Theorem 1.3]{CFM} for the continuous case, and reduces the problem to that of finding good \textit{a-priori} bounds for the unknown of an auxiliary linearized equation.
The function
\begin{equation*}
\Phi(v)=\frac{v}{\sqrt{1+v^{2}}}
\end{equation*}
can be decomposed as
\begin{equation*}\label{JJ}
\Phi(v)=vJ(v),
\end{equation*}
where $J$ is a continuous function on $\R$ such that
\,${\lim_{v\to 0}J(v)=1}$. This suggests the form of an auxiliary linearized equation.
Using the same arguments as in the proof of \cite[Theorem 2.1]{MMR2}, with minor changes, we have the following.
\begin{theorem}
\label{T:FPT} Consider the (functional) BVP
\begin{equation}
\begin{cases}
\Delta(a_{n} \Delta x_{n}J(\Delta x_{n}))=g(n,x), & n\in\mathbb{Z}_{m},\\
x\in S, &
\end{cases}
\label{DF}
\end{equation}
where $J: \R \to \R$ and $g:\mathbb{Z}_{m}\times\mathbb{X}\rightarrow\mathbb{R}$ are continuous
maps, and $S$ is a subset of $\ \mathbb{X}$. \newline Let $G:\,\mathbb{Z}_{m}\times\mathbb{X}^{2}\rightarrow\mathbb{R}$ be a continuous map such that
$G(k,q,q)=g(k,q)$ for all $(k,q)\in\mathbb{Z}_{m}\times\mathbb{X}$. If there
exists a nonempty, closed, convex and bounded set $\Omega\subset\mathbb{X}$ such that: \\
a) \ for any $u\in\Omega$ the problem
\begin{equation}
\begin{cases}
\Delta(a_{n} J(\Delta u_{n}) \Delta y_{n})=G(n,y,u), & n\in\mathbb{Z}_{m},\\
y\in S, &
\end{cases}
\label{DF1}
\end{equation}
has a unique solution $y= T(u)$;\\
b) \ $T(\Omega)\subset \Omega$;\\
c) \ $\overline{T(\Omega)} \subset S$,\\
then \eqref{DF} has at least one solution.
\end{theorem}
\begin{proof} For the reader's convenience, we briefly summarize the main arguments, which are a minor modification of the ones in \cite[Theorem 2.1]{MMR2}.
Let us show that the operator $T:\Omega\to \Omega$ is continuous with relatively compact image. The relative compactness of $T(\Omega)$ follows immediately from b), since $\Omega$ is bounded. To prove the continuity of $T$ in $\Omega$, let $\{u^j\}$ be a sequence in $\Omega$, $u^j \to u^\infty \in \Omega$, and let $v^j=T(u^j)$. Since $T(\Omega)$ is relatively compact, $\{v^j\}$ admits a subsequence (still denoted by $\{v^j\}$) which is convergent to $v^\infty$, with $v^\infty \in S$ from c). Since $J, G$ are continuous on their domains, we obtain
\[
0= \Delta(a_{n} J(\Delta u^j_{n}) \Delta v^j_{n})-G(n,v^j,u^j) \to \Delta(a_{n} J(\Delta u^\infty_{n}) \Delta v^\infty_{n})-G(n,v^\infty,u^\infty)
\]
as $j\to\infty$.
The uniqueness of the solution of \eqref{DF1} implies $v^\infty=T(u^\infty)$, and therefore $T$ is continuous. By the Schauder-Tychonoff fixed point theorem, $T$ has at least one fixed point in $\Omega$, which is clearly a solution of \eqref{DF}.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{Tmain}.]
Let $z$ be the recessive solution of \eqref{L2} such that $z_m=c$, $z_k>0$, $\Delta z_k\le 0$, $k\in\Z_m$; the existence of a recessive solution with these properties follows from Corollary~\ref{corol}.
Further, from Lemma~\ref{L:4}, we have $\lim_{k\to\infty}z_k=0$.
Define the set $\Omega$ by
$$
\Omega=\left\{u\in\X : 0\le u_k\le c\prod_{j=m}^{k-1}\left(1+M\frac{\Delta z_j}{z_j}\right),k\in\Z_m \right\},
$$
where $\X$ is the Fr\'echet space of all real sequences defined on $\Z_m$, endowed with the topology of pointwise convergence on $\Z_m$, and $M=1/\sqrt{1+c^2}\in(0,1)$. Clearly $\Omega$ is a closed, bounded and convex subset of $\X$.
For any $u\in\Omega$, consider the following BVP
\begin{equation}
\label{L1}
\begin{cases}
\Delta\left(\dfrac{a_k}{\sqrt{1+(\Delta u_k)^2}} \, \Delta y_k\right)
+b_k \tilde F(u_{k+1}) y_{k+1}=0, & k \in \Z_m, \displaystyle\\[2mm]
y \in S \displaystyle
\end{cases}
\end{equation}
where
\[
\tilde F(v)=\frac{F(v)}{v} \quad \text{for } v > 0, \quad
\tilde F(0)= \lim_{v \to 0^+} \frac{F(v)}{v}
\]
is continuous on $\R^{+}$, due to assumption \eqref{F}, and
$$
S=\left\{y\in\X: y_m=c,\, y_k>0, \Delta y_k \leq 0 \text{ for }k\in\Z_m, \,\sum_{j=m}^{\infty} \frac{1}{a_j y_j y_{j+1}}=\infty \right\}.
$$
Since $0\le u_k\le c$, for every $u \in \Omega$, we have $-c\le\Delta u_k\le c$, and so $(\Delta u_k)^2\le c^2$. Therefore,
\[
\frac{1}{\sqrt{1+(\Delta u_k)^2}}\ge \frac{1}{\sqrt{1+c^2}}
\]
for every $u \in \Omega$ and $k\in\Z_m$. Further
$\tilde F(u_{k+1})\le L_c$ for $u \in \Omega$, and hence \eqref{L2} is Sturm majorant for the linear equation in \eqref{L1}. Let $\widehat{y}=\widehat{y}(u)$ be the recessive solution of the equation in \eqref{L1} such that $\widehat{y}_m=c$. Then $\widehat{y}$ is positive nonincreasing on $\Z_m$ by Theorem~\ref{Trec}, and, in view of $\widehat{y}_m=c$ and the uniqueness of recessive solutions up to the constant factor, $\widehat y $ is the unique solution of \eqref{L1}. Define the operator $\mathcal{T}:\Omega\to\X$ by $(\mathcal{T} u)_k=\widehat{y}_k$ for $u\in\Omega$.
From Theorem~\ref{Trec}, we get
$$
\frac{a_k\Delta\widehat{y}_k}{\widehat{y}_k}\le
\frac{a_k\Delta\widehat{y}_k}{\widehat{y}_k\sqrt{1+(\Delta u_k)^2}}\le
\frac{a_kM\Delta z_k}{z_k}\le 0,
$$
which implies $\Delta\widehat{y}_k/\widehat{y}_k\le M\Delta z_k/z_k$, $k\in\Z_m$. By Lemma~\ref{L:A},
\[
\widehat y_k\le c\prod_{j=m}^{k-1}\left (1+M \frac{\Delta z_j}{z_j}\right), \quad k\in\Z_m,
\]
which yields $\mathcal{T}(\Omega)\subseteq\Omega$.
Next we show that
$\overline{\mathcal{T}(\Omega})\subseteq S$. Let $\overline{y}\in\overline{\mathcal{T}(\Omega})$. Then there exists
$\{u^j\}\subset\Omega$ such that $\{\mathcal{T}u^j\}$ converges to $\overline{y}$ (in the topology of $\X$). It is not restrictive to assume $\{u^j\}\to \bar u \in \Omega$ since $\Omega$ is compact.
Since $\mathcal{T}u^j=:\widehat{y}^j$ is the (unique) solution of \eqref{L1}, we have
$\widehat{y}_m^j=c$, $\widehat{y}^j_k>0$ and $\Delta\widehat{y}^j_k\le 0$ on $\Z_m$ for every $j\in\N$.
Consequently, $\overline{y}_m=c$, $\overline{y}_k\geq 0$, $\Delta\overline{y}_k\le 0$
for $k\in\Z_m$. Further, since $\tilde F$ is continuous, $\overline{y}$ is a solution of the equation in \eqref{L1} for $u=\overline{u}$. Suppose now that there is
$T\in\Z_m$ such that $\overline{y}_T=0$. Then clearly $\Delta \overline{y}_T=0$ and by the global existence and uniqueness
of the initial value problem associated to any linear equation, we get $\overline{y}\equiv 0$ on $\Z_m$, which contradicts $\overline{y}_m=c>0$. Thus $\overline{y}_k > 0$ for all $k\in\Z_m$.
We have just to prove that $\sum_{j=m}^{\infty}(a_j\overline{y}_j\overline{y}_{j+1})^{-1}=\infty$.
In view of
Lemma~\ref{L:4}, there exists $N>0$ such that $\overline{y}_k\le N\sum_{j=k}^{\infty} a_j^{-1}$ on $\Z_m$.
Noting that
$$
\Delta\left(\frac{1}{\sum_{j=k}^\infty a_j^{-1}} \right)
=\frac{1}{a_k\sum_{j=k}^\infty a_j^{-1}\sum_{j=k+1}^{\infty}a_j^{-1}},
$$
we obtain
\begin{align*}
\sum_{j=m}^{k-1}\frac{1}{a_j\overline{y}_j\overline{y}_{j+1}}\ge
&\sum_{j=m}^{k-1}\frac{1}{N^2a_j\sum_{i=j}^\infty a_i^{-1}\sum_{i=j+1}^{\infty}a_i^{-1}}=\frac{1}{N^2}\sum_{j=m}^{k-1} \Delta \left(\frac{1}{\sum_{i=j}^\infty a_i^{-1}}\right)\\
=&\frac{1}{N^2}\left(\frac{1}{\sum_{j=k}^\infty a_j^{-1}}-\frac{1}{\sum_{j=m}^\infty a_j^{-1}}\right)
\to\infty \text{ as }k\to\infty.
\end{align*}
Thus $\overline{y}\in S$, i.e., $\overline{\mathcal{T}(\Omega})\subseteq S$. By applying Theorem~\ref{T:FPT}, we obtain that the problem
\begin{equation*}
\begin{cases}
\Delta\left(a_k\dfrac{\Delta x_k}{\sqrt{1+(\Delta x_k)^2}}\right)
+b_kF(x_{k+1})=0,\quad k \in \Z_m,\\
x \in S
\end{cases}
\end{equation*}
has at least a solution $\bar x \in \Omega$.
From the definition of the set $\Omega$,
\[
\overline{x}_k\le c\prod_{j=m}^{k-1}\left(1+M\frac{\Delta z_j}{z_j}\right)
\]
and since $M\in(0,1)$ and $\lim_{k\to\infty}z_k=0$, we have
\[
\lim_{k\to\infty}\prod_{j=m}^{k-1}\left(1+M\frac{\Delta z_j}{z_j}\right)=0
\]
by Lemma~\ref{L:B}. Thus $\bar x_k \to 0$ as $k \to \infty$, and $\bar x$ is a solution of the BVP \eqref{E}-\eqref{BVP}.
\end{proof}
\begin{proof}[\bf Proof of Corollary \ref{c:1}.] Assume that \eqref{L2} has a positive decreasing solution for $c=c_0>0$, and let $c_1\in (0, c_0)$. Then equation \eqref{L2} with $c=c_0$ is a Sturm majorant of \eqref{L2} with $c=c_1$, and from Theorem~\ref{Trec}, equation \eqref{L2} with $c=c_1$ has a positive decreasing solution. The application of Theorem \ref{Tmain} leads to the existence of a solution of \eqref{E}-\eqref{BVP} for $c=c_1$.
\end{proof}
Effective criteria for the solvability of BVP (\ref{E})-(\ref{BVP}) can be obtained by considering as a Sturm majorant of \eqref{L2} any linear equation that is known to have a global positive solution.
In the continuous case, a typical approach to obtaining global positivity of solutions
for equation
\begin{equation}\label{ER}
(t^{2}y')' + \gamma y=0,\quad t\geq 1,
\end{equation}
where $0<\gamma\leq 1/4$, is based on the Sturm theory. In virtue of the transformation $x=t^{2}y'$, this equation is equivalent to the Euler equation
\begin{equation}\label{ERorig}
x''+\frac{\gamma}{t^{2}}x=0, \quad t\geq 1,
\end{equation}
whose general solutions are well-known.
In the discrete case, various types of Euler equations are considered in the literature, see, e.g. \cite{HV,Naoto} and references therein.
It is somewhat problematic to find a solution for some
natural forms of discrete Euler equations in the self-adjoint form \eqref{Lmin}.
Here our aim is to deal with solutions of Euler type equations.
\vskip4mm
\begin{lemma}\label{LeEuR}
The equation
\begin{equation}\label{EuR}
\Delta\bigl ((k+1)^{2}\Delta x_k \bigr) + \frac{1}{4} x_{k+1}=0
\end{equation}
has a recessive solution which is positive decreasing on $\N$.
\end{lemma}
\begin{proof}
Consider the sequence
\[
y_k=\prod_{j=1}^{k-1}\frac{2j+1}{2j}, \quad k\ge1,
\]
with the usual convention $\prod_{j=1}^0u_j=1$. One can verify that
$y$ is a positive increasing solution of the equation
\begin{equation} \label{new-eu}
\Delta^2y_k+\frac{1}{2(k+1)(2k+1)}y_{k+1}=0
\end{equation}
on $\N$.
Set $x_k=\Delta y_k$. Then $x$ is a positive decreasing solution of the equation
\begin{equation}\label{Eunew2}
\Delta(2(k+1)(2k+1)\Delta x_k)+x_{k+1}=0
\end{equation}
on $\N$.
Obviously,
\[
2(k+1)(2k+1)\leq 4(k+1)^{2}, \quad k\geq 1,
\]
thus \eqref{Eunew2} is a Sturm majorant of \eqref{EuR}.
By Theorem \ref{Trec}, \eqref{EuR} has a recessive solution which is positive decreasing on $\N$.
\end{proof}
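
The construction used in the proof is easy to test numerically; the short Python sketch
below (an added illustration, not part of the argument) verifies in exact rational
arithmetic that $y$ solves \eqref{new-eu} and that $x=\Delta y$ is a positive decreasing
solution of \eqref{Eunew2}.
\begin{verbatim}
# Exact check (illustration only) of the sequences used in the proof above:
# y solves (new-eu) and x = Delta y is a positive decreasing solution of (Eunew2).
from fractions import Fraction

K = 30
y = {1: Fraction(1)}
for k in range(1, K + 3):
    y[k + 1] = y[k]*Fraction(2*k + 1, 2*k)            # y_{k+1} = y_k (2k+1)/(2k)

x = {k: y[k + 1] - y[k] for k in range(1, K + 3)}     # x_k = Delta y_k

def r(j):                                             # coefficient in (Eunew2)
    return 2*(j + 1)*(2*j + 1)

for k in range(1, K + 1):
    res1 = (y[k + 2] - 2*y[k + 1] + y[k]) + y[k + 1]/(2*(k + 1)*(2*k + 1))
    res2 = (r(k + 1)*(x[k + 2] - x[k + 1]) - r(k)*(x[k + 1] - x[k])) + x[k + 1]
    assert res1 == 0 and res2 == 0

print("identities hold for k = 1..%d;" % K,
      "x positive and decreasing:",
      all(0 < x[k + 1] < x[k] for k in range(1, K + 1)))
\end{verbatim}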
Equation \eqref{EuR} can be understood as the reciprocal equation to the Euler difference equation
\begin{equation} \label{discr_euler}
\Delta^2 u_k+\frac{1}{4(k+1)^2}u_{k+1}=0,
\end{equation}
i.e., these equations are related by the substitution relation $u_k=d\Delta x_k$, $d\in\R$, where $u$ satisfies
\eqref{discr_euler} provided $x$ is a solution of \eqref{EuR}.
The form of \eqref{discr_euler} perfectly fits the discretization of the differential equation \eqref{ERorig} with $\gamma=1/4$, using the usual central difference scheme.
\begin{corollary}
\label{Cor1} Let (H$_{i}$), i=1,2,3, be satisfied and $L_c$ be defined by \eqref{Lc}.
The BVP (\ref{E})-(\ref{BVP}) has at least one solution if there exists
$\lambda>0$ such that for $k\geq 1$
\begin{equation}\label{cc}
a_k\geq 4 \lambda (k+1)^{2},\quad \sqrt{1+c^{2}} \,L_c\, b_k \leq\lambda.
\end{equation}
\end{corollary}
\begin{proof}
Consider the equation \eqref{EuR}.
By Lemma \ref{LeEuR}, it has a positive decreasing solution on $\N$.
The same trivially holds for the equivalent equation
\begin{equation}
\Delta\bigl (4 \lambda(k+1)^{2}\Delta x_k \bigr) + \lambda x_{k+1}=0.
\label{Re}
\end{equation}
Since \eqref{cc} holds, \eqref{Re} is a Sturm majorant of \eqref{L2}, and
by Theorem \ref{Trec}, equation \eqref{L2} has a positive decreasing solution on $\N$. Now the conclusion follows from Theorem \ref{Tmain}.
\end{proof}
\noindent{\textbf{Remark.}} Note that the sequence $b$ does not need to be bounded. For example,
consider as a Sturm majorant of \eqref{L2} the equation
\begin{equation*}
\Delta\bigl(\lambda k 2^{k+1}\Delta x_k \bigr)+ \lambda 2^{k+1} x _{k+1}=0,\text{ \ \ }k\geq 0.
\end{equation*}
One can check that this equation has the solution $x_k=2^{-k}$.
This leads to the conditions
\[
a_k\geq \lambda k 2^{k+1}, \quad \ \sqrt{1+c^{2}} \,L_c\, b_k \leq\lambda 2^{k+1}\quad \text{for } k\geq 0
\]
ensuring the solvability of the BVP (\ref{E})-(\ref{BVP}).
Another criterion can be obtained by considering the equation
\begin{equation*}
\Delta\bigl( \lambda k^{3}\Delta x_k \bigr)+ \lambda \frac{k^{2}+3k+1}{k+2}x _{k+1}=0,\text{ \ \ }k\geq1
\label{Ex4}
\end{equation*}
having the solution $x_k=1/k$. This comparison with \eqref{L2} leads to the conditions
\[
a_k\geq \lambda k^{3},\quad \sqrt{1+c^{2}} \,L_c\, b_k \leq\lambda \frac{k^{2}+3k+1}{k+2}\quad \text{for }k\geq 1\, .
\]
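
Both displayed solutions are straightforward to verify; the short Python sketch below
(an added illustration, with $\lambda=1$ since $\lambda$ scales out of both equations)
checks the two identities in exact rational arithmetic.
\begin{verbatim}
# Exact verification (illustration only) of the two explicit solutions quoted
# in the Remark; lambda scales out of both equations, so we take lambda = 1.
from fractions import Fraction

def residual(r, p, x, k):
    # residual of Delta(r_k Delta x_k) + p_k x_{k+1} at index k
    return (r(k + 1)*(x(k + 2) - x(k + 1)) - r(k)*(x(k + 1) - x(k))) + p(k)*x(k + 1)

# r_k = k 2^{k+1}, p_k = 2^{k+1}, candidate solution x_k = 2^{-k}
ok1 = all(residual(lambda k: k*2**(k + 1),
                   lambda k: Fraction(2**(k + 1)),
                   lambda k: Fraction(1, 2**k), k) == 0 for k in range(1, 40))

# r_k = k^3, p_k = (k^2 + 3k + 1)/(k + 2), candidate solution x_k = 1/k
ok2 = all(residual(lambda k: Fraction(k**3),
                   lambda k: Fraction(k**2 + 3*k + 1, k + 2),
                   lambda k: Fraction(1, k), k) == 0 for k in range(1, 40))

print(ok1, ok2)   # both True
\end{verbatim}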
The following example illustrates our result.
\begin{example}\label{Ex2} Consider the BVP
\begin{equation}
\begin{cases}
\Delta\bigl( (k+1)^{2}\Phi(\Delta x_k )\bigr)+ \dfrac{|\sin k|}{4\sqrt{2}\,k}\text{
}x^{3}_{k+1}=0,\text{ \ \ }k\geq1,\\[2mm]
x_1=c,\quad x_k>0,\quad \Delta x_k\le 0,
\quad \displaystyle \lim_{k\to\infty}x_k=0.
\label{Ex1}
\end{cases}
\end{equation}
We have $L_c=c^2$, $a_k=(k+1)^2$, and
$b_k= \dfrac{|\sin k|}{4\sqrt{2}\,k}$.
Conditions in \eqref{cc} are fulfilled for any $c\in(0,1]$ when taking $\lambda=1/4$.
Indeed,
$$
a_k=(k+1)^2
=4\lambda(k+1)^2$$
and
$$
\sqrt{1+c^2}L_cb_k=\sqrt{1+c^2}c^2 b_k\le\sqrt{2}b_k
\le\frac{1}{4}|\sin k|\le \frac{1}{4}=\lambda.
$$
Corollary \ref{Cor1} now guarantees solvability of the BVP \eqref{Ex1} for any $c\in (0,1]$.
\end{example}
\section{Comments and open problems}\label{S5}
It is interesting to compare our discrete BVP with the continuous one investigated in \cite{Trieste}.
Here the BVP for the differential equation with the Euclidean mean curvature operator
\begin{equation}
\begin{cases}
\left( a(t)\dfrac{x^{\prime}}{\sqrt{1+x^{\prime}{}^{2}}}\right) ^{\prime
}+b(t)F(x)=0,\qquad t\in\lbrack1,\infty),\\
x(1)=1,\,x(t)>0,\,x^{\prime}(t)\leq0\text{ for }t\geq1,\,\displaystyle\lim
_{t\rightarrow\infty}x(t)=0,
\end{cases}
\tag{P}
\label{EC}
\end{equation}
has been considered. Sometimes solutions of differential equations satisfying the condition
\begin{equation*}
x(t)>0, \quad x'(t)\leq 0\, ,\quad t\in[1,\infty),
\end{equation*}
are called \textit{Kneser solutions}, and the problem of finding such a solution is called a \textit{Kneser problem}.
The problem (\ref{EC}) has been studied under the following conditions:
\begin{itemize}
\item[(C$_{1}$)] The function $a$ is continuous on $[1,\infty)$, $a(t)>0$ in
$[1,\infty)$, and
\begin{equation*}
\int_{1}^{\infty}\frac{1}{a(t)}\,dt<\infty. \label{a}
\end{equation*}
\item[(C$_{2}$)] The function $b$ is continuous on $[1,\infty)$, $b(t)\geq0$
and
\begin{equation*}
\int_{1}^{\infty}b(t)\,\int_{t}^{\infty}\frac{1}{a(s)}dsdt<\infty. \label{B}
\end{equation*}
\item[(C$_{3}$)] The function $F$ is continuous on $\mathbb{R}$, $F(u)u>0$ for
$u\neq0$, and such that
\begin{equation}
\limsup_{u\rightarrow0^{+}}\frac{F(u)}{u}<\infty.\label{FF}
\end{equation}
\end{itemize}
The main result for the solvability of \eqref{EC} is the following. Note that the principal solution of a linear differential equation is defined similarly to the recessive solution, see e.g. \cite{Trieste,H}.
\begin{theorem}
\label{Th second}{\rm\cite[Theorem 3.1]{Trieste}} Let (C$_{i}$), i=1,2,3, be verified and
\begin{equation*}
L=\underset{u\in(0,1]}{\sup}\frac{F(u)}{u}.\label{Fbar}
\end{equation*}
Assume
\begin{equation*}
\alpha=\inf_{t\geq1}\text{ }a(t)A(t)>1, \label{New}
\end{equation*}
where
\begin{equation*}
A(t)=\int_{t}^{\infty}\frac{1}{a(s)}\,ds. \label{A}
\end{equation*}
If the principal solution $z_{0}$ of the linear equation
\begin{equation*}
\left( a(t)z^{\prime}\right) ^{\prime}+\frac{\alpha}{\sqrt{\alpha^{2}-1}
}L \,b(t)z=0,\quad t\geq1, \label{EM1}
\end{equation*}
is positive and nonincreasing on $[1,\infty)$, then the BVP (\ref{EC}) has at
least one solution.
\end{theorem}
It is worth noting that the method used in \cite{Trieste} does not allow $\alpha=1$, and thus Theorem \ref{Th second} is not immediately applicable when $a(t)=t^{2}$.
Several effective criteria for the solvability of the BVP \eqref{EC}, similar to Corollary \ref{Cor1}, are given in \cite{Trieste}. An example which can be viewed
as a discrete counterpart is the above Example \ref{Ex2}.
\noindent\textbf{Open problems.}
\vskip2mm
\noindent{\textbf{(1)}}
The comparison between Theorem \ref{Tmain} for the discrete BVP and
Theorem \ref{Th second} for the continuous one, suggests to investigate the BVP \eqref{E}-\eqref{BVP} on times scales.
\vskip2mm
\noindent{\textbf{(2)}} In \cite{Trieste}, the solvability of the continuous BVP has been proved under the weaker assumption \eqref{FF} posed on $F$. This is due to the fact that the set $\Omega$ is defined using a precise lower bound which is different from zero.
It is an open problem whether a similar estimate from below can be used in the discrete case and whether assumption \eqref{F} can be replaced by \eqref{FF}.
\vskip2mm
\noindent{\textbf{(3)}} Similar BVPs concerning the existence of Kneser solutions
for difference equations with the \textit{p}-Laplacian operator are considered
in \cite{JDEA2016} when $b_{k}<0$ for $k\in \Z^{+}$. It would be
interesting to extend the solvability of the BVP \eqref{E}-\eqref{BVP} to
the case in which the sequence $b$ is negative and in the more general
situation when the sequence $b$ is of indefinite sign.
\section*{Acknowledgements}
The research of the first and third author has been supported by the grant GA20-11846S of the Czech Science Foundation. The second author is partially supported by Gnampa, National Institute for Advanced Mathematics (INdAM).
\end{document}
\begin{document}
\numberwithin{equation}{section}
\parindent=0pt
\hfuzz=2pt
\frenchspacing
\title[Supercritical Mean field equations]{Supercritical Mean Field Equations on convex domains and the Onsager's
statistical description of two-dimensional turbulence}
\author[D. Bartolucci \& F. De Marchis]{Daniele Bartolucci$^{(1,\ddag)}$ \& Francesca De Marchis$^{(2,\ddag)}$}
\thanks{2010 \textit{Mathematics Subject classification:} 35A02, 35B40, 35B45, 35J65, 35J91, 35Q35, 35Q82, 82B99}
\thanks{$^{(1)}$Daniele Bartolucci, Department of Mathematics, University
of Rome {\it "Tor Vergata"}, \\ Via della ricerca scientifica n.1, 00133 Roma,
Italy. e-mail:[email protected]}
\thanks{$^{(2)}$Francesca De Marchis, Department of Mathematics, University
of Rome {\it "Tor Vergata"}, \\ Via della ricerca scientifica n.1, 00133 Roma,
Italy. e-mail:[email protected]}
\thanks{$^{(\ddag)}$Research partially supported by FIRB project {\sl
Analysis and Beyond} and by MIUR project {\sl Metodi variazionali e PDE non lineari}}
\begin{abstract}
We are motivated by the study of the Microcanonical Variational Principle within the Onsager's
description of two-dimensional turbulence in the range of energies where the equivalence of statistical ensembles fails.
We obtain sufficient conditions for the existence and multiplicity of solutions for the corresponding Mean Field
Equation on convex and "thin" enough domains in the supercritical (with respect to the Moser-Trudinger inequality) regime.
This is a brand new achievement since existence results in the supercritical region were previously known
\un{only} on multiply connected domains.
Then we study the structure of these solutions by the analysis of their linearized problems
and also obtain a new uniqueness result for solutions of the Mean Field Equation on thin domains whose
energy is uniformly bounded from above. Finally we evaluate the asymptotic expansion of those solutions with respect
to the thinning parameter
and use it together with all the results obtained so far to solve the Microcanonical Variational Principle in a small
range of supercritical energies where the entropy is eventually shown to be concave.
\end{abstract}
\maketitle
{\bf Keywords}: Mean field and Liouville-type equations,
uniqueness and multiplicity for supercritical problems, sub-supersolutions method,
non equivalence of statistical ensembles, Microcanonical Variational Principle.
\tableofcontents
\section{Introduction}
\setcounter{equation}{0}
In a pioneering paper \cite{On} L. Onsager
proposed a statistical theory of two-dimensional turbulence based on the $N$-vortex model
\cite{New}.
We refer to \cite{ESr} for a historical review, and to \cite{MP} and the introduction in \cite{ESp} for a
detailed discussion of this theory and of its range of applicability in real-world models.
More recently those physical arguments were turned into rigorous proofs \cite{clmp1}, \cite{clmp2}, \cite{K}, \cite{KL}.
Together with other well-known physical \cite{bav}, \cite{suzC}, \cite{sy2}, \cite{T}, \cite{tar}, \cite{yang}
and geometrical \cite{cygc}, \cite{KW}, \cite{Troy} applications,
these new results motivated the many efforts devoted to the understanding
of the resulting mean field \cite{clmp1}, \cite{clmp2} Liouville-type \cite{Lio} equations.
We refer the reader to
\cite{B2}, \cite{bl}, \cite{BM2}, \cite{bt}, \cite{bls}, \cite{bm}, \cite{CCL}, \cite{ChK},
\cite{CL1}, \cite{cli1}, \cite{CLin3},
\cite{CLin1}, \cite{CLin2}, \cite{CLin4}, \cite{CSW}, \cite{Kwan1}, \cite{DJLW},
\cite{dj}, \cite{EGP}, \cite{KMdP}, \cite{yy},
\cite{Lin1}, \cite{lin7}, \cite{linwang},
\cite{Mal1}, \cite{Mal2}, \cite{dem}, \cite{dem2}, \cite{NT}, \cite{OS}, \cite{pt}, \cite{st},
\cite{suz}, \cite{suzB}, \cite{T3}, \cite{w}, and more recently
\cite{barjga}, \cite{BLin2}, \cite{BLin3}, \cite{BMal}, \cite{BDeM}, \cite{malru} and the
references quoted therein.\\
In spite of these efforts it seems that some basic questions arising in \cite{clmp2} have
been left unanswered so far. These are our main motivations, and this is why we
begin our discussion with a short review of some of the results obtained in \cite{clmp2},
as completed in \cite{CCL}.
\begin{definition}\label{defsimp}
Let $\Omega\subset\mathbb{R}^2$ be any open, bounded and simply connected domain.
We say that $\Omega$ is simple if $\partial\Omega$ is the support of a simple and rectifiable Jordan curve.\\
Let $\Omega$ be a simple domain. We say that it is regular if (see also \cite{CCL}):\\
(-) its boundary $\partial \Omega$ is the support of a continuous and piecewise $C^2$ curve
$\partial\Omega=\mbox{supp}(\gamma)$ with bounded first derivative $\|\gamma^{'}\|_{\infty}\leq C$ and
at most a finite number of corner-type points $\{p_1,\ldots,p_m\}$, that is, the inner angle $\theta_j$
formed by the corresponding limiting tangents is well defined and satisfies
$\theta_j\in(0,2\pi)\setminus\{\pi\}$ for any $j=1,\ldots,m$;\\
(-) for each $p_j$ there exists a conformal bijection from an open neighborhood $U$ of $p_j$ which
maps $U\cap \partial\Omega$ onto a curve of class $C^2$.\\
In particular any regular domain is by definition simply connected.
\end{definition}
We will use these definitions throughout the rest of this paper without further comment. Of course
polygons of any kind are regular according to our definition. The notations
$|\Omega|$ or $A(\Omega)$ will be used to denote the area of a simple domain $\Omega$,
while $L(\partial\Omega)$ will denote the length of the boundary of $\Omega$.
\begin{remark}\label{solregdef}
We will discuss at length solutions of a Liouville-type semilinear equation
with Dirichlet boundary conditions, see $P(\lambda,\Omega)$ in section \ref{ss1.1} below.
In this respect, and if $\Omega$ is regular,
a solution $u$ will be by definition an $H^{1}_0(\Omega)$ weak solution \cite{GT} of the problem at hand,
$H^1_0(\Omega)$ being the closure of $C^{1}_c(\Omega)$ in the norm $\|u\|_2+\|\, |\nabla u|\,\|_2$.
In those cases where $\Omega$ is just assumed to be simple, a solution will be by definition a classical solution
$u\in C^{2}(\Omega)\cap C^{0}(\overline{\Omega})$.\\
It turns out that, by using the well known Brezis-Merle results \cite{bm} together with Lemma 2.1 in \cite{CCL},
any $H^{1}_0(\Omega)$ weak solution on a regular domain is also a classical
$C^{2}(\Omega)\cap C^{0}(\overline{\Omega})$ solution.
\end{remark}
\bigskip
Let $\Omega\subset \mathbb{R}^2$ be open, bounded and simple. We define
$$
\mathcal{P}=\left\{\omega\in L^{1}(\Omega)\,|\,\omega\geq 0\;\mbox{a.e. in}\;\Omega,\;\int_{\Omega}\omega =1\right\},
$$
and $G_\Omega(x,y)$ to be the unique solution of
\begin{equation}\label{Green}
\left\{\begin{array}{ll}
-\Delta G_\Omega(x,y)= \delta_{x=y} & \mbox{in}\;\; \Omega, \\
\hspace{0.7cm}G_\Omega(x,y)=0 & \mbox{on}\;\; \partial \Omega,
\end{array}\right.
\end{equation}
where $\delta_{x=y}$ is the Dirac distribution with singular point $y\in\Omega$,
$G_\Omega(x,y)=-\frac{1}{2\pi}\log(|x-y|)+H_\Omega(x,y)$ and $H_\Omega$ denotes the regular part.
For any $\omega\in \mathcal{P}$ we also define the entropy and energy of $\omega$ as
$$
\mathcal{S}(\omega)=\int_{\Omega}s(\omega),\qquad
\mathcal{E}(\omega)=\frac{1}{2}\int_{\Omega} \omega G[\omega],
$$
respectively, where
$$
s(t)=\left\{\begin{array}{ll}-t\log{t}, &t>0,\\ 0, &t=0, \end{array}\right.
$$
and
$$
G[\omega](x)=\int_{\Omega} G_\Omega(x,y)\omega(y)\,dy.
$$
For any $E\in\mathbb{R}$ we consider the MVP (Microcanonical Variational Principle)
$$
S(E)=\sup\left\{\mathcal{S}(\omega),\quad \omega\in \mathcal{P}_E \right\},\quad
\mathcal{P}_E=\{\omega\in\mathcal{P}\,|\,\mathcal{E}(\omega)=E\}.\qquad\qquad {\rm (MVP)}
$$
\bigskip
The following results have been obtained in \cite{clmp2} (see Propositions 2.1, 2.2, 2.3 in \cite{clmp2}):\\
MVP-(i) For any $E>0$, $S(E)<+\infty$ and there exists $\omega\in\mathcal{P}_E$ such that
$S(E)=\mathcal{S}(\omega)$;\\
MVP-(ii) Let $\Upsilon=\frac{1}{|\Omega|}$ be the uniform density on $\Omega$ and
$E_\Upsilon=\mathcal{E}(\Upsilon)$. Then $\Upsilon$ is
a maximizer of $\mathcal{S}$ on $\mathcal{P}_{E_\Upsilon}$ and in particular if $|\Omega|=1$, then $S(E_\Upsilon)=0$; \\
MVP-(iii) If $|\Omega|=1$ then $S(E)$ is strictly increasing and negative for $E<E_\Upsilon$ and strictly decreasing and
negative for $E>E_\Upsilon$;\\
MVP-(iv) Let $\omega^{\scriptscriptstyle(E)}$ be a solution of the MVP at energy $E$. Then there exists $\beta=\beta_E\in\mathbb{R}$ such that
$$
\omega^{\scriptscriptstyle(E)}=\frac{e^{-\beta G[{\textstyle \omega}^{\scriptscriptstyle(E)}]}}{\int_{\Omega} e^{-\beta G[{\textstyle \omega}^{\scriptscriptstyle(E)}]}},
$$
or, equivalently, the function $\psi=G[\omega^{\scriptscriptstyle(E)}]$ satisfies the Mean Field Equation (MFE)
$$
\left\{\begin{array}{ll}
-\Delta \psi =\displaystyle\frac{e^{-\beta\psi}}{\int_{\Omega} e^{-\beta\psi}} & \mbox{in}\quad \Omega\\
\psi =0 & \mbox{on}\quad \partial\Omega
\end{array}\right.\qquad\qquad (\mbox{MFE})
$$
(a formal derivation is sketched right after this list);\\
MVP-(v) $S(E)$ is continuous.
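For the reader's convenience, here is a purely formal sketch of the Lagrange-multiplier computation behind MVP-(iv), where $\beta$ and $\mu$ are the multipliers associated with the energy and mass constraints respectively; the rigorous argument is the one given in \cite{clmp2}. Since the symmetry of $G_\Omega$ gives $\frac{d}{dt}\big|_{t=0}\mathcal{E}(\omega+t\varphi)=\int_{\Omega}\varphi\, G[\omega]$, stationarity of $\mathcal{S}-\beta\mathcal{E}-\mu\int_{\Omega}\omega$ reads
$$
0=\frac{d}{dt}\Big|_{t=0}\Big[\mathcal{S}(\omega+t\varphi)-\beta\,\mathcal{E}(\omega+t\varphi)-\mu\int_{\Omega}(\omega+t\varphi)\Big]
=\int_{\Omega}\varphi\,\big(-\log\omega-1-\beta\,G[\omega]-\mu\big)
$$
for every admissible variation $\varphi$, whence $\omega=C\,e^{-\beta G[\omega]}$ with $C=\big(\int_{\Omega} e^{-\beta G[\omega]}\big)^{-1}$ fixed by $\int_{\Omega}\omega=1$; applying $-\Delta$ to $\psi=G[\omega]$ then yields the (MFE). Note also that $\mathcal{S}(\Upsilon)=\int_{\Omega}\frac{1}{|\Omega|}\log|\Omega|=\log|\Omega|$, which is consistent with MVP-(ii) when $|\Omega|=1$.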
\bigskip
\bigskip
We find it appropriate at this point to continue our discussion by introducing some concepts as
in \cite{clmp2}, but with the aid of slightly different mathematical arguments based on some
results in \cite{bm}, \cite{yy}, \cite{ls} and in particular in \cite{CCL}, which were not at hand at that time.\\
Since solutions of the (MFE) with fixed $\beta>-8\pi$ are unique not only if $\Omega$ is simple and smooth \cite{suz}
but also if $\Omega$ is regular (see \cite{CCL}), and by using the Brezis-Merle \cite{bm}
theory of Liouville-type equations (as later improved in \cite{ls} and then in \cite{yy}) and the boundary
estimates in \cite{CCL}, we can divide the set of regular domains (see Definition \ref{defsimp}) into two classes,
first introduced in \cite{clmp2}:
\begin{definition}\label{kind}
Let $\Omega$ be regular. We say that $\Omega$ is of {\bf first kind} if
the unique (at fixed $\beta>-8\pi$ \cite{suz}, \cite{CCL}) solution $\psi_\beta$ of the {\rm (MFE)} satisfies
\begin{equation}\label{blow-up}
\omega_{(\beta)}:=\displaystyle\frac{e^{-\beta\psi_\beta}}{\int_{\Omega} e^{-\beta\psi_\beta}}\rightharpoonup\delta_{x=p},\quad\mbox{as}\quad\beta\searrow(-8\pi)^{+},
\end{equation}
weakly in the sense of measures, for some $p\in\Omega$.\\
We say that $\Omega$ is of {\bf second kind} otherwise.
\end{definition}
We will skip the discussion of the case $\beta>0$ since its
mathematical-physical description is well understood \cite{clmp2}.\\
Let $\mathcal{E}(\omega_{(\beta)})$ be the energy of the unique solution of the (MFE) with $\beta\in(-8\pi,0]$.
By using known arguments based on the results in \cite{bm}, \cite{ls} and \cite{CCL}, \cite{yy}, it can be shown that
either $\psi_\beta$ is uniformly bounded for $\beta\in(-8\pi,0]$ or it must satisfy \eqref{blow-up}, in which case
in particular $\mathcal{E}(\omega_{(\beta)})\rightarrow+\infty$ as $\beta\searrow (-8\pi)^+$. Here Lemma 2.1 in \cite{CCL},
which ensures that solutions are
uniformly bounded in a neighborhood of $\partial\Omega$ whenever $\Omega$ is regular, is crucial.
\begin{remark}
As a consequence of
an argument which we introduce in Lemma \ref{lem:231112} below, we could extend this alternative
(either $\psi_\beta$ is bounded or the energy $\mathcal{E}(\omega_{(\beta)})\rightarrow+\infty$ as $\beta\searrow (-8\pi)^+$)
to the case where $\Omega$ is just simple, the only difference
in this case being that one would have to allow (in principle) $p\in\partial\Omega$ in \eqref{blow-up}.
However we do not know of any result claiming uniqueness of solutions
of the {\rm (MFE)} with $\beta\in(-8\pi,0)$ under such weak regularity assumptions on $\Omega$.
\end{remark}
As in \cite{clmp2} we need the following:
\begin{definition}\label{ecrit}
We set $E_c=\mathcal{E}(\omega_{(\beta)})\left.\right|_{\beta=(-8\pi)^{+}}$ if $\Omega$ is of second kind and
$E_c=+\infty$ if $\Omega$ is of first kind.
\end{definition}
It has been shown in \cite{clmp2} that $E_\Upsilon<E_c$ and that to each $E_\Upsilon<E<E_c$ there corresponds a unique $\omega^{\scriptscriptstyle(E)}$ which attains the
supremum in the MVP and in particular a unique $\beta=\beta(E)\in (-8\pi,0)$ such that the corresponding unique
solution $\psi_\beta$ of the (MFE) satisfies $\omega_{(\beta(E))}\equiv\omega^{\scriptscriptstyle(E)}$ and attains the supremum in the associated
CVP (Canonical Variational Principle)
$$
f(\beta)=f_\Omega(\beta)=\sup\{\mathcal{F}_\beta(\omega),\;\;\omega \in \mathcal{P}\,|\, -\mathcal{S}(\omega)<+\infty\},\qquad\qquad{\rm (CVP)}
$$
where, for $\omega \in \mathcal{P}$,
$$
\mathcal{F}_\beta(\omega)=-\frac{1}{\beta}\,\mathcal{S}(\omega)+\mathcal{E}(\omega)
$$
is the free energy of $\omega$. In particular it has been proved in \cite{clmp2} that
$\mathcal{E}(\omega_{(\beta)})$ is continuous and decreasing in $(-8\pi,0)$ and that $S(E)$ is smooth and concave in $(E_\Upsilon,E_c)$.
Concerning these remarkable results
we refer to Theorem 3.1 and Proposition 3.3 in \cite{clmp2}.\\
In particular, for domains of first kind the (mean field) thermodynamics of the system is rigorously defined for
any attainable value of the
energy and is equivalently described by solutions of either the MVP or the CVP. Actually, this problem is closely
related to another very subtle issue, that is, the fact
that solutions of the (MFE) always exist for $\beta\in (-8\pi,0]$ (a consequence of the Moser-Trudinger inequality
\cite{moser}) while in general they do not exist for $\beta\leq -8\pi$, the value $\beta=-8\pi$ being the critical threshold
where the coercivity of the corresponding variational functional (that is, \eqref{var-func} below) breaks down.
A detailed discussion
of this point is beyond our scope and we
limit ourselves here to the few details needed in the presentation of our results, see also section \ref{ss1.1} below.\\
Some sufficient conditions for the existence of solutions of the (MFE) at $\beta=-8\pi$ were provided in \cite{clmp1}
and hence used to show that, for example,
any long and thin enough rectangle is of second kind. The problem was later solved in \cite{CCL} by
using a refined version of the subtle estimates in \cite{CLin1}, \cite{CLin2} and the newly derived uniqueness
of solutions of the (MFE) with $\beta\in(-8\pi,0]$ and, whenever they exist, for $\beta=-8\pi$ as well, on regular domains.
In particular, it has been proved in Proposition 6.1 in \cite{CCL} that if $\Omega$ is regular, then
the following facts are equivalent:\\
SK-(i) $\Omega$ is of second kind;\\
SK-(ii) There is a solution of the (MFE) with $\beta=-8\pi$, say $\psi_{-8\pi}$;\\
SK-(iii) The unique branch of solutions of the (MFE) $\psi_\beta$ with $\beta\in(-8\pi,0]$ is uniformly bounded and
converges uniformly to $\psi_{-8\pi}$ as $\beta\searrow (-8\pi)^+$.\\
We conclude in particular that if the branch of (unique) maximizers satisfies \eqref{blow-up},
then there is no solution of the (MFE)
with $\beta=-8\pi$, and in particular that a solution of the (MFE) with $\beta=-8\pi$ exists (and is unique) if and only
if {\it blow up for the {\rm (MFE)} at $\beta=-8\pi$ occurs from the left}, that is, \eqref{blow-up} occurs
but with $\beta\rightarrow(-8\pi)^{-}$. The fact that
(irrespective of the ``side'' from which $\beta$ approaches $-8\pi$)
there is a branch of solutions which satisfies a concentration property as in \eqref{blow-up} was already proved in
\cite{clmp2}, see NEQ-(ii) below.\\
The full theory as exposed in \cite{CCL}, as well as the equivalence of statistical ensembles,
has recently been extended to cover the case where $\Omega$ is multiply connected in \cite{BLin3}. As far as
one is concerned with the analytical problem of existence for $\beta=-8\pi$ and uniqueness for $\beta\in[-8\pi,0)$, the
results in \cite{CCL} have been generalized in \cite{bl}, \cite{BLin2} to the case where
Dirac-type singular data are added to the (MFE).\\
\bigskip
The mean field thermodynamics for domains of second kind when $E\geq E_c$ is more involved.\\
Since it is not difficult to show that $\mathcal{F}_\beta$ is unbounded from above for $\beta<-8\pi$,
there is no solution of the CVP with $\beta< -8\pi$, and therefore no equivalence (at all) between the MVP
and the CVP is at hand in this case. Nevertheless, some insight about the range of energies $E\geq E_c$
was also obtained in \cite{clmp2}.
Let $\Omega$ be a domain of second kind. Then we have (see Propositions 6.1, 6.2 and Theorem 6.1 in \cite{clmp2}):\\
NEQ-(i) It holds
$$
-8\pi E+C_1 \leq S(E)\leq -8\pi E + C_2,\quad\forall\, E\geq E_c,
$$
where $C_2=S(E_c)+8\pi E_c=8\pi f(-8\pi)$;\\
NEQ-(ii) Let $\omega^{\scriptscriptstyle (E)}$ be a solution of the MVP at energy $E$. Then (up to subsequences)
$\omega^{\scriptscriptstyle (E)}\rightharpoonup\delta_{x=p}$, as $E\rightarrow+\infty$, where $p$ is a maximum point of $H_\Omega(x,x)$;\\
NEQ-(iii) $S(E)$ is not concave for $E>E_c$.
\bigskip
Besides these facts, we do not know of any positive result about this problem for domains of second kind
when $E\geq E_c$.\\
It is one of our motivations to begin here a systematic study of the statistical mechanics
description of the case $E\geq E_c$. In this paper we work out the following program:\\
(-) Prove the existence of solutions of the (MFE) for suitable $\beta<-8\pi$ by assuming the domain
to be ``thin'' enough, see \S \ref{ss1.1} and \S \ref{ss1.4}.\\
(-) Prove that the first eigenvalue of the linearized problem for the (MFE) at those solutions
is strictly positive. This fact will imply that our solutions are local maximizers of ${\mathcal F}_\beta$,
as well as a multiplicity result yielding another set of unstable solutions, see \S \ref{ss1.2}.\\
(-) Prove that if the domain is ``thin'' enough, then there exists at most one solution of the (MFE) with
$\beta$ bounded from below and whose
energy is less than a certain threshold. This fact will imply that we
have found a connected and smooth branch of solutions along which the energy is well defined as a
function of $\lambda:=-\beta$, see Remark \ref{newrmsmooth} and \S \ref{ss1.3}.\\
(-) Prove that if the domain is ``thin'' enough, then, in a small enough range of energies, the energy is
monotonically increasing as a function of $\lambda=-\beta$. This fact will eventually imply that there exists one and only
one solution of the (MFE) at fixed energy (in that small range), which therefore is also the unique
maximizer of the entropy for the MVP. In particular we will prove that the entropy is concave in this range,
see \S \ref{ss1.4}.\\
\bigskip
This is the underlying idea which will guide us in the analysis of various problems of independent mathematical
interest, as discussed in the rest of this introduction. We take this occasion to provide all the motivations and/or
necessary comments about the statements
of the many results obtained (with the sole exception of Proposition \ref{pr2} below),
which is why the introduction is so lengthy.
\subsection{Existence of solutions for the supercritical (MFE) on thin domains}\label{ss1.1}$\left.\right.$\\
Amongst other things which will be discussed below, one of the main reasons which makes things
more difficult in the case $E\geq E_c$ is the lack of a description of the solutions set for the
(MFE) with $\beta<-8\pi$. Since this will be a major point in our discussion, we introduce the quantities
$$
\lambda:=-\beta,\qquad\mbox{and}\qquad u=-\beta\psi=\lambda\psi,
$$
and consider the following alternative but equivalent formulation of the (MFE)
$$
\left\{\begin{array}{ll}
-\Delta u =\lambda \displaystyle\frac{e^u}{\int_{\Omega} e^u} & \mbox{in}\quad \Omega\\
u =0 & \mbox{on}\quad \partial\Omega
\end{array}\right.\qquad P(\lambda,\Omega)
$$
which we will denote by $P(\lambda,\Omega)$. The following remark will be used throughout the rest of this paper.
\begin{remark}\label{rem3.1}
Clearly $P(\lambda,\Omega)$ is invariant under rotations and translations. Moreover
the integral in the denominator of the nonlinear term in $P(\lambda,\Omega)$ makes the problem \underline{dilation invariant} too,
that is, $u$ is a solution of $P(\lambda,\Omega)$ if and only if $v(y)=u(y_0+d_0R_0y)$ is a solution of
$P(\lambda,\Omega^{(0)})$, where $y_0\in\mathbb{R}^2$, $d_0>0$, $R_0$ is an orthogonal $2\times2$ matrix and
$$
\Omega^{(0)}:=\{y\in\mathbb{R}^2\,|\,y_0+d_0R_0y\in\Omega\}.
$$
In particular, $u$ solves $P(\lambda,\Omega_\rho)$ with $\rho=\frac{a}{b}$,
where
\begin{equation}\label{1}
\Omega_\rho=\{(x,y)\in \mathbb{R}^2\,|\,\rho^2x^2+y^2\leq 1\},\quad \rho\in (0,1],
\end{equation}
is the canonical two-dimensional ellipse whose semi-axis lengths are $\frac{1}{\rho}$ and $1$,
if and only if $u_0(x^{'},y^{'})$ with $\{b x^{'}=\, x,\;b y^{'}= y\}$ solves $P(\lambda,\mathbb{E}_{a,b})$, where
$$
\mathbb{E}_{a,b}=\{(x^{'},y^{'})\in \mathbb{R}^2\,|\,a^2 {x^{'}}^2+b^2{y^{'}}^2\leq 1\},\quad a\in (0,1],
\,b\in (0,1],\,b\geq a,
$$
is the canonical two-dimensional ellipse whose semi-axis lengths are $\frac{1}{a}$ and $\frac{1}{b}$.
\end{remark}
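For the reader's convenience, the dilation invariance can be checked in one line. If $v(y)=u(y_0+d_0R_0y)$ then $-\Delta v(y)=d_0^{2}(-\Delta u)(y_0+d_0R_0y)$, while the change of variables $x=y_0+d_0R_0y$ gives $\int_{\Omega}e^{u(x)}\,dx=d_0^{2}\int_{\Omega^{(0)}}e^{v(y)}\,dy$; hence
$$
-\Delta v(y)=d_0^{2}\,\lambda\,\frac{e^{v(y)}}{\int_{\Omega}e^{u}}=\lambda\,\frac{e^{v(y)}}{\int_{\Omega^{(0)}}e^{v}}\quad\mbox{in }\Omega^{(0)},\qquad v=0\;\mbox{on }\partial\Omega^{(0)},
$$
so that $v$ solves $P(\lambda,\Omega^{(0)})$: the factor $d_0^{2}$ cancels exactly because of the normalization in the denominator.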
As mentioned above, we still lack a description of the solutions
set of $P(\lambda,\Omega)$ with $\lambda> 8\pi$ and $\Omega$ regular. General existence results for $P(\lambda,\Omega)$ are at hand
for $\lambda\in \mathbb{R}\setminus 8\pi\mathbb{N}$ only if $\Omega$ is a multiply connected domain,
see \cite{DJLW}, \cite{st} and the deep results in \cite{CLin2} (see also \cite{BDeM}).\\
This is far from being a technical problem. Indeed, a well known result based on the Pohozaev identity (see for example
\cite{clmp1}) shows that if $\Omega$ is strictly starshaped, then there exists
$\lambda_*=\lambda_*(\Omega)\geq 8\pi$ (see also Remark \ref{extremal} below) such that $P(\lambda,\Omega)$ has no solutions for
$\lambda\geq \lambda_*(\Omega)$.
This result is sharp since indeed
$\lambda_*(B_R(0))=8\pi$, where $B_R(0)=\{x\in\mathbb{R}^2\,:\,|x|<R\}$ for some $R>0$.\\
Therefore, in particular, the Leray-Schauder degree
of the resolvent operator for $P(\lambda,\Omega)$ with $\Omega$ regular vanishes identically for any
$\lambda>8\pi$, see \cite{CLin2}.\\
If this were not enough, we also observe that, at least in case $\Omega$ is convex,
the well known results in \cite{BaPa}, \cite{CLin1}, \cite{EGP}, \cite{KMdP} concerning concentrating solutions for
$P(\lambda,\Omega)$ as $\lambda\rightarrow 8\pi k$, for some fixed $k\in \mathbb{N}$, are of no help here,
since it has been shown in \cite{GrT} that such blow-up solution sequences
do not exist if $k\geq 2$.\\
Finally, let us remark that we are concerned here only with solutions of $P(\lambda,\Omega)$. If
we allow some weight to multiply the exponential nonlinearity, then other solutions exist for $\lambda>8\pi$ on simply
connected domains, see
for example \cite{B1-1}, \cite{B2}, \cite{BM2} and more recently the general results derived in \cite{BMal}.\\
As a matter of fact, the only general result we are left with is the immediate corollary of the uniqueness
results in \cite{CCL}, which shows that:\\
SK-(iv) if $\Omega$ is of second kind, then the branch of unique solutions
$u_{\scriptscriptstyle \lambda}$, $\lambda\in[0,8\pi]$, of $P(\lambda,\Omega)$ can be extended (via the implicit function theorem)
to a small right neighborhood of $8\pi$.\\
Our first result is concerned with a sufficient condition for the existence of solutions of $P(\lambda,\Omega)$ with
$\lambda>8\pi$ on ``thin'' domains.
\begin{theorem}\label{t1-intro}$\left.\right.$\\
{\rm (a)} Let $\Omega$ be a simple domain.
For any $c\in(0,1]$ there exist $\overline{\rho}_*>\underline{\rho}_*(c)>0$ such that
if $\{\rho^2x^2+y^2\leq \beta_-^2\}\subset\Omega\subset\{\rho^2x^2+y^2\leq\beta_+^2\}$
with $c=\tfrac{\beta_-^2}{\beta_+^2}$ then, for
any $\rho\in(0,\underline{\rho}_*(c)]$ and for any $\lambda\leq\lambda_{\rho,c}$, there exists a solution $u^{\scriptscriptstyle(\lambda)}$ of
$P(\lambda,\Omega)$, where $\underline{\lambda}_{\rho,c}< \lambda_{\rho,c}<\overline{\lambda}_\rho$ and $\underline{\lambda}_{\rho,c}$, $\overline{\lambda}_\rho$ are strictly decreasing (as functions of $\rho$) in $(0,\underline{\rho}_*(c)]$, $(0,\overline{\rho}_*]$ respectively, with
$\underline{\lambda}_{\underline{\rho}_*(c),c}=8\pi=\overline{\lambda}_{\overline{\rho}_*}$ and $\underline{\lambda}_{\rho,c}\simeq
\frac{4\pi c}{(8-c)\rho}$, $\overline{\lambda}_{\rho}\simeq
\frac{11\pi}{16\rho}$ as $\rho\rightarrow 0^+$.\\
{\rm (b)}
There exists $\bar N> 4\pi$ such that if $\Omega$ is an open, bounded and convex set (therefore simple)
whose isoperimetric ratio, $N\equiv N(\Omega)=\frac{L^2(\partial\Omega)}{A(\Omega)}$, satisfies $N\geq\bar N$, then
for any $\lambda\leq\lambda_{\textnormal{\tiny $N$}}$ there exists a
solution $u^{\scriptscriptstyle(\lambda)}$ of $P(\lambda,\Omega)$, where
$\underline{\Lambda}_{\textnormal{\tiny $N$}}<\lambda_{\textnormal{\tiny $N$}}<
\overline{\Lambda}_{\textnormal{\tiny $N$}}$ with $\underline{\Lambda}_{\textnormal{\tiny $\bar N$}}=
8\pi$, $\underline{\Lambda}_{\textnormal{\tiny $N$}}$ and $\overline{\Lambda}_{\textnormal{\tiny $N$}}$
strictly increasing in $N$ and $\underline{\Lambda}_{\textnormal{\tiny $N$}}\simeq\frac{\pi^2N}{496}+O(1)$,
$\overline{\Lambda}_{\textnormal{\tiny $N$}}\simeq\frac{33\sqrt3\,N}{16\pi}+O(1)$ as $N\rightarrow+\infty$.
\end{theorem}
\begin{remark}\label{susp}
The suspicion that this result should hold was initially due to the above mentioned result in \cite{clmp1}
(which states that if
$\Omega$ is a long and thin enough rectangle, then a solution of $P(8\pi,\Omega)$ exists) and to a result in \cite{CCL}
(which states that there exists a critical value $d_1<1$ such that if $\Omega$ is a rectangle whose side lengths are
$a_1\leq b_1$, then a solution of $P(8\pi,\Omega)$ exists if and only if $\frac{a_1}{b_1}\leq d_1$). In particular this
observation already shows that ${\bar N}>4\pi$.
\end{remark}
\begin{remark}\label{extremal}
Clearly $c=1$ if and only if $\Omega$ is an ellipse,
while if $\Omega$ is a rectangle it is easy to see that $c=\frac12$ is optimal.
We also
have the quantitative estimate $0.0702<\overline{\rho}_*(1)$, which could in principle be used
to obtain an estimate for either $d_1$ (see Remark \ref{susp}) or $\bar N$. We will not insist on this point since it seems that we are too
far from optimality.
In the case of the ellipse $\Omega_\rho$, the lower/upper existence threshold values
$\underline{\lambda}_\rho\simeq\frac{4\pi}{7\rho}$ and $\overline{\lambda}_{\rho}\simeq
\frac{11\pi}{16\rho}$ should
be compared with the Pohozaev upper bound for the existence of solutions of $P(\lambda,\Omega_\rho)$, that is
$$
\lambda <\lambda_*(\Omega_\rho):=4\int\limits_{\partial \Omega_\rho}\frac{ds}{(\underline{x},\underline{\nu}\,)}=\frac{4\pi}{\rho}(1+\rho^2).
$$
\end{remark}
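For the reader's convenience, the last equality can be checked by a direct parametrization, with the convention that $(\underline{x},\underline{\nu})$ denotes the Euclidean scalar product of the position vector with the outer unit normal. Parametrizing $\partial\Omega_\rho$ by $x=\tfrac{\cos t}{\rho}$, $y=\sin t$, $t\in[0,2\pi]$, one finds
$ds=\sqrt{\tfrac{\sin^2 t}{\rho^2}+\cos^2 t}\,dt$ and, on $\partial\Omega_\rho$,
$(\underline{x},\underline{\nu})=\frac{\rho^2x^2+y^2}{\sqrt{\rho^4x^2+y^2}}=\big(\rho^2\cos^2 t+\sin^2 t\big)^{-1/2}$,
whence
$$
4\int\limits_{\partial\Omega_\rho}\frac{ds}{(\underline{x},\underline{\nu}\,)}
=\frac{4}{\rho}\int_0^{2\pi}\big(\rho^2\cos^2 t+\sin^2 t\big)\,dt
=\frac{4\pi}{\rho}(1+\rho^2).
$$
For $\rho=1$ this recovers the value $8\pi$ of the disk.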
\begin{remark}\label{rem:branch}
For regular domains, the branches of solutions obtained above will be seen to be connected and smooth, see Remark
\ref{newrmsmooth} below. We will denote them by $\mathcal{G}_{\rho,c}=\{(\lambda,u^{\scriptscriptstyle(\lambda)})\,:\,\lambda\in[0,\lambda_{\rho,c}]\}$ (as obtained in Theorem
\ref{t1-intro}(a)) and
$\mathcal{G}_{N}=\{(\lambda,u^{\scriptscriptstyle(\lambda)})\,:\,\lambda\in[0,\lambda_{\textnormal{\tiny $N$}}]\}$ (as obtained in Theorem
\ref{t1-intro}(b)) respectively.
\end{remark}
The proof of Theorem \ref{t1-intro} is, surprisingly enough, based on the sub-supersolutions method. In particular
we use the result in \cite{clsw}, which allows for such weak assumptions about the regularity of $\Omega$.
The underlying idea in case $\Omega=\Omega_\rho$ is the following (a sanity check is sketched right after this list):\\
(-) if the ellipse $\Omega=\Omega_\rho$ is ``thin'' enough (i.e. if $\rho$ is small enough),
then the branch of \underline{minimal} solutions of the classical Liouville problem
$$
\left\{\begin{array}{ll}
-\Delta u=\mu \,{\displaystyle e^u} & \mbox{in}\quad \Omega\\
u=0 & \mbox{on}\quad \partial\Omega
\end{array}\right.\qquad Q(\mu,\Omega)
$$
cannot be pointwise too far from the $C^{2}_0(\Omega_\rho)$ function
$$
v_{\rho,\gamma}=2\log{\left(\frac{1+\gamma^2}{1+\gamma^2(\rho^2x^2+y^2)}\right)},\quad (x,y)\in \Omega_\rho,
$$
for a suitable value of $\gamma$ depending on $\mu$ and $\rho$. Of course, the guess about $v_{\rho,\gamma}$ is
inspired by the Liouville formula \cite{Lio}. Therefore, for fixed $\mu$ and $\rho$, we seek values
$\gamma_{\mp}$ such that
$v_{\rho,\gamma_{\mp}}$ are sub- and supersolutions respectively of $Q(\mu,\Omega_\rho)$.\\
(-) if the choice of $\gamma_{\pm}(\mu)$ is made with enough care, then, along the branch of solutions
(say $u_{\scriptscriptstyle \mu}$) of $Q(\mu,\Omega)$ found via the sub-supersolutions method,
the value of $\lambda$ \underline{defined} as follows,
$$
\lambda:=\mu\int\limits_{\Omega_\rho} e^{u_{\scriptscriptstyle \mu}},
$$
can be quite large whenever $\rho$ is small enough.
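As a quick sanity check of this ansatz (not needed in the sequel), note that in the borderline case $\rho=1$, i.e. on the unit disk $B_1=\Omega_1$, the function $v_{1,\gamma}$ is an exact solution: a direct computation gives
$$
-\Delta v_{1,\gamma}=\frac{8\gamma^2}{\big(1+\gamma^2(x^2+y^2)\big)^2}
=\frac{8\gamma^2}{(1+\gamma^2)^2}\,e^{v_{1,\gamma}}\quad\mbox{in }B_1,
\qquad v_{1,\gamma}=0\;\mbox{on }\partial B_1,
$$
so that $v_{1,\gamma}$ solves $Q(\mu,B_1)$ with $\mu=\frac{8\gamma^2}{(1+\gamma^2)^2}$. For $\rho<1$ the same function is no longer an exact solution, which is precisely why suitable values $\gamma_{\mp}$ are sought so that $v_{\rho,\gamma_{\mp}}$ are sub- and supersolutions.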
\bigskip
Part (b) of Theorem \ref{t1-intro} will be a consequence of Part (a) and of Theorems \ref{tjohn} and \ref{tlassek}
below.
\begin{theorem}\label{tjohn}{{\rm \{}\cite{John}{\rm \}}}
Let $K\subset\mathbb{R}^2$ be a convex body (that is, a compact convex set with nonempty interior).
Then there is an ellipsoid $E$ (called the John ellipsoid, which is the ellipsoid of maximal volume contained in $K$)
such that, if $c_0$ is the center of $E$, then the inclusions
$$
E\subset K\subset \{c_0+2(x-c_0)\,:\,x\in E\}
$$
hold.
\end{theorem}
\begin{theorem}\label{tlassek}{{\rm \{}\cite{Lassek-priv}{\rm \}}}
Every convex body $K\subset\mathbb{R}^2$ contains an ellipse of area $\tfrac{\pi}{3\sqrt3}\,A(K)$.
\end{theorem}
A short proof of the previous theorem is based on a result in \cite{Besicovitch}, where the existence of an
affine-regular hexagon $H$ of area at least $\tfrac23\,A(K)$ inscribed in $K$
is established. Indeed, considering the concentric ellipse inscribed in $H$, one gets the claim.
\begin{remark}\label{rem-Lassek}
In particular Theorem \ref{tlassek} has been used to obtain the asymptotic behaviors of
$\underline{\Lambda}_{\textnormal{\tiny $N$}}$ and $\overline{\Lambda}_{\textnormal{\tiny $N$}}$.
A rougher estimate of those asymptotics could have been obtained by using other (much worse)
known estimates of the area of the enclosed ellipse. In particular, while Theorem \ref{tjohn}
is well known \cite{John}, it seems that Theorem \ref{tlassek} is not,
and we are indebted to Prof. M. Lassak, who kindly reported to us a proof of it \cite{Lassek-priv}
based on the cited reference \cite{Besicovitch}.
\end{remark}
\bigskip
Clearly, as an immediate corollary of Theorem \ref{t1-intro} and of the equivalence of
SK-(i) and SK-(ii), we conclude that
if $\Omega$ is regular and satisfies the assumptions of Theorem \ref{t1-intro}(a) (Theorem \ref{t1-intro}(b))
with $\rho\in(0,\underline{\rho}_*(c)]$ ($N(\Omega)>\bar N$), then it is of second kind.
\subsection{Non-degeneracy and multiplicity of solutions of the supercritical (MFE) on thin domains}\label{ss1.2}$\left.\right.$\\
Let us define the density corresponding to a solution $u^{\scriptscriptstyle(\lambda)}$ of $P(\lambda,\Omega)$ as
\begin{equation}\label{dedef}
\omega_{\scriptscriptstyle \lambda}\equiv\omega(u^{\scriptscriptstyle(\lambda)}):=\displaystyle\frac{e^{u^{\scriptscriptstyle(\lambda)}}}{\int\limits_{\Omega} e^{u^{\scriptscriptstyle(\lambda)}}}.
\end{equation}
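It may be useful to record the following elementary identities (valid for $\lambda>0$), which are used implicitly whenever energies of solutions of $P(\lambda,\Omega)$ are considered: since $-\Delta u^{\scriptscriptstyle(\lambda)}=\lambda\,\omega(u^{\scriptscriptstyle(\lambda)})$ with zero boundary data, one has $G[\omega(u^{\scriptscriptstyle(\lambda)})]=\frac{1}{\lambda}u^{\scriptscriptstyle(\lambda)}$, and therefore
$$
\mathcal{E}\big(\omega(u^{\scriptscriptstyle(\lambda)})\big)
=\frac12\int_{\Omega}\omega(u^{\scriptscriptstyle(\lambda)})\,G[\omega(u^{\scriptscriptstyle(\lambda)})]
=\frac{1}{2\lambda}\int_{\Omega}\omega(u^{\scriptscriptstyle(\lambda)})\,u^{\scriptscriptstyle(\lambda)}
=\frac{1}{2\lambda^{2}}\int_{\Omega}|\nabla u^{\scriptscriptstyle(\lambda)}|^{2}.
$$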
A crucial tool used in the proof of the equivalence of statistical ensembles \cite{clmp2}
is the uniqueness of solutions \cite{suz}, \cite{CCL} (see also \cite{BLin3})
of $P(\lambda,\Omega)$ for $\lambda\in[0,8\pi]$.
The situation is far more involved in case $\lambda>8\pi$, since on domains of second kind solutions
are no longer unique.\\
This fact is already clear from NEQ-(ii) and SK-(iv) above, that is, if $\Omega$ is of second kind
we have a blow-up branch which satisfies
\begin{equation}\label{blow-up2}
\omega(u^{\scriptscriptstyle(\lambda)})\rightharpoonup\delta_{x=p},\quad\mbox{as}\quad\lambda\searrow(8\pi)^{+},
\end{equation}
weakly in the sense of measures, for some critical point $p\in\Omega$ of $H_\Omega(x,x)$, as well as the smooth solutions
of $P(\lambda,\Omega)$ in a small right neighborhood of $8\pi$. Hence, we have at least two solutions in a right
neighborhood of $8\pi$, a well known fact that could also have been deduced
by using the alternative in Theorem 7.1 in \cite{clmp2} together with the uniqueness result in \cite{CCL}.\\
We wish to make a further step in this direction.
To this purpose we first study the linearized problem for $P(\lambda,\Omega)$ at $u^{\scriptscriptstyle(\lambda)}$,
where $u^{\scriptscriptstyle(\lambda)}$ is the solution obtained in Theorem \ref{t1-intro},
showing the positivity of its first eigenvalue (see Proposition \ref{pr2} and Remark \ref{branch} for details).
It is worth pointing out that the above fact, which yields a multiplicity result too,
is also crucial in the analysis of the solution branches $\mathcal{G}_{\rho,c},\mathcal{G}_N$, see Remarks
\ref{rem:branch} and \ref{newrmsmooth}. In particular we have:
\begin{proposition}\label{pr3} For fixed $c\in(0,1]$, let $\Omega$ be a regular domain that
satisfies $\{\rho^2x^2+y^2\leq \beta_-^2\}\subset\Omega\subset\{\rho^2x^2+y^2\leq\beta_+^2\}$,
with $\frac{\beta^2_-}{\beta^2_+}=c$ and $\rho\in(0,\underline{\rho}_*(c)]$,
with $\underline{\rho}_*(c)$ as found in Theorem \ref{t1-intro}(a).
Alternatively, let $\Omega$ be a convex domain with $N(\Omega)>\bar N$ as found in Theorem \ref{t1-intro}(b).\\
The portions of $\mathcal{G}_{\rho,c},\mathcal{G}_{\textnormal{\tiny $N$}}$ with $\lambda\in[0,8\pi]$ coincide
with the branch of unique absolute minimizers of
\begin{equation}\label{var-func}
F_{\lambda}(u)=\frac12\int\limits_{\Omega} |\nabla u|^2 \,dx-\lambda
\log\left(\;\int\limits_{\Omega} e^u\,dx\;\right),\quad u\in H^{1}_0(\Omega),
\end{equation}
and for each $\lambda\in(8\pi,\lambda_{\rho,c}]$ or $\lambda\in(8\pi,\lambda_{\textnormal{\tiny $N$}}]$ the corresponding
solutions $u^{\scriptscriptstyle(\lambda)}$ such that $(\lambda,u^{\scriptscriptstyle(\lambda)})\in \mathcal{G}_{\rho,c}$ and $(\lambda,u^{\scriptscriptstyle(\lambda)})\in \mathcal{G}_{\textnormal{\tiny $N$}}$
are strict local minimizers of $F_{\lambda}$.
\end{proposition}
\begin{remark}\label{newrmsmooth}
By using the bounds provided by the sub-supersolutions method (see \eqref{susu} in the proof of Theorem \ref{t1-intro}),
Proposition \ref{pr2} and Theorem \ref{unique:1-intro} below, standard bifurcation theory \cite{CrRab}
shows that,
for any fixed $\overline{\lambda}>8\pi$, possibly taking a smaller $\underline{\rho}_*(c)$ and a larger $N$,
the portions of $\mathcal{G}_{\rho,c}$ and $\mathcal{G}_{\textnormal{\tiny $N$}}$ with
$\lambda\leq \overline{\lambda}$ are smooth and connected branches with no bifurcation points.
\end{remark}
The proof of Proposition \ref{pr3} is a straightforward consequence of the fact that the first eigenvalue of the
linearized problem for $P(\lambda,\Omega)$ is strictly positive along $\mathcal{G}_{\rho,c}$ and
$\mathcal{G}_{\textnormal{\tiny $N$}}$, see Proposition \ref{pr2} in section \ref{sec2}.\\
We shall see that, by virtue of Proposition \ref{pr3},
it is possible to show that for $\lambda\in (8\pi,\lambda_{\rho,c})\setminus 8\pi\mathbb{N}$
the functional $F_{\lambda}$ exhibits a mountain-pass type structure, which in turn yields
the existence of min-max type solutions of $P(\lambda,\Omega)$. More precisely, we obtain the following result.
\begin{theorem}\label{mp-intro}$\left.\right.$\\
{\rm (a)} Let $\Omega$, $\rho\in(0,\underline{\rho}_*(c)]$ and $\lambda_{\rho,c}$ be as in Theorem \ref{t1-intro}(a)
and let $u^{\scriptscriptstyle(\lambda)}$ be a solution of $P(\lambda,\Omega)$ for $\lambda\leq\lambda_{\rho,c}$.
Then, for any $\lambda\in(8\pi,\lambda_{\rho,c})\setminus 8\pi\mathbb{N}$ there exists a second solution
$v^{\scriptscriptstyle(\lambda)}$ of $P(\lambda,\Omega)$ such that $F_{\lambda}(v^{\scriptscriptstyle(\lambda)})> F_{\lambda}(u^{\scriptscriptstyle(\lambda)})$. \\
{\rm (b)} Let $\Omega$, $\bar N>4\pi$, $N(\Omega)$ and $\lambda_{\textnormal{\tiny $N$}}$ be as
in Theorem \ref{t1-intro}(b)
and let $u^{\scriptscriptstyle(\lambda)}$ be a solution of $P(\lambda,\Omega)$ for $\lambda\leq\lambda_{\textnormal{\tiny $N$}}$.
Then, for any $\lambda\in(8\pi,\lambda_{\textnormal{\tiny $N$}})\setminus 8\pi\mathbb{N}$ there exists a second solution
$v^{\scriptscriptstyle(\lambda)}$ of $P(\lambda,\Omega)$ such that
$F_{\lambda}(v^{\scriptscriptstyle(\lambda)})> F_{\lambda}(u^{\scriptscriptstyle(\lambda)})$.
\end{theorem}
\begin{remark} By using well known compactness results \cite{yy}, as well as those recently derived in \cite{GrT}, we
conclude that any sequence of solutions $v^{\scriptscriptstyle(\lambda)}$ with $8\pi k<\lambda< 8\pi(k+1),\,k\geq 1$, obtained in part
{\rm (b)} converges, as $\lambda\rightarrow 8\pi (k+1)$, to a solution $v_{8\pi(k+1)}$ of $P(8\pi(k+1),\Omega)$.
We also have at least two different arguments showing that, for any fixed $\overline{\lambda}>0$,
possibly taking a larger $N$, those $v_{8\pi k}$
which also satisfy $8\pi k\leq\overline{\lambda}$ are distinct from
those obtained in Theorem \ref{t1-intro}{\rm (b)} for $\lambda=8\pi k$.
The first one is a standard bifurcation-type argument
based on Remark \ref{newrmsmooth} and Proposition \ref{pr2} below. The second one is based
on the uniqueness result stated in Theorem \ref{unique:1-intro} below.
\end{remark}
\begin{remark}\label{metastable}
It is easy to check that if $u$ is a solution of $P(\lambda,\Omega)$ and $\omega(u)$ is defined as in \eqref{dedef}, then
$\omega(u)$ is a critical point of $\mathcal{F}_{-\lambda}$ and in particular
$\mathcal{F}_{-\lambda}(\omega(u))=-\frac{1}{\lambda^2}F_\lambda(u)$. Hence, if $u^{\scriptscriptstyle(\lambda)}$ and $v^{\scriptscriptstyle(\lambda)}$ are as in Theorem \ref{mp-intro}
with $\omega(u^{\scriptscriptstyle(\lambda)})$ and $\omega(v^{\scriptscriptstyle(\lambda)})$ as in \eqref{dedef}, then it is readily seen that
$\mathcal{F}_{-\lambda}(\omega(u^{\scriptscriptstyle(\lambda)}))<\mathcal{F}_{-\lambda}(\omega(v^{\scriptscriptstyle(\lambda)}))$. In particular $\omega(u^{\scriptscriptstyle(\lambda)})$ is a kind
of metastable state
(in the sense that it is a strict local maximizer of $\mathcal{F}_{-\lambda}$), while $\omega(v^{\scriptscriptstyle(\lambda)})$ is expected to be
unstable (since it is a min-max type critical point of $\mathcal{F}_{-\lambda}$).\\
In any case, whenever $\Omega$ is regular (and since solutions of $P(8\pi,\Omega)$ are unique in this case \cite{CCL}),
any sequence of solutions found in Theorem \ref{mp-intro} for $P(\lambda,\Omega)$ with $\lambda\searrow 8\pi^+$
must satisfy \eqref{blow-up2}.
\end{remark}
\subsection{Uniqueness of solutions for the supercritical (MFE) with bounded energy on thin domains}\label{ss1.3}$\left.\right.$\\
As a matter of fact, we are still unable to define the energy as a single-valued function
of $\lambda$. We explain the next step toward this goal in the case of the ellipse $\Omega_\rho$.\\
Although solutions of $P(\lambda,\Omega_\rho)$ are not unique as a function of $\lambda$, what we can prove is that,
for fixed $\overline{\lambda}\geq 8\pi$ and $\overline{E}\geq 1$ and for $\rho$ small enough, there is at most
one solution $u_{\scriptscriptstyle \rho, \lambda}$ such that $\lambda\leq\overline{\lambda}$ and
\begin{equation}\label{endef2-intro}
\mathcal{E}(\omega(u_{\scriptscriptstyle \rho, \lambda}))\leq \overline{E}.
\end{equation}
This is a major achievement since, by using also Proposition \ref{pr2} below, it implies that
(as far as $\rho$ is small enough)
the energy (see Proposition \ref{pr3:I}) is well defined as a function of $\lambda$, whenever $\lambda\leq \overline{\lambda}$
and the supremum of the range of the energy itself is not greater than $\overline{E}$.\\
Let us think of the results obtained in \S \ref{ss1.1} and \S \ref{ss1.2} in terms of the
$(\lambda,\|u_{\scriptscriptstyle \rho, \lambda}\|_\infty)$ bifurcation diagram. To fix the ideas,
we propose the following naive description. As $\rho$ gets smaller and smaller, we have:\\
(-) the portion with $\lambda\leq\overline{\lambda}$ and $\mathcal{E}(\omega(u_{\scriptscriptstyle \rho,\lambda}))\leq \overline{E}$
of the (smooth, see Remark \ref{newrmsmooth}) branches of solutions $\mathcal{G}_{\rho,c},\mathcal{G}_{N}$ obtained in Theorem \ref{t1-intro}
gets lower and flatter, that is, $\|u_{\scriptscriptstyle \rho,\lambda}\|_\infty\searrow 0^+$.
See also Remark \ref{rem6.1} below.\\
(-) At the same time, the portion with $\lambda\leq\overline{\lambda}$ of the branches obtained in Theorem \ref{mp-intro}
(as well as any other possible solution) gets higher and higher, the corresponding energies growing and eventually exceeding $\overline{E}$.\\
(-) Any bifurcation/bending point one might possibly meet along $\mathcal{G}_{\rho,c},\mathcal{G}_{N}$
moves into the region $\lambda>\overline{\lambda}$.
\bigskip
It is understood that the value $1$ in the condition $\overline{E}\geq 1$ could have been replaced by any other
fixed positive number. More precisely, we have the following:
\begin{theorem}\label{unique:1-intro}$\left.\right.$\\
Fix $\overline{\lambda}\geq 8\pi$ and $\overline{E}\geq 1$. Then:\\
{\rm (a)} Let $\Omega$ be a simple domain and suppose that there exists $c\in(0,1]$
such that $\{\rho^2x^2+y^2\leq \beta_-^2\}\subseteq\Omega\subseteq\{\rho^2x^2+y^2\leq\beta_+^2\}$
with $c=\tfrac{\beta_-^2}{\beta_+^2}$.\\ Then there exists
$\widetilde{\rho}_{1}=\widetilde{\rho}_{1}(c,\overline{E},\overline{\lambda})>0$ such that for any $\rho\in(0,\widetilde{\rho}_{1}]$
there exists at most one solution $u_{\scriptscriptstyle \lambda}$ of $P(\lambda,\Omega)$ with $\lambda\leq\overline{\lambda}$ which satisfies
\eqref{endef2-intro}.\\
{\rm (b)} Let $\Omega$ be any open, bounded and convex (therefore simple) domain.
There exists $\widetilde{N}=\widetilde{N}(\overline{\lambda},\overline{E})\geq 4\pi$ such that for any such $\Omega$ satisfying
$$
N(\Omega):=\frac{L^2(\partial \Omega)}{A(\Omega)}\geq \widetilde{N},
$$
there exists at most one solution $u_{\scriptscriptstyle \lambda}$ of $P(\lambda,\Omega)$ with $\lambda\leq\overline{\lambda}$ which satisfies
\eqref{endef2-intro}.
\end{theorem}
The proof of Theorem \ref{unique:1-intro} is based on two main tools.\\
The first one is
an a priori estimate for solutions of $P(\lambda,\Omega)$ (which satisfy $\lambda\leq\overline{\lambda}$ and \eqref{endef2-intro})
with a uniform constant $\overline{C}$ which depends neither on $u$ nor \underline{on the domain $\Omega$}, but only on
$\overline{\lambda}$ and $\overline{E}$. Roughly speaking,
and in case $\Omega=\Omega_\rho$, this kind of uniformity with respect to the domain is needed since we consider the limit in which
$\rho$ gets very small, that is, we seek uniqueness for \underline{all} domains which are ``thin'' in the sense specified
in the statement of Theorem \ref{unique:1-intro}. We refer to
Lemma \ref{lem:231112} and the discussion about it in section \ref{sec:unique} for further details.\\
The second tool is a careful use of the dilation invariance (see Remark \ref{rem3.1}), to be used together with
an estimate about the first eigenvalue of the Laplace-Dirichlet problem on a ``thin'' domain,
see \eqref{110213.21} below for more details.
\subsection{Uniqueness of solutions for the supercritical (MFE) on $\Omega_\rho$ with fixed energy and concavity of the
entropy}\label{ss1.4}$\left.\right.$\\
In this subsection we fix $\Omega=\Omega_\rho$.\\
As observed above, by using Theorem \ref{unique:1-intro} and Proposition \ref{pr2} below
we can prove that (as far as $\rho$ is small enough) the energy (see Proposition \ref{pr3:I}) is well
defined as a function of $\lambda$ (along the branch $\mathcal{G}_{\rho,1}$ found in
Theorem \ref{t1-intro}(a), see Remark \ref{newrmsmooth}) whenever $\lambda\leq \overline{\lambda}$ and the supremum
of the range of the energy itself
is not greater than $\overline{E}$. It is tempting at this point to say that the entropy maximizers of the
MVP are those solutions of the (MFE) obtained in Theorem \ref{t1-intro}(a). However we still
do not know whether or not this is true, since obviously there could be many solutions on $\mathcal{G}_{\rho,1}$ (i.e.
with different values of $\lambda$) corresponding to a fixed energy $E\leq \overline{E}$
(see for example fig. 5 in \cite{clmp2}).
In such a situation it would be difficult to detect which is (or worse, which are) the one which
really maximizes the entropy.
A possible solution to this problem could be obtained if we were able to understand the monotonicity
of the energy as a function of $\lambda$ on $\mathcal{G}_{\rho,1}$. The first step toward this goal is to show
that the solutions of $P(\lambda,\Omega_\rho)$ obtained in Theorem \ref{t1-intro}(a) can be
expanded in powers of $\rho$, with the leading order taking an explicit and simple form (see also
\eqref{191112.1}, \eqref{021212.2} below), that is
\begin{equation}\label{phi0-intro}
\phi_0(x,y;\lambda,\rho)=\mu_0(\lambda,\rho)\psi_0(x,y;\rho),\quad (x,y)\in \Omega_\rho,
\end{equation}
where $\mu_0$ satisfies \eqref{mu0-intro}-\eqref{mu0-intro1} below and
\begin{equation}\label{080213.1-intro}
\psi_0(x,y;\rho)=\frac{1}{2(1+\rho^2)}\left(1-(\rho^2x^2+y^2)\right),\quad (x,y)\in \Omega_\rho.
\end{equation}
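Let us point out, as an immediate check which is not needed in the proofs, that $\psi_0$ is nothing but the torsion function of $\Omega_\rho$:
$$
-\Delta\psi_0=\frac{2\rho^2+2}{2(1+\rho^2)}=1\;\mbox{ in }\Omega_\rho,\qquad \psi_0=0\;\mbox{ on }\partial\Omega_\rho,
$$
so that, recalling \eqref{sol:II-intro} and \eqref{mu0-intro} below, the leading order $u^{\scriptscriptstyle(\lambda)}\simeq\rho\,\mu_0\psi_0$ satisfies $-\Delta\big(\rho\mu_0\psi_0\big)=\rho\mu_0\simeq\frac{\lambda\rho}{\pi}=\frac{\lambda}{|\Omega_\rho|}$, which is exactly what one expects from $P(\lambda,\Omega_\rho)$ when $u^{\scriptscriptstyle(\lambda)}$ is small, since then $\frac{e^{u}}{\int_{\Omega_\rho}e^{u}}\simeq\frac{1}{|\Omega_\rho|}$. In the same vein, $G_\rho\big[\frac{1}{|\Omega_\rho|}\big]=\frac{\rho}{\pi}\,\psi_0$, which is the explicit expression used in Remark \ref{rem6.1} below.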
Of course, we could have used the fact that we already knew of the existence of the branch
$\mathcal{G}_{\rho,1}$ and expanded those solutions as functions of $\rho$.
Instead, we decided to make the argument self-contained by pursuing
another proof, of independent interest, of the existence of solutions of $P(\lambda,\Omega_\rho)$. It shows that there exists
$\rho_0$ small enough (depending on $\overline{\lambda}$) such that for any $\rho<\rho_0$ and for each $\lambda\in[0,\overline{\lambda})$
a solution $u^{\scriptscriptstyle(\lambda)}$ of $P(\lambda,\Omega_\rho)$ exists whose leading order with respect to $\rho$
takes the form \eqref{phi0-intro}. There is no problem in checking that these solutions coincide with
those on the branch $\mathcal{G}_{\rho,1}$ obtained in Theorem \ref{t1-intro}(a). Indeed, at this point this is
an easy consequence of Theorem \ref{unique:1-intro}.\\
We still face the problem of how to handle the term $\int\limits_{\Omega_\rho}e^{u^{\scriptscriptstyle(\lambda)}}$
in the denominator of the nonlinear term in $P(\lambda,\Omega_\rho)$. This time we will solve this issue by
seeking solutions $v_\rho$ of $Q(\mu_0\rho,\Omega_\rho)$ which satisfy the following identity in a suitable
set of values of $\lambda$,
\begin{equation}\label{lm0-intro}
\lambda=\mu_0\rho\int\limits_{\Omega_\rho}e^{u^{\scriptscriptstyle(\lambda)}}.
\end{equation}
This is the content of Theorem \ref{thm:261112-intro} below. More precisely, by setting
$$
D^{(k)}_\lambda=\frac{\partial^{k}}{\partial\lambda^k},\;k=0,1,2,
$$
we have the following:
\begin{theorem}\label{thm:261112-intro}
Let $\overline{\lambda}\geq 8\pi$ be fixed. There exists $\rho_0>0$ depending on $\overline{\lambda}$ such that for
any $\rho<\rho_0$ and for each $\lambda\in[0,\overline{\lambda}\,)$ there exists a solution $u^{\scriptscriptstyle(\lambda)}$ of $P(\lambda,\Omega_\rho)$
which satisfies
\begin{equation}\label{sol:II-intro}
u^{\scriptscriptstyle(\lambda)}(x,y;\lambda)=\rho\phi_0(x,y;\lambda)+\rho^2\phi_{1}(x,y;\lambda)+\rho^3\phi_2(x,y;\lambda),
\quad (x,y)\in \Omega_\rho,
\end{equation}
where $\{\phi_0,\phi_{1},\phi_2\}\subset C^{2}_0(\Omega)$. Moreover $\phi_0$ takes the form \eqref{phi0-intro} with
$\mu_0$ a smooth function which satisfies
\begin{equation}\label{mu0-intro}
\mu_0(\lambda,\rho)=\frac{\lambda}{\pi}-\frac{\lambda^2}{4\pi^2}\rho+\mbox{\rm O}(\rho^2),
\end{equation}
and
\begin{equation}\label{mu0-intro1}
D^{(1)}_\lambda \mu_0(\lambda,\rho)=\frac{1}{\pi}-\frac{\lambda}{2\pi^2}\rho+\mbox{\rm O}(\rho^2),\quad
D^{(2)}_\lambda \mu_0(\lambda,\rho)=-\frac{1}{2\pi^2}\rho+\mbox{\rm O}(\rho^2).
\end{equation}
In particular the following uniform estimates hold
\begin{equation}\label{regular-intro}
\|D^{(k)}_\lambda\phi_0\|_{\scriptscriptstyle C^{2}_0(\Omega)}+\|D^{(k)}_\lambda\phi_{1}\|_{\scriptscriptstyle C^{2}_0(\Omega)}+
\|D^{(k)}_\lambda\phi_2\|_{\scriptscriptstyle C^{2}_0(\Omega)}\leq \overline{M}_k,\;k=0,1,2,
\end{equation}
for suitable constants $\overline{M}_k$, $k=0,1,2$, depending only on $\overline{\lambda}$.
Finally, this set of solutions is a smooth branch which coincides with a portion of $\mathcal{G}_{\rho,1}$.
\end{theorem}
\begin{remark}\label{rem:pr2}
In the proof of Theorem \ref{thm:261112-intro}, and therefore
in \underline{all} the expansions in powers of $\rho$, what we really use is
the fact that solutions $v_\rho$ of $Q(\mu_0\rho,\Omega_\rho)$ can be expanded in powers of $\rho$ and in particular that
$\lambda_0(\mu_0,\rho):=\mu_0\rho\int\limits_{\Omega_\rho}e^{v_\rho}$ is smooth, see Lemma \ref{intermedio} below.
Here we need some estimates about the first eigenvalue of the linearization of $Q(\mu,\Omega)$ as obtained in
Proposition \ref{pr2} below.
\end{remark}
By using Theorem \ref{thm:261112-intro} we can prove the following result.
Let $\widetilde{\rho}_{1}$ be fixed as in Theorem \ref{unique:1-intro}(a). Then we have:
\begin{theorem}\label{unique:2-intro}
Let $\overline{\lambda}\geq 8\pi$ and let $\widehat{E}_\rho$ be defined by
$$
\widehat{E}_\rho:=\frac{\rho}{8\pi}+\frac{\rho^2}{50\pi^2}\overline{\lambda}.
$$
For each $\rho<\widetilde{\rho}_{1}$ and $E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$ there exists one and only one
solution $u_{\scriptscriptstyle \lambda}$
of $P(\lambda,\Omega_\rho)$ whose energy is $\mathcal{E}(\omega(u_{\scriptscriptstyle \lambda}))=E$. Let $\widehat{\lambda}_\rho$ be defined by
$\mathcal{E}(\omega(u_{\scriptscriptstyle \widehat{\lambda}_\rho}))=\widehat{E}_\rho$. Then in particular the identities
$$
\widehat{E}(\lambda)=\mathcal{E}(\omega(u_{\scriptscriptstyle \lambda})),\quad \mathcal{E}(\omega(u_{\scriptscriptstyle \widehat{\lambda}(E)}))=E,
$$
define: \\
$\widehat{E}(\lambda):[0,\widehat{\lambda}_\rho]\rightarrow\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$ as a smooth and strictly
increasing function of $\lambda$, and\\
$\widehat{\lambda}(E):\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]\rightarrow[0,\widehat{\lambda}_\rho]$
as a smooth and strictly increasing function of $E$.\\
Moreover we have
\begin{equation}\label{231112.3-intro}
\widehat{E}(\lambda)=\frac{\rho}{8\pi}+\frac{\rho^2}{48\pi^2}\lambda+\mbox{\rm O}(\rho^3),\quad
\widehat{\lambda}(E)=\frac{48\pi^2}{\rho^2}\left(E-\frac{\rho}{8\pi}\right)+\mbox{\rm O}(\rho),
\end{equation}
\begin{equation}\label{231112.3.a-intro}
\frac{d}{d\lambda}\widehat{E}(\lambda)=\frac{\rho^2}{48\pi^2}+\mbox{\rm O}(\rho^3),\quad
\frac{d}{d E}\widehat{\lambda}(E)=\frac{48\pi^2}{\rho^2}+\mbox{\rm O}(\rho),
\end{equation}
\begin{equation}\label{231112.3.b-intro}
\frac{d^2}{d\lambda^2}\widehat{E}(\lambda)=\mbox{\rm O}(\rho^3),\quad
\frac{d^2}{d E^2}\widehat{\lambda}(E)=\mbox{\rm O}(\rho).
\end{equation}
\end{theorem}
\begin{remark}\label{rem6.1}
The notation $\mbox{\rm O}(\rho^m)$, $m\in\mathbb{N}$, is used here and in the rest of this paper to denote various quantities
uniformly bounded by $C_m\rho^m$, with $C_m>0$ a suitable constant depending only on $\overline{\lambda}$.\\
This result is
consistent with the underlying idea that, as $\rho$ gets smaller and
smaller, the energies of the entropy maximizers (which are solutions of $P(\lambda,\Omega_\rho)$)
with values of $\lambda$ uniformly bounded from
above have to approach the energy of the uniform density distribution
$\Upsilon=\frac{1}{|\Omega_\rho|}$, that is
$$
E_{\Upsilon,\rho}:=\mathcal{E}\left(\frac{1}{|\Omega_\rho|}\right)=\frac12\int\limits_{\Omega_\rho}
\frac{1}{|\Omega_\rho|}G_\rho\left[\frac{1}{|\Omega_\rho|}\right]=
\frac{\rho}{2\pi}\int\limits_{\Omega_\rho}\frac{1}{|\Omega_\rho|2(1+\rho^2)}\left(1-(\rho^2x^2+y^2)\right)=\frac{\rho}{8\pi}.
$$
Here we used the easily derived explicit expression of the function
$G_\rho\left[\frac{1}{|\Omega_\rho|}\right]$, see also \eqref{phi0-intro}, \eqref{080213.1-intro} and \eqref{191112.1},
\eqref{021212.2} below.
\end{remark}
\begin{remark}
In particular \eqref{231112.3-intro} yields $\widehat{\lambda}_\rho=\frac{48}{50}\overline{\lambda}+\mbox{\rm O}(\rho)$ and, since
$\overline{\lambda}\geq 8\pi$ can be chosen at will and (see Definition \ref{ecrit}) $E_c=\mathcal{E}(\omega(u_{8\pi}))$, of course $E_{\Upsilon,\rho}<E_c<\widehat{E}_\rho$ and
we succeed in the description of the energy as a function of (minus) the inverse temperature $\lambda=-\beta$ in
a very small range of energies above $E_c$.
\end{remark}
Let us observe that \eqref{231112.3.a-intro} is in perfect agreement with the discussion in \S \ref{ss1.3},
that is, the portion with $\lambda\leq\overline{\lambda}$
of the branch of solutions obtained in Theorem \ref{t1-intro} gets lower and flatter as $\rho$ gets smaller and smaller.
Actually, we could not find a way to prove Theorem \ref{unique:2-intro} other than by explicit evaluations.
This is why our concern in Theorem \ref{thm:261112-intro} was with the exact expression of
solutions of $P(\lambda,\Omega_\rho)$ with $\lambda\leq \overline{\lambda}$ and $\rho$ small, and not just with the estimates one
can get by using the sub-supersolutions found in Theorem \ref{t1-intro}.\\
At this point (see section \ref{sec:entropy} for details) we can
conclude that indeed $S(E)\equiv \mathcal{S}(\omega(u^{\scriptscriptstyle(\lambda)}))\left.\right|_{\lambda=\widehat{\lambda}(E)}$ in
$\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$.
In particular we conclude that $S(E)$ is also smooth
in $\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$, and
by using the asymptotic expansions \eqref{231112.3-intro}, \eqref{231112.3.a-intro} and \eqref{231112.3.b-intro}
and the above mentioned explicit expressions \eqref{phi0-intro} and \eqref{080213.1-intro},
we are eventually able to evaluate $\frac{d^2 S(E)}{d E^2}$ in the case $\Omega=\Omega_\rho$ and
$E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$. Indeed, we have
\begin{proposition}\label{pr:entropy-intro}
Let $\Omega=\Omega_\rho$, $E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$ and $\rho<\widetilde{\rho}_1$ as defined in Theorem
\ref{unique:2-intro}. Then we have
$$
\frac{d^2 S(E)}{d E^2}=-11\left(\frac{48\pi^{2}}{\rho^2}\right)+
\mbox{\rm O}\left(\frac{1}{\rho}\right).
$$
\end{proposition}
In other words, we conclude that the branch of ``small energy'' solutions of $P(\lambda,\Omega_\rho)$ with $\lambda\leq\overline{\lambda}$
corresponds, for $\rho$ small enough, to a range of energies where $S$ is concave.\\
\begin{remark} It can be shown, of course with the necessary minor modifications,
that the monotonicity of the energy as a function of $\lambda$ in Theorem \ref{unique:2-intro}, the
asymptotic expansion of the solution $u^{\scriptscriptstyle(\lambda)}$ in Theorem \ref{thm:261112-intro}, as well as
the concavity of the entropy in Proposition \ref{pr:entropy-intro}, still hold whenever
$\Omega$ is a regular domain such that
$\{\rho^2x^2+y^2\leq \beta_-^2\}\subset\Omega\subset\{\rho^2x^2+y^2\leq\beta_+^2\}$,
with $\frac{\beta^2_-}{\beta^2_+}=c$ and $c\in(\underline{c},1]$ for some $\underline{c}$ close enough to $1^{-}$.\\
The proofs of these results can be obtained, with minor changes,
by a step-by-step adaptation of those provided here. We will not discuss them,
in particular because it seems that they do not provide any other useful
insight, while they surely require a lot of additional technicalities.
\end{remark}
subsection{Open problems}
We conclude this introduction with a conjecture and an open problem.\\
It is well known that $S(E)$ is not concave (see NEQ-(iii) above) for $E>E_c$ and that solutions
of the MVP (see NEQ-(iii) and \rife{blow-up2} above) blow up as $E\rightarrow +\inftynfty$.
Concerning this point we have the following:\\
{\bf Conjecture:} Let $\Omega$ be a convex domain of the second kind. There exists one and only one branch of solutions
$u^{\scp(\lm)}l$ which satisfies $\rife{blow-up2}$ and in particular there exists $E_{\Omega}>E_c$ such that $S(E)$ is convex in
$(E_\Omega,+\inftynfty)$.\\
In particular uniqueness of blow-up solutions would imply that they coincide
(at least in a small right neighborhood of $8\pi$) with the set of
mountain-pass type solutions found in Theorem \ref{mp-intro}, see Remark \ref{metastable}.\\
Then we pose the following problem (see also fig.4 in \mathbb{C}te{clmp2}):\\
{\bf Open Problems:} Let us assume that either the above conjecture is true or that
$\Omega$ is a convex domain of the second kind for which we can find $E_{\Omega}>E_c$ such that $S(E)$ is convex in
$(E_\Omega,+\inftynfty)$. Is it true that
the entropy has only one inflection point? If not, under which conditions (if any)
the entropy has only one inflection point?\\
In particular, is it true that the global branch of solutions of $P(\lambda,\Omega_\rho)$ with $\rho$ small enough
has just one bending point, no bifurcation points and it is connected with the blow-up solution's branch
as $\lambdasearrow (8\pi)^+$ (as for $Q(\mu,\Omega)$ on nearly circular domains \mathbb{C}te{suz0})? Can we answer this question at least on convex, regular and symmetric domains?\\
Of course, these properties do not hold on general simply connected domains. For example,
there should be no reason to expect the energy to be a generally injective function of $\lambda$
(see for example fig.5 in \mathbb{C}te{clmp2}).
Moreover, some well known numerical results \mathbb{C}te{PW} suggest that bifurcation
points can exist on the bifurcation diagram of $P(\lambda,\Omega)$ on (symmetric and/or non symmetric) non convex domains.
It seems however that the very rich structure of those bifurcation diagrams \mathbb{C}te{PW}
is inherited by solutions sharing either multiple peaks or just a single peak but which may
be located at different points. The typical example of such kind of blow-up behavior
is observed on dumbbell shaped domains, see for example \mathbb{C}te{EGP}.\\
On the other hand, there are more favorable situations, such as convex domains, where $k$-peak
solutions with $k\geq 2$ do not exist (as shown in \cite{GrT}). Moreover it is well known (see for example \cite{Gu})
that if $\Omega$ is convex then the Robin function $H_\Omega(x,x)$ is strictly concave and thus admits
one and only one critical point, which of course coincides with the absolute maximum. This rules out the possibility
of having more than one single-peak blow-up solution.\\
So far, it seems that in particular the global connectivity of the branch of solutions
is known only for domains which are close in $C^2$-norm to a disk, see \cite{suz0}.\\
Of course, if (say in case $\Omega=\Omega_\rho$ with $\rho$ small enough) the entropy really has
just one inflection point, then it will coincide with the point on the continuation of $\mathcal{G}_{\rho,1}$
where the first eigenvalue of the linearized problem for $P(\lambda,\Omega_\rho)$ eventually vanishes.
However, in this situation we cannot use the standard results (see for example \cite{suzB}) which
in the classical cases show that this point must necessarily be a bending point. This is due to the peculiar form of the
linearized problem for $P(\lambda,\Omega)$, see \rife{7.1} below, which implies for example that in general neither
the first eigenvalue can be assumed to be simple nor the first eigenfunction to be positive. This is not a mere
technical problem: indeed, an explicit example of a sign-changing first eigenfunction in a similar
situation can be found in Appendix D in \cite{B2}.\\
In any case we think that this topic deserves a separate discussion and that
it should already be very interesting to set up the problem on some
symmetric and convex domain of the second kind such as thin ellipses and/or rectangles.\\\\
This paper is organized as follows. In Section \ref{sec:unique} we prove Theorem \ref{unique:1-intro}. In Section
\ref{sec1} we prove Theorem \ref{t1-intro}. In Section \ref{sec2} we prove Proposition \ref{pr3} by using a result
concerning the first eigenvalue of the linearization of $P(\lambda,\Omega)$ around the solutions found in Theorem
\ref{t1-intro}, see Proposition \ref{pr2}. Section \ref{sec:mp} is devoted to the proof of Theorem \ref{mp-intro}.
Section \ref{s5} is concerned with the proofs of Theorems \ref{thm:261112-intro} and \ref{unique:2-intro}.
Finally, Section \ref{sec:entropy} is devoted to the proof of Proposition \ref{pr:entropy-intro}. Some technical
evaluations are left to the Appendix.
\bigskip
{\bf Acknowledgements.}\\
We wish to express our warmest thanks to Prof. M. Lassak for letting us know about his proof \cite{Lassek-priv}
of Theorem \ref{tlassek}. We are also indebted to Prof. G. Tarantello for suggesting the multiplicity
result Theorem \ref{mp-intro} and to Prof. A. Malchiodi for his suggestion about uniqueness of one-peak
blow-up solutions for the (MFE) and for his encouragement in our attempts to prove Theorems \ref{t1-intro}(b) and
\ref{unique:1-intro}.
\section{A uniqueness result for solutions of $P(\lambda,\Omega)$.}\label{sec:unique}
The aim of this section is to obtain a uniqueness result for solutions of $P(\lambda,\Omega)$ with
finite energy $\mathcal{E}(\omega_{\scriptscriptstyle \lambda})\leq \ov{E}$ (see \rife{dedef}) on domains
chosen as in Theorem \ref{unique:1-intro}.
\bigskip
{\it The proof of Theorem \ref{unique:1-intro}}.\\
We will need an a priori estimate for solutions of $P(\lambda,\Omega)$
with a uniform constant $\ov{C}$ which depends neither on $u$ nor on the domain $\Omega$. This is why we
do not follow the standard route which is
widely used (under some additional regularity assumption on $\partial\Omega$, see for example \cite{CCL})
in case the domain is fixed. In that case in fact one needs to prove that blow-up points
(in the sense of Brezis-Merle \cite{bm}) cannot converge to the boundary. A detailed discussion of
this point in our situation would be not only more delicate (since we do not fix $\Omega$)
but also counterproductive: by using the energy bound \rife{endef2-intro} instead, our argument
yields the needed estimate under the weakest possible regularity
assumptions on $\partial \Omega$ (i.e. $\Omega$ simple), see Definition \ref{defsimp}.\\
The underlying idea is to use the dilation invariance (see Remark \ref{rem3.1})
of $P(\lambda,\Omega)$ to show that even if a blow-up ``bubble'' converges to the boundary, then its energy must be unbounded.
More precisely we have:
\begin{lemma}\label{lem:231112}
Let $\ov{\lambda}\geq 8\pi$ and $\ov{E}\geq 1$ be fixed. There exists $\ov{C}=\ov{C}(\ov{\lambda},\ov{E})$
such that for any simple domain $\Omega$ and for all solutions of
$P(\lambda,\Omega)$ such that $\lambda\leq \ov{\lambda}$ and $\mathcal{E}(\omega_{\scriptscriptstyle \lambda})\leq \ov{E}$ it holds
$\|u_{\scriptscriptstyle \lambda}\|_\infty\leq \ov{C}$. In particular $\ov{C}$ depends neither on
$u$ nor on $\Omega$.\\
\end{lemma}
\proof In view of Remark \ref{solregdef} we can assume $u$ to be a classical solution
of $P(\lambda,\Omega)$.\\
We argue by contradiction and suppose that there exists a sequence of simple domains $\{\Omega_n\}$ and a sequence of
positive numbers $\{\lambda_n\}$ such that $\sup\limits_{\mathbb{N}}\lambda_n\leq\ov{\lambda}$ and
there exists a sequence of solutions $\{u_n\}$ for $P(\lambda_n,\Omega_n)$ such that
$$
\mathcal{E}(\omega(u_n))\leq \ov{E},
$$
and there exists a sequence of points $\{x_n\}$ such that $x_n\in \Omega_n$ $\forall\,\,n\in\mathbb{N}$ and
$$
u_n(x_n)=\max\limits_{\Omega_n}u_n\rightarrow+\infty.
$$
Of course, we have used here the fact that the maximum principle ensures that any solution for $P(\lambda_n,\Omega_n)$
is nonnegative.\\
Since the problem is translation invariant we can assume without loss of generality that
$$
x_n\equiv 0,\;\forall\, n\in\mathbb{N}.
$$
Let us set
$$
d_n:=\dist(0,\partial\Omega_n),
$$
and define
$$
w_{n,0}(y)=u_n\left(\frac{d_n}{2}y\right),\quad y\in \Omega_{n,0}:=\left\{y\in\mathbb{R}^2\,:\,\frac{d_n}{2}y\in\Omega_n\right\}.
$$
Clearly we have
\begin{equation}\label{100213.2}
B_1(0)\Subset \Omega_{n,0}
\end{equation}
and in particular (see Remark \ref{rem3.1}) $w_{n,0}$ is a solution of $P(\lambda_n,\Omega_{n,0})$ which therefore satisfies
\begin{equation}\label{100213.3}
w_{n,0}(0)=u_n(0)=\max\limits_{\Omega_{n,0}}w_{n,0}\rightarrow+\infty.
\end{equation}
Let us set
$$
\mu_{n,0}:=\lambda_n\left(\,\int\limits_{\Omega_{n,0}} e^{w_{n,0}}\right)^{-1}.
$$
We claim that:\\
{\bf Claim:} $w_{n,0}(0)+\log{\mu_{n,0}}\rightarrow+\infty$.\\
We argue by contradiction and observe that if the claim were false, then, in view of \rife{100213.3} we would have
$$
\graf{
-\Delta w_{n,0} \leq C_0 & \mbox{in}\quad \Omega_{n,0}\\
w_{n,0} =0 & \mbox{on}\quad \partial\Omega_{n,0}
}
$$
for some $C_0>0$. For any $n\in\mathbb{N}$ we can choose $R_n>0$ such that $\Omega_{n,0}\subset B_{R_n}$ and let
$$
\varphi_{n}(y)=\frac{C_0}{R_n^2}(R_n^2-|y|^2),\;y\in B_{R_n}
$$
be the unique solution of
$$
\graf{
-\Delta \varphi_n = C_0 & \mbox{in}\quad B_{R_n}\\
\varphi_n =0 & \mbox{on}\quad \partial B_{R_n}
}
$$
Clearly, by the maximum principle we have $w_{n,0}(0)\leq \varphi_n(0)=C_0$, which contradicts
\rife{100213.3}.\hspace{\fill}$\square$
\bigskip
Therefore we see that the function $w_{n,1}(y)=w_{n,0}(y)+\log{\mu_{n,0}}$ satisfies
$$
\graf{
-\Delta w_{n,1} =e^{w_{n,1}}\quad \mbox{in}\quad B_1\\
\int\limits_{B_1}e^{w_{n,1}}\leq \ov{\lambda}\\
w_{n,1}(0)=\max\limits_{B_1}w_{n,1}\rightarrow+\infty
}
$$
Hence we can apply the Brezis-Merle result \cite{bm}, as further improved by Li and Shafrir \cite{ls},
to conclude that there exists $r_0\in(0,1]$ such that
$$
e^{w_{n,1}}\rightharpoonup 8\pi m \delta_{p=0},\quad\mbox{in}\quad B_{2r_0},
$$
weakly in the sense of measures, where $m$ is a positive integer which satisfies
$1\leq m\leq\frac{\ov{\lambda}}{8\pi}$. We remark that with a little extra work we could also
prove that the oscillation of $w_{n,1}$ is bounded on (say) $\partial B_{r_0}$ and hence in particular
obtain the desired contradiction by using Li's result \cite{yy}. We will not pursue this approach here
since we can come up with the desired conclusion just by setting
\begin{equation}\label{110213.2}
\delta_{n,0}^2:=e^{-w_{n,1}(0)}\rightarrow 0,
\end{equation}
and by using the by now standard blow-up argument in \cite{ls}, which shows that there exists a subsequence (which we will not relabel) such that
$$
w_n(z)=w_{n,1}(\delta_{n,0}z)-w_{n,1}(0),\quad |z|<(\delta_{n,0})^{-1},
$$
satisfies
\begin{equation}\label{110213.1}
w_n(z)\rightarrow w(z),\quad \mbox{in}\quad C^2_{\rm loc}(\mathbb{R}^2),
\end{equation}
where
\begin{equation}\label{110213.0}
w(z)=2\log{\frac{1}{(1+\frac{1}{8}|z|^2)}},\qquad \int\limits_{\mathbb{R}^2} e^w=8\pi.
\end{equation}
At this point we observe that, in view of the translation and dilation invariance of the energy we have
$$
\int\limits_{\Omega_{n,0}}\left| \nabla w_{n,1}\right|^2=\int\limits_{\Omega_{n,0}}\left| \nabla w_{n,0}\right|^2=
\int\limits_{\Omega_{n}}\left| \nabla u_{n}\right|^2=2\lambda_n^2\mathcal{E}(\omega(u_{n}))\leq 2\ov{\lambda}^2\ov{E},
$$
so that, by using \rife{100213.3} and \rife{110213.2}, we should have,
$$
2\ov{\lambda}^2\ov{E}\geq \int\limits_{\Omega_{n}}\left| \nabla u_{n}\right|^2=\lambda_n\int\limits_{\Omega_{n}}\omega(u_{n})u_{n}=
\lambda_n\int\limits_{\Omega_{n,0}}\omega(w_{n,0})w_{n,0}>\lambda_n\int\limits_{B_{R\delta_{n,0}}}\omega(w_{n,0})w_{n,0}=
$$
$$
\int\limits_{B_{R\delta_{n,0}}}e^{w_{n,1}}(w_{n,1}-\log\mu_{n,0})=\int\limits_{B_R}e^{w_{n}}(w_{n}+w_{n,1}(0)-\log\mu_{n,0})=
$$
$$
\int\limits_{B_R}e^{w_{n}}w_{n}+u_{n}(0)\int\limits_{B_R}e^{w_{n}},
$$
for any $R\geq 1$ and for any $n\in\mathbb{N}$, which is clearly in contradiction with \rife{100213.3}, \rife{110213.1} and \rife{110213.0}.
We refer to Lemma 3.1 in \cite{bl} for a proof of the fact that the Gauss-Green formula
$\int\limits_{\Omega_{n}}\left| \nabla u_{n}\right|^2=\lambda_n\int\limits_{\Omega_{n}}\omega(u_{n})u_{n}$ holds
on domains which are only assumed to be simple.
\hspace{\fill}$\square$
\bigskip
{\it The proof of Theorem \ref{unique:1-intro} completed}.\\
We first prove part (b).\\
We argue by contradiction and suppose that there exists a sequence of open, bounded and convex domains
$\{\Omega_{n,0}\}$ such that
\begin{equation}\label{hpdiv}
N(\Omega_{n,0})=\frac{L^2(\partial \Omega_{n,0})}{A(\Omega_{n,0})}> n,
\end{equation}
and a sequence of positive numbers $\{\lambda_n\}$ such that $\sup\limits_{\mathbb{N}}\lambda_n\leq\ov{\lambda}$,
such that for any $n\in\mathbb{N}$ there exist at least two solutions $u_{n,1}$ and $u_{n,2}$
for $P(\lambda_n,\Omega_{n,0})$ such that
\begin{equation}\label{110213.5}
\mathcal{E}(\omega(u_{n,i}))\leq \ov{E},\quad i=1,2.
\end{equation}
In view of Theorems \ref{tjohn} and \ref{tlassek} we see that for each $n\in\mathbb{N}$ there exist two concentric and
homothetic ellipses such that
\begin{equation}\label{Lassek.0}
\mathbb{E}_{n,-}\subseteq\Omega_{n,0}\subseteq\mathbb{E}_{n,+}
\end{equation}
and
\begin{equation}\label{Lassek}
\frac{A(\mathbb{E}_{n,+})}{A(\mathbb{E}_{n,-})}=4.
\end{equation}
Since $P(\lambda,\Omega)$ and \rife{110213.5} are both invariant under rotations, translations and dilations, then,
in view of Remark \ref{rem3.1}, we can assume
without loss of generality that for each $n\in\mathbb{N}$
\begin{equation}\label{110213.10}
\mathbb{E}_{n,+}=\Omega_{\rho_n},\quad\mbox{for some}\quad \rho_n>0.
\end{equation}
Clearly we have
$$
N(\mathbb{E}_{n,+})=\frac{L^2(\partial \mathbb{E}_{n,+})}{A(\mathbb{E}_{n,+})}=
\frac{L^2(\partial \mathbb{E}_{n,+})}{4A(\mathbb{E}_{n,-})}\geq
\frac{L^2(\partial \mathbb{E}_{n,+})}{4A(\Omega_{n,0})}\geq \frac14 N(\Omega_{n,0})>\frac{n}{4}.
$$
Therefore, since in view of \rife{110213.10} we have $L^2(\partial \mathbb{E}_{n,+})\leq \frac{4\pi^2}{\rho_n^2}$ and
$A(\mathbb{E}_{n,+})=\frac{\pi}{\rho_n}$, then we also conclude that
$$
\frac{n}{4}<N(\mathbb{E}_{n,+})\leq \frac{4\pi^2}{\rho_n^2}\frac{\rho_n}{\pi},
$$
that is
\begin{equation}\label{110213.11}
\rho_n<\frac{16\pi}{n}\,.
\end{equation}
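For the reader's convenience, the perimeter and area bounds used above follow from the fact that $\mathbb{E}_{n,+}=\Omega_{\rho_n}$ is an ellipse with semi-axes $\rho_n^{-1}$ and $1$ (after a rotation we may and do assume $\rho_n\leq 1$), so that, by the monotonicity of the perimeter of nested convex sets,
$$
L(\partial \mathbb{E}_{n,+})\leq L(\partial B_{1/\rho_n})=\frac{2\pi}{\rho_n},\qquad A(\mathbb{E}_{n,+})=\pi\cdot\frac{1}{\rho_n}\cdot 1=\frac{\pi}{\rho_n}.
$$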
At this point we observe that
\begin{equation}\label{110213.21}
\sigma_{n,0}:=\inf\left\{
\left.\frac{\int\limits_{\Omega_{n,0}} \left|\nabla \varphi\right|^2\,dx}
{\int\limits_{\Omega_{n,0}}\varphi^2\,dx}\;\right|
\,\varphi\in H^{1}_0(\Omega_{n,0})\right\}\geq 2(1+\rho_n)>2,
\end{equation}
which easily follows from the fact that $\sigma_{n,0}\geq \sigma_n$, where
$$
\sigma_n:=\inf\left\{
\left.\frac{\int\limits_{\Omega_{\rho_n}} \left|\nabla \varphi\right|^2\,dx}
{\int\limits_{\Omega_{\rho_n}}\varphi^2\,dx}\;\right|
\,\varphi\in H^{1}_0(\Omega_{\rho_n})\right\},
$$
see \rife{140213.0} and \rife{14} below for further details.\\
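For the reader's convenience we also record the chain of inequalities behind \rife{110213.21}: since $\Omega_{n,0}\subseteq\mathbb{E}_{n,+}=\Omega_{\rho_n}\subset T_{\rho_n}$ (see \rife{140213.0} below), by domain monotonicity of the first Dirichlet eigenvalue we have
$$
\sigma_{n,0}\geq \sigma_n\geq \frac{\pi^2}{4}\left(1+\rho_n^2\right)\geq 2(1+\rho_n),
$$
where the last inequality holds because $\frac{\pi^2}{4}>1+\sqrt{2}=\max\limits_{0<\rho<1}\frac{2(1+\rho)}{1+\rho^2}$.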
Hence, by using \rife{110213.21}, we conclude that
$$
2\int\limits_{\Omega_{n,0}}\left|u_{n,1}-u_{n,2}\right|^2\leq
\int\limits_{\Omega_{n,0}}\left|\nabla(u_{n,1}-u_{n,2})\right|^2=
\lambda_n\int\limits_{\Omega_{n,0}}(\omega(u_{n,1})-\omega(u_{n,2}))(u_{n,1}-u_{n,2}).
$$
Let us write
$$
\int\limits_{\Omega_{n,0}}(\omega(u_{n,1})-\omega(u_{n,2}))(u_{n,1}-u_{n,2})=I_{1,n}+I_{2,n},
$$
where
$$
I_{1,n}=
\int\limits_{\Omega_{n,0}}\frac{e^{u_{n,1}}-e^{u_{n,2}}}{\int\limits_{\Omega_{n,0}} e^{u_{n,1}}}(u_{n,1}-u_{n,2}),\quad
I_{2,n}=
\int\limits_{\Omega_{n,0}}e^{u_{n,2}}
\left(\frac{1}{\int\limits_{\Omega_{n,0}} e^{u_{n,1}}}-\frac{1}{\int\limits_{\Omega_{n,0}} e^{u_{n,2}}}\right)(u_{n,1}-u_{n,2}).
$$
It follows from Lemma \ref{lem:231112} (which of course can be applied since any open, bounded and convex
domain is simple according to Definition \ref{defsimp}) and the fact that solutions of $P(\lambda,\Omega)$
are nonnegative that, by using also \rife{Lassek}, we can estimate these two integrals as follows
$$
\left|I_{1,n}\right|\leq \int\limits_{\Omega_{n,0}}\frac{e^{\ov{u}_n}}{A(\Omega_{n,0})}|u_{n,1}-u_{n,2}|^2\leq
\int\limits_{\Omega_{n,0}}\frac{e^{\ov{C}}}{A(\mathbb{E}_{n,-})}|u_{n,1}-u_{n,2}|^2=
\frac{4e^{\ov{C}}}{\pi}\rho_n \int\limits_{\Omega_{n,0}}|u_{n,1}-u_{n,2}|^2,
$$
and similarly,
$$
\left|I_{2,n}\right|\leq
\int\limits_{\Omega_{n,0}}e^{u_{n,2}}|u_{n,1}-u_{n,2}|\left(\int\limits_{\Omega_{n,0}}\frac{e^{\ov{u}_n}}
{\left(\int\limits_{\Omega_{n,0}}e^{\ov{u}_n}\right)^2}(u_{n,1}-u_{n,2})\right)\leq
\frac{e^{2\ov{C}}}{A^2(\Omega_{n,0})}\left(\int\limits_{\Omega_{n,0}}|u_{n,1}-u_{n,2}|\right)^2\leq
$$
$$
\frac{4 e^{2\ov{C}}}{\pi A(\Omega_{n,0})}\rho_n\left(\int\limits_{\Omega_{n,0}}|u_{n,1}-u_{n,2}|\right)^2\leq
\frac{4 e^{2\ov{C}}}{\pi}\rho_n\int\limits_{\Omega_{n,0}}|u_{n,1}-u_{n,2}|^2,
$$
where $\ov{u}_n$ is a suitable function which satisfies
$\ov{u}_n\in(\min\{u_{n,1},u_{n,2}\}, \max\{u_{n,1},u_{n,2}\})$.\\
Putting these estimates together we conclude that
$$
\int\limits_{\Omega_{n,0}}|u_{n,1}-u_{n,2}|^2\leq \lambda_n\rho_n\frac{8e^{2\ov{C}}}{\pi}\int\limits_{\Omega_{n,0}}|u_{n,1}-u_{n,2}|^2,
$$
which, since $u_{n,1}\not\equiv u_{n,2}$ and $\lambda_n\leq\ov{\lambda}$, is of course a contradiction to \rife{110213.11} for $n$ large enough.
This contradiction shows that in fact there exists at most one solution under the given assumptions and
concludes the proof of part (b) of the statement.\\
As for part (a), it is easy to adapt the argument by contradiction used above just by
replacing the assumption of divergent isoperimetric ratio in \rife{hpdiv} with that of
the existence of $\rho_n\searrow 0^+$ and $0<\beta_{-,n}\leq \beta_{+,n}<+\infty$ such that
$$
\mathbb{E}_{n,-}:=\{\rho_n^2x^2+y^2\leq \beta_{-,n}^2\}\subseteq\Omega_{n,0}\subseteq
\{\rho_n^2x^2+y^2\leq \beta_{+,n}^2\}=:\mathbb{E}_{n,+},\quad \frac{\beta_{-,n}}{\beta_{+,n}}=c,\quad \forall\,\,n\in\mathbb{N}.
$$
In particular we see that this time we already have (by assumption)
the needed concentric homothetic ellipses (as in \rife{Lassek.0}), which in this case satisfy
$$
\frac{A(\mathbb{E}_{n,+})}{A(\mathbb{E}_{n,-})}=\frac{\beta_{+,n}^2}{\beta_{-,n}^2}=c^2.
$$
At this point, since of course Lemma \ref{lem:231112} can be applied to the situation at hand, the proof
can be worked out as above with minor changes.\hspace{\fill}$\square$
\section{Solutions of supercritical Mean Field Equations on thin domains}\label{sec1}
In this section we prove Theorem \ref{t1-intro}. Indeed, we will
construct a branch of solutions of $P(\lambda,\Omega_\rho)$ which for $\rho$ small enough extends up to some value
${\lambda}_{\rho}\geq \frac{4\pi}{7\rho}$, and more generally we obtain the same statement on any domain
$\Omega$ lying between two concentric and similar \virg{thin} ellipses.
Thus, in particular, we recover the result for convex domains having a large isoperimetric ratio.
To achieve our goal, we consider the auxiliary problem $Q(\mu,\Omega)$ (see \S \ref{ss1.1})
and make use of a well known result \cite{clsw} whose statement calls for the following:
\begin{definition}\label{defss}
A function $u$ is said to be a \un{subsolution}(\un{supersolution}) of $Q(\mu,\Omega)$
if $u\in C^0(\ov{\Omega})$ and
\begin{equation}\label{sssyst}
\graf{
\int_{\Omega}(-\Delta\varphi) u\leq(\geq) \mu \,{\displaystyle e^u}\varphi & \mbox{in}\quad \Omega\\\\
u\leq(\geq)0 & \mbox{on}\quad \partial\Omega
},\qquad\forall\,\varphi\in C^{\infty}_0(\Omega),\varphi\geq 0.
\end{equation}
\end{definition}
\bigskip
\begin{theorem}[Sub-Supersolutions method, \cite{clsw}]\label{t0}
Let $\Omega$ be simple.
Suppose that, for fixed $\mu>0$, there exist
a subsolution $\underline{u}_\mu$ and a supersolution $\overline{u}_\mu$ of $Q(\mu,\Omega)$.
If $\underline{u}_\mu\leq\overline{u}_\mu$ in $\Omega$, then
$Q(\mu,\Omega)$ admits a classical solution $u=u_\mu\in C^{2}(\Omega)\cap C^0(\ov{\Omega})$
which moreover satisfies $\underline{u}_\mu\leq u_\mu \leq \overline{u}_\mu$.
\end{theorem}
\proof
We use the existence theorem in \cite{clsw}, where
the domain $\Omega$ is just assumed to be regular with respect to the Laplacian (see \cite{GT}, p. 25). It is well
known that any simple domain satisfies this assumption (see \cite{GT}, p. 26).
Therefore we can apply the result in \cite{clsw}, which yields the existence of a function
$u_\mu\in C^0(\ov{\Omega})$ which satisfies $\underline{u}_\mu\leq u_\mu \leq \overline{u}_\mu$ and moreover satisfies
\rife{sssyst} for \un{all} $\varphi\in C^{\infty}_0(\Omega)$ with the equality sign replacing the corresponding
inequalities. Hence in particular $u_\mu$
is a distributional solution of the equation in $Q(\mu,\Omega)$. Therefore the Brezis-Merle \cite{bm} theory of
distributional solutions of Liouville type equations shows that it is also locally bounded, and then
standard elliptic regularity theory shows that $u_\mu\in C^{2}(\Omega)$ is a classical solution of $Q(\mu,\Omega)$ as well.
We stress that the continuity up to the boundary is a byproduct of the result in \cite{clsw},
which indeed yields a distributional solution $u_\mu\in C^0(\ov{\Omega})$.
\hspace{\fill}$\square$
\proof[Proof of Theorem \ref{t1-intro}(a)]
For fixed $c\in(0,1]$ and in view of Remark \ref{rem3.1}
we can assume without loss of generality that
$$
\Omega_{\rho,c}:=\{\rho^2 x^2+y^2\leq c\}\subseteq\Omega\subseteq\{\rho^2 x^2+y^2\leq 1\}=:\Omega_\rho.
$$
Let us define
\begin{equation}\label{2}
v_{\rho,\gamma}=2\log{\left(\frac{1+\gamma^2}{1+\gamma^2(\rho^2x^2+y^2)}\right)},\quad (x,y)\in \Omega_\rho.
\end{equation}
A straightforward evaluation shows that $v_{\rho,\gamma}$ satisfies
\begin{equation}\label{3}
\graf{
-\Delta v_{\rho,\gamma}=V_{\rho,\gamma}{\displaystyle e^{v_{\rho,\gamma}}} & \mbox{in}\quad \Omega_\rho\\
v_{\rho,\gamma}=0 & \mbox{on}\quad \partial\Omega_\rho,
}
\end{equation}
where
\begin{equation}\label{4}
V_{\rho,\gamma}(x,y)=\frac{4\gamma^2}{(1+\gamma^2)^2}\left(1+\rho^2+\gamma^2(1-\rho^2)(\rho^2x^2-y^2)\right).
\end{equation}
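For the reader's convenience we sketch the evaluation: setting $u_\gamma:=1+\gamma^2(\rho^2x^2+y^2)$, we have $e^{v_{\rho,\gamma}}=\frac{(1+\gamma^2)^2}{u_\gamma^2}$ and
$$
-\Delta v_{\rho,\gamma}=2\Delta\log{u_\gamma}=\frac{4\gamma^2(1+\rho^2)}{u_\gamma}-\frac{8\gamma^4(\rho^4x^2+y^2)}{u_\gamma^2},
$$
and the quotient of the two expressions simplifies to \rife{4}.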
Since
$$
V_{\rho,\gamma}(x,y)\geq g_{+}(\gamma,\rho):=\frac{4\gamma^2}{(1+\gamma^2)^2}\left(1+\rho^2+\gamma^2(\rho^2-1)\right),\quad \forall\, (x,y)\in \Omega_\rho,
$$
we easily verify that $v_{\rho,\gamma}$ is a classical supersolution, and in particular a
supersolution (according to the above definition), of $Q(\mu,\Omega)$ whenever
\begin{equation}\label{5}
\mu\leq g_{+}(\gamma,\rho).
\end{equation}
For fixed $\rho\in(0,1)$, the function $h_\rho(t)=g_{+}(\sqrt{t},\rho)$ satisfies
$h_\rho(0)=0=h_\rho\left(\frac{1+\rho^2}{1-\rho^2}\right)$, is strictly increasing in $\left(0,\frac{1+\rho^2}{3-\rho^2}\right)$ and
strictly decreasing in $\left(\frac{1+\rho^2}{3-\rho^2},\frac{1+\rho^2}{1-\rho^2}\right)$.
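For later convenience we record the elementary computation behind these monotonicity properties:
$$
\frac{d}{dt}\,h_\rho(t)=\frac{4\left((1+\rho^2)-(3-\rho^2)\,t\right)}{(1+t)^3},\qquad t>0,
$$
which vanishes exactly at $t=\frac{1+\rho^2}{3-\rho^2}$.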
Therefore, putting $\overline{\gamma}_\rho^2=\frac{1+\rho^2}{3-\rho^2}$ and
$\overline{\mu}_\rho:=h_\rho\left(\overline{\gamma}_\rho^2\right)\equiv g_{+}(\overline{\gamma}_\rho,\rho)\equiv \frac{(\rho^2+1)^2}{2}$,
we see in particular that for each $\mu\in(0,\overline{\mu}_\rho]$ there exists a unique
$\gamma^{+}_\rho\in\left(0,\overline{\gamma}_\rho\right]$ such that $g_{+}(\gamma^{+}_\rho,\rho)=\mu$ and $v_{\rho,\gamma^{+}_\rho}$ is
a supersolution of $Q(\mu,\Omega)$. Indeed we have
$$
\left(\gamma^{+}_\rho\right)^2=\left(\gamma^{+}_\rho(\mu)\right)^2=\frac{2(1+\rho^2)-\mu-2\sqrt{(1+\rho^2)^2-2\mu}}{\mu+4(1-\rho^2)}.
$$
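For the reader's convenience we note that $\left(\gamma^{+}_\rho(\mu)\right)^2$ is nothing but the smaller root of the quadratic equation obtained from $g_{+}(\sqrt{t},\rho)=\mu$, namely
$$
\left(\mu+4(1-\rho^2)\right)t^2+2\left(\mu-2(1+\rho^2)\right)t+\mu=0,
$$
the smaller root being selected because $\gamma^{+}_\rho(\mu)\leq\overline{\gamma}_\rho$.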
On the other hand let us consider
\begin{equation}\label{vmeno}
v_{\rho,\gamma,c}=\left\{
\begin{array}{ll}
2\log{\left(\frac{1+\gamma^2}{1+\tfrac{\gamma^2}{c}(\rho^2x^2+y^2)}\right)},& \hbox{$(x,y)\in\Omega_{\rho,c}$} \\
0, & \hbox{$(x,y)\in\Omega\setminus\Omega_{\rho,c}$.}
\end{array}
\right.
\end{equation}
Again a straightforward computation shows that $v_{\rho,\gamma,c}$ satisfies
$$
\left\{
\begin{array}{ll}
-\Delta v_{\rho,\gamma,c}=V_{\rho,\gamma,c}{\displaystyle e^{v_{\rho,\gamma,c}}} & \mbox{in}\quad \Omega_{\rho,c}\\
v_{\rho,\gamma,c}=0 & \mbox{on}\quad \partial\Omega_{\rho,c}, \end{array}
\right.
$$
where
$$
V_{\rho,\gamma,c}(x,y)=\left\{
\begin{array}{ll}
\frac{4\gamma^2}{c(1+\gamma^2)^2}\left(1+\rho^2+\frac{\gamma^2}{c}(1-\rho^2)(\rho^2x^2-y^2)\right) & \mbox{in }\Omega_{\rho,c} \\
0 & \mbox{in }\Omega\setminus\Omega_{\rho,c}.
\end{array}
\right.
$$
Since
$$
V_{\rho,\gamma,c}(x,y)\leq g_{-}(\gamma,\rho,c):=\frac{4\gamma^2}{c(1+\gamma^2)^2}\left(1+\rho^2+\gamma^2(1-\rho^2)\right),\quad \forall\, (x,y)\in \Omega,
$$
it is not difficult to check that $v_{\rho,\gamma,c}$ is a subsolution (according to the above definition) of
$Q(\mu,\Omega)$ whenever
\begin{equation}\label{6}
\mu\geq g_{-}(\gamma,\rho,c).
\end{equation}
For fixed $\rho\in(0,1)$, the function $f_{\rho,c}(t)=g_{-}(\sqrt{t},\rho,c)$, $t\in(0,\overline{\gamma}_\rho^2]$ is strictly
increasing and
satisfies $f_{\rho,c}(t)>h_\rho(t)$.
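Both properties can be checked directly: the inequality $f_{\rho,c}>h_\rho$ follows from $c\leq 1$ together with $1+\rho^2+(1-\rho^2)t>1+\rho^2-(1-\rho^2)t$ for $t>0$, while the monotonicity follows from
$$
\frac{d}{dt}\,f_{\rho,c}(t)=\frac{4}{c}\,\frac{(1+\rho^2)+(1-3\rho^2)\,t}{(1+t)^3}>0\quad \mbox{for}\quad t\in\left(0,\overline{\gamma}_\rho^2\right],
$$
since, whenever $\rho^2>\frac13$, we have $\overline{\gamma}_\rho^2=\frac{1+\rho^2}{3-\rho^2}<\frac{1+\rho^2}{3\rho^2-1}$.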
Therefore, for each $\mu\in(0,\overline{\mu}_\rho]$ there exists a unique
$\gamma^{-}_{\rho,c}\in\left(0,\overline{\gamma}_\rho\right)$ such that $g_{-}(\gamma^{-}_{\rho,c},\rho,c)=\mu$,
$\gamma^{-}_{\rho,c}<\gamma^{+}_\rho$ and $v_{\rho,\gamma^{-}_{\rho,c},c}$ is
a subsolution of $Q(\mu,\Omega)$. Indeed we have
$$
\left(\gamma^{-}_{\rho,c}\right)^2=\left(\gamma^{-}_{\rho,c}(\mu)\right)^2=\frac{\mu c-2(1+\rho^2)+2\sqrt{(1+\rho^2)^2-2\rho^2\mu c}}{4(1-\rho^2)-\mu c}.
$$
In conclusion, since $\gamma^{-}_{\rho,c}(\mu)\leq \gamma^{+}_\rho(\mu)$ implies $v_{\rho,\gamma^{-}_{\rho,c},c}\leq v_{\rho, \gamma^{+}_\rho}$,
for fixed $\rho\in (0,1)$ and for each $\mu\in(0,\overline{\mu}_\rho]$ we can set
$$
\underline{u}_{\mu}=v_{\rho,\gamma^{-}_{\rho,c}(\mu),c},\quad \overline{u}_\mu=v_{\rho,\gamma^{+}_\rho(\mu)},
$$
to obtain (through Theorem \ref{t0}) a solution $u_{\rho,\mu,c}$ for $Q(\mu,\Omega)$ which satisfies
\begin{equation}\label{susu}
v_{\rho,\gamma^{-}_{\rho,c}(\mu),c}\leq u_{\rho,\mu,c}\leq v_{\rho,\gamma^{+}_\rho(\mu)},\quad \forall\, (x,y)\in\Omega.
\end{equation}
Any such a solution $u_{\rho,\mu,c}$ therefore solves $P(\lambda,\Omega)$ with $\lambda=\lambda_{\rho,c}(\mu)$ satisfying
\begin{equation}\label{stimalambdamu1}
\lambda=\lambda_{\rho,c}(\mu)=\mu\int\limits_{\Omega}e^{u_{\rho,\mu,c}}\geq \mu\int\limits_{\Omega_{\rho,c}}e^{v_{\rho,\gamma^{-}_{\rho,c}(\mu),c}}=
\mu c\frac{\pi}{\rho}\,(1+(\gamma^{-}_{\rho,c}(\mu))^2),
\end{equation}
and
\begin{equation}\label{stimalambdamu2}
\lambda=\lambda_{\rho,c}(\mu)=\mu\int\limits_{\Omega}e^{u_{\rho,\mu,c}}\leq \mu\int\limits_{\Omega_\rho}e^{v_{\rho,\gamma^{+}_\rho(\mu)}}=
\mu\frac{\pi}{\rho}\,(1+(\gamma^{+}_\rho(\mu))^2).
\end{equation}
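For the reader's convenience, the explicit values of the integrals in \eqref{stimalambdamu1} and \eqref{stimalambdamu2} follow from the elementary radial computation
$$
\int\limits_{\{\rho^2x^2+y^2\leq s\}}\frac{dx\,dy}{\left(1+\frac{\gamma^2}{s}(\rho^2x^2+y^2)\right)^2}=\frac{s}{\rho}\int\limits_{B_1}\frac{dX\,dY}{(1+\gamma^2|X|^2)^2}=\frac{\pi s}{\rho\,(1+\gamma^2)},\qquad s\in\{c,1\},
$$
so that $\int\limits_{\Omega_{\rho,c}}e^{v_{\rho,\gamma,c}}=\frac{\pi c}{\rho}(1+\gamma^2)$ and $\int\limits_{\Omega_\rho}e^{v_{\rho,\gamma}}=\frac{\pi}{\rho}(1+\gamma^2)$.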
In the particular case $\mu=\overline{\mu}_\rho$ we have $(\gamma^{-}_{\rho,c}(\overline{\mu}_\rho))^2\equiv
\underline{\gamma}_{\rho,c}^2=(1+\rho^2)
\frac{c-4+c\rho^2+4\sqrt{1-c\rho^2}}{8(1-\rho^2)-c(1+\rho^2)^2}$,
$\underline{\gamma}_{\rho,c}^2<\overline{\gamma}_\rho^2$, $(\gamma^{+}_\rho(\overline{\mu}_\rho))^2\equiv\overline{\gamma}_\rho^2=(1+\rho^2)
\frac{3-\rho^2}{8(1-\rho^2)+(1+\rho^2)^2}$ and
$u_{\rho,\overline{\mu}_\rho,c}$ is a solution for $P(\lambda_{\rho,c}(\overline{\mu}_\rho),\Omega)$, where
\begin{equation}\label{lambdasotto}
\lambda_{\rho,c}:=\lambda_{\rho,c}(\overline{\mu}_\rho)\geq \underline{\lambda}_{\rho,c}=\frac{c(1+\rho^2)^2}{2}\frac{\pi}{\rho}(1+\underline{\gamma}_{\rho,c}^2)\simeq
\frac{4\pi c}{(8-c)\rho},
\end{equation}
and
\begin{equation}\label{lambdasopra}
\lambda_{\rho,c}:=\lambda_{\rho,c}(\overline{\mu}_\rho)\leq \overline{\lambda}_{\rho}=\frac{(1+\rho^2)^2}{2}\frac{\pi}{\rho}
(1+\overline{\gamma}_\rho^2)\simeq\frac{11\pi}{16\rho}
\end{equation}
as $\rho\rightarrow 0^+$. Moreover it is easy to verify that $\un{\lambda}_{\rho,c}$ is strictly decreasing at least
for $\rho\in(0,\frac 1{2\sqrt{10}}]$ and that there exists
$\underline{\rho}_*(c)<\frac 1{2\sqrt{10}}$ such that
$\underline{\lambda}_{\rho,c}\geq 8\pi$ for any $\rho\in(0,\underline{\rho}_*(c)]$.
We also see that $\ov{\lambda}_\rho\rightarrow 4\pi^{-}$ as $\rho\rightarrow 1^{-}$,
that it is strictly decreasing for $\rho\in(0,\rho_p]$ and strictly increasing
for $\rho\in[\rho_p,1)$ for some $\rho_p\simeq 0.5$, and then it is straightforward to check that
there exists $\overline{\rho}_*>\underline{\rho}_*(c)$ satisfying $0.0702<\overline{\rho}_*<0.0703$ such that
$\overline{\lambda}_\rho\geq 8\pi$ for any $\rho\in(0,\overline{\rho}_*]$.
Finally, since $\lambda_{\rho,c}(\mu)$ is continuous in $\mu$ and by using \eqref{stimalambdamu1} and \eqref{stimalambdamu2}
$$
0<\lambda_{\rho,c}(\mu)\leq\mu\frac{\pi}{\rho}\,(1+(\gamma^{+}_\rho(\mu))^2)\stackrel{\textnormal{as $\mu\rightarrow0$}}{\longrightarrow} 0,
$$
we obtain the existence of a solution for $P(\lambda,\Omega)$ not only for
$\lambda=\lambda_{\rho,c}$, but for any $\lambda\in(0,\lambda_{\rho,c}]$ as well.\hspace{\fill}$\square$
\bigskip
\proof[Proof of Theorem \ref{t1-intro}(b)] If ${\bar N}$ exists, then Remark \ref{susp} shows that
it is strictly greater than $4\pi$.\\
In view of Remark \ref{rem3.1} we can assume without loss of generality that $L(\partial\Omega)=1$.
Let $E_1$ be the John maximal ellipse of $\Omega$,
then by Theorem \ref{tjohn} $E_2:=\{c_0+2(x-c_0):x\in E_1\}$, where $c_0$ is the center of $E_1$, contains $\Omega$.
Again by using Remark \ref{rem3.1} we can also assume that $c_0=0$ and in particular that $E_1$ and $E_2$ have the following form
$$
E_1=\left\{\frac{x^2}{a^2}+\frac{y^2}{b^2}=1\right\},\quad E_2=\left\{\frac{x^2}{a^2}+\frac{y^2}{b^2}=4\right\},
$$
where clearly we can suppose that $0<b\leq a$.
By virtue of Ramanujan's estimate for the perimeter of the ellipse \cite{Villarino}, namely
$$
L(\partial E_1)\geq \pi\left\{(a+b)+\frac{3(a-b)^2}{10(a+b)+\sqrt{a^2+14ab+b^2}}\right\},
$$
since $E_1\subset\Omega$ with $\Omega$ convex, and since $N(\Omega)=\frac{L^2(\partial\Omega)}{A(\Omega)}$, we derive the following inequalities:
\begin{equation}\label{stimeab1}
1=L(\partial\Omega)\geq L(\partial E_1)\geq(a+b)\pi;\quad\qquad \frac{1}{N(\Omega)}=\frac{A(\Omega)}{L^2(\partial\Omega)}=A(\Omega)\geq A(E_1)=\pi a b.
\end{equation}
Moreover since $\Omega\subset E_2\subset R_{a,b}:=\{(x,y)\in\mathbb{R}^2\,|\, |x|\leq 2a,\,|y|\leq 2b\}$ we get
\begin{equation}\label{stimeab2}
1=L(\partial\Omega)\leq L(\partial E_2)\leq L(R_{a,b})=8(a+b),
\end{equation}
and by using Theorem \ref{tlassek}
\begin{equation}\label{stimeab3}
\frac{1}{N(\Omega)}=A(\Omega)\leq \frac{3\sqrt3}{\pi}A(E_1)=3\sqrt3 ab.
\end{equation}
To simplify the notation we set $N=N(\Omega)$. Collecting \eqref{stimeab1}, \eqref{stimeab2} and \eqref{stimeab3} we have
\begin{equation}\label{stimeab4}
\left\{
\begin{array}{l}
\frac1{3\sqrt3 N}\leq ab\leq \frac1{\pi N}\\
-b+\frac1{8}\leq a\leq -b+\frac1\pi,
\end{array}
\right.
\end{equation}
which in turn implies
$$
\left\{
\begin{array}{l}
b^2-\frac b\pi+\frac1{3\sqrt3 N}\leq 0\\
b^2-\frac b8+\frac1{\pi N}\geq 0.
\end{array}
\right.
$$
It is worth noticing that, since $a\geq b$ and $ab\leq\frac{1}{\pi N}$, if $N>\frac{64}{\pi}$ then $b<\frac18$.
Therefore, solving the above system of inequalities, with $N>\frac{64}\pi$, we get
$$
\frac{1-\sqrt{1-\frac{4\pi^2}{3\sqrt3 N}}}{2\pi}\leq b\leq \frac{1-\sqrt{1-\frac{256}{\pi N}}}{16}.
$$
Next, for $N>\frac{512}{\pi}$, considering the Taylor formula of the square root and estimating
the second order remainder we derive
\begin{equation}\label{b}
\frac{\pi}{3\sqrt3 N}\leq\frac{\frac{2\pi^2}{3\sqrt3 N}+\frac18(\frac{4\pi^2}{3\sqrt3 N})^2}{2\pi}\leq b\leq
\frac{\frac{128}{\pi N}+\frac{1}{2\sqrt2}(\frac{256}{\pi N})^2}{16}=\frac{8}{\pi N}+\frac{1024\sqrt2}{\pi^2 N^2},
\end{equation}
thus
\begin{equation}\label{a}
\frac18-\frac{8}{\pi N}-\frac{1024\sqrt2}{\pi^2 N^2}\leq a\leq\frac1\pi-\frac\pi{3\sqrt3 N}.
\end{equation}
Combining \eqref{b} and \eqref{a}, we have
$$
\psi(N):=\frac{\pi^2}{3\sqrt3 N-\pi^2}\leq\frac {b}{a}\leq
\frac{64+\frac{8192\sqrt2}{\pi N}}{\pi N-64-\frac{8192\sqrt2}{\pi N}}=:\varphi(N).
$$
By definition of $E_1$ and $E_2$ we are in a position to apply point (a) of this theorem with $c=\frac14$.
Let us fix $\bar N$ such that
$\frac{64+\frac{8192\sqrt2}{\pi \bar N}}{\pi \bar N-64-\frac{8192\sqrt2}{\pi \bar N}}=
\underline{\rho}_*(\frac14)$. We point out that, since $\underline{\rho}_*(\frac14)\simeq 0.0161$, we have
$\bar N>\frac{512}{\pi}$.
Then, for any $N\geq\bar N$, $\rho_{\scriptscriptstyle N}:=\frac{b}{a}\leq \underline{\rho}_*(\frac{1}{4})$ and so we get
the existence of a solution $u^{\scp(\lm)}$ to $P(\lambda,\Omega)$ for any $\lambda\leq\lambda_{\textnormal{\tiny{$N$}}}$ where
$$
\underline{\Lambda}_\textnormal{\tiny{$N$}}:=\underline{\lambda}_\textnormal{\tiny{$\varphi(N),\frac14$}}
\leq\underline{\lambda}_\textnormal{\tiny{$\rho_N,\frac14$}}
<\lambda_\textnormal{\tiny{$N$}}<\overline{\lambda}_\textnormal{\tiny{$\rho_N$}}
\leq\overline{\lambda}_\textnormal{\tiny{$\psi(N)$}}=:\overline{\Lambda}_\textnormal{\tiny{$N$}}.
$$
At last, from \eqref{lambdasotto} and \eqref{lambdasopra} we obtain the desired estimates on
$\underline{\Lambda}_\textnormal{\tiny{$N$}}$ and $\overline{\Lambda}_\textnormal{\tiny{$N$}}$:
$$
\underline{\Lambda}_\textnormal{\tiny{$N$}}\simeq\frac{\pi^2N}{496}+O(1)
\qquad \overline{\Lambda}_\textnormal{\tiny{$N$}}\simeq\frac{33\sqrt3N}{16\pi}
+O(1)\quad \textnormal{as $N\rightarrow+\infty$.}
$$
\hspace{\fill}$\square$
\section{The eigenvalue problem}\label{sec2}
The aim of this section is to prove Proposition \ref{pr2} below which yields positivity of the
first eigenvalue for the linearization of $P(\lambda,\Omega)$. Among other things,
with the aid of Proposition \ref{pr2} we have:
\proof[Proof of Proposition \ref{pr3}]
Let $\mathcal{G}_{\rho,c},\mathcal{G}_{N}$ denote the set of pairs of parameter-solutions
for $P(\lambda,\Omega)$ found in Theorem \ref{t1-intro}. Since the linearized problem for
$P(\lambda,\Omega)$ corresponds to the kernel equation for the second variation of $J_\lambda$,
then the conclusions of Proposition \ref{pr3}
are an immediate consequence of Proposition \ref{pr2} below and the uniqueness results in \cite{CCL}.
\hspace{\fill}$\square$
\bigskip
Putting
$$
\omega=\omega(u)=\displaystyle\frac{e^u}{\int\limits_{\Omega} e^u},\quad \mbox{and}\quad <f>_\omega=\int\limits_{\Omega}\omega(u)f,
$$
then the linearized problem for $P(\lambda,\Omega)$ takes the form
\begin{equation}\label{7.1}
\left\{
\begin{array}{ll}
-\Delta \varphi - \lambda\omega(u)\varphi +\lambda \omega(u)<\varphi>_\omega =0 & \mbox{in}\ \ \Omega \\
\varphi =0 & \mbox{on}\ \ \partial\Omega.
\end{array}
\right.
\end{equation}
\begin{proposition}\label{pr2}
For fixed $c\in(0,1]$, let $\Omega$ be a regular domain
such that $\{\rho^2x^2+y^2\leq \beta_-^2\}\subset\Omega\subset\{\rho^2x^2+y^2\leq\beta_+^2\}$,
with $\frac{\beta^2_-}{\beta^2_+}=c$. For any $\rho\in(0,\underline{\rho}_*(c)]$
let $u=u^{\scp(\lm)}\equiv u_{\rho,\mu,c}$ be a solution of $P(\lambda,\Omega)$ and of $Q(\mu,\Omega)$
with $\lambda=\mu\int\limits_{\Omega}e^u$ as obtained in Theorem \ref{t1-intro}(a) for $\lambda\in[0,\lambda_{\rho,c}]$.
Then \eqref{7.1} has only the trivial solution and in particular
the first eigenvalues of the
linearized problems for $P(\lambda,\Omega)$ and $Q(\mu,\Omega)$ at $u=u^{\scp(\lm)}\equiv u_{\rho,\mu,c}$ respectively are strictly positive.\\
Moreover, let $\Omega$ be a regular and convex domain with $N(\Omega)>\bar N$ as defined in Theorem \ref{t1-intro}(b)
and $u^{\scp(\lm)}$ be a solution of $P(\lambda,\Omega)$ and of $Q(\mu,\Omega)$
for $0\leq \lambda=\mu\int\limits_{\Omega}e^{u^{\scp(\lm)}}\leq \lambda_{\textnormal{\tiny{$N$}}}$
as obtained therein. Then the first eigenvalues of the
linearized problems for $P(\lambda,\Omega)$ and $Q(\mu,\Omega)$ at $u=u^{\scp(\lm)}$ are strictly positive.
\end{proposition}
\begin{remark}\label{branch}
As far as problem $Q(\mu,\Omega)$ is concerned, it is well known (see for example \cite{suzB}) that
the extremal (classical) solution $v_*$ is well defined (and unique); it corresponds to the extremal value $\mu_*$ such that no solution exists
for $\mu>\mu_*$ and the bifurcation diagram has a bending point at $(\mu_*,v_*)$. In particular the first eigenvalue
of the linearized problem for $Q(\mu,\Omega)$ is zero at $\mu_*$.\\
The reasons why we have strictly positive first eigenvalues for $\lambda\leq\lambda_{\rho,c}$ are the following:\\
(-) as will be shown in the proof below, the first eigenvalue of the linearized problem for $P(\lambda,\Omega)$ (say $\tau_1$)
is always greater than or equal to the first eigenvalue (which we will denote by $\nu_0$)
of the linearized problem for $Q(\mu,\Omega)$, and we will use the latter to estimate both;\\
(-) the value of $\mu$ corresponding to $\lambda_{\rho,c}$,
which is defined implicitly via $\lambda=\mu\int\limits_{\Omega_\rho}e^{u_{\scriptscriptstyle \mu}}$, is less than $\mu_*$.\\
\end{remark}
\proof
We will use the fact that (see \cite{bm}, \cite{CCL} and Remark \ref{solregdef} above)
if $u$ solves $P(\lambda,\Omega)$ then there exists $C=C(\Omega,\lambda,u)>0$ such that
$$
\frac{1}{C}\leq \omega(u)\leq C.
$$
Letting $H\equiv H^{1}_0(\Omega)$ and
$$
\mathcal{L}(\phi,\psi)= \int\limits_{\Omega} \left(\nabla \phi\cdot\nabla \psi\right)-
\lambda\int\limits_{\Omega} \omega(u)\phi\psi +
\lambda\left(\int\limits_{\Omega} \omega(u)\phi\right)\left(\int\limits_{\Omega} \omega(u)\psi\right),\;(\phi,\psi)\in H\times H,
$$
then by definition $\varphi\in H$ is a weak solution of \eqref{7.1} if
$$
\mathcal{L}(\varphi,\psi)=0,\quad \forall\, \psi \in H.
$$
We define $\tau\in\mathbb{R}$ to be an eigenvalue of the operator
$$
L[\varphi]:=-\Delta\varphi - \lambda\omega(u) (\varphi -< \varphi >_\omega),\quad \varphi\in H,
$$
if there exists a weak solution $\phi_0\in H\setminus \{0\}$ of the linear problem
\begin{equation}\label{7.2}
-\Delta \phi_0 - \lambda\omega(u)\phi_0+\lambda \omega(u)<\phi_0>_\omega =\tau \omega(u)\phi_0\ \ \mbox{in}\ \ \Omega,
\end{equation}
that is, if
$$
\mathcal{L}(\phi_0,\psi)=\tau\int\limits_{\Omega} \omega(u)\phi_0\psi,\quad \forall\, \psi \in H.
$$
Standard arguments show that the eigenvalues form an unbounded (from above) sequence
$$
\tau_1\leq \tau_2\leq\cdots\leq \tau_n\cdots,
$$
with finite dimensional eigenspaces (although in this situation the first eigenvalue cannot be assumed to be simple,
nor the first eigenfunction to be positive).
Let us define
$$
Q(\phi)=\frac{\mathcal{L}(\phi,\phi)}{<\phi^2>_\omega}=
\frac{\int\limits_{\Omega} \left|\nabla \phi\right|^2-\lambda<\phi^2>_\omega+\lambda <\phi>_\omega^2}{<\phi^2>_\omega},\quad \phi\in H.
$$
In particular it is not difficult to prove that the first eigenvalue can be characterized as
follows
$$
\tau_1=\inf\{Q(\phi)\,|\, \phi\in H\setminus\{0\}\}.
$$
At this point we argue by contradiction and assume that \eqref{7.1} admits a nontrivial solution.
Hence, in particular, $\tau_1\leq 0$ and we readily conclude that
$$
\tau_0:=\inf\{Q_0(\phi)\,|\, \phi\in H\setminus\{0\}\}\leq 0,\quad\mbox{where}\quad
Q_0(\phi)=\frac{\mathcal{L}_0(\phi,\phi)}{<\phi^2>_\omega}
$$
and
$$
\mathcal{L}_0(\phi,\psi)= \int\limits_{\Omega} \left(\nabla \phi\cdot\nabla \psi\right)
-\lambda\int\limits_{\Omega} \omega(u)\phi\psi,\;(\phi,\psi)\in H\times H.
$$
Clearly $\tau_0$ is attained by a simple and positive eigenfunction $\varphi_0$ which satisfies
\begin{equation}\label{7}
\left\{
\begin{array}{ll}
\displaystyle -\Delta\varphi_0 -\lambda \omega(u)\varphi_0 =\tau_0 \omega(u)\varphi_0 & \mbox{in}\ \ {\Omega} \\
\varphi_0 =0 & \mbox{on}\ \ \partial{\Omega}.
\end{array}
\right.
\end{equation}
Let us recall that we have obtained solutions for $P(\lambda,{\Omega})$ as solutions of $Q(\mu,{\Omega})$
in the form $u=u_{\rho,{\mu},c}$, for some $\mu=\mu(\rho)\leq \ov{\mu}_\rho$ whose value of $\lambda=\lambda(\mu,\rho,c)$
was then estimated as a
function of $\rho$. Therefore, at this point, it is more convenient to look at the linearized problem
in the other way, that is, to go back to $\mu=\lambda\left(\int_{\Omega} e^u\right)^{-1}$.
Hence, let us observe that for a generic value $\mu\leq \ov{\mu}_\rho$ \rife{7} takes the form
\begin{equation}\label{9}
\left\{
\begin{array}{ll}
\displaystyle -\Delta\varphi_0 -\mu K_{\rho,\mu,c} \varphi_0 =\nu_0 K_{\rho,\mu,c} \varphi_0 & \mbox{in}\ \ {\Omega} \\
\varphi_0 =0 & \mbox{on}\ \ \partial{\Omega},
\end{array}
\right.
\end{equation}
where
$$
K_{\rho,\mu,c}=e^{u_{\rho,\mu,c}}\quad\mbox{and}\quad \nu_0=\mu\frac{\tau_0}{\lambda}\leq 0.
$$
\begin{remark}{\it
Of course, the assertion about the positivity of the first eigenvalues corresponds to the positivity of
$\tau_1$ and $\nu_0$ respectively. Therefore that part of the statement will be automatically proved once we get the desired
contradiction.}
\end{remark}
Since also the linearized problem \eqref{7.1} is invariant under rotations, translations and dilations, by arguing
exactly as in the proof of Theorem \ref{t1-intro} we can assume without loss of generality that
$$
\Omega_{\rho,c}:=\{\rho^2 x^2+y^2\leq c\}\subset\Omega\subset\{\rho^2 x^2+y^2\leq 1\}=:\Omega_\rho.
$$
We observe that, by defining
$$
K_{\rho,\mu,c}^{(-)}:=e^{v_{\rho,\gamma^{-}_{\rho,c}(\mu),c}}=\left\{
\begin{array}{ll}
\left(\frac{1+\gamma^{-}_{\rho,c}(\mu)^2}{1+\frac{\gamma^{-}_{\rho,c}(\mu)^2}{c}(\rho^2x^2+y^2)}\right)^2 & \mbox{$(x,y)\in\Omega_{\rho,c}$} \\
1 & \mbox{$(x,y)\in\Omega\setminus\Omega_{\rho,c}$,}
\end{array}
\right.
$$
$$
K_{\rho,\mu}^{(+)}:=e^{v_{\rho,\gamma^{+}_\rho(\mu)}}=\left(\frac{1+\gamma^{+}_\rho(\mu)^2}{1+\gamma^{+}_\rho(\mu)^2(\rho^2x^2+y^2)}\right)^2,\qquad (x,y)\in\Omega_{\rho}
$$
we have
$$
K_{\rho,\mu,c}^{(-)}\leq K_{\rho,\mu,c}\leq K_{\rho,\mu}^{(+)}\qquad \textnormal{for any }(x,y)\in\Omega.
$$
In particular, since
$$
K_{\rho,\mu}^{(+)}\leq (1+\gamma_\rho^{+}(\mu)^2)^2\quad\mbox{and}\quad
1\leq K_{\rho,\mu,c}^{(-)}\leq (1+\gamma_{\rho,c}^{-}(\mu)^2)^2\quad \mbox{in}\quad \Omega,
$$
and
\begin{equation}\label{140213.0}
\Omega\subset T_\rho:=\{(x,y)\in\mathbb{R}^2\,|\,|x|\leq \rho^{-1},\;|y|\leq 1\},
\end{equation}
then, by using the fact that
$$
\nu_0=\inf\left\{
\left.\frac{\int\limits_{\Omega} \left|\nabla \varphi\right|^2\,dx-\mu\int\limits_{\Omega} K_{\rho,\mu,c} \varphi^2\,dx}
{\int\limits_{\Omega} K_{\rho,\mu,c}\varphi^2\,dx}\;\right|
\,\varphi\in H\right\}\leq0,
$$
it is not difficult to check that, for any $\mu\leq \ov{\mu}_\rho=\frac{(1+\rho^2)^2}{2}$, the following inequality
holds:
\begin{equation}\label{13}
\inf\left\{
\left.\frac{\int\limits_{T_\rho} \left|\nabla \varphi\right|^2\,dx-\mu(1+\gamma_\rho^{+}(\mu)^2)^2\int\limits_{T_\rho} \varphi^2\,dx}
{\int\limits_{T_\rho} \varphi^2\,dx}\;\right| \,\varphi\in H\right\}\leq 0.
\end{equation}
Hence, there exists $\overline{\mu}_0\leq 0$ such that,
putting $\sigma=\sigma(\mu,\rho)=\mu(1+\gamma_\rho^{+}(\mu)^2)^2+\overline{\mu}_0$, there exists a weak solution $\phi_0\in H$ of
\begin{equation}\label{14}
\left\{
\begin{array}{ll}
\displaystyle -\Delta\phi_0 -\sigma\phi_0 =0 & \mbox{in}\ \ T_\rho,\\
\displaystyle \phi_0=0 & \mbox{on}\quad \partial T_\rho.
\end{array}
\right.
\end{equation}
It is well known that the minimal eigenvalue $\sigma_{\it min}$ of \eqref{14} satisfies
$\sigma_{\it min}=\frac{\pi^2}{4}\rho^2+\frac{\pi^2}{4}>2(1+\rho^2)$ and we conclude that
\begin{equation}\label{16}
2(1+\rho^2)\leq\sigma(\mu,\rho)=\mu(1+\gamma_\rho^{+}(\mu)^2)^2+\overline{\mu}_0.
\end{equation}
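We recall, for the reader's convenience, that the value of $\sigma_{\it min}$ used above is attained on the rectangle $T_\rho$ by the product eigenfunction
$$
\phi_{\it min}(x,y)=\cos\left(\tfrac{\pi\rho x}{2}\right)\cos\left(\tfrac{\pi y}{2}\right),\qquad
-\Delta\phi_{\it min}=\left(\tfrac{\pi^2}{4}\rho^2+\tfrac{\pi^2}{4}\right)\phi_{\it min}\;\mbox{ in }\;T_\rho,\quad \phi_{\it min}=0\;\mbox{ on }\;\partial T_\rho,
$$
and that $\frac{\pi^2}{4}(1+\rho^2)>2(1+\rho^2)$ simply because $\pi^2>8$.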
Next, since $\underline{\rho}_*(c)<\tfrac{1}{2\sqrt{10}}$, it is not difficult to check that
$\sigma=\sigma(\mu,\rho)$ satisfies
$$
\sigma(\mu,\rho)\leq 1,
$$
for any $\rho\leq\underline{\rho}_*(c)$: indeed, since $\overline{\mu}_0\leq 0$, $\mu\leq\ov{\mu}_\rho=\frac{(1+\rho^2)^2}{2}$ and $1+\gamma_\rho^{+}(\mu)^2\leq 1+\overline{\gamma}_\rho^2=\frac{4}{3-\rho^2}$, we get $\sigma(\mu,\rho)\leq \frac{8(1+\rho^2)^2}{(3-\rho^2)^2}\leq 1$ whenever $\rho^2\leq\frac{1}{40}$.
This is of course a contradiction to \eqref{16}, and this fact concludes the first
part of the proof. As for the second one, it can be derived by arguing as above with some minor changes, as in the
proof of Theorem \ref{t1-intro}(b).\hspace{\fill}$\square$
\bigskip
\section{A multiplicity result}\label{sec:mp}
This section is devoted to the proof of Theorem \ref{mp-intro}.
\proof[Proof of Theorem \ref{mp-intro}]
\emph{(a).}\;\; Let us fix $\lambda\in(8\pi,\lambda_{a,c})\setminus 8\pi\mathbb{N}$,
then there exists $k\in\mathbb{N}^*$ such that $\lambda\in(8k\pi,8(k+1)\pi)$.
Let us fix now $k$ distinct points, $x_1,\ldots,x_k$,
in the interior of $\Omega_{\rho,\beta_-}=\{\rho^2x^2+y^2\leq \beta_-^2\}$.
Next we fix $\bar d>0$ such that $\dist(x_i,x_j)>4\bar d$ for any $i\neq j$
and such that $\dist(x_i,\partial\Omega_{\rho,\beta_-})>2\bar d$ for any $i\in\{1,\ldots,k\}$.
Following \cite{dj} we introduce some notation. For $d\in(0,\bar d)$ we consider
a smooth non-decreasing cut-off function $\chi_d:[0,+\infty)\rightarrow\mathbb{R}$ satisfying the following properties:
$$\left\{
\begin{array}{ll}
\chi_d(t)=t & \hbox{for $t\in[0,d]$} \\
\chi_d(t)=2d & \hbox{for $t\geq 2d$} \\
\chi_d(t)\in[d,2d] & \hbox{for $t\in[d,2d]$.}
\end{array}
\right.
$$
Then, given $\mu>0$, we define the function $\varphi_{\mu,d}\in H^1_0(\Omega)$ by
$$
\varphi_{\mu,d}(y)=\left\{
\begin{array}{ll}
\log\,\sum_{j=1}^k \frac 1k
\left(\frac{8\mu^2}{(1+\mu^2\chi^2_d(|y-x_j|))^2}\right)-\log\left(\frac{8\mu^2}{(1+4d^2\mu^2)^2}\right)
& \hbox{$y\in\Omega_{\rho,\beta_-}$} \\
0 & \hbox{$y\in\Omega\setminus\Omega_{\rho,\beta_-}$.}
\end{array}
\right.
$$
By arguing exactly as in Section 5 of \cite{dj} we have
$$
F_{\lambda}(\varphi_{\mu,d})\leq(16k\pi-2\lambda+o_d(1))\ln(\mu)+O(1)+C_d
$$
where $C_d$ is a constant independent of $\mu$ and $o_d(1)\rightarrow 0$ as $d\rightarrow 0$.\\
Then, there exist $d_0$ sufficiently small and $\mu_0$ sufficiently large such that
$$
F_{\lambda}(\varphi_{\mu_0,d_0})<F_{\lambda}(u^{\scp(\lm)})-1.
$$
Next we define
$$
\mathcal{D}=\{\gamma:[0,1]\rightarrow H^1_0(\Omega)\,:\,\gamma\textnormal{ is continuous, $\gamma(0)=u^{\scp(\lm)}$, $\gamma(1)=\varphi_{\mu_0,d_0}$}\}
$$
and, for any $\eta\in(8k\pi,8(k+1)\pi)\cap(8\pi,\lambda_{a,c})$, we set
$$
c_\eta=\inf\limits_{\gamma\in\mathcal{D}}\max\limits_{s\in[0,1]}F_\eta(\gamma(s)).
$$
Since $u^{\scp(\lm)}$ is a strict local minimum for $F_\lambda$, there exists $\varepsilon_\lambda>0$ such that $c_\lambda\geq F_\lambda(u^{\scp(\lm)})+\varepsilon_\lambda$.
Besides, since $F_\lambda$ is continuous and the branch $\mathcal{G}_{\rho,c}$ is smooth,
we have that a bound on the min-max levels applies uniformly in a small neighborhood of $\lambda$.
More precisely the following straightforward fact holds true.
\begin{lemma}\label{lemma:lambda0}
There exists $\lambda_0>0$ sufficiently small such that
$$
[\lambda-\lambda_0,\lambda+\lambda_0]\subset(8k\pi,8(k+1)\pi)\cap(8\pi,\lambda_{\rho,c})
$$
and for any $\eta\in[\lambda-\lambda_0,\lambda+\lambda_0]$ we have $F_\eta(\varphi_{\mu_0,d_0})\leq F_\eta(u^{\scp(\lm)})-\frac12$ and
$$
c_\eta\geq F_\lambda(u^{\scp(\lm)})+\frac34\varepsilon_\lambda\geq F_\eta(u^{\scp(\lm)})+\frac12\varepsilon_\lambda.
$$
\end{lemma}
If $\eta$, $\eta'\in(\lambda-\lambda_0,\lambda+\lambda_0)$, $\eta\leq \eta'$, then
$
\frac{F_\eta}{\eta}-\frac{F_{\eta'}}{\eta'}=\frac12\left(\frac{1}{\eta}-\frac{1}{\eta'}\right)\int_{\Omega}|\nabla u|^2\geq 0,
$
whence, taking first the maximum along any path $\gamma\in\mathcal{D}$ and then the infimum over $\mathcal{D}$,
\begin{equation}\label{nonincreasing}
\frac{c_\eta}{\eta}\geq\frac{c_{\eta'}}{\eta'}.
\end{equation}
Therefore the function $\eta\mapsto \frac{c_\eta}{\eta}$ is non-increasing and in turn
differentiable a.e. in $(\lambda-\lambda_0,\lambda+\lambda_0)$.
Set
$$
\Lambda=\{\eta\in(\lambda-\lambda_0,\lambda+\lambda_0)\,|\,\frac{c_\eta}{\eta}\textrm{ is differentiable at $\eta$}\}.
$$
\begin{lemma}\label{lemma:denso}
$c_\eta$ is achieved by a critical point $v^{\scp(\lm)}$ of $F_\eta$ provided that $\eta\in\Lambda$.
\end{lemma}
\proof The proof is a step by step adaptation of the arguments of Lemma 3.2 of
\cite{DJLW} where, with respect to their notations, we have just to choose $\delta<\frac14\varepsilon_{\lambda}$.
\hspace{\fill}$\square$
\bigskip
Finally we state a (well known) compactness result for sequences of solutions of
$P(\lambda_n,\Omega)$.
\begin{lemma}\label{lemma:compattezza}
Let $\lambda_n\rightarrow\lambda$ and let $v^{\scriptscriptstyle(\lambda_n)}\in H^1_0(\Omega)$ be a solution of $P(\lambda_n,\Omega)$.
If $\lambda\notin 8\pi\mathbb{N}$, then $v^{\scriptscriptstyle(\lambda_n)}$ admits a subsequence which converges smoothly
to a solution $v^{\scp(\lm)}$ of $P(\lambda,\Omega)$.
\end{lemma}
\proof In view of Lemma 2.1 in \cite{CCL}, $v^{\scriptscriptstyle(\lambda_n)}$ is uniformly bounded
in a \un{fixed} neighborhood of the boundary.
Hence the conclusion is a straightforward and well known consequence of the Brezis-Merle \cite{bm}
concentration-compactness result as completed by Li and Shafrir \cite{ls}.
\hspace{\fill}$\square$
\bigskip
Now we are able to conclude the proof of Theorem \ref{mp-intro}(a).
Indeed the thesis is an easy consequence of Lemmas \ref{lemma:denso} and \ref{lemma:compattezza},
noticing that the solution $v^{\scp(\lm)}$, obtained by this procedure,
does not coincide with $u^{\scp(\lm)}$, because by Lemma \ref{lemma:lambda0} $F_\lambda(v^{\scp(\lm)})> F_\lambda(u^{\scp(\lm)})$.
\
\emph{(b).}\;\;This part can be proved exactly as the previous one.
\hspace{\fill}$\square$
\section{A refined estimate for solutions on $\mathcal{G}_{\rho,1}$}\label{s5}
Let $\mathcal{G}_{\rho,c},\mathcal{G}_\textnormal{\tiny{$N$}}$ denote the branches of parameter-solution pairs
of $P(\lambda,\Omega)$ found in Theorem \ref{t1-intro}.
As a consequence of Theorem \ref{unique:1-intro} and Proposition
\ref{pr2} we obtain the following:
\begin{proposition}\label{pr3:I}
Let $\ov{\lambda}\geq 8\pi$, $\widetilde{\rho}_{1}$ and $\widetilde{N}$ be as in Theorem \ref{unique:1-intro}. Let
either
$\mathcal{G}^{(\ov{\lambda})}=\{(\lambda,u^{\scp(\lm)})\in\mathcal{G}_{\rho,c}\,:\,\lambda\in[0,\ov{\lambda})\}$ or
$\mathcal{G}^{(\ov{\lambda})}=\{(\lambda,u^{\scp(\lm)})\in\mathcal{G}_\textnormal{\tiny{$N$}}\,:\,\lambda\in[0,\ov{\lambda})\}$
denote the part of $\mathcal{G}_{\rho,c},\mathcal{G}_\textnormal{\tiny{$N$}}$
with $\lambda\in[0,\ov{\lambda})$, $\rho\in(0,\widetilde{\rho}_{1}]$
and $N\geq \widetilde{N}$ respectively. Then the energy function
\begin{equation}\label{endef:II}
\widehat{E}(\lambda):= \mathcal{E}(\omega(u^{\scp(\lm)})),\quad u^{\scp(\lm)} \in \mathcal{G}^{(\ov{\lambda})},
\end{equation}
is a single-valued and smooth function of $\lambda\in[0,\ov{\lambda})$.
\end{proposition}
\proof
By using the explicit bounds \rife{susu} and the fact that
$$
\mathcal{E}(\omega(u^{\scp(\lm)}))=\frac{1}{2\lambda}\int_{\Omega}\omega(u^{\scp(\lm)})u^{\scp(\lm)},
$$
it is straightforward to show that the energy of any solution lying on $\mathcal{G}^{(\ov{\lambda})}$ is
uniformly bounded
from above by a suitable value $\ov{E}$, which we can assume without loss of generality to be larger than $1$.
Therefore Theorem \ref{unique:1-intro} applies and we see that
$\mathcal{E}(\omega(u^{\scp(\lm)}))$ is single-valued as a function of $\lambda\in[0,\ov{\lambda})$,
and consequently $\widehat{E}(\lambda)$ is well defined. At this point Proposition \ref{pr2} implies that
it is smooth as well, see also Remark \ref{newrmsmooth}.
\hspace{\fill}$\square$
\bigskip
Our next aim is to improve Proposition \ref{pr3:I} in case $\Omega=\Omega_\rho$ to come up with a unique solution of
$P(\lambda,\Omega_\rho)$ at fixed energy. Indeed, this is the content of Theorem \ref{unique:2-intro} whose proof is
the main aim of this section. To achieve this goal we have to pay a price in
terms of a smallness assumption on the energy and indeed we will obtain this result
by using Theorem \ref{unique:1-intro} and the expansion of solutions as functions of $\rho$.
Actually, we first need a more precise formula about the explicit form of solutions
of $P(\lambda,\Omega_\rho)$ lying on $\mathcal{G}_{\rho,1}$, as claimed in \rife{sol:II-intro} of
Theorem \ref{thm:261112-intro}.
By using these expansions we will be able to calculate explicitly, at least for small $\rho$,
their energy as a function of $\lambda$ and then prove that $\widehat{E}$ is monotone. It turns out
that this is enough to prove uniqueness of solutions with fixed energy.
Actually we also provide another proof (still by using the sub-supersolutions method)
of the existence of solutions for $P(\lambda,\Omega_\rho)$.
\bigskip
{\bf The Proof of Theorem \ref{thm:261112-intro}.}\\
As above, the notation $\mbox{O}(\rho^m)$, $m\in\mathbb{N}$, will be used in the rest of this proof to denote various quantities
uniformly bounded by $C_m\rho^m$, with $C_m>0$ a suitable constant depending only on $\ov{\lambda}$.\bigskip
Let us first seek solutions
$v_\rho$ of $Q(\mu_0\rho,\Omega_\rho)$ in the form
\begin{equation}\label{021212.1}
v_\rho=\rho \phi_0+\rho^2\phi_{0,1},\quad \phi_0,\phi_{0,1}\in C^2(\Omega_\rho)\cap C^0(\ov{\Omega_\rho}),
\end{equation}
with the additional constraints
$$
0\leq \|\phi_0\|_\infty\leq M_0,\quad 0\leq \|\phi_{0,1}\|_\infty\leq M_1.
$$
Since $v_\rho$ must satisfy $-\Delta v= \mu_0 \rho{\displaystyle e^v}$ then $\phi_0$ and $\phi_{0,1}$ should be solutions of
\begin{equation}\label{191112.1}
\graf{
-\Delta \phi_0 =\mu_0 & \mbox{in}\quad \Omega_\rho\\
\phi_0 =0 & \mbox{on}\quad \partial\Omega_\rho
}
\end{equation}
and
\begin{equation}\label{231012.1.II}
\graf{
-\Delta \phi_{0,1}=\mu_0\rho^{-1}\left(e^{\rho\phi_0}e^{\rho^2\phi_{0,1}}-1\right) & \mbox{in}\quad \Omega_\rho\\
\phi_{0,1}=0 & \mbox{on}\quad \partial\Omega_\rho
}
\end{equation}
respectively. Therefore the explicit expression of $\phi_0$ is easily derived to be
\begin{equation}\label{021212.2}
\phi_0(x,y;\rho)=\frac{\mu_0}{2(1+\rho^2)}\left(1-(\rho^2x^2+y^2)\right),\quad (x,y)\in \Omega_\rho.
\end{equation}
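Indeed, a direct check gives
$$
-\Delta \phi_0=\frac{\mu_0}{2(1+\rho^2)}\left(2\rho^2+2\right)=\mu_0\quad\mbox{in}\quad \Omega_\rho,\qquad \phi_0=0\quad\mbox{on}\quad\partial\Omega_\rho .
$$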
Please observe that the function $\phi_0(x,y;\lambda,\rho)$ as defined in \rife{phi0-intro}
will be recognized to be $\phi_0(x,y;\rho)$ where $\mu_0=\mu_0(\lambda,\rho)$.\\
Clearly
$$
\|\phi_0\|_\infty=\frac{\mu_0}{2(1+\rho^2)},
$$
and therefore, in particular we have
\begin{equation}\label{231012.2.II}
\forall\, t_0>1\, \exists\,\rho_1=\rho_1(t_0)>0\,:\,
e^{\rho\phi_0}\leq e^{\frac{\mu_0\rho}{2(1+\rho^2)}}<1+t_0\frac{\mu_0\rho}{2},\quad \forall\,\rho<\rho_1,
\end{equation}
the last inequality being a trivial consequence of the convexity of $e^{\frac{\mu_0 s}{2(1+s^2)}}$ in a right neighborhood of $s=0$.\\
Our next aim is to use the sub-supersolutions method to obtain solutions for \rife{231012.1.II}. Let us define
$$
f(t;\phi_0):=e^{\rho\phi_0}e^{\rho^2t},\quad t\geq 0,
$$
so that, in particular, we have
\begin{equation}\label{231012.3.II}
\forall\, t_1>1\,\exists\,\rho_2>0\,:\,e^{\rho^2t}<1+t_1 \rho^2 t,\quad \forall\,\rho<\rho_{2},
\end{equation}
with $\rho_2$ depending on $t_1$.
By using \rife{231012.2.II} and \rife{231012.3.II} we conclude that
$$
f(t;\phi_0)\leq \left(1+t_0\frac{\mu_0\rho}{2}\right)\left(1+t_1 \rho^2 t\right),\quad\forall\,\rho<\min\{\rho_1,\rho_2\}.
$$
Hence, by setting
$$
A_{+}=1+t_0\frac{\mu_0\rho}{2},
$$
we see that a supersolution $\phi_{+}$ for \rife{231012.1.II} will be obtained whenever we will be able
to solve the differential problem
\begin{equation}\label{231012.4.II}
\graf{
-\Delta \phi_{+}\geq t_0\frac{\mu_0^2}{2} + t_1\mu_0 A_{+} \rho \phi_{+} & \mbox{in}\quad \Omega_\rho\\
\phi_{+}\geq 0 & \mbox{on}\quad \partial\Omega_\rho\\
0\leq\phi_{+}\leq M_1 & \mbox{in}\quad \Omega_\rho.
}
\end{equation}
Let us define
$$
\phi_+(x,y)=\frac{C_{+}}{2(1+\rho^2)}\left(1-(\rho^2x^2+y^2)\right),\quad (x,y)\in \Omega_\rho,
$$
with $C_+>0$, so that the differential inequality in \rife{231012.4.II} yields
$$
-\Delta \phi_+=C_+=\frac{C_+}{2}+\frac{C_+}{2}=\frac{C_+}{2}+(1+\rho^2)\|\phi_+\|_\infty\geq t_0\frac{\mu_0^2}{2} + t_1\mu_0 A_{+} \rho \phi_{+}.
$$
Therefore \rife{231012.4.II} will be satisfied whenever we can choose $C_+$ such that the following inequalities are verified
\begin{equation}\label{231012.5.II}
\graf{
C_+ &\geq t_0\mu_0^2\\
(1+\rho^2)&\geq t_1\mu_0 A_{+} \rho\\
C_+&\leq 2(1+\rho^2)M_1.
}
\end{equation}
We first impose
$$
C_+=2M_1,
$$
so that the third inequality in \rife{231012.5.II} is automatically satisfied and then substitute it in the first inequality,
to obtain
\begin{equation}\label{021212.3}
\mu_0^2\leq \min\left\{\frac{2M_1}{t_0},\left(4M_0\right)^2\right\}={\frac{2M_1}{t_0}},
\;\mbox{for any}\;M_0\;\mbox{large enough}.
\end{equation}
We conclude in particular that the second inequality is trivially satisfied for any $\rho$ small enough. At this point Theorem \ref{t0}
shows that there exists a solution $v_\rho$ of $Q(\mu_0\rho,\Omega_\rho)$ taking the form \rife{021212.1},
where $\phi_0$ is defined as in \rife{021212.2}
and $0\leq \phi_{0,1}\leq M_1$ with the constraint \rife{021212.3}.\\
Our next aim is to show that $\forall\,\ov{\lambda}\geq 8\pi$ we can find $\rho_0$ small enough such that $\forall\,\rho<\rho_0$ and for any $\lambda<\ov{\lambda}$ we can
choose $\mu_0$ in such a way that $v_\rho$ is a solution of $P(\lambda,\Omega_\rho)$. Indeed, we have
\begin{equation}\label{vip}
\lambda=\lambda_0(\mu_0,\rho):=\mu_0\rho\int\limits_{\Omega_\rho} e^{v_\rho}=\pi\mu_0+f_0(\mu_0,\rho),\;\;\mbox{where}\;\;
|f_0(\mu_0,\rho)|\leq C_{\scriptscriptstyle M_1}\rho^2,
\end{equation}
where $\lambda$ is a fixed value in the range of $\lambda_0$ and we have used $\|\phi_{0,1}\|_\infty\leq M_1$ and
$$
\int\limits_{\Omega_\rho} e^{\rho\phi_0}=\left(1+\rho^2\right)\frac{2\pi}{\mu_0\rho^3}\left(e^{\frac{\mu_0\rho^2}{2(1+\rho^2)}}-1\right).
$$
\begin{lemma}\label{intermedio}
$\lambda_0(\mu_0,\rho)$ is smooth.
\end{lemma}
\proof
It is straightforward to check that the energy of these solutions $v_\rho$ is uniformly bounded
from above by a suitable positive number $\ov{E}$ (possibly depending on $M_1$ and $\ov{\lambda}$)
which we can assume without loss of generality to be larger than 1.
Therefore Theorem \ref{unique:1-intro} shows that they must coincide
with some subset of the branch $\mathcal{G}^{(\ov{\lambda})}$ (see Proposition \ref{pr3:I}).
We can use Proposition \ref{pr2} at this point and conclude that $\lambda_0(\mu_0,\rho)$ is smooth as a function of
$\mu_0$. At this point the (joint) regularity of $\lambda_0(\mu_0,\rho)$ as a function of $\mu_0$ and $\rho$
is derived by a conformal transplantation on the unit disk, classical representation formulas
for derivatives of Riemann maps (see for example \cite{pom}) and standard elliptic theory.
\hspace{\fill}$\square$
\bigskip
Hence, in particular
we can always choose $\mu_0$ and $\rho_0$ such that $\forall\,\rho<\rho_0$ we have (see \rife{021212.3})
$$
[0,\ov{\lambda})\subset\lambda_0\left(\left[0,2\sqrt{\frac{M_1}{t_0}}\right),\rho\right),
$$
and since $\lambda_0(\mu_0,\rho)$ is also continuous, we finally obtain the desired solution for any $\lambda<\ov{\lambda}$.
At this point, let us fix a positive value $\lambda<\ov{\lambda}$ for which we seek an approximate solution
$u_{\scriptscriptstyle \lambda}$ of $P(\lambda,\Omega_\rho)$. As a consequence of \rife{vip} we have
\begin{equation}\label{mufix}
\mu_0=\mu_0(\lambda,\rho)=\frac{\lambda}{\pi}+\mbox{O}(\rho^2),
\end{equation}
and then
\begin{equation}\label{ulfix}
u_{\scriptscriptstyle \lambda}:=\rho \phi_0+\rho^2\phi_{0,1}=\frac{\rho\mu_0}{2(1+\rho^2)}\left(1-(\rho^2x^2+y^2)\right)\left(1+\mbox{O}(\rho)\right)=
\frac{\rho\lambda}{2\pi}\left(1-(\rho^2x^2+y^2)\right)\left(1+\mbox{O}(\rho)\right),
\end{equation}
is a solution for $P(\lambda,\Omega_\rho)$, as desired.\\
\begin{remark}
However, by using \rife{lm0-intro}, \rife{mufix} and \rife{ulfix}, a straightforward explicit evaluation shows that
$$
\mathcal{E}(\omega_{\scriptscriptstyle \lambda})=\frac{1}{2}\int\limits_{\Omega_\rho} \omega_{\scriptscriptstyle \lambda} G_\rho[\omega_{\scriptscriptstyle \lambda}]=
\frac{1}{2\lambda}\int\limits_{\Omega_\rho} \omega_{\scriptscriptstyle \lambda} u_{\scriptscriptstyle \lambda}=\frac{\mu_0\rho}{2\lambda^2}\int\limits_{\Omega_\rho}e^{u_{\scriptscriptstyle \lambda}} u_{\scriptscriptstyle \lambda}=
$$
$$
\frac{\lambda\rho+\mbox{O}(\rho^3)}{2\pi\lambda^2}\int\limits_{\Omega_\rho}e^{u_{\scriptscriptstyle \lambda}} u_{\scriptscriptstyle \lambda}=\frac{\rho}{8\pi}\left(1+\mbox{O}(\rho)\right),
$$
see Remark \ref{rem6.1}. Therefore, as far as we are interested in the monotonicity of
$\widehat{E}(\lambda)$, we see that the first order expansion is not enough for our purpose.
\end{remark}
Hence we make a further step to come up with an expansion of $\mathcal{E}$ at order $\rho^2$.
Let $\phi_{0,1}$ be the solution
of \rife{231012.1.II} determined above; we write it as
$$
\phi_{0,1}=\phi_1+\rho\phi_2,
$$
so that, if $\phi_1$ is the unique solution of
\begin{equation}\label{011112.0}
\graf{
-\Delta \phi_1 =\mu_0\phi_0=\mu_0^2\psi_0 & \mbox{in}\quad \Omega_\rho\\
\phi_1 =0 & \mbox{on}\quad \partial\Omega_\rho
}
\end{equation}
(see \rife{phi0-intro}-\rife{080213.1-intro})
then by definition $\phi_2$ is a solution for
\begin{equation}\label{011112.1}
\graf{
-\Delta \phi_2=\mu_0\rho^{-1}\left(e^{\rho\phi_0}e^{\rho^2\phi_{0,1}}-1-\phi_0\right) & \mbox{in}\quad \Omega_\rho\\
\phi_2=0 & \mbox{on}\quad \partial\Omega_\rho
}
\end{equation}
and it is not difficult to check that it also satisfies $\|\phi_2\|\leq M_2$, for a suitable
$M_2$ depending only on $M_0$ and $M_1$.\\
At this point standard elliptic estimates, used together with the maximum principle, show
that $\{\phi_0,\phi_1,\phi_2\}\subset C^{2}_0(\Omega)$ and
$\|D^{(k)}_\lambda\phi_0\|_{\scriptscriptstyle C^{2}_0(\Omega)}+\|D^{(k)}_\lambda\phi_1\|_{\scriptscriptstyle C^{2}_0(\Omega)}+
\|D^{(k)}_\lambda\phi_2\|_{\scriptscriptstyle C^{2}_0(\Omega)}\leq \ov{M}_k$
for suitable constants $\ov{M}_k>0$ depending only on $M_0$, $M_1$, $M_2$, that is, depending only on $\ov{\lambda}$.\\
Let $\lambda_0=\lambda_0(\mu_0,\rho)$ as defined in \rife{vip} above. In view of Lemma \ref{intermedio},
we can expand $\lambda_0$ at second order in $\rho$,
$$
\lambda_0(\mu_0,\rho):=\mu_0\rho\int\limits_{\Omega_\rho} e^{v_\rho}=
\mu_0\rho\int\limits_{\Omega_\rho} (1+ \rho\phi_0
+\mbox{O}(\rho^2))=
$$
$$
\pi\mu_0+\frac{\pi\mu_0^2\rho}{4(1+\rho^2)}+\mbox{O}(\rho^2)=\pi\mu_0+\frac{\pi\mu_0^2\rho}{4}+\mbox{O}(\rho^2).
$$
Hence, for a fixed value $\lambda$ in the range of $\lambda_0$ we can use the implicit function theorem to obtain the
inverse expansion up to order $\rho^2$, that is
$$
\lambda=\pi\mu_0+\frac{\pi\mu_0^2\rho}{4}+\mbox{O}(\rho^2),\quad\mu_0=\frac{\lambda}{\pi}-\frac{\lambda^2}{4\pi^2}\rho+\mbox{O}(\rho^2),
$$
and \rife{mu0-intro}-\rife{mu0-intro1} follows immediately.\\
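As a quick check, substituting the second expansion into the first yields $\pi\mu_0+\frac{\pi\mu_0^{2}\rho}{4}=\lambda-\frac{\lambda^{2}\rho}{4\pi}+\frac{\lambda^{2}\rho}{4\pi}+\mbox{O}(\rho^{2})=\lambda+\mbox{O}(\rho^{2})$, as it should be.\\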
This observation concludes the proof.\hspace{\fill}$\square$
\bigskip
{\bf The Proof of Theorem \ref{unique:2-intro}}\\
The notation $\mbox{O}(\rho^m)$, $m\in\mathbb{N}$ will be used in the rest of this proof to denote various quantities
uniformly bounded by $C_m\rho^m$ with $C_m>0$ a suitable constant possibly depending on $\ov{\lambda}$ and on
the constants $\ov{M}_k$, $k=1,2,3$ as obtained in Theorem \ref{thm:261112-intro}.\\
By using \rife{lm0-intro} above and Theorem \ref{thm:261112-intro} we obtain the Taylor expansion
$$
\mathcal{E}(\omega_{\scriptscriptstyle \lambda})=\frac{1}{2}\int\limits_{\Omega_\rho} \omega_{\scriptscriptstyle \lambda} G_\rho[\omega_{\scriptscriptstyle \lambda}]=
\frac{1}{2\lambda}\int\limits_{\Omega_\rho} \omega_{\scriptscriptstyle \lambda} u_{\scriptscriptstyle \lambda}=\frac{\mu_0\rho}{2\lambda^2}
\int\limits_{\Omega_\rho}e^{u_{\scriptscriptstyle \lambda}} u_{\scriptscriptstyle \lambda}=
$$
$$
\frac{\mu_0\rho}{2\lambda^2}\int\limits_{\Omega_\rho}e^{u_{\scriptscriptstyle \lambda}} u_{\scriptscriptstyle \lambda}=
\frac{\mu_0\rho}{2\lambda^2}\int\limits_{\Omega_\rho}
(1+ \rho \phi_0 + \mbox{O}(\rho^2))(\rho \phi_0 + \rho^2\phi_1 +\mbox{O}(\rho^3))=
$$
$$
\frac{\mu_0\rho}{2\lambda^2}\int\limits_{\Omega_\rho}(\rho \phi_0 + \rho^2 \phi^2_0 + \rho^2\phi_1 +\mbox{O}(\rho^3))
=\frac{\mu_0\rho}{2\lambda^2}\left[\frac{\pi\mu_0}{4(1+\rho^2)}+\frac{\pi\mu^2_0\rho}{12(1+\rho^2)^2}
+\frac{\pi\mu^2_0\rho}{12(1+\rho^2)^2}+\mbox{O}(\rho^2) \right],
$$
where we have used the fact that
\begin{equation}\label{011112.3}
\int\limits_{\Omega_\rho}\rho^2\phi_1=\frac{\pi\mu^2_0\rho}{12(1+\rho^2)^2},
\end{equation}
which can be obtained by using the explicit expression of $\phi_0$ in \rife{phi0-intro}
together with the fact that $\phi_1$
solves \rife{011112.0}; see Appendix \ref{subs:1} below for further details.\\
Hence, by using Proposition \ref{pr3:I} and \rife{mu0-intro}-\rife{mu0-intro1} and \rife{regular-intro}, we have
$$
\widehat{E}(\lambda):=\mathcal{E}(\omega_{\scriptscriptstyle \lambda})=\frac{\pi \mu_0^2\rho}{8\lambda^2}+\frac{\pi \mu_0^3\rho^2}{12\lambda^2}+\mbox{O}(\rho^3)=
\frac{\pi \rho}{8\lambda^2}\left(\frac{\lambda^2}{\pi^2}-\frac{\lambda^3}{2\pi^3}\rho+\mbox{O}(\rho^2)\right)+
$$
$$
\frac{\pi \rho^2}{12\lambda^2}\frac{\lambda^3}{\pi^3}+\mbox{O}(\rho^3)=
\frac{\rho}{8\pi}+\frac{\rho^2}{48\pi^2}\lambda+\mbox{O}(\rho^3).
$$
In particular we conclude that
\begin{equation}\label{231112.3.8}
\widehat{E}(\lambda)=\frac{\rho}{8\pi}+\frac{\rho^2}{48\pi^2}\lambda+\mbox{O}(\rho^3),
\end{equation}
and, in view of \rife{mu0-intro}-\rife{mu0-intro1} and \rife{regular-intro},
\begin{equation}\label{231112.3.a}
\frac{d}{d\lambda}\widehat{E}(\lambda)=\frac{\rho^2}{48\pi^2}+\mbox{O}(\rho^3),
\end{equation}
$$
\frac{d^2}{d\lambda^2}\widehat{E}(\lambda)=\mbox{O}(\rho^3).
$$
At this point \rife{231112.3.8} shows that we may
restrict the domain of $\widehat{E}$ to the preimages of
$E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$. Then \rife{231112.3.a} implies that $\widehat{E}(\lambda)$
is monotonic increasing there. Hence the preimage of
$\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$ is exactly $[0,\widehat{\lambda}_\rho]$
and the uniqueness of $u_{\scriptscriptstyle \lambda}$ as a function of $\lambda$ implies that the equation
$\mathcal{E}(\omega(u_{\scriptscriptstyle \widehat{\lambda}(E)}))=E$ defines $\widehat{\lambda}(E)$
as a monotonic increasing function of $E$ in $\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$.
Therefore, we can use \rife{231112.3.8} and \rife{231112.3.a} together with the implicit function theorem to
take the inverse up to order $\rho^2$, that is
$$
\widehat{\lambda}(E)=\frac{48\pi^2}{\rho^2}\left(E-\frac{\rho}{8\pi}\right)+\mbox{O}(\rho),
$$
and then conclude that
$$
\frac{d}{d E}\widehat{\lambda}(E)=\frac{48\pi^2}{\rho^2}+\mbox{O}(\rho),
$$
and
$$
\frac{d^2}{d E^2}\widehat{\lambda}(E)=\mbox{O}(\rho).
$$
\hspace{\fill}$\square$
\section{The Entropy is concave in $E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$.}\label{sec:entropy}
Let us recall that according to definition \ref{dedef} the density corresponding to a solution
$u_{\scriptscriptstyle \lambda}$ of $P(\lambda,\Omega_\rho)$ is defined to be
$$
\omega_{\scriptscriptstyle \lambda}\equiv\omega(u_{\scriptscriptstyle \lambda}):=\displaystyle\frac{e^{u_{\scriptscriptstyle \lambda}}}{\int\limits_{\Omega_\rho} e^{u_{\scriptscriptstyle \lambda}}}.
$$
As usual $\mathcal{G}_{\rho,1}$ denotes the branch of solutions obtained in Theorem \ref{t1-intro}(a).\\
When evaluated on $(\lambda,u_{\scriptscriptstyle \lambda})\in\mathcal{G}_{\rho,1}$, of course $\mathcal{S}(\omega(u_{\scriptscriptstyle \lambda}))$ yields a function
of $\lambda$ defined in principle on $\lambda\in[0,\lambda_{\rho,1}]$.
Then we can use MVP-(iv), that is, the fact that any entropy maximizer (at fixed $E$) of the MVP satisfies
$P(\lambda,\Omega)$ (for a certain unknown value $\lambda$). But then we can observe that
Theorem \ref{unique:2-intro} states that there exists one and only one solution of
$P(\lambda,\Omega)$ with $\lambda=\widehat{\lambda}(E)$ such that
the energy is exactly $E$, $\widehat{E}(\lambda)=E$, whenever $E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$.\\
Therefore we conclude that indeed $S(E)\equiv \mathcal{S}(\omega(u_{\scriptscriptstyle \lambda}))\left.\right|_{\lambda=\widehat{\lambda}(E)}$ in
$\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$.
Hence, when evaluated on those densities $\omega_{\scriptscriptstyle \widehat{\lambda}(E)}$ as obtained in Theorem \ref{unique:2-intro},
we have
$$
S(E)\equiv\mathcal{S}(\omega_{\scriptscriptstyle \widehat{\lambda}(E)})=-2E \widehat{\lambda}(E)+
\log\left(\,\int\limits_{\Omega_\rho} e^{u_{\scriptscriptstyle \widehat{\lambda}(E)}}\right),\quad E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right].
$$
In particular, in view of Theorem \ref{unique:2-intro} we can set
$$
\dot{u}=\frac{du_{\scriptscriptstyle \widehat{\lambda}(E)}}{d E},\quad\mbox{and}\quad
\ddot{u}=\frac{d^2u_{\scriptscriptstyle \widehat{\lambda}(E)}}{d E^2},
$$
to obtain
$$
\frac{d S(E)}{d E}=-2\widehat{\lambda}(E)-2E \frac{d \widehat{\lambda}(E)}{d E}+
\int\limits_{\Omega_\rho}\omega_{\scriptscriptstyle \widehat{\lambda}(E)}\dot{u},
$$
and then
\begin{equation}\label{entropy}
\frac{d^2 S(E)}{d E^2}=-4\frac{d \widehat{\lambda}(E)}{d E}-2E \frac{d^2 \widehat{\lambda}(E)}{d E^2}+
\int\limits_{\Omega_\rho}\omega_{\scriptscriptstyle \widehat{\lambda}(E)}(\dot{u})^2-
\left(\int\limits_{\Omega_\rho}\omega_{\scriptscriptstyle \widehat{\lambda}(E)}\dot{u}\right)^2+
\int\limits_{\Omega_\rho}\omega_{\scriptscriptstyle \widehat{\lambda}(E)}\ddot{u}.
\end{equation}
We wish to evaluate $\frac{d^2 S(E)}{d E^2}$ in case $\Omega=\Omega_\rho$ and
$E\in\left[\frac{\rho}{8\pi},\widehat{E}_\rho\right]$. Indeed, this is the content of Proposition
\ref{pr:entropy-intro}.
{\bf The Proof of Proposition \ref{pr:entropy-intro}}\\
We are going to evaluate \rife{entropy} by using \rife{mu0-intro}-\rife{mu0-intro1}, Theorem \ref{unique:2-intro} and the
estimates \rife{regular-intro} in Theorem \ref{thm:261112-intro}. Let us set
$$
\dot{\widehat{\lambda}}=\frac{d }{d E}\widehat{\lambda}(E),\quad \ddot{\widehat{\lambda}}=\frac{d^2 }{d E^2}\widehat{\lambda}(E),
$$
and
$$
\phi_j^{'}=\frac{d }{d \lambda}\,\phi_j,\quad \phi_j^{''}=\frac{d^2 }{d \lambda^2}\,\phi_j,\qquad j=0,1,2,
$$
so that, in view of \rife{regular-intro} and
\rife{231112.3-intro}, \rife{231112.3.a-intro}, \rife{231112.3.b-intro} we have
$$
\dot{u}=\frac{d u}{d\lambda}\dot{\widehat{\lambda}}=
\dot{\widehat{\lambda}}\left( \rho \phi_0^{'}+\rho^2 \phi_1^{'}+\mbox{\rm O}(\rho^3)\right),
$$
and
\begin{equation}\label{der1}
\ddot{u}=\frac{d^2 u}{d\lambda^2}\dot{\widehat{\lambda}}^2+\frac{d u}{d\lambda}\ddot{\widehat{\lambda}}=
\dot{\widehat{\lambda}}^2\left( \rho \phi_0^{''}+\rho^2 \phi_1^{''}+\mbox{\rm O}(\rho^3)\right)+
\ddot{\widehat{\lambda}}\left( \rho \phi_0^{'}+\rho^2 \phi_1^{'}+\mbox{\rm O}(\rho^3)\right),
\end{equation}
where the derivatives with respect to $\lambda$ will be estimated by using \rife{mu0-intro}-\rife{mu0-intro1}.\\
Hence we can introduce
$$
\ddot{S}_0(E):=\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})(\ddot{u}+\dot{u}^2)
-\left(\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\dot{u}\right)^2,
$$
to obtain, after a lengthy evaluation where we use \rife{mu0-intro}-\rife{mu0-intro1} and \rife{der1},
$$
\ddot{S}_0(E)=\frac{(48\pi)^2}{\rho^2}
\left[
-\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\psi_0-
\left(\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\psi_0\right)^2+
\pi^2\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\phi_1^{''}
\right]+\mbox{\rm O}\left(\frac{1}{\rho}\right).
$$
At this point we can use
\begin{equation}\label{asym:1.0}
\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\psi_0=\frac14+\mbox{\rm O}(\rho),
\end{equation}
and
\begin{equation}\label{asym:2.0}
\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\phi_1^{''}=\frac{1}{6\pi^2}+\mbox{\rm O}(\rho),
\end{equation}
whose proof is left to Appendix \ref{subs:2}, and \rife{231112.3.a-intro}, \rife{231112.3.b-intro} to obtain
$$
\frac{d^2 S(E)}{d E^2}=-4\dot{\widehat{\lambda}}-2E\ddot{\widehat{\lambda}}+\ddot{S}_0(E)=
-4\frac{48 \pi^2}{\rho^2}+\frac{(48\pi)^2}{\rho^2}\left(-\frac14-\frac{1}{16}+\frac{1}{6}\right),
$$
and the conclusion readily follows.
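Indeed, $-4\cdot 48\pi^{2}+(48\pi)^{2}\left(-\frac14-\frac{1}{16}+\frac16\right)=-192\pi^{2}-336\pi^{2}=-528\pi^{2}<0$, so that the leading term of $\frac{d^{2}S(E)}{dE^{2}}$ is $-\frac{528\pi^{2}}{\rho^{2}}$.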
\hspace{\fill}$\square$
\section{Appendix}
\subsection{The proof of \rife{011112.3}}\label{subs:1}$\left.\right.$\\
To obtain \rife{011112.3} we multiply $-\Delta \phi_{1}$ by $y^2$ and integrate by parts twice to obtain
$$
-\int\limits_{\Omega_\rho} y^2 \Delta \phi_{1} = - \int\limits_{\partial \Omega_\rho} y^2 \partial_\nu \phi_{1}-
2\int\limits_{\Omega_\rho} \phi_{1}.
$$
Similarly we have
$$
-\int\limits_{\Omega_\rho} \rho^2x^2 \Delta \phi_{1} = - \int\limits_{\partial \Omega_\rho}\rho^2 x^2 \partial_\nu \phi_{1}-
2\rho^2\int\limits_{\Omega_\rho} \phi_{1},
$$
so that we can sum up to obtain
$$
2(1+\rho^2)\int\limits_{\Omega_\rho} \phi_{1}= \int\limits_{\Omega_\rho} (\rho^2x^2+y^2) \Delta \phi_{1}-
\int\limits_{\partial \Omega_\rho}\partial_\nu \phi_{1}.
$$
Therefore, by using the equation in \rife{011112.0} and the divergence theorem we have
$$
2(1+\rho^2)\int\limits_{\Omega_\rho} \phi_{1} = \int\limits_{\Omega_\rho} (-(\rho^2x^2+y^2) +1)\mu_0\phi_0,
$$
that is
\begin{equation}\label{19.0.1}
\int\limits_{\Omega_\rho} \phi_{1} =(\mu_0)^2\int\limits_{\Omega_\rho}\psi_0^2,
\end{equation}
and the conclusion follows by a straightforward evaluation based on the explicit expression of $\psi_0$
(see \rife{080213.1-intro}).\hspace{\fill}$\square$
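\bigskip
We remark that the elementary integrals needed here and in the next subsection can be evaluated explicitly if, as \rife{asym:1.0} and \rife{011112.3} suggest, $\psi_0$ in \rife{080213.1-intro} is normalized as the solution of $-\Delta\psi_0=1$ in $\Omega_\rho=\{\rho^2x^2+y^2<1\}$, $\psi_0=0$ on $\partial\Omega_\rho$, that is $\psi_0=\frac{1-(\rho^2x^2+y^2)}{2(1+\rho^2)}$ (this is an assumption on the normalization chosen there). In the coordinates $x=\frac{r}{\rho}\cos\theta$, $y=r\sin\theta$, whose Jacobian is $\frac{r}{\rho}$, one finds
$$
\int\limits_{\Omega_\rho}\psi_0=\frac{\pi}{4\rho(1+\rho^2)},\qquad \int\limits_{\Omega_\rho}\psi_0^2=\frac{\pi}{12\rho(1+\rho^2)^2},
$$
consistently with \rife{asym:1.0} and, through \rife{19.0.1}, with \rife{011112.3}.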
\subsection{The proofs of \rife{asym:1.0} and \rife{asym:2.0}}\label{subs:2}$\left.\right.$\\
Concerning \rife{asym:1.0} we just observe that
$$
\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\psi_0=
\int\limits_{\Omega_\rho}\frac{1+\mbox{\rm O}(\rho)}{\int\limits_{\Omega_\rho}(1+\mbox{\rm O}(\rho))}\psi_0=
\frac{\rho}{\pi}(1+\mbox{\rm O}(\rho))\int\limits_{\Omega_\rho}\psi_0=
\frac14+\mbox{\rm O}(\rho),
$$
where the last equality is obtained by a straightforward evaluation based on the explicit expression of $\psi_0$
(see \rife{080213.1-intro}).\\
Concerning \rife{asym:2.0} we observe as above that
\begin{equation}\label{19.0.2}
\int\limits_{\Omega_\rho}\omega(u_{\widehat{\lambda}(E)})\phi_1^{''}=
\frac{\rho}{\pi}(1+\mbox{\rm O}(\rho))\int\limits_{\Omega_\rho}\phi_1^{''},
\end{equation}
and that, in view of \rife{011112.0} and \rife{phi0-intro}, $\phi_1^{''}$ satisfies
\begin{equation}\label{19.0.0}
\graf{
-\Delta \phi_1^{''} =(\mu_0\phi_0)^{''}\equiv(\mu_0^2)^{''}\psi_0 & \mbox{in}\quad \Omega_\rho\\
\phi_1^{''} =0 & \mbox{on}\quad \partial\Omega_\rho
}
\end{equation}
where $\mu_0=\mu_0(\lambda,\rho)$ (see \rife{mu0-intro}-\rife{mu0-intro1}). In other words $\phi_1^{''}$ is a solution for
the same problem as $\phi_1$ (that is \rife{011112.0}) but for the fact that
$\mu_0^2$ is replaced by $(\mu_0^2)^{''}$ in \rife{19.0.0}. Hence the argument in subsection \ref{subs:1}
applies and we obtain (see \rife{19.0.1})
$$
\int\limits_{\Omega_\rho}\phi_1^{''}=(\mu_0^2)^{''}\int\limits_{\Omega_\rho}\psi_0^2=
(\mu_0^2)^{''}\frac{\pi}{12\rho}+\mbox{\rm O}(\rho^2)=\frac{1}{6\pi\rho}+\mbox{\rm O}(\rho^2),
$$
and the conclusion follows by substituting this result in \rife{19.0.2}.
\hspace{\fill}$\square$
\begin{thebibliography}{99}
bbitem{BaPa}
S. Baraket, F. Pacard, {\em Construction of singular limits for a semilinear elliptic
equation in dimension 2}, Calc. Var. \& P.D.E. {\bf 6} (1998), 1-38.
bbitem{B1-1} D. Bartolucci, {\em On the classification of N-point
concentrating solutions for mean field equations and the critical
set of the N-vortex singular Hamiltonian on the unit disk },
{Acta Appl. Math.} {\bf 110}(1) (2010), 1-22.
bbitem{B2} D. Bartolucci, {\em Stable and unstable equilibria of
uniformly rotating self-gravitating cylinders}", {Int. Jour. Mod. Phys. D} {\bf 21}(13) (2012), 1250087.
bbitem{barjga} D. Bartolucci, {\em On the best pinching constant of conformal metrics on $\mathbb{S}^2$ with
one and two conical singularities}, Jour. Geom. Analysis {\bf 23} (2013) 855-877.
bbitem{bl} D. Bartolucci, C.S. Lin, {\em Uniqueness Results for Mean Field Equations with Singular Data},
Comm. in P. D. E. {\bf 34}(7) (2009), 676-702.
bbitem{BLin2} D. Bartolucci, C.S. Lin, {\em Sharp existence results for mean field equations with singular data},
Jour. Diff. Eq. 252(7) (2012), pp. 4115-4137.
bbitem{BLin3} D. Bartolucci, C.S. Lin, {\em Existence and uniqueness for
Mean Field Equations on multiply connected domains at the critical parameter}, Math. Ann, to appear.
bbitem{BLT} D. Bartolucci, C.S. Lin, G. Tarantello, {\em Uniqueness and symmetry results for
solutions of a mean field equation on ${\mathbb{S}}^{2}$ via a new bubbling phenomenon},
{Comm. Pure Appl. Math.} {\bf 64}(12) (2011), 1677-1730.
bbitem{BMal} D. Bartolucci, A. Malchiodi, {\em An improved geometric
inequality via vanishing moments, with applications to singular
Liouville equations}, {Comm. Math. Phys.} {\bf 322} (2013), 415-452.
bbitem{BDeM}
D. Bartolucci, F. De Marchis, {\em On the Ambjorn-Olesen electroweak condensates},
{Jour. Math. Phys.} {\bf 53} 073704 (2012); doi: 10.1063/1.4731239.
bbitem{BM2} D. Bartolucci, E. Montefusco,
{\em On the Shape of Blow up Solutions to a Mean Field Equation},
{Nonlinearity} {\bf 19}, (2006), {611-631}.
bbitem{bt} D. Bartolucci, G. Tarantello, {\em Liouville type equations with
singular data and their applications to periodic multivortices for the
electroweak theory}, Comm. Math. Phys. {\bf 229} (2002), 3-47.
bbitem{bav} F. Bavaud, {\em Equilibrium properties of the Vlasov functional: the generalized Poisson-Boltzmann-Emden
equation}, Rev. Mod. Phys. {\bf 63}(1) (1991), 129-149.
\bibitem{Besicovitch} A.S. Besicovitch,
{\em Measure of asymmetry of convex curves}, J. London Math. Soc. {\bf 23} (1948), 237-240.
\bibitem{bls}
H. Brezis, Y.Y. Li \& I. Shafrir,
{\em A sup+inf inequality for Some Nonlinear Elliptic Equations involving Exponential
Nonlinearity}, Jour. Func. Analysis {\bf 115} (1993), 344-358.
bbitem{bm}
H. Brezis \& F. Merle,
{\em Uniform estimates and blow-up behaviour for
solutions of $-\Delta u = V(x)e^{u}$ in two dimensions},
{Comm. in P.D.E.} {\bf 16}(8,9) (1991), 1223-1253.
bbitem{clmp1} E. Caglioti, P.L. Lions, C. Marchioro \& M. Pulvirenti,
{\em A special class of stationary flows for two dimensional Euler equations: a
statistical mechanics description,} Comm. Math. Phys. {\bf 143} (1992), 501--525.
bbitem{clmp2} E. Caglioti, P.L. Lions, C. Marchioro \& M. Pulvirenti,
{\em A special class of stationary flows for two dimensional Euler equations: a
statistical mechanics description. II}, Comm. Math. Phys. {\bf 174} (1995),
229--260.
bbitem{CCL} S.Y.A. Chang, C.C. Chen \& C.S. Lin, {\em Extremal functions for a mean field equation in two dimension},
in: "Lecture on Partial Differential Equations", New Stud. Adv. Math. {\bf 2} Int. Press, Somerville, MA, 2003, 61-93.
bbitem{cygc} S.Y.A. Chang, P. C. Yang,
{\em Conformal deformation of metrics on $S^2$}, J. Diff. Geom. 27 (1988), 259-296.
bbitem{ChK} S. Chanillo, M.H.K. Kiessling, {\em Rotational Symmetry of Solutions of
Some Nonlinear Problems in Statistical Mechanics and in Geometry},
{Comm. Math. Phys.} {\bf 160} (1994), 217-238.
bbitem{CL1} W.X. Chen \& C. Li, {\em Prescribing Gaussian curvature on surfaces
with conical singularities}, J. Geom. Anal. {\bf 1} (1991), 359-372.
bbitem{cli1} W. X. Chen \& C. Li, {\em Classification of solutions of some nonlinear elliptic equations,}
Duke Math. J. {\bf 63}(3) (1991), 615-622.
bbitem{CLin3} C.C. Chen, C.S. Lin, {\em On the Symmetry of Blowup Solutions to a
Mean Field Equation}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 18}(3) (2001), 271-296.
bbitem{CLin1} C. C. Chen, C.S. Lin, {\em Sharp estimates for solutions of multi-bubbles in compact Riemann surfaces},
Comm. Pure Appl. Math. {\bf55} (2002), 728-771.
bbitem{CLin2} C. C. Chen, C.S. Lin, {\em Topological Degree for a mean field equation on Riemann surface},
Comm. Pure Appl. Math. {\bf 56} (2003), 1667-1727.
bbitem{CLin4} C.C. Chen , C.S. Lin, {\em Mean field equations of liouville type with
singular data: sharper estimates}, Discr. Cont. Dyn. Syt. 28(3) (2010), 1237-1272.
bbitem{CSW} M. Chipot, I. Shafrir, G. Wolansky, {\em On the Solutions of Liouville Systems},
Jour. Diff. Eq. {\bf 140}, (1997), 59-105.
bbitem{Kwan1} K. Choe, {\em Existence and uniqueness results for a class of
elliptic equations with exponential nonlinearity}, Proc. Roy. Soc. Edinburgh {\bf 135A} (2005), 959-983.
bbitem{clsw} P. Cl\'{e}ment \& G. Sweers,
{\em Getting a solution between sub- and supersolutions without monotone iteration},
Rend. Istit. Mat. Univ. Trieste {\bf 19} (1987), 189-194.
bbitem{CrRab} M. G. Crandall, P. H. Rabinowitz, {\em Some Continuation and Variational
Methods for Positive Solutions of Nonlinear Elliptic Eigenvalue Problems},
{Arch. Rat. Mech. An.} {\bf 58} (1975), 207--218.
bbitem{DJLW} W. Ding, J. Jost, J. Li \& G. Wang, {\em Existence results for
mean field equations}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 16}
(1999), 653--666.
bbitem{dj} Djadli Z., {\em Existence result for the mean field problem
on Riemann surfaces of all genuses}, Comm. Contemp. Math. 10(2) (2008), 205-220.
bbitem{EGP} P. Esposito, M. Grossi \& A. Pistoia, {\em On the existence of blowing-up solutions
for a mean field equation}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 22}(2) (2005), 227-257.
bbitem{ESp} G.L. Eyink, H. Spohn, {\em Negative temperature states and large-scale,
long-lived vortices in two dimensional turbulence},
{J. Stat. Phys.} {\bf 70}(3/4) (1993), 87-135.
bbitem{ESr} G.L. Eyink, K.R. Sreenivasan, {\em Onsager and the theory of hydrodynamic turbulence},
{Rev. Mod. Phys.} {\bf 78} (2006), 833-886.
bbitem{GT} D. Gilbarg, N. Trudinger,
"Elliptic Partial Differential Equations of Second Order", Springer-Verlag, Berlin-Heidelberg-New York (1998).
bbitem{GrT} M. Grossi, F. Takahashi,
{\em Nonexistence of multi-bubble solutions to some elliptic equations on convex domains}, Jour. Funct. An.
{\bf 259}(4) (2010), 904-917.
\bibitem{Gu} B. Gustafsson, {\em On the convexity of a solution of Liouville's
equation}, Duke Math. Jour. {\bf 60}(2) (1990), 303-311.
\bibitem{John} F. John, {\em Extremum problems with inequalities as subsidiary conditions},
Studies and Essays Presented to R. Courant on his 60th Birthday, January 8, 1948,
Interscience Publishers, Inc., New York, 1948, 187-204.
bbitem{KW} J. L. Kazdan \& F. W. Warner,
{\em Curvature functions for compact 2-manifolds}, Ann. Math. {\bf 99} (1974), 14-74.
bbitem{KMdP} M. Kowalczyk, M. Musso \& M. del Pino, {\em Singular limits in
Liouville-type equations}, Calc. Var. \& P.D.E. {\bf 24}(1) (2005), 47-81.
\bibitem{K} M.K.H. Kiessling,
{\em Statistical mechanics of classical particles with logarithmic interaction},
Comm. Pure Appl. Math. {\bf 46} (1993), 27--56.
bbitem{KL} M.K.H. Kiessling \& J. L. Lebowitz {\em The Micro-Canonical Point Vortex Ensemble:
Beyond Equivalence}, Lett. Math. Phys. {\bf 42} (1997), 43--56.
bbitem{Lassek-priv} M. Lassak, private communication.
bbitem{yy} Y.Y. Li, {\em Harnack type inequality: the method of moving planes},
Comm. Math. Phys. {\bf 200} (1999), 421--444.
bbitem{ls} Y.Y. Li \& I. Shafrir, {\em Blow-up analysis for Solutions of $-\Delta u = V(x)e^{u}$
in dimension two}, {Ind. Univ. Math. J.} {\bf 43}(4) (1994), 1255--1270.
bbitem{Lin1} C.S. Lin, {\em Uniqueness of solutions to the mean field equation for the
spherical Onsager Vortex}, Arch. Rat. Mech. An. {\bf 153} (2000), 153-176.
bbitem{lin7} C.S. Lin, M. Lucia, {\em Uniqueness of solutions for a
mean field equation on torus}, J. Diff. Eq. {\bf 229}(1) (2006), 172-185.
bbitem{linwang} C.S. Lin, C.L. Wang, {\em Elliptic functions, Green functions
and the mean field equations on tori}, Ann. of Math. {\bf 172}(2) (2010), 911-954.
bbitem{Lio}
J. Liouville,
"{\em Sur L' \'Equation aux Diff\'erence Partielles
$\frac{d^{2} \log{\lambda}}{du dv} \pm \frac{\lambda}{2 a^{2}}=0$}",\\
{C.R. Acad. Sci. Paris} {\bf 36} 71-72 (1853).
bbitem{Mal1} A. Malchiodi, {\em Topological methods for an elliptic equation with exponential nonlinearities},
Discr. Cont. Dyn. Syst. {\bf 21} (2008), 277--294.
bbitem{Mal2} A. Malchiodi, {\em Morse theory and a scalar field equation on compact
surfaces}, Adv. Diff. Eq. {\bf 13} (2008), 1109-1129.
bbitem{malru} A. Malchiodi, D. Ruiz,
{\em New improved Moser-Trudinger inequalities and singular Liouville equations on compact surfaces},
G.A.F.A. {\bf 21}(5) (2011), 1196-1217.
bbitem{MP} C. Marchioro, M. Pulvirenti, {sl Mathematical Theory of Incompressible Nonviscous Fluids},
Appl. Math. Sci. {\bf 96}, Berlin, Heidelberg, New York: Springer 1994
bbitem{dem} F. De Marchis, {\em Multiplicity result for a scalar field equation on
compact surfaces}, Comm. Part. Diff. Eq. 33(10-12) (2008), 2208-2224.
bbitem{dem2} F. De Marchis, {\em Generic multiplicity for a scalar field equation on compact surfaces},
J. Funct. An. (259) (2010), 2165-2192.
bbitem{moser} Moser J., {\em A sharp form of an inequality by
N.Trudinger}, Indiana Univ. Math. J. 20 (1971), 1077-1091.
bbitem{New} P.K. Newton, {sl The N-Vortex Problem: Analytical Techniques}, Appl. Math. Sci. {\bf 145},
Springer-Verlag, New York, 2001.
bbitem{NT}
M. Nolasco \& G. Tarantello, {\em On a sharp Sobolev-type Inequality on two-dimensional
compact manifold}, {Arch. Rat. Mech. An.} {\bf 145} (1998), 161-195.
bbitem{OS} H. Ohtsuka \& T. Suzuki, {\em Palais-Smale sequence relative
to the Trudinger-Moser inequality}, Calc. Var. \& P.D.E. {\bf 17} (2003), 235-255.
bbitem{On} L. Onsager, {\em Statistical hydrodynamics}, {Nuovo Cimento} {\bf 6}(2) (1949), 279-287.
bbitem{PW} M. Plum, C. Wieners, {\em New solutions of the Gelfand problem},
Jour. Math. Anal. Appl. {\bf 269} (2002), 588-606.
bbitem{pom} Ch. Pommerenke, {sl Boundary Behaviour of Conformal Maps}, Grandlehren der Math.
Wissenschaften, {\bf 299}, p. 300, Springer-Verlag, Berlin-Heidelberg, 1992.
bbitem{pt} J. Prajapat \& G. Tarantello, {\em On a class of elliptic problems in $\mathbb{R}^2$:
symmetry and uniqueness results}, Proc. Roy. Soc. Edinburgh {\bf 131A} (2001), 967-985.
bbitem{sy2} Spruck J., Yang Y., {\em On Multivortices in the Electroweak Theory I:Existence of Periodic Solutions},
Comm. Math. Phys. {\bf 144} (1992), 1-16.
bbitem{st} M. Struwe, G. Tarantello , {\em On multivortex solutions in
Chern-Simons gauge theory}, Boll. Unione Mat. Ital.(B),
Artic. Ric. Mat. {\bf 8}(1) (1998), 109-121.
\bibitem{suz0} T. Suzuki, K. Nagasaki, {\em On the nonlinear eigenvalue problem $\Delta u+\lambda e^u=0$},
Trans. Amer. Math. Soc. {\bf 309}(2) (1988), 591-608.
\bibitem{suz} T. Suzuki, {\em Global analysis for a two-dimensional elliptic eigenvalue problem with the exponential
nonlinearity}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 9}(4) (1992), 367-398.
bbitem{suzB} T. Suzuki, {sl Semilinear Elliptic Equations}, Math. Sci. \& App. {\bf 3}, GAKUTO Int. Ser.,
Gakkotosho, Tokyo, Japan, 1994.
bbitem{suzC} T. Suzuki, {sl Free Energy and Self-Interacting Particles}, PNLDE
{\bf 62}, Birkhauser, Boston, (2005).
bbitem{T} G. Tarantello,
{\em Multiple condensate solutions for the Chern-Simons-Higgs theory},
{J. Math. Phys.} {\bf 37} (1996), 3769-3796.
bbitem{T3} G. Tarantello,
{\em Analytical aspects of Liouville type equations with singular sources}, Handbook Diff. Eqs., North Holland,
Amsterdam, Stationary partial differential equations, {\bf I} (2004), 491-592.
bbitem{tar} G. Tarantello, {sl Self-Dual Gauge Field Vortices: An Analytical Approach},
PNLDE {\bf 72}, Birkh\"auser Boston, Inc., Boston, MA, 2007.
bbitem{Troy} M. Troyanov, {\em Prescribing curvature on compact surfaces with
conical singularities}, Trans. Amer. Math. Soc. {\bf 324} (1991), 793-821.
bbitem{Villarino} M.B. Villarino,
{\em A note on the accuracy of the Ramanujan's approximative formula for the perimeter of an ellipse},
JIPAM. J. Inequal. Pure Math. {\bf 7} (2006), Article 21, 10 pp.
bbitem{w} G. Wolansky, {\em On steady distributions of self-attracting
clusters under friction and fluctuations}, Arch. Rational Mech. An.
{\bf 119} (1992), 355--391.
bbitem{yang} Y. Yang, {sl Solitons in Field Theory and Nonlinear Analysis},
Springer Monographs in Mathematics, Springer, New York, 2001.
\end{thebibliography}
\end{document}
\begin{document}
\begin{center}
\vskip 1cm{\LARGE\bf Counting symmetry classes of dissections of a convex regular polygon
}
\vskip 1cm
\large
Douglas Bowman and Alon Regev\\
Department of Mathematical Sciences\\
Northern Illinois University\\
DeKalb, IL
\end{center}
{\center \section*{Abstract}}
This paper proves explicit formulas for the number of dissections of a convex regular polygon modulo the action of the cyclic and dihedral groups. The formulas are obtained by making use of the Cauchy-Frobenius Lemma as well as bijections between rotationally symmetric dissections and simpler classes of dissections. A number of special cases of these formulas are studied. Consequently, some known enumerations are recovered and several new ones are provided.
\vskip .2 in
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newcommand{\G}[2]{A^{#1}_{#2}}
\newcommand {\hh}{\gamma}
\section{Introduction}
In 1963 Moon and Moser \cite{MM} enumerated the equivalence classes of triangulations of a regular convex $n$-gon modulo the action of the dihedral group $D_{2n}$. A year later, Brown \cite{Br} enumerated the equivalence classes of these triangulations modulo the action of the cyclic group $Z_n$.
Recall that the triangulations of an $n$-gon are in bijection with the vertices of the associahedron of dimension $n-3$ (see Figure \ref{P3}).
Lee \cite{Le} showed that the associahedron can be realized as a polytope in $(n-3)$-dimensional space having the dihedral symmetry group $D_{2n}$. Thus Moon and Moser's result and Brown's result are equivalent to enumerating the vertices of the associahedron modulo the dihedral action and the cyclic action, respectively. The enumeration by
Moon and Moser also arose recently in the work of Ceballos, Santos and Ziegler \cite{CSZ}. Their work describes a family of realizations of the associahedron (due to Santos), and proves that the number of normally non-isomorphic realizations is the number of triangulations of a regular polygon modulo the dihedral action. In this paper we generalize the
results of Moon and Moser, as well as Brown, and enumerate the number of {\em dissections} of regular polygons modulo the dihedral and cyclic actions.
\begin{definition}
Let $n\ge 3$. A {\em $k$-dissection} of an $n$-gon is a partition of the $n$-gon into $k+1$ polygons by $k$ non-crossing diagonals. A {\em triangulation} is an $(n-3)$-dissection of an $n$-gon and an {\em almost-triangulation} is an $(n-4)$-dissection. Let $G(n,k)$ be the set of $k$-dissections of an $n$-gon, and let $G(n)=\bigcup\limits_{k=0}^{n-3}G(n,k)$.
\end{definition}
In terms of associahedra, a $k$-dissection corresponds to an $(n-k-3)$-dimensional face of the $(n-3)$-dimensional associahedron. A natural generalization of the results of Moon and Moser and of Brown is the enumeration of $G(n,k)/D_{2n}$ and $G(n,k)/Z_n$, the sets of dihedral and cyclic classes, respectively, in $G(n,k)$. In 1978, Read \cite{R} considered an equivalent problem. He enumerated certain classes of cellular structures, which are in bijection with $G(n,k)/D_{2n}$ and
$G(n,k)/Z_n$. Read found generating functions for the number of such classes, and included tables of values \cite[Tables 3 and 5]{R}. In fact, the first diagonal of Table 5 of Read corresponds to the sequence found by Moon and Moser, and the first diagonal of Table 3 of Read corresponds to the sequence found by Brown. Lisonek \cite{Li} studied these results of Read
and showed that the sequences $|G(n,k)/D_{2n}|$ and $|G(n,k)/Z_n|$ are ``quasi-polynomial" in $n$ when $k$ is fixed. (Here and throughout, $|X|$ denotes the cardinality of a finite set $X$.) More recently, Read and Devadoss \cite{DR} studied various equivalence relations on the set of polygonal dissections. They gave a sequence of figures
\cite[Figures 22-25]{DR} representing all the dihedral classes of $n$-gons for $3 \le n\le 9$. However, none of the above authors give an explicit formula for $|G(n,k)/D_{2n}|$ or for $|G(n,k)/Z_n|$.
The present authors \cite{BR} give an explicit formula enumerating $G(n,n-4)/D_{2n}$, the dihedral classes of almost-triangulations, equivalently, of edges of associahedra. This formula agrees with the values of the second diagonal of Table 5 of Read \cite{R}.
Explicit formulas for $|G(n,k)/Z_n|$ and $|G(n,k)/D_{2n}|$ could in principle be derived from Read's iteratively defined generating functions, but the resulting formulas would be considerably more complicated than those computed here; see equations \eqref{diheq} and \eqref{cyceq}.
Our approach to solving these enumeration problems is similar to that of Moon and Moser \cite{MM}. For each element of the dihedral group, the number of dissections in $G(n,k)$ which are fixed under its action is computed. The Cauchy-Frobenius Lemma is then used to derive the number of dihedral and cyclic classes in $G(n,k)$.
In Section \ref{rotsec} we introduce a combinatorial bijection \eqref{bijeq} between certain rotationally symmetric dissections ({\em centrally unbordered} dissections, see Definition \ref{classes}) and a set $G^*(n,k)$ of {\em marked dissections}, which are dissections with one of their parts distinguished. These marked dissections are easy to generate and enumerate. A bijection for {\em centrally bordered} dissections is implicit in the proof of Lemma \ref{bdd}.
Przytycki and Sikora \cite{PS} studied a set of marked dissections $P_i(s,n)$, which is a subset of $G^*(n,k)$; however, the classes of dissections enumerated in \cite{PS} are different from those studied here.
Besides their intrinsic interest, bijections involving polygonal dissections have connections to other mathematical structures; for example, Torkildsen \cite{T} proved a bijection between $G(n,n-3)/Z_{n}$ and the mutation class of quivers of Dynkin type $A_n$, while Przytycki and Sikora describe a relationship between their bijection (between $P_i(s,n)$ and another combinatorial structure) and their work in knot theory as well as Jones' work on planar algebras.
After proving the general formulas \eqref{diheq} and \eqref{cyceq} in Sections \ref{Prelim} through \ref{rotsec}, special cases are studied in Section \ref{specs}. Consequently we not only recover known enumerations but are able to provide several that are new. We note several of the interesting special cases here. For example, setting $k=n-3$ in \eqref{diheq} recovers the result of Moon and Moser \cite{MM}, and setting $k=n-3$ in \eqref{cyceq} recovers the result of Brown \cite{Br}.
Setting $k=n-4$ in \eqref{diheq} recovers the result of the authors \cite{BR}, while the following theorem gives a formula for the number of cyclic classes in the case $k=n-4$.
For a nonnegative integer $n$, let $C_n$ denote the $n$-th Catalan number; \[C_{n}={1\over n+1}{2n\choose n},\] and let $C_n=0$ otherwise.
\begin{theorem}\label{cycn4}
Let $n\ge 4$. The number $|G(n,n-4)/Z_n|$ of almost-triangulations of an $n$-gon (equivalently, edges of the $(n-3)$-dimensional associahedron) modulo the cyclic action is given by
\begin{equation}\label{cycn4eq}
\frac{n-3}{2n}C_{n-2}+\frac{1}{2}C_{n/4-1}+\frac{1}{4}C_{n/2-1}.
\end{equation}
\end{theorem}
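For instance, for $n=6$ formula \eqref{cycn4eq} gives $\frac{3}{12}C_{4}+\frac12 C_{1/2}+\frac14 C_{2}=\frac{14}{4}+0+\frac{2}{4}=4$; indeed, the $\G{6}{2}=21$ two-diagonal dissections of the hexagon fall into exactly four classes under rotation.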
\begin{figure}
\caption{The three-dimensional associahedron}
\label{P3}
\end{figure}
Setting $k=n-5$ gives the following formulas.
\begin{theorem}\label{k=n-5}
Let $n\ge 5$.
\begin{enumerate}
\item The number $|G(n,n-5)/Z_n|$ of $(n-5)$-dissections of an $n$-gon (equivalently, the number of two-dimensional faces of the $(n-3)$-dimensional associahedron) modulo the cyclic action is given by
\begin{align}\label{cycn5eq}
{(n-3)^2(n-4)\over 4n(2n-5)}C_{n-2}+{n-4\over 8}C_{n/2-1} + {4 \over 5}C_{n/5-1}.
\end{align}
\item The number $|G(n,n-5)/D_{2n}|$ of $(n-5)$-dissections of an $n$-gon (equivalently, the number of two-dimensional faces of the $(n-3)$-dimensional associahedron) modulo the dihedral action is given by
\begin{align*}
{(n-3)^2(n-4)\over 8n(2n-5)}C_{n-2}+{2\over 5}C_{n/5-1}+{3(n-4)(n-1)\over 16(n-3)}C_{n/2-1},
\end{align*}
if $n$ is even, and
\begin{align}\label{dihn5eq}
{(n-3)^2(n-4)\over 8n(2n-5)}C_{n-2}+{2\over 5}C_{n/5-1}+{n^2-2n-11 \over 8(n-4)}C_{(n-3)/2},
\end{align}
if $n$ is odd.
\end{enumerate}
\end{theorem}
A formula equivalent to \eqref{cycn4eq} occurs in the Online Encyclopedia of Integer Sequences \cite[sequence A0003444]{S}, while the sequences of \eqref{cycn5eq} and \eqref{dihn5eq} occur there without a formula \cite[sequences A0003450 and A0003445]{S}.
Finally, setting $k=n-6$ in \eqref{cyceq} gives the following formula.
\begin{theorem}\label{k=n-6}
Let $n\ge 6$.
The number $|G(n,n-6)/Z_n|$ of $(n-6)$-dissections of an $n$-gon (equivalently, the number of three-dimensional faces of the $(n-3)$-dimensional associahedron) modulo the cyclic action is given by
\[{(n-3)(n-4)^2(n-5)\over 24n(2n-5)}C_{n-2}+{(n-4)^2\over 4n}C_{n/2-2}+{n-3\over 9}C_{n/3-1}+{1\over 3} C_{n/6-1}.\]
\end{theorem}
Another set of results is obtained by specializing equations \eqref{diheq} and \eqref{cyceq} to fixed values of $k$. Setting $k=1$ gives the formulas $|G(n,1)/Z_n|=|G(n,1)/D_{2n}|=n/2-1$ if $n$ is even and
$|G(n,1)/Z_n|=|G(n,1)/D_{2n}|=(n-3)/2$ if $n$ is odd: these formulas are easy to see directly. In the case $k=2$, \eqref{diheq} and \eqref{cyceq} are more interesting.
\begin{theorem}\label{k=2}
Let $n\ge 2$.
\begin{enumerate}
\item The number of $2$-dissections of an $n$-gon modulo the cyclic action is
\[|G(n,2)/Z_n|=\begin{cases} {1\over 12}n(n-2)(n-4), \qquad \text{ if $n$ is even}\\
{1\over 12}(n+1)(n-3)(n-4), \qquad \text{ if $n$ is odd.}\end{cases}\]
\item The number of $2$-dissections of an $n$-gon modulo the dihedral action is
\[|G(n,2)/D_{2n}|=\begin{cases}
{1\over 24}(n-4)(n-2)(n+3), \qquad \text{ if $n$ is even}\\
{1\over 24}(n-3)(n^2-13), \qquad \text{ if $n$ is odd.}
\end{cases}\]
\end{enumerate}
\end{theorem}
Note that Theorem \ref{k=2} agrees with the result of Lisonek \cite{Li} in that the formulas obtained are quasi-polynomials.
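For instance, for $n=7$ these formulas give $|G(7,2)/Z_7|=\frac{1}{12}\cdot 8\cdot 4\cdot 3=8$ and $|G(7,2)/D_{14}|=\frac{1}{24}\cdot 4\cdot(49-13)=6$, values that are easily confirmed by applying the Cauchy-Frobenius Lemma directly to the $\G{7}{2}=56$ two-diagonal dissections of the heptagon.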
\subsection{The general formulas}
Let $\G{n}{k}=|G(n,k)|$ be the number of $k$-dissections of an $n$-gon.
Cayley \cite{C} showed that for integers $0 \le k\le n-3$,
\begin{equation}\label{disseq}
\G{n}{k}=
\frac{1}{k+1}{n+k-1 \choose k}{n-3 \choose k}.
\end{equation}
We take $\G{2}{0}=1$ corresponding to the trivial dissection of a digon ($2$-gon). Otherwise, unless $n$ and $k$ are integers with $0\le k \le n-3$,
let $\G{n}{k}=0$.
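For example, \eqref{disseq} gives $\G{5}{1}=\frac12{5\choose 1}{2\choose 1}=5$, the five single-diagonal dissections of a pentagon, and $\G{6}{2}=\frac13{7\choose 2}{3\choose 2}=21$.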
Note that
\begin{equation}\label{catdiss}
\G{n}{n-3}=\begin{cases} 0, \text{ if } n=2\\ C_{n-2}, \text{ otherwise.}\\ \end{cases}
\end{equation}
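(For instance, \eqref{catdiss} gives $\G{6}{3}=C_4=14$ triangulations of the hexagon.)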
Let $\varphi(n)$ denote Euler's totient function (the number of positive integers less than $n$ that are relatively prime to $n$). The following two theorems are the main results of this paper.
\begin{theorem}\label{dih}
Let $1\le k \le n-3$. Let $|G(n,k)/D_{2n}|$ be the number of $k$-dissections of an $n$-gon (equivalently, $(n-k-3)$-dimensional faces of the $(n-3)$-dimensional associahedron) modulo the dihedral action.
If $n$ is even then $|G(n,k)/D_{2n}|$ is given by
\begin{align*}
&{1\over 2n}\G{n}{k}
+{1\over 2} \G{n/2+1}{(k-1)/2}
+ {1\over 4} \G{n/2+1}{k/2}
\nonumber \\ &
+ \sum\limits_{3\le d|n}{\varphi(d)\over 2d} \G{n/d+1}{k/d-1}
+ \sum\limits_{2\le d\le n/3}{\varphi(d)(n+k-d) \over 2dn}\G{n/d}{k/d-1} \nonumber \\
&+ \sum\limits_{\substack{2\le d|n; \ r\ge 3; \ n_1+\ldots+n_r=n/d\\ k_1+ \ldots+k_r + |\{i : n_i\ge 2\}|= k/d}} {\varphi(d)\over 2r} \prod\limits_{i=1}^r\G{n_i+1}{k_i} \nonumber \\ &
+ {1\over 4} \sum\limits_{\substack{1\le t \le k\\ n_0+\ldots + n_t=n/2\\k_0+\ldots + k_t=(k-t)/2}} \ \
\prod\limits_{s=0}^ {t} \big (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s} \big) \nonumber \\
& + {1\over 4} \sum\limits_{\substack{0\le t \le k\\ n_0+\ldots + n_t=n/2-1\\ k_0+\ldots + k_t=(k-t)/2}} \ \
\prod\limits_{s=0}^ {t} \big (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s} \big),
\end{align*}
and if $n$ is odd then $|G(n,k)/D_{2n}|$ is given by
\begin{align}\label{diheq}
& {1\over 2n}\G{n}{k}
+ \sum\limits_{3\le d|n}{\varphi(d)\over 2d} \G{n/d+1}{k/d-1}
+ \sum\limits_{2\le d\le n/3}{\varphi(d)(n+k-d) \over 2dn}\G{n/d}{k/d-1} \nonumber \\
&+ \sum\limits_{\substack{2\le d|n; \ r\ge 3; \ n_1+\ldots+n_r=n/d\\ k_1+ \ldots+k_r + |\{i : n_i\ge 2\}|= k/d}} {\varphi(d)\over 2r} \prod\limits_{i=1}^r\G{n_i+1}{k_i} \nonumber \\ &
+ {1\over 2} \sum\limits_{\substack{1\le t \le k\\ n_0+\ldots + n_t=n/2\\k_0+\ldots + k_t=(k-t)/2}} \ \
\prod\limits_{s=0}^ {t} \big (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s} \big). \nonumber \\
\end{align}
\end{theorem}
\begin{theorem}\label{cyc}
Let $n\ge 3$. The number $|G(n,k)/Z_n| $ of $k$-dissections of an $n$-gon (equivalently, ($n-k-3$)-dimensional faces of the $(n-3)$-dimensional associahedron) modulo the cyclic action is given by
\begin{align}\label{cyceq}
{1\over n}\G{n}{k}
+ \sum\limits_{3\le d|n}{\varphi(d)\over d} \G{n/d+1}{k/d-1}
+ \sum\limits_{2\le d\le n/3}{\varphi(d)(n+k-d) \over dn}\G{n/d}{k/d-1} \nonumber \\
+ \sum\limits_{\substack{2\le d|n; \ r\ge 3; \ n_1+\ldots+n_r=n/d\\ k_1+ \ldots+k_r + |\{i : n_i\ge 2\}|= k/d}} {\varphi(d)\over r} \prod\limits_{i=1}^r\G{n_i+1}{k_i}. \nonumber \\ &
\end{align}
\end{theorem}
\section{Preliminaries}\label{Prelim}
For any labeled graph $H$, let $V(H)$ be its set of vertices and let $E(H)$ be its set of edges, defined to be two-element subsets of $V(H)$.
We frequently denote the edge $\{x,y\}$ by $xy$.
To a dissection $\Phi\in G(n)$ we associate a labeled graph $(V(\Phi),E(\Phi))$, where $V(\Phi)=\{0,\ldots, n-1\}$. To the sides of the $n$-gon we associate
the edges $S(\Phi)=\{01, 12, 23,\ldots ,(n-2)(n-1), (n-1)0\}$ and to the diagonals of the dissection we associate the rest of the edges of the graph. The {\em distance} between any two vertices $x,y\in V(\Phi)$ is defined to be the graph-theoretic distance between them in the subgraph $(V(\Phi),S(\Phi))$.
It is easily seen that two crossing diagonals of a convex $n$-gon correspond to edges $ab$ and $cd$ (with $a<b$ and $c<d$) if and only if
\begin{equation}\label{crossing}
a< c < b <d \, \, \text{ or } \, \, c < a < d < b.
\end{equation}
Thus if $\Phi\in G(n)$, there are no edges $ab, cd\in E(\Phi)$ satisfying \eqref{crossing}.
Hereafter we identify dissections with their labeled graphs.
The elements of the dihedral group $D_{2n}$ are denoted using $\varepsilon$, $\rho$ and $\tau$ to represent the identity, rotation by $2\pi/n$ and
reflection about a symmetry axis (which by convention passes through one of the vertices of the $n$-gon), respectively.
The cyclic group $Z_n$ can be identified with the subgroup of $D_{2n}$ generated by $\rho$.
We use the notation $[x]_n$ to denote the remainder when $x$ is divided by $n$. The elements of $D_{2n}$ can be represented by their action on the vertices,
$\rho(v)=[v+1]_n$ and $\tau(v)=[-v]_n$.
For any $\sigma\in D_{2n}$, let $G(n,k; \sigma)$ denote the subset of $G(n,k)$ consisting of dissections fixed under the action of $\sigma$, and let
\[G(n;\sigma)=\bigcup\limits_{ k= 0}^{n-3}G(n,k;\sigma).\] Thus if $\Phi\in G(n)$ then $\Phi\in G(n;\rho^i)$ if and only if
\begin{equation*}
[x+i]_n[y+i]_n\in E(\Phi) \text{ whenever } [x]_n[y]_n\in E(\Phi),
\end{equation*}
and $\Phi\in G(n;\tau\rho^i)$ if and only if
\begin{equation*}
[i-x]_n[i-y]_n\in E(\Phi) \text{ whenever } [x]_n[y]_n\in E(\Phi) .
\end{equation*}
The Cauchy-Frobenius Lemma \cite{B} gives the equations
\begin{equation}\label{dihgeneq}
|G(n,k)/D_{2n}| = {1 \over 2n}\left( \sum\limits_{i=0}^{n-1} |G(n,k; \rho^i)| + \sum\limits_{i=0}^{n-1} |G(n,k; \tau \rho^i)|\right)
\end{equation}
and
\begin{equation}\label{cycgeneq}
|G(n,k)/Z_n| = {1 \over n} \sum\limits_{i=0}^{n-1} |G(n,k; \rho^i)|.
\end{equation}
Equations \eqref{dihgeneq} and \eqref{cycgeneq} reduce the problem of enumerating the dihedral and cyclic classes in $G(n,k)$ to that of finding $G(n,k; \sigma)$ for each $\sigma\in D_{2n}$.
In fact, the following lemma shows that it suffices to consider only a subset of $D_{2n}$.
\begin{lemma}
Let $0\le i \le n-1$. Then
\begin{enumerate}
\item
\begin{equation*}
|G(n,k;\rho^i)|=|G(n,k;\rho^{\gcd(n,i)})|.
\end{equation*}
\item
\begin{enumerate}
\item If $n$ and $i$ are even then $|G(n,k;\tau\rho^i)|=|G(n,k;\tau)|$.
\item If $n$ is even and $i$ odd then $|G(n,k;\tau \rho^i)|=|G(n,k;\tau \rho)|$.
\item If $n$ is odd then $|G(n,k;\tau \rho^i)|=|G(n,k;\tau)|$.
\end{enumerate}
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item Since $\rho^i$ and $\rho^{\gcd(n,i)}$ generate the same subgroup of $D_{2n}$, they fix precisely the same elements of $G(n,k)$.
\item It is well known \cite[p. 243]{A} that conjugate elements in a group acting on a set have the same number of fixed points.
The results then follow
from the conjugacy relations in $D_{2n}$.
\end{enumerate}
\end{proof}
\end{lemma}
Thus equations \eqref{dihgeneq} and \eqref{cycgeneq} imply
\begin{equation}\label{dihgeneq2}
|G(n,k)/D_{2n}| = \begin{cases} {1 \over 2n}\sum\limits_{d|n} \varphi(d)|G(n,k; \rho^{n/d})|+{1\over 4} |G(n,k; \tau)|+{1\over 4} |G(n,k; \tau \rho)|, & \text{ if $n$ is even} \\
{1 \over 2n}\sum\limits_{d|n} \varphi(d)|G(n,k; \rho^{n/d})|+{1\over 2} |G(n,k; \tau)|, & \text{ if $n$ is odd}
\end{cases}
\end{equation}
and
\begin{equation}\label{cycgeneq2}
|G(n,k)/Z_n| = {1 \over n} \sum\limits_{d|n} \varphi(d)|G(n,k; \rho^{n/d})|.
\end{equation}
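In particular, if $n$ is prime and $1\le k\le n-3$, then no dissection with at least one diagonal is fixed by a nontrivial rotation (the orbit of any diagonal under such a rotation has size $n>k$), so \eqref{cycgeneq2} reduces to $|G(n,k)/Z_n|=\frac{1}{n}\G{n}{k}$; for example, $|G(5,2)/Z_5|=\frac{C_3}{5}=1$, since all five triangulations of the pentagon are rotations of one another.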
Theorems \ref{dih} and \ref{cyc} follow from calculating the terms in \eqref{dihgeneq2} and \eqref{cycgeneq2}, respectively.
Section \ref{axsec} addresses the terms $|G(n,k; \tau)|$ and $|G(n,k; \tau \rho)|$, and Section \ref{rotsec} addresses the terms $|G(n,k;\rho^{n/d})|$.
\section{Axially symmetric dissections}\label{axsec}
The sets $G(n,k;\tau)$ and $G(n,k;\tau \rho)$ of axially symmetric dissections can be enumerated by considering the number of {\em perpendiculars}, i.e., diagonals of a dissection which are perpendicular to the axis of symmetry; these diagonals have the form $[v]_n[-v]_n$. Denote by $G(n,k;\tau; t)$ the set of dissections in $G(n,k;\tau)$ with exactly $t$ perpendiculars.
The notation $G(n,k;\tau \rho;t)$ is defined analogously. Thus
\[G(n,k;\tau)=\sum_{t\ge 0}G(n,k;\tau;t),\]
with the analogous formula holding for $G(n,k;\tau \rho;t)$.
\begin{figure}
\caption{An axially symmetric dissection.}
\label{20gon}
\end{figure}
\begin{lemma}\label{Tlem}
If $n$ is even then
\begin{equation}\label{p=0}
|G(n,k;\tau;0)|= \G{n/2+1}{(k-1)/2}+\G{n/2+1}{k/2}
\end{equation}
and for $t\ge 1$,
\begin{equation}\label{p>0}
|G(n,k;\tau;t)|= \sum\limits_{\substack{n_0+\ldots + n_t=n/2\\k_0+\ldots + k_t=(k-t)/2}}
\prod\limits_{s=0}^ {t} (\G{n_s+1}{k_s-1} + \G{n_s+1}{ k_s}).
\end{equation}
\begin{proof}
Let $[v_1]_n[-v_1]_n,\ldots ,[v_t]_n[-v_t]_n\in E(\Phi)$
be the perpendiculars of $\Phi$, where $t\ge 0$ and $0=v_0<v_1<\ldots <v_t<v_{t+1}=n/2$. Let $n_s=v_{s+1}-v_s$ for $s=0,\ldots, t$. The case $t=0$ is considered separately since in this case $v_0v_1=0{n \over 2}=[-0]_n[-{n\over 2}]_n=[-v_0]_n[-v_1]_n$, while in all other cases
$v_sv_{s+1}\ne [-v_s]_n[-v_{s+1}]_n$.
Let $\Phi\in G(n,k;\tau;0)$ and suppose first that $0{n\over 2}\in E(\Phi)$.
The remaining $k-1$ diagonals of $\Phi$ are equally distributed between the two sides of the symmetry axis. Each such dissection then
uniquely corresponds to a dissection of the resulting $(n/2+1)$-gon on either of its sides. Thus there are $\G{n/2+1}{(k-1)/2}$ dissections of this type.
By a similar argument, there are $\G{n/2+1}{k/2}$ dissections in $G(n,k;\tau;0)$ which do not contain the diagonal $0{n\over 2}$, and \eqref{p=0} follows.
Now suppose $1\le t\le k$ and let $\Phi\in G(n,k;\tau;t)$.
The $k-t$ other diagonals of $\Phi$ are pairs of the form $xy$ and $[-x]_n[-y]_n$, where $v_s\le x<y\le v_{s+1}$ and $0\le s \le t$.
Each such diagonal $xy$ is either of the form $v_sv_{s+1}$ or it is a diagonal of the $(n_s+1)$-gon with vertices $v_s, v_s+1, \ldots, v_{s+1}$.
Let $k_s$ be half the number of diagonals in the region defined by the vertices $v_s$, $v_{s+1}$, $[-v_{s+1}]_n$ and $[-v_s]_n$. If $v_sv_{s+1}\in E(\Phi)$ then the dissection of this region
corresponds to a $(k_s-1)$-dissection of the $(n_s+1)$-gon. Otherwise, it corresponds to a $k_s$-dissection
of the $(n_s+1)$-gon. This proves \eqref{p>0}.
\end{proof}
\end{lemma}
Figure \ref{20gon} gives an example of an axially symmetric dissection where, using the notation above, $n=20$, $k=12$, $t=2$, $v_1=4$, $v_2=8$, $k_0=2$, $k_1=2$ and $k_2=1$. The dissection of the region $s=0$ corresponds to a $1$-dissection of the pentagon with vertices $0, 1, 2, 3, 4$, and the dissection of the region $s=1$ corresponds to a $2$-dissection of the pentagon with vertices $4,5,6,7,8$.
For the next two lemmas, $v_sv_{s+1}\ne [-v_s]_n[-v_{s+1}]_n$ for all $s$, and therefore the case $t=0$ need not be considered separately. The proofs are otherwise analogous to that of Lemma \ref{Tlem}.
\begin{lemma}\label{Tlem-odd}
If $n$ is even then for any $t\ge 0$
\begin{equation}\label{TReq}
|G(n,k;\tau \rho;t)| =
\sum\limits_{\substack{n_0+\ldots + n_t=n/2-1\\k_0+\ldots + k_t=(k-t)/2}}
\ \prod\limits_{s=0}^ {t} (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s}).
\end{equation}
\end{lemma}
\begin{lemma}\label{TRlem}
If $n$ is odd then for $t\ge 0$,
\begin{equation}\label{Todd}
|G(n,k;\tau;t)| =
\sum\limits_{\substack{n_0+\ldots + n_t=(n-1)/2\\k_0+\ldots + k_t=(k-t)/2}}
\ \prod\limits_{s=0}^ {t} (\G{n_s+1}{k_s-1}+\G{n_s+1}{k_s}).
\end{equation}
\end{lemma}
\section{Components and marked dissections}
A dissection $\Phi\in G(n)$ can be associated with the set $\mathcal{C}(\Phi)$ of {\em components} comprising it, each of these components being a polygon free of dissecting diagonals.
Thus a component is a subgraph $\gamma$ of $\Phi$ such that for some $r\ge 2$ and $0\le v_r=v_0<\ldots <v_{r-1}\le n-1$,
\[V(\gamma)=\{v_0,\ldots, v_{r-1}\},\]
\[E(\gamma)=\{v_0v_1,\ldots , v_{r-1}v_r\},\]
and
\begin{align}\label{component}
v_iv_j\in E(\Phi) \implies v_iv_j\in E(\gamma).
\end{align}
For example, if $\Phi$ is the dissection shown in Figure \ref{20gon} then $\mathcal{C}(\Phi)$ consists of $32$ digons, $10$ triangles, two quadrilaterals and one hexagon. For $r\ge 2$ let $\mathcal{C}_r(\Phi)$ be the set of $r$-gons in $\mathcal{C}(\Phi)$, which we call {\em $r$-components}.
In what follows, $X$ represents any list of parameters. For any $r\ge 2$, an {\em $r$-marked dissection} is a dissection $\Phi$ with one of its $r$-components $\hh$ distinguished.
Let \[G^r(X)=\{(\Phi, \hh): \Phi \in G(X), \hh \in \mathcal{C}_r(\Phi) \}\] be the set of $r$-marked dissections associated with $G(X)$, and
let $G^*(X)=\bigcup\limits_{r\ge 2}G^r(X)$.
\begin{lemma}\label{marked}
Let $0\le k \le n- 3$. Then
\begin{equation}\label{r=2}
|G^2(n,k)|=(n+k)\G{n}{k},
\end{equation}
and for $r \ge 3,$
\begin{equation}\label{MD}
r|G^r(n,k)|=n \sum\limits_{\substack{n_1+\ldots+n_r=n\\ k_1+\ldots +k_r +|\{i:n_i\ge 2\}|= k}} \ \ \prod\limits_{i=1}^r\G{n_i+1}{k_i}.
\end{equation}
\begin{proof}
Equation \eqref{r=2} holds since $|\mathcal{C}_2(\Phi)|=n+k$ for any dissection $\Phi\in G(n,k)$.
Let $r\ge 3$.
The left-hand side of \eqref{MD} enumerates the marked dissections in which one of the $r$ vertices of the distinguished $r$-gon, say $v_0$, is also distinguished.
The right-hand side enumerates the same elements by selecting the $r$-component first and then dissecting the region between each side of the component and the $n$-gon. Choose $0\le v_0\le n-1$. Decompose the cycle $v_0[v_0+1]_n, [v_0+1]_n[v_0+2]_n,\ldots , [v_0+n-1]_n[v_0+n]_n$
into consecutive paths of length $n_i$ (where $1\le i\le r$ and $n_i\ge 1$) with
vertices $v_0,v_1,\ldots , v_r$.
Observe that every such decomposition of the edges of the $n$-gon corresponds to a set of $n_i$ satisfying $n_1+\ldots +n_r=n$. The vertices $V(\hh)=\{v_0, \ldots,v_{r-1}\}$ of an $r$-component are thus determined by
\[v_i=[v_0+\sum\limits_{j=1}^i n_j]_n.\]
Now for each $0\le i\le r-1$ select a dissection of the region between the edge $v_iv_{i+1}$ and the path from $v_i$ to $v_{i+1}$ along the sides of the $n$-gon. Each such dissection corresponds to a set of $k_i$ satisfying $k_1+\ldots +k_r +|\{i:n_i\ge 2\}|= k$. (The term $|\{i:n_i\ge 2\}|$ accounts for those sides of the $r$-component that are not sides of the $n$-gon). Finally, for each $k_i$ there are $\G{n_i+1}{k_i}$ such dissections.
\end{proof}
\end{lemma}
Since a triangulation consists of $n-2$ triangles, a simpler formula for $3$-marked triangulations is
\begin{equation}\label{3marked}
|G^3(n,n-3)|=(n-2)\G{n}{n-3}.
\end{equation}
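For example, for $n=5$ each of the $\G{5}{2}=5$ triangulations of the pentagon contains three triangles and $n+k=7$ digons (its five sides and its two diagonals), so \eqref{r=2} and \eqref{3marked} give $|G^2(5,2)|=35$ and $|G^3(5,2)|=15$.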
\begin{definition}
Let $\Phi\in G(n)$ and consider the representation of $\Phi$ as a set of points in the interior or boundary of a regular $n$-gon embedded in $\mathbb R^2$ and centered at the origin. Any component of $\Phi$ is a subset of the planar region. There is thus a unique component of $\Phi$ containing the origin. We call this component the
{\em central polygon} $Z(\Phi)$ of a dissection $\Phi$. Let $G_m(X)$ be the subset of $G(X)$ consisting of dissections whose central polygon is an $m$-gon; \[ G_m(X)=\{\Phi\in G(X): Z(\Phi)\in \mathcal{C}_m(\Phi)\},\] and put $G_{\ne m}(X)=G(X)\setminus G_m(X)$.
\end{definition}
For $\Phi\in G_m(n,k)$, if the regions outside of $Z(\Phi)$ are triangulated then $m=n-k$. More generally,
\begin{equation}\label{mn-k}
m\le n-k \text{ for any $\Phi\in G_m(n,k)$}.
\end{equation}
\begin{definition}
Let $\Phi\in G(n)$. Given an edge $xy\in E(\Phi)$ and a vertex $v\in V(\Phi)$, we say that $v$ is {\em outer to} $xy$ if $v$ lies strictly between $x$ and $y$ on the shorter path of the $n$-gon connecting them.
\end{definition}
\begin{remark}\label{outer}
A vertex $v\in V(Z(\Phi))$ cannot be outer to any edge $xy\in E(\Phi)$.
\end{remark}
\section{Rotationally symmetric dissections}\label{rotsec}
The enumeration of the sets $G(n,k; \rho^{n/d})$ of rotationally symmetric dissections can be achieved by considering separately the following two classes.
\begin{definition}\label{classes}
A dissection $\Phi\in G(n,k; \rho^{n/d})$ is said to be {\em centrally bordered} if $\Phi\in G_d(n,k; \rho^{n/d})$ and {\em centrally unbordered} if
$\Phi\in G_{\ne d}(n,k; \rho^{n/d})$.
\end{definition}
Lemma \ref{bdd} addresses the case of centrally bordered dissections; Lemmas \ref{Frange}, \ref{FU} and \ref{UF} and Theorem \ref{bijthm} address the case of centrally unbordered dissections. When considering the set $G(n; \rho^{n/d})$ it is convenient to put $j=n/d$.
Let $\delta_{xy}$ denote the Kronecker delta.
\begin{lemma} \label{bdd}
Let $d,j\ge 2$ and let $0 \le k\le n-3$.
\begin{equation}\label{bddeq}
|G_d(n,k;\rho^j)|= j \G{j+1}{(k-d+\delta_{d2})/d} \ .
\end{equation}
\begin{proof}
Let $\Phi \in G_d(n,k;\rho^j)$. By symmetry the central polygon $Z(\Phi)$ is a regular $d$-gon, which can be positioned in $j$ different ways in the $n$-gon. Since all $d-\delta_{d2}$ edges of $Z(\Phi)$ are diagonals of $\Phi$, there are $k-d+\delta_{d2}$ diagonals of $\Phi$ which are not edges of $Z(\Phi)$. By symmetry these diagonals are equally distributed among the $d$ resulting $(j+1)$-gons, giving
$\G{j+1}{(k-d+\delta_{d2})/d}$ choices for each position of $Z(\Phi)$.
\end{proof}
\end{lemma}
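For instance, with $d=2$, $j=3$ (so $n=6$) and $k=1$, formula \eqref{bddeq} gives $3\,\G{4}{0}=3$, corresponding to the three main diagonals of the hexagon: each is a $\rho^{3}$-invariant $1$-dissection whose central polygon is a digon.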
Enumeration of the centrally unbordered dissections $G_{\ne d}(n,k;\rho^{n/d})$ is accomplished by the introduction of a bijection with the marked dissections which were enumerated in Lemma \ref{marked}.
We define the following ``furling" maps $F_d$ and $F^*_d$ (see Figure \ref{bijfig}).
\begin{definition}
Let $d,j\ge 2$.
\begin{enumerate}
\item Define a function $f_d$ on edges of a graph of $n$ vertices by $f_d(xy)=[x]_j[y]_j$.
\item Let $\Phi$ be a dissection or a component of a dissection of an $n$-gon. Define the labeled graph $F_d(\Phi)$ by
$V(F_d(\Phi))=\{[x]_j : x\in V(\Phi)\}$ and $E(F_d(\Phi))=f_d[E(\Phi)]$. \footnote{Functions on subsets of a set are defined in the usual way and denoted using square brackets.}
\item For $\Phi\in G_{\ne d}(n,k;\rho^j)$, define $F^*_d(\Phi)=(F_d(\Phi), F_d(Z(\Phi))).$\end{enumerate}
\end{definition}
The conclusions of the following remark are easy observations.
\begin{remark}
Let $d,j\ge 2$.
\begin{enumerate}[(i)]
\item If $\Phi\in G_{\ne d}(n,k;\rho^j)$ then its central polygon $Z(\Phi)$ is itself invariant under $\rho^j$ and hence $Z(\Phi)$ is an $rd$-gon for some $r\ge 2$. Therefore
$G_{\ne d}(n,k;\rho^j)$ can be partitioned as follows.
\begin{equation}\label{partition}
G_{\ne d}(n,k;\rho^j)=\bigcup_{r\ge 2}G_{rd}(n,k;\rho^j).
\end{equation}
\item If $\Phi \in G_{\ne d}(n,k; \rho^j)$ and $xy\in E(\Phi)$, then the distance between $x$ and $y$ is at most $j-1$.
\item Suppose $\Phi \in G_{\ne d}(n,k; \rho^j)$. From the previous observation and by symmetry, it follows that for $0\le x<y\le j-1$,
\begin{equation}\label{xy}
xy\in E(F_d(\Phi)) \text{ if and only if either } xy \in E(\Phi) \text{ or } y(x+j)\in E(\Phi).
\end{equation}
\end{enumerate}
\end{remark}
The map ${\bf u}_d$ will be used to output symmetrically distributed edges in a dissection.
\begin{definition}
Let $d,j\ge 2$. For an edge $ab$ of a dissection $\Phi\in G(j)$, define \[{\bf u}_d(ab)=\bigcup\limits_{0\le i \le d-1} \Big \{ \{[a+ij]_n,[b+ij]_n\}\Big \} .\]
\end{definition}
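As an illustrative aside (not part of the formal development), the maps $f_d$ and ${\bf u}_d$ admit a direct computational description. The following Python sketch, with function names of our own choosing, treats edges as unordered pairs of vertex labels.
\begin{verbatim}
# Sketch (ours): f_d reduces an edge xy of the n-gon to [x]_j [y]_j, and
# u_d replicates an edge of the j-gon d times around the n-gon, n = d*j.

def f_d(edge, j):
    """Reduce an edge {x, y} of the n-gon modulo j."""
    x, y = edge
    return frozenset({x % j, y % j})

def u_d(edge, d, j):
    """All d rotated copies {[a+ij]_n, [b+ij]_n} of an edge {a, b}."""
    n = d * j
    a, b = edge
    return {frozenset({(a + i * j) % n, (b + i * j) % n}) for i in range(d)}

if __name__ == "__main__":
    d, j = 2, 4                      # n = 8
    orbit = u_d((1, 3), d, j)        # {{1, 3}, {5, 7}}
    assert all(f_d(tuple(e), j) == frozenset({1, 3}) for e in orbit)
    print(sorted(tuple(sorted(e)) for e in orbit))
\end{verbatim}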
Note that if $\Phi\in G_{rd}(n,k; \rho^{j})$ and $\gamma=Z(\Phi)$ then by the $d$-fold symmetry, for some $0\le v_r=v_0<\ldots <v_{r-1}\le j-1$,
\begin{align}\label{symmcenter}
V(\gamma)&=\{v_0,\ldots, v_{r-1},v_0+j,\ldots, v_{r-1}+j, \ldots, v_{r-1}+(d-1)j\}, \nonumber\\
E(\gamma) &= {\bf u}_d(v_0v_1) \cup {\bf u}_d(v_1v_2) \cup \ldots \cup {\bf u}_d(v_{r-2}v_{r-1})\cup {\bf u}_d(v_{r-1}(v_0+j)).
\end{align}
The following lemma is readily verified using \eqref{xy} and Remark \ref{outer}. It employs the notation \eqref{symmcenter} for $Z(\Phi)$.
\begin{lemma}\label{preimage}
Let $d,j\ge 2$ and let $\Phi\in G_{rd}(n,k; \rho^j)$.
Consider the map $f_d:E(\Phi)\to E(F_d(\Phi))$ defined above. For an edge $ab\in E(F_d(\Phi))$, with $a<b$, the preimage of $ab$ under $f_d$ is given by:
\begin{align}\label{preimage eq}
f_d^{-1}[ab]=
\begin{cases}
{\bf u}_d(ab) , \qquad \text{ if $a>v_0$ or $b<v_{r-1}$},\\
{\bf u}_d(b(a+j)), \qquad \text{ if $a\le v_0$, $b\ge v_{r-1}$, and $ab\ne v_0v_1$}, \\
{\bf u}_d(ab)\cup {\bf u}_d(b(a+j)), \qquad \text { if $ab=v_0v_1$.}\\
\end{cases}
\end{align}
\end{lemma}
\begin{lemma}\label{Frange}
Let $d, j, r\ge 2$ and let $1\le k\le n-3$. If $\Phi \in G_{rd}(n,k; \rho^j)$ then
\[F^*_d(\Phi)\in G^r(j,k/d-\delta_{r2}).\]
\begin{proof}
Clearly $F_d(\Phi)$ has $j$ vertices. We show that its diagonals are noncrossing. Suppose that $a_1b_1$ and $a_2b_2$ are crossing diagonals of $F_d(\Phi)$, with $0\le a_1<a_2<b_1<b_2\le j-1$. By Lemma \ref{preimage}, for each $i=1, 2$ either $a_ib_i$ or $a_i(b_i+j)$ is a diagonal of $\Phi$. Since $a_1<a_2<b_1<b_2<a_1+j<a_2+j$, these two diagonals of $\Phi$ are crossing. This contradiction shows that $F_d(\Phi)\in G(j)$.
We next show that $F_d(Z(\Phi))$ is an $r$-component of $F_d(\Phi)$. By symmetry the center $Z(\Phi)$ has the form \eqref{symmcenter}. Therefore $V(F_d(Z(\Phi)))=\{v_0,\ldots, v_{r-1}\}$ and
$E(F_d(Z(\Phi)))=\{v_0v_1, \ldots, v_{r-1}v_r\}$.
Now if $v_sv_t \in E(F_d(\Phi))$ with $v_s<v_t$ then by \eqref{xy} either $v_sv_t$ or $v_t(v_s+j)$ is in $E(\Phi)$. Therefore by the fact that $Z(\Phi)$ is a component of $\Phi$ and by \eqref{component}, either $v_sv_t$ or $v_t(v_s+j)$ is in $E(Z(\Phi))$.
Thus $v_sv_t\in E(F_d(Z(\Phi)))$. Applying \eqref{component} again gives the conclusion.
Let $l$ be the number of diagonals of $F_d(\Phi)$.
Suppose $r\ge 3$. In this case an edge $xy \in E(\Phi)$ is a diagonal of $\Phi$ if and only if $f_d(xy)$ is a diagonal of $F_d(\Phi)$. By \eqref{preimage eq}, for each diagonal
$ab$ of $F_d(\Phi)$, the preimage $f_d^{-1}[ab]$ consists of $d$ diagonals of $\Phi$. Furthermore, if $ab\ne a'b'$ then $f_d^{-1}[ab]$ and $f_d^{-1}[a'b']$ are disjoint. Thus $k=dl$.
Now suppose $r=2$. If $ab$ is a diagonal of $F_d(\Phi)$ with $ab\ne v_0v_1$ then $f_d^{-1}[ab]$ consists of $d$ diagonals of $\Phi$.
Note that either $v_0v_1$ or $v_1(v_0+j)$ is a diagonal of $\Phi$, since otherwise $k=0$. If both $v_0v_1$ and $v_1(v_0+j)$ are diagonals then
$f_d^{-1}[v_0v_1]$ consists of $2d$ diagonals of $\Phi$. If only one of $v_0v_1$ and $v_1(v_0+j)$ is a diagonal of $\Phi$ then $v_0v_1$ is not a diagonal of $F_d(\Phi)$ and $f_d^{-1}[v_0v_1]$ consists of $d$ diagonals of $\Phi$. Thus in either case for $r=2$, $k=dl+d$ and the result follows.
\end{proof}
\end{lemma}
Lemma \ref{Frange} shows that $F^*_d:G_{\ne d}(n,k; \rho^j)\to G^*(j)$. We next define an ``unfurling'' map $U_d$.
\begin{definition}\label{Ud def}
Let $d,j\ge 2$. Define $U_d: G^*(j)\rightarrow G(n)$ as follows.
Let $(\theta, \beta)\in G^{r}(j)$, and denote the vertices of $\beta$ by $v_0<\ldots <v_{r-1}$.
Define $U_d(\theta,\beta)$ by $V(U_d(\theta,\beta))=\{0,\ldots,n-1\}$ and $E(U_d(\theta,\beta))=\bigcup\limits_{ab\in E(\theta)} f_d^{-1}[ab]$, where $f_d^{-1}$ is given by \eqref{preimage eq}.
\end{definition}
\begin{lemma}\label{FU}
Let $d, j \ge 2$. If $(\theta,\beta)\in G^*(j)$ then $F^*_d(U_d(\theta,\beta))=(\theta,\beta)$.
\begin{proof}
Clearly $V(F_d(U_d(\theta,\beta)))=V(\theta)$ and
\begin{align*}
E(F_d(U_d(\theta, \beta)))=f_d[E(U_d(\theta,\beta))]=f_d \left [\bigcup\limits_{ab\in E(\theta)} f_d^{-1}[ab] \right ] =E(\theta),
\end{align*}
so $F_d(U_d(\theta, \beta))=\theta$.
Suppose $(\theta, \beta)\in G^r(j)$; denote the vertices of $\beta$ by $v_r=v_0< \ldots <v_{r-1}$, and let $\gamma$ be the graph given by \eqref{symmcenter}.
By definition $\gamma$ is a subgraph of $U_d(\theta, \beta)$.
As in the proof of Lemma \ref{Frange}, the condition \eqref{component} can be used to show that in fact $\gamma$ is a component of $U_d(\theta, \beta)$.
Finally, since the vertices of $\gamma$ include those of the regular $d$-gon with vertices $v_0, v_0+j, \ldots , v_0+(d-1)j$, their convex hull contains the origin, so
$\gamma=Z(U_d(\theta,\beta))$.
Thus $F_d(Z(U_d(\theta, \beta)))= F_d(\gamma)=\beta$, completing the proof.
\end{proof}
\end{lemma}
\begin{lemma}\label{UF}
Let $d,j\ge 2$. If $\Phi\in G_{\ne d}(n;\rho^j)$ then $U_d(F_d^*(\Phi))=\Phi$.
\begin{proof}
Let $\Phi\in G_{rd}(n;\rho^j)$. It is easily seen that $V(U_d(F_d^*(\Phi)))=V(\Phi)$.
As above, the center $Z(\Phi)$ has the form \eqref{symmcenter}.
Therefore $V(F_d(Z(\Phi)))=\{v_0,\ldots, v_{r-1}\}$, and
by Definition \ref{Ud def},
\begin{align*}
E(U_d(F_d^*(\Phi)))=E(U_d(F_d(\Phi), F_d(Z(\Phi)))
=\bigcup\limits_{ ab\in E(F_d(\Phi))} f_d^{-1}[ab]
=E(\Phi).
\end{align*}
\end{proof}
\end{lemma}
\begin{figure}
\caption{Examples showing the bijections of Theorem \ref{bijthm}.}
\label{bijfig}
\end{figure}
\begin{theorem}\label{bijthm}
Let $1\le k \le n-3$, let $r\ge 2$ and let $2\le d\le n/3$ with $d | n$. Then there exists a bijection:
\begin{equation*}\label{bijeq}
G_{rd}(n,k;\rho^j) \longleftrightarrow G^r(j,k/d-\delta_{r2}).
\end{equation*}
\begin{proof}
By \eqref{partition} and Lemmas \ref{Frange}, \ref{FU} and \ref{UF}, the bijection is given in one direction by $F_d^*$ and in the other direction by $U_d$.
\end{proof}
\end{theorem}
Lemma \ref{marked} and Theorem \ref{bijthm} imply that for $d\le n/3$,
\begin{align}\label{ubeq}
|G_{\ne d}(n,k;\rho^{n/d})|
&=|G^2(n/d,k/d-1)|+\sum\limits_{r\ge 3}{|G^r(n/d,k/d)|}\nonumber \\
&=
{n+k-d\over d}\G{n/d}{k/d-1}+\sum\limits_{\substack{r\ge 3; \ n_1+\ldots+n_r=n/d \\
k_1+\ldots+k_r +|\{i: n_i\ge 2\}|= k/d}} {n \over r} \prod\limits_{i=1}^r\G{n_i+1}{k_i}.
\end{align}
Note that if $d> n/3$ then $|G_{\ne d}(n,k;\rho^{n/d})|=0$.
\subsection{Proof of Theorems \ref{dih} and \ref{cyc}}
The proofs of Theorems \ref{dih} and \ref{cyc} now follow by substituting into equations \eqref{dihgeneq2} and \eqref{cycgeneq2} the expressions obtained in
\eqref{p=0}
--\eqref{Todd} for the number of axially symmetric dissections, and the values obtained in \eqref{bddeq} and \eqref{ubeq} for rotationally symmetric dissections.
\section{Interesting special cases}\label{specs}
The enumeration formulas can be specialized to certain classes of dissections, namely for specific values of $n-k$ and for specific values of $k$.
The next lemma is equivalent to Catalan's $k$-fold convolution formula \cite{Cat,LF}.
\begin{lemma}\label{catcon}
For any $n\ge 0$ and $m\ge 1$,
\begin{equation*}\label{catid}
\sum\limits_{\substack{i_1+\ldots+i_m=n\\i_1,\ldots,i_m\ge 0}}C_{i_1}\cdots C_{i_m}=
\begin{cases}
\frac{m(n+1)(n+2)\cdots(n+\frac m2-1)}{2(n+\frac{m}{2}+2)(n+\frac{m}{2}+3)\cdots (n+m)} C_{n+m/2}, \qquad &\text{if $m$ is even}\\
\\
\frac{m(n+1)(n+2)\cdots(n+\frac{m-1}2)}{(n+\frac{m+3}{2})(n+\frac{m+3}2+1)\cdots(n+m)} C_{n+(m-1)/2}, \qquad &\text{if $m$ is odd.}
\end{cases}
\end{equation*}
\end{lemma}
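As a sanity check (ours, not needed for the proofs), the identity of Lemma \ref{catcon} can be verified numerically for small parameters using exact arithmetic; a Python sketch follows.
\begin{verbatim}
# Sketch (ours): verify the m-fold Catalan convolution identity of the
# lemma for small n, m, using exact rational arithmetic.
from fractions import Fraction
from math import comb, prod

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def lhs(n, m):
    # sum of C_{i_1}...C_{i_m} over compositions i_1+...+i_m = n, i_t >= 0
    if m == 0:
        return 1 if n == 0 else 0
    return sum(catalan(i) * lhs(n - i, m - 1) for i in range(n + 1))

def rhs(n, m):
    if m % 2 == 0:
        num = m * prod(range(n + 1, n + m // 2))
        den = 2 * prod(range(n + m // 2 + 2, n + m + 1))
        return Fraction(num, den) * catalan(n + m // 2)
    num = m * prod(range(n + 1, n + (m - 1) // 2 + 1))
    den = prod(range(n + (m + 3) // 2, n + m + 1))
    return Fraction(num, den) * catalan(n + (m - 1) // 2)

assert all(lhs(n, m) == rhs(n, m) for n in range(8) for m in range(2, 8))
\end{verbatim}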
\begin{lemma}\label{disscons}
\begin{enumerate}
\item For any $n\ge 2$,
\begin{equation}\label{NCT}
\G{n}{n-3}+\G{n}{n-2}=C_{n-2}.
\end{equation}
\item For any $n\ge 2, q\ge 2$,
\begin{equation} \label{delta}
\sum\limits_{i+j=n}\G{i+1}{i-1} \G{j+1}{j+1-q}= \G{n}{n-q}.
\end{equation}
\item For any $n\ge 3$,
\begin{equation}\label{disscon1}
\sum\limits_{i+j=n}\G{i+1}{i-2} \G{j+1}{j-2}=C_{n-1}-2C_{n-2}.
\end{equation}
\item For any $n\ge 3$,
\begin{equation}\label{disscon2}
\sum\limits_{i+j=n}\G{i+1}{i-2} \G{j+1}{j-3}= {(n-3)(n-4)\over 2n}C_{n-2}.
\end{equation}
\end{enumerate}
\begin{proof}
Equations \eqref{NCT} and \eqref{delta} follow from \eqref{catdiss}, and \eqref{disscon1} follows from \eqref{NCT} and from Lemma \ref{catcon}.
To prove \eqref{disscon2}, we show that
\begin{equation}\label{disconn2a}
(n-4)\G{n}{n-4}=n\sum\limits_{i+j=n}\G{i+1}{i-2} \G{j+1}{j-3}.
\end{equation}
The result will then follow since $\G{n}{n-4}={n-3\over 2}C_{n-2}$. Now the left hand side of \eqref{disconn2a} is the number
of almost-triangulations marked by a diagonal (i.e., $(\Phi, \beta)\in G^2(n,n-4)$ where $V(\beta)$ is not of the form $\{v,[v+1]_n\}$).
These can also be enumerated as follows. Choose one vertex $v$ out of the $n$ vertices,
then choose $2\le i\le n-3$ and $j=n-i$. Mark the diagonal $v[v+i]_n$, and choose a triangulation of the resulting $(i+1)$-gon and an almost-triangulation of the resulting $(j+1)$-gon.
\end{proof}
\end{lemma}
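The identities \eqref{NCT}, \eqref{disscon1} and \eqref{disscon2} can also be checked numerically for small $n$. The Python sketch below (ours) assumes the standard Kirkman--Cayley count $\G{n}{k}=\frac{1}{k+1}\binom{n-3}{k}\binom{n+k-1}{k}$ for dissections of an $n$-gon by $k$ noncrossing diagonals, with out-of-range values taken to be zero.
\begin{verbatim}
# Sketch (ours): check (NCT), (disscon1), (disscon2) for small n, assuming
# the Kirkman-Cayley count G(n, k) = binom(n-3, k)*binom(n+k-1, k)/(k+1).
from math import comb

def G(n, k):
    if n < 3 or k < 0 or k > n - 3:
        return 0
    return comb(n - 3, k) * comb(n + k - 1, k) // (k + 1)

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for n in range(3, 16):
    assert G(n, n - 3) + G(n, n - 2) == catalan(n - 2)
    lhs1 = sum(G(i + 1, i - 2) * G(n - i + 1, n - i - 2) for i in range(n + 1))
    assert lhs1 == catalan(n - 1) - 2 * catalan(n - 2)
    lhs2 = sum(G(i + 1, i - 2) * G(n - i + 1, n - i - 3) for i in range(n + 1))
    assert 2 * n * lhs2 == (n - 3) * (n - 4) * catalan(n - 2)
\end{verbatim}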
The details of the proof of Theorem \ref{k=n-5} for $n$ even are given below. Most of the details of the other cases are omitted.
Note that if
\[\sum\limits_{\substack{\ n_1+\ldots+n_r=n/d\\ k_1+\ldots+k_r + |\{i : n_i\ge 2\}| = k/d}}\prod\limits_{i=1}^r\G{n_i+1}{k_i}\ne 0\] for some $d\ge 2$, $r\ge 3$, then it follows from \eqref{mn-k}
that $6\le rd\le n-k$. Thus these terms vanish in the cases $k=n-3$, $n-4$ or $n-5$.
Applying \eqref{diheq} and \eqref{cyceq} when $k=n-3$ (i.e., for triangulations) recovers the result of Moon and Moser
and the result of Brown. (We omit the details here as the even dihedral case of Theorem \ref{k=n-5}, for which the details are provided, is a similar calculation).
\begin{theorem}\label{k=n-3}
\begin{enumerate}
\item {\cite{MM}} Let $n\ge 3$. The number of triangulations of an $n$-gon (equivalently, the number of vertices of the $(n-3)$-dimensional associahedron) modulo the dihedral action is
\begin{equation*}
|G(n,n-3)/D_{2n}| =
\begin{cases}
\frac{1}{2n}C_{n-2} + \frac{1}{3}C_{n/3-1}+\frac{3}{4}C_{n/2-1}, \qquad \qquad &\text{if $n$ is even} \\
\frac{1}{2n}C_{n-2} + \frac{1}{3}C_{n/3-1}+\frac{1}{2}C_{(n-3)/2}, \qquad \qquad &\text{if $n$ is odd.}
\end{cases}
\end{equation*}
\item {\cite{B}} Let $n\ge 3$. The number of triangulations of an $n$-gon (equivalently, the number of vertices of the $(n-3)$-dimensional associahedron) modulo the cyclic action is \[|G(n,n-3)/Z_n|=\frac{1}{n}C_{n-2}+\frac{1}{2}C_{n/2-1}+\frac{2}{3}C_{n/3-1}.\]
\end{enumerate}
\end{theorem}
Setting $k=n-4$ in \eqref{diheq} recovers the following result of the authors.
\begin{theorem}\label{ges}\cite{BR}
Let $n\ge 4$, and let $g^{(e)}(n)$ be the number of almost-triangulations of an $n$-gon (equivalently, edges of the $(n-3)$-dimensional associahedron) modulo the dihedral action. Then
\begin{equation*}
g^{(e)}(n)=
\begin{cases}
(\frac{1}{4}-\frac{3}{4n})C_{n-2}+\frac{3}{8}C_{n/2-1}+(1-\frac{3}n)C_{n/2-2}+\frac{1}{4}C_{n/4-1}, & \text{ if $n$ is even}\\
(\frac{1}{4}-\frac{3}{4n})C_{n-2}+\frac{1}{4}C_{(n-3)/2}, & \text{ if $n$ is odd}. \\
\end{cases}
\end{equation*}
\end{theorem}
Setting $k=n-4$ in \eqref{cyceq} gives the result of Theorem \ref{cycn4}.
The following proposition gives the details needed to simplify \eqref{diheq}, thus completing the proof of the even dihedral case of Theorem \ref{k=n-5}. The other cases are easier.
\begin{proposition}\label{n-5prop}
Let $n\ge 6$ and $k=n-5$. If $n$ is even then
\begin{equation}\label{Gn5E}
{1\over 2n} \G{n}{k}={(n-3)^2(n-4)\over 8n(2n-5)}C_{n-2},
\end{equation}
\begin{equation}\label{Gn5T0D}
{1\over 2} \G{n/2+1}{(k-1)/2}={n-4\over 8}C_{n/2-1},
\end{equation}
\begin{equation}\label{Gn5T0}
{1\over 4} \G{n/2+1}{k/2} =0,
\end{equation}
\begin{equation}\label{Gn5bdd}
\sum\limits_{3\le d|n}{\varphi(d)\over 2d} \G{n/d+1}{k/d-1} = {2\over 5}C_{n/5-1},
\end{equation}
\begin{equation}\label{Gn5ub2}
\sum\limits_{2\le d\le n/3}{\varphi(d)(n+k-d) \over 2dn}\G{n/d}{k/d-1} =0,
\end{equation}
\begin{equation}\label{Gn5ub}
\sum\limits_{\substack{2\le d|n; \ r\ge 3; \ n_1+\ldots+n_r=n/d\\ k_1+ \ldots+k_r + |\{i : n_i\ge 2\}|= k/d}} {\varphi(d)\over 2r} \prod\limits_{i=1}^r\G{n_i+1}{k_i} =0,
\end{equation}
\begin{equation}\label{Gn5T}
{1\over 4} \sum\limits_{\substack{1\le t \le k\\ n_0+\ldots + n_t=n/2\\k_0+\ldots + k_t=(k-t)/2}}
\prod\limits_{s=0}^ {t} \big (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s} \big) = {n^2-2n-12\over 16(n-3)}C_{n/2-1},
\end{equation}
and
\begin{equation}\label{Gn5TR}
{1\over 4} \sum\limits_{\substack{0\le t \le k\\ n_0+\ldots + n_t=n/2-1\\ k_0+\ldots + k_t=(k-t)/2}} \prod\limits_{s=0}^ {t} \big (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s} \big) = {n\over 16(n-3)} C_{n/2-1}.
\end{equation}
\begin{proof}
Equation \eqref{Gn5E} follows from \eqref{catdiss}, and \eqref{Gn5T0D} and \eqref{Gn5T0} are immediate.
In \eqref{Gn5bdd}, the only nonzero summand corresponds to $d=5$.
To prove \eqref{Gn5ub2}, note that if $d$ divides $n$ and $k$ then $d=5$, but then $\G{n/d}{k/d-1}=0$ since $n/d\ge 3$.
Equation \eqref{Gn5ub} follows from the remark before Theorem \ref{k=n-3}.
For \eqref{Gn5T}, note that if $\prod\limits_{s=0}^t (\G{n_s+1}{k_s-1} + \G{n_s+1}{k_s})\ne 0$, then $k_s\le n_s-1$ for all $s$. Therefore in this case
\[(k-t)/2=\sum\limits_{s=0}^t k_s \le \sum\limits_{s=0}^t(n_s-1)=n/2-t-1, \]
i.e., $t\le n-k-2$, with equality if and only if $k_s=n_s-1$ for all $s$.
Therefore, using the notation of Section \ref{axsec}, the nonzero summands in \eqref{Gn5T} correspond to $|G(n,n-5;\tau;1)|$ and $|G(n,n-5;\tau;3)|$. Now
\begin{align}\label{eqn}
|G(n,n-5;\tau;1)|&=\sum\limits_{\substack{n_0+ n_1=n/2\\ k_0 + k_1=n/2-3}} \left (\G{n_0+1}{k_0-1} + \G{n_0+1}{k_0}\right)\left(\G{n_1+1}{k_1-1} + \G{n_1+1}{k_1}\right ) \nonumber \\
&=\sum\limits_{\substack{n_0+ n_1=n/2\\ k_0 + k_1=n/2-3}}
\big (\G{n_0+1}{k_0-1} \G{n_1+1}{k_1-1}+\G{n_0+1}{k_0-1} \G{n_1+1}{k_1}
+ \G{n_0+1}{k_0} \G{n_1+1}{k_1-1} + \G{n_0+1}{k_0} \G{n_1+1}{k_1}\big ).
\end{align}
Any nonzero terms in \eqref{eqn} have $(k_0,k_1)=(n_0-1,n_1-2)$ or $(k_0,k_1)=(n_0-2,n_1-1)$. Therefore by Lemma \ref{disscons} and by symmetry,
\begin{align*}
|G(n,n-5;\tau;1)|=
2\sum\limits_{n_0+ n_1=n/2}
\big (\G{n_0+1}{n_0-2} \G{n_1+1}{n_1-3}+ \G{n_0+1}{n_0-2} \G{n_1+1}{n_1-2} + \G{n_0+1}{n_0-1} \G{n_1+1}{n_1-3} + \G{n_0+1}{n_0-1} \G{n_1+1}{n_1-2}\big )\\
=2\left [{(n-6)(n-8)\over 4n} C_{n/2-2} + C_{n/2-1}-2C_{n/2-2} + {n-6\over 4}C_{n/2-2} + C_{n/2-2}\right ].
\end{align*}
Next by Lemma \ref{catcon},
\begin{multline*}
|G(n,n-5;\tau;3)|=\\
\sum\limits_{n_0+n_1+n_2+ n_3=n/2}\left (\G{n_0+1}{n_0-2}+\G{n_0+1}{n_0-1}\right ) \left (\G{n_1+1}{n_1-2}+\G{n_1+1}{n_1-1}\right )
\left (\G{n_2+1}{n_2-2}+\G{n_2+1}{n_2-1}\right )\left (\G{n_3+1}{n_3-2}+\G{n_3+1}{n_3-1}\right )\\
=\sum\limits_{n_0+n_1+n_2+ n_3=n/2}C_{n_0-1}C_{n_1-1}C_{n_2-1}C_{n_3-1}={2n-12\over n}C_{n/2-2}.
\end{multline*}
Equation \eqref{Gn5T} now follows by simplifying these expressions and using the relation $C_{n/2-2}={n\over 4(n-3)}C_{n/2-1}$. A similar argument proves \eqref{Gn5TR}.
\end{proof}
\end{proposition}
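As a numerical cross-check (ours), the first two identities of Proposition \ref{n-5prop} can be verified for small even $n$ under the Kirkman--Cayley count $\G{n}{k}=\frac{1}{k+1}\binom{n-3}{k}\binom{n+k-1}{k}$ (an assumption on our part, used only for this check); a Python sketch follows.
\begin{verbatim}
# Sketch (ours): verify (Gn5E) and (Gn5T0D) for small even n, assuming
# G(n, k) = binom(n-3, k) * binom(n+k-1, k) / (k+1).
from fractions import Fraction
from math import comb

def G(n, k):
    if n < 3 or k < 0 or k > n - 3:
        return 0
    return comb(n - 3, k) * comb(n + k - 1, k) // (k + 1)

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for n in range(6, 40, 2):
    k = n - 5
    assert Fraction(G(n, k), 2 * n) == \
        Fraction((n - 3) ** 2 * (n - 4), 8 * n * (2 * n - 5)) * catalan(n - 2)
    assert Fraction(G(n // 2 + 1, (k - 1) // 2), 2) == \
        Fraction(n - 4, 8) * catalan(n // 2 - 1)
\end{verbatim}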
The proof of Theorem \ref{k=n-5} now follows by applying Proposition \ref{n-5prop} and Theorems \ref{dih} and \ref{cyc}.
The proof of Theorems \ref{k=n-6} and \ref{k=2} proceeds along similar lines. For Theorem \ref{k=n-6}, note that if $|G^r(n/d,k/d-\delta_{r2})|\ne 0$ then either $r=2$ and $d=2$; or $r=2$ and $d=3$; or $r=3$ and $d=2$ (see Figure \ref{bijfig}). The last of these cases can be computed using \eqref{3marked}:
\[|G^3(n/2,(n-6)/2)|=|G^3(n/2,n/2-3)|
=(n/2-2)\G{n/2}{n/2-3}=(n/2-2)C_{n/2-2}.\]
\end{document}
\begin{document}
\title{Quantum Hypergraph States in Noisy Quantum Channels}
\begin{abstract}
The family of quantum graph and hypergraph states is ubiquitous in quantum information. They have diverse applications ranging from quantum network protocols to measurement-based quantum computing. The hypergraph states are a generalization of graph states, a well-known family of entangled multi-qubit quantum states. We can map these states to qudit states. In this work, we analyze a number of noisy quantum channels for qudits, acting on the family of qudit hypergraph states. The channels studied are the dit-flip noise, phase flip noise, dit-phase flip noise, depolarizing noise, and the non-Markovian amplitude damping (ADC), dephasing, and depolarization channels. To gauge the effect of noise on quantum hypergraph states, the fidelity between the original and the final states is studied. The change of coherence under the action of noisy channels is also studied, both analytically and numerically.
\noindent \textbf{Keywords:} Quantum hypergraph states, Weyl operators, noisy quantum channels, fidelity, coherence.
\end{abstract}
\section{Introduction}
Graph states \cite{hein2004multiparty, anders2006fast, van2005local} are utilized as a resource in measurement-based quantum information and computation. They are constructed based on an undirected and simple graph \cite{west2001introduction}. The graph states constitute a large family of entangled states including cluster states \cite{briegel2009cluster, nielsen2004optical}, GHZ states \cite{greenberger1989going, dahlberg2020transform}, and stabilizer states \cite{gottesman1997stabilizer}. However, they cannot represent all pure entangled states. To go beyond graph states, while maintaining the important connection to graphs, this concept is generalized to quantum hypergraph states \cite{rossi2013quantum, qu2013encoding}. The hypergraph states have been widely applied to different problems of quantum information and computation, such as error correction \cite{balakuntala2017quantum, wagner2018analysis, lyons2017local} and quantum blockchain \cite{banerjee2020quantum}, measurement-based quantum computation \cite{gachechiladze2019changing, takeuchi2019quantum, morimae2017verification, zhu2019efficient}, study of quantum entanglement \cite{ghio2017multipartite, guhne2014entanglement, haddadi2019efficient, pal2006multipartite, qu2013multipartite, qu2013relationship, akhound2020evaluation}, continuous variable quantum information \cite{moore2019quantum}, quantum optics \cite{gu2020quantum, sarkar2021phase}, and neural networks \cite{cao2021representations}. The size of the Hilbert space for a graph or a hypergraph state scales exponentially with the number of qubits.
In our earlier work \cite{sarkar2021phase}, we mapped the multi-qubit structure of the family of quantum hypergraph states to qudits; this is briefly discussed in Section 2. This kind of mapping has been realized in the literature \cite{gottesman2001encoding}. In this work, we aim to explore the noise properties of the quantum hypergraph states. The transformation from multi-qubit to qudit states simplifies our analysis. To understand the noise properties of a multi-qubit state, the particular qubit which will be affected by noise needs to be fixed. Therefore, given any $n$-qubit hypergraph state, we have $n$ different choices for an application of noise. In contrast, when we consider the state as a qudit state we can apply the noise to the complete state.
To the best of our knowledge, this is the first investigation in this direction. To investigate the impact of noise on a state, the characteristics of the corresponding noisy quantum channel should be understood. Fidelity is a useful diagnostic for this. The quantum channels which we study are dit-flip noise, phase flip noise, dit-phase flip noise, depolarizing noise, ADC (non-Markovian noise), non-Markovian dephasing noise, and non-Markovian depolarization noise. These channels were originally defined to apply to qubits. The dit-flip noise, phase flip noise, dit-phase flip noise, and depolarizing noise were generalized to apply to qudit states in \cite{fonseca2019high}. Following this direction, we generalize ADC (non-Markovian noise), non-Markovian dephasing, and non-Markovian depolarization noise to qudits. The analytical expression of the fidelity between the original and the final states is calculated for each of these channels.
We note that the coherence of these states consistently decreases under the application of these channels.
This article is organized as follows. We mention preliminary concepts of quantum hypergraph states in Section 2. Here we also describe some of the important characteristics that will be needed subsequently. We begin Section 3 with the Weyl operator formalism that is essential to design qudit channels. We dedicate different subsections to different channels. We discuss unital as well as non-unital channels, both Markovian and non-Markovian. In every subsection, we discuss the fidelity between the original and the final state as well as the change in coherence. We then conclude. In an appendix, we describe a bit-flip noise operation on a multi-qubit hypergraph state.
\section{Preliminaries}
In combinatorics, a simple graph $G = (V(G), E(G))$ is a combinatorial object consisting of a set of vertices $V(G)$ and a set of edges $E(G)$. An edge in a graph is a set of two vertices. A hyperedge is a set consisting of more than two vertices. A hypergraph \cite{bretto2013hypergraph} is a generalization of a graph, and is a combination of a set of vertices $V(H)$ and a set of hyperedges $E(H)$, which is denoted by $H = (V(H), E(H))$. An example of a hypergraph is depicted in figure \ref{example_hypergraph}.
\begin{figure}
\caption{A hypergraph with vertices $0, 1, 2$ and $3$, with two edges $(0, 3)$ and $(1, 3)$ as well as two hyperedges $(0, 2, 3)$ and $(1, 2, 3)$, together with the quantum circuit for generating the corresponding hypergraph state.}
\label{example_hypergraph}
\label{example_circuit}
\end{figure}
The hypergraph states are a generalization of graph states or cluster states \cite{nielsen2006cluster}. If a hypergraph has $n$ vertices then the corresponding hypergraph state is an $n$-qubit state \cite{dutta2019permutation, dutta2018boolean} belonging to $\mathcal{H}_2^{\otimes n}$. To construct a hypergraph state we first assign a qubit $\ket{+} = \frac{\ket{0} + \ket{1}}{\sqrt{2}}$ to every vertex. Also, for every hyperedge $\{v_1, v_2, \dots, v_k\}$ we apply a $k$-qubit controlled $Z$ gate on the qubits corresponding to the vertices $v_1, v_2, \dots, v_k$. These states can be expressed as
\begin{equation}
\ket{G} = \frac{1}{\sqrt{2^n}} \sum_{i = 0}^{2^n - 1} (-1)^{f(\bin(i))}\ket{\bin(i)},
\end{equation}
where $f: \{0, 1\}^n \rightarrow \{0, 1\}$ is a Boolean function with $n$ variables acting on the $n$-bit binary representation $\bin(i)$ of $i$ \cite{dutta2018boolean}. Clearly, the size of the state vector is $2^n$, where $n$ is the number of vertices in the hypergraph. For simplicity, we denote $2^n = N$ from now on.
The hypergraph in figure \ref{example_hypergraph} has four vertices. To every vertex we assign a $\ket{+}$ state. Then we apply different controlled $Z$ operations on the qubits. For example, there is a hyperedge $(0, 2, 3)$. Hence, we apply a $3$-qubit controlled $Z$ gate on the $0$th, $2$nd and $3$rd qubits. All the controlled $Z$ operations are depicted in figure \ref{example_circuit}, as a quantum circuit. The corresponding hypergraph state is a four-qubit state,
\begin{equation}\label{example_multiqubit_hyperrgaph_state}
\begin{split}
\ket{G} = \frac{1}{4} & [\ket{0000} + \ket{0001} + \ket{0010} + \ket{0011} + \ket{0100} + \ket{0101} - \ket{0110} + \ket{0111} \\
& + \ket{1000} - \ket{1001} + \ket{1010} + \ket{1011} + \ket{1100} - \ket{1101} - \ket{1110} + \ket{1111}],
\end{split}
\end{equation}
where $\ket{0}$ and $\ket{1}$ represent the qubits.
Note that the set of vectors $\{\ket{\bin(i)}: i = 0, 1, 2, \dots (N - 1)\}$ forms a basis of an $N$-dimensional space. We identify this space with the span of the $N$-dimensional vectors $\ket{i}$ for $i = 0, 1, 2, \dots (N - 1)$; numerically, $\ket{i}$ is equivalent to $\ket{\bin(i)}$. To simplify our notation we write $f(\bin(i)) = g(i)$, where $g : \{0, 1, 2, \dots (N - 1)\} \rightarrow \{0, 1\}$. Therefore, corresponding to a hypergraph with $n$ vertices there is a hypergraph state of dimension $N$ in $\mathcal{H}_{N}$, which is of the form
\begin{equation}\label{qudit_hypergraph_state}
\ket{G} = \frac{1}{\sqrt{N}} \sum_{i = 0}^{N - 1} (-1)^{g(i)}\ket{i}.
\end{equation}
Here, $N$ depends on the number of vertices in the hypergraph $G$. Also, $g$ depends on the combinatorial structure of $G$. In the later parts of this article, we shall observe that the coherence and the fidelity between the states depend on $N$, on $g$, or on both. Our definition of qudit hypergraph state is different from the earlier proposals discussed in \cite{keet2010quantum, steinhoff2017qudit}, based on the $d$-dimensional Pauli group and its normalizer \cite{patera1988pauli}. Our approach is advantageous as it has an explicit relation to the original approach to the multi-qubit hypergraph states. The density matrix of the state can be expressed as
\begin{equation}\label{qudit_density_in_full_form}
\rho = \ket{G}\bra{G} = \sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \frac{(-1)^{g(i) + g(j)}}{N} \ket{i}\bra{j}.
\end{equation}
To date there are a number of techniques for investigating the evolution of a quantum state in the context of open quantum systems \cite{banerjee2018open}. The Kraus operator formalism \cite{choi1975completely, kraus1974operations, kraus1983states} finds a prominent place in this context. In this method, the evolution of the quantum state $\rho$ is modeled by a set of Kraus operators $\{E_k: k = 1, 2, \dots\}$ defining a trace-preserving map. The final state is represented by
\begin{equation}\label{Kraus_operator}
\Lambda(\rho) = \sum_k E_k \rho E_k^\dagger, ~\text{where}~ \sum_k E_k^\dagger E_k = I
\end{equation}
is the identity operator.
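The action (\ref{Kraus_operator}) is also easy to realize numerically. The following Python/NumPy sketch (ours, with an illustrative Kraus set that is not one of the channels studied below) applies a channel in the Kraus representation and checks trace preservation.
\begin{verbatim}
# Sketch (ours): apply a channel in the Kraus representation and check that
# sum_k E_k^dagger E_k = I implies Tr rho is unchanged.
import numpy as np

def apply_channel(rho, kraus_ops):
    """Return Lambda(rho) = sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus_ops)

if __name__ == "__main__":
    N, p = 4, 0.3
    # Illustrative Kraus set: shift |i> -> |i+1 mod N> with probability p.
    shift = np.roll(np.eye(N), 1, axis=0)
    kraus = [np.sqrt(1 - p) * np.eye(N), np.sqrt(p) * shift]
    completeness = sum(E.conj().T @ E for E in kraus)
    assert np.allclose(completeness, np.eye(N))
    psi = np.random.default_rng(0).normal(size=N) + 0j
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    assert np.isclose(np.trace(apply_channel(rho, kraus)).real, 1.0)
\end{verbatim}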
One of the central issues in the investigations of open quantum system is the dynamics of decoherence. It is concerned with the evolution of quantum coherence, which is particularly important for quantum information and computation. The $l_1$ norm of coherence is an important measure of quantum coherence in a state, which we can analytically calculate for the quantum hypergraph states \cite{baumgratz2014quantifying}. Given a density matrix $\rho = (\rho_{i,j})$, the $l_1$ norm of coherence is defined by
\begin{equation}
C_{l_1}(\rho) = \sum_{i \neq j} |\rho_{i,j}| = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i,j}|.
\end{equation}
From the equation (\ref{qudit_density_in_full_form}) we can see that the $(i,j)$-th entry of the density matrix $\rho$ of a hypergraph state is $\rho_{i,j} = \frac{(-1)^{g(i) + g(j)}}{N}$. Therefore $|\rho_{i,j}| = \frac{1}{N}$. Hence, the $l_1$ norm of coherence of any member of the family of quantum hypergraph states is
\begin{equation}\label{coherence_of_hypergraph_state}
C_{l_1}(\rho) = \sum_{i \neq j} |\rho_{i,j}| = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i,j}| = \frac{N^2 - N}{N} = N - 1.
\end{equation}
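The state (\ref{qudit_hypergraph_state}) and the value (\ref{coherence_of_hypergraph_state}) are easy to reproduce numerically. The Python sketch below (ours) builds $\ket{G}$ from an arbitrary small list of hyperedges, chosen only as an illustration, and checks that $C_{l_1}(\rho) = N - 1$.
\begin{verbatim}
# Sketch (ours): build a qudit hypergraph state from a hyperedge list and
# check that its l1 coherence equals N - 1.
import numpy as np
from itertools import product

def hypergraph_state(n_vertices, hyperedges):
    """Amplitudes (-1)^{f(x)}/sqrt(N), f(x) = sum over hyperedges of prod x_v."""
    N = 2 ** n_vertices
    amps = np.empty(N)
    for i, bits in enumerate(product([0, 1], repeat=n_vertices)):
        f = sum(all(bits[v] for v in e) for e in hyperedges) % 2
        amps[i] = (-1) ** f
    return amps / np.sqrt(N)

def l1_coherence(rho):
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

if __name__ == "__main__":
    edges = [(0, 3), (1, 3), (0, 2, 3), (1, 2, 3)]   # illustrative hyperedges
    psi = hypergraph_state(4, edges)
    rho = np.outer(psi, psi)
    assert np.isclose(l1_coherence(rho), len(psi) - 1)
\end{verbatim}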
\section{Noisy quantum channels in higher dimension}
In this section, we generalize a number of quantum channels for qudit states and study their action on qudit hypergraph states. The mathematical preliminaries laid down here will be used subsequently in this paper. The Weyl operators were employed in the context of quantum teleportation \cite{bennett1993teleporting}. They are well studied in the context of quantum computation and information \cite{bertlmann2008bloch, fonseca2019high, narnhofer2006entanglement, baumgartner2006state, baumgartner2007special}. For an $N$-dimensional qudit system there are $N^2$ Weyl operators $\hat{U}_{r, s}$, such that \cite{bertlmann2008bloch}
\begin{equation}\label{Weyl_operator_in_general}
\hat{U}_{r, s} = \sum_{i = 0}^{N - 1} \omega_{N}^{i r} \ket{i}\bra{i \oplus s} ~\text{for}~ 0 \leq r, s \leq (N - 1),
\end{equation}
where $\omega_{N} = \exp(\frac{2 \pi \iota}{N})$ is the primitive $N$-th root of unity, and ``$\oplus$" denotes addition modulo $N$. Clearly, $\hat{U}_{0, 0} = I_{N}$, the identity matrix of order $N$. Also, $\hat{U}_{r, s}^\dagger \hat{U}_{r, s} = \hat{U}_{r, s} \hat{U}_{r, s}^\dagger = I_N$, that is $\hat{U}_{r, s}$ is a unitary operator for all $r$ and $s$. Applying $\hat{U}_{r, s}$ on state $\ket{G}$, in equation (\ref{qudit_hypergraph_state}) we have
\begin{equation}\label{Weyl_on_G}
\hat{U}_{r, s} \ket{G} = \sum_{i = 0}^{N - 1} \omega_{N}^{i r} \ket{i}\bra{i \oplus s } \left[\frac{1}{\sqrt{N}} \sum_{j = 0}^{N - 1} (-1)^{g(j)}\ket{j}\right] = \frac{1}{\sqrt{N}} \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s)} \omega_{N}^{i r} \ket{i}.
\end{equation}
As $\omega_{N} = \exp\left(\frac{2 \pi \iota}{N}\right)$, we have $\overline{\omega_{N}^j} = \omega_{N}^{-j}$. Applying $\hat{U}_{r, s}$ on the density matrix $\rho$ in equation (\ref{qudit_density_in_full_form}) we have
\begin{equation}\label{Weyl_on_G_density}
U_{r, s} \rho U_{r, s}^\dagger = U_{r, s} \ket{G}\bra{G} U_{r, s}^\dagger = \frac{1}{N} \sum_{i = 0}^{N - 1} \sum_{j = 0}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j)r} \ket{i}\bra{j}.
\end{equation}
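A direct numerical implementation of the Weyl operators is straightforward. The Python sketch below (ours) builds $\hat{U}_{r,s}$ from equation (\ref{Weyl_operator_in_general}) and checks unitarity together with the action (\ref{Weyl_on_G}) for a randomly chosen sign function $g$.
\begin{verbatim}
# Sketch (ours): build the Weyl operators and check unitarity and the
# action on |G> for an arbitrary sign pattern (-1)^{g(i)}.
import numpy as np

def weyl(N, r, s):
    omega = np.exp(2j * np.pi / N)
    U = np.zeros((N, N), dtype=complex)
    for i in range(N):
        U[i, (i + s) % N] = omega ** (i * r)   # omega^{ir} |i><i + s|
    return U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N = 8
    g = rng.integers(0, 2, size=N)
    G = (-1.0) ** g / np.sqrt(N)
    omega = np.exp(2j * np.pi / N)
    for r in range(N):
        for s in range(N):
            U = weyl(N, r, s)
            assert np.allclose(U @ U.conj().T, np.eye(N))
            predicted = np.array([(-1) ** g[(i + s) % N] * omega ** (i * r)
                                  for i in range(N)]) / np.sqrt(N)
            assert np.allclose(U @ G, predicted)
\end{verbatim}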
The following expressions will be useful in the calculations below. From equation (\ref{Weyl_on_G}) we have
\begin{equation}\label{Fidelity_half}
\begin{split}
& \braket{G| U_{r,s} | G} = \frac{1}{\sqrt{N}} \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s)} \omega_{N}^{ir} \braket{G | i} = \frac{1}{\sqrt{N}} \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s)} \omega_{N}^{ir} \frac{(-1)^{g(i)}}{\sqrt{N}} = \frac{1}{N} \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \omega_{N}^{ir}\\
\text{or}~ & \braket{G| U_{r,s} \rho U_{r,s}^\dagger | G} = \braket{G| U_{r,s} | G} \braket{G| U_{r,s}^\dagger | G} = \frac{1}{N^2} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \omega_{N}^{ir} \right|^2.
\end{split}
\end{equation}
We now apply different noisy channels on qudit hypergraph state $\rho$, equation (\ref{qudit_density_in_full_form}).
\subsection{Dit-flip noise}
The dit-flip noise is a generalization of bit-flip noise. It flips the state $\ket{i}$ to one of the states $\ket{i \oplus 1}, \ket{i \oplus 2}, \dots, \ket{i \oplus (N - 1)}$ with total probability $p$, each with probability $\frac{p}{N - 1}$. The associated Kraus operators are
\begin{equation}
\hat{E}_{0, s} = \begin{cases} \sqrt{1 - p} I_{N} & ~\text{when}~ r = 0, s = 0; \\ \sqrt{\frac{p}{N - 1}} U_{0, s} & ~\text{when}~ r = 0, 1 \leq s \leq (N - 1). \end{cases}
\end{equation}
The new state after applying the dit-flip operation is
\begin{equation}
\rho(p) = \sum_{s = 0}^{N - 1} E_{0, s} \rho E_{0, s}^\dagger = (1 - p)\rho + \frac{p}{N - 1} \sum_{s = 1}^{N - 1} U_{0, s} \rho U_{0, s}^\dagger.
\end{equation}
Applying equations (\ref{qudit_density_in_full_form}) and (\ref{Weyl_on_G_density}), the final state is
\begin{equation}\label{final_state_after_dit_flip}
\begin{split}
\rho(p) & = (1 - p) \rho + \frac{p}{N(N - 1)} \sum_{s = 1}^{N - 1} \sum_{i = 0}^{N - 1} \sum_{j = 0}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \ket{i} \bra{j} \\
& = \sum_{i = 0}^{N - 1} \sum_{j = 0}^{N - 1} \left[(1 - p)\frac{(-1)^{g(i) + g(j)}}{N} + \frac{p}{N(N - 1)} \sum_{s = 1}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \right] \ket{i} \bra{j}.
\end{split}
\end{equation}
Clearly, the $(i,j)$-th element of $\rho(p)$ in equation (\ref{final_state_after_dit_flip}) is given by $\rho_{i,j} = (1 - p) \frac{(-1)^{g(i) + g(j)}}{N} + \frac{p}{N(N - 1)} \sum_{s = 1}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)}$. Hence, the $C_{l_1}$ norm of coherence is
\begin{equation}
C_{l_1}(\rho(p)) = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i,j}| = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} \left|(1 - p) \frac{(-1)^{g(i) + g(j)}}{N} + \frac{p}{N(N - 1)} \sum_{s = 1}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \right|,
\end{equation}
which depends on different choice of the hypergraph $G$. Clearly,
\begin{equation}
C_{l_1}(\rho(p)) \leq \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} \left[ \frac{(1 - p)}{N} + \frac{p}{N} \right] = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} \frac{1}{N} = \frac{N(N - 1)}{N} = (N - 1),
\end{equation}
which is the $l_1$ norm of coherence of the initial state, see equation (\ref{coherence_of_hypergraph_state}). Therefore, the coherence decreases on application of the dit-flip noise on the hypergraph state.
Now we calculate the fidelity between the initial state $\rho$ and the final state $\rho(p)$. As $\rho$ is a pure state, the fidelity is given by
\begin{equation}
F(\rho, \rho(p)) = \braket{G | \rho(p) | G} = \bra{G} E_{0, 0} \rho E_{0, 0}^\dagger \ket{G} + \sum_{s = 1}^{N - 1} \bra{G} E_{0, s} \rho E_{0, s}^\dagger \ket{G} = (1 - p) + \frac{p}{N - 1} \sum_{s = 1}^{N - 1} \bra{G} U_{0, s} \rho U_{0, s}^\dagger \ket{G}.
\end{equation}
Using equation (\ref{Fidelity_half}) we have
\begin{equation}
\begin{split}
F(\rho, \rho(p)) & = (1 - p) + \frac{p}{N^2(N - 1)} \sum_{s = 1}^{N - 1} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \right|^2,
\end{split}
\end{equation}
which depends on the function $g$. Clearly, this fidelity depends on the structure of the hypergraph. Below, we obtain an upper bound on the value of the fidelity.
\begin{equation}
F(\rho, \rho(p)) \leq (1 - p) + \frac{p}{N^2(N - 1)} N(N - 1)= 1 - p + \frac{p}{N} = 1 - \frac{p(N - 1)}{N}.
\end{equation}
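The exact fidelity expression above (before the bound) can be compared with a direct simulation of the dit-flip channel; the following Python sketch (ours) does so for an arbitrary sign function $g$.
\begin{verbatim}
# Sketch (ours): simulate the dit-flip channel and compare <G|rho(p)|G>
# with the closed-form fidelity derived above.
import numpy as np

N, p = 8, 0.3
rng = np.random.default_rng(2)
g = rng.integers(0, 2, size=N)
G = (-1.0) ** g / np.sqrt(N)
rho = np.outer(G, G)

def weyl(N, r, s):
    omega = np.exp(2j * np.pi / N)
    return np.array([[omega ** (i * r) if j == (i + s) % N else 0.0
                      for j in range(N)] for i in range(N)])

kraus = [np.sqrt(1 - p) * np.eye(N)] + \
        [np.sqrt(p / (N - 1)) * weyl(N, 0, s) for s in range(1, N)]
rho_p = sum(E @ rho @ E.conj().T for E in kraus)

fidelity_sim = (G.conj() @ rho_p @ G).real
fidelity_formula = (1 - p) + p / (N ** 2 * (N - 1)) * sum(
    abs(sum((-1) ** (g[(i + s) % N] + g[i]) for i in range(N))) ** 2
    for s in range(1, N))
assert np.isclose(fidelity_sim, fidelity_formula)
\end{verbatim}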
\subsection{$N$-phase-flip noise}
Under the $N$-phase-flip noise, a qudit $\ket{i}$ may acquire any of the $N - 1$ nontrivial phases $\omega_{N}^{ir}$, $1 \leq r \leq (N - 1)$, with total probability $p$. The corresponding Kraus operators are of the form
\begin{equation}
E_{r, 0} = \begin{cases} \sqrt{1 - p} I & ~\text{when}~ r = 0; \\ \sqrt{\frac{p}{N - 1}} U_{r, 0} & ~\text{for}~ 1 \leq r \leq (N - 1), s = 0.
\end{cases}
\end{equation}
The new state after application of the $N$-phase-flip noise is
\begin{equation}
\rho(p) = \sum_{r = 0}^{N - 1} E_{r, 0} \rho E_{r, 0}^\dagger = E_{0, 0} \rho E_{0, 0}^\dagger + \sum_{r = 1}^{N - 1} E_{r, 0} \rho E_{r, 0}^\dagger = (1 - p)\rho + \frac{p}{N - 1} \sum_{r= 1}^{N - 1} U_{r, 0} \rho U_{r, 0}^\dagger.
\end{equation}
Using equation (\ref{qudit_density_in_full_form}) along with equation (\ref{Weyl_on_G_density}), the state $\rho(p)$ is seen to be
\begin{equation}
\begin{split}
\rho(p) & = (1 - p) \sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \frac{(-1)^{g(i) + g(j)}}{N} \ket{i}\bra{j} + \frac{p}{N - 1} \sum_{r = 1}^{N - 1} \left[\frac{1}{N} \sum_{i = 0}^{N - 1} \sum_{j = 0}^{N - 1} (-1)^{g(i) + g(j)} \omega_{N}^{(i - j)r} \ket{i}\bra{j} \right] \\
& = \sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \frac{(-1)^{g(i) + g(j)}}{N} \left[ (1 - p) + \frac{p}{N - 1} \sum_{r = 1}^{N - 1} \omega_{N}^{(i - j)r} \right] \ket{i}\bra{j}.
\end{split}
\end{equation}
As $\omega_{N} = \exp(\frac{2 \pi \iota}{N})$ is the primitive $N$-th root of unity, for $i \neq j$ we have $\sum_{r = 0}^{N - 1} \omega_{N}^{(i - j) r} = 0$, that is $\sum_{r = 1}^{N - 1} \omega_{N}^{(i - j) r} = -1$, while for $i = j$ the latter sum equals $N - 1$. Therefore,
\begin{equation}
\rho(p) = \frac{1}{N}\sum_{i = 0}^{N -1} \ket{i}\bra{i} + \left[(1 - p) - \frac{p}{N - 1}\right] \sum_{\substack{i, j = 0 \\ i \neq j}}^{N - 1} \frac{(-1)^{g(i) + g(j)}}{N} \ket{i}\bra{j}.
\end{equation}
The $(i,j)$-th entry of $\rho(p)$, for $i \neq j$, is
\begin{equation}
\rho_{i,j} = \left[1 - p - \frac{p}{N - 1}\right]\frac{(-1)^{g(i) + g(j)}}{N}.
\end{equation}
Therefore the $l_1$ norm of coherence is
\begin{equation}
\begin{split}
C_{l_1}(\rho(p)) & = \frac{1}{N} \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} \left|1 - p - \frac{p}{N - 1}\right| = \frac{N(N - 1)}{N} \left|1 - p - \frac{p}{N - 1}\right| = (N - 1)\left|1 - p - \frac{p}{N - 1}\right|.
\end{split}
\end{equation}
For $0 < p \leq 1$ we have $\left|1 - p - \frac{p}{N - 1}\right| < 1$. Thus, $C_{l_1}(\rho(p)) < (N - 1)$, which is the coherence of $\rho$. Hence, coherence decreases under the phase-flip operation.
Fidelity between the states $\rho$ and $\rho(p)$ is
\begin{equation}
F(\rho(p), \rho) = \braket{G | E_{0, 0} |G} \braket{G | E_{0, 0}^\dagger|G} + \sum_{r = 1}^{N - 1} \braket{G| E_{r, 0} |G} \braket{G| E_{r, 0}^\dagger | G} = (1 - p) + \frac{p}{N - 1} \sum_{r = 1}^{N - 1} \bra{G} U_{r,0} \rho U_{r,0}^\dagger \ket{G}.
\end{equation}
From equation (\ref{Fidelity_half}) we see that
\begin{equation}
\braket{G| U_{r, 0} \rho U_{r, 0}^\dagger | G} = \frac{1}{N^2} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i) + g(i)} \omega_{N}^{ir} \right|^2 = \frac{1}{N^2} \left| \sum_{i = 0}^{N - 1} \omega_{N}^{ir} \right|^2 = 0,
\end{equation}
as $\sum_{i = 0}^{N - 1} \omega_{N}^{ir} = 0$ for $1 \leq r \leq N - 1$. This shows that $F(\rho(p), \rho) = 1 - p$. Hence, the fidelity is the same for all hypergraph states, independent of their size or combinatorial structure.
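This $g$-independence is easily confirmed numerically; a short Python sketch (ours) follows.
\begin{verbatim}
# Sketch (ours): for the N-phase-flip channel the fidelity should equal
# 1 - p for every sign function g; check a few random choices of g.
import numpy as np

N, p = 8, 0.25
omega = np.exp(2j * np.pi / N)
phase = np.diag(omega ** np.arange(N))          # U_{1,0}; U_{r,0} = phase^r
rng = np.random.default_rng(3)
for _ in range(5):
    g = rng.integers(0, 2, size=N)
    G = (-1.0) ** g / np.sqrt(N)
    rho = np.outer(G, G)
    kraus = [np.sqrt(1 - p) * np.eye(N)] + \
            [np.sqrt(p / (N - 1)) * np.linalg.matrix_power(phase, r)
             for r in range(1, N)]
    rho_p = sum(E @ rho @ E.conj().T for E in kraus)
    assert np.isclose((G.conj() @ rho_p @ G).real, 1 - p)
\end{verbatim}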
\subsection{Dit-phase-flip noise}
The dit-phase-flip noise is a combination of both the dit-flip and the phase-flip noise. It is characterized by the Kraus operators
\begin{equation}
E_{r, s} = \begin{cases} \sqrt{1 - p} I_{N} & ~\text{when}~ r = 0, s = 0; \\ \sqrt{\frac{p}{N^2 - 1}} U_{r, s} & ~\text{for}~ 0 \leq r, s \leq (N - 1) ~\text{and}~ (r, s) \neq (0, 0). \end{cases}
\end{equation}
The new state after applying the dit-phase-flip operation is
\begin{equation}
\rho(p) = E_{0, 0} \rho E_{0, 0}^\dagger + \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} E_{r, s} \rho E_{r, s}^\dagger = (1 - p) \rho + \frac{p}{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} U_{r, s} \rho U_{r, s}^\dagger.
\end{equation}
Using equations (\ref{qudit_density_in_full_form}) and (\ref{Weyl_on_G_density}), and simplifying, we have
\begin{equation}
\rho(p) = \frac{1}{N}\sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \left[(1 - p) (-1)^{g(i) + g(j)} + \frac{p}{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j) r} \right] \ket{i}\bra{j}.
\end{equation}
Now, we calculate the $l_1$ norm of coherence of the new state $\rho(p)$. The $(i,j)$-th entry of $\rho(p)$ is
\begin{equation}
\rho_{i,j} = \frac{1}{N}\left[(1 - p) (-1)^{g(i) + g(j)} + \frac{p}{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j) r} \right].
\end{equation}
The absolute value is bounded by
\begin{equation}
|\rho_{i,j}| \leq \frac{1}{N}\left[(1 - p) + \frac{p}{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} 1 \right] = \frac{1}{N}\left[(1 - p) + \frac{p}{N^2 - 1} (N^2 - 1) \right] = \frac{1}{N}.
\end{equation}
Therefore the $l_1$ norm of coherence of $\rho(p)$ will be bounded by
\begin{equation}
C_{l_1}(\rho(p)) = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i,j}| \leq \frac{1}{N} \times N(N - 1) = N - 1,
\end{equation}
which is the $l_1$ norm of coherence of the original state. Thus, coherence decreases under the application of the dit-phase-flip noise.
Now, we calculate the fidelity between the initial state and the resultant state after applying the dit-phase-flip noise. Using equation (\ref{Fidelity_half}) we have
\begin{equation}
\begin{split}
F(\rho(p), \rho) & = \braket{G | \rho(p) | G} = (1 - p) \braket{G | \rho | G} + \frac{p}{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} \braket{G | U_{r, s} \rho U_{r, s}^\dagger | G}\\
& = (1 - p)+ \frac{p}{N^2(N^2 - 1)} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \omega_{N}^{ir} \right|^2,
\end{split}
\end{equation}
where the fidelity $F(\rho(p), \rho)$ depends on the function $g$ of the particular hypergraph state under consideration.
\subsection{Depolarizing noise}
The Kraus operators generating a depolarizing channel are given by \cite{imany2019high, gokhale2019asymptotic}
\begin{equation}
E_{r, s} = \begin{cases} \sqrt{1 - \frac{N^2 - 1}{N^2}p} I_{N} & ~\text{when}~ r = 0, s = 0; \\ \frac{\sqrt{p}}{N} U_{r, s} & ~\text{for}~ 0 \leq r, s \leq (N - 1) ~\text{and}~ (r, s) \neq (0, 0). \end{cases}
\end{equation}
Expanding $\rho$ and $U_{r, s} \rho U_{r, s}^\dagger$ using equations (\ref{qudit_density_in_full_form}) and (\ref{Weyl_on_G_density}), respectively, the new state after application of the depolarizing noise is
\begin{equation}
\begin{split}
\rho(p) & = \sum_{r = 0}^{N - 1} \sum_{s = 0}^{N - 1} E_{r, s} \rho E_{r, s}^\dagger = \left(1 - \frac{N^2 - 1}{N^2}p \right) \rho + \frac{p}{N^2} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)} }^{N - 1} U_{r, s} \rho U_{r, s}^\dagger \\
& = \sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \left[ \left(1 - \frac{N^2 - 1}{N^2}p \right) \frac{(-1)^{g(i) + g(j)}}{N} + \frac{p}{N^3} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)} }^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j) r} \right] \ket{i}\bra{j}.
\end{split}
\end{equation}
Now we work out the $l_1$ norm of coherence in the state $\rho(p)$. The $(i,j)$-th element of $\rho(p)$ is given by
\begin{equation}
\rho_{i,j} = \left(1 - \frac{N^2 - 1}{N^2}p \right) \frac{(-1)^{g(i) + g(j)}}{N} + \frac{p}{N^3} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)} }^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j) r}.
\end{equation}
An upper bound on the absolute values of $\rho_{i,j}$ is given by
\begin{equation}
|\rho_{i,j}| \leq \left(1 - \frac{N^2 - 1}{N^2}p \right) \frac{1}{N} + \frac{p(N^2 - 1)}{N^3} = \frac{1}{N}.
\end{equation}
Therefore the $l_1$ norm of coherence is bounded by
\begin{equation}
C_{l_1}(\rho(p)) = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i,j}| \leq \frac{N (N - 1)}{N} = N - 1,
\end{equation}
which is the coherence of the original hypergraph state. Therefore, coherence decreases when we apply the depolarizing operation on the hypergraph states.
Applying equation (\ref{Fidelity_half}), the fidelity between the state $\rho(p)$ and $\rho$ is seen to be
\begin{equation}
\begin{split}
F(\rho(p), \rho) & = \braket{G | \rho(p) | G} = \left(1 - \frac{N^2 - 1}{N^2}p \right) + \frac{p}{N^2} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)} }^{N - 1}\braket{G| U_{r, s} \rho U_{r, s}^\dagger | G}\\
& = \left(1 - \frac{N^2 - 1}{N^2}p \right) + \frac{p}{N^4} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)} }^{N - 1} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \omega_{N}^{ir} \right|^2.
\end{split}
\end{equation}
Therefore, $F(\rho(p), \rho)$ depends on the number of vertices, and the structure of $G$, as well as the channel parameter $p$.
\subsection{Amplitude Damping Channel (non-Markovian)}
The non-Markovian Amplitude Damping Channel (ADC) for qubits is characterized by the Kraus operators \cite{ghosal2021characterizing, utagi2020ping}
\begin{equation}
\begin{split}
& M_0 = \begin{bmatrix} 1 & 0 \\ 0 & \sqrt{1 - \lambda(t)} \end{bmatrix} ~\text{and}~ M_1 = \begin{bmatrix} 0 & \sqrt{\lambda(t)} \\ 0 & 0 \end{bmatrix}; \\
\text{where}~ & \lambda(t) = 1 - e^{-gt}\left(\frac{g}{l} \sinh \left[\frac{lt}{2}\right] + \cosh \left[\frac{lt}{2}\right]\right)^2, \text{and}~ l = \sqrt{g^2 - 2 \gamma g}.
\end{split}
\end{equation}
The system exhibits Markovian and non-Markovian evolution of a state when $2 \gamma \ll g$ and $2 \gamma \gg g$, respectively, in case of qubit states. It can be easily seen that
\begin{equation}
\sqrt{1 - \lambda(t)} = e^{-\frac{gt}{2}} \left[\frac{g}{l}\sinh \left(\frac{lt}{2}\right) + \cosh \left(\frac{lt}{2}\right)\right].
\end{equation}
Note that, $\sqrt{1 - \lambda(t)} > 0$.
For an $N$-dit system we generalize the ADC non-Markovian channel using the Kraus operators
\begin{equation}
\begin{split}
E_0 & = \ket{0}\bra{0} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} \ket{i} \bra{i} \\
E_i & = \sqrt{\lambda(t)} \ket{0}\bra{i} ~\text{for}~ 1 \leq i \leq N - 1.
\end{split}
\end{equation}
These satisfy $E_0^\dagger E_0 = \ket{0}\bra{0} + \left(1 - \lambda(t) \right) \sum_{i = 1}^{N - 1} \ket{i} \bra{i}$ and $E_i^\dagger E_i = \lambda(t) \ket{i}\bra{i}$ for $i = 1, 2, 3, \dots (N - 1)$. Therefore,
\begin{equation}
\begin{split}
\sum_{i = 0}^{N - 1} E_i^\dagger E_i & = \ket{0}\bra{0} + \left(1 - \lambda(t) \right) \sum_{i = 1}^{N - 1} \ket{i} \bra{i} + \lambda(t) \sum_{i = 1}^{N - 1} \ket{i}\bra{i} = \ket{0}\bra{0} + \sum_{i = 1}^{N - 1} \ket{i} \bra{i} = I_{N}.
\end{split}
\end{equation}
This indicates that the $E_i$ are bona fide Kraus operators. Next, we apply these Kraus operators on the state $\ket{G}$, equation (\ref{qudit_hypergraph_state}). Since every hyperedge monomial vanishes on the all-zeros string, $g(0) = 0$, and hence $E_0 \ket{G} = \frac{1}{\sqrt{N}} \left[ \ket{0} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} (-1)^{g(i)} \ket{i} \right]$. Therefore,
\begin{equation}
\begin{split}
& E_0 \rho E_0^\dagger = E_0 \ket{G} \bra{G} E_0^\dagger = \frac{1}{N} [ \ket{0} \bra{0} + \sqrt{1 - \lambda(t)} \sum_{j = 1}^{N - 1} (-1)^{g(j)} \ket{0}\bra{j} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} (-1)^{g(i)} \ket{i}\bra{0}\\
&\hspace{5cm} + (1 - \lambda(t)) \sum_{i = 1}^{N - 1} \sum_{j = 1}^{N - 1} (-1)^{g(i) + g(j)} \ket{i} \bra{j} ].
\end{split}
\end{equation}
Also, for $1 \leq i \leq N - 1$, we have $E_i \ket{G} = \frac{\sqrt{\lambda(t)}}{\sqrt{N}} (-1)^{g(i)}\ket{0}$. Hence, $E_i \rho E_i^\dagger = E_i \ket{G}\bra{G} E_i^\dagger = \frac{\lambda(t)}{N} \ket{0}\bra{0}$ for $i = 1, 2, \dots (N - 1)$. Combining we get,
\begin{equation}
\begin{split}
\rho(t) & = \sum_{i = 0}^{N - 1} E_i \rho E_i^\dagger = \frac{1}{N} [ \ket{0} \bra{0} + \sqrt{1 - \lambda(t)} \sum_{j = 1}^{N - 1} (-1)^{g(j)} \ket{0}\bra{j} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} (-1)^{g(i)} \ket{i}\bra{0}\\
& \hspace{4cm} + (1 - \lambda(t)) \sum_{i = 1}^{N - 1} \sum_{j = 1}^{N - 1} (-1)^{g(i) + g(j)} \ket{i} \bra{j} ] + (N - 1) \frac{\lambda(t)}{N} \ket{0}\bra{0}\\
& = \left[(N - 1) \frac{\lambda(t)}{N} + \frac{1}{N} \right] \ket{0}\bra{0} + \sqrt{1 - \lambda(t)} \sum_{j = 1}^{N - 1} (-1)^{g(j)} \ket{0}\bra{j} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} (-1)^{g(i)} \ket{i}\bra{0}\\
& \hspace{4cm} + (1 - \lambda(t)) \sum_{i = 1}^{N - 1} \sum_{j = 1}^{N - 1} (-1)^{g(i) + g(j)} \ket{i} \bra{j}.
\end{split}
\end{equation}
The expression of $\rho(t)$ indicates that the $(i,j)$-th term of $\rho(t)$ for $i \neq j$ is represented by
\begin{equation}
\rho_{i,j} = \begin{cases} (-1)^{g(j)} \sqrt{1 - \lambda(t)} & ~\text{when}~ i = 0, ~\text{and}~ j = 1, 2, \dots (N - 1); \\
(-1)^{g(i)} \sqrt{1 - \lambda(t)} & ~\text{when}~ j = 0, ~\text{and}~ i = 1, 2, \dots (N - 1); \\
(-1)^{g(i) + g(j)} (1 - \lambda(t)) & ~\text{when}~ i \neq 0, j\neq 0, i\neq j, ~\text{and}~ i, j = 1, 2, \dots (N - 1). \end{cases}
\end{equation}
The $l_1$ norm of coherence of the state $C_{l_1}(\rho(t))$ is
\begin{equation}
C_{l_1}(\rho(t)) = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0\\ j \neq i}}^{N - 1} |\rho_{i,j}| = (N - 1)\left[ 2\sqrt{1 - \lambda(t)} + (N - 2) (1 - \lambda(t)) \right].
\end{equation}
Note that, the expression of $C_{l_1}(\rho(t))$ depends only on the number of vertices in the hypergraph and $\lambda(t)$ coming from the noisy channel. All the hypergraph states generated by the hypergraphs with equal number of vertices have equal coherence. Recall from equation (\ref{coherence_of_hypergraph_state}) that the coherence of $\rho$ is $(N - 1)$. Coherence decreases under the ADC noise if $2\sqrt{1 - \lambda(t)} + (N - 2) (1 - \lambda(t)) < 1$. Simplifying we get
\begin{equation}
\frac{-1 - \sqrt{N - 1}}{N - 2} < \sqrt{1 - \lambda(t)} < \frac{-1 + \sqrt{N - 1}}{N - 2}.
\end{equation}
Therefore, the coherence of $\rho(t)$ is less than the coherence of the original state when $\sqrt{1 - \lambda} < \frac{-1 + \sqrt{N - 1}}{N - 2}$. For different values of $n$ we have plotted the coherence in figure \ref{coherence_plot}. In the non-Markovian regime, we can see the typical recurrence behavior.
\begin{figure}
\caption{(Color online) Plot of coherence as a function of $t$, for $n = 3$ and $n = 10$ over the ranges $0 \leq t \leq 1$ and $0 \leq t \leq 10$. Here $n$ is the number of vertices in the hypergraph and $g = 1$ in all the cases. In the Markovian domain we consider $\gamma = 0.01$ and $\gamma = 0.25$; in the non-Markovian domain we consider $\gamma = 10$ and $\gamma = 20$.}
\label{coherence_plot}
\end{figure}
The fidelity between $\rho(t)$ and $\rho$ is given by
\begin{equation}
F(\rho(t), \rho) = \braket{G | \rho(t) | G} = \braket{G | E_0 | G } \braket{G | E_0^\dagger | G} + \sum_{i = 1}^{N - 1} \braket{G | E_i | G} \braket{G | E_i^\dagger | G}.
\end{equation}
As $E_0 \ket{G} = \frac{1}{\sqrt{N}} \left[ \ket{0} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} (-1)^{g(i)} \ket{i} \right]$, we have
\begin{equation}
\begin{split}
\braket{G | E_0 | G} = \frac{1}{\sqrt{N}} \left[ \frac{1}{\sqrt{N}} + \sqrt{1 - \lambda(t)} \sum_{i = 1}^{N - 1} \frac{(-1)^{g(i)} (-1)^{g(i)}}{\sqrt{N}} \right] = \frac{1}{N} \left[ 1 + (N - 1) \sqrt{1 - \lambda(t)} \right].
\end{split}
\end{equation}
Also, $E_i \ket{G} = \frac{\sqrt{\lambda(t)}}{\sqrt{N}} (-1)^{g(i)}\ket{0}$ for $1 \leq i \leq N - 1$. Hence, $\braket{G | E_i | G} = \frac{\sqrt{\lambda(t)}}{N} (-1)^{g(i)}$. Combining we get
\begin{equation}
F(\rho(t), \rho) = \frac{1}{N^2} \left[ 1 + (N - 1) \sqrt{1 - \lambda(t)} \right]^2 + \frac{\lambda(t)}{N^2} (N - 1).
\end{equation}
In this case, the fidelity depends on the number of vertices of the hypergraph and also on $\lambda(t)$. It does not depend on the structural properties of $G$. For different values of $n$ the fidelity is shown in figure \ref{fidelity_plot}. Recurrences are seen in the non-Markovian regime.
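The fidelity expression can again be checked against a direct Kraus simulation. In the Python sketch below (ours), the value of $\lambda(t)$ is treated as a free parameter in $[0,1]$ and the hyperedge structure enters only through the signs $(-1)^{g(i)}$, with $g(0) = 0$ as for any hypergraph state.
\begin{verbatim}
# Sketch (ours): simulate the qudit ADC with the Kraus operators E_0, E_i
# above and compare <G|rho(t)|G> with the closed-form fidelity.
import numpy as np

N = 8
lam = 0.37                                   # some value of lambda(t) in [0, 1]
rng = np.random.default_rng(4)
g = rng.integers(0, 2, size=N)
g[0] = 0                                     # g(0) = 0 for hypergraph states
G = (-1.0) ** g / np.sqrt(N)
rho = np.outer(G, G)

E0 = np.diag([1.0] + [np.sqrt(1 - lam)] * (N - 1))
kraus = [E0] + [np.sqrt(lam) * np.outer(np.eye(N)[0], np.eye(N)[i])
                for i in range(1, N)]
assert np.allclose(sum(E.conj().T @ E for E in kraus), np.eye(N))

rho_t = sum(E @ rho @ E.conj().T for E in kraus)
fidelity_sim = (G @ rho_t @ G).real
fidelity_formula = (1 + (N - 1) * np.sqrt(1 - lam)) ** 2 / N ** 2 \
    + lam * (N - 1) / N ** 2
assert np.isclose(fidelity_sim, fidelity_formula)
\end{verbatim}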
\begin{figure}
\caption{(Color online) Plot of fidelity as a function of $t$, for $n = 3$ and $n = 10$ over the ranges $0 \leq t \leq 1$ and $0 \leq t \leq 10$. Here, $n$ is the number of vertices in the hypergraphs and $g = 1$ in all the cases. In the Markovian domain we consider $\gamma = 0.01$ and $\gamma = 0.25$; in the non-Markovian domain we consider $\gamma = 10$ and $\gamma = 20$.}
\label{fidelity_plot}
\end{figure}
\subsection{Non-Markovian Dephasing}
For a qubit system the non-Markovian dephasing channel has been studied in \cite{shrikant2018non}. For a qudit system we generalize the non-Markovian dephasing channel with the following Kraus operators:
\begin{equation}
E_{r, s} = \begin{cases} \sqrt{1 - \kappa } I & ~\text{when}~ r = 0, s = 0; \\ \sqrt{\frac{\kappa }{N^2 - 1}} U_{r, s} & ~\text{for}~ 0 \leq r, s \leq (N - 1) ~\text{and}~ (r, s) \neq (0, 0). \end{cases}
\end{equation}
where $\kappa = [ 1 + \alpha(1 - p)] p,$ $0 \leq p \leq \frac{1}{2}$ and $0 \leq \alpha \leq 1$. The non-Markovianity of the channel depends on the choice of the value of $\alpha$ and on the functional dependence of $\kappa$ on $p$. Here, we instead consider
\begin{equation}
\kappa(p) = p \frac{1 + \eta (1 - 2p) \sin(\omega p)}{1 + \eta (1 - 2p)}.
\end{equation}
Here $\eta$ and $\omega$ are two positive constants characterizing the strength and frequency of the channel. Also, $0 \leq p \leq \frac{1}{2}$. Clearly, $E_{0, 0}^\dagger E_{0, 0} = (1 - \kappa) I$, and $E_{r, s}^\dagger E_{r, s} = \frac{\kappa }{N^2 - 1} U_{r, s}^\dagger U_{r, s} = \frac{\kappa }{N^2 - 1}I$. Combining we get
\begin{equation}
\sum_{r = 0}^{N - 1} \sum_{s = 0}^{N - 1} E_{r, s}^\dagger E_{r, s} = (1 - \kappa) I + \frac{\kappa }{N^2 - 1} (N^2 - 1)I = I,
\end{equation}
justifying that $E_{r,s}$ are Kraus operators.
Now we apply these Kraus operators on the hypergraph state $\rho = \ket{G}\bra{G}$. The new state is
\begin{equation}
\begin{split}
\rho(\kappa) & = \sum_{r = 0}^{N - 1} \sum_{s = 0}^{N - 1} E_{r, s} \rho E_{r, s}^\dagger = (1 - \kappa) \rho + \frac{\kappa }{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} U_{r, s} \rho U_{r, s}^\dagger\\
& = \frac{1}{N} \sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \left[ (-1)^{g(i) + g(j)} (1 - \kappa) + \frac{\kappa }{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j)r} \right] \ket{i}\bra{j},
\end{split}
\end{equation}
obtained by an application of equations (\ref{qudit_density_in_full_form}) and (\ref{Weyl_on_G_density}).
Next we study the coherence of the state $\rho(\kappa)$. The $(i,j)$-th term of the $\rho(\kappa)$ is
\begin{equation}
\rho_{i,j} = \frac{1}{N} \left[ (-1)^{g(i) + g(j)} (1 - \kappa) + \frac{\kappa }{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j)r} \right].
\end{equation}
The absolute value $|\rho_{i,j}|$ is bounded by
\begin{equation}
|\rho_{i,j}| \leq \frac{(1 - \kappa)}{N} + \frac{\kappa }{N(N^2 - 1)} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} 1 = \frac{(1 - \kappa)}{N} + \frac{\kappa }{N(N^2 - 1)} (N^2 - 1) = \frac{1}{N}.
\end{equation}
Therefore, the $l_1$ measure of coherence is bounded by
\begin{equation}
C_{l_1}(\rho(\kappa)) = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i,j}| \leq \frac{(N^2 - N)}{N} = N - 1,
\end{equation}
which is the coherence of the hypergraph state $\rho$. Thus, coherence decreases under the non-Markovian dephasing operation.
Fidelity between the state $\rho(\kappa)$ and $\rho$ is represented by
\begin{equation}
\begin{split}
F(\rho(\kappa), \rho) & = \braket{G | \rho(\kappa) | G} = (1 - \kappa) \braket{G | \rho | G} + \frac{\kappa }{N^2 - 1} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} \braket{G | U_{r, s} \rho U_{r, s}^\dagger | G}\\
& = (1 - \kappa) + \frac{\kappa }{N^2(N^2 - 1)} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r, s) \neq (0, 0)}}^{N - 1} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \omega_{N}^{ir} \right|^2,
\end{split}
\end{equation}
applying equation (\ref{Fidelity_half}). Clearly, $F(\rho(\kappa), \rho)$ depends on the structure of $G$ and the number of vertices in it. Also, it depends on the channel parameter $\kappa$. The fidelity for different hypergraphs in the Markovian and non-Markovian domains is depicted in figure \ref{fidelity_non_markovian_dephasing}.
\begin{figure}
\caption{(Color online) Plot of $F(\rho(\kappa), \rho)$ as a function of $p$ for the hypergraph depicted in figure \ref{example_hypergraph}.}
\label{fidelity_non_markovian_dephasing}
\end{figure}
\subsection{Non-Markovian depolarization}
We define the non-Markovian depolarization noise with the Kraus operators \cite{shrikant2018non}:
\begin{equation}
E_{r, s} = \begin{cases} \sqrt{1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1} I_N & ~\text{when}~ r = 0, s = 0; \\ \frac{\sqrt{p \Lambda_2}}{N} U_{r, s} & ~\text{for}~ 0 \leq r, s \leq (N - 1), ~\text{and}~ (r, s) \neq (0, 0),\end{cases}
\end{equation}
where $\Lambda_1 = -\alpha p$ and $\Lambda_2 = \alpha (1 - p)$. Note that $(1 - p)\Lambda_1 + p \Lambda_2 = 0$. Moreover,
\begin{equation}
\begin{split}
\sum_{r = 0}^{N - 1} \sum_{s = 0}^{N - 1} E_{r, s}^\dagger E_{r, s} & = E_{0, 0}^\dagger E_{0, 0} + \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} E_{r, s}^\dagger E_{r, s} = \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) I_N + \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} \frac{p \Lambda_2}{N^2} U_{r, s}^\dagger U_{r, s} \\
& = \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) I_N + \frac{(N^2 - 1)p \Lambda_2}{N^2} I_N = I_N + \frac{N^2 - 1}{N^2} [(1 - p)\Lambda_1 + p \Lambda_2] = I_N.
\end{split}
\end{equation}
This justifies the $E_{r,s}$ as bona fide Kraus operators.
Applying these Kraus operators on $\ket{G}$ along with equations (\ref{qudit_density_in_full_form}) and (\ref{Weyl_on_G_density}), we have
\begin{equation}
\begin{split}
\rho(\alpha) & = \sum_{r = 0}^{N - 1} \sum_{s = 0}^{N - 1} E_{r, s} \rho E_{r, s}^\dagger = \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) \rho + \frac{p \Lambda_2}{N^2} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} U_{r, s} \rho U_{r, s}^\dagger\\
& = \frac{1}{N} \sum_{i = 0}^{N -1} \sum_{j = 0}^{N - 1} \left[ (-1)^{g(i) + g(j)} \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) + \frac{p \Lambda_2}{N^2} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j)r} \right] \ket{i}\bra{j}.
\end{split}
\end{equation}
Now we calculate the coherence of the state $\rho(\alpha)$. The $(i,j)$-th entry of $\rho(\alpha)$ is
\begin{equation}
\rho_{i,j} = \frac{1}{N} \left[ (-1)^{g(i) + g(j)} \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) + \frac{p \Lambda_2}{N^2} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} (-1)^{g(i \oplus s) + g(j \oplus s)} \omega_{N}^{(i - j)r} \right].
\end{equation}
The absolute value of $\rho_{i,j}$ is bounded by
\begin{equation}
\begin{split}
|\rho_{i,j}| & \leq \frac{1}{N} \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) + \frac{p \Lambda_2}{N^3} (N^2 - 1) = \frac{1}{N} + \frac{(N^2 - 1)}{N^3}\left[ (1 - p)\Lambda_1 + p \Lambda_2 \right] = \frac{1}{N}.
\end{split}
\end{equation}
Therefore, the $l_1$ norm of coherence is bounded by
\begin{equation}
C_{l_1}(\rho(\alpha)) = \sum_{i = 0}^{N - 1} \sum_{\substack{j = 0 \\ j \neq i}}^{N - 1} |\rho_{i, j}| \leq \frac{N^2 - N}{N} = (N - 1),
\end{equation}
which is the coherence of the hypergraph state $\rho$. Therefore, the coherence does not increase under the non-Markovian depolarization noise.
The fidelity between the states $\rho$ and $\rho(\alpha)$ is seen to be
\begin{equation}
\begin{split}
F(\rho, \rho(\alpha)) & = \braket{G | \rho(\alpha) |G } = \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) \braket{G| \rho| G} + \frac{p \Lambda_2}{N^2} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} \braket{G| U_{r, s} \rho U_{r, s}^\dagger | G}\\
& = \left(1 + \frac{(N^2 - 1)(1 - p)}{N^2}\Lambda_1 \right) + \frac{p \Lambda_2}{N^4} \sum_{r = 0}^{N - 1} \sum_{\substack{s = 0 \\ (r,s) \neq (0, 0)}}^{N - 1} \left| \sum_{i = 0}^{N - 1} (-1)^{g(i \oplus s) + g(i)} \omega_{N}^{ir} \right|^2,
\end{split}
\end{equation}
applying equation (\ref{Fidelity_half}). Clearly, $F(\rho, \rho(\alpha))$ depends on the number of vertices and the structure of the hypergraph $G$, as well as on the channel parameters $\Lambda_1$ and $\Lambda_2$.
\section{Conclusion}
Graph theory plays a crucial role in the representation of the combinatorial structures of quantum states \cite{lockhart2021combinatorial, simmons2017symmetric, dutta2016bipartite, dutta2019condition, wang2020note}. One fundamental motivation behind the investigations at the interface of quantum theory and graph theory is to understand how quantum mechanical properties depend on the structure of a graph. Graph states have important applications in quantum computation, and hypergraph states are a generalization of graph states. Due to their important applications in quantum computation, their intrinsic physical characteristics are worth investigating. In this work, we study the noise properties of hypergraph states. We apply a number of noisy quantum channels on hypergraph states, namely the dit-flip noise, phase flip noise, dit-phase flip noise, depolarizing noise, the non-Markovian amplitude damping channel (ADC), non-Markovian dephasing noise, and non-Markovian depolarization noise. We work out the analytic expression of the final state after applying each noisy channel. In addition, we study the change in coherence, as well as the fidelity between the initial and final states. The coherence does not increase under any of these channels, except for the non-Markovian ADC, where the recurrence phenomena of the non-Markovian regime are responsible for the exceptional behavior. In the case of the non-Markovian ADC, the coherence decreases if the channel parameter $\lambda$ and the number of vertices $n$ satisfy the following inequality
\begin{equation}
\sqrt{1 - \lambda(t)} < \frac{-1 + \sqrt{N - 1}}{N - 2},
\end{equation}
where $N = 2^n$. The fidelity between the initial and final states depends on the number of vertices as well as on the structure of the hypergraph $G$ for the dit-flip noise, dit-phase-flip noise, non-Markovian dephasing noise, and non-Markovian depolarizing noise. However, for the phase flip noise and the non-Markovian ADC, the fidelity does not depend on the combinatorial structure of the hypergraph. Therefore, for a fixed value of the channel parameter, every hypergraph state with the same number of vertices has the same fidelity. These results will hopefully enhance the impact of quantum hypergraph states in quantum technologies.
\section*{Appendix: Noise properties of multi-qubit hypergraph states}
In the literature, hypergraph states are mostly discussed as multi-qubit states. In the multi-qubit setup we can apply the noise to a single qubit at a time, keeping all other qubits unchanged. Here we illustrate this with a particular hypergraph state and a noisy channel.
We consider the dit-flip noise. For a single qubit we have $N = 2$, and equation (\ref{Weyl_operator_in_general}) indicates
\begin{equation}
\hat{U}_{0, s} = \sum_{i = 0}^{1} \ket{i}\bra{i \oplus s} ~\text{for}~ 0 \leq s \leq 1.
\end{equation}
In particular
\begin{equation}
U_{0, 0} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, ~\text{and}~ U_{0, 1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.
\end{equation}
Clearly, $U_{0, 1}\ket{0} = \ket{1}$ and $U_{0, 1}\ket{1} = \ket{0}$. Let us apply the dit-flip noise to the second qubit of the hypergraph state mentioned in equation (\ref{example_multiqubit_hyperrgaph_state}). The Kraus operators are given by
\begin{equation}
E_{0, s} = \begin{cases} \sqrt{1 - p} I \otimes I \otimes I \otimes I & ~\text{when}~ s = 0, \\ \sqrt{p} I \otimes U_{0, 1} \otimes I \otimes I & ~\text{when}~ s = 1. \end{cases}
\end{equation}
After applying the Kraus operators, the new quantum state is
\begin{equation}
\rho = E_{0, 0} \ket{G} \bra{G} E_{0, 0}^\dagger + E_{0, 1} \ket{G} \bra{G} E_{0, 1}^\dagger = (1 - p) \ket{G} \bra{G} + p\, (I \otimes U_{0, 1} \otimes I \otimes I) \ket{G} \bra{G} (I \otimes U_{0, 1}^\dagger \otimes I \otimes I),
\end{equation}
where
\begin{equation}
\begin{split}
I \otimes U_{0, 1} \otimes I \otimes I \ket{G} & = \frac{1}{4} I \otimes U_{0, 1} \otimes I \otimes I [\ket{0000} + \ket{0001} + \ket{0010} + \ket{0011} + \ket{0100} + \ket{0101} - \ket{0110} \\
& + \ket{0111} + \ket{1000} - \ket{1001} + \ket{1010} + \ket{1011} + \ket{1100} - \ket{1101} - \ket{1110} + \ket{1111}]\\
& = \frac{1}{4} [\ket{0100} + \ket{0101} + \ket{0110} + \ket{0111} + \ket{0000} + \ket{0001} - \ket{0010} \\
& + \ket{0011} + \ket{1100} - \ket{1101} + \ket{1110} + \ket{1111} + \ket{1000} - \ket{1001} - \ket{1010} + \ket{1011}] \\
& = \frac{1}{4} [\ket{0000} + \ket{0001} - \ket{0010} + \ket{0011} + \ket{0100} + \ket{0101} + \ket{0110} \\
& + \ket{0111} + \ket{1000} - \ket{1001} - \ket{1010} + \ket{1011} + \ket{1100} - \ket{1101} + \ket{1110} + \ket{1111}].
\end{split}
\end{equation}
Instead of the second qubit we may choose any other qubit. As $\ket{G}$ is a four-qubit state, we obtain four different states depending on which qubit the noise acts on.
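The computation above can be reproduced numerically; the following sketch (an illustration, with the amplitudes of $\ket{G}$ copied from the display above) applies $I \otimes U_{0,1} \otimes I \otimes I$ to the four-qubit state and prints the resulting sign pattern.
\begin{verbatim}
import numpy as np

# amplitudes of |G> in the basis |b1 b2 b3 b4>, read off from the text
signs = [+1, +1, +1, +1, +1, +1, -1, +1,
         +1, -1, +1, +1, +1, -1, -1, +1]
G = np.array(signs, dtype=float) / 4.0

X = np.array([[0.0, 1.0], [1.0, 0.0]])         # U_{0,1} for a qubit
I2 = np.eye(2)
X2 = np.kron(I2, np.kron(X, np.kron(I2, I2)))  # I x U_{0,1} x I x I

flipped = X2 @ G
for idx, amp in enumerate(flipped):
    print(format(idx, "04b"), int(np.sign(amp)))  # matches the last line above
\end{verbatim}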
\section*{Acknowledgment}
SB acknowledges support from Interdisciplinary Cyber Physical Systems (ICPS) programme of the Department of Science and Technology (DST), India through Grant No.: DST/ICPS/QuEST/Theme-1/2019/6. SB also acknowledges support from Interdisciplinary Research Platform - Quantum Information and Computation (IDRP-QIC) at IIT Jodhpur.
\end{document}
|
\begin{document}
\author{Gerrit Herrmann}
\address{Fakult\"at f\"ur Mathematik \\
Universit\"at Regensburg\\
Germany\\
\newline\url{www.gerrit-herrmann.de} }
\email{[email protected]}
\title{The $L^2$-Alexander torsion for Seifert fiber spaces}
\begin{abstract}
We calculate the $L^2$-Alexander torsion for Seifert fiber spaces and graph manifolds in terms of the Thurston norm.
\end{abstract}
\maketitle
\section{Introduction and main result}
We say two functions $f,g\colon \R_{>0}\rightarrow \R_{\geq 0}$ are equivalent and write $f\doteq g$, if there exists an $r\in \R$ such that $f(t)=t^{r} g(t)$ for all $t\in\R_{>0}$.
In \cite{DFL14a} Dubois, Friedl and L\"uck associated to an admissible triple $(M,\phi,\gamma)$ an equivalence class of a function $\tau^{(2)} (M,\phi,\gamma)\colon \R_{> 0} \rightarrow \R_{\geq 0}$ called the $L^2$-Alexander torsion. The triple consists of a manifold $M$, a cohomology class $\phi \in H^1(M;\R)$ and a group homomorphism $\gamma\colon \pi_1(M)\rightarrow G$ such that $\phi$ factors through $\gamma$.
The $L^2$-Alexander torsion is a generalization of the $L^2$-Alexander polynomial introduced by Li-Zhang \cite{LZ06a}, \cite{LZ06b}, \cite{LZ08} and has been studied recently by many authors: Dubois-Wegner \cite{DW10}, \cite{DW13}, Ben Aribi \cite{BA13}, \cite{BA16},
Dubois-Friedl-L\"uck \cite{DFL14a}, \cite{DFL14b}, \cite{DFL15}, Friedl-L\"uck \cite{FL15} and Liu \cite{Li15}.
Given a 2-dimensional manifold $S$ with connected components $S_1\cup\ldots\cup S_k$ we define its complexity to be $\chi_-(S):=\sum_{i=1}^{k}\max \left\{-\chi(S_i),0\right\} $. Let $M$ be a 3-manifold (throughout this paper every 3-manifold is understood to be orientable, compact, and to have empty or toroidal boundary). Thurston has shown in \cite{Th86} that the function
\begin{align*}
x_M\colon H^1(M;\Z)&\longrightarrow \Z\\
\phi&\longmapsto \min\left\{\chi_-(S)\ \left|\ \begin{array}{ll}
[S] \text{ is Poincar\'{e} dual to $\phi$} \\
\text{ and properly embedded}
\end{array}\right.\right\}
\end{align*}
extends to a semi-norm on $H^1(M;\R)$.
In this paper we will show that, for a Seifert fiber space $M$, the function $\tau^{(2)} (M,\phi,\gamma)$ is completely determined by $x_M(\phi)$. To be more precise:
\begin{theorem}[Main Theorem]\label{maintheroem}
Let $M$ be a Seifert fiber space with $M\neq S^1\times S^2,S^1\times D^2$ and $(M,\phi,\gamma)$ an admissible triple, such that the image of a regular fiber under $\gamma$ is an element with infinite order.
Then the $L^2$-Alexander torsion is given by
\[ \tau^{(2)}(M,\phi,\gamma) \doteq \max \left\{1, t^{x_M(\phi)}\right\}.\]
\end{theorem}
Theorem \ref{maintheroem} was already used in \cite{DFL14a} to show the following two corollaries.
\begin{corollary}\label{cor:graph}
Let $(M,\phi,\gamma)$ be an admissible triple with $M\neq S^1\times S^2,S^1\times D^2$. Suppose that $M$ is a graph manifold and that given any $JSJ$-component of $M$ the image of a regular fiber under $\gamma$ is an element of infinite order, then
\[\tau^{(2)} ( M,\phi,\gamma)=\max\left\{1,t^{x_M(\phi)}\right\}. \]
\end{corollary}
Note that the next corollary was first proved by Ben Aribi \cite{BA13} using a somewhat different approach.
\begin{corollary}
Let $K\subset S^3$ be an oriented knot. Denote by $\nu K$ a tubular neighborhood of $K$ and by $\phi_K\in H^1(S^3\setminus\nu K)$ the cohomology class which sends the oriented meridian to $1$. Then $K$ is trivial if and only if $\tau^{(2)}(S^3\setminus\nu K,\phi_K, \operatorname{id} )(t)\doteq \max\left\{1,t\right\}^{-1}$.
\end{corollary}
\begin{proof}
If the JSJ-decomposition of $S^3\setminus \nu K$ contains a hyperbolic piece, then by the work of L\"uck and Schick \cite{LS99} we have $\tau^{(2)}(S^3\setminus \nu K, \phi_K,\operatorname{id})(1)\neq 1$. If there is no hyperbolic piece, then $S^3\setminus \nu K$ is a graph manifold. Now the statement follows from Corollary \ref{cor:graph} and the well-known formula $x_{S^3\setminus \nu K}(\phi_K)=2\cdot \operatorname{genus} (K) -1$, if $\operatorname{genus}(K)>0$.
\end{proof}
The proof of the main theorem is obtained from two lemmas. The first lemma characterizes the Thurston norm of a Seifert fiber space $M$ by the combinatorial invariant $\chi^{S^1}_{\text{orb}}(M)$ (see Definition \ref{def:orbchar}).
\begin{alphalemma}
Let $M\neq S^1\times S^2,\ S^1\times D^2$ be a Seifert fibered space with infinite fundamental group. Then for any $\phi\in H^1(M;\R)$, we have
\begin{align*}
x_M(\phi) = |\chi^{S^1}_{\operatorname{orb}}(M)\cdot k_\phi |,
\end{align*}
where $k_\phi:= \phi ([F])$ and $F$ is a regular fiber.
\end{alphalemma}
The second lemma calculates the function $\tau^{(2)}(X,\phi,\gamma)$ for spaces with a certain $S^1$-action.
\begin{alphalemma}\label{l2torofs1cwc}
Let X be a connected $S^1$-CW-complex of finite type and $\phi\in H^1(X;\R)$.
Suppose that for one and hence all $x\in X$ the map $ev_x\colon S^1\rightarrow X$ defined by $z\mapsto z\cdot x$ induces an injective map $\gamma\circ ev_x\colon \pi_1(S^1,1)\rightarrow G$. The composite
\[\begin{tikzcd}
\pi_1(S^1,1)\arrow{r}{ev_x}& \pi_1(X,x)\arrow{r}{\phi}&\R
\end{tikzcd}\]
is given by multiplication with a real number. Let $k_\phi$ denote this number.
Define the $S^1$-orbifold Euler characteristic of X by
\[\chi_{\operatorname{orb}}^{S^1} (X) = \sum_{n\geq 0} (-1)^n \cdot \sum_{i\in J_n}\frac{1}{|H_i|}\]
where $J_n$ denotes the set of open $n$-cells and for $i\in J_n$ the set $H_i$ is the isotropy group of the corresponding cell. Then
\[\tau^{(2)}(X,\phi,\gamma) \doteq \max \left\{1, t^{k_\phi}\right\}^{-\chi_{\operatorname{orb}}^{S^1} (X)}.\]
\end{alphalemma}
The paper is organized as follows. In Section \ref{thurstonnormforseifertfiberspaces} we recall the definition of Seifert fiber spaces and prove Lemma \ref{thurstonnormforallSFS}. Afterwards we give in Section \ref{sec:basicLAT} the definition of the $L^2$-Alexander torsion and some basic properties. In the last part of the paper we prove Lemma \ref{l2torofs1cwc}. Note that not all Seifert fiber spaces admit an $S^1$-action, so we prove the main result for the remaining cases at the end of the paper.
\section{Thurston norm for Seifert fiber spaces}\label{thurstonnormforseifertfiberspaces}
\subsection{Preliminaries about Seifert fiber spaces}
We quickly recall the definition and basic facts about Seifert fiber spaces. Most results are taken from the survey article \cite{Sc83}.
\begin{definition}
A \emph{Seifert fibered space} is a 3-manifold $M$ together with a decomposition of $M$ into disjoint simple closed curves (called Seifert fibers) such that each Seifert fiber has a tubular neighborhood that forms a standard fibered torus. The standard fibered torus corresponding to a pair of coprime integers $(a;b)$ with $a > 0$ is the
surface bundle of the automorphism of a disk given by rotation by an angle of $2\pi b/a$,
equipped with the natural fibering by circles.
We call $a$ the index of a Seifert fiber. A fiber is \emph{regular} if the index is one and \emph{exceptional} otherwise.\end{definition}
\begin{remark}
The number of exceptional fibers of a Seifert fiber space $M$ is finite.
\end{remark}
\begin{definition}\label{def:orbchar}
Let $M$ be a Seifert fibered space and $F_1,\ldots,F_n$ the exceptional fibers with corresponding indices $a_1,\ldots,a_n$. We define
\[ \chi^{S^1}_{\operatorname{orb}}(M):=\chi(M/S^1)- \sum_{i=1}^{n} \Big(1-\frac{1}{a_i}\Big),\]
where $M/S^1$ is obtained from $M$ by identifying all points in the same Seifert fiber.
\end{definition}
\begin{remark}
In Scott's survey article this is the Euler characteristic of the base orbifold $M/ S^1$.
\end{remark}
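As a quick illustration (a standard example, not taken from the text), a Seifert fibration $M$ over $S^2$ with three exceptional fibers of indices $2$, $3$ and $7$ has
\[ \chi^{S^1}_{\operatorname{orb}}(M)=\chi(S^2)-\Big(1-\tfrac{1}{2}\Big)-\Big(1-\tfrac{1}{3}\Big)-\Big(1-\tfrac{1}{7}\Big)=2-\tfrac{1}{2}-\tfrac{2}{3}-\tfrac{6}{7}=-\tfrac{1}{42}. \]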
Let $p\colon\widehat{M}\rightarrow M$ be a finite cover. If $M$ is Seifert fibered then $p$ induces a Seifert fiber structure on $\widehat{M}$ such that $p$ is a fiber preserving map. Therefore we get an induced branched cover $p\colon\widehat{M}/S^1\rightarrow M/S^1$. Denote by $l$ the degree of the branched cover. Then standard arguments show
\begin{align*}
l\cdot \chi^{S^1}_{\operatorname{orb}}(M)= \chi^{S^1}_{\operatorname{orb}}(\widehat{M}).
\end{align*}
For more detail we refer to \cite[Section 2 and 3]{Sc83}.
\subsection{Proof of Lemma \ref{thurstonnormforallSFS}}
Having all the notions at hand, we can prove:
\setcounter{lem}{0}
\begin{alphalemma}\label{thurstonnormforallSFS}
Let $M\neq S^1\times S^2,\ S^1\times D^2$ be a Seifert fibered space with infinite fundamental group. Then for any $\phi\in H^1(M;\R)$, we have
\begin{align*}
x_M(\phi) = |\chi^{S^1}_{\operatorname{orb}}(M)\cdot k_\phi |,
\end{align*}
where $k_\phi:= \phi ([F])$ and $F$ is a regular fiber.
\end{alphalemma}
The proof will consist of the following steps. First we reduce the problem from Seifert fiber spaces to principal $S^1$-bundles. Then we look at the two cases of a trivial and non trivial principal $S^1$-bundle separately.
\begin{claim}
It is sufficient to prove Lemma \ref{thurstonnormforallSFS} only for principal $S^1$-bundles.
\end{claim}
\begin{proof}
As shown in \cite[Section 3.2(C.10)]{AFW15} a Seifert fiber space is finitely covered by a principal $S^1$-bundle. Let $p\colon\widehat{M}\rightarrow M$ be such a finite cover. We can pull back the Seifert fiber structure from $M$ to $\widehat{M}$. This structure and the structure of the $S^1$-bundle on $\widehat{M}$ coincide, because the Seifert fiber structure of an aspherical $S^1$-bundle is unique by the argument of \cite[Theorem 3.8]{Sc83}. Let $m$ denote the degree with which regular fibers of $\widehat{M}$ cover regular fibers of $M$. Then we have $m\cdot k_\phi=k_{p^{\ast}\phi}$. Denote by $l$ the degree of the induced branched cover $p\colon\widehat{M}/S^1\rightarrow M/S^1$. Then $d:=l\cdot m$ is the degree of the cover $p\colon\widehat{M}\rightarrow M$. From the discussion above we deduce \[\chi^{S^1}_{\operatorname{orb}}(M)\cdot k_{\phi} = \frac{\chi^{S^1}_{\operatorname{orb}}(\widehat{M}) \cdot k_{p^{\ast}\phi}}{d}.\]
Furthermore Gabai showed in \cite[Corollary 6.13]{Ga83} for a finite cover $p\colon\widehat{M}\rightarrow M$ of 3-manifolds of degree $d$:
\[x_M(\phi)=\frac{x_{\widehat{M}}(p^{\ast}\phi)}{d}.\]
Putting together both equations, we see that it is enough to prove Lemma \ref{thurstonnormforallSFS} for $S^1$-bundles.
\end{proof}
For the next proof we need the following well-known fact about fiber bundles and the Thurston norm. If $\Sigma_g\rightarrow M \xrightarrow{p} S^1$ is a fiber bundle and $\phi$ is given by $p_\ast\colon H_1(M;\R)\rightarrow H_1(S^1;\R)$, then
\begin{align}\label{thurstonnormfiberbundles}
x_M(\phi) = \begin{cases}
-\chi(\Sigma_g)&\text{if } \chi(\Sigma_g)<0, \\
0&\text{else}.
\end{cases}
\end{align}
\begin{lemma}\label{lem:trivialbundle}
Let $\Sigma_g$ be a surface with $\chi(\Sigma_g)<0$. Consider $M=S^1\times \Sigma_g$ and $\phi\in H^1(M;\R)$. Then we have:
\[x_{M}(\phi)=|k_\phi\cdot \chi(\Sigma_g) |,\]
where $k_\phi=\phi([F])$ and $F$ is a regular fiber.
\end{lemma}
\begin{proof}In the following, homology and cohomology are understood with real coefficients.
By the K\"unneth theorem $H^1(M)\cong H_2(M,\partial M)$ decomposes into:
\[ H_2(M, \partial M) \cong H_2(\Sigma_g,\partial \Sigma_g)\ \oplus\ H_1(\Sigma_g,\partial \Sigma_g)\otimes H_1(S^1).\]
Every generator of $H_1(\Sigma_g,\partial \Sigma_g)\otimes H_1(S^1)$ can be represented by a torus and therefore $H_1(\Sigma_g,\partial \Sigma_g)\otimes H_1(S^1)$ is a subspace with vanishing Thurston norm. The number $k_\phi$ can be interpreted as the intersection number of a regular fiber $F$ with the surface representing the cohomology class $\phi$. A surface representing $\phi\in H_1(\Sigma_g,\partial \Sigma_g)\otimes H_1(S^1)$ is parallel to a regular fiber and therefore $k_\phi=0$.
Since $H_1(\Sigma_g,\partial \Sigma_g)\otimes H_1(S^1)$ is a subspace of vanishing Thurston norm, the Thurston norm of a class does not change, if we add elements of $H_1(\Sigma_g,\partial \Sigma_g)\otimes H_1(S^1)$.
Moreover $k_\phi$ is linear in $\phi$ and
therefore the last open case is $PD(\phi)=[\Sigma_g]\in H_2(M,\partial M)$. This is equivalent to $\phi$ being the fiber class of the fibration $\Sigma_g \rightarrow S^1\times \Sigma_g \rightarrow S^1$, and there the formula holds by Equation (\ref{thurstonnormfiberbundles}) and the fact that in this case $k_\phi=1$.
\end{proof}
\begin{lemma}
Let $M$ be a non trivial $S^1$-bundle over a surface $\Sigma_g$. Then the following equation
\[x_{M}(\phi)=| k_\phi\cdot \chi(\Sigma_g) |\]
holds for all $\phi\in H^1(M;\R)$.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{lem:trivialbundle}, homology and cohomology are understood with real coefficients.
We will in fact prove that both sides are equal to zero. To calculate $H^1(M)$ we look at a part of the Gysin exact sequence \cite[Theorem 13.2]{Br93}:
\[
\begin{tikzcd}
0 \arrow{r}& H^1(\Sigma_g)\arrow{r}{p^\ast}& H^1(M) \arrow{r}& H^0(\Sigma_g)\arrow{r}{\cup\, e}& H^2(\Sigma_g)\arrow{r}&\ldots
\end{tikzcd}
\]
Here $\cup\, e$ means the cup product with the Euler class $e$ associated to the $S^1$-bundle $M$. This is an isomorphism because the bundle is not trivial. Therefore $p^{\ast}$ is an isomorphism too. We apply Poincar\'e duality and get an isomorphism $p_{!}:H_1(\Sigma_g)\rightarrow H_2(M)$. By the argument of \cite[Section 4]{GH14} an element $c\in H_1(\Sigma_g;\Z)$ will be sent by $p_{!}$ to a class in $H_2(M)$ representable by the product of a regular fiber with curves in $\Sigma_g$ representing $c$. This is a collection of tori and annuli. We conclude that all elements in $H_2(M)$ have trivial Thurston norm. We take a second look at the Gysin sequence to show that $0=[F]\in H_1(M)$ for a regular fiber $F$.
\[
\begin{tikzcd}
\ldots\arrow{r}& H^0(\Sigma_g)\arrow{r} {\cup\, e}& H^2(\Sigma_g)\arrow{r}{p^{\ast}}& H^2(M) \arrow{r} &\ldots
\end{tikzcd}
\]
We change again via Poincar\'e duality to homology and obtain the map $p_{!}\colon H_0(\Sigma_g)\rightarrow H_1(M)$, $[x_0]\mapsto [F]$ which sends a point to a regular fiber $F$. This map is trivial, because $\cup\, e$ is surjective. Hence $[F]=0$ and we conclude $k_\phi=\phi([F])=0$.
\end{proof}
\section{Definition and basic properties of the $L^2$-Alexander torsion}\label{sec:basicLAT}
In the monograph \cite{Lu02} L\"uck defines the $L^2$-torsion $\rho^{(2)}(C_\ast)\!\in\!\R$ of a finite Hilbert-$\mathcal{N}(G)$ chain complex of determinant class $C_\ast$. We refer to \cite[Chapter 3]{Lu02} for the precise definition and basic properties.
\begin{definition}
Let $X$ be a connected finite CW-complex, $\phi\in H^{1}(X,\R)=\operatorname{Hom}(\pi_1(X),\R)$, and $\gamma\colon \pi_1(X)\rightarrow G$ a group homomorphism. We call $(X,\phi,\gamma)$ an \emph{admissible triple} if $\phi \colon \pi_1(X)\rightarrow\R$ factors through $\gamma$, i.e.\ there exists $\phi'\colon G\rightarrow\R$ such that $\phi=\phi'\gamma$.
\end{definition}
An admissible triple $(X,\phi,\gamma)$ and a positive number $t\in \R_{>0}$ give rise to a ring homomorphism:
\begin{align*}
\kappa(\phi,\gamma,t)\colon\R[\pi_1(X)] &\longrightarrow \R[G] \\
\sum_{i=1}^{n}a_i g_i&\longmapsto \sum_{i=1}^{n}a_it^{\phi(g_i)} \gamma (g_i).
\end{align*}
\begin{definition}[$L^2$-Alexander torsion]
Let $(X,\phi,\gamma)$ be an admissible triple. We write $C_\ast^{\phi,\gamma,t}:= l^2(G)\otimes_{\kappa(\phi,\gamma,t)} C_\ast (\widetilde{X})$, where $\widetilde{X}$ is the universal cover, and consider the function
\[\tau^{(2)}(X,\phi,\gamma)(t):= \begin{cases} \exp\left(-\rho^{(2)} \big(C^{\phi,\gamma,t}_\ast \big)\right) & C^{\phi,\gamma,t}_\ast \text{ {\footnotesize is of determinant class and weakly acyclic,}} \\
0 &\text{else.}
\end{cases}\]
This function may not be continuous in general, but Liu showed that in the case of a 3-manifold and $\gamma=\operatorname{id}$ this function is always greater than zero and continuous \cite[Theorem 1.2]{Li15}.
\end{definition}
We can define the $L^2$-Alexander torsion for a pair of spaces $(X,Y)$ in the following way. Let $(X,\phi,\gamma)$ be an admissible triple and $Y\subset X$ a subcomplex. We denote by $p:\widetilde{X}\rightarrow X$ the universal cover of $X$. We write $\widetilde{Y}=p^{-1}(Y)$. Then $C_\ast(\widetilde{X},\widetilde{Y})$ is a free left $\Z[\pi_1(X)]$-chain complex. We write $C_\ast^{\phi,\gamma,t}:= l^2(G)\otimes_{\kappa(\phi,\gamma,t)} C_\ast (\widetilde{X},\widetilde{Y})$ and define the $L^2$-Alexander torsion $\tau^{(2)}(X,Y,\phi,\gamma)(t)$ as before.
We will make use of the following two propositions. The proofs are straightforward. One applies \cite[Theorem 3.35(1)]{Lu02} to the short exact sequences of the chain complexes in question.
\begin{proposition}[Product formula]\label{torsionofpair}
Let $(X,\phi,\gamma)$ be an admissible triple and $i\colon Y\hookrightarrow X$ a subcomplex. If two out of three of $\tau^{(2)}({X,\phi,\gamma})$, $\tau^{(2)}({Y,\phi i_\ast,\gamma i_\ast})$ and $\tau^{(2)}({X,Y,\phi,\gamma})$ are nonzero, then we have the identity:
\[\tau^{(2)}({Y,\phi i_{\ast},\gamma i_{\ast}})\cdot \tau^{(2)}({X,Y,\phi,\gamma})\doteq \tau^{(2)}({X,\phi,\gamma})
.\]
\end{proposition}
\begin{proposition}[Gluing formula]\label{pro:LATgluing}
Consider a pushout diagram of finite CW-complexes
\[\begin{tikzcd}
X_0 \arrow{r}{i_1} \arrow{d}{i_2}&X_{1}\arrow{d}{j_1} \\
X_2 \arrow{r}{j_2}& X_3
\end{tikzcd}\]
such that every map is cellular and $i_1$ is injective. Let $(X_3,\phi,\gamma)$ be an admissible triple. If three out of four of $\tau^{(2)}(X_0,\phi i_1 j_1, \gamma i_1 j_1),\,\tau^{(2)}(X_1,\phi j_1,\gamma j_1),\,\tau^{(2)}(X_2,\phi j_2,\gamma j_2)$ or $\tau^{(2)}(X_3,\phi,\gamma)$ are nonzero, then we have:
\[\tau^{(2)}(X_3,\phi,\gamma)\cdot \tau^{(2)}(X_0,\phi i_1 j_1, \gamma i_1 j_1)\doteq \tau^{(2)}(X_2,\phi j_2,\gamma j_2)\cdot\tau^{(2)}(X_1,\phi j_1,\gamma j_1). \]
\end{proposition}
\subsection{The $L^2$-Alexander torsion for $S^1$-CW-complexes}\label{l2alextorfors1cwcomplex}
The calculation of the $L^2$-Alexander torsion of a circle can be found in \cite[Lemma 2.8]{DFL14a}. It is the starting point of the proof of Lemma \ref{l2torofs1cwc}.
\begin{lemma}\label{l2torsionofacircle}
Let $(S^1,\phi,\gamma)$ be an admissible triple and $\gamma$ injective, then the $L^2$-Alexander torsion is given by
\[ \tau^{(2)}(S^1,\phi,\gamma)\doteq \max\left\{1,t^{\phi(g)}\right\}^{-1},\]
where $g$ is a generator of $\pi_1(S^1)$.
\end{lemma}
\begin{alphalemma}
Let X be a connected $S^1$-CW-complex of finite type and $\phi\in H^1(X;\R)$.
Suppose that for one and hence all $x\in X$ the map $ev_x\colon S^1\rightarrow X$ defined by $z\mapsto z\cdot x$ induces an injective map $\gamma\circ ev_x\colon \pi_1(S^1,1)\rightarrow G$. The map $\phi\circ ev_x\colon H_1(S^1)\rightarrow \R$
is given by multiplication with a real number which we denote by $k_\phi$. We obtain:
\[\tau^{(2)}(X,\phi,\gamma)\doteq \max \left\{1, t^{k_\phi}\right\}^{- \chi_{\operatorname{orb}}^{S^1} (X)}.\]
\end{alphalemma}
This proof is a variation of the proof of \cite[Theorem 3.105]{Lu02}:
\begin{proof}
We use induction over the dimension of cells. In dimension zero $X$ is a circle. There the statement holds by Lemma \ref{l2torsionofacircle}. Now the induction step from $n-1$ to $n$ is done as follows.
By definition of an $S^1$-CW-complex we can choose an equivariant $S^1$-pushout with $\operatorname{dim} (X_{n})=n$
\[\begin{tikzcd}
\bigsqcup_{i\in J_n} S^1/H_i\times S^{n-1} \arrow{r}{\bigsqcup q_i}\arrow{d}{i}&X_{n-1}\arrow{d}{j} \\
\bigsqcup_{i\in J_n} S^1/H_i\times D^{n} \arrow{r}{\bigsqcup Q_i}& X_n\ .
\end{tikzcd}\]
We obtain from the gluing formula \ref{pro:LATgluing}:
\[\prod_{i\in J_n}\tau^{(2)} ({\displaystyle S^1/H_i\!\times\! D^n,S^1/H_i\!\times\! S^{n-1},\phi Q_i ,\gamma Q_i})\doteq \tau^{(2)}(X_n,X_{n-1},\phi, \gamma).\]
The left hand side can be computed by the suspension isomorphism.
\begin{align*}
\prod_{i\in J_n} \tau^{(2)}({\displaystyle S^1/H_i\!\times\! D^n,S^1/H_i\!\times\! S^{n-1},\phi Q_i ,\gamma Q_i}) &\doteq \prod_{i\in J_n} \tau^{(2)}({\widetilde{S^1},\phi Q_i,\gamma Q_i}) ^{(-1)^{n} } \\ &\doteq \prod_{i\in J_n}\max\left\{1,t^{k_\phi}\right\}^{(-1)^{n+1}\cdot 1/|H_i|} \\
&=\max \left\{1,t^{k_\phi}\right\}^{(-1)^{n+1}\sum_{i\in J_n}1/|H_i|}.
\end{align*}
Here we used the assumption that $\gamma\circ ev_x$ is injective. The torsion for $X_{n-1}$ is defined by induction hypothesis. We can apply the product formula \ref{torsionofpair} and conclude:
\begin{align*}
\tau^{(2)}({X_n,\phi, \gamma})&=\tau^{(2)}({X_n,X_{n-1},\phi, \gamma})\cdot \tau^{(2)} (X_{n-1},\phi i, \gamma i) \\
&\doteq \max \left\{1,t^{k_\phi}\right\}^{(-1)^{n+1}\sum_{i\in J_n}1/|H_i|} \cdot \max \left\{1,t^{k_\phi } \right\}^{-\chi^{S^1}_{\operatorname{orb}}(X_{n-1})} \\
&= \max \left\{1, t^{k_\phi}\right\}^{-\chi_{\operatorname{orb}}^{S^1} (X)}.
\end{align*}
\end{proof}
\begin{corollary}\label{cor:Mainthm}
Let $M$ be an aspherical Seifert fiber space with $M\neq S^1\times D^2$ and $(M,\phi,\gamma)$ an admissible triple, such that the image of a regular fiber under $\gamma$ is an element with infinite order. Assume that $M/S^1$ is orientable, then the $L^2$-Alexander torsion is given by
\[ \tau^{(2)}(M,\phi,\gamma) \doteq \max \left\{1, t^{x_M(\phi)}\right\}.\]
\end{corollary}
\begin{proof}
Note that the standard fibered torus $(a;b)$ admits an effective $S^1$-action such that the fibers and the orbits of the action coincide. To choose such an action for a neighborhood of a Seifert fiber is the same as giving the corresponding point in the base space a local orientation. Hence this action extends to $M$ because $M/S^1$ is orientable. Therefore Lemma \ref{l2torofs1cwc} implies
\[ \tau^{(2)}(M,\phi,\gamma)\doteq \max \left\{1, t^{k_\phi} \right\}^{-\chi_{orb}^{S^1} (M)}=\max \left\{1, t^{-k_\phi \chi_{orb}^{S^1} (M)} \right\} .\]
The last equality holds because $M$ is aspherical and hence $-\chi_{orb}^{S^1} (M)\geq 0$ \cite[Theorem 5.3]{Sc83}.
By the construction of the $S^1$-action we easily see that $k_\phi$ and $\chi_{orb}^{S^1}(M)$ are the same as in Lemma \ref{thurstonnormforallSFS}. Moreover one has the relation $\max\left\{1,t^k\right\}\doteq \max\left\{1,t^{|k|}\right\}$ for any $k\in\R$ because of the equality $\max\left\{1,t^k\right\}=t^k\cdot \max\left\{1,t^{-k}\right\}$.
\end{proof}
\subsection{The $L^2$-Alexander torsion for Seifert fiber spaces without an effective $S^1$-action}
Let $M$ be a Seifert fiber space. An embedded torus in $M$ is called \emph{vertical} if it is a union of regular fibers.
One should observe that cutting $M$ along a vertical torus $T$ is exactly the same as cutting the base space of $M$ along an embedded curve which does not intersect a cone point. This will be the key observation in the next proofs.
As indicated in the following proofs we will cut $M$ into pieces, where we can calculate the $L^2$-Alexander torsion. Therefore we need a gluing formula for the Thurston norm which is due to Eisenbud and Neumann \cite[Proposition 3.5]{EN85}.
\begin{theorem}\label{gluingthurstonnorom}
Let $\mathcal{T}$ be a collection of incompressible disjoint tori embedded in $N$. Denote by $\mathcal{B}$ the collection of components of $N\setminus \mathcal{T}$ and by $i_B\colon B\rightarrow N$ the inclusion of a component $B\in\mathcal{B}$. Then the Thurston norm of each $\phi\in H^1(N;\R)$ satisfies the equality
\[ x_N(\phi)=\sum_{B \in \mathcal{B}} x_B(i_B^{\alpha} \def\g{\gamma} \def\bp{\begin{pmatrix}st}\phi).\]
\end{theorem}
\begin{lemma}\label{lem:LATKlein}
Let $M$ be a Seifert fiber space with base space a non-orientable surface of genus $2$. Moreover let $(M,\phi,\gamma)$ be admissible, such that the image of a regular fiber under $\gamma$ is an element with infinite order. Then
\[ \tau^{(2)}({M,\phi,\gamma})\doteq\max\left\{1, t^{x_M(\phi)}\right\}\]
\end{lemma}
\begin{proof}
We can cut $M$ along two vertical tori, such that we obtain two pieces $M_1, M_2$ both with orientable base space (see Figure \ref{kleinbottle}). Then the restriction of $\gamma$ to the tori has infinite image by hypothesis. We obtain from Proposition \ref{pro:LATgluing}, Corollary \ref{cor:Mainthm}, and Theorem \ref{gluingthurstonnorom}:
\begin{align*}
\tau^{(2)}({M,\phi,\gamma})&\doteq \tau^{(2)}({M_1,\phi i_1,\gamma i_1})\cdot \tau^{(2)}({M_2,\phi i_2,\gamma i_2}) \\
&\doteq\max \left\{1, t^{x_{M_1} (i^{\ast}_1\phi)} \right\} \cdot \max \left\{1, t^{x_{M_2}(i^{\ast}_2\phi)}\right\} \\
&=\max \left\{1, t^{x_{M_1}(i^{\ast}_1\phi)+ x_{M_2}(i^{\ast}_2\phi)}\right\}\\
&=\max \left\{1, t^{x_{M}(\phi) } \right\}.
\end{align*}
Here we used that Lemma \ref{l2torofs1cwc} yields $\tau^{(2)}(T,\phi,\gamma)\doteq 1$ for a torus $T$ and $\gamma$ having infinite image.
\end{proof}
\begin{figure}
\caption{A Klein bottle with two boundary components is cut along two circles to obtain two orientable pieces. The points indicate exceptional fibers, so we do not cut through them.}
\label{kleinbottle}
\end{figure}
\begin{theorem}
Let $M$ be a Seifert fiber space with a non-orientable base space other than $\R P^2$ and $(M,\phi,\gamma)$ admissible, such that the image of a regular fiber under $\gamma$ is an element with infinite order. Then
\[ \tau^{(2)}({M,\phi,\gamma})\doteq \max\left\{1, t^{x_M(\phi)}\right\}.\]
\end{theorem}
\begin{proof}
By the classification of non-orientable surfaces, we can cut the base space along embedded curves such that every piece is a Klein bottle with boundary or a M\"obius strip. This corresponds to cutting $M$ along vertical tori such that every connected component has a M\"obius strip or Klein bottle as base space. As in the proof above we can use the additivity of the Thurston norm and the gluing formula along tori to prove the statement for every piece. The case of the Klein bottle has been dealt with in Lemma \ref{lem:LATKlein}. Since the doubling of a M\"obius strip is the Klein bottle, we can use a standard doubling argument for the Thurston norm and the $L^2$-Alexander torsion to obtain the desired result.
\end{proof}
\end{document}
|
\begin{document}
\title{Singular limit of an Allen-Cahn equation with nonlinear diffusion}
\begin{abstract}
{
We consider an Allen-Cahn equation with nonlinear diffusion, motivated by the study
of the scaling limit of certain interacting particle systems. We investigate
its singular limit and show the generation and propagation of an interface in the limit.
The evolution of this limit interface is governed by mean curvature flow with
a novel, homogenized speed in terms of a surface tension-mobility parameter emerging
from the nonlinearity in our equation.
}
\end{abstract}
\footnote{ \hskip -6.5mm
{
$^+$Aix Marseille University, Toulon University,
Laboratory Centre de Physique Théorique, CNRS, Marseille, France. \\
e-mail: [email protected] \\
$^\alphast$Department of Mathematics,
Waseda University,
3-4-1 Okubo, Shinjuku-ku,
Tokyo 169-8555, Japan. \\
e-mail: [email protected] \\
$^\%$CNRS and Laboratoire de Math\'ematiques, University Paris-Saclay,
Orsay Cedex 91405, France. \\
e-mail: [email protected] \\
$^\dagger$Department of Mathematical Sciences,
Korea Advanced Institute of Science and Technology,
291 Daehak-ro, Yuseong-gu, Daejeon 34141, Korea.
e-mail: [email protected]\\
$^\diamond$Department of Mathematics,
University of Arizona,
621 N.\ Santa Rita Ave.,
Tucson, AZ 85750, USA. \\
e-mail: [email protected]
}
}
\thanks{MSC 2020:
35K57, 35B40.}
\thanks{keywords:
Allen-Cahn equation, Mean curvature flow, Singular limit, Nonlinear diffusion,
Interface, Surface tension}
\section{Introduction}
The Allen-Cahn equation with linear diffusion
\begin{align*}
u_t = \Delta u - \dfrac{1}{\varepsilon^2} F'(u)
\end{align*}
was introduced to understand the phase separation phenomenon which appears in the construction of polycrystalline materials \cite{AC1979}. Here, $u$ stands for the order parameter which describes the state of the material, $F$ is a double-well potential with two distinct local minima $\alpha_\pm$ at two different phases, and the parameter $\varepsilon > 0$ corresponds to the interface width in the phase separation process. When $\varepsilon$ is small, it is expected that $u$ converges to either of the two states $u = \alpha_+$ and $u = \alpha_-$. Thus, the limit $\varepsilon \downarrow 0$ creates a steep interface dividing the two phases; this coincides with the phase separation phenomenon, and the limiting interface is known to evolve according to mean curvature flow; see \cite{AHM2008, Xinfu1990}.
In this paper,
we prove generation and propagation of interface properties for an Allen-Cahn equation with nondegenerate nonlinear diffusion. More precisely, we study the problem
\begin{align*}
(P^\varepsilon)~~
\begin{cases}
u_t = \Delta \varphi(u) + \displaystyle{ \frac{1}{\varepsilon^2}} f(u)
&\mbox{ in } D \times \mathbb{R}^+\\
\displaystyle{ \frac{\partial \varphi(u)}{\partial \nu} } = 0
&\mbox{ in } \partial D \times \mathbb{R}^+\\
u(x,0) = u_0(x)
&\text{ for } x \in D
\end{cases}
\end{align*}
where the unknown function $u$ denotes a phase function, $D$ is a smooth bounded domain in $\mathbb{R}^N, N \geq 2$, $\nu$ is the outward unit normal vector to the boundary $\partial D$ and $\varepsilon > 0$ is a small parameter. The nonlinear functions $\varphi$ and $f$ satisfy the following properties.
We assume that $f$ has exactly three zeros $f(\alpha_-) = f(\alpha_+) = f(\alpha) = 0$ where $\alpha_- < \alpha < \alpha_+$, and
\begin{align}\label{cond_f_bistable}
f \in C^2(\mathbb{R}),~ f'(\alpha_-) < 0,~ f'(\alpha_+) < 0,~ f'(\alpha) > 0
\end{align}
so that
\begin{align}\label{cond_f_tech}
f(s) > 0 ~\text{for}~ s < \alpha_-,~ f(s) < 0 ~\text{for}~ s > \alpha_+.
\end{align}
We suppose that
\begin{align}\label{cond_phi'_bounded}
\varphi \in C^4(\mathbb{R}), ~~ \varphi' \geq C_\varphi
\end{align}
for some positive constant $C_\varphi$. We impose one more relation between $f$ and $\varphi$, namely
\begin{align}\label{cond_fphi_equipotential}
\int_{\alpha_-}^{\alpha_+}
\varphi'(s) f(s) ds
= 0.
\end{align}
\noindent As for the initial condition $u_0(x)$ we assume that $u_0 \in C^2(\overline{D})$. Throughout the paper, we define $C_0$ and $C_1$ as follows:
\begin{align}
C_0
&:= || u_0 || _{C^0 \left( \overline{D} \right)}
+ || \nabla u_0 || _{C^0 \left( \overline{D} \right)}
+ || \Delta u_0 || _{C^0 \left( \overline{D} \right)}\label{cond_C0}
\\
C_1
&:=
\max_{|s - \alpha| \leq I} \varphi(s) +
\max_{|s - \alpha| \leq I} \varphi'(s) +
\max_{|s - \alpha| \leq I} \varphi''(s)
,~~
I = C_0 + \max(\alpha - \alpha_-, \alpha_+ - \alpha).\label{cond_C1}
\end{align}
Furthermore, we define $\Gamma_0$ by
\begin{align*}
\Gamma_0
:=
\{
x \in D: u_0(x) = \alpha
\}.
\end{align*}
In addition, we suppose $\Gamma_0$ is a $C^{4+\nu}$, $0 < \nu < 1$, hypersurface without boundary such that
\begin{align}
\Gamma_0 \Subset D, \quad \nabla u_0(x) \cdot n(x) \neq 0 \text{ if}~ x \in \Gamma_0, \label{cond_gamma0_normal} \\
u_0 > \alpha \text{ in } D_0^+, ~~~~~~ u_0 < \alpha \text{ in } D_0^-, \label{cond_u0_inout}
\end{align}
where $D_0^-$ denotes the region enclosed by $\Gamma_0$, $D_0^+$ is the region enclosed between $\partial D$ and $\Gamma_0$, and $n$ is the outward normal vector to $D_0^-$.
It is standard that the above formulation, referred to as Problem $(P^\varepsilon)$, possesses a unique classical solution $u^\varepsilon$.
The present paper is originally motivated by the study of the scaling limit of a
Glauber+Zero-range particle system. In this microscopic system of interacting random walks, the Zero-range part governs the rates of jumps, while the Glauber part prescribes
creation and annihilation rates of the particles. In a companion paper \cite{EFHPS}, we show that
the system exhibits a phase separation and, under a certain space-time scaling limit, an interface
arises, in the limit macroscopic density field
of particles, evolving in time according to the motion by mean curvature. The system is indeed well
approximated from macroscopic viewpoint by the Allen-Cahn equation with nonlinear diffusion
$(P^\varepsilon)$, or more precisely by its discretized equation. Although, in this paper, we study
$(P^\varepsilon)$ under the Neumann boundary conditions, the formulation under periodic boundary conditions,
used in the particle system setting in \cite{EFHPS}, can be treated similarly; see Remark \ref{rem:1} below.
In some other physical situations, it is expected that the diffusion can depend on the order parameter
as in our case.
In the experimental article \cite{Wagner1952}, Wagner suggested that for metal alloys the diffusion depends on the concentration. In \cite{MD2010, RLA1999}, the authors considered degenerate diffusions such as porous medium diffusions instead of linear diffusions. In \cite{FL1994}, Fife and Lacey generalized the Allen-Cahn equation, which leads them to a parameter dependent diffusion Allen-Cahn equation. Recently, \cite{FHLR2020} considered an Allen-Cahn equation with density dependent diffusion in $1$ space dimension and showed a slow motion property. However, no rigorous proof on the motion of the interface in the nonlinear diffusion context has been given for larger space dimensions $N\geq 2$.
In this context, the purpose of this article is to study the singular limit of $u^\varepsilon$ as
$\varepsilon \downarrow 0$.
We first present a result on the generation of the interface. We use the following notation:
\begin{align}\label{cond_mu_eta0}
\mu = f'(\alpha)
, ~~
t^\varepsilon = \mu^{-1} \varepsilon^2 |\ln \varepsilon|
, ~~
\eta_0 = \min(\alpha - \alpha_-, \alpha_+ - \alpha).
\end{align}
\begin{thm}\label{Thm_Generation}
Let $u^\varepsilon$ be the solution of the problem $(P^\varepsilon)$ and let $\eta$ be an arbitrary constant satisfying $0 < \eta < \eta_0$. Then, there exist positive constants $\varepsilon_0$ and $M_0$ such that, for all $\varepsilon \in (0, \varepsilon_0)$, the following holds:
\begin{enumerate}[label = (\roman*)]
\item for all $x \in D$
\begin{align}\label{Thm_generation_i}
\alpha_- - \eta
\leq
u^\varepsilon(x,t^\varepsilon)
\leq
\alpha_+ + \eta;
\end{align}
\item if $u_0(x) \geq \alpha + M_0 \varepsilon$, then
\begin{align}\label{Thm_generation_ii}
u^\varepsilon(x,t^\varepsilon) \geq \alpha_+ - \eta;
\end{align}
\item if $u_0(x) \leq \alpha - M_0 \varepsilon$, then
\begin{align}\label{Thm_generation_iii}
u^\varepsilon(x,t^\varepsilon) \leq \alpha_- + \eta.
\end{align}
\end{enumerate}
\end{thm}
After the interface has been generated, the diffusion term has the same order as the reaction term. As a result the interface starts to propagate. Later, we will prove that the interface moves according to the following motion equation:
\begin{align}\label{eqn_motioneqn}
(IP)
\begin{cases}
V_n = - (N - 1) \lambda_0 \kappa
&
\text{ on } \Gamma_t
\\
\Gamma_t|_{t = 0} = \Gamma_0,
&
~~
\end{cases}
\end{align}
where $\Gamma_t$ is the interface at time $t > 0$, $V_n$ is the normal velocity on the interface, $\kappa$ denotes its mean curvature, and $\lambda_0$ is a positive constant which will be defined later (see \eqref{eqn_lambda0} and \eqref{second lambda_0}). It is well known that Problem $(IP)$ possesses locally in time a unique smooth solution. Fix $T > 0$ such that the solution of $(IP)$, in \eqref{eqn_motioneqn}, exists in $[0,T]$ and denote the solution by $\Gamma = \cup_{0\leq t < T} (\Gamma_t \times \{t\})$. From Proposition 2.1 of \cite{Xinfu1990} such a $T > 0$ exists, and one can deduce that $\Gamma \in C^{4 + \nu, \frac{4 + \nu}{2}}$ in $[0,T]$, given that $\Gamma_0 \in C^{4 + \nu}$.
The second main theorem states a result on the generation and the propagation of the interface.
\begin{thm}\label{Thm_Propagation}
Under the conditions given in Theorem \ref{Thm_Generation}, for any given $0 < \eta < \eta_0$ there exist $\varepsilon_0 > 0$ and $C > 0$ such that
\begin{align}\label{thm_propagation_1}
u^\varepsilon
\in
\begin{cases}
[\alpha_- - \eta, \alpha_+ + \eta]
&
\text{ for } x \in D
\\
[\alpha_+ - \eta, \alpha_+ + \eta]
&
\text{ if } x \in D^+_t \setminus\mathcal{N}_{C\varepsilon}(\Gamma_t)
\\
[\alpha_- - \eta, \alpha_- + \eta]
&
\text{ if } x \in D^-_t \setminus\mathcal{N}_{C\varepsilon}(\Gamma_t)
\end{cases}
\end{align}
for all $\varepsilon \in (0, \varepsilon_0)$ and $t \in [t^\varepsilon,T]$, where $D_t^-$ denotes the
region enclosed by $\Gamma_t$, $D_t^+$ is that enclosed between $\partial D$ and
$\Gamma_t$, and
$\mathcal{N}_r(\Gamma_t) := \{ x \in D, dist(x, \Gamma_t) < r \}$.
\end{thm}
This theorem implies that, after generation, the interface propagates according to the motion $(IP)$ with a width of order $\mathcal{O}(\varepsilon)$. Note that Theorems \ref{Thm_Generation} and \ref{Thm_Propagation} extend similar results for linear diffusion Allen-Cahn equations due to \cite{AHM2008}.
We now state an approximation result inspired by a similar result proved in \cite{MH2012}.
\begin{thm}\label{thm_asymvali}
\begin{enumerate}
\item[(i)]
Let the assumptions of Theorem \ref{Thm_Propagation} hold and $\rho > 1$. Then, the solution $u^\varepsilon$ of $(P^\varepsilon)$ satisfies
\begin{align}\label{thm_asymvali_i}
\lim_{\varepsilon \rightarrow 0}
\sup_{\rho t^\varepsilon \leq t \leq T,~x \in D}
\left|
u^\varepsilon(x,t)
- U_0
\left(
\dfrac{d^\varepsilon(x,t)}{\varepsilon}
\right)
\right|
= 0,
\end{align}
where $U_0$ is a standing wave solution defined in \eqref{eqn_AsymptExp_U0} and $d^\varepsilon$ denotes the signed distance function associated with $\Gamma_t^\varepsilon := \{ x \in D : u^\varepsilon(x,t) = \alpha \}$, defined as follows:
\begin{align*}
d^\varepsilon(x,t)
=
\begin{cases}
dist(x,\Gamma^\varepsilon_t)
&\text{if}~~
x \in D^{\varepsilon,+}_t
\\
- dist(x,\Gamma^\varepsilon_t)
&\text{if}~~
x \in D^{\varepsilon,-}_t
\end{cases}
\end{align*}
where $D^{\varepsilon,-}_t$ denotes the region enclosed by $\Gamma^\varepsilon_t$ and $D^{\varepsilon,+}_t$ denotes the region enclosed between $\partial D$ and $\Gamma^\varepsilon_t$.
\item[(ii)]
For small enough $\varepsilon > 0$ and for any $t \in [\rho t^\varepsilon, T]$, $\Gamma^\varepsilon_t$ can be expressed as a graph over $\Gamma_t$.
\end{enumerate}
\end{thm}
\begin{rmk} \label{rem:1}
Theorems \ref{Thm_Generation}, \ref{Thm_Propagation} and \ref{thm_asymvali} hold not only for the Neumann boundary condition of Problem $(P^\varepsilon)$ but also for periodic boundary conditions with $D = \mathbb{T}^N$, with similar proofs as given in Sections \ref{section_3}, \ref{section_4} and \ref{section_5}.
\end{rmk}
The paper is organized as follows. In Section \ref{sec:2}, the interface motion $(IP)$ is
formally derived from the problem $(P^\varepsilon)$ as $\varepsilon \downarrow 0$. In particular,
the constant $\lambda_0$ is obtained. Section \ref{section_3} studies the generation
of interface and gives the proof of Theorem \ref{Thm_Generation}.
In a short time, the reaction term $f$ governs the system and the solution of
$(P^\varepsilon)$ behaves close to that of an ordinary differential equation.
Section \ref{section_4} discusses the propagation of interface and
Theorem \ref{Thm_Propagation} is proved. The sub- and super-solutions are
constructed by means of two functions $U_0$ and $U_1$ formally introduced in
asymptotic expansions in Section \ref{sec:2}.
Section \ref{section_5} gives the proof of Theorem \ref{thm_asymvali}.
Finally, in the Appendix, we define the mobility $\mu_{AC}$ and the surface tension
$\sigma_{AC}$ of the interface, especially in our nonlinear setting, and show the relation
$\lambda_0= \mu_{AC}\sigma_{AC}$.
\section{Formal derivation of the interface motion equation}
\label{sec:2}
In this section, we formally derive the equation of interface motion of the Problem $(P^\varepsilon)$ by applying the method of matched asymptotic expansions. For this purpose, we first define the interface $\Gamma_t$ and then derive its equation of motion.
Suppose that $u^\varepsilon$ converges to a step function $u$ where
\begin{align*}
u(x,t)
=
\begin{cases}
\alpha_+
&
\text{in}~ D^+_t
\\
\alpha_-
&
\text{in}~ D^-_t.
\end{cases}
\end{align*}
Let
\begin{align*}
\Gamma_t = \overline{D^+_t}\cap \overline{D^-_t}, \quad \overline{D^+_t}\cup \overline{D^-_t} = D ,~t \in [0,T].
\end{align*}
Let also $\overline{d}(x,t)$ be the signed distance function to $\Gamma_t$ defined by
\begin{align}\label{eqn_signed_dist}
\overline{d}(x,t)
:=
\begin{cases}
- dist(x, \Gamma_t)
& \text{ for } x \in \overline{D^-_t}
\\
dist(x, \Gamma_t)
& \text{ for } x \in D^+_t.
\end{cases}
\end{align}
Assume that $u^\varepsilon$ has the expansions
\begin{align*}
u^\varepsilon(x,t)
= \alpha_\pm + \varepsilon u^\pm_1(x,t) + \varepsilon^2 u^\pm_2(x,t) + \cdots
\end{align*}
away from the interface $\Gamma$ and that
\begin{align}\label{eqn_u^eps_expansion}
u^\varepsilon(x,t)
= U_0(x,t,\xi)
+ \varepsilon U_1(x,t,\xi)
+ \varepsilon^2 U_2(x,t,\xi)
+ \cdots
\end{align}
near $\Gamma$, where $\displaystyle{\xi = \frac{\overline{d}}{\varepsilon}}$. Here, the variable $\xi$ is introduced to describe the rapid transition between the regions $\{ u^\varepsilon \simeq \alpha_+ \}$ and $ \{ u^\varepsilon \simeq \alpha_- \}$. In addition, we normalize $U_0$ and $U_k$ so that
\begin{align}\label{cond_Uk_normal}
U_0(x,t,0) = \alpha,
\nonumber \\
U_k(x,t,0) = 0.
\end{align}
To match the inner and outer expansions, we require that
\begin{align}\label{cond_U0_matching}
U_0(x,t,\pm \infty) = \alpha_\pm,
~~~
U_k(x,t,\pm \infty) = u^\pm_k(x,t)
\end{align}
for all $k \geq 2$.
After substituting the expansion (\ref{eqn_u^eps_expansion}) into $(P^\varepsilon)$, we collect the $\varepsilon^{-2}$ terms, to obtain
\begin{align*}
\varphi(U_0)_{zz} + f(U_0) = 0.
\end{align*}
Since this equation only depends on the variable $z$, we may assume that $U_0$ is only a function of the variable $z$, that is $U_0(x,t,z) = U_0(z)$. In view of the conditions (\ref{cond_Uk_normal}) and (\ref{cond_U0_matching}), we find that $U_0$ is the unique increasing solution of the following problem
\begin{align}\label{eqn_AsymptExp_U0}
\begin{cases}
(\varphi(U_0))_{zz} + f(U_0)
= 0
\\
U_0(-\infty) = \alpha_-,~ U_0(0)= \alpha,~ U_0(+\infty) = \alpha_+.
\end{cases}
\end{align}
In order to understand the nonlinearity more clearly, we set
\begin{align*}
g(v) := f(\varphi^{-1}(v)),
\end{align*}
where $\varphi^{-1}$ is the inverse function of $\varphi$ and define $V_0(z) := \varphi(U_0(z))$; note that such transformation is possible by the condition (\ref{cond_phi'_bounded}). Substituting $V_0$ into equation (\ref{eqn_AsymptExp_U0}) yields
\begin{align}\label{eqn_AsymptExp_V0}
\begin{cases}
V_{0zz} + g(V_0) = 0
\\
V_0(-\infty) = \varphi(\alpha_-),~
V_0(0)= \varphi(\alpha), ~
V_0(+\infty) = \varphi(\alpha_+).
\end{cases}
\end{align}
Condition (\ref{cond_fphi_equipotential}) then implies the existence of the unique increasing solution of (\ref{eqn_AsymptExp_V0}).
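For orientation (a standard illustration, not taken from the text): in the linear diffusion case $\varphi(u)=u$ with the prototypical bistable nonlinearity $f(u)=u-u^{3}$, so that $\alpha_{\pm}=\pm 1$, $\alpha=0$ and \eqref{cond_fphi_equipotential} holds by odd symmetry, one checks directly that $V_0(z)=U_0(z)=\tanh(z/\sqrt{2})$ is the unique increasing solution of \eqref{eqn_AsymptExp_V0}.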
Next we collect the $\varepsilon^{-1}$ terms in the asymptotic expansion. In view of the definition of $U_0(z)$ and the condition (\ref{cond_Uk_normal}), we obtain the following problem
\begin{align}\label{eqn_AsymptExp_U1}
\begin{cases}
(\varphi'(U_0) \overline{U_1})_{zz} + f'(U_0)\overline{U_1}
= \overline{d}_t U_{0z} - (\varphi(U_0))_z \Delta \overline{d}
\\
\overline{U_1}(x,t,0) = 0, ~~~ \varphi'(U_0) \overline{U_1} \in L^\infty(\mathbb{R}).
\end{cases}
\end{align}
To prove the existence of solution to (\ref{eqn_AsymptExp_U1}), we consider the transformed function $\overline{V_1} = \varphi'(U_0)\overline{U_1}$, which gives the problem
\begin{align}\label{eqn_AsymptExp_V1}
\begin{cases}
\overline{V_{1}}_{zz} + g'(V_0)\overline{V_1}
=
\displaystyle{\frac{V_{0z}}{\varphi'(\varphi^{-1} (V_0) )}} \overline{d}_t
-
V_{0z} \Delta \overline{d}
\\
\overline{V_1}(x,t,0) = 0, ~~~
\overline{V_1} \in L^\infty(\mathbb{R}).
\end{cases}
\end{align}
Now, Lemma 2.2 of \cite{AHM2008} implies the existence and uniqueness of $V_1$ provided that
\begin{align*}
\int_{\mathbb{ R}}
\left(
\frac{1}{\varphi'(\varphi^{-1}(V_0))}
\overline{d}_t
- \Delta \overline{d}
\right)
V_{0z}^2
= 0.
\end{align*}
Substituting $V_0 = \varphi(U_0)$ and $ V_{0z} = \varphi'(U_0) U_{0z} $ yields
\begin{align}
\overline{d}_t
= \frac
{\int_{\mathbb{ R}} V_{0z}^2}
{\int_{\mathbb{ R}} \frac{V_{0z}^2}{\varphi'(\varphi^{-1}(V_0))}}
\Delta \overline{d}
= \frac
{\int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2}{\int_{\mathbb{ R}} \varphi'(U_0) U_{0z}^2}
\Delta \overline{d}.
\end{align}
It is known that $\overline{d}_t=-V_n$, where $V_n$ is equal to the normal velocity on the interface $\Gamma_t$, and $\Delta \overline{d}$ is equal to $(N - 1) \kappa$, where $\kappa$ is the mean curvature of $\Gamma_t$. Thus, we obtain the equation of motion of the interface $\Gamma_t$,
\begin{align}
V_n = -( N - 1 ) \lambda_0 \kappa,
\end{align}
where
\begin{align}
\lambda_0
=
\frac
{\int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2}{\int_{\mathbb{ R}} \varphi'(U_0) U_{0z}^2}. \label{eqn_lambda0}
\end{align}
The constant $\lambda_0$ is interpreted as the surface tension $\sigma_{AC}$ multiplied by the
mobility $\mu_{AC}$ of the interface; see Appendix. In particular, $(IP)$ coincides with
the equation (1) in \cite{AC1979}.
Finally, we derive an explicit form of $\lambda_0$. Indeed, we multiply
the equation \eqref{eqn_AsymptExp_U0} by ${\varphi(U_0)}_z$, yielding
\begin{equation*}
{\varphi(U_0)}_{zz} {\varphi(U_0)}_z+f(U_0) {\varphi(U_0)}_{z}=0\,.
\end{equation*}
Integrating from $-\infty$ to $z$, we obtain
$$
\frac 12 \big[\varphi(U_0)_z\big]^2(z)+\int _{-\infty} ^{z} f(U_0) {\varphi(U_0)}_{z} dz=0\,
$$
or alternatively
$$
\frac 12 \big[\varphi(U_0)_z\big]^2(z)+\int _{\alpha_-} ^{U_0(z)} f(s) \varphi'(s) ds=0\,.
$$
Hence,
\begin{equation}\label{intrinsic}
{\varphi(U_0)_z}(z)= \sqrt 2 \sqrt {W(U_0(z))}\,,
\end{equation}
where $W$ is given by
\betaegin{align}\lambdabel{eq:28}
W(u)
= - \int ^{u} _{\alphalpha_-} f(s) \varphi'(s) ds
= \int _{u} ^{\alphalpha_+} f(s) \varphi'(s) ds,
\varepsilonnd{align}
the last equality holding by \eqref{cond_fphi_equipotential}. It follows that
$$
\int_{{\mathbb{ R}}} {\varphi(U_0)}_z{U_{0z}}(z) dz= \sqrt 2 \int_{{\mathbb{ R}}} \sqrt{W(U_0(z))} U_{0z}(z) dz
$$
and hence, after the change of variables $u = U_0(z)$,
$$
\int _{{\mathbb{ R}}} {\varphi'(U_0)} U_{0z}^2(z)dz= \sqrt 2 \int _{\alphalpha_-} ^{\alphalpha_+} \sqrt{W(u)}du.
$$
Similarly, since
$$
\int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2dz = \sqrt 2 \int_{\mathbb{ R}} (\varphi'(U_0) \sqrt{W(U_0(z))} U_{0z}) dz,
$$
we get
\betaegin{align}\lambdabel{eq:29}
\int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2dz = \sqrt 2 \int _{\alphalpha_-} ^{\alphalpha_+} \varphi'(u) \sqrt{W(u)} du,
\varepsilonnd{align}
so that we finally obtain the formula
\betaegin{align}
\lambdabel{second lambda_0}
\lambdambda_0 = \frac
{\int _{\alphalpha_-} ^{\alphalpha_+} \varphi'(u) \sqrt{W(u)} du}{\int _{\alphalpha_-} ^{\alphalpha_+} \sqrt{W(u)}du}.
\varepsilonnd{align}
Note that if $\varphi(u)=u$, the case of the linear diffusion Allen-Cahn equation, we recover the value $\lambdambda_0 =1$ as expected.
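The explicit formula \eqref{second lambda_0} is easy to evaluate numerically. The following Python sketch is a sanity check only and not part of the analysis; the bistable nonlinearity $f(u) = u - u^3$, for which $\alpha_\pm = \pm 1$, and the sample diffusion coefficients are illustrative assumptions, chosen so that \eqref{cond_fphi_equipotential} holds by odd symmetry. It approximates $\lambda_0$ by quadrature and recovers $\lambda_0 = 1$ in the linear case $\varphi(u) = u$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative (assumed) data: bistable f with zeros -1, 0, 1 and an
# increasing phi; the equal-area condition int_{-1}^{1} f(s) phi'(s) ds = 0
# holds by odd symmetry for both choices of phi' below.
f = lambda u: u - u**3

def lambda0(dphi):
    """Approximate lambda_0 as the quotient of quadratures in (second lambda_0)."""
    W = lambda u: quad(lambda s: f(s) * dphi(s), u, 1.0)[0]  # W(u) = int_u^{alpha_+} f phi'
    num, _ = quad(lambda u: dphi(u) * np.sqrt(max(W(u), 0.0)), -1.0, 1.0)
    den, _ = quad(lambda u: np.sqrt(max(W(u), 0.0)), -1.0, 1.0)
    return num / den

print(lambda0(lambda u: 1.0))         # linear diffusion phi(u) = u: expect 1.0
print(lambda0(lambda u: 1.0 + u**2))  # phi(u) = u + u^3/3: a value larger than 1
\end{verbatim}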
\section{Generation of the interface}\lambdabel{section_3}
In this section, we prove Theorem \ref{Thm_Generation} on the generation of the interface.
The main idea, based on the comparison principle Lemma \ref{lem_comparison}, is to construct suitable sub- and super-solutions. The proof of Theorem \ref{Thm_Generation} is given in Section \ref{proof_subsec_thm_gen}.
\subsection{Comparison principle}
\betaegin{lem}\lambdabel{lem_comparison}
Let $v \in C^{2,1} (\overline{D} \tauimes {\mathbb{ R}}^+)$ satisfy
\betaegin{align*}
(P)~
\betaegin{cases}
v_t
\geq \Deltalta \varphi(v)
+ \displaystyle{ \frac{1}{\varepsilon^2}} f(v)
&\mbox{ in } D \tauimes \mathbb{R}^+\\
{ \dfrac{\partial \varphi(v)}{\partial \nu} }
= 0
&\mbox{ in } \partial D \tauimes \mathbb{R}^+\\
v(x,0) \geq u_0(x)
&\tauext{ for } x \in D.
\varepsilonnd{cases}
\varepsilonnd{align*}
Then, $v$ is a super-solution of Problem $(P^\varepsilon)$ and we have
\betaegin{align*}
v(x,t) \geq u^\varepsilon(x,t)
,~~
(x,t) \in D \tauimes {\mathbb{ R}}^+.
\varepsilonnd{align*}
If $v$ satisfies the opposite inequalities in Problem $(P)$, then $v$ is a sub-solution of Problem $(P^\varepsilon)$ and we have
\betaegin{align*}
v(x,t) \leq u^\varepsilon(x,t)
,~~
(x,t) \in D \tauimes {\mathbb{ R}}^+.
\varepsilonnd{align*}
\varepsilonnd{lem}
\betaegin{proof}
Consider the inequality satisfied by the difference of
a super-solution $v$ and the solution $u^\varepsilon$, and apply the maximum principle to
the function $w := v - u^\varepsilon$ to conclude that it is nonnegative.
\varepsilonnd{proof}
\subsection{Solution of the corresponding ordinary differential equation}
In the first stage of the evolution, we expect the solution to behave like the solution of the corresponding ordinary differential equation:
\betaegin{align}\lambdabel{eqn_generation_ODE}
\betaegin{cases}
Y_\tauau(\tauau, \zeta) = f(Y(\tauau,\zeta)) & \tauau > 0\\
Y(0,\zeta) = \zeta & \zeta \in \mathbb{R}.
\varepsilonnd{cases}
\varepsilonnd{align}
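Before turning to the precise estimates, the behaviour of \eqref{eqn_generation_ODE} can be illustrated numerically. The sketch below is only an illustration; the choice $f(u) = u - u^3$, so that $\alpha_\pm = \pm 1$, $\alpha = 0$ and $\mu = f'(\alpha) = 1$, is an assumption. It integrates the ODE up to the time $\mu^{-1}|\ln \varepsilon|$ and shows that initial values above (respectively below) $\alpha$ are driven close to $\alpha_+$ (respectively $\alpha_-$), in the spirit of Lemma \ref{Lem_Generation_Matthieu} below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative bistable nonlinearity (assumed): zeros at -1, 0, 1 and mu = f'(0) = 1.
f = lambda u: u - u**3
mu, eps = 1.0, 1e-3
tau_gen = abs(np.log(eps)) / mu        # the time scale mu^{-1} |ln eps|

for zeta in (-0.5, -0.05, 0.05, 0.5):
    sol = solve_ivp(lambda t, y: f(y), (0.0, tau_gen), [zeta], rtol=1e-8)
    print(f"zeta = {zeta:+.2f}  ->  Y(mu^-1|ln eps|, zeta) = {sol.y[0, -1]:+.6f}")
# Values above alpha = 0 end up near alpha_+ = 1, values below near alpha_- = -1.
\end{verbatim}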
We deduce the following result from \cite{AHM2008}.
\betaegin{lem}\lambdabel{Lem_Generation_Matthieu}
Let $\varepsilonta \in (0, \varepsilonta_0)$ be arbitrary. Then, there exists a positive constant $C_Y
= C_Y(\varepsilonta)$ such that the following holds:
\betaegin{enumerate}[label =(\roman*)]
\item There exists a positive constant $\overline{\mu}$ such that for all $\tauau > 0$ and all $\zeta \in (-2C_0, 2C_0)$,
\betaegin{align}\lambdabel{lem_gen_1}
e^{- \overline{\mu} \tauau}
\leq
Y_\zeta(\tauau,\zeta)
\leq
C_Y e^{\mu \tauau}.
\varepsilonnd{align}
\item For all $\tauau > 0$ and all $\zeta \in (-2C_0, 2C_0)$,
$$
\left|
\frac{Y_{\zeta \zeta}(\tauau, \zeta)}{Y_\zeta(\tauau, \zeta)}
\right|
\leq C_Y (e^{\mu \tauau} - 1).
$$
\item There exists a positive constant $\varepsilon_0$ such that, for all $\varepsilon \in (0, \varepsilon_0)$, we have
\betaegin{enumerate}
\item for all $\zeta \in (-2C_0, 2C_0)$
\betaegin{align}\lambdabel{Lem_Generation_i}
\alphalpha_- - \varepsilonta
\leq
Y(\mu^{-1} |\ln \varepsilon|, \zeta)
\leq
\alphalpha_+ + \varepsilonta;
\varepsilonnd{align}
\item if $\zeta \geq \alphalpha + C_Y \varepsilon$, then
\betaegin{align}\lambdabel{Lem_Generation_ii}
Y(\mu^{-1} |\ln \varepsilon|, \zeta) \geq \alphalpha_+ - \varepsilonta;
\varepsilonnd{align}
\item if $\zeta \leq \alphalpha - C_Y \varepsilon$, then
\betaegin{align*}
Y(\mu^{-1} |\ln \varepsilon|, \zeta) \leq \alphalpha_- + \varepsilonta.
\varepsilonnd{align*}
\varepsilonnd{enumerate}
\varepsilonnd{enumerate}
\varepsilonnd{lem}
\betaegin{proof}
These results can be found in Lemma 4.7 and Lemma 3.7 of \cite{AHM2008}, except for \eqref{lem_gen_1}. To prove \eqref{lem_gen_1},
we follow computations similar to those in Lemma 3.2 of \cite{AHM2008}.
Differentiating \varepsilonqref{eqn_generation_ODE} by $\zeta$, we obtain
\betaegin{align*}
\betaegin{cases}
Y_{\zeta \tauau}(\tauau, \zeta) = f'(Y(\tauau,\zeta))Y_\zeta, & \tauau > 0\\
Y_\zeta(0,\zeta) = 1, & ~
\varepsilonnd{cases}
\varepsilonnd{align*}
which yields the following equality,
\betaegin{align}\lambdabel{lem_gen_2}
Y_\zeta(\tauau, \zeta)
=
\varepsilonxp
\left[
\int_0^\tauau f'(Y(s,\zeta))
\right].
\varepsilonnd{align}
Hence, for $\zeta = \alphalpha$,
\betaegin{align*}
Y_\zeta(\tauau, \alphalpha)
=
\varepsilonxp
\left[
\int_0^\tauau f'(Y(s,\alphalpha))
\right]
=
e^
{
\mu \tauau
},
\varepsilonnd{align*}
where the last equality follows since $Y(\tauau,\alphalpha) = \alphalpha$. Also, for $\zeta = \alphalpha_\pm$, by \varepsilonqref{cond_f_bistable}, we have
\betaegin{align*}
Y_\zeta(\tauau, \alphalpha_\pm)
\leq
e^
{
\mu \tauau
}.
\varepsilonnd{align*}
For $\zeta \in (\alphalpha_- + \varepsilonta, \alphalpha_+ - \varepsilonta) \setminus \{\alphalpha\}$, Lemma 3.4 of \cite{AHM2008} guarantees the upper bound of $Y_\zeta$ in \varepsilonqref{lem_gen_1}.
We only need to consider the case that $\zeta \in (-2C_0, 2C_0 ) \setminus (\alpha_- + \eta, \alpha_+ - \eta)$. It follows from \eqref{cond_f_bistable} that we can choose positive constants $\eta$ and $\overline{\eta}$ such that
\betaegin{align}\lambdabel{lem_gen_4}
f'(s) < 0
~,
s \in I,
\varepsilonnd{align}
where $I := (\alphalpha_- - \overline{\varepsilonta}, \alphalpha_- + \varepsilonta) \cup (\alphalpha_+ - \varepsilonta, \alphalpha_+ + \overline{\varepsilonta})$. Moreover, \varepsilonqref{cond_f_bistable} and \varepsilonqref{cond_f_tech} imply
\betaegin{align}\lambdabel{lem_gen_5}
Y(\tauau,\zeta)
\in
J,
\varepsilonnd{align}
for $\zeta \in J$ where $J := (\min\{- 2C_0, \alphalpha_- - \overline{\varepsilonta}\}, \alphalpha_- + \varepsilonta)
\cup
(\alphalpha_+ - \varepsilonta, \max\{ 2C_0, \alphalpha_+ + \overline{\varepsilonta}\})$.
Thus, \varepsilonqref{lem_gen_2},
\varepsilonqref{lem_gen_4} and \varepsilonqref{lem_gen_5} guarantee the upper bound of \varepsilonqref{lem_gen_1} for $\zeta \in I$, which leaves us only the case $\zeta \in (- 2C_0, 2C_0) \setminus I.$
We consider now the case $\zeta \in (\alphalpha_+ + \overline{\varepsilonta}, 2 C_0)$; the case of $\zeta \in (-2C_0, \alphalpha_- - \overline{\varepsilonta})$ can be analysed in a similar way. By (3.13) in \cite{AHM2008}, we have
\betaegin{align}\lambdabel{lem_gen_3}
\ln Y_\zeta(\tauau, \zeta)
=
f'(\alphalpha_+) \tauau + \int_\zeta^{Y(\tauau,\zeta)} \tauilde{f}(s) ds
,~ \text{ where }
\tauilde{f}(s)
= \dfrac{f'(s) - f'(\alphalpha_+)}{f(s)}.
\varepsilonnd{align}
Note that $\tauilde{f}(s) \tauo \dfrac{f''(\alphalpha_+)}{f'(\alphalpha_+)}$ as $s \tauo \alphalpha_+$, so that
$\tauilde{f}$ may be extended as a continuous function. We define
\betaegin{align*}
\tauilde{F}
:=
\Vert \tauilde{f} \Vert_{L^{\infty} (\alphalpha_+, \max\{ 2C_0, \alphalpha_+ + \overline{\varepsilonta}\})}.
\varepsilonnd{align*}
Since \varepsilonqref{cond_f_tech} yields $Y(\tauau, \zeta) > \alphalpha_+$ for $\zeta \in (\alphalpha_+ + \overline{\varepsilonta}, 2 C_0)$, by \varepsilonqref{lem_gen_3} we can find a constant $C_Y$ large enough such that
\betaegin{align*}
Y_\zeta(\tauau,\zeta)
\leq
C_Y e^{f'(\alphalpha_+) \tauau}
\leq C_Y e^{\mu \tauau}.
\varepsilonnd{align*}
Thus, we obtain the upper bound of \varepsilonqref{lem_gen_1}.
\newline
For the lower bound, we first define
\betaegin{align*}
\overline{\mu} := - \min_{s \in I'} f'(s),~ I' = [- 2C_0, 2C_0] \cup [\alphalpha_-, \alphalpha_+].
\varepsilonnd{align*}
Note that $\overline{\mu} > 0$ by \varepsilonqref{cond_f_bistable}. Thus, by \varepsilonqref{lem_gen_2}, we obtain
\betaegin{align*}
Y_\zeta(\tauau, \zeta) \geq e^{- \overline{\mu} \tauau}.
\varepsilonnd{align*}
\varepsilonnd{proof}
\subsection{Construction of sub- and super-solutions}
We now construct sub- and super-solutions for the proof of Theorem \ref{Thm_Generation}. For simplicity, we first consider the case where
\betaegin{align}\lambdabel{cond_subsuper_Neumann}
\frac{\partial u_0}{\partial \nu} = 0 \tauext{ on } \partial D.
\varepsilonnd{align}
In this case, we define the sub- and super-solutions as follows:
\begin{align*}
w^{\pm}_\varepsilon(x,t)
&= Y
\left(
\frac{t}{\varepsilon^2},\,
u_0(x)
\pm
\varepsilon^2 C_2
\left(
e^{\mu t/\varepsilon^2} - 1
\right)
\right)
\\
&= Y
\left(
\frac{t}{\varepsilon^2},\,
u_0(x)
\pm
P(t)
\right)
\end{align*}
for some constant $C_2 > 0$, where $P(t) := \varepsilon^2 C_2 \left( e^{\mu t/\varepsilon^2} - 1 \right)$.
In the general case, where (\ref{cond_subsuper_Neumann}) does not necessarily hold, we need to modify $w^{\pm}_\varepsilon$ near the boundary $\partial D$. This will be discussed later in the proof of Theorem \ref{Thm_Generation}; see after equation \eqref{eqn_proofofgeneration}. In what follows, $\mathcal{L}$ denotes the operator
$$
\mathcal{L} u := u_t - \Delta \varphi(u) - \frac{1}{\varepsilon^2} f(u).
$$
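For readers who wish to visualise the construction, the following sketch evaluates $w^{\pm}_\varepsilon(\cdot, t^\varepsilon)$ by solving \eqref{eqn_generation_ODE} numerically. It is purely illustrative: the one-dimensional domain, the initial datum $u_0$ (chosen so that \eqref{cond_subsuper_Neumann} holds), the nonlinearity $f$ and all constants are assumptions, not the quantities fixed in the proofs. The output shows the two barriers collapsing onto $\alpha_\pm$ away from the zero level set of $u_0 - \alpha$, while they still straddle both stable states at the crossing points of $u_0$ with $\alpha$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data (assumptions): f bistable with alpha_pm = +-1, alpha = 0, mu = 1,
# one-dimensional domain D = (-1, 1), and u0 with u0'(+-1) = 0.
f = lambda u: u - u**3
mu, eps, C2 = 1.0, 5e-3, 1.0
t_eps = eps**2 * abs(np.log(eps)) / mu                   # generation time t^eps
P = eps**2 * C2 * (np.exp(mu * t_eps / eps**2) - 1.0)    # = eps^2 C2 (1/eps - 1)

def Y(tau, zeta):
    """Solution of Y_tau = f(Y), Y(0) = zeta, evaluated at time tau."""
    return solve_ivp(lambda t, y: f(y), (0.0, tau), [zeta], rtol=1e-8).y[0, -1]

x = np.linspace(-1.0, 1.0, 11)
u0 = 0.4 * np.cos(np.pi * x)                             # crosses alpha = 0 at x = +-1/2
w_minus = [Y(t_eps / eps**2, z - P) for z in u0]
w_plus  = [Y(t_eps / eps**2, z + P) for z in u0]
for xi, lo, hi in zip(x, w_minus, w_plus):
    print(f"x = {xi:+.1f}:  {lo:+.3f} <= u^eps(x, t^eps) <= {hi:+.3f}")
\end{verbatim}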
\betaegin{lem}\lambdabel{Lem_generation_with_homo_Neumann}
Assume (\ref{cond_subsuper_Neumann}). Then, there exist positive constants $\varepsilon_0$ and $C_2, \overline{C}_2$ independent of $\varepsilon$ such that, for all $\varepsilon \in (0,\varepsilon_0)$, $w^{\pm}_\varepsilon$ satisfies
\betaegin{align}\lambdabel{eqn_Gen_subsuper}
\betaegin{cases}
\mathcal{L} (w^-_\varepsilon) < - \overline{C}_2 e^{-\frac{\overline{\mu} t}{\varepsilon^2}} < \overline{C}_2 e^{-\frac{\overline{\mu} t}{\varepsilon^2}} < \mathcal{L}(w^+_\varepsilon)
&
\tauext{ in } \overline{D} \tauimes [0,t^\varepsilon]
\\
\displaystyle{\frac{\partial w^-_\varepsilon}{\partial \nu}
= \frac{\partial w^+_\varepsilon}{\partial \nu}}
= 0
&
\tauext{ on } \partial D \tauimes [0,t^\varepsilon].
\varepsilonnd{cases}
\varepsilonnd{align}
\varepsilonnd{lem}
\betaegin{proof}
We only prove that $w^{+}_\varepsilon$ is the desired super-solution; the case of $w^-_\varepsilon$ can be treated in a similar way. The assumption (\ref{cond_subsuper_Neumann}) implies
$$
\frac{\partial w^\pm_\varepsilon}{\partial \nu} = 0 \text{ on } \partial D \times \mathbb{R}^+.
$$
Recall that the operator $\mathcal{L}$ is given by
$$
\mathcal{L} u = u_t - \Delta \varphi(u) - \frac{1}{\varepsilon^2} f(u).
$$
Then, direct computation with $\tauau = t/\varepsilon^2$ gives
\betaegin{align*}
\mathcal{L}(w^+_\varepsilon)
&= \frac{1}{\varepsilon^2} Y_\tauau
+ P'(t) Y_\zeta
- \left(
\varphi''(w^+_\varepsilon) | \nablabla u_0|^2 (Y_\zeta)^2
+ \varphi'(w^+_\varepsilon) \Deltalta u_0 Y_\zeta
+ \varphi'(w^+_\varepsilon) |\nablabla u_0|^2 Y_{\zeta\zeta}
+ \frac{1}{\varepsilon^2} f(Y)
\right)\\
&= \frac{1}{\varepsilon^2} ( Y_\tauau - f(Y))
+ Y_{\zeta}
\left(
P'(t)
- \left(
\varphi''(w^+_\varepsilon) | \nablabla u_0|^2 Y_\zeta
+ \varphi'(w^+_\varepsilon) \Deltalta u_0
+ \varphi'(w^+_\varepsilon) |\nablabla u_0|^2 \frac{Y_{\zeta\zeta}}{Y_\zeta}
\right)
\right).
\varepsilonnd{align*}
By the definition of $Y$, the first term on the right-hand side vanishes. By choosing $\varepsilon_0$ sufficiently small, for $0 \leq t \leq t^\varepsilon$, we have
$$
P(t)
\leq P(t^\varepsilon)
= \varepsilon^2 C_2(e^{\mu t^\varepsilon/\varepsilon^2} - 1)
= \varepsilon^2 C_2(\varepsilon^{-1} - 1) < C_0.
$$
Hence, $|u_0 + P(t)| < 2C_0$. Applying Lemma \ref{Lem_Generation_Matthieu}, \varepsilonqref{cond_C0} and \varepsilonqref{cond_C1} gives
\betaegin{align*}
\mathcal{L} w^+_\varepsilon
&\geq
Y_\zeta \left(
C_2 \mu e^{\mu t /\varepsilon^2}
- (
C_0^2 C_1 C_Y e^{\mu t / \varepsilon^2}
+ C_0 C_1
+ C_0^2 C_1 C_Y (e^{\mu t / \varepsilon^2} - 1))
\right)\\
&=
Y_\zeta \left(
(C_2 \mu - C_0^2 C_1 C_Y - C_0^2 C_1 C_Y)e^{\mu t / \varepsilon^2}
+ C_0^2 C_1C_Y
- C_0 C_1
\right).
\varepsilonnd{align*}
By \eqref{lem_gen_1}, for $C_2$ large enough, we can find a positive constant $\overline{C}_2$ independent of $\varepsilon$ such that
$$
\mathcal{L} w^+_\varepsilon \geq \overline{C}_2 e^{-\frac{\overline{\mu} t}{\varepsilon^2}}.
$$
Thus, $w^+_\varepsilon$ is a super-solution for Problem $(P^\varepsilon)$.
\varepsilonnd{proof}
\subsection{Proof of Theorem \ref{Thm_Generation}}
\lambdabel{proof_subsec_thm_gen}
We deduce from the comparison principle Lemma \ref{lem_comparison} and the construction of the sub- and super-solutions that
\betaegin{align}\lambdabel{eqn_proofofgeneration}
w^-_\varepsilon(x,t^\varepsilon)
\leq
u^\varepsilon(x,t^\varepsilon)
\leq
w^+_\varepsilon(x,t^\varepsilon)
\varepsilonnd{align}
under the condition (\ref{cond_subsuper_Neumann}).
If \varepsilonqref{cond_subsuper_Neumann} does not hold, one can modify the functions $w^\pm$ as follows:
from condition (\ref{cond_u0_inout}), there exist positive constants $d_0$ and $\rho$ such that
(i) the distance function
$d(x,\partial D) $ is smooth enough on $\{ x \in D : d(x,\partial D) < 2 d_0 \}$ and
(ii) $u_0(x) \geq \alphalpha + \rho$ if $d(x, \partial D) \leq d_0$.
Let $\xi$ be a smooth cut-off function defined on $[0,+\infty)$ such that $0 \leq \xi \leq 1, \xi(0) = \xi'(0) = 0$ and $\xi(z) = 1$ for $z \geq d_0$. Define
\betaegin{align*}
u_0^+
&:= \xi(d(x,\partial D)) u_0(x)
+\left[
1 - \xi(d(x, \partial D))
\right]
\max_{\overline{D}} u_0
\\
u_0^-
&:= \xi(d(x,\partial D)) u_0(x)
+\left[
1 - \xi(d(x, \partial D))
\right]
(\alphalpha + \rho).
\varepsilonnd{align*}
Then, $u_0^- \leq u_0 \leq u_0^+$ and $u_0^\pm$ satisfy the homogeneous Neumann boundary condition \varepsilonqref{cond_subsuper_Neumann}. Thus, by using a similar argument as in the proof of Lemma \ref{Lem_generation_with_homo_Neumann}, we may find sub- and super-solutions as follows,
\betaegin{align*}
w^{\pm}_\varepsilon(x,t)
= Y
\left(
\frac{t}{\varepsilon^2},
u_0^\pm(x)
\pm
\varepsilon^2 C_2
\left(
e^{\mu t/\varepsilon^2} - 1
\right)
\right).
\varepsilonnd{align*}
We now show \varepsilonqref{Thm_generation_i}, \varepsilonqref{Thm_generation_ii} and \varepsilonqref{Thm_generation_iii}. By the definition of $C_0$ in (\ref{cond_C0}), we have
\betaegin{align*}
-C_0
\leq \min_{x \in \overline{D}} u_0(x)
<
\alphalpha + \rho.
\varepsilonnd{align*}
Thus, for $\varepsilon_0$ small enough, we have that
$$
- 2 C_0
\leq
u^\pm_0(x) \pm (C_2 \varepsilon - C_2 \varepsilon^2)
\leq
2 C_0
~~~
\tauext{ for }
x \in D
$$
holds for any $\varepsilon \in (0, \varepsilon_0)$.
Thus, the assertion (\ref{Thm_generation_i}) is a direct consequence of (\ref{Lem_Generation_i}) and (\ref{eqn_proofofgeneration}).
For (\ref{Thm_generation_ii}), first we choose $M_0$ large enough so that
$M_0 \varepsilon - C_2 \varepsilon + C_2 \varepsilon^2 \geq C_Y \varepsilon$. Then, for any $x \in D$ such that $u^-_0(x) \geq \alphalpha + M_0 \varepsilon$, we have
$$
u_0^-(x) -
\varepsilon^2 C_2
\left(
e^{\mu t/\varepsilon^2} - 1 \right)\geq u^-_0(x) - (C_2 \varepsilon - C_2 \varepsilon^2)
\geq
\alphalpha + M_0 \varepsilon - C_2 \varepsilon + C_2 \varepsilon^2
\geq
\alphalpha + C_Y \varepsilon.
$$
Therefore, with (\ref{Lem_Generation_ii}) and (\ref{eqn_proofofgeneration}), we see that
$$
u^\varepsilon(x,t^\varepsilon) \geq \alphalpha_+ - \varepsilonta
$$
\noindent for any $x \in D$ such that $u^-_0(x) \geq \alphalpha + M_0 \varepsilon$, which implies (\ref{Thm_generation_ii}).
Note that (\ref{Thm_generation_iii}) can be shown in the same way. This completes the proof of Theorem \ref{Thm_Generation}.
\qed
\section{Propagation of the interface}\lambdabel{section_4}
The main idea of the proof of Theorem \ref{Thm_Propagation} is to fit the two constructions together:
by the comparison principle Lemma \ref{lem_comparison}, we show at the generation time that $u^+(x,0) \geq w^+(x, t^\varepsilon)$ and
$u^-(x,0) \leq w^-(x,t^\varepsilon)$, so that we can pass continuously from the sub- and super-solutions used for the generation
of the interface to those used for its propagation.
To this end, we first introduce a modified signed distance function, and several estimates
on the functions $U_0$ and $U_1$
useful in the sub- and super-solution construction, before showing Theorem \ref{Thm_Propagation} in Section \ref{proof_thm_prop}.
\subsection{A modified signed distance function}
We introduce a useful cut-off signed distance function $d$ as follows. Recall the signed distance function $\overline{d}$ defined in \eqref{eqn_signed_dist} and the interface $\Gamma_t$ satisfying \eqref{eqn_motioneqn}. Choose $d_0 > 0$ small enough so that the signed distance function $\overline{d}$ is smooth in the set
$$
\{
(x,t) \in \overline{D} \tauimes [0,T] , | \overline{d}(x,t) | < 3 d_0
\}
$$
and that
$$
dist(\Gammamma_t, \partial D) \geq 3 d_0
\tauext{ for all } t \in [0,T].
$$
Let $h(s)$ be a smooth { non-decreasing} function on $\mathbb{R}$ such that
$$
h(s) =
\betaegin{cases}
s & \tauext{if}~ |s| \leq d_0\\
-2d_0 & \tauext{if}~ s \leq -2d_0\\
2d_0 & \tauext{if}~ s \geq 2d_0.
\varepsilonnd{cases}
$$
We then define the cut-off signed distance function $d$ by
$$
d(x,t) = h(\overline{d}(x,t)), ~~~ (x,t) \in \overline{D} \tauimes [0,T].
$$
Note that, since $d$ coincides with $\overline{d}$ in the region
$$
\{
(x,t) \in D \times [0,T] : | d(x,t)| < d_0
\},
$$
we have
\betaegin{align*}
d_t
= \lambdambda_0 \Deltalta d
~\tauext{ on }~ \Gammamma_t.
\varepsilonnd{align*}
Moreover, $d$ is constant near $\partial D$ and the following properties hold.
\betaegin{lem}\lambdabel{Lem_d_bound}
There exists a constant $C_d > 0$ such that
\betaegin{enumerate}[label = (\roman*)]
\item
$|d_t| + |\nablabla d| + |\Deltalta d| \leq C_d$,
\item
$
\left|
d_t - \lambdambda_0 \Deltalta d
\right|
\leq
C_d |d|
$
\varepsilonnd{enumerate}
in $\overline{D} \tauimes [0,T]$.
\varepsilonnd{lem}
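These properties can be checked explicitly when $\Gamma_t$ is a sphere shrinking under \eqref{eqn_motioneqn}. The short Python sketch below is a numerical sanity check only (the dimension $N$, the value of $\lambda_0$, the spherical geometry and the observation time are assumptions): it confirms that $\overline{d}_t - \lambda_0 \Delta \overline{d}$ vanishes on the interface and grows at most linearly in $|\overline{d}|$ near it, as in item (ii).
\begin{verbatim}
import numpy as np

# Sanity check of item (ii) for a sphere shrinking by V_n = -(N-1)*lam0*kappa
# (assumed data: N, lam0, initial radius R0 and the observation time t).
N, lam0, R0, t = 3, 2.0, 1.0, 0.05
R = np.sqrt(R0**2 - 2.0 * (N - 1) * lam0 * t)   # radius of Gamma_t

r = np.linspace(0.7 * R, 1.3 * R, 7)            # radii of points near the interface
dbar = r - R                                    # signed distance to Gamma_t
dbar_t = (N - 1) * lam0 / R                     # = -V_n = -R'(t)
lap_dbar = (N - 1) / r                          # Laplacian of |x| - R(t) in R^N
residual = dbar_t - lam0 * lap_dbar
for di, res in zip(dbar, residual):
    print(f"dbar = {di:+.3f}   dbar_t - lam0*Delta(dbar) = {res:+.4f}")
# The residual vanishes at dbar = 0 and is of order |dbar| near the interface.
\end{verbatim}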
\subsection{Estimates for the functions $U_0, U_1$}
Here, we give estimates for the functions which will be used to construct the sub- and super-solutions. Recall that $U_0$ (cf. \varepsilonqref{eqn_AsymptExp_U0}) is a solution of the equation
\betaegin{align*}
(\varphi(U_0))_{zz} + f(U_0) = 0.
\varepsilonnd{align*}
We have the following lemma.
\betaegin{lem}\lambdabel{Lem_U0_bound}
There exist constants $\hat{C}_0, \lambda_1 > 0$ such that, for all $z\in \mathbb{R}$,
\betaegin{enumerate}[label = (\roman*)]
\item
$
|U_0| , ~ |U_{0z}| , ~ |U_{0zz}|
\leq \hat{C}_0,
$
\item
$
|U_{0z}|, ~ |U_{0zz}|
\leq \hat{C}_0 \varepsilonxp(- \lambdambda_1 |z|).
$
\varepsilonnd{enumerate}
\varepsilonnd{lem}
\betaegin{proof}
Recall that $V_0 = \varphi(U_0)$ satisfies the equation (\ref{eqn_AsymptExp_V0}) with $\varphi \in C^4({\mathbb{ R}})$. Lemma 2.1 of \cite{AHM2008} implies that there exist some positive constants $\overline{C}_0$ and $\lambdambda_1$ such that, for all $z\in \mathbb{R}$,
\betaegin{align*}
&|V_0| ,~ |V_{0z}| ,~ |V_{0zz}|
\leq \overline{C}_0;
\\
&|V_{0z}| ,~ |V_{0zz}|
\leq \overline{C}_0 \varepsilonxp(- \lambdambda_1 |z|),
\varepsilonnd{align*}
and therefore, in view of \eqref{cond_phi'_bounded}, similar bounds hold for $U_0$.
\varepsilonnd{proof}
In terms of the cut-off signed distance
function $d=d(x,t)$, for each $(x,t) \in \overline{D}\tauimes [0,T]$,
we define $U_1(x,t,\cdot) : {\mathbb{ R}} \rightarrow {\mathbb{ R}}$
as the solution of the following equation:
\betaegin{align}\lambdabel{eqn_U1_bar}
\betaegin{cases}
(\varphi'(U_0) U_1)_{zz} + f'(U_0)U_1
= (\lambdambda_0 U_{0z} - (\varphi(U_0))_z) \Deltalta d\\
U_1(x,t,0) = 0, ~~~
\varphi'(U_0) U_1 \in L^\infty(\mathbb{R}).
\varepsilonnd{cases}
\varepsilonnd{align}
Existence of the solution $U_1$ can be shown in the same way as that for
$\overline{U_1}$ in \varepsilonqref{eqn_AsymptExp_U1}. Finally, we give the following estimates for $U_1=U_1(x,t,z)$.
\betaegin{lem}\lambdabel{Lem_U1_bound}
There exist constants $\hat{C}_1, \lambda_1 > 0$ such that, for all $z \in \mathbb{R}$,
\betaegin{enumerate}[label = (\roman*)]
\item
$
|U_1| ,~ |{U_1}_z| ,~ |{U_1}_{zz}| ,~ |\nablabla {U_1}_z| ,~ |\nablabla {U_1}| ,~ |\Deltalta{U_1}| ,~ |U_{1t}| \leq \hat{C}_1,
$
\item
$
|{U_1}_z| ,~ |{U_1}_{zz}|,~ |\nablabla {U_1}_z| \leq \hat{C}_1 \varepsilonxp(- \lambdambda_1 |z|).
$
\varepsilonnd{enumerate}
{
Here, the operators $\nablabla$ and $\Deltalta$ act on the variable $x$.}
\varepsilonnd{lem}
\betaegin{proof}
Define $V_1(z) := \varphi'(U_0(z)) {U}_1(z)$. As in (\ref{eqn_AsymptExp_U1}), we obtain an equation for $V_1$:
\begin{align}\label{eqn_V1_cutoff}
\begin{cases}
V_{1zz} + g'(V_0)V_1
=
\Big[
\lambda_0 \displaystyle{\frac{V_{0z}}{\varphi'(\varphi^{-1} (V_0) )} }
- V_{0z}
\Big]
\Delta d
\\
V_1(x,t,0) = 0, ~~~
V_1 \in L^\infty(\mathbb{R}).
\end{cases}
\end{align}
Applying Lemmas 2.2 and 2.3 of \cite{AHM2008} to (\ref{eqn_V1_cutoff}) implies the boundedness of $V_1, V_{1z}, V_{1zz}$. Moreover, since $d$ is smooth in $\overline{D} \times [0,T]$, we can apply Lemma 2.2 of \cite{AHM2008} to obtain the boundedness of $\nabla V_1, \Delta V_1$. The desired estimates for the function $U_1$ now follow via the smoothness of $\varphi$, as in the proof of Lemma \ref{Lem_U0_bound}.
\varepsilonnd{proof}
\subsection{Construction of sub- and super-solutions}
We construct candidate sub- and super-solutions as follows: given $\varepsilon > 0$, define
\betaegin{align}
\lambdabel{star0}
u^\pm(x,t)
= U_0
\left(
\frac{d(x,t) \pm \varepsilon p(t)}{\varepsilon}
\right)
+ \varepsilon U_1
\left(x,t,
\frac{d(x,t) \pm \varepsilon p(t)}{\varepsilon}
\right)
\pm q(t)
\varepsilonnd{align}
where
\betaegin{align*}
& p(t) = - e^{- \betaeta t/ \varepsilon^2} + e^{Lt} + K,\\
&q(t) = \sigmagma
\left(
\betaeta e^{- \betaeta t / \varepsilon^2} + \varepsilon^2 L e^{Lt}
\right),
\varepsilonnd{align*}
in terms of positive constants $\beta, \sigma, L, K$. Next, we give specific conditions on these constants, which will be used to show that $u^\pm$ are indeed sub- and super-solutions.
We assume that the positive constant $\varepsilon_0$ obeys
\betaegin{align}\lambdabel{eqn_cond_elc}
\varepsilon_0^2 L e^{LT} \leq 1, ~~~
\varepsilon_0\hat{C}_1 \leq \frac{1}{2}.
\varepsilonnd{align}
We first give a result on the sign of $f'(U_0(z)) + (\varphi'(U_0(z)))_{zz}$.
\betaegin{lem}\lambdabel{Lem_f'+phi'_bound}
There exists $b > 0$ such that $f'(U_0(z)) + (\varphi'(U_0))_{zz} < 0$ on $\{ z : U_0(z) \in [\alphalpha_-,~\alphalpha_- + b] \cup [\alphalpha_+ - b ,~\alphalpha_+]\}$.
\varepsilonnd{lem}
\betaegin{proof}
We can choose $b_1, \mathcal{F} > 0$ such that
\betaegin{align*}
f'(U_0(z)) < - \mathcal{F}
\varepsilonnd{align*}
on $\{ z : U_0(z) \in [\alphalpha_-,~\alphalpha_- + b_1] \cup [\alphalpha_+ - b_1 ,~\alphalpha_+]\}$.
Note that $(\varphi'(U_0))_{zz} = \varphi'''(U_0) U^2_{0z} + \varphi''(U_0) U_{0zz}$. From Lemma \ref{Lem_U0_bound}, we can choose $b_2 > 0$ small enough so that
\betaegin{align*}
| (\varphi'(U_0))_z | < \mathcal{F},~~~| (\varphi'(U_0))_{zz} | < \mathcal{F}
\varepsilonnd{align*}
on $\{ z : U_0(z) \in [\alphalpha_-,~\alphalpha_- + b_2] \cup [\alphalpha_+ - b_2 ,~\alphalpha_+]\}$. Define $b := \min \{b_1, b_2 \}$. Then, we have
\begin{align*}
f'(U_0(z)) + (\varphi'(U_0))_{zz}
< -\mathcal{F} + \mathcal{F}
= 0.
\end{align*}
\varepsilonnd{proof}
Fix $b > 0$ which satisfies the result of Lemma \ref{Lem_f'+phi'_bound}. Denote $ J_1 := \{ z : U_0(z) \in [\alphalpha_-,~\alphalpha_- + b] \cup [\alphalpha_+ - b ,~\alphalpha_+]\}, J_2 = \{ z : U_0(z) \in [\alphalpha_- + b,~\alphalpha_+ - b]\}$. Let
\betaegin{align}\lambdabel{cond_beta}
\betaeta
:= - \sup
\left\{
\frac{f'(U_0(z)) + (\varphi'(U_0(z)))_{zz}}{3} : z \in J_1
\right\}.
\varepsilonnd{align}
The following result plays an important role in verifying sub- and super-solution properties.
\betaegin{lem}\lambdabel{Lem_E3_bound}
There exists a constant $\sigmagma_0$ small enough such that for every $0 < \sigmagma < \sigmagma_0$, we have
$$
U_{0z} - \sigmagma (f'(U_0) + (\varphi'(U_0))_{zz}) \geq 3 \sigmagma \betaeta.
$$
\varepsilonnd{lem}
\betaegin{proof}
To show the assertion, it is sufficient to show that there exists $\sigmagma_0$ such that, for all $0 < \sigmagma < \sigmagma_0$,
\betaegin{align}\lambdabel{lem_E3_1}
\frac{U_{0z}}{\sigmagma}
- \left( f'(U_0) + (\varphi'(U_0))_{zz} \right)
\geq 3 \betaeta.
\varepsilonnd{align}
We prove the result on each of the sets $J_1, J_2$.
On the set $J_1$,
note that $U_{0z} > 0$ on $\mathbb{R}$. If $z \in J_1$, for any $\sigmagma > 0$ we have
$$
\frac{U_{0z}}{\sigmagma}
- \left( f'(U_0) + (\varphi'(U_0))_{zz} \right)
> - \sup_{z \in J_1} ( f'(U_0) + (\varphi'(U_0))_{zz} )
= 3 \betaeta.
$$
On the set $J_2$, which is compact
in $\mathbb{R}$, there exist positive constants $c_1, c_2$ such that
\betaegin{align*}
U_{0z} \geq c_1
,~~
| f'(U_0) + (\varphi'(U_0))_{zz} | \leq c_2.
\varepsilonnd{align*}
Therefore, we have
\betaegin{align*}
\frac{U_{0z}}{\sigmagma}
- \left( f'(U_0) + (\varphi'(U_0))_{zz} \right)
\geq
\dfrac{c_1}{\sigmagma} - c_2
\rightarrow
\infty
~\tauext{as}~
\sigmagma \downarrow 0,
\varepsilonnd{align*}
implying \varepsilonqref{lem_E3_1} on $J_2$ for $\sigmagma$ small enough.
\varepsilonnd{proof}
Before we give the rigorous proof that $u^\pm$ are sub- and super-solutions, we first collect the detailed computations needed in the sequel. Recall \eqref{star0}. First note that, with $U_0$ and $U_1$ evaluated at the arguments appearing in $u^+$, we have
\betaegin{align}
\varphi(u^+)
&= \varphi(U_0)
+ (\varepsilon {U_1} + q) \varphi'(U_0)
+ (\varepsilon {U_1} + q)^2
\int_0^1 (1 - s) \varphi''( U_0 + ( \varepsilon {U_1} + q )s ) ds\nonumber\\
f(u^+)
&= f(U_0)
+ (\varepsilon {U_1} + q) f'(U_0)
+ \frac{(\varepsilon {U_1} + q)^2 }{2} f''(\tauhetaeta(x,t)),
\lambdabel{star1}
\varepsilonnd{align}
where $\tauhetaeta$ is a function satisfying $\tauhetaeta(x,t) \in \left(U_0, U_0 + \varepsilon {U_1} + q(t)\right)$. Straightforward computations yield
\betaegin{align}
(u^+)_t
&= U_{0z}
\left(
\frac{d_t + \varepsilon p_t}{\varepsilon}
\right)
+ \varepsilon {U_1}_t
+ {U_1}_z ( d_t + \varepsilon p_t )
+ q_t
\nonumber\\
\Deltalta \varphi(u^+)
&= \nablabla \cdot
\left(
( \varphi(U_0) )_z \frac{\nablabla d}{\varepsilon}
+ {U_1}_z \varphi'(U_0) \nablabla d
+ \varepsilon \nablabla {U_1} \varphi'(U_0)
+ (\varepsilon {U_1} + q)(\varphi'(U_0))_z \frac{\nablabla d}{\varepsilon}
+ \nablabla R
\right)
\nonumber \\
&= ( \varphi(U_0) )_{zz} \frac{|\nablabla d|^2}{\varepsilon^2}
+ (\varphi(U_0))_z \frac{\Deltalta d}{\varepsilon}
\nonumber \\
&+ ({U_1}_z \varphi'(U_0))_z \frac{| \nablabla d|^2}{\varepsilon}
+ {U_1}_z \varphi'(U_0) \Deltalta d
+ 2 \nablabla {U_1}_z \varphi'(U_0) \cdot \nablabla d
+ \nablabla {U_1} (\varphi'(U_0))_z \cdot \nablabla d
+ \varepsilon \Deltalta {U_1} \varphi'(U_0)
\nonumber \\
&+({U_1} \varphi'(U_0)_z)_z \frac{|\nablabla d|^2}{\varepsilon}
+ q (\varphi'(U_0))_{zz} \frac{|\nablabla d|^2}{\varepsilon^2}
+ \nablabla {U_1} (\varphi'(U_0))_z \cdot \nablabla d
\nonumber\\
&+ (\varepsilon {U_1} + q) (\varphi'(U_0))_z \frac{ \Deltalta d }{\varepsilon}
+ \Deltalta R
\lambdabel{star2}
\varepsilonnd{align}
where $R(x,t) = (\varepsilon {U_1} + q)^2 \int_0^1 (1 - s) \varphi''( U_0 + ( \varepsilon U_1 + q )s ) ds$. Define $r(x,t) = \int_0^1 (1 - s) \varphi''( U_0 + ( \varepsilon {U_1} + q )s ) ds$. Then, we have
\betaegin{align}
\Deltalta R(x,t)
&= \nablabla \cdot \nablabla
\Big{[}
\Big{(}
(\varepsilon {U_1})^2 + 2 \varepsilon q {U_1} + q^2
\Big{)}r
\Big{]}
\nonumber\\
&= \nablabla \cdot
\Big{[}
\Big{(}
2 \varepsilon {U_1}
\left(
{U_{1}}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)
+ 2 q
\left(
{U_{1}}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)
\Big{)}
r(x,t)
+
\Big{(}
(\varepsilon{U_1})^2 + 2 \varepsilon q {U_1} + q^2
\Big{)}
\nablabla r(x,t)
\Big{]}
\nonumber\\
&=
\left[
2\left(
{U_1}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)^2
+ 2 \varepsilon {U_1}
\left(
U_{1zz} \frac{| \nablabla d |^2 }{\varepsilon}
+ {U_1}_z \Deltalta d
+ 2\nablabla {U_1}_z \cdot \nablabla d
+ \varepsilon \Deltalta {U_1}
\right)
\right] r(x,t)
\nonumber\\
& + 2q \left(
U_{1zz} \frac{| \nablabla d |^2 }{\varepsilon}
+ {U_1}_z \Deltalta d
+ 2\nablabla {U_1}_z \cdot \nablabla d
+ \varepsilon \Deltalta {U_1}
\right)
r(x,t)
\nonumber\\
& + 2
\Big{[}
2 \varepsilon {U_1}
\left(
{U_{1}}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)
+ 2 q
\left(
{U_{1}}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)
\Big{]}
\nablabla r(x,t)
\nonumber\\
& +
\Big{(}
(\varepsilon {U_1})^2 + 2 \varepsilon q {U_1} + q^2
\Big{)}
\Deltalta r(x,t)
\lambdabel{star3}
\varepsilonnd{align}
where
\betaegin{align*}
\nablabla r(x,t)
&= \int_0^1 (1 - s) \varphi'''( U_0 + ( \varepsilon {U_1} + q) s )
\left(
\left(
U_{0} + \varepsilon U_{1} s
\right)_z
\frac{\nablabla d}{\varepsilon}
+ \varepsilon \nablabla {U_1} s
\right)
ds \\
\Deltalta r(x,t)
&= \int_0^1 (1 - s) \varphi'''( U_0 + ( \varepsilon {U_1} + q) s )
\left(
(U_0 + \varepsilon {U_1} s)_{z} \frac{ \Deltalta d }{\varepsilon}
\right.
\\
&\left.
+
(U_0 + \varepsilon {U_1} s)_{zz} \frac{ | \nablabla d |^2 }{\varepsilon^2}
+
( 2 \nablabla {U_1}_z \cdot \nablabla d
+ \varepsilon \Deltalta {U_1})s
\right)
ds
\\
&+ \int_0^1 (1 - s) \varphi^{(4)}( U_0 + ( \varepsilon {U_1} + q) s )
\left(
(U_{0} + \varepsilon {U_{1}} s)_z
\frac{\nablabla d}{\varepsilon}
+ \varepsilon \nablabla {U_1} s
\right)^2
ds.
\varepsilonnd{align*}
Define $l(x,t), r_i(x,t)$ for $i = 1,2,3$ as follows:
\betaegin{align*}
l(x,t)
&=
U_{1zz} \frac{| \nablabla d |^2 }{\varepsilon}
+ {U_1}_z \Deltalta d
+ 2\nablabla {U_1}_z \cdot \nablabla d
+ \varepsilon \Deltalta {U_1}
\\
r_1(x,t)
&= \left[
2\left(
{U_1}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)^2
+ 2 \varepsilon {U_1}
l(x,t)
\right] r(x,t)
+ 4 \varepsilon {U_1}
\left(
{U_{1}}_z \nablabla d + \varepsilon \nablabla {U_1}
\right) \nablabla r(x,t)
+ ( \varepsilon {U_1} )^2 \Deltalta r(x,t)
\\
r_2(x,t)
&= 2 q l(x,t) r(x,t)
+4 q
\left(
{U_{1}}_z \nablabla d + \varepsilon \nablabla {U_1}
\right)
\nablabla r(x,t)
+ 2 \varepsilon q {U_1} \Deltalta r(x,t)
\\
r_3(x,t)
&= q^2 \Deltalta r(x,t).
\varepsilonnd{align*}
Thus,
\betaegin{align}
\lambdabel{star4}
\Deltalta R=r_1+r_2+r_3.
\varepsilonnd{align}
We have the following properties for $r_i$.
\betaegin{lem}\lambdabel{Lem_remiander_bound}
There exists $C_r > 0$ independent of $\varepsilon$ such that
\betaegin{eqnarray} \lambdabel{ineq_r}
|r_1| \leq C_r, ~~~
|r_2| \leq \frac{q}{\varepsilon} C_r, ~~~
|r_3| \leq \frac{q^2}{\varepsilon^2} C_r.
\varepsilonnd{eqnarray}
\varepsilonnd{lem}
\betaegin{proof}
Note that, by Lemmas \ref{Lem_U0_bound}, \ref{Lem_U1_bound} and (\ref{eqn_cond_elc}), the term $U_a := U_0 + (\varepsilon {U_1} + q)s$ is uniformly bounded. Hence, the terms $\varphi''(U_a), \varphi'''(U_a), \varphi^{(4)}(U_a)$ are uniformly bounded, and in particular $r$ is bounded. By similar reasoning for $\nabla r$ and $\Delta r$, it follows that there exist positive constants $c_\nabla, c_\Delta$ such that
$$
| \nabla r | \leq \frac{c_\nabla}{\varepsilon}, ~~~
| \Delta r | \leq \frac{c_\Delta}{\varepsilon^2}.
$$
\noindent Moreover, by Lemmas \ref{Lem_U0_bound}, \ref{Lem_U1_bound} there exists a positive constant $c_l$ such that
$$
|l(x,t)|
\leq \frac{c_l}{\varepsilon}.
$$
Combining these estimates yields \varepsilonqref{ineq_r}.
\varepsilonnd{proof}
\noindent Let $\sigma$ be a fixed constant satisfying
\betaegin{align}\lambdabel{cond_sigma}
0 < \sigmagma \leq \min \{ \sigmagma_0,\sigmagma_1,\sigmagma_2 \},
\varepsilonnd{align}
where $\sigmagma_0$ is the constant defined in Lemma \ref{Lem_E3_bound}, and $\sigmagma_1$ and $\sigmagma_2$ are given by
\betaegin{align}\lambdabel{cond_sigma2}
\sigmagma_1 = \frac{1}{2(\betaeta + 1)}, ~~~
\sigmagma_2 = \frac{\betaeta}{( F + C_r) (\betaeta + 1)}, ~~~ F = ||f''||_{L^\infty(\alphalpha_- -1, \alphalpha_+ + 1)}.
\varepsilonnd{align}
Note that, since $\sigma \leq \sigma_1$ and by \eqref{eqn_cond_elc}, we have
\begin{align*}
\alpha_- - 1
\leq
u^\pm
\leq
\alpha_+ + 1.
\end{align*}
\betaegin{lem}\lambdabel{Lem_Prop_subsuper}
Let $\beta$ be given by (\ref{cond_beta}) and let $\sigma$ satisfy (\ref{cond_sigma}). Then, there exist $\varepsilon_0 > 0$ and a positive constant $C_p$, which does not depend on $\varepsilon$, such that
\betaegin{align}\lambdabel{eqn_Prop_subsuper}
\betaegin{cases}
\mathcal{L} (u^-)
<
- C_p
<
C_p
<
\mathcal{L}(u^+)
&
\tauext{ in } \overline{D} \tauimes [0,T]
\\
\displaystyle{\frac{\partial u^-}{\partial \nu}
= \frac{\partial u^+}{\partial \nu}}
= 0
&
\tauext{ on } \partial D \tauimes [0,T]
\varepsilonnd{cases}
\varepsilonnd{align}
for every $\varepsilon \in (0, \varepsilon_0)$.
\varepsilonnd{lem}
\betaegin{proof}
In the following, we only show that $u^+$ is a super-solution; one can show that $u^-$ is a sub-solution in a similar way.
Combining the computations above in \varepsilonqref{star1}, \varepsilonqref{star2}, \varepsilonqref{star3} and \varepsilonqref{star4}, we obtain
\betaegin{align*}
\mathcal{L}u^+
&= (u^+)_t - \Deltalta (\varphi(u^+)) - \frac{1}{\varepsilon^2} f(u^+)
\\
&= E_1+E_2+E_3+E_4+E_5+E_6,
\varepsilonnd{align*}
where
\betaegin{align*}
E_1
&=
- \frac{1}{\varepsilon^2}
\left(
(\varphi(U_0))_{zz} | \nablabla d |^2 + f(U_0)
\right)
- \frac{| \nablabla d |^2 - 1}{\varepsilon^2} q(\varphi'(U_0))_{zz}
- \frac{| \nablabla d |^2 - 1}{\varepsilon} ({U_1} \varphi'(U_0))_{zz}
\\
E_2
&=
\frac{1}{\varepsilon} U_{0z} d_t
- \frac{1}{\varepsilon}
\left(
(\varphi(U_0))_z \Deltalta d
+ ({U_1}_z \varphi'(U_0))_z
+ ({U_1}\varphi'(U_0)_z)_z
+{U_1} f'(U_0)
\right)
\\
E_3
&=
[ U_{0z} p_t + q_t ]
- \frac{1}{\varepsilon^2}
\left[
q f'(U_0)
+ q (\varphi'(U_0))_{zz}
+ \frac{q^2}{2} f''(\tauhetaeta)
\right]
- r_3(x,t)
\\
E_4
&=
\varepsilon {U_1}_z p_t
- \frac{q}{\varepsilon}
\Big{[}
(\varphi'(U_0))_z \Deltalta d + {U_1} f''(\tauhetaeta)
\Big{]}
- r_2(x,t)
\\
E_5
&=
\varepsilon {U_1}_t
- \varepsilon \Deltalta {U_1} \varphi'(U_0)
\\
E_6
&=
{U_1}_z d_t
- 2 \nablabla {U_1}_z \varphi'(U_0) \cdot \nablabla d
- 2 \nablabla {U_1} (\varphi'(U_0))_z \cdot \nablabla d
- ( {U_1} \varphi'(U_0))_z \Deltalta d
- r_1(x,t)
- \frac{({U_1})^2}{2} f''(\tauhetaeta).
\varepsilonnd{align*}
\vskip .1cm
{\it Estimate of the term $E_1$.}
Using (\ref{eqn_AsymptExp_U0}) we write $E_1$ in the form
$$
E_1
= -\frac{| \nablabla d |^2 - 1}{\varepsilon^2}
\betaig(
(\varphi(U_0))_{zz} + q(\varphi'(U_0))_{zz}
\betaig)
-\frac{| \nablabla d |^2 - 1}{\varepsilon}
({U_1}\varphi'(U_0))_{zz}.
$$
We only consider the term
$
e_1
:= \displaystyle{\frac{| \nablabla d |^2 - 1}{\varepsilon}}
({U_1}\varphi'(U_0))_{zz}
$
; the other terms can be bounded similarly. In the region where $|d| \leq d_0$, we have $| \nabla d | = 1$, so that $e_1 = 0$. If, however, $|\nabla d| \neq 1$, then $|d| \geq d_0$ and we have
$$
\frac{|({U_1}\varphi'(U_0))_{zz}|}{\varepsilon}
\leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambdambda_1
\left| \frac{d}{\varepsilon} + p(t)
\right|
}
\leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambdambda_1
\left[ \frac{d_0}{\varepsilon} - p(t)
\right]}
\leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambdambda_1
\left[ \frac{d_0}{\varepsilon} - (1 + e^{LT} + K)
\right]}.
$$
Choosing $\varepsilon_0$ small enough such that
$$
\frac{d_0}{2\varepsilon_0}
-
\Big(
1 + e^{LT} + K
\Big)
\geq 0,
$$
we deduce
$$
\frac{|({U_1} \varphi'(U_0))_{zz}|}{\varepsilon}
\leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambdambda_1 \frac{d_0}{2 \varepsilon}}
\rightarrow 0
\tauext{ as }
\varepsilon \downarrow 0.
$$
Thus, $\taufrac{1}{\varepsilon} |({U_1} \varphi'(U_0))_{zz}|$ is uniformly bounded, so that there exists $\hat{C}_2$ independent of $\varepsilon, L $ such that
$$
| e_1 | \leq \hat{C}_2.
$$
Finally, as a consequence, we deduce that there exists $\tauilde{C}_1$ independent of $\varepsilon, L $ such that
\betaegin{align}\lambdabel{eqn_E1_bound}
| E_1 | \leq \tauilde{C}_1.
\varepsilonnd{align}
\vskip .1cm
{\it Estimate of the term $E_2$.}
Using (\ref{eqn_U1_bar}), we write $E_2$ in the form
$$
E_2
= \frac{1}{\varepsilon} U_{0z} d_t
- \frac{1}{\varepsilon} \lambdambda_0 U_{0z} \Deltalta d
= \frac{U_{0z}}{\varepsilon} (d_t - \lambdambda_0 \Deltalta d).
$$
Applying Lemmas \ref{Lem_d_bound}, \ref{Lem_U0_bound} and \ref{Lem_U1_bound} gives
$$
|E_2|
\leq C_d \hat{C}_0 \frac{|d|}{\varepsilon} e^{- \lambdambda _1
\left|
\frac{d}{\varepsilon} + p
\right|
}
\leq C_d \hat{C}_0
\max_{\xi \in \mathbb{R}} | \xi | e^{-\lambdambda_1 |\xi + p|}.
$$
Note that $\max_{\xi \in \mathbb{R}} | \xi | e^{-\lambdambda_1 |\xi + p|} \leq |p| + \frac{1}{\lambdambda_1}$ (cf.\ \cite{Danielle2018}). Thus, there exists $\tauilde{C}_2$ such that
\betaegin{align}\lambdabel{eqn_E2_bound}
|E_2| \leq \tilde{C}_2(1 + e^{Lt}).
\varepsilonnd{align}
\vskip .1cm
{\it Estimate of the term $E_3$.}
Substituting $p_t = \dfrac{q}{\varepsilon^2 \sigmagma}$ and then replacing $q$ by its explicit form (cf.\ \varepsilonqref{star0}) gives
\betaegin{align*}
E_3
&= \frac{q}{\varepsilon^2\sigmagma}
\left[
U_{0z} - \sigmagma ( f'(U_0) + (\varphi'(U_0) )_{zz} ) - \sigmagma q
\left(
\frac{1}{2}f''(\tauhetaeta) + \frac{\varepsilon^2}{q^2} r_3
\right)
\right]
+ q_t
\\
&= \frac{1}{\varepsilon^2}
\left(
\betaeta e^{- \frac{\betaeta t }{\varepsilon^2}} + \varepsilon^2 L e^{Lt}
\right)
\left[
U_{0z} - \sigmagma ( f'(U_0) + (\varphi'(U_0) )_{zz} ) - \sigmagma^2 (\betaeta e^{- \frac{\betaeta t}{\varepsilon^2}} + L \varepsilon^2 e^{Lt})
\left(
\frac{1}{2}f''(\tauhetaeta) + \frac{\varepsilon^2}{q^2} r_3
\right)
\right]
\\
& - \frac{1}{\varepsilon^2} \sigmagma \betaeta^2 e^{ - \frac{\betaeta t}{\varepsilon^2}}
+ \varepsilon^2 \sigmagma L^2 e^{Lt}
\\
&= \frac{1}{\varepsilon^2} \betaeta e^{- \frac{\betaeta t}{\varepsilon^2}}(I - \sigmagma\betaeta)
+ L e^{Lt} [I + \varepsilon^2 \sigmagma L]
\varepsilonnd{align*}
where
$$
I
:= U_{0z} - \sigmagma ( f'(U_0) + (\varphi'(U_0) )_{zz} ) - \sigmagma^2 (\betaeta e^{- \frac{\betaeta t}{\varepsilon^2}} + L \varepsilon^2 e^{Lt})
\left(
\frac{1}{2}f''(\tauhetaeta) + \frac{\varepsilon^2}{q^2} r_3
\right).
$$
Applying Lemma \ref{Lem_E3_bound}, using \varepsilonqref{eqn_cond_elc} and \varepsilonqref{cond_sigma}, yields
\betaegin{eqnarray*}
I &\geq & 3 \sigmagma \betaeta
- \sigmagma \sigmagma_2 \left(\betaeta + L \varepsilon^2 e^{Lt} \right)
\left(
|f''(\tauhetaeta)| + \frac{\varepsilon^2}{q^2} |r_3|
\right)\\
&\geq & 3 \sigmagma \betaeta
- \sigmagma \sigmagma_2 \left(\betaeta +1 \right)
\left(
|f''(\tauhetaeta)| + \frac{\varepsilon^2}{q^2} |r_3|
\right)\\
&\geq& 2 \sigmagma \betaeta,
\varepsilonnd{eqnarray*}
where the last inequality follows from \varepsilonqref{cond_sigma2}. This implies that
\betaegin{align}\lambdabel{eqn_E3_bound}
E_3
\geq \frac{\sigmagma \betaeta^2}{\varepsilon^2} e^{- \frac{\betaeta t }{\varepsilon^2}}
+ 2 \sigmagma \betaeta L e^{Lt}.
\varepsilonnd{align}
\vskip .1cm
{\it Estimate of the term $E_4$.}
Substituting again $p_t = \dfrac{q}{\varepsilon^2 \sigmagma}$, with $q$ in its explicit form \varepsilonqref{star0} gives
\betaegin{align*}
E_4
&= \frac{q}{\varepsilon \sigmagma}
\left( {U_1}_z
- \sigmagma((\varphi'(U_0))_z \Deltalta d +U_1 f''(\tauhetaeta))
- \sigmagma\frac{\varepsilon}{q} r_2
\right)\\
&= \frac{1}{\varepsilon}
\left(
\betaeta e^{-\frac{ \betaeta t }{\varepsilon^2}} + \varepsilon^2 L e^{Lt}
\right)
\left({U_1}_z
- \sigmagma((\varphi'(U_0))_z \Deltalta d + {U_1} f''(\tauhetaeta))
- \sigmagma\frac{\varepsilon}{q} r_2
\right).
\varepsilonnd{align*}
Applying Lemmas \ref{Lem_d_bound}, \ref{Lem_U0_bound}, \ref{Lem_U1_bound} and \ref{Lem_remiander_bound} gives the uniform boundedness of the last factor in parentheses. Thus, there exists a constant $\tilde{C}_4$ such that
\betaegin{align}\lambdabel{eqn_E4_bound}
|E_4|
\leq \tauilde{C}_4
\frac{1}{\varepsilon}
\left(
\betaeta e^{-\frac{\betaeta t }{\varepsilon^2}} + \varepsilon^2 L e^{Lt}
\right).
\varepsilonnd{align}
\vskip .1cm
{\it Estimate of the terms $E_5$ and $E_6$.}
Applying Lemmas \ref{Lem_d_bound}, \ref{Lem_U0_bound} and \ref{Lem_U1_bound}, it follows that there exists $\tilde{C}_5$ such that
\betaegin{align}\lambdabel{eqn_E5_bound}
|E_5| + |E_6|
\leq \tauilde{C}_5.
\varepsilonnd{align}
\vskip .1cm
{\it Combination of the above estimates.}
Collecting the estimates (\ref{eqn_E1_bound}),(\ref{eqn_E2_bound}),(\ref{eqn_E3_bound}),(\ref{eqn_E4_bound}),(\ref{eqn_E5_bound}), we obtain
\betaegin{align*}
\mathcal{L}(u^+)
&\geq
\left[
\frac{\sigmagma \betaeta^2}{\varepsilon^2}
- \tauilde{C}_4 \frac{\betaeta}{\varepsilon}
\right]e^{- \frac{\betaeta t }{\varepsilon^2}}
+
\left[
2 \sigmagma \betaeta L
- \varepsilon \tauilde{C}_4 L
-\tauilde{C}_2
\right]e^{Lt}
- \tauilde{C}_1 - \tauilde{C}_2 - \tauilde{C}_5
\\
& \geq
\left[
\frac{\sigmagma \betaeta^2}{\varepsilon^2}
- \tauilde{C}_4 \frac{\betaeta}{\varepsilon}
\right]e^{- \frac{\betaeta t }{\varepsilon^2}}
+
\left[
\frac{2 \sigmagma \betaeta L}{3}
- \varepsilon \tauilde{C}_4 L
\right]e^{Lt}
\\
&+
\left[
\frac{2 \sigmagma \betaeta L}{3}
-\tauilde{C}_2
\right]e^{Lt}
+
\left[
\frac{2 \sigmagma \betaeta L}{3}
-\tauilde{C}_6
\right]
\varepsilonnd{align*}
where $\tauilde{C}_6 = \tauilde{C}_1 + \tauilde{C}_2 + \tauilde{C}_5$. Choose $\varepsilon_0$ small enough and $L$ large enough so that
\betaegin{align*}
\sigmagma \betaeta
>
3 \tauilde{C}_4 \varepsilon_0
,~
\sigmagma \betaeta L
> 3 \max \{ \tauilde{C}_2, \tauilde{C}_6 \}.
\varepsilonnd{align*}
Then, we deduce that there exists a positive constant $C_p$, independent of $\varepsilon$, such that $\mathcal{L}(u^+) \geq C_p$.
\varepsilonnd{proof}
\subsection{Proof of Theorem \ref{Thm_Propagation}}
\lambdabel{proof_thm_prop}
The proof of Theorem \ref{Thm_Propagation} is divided into two steps: (i) for $K>0$ large enough, we prove that $u^-(x,t) \leq u^\varepsilon(x,t + t^\varepsilon) \leq u^+(x,t)$ for $x \in \overline{D}$, $t \in [0,T - t^\varepsilon]$; and (ii) we employ (i) to show the desired result.
\vskip .1cm
{\it Step 1.} Fix $\sigmagma, \betaeta$ as in (\ref{cond_beta}), (\ref{cond_sigma}).
Without loss of generality, we may assume that
\betaegin{align*}
0 < \varepsilonta < \min \left\{ \varepsilonta_0, \sigmagma\betaeta \right\}.
\varepsilonnd{align*}
Theorem \ref{Thm_Generation} implies the existence of constants $\varepsilon_0$ and $M_0$ such that (\ref{Thm_generation_i})-(\ref{Thm_generation_iii}) are satisfied.
Conditions (\ref{cond_gamma0_normal}) and (\ref{cond_u0_inout}) imply that there exists a positive constant $M_1$ such that
\betaegin{align*}
&\tauext{if } d(x,0) \leq - M_1 \varepsilon, ~~ \tauext{ then } u_0(x) \leq \alphalpha - M_0 \varepsilon,
\\
&\tauext{if } d(x,0) \geq M_1 \varepsilon, ~~ \tauext{ then } u_0(x) \geq \alphalpha + M_0 \varepsilon.
\varepsilonnd{align*}
Hence, we deduce, by applying (\ref{Thm_generation_i}), (\ref{Thm_generation_iii}), that
\betaegin{align*}
u^\varepsilon(x,t^\varepsilon)
\leq H^+(x)
:=
\betaegin{cases}
\alphalpha_+ + \frac{\varepsilonta}{4}
&
~~
\tauext{ if }
d(x,0) \geq - M_1 \varepsilon
\\
\alphalpha_- + \frac{\varepsilonta}{4}
&
~~
\tauext{ if }
d(x,0) < - M_1 \varepsilon.
\varepsilonnd{cases}
\varepsilonnd{align*}
Also, by applying (\ref{Thm_generation_i}), (\ref{Thm_generation_ii}),
\betaegin{align*}
u^\varepsilon(x,t^\varepsilon)
\geq H^-(x)
:=
\betaegin{cases}
\alphalpha_+ - \frac{\varepsilonta}{4}
&
~~
\tauext{ if }
d(x,0) > M_1 \varepsilon
\\
\alphalpha_- - \frac{\varepsilonta}{4}
&
~~
\tauext{ if }
d(x,0) \leq M_1 \varepsilon.
\varepsilonnd{cases}
\varepsilonnd{align*}
Next, we fix a sufficiently large constant $K$ such that
\betaegin{align*}
U_0(M_1 - K) \leq \alphalpha_- + \frac{\varepsilonta}{4}
~~~\tauext{and}
~~~
U_0(- M_1 + K) \geq \alphalpha_+ - \frac{\varepsilonta}{4}.
\varepsilonnd{align*}
For such a constant $K$, Lemma \ref{Lem_Prop_subsuper} implies the existence of constants $\varepsilon_0$ and $L$ such that the inequalities in (\ref{eqn_Prop_subsuper}) hold.
We claim that
\betaegin{align}\lambdabel{thm_prop_proof1}
u^-(x,0) \leq H^-(x)
\leq
H^+(x)
\leq
u^+(x,0).
\varepsilonnd{align}
We only prove the last inequality since the first inequality can be proved similarly. By Lemma \ref{Lem_U1_bound}, we have
$| {U_1} | \leq \hat{C}_1$. Thus, we can choose $\varepsilon_0$ small enough so that, for $\varepsilon\in (0,\varepsilon_0)$, we have $ \varepsilon \hat{C}_1 \leq \dfrac{\sigmagma \betaeta}{4}$. Then, noting \varepsilonqref{star0},
\betaegin{align*}
u^+(x,0)
&\geq
U_0
\left(
\frac{d(x,0) + \varepsilon p(0)}{\varepsilon}
\right)
-
\varepsilon \hat{C}_1 + \sigmagma \betaeta + \varepsilon^2 \sigmagma L
\\
&>
U_0
\left(
\frac{d(x,0)}{\varepsilon} + K
\right)
+ \frac{3}{4} \varepsilonta.
\varepsilonnd{align*}
In the set $\{ x \in D : d(x,0) \geq - M_1\varepsilon \}$, the inequalities above, and the fact that $U_0$ is an increasing function imply
\betaegin{align*}
u^+(x,0)
>
U_0(- M_1 + K) + \frac{3}{4} \varepsilonta
\geq
\alphalpha_+ + \frac{\varepsilonta}{2}
> H^+(x).
\varepsilonnd{align*}
Moreover, since $U_0 \geq \alphalpha_-$ in the set $\{ x \in D : d(x,0) < - M_1\varepsilon \}$, we have
\betaegin{align*}
u^+(x,0)
> \alphalpha_- + \frac{3}{4} \varepsilonta
> H^+(x).
\varepsilonnd{align*}
Thus, we have proved the last inequality in \eqref{thm_prop_proof1}.
The inequalities \eqref{thm_prop_proof1} and Lemma \ref{Lem_Prop_subsuper} now allow us to apply the comparison principle Lemma \ref{lem_comparison}, so that we have
\betaegin{align}\lambdabel{compprinciple}
u^-(x,t) \leq u^\varepsilon(x,t + t^\varepsilon) \leq u^+(x,t)
~~ \tauext{ for } ~~
x \in \overline{D}, t \in [0,T - t^\varepsilon].
\varepsilonnd{align}
\vskip .1cm
{\it Step 2.} Choose $C > 0$ so that
\betaegin{align*}
U_0(C - e^{LT} - K) \geq \alphalpha_+ - \frac{\varepsilonta}{2}
~~ \tauext{and} ~~
U_0( - C + e^{LT} + K) \leq \alphalpha_- + \frac{\varepsilonta}{2}.
\varepsilonnd{align*}
Then, we deduce from \varepsilonqref{compprinciple}, noting \varepsilonqref{star0}, that
\betaegin{align*}
&\tauext{if}~
d(x,t) \geq \varepsilon C,
~\tauext{then }
u^\varepsilon(x,t + t^\varepsilon)
\geq \alphalpha_+ - \varepsilonta
\\
&\tauext{if}~
d(x,t) \leq - \varepsilon C,
~\tauext{then }
u^\varepsilon(x,t + t^\varepsilon)
\leq \alphalpha_- + \varepsilonta
\varepsilonnd{align*}
and since $\alpha_- - \eta$ and $\alpha_+ + \eta$ are, respectively, sub- and super-solutions of $(P^\varepsilon)$, we conclude that
\betaegin{align*}
u^\varepsilon(x,t + t^\varepsilon) \in [\alphalpha_- - \varepsilonta, \alphalpha_+ + \varepsilonta]
\varepsilonnd{align*}
for all $(x,t) \in D \tauimes [0, T - t^\varepsilon], \varepsilon \in (0,\varepsilon_0)$.
\qed
\betaegin{rmk}\lambdabel{rmk_thm13}
These sub- and super-solutions guarantee that $u^\varepsilon \simeq \alpha_+$ (respectively, $u^\varepsilon \simeq \alpha_-$) for $d(x,t) \geq c$ (respectively, $d(x,t) \leq -c$), with $t > \rho t^\varepsilon$, $\rho > 1$ and $\varepsilon > 0$ small enough. In fact, by the definition of $q(t)$, we expect
\betaegin{align*}
\varepsilon U_1 \pm q(t)
= \mathcal{O}(\varepsilon)
\varepsilonnd{align*}
for $t > (\rho - 1) t^\varepsilon$. Also, by Lemma \ref{Lem_U0_bound}, we expect
\begin{align*}
0 < \alpha_+ - U_0(z) < \tilde{c} \varepsilon
~\text{for}~
z > \dfrac{c}{\varepsilon}
,~~
0 < U_0(z) - \alpha_- < \tilde{c} \varepsilon
~\text{for}~
z < - \dfrac{c}{\varepsilon}.
\end{align*}
These estimates yield that there exists a positive constant $c'$ such that
\betaegin{align*}
|u^\varepsilon(x,t) - \alphalpha_+| \leq c'\varepsilon
~\tauext{for}~
d(x,t) > c
,~~
|u^\varepsilon(x,t) - \alphalpha_-| \leq c'\varepsilon
~\tauext{for}~
d(x,t) < - c
\varepsilonnd{align*}
for $t > \rho t^\varepsilon.$
\varepsilonnd{rmk}
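To illustrate Theorems \ref{Thm_Generation} and \ref{Thm_Propagation} concretely, the following one-dimensional finite-difference sketch may be helpful. It is an illustration only, not part of the proofs: the explicit Euler scheme, the data $f(u) = u - u^3$, $\varphi(u) = u + u^3/3$ and all numerical parameters are assumptions. Starting from a smooth initial datum, a sharp transition layer between $\alpha_- = -1$ and $\alpha_+ = 1$ forms within a time of order $\varepsilon^2 |\ln \varepsilon|$; in one space dimension the limit interface is stationary (the curvature term vanishes), so afterwards the layer barely moves.
\begin{verbatim}
import numpy as np

# Explicit finite-difference sketch of (P^eps) in one space dimension
# (illustrative assumptions: f(u) = u - u^3, phi(u) = u + u^3/3, Neumann data).
f   = lambda u: u - u**3
phi = lambda u: u + u**3 / 3.0

eps, L_dom, nx = 0.05, 1.0, 200
x  = np.linspace(-L_dom, L_dom, nx)
dx = x[1] - x[0]
dt = 0.2 * min(dx**2 / 2.0, eps**2)          # crude stability restriction
u  = 0.3 * np.cos(np.pi * x / L_dom)         # smooth initial datum crossing alpha = 0

t_gen = eps**2 * abs(np.log(eps))            # expected generation time scale
for step in range(int(5 * t_gen / dt)):
    v = phi(u)
    lap = np.empty_like(u)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    lap[0]    = 2.0 * (v[1] - v[0]) / dx**2          # homogeneous Neumann
    lap[-1]   = 2.0 * (v[-2] - v[-1]) / dx**2
    u = u + dt * (lap + f(u) / eps**2)

print("after ~5 generation times, u ranges in "
      f"[{u.min():.3f}, {u.max():.3f}] with a sharp layer near x = +-{L_dom/2:.2f}")
\end{verbatim}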
\section{Proof of Theorem \ref{thm_asymvali}}\lambdabel{section_5}
We now introduce the concept of an eternal solution. A solution of an evolution equation is called \tauextit{eternal} if it is defined for all positive and negative times. In our problem, we study the nonlinear diffusion problem
\betaegin{align}\lambdabel{eqn_entire}
w_\tauau
= \Deltalta \varphi(w) + f(w),
~~ ((z', z^{(N)}), \tauau) \in {\mathbb{ R}}^N \tauimes {\mathbb{ R}},
\varepsilonnd{align}
{
where $z'\in {\mathbb{ R}}^{N-1}$ and $z^{(N)}\in {\mathbb{ R}}$.}
In order to prove Theorem \ref{thm_asymvali}, we first present two lemmas.
\betaegin{lem}\lambdabel{lem_locregularity}
Let $S$ be a domain of $\mathbb{R}^N \tauimes {\mathbb{ R}}$ and let
$u$ be a bounded function on $S$ satisfying
\betaegin{align} \lambdabel{eq:NAC}
u_t = \Deltalta \varphi(u) + f(u), ~~ (x,t) \in S,
\varepsilonnd{align}
where $\varphi, f$ satisfy conditions \varepsilonqref{cond_f_bistable}, \varepsilonqref{cond_phi'_bounded}.
Then, for any smooth bounded subset $S' \subset S$ separated from $\partial S$ by a positive distance $\tilde{d}$, we have
\betaegin{align} \lambdabel{eq:C21}
\Vert u \Vert_{C^{2 + \tauhetaeta, 1 + \tauhetaeta/2}(\overline{S'})}
\leq C',
\varepsilonnd{align}
for any $0 < \theta < 1$, where $C' = C'(\|u\|_{L^\infty(S)})$ is a positive constant which depends on
$\|u\|_{L^\infty(S)}$, $\varphi, f$, $\tilde{d}, \theta$ and the size of $S'$, and where
\betaegin{align*}
\Vert u \Vert_{C^{k + \tauhetaeta, k' + \tauhetaeta'}(\overline{S'})}
&=
\Vert u \Vert_{C^{k,k'}(\overline{S'})}
+ \sum_{i,j = 1}^N
\sup_{(x,t),(y,t) \in S', x \neq y}
\left\{
\dfrac{|D^k_x u(x,t) - D^k_x u(y,t)|}{|x - y|^\tauhetaeta}
\right\}
\\
&
+
\sup_{(x,t),(x,t') \in S', t \neq t'}
\left\{
\dfrac{|D^{k'}_tu(x,t) - D^{k'}_tu(x,t')|}{|t - t'|^{\tauhetaeta'}}
\right\}
\varepsilonnd{align*}
where $k, k'$ are non-negative integers and $0 < \tauhetaeta, \tauhetaeta' < 1$.
\varepsilonnd{lem}
\betaegin{proof}
Since $S'$ is separated from $\partial S$ by a positive distance, we can find subsets $S_1, S_2 $ such that $S' \subset S_2 \subset S_1 \subset S$ and such that $\partial S, \partial S', \partial S_i$ are separated by a positive distance less than $\tauilde{d}$.
By condition \varepsilonqref{cond_phi'_bounded} the regularity of $u(x,t)$ is the same as the regularity of $v(x,t) = \varphi(u(x,t))$. Note that by \varepsilonqref{eq:NAC} $v$ satisfies
\betaegin{align*}
v_t
=
\varphi'(\varphi^{-1}(v)) [ \Deltalta v +g(v) ]
~,
g(s) = f(\varphi^{-1}(s))
\varepsilonnd{align*}
on $S$.
By Theorem 3.1 p.\ 437-438 of \cite{LOVA1988}, there exists a positive constant $c_1$ such that
\betaegin{align*}
| \nablabla v |
\leq c_1
\tauext{ in }
S_1
\varepsilonnd{align*}
where $c_1$ depends only on $N, \varphi, ||u||_{L^\infty(S)}$ and the distance between $\partial S$ and $S_1$. This, together with Theorem 5, p.\ 122 of \cite{Krylov2008}, implies that
\betaegin{align*}
\Vert v \Vert_{W^{2,1}_p(S_2)}
\leq
c_2(\Vert v \Vert_{L^p(S_1)} + \Vert \varphi'(\varphi^{-1}(v)) g(v) \Vert_{L^p(S_1)})
\varepsilonnd{align*}
for any $p > {N + 2}$ where $c_2$ is a constant that depends on $c_1, p, N, \varphi$. With this, by fixing $p$ large enough, the Sobolev embedding theorem in chapter 2, section 3 of \cite{LOVA1988} yields
\betaegin{align*}
\Vert v \Vert_{C^{1 + \tauhetaeta, (1 + \tauhetaeta)/2}(S_2)}
\leq c_3 \Vert v \Vert_{W^{2,1}_p(S_2)}
\varepsilonnd{align*}
where $0 < \tauhetaeta < 1 - \frac{N + 2}{p}$ and $c_3$ depends on $c_2$ and $p$. This implies that $\varphi'(\varphi^{-1}(v)), g(v)$ are bounded uniformly in $C^{1 + \tauhetaeta, (1 + \tauhetaeta)/2}(S_2)$. Therefore, by Theorem 10.1 p 351-352 of \cite{LOVA1988} we obtain
\betaegin{align*}
\Vert v \Vert_{C^{2 + \tauhetaeta, 1 + \tauhetaeta/2}(S')}
\leq c_4 \Vert v \Vert_{C^{1 + \tauhetaeta, (1 + \tauhetaeta)/2}(S_2)}
\varepsilonnd{align*}
where $c_4$ depends on $c_2, f$ and $\varphi$.
\varepsilonnd{proof}
\betaegin{rmk}\lambdabel{rmk_locreg}
Lemma \ref{lem_locregularity} implies a uniform $C^{2,1}$ bound for the eternal solution $w$ on the whole space. This can be derived as follows: Let
\betaegin{align*}
S_{(a,b)} = \{(x,t) \in {\mathbb{ R}}^N \tauimes {\mathbb{ R}}, |x - a|^2 + (t - b)^2 \leq 2 \},~
S'_{(a,b)} = \{(x,t) \in {\mathbb{ R}}^N \tauimes {\mathbb{ R}}, |x - a|^2 + (t - b)^2 \leq 1 \}
\varepsilonnd{align*}
where ${(a,b)} \in {\mathbb{ R}}^N \times {\mathbb{ R}}$. Then, Lemma \ref{lem_locregularity} implies uniform $C^{2,1}$ boundedness of $w$ within $S'_{(a,b)}$, where the upper bound is fixed by \eqref{eq:C21}. Since this upper bound is independent of the choice of $(a,b)$, we obtain a uniform $C^{2,1}$ bound for $w$ on the whole space.
\varepsilonnd{rmk}
Next, we present a result inspired by a similar one in \cite{BH2007}.
\betaegin{lem}\lambdabel{lem_entire}
Let $w((z',z^{(N)}),\tauau)$ be a bounded eternal solution of \varepsilonqref{eqn_entire} satisfying
\betaegin{align}\lambdabel{lem_entire_1}
\liminf_{z^{(N)} \rightarrow - \infty} \inf_{z' \in {\mathbb{ R}}^{N - 1}, \tauau \in {\mathbb{ R}}} w((z',z^{(N)}),\tauau) = \alphalpha_-
,~~
\limsup_{z^{(N)} \rightarrow \infty} \sup_{z' \in {\mathbb{ R}}^{N - 1}, \tauau \in {\mathbb{ R}}} w((z',z^{(N)}),\tauau) = \alphalpha_+,
\varepsilonnd{align}
where $z' = (z^{(1)}, z^{(2)}, \cdots, z^{(N - 1)})$. Then, there exists a constant $z^* \in {\mathbb{ R}}$ such that
\betaegin{align*}
w((z',z^{(N)}),\tauau) = U_0(z^{(N)} - z^*).
\varepsilonnd{align*}
\varepsilonnd{lem}
\betaegin{proof}
We prove the lemma in two steps. First we show $w$ is an increasing function with respect to the $z^{(N)}$ variable. Then, we prove that $w$ only depends on $z^{(N)}$, which means that there exists a function $\psi : {\mathbb{ R}} \rightarrow (\alphalpha_-, \alphalpha_+)$ such that
\betaegin{align*}
w((z',z^{(N)}),\tauau) = \psi(z^{(N)}), \ \ ((z',z^{(N)}), \tauau) \in {\mathbb{ R}}^N \tauimes {\mathbb{ R}}.
\varepsilonnd{align*}
From the increasing property with respect to $z^{(N)}$, this allows us to identify $\psi$ as the unique standing wave solution $U_0$ of the problem \varepsilonqref{eqn_AsymptExp_U0} up to a translation factor $z^*$.
We deduce from \varepsilonqref{lem_entire_1} that there exist $A > 0$ and $\varepsilonta \in (0, \varepsilonta_0)$ such that
\betaegin{align}\lambdabel{lem_entire_2}
\betaegin{cases}
\alphalpha_+ - \varepsilonta
\leq
w((z',z^{(N)}),\tauau)
\leq
\alphalpha_+ + \varepsilonta
,
&~~ z^{(N)} \geq A
\\
\alphalpha_- - \varepsilonta
\leq
w((z',z^{(N)}),\tauau) \leq \ \alphalpha_- + \varepsilonta
,
&~~ z^{(N)} \leq - A
\varepsilonnd{cases}
\varepsilonnd{align}
where $\varepsilonta_0$ is defined in \varepsilonqref{cond_mu_eta0}.
Let $\tauilde{\tauau} \in {\mathbb{ R}}, \rho \in {\mathbb{ R}}^{N - 1}$ be arbitrary. Define
\betaegin{align*}
w^s((z',z^{(N)}),\tauau) := w((z' + \rho, z^{(N)} + s), \tauau + \tauilde{\tauau})
\varepsilonnd{align*}
where $s \in {\mathbb{ R}}$. Fix $\chi \geq 2 A$ and define
\betaegin{align}\lambdabel{lem_entire_9}
b^*
:=
\inf
\left\{
b > 0: \varphi(w^\chi) + b \geq \varphi(w) ~ \tauext{in}~ {\mathbb{ R}}^N \tauimes {\mathbb{ R}}
\right\}.
\varepsilonnd{align}
We will prove that $b^* = 0$, which will imply that $w^\chi \geq w$ in ${\mathbb{ R}}^N \times {\mathbb{ R}}$, since $\varphi$ is a strictly increasing function. To see this, assume by contradiction that $b^* > 0$.
Note that, by \eqref{lem_entire_1} and \eqref{lem_entire_2}, we have
\begin{align}\label{lem_entire_3}
w^\chi \geq \alpha_+ - \eta > \alpha_- + \eta \geq w
~\text{ if }~
z^{(N)} = -A,
\qquad
\varphi(w^\chi) - \varphi(w) \to 0
~\text{ as }~
z^{(N)} \to \pm \infty.
\end{align}
Let $E = \{ (x,t) \in {\mathbb{ R}}^N \times {\mathbb{ R}} : \varphi({w}) - \varphi(w^\chi) > 0\}$. Define a function $Z$ on $E$ as follows
\begin{align*}
Z((z', z^{(N)}), \tau)
&:= e^{-C_Z \tau}[\varphi({w}) - \varphi(w^\chi)]((z',z^{(N)}),\tau),
\\
C_Z
&:=
\max
\left(
\sup_{(x,t) \in {E}}
\dfrac{[\varphi'({w}) - \varphi'(w^\chi)] \Delta \varphi({w})
+ [\varphi'({w})f({w}) - \varphi'(w^\chi)f(w^\chi)]}{\varphi({w}) - \varphi(w^\chi)}
,
0
\right) \geq 0.
\end{align*}
Note that $C_Z$ is bounded, since $w^\chi$ is bounded uniformly in $C^{2,1}(E)$ by Remark \ref{rmk_locreg},
and in view of \eqref{cond_f_bistable} and \eqref{cond_phi'_bounded} we have
\betaegin{align}\lambdabel{lem_entire_10}
\lim_{x \tauo y}
\dfrac{\varphi'(x) - \varphi'(y)}{\varphi(x) - \varphi(y)}
= \dfrac{\varphi''(y)}{\varphi'(y)}
< \infty,
\lim_{x \tauo y}
\dfrac{\varphi'(x)f(x) - \varphi'(y)f(y)}{\varphi(x) - \varphi(y)}
=
\dfrac{(\varphi'f)'(y)}{\varphi'(y)}
< \infty.
\varepsilonnd{align}
Direct computations give
\begin{align*}
Z_\tau - \varphi'(w^\chi) \Delta Z
&=
e^{-C_Z\tau}\varphi'({w})[\Delta \varphi({w}) + f({w})]
-
e^{-C_Z\tau}\varphi'(w^\chi)[\Delta \varphi(w^\chi) + f(w^\chi)]
\\
&- C_Z Z
- e^{-C_Z\tau}\varphi'(w^\chi) [\Delta \varphi({w}) - \Delta \varphi(w^\chi)]
\\
&=
\Big(
[\varphi'({w}) - \varphi'(w^\chi)]\Delta \varphi({w})
+[\varphi'({w})f({w}) - \varphi'(w^\chi)f(w^\chi)]
\Big)
e^{-C_Z\tau}
- C_Z Z
\\
&\leq
C_Z Z - C_Z Z = 0
\end{align*}
in $E$.
Then, the maximum principle \cite{PW1984} Theorem 5 p.173 yields that the maximum of $Z$ is located at the boundary of $E$. By the definition of $E$, $Z = 0$ on the boundary of $E$ which implies $Z \leq 0$ in $E$. This contradicts the definition of $E$. Thus, we conclude that $b^* = 0$.
Next, we prove that $w \leq w^\chi$ for any $\chi > 0$ (see \eqref{lem_entire_7} below). For this purpose, define
\begin{align}\label{lem_entire_6}
\chi^* := \inf\left \{ \chi \in \mathbb{R} : w^{\tilde{\chi}} \geq w ~ \text{for all }~ \tilde{\chi} \geq \chi \right \}.
\end{align}
Then, our goal can be obtained by proving that $\chi^* \leq 0$. By the previous argument, we already know that $\chi^* \leq 2A$.
Since $w((z', - \infty), \tau) = \alpha_-$, it follows from \eqref{lem_entire_1} that $\chi^* > -\infty$: otherwise we would have
\begin{align*}
\alpha_-
=
w^{-\infty}((z',z^{(N)}),\tau)
\geq
w,
\end{align*}
a contradiction since $w((z', + \infty), \tau) = \alpha_+ > \alpha_-$. Thus, we conclude $- \infty < \chi^* \leq 2A$.
Assume that $\chi^* > 0$, and define $E' := \{((z',z^{(N)}), \tau) \in \mathbb{R}^N \times \mathbb{R} : ~ |z^{(N)}| \leq A \}$. If $\inf_{E'} (w^{\chi^*} - w) > 0$, then there exists $\delta_0 \in (0, \chi^*)$ such that $w \leq w^{\chi^* - \delta}$ in $E'$ for all $\delta \in (0, \delta_0)$. Since $w \leq w^{\chi^* - \delta}$ on $\partial E'$, we deduce from a similar argument as above that $w \leq w^{\chi^* - \delta}$ in $\{((z',z^{(N)}), \tau) \in \mathbb{R}^N \times \mathbb{R} : ~ |z^{(N)}| \geq A \}$. This contradicts the definition of $\chi^*$ in \eqref{lem_entire_6}, so that $\inf_{E'} (w^{\chi^*} - w) = 0$. Thus, there exist a sequence $(({z}'_n, {z}_n), {t}_n)$ in $E'$ and ${z}_\infty \in [-A, A]$ such that
\begin{align*}
w(({z}'_n, {z}_n), {t}_n)
- w^{\chi^*}(({z}'_n, {z}_n), {t}_n) \rightarrow 0
,~~
{z}_n \rightarrow {z}_\infty
~\text{as}~
n \rightarrow\infty.
\end{align*}
Define ${w}_n((z',z^{(N)}),\tau) := w((z' + {z}'_n, z^{(N)}), \tau + {t}_n)$. Since $w_n$ is bounded uniformly in $C^{2 + \theta, 1 + \theta/2}(\mathbb{R}^N \times \mathbb{R})$ by Lemma \ref{lem_locregularity}, $w_n$ converges (along a subsequence) in $C^{2,1}_{loc}$ to a solution $\tilde{w}$ of \eqref{eqn_entire}. Define $\tilde{Z}$ by
\begin{align*}
\tilde{Z}((z',z^{(N)}),\tau)
:=
[\varphi(\tilde{w}^{\chi^*}) - \varphi(\tilde{w})]((z',z^{(N)}),\tau).
\end{align*}
Since $\varphi$ is strictly increasing, by \eqref{lem_entire_6} we have
\begin{align}\label{lem_entire_11}
\begin{cases}
\tilde{Z}((z',z^{(N)}),\tau)
\geq 0
~ \text{in}
~ \mathbb{R}^N \times \mathbb{R},
\\[4pt]
\tilde{Z}((0, {z}_\infty), 0)
= \displaystyle\lim_{n\rightarrow\infty}
[{\varphi({w}^{\chi^*}_n)}
-
{\varphi({w}_n)}]
((0,{z}_n),0)
=
\lim_{n \rightarrow \infty}
[{\varphi(w^{\chi^*})}
-{\varphi(w)}]
(({z}'_n, {z}_n), {t}_n)
= 0.
\end{cases}
\end{align}
Then, direct computation gives
\begin{align*}
\tilde{Z}_\tau - \varphi'(\tilde{w}^{\chi^*}) \Delta \tilde{Z}
&=
\varphi'(\tilde{w}^{\chi^*})[\Delta \varphi(\tilde{w}^{\chi^*}) + f(\tilde{w}^{\chi^*})]
-
\varphi'(\tilde{w})[\Delta \varphi(\tilde{w}) + f(\tilde{w})]
\\
&\quad
- \varphi'(\tilde{w}^{\chi^*}) [\Delta \varphi(\tilde{w}^{\chi^*}) - \Delta \varphi(\tilde{w})]
\\
&=
[\varphi'(\tilde{w}^{\chi^*}) - \varphi'(\tilde{w})]\Delta \varphi(\tilde{w})
+[\varphi'(\tilde{w}^{\chi^*})f(\tilde{w}^{\chi^*}) - \varphi'(\tilde{w})f(\tilde{w})].
\end{align*}
If $\tilde{Z} = 0$, we obtain $\tilde{Z}_\tau - \varphi'(\tilde{w}^{\chi^*}) \Delta \tilde{Z} = 0$. If $\tilde{Z} > 0$, we obtain
\begin{align*}
\tilde{Z}_\tau - \varphi'(\tilde{w}^{\chi^*}) \Delta \tilde{Z}
&=
\left(
\dfrac{[\varphi'(\tilde{w}^{\chi^*}) - \varphi'(\tilde{w})]\Delta \varphi(\tilde{w})+[\varphi'(\tilde{w}^{\chi^*})f(\tilde{w}^{\chi^*}) - \varphi'(\tilde{w})f(\tilde{w})]}{\varphi(\tilde{w}^{\chi^*}) - \varphi(\tilde{w})}
\right)\tilde{Z}
\\
&\geq
- C \tilde{Z},
\end{align*}
for some positive constant $C$, where the last inequality follows from \eqref{lem_entire_10} and the fact that $\Delta\varphi(\tilde{w})$ is uniformly bounded in the whole space.
Since by \eqref{lem_entire_11} $\tilde{Z}$ attains a non-positive minimum at $((0,{z}_\infty),0)$, we deduce from the maximum principle applied on the domain $\mathbb{R}^N \times (-\infty, 0]$ that $\tilde{Z} = 0$ for all $(z',z^{(N)}) \in \mathbb{R}^N,~\tau \leq 0$. Hence, $\tilde{Z} \equiv 0$ in $\mathbb{R}^N \times \mathbb{R}$. This implies that
\begin{align*}
\tilde{w}((0,0),0)
=
\tilde{w}((\rho, \chi^*), \tilde{\tau})
=
\tilde{w}((2\rho, 2\chi^*), 2\tilde{\tau})
=
\cdots
=
\tilde{w}((k\rho, k \chi^*), k \tilde{\tau})
\end{align*}
for all $k \in \mathbb{Z}$, contradicting the fact that $\tilde{w}((k\rho, k \chi^*), k \tilde{\tau}) \rightarrow \alpha_+$ as $k \rightarrow \infty$ and $\tilde{w}((k\rho, k \chi^*), k \tilde{\tau}) \rightarrow \alpha_-$ as $k \rightarrow - \infty$.
Thus, we have $\chi^* \leq 0$, and therefore
\begin{align}\label{lem_entire_7}
w((z',z^{(N)}),\tau) \leq
w^0((z',z^{(N)}),\tau)
= w((z' + \rho, z^{(N)}), \tau + \tilde{\tau})
\end{align}
holds for any $\rho \in \mathbb{R}^{N - 1}, \tilde{\tau} \in \mathbb{R}$.
We now show that $w$ only depends on $z^{(N)}$. Suppose, to the contrary, that $w$ depends on $z'$ or $\tau$. Then, there exist $z'_1, z'_2 \in \mathbb{R}^{N - 1}, z^{(N)} \in \mathbb{R}$ and $t'_1, t'_2 \in \mathbb{R}$ such that
\begin{align}\label{lem_entire_8}
w((z'_1, z^{(N)}), t_1') < w((z'_2, z^{(N)}), t'_2).
\end{align}
Then, by letting $z' = z_2',~\rho = z_1' - z_2'$ and $\tau = t_2',~ \tilde{\tau} = t_1' - t_2'$ in the inequality \eqref{lem_entire_7}, we deduce
\begin{align*}
w((z_2', z^{(N)}) ,t_2') \leq w((z_1', z^{(N)}), t_1'),
\end{align*}
contradicting \eqref{lem_entire_8}. This implies that $w$ only depends on $z^{(N)}$, namely $w((z',z^{(N)}),\tau) = \psi(z^{(N)})$.
Finally, from the definition of $\chi^*$, we have that $\psi$ is increasing.
\end{proof}
{\bf Proof of Theorem \ref{thm_asymvali}.}
We first prove $(i)$. Recall that $d(x,t)$ is the {cut-off} signed distance function to the interface $\Gamma_t$ moving according to equation \eqref{eqn_motioneqn}, and $d^\varepsilon(x,t)$ is the signed distance function corresponding to the interface
\begin{align*}
\Gamma_t^\varepsilon := \{ x \in D : ~ u^\varepsilon(x,t) = \alpha \}.
\end{align*}
Let $T_1$ be an arbitrary constant such that $\frac{T}{2} < T_1 < T$. Assume by contradiction that \eqref{thm_asymvali_i} does not hold. Then, there exist $\eta > 0$ and sequences $\varepsilon_k \downarrow 0,~ t_k \in [\rho t^{\varepsilon_k}, T],~ x_k \in D$ such that $\alpha_+ - \eta > \alpha > \alpha_- + \eta$ and
\begin{align}\label{eqn_asymproof_1}
\left|
u^{\varepsilon_k}(x_k, t_k)
- U_0
\left(
\dfrac{d^{\varepsilon_k}(x_k, t_k)}{\varepsilon_k}
\right)
\right|
\geq \eta .
\end{align}
For the inequality \eqref{eqn_asymproof_1} to hold, by Theorem \ref{Thm_Propagation} and $U_0(\pm \infty) = \alpha_\pm$, we need
\begin{align*}
d^{\varepsilon_k}(x_k, t_k) = \mathcal{O}(\varepsilon_k).
\end{align*}
With these observations, and also by Theorem \ref{Thm_Propagation}, there exists a positive constant $\tilde{C}$ such that
\begin{align}\label{eqn_asymproof_2}
| d(x_k,t_k) | \leq \tilde{C} \varepsilon_k
\end{align}
for $\varepsilon_k$ small enough.
If $x_k \in \Gamma^{\varepsilon_k}_{t_k}$, then the left-hand side of \eqref{eqn_asymproof_1} vanishes, which contradicts this inequality; hence $u^{\varepsilon_k}(x_k,t_k) \neq \alpha$. Since the sign of $u^{\varepsilon_k}(x_k,t_k) - \alpha$ can either be positive or negative, by extracting a subsequence if necessary we may assume that
\begin{align}\label{eqn_asymproof_7}
u^{\varepsilon_k}(x_k,t_k) - \alpha > 0~~ \text{for all}~~ k \in \mathbb{N},
\end{align}
which is equivalent to
\begin{align*}
d^{\varepsilon_k}(x_k,t_k) > 0 ~~ \text{for all}~~ k \in \mathbb{N}.
\end{align*}
By \eqref{eqn_asymproof_2}, each $x_k$ has a unique orthogonal projection $p_k := p(x_k,t_k) \in \Gamma_{t_k}$. Let $y_k$ be a point on $\Gamma^{\varepsilon_k}_{t_k}$ that has the smallest distance from $x_k$, and therefore $u^{\varepsilon_k}(y_k,t_k) = \alpha$. Moreover, we have
\begin{align}\label{eqn_asymproof_3}
u^{\varepsilon_k}(x, t_k) > \alpha
~~ \text{if}
~~ \Vert x - x_k \Vert < \Vert y_k - x_k \Vert.
\end{align}
We now rescale $u^{\varepsilon_k}$ around $(p_k,t_k)$. Define
\begin{align}\label{eqn_asymproof_10}
w^k(z, \tau)
:=
u^{\varepsilon_k}(p_k + \varepsilon_k \mathcal{R}_kz, t_k + \varepsilon^2_k \tau),
\end{align}
where $\mathcal{R}_k$ is an orthogonal matrix in $SO(N,\mathbb{R})$ that rotates the $z^{(N)}$ axis, namely the vector $(0, \cdots ,0, 1) \in \mathbb{R}^N$, onto the unit normal vector to $\Gamma_{t_k}$ at $p_k \in \Gamma_{t_k}$, that is $\dfrac{x_k - p_k}{\Vert x_k - p_k \Vert}$. To prove our result, we use Theorem \ref{Thm_Propagation}, which gives information about $u^{\varepsilon_k}$ for $t_k + \varepsilon^2_k \tau \geq t^{\varepsilon_k}$.
Then, since $\Gamma_t$ is separated from $\partial D$ by some positive distance, $w^k$ is well-defined at least on the box
\begin{align*}
B_k
:=
\left\{
(z, \tau) \in \mathbb{R}^N \times \mathbb{R}
:
|z| \leq \dfrac{c}{\varepsilon_k}
,~~
- (\rho - 1) \dfrac{{|\ln \varepsilon_k|}}{\mu}
\leq \tau
\leq \dfrac{T - T_1}{\varepsilon_k^2}
\right\},
\end{align*}
for some $c > 0$. We remark that $B_k \subset B_{k + 1}$ for $k \in \mathbb{N}$ and $\lim_{k \rightarrow \infty} B_k = \mathbb{R}^N \times \mathbb{R}$. Writing $\mathcal{R}_k = (r_{ij})_{1 \leq i,j \leq N}$, we remark that $\mathcal{R}_k^{-1} = \mathcal{R}_k^T$, which implies that
\begin{align}\label{eqn_asymproof_9}
\sum_{i = 1}^N r_{ \ell i }^2 = 1
,\qquad
\sum_{i = 1}^N r_{j i} r_{\ell i} = 0 \quad (j \neq \ell).
\end{align}
Since
\begin{align*}
\partial_{z_i}^2 \varphi(w^k)
=
\varepsilon_k^2 \sum_{j = 1}^N \sum_{\ell = 1}^N r_{j i} r_{\ell i} \partial_{x_\ell x_j} \varphi(u^{\varepsilon_k}),
\end{align*}
we have
\begin{align*}
\Delta \varphi(w^k)
&=
\sum_{i = 1}^N \partial_{z_i}^2 \varphi(w^k)
\\
&=
\varepsilon_k^2 \sum_{i = 1}^N \sum_{\ell = 1}^N r_{\ell i}^2 \partial_{x_\ell}^2 \varphi(u^{\varepsilon_k})
+
\varepsilon_k^2 \sum_{i = 1}^N \sum_{j,\ell = 1, j \neq \ell}^N r_{j i} r_{\ell i} \partial_{x_\ell x_j} \varphi(u^{\varepsilon_k})
\\
&
=\varepsilon_k^2 \Delta \varphi(u^{\varepsilon_k}),
\end{align*}
where the last equality follows from \eqref{eqn_asymproof_9}.
Thus, we obtain
\begin{align*}
w^k_\tau = \Delta \varphi(w^k) + f(w^k)
~~ \text{in}
~~ B_k.
\end{align*}
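For the reader's convenience, we also record the time-derivative side of this computation, which the display above leaves implicit. It assumes that $u^{\varepsilon}$ solves the diffuse-interface equation in the standard Allen--Cahn scaling, $u^{\varepsilon}_t = \Delta \varphi(u^{\varepsilon}) + \varepsilon^{-2} f(u^{\varepsilon})$, which is the form implicitly used here:
\begin{align*}
w^k_\tau(z,\tau)
= \varepsilon_k^2\, u^{\varepsilon_k}_t\big(p_k + \varepsilon_k \mathcal{R}_k z,\; t_k + \varepsilon_k^2 \tau\big)
= \varepsilon_k^2 \Delta \varphi(u^{\varepsilon_k}) + f(u^{\varepsilon_k})
= \Delta \varphi(w^k) + f(w^k),
\end{align*}
where the second equality uses the equation for $u^{\varepsilon_k}$ and the third uses $\Delta_z \varphi(w^k) = \varepsilon_k^2 \Delta_x \varphi(u^{\varepsilon_k})$ together with $f(w^k) = f(u^{\varepsilon_k})$ at the rescaled point.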
From the propagation result in Theorem \ref{Thm_Propagation} and the fact that the rotation matrix $\mathcal{R}_k$ maps the $z^{(N)}$ axis to the unit normal vector of $\Gamma_t$ at $p_k$, there exists a constant $C > 0$ such that
\begin{align}\label{eqn_asymproof_4}
z^{(N)} \geq C \Rightarrow w^k(z,\tau) \geq \alpha_+ - \eta > \alpha
,~~
z^{(N)} \leq -C \Rightarrow w^k(z,\tau) \leq \alpha_- + \eta < \alpha
\end{align}
as long as $(z,\tau) \in B_k$.
It follows from the first line of \eqref{thm_propagation_1} that $\alpha_- - \eta_0 \leq w^k \leq \alpha_+ + \eta_0$ for $k$ large enough. Then, by Lemma \ref{lem_locregularity} we can find a subsequence of $(w^k)$ converging to some $w \in C^{2,1}(\mathbb{R}^N \times \mathbb{R})$ which satisfies
\begin{align*}
w_\tau
= \Delta \varphi(w) + f(w)
~~ \text{in}
~~ \mathbb{R}^N \times \mathbb{R}.
\end{align*}
From Remark \ref{rmk_thm13} we can deduce \eqref{lem_entire_1}. Then, by Lemma \ref{lem_entire}, there exists $z^* \in \mathbb{R}$ such that
\begin{align}\label{eqn_asymproof_5}
w(z, \tau)
=
U_0(z^{(N)} - z^*).
\end{align}
Define sequences of points $\{ z_k \}, \{ \tilde{z}_{k} \}$ by
\begin{align}\label{eqn_asymproof_11}
z_k :=
\dfrac{1}{\varepsilon_k} \mathcal{R}^{-1}_k(x_k - p_k), ~~
\tilde{z}_{k} :=
\dfrac{1}{\varepsilon_k} \mathcal{R}^{-1}_k(y_k - p_k).
\end{align}
From \eqref{eqn_asymproof_2} and Theorem \ref{Thm_Propagation}, we have
\begin{align*}
&|d(x_k,t_k)|
=
\Vert x_k - p_k \Vert
= \mathcal{O}(\varepsilon_k)
,~~\\
&
\Vert y_k - p_k \Vert
\leq
\Vert y_k - x_k \Vert
+ \Vert x_k - p_k \Vert
=
|d^{\varepsilon_k}(x_k,t_k)|
+
|d(x_k,t_k)|
= \mathcal{O}(\varepsilon_k)
\end{align*}
(see Figure \ref{fig1}), which implies that the sequences $z_k$ and $\tilde{z}_k$ are bounded.
Thus, there exist subsequences of $\{ z_k \}, \{ \tilde{z}_k \}$ and $z_\infty, \tilde{z}_\infty \in \mathbb{R}^N$ such that
\begin{align*}
z_{k_n} \rightarrow z_\infty
,~~
\tilde{z}_{k_n} \rightarrow \tilde{z}_\infty
~~
\text{as}~~ n \rightarrow \infty.
\end{align*}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale = 0.4]{fig1}
\caption{Points $x_k, y_k, p_k$ and interfaces $\Gamma_{t_k}, \Gamma_{t_k}^{\varepsilon_k}$ inside the box $B_k$.}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale = 0.4]{fig2}
\caption{Points $z_\infty$ and $\tilde{z}_\infty$ and hyperplanes $z^{(N)} = z^*, z^{(N)} = z^{(N)}_\infty$.}
\end{subfigure}
\caption{In (a) the distance between $\Gamma_{t_k}$ and $\Gamma_{t_k}^{\varepsilon_k}$ is of $\mathcal{O}(\varepsilon_k)$. In (b), since we rescale space by $\varepsilon^{-1}$, the distance between the two hyperplanes is of $\mathcal{O}(1)$.}\label{fig1}
\end{figure}
Since the normal vector to $\Gamma_{t_k}$ at $p_k$ is parallel to $x_k - p_k$, and the mapping $\mathcal{R}_k^{-1}$ sends the unit normal vector to $\Gamma_{t_k}$ at $p_k$ to the vector $(0, \cdots, 0, 1) \in \mathbb{R}^N$, we conclude that $z_\infty$ must lie on the $z^{(N)}$ axis, so that we can write
\begin{align*}
z_\infty = (0, \cdots, 0, z^{(N)}_\infty).
\end{align*}
Since, by \eqref{eqn_asymproof_7},
\begin{align*}
w(z_\infty, 0)
=
\lim_{n \rightarrow \infty}
w^{k_n}(z_{k_n}, 0)
=
\lim_{n \rightarrow \infty}
u^{\varepsilon_{k_n}}(x_{k_n}, t_{k_n})
\geq \alpha,
\end{align*}
we deduce from \eqref{eqn_asymproof_5} and the fact that $U_0$ is an increasing function that
\begin{align*}
w(z_\infty, 0)
= U_0(z^{(N)}_{\infty} - z^*) \geq \alpha
\Rightarrow
z^{(N)}_{\infty} \geq z^*.
\end{align*}
From the definition of $y_{k_n}$ and \eqref{eqn_asymproof_10}, we have
\begin{align}\label{eqn_asymproof_6}
w(\tilde{z}_\infty, 0)
= \lim_{n \rightarrow \infty} w^{k_n}(\tilde{z}_{k_n}, 0)
= \lim_{n \rightarrow \infty} u^{\varepsilon_{k_n}}(y_{k_n}, t_{k_n})
= \alpha.
\end{align}
Next, we show that
\begin{align}\label{eqn_asymproof_8}
w(z,0) \geq \alpha~ \text{if} ~
\Vert z - z_\infty \Vert \leq \Vert \tilde{z}_\infty - z_\infty \Vert.
\end{align}
Choose $z \in \mathbb{R}^N$ satisfying $\Vert z - z_\infty \Vert \leq \Vert \tilde{z}_\infty - z_\infty \Vert$ and a sequence $a_{k_n} \in \mathbb{R}^+$ such that $a_{k_n} \rightarrow \Vert z - z_\infty \Vert$ as $n \rightarrow \infty$ and $\varepsilon_{k_n} a_{k_n} \leq \Vert x_{k_n} - y_{k_n} \Vert$ for all sufficiently large $n$. Then, we define sequences $n_{k_n}$ and $b_{k_n}$ by
\begin{align*}
n_{k_n} = \dfrac{z - z_{k_n}}{\Vert z - z_{k_n} \Vert}
,~
b_{k_n} = a_{k_n} n_{k_n} + z_{k_n}.
\end{align*}
Note that $b_{k_n} \rightarrow z$ as $n \rightarrow \infty$. Then, by \eqref{eqn_asymproof_11}, we obtain
\begin{align*}
w(z,0)
&= \lim_{n \rightarrow \infty} w^{k_n}(b_{k_n}, 0)=
\lim_{n \rightarrow \infty}
u^{\varepsilon_{k_n}} ( p_{k_n} + \varepsilon_{k_n} \mathcal{R}_{k_n}(a_{k_n} n_{k_n} + z_{k_n}),t_{k_n})
\\
&
= \lim_{n \rightarrow \infty} u^{\varepsilon_{k_n}}(\varepsilon_{k_n} a_{k_n} \mathcal{R}_{k_n} n_{k_n} + x_{k_n}, t_{k_n})
\geq \alpha,
\end{align*}
where the last inequality holds by \eqref{eqn_asymproof_3}.
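The passage from the second to the third expression in the last display is simply the definition \eqref{eqn_asymproof_11} of $z_{k_n}$ spelled out; no additional assumption is involved:
\begin{align*}
p_{k_n} + \varepsilon_{k_n} \mathcal{R}_{k_n}\big(a_{k_n} n_{k_n} + z_{k_n}\big)
= \varepsilon_{k_n} a_{k_n} \mathcal{R}_{k_n} n_{k_n}
+ p_{k_n} + \varepsilon_{k_n} \mathcal{R}_{k_n} z_{k_n}
= \varepsilon_{k_n} a_{k_n} \mathcal{R}_{k_n} n_{k_n} + x_{k_n},
\end{align*}
since $\varepsilon_{k_n} \mathcal{R}_{k_n} z_{k_n} = x_{k_n} - p_{k_n}$ by \eqref{eqn_asymproof_11}.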
Note that \eqref{eqn_asymproof_5} implies $\{ w = \alpha \} = \{(z,\tau) \in \mathbb{R}^N \times \mathbb{R} : z^{(N)} = z^* \}$.
Thus, we have either $z_\infty = \tilde{z}_\infty$ or, in view of \eqref{eqn_asymproof_5}, \eqref{eqn_asymproof_6} and \eqref{eqn_asymproof_8}, that the ball of radius $\Vert \tilde{z}_\infty - z_\infty \Vert$ centered at $z_\infty$ is tangent to the hyperplane $z^{(N)} = z^*$ at $\tilde{z}_\infty$. Hence,
$\tilde{z}_\infty$ is a point on the $z^{(N)}$ axis. With this observation and \eqref{eqn_asymproof_5}, we have
\begin{align*}
\tilde{z}_\infty
= (0, \cdots, 0, z^*).
\end{align*}
This last property implies
\begin{align}\label{eqn_asymproof_12}
\dfrac{d^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})}{\varepsilon_{k_n}}
= \dfrac{\Vert x_{k_n} - y_{k_n} \Vert}{\varepsilon_{k_n}}
= \Vert \mathcal{R}_{k_n} \left( z_{k_n} - \tilde{z}_{k_n} \right) \Vert
= \Vert z_{k_n} - \tilde{z}_{k_n} \Vert
\rightarrow
\Vert z_\infty - \tilde{z}_\infty \Vert
= z^{(N)}_\infty - z^*.
\end{align}
We have therefore reached a contradiction since, by \eqref{eqn_asymproof_2}, \eqref{eqn_asymproof_5} and \eqref{eqn_asymproof_12},
\begin{align*}
0
&=
\vert w(z_\infty, 0) - U_0 (z^{(N)}_\infty - z^*) \vert
\\
&=
\left\vert \lim_{n \rightarrow \infty}
\left[
w^{k_n}(z_{k_n}, 0)
- U_0
\left(
\dfrac{d^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})}{\varepsilon_{k_n}}
\right)
\right]
\right\vert
\\
&=
\left\vert \lim_{n \rightarrow \infty}
\left[
u^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})
- U_0
\left(
\dfrac{d^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})}{\varepsilon_{k_n}}
\right)
\right]
\right\vert,
\end{align*}
contradicting \eqref{eqn_asymproof_1}.
For the proof of $(ii)$, we use the same method as in \cite{MH2012}.
\qed
\section*{Appendix: Mobility and surface tension}
Mobility is defined as the linear response of the speed of the traveling wave to an
external force. More precisely, motivated by (4.1) and (4.2) in \cite{Spohn1993}, let us consider
the nonlinear Allen-Cahn equation with external force $\delta$ on $\mathbb{R}$
for small enough $|\delta|$:
\begin{equation} \label{eq:AC-delta}
u_t = \varphi(u)_{zz} +f(u)+\delta, \quad z\in \mathbb{R},
\end{equation}
and the corresponding traveling wave solution $U=U_\delta(z)$ with speed $c(\delta)$:
\begin{align} \label{eq:TW-delta}
& \varphi(U_\delta)_{zz} +c(\delta) U_{\delta z}+f(U_\delta)+\delta=0, ~~ z\in \mathbb{R}, \\
& U_\delta(\pm\infty)= \alpha_{\pm, \delta},
\notag
\end{align}
where $\alpha_{\pm,\delta}$ are the two stable solutions of $f(u)+\delta=0$.
Then, we define the mobility by
$$
\mu_{AC}:=-\frac{c'(0)}{ \alpha_+- \alpha_-},
$$
with a normalization factor $\alpha_+-\alpha_-$ as in \cite{Spohn1993}; compare (4.6) and (4.7) in
\cite{Spohn1993}, noting that the boundary conditions at $\pm\infty$ are switched, so that
we have a negative sign for $\mu_{AC}$.
To derive a formula for $\mu_{AC}$, we multiply \eqref{eq:TW-delta} by $\varphi(U_\delta)_z$
and integrate over $\mathbb{R}$ to obtain
\begin{align} \label{eq:94}
c(\delta) \int_{\mathbb{R}} U_{\delta z} \varphi(U_\delta)_z dz + \delta (\varphi(\alpha_+)-\varphi(\alpha_-)) =O(\delta^2),
\end{align}
by noting that
\begin{align*}
& \int_{\mathbb{R}} \varphi(U_{\delta})_{zz} \varphi(U_\delta)_z dz = \frac12 \int_{\mathbb{R}}
\big\{\big( \varphi(U_{\delta})_{z} \big)^2 \big\}_z dz =0, \\
& \int_{\mathbb{R}} \varphi(U_{\delta})_{z} dz = \varphi(\alpha_{+,\delta})-\varphi(\alpha_{-,\delta}) = \varphi(\alpha_+)-\varphi(\alpha_-)+O(\delta), \\
& \int_{\mathbb{R}} f(U_{\delta}) \varphi(U_\delta)_z dz = \int_{\mathbb{R}} f(U_{\delta}) \varphi'(U_\delta) U_{\delta z} dz
= - \int_{\alpha_{-,\delta}}^{\alpha_{+,\delta}} W'(u)du = O(\delta^2).
\end{align*}
The last line follows by the change of variable $u=U_\delta(z)$, the identity $W'(u) = -f(u)\varphi'(u)$
(recall \eqref{eq:28}),
$\int_{\alpha_{-}}^{\alpha_{+}} W'(u)du=0$, and the facts that $W'(\alpha_\pm)=0$ and $W'\in C^1$.
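To make the final $O(\delta^2)$ bound explicit, one can argue as follows (a short computation; in addition to the facts just listed, it uses $\alpha_{\pm,\delta} - \alpha_\pm = O(\delta)$, which we take for granted here and which follows from the implicit function theorem when $f'(\alpha_\pm) \neq 0$):
\begin{align*}
\int_{\alpha_{-,\delta}}^{\alpha_{+,\delta}} W'(u)\,du
= \underbrace{\int_{\alpha_-}^{\alpha_+} W'(u)\,du}_{=\,0}
\;+\; \int_{\alpha_+}^{\alpha_{+,\delta}} W'(u)\,du
\;-\; \int_{\alpha_-}^{\alpha_{-,\delta}} W'(u)\,du,
\end{align*}
and on each of the two remaining intervals, of length $O(\delta)$, we have $|W'(u)| = |W'(u) - W'(\alpha_\pm)| \leq L\,|u - \alpha_\pm| = O(\delta)$, with $L$ a local Lipschitz constant of $W'$ (available since $W' \in C^1$), so each of these integrals is $O(\delta^2)$.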
On the other hand, since one can at least formally expect
$U_\delta = U_0+O(\delta)$ (recall \eqref{eqn_AsymptExp_U0} for $U_0$), by \eqref{intrinsic},
\begin{align*}
\int_{\mathbb{R}} U_{\delta z} \varphi(U_\delta)_z dz
& = \int_{\mathbb{R}} U_{0 z} \varphi(U_0)_z dz + O(\delta) \\
& = \int_{\mathbb{R}} U_{0 z} \sqrt{2W(U_0(z))} dz + O(\delta) \\
& = \int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)} du + O(\delta),
\end{align*}
by the change of variable $u=U_0(z)$ again. This, combined with \eqref{eq:94}, leads to
\begin{align*}
c'(0) = - \frac{\varphi(\alpha_+)-\varphi(\alpha_-)}
{\int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)} du}.
\end{align*}
Thus, the mobility is given by the formula
\begin{align} \label{mobility}
\mu_{AC} = \frac{ \varphi_\pm^*}{\int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)} du}
= \frac{ \varphi_\pm^*}{\int_{\mathbb{R}} \varphi'(U_0) U_{0z}^2(z)dz},
\end{align}
where
$$
\varphi_\pm^* = \frac{\varphi(\alpha_+)-\varphi(\alpha_-)}{\alpha_+ - \alpha_-}.
$$
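The second equality in \eqref{mobility} is the change of variable $u = U_0(z)$ combined with the first-integral relation \eqref{intrinsic}, which, as used again for $\sigma_{AC}$ below, reads $\varphi(U_0)_z = \sqrt{2W(U_0)}$; explicitly,
\begin{align*}
\int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)}\, du
= \int_{\mathbb{R}} \sqrt{2W(U_0(z))}\, U_{0z}(z)\, dz
= \int_{\mathbb{R}} \varphi'(U_0(z))\, U_{0z}(z) \cdot U_{0z}(z)\, dz
= \int_{\mathbb{R}} \varphi'(U_0)\, U_{0z}^2(z)\, dz,
\end{align*}
where the middle step uses $\sqrt{2W(U_0)} = \varphi(U_0)_z = \varphi'(U_0)\,U_{0z}$.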
On the other hand, surface tension is defined as the gap between the energy
of the microscopic transition profile from $\alpha_-$ to $\alpha_+$ in the normal
direction and that of the constant profile $\alpha_-$ or $\alpha_+$. More precisely,
define the energy of a profile $u= \{u(z)\}_{z\in \mathbb{R}}$ by
$$
\mathcal{E}(u) = \int_{\mathbb{R}} \Big\{ \frac12\big(\varphi(u)_z\big)^2 +W(u)\Big\} dz.
$$
Recall that the potential $W$ is defined by \eqref{eq:28}, and $W\ge 0$
and $W(\alpha_\pm)=0$ hold. In particular, $W$ is normalized as $\min_{u\in \mathbb{R}}W(u)=0$,
so that $\min_{u =u(\cdot)}\mathcal{E}(u)=0$.
Then, the surface tension is defined as
$$
\sigma_{AC} := \frac1{\varphi_\pm^*} \min_{u: u(\pm \infty)=\alpha_\pm} \mathcal{E}(u),
$$
by normalizing the energy by $\varphi_\pm^*$. We observe
that $\mathcal{E}$ is defined through $\varphi$.
Note that the nonlinear Allen-Cahn equation, that is \eqref{eq:AC-delta}
with $\delta=0$, is a distorted gradient flow associated with $\mathcal{E}(u)$:
$$
u_t = - \frac{\delta \mathcal{E}(u)}{\delta\varphi(u)}, \quad z \in \mathbb{R},
$$
where the right hand side is defined as the functional derivative of
$\mathcal{E}(u)$ in $\varphi(u)$, which is given by
$$
\frac{\delta \mathcal{E}(u)}{\delta\varphi(u)} = - \varphi(u)_{zz} -f(u(z)).
$$
Indeed, to see the second term $-f(u(z))$, setting $v=\varphi(u)$, one can rewrite
$W(u) = W(\varphi^{-1}(v))$ as a function of $v$ so that
\begin{align*}
\big(W(\varphi^{-1}(v))\big)' & = W'(\varphi^{-1}(v)) \big(\varphi^{-1}(v)\big)' \\
& = - f(\varphi^{-1}(v)) \varphi'(\varphi^{-1}(v)) \frac1{\varphi'(\varphi^{-1}(v))} \\
& = - f(\varphi^{-1}(v)) = -f(u).
\end{align*}
We call the flow ``distorted'', since the functional derivative is taken in $\varphi(u)$
and not in $u$. One can rephrase this in terms of the change of
variables $v(z) = \varphi(u(z))$. Indeed, we have $\mathcal{E}(u)=
\widetilde{\mathcal{E}}(v)$ under this change, where
$$
\widetilde{\mathcal{E}}(v) = \int_{\mathbb{R}} \Big\{ \frac12 v_z^2 + W(\varphi^{-1}(v))\Big\}dz,
$$
and
$$
\frac{\delta \widetilde{\mathcal{E}}}{\delta v}
= -v_{zz} - f(\varphi^{-1}(v)) = -v_{zz} -g(v).
$$
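As a quick check of this expression, here is the first-variation computation (the only assumption is that the perturbation $\zeta$ is smooth and compactly supported, so that the boundary terms in the integration by parts vanish):
\begin{align*}
\frac{d}{ds}\Big|_{s=0} \widetilde{\mathcal{E}}(v + s\zeta)
= \int_{\mathbb{R}} \Big\{ v_z \zeta_z + W'(\varphi^{-1}(v))\,\big(\varphi^{-1}\big)'(v)\,\zeta \Big\}\,dz
= \int_{\mathbb{R}} \big\{ -v_{zz} - f(\varphi^{-1}(v)) \big\}\,\zeta\, dz,
\end{align*}
using integration by parts in the first term and the computation of $\big(W(\varphi^{-1}(v))\big)'$ displayed above in the second.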
Therefore, in the variable $v(z)$, the nonlinear Allen-Cahn equation can be rewritten as
\begin{align*}
v_t = \varphi'(u) u_t = - \varphi'(\varphi^{-1}(v)) \cdot \frac{\delta \widetilde{\mathcal{E}}}{\delta v}
= \varphi'(\varphi^{-1}(v)) \big\{ v_{zz} +g(v)\big\}.
\end{align*}
This type of distorted equation for $v$ is sometimes called an Onsager
equation; see \cite{Mi}.
Now we come back to the computation of the surface tension $\sigma_{AC}$.
In fact, it is given by
\begin{align} \label{eq:ST}
\sigma_{AC}= \frac1{\varphi_\pm^*} \int_{\mathbb{R}} V_{0z}^2 dz =
\frac1{\varphi_\pm^*}
\int_{\alpha_-}^{\alpha_+} \varphi'(u) \sqrt{2W(u)} du,
\end{align}
where $V_0=\varphi(U_0)$ satisfies \eqref{eqn_AsymptExp_V0}.
Indeed, the second equality follows from \eqref{eq:29}. To see the first equality,
note that, by definition,
\begin{align*}
\sigma_{AC} = \frac1{\varphi_\pm^*} \min_{u: u(\pm \infty)=\alpha_\pm} \mathcal{E}(u)
= \frac1{\varphi_\pm^*}
\min_{v: v(\pm \infty)= \varphi(\alpha_\pm)} \widetilde{\mathcal{E}}(v)
\end{align*}
and the minimizers of $\widetilde{\mathcal{E}}$ under the condition
$v(\pm \infty)= \varphi(\alpha_\pm)$ are given by $V_0$ and its spatial shifts. Thus,
$$
\sigma_{AC} = \frac1{\varphi_\pm^*} \widetilde{\mathcal{E}}(V_0)
= \frac1{\varphi_\pm^*} \int_{\mathbb{R}} \Big\{\frac12 V_{0z}^2 + W(\varphi^{-1}(V_0)) \Big\} dz.
$$
Moreover, since $V_{0z}= \sqrt{2W(U_0(z))}$ by \eqref{intrinsic},
we have $\int_{\mathbb{R}} W(\varphi^{-1}(V_0)) dz = \int_{\mathbb{R}} \frac12 V_{0z}^2 dz$.
In particular, this implies the first equality of \eqref{eq:ST}.
By \eqref{second lambda_0} combined with \eqref{mobility} and
\eqref{eq:ST}, we see that $\lambda_0= \mu_{AC}\sigma_{AC}$.
\begin{rmk}
The linear case $\varphi(u) = \frak{K} u$ is discussed by Spohn \cite{Spohn1993},
in which $\frak{K}$ is denoted by $\kappa$. In this case, since $\varphi'=\frak{K}$
and $\varphi_\pm^*=\frak{K}$, by \eqref{mobility} and \eqref{eq:ST}, we have
$\mu_{AC} = \big[\int_{\mathbb{R}} U_{0z}^2 dz\big]^{-1}$ and $\sigma_{AC} = \frak{K}\int_{\mathbb{R}} U_{0z}^2 dz$.
These formulas coincide with (4.7) and (4.8) in \cite{Spohn1993}, noting that
$U_0$ is the same as $w$ in \cite{Spohn1993} in the linear case except that
the direction is switched due to the choice of the boundary conditions.
\end{rmk}
\end{document}
\begin{document}
\title{Remarks on Finite Subset Spaces}
\author{Sadok Kallel}
\email{[email protected]}
\address{Laboratoire Painlev\'e\\
Universit\'e des Sciences et Technologies de Lille, France}
\author{Denis Sjerve}
\email{[email protected]}
\address{Department of Mathematics\\
University of British Columbia, Vancouver, Canada}
\begin{abstract}
This paper expands on and refines some known and less well-known
results about the finite subset spaces of a simplicial complex $X$
including their connectivity and manifold structure. It also
discusses the inclusion of the singletons into the three fold subset
space and shows that this subspace is weakly contractible but
generally non-contractible unless $X$ is a cogroup. Some
homological calculations are provided.
\end{abstract}
\maketitle
\section{Statement of Results}\label{intro}
Let $X$ be a topological space (always assumed to be path connected),
and $n$ a positive integer. It has become increasingly useful in
recent years to study the space
$$\sub{n}X:= \{\{x_1,\ldots, x_\ell\}\subset X\ |\ \ell\leq n\}
$$
of all finite subsets of $X$ of cardinality at most $n$ \cite{akr,
beilinson, handel, mp, rose, tuffley1}. This space is topologized as
the identification space obtained from $X^n$ by identifying two
$n$-tuples if and only if the sets of their coordinates coincide
\cite{borsuk}. The functors $\sub{n}(-)$ are homotopy functors in the
sense that if $X\simeq Y$ then $\sub{n}(X)\simeq\sub{n}(Y)$. If $k\le
n$ then $Sub_kX$ naturally embeds in $Sub_nX.$ We write
$j_n:X\hookrightarrow Sub_nX$ for the inclusion given by
$j_n(x)=\{x\}$.
This paper takes advantage of the close relationship between finite
subset spaces and symmetric products to deduce a number of useful
results about them.
As a starting point, we discuss cell structures on finite subset
spaces. We observe in \S \ref{decomp} that if $X$ is a finite
$d$-dimensional simplicial complex, then $\sub{n}X$ is an
$nd$-dimensional CW complex, of which $\sub{k}X$ for $k\leq n$ is a
subcomplex (Proposition \ref{celldecomp}). Furthermore, $\sub{}X :=
\coprod_{n\geq 1}\sub{n}X$ has the structure of an abelian CW-monoid
(without unit) whenever $X$ is a simplicial complex.
In \S \ref{connect} we address a connectivity conjecture stated in
\cite{tuffley3}. We recall that a space $X$ is $r$-connected if
$\pi_i(X)=0$ for $i\leq r$. A contractible space is $r$-connected
for all positive $r$. In \cite{tuffley3} Tuffley proves that
$\sub{n}X$ is $n-2$ connected and conjectures that it is $n+r-2$
connected if $X$ is $r$-connected. We are able to confirm his
conjecture for the three fold subset spaces. In fact we show
\begin{theorem}\label{main1} If $X$ is $r$-connected, $r\geq 1$ and $n\geq 3$,
then $\sub{n}X$ is $r+1$-connected.
\end{theorem}
In \S \ref{caveats} we address a somewhat surprising fact about the
embeddings $\sub{k}X\hookrightarrow\sub{n}X, k\leq n$. A theorem of
Handel \cite{handel} asserts that the inclusion $j : \sub{k}(X)
\hookrightarrow\sub{2k+1}(X)$ for any $k\geq 1$ is trivial on
homotopy groups (i.e. ``weakly trivial''). This is of course not
enough to conclude that $j$ is the trivial map, and in fact it need
not be. Let $\sub{k}(X,x_0)$ be the subspace of $\sub{k}X$ of all
finite subsets containing the basepoint $x_0\in X$. Handel's result
is deduced from the more basic fact that the inclusion $j_{x_0}:
\sub{k}(X,x_0)\hookrightarrow \sub{2k-1}(X,x_0)$ is weakly trivial.
The following theorem implies that these maps are often not
null-homotopic.
\begin{theorem}\label{essential} The embeddings $X\hookrightarrow\sub{3}(X,x_0),
x\mapsto \{x,x_0\}$, and $j: X\hookrightarrow \sub{3}(X)$, $x\mapsto
\{x\}$, are both null-homotopic if $X$ is a cogroup. If $X=S^1\times S^1$
is the torus, then both $j_3$ and $j_{x_0}$ are non-trivial in
homology and hence essential.
\end{theorem}
For a definition of a cogroup, see \S\ref{caveats}. In particular
suspensions are cogroups. The second half of Theorem \ref{essential}
follows from a general calculation given in \S\ref{caveats} which
exhibits a model for $\sub{3}(X,x_0)$ and uses it to show that its
homology is an explicit quotient of the homology of the symmetric
square $\sp{2}X$ by a submodule determined by the coproduct on
$H_*(X)$. One deduces in particular a homotopy equivalence between
$\sub{3}(\Sigma X,x_0)$ and the \textit{reduced} symmetric square
$\bsp{2}(\Sigma X)$ (cf. definition \ref{reducedcons} and proposition
\ref{equivalent}). The methods in \S \ref{caveats} are taken up again
in \cite{sadok} where an explicit spectral sequence is devised to
compute $H_*(\sub{n}X)$ for any finite simplicial complex $X$ and any
$n\geq 1$.
The final section of this paper deals with manifold structures on
$\sub{n}X$ and top homology groups. It is known that
$\sub{2}X=\sp{2}X$ is a closed manifold if and only if $X$ is closed
of dimension $2$. This is a consequence of the fact that
$\sp{2}({\mathbb R}^d)$ is not a manifold if $d>2$, while
$\sp{2}({\mathbb R}^2)\cong{\mathbb R}^4$. The following complete description is
due to Wagner \cite{wagner}
\begin{theorem}\label{main3} Let $X$ be a closed manifold of dimension $d\geq
1$. Then $\sub{n}X$ is a closed manifold if and only if either (i)
$d = 1$ and $n=3$, or (ii) $d=2$ and $n=2$.
\end{theorem}
This result is established in \S \ref{manifold} where we use in the
case $d\geq 2$ the connectivity result of theorem \ref{main1}, one
observation from \cite{mostovoy} and some homological calculations
from \cite{ks}. In the case $d=1$ we reproduce Wagner's cute
argument. Furthermore in that section we refine some results of
Handel \cite{handel} on the top homology groups of $\sub{n}X$ when
$X$ is a manifold. We point out that if $X$ is a closed orientable
manifold of dimension $d\geq 2$, then the top homology group
$H_{nd}(\sub{n}X)$ is trivial if $d$ is odd and is ${\mathbb Z}$ if $d$ is
even. This group is always trivial if $X$ is not orientable
(see \S\ref{topdim}).
{\sc Acknowledgment}: This work was initiated at PIMS in Vancouver
and the first author would like to thank the institute for its
hospitality.
\section{Basic Constructions}\label{basic}
All spaces $X$ in this paper are path connected, paracompact, and
have a chosen basepoint $x_0$.
The way we will think of $\sub{n}X$ is as a quotient of the $n$-th
symmetric product $\sp{n}X$. This symmetric product is the quotient
of $X^n$ by the permutation action of the symmetric group
$\mathfrak{S}_n$. The quotient map $\pi : X^n{\ra 2}\sp{n}X$ sends
$(x_1,\ldots, x_n)$ to the equivalence class $[x_1,\ldots, x_n]$. It
will be useful sometimes to write such an equivalence class as an
abelian product $x_1\ldots x_n$, $x_i\in X$. There are topological
embeddings \begin{equation}\label{basepoint} j_n: X\hookrightarrow
\sp{n}X\ \ \ ,\ \ \ x\mapsto xx_0^{n-1}
\end{equation}
The finite subset space $\sub{n}X$ is obtained from $\sp{n}X$
through the identifications
$$[x_1,\cdots , x_n]\ \sim\ [y_1, \cdots , y_n] \ \ \Longleftrightarrow\ \
\{x_1,\ldots, x_n\}=\{y_1,\ldots, y_n\}$$ In multiplicative
notation, elements of $\sub{n}X$ are products $x_1x_2\cdots x_k$
with $k\leq n,$ and subject to the identifications $x_1^2x_2\cdots
x_k \sim x_1x_2\cdots x_k$.
The topology of $\sub{n}X$ is the quotient topology inherited from
$\sp{n}X$ or $X^n$ \cite{handel}. When $X$ is Hausdorff this
topology is equivalent to the so-called {\sl Vietoris finite}
topology whose basis of open sets are sets of the form
$$[U_1,\ldots, U_k]:=\{S\in\sub{n}X\ |\
S\subset\bigcup_{i=1}^kU_i\ \hbox{and}\ S\cap U_i\neq\emptyset\
\hbox{for each $i$}\}$$ where $U_i$ is open in $X$ \cite{wagner}.
When $X$ is a metric space, $\sub{k}X$ is again a metric space
under the Hausdorff metric, and hence inherits a third and
equivalent topology \cite{wagner}. In all cases, for any topology we
use, continuous maps between spaces induce continuous maps between
their finite subset spaces.
\begin{example}\rm\label{e3s2} Of course $\sub{1}X = X$ and $\sub{2}X=\sp{2}X$.
Generally, if ${\bf\Delta}^{n+1}X \subset\sp{n+1}X$ denotes the
image of the fat diagonal in $X^{n+1}$; that is
$${\bf\Delta}^{n+1}X :=\{x_1^{i_1}\ldots x_r^{i_r}\in\sp{n+1}X\ |\
r\leq n, \sum i_j = n+1\
\hbox{and}\ \ i_j>0\}$$ then there is a map $q : {\bf\Delta}^{n+1}X
{\ra 2}\sub{n}X$, $x_1^{i_1}\ldots x_r^{i_r}{\ra 2} \{x_1,\ldots ,x_r\}$, and
a pushout diagram
\begin{equation}\label{pushout2}
\xymatrix{
{\bf\Delta}^{n+1}X\ar[r]^{i}\ar[d]^q&\sp{n+1}X\ar[d]\\
\sub{n}X\ar[r]&\sub{n+1}X
}
\end{equation}
This is quite clear since we obtain $\sub{n+1}X$ by identifying
points in the fat diagonal to points in $\sub{n}X$. In particular,
when $n=2$, we have the pushout
\begin{equation}\label{pushout3}
\xymatrix{
X\times X\ar[r]^{i}\ar[d]^q&\sp{3}X\ar[d]\\
\sp{2}X\ar[r]&\sub{3}X
}
\end{equation}
where $q(x,y)=xy$ and $i(x,y)= x^2y$. The homology of $\sub{3}(X)$ can
then be obtained from a Mayer-Vietoris sequence. Some calculations for
the three fold subset spaces are in \S \ref{caveats}.
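To spell out the sequence in question: since the fat diagonal includes into $\sp{3}X$ as a subcomplex (see Proposition \ref{celldecomp} below), so that $i$ is a cofibration, the pushout (\ref{pushout3}) yields, with the usual Mayer-Vietoris sign conventions, the long exact sequence
$$\cdots {\ra 2} H_n(X\times X)\fract{(q_*,\, i_*)}{{\ra 2}} H_n(\sp{2}X)\oplus H_n(\sp{3}X){\ra 2} H_n(\sub{3}X){\ra 2} H_{n-1}(X\times X){\ra 2}\cdots$$
in which the second map is the difference of the maps induced by the two arrows into $\sub{3}X$.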
There are two immediate and non-trivial consequences of the above
pushouts. Albrecht Dold shows in \cite{dold} that the homology of the
symmetric products of a CW complex $X$ only depends on the homology of
$X$. The pushout diagram in (\ref{pushout2}) shows that in the case of
the finite subset spaces, this homology also depends on the
\textit{cohomology structure of $X$}. This general fact for the three
and four fold subset spaces is further discussed in \cite{taamallah}.
The second consequence of (\ref{pushout2}) is that it yields an
important corollary.\end{example}
\begin{corollary}\label{fundamental} $\sub{n}X$ is simply connected for $n\geq
3$. \end{corollary}
\begin{proof}
We use the following known facts about symmetric products:
$\pi_1(\sp{n}X)\cong H_1(X; {\mathbb Z} )$ whenever $n\geq 2$, and the
inclusion $j_n: X\hookrightarrow\sp{n}X$ induces the abelianization
map at the level of fundamental groups (P.A. Smith \cite{smith} proves
this for $n=2,$ but his argument applies for $n>2$
\cite{taamallah}). For $n\geq 3$, consider the composite
$$X\fract{\alpha}{{\ra 2}}
{\bf\Delta}^{n}X\fract{i}{{\ra 2}}\sp{n}X$$ with $\alpha (x) =
[x,x_0,\ldots, x_0]$. The induced map $j_{n*}=i_*\circ\alpha_*$ on
$\pi_1$ is surjective, as we pointed out, and hence so is $i_*$.
Assume we know that $\pi_1(\sub{3}(X))=0$. Then the fact that $i_*$
is surjective implies immediately by the Van-Kampen theorem and the
pushout diagram in (\ref{pushout2}) that $\pi_1(\sub{4}X)=0.$
By induction we see that $\pi_1(\sub{n}X)=0$ for larger $n$.
Therefore, we need only establish the claim for $n=3.$ For that we
apply Van Kampen to diagram (\ref{pushout3}). Consider the maps
$\tau : x_0\times X \hookrightarrow X\times
X\fract{i}{{\ra 2}}\sp{3}X$ and $\beta : X\times x_0{\ra 2} X\times
X\fract{q}{{\ra 2}}\sp{2}X.$ Now $i(x,y)=x^2y$ so that $\tau (x_0,x) =
x_0^2x = j_3(x)$ and $\beta (x,x_0) = xx_0 = j_2(x).$ Since the
$j_k$'s are surjective on $\pi_1$ it follows that $\tau$ and $\beta$
are surjective on $\pi_1.$ Therefore, for any classes
$u\in\pi_1(SP^3X)$ and $v\in\pi_1(SP^2X)$, there exists a class
$w\in\pi_1(X\times X)$ such that $i_*(w)=u$ and $q_*(w)=v.$ This
shows that $\pi_1(Sub_3X)=0.$
\end{proof}
This corollary also follows from \cite{biro, tuffley3}, where it is
shown that $\sub{n}X$ is $(n-2)$-connected for $n\geq 3$. However, the
proof above is completely elementary.
\subsection{Reduced Constructions}\label{reducedcons} For the
spaces under consideration, the natural inclusion
$\sub{n-1}X\subset\sub{n}X$ is a cofibration \cite{handel}. We write
$\bsub{n}X:=\sub{n}X/\sub{n-1}X$ for the cofiber. Similarly
$\sp{n-1}X$ embeds in $\sp{n}X$ as the closed subset of all
configurations $[x_1,\ldots , x_n]$ with $x_i$ at the basepoint for
some $i$. We set $\bsp{n}X := \sp{n}X/\sp{n-1}X$.
Note that even though $\sp{2}X$ and $\sub{2}X$ are the same, there is an
essential difference between their reduced analogs. The difference
here comes from the fact that the inclusion
$X\hookrightarrow\sub{2}X$ is the composite $X\fract{\Delta}{{\ra 2}}
X\times X{\ra 2}\sp{2}X\cong \sub{2}X,$ where $\Delta$ is the
diagonal, while $j_2: X\hookrightarrow\sp{2}X$ is the basepoint
inclusion.
\begin{example}\rm\label{connectbsp2s} When $X=S^1$, $\sp{2}(S^1)$ is the closed
M\"{o}bius band. If we view this band as a square with two sides
identified along opposite orientations, then
$S^1=\sp{1}(S^1)\hookrightarrow\sp{2}(S^1)$ embeds into this band as
an edge (see figures on p. 1124 of \cite{tuffley1}). Hence this
embedding is homotopic to the embedding of an equator, and so
$\bsp{2}(S^1)$ is contractible. On the other hand $S^1=\sub{1}(S^1)$
embeds into $\sub{2}(S^1)=\sp{2}(S^1)$ as the diagonal $x\mapsto
\{x,x\}=[x,x],$ which is the boundary of the M\"{o}bius band, and so
$\bsub{2}(S^1)={\mathbb R} P^2$. \end{example}
\begin{example}\rm When $X=S^2$, $\sp{2}(S^2)$ is the complex projective plane
${\mathbb P}^2,$
$\sp{1}(S^2)={\mathbb P}^1$ is a hyperplane, and
$\bsp{2}(S^2)=S^4$. On the other hand $\bsub{2}(S^2)$ has the
following description. Write ${\mathbb P}^1$ for ${\mathbb C}\cup\{\infty\}$. Then
$\bsub{2}(S^2)$ is the quotient of ${\mathbb P}^2$ by the image of the
Veronese embedding ${\mathbb P}^1{\ra 2}{\mathbb P}^2$, $z\mapsto [z^2 : -2z :1]$,
$\infty\mapsto [1:0:0]$. To see this, identify $\sp{n}({\mathbb C} )$ with
${\mathbb C}^n$ by sending $(z_1,\ldots, z_n)$ to the coefficients of the
polynomial $(x-z_1)\ldots (x-z_n)$. This extends to the
compactifications to give an identification of $\sp{n}(S^2)$ with
${\mathbb P}^n$ (\cite{hatcher}, chapter 4). When $n=1$, $(z,z)$ is mapped
to the coefficients of $(x-z)(x-z),$ that is to $(z^2,-2z)$. Note
that the diagonal $S^2{\ra 2}\sp{2}(S^2)={\mathbb P}^2$ is multiplication by
$2$ on the level of $H_2$ so that, in particular,
$H_4(\bsub{2}(S^2))={\mathbb Z},$ $H_2(\bsub{2}(S^2))={\mathbb Z}_2,$ and all
other reduced homology groups are zero. \end{example}
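For the record, here is how these groups follow from the degree-two statement just made. Since the inclusion $\sub{1}(S^2)\subset\sub{2}(S^2)$ is a cofibration (as recalled at the start of this subsection), collapsing it gives a long exact sequence in reduced homology
$$\cdots{\ra 2} \tilde H_n(S^2)\fract{\Delta_*}{{\ra 2}} \tilde H_n({\mathbb P}^2){\ra 2} \tilde H_n(\bsub{2}(S^2)){\ra 2} \tilde H_{n-1}(S^2){\ra 2}\cdots$$
With $\Delta_*$ equal to multiplication by $2$ on $H_2$, this gives $H_4(\bsub{2}(S^2))\cong H_4({\mathbb P}^2)={\mathbb Z}$, $H_3(\bsub{2}(S^2))=0$ (as $\Delta_*$ is injective), $H_2(\bsub{2}(S^2))={\mathbb Z}_2$, and zero in the remaining degrees, as claimed.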
\section{Cell Decomposition}\label{decomp}
If $X$ is a simplicial complex, there is a standard way to pick a
$\mathfrak S_n$-equivariant simplicial decomposition for the product
$X^n$ so that the quotient map $X^n{\ra 2}\sp{n}X$ induces a cellular
structure on $\sp{n}X$. We argue that this same cellular structure
descends to a cell structure on $\sub{n}X$. The construction of this
cell structure for the symmetric products is fairly classical
\cite{liao, nakaoka}. The following is a review and slight
expansion.
\begin{proposition}\label{celldecomp} Let $X$ be a simplicial complex. For $n\geq
1$ there exist cellular decompositions for $X^n$, $\sp{n}X$ and
$\sub{n}X$ so that all of the quotient maps
$X^n\rightarrow\sp{n}X\rightarrow\sub{n}X$ and the concatenation
pairings $+$ are cellular
\begin{equation}\label{pairing}
\xymatrix{
\sp{r}X\times\sp{s}X\ar[r]^{\ \ +}\ar[d]&\sp{r+s}X\ar[d]\\
\sub{r}X\times\sub{s}X\ar[r]^{\ \ +}&\sub{r+s}X
}
\end{equation}
Furthermore the subspaces ${\bf\Delta}^{n},\sp{n-1}X\subset \sp{n}X$
and $\sub{n-1}X \subset\sub{n}X$ are subcomplexes.
\end{proposition}
\begin{proof}
Both $\sp{n}X$ and $\sub{n}X$ are obtained from $X^n$ via
identifications. If for some simplicial (hence cellular) structure
on $X^n$, derived from that on $X$, these identifications become
simplicial (i.e. they identify simplices to simplices), then the
quotients will have a cellular structure and the corresponding
quotient maps will be cellular with respect to these structures.
As we know, one obtains a nice and natural ${\mathfrak S_n}$-equivariant
simplicial structure on the product if one works with \textit{ordered}
simplicial complexes \cite{liao, nakaoka, dwyer}. We write $X_\bullet$
for the abstract simplicial (i.e. triangulated) complex of which $X$
is the realization. So we assume $X_\bullet$ to be endowed with a
partial ordering on its vertices which restricts to a total ordering
on each simplex. Let $\prec$ be that ordering. A point $w =
(v_1,\ldots, v_n)$ is a vertex in $X^n_\bullet$ if and only if $v_i$
is a vertex of $X_\bullet$. Different vertices
\begin{equation}\label{vertices}
w_0 = (v_{01}, v_{02}, \ldots, v_{0n})\ ,\ldots,\ w_k = (v_{k1},
v_{k2},\ldots, v_{kn})
\end{equation}
span a $k$-simplex in $X^n_\bullet$ if, and only if, for each $i$,
the $k+1$ vertices $v_{0i},v_{1i},\ldots, v_{ki}$ are contained in a
simplex of $X$ and $v_{0i}\prec v_{1i}\prec\cdots\prec v_{ki}$. We
write $\varpi:=[w_0, \ldots, w_k ]$ for such a simplex.
The permutation action of $\tau\in{\mathfrak S_n}$ on $\varpi=[w_0, \ldots, w_k
]$ is given by $\tau \varpi = [\tau w_0, \ldots, \tau w_k ]$. This
is a well-defined simplex since the factors of each vertex $w_j =
(v_{j1}, v_{j2},\ldots, v_{jn})$ are permuted simultaneously
according to $\tau,$ and hence the order $\prec$ is preserved. The
permutation action is then simplicial and $\sp{n}X$ inherits a CW
structure by passing to the quotient.
{\bf Fact 1}: If a point $p:= (x_1,x_2,\ldots, x_{n})\in X^n$ is such
that $x_{i_1}=x_{i_2}=\ldots = x_{i_r}$, then $p$ lies in some
$k$-simplex $\varpi$ whose vertices $[w_0,\ldots, w_k]$ are such that
$v_{ji_1}=v_{ji_2}=\cdots =v_{ji_r}$ for $j=0,\ldots, k$. This implies
that the fat diagonal is a simplicial subcomplex. It also implies that
any permutation that fixes such a point $p$ must fix the vertices of
the simplex it lies in and hence fixes it pointwise. In other words, if a
permutation leaves a simplex invariant then it must fix it pointwise.
{\bf Fact 2}: If $p=(x_1,x_2,\ldots, x_n)$ lies in a simplex $\varpi$ with
vertices $w_0, ..., w_k$ as in (\ref{vertices}), and if $\pi :
X^n{\ra 2} X^i$ is any projection, then $\pi (p)$ lies in the simplex
with vertices $\pi (w_0),\cdots, \pi (w_k)$ (which may or may not be
equal). For instance $\pi (p):=(x_1,\ldots, x_i)$ lies in the simplex
with vertices $(v_{01}, v_{02},\ldots, v_{0i})$, ..., $(v_{k1},
v_{k2},\ldots, v_{ki})$.
We are now in a position to see that $\sub{n}X$ is a CW complex.
Recall that $\sub{n}X=X^n/_\sim$ where
$$(x_1,\ldots, x_n)\sim (y_1,\ldots, y_n)\Longleftrightarrow
\{x_1,\ldots, x_n\} = \{y_1,\ldots, y_n\}$$ Clearly, if
$(x_1,\ldots, x_n)\sim (y_1,\ldots, y_n)$ then $\tau (x_1,\ldots,
x_n)\sim \tau (y_1,\ldots, y_n)$ for $\tau\in\mathfrak S_n$. We wish
to show that these identifications are simplicial. Let's argue
through an example (the general case being identical). We have the
identifications in $Sub_6X$:
\begin{equation}\label{identify}
p:=(x,x,x,y,y,z)\sim (x,x,y,y,y,z)=:q
\end{equation}
By using Fact 2 applied to the projection skipping the third
coordinate and then Fact 1, we can see that $p$ and $q$ lie in
simplices with vertices of the form $(v_1,v_1,?,v_2,v_2,v_3)$. By
using Fact 1 again, $p$ lies in a simplex $\sigma_p$ with vertices of
the form $(v_1,v_1,v_1,v_2,v_2,v_3)$ while $q$ lies in a simplex
$\sigma_q$ with vertices of the form $(v_1,v_1,v_2,v_2,v_2,v_3)$. It
follows that the identification (\ref{identify}) identifies vertices
of $\sigma_p$ with vertices of $\sigma_q,$ and hence identifies
$\sigma_p$ with $\sigma_q$ as desired.
In conclusion, the quotient $\sub{n}X$ inherits a cellular structure
and the composite
$$X^n\fract{\pi}{{\ra 2}}\sp{n}X\fract{q}{{\ra 2}}\sub{n}X$$
is cellular. Since the pairing (\ref{pairing}) is covered by
$X^r\times X^s{\ra 2} X^{r+s},$ which is simplicial (by construction),
and since the projections are cellular, the pairing (\ref{pairing})
must be cellular.
\end{proof}
\begin{remark}\rm We could have worked with simplicial sets instead \cite{biro}.
Similarly, Mostovoy (private communication) indicates how to
construct a simplicial set $\sub{n}X$ out of a simplicial set $X$
such that $|\sub{n}X| = \sub{n}|X|$. This approach will be further
discussed in \cite{sadok}.\end{remark}
The following corollary is also obtained in \cite{biro}.
\begin{corollary}\label{topcell} For $X$ a simplicial complex, $\sub{k}X$ has a
CW decomposition with top cells in dimension $k\dim X,$ so that $H_*(\sub{k}X)
= 0$ for $*> k\dim X$. \end{corollary}
We collect a couple more corollaries
\begin{corollary}\label{same} If $X$ is a $d$-dimensional complex with $d\geq 2$,
then the quotient map $\sp{n}X\to\sub{n}X$ induces a homology
isomorphism in top dimension $nd$. \end{corollary}
\begin{proof} When $X$ is as in the hypothesis,
$\sub{n-1}X$ is a codimension $d$ subcomplex of $\sub{n}X$ and since
$d\geq 2$, $H_{nd}(\sub{n}X) = H_{nd}(\sub{n}X,\sub{n-1}X)$. On the
other hand, Proposition \ref{celldecomp} implies that
${\bf\Delta}^{n}X $ is a codimension $d$ subcomplex of $\sp{n}X$ so
that $H_{nd}(\sp{n}X)\cong H_{nd}(\sp{n}X,{\bf\Delta}^{n}X )$ as
well. But according to diagram (\ref{pushout2}), we have the
homeomorphism
$$\sp{n}X/{\bf\Delta}^{n}X \cong \sub{n}X/\sub{n-1}X
$$
Combining these facts yields the claim.
\end{proof}
\begin{corollary}\label{sameconnect} Both $\sp{k}X$ and the fat diagonal
${\bf\Delta}^k\subset \sp{k}X$ have the same connectivity as $X$,
and this is sharp. \end{corollary}
\begin{proof} If $X$ is an $r$-connected ordered simplicial
complex, then $X$ admits a simplicial structure so that the
$r$-skeleton $X_r$ is contractible in $X$ to some point $x_0\in X$.
With such a simplicial decomposition we can consider Liao's induced
decomposition $X^k_{\bullet}$ on $X^k$ and its $r$-skeleton $X^k_r$.
Note that
$$X^k_r \subset \bigcup_{i_1+\cdots + i_k\leq r}
X_{i_1}\times X_{i_2}\times\cdots\times X_{i_k} \subset (X_r)^k$$ If
$F :X_r\times I{\ra 2} X$ is a deformation of $X_r$ to $x_0$, then
$F^k$ is a deformation of $(X_r)^k$, hence $X^k_r$, to $(x_0,\ldots,
x_0)$ in $X^k,$ and this deformation is $\mathfrak{S}_k$
equivariant. Since the $r$-skeleton of $\sp{k}X$ is the
$\mathfrak{S}_k$-quotient of $X^k_r$, it is then itself contractible
in $\sp{k}X,$ and this proves the first claim. Similarly, the
simplicial decomposition we have introduced on $X^k$ includes the
fat diagonal $\Lambda^k$ as a subcomplex with $r$-skeleton
$\Lambda_r^k := \Lambda^k\cap X^k_r$. The deformation $F^k$
preserves the fat diagonal and so it restricts to $\Lambda^k$ and
to an equivariant deformation $F^k : \Lambda^k_r\times
I{\ra 2}\Lambda^k$. This means that the $r$-skeleton of
$q(\Lambda^k)=:{\bf\Delta}^k\subset\sp{k}X$ is itself contractible
in ${\bf\Delta}^k,$ and the second claim follows. This bound is
sharp for symmetric products since when $X=S^2$,
$\sp{2}(S^2)={\mathbb P}^2$. It is sharp for the fat diagonal as well since
${\bf\Delta}^3X\cong X\times X$ has exactly the same connectivity as
$X$.
\end{proof}
\section{Connectivity}\label{connect}
As we've established in corollary \ref{fundamental}, finite subset
spaces $Sub_nX,\ n\ge 3,$ are always simply connected. In this
section we further relate the connectivity of $\sub{k}X$ to that of
$X$. We first need the following useful result proved in
\cite{braid}.
\begin{theorem}\label{connectbsp} If $X$ is $r$-connected with $r\geq 1$, then
$\bsp{n}X$ is $2n+r-2$ connected.
\end{theorem}
Example \ref{bsp2} shows that $\bsp{2}(S^k)$ is $k+1$-connected as
asserted. Note that $\bsp{2}(S^2)=S^4$ is $3$-connected, so theorem
\ref{connectbsp} is sharp.
\begin{corollary}\label{nakak} (\cite{nakaoka} corollary 4.7) If $X$ is
$r$-connected, $r\ge 1,$ then $H_{*}(X)\cong H_{*}(\sp{n}X)$ for
$*\leq r+2$. This isomorphism is induced by the map $j_n$ adjoining
the base point.\end{corollary}
\begin{proof} We give a short proof based on theorem
\ref{connectbsp}. By Steenrod's homological splitting \cite{nakaoka}
\begin{equation}\label{steenrod}
H_*(\sp{n}X)\cong \bigoplus_{k=1}^n H_*(\sp{k}X,\sp{k-1}X) =
\bigoplus_{k=2}^n \tilde H_*(\bsp{k}X)\oplus H_*(X)
\end{equation} with
$\sp{0}X=\emptyset$. But $\tilde H_*(\bsp{k}X)=0$ for $*\leq
2k+r-2$. The result follows.
\end{proof}
\begin{remark}\rm Note that corollary \ref{nakak} cannot be improved to $r=0$
(i.e. $X$ connected). It fails already for the wedge $X=S^1\vee S^1$
and $n=2$ since $\sp{2}(S^1\vee S^1)\simeq S^1\times S^1$ (see
\cite{ks}) and hence $H_2(\sp{2}(S^1\vee S^1))\not\cong H_2(S^1\vee S^1)$.
Note also that (\ref{steenrod}) implies that $H_*(X)$ embeds into
$H_*(\sp{n}X)$ for all $n\geq 1$; a fact we will find useful
below.\end{remark}
\begin{proposition}\label{connectha} Suppose $X$ is $r$-connected, $r\geq 1$. Then
$\sub{k}X$ is $r+1$ connected whenever $k\geq 3$. \end{proposition}
\begin{proof} Write $x_0\in X$ for the basepoint and assume $k\geq 3$.
Remember that the $\sub{k}X$ are simply connected for $k\geq 3$
(corollary \ref{fundamental}) so by the Hurewicz theorem if they
have trivial homology up to degree $r+1$, then they are connected up
to that level. We will now show by induction that $H_*(\sub{k}X)=0$
for $*\leq r+1$. The first step is to show that
$H_*(\sp{k}X,{\bf\Delta}^k)=H_*(\sub{k}X,\sub{k-1}X)=0$ for $* \leq
r+1$. We write $i: {\bf\Delta}^k\hookrightarrow\sp{k}X$ for the
inclusion.
From the fact that ${\bf\Delta}^k$ and $\sp{k}X$ have the same
connectivity as $X$ (corollary \ref{sameconnect}), their homology
vanishes up to degree $r$ which implies similarly that the relative
groups are trivial up to that degree. On the other hand $X$ embeds
in ${\bf\Delta}^k$ via $x\mapsto [x,x_0,\cdots, x_0]$ (this is a
well-defined map since $k\geq 3$) and, since the composite $j_k :
X\rightarrow {\bf\Delta^k}\fract{i}{{\ra 2}} \sp{k}X$ is an
isomorphism on $H_{r+1}$ (corollary \ref{nakak}), we see that the
map $i_*: H_{r+1}({\bf\Delta}^k){\ra 2} H_{r+1}(\sp{k}X)$ is
surjective. Hence $H_{r+1}(\sp{k}X, {\bf\Delta}^k)= 0$.
Now since $0=H_*(\sp{k}X,{\bf\Delta}^k)= H_*(\sub{k}X,\sub{k-1}X)$
for $*\leq r+1$, it follows that $H_*(\sub{k-1}X)\cong
H_*(\sub{k}X)$ for $*\leq r$ and that $H_{r+1}(\sub{k-1}X){\ra 2}
H_{r+1}(\sub{k}X)$ is surjective. So if we prove that
$H_*(\sub{3}X)=0$ for $*\leq r+1$, then by induction we will have
proved our claim.
Consider the homology long exact sequences for $(\sub{3}X,\sub{2}X)$
and $(\sp{3}X,{\bf\Delta}^3X),$ where again we identify
${\bf\Delta}^3X$ with $X\times X$. We obtain commutative diagrams
$$\xymatrix{
\ar[r]&H_{r+2}(\sub{3}X,\sub{2}X)\ar[r]&
H_{r+1}(\sub{2}X)\ar[r]^{i_*}&H_{r+1}(\sub{3}X)\ar[r]&0\\
\ar[r]&H_{r+2}(\sp{3}X,X^2)\ar[r]\ar[u]^\cong&
H_{r+1}(X^2)\ar[r]^{\alpha_*}\ar[u]^{q_*}&H_{r+1}(\sp{3}X)\ar[r]\ar[u]^{\pi_*}&0
}
$$
where $\alpha (x,y)=x^2y$ and $\pi : \sp{3}X{\ra 2}\sub{3}X$ is the
quotient map. We want to show that $i_* = 0$ so that by exactness
$H_{r+1}(\sub{3}X)=0$. Now $q_*$ is surjective since the composite
$$X{\ra 2} X\times \{x_0\}
\hookrightarrow X\times X{\ra 2} \sp{2}X=\sub{2}X$$ induces an
isomorphism on $H_{r+1}$ by Corollary \ref{nakak}. Showing that
$i_*=0$ comes down therefore to showing that $\pi_*\circ\alpha_*
=0$. But note that for $r\geq 1,$ which is the connectivity of $X$,
classes in $H_{r+1}(X\times X)$ are necessarily spherical and we
have the following commutative diagram
$$\xymatrix{
\pi_{r+1}X\times\pi_{r+1}(X)\ar[r]^{\ \ \cong}&\pi_{r+1}(X\times X)\ar[rr]\ar[d]^h&&
\pi_{r+1}(\sub{3}(X))\ar[d]^{h}\\
&H_{r+1}(X\times X)\ar[rr]^{\pi_*\circ\alpha_*}&&H_{r+1}(\sub{3}(X))
}$$
where $h$ is the Hurewicz homomorphism. The top map is trivial since
when restricted to each factor $\pi_{r+1}(X)$ it is trivial
according to the useful theorem \ref{handel} below (or to
corollary \ref{cowt}). Since $h$ is surjective,
$\pi_*\circ\alpha_*=0$ and $H_{r+1}(\sub{3}X) = 0$ as desired.
\end{proof}
\section{The Three Fold Finite Subset Space}\label{caveats}
There are many subtle points that come up in the study of finite
subset spaces. We illustrate several of them through the study of
the pair $(\sub{3}X, X)$. The three fold subset space has been
studied in \cite{mostovoy, rose, tuffley1} for the case of the
circle and in \cite{tuffley2} for topological surfaces.
Again all spaces below are assumed to be connected. We say a map is
weakly contractible (or weakly trivial) if it induces the trivial map
on all homotopy groups. The following is based on a cute argument well
explained in \cite{handel} or (\cite{beilinson} section 3.4).
\begin{theorem}\label{handel}\cite{handel} $\sub{k}(X)$ is weakly contractible
in $\sub{2k+1}(X)$.
\end{theorem}
{\sc Caveat 1}: A map $f : A{\ra 2} Y$ being weakly contractible does
not generally imply that $f$ is null homotopic. Indeed let $T$ be
the torus and consider the projection $T{\ra 2} S^2$ which collapses
the one-skeleton. Then this map induces an isomorphism on $H_2$ but
is trivial on homotopy groups since $T=K({\mathbb Z}^2,1)$. Of course if
$A=S^k$ is a sphere, then ``weakly trivial'' and ``null-homotopic''
are the same since the map $A{\ra 2} Y$ represents
the zero element in $\pi_{k}Y$. For example, in (\cite{ch},
lemma 3.3), the authors construct explicitly an extension of the
inclusion $S^n\hookrightarrow\sub{3}(S^n)$ to the disk
$B^{n+1}{\ra 2}\sub{3}(S^n)$, $\partial B^{n+1}=S^n$. This section
argues that this implication doesn't generally hold for
non-suspensions.
\vskip 7pt {\sc Caveat 2}: When comparing symmetric products to
finite subset spaces, one has to watch out for the fact that the
basepoint inclusion $\sp{k}(X){\ra 2}\sp{k+1}(X)$ {\it does not
commute} via the projection maps with the inclusion
$\sub{k}(X){\ra 2}\sub{k+1}(X)$. This has already been pointed out in
example \ref{connectbsp2s} and is further illustrated in the
corollary below.
\begin{corollary}\label{cowt}
The composite $\sp{k}(X){\ra 2}\sp{2k+1}(X){\ra 2}\sub{2k+1}(X)$ is
weakly trivial. \end{corollary}
\begin{proof} This map is equivalent to the composite
\begin{equation}\label{composite}
\sp{k}(X){\ra 2}\sub{k}(X)\fract{\mu
}{{\ra 2}}\sub{k+1}(X,x_0)\hookrightarrow\sub{2k+1}(X)
\end{equation}
where $\mu (\{x_1,\ldots, x_k\}) =\{x_0,x_1,\ldots, x_k\}$, $x_0$ is
the basepoint of $X$ and $\sub{k+1}(X,x_0)$ is the subspace of
$\sub{k+1}(X)$ of all subsets containing this basepoint. Note that
$\mu$ is not an embedding as pointed out in \cite{tuffley2} but is
one-to-one away from the fat diagonal. The key point here is
again (\cite{handel}, Theorem 4.1) which asserts that the inclusion
$$\sub{k+1}(X,x_0)\hookrightarrow\sub{2k+1}(X,x_0)$$
is weakly contractible. This in turn implies that the last map in
(\ref{composite}) is weakly trivial as well and the claim follows.
\end{proof}
{\sc Caveat 3}: For $n\geq 2$, one can embed
$X\hookrightarrow\sub{n}(X)$ in several ways. There is of course the
natural inclusion $j$ giving $X$ as the subspace of singletons. There
is also, for any choice of $x_0\in X,$ the embedding $j_{x_0}:
x\mapsto \{x,x_0\}$. Any two such embeddings for different choices
of $x_0$ are equivalent when $X$ is path-connected (any choice of a
path between $x_0$ and $x'_0$ gives a homotopy between $j_{x_0}$ and
$j_{x'_0}$). It turns out however that $j$ and $j_{x_0}$ are
fundamentally different. The simplest example was already pointed
out for $S^1,$ where $\sub{2}(S^1)$ was the M\"{o}bius band with $j$
being the embedding of the boundary circle while $j_{x_0}$ is the
embedding of an equator.
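Concretely, in $H_1$ of the M\"{o}bius band the boundary circle is homologous
to twice the equatorial (core) circle, so that $j_*=2\,(j_{x_0})_*$ on
$H_1(S^1)\cong{\mathbb Z}$; in particular $j$ and $j_{x_0}$ are not homotopic
in $\sub{2}(S^1)$.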
One might ask whether $j$ is null-homotopic if and only if $j_{x_0}$
is null-homotopic. This is at least true for suspensions, as the next
lemma illustrates.
Recall that a co-$H$ space $X$ is a space whose diagonal map factors
up to homotopy through the wedge; that is, there exists a map $\delta$
such that the composite
$$X\fract{\delta}{{\ra 2}} X\vee X\hookrightarrow X\times X$$
is homotopic to the diagonal $\Delta : X{\ra 2} X\times X, x\mapsto
(x,x)$. A cogroup $X$ is a co-$H$ space that is co-associative with
a homotopy inverse. This latter condition means there is a map $c:
X{\ra 2} X$ such that $X\fract{\delta}{{\ra 2}}X\vee X\fract{c\vee
1}{{\ra 2}} X$ is null-homotopic. This is in fact the definition of a
left inverse but it implies the existence of a right inverse as well
\cite{arkowitz}. If $X$ is a cogroup, then for every based space
$Y$, the set of based homotopy classes of based maps $[X,Y]$ is a
group. The suspension of a space is a cogroup and there exist
several interesting cogroups that are not suspensions
(\cite{arkowitz}, \S4).
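A standard example is a suspension $\Sigma Y$: the pinch map collapsing the
equatorial copy of $Y$,
$$\Sigma Y\fract{\delta}{{\ra 2}}\Sigma Y\vee \Sigma Y\ \ ,\ \
\delta [t,y]=\begin{cases} ([2t,y],*)& t\leq 1/2\\ (*,[2t-1,y])& t\geq 1/2
\end{cases}$$
gives the co-$H$ structure, and reversing the suspension coordinate,
$c[t,y]=[1-t,y]$, gives the homotopy inverse.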
Write $j_{x_0}: X\hookrightarrow\sub{3}(X,x_0)$ for the map
$x\mapsto\{x,x_0\}$. Its composite with the inclusion
$\sub{3}(X,x_0)\hookrightarrow\sub{3}(X)$ is also written $j_{x_0}$.
\begin{lemma}\label{null} Suppose $X$ is a cogroup. Then the embeddings $j_{x_0}:
X\hookrightarrow\sub{3}(X,x_0)$ and $j: X\hookrightarrow\sub{3}(X)$
are null-homotopic. \end{lemma}
\begin{proof} The argument in \cite{handel} extends to this situation.
We deal with $j_{x_0}$ first. This is a based map at $x_0.$ Its
homotopy class $[j_{x_0}]$ lives in the group
$G=[X,\sub{3}(X,x_0)]$. The following composite is checked to be
again $j_{x_0}$.
$$j_{x_0} : X\fract{\Delta}{{\ra 2}}X\times
X\fract{j_{x_0}+j_{x_0}}{\ra 4}\sub{3}(X,x_0)$$ This factors up to
homotopy through the wedge $\iota : X\fract{\delta}{{\ra 2}}X\vee
X\fract{j_{x_0}\vee j_{x_0}}{\ra 3}\sub{3}(X,x_0)$. Of course
$[\iota ]=[j_{x_0}]$. But observe that $[\iota]=2[j_{x_0}]$ by
definition of the additive structure of $G$. This means that
$[j_{x_0}]=2[j_{x_0}];$ thus $[j_{x_0}]=0$ and $j_{x_0}$ is null-homotopic
(through a homotopy fixing $x_0$).
Let's now apply this to the inclusion $j: X\hookrightarrow\sub{3}(X)$
which is assumed to be based at $x_0$. We also denote the composite
$X\fract{j_{x_0}}{{\ra 2}}\sub{3}(X,x_0){\ra 2}\sub{3}X$ by $j_{x_0}$. Using the
co-H structure as before we get the commutative diagram
$$\xymatrix{
X\ar[rr]^{\Delta}\ar[d]^\delta&&X\times X\ar[d]^{j+j}\\
X\vee X\ar[rr]^{j_{x_0}\vee j_{x_0}}&& \sub{3}(X)
}
$$
Since $j_{x_0}$ was just shown to be null-homotopic, so is
$j_{x_0}\vee j_{x_0}$, and hence so is $j = (j+ j)\circ \Delta$.
\end{proof}
Let's now turn to the second part of theorem \ref{essential}.
\subsection{The Space $\sub{3}(X,x_0)$}
The preceding discussion shows the usefulness of looking at the based
finite subset space $\sub{n}(X,x_0)$. We start with a key
computation. Write $\Delta$ for the diagonal $X{\ra 2}\sp{2}X$,
$x\mapsto [x,x]$, and identify the image of $j_*:
H_*(X)\hookrightarrow H_*(\sp{2}(X))$ with $H_*(X)$ by the Steenrod
homological splitting (\ref{steenrod}).
\begin{lemma}\label{computation} Let $X$ be a compact cell complex. Then
$H_*(\sub{3}(X,x_0)) = H_*(\sp{2}X)/I$ where $I$ is the submodule
generated by $\Delta_*c-c, c\in H_*(X)\hookrightarrow H_*(\sp{2}X)$.
\end{lemma}
\begin{proof}
Start with the map $\alpha : \sp{2}(X){\ra 2}\sub{3}(X,x_0),
[x,y]\mapsto \{x,y,x_0\}$ which is surjective and generically
one-to-one (i.e. one-to-one on the subspace of points $[x,y]$ with
$x\neq y$). Observe that $\alpha ([x,x]) = \alpha ([x,x_0])$. This
implies that $\sub{3}(X,x_0)$ is homeomorphic to the identification
space
\begin{equation}\label{wahid}
\sp{2}(X)/\sim \ \ , \ \ [x,x]\sim [x,x_0]\ \ , \ \forall x\in X
\end{equation} In order to compute the homology of this quotient we
will replace it with the following space
\begin{equation}\label{thneen}
W_2(X) := \sp{2}(X)\sqcup X\times I/\sim\ \ ,\ \ [x,x]\sim (x,1)\ ,\
[x,x_0]\sim (x,0)\ ,\ [x_0,x_0]\sim (x_0,t)
\end{equation}
It is not hard to see that (\ref{wahid}) and (\ref{thneen}) are
homotopy equivalent. We can easily see that these spaces are homology
equivalent as follows (this is enough for our purpose). There is a
well-defined map $g : W_2(X){\ra 2}\sp{2}(X)/\sim$ sending $[x,y]\mapsto
[x,y], (x,t)\mapsto [x,x_0]$. The inverse image $g^{-1}([x,y])$ is the
single point $[x,y]$ if $x\neq y$ and both points are different from
$x_0$. The inverse
image of $[x,x]$ or $[x,x_0]$ is an interval when $x\neq x_0$, hence
contractible, and it is a point when $x=x_0$. In all cases preimages
under $g$ are acyclic and hence $g$ is a homology equivalence by the
Begle-Vietoris theorem. The homology structure of $\sub{3}(X,x_0)$ can
be made much more apparent using the form (\ref{thneen}) and this is
why we have introduced it.
Let $(C_*(\sp{2}(X)),\partial) $ be a chain complex for $\sp{2}(X)$
containing $C_*(X)$ as a subcomplex and for which the diagonal map
$X{\ra 2}\sp{2}X$ is cellular. Associate to $c\in C_i(X)$ a chain $|c|$
in degree $i+1$ representing $I\times c\in C_{i+1}(I\times X)$ if
$c\neq x_0$ (the $0$-chain representing the basepoint). We write
$|C_*(X)|$ for the set of all such chains. The geometry of our
construction gives a chain complex for $W_2(X)$ as follows
\begin{equation}\label{complex}
C_*(W_2(X)) = C_*(\sp{2}(X))\oplus |C_*(X)|
\end{equation}
with boundary $d$ such that $d(c)=\partial c$ and
$$d|c| = c - \Delta_*(c)-|\partial c| $$ This comes from the formula
for the boundary of the product of two cells which is in general given
by $\partial (\sigma_1\times\sigma_2) =\partial
(\sigma_1)\times\sigma_2 + (-1)^{|\sigma_1|}\sigma_1\times \partial
(\sigma_2)$. We check indeed that $d\circ d=0$. To compute the
homology we need to understand cycles and boundaries in this chain
complex. Write a general element of (\ref{complex}) as $\alpha+
|c|$. The boundary of this element is $\partial \alpha + c -
\Delta_*(c)-|\partial c|, $ and it is zero if, and only if, $\partial
\alpha = \Delta_*(c) - c$ and $|\partial c|=0$. That is if, and only
if, $c$ is a cycle and $\Delta_*(c) - c$ is a boundary. This means
that in $H_*(\sp{2}(X))$, $\Delta_*(c) = c$. We claim this is not
possible unless $c=0$. Indeed, if $c$ is a positive dimensional
(homology) class, then $\Delta_*(c) = c\otimes 1 + \sum c'\otimes c''
+ 1\otimes c$ in $H_*(X\times X)$ and hence in $H_*(\sp{2}(X))$,
$\Delta_*(c) = 2c + \sum c'*c''$ where by definition $c'*c'' =
q_*(c'\otimes c'')$, $q : X\times X{\ra 2}\sp{2}(X)$ the
projection. This can never be equal to $c$ since $\sum c'*c''\in
H_*(\sp{2}X,X)$.
To recapitulate, $\alpha+|c|$ is a cycle if, and only if, $\alpha$ is
a cycle and $c=0$. The only cycles in $C_*(W_2(X))$ are those that are
already cycles in the first summand $C_*(\sp{2}(X))$. On the other
hand, among these classes the only boundaries consist of boundaries in
$C_*(\sp{2}(X))$ and those of the form $\Delta_*(c)-c$ with $c$ a
cycle in $C_*(X)$ (in particular the only $0$-cycle is represented by
$x_0$). This proves our claim.
\end{proof}
\begin{remark}\rm\label{pushoutsub}
Alternatively, we could have noticed the existence
of a pushout diagram
$$\xymatrix{ X\vee
X\ar[d]^{fold}\ar[r]^f&\sp{2}X\ar[d]^\alpha\\ X\ar[r]^{j_{x_0}\ \ \ \ }&
\sub{3}(X,x_0) }$$
where $f(x,x_0) = [x,x]$ is the diagonal and $f(x_0,x) = [x,x_0]$. We
can in fact deduce lemma \ref{computation} from this pushout. We can
also deduce that $\sub{3}(X,x_0)$ is simply connected if $X$ is. We
use this useful fact to establish proposition \ref{equivalent} next.
\end{remark}
Note that lemma \ref{computation} above says that
$H_*(\sub{3}(X,x_0))$ only depends on $H_*(X)$ and on its coproduct
(i.e. on the cohomology of $X$). When $X$ is a suspension the
situation becomes simpler. The following result is a nice
combination of lemmas \ref{null} and \ref{computation}.
\begin{proposition}\label{equivalent} There is a homotopy equivalence
$\sub{3}(\Sigma X,x_0)\simeq\bsp{2}(\Sigma X)$.\end{proposition}
\begin{proof} When $X$ is a suspension, all classes are primitive so that
$\Delta_*(c)=2c$ for all $c\in H_*(X)$. Combining Steenrod's
splitting (\ref{steenrod}):
$$H_*(\sp{2}X)\cong H_*(X)\oplus H_*(\sp{2}X,X)$$ with lemma
\ref{computation} we deduce immediately that $H_*(\sub{3}(\Sigma
X,x_0))\cong H_*(\bsp{2}(\Sigma X))$. Both spaces are simply
connected (by remark \ref{pushoutsub} and theorem \ref{connectbsp})
and so it is enough to exhibit a map between them that induces this
homology isomorphism. Consider the map $\alpha : \sp{2}(\Sigma
X){\ra 2}\sub{3}(\Sigma X,x_0)$, $[x,y]\mapsto \{x,y,x_0\}$ as in the
proof of lemma \ref{computation}. Its restriction to $\Sigma X$ is
null-homotopic according to lemma \ref{null} and hence it factors
through the quotient $\bsp{2}(\Sigma X){\ra 2}\sub{3}(\Sigma
X,x_0)$. By inspection of the proof of lemma \ref{computation}
we see that this map induces an isomorphism on homology.
\end{proof}
\begin{example}\rm\label{bsp2} A description of $\bsp{2}(S^k)$ is
given in (\cite{hatcher}, example 4K.5) from which we infer
that
$$\sub{3}(S^k,x_0)\simeq \Sigma^{k+1}{\mathbb R} P^{k-1}\ \ \ , \ \ k\geq 1$$
This generalizes the calculation in \cite{tuffley2} that
$\sub{3}(S^2,x_0)\simeq S^4$. \end{example}
\subsection{Homology Calculations}
We determine the homology of $\sub{3}(T,x_0)$ and $\sub{3}(T)$ where
$T$ is the torus $S^1\times S^1$. Symmetric products of surfaces are
studied in various places (see \cite{ks,tuffley2} and references
therein). Their homology is torsion free and hence particularly simple
to describe. We will write throughout $q: X^n{\ra 2}\sp{n}X$ for the
quotient map and $q_*(a_1\otimes \ldots\otimes a_n) = a_1*a_2*\cdots
*a_n$ for its induced effect in homology (since our spaces are torsion
free we identify $H_*(X\times Y)$ with $H_*(X)\otimes H_*(Y)$).
\begin{corollary} The inclusion $j: \sub{2}(T,x_0)\hookrightarrow\sub{3}(T,x_0)$ is
essential.\end{corollary}
\begin{proof} We will show that $j_*$ is non-trivial on
$H_2(\sub{2}(T,x_0)) = H_2(T)={\mathbb Z}$. Here $H_*(T)$ is generated by
$e_1,e_2$ in dimension one, and by the orientation class $[T]$ in
dimension two. The groups $H_*(\sp{2}T)$ are given as follows
\cite{ks} (the generators are indicated between brackets)
\begin{equation}\label{homtorus}
\tilde H_*(\sp{2}T) = \begin{cases} {\mathbb Z}\{\gamma_2\},& \dim 4\\
{\mathbb Z}\{e_1*[T], e_2*[T]\},& \dim 3\\
{\mathbb Z}\{[T],e_1*e_2\},&\dim 2\\
{\mathbb Z}\{e_1,e_2\},&\dim 1
\end{cases}
\end{equation}
where $\gamma_2$ is the orientation class $[\sp{2}T]$ ($\sp{2}(T)$
is a compact complex surface). Then $[T]*[T]=2\gamma_2$. Let
$\Delta$ be the diagonal into the symmetric square
$X\fract{\Delta}{{\ra 2}} X\times X\fract{q}{{\ra 2}}\sp{2}(X)$. Since
$\Delta_*([T]) = [T]\otimes 1 + e_1\otimes e_2 - e_2\otimes e_1 +
1\otimes [T]$, and since $q_*([T]\otimes 1) = q_*(1\otimes [T]) =
[T]$ and $q_*(e_1\otimes e_2) = -q_*(e_2\otimes e_1)= e_1*e_2$, we
see that
\begin{equation}\label{diagonal}
\Delta_*([T]) = 2[T] + 2e_1*e_2
\end{equation}
We can consider the composite
$$j_{x_0}: T\fract{\Delta}{{\ra 2}}\sp{2}T\fract{\alpha}{{\ra 2}}
\sub{3}(T,x_0)=\sp{2}T/\sim $$ where $\alpha$ is as in the proof of
lemma \ref{computation}. According to lemma \ref{computation}, using
the expression of the diagonal in (\ref{diagonal}), there are
classes $a = \alpha_*[T] , b = \alpha_*(e_1*e_2)$ with $a = -2b\neq
0$. But $(j_{x_0})_*[T]=(\alpha\circ \Delta)_*[T] =
\alpha_*([T])=a,$ and this is non-zero as desired.
\end{proof}
\begin{remark}\rm We can of course complete the calculation of
$H_*(\sub{3}(T,x_0))$ from lemma \ref{computation}. Under
$\alpha_*$, $e_i\mapsto 0$ (primitive classes map to $0$),
$e_1*e_2\mapsto b$, $[T]\mapsto a = -2b$, $e_i*[T]\mapsto c_i,$ and
$\gamma_2\mapsto d,$ so that
$$ H_1 = 0 \ ,\ H_2 = {\mathbb Z}\{b\}\ ,\ H_3={\mathbb Z}\{c_1,c_2\} \ ,\ H_4 =
{\mathbb Z}\{d\}
$$
It is equally easy to write down the homology groups for
$\sub{3}(S,x_0)$ for any genus $g\geq 1$ surface, orientable or not.
\end{remark}
Next we analyze the inclusion $T\hookrightarrow\sub{3}T$ in the case
of the torus (compare \cite{tuffley2}). The starting point is the
pushout (\ref{pushout3}) and the associated Mayer-Vietoris sequence
$$\cdots{\ra 2} H_*(T\times T)\fract{q_*\oplus i_*}{\ra 3}
H_*(\sp{2}T)\oplus H_*(\sp{3}T)\fract{g_* - \pi_*}{\ra 3}
H_*(\sub{3}T){\ra 2} H_{*-1}(T\times T){\ra 2}\cdots
$$
where $q: T\times T{\ra 2}\sp{2}T$ is the quotient map, $i(x,y)=x^2y$,
$g : \sp{2}T\hookrightarrow\sub{3}T$ is the inclusion (here we have
identified $\sp{2}T$ with $\sub{2}T$) and $\pi :
\sp{3}T{\ra 2}\sub{3}T$ is the projection. We focus on degree $2$ and
follow \cite{ks} for the next computations.
We have $H_2(T\times T) = {\mathbb Z}^2$ generated by $[T]\otimes 1$ and
$1\otimes [T]$, $H_2(\sp{2}T) = {\mathbb Z}^2=H_2(\sp{3}T)$ generated by a
class of the same name $[T] = q_*([T]\otimes 1)=q_*(1\otimes [T])$
and by $e_1*e_2$ ; see (\ref{homtorus}). To describe the effect of
$i_*$ we write it as a composite
$$i : T\times T\fract{\Delta\times 1}{\ra 3} T\times T\times T
\fract{q}{{\ra 2}} \sp{3}T$$ This gives $i_*([T]\otimes 1) = 2[T] +
2e_1*e_2$ as in (\ref{diagonal}), while $i_*(1\otimes [T])=[T]$. The
Mayer-Vietoris then looks like
\begin{eqnarray*}
\cdots{\ra 2}\ {\mathbb Z}^2 &\fract{q_*\oplus i_*}{\ra 3}&
{\mathbb Z}^2\oplus{\mathbb Z}^2\fract{g_* - \pi_*}{\ra 3} H_2(\sub{3}T){\ra 2}
H_{1}(T\times T){\ra 2}\cdots\\
(1,0)&{\longmapsto} &((1,0), (2,2))\\
(0,1)&\longmapsto &((1,0), (1,0))
\end{eqnarray*}
This sequence is exact. Observe that the class $((2,2),(0,0))$ is
not in the kernel of $g_*-\pi_*$ because it cannot be in the image
of $q_*\oplus i_*$. This means that $g_*(2,2) \neq 0$. This is all
we need to derive the non-triviality of the map $j:
T\hookrightarrow\sub{3}T$.
\begin{corollary} $j_*([T])\neq 0$. \end{corollary}
\begin{proof} The inclusion $j$ is the composite
$$j : T\fract{\Delta}{{\ra 2}}T\times T\fract{q}{{\ra 2}}
\sp{2}T\fract{g}{{\ra 2}}\sub{3}T$$ so that $j_*([T]) = g_*(2,2),$ and
this is non-trivial as asserted above.
\end{proof}
\section{The Top Dimension}\label{topdim}
Using facts about orientability of configuration spaces of closed
manifolds (\cite{braid} for example) we slightly elaborate on
\cite{handel} and (\cite{tuffley2} theorem 3).
\begin{proposition}\label{top} Suppose $M$ is a closed manifold of dimension $d\geq
2$. Then
$$
H_{nd}(\sp{n}M;{\mathbb Z} ) =
\begin{cases} {\mathbb Z} &\hbox{if}\ d\ \hbox{even and $M$ orientable}\\
0&\hbox{if}\ d\ \hbox{odd or $M$ non-orientable}
\end{cases}
$$
For mod-$2$ coefficients, $H_{nd}(\sp{n}M;{\mathbb F}_2 ) = {\mathbb F}_2$. In all
cases the map
$$H_{nd}(\sp{n}M ){\ra 2} H_{nd}(\sub{n}M )$$
is an isomorphism (Corollary \ref{same}). \end{proposition}
\begin{proof}
When $d=2$ the claim is immediate since, as is well known, $\sp{n}M$
is a closed manifold (orientable if and only if $M$ is; see
\cite{wagner}). Generally our statement follows from the fact that
$\sp{n}(X)$ is an orbifold with codimension $>1$ singularities, and
hence its top homology group is that of a manifold. More explicitly
in our case, let's denote by $B(M,n)$ the configuration space of
finite sets of cardinality $n$ in $M$; that is
$$B(M,n) = \sp{n}M - {\bf\Delta}^n = \sub{n}M - \sub{n-1}M$$
where ${\bf\Delta}^n$ is the singular set consisting of tuples with
at least one repeated entry (the image of the fat diagonal as
defined in \S\ref{basic}). By Poincar\'e duality suitably applied
(\cite{braid}, lemma 3.5)
\begin{equation}\label{dual}
H^i(B(M,n);\pm {\mathbb Z})\cong H_{nd-i}(\sp{n}M, {\bf\Delta}^n;{\mathbb Z} )
\end{equation}
where $\pm{\mathbb Z}$ is the orientation sheaf. By definition
$$H^i(B(M,n),\pm{\mathbb Z} ) = H^i(Hom_{Br_n(M)}(C_*(\tilde B(M,n)),{\mathbb Z} ))$$
where $Br_n(M)=\pi_1(B(M,n))$ is the braid group of $M$, $\tilde
B(M,n)$ is the universal cover of $B(M,n)$ and the action of the
class of a loop on ${\mathbb Z}$ is multiplication by $\pm 1$ according to
whether the loop preserves or reverses orientation. It is known
that $B(M,n)$ is orientable if and only if $M$ is orientable and
even dimensional (\cite{braid}, lemma 2.6). That is we can replace
$\pm{\mathbb Z}$ by ${\mathbb Z}$ if $M$ is orientable and $d$ is even.
Since ${\bf\Delta}^n$ is a subcomplex of codimension $d$ in
$\sp{n}M$, we have $H_{nd-i}(\sp{n}M, {\bf\Delta}^n )\cong
H_{nd-i}(\sp{n}M)$ for $i < d-1$ . In particular, for $i=0$ we
obtain
\begin{equation}\label{dualcn}
H^0(B(M,n);\pm{\mathbb Z} )\cong H_{nd}(\sp{n}M;{\mathbb Z} )
\end{equation}
If $M$ is even dimensional and orientable, $H^0(B(M,n); \pm{\mathbb Z} )
\cong H^0(B(M,n); {\mathbb Z} )={\mathbb Z}$ since $B(M,n)$ is connected if $\dim
M\geq 2$. If $\dim M$ is odd or $M$ is non-orientable, then $B(M,n)$
is not orientable and $H^0(B(M,n);\pm{\mathbb Z} )=0$ (this is because
$H^0(B(M,n);\pm{\mathbb Z} )$ is the subgroup $\{m\in{\mathbb Z}\ |\ g\cdot m = m,\
\forall g\in\pi_1(B(M,n))\}$). This establishes the claim for
the symmetric products and hence for the finite subset spaces
according to corollary \ref{same}.
\end{proof}
\begin{example}\rm For $k\geq 2$ we have
$H_{2k}(\sp{2}S^k)=H_{2k}(\bsp{2}S^k)=H_{k-1}({\mathbb R} P^{k-1})$ (see
example \ref{bsp2}) and this is ${\mathbb Z}$ or $0$ depending on whether
$k$ is even or odd as predicted by proposition \ref{top}. \end{example}
\subsection{The Case of the Circle} When $M=S^1$ , proposition
\ref{top} is not true anymore since $\sp{n}S^1\simeq S^1$ for all
$n\geq 1$, while $\sub{n}(S^1)$ is either $S^n$ or $S^{n-1}$
depending on whether $n$ is odd or even \cite{mp, tuffley1}. It is
still possible to describe in this case the quotient map
$\sp{n}(S^1){\ra 2}\sub{n}(S^1)$ explicitly.
A beautiful theorem of Morton asserts that the multiplication map
$$\sp{n+1}(S^1){\ra 2} S^1$$
is an $n$-disc bundle $\eta_n$ over $S^1$ which is orientable if,
and only if, $n$ is even \cite{morton}. A close scrutiny of Morton's
proof shows that the sphere bundle associated to $\eta_n$ consists
of the image of the fat diagonal ${\bf\Delta}^{n+1}$; i.e. the
singular set. If $Th(\eta_n)$ is the Thom space of $\eta_n$, then
\begin{equation}\label{one}
Th(\eta_n) = \sp{n+1}(S^1)/{\bf\Delta}^{n+1} =
\sub{n+1}S^1/\sub{n}S^1
\end{equation}
Since $\eta_n$ is trivial when $n=2k$ is even, it follows that
\begin{equation}\label{two}
Th(\eta_{2k}) = S^{2k}\wedge S^1_+ = S^{2k+1}\vee S^{2k}
\end{equation}
But as pointed out above, $ \sub{2k+1}(S^1)\simeq S^{2k+1}$. The map
$\sp{2k+1}(S^1){\ra 2}\sub{2k+1}(S^1)$ factors through the Thom space
(\ref{two}) and the top cell maps to the top cell. Combining
(\ref{one}) and (\ref{two}) it is immediate to see that
\begin{lemma} The map $Th(\eta_{2k}){\ra 2}\sub{2k+1}(S^1)$ restricted to the
first wedge summand in (\ref{two}) induces a map
$S^{2k+1}{\ra 2}\sub{2k+1}(S^1)$ which is a homotopy equivalence.\end{lemma}
\section{Manifold Structure}\label{manifold}
In this last section we prove Theorem \ref{main3}. We distinguish
three cases : when the dimension of the manifold is $d>2$, $d=2$ or
$d=1$.
\begin{lemma}\label{n3d2} Suppose $X$ is a manifold of dimension $d>2.$ Then
$\sub{n}X$ is never a manifold if $n\ge 2.$\end{lemma}
\begin{proof}
Consider the projection $X^n{\ra 2}\sub{n}X$ given by identifying
tuples whose sets of coordinates are the same. This projection
restricts to a regular covering of degree $n!$ between the complements $\pi_n
: X^n-\Lambda^n{\ra 2} \sub{n}X - \sub{n-1}X,$ where $\Lambda^n$
as before is the fat diagonal in $X^n$. Suppose $\sub{n}X$ is a manifold of
dimension $nd$ (necessarily). Pick a point in $\sub{n-1}X$ and an
open chart $U$ around it. Now $U\cong{\mathbb R}^{nd}$ and
$Y=U\cap\sub{n-1}X$ is a closed subset in $U$. We can apply
Alexander duality to the pair $(Y,U)$ and obtain
$$H_{nd-i-1}(U-Y)\cong H^i(Y)$$
But $Y\subset\sub{n-1}(X)$ is an open subspace in a simplicial
complex of dimension $(n-1)d$; therefore $H^{nd-2}(Y)=0$ (since $d>
2$) and so $H_1(U-Y)=0$. We can now use an elementary observation of
Mostovoy \cite{mostovoy}: since $U-Y$ is covered
by $\pi_n^{-1}(U-Y),$ a connected \'etale cover of degree $n!$,
$H_1(U-Y)$ cannot be trivial, because the monodromy
gives a surjection $\pi_1(U-Y){\ra 2}{\mathfrak S_n},$ and hence
a non-trivial map $H_1(U-Y){\ra 2}{\mathbb Z}_2$, contradicting $H_1(U-Y)=0$ above.
\end{proof}
Theorem 2.4 of \cite{wagner} shows that our Lemma \ref{n3d2} is
valid if $d=2$ and $n>2$ as well. As opposed to the geometric
approach of Wagner, we provide below a short homological proof of
this result.
\begin{lemma}\label{d2n2} Suppose $X$ is a closed topological surface. Then
$\sub{n}X$ is a manifold if and only if $n=2$. \end{lemma}
\begin{proof}
We will show that if $n\geq 3$, then $\sub{n}(X)$ cannot even have
the homotopy type of a closed manifold by showing that it doesn't
satisfy Poincar\'e duality. We rely on results of \cite{ks} that
give a simple description of a CW decomposition of a space
$\wsp{n}X$ homotopy equivalent to $\sp{n}X$ when $X$ is a two
dimensional complex. Since $X$ is a closed two dimensional manifold,
it has a cell structure of the form $X = \bigvee^r S^1\cup D^2$
where $D^2$ is a two dimensional cell attached to a bouquet of
circles. Each circle corresponds in the cellular chain complex for
$\wsp{n}X$ to a one-dimensional cell generator $e_i$, $1\leq i\leq
r$, while the two dimensional cell is represented by $D$. This chain
complex has a concatenation product $*: C_*(\wsp{r}X)\otimes
C_*(\wsp{s}X){\ra 2} C_*(\wsp{r+s}X)$ under which these cells map to
product cells. The full cell complex for $\wsp{n}X$ is made up of
all products of the form
$$e_{i_1}*\cdots * e_{i_\ell}*\sp{k}D\ \ \ ,\ \ \ i_1+\cdots +i_\ell +
k\leq n$$ where $i_r\neq i_s$ if $r\neq s$, and where $\sp{k}D$ is a
$2k$-dimensional cell represented geometrically by the $k$-th
symmetric product of $D^2$. The boundary $\partial$ is a derivation
and is completely determined on generators by $\partial e_i = 0$ and
$\partial \sp{n}D =
\partial D*\sp{n-1}D$.
If $X = \bigvee^r S^1\cup D$ is a closed manifold, then in mod-$2$
homology, $\partial D = 0$ (the top cell). This implies of course
that $\partial\sp{n}D = 0$ (the top cell of $\sp{n}X$), while
$H_{2n-1}(\sp{n}X;{\mathbb Z}_2)\cong{\mathbb Z}_2^r$ with generators $e_i*\sp{n-1}D$.
This shows in particular that $H_{2n-1}(\sp{n}X;{\mathbb Z}_2)\neq 0$ if
$r\geq 1$; that is if $X$ is not the two sphere. Observe that this
calculation is compatible with Theorem 2 of \cite{tuffley2}.
Now we know that $\sub{n}X$ is simply connected if $n\geq 3$.
Suppose $\sub{n}X$ is a closed manifold; then by Poincar\'e duality
$H_{2n-1}(\sub{n}X;{\mathbb Z}_2) = H_1(\sub{n}X;{\mathbb Z}_2) = 0$. But recall
the pushout diagram (\ref{pushout2}) and its associated
Mayer-Vietoris exact sequence
$$H_{2n-1}(\Delta_n){\ra 2} H_{2n-1}(\sub{n-1}X)\oplus H_{2n-1}(\sp{n}X)
{\ra 2} H_{2n-1}(\sub{n}X){\ra 2} H_{2n-2}(\Delta_n){\ra 2}\cdots
$$
Since $\Delta_n$ and $\sub{n-1}X$ are $(2n-2)$-dimensional
subcomplexes of $\sub{n}X$, their homology in degree $2n-1$
vanishes. The sequence above becomes
$$0{\ra 2} H_{2n-1}(\sp{n}X){\ra 2} H_{2n-1}(\sub{n}X){\ra 2}
H_{2n-2}(\Delta_n){\ra 2}\cdots$$ and $H_{2n-1}(\sp{n}X)$ injects into
$H_{2n-1}(\sub{n}X)$. When $H_1(X)\neq 0$; that is when $X$ is not
the sphere, $H_{2n-1}(\sub{n}X)$ is non-trivial
contradicting Poincar\'e duality.
We are left with the case $\sub{n}(S^2)$ and $n\geq 3$. Here we have
to rely on a calculation of Tuffley \cite{tuffley2} who shows that
\begin{equation}\label{calcul} H_{2n-2}(\sub{n}(S^2)) =
{\mathbb Z}\oplus{\mathbb Z}_{n-1} \end{equation} But $\sub{n}(S^2)$ is
$2$-connected according to Theorem \ref{main1}, and since Poincar\'e duality
and universal coefficients force $H_{2n-2}$ of a simply connected closed
$2n$-manifold to be torsion free, Poincar\'e duality is violated in this
case as well.
\end{proof}
\begin{remark}\rm A computation of the homology of $\sub{n}(S^2)$ for all $n$ and
various field coefficients will appear in \cite{sadok}. It is
however straightforward using the Mayer-Vietoris sequence for the
pushout (\ref{pushout3}) to show that
\begin{equation} \tilde H_*(\sub{3}S^2)\cong\begin{cases}{\mathbb Z}&, *=6\\
{\mathbb Z}\oplus{\mathbb Z}_2&, *=4
\end{cases}
\end{equation}
Similar computations appear in \cite{biro, tuffley2, taamallah}. \end{remark}
Finally we address the case $d=1$. Write $I=[0,1], \dot{I}=(0,1)$.
First of all, $\sp{n}(I)\cong I^n$; in fact it is precisely the
$n$-simplex, since any point of $\sp{n}(I)$ can be written uniquely
as an $n$-tuple $(x_1,\ldots, x_n)$ with $0\leq x_1\leq\cdots\leq
x_n\leq 1$. The quotient map $q_2 : \sp{2}(I){\ra 2}\sub{2}(I)$ is a
homeomorphism and hence every interior point of $\sub{2}(I)$ has a
manifold neighborhood. The same holds for $n=3$ since $\sp{3}(I)$ is the
three simplex
$$\{(x_1,x_2,x_3)\ |\ 0\leq x_1\leq x_2\leq x_3\leq 1\}$$ with $4$
faces: $F_1: \{x_1=0\}$, $F_2: \{x_1=x_2\}$, $F_3:\{x_2=x_3\}$ and
$F_4:\{x_3=1\}$, and the quotient map $q_3 :\sp{3}(I)\rightarrow\sub{3}(I)$
identifies the faces $F_2$ and $F_3$. Such an identification gives
again $I^3$ and $\sub{3}(\dot{I})$ is this simplex
with two faces removed \cite{rose}. For $n>3$, the corresponding map
$q_n$ identifies various faces of the simplex $\sp{n}(I)$ to obtain
$\sub{n}(I),$ but this fails to give a manifold structure on the
quotient, for there are just too many ``branches'' that come together at
a single point in the image of the boundary of this simplex. This is
made precise below.
\begin{lemma}\label{subs1} $\sub{n}(S^1)$ is a closed manifold if and only if
$n=1,3$. \end{lemma}
Observe that if $n$ is even, then $\sub{n}S^1$ cannot be a closed
manifold for a simple reason: no closed manifold of dimension $n$
can be homotopy equivalent to a sphere of dimension $n-1$, since a closed
$n$-manifold has $H_n(\,\cdot\,;{\mathbb Z}_2)={\mathbb Z}_2$ while
$H_n(S^{n-1};{\mathbb Z}_2)=0$.
\begin{proof} (of Lemma \ref{subs1} following \cite{wagner}, Theorem
2.3). Let $M$ be a manifold and $D$ a disc neighborhood of a point
$x\in M$. Then an open neighborhood of $x\in\sub{n}(M)$ is
$\sub{n}(D)$. So if $\sub{n}(D)$ is not a manifold, then neither is
$\sub{n}(M)$. To prove lemma \ref{subs1} we will argue as in
\cite{wagner} that $\sub{n}({\mathbb R})$ is not a manifold for $n\geq 4$.
For a metric space $X$ (with metric $d$), non-empty subsets
$S,T\subset X,$ and fixed elements $s\in S,t\in T,$ we define
\begin{eqnarray*}
d(s,T)&=&\inf\{d(s,t) \bigm | t\in T\}\\
d(S,t)&=&\inf\{d(s,t) \bigm | s\in S\}
\end{eqnarray*}
Then the Hausdorff metric $D$ on $\sub{n}(X)$ is defined to be
$$D(S,T):= \hbox{sup}\{d(s,T) ,d(t,S)\ | \ s\in S, t\in T\}$$
Thus $D(S,T)<\epsilon$ means that each $s\in S$ is within an
$\epsilon$-neighborhood of some point in $T$ and each $t\in T$ is
within an $\epsilon$-neighborhood of some point in $S$.
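For instance, if $S=\{1,2,3\}$ and $T=\{1,3\}$ in $\sub{3}({\mathbb R} )$, then
$d(1,T)=d(3,T)=0$ and $d(2,T)=1$, while $d(S,1)=d(S,3)=0$, so that
$D(S,T)=1$.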
We wish to show that $\sub{n}({\mathbb R} )$ for $n\geq 4$ is not
homeomorphic to ${\mathbb R}^n$. Pick $S=\{1,2,\ldots, n-1\}$ in
$\sub{n-1}({\mathbb R} )$ and for each $i$ consider the open set $C_i$ (in
the Hausdorff metric) of all subsets $\{p_1,\ldots,
p_{n-1},q_i\}\in\sub{n}({\mathbb R} )$ such that $p_j\in (j-\frac{1}{2},
j+\frac{1}{2})$ and $q_i\in (i-\frac{1}{2}, i+\frac{1}{2})$. We then
see that $C_i$ is the set of those $T$ with one or two points in the
$\frac{1}{2}$-neighborhood of $i$ and a single point in the
$\frac{1}{2}$-neighborhood of $j$ for each $j\neq i$. Note that
$C_i\subset U$ where $U=\{T\in\sub{n}({\mathbb R} )\ |\ D(S,T)<1/2\}$.
Observe that
\begin{eqnarray*}
C_1 &=& \sub{2}\left(\frac{1}{2}, \frac{3}{2}\right)\times
\left(\frac{3}{2}, \frac{5}{2}\right)\times\cdots\times
\left(n-1-\frac{1}{2}, n-1 + \frac{1}{2}\right)
\end{eqnarray*}
This is an $n$-dimensional manifold with boundary
$V=U\cap\sub{n-1}({\mathbb R} )$ and in fact one has
$$C_i = \left\{T\in U : T\cap \left(i-\frac{1}{2},i+\frac{1}{2}\right)\
\hbox{has $1$ or $2$ points}\right\} \cup V$$ Clearly $C_1\cup
C_2\cup\cdots \cup C_{n-1} = U$ and more importantly all these open
sets have a common boundary at $V$; i.e. $C_i\cap C_j = V$. If
$n\geq 4$, we can choose at least three such $C_i$; say
$C_1,C_2,C_3$. Then $C_1\cup C_2$ is an open $n$-dimensional
manifold (union over the common boundary $V$). If
$\sub{n}({\mathbb R} )$ were an $n$-dimensional manifold, then by invariance
of domain $C_1\cup C_2$ would be open in it. But
$C_1\cup C_2$ is not open in $\sub{n}({\mathbb R} )$ since every
neighborhood of $\{1,2,\ldots, n-1\}$ must meet $C_3-V$ which is
disjoint from $C_1\cup C_2$ (i.e. ``too many'' branches come
together at that point).
\end{proof}
We conclude this paper with the following cute theorem of Bott,
which is the most significant early result on the subject.
\begin{corollary} (Bott) There is a homeomorphism $\sub{3}(S^1)\cong S^3$. \end{corollary}
\begin{proof} It has been known since Seifert that the
Poincar\'e conjecture holds for Seifert manifolds; that is, if a
Seifert $3$-manifold is simply connected then it is homeomorphic to
$S^3$\footnote{We thank Peter Zvengrowski for reminding us this fact}.
Clearly $\sub{3}(S^1)$ is a Seifert manifold where the action of
$S^1$ on a subset is by multiplication on elements of that subset.
Since it is simply connected (corollary \ref{fundamental}), the
claim follows. Note that the $S^1$-action has two exceptional fibers
consisting of the orbits of $\{1,-1\}$ and $\{1,j,j^2\}$ where
$j=e^{2\pi i/3}$ (compare \cite{tuffley1}).
\end{proof}
\addcontentsline{toc}{section}{Bibliography}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We prove that every Salem number can be realized as the first dynamical degree of an automorphism of a complex simple abelian variety.
Also, by using a similar technique, we prove that, when the dimension of complex simple abelian varieties is fixed, the set of first dynamical degrees of their automorphisms, excluding $1$, has a minimum value.
Moreover, we prove that there are automorphisms of complex simple abelian varieties whose first dynamical degrees are arbitrarily close to $1$ but different from $1$.
These results are inspired by the work of Nguyen-Bac Dang and Thorsten Herrig.
\end{abstract}
\section{Introduction}\label{Introduction}
For a $g$-dimensional compact K\"{a}hler manifold $X$ and a dominant meromorphic map $f\colon X\dashrightarrow X$, the $k$-th dynamical degree $\lambda_k(f)$ of $f$ is defined as
\begin{align*}
\lambda_k(f):=\lim_{n\to+\infty}||(f^n)^{*}:\mathrm{H}^{k,k}(X)\rightarrow\mathrm{H}^{k,k}(X)||^{\frac{1}{n}}
\end{align*}
where $||\cdot||$ is a norm for linear transformations and $\mathrm{H}^{k,k}(X)$ ($0\leq k\leq g$) is the $(k,k)$-Dolbeault cohomology.
Dynamical degrees are always at least $1$ (see e.g. \cite{DS05} or \cite{DS17}).\par
As recalled in Section \ref{Dynamical degree}, if $f$ is a holomorphic map, then $\lambda_k(f)=\rho(f^{*})$, where $\rho$ denotes the spectral radius and $f^{*}$ acts on $\mathrm{H}^{k,k}(X)$.\par
There are several results relating first dynamical degrees to Salem numbers and Pisot numbers (cf.\ Section \ref{Algebraic preparation}).\par
\begin{theorem}[{\cite[Theorem 5.1(1)]{DF01}}]
Let $X$ be a compact K\"{a}hler surface and $f\colon X\dashrightarrow X$ be a bimeromorphic map with $\rho(f^{*})>1$ for the operator $f^{*}$ on $\mathrm{H}^{1,1}(X)$ where $\rho$ is the spectral radius.\par
Then the operator $f^{*}$ has exactly one eigenvalue $\lambda$ of modulus $|\lambda|>1$; this eigenvalue satisfies $\lambda\in\mathbb{R}_{>0}$ and in fact $\lambda=\rho(f^{*})$.
\end{theorem}
\begin{remark}
This theorem implies that the first dynamical degree of a birational map of a projective surface over the complex number field is either $1$, a Salem number, or a Pisot number (cf.\ \cite[Theorem 1.2]{BC16}).
\end{remark}
\begin{theorem}[{\cite[Theorem 2.1 (i)]{DH22}}]\label{DH22 theorem}
Let $\lambda$ be a Salem number of degree $g$.
Then there exists an automorphism $f$ of a $g$-dimensional simple abelian variety $X$ with totally indefinite quaternion multiplication (i.e., the endomorphism algebra of $X$ is a totally indefinite quaternion algebra) with dynamical degrees
\begin{align*}
\lambda_1(f)=\cdots=\lambda_{g-1}(f)=\lambda^2 \quad \textit{and} \quad \lambda_0(f)=\lambda_g(f)=1.
\end{align*}
\end{theorem}
This theorem shows that the square of every Salem number, which is again a Salem number, is realized as the first dynamical degree of an automorphism of some simple abelian variety.\par
The following extension of Theorem \ref{DH22 theorem} is the main result of this paper.
\begin{thma}
Let $P(x)\in\mathbb{Z}[x]$ be an irreducible monic polynomial all of whose roots are either real or of modulus $1$.
Assume at least one root has modulus $1$.
Let $g$ be the degree of the polynomial $P(x)\in\mathbb{Z}[x]$ and let $z_1,z_2,\ldots,z_g$ be the roots of $P(x)$, ordered so that $\abs{z_1}\geq\abs{z_2}\geq\cdots\geq\abs{z_g}$.\par
Then, there is a simple abelian variety $X$ of dimension $g$ and an automorphism $f\colon X\longrightarrow X$ with dynamical degrees
\begin{center}
$\lambda_0(f)=1, \lambda_k(f)=\prod_{i=1}^{k} \abs{z_i}^2\,(1\leq k\leq g)$.
\end{center}
\end{thma}
This theorem is proved in Section \ref{Main theorem} and yields the next corollary in Section \ref{Corollaries}.
\begin{corb}
Any Salem number is realized as the first dynamical degree of an automorphism of a simple abelian variety over $\mathbb{C}$.
\end{corb}
Section \ref{Small first dynamical degree} is devoted to exploring small first dynamical degrees different from $1$, and the next theorem is proved in Section \ref{Small dynamical degree with restricted dimension}.
(This fact is probably well known and the proof is trivial, but we could not find a reference.)
\begin{thmc}
For an integer $g\geq2$,
\begin{align*}
\mathcal{A}_g:=\left\{
\begin{array}{l}
\text{first dynamical degrees of surjective endomorphisms}\\
\text{ of abelian varieties over $\mathbb{C}$ whose dimension is $g$}
\end{array}
\right\}\backslash\{1\},\\
\mathcal{B}_g:=\left\{
\begin{array}{l}
\text{first dynamical degrees of automorphisms}\\
\text{ of simple abelian varieties over $\mathbb{C}$ whose dimension is $g$}
\end{array}
\right\}\backslash\{1\}
\end{align*}
both have a minimum value.
\end{thmc}
However, without restricting the dimension of (simple) abelian varieties, the set of first dynamical degrees different from $1$ does not have a minimum value.
This result is proved in Section \ref{First dynamical degree close to 1}.
\begin{thmd}
The following set does not have a minimum value:
\begin{align*}
\Delta:=\{\text{first dynamical degrees of automorphisms of simple abelian varieties over $\mathbb{C}$}\}\backslash\{1\}.
\end{align*}
\end{thmd}
\noindent
{\bf Acknowledgements.} The author thanks Professor Keiji Oguiso for suggesting some of the problems on dynamical degrees and for his comments on this paper.
He also thanks Long Wang for pointing out an error in the proof during the seminar.
\section{Preliminaries}
\subsection{Dynamical degrees}\label{Dynamical degree}
Let $X$ be a $g$-dimensional compact K\"{a}hler manifold and let $f\colon X\dashrightarrow X$ be a dominant meromorphic map as in Section \ref{Introduction}.
If $f$ is a holomorphic map, then
\begin{align*}
(f^n)^{*}=(f^{*})^n:\mathrm{H}^{k,k}(X)\rightarrow\mathrm{H}^{k,k}(X)
\end{align*}
holds for every $0\leq k\leq g$, and so the dynamical degrees of $f$ are calculated as
\begin{align*}
\lambda_k(f)=\lim_{n\to+\infty}||(f^n)^{*}:\mathrm{H}^{k,k}(X)\rightarrow\mathrm{H}^{k,k}(X)||^{\frac{1}{n}}&=\lim_{n\to+\infty}||(f^{*})^n:\mathrm{H}^{k,k}(X)\rightarrow\mathrm{H}^{k,k}(X)||^{\frac{1}{n}}\\
&=\rho(f^{*}:\mathrm{H}^{k,k}(X)\rightarrow\mathrm{H}^{k,k}(X))
\end{align*}
where $\rho(f^{*})$ is the spectral radius of $f^{*}:\mathrm{H}^{k,k}(X)\rightarrow\mathrm{H}^{k,k}(X)$.
Under this condition, $\lambda_g(f)$ is equal to the number of points in $f^{-1}(a)$ for a general point $a$ in $X$.
Thus, if $f$ is an automorphism of a $g$-dimensional complex projective variety $X$, then $\lambda_g(f)=1$ (cf.\ \cite{DS17}).\par
In particular, let $X$ be a $g$-dimensional abelian variety over $\mathbb{C}$ and write it as $X=V/\Lambda$ where $V=\mathbb{C}^g$ and $\Lambda$ is a $\mathbb{Z}$-lattice in $V$.
In this paper, all abelian varieties are considered over $\mathbb{C}$, and we omit the base field from now on.
An abelian variety $X$ is called \textit{simple} if it contains no abelian subvariety except $0$ and $X$.\par
Let $f\colon X \longrightarrow X$ be a morphism of an abelian variety $X$.
Then, $f$ can be decomposed as $f=t\circ f'$ where $t$ is a translation map and $f'\colon X \longrightarrow X$ is a morphism of $X$ such that $f'(0)=0$ and $f'(x+y)=f'(x)+f'(y)$ for any $x,y\in X$.
A morphism of $X$ which preserves the group structure of $X$ is called an \textit{endomorphism} in this paper.
Also, an endomorphism of $X$ is called an \textit{automorphism} if it has an inverse which is also an endomorphism.
For a translation map $t$ of a complex torus $X$, $t_*=\mathrm{id}:\mathrm{H}_1(X)\longrightarrow\mathrm{H}_1(X)$ holds on singular homology and this implies $(t\circ f')^*=f'^*\circ t^*=f'^*$ on $\mathrm{H}^{k,k}(X)$ and so
\begin{align*}
\lambda_k(t\circ f')=\lambda_k(f').
\end{align*}
Thus, the calculation of the dynamical degrees of morphisms of $X$ can be reduced to that of endomorphisms of $X$ and so from here, we consider only the dynamical degrees of endomorphisms of $X$.\par
Denote by $\mathrm{End}(X)$ the set of endomorphisms of $X$, equipped with its natural ring structure; it is called the endomorphism ring of $X$.
Also the endomorphism algebra of $X$ is defined as $\mathrm{End}_\mathbb{Q}(X):=\mathrm{End}(X)\otimes_\mathbb{Z} \mathbb{Q}$.
If $X$ is a simple abelian variety, then the endomorphism algebra $\mathrm{End}_\mathbb{Q}(X)$ is a division algebra of finite dimension over $\mathbb{Q}$.
\begin{thm}[{Poincar\'e's Complete Reducibility Theorem (\cite[Theorem 5.3.7]{BL04})}]
Let $X$ be a complex abelian variety.
Then there is an isogeny
\begin{align*}
X\rightarrow X_1^{n_1}\times\cdots\times X_r^{n_r},
\end{align*}
where each $X_i$ is a simple abelian variety and $n_i\in\mathbb{Z}_{>0}$.\par
Moreover, $X_i$ and $n_i$ are unique up to isogenies and permutations.
\end{thm}
\begin{remark}[{cf.\ \cite[Corollary 5.3.8]{BL04}}]\label{Poincare remark}
By using the fact that the endomorphism algebra of a simple abelian variety is a division algebra, the above isogeny induces
\begin{align*}
\mathrm{End}_\mathbb{Q}(X)\simeq \mathrm{M}_{n_1}(F_1)\oplus\cdots\oplus\mathrm{M}_{n_r}(F_r),
\end{align*}
where $F_i=\mathrm{End}_\mathbb{Q}(X_i)$ is a division ring.\par
Moreover, for an abelian variety $X$, $\mathrm{End}_\mathbb{Q}(X)$ is a division algebra if and only if $X$ is a simple abelian variety.
\end{remark}
For an abelian variety $X=V/\Lambda$, $f\in\mathrm{End}(X)$ induces a natural analytic representation $\rho_a(f):V\longrightarrow V$ and a natural rational representation $\rho_r(f):\Lambda\longrightarrow\Lambda$.
$\rho_a(f)$ is identified with an element of $\mathrm{M}_g(\mathbb{C})$ and $\rho_r(f)$ is identified with an element of $\mathrm{M}_{2g}(\mathbb{Z})$.\par
Let $\rho_1, \rho_2, \ldots, \rho_{g}$ be the eigenvalues of $\rho_a(f)$. Then $\rho_1, \overline{\rho_1}, \ldots, \rho_{g}, \overline{\rho_{g}}$ are the eigenvalues of $\rho_r(f)$.\par
Assume $f$ is an automorphism, or more generally, a surjective endomorphism.
Then, $\rho_a(f)$ is an isomorphism and then by renumbering $\rho_1, \rho_2, \ldots, \rho_{g}$ as $\abs{\rho_1}\geq\abs{\rho_2}\geq\cdots\geq\abs{\rho_g}>0$, the $k$-th dynamical degree $\lambda_k(f)$ of $f$ is equal to $\prod_{i=1}^{k}\abs{\rho_i}^2$ (see e.g. \cite[Section 1.3]{DH22}).\par
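For instance, for the multiplication map $m_X\colon X\rightarrow X$, $x\mapsto mx$, with an integer $m\geq1$, we have $\rho_a(m_X)=m\cdot{\mathrm{id}}_V$, so every $\abs{\rho_i}$ equals $m$ and
\begin{align*}
\lambda_k(m_X)=m^{2k}\quad(0\leq k\leq g);
\end{align*}
in particular $\lambda_g(m_X)=m^{2g}$, which is exactly the number of points in $m_X^{-1}(a)$ for a point $a\in X$.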
The following theorems will be useful.
\begin{theorem}[{\cite[Chapter 13.1]{BL04}}]\label{Fix1}
Let $X$ be an abelian variety and let $f$ be an endomorphism of $X$.
Then
\begin{align*}
\#{\mathrm{Fix}(f)}=\mleft|\mathrm{det}({\mathrm{id}}_V-\rho_a(f))\mright|^2=\mathrm{det}({\mathrm{id}}_\Lambda-\rho_r(f))
=\prod_{i=1}^{g}(1-\rho_{i})(1-\overline{\rho_{i}}).
\end{align*}
\end{theorem}
\begin{remark}[{cf.\ \cite[Chapter 13.1]{BL04}}]
Here, $\#{\mathrm{Fix}(f)}$ is defined for a holomorphic map $f$ of a complex torus $X$ as
\begin{align*}
\#{\mathrm{Fix}(f)}:=
\begin{cases}
\text{cardinality of }\mathrm{Fix}(f) & \mathrm{dim}(\mathrm{Fix}(f))=0, \\
0 & \mathrm{dim}(\mathrm{Fix}(f))>0,
\end{cases}
\end{align*}
where $\mathrm{Fix}(f)$ is the analytic subvariety of $X$ consisting of the fixed points of $f$.
$\#{\mathrm{Fix}(f)}$ is invariant under translation maps (cf.\ \cite[Lemma 13.1.1]{BL04}).\par
For $f\in\mathrm{End}(X)$, if $\mathrm{Fix}(f)$ has a positive dimension, then some eigenvalue of $\rho_r(f)$ is equal to $1$ and so $\#{\mathrm{Fix}(f)}=0$.
\end{remark}
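As a simple illustration of Theorem \ref{Fix1}, take $f=m_X$, the multiplication map by an integer $m\geq2$ on a $g$-dimensional abelian variety $X$.
Then $\rho_a(f)=m\cdot{\mathrm{id}}_V$, so
\begin{align*}
\#{\mathrm{Fix}(f)}=\mleft|\mathrm{det}({\mathrm{id}}_V-m\cdot{\mathrm{id}}_V)\mright|^2=(m-1)^{2g},
\end{align*}
which agrees with the fact that $\mathrm{Fix}(m_X)=X[m-1]$, the group of $(m-1)$-torsion points of $X$.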
Assume $X$ is a simple abelian variety so that the endomorphism algebra $B:=\mathrm{End}_\mathbb{Q}(X)$ is a division ring of finite dimension over $\mathbb{Q}$ as above.
The division ring $B$ has center a field $F$, with $\left[B\colon F\right]=d^2$ (cf.\ \cite[\S5]{Dra83}) and $\left[F\colon \mathbb{Q}\right]=e$.
Under this notation, the next theorem holds.
\begin{theorem}[{\cite[Chapter 13.1]{BL04}}]\label{Fix2}
Identify $f\in\mathrm{End}(X)$ as an element of $B:=\mathrm{End}_\mathbb{Q}(X)$ for a simple abelian variety $X$.
Then,
\begin{align*}
\#{\mathrm{Fix}(f)}=\mathrm{Nrd}_{B/\mathbb{Q}}({\mathrm{id}}_X-f)^{\frac{2g}{de}},
\end{align*}
where $\mathrm{Nrd}_{B/\mathbb{Q}}:B\rightarrow \mathbb{Q}$ is the reduced norm map for central simple algebras.
\end{theorem}
\begin{remark}\label{finite-dimensional central simple algebra}
For a finite-dimensional central simple algebra $B$ over a field $K$, the reduced norm $\mathrm{Nrd}_{B/K}$
is calculated as below (cf.\ \cite[\S 22]{Dra83}).
There exists a finite field extension $K'\supset K$ with $B\otimes_K K'\simeq\mathrm{M}_n(K')$ and fix an isomorphism $\phi\colon B\otimes_K K'\rightarrow\mathrm{M}_n(K')$.
Then $\mathrm{Nrd}_{B/K}:B\rightarrow K$
is defined as
\begin{align*}
\mathrm{Nrd}_{B/K}(a):=\mathrm{det}(\phi(a\otimes1))\ (a\in B),
\end{align*}
and this is independent of the field extension of $K$ and the choice of the isomorphism.\par
Also, for a finite field extension $K\supset k$, the reduced norm $\mathrm{Nrd}_{B/k}$
is defined as
\begin{align*}
\mathrm{Nrd}_{B/k}(a):=\mathrm{N}_{K/k}(\mathrm{Nrd}_{B/K}(a))\ (a\in B),
\end{align*}
where $\mathrm{N}_{K/k}$ is the field norm for the field extension $K\supset k$ by considering $K$ as a vector space over $k$.
The details of reduced norms are explained in Section \ref{Algebraic preparation}.
\end{remark}
\subsection{Algebraic preparations}\label{Algebraic preparation}
This subsection is devoted to the notations and the properties which are used later from Section \ref{Endomorphism of simple abelian variety}.
\begin{flushleft}{\bf{Salem numbers, Pisot numbers}}\end{flushleft}
\begin{definition}[{\cite[Chapter 5.2]{BDGGH+92}}]\label{Salem Pisot}
A Salem number is a real algebraic integer $\lambda$ greater than $1$ whose other conjugates have modulus at most $1$, at least one of them having modulus exactly $1$.
A Pisot number is a real algebraic integer $\lambda$ greater than $1$ all of whose other conjugates have modulus less than $1$.
\end{definition}\par
For a Salem number $\lambda$, the minimal polynomial $P(x)\in\mathbb{Z}[x]$ of $\lambda$ is called the Salem polynomial of $\lambda$.
One deduces that $\frac{1}{\lambda}$ is the only root of $P(x)$ whose modulus is less than $1$ and that the set of roots of $P(x)$ can be written as $\mleft\{\lambda,\frac{1}{\lambda}\mright\}\cup\{z_1,\overline{z_1},\ldots, z_k,\overline{z_k}\}$ where the $z_i$ are all of modulus $1$ (cf.\ \cite[p.84]{BDGGH+92}).\par
Thus, the definition of Salem numbers can be rewritten as below.
\begin{definition}\label{Definition of Salem}
A Salem number is a real algebraic integer $\lambda>1$ of degree at least $4$ such that its minimal polynomial $P(x)$ has $\lambda$, $\frac{1}{\lambda}$ as its roots and all other roots have modulus $1$.
\end{definition}
\begin{remark}
Let $\lambda$ be a Salem number and let $g$ be the degree of $\lambda$.
From the above, $g$ is always even; that is, the degree of a Salem number is always even.\par
Also, by this definition, powers of Salem numbers are again Salem numbers.
\end{remark}
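For example, the largest real root $\lambda\approx1.17628$ of Lehmer's polynomial
\begin{align*}
x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1
\end{align*}
is a Salem number of degree $10$; it is the smallest Salem number known.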
\begin{flushleft}{\bf{CM-fields}}\end{flushleft}\par
A number field $K$ is called \textit{totally real} if the image of every $\mathbb{Q}$-embedding $\sigma:K\hookrightarrow\mathbb{C}$ is contained in $\mathbb{R}$.
Also, a number field $K$ is called \textit{totally complex} if the image of every $\mathbb{Q}$-embedding $\sigma:K\hookrightarrow\mathbb{C}$ is not contained in $\mathbb{R}$.
\begin{definition}
A CM-field is a number field $K$ which satisfies the following conditions.
\begin{enumerate}
\item $K$ is a totally complex field.
\item $K$ is a quadratic extension of some totally real number field.
\end{enumerate}
\end{definition}
\begin{flushleft}{\bf{Quaternion algebras}}\end{flushleft}\par
For a field $F$ whose characteristic is not $2$, a quaternion algebra over $F$ is defined as an algebra $B$ over $F$ which has a basis $1,i,j,ij$ over $F$ with $i^2=a, j^2=b, ij=-ji$ ($a, b\in F^{\times}$).
Under this condition, $B$ is written as $\left(\frac{a,b}{F}\right)$ and $1,i,j,ij$ is called an $F$-basis for $B$ (cf.\ \cite[Chapter 2]{Voi21}).\par
There are some properties for quaternion algebras.
\begin{itemize}
\item $\left(\frac{a,b}{F}\right)\simeq\left(\frac{b,a}{F}\right)$
\item $\left(\frac{a,b}{F}\right)\simeq\left(\frac{aa',b}{F}\right)$ ($a'\in F^{\times2}$)
\item For a field extension $F\subset K$, $\left(\frac{a,b}{F}\right)\otimes_F K\simeq\left(\frac{a,b}{K}\right)$.
\item When $F=\mathbb{R}$, $\left(\frac{1,1}{\mathbb{R}}\right)\simeq\left(\frac{1,-1}{\mathbb{R}}\right)\simeq\mathrm{M}_2(\mathbb{R})$ and $\left(\frac{-1,-1}{\mathbb{R}}\right)\simeq\mathbb{H}$.
\end{itemize}\par
Let $B$ be a quaternion algebra over a totally real number field $F$.
$B$ is called \textit{totally indefinite} if $B\otimes_\sigma \mathbb{R}\simeq\mathrm{M}_2(\mathbb{R})$ for every embedding $\sigma\colon F\hookrightarrow\mathbb{R}$.
On the other hand, $B$ is called \textit{totally definite} if $B\otimes_\sigma\mathbb{R}\simeq\mathbb{H}$ for every embedding $\sigma\colon F\hookrightarrow\mathbb{R}$.
By the above properties, $B=\left(\frac{a,b}{F}\right)$ is totally indefinite if and only if either $\sigma(a)>0$ or $\sigma(b)>0$ holds for any $\mathbb{Q}$-embedding $\sigma\colon F\hookrightarrow\mathbb{C}$.\par
In particular, if $a\in\mathbb{Q}_{>0}$, then $B=\left(\frac{a,b}{F}\right)$ is totally indefinite.
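For example, $\left(\frac{-1,-1}{\mathbb{Q}}\right)$ is totally definite since $\left(\frac{-1,-1}{\mathbb{Q}}\right)\otimes_\mathbb{Q}\mathbb{R}\simeq\left(\frac{-1,-1}{\mathbb{R}}\right)\simeq\mathbb{H}$, while $\left(\frac{2,b}{F}\right)$ is totally indefinite for every totally real number field $F$ and every $b\in F^{\times}$, since $\sigma(2)=2>0$ for every embedding $\sigma\colon F\hookrightarrow\mathbb{R}$.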
\begin{flushleft}{\bf{Anti-involutions}}\end{flushleft}
\begin{definition}[{\cite[Definition 3.2.1]{Voi21}}]
For a field $F$ and an algebra $B$ over $F$, an $F$-linear map $\phi:B\rightarrow B$ which satisfies the following conditions is called an \textit{anti-involution} over $F$.
\begin{enumerate}
\item $\phi(1)=1$
\item $\phi(\phi(x))=x$ ($x\in B$)
\item $\phi(xy)=\phi(y)\phi(x)$ ($x, y\in B$)
\end{enumerate}
\end{definition}
\begin{definition}[{cf.\ \cite[Definition 8.4.1]{Voi21}}]
For a subfield $F\subset\mathbb{R}$ and a finite-dimensional algebra $B$ over $F$, an anti-involution $\phi:B\rightarrow B$ over $F$ is called \textit{positive} over $F$ if $\mathrm{Tr}(\phi(x)x)>0$ for any nonzero $x\in B$.
Here, for an element $a\in B$, $\mathrm{Tr}(a)$ is defined as the trace of the left multiplication map $a:B\rightarrow B$ where $B$ is considered as a vector space over $F$.
\end{definition}
\begin{remark}
Let $B$ be an algebra over a number field $F$ and $\phi:B\rightarrow B$ be an anti-involution over $F$.
Then, $\phi:B\rightarrow B$ is also an anti-involution over $\mathbb{Q}$.
But, a positive anti-involution over $F$ is not always a positive anti-involution over $\mathbb{Q}$.
In this paper, positivity of anti-involutions is always considered over $\mathbb{Q}$.
\end{remark}
\begin{example}\label{anti-involution}
For a quaternion algebra $B=\mleft(\frac{a,b}{F}\mright)$ with an $F$-basis $1,i,j,ij$, the quaternion conjugate on $B$ is the map
\begin{align*}
x_1+x_2 i+x_3 j+x_4 ij\longmapsto x_1-x_2 i-x_3 j-x_4 ij\ (x_i \in F)
\end{align*}
which is an anti-involution over $F$ and denote this map by $x\mapsto \overline{x}$ (\cite[Chapter 3.2]{Voi21}).
\end{example}\par
Let $B$ be a central simple algebra over a field $K$ and $\sigma$ be an anti-involution on $B$ over some field, which may be other than $K$.
For $a\in K$, $\sigma(a)$ is an element of $K$ since for all $b\in B$,
\begin{align*}
\sigma(a)b=\sigma(\sigma^{-1}(b)a)=\sigma(a\sigma^{-1}(b))=b\sigma(a),
\end{align*}
and so $\sigma$ can be restricted to $K$.
\begin{definition}\label{second kind}
Let $B$ be a central simple algebra over a field $K$.\par
An anti-involution $\sigma$ on $B$ is called of the \textit{first kind} if $\sigma$ fixes $K$ pointwise.
Otherwise, $\sigma$ is called of the \textit{second kind}.
\end{definition}
\begin{theorem}[{cf.\ \cite[Theorem 5.5.3]{BL04}}]\label{construction of positive anti-involution}
Let $B$ be a totally indefinite quaternion algebra of finite dimension over $\mathbb{Q}$ with center a totally real number field $K$.
Assume $B$ is a division algebra.
Then, a positive anti-involution $\phi:B\rightarrow B$ over $\mathbb{Q}$ can be written as
\begin{align*}
\phi(x)=c^{-1}\overline{x}c
\end{align*}
where $c\in B\backslash K$ with $c^2\in K$ totally negative (i.e., the conjugates are all real and negative).
\end{theorem}
\begin{theorem}[{cf.\ \cite[Chapter 21]{Mum70}, \cite[Theorem 5.5.6]{BL04}}]\label{existence of positive anti-involution}
Let $B$ be a division algebra of finite dimension over $\mathbb{Q}$ with center a CM-field $K$.\par
Assume that $B$ admits an anti-involution $x\mapsto\tilde{x}$ of the second kind.
Then there exists a positive anti-involution $x\mapsto x'$ of the second kind.
\end{theorem}
\begin{flushleft}{\bf{Orders}}\end{flushleft}
\begin{definition}
Let $R$ be an integral domain and define $K=\mathrm{Frac}(R)$.
For a finite-dimensional algebra $B$ over $K$, a subset $\mathcal{O}\subset B$ which satisfies the following conditions is called an $R$-order.
\begin{enumerate}
\item $\mathcal{O}$ is a finitely generated $R$-submodule of $B$
\item $\mathcal{O}$ spans $B$ over $K$
\item $\mathcal{O}$ is closed under multiplication induced from $B$
\end{enumerate}
\end{definition}\par
Often, the definition of an $R$-order is applied for the case $R=\mathbb{Z}$, $K=\mathbb{Q}$.
\begin{examples}
Here are some examples of a $\mathbb{Z}$-order (cf.\ \cite[Chapter 10]{Voi21}, \cite[Chapter 1]{BL04}).
\begin{itemize}
\item For a number field $K$, the ring of integers $\mathcal{O}_K$ is a $\mathbb{Z}$-order of $K$.
Also, $\mathcal{O}_K$ is the maximal $\mathbb{Z}$-order with respect to the inclusion.
\item For a quaternion algebra $\left(\frac{a,b}{\mathbb{Q}}\right)$ ($a,b\in\mathbb{Z}\backslash\{0\}$) with a $\mathbb{Q}$-basis $1,i,j,ij$, $\mathbb{Z}\oplus\mathbb{Z}i\oplus\mathbb{Z}j\oplus\mathbb{Z}ij$ is a $\mathbb{Z}$-order of $\left(\frac{a,b}{\mathbb{Q}}\right)$.
\item For an abelian variety $X$, $\mathrm{End}(X)$ is a $\mathbb{Z}$-order of $\mathrm{End}_\mathbb{Q}(X)$.
\end{itemize}
\end{examples}
\begin{flushleft}{\bf{Reduced norms, Reduced characteristic polynomials}}\end{flushleft}\par
Let $B$ be a finite-dimensional central simple algebra over a number field $K$ with $[B:K]=d^2$ for some integer $d\in\mathbb{Z}_{>0}$.\par
As in Remark \ref{finite-dimensional central simple algebra}, for some finite Galois extension $K'/K$, an isomorphism
\begin{align*}
\phi\colon B\otimes_K K'\stackrel{\simeq}{\longrightarrow}\mathrm{M}_d(K')
\end{align*}
is induced.
For $\alpha\in B$, $\mathrm{Nrd}_{B/K}(\alpha)$ is calculated as $\mathrm{det}(\phi(\alpha\otimes1))$.
Define
\begin{align*}
p_\alpha(n)=\mathrm{Nrd}_{B/K}(n-\alpha)=\mathrm{det}(nI-\phi(\alpha\otimes1))
\end{align*}
as a polynomial in $n$.
$p_\alpha(n)$ is invariant under all $\sigma\in \mathrm{Gal}(K'/K)$ and so the coefficients of $p_\alpha(n)$ are all in $K$.
Thus, $p_\alpha(x)\in K[x]$ and this polynomial is called the reduced characteristic polynomial of $\alpha$.
Let $P(x)\in K[x]$ be the minimal polynomial of $\alpha$ over $K$; this is also the minimal polynomial of $\alpha\otimes1\in B\otimes_K K'$ and hence of $\phi(\alpha\otimes1)\in\mathrm{M}_d(K')$.
Thus, the roots of the reduced characteristic polynomial $p_\alpha(x)\in K[x]$ are the same as the roots of the minimal polynomial $P(x)$ by linear algebra.
Moreover, because of the minimality, the reduced characteristic polynomial can be written as $p_\alpha(x)=P(x)^s$ for some $s\in\mathbb{Z}_{>0}$.
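For illustration, let $B=\left(\frac{a,b}{K}\right)$ be a quaternion algebra over $K$ (so $d=2$) and let $\alpha=y_1+y_2i+y_3j+y_4ij\in B$.
A standard computation (cf.\ \cite{Voi21}) gives
\begin{align*}
p_\alpha(x)=x^2-(\alpha+\overline{\alpha})x+\alpha\overline{\alpha}=x^2-2y_1x+(y_1^2-ay_2^2-by_3^2+aby_4^2),
\end{align*}
where $x\mapsto\overline{x}$ is the quaternion conjugate of Example \ref{anti-involution}; in particular $\mathrm{Nrd}_{B/K}(\alpha)=\alpha\overline{\alpha}$.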
\begin{flushleft}{\bf{Dedekind domains}}\end{flushleft}\par
Let $R$ be an integral domain and $K$ be its fraction field.\par
A fractional ideal of $R$ is a non-zero $R$-submodule $I$ of $K$ such that $aI\subset R$ for some non-zero $a\in R$.
A fractional ideal $I$ of $R$ is called \textit{invertible} if there exists a fractional ideal $J$ of $R$ such that $IJ=R$.
\begin{definition}[{cf.\ \cite[Vol 2, Chapter 9.5]{Coh89}}]
A Dedekind domain is an integral domain $R$ which satisfies the following equivalent conditions.
\begin{enumerate}
\renewcommand{\arabic{enumi}.}{(\roman{enumi})}
\item Every non-zero ideal $I\subsetneq R$ is invertible.
\item Every non-zero ideal $I\subsetneq R$ is expressed as a finite product of the prime ideals uniquely.
\item $R$ is Noetherian, integrally closed and all non-zero prime ideals are maximal ideals.
\end{enumerate}
\end{definition}
Let $\mathcal{O}$ be a Dedekind domain and $F$ be its fraction field.
Let $K/F$ be a finite separable field extension and $\mathcal{O}'$ be the integral closure of $\mathcal{O}$ in $K$.
Then $\mathcal{O}'$ is a Dedekind domain with the fraction field $K$ (cf.\ \cite[Chapter I, Proposition 8.1]{Neu99}).\par
For the pair $(\mathcal{O},F,\mathcal{O}',K)$ constructed just now and a non-zero prime ideal $\mathfrak{p}\subset\mathcal{O}$, there is the factorization $\mathfrak{p}\mathcal{O}'=\mathfrak{P}_1^{e_1}\cdots\mathfrak{P}_g^{e_g}$ since $\mathcal{O}'$ is a Dedekind domain.\par
The next conditions are equivalent, and when they hold, we say that $\mathfrak{P}$ is over $\mathfrak{p}$.
\begin{itemize}
\item $\mathfrak{P}\cap\mathcal{O}=\mathfrak{p}$
\item $\mathfrak{P}$ appears in the prime ideal factorization of $\mathfrak{p}\mathcal{O}'$
\end{itemize}\par
Take the pair $(\mathcal{O},F,\mathcal{O}',K)$ again and assume $K/F$ is a separable extension of degree $n$.
For a non-zero prime ideal $\mathfrak{p}\subset\mathcal{O}$, there is the prime ideal factorization $\mathfrak{p}\mathcal{O}'=\mathfrak{P}_1^{e_1}\cdots\mathfrak{P}_g^{e_g}$ with $f_i=[\mathcal{O}'/\mathfrak{P}_i:\mathcal{O}/\mathfrak{p}]$ and then
\begin{align*}
n=\sum_{i=1}^g e_if_i.
\end{align*}
Under this condition, the following notions can be defined (cf.\ \cite[Chapter I, \S8]{Neu99}).
\begin{definition}
\begin{enumerate}
\item $\mathfrak{p}$ is \textit{totally split} in $K$ if $e_i=f_i=1$ for all $i$ and $g=n$.
\item $\mathfrak{p}$ is \textit{nonsplit} in $K$ if $g=1$.
\item $\mathfrak{P}_i$ is \textit{unramified} over $F$ if $e_i=1$ and the field extension $\mathcal{O}'/\mathfrak{P}_i\supset\mathcal{O}/\mathfrak{p}$ is separable.
\item $\mathfrak{p}$ is \textit{unramified} in $K$ if all $\mathfrak{P}_i$ are unramified over $F$.
\end{enumerate}
\end{definition}\par
Moreover, by assuming $K/F$ is a Galois extension,
\begin{align*}
e_1=\cdots=e_g=e, f_1=\cdots=f_g=f
\end{align*}
hold and this implies $n=efg$.
This is part of Hilbert's ramification theory (cf.\ \cite[Chapter I, \S9]{Neu99}).\par
On the same assumption,
\begin{align*}
\mathcal{D}_\mathfrak{P}:=\{\sigma\in \mathrm{Gal}(K/F)\mid\sigma(\mathfrak{P})=\mathfrak{P}\}
\end{align*}
is called the decomposition group of $\mathfrak{P}$ and its order is $ef$.\par
Moreover, if $\mathfrak{P}$ is over a prime ideal $\mathfrak{p}\subset\mathcal{O}$, then $(\mathcal{O}'/\mathfrak{P})/(\mathcal{O}/\mathfrak{p})$ is a normal extension and there is a natural surjective group homomorphism
\begin{align*}
\mathcal{D}_\mathfrak{P}\rightarrow\mathrm{Aut}\left((\mathcal{O}'/\mathfrak{P})/(\mathcal{O}/\mathfrak{p})\right).
\end{align*}
If $(\mathcal{O}'/\mathfrak{P})/(\mathcal{O}/\mathfrak{p})$ is separable, then this homomorphism is an isomorphism if and only if $\mathfrak{P}$ is unramified over $F$.
\begin{flushleft}{\bf{Cyclotomic polynomials}}\end{flushleft}\par
In this paper, for a positive integer $n$, let $\zeta_n$ be the primitive $n$-th root of unity with the smallest positive argument, i.e., $\zeta_n=e^{\frac{2\pi i}{n}}$.\par
Let $\Phi_n(x)\in\mathbb{Z}[x]$ be the minimal polynomial of $\zeta_n$ over $\mathbb{Q}$.
This is called a cyclotomic polynomial.
The degree of $\Phi_n(x)$ is Euler's totient function $\varphi(n)$.
Also let $\Psi_n(x)$ be the minimal polynomial of $\zeta_n+\frac{1}{\zeta_n}=2\mathrm{cos}\left(\frac{2\pi}{n}\right)$ over $\mathbb{Q}$ (define $\Psi_4(x)=x$ for the case $n=4$).\par
By the definition, for $n\geq3$, the equation
\begin{align*}
\Phi_n(x)=x^{\frac{\varphi(n)}{2}}\Psi_n\left(x+\frac{1}{x}\right)
\end{align*}
holds and so $\Psi_n(x)$ has degree $\frac{\varphi(n)}{2}$.\par
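As a simple check of this identity (an illustration only), take $n=5$: then $\Phi_5(x)=x^4+x^3+x^2+x+1$, $\Psi_5(x)=x^2+x-1$ and
\begin{align*}
x^{2}\Psi_5\left(x+\frac{1}{x}\right)=x^{2}\left(\left(x+\frac{1}{x}\right)^{2}+\left(x+\frac{1}{x}\right)-1\right)=x^4+x^3+x^2+x+1=\Phi_5(x),
\end{align*}
with $\deg\Psi_5=\frac{\varphi(5)}{2}=2$.\par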
Also, there is a theorem for the constant term of $\Psi_n(x)$.
\begin{theorem}[{cf.\ \cite{ACR16}}]\label{constant term of minimal polynomial}
The absolute value of the constant term of $\Psi_n(x)$ is equal to $1$ except in the following cases.
\begin{enumerate}
\renewcommand{\labelenumi}{\rm{(\roman{enumi})}}
\item $\abs{\Psi_n(0)}=0$ if $n=4$
\item $\abs{\Psi_n(0)}=2$ if $n=2^m$ with $m\in\mathbb{Z}_{\geq0}\backslash\{2\}$
\item $\abs{\Psi_n(0)}=p$ if $n=4p^k$ with $k\in\mathbb{Z}_{>0},\ p\text{ an odd prime number}$
\end{enumerate}
\end{theorem}
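For instance (illustrative examples only), $\Psi_8(x)=x^2-2$ and $\Psi_{12}(x)=x^2-3$ are the minimal polynomials of $2\mathrm{cos}\left(\frac{\pi}{4}\right)=\sqrt{2}$ and $2\mathrm{cos}\left(\frac{\pi}{6}\right)=\sqrt{3}$, so $\abs{\Psi_8(0)}=2$ and $\abs{\Psi_{12}(0)}=3$ fall into cases (ii) and (iii) respectively, while $\abs{\Psi_5(0)}=\abs{-1}=1$.\par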
\begin{flushleft}{\bf{Dirichlet density, \v{C}ebotarev's density theorem}}\end{flushleft}\par
For a number field $K$ with ring of integers $\mathcal{O}_K$, a prime ideal of $\mathcal{O}_K$ is often referred to as a prime ideal of the field $K$.
\begin{definition}[{\cite[Chapter I\hspace{-.1em}I\hspace{-.1em}I \S 1, Chapter V\hspace{-.1em}I\hspace{-.1em}I Definition 13.1]{Neu99}}]
Let $M$ be a set of non-zero prime ideals of a number field $K$.
Denote
\begin{align*}
\mathfrak{N}(\mathfrak{p})=p^{f}
\end{align*}
for a prime ideal $\mathfrak{p}\subset\mathcal{O}_K$ where $p\mathbb{Z}=\mathfrak{p}\cap\mathbb{Z}$ and $f=[\mathcal{O}_K/\mathfrak{p}:\mathbb{Z}/p\mathbb{Z}]$.\par
Under this notation, the limit
\begin{align*}
d(M)=\lim_{s\to1+0}\frac{\sum_{\mathfrak{p}\in M} \mathfrak{N}(\mathfrak{p})^{-s}}{\sum_{\mathfrak{p}} \mathfrak{N}(\mathfrak{p})^{-s}},
\end{align*}
where the denominator is the sum over all non-zero prime ideals, is called the Dirichlet density of $M$ if it exists.
\end{definition}
\begin{remark}\label{infinite property}
In the above definition, the denominator $\sum_{\mathfrak{p}} \mathfrak{N}(\mathfrak{p})^{-s}$ diverges to $+\infty$ as $s\to1+0$ (cf.\ \cite[Chapter V\hspace{-.1em}I\hspace{-.1em}I \S 13]{Neu99}).\par
Thus, if $M$ is a finite set of prime ideals of $K$, then $d(M)=0$.
\end{remark}
\begin{definition}[{cf.\ \cite[Chapter 6.3]{Sam70}}]\label{Frobenius automorphism}
Let $K/F$ be a Galois extension of number fields with Galois group $G=\mathrm{Gal}(K/F)$ and let $\mathfrak{P}$ be a prime ideal of $K$ which is unramified over $F$.
Denote $\mathfrak{p}=\mathfrak{P}\cap\mathcal{O}_F$.\par
Then there is one and only one $\sigma\in G$ such that $\sigma$ induces the map $a\mapsto a^s$ on $\mathcal{O}_K/\mathfrak{P}$ where $s=\#(\mathcal{O}_F/\mathfrak{p})$.
This $\sigma$ is denoted by $\left(\frac{K/F}{\mathfrak{P}}\right)$ and called the Frobenius automorphism.
\end{definition}
\begin{remark}
On the notation in Definition \ref{Frobenius automorphism}, $\sigma(\mathfrak{P})=\mathfrak{P}$ and by considering the isomorphism $\mathcal{D}_\mathfrak{P}\rightarrow\mathrm{Aut}\left((\mathcal{O}_K/\mathfrak{P})/(\mathcal{O}_F/\mathfrak{p})\right)$, $\mathcal{D}_\mathfrak{P}$ is generated by $\sigma$.
\end{remark}
\begin{definition}[{\cite[Chapter V\hspace{-.1em}I\hspace{-.1em}I \S 13]{Neu99}}]
Let $K/F$ be a Galois extension of number fields with Galois group $G$ and take $\sigma\in G$.\par
Define $P_{K/F}(\sigma)$ as the set of prime ideals $\mathfrak{p}$ of $F$, unramified in $K$, for which there is a prime ideal $\mathfrak{P}$ of $K$ over $\mathfrak{p}$ whose Frobenius automorphism $\left(\frac{K/F}{\mathfrak{P}}\right)$ is equal to $\sigma$.
\end{definition}
The density of $P_{K/F}(\sigma)$ is calculated by \v{C}ebotarev's density theorem.
\begin{theorem}[{cf.\ \cite[Chapter V\hspace{-.1em}I\hspace{-.1em}I Theorem 13.4]{Neu99}}]\label{Cebotarev's density theorem}
Let $K/F$ be a Galois extension of number fields with Galois group $G$.
Then for every $\sigma\in G$, the set $P_{K/F}(\sigma)$ has a density, and it is given by
\begin{align*}
d(P_{K/F}(\sigma))=\frac{\#\langle\sigma\rangle}{\# G},
\end{align*}
where $\langle\sigma\rangle:=\{\tau^{-1}\sigma\tau\mid\tau\in G\}$ denotes the conjugacy class of $\sigma$ in $G$.\par
In particular, by Remark \ref{infinite property}, $P_{K/F}(\sigma)$ consists of infinitely many prime ideals.
\end{theorem}
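As a classical illustration (not used directly below), take $F=\mathbb{Q}$ and $K=\mathbb{Q}(\zeta_m)$, so that $G\simeq(\mathbb{Z}/m\mathbb{Z})^{\times}$ is abelian and each conjugacy class is a single element. For a prime $p\nmid m$, the Frobenius automorphism corresponds to $p\ \mathrm{mod}\ m$, and Theorem \ref{Cebotarev's density theorem} recovers Dirichlet's theorem: for each $a$ with $\mathrm{gcd}(a,m)=1$, the set of primes $p\equiv a\ (\mathrm{mod}\ m)$ has Dirichlet density $\frac{1}{\varphi(m)}$.\par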
\section{Endomorphisms of simple abelian varieties}\label{Endomorphism of simple abelian variety}
This section is devoted to examining endomorphisms of simple abelian varieties and their dynamical degrees in detail.
\subsection{Endomorphism algebras of simple abelian varieties}\label{Endomorphism algebra of simple abelian varieties}
Let $X$ be a $g$-dimensional simple abelian variety and define $B:=\mathrm{End}_\mathbb{Q}(X):=\mathrm{End}(X)\otimes_\mathbb{Z}\mathbb{Q}$ as in Section \ref{Dynamical degree}.
$B$ is a division ring of finite dimension over $\mathbb{Q}$ and the Rosati involution $'$ on $B=\mathrm{End}_\mathbb{Q}(X)$ is a positive anti-involution.
Also, the center $K$ of $B$ is a number field and the anti-involution $'$ restricts to $K$.
Defining $K_0:=\{x\in K\mid x'=x\}$, the field $K_0$ is a totally real number field and either $K=K_0$ or $K$ is a totally complex quadratic extension of $K_0$.
Denote $[B:K]=d^2$, $[K:\mathbb{Q}]=e$ and $[K_0:\mathbb{Q}]=e_0$.
In summary, the endomorphism algebra $B=\mathrm{End}_\mathbb{Q}(X)$ can be classified as below (cf.\ \cite[Chapter 5.5]{BL04}).
\begin{table}[hbtp]
\caption{Classification of $\mathrm{End}_\mathbb{Q}(X)$}
\label{table}
\begin{tabular}{c|c|c|c|c|c}
& $B=\mathrm{End}_\mathbb{Q}(X)$ & $K$ & $d$ & $e_0$ & restriction \\
\hline\hline
Type 1 & $K$ & totally real & $1$ & $e$ & $e\mid g$\\
Type 2 & totally indefinite quaternion algebra over $K$ & totally real & $2$ & $e$ & $2e\mid g$\\
Type 3 & totally definite quaternion algebra over $K$ & totally real & $2$ & $e$ & $2e\mid g$\\
Type 4 & division ring with center $K$ & CM-field & $d$ & $\frac{e}{2}$ & $\frac{d^2 e}{2}\mid g$
\end{tabular}
\end{table}
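For example, in the case $g=1$ (an elliptic curve), the table leaves only two possibilities: either $\mathrm{End}_\mathbb{Q}(X)=\mathbb{Q}$ (Type 1 with $e=1$) or $\mathrm{End}_\mathbb{Q}(X)$ is an imaginary quadratic field (Type 4 with $d=1$, $e=2$, $e_0=1$), the latter being the CM case.\par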
\subsection{Automorphisms}\label{Automorphism of simple abelian varieties}
In this subsection, by using the above classification, we analyze automorphisms of a simple abelian variety $X$, in each type.\par
Any endomorphism of $X$ can be considered as an element of $B=\mathrm{End}_{\mathbb{Q}}(X)$ and moreover it is an integral element of $B$.
Thus, any automorphism of $X$ can be considered as an element $\alpha\in B$ such that $\alpha$ and $\alpha^{-1}$ are both integral elements of $B$.\par
$B=\mathrm{End}_\mathbb{Q}(X)$, $K$, $K_0$, $d$, $e$ and $e_0$ are as in Section \ref{Endomorphism algebra of simple abelian varieties}.
Define
\begin{align*}
U_K:=\{x\in\mathcal{O}_K\backslash\{0\}\mid x^{-1}\in\mathcal{O}_K\}
\end{align*}
as the group of the invertible elements in $\mathcal{O}_K$.
\begin{flushleft}{\bf{Type 1}}\end{flushleft}\par
Any automorphism of $X$ can be regarded as an element $\alpha\in\mathcal{O}_K$ whose inverse also lies in $\mathcal{O}_K$, that is, $\alpha\in U_K$.\par
Let $F(x)=x^n+a_1x^{n-1}+\cdots+a_n\in\mathbb{Z}[x]$ be the minimal polynomial of $\alpha$, and then $a_nx^n+a_{n-1}x^{n-1}+\cdots+1$ has the root $\alpha^{-1}$.
This polynomial is $a_n$ times the monic integer-coefficient minimal polynomial of $\alpha^{-1}$, and comparing constant terms shows that $a_n$ divides $1$, so $a_n=\pm1$.\par
Thus, $\alpha$ is a root of a monic polynomial in $\mathbb{Z}[x]$ whose constant term is $\pm1$ and its roots are all real.
\begin{flushleft}{\bf{Type 2, Type 3, Type 4}}\end{flushleft}\par
Any automorphism of $X$ can be regarded as an element $\alpha\in B$ such that both $\alpha$ and $\alpha^{-1}$ are integral elements in $B$.
This implies that the coefficients of the minimal polynomial of $\alpha$ (resp.\ $\alpha^{-1}$) over $K$ all lie in $\mathcal{O}_K$; hence its constant term lies in $U_K$, and this minimal polynomial has degree at most $d$.\par
Also, the minimal polynomial of $\alpha$ over $\mathbb{Q}$ is of the form $x^n+a_1x^{n-1}+\cdots\pm1\in\mathbb{Z}[x]$ as in Type 1.
\subsection{Calculations of dynamical degrees}\label{Calculation}
This subsection is devoted to calculating the values of the first dynamical degree of surjective endomorphisms of simple abelian varieties.\par
Let $X$ be a $g$-dimensional simple abelian variety.
$K$, $K_0$, $d$, $e$ and $e_0$ are as in Section \ref{Endomorphism algebra of simple abelian varieties}.
\begin{flushleft}{\bf{Type 1}}\end{flushleft}\par
For a surjective endomorphism $\alpha\in\mathrm{End}(X)\subset\mathrm{End}_\mathbb{Q}(X)=K$, the minimal polynomial $F(x)\in\mathbb{Z}[x]$ of $\alpha$ of degree $d'$ has only real roots.
Let $\rho_1,\rho_2,\ldots,\rho_g$ be the eigenvalues of $\rho_a(\alpha)$ and define the endomorphism $\phi:=1-(n-\alpha)\colon X\rightarrow X$ for an arbitrary integer $n$.\par
By applying Theorem \ref{Fix1} and Theorem \ref{Fix2} for $\phi:X\rightarrow X$,
\begin{align*}
\#{\mathrm{Fix}(1-(n-\alpha))}&=\prod_{i=1}^{g}(n-\rho_{i})(n-\overline{\rho_{i}}),\\
\#{\mathrm{Fix}(1-(n-\alpha))}&=\mathrm{Nrd}_{K/\mathbb{Q}}(n-\alpha)^{\frac{2g}{1\cdot e}}=\mathrm{N}_{K/\mathbb{Q}}(n-\alpha)^{\frac{2g}{e}}=\mathrm{N}_{\mathbb{Q}(\alpha)/\mathbb{Q}}(n-\alpha)^{\frac{e}{d'}\cdot\frac{2g}{e}}={F(n)}^{\frac{2g}{d'}}.
\end{align*}
Since these two formulas are equal for any integer $n\in\mathbb{Z}$,
\begin{equation*}
\prod_{i=1}^{g}(x-\rho_{i})(x-\overline{\rho_{i}})={F(x)}^{\frac{2g}{d'}}
\end{equation*}
as polynomials in $x$.
Thus, each conjugate of $\alpha$ appears $\frac{2g}{d'}$ times in $\rho_1,\ldots,\rho_g,\overline{\rho_1},\ldots,\overline{\rho_g}$ and the dynamical degrees are calculated by using this.\par
In particular, the first dynamical degree of $\alpha\colon X\rightarrow X$ is the square of the maximal absolute value of the roots of $F(x)$.
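For instance (an illustrative computation), if a surjective endomorphism $\alpha$ of a Type 1 simple abelian variety has minimal polynomial $F(x)=x^2-3x+1$ over $\mathbb{Q}$, whose roots are $\frac{3\pm\sqrt{5}}{2}$, then each of these roots appears $\frac{2g}{2}=g$ times among $\rho_1,\ldots,\rho_g,\overline{\rho_1},\ldots,\overline{\rho_g}$ and the first dynamical degree is $\left(\frac{3+\sqrt{5}}{2}\right)^2=\frac{7+3\sqrt{5}}{2}$.\par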
\begin{flushleft}{\bf{Type 2, Type 3, Type 4}}\end{flushleft}\par
A surjective endomorphism $\alpha\in\mathrm{End}(X)$ can be identified with an element of $B$ whose minimal polynomial over $K$ is a polynomial $G(x)\in\mathcal{O}_K[x]$.
Also, let $F(x)\in\mathbb{Z}[x]$ be the minimal polynomial of $\alpha$ over $\mathbb{Q}$.
Denote the degree of $G(x)$ by $d'\leq d$ and the degree of $F(x)$ by $d''$.\par
Denote the reduced characteristic polynomial of $\alpha\in B$ by $G'(x)$, whose degree is $d$ and as mentioned in Section \ref{Algebraic preparation}, $G'(x)$ can be written as $G'(x)=G(x)^\frac{d}{d'}$.
Let $\rho_1,\rho_2,\ldots,\rho_g$ be the eigenvalues of $\rho_a(\alpha)$ and define the endomorphism $\phi:=1-(n-\alpha)\colon X\rightarrow X$ for an arbitrary integer $n$.\par
By applying Theorem \ref{Fix1} and Theorem \ref{Fix2} for $\phi:X\rightarrow X$,
\begin{align*}
\#{\mathrm{Fix}(1-(n-\alpha))}=\prod_{i=1}^{g}(n-\rho_{i})(n-\overline{\rho_{i}})
\end{align*}
\begin{align*}
\#{\mathrm{Fix}(1-(n-\alpha))}=\mathrm{Nrd}_{B/\mathbb{Q}}(n-\alpha)^{\frac{2g}{de}}=\mathrm{N}_{K/\mathbb{Q}}(\mathrm{Nrd}_{B/K}(n-\alpha))^{\frac{2g}{de}}&=\mathrm{N}_{K/\mathbb{Q}}(G'(n))^{\frac{2g}{de}}\\
&=\prod_{i=1}^{e}(\sigma_i(G'(n)))^{\frac{2g}{de}}\\
&=\prod_{i=1}^{e}(\sigma_i(G(n)))^{\frac{2g}{d'e}}
\end{align*}
where $\left\{\sigma_i\right\}_{1\leq i\leq e}$ is the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.
Since these two formulas are equal for any integer $n\in\mathbb{Z}$,
\begin{align*}
\prod_{i=1}^{g}(x-\rho_{i})(x-\overline{\rho_{i}})&=\prod_{i=1}^{e}(\sigma_i(G(x)))^{\frac{2g}{d'e}}={F(x)}^{\frac{2g}{d''}}
\end{align*}
as polynomials in $x$.
Thus, each conjugate of $\alpha$ appears $\frac{2g}{d''}$ times in $\rho_1,\ldots,\rho_g,\overline{\rho_1},\ldots,\overline{\rho_g}$ and the dynamical degrees are calculated by using this.\par
In particular, the first dynamical degree is the square of the maximal absolute value among the roots of all the polynomials $\sigma_i(G(x))$.\\
\par
As a conclusion of Sections \ref{Automorphism of simple abelian varieties} and \ref{Calculation}, an automorphism $f$ of a simple abelian variety $X$ can be identified with an element of the central simple division algebra $B=\mathrm{End}_\mathbb{Q}(X)$ over $K$.
Denote this element by $\alpha$ and by considering the tower of extensions $K(\alpha)\supset K\supset \mathbb{Q}$, $\alpha$ has a minimal polynomial $F(x)\in\mathbb{Z}[x]$ over $\mathbb{Q}$ with the constant term $\pm1$.\par
Also, the first dynamical degree of $f$ is calculated as the square of the maximal absolute value of the roots of $F(x)$.
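For example, the fundamental unit $\alpha=\frac{1+\sqrt{5}}{2}$ of $\mathbb{Q}(\sqrt{5})$ has minimal polynomial $F(x)=x^2-x-1$ with constant term $-1$, and by Lemma \ref{Construction1} below there is an automorphism of a $2$-dimensional simple abelian variety corresponding to $\alpha$; its first dynamical degree is $\left(\frac{1+\sqrt{5}}{2}\right)^2=\frac{3+\sqrt{5}}{2}$, a value which reappears in Section \ref{Small dynamical degree with restricted dimension}.\par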
\section{Constructions of simple abelian varieties}\label{Construction of simple abelian varieties}
In Section \ref{Main theorem}, the next lemma proposed in \cite{DH22} is the key. This lemma is an immediate consequence of \cite[Chapter 9.4, Chapter 9.9]{BL04} composed with Remark \ref{Poincare remark}.
\begin{lemma}[{\cite[Proposition 2.3]{DH22}}]\label{Construction}
Let $B$ be a totally indefinite quaternion algebra over a totally real number field $F$ with $\left[F:\mathbb{Q}\right]=e$ and $'$ a positive anti-involution on $B$.
Fix an order $\mathcal{O}$ in $B$ and suppose that $B$ is a division algebra.
Then there exists a $2e$-dimensional simple abelian variety $X$ whose endomorphism ring $\mathrm{End}(X)$ contains $\mathcal{O}$.
\end{lemma}
In the same way as this lemma, the next lemmas can be deduced from \cite[Chapter 9]{BL04}.
\begin{lemma}[{cf.\ \cite[Chapter 9.2]{BL04}}]\label{Construction1}
Let $F$ be a totally real number field with $\left[F:\mathbb{Q}\right]=e$ and $'$ be a positive anti-involution on $F$ (e.g., $'=\mathrm{id}_F$).
Fix an order $\mathcal{O}$ in $F$ and let $m$ be a positive integer.
Then there exists an $em$-dimensional simple abelian variety $X$ with an isomorphism $F\stackrel{\simeq}{\longrightarrow}\mathrm{End}_{\mathbb{Q}}(X)$ which induces an injective ring homomorphism $\mathcal{O}\hookrightarrow\mathrm{End}(X)$.
\end{lemma}
\begin{remark}
Consider the case $\mathcal{O}=\mathcal{O}_F$.
Since the ring of integers $\mathcal{O}_F$ of $F$ is a maximal $\mathbb{Z}$-order of $F$ and $\mathrm{End}(X)$ is an order of $\mathrm{End}_\mathbb{Q}(X)$, the $em$-dimensional simple abelian variety $X$ in Lemma \ref{Construction1} satisfies $\mathrm{End}(X)\simeq\mathcal{O}_F$.
\end{remark}
\begin{lemma}[{cf.\ \cite[Chapter 9.4]{BL04}}]\label{Construction2}
Let $B$ be a totally indefinite quaternion algebra over a totally real number field $F$ with $\left[F:\mathbb{Q}\right]=e$ and $'$ be a positive anti-involution on $B$.
Fix an order $\mathcal{O}$ in $B$, suppose that $B$ is a division algebra, and let $m$ be a positive integer.
Then there exists a $2em$-dimensional simple abelian variety $X$ with an isomorphism $B\stackrel{\simeq}{\longrightarrow}\mathrm{End}_{\mathbb{Q}}(X)$ which induces an injective ring homomorphism $\mathcal{O}\hookrightarrow\mathrm{End}(X)$.
\end{lemma}
\begin{lemma}[{cf.\ \cite[Proposition 2.4]{DH22}, \cite[Chapter 9.6]{BL04}}]\label{Construction4}
Let $K$ be a CM-field with $\left[K:\mathbb{Q}\right]=e=2e_0$ for an integer $e_0\in\mathbb{Z}_{>0}$.
Let $B$ be a central simple division algebra over $K$ with $[B:K]=d^2$ and $'$ be a positive anti-involution on $B$.\par
Fix an order $\mathcal{O}$ in $B$ and let $m$ be a positive integer and assume one of the next conditions.
\begin{enumerate}
\item $dm\geq3$
\item $dm=2$ and $e_0\geq2$
\end{enumerate}
Then there exists a $d^2e_0m$-dimensional simple abelian variety $X$ with an isomorphism $B\stackrel{\simeq}{\longrightarrow}\mathrm{End}_{\mathbb{Q}}(X)$ which induces an injective ring homomorphism $\mathcal{O}\hookrightarrow\mathrm{End}(X)$.
\end{lemma}
\begin{remark}
The conditions on $d,e_0,m\in\mathbb{Z}_{>0}$ come from the existence of $r_v,s_v\in\mathbb{Z}_{\geq0}$ ($1\leq v\leq e_0$) satisfying the conditions in \cite[Chapter 9.6, Chapter 9.9]{BL04} listed below.
\begin{itemize}
\item $r_v+s_v=dm$
\item $\sum_{v=1}^{e_0}r_vs_v\neq0$
\item $(r_v,s_v)\neq(1,1)$ for some $v$
\end{itemize}
\end{remark}
\section{Main Theorem}\label{Main theorem}
This section is devoted to proving the next theorem, which is analogous to Theorem \ref{DH22 theorem}. The proof follows \cite{DH22}.
\begin{thm}[{Main Theorem}]\label{Main Theorem}
Let $P(x)\in\mathbb{Z}[x]$ be an irreducible monic polynomial whose roots are either real or of modulus $1$.
Assume at least one root has modulus $1$.
Let $g$ be the degree of the polynomial $P(x)\in\mathbb{Z}[x]$ and $z _1,z _2,\ldots,z_g$ be the roots of $P(x)$ which are ordered as $\abs{z_1}\geq\abs{z_2}\geq\cdots\geq\abs{z_g}$.\par
Then, there is a simple abelian variety $X$ of dimension $g$ and an automorphism $f\colon X\longrightarrow X$ with dynamical degrees
\begin{center}
$\lambda_0(f)=1, \lambda_k(f)=\prod_{i=1}^{k} \abs{z_i}^2\,(1\leq k\leq g)$.
\end{center}
\end{thm}
\begin{remark}
If $g=1$, then either $P(x)=x+1$ or $P(x)=x-1$ holds, and there exists a $1$-dimensional simple abelian variety $X$ with the identity map $\mathrm{id}_X$, which satisfies $\lambda_0(\mathrm{id}_X)=\lambda_1(\mathrm{id}_X)=1$.
Thus, the case $g\geq2$ is worth considering.\par
For the case $g\geq2$, let $\gamma\neq1,-1$ be a root of $P(x)$ whose absolute value is $1$.
Then $\gamma^{-1}=\overline{\gamma}$ is also a root of $P(x)$.
This implies that the minimal polynomial $P(x)=a_gx^g+a_{g-1}x^{g-1}+\cdots+a_1x+a_0\,(a_g=1)$ satisfies $a_i=a_{g-i}\,(0\leq i\leq g)$.
In particular, $a_0=1$, thus $\lambda_g(f)=1$, and this is consistent with $f$ being an automorphism.
\end{remark}
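As an illustration of the hypotheses (not needed for the proof), the polynomial $P(x)=x^4-x^3-x^2-x+1$ is irreducible, has two real roots $\lambda\approx 1.7221$ and $\lambda^{-1}$, and two roots of modulus $1$; Theorem \ref{Main Theorem} then produces a $4$-dimensional simple abelian variety with an automorphism whose dynamical degrees are $1,\lambda^2,\lambda^2,\lambda^2,1$.\par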
For proving Theorem \ref{Main Theorem}, assume $g\geq2$ and take $\gamma$ as in the above remark.
Then $K=\mathbb{Q}(\gamma)$ is a quadratic extension of the totally real number field $F=\mathbb{Q}(\gamma+\gamma^{-1})$ with $[F\colon \mathbb{Q}]=\frac{g}{2}$.
\begin{lemma}[{\cite[Lemma 2.11]{DH22}}]\label{quadratic extension}
There exists an algebraic integer $a\in\mathcal{O}_F$ such that $F(\sqrt{a})=K$.
\end{lemma}
\begin{remark}
$K=F\left(\gamma-\frac{1}{\gamma}\right)$ and so $a$ can be taken as $a=\left(\gamma-\frac{1}{\gamma}\right)^2$ and we adopt this notation.
\end{remark}
On this condition, the next theorem can be applied.
\begin{thm}[{\cite[Theorem 2.5]{DH22}}]\label{divisional}
Let $F$ be a totally real number field and $K=F(\sqrt{a})$ a quadratic extension for $a\in\mathcal{O}_F$.
Then there exists a prime number $p$ such that the quaternion algebra $B=\left(\frac{a,p}{F}\right)$ is a division algebra.
\end{thm}
Since $p\in\mathbb{Z}_{>0}$, the quaternion algebra $B=\left(\frac{a,p}{F}\right)$ splits at every real place of $F$, that is, $B$ is totally indefinite.
In order to apply Lemma \ref{Construction} to $\left(\frac{a,p}{F}\right)$, we need to construct an order in $\left(\frac{a,p}{F}\right)$.
Take an $F$-basis $1,i,j,ij$ of $\left(\frac{a,p}{F}\right)$ with $i^2=a$, $j^2=p$ and $ij=-ji$.
By the embedding $K=F(\sqrt{a})\hookrightarrow \left(\frac{a,p}{F}\right)$, the subset $\mathcal{O}:=\mathcal{O}_K\oplus\mathcal{O}_K j$ is a $\mathbb{Z}$-order in $\left(\frac{a,p}{F}\right)$: indeed, $\mathcal{O}_K$ is a finitely generated $\mathbb{Z}$-submodule of $K$ which spans $K$ over $\mathbb{Q}$, and the relations $j^2=p$ and $jx=\bar{x}j$ for $x\in K$ (where $\bar{\phantom{x}}$ denotes the nontrivial automorphism of $K/F$) show that $\mathcal{O}$ is closed under multiplication.
Also, the next lemma holds.
\begin{lemma}\label{construct positive anti-involution}
There exists a positive anti-involution on $B=\left(\frac{a,p}{F}\right)$ over $\mathbb{Q}$.
\end{lemma}
\begin{remark}
In \cite{DH22}, this part is explained in the proof of \cite[Lemma 2.12]{DH22}, but we could not follow that part of the argument, so we include a proof here.
\end{remark}
\begin{proof}[Proof of Lemma \ref{construct positive anti-involution}]
By Theorem \ref{construction of positive anti-involution}, it suffices to show that there exists $c\in B\backslash F$ such that $c^2\in F$ is totally negative.
For searching $c$, denote $c=xi+yj+zij$ ($x,y,z\in F$) where $1,i,j,ij$ is an $F$-basis of $B$.
A direct calculation gives $c^2=x^2a+y^2p-z^2ap\in F$. Substitute
\begin{align*}
x=pX\left(\gamma^n+\frac{1}{\gamma^n}\right),\ y=Y\left(\gamma^{n+1}+\frac{1}{\gamma^{n+1}}\right),\ z=Z\left(\frac{1}{\gamma}\right)^n\sum_{i=0}^n\gamma^{2i}
\end{align*}
where $X,Y,Z\in\mathbb{Z}_{>0}$ and $n\in\mathbb{Z}_{>0}$.
Then,
\begin{align*}
&c^2=x^2a+y^2p-z^2ap\\
&=p^2X^2\left(\gamma^n+\frac{1}{\gamma^n}\right)^2\left(\gamma-\frac{1}{\gamma}\right)^2+pY^2\left(\gamma^{n+1}+\frac{1}{\gamma^{n+1}}\right)^2-pZ^2\left(\left(\frac{1}{\gamma}\right)^n\sum_{i=0}^n\gamma^{2i}\right)^2\left(\gamma-\frac{1}{\gamma}\right)^2\\
&=p^2X^2\left(\gamma^{2n+2}+\frac{1}{\gamma^{2n+2}}-2\left(\gamma^{2n}+\frac{1}{\gamma^{2n}}\right)+\left(\gamma^{2n-2}+\frac{1}{\gamma^{2n-2}}\right)+2\left(\gamma^2+\frac{1}{\gamma^2}\right)-4\right)\\
&\qquad+pY^2\left(\gamma^{2n+2}+\frac{1}{\gamma^{2n+2}}+2\right)-pZ^2\left(\gamma^{2n+2}+\frac{1}{\gamma^{2n+2}}-2\right).
\end{align*}
Thus,
\begin{align*}
\frac{c^2}{p}=&(X^2p+Y^2-Z^2)\left(\gamma^{2n+2}+\frac{1}{\gamma^{2n+2}}\right)\\
&\quad-X^2p\left(2\left(\gamma^{2n}+\frac{1}{\gamma^{2n}}\right)-\left(\gamma^{2n-2}+\frac{1}{\gamma^{2n-2}}\right)-2\left(\gamma^2+\frac{1}{\gamma^2}\right)\right)\\
&\quad\quad\quad+(-4X^2p+2Y^2+2Z^2)
\end{align*}
and a conjugate of $\frac{c^2}{p}$ can be written as
\begin{align*}
&(X^2p+Y^2-Z^2)\left(\tau^{2n+2}+\frac{1}{\tau^{2n+2}}\right)\\
&-X^2p\left(2\left(\tau^{2n}+\frac{1}{\tau^{2n}}\right)-\left(\tau^{2n-2}+\frac{1}{\tau^{2n-2}}\right)-2\left(\tau^2+\frac{1}{\tau^2}\right)\right)+(-4X^2p+2Y^2+2Z^2)\tag*{($\ast$)}
\end{align*}
where $\tau$ is a conjugate of $\gamma$. Define
\begin{align*}
&S:=\{\text{conjugates of }\gamma\text{ whose modulus is }1\}\\
&T:=\{\text{conjugates of }\gamma\text{ which are real}\}
\end{align*}
with $S\cap T=\emptyset$ since $P(x)\neq x+1,x-1$.\par
For any real number $\tau\neq\pm1$,
\begin{align*}
2\left(\tau^{2N}+\frac{1}{\tau^{2N}}\right)-\left(\tau^{2N-2}+\frac{1}{\tau^{2N-2}}\right)-2\left(\tau^2+\frac{1}{\tau^2}\right)
\end{align*}
diverges to $+\infty$ as $N\rightarrow+\infty$.
Thus, there exists a sufficiently large $N_0\in\mathbb{Z}_{>0}$ such that for any integer $N\geq N_0$,
\begin{align*}
2\left(\tau^{2N}+\frac{1}{\tau^{2N}}\right)-\left(\tau^{2N-2}+\frac{1}{\tau^{2N-2}}\right)-2\left(\tau^2+\frac{1}{\tau^2}\right)>-2
\end{align*}
holds for all $\tau\in T$.
Define
\begin{align*}
\alpha:=\underset{\tau\in S}{\mathrm{max}}\ \left(\tau^2+\frac{1}{\tau^2}\right).
\end{align*}
Now, since $S$ is finite and $-1,1\notin S$, we have $\alpha<2$.
By applying Lemma \ref{analogous to Kronecker's Density Theorem}, which is proved below, to the finite set $\{\tau^2\mid\tau\in S\}$, for any small neighborhood of $1$ there are infinitely many $n$ such that $\tau^{2n}$ lies inside the neighborhood for all $\tau\in S$.
Thus, there exists $N\geq N_0$ such that
\begin{align*}
2\left(\tau^{2N}+\frac{1}{\tau^{2N}}\right)>2\alpha
\end{align*}
and then
\begin{align*}
2\left(\tau^{2N}+\frac{1}{\tau^{2N}}\right)-\left(\tau^{2N-2}+\frac{1}{\tau^{2N-2}}\right)-2\left(\tau^2+\frac{1}{\tau^2}\right)>2\alpha-2-2\alpha=-2
\end{align*}
for all $\tau\in S$.
Thus, there exists $\epsilon>0$ such that
\begin{align*}
2\left(\tau^{2N}+\frac{1}{\tau^{2N}}\right)-\left(\tau^{2N-2}+\frac{1}{\tau^{2N-2}}\right)-2\left(\tau^2+\frac{1}{\tau^2}\right)>-2+\epsilon
\end{align*}
for all $\tau\in S\cup T$.
For this $N$, ($\ast$) is less than
\begin{align*}
(X^2p+Y^2-Z^2)\left(\tau^{2N+2}+\frac{1}{\tau^{2N+2}}\right)+(2-\epsilon)X^2p+(-4X^2p+2Y^2+2Z^2)\\
=(X^2p+Y^2-Z^2)\left(\tau^{2N+2}+\frac{1}{\tau^{2N+2}}\right)+(-(2+\epsilon)X^2p+2Y^2+2Z^2).\tag*{($\ast\ast$)}
\end{align*}
By substituting $Y=1$, there exist infinitely many solutions for the equation
\begin{align*}
1=Z^2-X^2p\quad(X,Z\in\mathbb{Z})
\end{align*}
since this is a Pell equation ($p$ is prime, hence not a perfect square).
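(For instance, purely to illustrate the Pell equation, if $p=2$ then $(X,Z)=(2,3),(12,17),(70,99),\ldots$ satisfy $Z^2-2X^2=1$, and $\abs{X}$ can be taken arbitrarily large.)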
By substituting a solution for this equation,
\begin{align*}
(\ast\ast)=-(2+\epsilon)X^2p+2Y^2+2Z^2=-\epsilon X^2p+4
\end{align*}
and, by taking a solution with $\abs{X}$ sufficiently large, ($\ast\ast$) becomes negative for all $\tau\in S\cup T$; hence, for these $X,Y,Z$ and $N$, $c^2\in F$ is totally negative.
Now the proof is concluded by proving the next lemma.
\begin{lemma}\label{analogous to Kronecker's Density Theorem}
Let $M$ be a finite set of complex numbers whose modulus is $1$ and let $\epsilon>0$ be an arbitrary small positive number.\par
Then, there are infinitely many $n\in\mathbb{Z}_{>0}$ such that $\abs{1-z^n}<\epsilon$ for all $z\in M$.
\end{lemma}
This lemma is reduced to the next lemma via the isomorphism
\begin{align*}
\mathbb{R}/\mathbb{Z}\simeq S^1:=\{z\in\mathbb{C}\mid\abs{z}=1\}.
\end{align*}
\begin{lemma}
Take $r_1,r_2,\ldots,r_m\in\mathbb{R}$ and fix $\epsilon>0$ arbitrary.\par
Then, there are infinitely many $n\in\mathbb{Z}_{>0}$ such that either $\{nr_i\}<\epsilon$ or $\{nr_i\}>1-\epsilon$ holds for each $1\leq i\leq m$.
Here, $\{x\}$ denotes the fractional part of $x\in\mathbb{R}$, that is, $\{x\}=x-\lfloor x\rfloor$.
\end{lemma}
\begin{proof}
Define
\begin{align*}
A:=\{(\{nr_1\},\{nr_2\},\ldots,\{nr_m\})\in[0,1)^m\}_{n\in\mathbb{Z}_{>0}}
\end{align*}
as a subset of $[0,1)^m$.
Let $N\in\mathbb{Z}_{>0}$ be an integer which satisfies $\frac{1}{N}<\epsilon$.
Cut the hypercube $[0,1)^m$ into $N^m$ pieces of hypercubes whose length of a side is $\frac{1}{N}$.\par
Then by applying the pigeonhole principle, there exist $k,k'\in\mathbb{Z}_{>0}$ ($k<k'$) such that $(\{kr_1\},\{kr_2\},\ldots,\{kr_m\})$ and $(\{k'r_1\},\{k'r_2\},\ldots,\{k'r_m\})$ are contained in a common small hypercube.
By the definition of the hypercubes, $n=k'-k$ satisfies the condition in the lemma.\par
Moreover, for any large positive integer $n_0$, define
\begin{align*}
A_{n_0}:=\{(\{nr_1\},\{nr_2\},\ldots,\{nr_m\})\in[0,1)^m\}_{n=1,n_0+1,2n_0+1,\ldots}
\end{align*}
and by the same deduction as above, there exists $n\geq n_0$ which satisfies the condition.
Thus, the proof is concluded.
\end{proof}
\end{proof}
Hence the assumption of Lemma \ref{Construction} holds for $B=\left(\frac{a,p}{F}\right)$ and its order $\mathcal{O}$, and therefore there is a $g$-dimensional simple abelian variety $X$ such that $\mathcal{O}$ is embedded into $\mathrm{End}(X)$.
This simple abelian variety is of Type 2 in Table \ref{table}.
\begin{proof}[Proof of Theorem \ref{Main Theorem}]
$\gamma$ and $\overline{\gamma}=\frac{1}{\gamma}$ are elements of $\mathcal{O}_K$ and hence they are in $\mathcal{O}$, so $\gamma\in \mathrm{End}(X)$ is an automorphism of $X$.\par
Let $\rho_1,\rho_2,\ldots,\rho_g$ be the eigenvalues of $\rho_a(\gamma)$.
By applying Section \ref{Calculation} for the $g$-dimensional simple abelian variety $X$, $B=\mathrm{End}_\mathbb{Q}(X)=\left(\frac{a,p}{F}\right)$, $\alpha=\gamma$, $d=d'=2$, $d''=g$, $e=\frac{g}{2}$ and $F(x)=P(x)$,
\begin{equation*}
\prod_{i=1}^{g}(x-\rho_{i})(x-\overline{\rho_{i}})=P(x)^2=\prod_{i=1}^{g}(x-z_i)^2
\end{equation*}
as polynomials in $x$.\par
Thus, $\rho_1, \ldots, \rho_g,\overline{\rho_1}, \ldots, \overline{\rho_g}$ is a permutation of $z_1, \ldots, z_g, z_1, \ldots, z_g$.\par
Therefore, we may assume that $\abs{\rho_i}=\abs{z_i}$ ($1\leq i\leq g$) and this implies that $\lambda_0(\gamma)=1$, $\lambda_k(\gamma)=\prod_{i=1}^{k} \abs{z_i}^2$ ($1\leq k\leq g$).
\end{proof}
The next similar theorem can be proved in an analogous way.
\begin{theorem}\label{Main Theorem1}
Let $P(x)\in\mathbb{Z}[x]$ be an irreducible monic polynomial whose constant term is $\pm 1$ and whose roots are all real.
Let $g$ be the degree of the polynomial $P(x)\in\mathbb{Z}[x]$ and $z _1,z _2,\ldots,z_g\in\mathbb{R}$ be the roots of $P(x)$ which are ordered as $\abs{z_1}\geq\abs{z_2}\geq\cdots\geq\abs{z_g}$.\par
Then, there is a simple abelian variety $X$ of dimension $g$ and an automorphism $f\colon X\longrightarrow X$ with dynamical degrees
\begin{center}
$\lambda_0(f)=1, \lambda_k(f)=\prod_{i=1}^{k} \abs{z_i}^2\,(1\leq k\leq g)$.
\end{center}
\end{theorem}
\begin{proof}
Let $\delta$ be a root of $P(x)$ and so $\delta$ is an algebraic integer.
The conjugates of $\delta$ are all real, and so $F:=\mathbb{Q}(\delta)$ is a totally real number field.
The identity map on $F$ is a positive anti-involution and so by Lemma \ref{Construction1} with $m=1$, there is a $g$-dimensional simple abelian variety $X$ such that $\mathrm{End}(X)$ contains $\mathcal{O}_F$.
This simple abelian variety is of Type 1 in Table \ref{table}.\par
Since the constant term of $P(x)$ is a unit in $\mathbb{Z}$, $\delta^{-1}$ is also an algebraic integer, and so $\delta, \delta^{-1}$ are in $\mathcal{O}_F$.
Thus, $\delta, \delta^{-1}$ can be regarded as endomorphisms of the simple abelian variety $X$ and so these are automorphisms.
Let $\rho_1,\rho_2,\ldots,\rho_g$ be the eigenvalues of $\rho_a(\delta)$.
By applying Section \ref{Calculation} for the $g$-dimensional simple abelian variety $X$, $K=F$, $\alpha=\delta$, $d=1$, $e=g$, $d'=g$ and $F(x)=P(x)$,
\begin{align*}
\prod_{i=1}^{g}(x-\rho_{i})(x-\overline{\rho_{i}})=P(x)^2=\prod_{i=1}^{g}(x-z_i)^2
\end{align*}
as polynomials in $x$.\par
Thus, $\rho_1, \ldots, \rho_g,\overline{\rho_1}, \ldots, \overline{\rho_g}$ is a permutation of $z_1, \ldots, z_g, z_1, \ldots, z_g$.\par
Therefore, we may assume that $\abs{\rho_i}=\abs{z_i}$ ($1\leq i\leq g$) and this implies that $\lambda_0(\delta)=1$, $\lambda_k(\delta)=\prod_{i=1}^{k} \abs{z_i}^2$ ($1\leq k\leq g$).
\end{proof}
\section{Corollaries}\label{Corollaries}
This section is devoted to corollaries of Theorem \ref{Main Theorem}.
\begin{lemma}\label{Main lemma}
For a Salem number $\lambda>0$ and a positive integer $n\in\mathbb{Z}$, $\sqrt[n]{\lambda}$ has an algebraic conjugate of modulus $1$.
\end{lemma}
\begin{proof}
Let $Q(x)\in\mathbb{Z}[x]$ be the minimal polynomial of $\mu=\sqrt[n]{\lambda}$.\par
The polynomial $R(x)=Q(x)Q(\zeta_n x)\cdots Q(\zeta_n^{n-1}x)$ is invariant under $\mathrm{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q})$, so $R(x)$ has coefficients in $\mathbb{Q}$; moreover, since $R(x)=R(\zeta_n x)$, its non-zero coefficients occur only in degrees $an$ ($a\in\mathbb{Z}_{\geq0}$).
Moreover, all the coefficients of $R(x)$ are integers since they are in $\mathbb{Z}[\zeta_n]$ and so they are algebraic integers in $\mathbb{Q}$.\par
Thus, $S(x)=R(x^{\frac{1}{n}})$ is a polynomial in $\mathbb{Z}[x]$ and $\lambda$ is a root of this polynomial.
Therefore, $S(x)$ is divisible by the Salem polynomial of $\lambda$ and so $S(x)$ has a root of modulus $1$.
This implies that $Q(x)$ has a root of modulus $1$ and the lemma is concluded.
\end{proof}
\begin{corollary}\label{realize salem}
Any Salem number is realized as the first dynamical degree of an automorphism of a simple abelian variety over $\mathbb{C}$.
\end{corollary}
\begin{proof}
Let $\lambda$ be a Salem number, $P(x)$ be the minimal polynomial of $\lambda$ and $Q(x)$ be the minimal polynomial of $\sqrt{\lambda}$.\par
By Lemma \ref{Main lemma} for the case $n=2$, $\sqrt{\lambda}$ has a conjugate of modulus $1$.
$Q(x)$ is a factor of $P(x^2)\in\mathbb{Z}[x]$ and since the roots of $P(x^2)$ are either real or of modulus $1$, so are the roots of $Q(x)$.
Thus, the minimal polynomial of $\sqrt{\lambda}$ satisfies the conditions in Theorem \ref{Main Theorem} and there exists an automorphism of a simple abelian variety for this minimal polynomial.
Now the element $\sqrt{\lambda}$ has the maximal absolute value among the roots of the minimal polynomial and therefore the first dynamical degree of the automorphism is $\sqrt{\lambda}^2=\lambda$.
\end{proof}
The next lemma is used in the proof of Corollary \ref{realize Salem} for analyzing the minimal polynomial of $\sqrt{\lambda}$.
\begin{lemma}[{Kronecker's theorem}]
Let $P(x)\in\mathbb{Z}[x]$ be a monic polynomial whose roots are all of modulus $1$.
Then, the roots of $P(x)$ are all roots of unity.
\end{lemma}
\begin{proof}
Let $z_1, \ldots, z_n$ be the roots of $P(x)$ and write
\begin{align*}
P(x)=\prod_{l=1}^{n}(x-z_l).
\end{align*}
Define $P_k(x)$ as
\begin{align*}
P_k(x)=\prod_{l=1}^{n}(x-{z_l}^k)
\end{align*}
for all $k\in\mathbb{Z}_{>0}$.
Now the monic polynomial $P_k(x)$ is of degree $n$ and its coefficients are integers (they are symmetric expressions in the algebraic integers $z_l$, fixed by the Galois action of the splitting field of $P(x)$, hence rational algebraic integers); by the triangle inequality the coefficient of $x^{n-i}$ has absolute value at most $\binom{n}{i}$, and so the number of candidates for the coefficients of $P_k(x)$ is finite.\par
Thus, $\{P_k(x)\}_{k=1,2,\ldots}$ is a finite set of polynomials and there are some $i, j\in\mathbb{Z}_{>0}$ ($i\neq j$) such that $P_i(x)=P_j(x)$.\par
Therefore, $\{{z_l}^i\}_{1\leq l\leq n}=\{{z_l}^j\}_{1\leq l\leq n}$ and for each $l$, there are relations ${z_l}^i={z_{l^{\prime}}}^j$, ${z_{l^{\prime}}}^i={z_{l^{\prime\prime}}}^j, \ldots$ for some $l',l'',\ldots$.
This implies that ${z_l}^{i^m}={z_l}^{j^m}$ for some $m>0$ and since $i\neq j$, $z_l$ is a root of unity for each $l$.
\end{proof}
\begin{corollary}\label{realize Salem}
For a Salem number $\lambda$ of degree $g$, there is an automorphism of a simple abelian variety, whose dimension is $g$ or $2g$, such that the first dynamical degree is equal to $\lambda$.
\end{corollary}
\begin{proof}
Let $P(x)$ be the minimal polynomial of $\lambda$.
As in Definition \ref{Definition of Salem}, $P(x)$ has $\lambda$, $\frac{1}{\lambda}$ as its roots and all other roots are of modulus $1$.
Now the minimal polynomial $Q(x)$ of $\sqrt{\lambda}$ is a factor of $P(x^2)$ and by Lemma \ref{Main lemma}, $Q(x)$ has a root of modulus $1$.
Thus, the set of the roots of $Q(x)$ is invariant under the map $z\mapsto \frac{1}{z}$.
If $P(x^2)$ is reducible, the set of the roots of $Q(x)$ is one of the following.
\begin{enumerate}
\renewcommand{\labelenumi}{\arabic{enumi}.}
\item\label{(1)} $\sqrt{\lambda}, \frac{1}{\sqrt{\lambda}}, z_1, \overline{z_1}, \ldots, z_n, \overline{z_n}$\quad (where $z_i$ is of modulus $1$ for all $i$)
\item\label{(2)} $\sqrt{\lambda}, \frac{1}{\sqrt{\lambda}}, -\sqrt{\lambda}, -\frac{1}{\sqrt{\lambda}}, z_1, \overline{z_1}, \ldots, z_n, \overline{z_n}$\quad (where $z_i$ is of modulus $1$ for all $i$)
\end{enumerate}\par
In case \ref{(2)}, the roots of $\frac{P(x^2)}{Q(x)}$ are all of modulus $1$, so by Kronecker's theorem they are roots of unity, a contradiction.\par
In case \ref{(1)}, $Q(-x)$ is also a factor of $P(x^2)$, and $Q(x)$, $Q(-x)$ have no common roots.
Moreover, $\frac{P(x^2)}{Q(x)Q(-x)}$ is either a constant or a polynomial whose roots are all of modulus $1$.
The latter leads to a contradiction as above, and so $P(x^2)=Q(x)Q(-x)$ holds by comparing leading coefficients.\par
Thus either $Q(x)=P(x^2)$ or $Q(x)Q(-x)=P(x^2)$ holds for $Q(x)$.
This implies that the minimal polynomial of $\sqrt{\lambda}$ has degree $g$ or degree $2g$ and this corresponds to the dimension of the simple abelian variety (cf.\ Theorem \ref{Main Theorem}).
\end{proof}
\begin{rem}
By combining with Lemma \ref{Construction2}, any Salem number of degree $g$ is realized as the first dynamical degree of an automorphism of a $2g$-dimensional simple abelian variety.\par
By observing the proof of Corollary \ref{realize Salem}, the degree of the minimal polynomial of $\sqrt{\lambda}$ is $g$ if and only if $\lambda$ is the square of some Salem number.\par
Thus, a Salem number $\lambda$ of degree $g$ is realized as the first dynamical degree of an automorphism of a $g$-dimensional simple abelian variety when $\lambda$ is the square of some Salem number.
\end{rem}
Let $\lambda$ be a Salem number of degree $g$.\par
If $\lambda$ is the square of some Salem number, the automorphism $f$ of a $g$-dimensional simple abelian variety constructed in Corollary \ref{realize Salem} satisfies
\begin{align*}
\lambda_0(f)=\lambda_g(f)=1, \lambda_i(f)=\lambda\ (1\leq i\leq g-1)
\end{align*}
as in Theorem \ref{DH22 theorem}.
If $\lambda$ is not the square of any Salem number, the automorphism $f$ of a $2g$-dimensional simple abelian variety constructed in Corollary \ref{realize Salem} satisfies
\begin{align*}
\lambda_0(f)=\lambda_{2g}(f)=1, \lambda_1(f)=\lambda_{2g-1}(f)=\lambda, \lambda_i(f)=\lambda^2\ (2\leq i\leq 2g-2).
\end{align*}\par
By summarizing Theorem \ref{Main Theorem} and Theorem \ref{Main Theorem1}, the next corollary is implied.
\begin{corollary}\label{possible dynamical degrees}
Let $\lambda$ be an algebraic integer of degree $g$ whose conjugates are either real or of modulus $1$, and such that $\frac{1}{\lambda}$ is also an algebraic integer.\par
Then there is an automorphism of a $g$-dimensional simple abelian variety which corresponds to $\lambda$ and the first dynamical degree of this automorphism is the square of the maximal absolute value of the conjugates of $\lambda$.
\end{corollary}
\begin{remark}\label{possible dynamical degrees1}
By combining the corollary with Lemma \ref{Construction1} and Lemma \ref{Construction2}, there is an automorphism, which corresponds to $\lambda$, of a $gm$-dimensional simple abelian variety for any $m\in\mathbb{Z}_{>0}$.
\end{remark}
\section{Small first dynamical degrees}\label{Small first dynamical degree}
In \cite{DH22}, the relative sizes of the dynamical degrees of an automorphism of a simple abelian variety are analyzed.\par
From here, we consider what values appear as the first dynamical degrees of automorphisms of simple abelian varieties.\par
This section is devoted to examining small first dynamical degrees of automorphisms of simple abelian varieties with some restrictions.
\subsection{Small first dynamical degrees with restricted dimension}\label{Small dynamical degree with restricted dimension}
Fix a positive integer $g\geq2$ and let $X$ be an abelian variety whose dimension is $g$.\par
Denote $X=\mathbb{C}^g/\Lambda$ and then the singular homology is calculated as
\begin{align*}
\mathrm{H}_1(X)\simeq\pi^{-1}(0)\simeq\Lambda\simeq\mathbb{Z}^{2g}
\end{align*}
with the universal covering $\pi:\mathbb{C}^g\rightarrow X$.\par
Let $f$ be a surjective endomorphism of $X$.
As in Section \ref{Dynamical degree}, the eigenvalues of $\rho_r(f)$ can be written as $\rho_1,\ldots,\rho_g,\overline{\rho_1},\ldots,\overline{\rho_g}$, and they are also the eigenvalues, with multiplicity, of the induced map $f_*:\mathrm{H}_1(X)\longrightarrow \mathrm{H}_1(X)$.
The characteristic polynomial of the linear transformation $f_*$ is a monic polynomial with integer coefficients and can be written as $p(x)=x^{2g}+a_1x^{2g-1}+\cdots+a_{2g}\in\mathbb{Z}[x]$.\par
Let $c>1$ be a real number.
Assume the first dynamical degree of $f$ is less than $c^2$ and then all the roots of $p(x)$ have absolute values less than $c$.
This restriction gives
\begin{align*}
\abs{a_i}<c^i\binom{2g}{i}\quad(1\leq i\leq 2g)
\end{align*}
and so the number of candidates for the characteristic polynomial $p(x)$ is finite.
Thus, the number of possible first dynamical degrees less than $c^2$ is finite.\par
Choose a real number $c>1$ such that there is a surjective endomorphism of a $g$-dimensional abelian variety whose first dynamical degree is different from $1$ and smaller than $c^2$ (for instance, the multiplication-by-$2$ map has first dynamical degree $4$).
Then the set of first dynamical degrees lying in the open interval $(1,c^2)$ is non-empty and, by the above, finite.
Thus, there is a minimum value of the first dynamical degrees other than $1$, and the next theorem holds.
\begin{theorem}
Define
\begin{align*}
\mathcal{A}_g:=\left\{
\begin{array}{l}
\text{first dynamical degrees of surjective endomorphisms}\\
\text{ of abelian varieties over $\mathbb{C}$ whose dimension is $g$}
\end{array}
\right\}\backslash\{1\}
\end{align*}
for an integer $g\geq 2$.
Then $\mathcal{A}_g$ has a minimum value.
\end{theorem}
By a similar argument, the next theorem also holds.
\begin{theorem}
For an integer $g\geq2$,
\begin{align*}
\mathcal{B}_g:=\left\{
\begin{array}{l}
\text{first dynamical degrees of automorphisms}\\
\text{ of simple abelian varieties over $\mathbb{C}$ whose dimension is $g$}
\end{array}
\right\}\backslash\{1\}
\end{align*}
has a minimum value.
\end{theorem}
\begin{remark}
Fix an integer $g\geq2$.
In low dimensions, the smallest first dynamical degree larger than $1$ among automorphisms of $g$-dimensional simple abelian varieties can be determined explicitly.\par
For example, if $g=2$, then the smallest first dynamical degree larger than $1$ is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2=\frac{3+\sqrt{5}}{2}=2.6180\cdots$.\par
If $g=3$, then the smallest first dynamical degree larger than $1$ is $4\mathrm{cos}^2\left(\frac{\pi}{7}\right)=3.2469\cdots$.\par
If $g=4$, then the smallest first dynamical degree larger than $1$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)=\frac{1+\sqrt{5}}{2}=1.6180\cdots$.\par
If $g=5$, then the smallest first dynamical degree larger than $1$ is $4\mathrm{cos}^2\left(\frac{\pi}{11}\right)=3.6825\cdots$.\par
\end{remark}
\subsection{Possible small first dynamical degrees}\label{possible}
From here, we consider the possible values of the first dynamical degrees of automorphisms of simple abelian varieties which are close to $1$, in each type.
For a simple abelian variety $X$, $B=\mathrm{End}_\mathbb{Q}(X)$ and $K$ are as in Section \ref{Endomorphism algebra of simple abelian varieties}.
The next lemma is used later.
\begin{lemma}\label{cyclotomic}
Let $P(x)=x^n+a_1x^{n-1}+\cdots+a_n\in\mathbb{Z}[x]$ be an irreducible monic polynomial of degree $n\geq2$ which has only real roots and denote the maximal absolute value of the roots of $P(x)$ by $\mu$.
If $\mu<2$, then $P(x)$ is a polynomial which satisfies $x^nP\left(x+\frac{1}{x}\right)=\Phi_N(x)$ for some $N\in\mathbb{Z}_{>0}$ where $\Phi_N(x)$ is the $N$-th cyclotomic polynomial.
Also, the roots of $P(x)$ can be written as $2\mathrm{cos}\left(\frac{2m\pi}{N}\right)$ for some $m\in\mathbb{N}$.\par
Under the notation, the maximal absolute value of the roots of $P(x)$ can be written as
\begin{align*}
\left\{
\begin{array}{ll}
2\mathrm{cos}\left(\frac{2\pi}{N}\right) & (N:\text{even})\\ [+5pt]
2\mathrm{cos}\left(\frac{\pi}{N}\right) & (N:\text{odd})
\end{array}.
\right.
\end{align*}
\end{lemma}
\begin{proof}
By the condition $\mu<2$, the monic polynomial $Q(x)=x^nP(x+\frac{1}{x})\in\mathbb{Z}[x]$ has only roots of modulus $1$.
Thus, by Kronecker's theorem, $Q(x)$ is a product of cyclotomic polynomials and by the irreducibility of $P(x)$, $Q(x)$ is irreducible.
Therefore, $Q(x)$ is a cyclotomic polynomial and can be written as $Q(x)=\Phi_N(x)$ for some $N\in\mathbb{Z}_{>0}$.\par
Then the roots of $P(x)$ can be written as $\zeta_N^m+\frac{1}{\zeta_N^m}$ where $m\in\mathbb{Z}$ is relatively prime to $N$.\par
Thus, the roots of $P(x)$ can be rewritten as
\begin{align*}
\zeta_N^m+\frac{1}{\zeta_N^m}=2\mathrm{cos}\left(\frac{2m\pi}{N}\right),
\end{align*}
and the maximal absolute value of the roots of $P(x)$ is
\begin{align*}
\left\{
\begin{array}{ll}
2\mathrm{cos}\left(\frac{2\pi}{N}\right) & (N:\text{even})\\ [+5pt]
2\mathrm{cos}\left(\frac{\pi}{N}\right) & (N:\text{odd})
\end{array}.
\right.
\end{align*}
\end{proof}
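For instance (an illustration of Lemma \ref{cyclotomic}), $P(x)=x^2-x-1$ has real roots $\frac{1\pm\sqrt{5}}{2}$ with $\mu=\frac{1+\sqrt{5}}{2}<2$, and
\begin{align*}
x^{2}P\left(x+\frac{1}{x}\right)=x^4-x^3+x^2-x+1=\Phi_{10}(x),
\end{align*}
so $N=10$ and the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{2\pi}{10}\right)=2\mathrm{cos}\left(\frac{\pi}{5}\right)=\frac{1+\sqrt{5}}{2}$, as the lemma predicts.\par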
\begin{flushleft}{\bf{Type 1}}\end{flushleft}\par
Let $\alpha$ be a root of an irreducible polynomial of the form $P(x)=x^m+a_1x^{m-1}+\cdots\pm 1\in\mathbb{Z}[x]$ which has only real roots.\par
These cover all automorphisms of simple abelian varieties of Type 1.\par
As in Theorem \ref{Main Theorem1}, any $\alpha$ which is the root of such a polynomial corresponds to an automorphism of some $nm$-dimensional simple abelian variety ($n\in\mathbb{Z}_{>0}$) by Lemma \ref{Construction1}.\par
To search for small first dynamical degrees (each equal to the square of the maximal absolute value of the roots of $P(x)$, as in Section \ref{Calculation}), assume that all roots of $P(x)$ have absolute value less than $2$.\par
If $m=1$, then the square of the maximal absolute value of the roots of $P(x)$ is $1$.\par
If $m\geq2$, by Lemma \ref{cyclotomic}, the maximal absolute value of the roots of $P(x)$ is
\begin{align*}
\left\{
\begin{array}{ll}
2\mathrm{cos}\left(\frac{2\pi}{N}\right) & (N:\text{even})\\ [+5pt]
2\mathrm{cos}\left(\frac{\pi}{N}\right) & (N:\text{odd})
\end{array}.
\right.
\end{align*}
Since the constant term of $P(x)$ is $\pm1$,
by comparing with Theorem \ref{constant term of minimal polynomial}, the possible values of $N$ are $N=3,5,6,7,9,10,11,\ldots$.
Thus, the first dynamical degrees of this type can be sorted in ascending order as
\begin{align*}
1,\ 4\mathrm{cos}^2\left(\frac{\pi}{5}\right),\ 4\mathrm{cos}^2\left(\frac{\pi}{7}\right),\ 4\mathrm{cos}^2\left(\frac{\pi}{9}\right),\ 4\mathrm{cos}^2\left(\frac{\pi}{11}\right),\ 4\mathrm{cos}^2\left(\frac{\pi}{13}\right),\ldots
\end{align*}
\begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par
Let $K$ be a totally real number field with $[K:\mathbb{Q}]=m$.
Also, let $\alpha$ be either an element of $U_K$ or an element of $B$ whose minimal polynomial over $K$ has the form $x^2+sx+t$ ($s\in\mathcal{O}_K$, $t\in U_K$).
These cover all automorphisms of simple abelian varieties of Type 2 and Type 3.\par
Without considering the realizability, the possible first dynamical degrees of automorphisms for the former case is the same as in Type 1.\par
For considering the latter case, as in Section \ref{Calculation}, the first dynamical degree of an automorphism corresponding to $\alpha$ is calculated as the square of the maximal absolute value of the roots of
\begin{align*}
\prod_{i=1}^m (x^2+\sigma_i(s)x+\sigma_i(t))
\end{align*}
where $\left\{\sigma_i\right\}_{1\leq i\leq m}$ is the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.
Now each $\sigma_i(t)$ is a conjugate element of $t\in U_K$ and these elements are the roots of the common irreducible polynomial of the form $P(x)=x^{m'}+a_1x^{m'-1}+\cdots\pm 1\in\mathbb{Z}[x]$ ($m'\leq m$).
Thus, at least one of the absolute values $\abs{\sigma_i(t)}$ is not less than $1$, and so we may assume $\abs{t}\geq1$.\par
If $\abs{t}>1$, the maximal absolute value of the roots of $x^2+\sigma_i(s)x+\sigma_i(t)$ is not less than $\sqrt{\abs{\sigma_i(t)}}$.
By the deduction in Type 1, the smallest possible value of $\max_i\abs{\sigma_i(t)}$ exceeding $1$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)=\frac{1+\sqrt{5}}{2}$.
Thus, $\frac{1+\sqrt{5}}{2}$ is a lower bound for the first dynamical degrees other than $1$ in this case.\par
If $\abs{t}=1$, since $K$ is totally real, $t=\pm 1$.\par
If $t=1$, the maximal absolute value among the roots of all the polynomials $x^2+\sigma_i(s)x+1$ is achieved by a root of the polynomial $x^2+\sigma_i(s)x+1$ for which $\abs{\sigma_i(s)}$ is maximal.
Now all $\sigma_i(s)$ are the roots of an identical irreducible polynomial and they are all real.
Thus, the roots of $x^2+\sigma_i(s)x+1$ are either real or of modulus $1$ and so the possible first dynamical degrees for this case all appear in Corollary \ref{possible dynamical degrees}.\par
If $t=-1$, the maximal absolute value among the roots of all the polynomials $x^2+\sigma_i(s)x-1$ is achieved by a root of the polynomial $x^2+\sigma_i(s)x-1$ for which $\abs{\sigma_i(s)}$ is maximal.
Assume that this maximal absolute value is not $1$, so that $s\neq0$.
All $\sigma_i(s)$ are the roots of a common polynomial whose roots are all real.
Now $\prod_{i=1}^m \sigma_i(s)$ is a non-zero integer, so $\prod_{i=1}^m \abs{\sigma_i(s)}\geq1$ and hence $\abs{\sigma_{i_0}(s)}\geq1$ for some $i_0$.
The roots of the polynomial $x^2+\sigma_{i_0}(s)x-1$ can be denoted by $z_{i_0}$, $-\frac{1}{z_{i_0}}$ ($z_{i_0}\in\mathbb{R}$) with $\abs[\big]{z_{i_0}-\frac{1}{z_{i_0}}}=\abs{\sigma_{i_0}(s)}\geq1$.
Thus, $\mathrm{max}\left\{\abs{z_{i_0}},\frac{1}{\abs{z_{i_0}}}\right\}$ is not less than $\frac{1+\sqrt{5}}{2}$.
Therefore, in this case the first dynamical degree corresponding to $\alpha$, if different from $1$, is at least $\left(\frac{1+\sqrt{5}}{2}\right)^2$.
\begin{flushleft}{\bf{Type 4}}\end{flushleft}\par
Let $K$ be a CM-field with $[K:\mathbb{Q}]=2m$ and let $\alpha$ be an element which has a minimal polynomial of the form $P(x)\in\mathcal{O}_K[x]$ whose constant term is in $U_K$ and whose degree is not more than $d$.
These cover all automorphisms of simple abelian varieties of Type 4.\par
Without considering the realizability, the first dynamical degree of an automorphism corresponding to $\alpha$ is calculated as the square of the maximal absolute value of the roots of
\begin{align*}
\prod_{i=1}^{2m} (\sigma_i(P(x)))=\prod_{j=1}^m (\sigma'_j(P(x)\overline{P(x)}))\in\mathbb{Z}[x]
\end{align*}
where $\left\{\sigma_i\right\}_{1\leq i\leq 2m}$ (resp. $\left\{\sigma'_j\right\}_{1\leq j\leq m}$) is the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$ (resp. $K_0\hookrightarrow\mathbb{C}$, with $K_0$ the maximal totally real subfield of $K$).\par
The possible values of small first dynamical degrees of this type will be considered in Section \ref{First dynamical degree close to 1}.
\section{First dynamical degrees close to $1$}\label{First dynamical degree close to 1}
In this section, we consider the next question.
\begin{question}
If the dimension of the simple abelian varieties is not fixed, does there exist a minimum value of the first dynamical degrees of automorphisms?
\end{question}
\noindent Denote
\begin{align*}
\Delta=\{\text{first dynamical degrees of automorphisms of simple abelian varieties over $\mathbb{C}$}\}\backslash\{1\}
\end{align*}
for convenience.\par
This question would immediately be answered negatively if there were no smallest Salem number, because all Salem numbers are contained in $\Delta$ by Corollary \ref{realize salem}.
But this problem is still open, and moreover it is conjectured that a smallest Salem number exists (\cite[Chapter 3]{Bor02}), so another argument is needed.\par
The above question is answered negatively in Theorem \ref{minimum value of Delta}, which is reduced to the next theorem.
\begin{theorem}\label{automorphism}
For a prime number $p>3$ and an integer $1\leq k\leq\frac{p-1}{2}$, there are a CM-field $K$, a cyclic extension $L/K$ of degree $p$ with $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\in L$, and a central simple division algebra $B$ over $K$ with $[B:K]=p^2$ of the form $B=\oplus_{i=0}^{p-1} u^{i}L$ for a symbol $u$.
\end{theorem}
For proving this theorem, the next proposition is useful.
\begin{proposition}[{cf.\ \cite[Chapter 15.1, Corollary d]{Pie82}}]\label{construct division algebra}
Let $E/F$ be a cyclic extension of fields (i.e., a Galois extension with cyclic Galois group) of degree $n$, write $G=\mathrm{Gal}(E/F)$ and fix a generator $\sigma$ of $G$.\par
Let $a\in F^{\times}$ be an element and assume the order of $a$ in $F^{\times}/\mathrm{N}_{E/F}(E^{\times})$ is exactly $n$.
Take a symbol $u$ as $u^n=a$.\par
Define $B=\oplus_{i=0}^{n-1} u^{i}E$ and construct the multiplication on $B$ as $e\cdot u=u\cdot\sigma(e)$ for $e\in E$.
Then $B$ is a division ring and also a central simple algebra over $F$ with $[B:F]=n^2$.
\end{proposition}
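As a classical example of this construction (for illustration only), take $F=\mathbb{Q}$, $E=\mathbb{Q}(i)$, $\sigma$ the complex conjugation and $a=-1$. Since the norms $\mathrm{N}_{E/F}(E^{\times})$ are positive rationals, $-1$ has order exactly $2$ in $F^{\times}/\mathrm{N}_{E/F}(E^{\times})$, and $B=E\oplus uE$ with $u^2=-1$ and $iu=u\sigma(i)=-ui$ is the Hamilton quaternion algebra $\left(\frac{-1,-1}{\mathbb{Q}}\right)$, a division algebra with $[B:\mathbb{Q}]=4$.\par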
\begin{remark}\label{remark about the order}
Assume $a\in\mathcal{O}_F$; then the division ring $B$ constructed in Proposition \ref{construct division algebra} has an order $\mathcal{O}=\oplus_{i=0}^{n-1} u^{i}\mathcal{O}_E$, since $\sigma$ maps $\mathcal{O}_E$ into $\mathcal{O}_E$.
\end{remark}
Before proceeding to the proof, some lemmas are required.
\begin{lemma}\label{CM-field}
Let $p$ be an odd prime number.\par
Then, $\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)$ is a totally real number field and $\mathbb{Q}(\zeta_p)$ is a totally complex quadratic extension of $\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)$.
\end{lemma}
\begin{proof}
\begin{align*}
\left[\mathbb{Q}(\zeta_p):\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)\right]=2,\left[\mathbb{Q}(\zeta_p):\mathbb{Q}\right]=p-1
\end{align*}
imply
\begin{align*}
\left[\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right):\mathbb{Q}\right]=\frac{p-1}{2}.
\end{align*}
For $1\leq i\leq\frac{p-1}{2}$, define
\begin{align*}
\begin{array}{cccccc}
\sigma_i\colon & \mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right) & \hookrightarrow & \mathbb{Q}(\zeta_p) & \hookrightarrow & \mathbb{C}\\
& z & \mapsto & z & & \\
& & & \zeta_p & \mapsto & \zeta_p^{i}
\end{array}.
\end{align*}
These are all different and so they are the $\frac{p-1}{2}$ numbers of $\mathbb{Q}$-embeddings.\par
$\zeta_p+\frac{1}{\zeta_p}$ maps to $\zeta_p^i+\frac{1}{\zeta_p^i}\in\mathbb{R}$ via $\sigma_i$, and so $\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)$ is a totally real number field.\par
$\mathbb{Q}(\zeta_p)\supset\mathbb{Q}(\zeta_p+\frac{1}{\zeta_p})$ is a quadratic extension and $\mathbb{Q}(\zeta_p)$ is totally complex and so the lemma holds.
\end{proof}
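For example, for $p=5$ we have $\zeta_5+\frac{1}{\zeta_5}=\frac{-1+\sqrt{5}}{2}$, so $\mathbb{Q}\left(\zeta_5+\frac{1}{\zeta_5}\right)=\mathbb{Q}(\sqrt{5})$ is a real quadratic field and $\mathbb{Q}(\zeta_5)$ is its totally complex quadratic extension.\par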
\begin{lemma}\label{irreducibility}
Let $F$ be a field.\par
For a prime number $p$ and $a\in F^{\times}\backslash F^{\times p}$, $x^p-a\in F[x]$ is an irreducible polynomial.
\end{lemma}
\begin{proof}
Let $F'$ be a splitting field of $x^p-a\in F[x]$.\par
Assume $x^p-a\in F[x]$ is reducible and denote $x^p-a=f(x)g(x)\in F[x]$ with $f(x)=(x-z_1)\cdots(x-z_n),g(x)=(x-z_{n+1})\cdots(x-z_p)\in F[x]$ for $z_1, z_2,\ldots, z_p\in F'$ ($1\leq n<p$).\par
Then $z_1\cdots z_n\in F^\times$ and so $(z_1\cdots z_n)^p=a^n\in F^{\times p}$; since $\mathrm{gcd}(n,p)=1$, writing $un+vp=1$ with $u,v\in\mathbb{Z}$ gives $a=(a^n)^u(a^v)^p\in F^{\times p}$, a contradiction.
\end{proof}
\begin{lemma}\label{cyclic extension1}
For a prime number $p>3$ and an integer $1\leq k\leq\frac{p-1}{2}$, $\mathbb{Q}\left(\zeta_p,\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\right)/\mathbb{Q}(\zeta_p)$ is a cyclic extension of degree $p$.
\end{lemma}
\begin{proof}
The extension $\mathbb{Q}\left(\zeta_p,\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\right)/\mathbb{Q}(\zeta_p)$ is separable since $\mathbb{Q}$ has characteristic $0$.\par
Any root of $x^p-\left(\zeta_p^k+\frac{1}{\zeta_p^k}\right)$ can be written as $\zeta_p^i\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}$ for some integer $i$ and it is contained in $\mathbb{Q}\left(\zeta_p,\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\right)$ and so the extension $\mathbb{Q}\left(\zeta_p,\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\right)/\mathbb{Q}(\zeta_p)$ is normal.
For proving $\mathbb{Q}\left(\zeta_p,\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\right)/\mathbb{Q}(\zeta_p)$ is a field extension of degree $p$, by Lemma \ref{irreducibility}, it is enough to prove $\zeta_p^k+\frac{1}{\zeta_p^k}\notin{\mathbb{Q}(\zeta_p)}^{\times p}$.\par
Assume $\zeta_p^k+\frac{1}{\zeta_p^k}\in{\mathbb{Q}(\zeta_p)}^{\times p}$ and denote $\zeta_p^k+\frac{1}{\zeta_p^k}=\alpha^p$ for $\alpha\in{\mathbb{Q}(\zeta_p)}^{\times}$.
Since $\zeta_p^k+\frac{1}{\zeta_p^k}$ is an algebraic integer, so is $\alpha$.
The ring of integers of $\mathbb{Q}(\zeta_p)$ is $\mathbb{Z}[\zeta_p]$ (cf.\ \cite[Chapter 2.9]{Sam70}), and so $\alpha$ can be written as $\alpha=a_{p-2}\zeta_p^{p-2}+\cdots+a_1\zeta_p+a_0$ with $a_i\in\mathbb{Z}$.
The equation
\begin{equation}
(a_{p-2}\zeta_p^{p-2}+\cdots+a_1\zeta_p+a_0)^p-(\zeta_p^{p-k}+\zeta_p^k)=0 \tag*{($\ast$)}
\end{equation}
holds, and the left-hand side can be rewritten as
\begin{align*}
b_{p-1}\zeta_p^{p-1}+\cdots+b_1\zeta_p+b_0
\end{align*}
with
\begin{align*}
&b_0\equiv a_{p-2}^p+\cdots+a_0^p\ (\mathrm{mod}\ p)\\
&b_k\equiv -1\ (\mathrm{mod}\ p)\\
&b_{p-k}\equiv -1\ (\mathrm{mod}\ p)\\
&b_i\equiv0\ (\mathrm{mod}\ p)\quad(1\leq i\leq p-1, i\neq k,p-k)
\end{align*}
by calculating the combinatorial numbers.\par
The minimal polynomial of $\zeta_p$ is $x^{p-1}+\cdots+x+1\in\mathbb{Z}[x]$, so $1,\zeta_p,\ldots,\zeta_p^{p-2}$ is a $\mathbb{Z}$-basis of $\mathbb{Z}[\zeta_p]$ and $\zeta_p^{p-1}=-(1+\zeta_p+\cdots+\zeta_p^{p-2})$; hence ($\ast$) forces $b_0=b_1=\cdots=b_{p-1}$. Since $p>3$, there is an index $1\leq i\leq p-1$ with $i\neq k,p-k$, for which $b_i\equiv0$ while $b_k\equiv-1\ (\mathrm{mod}\ p)$, so the equation ($\ast$) cannot hold, a contradiction.\par
Thus, $\zeta_p^k+\frac{1}{\zeta_p^k}\notin{\mathbb{Q}(\zeta_p)}^{\times p}$ and so the extension $\mathbb{Q}\left(\zeta_p,\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}\right)/\mathbb{Q}(\zeta_p)$ is of degree $p$ and the Galois group is cyclic.
\end{proof}
\begin{lemma}\label{construction of anti-involution}
Let $p$ be an odd prime number and denote $K=\mathbb{Q}(\zeta_p)$, $F=\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)$.
Let $L=\mathbb{Q}(\zeta_p,\sqrt[p]{a})$ be a cyclic extension of $\mathbb{Q}(\zeta_p)$ for $a\in\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)$ with $[\mathbb{Q}(\zeta_p,\sqrt[p]{a}):\mathbb{Q}(\zeta_p)]=p$.
Define the map
\begin{align*}
\begin{array}{cccc}
\rho\colon & \mathbb{Q}(\zeta_p,\sqrt[p]{a}) & \rightarrow & \mathbb{Q}(\zeta_p,\sqrt[p]{a})\\
& \zeta_p & \mapsto & \zeta_p \\
& \sqrt[p]{a} & \mapsto & \zeta_p\sqrt[p]{a}
\end{array}
\end{align*}
so that $\mathrm{Gal}(L/K)$ is generated by $\rho$.
Define $B=\oplus_{i=0}^{p-1} u^{i}L$ for a symbol $u$ with $u^p\in\mathbb{Q}(\zeta_p)^\times$ and multiplication on $B$ as $s\cdot u=u\cdot\rho(s)$ for $s\in L$.\par
On this condition, there exists an anti-involution on $B$ of second kind.
Moreover, by assuming that $B$ is a division ring, then there exists a positive anti-involution on $B$ of second kind.
\end{lemma}
\begin{proof}
Define the map
\begin{align*}
\begin{array}{ccccc}
\phi\colon & B & \rightarrow & B & \\
& u^i\zeta_p^j\sqrt[p]{a^k} & \mapsto & u^i\zeta_p^{ik-j}\sqrt[p]{a^k} & (0\leq i,k\leq p-1,\ 0\leq j\leq p-2)
\end{array}
\end{align*}
as a linear map over $\mathbb{Q}$.
$\phi$ restricts to $K$ as
\begin{align*}
\begin{array}{ccccc}
\phi|_K\colon & K & \rightarrow & K & \\
& \zeta_p^j & \mapsto & \zeta_p^{-j} & (0\leq j\leq p-1)
\end{array}
\end{align*}
and $\phi$ fixes the field $F$ pointwise and so $\phi(a)=a$.
Thus, by the equations
\begin{align*}
&\phi(1)=1,\\
&\phi(\phi(u^i\zeta_p^j\sqrt[p]{a^k}))=\phi(u^i\zeta_p^{ik-j}\sqrt[p]{a^k})=u^i\zeta_p^j\sqrt[p]{a^k},
\end{align*}
\begin{align*}
\phi(u^i\zeta_p^j\sqrt[p]{a^k}\cdot u^{i'}\zeta_p^{j'}\sqrt[p]{a^{k'}})&=\phi(u^i u^{i'}\cdot\rho^{i'}(\zeta_p^j\sqrt[p]{a^k})\zeta_p^{j'}\sqrt[p]{a^{k'}})\\
&=\phi(u^{i+i'}\zeta_p^{i'k+j}\sqrt[p]{a^k}\zeta_p^{j'}\sqrt[p]{a^{k'}})\\
&=\phi(u^{i+i'}\zeta_p^{i'k+j+j'}\sqrt[p]{a^{k+k'}})\\
&=u^{i+i'}\zeta_p^{(i+i')(k+k')-(i'k+j+j')}\sqrt[p]{a^{k+k'}}
\end{align*}
and
\begin{align*}
\phi(u^{i'}\zeta_p^{j'}\sqrt[p]{a^{k'}})\phi(u^i\zeta_p^j\sqrt[p]{a^k})&=u^{i'}\zeta_p^{i'k'-j'}\sqrt[p]{a^{k'}}\cdot u^i\zeta_p^{ik-j}\sqrt[p]{a^k}\\
&=u^{i'}u^i\cdot\rho^{i}(\zeta_p^{i'k'-j'}\sqrt[p]{a^{k'}})\zeta_p^{ik-j}\sqrt[p]{a^k}\\
&=u^{i+i'}\zeta_p^{i'k'-j'+ik'}\sqrt[p]{a^{k'}}\zeta_p^{ik-j}\sqrt[p]{a^k}\\
&=u^{i+i'}\zeta_p^{i'k'-j'+ik'+ik-j}\sqrt[p]{a^{k+k'}},
\end{align*}
$\phi$ is an anti-involution on $B$ (note that the exponents of $\zeta_p$ in the last two displays agree, since $(i+i')(k+k')-(i'k+j+j')=ik+ik'+i'k'-j-j'=i'k'-j'+ik'+ik-j$).\par
As above, $\phi|_K$ fixes only the elements in $F$ pointwise and so $\phi$ is of second kind (cf.\ Definition \ref{second kind}).
Moreover if $B$ is a division ring, by Theorem \ref{existence of positive anti-involution}, there exists a positive anti-involution on $B$ of second kind.
\end{proof}
\begin{lemma}\label{multiplicity}
Let $L/K$ be a Galois extension of number fields with $[L:K]=n$ and let $\mathfrak{q}\subset\mathcal{O}_L$ be a prime ideal and take $\alpha\in L^{\times}\backslash U_L$ ($U_L$ is the set of the invertible elements of $\mathcal{O}_L$).\par
Assume $\sigma(\mathfrak{q})=\mathfrak{q}$ for all $\sigma\in\mathrm{Gal}(L/K)$.
Then, the multiplicity of $\mathfrak{q}$ in the prime ideal factorization of the ideal generated by $\mathrm{N}_{L/K}(\alpha)$ in $\mathcal{O}_L$ is a multiple of $n$.
\end{lemma}
\begin{proof}
Write the prime ideal factorization of $\alpha\mathcal{O}_L$ as
\begin{align*}
\alpha\mathcal{O}_L={\mathfrak{q}_1}^{e(\mathfrak{q}_1)}\cdots{\mathfrak{q}_g}^{e(\mathfrak{q}_g)}\quad(\mathfrak{q}_i\text{ are all different})
\end{align*}
and then
\begin{align*}
\sigma(\alpha)\mathcal{O}_L=\sigma(\mathfrak{q}_1)^{e(\mathfrak{q}_1)}\cdots\sigma(\mathfrak{q}_g)^{e(\mathfrak{q}_g)}
\end{align*}
for each $\sigma\in\mathrm{Gal}(L/K)$.\par
Also, the next equation holds.
\begin{align*}
\left(\mathrm{N}_{L/K}(\alpha)\right)\mathcal{O}_L=\left(\prod_{\sigma\in\mathrm{Gal}(L/K)} \sigma(\alpha)\right) \mathcal{O}_L=\prod_{\sigma\in\mathrm{Gal}(L/K)} \prod_{i=1}^g \sigma(\mathfrak{q}_i)^{e(\mathfrak{q}_i)}
\end{align*}\par
By the assumption that $\sigma(\mathfrak{q})=\mathfrak{q}$ for all $\sigma\in\mathrm{Gal}(L/K)$, $\mathfrak{q}$ appears in the factorization of $\left(\mathrm{N}_{L/K}(\alpha)\right)\mathcal{O}_L$ if and only if it appears in the factorization of $\alpha\mathcal{O}_L$.\par
Moreover, if $\mathfrak{q}$ appears in $\alpha\mathcal{O}_L$, say $\mathfrak{q}_1=\mathfrak{q}$, then
\begin{align*}
\left(\mathrm{N}_{L/K}(\alpha)\right)\mathcal{O}_L=\mathfrak{q}^{e(\mathfrak{q})\cdot n}\prod_{\sigma\in\mathrm{Gal}(L/K)} \prod_{i=2}^g \sigma(\mathfrak{q}_i)^{e(\mathfrak{q_i})},
\end{align*}
and since $\sigma(\mathfrak{q}_i)\neq\mathfrak{q}$ for $i\geq 2$ (otherwise $\mathfrak{q}_i=\sigma^{-1}(\mathfrak{q})=\mathfrak{q}$, contradicting $\mathfrak{q}_i\neq\mathfrak{q}_1$), the multiplicity of $\mathfrak{q}$ is exactly $e(\mathfrak{q})\cdot n$, a multiple of $n$.
\end{proof}
By combining these lemmas, Theorem \ref{automorphism} can be proved.
\begin{proof}[Proof of Theorem \ref{automorphism}]
Denote $\gamma=\zeta_p^k+\frac{1}{\zeta_p^k}$.
Consider the sequence of Galois extensions $\mathbb{Q}(\zeta_p,\sqrt[p]{\gamma})\supset\mathbb{Q}(\zeta_p)\supset\mathbb{Q}$ of fields and denote $K=\mathbb{Q}(\zeta_p)$, $L=\mathbb{Q}(\zeta_p,\sqrt[p]{\gamma})$.
Now $L/K$ is a cyclic extension of degree $p$ by Lemma \ref{cyclic extension1}.\par
Also denote $K_0=\mathbb{Q}\left(\zeta_p+\frac{1}{\zeta_p}\right)$ and $G=\mathrm{Gal}(L/K)$.
By Lemma \ref{CM-field}, $K$ is a totally complex quadratic extension of the totally real number field $K_0$ and so, $K$ satisfies the conditions of CM-field.\par
Define
\begin{align*}
\begin{array}{cccc}
\rho\colon & L & \rightarrow & L \\
& \zeta_p & \mapsto & \zeta_p \\
& \sqrt[p]{\gamma} & \mapsto & \zeta_p\sqrt[p]{\gamma}
\end{array}.
\end{align*}
By using Theorem \ref{Cebotarev's density theorem} for the Galois extension $L/K$ and $\rho$, there exist infinitely many pairs $(\mathfrak{p},\mathfrak{q})$ of a prime ideal $\mathfrak{p}\subset\mathcal{O}_K$ and a prime ideal $\mathfrak{q}\subset\mathcal{O}_L$ over $\mathfrak{p}$ which satisfy the next conditions.
\begin{enumerate}
\renewcommand{\labelenumi}{\arabic{enumi}.}
\item $\mathfrak{p}$ is unramified in $L$.
\item The Frobenius automorphism $\left(\frac{L/K}{\mathfrak{q}}\right)$ is $\rho$.
\end{enumerate}
Fix a pair $(\mathfrak{p},\mathfrak{q})$ and define the prime number $q$ as $q\mathbb{Z}=\mathfrak{p}\cap\mathbb{Z}$.\par
By Hilbert's ramification theory, it can be written as
\begin{align*}
q\mathcal{O}_K=\prod_{i=1}^g {\mathfrak{p}_i}^e\quad(\mathfrak{p}_i\text{ are all different}),\\
{\mathfrak{p}_i}\mathcal{O}_L=\prod_{j=1}^{g_i} {\mathfrak{q}_{ij}}^{e_i}\quad(\mathfrak{q}_{ij}\text{ are all different}).
\end{align*}
Then,
\begin{align*}
q\mathcal{O}_L=\prod_{i=1}^g \prod_{j=1}^{g_i} {\mathfrak{q}_{ij}}^{e\cdot e_i}
\end{align*}
and denote $f=[\mathcal{O}_K/\mathfrak{p}_i:\mathbb{Z}/q\mathbb{Z}]$, $f_i=[\mathcal{O}_L/\mathfrak{q}_{ij}:\mathcal{O}_K/\mathfrak{p}_i]$ with $efg=p-1$, $e_if_ig_i=p$ by the ramification theory again.\par
Denote $\mathfrak{p}_1=\mathfrak{p}$ and $\mathfrak{q}_{11}=\mathfrak{q}$ in the above prime ideal factorizations.
The condition 1 implies $e_1=1$ and by $e\leq p-1$, $ee_1$ is not a multiple of $p$.
The condition 2 implies $\rho(\mathfrak{q})=\mathfrak{q}$ and so the decomposition group $\mathcal{D}_\mathfrak{q}$ is equal to $G$.
The order of the decomposition group $\mathcal{D}_\mathfrak{q}$ is $e_1f_1$ and so $f_1=p$, and this implies $g_1=1$.
Therefore the pair $(q,\mathfrak{q})$ satisfies the next conditions.
\begin{itemize}
\item $\sigma(\mathfrak{q})=\mathfrak{q}$ for all $\sigma\in G$
\item the multiplicity of $\mathfrak{q}$ in the prime ideal factorization of the ideal $q\mathcal{O}_L$ is not a multiple of $p$.
\end{itemize}
Since $q$ is not an invertible element in $\mathcal{O}_L$, $q\not\in\mathrm{N}_{L/K}(U_L)$.
Thus, by Lemma \ref{multiplicity}, $q$ cannot be written as $q=\mathrm{N}_{L/K}(\alpha)$ for any $\alpha\in L^{\times}$.\par
Moreover, $q^p=\mathrm{N}_{L/K}(q)\in\mathrm{N}_{L/K}(L^{\times})$ and since $p$ is a prime number, the order of $q$ in $K^{\times}/\mathrm{N}_{L/K}(L^{\times})$ is exactly $p$.\par
By applying Proposition \ref{construct division algebra}, there is a central simple division algebra $B$ of the form $B=\oplus_{i=0}^{p-1} u^{i}L$ with a symbol $u$ which satisfies $u^p=q$.
Now the division algebra $B$ satisfies all the conditions in Theorem \ref{automorphism}.
\end{proof}
Now, $\zeta_p^k+\frac{1}{\zeta_p^k}$ is an algebraic integer and also $\left(\zeta_p^k+\frac{1}{\zeta_p^k}\right)^{-1}$ is an algebraic integer by Theorem \ref{constant term of minimal polynomial}.
Thus, $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}$ and $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}^{-1}$ are both algebraic integers, and this implies the next result.
\begin{theorem}\label{minimum value of Delta}
There does not exist a minimum value of $\Delta$.
\end{theorem}
\begin{proof}
For a prime number $p>3$ and an integer $1\leq k\leq\frac{p-1}{2}$, there is a finite-dimensional central simple division algebra over the CM-field $\mathbb{Q}(\zeta_p)$ which contains $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}$ by Theorem \ref{automorphism}.
The constructed division algebra $B=\oplus_{i=0}^{p-1} u^{i}L$ in Theorem \ref{automorphism} has an order $\mathcal{O}=\oplus_{i=0}^{p-1} u^{i}\mathcal{O}_L$ by Remark \ref{remark about the order} and $\mathcal{O}$ contains $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}$ and $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}^{-1}$ since these are integral elements.
By combining this with Lemma \ref{construction of anti-involution} and Lemma \ref{Construction4} for $d=p,e_0=\frac{p-1}{2}$ and $m=1$, there is an automorphism of a $\frac{p^2(p-1)}{2}$-dimensional simple abelian variety which corresponds to $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}$.\par
The minimal polynomial of $\sqrt[p]{\zeta_p^k+\frac{1}{\zeta_p^k}}$ over $\mathbb{Q}$ is
\begin{align*}
\prod_{j=1}^{\frac{p-1}{2}} \left(x^p-\left(\zeta_p^j+\frac{1}{\zeta_p^j}\right)\right)
\end{align*}
and so its first dynamical degree is $\sqrt[p]{\abs[\bigg]{\zeta_p^{\frac{p-1}{2}}+\frac{1}{\zeta_p^{\frac{p-1}{2}}}}^2}$ as in Section \ref{Calculation}.\par
Now, $1<\sqrt[p]{\abs[\bigg]{\zeta_p^{\frac{p-1}{2}}+\frac{1}{\zeta_p^{\frac{p-1}{2}}}}^2}<\sqrt[p]{4}$, and since $\sqrt[p]{4}$ converges to $1$ as $p\to\infty$, these first dynamical degrees converge to $1$.
Thus, for any $\epsilon>0$, there is a first dynamical degree $\lambda$ of an automorphism of a simple abelian variety such that $1<\lambda<1+\epsilon$.
\end{proof}
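The behaviour of these dynamical degrees can also be illustrated numerically. The sketch below is added only as an illustration (it evaluates the closed expression above in floating point and plays no role in the proof): it prints $\sqrt[p]{\bigl|\zeta_p^{(p-1)/2}+\zeta_p^{-(p-1)/2}\bigr|^2}$ together with the upper bound $\sqrt[p]{4}$ for a few primes.
\begin{verbatim}
# Numerical illustration: the first dynamical degrees appearing above,
# evaluated for a few primes p, together with the upper bound 4^(1/p).
import cmath

def dyn_degree(p):
    zeta = cmath.exp(2j * cmath.pi / p)
    k = (p - 1) // 2
    return (abs(zeta ** k + zeta ** (-k)) ** 2) ** (1.0 / p)

for p in (5, 7, 11, 101, 1009, 10007):
    print(p, dyn_degree(p), 4 ** (1.0 / p))   # both columns tend to 1
\end{verbatim}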
By removing the condition that the endomorphism be an automorphism, the following analogous theorem holds.
\begin{theorem}\label{analogous theorem}
For any $\epsilon>0$, there exists a simple abelian variety $X$ and an endomorphism $f:X\rightarrow X$ which is not an automorphism such that $1<\lambda_1(f)<1+\epsilon$.
\end{theorem}
The theorem is proved by using the following lemma instead of Lemma \ref{cyclic extension1}.
\begin{lemma}\label{cyclic extension2}
For a positive integer $a$ and a prime number $p$ such that $\sqrt[p]{a}\notin\mathbb{Z}$, $\mathbb{Q}(\zeta_p,\sqrt[p]{a})/\mathbb{Q}(\zeta_p)$ is a cyclic extension of degree $p$.
\end{lemma}
\begin{proof}
The extension $\mathbb{Q}(\zeta_p,\sqrt[p]{a})/\mathbb{Q}(\zeta_p)$ is separable and normal as in the proof of Lemma \ref{cyclic extension1}.\par
Assume $a\in{\mathbb{Q}(\zeta_p)}^{\times p}$ and denote $a=\alpha^p$ for $\alpha\in{\mathbb{Q}(\zeta_p)}^{\times}$.
Since $\sqrt[p]{a}\notin\mathbb{Z}$, the polynomial $x^p-a\in\mathbb{Z}[x]$ is irreducible over $\mathbb{Q}$, so $[\mathbb{Q}(\alpha):\mathbb{Q}]=p$; but then $p$ would divide $[\mathbb{Q}(\zeta_p):\mathbb{Q}]=p-1$ because $\mathbb{Q}(\zeta_p)\supset\mathbb{Q}(\alpha)\supset\mathbb{Q}$, a contradiction.\par
Thus, $x^p-a$ is irreducible over $\mathbb{Q}(\zeta_p)$ by Lemma \ref{irreducibility} and so the extension $\mathbb{Q}(\zeta_p,\sqrt[p]{a})/\mathbb{Q}(\zeta_p)$ is of degree $p$.\par
Therefore $\mathbb{Q}(\zeta_p,\sqrt[p]{a})/\mathbb{Q}(\zeta_p)$ is a Galois extension and the Galois group $\mathrm{Gal}\left(\mathbb{Q}(\zeta_p,\sqrt[p]{a})/\mathbb{Q}(\zeta_p)\right)$ has order $p$ and this is the cyclic group generated by $\sqrt[p]{a}\mapsto\zeta_p\sqrt[p]{a}$.
\end{proof}
\begin{theorem}\label{endomorphism not automorphism}
For an odd prime number $p$ and a positive integer $a$ with $\sqrt[p]{a}\not\in\mathbb{Z}$, there exist a CM-field $K$, a cyclic extension $L/K$ of degree $p$ with $\sqrt[p]{a}\in L$, and a central simple division algebra $B$ over $K$ of the form $B=\oplus_{i=0}^{p-1} u^{i}L$ with a symbol $u$ and $[B:K]=p^2$.
\end{theorem}
\begin{proof}
Denote $K=\mathbb{Q}(\zeta_p)$, $L=\mathbb{Q}\left(\zeta_p, \sqrt[p]{a}\right)$ as in the proof of Theorem \ref{automorphism}.
$L\supset K\supset\mathbb{Q}$ is a sequence of Galois extensions and $L/K$ is a cyclic extension of degree $p$ by Lemma \ref{cyclic extension2}.\par
As in the proof of Theorem \ref{automorphism}, by applying Theorem \ref{Cebotarev's density theorem} for the Galois extension $L/K$, there exists a pair $(q,\mathfrak{q})$ of a prime number $q$ and a prime ideal $\mathfrak{q}\subset\mathcal{O}_L$ such that
\begin{itemize}
\item $\sigma(\mathfrak{q})=\mathfrak{q}$ for all $\sigma\in\mathrm{Gal}(L/K)$
\item The multiplicity of $\mathfrak{q}$ in the prime ideal factorization of the ideal $q\mathcal{O}_L$ is not a multiple of $p$.
\end{itemize}
Thus, as in the proof of Theorem \ref{automorphism}, by Lemma \ref{multiplicity}, the order of $q$ in $K^{\times}/\mathrm{N}_{L/K}(L^{\times})$ is exactly $p$.\par
By Proposition \ref{construct division algebra}, $B=\oplus_{i=0}^{p-1} u^{i}L$ with a symbol $u$ which satisfies $u^p=q$ is a central simple division algebra over $K$ with $[B:K]=p^2$ and satisfies all the conditions in Theorem \ref{endomorphism not automorphism}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{analogous theorem}]
For an integer $a$ and an odd prime number $p$ with $\sqrt[p]{a}\not\in\mathbb{Z}$, the central simple division algebra over the CM-field $\mathbb{Q}(\zeta_p)$ constructed in Theorem \ref{endomorphism not automorphism} has an order which contains $\sqrt[p]{a}$ by Remark \ref{remark about the order}.
Also by Lemma \ref{construction of anti-involution} and Lemma \ref{Construction4} for $d=p,e_0=\frac{p-1}{2}$ and $m=1$, there is an endomorphism of a $\frac{p^2(p-1)}{2}$-dimensional simple abelian variety which corresponds to $\sqrt[p]{a}$ and its first dynamical degree is $\sqrt[p]{a^2}$ as in Section \ref{Calculation}.\par
Substituting $a=2$, we have $\sqrt[p]{a}\not\in\mathbb{Z}$, and $\sqrt[p]{a^2}=\sqrt[p]{4}$ converges to $1$ as $p\to\infty$.
Thus, the first dynamical degrees of endomorphisms of this type are strictly greater than $1$ but have no minimum value.\par
Also, an endomorphism of this type cannot be an automorphism because $\frac{1}{\sqrt[p]{a}}$ is not an algebraic integer. This concludes the proof.
\end{proof}
\renewcommand{\refname}{References}
\end{document}
\begin{document}
\title[Continuous extension of functions from countable sets]{Continuous extension of functions from countable sets}
\author{V.Mykhaylyuk}
\address{Department of Mathematics\\
Chernivtsi National University\\ str. Kotsjubyn'skogo 2,
Chernivtsi, 58012 Ukraine}
\email{[email protected]}
\subjclass[2000]{Primary 54C20, 54C35, Secondary 46E10, 54C05, 54C45}
\commby{Ronald A. Fintushel}
\keywords{extension property, linear extender, $C$-embedding, retract, stratifiable space, $P$-point, Stone-\v{C}ech compactification}
\begin{abstract}
We give a characterization of those countable discrete subspaces $A$ of a topological space $X$ for which there exists a (linear) continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$. Using this characterization we answer two questions of A.~Arhangel'skii. Moreover, we introduce the notion of a well-covered subset of a topological space and prove that for a well-covered functionally closed subset $A$ of a topological space $X$ there exists a linear continuous mapping $\varphi:C_p(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p(A)$.
\end{abstract}
\maketitle
\section{Introduction}
For a topological space $X$ we denote the space of all continuous functions $y:X\to\mathbb R$ with the topology of pointwise convergence by $C_p(X)$, and the subspace of all continuous bounded functions $y:X\to\mathbb R$ is denoted by $C^*_p(X)$.
According to the well-known Tietze-Urysohn theorem, for a normal space $X$ and a closed subset $A$ of $X$ there exists a mapping $\varphi: C(A)\to C(X)$ such that $\varphi(y)|_A=y$ for every $y\in C(A)$. The existence and properties (linearity, continuity with respect to different topologies, etc.) of such an extender $\varphi: C(A)\to C(X)$ for various classes of spaces $X$ were investigated by many mathematicians (see, for instance, \cite{Du}, \cite{Bor}, \cite{Sen}, \cite{St}, \cite{SV}, \cite{GHO}, \cite{Yam} and the literature given there). In particular, the existence of a linear continuous extender $\varphi: C_p(A)\to C_p(X)$ for every closed subset $A$ of a stratifiable space $X$ was obtained in \cite[Theorem 4.3]{Bor}, and the existence of such an extender for every closed subset $A$ of a locally compact generalized ordered space $X$ was proved in \cite[Corollary 1]{GHO}.
The following questions were published in \cite{Arh} (see also Questions 4.10.3 and 4.10.11 from \cite{Tkachuk}).
\begin{question}\label{q:1.1}
Let $X$ be a pseudocompact space such that for any countable set $A\subseteq X$ there exists a (linear) continuous map $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p(A)$. Must $X$ be finite?
\end{question}
\begin{question}\label{q:1.2}
Let $X$ be the subspace of all weak $P$-points of $\beta\omega\setminus\omega$. It is true that for any countable set $A\subseteq X$ there exists a (linear) continuous map $\varphi:C_p^*(A)\to C_p(X)$ such that $\varphi(y)|_A=y$ for every $y\in C_p(A)$?
\end{question}
A point $x$ of a topological space $X$ is called {\it a weak $P$-point} if $x\not \in \overline{A}$ for every countable set $A\subseteq X\setminus \{x\}$.
In this paper we give a characterization of those countable discrete subspaces $A$ of a topological space $X$ for which there exists a (linear) continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$. Using this characterization we obtain the positive answer to Question \ref{q:1.1} and the negative answer to Question \ref{q:1.2}. Moreover, we introduce the notion of a well-covered subset of a topological space and prove that for a well-covered functionally closed subset $A$ of a topological space $X$ there exists a linear continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$.
\section{Countable $C$-embedding sets}
The next property is probably well-known (see \cite[Corollary 1]{Sen}).
\begin{proposition}\label{p:3.1} Let $X$ be a completely regular space and let $A\subseteq X$ be such that there exists a continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$. Then the set $A$ is closed in $X$. \end{proposition}
\begin{proof} Suppose that $x_0\in \overline{A}\setminus A$. Let $y_0(x)=0$ for every $x\in A$ and $z_0=\varphi(y_0)$. Clearly $z_0(x_0)=0$. Consider the neighborhood $W_0=\{z\in C_p(X):z(x_0)<\tfrac12\}$ of $z_0$ in $C_p(X)$ and choose a finite set $B\subseteq A$ such that $\varphi(y)\in W_0$ for every $y\in C_p^*(A)$ with $y|_B=y_0|_B$. We choose a set $G$, open in $X$, such that $B\subseteq G\cap A$ and $x_0\not \in \overline{G}$. There exists a continuous function $y\in C_p^*(A)$ such that $y(x)=0$ for every $x\in B$ and $y(x)=1$ for every $x\in A\setminus G$. It is easy to see that $y|_B=y_0|_B$ and $\varphi(y)\not\in W_0$, a contradiction.
\end{proof}
A set $A$ in a topological space $X$ is called {\it strongly functionally discrete in $X$} if there exists a discrete family $(G_a;a\in A)$ of functionally open sets $G_a\ni a$.
\begin{proposition}\label{p:2.1} Let $X$ be a topological space, $A=\{a_n:n\in\mathbb N\}\subseteq X$ be a countable set, $Y$ be a compact space and $f:X\times Y\to\mathbb R$ be a separately continuous function such that the continuous mapping $\varphi:Y\to C_p(A)$, $\varphi(y)(a)=f(a,y)$, is a homeomorphic embedding and for every $n\in\mathbb N$ there exist $y'_n, y_n''\in Y$ with $|f(a_k,y_n')-f(a_k,y_n'')|\leq \tfrac{1}{n+1}$ for all $k=1,\dots , n$ and $|f(a_k,y_n')-f(a_k,y_n'')|\geq 1$ for all $k> n$. Then the set $A$ is strongly functionally discrete in $X$.
\end{proposition}
\begin{proof} Without loss of generality we can suppose that $Y\subseteq C_p(A)$. For every $n\in\mathbb N$ we put
$$
U_n=\bigcap_{k=1}^{n-1}\{x\in X:|f(x,y_k')-f(x,y_k'')|>\tfrac56\}\bigcap\{x\in X:|f(x,y_n')-f(x,y_n'')|<\tfrac23 \}.
$$
Clearly, $a_n\in U_n$ and $U_n$ is functionally open in $X$ for every $n\in\mathbb N$.
We show that $(U_n)_{n=1}^{\infty}$ is discrete in $X$. Fix $x_0\in X$. Since $Y\subseteq C_p(A)$ is a Hausdorff compact space and the mapping $f^{x_0}:Y\to\mathbb R$ is continuous, there exists $n_0\in\mathbb N$ such that $|f(x_0,y')-f(x_0,y'')|<\tfrac12$ for every $y',y''\in Y$ with $|y'(a_k)-y''(a_k)|<\tfrac{1}{n_0}$ for every $k=1,\dots n_0$. Then for the neighborhood
$$
U_0=\{x\in X:|f(x,y_{n_0}')-f(x,y_{n_0}'')|<\tfrac12\}
$$
of $x_0$ in $X$ we have $U_0\cap U_n=\emptyset$ for every $n>n_0$. Thus, $(U_n)_{n=1}^{\infty}$ is locally finite in $X$. It remains to use that the sequence $(F_n)_{n=1}^\infty$ of the sets
$$
F_n=\bigcap_{k=1}^{n-1}\{x\in X:|f(x,y_k')-f(x,y_k'')|\geq\tfrac56\}\bigcap\{x\in X:|f(x,y_n')-f(x,y_n'')|\leq\tfrac23 \}
$$
is disjoint.
\end{proof}
\begin{theorem}\label{th:2.2} Let $X$ be a topological space and $A\subseteq X$ be a discrete countable subspace of $X$. Then the following conditions are equivalent
$(i)$\,\, there exists a (linear) continuous mapping $\varphi:C_p^*(A)\to C_p^*(X)$ such that $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$;
$(ii)$\,\, there exists a (linear) continuous mapping $\varphi:C_p(A)\to C_p(X)$ such that $\varphi(y)|_A=y$ for every $y\in C_p(A)$;
$(iii)$\,\, there exists a (linear) continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ such that $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$;
$(iv)$\,\,the set $A$ is strongly functionally discrete in $X$.
\end{theorem}
\begin{proof} The implications $(i)\Rightarrow(iii)$ and $(ii)\Rightarrow(iii)$ are obvious.
$(iii)\Rightarrow(iv)$. Let $Y=C_p(A,\{0,1\})$ and let $f:X\times Y\to\mathbb R$ be defined by $$f(x,y)=\varphi(y)(x).$$
Clearly, $f$ is separately continuous, $Y=\{0,1\}^A$ is compact since $A$ is discrete, and the mapping $y\mapsto f(\cdot,y)|_A=y$ is the identity embedding of $Y$ into $C_p(A)$. Moreover, the characteristic function $y_n'$ of the set $\{a_{n+1},a_{n+2},\dots\}$ together with $y_n''\equiv 0$ witnesses the remaining assumption. Hence the set $A$ is strongly functionally discrete in $X$ according to Proposition \ref{p:2.1}.
$(iv)\Rightarrow(i)$ and $(iv)\Rightarrow(ii)$. Let $(G_a;a\in A)$ be a discrete family of functionally open sets $G_a\ni a$. For every $a\in A$ we choose a continuous mapping $\varphi_a:X\to [0,1]$ with $\varphi_a^{-1}((0,1])=G_a$ and $\varphi_a(a)=1$. Notice that the mapping $\varphi:C_p(A)\to C_p(X)$,
$$\varphi(y)(x)=\sum_{a\in A}\varphi_a(x)y(a),$$
and its restriction $\varphi|_{C^*_p(A)}$ are the required mappings (since the family $(G_a;a\in A)$ is discrete, at every point $x\in X$ at most one summand is non-zero, so $\varphi(y)$ is well defined and continuous).
\end{proof}
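To illustrate the extender built in the implication $(iv)\Rightarrow(i),(ii)$, one can consider the toy example $X=\mathbb R$, $A=\mathbb Z$, $G_a=(a-\tfrac13,a+\tfrac13)$ with triangular bumps $\varphi_a$. The sketch below is only an illustration of the formula $\varphi(y)(x)=\sum_{a\in A}\varphi_a(x)y(a)$; all concrete choices in it are ours and are not used anywhere else in the paper.
\begin{verbatim}
# Toy illustration of the extender phi(y)(x) = sum_a phi_a(x) y(a):
# X = R, A = Z, G_a = (a - 1/3, a + 1/3), phi_a a triangular bump with
# phi_a(a) = 1 supported in G_a.  (All concrete choices here are ours.)
import math

def phi_a(a, x):
    return max(0.0, 1.0 - 3.0 * abs(x - a))

def extend(y, x, window=50):
    # the G_a are pairwise disjoint, so at most one summand is non-zero
    return sum(phi_a(a, x) * y(a) for a in range(-window, window + 1))

y = math.sin                                  # a bounded continuous y on A = Z
for a in range(-5, 6):
    assert abs(extend(y, a) - y(a)) < 1e-12   # the extension restricts to y on A
print(extend(y, 2.0), extend(y, 2.1), extend(y, 2.5))
\end{verbatim}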
The following corollary gives the answers to Questions \ref{q:1.1} and \ref{q:1.2}.
\begin{corollary}\label{cor:2.3} Let $X$ be a topological space such that for every countable set $A\subseteq X$ there exists a continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$. Then
$(a)$\,\,every discrete countable subspace $A$ of $X$ is a strongly functionally discrete set in $X$;
$(b)$\,\,if $X$ is a $T_2$-space in which every locally finite system of functionally open sets is finite (in particular, if $X$ is pseudocompact), then $X$ is finite;
$(c)$\,\,$X$ is not equal to the space of all weak $P$-points in $\beta\omega\setminus \omega$;
$(d)$\,\,if $X$ is a completely regular space, then every countable subspace $A$ of $X$ is a strongly functionally discrete set in $X$.
\end{corollary}
\begin{proof} The statement $(a)$ follows immediately from Theorem \ref{th:2.2}.
$(b)$. It is enough to note that any infinite $T_2$-space has a countably infinite discrete subspace.
$(c)$. According to \cite{GO} the space $W$ of all weak $P$-points in $\beta\omega\setminus \omega$ is ultrapseudocompact, in particular, pseudocompact. Moreover, the pseudocompactness of $W$ easily follows from the next fact (see \cite{K}, \cite{vM}): {\it there exists a weak $P$-point $z$ in $\beta\omega\setminus \omega$ which is not a $P$-point in $\beta\omega\setminus \omega$} (a point $x$ of a topological space $X$ is called {\it a $P$-point} if $x\not \in \overline{A}$ for every $F_\sigma$-set $A\subseteq X\setminus \{x\}$).
$(d)$. According to Proposition \ref{p:3.1}, every countable subset $A\subseteq X$ is closed. It is easy to see that then every countable subset $A\subseteq X$ is discrete. Now it remains to use $(a)$.
\end{proof}
\begin{remark}\label{r:3.4}
Since the closure $\overline{A}$ in $\beta\omega$ of a countable set $A\subseteq \beta\omega$ is homeomorphic to $\beta A$ (see, for example, \cite[Corollary 3.2]{My}), every countable set $A\subseteq \beta\omega$ is $C^*$-embedded in $\beta\omega$. But if $X\subseteq \beta\omega$ is an infinite pseudocompact space, then there is a countable set $A\subseteq X$ for which there exists no continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$, as follows from Corollary \ref{cor:2.3}(b).
\end{remark}
A space $X$ is called {\it a $P$-space} if every $x\in X$ is a $P$-point.
\begin{remark}\label{r:3.5}
It is easy to see that every countable subset $A$ of a completely regular $P$-space $X$ is a strongly functionally discrete set in $X$. Hence, for every countable subset $A$ there exists a linear continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ such that $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$ according to Theorem \ref{th:2.2}.
\end{remark}
The following example shows that there exists a space $X$ with this property which is not a $P$-space.
\begin{example}
Let $S$ be an uncountable set and let $X$ be the set of all functions $x:S\to\{0,1\}$, i.e.\ $X=\{0,1\}^S$. Let $x_0(s)=0$ for every $s\in S$. We topologize $X$ so that $X\setminus \{x_0\}$ is an open subset of $X$ carrying the topology of uniform convergence on the countable subsets $T\subseteq S$. Moreover, the sets
$$
U(T,B)=(\{0,1\}^T\setminus B)\times \{0,1\}^{S\setminus T},
$$
where $T\subseteq S$ is a countable set and $B\subseteq \{0,1\}^T$ is a countable set with $x_0|_{T}\not\in B$, form a base of neighborhoods of $x_0$ in $X$.
It is easy to see that every countable set $A\subseteq X$ is strongly functionally discrete in $X$.
On the other hand, let $\{t_n:n\in \omega\}\subseteq S$ be a countable set. Then the set
$$
G=\bigcap_{n\in \omega}\{x\in X: x(t_n)=0\}
$$
is a $G_\delta$-set in $X$ with $x_0\in G$, but $G$ is not a neighborhood of $x_0$. Thus, $x_0$ is not a $P$-point in $X$.
\end{example}
\section{Continuous extension from closed sets}
\begin{proposition}\label{p:4.2} Let $X$ be a topological space, $A\subseteq X$, $F\subseteq X$ be a functionally closed set and $G\subseteq X$ be a functionally open set such that $A\subseteq F\subseteq G$ and $A$ is a retract of $G$. Then there exists a linear continuous mapping $\varphi:C_p^*(A)\to C_p^*(X)$ such that $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$.
\end{proposition}
\begin{proof} Choose a retraction $r:G\to A$ and a continuous function $f:X\to [0,1]$ such that $F\subseteq f^{-1}(1)$ and $X\setminus G\subseteq f^{-1}(0)$. It remains to put $\varphi(y)(x)=f(x)\cdot y(r(x))$ for $x\in G$ and $\varphi(y)(x)=0$ for $x\in X\setminus G$; since $y$ is bounded and $f$ vanishes on $X\setminus G$, the function $\varphi(y)$ is continuous.
\end{proof}
Let $X$ be a topological space and $\pi_X:X\to C_p(C_p(X))$, $\pi_X(x)(y)=y(x)$. We denote the linear subspace
$$
L(X)=\{\alpha_1\pi_X(x_1)+\cdots +\alpha_n\pi_X(x_n):n\in\omega,\,x_1,\dots ,x_n\in X,\, \alpha_1,\dots, \alpha_n\in\mathbb R\}.
$$
of $C_p(C_p(X))$ by $L(X)$. Analogously we put $\pi^*_X:X\to C_p(C^*_p(X))$, $\pi^*_X(x)(y)=y(x)$ and denote the linear subspace
$$
L^*(X)=\{\alpha_1\pi_X^*(x_1)+\cdots +\alpha_n\pi_X^*(x_n):n\in\omega,\,x_1,\dots ,x_n\in X,\, \alpha_1,\dots, \alpha_n\in\mathbb R\}.
$$
of $C_p(C^*_p(X))$ by $L^*(X)$.
\begin{proposition}\label{p:4.1} Let $X$ be a topological space and $A\subseteq X$. Consider the following conditions
$(i)$\,\,there exists a linear continuous mapping $\varphi:C_p(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p(A)$;
$(ii)$\,\,there exists a continuous mapping $\psi:X\to L(A)$ with $\psi(a)=\pi_A(a)$ for every $a\in A$;
$(iii)$\,\,there exists a linear continuous mapping $\varphi:C_p^*(A)\to C_p(X)$ with $\varphi(y)|_A=y$ for every $y\in C_p^*(A)$;
$(iv)$\,\,there exists a continuous mapping $\psi:X\to L^*(A)$ with $\psi(a)=\pi^*_A(a)$ for every $a\in A$.
Then $(i)\Leftrightarrow (ii)\Rightarrow (iii)\Leftrightarrow (iv)$. If $A$ is homeomorphic to a topological vector space, then $(iii)\Rightarrow (ii)$ and all conditions $(i)-(iv)$ are equivalent to the following condition
$(v)$\,\,$A$ is a retract of $X$.
\end{proposition}
\begin{proof} $(i)\Rightarrow(ii)$. We consider the continuous mapping $\psi:X\to C_p(C_p(A))$, $\psi(x)(y)=\varphi(y)(x)$. It remains to note that according to \cite[Theorem IV.1.2]{Sch}, for every $x\in X$ there exists $z\in L(A)$ such that $\psi(x)=z$.
$(ii)\Rightarrow(i)$. It is sufficient to put $\varphi(y)(x)=\psi(x)(y)$ for every $x\in X$ and $y\in C_p(A)$.
The equivalence $(iii)\Leftrightarrow(iv)$ can be proved similarly.
Now let $A$ be homeomorphic to a topological vector space. Without loss of generality we can assume that $A$ is a topological vector space. Then the implication $(iv)\Rightarrow(ii)$ follows immediately from the fact that the mapping $\phi:L(A)\to L^*(A)$, $\phi(z)=z|_{C_p^*(A)}$, is a homeomorphism. Moreover, the mapping $r:X\to A$,
$$
r(x)=\left\{\begin{array}{ll}
a, & x\in X\setminus A\,\,{\rm and}\,\,\psi(x)=\pi_A(a)\\
x, & x\in A
\end{array}
\right.
$$
is continuous.
\end{proof}
We say that a subset $A$ of a topological space $X$ is {\it an $L$-retract in $X$} if there exists a continuous mapping $\psi:X\to L(A)$ with $\psi(a)=\pi_A(a)$ for every $a\in A$.
A subset $A$ of a topological space $X$ is called {\it well-covered in $X$} if there exist a sequence of locally finite functionally open covers $(U(n,i):i\in I_n)$ of $A$ in $X$ and a sequence of families $(\lambda_{n,i}:i\in I_n)$ of continuous mappings $\lambda_{n,i}:\overline{U(n,i)}\to A$ such that for every $a\in A$ and every neighborhood $U$ of $a$ in $A$ there exist $n_0\in \omega$ and a neighborhood $U_0$ of $a$ in $X$ such that $\lambda_{n,i}(U(n,i)\cap U_0)\subseteq U$ for every $n\geq n_0$ and every $i\in I_n$.
\begin{theorem}\label{th:4.3}
Let $X$ be a topological space and $A$ be a well-covered functionally closed subset of $X$. Then $A$ is an $L$-retract in $X$.
\end{theorem}
\begin{proof} We choose a sequence of locally finite functionally open covers $(U(n,i):i\in I_n)$ of $A$ in $X$ and a sequence of families $(\lambda_{n,i}:i\in I_n)$ of continuous mappings $\lambda_{n,i}:\overline{U(n,i)}\to A$ which satisfy the condition from the definition of a well-covered set. Note that every set $G_n=\bigcup\limits_{i\in I_n}U(n,i)$ is functionally open in $X$. For every $n\in \omega$ we choose a partition of unity $(\phi_{n,i}:i\in I_n)$ on $G_n$ which is subordinated to $(U(n,i):i\in I_n)$. For a fixed $n\in \omega$ and $i\in I_n$ we consider the mapping $\mu_{n,i}:X\to L(A)$,
$$
\mu_{n,i}(x)=\left\{\begin{array}{ll}
\phi_{n,i}(x)\pi_A(\lambda_{n,i}(x)), & x\in U(n,i)\\
0, & x\in X\setminus U(n,i).
\end{array}
\right.
$$
Note that $\mu_{n,i}(x)= \phi_{n,i}(x)\pi_A(\lambda_{n,i}(x))$ for every $x\in \overline{U(n,i)}$. Therefore the restrictions of $\mu_{n,i}$ to the closed sets $\overline{U(n,i)}$ and $X\setminus U(n,i)$ are continuous. Thus $\mu_{n,i}$ is continuous too.
Now we choose sequences of functionally open in $X$ sets $W_n$ and functionally closed in $X$ sets $F_n$ such that $A\subseteq F_{n+1}\subseteq W_n\subseteq F_n\subseteq G_n$ for every $n\in \omega$ and $\bigcap_{n\in\omega}W_n=A$ (this is possible since $A$ is functionally closed), and a sequence of continuous functions $\delta_n:X\to[0,1]$ such that $\delta_n(X\setminus W_n)=\{0\}$ and $\delta_n(F_{n+1})=\{1\}$. Let the mapping $\psi_0:X\to L(A)$ be defined by $\psi_0(x)=0$. Moreover, for every $n\in \omega$ the mapping $\psi_n:G_n\to L(A)$ is defined by $\psi_n(x)=\sum\limits_{i\in I_n}\mu_{n,i}(x)$. Obviously, all mappings $\psi_n$ are continuous.
Now we consider the mapping $\psi:X\to L(A)$,
$$
\psi(x)=\left\{\begin{array}{lll}
\psi_0(x), & x\in X\setminus W_1\\
(1-\delta_n(x))\psi_{n-1}(x)+ \delta_n(x)\psi_n(x), & n\in \omega,\,\,x\in W_n\setminus W_{n+1}\\
\pi_A(x), & x\in A.
\end{array}
\right.
$$
It is clear that $\psi$ is continuous at every point $x\in X\setminus A=(X\setminus W_1)\cup\bigcup\limits_{n=1}^\infty(W_n\setminus W_{n+1})$.
Fix a point $x_0\in A$ and show that $\psi$ is continuous at $x_0$. It is sufficient to prove that for every $y\in C_p(A)$ there exists a neighborhood $V$ of $x_0$ in $X$ such that $|\psi(x)(y)-\psi(x_0)(y)|<1$ for each $x\in V$.
Let $y_0\in C_p(A)$ and let $U$ be a neighborhood of $x_0$ in $A$ such that $|y_0(x)-y_0(x_0)|<1$ for every $x\in U$. According to the choice of $U(n,i)$ and $\lambda_{n,i}$, there exist $n_0\in \omega$ and a neighborhood $U_0$ of $x_0$ in $X$ such that $\lambda_{n,i}(U(n,i)\cap U_0)\subseteq U$ for every $n\geq n_0$. Then for every $n\geq n_0$ and $x\in U_0\cap G_n$ we have
$$
|\psi_n(x)(y_0)-\psi(x_0)(y_0)|=|\sum\limits_{i\in I_n}\phi_{n,i}(x)\pi_A(\lambda_{n,i}(x))(y_0)-y_0(x_0)|=
$$
$$
=|\sum\limits_{i\in I_n}\phi_{n,i}(x)y_0(\lambda_{n,i}(x))-y_0(x_0)|\leq \sum\limits_{i\in I_n}\phi_{n,i}(x)|y_0(\lambda_{n,i}(x))-y_0(x_0)|<
$$
$$
<\sum\limits_{i\in I_n}\phi_{n,i}(x)=1.
$$
Now it is easy to see that $|\psi(x)(y_0)-\psi(x_0)(y_0)|<1$ for each $x\in U_0\cap W_{n_0+1}$.
\end{proof}
The following proposition shows that Theorem \ref{th:4.3} is a generalization of Theorem 4.3 from \cite{Bor}.
\begin{proposition}\label{pr:4.4}
Let $X$ be a stratifiable space. Then every closed in $X$ set $A\subseteq X$ is well-covered in $X$.
\end{proposition}
\begin{proof} According to the definition of a stratifiable space there exists a mapping $G$ which assigns to each $n\in \omega$ and each closed subset $H\subseteq X$ an open set $G(n,H)$ containing $H$ such that
$(1)$\,\, $H=\bigcap_{n\in \omega} \overline{G(n,H)}$;
$(2)$\,\, $G(n,H)\subseteq G(n,K)$ for every closed subsets $H\subseteq K\subseteq X$ and $n\in \omega$.
Note that without loss of generality we may assume that $G(n+1,H)\subseteq G(n,H)$ for all $n\in \omega$ and all closed sets $H\subseteq X$. Let $A$ be a closed subset of $X$. For every $n\in \omega$ we choose a locally finite in $X$ open refinement $(U(n,i):i\in I_n)$ of the cover $(G(n,\{x\}):x\in A)$ of $A$. For every $n\in \omega$ and $i\in I_n$ we choose $x(n,i)\in A$ with $U(n,i)\subseteq G(n,\{x(n,i)\})$ and put $\lambda_{n,i}(x)=x(n,i)$ for every $x\in \overline{U(n,i)}$. We show that the sequences of covers $(U(n,i):i\in I_n)$ and families $(\lambda_{n,i}:i\in I_n)$ satisfy the condition from the definition of a well-covered set.
Let $x_0\in A$ and $U$ be an open neighborhood of $x_0$. We choose $n_0\in \omega$ such that $x_0\not\in \overline{G(n_0, X\setminus U)}$ and put $U_0=X\setminus \overline{G(n_0, X\setminus U)}$. Let $n\geq n_0$ and $i\in I_n$ with $U(n,i)\cap U_0\ne \emptyset$. Since $U(n,i)\subseteq G(n,\{x(n,i)\})$, $G(n,\{x(n,i)\})\cap U_0\ne \emptyset$, i.e. $G(n,\{x(n,i)\})\not\subseteq G(n_0, X\setminus U)$. Taking into account that $G(n, X\setminus U)\subseteq G(n_0, X\setminus U)$ we obtain that $G(n,\{x(n,i)\})\not\subseteq G(n, X\setminus U)$. Thus, $x(n,i)\in U$ according to $(2)$.
\end{proof}
\begin{remark}\label{r:4.5}
The Sorgenfrey line $\mathbb S$ is an example of a perfectly normal non-stratifiable space in which every closed subset is well-covered in $\mathbb S$.
\end{remark}
\end{document}
\begin{document}
\title{On Increasing and Invariant Parking Sequences}
\abstract{The notion of parking sequences is a new generalization of parking functions introduced by
Ehrenborg and Happ. In the parking process defining the classical parking functions, instead of each car only taking one parking space, we allow the cars to have different sizes and each takes up a number of adjacent parking spaces after a trailer $T$ parked on the first $z-1$ spots. A preference sequence
in which all the cars are able to park is called a parking sequence.
In this paper, we study increasing parking sequences and count them via bijections to lattice paths with right boundaries. Then we study
two notions of invariance in parking sequences and present
various characterizations and enumerative results.
}
\section{Introduction}
Classical parking functions were first introduced by Konheim and Weiss \cite{Weiss}. The original concept involves a linear parking lot with $n$ available spaces and $n$ labeled cars each with a pre-fixed parking preference. Cars enter one-by-one in order.
Each car attempts to park in its preferred spot first. If a car finds its preferred spot occupied, it moves towards the exit and takes the next available spot. If there is no space
available, the car exits without parking. A \emph{parking function} of length $n$ is a preference sequence for the cars in which all cars are able to park (not necessarily in their preferred spaces).
A formal definition for parking functions can be stated as follows.
\begin{definition}\label{1.1}
Let $\vec{a} = (a_1, a_ 2, . . . , a_n)$ be a sequence of positive integers, and let $a_{(1)} \leq a_{(2)} \leq \cdots \leq a_{(n)}$ be the non-decreasing rearrangement of $\vec{a}$ . Then the sequence $\vec{a}$ is a parking function if and only if $a_{(i)} \leq i$ for all indices $i$. Equivalently, $\vec{a}$ is a parking function if and only if for all $i \in [n]$,
\begin{equation}\label{eq1}
\#\{j:a_j \leq i \} \geq i.
\end{equation}
\end{definition}
For example, the preference sequences (1, 2, 3, 4), (2, 1, 3, 4) or (1, 2, 4, 1) are
all parking functions, while (2, 2, 4, 2)
is not since it will have one car leave un-parked.
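The order-statistic criterion of Definition 1.1 is straightforward to test; the following small helper is an illustrative sketch (not part of the text) that checks it on the four sequences above.
\begin{verbatim}
# Check of the order-statistic criterion of Definition 1.1 (illustrative helper).
def is_parking_function(a):
    return all(x <= i + 1 for i, x in enumerate(sorted(a)))

for seq in [(1, 2, 3, 4), (2, 1, 3, 4), (1, 2, 4, 1), (2, 2, 4, 2)]:
    print(seq, is_parking_function(seq))   # True, True, True, False
\end{verbatim}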
It is well-known that the number of classical parking functions is $(n+1)^{n-1}$. An elegant proof by Pollak (see \cite{pollak}) uses a circle with $(n+1)$ spots where the parking functions are the preference sequences that could park all $n$ cars without using the $(n+1)$-th spot.
Definition 1.1 can be extended to define the notion of vector parking functions, or $\vec{u}$-parking functions. Let $\vec{u}$ be a non-decreasing sequence $(u_1,u_2,u_3,...)$ of positive integers. A $\vec{u}$-parking function of length $n$ is a sequence $(x_1, x_2,..., x_n)$ of positive integers whose non-decreasing rearrangement
$x_{(1)} \leq x_{(2)} \leq \cdots \leq x_{(n)}$
satisfies $x_{(i)} \leq u_i$. Equivalently, $(x_1, \dots, x_n)$ is a $\vec{u}$-parking function if and only if for all $i \in [n]$,
\begin{equation}\label{eq2}
\#\{j:x_j \leq u_i \} \geq i.
\end{equation}
Denote by $\mathsf{PF}_n(\vec{u})$
the set of all $\vec{u}$-parking functions of length $n$.
When $u_i=i$ we obtain the
classical parking functions.
When $u_i=a+b(i-1)$ for some $a, b\in \mathbb{Z}_+$, it is known that the number of $\vec{u}$-parking functions is $a(a+bn)^{n-1}$; see e.g. \cite{goncpoly}.
The set of parking functions is a basic object lying in the center of combinatorics, with many connections and applications to other branches of mathematics and disciplines, such as storage problems in computer science, graph searching algorithms, interpolation theory, diagonal harmonics, and sandpile models. Because of their rich theories and applications, parking functions and their variations have been studied extensively in the literature. See \cite{yandiff} for a comprehensive survey on the combinatorial theory of parking functions.
There is a particular generalization of parking functions that was recently introduced by Ehrenborg and Happ \cite{parkcars, parktrailer}, called parking sequences. Again, there are $n$ cars trying to park in a linear parking lot. In this new model the car $C_i$ has length $y_i \in \mathbb{Z}_+$ for each $i=1,2,...,n$. Call $\vec{y}=(y_1,y_2,...,y_n)$ the \emph{length vector}. There is a trailer $T$ of length $z-1$ parked at the beginning of the street after which the $n$ cars park with car $C_i$ taking up $y_i$ adjacent parking spaces. Given a sequence $\mathbf{c}=(c_1, . . . , c_n) \in \mathbb{Z}_+^n$, for $i=1, 2, \dots, n$ the cars enter the street in order, and
car $C _i$ looks for the first empty spot $j \geq c_i$. If the spaces $j$ through $j + y_i - 1$ are all empty, then car $C_i$ parks in these spots. If $j$ does not exist or any of the spots $j + 1$ through $j + y_i - 1$ is already occupied, then there will be a collision and the car cannot park and has to leave the street. In this case, we say the parking fails.
\begin{definition}
Assume there are $z-1+\sum_{i=1}^n y_i$ parking spots along a street, with the first $z-1$ occupied by a trailer. The sequence $\mathbf{c}=(c_1, . . . , c_n)$ is called a \emph{parking sequence for $(\vec{y},z)$} where $\vec{y} = (y_1, . . . , y_n)$ if all $n$ cars can park without any collisions. We denote the set of all such parking sequences by $\mathsf{PS}(\vec{y};z)$.
\end{definition}
For example, $\mathbf{c}=(3,7,5,3)$ is a parking sequence for $(\vec{y}; z)$ where $\vec{y}=(1,2,2,3)$ and $z=4$. Figure~\ref{fig:my_label1} shows how the cars $C_1, \dots, C_4$ would park along the street with the preference sequence
$\mathbf{c}$.
As given in \cite{parktrailer}, the number of parking sequences in $\mathsf{PS}(\vec{y};z)$ is
\begin{equation}\label{parktrailer1}
z\cdot (z+y_1 + n-1)\cdot(z+y_1 + y_2 + n - 2) \cdots (z+y_1 +\cdots + y_{n-1} + 1).
\end{equation}
\begin{figure}
    \caption{The final parking configuration $T,C_1,C_3,C_2,C_4$ for the preference sequence $\mathbf{c}=(3,7,5,3)$ with $(\vec{y};z)=((1,2,2,3);4)$.}
    \label{fig:my_label1}
\end{figure}
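The parking process itself is easy to simulate directly. The sketch below is an illustration we add here (the function name and conventions are ours); it reproduces the example of Figure \ref{fig:my_label1} and the two-car example with $\vec{y}=(2,2)$, $z=1$ discussed below.
\begin{verbatim}
# Direct simulation of the parking process for parking sequences.
# Spots are 0-indexed; spots 0,...,z-2 hold the trailer; car C_i of length y[i]
# looks for the first empty spot j >= c[i] and needs spots j,...,j+y[i]-1 free.
def parks(c, y, z):
    total = z - 1 + sum(y)
    occupied = [True] * (z - 1) + [False] * (total - (z - 1))
    for ci, yi in zip(c, y):
        j = next((s for s in range(ci - 1, total) if not occupied[s]), None)
        if j is None or j + yi > total or any(occupied[j:j + yi]):
            return False      # no empty spot, runs off the street, or collision
        for s in range(j, j + yi):
            occupied[s] = True
    return True

print(parks((3, 7, 5, 3), y=(1, 2, 2, 3), z=4))                    # True
print(parks((1, 2), y=(2, 2), z=1), parks((2, 1), y=(2, 2), z=1))  # True False
\end{verbatim}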
From \eqref{eq1} and \eqref{eq2}
it is easy to see that any permutation of a $\vec{u}$-parking function is also a $\vec{u}$-parking function. This is however not true for parking sequences. Consider as an example a one-way street with 4 spots and 2 cars with fixed length vector $\vec{y}=(2,2)$ and $z=1$ (i.e., no trailer). Then, whereas $\mathbf{c}=(1,2)$ is a parking sequence for $(\vec{y};z)$, $\mathbf{c}'=(2,1)$ is not. Thus, it is natural to ask which parking sequences $\mathbf{c}$ are invariant for $(\vec{y};z)$, that is, remain parking sequences for $(\vec{y};z)$ after
the entries of $\mathbf{c}$ are permuted. Another question is which sequence remains a parking sequence when the cars enter the street in different orders. In other words, we want to know which preference
sequence allows all the cars to park when the length vector $\vec{y}$ is permuted to $(y_{\sigma(1)},y_{\sigma(2)}, ..., y_{\sigma(n)})$ for an arbitrary $\sigma \in \mathfrak{S}_n$.
There are several basic notions and variations associated with parking functions and their generalizations. Usually, these notions lead to a study of special classes of parking functions that have some interesting property. One of such special classes is the set of \emph{increasing parking functions}, which have non-decreasing entries and are counted by the ubiquitous Catalan numbers. It is only natural to ask for a generalization of this class in the set of parking sequences.
We study these questions in the present work. The rest of the paper is organized as follows. In section 2, we discuss increasing parking sequences and their connection to lattice paths.
In section 3, we fix the length vector $\vec{y}$ and characterize all permutation-invariant parking sequences when $\vec{y}$ has some special characteristics. Then, in section 4, we characterize all parking sequences that remain valid
for all permutations of $\vec{y}$.
We finish the paper with some closing remarks in section 5.
\section{Increasing Parking Sequences}
In this section, we consider all non-decreasing parking sequences for any given pair $(\vec{y};z)$. By convention, we write $[x]=\{1,2,...,x\}$ and the interval $[x,y]=\{x,x+1,...,y\}$, where $x,y \in \mathbb{Z}_+$ and $x<y$. Given any sequence $\mathbf{b}=(b_1, . . . , b_n) \in \mathbb{Z}_+^n$, let $\mathbf{b}_{inc}=(b_{(1)}, . . . , b_{(n)})$ be the non-decreasing rearrangement of the entries of $\mathbf{b}$ and the $i^{th}$ entry $b_{(i)}$ of $\mathbf{b}_{inc}$ is called the \emph{i-th order statistic} of $\mathbf{b}$. Next, we define the final parking configuration for any given parking sequence.
\begin{definition}
Let $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$.
The \emph{final parking configuration} of
$\mathbf{c}$ is the arrangement of cars $C_1, C_2, ..., C_n$ following the trailer $T$ encoding their relative order on the street after they are done parking using the preference sequence $\mathbf{c}$.
\end{definition}
\noindent For example, in Figure \ref{fig:my_label1}, the final parking configuration of $\mathbf{c}=(3,7,5,3)$ is $T,C_1,C_3,C_2,C_4$.
The following inequalities analogous to \eqref{eq1} give a necessary condition for being a parking sequence.
\begin{lemma} \label{PSchar}
Suppose $\mathbf{c}=(c_1, . . . , c_n) \in \mathsf{PS}(\vec{y};z)$ where $\vec{y} = (y_1 , . . . , y_n)$. Then, $\#\{ j \in [n]:c_j \leq z \} \geq 1$ and for each $1\leq t \leq n-1$,
\begin{equation}\label{eq4}
\#\{ j:c_j \leq z +\sum_{i=0}^{t-1}y_{(n-i)}\} \geq t+1.
\end{equation}
\end{lemma}
\begin{proof}
We have $\#\{ j:c_j \leq z \} \geq 1$ because otherwise, there is no car whose preference is less than or equal to $z$, thus no car parks on spot $z$ and we obtain a contradiction.
Suppose for some $t \in [1, n-1]$, $\#\{ j:c_j \leq z +\sum_{i=0}^{t-1}y_{(n-i)}\} \leq t$. Then, in the final parking configuration the spots $[1,z+\sum_{i=0}^{t-1}y_{(n-i)}]$ are covered only by the trailer and at most $t$ cars (a car occupying one of these spots must start at one of them, hence has preference at most $z+\sum_{i=0}^{t-1}y_{(n-i)}$), which together occupy at most $z-1+y_{(n)}+y_{(n-1)}+\cdots+y_{(n-t+1)}$ of these spots. Thus, not all spots are used in the final parking configuration, and this contradicts the fact that $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$.
\end{proof}
\vanish{
Setting $z=1$ in the above Lemma yields the following.
\begin{corollary}
Let $\mathbf{c}=(c_1, . . . , c_n) \in \mathsf{PS}(\vec{y})$. where $\vec{y} = (y_1 , . . . , y_n)$.
Then,\\ $\#\{ j \in [n]:c_j = 1 \} \geq 1$ and for each $1\leq t \leq n-1$,
\begin{equation}
\#\{ j:c_j \leq 1 +\sum_{i=0}^{t-1}y_{(n-i)}\} \geq t+1
\end{equation}
\end{corollary}
}
\begin{corollary}\label{PScharcor}
Let $\mathbf{c}=(c_1, . . . , c_n) \in \mathsf{PS}(\vec{y};z)$ where $\vec{y} = (y_1 , . . . , y_n)$.
Then $c_{(1)} \leq z$ and for $j=2, \dots, n$,
\begin{equation}\label{PSdec}
c_{(j)} \leq z+ \sum_{i=0}^{j-2} y_{(n-i)}.
\end{equation}
\end{corollary}
We note that the conditions of Lemma \ref{PSchar} are not sufficient. Using the same example as before, even though $\mathbf{c}=(1,2)$ and $\mathbf{c}'=(2,1)$ both satisfy (\ref{eq4}) for $\vec{y}=(2,2)$, $\mathbf{c}' \not\in \mathsf{PS}(\vec{y};1)$.
In addition, for a parking sequence $\mathbf{c} \in PS(\vec{y};z)$, its rearrangement $\mathbf{c}_{inc}$
is not
necessarily a parking sequence. Consider the following example for $\vec{y}=(1,1,4)$ and $z=1$. $\mathbf{c}=(5,6,1) $ is in $\mathsf{PS}(\vec{y};z)$ but $\mathbf{c}_{inc}=(1,5,6)$ is not.
\begin{definition}
A sequence $\mathbf{c}=(c_1, . . . , c_n) \in \mathsf{PS}(\vec{y};z)$ is an \emph{increasing parking sequence for $(\vec{y};z)$} if $c_1\leq c_2 \leq \cdots \leq c_n$.
We denote the set of all increasing parking sequences for $(\vec{y};z)$ by $\mathsf{IPS}(\vec{y};z)$.
\end{definition}
When $\vec{y} = (1 , 1, . . . , 1)$ and $z = 1$ (i.e. the trailer of length 0), Definition 2.2 leads to the classical increasing
parking functions, which are counted by the Catalan numbers.
It is well-known that classical increasing parking functions of length $n$ are in one-to-one correspondence with Dyck paths of semilength $n$, which are lattice paths from $(0,0)$ to $(n,n)$ with strict right boundary $(1,2,...,n)$. This result can be generalized to
increasing parking sequences.
First we show that an analog of \eqref{eq1} is enough to characterize increasing parking sequences.
\begin{prop}\label{PSchar2}
Let $(\vec{y};z) = (y_1 , . . . , y_n;z)$. Then, $\mathbf{c}=(c_1, . . . , c_n) \in \mathsf{IPS}(\vec{y};z)$ if and only if $c_1 \leq c_2 \leq \cdots \leq c_n$ and
for all $i \in [n]$,
\begin{equation} \label{ipf}
c_i \leq z+ \sum_{j=1}^{i-1} y_j.
\end{equation}
\end{prop}
\begin{proof}
Observe that
if $\mathbf{c}$ is a non-decreasing preference sequence satisfying \eqref{ipf}, then the cars will park in the final configuration $T, C_1,\dots, C_n$. Hence
$\mathbf{c}$ is in $\mathsf{IPS}(\vec{y};z)$.
Conversely, for a non-decreasing sequence $\mathbf{c}$ that
allows all the cars to park, we need to prove that it satisfies \eqref{ipf}.
First by Corollary \ref{PScharcor}, $c_{1} \leq z$. Thus, car $C_1$ parks right after the trailer leaving no gap. By the rules of the parking process, if $c_i \leq c_{i+1}$ and both cars $C_i$ and $C_{i+1}$
are able to park,
then $C_{i+1}$ will
park after $C_i$. Hence for a non-decreasing $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$, the final parking configuration must be $T, C_1, C_2, \dots, C_n$. It follows that
the first spot occupied by car $C_i$ is $z+y_1+ \cdots +y_{i-1}$,
which must be larger than or equal to $c_i$.
\vanish{
Now, assuming car $C_{k-1}$ ($2\leq k\leq n$) has parked and there are no gaps in between the cars parked thus far on the street. For car $C_k$, we must have that $c_{k} \leq z+y_1+y_2+\cdots+y_{k-1}$, otherwise the $n-(k-1)$ cars remaining after car $C_{k-1}$ all have preferences greater than $r=z+y_1+y_2+\cdots+y_{k-1}$, hence no car parks on spot $r$ in the final parking arrangement, contradicting the fact that $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$). Thus, $C_k$ parks right after $C_{k-1}$ leaving no gaps and the proof is done by induction.}
\end{proof}
Proposition \ref{PSchar2} allows us to
enumerate increasing parking sequences for any given length vector $\vec{y}$ and $z \in \mathbb{Z}_+$ using results in lattice path counting. Recall that a \emph{lattice path} from $(0,0)$ to $(p,q)$ is a sequence of $p$ east steps and $q$ north steps.
It can be represented by a sequence of non-decreasing
integers $(x_1, x_2, \dots, x_q)$ such that the north steps are at $(x_i,i-1) \to (x_i,i)$, for $i=1,...,q$. The lattice path is said to have strict right boundary $(b_1,b_2,...,b_q)$
if $0\leq x_i < b_i$ for all $1\leq i \leq q$.
Let $\mathsf{LP}_{p,q}(b_1,b_2,...,b_q)$ denote the set of all lattice paths from $(0,0)$ to $(p,q)$ with strict right boundary $(b_1, b_2, \dots, b_q)$.
Figure \ref{fig:my_label} shows an example of a lattice path (2,3,3,7) from (0,0) to (8,4) with strict right boundary $\vec{b}=(3,4,5,8)$.
\begin{figure}
\caption{A lattice path (2,3,3,7) with strict right boundary at $(3,4,5,8)$.}
\label{fig:my_label}
\end{figure}
We can represent increasing parking sequences in terms of lattice paths with strict right boundary as follows:
Let $(\vec{y};z) = (y_1 , . . . , y_n; z)$ and $M=z-1+y_1+y_2+\cdots +y_{n-1}+y_n$. Then by Proposition \ref{PSchar2} there is a bijection from $\mathsf{IPS}(\vec{y};z)$ to the set of lattice paths from $(0,0)$ to $(M,n)$ with strict right boundary $(z,z+y_1, z+y_1+y_2,..., z+y_1+y_2+\cdots +y_{n-1})$.
(The bijection sends $\mathbf{c}=(c_1,\dots,c_n)$ to the lattice path $(c_1-1,\dots,c_n-1)$; the boundary is strict because in the lattice path $x_i$ can be $0$, while in $\mathbf{c}\in \mathsf{IPS}(\vec{y};z)$ we have $c_i \geq 1$.)
There are well-known determinant formulas to count the number of lattice paths with general boundaries, see, for example, Theorem 1 of \cite[Chap.2]{mohanty}, which leads to the following determinant formula.
\begin{corollary}\label{2.3.1}
Suppose $M=z-1+y_1+y_2+\cdots +y_{n-1}+y_n$. Then,
\begin{align*}
\#\mathsf{IPS}(\vec{y};z) &= \# \mathsf{LP}_{M,n}(z,z+y_1, z+y_1+y_2,..., z+y_1+y_2+\cdots +y_{n-1}) \\
&= \det \left[\binom{b_i}{j-i+1} \right]_{1\leq i,j \leq n}
\end{align*}
where $b_1=z$ and $b_i=z+y_1+y_2+\cdots +y_{i-1}$ for $i=2,...,n$.
\end{corollary}
For the special case that the length vector has constant entries,
there are nicer closed formulae for the determinant.
Specifically, when $\vec{y}=(k^n)=(k,k,\dots, k)$ and $M=z+kn-1$,
$\mathsf{LP}_{M,n}(z,z+k, z+2k,..., z+(n-1)k)$ is the set of lattice paths from $(0,0)$ to $(z+kn-1, n)$ which never touch the line $x = z + ky$. Using the formula (1.11) of \cite[Chap.1]{mohanty}, we have
\begin{corollary}
Suppose $(\vec{y},z)=((k^n);z)$ and $M=z+kn-1$. Then
\begin{align*}
\#\mathsf{IPS}(\vec{y};z) = \# \mathsf{LP}_{M,n}(z,z+k, z+2k,..., z+(n-1)k) &= \frac{z}{z+n(k+1)}\binom{z+n(k+1)}{n}.
\end{align*}
\end{corollary}
This specializes to the Fuss-Catalan numbers when $z=1$.
\begin{corollary} \label{Fuss}
Suppose $\vec{y}=(k,k,\dots, k) \in \mathbb{Z}_+^n$. Then
$$\#\mathsf{IPS}(\vec{y};1) = \frac{1}{kn+1}\binom{(k+1)n}{n}.$$
\end{corollary}
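The Fuss--Catalan specialization can likewise be verified by brute force for small parameters; the short check below is illustrative only and compares the enumeration of $\mathsf{IPS}((k^n);1)$ with $\frac{1}{kn+1}\binom{(k+1)n}{n}$.
\begin{verbatim}
# Brute-force check of the Fuss-Catalan count for IPS((k,...,k); 1),
# using the characterization c_1 <= ... <= c_n and c_i <= 1 + (i-1)k.
from itertools import combinations_with_replacement
from math import comb

def count_ips_constant(k, n):
    b = [1 + i * k for i in range(n)]
    return sum(all(c[i] <= b[i] for i in range(n))
               for c in combinations_with_replacement(range(1, b[-1] + 1), n))

for k in (1, 2, 3):
    for n in (1, 2, 3, 4):
        assert count_ips_constant(k, n) == comb((k + 1) * n, n) // (k * n + 1)
print("Fuss-Catalan counts confirmed for k <= 3, n <= 4")
\end{verbatim}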
\vanish{
When $\vec{y} = (1 , 1, . . . , 1)$ and $z = 1$, the increasing parking sequences are exactly the classical increasing
parking functions, which are counted by the Catalan numbers. It is well-known that classical increasing parking functions of length $n$ are in one-to-one correspondence with Dyck paths of semilength $n$, which are lattice paths from $(0,0)$ to $(n,n)$ with strict right boundary $(1,2,...,n)$.
Hence Corollaries \ref{2.3.1}--\ref{Fuss} generalize the result in the classical case. }
\section{Invariance for Fixed Length Vector}
In this section, we study the first of two types of invariance for parking sequences.
Fixing the length vector $\vec{y}\in \mathbb{Z}_+^n$ and a positive integer $z$, we investigate which parking sequence remains in the set $\mathsf{PS}(\vec{y};z)$ after its entries are
arbitrarily rearranged.
\begin{definition}
Fix $\vec{y}=(y_1 , . . . , y_n)$ and $z \in \mathbb{Z}_+$.
Let $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$.
We say that $\mathbf{c}$ is a \emph{permutation-invariant parking sequence for $(\vec{y};z)$} if for any rearrangement $\mathbf{c}'$ of $\mathbf{c}$, we have $\mathbf{c}' \in \mathsf{PS}(\vec{y};z)$. We denote the set of all permutation-invariant parking sequences for $(\vec{y};z)$ by $\mathsf{PS}_{inv}(\vec{y};z)$.
\end{definition}
\noindent For example, for $\vec{y}=(1,2)$, we have
$\mathsf{PS}(\vec{y};1)=\{(1,1), (1,2),(3,1)\}$ and $\mathsf{PS}_{inv}(\vec{y};1)=\{(1,1)\}$.
First, we describe a subset of the invariant parking sequences.
\begin{prop}\label{minimal}
For any $\mathbf{c}=(c_1,...,c_n) \in [z]^n$, we have $\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$.
\end{prop}
\begin{proof}
For any preference sequence $\mathbf{c}$, if $c_i \leq z$ for all $i$, then we obtain the final parking configuration $T,C_1,C_2,...,C_n$,
which means $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$.
Since the condition $c_i \in [z]$ for all $i$
does not depend on the order of $c_i$,
we have $\mathbf{c}$ is permutation-invariant.
\end{proof}
In general, $\mathsf{PS}_{inv}(\vec{y};z)$ may be strictly larger than the set $[z]^n$, and the situation can be more complicated. The following two examples show that $\mathsf{PS}_{inv}(\vec{y};z)$ depends not only on
the relative order of the $y_i$'s, but also on the difference of $y_i$'s.
\begin{example}\label{1.1}
Let $\vec{y}=(y_1,y_2)$ and $z=1$.
If $y_1 < y_2$, then $\mathsf{PS}_{inv}(\vec{y};1)=\{(1,1)\}$.
On the other hand, if $y_1 \geq y_2$, we have $\mathsf{PS}_{inv}(\vec{y};1)=\{(1,1), (1,y_2+1), (y_2+1,1)\}$.
\end{example}
\vanish{
\noindent Example \ref{1.1} shows $\mathsf{PS}_{inv}(\vec{y})$ depends on relative order of the $y_i$'s. In addition, it also depends on the size of the $y_i$'s. To see this, consider the following examples of 3 cars with different length vectors. }
\begin{example}
Suppose $\vec{y}=(4,3,2)$ and ${\vec{t}}=(4,3,1)$. It is easy to check that $$\mathsf{PS}_{inv}(\vec{y};1)=\{(1,1,1),(1,1,4),(1,4,1),(4,1,1)\}$$ and $$\mathsf{PS}_{inv}(\vec{t};1)=\{(1,1,1),(1,1,4),(1,4,1),(4,1,1),(1,1,5), (1,5,1), (5,1,1)\}.$$ Note that the relative orders for the
vectors $\vec{y}$ and $\vec{t}$ are the same (both have the pattern $321$),
but the invariant sets are different.
\end{example}
In the following
we characterize the invariant set for some families of $\vec{y}$. First, we consider the case where the length vector is strictly increasing. Next, we look at the case where $\vec{y}$ is a constant sequence.
Lastly, given $a,b \in \mathbb{Z}_+$, we consider two cases where the length vector is of the form (i) $\vec{y}=(a,...,a,b,...,b)$ where $a<b$ and (ii) $\vec{y}=(a,...,a,b,...,b)$ where $b=1$ and $a>b$.
\subsection{Strictly increasing length vector}
When $\vec{y}$ is a strictly increasing sequence, we show that Proposition \ref{minimal} gives all the permutation-invariant parking sequences.
\begin{theorem}\label{3.3}
Let $(\vec{y};z) = (y_1,y_2, . . . , y_n; z)$ where $y_1 < y_2 < \dots < y_n$. Then, $$\mathsf{PS}_{inv}(\vec{y};z)=[z]^{n}.$$
\end{theorem}
\begin{proof}
By Proposition \ref{minimal}, $[z]^{n} \subseteq \mathsf{PS}_{inv}(\vec{y};z)$.
Conversely, suppose $\mathbf{c}=(c_1,c_2,...,c_n)$ is a parking sequence for $(\vec{y};z)$ with some $c_i \not\in [z]$. We claim that $\mathbf{c}$ is not permutation-invariant.
To see this, suppose to the contrary that $\mathbf{c}$ is permutation-invariant, and let $x= \min \{c_i \in \mathbf{c}\,\,| c_i > z \}$. Then in particular the non-decreasing rearrangement
$\mathbf{c}_{inc}= (c_{(1)},c_{(2)},...,c_{(r)},x,c_{(r+2)}, \dots,c_{(n)})$, where $c_{(1)} \leq c_{(2)} \leq \cdots \leq c_{(r)} \leq z < x \leq c_{(r+2)} \leq \cdots \leq c_{(n)}$ and $r\geq 1$ by Corollary \ref{PScharcor}, belongs to $\mathsf{IPS}(\vec{y};z)$. Hence, by Proposition \ref{PSchar2}, $x$ satisfies the inequality $z < x \leq z+\sum_{i=1}^{r}y_i$. Thus, we can choose the maximum $s$ such that $x > z+ \sum_{i=1}^{s}y_i$, where
$0\leq s<r$. Consider the preference $$\mathbf{c'}= (c_{(1)},c_{(2)},...,c_{(s)},x,c_{(s+1)}, \dots, c_{(r)},c_{(r+2)}, \dots,c_{(n)})$$ We try to park according to $\mathbf{c'}$. Clearly, the first $s$ cars park in order after the trailer T without any gaps in between them. Then, the car $C_{s+1}$ has preference $x$ and parks after car $C_s$ with $h$ unoccupied spots in between $C_s$ and $C_{s+1}$, where $h=x - (z+ \sum_{i=1}^{s}y_i) \geq 1$ and $h \leq y_{s+1}$ by the maximality of $s$. Among the un-parked cars $C_{s+2}, \dots, C_n$, the minimal length is $y_{s+2}$, where
$y_{s+2} > y_{s+1} \geq h$. Thus no car can fill in these $h$ unoccupied spots. It follows that
$\mathbf{c'} \not\in \mathsf{PS}(\vec{y};z)$, contradicting the assumed permutation-invariance of $\mathbf{c}$. Hence
$\mathbf{c} \not\in \mathsf{PS}_{inv}(\vec{y};z)$.
\end{proof}
\begin{corollary}
Let $(\vec{y};z) = (y_1,y_2, . . . , y_n; z)$ where $y_1 < y_2 < \dots < y_n$. Then, $$\#\mathsf{PS}_{inv}(\vec{y};z)=z^{n}.$$
\end{corollary}
\subsection{Constant length vector }
In this subsection, we investigate the case where $\vec{y}$ is of the form $(k^n)=(k,k, . . . , k)$.
\begin{theorem}\label{Inv}
Suppose $(\vec{y};z) = ((k^n); z)$ where $k\in \mathbb{Z}_+$ and $k>1$. Then, $\mathsf{PS}_{inv}(\vec{y};z)$ is the set of all sequences $(c_1,...,c_n)$ such that for each $1\leq i\leq n$,
\begin{enumerate}[(i)]
\item $c_{(i)} \leq z+(i-1)k$, and
\item $c_i \in \{1,2,...,z,z+k,z+2k,...,z+(n-1)k\}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\mathbf{c}=(c_1,...,c_n)$ be a sequence such that for each $1 \leq i \leq n$ we have $c_{(i)} \leq z+(i-1)k$, and each $c_i$ satisfies $c_i\leq z$ or $c_i=z+sk$ for some $s=0,1,\dots,n-1$. We claim that $\mathbf{c} \in \mathsf{PS}(\vec{y}; z)$.
Since these conditions are independent of the arrangement of the terms $c_i$, this implies that $\mathbf{c}$ is permutation-invariant.
We attempt to park using $\mathbf{c}$.
First, $C_1$ either parks right after the trailer if $c_1 \leq z$, or on spots $[c_1,c_1+k-1]$ if $c_1=z+sk$ for some $s \geq 0$. As our inductive hypothesis, we assume that the first $r$ cars (where $1\leq r \leq n-1$) are already parked and that the following observations hold at this stage of the parking process:
\begin{enumerate}
\item Any car already parked on the street occupies spots of the form $[z+ks, z+k(s+1)-1]$, where $s \in \{0,1,\dots,n-1\}$.
\item For any maximal interval of unoccupied spots, the length is a multiple of $k$ and the interval starts at $z+km$ for some $m \in \{0,1,...,n-1\}$.
\end{enumerate}
Now car $C_{r+1}$ comes with preference $c_{r+1}$. By our assumption on $\mathbf{c}$, either $c_{r+1}\leq z$ or $c_{r+1}=z+kl$ for some $l\geq 0$. If $c_{r+1}\leq z$, then $C_{r+1}$ drives to the first open spot at or after spot $z$ and the argument below applies verbatim with $l=0$; so assume $c_{r+1}=z+kl$. There are two possibilities:
\begin{itemize}
\item if spot $(z+kl)$ is empty, then $C_{r+1}$ parks on spots $[z+kl, z+k(l+1)-1]$.
\item if spot $(z+kl)$ is non-empty, then $C_{r+1}$ drives forward to park in the first open interval ahead. Such an open interval must exist. Otherwise, let $x$ be the last open spot on the street after $C_r$ has parked. By the inductive hypothesis, $x=z+sk-1$ for some $s\leq l$ (if there is no open spot at all, then the $nk$ spots beyond the trailer are covered by at most $n-1$ parked cars of length $k$, which is absurd). So all the spots from $x+1=z+sk$ to the end of the street $(z+nk-1)$ are occupied, by $n-s$ cars. These $n-s$ cars, as well as $C_{r+1}$, all have preference greater than $x$.
In other words, there are at least $n-s+1$ cars having preference $c_i \geq z+sk$, which implies $c_{(s)} \geq z+sk$, contradicting the assumption $c_{(s)}\leq z+(s-1)k$.
\end{itemize}
This exhausts all possible cases for $C_{r+1}$, and in either case observations (1) and (2) continue to hold after $C_{r+1}$ has parked. Thus, by induction, all cars can park and $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$.
Conversely, suppose for a contradiction that there is a parking sequence $\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$ not satisfying Condition (ii).
(By Corollary \ref{PScharcor} Condition (i) holds for
all $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$.)
Then, there is some $j \in [n]$ such that $c_{j}=z+sk+t$ for some $s \in \{0,1,...n-1\}$ and $1\leq t <k$. Consider the following rearrangement of $\mathbf{c}$ given by
$\mathbf{c}'=(c_{j},c_{1},c_{2},...,c_{j-1},c_{j+1},...,c_{n})$. By our assumption, $\mathbf{c}' \in \mathsf{PS}(\vec{y};z)$. We attempt to park using this preference. First, $C_1$ parks on $[z+sk+t,z+(s+1)k+t-1]$. However, between the trailer and $C_1$ there is now an unoccupied interval of $(z+sk+t)-z=sk+t$ spots, which is nonempty and not a multiple of $k$. Cars of length $k$ parked in this interval can cover only a multiple of $k$ of its spots, so at least $t\geq 1$ of them remain unused; since the street has exactly $z+nk-1$ spots, not all cars can park, no matter what preferences the remaining cars have. This contradicts our assumption.
\end{proof}
Recall that a $\vec{u}$-parking function of length $n$ is a sequence $(x_1, x_2,..., x_n)$ of positive integers satisfying $1 \leq x_{(i)} \leq u_i$ for all $i\in[n]$. We can use the results of vector parking functions
to enumerate the number of sequences described in Theorem \ref{Inv}.
\begin{corollary}\label{Constant-count}
Let $(\vec{y};z) = ((k^n); z)$ with $k \geq 2$. Then, $$\#\mathsf{PS}_{inv}(\vec{y};z) = z(n+z)^{n-1}.$$
\end{corollary}
\begin{proof}
For any $\mathbf{c}=(c_1,...,c_n) \in \mathsf{PS}_{inv}(\vec{y};z)$ let
$\mathsf{f}(\mathbf{c})=\mathbf{c}'$, where $\mathbf{c}'$ is the sequence whose entries are given by
\[ c_i'=
\begin{cases}
c_i, \text{ if } 1\leq c_i \leq z \\
z+s, \text{ if } c_i = z+sk.
\end{cases}
\]
The condition $c_{(i)} \leq z+(i-1)k$ implies $c_{(i)}' \leq z+i-1$, hence $\mathbf{c}'$ is a vector parking function associated to the vector $\vec{u}=(z,z+1,...,z+n-1)$, and $\mathsf{f}$ is a map from
$\mathsf{PS}_{inv}(\vec{y};z) $ to $\mathsf{PF}_n(\vec{u})$.
It is clear that $\mathsf{f}$ is a bijection
since the map can be easily inverted.
By \cite[Corollary 5.5]{goncpoly}, the number of $\vec{u}$-parking functions is $z(z+n)^{n-1}$.
\end{proof}
\textsc{Remark}. Note that Theorem~\ref{Inv} and
Corollary~\ref{Constant-count} are also valid for $k=1$, in which case the elements of $\mathsf{PS}_{inv}((1^n);z) = \mathsf{PS}((1^n);z)$ are exactly the $\vec{u}$-parking functions associated to
$\vec{u}=(z, z+1, \dots, z+n-1)$.
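As a quick check of Theorem~\ref{Inv} and Corollary~\ref{Constant-count}, take $k=2$, $n=2$ and $z=1$. Conditions (i) and (ii) give $\mathsf{PS}_{inv}((2,2);1)=\{(1,1),(1,3),(3,1)\}$, while the formula gives $z(n+z)^{n-1}=1\cdot 3^{1}=3$, in agreement.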
\subsection{Length vector \texorpdfstring{$\vec{y}=(a,...,a,b,...,b)$}{} where
\texorpdfstring{$a<b$}{}}
Let $n \geq 2$ and $z, a, b, r$ be positive integers with
$a < b$ and $1 \leq r <n$. In this subsection we fix $\vec{y}=(\underbrace{a,a,...,a}_r, \underbrace{b,...,b}_{n-r}) =(a^r, b^{n-r})$, i.e. the first $r$ cars are of size $a$ and the remaining $n-r$ cars are of size $b$.
First we prove a few lemmas that characterize the set of permutation-invariant parking sequences for $(\vec{y};z)$.
In the following we will refer to any car of size $a$ (respectively, size $b$) as an $A$-car (respectively, $B$-car).
\begin{lemma}\label{lemI}
Assume $\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$. Then in the final parking configuration of $\mathbf{c}$, all $A$-cars park in $[z,z+ra-1]$.
\end{lemma}
\begin{proof}
Suppose not. Then, there is some $\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$ for which at least one $A$-car is not parked in the interval $[z,z+ra-1]$ in the final parking configuration $\mathcal{F}$.
In $\mathcal{F}$, the spots occupied by $B$-cars form maximal blocks $L_1, \dots, L_m$ of consecutive spots, listed from left to right, where the block $L_i$ consists of $l_ib$ spots and $l_1+l_2+\cdots + l_m=n-r$.
Let $C_j$ be the last $A$-car in the configuration $\mathcal{F}$. Then $C_j$ occupies some spots in $[z+ra, z+ra+(n-r)b-1]$, and no other $A$-car has checked the spots $C_j$ occupied in the parking process. In addition, let $C_k$ be the first $B$-car in $\mathcal{F}$. Then $j \leq r < k$ and $C_k$ parks before $C_j$ in $\mathcal{F}$.
\begin{enumerate}
\item[\textbf{Case 1:}] Assume in $\mathcal{F}$ there are some other $A$-cars parked between $C_k$ and $C_j$. Consider the rearrangement $\mathbf{c'}=(c_1,...,c_{j-1},c_k,c_{j+1},...,c_{k-1},c_{j},c_{k+1},...,c_n)$ obtained by exchanging the $j$-th and $k$-th terms in $\mathbf{c}$. Let the cars park according to the preference $\mathbf{c'}$. It is easy to see that all $A$-cars occupy the same spots as in $\mathcal{F}$ except that $C_j$ now parks on $a$ of the $b$ spots originally occupied by $C_k$ in $\mathcal{F}$, leaving $(b-a)$ of these spots unused. Hence after all the $A$-cars are parked, the first block of consecutive open spots has size $l_1b-a$, which is not a multiple of $b$. Thus it is impossible for the remaining $B$-cars to fill in and hence
$\mathbf{c'} \not \in \mathsf{PS}(\vec{y};z)$.
\item[\textbf{Case 2:}] There is no $A$-car parked between $C_k$ and $C_j$ in $\mathcal{F}$. Then $\mathcal{F}$ is of the form $A \cdots A B \cdots B A B \cdots B$, where there are $r-1$ $A$-cars before the first $B$-car $C_k$, and $c_j=z+(r-1)a+l_1b$. Let $\mathbf{c}''$ be the following rearrangement of $\mathbf{c}$: the first $r$ entries of $\mathbf{c}''$ are $c_1, \dots, c_{j-1}, c_k, c_{j+1}, \dots, c_r$, obtained from the first entries of $\mathbf{c}$ by replacing $c_j$ with $c_k$; the preferences for $B$-cars are $c_j, c_{r+1}, \dots, c_{k-1}, c_{k+1}, \dots, c_n$. Let the cars park according to $\mathbf{c}''$. Then the $A$-cars will occupy the spots $[z, z+ra-1]$, and the first $B$-car occupies spots
$[c_j, c_j+b-1]$. Now there are $c_j-1-(z-1+ra)= l_1b-a$ spots between the last $A$-car and the first $B$-car; these spots cannot be filled by other $B$-cars. Hence $\mathbf{c}'' \not \in \mathsf{PS}(\vec{y};z)$.
\end{enumerate}
In both cases we have a permutation of $\mathbf{c}$ that is not in $\mathsf{PS}(\vec{y};z)$, contradicting the assumption that
$\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$.
\end{proof}
\vanish{
Also, at least one of these open blocks intersects $[z,z+ra-1]$. Next, we attempt to park the $(n-r)$ $B$-cars. Let car $C_j$ be the last $A$-car to park on some of the spots in $[z+ra, z+ra+(n-r)b-1]$ such that no other $A$-car checked the spots $C_j$ occupies in the final parking configuration. Let car $C_k$ be the first $B$-car in the final parking configuration with at most $r-1$ $A$-cars before it. Then,$s\leq r-1$ and $c_{k} \leq z+(r-1)a$. Note that $r+1\leq k \leq n$.
Consider the rearrangement $\mathbf{c'}=(c_1,...,c_{j-1},c_k,c_{j+1},...,c_{k-1},c_{j},c_{k+1},...,c_n)$ obtained by exchanging the $j$-th and $k$-th preferences in $\mathbf{c}$. We attempt to park according to this new preference sequence. First let all $A$-cars park. Clearly, all $A$-cars occupy the same spots as before except that $C_j$ now parks on $a$ of the $b$ spots originally occupied by car $C_k$ in $\mathbf{c}$ leaving $(b-a)$ of these spots unused. Now, there are two possible scenarios that could have arisen in the final parking configuration for $\mathbf{c}$:
\begin{enumerate}
\item[\textbf{Case 1:}] \emph{There was some $A$-car parked between $C_k$ and $C_j$.} In this case, there exists some $l_1 \geq 1$ such that there was originally a block of $l_1b$ open spots between the first spot on which $C_k$ parked and the nearest $A$-car in front of it. With this new preference $\mathbf{c'}$, this first block now has $l_1b-a$ many spots and it is impossible for the remaining $B$-cars to park, since each of these cars has size $b$ and $a<b$.
\item[\textbf{Case 2:}] \emph{There were no cars parked between $C_k$ and $C_j$.} In this case, cars $C_k$ and $C_j$ were parked right next to each other. With $\mathbf{c'}$, all $A$-cars park on $[z,z+ra-1]$ and assuming no collisions with any other $B$-car, $C_k$ parks starting at spot $c_j$, where $c_j \geq z+ra+1$. All $B$-cars before $C_k$ had preference $\geq z+(r-1)a$ otherwise this would have contradicted the definition of $C_k$. Hence, either some $B$-car has preference in $[z+(r-1)a,z+ra]$ and cannot park due to not having enough open spots or all $B$-cars have preference $\geq z+ra+1$ hence $z+ra$ will be unoccupied. Both options are impossible.
\item[\textbf{Case 3:}] \emph{There were only $B$-cars parked between $C_k$ and $C_j$.} This means $C_j$ was the only $A$-car not parked on $[z,z+ra-1]$, else we are back to the first case. Thus, in $\mathbf{c'}$, after all $A$-cars are done parking, they are parked on $[z,z+ra-1]$. Next, the $B$-cars start to park. Assuming no collision, car $C_k$ (a $B$-car) parks starting at some spot $z+(r-1)a+(s+1)b$ where $s \geq 1$. There are $(s+1)b-a$ total spots between $C_j$ and $C_k$ by the time $C_k$ is parked. Consequently, no matter the preferences of the $B$-cars, there will be some $b-a$ unused spots in this region of the street. Again, this is impossible.
\end{enumerate}
}
\begin{lemma}\label{lemII}
If $(c_1,c_2,...,c_n) \in \mathsf{PS}_{inv}(\vec{y};z)$, then $c_i \leq z+(r-1)a$ for all $1\leq i \leq n$.
\end{lemma}
\begin{proof}
Suppose not, so that $\max \{c_i: i \in [n]\} > z+(r-1)a$. Take any permutation of $\mathbf{c}$ starting with this maximum; it again lies in $\mathsf{PS}_{inv}(\vec{y};z)$, but its first car is an $A$-car whose last occupied spot is at least $z+(r-1)a+a=z+ra$, contradicting the conclusion of Lemma \ref{lemI}.
\end{proof}
\begin{lemma}\label{lemIII}
For $\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$, let $c_{(1)}\leq c_{(2)} \leq \cdots \leq c_{(n)}$ be the order statistics of $\mathbf{c}$. Then, $c_{(i)} \leq z$ for each $1\leq i \leq n-r+1$ and $c_{(n-r+j)} \in \{1,\dots,z,z+a,z+2a,...,z+(j-1)a\}$ for each $ 2\leq j \leq r$.
\end{lemma}
\begin{proof}
By Lemmas \ref{lemI} and \ref{lemII}, if $\mathbf{c} \in \mathsf{PS}_{inv}(\vec{y};z)$, then any $r$-term subsequence of $\mathbf{c}$, say $(c_{i_1},c_{i_2},...,c_{i_r})$,
parks all $r$ $A$-cars in $[z,z+ra-1]$ and hence $c_i \leq z+(r-1)a$ for all $i=1,...,n$. Furthermore, if we consider the last $r$ terms of the order statistics of $\mathbf{c}$, this means $(c_{(n-r+1)}, c_{(n-r+2)},...,c_{(n)}) \in \mathsf{PS}_{inv}((a^r);z)$. By Theorem \ref{Inv}, we obtain $c_{(n-r+j)} \in \{1,\dots,z,z+a,z+2a,...,z+(j-1)a\}$ for each $ 1\leq j \leq r$. Finally, by the order statistics, $c_{(i)} \leq c_{(n-r+1)} \leq z$ for each $1\leq i \leq n-r$.
\end{proof}
Combining Lemmas \ref{lemI}, \ref{lemII} and \ref{lemIII}, we prove the following result.
\begin{theorem}\label{main}
Let $n \geq 2$ and $z, a, b, r$ be positive integers with
$a < b$ and $1 \leq r <n$. Assume $\vec{y}=(a^r, b^{n-r})$.
Let $\mathsf{PF}_n(\vec{u})$ be the set of $\vec{u}$-parking functions of length $n$ where $\vec{u}=(\underbrace{z,z,...,z}_{n-r+1},z+1,z+2,...,z+r-1)$.
Then, there is a bijection between the sets $\mathsf{PS}_{inv}(\vec{y};z)$ and $\mathsf{PF}_n(\vec{u})$.
\end{theorem}
\begin{proof}
First, we claim that any $\mathbf{c}$ satisfying the inequalities in Lemma \ref{lemIII} is in $\mathsf{PS}_{inv}(\vec{y};z)$. To see this, consider first the $A$-cars with preferences $(c_1, \dots, c_r)$. We have $c_i \in \{1,2,...,z,z+a,z+2a,...,z+(r-1)a\}$ for
all $1 \leq i \leq r$, and the order statistics of these $r$ terms are no more than $(z, z+a, \cdots, z+(r-1)a)$ (coordinate-wise).
By Theorem \ref{Inv}, $(c_1, \dots, c_r)$ is a parking sequence for $((a^r); z)$. Hence all $A$-cars must park on $[z,z+ra-1]$. Next, consider the $B$-cars. Since each $c_{i} \leq z+(r-1)a$ and the $A$-cars occupy $[z,z+ra-1]$ with no unoccupied spots, every $B$-car drives past this interval and parks immediately after the previously parked cars; thus the $B$-cars park, in the order they enter, right after the $A$-cars. In other words, the final parking configuration is $T, C_1',...,C_r', C_{r+1},...,C_n$, where $C_1',...,C_r'$ is some rearrangement of the $A$-cars. This proves the claim.
Now, by the above claim and Lemma \ref{lemIII}, we have shown that $\mathsf{PS}_{inv}(\vec{y};z)$ is exactly the set of all sequences $\mathbf{c}$ whose order statistics satisfy $c_{(i)} \leq z$ for each $1\leq i \leq n-r+1$ and $c_{(n-r+j)} \in \{1,\dots,z,z+a,z+2a,...,z+(j-1)a\}$ for each $ 2\leq j \leq r$.
Let $\vec{u}=(u_1, u_2, \dots, u_n)= (z,z,...,z,z+1,z+2,...,z+r-1)$.
Consider the map $\gamma_a:\mathsf{PS}_{inv}(\vec{y};z) \to \mathsf{PF}_n(\vec{u})$ defined as follows.
$$\gamma_a:(c_1,...,c_n) \mapsto (c_1',...,c_n')=\mathbf{c'}$$ where for all $1\leq j\leq n$
\[ c_j'=
\begin{cases}
c_j, \text{ if } c_j \leq z \\
z+s, \text{ if } c_j = z+sa.
\end{cases}
\]
The map $\gamma_a$ is well-defined since
the sequence $\mathbf{c'}$ has order statistics satisfying $1\leq c_{(i)}' \leq u_i$ for each $i=1,2,...,n$.
Thus $\mathbf{c}' \in \mathsf{PF}_n(\vec{u})$. Clearly the map $\gamma_a$ is invertible,
hence $\gamma_a$ is a bijection.
\end{proof}
\begin{corollary}
Let $\vec{y}$ and $\vec{u}$ be as in Theorem \ref{main}.
Then,
\begin{align*}
\# \mathsf{PS}_{inv}(\vec{y}; z)&=\# \mathsf{PF}_n(\vec{u}) \\
&= \sum_{j=0}^{r-1}\binom{n}{j}(r-j)\,r^{j-1}z^{n-j}.
\end{align*}
In particular, when $r=1$, $\# \mathsf{PS}_{inv}(a,b,b,...,b; z)= z^n.$
\end{corollary}
\begin{proof}
The result follows from Theorem \ref{main} and \cite[Theorem 3]{yan2}.
\end{proof}
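As an illustrative check, take $n=2$, $r=1$, $z=1$ and $a<b$, so $\vec{y}=(a,b)$. The formula above gives $z^{n}=1$ (only the $j=0$ term survives when $r=1$), and indeed $(1,1)$ is the only permutation-invariant sequence: for instance, $(1,1+a)$ parks the cars in the order $C_1,C_2$, but its rearrangement $(1+a,1)$ leaves gaps of $a$ and $b-a$ spots, neither of which can accommodate the car of length $b$.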
\subsection{Length vector \texorpdfstring{$\vec{y}=(a, 1, 1, \dots, 1)$}{} where \texorpdfstring{$a>1$}{}}
It is natural to ask what happens for the
length vector $\vec{y}=(a^r, b^{n-r})$ with $a>b$. Unlike in the preceding subsection, the number of sequences in $\mathsf{PS}_{inv}((a^r,b^{n-r});z)$ with $a > b$ depends on the values of $a$ and $b$.
Table \ref{tab:PPer} shows the initial values for $\mathsf{PS}_{inv}((a^2,b^{n-2});z)$ where $z=b=1$ and $a=2,3$.
\begin{table}[h]
\centering
\begin{tabular}{l ccccc}
\hline\hline
&\multicolumn{4}{c}{Some Initial Values}
\\ [0.5ex]
\hline
& Length $\vec{y}$ & $(2,2)$ & $(2,2,1)$ & $(2,2,1,1)$ & $(2,2,1,1,1)$ \\[-1ex]
\raisebox{1.5ex}{$a=2$:} \raisebox{1.5ex}
& $\#\mathsf{PS}_{inv}(\vec{y};1)$ & 3 & $7$ & 31 & 171 \\[1ex]
\hline
& Length $\vec{y}$ & $(3,3)$ & $(3,3,1)$ & $(3,3,1,1)$ & $(3,3,1,1,1)$ \\[-1ex]
\raisebox{1.5ex}{$a=3$:} \raisebox{1.5ex}
& $\#\mathsf{PS}_{inv}(\vec{y};1)$ &3 & $7$ & 13 & 51 \\[1ex]
\hline
\end{tabular}
\caption{$\# \mathsf{PS}_{inv}((a,a,1,...,1);z)$ with $a=2,3$.}
\label{tab:PPer}
\end{table}
These initial values do not correspond to any known sequences in the On-Line Encyclopedia of Integer Sequences (OEIS) \cite{OEIS}.
While we do not have a solution for the general case, in the following we present a small result for the special case where there is one $A$-car and $n-1$ cars each of size $b=1$.
\begin{prop}\label{lemV}
Suppose $z, a \in \mathbb{Z}_+$ with $a >1$. Let $\mathsf{PF}_n(\vec{u})$ be the set of $\vec{u}$-parking functions, where $\vec{u}=(z,z+1,...,z+n-1)$. Then
$$\mathsf{PS}_{inv}((a,1^{n-1});z)=\mathsf{PF}_n(\vec{u}).$$
\end{prop}
\begin{proof}
Let $\mathbf{c}=(c_1,c_2,...,c_n) \in \mathsf{PS}_{inv}(\vec{y};z)$ and $c_{(1)} \leq c_{(2)} \leq \cdots \leq c_{(n)}$ be its order statistics. If $c_{(i)} > z+i-1$ for some $i$, consider the preference sequence $\mathbf{c}'= (c_{(n)}, c_{(n-1)}, \dots, c_{(1)})$.
Under $\mathbf{c}'$ the first $n-i+1$ cars all prefer spots in $[z+i, z+a+n-2]$. There are only $a+n-i-1$ spots in this interval yet the total length of the first $n-i+1$ cars is $a+n-i$. It is impossible to park. Hence we must have $c_{(i)} \leq z+i-1$ for all $i$ and $\mathbf{c} \in \mathsf{PF}_n(\vec{u})$.
Conversely, given $\mathbf{x} \in \mathsf{PF}_n(\vec{u})$, we know $\mathsf{PF}_n(\vec{u})$ is permutation-invariant, thus we only need to show that $\mathbf{x} \in \mathsf{PS}(\vec{y};z)$ where $\vec{y}=(a,1^{n-1})$. First, $x_1\leq z+n-1$ hence $C_1$ parks. We claim that all the remaining cars can park with the preference sequence $\mathbf{x}$. Assume not, then there is a car failing to park and there are empty spots left unoccupied. Let $k$ be such an empty spot.
Note that all the remaining cars are of length 1. A car $C_i$ ($i \geq 2$) cannot park if and only if all the spots from $x_i$ to the end are occupied when $C_i$ enters.
Since $x_i \leq z+n-1$, it follows that $z \leq k \leq z+n-1$. From $\mathbf{x} \in \mathsf{PF}_n(\vec{u})$ and condition \eqref{eq2}, we have
\[
\#\{j:x_j \leq k\}\geq k-(z-1).
\]
That is, at least $k-(z-1)$ cars have preference at most $k$. Since spot $k$ remains empty during the whole process, each such car of length $1$ parks at the first empty spot at or after its preference, which is at most $k$; and if $x_1\leq k$, then $C_1$ parks starting at a spot which is at most $k$ as well. None of these cars can start at spot $k$ itself, since spot $k$ is left empty. Hence at least $k-(z-1)$ cars occupy distinct spots in $[z,k-1]$, an interval with only $k-z$ spots, a contradiction.
\end{proof}
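For instance, for $n=2$, $a=2$ and $z=1$ we get $\mathsf{PS}_{inv}((2,1);1)=\mathsf{PF}_2((1,2))=\{(1,1),(1,2),(2,1)\}$; one checks directly that each of these parks the two cars in either order, whereas $(2,2)$, say, already fails to park them in the given order.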
Again using the counting formulas for $\vec{u}$-parking functions, we have
\begin{corollary}
$\# \mathsf{PS}_{inv}((a,1^{n-1});z)= z(n+z)^{n-1}$.
\end{corollary}
\vanish{
\begin{proof}
Follows from Proposition \ref{lemV}.
\end{proof}
}
\section{Invariance for the Set of Car Lengths}
\subsection{Strong parking sequences}
In this section, we study another type of invariance. Given a fixed set of cars of various lengths and a one-way street whose length equals the sum of the car lengths plus the trailer length $z-1$, we consider the parking sequences for which all $n$ cars can park on the street irrespective of the order in which they enter the street.
Denote by $\mathfrak{S}_n$ the set of all permutations on $n$ letters. For a vector $\vec{y}$ and $\sigma \in \mathfrak{S}_n$, let $\sigma(\vec{y})=(y_{\sigma(1)},\dots,y_{\sigma(n)})$.
\begin{definition}
Let $\mathbf{c}=(c_1, \dots , c_n)$ and $\vec{y} = (y_1 , \dots , y_n)$. Then, $\mathbf{c}$ is a \emph{strong parking sequence for $(\vec{y};z)$} if and only if $$\mathbf{c} \in \bigcap_{\sigma \in \mathfrak{S}_n} \mathsf{PS}(\sigma(\vec{y});z).$$
We will denote the set of all strong parking sequences for $(\vec{y};z)$ by $\mathsf{SPS}\{\vec{y};z\}$, or equivalently,
$\mathsf{SPS}\{ \vec{y}_{inc};z\}$, where $\vec{y}_{inc}=(y_{(1)},\dots,y_{(n)})$ is the weakly increasing rearrangement of $\vec{y}$.
\end{definition}
\begin{example}
For the case $n=2$, let $a,b\in \mathbb{Z}_+$ with $a<b$.
It is easy to see that
\begin{eqnarray*}
\mathsf{PS}((a,b);z)
=\left([z] \times [z+a]\right) \cup \{(c_1, c_2): c_1=z+b,\, 1 \leq c_2 \leq z\}, \\
\mathsf{PS}((b,a);z) =\left([z] \times [z+b]\right) \cup \{(c_1,c_2):c_1=z+a,\, 1 \leq c_2 \leq z\}.
\end{eqnarray*}
This gives $$\mathsf{SPS}\{(a,b);z\}=\mathsf{PS}((a,b);z)\cap \mathsf{PS}((b,a);z) =[z] \times [z+a].$$
Note that $\mathsf{SPS}\{(a,b);z\}$ is exactly the set of all preferences $\mathbf{c} \in \mathsf{PS}((a,b);z)$ that yield the final parking configuration $T, C_1, C_2$.
\end{example}
\vanish{
Next, we will consider the case where $n\geq 3$.
As an example, the preference sequence $\mathbf{c} = (1,2,2,1) \in \mathsf{SPS}\{1,1,2,2\}$ since $\mathbf{c} \in \mathsf{PS}(\vec{y'})$ for all permutations $\sigma(\vec{y'})$ of $\vec{y}=(1,1,2,2)$. On the other hand, even though $\mathbf{b} = (1,4,1,1) \in \mathsf{PS}(1,1,2,2)$, $\mathbf{b} \not\in \mathsf{SPS}\{1,1,2,2\}$ since $\mathbf{b} \not\in \mathsf{PS}(2,1,2,1)$.}
By Ehrenborg and Happ's result \eqref{parktrailer1}, we know that if $\vec{y} = (k^n)$, then $$\# \mathsf{SPS}\{\vec{y};z\} =\# \mathsf{PS}(\vec{y};z)= z\cdot \prod_{i=1}^{n-1} (z+ik+n-i).$$
In the following we consider the case that $\vec{y}$ does not have constant entries.
\vanish{
Recall that the final parking configuration of a parking sequence is the final relative arrangement of all cars on the street after they are done parking.}
\begin{definition}
We say that $\mathbf{c} \in \mathsf{PS}(\vec{y};z)$ parks $\vec{y}$ in the \emph{standard order} if the final parking configuration of $\mathbf{c}$ is given by $T,C_1,C_2,...,C_n$.
\end{definition}
\vanish{
For example, in the case where $(\vec{y};z)=(2,3,1,2,1,4;3)$, the standard order is shown in Figure \ref{standardorder}.
\tikzstyle{vertex}=[rectangle,fill=black!15,minimum size=10pt,inner sep=0pt]
\begin{figure}
\caption{Standard order for $\vec{y}
\label{standardorder}
\end{figure}
}
The following lemma is easily proved by induction.
\begin{lemma}\label{simple}
Let $\mathbf{c}=(c_1,c_2,...,c_n) \in \mathsf{PS}(\vec{y};z)$. Then, $\mathbf{c}$ parks $\vec{y}$ in the standard order if and only if \[
c_k \leq z+y_1+\cdots +y_{k-1} \text{ for all } k \in [n].
\]
\end{lemma}
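For instance, for $(\vec{y};z)=((2,3,1);2)$, Lemma \ref{simple} says that $\mathbf{c}=(c_1,c_2,c_3)$ parks $\vec{y}$ in the standard order if and only if $c_1\leq 2$, $c_2\leq 4$ and $c_3\leq 7$.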
The following result characterizes strong parking sequences for any set of $n \geq 2$ cars with a given multiset of lengths $\{y_1,y_2,...,y_n\}$ and a trailer $T$ of length $z-1$.
\vanish{We denote $(y_{(1)},y_{(2)}, . . . , y_{(n)})$ by $\vec{y}_{inc}$ and let $C_i$ represent a car with length $y_{(i)}$ for each $i=1,...,n$.
}
\begin{theorem}\label{lastmain}
Let $n \geq 2$. Assume that $\vec{y}=(y_1,...,y_n)$ is not a constant sequence. Then $\mathbf{c}$ is a strong parking sequence for $\{\vec{y};z\}$ if and only if $\mathbf{c}$ parks $\vec{y}_{inc}=(y_{(1)},y_{(2)},\dots,y_{(n)})$ in the standard order.
\end{theorem}
\begin{proof}
Suppose $\mathbf{c}$ parks $\vec{y}_{inc}$ in the standard order. We need to check that $\mathbf{c}$ is a parking sequence for $(\sigma(\vec{y});z)$ for every $\sigma \in \mathfrak{S}_n$. This follows from Lemma \ref{simple} and the fact that $y_{(1)}+y_{(2)}+\cdots +y_{(i)} \leq y_{\sigma(1)}+y_{\sigma(2)}+\cdots+y_{\sigma(i)}$ for any $\sigma \in \mathfrak{S}_n$ and $i \in [n]$.
Conversely,
let $\mathbf{c}$ be a parking sequence for $(\vec{y}_{inc};z)$ that does not park $\vec{y}_{inc}$ in
the standard order.
We will construct a permutation $\sigma$ such that for a sequence of cars with length vector $\sigma(\vec{y}_{inc})$, $\mathbf{c} \not\in \mathsf{PS}(\sigma(\vec{y}_{inc});z)$.
In the following, let $C_i$ represent a car of length $y_{(i)}$, as listed in the table below. Let $\mathcal{F}$ be the final parking configuration of $\mathbf{c}$ when we park the cars $C_1, \dots, C_n$. In $\mathbf{c}$, let $k_1$ be the minimal index $k$ such that $c_k > z+y_{(1)}+y_{(2)}+\cdots+y_{(k-1)}$.
Then in $\mathcal{F}$ the trailer is followed by $C_1, \dots, C_{k_1-1}$ with no gap, but $C_{k_1}$ does not park immediately after $C_{k_1-1}$: when it parks, $C_{k_1}$ leaves empty spots behind it, which are filled later by other cars. Let $C_{t}$ be the car that is parked immediately before $C_{k_1}$ in $\mathcal{F}$. Since the spots occupied by $C_t$ were still empty when $C_{k_1}$ parked, clearly $t > k_1$.
\noindent \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Car & $C_{1}$ & $C_{2}$ & $\cdots$ & $C_{k_1}$ & $\cdots$ & $C_t$ & $\cdots$ & $C_{n-1}$ & $C_n$ \tabularnewline
Car Length & $y_{(1)}$ & $y_{(2)}$ & $\cdots$ & $y_{(k_1)}$ & $\cdots$ & $y_{(t)}$ & $\cdots$ & $y_{(n-1)}$ & $y_{(n)}$ \tabularnewline
\hline
\end{tabular}
\par\end{center}
\begin{enumerate}
\item[\textbf{Case 1}.] {Assume $y_{(k_1)} < y_{(t)}$}. Let $\sigma_1$ be the transposition $(k_1 \longleftrightarrow t)$.
For each $i \in [n]$ let $D_i$ represent a car of length $y_{(\sigma_1(i))}$, as shown below.
\noindent \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{1.2em}{$\sigma_1$} & Car & $D_1$ & $D_2$ & $\cdots$ & $D_{k_1}$ & $\cdots$ & $D_t$ & $\cdots$ & $D_{n-1}$ & $D_n$ \\
& Car Length & $y_{(1)}$ & $y_{(2)}$ & $\cdots$ & $y_{(t)}$ & $\cdots$ & $y_{(k_1)}$ & $\cdots$ & $y_{(n-1)}$ & $y_{(n)}$ \\
\hline
\end{tabular}
\par\end{center}
We park cars $D_1,...,D_n$ using the preference sequence $\mathbf{c}$.
If $\mathbf{c} \in \mathsf{PS}(\sigma_1(\vec{y});z)$, then $D_1,..., D_t$ are able to park and
\begin{enumerate}
\item $D_1, D_2,...,D_{k_1-1}$ have the same lengths and preferences as $C_1, C_2,...,C_{k_1-1}$. Hence they park in order right after the trailer with no gaps.
\item $D_{k_1}$ is longer than $C_{k_1}$ and occupies spots in $[c_{k_1}, c_{k_1}+y_{(t)}-1]$.
\item Any car $D_i$ for $i \in \{k_1+1,...,t-1\}$ has the same preference as $C_i$ so it parks either before $D_{k_1}$ and in the same spots as $C_i$ in $\mathcal{F}$, or parks after $D_{k_1}$.
\item $D_t$ takes the first $y_{(k_1)}$ spots of the ones occupied by $C_t$ in $\mathcal{F}$.
\end{enumerate}
After parking $D_1,...,D_t$, there are $y_{(t)}-y_{(k_1)}$ unused spots between cars $D_{t}$ and $D_{k_1}$. Any car trying to park after $D_t$ has length $\geq y_{(t)}>y_{(t)}-y_{(k_1)}$. So the spots between $D_t$ and $D_{k_1}$ cannot be filled and hence $\mathbf{c} \not\in \mathsf{PS}(\sigma_1(\vec{y}_{inc}), z)$.
\item[\textbf{Case 2}.] {Assume $y_{(k_1)} = y_{(t)}$}. Then, since $\vec{y}_{inc}$ is not a constant sequence, either $y_{(1)} < y_{(k_1)}$ or $y_{(t)} < y_{(n)}$.
\begin{itemize}
\item[\textit{Case 2a:}] Assume $y_{(t)} < y_{(n)}$. Let $\sigma_2$ be the transposition $(t \longleftrightarrow n)$ and let $E_i$ be a car of length $y_{(\sigma_2(i))}$ for each $i \in [n]$.
\noindent \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{1.2em}{$\sigma_2$} & Car & $E_1$ & $E_2$ & $\cdots$ & $E_{k_1}$ & $\cdots$ & $E_t$ & $\cdots$ & $E_{n-1}$ & $E_n$ \tabularnewline
& Car Length & $y_{(1)}$ & $y_{(2)}$ & $\cdots$ & $y_{(k_1)}$ & $\cdots$ & $y_{(n)}$ & $\cdots$ & $y_{(n-1)}$ & $y_{(t)}$ \tabularnewline
\hline
\end{tabular}
\end{center}
We park cars $E_1, \dots, E_n$ using the preference sequence $\mathbf{c}$. The cars $E_1,...,E_{t-1}$ take the same spots as $C_1, \dots, C_{t-1}$ in $\mathcal{F}$.
Next, car $E_t$ drives to the first unoccupied spot at or after its preference, namely the spot where $C_t$ parks in $\mathcal{F}$; the empty interval starting there is exactly $[c_{k_1}-y_{(t)}, c_{k_1}-1]$, since the spot $c_{k_1}$ is already occupied by $E_{k_1}$. But $E_t$ has length $y_{(n)} > y_{(t)}$, so it cannot fit there and fails to park.
Therefore, $\mathbf{c} \not\in\mathsf{PS}(\sigma_2(\vec{y}_{inc});z)$.
\item[\textit{Case 2b:}] If $y_{(k_1)} = ... = y_{(t)} = ... =y_{(n)}=b$, then we must have
$k_1 >1$ and $y_{(1)}< y_{(k_1)}$.
In the final configuration $\mathcal{F}$, at the time car $C_{k_1}$ is parked, the lengths of all the intervals of consecutive empty spots left are multiples of $b$.
Let $\sigma_3$ be the transposition $(1 \longleftrightarrow k_1)$ and let $F_i$ be a car of length $y_{(\sigma_3(i))}$ for each $i \in [n]$.
\noindent \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{1.2em}{$\sigma_3$} & Car & $F_1$ & $F_2$ & $\cdots$ & $F_{k_1}$ & $\cdots$ & $F_t$ & $\cdots$ & $F_{n-1}$ & $F_n$ \tabularnewline
& Car Length & $y_{(k_1)}$ & $y_{(2)}$ & $\cdots$ & $y_{(1)}$ & $\cdots$ & $y_{(t)}$ & $\cdots$ & $y_{(n-1)}$ & $y_{(n)}$ \tabularnewline
\hline
\end{tabular}
\par\end{center}
We park cars $F_1, \dots, F_n$ using the preference sequence $\mathbf{c}$.
The cars $F_1, \dots, F_{k_1-1}$ will take the spaces right after the trailer. Their total length, $y_{(k_1)}+y_{(2)}+\cdots+y_{(k_1-1)}$, is strictly less than the total length of $C_1, \dots, C_{k_1-1}$ and $C_t$, since
$y_{(1)}+y_{(2)}+ \cdots +y_{(k_1-1)}+y_{(t)} > y_{(2)} +\cdots + y_{(k_1-1)} + y_{(k_1)}$. Hence the spot $c_{k_1}$ is still empty when $F_{k_1}$ enters, and
$F_{k_1}$ parks at the spot starting at $c_{k_1}$, just as $C_{k_1}$ does in $\mathcal{F}$. But, as $y_{(1)} < y_{(k_1)}$, after $F_{k_1}$ is parked, the available space after $F_{k_1}$ is nonempty and its size is not a multiple of $b$, while all the remaining cars are of length $b$. Hence, it is not possible to park all of them and $\mathbf{c} \not \in \mathsf{PS}(\sigma_3(\vec{y}_{inc});z)$.
\end{itemize}
\end{enumerate}
\vanish{
\noindent \textbf{The case of $k_1 = 1$}.
Then, there are 2 possible cases:
\begin{enumerate}
\item[\textbf{Case 1$'$:}] Suppose $y_{(1)}<y_{(t)} \leq y_{(n)}$. Consider the length vector $\sigma_4(\vec{y}_{inc})$ given by:
\noindent \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{1.2em}{$\sigma_4$} & Car & $G_{1}$ & $G_{2}$ & $\cdots$ & $G_t$ & $\cdots$ & $G_{n-1}$ & $G_n$ \tabularnewline
& Car Length & $y_{(t)}$ & $y_{(2)}$ & $\cdots$ & $y_{(1)}$ & $\cdots$ & $y_{(n-1)}$ & $y_{(n)}$ \tabularnewline
\hline
\end{tabular}
\par\end{center}
which is the transposition $(1 \longleftrightarrow t)$.
If $\mathbf{c} \in \mathsf{PS}(\sigma_1(\vec{y});z)$, then $G_1,..., G_t$ can be parked and
\begin{enumerate}
\item $G_{1}$ is longer than $C_{1}$ and occupies spots in $[c_{1}, c_{1}+y_{(t)}-1]$
\item Any car $G_i$ for $i \in \{2,...,t-1\}$ has the same preference as $C_i$ so it parks either before $G_{1}$ and in the same spot as $C_i$ in $\mathcal{F}$, or parks after $G_{1}$.
\item $G_t$ takes the first $y_{1}$ spots of the ones occupied by $C_t$ in $\mathcal{F}$.
\end{enumerate}
After parking $G_1,...,G_t$, any car trying to park has length $\geq y_{(t)}$, hence cannot park between $G_t$ and $G_1$ since there are $y_{(t)}-y_{(1)}$ unused spots between them, where $y_{(t)}-y_{(1)}<y_{(t)}$.
\item[\textbf{Case 2$'$:}] Suppose $y_{(1)} = y_{(t)} < y_{(n)}$. Consider again the length vector $\sigma_2(\vec{y}_{inc})$. For the reasons earlier stated, $E_t$ cannot park in the spots right before $E_1$ in the preference $\mathbf{c}$. So, $\mathbf{c} \not\in \mathsf{PS}(\sigma_2(\vec{y}_{inc}), z)$.
\end{enumerate}
Thus, $\mathbf{c} \not\in \mathsf{PS}(\sigma_4(\vec{y}_{inc}), z)$ and we are done in the case where $k_1= 1$.
Hence the claim.}
\end{proof}
\noindent Combining Lemma~\ref{simple} and Theorem~\ref{lastmain}, we obtain the following counting formula.
\begin{corollary}\label{corolastmain}
Let $z \in \mathbb{Z}_+$ and $\vec{y}=(y_1, y_2, \dots, y_n) \in \mathbb{Z}_+^n$.
If $\vec{y} \neq (s^n)$ for any integer $s$, then
$$\# \mathsf{SPS} \{\vec{y};z\} = z\cdot
\prod_{i=1}^{n-1} (z+y_{(1)}+ \cdots + y_{(i)}),
$$
where $y_{(1)} \leq y_{(2)}\leq \cdots \leq y_{(n)}$ are the order statistics of $\vec{y}$.
\end{corollary}
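As a check, for $\vec{y}=(a,b)$ with $a<b$ the formula gives $z(z+a)$, which is indeed the cardinality of $\mathsf{SPS}\{(a,b);z\}=[z]\times[z+a]$ computed in the example above.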
\vanish{
\begin{proof}
By Theorem \ref{lastmain}, we need exactly all preference sequences that park $\vec{y}_{inc}$ in standard order. That is, car $C_1$ of size $y_{(1)}$ is parked on spots $[z,z+y_{(1)}-1]$ and car $C_i$ of size $y_{(i)}$ is parked on the street in spots $[z+y_{(1)}+\cdots+y_{(i-1)}, z+y_{(1)}+\cdots+y_{(i)}-1]$ (for each $2\leq i \leq n$).
Clearly, for car $C_{1}$, there are $(z-1)+1=z$ possible preferences. For car $C_{i}$, where $2 \leq i \leq n$,
the number of possible preferences is $$ (z-1)+y_{(1)}+y_{(2)}+\cdots+y_{(i-1)}+1=z+y_{(1)}+y_{(2)}+\cdots+y_{(i-1)}.$$
Hence, total number of possible preferences that yield the desired configuration is: $$z(z+y_{(1)})(z+y_{(1)}+y_{(2)})\cdots (z+y_{(1)}+y_{(2)}+\cdots +y_{(n-1)}).$$
\end{proof}
}
\subsection{Parking on a street with fixed length}
Suppose instead of fixing the set of cars, we fix the total street length. Let $\mathfrak{C}_{n}^{k}=\{\vec{y}=(n_1,n_2,...,n_k) \in \mathbb{Z}_+^k: n_1+n_2+ \cdots + n_k=n\}$ i.e. $\mathfrak{C}_{n}^k$ is the set of all compositions of $n$ into $k$ parts. We consider all possible sequences that can park any set of $k$ cars on the street of fixed length $z+n-1$.
More formally, we have the following definition.
\begin{definition}
Let $n, k, z \in \mathbb{Z}_+$ with $1 \leq k \leq n$.
Then, $\mathbf{c}=(c_1,...,c_k)$ is a \emph{$k$-strong parking sequence for $(n;z)$} if and only if $$\mathbf{c} \in \bigcap_{\vec{y} \in \mathfrak{C}_n^k} \mathsf{SPS}\{\vec{y};z\}.$$
\end{definition}
\noindent We will denote the set of all $k$-strong parking sequences for $(n;z)$ by $\mathsf{SPS}_k(n;z)$ (or $\mathsf{SPS}_k(n)$ when $z=1$).
For example, when $n=3$, we have the following sets:
\begin{align*}
\mathsf{SPS}_1(n)&=\{(1)\} \\
\mathsf{SPS}_2(n)&=\{(1,1), (1,2)\} \\
\mathsf{SPS}_3(n)&=\{(1,1,1),(1,1,2),(1,1,3),(1,2,1),(1,2,2),(1,2,3),(1,3,1),(1,3,2),\\
&(2,1,1),(2,1,2),(2,1,3),(2,2,1),(2,3,1),(3,1,1),(3,1,2),(3,2,1)\}
\end{align*}
We remark that in general, for any $n \in \mathbb{N}$, $\mathsf{SPS}_1(n)=\{(1)\}$ and $\mathsf{SPS}_n(n)=\mathsf{PF}_n$ where $\mathsf{PF}_n$ is the set of all parking functions of length $n$.
The following proposition helps characterize $\mathsf{SPS}_k(n;z)$ for any $1 \leq k \leq n$ and $z \in \mathbb{Z}_+$.
\begin{prop} \label{last}
Suppose $n, k, z \in \mathbb{Z}_+$ with $1\leq k\leq n$, and let $\vec{y}_0=(1^{k-1},n-k+1)$ be the composition of $n$ into $k$ parts with $n_1=n_2=\cdots=n_{k-1}=1$ and $n_k=n-k+1$. Then,
\begin{equation}
\mathsf{SPS}_k(n;z)=\mathsf{SPS}\{\overbrace{1,1,...,1}^{k-1},n-k+1;z\} = \bigcap_{\sigma \in \mathfrak{S}_k} \mathsf{PS}(\sigma(\vec{y}_0);z).
\end{equation}
In other words, for $k<n$, $\mathsf{SPS}_k(n;z)$ is the set of all sequences in $\mathsf{PS}(\vec{y}_0;z)$ that park $\vec{y}_0$ in the standard order.
\end{prop}
\begin{proof}
The statement
follows from Lemma \ref{simple}, Theorem \ref{lastmain}, and the fact that for any $\vec{y}=(n_1,...,n_k)$ with $n_1+\cdots+n_k=n$, we have for each $i \in [k-1]$, $$\overbrace{1+1+\cdots+1}^{i}=i \leq n_1+\cdots+n_i.$$
\end{proof}
\begin{corollary}
\[ \# \mathsf{SPS}_k(n;z)=
\begin{cases}
z^{(k)}, \text{ if } k\neq n \\
z(n+z)^{n-1}, \text{ if } k=n.
\end{cases}
\]
where $z^{(k)}=z(z+1)\cdots (z+k-1)$. In particular, when $z=1$,
$$ \# \mathsf{SPS}_k(n)=
\begin{cases}
k!, \text{ if } k\neq n \\
(n+1)^{n-1}, \text{ if } k=n.
\end{cases}
$$
\end{corollary}
\begin{proof}
The case $k<n$ follows from Proposition \ref{last} and Corollary \ref{corolastmain}; for $k=n$, Proposition \ref{last} gives $\mathsf{SPS}_n(n;z)=\mathsf{PS}((1^n);z)$, whose cardinality is $z(n+z)^{n-1}$ by the remark following Corollary \ref{Constant-count}.
\end{proof}
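For $n=3$ and $z=1$ this agrees with the lists displayed above: $\#\mathsf{SPS}_1(3)=1=1!$, $\#\mathsf{SPS}_2(3)=2=2!$ and $\#\mathsf{SPS}_3(3)=16=(3+1)^{2}$.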
\section{Closing Remarks}
In this paper, we studied increasing parking sequences and their connections with lattice paths. We also studied permutation-invariant parking sequences and length-invariant parking sequences. More precisely, we characterized the permutation-invariant parking sequences for some special families of length vectors. While it may not be easy to find a general formula for all cases, a natural next step would be to investigate other special cases of car lengths. Furthermore, in the study of parking functions one encounters quite a number of other mathematical structures, including trees, non-crossing partitions, hyperplane arrangements, and polytopes. It would be interesting to investigate whether other combinatorial structures are connected to invariant parking sequences. Recently in \cite{ejc}, parking sequences
were extended to the case in which one or more trailers are placed anywhere on the street alongside $n$ cars with length vector $\vec{y}=(1,1,...,1)$.
A natural generalization is to consider a similar scenario where $\vec{y}$ is any length vector.
\end{document}
\begin{document}
\title{Toric modular forms of higher weight}
\newif \ifdraft
\def \makeauthor{
\ifdraft
\draftauthor{Lev A. Borisov and Paul E. Gunnells}
\else
\author{Lev A. Borisov}
\address{Department of Mathematics\\
Columbia University\\
New York, NY 10027}
\email{[email protected]}
\author{Paul E. Gunnells}
\address{Department of Mathematics and Statistics\\
University of Massachusetts\\
Amherst, MA 01003}
\email{[email protected]}
\fi
}
\draftfalse
\makeauthor
\ifdraft
\date{\today}
\else
\date{March 16, 2002}
\fi
\subjclass{}
\keywords{Modular forms, theta functions, Manin symbols}
\begin{abstract}
In the papers \cite{vanish, toric} we used the geometry of
complete polyhedral fans to construct a subring ${\mathscr{T}} (l)$ of the modular
forms on $\Gamma _{1} (l)$, and showed that for weight two the
cuspidal part of ${\mathscr{T}} (l)$ coincides with the space of cusp forms of
analytic rank zero. In this paper we show that in weights greater
than two, the cuspidal part of ${\mathscr{T}} (l)$ coincides with the space of
all cusp forms.
\end{abstract}
\maketitle
\section{Introduction}\label{introduction}
\subsection{}
In \cite{vanish, toric} we used the geometry of complete polyhedral
fans to construct a subring ${\mathscr{T}}_* (l)$ of the modular forms on
$\Gamma _{1} (l)$. If $l\geq 5$, we showed that ${\mathscr{T}}_* (l)$ is
generated in weight one by certain Eisenstein series, and in
\cite[Theorem 4.11]{vanish} we showed that for weight two the cuspidal
part of ${\mathscr{T}}_* (l)$ coincides with the space of cusp forms of
analytic rank zero. The main result of this paper, Theorem
\ref{main}, is that in weights greater than two, the cuspidal part of
${\mathscr{T}}_* (l)$ coincides with the space of all cusp forms. In fact, we
prove a stronger statement: we define certain weight $k$ toric modular
forms $\widetilde s^{(k)}_{a/l}$, and show that any cusp form can be
written as a ${\mathbb C}$-linear combination of the forms $\widetilde s^{(k)}_{a/l}$ and
pairwise products of the form $\widetilde s^{(m)}_{a/l} \widetilde{s}_{b/l}^{(n)}$,
where $m+n=k$ and $m,n>0$.
The proof of Theorem \ref{main} is formally very similar to the proof
of \cite[Theorem 4.11]{vanish}. Let ${\mathscr{S}} (l)$ be the space of weight
$k$ holomorphic cusp forms on $\Gamma_{1} (l)$. We define a map $\rho
\colon {\mathscr{S}} (l)\rightarrow {\mathscr{S}} (l)$, and show that its image contains
all newforms for $k\geq 3$. We describe the map $\rho$ in terms of
Manin symbols, which allows us to write $\rho(f)$ in terms of products
of certain explicit toric Eisenstein series. A key role is played by
certain weight $k$ Manin symbols $\{R_{(m,n)} | m,n\in {\mathbb Z}\}$ that
satisfy relations similar to weight two Manin symbols.
\subsection{}
Here is an outline of the paper. In Section \ref{s1} we review
results about toric modular forms, and in Section \ref{s2} review results
about Manin symbols and introduce the symbols $R_{(m,n)}$. In Section
\ref{s3} we describe \emph{$(\bmod\, l)$-polynomials}, a technical tool
we use later to manipulate $q$-expansions. We prove the main result
along with some corollaries in Section \ref{s4}.
The remaining sections contain complements to the main result and
results proved in \cite{vanish}. In Section \ref{s.toricmap} we use
products of Eisenstein series of higher weight to define a map $\mu$
from weight $k$ Manin symbols to a certain quotient of the space of
weight $k$ modular forms. This map is analogous to the map $\mu$ in
\cite[Definition 3.11]{vanish}, but some complications do occur in the
higher case. Finally, in Section \ref{s.Hecke} we show that the map
from symbols to forms is compatible with the action of the Hecke
operators.
Throughout the paper we keep our arguments as elementary as
possible. In particular, we avoid using results of \cite{toric} that
are based on the Hirzebruch-Riemann-Roch theorem for toric
varieties. While this complicates the proofs a bit, it makes the paper
accessible to readers with no knowledge of toric varieties. We outline
an alternative approach to the results using toric geometry in Remarks
\ref{toric1} and \ref{toric2}.
\section{Toric modular forms}\label{s1}
\subsection{}
We briefly review the definition of toric forms; more details can be
found in \cite{vanish, toric}. Let $k$ and $l$ be positive integers,
and suppose $l\geq 5$. As usual let $q={\mathrm{e}}^{2\pi {\mathrm{i}} \tau}$, where
$\tau$ is in the upper halfplane ${\mathfrak{H}}$. A holomorphic modular form
of weight $k$ on the group $\Gamma _{1} (l)$ is called \emph{toric} if
it can be expressed as a homogeneous polynomial of degree $k$ in the
functions $\widetilde s_{a/l}(q)$ given by
$$
\widetilde s_{a/l}(q):=(\frac12-\frac al)+\sum_{n>0}q^n\sum_{d|n}
(\delta_d^{a\bmod\, l}-\delta_d^{-a\bmod\, l}),\quad a=1,\dots ,l-1.
$$
Here $\delta_d^{a\bmod\, l}$ is $1$ if $a=d\bmod\, l$ and is $0$ otherwise.
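For instance, for $l=5$ and $a=1$ one computes directly from this formula that
$$
\widetilde s_{1/5}(q)=\tfrac{3}{10}+q+q^2+q^3+q^5+2q^6+\cdots,
$$
the coefficient of $q^4$ being $0$ because the divisor $4\equiv -1 \bmod 5$ cancels the divisor $1$.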
The space of toric forms ${\mathscr{T}}_{*}(l)$ of all weights is thus
generated as a graded ring by certain weight one Eisenstein series.
By results of \cite{vanish, toric}, ${\mathscr{T}}_{*}(l)$ is known to be stable
under the action of the Hecke operators, and under Atkin-Lehner
lifting.
\begin{proposition}\label{unnamed}
The ring ${\mathscr{T}}_{*} (l)$ contains the modular forms
\begin{equation}\label{stilde}
\widetilde s_{a/l}^{(k)} := C + \sum_{n>0}q^n\sum_{d|n}d^{k-1}
(\delta_d^{a\bmod\, l}+(-1)^k\delta_d^{-a\bmod\, l}),
\end{equation}
where
$k\geq 2$ and $a=0,\dots ,l$,
except for $(a,k) = (0,2)$. Here $C$ is a constant determined uniquely
by the modularity of $\widetilde s_{a/l}^{(k)}$.
\end{proposition}
\begin{proof}
Let $\vartheta (z,\tau)$ be Jacobi's theta function \cite{Chandra}. Then
it is easy to construct a given $\widetilde s_{a/l}^{(k)}$
as a linear combination of the modular forms
\begin{align*}
s_{a/l}^{(k)}(q)&:= (2\pi{\mathrm{i}})^{-k}
\left(\frac{\partial^k}{\partial z^k}\right)_{z=0}{\log}
\left(
{
\frac{z\vartheta(z+a/l,\tau)\vartheta'(0,\tau)}
{\vartheta(z,\tau)\vartheta(a/l,\tau)}
}
\right)\\&=
C+\sum_{n>0}q^n\sum_{d|n}d^{k-1}({\mathrm{e}}^{2\pi{\mathrm{i}} da/l} + (-1)^{k}{\mathrm{e}}^{-2\pi{\mathrm{i}}
da/l} - 2\delta_k^{0\bmod\, 2})
\end{align*}
of \cite[Section 4.4]{toric}, and the standard level one Eisenstein series
\begin{equation}\label{eis.series}
E_{k} := C + \sum _{n>0}q^n\sum_{d|n}d^{k-1}
\end{equation}
for even $k\geq 4$. The forms $s_{a/l}^{(k)}$ and $E_{k}$ for $k>2$
are toric, see \cite[Theorem 4.11 and Remark 4.13]{toric}. We use
here and in what follows the convention denoting constant terms whose
exact value is irrelevant by $C$.
\end{proof}
\subsection{}
We must exclude $(a,k) = (0,2)$ from the statement since $E_2$ is not
modular. However, it will be convenient to allow $\widetilde
s_{0/l}^{(2)}$ in later arguments, which merely amounts to working in
the larger ring ${\mathscr{T}}_*(l)[E_2]$. In fact, since we will never multiply
more than two of the $\widetilde s$'s together, we will be working in
${\mathscr{T}}_*(l)+E_2{\mathscr{T}}_*(l)$. We call elements of this ring \emph{toric
quasimodular forms} (cf. \cite{Zagier.elliptic}).
\begin{remark}\label{lessthanfive}
The statement of Proposition \ref{unnamed} remains true for $l=2,3,4$, but the
definition of toric modular forms there is a bit more complicated. However,
it turns out that for these levels all modular forms are toric, as defined
in \cite{toric}. From now on we will call any polynomial in
$\widetilde s^{(k)}_{a/l}$ a toric quasimodular form of level $l$.
Then the rest of the paper works for an arbitrary level $l\geq 1$.
\end{remark}
\begin{definition}\label{pairs}
By a slight abuse of notation
we say that a weight $k$ quasimodular form $f$ can be written as a
\emph{linear combination of pairs} if $f$ can be written as a
${\mathbb C}$-linear combination of the forms $\widetilde s_{a/l}^{(k)}$ and
$\widetilde s_{a/l}^{(m)} \widetilde s_{b/l}^{(n)}$
where $m,n>0$, $m+n=k$, and $a,b=0,\dotsc ,l-1$.
\end{definition}
\begin{proposition}\label{derivs.are.there}
The space of toric quasimodular forms contains the derivatives
$\partial_{\tau} \widetilde s^{(k)}_{a/l}$. Moreover, each $\partial_{\tau}
\widetilde s^{(k)}_{a/l}$ can be written as a linear combination of pairs.
\end{proposition}
\begin{proof}
The span of $\widetilde s^{(k)}$ is the same as the span of $s^{(k)}$ and $E_{k}$,
so we will instead consider their derivatives.
The $q$-expansion of $\partial_{\tau} s^{(k)}_{a/l}$ is
\begin{equation}\label{deriv.q.exp}
2\pi {\mathrm{i}} \sum_{n>0}q^n\sum_{d|n}nd^{k-1}({\mathrm{e}}^{2\pi{\mathrm{i}} da/l} + (-1)^{k}{\mathrm{e}}^{-2\pi{\mathrm{i}}
da/l} - 2\delta_k^{0\bmod\, 2}).
\end{equation}
Now take (cf. \cite[proof of Prop. 3.8]{vanish})
\begin{equation}\label{identity}
s_{\alpha}^{(2)}+s_{\alpha}^2 =
\frac 16 - 2\sum_{n>0}q^n\sum_{d|n}\frac nd({\mathrm{e}}^{2\pi{\mathrm{i}} \alpha d}+
{\mathrm{e}}^{-2\pi{\mathrm{i}} \alpha d}),
\end{equation}
and differentiate it $k$ times with respect to $\alpha$. Let
$F_{\alpha} (q)$ be the resulting right hand side. It is easy to
express \eqref{deriv.q.exp} as a linear combination of $\{F_{a/l} (q)
\}$ and the derivatives $\partial_{\tau} E_{k}$, so it suffices to
show that these forms can be expressed as a linear combination of pairs.
We consider first the derivatives
of $s_{\alpha}^{(r)}$ with respect to
$\alpha$. Putting $E_{\rm odd} = 0$, we have
$$
(2\pi{\mathrm{i}})^{-1}\frac{\partial}{\partial \alpha} s_{\alpha}^{(r)}=
s_{\alpha}^{(r+1)} -
(2\pi{\mathrm{i}})^{-r-1}
\left(\frac{\partial^{r+1}}{\partial z^{r+1}}\right)_{z=0}\log
\left(
{
\frac{z\vartheta'(0,\tau)}{\vartheta(z,\tau)}
}
\right)
=s_{\alpha}^{(r+1)} - 2E_{r+1},
$$
and the statement follows from the fact that $E_r$ and $s^{(r)}$
can be written as linear combinations of $\widetilde s^{(r)}$.
For $\partial_{\tau} E_{k}$ we argue as follows. Expand both
sides of the equation \eqref{identity} in a Laurent series in
$\alpha$ around $\alpha=0$.
The coefficient at $\alpha^k$ on the right hand side
of \eqref{identity} is equal to $\partial_{\tau}E_k$, up to a
multiplicative and an additive constant. To expand the left hand side
notice that, up to the terms constant in $q$,
the Laurent coefficient of $s_{\alpha}$ at $\alpha^k$ is
a multiple of $E_{k+1}$, which follows from expanding
${\mathrm{e}}^{2\pi{\mathrm{i}} d\alpha}$ in the definition of $s_{\alpha}$.
It is easy to see that $s_{\alpha}$ has a simple pole at $\alpha=0$
with a constant residue, so the coefficient of the Laurent
expansion of $s_{\alpha}^2$ at $\alpha^k$ is a linear combination
of $E_{r}E_{k+2-r}$, $E_{k+2}$ and some $E_r$ for $r<k+2$.
Then the modular transformation properties of $\partial_\tau E_k$
finish the argument.
\end{proof}
Next we describe the action of $\Gamma_0(l)/\Gamma_1(l)$ on $\widetilde s$.
\begin{proposition}\label{gamma0action}
Let $\gamma\in \Gamma_0(l)$ have diagonal entries
$p^{-1}$ and $p$ $\bmod\, l$ respectively. Then
$$\gamma \widetilde s_{a/l}^{(k)}= \widetilde s_{p^{-1}a/l}^{(k)}.$$
\end{proposition}
\begin{proof}
The transformation properties of $\vartheta$ (cf. \cite[Prop. 4.3]{toric}) imply
$$\gamma s_{a/l}^{(k)}= s_{pa/l}^{(k)}.$$
One can then use linear combinations of the forms in the proof of
Proposition \ref{unnamed} to determine the action of $\gamma$ on $\widetilde s$.
We leave the details to the reader.
\end{proof}
\section{Manin symbols}\label{s2}
\subsection{}\label{relationsection}
This section closely follows \cite{Merel}, to which the reader is
referred for more details. Let $l>1$ be an integer, and let
$E_{l}\subset ({\mathbb Z} /l{\mathbb Z} )^{2}$ be the subset of pairs $(u,v)$ such
that ${\mathbb Z} u + {\mathbb Z} v = {\mathbb Z} /l{\mathbb Z} $. The space of \emph{Manin symbols}
of weight $k$ and level $l$ is the ${\mathbb C}$-vector space generated by the
symbols $x^ry^s(u,v)$, where $r$ and $s$ are nonnegative integers
summing to $k-2$ and $(u,v)\in E_l$, modulo the following relations:
\begin{enumerate}
\item $x^ry^s(u,v) + (-1)^rx^sy^r(v,-u) = 0$.
\item $x^ry^s(u,v) + (-1)^ry^r(x-y)^s(v,-u-v) +
(-1)^s(y-x)^rx^s(-u-v,u) = 0$.
\end{enumerate}
We denote the space of Manin symbols by $M$ (we omit the level and
weight from the notation since it will be clear from the context).
Two subspaces of $M$ will play an important role in what follows. Let
$\iota \colon M \rightarrow M$ be the involution
$x^ry^s(u,v)\mapsto (-1)^rx^ry^s(-u,v)$.
\begin{definition}\label{plusnminus}
The space of \emph{plus symbols} $M_{+}\subset M$ is the subspace
consisting of symbols $w$ satisfying $\iota (w) = w$. Similarly,
the space of \emph{minus symbols} $M_{-}\subset M$ is the subspace
consisting of symbols $w$ satisfying $\iota (w) = -w$.
\end{definition}
We have \emph{symmetrization maps} $(\phantom{a},\phantom{a})_{\pm }\colon M
\rightarrow M_{\pm }$ given by $x^ry^s(u,v)_{\pm } :=
(x^ry^s(u,v)\pm (-1)^rx^ry^s(-u,v))/2$.
\subsection{}\label{degenerate}
Let $M^{*} = \Hom_{{\mathbb C}} (M,{\mathbb C})$ be the dual of the space of Manin
symbols. For any $\varphi \in M^{*}$, we define $\varphi $ on ``degenerate''
symbols $x^ry^s(u,v)$ with
${\mathbb Z} u+{\mathbb Z} v \not = {\mathbb Z} /l{\mathbb Z} $ by setting $\varphi(x^ry^s(u,v)) = 0$.
This convention is somewhat artificial but turns out to be quite
useful.
\subsection{}\label{duality}
There exists a natural pairing between the spaces of Manin symbols
and the spaces of cusp forms, see \cite{Merel}.
Let ${\mathscr{M}}(l)$ be the ${\mathbb C}$-vector space of weight $k$ holomorphic
modular forms on $\Gamma_{1} (l)$, and let ${\mathscr{S}} (l)\subset {\mathscr{M}} (l)$
be the subspace of cusp forms. For $x^ry^s(u,v)\in M$ and $f \in {\mathscr{S}} (l)$ let
$g=\left(
\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix}
\right)$ be an element of $\Gamma(1)$ with $(c,d)=(u,v)\bmod\, l$. Then
the integral
\[
\int_{0}^{{\mathrm{i}}\infty} (c\tau+d)^{-k}f(\frac{a\tau+b}{c\tau+d})\tau^r \,d\tau
\]
does not depend on the choice of $g$. Moreover, it is compatible
with the relations on modular symbols, and we obtain a pairing
\begin{align*}
M\times {\mathscr{S}} (l)&\longrightarrow {\mathbb C},\\
(x^ry^s(u,v), f)&\longmapsto \langle f, x^ry^s(u,v)\rangle.
\end{align*}
In general this pairing is degenerate, but one can identify a subspace
of cuspidal Manin symbols $S$ such that the pairing is non-degenerate
on $S_{\pm}\times {\mathscr{S}}(l)$. We will not use this fact, but details
can be found in \cite{Merel}.
\subsection{}\label{hecke.action}
Next we present Merel's description of the Hecke action on the Manin
symbols. Let $n\geq 1$ be an integer, and let $T_{n}$ be the
associated Hecke operator. We denote the action of $T_{n}$ on a
modular form $f$ by $\stroke{f}{T_{n}}$. For any positive integer
$n$, we define a set $H (n)\subset {\mathbb Z}^{4}$ by
\begin{equation}\label{merel.set.def}
H (n) = \{(a,b,c,d) \mid a>b\geq 0, \quad
d>c\geq 0,\quad
ad-bc = n\}.
\end{equation}
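For instance, $H (2)=\{(1,0,0,2),\,(1,0,1,2),\,(2,0,0,1),\,(2,1,0,1)\}$.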
\begin{theorem}\label{HeckeisHecke}
\cite[Theorem 2 and Proposition 10]{Merel} For any positive integer
$n$ coprime to $l$, define an operator $T'_{n} \colon M\rightarrow M$
by
\begin{equation}\label{heckeact}
T'_{n} \,x^ry^s(u,v) = \sum _{H (n)}(ax+by)^r(cx+dy)^s(au+cv, bu+dv).
\end{equation}
If $n$ is not coprime to $l$, then we define $T'_{n}$ by
\eqref{heckeact} but omit terms with
$\gcd(l,au+cv,bu+dv) > 1$.
Then $T'_{n}$ is the adjoint of $T_{n}$ with respect to the pairing
$\langle \phantom{a}, \phantom{a}\rangle$, that is
\[
\langle \stroke{f}{T_{n}},x^ry^s (u,v) \rangle= \langle f, T'_{n}\,
x^ry^s(u,v)\rangle.
\]
\end{theorem}
We will abuse notation in what follows and write $T_{n}$ for $T'_{n}$.
It is also proved in \cite{Merel} that this Hecke action is
compatible with the symmetrization maps:
\begin{proposition}\label{HeckeonSymbols}
We have
\[
T_{n} (x^ry^s(u,v)_{\pm }) = (T_{n}\,x^ry^s (u,v))_{\pm }.
\]
\end{proposition}
\subsection{}\label{rkls}
To conclude this section
we associate to every pair of integers
$(m,n)$ a certain Manin symbol $R_{(m,n)}$. These symbols satisfy
relations analogous to those satisfied by the weight two symbols.
\begin{definition}\label{Rmn}
Let $m, n\in{\mathbb Z}$. If $\gcd(m,n,l)=1$ then we
let $R_{(m,n)}=(mx+ny)^{k-2}(m,n)$. If $\gcd(m,n,l)>1$ then we put $R_{(m,n)}=0$.
\end{definition}
We remark that even though $R_{(m,n)}$ is built out of the Manin
symbol $(m,n)$, its value depends on more than just the residues of $m$
and $n$ modulo $l$. It is straightforward to see that the symbols
$R_{(m,n)}$ obey the following relations:
\begin{enumerate}
\item $R_{(m,n)}+R_{(-n,m)}=0$.
\item $R_{(m,n)}+R_{(-m-n,m)}+R_{(n,-m-n)} = 0$.
\item $\bigl(R_{(m,n)}\bigr)_{\pm}=(R_{(m,n)}\pm R_{(-m,n)})/2$.
\end{enumerate}
We denote the images of $R_{(m,n)}$ under the symmetrization maps by
$R_{(m,n)}^\pm$.
\section{(Mod $l$)-polynomials}\label{s3}
\subsection{} To simplify
later manipulations with $q$-expansions, we now introduce certain
functions. Fix a positive integer $l$.
\begin{definition}
A function $h\colon {\mathbb Z}\to {\mathbb C}$ is called a \emph{$(\bmod\, l)$-polynomial}
if its restriction to each coset $l{\mathbb Z} + k$ is a polynomial.
\end{definition}
One can think of a $(\bmod\, l)$-polynomial as a set of $l$
ordinary polynomials, one for each residue modulo $l$. For example, the
function that equals $m^2+m$ when $m$ is even and $m^3$ when $m$ is
odd is a $(\bmod\, 2)$-polynomial.
The set of all $(\bmod\, l)$-polynomials forms a ring. One can
analogously define $(\bmod\, l)$-polynomials $h(m,n)$ of two variables by
requiring polynomiality on each pair of cosets $(l{\mathbb Z}+k_1,l{\mathbb Z}+k_2)$.
\subsection{}
We say that a $(\bmod\, l)$-polynomial $h$ is \emph{odd} if $h (-m) = -h
(m)$. Note that the individual polynomials constituting an odd $(\bmod\,
l)$-polynomial aren't independent, since the polynomials sitting over
the residues $a \bmod\, l$ and $-a \bmod\, l$ are related. The space of odd $(\bmod\,
l)$-polynomials will be of particular importance
to us, due to the following proposition.
\begin{proposition}\label{easyremark}
Let $h$ be an odd $(\bmod\, l)$-polynomial. Then up to a constant, the function
$$
f(q) = \sum_{D>0}q^D\sum_{d|D}h(d)
$$
is a linear combination of $\{\widetilde s_{a/l}^{(k)} \mid k\geq 1,\, a=1,\dotsc, l-1\}$.
Conversely, every linear combination of $\widetilde s_{a/l}^{(k)}$ has the above
form, up to an additive constant.
\end{proposition}
\begin{proof}
Let $r_{a,k} (m)$ be the $(\bmod\, l)$-polynomial given by
$$
r_{a,k}(m)=m^k\delta_m^{a\bmod\, l}-(-1)^{k}m^k\delta_{m}^{-a\bmod\, l}.
$$
Then any odd $(\bmod\, l)$-polynomial is a linear combination of the
$r_{a,k}$'s. The result follows easily
from the definition of $\widetilde s_{a/l}^{(k)}$
in \eqref{stilde}.
\end{proof}
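For instance, taking $h=r_{a,k}$ in the proposition recovers, up to an additive constant, the form $\widetilde s^{(k+1)}_{a/l}$ itself, as is immediate from \eqref{stilde}.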
The following result allows one to construct odd one-variable $(\bmod\, l)$-polynomials
from even two-variable $(\bmod\, l)$-polynomials.
\begin{proposition}\label{conesum}
Let $G\colon{\mathbb Z}^2\to{\mathbb C}$ be a two-variable $(\bmod\, l)$-polynomial such that
$G(-n_1,-n_2)=G(n_1,n_2)$, and let $N$ be a positive integer. Then
$$
f(d):=\sum_{0<n<Nd}G(n,d)+\frac12 G(0,d)+\frac12 G(Nd,d)
$$
is an odd $(\bmod\, l)$-polynomial.
\end{proposition}
\begin{proof}
First we note that the space of all even two-variable $(\bmod\,
l)$-polynomials is spanned by the family of functions
$$
G(n_1,n_2) = n_1^rn_2^s {\mathrm{e}}^{{2\pi{\mathrm{i}}}(k_1n_1 + k_2 n_2)/l}
+(-n_1)^r(-n_2)^s{\mathrm{e}}^{-{2\pi{\mathrm{i}}}(k_1n_1 + k_2 n_2)/l}
$$
for nonnegative integers $r$ and $s$ and integers $k_1$ and $k_2$.
We use $\alpha_{i} = 2\pi {\mathrm{i}} k_{i}/l$ to write such a $G$ as
$$
G(n_1,n_2)=n_1^rn_2^s({\mathrm{e}}^{\alpha_1n_1+\alpha_2 n_2}
+(-1)^{r+s}{\mathrm{e}}^{-\alpha_1n_1-\alpha_2 n_2}).
$$
Now it suffices to treat the case $r=s=0$, since all others can then be
handled by partial differentiation with respect to $\alpha_1,
\alpha_2$. For $r=s=0$ and ${\mathrm{e}}^{\alpha_1}\neq 1$, an explicit
calculation gives
$$
f(d) = \sum_{0<n<Nd}({\mathrm{e}}^{\alpha_1n+\alpha_2d}+{\mathrm{e}}^{-\alpha_1n-\alpha_2d})
+\frac 12 ({\mathrm{e}}^{\alpha_2 d}+{\mathrm{e}}^{-\alpha_2d})
+\frac 12 ({\mathrm{e}}^{(\alpha_1N+\alpha_2) d}+{\mathrm{e}}^{-(\alpha_1N+\alpha_2) d})
$$
$$
=\frac{(1+{\mathrm{e}}^{\alpha_1})}{2(1-{\mathrm{e}}^{\alpha_1})}
\Big(
{\mathrm{e}}^{\alpha_2 d}-{\mathrm{e}}^{(\alpha_1 N+\alpha_2)d}
-{\mathrm{e}}^{-\alpha_2 d}+{\mathrm{e}}^{-(\alpha_1 N+\alpha_2)d}
\Big).
$$
This is clearly an odd function of $d$, and after setting $\alpha_{i} = 2\pi {\mathrm{i}} k_{i}/l$ it is obviously a $(\bmod\, l)$-polynomial in $d$. The case ${\mathrm{e}}^{\alpha_1}=1$ follows by analytic continuation.
\end{proof}
\subsection{}
The following technical statement will be needed for the proof
of Lemma \ref{f2lemma}.
\begin{proposition}\label{oddprop}
Fix a weight $k$ cusp form $f$ on $\Gamma_1(l)$,
and define a function $h\colon {\mathbb Z}_{>0}\to {\mathbb C}$ by
$$h(m):=\langle f, R_{(m,0)}^+\rangle
+2\langle f, \sum_{m>i>0} R_{(m,m-i)}^+\rangle.$$
Then $h$ extends to an odd $(\bmod\, l)$-polynomial.
\end{proposition}
\begin{proof}
We use the symmetries of $R^+$ to rewrite $h(m)$ as
$$h (m)=\langle f, \sum_{-m<i<m} R_{(m,i)}^+\rangle=
\sum_{0<i<2m}\langle f, R_{(m,i-m)}^+\rangle
+\frac 12\langle f, R_{(m,-m)}^+\rangle
+\frac 12 \langle f, R_{(m,m)}^+\rangle.$$
Then Proposition \ref{conesum} finishes the proof.
\end{proof}
\section{Main theorem}\label{s4}
\subsection{}\label{rho}
Fix a weight $k\geq 3$ and a level $l$. In this section we define
an endomorphism of the space ${\mathscr{S}}(l)$ of cusp forms of weight $k$
with respect to $\Gamma_1(l)$, and prove that its image contains all
newforms. This definition is a generalization of
\cite[Definition 4.2]{vanish} to $k>2$.
\begin{definition}
Let
$\rho\colon {\mathscr{S}}(l)\to {\mathscr{S}}(l)$ be the linear map
$$\rho(f)=\sum_{n=1}^{\infty}\left(\int_{0}^{{\mathrm{i}}\infty}(\stroke{f}{
T_n})(s)ds\right) q^n.$$
\end{definition}
\begin{proposition}\label{cuspform}
The form $\rho(f)$ is a cusp form with nebentypus equal to that of $f$.
\end{proposition}
\begin{proof}
The statement follows from \cite[Theorem 6]{Merel}; see also
\cite[Proposition 4.3]{vanish}.
\end{proof}
The map $\rho$ was used in \cite{vanish} because its image contains
all newforms of weight two whose $L$-functions don't vanish at the
center of the critical strip. The analogous
statement for higher weights is the following:
\begin{proposition}\label{nothingvanishes}
The image of $\rho$ contains all newforms.
\end{proposition}
\begin{proof}
One needs to show that for any newform $f$
$$
\int_{0}^{{\mathrm{i}}\infty}f(\tau)\,d\tau\neq 0,
$$
which is equivalent to $L(f,1)\neq 0$. Without loss of generality we
may assume that $f$ is a Hecke eigenform. If $k>3$ then $L (f,1)$ is a special
value outside the critical strip, and so cannot vanish by absolute
convergence of the Euler product. If $k=3$ then $L (f,1)$ is a
special value on the boundary of the critical strip. By \cite[Theorem
1.3]{jacquet} this special value cannot vanish.
\end{proof}
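\begin{remark}
For the reader's convenience we recall the standard computation behind the equivalence used in the proof above. With the normalization $L(f,s)=\sum_{n\geq 1}a_nn^{-s}$ for $f=\sum_{n\geq 1}a_nq^n$ (a different normalization only changes the nonzero constant below), substituting $\tau={\mathrm{i}}y$ gives
$$
\int_{0}^{{\mathrm{i}}\infty}f(\tau)\,d\tau
={\mathrm{i}}\int_{0}^{\infty}f({\mathrm{i}}y)\,dy
=\frac{{\mathrm{i}}}{2\pi}L(f,1),
$$
so the nonvanishing of the period integral is indeed equivalent to $L(f,1)\neq 0$.
\end{remark}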
\subsection{}\label{keylemma}
Fix a cusp form $f$. By Theorem \ref{HeckeisHecke} we can express $\rho(f)$ in terms of modular
symbols as
$$\rho(f)=\sum_{n=1}^{\infty}q^n\langle f, T_n y^{k-2}(0,1)\rangle
=\sum_{n=1}^{+\infty}q^n\langle f, T_n R_{(0,1)}^+\rangle
=\sum_{n=1}^{\infty}q^n\langle f, \sum _{H (n)}R_{(c,d)}^+\rangle.$$
Our goal is to show that $\rho (f)\in {\mathscr{T}}_* (l)$ and that it is a linear
combination of pairs. To this end,
we consider the following linear combination of toric
quasimodular forms:
$$
\rho_1(f)=\sum_{r+s=k-2}\sum_{m,n=0}^{l-1} \frac {(r+s)!}{r!s!}
\widetilde s_{m/l}^{(r+1)}\widetilde s_{n/l}^{(s+1)}
\langle f,
(x+y)^ry^s(m,m+n)-(x-y)^ry^s(m,m-n)
\rangle
$$
$$
= C + \sum_{r=0}^{k-2}\sum_{m=0}^{l-1} c_{r,m}\widetilde s_{m/l}^{(r+1)} +
\sum_{D>0}q^D\sum_{m,n,r,s} A \langle f,
(x+y)^ry^s(m,m+n)-(x-y)^ry^s(m,m-n) \rangle.
$$
Here $C,c_{r,m}$ are constants whose exact values will not
be needed, and the constant $A=A(r,s,m,n,D)$ is defined by
$$
A:=\frac {(r+s)!}{r!s!}
\sum_{I (D)}
k_1^rk_2^s
(\delta_{k_1}^{m\bmod\, l}+(-1)^{r-1}\delta_{k_1}^{-m\bmod\, l})
(\delta_{k_2}^{n\bmod\, l}+(-1)^{s-1}\delta_{k_2}^{-n\bmod\, l}),
$$
where $I (D)\subset {\mathbb Z}^{4}$ denotes the set
\begin{equation}\label{i.set.def}
I (D) = \{ (m_{1}, k_{1}, m_{2}, k_{2}) \mid m_1,k_1,m_2,k_2>0, \quad
m_1k_1+m_2k_2=D \}
\end{equation}
A linear combination similar to $\rho_{1} (f)$ appears in the proof of
\cite[Theorem 4.8]{vanish} as the composition of several maps, one of
which is induced by the intersection pairing on Manin symbols. Here,
however, we just take this as a definition. After some
simplification, the formula for $\rho_1(f)$ becomes
$$
\rho_1(f)
= C + \sum_{r=0}^{k-2}\sum_{m=0}^{l-1} c_{r,m}\widetilde s_{m/l}^{(r+1)}
+ 4\sum_{D>0} q^D
\langle f,
\sum_{I (D)}
(R_{(k_1,k_1+k_2)}^+ - R_{(k_1,k_1-k_2)}^+)
\rangle,
$$
where $R_{(m,n)}^{+}$ is the Manin symbol from Section \ref{rkls}.
The quasimodular form $\rho_1(f)$ and the modular form $\rho(f)$ are
related as follows:
\begin{proposition}\label{firstapprox}
We have
$$
\rho_1(f)-12\rho(f)=
C + 4F_{1} + 4F_{2}+ \sum_{r=0}^{k-2}\sum_{m=0}^{l-1} c_{r,m}\widetilde s_{m/l}^{(r+1)},
$$
where $C$ is a constant and
\begin{align*}
F_{1} &= \sum_{n>0}q^n
\sum_{d|n}\frac {2n}{d}\langle f,R_{(d,0)}^+\rangle,\\
F_{2} &= \sum_{n>0}q^n
\sum_{d|n}\Bigl(\langle f,R_{(d,0)}^+\rangle + 2\sum_{\substack{0<e<d}}
\langle f, R_{(d,d-e)}^+ \rangle\Bigr).
\end{align*}
\end{proposition}
\begin{proof}
This follows from the identity
\begin{multline*}
\sum_{I (n)}
(R_{(k_1,k_1-k_2)}^+ - R_{(k_1,k_1+k_2)}^+)
\\ =
-\sum_{d|n}\bigl(\frac {2n}{d}+1\bigr)R_{(d,0)}^+
-2\sum_{\substack{d|n\\d>e>0}}
R_{(d,d-e)}^+
-3\sum_{H (n)}
R_{(c,d)}^+.
\end{multline*}
This identity with weight $k=2$ appears as an intermediate step of the proof of
\cite[Theorem 4.8]{vanish}. However, its proof only uses relations
among $R_{(m,n)}^+$ that are independent of the weight $k$.
\end{proof}
\begin{lemma}\label{f1lemma}
The quasimodular form $F_{1}$ is a linear combination of pairs.
\end{lemma}
\begin{proof}
After some simplification, one can write
\[
F_{1} = 2\sum_{n>0}q^n \sum_{d|n} n d^{k-3} \langle f, x^{k-2}(d,0)_+\rangle.
\]
Let $G$ be the $q$-series
\[
G = \sum_{n>0}q^n \sum_{d|n} d^{k-3} \langle f, x^{k-2}(d,0)_+\rangle.
\]
The complex number $\langle f,
x^{k-2}(d,0)_+\rangle$ depends only on $d \bmod\, l$, and further
satisfies
\[
\langle f, x^{k-2}(-d,0)_+\rangle = (-1)^{k}\langle f, x^{k-2}(d,0)_+\rangle.
\]
Hence $d^{k-3} \langle f, x^{k-2}(d,0)_+\rangle$ is an odd $(\bmod\,
l)$-polynomial. By Proposition \ref{easyremark}, $G$ is a linear
combination of the $\widetilde s_{a/l}^{(k)}$ and a constant, and is hence
toric quasimodular. Differentiating the linear combination for $G$
with respect to $\tau$ and applying Proposition \ref{derivs.are.there}
completes the proof.
\end{proof}
\begin{lemma}\label{f2lemma}
The quasimodular form $F_{2}+C$ is a linear combination of pairs for a suitably
chosen constant $C$.
\end{lemma}
\begin{proof}
By Proposition \ref{oddprop}, we know that the function
\[
d\longmapsto\langle f,R_{(d,0)}^+\rangle + 2\sum_{\substack{0<e<d}}
\langle f, R_{(d,d-e)}^+ \rangle
\]
extends to a unique odd $(\bmod\, l)$-polynomial. The result then
follows from Proposition \ref{easyremark}, and
weight considerations.
\end{proof}
\subsection{}
We are now ready to prove our main theorem.
\begin{theorem}\label{main}
All cusp forms of weight three or more are toric. Moreover, any such
cusp form can be written as a linear combination of pairs
(Definition \ref{pairs}).
\end{theorem}
\begin{proof}
One can easily see that lifts of the forms $\widetilde s_{a/l}^{(r)}$
can be written as linear combinations of $\,\widetilde s\,$ for the new
level. Therefore, we may assume without loss of generality that $f$
is a newform. Hence by the proof of Proposition \ref{nothingvanishes},
$\rho(f)$ is a non-zero multiple of $f$. Proposition
\ref{firstapprox} and Lemmas \ref{f1lemma} and \ref{f2lemma} show that
$\rho (f)$ can be written up to a constant as a linear
combination of toric quasimodular forms $\widetilde s_{a/l}^{(m)}
\widetilde s_{b/l}^{(n)}$ for $m+n=k$ and $\widetilde s_{a/l}^{(n)}$ for
smaller $n\leq k$. The transformation properties under $\Gamma_1(l)$
ensure that all lower weight forms come with zero coefficients, and
that all the quasimodular forms used are actually modular, i.e.\ the terms
$E_2s_{a/l}^{(k-2)}$ come with zero coefficients.
\end{proof}
\begin{corollary}
If $l\geq 5$, then any cusp form of weight $k\geq 3$ can be written, up to
a weight $k$ Eisenstein series, as a degree $k$ homogeneous polynomial
in weight one Eisenstein series.
\end{corollary}
\begin{proof}
This follows from Theorem \ref{main}, Proposition \ref{unnamed},
and \cite[Theorem 4.11]{toric}.
\end{proof}
\begin{corollary}
The multiplication map
$$
{\mathscr{M}}_{m}(l)\otimes {\mathscr{M}}_{n}(l) \longrightarrow {\mathscr{M}}_{m+n}(l)
$$
is surjective for all $m\geq n\geq 1$, except for $m=n=1$.
\end{corollary}
\begin{proof}
Theorem \ref{main} assures that the image of the above map contains
all cusp forms, so it is enough to ensure that the forms in the image
take arbitrary values at the cusps. To obtain a form which vanishes
at all but one cusp $p$ we multiply a form in ${\mathscr{M}}_{m}(l)$
that vanishes at all cusps except $p$ and perhaps one other cusp $q$
(relevant only if $m=2$) by a form in ${\mathscr{M}}_{n}(l)$ that
vanishes at $q$ but not at $p$.
\end{proof}
\begin{remark}
A slightly weaker statement can be proved directly by using the fact
that the ring of modular forms is Cohen-Macaulay. However, we are not
aware of any other proofs for $(m,n)=(2,1)$ or $(2,2)$.
\end{remark}
\begin{remark}
One can also ask which Eisenstein series are toric. It is easy to see
that for a prime level $p$ all Eisenstein series are toric. For
composite levels, the situation is different. For example if $l=25$
then weight two toric Eisenstein series form a subspace of codimension
one in the space of all weight two Eisenstein series. We do not know
any similar examples for higher weight.
\end{remark}
\begin{theorem}\label{eventually}
For every level $l$ there exists an $N$ such that the ring of toric
forms coincides with the ring of modular forms for weights $k\geq N$.
When $l$ is prime, one can take $N=3$.
\end{theorem}
\begin{proof}
In view of Theorem \ref{main}, one needs to show that all Eisenstein
series are eventually contained in the ring of toric forms. Because
the ring of toric forms is Hecke stable \cite[Theorem 5.3]{toric}, it suffices
to show that the values of toric forms at the cusps eventually
span a $c$-dimensional space, where $c$ is the number of cusps. For
this one needs to show that the values of $s_{a/l}$ for two different
cusps are not proportional. This is accomplished by a direct
calculation that we leave to the reader.
\end{proof}
\begin{remark}
Theorem \ref{eventually}
was used in \cite{modular} to analyze the embedding of the modular curve $X_{1}
(p)$ given by the graded ring ${\mathscr{T}}_{*} (p)$.
\end{remark}
\section{The map from symbols to forms in higher weight}
\label{s.toricmap}
\subsection{}
A key step in the proof of \cite[Theorem 4.11]{vanish}
was the analysis of a map $\mu$ from the minus space $M_{-}$ of weight 2 Manin
symbols to a quotient of the space ${\mathscr{M}}_{2} (l)$ of weight 2 modular
forms. Namely, we showed that the map
\[
\mu \colon (m,n)\longmapsto \widetilde s_{m/l} \widetilde s_{n/l}
\]
took $M_{-}$ into the quotient ${\mathscr{M}}_{2} (l)/{\mathscr{E}}_{2} (l)$, where
${\mathscr{E}}_{2} (l)$ is the space of weight 2 Eisenstein series
(\footnote{This is slightly inaccurate: the map we are denoting
by $\mu$ here is actually the composition of the map called $\mu$ in
\cite{vanish} with the Fricke involution.}). In this
section we consider the analogous map in higher weight given by
\begin{equation}\label{defofmu}
\mu \colon x^ry^s(m,n)\longmapsto (-1)^s\widetilde s^{(s+1)}_{m/l}\widetilde s_{n/l}^{(r+1)}
\end{equation}
and describe the relevant quotient containing the image.
\begin{theorem}\label{mumap}
Let $k>2$. The map $\mu$ in \eqref{defofmu} applied to the space
generated by the Manin symbols $x^sy^r(m,n)$ takes the relations
\begin{equation}\label{relation}
x^ry^s(a,b)+(-1)^ry^r(x-y)^s(b,-a-b)+(-1)^s(y-x)^rx^s(-a-b,a)
\end{equation}
to the subspace generated by the modular forms $\widetilde s_{a/l}^{(k)}$
and the quasimodular forms $\partial_{\tau}\widetilde s_{a/l}^{(k-2)}$.
\end{theorem}
\begin{proof}
The symbol \eqref{relation} maps to
\begin{equation}\label{relation2}
(-1)^s\widetilde s_{a/l}^{(s+1)}\widetilde s_{b/l}^{(r+1)}
+\sum_{t=0}^s\frac{s!}{t!(s-t)!}\widetilde s_{b/l}^{(r+t+1)}
\widetilde s_{-(a+b)/l}^{(s-t+1)}
+\sum_{t=0}^r\frac{r!}{t!(r-t)!}\widetilde s_{-(a+b)/l}^{(r-t+1)}
\widetilde s_{a/l}^{(s+t+1)}(-1)^{s+r}.
\end{equation}
Up to quasimodular forms of lower weight
and $\widetilde s_{a/l}^{(k)}$, the expression \eqref{relation2} can be
simplified to
$$
\sum_{D>0}q^D\sum_{I (D)}
(
A_{k_1,k_2}-A_{-k_1,k_2}+A_{k_2,-k_1-k_2} -A_{k_2,k_1-k_2}
+A_{-k_1-k_2,k_1} - A_{k_1-k_2,-k_1}
).
$$
Here $I (D)$ is defined in \eqref{i.set.def} and
$$
A_{k_1,k_2} = (-1)^{s} k_1^sk_2^r\bar\delta^{(a,b)}_{(k_1,k_2)},
$$
where $\bar\delta^{(a,b)}_{(k_1,k_2)}=
\delta_{k_1}^{a\bmod\, l}\delta_{k_2}^{b\bmod\, l}+
(-1)^k\delta_{k_1}^{-a\bmod\, l}\delta_{k_2}^{-b\bmod\, l}$.
The set $I (D)$ can be partitioned into subsets corresponding to
different ``runs'' of the Euclidean algorithm. Namely,
there are partially defined maps $\Upsilon $ and $\Delta
$ from $I (D)$ to itself
given by
$$
\Upsilon \colon (m_1,k_1,m_2,k_2)\longmapsto
\left\{\begin{array}{ll}
(m_2,k_1+k_2,m_1-m_2,k_1),&{\rm if~} m_1>m_2\\
(m_2-m_1,k_2,m_1,k_1+k_2),&{\rm if~} m_1<m_2\\
{\rm not~defined},&{\rm if~} m_1=m_2
\end{array}\right.
$$
$$
\Delta \colon (m_1,k_1,m_2,k_2)\longmapsto
\left\{\begin{array}{ll}
(m_1+m_2,k_2,m_1,k_1-k_2),&{\rm if~} k_1>k_2\\
(m_2,k_2-k_1,m_1+m_2,k_1),&{\rm if~} k_1<k_2\\
{\rm not~defined},&{\rm if~} k_1=k_2
\end{array}\right.
$$
These maps are inverses of each other whenever their composition is
defined. The whole set $I (D)$ can be pictured as a disjoint union of
vertical threads, where each thread is obtained by starting at the top
with a solution with $m_1=m_2$ and applying $\Delta $
until arriving at a solution with $k_1=k_2$(\footnote{$\Upsilon$ and
$\Delta$ stand for \emph{up} and \emph{down}.}). The crucial
observation is that for each thread $\Theta $, the sum
$$
\sum_{\Theta } A_{k_1,k_2} + A_{k_2,-k_1-k_2} + A_{-k_1-k_2,k_1} - A_{-k_1,k_2} - A_{k_2,k_1-k_2}
-A_{k_1-k_2,-k_1}
$$
collapses. Indeed, the negative terms
for elements $(m_1,k_1,m_2,k_2)$ cancel the positive terms for
elements $\Delta (m_1,k_1,m_2,k_2)$. To see this, observe that if
$k_1>k_2$, then the positive terms of $\Delta (m_1,k_1,m_2,k_2)$ equal
$$
A_{k_2,k_1-k_2}+A_{k_1-k_2,-k_1}+A_{-k_1,k_2}.
$$
The $k_1<k_2$ case is handled similarly, taking into account the symmetry
$A_{-k_1,-k_2}=A_{k_1,k_2}$.
Hence, up to a
linear combination of lower weight forms and the forms $\widetilde
s_{a/l}^{(k)}$, the image of the relation \eqref{relation} is equal to
\begin{multline*}
\sum_{D>0}q^D
\Bigl(\sum_{\{i\in I (D) \mid m_1=m_2\}}(A_{k_1,k_2}+A_{k_2,-k_1-k_2}+A_{-k_1-k_2,k_1})
\\-\sum_{\{i\in I (D) \mid k_1=k_2\}}(A_{-k_1,k_2}+A_{k_2,k_1-k_2}+A_{k_1-k_2,-k_1})
\Bigr).
\end{multline*}
The coefficient of $q^{D}$ can be further simplified to
\begin{multline*}
\sum_{d|D}\sum_{0<e<d}
(A_{e,d-e}+A_{d-e,-d}+A_{-d,e})
-\sum_{d|D}(\frac Dd -1) \Bigl(
d^{k-2}(\delta_{(-d,d)}^{(a,b)} +(-1)^k\delta_{(-d,d)}^{(-a,-b)})
\\+(-1)^sd^s0^r(\delta_{(d,0)}^{(a,b)} +(-1)^k\delta_{(d,0)}^{(-a,-b)})
+0^sd^r(\delta_{(0,d)}^{(a,b)} +(-1)^k\delta_{(0,d)}^{(-a,-b)})
\Bigr),
\end{multline*}
where $\delta$ is now a Kronecker symbol for elements of $({\mathbb Z}/l{\mathbb Z})^2$,
and our convention is $0^s=1$ if and only if $s=0$.
To finish the proof we first observe that the contribution of the
terms with $D/d$ is, up to an additive constant,
a derivative with respect to $\tau$ of
$$
-\delta_{a+b}^0\widetilde s_{b/l}^{(k-2)}+(-1)^{s+1}0^r\delta_{b}^0
\widetilde s_{a/l}^{(k-2)}
-0^s\delta_{a}^0\widetilde s_{b/l}^{(k-2)},
$$
where $\delta$ is the usual Kronecker function.
To show that the remaining contributions give linear combinations of
the forms $\widetilde s_{a/l}^{(\leq k)}$, it is enough to establish that
for any $a,b,r,s$ the $(\bmod\, l)$-polynomial
$$h(d):=\sum_{0<e<d}
(A_{e,d-e}+A_{d-e,-d}+A_{-d,e})
+ A_{-d,d}+A_{d,0}+A_{0,d}
$$
is odd. This follows easily from Proposition \ref{conesum} and the
symmetry of $A$.
\end{proof}
\begin{corollary}\label{truemu}
The map $\mu$ induces a map from the space of weight $k$ Manin symbols
$M$ to the quotient ${\mathbb Q}Q$ of the space of weight $k$ quasimodular forms by
the subspace generated by the Eisenstein series $\widetilde s_{a/l}^{(k)}$ and
the derivatives $\partial_\tau \widetilde
s_{a/l}^{(k-2)}$.
\end{corollary}
\begin{remark}\label{toric1}
An alternative approach to Theorem \ref{mumap} is to
look at the identity
$$
(s_\alpha^{(1)}+s_\beta^{(1)}+s_{-\alpha-\beta}^{(1)})^2
+\frac 12(s_\alpha^{(2)}+s_\beta^{(2)}+s_{-\alpha-\beta}^{(2)})=0
$$
which comes from the calculation of a certain toric form for
the complex projective plane ${\mathbb P}^2$; see \cite{toric}.
One can differentiate the above identity with respect to
$\alpha$ and $\beta$ several times and plug in rational values of
$\alpha$ and $\beta$. Then it remains to use the transformation
that connects $\widetilde s_{a/l}^{(k)}$ and $s_{a/l}^{(k)}$. We
leave the details to the reader.
\end{remark}
\section{Hecke equivariance of the symbols to forms map}\label{s.Hecke}
\subsection{}
It is not hard to see by explicit computation that the subspace
spanned by the Eisenstein series and derivatives mentioned in
Corollary \ref{truemu} is invariant under the action of $\Gamma_0 (l)
/\Gamma_1 (l)$, the Fricke involution, and the Hecke operators. Hence
we can naturally extend their action to the quotient ${\mathbb Q}Q$. The goal
of this section is to show that the map of Corollary \ref{truemu} is
compatible with the action of Hecke operators. For this, one needs to
show that the map
$$
x^ry^s(m,n)\mapsto (-1)^s\widetilde s^{(s+1)}_{m/l}\widetilde s_{n/l}^{(r+1)}
$$
is compatible with the action of Hecke operators, up to linear combinations
of $\widetilde s_{a/l}^{(k)}$ and $\partial_\tau \widetilde s_{a/l}^{(k-2)}$.
\begin{theorem}\label{HE}
Let $p$ be a prime number coprime to $l$ and $T_p$ be the
corresponding Hecke operator on $M_k$ and ${\mathscr{M}}_k$, where we abuse
notations slightly. Let $\mu$ be the map defined in Theorem
\ref{mumap}. Then for every $w\in M_k$, the image $\mu(T_p w)$ is
equal to $T_p(\mu(\epsilon_{p^{-1}}w))$ modulo a linear combination of
$s_{a/l}^{(k)}$ and $\partial_\tau s_{a/l}^{(k-2)}$. Here
$\epsilon_{p^{-1}}$ is the action of the element of
$\Gamma_0(l)/\Gamma_1(l)$ given by $x^ry^s(u,v)\mapsto x^ry^s(pu,pv)$,
(cf. Proposition \ref{gamma0action}).
\end{theorem}
Before we begin the proof of Theorem \ref{HE}, we need a lemma
giving a geometric interpretation of the set $H (p)$ involved
in Merel's description of the $T_{p}$-action on Manin symbols
(Theorem \ref{HeckeisHecke}).
\begin{lemma}\label{abcd}
\cite[Theorem 3.16]{vanish} For each index $p$ sublattice $S\subset
{\mathbb Z}^{2}$, consider the convex hull of all nonzero points of $S$ that
lie in the first quadrant. Then the compact subset of the boundary of
this convex hull is a union of segments. Moreover, the coordinates
$(a,c)$, $(b,d)$ of the vertices of each segment (ordered from the
$x$-axis) satisfy $ad-bc=p$ and $a>b\geq 0$, $d>c\geq 0$, and hence
determine an element of $H (p)$. Conversely, all $(a,b,c,d)\in H (p)$
come from one such sublattice $S$ in this manner.
\end{lemma}
Given an index $p$ sublattice $S\subset {\mathbb Z}^{2}$, we write $H
(p,S)$ for the subset of those $(a,b,c,d)\in H (p)$ corresponding to
$S$.
\begin{example}
Figure \ref{Fig1} shows the case $p=2$. There are three
sublattices of index $2$, and altogether four distinct boundary segments.
>From the segments we obtain the four elements of $H (2)$,
namely $(1,0,0,2)$, $(2,1,0,1)$, $(1,0,1,2)$ and $(2,0,0,1)$.
\begin{figure}
\caption{The case $p=2$: the three index $2$ sublattices of ${\mathbb Z}^{2}$ and the four boundary segments giving the elements of $H (2)$.}
\label{Fig1}
\end{figure}
\end{example}
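For concreteness, the four elements above can also be recovered by a direct enumeration. The following short script is only an illustration; it takes $H (p)$ to be exactly the set of integer quadruples described in Lemma \ref{abcd}, namely those $(a,b,c,d)$ with $ad-bc=p$, $a>b\geq 0$ and $d>c\geq 0$.
\begin{verbatim}
# Brute-force enumeration of H(p) as the quadruples (a, b, c, d) with
# a*d - b*c = p, a > b >= 0, d > c >= 0 (the description in Lemma "abcd").
def H(p):
    quads = []
    for a in range(1, p + 1):        # ad - bc >= a + d - 1, so a, d <= p
        for d in range(1, p + 1):
            for b in range(0, a):
                for c in range(0, d):
                    if a * d - b * c == p:
                        quads.append((a, b, c, d))
    return quads

print(H(2))   # [(1, 0, 0, 2), (1, 0, 1, 2), (2, 0, 0, 1), (2, 1, 0, 1)]
\end{verbatim}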
We will also need the following duality operation on the set of sublattices.
\begin{definition}
For an index $p$ sublattice $S$ we denote by $S^*$ the
sublattice of all points $P$ in ${\mathbb Z}^2$ such that $P\cdot S\subseteq p{\mathbb Z}$,
where $\cdot$ is the standard scalar product on ${\mathbb Z}^2$.
It is clear that $S^{**}=S$. Moreover, $S^*$ can be obtained from $S$
by a $\pi/2$ rotation about the origin.
\end{definition}
We are now ready to start the proof of Theorem \ref{HE}.
\begin{proof}
It is enough to consider $w= x^ry^s(u,v)$.
By Theorem \ref{HeckeisHecke} and the definition of $\mu$,
\[
\mu(T_p x^ry^s(u,v))\sim_l
\sum_Dq^D\sum_{h\in H (p)}
\sum_{i\in I (D)} \Phi (h,i),
\]
where
\[
\Phi (h,i) =
(ak_2-bk_1)^r(ck_2-dk_1)^s
\bar\delta_{k_1,k_2}^{au+cv,bu+dv} -
(ak_2+bk_1)^r(ck_2+dk_1)^s
\bar\delta_{k_1,k_2}^{-au-cv,bu+dv}.
\]
Here $\sim_l$ means that equality holds modulo linear combinations
of $\widetilde s_{a/l}^{(<k)}$, and $I (D)$ is defined in
\eqref{i.set.def}.
We can use $(p,l)=1$ to rewrite the above
as
$$
\Phi (h,i) = A_{dk_1-ck_2,ak_2-bk_1}-A_{-dk_1-ck_2,ak_2+bk_1},
$$
where $A_{\alpha,\beta}=\beta^r(-\alpha)^s\bar\delta_{\alpha,\beta}^{pu,pv}$.
On the other hand,
\begin{multline}\label{first.stab}
T_p \mu(\epsilon_{p^{-1}}w)\sim_{pl}
\sum_Dq^D\sum_{I (pD)}
(A_{k_1,k_2}-A_{-k_1,k_2})
\\
+p^{k-1}
\sum_Dq^D\sum_{I (D)}
(-1)^sk_1^sk_2^r(\bar\delta_{k_1,k_2}^{u,v}-\bar\delta_{-k_1,k_2}^{u,v}).
\end{multline}
For each $i\in I (pD)$ there exists a sublattice $S$ such that $(m_1,m_2)\in S$ and
$(k_1,k_2)\in S^*$. Moreover, $S$ is unique unless $m_1,k_1,m_2,k_2
=0\bmod\, p$, in which case there are $(p+1)$ such sublattices $S$.
To record this, we use the notation
\[
I (pD,S) = \{i\in I (pD) \mid (m_1,m_2)\in S, \quad (k_{1},k_{2})\in S^{*}\}.
\]
Let us further write, for any two subsets $U_{1}, U_{2}\subset {\mathbb R}^{2}$,
\[
I(pD,S;U_{1}, U_{2}) = \{i\in I (pD,S) \mid (m_1,m_2)\in U_{1}, (k_{1}, k_{2})\in U_{2}\}.
\]
Now we can rewrite \eqref{first.stab} as
\begin{equation}\label{one.guy}
T_p \mu(\epsilon_{p^{-1}}w)\sim_{pl}
\sum_Dq^D
\sum_S
\Bigl(
\sum_{I (pD,S;Q_{I}, Q_{I})} A_{k_1,k_2}
-
\sum_{I (pD,S;Q_{II}, Q_{II})} A_{k_1,k_2}
\Bigr)
\end{equation}
where $Q_{I}$ and $Q_{II}$ denote the open first and the second quadrants.
\begin{remark}
The reason we must write $\sim_{pl}$ here rather than $\sim_l$ is that
the action of $T_p$ defined for weight $k$ on $\widetilde s_{a/l}^{(\leq k)}$
will be a linear combination of $\widetilde s_{a/pl}^{(\leq k)}$.
\end{remark}
Given any $h = (a,b,c,d)\in H (p)$, we also denote by $h$ the linear
transformation ${\mathbb R}^{2}\rightarrow {\mathbb R}^{2}$ given by multiplication on
the right by the matrix $\left
( \begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$.
This allows
us to write $\mu(T_p(w))-T_p \mu(\epsilon_{p^{-1}}w)$ as
\begin{equation}\label{second.stab}
\mu(T_p(w))-T_p \mu(\epsilon_{p^{-1}}w)\sim_{pl}
\sum_{D}q^D\sum_S
(\Sum_{1} - \Sum_{2} - \Sum_{3} + \Sum_{4}),
\end{equation}
where
\begin{align*}
\Sum_{1} &= \sum_{\substack{h\in H (p,S)\\
i\in I (pD,S;h^t (Q_{I}), h^{-1} (Q_{I}))}} A_{k_1,k_2}\\
\Sum_{2} &= \sum_{\substack{h\in H (p,S)\\
i\in I (pD,S;h^t (Q_{II}), h^{-1} (Q_{II}))}}A_{k_1,k_2}\\
\Sum_{3} &= \sum_{I (pD,S;Q_{I},Q_{I})} A_{k_1,k_2}\\
\Sum_{4} &= \sum_{I (pD,S;Q_{II},Q_{II})} A_{k_1,k_2}
\end{align*}
It is convenient to visualize $\Sum_{1},\dotsc ,\Sum_{4}$ as indicated
in Figure \ref{Fig2}.
\begin{figure}
\caption{Schematic visualization of the sums $\Sum_{1},\dotsc ,\Sum_{4}$.}
\label{Fig2}
\end{figure}
\subsection{}
Now we consider the right hand side of \eqref{second.stab}.
It will turn out that most terms cancel each other, but some terms
are left over and require careful consideration.
To discuss these terms, we require some additional terminology.
For every sublattice $S$ of index $p$ we form the convex hulls of the
nonzero points in \emph{each} quadrant. The open 1-cones spanned by
the points on the boundary of these hulls will be called {\em rays} of
the lattice $S$, and will be denoted $\rho (S)$. The rays generated
by the points $(\pm p,0)$ and $(0,\pm p)$ will be called the
\emph{axis} rays; all others will be called \emph{non-axis} rays. By
abuse of notation, given a point $(x,y)$ we write $(x,y)\in \rho (S)$
to mean that $(x,y)$ lies on a ray of $S$.
Finally,
for any nonzero point $v\in S$, we define a rational cone $\cone(v)$
as follows:
\begin{itemize}
\item If $v$ lies on a ray $\rho$, then we put $\cone (v) = \rho$.
\item Otherwise, we set $\cone (v)$ to be the interior of the unique $2$-cone
spanned by adjacent rays of $S$ and containing $v$.
\end{itemize}
The first step in investigating \eqref{second.stab} is
the following lemma, which is the heart of the proof.
\begin{lemma}\label{inside}
Let $S\subset {\mathbb Z}^{2}$ have index $p$. Then for every
$(m_1,k_1,m_2,k_2)$ such that $(m_1,m_2)\not \in \rho (S)$ and
$(k_1,k_2)\not \in \rho (S^{*})$, the total of the contributions
of $\Sum_1$, $\Sum_2$, $\Sum_3$ and $\Sum_4$ at $(m_1,k_1,m_2,k_2)$
and $-(m_1,k_1,m_2,k_2)$ is zero.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{inside}]
Clearly, it is enough to check this if $(k_1,k_2)$ is in the first or
second quadrant.
First assume $(k_1,k_2)\in Q_{I}$. Then the only nontrivial
contributions come from $\Sum_1$ and $\Sum_3$ when $(m_1,m_2)\in
Q_{I}$, and in this case we claim $\Sum_{1}$ contributes $A_{k_1,k_2}$
and $\Sum_3$ contributes $-A_{k_1,k_2}$. Indeed, if $(m_1,m_2)\in
Q_{I}$, there is a contribution of exactly one $(a,b,c,d)$ in
$\Sum_1$, which corresponds to $\cone(m_1,m_2)$. Hence the total
contribution is zero.
Next assume $(k_1,k_2)\in Q_{II}$. In this case $\Sum_{3}$ doesn't
contribute, and we split the contributions of the remaining sums into types:
\begin{itemize}
\item ($\Sum_1$, type $1$) We assume $(m_1,m_2)\in Q_{I}$
lies in a cone above $\cone (k_2,-k_1)$
(see Figure \ref{Fig3}, graph 1). Then there is a unique $(a,b,c,d)$ in $\Sum_1$,
that corresponds to $\cone(m_1,m_2)$, and the contribution of $\Sum_1$
is $A_{k_1,k_2}$.
\item ($\Sum_1$, type $2$) We assume $(m_1,m_2)\in Q_{I}$ lies
in a cone below $\cone (k_2,-k_1)$
(see Figure \ref{Fig3}, graph 2).
Again, there is one $(a,b,c,d)$, and the contribution is
$A_{-k_1,-k_2}$.
\item ($\Sum_2$, type $1$) We assume $(m_1,m_2)\in Q_I$
lies in a cone above $\cone (k_2,-k_1)$.
Then there is a unique $(a,b,c,d)$ in $\Sum_2$ that corresponds to $\cone(k_2,-k_1)$,
and the contribution of $\Sum_2$ is $-A_{k_1,k_2}$.
\item ($\Sum_2$, type $2$) We assume $(m_1,m_2)\in Q_{I}$ is such that $(-m_1,-m_2)$
lies in a cone below $\cone (k_2,-k_1)$.
Then there is a unique $(a,b,c,d)$ in $\Sum_2$ that corresponds to $\cone(k_2,-k_1)$,
and the contribution of $\Sum_2$ is $-A_{k_1,k_2}$.
\item ($\Sum_2$, type $3$) If $(m_1,m_2)\in Q_{II}$, then there is a contribution
of $-A_{k_1,k_2}$. Indeed, the unique $(a,b,c,d)$ corresponds to
$\cone(k_2,-k_1)$.
\end{itemize}
Clearly, the type $1$ contributions of $\Sum_{1}$ and $\Sum_2$ cancel;
after we apply the symmetry $A_{-k_1,-k_2}=A_{k_1,k_2}$, the type $2$
contributions of $\Sum_{1}$ and $\Sum_2$ cancel as well. Finally, the
contribution of $\Sum_4$ cancels the type $3$ contribution of
$\Sum_2$, which completes the proof of Lemma \ref{inside}.
\end{proof}
\begin{figure}
\caption{The cone configurations used in the case analysis in the proof of Lemma \ref{inside}.}
\label{Fig3}
\end{figure}
\begin{remark}
In the terms that cancel, the matrices $(a,b,c,d)$ are different,
which makes Lemma \ref{abcd} crucial to the success of the proof.
\end{remark}
We now return to the proof of Theorem \ref{HE}. Having handled the
bulk of the terms in $\Sum_1$--$\Sum_4$, we now examine the
cases when $(m_1,m_2)$ or $(k_1,k_2)$ lie on a ray. For any point
$(u,v)$, we let $(u,v)^{\perp}$ be the set of all $(x,y)$ with $ux+vy=0$.
For each $(m_{1},m_{2})\in \rho (S)$ we define a subset $C
(m_{1},m_{2})\subset S^{*}$ as follows. If $(m_{1},m_{2})$ is not on
a coordinate axis, then we let $C (m_{1},m_{2})$ be the set of all
points with positive scalar product with $(m_{1},m_{2})$ \emph{except}
those that lie in one of the closed cones adjacent to
$(m_{1},m_{2})^{\perp}$ (Figure \ref{Fig4}). We use the same notation
to denote the similar set $C (k_{1},k_{2})$ constructed from a point
$(k_{1},k_{2})\in \rho (S^{*})$. If $(m_{1},m_{2})$ or $(k_{1},
k_{2})$ lies on an axis, then we define $C (m_{1},m_{2})$ and $C
(k_{1},k_{2})$ using the small diagrams in Figure \ref{Fig4}.
\begin{figure}
\caption{The sets $C (m_{1},m_{2})$ and $C (k_{1},k_{2})$, including the case of points lying on a coordinate axis.}
\label{Fig4}
\end{figure}
The rays of the boundary of $C(m_1,m_2)$ (excluding the origin) will
be denoted by $\partial C(m_1,m_2)$ and similarly for $\partial
C(k_1,k_2)$. For any cone $C$, we write $\sideset{}{'}\sum_C$ to indicate
that the sum
is taken over $C\cup \partial C$ with terms lying in $\partial C$ taken
with weight $1/2$.
\begin{lemma}\label{rayslemma}
With the above notation,
\begin{multline}\label{rayslemmaeq}
\mu(T_p(w))-T_p \mu(\epsilon_{p^{-1}}w)
\sim_{pl}\\
\sum_S
\sum_{(k_1,k_2)\in \rho (S^*)\cap Q'_{II}}
\sideset{}{'}\sum_{(m_1,m_2)\in C(k_1,k_2)}
q^{(m_1k_1+m_2k_2)/p}A_{k_1,k_2}\\
-
\sum_S\sum_{(m_1,m_2)\in \rho (S)\cap Q'_{I}}
\sideset{}{'}\sum_{(k_1,k_2)\in C(m_1,m_2)}
q^{(m_1k_1+m_2k_2)/p}A_{k_1,k_2},
\end{multline}
where $Q'_{I}$ and $Q'_{II}$ are the closures of the first and second
quadrants.
\end{lemma}
\begin{proof}
[Proof of Lemma \ref{rayslemma}]
Because of Lemma \ref{inside},
we need to examine the contribution of $\Sum_1$, $\Sum_2$, $\Sum_3$ and $\Sum_4$
to the quadruples $\pm(m_1,k_1,m_2,k_2)$ where at least one of $(m_1,m_2)$
and $(k_1,k_2)$ lies on a ray of the corresponding lattice.
We will have to be especially careful when one of these vectors is
located on a coordinate axis. In what follows we will fix lattices
$S$ and $S^*$.
First, let us deal with the case when $(k_1,k_2)\in \rho (S^*)$ and is
not on an axis, and $(m_1,m_2)\not \in \rho (S)$. If $(k_1,k_2)$ is
in the first or third quadrant, then $\Sum_2$ and $\Sum_4$ do not
contribute, and the contributions of $\Sum_1$ and $\Sum_3$ cancel
since they are respectively $A_{k_1,k_2}$ and $-A_{k_1,k_2}$.
Therefore, it is enough to consider when $(k_1,k_2)$ lies in the
second or fourth quadrants.
Now the terms of $\Sum_2$ and $\Sum_3$ do not
contribute, and the total contribution of $\Sum_1$ and
$\Sum_4$ equals $A_{k_1,k_2}$ if and only if $(m_1,m_2)\in
C(k_1,k_2)$, and is zero otherwise. This clearly corresponds to the
terms we get on the right of \eqref{rayslemmaeq}.
Now suppose $(k_1,k_2)$ lies on a coordinate axis. We may assume that
it lies in the positive portion. If $(m_1,m_2)\not \in \rho (S)$,
then only $\Sum_1$ contributes, and the contribution is $A_{k_1,k_2}$
if and only if $(m_1,m_2)\in C(k_1,k_2)$. Clearly this corresponds
exactly to the contribution on the right of \eqref{rayslemmaeq}.
Analogously one can treat the case of $(m_1,m_2)\in \rho (S)$
with $(k_1,k_2)\not \in \rho (S^*)$. We have therefore shown that
Lemma \ref{rayslemma} holds up to the contributions of $\pm(m_1,k_1,m_2,k_2)$
with both $(m_1,m_2)$ and $(k_1,k_2)$ on the rays of the corresponding
lattices.
If both $(m_1,m_2)$ and $(k_1,k_2)$ belong to non-axis rays of
$S$ and $S^*$, the contributions of $\Sum_1$ and $\Sum_2$ are zero. Hence,
the contribution of $-A_{k_1,k_2}$ occurs if both of them lie in the
first or third quadrant and the contribution of $A_{k_1,k_2}$ occurs
if both lie in the second or fourth quadrant. To show that this
is consistent with the right hand side of the equation
of the lemma, observe that if $(k_1,k_2)\in Q_{II}$ and $(m_1,m_2)\in Q_I$,
the contributions of the two $\sideset{}{'}\sum$ cancel. Indeed, in this
case $(k_1,k_2)\in C(m_1,m_2)$ and $(k_1,k_2)\in \partial C(m_1,m_2)$
is equivalent to $(m_1,m_2)\in C(k_1,k_2)$ and
$(m_1,m_2)\in \partial C(k_1,k_2)$, respectively.
The remaining case of one or both of $(m_1,m_2)$ and $(k_1,k_2)$
on the axis with both of them on the rays is treated similarly and is left
to the reader.
\end{proof}
Continuing now with the proof of Theorem \ref{HE}, we investigate the
sums on the right of \eqref{rayslemmaeq}. We divide the contributions
to the sums over $\rho (S^{*})$ into two types: those coming from
non-axis rays, and those coming from axis rays.
\begin{lemma}\label{nonaxislemma}
In the sums over $S$ in \eqref{rayslemmaeq}, the contributions of the
non-axis rays give a linear combination of $\widetilde s_{a/pl}^{(\leq
k)}$ and $\partial_\tau \widetilde s_{a/l}^{(k-2)}$.
\end{lemma}
\begin{proof}
[Proof of Lemma \ref{nonaxislemma}]
First we calculate the
contribution of a $(k_1,k_2)\in \rho (S^*)$ such that $(k_1,k_2)$ lies
on the ray ${\mathbb R}_{>0}(-c,a)$, where $(a,c)$ is in the first quadrant.
Let $(b,d)$ (respectively $(b_1,d_1)$) be the generator of the ray of
$S$ adjacent to the ray generated by $(a,c)$ in the counterclockwise
(resp. clockwise) direction. Then each of the sets of vectors $\{(a,c), (b,d)
\}$ and $\{(a,c), (b_{1}, d_{1}) \}$ forms a ${\mathbb Z}$-basis of $S$,
which implies $(b,d)+(b_1,d_1)=N(a,c)$ where $N$ is a positive integer.
Then any $(m_1,m_2)\in C(k_1,k_2)$ can be written
\[
(m_1,m_2)=-\alpha(a,c)+\beta(b,d),
\]
where
\begin{equation}\label{conditions}
\alpha,\beta\in{\mathbb Z},\quad (m_1,m_2)\cdot(-d,b)>0,\quad (m_1,m_2)\cdot(-d_1,b_1)>0.
\end{equation}
The conditions \eqref{conditions} translate into the inequality
$0<\alpha<N\beta$
on $\alpha$, which has $N\beta-1$ solutions for a given $\beta$.
Note that the terms in $\partial C(k_1,k_2)$ correspond to $\alpha=0$ and
$\alpha= N\beta$, which contributes an extra $A_{k_1,k_2}$
for each value of $\beta$.
Now if we write $(k_1,k_2)=t(-c,a)$ for some positive integer $t$,
then $(m_1k_1+m_2k_2)/p=t\beta$, so that
the contribution of the complete ray ${\mathbb R}_{>0}(-c,a)\in \rho (S^*)$ to
the first term of Lemma \ref{rayslemma} is
$$
\sum_{t>0} \sum_{\beta>0} q^{t\beta} (N\beta-1)A_{-tc,ta}=
\sum_{D>0} q^D\sum_{t|D} \frac {ND}t A_{-tc,ta}.
$$
When one recalls the definition of $A$, this is easily
seen to be a linear combination of
$\partial_\tau\widetilde s_{a/l}^{(k-2)}$.
Next we calculate the contribution of an $(m_1,m_2)$ that lies on ray
${\mathbb R}_{>0}(a,c)$ of $S$. The computation is very similar to the
above. As before we denote by $(b,d)$ and
$(b_1,d_1)$ the generators of the rays of $S$ adjacent to
${\mathbb R}_{>0}(a,c)$. Then in the second summation of \eqref{rayslemmaeq},
the pairs $(k_1,k_2)$ are of the form
$$(k_1,k_2)=-\alpha(c,-a)+\beta(d,-b),~\alpha,\beta\in{\mathbb Z},$$
where as before $0<\alpha<N\beta$ for $(k_1,k_2)\in C(m_1,m_2)$
and $\alpha=0$ or $\alpha =N\beta$ for $(k_1,k_2)\in \partial C(m_1,m_2)$.
If we write $(m_1,m_2)=t(a,c)$ for $t$ a positive integer, then we
obtain
$$
-\sum_{t>0}\sum_{\beta>0}q^{t\beta}
\Big(
\sum_{0<\alpha<N\beta}A_{-\alpha c+\beta d,\alpha a -\beta b}
+\frac12
A_{\beta d, -\beta b}
+\frac12
A_{\beta(-Nd+c), \beta(Na-b)}
\Big).
$$
It remains to use Propositions \ref{conesum} and \ref{easyremark}
to see that the above is a linear combination of $s_{a/l}^{(\leq k)}$.
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{axislemma}
In the sums over $S$ in \eqref{rayslemmaeq}, the contributions of the
axis rays give a linear combination of $\widetilde s_{a/pl}^{(\leq
k)}$ and $\partial_\tau \widetilde s_{a/l}^{(k-2)}$.
\end{lemma}
\begin{proof}
[Proof of Lemma \ref{axislemma}]
First, if $S$ or $S^*$
contain $(0,1)$ or $(1,0)$, then the contributions of the two sums in
Lemma \ref{rayslemma} cancel. Hence we may ignore lattices of this type.
If $(k_1,k_2)$ is on the positive half
of the $x$-axis, then $k_1$ is a multiple of $p$.
The top cone of $S$ in the first quadrant is
the span of the positive half of $y$-axis and $(1,a)$, with $a$ taking
all values from $0$ to $p-1$, depending on $S$. One then observes that
the contribution of $S_1$ and $S_2$ with $a_1+a_2=p$ can be thought of
as the sum over $(m_1,m_2)$ in the interior of the cone spanned by
$(1,a_1)$ and $(1, -a_2)$, plus half the sum for $(m_1,m_2)$ on
the boundary of the cone. It is then easily seen to give
a linear combination of $\partial_\tau\widetilde s_{a/l}^{(k-2)}$.
The case of $(k_1,k_2)$ on the positive half of the $y$-axis
is treated similarly.
If $(m_1,m_2)$ is on one of the axes, then we observe that the sum of
$A_{k_1,k_2}$ over $C(m_1,m_2)$ and its boundary can be thought of as
the sum over all points of ${\mathbb Z}^2$ that lie in that cone of an even
two-variable $\bmod\, pl$-polynomial $\hat A_{k_1,k_2}$, which we define
to equal $A_{k_1,k_2}$ if $(k_1,k_2)\in S^*$ and zero otherwise. One
then again invokes Propositions \ref{conesum} and \ref{easyremark} to
conclude that these terms contribute a linear combination of $\widetilde
s_{a/pl}^{(\leq k)}$.
\end{proof}
\noindent\textit{Completion of the proof of Theorem \ref{HE}}.
By Lemmas \ref{nonaxislemma} and \ref{axislemma}, we have that
$$
\mu(T_p(w))-T_p \mu(\epsilon_{p^{-1}}w)
$$
is a linear combination of $\partial_\tau \widetilde s_{a/l}^{(k-2)}$
and $\widetilde s_{a/pl}^{(\leq k)}$. The modular transformation properties then
imply that only $\widetilde s_{a/l}^{(k)}$ and
$\partial_\tau\widetilde s_{a/l}^{(k-2)}$ appear, which
finishes the proof of Theorem \ref{HE}.
\end{proof}
\begin{remark}
Another way to state Theorem \ref{HE} is to say that the composition of
$\mu$ and Fricke involution is Hecke-equivariant.
\end{remark}
\begin{remark}\label{toric2}
The discussion of this section simplifies a bit if one uses the
geometry of toric varieties. More specifically, one has to
consider toric modular forms $f_{{\mathbb Z}^2,\deg}$ defined in \cite{toric}
and then differentiate them with respect to the components of the
degree function $\deg$. Then the Hecke action described in \cite{toric}
can be interchanged with these partial differentiations, which gives
the desired result. It is worth mentioning that our proof is in some sense
parallel to this calculation. For example, the number $N$ that appears
in the treatment of the second sum of Lemma \ref{rayslemma} is related
to the self-intersection numbers of the boundary divisors on the toric
surface given by the fans that correspond to the subgroups $S$.
\end{remark}
\begin{remark}
It may be interesting to analyze products of more than two $\widetilde s$.
Every such product may be associated to a symbol
$$x_1^{r_1}\cdots x_n^{r_n}(a_1,\cdots,a_n)$$
where $a_i\in {\mathbb Z}/l{\mathbb Z}$. Then one expects to be able to develop a
generalization of the theory of Manin symbols, by introducing relations
on these symbols that come from linear relations on the products.
The action of Hecke operators will then come from toric geometry,
and will be related to subgroups of index $p$ in ${\mathbb Z}^{n}$ as in
\cite{toric}.
\end{remark}
\end{document}
\begin{document}
\title{Learning temporal data with variational quantum recurrent neural network}
\author{Yuto Takaki}
\affiliation{Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan}
\author{Kosuke Mitarai}
\email{[email protected]}
\affiliation{Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan}
\affiliation{Center for Quantum Information and Quantum Biology, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Japan}
\affiliation{JST, PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan}
\author{Makoto Negoro}
\affiliation{Center for Quantum Information and Quantum Biology, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Japan}
\affiliation{Institute for Quantum Life Science, National Institutes for Quantum and Radiological Science and Technology, Japan}
\author{Keisuke Fujii}
\affiliation{Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan}
\affiliation{Center for Quantum Information and Quantum Biology, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Japan}
\affiliation{Center for Emergent Matter Science, RIKEN, Wako Saitama 351-0198, Japan}
\author{Masahiro Kitagawa}
\affiliation{Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan}
\affiliation{Center for Quantum Information and Quantum Biology, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Japan}
\date{\today}
\begin{abstract}
We propose a method for learning temporal data using a parametrized quantum circuit.
We use a circuit that has a structure similar to that of the recurrent neural network, which is one of the standard approaches employed for this type of machine learning task.
Some of the qubits in the circuit are utilized for memorizing past data, while others are measured and initialized at each time step for obtaining predictions and encoding a new input datum.
The proposed approach utilizes the tensor product structure to get nonlinearity with respect to the inputs.
Fully controllable ensemble quantum systems such as an NMR quantum computer are a suitable choice of experimental platform for this proposal.
We demonstrate its capability with simple numerical simulations, in which we test the proposed method for the task of predicting cosine and triangular waves and quantum spin dynamics.
Finally, we analyze the dependency of its performance on the interaction strength among the qubits in numerical simulations and find that there is an appropriate range of the strength.
This work provides a way to exploit complex quantum dynamics for learning temporal data.
\end{abstract}
\maketitle
\section{Introduction}
Quantum machine learning \cite{Biamonte2017} is gaining attention in the hope of speeding up machine learning tasks with quantum computers.
One direction is to develop quantum machine learning algorithms with proven speedups over classical approaches \cite{Schuld2016, Wiebe2012, Rebentrost2014, yamasaki2020learning}.
Another direction, which is becoming popular recently, is to construct heuristic algorithms using parametrized quantum circuits.
Among such, a popular idea is to use an exponentially large Hilbert space of a quantum system for storing features of dataset \cite{Schuld2019, Mitarai2018, farhi2018, cerezo2020variational}.
In this approach, each datum is first encoded in a quantum state, and then we employ a parametrized quantum circuit to extract features that are not classically tractable.
Here, we specifically consider the extension of the latter approaches for the learning of temporal data.
In practice, such machine learning tasks frequently appear in various fields such as natural language processing, speech recognition, and stock price prediction.
A previous idea of using a quantum system for the time-series analysis can be found in Ref. \cite{Fujii2017}, where the authors proposed quantum reservoir computing (QRC).
QRC employs complex quantum dynamics itself as a computational reservoir.
It learns to perform temporal tasks by optimizing the readout from the reservoir while the quantum system is left unchanged.
This is in contrast to the approach taken in this work, that is, we directly tune the dynamics of the quantum system in use.
In this work, we propose a type of parametrized quantum circuit for learning temporal data in analogy with the recurrent neural network (RNN) which is a popular machine learning model on a classical computer for the task \cite{SHERSTINSKY2020132306}.
We call the proposed circuit quantum recurrent neural network (QRNN).
In QRNN, we repeat the following three steps: encoding an input, applying a parametrized quantum circuit, and measuring a portion of the qubits to obtain a prediction from the system.
The circuit applied at the second step remains the same at every loop, introducing a recurrent structure.
The parameters in the circuit are optimized with respect to a suitable cost function whose minimization leads to an accurate prediction of a given temporal dataset.
This algorithm employs the tensor product structure of quantum systems to obtain the nonlinearity of the output with respect to input in a way similar to Ref. \cite{Mitarai2018}.
To use expectation values as the output prediction while maintaining the recurrent structure, we assume the use of an ensemble quantum system such as an NMR quantum computer, which allows us to extract them efficiently.
We also test the validity of the proposed method by numerically simulating the training and prediction processes.
We believe this work opens up a way to exploit parametrized quantum circuits for time-series analysis.
The rest of this work is organized as follows.
In Sec. \ref{sec:pre}, we first review the classical recurrent neural network and existing machine learning algorithms with parametrized quantum circuits.
Then, the proposed QRNN is described in detail in Sec. \ref{sec:QRNN}.
In Sec. \ref{sec:numerical}, we present numerical simulations where we apply QRNN to some simple time-series prediction tasks and analyze the results.
Conclusion and outlook are given in Sec. \ref{sec:conclusion}.
\section{Preliminary}\label{sec:pre}
\subsection{Learning of temporal data}
In a (supervised) temporal learning task, we are given an input sequence $\{\bm{x}_t\}_{t=0}^{T-1}$ and a target teacher sequence $\{\bm{y}_t\}_{t=0}^{T-1}$ as a training dataset.
Our task is to construct a model $\overline{\bm{y}_k} = f(\{\bm{x}_t\}_{t=0}^k)$ from the training dataset such that the difference between $\bm{y}_k$ and $f(\{\bm{x}_t\}_{t=0}^k)$ is small.
A popular example is the time-series prediction task where $\bm{y}_t=\bm{x}_{t+1}$, which will be used for demonstration of the proposed method in Sec. \ref{sec:numerical}.
After constructing a model that is accurate enough, we can perform the prediction of $\bm{y}_t$.
\subsection{Recurrent neural network}
RNN \cite{SHERSTINSKY2020132306} is a famous technique to analyze time-series data.
There are a number of variants of RNN models such as the Elman network, long short-term memory (LSTM), and gated recurrent unit (GRU) \cite{hochreiter1997long, chung2014empirical, elman1990finding}.
Among those, we use a basic form of RNN shown in Fig.~\ref{fig:RNN} (a) as a reference to construct its quantum version.
\begin{figure}
\caption{(a) Schematic picture of a basic RNN. Blue circles represent elements of the vectors $\bm{x}_t$, $\bm{h}_t$, and $\overline{\bm{y}_{t+1}}$. (b) The unfolded expression of the RNN used for training. (c) The prediction phase, in which the prediction $\overline{\bm{y}_t}$ is fed back as the next input.}
\label{fig:RNN}
\end{figure}
RNN consists of an input layer $\bm{x}_t$, hidden layer $\bm{h}_t$, and output layer $\overline{{\bm y}_{t+1}}$, which are real-valued vectors.
When $\bm{x}_t$ is injected into an input layer of RNN, the value of the hidden layer is updated using the value of $\bm{x}_t$ and $\bm{h}_{t-1}$ as,
\begin{align}
\bm{h}_t = g_h(V_{\mathrm{in}}\bm{x}_t+W\bm{h}_{t-1}+\bm{b}_{\mathrm{in}})
\end{align}
where $V_{\mathrm{in}}$ and $W$ are matrices, $\bm{b}_{\mathrm{in}}$ is a bias vector, and $g_h(\cdot)$ is an activation function that applies a nonlinear transformation element-wise to a vector.
The output $\overline{\bm{y}_{t+1}}$ is computed as,
\begin{align}
\overline{\bm{y}_{t+1}} = g_o(V_{\mathrm{out}}\bm{h}_t+\bm{b}_{\mathrm{out}})
\end{align}
where $V_{\mathrm{out}}$, $\bm{b}_{\mathrm{out}}$, and $g_o(\cdot)$ are a matrix, a bias vector, and an activation function like $g_h$, respectively.
$W$, $V_{\mathrm{in}}$, $\bm{b}_{\mathrm{in}}$, $V_{\mathrm{out}}$, and $\bm{b}_{\mathrm{out}}$ are parameters to be optimized to minimize a cost function $L$ which represents the difference between $\bm{y}_{t+1}$ and $\overline{\bm{y}_{t+1}}$ such as the squared error $L=\sum_t \|\overline{\bm{y}_{t}}-\bm{y}_{t}\|^{2}$.
The recurrent structure where the value of the hidden layer at time $t$ is computed from $\bm{h}_{t-1}$ allows the RNN to hold the information given in the past time steps \cite{SHERSTINSKY2020132306}.
Frequently, the training of RNN is performed with the unfolded expression as shown in Fig. \ref{fig:RNN} (b), which allows us to treat RNN in the same manner as feed-forward neural networks and to use standard techniques such as backpropagation to optimize the parameters $W$, $V_{\mathrm{in}}$, $\bm{b}_{\mathrm{in}}$, $V_{\mathrm{out}}$, and $\bm{b}_{\mathrm{out}}$.
After the training using the dataset $\{\bm{x}_t\}_{t=0}^{T-1}$ and $\{\bm{y}_t\}_{t=0}^{T-1}$, we can proceed to the prediction of $\bm{y}_T$, $\bm{y}_{T+1}$, $\cdots$ by repeating the structure.
However, for the time-series prediction task, the next input data $\bm{x}_T$ itself is unknown and to be predicted.
Therefore, the prediction has to be performed using the strategy given in Fig.~\ref{fig:RNN} (c) where we feed its prediction $\overline{\bm{y}_t}$ as the input.
This strategy is justified since we expect the trained network to output $\overline{\bm{y}_t}$ that is close to the true value $\bm{x}_{t+1}$.
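To make the update rules and the prediction strategy above concrete, the following is a minimal NumPy sketch of the basic RNN cell and of the closed-loop prediction of Fig.~\ref{fig:RNN} (c). The choice of $\tanh$ for $g_h$, the identity for $g_o$, and random (untrained) weights are assumptions made only for illustration; training by backpropagation through the unfolded network is omitted.
\begin{verbatim}
# Minimal sketch of the basic RNN cell and of closed-loop prediction.
# g_h = tanh and g_o = identity are illustrative choices; the weights are
# random here (in practice they are trained by backpropagation).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 8, 1
V_in = rng.normal(scale=0.5, size=(n_hid, n_in))
W = rng.normal(scale=0.5, size=(n_hid, n_hid))
b_in = np.zeros(n_hid)
V_out = rng.normal(scale=0.5, size=(n_out, n_hid))
b_out = np.zeros(n_out)

def step(x, h_prev):
    h = np.tanh(V_in @ x + W @ h_prev + b_in)   # hidden-layer update
    y = V_out @ h + b_out                       # output layer
    return y, h

# Run through the training inputs x_0, ..., x_{T-1}.
xs = np.sin(0.3 * np.arange(50)).reshape(-1, 1)
h = np.zeros(n_hid)
for x in xs:
    y, h = step(x, h)

# Closed-loop prediction: feed the prediction back as the next input.
preds = []
x = y
for _ in range(10):
    y, h = step(x, h)
    preds.append(y.item())
    x = y
print(preds)
\end{verbatim}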
We follow the above structure closely to construct its quantum version in the following sections.
\subsection{Machine learning with parametrized quantum circuit}
Many researchers have developed a wide range of algorithms \cite{cerezo2020variational} using parametrized circuits, including quantum chemistry simulations \cite{Peruzzo2014}, combinatorial optimization problems \cite{farhi2014quantum}, and machine learning \cite{Mitarai2018, farhi2018, Schuld2019, Benedetti2019}.
Algorithms involving parametrized quantum circuits usually work in the following way.
First, we apply a parametrized quantum circuit $U(\bm{\theta})$ to some initialized state $\ket{\psi_0}$, where $\bm{\theta}$ is the parameter of the circuit.
Then, we measure an expectation value of a specific observable, $\braket{O(\bm{\theta})}$.
Based on the value, we compute a cost function $L(\braket{O(\bm{\theta})})$ to be minimized by optimizing $\bm{\theta}$.
For machine learning tasks, we define $L(\braket{O(\bm{\theta})})$ such that the output $\braket{O(\bm{\theta})}$ provides us with appropriate predictions when $L$ is minimized.
Let us describe an example of supervised learning \cite{Mitarai2018, farhi2018, Schuld2019}.
In supervised learning, we are provided with a training dataset consisting of input data $\{\bm{v}_i\}$ and corresponding teacher data $\{u_i\}$.
Ideas introduced in Refs. \cite{Mitarai2018, farhi2018, Schuld2019} are to use a parametrized quantum circuit $U(\bm{\theta},\bm{v}_i)$ which depends also on $\bm{v}_i$.
This circuit encodes an input $\bm{v}_i$ into a quantum state which results in the input-dependent output $\braket{O(\bm{\theta},\bm{v}_i)}$.
The cost function is defined as, for example, the mean squared error between $\{u_i\}$ and $\{\braket{O(\bm{\theta},\bm{v}_i)}\}$.
Minimization of such a cost function by optimizing the parameters $\bm{\theta}$ results in $\braket{O(\bm{\theta}_{\mathrm{opt}},\bm{v}_i)}$ that is close to $u_i$, where $\bm{\theta}_{\mathrm{opt}}$ is the optimized parameter.
Finally, we can use $\braket{O(\bm{\theta}_{\mathrm{opt}},\bm{v})}$ for an unknown input $\bm{v}$ to predict corresponding $u$.
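As a minimal illustration of this workflow (a toy single-qubit model of our own choosing, not a circuit appearing in this paper), the input $v\in[-1,1]$ can be encoded by $R_y(\arccos v)$, followed by a single parametrized rotation $R_y(\theta)$, with $\braket{Z}$ taken as the output; the parameter is then fitted to the teacher data by minimizing the mean squared error.
\begin{verbatim}
# Toy single-qubit example of supervised learning with a parametrized circuit:
# output(theta, v) = <Z> of R_y(theta) R_y(arccos v)|0> = cos(theta + arccos v).
# The ansatz and the dataset are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def model(theta, v):
    state = ry(theta[0]) @ ry(np.arccos(v)) @ np.array([1.0, 0.0])
    return state @ Z @ state

# Toy training data generated by the target u = cos(arccos(v) + 0.3).
vs = np.linspace(-0.9, 0.9, 20)
us = np.cos(np.arccos(vs) + 0.3)

def cost(theta):
    return np.mean([(model(theta, v) - u) ** 2 for v, u in zip(vs, us)])

res = minimize(cost, x0=np.array([0.0]), method="Nelder-Mead")
print(res.x)   # converges to theta ~ 0.3
\end{verbatim}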
In the following sections, we extend this approach to the task of predicting time-series data by combining it with the ideas of RNN.
\section{Quantum Recurrent Neural Network}\label{sec:QRNN}
In this section, we first describe the proposed algorithm and its theoretical capability.
Then, a possible experimental setup for its realization and its relation with previous works are discussed.
\subsection{Algorithm}
\label{sec:algorithm}
Fig. \ref{fig:qrnn} shows a schematic of QRNN algorithm using $n=n_A+n_B$ qubits.
QRNN is composed of two groups of qubits called 'A' and 'B'.
Groups A and B have $n_A$ and $n_B$ qubits respectively.
The qubits in group A are never measured throughout the algorithm so that they hold past information.
On the other hand, group B qubits are measured and initialized at each time step $t$ to output the prediction and to take input to the system.
Each time step of QRNN consists of three parts, namely, the encoding part, the evolution part, and the measurement part, which are schematically depicted in Fig. \ref{fig:qrnn} (a).
At the encoding part of time step $t$, we encode a training datum $\bm{x}_{t}$ into the quantum state of the qubits in group B by applying $U_{\rm in}(\bm{x}_{t})$ to the initialized state $\ket{0}^{\otimes n_B}$.
Hereafter, we abbreviate $\ket{0}^{\otimes n_B}$ as $\ket{0}$ if there is no confusion.
Note that group A already holds information about $\bm{x}_0, \cdots, \bm{x}_{t-1}$ as a density matrix $\rho^A_{t-1}(\bm{\theta}, \bm{x}_0, \cdots, \bm{x}_{t-1})$ resulting from the previous steps.
At the evolution part, we apply a parametrized unitary circuit $U(\bm{\theta})$ to all the qubits in the system.
$U(\bm{\theta})$ transfers the information injected into group B to group A by introducing interactions among the qubits.
We denote the reduced density matrices of A and B after the evolution by $\rho^A_t$ and $\rho^B_t$, respectively.
At the measurement part, we first measure expectation values of a set of commuting observables, $\{O_i\}$, of group B to obtain,
\begin{align}
\braket{O_i}_{t}=\mathrm{Tr}[\rho^{B}_{t}O_i].
\end{align}
Then, we transform these expectation values to get the prediction $\overline{\bm{y}_{t}}$ of $\bm{y}_{t}$ by some function $g$;
$\overline{\bm{y}_{t}}=g\left(\{\braket{O_i}_{t}\}\right)$.
$g$ can, for example, be a linear combination of $\{\braket{O_i}_{t}\}$.
Note that the transformation $g$ can be chosen arbitrarily and optimized in general.
Afterward, the qubits in group B are initialized to $\ket{0}$.
We repeat the three parts to obtain the predictions $\overline{\bm{y}_0}, \cdots, \overline{\bm{y}_{T-1}}$ for ${\bm{y}_0}, \cdots, {\bm{y}_{T-1}}$.
After obtaining the predictions, we compute the cost function $L(\{\bm{y}_0, \cdots, \bm{y}_{T-1}\}, \{\overline{\bm{y}_0}, \cdots, \overline{\bm{y}_{T-1}}\})$ which represents the difference between the training data $\{\bm{y}_0, \cdots, \bm{y}_{T-1}\}$ and prediction $\{\overline{\bm{y}_0}, \cdots, \overline{\bm{y}_{T-1}}\}$ obtained by QRNN.
The parameter $\bm{\theta}$ is optimized to minimize $L$.
This optimization is performed by standard optimizers running on a classical computer.
For example, we can utilize gradient-free optimization methods such as Nelder-Mead or simultaneous perturbation stochastic approximation (SPSA) algorithms.
Another choice would be to use a gradient-based method such as gradient descent.
The analytic gradient of the cost function can be obtained using the so-called parameter shift rule \cite{Mitarai2018, schuld2019evaluating, koczor2020quantum} by running the QRNN $O(T)$ times for each parameter; see the Appendix for details.
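As a minimal illustration of the parameter-shift rule for a gate of the form $e^{-i\theta P/2}$ with $P$ a Pauli operator (a toy single-qubit example of ours, not the circuit used in this work), the analytic derivative is obtained from two shifted evaluations of the same expectation value:
\begin{verbatim}
# Toy check of the parameter-shift rule for a single R_y rotation:
# f(theta) = <0| R_y(theta)^dag Z R_y(theta) |0> = cos(theta), and
# f'(theta) = [f(theta + pi/2) - f(theta - pi/2)] / 2.
import numpy as np

Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def f(theta):
    state = ry(theta) @ np.array([1.0, 0.0])
    return state @ Z @ state

theta = 0.7
shift_grad = 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))
print(shift_grad, -np.sin(theta))   # both equal f'(theta)
\end{verbatim}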
After the training, we expect the trained QRNN to be able to predict $\bm{y}_t$ with $\overline{\bm{y}}_t$ for $t\geq T$ after running it through $t=0$ to $T-1$.
For the time-series prediction task, we use its prediction $\overline{\bm{y}_t}$ as input to the system similarly to the ordinary RNN case, since we expect QRNN to output $\overline{\bm{y}_t}$ which is close to its true value $\bm{x}_{t+1}$.
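The following NumPy sketch simulates a single QRNN time step in the minimal setting $n_A=n_B=1$. It is meant only as an illustration of the three parts described above; in particular, the two-qubit ansatz $U(\bm{\theta})$ used below (single-qubit rotations following a $ZZ$ coupling) is an assumption made for this sketch and is not the circuit employed in our numerical simulations.
\begin{verbatim}
# Toy density-matrix simulation of one QRNN time step with n_A = n_B = 1.
# The two-qubit ansatz U(theta) (R_y rotations + a ZZ coupling) is an
# illustrative assumption, not the circuit used in the paper's simulations.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.diag([1.0, -1.0])

def ry(theta):
    return expm(-0.5j * theta * Y)

def u_circuit(theta):
    # U(theta) = (R_y on A) (x) (R_y on B), applied after exp(-i theta_3 ZZ/2).
    return (np.kron(ry(theta[0]), ry(theta[1]))
            @ expm(-0.5j * theta[2] * np.kron(Z, Z)))

def qrnn_step(rho_A, x, theta):
    # Encoding part: prepare R_y(arccos x)|0> on the group B qubit.
    psi_B = ry(np.arccos(x)) @ np.array([1.0, 0.0])
    rho_B = np.outer(psi_B, psi_B.conj())
    # Evolution part: apply U(theta) to the joint state of A and B.
    U = u_circuit(theta)
    rho = U @ np.kron(rho_A, rho_B) @ U.conj().T
    # Measurement part: <Z> on the B qubit gives the raw prediction.
    pred = np.real(np.trace(rho @ np.kron(I2, Z)))
    # B is then discarded (traced out) and reinitialized at the next encoding.
    rho4 = rho.reshape(2, 2, 2, 2)            # indices (A, B, A', B')
    rho_A_new = np.einsum('ijkj->ik', rho4)   # partial trace over B
    return pred, rho_A_new

rho_A = np.array([[1.0, 0.0], [0.0, 0.0]])    # group A starts in |0><0|
theta = np.array([0.4, 1.1, 0.8])             # arbitrary sample parameters
for x in np.cos(0.3 * np.arange(5)):          # a short input sequence
    pred, rho_A = qrnn_step(rho_A, x, theta)
    print(pred)
\end{verbatim}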
\begin{figure*}
\caption{Structure of QRNN. (a) Structure of QRNN for a single time step.
(b) QRNN in the training phase.
(c) QRNN in the prediction phase. Note that the obtained prediction is fed back as the input for the next time step.}
\label{fig:qrnn}
\end{figure*}
\subsection{Capability of QRNN}
Here, we will discuss what types of function QRNN can model using a simple example.
QRNN exploits the tensor product structure to gain the nonlinearity of the output with respect to the input in a way similar to Ref. \cite{Mitarai2018}.
For simplicity, here we assume that the input and teacher data are scalar.
Suppose that we use a single qubit as group B, and take $U_{\mathrm{in}}(x) = R_y(\arccos x)$.
This gate applied to $\ket{0}$ encodes $x$ to the $Z$-expectation value of the qubit, that is, at $t=0$, the state after injecting the first input datum $x_0$ is,
\begin{equation}
\frac{1}{2}(I+\sqrt{1-x_0^2} X+ x_0 Z)\otimes \ket{0}\bra{0}^{\otimes n_A}.
\end{equation}
After the evolution $U(\bm{\theta})$, the density matrix of the system is,
\begin{equation}\label{eq:t0rho}
\sum_{P\in \mathcal{P}_n} \left(c_{1P}(\bm{\theta})x_0 + c_{2P}(\bm{\theta})\sqrt{1-x_0^2}+c_{3P}(\bm{\theta})\right) P,
\end{equation}
where $\mathcal{P}_n = \{I,X,Y,Z\}^{\otimes n}$ and $c_{1P}(\bm{\theta})$, $c_{2P}(\bm{\theta})$, and $c_{3P}(\bm{\theta})$ are real coefficients.
This means that an expectation value of any Pauli operator $P$ can be written as a linear combination of $\{x_0, \sqrt{1-x_0^2}, 1\}$, and the output $\overline{y_0}$ can also be written in terms of them.
In the next time step of QRNN, $\rho_{0}^A$, which is the reduced density matrix of Eq. (\ref{eq:t0rho}) obtained by tracing out $B$, is generally in the form of,
\begin{equation}\label{eq:rho_1A}
\rho_{0}^A = \sum_{P\in\mathcal{P}_{n_A}} \left(c_{1P}'(\bm{\theta})x_0 + c_{2P}'(\bm{\theta})\sqrt{1-x_0^2}+c_{3P}'(\bm{\theta})\right) P,
\end{equation}
with some coefficient $c_{1P}'(\bm{\theta})$, $c_{2P}'(\bm{\theta})$, and $c_{3P}'(\bm{\theta})$.
Then, $x_1$ is injected into $B$ by $U_{\mathrm{in}}(x)$, and $U(\bm{\theta})$ is applied to the whole system.
This results in the density matrix in the form of,
\begin{equation}\label{eq:t1rho}
\sum_{P\in\mathcal{P}_{n_A}} \sum_i c_{iP}''(\bm{\theta})\phi_i(x_0, x_1) P,
\end{equation}
where $\{\phi_i(x_0,x_1)\} = \{x_0,$ $x_1,$ $\sqrt{1-x_0^2},$ $\sqrt{1-x_1^2},$ $x_0x_1,$ $x_0\sqrt{1-x_1^2},$ $x_1\sqrt{1-x_0^2},$ $1\}$, and $c_{iP}''$ are coefficients.
Note that the nonlinear functions such as $x_0x_1$ originate from the tensor product structure of the system.
Therefore, we can conclude the output $\overline{y_1}$ can be written in terms of a linear combination of $\phi_i(x_0, x_1)$ in a way similar to the above.
Repeating the above discussion, we can see that the output of QRNN can have highly nonlinear terms such as $\prod_{t=0}^{\tau} x_t$ thanks to the tensor product structure.
Note that QRNN can ``choose'' which term to be kept in group A qubits by tuning $U(\bm{\theta})$.
For example, QRNN can fully keep the $x_0$ term in $\rho_0^A$ if $U(\bm{\theta})$ transforms $Z\otimes I^{\otimes n_A}$, which carries the coefficient $x_0$ in the input state, to a local Pauli operator acting only on $A$.
On the other hand, if $U(\bm{\theta})$ transforms $Z\otimes I^{\otimes n_A}$ to a nonlocal Pauli operator acting on both $A$ and $B$, the $x_0$ term vanishes in $\rho_0^A$, since the partial trace removes such a term.
If $U(\bm{\theta})$ acts in a manner intermediate between these two extreme cases, the magnitude of the $x_0$ term in $\rho_0^A$ becomes smaller than in the former case, that is, QRNN partially ``forgets'' $x_0$.
We believe that we can employ these insights to construct more efficient QRNNs in future research.
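To make this discussion concrete, we include below a minimal NumPy sketch for the case $n_A=n_B=1$; it is purely illustrative (the unitary standing in for $U(\bm{\theta})$ is simply a fixed random two-qubit unitary, and all function and variable names are of our choosing, not part of any implementation of this work). It verifies numerically that the expectation value of an arbitrary Pauli operator after the first time step is exactly a linear combination of $\{x_0, \sqrt{1-x_0^2}, 1\}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the |0> state
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
ket0 = np.array([1.0, 0.0], dtype=complex)

def ry(theta):
    # R_y(theta) = exp(-i theta Y / 2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def encode(x):
    # state of qubit B after U_in(x) = R_y(arccos x) applied to |0><0|
    ket = ry(np.arccos(x)) @ ket0
    return np.outer(ket, ket.conj())

# a fixed random two-qubit unitary standing in for U(theta); ordering B (x) A
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U = expm(-1j * (G + G.conj().T) / 2)

def pauli_expectation(x, P):
    rho = np.kron(encode(x), np.outer(ket0, ket0.conj()))  # rho_B(x) (x) |0><0|_A
    rho = U @ rho @ U.conj().T
    return np.real(np.trace(rho @ P))

# fit coefficients of {x, sqrt(1-x^2), 1} on three points and check a fourth
P = np.kron(Z, X)                     # an arbitrary two-qubit Pauli operator
xs = np.array([-0.7, 0.1, 0.8])
M = np.stack([xs, np.sqrt(1 - xs**2), np.ones_like(xs)], axis=1)
coeffs = np.linalg.solve(M, [pauli_expectation(x, P) for x in xs])
x_new = 0.37
exact = pauli_expectation(x_new, P)
fit = coeffs @ [x_new, np.sqrt(1 - x_new**2), 1.0]
print(abs(fit - exact))               # ~1e-16: exactly linear in the three basis functions
\end{verbatim}
Repeating the same check after injecting a second input $x_1$ reproduces the basis $\{\phi_i(x_0,x_1)\}$ of Eq. (\ref{eq:t1rho}).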
\subsection{Experimental realization}
As mentioned above, in the prediction phase of QRNN the expectation value of an observable, $\braket{O}$, is used as a new input, which requires us to obtain $\braket{O}$ in a single-shot manner.
Ensemble quantum systems such as NMR quantum computers, where such measurements are possible, are desirable for this reason.
While nuclear spins used as qubits in NMR are hard to initialize, we believe this problem can be resolved by e.g. a technique called dynamic nuclear polarization, which initializes nuclear spins by transferring the state of initialized electron spins \cite{deBoer1974, HENSTRA19906, Tateishi2014, PhysRevB.87.125207}.
On the other hand, the use of single quantum systems such as superconducting or trapped-ion qubits only allows us to extract bitstrings with single-shot measurements, which leads to inefficiency in the prediction phase of time-series prediction tasks.
For such tasks we have to run the whole QRNN up to time step $t$ to obtain a prediction $\overline{\bm{y}_{t}}$, and this prediction must be obtained with some accuracy before proceeding to the next time step, since it is used as the input to the system.
However, we might be able to extend the framework of QRNN to use such single quantum systems; for example, they can be used for time-series prediction of binary data, where the predicted data itself is represented by bitstrings.
We leave such an extension for future work.
\subsection{Relation to existing algorithms}
Quantum reservoir computing (QRC) \cite{Fujii2017} has been proposed as a technique to tackle temporal learning tasks using complex quantum dynamics.
QRC is a quantum-classical hybrid algorithm similar to QRNN in its purpose; both are used to train on and predict time-series data.
The difference between QRNN and QRC is that, while QRC updates the readout transformation $g$ to minimize the cost function, QRNN mainly tunes the quantum system itself by optimizing the circuit parameters $\bm{\theta}$.
Subsequent theoretical \cite{Chen2019, Chen2020, Ghosh2019, Ghosh2020, ghosh2020universal,govia2020quantum,Martinez2020,Kutvonen2020} and experimental works \cite{negoro2018machine, Chen2020} have extended the QRC in various ways.
We also note a work with a similar title, Ref. \cite{bausch2020recurrent}.
That approach is based on the quantum neurons presented in Ref. \cite{cao2017quantum} and uses sophisticated quantum circuits.
This is in contrast to our approach, where the parametrized circuit $U(\bm{\theta})$ can be constructed in a hardware-efficient way, as we show in the next section by numerical simulations.
\section{Numerical experiments} \label{sec:numerical}
Here, we demonstrate and analyze the performance of the proposed QRNN by simulating its training and prediction phases for time-series prediction tasks.
We use scalar input data $\{x_t\}$, and the task is to construct a QRNN that can output $\overline{y_t}\approx x_{t+1}$.
Throughout the simulations presented in this section, we investigate a QRNN circuit using $n=6$ qubits; groups A and B have $n_A=3$ and $n_B=3$ qubits, respectively.
As the encoding gate, we use,
\begin{align}
U_{\rm in}(x_t)=R_y(\arccos{x_t}),
\end{align}
acting on each qubit in group B, following the ideas presented in Ref. \cite{Mitarai2018}.
Note that the input gate can be chosen differently; for example, we might be able to improve the performance of QRNN by changing it to encode orthogonal polynomials.
At the evolution part, the circuit shown in Fig. \ref{fig:rotation_gate} is applied as the parametrized gate $U(\bm{\theta})$, which consists of alternating layers of single-qubit rotations and Hamiltonian dynamics.
We fix the Hamiltonian and optimize the angles of the single-qubit rotations.
The single-qubit rotations are parametrized as,
\begin{align}
U_1(\alpha, \beta, \gamma)=R_x(\alpha)R_z(\beta)R_x(\gamma),
\label{eq:X-Zdecomposition}
\end{align}
where $\alpha$, $\beta$, and $\gamma$ are real numbers, and $R_x$ and $R_z$ are single-qubit rotations around $x$ and $z$-axis, respectively.
After the rotation, the whole system is evolved with the following Hamiltonian,
\begin{align}
H_{\rm{int}}=\sum_{j=1}^{n} a_{j} X_{j}+\sum_{j=1}^{n} \sum_{k=1}^{j-1} J_{j k} Z_{j} Z_{k},
\label{eq:H_int}
\end{align}
following Ref. \cite{Fujii2017, Mitarai2018}.
We denote the evolution time by $\tau$.
We repeat the application of $U_1(\alpha, \beta, \gamma)$ and $e^{-iH_{\rm int} \tau}$ $D=3$ times, as shown in Fig. \ref{fig:rotation_gate}.
The coefficients $a_j, J_{jk}$ are taken randomly from a uniform distribution on $[-1, 1]$ and fixed during the training.
At the measurement part, we measure $Z$-expectation value of each qubit in group B and take $\overline{y_t}$ as their average multiplied by a real coefficient $c$.
In the simulation presented in the following subsections, the coefficient $c$ is also optimized along with $(\alpha_i^{(d)}, \beta_i^{(d)}, \gamma_i^{(d)})$ for each qubit.
As the classical optimizer, we employ the BFGS algorithm implemented in SciPy \cite{Virtanen2020}.
All of the parameters are initialized to $0$ except for the coefficient $c$ which is initialized to $1$.
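For reference, the following Python/NumPy sketch implements one time step of the circuit just described: encoding by $R_y(\arccos x_t)$ on the B qubits, $D$ alternating layers of single-qubit rotations and $e^{-iH_{\rm int}\tau}$, readout of the averaged $Z$-expectation value on B multiplied by $c$, and the partial trace that carries group A to the next step. It is a down-scaled illustration with $n_A=n_B=1$ rather than the $3+3$ qubits used in our simulations, and all names are of our choosing.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# qubit ordering: [A qubits..., B qubits...]
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def rx(a): return expm(-1j * a / 2 * X)
def rz(a): return expm(-1j * a / 2 * Z)
def ry(a): return expm(-1j * a / 2 * Y)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n_A, n_B = 1, 1                     # the paper's simulations use n_A = n_B = 3
n = n_A + n_B
D, tau = 3, 0.2

rng = np.random.default_rng(1)
a = rng.uniform(-1, 1, n)           # coefficients a_j of H_int
J = rng.uniform(-1, 1, (n, n))      # couplings J_jk (lower triangle used)
H_int = sum(a[j] * kron_all([X if q == j else I2 for q in range(n)])
            for j in range(n))
H_int = H_int + sum(J[j, k] * kron_all([Z if q in (j, k) else I2 for q in range(n)])
                    for j in range(n) for k in range(j))

def U_theta(theta):
    # theta[d, q] = (alpha, beta, gamma) of qubit q in layer d
    U = np.eye(2**n, dtype=complex)
    for d in range(D):
        layer = kron_all([rx(theta[d, q, 0]) @ rz(theta[d, q, 1]) @ rx(theta[d, q, 2])
                          for q in range(n)])
        U = expm(-1j * H_int * tau) @ layer @ U
    return U

def qrnn_step(rho_A, x, theta, c):
    # inject x into B, apply U(theta), read out c * mean <Z_B>, trace out B
    ket_B = kron_all([ry(np.arccos(x)) @ np.array([1, 0], dtype=complex).reshape(2, 1)
                      for _ in range(n_B)])
    rho = np.kron(rho_A, ket_B @ ket_B.conj().T)
    U = U_theta(theta)
    rho = U @ rho @ U.conj().T
    Z_B = sum(kron_all([Z if q == n_A + b else I2 for q in range(n)])
              for b in range(n_B)) / n_B
    y = c * np.real(np.trace(rho @ Z_B))
    rho_A = np.einsum('ibjb->ij', rho.reshape(2**n_A, 2**n_B, 2**n_A, 2**n_B))
    return y, rho_A

theta, c = np.zeros((D, n, 3)), 1.0   # initialization used above
rho_A = np.zeros((2**n_A, 2**n_A), dtype=complex); rho_A[0, 0] = 1.0
for x in [0.5, 0.3, -0.2]:            # a few arbitrary inputs
    y, rho_A = qrnn_step(rho_A, x, theta, c)
\end{verbatim}
Since a non-selective measurement of $B$ followed by its re-initialization does not change the reduced state of $A$, the partial trace used above reproduces the expected outputs of the QRNN.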
\begin{figure}
\caption{The evolution part of the circuit utilized in the numerical simulations.
$\alpha_i^{(d)}$, $\beta_i^{(d)}$, and $\gamma_i^{(d)}$ denote the rotation angles applied to the $i$-th qubit in the $d$-th layer.}
\label{fig:rotation_gate}
\end{figure}
\subsection{Demonstration}
First, we train the above-described QRNN to predict a cosine wave and a triangular wave.
For the cosine wave, we use $x_t = \cos{(\pi t')}/2$, and for the triangular wave, we use
\begin{align}
x_t = \begin{cases}
-t'+\frac{1}{2} & (0 \leq t' \leq 1) \\
t'-\frac{3}{2} & (1 \leq t' \leq 2) \\
-t'+\frac{5}{2} & (2 \leq t' \leq 3) \\
t'-\frac{7}{2} & (3 \leq t' \leq 4)
\end{cases},
\end{align}
for $t'=\frac{8}{199}t$.
For both of them, $\{x_t\}_{t=0}^{99}$ is used as training data.
After the training, we test the accuracy of the predictions for $100\leq t < 200$.
The evolution time $\tau$ is set to $0.2$ in this numerical experiment.
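For completeness, the training and test sequences for these two targets can be generated as follows (a small illustrative snippet; the periodic extension of the piecewise triangular wave beyond $t'=4$ is our assumption):
\begin{verbatim}
import numpy as np

T = 200
t = np.arange(T)
tp = 8 / 199 * t                        # t' = 8 t / 199

x_cos = np.cos(np.pi * tp) / 2          # cosine wave
x_tri = np.abs((tp % 2) - 1) - 0.5      # triangular wave (period 2, amplitude 1/2)

x_train_cos, x_test_cos = x_cos[:100], x_cos[100:]
x_train_tri, x_test_tri = x_tri[:100], x_tri[100:]
\end{verbatim}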
Secondly, we train QRNN to predict quantum spin dynamics since we envision that the proposed approach is suited for such applications, that is, for predictions of quantum phenomena.
Here, we use open quantum dynamics of a 3-spin system generated by a Lindblad master equation,
\begin{align}
\frac{d}{dt'}\sigma(t') &= -i[H, \sigma(t')] \notag \\
&+ \sum_{k} \frac{1}{2}\left[2 C_{k} \sigma(t') C_{k}^{\dagger}-\sigma(t') C_{k}^{\dagger} C_{k}-C_{k}^{\dagger} C_{k} \sigma(t')\right]
\end{align}
with $H = -\frac{1}{2}\sum^{3}_{i=1}h^{(i)} Z_i -\frac{1}{2}\sum^{2}_{i=1}(J_x^{(i)}X_i X_{i+1} + J_y^{(i)}Y_i Y_{i+1}+J_z^{(i)}Z_i Z_{i+1})$ and $\{C_k\} = \{c(X_i+Y_i)\}_{i=1}^3$ as the data to be predicted.
The coefficients are set to $h^{(i)}=2\pi$, $J_\mu^{(i)} = 0.1\pi$ for $\mu=x, y, z$, and $c=\sqrt{0.002}$.
The initial state of the system is chosen to be $\ket{0}^{\otimes 3}$.
We train the QRNN to predict the $X$-expectation value of the first spin, $\braket{X_1(t')}$, at time $t'=\frac{100}{499}t$, that is, we take $x_t=\braket{X_1(t')}$.
$\{x_t\}_{t=0}^{199}$ is used for the training, and we test the prediction afterwards for $200\leq t<500$. The evolution time $\tau$ is set to $0.18$ in this numerical experiment.
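The teacher data for this task can be generated, for instance, with QuTiP's master-equation solver; the short sketch below (assuming QuTiP is available, with variable names of our choosing) reproduces the setting described above.
\begin{verbatim}
import numpy as np
from qutip import sigmax, sigmay, sigmaz, qeye, tensor, basis, mesolve

def op(single, i, n=3):
    # embed a single-spin operator at site i of the 3-spin chain
    return tensor([single if j == i else qeye(2) for j in range(n)])

h, Jc, cc = 2 * np.pi, 0.1 * np.pi, np.sqrt(0.002)
H = -0.5 * sum(h * op(sigmaz(), i) for i in range(3))
H = H - 0.5 * sum(Jc * (op(sigmax(), i) * op(sigmax(), i + 1)
                        + op(sigmay(), i) * op(sigmay(), i + 1)
                        + op(sigmaz(), i) * op(sigmaz(), i + 1)) for i in range(2))
c_ops = [cc * (op(sigmax(), i) + op(sigmay(), i)) for i in range(3)]

psi0 = tensor([basis(2, 0)] * 3)
tlist = 100 / 499 * np.arange(500)        # t' = 100 t / 499, t = 0, ..., 499
result = mesolve(H, psi0, tlist, c_ops, e_ops=[op(sigmax(), 0)])
x = np.array(result.expect[0])            # x_t = <X_1(t')>
x_train_spin, x_test_spin = x[:200], x[200:]
\end{verbatim}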
Figure \ref{fig:results} shows the results of the numerical simulations for which the mean squared error (MSE) between the output and the teacher on the first 25 test points is the smallest among 10 randomly generated sets of coefficients $a_{j}, J_{jk}$.
In the figure, the initial output shows a sequence of predictions $\{\overline{y_t}\}$ obtained from QRNN with the initial parameters.
The reason it resembles the true data $x_t$ in the training region is that we input $x_t$ by $R_y(\arccos{x_t})$, which encodes $x_t$ into the $Z$-expectation values of the qubits.
This means that $x_t=\overline{y_{t}}$ if $\tau=0$.
For nonzero $\tau$, we can naturally expect $\overline{y_{t}}$ to be somewhat smaller than $x_t$ because of the interaction.
Looking at the optimized output, we can see that QRNN can be successfully trained to fit a given training dataset and to make predictions after the training.
This can also be verified with MSE between the output and the teacher evaluated on the first 25 test points, which is on the order of $10^{-3}$ for all data types (Tab. \ref{tab:mse}).
During the experiments, we found that the quality of the prediction strongly depends on the parameter $\tau$ which determines the interaction strength between the qubits.
This phenomenon is further analyzed in the next subsection.
\begin{figure}
\caption{Results of numerical simulation of training the QRNN to predict (a) cosine wave, (b) triangular wave, and (c) three-spin Lindblad dynamics.
The data points on the left of the vertical grey dashed line are used as the training dataset.
The solid lines represent the function which the QRNN is trained to model.
The parts of the data used in the training phase and the prediction phase are depicted as the black and orange lines, respectively.}
\label{fig:results}
\end{figure}
\begin{table}
\caption{\label{tab:mse} Mean squared error between the trained output and the test data shown in Fig. \ref{fig:results} evaluated with the first 25 points after the training region.}
\begin{ruledtabular}
\begin{tabular}{cccc}
Data & cos & triangle & spin dynamics\\
MSE & $3.33\times 10^{-4}$ & $2.6\times 10^{-3} $ & $2.95\times 10^{-3}$
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Dependence on the interaction strength}
Here, we vary $\tau$ from 0 to 10 and train the QRNN on the cosine wave used in Fig. \ref{fig:results} (a) to investigate how the performance of the QRNN depends on the interaction strength.
For this purpose, we randomly drew the coefficients of the interaction Hamiltonian $H_{\rm{int}}$ (Eq. (\ref{eq:H_int})) 10 times from the uniform distribution on $[-1, 1]$ and trained each QRNN.
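A sketch of this sweep, reusing qrnn_step, U_theta, and the data arrays from the earlier snippets, is given below. It uses SciPy's BFGS with its default finite-difference gradient (the analytic parameter-shift gradient of the appendix is not used here) and clips the fed-back predictions to the valid encoding range; both choices are ours and are meant only to illustrate the procedure, not to reproduce our implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def training_loss(params, xs):
    # L = (1/2) sum_t (y_t - x_{t+1})^2 for one parameter vector (theta, c)
    theta, c = params[:-1].reshape(D, n, 3), params[-1]
    rho_A = np.zeros((2**n_A, 2**n_A), dtype=complex); rho_A[0, 0] = 1.0
    total = 0.0
    for t in range(len(xs) - 1):
        y, rho_A = qrnn_step(rho_A, xs[t], theta, c)
        total += 0.5 * (y - xs[t + 1])**2
    return total

def test_mse(params, xs_train, xs_test, n_test=25):
    theta, c = params[:-1].reshape(D, n, 3), params[-1]
    rho_A = np.zeros((2**n_A, 2**n_A), dtype=complex); rho_A[0, 0] = 1.0
    xs_all = np.concatenate([xs_train, xs_test])
    preds = []
    for t in range(len(xs_train) + n_test - 1):
        # true data in the training region, fed-back predictions afterwards
        x_in = xs_all[t] if t < len(xs_train) else np.clip(preds[-1], -1.0, 1.0)
        y, rho_A = qrnn_step(rho_A, x_in, theta, c)
        preds.append(y)
    preds = np.array(preds[len(xs_train) - 1:])   # predictions of the first n_test test points
    return np.mean((preds - xs_all[len(xs_train):len(xs_train) + n_test])**2)

mse_per_tau = {}
for tau in [0.05, 0.2, 1.0, 5.0, 10.0]:   # rebinds the module-level tau read by U_theta
    p0 = np.zeros(D * n * 3 + 1); p0[-1] = 1.0
    res = minimize(training_loss, p0, args=(x_train_cos,), method='BFGS')
    mse_per_tau[tau] = test_mse(res.x, x_train_cos, x_test_cos)
\end{verbatim}
Reproducing Fig. \ref{fig:tau_vs_mse} additionally requires looping over 10 random draws of $(a_j, J_{jk})$ and a finer grid of $\tau$ values.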
\begin{figure}
\caption{Relation between the evolution time $\tau$ and the prediction error evaluated in terms of the mean squared error.}
\label{fig:tau_vs_mse}
\end{figure}
Fig. \ref{fig:tau_vs_mse} shows the mean squared error between $x_t$ and $\overline{y_t}$ on the first 25 test time steps.
From Fig. \ref{fig:tau_vs_mse}, we notice that there exists a certain range of $\tau$ which provides relatively accurate predictions.
More specifically, the QRNN circuits with $\tau$ comparable to the coefficients $a_j$ and $J_{jk}$ work better than the others.
This is as expected: a short $\tau$ leads to less information being transferred to the subsystem $A$.
This means that the QRNN cannot store the past information, since the subsystem $B$ is measured and initialized at every time step, which erases all information that is left in $B$.
On the other hand, when $\tau$ is sufficiently larger than the $a_j$ and $J_{jk}$ of the Hamiltonian, the dynamics under $H_{\mathrm{int}}$ becomes complex, which makes it hard for the single-qubit rotations to extract the required information from the system.
This analysis shows that the QRNN indeed utilizes the past information stored in the subsystem $A$, and that we must have a certain amount of entangling operations at the evolution part of the QRNN.
\section{Conclusion} \label{sec:conclusion}
We proposed a quantum version of an RNN for temporal learning tasks.
The proposed algorithm, QRNN, employs a parametrized quantum circuit with a recurrent structure.
In QRNN, one group of qubits is measured and initialized at every time step to obtain outputs and inject inputs, while the other qubits are never measured and store the past information.
We provided numerical experiments to verify the validity of the idea.
It remains an open question whether the proposed QRNN performs better than the classical RNN, and we leave it as a future research direction.
However, we note that the QRC, which has a structure similar to that of the QRNN, has been shown to have better performance than its classical counterpart \cite{Fujii2017}.
We can, therefore, expect the QRNN to have similar performance when its structure is optimized.
As the QRNN explicitly optimizes the dynamics of the quantum system, we especially expect it to be able to predict quantum phenomena, such as the one we used in our numerical experiments.
\section{Calculation of analytic gradient} \label{apdx:derivative}
Here we describe how to evaluate the analytic gradient of the QRNN cost function using the so-called parameter shift rule \cite{Mitarai2018, schuld2019evaluating, koczor2020quantum, Mitarai2019meth}.
In short, the rule allows us to evaluate an expectation value of any observable $O$ with respect to the following operator,
\begin{align}
\frac{\partial}{\partial \theta_i}\left[U(\bm{\theta})\rho U^\dagger(\bm{\theta})\right] = \frac{\partial U(\bm{\theta})}{\partial \theta_i}\rho U^\dagger(\bm{\theta}) + U(\bm{\theta})\rho \frac{\partial U^\dagger(\bm{\theta})}{\partial \theta_i},
\end{align}
for any state $\rho$ by evaluating the expectation value twice at shifted parameters if the parametrized gate corresponding to the parameter $\theta_i$ can be written as $e^{i\theta_i P}$ for some Pauli operator $P$.
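As a minimal sanity check of the rule, written here for the common half-angle convention $R_x(\theta)=e^{-i\theta X/2}$ used in our circuit (for which the shifts are $\pm\pi/2$ with a prefactor $1/2$), the following snippet compares the shifted-parameter estimate with a finite-difference derivative for a single-qubit example; it is an illustration only.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
ket0 = np.array([1.0, 0.0], dtype=complex)

def expval(theta):
    # <0| R_x(theta)^dag Z R_x(theta) |0> = cos(theta)
    psi = expm(-1j * theta / 2 * X) @ ket0
    return np.real(psi.conj() @ Z @ psi)

theta = 0.3
shift = 0.5 * (expval(theta + np.pi / 2) - expval(theta - np.pi / 2))
eps = 1e-6
finite_diff = (expval(theta + eps) - expval(theta - eps)) / (2 * eps)
print(shift, finite_diff)   # both approximately -sin(theta)
\end{verbatim}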
For simplicity, we assume $x_t$ to be scalar and that the output is obtained from a single observable $O$ as $\overline{y_t}=g\left(\braket{O}_t\right)$.
Also, the cost function is assumed to be $L=\frac{1}{2}\sum_{t=0}^{T-2}(\overline{{y}_{t}}-{x}_{t+1})^{2}$. However, the generalization of the discussion below to other cost functions is straightforward.
Now, the gradient of $L$ can be written as,
\begin{align}\label{appeq:dl}
\frac{\partial L}{\partial \theta_i} &= \sum_{t=0}^{T-2}(\overline{y_t}({\bm \theta}, x_0, \cdots x_{t})-x_{t+1})\frac{\partial \overline{y_{t}}}{\partial \theta_i}.
\end{align}
Next, we can express $\frac{\partial \overline{y_t}}{\partial \theta_i}$ as,
\begin{align}\label{appeq:dx}
\frac{\partial \overline{y_t}}{\partial \theta_i} &= \frac{\partial g\left(\braket{O}_{t}\right)}{\partial \theta_i} = \frac{\partial g}{\partial \braket{O}_{t}}\frac{\partial \braket{O}_{t}}{\partial \theta_i}.
\end{align}
Let us define the input state $\rho_{\mathrm{in}, t}^{B} := U_{\mathrm{in}}(x_{t})\ket{0}\bra{0}U^\dagger_{\mathrm{in}}(x_{t})$ and $\rho_{\mathrm{in},t}^{AB}(\bm{\theta},x_0,\cdots,x_{t}) = \rho_{t-1}^A\otimes \rho_{\mathrm{in}, t}^{B}$.
Below we abbreviate the dependence of $\rho_{\mathrm{in},t}^{AB}$ on $x_0,\cdots,x_{t}$ and just write $\rho_{\mathrm{in},t}^{AB}(\bm{\theta})$ for simplicity.
Then, $\frac{\partial \braket{O}_{t}}{\partial \theta_i}$ can be written as,
\begin{align}
\frac{\partial \braket{O}_{t}}{\partial \theta_i} &= \frac{\partial}{\partial \theta_i}\mathrm{Tr} \left[\rho^B_t O\right] \notag \\
&= \frac{\partial}{\partial \theta_i} \mathrm{Tr} \left[ U (\bm{ \theta}) \rho_{\mathrm{in},t}^{AB}(\bm{\theta}) U^\dagger (\bm{\theta}) O\right] \notag \\
\begin{split}
&= {\rm Tr}\left[\frac{\partial U({\bm \theta})}{\partial \theta_i} \rho_{\mathrm{in},t}^{AB}({\bm \theta}) U^{\dagger}({\bm \theta}) O\right]\\
&\quad+ {\rm Tr}\left[U({\bm \theta}) \rho_{\mathrm{in},t}^{AB}({\bm \theta}) \frac{\partial U^{\dagger}({\bm \theta})}{\partial \theta_i} O\right] \\
&\quad+ {\rm Tr} \left[U({\bm \theta}) \frac{\partial \rho_{\mathrm{in},t}^{AB}(\bm{\theta})}{\partial \theta_i} U^{\dagger}({\bm \theta})O\right]. \label{appeq:do}
\end{split}
\end{align}
In Eq. (\ref{appeq:do}), the first two terms can easily be evaluated by the parameter shift rule.
As for the last term,
\begin{align}
\frac{\partial \rho_{\mathrm{in},t}^{AB}({\bm \theta})}{\partial \theta_i} &= \frac{\partial \rho_{t-1}^A({\bm \theta})}{\partial \theta_i} \otimes \rho_{\mathrm{in},t}^B \notag \\
&= \frac{\partial}{\partial \theta_i}\mathrm{Tr}_B \left[U({\bm \theta})\rho_{\mathrm{in},t-1}^{AB}(\bm{\theta}) U^{\dagger}({\bm \theta})\right] \otimes \rho_{\mathrm{in},t}^B \notag \\
\begin{split}
&= \left\{\mathrm{Tr}_B \left[ \frac{\partial U({\bm \theta})}{\partial \theta_i} \rho_{\mathrm{in},t-1}^{AB}(\bm{\theta}) U^{\dagger}({\bm \theta})\right]\right. \\
&\quad+ \mathrm{Tr}_B\left[U({\bm \theta}) \rho_{\mathrm{in},t-1}^{AB} (\bm{\theta})\frac{\partial U^{\dagger}({\bm \theta})}{\partial \theta_i}\right] \\
&\quad \left.+ \mathrm{Tr}_B\left[U({\bm \theta}) \frac{\partial \rho_{\mathrm{in},t-1}^{AB}(\bm{\theta})}{\partial \theta_i} U^{\dagger}({\bm \theta})\right]\right\} \otimes \rho_{\mathrm{in},t}^B.
\end{split}
\end{align}
In the above, the first two terms can be evaluated with parameter shift rule.
For the last term, we need to expand it again to express $U({\bm \theta}) \frac{\partial \rho_{\mathrm{in},t-1}^{AB}(\bm{\theta})}{\partial \theta_i} U^{\dagger}({\bm \theta})$ in terms of $U({\bm \theta}) \frac{\partial \rho_{\mathrm{in},t-2}^{AB}(\bm{\theta})}{\partial \theta_i} U^{\dagger}({\bm \theta})$.
The above discussion implies that, to evaluate $U({\bm \theta}) \frac{\partial \rho_{\mathrm{in},t}^{AB}({\bm \theta})}{\partial \theta_i} U^{\dagger}({\bm \theta})$, which appears in Eq. (\ref{appeq:do}), we have to perform the above expansion recursively down to $U({\bm \theta}) \frac{\partial \rho_{\mathrm{in},0}^{AB}}{\partial \theta_i} U^{\dagger}({\bm \theta})$, where the expansion terminates since $\rho_{\mathrm{in},0}^{AB}=\ket{0}\bra{0}\otimes \rho^{B}_{\mathrm{in}}(x_0)$ does not depend on $\bm{\theta}$.
As every recursive expansion yields a pair of terms that can be evaluated by shifting the parameter twice, we need $2T$ additional evaluations of $\braket{O}$ to obtain the analytical gradient of the cost function for each $\theta_i$.
\begin{thebibliography}{36}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{https://doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Biamonte}\ \emph {et~al.}(2017)\citenamefont
{Biamonte}, \citenamefont {Wittek}, \citenamefont {Pancotti}, \citenamefont
{Rebentrost}, \citenamefont {Wiebe},\ and\ \citenamefont
{Lloyd}}]{Biamonte2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Biamonte}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Wittek}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Pancotti}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Rebentrost}}, \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Wiebe}},\ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ }\bibfield {title} {\bibinfo
{title} {Quantum machine learning},\ }\href
{https://doi.org/10.1038/nature23474} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {549}},\ \bibinfo {pages}
{195} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Schuld}\ \emph {et~al.}(2016)\citenamefont {Schuld},
\citenamefont {Sinayskiy},\ and\ \citenamefont {Petruccione}}]{Schuld2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Schuld}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Sinayskiy}},\
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Petruccione}},\
}\bibfield {title} {\bibinfo {title} {{Prediction by linear regression on a
quantum computer}},\ }\href {https://doi.org/10.1103/PhysRevA.94.022342}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {94}},\ \bibinfo {pages} {022342} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wiebe}\ \emph {et~al.}(2012)\citenamefont {Wiebe},
\citenamefont {Braun},\ and\ \citenamefont {Lloyd}}]{Wiebe2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Wiebe}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Braun}},\ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ }\bibfield
{title} {\bibinfo {title} {{Quantum Algorithm for Data Fitting}},\ }\href
{https://doi.org/10.1103/PhysRevLett.109.050505} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\
\bibinfo {pages} {050505} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rebentrost}\ \emph {et~al.}(2014)\citenamefont
{Rebentrost}, \citenamefont {Mohseni},\ and\ \citenamefont
{Lloyd}}]{Rebentrost2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Rebentrost}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Mohseni}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Lloyd}},\ }\bibfield {title} {\bibinfo {title} {Quantum support vector
machine for big data classification},\ }\href
{https://doi.org/10.1103/PhysRevLett.113.130503} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\
\bibinfo {pages} {130503} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yamasaki}\ \emph {et~al.}(2020)\citenamefont
{Yamasaki}, \citenamefont {Subramanian}, \citenamefont {Sonoda},\ and\
\citenamefont {Koashi}}]{yamasaki2020learning}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Yamasaki}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Subramanian}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Sonoda}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Koashi}},\ }\href@noop {} {\bibinfo {title} {Learning with optimized random
features: Exponential speedup by quantum machine learning without sparsity
and low-rank assumptions}} (\bibinfo {year} {2020}),\ \Eprint
{https://arxiv.org/abs/2004.10756} {arXiv:2004.10756 [quant-ph]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Schuld}\ and\ \citenamefont
{Killoran}(2019)}]{Schuld2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Schuld}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Killoran}},\ }\bibfield {title} {\bibinfo {title} {Quantum machine learning
in feature hilbert spaces},\ }\href
{https://doi.org/10.1103/PhysRevLett.122.040504} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {122}},\
\bibinfo {pages} {040504} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mitarai}\ \emph {et~al.}(2018)\citenamefont
{Mitarai}, \citenamefont {Negoro}, \citenamefont {Kitagawa},\ and\
\citenamefont {Fujii}}]{Mitarai2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Mitarai}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Negoro}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kitagawa}},\ and\
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}},\ }\bibfield
{title} {\bibinfo {title} {Quantum circuit learning},\ }\href
{https://doi.org/10.1103/PhysRevA.98.032309} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo
{pages} {032309} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Farhi}\ and\ \citenamefont
{Neven}(2018)}]{farhi2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Farhi}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}},\
}\href@noop {} {\bibinfo {title} {Classification with quantum neural networks
on near term processors}} (\bibinfo {year} {2018}),\ \Eprint
{https://arxiv.org/abs/1802.06002} {arXiv:1802.06002 [quant-ph]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Cerezo}\ \emph {et~al.}(2020)\citenamefont {Cerezo},
\citenamefont {Arrasmith}, \citenamefont {Babbush}, \citenamefont {Benjamin},
\citenamefont {Endo}, \citenamefont {Fujii}, \citenamefont {McClean},
\citenamefont {Mitarai}, \citenamefont {Yuan}, \citenamefont {Cincio},\ and\
\citenamefont {Coles}}]{cerezo2020variational}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Cerezo}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Arrasmith}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo
{author} {\bibfnamefont {S.~C.}\ \bibnamefont {Benjamin}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Endo}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Fujii}}, \bibinfo {author} {\bibfnamefont {J.~R.}\
\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Mitarai}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Cincio}},\ and\ \bibinfo
{author} {\bibfnamefont {P.~J.}\ \bibnamefont {Coles}},\ }\href@noop {}
{\bibinfo {title} {Variational quantum algorithms}} (\bibinfo {year}
{2020}),\ \Eprint {https://arxiv.org/abs/2012.09265} {arXiv:2012.09265
[quant-ph]} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Fujii}\ and\ \citenamefont
{Nakajima}(2017)}]{Fujii2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Fujii}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Nakajima}},\ }\bibfield {title} {\bibinfo {title} {Harnessing
disordered-ensemble quantum dynamics for machine learning},\ }\href
{https://doi.org/10.1103/PhysRevApplied.8.024030} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Applied}\ }\textbf {\bibinfo {volume} {8}},\
\bibinfo {pages} {024030} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sherstinsky}(2020)}]{SHERSTINSKY2020132306}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Sherstinsky}},\ }\bibfield {title} {\bibinfo {title} {Fundamentals of
recurrent neural network (rnn) and long short-term memory (lstm) network},\
}\href {https://doi.org/https://doi.org/10.1016/j.physd.2019.132306}
{\bibfield {journal} {\bibinfo {journal} {Physica D: Nonlinear Phenomena}\
}\textbf {\bibinfo {volume} {404}},\ \bibinfo {pages} {132306} (\bibinfo
{year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hochreiter}\ and\ \citenamefont
{Schmidhuber}(1997)}]{hochreiter1997long}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Hochreiter}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Schmidhuber}},\ }\bibfield {title} {\bibinfo {title} {Long short-term
memory},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Neural
computation}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {1735}
(\bibinfo {year} {1997})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Chung}\ \emph {et~al.}(2014)\citenamefont {Chung},
\citenamefont {Gulcehre}, \citenamefont {Cho},\ and\ \citenamefont
{Bengio}}]{chung2014empirical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Chung}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gulcehre}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Cho}},\ and\ \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Bengio}},\ }\href@noop {}
{\bibinfo {title} {Empirical evaluation of gated recurrent neural networks on
sequence modeling}} (\bibinfo {year} {2014}),\ \Eprint
{https://arxiv.org/abs/1412.3555} {arXiv:1412.3555 [cs.NE]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Elman}(1990)}]{elman1990finding}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont
{Elman}},\ }\bibfield {title} {\bibinfo {title} {Finding structure in
time},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Cognitive
science}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {179}
(\bibinfo {year} {1990})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Peruzzo}\ \emph {et~al.}(2014)\citenamefont
{Peruzzo}, \citenamefont {McClean}, \citenamefont {Shadbolt}, \citenamefont
{Yung}, \citenamefont {Zhou}, \citenamefont {Love}, \citenamefont
{Aspuru-Guzik},\ and\ \citenamefont {O'Brien}}]{Peruzzo2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Peruzzo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {McClean}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Shadbolt}}, \bibinfo
{author} {\bibfnamefont {M.-H.}\ \bibnamefont {Yung}}, \bibinfo {author}
{\bibfnamefont {X.-Q.}\ \bibnamefont {Zhou}}, \bibinfo {author}
{\bibfnamefont {P.~J.}\ \bibnamefont {Love}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ and\ \bibinfo {author}
{\bibfnamefont {J.~L.}\ \bibnamefont {O'Brien}},\ }\bibfield {title}
{\bibinfo {title} {A variational eigenvalue solver on a photonic quantum
processor},\ }\href {https://doi.org/10.1038/ncomms5213} {\bibfield
{journal} {\bibinfo {journal} {Nature Communications}\ }\textbf {\bibinfo
{volume} {5}},\ \bibinfo {pages} {4213} (\bibinfo {year} {2014})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Farhi}\ \emph {et~al.}(2014)\citenamefont {Farhi},
\citenamefont {Goldstone},\ and\ \citenamefont {Gutmann}}]{farhi2014quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}},\
and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\
}\href@noop {} {\bibinfo {title} {A quantum approximate optimization
algorithm}} (\bibinfo {year} {2014}),\ \Eprint
{https://arxiv.org/abs/1411.4028} {arXiv:1411.4028 [quant-ph]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Benedetti}\ \emph {et~al.}(2019)\citenamefont
{Benedetti}, \citenamefont {Garcia-Pintos}, \citenamefont {Perdomo},
\citenamefont {Leyton-Ortega}, \citenamefont {Nam},\ and\ \citenamefont
{Perdomo-Ortiz}}]{Benedetti2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Benedetti}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Garcia-Pintos}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Perdomo}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Leyton-Ortega}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nam}},\
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Perdomo-Ortiz}},\
}\bibfield {title} {\bibinfo {title} {A generative modeling approach for
benchmarking and training shallow quantum circuits},\ }\href
{https://doi.org/10.1038/s41534-019-0157-8} {\bibfield {journal} {\bibinfo
{journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {5}},\
\bibinfo {pages} {45} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Schuld}\ \emph {et~al.}(2019)\citenamefont {Schuld},
\citenamefont {Bergholm}, \citenamefont {Gogolin}, \citenamefont {Izaac},\
and\ \citenamefont {Killoran}}]{schuld2019evaluating}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Schuld}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bergholm}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Izaac}},\ and\ \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Killoran}},\ }\bibfield {title} {\bibinfo
{title} {Evaluating analytic gradients on quantum hardware},\ }\href
{https://doi.org/10.1103/PhysRevA.99.032331} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo
{pages} {032331} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Koczor}\ and\ \citenamefont
{Benjamin}(2020)}]{koczor2020quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Koczor}}\ and\ \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont
{Benjamin}},\ }\href@noop {} {\bibinfo {title} {Quantum analytic descent}}
(\bibinfo {year} {2020}),\ \Eprint {https://arxiv.org/abs/2008.13774}
{arXiv:2008.13774 [quant-ph]} \BibitemShut {NoStop}
\bibitem [{\citenamefont {de~Boer}\ \emph {et~al.}(1974)\citenamefont
{de~Boer}, \citenamefont {Borghini}, \citenamefont {Morimoto}, \citenamefont
{Niinikoski},\ and\ \citenamefont {Udo}}]{deBoer1974}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{de~Boer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Borghini}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Morimoto}}, \bibinfo
{author} {\bibfnamefont {T.~O.}\ \bibnamefont {Niinikoski}},\ and\ \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Udo}},\ }\bibfield {title}
{\bibinfo {title} {Dynamic polarization of protons, deuterons, and carbon-13
nuclei: Thermal contact between nuclear spins and an electron spin-spin
interaction reservoir},\ }\href {https://doi.org/10.1007/BF00661185}
{\bibfield {journal} {\bibinfo {journal} {Journal of Low Temperature
Physics}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {249}
(\bibinfo {year} {1974})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Henstra}\ \emph {et~al.}(1990)\citenamefont
{Henstra}, \citenamefont {Lin}, \citenamefont {Schmidt},\ and\ \citenamefont
{Wenckebach}}]{HENSTRA19906}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Henstra}}, \bibinfo {author} {\bibfnamefont {T.-S.}\ \bibnamefont {Lin}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Schmidt}},\ and\ \bibinfo
{author} {\bibfnamefont {W.}~\bibnamefont {Wenckebach}},\ }\bibfield {title}
{\bibinfo {title} {High dynamic nuclear polarization at room temperature},\
}\href {https://doi.org/https://doi.org/10.1016/0009-2614(90)87002-9}
{\bibfield {journal} {\bibinfo {journal} {Chemical Physics Letters}\
}\textbf {\bibinfo {volume} {165}},\ \bibinfo {pages} {6 } (\bibinfo {year}
{1990})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Tateishi}\ \emph {et~al.}(2014)\citenamefont
{Tateishi}, \citenamefont {Negoro}, \citenamefont {Nishida}, \citenamefont
{Kagawa}, \citenamefont {Morita},\ and\ \citenamefont
{Kitagawa}}]{Tateishi2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Tateishi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Negoro}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Nishida}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Kagawa}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Morita}},\ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Kitagawa}},\ }\bibfield {title} {\bibinfo
{title} {Room temperature hyperpolarization of nuclear spins in bulk},\
}\href {https://doi.org/10.1073/pnas.1315778111} {\bibfield {journal}
{\bibinfo {journal} {Proceedings of the National Academy of Sciences}\
}\textbf {\bibinfo {volume} {111}},\ \bibinfo {pages} {7527} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fischer}\ \emph {et~al.}(2013)\citenamefont
{Fischer}, \citenamefont {Jarmola}, \citenamefont {Kehayias},\ and\
\citenamefont {Budker}}]{PhysRevB.87.125207}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Fischer}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Jarmola}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Kehayias}},\ and\
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Budker}},\ }\bibfield
{title} {\bibinfo {title} {Optical polarization of nuclear ensembles in
diamond},\ }\href {https://doi.org/10.1103/PhysRevB.87.125207} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{87}},\ \bibinfo {pages} {125207} (\bibinfo {year} {2013})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Chen}\ and\ \citenamefont {Nurdin}(2019)}]{Chen2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Chen}}\ and\ \bibinfo {author} {\bibfnamefont {H.~I.}\ \bibnamefont
{Nurdin}},\ }\bibfield {title} {\bibinfo {title} {Learning nonlinear
input--output maps with dissipative quantum systems},\ }\href
{https://doi.org/10.1007/s11128-019-2311-9} {\bibfield {journal} {\bibinfo
{journal} {Quantum Information Processing}\ }\textbf {\bibinfo {volume}
{18}},\ \bibinfo {pages} {198} (\bibinfo {year} {2019})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2020)\citenamefont {Chen},
\citenamefont {Nurdin},\ and\ \citenamefont {Yamamoto}}]{Chen2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {H.~I.}\ \bibnamefont {Nurdin}},\
and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Yamamoto}},\
}\bibfield {title} {\bibinfo {title} {Temporal information processing on
noisy quantum computers},\ }\href
{https://doi.org/10.1103/PhysRevApplied.14.024065} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Applied}\ }\textbf {\bibinfo {volume}
{14}},\ \bibinfo {pages} {024065} (\bibinfo {year} {2020})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Ghosh}\ \emph {et~al.}(2019)\citenamefont {Ghosh},
\citenamefont {Opala}, \citenamefont {Matuszewski}, \citenamefont {Paterek},\
and\ \citenamefont {Liew}}]{Ghosh2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Ghosh}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Opala}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Matuszewski}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Paterek}},\ and\ \bibinfo {author}
{\bibfnamefont {T.~C.~H.}\ \bibnamefont {Liew}},\ }\bibfield {title}
{\bibinfo {title} {Quantum reservoir processing},\ }\href
{https://doi.org/10.1038/s41534-019-0149-8} {\bibfield {journal} {\bibinfo
{journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {5}},\
\bibinfo {pages} {35} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {{Ghosh}}\ \emph {et~al.}(2020)\citenamefont
{{Ghosh}}, \citenamefont {{Opala}}, \citenamefont {{Matuszewski}},
\citenamefont {{Paterek}},\ and\ \citenamefont {{Liew}}}]{Ghosh2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{Ghosh}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {{Opala}}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Matuszewski}}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {{Paterek}}},\ and\ \bibinfo
{author} {\bibfnamefont {T.~C.~H.}\ \bibnamefont {{Liew}}},\ }\bibfield
{title} {\bibinfo {title} {Reconstructing quantum states with quantum
reservoir networks},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {IEEE Transactions on Neural Networks and Learning Systems}\ ,\
\bibinfo {pages} {1}} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ghosh}\ \emph {et~al.}(2020)\citenamefont {Ghosh},
\citenamefont {Krisnanda}, \citenamefont {Paterek},\ and\ \citenamefont
{Liew}}]{ghosh2020universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Ghosh}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Krisnanda}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Paterek}},\ and\ \bibinfo
{author} {\bibfnamefont {T.~C.~H.}\ \bibnamefont {Liew}},\ }\href@noop {}
{\bibinfo {title} {Universal quantum reservoir computing}} (\bibinfo {year}
{2020}),\ \Eprint {https://arxiv.org/abs/2003.09569} {arXiv:2003.09569
[quant-ph]} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Govia}\ \emph {et~al.}(2020)\citenamefont {Govia},
\citenamefont {Ribeill}, \citenamefont {Rowlands}, \citenamefont {Krovi},\
and\ \citenamefont {Ohki}}]{govia2020quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~C.~G.}\
\bibnamefont {Govia}}, \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont
{Ribeill}}, \bibinfo {author} {\bibfnamefont {G.~E.}\ \bibnamefont
{Rowlands}}, \bibinfo {author} {\bibfnamefont {H.~K.}\ \bibnamefont
{Krovi}},\ and\ \bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont
{Ohki}},\ }\href@noop {} {\bibinfo {title} {Quantum reservoir computing with
a single nonlinear oscillator}} (\bibinfo {year} {2020}),\ \Eprint
{https://arxiv.org/abs/2004.14965} {arXiv:2004.14965 [quant-ph]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Mart{\'i}nez-Pe{\~{n}}a}\ \emph
{et~al.}(2020)\citenamefont {Mart{\'i}nez-Pe{\~{n}}a}, \citenamefont
{Nokkala}, \citenamefont {Giorgi}, \citenamefont {Zambrini},\ and\
\citenamefont {Soriano}}]{Martinez2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Mart{\'i}nez-Pe{\~{n}}a}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Nokkala}}, \bibinfo {author} {\bibfnamefont {G.~L.}\
\bibnamefont {Giorgi}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Zambrini}},\ and\ \bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont
{Soriano}},\ }\bibfield {title} {\bibinfo {title} {Information processing
capacity of spin-based quantum reservoir computing systems},\ }\href
{https://doi.org/10.1007/s12559-020-09772-y} {\bibfield {journal} {\bibinfo
{journal} {Cognitive Computation}\ } (\bibinfo {year} {2020})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Kutvonen}\ \emph {et~al.}(2020)\citenamefont
{Kutvonen}, \citenamefont {Fujii},\ and\ \citenamefont
{Sagawa}}]{Kutvonen2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kutvonen}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}},\
and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Sagawa}},\
}\bibfield {title} {\bibinfo {title} {Optimizing a quantum reservoir
computer for time series prediction},\ }\href
{https://doi.org/10.1038/s41598-020-71673-9} {\bibfield {journal} {\bibinfo
{journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo
{pages} {14687} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Negoro}\ \emph {et~al.}(2018)\citenamefont {Negoro},
\citenamefont {Mitarai}, \citenamefont {Fujii}, \citenamefont {Nakajima},\
and\ \citenamefont {Kitagawa}}]{negoro2018machine}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Negoro}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Nakajima}},\ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Kitagawa}},\ }\href@noop {}
{\bibinfo {title} {Machine learning with controllable quantum dynamics of a
nuclear spin ensemble in a solid}} (\bibinfo {year} {2018}),\ \Eprint
{https://arxiv.org/abs/1806.10910} {arXiv:1806.10910 [quant-ph]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Bausch}(2020)}]{bausch2020recurrent}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Bausch}},\ }\href@noop {} {\bibinfo {title} {Recurrent quantum neural
networks}} (\bibinfo {year} {2020}),\ \Eprint
{https://arxiv.org/abs/2006.14619} {arXiv:2006.14619 [cs.LG]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Cao}\ \emph {et~al.}(2017)\citenamefont {Cao},
\citenamefont {Guerreschi},\ and\ \citenamefont
{Aspuru-Guzik}}]{cao2017quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Cao}}, \bibinfo {author} {\bibfnamefont {G.~G.}\ \bibnamefont
{Guerreschi}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Aspuru-Guzik}},\ }\href@noop {} {\bibinfo {title} {Quantum neuron: an
elementary building block for machine learning on quantum computers}}
(\bibinfo {year} {2017}),\ \Eprint {https://arxiv.org/abs/1711.11240}
{arXiv:1711.11240 [quant-ph]} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Virtanen}\ \emph {et~al.}(2020)\citenamefont
{Virtanen}, \citenamefont {Gommers}, \citenamefont {Oliphant}, \citenamefont
{Haberland}, \citenamefont {Reddy}, \citenamefont {Cournapeau}, \citenamefont
{Burovski}, \citenamefont {Peterson}, \citenamefont {Weckesser},
\citenamefont {Bright}, \citenamefont {van~der Walt}, \citenamefont {Brett},
\citenamefont {Wilson}, \citenamefont {Millman}, \citenamefont {Mayorov},
\citenamefont {Nelson}, \citenamefont {Jones}, \citenamefont {Kern},
\citenamefont {Larson}, \citenamefont {Carey}, \citenamefont {Polat},
\citenamefont {Feng}, \citenamefont {Moore}, \citenamefont {VanderPlas},
\citenamefont {Laxalde}, \citenamefont {Perktold}, \citenamefont {Cimrman},
\citenamefont {Henriksen}, \citenamefont {Quintero}, \citenamefont {Harris},
\citenamefont {Archibald}, \citenamefont {Ribeiro}, \citenamefont
{Pedregosa}, \citenamefont {van Mulbregt}, \citenamefont {Vijaykumar},
\citenamefont {Bardelli}, \citenamefont {Rothberg}, \citenamefont {Hilboll},
\citenamefont {Kloeckner}, \citenamefont {Scopatz}, \citenamefont {Lee},
\citenamefont {Rokem}, \citenamefont {Woods}, \citenamefont {Fulton},
\citenamefont {Masson}, \citenamefont {H{\"a}ggstr{\"o}m}, \citenamefont
{Fitzgerald}, \citenamefont {Nicholson}, \citenamefont {Hagen}, \citenamefont
{Pasechnik}, \citenamefont {Olivetti}, \citenamefont {Martin}, \citenamefont
{Wieser}, \citenamefont {Silva}, \citenamefont {Lenders}, \citenamefont
{Wilhelm}, \citenamefont {Young}, \citenamefont {Price}, \citenamefont
{Ingold}, \citenamefont {Allen}, \citenamefont {Lee}, \citenamefont {Audren},
\citenamefont {Probst}, \citenamefont {Dietrich}, \citenamefont {Silterra},
\citenamefont {Webber}, \citenamefont {Slavi{\v{c}}}, \citenamefont
{Nothman}, \citenamefont {Buchner}, \citenamefont {Kulick}, \citenamefont
{Sch{\"o}nberger}, \citenamefont {de~Miranda~Cardoso}, \citenamefont
{Reimer}, \citenamefont {Harrington}, \citenamefont {Rodr{\'i}guez},
\citenamefont {Nunez-Iglesias}, \citenamefont {Kuczynski}, \citenamefont
{Tritz}, \citenamefont {Thoma}, \citenamefont {Newville}, \citenamefont
{K{\"u}mmerer}, \citenamefont {Bolingbroke}, \citenamefont {Tartre},
\citenamefont {Pak}, \citenamefont {Smith}, \citenamefont {Nowaczyk},
\citenamefont {Shebanov}, \citenamefont {Pavlyk}, \citenamefont {Brodtkorb},
\citenamefont {Lee}, \citenamefont {McGibbon}, \citenamefont {Feldbauer},
\citenamefont {Lewis}, \citenamefont {Tygier}, \citenamefont {Sievert},
\citenamefont {Vigna}, \citenamefont {Peterson}, \citenamefont {More},
\citenamefont {Pudlik}, \citenamefont {Oshima}, \citenamefont {Pingel},
\citenamefont {Robitaille}, \citenamefont {Spura}, \citenamefont {Jones},
\citenamefont {Cera}, \citenamefont {Leslie}, \citenamefont {Zito},
\citenamefont {Krauss}, \citenamefont {Upadhyay}, \citenamefont {Halchenko},
\citenamefont {V{\'a}zquez-Baeza},\ and\ \citenamefont {{Scipy 1.0
Contributors}}}]{Virtanen2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Virtanen}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Gommers}},
\bibinfo {author} {\bibfnamefont {T.~E.}\ \bibnamefont {Oliphant}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Haberland}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Reddy}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Cournapeau}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Burovski}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Peterson}}, \bibinfo {author} {\bibfnamefont
{W.}~\bibnamefont {Weckesser}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Bright}}, \bibinfo {author} {\bibfnamefont {S.~J.}\
\bibnamefont {van~der Walt}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Brett}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Wilson}}, \bibinfo {author} {\bibfnamefont {K.~J.}\
\bibnamefont {Millman}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Mayorov}}, \bibinfo {author} {\bibfnamefont {A.~R.~J.}\ \bibnamefont
{Nelson}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jones}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kern}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Larson}}, \bibinfo {author} {\bibfnamefont
{C.~J.}\ \bibnamefont {Carey}}, \bibinfo {author} {\bibfnamefont
{{\.{I}}.}~\bibnamefont {Polat}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Feng}}, \bibinfo {author} {\bibfnamefont {E.~W.}\
\bibnamefont {Moore}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{VanderPlas}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Laxalde}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Perktold}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Cimrman}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Henriksen}}, \bibinfo {author}
{\bibfnamefont {E.~A.}\ \bibnamefont {Quintero}}, \bibinfo {author}
{\bibfnamefont {C.~R.}\ \bibnamefont {Harris}}, \bibinfo {author}
{\bibfnamefont {A.~M.}\ \bibnamefont {Archibald}}, \bibinfo {author}
{\bibfnamefont {A.~H.}\ \bibnamefont {Ribeiro}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Pedregosa}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {van Mulbregt}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Vijaykumar}}, \bibinfo {author}
{\bibfnamefont {A.~P.}\ \bibnamefont {Bardelli}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Rothberg}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Hilboll}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Kloeckner}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Scopatz}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Rokem}}, \bibinfo {author} {\bibfnamefont {C.~N.}\ \bibnamefont {Woods}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Fulton}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Masson}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {H{\"a}ggstr{\"o}m}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Fitzgerald}}, \bibinfo {author}
{\bibfnamefont {D.~A.}\ \bibnamefont {Nicholson}}, \bibinfo {author}
{\bibfnamefont {D.~R.}\ \bibnamefont {Hagen}}, \bibinfo {author}
{\bibfnamefont {D.~V.}\ \bibnamefont {Pasechnik}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Olivetti}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Martin}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Wieser}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Lenders}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Wilhelm}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Young}}, \bibinfo {author} {\bibfnamefont {G.~A.}\
\bibnamefont {Price}}, \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont
{Ingold}}, \bibinfo {author} {\bibfnamefont {G.~E.}\ \bibnamefont {Allen}},
\bibinfo {author} {\bibfnamefont {G.~R.}\ \bibnamefont {Lee}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Audren}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Probst}}, \bibinfo {author} {\bibfnamefont
{J.~P.}\ \bibnamefont {Dietrich}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Silterra}}, \bibinfo {author} {\bibfnamefont {J.~T.}\
\bibnamefont {Webber}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Slavi{\v{c}}}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Nothman}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Buchner}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kulick}}, \bibinfo
{author} {\bibfnamefont {J.~L.}\ \bibnamefont {Sch{\"o}nberger}}, \bibinfo
{author} {\bibfnamefont {J.~V.}\ \bibnamefont {de~Miranda~Cardoso}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Reimer}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Harrington}}, \bibinfo {author}
{\bibfnamefont {J.~L.~C.}\ \bibnamefont {Rodr{\'i}guez}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Nunez-Iglesias}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Kuczynski}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Tritz}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Thoma}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Newville}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {K{\"u}mmerer}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Bolingbroke}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Tartre}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Pak}}, \bibinfo {author} {\bibfnamefont {N.~J.}\
\bibnamefont {Smith}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Nowaczyk}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Shebanov}},
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Pavlyk}}, \bibinfo
{author} {\bibfnamefont {P.~A.}\ \bibnamefont {Brodtkorb}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont
{R.~T.}\ \bibnamefont {McGibbon}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Feldbauer}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Lewis}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Tygier}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Sievert}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Vigna}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Peterson}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {More}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Pudlik}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Oshima}},
\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Pingel}}, \bibinfo
{author} {\bibfnamefont {T.~P.}\ \bibnamefont {Robitaille}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Spura}}, \bibinfo {author}
{\bibfnamefont {T.~R.}\ \bibnamefont {Jones}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Cera}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Leslie}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Zito}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Krauss}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Upadhyay}},
\bibinfo {author} {\bibfnamefont {Y.~O.}\ \bibnamefont {Halchenko}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {V{\'a}zquez-Baeza}},\ and\
\bibinfo {author} {\bibnamefont {{Scipy 1.0 Contributors}}},\ }\bibfield
{title} {\bibinfo {title} {Scipy 1.0: fundamental algorithms for scientific
computing in python},\ }\href {https://doi.org/10.1038/s41592-019-0686-2}
{\bibfield {journal} {\bibinfo {journal} {Nature Methods}\ }\textbf
{\bibinfo {volume} {17}},\ \bibinfo {pages} {261} (\bibinfo {year}
{2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mitarai}\ and\ \citenamefont
{Fujii}(2019)}]{Mitarai2019meth}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Mitarai}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Fujii}},\ }\bibfield {title} {\bibinfo {title} {Methodology for replacing
indirect measurements with direct measurements},\ }\href
{https://doi.org/10.1103/PhysRevResearch.1.013006} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume}
{1}},\ \bibinfo {pages} {013006} (\bibinfo {year} {2019})}\BibitemShut
{NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title{Incentive Games and Mechanisms for Risk Management}
\begin{abstract}
Incentives play an important role in (security and IT) risk management of a large-scale organization
with multiple autonomous divisions. This paper presents an incentive mechanism design framework
for risk management based on a game-theoretic approach. The risk manager acts
as a mechanism designer providing rules and incentive factors such as assistance or subsidies
to divisions or units, which are modeled as selfish players of a strategic (noncooperative) game.
Based on this model, incentive mechanisms with various objectives are developed that satisfy
efficiency, preference-compatibility, and strategy-proofness criteria.
In addition, iterative and distributed algorithms are presented, which can be implemented under information limitations such as the risk manager not knowing the individual units' preferences. An example scenario illustrates the framework and results numerically. The incentive mechanism design approach presented is useful for not only deriving guidelines but also developing computer-assistance systems for large-scale risk management.
\textbf{Keywords:} mechanism design, risk management, incentives in organizations
\end{abstract}
\section{Introduction} \label{sec:intro}
Security risk management is a multi-disciplinary field with both \textbf{technical and organizational dimensions}. On the technical side, complex and networked systems play an increasingly important role in daily business processes. Hence, system failures and security problems have direct consequences for organizations both monetarily and in terms of productivity \cite{moore_red}. It is therefore a necessity
for any modern organization to develop and deploy technical solutions for improving robustness of these complex information technology (IT) systems with respect to failures (e.g. in the form of redundancies) and defending them against security threats (e.g. firewalls and intrusion detection/response systems).
However, even the best and most suitable technical solution will fail to perform adequately if it is not properly deployed
and supported organizationally. In order to be successful in risk management, an organization has to have proper
information about its business processes and complex technical systems or ``observe'' them as well as be able
to influence their operation or ``control'' them \cite{alpcan-book}. In a large-scale organization these two necessary
requirements, which may seem easy to satisfy at first glance, pose significant challenges. An important reason behind
this issue, besides the organizational structure, is the underlying incentive mechanisms.
Autonomous yet interdependent divisions or units of a large organization often have \textbf{individual objectives and incentives} that may
not be as aligned in practice as the headquarters and executives wish. Each such unit may have a different
perspective on risk management which directly affects deployment of technical or organizational solutions. Misaligned incentives
also make observation and control of business and technical processes difficult for risk managers. Considering the complex
interdependencies in today's technology and business, such a misalignment in incentives is not a luxury even a large-scale
organization can afford.
Let us consider an \textbf{example scenario} of an enterprise deploying a new security risk management system
that entails information collection (observation), risk assessment (decision making), and mitigation (control).
In order for its successful operation, each division has to cooperate at each stage of its deployment and operation.
At the deployment phase, the divisions have to provide accurate information on their business and networked systems.
During the operational phase, each division has to allocate manpower and resources for the proper operation of
the system. All these can be accomplished only if the divisions have sufficient incentives for real cooperation. Otherwise,
the risk management system would simply fail as a result of bureaucracy, enterprise politics, and delaying tactics.
\textbf{Game theoretic approaches} have significant potential in addressing the above described issues as well as
in risk analysis, management, and associated decision making \cite{riskbook1,crisis09,icc10jeff}.
The performance of manual and heuristic schemes degrades fast as the scale and complexity of the organization increases.
Computer assistance in observation, decision making, and control of different risk management aspects is necessary to overcome this problem. Development of such computer-based support schemes, however, requires quantitative representations and analysis. Game theoretic and analytical frameworks provide a mathematical abstraction which is useful for generalization of seemingly different problems, combining the existing ad-hoc schemes under a single umbrella, and opening doors to novel solutions. At the same time, such frameworks and the associated scientific methodology lead to streamlining of risk management processes and possibly more transparency as a consequence of increased observability and control \cite{alpcan-book}.
\textbf{Mechanism design} \cite{maskin1,lazarSemret1998,johari1}, which is a field of game theory, has been proposed recently as a way to model, analyze, and address risk management problems \cite{alpcan-book}. It can be potentially useful especially in developing analytical frameworks for incentive mechanisms. Game theory in general provides a rich set of mathematical tools and models for investigating multi-person strategic decision making where the players (decision makers) compete for limited and shared resources \cite{basargame,fudenberg}. Mechanism design studies ways of designing rules and structure of games such that their outcome achieves certain objectives.
In the context of security risk management, the units of an organization can be modeled as players (independent decision makers)
in a \textbf{risk management game} since they share and compete for organizational resources. Each player decides on the allocation of
the unit's resources, e.g. in terms of manpower and investments, to assess and mitigate perceived risks. The task of the organization's risk
manager (designer) is then to influence the outcome of this game by imposing rules and varying its structure such that a satisfactory
amount of investment is made by each unit. Thus, the designer tries to optimize the risk management process
from the entire organization's perspective within given resource constraints, e.g. budget.
This paper adopts a \textbf{game-theoretic approach} and presents a framework of incentive mechanism design for security risk management. The analytical framework studied can not only be used to derive guidelines for handling incentives in risk management but also to develop computer-assisted risk management systems. The \textbf{main contributions} of the paper include:
\begin{itemize}
\item A strategic (noncooperative) game approach for analysis of incentives in (security and IT) risk management.
\item An analytical incentive mechanism design framework where the designer does not have access to utilities of individual players of the underlying strategic game.
\item Study of iterative incentive schemes which can be implemented under information limitations and their convergence analysis.
\item A numerical analysis based on a scenario of a risk management system deployment.
\end{itemize}
A more detailed discussion clarifying these contributions and a comparison with existing literature will be provided in Section~\ref{sec:discussion}.
The rest of the paper is organized as follows. The next section provides an overview of the underlying mechanism design
and game-theoretic concepts as well as the model adopted in this work. Section~\ref{sec:incentivemech}
presents incentive mechanism design for risk management. Section~\ref{sec:iterative} discusses
iterative incentive mechanisms and related distributed algorithms. An example use case scenario and related numerical analysis is presented in Section~\ref{sec:numerical},
which is followed by a brief literature review in Section~\ref{sec:discussion}. The paper concludes with a discussion
and concluding remarks in Section~\ref{sec:conclusion}.
\section{Game and Mechanism Model} \label{sec:model}
Consider an organization with $N$ \textit{autonomous units}, which act as independent decision makers, and a risk manager,
which oversees the risk management task of the entire organization (and is often a special organizational unit itself).
This generic organization may be a large-scale multi-national enterprise (divisions versus the risk manager at the headquarters), a government (government agencies versus central executives), or even an international organization (individual countries versus general secretary of the organization).
Adopting a game-theoretic approach, each autonomous unit can be modeled as a player of a
\textbf{strategic (noncooperative) game} with the set of all players denoted as $\mathcal A$. The player $i \in \mathcal A$ independently decides on
its respective decision variable $x_i$, which represents allocation of limited resources such as monetary investments or manpower,
in accordance with its own objectives. In the majority of cases, the decisions of players affect each other due to constraints of the environment.
Thus, the players share and compete for resources as part of this strategic game.
The risk manager $\mathcal{D}$, which is also called \textit{designer}\footnote{The terms risk manager and designer as well as (organizational) unit and player will be used interchangeably for the rest of the paper. }
in the context of mechanism design, focuses on the aggregate outcome
of the strategic game and tries to ensure that the game satisfies some risk management objectives, e.g. information
collection for assessment or deployment of a new risk management solution. Unlike the players, the designer
achieves its objective only by \textit{indirect means such as providing additional incentives to players} in the form of incentive factors and
penalties or imposing rules. It is important to note that the risk manager cannot directly dictate individual actions of players, which is
a realistic assumption that holds for many types of civilian organizations. The interaction between risk manager (designer) and organizational units (players) is depicted in Figure~\ref{fig:mechdesign1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7 \columnwidth]{incentivemech1.eps}
\caption{The interaction between the players (autonomous organizational units) of the underlying strategic game and the
risk manager acting as mechanism designer, who observes the players' actions (investments) $x$ and provides additional
incentives $p$.}
\label{fig:mechdesign1}
\end{figure}
The $N$-player strategic game $\mathcal G$ is described as follows. Each player $i \in \mathcal A$ has a respective
scalar \textbf{decision variable}\footnote{The analysis can be easily extended to the multi-dimensional case. However, since
this would complicate the notation and readability without a significant conceptual contribution,
this paper focuses on scalar decision variables.} $x_i$ such that
$$x=[x_1,\ldots,x_N] \in \mathcal X \subset \mathbb R^N, $$
where $\mathcal X$ is the convex, compact, and nonempty decision space of all players.
The players make their decisions in accordance with their \textbf{preferences} modeled as customary by real valued utility functions
$$ U_i (x) : \mathcal X \rightarrow \mathbb R .$$
For analytical tractability, the player utility functions are chosen as continuous, differentiable, and strictly concave.
It is important to note here that \textit{players do not reveal their utilities (preferences) to the designer}.
Application of a similar utility function approach to risk management has been discussed in detail in \cite[Chap. 3]{riskbook1}, where the concave utilities are interpreted as ``risk averse''.
While each player gains a utility from its decisions (investments), these resources also have a cost, which can be often
expressed in monetary terms. We assume that these costs are linear in the allocated resource,
$ \beta_i x_i $, where $\beta_i$ is the individual per-unit cost factor. Each player $i$ aims to minimize its
respective \textbf{cost function}
\begin{equation} \label{e:usercost}
J_i(x)= \beta_i x_i - U_i (x) - p_i x_i ,
\end{equation}
where the linear term $p_i x_i$ represents the \textit{incentive factor} (or penalty if negative)
provided to the player by the designer $\mathcal{D}$.
Thus, player $i$ solves the optimization problem
$$ \min_{x_i} J_i (x_i,x_{-i}) ,$$
by choosing an appropriate $x_i$ given the decisions of all players denoted by $x_{-i}$ such that $x \in \mathcal X$.
Formally, strategic game $\mathcal G$ is defined as:
\begin{defn} \label{def:game}
The strategic (noncooperative) game $\mathcal G$ is played among the set of selfish players, $\mathcal A$, of cardinality $N$, on the convex, compact, and non-empty decision space $\mathcal X \subset \mathbb R^N$, where
\begin{itemize}
\item $x=[x_1,\ldots,x_N] \in \mathcal X$ denotes the actions of players
\item $ U_i (x) : \mathcal X \rightarrow \mathbb R$ denotes the utility function of player $i \in \mathcal A$
\item $ J_i(x)= \beta_i x_i - U_i (x) - p_i x_i$ denotes the cost function of player $i \in \mathcal A$ for given parameters $\beta_i$ and $p_i$ $\forall i$,
\end{itemize}
such that each player $i$ solves its own optimization problem
$$ \min_{x_i} J_i (x_i,x_{-i}) ,$$
by choosing an appropriate $x_i$ given the decisions of all players denoted by $x_{-i}$.
\end{defn}
The \textbf{Nash equilibrium} (NE) is a widely-accepted and useful solution concept in strategic games, where no player has an incentive to deviate from it while others play according to their NE strategies \cite{Nash50,Nash51}. The NE is at the same time the intersection point of players' best responses obtained by solving their individual optimization problems. The NE of the game $\mathcal G$ in Definition~\ref{def:game} is formally defined as follows.
\begin{defn} \label{def:ne}
The Nash equilibrium of the game $\mathcal G$ in Definition~\ref{def:game} is denoted by the vector $x^*=[x_1^*,\ldots,x_N^*] \in \mathcal X$ and defined as
$$ x_i^* := \arg \min_{x_i} J_i (x_i, x_{-i}^*)\;\; \;\forall i \in \mathcal A,$$
where $x_{-i}^*=[x_1^*,\ldots,x_{i-1}^*,x_{i+1}^*,\ldots, x_N^*]$.
\end{defn}
If some special convexity and compactness conditions are imposed on the game $\mathcal G$, then it admits a unique NE solution, which simplifies mechanism and algorithm design significantly.
We refer to Appendix A.1 as well as \cite{rosen,basargame,tansuphd} for the details and
an extensive analysis.
The risk manager (designer) $\mathcal{D}$ devises an \textbf{incentive mechanism} $\mathcal M$, which can be represented by the mapping
$\mathcal M: \mathcal X \rightarrow \mathbb R^N$, and implemented through additional incentives (e.g. subsidies) in player cost functions, $p_i x_i$, above.
Using incentive mechanism $\mathcal M$, the designer aims to achieve a certain risk management objective, which can be maximization of
aggregate player utilities (expected aggregate benefit from risk-related investments) or an independent organizational target that
depends on participation of all players such as deployment of a new risk management solution.
These can be modeled using a \textbf{designer objective function} $V$ that quantifies the desirability of an outcome $x$ from
the designer's perspective. Formally, the function $V$ is defined as
$$ V(x,U(x),p) : \mathcal X \rightarrow \mathbb R.$$
Thus, the global optimization problem of the designer is
$$\max_p V(x,U(x),p) ,$$
which it solves by choosing the vector $p=[p_1, \ldots, p_N]$,
i.e. providing incentive factors to the players. Note that the designer objective $V$ (possibly) depends on the player utilities $U=[U_1,\ldots,U_N]$, yet the designer does not have direct knowledge of them.
Furthermore, the risk manager may have only a limited budget $B$ to achieve its goal that leads to the additional constraint
$$\sum_{i=1}^N p_i x_i \leq B .$$
\textbf{Mechanism design}, as a field of game theory, studies designing the rules and structure of games such that their outcome achieves certain objectives~\cite{maskin1,lazarSemret1998,johari1,alpcan-infocom10}. Two criteria a mechanism has to satisfy have already been
described above. The player objective of minimizing own cost can also be called \textit{preference-compatibility}. Likewise,
the designer objectives of maximizing $V$ or achieving a global goal can be interpreted as an \textit{efficiency} criterion. The third criterion arises from the
fact that the interaction between the designer and players of the game (Figure~\ref{fig:mechdesign1}) may motivate the players to misrepresent their utilities to the designer. They can benefit from misrepresenting their utilities (exaggerating or diminishing the actual benefits of their investments) to receive higher incentives. Therefore, mechanism design has a third objective called interchangeably \textit{strategy-proofness}, \textit{truth dominance}, or \textit{incentive-compatibility} in addition to the objectives of efficiency and preference-compatibility. All these three criteria
are summarized in the following table:
\begin{table}[htp]
\begin{center}
\caption{Three Criteria of Mechanism Design}
\begin{tabular}[t]{|l|l|}
\hline
\textit{Criterion} & \textit{Formulation in the Model} \\
\hline \hline
Efficiency & Designer objective \\ \hline
Preference- & Players minimizing own costs \\
compatibility & (NE as operating point) \\ \hline
Strategy-Proofness & No player gains from cheating \\
\hline
\end{tabular}
\end{center} \label{tbl:mechdesign}
\end{table}
\subsection{Assumptions}
Taking into account the breadth of the field of mechanism design, it is useful to clarify the underlying assumptions of the model studied
in this section. The \textbf{environment} where the players and designer interact is characterized by the following properties:
\begin{itemize}
\item The players and designer operate with limited resources, e.g. under budget and manpower constraints.
\item The organizational structure imposes restrictions on available information to players and communication between them.
\item The designer has no information on the preferences of individual players, but observes their actions and final costs.
\end{itemize}
The players share and compete for limited resources in the given environment under its information and communication constraints.
The following assumptions are made on the \textbf{designer and players}:
\begin{itemize}
\item The designer is honest, i.e. does not try to deceive players.
\item Each player acts alone and rationally according to own self interests.
\item The players may try to deceive the designer by hiding or misrepresenting their own preferences.
\item All players follow the rules of the mechanism imposed by the designer.
\end{itemize}
Implications of these assumptions and limitations of the presented model will be further discussed in Section~\ref{sec:discussion}.
\section{Incentive Mechanism Design} \label{sec:incentivemech}
This section presents two specific incentive mechanisms for risk management based on the model of the previous section.
In the first mechanism, $\mathcal M_1$, the risk manager (designer) aims to maximize the aggregate benefit from security investments
of units, which is the sum of player utilities. This objective is sometimes also called ``social welfare maximization''.
The second mechanism, $\mathcal M_2$, represents a scenario in which the risk manager aims to align the efforts of all units for
deployment and operation of an organization-wide risk management solution. Both mechanisms (their iterative variants) satisfy the
criteria in Table~\ref{tbl:mechdesign} under specific conditions. The interaction between the designer
and players is visualized in Figure~\ref{fig:subsidymech1}.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{subsidymech1.eps}
\caption{Interaction between risk manager (designer) and organizational units (players) as part of incentive mechanism design.}
\label{fig:subsidymech1}
\end{figure}
\subsection{Welfare maximizing mechanism}
The optimization problem $\min_{x_i} J_i (x)$ of player $i$ is a convex one and admits the unique solution
$$ x_i^* = \left( \dfrac{\partial U_i(x)}{\partial x_i}\right)^{-1} (\beta_i - p_i) ,$$
under the strict concavity and continuous differentiability assumptions on $U_i$ \cite{bertsekas2}.
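For instance, under the separable logarithmic utilities $U_i(x_i)=\alpha_i \log(x_i)$ with $\alpha_i>0$ used later in Section~\ref{sec:iterative}, the marginal utility is $\alpha_i/x_i$ and the best response above takes the explicit form
$$ x_i^* = \dfrac{\alpha_i}{\beta_i - p_i}, \qquad \text{provided that } \beta_i > p_i, $$
so a larger incentive factor $p_i$ directly increases the investment of unit $i$.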
Any such solution $x^*$ that solves all player optimization problems is by definition \textbf{preference-compatible}.
It is important to note that, if there were no incentive term, $p_i x_i$, in the player cost, each unit would
act according to its self interest only, leading to a suboptimal outcome for the entire organization;
a situation sometimes termed the \textit{tragedy of the commons}. The designer can prevent this by
providing a carefully selected incentive scheme \cite{cdc09lacra,gamenetsne}.
The objective of the risk manager $\mathcal{D}$ in mechanism $\mathcal M_1$ is to maximize the sum of player utilities, $\sum_i U_i(x)$. Since,
under the assumptions of Section~\ref{sec:model}, the risk manager does \textit{not} know these utilities, this goal appears
paradoxical at first glance. However, the risk manager can actually achieve it in a carefully designed mechanism
where it deduces the needed parameters for the solution from the observed actions of players.
Formally, the designer solves the constrained optimization problem
\begin{equation} \label{e:designerobj1}
\max_x V(x) \Leftrightarrow \max_x \sum_i U_i (x) \text{ such that } \sum_i p_i x_i \leq B.
\end{equation}
The optimal solution to this constrained problem by definition satisfies the \textbf{efficiency criterion}.
The associated Lagrangian function is then
$$ L(x)=\sum_i U_i (x) + \lambda \left( B- \sum_i p_i x_i \right) ,$$
where $\lambda \geq 0$ is a scalar Lagrange multiplier \cite{bertsekas2}.
Under the concavity assumptions on $U_i$, this leads to
\begin{equation} \label{e:global1}
\dfrac{\partial L}{\partial x_i}=0 \Rightarrow \dfrac{1}{p_i}\sum_{j=1}^N \dfrac{\partial U_j(x)}{\partial x_i}= \lambda, \; \forall i \in \mathcal A,
\end{equation}
and the associated budget constraint\footnote{An underlying assumption here is that the risk manager (designer) utilizes
all of its budget, i.e. the constraint is active.} is
\begin{equation} \label{e:constraint1}
\dfrac{\partial L}{\partial \lambda}=0 \Rightarrow \sum_i p_i x_i=B.
\end{equation}
Meeting both the preference-compatibility and efficiency
criteria requires alignment of player and designer optimization problems. This alignment
can be achieved by choosing the Lagrange multiplier $\lambda$ and player incentive factors $p$
in such a way that
\begin{equation} \label{e:align1}
\dfrac{\beta_i - p_i}{p_i} + \dfrac{1}{p_i}\sum_{j\neq i}\dfrac{\partial U_j(x)}{\partial x_i} = \lambda, \; \forall i \in \mathcal A,
\end{equation}
and
\begin{equation} \label{e:align2}
\sum_i p_i \left( \dfrac{\partial U_i(x)}{\partial x_i} \right)^{-1}( \beta_i - p_i) =B.
\end{equation}
Any solution to the set of $N+1$ nonlinear equations (\ref{e:align1})-(\ref{e:align2}) is by
definition a Nash equilibrium as it lies at the intersection of the player best responses. These results are summarized in the following proposition.
\begin{prop} \label{prop:m1}
Any solution of the mechanism $\mathcal M_1$ described above obtained from (\ref{e:align1})-(\ref{e:align2}) is both player preference-compatible (based on the strategic game $\mathcal G$, given in Definition~\ref{def:game}) and efficient, i.e. maximizes $\sum_i U_i (x)$.
\end{prop}
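As an illustration, under the additional assumption of separable logarithmic utilities $U_i(x_i)=\alpha_i \log(x_i)$, the cross terms in (\ref{e:align1}) vanish and the system (\ref{e:align1})-(\ref{e:align2}) can be solved in closed form: (\ref{e:align1}) reduces to $(\beta_i - p_i)/p_i = \lambda$, i.e. $p_i = \beta_i/(1+\lambda)$, the player best responses become $x_i = \alpha_i/(\beta_i - p_i)$, and substituting into (\ref{e:align2}) gives $p_i x_i = \alpha_i/\lambda$, hence
$$ \lambda = \dfrac{\sum_j \alpha_j}{B} \quad \text{and} \quad p_i = \dfrac{\beta_i B}{B + \sum_j \alpha_j}, \;\; \forall i \in \mathcal A. $$
In general, however, the coupled nonlinear system has to be solved numerically.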
If the designer $\mathcal{D}$ wants to compute the incentive factors $p$ directly by solving (\ref{e:align1})-(\ref{e:align2}), it needs to ask each individual player $i$ for its utility, more specifically
$\partial U_i(x) / \partial x_j$ $\forall j \in \mathcal A$.
However, the players now have a motivation to misrepresent their utilities to the designer in order to gain a larger
share of resources or incentive factors. To see this, consider a cheating player $i$ reporting $\tilde U_i$ to
the designer instead of its true utility. If the designer believes the player and solves (\ref{e:align1})-(\ref{e:align2})
using this report, then the resulting incentive factors $\tilde p$ will naturally differ from what they should have been, $p$.
A selfish or malicious player can thus manipulate such a scheme, which by definition is not strategy-proof.
Note that the risk manager has access to the costs $\beta_i x_i$ and actions $x_i$ of individual players, which
can be, for example, part of an organizational reporting process.
One way to address the issue of strategy-proofness is to devise additional schemes to detect potential player misbehavior
(for which players already have a motivation). This, however, brings an additional layer of overhead to the
overall system both in terms of communication and computing requirements.
Alternatively, one can design an \textbf{iterative mechanism} that is based
on observation of player actions $x$ instead of asking for their word (utilities). This approach
is the basis of the iterative schemes that will be presented in Section~\ref{sec:iterative}.
\subsection{Mechanism with global objective}
The second mechanism, $\mathcal M_2$, differs from the social welfare maximizing one, $\mathcal M_1$,
discussed in the previous subsection. In this case, the designer
has an organization-wide or ``global'' objective represented by the strictly concave and nondecreasing function $F(x)$ which
does not directly depend on player utilities. This organization-wide objective could be,
for example, deployment and operation of an organization-wide risk management
solution that naturally requires cooperation from all units and an alignment of efforts.
In mechanism $\mathcal M_2$, the risk manager formally solves the constrained
optimization problem
\begin{equation} \label{e:designerobj2}
\max_x F(x) \text{ such that } \sum_i p_i x_i \leq B.
\end{equation}
The associated Lagrangian function is then
$$ L(x)=F(x) + \lambda \left( B- \sum_i p_i x_i \right) ,$$
where $\lambda>0$ is a scalar Lagrange multiplier. Note that the constraint is always active in this case due to the definition of $F(x)$. Under the concavity assumptions on $F(x)$, this leads to
\begin{equation} \label{e:global2}
\dfrac{\partial L}{\partial x_i}=0 \Rightarrow \dfrac{1}{p_i}\dfrac{\partial F(x)}{\partial x_i}= \lambda, \; \forall i \in \mathcal A,
\end{equation}
and the associated budget constraint is
\begin{equation} \label{e:constraint2}
\dfrac{\partial L}{\partial \lambda}=0 \Rightarrow \sum_i p_i x_i=B.
\end{equation}
Combining this with the player optimization problems to ensure efficiency and preference-compatibility
as in the previous subsection leads to
\begin{equation} \label{e:align1a}
\dfrac{1}{p_i} \dfrac{\partial F(x)}{\partial x_i}= \lambda, \; \forall i \in \mathcal A,
\end{equation}
and
\begin{equation} \label{e:align2a}
\sum_i p_i \left( \dfrac{\partial U_i(x)}{\partial x_i} \right)^{-1}( \beta_i - p_i) =B,
\end{equation}
which are direct counterparts of (\ref{e:align1})-(\ref{e:align2}). As before, any solution constitutes a Nash equilibrium as it lies at the intersection of the player best responses.
\begin{prop} \label{prop:m2}
Any solution of the mechanism $\mathcal M_2$ described above obtained from (\ref{e:align1a})-(\ref{e:align2a}) is both player preference-compatible (based on the strategic game $\mathcal G$, given in Definition~\ref{def:game}) and efficient, i.e. maximizes $F(x)$.
\end{prop}
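For example, if the global objective is linear, $F(x)=\sum_i \gamma_i x_i$ with $\gamma_i>0$ (the form used in Sections~\ref{sec:iterative} and \ref{sec:numerical}), then (\ref{e:align1a}) immediately gives $p_i = \gamma_i/\lambda$ for all $i$, and the single scalar $\lambda$ is determined by the budget condition (\ref{e:align2a}).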
In mechanism $\mathcal M_2$, the risk manager has to evaluate the term $\partial F(x) / \partial x_i$
for each unit $i$, in addition to asking them for their utilities and cost factors. This term can be interpreted
as the rate of contribution of each unit to the organization-wide objective. Since the risk manager sets this
objective, it can be computed or estimated with reasonable accuracy. However, as before
the solution of (\ref{e:align1a})-(\ref{e:align2a}) also depends on individual unit utilities and cost factors.
Therefore, mechanism $\mathcal M_2$, similar to $\mathcal M_1$, requires deployment of iterative methods in order to meet the
criterion of strategy-proofness.
\subsection{Interdependent Utilities and Linear Influence Model} \label{sec:coupledutil}
In the presented model and analysis, utilities
of individual players (units) may depend not only on their own actions but also on those of others,
e.g. $U_i(x)=U_i ([x_1,\ldots,x_N])$.
In other words, a unit benefits not only from own risk investments but also from efforts
of other related units. Such utility functions are called interdependent or nonseparable in
contrast to separable player utilities, $U_i(x_i)$, that depend only on own actions.
If the player utilities are separable, then the player decisions are almost completely decoupled from each
other except from external resource constraints (such as the incentives they receive from the designer).
This simplifies development of decentralized schemes significantly.
One possible way of modeling interdependencies in player utilities is the linear influence model,
which captures how actions (investments) of players (units) affect others.
As a first-order approximation these effects are modeled as \textit{linear} resulting in an
\textit{influence matrix} defined as
\begin{equation} \label{e:influencematrix}
W_{ij} :=
\begin{cases}
1 & \text{if } i = j , \\
w_{ij} & \text{otherwise,}
\end{cases}
\end{equation}
where $0 \leq w_{ij} \leq 1$ denotes the non-negative effect of unit $j$'s investment on unit $i$. Notice
that this effect may well be zero.
Define now the vector of \textit{effective investments}
$x^e=[x_1^e, \ldots, x_N^e]$, where the effective investment of unit $i$ is
$$x^e_i :=\sum_j W_{ij} x_j = (W x)_i ,$$
and $(\cdot)_i$ denotes the $i^{th}$ element of a vector.
Naturally, it is possible to develop more complex nonlinear models to capture interdependencies between
units and their actions. However, given the limitations on information collection and accuracy,
the linear first order approximation described provides a good starting point. Therefore,
we will use the linear influence model in the case of interdependent (non-separable) utilities
for the rest of the paper.
Note that under the linear influence model, the nonseparable utility, $U_i(x)$, of player $i$ is
given by
$$U_i(x^e_i)=U_i \left( (W x)_i \right).$$
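As a minimal illustration in Python with NumPy (the matrix entries below are hypothetical and serve only to show the computation), the effective investments $x^e = Wx$ are obtained as follows.
\begin{verbatim}
import numpy as np

# Hypothetical 3-unit influence matrix: unit diagonal entries and
# off-diagonal weights w_ij in [0, 1] describing how unit j's
# investment benefits unit i.
W = np.array([[1.0, 0.3, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.4, 1.0]])

x = np.array([0.5, 0.8, 0.2])   # current investments of the units

x_eff = W @ x                   # effective investments x^e = W x
print(np.round(x_eff, 3))
\end{verbatim}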
\section{Iterative Incentive Mechanisms} \label{sec:iterative}
Mechanisms $\mathcal M_1$ and $\mathcal M_2$ as defined in the previous section are shown to be efficient and
preference-compatible (see Propositions~\ref{prop:m1} and \ref{prop:m2}) but not strategy-proof. This section presents two iterative variants of these mechanisms that satisfy all three criteria and can be implemented under information limitations.
\subsection{Iterative mechanism with global objective}
In the iterative mechanism with global objective, $\mathcal{IM}_2$, both the risk manager
and units adopt an iterative scheme to facilitate information exchange
that does not allow cheating, hence resulting in a strategy-proof mechanism.
Specifically, the risk manager updates the Lagrangian multiplier $\lambda$ in (\ref{e:global2})
gradually according to
\begin{equation} \label{e:iterative1a}
\lambda (n+1) = \lambda (n) + \kappa_d \left[ \sum_i p_i(n) x_i (n) - B \right]^+ ,
\end{equation}
and computes the individual player incentive factors
\begin{equation} \label{e:iterative2a}
p_i (n)= \dfrac{1}{\lambda(n)} \dfrac{\partial F(x(n))}{\partial x_i} .
\end{equation}
Here, $n=1,\ldots$ denotes the iteration number or time-step. The units (players)
in return react to given incentive factors by updating their investment decisions
in order to minimize their own costs such that
\begin{equation} \label{e:iterative3a}
\begin{array}{lll}
x_i (n+1) & = & \phi x_i (n) \\
&+ & (1-\phi)\left( \dfrac{\partial U_i(x(n))}{\partial x_i}\right)^{-1} (\beta_i - p_i(n)) \;\; \forall i,
\end{array}
\end{equation}
where $0< \phi <1$ is a relaxation constant used by the players to
prevent excessive fluctuations. Alternatively, this behavior can be justified with caution or
inertia of the organizational units.
The equilibrium solution(s) of (\ref{e:iterative1a})-(\ref{e:iterative3a})
clearly coincides with that of (\ref{e:align1a})-(\ref{e:align2a}). Hence, the iterative mechanism $\mathcal{IM}_2$,
assuming that it converges, solves the same problem as mechanism $\mathcal M_2$. Furthermore, it is strategy-proof since
at each update step, the players make decisions according to their own self interests and do not have the
opportunity of manipulating the system. To see this, assume otherwise and let player $i$ ``misrepresent''
its actions as $\tilde x_i = x_i + \delta$ for some $\delta \neq 0$. Then, the player's
instantaneous cost is $J_i (\tilde x_i) > J_i (x_i)$ at each step of the iteration. Hence, the
players have no incentive to ``cheat''. These results are summarized in the following theorem which extends Proposition~\ref{prop:m2}:
\begin{thm} \label{thm:m2}
Any solution of the iterative mechanism with global objective, $\mathcal{IM}_2$ described above and in Algorithm~\ref{alg:iterative1}
is player preference-compatible, efficient, and strategy-proof.
\end{thm}
Information flow and limitations play a crucial role in implementation of the iterative mechanism $\mathcal{IM}_2$.
In practice, the risk manager is assumed to observe the actions of units which they have to reveal
in order to receive incentives. Based on this information and the total budget, the risk manager can easily implement (\ref{e:iterative1a}). Then, it only needs to estimate the individual marginal contributions of units to the overall objective, $\partial F(x(n)) /\partial x_i $ at a given moment in order to decide on actual incentive factors in (\ref{e:iterative2a}).
Likewise, given its own cost factor $\beta_i$ and incentive factor $p_i$,
each unit (player) only has to determine the marginal benefit from its own actions, $\partial U_i(x(n)) / \partial x_i$ in order to implement (\ref{e:iterative3a}). If the unit has a separable utility, then this is simply equivalent to $\partial U_i(x_i(n)) / \partial x_i$.
In the interdependent utility case, under the linear influence model this quantity turns out to be
the marginal benefit from the effective action,
$$
\dfrac{\partial U_i(x(n))}{\partial x_i}= \dfrac{\partial U_i( x_i^e(n))}{\partial x_i^e}\dfrac{\partial x_i^e}{\partial x_i}=\dfrac{\partial U_i( x_i^e(n))}{\partial x_i^e},
$$
as a result of $W_{ii}=1$ and the definitions of respective quantities. Algorithm~\ref{alg:iterative1}
summarizes the steps of the iterative mechanism with global objective, $\mathcal{IM}_2$.
\begin{algorithm}[!ht]
\SetAlgoLined
\KwIn{\textit{Designer}: budget $B$ and global objective $F(x)$}
\KwIn{\textit{Players}: cost factor $\beta_i$ and utilities $U_i, \forall i$}
\KwResult{Player investments $x$ and incentive factors $p$}
Initial investments $x_0$ and incentive factors $p_0$ \;
\Repeat{end of iteration (negotiation)}{
\Begin(\textit{Designer:}){
Observe player investments $x$ \;
Update $\lambda$ according to (\ref{e:iterative1a}) \;
Estimate marginal contributions of players to global objective, $\partial F(x) /\partial x_i $ \;
\ForEach{player $i$}{
Compute incentive factor $p_i$ from (\ref{e:iterative2a}) \;
}
}
\Begin(\textit{Players:}){
\ForEach{player $i$}{
Estimate marginal utility $\partial U_i(x)/ \partial x_i$ \;
Compute investment $x_i$ from (\ref{e:iterative3a}) \;
}
}
}
\caption{Iterative mechanism $\mathcal{IM}_2$} \label{alg:iterative1}
\end{algorithm}
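To make the information flow of Algorithm~\ref{alg:iterative1} concrete, the following is a minimal Python sketch of $\mathcal{IM}_2$ for the linear global objective $F(x)=\sum_i \gamma_i x_i$ and logarithmic utilities $U_i(x_i)=\alpha_i \log(x_i)$ used in the convergence analysis below and in Section~\ref{sec:numerical}. The initial multiplier value \texttt{lam0}, the fixed number of steps, and the requirement $\beta_i > p_i$ throughout the run are illustrative assumptions, not part of the mechanism itself.
\begin{verbatim}
import numpy as np

def run_IM2(alpha, beta, gamma, B, kappa_d=0.05, phi=0.3,
            lam0=0.3, x0=0.5, p0=0.3, n_steps=50):
    """Sketch of the iterative mechanism IM_2 with F(x) = sum_i gamma_i x_i
    and U_i(x_i) = alpha_i * log(x_i); lam0 is an assumed initialization."""
    N = len(alpha)
    lam = lam0
    x = np.full(N, x0)
    p = np.full(N, p0)
    for _ in range(n_steps):
        # Designer: observe x and update the multiplier, cf. (e:iterative1a),
        # where [z]^+ = max(z, 0).
        lam += kappa_d * max(np.dot(p, x) - B, 0.0)
        # Designer: incentive factors, cf. (e:iterative2a); dF/dx_i = gamma_i.
        p = gamma / lam
        # Players: relaxed best response, cf. (e:iterative3a); for the log
        # utility the inverse marginal utility at (beta_i - p_i) equals
        # alpha_i / (beta_i - p_i), assuming beta_i > p_i.
        x = phi * x + (1.0 - phi) * alpha / (beta - p)
    return x, p, lam
\end{verbatim}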
\subsubsection*{Convergence Analysis of $\mathcal{IM}_2$}
A basic stability analysis is provided for a continuous-time approximation of
the iterative mechanism with global objective, $\mathcal{IM}_2$. For tractability, let the
player utilities be of the form $U_i=\alpha_i \log(x_i)$. Further define the global objective
function of the risk manager as $F(x):=\sum_i \gamma_i x_i$, for some $\gamma_i >0 \; \forall i$.
Substituting $p_i$ with $\gamma_i / \lambda$, the continuous-time counterpart of (\ref{e:iterative1a})-(\ref{e:iterative3a}) is
\begin{eqnarray} \label{e:contiterative1}
\dot \lambda = \dfrac{d \lambda}{dt}= \kappa_{\lambda} \dfrac{1}{\lambda}\left( \sum_i \gamma_i x_i - B \right) \\
\dot x_i =-\kappa_i \dfrac{\partial J_i}{\partial x_i }=\kappa_i
\left(\dfrac{\alpha_i}{x_i} + \dfrac{\gamma_i}{\lambda} - \beta_i \right) \;\; , \forall i \in \mathcal A. \nonumber
\end{eqnarray}
where $t$ denotes time and $\kappa_{\lambda},\; \kappa_i>0$ are step-size constants. As in the discrete-time
version, the players adopt here a gradient best response algorithm. Define the Lyapunov function
$$ V_L:= \frac{1}{2} \left( \dfrac{ \sum_i\gamma_i x_i - B}{\lambda} \right)^2 +
\frac{1}{2} \sum_i \left(\dfrac{\alpha_i}{x_i} + \dfrac{\gamma_i}{\lambda} - \beta_i \right)^2 ,$$
which is positive except at the solution(s) of (\ref{e:iterative1a})-(\ref{e:iterative3a}), where $V_L (x^*,\lambda^*)=0$.
Taking the derivative of $V_L$ with respect to time yields
$$ \dot V_L (x,\lambda) = -2 \dfrac{\sum_i\gamma_i x_i}{\lambda^3} \left( \dfrac{ \sum_i\gamma_i x_i - B}{\lambda} \right)^2
- \sum_i \dfrac{\alpha_i}{x_i^2} \left(\dfrac{\alpha_i}{x_i} + \dfrac{\gamma_i}{\lambda} - \beta_i \right)^2.
$$
Consider the region where $F(x)=\sum_i \gamma_i x_i >0$. Then, there exists an $\varepsilon>0$ such that
$$ \dot V_L (x,\lambda) \leq -\varepsilon V_L <0 , \;\; \forall (x,\lambda) \neq (x^*,\lambda^*),$$
i.e. for any point of the trajectory $(x,\lambda)$ not equal to a solution of (\ref{e:iterative1a}) and (\ref{e:iterative3a}). Thus, the continuous-time algorithm is exponentially stable \cite{khalilbook} on the set
$\bar{\mathcal X}:=\{ x \in \mathcal X : F(x) >0 \}$.
This result, which is summarized in the next proposition, is a strong indicator of fast convergence \cite{bertsekas3} of the discrete-time iterative mechanism (\ref{e:iterative1a})-(\ref{e:iterative3a}).
\begin{prop} \label{thm:converge1}
The continuous-time approximation of the iterative mechanism $\mathcal{IM}_2$, given by (\ref{e:contiterative1}), exponentially converges to a solution of (\ref{e:iterative1a})-(\ref{e:iterative3a}) on the set
$\bar{\mathcal X}=\{ x \in \mathcal X : F(x) >0 \}$.
\end{prop}
The \textit{exponential convergence} result above indicates a very fast convergence rate. To see this, let $x(0)$ be the initial player investments and $x^*$ denote a solution of (\ref{e:iterative1a})-(\ref{e:iterative3a}). Then, for the player investments $x(t)$ under continuous-time approximation of the iterative mechanism $\mathcal{IM}_2$ the following holds:
$$ \| x(t) - x^* \| \leq c_1 \| x(0) - x^* \| e^{-c_2 t},$$
for $t \geq 0$ and some constants $c_1, c_2 >0$. In other words, the investment levels approach their equilibrium values exponentially fast.
\subsection{Iterative welfare maximizing mechanism}
The iterative welfare maximizing mechanism $\mathcal{IM}_1$ extends mechanism $\mathcal M_1$. As in
the previous mechanism, the risk manager updates the Lagrangian multiplier $\lambda$ according to (\ref{e:iterative1a})
and the unit updates are given by (\ref{e:iterative3a}).
However, the computation of individual player incentive factors is more involved due to the dependence
of the objective (welfare maximization) on individual player utilities
\begin{equation} \label{e:iterative2b}
p_i (n)= \dfrac{1}{\lambda(n)} \sum_j \dfrac{\partial U_j(x(n))}{\partial x_i} ,
\end{equation}
which follows from (\ref{e:align1}).
At first glance, it seems that the designer has to ask players again for their marginal utility which defeats
the purpose of the iterative approach, namely ensuring strategy-proofness. Fortunately, the designer
can circumvent this issue by utilizing side information, in this case player cost factors $\beta$, within the
linear influence model.
It directly follows from the linear influence model that
$$ \dfrac{\partial U_j}{\partial x_i}=\dfrac{\partial U_j}{\partial x_j^e} \dfrac{\partial x_j^e}{\partial x_i}
=\dfrac{\partial U_j}{\partial x_j^e} W_{ji}= \dfrac{\partial U_j}{\partial x_j} W_{ji}.$$
The action of any player $i$, chosen according to the (relaxed) best response (\ref{e:iterative3a}) and
observed by the designer, yields the information
$$ \dfrac{\partial U_i(x)}{\partial x_i}=\beta_i - p_i $$
to the designer. Hence, the substitution
$$ \sum_j \dfrac{\partial U_j(x(n))}{\partial x_i}=\sum_j ( \beta_j - p_j) W_{ji}, \; \forall i \, , $$
can be used in (\ref{e:iterative2b}) to obtain
$$ \lambda p = W^T (\beta -p).$$
Thus, the designer implements
\begin{equation} \label{e:iterative2c}
p= (W^T + \lambda I)^{-1} W^T \beta
\end{equation}
together with (\ref{e:iterative1a}) to determine player incentive factors. Here, $(\cdot)^T$ denotes the transpose operator and $I$ the identity matrix. These results are summarized in the following theorem which extends Proposition~\ref{prop:m1}:
\begin{thm} \label{thm:m1}
Any solution of the iterative welfare maximizing mechanism $\mathcal{IM}_1$ described above and in Algorithm~\ref{alg:iterative2}
is player preference-compatible, efficient, and strategy-proof.
\end{thm}
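A minimal sketch of the designer-side computation in (\ref{e:iterative2c}), in Python with NumPy and a hypothetical $3\times 3$ influence matrix and cost vector:
\begin{verbatim}
import numpy as np

# Hypothetical estimated influence matrix W, observed cost factors beta,
# and current value of the multiplier lambda(n).
W = np.array([[1.0, 0.3, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.4, 1.0]])
beta = np.array([3.0, 2.5, 2.0])
lam = 0.4

# Incentive factors p = (W^T + lambda I)^{-1} W^T beta, cf. (e:iterative2c).
p = np.linalg.solve(W.T + lam * np.eye(3), W.T @ beta)
print(np.round(p, 3))
\end{verbatim}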
The information structure in mechanism $\mathcal{IM}_1$ is similar to that of $\mathcal{IM}_2$ with the
following differences. In $\mathcal{IM}_1$, the risk manager has to estimate the linear dependencies in the system
represented by the matrix $W$ and observe cost factors $\beta$ of units in addition to their investments.
These information requirements are due to the complex nature of the welfare maximization objective,
which necessitates additional (indirect) communication between the risk manager and
units in practice. Algorithm~\ref{alg:iterative2} summarizes the steps of the welfare maximizing mechanism
$\mathcal{IM}_1$.
\begin{algorithm}[hpb]
\SetAlgoLined
\KwIn{\textit{Designer}: budget $B$ and objective $\sum_i U_i$}
\KwIn{\textit{Players}: cost factor $\beta_i$ and utilities $U_i, \forall i$}
\KwResult{Player investments $x$ and incentive factors $p$}
Initial investments $x_0$ and incentive factors $p_0$ \;
\Repeat{end of iteration (negotiation)}{
\Begin(\textit{Designer:}){
Observe player actions $x$ and cost factors $\beta$ \;
Estimate the linear influence matrix $W$ \;
Update $\lambda$ according to (\ref{e:iterative1a}) \;
Compute incentive factors $p$ from (\ref{e:iterative2c}) \;
}
\Begin(\textit{Players:}){
\ForEach{player $i$}{
Estimate marginal utility $\partial U_i(x)/ \partial x_i$ \;
Compute investment $x_i$ from (\ref{e:iterative3a}) \;
}
}
}
\caption{Iterative mechanism $\mathcal{IM}_1$} \label{alg:iterative2}
\end{algorithm}
\subsubsection*{Convergence Analysis of $\mathcal{IM}_1$}
A basic stability analysis is provided for a continuous-time approximation of
the iterative mechanism $\mathcal{IM}_1$ similar to the one of the $\mathcal{IM}_2$ in the previous subsection.
For tractability, let the player utilities be of the form $U_i=\alpha_i \log(x_i)$ as before.
Substituting $p_i$ with
$$p_i= \dfrac{\beta_i }{1+\lambda}, $$
which follows from (\ref{e:iterative2c}) and $W=I$,
the continuous-time counterpart of (\ref{e:iterative1a}) and (\ref{e:iterative3a}) is
\begin{eqnarray} \label{e:contiterative2}
\dot \lambda = \dfrac{d \lambda}{dt}= \kappa_{\lambda} \dfrac{1}{1+ \lambda}\left( \sum_i \beta_i x_i - B \right) \\
\dot x_i =-\kappa_i \dfrac{\partial J_i}{\partial x_i }=\kappa_i
\left(\dfrac{\alpha_i}{x_i} + \dfrac{\beta_i}{1+ \lambda} - \beta_i \right) \;\; , \forall i \in \mathcal A. \nonumber
\end{eqnarray}
where $t$ denotes time and $\kappa_{\lambda},\; \kappa_i>0$ are step-size constants. As in the discrete-time
version, the players adopt here a gradient best response algorithm. Define the Lyapunov function
$$ \bar V_L:= \frac{1}{2} \left( \dfrac{ \sum_i\beta_i x_i - B}{1+ \lambda} \right)^2 +
\frac{1}{2} \sum_i \left(\dfrac{\alpha_i}{x_i} + \dfrac{\beta_i}{1+ \lambda} - \beta_i \right)^2 ,$$
which is positive except at the solution(s) of (\ref{e:iterative1a}) and (\ref{e:iterative3a}), where $\bar V_L (x^*,\lambda^*)=0$.
Taking the derivative of $\bar V_L$ with respect to time yields
$$ \dot{\bar{V}}_L (x,\lambda) = -2 \dfrac{\sum_i\beta_i x_i}{(1+\lambda)^3} \left( \dfrac{ \sum_i\beta_i x_i - B}{1+\lambda} \right)^2
- \sum_i \dfrac{\alpha_i}{x_i^2} \left(\dfrac{\alpha_i}{x_i} + \dfrac{\beta_i}{1+\lambda} - \beta_i \right)^2.
$$
Consider the region where $\sum_i \beta_i x_i >0$. Then, there exists an $\varepsilon>0$ such that
$$ \dot{\bar{V}}_L (x,\lambda) \leq -\varepsilon \bar V_L <0 ,\;\; \forall (x,\lambda) \neq (x^*,\lambda^*), $$
i.e. for any point of the trajectory $(x,\lambda)$ not equal to a solution of (\ref{e:iterative1a}) and (\ref{e:iterative3a}).
Thus, the continuous-time algorithm is exponentially stable \cite{khalilbook} on the set
$\tilde{\mathcal X}:=\{ x \in \mathcal X : \sum_i \beta_i x_i >0 \}$.
This result, which is summarized in the next proposition, is a strong indicator of fast convergence \cite{bertsekas3} of the discrete-time iterative mechanism $\mathcal{IM}_1$.
\begin{prop} \label{thm:converge2}
The continuous-time approximation of the iterative mechanism $\mathcal{IM}_1$, given by (\ref{e:contiterative2}), exponentially converges to a solution on the set $\tilde{\mathcal X}=\{ x \in \mathcal X : \sum_i \beta_i x_i >0 \}$.
\end{prop}
\section{Use Case Scenario and Numerical Analysis} \label{sec:numerical}
In order to illustrate the incentive mechanism framework for risk management, a use case
scenario is described next. Since most organizations do not openly publish their actual risk management
structure or numbers, this scenario is naturally hypothetical and the numbers in the subsequent
numerical analysis do not necessarily coincide with real world counterparts.
\subsection{Example Use Case Scenario}
In this subsection, a possible use case scenario is described for a large-scale enterprise with
multiple autonomous business units, denoted by set $\mathcal A$, who collaborate and share IT infrastructure in order
to provide various services and products. In addition to the business units, the enterprise headquarters has a special security risk management division, which will be simply referred to as ``risk manager'' here. The task of the risk manager, $\mathcal{D}$,
is successful deployment and operation of security and IT risk management projects that
entail enterprise-wide computer-assisted information collection (observation), risk assessment (decision making),
and mitigation (control).
The results and algorithms described in this paper can be utilized to develop a manual risk management strategy
as well as a technical system to handle a large number of business units and multiple concurrent risk
management projects. For simplicity and as a special case of the latter, this scenario focuses on the former.
Let the risk manager start a project to improve robustness of the IT systems involved in a product against
security threats. The success of the project naturally depends on collaboration of the $6$ specific business
units involved at various stages of the product in question. However, not every unit plays an equal role
in creation of the product, and hence, their risk exposure is different. Therefore, those units with a
more significant role have to make a larger investment to the project and their IT systems.
During the project, the divisions have to provide accurate information on their business and networked systems.
At the operational phase, each division allocates manpower and resources for the proper operation of
the system. Hence, participation in this risk management project is associated with a certain cost to each unit in terms of
investments and manpower. Although each unit sees a certain amount of value in the new risk management system,
if they are left alone to themselves, their contributions may not be sufficient for the successful realization of the
risk management system. Thus, the risk manager uses parts of its budget for subsidizing individual unit investments,
if necessary in the form of manpower and expertise.
Let $x=[x_1, x_2, \ldots, x_6]$ denote the investments (project contributions) of business units. Their contribution
to the project is evaluated using the multi-variable objective function $F(x)$, which describes the goal of the
entire project. The individual marginal contribution of a business unit $i$ (one of six) to the project
goal at a given (project) state is given by the derivative, $\partial F(x) / \partial x_i$. It is important to note
that the risk manager may not know the exact form of $F(x)$ beforehand, and has to estimate $\partial F(x) / \partial x_i$
for each business unit $i$ at a given state.
The goal of the risk manager is to ensure the success of the project, which may be captured by making the objective
function achieve a certain minimum threshold value, i.e. $F(x)>V_{threshold}$. The subsidies given to the units
(monetarily or in the form of assistance) are determined in proportion to their current investments. For example,
the business unit $i$ receives $p_i x_i$. These subsidies have to be of course within the allocated budget, i.e.
$\sum_{i=1}^6 p_i x_i \leq B$. Note that the budget in question is periodic, e.g. $B$ units per month or year.
The interaction between the risk manager and individual units is designed according to Algorithm~\ref{alg:iterative1}
based on $\mathcal{IM}_2$. The actual time-scale of the iteration depends on the specific requirements of the
enterprise. For example, the risk managers and representatives from the units may come together in weekly
or bi-weekly intervals to evaluate the progress, which gives some time to the units and manager for updating
own evaluations on marginal benefits and contributions, respectively. We next present a numerical example
to further illustrate the scenario described.
\subsection{Numerical Analysis}
Based on the use case scenario, an example is numerically analyzed with a risk manager and $6$ units, who implement
the iterative mechanism $\mathcal{IM}_2$ using Algorithm~\ref{alg:iterative1}. The budget is $B=3$,
the global objective function of the risk manager is $F(x)=\sum_{i=1}^6 \gamma_i x_i$,
where $\gamma=[0.8,\; 0.4, \; 0.5, \; 0.2,\; 0.3, \;0.1]$, the utilities of units are in the form of $U_i(x_i)=\alpha_i \log(x_i)$,
where $\alpha=[0.9,\; 0.7,\; 0.6,\; 0.8,\; 0.2,\; 0.4] $, and the unit cost factor is $\beta=3$ for all six units. Each unit
starts the iteration with an initial investment of $x_i=0.5 \; \forall i $ and receives an initial incentive
factor of $p_i=0.3 \; \forall i$. The measurement units of the budget $B$ and investments $x$ are assumed
to be on the order of millions of dollars. The step-size constants are chosen as $\kappa_d=0.05$ and $\phi=0.3$.
The success of the project is decided by whether the objective function passes the minimum threshold of $2.5$, i.e.
$F(x^*)>2.5$.
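As an illustration, the Python sketch of $\mathcal{IM}_2$ given after Algorithm~\ref{alg:iterative1} can be run with these parameters (the initial multiplier value is an assumption, since only the initial investments and incentive factors are specified above); the resulting trajectory can then be inspected or plotted.
\begin{verbatim}
import numpy as np
# run_IM2 is the sketch of the iterative mechanism IM_2 given earlier.
alpha = np.array([0.9, 0.7, 0.6, 0.8, 0.2, 0.4])
gamma = np.array([0.8, 0.4, 0.5, 0.2, 0.3, 0.1])
beta  = np.full(6, 3.0)
x, p, lam = run_IM2(alpha, beta, gamma, B=3.0, kappa_d=0.05, phi=0.3,
                    lam0=0.3, x0=0.5, p0=0.3, n_steps=15)
print("investments x      :", np.round(x, 3))
print("incentive factors p:", np.round(p, 3))
print("objective F(x)     :", round(float(gamma @ x), 3))
\end{verbatim}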
The evolution of unit investment levels $x(n)$ is shown in Figure~\ref{fig:xinvest1} and the associated
incentive factors $p(n)$ in Figure~\ref{fig:incentfactor1}. The first unit, which contributes the most to the objective
receives a higher amount of aid from the risk manager than others. The algorithm converges fast, in $10-15$ steps,
for the given parameters, as indicated by the exponential convergence of its continuous-time counterpart. For a time interval of $1-2$ weeks per iteration, this corresponds to $3-6$ months
in practice. Although this convergence time may seem a disadvantage at first glance, in a practical
project with highly varying parameters, such an online algorithm may even be beneficial in terms of adaptability
over time.
\begin{figure}[htp]
\centering
\includegraphics[width=0.7\columnwidth]{xinvest1.eps}
\caption{The evolution of unit investment levels $x(n)$ under Algorithm~\ref{alg:iterative1}. }
\label{fig:xinvest1}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.7\columnwidth]{incentfactor1.eps}
\caption{The evolution of incentive factors $p(n)$ under Algorithm~\ref{alg:iterative1}. }
\label{fig:incentfactor1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\columnwidth]{yinvest1.eps}
\caption{The evolution of unit investment levels $y(n)$ without any incentive mechanism implemented. }
\label{fig:yinvest1}
\end{figure}
In contrast, the investment levels of units without any incentive mechanism in place, $y(n)$, are shown
in Figure~\ref{fig:yinvest1}. A comparison of the objective function $F(x)$ with and without an incentive
mechanism is depicted in Figure~\ref{fig:ffunction1}. Naturally, this improvement comes at the expense
of the budget $B$ spent entirely by the risk manager.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\columnwidth]{ffunction1.eps}
\caption{A comparison of the objective function $F(x(n))$ with and without an incentive
mechanism. Under the incentive mechanism it passes the success threshold of $2.5$.}
\label{fig:ffunction1}
\end{figure}
\section{Literature Review} \label{sec:discussion}
Building upon its successful applications to economics and engineering (e.g. networks), game theory has been recently utilized to model and analyze security problems \cite{alpcan-book}. Similar formalization efforts have been ongoing in the risk management area with the goal of developing analytical approaches to (security) risk analysis, management, and associated decision making \cite{riskbook1,crisis09,guikema1,icc10jeff}. Unsurprisingly, game theory enjoys an increased interest in the risk management community \cite{gtandrisk1,gtandrisk2,patecornell1,guikemachap}, as it provides a valuable and relevant mathematical framework \cite{alpcan-book,basargame,fudenberg}. Recently, a game theoretic approach has been developed for security and risk-related decision making and investments in \cite{miurako08-decision,miurako08-investment}.
Mechanism design \cite{maskin1,lazarSemret1998,johari1} is a field of game theory, where a designer imposes rules on the underlying strategic (noncooperative) game in order to achieve certain desirable objectives such as social welfare maximization or a system-wide goal. Hence, mechanism design can be viewed as a reverse engineering of games. It is especially useful in developing analytical frameworks for incentive mechanisms. Recently, there has been widespread interest in using mechanism design for modeling, analyzing and solving problems in network resource allocation problems that are decentralized in nature~\cite{johari1,hajek1,lazarSemret1998,rajiv1,wuWangLiuClancy2009,huangBerryHonig2006b}. It has also been applied to resource allocation in the context of engineering optimization~\cite{guikema2}. A basic game design approach to security investments in the risk management context has been discussed in \cite{alpcan-book}.
The presented incentive mechanism framework makes use of both mechanism design~\cite{maskin1,lazarSemret1998,johari1,alpcan-infocom10} and game theory~\cite{basargame,fudenberg}, which provide solid analytical and conceptual foundations. In contrast to many existing studies~\cite{holger-allerton,hurwicz1972,dasgupta}
focusing on answering the question of ``which mechanisms are possible to design'', this work adopts a constructive
approach to develop a practical methodology and applies it to security risk management. Despite sharing the game-theoretic approach of earlier work \cite{miurako08-decision,miurako08-investment}, it is distinguished from these by the mechanism design framework developed on top of the game. A similar perspective has been briefly discussed in \cite[Chap. 6]{alpcan-book}, which however has not taken into account incentive-compatibility aspects.
The article~\cite{guikema2}, which shares a similar goal with this one, discusses the problem of designing an allocation scheme that leads to truthful reporting by the engineers and allocation of the scarce resources within the VCG framework. This work differs from~\cite{guikema2} in multiple ways in addition to its focus on risk management. First, the mechanisms discussed here are iterative, enable operation even under limited information, and do not require any direct revelation of preferences by the users or risk manager. Similar iterative schemes have been analyzed in depth in the networking literature, e.g. see \cite{hajek1,srikantbook,tansuphd}. Second, the sufficient conditions for convergence and operation of the iterative mechanisms here are not as restrictive as in~\cite{guikema2}. Finally, the properties of iterative interaction algorithms are analyzed rigorously from a dynamical system perspective and their rapid convergence is proven.
\section{Discussion and Conclusion} \label{sec:conclusion}
The analytical incentive mechanism design framework presented can not only be used to derive guidelines for handling incentives in risk management but also to develop computer-assisted schemes. The abstract nature of the
framework is an advantage in terms of widespread applicability to diverse situations and organization types. In order to
satisfy all three objectives of efficiency, preference-compatibility and strategy-proofness, iterative incentive mechanisms
and related algorithms are developed which also allow implementation under information limitations. These mechanisms are very straightforward to analyze and implement numerically, which is especially useful since any practical implementation of such incentive mechanism will most probably involve some kind of computer-assistance. The risk manager has then
the option to evaluate various scenarios through simulations before actual deployment. This is illustrated with a hypothetical deployment scenario and a numerical example.
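For instance, a minimal numerical experiment of this kind can be set up in a few lines of Python. The quadratic costs below are purely hypothetical placeholders (the coefficients $b_i$, the coupling parameter $a$ and the incentive terms $r_i$ are illustrative and are not quantities defined in this paper); the sketch merely runs a simultaneous best-response iteration and reports the resulting allocation.
\begin{verbatim}
import numpy as np

# Hypothetical quadratic costs: unit i minimizes, over x_i >= 0,
#   J_i(x) = 0.5*b[i]*x_i**2 + a*x_i*sum_{j != i} x_j - r[i]*x_i
N, a = 4, 0.2
b = np.array([2.0, 1.5, 3.0, 2.5])
r = np.array([1.0, 0.8, 1.2, 0.9])

def best_response(x):
    # each unit's unconstrained minimizer, projected onto x_i >= 0
    return np.maximum(0.0, (r - a * (x.sum() - x)) / b)

x = np.zeros(N)
for _ in range(50):   # simultaneous (Jacobi-style) update rounds
    x = best_response(x)
print("approximate equilibrium allocation:", np.round(x, 4))
\end{verbatim}
Different incentive terms or cost parameters can then be compared simply by re-running such a script with modified inputs.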
The presented incentive mechanism framework can be extended in multiple directions.
One immediate extension is multiple decision variables. For example, units may need to
distinguish between monetary investments and local resources such as manpower. Similarly,
the risk manager may utilize multiple separate incentive factors. A related but more challenging
extension is multi-criteria decision making, where preferences are not simply expressed
through scalar valued functions such as $U$ and $F$. This is an open research area also
in decision and optimization theories.
The limitations of the utility-based approach adopted here are also worth noting. The expression
of preferences through specific (continuous, differentiable) functions is obviously a simplification
to facilitate devising analytically tractable models. However, as can be seen in Sections~\ref{sec:iterative}
and \ref{sec:numerical}, the resulting algorithms do not necessarily require that the players
estimate their entire utility functions beforehand. A step-by-step iterative estimation process is
fully sufficient to establish and communicate these preferences.
An underlying assumption of the model until now has been the fixed nature of player
preferences or utility functions. Under this assumption, the risk manager can influence
unit decisions only by introducing additive incentive factors to their cost structure
as discussed. In reality however, the unit preferences are open to changes through
psychological factors. The arts of persuasion and politics may ``shift'' the utility
curves in the model. Quantification of such factors is obviously a significant
yet open research challenge.
An approach closely related to the strategic (noncooperative) game framework discussed in
this paper, is based on coalitional (cooperative) games \cite{fudenberg,WS00}. How to
motivate team building and cooperation in security and risk management has been recently discussed
in \cite{walid2} as well as in \cite[Chap. 6]{alpcan-book}. This alternative approach provides
a complementary and potentially very interesting research direction.
Some of the other open research directions follow directly from relaxing the
assumptions in Section~\ref{sec:model}. Improving the robustness of the
incentive mechanisms against malicious units who do not follow the rules or
have utilities orthogonal to other users (sometimes referred to as adversarial mechanism
design) is an emerging and relevant research area.
Detection of such misbehavior is also of both practical and theoretical interest.
In parallel to the units, relaxing the assumption of the risk manager's honesty leads
to similarly interesting questions, such as how a unit can detect and respond to
misbehavior (e.g. unfairness) by the risk manager.
\section*{Appendix}
\subsection*{Existence and Uniqueness of Nash Equilibrium}
This appendix revisits the analysis in \cite{tansuphd,rosen} on existence
and uniqueness of Nash equilibrium.
In the strategic game $\mathcal G$ given in Definition~\ref{def:game}, the strategy (decision) space of the players
is assumed to be convex and compact with a nonempty interior. Furthermore, the cost functions of the players,
$J_i, \;\; i \in \mathcal A$, are strictly convex in $x_i$ and at least twice continuously differentiable due to
their definition as well as that of the utility functions $U_i, \;\; i \in \mathcal A$. Therefore, the game $\mathcal G$
admits (at least) a Nash equilibrium by Theorem 4.4 in~\cite[p.176]{basargame}.
Next, additional conditions are imposed such that the game $\mathcal G$
admits a unique NE solution. Toward this end, define the pseudo-gradient operator
\begin{equation}\label{e:psgrad}
\overline \nabla J:= \left [\partial J_1(x) / \partial x_1 \cdots
\partial J_N(x) / \partial x_N \right ]^T := g(x).
\end{equation}
Subsequently, let the $N \times N$ matrix $G(x)$ be the Jacobian of $g(x)$
with respect to $x$:
\begin{equation}\label{e:g1}
G(x):=
\begin{pmatrix}
b_1 & a_{12} & \cdots & a_{1N} \\
\vdots & & \ddots & \vdots \\
a_{N1} & a_{N2} & \cdots & b_N
\end{pmatrix} \;,
\end{equation}
where $b_i$ and $a_{ij}$ are defined as
$b_i:=\frac{\partial^2 J_i(x)}{\partial x_i^2}$ and
$a_{ij} := \frac{\partial^2 J_i(x)}{\partial x_i \partial x_j}$,
respectively.
\begin{assm} \label{assm3}
The symmetric matrix $G(x)+G(x)^T$, where $G(x)$ is defined in~(\ref{e:g1}),
is positive definite, i.e. $G(x)+G(x)^T >0$ for all $x \in \mathcal X$.
\end{assm}
\begin{assm} \label{assm4}
The strategy space $\mathcal X$ of the game $\mathcal G$ can be described as
\begin{equation} \label{e:xdef}
\mathcal X:=\{x \in \mathbb R^N\,:\, h_j(x) \leq 0 ,\; j=1,\,2,\,\ldots r\},
\end{equation}
where each $h_j: \mathbb R^N \rightarrow \mathbb R$, $j=1,\,2,\,\ldots r$, is convex in its arguments,
and the set $\mathcal X$ is bounded and has a nonempty interior. In addition, the derivative of
at least one of the constraints with respect to $x_i$,
$\{d h_j(x) / d x_i ,\; j=1,\,2,\,\ldots r\}$, is nonzero for $i=1,\,2,\,\ldots N$, $\forall x \in \mathcal X$.
\end{assm}
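As an illustration, the positive definiteness condition of Assumption~\ref{assm3} is easy to check numerically at any given point: one forms the Jacobian of the pseudo-gradient by finite differences and inspects the eigenvalues of its symmetrised version. In the Python sketch below the pseudo-gradient corresponds to hypothetical quadratic costs and serves only as a placeholder for the actual $\partial J_i/\partial x_i$.
\begin{verbatim}
import numpy as np

def pseudo_gradient(x):
    # placeholder: entry i should be dJ_i/dx_i of the actual cost functions
    b = np.array([2.0, 1.5, 3.0, 2.5]); a = 0.2
    return b * x + a * (x.sum() - x) - 1.0

def jacobian(g, x, h=1e-6):
    # numerical Jacobian G(x) of the pseudo-gradient, by central differences
    G = np.zeros((x.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size); e[j] = h
        G[:, j] = (g(x + e) - g(x - e)) / (2 * h)
    return G

x = np.ones(4)
G = jacobian(pseudo_gradient, x)
eigs = np.linalg.eigvalsh(G + G.T)
print("eigenvalues of G(x) + G(x)^T:", eigs)
assert np.all(eigs > 0)   # the positive definiteness condition holds at this x
\end{verbatim}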
Now, revisiting the analysis in \cite{tansuphd,rosen}, it is shown that the game $\mathcal G$ admits
a unique Nash equilibrium under Assumptions~\ref{assm3} and \ref{assm4}.
In view of Assumption~\ref{assm4}, the Lagrangian function for player $i$
in this game is given by
\begin{equation} \label{e:lagrangian}
L_i(x,\mu)=J_i(x)+\sum_{j=1}^r \mu_{i, j} h_j(x) ,
\end{equation}
where $\mu_{i, j}\geq 0,\; j=1,\,2,\,\ldots r$, are the Lagrange multipliers of
player $i$~\cite[p. 278]{bertsekas2}.
We now provide a proposition for the game $\mathcal G$ with conditions similar
to the well known Karush-Kuhn-Tucker necessary conditions
(Proposition 3.3.1, p. 310,~\cite{bertsekas2}).
\begin{prop} \label{kuhntucker}
Let $x^*$ be a NE point of the game $\mathcal G$ and let Assumptions~\ref{assm3}--\ref{assm4}
hold. Then there exists a unique set of Lagrange multipliers,
$\{\phi_{i, j}:\; j=1,\,2,\,\ldots r ,\; i=1,\,2,\,\ldots N \} $, such that
$$ \begin{array}{r}
\dfrac{d L_i(x^*,\phi)}{d x_i}= \dfrac{d J_i(x^*)}{d x_i}+
\displaystyle { \sum_{j=1}^r \phi_{i, j}\dfrac{d h_j(x^*)}{d x_i}} =0,\\
i=1,\,2,\,\ldots N, \\
\phi_{i, j} \geq 0, \;\; \forall i, j, \text{ and } \;
\phi_{i, j} = 0, \;\; \forall j \notin A_i(x^*), \forall i\, ,
\end{array}
$$
where $A_i(x^*)$ is the set of active constraints in the $i^{th}$ player's minimization
problem at the NE point $x^*$.
\end{prop}
\begin{proof}
The proof essentially follows lines similar to those of
Proposition 3.3.1 of~\cite{bertsekas2}, where the penalty approach
is used to approximate the original constrained problem by
an unconstrained problem that penalizes violation of the
constraints. The main difference here is the
repetition of this process for each individual $x_i$ at the NE point $x^*$.
\end{proof}
In more compact notation, define the vector of Lagrangian functions
$L:=[L_1,\ldots,L_N]$ and
the $N \times N$ diagonal matrix of Lagrange multipliers for the $j^{th}$
constraint $\Phi_j=\mathrm{diag} [\phi_{1,j}, \phi_{2,j}, \ldots \phi_{N,j}]$.
By Proposition~\ref{kuhntucker} and Assumption~\ref{assm4}, a NE point $x^{(1)}$ satisfies
\begin{equation} \label{e:proofunique1}
\overline \nabla L(x^{(1)},\Phi^{(1)})=g(x^{(1)})+\sum_{j=1}^r \Phi_j^{(1)}
\overline \nabla h_j(x^{(1)})=0,
\end{equation}
where $\Phi_j^{(1)}\geq 0$ is unique for each $j$.
Assume there are two different NE points $x^{(0)}$ and $x^{(1)}$. Then, one can also
write the counterpart of (\ref{e:proofunique1}) for $x^{(0)}$.
Following an argument similar to the one in the proof of Theorem 2 in~\cite{rosen},
one can show that this leads to a contradiction. We present a brief outline of a simplified
version of that proof for the sake of completeness.
Multiplying (\ref{e:proofunique1}) from the left by $(x^{(0)}-x^{(1)})^T$, its counterpart for $x^{(0)}$ from the left by
$(x^{(1)}-x^{(0)})^T$, and adding these together with the transpose of the first product, we obtain
\begin{equation} \label{e:contradict}
\begin{array}{rcl}
0 & = & (x^{(0)}-x^{(1)})^T \overline \nabla L(x^{(1)},\Phi^{(1)}) \\
& & + \left( \overline \nabla L(x^{(1)},\Phi^{(1)})\right)^T (x^{(0)}-x^{(1)}) \\
& & + (x^{(1)}-x^{(0)})^T \overline \nabla L(x^{(0)},\Phi^{(0)}) \\ \\
&= & (x^{(0)}-x^{(1)})^T \left( g(x^{(1)})- g(x^{(0)}) \right) \\
& & + \left( g(x^{(1)})- g(x^{(0)}) \right)^T (x^{(0)}-x^{(1)}) \\
& & + (x^{(1)}-x^{(0)})^T \sum_{j=1}^r [\Phi_j^{(1)} \overline \nabla h_j(x^{(1)}) \\
& & - \Phi_j^{(0)} \overline \nabla h_j(x^{(0)})] .\\
\end{array}
\end{equation}
Define the strategy vector $x(\theta)$ as a
convex combination of the two equilibrium points $x^{(0)}\,,\,x^{(1)} $ :
$$
x(\theta)=\theta x^{(1)} + (1-\theta) x^{(0)} ,
$$
where $0<\theta<1$. Take the derivative of $g(x(\theta))$ with respect to $\theta$,
\begin{equation}\label{e:gdiff}
\dfrac{dg(x(\theta))}{d\theta}=G(x(\theta)) \frac{dx(\theta)}{d\theta}=G(x(\theta))(x^{(1)} -x^{(0)}),
\end{equation}
where $G(x)$ is defined in~(\ref{e:g1}). Integrating~(\ref{e:gdiff}) over
$\theta$ yields
\begin{equation}\label{e:gintegral}
g(x^{(1)})-g(x^{(0)})=\left[\int_0^1 G(x(\theta)) d\theta \right](x^{(1)}-x^{(0)} ) .
\end{equation}
Multiplying (\ref{e:gintegral})
from left by $(x^{(1)}-x^{(0)} )^T$, the transpose of (\ref{e:gintegral}) from right by
$(x^{(1)}-x^{(0)} )$, and adding these two terms yields
\begin{equation}\label{e:gintegral2}
(x^{(1)}-x^{(0)})^T \left[\int_0^1 G(x(\theta))+G^T(x(\theta)) d\theta \right](x^{(1)}-x^{(0)} ) .
\end{equation}
Since $G(x(\theta))+G^T(x(\theta))$ is positive definite for every $\theta$ by Assumption~\ref{assm3},
and sums and integrals of positive definite matrices are positive definite,
the matrix $\bar G:=\int_0^1 G(x(\theta))+G^T(x(\theta)) d\theta$ is positive definite.
Similarly, we have
\begin{equation}\label{e:hdiff}
\dfrac{d \overline \nabla h(x(\theta))}{d\theta}=H(x(\theta)) \frac{dx(\theta)}{d\theta}=H(x(\theta))(x^{(1)}-x^{(0)}),
\end{equation}
where $H(x)$ is the Jacobian of $\overline \nabla h(x)$, which is positive semidefinite due to the convexity of $h(x)$
by definition. The third term in (\ref{e:contradict}),
$$ \begin{array}{r}
(x^{(0)}-x^{(1)} )^T \sum_{j=1}^r [\Phi_j^{(0)}\overline \nabla h_j(x^{(0)})-\Phi_j^{(1)}
\overline \nabla h_j(x^{(1)})],
\end{array}
$$
is bounded above by
$$ \sum_{j=1}^r [\Phi_j^{(1)}-\Phi_j^{(0)}] [h_j(x^{(1)})-h_j(x^{(0)})] ,$$
due to the convexity of $h(x)$. Since for each constraint $j$, $h_j(x) \leq 0\; \forall x$, $\Phi_j^{(i)} h_j(x^{(i)})=0,\; i=0, 1$, and
$\Phi_j$ is positive semidefinite, where the latter two follow from the Karush-Kuhn-Tucker
conditions, this term is also non-positive.
The sum of the first two terms in (\ref{e:contradict}) is the negative of (\ref{e:gintegral2}), which
is strictly positive for all $x^{(1)} \neq x^{(0)}$. Hence, the right-hand side of (\ref{e:contradict})
is strictly negative, contradicting the fact that it equals zero, unless $x^{(1)}=x^{(0)}$. Thus, there exists a unique NE point in the game $\mathcal G$.
\betaibliographystyle{IEEEtranS}
\betaegin{thebibliography}{10}
\providecommand{\url}[1]{{#1}}
\providecommand{\urlprefix}{URL }
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{DOI~\discretionary{}{}{}#1}\else
\providecommand{\doi}{DOI~\discretionary{}{}{}\betaegingroup
\urlstyle{rm}\Url}\fi
\betaibitem{tansuphd}
Alpcan, T.: Noncooperative games for control of networked systems.
\newblock Ph.D. thesis, University of Illinois at Urbana-Champaign, Urbana, IL
(2006)
\betaibitem{alpcan-book}
Alpcan, T., Ba\c{s}ar, T.: Network Security: A Decision and Game Theoretic
Approach.
\newblock Cambridge University Press (2011).
\newblock \urlprefix\url{http://www.tansu.alpcan.org/book.php}
\betaibitem{crisis09}
Alpcan, T., Bambos, N.: Modeling dependencies in security risk management.
\newblock In: Proc. of 4th Intl. Conf. on Risks and Security of Internet and
Systems (Crisis). Toulouse, France (2009)
\betaibitem{gamenetsne}
Alpcan, T., Pavel, L.: Nash equilibrium design and optimization.
\newblock In: Proc. of Intl. Conf. on Game Theory for Networks ({G}ame{N}ets
2009). Istanbul, Turkey (2009)
\betaibitem{cdc09lacra}
Alpcan, T., Pavel, L., Stefanovic, N.: A control theoretic approach to
noncooperative game design.
\newblock In: Proc. of 48th {IEEE} Conf. on Decision and Control. Shanghai,
China (2009)
\betaibitem{patecornell1}
Pat\'e-Cornell, E.: Risks and games: Intelligent actors and fallible systems.
\newblock In: Proc. of 2nd Intl. Symp. on Engineering Systems. Cambridge, MA,
USA (2009)
\betaibitem{basargame}
Ba\c{s}ar, T., Olsder, G.J.: {D}ynamic Noncooperative Game Theory, 2nd edn.
\newblock Philadelphia, PA: SIAM (1999)
\betaibitem{bertsekas2}
Bertsekas, D.: Nonlinear Programming, 2nd edn.
\newblock Athena Scientific, Belmont, MA (1999)
\betaibitem{bertsekas3}
Bertsekas, D., Tsitsiklis, J.N.: {P}arallel and Distributed Computation:
Numerical Methods.
\newblock Prentice Hall, Upper Saddle River, NJ (1989)
\betaibitem{holger-allerton}
Boche, H., Naik, S.: Mechanism design and implementation theoretic perspective
of interference coupled wireless systems.
\newblock In: Proc. of 47th Annual Allerton Conf. on Communication, Control,
and Computing. Monticello, IL, USA (2009)
\betaibitem{alpcan-infocom10}
Boche, H., Naik, S., Alpcan, T.: Characterization on non-manipulable and pareto
optimal resource allocation strategies in interference coupled wireless
systems.
\newblock In: Proc. of 29th IEEE Conf. on Computer Communications (Infocom).
San Diego, CA, USA (2010)
\betaibitem{dasgupta}
Dasgupta, P., Hammond, P., Maskin, E.: The implementation of social choice
rules: Some general results of incentive compatibility.
\newblock Review of Economic Studies \textbf{46}, 185--216 (1979)
\betaibitem{fudenberg}
Fudenberg, D., Tirole, J.: Game Theory.
\newblock {MIT Press} (1991)
\betaibitem{riskbook1}
Garvey, P.R.: Analytical Methods for Risk Management: A Systems Engineering
Perspective.
\newblock Statistics: textbook and monographs. Chapman and Hall/CRC, Boca
Raton, FL, USA (2009)
\betaibitem{guikema2}
Guikema, S.D.: Incentive compatible resource allocation in concurrent design.
\newblock Engineering Optimization \textbf{38}(2), 209--226 (2006).
\newblock \doi{10.1080/03052150500420272}
\betaibitem{guikemachap}
Guikema, S.D.: Game theory models of intelligent actors in reliability
analysis: An overview of the state of the art.
\newblock In: Game Theoretic Risk Analysis of Security Threats, pp. 1--19.
Springer US (2009).
\newblock \doi{10.1007/978-0-387-87767-9}
\betaibitem{guikema1}
Guikema, S.D., Aven, T.: Assessing risk from intelligent attacks: A perspective
on approaches.
\newblock Reliability Engineering \& System Safety \textbf{95}(5), 478--483
(2010).
\newblock \doi{DOI: 10.1016/j.ress.2009.12.001}.
\newblock
\urlprefix\url{http://www.sciencedirect.com/science/article/B6V4T-4XY4JY2-1/
2/4f7273aff7ad1c84a47c2277f405b92e}
\betaibitem{huangBerryHonig2006b}
Huang, J., Berry, R., Honig, M.: Auction--based {S}pectrum {S}haring.
\newblock ACM Mobile Networks and Applications Journal \textbf{24}(5), 405--418
(2006)
\betaibitem{hurwicz1972}
Hurwicz, L.: Decision and Organization, chap. On informationally decentralized
systems, pp. 297--336.
\newblock North-Holland, Amsterdam (1972)
\betaibitem{johari1}
Johari, R., Tsitsiklis, J.N.: Efficiency of scalar-parameterized mechanisms.
\newblock Operations Research \textbf{57}(4), 823--839 (2009).
\newblock \urlprefix\url{http://www.stanford.edu/~rjohari/pubs/char.pdf}
\betaibitem{gtandrisk2}
{John R. Hall}, J.: The elephant in the room is called game theory.
\newblock Risk Analysis \textbf{29}(8), 1061--1061 (2009).
\newblock \doi{10.1111/j.1539-6924.2009.01246.x}
\betaibitem{khalilbook}
Khalil, H.: Nonlinear Systems, 3rd edn.
\newblock Prentice Hall (2002)
\betaibitem{lazarSemret1998}
Lazar, A.A., Semret, N.: The progressive second price auction mechanism for
network resource sharing.
\newblock In: International Symposium on Dynamic Games and Applications.
Maastricht, Netherlands (1998)
\betaibitem{gtandrisk1}
{Louis Anthony (Tony) Cox}, J.: Game theory and risk analysis.
\newblock Risk Analysis \textbf{29}(8), 1062--1068 (2009).
\newblock \doi{10.1111/j.1539-6924.2009.01247.x}
\betaibitem{rajiv1}
Maheswaran, R.T., Basar, T.: Social welfare of selfish agents: motivating
efficiency for divisible resources.
\newblock In: 43rd {IEEE} Conf. on Decision and Control ({CDC}), vol.~2, pp.
1550-- 1555. Paradise Island, Bahamas (2004)
\betaibitem{maskin1}
Maskin, E.: Nash equilibrium and welfare optimality.
\newblock Review of Economic Studies \textbf{66}(1), 23 -- 38 (2003).
\newblock \doi{10.1111/1467-937X.00076}
\betaibitem{miurako08-investment}
Miura-Ko, R.A., Yolken, B., Bambos, N., Mitchell, J.: Security investment games
of interdependent organizations.
\newblock In: 46th Annual Allerton Conference (2008)
\betaibitem{miurako08-decision}
Miura-Ko, R.A., Yolken, B., Mitchell, J., Bambos, N.: Security decision-making
among interdependent organizations.
\newblock In: Proc. of 21st IEEE Computer Security Foundations Symp. (CSF), pp.
66--80 (2008)
\betaibitem{moore_red}
Moore, D., Shannon, C., Claffy, K.: Code-{R}ed: {A} case study on the spread
and victims of an {I}nternet worm.
\newblock In: Proc. of ACM SIGCOMM Workshop on Internet Measurement, pp.
273--284. Marseille, France (2002)
\betaibitem{icc10jeff}
Mounzer, J., Alpcan, T., Bambos, N.: Dynamic control and mitigation of
interdependent {IT} security risks.
\newblock In: Proc. of the IEEE Conference on Communication (ICC). IEEE
Communications Society (2010)
\betaibitem{Nash50}
Nash, J.F.: Equilibrium points in n-person games.
\newblock Proceedings of the National Academy of Sciences of the United States
of America \textbf{36}(1), 48--49 (1950).
\newblock \urlprefix\url{http://www.jstor.org/stable/88031}
\betaibitem{Nash51}
Nash, J.F.: Non-cooperative games.
\newblock The Annals of Mathematics \textbf{54}(2), 286 -- 295 (1951).
\newblock \urlprefix\url{http://www.jstor.org/stable/1969529}
\betaibitem{rosen}
Rosen, J.B.: {E}xistence and uniqueness of equilibrium points for concave
n-person games.
\newblock Econometrica \textbf{33}(3), 520--534 (1965)
\betaibitem{walid2}
Saad, W., Alpcan, T., Ba\c{s}ar, T., Hj{\o}rungnes, A.: Coalitional game theory
for security risk management.
\newblock In: Proc. of 5th Intl. Conf. on Internet Monitoring and Protection
({ICIMP}). Barcelona, Spain (2010)
\betaibitem{WS00}
Saad, W., Han, Z., Debbah, M., Hj{\o}rungnes, A., Ba\c{s}ar, T.: Coalitional
game theory for communication networks: {A} tutorial.
\newblock IEEE Signal Processing Mag., Special issue on Game Theory in Signal
Processing and Communications \textbf{26}(5), 77--97 (2009)
\betaibitem{srikantbook}
Srikant, R.: The Mathematics of Internet Congestion Control.
\newblock Systems \& Control: Foundations \& Applications. Birkhauser, Boston,
MA (2004)
\betaibitem{wuWangLiuClancy2009}
Wu, Y., Wang, B., Liu, K.J.R., Clancy, T.C.: Repeated open spectrum sharing
game with cheat--proof strategies.
\newblock IEEE Transactions on Wireless Communications \textbf{8}(4),
1922--1933 (2009)
\betaibitem{hajek1}
Yang, S., Hajek, B.: {VCG}-{K}elly mechanisms for allocation of divisible
goods: Adapting {VCG} mechanisms to one-dimensional signals.
\newblock IEEE JSAC \textbf{25}(6), 1237--1243 (2007).
\newblock \doi{10.1109/JSAC.2007.070817}
\end{thebibliography}
\end{document}
|
\begin{document}
\begin{abstract}
In the present paper, which is an outgrowth of our joint work
with Anthony Bak and Roozbeh Hazrat on unitary commutator
calculus
\cite{BV3, RNZ1, RNZ4, RNZ5}, we find generators of the mixed
commutator subgroups of relative elementary groups and
obtain unrelativised versions of commutator
formulas in the setting of Bak's unitary groups.
It is a direct sequel of our papers \cite{NV18, NZ2, NZ3, NZ6} and \cite{NZ1, NZ4},
where similar results were obtained for $\operatorname{GL}(n,R)$ and
for Chevalley groups over a commutative ring with 1, respectively.
Namely, let $(A,\Lambda)$ be any form ring and $n\ge 3$.
We consider Bak's hyperbolic unitary group $\operatorname{GU}(2n,A,\Lambda)$.
Further, let $(I,\Gamma)$ be a form ideal of $(A,\Lambda)$.
One can associate with $(I,\Gamma)$ the corresponding
elementary subgroup $\operatorname{FU}(2n,I,\Gamma)$ and the
relative elementary subgroup $\operatorname{EU}(2n,I,\Gamma)$
of $\operatorname{GU}(2n,A,\Lambda)$. Let $(J,\Delta)$ be another form
ideal of $(A,\Lambda)$.
In the present paper we prove the unexpected result that
the non-obvious type of generators for
$\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big]$,
as constructed in our previous papers with Hazrat, are
redundant and can be expressed as products of the obvious
generators, the elementary conjugates
$Z_{ij}(ab,c)=T_{ji}(c)T_{ij}(ab)T_{ji}(-c)$ and $Z_{ij}(ba,c)$,
and the
elementary commutators $Y_{ij}(a,b)=[T_{ji}(a),T_{ij}(b)]$,
where $a\in(I,\Gamma)$, $b\in(J,\Delta)$, $c\in(A,\Lambda)$.
It follows that
$\big[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)\big]=
\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big]$.
In fact, we establish much more precise generation results.
In particular, the elementary commutators $Y_{ij}(a,b)$ need only be
taken for one long root position and one short root position.
Moreover, $Y_{ij}(a,b)$ are central modulo
$\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$ and behave as symbols.
This allows us to generalise and unify many previous
results, including the multiple elementary commutator
formula, and dramatically simplify their proofs.
\end{abstract}
\maketitle
\hangindent 5.5cm\hangafter=0\noindent
To our
dear friend Mohammad Reza Darafsheh,\\
\phantom{xxxxxxxxx} with affection and admiration
\par\hangindent 5.5cm\hangafter=0\noindent
\bigskip
\section*{Introduction}
In a series of our joint papers with Anthony Bak and
Roozbeh Hazrat \cite{BV3, RNZ1, RNZ4, RNZ5} we studied
commutator formulas in Bak's unitary groups. In the present
paper we generalise, refine and strengthen some of the main
results of these works. Namely, we discover that
the set of generators for the mixed commutator subgroup
of relative elementary unitary groups
listed in these papers can be substantially reduced, and we remove
all commutativity conditions therein\footnote{In particular,
this solves \cite{yoga-2}, Problem~1 and \cite{RNZ4}, Problem 1.}.
This allows us to prove unexpected unrelative versions of the
commutator formulas, generalise multiple elementary commutator formulas, and more. These results both improve a great number of previous results and pave the way to several new unexpected applications.
Morally, the present paper is a direct sequel to our papers
\cite{NV18, NZ2, NZ3, NZ6} and \cite{NZ1, NZ4},
where the same was done for $\operatorname{GL}(n,R)$ and for Chevalley groups
over a commutative ring with 1, respectively. There, the proofs
heavily relied on our previous works, in particular on
\cite{ASNV, NVAS, NVAS2, RHZZ1, RHZZ2} for $\operatorname{GL}(n,R)$ and
on \cite{RNZ2, RNZ3} for Chevalley groups. Similarly, the present
paper heavily hinges on the results of \cite{BV3, RNZ1, RNZ4,
RNZ5}.
\subsection{The prior state of the art} To enunciate the main results
of the present paper, let us briefly recall the notation, which will
be reviewed in somewhat more detail in \S\S~1--4.
Let $(A,\Lambda)$ be a form ring,
$n\ge 3$, and let $\operatorname{GU}(2n,A,\Lambda)$ be Bak's hyperbolic
unitary group.
Below, $\operatorname{EU}(2n, A,\Lambda)$ denotes the [absolute] elementary unitary
group, generated by the elementary root unipotents.
As usual, for a form ideal $(I,\Gamma)$ of the
form ring $(A,\Lambda)$ we denote by
$\operatorname{FU}(2n,I,\Gamma)$ the unrelative
elementary subgroup of level $(I,\Gamma)$, and by
$\operatorname{EU}(2n, I,\Gamma)$
the relative elementary subgroup of level
$(I,\Gamma)$. By definition,
$\operatorname{EU}(2n, I,\Gamma)$ is the normal closure of $\operatorname{FU}(2n,I,\Gamma)$
in $\operatorname{EU}(2n, A,\Lambda)$. Further, $\operatorname{GU}(2n, I,\Gamma)$
and $\operatorname{CU}(2n, I,\Gamma)$
denote the principal congruence subgroup and the
full congruence subgroup of level $(I,\Gamma)$, respectively.
Let us recapitulate two principal results of our joint papers with
Roozbeh Hazrat, \cite{RNZ1, RNZ4, RNZ5}.
The first one is the birelative standard commutator
formula, \cite{RNZ1}, Theorems~1 and 2. It is a very broad generalisation
of the commutator formulas for unitary groups, previously established
by Anthony Bak, the first author, Leonid Vaserstein, Hong You,
Günter Habdank, and others, see, for instance \cite{B1, B2, BV3, VY, Ha1, Ha2, BP}.
\begin{oldtheorem}\label{standard}
Let $R$ be a commutative ring, $(A,\Lambda)$ be a form ring
such that $A$ is a quasi-finite $R$-algebra. Further, let $(I,\Gamma)$
and $(J,\Delta)$ be two form ideals of the form ring $(A,\Lambda)$
and let $n\ge 3$. Then the following commutator identity holds
$$ [\operatorname{GU}(2n,I,\Gamma), \operatorname{EU}(2n, J,\Delta)] =
[\operatorname{EU}(2n,I,\Gamma), \operatorname{EU}(2n, J,\Delta)]. $$
\noindent
When $A$ is itself commutative, one even has
$$ [\operatorname{CU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)] =
[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)]. $$
\end{oldtheorem}
Another crucial result is the description of a generating set for the
mixed commutator subgroup
$[\operatorname{EU}(2n,I,\Gamma), \operatorname{EU}(2n,J,\Delta)]$ {\it as a group\/}, similar
to the familiar generating set for relative elementary subgroups,
see \cite{BV3}, Proposition 5.1 (compare Lemma 3 below).
\par
Recall that we denote by $T_{ij}(a)$ elementary unitary
transvections. They come in two denominations, those of
{\it short root type\/}, when $i\neq\pm j$, and those of
{\it long root type\/}, when $i=-j$. The corresponding root
subgroups are then parametrised by the ring $A$ itself and by
the form parameter $\Lambda$, respectively. To simplify
notation in the relative case, we introduce the following convention.
For a form ideal $(I,\Gamma)$ we write $a\in(I,\Gamma)$ to denote
that $a\in I$ if $i\neq\pm j$, and
$a\in\lambda^{-(\varepsilon(i)+1)/2}\Gamma$ if $i=-j$. Clearly,
$a\in(I,\Gamma)$ means precisely that $T_{ij}(a)\in\operatorname{EU}(2n,I,\Gamma)$,
see \S\S~3,4 for details.
\par
Further, we consider
the elementary conjugates $Z_{ij}(a,c)$ and the elementary commutators $Y_{ij}(a,b)$, which are defined as follows:
$$ Z_{ij}(a,c)=T_{ji}(c)T_{ij}(a)T_{ji}(-c),
\qquad Y_{ij}(a,b)=[T_{ji}(a),T_{ij}(b)]. $$
\par
The following result in a slightly weaker
form was stated as Theorem~9 of \cite{RNZ5}, and in
precisely this form as Theorem 3B of \cite{RNZ4}. Observe
that there its proof depended on Theorem A, and thus
ultimately, on localisation methods.
\begin{oldtheorem}\label{oldgenerators}
Let $R$ be a commutative ring, $(A,\Lambda)$ be a form ring
such that $A$ is a quasi-finite $R$-algebra.
Let $(I,\Gamma)$ and $(J,\Delta)$ be two form ideals of the form ring $(A,\Lambda)$ and let $n\ge 3$.
The relative commutator subgroup $[\operatorname{EU}(2n,I,\Gamma), \operatorname{EU}(2n, J,\Delta)]$ is generated by the elements of the following three
types
\par\smallskip
$\bullet$ $Z_{ij}(ab,c)$ and $Z_{ij}(ba,c)$,
\par\smallskip
$\bullet$ $Y_{ij}(a,b)$,
\par\smallskip
$\bullet$ $[T_{ij}(a), Z_{ij}(b,c)]$,
\par\smallskip\noindent
where in all cases $a\in(I,\Gamma)$, $b\in(J,\Delta)$ and
$c\in(A,\Lambda)$.
\end{oldtheorem}
\sigmaubsection{Statement of the principal result}
The technical core of the present paper are Lemmas 6--12
that we prove in \S\S~5--8. Together they imply
that the above Theorem B can be {\it drastically\/} generalised
and improved, as follows:
\par\sigmamallskip
$\betaullet$ We can lift the commutativity condition.
\par\sigmamallskip
$\betaullet$ The third type of generators are redundant.
\par\sigmamallskip
$\betaullet$ The second type of generators can be
restricted to one long and one short root (and are subject
to further relations, to be stated below).
\par\sigmamallskip
The following result is the pinnacle of the present paper,
other results are either preparation to its proof, or its easy
corollaries. For the general linear group $\operatorname{GL}(n,R)$ it was
established in \cite{NZ2}, Theorem 1. For Chevalley groups
$G(\Pihi,R)$ over commutative rings
--- and thus, in particular, for the usual symplectic group
$\operatorname{Sp}(2n,R)$ and the split orthogonal group $\operatorname{SO}(2n,R)$ ---
it is essentially a conjunction of \cite{NZ1}, Theorem 1.2, and \cite{NZ4}, Theorem 1. However, as explained below, in these
special cases one can say somewhat more.
\begin{The}\label{generators}
Let $(A,\Lambda)$ be any associative form ring, let $(I,\Gamma)$
and $(J,\Delta)$ be two form ideals of the form ring
$(A,\Lambda)$ and let $n\ge 3$. Then the relative commutator
subgroup $[\operatorname{EU}(2n,I,\Gamma), \operatorname{EU}(2n, J,\Delta)]$ is generated
by the elements of the following two types
\par\smallskip
$\bullet$ $Z_{ij}(ab,c)$ and $Z_{ij}(ba,c)$,
\par\smallskip
$\bullet$ $Y_{ij}(a,b)$,
\par\smallskip\noindent
where in all cases $a\in(I,\Gamma)$, $b\in(J,\Delta)$ and
$c\in(A,\Lambda)$. Moreover, for the second type of generators
it suffices to take one pair $(h,k)$, $h\neq\pm k$, and one
pair $(h,-h)$.
\end{The}
The difference with Chevalley groups is that now we have
to throw in elementary commutators for {\it two\/} roots,
one long root and one short root. For Chevalley groups, one
{\it long\/} root would suffice. Conversely, when 2 is invertible
for types $\operatorname{B}_l,\operatorname{C}_l,\operatorname{F}_4$ and 3 is invertible for type $\operatorname{G}_2$,
one {\it short\/} root would suffice. For unitary groups,
modulo $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$ we can still establish
a cognate relation between short root type elementary
commutators and long root type elementary commutators,
Lemma~\rangleef{long-short}. However, unlike Chevalley groups, for unitary
groups the elements of long root subgroups are parametrised by
the form parameter $\Lambda$, whereas the elements of short
root subgroups are parametrised by the ring $A$ itself. This
means that now we could dispose of {\it some\/} short type
elementary commutators, yet not all of them. In the opposite
direction, the long type elementary commutators, one of whose arguments sits in the corresponding {\it minimal\/} ideal form
parameter could be discarded --- but not all of them! This can
be done when one of the form parameters is either minimal,
or as large as possible --- not just maximal! --- see \S~9.
\par
Observe that the proof of this theorem consists of two
independent parts. The possibility to express the third type
of generators as products of elementary conjugates and elementary commutators in $[\operatorname{EU}(2n,I,\operatorname{G}amma),\operatorname{EU}(2n,J,\operatorname{D}elta)]$ will be called
the {\it first claim\/} of Theorem~\rangleef{generators}. The much more arduous bid
that modulo $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$ all elementary
commutators can be expressed in terms of such commutators
in one short and one long positions, will be called the {\it second
claim\/} of Theorem~\rangleef{generators}.
Let us mention another important trait. The published
proofs of Theorem B heavily depended on some version of
Theorem A, and thus, ultimately, on localisation. The proof
of Theorem~\rangleef{generators} given below in \S\S~5--7 is purely
{\it elementary\/}\footnote{In the technical sense that it does
not invoke anything apart from the usual Steinberg relations.}
and thus works already at the level of {\it unitary Steinberg groups\/},
see \cite{B1, B2, lavrenov}. The only reason why we do not
state our results in this generality is to skip discussion
of {\it relative unitary Steinberg groups\/}. The details and technical facts are not readily available in the literature, and would noticeably increase the length of the present paper.
\sigmaubsection{Unrelativisation}
Since both remaining types of generators listed in Theorem~\rangleef{generators}
already belong
to the mixed commutator of the {\it unrelative\/} elementary subgroups
$[\operatorname{FU}(2n,I,\operatorname{G}amma), \operatorname{FU}(2n, J,\operatorname{D}elta)]$, we get the following amazing
equality. Morally, it shows that the commutator of {\it relative\/} elementary subgroups $[\operatorname{EU}(2n,I,\operatorname{G}amma),\operatorname{EU}(2n, J,\operatorname{D}elta)]$ is
smaller, than one expects. Observe that it only depends on the
[relatively] easy first claim of Theorem~\rangleef{generators} whose proof is
completed already in \S~5. For $\operatorname{GL}(n,R)$ the corresponding result
is \cite{NV18}, Theorem 2 (for commutative rings, with a completely
different proof), and \cite{NZ2}, Theorem 1 (for arbitrary associative rings). For $\operatorname{Sp}(2n,R)$ and $\operatorname{SO}(2n,R)$ it is a special case of
\cite{NZ1}, Theorem 1.2.
\begin{The}\label{equality2}
Let $(A,\Lambda)$ be any associative form ring, let $(I,\Gamma)$
and $(J,\Delta)$ be two form ideals of the form ring
$(A,\Lambda)$ and let $n\ge 3$. Then the mixed commutator
subgroup $[\operatorname{FU}(2n,I,\Gamma), \operatorname{FU}(2n, J,\Delta)]$ is normal in
$\operatorname{EU}(2n,A,\Lambda)$. Furthermore, we have the following
commutator identity
$$ [\operatorname{FU}(2n,I,\Gamma), \operatorname{FU}(2n, J,\Delta)] =
[\operatorname{EU}(2n,I,\Gamma), \operatorname{EU}(2n, J,\Delta)]. $$
\end{The}
In particular, in conjunction with Theorem A this shows
that the birelative standard commutator formula also holds
in the following unrelativised form. Again, for $\operatorname{GL}(n,R)$
this is \cite{NV18}, Theorem 1 and \cite{NZ2}, Theorem 3,
whereas for Chevalley groups it is \cite{NZ1}, Theorem 1.3.
\begin{The}\label{unrelative}
Let $R$ be a commutative ring, $(A,\Lambda)$ be a form ring
such that $A$ is a quasi-finite $R$-algebra. Further, let $(I,\Gamma)$
and $(J,\Delta)$ be two form ideals of the form ring $(A,\Lambda)$
and let $n\ge 3$. Then we have an unrelative commutator identity
$$ [\operatorname{GU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)] =
[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)]. $$
\noindent
When $A$ is itself commutative, one even has
$$ [\operatorname{CU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)] =
[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)]. $$
\end{The}
The following result is a unitary analogue of the unrelative
normality theorem proven for $\operatorname{GL}(n,R)$ by Bogdan Nica
and ourselves, see \cite{Nica, NV18, NZ3}. It is an immediate
corollary of
our Theorem~\rangleef{unrelative}, if you set there $(I,\operatorname{G}amma)=(J,\operatorname{D}elta)$.
\begin{The}\label{T:4}
Let $R$ be a commutative ring, $(A,\Lambda)$ be a form ring
such that $A$ is a quasi-finite $R$-algebra. Further, let $(I,\Gamma)$
be a form ideal of the form ring $(A,\Lambda)$ and let $n\ge 3$.
Then $\operatorname{FU}(2n,I,\Gamma)$ is normal in $\operatorname{GU}(2n,I,\Gamma)$.
\end{The}
\sigmaubsection{Elementary commutators}
The proof of the second claim of Theorem~\rangleef{generators} is the gist of the
present paper, and proceeds as follows. First, in \S~6 we prove
that the elementary commutators $Y_{ij}(a,b)$ are central in
the absolute elementary group modulo
$\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$.
Recall that here
$$ (I,\Gamma)\circ(J,\Delta)=\big(IJ+JI,{}^J\Gamma+{}^I\Delta+\Gamma_{\min}(IJ+JI)\big) $$
\noindent
denotes the symmetrised product of form ideals, see \S~2 for details.
\par
Since by that time we
already know that together with $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$
these commutators generate $[\operatorname{FU}(2n,I,\operatorname{G}amma),\operatorname{FU}(2n, J,\operatorname{D}elta)]$,
this result can be stated as follows. For $\operatorname{GL}(n,R)$ and Chevalley
groups this is \cite{NZ2}, Theorem 2, and \cite{NZ4}, Theorem 2,
respectively.
\begin{The}\label{equality}
Let $(A,\Lambda)$ be any associative form ring, let $(I,\Gamma)$
and $(J,\Delta)$ be two form ideals of the form ring
$(A,\Lambda)$ and let $n\ge 3$. Then
$[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n, J,\Delta)]$ is central in
$\operatorname{EU}(2n,A,\Lambda)$ modulo $\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$.
\end{The}
In other words,
$$ \Big[\big[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)\big],
\operatorname{EU}(2n,A,\Lambda)\Big]\le \operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta)). $$
\noindent
In particular, it implies that the quotient
$$ \big[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)\big]/
\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta)) $$
\noindent
is itself abelian. This readily implies additivity of the elementary
commutator with respect to its arguments, and other similar
useful properties, collected in Theorem~\rangleef{symb}, that are employed
in the proofs of subsequent results.
However, the focal point of the present paper is \S~7,
where we prove that modulo $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$
all elementary commutators of the same root type are equivalent.
Moreover, for the short root type they are balanced with respect
to the factors from $R$, both on the left and on the right.
For the long root type the balancing property is more
complicated, and only holds for the quadratic (=Jordan)
multiplication. In the case of the usual symplectic group,
where $A$ is a commutative ring with trivial involution,
it corresponds to the multiplication by squares, see \cite{NZ4},
Theorem 5.
\begin{The}\label{The:Op-2.1}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$,
and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
\par\smallskip
$\bullet$
Then for any $ i\neq \pm j$, any $h\neq\pm l$ with
$h,l \neq \pm i, \pm j$, and $a\in I$, $b\in J$, $c,d\in A$,
one has the congruence of elementary commutators
$$ Y_{ij}(cad,b)\equiv Y_{hl}(a,dbc)
\pmod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\par\smallskip
$\bullet$ Then for any $ -n\le i\le n$, any
$-n\le k\le n$, and $a\in \lambda^{-(\varepsilon(i)+1)/2}\Gamma$, $b\in \lambda^{(\varepsilon(i)-1)/2}\Delta$, $c\in A$, one has the congruence of elementary
commutators
$$ Y_{i,-i}(ca\overline c,b)\equiv Y_{k,-k}(\lambda^{(\varepsilon(i)-\varepsilon(k))/2}a,-\lambda^{(\varepsilon(k)-\varepsilon(i))/2}\overline c bc)\pmod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\end{The}
The calculation behind these congruences is the highlight of the
whole theory. Inherently, it is just a birelative incarnation of a
classical calculation that appeared dozens of times in the
algebraic K-theory and the theory of algebraic groups since
mid 60-ies, see \S~12 for a terse historical medley.
\sigmaubsection{Further corollaries}
As another illustration of the power of Theorem~\rangleef{generators}, we show that
it allows to [almost completely] lift commutativity conditions in
some of the principal results of \cite{RNZ1,RNZ4,RNZ5}.
\par
Under the additional assumptions such as quasi-finiteness
the following result for any $n\gammae 3$ is \cite{RNZ5}, Theorem 7.
From Theorem~\rangleef{generators} we can derive that for $n\gammae 4$ a similar result
holds for arbitrary associative form rings. For $\operatorname{GL}(n,R)$
such generalisation was already obtained in \cite{NZ2}. We
believe this could be also done for $n=3$, see Problem 3, but in
that case it would require formidable calculations.
\begin{The}\label{T:7}
Let $(A,\Lambda)$ be any associative form ring with $1$, let $n\ge 4$,
and let $(I_i,\Gamma_i)$, $i=1,\ldots,m$, be form ideals
of $(A,\Lambda)$. Consider an arbitrary arrangement of brackets\/
$\llbracket \ldots\rrbracket$
with the cut point\/ $s$. Then one has
\begin{multline*}
\Big\llbracket \operatorname{EU}(2n,I_1,\Gamma_1),\operatorname{EU}(2n,I_2,\Gamma_2),\ldots,\operatorname{EU}(2n,I_m,\Gamma_m)\Big\rrbracket=\\
\Big[\operatorname{EU}(2n,(I_1,\Gamma_1)\circ\ldots\circ( I_s,\Gamma_s)),\operatorname{EU}(2n,(I_{s+1},\Gamma_{s+1})\circ\ldots\circ (I_m,\Gamma_m))\Big],
\end{multline*}
\noindent
where the bracketing of symmetrised products on the right hand side coincides with the bracketing of the commutators on the left hand side.
\end{The}
Under the additional assumption that the absolute standard
commutator formulae are satisfied, the following result
is \cite{RNZ1}, Theorem 3. As we know from \cite{BV3, RH,
RH2, RNZ1}, this condition is satisfied for quasi-finite rings.
But from the work of Victor Gerasimov \cite{Gerasimov} it follows
that
some commutativity or finiteness assumptions are necessary
for the standard commutator formulae to hold.
Now, we are in a position to prove the following result for
{\it arbitrary\/} associative form rings.
\begin{The}\label{T:8}
Let\/ $(A,\Lambda)$ be any associative form ring and $n\ge 3$.
Then for any two comaximal form ideals\/ $(I,\Gamma)$ and
$(J,\Delta)$ of the form ring $(A,\Lambda)$, $I+J=A$, one has
the following equality
$$ [\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)]=
\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta)). $$
\end{The}
Another bizarre corollary of Theorem~\rangleef{generators} is surjective
stability of the quotients
$$ [\operatorname{FU}(2n,I,\operatorname{G}amma),\operatorname{FU}(2n,J,\operatorname{D}elta)]/
\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)), $$
\nuoindent
again for {\it arbitrary\/} associative form rings, without any
stability conditions, or commutativity conditions.
This is a typical result in the style of Bak's paradigm
``stability results without stability conditions'', see \cite{Bak}
and also \cite{RH, RH2, RN1, RN, BHV}.
\begin{The}\label{T:9}
Let $(A,\Lambda)$ be any associative form ring, let $(I,\Gamma)$
and $(J,\Delta)$ be two form ideals of the form ring
$(A,\Lambda)$ and let $n\ge 3$. Then the stability map
\begin{multline*}
[\operatorname{FU}(2n,I,\Gamma), \operatorname{FU}(2n, J,\Delta)]/
\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))\longrightarrow\\
[\operatorname{FU}(2(n+1),I,\Gamma), \operatorname{FU}(2(n+1), J,\Delta)]/
\operatorname{EU}(2(n+1),(I,\Gamma)\circ(J,\Delta))
\end{multline*}
\noindent
is surjective.
\end{The}
Indeed, in view of Theorems~\rangleef{generators} and \rangleef{equality} as a normal subgroup of
$\operatorname{EU}(2n,A,\Lambda)$ the group $[\operatorname{EU}(2n,I,\operatorname{G}amma),\operatorname{EU}(2n,J,\operatorname{D}elta)]$
is generated by $[\operatorname{EU}(6,I,\operatorname{G}amma),\operatorname{EU}(6,J,\operatorname{D}elta)]$. An explicit
calculation of these quotients presents itself as a natural next
step. However, so far we were unable to resolve it, apart from some special cases, see a discussion in \S~12.
\par
\sigmaubsection{Organisation of the paper.}
The rest of the paper is devoted to the proof of these results.
In \S\S~1--4 we recall the
necessary definitions and collect requisite preliminary results.
The next four sections \S\S~5--8 are the technical core of
the paper. Namely, in \S~5 we prove Theorem~\ref{equality} and derive
first corollaries thereof. In \S~6 we reduce the set of generators
of $[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)]$
to the first two types. In \S~7 we prove Theorem~\ref{The:Op-2.1} and then in
\S~8 establish another cognate result, relating {\it some\/}
elementary commutators of short root type with {\it some\/}
elementary commutators of long root type.
This finishes the proof of Theorem~\ref{generators} and its corollaries, and,
in particular, also of Theorems \ref{equality2}--\ref{T:4}.
In \S~9 we establish the special cases of Theorem~\ref{T:7}
pertaining to triple and quadruple commutators, and then in
\S~10 derive
Theorem~\ref{T:7} itself by an easy induction. In \S~11 we
derive Theorem~\ref{T:8} and yet another corollary of our main
results. Finally, in \S~12 we describe the general context,
briefly review recent related publications and
state several further related open problems.
\sigmaection{Notation}
Here we recall some basic notation that will be used throughout
the present paper.
\subsection{General linear group}\Label{general}
Let, as above, $A$ be an associative ring with 1. For natural $m,n$
we denote by $M(m,n,A)$ the additive group of $m\times n$ matrices
with entries in $A$. In particular $M(m,A)=M(m,m,A)$ is the ring of
matrices of degree $m$ over $A$. For a matrix $x\in M(m,n,A)$ we
denote by $x_{ij}$, $1\le i\le m$, $1\le j\le n$, its entry in the
position $(i,j)$. Let $e$ be the identity matrix and $e_{ij}$,
$1\le i,j\le m$, be a standard matrix unit, i.e.\ the matrix which has
1 in the position $(i,j)$ and zeros elsewhere.
\par
As usual, $\operatorname{GL}(m,A)=M(m,A)^*$ denotes the general linear group
of degree $m$ over $A$. The group $\operatorname{GL}(m,A)$ acts on the free right
$A$-module $V\cong A^{m}$ of rank $m$. Fix a base $e_1,\ldots,e_{m}$
of the module $V$. We may think of elements $v\in V$ as columns with
components in $A$. In particular, $e_i$ is the column whose $i$-th
coordinate is 1, while all other coordinates are zeros.
\par
Actually, in the present paper we are only interested in the case
when $m=2n$ is even. We usually number the base
as follows: $e_1,\ldots,e_n,e_{-n},\ldots,e_{-1}$. All other
occurring geometric objects will be numbered accordingly. Thus,
we write $v=(v_1,\ldots,v_n,v_{-n},\ldots,v_{-1})^t$, where $v_i\in A$,
for vectors in $V\cong A^{2n}$.
\par
The set of indices will always be ordered in conformity with this convention,
$\Omega=\{1,\ldots,n,-n,\ldots,-1\}$. Clearly, $\Omega=\Omega^+\sqcup\Omega^-$,
where $\Omega^+=\{1,\ldots,n\}$ and $\Omega^-=\{-n,\ldots,-1\}$. For an
element $i\in\Omega$ we denote by $\varepsilon(i)$ the sign of $i$, i.e.\
$\varepsilon(i)=+1$ if $i\in\Omega^+$, and $\varepsilon(i)=-1$ if $i\in\Omega^-$.
\sigmaubsection{Commutators}\Label{sub:1.4}
Let $G$ be a group. For any $x,y\in G$, ${}^xy=xyx^{-1}$ and $y^x=x^{-1}yx$
denote the left conjugate and the right conjugate of $y$ by $x$,
respectively. As usual, $[x,y]=xyx^{-1}y^{-1}$ denotes the
left-normed commutator of $x$ and $y$. Throughout the present paper
we repeatedly use the following commutator identities:
\betaegin{itemize}
\item[(C1)] $[x,yz]=[x,y]\cdot {}^y[x,z]$,
\sigmamallskip
\item[(C$1^+$)]
An easy induction, using identity (C1), shows that
$$\Bigg[x,\prod_{i=1}^k u_i\Bigg]=
\prod_{i=1}^k {}^{\prod_{j=1}^{i-1}u_j}[x,u_{i}], $$
\noindent
where by convention $\prod_{j=1}^0 u_j=1$,
\item[(C2)] $[xy,z]={}^x[y,z]\cdot [x,z]$,
\sigmamallskip
\item[(C$2^+$)]
As in (C$1^+$), we have
$$\Bigg[\prod_{i=1}^k u_i,x\Bigg]=
\prod_{i=1}^k {}^{\prod_{j=1}^{k-i}u_j}[u_{k-i+1},x], $$
\sigmamallskip
\item[(C3)]
${}^{x}\big[[x^{-1},y],z\big]\cdot {}^{z}\big[[z^{-1},x],y\big]\cdot
{}^{y}\big[[y^{-1},z],x\big]=1$,
\sigmamallskip
\item[(C4)] $[x,{}^yz]={}^y[{}^{y^{-1}}x,z]$,
\sigmamallskip
\item[(C5)] $[{}^yx,z]={}^{y}[x,{}^{y^{-1}}z]$,
\sigmamallskip
\item[(C6)] If $H$ and $K$ are subgroups of $G$, then $[H,K]=[K,H]$,
\varepsilonnd{itemize}
Especially important is (C3), the celebrated {\it Hall--Witt
identity\/}. Sometimes it is used in the following form, known as
the {\it three subgroup lemma\/}.
\begin{Lem}\label{HW1}
Let\/ $F,H,L\trianglelefteq G$ be three normal subgroups
of\/ $G$. Then
$$ \big[[F,H],L\big]\le \big [[F,L],H\big ]\cdot \big [F,[H,L]\big ]. $$
\end{Lem}
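As a quick sanity check of the placement of conjugations in these identities, (C1) and the Hall--Witt identity (C3) can be verified numerically, for instance with random invertible matrices. The following Python fragment is merely such an illustration and plays no role in the proofs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
inv = np.linalg.inv

def conj(x, y):   # left conjugate ^x y = x y x^{-1}
    return x @ y @ inv(x)

def comm(x, y):   # left-normed commutator [x, y] = x y x^{-1} y^{-1}
    return x @ y @ inv(x) @ inv(y)

# generic real matrices are invertible with probability one
x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

# (C1):  [x, yz] = [x, y] * ^y[x, z]
assert np.allclose(comm(x, y @ z), comm(x, y) @ conj(y, comm(x, z)))

# (C3), the Hall--Witt identity
hw = (conj(x, comm(comm(inv(x), y), z)) @
      conj(z, comm(comm(inv(z), x), y)) @
      conj(y, comm(comm(inv(y), z), x)))
assert np.allclose(hw, np.eye(4))
\end{verbatim}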
\section{Form rings and form ideals}\label{sec2}
The notion of $\Lambda$-quadratic forms, quadratic modules and generalised
unitary groups over a form ring $(A,\Lambda)$ were introduced by Anthony
Bak in his Thesis, see \cite{B1, B2}. In this section, and the next one, we
{\it very briefly\/} review the most fundamental notation and results
that will be constantly used in the sequel. We refer to
\cite{bass73, B2, HO, knus, BV3, tang, RH, RH2, petrov1,
RNZ1, RNZ4, RNZ5, lavrenov} for details, proofs, and
further references. In the final section we mention some further related
recent works, and some generalisations.
\subsection{Form rings}\label{form algebra}
Let $R$ be a commutative ring with $1$, and $A$ be a (not necessarily
commutative) $R$-algebra. An involution, denoted by $\overline{\phantom{a}}$,
is an
anti-homomorphism of $A$ of order $2$. Namely, for $a,b\in A$,
one has
$$ \overline{a+b}=\overline a+\overline b,\qquad
\overline{ab}=\overline b\,\overline a,\qquad \overline{\overline a}=a. $$
\par\noindent
Fix an element $\lambda\in\operatorname{Cent}(A)$ such that $\lambda\overline\lambda=1$. One may
define two additive subgroups of $A$ as follows:
$$ \Lambda_{\min}=\{c-\lambda\overline c\mid c\in A\}, \qquad
\Lambda_{\max}=\{c\in A\mid c=-\lambda\overline c\}. $$
\noindent
A {\em form parameter} $\Lambda$ is an additive subgroup
of $A$ such that
\begin{itemize}
\item[(1)] $\Lambda_{\min}\subseteq\Lambda\subseteq\Lambda_{\max}$,
\smallskip
\item[(2)] $c\,\Lambda\,\overline c\subseteq\Lambda$ for all $c\in A$.
\end{itemize}
The pair $(A,\Lambda)$ is called a {\em form ring}.
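For instance, take $A=\mathbb Z$ with the trivial involution and $\lambda=-1$. Then $\Lambda_{\min}=2\mathbb Z$ and $\Lambda_{\max}=\mathbb Z$, so the only form parameters are $\Lambda=2\mathbb Z$ and $\Lambda=\mathbb Z$; with the maximal choice $\Lambda=\mathbb Z$ the corresponding hyperbolic unitary group is the usual symplectic group $\operatorname{Sp}(2n,\mathbb Z)$.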
\sigmaubsection{Form ideals}\langleabel{form ideals}
Let $I\trianglelefteq A$ be a two-sided ideal of $A$. We assume $I$ to be
involution invariant, i.~e.~such that $\omegaverline I=I$. Set
$$ \operatorname{G}amma_{\muax}(I)=I\cap \Lambda, \qquad
\operatorname{G}amma_{\muin}(I)=\{a-\langleambda\omegaverline a\muid a\in I\}+
\langleangle ac\omegaverline a\muid a\in I, c\in\Lambda\rangleangle. $$
\nuoindent
A {\varepsilonm relative form parameter} $\operatorname{G}amma$ in $(A,\Lambda)$ of level $I$ is an
additive group of $I$ such that
\betaegin{itemize}
\item[(1)] $\operatorname{G}amma_{\muin}(I)\sigmaubseteq \operatorname{G}amma \sigmaubseteq\operatorname{G}amma_{\muax}(I)$,
\sigmamallskip
\item[(2)] $c\,\operatorname{G}amma\,\omegaverline c\sigmaubseteq \operatorname{G}amma$ for all $c\in A$.
\varepsilonnd{itemize}
The pair $(I,\operatorname{G}amma)$ is called a {\varepsilonm form ideal}.
\par
In the level calculations we will use sums and products of form
ideals. Let $(I,\operatorname{G}amma)$ and $(J,\operatorname{D}elta)$ be two form ideals. Their sum
is artlessly defined as $(I+J,\operatorname{G}amma+\operatorname{D}elta)$, it is immediate to verify
that this is indeed a form ideal.
\par
Guided by analogy, one is tempted to set
$(I,\operatorname{G}amma)(J,\operatorname{D}elta)=(IJ,\operatorname{G}amma\operatorname{D}elta)$. However, it is considerably
harder to correctly define the product of two relative form parameters.
The papers \cite{Ha1,Ha2,RH,RH2} introduce the following definition
$$ \operatorname{G}amma\operatorname{D}elta=\operatorname{G}amma_{\muin}(IJ)+{}^J\operatorname{G}amma+{}^I\operatorname{D}elta, $$
\nuoindent
where
$$ {}^J\operatorname{G}amma=\betaig\langleangle b\,\operatorname{G}amma\,\omegaverline b\muid b\in J\betaig\rangleangle,\qquad
{}^I\operatorname{D}elta=\betaig\langleangle a\,\operatorname{D}elta\,\omegaverline a\muid a\in I\betaig\rangleangle. $$
\nuoindent
One can verify that this is indeed a relative form parameter of level $IJ$
if $IJ=JI$.
\par
However, in the present paper we do not wish to impose any such
commutativity assumptions. Thus, we are forced to consider the
symmetrised products
$$ I\circ J=IJ+JI,\qquad
\operatorname{G}amma\circ\operatorname{D}elta=\operatorname{G}amma_{\muin}(IJ+JI)+{}^J\operatorname{G}amma+{}^I\operatorname{D}elta\betaig. $$
\nuoindent
The notation $\operatorname{G}amma\circ\operatorname{D}elta$ -- as also $\operatorname{G}amma\operatorname{D}elta$ is
slightly misleading, since in fact it depends on $I$ and $J$, not
just on $\operatorname{G}amma$ and $\operatorname{D}elta$. Thus, strictly speaking, one should
speak of the symmetrised products of {\it form ideals\/}
$$ (I,\operatorname{G}amma)\circ (J,\operatorname{D}elta)=
\betaig(IJ+JI,\operatorname{G}amma_{\muin}(IJ+JI)+{}^J\operatorname{G}amma+{}^I\operatorname{D}elta\betaig). $$
\nuoindent
Clearly, in the above notation one has
$$ (I,\operatorname{G}amma)\circ (J,\operatorname{D}elta)=(I,\operatorname{G}amma)(J,\operatorname{D}elta)+(J,\operatorname{D}elta)(I,\operatorname{G}amma). $$
\sigmaection{Unitary groups}\langleabel{sec3}
In the present section we recall basic notation and facts related to
Bak's generalised unitary groups.
\subsection{Unitary group}\Label{unitary}
For a form ring $(A,\Lambda)$, one considers the
{\it hyperbolic unitary group\/} $\operatorname{GU}(2n,A,\Lambda)$, see~\cite[\S2]{BV3}.
This group is defined as follows:
\par
One fixes a symmetry $\lambda\in\operatorname{Cent}(A)$, $\lambda\overline\lambda=1$, and
supplies the module $V=A^{2n}$ with the following $\lambda$-hermitian form
$h:V\times V\longrightarrow A$,
$$ h(u,v)=\overline u_1v_{-1}+\ldots+\overline u_nv_{-n}+
\lambda\overline u_{-n}v_n+\ldots+\lambda\overline u_{-1}v_1, $$
\noindent
and the following $\Lambda$-quadratic form $q:V\longrightarrow A/\Lambda$,
$$ q(u)=\overline u_1 u_{-1}+\ldots +\overline u_n u_{-n} \mod\Lambda. $$
\noindent
In fact, both forms are engendered by a sesquilinear form $f$,
$$ f(u,v)=\overline u_1 v_{-1}+\ldots +\overline u_n v_{-n}. $$
\noindent
Now, $h=f+\lambda\overline{f}$, where $\overline f(u,v)=\overline{f(v,u)}$, and
$q(v)=f(v,v)\mod\Lambda$.
\par
By definition, the hyperbolic unitary group $\operatorname{GU}(2n,A,\Lambda)$ consists
of all elements from $\operatorname{GL}(V)\cong\operatorname{GL}(2n,A)$ preserving the $\lambda$-hermitian
form $h$ and the $\Lambda$-quadratic form $q$. In other words, $g\in\operatorname{GL}(2n,A)$
belongs to $\operatorname{GU}(2n,A,\Lambda)$ if and only if
$$ h(gu,gv)=h(u,v)\quad\text{and}\quad q(gu)=q(u),\qquad\text{for all}\quad u,v\in V. $$
\par
When the form parameter is neither maximal nor minimal, these groups
are not algebraic. However, their internal structure is very similar to that
of the usual classical groups. They are also oftentimes called general
quadratic groups, or classical-like groups.
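For orientation we recall, informally and only as a hedged aside not needed in the sequel, the two extreme specialisations in the classical setting of a commutative ring $A$ with trivial involution. For $\lambda=-1$ and the maximal form parameter $\Lambda=A$ the quadratic form $q$ carries no information, and $\operatorname{GU}(2n,A,\Lambda)$ is the symplectic group $\operatorname{Sp}(2n,A)$. For $\lambda=1$ and the minimal form parameter $\Lambda=0$ one recovers the split orthogonal group, namely the stabiliser of the quadratic form
$$ q(u)=u_1u_{-1}+\ldots+u_nu_{-n}. $$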
\subsection{Unitary transvections}\label{elementary1}
{\it Elementary unitary transvections\/} $T_{ij}(\xi)$
correspond to the pairs $i,j\in\Omega$ such that $i\neq j$. They come in
two types. Namely, if, moreover, $i\neq-j$, then for any $c\in A$ we set
$$ T_{ij}(c)=e+c e_{ij}-\lambda^{(\varepsilon(j)-\varepsilon(i))/2}\overline c e_{-j,-i}. $$
\noindent
These elements are also often called {\it elementary short root unipotents\/}.
On the other hand, for $j=-i$ and $c\in\lambda^{-(\varepsilon(i)+1)/2}\Lambda$ we set
$$ T_{i,-i}(c)=e+c e_{i,-i}. $$
\noindent
These elements are also often called {\it elementary long root elements\/}.
\par
Note that $\overline\Lambda=\overline\lambda\Lambda$. In fact, for any element
$c\in\Lambda$ one has $\overline c=-\overline\lambda c$, and thus
$\overline\Lambda$ coincides with the set of products $\overline\lambda c$, where $c\in\Lambda$. This means that in the
above definition $c\in\overline\Lambda$ when $i\in\Omega^+$ and $c\in\Lambda$
when $i\in\Omega^-$.
\par
Subgroups $X_{ij}=\{T_{ij}(c)\mid c\in A\}$, where $i\neq\pm j$, are
called {\it short root subgroups\/}. Clearly, $X_{ij}=X_{-j,-i}$.
Similarly, subgroups $X_{i,-i}=\{T_{i,-i}(c)\mid
c\in\lambda^{-(\varepsilon(i)+1)/2}\Lambda\}$ are called {\it long root subgroups\/}.
\par
The {\it elementary unitary group\/} $\operatorname{EU}(2n,A,\Lambda)$ is generated by
the elementary unitary transvections $T_{ij}(c)$, $i\neq\pm j$, $c\in A$,
and $T_{i,-i}(c)$, $c\in\Lambda$, see~\cite[\S3]{BV3}.
\subsection{Steinberg relations}\label{elementary2}
Elementary unitary transvections $T_{ij}(\xi)$ satisfy the following
{\it elementary relations\/}, also known as {\it Steinberg relations\/}.
These relations will be used throughout this paper.
\par\smallskip
(R1) \ $T_{ij}(c)=T_{-j,-i}(-\lambda^{(\varepsilon(j)-\varepsilon (i))/2}\overline{c})$,
\par\smallskip
(R2) \ $T_{ij}(c)T_{ij}(d)=T_{ij}(c+d)$,
\par\smallskip
(R3) \ $[T_{ij}(c),T_{hk}(d)]=e$, where $h\ne j,-i$ and $k\ne i,-j$,
\par\smallskip
(R4) \ $[T_{ij}(c),T_{jh}(d)]=
T_{ih}(cd)$, where $i,h\ne\pm j$ and $i\ne\pm h$,
\par\smallskip
(R5) \ $[T_{ij}(c),T_{j,-i}(d)]=
T_{i,-i}(cd-\lambda^{-\varepsilon(i)}\overline{d}\overline{c})$,
where $i\ne\pm j$,
\par\smallskip
(R6) \ $[T_{i,-i}(c),T_{-i,j}(d)]=
T_{ij}(cd)T_{-j,j}(-\lambda^{(\varepsilon(j)-\varepsilon(i))/2}\overline d c d)$,
where $i\ne\pm j$.
\par\smallskip
Relation (R1) relates the two natural parametrisations of the same short
root subgroup $X_{ij}=X_{-j,-i}$. Relation (R2) expresses additivity of
the natural parametrisations. All other relations are various instances
of the Chevalley commutator formula. Namely, (R3) corresponds to the
case where the sum of two roots is not a root, whereas (R4) and (R5)
correspond to the case of two short roots whose sum is a short root
or a long root, respectively. Finally, (R6) is the Chevalley commutator
formula for the case of a long root and a short root whose sum is a root.
Observe that any two long roots are either opposite or orthogonal, so
that their sum is never a root.
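As a quick sanity check, which we spell out only as an illustration and do not take verbatim from the sources above, relation (R2) for $i\ne\pm j$ follows at once from the matrix form of $T_{ij}(c)$. Writing $e_{kl}$ for the usual matrix units, one has $e_{kl}e_{pq}=\delta_{lp}e_{kq}$, whence
$$ e_{ij}e_{ij}=e_{ij}e_{-j,-i}=e_{-j,-i}e_{ij}=e_{-j,-i}e_{-j,-i}=0, $$
because $i\ne j$, while $k\ne -k$ for every index $k\in\Omega$. Since the involution is additive, it follows that
$$ T_{ij}(c)T_{ij}(d)=e+(c+d)e_{ij}-\lambda^{(\varepsilon(j)-\varepsilon(i))/2}\big(\overline c+\overline d\big)e_{-j,-i}=T_{ij}(c+d), $$
and, in particular, $T_{ij}(c)^{-1}=T_{ij}(-c)$.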
\section{Relative subgroups}\label{sec4}
In this section we recall definitions and basic facts concerning relative
subgroups. For the proofs of these results, see the references given with
the individual statements below.
\subsection{Relative subgroups}\label{relative} One associates with a form ideal $(I,\Gamma)$
the following four relative subgroups.
\par\smallskip
$\bullet$ The subgroup $\operatorname{FU}(2n,I,\Gamma)$ generated by elementary unitary
transvections of level $(I,\Gamma)$,
$$ \operatorname{FU}(2n,I,\Gamma)=\big\langle T_{ij}(a)\mid \
a\in I\text{ if }i\neq\pm j\text{ and }
a\in\lambda^{-(\varepsilon(i)+1)/2}\Gamma\text{ if }i=-j\big\rangle. $$
\par\smallskip
$\bullet$ The {\it relative elementary subgroup\/} $\operatorname{EU}(2n,I,\Gamma)$
of level $(I,\Gamma)$, defined as the normal closure of $\operatorname{FU}(2n,I,\Gamma)$
in $\operatorname{EU}(2n,A,\Lambda)$,
$$ \operatorname{EU}(2n,I,\Gamma)={\operatorname{FU}(2n,I,\Gamma)}^{\operatorname{EU}(2n,A,\Lambda)}. $$
\par\smallskip
$\bullet$ The {\it principal congruence subgroup\/} $\operatorname{GU}(2n,I,\Gamma)$ of level
$(I,\Gamma)$ in $\operatorname{GU}(2n,A,\Lambda)$, which consists of those $g\in \operatorname{GU}(2n,A,\Lambda)$
that are congruent to $e$ modulo $I$ and preserve $f(u,u)$ modulo $\Gamma$,
$$ f(gu,gu)\in f(u,u)+\Gamma, \qquad u\in V. $$
\par\smallskip
$\bullet$ The full congruence subgroup $\operatorname{CU}(2n,I,\Gamma)$ of level
$(I,\Gamma)$, defined as
$$ \operatorname{CU}(2n,I,\Gamma)=\big\{ g\in \operatorname{GU}(2n,A,\Lambda) \mid
[g,\operatorname{GU}(2n,A,\Lambda)]\subseteq \operatorname{GU}(2n,I,\Gamma)\big\}. $$
\par\smallskip
In some books, including \cite{HO}, the group $\operatorname{CU}(2n,I,\Gamma)$
is defined differently. However, in many important situations
these definitions yield the same group.
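Let us also note, again as an aside and relying on the standard level computations which we do not reproduce here, how these four groups are related for a fixed form ideal $(I,\Gamma)$. One has the chain
$$ \operatorname{FU}(2n,I,\Gamma)\le \operatorname{EU}(2n,I,\Gamma)\le \operatorname{GU}(2n,I,\Gamma)\le \operatorname{CU}(2n,I,\Gamma), $$
where the first inclusion holds by the very definition of the normal closure, the second follows from $\operatorname{FU}(2n,I,\Gamma)\le \operatorname{GU}(2n,I,\Gamma)$ combined with the normality of $\operatorname{GU}(2n,I,\Gamma)$ recorded in the next subsection, and the last one is immediate from that same normality.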
\subsection{Some basic lemmas}\label{relativefacts}
Let us collect several basic facts concerning relative groups,
which will be used in the sequel. The first of them, see
\cite{BV3}, Lemma 5.2, asserts that
the relative elementary groups are $\operatorname{EU}(2n,A,\Lambda)$-perfect.
\begin{Lem}\label{hww3}
Suppose either $n\ge 3$, or $n=2$ and $I=\Lambda I+I\Lambda$.
Then
$$ \operatorname{EU}(2n,I,\Gamma)=[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,A,\Lambda)]. $$
\end{Lem}
The next lemma gives generators of the relative elementary subgroup
$\operatorname{EU}(2n,I,\Gamma)$ as a subgroup. To this end, consider the matrices
$$ Z_{ij}(a,c)={}^{T_{ji}(c)}T_{ij}(a)
=T_{ji}(c)T_{ij}(a)T_{ji}(-c), $$
\noindent
where $a\in I$, $c\in A$, if $i\neq\pm j$, and
$a\in\lambda^{-(\varepsilon(i)+1)/2}\Gamma$,
$c\in\lambda^{-(\varepsilon(j)+1)/2}\Lambda$, if $i=-j$.
The following result is \cite{BV3}, Proposition 5.1.
\begin{Lem}\label{genelm}
Suppose $n\ge 3$. Then
\begin{multline*}
\operatorname{EU}(2n,I,\Gamma)=\big\langle Z_{ij}(a,c)\mid \
a\in I, c\in A\text{ if }i\neq\pm j\text{ and }\\
a\in\lambda^{-(\varepsilon(i)+1)/2}\Gamma,
c\in\lambda^{-(\varepsilon(j)+1)/2}\Lambda,
\text{ if }i=-j\big\rangle.
\end{multline*}
\end{Lem}
The following lemma was first established in~\cite{B1}, but remained
unpublished. See~\cite{HO} and~\cite{BV3}, Lemma 4.4, for published
proofs.
\begin{Lem}
The groups $\operatorname{GU}(2n,I,\Gamma)$ and $\operatorname{CU}(2n,I,\Gamma)$ are normal in
$\operatorname{GU}(2n,A,\Lambda)$.
\end{Lem}
In this form the following lemma was established in \cite{RNZ5},
Lemmas 7 and 8; see also \cite{RNZ4}, Lemma~1B, for a
definitive exposition. Before that, \cite{RNZ1}, Lemmas 21--23,
only established weaker inclusions, with smaller left hand sides
or larger right hand sides.
\begin{Lem}
Let $(A,\Lambda)$ be an associative form ring with $1$, let $n\ge 3$, and
let $(I,\Gamma)$ and $(J,\Delta)$ be two form ideals of $(A,\Lambda)$.
Then
\begin{align*} \operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))\le&\big[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)\big]\le\\
&\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big]\le\\
&\big[\operatorname{GU}(2n,I,\Gamma),\operatorname{GU}(2n,J,\Delta)\big] \le\operatorname{GU}(2n,(I,\Gamma)\circ(J,\Delta)).
\end{align*}
\end{Lem}
\section{Elementary commutators modulo $\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$}
Now we embark on the proof of the second claim of Theorem~\ref{generators}.
Our first major goal is to prove that the commutator
$[\operatorname{FU}(2n,I,\Gamma), \operatorname{FU}(2n,J,\Delta)]$ is central in $\operatorname{EU}(2n,A,\Lambda)$
modulo $\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$. Namely, here we
establish Theorem~\ref{equality} and derive some corollaries thereof.
We prove the congruence in Theorem~\ref{equality} separately for short
root positions, and then for long root positions.
\begin{Lem}\label{Op-1}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$, and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
For any $i\ne\pm j$, any $a\in I$, $b\in J$ and any
$x\in\operatorname{EU}(2n,A,\Lambda)$, one has
$$ {}^x Y_{ij}(a,b)\equiv Y_{ij}(a,b)
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\end{Lem}
\begin{proof}
Consider the elementary conjugate ${}^xY_{ij}(a,b)$. We argue by induction
on the length of $x\in\operatorname{EU}(2n,A,\Lambda)$ in elementary generators. Let
$x=yT_{kl}(c)$, where $y\in \operatorname{EU}(2n,A,\Lambda)$ is shorter than $x$.
\par
We start with the case $k\neq\pm l$.
\par\smallskip
$\bullet$ If $k,l\neq \pm i, \pm j$, then $T_{kl}(c)$ commutes with $z=Y_{ij}(a,b)$
and can be discarded.
\par\smallskip
$\bullet$ On the other hand, for any $h\neq \pm i, \pm j$ direct computations show that
\betaegin{align*}
&[T_{ih}(c),z]=T_{ih}(-abc-ababc)T_{jh}(-babc),\\\nuoalign{\vskip3truept}
&[T_{jh}(c),z]=T_{ih}(abac)T_{jh}(bac),\\
\nuoalign{\vskip3truept}
&[T_{hi}(c),z]=T_{ih}(cab)T_{jh}(-caba),\\
\nuoalign{\vskip3truept}
&[T_{hj}(c),z]=T_{ih}(cbab)T_{jh}(-cba-cbaba),
\varepsilonnd{align*}
Similarly, one has
\betaegin{alignat*}{1}
[T_{-i,h}(c),z]&=
[T_{-h,i}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(i))/2}c),z]r\\
&=T_{i,-h}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(i))/2}cab)T_{j,-h}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(i))/2}caba),\\
\nuoalign{\vskip3truept}
[T_{-j,h}(c),z]& =
[T_{-h,j}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}c),z]\\
&=T_{i,-h}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}cbab)
T_{j,-h}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}cba-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}cbaba),\\
\nuoalign{\vskip3truept}
[T_{h,-i}(c),z]& =
[T_{i,-h}(-\langleambda^{(-(\varepsilonsilon(i)-\varepsilonsilon(h))/2}c),z] \\
&=T_{i,-h}(-\langleambda^{(-(\varepsilonsilon(i)-\varepsilonsilon(h))/2}abac)T_{j,-h}(-\langleambda^{(-(\varepsilonsilon(i)-\varepsilonsilon(h))/2}bac),\\
\nuoalign{\vskip3truept}
[T_{h,-j}(c),z]& =
[T_{j,-h}(-\langleambda^{(-(\varepsilonsilon(j)-\varepsilonsilon(h))/2}c),z] \\
& =T_{i,-h}(-\langleambda^{(-(\varepsilonsilon(j)-\varepsilonsilon(h))/2}abac)T_{j,-h}(-\langleambda^{(-(\varepsilonsilon(j)-\varepsilonsilon(h))/2}bac)
\varepsilonnd{alignat*}
All factors on the right hand side already belong to
$\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$.
\par
If $(k,l)=(\pm i, \pm j)$ or $(\pm j,\pm i)$, then we take an index $h\ne \pm i, \pm j$, rewrite $T_{kl}(c)$ as $[T_{k,h}(c),T_{h,l}(1)]$, and apply the previous items to get the same congruence modulo $\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$.
\par
It remains to consider the case where $k=-l$.
\par\smallskip
$\bullet$ If $k\ne \pm i, \pm j$, then $T_{k,-k}(c)$ commutes with $z$ and can be discarded.
\par\smallskip
$\bullet$ Otherwise, we have
\betaegin{alignat*}{1}
[T_{i,-i}(c),z]=&T_{i,-i}(c-(1+ab+abab)c\omegaverline{(1+ab+abab)})T_{j,-j}(-\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}babc\omegaverline{bab})\\
&T_{i,-j}(\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}(1+ab+abab)c\omegaverline{(bab)}),\\
\nuoalign{\vskip3truept}
[T_{j,-j}(c),z]=&T_{j,-j}(c-(1-ba)c\omegaverline{(1-ba)})T_{i,-i}(\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}abac\omegaverline{aba})\\& T_{i,-j}(-abac(1-\omegaverline{ba})),\\
\nuoalign{\vskip3truept}
[T_{-i,i}(c),z]=&[T_{-i,i}(c),[T_{ij}(a),T_{ji}(b)]]\\=&[T_{-i,i}(c),[T_{-j,-i}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}a),T_{-i,-j}(\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}b)]],\\
\nuoalign{\vskip3truept}
[T_{-j,j}(c),z]=&[T_{-j,j}(c),[T_{ij}(a),T_{ji}(b)]]\\ =&[T_{-j,-j}(c),[T_{-j,-i}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}a),T_{-i,-j}(\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}b)]].
\varepsilonnd{alignat*}
The last two cases reduce to the first two. Hence
all factors on the right belong to $\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$.
We have thus shown that, for $i\ne \pm j$,
$$ {}^xz\equiv {}^yz \pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}, $$
and the claim follows by induction on the length of $x$.
\end{proof}
\begin{Lem}\label{Op-2}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$, and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
For any $a\in \lambda^{-(\varepsilon(i)+1)/2}\Gamma$,
$b\in\lambda^{(\varepsilon(i)-1)/2}\Delta$ and any
$x\in\operatorname{EU}(2n,A,\Lambda)$,
one has
$$ {}^x Y_{i,-i}(a,b)\equiv Y_{i,-i}(a,b) \pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\end{Lem}
\begin{proof}
Denote $Y_{i,-i}(a,b)=[T_{i,-i}(a),T_{-i,i}(b)]$ by $z$. As in the proof of Lemma~\ref{Op-1}, we argue by induction on the length of $x$ in elementary generators and write $x=yT_{kl}(c)$, where $y\in \operatorname{EU}(2n,A,\Lambda)$ is shorter than $x$.
\par\smallskip
$\bullet$ If $(k,l)=(-i,i)$, then
$$ [T_{-i,i}(c),z]=[T_{-i,i}(c), [T_{i,-i}(a),T_{-i,i}(b)]]\\
=[T_{-i,i}(c), Z_{-i,i}(b,a)]. $$
\nuoindent
The same computation as in Case 2 in Lemma~\rangleef{Op-1} shows that
$$ [T_{-i,i}(c),z] \in \operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)). $$
\par\sigmamallskip
$\betaullet$ If $(k,l)=(i,-i)$, then
\betaegin{alignat*}{1}
[T_{i,-i}(c),z]=&[T_{i,-i}(c), [T_{i,-i}(a),T_{-i,i}(b)]]=\\
&[T_{i,-i}(c), [T_{-i,i}(b),T_{i,-i}(a)]^{-1}]=\\
&[T_{-i,i}(b),T_{i,-i}(a)]^{-1}[[T_{-i,i}(b),T_{i,-i}(a)], T_{i,-i}(c)] [T_{-i,i}(b),T_{i,-i}(a)].
\varepsilonnd{alignat*}
Now the inner factor $[[T_{-i,i}(b),T_{i,-i}(a)], T_{i,-i}(c)]$ falls into the previous case, hence belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$.
But then the same applies also to its conjugate
$$[T_{-i,i}(b),T_{i,-i}(a)]^{-1}\cdot
\operatorname{B}ig[[T_{-i,i}(b),T_{i,-i}(a)], T_{i,-i}(c)\operatorname{B}ig]
\cdot [T_{-i,i}(b),T_{i,-i}(a)]. $$
\par\sigmamallskip
$\betaullet$ If $k=i$ and $j\nue\pm k$, then
\betaegin{multline*}
[T_{i,j}(c),z]=[T_{i,j}(c), [T_{i,-i}(a),T_{-i,i}(b)]]=
T_{i,j}(-(ab+abab)c) T_{-i,j}(-babc)\cdot\\
T_{-j,j}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}\omegaverline{ c}bab\omegaverline{c} -\langleambda^{\varepsilonsilon(j)}(\omegaverline{c}bababc+\omegaverline{c}babababc)).
\varepsilonnd{multline*}
\nuoindent
Since $a\in\langleambda^{-(\varepsilonsilon(i)+1)/2}\operatorname{G}amma$ and
$b\in\langleambda^{(\varepsilonsilon(i)-1)/2}\operatorname{D}elta$, it follows that
the right side belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$.
\par\sigmamallskip
$\betaullet$ if $k=-i$ and $j\nue \pm k$, then
\betaegin{alignat*}{1}
[T_{-i,j}(c),z]=&[T_{-i,j}(c), [T_{i,-i}(a),T_{-i,i}(b)]]\\
=&[T_{-i,i}(b),T_{i,-i}(a)][T_{-i,j}(c), [T_{-i,i}(b),T_{i,-i}(a)]]^{-1}[T_{-i,i}(b),T_{i,-i}(a)]^{-1}.
\varepsilonnd{alignat*}
By the previous case,
$$ [T_{-i,j}(c), [T_{-i,i}(b),T_{i,-i}(a)]]\in \operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)). $$
\nuoindent
As above, normality of $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$ then implies
that the whole right side belongs to
$\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$.
\par\sigmamallskip
$\betaullet$ Finally, the case $l=\pm i$ and $k\nue \pm i$ reduces
to the case $k=\pm i$ via relation (R1).
\par\sigmamallskip
We have shown that
$$ {}^xz\varepsilonquiv {}^yz \pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\nuoindent
By induction we get that
$$ {}^xz\varepsilonquiv z\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\varepsilonnd{proof}
In particular, these results
immediately imply the following additivity properties of
the elementary commutators with respect to their arguments.
\begin{The}\label{symb}
Let $(A,\Lambda)$ be an associative form ring with $1$, let $n\ge 3$, and let $(I,\Gamma)$, $(J,\Delta)$
be form ideals of $(A,\Lambda)$. Then for any $ i\neq j$, and any
$a,a_1,a_2\in(I,\Gamma)$, $b,b_1,b_2\in(J,\Delta)$ one has
\begin{align*}
&Y_{ij}(a_1+a_2,b)\equiv Y_{ij}(a_1,b)\cdot Y_{ij}(a_2,b)
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))},\\
\noalign{\vskip3truept}
&Y_{ij}(a,b_1+b_2)\equiv Y_{ij}(a,b_1)\cdot Y_{ij}(a,b_2)
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))},\\
\noalign{\vskip3truept}
&Y_{ij}(a,b)^{-1}\equiv Y_{ij}(-a,b)\equiv Y_{ij}(a,-b)
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))},\\
\noalign{\vskip3truept}
&Y_{ij}(ab_1,b_2)\equiv Y_{ij}(a_1,a_2b)\equiv e
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))},\\
\noalign{\vskip3truept}
&Y_{i,-i}(\overline{b_1}ab_1,b_2)\equiv Y_{i,-i}(a_1,\overline{a_2}ba_2)\equiv e
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}.
\end{align*}
\end{The}
\begin{proof}
The first item can be derived from Lemma~\ref{Op-2.1} for
$i\neq\pm j$ and Lemma~\ref{Op-2.2} for $i=-j$ as follows. By
definition
$$ Y_{ij}(a_1+a_2,b)=[T_{ij}(a_1+a_2),T_{ji}(b)]=
[T_{ij}(a_1)T_{ij}(a_2),T_{ji}(b)], $$
\noindent
and it only remains to apply multiplicativity of commutators in
the first factor, and then apply Lemma~\ref{Op-2.1} and Lemma~\ref{Op-2.2}, respectively. The second item is similar,
and the third item follows. The last two items are obvious from
the definition.
\end{proof}
\section{Unrelativisation}
Here we establish the first claim of Theorem~\ref{generators}, and thus also
Theorems~\ref{equality2}, \ref{unrelative} and \ref{T:4}. It immediately follows from the next two
lemmas, the first of which addresses the case of short roots,
while the second one addresses the case of long roots.
Recall that for the easier case of the general linear group over
{\it commutative\/} rings this result was first established in
2018 in our paper \cite{NZ1}. Then it was generalised to
arbitrary associative rings in 2019, together with the second
claim of Theorem~\ref{generators}, see \cite{NZ2}. The proofs of the following
results exploit the same ideas as the proof of \cite{NZ2},
Lemma 4, but are noticeably more demanding from a technical viewpoint.
The following two lemmas address the case of short roots, where
$i\ne \pm j$, and the case of long roots, where $i=-j$, respectively.
\begin{Lem}\label{form-1}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$, and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
Suppose that $a\in I$, $b\in J$, $r\in A$ and $i\ne \pm j$. Then
$$ [T_{ji}(a), Z_{ji}( b , r )]\in [\operatorname{FU}(2n,I,\Gamma), \operatorname{FU}(2n, J,\Delta)]. $$
\end{Lem}
\begin{proof}
Without loss of generality, we may assume that $\varepsilon(i)=\varepsilon(j)$. Pick an $h \ne i, j$ with $\varepsilon(h)=\varepsilon(i)$. Then
$$ x=[T_{ji}(a), Z_{ji}( b , r )]=
T_{ji}(a)\cdot {}^{Z_{ji}( b , r )}T_{ji}(-a)
=T_{ji}(a)\cdot {}^{Z_{ji}( b , r )}[T_{jh}(1),T_{hi}(-a)]. $$
Thus,
\betaegin{align*}
x={}&T_{ji}(a) [{}^{Z_{ji}( b , r )}T_{jh}(1),{}^{Z_{ji}( b , r )}T_{hi}(-a)]=\\
&T_{ji}(a) [T_{jh}(1- b r )T_{ih}(- r b r ),T_{hj}(-a r b r )T_{hi}(-a(1- r b ))]=\\
&T_{ji}(a) [T_{jh}(1)y,T_{hi}(-a)z],
\varepsilonnd{align*}
where
\betaegin{multline*}
y=T_{jh}(- b r )T_{ih}(- r b r )\in \operatorname{FU}(2n,J,\operatorname{D}elta), \\
z = T_{hj}(-a r b r )T_{hi}(a r b )\in \operatorname{FU}(2n, (I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)).
\varepsilonnd{multline*}
Since $T_{hi}(-a)\in\operatorname{FU}(2n,I,\operatorname{G}amma)$, the second factor of the above commutator belongs to $\operatorname{FU}(2n,I,\operatorname{G}amma)$. Thus,
\betaegin{eqnarray}\langleabel{eqn:1}
[T_{jh}(1)y,T_{hi}(-a)z] &=& {}^{T_{jh}(1)}[y,T_{hi}(-a)z]\cdot [{T_{jh}(1)},T_{hi}(-a)z].
\varepsilonnd{eqnarray}
Now, the first commutator on the right hand side
\betaegin{eqnarray*}
{}^{T_{jh}(1)}[y,T_{hi}(-a)z] &=& {}^{T_{jh}(1)}[T_{jh}(- b r )T_{ih}(- r b r ),T_{hi}(-a)T_{hj}(-a r b r )T_{hi}(a r b )].
\varepsilonnd{eqnarray*}
Expanding the commutator above by its second argument, we obtain
\betaegin{eqnarray*}
&& {}^{T_{jh}(1)}[T_{jh}(- b r )T_{ih}(- r b r ),T_{hi}(-a)T_{hj}(-a r b r )T_{hi}(a r b )]\\
&=&{}^{T_{jh}(1)} [T_{jh}(- b r )T_{ih}(- r b r ),T_{hi}(-a)]\\
&&\qquad \qquad\qquad {}^{T_{jh}(1)T_{hi}(-a)}[T_{jh}(- b r )T_{ih}(- r b r ),T_{hj}(-a r b r )T_{hi}(a r b )].
\varepsilonnd{eqnarray*}
The second factor above belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$. And the first factor above equals
\betaegin{multline*}
{}^{T_{jh}(1)T_{jh}(- b r )} [T_{ih}(- r b r ),T_{hi}(-a)]\cdot [T_{jh}(- b r ),T_{hi}(-a)]\\={}^{T_{jh}(1)T_{jh}(- b r )} [T_{ih}(- r b r ),T_{hi}(-a)]\cdot T_{ji}( b r a)\\\in {}^{T_{jh}(1)T_{jh}(- b r )} [T_{ih}(- r b r ),T_{hi}(-a)]\cdot\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)).
\varepsilonnd{multline*}
On the other hand, the second commutator of (\rangleef{eqn:1}) equals
$$
[{T_{jh}(1)},T_{hi}(-a)]\cdot {}^{T_{hi}(-a)}[{T_{jh}(1)},z].
$$
The second commutator in the last expression belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$, and remains there after elementary conjugations, while the first commutator equals $T_{ij}(-a)$.
Summarising the above, we see that
$$
x\in {}^{T_{ji}(a)T_{jh}(1)T_{jh}(- b r )} [T_{ih}(- r b r ),T_{hi}(-a)]\cdot
\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))
$$
which belongs to $[\operatorname{FU}(2n,I,\operatorname{G}amma), \operatorname{FU}(2n, J,\operatorname{D}elta)]$ by Lemma~\rangleef{Op-1}.
\varepsilonnd{proof}
\begin{Lem}\label{form-2}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$, and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
Suppose that $a\in\Gamma$, $b\in\Delta$ and $r\in\Lambda$. Then
$$[T_{-i,i}(a),Z_{-i,i}(b,r)]\in [\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n, J,\Delta)]. $$
\end{Lem}
\begin{proof}
Without loss of generality, we may assume that $i>0$. Pick an $h>0$ with $h\ne i$. Then
\betaegin{multline*}
x=[T_{-i,i}(a), Z_{-i,i}( b , r )]=
T_{-i,i}(a)\cdot {}^{Z_{-i,i}( b , r )}T_{-i,i}(-a)=\\
T_{-i,i}(a)\cdot {}^{Z_{-i,i}( b , r )}\operatorname{B}ig(T_{hi}(-a)\cdot
[T_{h,-h}(a),T_{-h,i}(1)]\operatorname{B}ig).
\varepsilonnd{multline*}
\nuoindent
Thus,
\betaegin{align*}
x={}&T_{-i,i}( a )\cdot \operatorname{B}ig({}{}^{Z_{-i,i}( b , r )}
T_{hi}(- a )\cdot
[T_{h,-h}( a ),{}{}^{Z_{-i,i}( b , r )}T_{-h,i}(1)]\operatorname{B}ig)=\\
&T_{-i,i}( a )\cdot T_{h,i}(- a (1- b r ))\cdot T_{i,-h}(\langleambda r b r \omegaverline a )\cdot\operatorname{B}ig[T_{h,-h}( a ),T_{-h,i}(1- r b )\cdot T_{i,h}(\langleambda r b r )\operatorname{B}ig]
\varepsilonnd{align*}
\nuoindent
Using additivity of root unipotents, we can rewrite this as
$$ x=T_{-i,i}( a )T_{h,i}(- a )\cdot T_{h,i}(- a b r )
T_{i,-h}(\langleambda r b r \omegaverline a )\cdot\operatorname{B}ig[T_{h,-h}( a ),T_{-h,i}(1)T_{-h,i}(- r b )\cdot T_{i,h}(\langleambda r b r )\operatorname{B}ig]. $$
\nuoindent
Clearly,
$$ T_{h,i}(- a b r )T_{i,-h}(\langleambda r b r \omegaverline a ) \in
\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)). $$
\nuoindent
On the other hand, the commutator in the last expression equals
\betaegin{multline*}
\operatorname{B}ig[T_{h,-h}( a ),T_{-h,i}(1)T_{-h,i}(- r b )\cdot
T_{i,h}(\langleambda r b r )\operatorname{B}ig]=\\
\operatorname{B}ig[T_{h,-h}( a ),T_{-h,i}(1)\operatorname{B}ig]\cdot
{}^{T_{-h,i}(1)}\operatorname{B}ig[T_{h,-h}( a ), T_{-h,i}(- r b )\cdot T_{i,h}(\langleambda r b r )\operatorname{B}ig]=\\
T_{h,i}( a )T_{-i,i}(- a )\cdot {}^{T_{-h,i}(1)}
\operatorname{B}ig[T_{h,-h}( a ), T_{-h,i}(- r b )\cdot T_{i,h}(\langleambda r b r )\operatorname{B}ig].
\varepsilonnd{multline*}
\nuoindent
Again, clearly
$$ \operatorname{B}ig[T_{h,-h}( a ), T_{-h,i}(- r b )\cdot T_{i,h}(\langleambda r b r )\operatorname{B}ig]\in [\operatorname{FU}(2n,I,\operatorname{G}amma), \operatorname{FU}(2n, J,\operatorname{D}elta)]. $$
\nuoindent
On the other hand, the previous factors assemble to a left
$T_{-i,i}(a)T_{h,i}(-a)$ conjugate
of an element of $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$ ,
which is contained in $[\operatorname{FU}(2n,I,\operatorname{G}amma), \operatorname{FU}(2n, J,\operatorname{D}elta)]$.
This proves Lemma~\rangleef{form-2}.
\varepsilonnd{proof}
Combined, these results imply the first claim of Theorem~\ref{generators}.
\section{Rolling over elementary commutators}
Now we pass to the final, and most difficult, part of the
proof of Theorem~\ref{generators}: rolling an elementary commutator over
to a different position. Since we assume $n\ge 3$, the case of
{\it short} root type elementary commutators is easy. It is
settled by
essentially the same calculation as for the general linear group
$\operatorname{GL}(n,R)$, $n\ge 3$, see \cite{NZ2,NZ3}. But for the case
of {\it long\/} root type elementary commutators we have to
imitate the proof of \cite{NZ4}, Theorems 4 and 5, for $\operatorname{Sp}(4,R)$.
In the presence of a non-trivial involution, non-commutativity
and non-trivial form parameters this is quite a challenge.
In \S~12 we make some observations
to put this calculation in historical context.
\begin{Lem}\label{Op-2.1}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$,
and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
Then for any $i\neq \pm j$, any $h\ne\pm l$, and any
$a\in I$, $b\in J$, $c_1,c_2\in A$, one has
$$ Y_{ij}(c_1ac_2,b)\equiv Y_{hl}(a,c_2bc_1)
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\end{Lem}
\betaegin{proof}
Take any $h\nueq\pm i,\pm j$, and rewrite the elementary
commutator $z=Y_{ij}(c_1ac_2,b)$ on the left hand side of the
above congruence as follows
\betaegin{multline*}
z=\betaig[T_{ij}(c_1ac_2),T_{ji}(b)\betaig]
=T_{ij}(c_1ac_2)\cdot {}^{T_{ji}(b)}T_{ij}(-c_1ac_2)=\\
T_{ij}(c_1ac_2)\cdot {}^{T_{ji}(b)} [T_{hj}(ac_2),T_{ih}(c_1)].
\varepsilonnd{multline*}
\nuoindent
Expanding the conjugation by $T_{ji}(b)$, we see that
\betaegin{multline*}
z=T_{ij}(c_1ac_2)\cdot [{}^{T_{ji}(b)}T_{hj}(ac_2),
{}^{T_{ji}(b)}T_{ih}(c_1)]=\\
T_{ij}(c_1ac_2)\cdot \operatorname{B}ig[[T_{ji}(b),T_{hj}(ac_2)]
T_{hj}(ac_2),T_{ih}(c_1)[T_{ih}(-c_1),{T_{ji}(b)}]\operatorname{B}ig]=\\
T_{ij}(c_1ac_2)\cdot \operatorname{B}ig[T_{hi}(-ac_2b)T_{hj}(ac_2),T_{ih}(c_1)T_{jh}(bc_1)\operatorname{B}ig].
\varepsilonnd{multline*}
\nuoindent
Now, the first factor $T_{hi}(-ac_2b)$ of the first argument in
this last commutator already belongs to the group
$\operatorname{FU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$.
Thus, as above,
$$ z\varepsilonquiv T_{ij}(c_1ac_2)\cdot \operatorname{B}ig[T_{hj}(ac_2),T_{ih}(c_1)T_{jh}(bc_1)\operatorname{B}ig] \pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\nuoindent
Using multiplicativity of the commutator w.r.t. the second argument, cancelling the first two factors of the resulting expression, and then applying Lemma~\rangleef{Op-1} we see that
$$ z\varepsilonquiv
{}^{T_{ih}(c_1)}\betaig[T_{hj}(ac_2),T_{jh}(bc_1)\betaig]
\varepsilonquiv \betaig[T_{hj}(ac_2),T_{jh}(bc_1)\betaig]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\par
On the other hand, choosing another index $l\nueq\pm j,\pm h$ and rewriting the commutator
$\betaig[T_{hj}(ac_2),T_{jh}(bc_1)\betaig]$ on the right hand side of the
last congruence as
$$ \betaig[T_{hj}(ac_2),T_{jh}(bc_1)\betaig]=
\betaig[[T_{hl}(a),T_{lj}(c_2)],T_{jh}(bc_1)\betaig], $$
\nuoindent
by the same argument we get the congruence
$$ z\varepsilonquiv\betaig[T_{hj}(ac_2),T_{jh}(bc_1)\betaig]\varepsilonquiv
\betaig[T_{hl}(a),T_{lh}(c_2bc_1)\betaig]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\par
Obviously, for $n\gammae 3$ we can pass from any position $(i,j)$,
$i\nueq j$, to any other such position $(k,m)$, $k\nueq\pm m$,
by a sequence of at most three such elementary moves.
\varepsilonnd{proof}
\begin{Lem}\label{Op-2.2}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$,
and let $(I,\Gamma)$, $(J,\Delta)$ be form ideals of $(A,\Lambda)$.
Then for any $ -n\le i\le n$, any $-n\le k\le n$, and any
$a\in \lambda^{-(\varepsilon(i)+1)/2}\Gamma$,
$b\in \lambda^{(\varepsilon(i)-1)/2}\Delta$, $c\in A$, one has
$$ Y_{i,-i}(ca\overline c,b)\equiv Y_{k,-k}(\lambda^{(\varepsilon(i)-\varepsilon(k))/2}a,-\lambda^{(\varepsilon(k)-\varepsilon(i))/2}\overline c bc)\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\end{Lem}
\begin{proof}
Rewrite the elementary commutator $z=Y_{i,-i}(ca\overline c,b)$
on the left hand side of the above congruence as follows:
\betaegin{multline*}
z=T_{i,-i}(ca\omegaverline c )\cdot ^{T_{-i,i}(b)}T_{i,-i}(-ca\omegaverline c)=\\
T_{i,-i}(ca\omegaverline c)\cdot ^{T_{-i,i}(b)}\operatorname{B}ig(T_{i,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}ca)[T_{i,k}( c),T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]\operatorname{B}ig).
\varepsilonnd{multline*}
\nuoindent
Expanding the conjugation by $T_{-i,i}(b)$, we see that
$$ z=T_{i,-i}(ca\omegaverline c)\cdot ^{T_{-i,i}(b)}T_{i,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}ca)\cdot\operatorname{B}ig[{}^{T_{-i,i}(b)}T_{i,k}(c),{}^{T_{-i,i}(b)}T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)\operatorname{B}ig]. $$
\nuoindent
Clearly, the first two factors
$$ y=T_{i,-i}(ca\omegaverline c)\cdot^{T_{-i,i}(b)}T_{i,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}ca) $$
\nuoindent
can be rewritten as
$$ y=T_{i,-i}(ca\omegaverline c)\cdot
\betaig[{T_{-i,i}(b)},
T_{i,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}ca)\betaig]\cdot
T_{i,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}ca) $$
\nuoindent
which gives us the following congruence
$$ y
\varepsilonquiv T_{i,-i}(ca\omegaverline c)T_{i,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}ca) \pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\par
On the other hand, the commutator
$$ u=\operatorname{B}ig[{}^{T_{-i,i}(b)}T_{i,k}(c),T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)\operatorname{B}ig] $$
\nuoindent
in the expression of $z$ equals
$$ u=\operatorname{B}ig[T_{-i,k}(bc)
T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c b c)
T_{i,k}(c), T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)\operatorname{B}ig]. $$
\nuoindent
Expanding this last expression, we get
\betaegin{multline*}
u={}^{x}[T_{i,k}(c),
T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]\cdot\\
{}^{y}[T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc),
T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]\cdot\\
[T_{-i,k}(bc),T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)],
\varepsilonnd{multline*}
\nuoindent
where
$$ x=T_{-i,k}(bc)
T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c b c),\qquad y=T_{-i,k}(bc). $$
\nuoindent
It is easy to see that
$$ [T_{-i,k}(bc),T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]
\in\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)), $$
\nuoindent
so we can drop it. Further, by Lemma~\rangleef{Op-2},
modulo $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$ the second
factor can be simplified as follows
\betaegin{multline*}
{}^{y}[T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c b c),
T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]\varepsilonquiv\\
[T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc),
T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}
\varepsilonnd{multline*}
\nuoindent
But by Theorem~\rangleef{symb} one has
\betaegin{multline*}
[T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc),T_{k,-k}(-\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a)]\varepsilonquiv\\
[T_{k,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a), T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc)]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}.
\varepsilonnd{multline*}
\par
Summarising the above, we get
\betaegin{multline*}
z\varepsilonquiv T_{i,-i}(a)
T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca)\cdot
{}^{x}[T_{i,k}(c),
T_{k,-k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}a)]\cdot\\
[T_{k,-k}(\langleambda^{(\varepsilonsilon(i)-\varepsilonsilon(k))/2}a), T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc)]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}.
\varepsilonnd{multline*}
\nuoindent
Thus, to finish the proof it suffices to show that
$$ v=T_{i,-i}(a)T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca)\cdot ^{x}[T_{i,k}(c), T_{k,-k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}a)]
$$
\nuoindent
belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$. Clearly,
$$ v=T_{i,-i}(ca\omegaverline c)
T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca)\cdot
{}^{x} T_{i,-k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca)
T_{i,-i}(-ca\omegaverline c), $$
\nuoindent
can be rewritten as
\betaegin{multline*}
v=[T_{i,-i}(ca\omegaverline c)
T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca),x]=\\
[T_{i,-i}(ca\omegaverline c)T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca),
T_{-i,k}(bc)T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline cbc)].
\varepsilonnd{multline*}
\nuoindent
Expanding this last commutator w.r.t. its first and second arguments,
we express it as the product of elementary conjugates
of the four following commutators
\par\sigmamallskip
$\betaullet$ $[T_{i,-i}(ca\omegaverline c), T_{-i,k}(bc)]$,
\par\sigmamallskip
$\betaullet$ $[T_{i,-i}(ca\omegaverline c), T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc)]$,
\par\sigmamallskip
$\betaullet$ $[T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca), T_{-i,k}(bc)]$,
\par\sigmamallskip
$\betaullet$
$[T_{i,-k}(\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}ca), T_{-k,k}(-\langleambda^{(\varepsilonsilon(k)-\varepsilonsilon(i))/2}\omegaverline c bc)]$.
\par\smallskip\noindent
A direct computation convinces us that each of these commutators belongs to the elementary subgroup
$\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$. This finishes the
proof of the lemma, and thus also of Theorem~\ref{generators}.
\end{proof}
\section{Mat[ch]ing elementary commutators of different root lengths}
In this section we prove a congruence connecting elementary
commutators of long root type with those of short root type.
In the case where one of the relative form parameters is
as small as possible (= minimal), this congruence can be used
to eliminate long root type elementary commutators. On the
other hand, when one of the relative form parameters is as large as possible (= equals the corresponding ideal), one can abandon
short root type elementary commutators.
\begin{Lem}\label{long-short}
Let $(A,\Lambda)$ be an associative form ring with $1$, $n\ge 3$, and let $(I,\Gamma)$, $(J,\Delta)$
be form ideals of $(A,\Lambda)$. Then for any $ -n\le i\le n$, any
$-n\le k\le n$, and $a\in I$, $b\in \lambda^{(\varepsilon(i)-1)/2} \Delta$, one has
$$\Big[T_{i,-i}\big(a-\lambda^{\varepsilon(-i)} \overline a \big),T_{-i,i}\big(b\big)\Big] \equiv \big[ T_{i,k}(a), T_{k,i}(b)\big]
\pamod{\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))}. $$
\end{Lem}
\betaegin{proof}
Pick an index $k\nue \pm i$, and rewrite the elementary commutator
$z=\operatorname{B}ig[T_{i,-i}\betaig(a-\langleambda^{\varepsilon(-i)} \omegaverline a \betaig),T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]$ on the left hand side as
$$ z=\operatorname{B}ig[[T_{k,-i}(-1),T_{i,k}(a)], T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]=
\operatorname{B}ig[ {}^{T_{k,-i}(-1)}T_{i,k}(a)\cdot T_{i,k}(-a),
T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]. $$
\nuoindent
Using multiplicativity of the commutator w.r.t the first argument,
we see
$$
z={}^{T_{k,-i}(-1)T_{i,k}(a)T_{k,-i}(1)}[T_{i,k}(-a),T_{-i,i}\betaig(b\betaig)]\cdot \operatorname{B}ig[ {}^{T_{k,-i}(-1)}T_{i,k}(a), T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]. $$
\nuoindent
The first factor belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$, so we leave it out. Thus, $z$ is congruent modulo this subgroup to
\betaegin{multline*}
\operatorname{B}ig[ {}^{T_{k,-i}(-1)}T_{i,k}(a), T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]=
{}^{T_{k,-i}(-1)}\operatorname{B}ig[ T_{i,k}(a), {}^{T_{k,-i}(1)}T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]=\\
={}^{T_{k,-i}(-1)}\operatorname{B}ig[ T_{i,k}(a), [{T_{k,-i}(1)},T_{-i,i}\betaig(b\betaig)]T_{-i,i}\betaig(b\betaig)\operatorname{B}ig]=\\
{}^{T_{k,-i}(-1)}\operatorname{B}ig[ T_{i,k}(a), T_{k,i}\betaig(b\betaig)
T_{k,-k}\betaig(\langleambda^{(\varepsilon(-i)-\varepsilon(k))/2}(b)\betaig)T_{-i,i}\betaig(b\betaig)\operatorname{B}ig].
\varepsilonnd{multline*}
\nuoindent
Expanding this last commutator w.r.t the second argument, we see
that the second and the third factors belong to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$, so that we can leave them out. Now we have
$$ z\varepsilonquiv{}^{T_{k,-i}(-1)}\operatorname{B}ig[ T_{i,k}(a), T_{k,i}\betaig(b)\operatorname{B}ig]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}, $$
\nuoindent
as claimed.
\varepsilonnd{proof}
\begin{Cor}
Under the conditions of Lemma~$\ref{long-short}$, assume further that $b=b'-\lambda^{\varepsilon(i)}\overline{b'}$ for some $b'\in J$. Then
$$ \Big[T_{i,-i}\big(a-\lambda^{\varepsilon(-i)} \overline a \big),T_{-i,i}\big(b-\lambda^{\varepsilon(i)} \overline b\big)\Big] \equiv \big[ T_{i,k}(a), T_{k,i}(b')\big]\cdot \big[T_{i,k}(a),T_{k,i}(-\lambda^{\varepsilon(i)} \overline{b'})\big]
$$
\noindent
modulo $\operatorname{EU}(2n,(I,\Gamma)\circ(J,\Delta))$.
\end{Cor}
\varepsilonnd{Cor}
\betaegin{proof}
Keep the notation from the proof of Lemma~\rangleef{long-short}.
Under this additional assumption one has
$$ z\varepsilonquiv {}^{T_{k,-i}(-1)}\operatorname{B}ig[ T_{i,k}(a), T_{k,i}(b')T_{k,i}(-\langleambda^{\varepsilon(i)} \omegaverline{b'})\operatorname{B}ig]\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}. $$
\nuoindent
Expanding the commutator w.r.t the second argument again,
we see that
\betaegin{multline*}
{}^{T_{k,-i}(-1)}\operatorname{B}ig[ T_{i,k}(a), T_{k,i}(b')T_{k,i}(-\langleambda^{\varepsilon(i)} \omegaverline{b'})\operatorname{B}ig]=\\
{}^{T_{k,-i}(-1)}\operatorname{B}ig(\betaig[ T_{i,k}(a), T_{k,i}(b')\betaig]\cdot {}^{ T_{k,i}(b')}\betaig[ T_{i,k}(a),T_{k,i}(-\langleambda^{\varepsilon(i)} \omegaverline{b'})\betaig]\operatorname{B}ig).
\varepsilonnd{multline*}
\nuoindent
Applying Lemma~\rangleef{Op-1}, we get
$$
z\varepsilonquiv \betaig[ T_{i,k}(a), T_{k,i}(b')\betaig]\cdot \betaig[T_{i,k}(a),T_{k,i}(-\langleambda^{\varepsilon(i)} \omegaverline{b'})\betaig]
\pamod{\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))}, $$
\nuoindent
as claimed.
\varepsilonnd{proof}
\begin{Cor}
If $I=\Gamma$ or $J=\Delta$, then for the second type of
generators in Theorem~$\ref{generators}$ it suffices to take one pair $(h,-h)$.
\end{Cor}
\begin{Cor}
If $\Gamma=I\cap\Lambda_{\min}$ or $\Delta=J\cap \Lambda_{\min}$, then
for the second type of generators in Theorem~$\ref{generators}$ it suffices to
take one pair $(h,k)$, $h\neq\pm k$.
\end{Cor}
\section{Triple and quadruple commutators}
Actually, Theorem~\ref{T:7} easily follows by induction on $m$ from
the following two special cases: triple commutators and quadruple commutators.
\begin{Lem}\label{triple}
Let $(A,\Lambda)$ be any associative form ring with $1$, let $n\ge 3$,
and let
$(I,\Gamma)$, $(J,\Delta)$, $(K,\Omega)$ be form ideals of $(A,\Lambda)$. Then
\begin{multline*}
\big[\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big],\operatorname{EU}(2n,K,\Omega)\big]=\\
\big[\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta)),\operatorname{EU}(2n,K,\Omega)\big].
\end{multline*}
\end{Lem}
\begin{proof}
First of all, observe that the generators of the first type in Theorem~\ref{generators} belong to
$\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta))$. Thus, forming their commutators with $T_{h,k}(c)\in \operatorname{EU}(2n,K,\Omega)$ will bring us inside
$[\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta)),\operatorname{EU}(2n,K,\Omega)]$.
\par
Next, let $Y_{i,j}(a,b)=[T_{i,j}(a),T_{j,i}(b)]$ be a typical generator of the second type of the commutator subgroup $\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big]$, with $T_{i,j}(a)\in \operatorname{EU}(2n,I,\Gamma)$ and $T_{j,i}(b)\in \operatorname{EU}(2n,J,\Delta)$.
\par
From Lemma~\ref{Op-1} and Lemma~\ref{Op-2} we know that ${}^xY_{i,j}(a,b)=Y_{i,j}(a,b)z$ for some
$z\in\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta))$, and thus for any $T_{k,l}(c)\in \operatorname{EU}(2n,K,\Omega)$,
$$ \big[{}^xY_{i,j}(a,b),T_{k,l}(c)\big]=
\big[Y_{i,j}(a,b)z,T_{k,l}(c)\big]={}^{Y_{ij}(a,b)}[z,T_{k,l}(c)]\cdot
[Y_{i,j}(a,b),T_{k,l}(c)]. $$
\noindent
The first of these commutators also belongs to
$$ \big[\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta)),\operatorname{EU}(2n,K,\Omega)\big], $$
\noindent
and stays there after elementary conjugations. Let us concentrate
on the second one.
\par\sigmamallskip\nuoindent
{\betaf Case 1.} When $i\nue \pm j$ the same
analysis as in the proof of Lemma~\rangleef{Op-1}, shows that:
\par\sigmamallskip
$\betaullet$ If $k\nue -l$ and $k,l\nueq\pm i,\pm j$, then $T_{k,l}(c)$ commutes with
$Y_{i,j}(a,b)$.
\par\sigmamallskip
$\betaullet$ For any $h\nueq\pm i,\pm j$ the formulas
for $Y_{ij}(a,b)$ and $Y_{ij}(a,b)^{-1}$ given in the proof of
Lemma~\rangleef{Op-1} immediately imply that
\betaegin{alignat*}{1}
[z,T_{ih}(c)]&=T_{jh}(babc)T_{ih}(abc+ababc),\\
\nuoalign{\vskip 3truept}
[z,T_{jh}(c)]&=T_{jh}(-bac)T_{ih}(-abac),\\
\nuoalign{\vskip 3truept}
[z,T_{hi}(c)]&=T_{jh}(caba)T_{ih}(-cab),\\
\nuoalign{\vskip 3truept}
[z,T_{hj}(c)]&=T_{jh}(cba+cbaba)T_{ih}(-cbab),
\varepsilonnd{alignat*}
\nuoindent
and similarly
\betaegin{alignat*}{1}
[z,T_{-i,h}(c)]&=
[z,T_{-h,i}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(i))/2}c)]=\\
&T_{j,-h}(\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(i))/2}caba)T_{i,-h}(\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(i))/2}cab),\\
\nuoalign{\vskip 3truept}
[z,T_{-j,h}(c)]& =[z,T_{-h,j}(-\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}c)]=\\
&T_{j,-h}(\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}cba+\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}cbaba) T_{i,-h}(\langleambda^{((\varepsilonsilon(h)+\varepsilonsilon(j))/2}cbab),\\
\nuoalign{\vskip 3truept}
[z,T_{h,-i}(c)]& =
[z,T_{i,-h}(-\langleambda^{(-(\varepsilonsilon(i)-\varepsilonsilon(h))/2}c)]= \\
&=T_{j,-h}(\langleambda^{(-(\varepsilonsilon(i)-\varepsilonsilon(h))/2}bac)T_{i,-h}(\langleambda^{(-(\varepsilonsilon(i)-\varepsilonsilon(h))/2}abac),\\
\nuoalign{\vskip 3truept}
[z,T_{h,-j}(c)]& =
[z,T_{j,-h}(-\langleambda^{(-(\varepsilonsilon(j)-\varepsilonsilon(h))/2}c)] \\
&T_{j,-h}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(h))/2}bac)T_{i,-h}(\langleambda^{(-(\varepsilonsilon(j)-\varepsilonsilon(h))/2}abac)
\varepsilonnd{alignat*}
\par\nuoindent
All factors on the right hand side belong already to
$\operatorname{EU}\betaig(2n,((I,\operatorname{G}amma)\circ (J,\operatorname{D}elta))\circ (K,\Omega)\betaig)$.
\par
If $(k,l)=(\pm i, \pm j)$ or $(\pm j,\pm i)$, then we take an index $h\nue \pm i, \pm j$ and rewrite $T_{kl}(c)$ as $[T_{k,h}(c),T_{h,l}(1)]$ and apply the previous items to get it belongs to $\betaig[\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ (J,\operatorname{D}elta)),\operatorname{EU}(2n,K,\Omega)\betaig]$.
\par\sigmamallskip
On the other hand, for $k=-l$ we have:
\par\sigmamallskip
$\betaullet$ If $k\nue \pm i,\pm j$, then $T_{k,-k}(c)$ commutes
with $z$ and can be discarded.
\par\sigmamallskip
$\betaullet$ Otherwise, we have
\betaegin{align*}
&[z,T_{i,-i}(c)]=T_{i,-j}(-\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}(1+ab+abab)c\omegaverline{(bab)})T_{j,-j}(\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}babc\omegaverline{bab})\\
&\hskip 2.5truein T_{i,-i}(-c+(1+ab+abab)c\omegaverline{(1+ab+abab)}),\\
\nuoalign{\vskip 3truept}
&[z,T_{j,-j}(c)]=T_{i,-j}(abac(1-\omegaverline{ba}))T_{i,-i}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}abac\omegaverline{aba})\cdot\\
&\hskip 3.5truein T_{j,-j}(-c+(1-ba)c\omegaverline{(1-ba)}),\\
\nuoalign{\vskip 3truept}
&[z,T_{-i,i}(c)]=[[T_{ij}(a),T_{ji}(b)],T_{-i,i}(c)]=\\
&\hskip 1.5truein [[T_{-j,-i}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}a),T_{-i,-j}(\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}b)],T_{-i,i}(c)],\\
\nuoalign{\vskip 3truept}
&[z,T_{-j,j}(c)]=[[T_{ij}(a),T_{ji}(b)],T_{-j,j}(c)]=\\
&\hskip 1.5truein [[T_{-j,-i}(-\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}a),T_{-i,-j}(\langleambda^{((\varepsilonsilon(i)-\varepsilonsilon(j))/2}b)],T_{-j,-j}(c)].
\varepsilonnd{align*}
The two last cases reduce to the first two. In each case
the resulting expressions belong to
$\operatorname{EU}\betaig(2n,((I,\operatorname{G}amma)\circ (J,\operatorname{D}elta))\circ (K,\Omega)\betaig)$.
\par\sigmamallskip\nuoindent
{\betaf Case 2.} When $i=-j$ the same analysis as in the proof of Lemma~\rangleef{Op-2}, shows that:
\par\sigmamallskip
$\betaullet$ If $(k,l)=(-i,i)$, then
$$ [z,T_{-i,i}(c)]=[ [T_{i,-i}(a),T_{-i,i}(b)],T_{-i,i}(c)]
=[Z_{-i,i}(b,a),T_{-i,i}(c)]. $$
\nuoindent
Now, the same computation as in Lemma~\rangleef{form-2}
shows that
$$ [z,T_{-i,i}(c)] \in \operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)). $$
\par\sigmamallskip
$\betaullet$ If $(k,l)=(i,-i)$, then
\betaegin{multline*}
[z,T_{i,-i}(c)]=[ [T_{i,-i}(a),T_{-i,i}(b)],T_{i,-i}(c)]=
[ [T_{-i,i}(b),T_{i,-i}(a)]^{-1},T_{i,-i}(c)]\\
=[T_{-i,i}(b),T_{i,-i}(a)]\cdot
[T_{i,-i}(c),[T_{-i,i}(b),T_{i,-i}(a)] ]\cdot
[T_{-i,i}(b),T_{i,-i}(a)]^{-1}.
\varepsilonnd{multline*}
\nuoindent
By the previous subcase,
$$ [T_{i,-i}(c),[T_{-i,i}(b),T_{i,-i}(a)] ]\in \operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)). $$
\nuoindent
But then its conjugates also stay therein.
\par\sigmamallskip
$\betaullet$ If $k=i$ and $j\nue \pm k$, then
\betaegin{multline*}
[z,T_{i,j}(c)]=[ [T_{i,-i}(a),T_{-i,i}(b)],T_{i,j}(c)]=\\
T_{-j,j}(\langleambda^{((\varepsilonsilon(j)-\varepsilonsilon(i))/2}\omegaverline{ c}bab\omegaverline{c} \langleambda^{\varepsilonsilon(j)}(\omegaverline{c}bababc+\omegaverline{c}babababc))\cdot T_{-i,j}(babc)T_{i,j}((ab+abab)c)
\varepsilonnd{multline*}
\nuoindent
Since $a\in \langleambda^{-(\varepsilonsilon(i)+1)/2}\operatorname{G}amma$ and $b\in\langleambda^{(\varepsilonsilon(i)-1)/2}\operatorname{D}elta$, it follows that the right
hand side belongs to $\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta))$.
\par\sigmamallskip
$\betaullet$ If $k=-i$ and $j\nue \pm k$, then
\betaegin{multline*}
[z,T_{-i,j}(c)]=[ [T_{i,-i}(a),T_{-i,i}(b)],T_{-i,j}(c)]=\\
[T_{-i,i}(b),T_{i,-i}(a)]^{-1}\cdot
[T_{-i,j}(c), [T_{-i,i}(b),T_{i,-i}(a)]]\cdot [T_{-i,i}(b),T_{i,-i}(a)].
\varepsilonnd{multline*}
\nuoindent
By the previous subcase,
$$ [T_{-i,j}(c), [T_{-i,i}(b),T_{i,-i}(a)]]\in \operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)). $$
\nuoindent
But then its conjugates also stay therein.
\par\sigmamallskip
$\betaullet$ Finally, using relation $(R1)$ the subcase $l=\pm i$
and $k\nue\pm i$ is readily reduced to the subcases, where
$k=\pm i$.
\varepsilonnd{proof}
Now, for $n\ge 4$ the only new case of quadruple commutators
is considered in the following lemma, which immediately follows
from Lemma~\ref{triple} and Theorem~\ref{equality}. Of course, for the outstanding
case $n=3$ it requires a separate proof. All our assaults on
this remaining case were crippled by forbidding calculations.
\begin{Lem}\label{quadruple}
Let $(A,\Lambda)$ be any associative form ring with $1$ and let
$(I,\Gamma)$, $(J,\Delta)$, $(K,\Omega)$, $(L, \Theta)$ be form ideals of $(A,\Lambda)$. If either $n\ge 4$, or $n\ge 3$ and one of these ideals equals its corresponding relative form parameter, then
\begin{multline*}
\Big[\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big],\big[\operatorname{EU}(2n,K,\Omega),\operatorname{EU}(2n,L, \Theta)\big]\Big]=\\
\big[\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta)),\operatorname{EU}(2n,(K,\Omega)\circ (L, \Theta))\big].
\end{multline*}
\end{Lem}
\betaegin{proof}
From the previous lemma we already know that
\betaegin{multline*}
\operatorname{B}ig[\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ(J,\operatorname{D}elta)),\betaig[\operatorname{EU}(2n,K,\Omega),\operatorname{EU}(2n,L, \operatorname{T}heta)\betaig]\operatorname{B}ig]=\\
\operatorname{B}ig[\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ (J,\operatorname{D}elta)),\operatorname{EU}(2n,(K,\Omega)\circ (L, \operatorname{T}heta))\operatorname{B}ig]
\varepsilonnd{multline*}
and that
\betaegin{multline*}
\operatorname{B}ig[\betaig[\operatorname{EU}(2n,I,\operatorname{G}amma),\operatorname{EU}(2n,J,\operatorname{D}elta)\betaig],\operatorname{EU}(2n,(K,\Omega)\circ(L, \operatorname{T}heta))\operatorname{B}ig]=\\
\operatorname{B}ig[\operatorname{EU}(2n,(I,\operatorname{G}amma)\circ (J,\operatorname{D}elta)),\operatorname{EU}(2n,(K,\Omega)\circ (L, \operatorname{T}heta))\operatorname{B}ig].
\varepsilonnd{multline*}
\par
Thus, it only remains to prove that
$$ \big[Y_{ij}(a,b),Y_{hk}(c,d)\big]\in\Big[\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta)),\operatorname{EU}(2n,(K,\Omega)\circ (L, \Theta))\Big], $$
\noindent
where $a\in(I,\Gamma)$, $b\in(J,\Delta)$, $c\in(K,\Omega)$ and
$d\in(L, \Theta)$. Conjugations by elements
$x\in\operatorname{EU}(2n,A,\Lambda)$ do not matter, since they amount to extra
factors from the above triple commutators, which are
already accounted for.
\par
Now, for $n\ge 4$ this already finishes the proof, since in
this case we can move $Y_{hk}(c,d)$ modulo
$\operatorname{EU}(2n,(K,\Omega)\circ (L, \Theta))$ to a position where it commutes with
$Y_{ij}(a,b)$, either by Lemma~\ref{Op-2.1} when $i\ne\pm j$ and $h\ne\pm k$, or by Lemma~\ref{Op-2.2} when $i=-j$ or $h=-k$.
Suppose now that one of the ideals equals its corresponding relative form parameter, say $I=\Gamma$. If $i\ne \pm j$, then by Lemma~\ref{long-short} we have
$$Y_{i,j}(a,b)\equiv Y_{i,-i}(a, b-\lambda^{\varepsilon(i)}\overline b).$$
For $n\ge 3$, we can move $Y_{i,-i}(a, b-\lambda^{\varepsilon(i)}\overline b)$ modulo $\operatorname{EU}(2n,(K,\Omega)\circ (L, \Theta))$ to a position where it commutes with $Y_{hk}(c,d)$ by Lemma~\ref{Op-2.1}. Otherwise, if $i=-j$, then we can also move $Y_{i,-i}(a,b)$ to a position where it commutes with $Y_{hk}(c,d)$ by Lemma~\ref{Op-2.2}. This finishes the whole proof.
\end{proof}
\sigmaection{Elementary multiple commutator formulas}
In the current section, we show that multiple commutators of elementary subgroups can be reduced to double such commutators.
To state our main results, we have to recall some further
pieces of notation from \cite{yoga-1,RHZZ2,yoga-2, RNZ5, RNZ1, stepanov10}.
Namely, let $H_1,\langledots,H_m\langlee G$ be subgroups of $G$. There are
many ways to form a higher commutator of these
groups, depending on where we put the brackets. Thus, for three
subgroups $F,H,K\langlee G$ one can form two triple commutators
$[[F,H],K]$ and $[F,[H,K]]$. Usually, we write $[H_1,H_2,\langledots,H_m]$ for the {\it left-normed\/} commutator, defined inductively by
$$ [H_1,\langledots,H_{m-1},H_m]=[[H_1,\langledots,H_{m-1}],H_m]. $$
\nuoindent
To stress that here we consider {\it any\/} commutator of these subgroups, with an arbitrary placement of brackets, we write $\langlelbracket H_1,H_2,\langledots,H_m \ranglerbracket$. Thus, for instance, $\langlelbracket F,H,K\ranglerbracket $
refers to any of the two arrangements above.
\par
Actually, a specific arrangement of brackets usually does not play a
major role in our results -- apart from one important
attribute\footnote{Actually, for non-commutative rings the
symmetric product of ideals is not associative, so that
the initial bracketing of higher commutators will be reflected also
in the bracketing of such higher symmetric products.}.
Namely, what will matter a lot is the position of the outermost
pair of inner brackets. Every higher commutator subgroup
$\llbracket H_1,H_2,\ldots,H_m\rrbracket $ can be uniquely written as
$$ \llbracket H_1,H_2,\ldots,H_m\rrbracket =
[\llbracket H_1,\ldots,H_s\rrbracket ,\llbracket H_{s+1},\ldots,H_m\rrbracket ], $$
\noindent
for some $s=1,\ldots,m-1$. This $s$ will be called the cut point of our multiple commutator.
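For example, in the five arrangements with $m=4$ listed above, the cut points are $s=3$, $s=3$, $s=2$, $s=1$ and $s=1$, respectively; in particular, the left-normed commutator $[H_1,\ldots,H_m]$ always has cut point $s=m-1$. These illustrative values are not taken from the cited sources, but follow immediately from the definition.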
Now we are all set to finish the
proof of Theorem~7. The proof is an easy adaptation of the proof
of \cite{NZ3}, Theorem 1, but we reproduce it here for the sake of
completeness.
\begin{proof}
Denote the commutator on the left-hand side by $H$,
$$ H=\big\llbracket \operatorname{EU}(2n,I_1,\Gamma_1),\operatorname{EU}(2n,I_2,\Gamma_2),\ldots,\operatorname{EU}(2n,I_m,\Gamma_m)\big\rrbracket . $$
\noindent
We argue by induction on $m$, with the cases $m\le 4$ as the
base of induction --- for the case $m=2$ there is nothing to
prove, the case $m=3$ is accounted for by Lemma~\ref{triple}, and the case
$m=4$ --- by Lemma~\ref{triple}, if the cut point $s\neq 2$, and by
Lemma~\ref{quadruple} when $s=2$.
\par
Now, let $m\ge 5$ and assume that our theorem is already
proven for all shorter commutators. Consider an arbitrary
arrangement of brackets\/ $[\![\ldots]\!]$ with the cut point
$s$ and let
\begin{multline*}
\big\llbracket \operatorname{EU}(2n,I_1,\Gamma_1),\operatorname{EU}(2n,I_2,\Gamma_2),\ldots,
\operatorname{EU}(2n,I_s,\Gamma_s)\big\rrbracket ,\\
\big\llbracket \operatorname{EU}(2n,I_{s+1},\Gamma_{s+1}),\operatorname{EU}(2n,I_{s+2},\Gamma_{s+2}),\ldots,\operatorname{EU}(2n,I_m,\Gamma_m)\big\rrbracket ,
\end{multline*}
\noindent
be the partial commutators, the first one containing the factors
before the cut point, and the second one containing those after
the cut point.
\par\smallskip
$\bullet$ When the cut point occurs at $s=1$ or at $s=m-1$, one
of these commutators is a single elementary subgroup, $\operatorname{EU}(2n,I_1,\Gamma_1)$
in the first case or $\operatorname{EU}(2n,I_m,\Gamma_m)$ in the second one. Then we
can apply the induction hypothesis to the other factor. For $s=1$,
denote by $t=2,\ldots,m-1$ the cut point of the second factor.
Then by the induction hypothesis
\begin{multline*}
H=\bigg[\operatorname{EU}(2n,I_1,\Gamma_1),\Big\llbracket \operatorname{EU}(2n,I_2,\Gamma_2),\operatorname{EU}(2n,I_3,\Gamma_3),\ldots,\operatorname{EU}(2n,I_m,\Gamma_m)
\Big\rrbracket\bigg]=\\
\bigg[\operatorname{EU}(2n,I_1,\Gamma_1),
\Big[\operatorname{EU}(2n,(I_2,\Gamma_2)\circ\ldots\circ(I_t,\Gamma_t)),
\operatorname{EU}(2n,(I_{t+1},\Gamma_{t+1})\circ\ldots\circ(I_m,\Gamma_m))\Big]\bigg],
\end{multline*}
\noindent
and we are done by Lemma~\ref{triple}. Similarly, for $s=m-1$ denote by
$r=1,\ldots,m-2$ the cut point of the first factor. Then by the
induction hypothesis
\begin{multline*}
H=\bigg[\Big\llbracket\operatorname{EU}(2n,I_1,\Gamma_1),\operatorname{EU}(2n,I_2,\Gamma_2),\ldots,\operatorname{EU}(2n,I_{m-1},\Gamma_{m-1})
\Big\rrbracket,\operatorname{EU}(2n,I_m,\Gamma_m)\bigg]=\\
\bigg[\Big[\operatorname{EU}(2n,(I_1,\Gamma_1)\circ\ldots\circ(I_r,\Gamma_r)),
\operatorname{EU}(2n,(I_{r+1},\Gamma_{r+1})\circ\ldots\circ(I_{m-1},\Gamma_{m-1}))\Big],\qquad\\
\hskip 4truein\operatorname{EU}(2n,I_m,\Gamma_m)\bigg],
\end{multline*}
\noindent
and we are again done by Lemma~\ref{triple}.
\par\smallskip
$\bullet$ Otherwise, when $s\neq 1,m-1$, we can apply the
induction hypothesis to both factors. Let as above $r=1,\ldots,s-1$
be the cut point of the first factor and let $t=s+1,\ldots,m-1$ be
the cut point of the second factor. Then we can apply the induction
hypothesis to both factors of
\begin{multline*}
H=\bigg[\Big\llbracket\operatorname{EU}(2n,I_1,\Gamma_1),\operatorname{EU}(2n,I_2,\Gamma_2),\ldots,\operatorname{EU}(2n,I_s,\Gamma_s)
\Big\rrbracket,\\
\Big\llbracket\operatorname{EU}(2n,I_{s+1},\Gamma_{s+1}),\operatorname{EU}(2n,I_{s+2},\Gamma_{s+2}),\ldots,\operatorname{EU}(2n,I_m,\Gamma_m)
\Big\rrbracket\bigg]
\end{multline*}
\noindent
to conclude that
\begin{multline*}
H=\bigg[\Big[\operatorname{EU}(2n,(I_1,\Gamma_1)\circ\ldots\circ(I_r,\Gamma_r)),\operatorname{EU}(2n,(I_{r+1},\Gamma_{r+1})\circ\ldots\circ(I_s,\Gamma_s))\Big],\\
\Big[\operatorname{EU}(2n,(I_{s+1},\Gamma_{s+1})\circ\ldots\circ(I_t,\Gamma_t)),\operatorname{EU}(2n,(I_{t+1},\Gamma_{t+1})\circ\ldots\circ(I_m,\Gamma_m))
\Big]\bigg],
\end{multline*}
\noindent
and we are again done, this time by Lemma~\ref{quadruple}.
\end{proof}
\section{Further applications}
Now, we are in a position to finish the proof of Theorem~\ref{T:8}.
\begin{proof}
Since $(I,\Gamma)$ and $(J,\Delta)$ are comaximal, there exist $a'\in I$
and $b'\in J$ such that $a'+b'=1\in R$. But then by Lemmas~\ref{Op-2.1}
and \ref{long-short},
for $i\ne \pm j$ one has
$$ Y_{ij}(a,b)=Y_{ij}(a(a'+b'),b)\equiv Y_{ij}(aa',b)\cdot Y_{ij}(ab',b)\equiv e$$
modulo $\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta))$.
\par
For $i=-j$, one has
$$
Y_{i,-i}(a,b)=Y_{i,-i}((a'+b')a\overline{(a'+b')},b)
=Y_{i,-i}(a'a\overline{a'}+b'a \overline{a'} +a'a\overline{b'}+b'a \overline{b'},b). $$
\noindent
Applying multiplicativity of commutators in the first argument and then Lemma~\ref{Op-2}, we deduce that
$$
Y_{i,-i}(a,b)\equiv Y_{i,-i}(a'a\overline{a'},b)Y_{i,-i}(b'a \overline{a'},b)Y_{i,-i}(a'a\overline{b'},b)Y_{i,-i}(b'a \overline{b'} ,b)\pmod{\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta))}.
$$
By Theorem~\ref{symb}, each of the above factors is trivial modulo $\operatorname{EU}(2n,(I,\Gamma)\circ (J,\Delta))$. This finishes the proof.
\end{proof}
Let us state another amusing corollary of Theorem~\ref{symb}.
For the form ideals themselves, one has an obvious inclusion
\begin{multline*}
\Big((I,\Gamma)+(J,\Delta)\Big)\circ\Big((I,\Gamma)\cap (J,\Delta)\Big)=\\
\Big((I+J)\circ(I\cap J), \Gamma_{\min}((I+J)\circ(I\cap J))+ {}^{(\Gamma\cap\Delta)}(\Gamma+\Delta) + {}^{(\Gamma+\Delta)}(\Gamma\cap\Delta) \Big)\le \\
\Big(I\circ J, \Gamma_{\min}(I\circ J) +{}^J\Gamma +{}^I\Delta\Big)=(I,\Gamma)\circ(J,\Delta).
\end{multline*}
\noindent
Only very rarely is this inclusion an equality.
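To get a feeling for how strict this inclusion can be, it is instructive to look at the analogous inclusion for plain ideals of a commutative ring, $(I+J)(I\cap J)\le IJ$; the following toy example is ours and is stated only as an illustration. Take $A=\mathbb{Z}[x]$, $I=(2)$ and $J=(x)$. Then
$$ (I+J)(I\cap J)=(2,x)\cdot(2x)=(4x,2x^2), \qquad IJ=(2x), $$
\noindent
and $2x\notin(4x,2x^2)$, since $2x=4xf+2x^2g$ would force $1=2f+xg$ in $\mathbb{Z}[x]$. Thus already here the inclusion is strict.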
\begin{The}\label{the:p4}
For any two form ideals $(I,\Gamma)$ and $(J,\Delta)$ of $(A,\Lambda)$, $n\ge 3$, one has
$$ \Big[\operatorname{EU}\big(2n,(I,\Gamma)+(J,\Delta)\big),
\operatorname{EU}\big(2n,(I,\Gamma)\cap(J,\Delta)\big)\Big]\le
\big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big]. $$
\end{The}
\begin{proof}
The observation immediately preceding the theorem shows that
the level of the left hand side is contained in the level of the right
hand side,
$$ \operatorname{EU}\Big(2n,R,
\big((I,\Gamma)+(J,\Delta)\big)\circ
\big((I,\Gamma)\cap(J,\Delta)\big)\Big)\le
\operatorname{EU}\big(2n,R,(I,\Gamma)\circ(J,\Delta)\big). $$
\par
Thus, it only remains to prove that the elementary commutators
$Y_{ij}(a+b,c)$, with $a\in(I,\Gamma)$, $b\in(J,\Delta)$,
$c\in(I,\Gamma)\cap(J,\Delta)$,
on the left hand side belong to the right hand side.
\par
By Theorem~\ref{symb}, one has
$$ Y_{ij}(a+b,c)\equiv Y_{ij}(a,c)\cdot Y_{ij}(b,c)
\pmod{\operatorname{EU}\big(2n,R,
((I,\Gamma)+(J,\Delta))\circ((I,\Gamma)\cap(J,\Delta))\big)}. $$
\noindent
Thus, this congruence holds also modulo the larger subgroup
$\operatorname{EU}(2n,R,(I,\Gamma)\circ(J,\Delta))$.
\par
On the other hand, Theorem~\ref{The:Op-2.1} implies that
$$ Y_{ij}(b,c)\equiv Y_{ij}(c,-b)
\pmod{\operatorname{EU}(2n,R,(I,\Gamma)\circ(J,\Delta))}. $$
\par
Combining the above congruences, we see that
$$ Y_{ij}(a+b,c)\equiv Y_{ij}(a,c)\cdot Y_{ij}(c,-b)
\pmod{\operatorname{EU}(2n,R,(I,\Gamma)\circ(J,\Delta))}, $$
\noindent
where both commutators on the right hand side belong to
$[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)]$, which proves the desired inclusion.
\end{proof}
\section{Final remarks}
Here we make some further observations concerning the context
of this work, state some unsolved problems, and reiterate
some further problems from \cite{RNZ1, RNZ5} which are still
pending.
\subsection{How we got here.} The study of birelative standard
commutator formulas goes back to the foundational work by
Hyman Bass \cite{Bass_stable}. As early successes one should
also mention important contributions by Alec Mason and Wilson
Stothers \cite{MAS3, Mason74, MAS1, MAS2} and by Hong You \cite{HongYou}.
Our own research in this direction started in 2008--2010 in the joint
works with Alexei Stepanov and Roozbeh Hazrat \cite{NVAS, RHZZ1, NVAS2} and was then continued in 2011--2017 in a series of our
joint works based on relative versions of localisation
methods,
in particular\footnote{At least three of our planned works of that
period, which were essentially completed by 2016, viz., the
general multiple commutator formula for $\operatorname{GL}(n,R)$, unitary
commutator width, and the analysis of the case $\operatorname{GU}(4,R,\Lambda)$,
still remain
unpublished.} \cite{RHZZ2, RNZ1, RNZ2, RNZ3, RNZ4, RNZ5}.
Simultaneously, Stepanov developed his universal localisation
and applied it to multiple commutator formulas and commutator
width, see \cite{Stepanov_nonabelian, stepanov10}. One can find a
systematic description of that stage of development in our
surveys and conference papers \cite{yoga-1, portocesareo, yoga-2, RNZ4}.
The present work is a natural extension of our more recent papers
\cite{NV18, NZ2, NV19, NZ1, NZ3, NZ6, NZ4}. It owes its
existence to the following two momentous observations, made
in October 2018 and in September 2019, respectively.
\par
In October 2018 the first author proved a special case of
Theorems~\ref{equality2} and \ref{unrelative} for the general linear group $\operatorname{GL}(n,R)$,
$n\ge 3$, over commutative rings, see \cite{NV18}. The
initial proof employed a version of decomposition of unipotents
\cite{ASNV} that was already used for a similar purpose
in his joint work with Alexei Stepanov \cite{NVAS}. The second
author then immediately observed that Theorem~\ref{equality2} implies the
first claim of Theorem~\ref{generators}, and that it should be possible to proceed conversely: first establish a version of Theorem~\ref{generators} by elementary calculations, and then derive Theorems~\ref{equality2} and \ref{unrelative}. This is exactly
what was done for Chevalley groups in our paper \cite{NZ2},
again over commutative rings.
\par
In July--September 2019 the first author was discussing bounded
generation of Chevalley groups in the function case with Boris Kunyavsky and Eugene Plotkin. One of
the tricks used in many published papers consisted in splitting
an elementary conjugate/elementary commutator and then reassembling it in a different position. We noticed that the same calculation of rolling
elementary conjugates to a different position appeared over
and over again in many different contexts:
\par\smallskip
$\bullet$ {\it Congruence subgroup problem.\/} In a preliminary mode
it was already present in the precursory article by Jens Mennicke
\cite{Mennicke},
and then already in full-fledged form in the epoch-making memoir
by Hyman Bass, John Milnor, and Jean-Pierre Serre
\cite{Bass_Milnor_Serre}, see the proof of
Theorem 5.4 therein.
\par\smallskip
$\bullet$ {\it Bounded generation\/}. Post factum, we discerned the
same calculation in the classical papers by David Carter, Gordon
Keller, and Oleg Tavgen \cite{CK83, Tavgen1990}, but we only
became aware of that while perusing a recent article by Bogdan Nica \cite{Nica}.
\par\smallskip
$\bullet$ In fact, Wilberd van der Kallen and Alexei Stepanov
\cite{vdK-group, Stepanov_calculus, Stepanov_nonabelian} use
a very similar calculation to reduce the generating sets of
relative elementary subgroups.
\par\smallskip\noindent
Here we have cited merely a handful of references. Retrospectively,
we spotted the same or very similar calculations in oodles of further
papers, but apparently it was hardly ever applied in the birelative context.
\par
At the end of September the first author used essentially the same
calculation\footnote{Simultaneously and independently exactly
the same calculation was applied by Andrei Lavrenov and Sergei
Sinchuk \cite{Lavrenov_Sinchuk} at
the level of $\operatorname{K}_2$.} to prove that when $R$ is commutative and
$n\ge 3$ the mixed relative commutator subgroup $[E(n,A),E(n,B)]$ is contained in another birelative group
$$ \operatorname{EE}(n,A,B)=\big\langle t_{ij}(c),\text{\ where\ }
c\in A,\ i<j,\text{\ and\ } c\in B,\ i>j\big\rangle, $$
\noindent
see \cite{NV19}, Theorem 3. Within a few days of intense correspondence we observed that everything works over arbitrary associative rings and can be further enhanced to yield
Theorems 1 and 5 for $\operatorname{GL}(n,R)$. This is done in \cite{NZ2}, and
soon thereafter in a more mature form, implying also Theorems 6,
7 and 8, in \cite{NZ3}.
\par
Morally, the present paper, and a parallel paper that addresses
the case of Chevalley groups \cite{NZ4}, are direct offsprings
of this development. However, technically these cases turned
out to be way more demanding, and we had to spend quite some
time to supply detailed proofs of all auxiliary results.
\subsection{Degree improvements.}
Of course, the first question that immediately occurs is
whether Theorem~\ref{T:7} holds also for $n=3$. For
{\it quasi-finite\/} rings this is indeed the case \cite{RNZ4},
and we are rather inclined to believe that the answer is positive in general.
\begin{Prob}
Prove that Lemma~$\ref{quadruple}$ and Theorem~$\ref{T:7}$
hold also for $n=3$.
\end{Prob}
Getting a proof in the same style as that of Lemma~\ref{triple}
seems to be highly non-trivial from a technical viewpoint. However,
the possibility to construct a counter-example appears even
more remote.
In the main body of the present paper we always assumed that
$n\ge 3$. Obviously, due to the exceptional behavior of the
orthogonal group $\operatorname{SO}(4,A)$, these results do not fully generalise
to the case $n = 2$. It is natural to ask whether the results
of the present paper hold also for the group $\operatorname{GU}(4,A,\Lambda)$.
However, this obviously fails in general without some strong
additional assumptions on the form ring and/or the form ideals.
Still, we believe they do generalise, provided
$\Lambda A+A\Lambda=A$, or the like. Known
results\footnote{Compare the work by Bak and the first author \cite{BV1}, and references therein.} clearly indicate both that this should be
possible, and that the analysis of the case $n = 2$ will be
considerably harder from a technical viewpoint than that
of the case $n\ge 3$.
\begin{Prob}\label{p4}
Generalise the results of the present paper to the group
$\operatorname{GU}(4,A,\Lambda)$, provided that
$\Lambda A+A\Lambda=A$, $\Gamma J+J\Gamma =I$,
$\Delta I+I\Delta=J$, or the like.
\end{Prob}
Actually, some 8 years ago we obtained various partial results
towards the relative standard commutator formula and all that for
$\operatorname{GU}(4,A,\Lambda)$, but even these results remain unpublished,
due to their fiercely technical character.
\subsection{Presentations and stability.}
As a counterpart to Theorem~\ref{T:9} we can ask whether the
stability map for this quotient is also injective. A natural approach
to this would be to tackle the following much more ambitious
project.
\begin{Prob}
Give a presentation of
$$ \big[\operatorname{EU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)\big]
/\operatorname{EU}(2n,A,(I,\Gamma)\circ(J,\Delta)) $$
\noindent
by generators and relations. Does this presentation depend on
$n\ge 3$?
\end{Prob}
In Theorems~\ref{The:Op-2.1} and \ref{symb} and Lemma~\ref{long-short} we have established some
of the relations among the elementary commutators modulo
$\operatorname{EU}(2n,A,(I,\Gamma)\circ(J,\Delta))$. However, easy arithmetic
examples show that this is not a defining set of relations, so that there
must be some further relations. Compare \cite{NZ2, NZ3, NZ6} for a
discussion of the similar problem for $\operatorname{GL}(n,A)$.
\subsection{Higher relations.} In \cite{NZ6} we established some
further congruences for the elementary commutators in
$\operatorname{GL}(n,A)$, $n\ge 3$, where $A$ is an arbitrary associative ring.
The highlight of that paper is the following remarkable triple
congruence, a version of the Hall--Witt identity.
Let $I,J,K$
be two-sided ideals of $R$. Then for any three distinct indices $i,j,h$
such that $1\le i,j,h\le n$, and all $a\in I$, $b\in J$, $c\in K$,
one has
$$ y_{ij}(ab,c)\, y_{jh}(ca,b)\, y_{hi}(bc,a)\equiv e
\pmod{E(n,R,IJK+JKI+KIJ)}, $$
\noindent
see \cite{NZ6}, Theorem 1. This identity has lots of applications,
including many new inclusions among double and multiple mixed
relative elementary commutator subgroups.
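To illustrate the shape of this congruence in the simplest situation (this specialisation is ours and is stated only for orientation), take $I=J=K$. Then $IJK+JKI+KIJ=I^3$, and the identity says that for all $a,b,c\in I$ and all distinct indices $i,j,h$ one has
$$ y_{ij}(ab,c)\, y_{jh}(ca,b)\, y_{hi}(bc,a)\equiv e \pmod{E(n,R,I^3)}. $$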
\par
Specifically, it allows one to solve the analogue of Problem 3 for
$\operatorname{GL}(n,A)$ in the particularly agreeable case of Dedekind rings.
Thus, it would be most natural to seek out similar higher
congruences in the unitary case as well.
\begin{Prob}
Generalise the results of \cite{NZ6} to the unitary groups
$\operatorname{GU}(2n,A,\Lambda)$, $n\ge 3$.
\end{Prob}
One such congruence among {\it short\/} root type elementary commutators is immediately clear. But the congruences involving
long root type elementary commutators will be fancier and longer.
\subsection{Other birelative groups.}
Let us briefly discuss two further groups depending on two form
ideals of a form ring. First of all, there is the partially relativised
group ${\operatorname{FU}(2n,I,\Gamma)}^{\operatorname{FU}(2n,J,\Delta)}$.
It seems that in view of the identity
$$ {\operatorname{FU}(2n,I,\Gamma)}^{\operatorname{FU}(2n,J,\Delta)}=
[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)]\cdot \operatorname{FU}(2n,I,\Gamma), $$
\noindent
our Theorem~\ref{generators} readily implies the following generalisation of
\cite{BV3}, Proposition 5.1, to
${\operatorname{FU}(2n,I,\Gamma)}^{\operatorname{FU}(2n,J,\Delta)}$.
Namely, we assert that it is generated by the appropriate
elementary conjugates.
\begin{Prob}\label{p7.1}
Prove that the partially relativised groups
${\operatorname{FU}(2n,I,\Gamma)}^{\operatorname{FU}(2n,J,\Delta)}$ are generated by
${}^{T_{ji}(b)}T_{ij}(a)$, where $a\in(I,\Gamma)$, $b\in(J,\Delta)$.
\end{Prob}
Another birelative group $\operatorname{EEU}(2n,(I,\Gamma),(J,\Delta))$ is
defined as follows:
$$ \operatorname{EEU}(2n,(I,\Gamma),(J,\Delta))=
\big\langle T_{ij}(c),\text{\ where\ } c\in(I,\Gamma),\ i<j,\text{\ and\ }
c\in(J,\Delta),\ i>j\big\rangle. $$
\par
The following problem proposes a unitary generalisation
of \cite{NV19}, Theorem 3, where a similar result was
established for $\operatorname{GL}(n,A)$.
\begin{Prob}\label{p6}
Prove that
$$ [\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)]\le
\operatorname{EEU}(2n,(I,\Gamma),(J,\Delta)). $$
\end{Prob}
\subsection{General multiple commutator formula.}
Let us now recall another major unsolved problem, as stated
already in \cite{RNZ1, RNZ4} and \cite{RNZ5}, Problem 1.
We propose to prove a general multiple commutator formula
for unitary groups.
\begin{Prob}\label{p9}
Let $(I_i,\Gamma_i)$, $0\le i\le m$, be form ideals of the form
ring $(A,\Lambda)$ such that $A$ is module-finite over a
commutative ring $R$ that has finite Bass--Serre dimension
$\delta(R)=d<\infty$. Prove that for any $m\ge d$ one has
\begin{multline*}
\big\llbracket\operatorname{GU}(2n,I_0,\Gamma_0),\operatorname{GU}(2n, I_1,\Gamma_1),\ldots,
\operatorname{GU}(2n,I_m,\Gamma_m)\big\rrbracket=\\
\big\llbracket\operatorname{EU}(2n,I_0,\Gamma_0),\operatorname{EU}(2n, I_1,\Gamma_1),\ldots,
\operatorname{EU}(2n, I_m,\Gamma_m)\big\rrbracket.
\end{multline*}
\end{Prob}
Observe that the arrangement of brackets in the above formula
should be the same on both sides, since mixed commutators are
not associative. A similar problem for algebraic groups over {\it commutative\/} rings, in particular for Chevalley groups, was solved
by Alexei Stepanov \cite{stepanov10}, by his remarkable
{\it universal localisation\/} method.
\par
Recall that the proof of a similar result for $\operatorname{GL}(n,R)$ over
{\it non-commutative\/} rings is based on the following result of Mason--Stothers \cite{MAS3}, Theorem 3.6 and Corollary 3.9;
see \cite{RNZ4}, Theorem 13, for an easy modern proof. Of
course, the fact that one can unrelativise the right hand side was only
established in \cite{NZ2}, Theorem 2, so formally this
theorem was never stated in the following form.
\begin{oldtheorem}
Let $A$ be a ring and let $I$ and $J$ be two two-sided
ideals of $A$. Assume that $n\ge\operatorname{sr}(A)$ and $n\ge 3$. Then
$$ [\operatorname{GL}(n,A,I),\operatorname{GL}(n,A,J)] = [E(n,I),E(n,J)]. $$
\end{oldtheorem}
For unitary groups, even such basic facts at the stable level
seem to be missing.
\begin{Prob}\label{p8}
Find appropriate stability conditions under which
$$ [\operatorname{GU}(2n,I,\Gamma),\operatorname{GU}(2n,J,\Delta)]=
[\operatorname{FU}(2n,I,\Gamma),\operatorname{FU}(2n,J,\Delta)]. $$
\end{Prob}
After that, the proof in our unpublished paper proceeds
by induction on $d$; it depends on Bak's results \cite{Bak},
on precise forms of injective stability for $\operatorname{K}_1$, such as the
Bass--Vaserstein theorem, etc. It seems that to solve Problem 7
one has to rethink and expand many aspects of the structure theory
of unitary groups, starting with stability theorems for $\operatorname{K}U_1$.
The first complete\footnote{In the late 1960s
and mid 1970s Anthony Bak
and Manfred Kolster obtained stability under stronger assumptions,
with very sketchy proofs. Leonid Vaserstein worked in smaller
generality as far as the groups are concerned, and his proof of injective stability for
unitary groups
contained serious gaps and inaccuracies. In 1980 Mamed-Emin Oglu Namik Mustafa-Zadeh announced surjective stability for $\operatorname{K}U_2$ ---
and thus also injective stability for $\operatorname{K}U_1$ ---
in full generality. However, a complete proof was never published,
and the exposition in his 1983 Ph.~D.~Thesis is blurred by serious mistakes.} generally accepted proof of injective stability
for $\operatorname{K}U_1$ was obtained (but not published!)~by Maria
Saliani \cite{Saliani}, and first published by Max Knus in his book
\cite{knus}. After that, generalisations and improvements were
proposed by Anthony Bak, Guoping Tang, Victor Petrov,
and Sergei Sinchuk \cite{BT, BPT, Sinchuk}, and then very
recently by Weibo Yu, Rabeya Basu and Egor Voronetsky
\cite{Yu, Basu18, Voron-2}.
\par
Problem 7 is also intimately related to the nilpotent structure
of $\operatorname{K}U_1$. In the absolute case the corresponding results for
unitary groups were obtained by Roozbeh Hazrat in his
Ph.~D.~Thesis \cite{RH, RH2}, and in the
relative case in a joint paper by Bak, Hazrat and the first
author \cite{BHV}. To fully cope with Problem 7, we need more
powerful results on the superspecial unitary groups than what
was established in \cite{BHV}. Part of what is needed here
was recently established by Weibo Yu, Guoping Tang and Rabeya
Basu \cite{Yu-Tang, Basu16}, but there is still a lot of work to be
done.
\subsection{Subnormal subgroups.}
Initially, one of our main motivations to pursue the work
on birelative commutator formulas was the prospective
application to the study of subnormal subgroups of
$\operatorname{GU}(2n,A,\Lambda)$. As was observed by John Wilson
\cite{Wilson}, technically this amounts
to the description of subgroups of $\operatorname{GU}(2n,A,\Lambda)$
normalised by a relative elementary subgroup $\operatorname{EU}(2n,J,\Delta)$,
for {\it some\/} form ideal $(J,\Delta)$.
\par
A major early contribution is due to Günter Habdank \cite{Ha1, Ha2},
who additionally assumed that the form ring was subject to some stability conditions.
Definitive results for quasi-finite rings were then obtained by the second author and You Hong \cite{ZZ, ZZ1, ZZ2, You_subnormal}.
However, we are convinced that the bounds in these papers can be further improved, and we hope to return to the following problem with
our new tools.
\begin{Prob} Obtain optimal bounds in the description of subgroups
of $\operatorname{GU}(2n,A,\Lambda)$ normalised by the relative elementary
subgroup $\operatorname{EU}(2n,J,\Delta)$, for a form ideal
$(J,\Delta)\trianglelefteq(A,\Lambda)$.
\end{Prob}
Until recently, for the unitary groups the proofs of structure theorems
were in bad shape even in the absolute case\footnote{As indicated in \cite{RN}, the proof in the work by Leonid Vaserstein and Hong You \cite{VY} contained a major omission, and only established the
{\it weak\/} structure theorem. The details of the purported global
proof by Bak and the first author, which had been around since the early
1990s, and which was harbingered in \cite{BV3}, remained unpublished.}. However, now the situation has changed. In 2013
Hong You and
Xuemei Zhou \cite{You-Zhou} published a detailed proof for
commutative form rings. Finally, in 2014 Raimund Preusser in
his Ph.~D.~Thesis \cite{Preusser-thesis} gave a first complete
{\it localisation proof\/} for quasi-finite form rings, which is
published in \cite{preusser-1}.
\par
In 2017 Raimund Preusser \cite{preusser-2, preusser-3} also
finally succeeded in completing a {\it global proof\/} as
envisaged in \cite{BV3}. These papers constitute a major
breakthrough since, at least for commutative rings, they give
explicit polynomial expressions of non-trivial transvections as
products of elementary conjugates of a given matrix and its inverse.
(See also \cite{preusser-4, preusser-6} for further results in this
spirit for $\operatorname{GL}(n,A)$ over various classes of non-commutative
rings.)
The first author immediately recognised that the results
by Preusser procure an effectivisation of the description of
normal subgroups, in much the same sense as the {\it
decomposition of unipotents\/} \cite{ASNV} does for the
normality of the elementary subgroup. This prompted him
to call this method {\it reverse decomposition of unipotents}
\cite{NV-reverse}. Moreover, he noticed that in the case
of $\operatorname{GL}(n,A)$ these results can be generalised (with only
marginally worse bounds) to the description of subgroups
normalised by a {\it relative\/} elementary subgroup
\cite{NV-reverse-2}.
\par
We are confident that, combining the methods developed by
Preusser in the above papers with our methods, we could
easily improve the bounds in all published results for unitary
groups. Of course, to prove that the bounds thus obtained
are themselves the best possible ones would be quite a
challenge.
\subsection{Commutator width.}
Another related problem that initially motivated our work
is the study of commutator width.
Alexander Sivatsky and Alexei Stepanov
\cite{SiSt} discovered that over rings of finite Jacobson
dimension $\operatorname{j-dim}(A)=d<\infty$ any commutator $[x,y]$,
where $x\in\operatorname{GL}(n,A)$, $y\in E(n,A)$, is a product of $\le L$
elementary generators, where $L=L(n,d)$ only depends on
$n$ and $d$. This result was then generalised to all Chevalley
groups $G(\Phi,A)$ by Stepanov and the first author \cite{SV11},
with the bound depending on the type $\Phi$ and on $d$.
\par
Ultimately, Stepanov discovered that for {\it reductive groups}
similar results hold over {\it arbitrary\/} commutative rings and
that the bound $L$ therein depends on the type of the group
alone, and not on the ring $A$. Also,
he discovered that similar results hold at the relative and birelative
level, with elementary conjugates and our generators (like those
in Theorem B) as the
generating sets of $[E(\Phi,A,I),E(\Phi,A,J)]$, again with bounds
that depend on the type alone, and not on $A$, $I$ or $J$. See \cite{portocesareo}
for statements and a detailed discussion of these results.
\par
However, Bak's unitary groups are not always algebraic, and similar
results on commutator width are not yet published even in
the absolute case and even over finite-dimensional rings.
\begin{Prob}
Let $(A,\Lambda)$ be a commutative form ring such that
$\operatorname{j-dim}(A)<\infty$. Prove that the length of commutators in
$[\operatorname{GU}(2n,I,\Gamma),\operatorname{EU}(2n,J,\Delta)]$, for form ideals $(I,\Gamma)$ and $(J,\Delta)$ of $(A,\Lambda)$, in terms of the generators listed in
Theorem~$\ref{generators}$ is bounded, and estimate this length.
\end{Prob}
Alexei Stepanov maintained that the above length is bounded in
the absolute case, without actually producing any specific bound.
To obtain an exponential bound depending on $d$ by relative
localisation methods \cite{RNZ1, RNZ5, RNZ4} would be simply
a matter of patience. Actually, this was essentially done by ourselves
and Roozbeh Hazrat, but even
in the absolute case all of this still remains unpublished.
\par
On the other hand, to achieve a {\it uniform\/} polynomial bound,
similar to the one established in \cite{SiSt} for $\operatorname{GL}(n,A)$ but not
depending on $d$, one would need to combine a full-scale
generalisation of Stepanov's universal localisation to unitary
groups, with full-scale unitary versions of decomposition
of unipotents, including explicit polynomial formulae
for the conjugates of root unipotents. This seems to be a rather
ambitious project.
\subsection{Unitary Steinberg groups.}
It is natural to ask to which extent our methods and results
carry over to the level of $\operatorname{K}U_2$.
\begin{Prob}
Prove analogues of the main results of the present paper
for the unitary Steinberg groups $\operatorname{St}U(2n,A,\Lambda)$.
\end{Prob}
For the definition of unitary Steinberg groups see
\cite{B2, lavrenov} and references there (or \cite{lavrenov-bis}
for odd unitary Steinberg groups). Here, we do not discuss the
subtleties related to the definition of relative unitary Steinberg
groups, as well as the relation to excision in unitary algebraic $K$-theory,
etc.
\subsection{Description of subgroups.} The methods of the
present paper can have applications also in the description of
various classes of subgroups of unitary groups. Not being in a
position to discuss this in any depth here, we just cite
the works by Victor Petrov, Alexander Shchegolev and
Egor Voronetsky \cite{petrov1, Shchegolev-thesis, Shchegolev-1,
Shchegolev-2, Voron-1}, where one can find many further
references. Observe that the result by Voronetsky \cite{Voron-1}
is especially powerful, since it simultaneously generalises also
the description of $\operatorname{EU}$-normalised subgroups (in the context
of odd unitary groups!).
\subsection{Odd unitary groups.}
Finally, we are positive that all results of the present paper
generalise also to the odd unitary groups introduced by
Victor Petrov \cite{petrov2, petrov3}.
\begin{Prob}\label{p7}
Generalise the results of \cite{RNZ1, RNZ3, RNZ4} and the
present paper to odd unitary groups, under suitable isotropy
assumptions.
\end{Prob}
Of course, this is not an individual clear-cut problem, but
rather a huge research project. Clearly, in most cases the
proofs in this setting will require much more onerous calculations.
Let us cite some important recent papers by Yu Weibo,
Tang Guoping, Li Yaya, Liu Hang, Anthony Bak, Raimund Preusser
and Egor Voronetsky \cite{Yu-Tang, Yu_Li_Liu, BP, preusser-5,
Voron-1, Voron-2} that address normal structure and stability
for odd unitary groups.
\subsection{Acknowledgements.}
We thank Anthony Bak, Roozbeh Hazrat and Alexei Stepanov
for long-standing close cooperation on this type of problems
over the last decades. The present paper gradually evolved to
its current shape between December 2018 and March 2020.
The first author thanks Boris Kunyavsky and Eugene Plotkin
for the ongoing
discussion and comparison of the
existing proofs of the congruence subgroup problem and
bounded generation in terms of elementaries. The round of
these deliberations that took place on September 16,
2019, first in ``Biblioteka Cafe'', and then in ``Manneken Pis''
on Kazanskaya, was especially fateful for \cite{NV19} and all subsequent development. We thank Pavel Gvozdevsky,
Andrei Lavrenov, Sergei Sinchuk and Anastasia Stavrova
for their very pertinent questions and comments.
We are extremely grateful also to Fan Huijun for his
friendly support. In particular, he organised a visit of the first
author to Peking University in December 2019, which gave us
an excellent opportunity to coordinate our vision.
\betaegin{thebibliography}{10}
\betaibitem{B1} A.~Bak, \varepsilonmph{The stable structure of quadratic modules.}
Thesis, Columbia University, 1969.
\betaibitem{B2} A.~Bak, \tauextit{K-Theory of Forms}. Annals of
Mathematics Studies {\betaf 98}, Princeton University Press.
Princeton, 1981.
\betaibitem{Bak}
A.~Bak,
\varepsilonmph{Non-abelian $\operatorname{K}$-theory: The
nilpotent class of $\operatorname{K}_1$ and general stability},
$K$--Theory \tauextbf{4} (1991), 363--397.
\betaibitem{BHV}
A.~Bak, R.~Hazrat, N.~Vavilov,
\varepsilonmph{Localization-completion strikes again: relative
$K_1$ is nilpotent by abelian,}
J. Pure Appl. Algebra, \tauextbf{213} (2009), 1075–1085.
\betaibitem{BPT}
A.~Bak, V.~Petrov, Guoping Tang,
\varepsilonmph{Stability for quadratic $K_1$,}
J. K-Theory \tauextbf{30} (2003), 1--11.
\betaibitem{BP} A.~Bak, R.~Preusser.
\varepsilonmph{The $E$-normal structure of odd-dimensional unitary groups.}
{J. Pure Appl. Algebra}, {\betaf 222} (2018), no.~9, 2823--2880.
\betaibitem{BT}
A.~Bak, Guoping Tang,
\varepsilonmph{Stability for Hermitian $K_1$,}
J. Pure Appl. Algebra \tauextbf{150} (2000), no.~2, 107--121.
\betaibitem{BV1} A.~Bak, N.~Vavilov,
\varepsilonmph{Normality for elementary subgroup functors. }
Math. Proc. Cambridge Philos. Soc. \tauextbf{118} (1995), no.~1, 35--47.
\betaibitem{BV3} A.~Bak, N.~Vavilov, \varepsilonmph{Structure of hyperbolic unitary
groups I: elementary subgroups.} {Algebra Colloquium},
\tauextbf{7} (2000), no.~2, 159--196.
\betaibitem{Bass_stable}
H.~Bass, \varepsilonmph{$\operatorname{K}$-theory and stable algebra,}
Inst. Hautes \'Etudes Sci. Publ. Math. (1964), no.~22, 5--60.
\betaibitem{bass73} H.~Bass, \varepsilonmph{Unitary algebraic $K$-theory.}
{Lecture Notes Math.}, {\betaf343} (1973), 57--265.
\betaibitem{Bass_Milnor_Serre}
H.~Bass, J.~Milnor, J.-P.~Serre,
\varepsilonmph{Solution of the congruence subgroup problem for
$\operatorname{SL}_n$ $(n\gammae3)$ and $\operatorname{Sp}_{2n}$ $(n\gammae2)$,}
Inst. Hautes \'Etudes Sci. Publ. Math. \tauextbf{33} (1967)
59--133.
\betaibitem{Basu16}
R.~Basu,
\varepsilonmph{Local-global principle for general quadratic and general
hermitian groups and the nilpotency of $\operatorname{K}H_1$.}
J. Math. Sci. (N.Y.) \tauextbf{232} (2018), no.~5, 591--609.
\betaibitem{Basu18}
R.~Basu,
\varepsilonmph{A note on general quadratic groups,}
J. Algebra Appl. \tauextbf{17} (2018), no.~11, 1850217, 13 pp.
\betaibitem{CK83}
D.~Carter, G.~E.~Keller,
{\it Bounded elementary generation of\/ $\operatorname{SL}_n({ \muathcal O})$},
Amer. J. Math. {\betaf 105} (1983), 673--687.
\betaibitem{Gerasimov}
V.~N.~Gerasimov,
\varepsilonmph{Group of units of a free product of rings},
Math. U.S.S.R. Sb., \tauextbf{134} (1989), no.~1, 42--65.
\betaibitem{Ha1} G.~Habdank, {\it Mixed commutator groups
in classical groups and a classification of subgroups of classical
groups normalized by relative elementary groups.}
Doktorarbeit Uni. Bielefeld, 1987, 1--71.
\betaibitem{Ha2} G.~Habdank, \varepsilonmph{A classification of subgroups of
$\Lambda$-quadratic groups normalized by relative elementary groups.}
{Adv. Math.}, {\betaf 110} (1995), 191--233.
\betaibitem{HO} A.~J.~Hahn, O.~T.~O'Meara. \tauextit{The classical
groups and $\operatorname{K}$-{theory}}. Springer Verlag, Berlin et al., 1989.
\betaibitem{RH} R.~Hazrat, \varepsilonmph{Dimension theory and nonstable $K_1$ of
quadratic modules.} {K-Theory}, \tauextbf{27} (2002), 293--328.
\betaibitem{RH2} R.~Hazrat, {\it On\/ $K$-theory of classical-like groups}.
Doktorarbeit Uni. Bielefeld, 2002, 1--62.
\betaibitem{yoga-1} R.~Hazrat, A.~Stepanov, N.~Vavilov, Z.~Zhang,
\varepsilonmph{The yoga of commutators.} {J. Math.~Sci.}, {\betaf 387} (2011), 53--82.
\betaibitem{yoga-2} R.~Hazrat, A.~Stepanov, N.~Vavilov, Z.~Zhang,
\varepsilonmph{The yoga of commutators, further applications.}
{J. Math.~Sci.}, {\betaf 200} (2014), 742--768.
\betaibitem{portocesareo} R.~Hazrat, A.~Stepanov, N.~Vavilov, Z.~Zhang,
\varepsilonmph{Commutator width in Chevalley groups.}
{Note di Matematica}, {\betaf 33} (2013), 139--170.
\betaibitem{RN1} R.~Hazrat, N.~Vavilov,
\varepsilonmph{$K_1$ of Chevalley groups are nilpotent.} {J. Pure Appl. Algebra}, {\betaf179} (2003), 99--116.
\betaibitem{RN} R.~Hazrat, N.~Vavilov, \varepsilonmph{Bak's work on the
$K$-theory of rings, with an appendix by Max Karoubi.}
{J. $K$-Theory}, {\betaf 4} (2009), 1--65.
\betaibitem{RNZ1}R.~Hazrat, N.~Vavilov, Z.~Zhang, \varepsilonmph{Relative
unitary commutator calculus and applications.} {J. Algebra},
{\betaf 343} (2011), 107--137.
\betaibitem{RNZ2} R.~Hazrat, N.~Vavilov, Z.~Zhang,
\varepsilonmph{Relative commutator calculus in Chevalley groups.}
{J. Algebra}, {\betaf 385} (2013), 262--293.
\betaibitem{RNZ3} R.~Hazrat, N.~Vavilov, Z.~Zhang, \varepsilonmph{Generation of relative commutator subgroups in Chevalley groups.}
{Proc. of the Edinburgh Math. Soc.}, {\betaf 59} (2016), 393--410.
\betaibitem{RNZ4} R.~Hazrat, N.~Vavilov, Z.~Zhang, \varepsilonmph{The commutators of classical groups.} {J. Math. Sci.},
{\betaf 222} (2017), 466--515.
\betaibitem{RNZ5} R.~Hazrat, N.~Vavilov, Z.~Zhang, \varepsilonmph{Multiple commutator formulas for unitary groups.} {Israel J. Math.}, {\betaf 219} (2017), 287--330.
\betaibitem{RHZZ1} R.~Hazrat, Z.~Zhang, \varepsilonmph{Generalized commutator
formulas.} {Comm. Algebra}, {\betaf 39} (2011), 1441--1454.
\betaibitem{RHZZ2} R.~Hazrat, Z.~Zhang, \varepsilonmph{Multiple commutator formulas.}
{Israel J. Math.}, {\betaf 195} (2013), 481--505.
\betaibitem{vdK-group}
W.~van der Kallen,
\tauextit{A group structure on certain orbit sets of unimodular rows,}
J. Algebra \tauextbf{82} (1983), 363--397.
\betaibitem{knus} M.-A.~Knus, \tauextit{Quadratic and hermitian forms
over rings}. Springer Verlag, Berlin et al., 1991.
\betaibitem{lavrenov} A.~V.~Lavrenov,\varepsilonmph{ The unitary Steinberg group is
centrally closed.} {St.~Petersburg Math. J.}, {\betaf 24} (2013), 783--794.
\betaibitem{lavrenov-bis} A.~V.~Lavrenov,
\varepsilonmph{On odd unitary Steinberg group.}
arXiv:1303.6318v1 [math.KT] 25 Mar 2013, 1--17.
\betaibitem{Lavrenov_Sinchuk}
A.~Lavrenov, S.~Sinchuk
\tauextit{A Horrocks-type theorem for even orthogonal\/ $\operatorname{K}_2$,}
arXiv:1909.02637 v1 [math.GR] 5 Sep 2019, pp.~1--23.
\betaibitem{Mason74} A.~W.~Mason, \varepsilonmph{A note on subgroups
of\/ $\operatorname{GL}(n,A)$ which are generated by commutators.}
{J. London Math. Soc}, {\betaf 11} (1974), 509--512.
\betaibitem{MAS1} A.~W.~Mason, \varepsilonmph{On subgroup of $\operatorname{GL}(n,A)$ which are
generated by commutators, II.} {J. reine angew. Math.},
{\betaf 322} (1981), 118--135.
\betaibitem{MAS2} A.~W.~Mason, \varepsilonmph{A further note on subgroups
of $\operatorname{GL}(n,A)$ which are generated by commutators.}
{Arch. Math.}, {\betaf 37} (1981)(5) 401--405.
\betaibitem{MAS3} A.~W.~Mason, W.W. Stothers, \varepsilonmph{On subgroup
of $\operatorname{GL}(n,A)$ which are generated by commutators.}
{Invent. Math.}, {\betaf 23} (1974), 327--346.
\betaibitem{Mennicke}
J.~L.~Mennicke,
\varepsilonmph{Finite factor groups of the unimodular group,}
Ann.~Math., \tauextbf{81} (1965), 31--37.
\betaibitem{Nica}
B.~Nica,
\varepsilonmph{A true relative of Suslin's normality theorem,}
Enseign.\ Math., \tauextbf{61} (2015), no.~1--2, 151--159.
\betaibitem{Nica2018}
B.~Nica,
\varepsilonmph{On bounded elementary generation for $\operatorname{SL}_n$ over
polynomial rings},
Israel J. Math. \tauextbf{225} (2018), no.~1, 403--410.
\betaibitem{petrov1} V.~Petrov, \varepsilonmph{Overgroups of unitary groups.}
{$K$-Theory}, \tauextbf{29} (2003), pp 147--174.
\betaibitem{petrov2} V.~A.~Petrov, \varepsilonmph{Odd unitary groups.}
{J. Math. Sci.}, {\betaf 130} (2003), no. 3, 4752--4766.
\betaibitem{petrov3} V.~A.~Petrov, {\it Overgroups of classical groups},
Doktorarbeit, State Univ. St.-Petersburg, 2005, 1--129 (in Russian).
\betaibitem{Preusser-thesis}
R.~Preusser,
\varepsilonmph{The normal structure of hyperbolic unitary groups,}
Doktorarbeit, Uni. Bielefeld, 2014, 1--82,
available online at https://pub.uni-bielefeld.de/record/2701405
\betaibitem{preusser-1} R.~Preusser,
\varepsilonmph{Structure of hyperbolic unitary groups II: classification of
$E$-normal subgroups.}
{Algebra Colloq.}, {\betaf 24} (2017), no.~2, 195--232.
\betaibitem{preusser-2} R.~Preusser,
\varepsilonmph{Sandwich classification for $\operatorname{GL}_n(R)$, $O_{2n}(R)$ and
$U_{2n}(R,\Lambda)$ revisited.}
{J. Group Theory}, {\betaf 21} (2017), 21--44.
\betaibitem{preusser-3} R.~Preusser,
\varepsilonmph{Sandwich classification for $O_{2n+1}(R)$ and $U_{2n+1}(R,\operatorname{D}elta)$ revisited.}
{J. Group Theory}, {\betaf 21} (2018), 539--571.
\betaibitem{preusser-4} R.~Preusser,
\varepsilonmph{Reverse decomposition of unipotents over
noncommutative rings I: General linear groups.}
arXiv:1912.03536 [math.RA], 7 Dec 2019, 1--15.
\betaibitem{preusser-5} R.~Preusser,
\varepsilonmph{The $E$-normal structure of Petrov’s odd unitary groups over
commutative rings,}
Comm. Algebra, \tauextbf{48} (2020), no~3, 1--18.
\betaibitem{preusser-6} R.~Preusser,
\varepsilonmph{On general linear groups over exchange rings.}
Linear Multilinear Algebra, 2020, DOI: 10.1080/03081087.2020.1743636.
\betaibitem{Saliani}
M. Saliani, \varepsilonmph{On the stability of the unitary group,}
https://people.math.ethz.ch/\~{}knus/papers/ Maria\_Saliani.pdf,
1--12.
\betaibitem{Shchegolev-thesis}
A.~Shchegolev,
\varepsilonmph{Overgroups of Elementary Block-Diagonal Subgroups in Even Unitary Groups over Quasi-Finite Rings,}
Doktorarbeit, Uni. Bielefeld, 2015,
available online at http://pub.uni-bielefeld.de/publication/2769055.
\betaibitem{Shchegolev-1}
A.~V.~Shchegolev,
\varepsilonmph{Overgroups of block-diagonal subgroups of a hyperbolic unitary group over a quasifinite ring: main results.}
J. Math. Sci. (N.Y.), \tauextbf{222} (2017), no.~4, 516--523.
\betaibitem{Shchegolev-2}
A.~V.~Shchegolev,
\varepsilonmph{Overgroups of an elementary block-diagonal subgroup of the classical symplectic group over an arbitrary commutative ring.}
St. Petersburg Math. J., \tauextbf{30} (2019), no.~6, 1007--1041.
\betaibitem{Sinchuk}
S.~Sinchuk,
\varepsilonmph{Injective stability for unitary $K_1$, revisited,}
J.~K-Theory \tauextbf{11} (2013), no.~2, 233--242.
\betaibitem{SiSt}
A.~Sivatski, A.~Stepanov, \varepsilonmph{On the word length of
commutators in\/ $\operatorname{GL}_n(R)$,}
{$K$-theory}, {\betaf 17} (1999), 295--302.
\betaibitem{Stepanov_calculus}
A.~Stepanov, \varepsilonmph{Elementary calculus in Chevalley groups over rings,}
{J.~Prime Res.\ Math.}, \tauextbf{9} (2013), 79--95.
\betaibitem{Stepanov_nonabelian}
A.~V.~Stepanov,
\varepsilonmph{ Non-abelian $\operatorname{K}$-theory for Chevalley groups over rings,} {J.~Math.\ Sci.}, \tauextbf{209} (2015), no.~4, 645--656.
\betaibitem{stepanov10} A.~Stepanov,
\varepsilonmph{Structure of Chevalley groups over rings via universal localization,}
J. Algebra, \tauextbf{450} (2016), 522--548.
\betaibitem{ASNV} A.~Stepanov, N.~Vavilov,
\varepsilonmph{Decomposition of transvections\/{\ranglem :} A theme with variations,}
$K$-Theory, \tauextbf{19} (2000), 109--153.
\betaibitem{SV11} A.~Stepanov, N.~Vavilov,
\varepsilonmph{On the length of commutators in Chevalley groups,}
{Israel J. Math.} \tauextbf{185} (2011), 253--276.
\betaibitem{tang} Tang Guoping,
\varepsilonmph{Hermitian groups and $K$-theory.}
{$K$-Theory}, {\betaf 13} (1998), no.~3, 209--267.
\betaibitem{Tavgen1990}
O.~I.~Tavgen,
\varepsilonmph{Bounded generation of Chevalley groups over rings
of\/ $S$-integer algebraic numbers},
Izv. Acad. Sci. USSR, {\betaf 54} (1990), no.1, 97--122.
\betaibitem{VY} L.~N.~Vaserstein, Hong You,
\varepsilonmph{Normal subgroups of
classical groups over rings.} {J. Pure Appl. Algebra},
{\betaf 105} (1995), 93--105.
\betaibitem{NV-reverse} N.~Vavilov,
\varepsilonmph{Towards the reverse decomposition of unipotents.}
{J.~Math.\ Sci.}, {\betaf 470} (2018), 21--37.
\betaibitem{NV18} N.~Vavilov, \varepsilonmph{Unrelativised standard commutator
formula,} {J.~Math.\ Sci.}, {\betaf 470} (2018), 38--49.
\betaibitem{NV19} N.~Vavilov,
\varepsilonmph{Commutators of congruence subgroups
in the arithmetic case,} { J.~Math.\ Sci.}, {\betaf 479} (2019), 5--22.
\betaibitem{NV-reverse-2} N.~Vavilov,
\varepsilonmph{Towards the reverse decomposition of unipotents. II. The relative case}
{ J.~Math.\ Sci.}, {\betaf 484} (2019), 5--22.
\betaibitem{NVAS} N.~A.~Vavilov, A.~V.~Stepanov,
\varepsilonmph{Standard commutator
formula.}
{Vestnik St. Petersburg State Univ., ser.~$1$}
\tauextbf{41} (2008), no.~1, 5--8.
\betaibitem{NVAS2} N.~A.~Vavilov, A.~V.~Stepanov,
\varepsilonmph{Standard commutator formula, revisited.}
{Vestnik St. Petersburg State Univ., ser.~$1$},
{\betaf 43} (2010), no.~1, 12--17.
\betaibitem{NZ2} N.~Vavilov, Z.~Zhang,
\varepsilonmph{Commutators of relative and unrelative elementary groups,
revisited,}
J. Math. Sci. \tauextbf{485} (2019), 58--71.
\betaibitem{NZ1} N.~Vavilov, Z.~Zhang,
\varepsilonmph{Generation of relative commutator subgroups in Chevalley groups. {\ranglem II},}
Proc. Edinburgh Math. Soc., \tauextbf{} (2020), 1--15,
doi:10.1017/S0013091519000555.
\betaibitem{NZ3} N.~Vavilov, Z.~Zhang,
\varepsilonmph{Multiple commutators of elementary subgroups\/{\ranglem:}
end of the line,}
Linear Algebra Applic., \tauextbf{} (2019), 1--14.
\betaibitem{NZ6} N.~Vavilov, Z.~Zhang,
\varepsilonmph{Inclusions among commutators of elementary subgroups,}
J.~Algebra, \tauextbf{} (2019), 1--25.
\betaibitem{NZ4} N.~Vavilov, Z.~Zhang,
\varepsilonmph{Commutators of relative and unrelative elementary subgroups
in Chevalley groups,}
Proc. Edinburgh Math. Soc., \tauextbf{} (2020), 1--19.
\betaibitem{Voron-1}
E.~Yu.~Voronetsky,
\varepsilonmph{Groups normalized by the odd unitary group,}
Algebra i Analiz \tauextbf{31} (2019), no.~6, 38--78.
\betaibitem{Voron-2} E.~Voronetsky,
\varepsilonmph{Stability for odd unitary $K_1$,}
arXiv:1909.03254v1 [math.GR] 7 Sep 2019, 1--39.
\betaibitem{Wilson}
J.~S.~Wilson,
\varepsilonmph{The normal and subnormal structure of general linear groups,}
Proc. Camb. Philos. Soc. \tauextbf{71} (1972), 163--177.
\betaibitem{HongYou}
You Hong,
\varepsilonmph{On subgroups of Chevalley groups which are
generated by commutators,}
J.~Northeast Normal Univ., (1992), no.~2, 9--13.
\betaibitem{You_subnormal} You Hong,
\varepsilonmph{Subgroups of classical groups normalized by relative
elementary groups,}
J. Pure Appl. Algebra \tauext{216} (2012), no.~5, 1040--1051.
\betaibitem{You-Zhou}
You Hong, Zhou Xuemei,
\varepsilonmph{The structure of quadratic groups over commutative
rings,}
Sci. China Math. \tauextbf{56} (2013), no.~11, 2261--2272.
\betaibitem{Yu}
Yu Weibo,
\varepsilonmph{Stability for odd unitary $K_1$ under the
$\Lambda$-stable range condition,}
J. Pure Appl. Algebra \tauextbf{217} (2013), no.~5, 886--891.
\betaibitem{Yu-Tang}
Yu Weibo, Tang Guoping
\varepsilonmph{Nilpotency of odd unitary $K_1$-functor,}
Comm. Algebra \tauextbf{44} (2016), no.~8, 3422--3453.
\betaibitem{Yu_Li_Liu}
Yu Weibo, Li Yaya, Liu Hang,
\varepsilonmph{A classification of subgroups of odd unitary groups,}
Comm. Algebra \tauext{46} (2018), no~9, 3795--3805.
\betaibitem{ZZ}
Zhang Zuhong,
\varepsilonmph{Lower K-theory of unitary groups,}
Doktorarbeit, Queen's Univ. Belfast, 2007, pp. 1--67.
\betaibitem{ZZ1}
Zhang Zuhong,
\varepsilonmph{Stable sandwich classification theorem for classical-like
groups,} Math. Proc. Camb. Philos. Soc.
\tauextbf{143} (2007), 607--619.
\betaibitem{ZZ2}
Zhang Zuhong,
\varepsilonmph{Subnormal structure of non-stable unitary groups over rings,}
J. Pure Appl. Algebra \tauextbf{214} (2010), no.~5, 622--628.
\varepsilonnd{thebibliography}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
In this paper, we consider a natural question: how many minimal rational curves are needed to join two general points on a Fano manifold $X$ of Picard number $1$? In particular, we study the minimal length of such chains in the cases where the dimension of $X$ is at most $5$, the coindex of $X$ is at most $3$, and $X$ is equipped with a structure of a double cover. As an application, we give a better bound on the degree of Fano $5$-folds of Picard number $1$.
\end{abstract}
\section{Introduction}
For a Fano manifold, a {\it minimal rational component} $\mathscr{K}$ is defined to be a dominating irreducible component of the normalization of the parameter space of rational curves whose degree is minimal among such components, and a {\it variety of minimal rational tangents} is the parameter space of the tangent directions of $\mathscr{K}$-curves at a general point. Nowadays these two objects often appear in the study of Fano manifolds \cite{Hw2,KeS}. On the other hand, chains of rational curves also play an important role in the field. For instance, Koll\'ar-Miyaoka-Mori \cite{KMM} and Nadel \cite{Na} independently showed the boundedness of the degree of Fano manifolds of Picard number $\rho=1$ by using chains of rational curves.
From these viewpoints, it is a natural question how many rational curves in the family $\mathscr{K}$ are needed to join two general points. We denote by $l_{\mathscr{K}}$ the minimal length of such chains of general $\mathscr{K}$-curves.
In this direction, Hwang and Kebekus \cite{HK} developed an infinitesimal method to study the lengths of Fano manifolds via the varieties of minimal rational tangents. They also dealt with some examples where the varieties of minimal rational tangents and their secant varieties are simple, such as complete intersections, Hermitian symmetric spaces and homogeneous contact manifolds. Furthermore the following was obtained.
\begin{them}[\cite{HK,IR}]\label{bound} Let $X$ be a prime Fano $n$-fold of $\rho=1$. If the Fano index $i_X$ satisfies $n+1>i_X > \frac{2}{3}n$, then $l_{\mathscr{K}}=2$.
\end{them}
A Fano manifold is {\it prime} if the ample generator of the Picard group is very ample. Our original motivation for this paper is to compute the lengths of Fano manifolds of coindex $\leq 3$. By the above theorem, it is sufficient to consider the cases where $n \leq 5$, $(n,i_X)=(6,4)$, or $X$ is non-prime. Remark that non-prime Fano manifolds of coindex $\leq 3$ admit double cover structures \cite{F1,F2,F3,Muk,Me}.
First we show the following by using the method of Hwang and Kebekus (precise definitions of the notation are given in Sections~$2$ and $4$):
\begin{them}[Theorem~\ref{length2}, Theorem~\ref{length3}]\label{MT1} Let $X$ be a Fano $n$-fold of $\rho=1$, $\mathscr{K}$ a minimal rational component of $X$ and $p+2$ the anti-canonical degree of rational curves in $\mathscr{K}$. Then if $p=n-3>0$, we have $l_{\mathscr{K}}=2$, and if $(n,p)=(5,1)$, we have $l_{\mathscr{K}}=3$.
\end{them}
By combining this theorem with well-known or easy arguments, we obtain the following table (see Remark~\ref{3fold}, Theorem~\ref{4fold} and Theorem~\ref{5fold}). In particular, when $n \leq 5$, $l_{\mathscr{K}}$ depends only on $(n,p)$.
\begin{center}
\begin{tabular}{|c|c|c||c|c|c||c|c|c|}
\hline
$n$ & $p$ & $l_{\mathscr{K}}$ & $n$ & $p$ & $l_{\mathscr{K}}$ & $n$ & $p$ & $l_{\mathscr{K}}$ \\ \hline \hline
$3$ & $2$ & $1$ & $4$ & $3$ & $1$ & $5$ & $4$ & $1$ \\
$3$ & $1$ & $2$ & $4$ & $2$ & $2$ & $5$ & $3$ & $2$ \\
$3$ & $0$ & $3$ & $4$ & $1$ & $2$ & $5$ & $2$ & $2$ \\
 & & & $4$ & $0$ & $4$ & $5$ & $1$ & $3$ \\
 & & & & & & $5$ & $0$ & $5$ \\
\hline
\end{tabular}
\end{center}
As a corollary, we get a better bound on the degree of Fano $5$-folds of $\rho=1$.
\begin{cor}[Corollary~\ref{canobound}] For a Fano $5$-fold of $\rho=1$, $(-K_X)^5 \leq 9^5=59049$.
\end{cor}
On the other hand, the following shows that $l_{\mathscr{K}}$ does not depend only on $(n,p)$ in general.
\begin{them}[Theorem~\ref{coindex3}]\label{MT2} Let $X$ be a Fano manifold of $\rho=1$ with coindex $3$ and $\mathscr{K}$ a minimal rational component of $X$. Assume that $n:= \dim X \geq 6$. Then $l_{\mathscr{K}}=2$, except in the case where $X$ is the $6$-dimensional Lagrangian Grassmannian $LG(3,6)$. In the case $X=LG(3,6)$, we have $l_{\mathscr{K}}=3$.
\end{them}
Consequently, we obtain the following table ($n \geq 6$).
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$X$ & $i_X$ & $l_{\mathscr{K}}$ \\ \hline \hline
$\mathbb{P}^n$ & $n+1$ & $1$ \\
$\mathbb{Q}^n$ & $n$ & $2$ \\
del Pezzo mfd. of dim. $n$ & $n-1$ & $2$ \\
Mukai mfd. of dim. $n \geq 7$ & $n-2$ & $2$ \\
Mukai mfd. of dim. $6$ & $4$ & $2$ or $3$ \\
\hline
\end{tabular}
\end{center}
In Theorem~\ref{2/3}, we give a classification of prime Fano $n$-folds satisfying $i_X=\frac{2}{3}n$ and $l_{\mathscr{K}} \neq 2$. These are extremal cases of Theorem~\ref{bound}. Except in the case $n=3$, these varieties are deeply related to Severi varieties, which were classified by Zak \cite{Za} (see Corollary~\ref{Zak1}). Furthermore, for prime Fano manifolds, we discuss a relation among {\it $2$-connectedness by lines, conic-connectedness} and {\it defectiveness of the secant varieties}.
In the last section, we investigate Fano manifolds which are equipped with structures of double covers and are covered by rational curves of degree $1$, by a geometric argument without using varieties of minimal rational tangents. In Proposition~\ref{criterion}, we give a criterion for such Fano manifolds to be $2$-connected. Remark that all Fano manifolds dealt with in \cite{HK} as examples are prime. However, our cases include some non-prime Fano manifolds. Throughout this paper, we work over the complex number field $\mathbb{C}$.
\section{Deformation theory of rational curves and varieties of minimal rational tangents}
First we review some basic facts of deformation theory of rational curves and the definition of varieties of minimal rational tangents.
For details, we refer to \cite{Hw2,Ko} and follow their conventions.
Throughout this paper, unless otherwise noted, we always assume that $X$ is a Fano manifold with ${\rm Pic} (X) \cong \mathbb{Z}[\mathscr{O}_X(1)]$, where $\mathscr{O}_X(1)$ is the ample generator, and denote by ${\rm RatCurves}^n(X)$ the normalization of the space of integral rational curves on $X$. We also assume $n:=\dim X \geq 3$. We denote by $i_X$ the {\it Fano index of $X$}, which is the integer satisfying $\omega_X \cong \mathscr{O}_X(-i_X)$, where $\omega_X = \mathscr{O}_X(K_X)$ is the canonical line bundle of $X$. We call $n+1-i_X$ the {\it coindex of $X$}.
As is well known, a Fano manifold is uniruled; this is equivalent to the existence of a free rational curve $f: \mathbb{P}^1 \rightarrow X$. Here we call a rational curve $f: \mathbb{P}^1 \rightarrow X$ {\it free} if $f^*T_X$ is semipositive.
An irreducible component ${\mathscr{K}}$ of ${\rm RatCurves}^n(X)$ is called a {\it minimal rational component} if it contains a free rational curve of minimal anti-canonical degree. We denote by ${\mathscr{K}}_x$ the normalization of the subscheme of ${\mathscr{K}}$ parametrizing rational curves passing through $x$. Since the members of ${\mathscr{K}}$ are numerically equivalent to each other, we can define the $\mathscr{O}_X(1)$-degree of ${\mathscr{K}}$, which is denoted by $d_{\mathscr{K}}$. We will use the symbol $p$ to denote $i_Xd_{\mathscr{K}}-2$. In this setting, the minimal rational component ${\mathscr{K}}$ satisfies the following fundamental properties.
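For later reference, note that with this notation the anti-canonical degree of a member $[C]$ of ${\mathscr{K}}$ is
\[
-K_X\cdot C \;=\; i_X\,\bigl(\mathscr{O}_X(1)\cdot C\bigr) \;=\; i_X d_{\mathscr{K}} \;=\; p+2 .
\]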
\begin{pro}[see \cite{Hw2}]\label{stand}
\begin{enumerate}
\item{For a general point $x \in X$, ${\mathscr{K}}_x$ is a disjoint union of smooth projective varieties of dimension $p$.}
\item{For a general member $[f]$ of ${\mathscr{K}}$, we have $f^*T_X \cong \mathscr{O}(2) \oplus \mathscr{O}(1)^p \oplus \mathscr{O}^{n-1-p}$; such a curve is called a {\it standard rational curve}. In particular, $p \leq n-1$.}
\end{enumerate}
\end{pro}
\begin{them} [\cite{Ke2}] For a general point $x \in X$, there are only finitely many curves in ${\mathscr{K}}_x$ which are singular at $x$.
\end{them}
For a general point $x \in X$, we define the tangent map ${\tau}_x : {\mathscr{K}}_x \rightarrow \mathbb{P}(T_xX)$\footnote{For a vector space $V$, $\mathbb{P}(V)$ denotes the projective space of lines through the origin in $V$.} by assigning to each member of ${\mathscr{K}}_x$ which is smooth at $x$ its tangent direction at $x$. We denote by ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ the image of ${\tau}_x$, which is called the {\it variety of minimal rational tangents} at $x$.
\begin{them} [\cite{HM2,Ke2}] The tangent map ${\tau}_x : {\mathscr{K}}_x \rightarrow {\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is the normalization.
\end{them}
\begin{them}[\cite{CMSB,Ke1}]\label{CMSB} If $p=n-1$, namely ${\mathscr{C}}_x =\mathbb{P}(T_xX)$, then $X$ is isomorphic to $\mathbb{P}^n$.
\end{them}
\begin{them}[\cite{HH}]\label{HHM} Let $S=G/P$ be a rational homogeneous variety corresponding to a long simple root and ${\mathscr{C}}_o \subset \mathbb{P}(T_oS)$ the variety of minimal rational tangents at a reference point $o \in S$. Assume that ${\mathscr{C}}_o \subset \mathbb{P}(T_oS)$ and ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ are isomorphic as projective subvarieties. Then $X$ is isomorphic to $S$.
\end{them}
\begin{them}[\cite{Mi}]\label{Mi} If $X$ is a Fano manifold of $n:=\dim X \geq 3$, the following are equivalent.
\begin{enumerate}
\item{$X$ is isomorphic to a smooth quadric hypersurface $\mathbb{Q}^n$.}
\item{The Picard number of $X$ is $1$ and the minimal value of the anti-canonical degree of rational curves passing through a very general point $x_0 \in X$ is equal to $n$.}
\end{enumerate}
\end{them}
\begin{cor}\label{Mi2} If $p=n-2$, namely ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is a hypersurface, $X$ is isomorphic to $\mathbb{Q}^n$.
\end{cor}
\begin{proof} For a very general point $x_0 \in X$, any rational curve passing through $x_0$ is free. Let $C_0$ be a rational curve passing through $x_0$ whose degree is minimal among such rational curves and ${\mathscr{H}} \subset {\rm RatCurves}^n(X)$ an irreducible component containing $[C_0]$. Then ${\mathscr{H}}$ is a dominating family. It implies that the anticanonical degree of ${\mathscr{H}}$ is equal to that of ${\mathscr{K}}$. Furthermore, the anticanonical degree of ${\mathscr{K}}$ is $n$ by our assumption. Therefore $X$ is isomorphic to $\mathbb{Q}^n$ by Theorem~\ref{Mi}.
\end{proof}
\section{Varieties of minimal rational tangents in the cases $p=n-3$ and $(n, p)=(5, 1)$ }
\begin{pro}[{\cite[Proposition 1.4, Proposition 1.5, Theorem 2.5]{Hw2}, \cite[Proposition 2, Proposition 5]{Hw3}, \cite[Proposition~2.2]{Hw4}}]\label{hwlem1} Let $X$, ${\mathscr{K}}$ and $p$ be as in Section $2$ and ${\mathscr{C}}_x$ the variety of minimal rational tangents associated to ${\mathscr{K}}$ at a general point $x \in X$.
\begin{enumerate}
\item{The tangent map $\tau_x:{\mathscr{K}}_x \rightarrow {\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is an immersion at $[C] \in {\mathscr{K}}_x$ if $C$ is a standard rational curve on $X$.}
\item{If $X \subset \mathbb{P}^N$ is covered by lines, the tangent map $\tau_x$ is an embedding. In particular, ${\mathscr{C}}_x$ is a disjoint union of smooth projective varieties of dimension $p$.}
\item{If $2p>n-3$ and ${\mathscr{C}}_x$ is smooth, ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is non-degenerate.}
\item{If ${\mathscr{C}}_x$ is reducible, it has at least three components.}
\item{If ${\mathscr{C}}_x$ is a union of linear subspaces of dimension $p>0$, two distinct irreducible components of ${\mathscr{C}}_x$ are disjoint.}
\item{${\mathscr{C}}_x$ cannot be an irreducible linear subspace.}
\end{enumerate}
\end{pro}
\begin{pro}[{\cite[Proposition 9]{HM1}}]\label{hmlem} Let $X$, ${\mathscr{K}}$ and ${\mathscr{C}}_x$ be as above, $\mathbb{P}(W_x)$ the linear span of ${\mathscr{C}}_x$ and ${\mathscr{T}}_x \subset \mathbb{P}(\wedge^2W_x)$ the subvariety parametrizing tangent lines of the smooth locus of ${\mathscr{C}}_x$. Then ${\mathscr{T}}_x$ is contained in $\mathbb{P}({\rm Ker}([~,~]_x)) \subset \mathbb{P}(\wedge^2W_x)$, where $[~,~]_x: \wedge^2W_x \rightarrow T_xX/W_x$ is the Frobenius bracket tensor.
\end{pro}
\begin{lem}\label{tang} If $X \subset \mathbb{P}(V)$ is an irreducible hypersurface which is not linear, then its variety of tangent lines ${\mathscr{T}}_X \subset \mathbb{P}(\wedge^2V)$ is non-degenerate.
\end{lem}
\begin{proof} Assume that ${\mathscr{T}}_X \subset \mathbb{P}(\wedge^2V)$ is degenerate. We denote by $C(X) \subset V$ the cone corresponding to $X \subset \mathbb{P}(V)$. Then there exists a nonzero $\omega \in \wedge^2V^*$ such that $C(X)$ is isotropic with respect to $\omega$. We set $Q:=\{v \in V \,|\, \omega (v, w)=0$ for any $w \in V\}$. Then $\omega$ induces a non-degenerate skew-symmetric form on $V/Q$, so for the projection $\pi: V \rightarrow V/Q$ we have $2 \dim \pi(C(X)) \leq \dim V/Q$. Remark that $\dim C(X)=\dim V -1$, so that $\dim \pi(C(X)) \geq \dim V/Q -1$. Combining the two inequalities, we obtain $\dim V/Q \leq 2$. Since $\pi (C(X))$ is neither $V/Q$ nor $\{0\}$, it follows that $\pi (C(X)) \subset V/Q$ is a line. Hence $C(X)$ is contained in the hyperplane $\pi^{-1}(\pi(C(X)))$ of $V$, and therefore $C(X) \subset V$ is a hyperplane. This contradicts the non-linearity of $X$.
\end{proof}
\begin{pro}[{\cite[Proposition 2]{Hw1}}]\label{hwlem2} Let $X$, ${\mathscr{K}}$, ${\mathscr{C}}_x$, $W_x$ be as in Proposition~\ref{hmlem} and let $W$ be the distribution defined by $W_x$ for general $x \in X$. Then $W$ is integrable if and only if $W_x$ coincides with $T_xX$ for general $x \in X$.
\end{pro}
\begin{pro}[cf. {\cite[Proposition~7]{Hw3}}]\label{n-3} Let $X$ be a Fano $n$-fold of $\rho=1$ and ${\mathscr{K}}$ a minimal rational component of $X$ with $p=n-3 > 0$. Then the variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ at a general point $x \in X$ is one of the following:
\begin{enumerate}
\item {a non-degenerate variety with no linear component, or}
\item {a disjoint union of at least three lines.}
\end{enumerate}
\end{pro}
\begin{proof} First assume that ${\mathscr{C}}_x$ has a linear component. Then every component of ${\mathscr{C}}_x$ is linear; this follows from the irreducibility of ${\mathscr{C}}$, where ${\mathscr{C}} \subset \mathbb{P}(T_X)$ is the closure of the union of the ${\mathscr{C}}_x$ for general points $x \in X$.
By Proposition~\ref{hwlem1}, ${\mathscr{C}}_x$ is then a disjoint union of at least three linear subspaces. Let ${\mathscr{C}}_{x,1}$ and ${\mathscr{C}}_{x,2}$ be distinct components of ${\mathscr{C}}_x$. Since $\dim {\mathscr{C}}_{x,1} + \dim {\mathscr{C}}_{x,2} - \dim \mathbb{P}(T_xX)=n-5$, we would have ${\mathscr{C}}_{x,1} \cap {\mathscr{C}}_{x,2} \neq \emptyset$ if $n \geq 5$. This implies $n=4$, and ${\mathscr{C}}_x$ is a disjoint union of at least three lines.
Second assume that ${\mathscr{C}}_x$ has no linear components. We will show that ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is non-degenerate. Suppose that ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is degenerate. Let $\mathbb{P}(W_x)$ be the linear span of ${\mathscr{C}}_x$ and ${\mathscr{T}}_x \subset \mathbb{P}(\wedge^2W_x)$ be the subvariety parametrizing tangent lines of the smooth locus of ${\mathscr{C}}_x$. By Proposition~\ref{hmlem}, we have ${\mathscr{T}}_x \subset \mathbb{P}({\rm Ker}([~,~]_x)) \subset \mathbb{P}(\wedge^2W_x)$, where $[~,~]_x: \wedge^2W_x \rightarrow T_xX/W_x$ is the Frobenius bracket tensor. Lemma~\ref{tang} implies that ${\mathscr{T}}_x \subset \mathbb{P}(\wedge^2W_x)$ is non-degenerate. Therefore $\mathbb{P}({\rm Ker}([~,~]_x))$ coincides with $\mathbb{P}(\wedge^2W_x)$. Applying the Frobenius Theorem, the distribution $W$ defined by $W_x$ is integrable. However, since $W_x \neq T_xX$, this contradicts Proposition~\ref{hwlem2}.
\end{proof}
By the same argument, we can show the following:
\begin{pro}\label{5,1} If $(n, p)= (5, 1)$, then the variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ at a general point $x \in X$ is one of the following:
\begin{enumerate}
\item {a curve with no linear component whose linear span $\langle{\mathscr{C}}_x\rangle$ has dimension at least $3$, or}
\item {a disjoint union of at least three lines.}
\end{enumerate}
\end{pro}
\section{Spanning dimensions of loci of chains}
For simplicity, throughout this section, we continue to work under the same assumption as in Section~$2$ except in Definition~\ref{sv} and Remark~\ref{defective}; that is, $X$ is a Fano $n$-fold of $\rho=1$ with $n \geq 3$ and ${\mathscr{K}}$ is a minimal rational component of $X$.
Remark that we could also work in a slightly more general situation (in the category of uniruled manifolds).
\begin{defi}[\cite{HK}]\label{loc} \rm Let $X$ and ${\mathscr{K}}$ be as above. For a general point $x \in X$, we define
${\rm loc}^1(x):= \displaystyle{\bigcup_{[C] \in {\mathscr{K}}_x} C}$ and ${\rm loc}^{k+1}(x):= \overline{\displaystyle{\bigcup_{[C] \in {\mathscr{K}}_y,\ y \in {\rm loc}^k(x)\ {\rm general}} C}}$ inductively.
We denote the maximal value of the dimensions of irreducible components of ${\rm loc}^k(x)$ by $d_k$.
\end{defi}
\begin{defi}[\cite{HK}]\label{leng} \rm If there exists an integer $l$ such that $d_l=\dim X$ but $d_{l-1} < \dim X$, we say that $X$ has {\it length $l$ with respect to ${\mathscr{K}}$}, or that $X$ is {\it $l$-connected by ${\mathscr{K}}$}. We denote the length by $l_{\mathscr{K}}$.
\end{defi}
By our assumption that the Picard number of $X$ is $1$, we can define the length.
\begin{pro}[{\cite{KMM1}, \cite[Lemma 1.3]{KMM}, \cite{Na} and \cite[Corollary IV.4.14]{Ko}}]\label{Na} Let $X$ and ${\mathscr{K}}$ be as above.
Then the length $l_{\mathscr{K}}$ exists, that is, two general points on $X$ can be connected by a finite number of rational curves in ${\mathscr{K}}$. Furthermore, we have $l_{\mathscr{K}} \leq \dim X$.
\end{pro}
\begin{defi}\label{sv} \rm For varieties $X, Y \subset \mathbb{P}^N$, we define the {\it join of $X$ and $Y$} as the closure of the union of lines passing through two distinct points $x \in X$ and $y \in Y$, and denote it by $S(X, Y)$. In the special case $X=Y$, $S^1X:=S(X,X)$ is called the {\it secant variety of $X$}.
\end{defi}
\begin{rem}\label{defective} \rm In general, it is easy to see that the dimension of the secant variety $S^1X$ is at most $2n +1$, where $n:=\dim X$; the expected dimension of $S^1X$ is $2n +1$. When the dimension of $S^1X$ is less than $2n +1$, we say that the secant variety $S^1X$ is {\it defective}.
\end{rem}
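Indeed, $S^1X$ is the image of the abstract join, which is swept out by choosing a pair of points of $X$ together with a point on the line they span, whence
\[
\dim S^1X \;\leq\; 2\dim X + 1 \;=\; 2n+1 .
\]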
In \cite{HK}, Hwang and Kebekus computed the first spanning dimension $d_1$ and gave a lower bound for the second one $d_2$ (resp. for $d_k$) under the assumption that ${\mathscr{K}}_x$ is irreducible for a general point $x \in X$, by using the secant variety of the variety of minimal rational tangents. However, their proof works even if we drop the assumption on the irreducibility of ${\mathscr{K}}_x$.
\begin{them}[\cite{HK, KeS}]\label{HK} Let $X$ be a Fano $n$-fold of $\rho=1$ with $n \geq 3$ and ${\mathscr{K}}$ a minimal rational component of $X$. Then, without the assumption that ${\mathscr{K}}_x$ is irreducible for a general point $x \in X$, we have
\begin{enumerate}
\item{$d_1=p+1$ and $d_k \leq k(p+1)$ for each $k \geq 1$,}
\item{$d_{2} \geq \dim S^1{\mathscr{C}}_x +1$, if $p>0$.}
\end{enumerate}
\end{them}
\begin{proof} The former follows from Mori's Bend and Break and an easy induction on $k$. For the latter, there is a proof in \cite[Theorem~21]{KeS} which is easier than the one in \cite{HK}.
\end{proof}
\begin{lem}\label{0} Let $X$ and ${\mathscr{K}}$ be as above. If $p=0$, we have $l_{\mathscr{K}}=n$.
\end{lem}
\begin{proof} We have the inequality $d_{k+1} \leq d_k +1$; in particular, $d_k \leq k$. Combining this with Proposition~\ref{Na}, we obtain our assertion.
\end{proof}
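In more detail, writing $l:=l_{\mathscr{K}}$, which exists by Proposition~\ref{Na}, the inequality $d_k \leq k$ gives
\[
n \;=\; d_{l} \;\leq\; l \;\leq\; n,
\]
and hence $l_{\mathscr{K}}=n$.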
\begin{rem}\label{3fold} \rm If $X$ is a Fano $3$-fold of $\rho=1$ which is isomorphic neither to $\mathbb{P}^3$ nor to $\mathbb{Q}^3$, then $l_{\mathscr{K}}= 3$. Hence in the $3$-dimensional case, we have the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$p$ & $i_X$ & $X$ & $l_{\mathscr{K}}$ & $d_{\mathscr{K}}$ \\ \hline \hline
$2$ & $4$ & $\mathbb{P}^3$ & $1$ & $1$ \\
$1$ & $3$ & $\mathbb{Q}^3$ & $2$ & $1$ \\
$0$ & $2$ & del Pezzo & $3$ & $1$ \\
$0$ & $1$ & Mukai & $3$ & $2$ \\
\hline
\end{tabular}
\end{center}
\end{rem}
\section{Lengths of Fano manifolds of dimension $\leq 5$}
\begin{lem}\label{sec} Let $X, Y \subset \mathbb{P}^{N}$ be irreducible projective varieties of dimension $n$. Then the following hold.
\begin{enumerate}
\item{${\rm Vert}(X):=\{ p \in X \,|\, S(p,X)=X \} \subset \mathbb{P}^N $ is a linear subspace.}
\item{If ${\rm codim}({\rm Vert}(X),X) \leq 1$, then ${\rm Vert}(X)=X$.}
\item{If $\dim S(X,Y)=n+1$, then $X, Y \subset {\rm Vert}(S(X,Y))$.}
\item{If $N=n+2$ and $X \subset \mathbb{P}^{n+2}$ is a non-degenerate variety which is not linear, then $S^1X = \mathbb{P}^{n+2}$.}
\item{If $X$ or $Y$ is non-linear and $\dim S(X,Y)=n+1$, then $S(X, Y) \subset \mathbb{P}^N$ is a linear subspace.}
\end{enumerate}
\end{lem}
\begin{proof} $\rm (i)$ It is well known that the vertex ${\rm Vert}(X) \subset \mathbb{P}^N$ is a linear subspace (see \cite[Proposition 4.6.2]{FOV}).
$\rm (ii)$ Suppose that ${\rm codim}({\rm Vert}(X),X) \leq 1$ and ${\rm Vert}(X) \neq X$. Then $\dim{\rm Vert}(X) = n-1$. For a point $x \in X \setminus {\rm Vert}(X)$, it turns out that $X$ coincides with the linear space $S(x, {\rm Vert}(X))$. Hence $X$ is linear, so that ${\rm Vert}(X)=X$ is an $n$-dimensional linear space. This is a contradiction.
$\rm (iii)$ It is enough to prove that $Y \subset {\rm Vert}(S(X,Y))$. Assume that $\dim S(X,Y)=n+1$. If $Y$ were contained in ${\rm Vert}(X)$, we would have $S(X,Y)=X$, contradicting our assumption. This implies that $Y$ is not contained in ${\rm Vert}(X)$. For $y \in Y \setminus {\rm Vert}(X)$, we obtain $S(y,X)=S(Y,X)$. Furthermore, we see
\begin{equation}
S(y,S(X,Y))=S(y,S(y,X))=S(y,X)=S(Y,X).
\end{equation}
Thus our assertion holds.
$\rm (iv)$ Assume that $S^1X \neq \mathbb{P}^{n+2}$. Then $X=S^1X$ or $\dim S^1X=n+1$. If $X=S^1X$, then $X$ is linear, which gives a contradiction. So we have $\dim S^1X=n+1$. Then $\rm (iii)$ implies that $X \subset {\rm Vert}(S^1X)$, which is a linear subspace of dimension at most $n+1$. Since $X \subset \mathbb{P}^{n+2}$ is non-degenerate, we get a contradiction.
$\rm (v)$ Because $\dim S(X,Y)=n+1$, it follows from $\rm (iii)$ that $X, Y \subset {\rm Vert}(S(X,Y)) \subset S(X,Y)$. Then we have ${\rm Vert}(S(X,Y)) = S(X,Y)$ because at least one of $X$ and $Y$ is not linear. So $S(X,Y)$ is a linear space.
\end{proof}
\begin{them}\label{length2} Let $X$ be a Fano manifold of $\rho=1$ with $n=\dim X \geq 4$. Assume that $X$ has a minimal rational component ${\mathscr{K}}$ with $p=n-3 > 0$. Then $X$ is $2$-connected by ${\mathscr{K}}$. In particular, if the Fano index $i_X$ is $n-1$, then $X$ is $2$-connected by lines.
\end{them}
\begin{proof} By Proposition~\ref{n-3}, the variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is
\begin{enumerate}
\item{a non-degenerate variety with no linear component, or}
\item{a disjoint union of at least three lines.}
\end{enumerate}
Let ${\mathscr{C}}_x$ be as in $\rm (i)$. If ${\mathscr{C}}_x$ is irreducible, Lemma~\ref{sec} $\rm (iv)$ implies $S^1{\mathscr{C}}_x = \mathbb{P}(T_xX)$. On the other hand, in the case where ${\mathscr{C}}_x$ is reducible, $S^1{\mathscr{C}}_x = \mathbb{P}(T_xX)$ also holds. In fact, for the irreducible decomposition ${\mathscr{C}}_x = {\mathscr{C}}_{x,1} \cup \dots \cup {\mathscr{C}}_{x,m}$, assume that $S({\mathscr{C}}_{x,i}, {\mathscr{C}}_{x,j}) \neq \mathbb{P}(T_xX)$ for any $i,j$. Then we see $\dim S({\mathscr{C}}_{x,i}, {\mathscr{C}}_{x,j})=n-2$. Hence Lemma~\ref{sec} $\rm (v)$ implies that $S({\mathscr{C}}_{x,i}, {\mathscr{C}}_{x,j})$ is a linear subspace $\mathbb{P}^{n-2} \subset \mathbb{P}(T_xX)$. It turns out from Proposition~\ref{hwlem1} that $m \geq 3$. From now on, we focus on $(i,j)=(1,2)$. Because ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is non-degenerate, there exists $j$ such that $S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,2}) \neq S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,j})$. We may assume that $j=3$.
We have ${\mathscr{C}}_{x,1} \subset S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,2}) \cap S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,3})$. Furthermore, since $S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,2})$ and $S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,3})$ are distinct linear subspaces of dimension $n-2$, their intersection is a linear subspace of dimension $n-3$. Thus we have ${\mathscr{C}}_{x,1} = S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,2}) \cap S({\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,3})$. However, this contradicts our assumption that ${\mathscr{C}}_x$ has no linear components. If ${\mathscr{C}}_x$ is as in $\rm (ii)$, we also have $S^1{\mathscr{C}}_x= \mathbb{P}(T_xX)$.
As a consequence, in any case we have $S^1{\mathscr{C}}_x = \mathbb{P}(T_xX)$. This implies that $d_2=n$ by Theorem~\ref{HK}. On the other hand, since $d_1=p+1=n-2<n$, $X$ is $2$-connected by ${\mathscr{K}}$. If $i_X=n-1 \geq 3$, then it follows from the equation $p+2=i_Xd_{\mathscr{K}}$ that $d_{\mathscr{K}}=1$ and $p=n-3$.
\end{proof}
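Indeed, since $p\leq n-1$ by Proposition~\ref{stand}, the index $i_X=n-1\geq 3$ forces
\[
(n-1)\,d_{\mathscr{K}} \;=\; p+2 \;\leq\; n+1,
\]
so $d_{\mathscr{K}}=1$ (because $2(n-1)>n+1$ for $n\geq 4$), and hence $p=i_X-2=n-3$.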
\begin{rem} \rm If $X$ is a prime Fano manifold with $i_X=n-1$, that is, a del Pezzo manifold whose degree is at least $3$, then the latter statement of the above theorem also follows from Theorem~\ref{bound}.
\end{rem}
\begin{them}\label{4fold} Let $X$ be a Fano $4$-fold of $\rho=1$. Then we have the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$p$ & $i_X$ & $X$ & $l_{\mathscr{K}}$ & $d_{\mathscr{K}}$ \\ \hline \hline
$3$ & $5$ & $\mathbb{P}^4$ & $1$ & $1$ \\
$2$ & $4$ & $\mathbb{Q}^4$ & $2$ & $1$ \\
$1$ & $3$ & del Pezzo & $2$ & $1$ \\
$1$ & $1$ & coindex $4$ & $2$ & $3$ \\
$0$ & $2$ & Mukai & $4$ & $1$ \\
$0$ & $1$ & coindex $4$ & $4$ & $2$ \\
\hline
\end{tabular}
\end{center}
\end{them}
\begin{proof} The computation of the length with respect to ${\mathscr{K}}$ is an immediate consequence of Theorem~\ref{CMSB}, Corollary~\ref{Mi2}, Lemma~\ref{0} and Theorem~\ref{length2}. The other parts follow from the relation $p+2=i_Xd_{\mathscr{K}}$.
\end{proof}
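For instance, the row with $i_X=3$ is obtained as follows: since $p\leq n-1=3$,
\[
3\,d_{\mathscr{K}} \;=\; p+2 \;\leq\; 5 \quad\Longrightarrow\quad d_{\mathscr{K}}=1,\ p=1,
\]
and the remaining rows follow in the same way.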
Here we introduce two fundamental lemmata.
\begin{lem}\label{sec2} For an irreducible non-degenerate projective curve $C \subset \mathbb{P}^N$, we have $\dim S^1C= {\rm min}\{3,N\}$.
\end{lem}
\begin{proof} In general, we see that $\dim S^1C \leq 3$. So when $\dim S^1C=3$, there is nothing to prove. If $\dim S^1C=1$, then $C$ is a line, and hence our claim holds. Thus we may assume that $\dim S^1C=2$. Then it follows from Lemma~\ref{sec} $\rm (v)$ that $S^1C$ is a plane. As a consequence, we see $\dim S^1C= {\rm min}\{3,N\}$.
\end{proof}
\begin{lem}[{\cite[Remark 4.6.10]{FOV}}]\label{sec3} For two distinct integral curves $C, D \subset \mathbb{P}^N$, $\dim S(C,D)=2$ holds if and only if $C \cup D $ is a plane curve.
\end{lem}
\begin{proof} The ``only if'' part comes from Lemma~\ref{sec} $\rm (v)$. The converse is trivial.
\end{proof}
\begin{them}\label{length3} Let $X$ be a Fano $5$-fold of $\rho=1$. Assume that $X$ has a minimal rational component ${\mathscr{K}}$ with $p=1$. Then $X$ is $3$-connected by ${\mathscr{K}}$.
\end{them}
\begin{proof} By the same argument as in Theorem~\ref{length2}, we can prove this theorem. In fact, Proposition~\ref{5,1}, Lemma~\ref{sec2} and Lemma~\ref{sec3} imply that $\dim S^1{\mathscr{C}}_x \geq 3$ for the variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$. It turns out that $d_2 \geq 4$. Because $d_2 \leq 2(p+1)=4$, we have $d_2=4$. Hence $X$ is $3$-connected by ${\mathscr{K}}$.
\end{proof}
\begin{them}\label{5fold} Let $X$ be a Fano $5$-fold of $\rho=1$. Then we have the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$p$ & $i_X$ & $X$ & $l_{\mathscr{K}}$ & $d_{\mathscr{K}}$ \\ \hline \hline
$4$ & $6$ & $\mathbb{P}^5$ & $1$ & $1$ \\
$3$ & $5$ & $\mathbb{Q}^5$ & $2$ & $1$ \\
$2$ & $4$ & del Pezzo & $2$ & $1$ \\
$2$ & $2$ & coindex $4$ & $2$ & $2$ \\
$2$ & $1$ & coindex $5$ & $2$ & $4$ \\
$1$ & $3$ & Mukai & $3$ & $1$ \\
$1$ & $1$ & coindex $5$ & $3$ & $3$ \\
$0$ & $2$ & coindex $4$ & $5$ & $1$ \\
$0$ & $1$ & coindex $5$ & $5$ & $2$ \\
\hline
\end{tabular}
\end{center}
\end{them}
\begin{proof}
The computation of the length with respect to ${\mathscr{K}}$ is an immediate consequence of Theorem~\ref{CMSB}, Corollary~\ref{Mi2}, Lemma~\ref{0}, Theorem~\ref{length2} and Theorem~\ref{length3}. The other parts follow from the relation $p+2=i_Xd_{\mathscr{K}}$.
\end{proof}
\begin{conj} For a Fano $n$-fold of $\rho=1$, $(-K_X)^n \leq (n+1)^n$.
\end{conj}
This conjecture was proved by Hwang \cite{Hw3} in the case $n=4$, but it is still open for $n \geq 5$. For $n=5$, Hwang proved that $(-K_X)^5$ is at most $81250$ \cite{Hw5}; to the best of the author's knowledge, this is the best previously known bound for $n=5$. As an application of Theorem~\ref{5fold}, we obtain an improved bound.
\begin{cor}\label{canobound} For a Fano $5$-fold of $\rho=1$, $(-K_X)^5 \leq 9^5=59049$.\end{cor}
\begin{proof} If $p=0$, we know $(-K_X)^5 \leq 35310< 59049$ from \cite[Corollary~3]{Hw5}. When $p=4$ or $3$, our assertion follows from Theorem~\ref{CMSB} and Corollary~\ref{Mi2}. So it is enough to consider the cases $p=2$ and $p=1$. From the definition of the locus ${\rm loc}^k(x)$, we know that two general points can be joined by a chain of at most $l_{\mathscr{K}}$ free rational curves in ${\mathscr{K}}$, hence by a chain whose total anticanonical degree is at most $l_{\mathscr{K}}(p+2)=l_{\mathscr{K}}i_Xd_{\mathscr{K}}$. Furthermore, a well-known argument \cite[Step~3]{KMM1} implies $(-K_X)^5 \leq (l_{\mathscr{K}}(p+2))^5$ (see also the proof of Lemma~\ref{CC} (i) below). It follows from Theorem~\ref{5fold} that $l_{\mathscr{K}}(p+2) \leq 9$ for $p=1,2$. So our assertion holds.
\end{proof}
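Explicitly, Theorem~\ref{5fold} gives $l_{\mathscr{K}}=2$, $p+2=4$ for $p=2$ and $l_{\mathscr{K}}=3$, $p+2=3$ for $p=1$, so that
\[
l_{\mathscr{K}}(p+2) \;\leq\; \max\{\,2\cdot 4,\ 3\cdot 3\,\} \;=\; 9 .
\]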
\section{Lengths of Fano manifolds of coindex $3$}
In this section, we study Fano manifolds of coindex $3$. Because we have already dealt with the case where $n:= \dim X \leq 5$ in Theorem~\ref{MT1}, we study the case where $n \geq 6$.
\begin{pro}[\cite{KK}]\label{KK} Let $X$ be a projective variety and ${\mathscr{H}} \subset {\rm RatCurves}^n(X)$ a proper dominating family of rational curves such that none of the associated curves has a cuspidal singularity.
\begin{enumerate}
\item{For general $x \in X$, all curves in ${\mathscr{H}}$ passing through $x$ are smooth at $x$ and no two of them share a common tangent direction at $x$.}
\item{Assume that for general $x \in X$ and any irreducible component ${\mathscr{H}}' \subset {\mathscr{H}}$, $\dim {\mathscr{H}}'_x \geq \frac{\dim X-1}{2}$ holds. Then ${\mathscr{H}}_x$ is irreducible. In particular, ${\mathscr{H}}$ is irreducible.}
\end{enumerate}
\end{pro}
\begin{lem}[{\cite[Corollary 1.4.3]{BS}}]\label{BS} Let $C$ be an integral curve and $L$ a spanned line bundle of degree $1$ on $C$. Then $(C, L) \cong (\mathbb{P}^1, \mathscr{O}_{\mathbb{P}^1}(1))$.
\end{lem}
\begin{nota} \rm We denote by $(d_1) \cap \dots \cap (d_k) \subset \mathbb{P}^n$ a smooth complete intersection of hypersurfaces of degrees $d_1, \dots, d_k$, in particular by $(d)^k$ if $d=d_1= \dots = d_k$. We denote by $G(k,n)$ the Grassmannian of $k$-planes in $\mathbb{C}^n$, by $LG(k,n)$ the Lagrangian Grassmannian, which is the variety of isotropic $k$-planes for a non-degenerate skew-symmetric bilinear form on $\mathbb{C}^n$, and by $S_k$ the spinor variety, which is an irreducible component of the Fano variety of $k$-planes in $\mathbb{Q}^{2k}$.
We denote a simple exceptional linear algebraic group of Dynkin type $G$ simply by $G$ and, for a dominant integral weight $\omega$ of $G$, the minimal closed orbit of $G$ in $\mathbb{P}(V_{\omega})$ by $G(\omega)$, where $V_{\omega}$ is the irreducible representation space of $G$ with highest weight $\omega$.
For example, $E_7({\omega}_1)$ is the minimal closed orbit of an algebraic group of type $E_7$ in $\mathbb{P}(V_{{\omega}_1})$, where ${\omega}_1$ is the first fundamental dominant weight in the standard notation of Bourbaki \cite{Bo}.
\end{nota}
\begin{them}\label{coindex3} Let $X$ be a Fano manifold of coindex $3$ with ${\rm Pic}(X) \cong \mathbb{Z}[\mathscr{O}_X(1)]$ and ${\mathscr{K}}$ a minimal rational component of $X$. Assume that $n:= \dim X \geq 6$. Then $(l_{\mathscr{K}}, d_{\mathscr{K}})=(2, 1)$ except when $X$ is the Lagrangian Grassmannian $LG(3,6)$. In the case $X=LG(3,6)$, we have $(l_{\mathscr{K}}, d_{\mathscr{K}})=(3, 1)$.
\end{them}
\begin{proof} We have the inequality $n+1 \geq p+2=(n-2)d_{\mathscr{K}}$. It follows from our assumption $n \geq 6$ that $(p,d_{\mathscr{K}})=(n-4,1)$.
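Indeed, since $p\leq n-1$ by Proposition~\ref{stand} and $i_X=n-2$, the assumption $d_{\mathscr{K}}\geq 2$ would give
\[
2(n-2) \;\leq\; (n-2)d_{\mathscr{K}} \;=\; p+2 \;\leq\; n+1,
\]
that is, $n\leq 5$; hence $d_{\mathscr{K}}=1$ and $p=i_X-2=n-4$.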
By Iskovskikh's theorem \cite{Is} or Mukai's classification of Fano manifolds of coindex $3$ \cite{Muk,Me}, $X$ is one of the following:
\begin{enumerate}
\item{a prime Fano manifold, which means that $\mathscr{O}_X(1)$ is very ample,}
\item{a double cover $\pi: X \rightarrow \mathbb{P}^n$ with branch divisor $B \in |\mathscr{O}_{\mathbb{P}^n}(6)|$, or}
\item{a double cover $\pi: X \rightarrow \mathbb{Q}^n$ with branch divisor $B \in |\mathscr{O}_{\mathbb{Q}^n}(4)|$.}
\end{enumerate}
\begin{cl} For a general point $x \in X$, the variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is an equidimensional disjoint union of smooth projective varieties.
\end{cl}
When $X$ is prime, this follows from Proposition~\ref{hwlem1}. So we assume that $X$ is as in $\rm (ii)$ or $\rm (iii)$.
We denote by $Y$ the target of $\pi$, which is $\mathbb{P}^n$ or $\mathbb{Q}^n$. A Barth-type theorem \cite{L} implies that ${\rm Pic}(X)\cong {\rm Pic}(Y)$ and that $\pi^*\mathscr{O}_Y(1) \cong \mathscr{O}_X(1)$, where $\mathscr{O}_Y(1)$ is the ample generator of the Picard group of $Y$.
By Proposition~\ref{stand}, it is sufficient to show that the tangent map $\tau_x: {\mathscr{K}}_x \rightarrow {\mathscr{C}}_x$ is an isomorphism.
Since $\mathscr{O}_X(1)$ is spanned and $d_{\mathscr{K}}=1$, Lemma~\ref{BS} implies that any member $l$ of ${\mathscr{K}}$ is isomorphic to $\mathbb{P}^1$. Furthermore, ${\mathscr{K}}$ is proper because $d_{\mathscr{K}}=1$ (see \cite[II. Proposition~2.14]{Ko}). So Proposition~\ref{KK} implies that $\tau_x$ is bijective. For $[l] \in {\mathscr{K}}_x$ we have $\mathscr{O}_X(1).l=1$. The projection formula shows that $\mathscr{O}_X(1).l=({\rm deg}\,\pi_l) \cdot \mathscr{O}_Y(1).\pi(l)$, where $\pi_l:l \rightarrow \pi(l)$ is the restriction of $\pi$ to $l$. Therefore $\pi(l) \subset Y$ is a line and $\pi_l$ is an isomorphism. Since $x \in X$ is general, we may assume that $l$ is free and that the natural morphism between normal bundles $N_{l/X} \rightarrow N_{\pi(l)/Y}$ is generically surjective. Remark that $N_{\pi(l)/Y} = \mathscr{O}_{\mathbb{P}^1}(1)^{\oplus n-1}$ or $\mathscr{O}_{\mathbb{P}^1}(1)^{\oplus n-2} \oplus \mathscr{O}_{\mathbb{P}^1}$. Writing $N_{l/X}= \oplus \mathscr{O}_{\mathbb{P}^1}(a_i)$, if there existed $i$ such that $a_i \geq 2$, the induced morphism $\mathscr{O}_{\mathbb{P}^1}(a_i) \rightarrow N_{\pi(l)/Y}$ would be the zero map, contradicting the fact that $N_{l/X} \rightarrow N_{\pi(l)/Y}$ is generically surjective. It follows that $l \subset X$ is a standard rational curve. Hence, by Proposition~\ref{hwlem1}, $\tau_x$ is an immersion. As a consequence, $\tau_x$ is an embedding, and our claim holds.
Since $n \geq 6$, Proposition~\ref{hwlem1} implies that ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is non-degenerate. When $n \geq 7$, we see that ${\mathscr{C}}_x$ is irreducible. In fact, if there existed distinct irreducible components ${\mathscr{C}}_{x,1}, {\mathscr{C}}_{x,2}$ of ${\mathscr{C}}_x$, we would have $\dim {\mathscr{C}}_{x,1} + \dim {\mathscr{C}}_{x,2} - \dim \mathbb{P}(T_xX) \geq 0$, which implies ${\mathscr{C}}_{x,1} \cap {\mathscr{C}}_{x,2} \neq \emptyset$; this contradicts the above claim. According to Zak's theorem on linear normality \cite{Za} and Theorem~\ref{HK}, we have $l_{\mathscr{K}}=2$. So it remains to consider the case $n=6$. If there exists an irreducible component of ${\mathscr{C}}_x$ whose secant variety coincides with $\mathbb{P}(T_xX)$, we have $l_{\mathscr{K}}=2$. Therefore we assume that the secant variety of any irreducible component of ${\mathscr{C}}_x$ does not coincide with $\mathbb{P}(T_xX)$. If ${\mathscr{C}}_x$ is irreducible, then it is the Veronese surface $v_2(\mathbb{P}^2) \subset \mathbb{P}^5$; this follows from Zak's classification of Severi varieties \cite{Za}. Here remark that the Veronese surface is the variety of minimal rational tangents of the Lagrangian Grassmannian $LG(3,6)$ at a general point (see \cite{HM,E,LM}). Thus in this case $X$ is isomorphic to $LG(3,6)$ by Theorem~\ref{HHM}. Because the secant variety of the Veronese surface is a hypersurface, we have $d_2=n-1=5$ (\cite[Theorem 3.14]{HK}). Therefore we have $l_{\mathscr{K}}=3$. If ${\mathscr{C}}_x$ is reducible, there exist disjoint irreducible components $V_1$ and $V_2$. Remark that we assumed that $S^1V_i$ does not coincide with $\mathbb{P}(T_xX)$ for $i=1,2$. If $\dim S(V_1,V_2) \leq 4$, we could take a point $q \in \mathbb{P}^5 \setminus (S(V_1,V_2) \cup S^1V_1 \cup S^1V_2)$. For the projection $\pi_q$ from $q$, $\pi_q(V_i) \subset \mathbb{P}^4$ is a surface. Hence it turns out that $\pi_q(V_1) \cap \pi_q(V_2) \subset \mathbb{P}^4$ is non-empty. This contradicts $q \notin S(V_1,V_2)$. Therefore we have $S(V_1,V_2)=\mathbb{P}(T_xX)$. In particular, $S^1 {\mathscr{C}}_x=\mathbb{P}(T_xX)$ and $l_{\mathscr{K}}=2$.
\end{proof}
\section{$2$-connectedness by lines, conic-connectedness and defectiveness of secant varieties.}
Here we remark on a relation between $2$-connectedness by lines and conic-connectedness.
\begin{defi}[cf. {\cite{KaS,IR}}] \rm For a projective manifold $X \subset \mathbb{P}^N$, we call $X$ {\it conic-connected} if there exists an irreducible conic passing through two general points on $X$.
\end{defi}
\begin{lem}[\cite{IR}]\label{CC} Let $X \subset \mathbb{P}^N$ be a projective manifold which is covered by lines. Then
\begin{enumerate}
\item{if two general points on $X$ are connected by a chain of two lines, $X$ is conic-connected;}
\item{if $X$ is conic-connected, then the Fano index $i_X$ is at least $\frac{n+1}{2}$.}
\item{Assume that $X$ is conic-connected. Then two general points on $X$ are not connected by a chain of two lines if and only if $i_X = \frac{n+1}{2}$.}
\end{enumerate}
\end{lem}
\begin{proof} $\rm (i)$ is well known to the experts (cf. \cite{KMM1}, \cite[Proof of Proposition 5.8]{De}). Suppose that two general points $x_1, x_2 \in X$ are connected by a chain of two lines $l_1, l_2$. Then, without loss of generality, we may assume that these two lines are free. By the gluing lemma, there exists a smoothing $(\pi:\mathscr{C} \rightarrow (T,0), F:\mathscr{C} \rightarrow X, s_1)$ of $l_1 \cup l_2 \subset X$ fixing $x_1$, where $s_1: T \rightarrow \mathscr{C}$ is a section of $\pi$ such that $s_1(0)=x_1 \in \pi^{-1}(0) \cong l_1 \cup l_2$ and $F \circ s_1 (T)= \{x_1\}$ (see \cite[Chapter II.7]{Ko}). After a suitable base change, we may assume that there exists a section $s_2$ of $\pi$ such that $s_2(0)=x_2 \in \pi^{-1}(0) \cong l_1 \cup l_2$. Let $Z \subset X \times X$ be the set of points $(y_1,y_2) \in X \times X$ which can be joined by an irreducible conic in $X$. Then for a general point $t \neq 0$ in $T$, we have $(F(s_1(t)), F(s_2(t)))=(x_1, F(s_2(t))) \in Z$. It turns out that $(x_1,x_2)$ is contained in the closure of $Z$. By the generality of $(x_1,x_2) \in X \times X$, we see that $Z$ is dense in $X \times X$. Consequently, our assertion holds.
$\rm (ii)$ is proved in \cite{IR}. If $X$ is conic-connected, then there exists a smooth conic $C$ such that $T_X|_C$ is ample. This implies that $2i_X=\deg T_X|_C \geq n+1$. Hence $\rm (ii)$ holds.
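The lower bound $n+1$ (rather than $n$) can be seen, for instance, from the subbundle $T_C\subset T_X|_C$: since $T_X|_C$ is ample and $T_C\cong\mathscr{O}_{\mathbb{P}^1}(2)$, the quotient $(T_X|_C)/T_C$ is an ample bundle of rank $n-1$, so
\[
\deg T_X|_C \;\geq\; 2+(n-1) \;=\; n+1 .
\]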
$\rm (iii)$ Suppose that $X$ is a conic-connected manifold which is not $2$-connected by lines. Then for two general points $x, y \in X$ there exists a smooth conic $f:\mathbb{P}^1 \cong C \subset X$ passing through $x$ and $y$ such that $T_X|_C$ is ample. This implies that $H^1(\mathbb{P}^1, f^*T_X(-2))=0$. Hence there is no obstruction to deforming $f$ while fixing the marked points $x, y$. It turns out that
\begin{equation}
\dim_{[f]}{\rm Hom}(\mathbb{P}^1,X;\,f(0)=x, f(\infty)=y)=2i_X-n.
\end{equation}
If $2i_X-n \geq 2$, Mori's Bend and Break implies that $C$ degenerates into a union of two lines containing $x$ and $y$. This is a contradiction. Hence $2i_X-n \leq 1$. Combining this with $\rm (ii)$, we have $i_X = \frac{n+1}{2}$. Conversely, if the Fano index satisfies $i_X=\frac{n+1}{2}$, it turns out from the same argument as in Theorem~\ref{HK} $\rm (i)$ that $X$ is not $2$-connected by lines.
\end{proof}
\begin{exa} {\rm Let $S_4 \subset \mathbb{P}^{15}$ be the $10$-dimensional spinor variety and let $X$ be $S_4$ or a linear section of it of dimension $n \geq 6$. Then $X$ is a Fano manifold of coindex $3$ with genus $g:=\frac{H^n}{2}+1=7$, where $H$ is the ample generator of ${\rm Pic}(X)$. There exists a $6$-dimensional smooth quadric passing through two general points on $S_4$ \cite{ESB}. So $X$ is conic-connected and $2$-connected by lines. Hence two general points on $X$ can be connected by a chain of two lines which is obtained as a degeneration of a conic.}
\end{exa}
\begin{exa} {\rm Let $X$ be the Grassmannian $G(2,6) \subset \mathbb{P}^{14}$ or a linear section of it of dimension $n \geq 6$. Then $X$ is a Fano manifold of coindex $3$ with genus $g=8$. Two distinct points $x, y \in G(2,6)$ correspond to $2$-dimensional vector subspaces $L_x, L_y \subset \mathbb{C}^6$, and there exists a $4$-dimensional vector subspace $V \subset \mathbb{C}^6$ which contains the span $\langle L_x, L_y\rangle$. This implies that $x$ and $y$ are contained in a $4$-dimensional quadric $Q^4 \cong G(2,4) \subset G(2,6)$. So $X$ is conic-connected and $2$-connected by lines.}
\end{exa}
\begin{rem} \rm $X:= G(2,6) \cap (1)^3 \subset \mathbb{P}^{14}$ is a $5$-dimensional Fano manifold of index $3$. According to Theorem~\ref{5fold}, $X$ is $3$-connected by lines. However, $X$ is conic-connected. This example shows that a chain of minimal rational curves connecting two general points is not necessarily a chain of minimal total degree.
\end{rem}
\begin{them}\label{2/3} Let $X$ be a prime Fano $n$-fold of $\rho=1$ with $i_X=\frac{2}{3}n$ and ${\mathscr{K}}$ a minimal rational component of $X$. Then $l_{\mathscr{K}}=2$ except in the following cases:
\begin{enumerate}
\item{$(3) \subset \mathbb{P}^4$, a hypersurface of degree $3$.}
\item{$(2) \cap (2) \subset \mathbb{P}^5$, a complete intersection of two hyperquadrics.}
\item{$G(2,5) \cap (1)^3 \subset \mathbb{P}^6$, a $3$-dimensional linear section of $G(2,5)$.}
\item{$LG(3,6)$, a Lagrangian Grassmannian.}
\item{$G(3,6)$, a Grassmannian.}
\item{$S_5$, a spinor variety.}
\item{$E_7({\omega}_7)$, a rational homogeneous manifold of type $E_7$.}
\end{enumerate}
Furthermore, in the cases $\rm (i)$--$\rm (vii)$ we have $l_{\mathscr{K}}=3$.
\end{them}
\begin{proof}
Since $i_X=\frac{2}{3}n$ is an integer, $n$ is divisible by $3$; hence $n$ is $3$, $6$, or at least $9$. If $n=3$, then $X$ is a del Pezzo $3$-fold, so Remark~\ref{3fold} implies that $(l_{\mathscr{K}}, d_{\mathscr{K}})=(3,1)$. Moreover, $X$ is isomorphic to one of the manifolds listed in $\rm (i)$, $\rm (ii)$ or $\rm (iii)$ by the classification results of Fujita and Iskovskikh \cite{F1,F2,F3}. In the case where $n=6$, we have $l_{\mathscr{K}}=2$ or $X$ is $LG(3,6)$ by Theorem~\ref{coindex3}.
From now on, we assume $n \geq 9$. In this case, we have $2i_X > n+1$, so $d_{\mathscr{K}}=1$, that is, $X$ is covered by lines. By Proposition~\ref{hwlem1}, the variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is smooth, irreducible and non-degenerate; here we use the fact that our assumption gives $2p \geq n-1$. Hence ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is a non-degenerate irreducible projective manifold of dimension $\frac{2}{3}n-2$. By Zak's theorem on linear normality, the classification of Severi varieties \cite{Za} and the assumption that $n \geq 9$, either $S^1{\mathscr{C}}_x = \mathbb{P}(T_xX)$, or ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is isomorphic to the Segre product $\mathbb{P}^2 \times \mathbb{P}^2 \subset \mathbb{P}^8$, the Grassmann variety $G(2,6) \subset \mathbb{P}^{14}$ or the $E_6$-variety $E_6({\omega}_1) \subset \mathbb{P}^{26}$. In the former case Theorem~\ref{HK} implies that $l_{\mathscr{K}}=2$. So we assume that the latter holds. Remark that the above Segre variety, Grassmann variety and $E_6$-variety are the varieties of minimal rational tangents of $G(3,6)$, $S_5$ and $E_7({\omega}_7)$ respectively (see, for example, \cite{HM,E,LM}). By Theorem~\ref{HHM}, $X$ is isomorphic to one of these varieties. In this case, since ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is a Severi variety, $S^1{\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ is a hypersurface \cite{Za}. This implies $d_2=n-1$ \cite[Theorem 3.14]{HK}. Hence $l_{\mathscr{K}}=3$.
\end{proof}
\begin{cor}\label{Zak1} Let $X$ be a prime Fano $n$-fold of $\rho=1$ with $i_X=\frac{2}{3}n$ and ${\mathscr{K}}$ a minimal rational component of $X$. Assume that $n \geq 6$. Then the following are equivalent.
\begin{enumerate}
\item{$l_{\mathscr{K}} \neq 2$.}
\item{$l_{\mathscr{K}}=3$.}
\item{$X \subset \mathbb{P}(H^0(X, \mathscr{O}_X(1))^{\vee})$ is not conic-connected.}
\item{The dimension of the secant variety $S^1X \subset \mathbb{P}(H^0(X, \mathscr{O}_X(1))^{\vee})$ is $2n+1$.}
\item{The variety of minimal rational tangents ${\mathscr{C}}_x \subset \mathbb{P}(T_xX)$ at a general point is a Severi variety.}
\item{$X \subset \mathbb{P}(H^0(X, \mathscr{O}_X(1))^{\vee})$ is projectively equivalent to one of the manifolds listed in Theorem~\ref{2/3} $\rm (iv)$--$\rm (vii)$.}
\end{enumerate}
\end{cor}
\begin{proof} By the above theorem and its proof, $\rm (i), (ii), (v)$ and $\rm (vi)$ are equivalent to each other. In general, if $X \subset \mathbb{P}^N$ is conic-connected, then the dimension of the secant variety $S^1X$ is smaller than $2n +1$ (see \cite[Proposition 3.2]{IR1} and \cite[Theorem 2.1]{R}). Hence $\rm (iv) \Rightarrow (iii)$ holds. $\rm (iii) \Rightarrow (i)$ follows from Lemma~\ref{CC}. Finally, $\rm (vi) \Rightarrow (iv)$ comes from \cite{K}.
\end{proof}
\begin{rem}\label{conicdefective} \rm
Corollary~\ref{Zak1} and Theorem~\ref{bound} imply that $i_X=\frac{2}{3}n$ is also a boundary for conic-connectedness and for defectiveness of the secant variety (Remark~\ref{defective}):
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Property & $i_X>\frac{2}{3}n$ & \multicolumn{2}{|c|}{$i_X=\frac{2}{3}n$} \\ \hline \hline
$l_{\mathscr{K}}$ & $2$ & $2$ & $3$ \\
Conic-connectedness & Yes & Yes & No \\
Defectiveness of the secant variety & Yes & Yes & No \\
\hline
\end{tabular}
\end{center}
\end{rem}
\section{Lengths of Fano manifolds admitting the structures of double covers}
Let $X$ be a Fano $n$-fold with ${\rm Pic}(X) \cong \mathbb{Z}[\mathscr{O}_X(1)]$, where $\mathscr{O}_X(1)$ is ample and $n:=\dim X \geq 3$. In this section, we assume that $X$ admits a double cover $\pi: X \rightarrow Y$ onto a projective manifold $Y$. A Barth-type theorem \cite{L} implies that ${\rm Pic}(X)\cong {\rm Pic}(Y)$ and that $\pi^*\mathscr{O}_Y(1) \cong \mathscr{O}_X(1)$, where $\mathscr{O}_Y(1)$ is the ample generator of the Picard group of $Y$. It follows from the ramification formula for branched covers that $Y$ is a Fano manifold. We denote by $B \in |\mathscr{O}_Y(b)|$ the branch divisor of $\pi$ and by ${\mathscr{R}}_1$ the family ${\rm RatCurves}^n_1(X)$ of rational curves of degree $1$. We assume that ${\mathscr{R}}_1$ is a dominating family. Then we can define the $k$-th locus ${\rm loc}^k_{{\mathscr{R}}_1}(x)$ and the length with respect to ${\mathscr{R}}_1$ as in Definition~\ref{loc} and Definition~\ref{leng}.
\begin{pro}\label{criterion} Let $X$ and ${\mathscr{R}}_1$ be as above. Then the following hold.
\begin{enumerate}
\item{$X$ is $2$-connected by ${\mathscr{R}}_1$ if and only if for general $x_1, x_2 \in X$, $\pi({\rm loc}^1_{{\mathscr{R}}_1}(x_1)) \cap \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_2)) \neq \emptyset$.}
\item{Under the assumption that $\mathscr{O}_Y(1)$ is spanned, $X$ is $2$-connected by ${\mathscr{R}}_1$ if and only if for general points $y_1, y_2 \in Y$ there exist curves $l_1 \ni y_1, l_2 \ni y_2$ on $Y$ such that $l_1 \cap l_2 \neq \emptyset$, $\mathscr{O}_Y(1).l_i=1$ and ${\rm length}_q(B \cap l_i) \equiv 0 \ {\rm mod}\ 2$ for any $q \in Y$ and $i=1,2$.}
\end{enumerate}
\end{pro}
\begin{proof} $\rm (i)$ The ``only if'' part is trivial. We show the converse. Let $x_1, x_2$ be general points on $X$ which are not contained in the ramification locus of $\pi$ and set $y_2:=\pi(x_2)$. Then we have $\pi^{-1}(y_2)= \{x_2, {x_2}'\}$. Assume that there exists a point $z \in \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_1)) \cap \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_2))$. Then for $i=1,2$ there exists a curve $[l_{x_i}] \in {\mathscr{R}}_1$ such that $x_i \in l_{x_i}$ and $z \in \pi(l_{x_i})$. Since $\pi(l_{x_2}) \subset Y$ is a curve of degree $1$, $\pi^{-1}(\pi(l_{x_2}))$ is a curve of degree $2$. It follows from the inclusion $l_{x_2} \subset \pi^{-1}(\pi(l_{x_2}))$ that there exists a curve $[l_{{x_2}'}] \in {\mathscr{R}}_{1, {x_2}'}$ such that $\pi^{-1}(\pi(l_{x_2}))=l_{x_2} \cup l_{{x_2}'}$. Our assumption implies that $l_{x_1} \cap l_{x_2} \neq \emptyset$ or $l_{x_1} \cap l_{{x_2}'} \neq \emptyset$. So $x_2$ or ${x_2}'$ is contained in ${\rm loc}^2_{{\mathscr{R}}_1}(x_1)$. This means that $\pi|_{{\rm loc}^2_{{\mathscr{R}}_1}(x_1)}: {\rm loc}^2_{{\mathscr{R}}_1}(x_1) \rightarrow Y$ is dominant. Since $\pi|_{{\rm loc}^2_{{\mathscr{R}}_1}(x_1)}$ is proper, it is surjective. Hence we see $X = {\rm loc}^2_{{\mathscr{R}}_1}(x_1)$.
$\rm (ii)$ Suppose that $\mathscr{O}_Y(1)$ is spanned. Let $l$ be a rational curve on $Y$ satisfying $\mathscr{O}_Y(1).l=1$ and denote $\pi^{-1}(l)$ by $C$. From $\rm (i)$, it is sufficient to show the following claim.
\begin{cl}
$C$ is reducible if and only if ${\rm length}_q(B \cap l) \equiv 0$ ${\rm mod}$ $2$ for any $q \in Y$.
\end{cl}
For the double cover $\pi:X \rightarrow Y$, we have $\pi_*\mathscr{O}_X \cong \mathscr{O}_Y \oplus L^{-1}$, where $L$ is an ample line bundle on $Y$ which satisfies $L^{\otimes 2} \cong \mathscr{O}_Y(B)$. Furthermore, there exists a closed embedding $X \hookrightarrow \mathbb{L} :={\rm Spec}({\rm Sym}\, L^{-1})$ over $Y$. Denote by $\pi_C$ the restriction of $\pi$ to $C$ and by $L_l$ that of $L$ to $l$. Since $X$ is a divisor on $\mathbb{L}$, we can write down the defining equation of $X$ on $\mathbb{L}$.
In particular, we see that there exists a global section $s \in \Gamma(C,{\pi_C}^*L_l)$ such that $s^2={\pi_C}^*{\phi}$, where $\phi \in \Gamma(\mathbb{P}^1,\mathscr{O}_{\mathbb{P}^1}(b))$ and $({\phi}=0)= l \cap B$ as divisors of $l$. We may assume that $\pi_C$ is unramified at ${\infty} \in \mathbb{P}^1$. Then we see that $C$ is reducible if and only if $\pi_C^{-1}(\mathbb{A}^1)$ is reducible. Without loss of generality, we may assume that ${\phi}=(x-a_1y)\cdots(x-a_by)$, where $a_i \in \mathbb{C}$ and $\Gamma(\mathbb{P}^1,\mathscr{O}_{\mathbb{P}^1}(b)) \cong \bigoplus_{i=0}^{b} \mathbb{C}\, x^iy^{b-i}$. Furthermore, we may assume $\pi_C^{-1}(\mathbb{A}^1)=(s^2=(x-a_1)\cdots(x-a_b)) \subset \mathbb{A}^2$. Thus $C$ is reducible if and only if the cardinality $\# \{ j \,|\, a_j=a_i \} \equiv 0$ mod $2$ for any $i$. Hence we obtain our assertion.
\end{proof}
\begin{cor}\label{f.proj} Let $X$, $Y$ and ${\mathscr{R}}_1$ be as above. If $Y={\mathbb{P}}^n$ and $n \geq b$, then $X$ is $2$-connected by ${\mathscr{R}}_1$.
\end{cor}
\begin{proof} There exists a standard rational curve $f: {\mathbb{P}}^1 \rightarrow X$ such that $f^{*}T_X \cong {\mathscr{O}}(2) \oplus {\mathscr{O}}(1)^p \oplus {\mathscr{O}}^{n-1-p}$. By the ramification formula, the Fano index $i_X$ of $X$ is equal to $n+1-\frac{b}{2}$. It follows from the assumption $n \geq b$ that $i_X > \frac{n+1}{2}$. Hence we have $\deg f_*({\mathbb{P}}^1)=1$ and $p=n-\frac{b}{2}-1$. For two general points $x_1, x_2 \in X$,
\begin{equation}
\dim \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_1)) + \dim \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_2)) - \dim {\mathbb{P}}^n=2\left(n-\frac{b}{2}\right)-n=n-b \geq 0.
\end{equation}
Hence Proposition~\ref{criterion} implies that $X$ is $2$-connected by ${\mathscr{R}}_1$.
\end{proof}
\begin{cor}\label{h.proj} Let $X$, $Y$ and ${\mathscr{R}}_1$ be as above. If $Y \subset {\mathbb{P}}^{n+1}$ is a hypersurface of degree $d$ and $n \geq 2d+b-1$, then $X$ is $2$-connected by ${\mathscr{R}}_1$.
\end{cor}
\begin{proof} There exists a standard rational curve $f: {\mathbb{P}}^1 \rightarrow X$ such that $f^{*}T_X \cong {\mathscr{O}}(2) \oplus {\mathscr{O}}(1)^p \oplus {\mathscr{O}}^{n-1-p}$. By the ramification formula, the Fano index $i_X$ of $X$ is equal to $n+2-d-\frac{b}{2}$. Since $i_X > \frac{n+1}{2}$ by the assumption $n \geq 2d+b-1$, we have $\deg f_*({\mathbb{P}}^1)=1$ and $p=n-\frac{b}{2}-d$. For two general points $x_1, x_2 \in X$,
\begin{equation}
\dim \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_1)) + \dim \pi({\rm loc}^1_{{\mathscr{R}}_1}(x_2)) - \dim {\mathbb{P}}^{n+1} \geq 0.
\end{equation}
Thus $X$ is $2$-connected by ${\mathscr{R}}_1$.
\end{proof}
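The two corollaries above rely only on elementary numerology: the Fano index coming from the ramification formula and the dimension count for two general points. As a quick sanity check (our own, not part of the argument), the following Python sketch verifies numerically that $i_X>\frac{n+1}{2}$ under the stated assumptions.
\begin{verbatim}
# Numerical sanity check of the index bounds used above:
# double cover of P^n branched in even degree b: i_X = n + 1 - b/2 (need n >= b);
# double cover of a degree-d hypersurface in P^{n+1}: i_X = n + 2 - d - b/2
# (need n >= 2d + b - 1). Both should exceed (n + 1)/2.
def check_bounds(n_max=30):
    for n in range(1, n_max + 1):
        for b in range(2, n + 1, 2):                 # n >= b, b even
            assert n + 1 - b / 2 > (n + 1) / 2
        for d in range(1, n + 1):
            for b in range(2, n - 2 * d + 2, 2):     # n >= 2d + b - 1
                assert n + 2 - d - b / 2 > (n + 1) / 2
    return True

print(check_bounds())
\end{verbatim}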
{\bf Acknowledgements}
The author would like to thank his supervisor Professor Hajime Kaji for fruitful advice and encouragement. He would also like to thank the referee for the careful reading of the manuscript and useful comments. The author is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists.
\end{document}
\begin{document}
\title{A function field analogue of the Rasmussen-Tamagawa conjecture: The Drinfeld module case}
\thispagestyle{empty}
\begin{abstract}
In the arithmetic of function fields, Drinfeld modules play the role that elliptic curves play in the arithmetic of number fields.
The aim of this paper is to study a non-existence problem for Drinfeld modules with constraints on torsion points at places of large degree.
This is motivated by a conjecture
of Christopher Rasmussen and Akio Tamagawa on the non-existence of abelian varieties over number fields with some arithmetic constraints.
We prove the non-existence of Drinfeld modules satisfying Rasmussen-Tamagawa type conditions in the case where the inseparable degree of the base field is not divisible by the rank of the Drinfeld module.
Conversely, if the rank divides the inseparable degree, then we give an example of Drinfeld modules satisfying Rasmussen-Tamagawa type conditions.
\end{abstract}
\pagestyle{myheadings}
\markboth{Y.\ Okumura}{Non-existence of certain Drinfeld modules}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnote[0]{2010 Mathematics Subject Classification:\ Primary 11G09;\ Secondary 11R58.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnote[0]{Keywords:\ Drinfeld\ modules;\ Rasmussen-Tamagawa\ conjecture;\ Galois representations.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\section{Introduction}
Let $p$ be a prime number and fix some $p$-power $q=p^m$.
Write $A:=\bF_q[t]$ for the polynomial ring in one variable $t$ over $\bF_q$ and set $F:=\bF_q(t)$.
Let $K$ be a finite extension of $F$.
In this article, we identify every monic irreducible element $\pi$ of $A$ with the corresponding finite place of $F$.
Write $\bF_\pi=A/\pi A$ for the residue field at $\pi$ and set $q_\pi:=\#\bF_\pi=q^{\deg (\pi)}$.
Let $r$ be a positive integer and $\pi$ a monic irreducible element of $A$.
Define $\sD(K,r,\pi)$ to be the set of isomorphism classes $[\phi]$ of rank-$r$ Drinfeld modules
over $K$ which satisfy the following two conditions:
\begin{itemize}
\setlength{\leftskip}{0.5cm}
\item[(D1)] $\phi$ has good reduction at any finite places of $K$ not lying above $\pi$,
\item[(D2)] the mod $\pi$ representation $\mrep{\phi}{\pi}:G_K \rightarrow \GL{r}{\bF_\pi}$ attached to $\phi$
is of the form
\[
\mrep{\phi}{\pi} \simeq
\begin{pmatrix}
\chi_\pi^{i_1} & * & \cdots & *\\
& \chi_\pi^{i_2} & \ddots & \vdots \\
& & \ddots & * \\
& & &\chi_\pi^{i_{r}}
\end{pmatrix},
\]
where $\chi_\pi$ is the mod $\pi$ Carlitz character (see Example \ref{Carlitz}) and $0 \leq i_1, \ldots,i_r \leq q_\pi-1$ are integers.
\end{itemize}
Consider the following:
\begin{prob}\label{conj}
Does there exist a positive constant $C>0$ depending only on $K$ and $r$ which satisfies the following:
if $\deg(\pi)>C$, then the set $\sD(K,r,\pi)$ is empty?
\end{prob}
The motivation for this question is a non-existence conjecture on abelian varieties stated by Rasmussen and Tamagawa \cite{RT1}.
Let $k$ be a finite extension of $\bQ$ and $g$ a positive integer.
For a prime number $\ell$, denote by $\tilde{k}_\ell$ the maximal pro-$\ell$ extension of $k(\mu_\ell)$ which is unramified outside $\ell$, where $\mu_\ell=\mu_\ell(\bar{k})$ is the set of $\ell$-th roots of unity.
For an abelian variety $X$ over $k$, write $k(X[\ell^\infty]):=k(\bigcup_{n \geq 1} X[\ell^n])$ for the field generated by all $\ell$-power torsion points of $X$.
Define $\sA(k,g,\ell)$ to be the set of isomorphism classes $[X]$ of $g$-dimensional abelian varieties over $k$ which satisfy the following equivalent conditions:
\begin{itemize}
\setlength{\leftskip}{0.5cm}
\item[(RT-1)] $k(X[\ell^\infty]) \subseteq \tilde{k}_\ell$,
\item[(RT-2)] $X$ has good reduction at any finite place of $k$ not lying above $\ell$ and $k(X[\ell])/k(\mu_\ell)$ is an $\ell$-extension,
\item[(RT-3)] $X$ has good reduction at any finite place of $k$ not lying above $\ell$ and
the mod $\ell$ representation $\bar{\rho}_{X,\ell}:G_k\rightarrow\GL{\bF_\ell}{X[\ell]}\simeq \GL{2g}{\bF_\ell}$ is of the form
\[
\bar{\rho}_{X,\ell} \simeq
\begin{pmatrix}
\chi^{i_1}_\ell & * & \cdots & *\\
& \chi^{i_2}_\ell & \ddots & \vdots \\
& & \ddots & * \\
& & &\chi^{i_{2g}}_\ell
\end{pmatrix} ,
\]
where $\chi_\ell$ is the mod $\ell$ cyclotomic character.
\end{itemize}
These conditions come from the study of a question of Ihara \cite{Ihara} related to the kernel of the canonical outer Galois representation of the pro-$\ell$ fundamental group of
$\bP^1 {\setminus} \{ 0,1,\infty\} $; see \cite{RT1}.
Rasmussen and Tamagawa conjectured the following:
\begin{conj}[Rasmussen and Tamagawa {\cite[Conjecture 1]{RT1}}] \label{RTconj}
The set $\sA(k,g,\ell)$ is empty for any $\ell$ large enough.
\end{conj}
\noindent
Since the set $\sA(k,g,\ell)$ is always finite (see Section 5, or \cite{RT1}), the conjecture is equivalent to saying that the union $\bigcup_\ell \sA(k,g,\ell)$ is also finite.
For example, the following cases are known:
\begin{itemize}
\item $k=\bQ$ and $g=1$ \cite[Theorem 2]{RT1},
\item $k=\bQ$ and $g=2,3$ \cite[Theorems 7.1 and 7.2]{RT2},
\item for abelian varieties with everywhere semistable reduction \cite[Corollary 4.5]{Ozeki} and \cite[Theorem 3.6]{RT2},
\item for abelian varieties with abelian Galois representations \cite[Corollary 1.3]{OzekiCM},
\item for QM abelian surfaces over certain imaginary quadratic fields \cite[Theorem 9.3]{Arai}.
\end{itemize}
We notice that,
under the assumption of the Generalized Riemann Hypothesis (GRH) for Dedekind zeta functions of number fields, the conjecture is true in general \cite[Theorem 5.1]{RT2}.
The key tool in this proof is the effective version of the Chebotarev density theorem for number fields, which holds under GRH.
Rasmussen and Tamagawa also state the ``uniform version'' of the conjecture \cite[Conjecture 2]{RT2}, which says that one can take a lower bound of $\ell$ satisfying $\sA(k,g,\ell)=\varnothing$ depending only on the degree $[k:\bQ]$ and $g$.
For instance, the uniform version of the conjecture for CM abelian varieties is proved by Bourdon \cite[Corollary 1]{Bou} and Lombardo \cite[Theorem 1.3]{L}.
Under GRH, the uniform version of the conjecture is true if $[k:\bQ]$ is odd \cite[Theorem 5.2]{RT2}.
The arithmetic properties of Drinfeld modules are similar to those of elliptic curves over number fields.
Under this analogy, the condition (D1)$+$(D2) can be regarded as a natural translation of the condition (RT-3).
In fact, we also have Drinfeld module versions of (RT-1) and (RT-2);
we will show that the set $\sD(K,r,\pi)$ is characterized by three equivalent conditions as in the abelian variety case (Proposition \ref{DRTcondi}).
The main purpose of this paper is to give a complete answer to Question \ref{conj}:
\begin{thm}[{Theorem \ref{main1} and Theorem \ref{main2}}]\label{MM}
If $r$ does not divide the inseparable degree $[K:F]_{\rm i}$ of $K/F$, then the set $\sD(K,r,\pi)$ is empty for any $\pi$ whose degree is large enough.
\end{thm}
\begin{thm}[{Theorem \ref{nonempty}}]\label{NE}
If $r$ divides $[K:F]_{\rm i}$, then the set $\sD(K,r,\pi)$ is non-empty for every $\pi$.
\end{thm}
The proof of Theorem \ref{MM} splits into two cases: (i) $r=p^\nu$, and (ii) $r=r_0\cdot p^{\nu}$ for some $r_0>1$ which is prime to $p$.
In the case (ii),
we use the effective version of the Chebotarev density theorem for function fields proved by Kumar Murty and Scherk \cite{KS}; this is a modification of the strategy in \cite{RT2}.
In this case, the uniform version, which is an analogue of \cite[Theorem 5.2]{RT2}, is also shown (Theorem \ref{uniform}).
However, the same argument does not work in the case (i).
The proof in the case (i) relies on observations about the tame inertia weights of $\mrep{\phi}{\pi}$
for any $[\phi] \in \sD(K,r,\pi)$.
This technique is used in \cite{Ozeki} and \cite{RT2}.
There are differences between the number field setting and the function field setting.
Indeed, if $r$ divides $[K:F]_{\rm i}$ (a situation which has no counterpart in the number field setting), then we construct a rank-$r$ Drinfeld module $\Phi$ over $K$ satisfying (D1) and (D2) for any $\pi$, which proves Theorem \ref{NE}.
This means that in this case the Drinfeld module analogue of the Rasmussen-Tamagawa conjecture does not hold,
although the original conjecture is generally true under GRH.
The organization of the paper is as follows.
In Section 2, after reviewing several basic facts on Drinfeld modules, we study the ramification of Galois representations coming from Drinfeld modules, whose consequences are needed in the next section.
In Section 3, for any $[\phi] \in \sD(K,r,\pi)$, an important integer $e_\phi$ is introduced and
we prove some non-trivial properties of it, which imply the result in the case (i).
The aim of Section 4 is to give the proof in the case (ii).
For any $[\phi] \in \sD(K,r,\pi)$, we introduce a character $\chi(m_\phi)$ and show that it is non-trivial at the Frobenius elements of places satisfying certain conditions.
This contradicts a consequence of the effective Chebotarev density theorem if $\deg(\pi)$ is sufficiently large.
Finally, in Section 5, we construct a Drinfeld module satisfying both (D1) and (D2) for any $\pi$
in the case where $r$ divides $[K:F]_{\rm i}$.
We also show that the set $\sD(K,r,\pi)$ is infinite if $\pi=t$ and $r \geq 2$.
\subsection*{Notation}
Let $p,q,A,F$, and $K$ be as above.
Set $n_K^{}:=[K:F]$ and write $K_{\rm s}$ for the separable closure of $F$ in $K$.
For a finite place $u$ of $K$ above $\pi$, let $K_u$ be the completion of $K$ at $u$, $\cO_{K_u}$ its valuation ring, and $\bF_u$ its residue field.
We use the same symbol $u$ for the normalized valuation of $K_u$.
Set $q_u^{ }:= \# \bF_u$.
Identify $G_{K_u}$ with the decomposition group of $G_K$ at $u$ and regard it as a subgroup of $G_K$.
Denote by $I_{K_u}$ the inertia subgroup of $G_{K_u}$ at $u$ and choose a lift $\Frob{u}{} \in G_{K_u}$ of the Frobenius element of $G_{K_u}/I_{K_u}$.
Denote by $e_{u|\pi}$ the absolute ramification index of $u$ and set $f_{u|\pi}:= [\bF_u:\bF_\pi]$.
Let $F_\infty:=\bF_q((1/t))$ be the completion of $F$ at the infinite place $\infty$ of $F$ and $\bC_\infty$ the completion of a fixed algebraic closure of $F_\infty$.
Every algebraic extension of $F$ is always regarded as a subfield of $\bC_\infty$.
Let $|\cdot|$ be the absolute value of $F_\infty$ attached to the normalized valuation of $F_\infty$.
We also denote by $|\cdot|$ the unique extension of it to $\bC_\infty$, which defines an absolute value of each algebraic extension of $F$ by restriction.
For any non-zero $a \in A$, we see that
$|a|=\# (A{/}aA)=q^{\deg(a)} $.
For any field $L$, denote by $G_L:=\Gal{L^{\rm sep}}{L}$ the absolute Galois group of $L$.
The notation $C=C(x,y,\ldots,z)$ indicates a constant $C$ depending only on
$x,y,\ldots,$ and $z$.
We use the notation $\rho^{\rm ss}$ for the semisimplification of a representation $\rho$.
\section*{Acknowledgments}
The author is grateful to his supervisor, Yuichiro Taguchi, for giving him useful advice about Drinfeld modules and for his guidance in preparing this paper.
The author is greatly indebted to Akio Tamagawa for pointing out a mistake in the preprint and for providing
his idea to construct examples in Propositions \ref{main3} and \ref{m}.
The author also would like to thank Yoshiyasu Ozeki for his helpful comments on Proposition \ref{DRTcondi}.
\section{Drinfeld modules}
\subsection{Basic definitions}
Let $L$ be a field equipped with an $\bF_q$-algebra homomorphism $\iota:A \rightarrow L$.
Such a pair $(L,\iota)$ is called an {\it $A$-field}.
Let $\bG_{a,L}$ be the additive group scheme defined over $L$.
Denote by $\End{\bF_q}{\bG_{a, L}}$ the ring of $\bF_q$-linear endomorphisms of $\bG_{a,L}$.
It is isomorphic to the non-commutative polynomial ring $L\{ \tau \}$
in one variable $\tau$ satisfying $\tau c=c^{q}\tau$ for any $c \in L$, where $\tau$ is the $q$-power Frobenius map.
Let $r$ be a positive integer.
\begin{definition}\label{Dr}
A {\it Drinfeld module} $\phi$ of rank $r$ defined over the $A$-field $L$ is an $\bF_q$-algebra homomorphism
\[
\phi:A \rightarrow L \{ \tau \};a \mapsto \phi_a
\]
such that $\phi_t=\iota(t) + c_1\tau + \cdots + c_r\tau^r$, where $c_1, \ldots ,c_r \in L$ and $c_r \neq 0$.
\end{definition}
Note that $\phi$ is completely determined by the image $\phi_t$ of $t$.
For two Drinfeld modules $\phi$ and $\psi$ over $L$,
a {\it morphism} from $\phi$ to $\psi$ is an element $f \in L\{ \tau \}$
such that $f \phi_a = \psi_a f$ for any $a \in A$.
We say that $f$ is an {\it isomorphism} if there exists a morphism $g$ from $\psi$ to $\phi$ such that $fg=gf=1$.
It is easy to check that $f$ is an isomorphism if and only if $f \in L^\times$.
For any $a \in A$, its image $\phi_a$ is an endomorphism of $\bG_{a,L}$, so that $\phi$ endows the additive group $\bG_{\mathrm{a},L}(L^{\mathrm{sep}})\simeq L^{\mathrm{sep}}$ with a new $A$-module structure defined by $a\cdot \lambda:=\phi_a(\lambda)$.
Denote this $A$-module by ${}_{\phi}L^{\mathrm{sep}}$.
For any non-zero element $a \in A$, the set of {\it $a$-torsion points}
\[
\phi[a]=\{ \lambda \in {}_{\phi}L^{\mathrm{sep}}; \ a\cdot \lambda = \phi_a(\lambda)=0 \}
\]
of $\phi$ is an $A$-submodule of ${}_{\phi}L^{\mathrm{sep}}$ on which $G_L$ acts.
If $a$ is not contained in the kernel of $\iota$, then $\phi[a]$ is a free $A/aA$-module of rank $r$.
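Since $\phi$ is determined by $\phi_t$, each $\phi_a$ can be expanded mechanically from the commutation rule $\tau c = c^{q}\tau$. The following Python sketch (our own illustration, not part of the theory; we assume $q=p$ prime and take coefficients in $A$, as happens for the Carlitz module of Example \ref{Carlitz} below) computes $\phi_{t^2}=\phi_t\cdot\phi_t$ for $\phi_t=t+\tau$; note that the $\tau$-degree of $\phi_a$ equals $r\deg(a)$, in agreement with $\phi[a]$ being free of rank $r$ over $A/aA$.
\begin{verbatim}
# Skew polynomials in F_p[t]{tau}, with tau*c = c^p*tau (we take q = p prime).
# A polynomial in F_p[t] is a list of coefficients (lowest degree first); a skew
# polynomial is a list of such lists, the i-th entry being the coefficient of tau^i.
p = 3

def pmul(f, g):                            # product in F_p[t]
    res = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            res[i + j] = (res[i + j] + a * b) % p
    return res

def ppow(f, n):                            # f**n in F_p[t]
    out = [1]
    for _ in range(n):
        out = pmul(out, f)
    return out

def skew_mul(F, G):                        # product in F_p[t]{tau}
    res = [[0] for _ in range(len(F) + len(G) - 1)]
    for i, f in enumerate(F):
        for j, g in enumerate(G):
            term = pmul(f, ppow(g, p ** i))        # tau^i * g = g^(p^i) * tau^i
            k = i + j
            width = max(len(res[k]), len(term))
            res[k] = [((res[k][x] if x < len(res[k]) else 0)
                       + (term[x] if x < len(term) else 0)) % p
                      for x in range(width)]
    return res

C_t = [[0, 1], [1]]                        # Carlitz module: C_t = t + tau
print(skew_mul(C_t, C_t))                  # C_{t^2} = t^2 + (t + t^p)*tau + tau^2
\end{verbatim}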
Let $K$ be a finite extension of $F$.
From now on, unless otherwise stated, we regard $K$ as an $A$-field via the inclusion $A \hookrightarrow F \subset K$.
Let $\phi$ be a rank-$r$ Drinfeld module over $K$.
For any finite place $v$ of $K$,
we can regard $\phi$ as a Drinfeld module over $K_v$ via the canonical inclusion $K\{ \tau \} \hookrightarrow K_v\{ \tau \}$.
\begin{definition}
(1) We say that $\phi$ has {\it stable reduction} at $v$ if there exists a Drinfeld module $\psi$ over $K_v$ which is isomorphic to $\phi$ over $K_v$ and satisfies
\[
\psi_t= t + c'_1\tau+ \cdots + c'_r\tau^r
\]
with $c'_1 , \ldots , c'_r \in \cO_{K_v}$ and $c'_{r'} \in \cO_{K_v}^\times$ for some $1 \leq r' \leq r$.
(2) We say that $\phi$ has {\it good reduction} at $v$ if it has stable reduction at $v$ and $c'_r \in \cO_{K_v}^\times$.
\end{definition}
\begin{prop}[Drinfeld {\cite[Proposition 7.1]{Dri}}]
Every Drinfeld module $\phi$ over $K$ has potentially stable reduction at any finite place $v$ of $K$.
\end{prop}
\begin{proof}
Write $\phi_t=t + c_1\tau + \cdots + c_r\tau^r$ and set $R:=\min_{1 \leq s \leq r} \{ \frac{v(c_s)}{q^s-1} \}$.
Let $K_v'$ be a finite extension of $K_v$.
If the ramification index $e(K_v'/K_v) $ satisfies $e(K_v'/K_v) \cdot R \in \bZ$, then
$\phi$ has stable reduction over $K_v'$.
\end{proof}
\begin{rem}\label{stablereduction}
In particular, we can take $K_v'/K_v$ to be a tamely ramified finite separable extension whose ramification index $e(K_v'/K_v)$ divides $\prod_{s=1}^r(q^s-1)$.
Every rank-one Drinfeld module clearly has potentially good reduction at any finite place.
\end{rem}
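The quantity $R$ appearing in the proof above is easy to compute in concrete cases. The following Python sketch (our own illustration, on hypothetical valuations $v(c_s)$) returns the smallest ramification index $e$ with $e\cdot R\in\bZ$ and checks that it divides $\prod_{s=1}^{r}(q^s-1)$, as asserted in the Remark.
\begin{verbatim}
# R = min_s v(c_s)/(q^s - 1); the smallest e with e*R integral is the
# denominator of R, and it divides prod_{s=1}^r (q^s - 1).
from fractions import Fraction
from math import prod

def min_ram_index(q, vals):
    """vals[s-1] = v(c_s) for s = 1, ..., r (hypothetical data)."""
    R = min(Fraction(v, q**s - 1) for s, v in enumerate(vals, start=1))
    return R.denominator

q, vals = 3, [5, 7, 4]           # a made-up rank-3 example
e, r = min_ram_index(q, vals), len(vals)
assert prod(q**s - 1 for s in range(1, r + 1)) % e == 0
print(e)                         # 13 for this example
\end{verbatim}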
For any monic irreducible element $\pi \in A$,
the set of $\pi$-torsion points $\phi[\pi]$ is a $G_K$-stable $r$-dimensional $\bF_\pi$-vector space, so that
the {\it mod $\pi$ representation}
\[
\mrep{\phi}{\pi}:G_K \rightarrow \GL{\bF_\pi}{\phi[\pi]} \simeq \GL{r}{\bF_\pi}
\]
attached to $\phi$ can be defined.
Let $A_\pi:=\varprojlim A/\pi^n A$ be the $\pi$-adic completion of $A$.
Considering the maps $\phi[\pi^{n+1}] \rightarrow \phi[\pi^n]$ defined by $x \mapsto \pi \cdot x$, one can define the {\it $\pi$-adic Tate module}
$T_\pi(\phi):= \varprojlim \phi[\pi^n]$, which is a free $A_\pi$-module of rank $r$ with continuous $G_K$-action.
Write $\rep{\phi}{\pi}: G_K \rightarrow \GL{r}{A_\pi}$ for the representation attached to $T_\pi(\phi)$.
Let $\pi_0$ be a monic irreducible element of $A$ with $\pi_0 \neq \pi$ and let $v$ be a finite place of $K$ above $\pi_0$.
The next proposition is an analogue of the N{\'e}ron-Ogg-Shafarevich criterion for good reduction of abelian varieties (cf. \cite[Theorem 1]{ST}).
\begin{prop}[Takahashi {\cite[Theorem 1]{Tak}}]\label{NOS}
A Drinfeld module $\phi$ over $K$ has good reduction at $v$ if and only if $T_\pi(\phi)$ is unramified at $v$.
\end{prop}
Suppose that $\phi$ has good reduction at $v$.
Then $\rep{\phi}{\pi}$ is unramified at $v$, and so $\rep{\phi}{\pi}(\Frob{v}{}) \in \rep{\phi}{\pi}(G_K)$ is independent of the choice of $\Frob{v}{}$.
Denote by
\[
P_{v}(T):=\det(T-\rep{\phi}{\pi}(\Frob{v}{}) | T_\pi(\phi)) \in A_\pi[T]
\]
the characteristic polynomial of $\Frob{v}{}$.
Then we have the following:
\begin{prop}[Takahashi {\cite[Proposition 3 (ii)]{Tak}}]\label{good}
The polynomial $P_v(T)$ has coefficients in $A$ and is independent of $\pi$.
Any root $\alpha$ of $P_v(T)$ satisfies $|\alpha|=q_v^{1/r}$.
\end{prop}
The following example gives a function field analogue of cyclotomic extensions of number fields.
\begin{ex}[cf.\ {\cite[Chapter 12]{Ros}}]\label{Carlitz}
The rank-one Drinfeld module $\cC:A \rightarrow F\{ \tau \}$ determined by $\cC_t= t + \tau$ is called the {\it Carlitz module}.
For any monic irreducible element $\pi \in A$, define
\[
\chi_\pi:G_F \rightarrow \GL{\bF_\pi}{\cC[\pi]} \simeq \bF_\pi^\times,
\]
which is called the {\it mod $\pi$ Carlitz character}.
Since $\cC$ has good reduction at any finite place $\pi_0$ of $F$,
the character $\chi_\pi$ is unramified at $\pi_0$ if $\pi_0 \neq \pi$.
For any finite place $v$ of $K$ above $\pi_0 \neq \pi$, it is known that $\chi_\pi $ satisfies
\[
\chi_\pi(\Frob{v}{}) \equiv \pi_0^{f_{v|\pi_0}} \pmod \pi.
\]
The mod $\pi$ Carlitz character induces an isomorphism $\Gal{F(\cC[\pi])}{F} \overset{\sim}{\rightarrow} \bF_\pi^\times$, so that $F(\cC[\pi])/F$ is a cyclic extension which is unramified outside $\pi$ and $\infty$.
Moreover, it is known that $\pi$ is totally ramified in $F(\cC[\pi])$
and the ramification of the infinite place $\infty$ is as follows: there exists a subfield $F(\cC[\pi])_+$ with degree $[F(\cC[\pi]):F(\cC[\pi])_+]=q-1$ such that $\infty$ is totally split in $F(\cC[\pi])_+$ and any place of $F(\cC[\pi])_+$ above $\infty$ is totally ramified in $F(\cC[\pi])$.
\end{ex}
\subsection{Tate uniformization}
Let $u$ be a finite place of $K$ above $\pi$ and $\phi$ a rank-$r$ Drinfeld module over $K$.
Suppose that $\phi$ has stable reduction at $u$.
Then Drinfeld's result on Tate uniformization gives an analytic description of $\phi$ as a Drinfeld module over $K_u$.
\begin{prop}[Tate uniformization; Drinfeld {\cite[Section 7]{Dri}}]
There exist a unique Drinfeld module $\psi$ over $K_u$ with good reduction and a unique entire analytic surjective morphism $e:\psi \rightarrow \phi$ defined over $K_u$ such that
$e$ is the identity on $\mathrm{Lie}(\bG_{a,K_u})$.
\end{prop}
It is known that the rank $r'$ of $\psi$ satisfies $r' \leq r$ and the
kernel $H:=\Ker{e}(K_u^{\rm sep})$ is an $A$-lattice of rank $h:=r-r'$ in ${}_\psi K_u^{\rm sep}$, endowed with an action of a finite quotient of $G_{K_u}$.
For any monic irreducible element $\pi_0 \in A $, the analytic morphism $e$ induces the short exact sequence
\begin{align}\label{Tu}
0 \rightarrow T_{\pi_0}(\psi) \rightarrow T_{\pi_0}(\phi) \rightarrow H\otimes_{A}A_{\pi_0} \rightarrow 0
\end{align}
of $A_{\pi_0}[G_{K_u}]$-modules.
In the case where $\pi_0 \neq \pi$,
the $I_{K_u}$-action on $T_{\pi_0}(\phi)$ is potentially unipotent
since both $T_{\pi_0}(\psi)$ and $H\otimes_AA_{\pi_0}$ are potentially unramified at $u$.
\begin{rem}\label{tau}
By the theory of ``analytic $\tau$-sheaves'' (see \cite{Gar}, \cite{Gar1} and \cite{Gar2}), the sequence (\ref{Tu}) can be interpreted as follows.
For any Drinfeld module $\phi$ over $K_u$, one can construct the analytic $\tau$-sheaf
$\tilde{M}(\phi)$ associated with $\phi$, which is a locally free $\sO_{\tilde{\bA}^1_{K_u}}$-module of finite rank on $\tilde{\bA}^1_{K_u}$ with some additional structures, where $\tilde{\bA}^1_{K_u}$ is the affine line
$\bA_{K_u}^1=\mathrm{Spec}A \times_{\mathrm{Spec}\bF_q} \mathrm{Spec}K_u$, seen as a rigid analytic space.
Then the $\pi_0$-adic Tate module $T_{\pi_0}(\tilde{M}(\phi))$ of $\tilde{M}(\phi)$ can be defined and it is isomorphic to
$T_{\pi_0}(\phi)$.
The Tate uniformization implies that there exist a potentially trivial analytic $\tau$-sheaf $\tilde{N}$ and an exact sequence
\[
0 \rightarrow \tilde{N} \rightarrow \tilde{M}(\phi) \rightarrow \tilde{M}(\psi) \rightarrow 0.
\]
Since $\tilde{M} \mapsto T_{\pi_0}(\tilde{M})$ is a contravariant exact functor, we obtain
\[
0 \rightarrow T_{\pi_0}(\tilde{M}(\psi)) \rightarrow T_{\pi_0}(\tilde{M}(\phi)) \rightarrow T_{\pi_0}(\tilde{N}) \rightarrow 0,
\]
which coincides with the sequence (\ref{Tu}) (for example, see \cite[Example 7.1]{Gar3}).
\end{rem}
We would like to estimate the tame ramification of the lattice $H$.
Suppose that $H$ is non-trivial, that is, $h \neq 0$, and consider the representation
\[
\rho : I_{K_u} \rightarrow \GL{A}{H} \simeq \GL{h}{A}.
\]
Then we have the following:
\begin{prop}\label{stable}
There exists a finite separable extension $L/K_u$ such that
\begin{itemize}
\item the action of $I_L$ on $H$ is trivial,
\item the ramification index $e(L_0/K_u)$ divides $\prod_{s=1}^{r-1}(q^s-1)$, where $L_0$ is the maximal tamely ramified extension of $K_u$ in $L$.
\end{itemize}
\end{prop}
\begin{proof}
Let $E/K_u$ be a finite Galois extension such that the action of $G_{K_u}$ on $H$ factors through $\Gal{E}{K_u}$.
Now the image $I$ of $I_{K_u}$ by the canonical projection map $G_{K_u} \twoheadrightarrow \Gal{E}{K_u}$ is the inertia subgroup of $\Gal{E}{K_u}$.
For the representation
\[
\rho:I \rightarrow \GL{A}{H} \simeq \GL{h}{A},
\]
denote by $L$ the fixed subfield of $E$ by $J:=\Ker{\rho}$.
By construction, the action of the inertia subgroup $I_L$ of $G_L$ on $H$ is trivial.
Now the image $\mathrm{Im}(\rho)$ is a finite subgroup of $\GL{h}{A}$ of order $\#(I/J)=e(L/K_u)$, and $h \leq r-1$.
Applying Lemma \ref{gp} below to $\mathrm{Im}(\rho)$, we see that $e(L_0/K_u)$ divides $\prod_{s=1}^{r-1}(q^s-1)$.
\end{proof}
\begin{rem}
Proposition \ref{stable} means that the Drinfeld module $\phi$ is ``semistable'' over $L$ in the following sense.
By Remark \ref{tau}, the analytic $\tau$-sheaf $\tilde{M}(\phi)$ is the extension
of $\tilde{M}(\psi)$ by $\tilde{N}$ and both $\tilde{M}(\psi)$ and $\tilde{N}$ are ``good'' over $L$.
Hence the analytic $\tau$-sheaf $\tilde{M}(\phi)$ is {\it strongly semistable} over $L$ in the sense of
\cite[Definition 4.6]{Gar2}.
\end{rem}
\begin{lem}\label{gp}
For any positive integer $n$,
let $G$ be a finite subgroup of $\mathrm{GL}_n(A)$.
Then the maximal prime-to-$p$ divisor of $\#G$ is a factor of $\prod_{s=1}^{n}(q^s-1)$.
\end{lem}
\begin{proof}
Consider the $t$-adic completion $\bF_q[[t]]$ of $A$ and regard $G$ as a finite subgroup of $\GL{n}{\bF_q[[t]]}$.
Let $\Gamma_n$ be the kernel of the map $\GL{n}{\bF_q[[t]]} \twoheadrightarrow \GL{n}{\bF_q}$ induced by the reduction map $\bF_q[[t]] \twoheadrightarrow \bF_q$.
Since $\bF_q[[t]]$ is a complete noetherian local ring whose residue field is finite of characteristic $p$, $\Gamma_n$ is a pro-$p$ group.
Hence the short exact sequence $1 \rightarrow \Gamma_n \rightarrow \GL{n}{\bF_q[[t]]} \rightarrow \GL{n}{\bF_q} \rightarrow 1$ shows that the maximal prime-to-$p$ divisor of $\# G$ is a factor of $\# \GL{n}{\bF_q}=q^{n(n-1)/2}\prod_{s=1}^{n}(q^s-1)$.
\end{proof}
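The group-order count used in this proof can be verified directly; the following Python check (our own, on sample parameters) confirms that $\#\GL{n}{\bF_q}=q^{n(n-1)/2}\prod_{s=1}^{n}(q^s-1)$ and that its prime-to-$p$ part is exactly $\prod_{s=1}^{n}(q^s-1)$.
\begin{verbatim}
# #GL_n(F_q) = prod_{i=0}^{n-1} (q^n - q^i) = q^{n(n-1)/2} * prod_{s=1}^{n} (q^s - 1);
# its prime-to-p part equals prod_{s=1}^{n} (q^s - 1).
from math import prod

def gl_order(q, n):
    return prod(q**n - q**i for i in range(n))

def prime_to_p_part(N, p):
    while N % p == 0:
        N //= p
    return N

p, n = 3, 2
q = p            # q = p here; any power of p works as well
assert gl_order(q, n) == q**(n*(n-1)//2) * prod(q**s - 1 for s in range(1, n+1))
assert prime_to_p_part(gl_order(q, n), p) == prod(q**s - 1 for s in range(1, n+1))
print(gl_order(q, n))   # 48 for q = 3, n = 2
\end{verbatim}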
\section{Inertia action on torsion points}
Let $\pi$ be a monic irreducible element of $A$.
In this section, by studying the ramification of mod $\pi$ representations attached to Drinfeld modules, we show the non-existence result (Theorem \ref{main1}) in the case where $r$ is a $p$-power and does not divide $[K:F]_{\rm i}$.
\subsection{Tame inertia weights}
Let $u$ be a finite place of $K$ above $\pi$.
For a fixed separable closure $K^{\rm sep}_u$ of $K_u$ with residue field $\bar{\bF}_u$, denote
by $K_u^{\rm ur}$ (resp. $K_u^{\rm t}$) the maximal unramified (resp. maximal tamely ramified) extension of $K_u$ in $K^{\rm sep}_u$, so that $I_{K_u}$ is isomorphic to $\Gal{K_u^{\rm sep}}{K_u^{\rm ur}}$.
Denote by $I^{\rm t}:=\Gal{K_u^{\rm t}}{K_u^{\rm ur}}$ the tame inertia subgroup of $I_{K_u}$.
Let $d$ be a positive integer and $\bF$ the finite field with $q_\pi^{d}$ elements in $\bar{\bF}_u$.
Then $\bF$ is the finite extension of $\bF_\pi$ of degree $d$.
Write $\mu_{q_\pi^d-1}(K_u^{\rm sep})$ for the set of $(q_\pi^d-1)$-st roots of unity in $K^{\rm sep}_u$ and fix the isomorphism $\mu_{q_\pi^d-1}(K_u^{\rm sep}) \overset{\sim}{\rightarrow} \bF^\times$ coming from the reduction map $\cO_{K_u^{\rm sep}} \twoheadrightarrow \bar{\bF}_u$.
For a uniformizer $\varpi$ of $K_u$, choose a solution $\eta \in K_u^{\rm sep}$ to the equation
$X^{q_\pi^d-1}- \varpi =0$ and define
\[
\omega_{d, K_u}:I_{K_u} \rightarrow \mu_{q_\pi^d-1}(K_u^{\rm sep}) \overset{\sim}{\rightarrow} \bF^\times \ ;
\sigma \mapsto \frac{\eta^\sigma}{\eta},
\]
which is independent of the choices of $\varpi$ and $\eta$.
The character $\omega_{d, K_u}$ factors through $I^{\rm t}$ (cf. \cite{Serre}).
We call the $\Gal{\bF}{\bF_\pi}$-conjugates $(\omega_{d, K_u})^{q_\pi^i}$ for $0\leq i \leq d-1$ of $\omega_{d, K_u}$
the {\it fundamental characters} of level $d$.
It is easy to check that
\[
(\omega_{d, K_u})^{1+q_\pi+\cdots + q_\pi^{d-1}} = \omega_{1, K_u}
\]
and $(\omega_{d, K_u})^{q_\pi^d-1}=1$.
For any finite extension $L$ of $K_u$,
we see that $(\omega_{d, K_u})|_{I_L}=(\omega_{d,L})^{e(L/K_u)}$ by definition.
As an analogue of Serre's classical result on the mod $\ell$ cyclotomic character (\cite[Proposition 8]{Serre}), the following fact is known.
\begin{prop}[Kim {\cite[Proposition 9.4.3.\ (2)]{Kim}}]\label{fund}
The character $(\omega_{1, K_u})^{e_{u|\pi}}$ coincides with the mod $\pi$ Lubin-Tate character restricted to $I_{K_u}$.
\end{prop}
\begin{rem}
The {\it mod $\pi$ Lubin-Tate character} is the character describing the $G_{K_u}$ action on the $\pi$-torsion points of Lubin-Tate formal group over $\cO_{K_u}$ associated with $\pi$.
It coincides with the mod $\pi$ Carlitz character $\chi_\pi$ restricted to $G_{K_u}$, so that $\chi_\pi=(\omega_{1, K_u})^{e_{u|\pi}}$ on $I_{K_u}$.
\end{rem}
Let $V$ be a $d$-dimensional irreducible $\bF_\pi$-representation of $I_{K_u}$.
Then the action of $I_{K_u}$ on $V$ factors through $I^{\rm t}$, so that $V$ can be regarded as a representation of $I^{\rm t}$.
Using Schur's Lemma, we see that $\End{I^{\rm t}}{V}$ is a finite field of order $q_\pi^d$.
Fix an isomorphism $f : \End{I^{\rm t}}{V} \overset{\sim}{\rightarrow} \bF$ and regard $V$ as a one-dimensional $\bF$-representation
\[
\rho:I^{\rm t} \rightarrow \End{I^{\rm t}}{V}^\times \overset{\sim}{\rightarrow} \bF^\times
\]
of $I^{\rm t}$.
Since $I^{\rm t}$ is pro-cyclic and $\omega_{d, K_u}$ is surjective, there exists an integer
$0 \leq j \leq q_\pi^{d}-2$ such that $\rho = (\omega_{d, K_u})^j$.
If we decompose $j=n_0 + n_1q_\pi + \cdots + n_{d-1}q_\pi^{d-1}$ with integers $0 \leq n_s \leq q_\pi-1$, then the set $\{ n_0, n_1,\ldots, n_{d-1} \}$ is independent of the choice of $f$.
\begin{definition}
These numbers $n_0, n_1,\ldots,n_{d-1}$ are called the {\it tame inertia weights} of $V$.
In general, for any $\bF_\pi$-representation $\rho: G_{K_u} \rightarrow \GL{\bF_\pi}{V}$, the tame inertia weights of $\rho$ are the tame inertia weights of all the Jordan-H\"{o}lder quotients of $V|_{I_{K_u}}$.
\end{definition}
Denote by $\mathrm{TI}_{K_u}(\rho)$ the set of tame inertia weights of $\rho:G_{K_u} \rightarrow \GL{d}{\bF_\pi}$.
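Concretely, the tame inertia weights of an irreducible $V$ are just the base-$q_\pi$ digits of the exponent $j$ above; the tiny Python helper below (our own illustration) makes this explicit.
\begin{verbatim}
# Tame inertia weights n_0, ..., n_{d-1}: the base-q_pi digits of j,
# where rho = (omega_{d,K_u})^j and 0 <= j <= q_pi^d - 2.
def tame_inertia_weights(j, q_pi, d):
    weights = []
    for _ in range(d):
        weights.append(j % q_pi)
        j //= q_pi
    return weights

print(tame_inertia_weights(j=14, q_pi=3, d=3))   # 14 = 2 + 1*3 + 1*9 -> [2, 1, 1]
\end{verbatim}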
\subsection{Ramification of constrained torsion points}
Let $\phi$ be a Drinfeld module over $K$ satisfying $[\phi] \in \sD(K,r,\pi)$
and $u$ a finite place of $K$ above $\pi$.
By Remark \ref{stablereduction}, we can take a finite separable extension $K_u'$ of $K_u$ such that $\phi$ has stable reduction and $e(K_u'/K_u)$ divides $\prod_{s=1}^r(q^s-1)$.
By Tate uniformization, we obtain the exact sequence
\begin{align}\label{mrep}
0 \rightarrow \psi[\pi] \rightarrow \phi[\pi] \rightarrow H{\otimes_A}\bF_\pi \rightarrow 0
\end{align}
of $\bF_\pi[G_{K_u'}]$-modules.
We also take a finite separable extension $L$ of $K_u'$ as in Proposition \ref{stable} and denote by $L_0$ the maximal tamely ramified extension of $K_u'$ in $L$.
Set
\[
C_1=C_1(q,r):=(q^r-1)\prod_{s=1}^{r-1}(q^s-1)^2
\]
and $e_u:=e(L_0/K'_u) \cdot e(K_u'/F_\pi)$.
Then $e_u$ divides $e_{u|\pi} C_1(q,r)$.
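The constant $C_1(q,r)$ is elementary to evaluate; the following Python snippet (our own bookkeeping) computes it for sample parameters and checks that it is prime to $p$, a fact used later in the proof of Theorem \ref{main1}.
\begin{verbatim}
# C_1(q, r) = (q^r - 1) * prod_{s=1}^{r-1} (q^s - 1)^2; each factor is -1 mod p,
# so C_1(q, r) is prime to p.
from math import prod, gcd

def C1(q, r):
    return (q**r - 1) * prod((q**s - 1)**2 for s in range(1, r))

p, q, r = 3, 3, 3            # sample values with q = p
print(C1(q, r))              # 26 * 2^2 * 8^2 = 6656
assert gcd(C1(q, r), p) == 1
\end{verbatim}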
\begin{prop}\label{tame}
Every tame inertia weight of $\mrep{\phi}{\pi}|_{G_{L_0}}$ is between $0$ and $e_u$.
\end{prop}
\begin{proof}
By (D2), the restriction
$
\bar{\rho}_{\phi, \pi}^{\rm ss}|_{I_{L_0}}$ is isomorphic to $(\omega_{1, L_0})^{j_1} \oplus \cdots \oplus (\omega_{1, L_0})^{j_r}
$, where $\{ j_1, \ldots, j_r \}=\mathrm{TI}_{L_0}(\mrep{\phi}{\pi})$.
Write $\bar{\rho}:G_{L_0} \rightarrow \GL{h}{\bF_\pi}$ for the representation arising from $H{\otimes}_A\bF_\pi$.
Then the sequence (\ref{mrep}) implies $\bar{\rho}_{\phi, \pi}^{\rm ss} = \bar{\rho}_{\psi, \pi}^{\rm ss} \oplus \bar{\rho}_{}^{\rm ss}$ on $G_{L_0}$, so that
$\mathrm{TI}_{L_0}(\mrep{\phi}{\pi}) = \mathrm{TI}_{L_0}(\bar{\rho}_{\psi,\pi}) \cup \mathrm{TI}_{L_0}(\bar{\rho})$.
Let $\tilde{M}(\psi)$ and $\tilde{N}$ be analytic $\tau$-sheaves on $\tilde{\bA}_{L_0}^1$ attached to $\psi$ and $H$, respectively.
Since $\tilde{M}(\psi)$ is good over $L_0$,
we see that $\mathrm{TI}_{L_0}(\bar{\rho}_{\psi,\pi}) \subset [0, e_u]$ by \cite[Theorem 2.14]{Gar}.
On the other hand, the analytic $\tau$-sheaf $\tilde{N}$ is of dimension zero and good over $L$, so that
every tame inertia weight of $\bar{\rho}|_{G_L}$ is zero by \cite[Theorem 2.14]{Gar}, which means that $\bar{\rho}_{}^{\rm ss}|_{I_L}=1$.
Hence we see that
\[
(\omega_{1,L_0})^j|_{I_{L}}=(\omega_{1,L})^{e(L/L_0) \cdot j}=1
\]
for any $j \in \mathrm{TI}_{L_0}(\bar{\rho})$,
so that $e(L/L_0) \cdot j \equiv 0 \pmod {q_\pi-1}$ holds.
Since $e(L/L_0)$ is a $p$-power and $0 \leq j \leq q_\pi-2$, we see that $j=0$.
\end{proof}
The condition (D2) means that
$
\bar{\rho}_{\phi, \pi}^{\rm ss} $
is isomorphic to
$\chi_\pi^{i_1} \oplus \cdots \oplus \chi_\pi^{i_r}$ for
$0 \leq i_1,\ldots, i_r \leq q_\pi-1$.
By renumbering $\{ j_1, \ldots, j_r\}$ if necessary, Propositions \ref{fund} and \ref{tame} imply that $\chi_\pi^{i_s}|_{I_{L_0}}=(\omega_{1,L_0})^{i_s\cdot e_u}=(\omega_{1,L_0})^{j_s}$ for any $1 \leq s \leq r$.
Thus we obtain
\begin{align}\label{ee}
i_s \cdot e_u \equiv j_s \pmod {q_\pi-1}
\end{align}
for any $1 \leq s \leq r.$
For any finite place $v$ of $K$ not lying above $\pi$ and any integer $m$, denote by
$
P_{v,m}(T)=\det (T-\rep{\phi}{\pi}(\Frob{v}{m}) | T_\pi(\phi)) \in A[T]
$
the characteristic polynomial of $\Frob{v}{m}$.
Set
\[
C_2=C_2(n_K^{}, q,r):=r \cdot n_K^{2} \cdot C_1(q,r).
\]
Then we obtain the following important proposition.
\begin{prop}\label{cong}
If $\deg(\pi) > C_2$, then $r$ divides $e_u$ and the congruence
\[
i_s \cdot e_u \equiv \frac{e_u}{r} \pmod {q_\pi-1}
\]
holds for any $1 \leq s \leq r.$
\end{prop}
\begin{proof}
Suppose that $\deg(\pi)> C_2$.
Take a monic irreducible element $\pi_0 \in A $ with $\deg(\pi_0)=1$ and a finite place $v$ of $K$ above $\pi_0$.
Since $\phi$ has good reduction at $v$ by (D1), the polynomial $P_{v,e_u}(T)$ is well-defined.
Now the roots of $P_{v,e_u}(T)$ are given by $\{ \alpha_s^{e_u}\}_{s=1}^r$, where $\{ \alpha_s \}_{s=1}^r$ are the roots of $P_v(T)=P_{v,1}(T)$.
On the other hand, the condition (D2) implies that the roots of the polynomial
$P_{v,e_u}(T) \pmod \pi$ in $\bF_\pi[T]$
are given by $\{ \chi_\pi(\Frob{v}{})^{i_s \cdot e_u} \}_{s=1}^r$.
Set $\pi_v:=\pi_0^{f_{v|\pi_0}}$.
By the above relation (\ref{ee}), we see that $\chi_\pi(\Frob{v}{})^{i_s\cdot e_u}=\chi_\pi(\Frob{v}{})^{j_s}$
for any $1 \leq s \leq r$.
Since $\chi_\pi(\Frob{v}{})^{j_s} \equiv \pi_v^{j_s} \pmod \pi$ holds by Example \ref{Carlitz}, we obtain
\begin{equation}\label{eigen}
P_{v,e_u}(T) \equiv \prod_{s=1}^r(T-\chi_\pi(\Frob{v}{})^{j_s}) \equiv \prod_{s=1}^r(T-\pi_v^{j_s}) \pmod \pi.
\end{equation}
Denote by $S_k(x_1,\ldots,x_r)$ the fundamental symmetric polynomial of degree $k$ with
$r$ variables $x_1,\ldots,x_r$ for $0 \leq k \leq r$.
Then
\[
\prod_{s=1}^r(T-x_s)=\sum_{k=0}^r(-1)^k S_k(x_1,\ldots,x_r)T^{r-k}.
\]
Now $|\alpha_s^{e_u}| =q_v^{e_u/r}$ for any $1 \leq s \leq r$ by Proposition \ref{good}.
For any $0 \leq k \leq r$, we obtain
\begin{eqnarray}
\left|S_k(\alpha_1^{e_u},\ldots,\alpha_r^{e_u} ) - S_k(\pi_v^{j_1},\ldots,\pi_v^{j_r})\right|
&\leq& \underset{1 {\leq} s_1 {<} {\cdots} {<} {s_k} {\leq} r}{\mathrm{max}} \left\{ q_v^\frac{k \cdot e_u}{r}, q_v^{j_{s_1}+\cdots+j_{s_k}}
\right\} \nonumber \\
&\leq&q_v^{k \cdot e_u} \nonumber \\
&\leq& q_v^{r \cdot e_u} =q^{r\cdot e_u \cdot f_{v|\pi_0}}_{}\nonumber
\end{eqnarray}
since $j_s \leq e_u$ for each $s$ by Proposition \ref{tame}.
Since $e_u$ divides $e_{u|\pi}\cdot C_1(q,r)$ and both $e_{u|\pi}$ and $f_{v|\pi_0}$ are less than or equal to $n_K=[K:F]$,
we see that
\[
q^{r \cdot e_u \cdot f_{v|\pi_0}} \leq q^{C_2}<q^{\deg(\pi)}=|\pi|,
\]
which means that all absolute values of coefficients of $P_{v,e_u}(T)-\prod_{s=1}^r(T-\pi_v^{j_s})$
are smaller than $|\pi|$.
Therefore the congruence (\ref{eigen}) implies $P_{v,e_u}(T)=\prod_{s=1}^r(T-\pi_v^{j_s})$.
Comparing the absolute values of the roots of $P_{v,e_u}(T)$ and $\prod_{s=1}^r(T-\pi_v^{j_s})$,
we see that
${e_u/r}=j_s$ for any $1 \leq s \leq r$, which implies the conclusion.
\end{proof}
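The elementary-symmetric expansion $\prod_{s=1}^r(T-x_s)=\sum_{k=0}^r(-1)^kS_k(x_1,\ldots,x_r)T^{r-k}$ used in the proof can be checked numerically; the short Python sketch below (our own check, on arbitrary sample roots) does so.
\begin{verbatim}
# Check prod_s (T - x_s) = sum_k (-1)^k S_k(x_1, ..., x_r) T^{r-k} on sample data.
from itertools import combinations
from math import prod

def elementary_symmetric(xs, k):
    return sum(prod(c) for c in combinations(xs, k))

def poly_from_roots(xs):
    coeffs = [1]                                  # coefficients, highest degree first
    for x in xs:
        coeffs = [a - x * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

xs = [2, 5, 7]
lhs = poly_from_roots(xs)
rhs = [(-1)**k * elementary_symmetric(xs, k) for k in range(len(xs) + 1)]
assert lhs == rhs                                 # both equal [1, -14, 59, -70]
\end{verbatim}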
Set $e_{\phi}:={\gcd}\{ e_u ; \ u|\pi \}$ and $\bS_r:=\{ {\bf s}=(s_1, \ldots , s_r) \in \bZ^{r} ; 1 \leq s_k \leq r\}$.
\begin{lem}\label{e}
Suppose that $\deg(\pi)>C_2$.
$(1)$ $e_\phi| n_K^{} \cdot C_1(q,r)$. If $\pi$ is unramified in $K_{\rm s}$, then $e_\phi | [K:F]_{\rm i} \cdot C_1(q,r)$.
$(2)$ For any
$(s_1, \ldots , s_r) \in \bS_r$, the relation $e_\phi \cdot (i_{s_1} + \cdots + i_{s_r}-1) \equiv 0 \pmod {q_\pi-1}$
holds.
\end{lem}
\begin{proof}
Let $u$ be a finite place of $K$ above $\pi$ and $u_0$ the place of $K_{\rm s}$ below $u$.
Then $e_{u|\pi}=e_{u_0|\pi}[K:F]_{\rm i}$ since $u_0$ is totally ramified in $K$ if $K \neq K_{\rm s}$.
Hence (1) follows from the relation $e_u|e_{u|\pi}C_1(q,r)$.
By Proposition \ref{cong}, we see that $i_s \cdot e_\phi \equiv \frac{e_\phi}{r} \pmod {q_\pi-1}$.
Adding this congruence for $s_1,\ldots,s_r$ gives
\[
e_\phi \cdot (i_{s_1}+ \cdots +i_{s_r}) \equiv e_\phi \pmod {q_\pi-1},
\]
which proves (2).
\end{proof}
There exist only finitely many places of $F$ which are ramified in $K_{\rm s}$.
Define $C_3=C_3(K_{\rm s})$ to be the maximal degree of such places and set
\[
C_4=C_4(n_K^{},q,r,K_{\rm s}) := \max \{C_2(n_K^{}, q,r),C_3(K_{\rm s})\}.
\]
\begin{thm}\label{main1}
Suppose that $r=p^\nu>1$ and $r$ does not divide $[K:F]_{\rm i}$.
If $\deg(\pi) > C_4$, then the set $\sD(K,r,\pi)$ is empty.
\end{thm}
\begin{proof}
Assume that $[\phi] \in \sD(K,r,\pi)$ and $\deg(\pi) > C_4$.
Then $\pi$ is unramified in $K_{\rm s}$ since $\deg(\pi)>C_3$.
Proposition \ref{cong} and Lemma \ref{e} imply that
$r $ divides $ [K:F]_{\rm i}\cdot C_1(q,r)$.
The integer $C_1(q,r)$ is prime to $p$ and so $r$ must divide $[K:F]_{\rm i}$, which contradicts the assumption on $r$.
\end{proof}
\begin{rem}
In Section 5, we see that if $r$ divides $[K:F]_{\rm i}$, then
$\sD(K,r,\pi)$ is not empty.
\end{rem}
\section{Observations at places with small degree}
In this section,
using Propositions \ref{mth} and \ref{mv}, we prove Theorem \ref{main2} on the emptiness of $\sD(K,r,\pi)$ when $r$ is not a $p$-power.
In this case, we also prove its uniform version (Theorem \ref{uniform}).
\subsection{Effective Chebotarev density theorem}
Recall some basic facts on function field arithmetic.
Let $L$ be an algebraic extension of $K$.
The {\it constant field} $\bF_L$ of $L$ is the algebraic closure of $\bF_q$ in $L$.
If $L=\bF_LK$, then $L$ is called a {\it constant field extension} of $K$; such an extension
is unramified at every place (\cite[Proposition 8.5]{Ros}).
If $\bF_L=\bF_K$, then $L$ is called a {\it geometric extension} of $K$.
In general, the field $\bF_LK$ is the maximal constant extension of $K$ in $L$ and the extension $L/\bF_LK$
is geometric.
Set $[L:K]_{\rm g}:=[L:\bF_LK]$ if $L/K$ is finite, which is called the {\it geometric extension degree} of $L/K$.
For example, for any $a \in A$, the field $F(\cC[a])$ arising from the Carlitz module is a geometric extension of $F$.
Denote by $\mathrm{Div}(K)$ the divisor group of $K$, that is, the free abelian group generated by all places of $K$.
We write divisors additively, so that a typical divisor is of the form $D=\sum_vn_vv$.
The notation $v \not\in D$ means that $n_v=0$.
The {\it degree} of a place $v$ of $K$ is defined by $\deg_Kv:=[\bF_v:\bF_K]$ and it is extended to any divisor $D=\sum_vn_vv$ by $\deg_KD=\sum_vn_v\deg_Kv$.
The degree $\deg_F\pi$
of a finite place $\pi$ of $F$ is exactly the degree $\deg(\pi)$ as a polynomial.
Suppose that $L$ is a finite separable extension of $K$.
Then the {\it conorm map} $i_{L/K}:\mathrm{Div}(K) \rightarrow \mathrm{Div}(L)$ is defined to be the linear extension of
\[
i_{L/K}v=\sum_{w|v}e_{w|v}w,
\]
where $v$ is a place of $K$.
The following is known (cf.\ \cite[Proposition 7.7]{Ros}).
\begin{lem}\label{deg}
Let $w$ be a place of $L$ above a place $v$ of $K$ and $D \in \mathrm{Div}(K)$.
Then
\[
\deg_Li_{L/K}D=[L:K]_{\rm g}\deg_KD \ \mbox{and}\ \deg_Lw=\frac{f_{w|v}}{[\bF_L:\bF_K]}\deg_Kv.
\]
\end{lem}
For any place $w$ of $L$ above a place $v$ of $K$, denote by $\mathfrak{p}_w$ the maximal ideal of $\cO_{L_w}$ and let $\delta_w$ be the exact power of $\mathfrak{p}_w$ dividing the different of $\cO_{L_w}$ over $\cO_{K_v}$.
Then it satisfies $\delta_w \geq e_{w|v}-1$ with equality holding if and only if $p$
does not divide $e_{w|v}$.
Define the {\it ramification divisor} of $L/K$ by
$
\cD_{L/K}=\sum_w\delta_ww.$
For any intermediate field $K'$ of $L/K$, we see that
\[
\cD_{L/K}=\cD_{L/K'} + i_{L/K'}\cD_{K'/K}
\]
(for example, see \cite[Chapter III 4]{Serre2}).
Hence $\cD_{L/K'} \leq \cD_{L/K}$ holds.
In addition, the following holds (cf.\ {\cite[Lemma 2.6]{CL}}).
\begin{lem}\label{d}
Let $L/K$ and $L'/K$ be finite separable extensions.
Then
\[
\cD_{LL'/K} \leq i_{LL'/L}\cD_{L/K} + i_{LL'/L'}\cD_{L'/K}.
\]
\end{lem}
Now let $E$ be a finite Galois extension of $K$ and $v$ a place of $K$ unramified in $E$.
For any place $w$ of $E$ above $v$, denote by $\mathrm{Fr}_{w|v}$ the Frobenius element in $ \Gal{E}{K}$.
These elements form a conjugacy class
\[
\FC{E}{K}{v}:=\{ \Fr{w}{v} ; w|v\ \}
\]
in $\Gal{E}{K}$.
Define $\Sigma_{E/K}$ to be the divisor of $K$ given by the sum of all places of $K$ which are ramified in $E$.
As a consequence of the effective version of the Chebotarev density theorem \cite[Theorem 1]{KS}, the following holds.
\begin{prop}[Chen and Lee {\cite[Corollary 3.4]{CL}}]\label{CL}
Let $E/K$ be a finite Galois extension and $\Sigma$ a divisor of $K$ such that
$\Sigma \geq \Sigma_{E/K}$.
Set $d_0:=[\bF_K:\bF_q]$ and $d:=[\bF_E:\bF_K]$.
Define the constant $B=B(E/K, \Sigma)$ by
\[
B=\max\{\deg_K\Sigma, \deg_E\cD_{E/\bF_EK}, 2[E:\bF_EK]-2, 1 \}.
\]
Then for any nonempty conjugacy class $\sC$ in $\Gal{E}{K}$, there exists a place $v$ of $K$
with $v \notin \Sigma$ such that
\begin{itemize}
\item $\sC=\FC{E}{K}{v}$,
\item $\deg_Kv \leq \frac{4}{d_0} \log_q\frac{4}{3}(B+3g_K+3) + d$,
\end{itemize}
where $g_K$ is the genus of $K$.
\end{prop}
Let $\pi$ be a monic irreducible element of $A$ and $m \geq 1$ an integer
which divides $\#\bF_\pi^\times = q_\pi-1$.
A monic irreducible element $\pi_0$ distinct from $\pi$ is called an {\it $m$-th power residue modulo $\pi$} if $(\pi_0 \ \mbox{mod}\ \pi) \in (\bF_\pi^\times)^m$.
As an application of Proposition \ref{CL}, we show that one can find an $m$-th power residue modulo $\pi$ whose degree is smaller than $\deg(\pi)$ if $\deg(\pi)$ is sufficiently large.
Denote by $F_m$ the unique subfield of $F(\cC[\pi])$ with $[F_m:F]=m$ and
consider the character
$
\chi(m):G_F \overset{\chi_\pi}{\longrightarrow} \bF_\pi^\times \twoheadrightarrow \bF_\pi^\times/(\bF_\pi^\times)^m.
$
\begin{lem}\label{est}
The following are equivalent.
\begin{itemize}
\item $\pi_0$ is an $m$-th power residue modulo $\pi$.
\item $\chi(m)(\Frob{\pi_0}{})=1$.
\item $\Frob{\pi_0}{}{\mid}_{F_m}^{}=\mathop{\mathrm{id}}\nolimits$.
\end{itemize}
\end{lem}
\begin{proof}
The assertion is trivial when $m=1$.
Otherwise, the lemma follows immediately from the congruence $\chi_\pi(\Frob{\pi_0}{}) \equiv \pi_0\ (\mbox{mod}\ \pi)$ and the fact that $F_m$ is the fixed field of $F^{\rm sep}$ by $\Ker {\chi(m)}$.
\end{proof}
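Concretely, whether $\pi_0$ is an $m$-th power residue modulo $\pi$ can be tested by the analogue of Euler's criterion: since $\bF_\pi^\times$ is cyclic of order $q_\pi-1$ and $m \mid q_\pi-1$, one has $(\pi_0 \bmod \pi)\in(\bF_\pi^\times)^m$ if and only if $(\pi_0\bmod\pi)^{(q_\pi-1)/m}=1$. The following self-contained Python sketch (our own illustration, with $q=p$ prime and a small made-up example) carries out this test.
\begin{verbatim}
# Test (pi_0 mod pi)^((q_pi - 1)/m) = 1 in F_pi = F_p[t]/(pi), assuming q = p prime.
p = 3

def pmod(f, g):        # reduce f modulo g in F_p[t] (coeff lists, lowest degree first)
    f = f[:]
    while len(f) >= len(g) and any(f):
        if f[-1] == 0:
            f.pop(); continue
        c, shift = f[-1], len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gi) % p
        while f and f[-1] == 0:
            f.pop()
    return f or [0]

def pmulmod(f, g, mod):
    res = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            res[i + j] = (res[i + j] + a * b) % p
    return pmod(res, mod)

def ppowmod(f, n, mod):             # square-and-multiply in F_p[t]/(mod)
    out = [1]
    while n:
        if n & 1:
            out = pmulmod(out, f, mod)
        f = pmulmod(f, f, mod)
        n >>= 1
    return out

pi = [1, 0, 1]                      # pi = t^2 + 1, irreducible over F_3, so q_pi = 9
pi0 = [0, 1]                        # pi_0 = t
q_pi = p ** (len(pi) - 1)
print(ppowmod(pi0, (q_pi - 1) // 2, pi))   # [1]: t is a square modulo t^2 + 1
print(ppowmod(pi0, (q_pi - 1) // 4, pi))   # [2]: t is not a 4th power residue
\end{verbatim}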
Denote by $\tilde{K}$ the Galois closure of $K_{\rm s}$ over $F$
and set $E:=\tilde{K}F_m$, which is also a Galois extension of $F$.
Consider the divisor $\Sigma:=\Sigma_{E/F} +\pi + \infty \in \mathrm{Div}(F)$.
For the constant $B=B(E/F, \Sigma)=\max\{\deg_F\Sigma, \deg_E\cD_{E/\bF_EF}, 2[E:\bF_EF]-2, 1 \}$, we obtain the following estimate.
\begin{lem}\label{B}
Let $n$ be a positive integer.
Then there exists a constant $C_5=C_5(K_{\rm s}, n_K^{}, q,m , n)>0$ such that
for any $\pi$ satisfying $\deg(\pi) > C_5$, the inequality
\[
4 \log_q\frac{4}{3}(B + 3) + [\bF_{\tilde{K}}: \bF_q] < \frac{1}{n} \deg(\pi)
\]
holds.
\end{lem}
\begin{proof}
We first compute an upper bound of $B$.
We may assume that $\pi$ is unramified in $K_{\rm s}$.
Since the degree $[\tilde{K}:F]$ is less than or equal to $n_{K}^{}!$, we see that
$[\bF_{\tilde{K}}:\bF_q] \leq n_K^{}!$ and $[E:\bF_EF] \leq m\cdot n_K^{}!$.
By Example \ref{Carlitz}, the infinite place $\infty$ of $F$ splits into at most $m$ places in $F_m$,
whose ramification indices divide $q-1$, and $\pi$ is totally ramified in $F_m$ (or unramified, if $m=1$).
Thus we see that
\[
\deg_F\Sigma \leq \deg_F(\Sigma_{F_m/F} + \Sigma_{\tilde{K}/F} + \pi + \infty)\leq 2\deg(\pi)+2+\deg_F\Sigma_{\tilde{K}/F}.
\]
Now $\cD_{E/\bF_EF} = \cD_{E/F}$ holds.
Lemmas \ref{deg} and \ref{d} imply
\begin{eqnarray}
\deg_E\cD_{E/F} &{\leq}& \deg_{E}i_{E/\tilde{K}}\cD_{\tilde{K}/F} + \deg_{E}i_{E/F_m}\cD_{F_m/F} \nonumber \\
&\leq& m\cdot \deg_{\tilde{K}}\cD_{\tilde{K}/F} + [E:F_m]_{\rm g}
\cdot \deg_{F_m}(\sum_{v |\infty}(q-2)v + m\pi) \nonumber \\
&\leq& m\cdot \deg_{\tilde{K}}\cD_{\tilde{K}/F} + n_K! \cdot m(q-2 + \deg(\pi)). \nonumber
\end{eqnarray}
Hence there exist positive constants $B_1$ and $B_2$ depending only on $K_{\rm s}, n_K$, $q$ and $m$ such that
$
B \leq B_1 \cdot \deg(\pi) +B_2
$ holds.
Therefore if $\deg(\pi)$ is sufficiently large, then $4 \log_q\frac{4}{3}(B + 3) + [\bF_{\tilde{K}}: \bF_q] < \frac{1}{n} \deg(\pi)$ holds.
\end{proof}
Proposition \ref{CL} and Lemma \ref{B} imply the following:
\begin{prop}\label{mth}
Let $n$ be a positive integer.
If $\deg(\pi) > C_5$, then there exist a monic irreducible element $\pi_0 \in A$
and a place $v$ of $K$ above $\pi_0$ such that
\begin{itemize}
\item $\pi_0$ is an $m$-th power residue modulo $\pi$,
\item $\deg(\pi_0) < \frac{1}{n}\deg(\pi)$,
\item $f_{v|\pi_0}=1$.
\end{itemize}
\end{prop}
\begin{proof}
We may assume that $K=K_{\rm s}$ since the extension $K/K_{\rm s}$ is totally ramified at any place if $K \neq K_{\rm s}$.
Let $\tilde{K}$ and $E=\tilde{K}F_m$ be as above and fix an element $\sigma \in \Gal{E}{F}$ such that $\sigma|_{KF_m}=\mathop{\mathrm{id}}\nolimits$.
For the conjugacy class $\sC$ of $\sigma$ in $\Gal{E}{F}$, by Proposition \ref{CL} and Lemma \ref{B},
there exists a place $\pi_0$ of $F$ with $\pi_0 \notin \Sigma$ (hence it is a finite place) such that
$\FC{E}{F}{\pi_0}=\sC$ and $\deg(\pi_0) < \frac{1}{n}\deg(\pi)$, so that $\sigma=\mathrm{Fr}_{w|\pi_0}$
for some place $w$ of $E$.
Then the decomposition group $Z_w$ of $w$ over $\pi_0$ is generated by $\sigma$ and
it is a subgroup of $\Gal{E}{KF_m}$.
Denote by $K'$ the fixed field of $E$ by $Z_w$.
Then the place $v'$ of $K'$ below $w$ satisfies
$e_{v'|\pi_0}=1$ and $f_{v'|\pi_0}=1$.
Hence $f_{v|\pi_0}=1$, where $v$ is the place of $K$ below $v'$.
By construction, we see that $\Frob{v}{}|_{F_m}=\mathop{\mathrm{id}}\nolimits$.
Lemma \ref{est} means that $\pi_0$ is an
$m$-th power residue modulo $\pi$.
\end{proof}
\subsection{Non-$p$-power rank case}
Let $\phi$ be a rank-$r$ Drinfeld module over $K$ satisfying $[\phi] \in \sD(K,r,\pi)$.
In this subsection, we always assume that $r=r_0 \cdot p^\nu$ for some $r_0>1$ which is prime to $p$.
Now let $i_1, \ldots, i_r$ be the integers with $0 \leq i_s \leq q_\pi-1$ satisfying $\bar{\rho}^{\rm ss}_{\phi,\pi} \simeq \chi_\pi^{i_1} \oplus \cdots \oplus \chi_\pi^{i_r}$, as in (D2).
For any ${\bf s}=(s_1, \ldots, s_r) \in \bS_r$, set $\varepsilon_{\bf s}:=\chi_\pi{}^{i_{s_1}+\cdots +i_{s_r}-1}$ and define
\[
\epsilon:=(\varepsilon_{\bf s})_{{\bf s} \in \bS_r} : G_F \rightarrow (\bF_\pi^\times)^{\oplus r^r}.
\]
Set $m_\phi:=\#\epsilon(G_F)$, which is the least common multiple of the orders of $\varepsilon_{\bf s}$.
Since $\epsilon$ factors through $\bF_\pi^\times$, the image $\epsilon(G_F)$ is cyclic and $m_\phi|(q_\pi-1)$.
Then we obtain the following commutative diagram
\begin{equation}
\begin{xy}
(30,20) *{\epsilon(G_F)}="G",
(0,0) *{G_F }="H", (2,-3)*{ }="S", (30,0) *{\bF_\pi^\times }="I", (60,0)*{ }="Q", (60,3)*{ }="P",(60,-3)*{ }="R",(70,0) *{\bF_\pi^\times /(\bF_\pi^\times)^{m_\phi}}="J".
\ar "H";"G"^-{\epsilon}
\ar@{>>} "I";"G"^-{}
\ar "H";"I"^-{\chi_\pi}
\ar@{>>}"I";"Q"^-{}
\ar "P";"G"_{\simeq}
\ar @/_2mm/@{.>}"S";"R"_{\chi \left(m_\phi \right)}
\end{xy} .\nonumber
\end{equation}
\noindent
Hence a monic irreducible element $\pi_0$ is an $m_\phi$-th power residue modulo $\pi$ if and only if $\varepsilon_{\bf s}(\Frob{\pi_0}{})=1$ for any ${\bf s} \in \bS_r$.
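Note that $m_\phi$ is determined by the exponents $i_1,\ldots,i_r$ alone: since $\chi_\pi$ has order $q_\pi-1$, the order of $\varepsilon_{\bf s}$ is $(q_\pi-1)/\gcd(q_\pi-1,\,i_{s_1}+\cdots+i_{s_r}-1)$, and $m_\phi$ is the least common multiple of these orders. The following Python sketch (our own illustration, on hypothetical data) computes $m_\phi$ in this way.
\begin{verbatim}
# m_phi = lcm over s in S_r of the order of eps_s = chi_pi^{i_{s_1}+...+i_{s_r}-1},
# where chi_pi generates a cyclic group of order q_pi - 1.  (Requires Python >= 3.9.)
from math import gcd, lcm
from itertools import product

def m_phi(exponents_i, q_pi):
    r, N = len(exponents_i), q_pi - 1
    orders = []
    for s in product(range(r), repeat=r):             # tuples s = (s_1, ..., s_r)
        e = (sum(exponents_i[k] for k in s) - 1) % N  # exponent of eps_s mod (q_pi - 1)
        orders.append(N // gcd(N, e))
    return lcm(*orders)

print(m_phi([1, 2], q_pi=9))    # hypothetical rank-2 data: m_phi = 8
\end{verbatim}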
\begin{lem}
If $\deg(\pi)>C_2(n_K,q,r)$, then $m_\phi$ divides the greatest common divisor $(e_\phi, q_\pi-1)$.
In particular, it divides $n_K C_1(q,r)$.
\end{lem}
\begin{proof}
This follows from Lemma \ref{e} (2).
\end{proof}
\begin{prop}\label{mv}
If there exist a monic irreducible element $\pi_0$ and a finite place $v$ of $K$ above $\pi_0$ such that $\deg(\pi) > f_{v|\pi_0} \deg(\pi_0)$ and $r_0$ does not divide $f_{v|\pi_0}$,
then $m_{\phi}>1$ and $\chi(m_\phi)(\Frob{v}{}) \neq 1$.
\end{prop}
\begin{proof}
Assume that either $m_\phi=1$ or $\chi(m_\phi)(\Frob{v}{})=1$ holds.
Then $\varepsilon_{\bf s}(\Frob{v}{})=1$ for any ${\bf s} \in \bS_r$.
Denote by $a_{v,p^\nu} \in A$ the coefficient of $T^{r-p^\nu}$ in the characteristic polynomial $P_v(T)$ of $\Frob{v}{}$ on $T_\pi(\phi)$.
It is given by $a_{v,p^\nu}=(-1)^{p^\nu}S_{p^\nu}(\alpha_1, \ldots, \alpha_r)$, where $\alpha_1, \ldots, \alpha_r$ are the roots of $P_v(T)$ and $S_{p^\nu}(x_1, \ldots,x_r)$ is the elementary symmetric polynomial of degree $p^\nu$ in $r$ variables.
Consider the subset $\bS_{r,p^\nu}:=\{ (s_1, \ldots, s_{p^\nu}) ; 1 {\leq} s_1 {<}\cdots {<}s_{p^\nu} {\leq} r\}$ of $\bZ^{p^\nu}_{}$.
Then the product $\bS_{p^\nu}^{r_0}$ can be regarded as a subset of $\bS_r$.
Since $S_{p^\nu}(x_1, \ldots,x_r)$ is the sum of $\binom{r}{p^\nu}$ monomials of degree $p^\nu$, we obtain that
\begin{eqnarray}
(a_{v,p^\nu})^{r_0} {=} (-1)^r S_{p^\nu}(\alpha_1, \ldots, \alpha_r)^{r_0} &\equiv&
(-1)^r \left(\sum_{(s_1, \ldots, s_{p^\nu}) \in \bS_{r,p^\nu}} \chi_\pi^{i_{s_1}+\cdots + i_{s_{p^\nu} }}(\Frob{v}{}) \right)^{r_0} \nonumber \\
&\equiv& (-1)^r\sum_{{\bf s} \in \bS_{p^\nu}^{r_0}} \varepsilon_{\bf s}(\Frob{v}{})\chi_{\pi}(\Frob{v}{}) \nonumber \\
&\equiv& (-1)^r\sum_{{\bf s} \in \bS_{p^\nu}^{r_0}} \chi_\pi(\Frob{v}{}) \nonumber \\
&\equiv& (-1)^r\binom{r}{p^\nu}^{r_0} \pi_0^{f_{v|\pi_0}} \pmod \pi \nonumber
\end{eqnarray}
and $ (-1)^r\binom{r}{p^\nu}^{r_0} \pi_0^{f_{v|\pi_0}} \not\equiv 0 \pmod \pi$ since $\binom{r}{p^\nu}$ is not divisible by $p$.
Now we see that
\[
|(a_{v,p^\nu})^{r_0}| \leq q_v =q^{f_{v|\pi_0}\deg(\pi_0)} < |\pi|\
\mbox{and}\
\left| (-1)^r\binom{r}{p^\nu}^{r_0} \pi_0^{f_{v|\pi_0}} \right|=|\pi_0^{f_{v|\pi_0}}| =q_v < |\pi|.
\]
Hence the above congruence implies $(a_{v,p^\nu})^{r_0}=(-1)^r\binom{r}{p^\nu}^{r_0} \pi_0^{f_{v|\pi_0}}$.
Comparing the $\pi_0$-adic valuations of both sides, we obtain $r_0|f_{v|\pi_0}$, which is a contradiction.
\end{proof}
Set
\begin{eqnarray}
C_6 &=& C_6(K_{\rm s}, n_K, q,r):=\max\{ C_5(K_{\rm s}, n_K, q,m,1) ;\ m|n_KC_1(q,r)\} \nonumber \\
C_7 &=& C_7(K_{\rm s}, n_K,q, r):=\max\{ C_2(n_K, q,r), C_6(K_{\rm s}, n_K,q,r)\}. \nonumber
\end{eqnarray}
Then we have the following theorem.
\begin{thm}\label{main2}
Suppose that $r=r_0p^\nu$ as above and $\deg(\pi)>C_7$.
Then the set $\sD(K,r,\pi)$ is empty.
\end{thm}
\begin{proof}
Assume that $\sD(K,r,\pi)$ is not empty and $[\phi] \in \sD(K,r,\pi)$.
By Proposition \ref{mth}, there exist a monic irreducible element $\pi_0$ and a place $v$ of $K$ above $\pi_0$
such that $f_{v|\pi_0}=1$, $\deg(\pi_0) < \deg(\pi)$ and $\chi(m_\phi)(\Frob{\pi_0}{})=1$.
However, since $\pi_0$ and $v$ satisfy the assumption of Proposition \ref{mv}, we see that
$\chi(m_\phi)(\Frob{v}{})=\chi(m_\phi)(\Frob{\pi_0}{}) \neq 1$, a contradiction.
\end{proof}
By the same argument, we can also prove a uniform version.
For a fixed finite separable extension $K_0$ of $F$ with degree $n_0:=[K_0:F]$ and a positive integer $n$, set
\[
C_8=C_8(K_0,q, r, n):=\max \left\{ C_2(n n_0, q,r), \max\{C_5(K_0,n_0,q, m, n);\ m | n_0C_1(q,r) \} \right\}.
\]
\begin{thm}\label{uniform}
Let $r=r_0p^\nu$, $K_0$, and $n$ be as above.
Suppose that $r_0$ does not divide $n$.
If $\deg(\pi)>C_8$,
then for any finite extension $K$ of $K_0$ satisfying $[K:K_0]=n$, the set
$
\sD(K,r,\pi)
$ is empty, namely the union
\[
\underset{[K:K_0]=n}{\bigcup}\sD(K,r,\pi)
\]
is empty.
\end{thm}
\begin{proof}
Let $K$ be a finite extension of $K_0$ with $[K:K_0]=n$ and assume that $[\phi] \in \sD(K,r,\pi)$.
Applying Proposition \ref{mth} to $K_0$, we can find a monic irreducible element $\pi_0$ and a finite place $v_0$ of $K_0$ above $\pi_0$ such that $f_{v_0|\pi_0}=1$, $n\deg(\pi_0) < \deg(\pi)$ and $\chi(m_\phi)(\Frob{\pi_0}{})=1$.
Now we can take a place $v$ of $K$ above $v_0$ such that $f_{v|v_0} (=f_{v|\pi_0})$ is not divisible by $r_0$.
Indeed, if not, then $r_0$ must divide $n=\sum_{v|v_0}e_{v|v_0}f_{v|v_0}$.
Since $f_{v|\pi_0}\deg(\pi_0) < n\deg(\pi_0)< \deg(\pi)$, by Proposition \ref{mv}, we see that
$\chi(m_\phi)(\Frob{v}{})=\chi(m_\phi)(\Frob{\pi_0}{})^{f_{v|\pi_0}} \neq 1$.
It is a contradiction.
\end{proof}
\section{Comparison with number field case}
In this final section,
we compare the Rasmussen-Tamagawa conjecture and its Drinfeld module analogue.
After examining their similarities, we construct an example of a Drinfeld module satisfying Rasmussen-Tamagawa type conditions for any $\pi$ and prove Theorem \ref{nonempty}.
We also prove the infiniteness of $\sD(K,r,\pi)$ for $r\geq 2$ and $\pi=t$ in Proposition \ref{infinite}.
\subsection{Defining conditions of $\sD(K,r,\pi)$}
In the number field case, $\sA(k,g,\ell)$ is defined by the equivalent conditions (RT-1), (RT-2), and (RT-3) in Section 1.
Their equivalence follows from the criterion of N{\'e}ron-Ogg-Shafarevich and the next group-theoretic lemma:
\begin{lem}[Rasmussen and Tamagawa {\cite[Lemma 3.4]{RT2}}]\label{group}
Let $\bF$ be a finite field of characteristic $\ell$.
Suppose $G$ is a profinite group, $N \subset G$ is a pro-$\ell$ open normal subgroup, and
$C=G{/}N$ is a finite cyclic group with $\#C | \#\bF^\times$.
Let $V$ be an $\bF$-vector space of dimension $r$ on which $G$ acts continuously.
Fix a group homomorphism $\chi_0:G \rightarrow \bF^\times$ with $\mathrm{Ker}(\chi_0)=N$.
Then there exists a filtration
\[
0=V_0 \subset V_1 \subset \cdots \subset V_r=V
\]
such that each $V_s$ is $G$-stable and $\dim_\bF V_s=s$ for any $0 \leq s \leq r$.
Moreover, for each $1 \leq s \leq r $, the $G$-action on each quotient $V_s{/}V_{s-1}$ is given by
$\chi_0^{i_s}$ for some integer $i_s$ satisfying $0 \leq i_s < \#C$.
\end{lem}
\begin{rem}
In \cite{RT2}, this lemma is proved when $\bF=\bF_\ell$.
The general case can be proved in the same way.
\end{rem}
As an analogue, we give two conditions which are equivalent to (D1)$+$(D2).
Let $\phi$ be a rank-$r$ Drinfeld module over $K$ and let $\pi \in A$ be a monic irreducible element.
Consider the field $K(\phi[\pi^\infty]):=K({\bigcup}_{n \geq 1}\phi[\pi^n])$ generated by all $\pi$-power torsion points of $\phi$, so that it coincides with the fixed subfield
of $K^{\rm sep}$ by the kernel of $\rep{\phi}{\pi}:G_K \rightarrow {\rm GL}_r(A_\pi)$.
Recall that the mod $\pi$ Carlitz character $\chi_\pi:G_K \rightarrow {\rm GL}_{\bF_\pi}(\cC[\pi]) \simeq \bF_\pi^\times$ is an analogue of the mod $\ell$ cyclotomic character.
For the field $L:=K(\phi[\pi]) \cap K(\cC[\pi])$, we can prove the next proposition in the same way as the abelian variety case.
\begin{prop}\label{DRTcondi}
The following conditions are equivalent.
\begin{itemize}
\setlength{\leftskip}{0.5cm}
\item[(DR-1)] $K(\phi[\pi^\infty])/L$ is a pro-$p$ extension which is unramified at any finite place of $L$ not lying above $\pi$,
\item[(DR-2)] $\phi$ has good reduction at any finite place of $K$ not lying above $\pi$ and $K(\phi[\pi])/L$ is a $p$-extension,
\item[(DR-3)] $\phi$ satisfies {\rm (D1)} and {\rm (D2)}.
\end{itemize}
\end{prop}
\begin{rem}
Unlike the abelian variety case, the field $K(\phi[\pi])$ may not contain $K(\cC[\pi])$.
For example, for $x \in \bF_q^\times {\setminus} \{ 1 \}$, consider the rank-one Drinfeld module
$\phi$ over $F$ determined by $\phi_t=t + x\tau$ and suppose $q \neq 2$.
Then the fields $F(\phi[t])$ and $F(\cC[t])$ are generated by the roots of $t + xT^{q-1}$ and $t + T^{q-1}$, respectively.
By Kummer theory, we see that $F(\phi[t]) \neq F(\cC[t])$, so that $F(\phi[t]) \not\supset F(\cC[t])$.
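For concreteness, here is one way to carry out the omitted Kummer-theoretic verification (our sketch, not taken from the argument above). The nonzero $t$-torsion points satisfy $T^{q-1}=-t/x$ for $\phi$ and $T^{q-1}=-t$ for $\cC$, so $F(\phi[t])=F(\cC[t])$ would force the classes of $-t/x$ and $-t$ to generate the same subgroup of $F^\times/(F^\times)^{q-1}$; in particular
\[
-t/x=(-t)^{j}u^{q-1} \quad\text{for some } j\in\bZ,\ u\in F^\times.
\]
Comparing degrees gives $j\equiv 1 \pmod{q-1}$, and since the class of $-t$ has exact order $q-1$ we may take $j=1$, so $x^{-1}=u^{q-1}$. As $x^{-1}\in\bF_q^\times$, unique factorization forces $u\in\bF_q^\times$, hence $x^{-1}=u^{q-1}=1$, contradicting $x\neq 1$.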
\end{rem}
\begin{proof}[Proof of Proposition \ref{DRTcondi}]
Since the kernel of ${\rm GL}_r(A_\pi) {\twoheadrightarrow} {\rm GL}_r(\bF_\pi)$ is a pro-$p$ group, the extension $K(\phi[\pi^\infty])/K(\phi[\pi])$ is always pro-$p$.
The extension $K(\cC[\pi])/K$ is unramified at any finite place of $K$ not lying above $\pi$
(Example \ref{Carlitz}).
Hence the conditions (DR-1) and (DR-2) are equivalent by Proposition \ref{NOS}.
Suppose that (DR-2) holds.
Then the condition (DR-3) immediately follows from Lemma \ref{group} for $G={\rm Gal}(K(\phi[\pi])/K)$, $\chi_0=\chi_\pi|_G$,
$N=\mathrm{Ker}(\chi_\pi|_G)={\rm Gal}(K(\phi[\pi])/L)$, and $V=\phi[\pi]$.
Conversely, if (DR-3) holds, then the image $\mrep{\phi}{\pi}(N)$ of $N={\rm Gal}(K(\phi[\pi])/L)$
is contained in
\[
\left\{
\begin{pmatrix}
1 & & {\large*}\\
& \ddots & \\
& & 1
\end{pmatrix}
\in {\rm GL}_r(\bF_\pi) \right\},
\]
which is a Sylow $p$-subgroup of ${\rm GL}_r(\bF_\pi)$.
Since $\mrep{\phi}{\pi}|_G$ is injective, we see that $K(\phi[\pi])/L$ is a $p$-extension.
\end{proof}
\begin{rem}
The original conjecture of Rasmussen and Tamagawa is formulated for abelian varieties of arbitrary dimension, and so we would like to formulate its function field analogue for some higher dimensional objects (recall that Drinfeld modules are analogues of elliptic curves).
In \cite{Anderson}, Anderson introduced objects called {\it $t$-motives} as analogues of abelian varieties of higher dimensions, which are also generalizations of Drinfeld modules.
In fact the category of Drinfeld modules is anti-equivalent to that of $t$-motives of dimension one.
It is known that $t$-motives have the notions of good reduction and Galois representation attached to their $\pi$-torsion points (see, for example \cite{Gar}), so that we can consider the conditions (D1) and (D2) for $t$-motives.
Moreover, Proposition \ref{DRTcondi} is also generalized to $t$-motives since Galois criterion of good reduction for $t$-motives holds.
Therefore the set $\sM(K,d,r,\pi)$ of isomorphism classes of $d$-dimensional $t$-motives over $K$ of rank $r$ satisfying the Rasmussen-Tamagawa type conditions can be defined and the following question makes sense: is the set $\sM(K,d,r,\pi)$ empty for any $\pi$ with sufficiently large degree?
\end{rem}
\subsection{Non-emptiness of $\sD(K,r,\pi)$}
In this subsection, giving a concrete example, we prove the following theorem:
\begin{thm}\label{nonempty}
If $r$ divides $[K:F]_{\rm i}$, then the set $\sD(K,r,\pi)$ is never empty for any $\pi$.
\end{thm}
If $r=1$, then Theorem \ref{nonempty} is trivial since the Carlitz module $\cC$ satisfies both (D1) and (D2).
Assume that $r \geq 2$ and $[K:F]_{\rm i}$ is divisible by $r$, so that $r$ is a $p$-power.
Now
the $r$-power map $A \rightarrow A;a \mapsto a^r$ is an injective ring homomorphism.
For any $a =\sum x_nt^n \in A$ with $x_n \in \bF_q$, set $\hat{a}:=\sum x_n^{1/r}t^n$.
Then we see that $a \mapsto \hat{a}$ is a ring automorphism of $A$ and that the map
$A \rightarrow A; a \mapsto \hat{a}^r$ is an injective $\bF_q$-algebra homomorphism.
\begin{lem}\label{ps}
Set $[K:F]_{\rm i}=p^\nu$. Then $K_{\rm s}=K_{}^{p^{\nu}}$.
\end{lem}
\begin{proof}
Since $K$ is a purely inseparable extension of $K_{\rm s}$ of degree $p^{\nu}$, the field
$K_{}^{p^{\nu}}$ is contained in $K_{\rm s}$.
Consider the sequence of fields $K \supset K^p \supset \cdots \supset K^{p^{\nu}} $.
Proposition 7.4 of \cite{Ros} implies that each extension $K^{p^n}/K^{p^{n+1}}$
is of degree $p$.
Hence $[K:K_{}^{p^{\nu}}]=p^{\nu}=[K:K_{\rm s}]$, which means that $K_{\rm s}=K_{}^{p^{\nu}}$.
\end{proof}
Since $r$ divides $[K:F]_{\rm i}$, Lemma \ref{ps} implies that $K$ contains the field
$F^{{1/r}}$.
In particular the $r$-th root $t^{1/r}$ of $t$ is contained in $K$, so that
we have a new injective $A$-field structure $\iota:A \rightarrow K$ defined by $\iota(t)=t^{1/r}$.
Define the rank-one Drinfeld module
\[
\cC':A \rightarrow K\{ \tau \}
\]
over the $A$-field $(K,\iota)$ by
$\cC'_t=t^{1/r}+\tau$.
Set ${}^{(r)}\mu:=\sum c_i^r \tau^i$ for any $\mu=\sum c_i\tau^i \in K\{ \tau \}$, which defines a ring homomorphism $K\{ \tau \} \rightarrow K\{ \tau \}$.
Then we can relate $\cC'$ with the Carlitz module $\cC$ as follows:
\begin{lem}\label{relation}
{\rm (1)} For any $a \in A$, ${}^{(r)}\cC'_{\hat{a}}=\cC_a$.
{\rm (2)} For any element $\lambda \in \cC'[\hat{a}]$, there exists a unique $\delta \in \cC[a]$ such that $\lambda=\delta^{1/r}$.
\end{lem}
\begin{proof}
Clearly ${}^{(r)}\cC'_{\hat{x}}=x=\cC_x$ for any $x \in \bF_q$ and
${}^{(r)}\cC'_{\hat{t}}={}^{(r)}\cC'_{t}=\cC_t$.
Hence for any $a=\sum x_nt^n$,
\[
{}^{(r)}\cC'_{\hat{a}}={}^{(r)}\left( \sum x_n^{1/r} (\cC'_t)^n \right)=\sum x_n (\cC_t)^n=\cC_a.
\]
For any $\lambda \in \cC'[\hat{a}]$, we see that
\[
0=\left(\cC'_{\hat{a}}(\lambda) \right)^r={}^{(r)}\cC'_{\hat{a}}(\lambda^r)=\cC_a(\lambda^r),
\]
so that $\lambda^r \in \cC[a]$ and we have the injective homomorphism $\cC'[{\hat{a}}] \rightarrow \cC[a];\lambda \mapsto \lambda^r$ of finite groups.
Since $\#\cC'[\hat{a}] $ is equal to $\#\cC[a]$ by $\deg(\hat{a})=\deg(a)$, it is a bijection.
\end{proof}
Now define $\Phi_a:=\cC'_{\hat{a}^r}=(\cC'_{\hat{a}})^r \in K\{ \tau \}$ for any $a \in A$.
Then
by construction it gives an $\bF_q$-algebra homomorphism
\[
\Phi: A \rightarrow K\{ \tau \},
\]
which is determined by $\Phi_t=(t^{1/r} + \tau)^r$.
Since $\iota(\hat{a}^r)=a$ holds, $\Phi$ is a rank-$r$ Drinfeld module over $K$.
Moreover it has good reduction at every finite place $v$ of $K$ since $v(t^{1/r}) \geq 0$.
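To make the construction concrete, the following minimal sketch (ours, not part of the paper) multiplies out $\Phi_t=(t^{1/r}+\tau)^r$ in the twisted polynomial ring $K\{\tau\}$ using the commutation rule $\tau c = c^q \tau$; the choice $q=r=2$ (so $p=2$) is only for illustration.
\begin{verbatim}
# Minimal sketch (ours): expand Phi_t = (t^{1/r} + tau)^r in the twisted
# polynomial ring K{tau}, where tau * c = c^q * tau.  The parameters
# q = r = 2 (so p = 2) are assumptions made only for illustration.
import sympy as sp

t = sp.symbols('t', positive=True)
q, r = 2, 2

def twisted_mul(f, g):
    # f, g are coefficient lists [a_0, a_1, ...] meaning a_0 + a_1*tau + ...
    h = [sp.Integer(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b**(q**i)  # a_i tau^i * b_j tau^j = a_i b_j^{q^i} tau^{i+j}
    return [sp.simplify(c) for c in h]

Cp_t = [t**sp.Rational(1, r), sp.Integer(1)]   # C'_t = t^{1/r} + tau
Phi_t = Cp_t
for _ in range(r - 1):
    Phi_t = twisted_mul(Phi_t, Cp_t)

print(Phi_t)                             # [t, sqrt(t) + t, 1] when q = r = 2
assert sp.simplify(Phi_t[0] - t) == 0    # constant term t, tau-degree r
\end{verbatim}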
By the following proposition, we see that $[\Phi] \in \sD(K,r,\pi)$, which implies Theorem \ref{nonempty}.
\begin{prop}\label{main3}
Let $i$ be the positive integer satisfying $ir \equiv 1 \pmod {q_\pi-1}$ and $i < q_\pi-1$.
Then the mod $\pi$ representation attached to $\Phi$ is of the form
\[
\mrep{\Phi}{\pi}\simeq
\begin{pmatrix}
\chi_\pi^i & & {\large*}\\
& \ddots & \\
& & \chi_\pi^i
\end{pmatrix}.
\]
\end{prop}
\begin{proof}
It suffices to prove that $\bar{\rho}_{\Phi,\pi}^{\rm ss} = (\chi_\pi^i)^{\oplus r}$.
For each $1 \leq s \leq r$, set
\[
V_s:=\{ \lambda \in {}_\Phi K^{\rm sep} ; \cC'_{\hat{\pi}^{s}}(\lambda)=0\}.
\]
For any $a \in A$ and $\lambda \in V_s$, we see that $\Phi_a(\lambda) \in V_s$ since
$
\cC_{\hat{\pi}^s}'(\Phi_a(\lambda))= \cC_{\hat{\pi}^s}'(\cC_{\hat{a}^r}'(\lambda))=\cC_{\hat{a}^r}'(\cC_{\hat{\pi}^s}'(\lambda))=0.
$
Hence
$V_s$ is an $A$-submodule of ${}_\Phi K^{\rm sep}$ with the natural $G_K$-action.
Moreover $\Phi_\pi(\lambda)=0$ for any $\lambda \in V_s$, so that $V_s$ is an $\bF_\pi$($=A/\pi A$)-vector space.
Here $\Phi[\pi]=V_r$ by the definition of $\Phi$.
Then we obtain the filtration
\[
0 =V_0 \subset V_1 \subset V_2 \subset \cdots \subset V_r=\Phi[\pi]
\]
of $G_K$-stable $\bF_\pi$-subspaces of $\Phi[\pi]$.
Now the map $V_s\rightarrow V_1;\lambda \mapsto \cC'_{\hat{\pi}^{s-1}}(\lambda)$ induces a $G_K$-equivariant isomorphism $V_s{/}V_{s-1} \cong V_1$.
Since $V_1=\cC'[\hat{\pi}]$ (as a set) and $\deg(\pi)=\deg(\hat{\pi})$, we have
$\#V_1=q_{\hat{\pi}}=q_\pi (=\# \bF_\pi)$.
Hence $\dim_{\bF_\pi}V_1=1$ and
the semisimplification of $\Phi[\pi]$ (as an $\bF_\pi[G_K]$-module) is
$\Phi[\pi]^{\rm ss} =\oplus_{s=1}^rV_s{/}V_{s-1} \cong V_1^{\oplus r}$.
For any $\sigma \in G_K$ and $\lambda \in V_1$, we prove $\sigma(\lambda)=\chi_\pi(\sigma)^i \cdot \lambda$ as follows.
Take an element $a_\sigma \in A$ satisfying $a_\sigma \equiv \chi_\pi(\sigma) \pmod \pi$.
By Lemma \ref{relation} (2), $\lambda=\delta^{1/r}$ for some $\delta \in \cC[\pi]$.
Then
\[
\sigma(\lambda)^r=\sigma(\delta)=\chi_{\pi}(\sigma)\cdot\delta=\cC_{a_\sigma}(\delta).
\]
Now the $\bF_\pi$-vector space structure of $V_1$ is determined by $\Phi$ and so
$\chi_\pi(\sigma)^i \cdot \lambda = \Phi_{a_\sigma^i}(\lambda) = \cC'_{\hat{a}_\sigma^{ir}}(\lambda)$.
Since
$ir \equiv 1 \pmod {q_{\hat{\pi}}-1}$ holds,
we have
$\hat{a}_{\sigma}^{ir} \equiv \hat{a}_{\sigma} \pmod {\hat{\pi}} $.
This implies
$\cC'_{\hat{a}_\sigma^{ir}}(\lambda) =\cC'_{\hat{a}_\sigma}(\lambda)$.
By Lemma \ref{relation} (1), we obtain
\[
\left( \chi_\pi(\sigma)^i\cdot\lambda \right)^r
=\left( \cC'_{\hat{a}_\sigma}(\lambda) \right)^r
={}^{(r)}\cC'_{\hat{a}_\sigma}(\lambda^r)
=\cC_{a_\sigma}(\delta)=\sigma(\lambda)^r.
\]
Since the $r$-power map is injective, we have
$\sigma(\lambda)=\chi_\pi(\sigma)^i \cdot \lambda$.
Hence
the $G_K$-action on $V_1$ is given by $\chi_\pi^i$.
\end{proof}
\begin{rem}
Let $u$ be a finite place of $K$ above $\pi$. Now $r$ divides $e_{u|\pi}$ by assumption and set $e=e_{u|\pi}/r$.
Since $ir \equiv 1 \pmod {q_\pi-1}$, we see that
\[
\chi_\pi^i|_{I_{K_u}}=(\omega_{1,K_u})^{i \cdot e_{u|\pi}} = (\omega_{1,K_u})^{i \cdot r \cdot e}=(\omega_{1,K_u})^e.
\]
Hence the set of tame inertia weights of $\mrep{\Phi}{\pi}|_{I_{K_u}}$ is $\mathrm{TI}_{{K_u}}(\mrep{\Phi}{\pi})=\{e\}$.
\end{rem}
\subsection{Infiniteness of $\sD(K,r,t)$}
Finally, for $\pi=t$, we construct an infinite subset of $\sD(K,r,t)$.
In the number field case, the set $\sA(k,g,\ell)$ is always finite because of the Shafarevich conjecture proved by Faltings \cite{Fal}, which states that there exist only finitely many isomorphism classes of abelian varieties over a fixed $k$ of fixed dimension $g$ which have good reduction outside a fixed finite set of finite places of $k$.
However, the Drinfeld module analogue of it does not hold:
\begin{ex}\label{a}
For any $a\in A$, consider the rank-2 Drinfeld module $\phi^{(a)}:A \rightarrow F\{ \tau \}$ given by
$
\phi^{(a)}_t=t + a\tau + \tau^2.
$
It is easily seen that $\phi^{(a)}$ has good reduction at any finite place of $F$.
If $\phi^{(a)}$ is isomorphic to $\phi^{(a')}$ for some $a' \in A$ over $F$,
then there exists an element $c \in F$ such that $c\phi_t^{(a')} = \phi_t^{(a)}c$, so that
\[
\phi_t^{(a')}=t + a'\tau + \tau^2=t + c^{q-1}a\tau + c^{q^2-1}\tau^2.
\]
This means that $c \in \bF_q^\times$ and hence $a'=c^{q-1}a=a$.
Therefore the set of isomorphism classes
$
\{ [\phi^{(a)}] ; a \in A \}
$ is infinite.
\end{ex}
Let $W$ be a $G_K$-stable one-dimensional $\bF_q$-vector space contained in $K^{\rm sep}$ and write
$\kappa_W: G_K \rightarrow \bF_q^\times$ for the character attached to $W$.
Set
$P_W(T):=\prod_{\lambda \in W}(T-\lambda)$, which is an $\bF_q$-linear polynomial of the form
\[
P_W(T)=T^q+c_WT, \ \ \ c_W:= \Bigl({\prod}_{\lambda \in W\setminus \{0 \}}-\lambda \Bigr) \in K^\times
\]
by \cite[Corollary 1.2.2]{Goss}.
For any $c \in K^\times$, denote by $\bar{c} \in K^\times{/}(K^\times)^{q-1}$ the class of $c$ and by
$\kappa_{(c)}:G_K \rightarrow \bF_q^\times$ the character corresponding to $\bar{c}$ by the map
$K^\times{/}(K^\times)^{q-1} \overset{\sim}{\rightarrow} \mathrm{Hom}(G_K,\bF_q^\times)$ of Kummer theory.
\begin{lem}\label{k}
For the above element $c_W \in K^\times$, the character $\kappa_{(-c_W)}$ coincides with $\kappa_W$.
\end{lem}
\begin{proof}
Since $\lambda^{q-1}=-c_W$ for any $\lambda \in W{\setminus}\{0\}$, the character $\kappa_{(-c_W)}$ is given by
$
\kappa_{(-c_W)}(\sigma)={\sigma(\lambda)/\lambda}=\kappa_W(\sigma)
$
for any $\sigma \in G_K$.
\end{proof}
Identify $\bF_t=A/tA=\bF_q$.
Then $\cC[t]$ is a one-dimensional $\bF_q$-subspace of $K^{\rm sep}$ and $P_{\cC[t]}(T)=T^q + t T$ by the definition of $\cC$.
By Lemma \ref{k}, we see that $\chi_t=\kappa_{(-t)}$.
Note that $\chi_t^i=\kappa_{((-t)^i)}$ for any integer $i$.
Take $r$ elements $c_1, \ldots, c_r \in K^\times$.
For any $1 \leq s \leq r$, define
$f_s(\tau):=(\tau + c_s)(\tau +c_{s-1}) \cdots (\tau+c_1) \in K\{ \tau \}$
and set $W_s:=\Ker {f_s:K^{\rm sep} \rightarrow K^{\rm sep} }$, which is a $G_K$-stable $s$-dimensional $\bF_q$-subspace of $K^{\rm sep}$.
Thus we obtain the filtration
\[
0=W_0 \subset W_1 \subset \cdots \subset W_r
\]
of $\bF_q[G_K]$-modules.
\begin{lem}\label{c}
The $\bF_q$-linear representation $\bar{\rho}:G_K \rightarrow \GL{\bF_q}{W_r} \simeq \GL{r}{\bF_q}$ is of the form
\[
\bar{\rho} \simeq
\begin{pmatrix}
\kappa_{(-c_1)} & * & \cdots & *\\
& \kappa_{(-c_2)} & \ddots & \vdots \\
& & \ddots & * \\
& & &\kappa_{(-c_{r})}
\end{pmatrix}.
\]
\end{lem}
\begin{proof}
For any $1 \leq s \leq r$, the quotient $W_s{/}W_{s-1}$ is isomorphic to $\Ker {\tau + c_s:K^{\rm sep} \rightarrow K^{\rm sep}}$ as an $\bF_q[G_K]$-module.
Hence each $W_s{/}W_{s-1}$ is embedded into $K^{\rm sep}$.
By Lemma \ref{k}, the action of $G_K$ on $W_s{/}W_{s-1}$ is given by $\kappa_{(-c_s)}$.
\end{proof}
Fix $r$ integers $i_1, \ldots , i_r$ satisfying $\sum_{s=1}^r i_s=1$.
For any ${\bf m}=(m_1, \ldots , m_r) \in \bZ^{r}$ satisfying $\sum_{s=1}^rm_s=0$,
consider the $\bF_q$-algebra homomorphism $\phi^{\bf m}:A \rightarrow K\{ \tau \}$ given by
\[
\phi^{\bf m}_t=(-1)^{r-1}\prod_{s=1}^r(\tau - (-t)^{k_s}),
\]
where $k_s=i_s + m_s(q-1)$ for any $1 \leq s \leq r$.
Now $\sum_{s=1}^rk_s=1$, so that the constant term of $\phi_t^{\bf m}$ is $(-1)^{r-1}\prod_{s=1}^r( -(-t)^{k_s})=(-1)^{2r}t=t$ and hence
$\phi^{\bf m}$ is
a rank-$r$ Drinfeld module over $K$.
\begin{prop}\label{m}
The isomorphism class $[\phi^{\bf m}]$ is contained in $\sD(K,r,t)$.
Moreover,
the mod $t$ representation attached to $\phi^{\bf m}$ is of the form
\[
\mrep{\phi^{\bf m}}{t} \simeq
\begin{pmatrix}
\chi_t^{i_1} & * & \cdots & *\\
& \chi_t^{i_2} & \ddots & \vdots \\
& & \ddots & * \\
& & &\chi_t^{i_r}
\end{pmatrix},
\]
where $i_1, \ldots, i_r$ are the integers fixed as above.
\end{prop}
\begin{proof}
For any finite place $v$ of $K$ not lying above $t$,
since $-t \in \cO_{K_v}$ and the leading coefficient of $\phi_t^{\bf m}$ is $(-1)^{r-1}$, we see that $\phi^{\bf m}$ has good reduction at $v$.
Now $\phi^{\bf m}[t]$ coincides with the kernel of
$\prod_{s=1}^r(\tau-(-t)^{k_s})$.
Applying Lemma \ref{c} to $f_s=(\tau-(-t)^{k_s})\cdots(\tau-(-t)^{k_1})$, we see that $\mrep{\phi^{\bf m}}{t}$ is given as above since $\kappa_{((-t)^{k_s})}=\chi_t^{k_s}=\chi_t^{i_s}$ for any $1 \leq s \leq r$.
\end{proof}
\begin{prop}\label{infinite}
If $r \geq 2$, then the set $\sD(K,r,t)$ is infinite.
\end{prop}
\begin{proof}
We construct an infinite subset of $\sD(K,r,t)$ as follows.
Fix $r$ integers $i_1, \ldots, i_r$ satisfying $\sum_{s=1}^ri_s=1$.
For any positive integer $m$, consider $(-m,0,\ldots,0,m) \in \bZ^r$ and define $\phi^m:=\phi^{(-m,0,\ldots,0,m)}$, which is a Drinfeld module satisfying $[\phi^m] \in \sD(K,r,t)$ by Proposition \ref{m}.
Write $\phi^m_t=t + c_1\tau + \cdots + c_{r-1}\tau^{r-1} + (-1)^{r-1}\tau^r \in K\{ \tau \}$. Then by construction the coefficient $c_{r-1}$ is given by
\[
c_{r-1}=(-t)^{i_1-m(q-1)} + (-t)^{i_r +m(q-1)} + \sum_{s=2}^{r-1}(-t)^{i_s}.
\]
For any finite place $u$ of $K$ above $t$, if $m$ is sufficiently large, then
\[
u(c_{r-1})=(i_1-m(q-1)) u(-t)<0,
\]
hence we see that $u(c_{r-1}) \rightarrow -\infty$ as $m \rightarrow \infty$.
On the other hand, for two positive integers $m$ and $m'$, if $\phi^{m'}$ is isomorphic to $\phi^m$, then $\phi^{m'}_t=x^{-1}\phi^m_tx$ for some $x \in \bF_K^\times$ by the same argument as in Example \ref{a}.
These facts imply that if $m'$ is sufficiently large, then $\phi^{m}$ and $\phi^{m'}$ are not isomorphic.
Therefore the subset $\{[\phi^m]; m \in \bZ_{>0} \}$ of $\sD(K,r,t)$ is infinite.
\end{proof}
\ \\
Department of Mathematics, Tokyo Institute of Technology
2-12-1 Oh-okayama, Meguro-ku, Tokyo 152-8551, Japan
{\it E-mail address} : \email{\bf [email protected]}
\end{document}
\begin{document}
\RUNAUTHOR{Daw}
\RUNTITLE{Conditional Uniformity and Hawkes Processes}
\TITLE{
Conditional Uniformity and Hawkes Processes
}
\ARTICLEAUTHORS{
\AUTHOR{Andrew Daw}
\AFF{University of Southern California Marshall School of Business}
}
\ABSTRACT{Classic results show that the Hawkes self-exciting point process can be viewed as a collection of temporal clusters, where exogenously generated initial events give rise to endogenously driven descendant events. This perspective provides the distribution of a cluster's size through a natural connection to branching processes, but this is irrespective of time. Insight into the chronology of a Hawkes process cluster has been much more elusive. Here, we employ this cluster perspective and a novel adaptation of the random time change theorem to establish an analog of the conditional uniformity property enjoyed by Poisson processes. Conditional on the number of epochs in a cluster, we show that the transformed times are jointly uniform within a particular convex polytope. Furthermore, we find that this polytope leads to a surprising connection between these continuous state clusters and parking functions, discrete objects central in enumerative combinatorics and closely related to Dyck paths on the lattice. In particular, we show that uniformly random parking functions constitute hidden spines within Hawkes process clusters. This yields a decomposition that is valuable both methodologically and practically, which we demonstrate through application to the popular Markovian Hawkes model and through proposal of a flexible and efficient simulation algorithm.
}
\KEYWORDS{Hawkes processes, parking functions, conditional uniformity, cluster duration, self-excitement, compensators, time change, Dyck paths}
\maketitle
\section{Introduction}
The hallmark of the Hawkes self-exciting point process is that each event generates its own stream of offspring events, so that the history of the process becomes an endogenous driver of its own future activity \citep{hawkes1971spectra}. In this way, the model can be viewed as a collection of \emph{clusters} of events, where each cluster is initiated by some exogenous activity and then filled by the progeny descending from that original event. This structure appears throughout the many applications of the Hawkes process, observed in the virality of social media and the contagion of financial risk, and employed in the prediction of seismological activity and the spread of COVID-19 \citep[e.g.][]{ogata1998space,ait2015modeling,bertozzi2020challenges,nickel2020learning}.\footnote{See also: \url{https://ai.facebook.com/blog/using-ai-to-help-health-experts-address-the-covid-19-pandemic/}} This cluster-based perspective was first provided by \citet{hawkes1974cluster}, and it has served as a cornerstone for analysis of the stochastic process. A particularly useful consequence is that, through a connection to Poisson branching processes, we are granted comprehensive time-agnostic insight into the model. That is, the distribution of the size of the cluster, meaning the number of events it contains, is well understood and available in closed form. In this paper, we address a question that is elementary, related, yet evasive: how long will a cluster last?
The duration of a Hawkes process cluster, meaning the time elapsed from the first epoch to the last, must be at least as important in application as the cluster size; examples of longing to know when an endogenously driven activity will cease surely cannot be far from mind. However, despite the ease of access to the distribution of the cluster size, this answer has remained elusive. While formalizing the cluster-based definition, \citet{hawkes1974cluster} bounded the mean cluster duration and obtained an integral equation that must be satisfied by its cumulative distribution function (CDF), but acknowledged that it would only ``be solved in principle by repeated numerical integration.'' \citet{moller2005perfect} tightened the bound on the mean of the duration and showed that the duration itself could be stochastically dominated by an exponential random variable under some tail conditions on the excitation function, but the authors also remarked that the duration distribution is ``unknown even for the simplest examples of Hawkes processes.'' They too develop a numerical approximation of the CDF, and this informs their approach for introducing a perfect simulation procedure for the Hawkes process, which is the primary focus of that work. \citet{chen2021perfect} is also devoted to perfect simulation of the Hawkes process, particularly in steady-state, and applies this to a single server queue driven by Hawkes arrivals. By comparison to \citet{moller2005perfect}, \cite{chen2021perfect} leverages exponential tilting to produce a more efficient algorithm. Because exponential tilting with respect to the cluster duration is challenging, the author instead constructs an upper bound of the duration that is almost surely longer than the duration itself. It is important that we note that simulation of Hawkes process clusters is actually a subroutine of the \citet{chen2021perfect} algorithm \citep[and likewise for][]{moller2005perfect}, and this is explicitly described in the extension to multivariate Hawkes processes in \citet{chen2020perfect}. In fact, all of these works simulate the cluster through the structure provided by the \citet{hawkes1974cluster} definition.
Several other works have sought headway into the cluster duration outside the context of simulation. \citet{graham2021regenerative} also bounds the duration of the cluster, invoking the tail conditions from \citet{moller2005perfect} to provide inequalities for the mean and for the CDF. Here, the author uses the bounds to prove regenerative properties for the Hawkes process as a whole. Relatedly, \citet{costa2020renewal} also shows renewal results, but only for excitation functions with a bounded support condition. Additionally, \citet{reynaud2007some} provides non-asymptotic tail bounds of the cluster duration, and \citet{bremaud2002rate} bounds the tail of the duration in order to bound the rate of convergence to equilibrium in the Hawkes process overall. However, none of these prior works characterize the mean duration explicitly, let alone its distribution.
Here, we address the fundamental pursuit of the cluster duration through a surprising connection from Hawkes processes to parking functions, a family of random objects that are rooted in enumerative combinatorics. In a comprehensive survey of combinatorial results, \citet{yan2015parking} remarked that the parking function lies ``in the center of combinatorics and appear[s] in many discrete and algebraic structures.'' In this paper, we find that parking functions are also the hidden spines of Hawkes process clusters. While this bridge from discrete to continuous space may be unexpected, the parking function itself is truly not a lonely concept. Originally introduced in \citet{konheim1966occupancy}, the classic context is an ordered vector of preferences over a row of $k$ parking spaces such that, if $k$ drivers proceed left to right and take only their preferred space or the next available to the right of it, all cars will be able to park. In enumerative combinatorics, this simple concept has been connected to many other interesting objects. For example, \citet{riordan1969ballots} linked parking functions and labeled trees, while \citet{stanley1997parking} connected parking functions and non-crossing partitions. \citet{stanley2002polytope} then extended this to plane trees and to the associahedron, as well as to a family of polytopes closely related to the one that arises here. Parking functions have also been of use in the analysis of polynomials, such as in \citet{carlsson2018proof}.
Highly relevant to our following study is the bijection between parking functions and labeled Dyck paths, or, equivalently, between sorted parking functions and Dyck paths \citep[see e.g.~Section 13.6 in][]{yan2015parking}. Many other combinatorial relationships are available in great detail from \citet{yan2015parking}. Quite recently, the parking function has received considerable attention as a random object; for example by \citet{diaconis2017probabilizing} and~\citet{kenyon2021parking}.
Motivated by the idea of a uniformly random parking function (from the collection of all those considering the same number of spaces), \citet{diaconis2017probabilizing} explores the joint distribution of the full vector of preferences and the marginal distribution of the first preference. Additionally, the authors also conduct an asymptotic study of parking functions as the number of cars (or, equivalently, spaces) grows large, yielding a connection to Brownian excursion processes in the limit. In the combinatorics literature, there are many generalizations of the parking function in which the cardinality of spaces exceeds the cardinality of preferences, and \citet{kenyon2021parking} adapts this generalization to the stochastic model. These generalized parking functions are again uniformly random within each combination of space and preference sizes, and \citet{kenyon2021parking} studies both the marginal distribution of a single coordinate for the generalized parking function and the covariance between two coordinates for the classical notion.
Our connection between Hawkes processes and parking functions joins a family of work descending from the random time change theorem for point processes. The classical random time change theorem \citep[e.g. Theorem 7.4.I of][]{daley2003introduction} provides that if the integral of the process intensity is evaluated at the arrival epochs of a simple, adapted (not necessarily Poisson) point process, these transformed points will form a unit-rate Poisson process. This integral of the intensity is called the compensator. This beautiful result can be traced back to \citet{meyer1971demonstration}, with the corresponding characterization of the Poisson on the half-line originating with \citet{watanabe1964discontinuous}. There is a deep theory surrounding this transformation, such as the elegant martingale proofs given in \citet{bremaud1975extension} or \citet{brown1988simple}. A rich collection of these ideas is available in \citet{bremaud1981point}, as are many demonstrations of the potency of martingales for point process models. However, it has since been seen that this result need not require the most advanced of techniques, as \citet{brown2002time} showed that this attractive idea can be explained with elementary arguments, essentially reducing the proof to calculus. We will make use of this simplicity here.
In addition to its elegance, the random time change theorem also holds great value. For example, in the simulation of point processes, if one can compute and invert the compensator function, Algorithm 7.4.III of \citet{daley2003introduction} shows that one can obtain a point process with the corresponding intensity function by transforming arrival epochs of a unit-rate Poisson process according to the inverse of the compensator. For the Hawkes process, the application of this idea can be traced to \citet{ozaki1979maximum}, and inverting the compensator is essentially the idea behind the exact simulation procedure in \citet{dassios2013exact}, even if it is not described as such. As we will see, our focus on the Hawkes cluster sets us apart. The simulation method that arises from our analysis differs from these Hawkes compensator inversion methods by sampling the points collectively from a polytope, rather than iteratively in the sample path. In statistics, the random time change theorem leads to methods for evaluating the fit of point process models on data \citep[see, e.g.,][]{ogata1988statistical,brown2005statistical,kim2014call}. The idea of these techniques is that, by transforming a complicated, possibly time-varying and/or path-dependent point process to a unit-rate Poisson process, one can more easily observe and quantify exceedances from confidence intervals for the model. Both the simulation and statistical techniques are perhaps most often used for non-stationary Poisson processes, where one can immediately petition to the classical Poissonian idea of conditional uniformity. Here, parking functions will allow us to extend this notion to the clusters within Hawkes processes.
Our nearest predecessors in random time change theory are likely \citet{giesecke2005dependent} and \citet{ding2009time}. Both of these papers are interested in using this concept to build stochastic models. \citet{giesecke2005dependent} offers somewhat of a converse to the random time change theorem, inverting from unit-rate Poisson epochs to construct a more general point process. \citet{ding2009time} uses a similar idea to construct point processes for financial contexts by converting from a pure birth process. By comparison, in this work we will not construct a point process generally, but rather uncover structure specific to the Hawkes process that becomes visible only when the process is transformed akin to the random time change theorem. Furthermore, it is fundamental to our approach that we are inspecting the cluster while conditioning on its size. Our pursuit of the times within a Hawkes cluster is framed by the (conditional) knowledge of how many times there will be, and thus our methodology is indebted conceptually to both the random time change theorem and the conditional uniformity property enjoyed by the Poisson process. This size-conditioning actually constitutes a departure from the typical random time change theorem assumptions in subtle yet important ways, as we will discuss in detail.
This brings us to our contributions and to the organization of the remainder of the paper. Our methodological goal in this work is to use one of the most important results for the Hawkes process, \citet{hawkes1974cluster}'s cluster definition, and one of the most important results for point processes in general, the random time change theorem, to create an analog of one of the most powerful tools for the Poisson process, conditional uniformity (Lemma~\ref{Gentimechange}). This is powered by an unexpected connection between two notable stochastic models, Hawkes process clusters and random parking functions. In linking the continuous time point process to this discrete random vector, we will find that parking functions uncover the full chronology of Hawkes clusters (Theorem~\ref{distThm}). Through this, we are brought back to our original goal, which is to understand the cluster duration. We organize this analysis as follows. In Section~\ref{backgroundSec}, we review the key background concepts, providing precise definitions and summarizing important results from the literature. Then, in Section~\ref{randTimeSec}, we invoke the compensator transform of the Hawkes process and adapt the random time change theorem for our setting. Our primary result, in which we formalize the Hawkes-process-parking-function connection, is shown in Section~\ref{decompSec}. To demonstrate its usefulness, we apply the techniques to the popular Markovian Hawkes process in Section~\ref{markovSec}, where we use conditional uniformity and parking functions to prove an explicit equivalence between the cluster duration and a random sum of conditionally independent exponential random variables (Theorem~\ref{markovDistThm}). In a second application of the parking function decomposition, in Section~\ref{simSec} we describe the resulting simulation algorithm, which may be of importance to applications in rare event simulation thanks to its ability to explicitly condition on the cluster size (Algorithm~\ref{algSim}). Finally, in Section~\ref{concSec} we conclude.
\section{Model, Scope, and Background Fundamentals}\label{backgroundSec}
To begin, let us briefly review the definitions and prior results that set the stage for our analysis. In particular, we will provide formal definitions of the two focal objects in this work, the Hawkes process cluster and the parking function, while also touching on related topics like branching processes, the Borel distribution, Poisson conditional uniformity, and Dyck paths.
\subsection{Hawkes Processes, Branching Processes, and Poisson Conditional Uniformity}
Originally introduced by \citet{hawkes1971spectra}, the Hawkes point process $N_t$ and intensity $\lambda_t$ are defined such that
\begin{align}
\PP{N_{t+\delta} - N_t = n \mid \mathcal{F}_t}
&
=
\begin{cases}
\lambda_t \delta + o(\delta) & n = 1\\
1 - \lambda_t \delta + o(\delta) & n = 0\\
o(\delta) & n > 1
\end{cases}
\label{mainHawkesDef1}
\end{align}
where $\mathcal{F}_t$ is the natural filtration on the underlying probability space $(\Omega, \mathcal{F}, \mathbb{P})$ generated by the point process $N_t$, and where the conditional intensity function $\lambda_t$ is given by
\begin{align}
\lambda_t
&
=
\lambda
+
\int_{-\infty}^t g(t-u) \mathrm{d}N_u
=
\lambda
+
\sum_{A_i < t}g(t- A_i)
,
\label{mainHawkesIntensityDef1}
\end{align}
with $\{A_i \mid i \in \mathbb{Z}\}$ as the increasing sequence of arrival epochs in the point process $N_t$. The function $g:\mathbb{R}_+ \to \mathbb{R}_+$ governs the excitement generated upon each arrival epoch, and thus is often referred to as the \textit{excitation kernel} or excitation function. On the other hand, $\lambda \geq 0$ is commonly called the \textit{baseline intensity}, as it drives an underlying stream of exogenous arrivals. By construction in \eqref{mainHawkesIntensityDef1}, every event epoch increases the intensity upon its occurrence by $g(0)$ and then generally $s$ time units later by $g(s)$, hence earning the process its hallmark trait of ``self-excitement.'' For this reason, it is the history of the process that drives its future activity.
One of the most powerful tools for analysis of Hawkes processes has been an alternate definition of the model, originally provided and proved to be equivalent to~\eqref{mainHawkesDef1} and~\eqref{mainHawkesIntensityDef1} in \citet{hawkes1974cluster}. This formulation, frequently referred to as the cluster-based definition, bears a style similar to a branching process. In a first-level stream, initial events are generated according to a Poisson process at rate $\lambda$. Then, in a secondary stream for each initial event, a progeny cluster is generated independently of all other clusters and arrivals. Starting with the initial event and for each successive arrival in the cluster, direct offspring of that event are generated according to an inhomogeneous Poisson process with rate $g(t)$ for $t$ time units elapsed since the given arrival epoch. This repeats, generating descendants of the offspring themselves, until no further arrivals occur in the cluster, and extinction is guaranteed almost surely if $\rho = \int_0^\infty g(t) \mathrm{d}t < 1$.
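As a concrete illustration of this generative description, the minimal sketch below (ours, not an algorithm from the paper; the exponential kernel $g(t)=\alpha e^{-\beta t}$ with $\rho=\alpha/\beta<1$ is assumed only for definiteness, and the paper's own simulation procedure appears later in Section~\ref{simSec}) simulates a single cluster by repeatedly drawing each event's direct offspring.
\begin{verbatim}
# Minimal sketch (ours) of the Hawkes (1974) cluster construction, assuming
# the exponential kernel g(t) = alpha * exp(-beta * t), so rho = alpha / beta.
import numpy as np

def simulate_cluster(alpha, beta, rng):
    rho = alpha / beta
    assert rho < 1, "stability requires rho < 1"
    epochs, frontier = [0.0], [0.0]          # initial event at time 0
    while frontier:
        parent = frontier.pop()
        n_children = rng.poisson(rho)        # direct offspring count ~ Pois(rho)
        # offspring lags are i.i.d. with density g(t)/rho = beta * exp(-beta*t)
        children = (parent + rng.exponential(1.0 / beta, size=n_children)).tolist()
        epochs.extend(children)
        frontier.extend(children)
    return np.sort(np.array(epochs))

rng = np.random.default_rng(0)
A = simulate_cluster(alpha=0.5, beta=1.0, rng=rng)
print(len(A), A[-1])                         # cluster size N and duration tau
\end{verbatim}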
One of the foremost benefits of this alternate definition of the Hawkes process is that it makes very clear the two types of arrivals: the baseline-generated stream and the excitement-generated clusters that spawn off of it. Because the baseline-generated stream is a Poisson process, its behavior is well understood, and so the cluster-based definition allows us to focus on the impact of the self-excitement. This is where the focus of this paper will lie. To dedicate our attention to the structure of the self-excitement, we will narrow the primary definition and isolate the clusters. That is, let us essentially mirror Equations~\eqref{mainHawkesDef1} and~\eqref{mainHawkesIntensityDef1} but with a time 0 initial arrival ($A_0 = 0$) and no baseline stream ($\lambda=0$).
\begin{definition}[Hawkes Process Cluster]
\emph{For $t \geq 0$, we will henceforward take $(\lambda_t, N_t)$ as the cluster-specific intensity-and-point-process pair with $N_t$ functioning as in Equation~\eqref{mainHawkesDef1} and with the intensity being simply
\begin{align}
\lambda_t
=
\int_0^t g(t-u) \mathrm{d}N_u
=
\sum_{i=0}^{N_t-1} g(t-A_i)
,
\label{mainHawkesDefCluster}
\end{align}
where $A_0 = 0$ without loss of generality. Supposing $\rho = \int_0^\infty g(t) \mathrm{d}t < 1$, the cluster can be fully characterized by its arrival epochs $0 = A_0 < A_1 < \dots < A_{N-1}$, where $N = \lim_{t\to\infty} N_t$ is the cluster size and $\tau = A_{N-1}$ the cluster duration.}
\end{definition}
One can think of the cluster initializing with a time zero arrival from the exogenous baseline stream, and thus the above definition focuses only on the endogenously driven activity. The stability assumption $\rho < 1$ traces back to Hawkes' original work, and it provides that the size $N$ is finite almost surely and that the intensity $\lim_{t\to\infty} \lambda_t = 0$ almost surely \citep{hawkes1971spectra}. The \citet{hawkes1974cluster} alternate definition of the model already provides a perspective of the cluster's descendant structure irrespective of time, and this describes the cluster's size with clarity. Because each event spawns its own offspring Poisson process with time-varying rate $g(t)$, its expected total number of offspring events is $\rho$, and moreover the total number of direct offspring of any one event is $\mathsf{Pois}(\rho)$ distributed. This means that the family tree becomes a Poisson branching process and thus the total progeny will be Borel distributed \citep{feller2008introduction}. That is, the size $N \sim \mathsf{Borel}(\rho)$ has probability mass function
\begin{align}
\PP{
N
=
k
}
=
\frac{e^{-\rho k}}{k!}\left(\rho k\right)^{k-1}
,
\label{borelDist}
\end{align}
for all $k \in \mathbb{Z}_+$.
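As a quick numerical sanity check of this mass function (ours, not taken from the paper), one can verify that it sums to one and has mean $1/(1-\rho)$, the familiar expected total progeny of a subcritical branching process.
\begin{verbatim}
# Numerical check (ours): the Borel(rho) mass function above sums to one and
# has mean 1/(1 - rho); truncating at k = 150 is numerically negligible here.
import math

rho = 0.5
ks = range(1, 151)
pmf = [math.exp(-rho * k) * (rho * k) ** (k - 1) / math.factorial(k) for k in ks]
print(sum(pmf))                                             # ~ 1.0
print(sum(k * p for k, p in zip(ks, pmf)), 1 / (1 - rho))   # both ~ 2.0
\end{verbatim}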
While this clear understanding of the cluster size is helpful, on the surface it only tells us a limited part of the story. However, we will see here that this time-agnostic quantity provides a key to the chronology as well. Our approach to unlocking these epochs is rooted in the conditional uniformity property for Poisson processes. The classical notion of conditional uniformity states that for a Poisson process with a homogeneous arrival rate, the joint distribution of the epochs $A_1 < A_2 < \dots < A_k$ given that there were $k$ arrivals in the interval $[0,t)$ is equivalent to the joint distribution of the order statistics of $k$ i.i.d.~uniform random variables on $[0,t)$. Alternately stated, as a random vector, $[A_1, A_2, \dots, A_k]$ is conditionally uniform on the polytope $\mathcal{U} = \{x \in [0,t)^k \mid x_i < x_{i+1} ~\forall~ i \leq k - 1\}$. The volume of $\mathcal{U}$ is $t^k \slash k!$, and hence the joint density of this random vector is $k! \slash t^k$ for all $x \in \mathcal{U}$, as the generalized uniform distribution on a polytope means that all points in that polytope have density given by the inverse of the volume. It is of course straightforward to sample from $\mathcal{U}$, as one can simply generate $k$ i.i.d.~$\mathsf{Uni}(0,1)$ random variables, sort them, and multiply by $t$. This creates a handy way of sampling Poisson processes, and the idea extends to time inhomogeneous Poisson processes as well. For a Poisson process with time-varying arrival rate given by $f:\mathbb{R} \to \mathbb{R}^+$ with $F(t) = \int_0^t f(s) \mathrm{d}s$, it can be simulated on an interval $[0,t)$ where $f(\cdot) > 0$ by generating the number of epochs according to $\mathsf{Pois}(F(t))$, sampling the yielded number of standard uniform random variables, sorting them, and returning each arrival epoch as $A_i = F^{-1}(U_{(i)} F(t))$, where $U_{(i)}$ is the $i^\text{th}$ smallest. This idea can be seen to follow simply from the inverse transform sampling method and the likelihood function of the time-varying Poisson process, but it can also be seen as a consequence of the random time change theorem, as we will discuss in Section~\ref{randTimeSec}.
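The recipe just described fits in a few lines; the sketch below is ours, and the rate $f(t)=2t$ on $[0,T)$ is an arbitrary choice made only for illustration.
\begin{verbatim}
# Minimal sketch (ours) of simulating an inhomogeneous Poisson process on
# [0, T) by conditional uniformity, with the illustrative rate f(t) = 2t,
# so that F(t) = t^2 and F^{-1}(y) = sqrt(y).
import numpy as np

rng = np.random.default_rng(1)
T = 3.0
F = lambda s: s**2
F_inv = lambda y: np.sqrt(y)

k = rng.poisson(F(T))                 # number of epochs on [0, T)
U = np.sort(rng.uniform(size=k))      # sorted standard uniforms
A = F_inv(U * F(T))                   # A_i = F^{-1}(U_(i) * F(T))
print(k, A)
\end{verbatim}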
\subsection{Dyck Paths and Parking Functions}
Well-known in combinatorics, a $k$-step Dyck path is a non-decreasing trail of points on the lattice from $(0,1)$ to $(k,k)$ such that the height of the path never exceeds the right-most value of its $x$-coordinate integer interval. Here, ``$k$-step'' means that there are $k$ horizontal moves across the lattice in total. For reference and intuition's sake, Figure~\ref{dyckfig} shows all 3-step Dyck paths. Dyck paths have myriad connections to other combinatorial objects, and many of these connections follow from the fact that they are enumerated by the Catalan numbers \citep[e.g.][]{stanley1999enumerative}. We will let $\mathsf{D}_k$ be the set of all $k$-step Dyck paths. For convenience downstream, our convention will be to record the Dyck path as the vector containing the integer values that are the largest $y$-coordinate attained at each $x$-coordinate in the lattice path, omitting the terminus. That is, the $2$-step path $(0,1)\to(1,1)\to(2,1)\to(2,2)$ would be recorded $[1, 1]$ and likewise $(0,1)\to(1,1)\to(1,2)\to(2,2)$ would be $[1,2]$.
\begin{figure}
\caption{The collection of all 3-step Dyck paths.}
\label{dyckfig}
\end{figure}
Closely related to Dyck paths are parking functions, which will serve as a focal point throughout this paper. These can be defined through a parsimonious condition.
\begin{definition}[Parking Function]
\emph{For $k \in \mathbb{Z}_+$, $\pi \in \mathbb{Z}_+^k$ is a parking function of length $k$ if and only if it is such that, when sorted such that $\pi_{(1)} \leq \dots \leq \pi_{(k)}$, $\pi_{(i)} \leq i$ for each $i \in \{1, \dots, k\}$.
}
\end{definition}
Parking functions earn their name from the following intuitive context. Suppose $k$ cars arrive in successive fashion to a strip of $k$ parking spaces, labeled 1 through $k$. Each car $i$ has a preferred space $\pi_i$. If spot $\pi_i$ is available, then the car will park there, otherwise they will take the next available space after $\pi_i$, meaning greater than $\pi_i$. Hence, the constraints $\pi_{(i)} \leq i ~\forall~ 1 \leq i \leq k$ ensure that all cars will be able to park.
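In code, the defining condition amounts to a one-line sorted-coordinate check (our illustration, not from the paper).
\begin{verbatim}
# Sorted-coordinate test from the definition: pi is a parking function
# iff pi_(i) <= i for every i (1-indexed).
def is_parking_function(pi):
    return all(p <= i + 1 for i, p in enumerate(sorted(pi)))

print(is_parking_function((3, 1, 1)))   # True:  sorted is (1, 1, 3)
print(is_parking_function((3, 1, 3)))   # False: sorted is (1, 3, 3) and 3 > 2
\end{verbatim}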
For our context, perhaps the most valuable property of parking functions will be that they can be viewed as labeled Dyck paths, or, equivalently, that a sorted parking function is a Dyck path \citep[see, e.g., Section 13.6 of][]{yan2015parking}. That is, every $k$-step Dyck path is also a parking function of length $k$, and every parking function of length $k$ can be seen to be a permutation of a $k$-step Dyck path. We will let $\mathsf{PF}_k$ for each $k \in \mathbb{Z}_+$ be the set of all length-$k$ parking functions, and it will be valuable for us to both enumerate these and review an elegant proof of this enumeration. The proof is due to Pollak, but it was recorded and published by contemporaries \citep[e.g.][]{riordan1969ballots,foata1974mappings}, and it is also available on p.~836 in \citet{yan2015parking}.
\begin{lemma}\label{pollak}
There are $(k+1)^{k-1}$ parking functions of length $k \in \mathbb{Z}_+$.
\end{lemma}
\proof{Proof (Pollak's Circle Argument).} Suppose that there are instead $k+1$ parking spaces, and let $\tilde{\pi} \in \{1, \dots, k+1\}^k$ be any $k$-length preference vector for these spaces. Let the cars progress one-by-one so that the $i^\text{th}$ car is the $i^\text{th}$ to pick and suppose, as before, that each car attempts to park in their preferred spot $\tilde{\pi}_i$ and, if unavailable, takes the next space open after it. However, suppose now that the spaces are arranged in a circle, so that if a car has made it to space $k+1$ without parking, it will start again at space 1. Hence, one can instead think of $\tilde{\pi}$ as an element of $(\mathbb{Z}\slash (k+1)\mathbb{Z})^k$. By the pigeonhole principle, all $k$ cars will be able to park and there will be exactly one space remaining, some $\ell \in (\mathbb{Z} \slash (k+1) \mathbb{Z})$. If $\ell = k+1$, then $\tilde \pi$ is a parking function, and if not, it can be converted to one by defining $\pi = (\tilde \pi - \ell)~\mathsf{mod}~(k + 1)$, effectively rotating the empty spot to align with space $k+1$. Since there are $k+1$ possible rotations that would convert to the same parking function, the cardinality of the set of parking functions is $1\slash(k+1)$ of the cardinality of the set of preference vectors, which is $(k+1)^k$.
\Halmos\endproof
\begin{figure}
\caption{Example conversion of a preference vector drawn from $\boldsymbol{\{1, \dots, k+1\}^k}$ to a parking function by rotating the empty space to space $k+1$.}
\label{pollakfig}
\end{figure}
A demonstration of this argument and the concept of rotating the empty space is shown in Figure~\ref{pollakfig}. This construction along the circle leads to a group theoretic sampling of uniformly random parking functions that is used in both~\citet{diaconis2017probabilizing} and~\citet{kenyon2021parking}, and we will employ this idea later for our proposed Hawkes cluster simulation algorithm in Section~\ref{simSec}.
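The circle argument doubles as a sampler. The following minimal sketch is our own implementation in the spirit of that group-theoretic sampling: draw a uniform preference vector over the circle, park, and rotate the unique empty space to position $k+1$.
\begin{verbatim}
# Minimal sketch (ours) of sampling a uniformly random parking function of
# length k via Pollak's circle argument: draw a uniform preference vector on
# Z/(k+1) (0-indexed), park around the circle, rotate the empty space to the
# last position, and report 1-based preferences.
import numpy as np

def random_parking_function(k, rng):
    pref = rng.integers(0, k + 1, size=k)        # uniform on {0, ..., k}
    occupied = np.zeros(k + 1, dtype=bool)
    for p in pref:
        spot = int(p)
        while occupied[spot]:
            spot = (spot + 1) % (k + 1)
        occupied[spot] = True
    empty = int(np.flatnonzero(~occupied)[0])    # the unique empty space
    return ((pref - (empty + 1)) % (k + 1)) + 1  # empty space -> space k+1

rng = np.random.default_rng(2)
pi = random_parking_function(5, rng)
print(pi, sorted(pi))                            # sorted(pi)[i] <= i + 1
\end{verbatim}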
\section{Conditional Uniformity through Random Time Change}\label{randTimeSec}
To connect conditional uniformity and Hawkes processes and to begin understanding this relationship, we first need to define the \textit{compensator} of the process, a transformation of the stochastic process. Let $G(t) = \int_0^t g(u) \mathrm{d}u$. Then, the continuous time compensator of the Hawkes cluster intensity is given by
\begin{align}
\Lambda(t)
=
\int_0^t \lambda_s \mathrm{d}s
=
\sum_{i=0}^{N_t-1}
\int_{A_i}^t
g(s-A_i)
\mathrm{d}s
=
\sum_{i=0}^{N_t-1} G(t-A_i)
.
\end{align}
By construction, $\Lambda(t)$ is a continuous and increasing function. In general, the difference of a point process and its compensator, $N_t - \Lambda(t)$, is a martingale, which can be seen from the Doob-Meyer decomposition \citep[see, e.g., Lemma 7.2.V in][]{daley2003introduction}, and this hints at some of the elegant martingale approaches to the random time change theorem.
In the case of the Hawkes process, we can see that the process is deterministic between its arrival epochs, and so we can capture the full information of the sample path through the discrete time compensator process evaluated only at these points in time:
\begin{align}
\Lambda_k
=
\Lambda(A_k)
=
\sum_{i=0}^{k-1} G(A_k-A_i)
,
\label{disCompDef}
\end{align}
for $k \in \mathbb{Z}_+$, where $\Lambda_k$ is only defined for clusters of size at least $k+1$, so that $A_k < \infty$. Here, we can see that not only is the sequence increasing by definition, since $\lambda_t > 0$ for any finite $t$, but we can also recall that $\lim_{t\to\infty} G(t) = \rho = \int_0^\infty g(u) \mathrm{d}u$, and so
$
\Lambda_{k-1} < \Lambda_k < k\rho.
$
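For concreteness, the discrete compensator in Equation~\eqref{disCompDef} is straightforward to evaluate numerically. The sketch below is ours, again assuming the exponential kernel so that $G(t)=(\alpha/\beta)(1-e^{-\beta t})$, and it also checks the bound $\Lambda_k < k\rho$.
\begin{verbatim}
# Minimal sketch (ours): evaluate Lambda_k = sum_{i<k} G(A_k - A_i) at given
# epochs, assuming the exponential kernel g(t) = alpha*exp(-beta*t), so that
# G(t) = (alpha/beta)*(1 - exp(-beta*t)); then check Lambda_k < k*rho.
import numpy as np

def compensator_points(epochs, alpha, beta):
    A = np.asarray(epochs)
    G = lambda s: (alpha / beta) * (1.0 - np.exp(-beta * s))
    return np.array([G(A[k] - A[:k]).sum() for k in range(1, len(A))])

A = [0.0, 0.4, 0.9, 2.5]                         # example epochs, A_0 = 0
alpha, beta = 0.5, 1.0
Lam = compensator_points(A, alpha, beta)
print(Lam)
print(np.all(Lam < (alpha / beta) * np.arange(1, len(A))))   # True
\end{verbatim}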
The compensator sets the stage for us to invoke and extend the classical random time change theorem.
\begin{lemma}\label{Gentimechange}
Given that there are $N = k + 1$ events in the cluster in total including the initial time 0 event, the conditional joint density of the transformed epochs is
\begin{align}
f(\Lambda_1, \Lambda_2, \dots, \Lambda_k \mid N = k + 1)
=
\left(\frac{1}{\rho}\right)^k\frac{(k+1)!}{(k+1)^k}
,
\end{align}
for all $k \in \mathbb{N}$ and all $0 < \Lambda_1 < \dots < \Lambda_k < k\rho$ with $\Lambda_i < i \rho$ for all $i \in \{1, \dots, k\}$. Hence, $\Lambda \mid N \sim \mathsf{Uni}(\mathcal{P}_{N-1})$ for $\Lambda$ as the vector of the compensator points and the polytope ${\mathcal{P}}_k = \{x \in \mathbb{R}^k_+ \mid x_i < i \rho ~\forall~ 1 \leq i \leq k , x_i < x_{i+1} ~\forall~ 1 \leq i \leq k-1\}$.
\end{lemma}
\proof{Proof.}
From the classical random time change theorem, e.g.~Equation (2.14) of \citet{brown2002time}, the unconditioned joint density of the first $k$ compensator points is
\begin{align*}
f(\Lambda_1, \Lambda_2, \dots, \Lambda_k)
&=
e^{-\Lambda_k}
,
\end{align*}
where $\Lambda_i < i\rho$ for all $1 \leq i \leq k$ is both implied and required whenever there are at least $k$ offspring events. Then, the conditional density can be expressed
\begin{align*}
f(\Lambda_1, \Lambda_2, \dots, \Lambda_k \mid N = k + 1)
&=
\PP{N = k+1 \mid \Lambda_1, \Lambda_2, \dots, \Lambda_k}
\frac{
e^{-\Lambda_k}
}{
\PP{N = k+1}
}
.
\end{align*}
Now, given that there have been $k$ epochs (excluding the initial time 0 arrival) so far up to time $A_k$, the event that there will be exactly $k$ such epochs in the cluster in total (that is, $N = k+1$) is equivalent to the event that there will be no epochs after time $A_k$, i.e.~$N_{[A_k,\infty)} = N - N_{A_k} = 0$. Given the history of the process up to time $A_k$ (as contained in the natural filtration $\mathcal{F}_{A_k}$), the probability that $N_{[A_k,\infty)} = 0$ is given by
\begin{align*}
\PP{N_{[A_k,\infty)} = 0 \mid \mathcal{F}_{A_k}}
&=
e^{-\int_{A_k}^\infty \lambda_t \mathrm{d}t}
=
e^{-\sum_{i=0}^{k}\int_{A_k}^\infty g(t -A_i) \mathrm{d}t}
,
\end{align*}
since Equation~\eqref{mainHawkesDef1} shows that the Hawkes process is conditionally non-stationary Poisson given its history. Because the intensity $\lambda_t$ must be strictly positive on $[0,A_k]$, there is a unique and deterministic bijection between $(A_1, A_2, \dots, A_k)$ and $(\Lambda_1, \Lambda_2, \dots, \Lambda_k)$. Hence, because the collection of arrival epochs completely specifies the sample path of the process, so does the collection of compensator points. Therefore, we have
\begin{align*}
\PP{N = k+1 \mid \Lambda_1, \Lambda_2, \dots, \Lambda_k}
&=
\PP{N_{[A_k,\infty)} = 0 \mid \mathcal{F}_{A_k}}
=
e^{-\sum_{i=0}^{k}\int_{A_k}^\infty g(t -A_i) \mathrm{d}t}
,
\end{align*}
which by the definition of the compensator further implies that
\begin{align*}
\PP{N = k+1 \mid \Lambda_1, \Lambda_2, \dots, \Lambda_k} e^{-\Lambda_k}
=
e^{-\sum_{i=0}^{k}\int_{A_k}^\infty g(t -A_i) \mathrm{d}t -\sum_{i=0}^{k}\int_{0}^{A_k} g(t -A_i) \mathrm{d}t}
=
e^{-(k+1)\rho}
.
\end{align*}
Recalling that $N \sim \mathsf{Borel}(\rho)$, we can substitute this probability mass function for $\PP{N=k+1}$ and achieve the stated result.
\Halmos \endproof
Let us briefly contrast Lemma~\ref{Gentimechange} with the classical version of the random time change theorem. It is well-established that the seminal result holds for any simple, adapted point process, and that of course includes the Hawkes process we study here. So, it is not that we are newly extending to self-exciting processes; rather, the novelty is in the use of conditioning. Specifically, in this lemma we obtain the conditional joint density of the compensator points given the size of the cluster. Because this conditioning is on a tally collected at the end of time, we depart from some of the common assumptions of the classical random time change theorem. That is, the statement of the theorem typically contains an assumption that $\Lambda(t)$ is not bounded almost surely \citep[such as in Theorem 7.4.I in][]{daley2003introduction}, but here we have discussed that each $\Lambda_i$ is strictly less than $i\rho$. In fact, $\PP{\lim_{t\to\infty} \Lambda(t) = (k+1) \rho \mid N=k+1} = 1$.\footnote{This observation can actually be used to construct an alternate proof of Lemma~\ref{Gentimechange} through a limit of the results in \citet{brown2002time}, arising out of Equations (2.5) and (2.14) therein.} By direct consequence, we also do not require an infinite sequence of epochs on the positive real half-line \citep[as in Proposition 7.4.IV or Theorem T16 in, respectively,][]{daley2003introduction,bremaud1981point}. Similarly, there is also often an assumption that $\lambda_t > 0$ for all $t$ in the time interval of the transform \citep[as in][]{brown2002time}, but here we are explicitly assuming that the intensity converges to 0: $\PP{\lim_{t\to\infty} \lambda_t = 0 \mid N=k+1} = 1$.
These changes are subtle in assumption but important in consequence, as they shift the connection's end result from the Poisson distribution to the uniform distribution. The key takeaway from Lemma~\ref{Gentimechange} is that the conditional density is constant: it has no dependence on the values of the compensator points. Thus, for a cluster with $k$ post-initial events ($N = k +1$), any collection of compensator points satisfying $0 < \Lambda_1 < \Lambda_2 < \dots < \Lambda_k < k\rho$ with $\Lambda_i < i\rho$ for each $i$ is equally likely to occur. As used in the proof of Lemma~\ref{Gentimechange}, the compensator vector entirely characterizes the cluster, so we now have access to the full sample path through these conditionally uniform points. This also shows that any two Hawkes processes with the same $\rho$ will have equivalent distributions of compensator points; neither the size of the cluster nor the conditional joint density of the compensators depends on $g(\cdot)$ outside of $\rho$.
Still, the establishment of conditional uniformity is not out of the blue. Setting aside the departure from typical random time change theorem assumptions, the conditional uniformity in Lemma~\ref{Gentimechange} is not overly surprising given the well known transformation to a unit-rate Poisson process. However, what is special for this result is the polytope on which the transformed points lie uniformly distributed. Recalling the review of Poisson conditional uniformity in Section~\ref{backgroundSec}, we know that the only constraints on the Poisson arrival epochs are that each point is less than the next. However, Lemma~\ref{Gentimechange} shows that in addition to that, each compensator point in the Hawkes cluster is constrained to be less than the product of its index and $\rho$. Hence, the interesting piece of this conditional uniformity that remains to be understood is determining what exactly it means to be uniformly random on this convex polytope $\mathcal{P}_k$.
\section{Decomposing Clusters through Conditional Uniformity}\label{decompSec}
This leads us to our first main result. Lemma~\ref{Gentimechange} gave us the family of polytopes $\{\mathcal{P}_k \mid k \in \mathbb{N}\}$ upon which the cluster's compensator points are conditionally uniform. What we will find now is that this implies that parking functions provide a hidden spine in these transformed arrival epochs. Theorem~\ref{distThm} shows that uniformly random parking functions actually provide a partition for these polytopes from which the compensators can be drawn directly through standard uniforms. Hence, this decomposition elucidates the structure within self-excitement and unites Hawkes processes with parking functions.
\begin{theorem}\label{distThm}
Let $k \in \mathbb{N}$. Given that the Hawkes process cluster is comprised of $N=k+1$ events, the compensator transformed arrival epochs $\Lambda_1, \dots, \Lambda_k$ are such that
\begin{align}
\Lambda_i
\stackrel{\mathsf{D}}{=}
\rho\cdot\left(\pi - U\right)_{(i)}
\qquad
\forall\,i \in \{1, \dots, k\}
,
\end{align}
where $\left(\pi - U\right)_{(1)} < \dots < \left(\pi - U\right)_{(k)}$, with $\pi \in \mathsf{PF}_k$ as a length-$k$ parking function drawn uniformly at random and $U \in (0,1)^k$ as $k$ i.i.d.~standard uniform random variables.
\end{theorem}
We will prove Theorem~\ref{distThm} across a series of subsidiary statements. For simplicity and without loss of generality, let us consider the unit polytope $\bar{\mathcal{P}}_k = \{x \in \mathbb{R}^k_+ \mid x_i < i ~\forall~ 1 \leq i \leq k, x_i < x_{i+1} ~\forall~ 1 \leq i \leq k-1\}$, which maps to $\mathcal{P}_k$ through the bijection $x \mapsto \rho x$ for all $k$. To begin to establish our methodology of sampling from $\bar{\mathcal{P}}_k$, we will first view $\bar{\mathcal{P}}_k$ as a union over sub-polytopes with measure 0 intersections. Specifically, let us distill each point in $\bar{\mathcal{P}}_k$ as lying on intervals bounded by integer steps in each coordinate direction. Then, any point in $\bar{\mathcal{P}}_k$ can be seen to lie in one of these subspaces.
To organize this decomposition of $\bar{\mathcal{P}}_k$, we will draw in the first combinatorial object discussed in Section~\ref{backgroundSec}: Dyck paths. In Proposition~\ref{pDecompProp}, we use the collection of $k$-step Dyck paths to partition $\bar{\mathcal{P}}_k$ into $|\mathsf{D}_k| = C_k = {2k \choose k}\slash(k+1)$ disjoint subspaces. As the brief proof demonstrates, the result itself is straightforward, but it will be of use in structuring the uniform sampling from the polytope.
\begin{proposition}\label{pDecompProp}
For each $k \in \mathbb{Z}_+$, the unit polytope $\bar{\mathcal{P}}_k$ can be decomposed through the set of $k$-step Dyck paths, i.e.
\begin{align}
\bar{\mathcal{P}}_k
&=
\bigsqcup_{d \in \mathsf{D}_k}
\bar{\mathcal{P}}_{k,d}
,
\label{pDecompEq}
\end{align}
where
\begin{align}
\bar{\mathcal{P}}_{k,d}
&=
\{x \in \mathbb{R}_+^k
\mid
d_i-1 \leq x_i < d_i ~\forall~ 1 \leq i \leq k, x_i < x_{i+1} ~\forall~ 1 \leq i \leq k-1
\}
\label{PkdDefEq}
,
\end{align}
for each Dyck path $d \in \mathsf{D}_k$.
\end{proposition}
\proof{Proof.}
By definition, the sets $\bar{\mathcal{P}}_{k,d}$ are mutually disjoint across all paths $d$. Because the ordering constraint $x_i < x_{i+1}$ is shared by every set in the Dyck path union and by $\bar{\mathcal{P}}_k$, we need only check that the space constraints agree. Here, we can quickly observe that $1 \leq d_i \leq i$ for each $i \in \{1, \dots, k\}$, hence $\bigcup_{d \in \mathsf{D}_k}
\{x_i \in \mathbb{R}_+
\mid
d_i-1 \leq x_i < d_i \} =
\{x_i \in \mathbb{R}_+
\mid
0 \leq x_i < i \}$.
\Halmos \endproof
\begin{figure}
\caption{The three dimensional polytope (left) and its decomposition into the five sub-regions indexed by Dyck paths (right).}
\label{decompFig}
\end{figure}
The three dimensional polytope, $\bar{\mathcal{P}}_3$, and its Dyck path decomposition are shown in Figure~\ref{decompFig}. In some sets in the Dyck path partition, the ordering constraints will be superfluous given the integer ranges on which each coordinate lives. For example, at any dimension $k$ the largest volume subset will be the hypercube $\bar{\mathcal{P}}_{k,[1, 2, \dots, k]} = \{x \in \mathbb{R}_+^k
\mid
i-1 \leq x_i < i ~\forall~ 1 \leq i \leq k
\}$, in which the ordering constraints are not needed. By comparison, the smallest volume subset will correspond to the ``lowest'' Dyck path $[1, 1, \dots, 1]$, which yields $\bar{\mathcal{P}}_{k,[1, 1, \dots, 1]} = \{x \in [0,1)^k
\mid
x_i < x_{i+1} ~\forall~ 1 \leq i \leq k-1
\}$.\footnote{In fact, $\bar{\mathcal{P}}_{k,[1, 1, \dots, 1]}$ is equivalent (up to scaling) to the polytope on which homogeneous Poisson arrival epochs lie, $\mathcal{U}$. The self-exciting patterns of the Hawkes create the added regions and their corresponding constraints.} This structure is also intimately linked to the value of the decomposition. The benefit of Proposition~\ref{pDecompProp} is that it is simple and intuitive to sample uniformly within a given subspace from the Dyck path partition. That is, a uniform sample from the hypercube is simply $U_i \sim \mathsf{Uni}(i-1,i)$, drawn independently for each $i$. On the other hand, a uniform sample from the $[1,1,\dots, 1]$ subspace is given by the order statistics of $k$ independent $\mathsf{Uni}(0,1)$ random variables. This generalizes cleanly. To sample uniformly from the region given by the Dyck path $d \in \mathsf{D}_k$, draw $k$ independent $\mathsf{Uni}(0,1)$ random variables, say $U_1, \dots, U_k$, and return the order statistics of $d_1 - U_1, \dots, d_k - U_k$ as the sample. This is again straightforward to show.
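Before formalizing this in Proposition~\ref{subsampleprop}, the short sketch below (ours; the names are illustrative) implements the recipe just described, returning the order statistics of $d - U$ for a given Dyck path $d$.
\begin{verbatim}
# Illustrative sketch: uniform sampling from the sub-polytope indexed
# by a Dyck path d (nondecreasing, with d_i <= i).
import numpy as np

def sample_in_region(d, rng=np.random.default_rng()):
    d = np.asarray(d, dtype=float)
    return np.sort(d - rng.uniform(size=len(d)))  # order statistics of d - U

print(sample_in_region([1, 2, 3]))  # uniform in the unit hypercube region
print(sample_in_region([1, 1, 1]))  # ordered standard uniforms
\end{verbatim}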
\begin{proposition}\label{subsampleprop}
For $k \in \mathbb{Z}_+$ and $d \in \mathsf{D}_k$, let $X_d \in \mathbb{R}_+^k$ be equivalent in distribution to the order statistics of $d - U$, where $U \in (0,1)^k$ is such that $U_i \stackrel{\mathsf{iid}}{\sim} \mathsf{Uni}(0,1)$. Then, $X_d \sim \mathsf{Uni}(\bar{\mathcal{P}}_{k,d})$.
\end{proposition}
\proof{Proof.}
The two distributions share a sample space by definition. Because the elements of $U$ are independent, coordinates of $d$ with distinct values yield independent elements of $X_d$. When a value is repeated in $d$, the corresponding elements of $X_d$ are equivalent to shifted standard uniform order statistics. Hence, for $\kappa_i(d) = |\{j \mid d_j = i\}|$, the density of $X_d$ is $f(x_1, \dots, x_k) = \prod_{i=1}^k \kappa_i(d)!$, which is precisely the inverse of the volume of $\bar{\mathcal{P}}_{k,d}$.
\Halmos \endproof
Hence, we see that the Dyck path indexing provides a way of encoding which points share an interval and which do not. To use this decomposition to uniformly sample from $\bar{\mathcal{P}}_k$, it remains to find a distribution over the Dyck paths, so that we draw from $\bar{\mathcal{P}}_k$ by first selecting a set from the partition and then drawing a point from within it. Because Dyck paths are enumerated by the Catalan numbers, we know that there will be $C_k = {2k \choose k}\slash(k+1)$ sets within the partition of $\bar{\mathcal{P}}_k$. However, each set should not be equally likely; rather, the likelihood of each set should be proportional to its volume. This is clear at $k=2$: $\bar{\mathcal{P}}_{2,[1, 2]}$, a square comprised of $0 \leq x_1 < 1$ and $1 \leq x_2 < 2$, should be twice as likely as $\bar{\mathcal{P}}_{2,[1, 1]}$, the half-square given by $0 \leq x_1 <
x_2 < 1$. At $k=3$, there is one cube ($\bar{\mathcal{P}}_{3,[1, 2,3]}$), three half-cubes ($\bar{\mathcal{P}}_{3,[1, 1,2]}$, $\bar{\mathcal{P}}_{3,[1, 1,3]}$, and $\bar{\mathcal{P}}_{3,[1, 2,2]}$), and one sixth-cube ($\bar{\mathcal{P}}_{3,[1, 1,1]}$), as shown in Figure~\ref{decompFig}. Since the total volume of $\bar{\mathcal{P}}_3$ is $8/3$, the respective probabilities for each of the five regions should be $3/8$, $3/16$, $3/16$, $3/16$, and $1/16$.
In what is of beautiful consequence for our problem, parking functions yield precisely the correct weighting of Dyck paths. Recalling from Section~\ref{backgroundSec} that there is a bijection between sorted parking functions and Dyck paths, the suite of proportions of length-$k$ parking functions that sort to each $k$-step Dyck path yields exactly the desired distribution over the partitions of $\bar{\mathcal{P}}_k$. Hence, to properly select the partition subset from which to sample in $\bar{\mathcal{P}}_k$, we can simply sample the Dyck paths according to the fractions of equivalent parking functions, or, even more directly, we can merely draw a parking function uniformly at random, sort it, and select the corresponding subset of the polytope. We prove this now in Proposition~\ref{PFdistProp}.
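Before turning to the formal proof, a quick enumeration at $k=3$ (our illustrative check, not needed for the argument) confirms that the fractions of length-3 parking functions sorting to each Dyck path reproduce the relative volumes $3/8$, $3/16$, $3/16$, $3/16$, and $1/16$ discussed above.
\begin{verbatim}
# Illustrative check at k = 3: parking functions weight Dyck paths by volume.
from itertools import product
from collections import Counter

def is_parking(p):
    return all(v <= i + 1 for i, v in enumerate(sorted(p)))

k = 3
pfs = [p for p in product(range(1, k + 1), repeat=k) if is_parking(p)]
assert len(pfs) == (k + 1) ** (k - 1)          # 16 parking functions (Pollak)
tally = Counter(tuple(sorted(p)) for p in pfs)
for d, count in sorted(tally.items()):
    print(d, count / len(pfs))   # (1,2,3): 3/8; half-cubes: 3/16; (1,1,1): 1/16
\end{verbatim}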
\begin{proposition}\label{PFdistProp}
For $k \in \mathbb{Z}_+$, let $\pi \in \mathsf{PF}_k$ be a uniformly random parking function of length $k$, and let $\vec{\pi}$ be the parking function sorted increasingly. Likewise, let $\Lambda \in \mathbb{R}_+^k$ be selected uniformly at random from $\bar{\mathcal{P}}_k$. Then, for any Dyck path $d \in \mathsf{D}_k$,
\begin{align}
\PP{\Lambda \in \bar{\mathcal{P}}_{k,d}}
=
\PP{\vec\pi = d}
=
\frac{k!}{(k+1)^{k-1}
\prod_{i=1}^k \kappa_i(d)!}
,
\label{PFdistEq}
\end{align}
where $\kappa_i(d) = |\{j \mid d_j = i\}|$ counts the number of occurrences of $i$ within a Dyck path $d$.
\end{proposition}
\proof{Proof.}
Let $d \in \mathsf{D}_k$ be arbitrary. We will prove the result by showing that $\PP{\Lambda \in \bar{\mathcal{P}}_{k,d}}$ and $\PP{\vec\pi = d}$ are both equal to the right-most side of Equation~\eqref{PFdistEq}. Starting with $\PP{\vec\pi = d}$, let us consider the space of parking functions with length $k$. From Lemma~\ref{pollak}, there are $(k+1)^{k-1}$ parking functions of length $k$. Then, accounting for repeated values, there are $k!\slash (\prod_{i=1}^k \kappa_i(d)!)$ ways to permute the $k$ values in $d$, with each one being a possible draw from the space of parking functions. Hence, Equation~\eqref{PFdistEq} shows the true distribution of $\vec \pi$.
Now, for $\PP{\Lambda \in \bar{\mathcal{P}}_{k,d}}$, let us recall that Lemma~\ref{Gentimechange} implies that the density of $\Lambda$ on $\bar{\mathcal{P}}_k$ is $k! \slash (k+1)^{k-1}$. Hence, the probability that $\Lambda$ lies in $d$'s partition set is
\begin{align}
\PP{\Lambda \in \bar{\mathcal{P}}_{k,d}}
&=
\int_0^1\int_{(d_2-1 \vee x_1)}^{d_2} \dots \int_{(d_k-1 \vee x_{k-1})}^{d_k}
\frac{k!}{(k+1)^{k-1}}
\mathrm{d}x_k
\dots
\mathrm{d}x_2
\mathrm{d}x_1
.
\label{intEqP3}
\end{align}
If $d_{i-1} < d_i < d_{i+1}$, then neither the range of integration nor the integrand of the integral with respect to $x_i$ will depend on any of the other variables of integration, hence this integration is separable. Moreover, the integrals over $x_i$ and $x_j$ are not separable if and only if $d_i = d_j$. That is, for any Dyck path $d$, this separability allows us to decompose the integral in Equation~\eqref{intEqP3} into a product over smaller integrals, each of which contains only the variables that share a corresponding $d$ value. Ignoring the $k!\slash(k+1)^{k-1}$ coefficient temporarily for brevity's sake, we can re-express the integral as
\begin{align*}
\int_0^1\int_{(d_2-1 \vee x_1)}^{d_2} \dots \int_{(d_k-1 \vee x_{k-1})}^{d_k}
1 \cdot
\mathrm{d}x_k
\dots
\mathrm{d}x_2
\mathrm{d}x_1
&=
\prod_{i=1}^k
\int_{i-1}^{i}
\int_{\xi_1}^{i}
\dots
\int_{\xi_{\kappa_i(d)-1}}^{i}
1 \cdot
\mathrm{d}\xi_{\kappa_i(d)}
\dots
\mathrm{d}\xi_2
\mathrm{d}\xi_1
,
\end{align*}
where an empty integral is taken to equal 1 by convention. In this simple form we can quickly see that
\begin{align*}
\prod_{i=1}^k
\int_{i-1}^{i}
\int_{\xi_1}^{i}
\dots
\int_{\xi_{\kappa_i(d)-1}}^{i}
1 \cdot
\mathrm{d}\xi_{\kappa_i(d)}
\dots
\mathrm{d}\xi_2
\mathrm{d}\xi_1
=
\prod_{i=1}^k \frac{1}{\kappa_i(d)!}
,
\end{align*}
which yields the stated result for $\PP{\Lambda \in \bar{\mathcal{P}}_{k,d}}$.
\Halmos \endproof
Theorem~\ref{distThm} now follows directly.
\proof{Proof of Theorem~\ref{distThm}.}
By Lemma~\ref{Gentimechange}, we have that the vector of transformed epochs $\Lambda = [\Lambda_1, \dots, \Lambda_k]$ is jointly uniform over the polytope $\mathcal{P}_k$, i.e.~$\Lambda \sim \mathsf{Uni}(\mathcal{P}_k)$. Equivalently, $\Lambda \slash \rho \sim \mathsf{Uni}(\bar{\mathcal{P}}_k)$. Then, by the decomposition into disjoint subsets in Proposition~\ref{pDecompProp}, for $d \in \mathsf{D}_k$ we further have that $\Lambda \slash \rho \mid (\Lambda \slash \rho \in \bar{\mathcal{P}}_{k,d}) \sim \mathsf{Uni}(\bar{\mathcal{P}}_{k,d})$. By Proposition~\ref{subsampleprop}, the order statistics of $d-U$ are also uniform within $\bar{\mathcal{P}}_{k,d}$. Now, Proposition~\ref{PFdistProp} establishes that the probability $\PP{\Lambda \slash \rho \in \bar{\mathcal{P}}_{k,d}} = \PP{\vec{\pi} = d}$ for a uniformly random parking function $\pi$, and thus the order statistics of $\pi - U$ and $\Lambda \slash \rho$ are equivalent in their uniform distribution on $\bar{\mathcal{P}}_k$.
\Halmos \endproof
\section{Application to the Markovian Hawkes Process}\label{markovSec}
In the remainder of this paper, we will demonstrate some of the advantages of the parking function decomposition of the Hawkes process clusters. We start by showcasing the analytical value of the conditional uniformity through application to what is perhaps the most widely used Hawkes excitation kernel. Within this section, let us now assume that $g(x) = \alpha e^{-\beta x}$ for $0 < \alpha < \beta$. Under this kernel, the intensity becomes a Markov process \citep{oakes1975markovian}. Much of the popularity of the exponential kernel follows from its frequent tractability, as we will see here.
Let us start by expressing the compensator for the exponential kernel. Given $g(x) = \alpha e^{-\beta x}$, we see that $G(x) = \rho(1-e^{-\beta x})$ for $\rho = \alpha \slash \beta$, and thus
\begin{align*}
\Lambda(t)
=
\sum_{i=0}^{N_t-1}
\rho\left(1-e^{-\beta(t-A_i)}\right)
\quad
\text{ and }
\quad
\Lambda_k
=
\sum_{i=0}^{k-1}
\rho\left(1-e^{-\beta(A_k - A_i)}\right)
.
\end{align*}
There is a great deal of information hidden in this expression. For example, if we let $\lambda_k = \lambda_{A_k^-} = \sum_{i=0}^{k-1} \alpha e^{-\beta(A_k - A_i)}$ be the value of the intensity immediately before the cluster's $k^\text{th}$ event, then we can observe that this quantity is merely an affine transformation of the corresponding compensator point $\Lambda_k$. That is,
\begin{align}
\Lambda_k
=
\rho k
-
\frac{1}{\beta}
\sum_{i=0}^{k-1}
\alpha e^{-\beta(A_k - A_i)}
=
\rho k
-
\frac{1}{\beta}
\lambda_k
\longrightarrow
\lambda_k
=
\alpha k - \beta \Lambda_k
\label{intensEq}
.
\end{align}
An interesting consequence of this relationship is that, after shifting and normalizing, the pre-event intensity values will satisfy the same distributional properties seen in Lemma~\ref{Gentimechange} and Theorem~\ref{distThm}. Hence, the conditional uniformity property is even stronger for this kernel. Given the number of events in the cluster, here we find the even more surprising implication that the intensity sample paths are conditionally uniform. This is an immediate corollary of Lemma~\ref{Gentimechange}. The connection to parking functions also carries over naturally, as Equation~\eqref{intensEq} implies $\lambda_k \stackrel{\mathsf{D}}{=}\alpha \left(k - (\pi - U)_{(k)}\right)$.
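As a quick numerical sanity check (ours; the parameter values and epochs below are arbitrary), the affine relation in Equation~\eqref{intensEq} can be verified directly against the definition of the pre-event intensity.
\begin{verbatim}
# Illustrative check of lambda_k = alpha*k - beta*Lambda_k for the
# exponential kernel (arbitrary assumed parameters and epochs).
import numpy as np

alpha, beta, rho = 1.5, 2.0, 0.75      # rho = alpha / beta
A = np.array([0.0, 0.4, 0.7, 1.6])     # an assumed cluster, A_0 = 0
for k in range(1, len(A)):
    lam_k = sum(alpha * np.exp(-beta * (A[k] - A[i])) for i in range(k))
    Lam_k = sum(rho * (1.0 - np.exp(-beta * (A[k] - A[i]))) for i in range(k))
    assert np.isclose(lam_k, alpha * k - beta * Lam_k)
\end{verbatim}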
This tractability also allows us direct access to the distribution of the cluster's duration, as the arrival epochs and inter-arrival times can be easily obtained from the compensator points. Letting $S_k = A_k - A_{k-1}$ be the inter-arrival times of the cluster, we can see that
\begin{align*}
\Lambda_k
=
\sum_{i=0}^{k-1}
\rho\left(1-e^{-\beta(S_k + A_{k-1} - A_i)}\right)
=
k\rho - e^{-\beta S_k} \left( k\rho - \Lambda_{k-1}\right)
,
\end{align*}
and thus each inter-arrival time is given by
\begin{align}
S_k
=
-\frac{1}{\beta}\log\left(
\frac{k\rho - \Lambda_k}{k\rho - \Lambda_{k-1}}
\right)
.
\end{align}
Hence, we also immediately have that the $k^\text{th}$ arrival epoch can be written
\begin{align}
A_k
=
-\frac{1}{\beta}\log\left(
\prod_{i=1}^k
\frac{i\rho - \Lambda_i}{i\rho - \Lambda_{i-1}}
\right)
,
\label{EqMarkovA}
\end{align}
for all $k \in \mathbb{Z}^+$. This convenient form enables us to analyze this paper's most sought-after quantity: the cluster duration $\tau = A_{N-1}$. This brings us to our second main result, in which we provide a parsimonious characterization of the duration distribution, showing that $\tau$ is equivalent to a sum of conditionally independent exponential random variables. Again, a uniformly random parking function appears as a hidden spine of the Hawkes process cluster.
\begin{theorem}\label{markovDistThm}
Let $N \sim \mathsf{Borel}(\rho)$ for $\rho = \alpha \slash \beta$, and, given $N$, let $\pi$ be a uniformly random parking function of length $N-1$. Then, the duration of a cluster in the Markovian Hawkes process satisfies
\begin{align}
\tau
\stackrel{\mathsf{D}}{=}
\frac{1}{\beta}
\sum_{i=1}^{N-1}
T_{\pi, i}
,
\end{align}
where $T_{\pi, i} \sim \mathsf{Exp}\big(i+1 - \sum_{j=N-i}^{N-1} \kappa_j(\pi) \big)$ are conditionally independent given $\pi$, with $\kappa_i(\pi) = |\{j \mid \pi_j = i\}|$.
\end{theorem}
\proof{Proof.}
Let us first find an expression for the conditional Laplace-Stieltjes transform (LST) directly from Lemma~\ref{Gentimechange}, and then interpret it through the lens of Theorem~\ref{distThm}. If $N = 1$, then the cluster contains only the initial time-0 event. Thus, $\PP{\tau = 0 \mid N = 1} = 1$ and $\E{e^{-\theta \tau} \mid N = 1} = 1$. Then, for $k \geq 1$, we can see through applying Equation~\eqref{EqMarkovA} that the conditional LST can be found through the integral
\begin{align*}
\E{e^{-\theta\tau} \mid N = k + 1}
=
\frac{1}{\rho^k} \frac{(k+1)!}{(k+1)^k}
\int_{0}^{\rho}
\int_{x_1}^{2\rho}
\dots
\int_{x_{k-1}}^{k\rho}
\left(
\prod_{i=1}^k
\frac{i\rho - x_i}{i\rho - x_{i-1}}
\right)^{\frac{\theta}{\beta}}
\mathrm{d}x_k
\dots
\mathrm{d}x_2
\mathrm{d}x_1
.
\end{align*}
Now, for reference, let us note that for $a > 0$, $b \geq 0$, $c \geq 0$, and $m \in \mathbb{N}$, we can obtain the following expression for a soon-to-be relevant definite integral directly from the binomial theorem:
\begin{align}
\int_0^{1+c} x^m \left(\frac{ax}{1+c}\right)^b \mathrm{d}x
&=
\frac{(1+c)^{m+1} a^b }{m+b+1}
=
\sum_{\ell=0}^{m+1} {m+1 \choose \ell} \frac{a^b c^{\ell}}{m+b+1}
\label{IntIndEq}
.
\end{align}
To make use of this for the conditional LST, let us substitute $y_i = i - x_i\slash{\rho}$ for each index of integration $x_i$. After iterative application of \eqref{IntIndEq}, the exponent on the next index of integration will depend on the exponent of the previous. This leaves us with
\begin{align*}
&
\E{e^{-\theta\tau} \mid N = k + 1}
=
\frac{(k+1)!}{(k+1)^k}
\int_{0}^{1}
\int_{0}^{1+y_1}
\dots
\int_{0}^{1+y_{k-1}}
\left(
\prod_{i=1}^k
\frac{y_i}{1+y_{i-1}}
\right)^{\frac{\theta}{\beta}}
\mathrm{d}y_k
\dots
\mathrm{d}y_2
\mathrm{d}y_1
\\
&=
\frac{(k+1)!}{(k+1)^k}
\int_{0}^{1}
\int_{0}^{1+y_1}
\dots
\int_{0}^{1+y_{k-2}}
\sum_{\ell_1=0}^1
{1 \choose \ell_1}
\frac{y_{k-1}^{\ell_1}}{\frac{\theta}{\beta}+1}
\left(
\prod_{i=1}^{k-1}
\frac{y_i}{1+y_{i-1}}
\right)^{\frac{\theta}{\beta}}
\mathrm{d}y_{k-1}
\dots
\mathrm{d}y_2
\mathrm{d}y_1
\\
&=
\frac{(k+1)!}{(k+1)^k}
\int_{0}^{1}
\int_{0}^{1+y_1}
\dots
\int_{0}^{1+y_{k-3}}
\sum_{\ell_1=0}^1
\frac{{1 \choose \ell_1}}{\frac{\theta}{\beta}+1}
\sum_{\ell_2=0}^{\ell_1+1}
\frac{{\ell_1+1 \choose \ell_2}}{\frac{\theta}{\beta}+\ell_1+1}
y_{k-2}^{\ell_2}
\left(
\prod_{i=1}^{k-2}
\frac{y_i}{1+y_{i-1}}
\right)^{\frac{\theta}{\beta}}
\mathrm{d}y_{k-2}
\dots
\mathrm{d}y_2
\mathrm{d}y_1
\\
&
\,\,\,\vdots
\\
&=
\frac{(k+1)!}{(k+1)^k}
\sum_{\ell_1=0}^1
\frac{{1 \choose \ell_1}}{\frac{\theta}{\beta}+1}
\sum_{\ell_2=0}^{\ell_1+1}
\frac{{\ell_1+1 \choose \ell_2}}{\frac{\theta}{\beta}+\ell_1+1}
\sum_{\ell_3=0}^{\ell_2+1}
\frac{{\ell_2+1 \choose \ell_3}}{\frac{\theta}{\beta}+\ell_2+1}
\dots
\sum_{\ell_{k-1}=0}^{\ell_{k-2}+1}
\frac{{\ell_{k-2}+1 \choose \ell_{k-1}}}{\frac{\theta}{\beta}+\ell_{k-2}+1}
\int_{0}^{1}
y_{1}^{\frac{\theta}{\beta}+\ell_{k-1}}
\mathrm{d}y_1
\\
&=
\frac{(k+1)!}{(k+1)^k}
\sum_{\ell_1=0}^1
\frac{{1 \choose \ell_1}}{\frac{\theta}{\beta}+1}
\sum_{\ell_2=0}^{\ell_1+1}
\frac{{\ell_1+1 \choose \ell_2}}{\frac{\theta}{\beta}+\ell_1+1}
\sum_{\ell_3=0}^{\ell_2+1}
\frac{{\ell_2+1 \choose \ell_3}}{\frac{\theta}{\beta}+\ell_2+1}
\dots
\sum_{\ell_{k-1}=0}^{\ell_{k-2}+1}
\frac{{\ell_{k-2}+1 \choose \ell_{k-1}}}{\frac{\theta}{\beta}+\ell_{k-2}+1}
\frac{1}{\frac{\theta}{\beta}+\ell_{k-1}+1}
,
\end{align*}
where now the bounds of summation, rather than the bounds of integration, depend on the magnitude of the prior term. Combining like terms into a product, this can be re-expressed
\begin{align*}
\E{e^{-\theta\tau} \mid N = k + 1}
&=
\frac{(k+1)!}{(k+1)^k}
\sum_{\ell_1=0}^1
{1 \choose \ell_1}
\sum_{\ell_2=0}^{\ell_1+1}
{\ell_1+1 \choose \ell_2}
\dots
\sum_{\ell_{k-1}=0}^{\ell_{k-2}+1}
{\ell_{k-2}+1 \choose \ell_{k-1}}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}\frac{1}{\frac{\theta}{\beta}+\ell_{i}+1}
.
\end{align*}
By decomposing the binomial coefficients into the underlying factorial terms, cancelling numerators with preceding denominators, and moving the remaining pieces to the product, we can pull $(k-1)!$ from the leading $(k+1)!$ and recognize a resulting multinomial coefficient. That is, after cancellation, the remaining denominators contain factorials of terms that will sum to $k-1$, and thus with the leading $(k-1)!$ the multinomial arises:
\begin{align*}
&
\E{e^{-\theta\tau} \mid N = k + 1}
=
\frac{(k+1)!}{(k+1)^k}
\sum_{\ell_1=0}^1
\frac{1!}{\ell_1!(1-\ell_1)!}
\dots
\sum_{\ell_{k-1}=0}^{\ell_{k-2}+1}
\frac{(\ell_{k-2}+1)!}{\ell_{k-1}!(\ell_{k-2}+1-\ell_{k-1})!}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}\frac{1}{\frac{\theta}{\beta}+\ell_{i}+1}
\\
&=
\frac{(k+1)!}{(k+1)^k}
\sum_{\ell_1=0}^1
\frac{1}{(1-\ell_1)!}
\sum_{\ell_2=0}^{\ell_1+1}
\frac{1}{(\ell_1+1-\ell_2)!}
\dots
\sum_{\ell_{k-1}=0}^{\ell_{k-2}+1}
\frac{1}{\ell_{k-1}!(\ell_{k-2}+1-\ell_{k-1})!}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}\frac{\ell_{i-1}+1}{\frac{\theta}{\beta}+\ell_{i}+1}
\\
&=
\frac{k}{(k+1)^{k-1}}
\sum_{\boldsymbol{\ell}\in\mathcal{L}_{k-1}}
{
k-1
\choose
1-\ell_1, \ell_1+1-\ell_2, \dots, \ell_{k-2}+1-\ell_{k-1}, \ell_{k-1}
}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}\frac{\ell_{i-1}+1}{\frac{\theta}{\beta}+\ell_{i}+1}
,
\end{align*}
where $\mathcal{L}_{k-1} = \{\boldsymbol{\ell} \in \mathbb{N}^{k-1} \mid 0 \leq \ell_1 \leq 1, 0 \leq \ell_i \leq \ell_{i-1} + 1 \,\,\forall\,\, 2 \leq i \leq k-1\}$.
To simplify further, we can now multiply and divide by $\ell_{k-1}+1$, allowing us to re-index the product and absorb the leading $k$ into the multinomial, yielding a form which we can now interpret:
\begin{align*}
\E{e^{-\theta\tau} \mid N = k + 1}
&=
\frac{1}{(k+1)^{k-1}}
\sum_{\boldsymbol{\ell}\in\mathcal{L}_{k-1}}
{
k
\choose
1-\ell_1, \dots, \ell_{k-2}+1-\ell_{k-1}, \ell_{k-1} + 1
}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}\frac{\ell_{i}+1}{\frac{\theta}{\beta}+\ell_{i}+1}
.
\end{align*}
First, let us observe that $\mathcal{L}_{k-1}$ can be viewed as the set of differences between each given $k$-step Dyck path and the diagonal $y = x$ line. Hence, an isomorphism with $\mathsf{D}_k$ is implied directly from the constraints that define $\mathcal{L}_{k-1}$. Following this, the multinomial coefficient can then be viewed as counting the number of length-$k$ parking functions that can be formed using each given $\ell$, as we can recall that the parking functions of length $k$ are all the possible orderings of the $k$-step Dyck paths \citep[e.g.,][]{yan2015parking}. That is, the multinomial coefficient counts the number of parking functions with $1-\ell_1$ $k$'s, $\ell_1 +1 - \ell_2$ $(k-1)$'s, and so on. Using the $\kappa_i(\pi)$ notation, we can then re-express the conditional Laplace-Stieltjes transform as a sum over all parking functions:
\begin{align*}
\E{e^{-\theta \tau} \mid N = k + 1}
&
=
\frac{1}{(k+1)^{k-1}}
\sum_{\pi \in \mathsf{PF}_k}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}
\frac{
i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
{
\frac{\theta}{\beta}+ i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
.
\end{align*}
Here, we have used the pattern above to substitute $i - \sum_{j=k+1-i}^k \kappa_j(\pi)$ for $\ell_i$. Because there are $(k+1)^{k-1}$ parking functions of length $k$, this can also be viewed as an expectation over a uniformly sampled parking function. That is,
\begin{align*}
\frac{1}{(k+1)^{k-1}}
\sum_{\pi \in \mathsf{PF}_k}
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}
\frac{
i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
{
\frac{\theta}{\beta}+ i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
&
=
\E{
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}
\frac{
i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
{
\frac{\theta}{\beta}+ i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
}
,
\end{align*}
where the expectation is taken relative to $\pi$. By recognizing that the Laplace-Stieltjes transform of $X \sim \mathsf{Exp}(\lambda)$ is $\E{e^{-\theta X}} = \lambda \slash (\theta + \lambda)$, we can further reduce to
\begin{align*}
\E{
\frac{1}{\frac{\theta}{\beta}+1}
\prod_{i=1}^{k-1}
\frac{
i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
{
\frac{\theta}{\beta}+ i+1 - \sum_{j=k+1-i}^k \kappa_j(\pi)
}
}
&
=
\E{
e^{-\frac{\theta}{\beta}\sum_{i=1}^{k}T_{\pi,i}}
}
,
\end{align*}
where now the expectation is taken relative to both $\pi$ and the random variables $T_{\pi,i}$. Hence, removing the conditioning on $N = k + 1$, we achieve the stated equivalence.
\Halmos \endproof
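To illustrate how Theorem~\ref{markovDistThm} can be used in practice, the sketch below (ours; it draws $N$ through a Poisson branching process and the parking function by simple rejection, which is only reasonable for small clusters) samples the duration $\tau$ directly from the stated mixture of exponentials.
\begin{verbatim}
# Illustrative sampler for the cluster duration via Theorem 2.
import numpy as np
rng = np.random.default_rng(1)

def sample_borel(rho):
    """Total progeny of a Poisson(rho) branching process ~ Borel(rho)."""
    total, unexplored = 1, 1
    while unexplored > 0:
        kids = rng.poisson(rho)
        total, unexplored = total + kids, unexplored + kids - 1
    return total

def sample_parking_function(k):
    while True:                        # rejection sampling, fine for small k
        p = rng.integers(1, k + 1, size=k)
        if all(v <= i + 1 for i, v in enumerate(sorted(p))):
            return p

def sample_duration(alpha, beta):
    N = sample_borel(alpha / beta)
    if N == 1:
        return 0.0
    pi = sample_parking_function(N - 1)
    kappa = np.bincount(pi, minlength=N)            # kappa[j] = #{i : pi_i = j}
    rates = [i + 1 - kappa[N - i:N].sum() for i in range(1, N)]
    return sum(rng.exponential(1.0 / r) for r in rates) / beta

print(np.mean([sample_duration(1.0, 2.0) for _ in range(2000)]))
\end{verbatim}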
As direct corollaries of this result, we can see how the distribution of objects like the duration, the intensities, and the full arrival epoch sequence depends on the parameters $\alpha$ and $\beta$. For example, by consequence of Theorem~\ref{markovDistThm}, if $0 < \alpha_1 < \beta_1$ and $0 < \alpha_2 < \beta_2$ with $\alpha_1 \slash \beta_1 = \alpha_2 \slash \beta_2$ are parameter pairs defining two UHP models with respective duration random variables $\tau_1$ and $\tau_2$, then $\beta_1 \tau_1$ and $\beta_2 \tau_2$ are equivalent in distribution. Similarly, for $\rho = \alpha \slash \beta$ held fixed, Theorem~\ref{markovDistThm} shows that the only dependence $\tau$ has on $\beta$ is through the $1 \slash \beta$ coefficient. This realization can actually be traced back to Lemma~\ref{Gentimechange}, implying that this same proportionality can be extended to any arrival epoch, not only the final one. Conditioning yields similar insights even when comparing different values of $\rho$. For example, the conditional LST found in the proof shows that $\beta_1 \tau_1 \mid N_1 = k$ is equivalent to $\beta_2 \tau_2 \mid N_2 = k$ for all $k \in \mathbb{N}$ even if $\alpha_1 \slash \beta_1 \ne \alpha_2 \slash \beta_2$.
\section{A Modular Simulation Algorithm for General Hawkes Processes}\label{simSec}
In addition to analytic results, the conditional uniformity and parking function decomposition also enable us to propose a novel simulation algorithm for Hawkes process clusters. This new procedure draws upon the combinatorial structure and group theoretic perspectives of parking functions, mirroring what we have reviewed in Section~\ref{backgroundSec}. As we will see, this new algorithm offers competitive efficiency relative to widely used procedures in the literature. However, as in the preceding analytical demonstration, the most important implication of this methodology again lies in the conditioning. The method we propose first generates the size of the cluster and then generates the arrival epochs accordingly, and we will discuss how this leads to natural applications for rare event simulation and importance sampling.
\begin{algorithm}
\SetAlgoLined
\textbf{Input:} Integrated excitation kernel $G(x) = \int_0^x g(u) \mathrm{d}u$, with $\rho = \int_0^\infty g(u) \mathrm{d}u$
\textbf{Output:} {Cluster of $N$ self-excited arrival epochs $0 = A_0 < A_1 < \dots < A_{N-1}$.}
\begin{enumerate}
\item Generate the total number of events, $N \sim \mathsf{Borel}(\rho)$.
\item Generate the conditionally uniform compensator points, $\Lambda \in \mathbb{R}_+^{N-1}$:
\begin{enumerate}
\item Draw $\pi_i \stackrel{\mathsf{iid}}{\sim} \mathsf{Uni}\{1,N\}$ for $i \in \{1, \dots, N-1\}$, define $\sigma \in \{0,1\}^{N}$ s.t.~$\sigma = [0 \dots 0]$.
\item \textbf{for} $i \in \{1, \dots, N-1\}$:
\item \qquad \textbf{if} $\sigma_{\pi_i} = 0$, set $\sigma_{\pi_i} = 1$; \textbf{else} set $\sigma_{j^*~\mathsf{mod}~N} = 1$, where $j^* = \min\{j > \pi_i \mid \sigma_{j~\mathsf{mod}~N} = 0\}$.
\item Set $\pi = (\pi - \ell)~\mathsf{mod}~N$ where $\ell$ is s.t.~$\sigma_\ell = 0$.
\item Draw $U_i \stackrel{\mathsf{iid}}{\sim} \mathsf{Uni}(0,1)$ for $i \in \{1, \dots, N-1\}$ and set $\Lambda = \rho\cdot\mathbf{sort}(\pi - U)$.
\end{enumerate}
\item \textbf{return} $A_1, \dots, A_{N-1}$ as the unique solution to the system of equations, $\Lambda_i = \sum_{j=0}^{i-1}G(A_i - A_j)$ for each $i \in \{1, \dots, N-1\}$.
\end{enumerate}
\caption{Simulation of a Hawkes Process Cluster}
\label{algSim}
\end{algorithm}
We present the pseudo-code for our procedure now in Algorithm~\ref{algSim}. Let us walk through the idea of this method. The algorithm terminates for any Hawkes process with $\rho < 1$, and it uses $G(\cdot)$ and $\rho$ as arguments. In the first step, the size of the cluster $N$ is sampled from the Borel distribution described in Equation~\eqref{borelDist}. All of the remaining steps will then use $N$ as if it were a parameter. Steps 2 and 3 presume that $N > 1$; whenever Step 1 yields $N=1$, the simulation trivially completes with $A_0 = 0$. The second step uses Theorem~\ref{distThm} to generate the vector of compensator transformed arrival epochs, $\Lambda$, by generating a uniformly random parking function, $\pi$. Specifically, Steps 2(a)-(d) employ a group theoretic sampling of $\pi$ that follows directly from Pollak's circle argument from the proof of Lemma~\ref{pollak}. As shown in Figure~\ref{pollakfig}, Step 2(a) generates the preference vector on spaces $1$ through $N$, and Steps 2(b) and (c) ``park'' the preferences on the circle, as recorded in the vector $\sigma$. Then, Step 2(d) identifies the empty space on the circle and rotates it so that the open spot becomes space $N$, yielding $\pi$ as a parking function of length $N-1$. Step 2(e) then implements Theorem~\ref{distThm}, and Step 3 returns the unique vector of arrival epochs associated with this vector of compensator points. This brings us to our third main result.
\begin{proposition}\label{algprop}
The output of Algorithm~\ref{algSim} exactly follows the joint distribution of the times and size of a Hawkes process cluster.
\end{proposition}
The proof of Proposition~\ref{algprop} follows directly from Theorem~\ref{distThm} and the proof of Lemma~\ref{pollak}. One of the benefits of this algorithm is that it is modular, in the sense of the word as it might be used by enthusiasts of high fidelity audio equipment or designers of building construction. That is, it can be built from the pieces one prefers. Each of the three primary steps can be conducted through one's selected method, swapping a style of one component out for another approach as desired. For example, our Step 2 uses Pollak's circular parking argument to generate $\pi$, which aligns with \citet{kenyon2021parking}. However, one could just as readily use a similar idea described in \citet{diaconis2017probabilizing}, where the preference vector is simply iteratively increased by 1 modulo $N$ until it is a valid parking function. Similarly, one could forgo Theorem~\ref{distThm} and instead sample from the polytope $\mathcal{P}_k$ in Step 2 through use of Markov Chain Monte Carlo (MCMC) methods like Metropolis-Hastings or hit-and-run algorithms \citep[see, e.g.,][and references therein]{chen2018fast}. Likewise, our implementations of Algorithm~\ref{algSim} have generated $N$ in Step 1 directly through the probability mass function in~\eqref{borelDist}, but one could, for example, instead simulate a Poisson branching process to yield the same distribution.
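To make this modularity concrete, the sketch below (ours; the helper names, the branching-process draw of $N$ in Step 1, and the bracketing root finder in Step 3 are implementation choices rather than prescriptions of Algorithm~\ref{algSim}) assembles one possible realization of the procedure for a user-supplied integrated kernel $G$ with $\rho < 1$.
\begin{verbatim}
# Illustrative end-to-end sketch of Algorithm 1 (implementation choices ours).
import numpy as np
from scipy.optimize import brentq
rng = np.random.default_rng()

def sample_borel(rho):                      # Step 1: N ~ Borel(rho), drawn here
    total, unexplored = 1, 1                # via a Poisson(rho) branching process
    while unexplored > 0:
        kids = rng.poisson(rho)
        total, unexplored = total + kids, unexplored + kids - 1
    return total

def random_parking_function(n):             # Steps 2(a)-(d): Pollak's circle
    pref = rng.integers(1, n + 2, size=n)   # preferences on a circle of n+1 spots
    taken = np.zeros(n + 1, dtype=bool)
    for p in pref:
        j = p - 1
        while taken[j]:
            j = (j + 1) % (n + 1)
        taken[j] = True
    empty = int(np.flatnonzero(~taken)[0])  # rotate so the empty spot becomes n+1
    return (pref - (empty + 1) - 1) % (n + 1) + 1

def simulate_cluster(G, rho):
    N = sample_borel(rho)
    if N == 1:
        return np.array([0.0])
    pi = random_parking_function(N - 1)     # Step 2(e): compensator points
    Lam = rho * np.sort(pi - rng.uniform(size=N - 1))
    A = [0.0]
    for lam in Lam:                         # Step 3: solve sum_j G(t - A_j) = Lambda_i
        f = lambda t: sum(G(t - a) for a in A) - lam
        hi = A[-1] + 1.0
        while f(hi) < 0:
            hi *= 2.0
        A.append(brentq(f, A[-1], hi))
    return np.array(A)

G = lambda x: 0.5 * (1.0 - np.exp(-x))      # example kernel with rho = 0.5
print(simulate_cluster(G, 0.5))
\end{verbatim}
For the exponential kernel, the root-finding in Step 3 could instead be replaced by the closed form of Equation~\eqref{EqMarkovA}.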
This speaks to another advantage to which we have already alluded: the potential for application in rare event simulation and importance sampling. Algorithm~\ref{algSim} untangles the cluster distribution and the cluster size. That is, this method allows one to specify a particular value or, more generally, a target distribution of the cluster size in Step 1 and then generates a collection of cluster arrival epochs matching this desired cardinality in Steps 2 and 3. We expect this to be of particular practical value, as the Borel distribution might be called \emph{nearly} or \emph{asymptotically} heavy tailed. As $\rho \to 1$, Equation~\eqref{borelDist} yields a valid probability distribution on the positive integers with no finite moments. Hence, it is quite natural for a Hawkes process with $\rho$ near 1 to experience clusters of extreme size, and Algorithm~\ref{algSim} provides a controlled way of simulating and evaluating these scenarios. To the best of our knowledge, no prior Hawkes process simulation algorithm allowed for user selection of the cluster size, as this is a direct consequence of the conditional uniformity we have explored throughout this paper.
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\caption{Observed run times (seconds) of Hawkes process cluster simulation algorithms across various choices of excitation kernels ($\boldsymbol{2^{20}}$ replications).}\label{simTable}
\centering
\begin{small}
\centering
\begin{tabular}{c | c c c c c c c c}
& \multicolumn{8}{c}{\textbf{Excitation Kernel} $\boldsymbol{g(x)}$}
\\
\hline
\textbf{Simulation Procedure}
& $3 e^{-4 x}$ & $15 e^{-16 x}$ & $63 e^{-64 x}$ & $255 e^{-256 x}$
& $\frac{1}{(2+x)^2}$ & $\frac{3}{(4+x)^2}$ & $\frac{7}{(8+x)^2}$ & $\frac{15}{(16+x)^2}$
\\
\hline
Algorithm~\ref{algSim} & 5.6 & 11.4 & 30.6 & 94.4 & 7.4 & 15.2 & 39.1 & 143.6
\\
\citet{dassios2013exact} & 2.1 & 12.0 & 47.1 & 183.2 & $\times$ & $\times$ & $\times$ & $\times$
\\
\citet{hawkes1974cluster} & 42.2 & 182.0 & 727.8 & 2,945.5 & 19.3 & 42.4 & 88.7 & 181.5
\\
\end{tabular}
\end{small}
\end{table}
To measure the efficiency and performance of Algorithm~\ref{algSim}, in Table~\ref{simTable} we compare its observed run times against established procedures from the literature.\footnote{The observed KS statistics between the empirical distributions of $\tau$ and $N$ from Algorithm~\ref{algSim} and those from the literature are no more than 0.001 in the exponential kernel experiment, and no more than 0.006 for the power law.} Specifically, we consider two prior algorithms: the classical non-stationary Poisson clustering process perspective provided by \citet{hawkes1974cluster} and the exact simulation method by \citet{dassios2013exact}. Let us note that other well-known Hawkes process simulation methods, such as the thinning procedure from \citet{lewis1979simulation,ogata1981lewis} or Algorithm 7.4.III from \citet{daley2003introduction}, cannot be applied to the generation of clusters alone, as they implicitly rely on the presence of a continual baseline stream. \citet{hawkes1974cluster}'s method essentially mirrors the corresponding definition: for each arrival, simulate its own descendants according to a non-stationary Poisson process with rate given by the kernel $g(\cdot)$, and repeat this process until there are no more descendants. The generality of this procedure has made it quite popular; it is a backbone subroutine of the perfect Hawkes process simulation algorithms in \citet{moller2005perfect,chen2021perfect}, and~\citet{chen2020perfect}. The \citet{dassios2013exact} procedure (Algorithm 3.1 in that paper) applies only to the exponential kernel case and relies on the resulting Markov property. At each step, a weighted coin is flipped to decide if there will be another arrival given the current value of the intensity. If there will be another, then the time and intensity are updated accordingly, and otherwise the simulation terminates.
We conduct the simulation in Table~\ref{simTable} with two families of excitation kernels, the Markovian exponential decay kernel and the heavy-tailed power law kernel, which is often called ``Omori's law'' in seismology \citep{ogata1998space}. Specifically, we consider a series of kernels in each family with increasing mean cluster size: $g(x) = (4^m - 1)e^{-4^m x}$ for the exponential, which will have mean cluster size $4^m$, and $g(x) = (2^m - 1)\slash(2^m + x)^2$ for the power law, which will have mean cluster size $2^m$. All three algorithms can be used to simulate clusters under the exponential kernel, but only \citet{hawkes1974cluster} can be compared to Algorithm~\ref{algSim} outside of the Markovian setting. Table~\ref{simTable} shows that the speed of Algorithm~\ref{algSim} is broadly competitive with the literature and superior in many cases. In particular, in the case of the exponential kernel, the parking function simulation of the Hawkes process proves to be more efficient than either alternative as the clusters grow large. Here, Section~\ref{markovSec} shows how Step 3 can be solved in closed form, and so the efficiency of generating one Borel distributed random variable and one random parking function outpaces the performance of either simulating a non-stationary Poisson process many times in \citet{hawkes1974cluster} or performing many iterative computations of the intensity and the next event epoch in \citet{dassios2013exact}. When the clusters are mostly small, though, the set-up of the circular parking function sampling is likely less efficient than \citet{dassios2013exact}'s simple steps, and so Algorithm~\ref{algSim} only outperforms \citet{hawkes1974cluster} in such settings. For general decay kernels, closed form solutions to Step 3 may not be available and one might instead turn to root-finding methods, like what we have done here for the power law kernel. Here we make relatively naive use of Newton's method and, while the efficiency of the Borel and parking function generations helps Algorithm~\ref{algSim}, it is clear that its performance is bottlenecked by the Newton calculations. Thus, as the cluster size increases our method should eventually be outpaced by the \citet{hawkes1974cluster} approach. Hence, careful design of Step 3 of Algorithm~\ref{algSim} is an interesting future direction for this work, and it seems likely that more efficient approaches are available when tailoring to a particular kernel. As we have discussed, though, we anticipate the primary benefit of this algorithm to lie in the conditioning on $N$, since the size distribution's propensity for large values creates an opportunity for rare event simulation.
\section{Discussion and Conclusion}\label{concSec}
In this paper, we have seen that uniformly random parking functions constitute hidden spines in Hawkes process clusters, providing a decomposition structure for the conditionally uniform compensator transform of the cluster's arrival epochs. Hence, from the cluster size and a random parking function, we can faithfully reconstruct the full sequence of events in the cluster. We have also demonstrated the impact of this connection on both analysis and simulation. For the former, we have found a surprising distributional equivalence between the duration of the Markovian Hawkes cluster and a random sum of conditionally independent exponential random variables. For the latter, we have demonstrated that the resulting sampling methodology offers both efficiency and, perhaps more importantly, fine control over the experiment through cluster-size conditioning.
Much of our discussion of the connection between Hawkes clusters and parking functions has centered around what the discrete object can do for the continuous, but we can also see that there is some partial reciprocity available. That is, it is an immediate consequence of Theorem~\ref{distThm} that, for the arrival epochs drawn from any Hawkes process cluster, the shuffled ceiling of the compensator points will be a parking function, where only the length of the parking function will be dependent on the parameters of the Hawkes process. This underscores a point that we have mentioned only briefly, which is that Lemma~\ref{Gentimechange} shows how one Hawkes process cluster can readily be transformed to another cluster driven by an entirely different excitation kernel so long as the values of $\rho$ match. Hence, one could replace Steps 1 and 2 in Algorithm~\ref{algSim} with simulating the cluster according to some efficient Hawkes process procedure, such as \citet{dassios2013exact}, and then transform to the true targeted excitation function in Step 3. In a similar vein, one could also appeal to the classical random time change theorem in place of Lemma~\ref{Gentimechange} and replace Steps 1 and 2 with a unit-rate Poisson process. That is, one could run a unit-rate Poisson process until the first index $i \geq 1$ such that the Poisson arrival epoch $T_i$ exceeds $i\rho$, and then return times 1 through $i-1$ as the compensator points and proceed to Step 3. Analogously, by the closure of the exponential distribution when multiplying by a constant, one could also run a Poisson process at rate $\rho$ and then compare $T_i$ just to the index $i$. This is close to the idea of Algorithm 7.4.III from \citet{daley2003introduction}, but the above proposal would terminate with an absorption event for the end of the cluster. By comparison, it is inherent to Algorithm 7.4.III that the point process continues indefinitely and that the compensator grows without bound, as otherwise ``the final interval is then
infinite and so cannot belong to a unit-rate Poisson process'' \citep{daley2003introduction}. In numerical experiments, we found that the speed of this alternate method is essentially identical to the \citet{dassios2013exact} algorithm. This makes sense, because they are mostly doing the same thing: generating exponentials until an absorption event occurs. Hence, there could be some efficiency gains over our algorithm when $\rho$ is small enough, but regardless, like \citet{dassios2013exact}, this approach does not offer the control over the support and distribution of $N$ that Algorithm~\ref{algSim} does.
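For completeness, a minimal sketch (ours) of the Poisson-based alternative just described: run a unit-rate Poisson process and stop at the first epoch $T_i$ that reaches $i\rho$, returning the earlier epochs as the cluster's compensator points.
\begin{verbatim}
# Illustrative sketch of the unit-rate Poisson alternative for Steps 1 and 2.
import numpy as np
rng = np.random.default_rng()

def compensator_points_via_poisson(rho):
    points, t, i = [], 0.0, 0
    while True:
        i += 1
        t += rng.exponential(1.0)          # unit-rate inter-arrival time
        if t >= i * rho:                   # absorption: end of the cluster
            return np.array(points)        # cluster size is len(points) + 1
        points.append(t)

Lam = compensator_points_via_poisson(0.5)
print(len(Lam) + 1, Lam)                   # (N, Lambda_1, ..., Lambda_{N-1})
\end{verbatim}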
In the broader perspective of the Hawkes process literature, we have studied the same model as contemporary works like \citet{chen2021perfect,graham2021regenerative}, which is a linear \citep[relative to the non-linear generalization introduced by][]{bremaud1996stability} univariate \citep[relative to the mutually-exciting version actually dating back to][]{hawkes1971spectra} Hawkes process with general excitation kernel \citep[relative to kernel-specific works like][]{dassios2013exact}. For generalizations of these results, our initial suspicion is that multiple dimensions is the most promising next step, as there are analogs of much of the background fundamentals from which we have drawn. In particular, the random time change does still hold in higher dimensions, where we can build from both results for general processes \citep[e.g.][]{brown1988simple} and for Hawkes processes specifically \citep[e.g.][]{embrechts2011multivariate}. One can also leverage similar time-agnostic multi-type branching process perspectives for the cluster size distribution \citep[e.g.][]{good1960generalizations}. In fact, a very similar proof to Lemma~\ref{Gentimechange} can create conditionally uniform analogs of the two styles of multivariate random time change in \citet{embrechts2011multivariate}. The remaining challenge arises while converting from conditionally uniform compensator points to the arrival epochs in each sub-stream, as it may be possible that there are multiple solutions to the compensator system of equations. Hence, we are quite interested in this future direction, particularly given the variety of recent interest in these models, such as in works like \citet{ait2015modeling,nickel2020learning,daw2021co,karim2021exact}.
Another intriguing open problem arises in trying to transport these results from conditioning on the long-run total size of the cluster to instead conditioning on the number of events by some given time $T$. A transient result like this would likely be of broad interest. Arguments similar to that of Lemma~\ref{Gentimechange} should produce a transient result like
\begin{align*}
f(\Lambda_1, \Lambda_2, \dots, \Lambda_k \mid N_T=k+1)
&=
\frac{e^{-\Lambda(T)}}{\PP{N_T = k+1}}
,
\end{align*}
however, this expression may be deceptively simple. Setting aside the fact that the distribution of $N_T$ may not be as accessible as that of $N$, what this form of the density hides is the dependence of $\Lambda(T)$ on $\Lambda_1$ through $\Lambda_k$. That is, $\Lambda(T) = \sum_{i=0}^k G(T-A_i)$, and the epochs, $A_1, \dots, A_k$, are deterministic functions of the compensator points, $\Lambda_1, \dots, \Lambda_k$. All this is to say, the conditional density is no longer uniform. While each $\Lambda_i$ will continue to satisfy the constraints $\Lambda_{i-1} < \Lambda_i < i\rho$ (as well as the added constraint $\Lambda_i < \Lambda(T)$), changes to the values of these points will change $\Lambda(T)$ and thus change the density. This suggests a direction that may offer more immediate tractability. Rather than fixing the ending time $T$, one can instead fix the ending compensator value $\Lambda(T)$. In this case, the conditional density should again be uniform, and the compensator points should lie on a polytope that is the intersection of $\mathcal{P}_k$ and the hypercube where every coordinate is between 0 and $\Lambda(T)$. While this result would leverage the transience in the transformed compensator space, it is of course more desirable to analyze true transience in time. Bridging these perspectives stands as a highly interesting direction of future work.
Finally, to that end, let us emphasize that this paper's analysis has not been a purely theoretical exercise. Our original motivation was to study Hawkes process clusters in an operational context inspired by the data and application in \citet{daw2021co}, but we found the methodological cupboard not yet full enough for the questions we sought to answer. Hence, we are quite interested in returning to these problems with the ability to address the cluster duration and chronology now in hand. Building from the model of \citet{daw2021co}, the Hawkes cluster duration can be seen as the length of a co-produced service exchange, and, more specifically, the cluster epochs mark the contributions and points of interactions between the customer and service agent. In the context of \citet{daw2021co}, these epochs are the timestamps of messages exchanged within a text-based contact center's conversational service. Understanding these distributions allows us to evaluate how the history-dependent service process will impact the service system overall, leading to new possibilities in the analysis of natural queueing theoretic questions like staffing and routing decisions. The service time distribution is a cornerstone of any queueing model, and a myriad, if not virtually all, queueing formulas depend on the mean of this distribution (at the very least). In services modeled as a Hawkes cluster, as in \citet{daw2021co}, both this mean and this distribution have been out of reach even for the simplest forms of this process, like \citet{moller2005perfect} described. Thanks to the parking function structure uncovered in Theorems~\ref{distThm} and~\ref{markovDistThm}, this is no longer the case. Leveraging this hidden spine is of foremost intrigue to us as a direction of future research, and we anticipate that the conditional uniformity property and parking function decomposition can provide valuable insight into these classic service-side queueing problems.
Of course, the Hawkes process finds many an application in operations research models beyond service durations and customer-agent interactions. In fact, well-known examples of the Hawkes show that clusters are both quantities of operational interest in their own right and key components embedded within any use of the point process. In addition to our original motivating application, this stochastic model can be found among representations of financial contagion and risk, product management, customer arrivals, leadership and communication patterns, digital marketing conversions, and -- lest we forget -- outbreaks in pandemics. By the very nature of the Hawkes process and its self-excited epochs, clusters lie at the core of each of these applications. Let us detail them.
For example, \citet{azizpour2018exploring} uses the Hawkes events to model corporate defaults. Here, a cluster contains the collection of firms that fail as a downstream result of one initial default; likewise, the duration captures how long the contagion lasts from first default to last related casualty. Similarly, \citet{mukherjee2022hiding} models the timing of product recalls, and thus each cluster contains those customers who return their purchases because of the influence of returns by their peers. Elsewhere in the transaction timeline, \citet{xu2014path} uses the Hawkes process as a model of customers' online shopping activity under digital marketing campaigns, with the cluster marking the clicks on the path of conversion from advertisement to purchase. Similarly, on the arrival side of queueing, the Hawkes has served as the arrival stream to both single \citep{chen2021perfect} and infinite server queueing models \citep{gao2018functional,daw2017queues,koops2017infinite}, and in these cases a cluster represents the lineage of customers who choose to patronize the service because they saw others do it first. \citet{fox2016HawkesEmails} models the communication (and, within this, leadership) structure in an organization through Hawkes processes, and here clusters arise in the response threads and email chains. Finally, when modeling the spread of COVID-19 and other infectious diseases, like what is done in \citet{bertozzi2020challenges}, the Hawkes clusters naturally capture the trace of contacts who pass the sickness on to one another throughout time.
While these and many other Hawkes process applications may feature both self-excited arrivals and exogenously driven baseline events, let us recall that the \citet{hawkes1974cluster} representation shows us how the cluster perspective persists. That is, the baseline stream can be thought of as the arrival process \textit{of clusters}. Hence, the model studied in this paper descends from each of these initial points, and these initial points themselves form a Poisson process that is independent from the subsequent activity within the clusters. So, the distribution of the cluster epochs and durations constitutes the nature of the offsets from the well-understood Poisson stream.
From what we have seen, a parking function or Dyck path may not have a direct interpretation for these applications. However, it does provide a structure through which the Hawkes cluster becomes easier to understand, and thus easier to use. Through another level of conditioning, one can apply both traditional Poisson conditional uniformity and the Hawkes variant which we have seen here to analyze, say, customer purchasing patterns or COVID-19 outbreaks and glean insight into the distributions, relationships, and dependencies within. These hidden spines simplify the hallmark self-exciting structure of the Hawkes process, offering answers to fundamental questions about the model. Returning now to the service operations domains from which this question originally sprang, we look forward to leveraging our new knowledge and exploring what more we can learn once we know how long a cluster will last.
\end{document}
\begin{document}
\title{Descriptive complexity of graph spectra}
\titlerunning{Descriptive complexity of graph spectra}
\author{Anuj Dawar\inst{1} Simone Severini\inst{2}
Octavio Zapata\inst{2}}
\authorrunning{Anuj Dawar et al.}
\tocauthor{Octavio Zapata}
\institute{University of Cambridge Computer Laboratory, UK\\
\and
Department of Computer Science, University College London, UK\thanks{We thank Aida Abiad, Chris Godsil, Robin Hirsch and David Roberson for fruitful discussions. This work was supported by CONACyT, EPSRC and The Royal Society.}}
\maketitle
\begin{abstract}
Two graphs are co-spectral if their respective adjacency matrices
have the same multi-set of eigenvalues. A graph is said to be
determined by its spectrum if all graphs that are co-spectral with
it are isomorphic to it. We consider these properties in relation
to logical definability. We show that any pair of graphs that are
elementarily equivalent with respect to the three-variable counting
first-order logic $C^3$ are co-spectral, and this is not the case
with $C^2$, nor with any number of variables if we exclude counting
quantifiers. We also show that the class of graphs that are
determined by their spectra is definable in partial fixed-point
logic with counting. We relate these properties to other algebraic and
combinatorial problems.
\keywords{descriptive complexity, algebraic graph theory, isomorphism approximations}
\end{abstract}
\section{Introduction}\label{sec:intro}
The spectrum of a graph $G$ is the multi-set of eigenvalues of its adjacency matrix. Even though it is defined in terms of the adjacency matrix of $G$, the spectrum does not, in fact, depend on the order in which the vertices of $G$ are listed. In other words, isomorphic graphs have the same spectrum. The converse is false: two graphs may have the same spectrum without being isomorphic. Say that two graphs are co-spectral if they have the same spectrum. Our aim in this paper is to study the relationship between this equivalence relation on graphs and a number of other approximations of isomorphism coming from logic, combinatorics and algebra. We also investigate the definability of co-spectrality and related notions in logic.
Specifically, we show that for any graph $G$, we can construct a formula $\phi_G$ of first-order logic with counting, using only three variables (i.e.\ the logic $C^3$) so that $H \models \phi_G$ only if $H$ is co-spectral with $G$. From this, it follows that elementary equivalence in $C^3$ refines co-spectrality, a result that also follows from~\cite{Alzaga10}. In contrast, we show that co-spectrality is incomparable with elementary equivalence in $C^2$, or with elementary equivalence in $L^k$ (first-order logic with $k$ variables but without counting quantifiers) for any $k$. We show that on strongly regular graphs, co-spectrality exactly coincides with $C^3$-equivalence.
For definability results, we show that co-spectrality of a pair of
graphs is definable in $\textsc{fpc}$, inflationary fixed-point logic with
counting. We also consider the property of a graph $G$ to be
\emph{determined by its spectrum}, meaning that all graphs co-spectral
with $G$ are isomorphic with $G$. We establish that this property is
definable in \emph{partial fixed-point logic with counting} ($\textsc{pfpc}$).
In section \ref{pre}, we construct some basic first-order formulas
that we use to prove various results later, and we also review some
well-known facts in the study of graph spectra. In section \ref{wlk},
we make explicit the connection between the spectrum of a graph and
the total number of closed walks on it. Then we discuss aspects of the
class of graphs that are uniquely determined by their spectra, and
establish that co-spectrality on the class of all graphs is refined by $C^3$-equivalence. Also, we show that co-spectrality is not subsumed by equivalence in the finite-variable logics without counting quantifiers. In section \ref{sym}, we give an overview of a combinatorial algorithm (named after Weisfeiler and Leman) for distinguishing between non-isomorphic graphs, and study its relationship with other algorithms of an algebraic and combinatorial nature. Finally, in section \ref{pfp}, we establish some results about the logical definability of co-spectrality and of the property of being a graph determined by its spectrum.
\section{Preliminaries}\label{pre}
Consider a first-order language $L = \{E\}$, where $E$ is a binary relation symbol interpreted as an irreflexive symmetric binary relation called \emph{adjacency}. Then an $L$-structure $G=(V_G,E_G)$ is called a \emph{simple undirected graph}. The domain $V_G$ of $G$ is called the \emph{vertex set} and its elements are called \emph{vertices}. The unordered pairs of vertices in the interpretation $E_G$ of $E$ are called \emph{edges}. Formally, a \emph{graph} is an element of the elementary class axiomatised by the first-order $L$-sentence: $\forall x\forall y(\lnot E(x,x)\land(E(x,y)\rightarrow E(y,x))). $
The \emph{adjacency matrix} of an $n$-vertex graph $G$ with vertices
$v_1,\ldots,v_n$ is the $n\times n$ matrix $A_G$ with $(A_G)_{ij}=1$
if vertex $v_i$ is adjacent to vertex $v_j$, and $(A_G)_{ij}=0$
otherwise. By definition, every adjacency matrix is real and symmetric
with diagonal elements all equal to zero. A \emph{permutation matrix}
$P$ is a binary matrix with a unique $1$ in each row and column.
Permutation matrices are orthogonal matrices so the inverse $P^{-1}$
of $P$ is equal to its transpose $P^{T}$. Two graphs $G$ and $H$ are
\emph{isomorphic} if there is a bijection $h$ from $V_G$ to $V_H$ that
preserves adjacency and non-adjacency. The existence of such a map is denoted by $G\cong H$. From this definition it is not difficult to see that two graphs $G$ and $H$ are isomorphic if, and only if, there exists a permutation matrix $P$ such that $A_GP =PA_H. $
The \emph{characteristic polynomial} of an $n$-vertex graph $G$ is a polynomial in a single variable $\lambda$ defined as $p_G(\lambda):=det(\lambda I - A_G),$
where $det(\cdot)$ is the operation of computing the determinant of the matrix inside the parentheses, and $I$ is the identity matrix of the same order as $A_G$. The \emph{spectrum} of $G$ is the multi-set $\mathrm{sp}(G):=\{\lambda:p_G(\lambda)=0\}$, where each root of $p_G(\lambda)$ is considered according to its multiplicity. If $\theta\in\mathrm{sp}(G)$ then $\theta I - A_G$ is not invertible, and so there exists a nonzero vector $u$ such that $A_Gu=\theta u$. A vector like $u$ is called an \emph{eigenvector} of $G$ corresponding to $\theta$. The elements in $\mathrm{sp}(G)$ are called the \emph{eigenvalues} of $G$. Two graphs are called \emph{co-spectral} if they have the same spectrum.
The \emph{trace} of a matrix is the sum of all its diagonal
elements. By the definition of matrix multiplication, for any two matrices $A,B$ we have $
tr(AB)=tr(BA), $ where $tr(\cdot)$ is the operation of computing the trace of the matrix inside the parentheses. Therefore, if $G$ and $H$ are two isomorphic graphs then $
tr(A_H)=tr(P^{T}A_GP)=tr(A_GPP^{T})=tr(A_G) $ and so, $tr(A^k_G)=tr(A^k_H)$ for any $k\geq 0$.
By the spectral decomposition theorem, the trace of the $k$-th power of a real symmetric matrix $A$ equals the sum of the $k$-th powers of the eigenvalues of $A$. Assuming that $A$ is an $n\times n$ matrix with (possibly repeated) eigenvalues $\lambda_1,\dots,\lambda_n$, the \emph{elementary symmetric polynomials} $e_k$ in the eigenvalues are the sums of all distinct products of $k$ distinct eigenvalues:
\[
\begin{array}{l}
e_0(\lambda_1,\dots,\lambda_n):=1;~~~~~~~ e_1(\lambda_1,\dots,\lambda_n):=\sum_{i=1}^n \lambda_i;\\
e_k(\lambda_1,\dots,\lambda_n):=\sum_{1\leq i_1<\cdots<i_k\leq
n}\lambda_{i_1}\cdots\lambda_{i_k}\ \ \text{ for $1\leq k\leq n$}.
\end{array}
\]
These expressions are the coefficients of the characteristic polynomial of $A$ up to a sign. That is,
\begin{align*}
det(\lambda I-A)&=\prod_{i=1}^{n}(\lambda-\lambda_i)\\
&=\lambda^n-e_1(\lambda_1,\dots,\lambda_n)\lambda^{n-1}+\dots+(-1)^{n}e_n(\lambda_1,\dots,\lambda_n)\\
&=\sum_{k=0}^{n}(-1)^{n+k}e_{n-k}(\lambda_1,\dots,\lambda_n)\lambda^k.
\end{align*}
So if we know $s_k(\lambda_1,\dots,\lambda_n):=\sum_{i=1}^{n}\lambda_i^k$ for $k=1,\dots,n$, then using \emph{Newton's identities}: $$ e_k(\lambda_1,\dots,\lambda_n)=\frac{1}{k}\sum_{j=1}^{k}(-1)^{j-1}e_{k-j}(\lambda_1,\dots,\lambda_n)s_j(\lambda_1,\dots,\lambda_n)\ \ \text{for $1\leq k\leq n$}, $$ we can obtain all the elementary symmetric polynomials in the eigenvalues, and so we can reconstruct the characteristic polynomial of $A$.\\
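For concreteness, the following short Python sketch (ours, relying only on \texttt{numpy}; the helper name is not part of the formal development) recovers the coefficients of the characteristic polynomial of a small adjacency matrix from the power sums $s_k=tr(A^k)$ exactly as above.
\begin{verbatim}
import numpy as np

def elementary_from_power_sums(A):
    """e_0,...,e_n from the power sums s_k = tr(A^k) via Newton's identities."""
    n = A.shape[0]
    s = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, n + 1)]
    e = [1.0]                                   # e_0 = 1
    for k in range(1, n + 1):
        e.append(sum((-1) ** (j - 1) * e[k - j] * s[j - 1]
                     for j in range(1, k + 1)) / k)
    return e

# a small 4-vertex graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
e = elementary_from_power_sums(A)
# det(xI - A) = sum_k (-1)^k e_k x^(n-k); np.poly(A) lists these coefficients
assert np.allclose([(-1) ** k * e[k] for k in range(5)], np.poly(A))
\end{verbatim}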
\begin{proposition}~\label{prop:pre1}
For $n$-vertex graphs $G$ and $H$, the following are equivalent:
\begin{itemize}
\item[$(i)$] $G$ and $H$ are co-spectral;
\item[$(ii)$] $G$ and $H$ have the same characteristic polynomial;
\item[$(iii)$] $tr(A_G^k) = tr(A_H^k)$ for $1\leq k \leq n$.
\end{itemize}
\end{proposition}
\section{Spectra and Walks}~\label{wlk}
Given a graph $G$, a \emph{walk of length} $l$ in $G$ is a sequence $(v_0,v_1,\dots,v_{l})$ of vertices of $G$, such that consecutive vertices are adjacent in $G$. Formally, $(v_0,v_1,\dots,v_{l})$ is a walk of length $l$ in $G$ if, and only if, $\{v_{i-1},v_i\}\in E_G$ for $1\leq i\leq l$. We say that the walk $(v_0,v_1,\dots,v_{l})$ \emph{starts} at $v_0$ and \emph{ends} at $v_l$. A walk of length $l$ is said to be \emph{closed} (or $l$-\emph{closed}, for short) if it starts and ends in the same vertex.
Since the $ij$-th entry of $A_G^l$ is precisely the number of walks of length $l$ in $G$ starting at $v_i$ and ending at $v_j$, by Proposition~\ref{prop:pre1}, we have that the spectrum of $G$ is completely determined if we know the total number of closed walks for each length up to the number of vertices in $G$. Thus, two graphs $G$ and $H$ are co-spectral if, and only if, the total number of $l$-closed walks in $G$ is equal to the total number of $l$-closed walks in $H$ for all $l\geq 0$.
For an example of co-spectral non-isomorphic graphs, let $G=C_4\cup
K_1$ and $H=K_{1,4}$, where $C_n$ is the $n$-vertex cycle, $K_n$ is the complete $n$-vertex graph,
$K_{n,m}$ the complete $(n+m)$-vertex bipartite graph, and ``$\cup$''
denotes the disjoint union of two graphs. The spectrum of both $G$ and
$H$ is the multi-set $\{-2,0,0,0,2\}$.
However, $G$ contains an isolated vertex while $H$ is a connected graph.
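This can be checked numerically; the following sketch (ours, using \texttt{numpy}) confirms that the two graphs share the spectrum $\{-2,0,0,0,2\}$ and all traces $tr(A^k)$ for $1\leq k\leq 5$ (condition $(iii)$ of Proposition~\ref{prop:pre1}), while their degree sequences already witness non-isomorphism.
\begin{verbatim}
import numpy as np

G = np.zeros((5, 5))                 # C_4 together with an isolated vertex
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    G[i, j] = G[j, i] = 1
H = np.zeros((5, 5))                 # the star K_{1,4}
for j in range(1, 5):
    H[0, j] = H[j, 0] = 1

print(np.round(np.linalg.eigvalsh(G), 6))   # [-2. 0. 0. 0. 2.]
print(np.round(np.linalg.eigvalsh(H), 6))   # [-2. 0. 0. 0. 2.]
# equal traces of the first n powers
assert all(np.isclose(np.trace(np.linalg.matrix_power(G, k)),
                      np.trace(np.linalg.matrix_power(H, k)))
           for k in range(1, 6))
# ...but the degree sequences differ, so G and H are not isomorphic
assert sorted(G.sum(axis=0).astype(int)) != sorted(H.sum(axis=0).astype(int))
\end{verbatim}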
\subsection{Finite Variable Logics with Counting}
For each positive integer $k$, let $C^k$ denote the fragment of first-order logic in which only $k$ distinct variables can be used but we allow \emph{counting quantifiers}: so for each $i\geq 1$ we have a quantifier $\exists^i$ whose semantics is defined so that $\exists^i x \phi$ is true in a structure if there are at least $i$ distinct elements that can be substituted for $x$ to make $\phi$ true. We use the abbreviation $\exists^{=i} x \phi$ for the formula $\exists^i x \phi \land\lnot \exists^{i+1} x \phi$ that asserts the existence of exactly $i$ elements satisfying $\phi$. We write $G \equiv_C^k H$ to denote that the graphs $G$ and $H$ are not distinguished by any formula of $C^k$. Note that \emph{$C^k$-equivalence} is the usual first-order elementary equivalence relation restricted to formulas using at most $k$ distinct variables and possibly using counting quantifiers.
We show that for integers $k,l$, with $k\geq 0$ and $l\geq 1$, there is a formula $\psi^l_k(x,y)$ of $C^3$ so that for any graph $G$ and vertices $v,u\in V_G$, $G\models\psi^{l}_k[v,u]$ if, and only if, there are exactly $k$ walks of length $l$ in $G$ that start at $v$ and end at $u$. We define this formula by induction on $l$. Note that in the inductive definition, we refer to a formula $\psi^l_k(z,y)$. This is to be read as the formula $\psi^l_k(x,y)$ with all occurrences of $x$ and $z$ (free or bound) interchanged. In particular, the free variables of $\psi^l_k(x,y)$ are exactly $x,y$ and those of $\psi^l_k(z,y)$ are exactly $z,y$.
For $l = 1$, the formulas are defined as follows: $$\psi^1_0(x,y):= \lnot E(x,y);\ \ \psi^1_1(x,y):= E(x,y); $$ $$ \mathrm{~~~and}\ \ \psi^1_k(x,y):=\mathrm{false\ \ \ \ for}\ \ k>1.$$
For the inductive case, we first introduce some notation. Say that a collection $(i_1,k_1),\dots,(i_r,k_r)$ of pairs of integers, with $i_j\geq 1$ and $k_j\geq 0$ is an \emph{indexed partition} of $k$ if the $k_1,\dots,k_r$ are pairwise distinct and $k =\sum_{j=1}^{r} i_j k_j$. That is, we partitioned $k$ into $\sum_{j=1}^r i_j$ distinct parts, and there are exactly $i_j$ parts of size $k_j$ where $j=1,\dots,r$. Let $K$ denote the set of all indexed partitions of $k$ and note that this is a finite set.
Now, assume we have defined the formulas $\psi^l_k(x,y)$ for all values of $k\geq 0$. We proceed to define them for $l+1$: $$\psi^{l+1}_0(x,y):= \forall z(E(x,z)\rightarrow\psi^{l}_0(z,y))$$
$$\psi^{l+1}_k(x,y):= \bigvee_{(i_1,k_1),\dots,(i_r,k_r)\in K}\Big(\big(\bigwedge_{j=1}^r\exists^{=i_j}z\ (E(x,z) \land \psi^{l}_{k_j}(z,y))\big)\land\exists^{=d} z\ E(x,z)\Big),$$ where $d=\sum_{j=1}^r i_j$. Note that without allowing counting quantification it would be necessary to use many more distinct variables to rewrite the last formula.
Given an $n$-vertex graph $G$, as noted before $(A_G^{l})_{ij}$ is equal to the number of walks of length $l$ in $G$ from vertex $v_i$ to vertex $v_j$, so $(A_G^{l})_{ij}=k$ if, and only if, $G\models \psi^{l}_k (v_i,v_j)$. Once again, let $K$ denote the set of indexed partitions of $k$. For each integer $k\geq 0$ and $l\geq 0$, we define the sentence
$$\phi^{l}_{k} := \bigvee_{(i_1,k_1),\dots,(i_r,k_r)\in K} \Big(\bigwedge_{j=1}^{r} \exists^{=i_j}x\exists y\big( x=y\land \psi^{l}_{k_j}(x,y)\big)\Big).$$
Then we have $G\models \phi^{l}_k$ if, and only if, the total number of closed walks of length $l$ in $G$ is exactly $k$. Hence $G\models \phi^{l}_k$ if, and only if, $tr(A_G^{l})=k$. Thus, we have the following proposition.
\begin{proposition}\label{prop:wlk1}
If $G\equiv_C^3 H$ then $G$ and $H$ are co-spectral.
\end{proposition}
\begin{proof}
Suppose $G$ and $H$ are two non-cospectral graphs. Then there is some $l$ such that $tr(A_G^{l})\neq tr(A_H^{l})$, i.e. the total number of closed walks of length $l$ in $G$ is different from the total number of closed walks of length $l$ in $H$ (see Proposition \ref{prop:pre1}). If $k$ is the total number of closed walks of length $l$ in $G$, then $G\models\phi^{l}_k$ and $H\not\models\phi^{l}_k$. Since $\phi^{l}_k$ is a sentence of $C^3$, we conclude that $G\not\equiv_C^3 H$. \qed
\end{proof}
For any graph $G$ and $l\geq 1$, there exists a non-negative integer $k_{l}$ such that $tr(A_G^{l})=k_{l}$. Since having the traces of powers of the adjacency matrix of $G$ up to the number of vertices is equivalent to having the spectrum of $G$, we can define a sentence $$\phi_G := \bigwedge_{l=1}^n \phi^{l}_{k_l}$$ of $C^3$ such that for any graph $H$, we have $H\models\phi_G$ if, and only if, $\mathrm{sp}(G)=\mathrm{sp}(H)$.
\subsection{Graphs Determined by Their Spectra}
We say that a graph $G$ is \emph{determined by its spectrum} (for short, DS) when for any graph $H$, if $\mathrm{sp}(G)=\mathrm{sp}(H)$ then $G\cong H$. In words, a graph is determined by its spectrum when it is the only graph up to isomorphism with a certain spectrum. In Proposition \ref{prop:wlk1} we saw that $C^3$-equivalent graphs are necessarily co-spectral. That is, if two graphs $G$ and $H$ are $C^3$-equivalent then $G$ and $H$ must have the same spectrum. It thus follows that being identified by $C^3$ is weaker than being determined by the spectrum, so there are more graphs identified by $C^3$ than graphs determined by their spectra.
\begin{observation}
On the class of all finite graphs, $C^3$-equivalence refines co-spectrality.
\end{observation}
In general, determining whether a graph has the DS property (i.e., whether the equivalence class induced by having the same spectrum coincides with its isomorphism class) is an open problem in spectral graph theory (see, e.g. \cite{Van03}). Given a graph $G$ and a positive integer $k$, we say that the logic $C^k$ \emph{identifies} $G$ when for all graphs $H$, if $G\equiv_C^k H$ then $G\cong H$. Let $\mathcal{C}^k_n$ be the class of all $n$-vertex graphs that are identified by $C^k$.
Since $C^2$-equivalence corresponds to indistinguishability by the $1$-dimensional Weisfeiler-Leman algorithm~\cite{Immerman90}, from a classical result of Babai, Erd\H{o}s and Selkow~\cite{Babai80}, it follows that $\mathcal{C}^{2}_n$ contains almost all $n$-vertex graphs. Let $\text{DS}_n$ be the class of all DS $n$-vertex graphs.
The $1$-dimensional Weisfeiler-Leman algorithm (see Section
\ref{sym}) does not distinguish any pair of non-isomorphic regular graphs of the same degree with the same number of vertices. Hence, if a regular graph is not determined up to isomorphism
by its number of vertices and its degree, then it is not in $\mathcal{C}^2_n$.
However, there are regular graphs that are determined by their number of
vertices and their degree. For instance, the complete graph on $n$
vertices, which gives an example of a graph in $\text{DS}_n\cap \mathcal{C}^2_n$.
Let $T$ be a tree on $n$ vertices. By a well-known result of Schwenk~\cite{Schwenk73}, for almost all such trees (that is, with proportion tending to one as $n$ grows) there exists another tree $T^{\prime}$ such that $T$ and $T^{\prime}$ are co-spectral but not isomorphic. From a result of Immerman and Lander~\cite{Immerman90} we know that all trees are identified by $C^2$. Hence such a $T$ is an example of a graph in $\mathcal{C}^2_n$ which is not in $\text{DS}_n$. On the other hand, the disjoint union of two complete graphs with the same number of vertices is a graph which is determined by its spectrum. That is, $2K_m$ is DS (see~\cite[Section~6.1]{Van03}). For each $m>2$ it is possible to construct a connected regular graph $G_{2m}$ with the same number of vertices and the same degree as $2K_m$. Hence $G_{2m}$ and $2K_m$ are not distinguishable in $C^2$ and clearly not isomorphic. This shows that co-spectrality and elementary equivalence with respect to the two-variable counting logic are incomparable.
From a result of Babai and Ku\v{c}era~\cite{Babai79}, we know that a graph randomly selected from the uniform distribution over the class of all labelled $n$-vertex graphs (of which there are $2^{n(n-1)/2}$) is not identified by $C^2$ with probability $(o(1))^n$. Moreover, in~\cite{Kucera87} Ku\v{c}era presented an efficient algorithm for labelling the vertices of random regular graphs from which it follows that the fraction of regular graphs which are not identified by $C^3$ tends to $0$ as the number of vertices tends to infinity. Therefore, almost all regular $n$-vertex graphs are in $\mathcal{C}^3_n$. Summarising, $\text{DS}_n$ and $\mathcal{C}_n^2$ overlap and both are contained in $\mathcal{C}_n^3$.
\subsection{Lower Bounds}
Having established that $C^3$-equivalence is a refinement of co-spectrality, we now look at the relationship of the latter with equivalence in finite variable logics without counting quantifiers. First of all, we note that some co-spectral graphs can be distinguished by a formula using just two variables and no counting quantifiers.
\begin{proposition}\label{prop:lb1}
There exists a pair of co-spectral graphs that can be distinguished in first-order logic with only two variables.
\end{proposition}
\begin{proof}
Let us consider the following two-variable first-order sentence:
$$\psi:= \exists x\forall y\ \lnot E(x, y). $$
For any graph $G$ we have that $G\models\psi$ if, and only if, there is an isolated vertex in $G$. Hence $C_4\cup K_1\models\psi$ and $K_{1,4}\not\models\psi$. Therefore, $C_4\cup K_1\not\equiv^2 K_{1,4}$. \qed
\end{proof}
Next, we show that counting quantifiers are essential to the argument
from the previous section in that co-spectrality is not subsumed by
equivalence in any finite-variable fragment of first-order logic in
the absence of such quantifiers. Let $L^k$ denote the fragment of
first-order logic in which each formula has at most $k$ distinct
variables.
For each $r,s\geq0$, the \emph{extension axiom} $\eta_{r,s}$ is the first-order sentence $$
\forall x_1\dots\forall x_{r+s}\Bigg(\bigg(\bigwedge_{i\neq j} x_i\neq x_j\bigg)\rightarrow\exists y\bigg(\bigwedge_{i\leq r} E(x_i,y)\land\bigwedge_{i>r}\lnot E(x_i,y)\wedge x_i\neq y\bigg)\Bigg). $$
A graph $G$ satisfies the \emph{$k$-extension property} if $G\models\eta_{r,s}$ for all $r,s\geq 0$ with $r+s=k$. In~\cite{Kolaitis92} Kolaitis and Vardi proved that if the graphs $G$ and $H$ both satisfy the $k$-extension property, then there is no formula of $L^{k}$ that can distinguish them; in this case we write $G\equiv^{k} H$. Fagin~\cite{Fagin76} proved that for each $k\geq 0$, almost all graphs satisfy the $k$-extension property. Hence almost all pairs of graphs are not distinguished by any formula of $L^k$.
Let $q$ be a prime power such that $q\equiv 1$ (mod 4). The \emph{Paley graph} of order $q$ is the graph $P(q)$ with vertex set $\mathrm{GF}(q)$, the finite field of order $q$, where two vertices $i$ and $j$ are adjacent if there is a nonzero $x\in\mathrm{GF}(q)$ such that $x^2 = i-j$. Since $q\equiv 1$ (mod 4) if, and only if, $x^2\equiv -1$ (mod $q$) is solvable, $-1$ is a square in $\mathrm{GF}(q)$, and so $j-i$ is a square if, and only if, $i-j$ is a square. Therefore, adjacency in a Paley graph is a symmetric relation and so $P(q)$ is undirected. Blass, Exoo and Harary~\cite{Blass81} proved that if $q$ is greater than $k^2 2^{4k}$, then $P(q)$ satisfies the $k$-extension property.
Now, let $q=p^r$ with $p$ an odd prime, $r$ a positive integer, and $q\equiv 1$ (mod $3$). The \emph{cubic Paley graph} $P^3(q)$ is the graph whose vertices are elements of the finite field $\mathrm{GF}(q)$, where two vertices $i,j\in\mathrm{GF}(q)$ are adjacent if and only if their difference is a cubic residue, i.e. $i$ is adjacent to $j$ if, and only if, $i-j = x^3$ for some $x\in\mathrm{GF}(q)$. Note that $-1$ is a cube in $\mathrm{GF}(q)$ because $q\equiv 1$ (mod $3$) is a prime power, so $i$ is adjacent to $j$ if, and only if, $j$ is adjacent to $i$. In~\cite{Ananchuen06} it has been proved that $P^3(q)$ has the $k$-extension property whenever $q\geq k^2 2^{4k-2}$.
The \emph{degree} of vertex $v$ in a graph $G$ is the number $d(v):= |\{\{v,u\}\in E_G:u\in V_G\}|$ of vertices that are adjacent to $v$. A graph $G$ is \emph{regular} of degree $d$ if every vertex is adjacent to exactly $d$ other vertices, i.e.\ $d(v)=d$ for all $v\in V_G$. So, $G$ is regular of degree $d$ if, and only if, each row of its adjacency matrix adds up to $d$. It can be shown that the Paley graph $P(q)$ is regular of degree $(q-1)/2$~\cite{GodsilRoyle}. Moreover, it has been proved that the cubic Paley graph $P^3(q)$ is regular of degree $(q-1)/3$~\cite{Elsawy12}.
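In the prime case the two constructions are easy to carry out explicitly; the following sketch (ours, plain Python) builds $P(13)$ and $P^3(13)$ and checks the stated degrees and the symmetry of adjacency.
\begin{verbatim}
q = 13                                   # 13 = 1 (mod 4) and 13 = 1 (mod 3)
squares = {(x * x) % q for x in range(1, q)}
cubes = {(x * x * x) % q for x in range(1, q)}

paley = {(i, j) for i in range(q) for j in range(q)
         if i != j and (i - j) % q in squares}
cubic = {(i, j) for i in range(q) for j in range(q)
         if i != j and (i - j) % q in cubes}

assert len({j for (i, j) in paley if i == 0}) == (q - 1) // 2   # degree 6
assert len({j for (i, j) in cubic if i == 0}) == (q - 1) // 3   # degree 4
# -1 is both a square and a cube mod 13, so both relations are symmetric
assert all((j, i) in paley for (i, j) in paley)
assert all((j, i) in cubic for (i, j) in cubic)
\end{verbatim}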
\begin{lemma}\label{lem:lb1}
Let $G$ be a regular graph of degree $d$. Then $d\in\mathrm{sp}(G)$ and for each $\theta\in\mathrm{sp}(G)$, we have $|\theta|\leq d$. Here $|\cdot|$ is the operation of taking the absolute value.
\end{lemma}
\begin{proof}
Let us denote by $\mathbf{1}$ the all-ones vector. Then $A_G\mathbf{1}=d\mathbf{1}$. Therefore, $d\in\mathrm{sp}(G)$. Now, let $s$ be such that $|s| > d$. Then, for each row $i$, $$|S_{ii}|>\sum_{j\neq i}|S_{ij}|$$ where $S=s I - A_G$. Therefore, the matrix $S$ is strictly diagonally dominant, and so $det(sI-A_G)\neq 0$. Hence $s$ is not an eigenvalue of $G$. \qed
\end{proof}
\begin{lemma}\label{lem:lb2}
Let $G$ and $H$ be regular graphs of distinct degrees. Then $G$ and $H$ do not have the same spectrum.
\end{lemma}
\begin{proof}
Suppose that $G$ is regular of degree $s$ and $H$ is regular of degree $t$, with $s\neq t$. Then $A_G\mathbf{1}=s\mathbf{1}$ and $A_H\mathbf{1}=t\mathbf{1}$, where $\mathbf{1}$ is the all-ones vector. Therefore, $s$ is the greatest eigenvalue in the spectrum of $G$ and $t$ is the greatest eigenvalue in the spectrum of $H$. Hence $\mathrm{sp}(G)\neq\mathrm{sp}(H)$. \qed
\end{proof}
\begin{proposition}\label{prop:lb2}
For each $k\geq 1$, there exists a pair $G_k, H_k$ of graphs which are not co-spectral, such that $G_k$ and $H_k$ are not distinguished by any formula of $L^{k}$.
\end{proposition}
\begin{proof}
For any positive integer $r$ we have that $13^r\equiv 1$ (mod $3$) and $13^r\equiv 1$ (mod $4$). For each $k\geq 1$, let $r_k$ be the smallest integer greater than $2 (k \log(4)+\log(k))/\log(13)$, and let $q_k = 13^{r_k}$. Hence $q_k>k^2 2^{4k}$. Now, let $G_k=P(q_k)$ and $H_k=P^3(q_k)$. Then $G_k$ and $H_k$ both satisfy the $k$-extension property, and so $G_k \equiv^{k} H_k$. Since the degree of $G_k$ is $(13^{r_k}-1)/2$ and the degree of $H_k$ is $(13^{r_k}-1)/3$, by Lemma~\ref{lem:lb2} we conclude that $\mathrm{sp}(G_k)\neq\mathrm{sp}(H_k)$. \qed
\end{proof}
So having the same spectrum is a property of pairs of graphs that does not follow from any finite collection of extension axioms, or equivalently, from any first-order sentence with asymptotic probability 1.
\renewcommand{\b}[1]{\mathbf{#1}}
\section{Isomorphism Approximations}\label{sym}
\subsection{WL Equivalence}
The automorphism group $\mathrm{Aut}(G)$ of $G$ acts naturally on the set $V^k_G$ of all $k$-tuples of vertices of $G$, and the set of orbits of $k$-tuples under the action of $\mathrm{Aut}(G)$ forms a corresponding partition of $V^k_G$. The \emph{$k$-dimensional Weisfeiler-Leman algorithm} is a combinatorial method that tries to approximate the partition induced by the orbits of $\mathrm{Aut}(G)$ by labelling the $k$-tuples of vertices of $G$. For the sake of completeness, here we give a brief overview of the algorithm.
The $1$-dimensional Weisfeiler-Leman algorithm has the following steps: first, label each vertex $v\in V_G$ by its degree $d(v)$. The set $N(v):=\{u:\{v,u\}\in E_G\}$ is called the \emph{neighborhood} of $v\in V_G$ and so, the degree of $v$ is just the number of neighbours it has, i.e. $d(v)=|N(v)|$. In this way we have defined a partition $P_{0}(G)$ of $V_G$. The number of labels is equal to the number of different degrees, so $P_{0}(G)$ corresponds to the degree sequence of $G$. Then, relabel each vertex $v$ with the multi-set of labels of its neighbours, so each label $d(v)$ is replaced by $\{d(v), \{d(u):u\in N(v)\}\}$. Since these are multi-sets they might contain repeated elements. We then get a partition $P_{1}(G)$ of $V_G$ which is either a refinement of $P_{0}(G)$ or identical to $P_{0}(G)$. Inductively, the partition $P_{t}(G)$ is obtained from the partition $P_{t-1}(G)$ by constructing for each vertex $v$ a new multi-set that includes the labels of its neighbours, as in the previous step. The algorithm halts as soon as the number of labels does not increase anymore. We denote the resulting partition of $V_G$ by $P^1_G$.
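The procedure just described is often called colour refinement; a compact Python sketch (ours, with the function name \texttt{wl1} and the dictionary representation of graphs as our own choices) is as follows.
\begin{verbatim}
def wl1(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    labels = {v: len(adj[v]) for v in adj}            # P_0: the degrees
    while True:
        new = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
               for v in adj}
        palette = {lab: i for i, lab in enumerate(sorted(set(new.values())))}
        new = {v: palette[new[v]] for v in adj}       # compress labels
        if len(set(new.values())) == len(set(labels.values())):
            return new                                # stable partition P^1_G
        labels = new

# the path on 4 vertices: end-points and middle vertices get different colours
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(wl1(path).values()))                     # [0, 0, 1, 1]
\end{verbatim}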
Now we describe the algorithm for higher dimensions. Recall that we are working in the first-order language of graphs $L=\{E\}$. Now, for each graph $G$ and each $k$-tuple $\b{v}$ of vertices of $G$ we define the \emph{(atomic) type of $\b{v}$} in $G$ as the set $\mathrm{tp}^k_G(\b{v})$ of all atomic $L$-formulas $\phi(\b{x})$ that are true in $G$ when the variables of $\b{x}$ are substituted for vertices of $\b{v}$. More formally, for $k>1$ we let
\[
\mathrm{tp}^k_G(\b{v}) := \{\phi(\b{x}): |\b{x}|\leq k, G\models\phi(\b{v})\}
\]
where $|\b{x}|$ denotes the number of entries of the tuple $\b{x}$, and each $\phi(\b{x})$ is either $x_i=x_j$ or $E(x_i,x_j)$ for $1\leq i, j\leq k$. Essentially, the formulas of $\mathrm{tp}^k_G(\b{v})$ give us the complete information about the structural relations that hold between the vertices of $\b{v}$. If $u\in V_G$ and $1\leq i\leq k$, let $\b{v}_i^u$ denote the result of substituting $u$ in the $i$-th entry of $\b{v}$.
For each $k> 1$ the $k$-dimensional Weisfeiler-Leman algorithm proceeds as follows: first, label the $k$-tuples of vertices with their types in $G$, so each $k$-tuple $\b{v}$ is labeled with $\ell_0(\b{v}):=\mathrm{tp}^k_G(\b{v})$; this induces a partition $P^k_0(G)$ of the $k$-tuples of vertices of $G$. Inductively, refine the partition $P^k_i(G)$ of $V^k_G$ by relabelling the $k$-tuples so that each label $\ell_i(\b{v})$ is substituted for $\ell_{i+1}(\b{v}):=\{\ell_{i}(\b{v}),\{\ell_{i}(\b{v}_1^u),\dots,\ell_{i}(\b{v}_k^u):u\in V_G\}\}$. The algorithm continues refining the partition of $V^k_G$ until it gets to a step $t\geq 1$, where $P_t^k(G) = P_{t-1}^k(G)$; then it halts. We denote the resulting partition of $V^k_G$ by $P^k_G$.
Notice that for any fixed $k\geq 1$, the partition $P^k_G$ of $k$-tuples is obtained after at most $|V_G|^k$ steps. If the $k$-dimensional Weisfeiler-Leman algorithm produces the same multi-set of labels on $G$ and on $H$, we say that $G$ and $H$ are \emph{$k$-WL equivalent}. In~\cite{Cai92}, Cai, F\"urer and Immerman proved that two graphs $G$ and $H$ are $C^{k+1}$-equivalent if, and only if, $G$ and $H$ are $k$-WL equivalent.
\subsection{Symmetric Powers}
The \emph{$k$-th symmetric power} $G^{\{k\}}$ of a graph $G$ is a graph where each vertex represents a $k$-subset of vertices of $G$, and two $k$-subsets are adjacent if their symmetric difference is an edge of $G$. Formally, the vertex set $V_{G^{\{k\}}}$ of $G^{\{k\}}$ is defined to be the set of all subsets of $V_G$ with exactly $k$ elements, and for every pair of $k$-subsets of vertices $V=\{v_1,\dots,v_k\}$ and $U=\{u_1,\dots,u_k\}$, we have $\{V, U\}\in E_{G^{\{k\}}}$ if, and only if, $(V\smallsetminus U)\cup(U\smallsetminus V) \in E_G$. The symmetric powers are related to a natural generalisation of the concept of a walk in a graph. A \emph{$k$-walk} of length $l$ in $G$ is a sequence $(V_0, V_1,\dots, V_l)$ of $k$-subsets of vertices, such that the symmetric difference of $V_{i-1}$ and $V_{i}$ is an edge of $G$ for $1\leq i\leq l$. A $k$-walk is said to be closed if $V_0=V_l$. The connection with the symmetric powers is that a $k$-walk in $G$ corresponds to an ordinary walk in $G^{\{k\}}$. Therefore, two graphs have the same total number of closed $k$-walks of every length if, and only if, their $k$-th symmetric powers are co-spectral. For each $k\geq 1$, there exist infinitely many pairs of non-isomorphic graphs $G$ and $H$ such that the $k$-th symmetric powers $G^{\{k\}}$ and $H^{\{k\}}$ are co-spectral~\cite{Barghi09}.
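As an illustration (ours; \texttt{symmetric\_power} is our own helper), the following sketch builds $G^{\{k\}}$ directly from the definition and counts its closed walks by the trace criterion of Section~\ref{wlk}.
\begin{verbatim}
from itertools import combinations
import numpy as np

def symmetric_power(edges, n, k):
    verts = list(combinations(range(n), k))
    index = {V: i for i, V in enumerate(verts)}
    E = {frozenset(e) for e in edges}
    A = np.zeros((len(verts), len(verts)))
    for V, U in combinations(verts, 2):
        if frozenset(set(V) ^ set(U)) in E:   # symmetric difference is an edge
            A[index[V], index[U]] = A[index[U], index[V]] = 1
    return A

# closed 2-walk counts of C_4, i.e. traces of powers of its 2nd symmetric power
A = symmetric_power([(0, 1), (1, 2), (2, 3), (3, 0)], n=4, k=2)
print([int(round(np.trace(np.linalg.matrix_power(A, l)))) for l in range(1, 7)])
\end{verbatim}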
Alzaga, Iglesias and Pignol~\cite{Alzaga10} have shown that given two graphs $G$ and $H$, if $G$ and $H$ are $2k$-WL equivalent, then their $k$-th symmetric powers $G^{\{k\}}$ and $H^{\{k\}}$ are co-spectral. These two facts combined allow us to deduce the following generalisation of Proposition~\ref{prop:wlk1}.
\begin{proposition}\label{prop:sym1}
Given graphs $G$ and $H$ and a positive integer $k$, if $G\equiv_{C}^{2k+1}H$ then $G^{\{k\}}$ and $H^{\{k\}}$ are co-spectral.
\end{proposition}
\subsection{Cellular Algebras}
Originally, Weisfeiler and Leman~\cite{WL68} presented their algorithm in terms of algebras of complex matrices. Given two matrices $A$ and $B$ of the same order, their \emph{Schur product} $A\circ B$ is defined by $(A\circ B)_{ij}:=A_{ij}B_{ij}$. For a complex matrix $A$, let $A^{\ast}$ denote the adjoint (or conjugate-transpose) of $A$. A \emph{cellular algebra} $W$ is an algebra of square complex matrices that contains the identity matrix $I$, the all-ones matrix $J$, and is closed under adjoints and Schur multiplication. Thus, every cellular algebra has a unique basis $\{A_1,\dots,A_m\}$ of binary matrices which is closed under adjoints and such that $\sum_i A_i = J$.
The smallest cellular algebra is the span of $I$ and $J$. The cellular algebra of an $n$-vertex graph $G$ is the smallest cellular algebra $W_G$ that contains $A_G$. Two cellular algebras $W$ and $W^{\prime}$ are isomorphic if there is an algebra isomorphism $h:W\to W^{\prime}$, such that $h(A\circ B)=h(A)\circ h(B)$, $h(A)^{\ast}=h(A^{\ast})$ and $h(J)=J$. Given an isomorphism $h:W\to W^{\prime}$ of cellular algebras, for all $A\in W$ we have that $A$ and $h(A)$ are co-spectral (see Lemma~3.4 in~\cite{Friedland89}). So the next result is immediate.
\begin{proposition}\label{prop:ca1}
Two graphs $G$ and $H$ are co-spectral if there is an isomorphism of $W_G$ and $W_H$ that maps $A_G$ to $A_H$.
\end{proposition}
In general, the converse of Proposition~\ref{prop:ca1} is not true. That is, there are known pairs of co-spectral graphs whose corresponding cellular algebras are non-isomorphic (see, e.g. \cite{Barghi09}).
The elements of the standard basis of a cellular algebra correspond to the ``adjacency matrices'' of a corresponding coherent configuration. Coherent configurations were introduced by Higman in~\cite{Higman75} to study finite permutation groups.
Coherent configurations are stable under the $2$-dimensional Weisfeiler-Leman algorithm. Hence two graphs $G$ and $H$ are $2$-WL equivalent if, and only if, there is an isomorphism of $W_G$ and $W_H$ that maps $A_G$ to $A_H$.
\begin{proposition}\label{obs:ca1}
Given graphs $G$ and $H$ with cellular algebras $W_G$ and $W_H$, $G\equiv_C^3 H$ if, and only if, there is an isomorphism of $W_G$ and $W_H$ that maps $A_G$ to $A_H$.
\end{proposition}
\subsection{Strongly Regular Graphs}
A \emph{strongly regular graph} $\mathrm{srg}(n,r,\lambda,\mu)$ is a regular $n$-vertex graph of degree $r$ such that each pair of adjacent vertices has $\lambda$ common neighbours, and each pair of nonadjacent vertices has $\mu$ common neighbours. The numbers $n,r,\lambda,\mu$ are called the \emph{parameters} of $\mathrm{srg}(n,r,\lambda,\mu)$. It can be shown that the spectrum of a strongly regular graph is determined by its parameters~\cite{GodsilRoyle}. The complement of a strongly regular graph is strongly regular. Moreover, co-spectral strongly regular graphs have co-spectral complements. That is, two strongly regular graphs having the same parameters are co-spectral. Recall $J$ is the all-ones matrix.
\begin{lemma}
If $G$ is a strongly regular graph then $\{I , A_G, (J - I - A_G)\}$ is the standard basis of its corresponding cellular algebra $W_G$.
\end{lemma}
\begin{proof}
By definition, $W_G$ has a unique basis $\mathcal{A}$ of binary matrices closed under adjoints and so that \[\sum_{A\in\mathcal{A}} A = J.\]
Notice that $I,A_G$ and $J - I - A_G$ are binary matrices such that $I^{\ast}=I$, $A_G^{\ast}=A_G$ and $
(J - I - A_G)^{\ast} = J - I - A_G.$ Furthermore,
\[I + A_G + (J - I - A_G) = J.\]
It remains to check that the span of these three matrices is a cellular algebra containing $A_G$. It contains $I$ and $J$ and is clearly closed under adjoints and Schur products; it is closed under matrix multiplication because $A_GJ=JA_G=rJ$ and strong regularity gives $A_G^2 = rI+\lambda A_G+\mu(J - I - A_G)$. Finally, any cellular algebra containing $A_G$ also contains $I$, $J$ and hence $J-I-A_G$, so this span is the smallest one, namely $W_G$. \qed
\end{proof}
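The multiplicative relation used in the proof is easy to check numerically; the sketch below (ours, using \texttt{numpy}) does so for the Petersen graph, an $\mathrm{srg}(10,3,0,1)$.
\begin{verbatim}
from itertools import combinations
import numpy as np

verts = list(combinations(range(5), 2))     # Kneser-graph model of Petersen
A = np.array([[1.0 if set(u).isdisjoint(v) else 0.0 for v in verts]
              for u in verts])
n, r, lam, mu = 10, 3, 0, 1
I, J = np.eye(n), np.ones((n, n))
# strong regularity: A^2 = rI + lam*A + mu*(J - I - A), so span{I, A, J-I-A}
# is closed under matrix products
assert np.allclose(A @ A, r * I + lam * A + mu * (J - I - A))
print(np.round(np.linalg.eigvalsh(A), 6))   # spectrum {3, 1 (x5), -2 (x4)}
\end{verbatim}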
There are known pairs of non-isomorphic strongly regular graphs with the same parameters (see, e.g. \cite{Brouwer84}). By the lemma below, such graphs are not distinguished by the $2$-dimensional Weisfeiler-Leman algorithm,
since there is an algebra isomorphism that maps the adjacency matrix of one to the adjacency matrix of the other. Indeed, for strongly regular graphs the converse of Proposition~\ref{prop:ca1} holds.
\begin{lemma}\label{lem:ca1}
If $G$ and $H$ are two co-spectral strongly regular graphs, then there exists an isomorphism of $W_G$ and
$W_H$ that maps $A_G$ to $A_H$.
\end{lemma}
\begin{proof}
The cellular algebras $W_G$ and $W_H$ of $G$ and $H$ have standard bases $\{I , A_G, (J - I - A_G)\}$ and $\{I , A_H, (J - I - A_H)\}$, respectively. Since $G$ and $H$ are co-spectral strongly regular graphs, they are regular of the same degree and the all-ones vector is an eigenvector of both adjacency matrices for that degree; hence there exists an orthogonal matrix $Q$, mapping an orthonormal eigenbasis of $A_G$ to one of $A_H$ while matching eigenvalues and fixing the normalised all-ones vector, such that $QA_GQ^{T}=A_H$, $QJQ^{T}=J$ and therefore $Q(J - I - A_G)Q^{T}=(J - I - A_H)$. In~\cite{Friedland89}, Friedland has shown that two cellular algebras with standard bases $\{A_1,\dots,A_m\}$ and $\{B_1,\dots,B_m\}$ are isomorphic if, and only if, there is an invertible matrix $M$ such that $MA_iM^{-1}=B_i$ for $1\leq i\leq m$. As every orthogonal matrix is invertible, we can conclude that there exists an isomorphism of $W_G$ and
$W_H$ that maps $A_G$ to $A_H$. \qed
\end{proof}
\begin{proposition}\label{prop:ca2}
Given two strongly regular graphs $G$ and $H$, the following statements are equivalent:
\begin{enumerate}
\item $G\equiv_C^3 H$;
\item $G$ and $H$ are co-spectral;
\item there is an isomorphism of $W_G$ and $W_H$ that maps $A_G$ to $A_H$.
\end{enumerate}
\end{proposition}
\begin{proof}
Proposition~\ref{prop:wlk1} says that for all graphs $(1)$ implies $(2)$. From Proposition~\ref{obs:ca1}, we have $(1)$ if, and only if, $(3)$. By Lemma~\ref{lem:ca1}, if $(2)$ then $(3)$.
\end{proof}
\newcommand{\countingTerm}[1]{\#_{#1}}
\newcommand{\tup}[1]{\vec{#1}}
\newcommand{\struct}[1]{\ensuremath{\mathbf #1}}
\newcommand{\univ}[1]{\ensuremath{\mathrm{dom}(\struct #1)}}
\newcommand{\logicoperator}[1]{\mathbf{#1}}
\section{Definability in Fixed Point Logic with Counting}\label{pfp}
In this section, we consider the definability of co-spectrality and
the property DS in fixed-point logics with counting. To be precise, we
show that co-spectrality is definable in \emph{inflationary fixed-point logic with
counting} ($\textsc{fpc}$) and the class of graphs that are DS is definable in
\emph{partial fixed-point logic with counting} ($\textsc{pfpc}$). It follows
that both of these are also definable in the infinitary logic with counting, with a bounded
number of variables (see~\cite[Prop.~8.4.18]{EF99}). Note that it is
known that $\textsc{fpc}$ can express any polynomial-time decidable property
of \emph{ordered} structures and similarly $\textsc{pfpc}$ can express all
polynomial-space decidable properties of ordered structures. It is
easy to show that co-spectrality is decidable in polynomial time and
DS is in $\textsc{PSpace}$. For the latter, note that DS can easily be
expressed by a $\Pi_2$ formula of second-order logic and therefore the
problem is in the second-level of the polynomial hierarchy. However,
in the absence of a linear order $\textsc{fpc}$ and $\textsc{pfpc}$ are strictly
weaker than the complexity classes $P$ and $\textsc{PSpace}$ respectively.
Indeed, there are problems in $P$ that are not even expressible in the
infinitary logic with counting. Nonetheless, it is in this context
without order that we establish the definability results below.
We begin with a brief definition of the logics in question, to fix the
notation we use. For a more detailed definition, we refer the reader
to~\cite{EF99,Lib04}.
$\textsc{fpc}$ is an extension of inflationary fixed-point logic with the
ability to express the cardinality of definable sets. The logic has
two sorts of first-order variables: \emph{element variables}, which
range over elements of the structure on which a formula is interpreted
in the usual way, and \emph{number variables}, which range over some
initial segment of the natural numbers. We usually write element
variables with lower-case Latin letters $x, y, \dots$ and use
lower-case Greek letters $\mu, \eta, \dots$ to denote number
variables. In addition, we have relational variables, each of which has an arity $m$ and an associated type from $\{\mathrm{elem},\mathrm{num}\}^m$. $\textsc{pfpc}$ is similarly obtained by allowing the
\emph{partial fixed point} operator in place of the inflationary
fixed-point operator.
For a fixed signature $\tau$, the atomic formulas of $\textsc{fpc}[\tau]$ and $\textsc{pfpc}[\tau]$ are all formulas of the form $\mu
= \eta$ or $\mu \le \eta$, where $\mu, \eta$ are number variables; $s
= t$ where $s,t$ are element variables or constant symbols from
$\tau$; and $R(t_1, \dots, t_m)$, where $R$ is a relation symbol
(i.e.\ either a symbol from $\tau$ or a relational variable) of arity
$m$ and each $t_i$ is a term of the appropriate type (either $\mathrm{elem}$ or $\mathrm{num}$, as determined by the type of $R$). The set $\textsc{fpc}[\tau]$ of
\emph{$\textsc{fpc}$ formulas} over $\tau$ is built up from the atomic formulas by
applying an inflationary fixed-point operator $[\logicoperator{ifp}_{R,\tup x}\phi](\tup t) $;
forming \emph{counting terms} $\countingTerm{x} \phi$, where $\phi$ is a formula
and $x$ an element variable; forming formulas of the kind $s = t$ and $s \le t$
where $s,t$ are number variables or counting terms; as well as the standard
first-order operations of negation, conjunction, disjunction, universal and
existential quantification. Collectively, we refer to element variables and
constant symbols as \emph{element terms}, and to number variables and counting
terms as \emph{number terms}. The formulas of $\textsc{pfpc}[\tau]$ are
defined analogously, but we replace the fixed-point operator rule by
the partial fixed-point: $[\logicoperator{pfp}_{R,\tup x}\phi](\tup t)$.
For the semantics, number terms take values in $\{0,\ldots,n\}$,
where $n$ is the size of the structure in which they are interpreted. The semantics of atomic formulas,
fixed-points and first-order operations are defined as usual (c.f.,
e.g., \cite{EF99} for details), with comparison of number terms
$\mu \le \eta$ interpreted by comparing the corresponding integers in
$\{0,\ldots,n\}$. Finally, consider a counting term of the form
$\countingTerm{x}\phi$, where $\phi$ is a formula and $x$ an element
variable. Here the intended semantics is that $\countingTerm{x}\phi$
denotes the number (i.e.\ the element of $\{0,\ldots,n\}$) of elements that
satisfy the formula $\phi$.
Note that, since an inflationary
fixed-point is easily expressed as a partial fixed-point, every
formula of $\textsc{fpc}$ can also be expressed as a formula of $\textsc{pfpc}$. In
the construction of formulas of these logics below, we freely use
arithmetic expressions on number variables as the relations defined by
such expressions can easily be defined by formulas of $\textsc{fpc}$.
In Section~\ref{wlk} we constructed sentences $\phi^l_k$ of $C^3$
which are satisfied in a graph $G$ if, and only if, the number of
closed walks in $G$ of length $l$ is exactly $k$. Our first aim is to
construct a single formula of $\textsc{fpc}$ that expresses this for all $l$
and $k$. Ideally, we would have the numbers as parameters to the
formula but it should be noted that, while the length $l$ of walks we
consider is bounded by the number $n$ of vertices of $G$, the number
of closed walks of length $l$ is not bounded by any polynomial in $n$.
Indeed, it can be as large as $n^n$. Thus, we cannot represent the
value of $k$ by a single number variable, or even a fixed-length tuple
of number variables. Instead, we represent $k$ as a binary relation
$K$ on the number domain. The order on the number domain induces a
lexicographical order on pairs of numbers, which is a way of encoding
numbers in the range $0,\ldots, n^2$. Let us write $[i,j]$ to denote
the number coded by the pair $(i,j)$. Then, a binary relation $K$ can be
used to represent a number $k$ up to $2^{n^2}$ by its binary encoding. To
be precise, $K$ contains all pairs $(i,j)$ such that bit position
$[i,j]$ in the binary encoding of $k$ is 1. It is easy to define
formulas of $\textsc{fpc}$ to express arithmetic operations on numbers
represented in this way.
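Concretely, with number variables ranging over $\{0,\ldots,n\}$ and the pair $(i,j)$ coding the bit position $i\cdot(n+1)+j$ (one possible choice of the lexicographic coding), the encoding and its inverse look as follows (a sketch, ours).
\begin{verbatim}
def encode(k, n):
    """The binary relation of pairs (i,j) marking the 1-bits of k."""
    return {(i, j) for i in range(n + 1) for j in range(n + 1)
            if (k >> (i * (n + 1) + j)) & 1}

def decode(K, n):
    return sum(1 << (i * (n + 1) + j) for (i, j) in K)

n = 4
for k in (0, 1, 5, 2 ** 20 + 3):            # anything below 2^((n+1)^2)
    assert decode(encode(k, n), n) == k
\end{verbatim}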
Thus, we aim to construct a single formula
$\phi(\lambda,\kappa_1,\kappa_2)$ of $\textsc{fpc},$ with three free number
variables such that $G \models \phi[l,i,j]$ if, and only if, bit position
$[i,j]$ in the binary expansion of the number of closed walks of length $l$
in $G$ is 1. To do this, we first
define a formula $\psi(\lambda,\kappa_1,\kappa_2,x,y)$ with free
number variables $\lambda$, $\kappa_1$ and $\kappa_2$ and free element
variables $x$ and $y$ that, when interpreted in $G$ defines the set of
tuples $(l,i,j,v,u)$ such that if there are exactly $k$ walks of
length $l$ starting at $v$ and ending at $u$, then position $[i,j]$ in
the binary expansion of $k$ is 1. This can be defined by taking the
inductive definition of $\psi^l_k$ we gave in Section~\ref{wlk} and
making the induction part of the formula.
We set out the definition below.
$$
\begin{array}{rcl@{}l}
\psi(\lambda,\kappa_1,\kappa_2,x,y) & := &
\logicoperator{ifp}_{W,\lambda,\kappa_1,\kappa_2,x,y}[& \lambda = 1 \land \kappa_1 = 0 \land \kappa_2 =1 \land E(x,y) \lor \\
& & & \lambda = \lambda'+1 \land \mathrm{sum}(\lambda',\kappa_1,\kappa_2,x,y) ]
\end{array}
$$ where $W$ is a relation variable of type
$(\mathrm{num},\mathrm{num},\mathrm{num},\mathrm{elem},\mathrm{elem})$ and the formula $\mathrm{sum}$ expresses that
there is a 1 in the bit position encoded by $(\kappa_1,\kappa_2)$ in
the binary expansion of $k = \sum_{z : E(x,z)} k_{\lambda',z,y}$,
where $k_{\lambda',z,y}$ denotes the number coded by the binary
relation $\{(i,j) : W(\lambda',i,j,z,y)\}$. We will not write out the
formula $\mathrm{sum}$ in full. Rather we note that it is easy to define
inductively the sum of a set of numbers given in binary notation, by
defining a sum and carry bit. In our case, the set of numbers is
given by a ternary relation of type $(\mathrm{elem},\mathrm{num},\mathrm{num})$ where fixing
the first component to a particular value $z$ yields a binary relation
coding a number. A similar application of induction to sum a set of
numbers then allows us to define the formula
$\phi(\lambda,\kappa_1,\kappa_2)$ which expresses that the bit
position indexed by $(\kappa_1,\kappa_2)$ is 1 in the binary expansion
of $k = \sum_{x \in V} k_x$ where $k_x$ denotes the number coded by
$\{(i,j) : \psi[\lambda,i,j,x,x]\}$.
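The computation that these fixed-point formulas carry out is the obvious dynamic programme over walk lengths; the following sketch (ours) makes it explicit, with Python's unbounded integers playing the role of the binary relations that hold numbers as large as $n^n$.
\begin{verbatim}
def closed_walk_counts(adj, max_len):
    """adj: dict vertex -> set of neighbours; totals of closed l-walks."""
    V = list(adj)
    walks = {v: {u: int(u in adj[v]) for u in V} for v in V}     # length 1
    totals = [sum(walks[v][v] for v in V)]
    for _ in range(max_len - 1):
        # walks[l+1][v][u] = sum over neighbours z of v of walks[l][z][u]
        walks = {v: {u: sum(walks[z][u] for z in adj[v]) for u in V}
                 for v in V}
        totals.append(sum(walks[v][v] for v in V))
    return totals

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(closed_walk_counts(triangle, 4))      # [0, 6, 6, 18] = tr(A), ..., tr(A^4)
\end{verbatim}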
To define co-spectrality in $\textsc{fpc}$ means that we
can write a formula $\mathrm{cospec}$ in a vocabulary with two binary
relations $E$ and $E'$ such that a structure $(V,E,E')$ satisfies this
formula if, and only if, the graphs $(V,E)$ and $(V,E')$ are
co-spectral. Such a formula is now easily derived from $\phi$. Let
$\phi'$ be the formula obtained from $\phi$ by replacing all
occurrences of $E$ by $E'$, then we can define:
$$\mathrm{cospec} := \forall \lambda,\kappa_1,\kappa_2 \; \phi \Leftrightarrow \phi'. $$
Now, in order to give a definition in $\textsc{pfpc}$ of the class of graphs that are DS, we need two variations of the formula $\mathrm{cospec}$. First, let $R$ be a relation symbol of type $(\mathrm{num},\mathrm{num})$. We write $\phi(R)$ for the formula obtained from $\phi$ by replacing the symbol $E$ with the relation variable $R$, and suitably replacing element variables with number variables. So, $\phi(R,\lambda,\kappa_1,\kappa_2)$ defines, in the graph defined by the relation $R$ on the number domain, the number of closed walks of length $\lambda$. We write $\mathrm{cospec}_R$ for the formula
$$\forall \lambda,\kappa_1,\kappa_2 \; \phi(R) \Leftrightarrow \phi, $$
which is a formula with a free relational variable $R$ which, when interpreted in a graph $G$ asserts that the graph defined by $R$ is co-spectral with $G$. Similarly, we define the formula with two free second-order variables $R$ and $R'$
$$\mathrm{cospec}_{R,R'} := \forall \lambda,\kappa_1,\kappa_2 \;\phi(R) \Leftrightarrow \phi(R'). $$
Clearly, this is true of a pair of relations iff the graphs they define are co-spectral.
Furthermore, it is not difficult to define a formula $\mathrm{isom}(R,R')$ of
$\textsc{pfpc}$ with two free relation symbols of type $(\mathrm{num},\mathrm{num})$ that
asserts that the two graphs defined by $R$ and $R'$ are isomorphic.
Indeed, the number domain is ordered and any property in $\textsc{PSpace}$ over an ordered domain is definable in $\textsc{pfpc}$, so such a formula must exist. Given these, the property of a graph being DS is given by the following formula with second-order quantifiers:
$$\forall R (\mathrm{cospec}_{R} \Rightarrow \forall R' (\mathrm{cospec}_{R,R'} \Rightarrow \mathrm{isom}(R,R') )). $$
To convert this into a formula of $\textsc{pfpc}$, we note that second-order quantification over the number domain can be expressed in $\textsc{pfpc}$. That is, if we have a formula $\theta(R)$ of $\textsc{pfpc}$ in which $R$ is a free second-order variable of type $(\mathrm{num},\mathrm{num})$, then we can define a $\textsc{pfpc}$ formula that is equivalent to $\forall R \, \theta$. We do this by means of an induction that loops through all binary relations on the number domain in lexicographical order and stops if for one of them $\theta$ does not hold.
First, define the formula $\mathrm{lex}(\mu,\nu,\mu',\nu')$ to be the following formula which defines the lexicographical ordering of pairs of numbers:
$$ \mathrm{lex}(\mu,\nu,\mu',\nu') := (\mu < \mu') \lor (\mu = \mu' \land \nu < \nu').$$
We use this to define a formula $\mathrm{next}(R,\mu,\nu)$ which, given a binary relation $R$ of type $(\mathrm{num},\mathrm{num})$, defines the set of pairs $(\mu,\nu)$ occurring in the relation that is lexicographically immediately after $R$.
$$
\begin{array}{rcl}
\mathrm{next}(R,\mu,\nu) & := & R(\mu,\nu) \land \exists \mu' \nu' (\mathrm{lex}(\mu',\nu',\mu,\nu) \land \neg R(\mu',\nu')) \lor \\
& & \neg R(\mu,\nu) \land \forall \mu' \nu' (\mathrm{lex}(\mu',\nu',\mu,\nu) \Rightarrow R(\mu',\nu')) .
\end{array}
$$
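Reading lexicographically smaller pairs as less significant bits, the formula $\mathrm{next}$ is just binary increment; the small sketch below (ours) checks this.
\begin{verbatim}
def lex_next(R, pairs):
    """pairs: all pairs in increasing lexicographic order; R: a subset."""
    out = set()
    for idx, p in enumerate(pairs):
        smaller_missing = any(q not in R for q in pairs[:idx])
        if (p in R and smaller_missing) or (p not in R and not smaller_missing):
            out.add(p)
    return out

n = 2
pairs = [(i, j) for i in range(n + 1) for j in range(n + 1)]
value = lambda R: sum(1 << k for k, p in enumerate(pairs) if p in R)
R = set()
for expected in range(1, 20):
    R = lex_next(R, pairs)
    assert value(R) == expected         # successive applications count upwards
\end{verbatim}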
We now use this to simulate, in $\textsc{pfpc}$, second-order quantification over the number domain. Let $\ensuremath{\bar{R}}$ be a new relation variable of type $(\mathrm{num},\mathrm{num},\mathrm{num})$ and we define the following formula
$$
\begin{array}{r@{}l}
\forall \alpha \forall \beta \logicoperator{pfp}_{\ensuremath{\bar{R}},\mu,\nu,\kappa} [ & (\forall \mu \nu \ensuremath{\bar{R}}(\mu,\nu,0) ) \land \theta(\ensuremath{\bar{R}}) \land \kappa=0 \lor \\
& \neg\theta(\ensuremath{\bar{R}}) \land \kappa \neq 0 \lor \\
& \theta(\ensuremath{\bar{R}}) \land \mathrm{next}(\ensuremath{\bar{R}},\mu,\nu) \land \kappa = 0 ] (\alpha,\beta,0) .
\end{array}
$$
It can be checked that this formula is equivalent to $\forall R \, \theta$.
\section{Conclusion}
Co-spectrality is an equivalence relation on graphs with many
interesting facets. While not every graph is determined up to
isomorphism by its spectrum, it is a long-standing conjecture (see~\cite{Van03}), still open, that \emph{almost all} graphs are DS.
That is to say that the proportion of $n$-vertex graphs that are DS
tends to $1$ as $n$ grows. We have established a number of results
relating graph spectra to definability in logic and it is instructive
to put them in the perspective of this open question. It is an easy
consequence of the results in~\cite{Kolaitis92} that the proportion of
graphs that are determined up to isomorphism by their $L^k$ theory
tends to $0$. On the other hand, it is known that almost all graphs
are determined by their $C^2$ theory (see~\cite{HKL97}) and \emph{a fortiori} by their
$C^3$ theory. We have established that co-spectrality is
incomparable with $L^k$-equivalence for any $k$; is incomparable with
$C^2$ equivalence; and is subsumed by $C^3$ equivalence. Thus, our
results are compatible with either answer to the open question of
whether almost all graphs are DS. It would be interesting to explore
further whether logical definability can cast light on this question.
\end{document}
\begin{document}
\title{A Note on $3$-quasi-Sasakian Geometry}
\classification{02.40.Ky.}
\keywords {Almost contact metric $3$-structures,
$3$-Sasakian manifolds, $3$-cosymplectic manifolds.}
\author{Beniamino Cappelletti Montano}{
address={Dipartimento di Matematica,
Universit\`{a} degli Studi di Bari, Via E. Orabona 4, 70125 Bari, Italy \\
[email protected], [email protected]}}
\author{Antonio De Nicola}{
address={CMUC, Department of Mathematics, University of Coimbra, 3001-454 Coimbra, Portugal\\
[email protected]}}
\author{Giulia Dileo}{
address={Dipartimento di Matematica,
Universit\`{a} degli Studi di Bari, Via E. Orabona 4, 70125 Bari, Italy \\
[email protected], [email protected]}}
\begin{abstract}
$3$-quasi-Sasakian manifolds were recently studied by the authors
as a suitable setting unifying $3$-Sasakian and $3$-cosymplectic geometries.
In this paper some geometric properties of this class of almost
$3$-contact metric manifolds are briefly reviewed, with an emphasis
on those more related to physical applications.
\end{abstract}
\maketitle
\section{Introduction}
The class of $3$-quasi-Sasakian manifolds is the analogue in the setting of $3$-structures
of the class of quasi-Sasakian manifolds, introduced by Blair \cite{blair0}
and later studied among others by Tanno \cite{tanno}, Kanemaki \cite{kanemaki1},
Olszak \cite{olszak2}. More recent are the examples of applications of quasi-Sasakian manifolds
to string theory found by Friedrich and his collaborators \cite{agricola1,friedrich}.
Just like quasi-Sasakian manifolds include Sasakian and cosymplectic
manifolds, so $3$-quasi-Sasakian manifolds unify $3$-Sasakian and $3$-cosymplectic geometry.
A $3$-quasi-Sasakian manifold can arise, for example, as the product
of a $3$-Sasakian manifold and a hyper-K\"{a}hler manifold (see Sect.~\ref{ranksection} or \cite{mag}).
The setting of $3$-structures has been recently the object of a wider interest from both
mathematicians and physicists due to the important role acquired by the $3$-Sasakian and the related
quaternionic structures in supergravity and superstring theory, where they appear
in the so called hypermultiplet solutions (see e.~g. \cite{acharya,agricola1,gibbons,yee}). This note
contains a concise review of the main properties of $3$-quasi-Sasakian manifolds, recently studied by
the authors in \cite{mag}, together with some relevant properties of the two important subclasses
of $3$-Sasakian and $3$-cosymplectic manifolds which were compared in \cite{cappellettidenicola}.
\section{$3$-quasi-Sasakian geometry}
An \emph{almost contact metric manifold} is a $(2n+1)$-dimensional
manifold $M$ endowed with a field $\phi$ of endomorphisms of the
tangent spaces, a vector field $\xi$, called \emph{Reeb vector field},
a $1$-form $\eta$ satisfying $\phi^2=-I+\eta\otimes\xi$, $\eta\left(\xi\right)=1$
(where $I \colon TM\rightarrow TM$ is the identity mapping)
and a \emph{compatible} Riemannian metric $g$ such that $g\left(\phi X,\phi Y\right) =
g\left(X,Y\right)-\eta\left(X\right)\eta\left(Y\right)$ for all
$X,Y\in\Gamma\left(TM\right)$.
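We recall, for completeness, three standard consequences of these axioms, which can be verified by a short computation: applying $\phi^2=-I+\eta\otimes\xi$ to $\xi$ and using $\eta\left(\xi\right)=1$ one obtains
\begin{equation*}
\phi\,\xi=0, \qquad \eta\circ\phi=0,
\end{equation*}
and then the compatibility condition with $Y=\xi$ gives $0=g\left(\phi X,\phi\xi\right)=g\left(X,\xi\right)-\eta\left(X\right)\eta\left(\xi\right)$, so that
\begin{equation*}
g\left(X,\xi\right)=\eta\left(X\right),
\end{equation*}
i.e. $\eta$ is the $1$-form metrically dual to $\xi$.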
The manifold is said to be \emph{normal} if the tensor field
$N^{(1)}=[\phi,\phi]+2d\eta\otimes\xi$ vanishes identically.
The $2$-form $\Phi$ on $M$ defined
by $\Phi\left(X,Y\right)=g\left(X,\phi Y\right)$ is called the
\emph{fundamental $2$-form} of the almost contact metric manifold
$\left(M,\phi,\xi,\eta,g\right)$.
Normal almost contact metric manifolds such that both $\eta$ and $\Phi$ are
closed are called \emph{cosymplectic manifolds} and those such that $d\eta=\Phi$ are called
\emph{Sasakian manifolds}.
The notion of quasi-Sasakian structure unifies those of Sasakian and cosymplectic
structures. A \emph{quasi-Sasakian manifold} is defined as a
normal almost contact metric manifold whose fundamental $2$-form
is closed. A quasi-Sasakian manifold $M$ is said to be of rank $2p$ (for some $p\leq n$) if
$\left(d\eta\right)^p\neq 0$ and $\eta\wedge\left(d\eta\right)^p=0$ on $M$,
and to be of rank $2p+1$ if $\eta\wedge\left(d\eta\right)^p\neq 0$ and
$\left(d\eta\right)^{p+1}=0$ on $M$ (cf. \cite{blair0,tanno}).
Blair proved that there are no quasi-Sasakian manifolds of even rank.
Just like Blair and Tanno did, we will only consider quasi-Sasakian manifolds of constant (odd) rank.
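For instance, a Sasakian manifold, for which $d\eta=\Phi$ and hence $\eta\wedge\left(d\eta\right)^n\neq 0$, has maximal rank $2n+1$, while a cosymplectic manifold, for which $d\eta=0$, has rank $1$.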
If the rank of $M$ is $2p+1$, then the module $\Gamma(TM)$ of vector
fields over $M$ splits into two submodules as follows:
$\Gamma(TM)={\cal E}^{2p+1}\oplus{\cal E}^{2q}$, $p+q=n$, where
\(
{\cal E}^{2q}=\{X\in\Gamma(TM)\; | \; i_X d\eta=0 \mbox{ and } i_X \eta=0\}
\)
and ${\cal E}^{2p+1}={\cal
E}^{2p}\oplus\left\langle\xi\right\rangle$, ${\cal E}^{2p}$ being
the orthogonal complement of ${\cal
E}^{2q}\oplus\left\langle\xi\right\rangle$ in
$\Gamma\left(TM\right)$. These modules satisfy $\phi {\cal
E}^{2p}={\cal E}^{2p}$ and $\phi {\cal E}^{2q}={\cal E}^{2q}$ (cf.
\cite{tanno}).
We now come to the main topic of our paper, i.e. $3$-quasi-Sasakian geometry,
which is framed into the more general setting of almost $3$-contact geometry.
An \emph{almost $3$-contact metric manifold} is a
$\left(4n+3\right)$-dimensional smooth manifold $M$ endowed with
three almost contact structures $\left(\phi_1,\xi_1,\eta_1\right)$,
$\left(\phi_2,\xi_2,\eta_2\right)$,
$\left(\phi_3,\xi_3,\eta_3\right)$ satisfying the following
relations, for any even permutation
$\left(\alpha,\beta,\gamma\right)$ of $\left\{1,2,3\right\}$,
\begin{gather}
\phi_\gamma=\phi_{\alpha}\phi_{\beta}-\eta_{\beta}\otimes\xi_{\alpha}=-\phi_{\beta}\phi_{\alpha}+\eta_{\alpha}\otimes\xi_{\beta},\\
\nonumber
\xi_{\gamma}=\phi_{\alpha}\xi_{\beta}=-\phi_{\beta}\xi_{\alpha}, \ \
\eta_{\gamma}=\eta_{\alpha}\circ\phi_{\beta}=-\eta_{\beta}\circ\phi_{\alpha},
\label{3-sasaki}
\end{gather}
and a Riemannian metric $g$ compatible with each of them. It is well
known that in any almost $3$-contact metric manifold the Reeb vector
fields $\xi_1,\xi_2,\xi_3$ are orthonormal with respect to the
compatible metric $g$ and that the structural group of the tangent
bundle is reducible to $Sp\left(n\right)\times I_3$.
Due to a general result (cf. \cite[Prop.~3.6.2]{joyce}), it follows that
any $3$-quasi-Sasakian manifold is a \emph{spin manifold}, i.~e. there
exists a double cover of the orthonormal frame bundle $SO(TM)$, which is
non-trivial on the fibers of the latter, by a principal bundle with structure group
a spin group. Then, to each representation of the spin group corresponds an associated
vector bundle whose sections are called \emph{spinor fields} (see \cite{lawson} for details).
The existence of spinor fields is required in quantum theories which encompass fermions.
Moreover, by putting ${\cal{H}}=\bigcap_{\alpha=1}^{3}\ker\left(\eta_\alpha\right)$ one
obtains a $4n$-dimensional \emph{horizontal} distribution on $M$ and the tangent
bundle splits as the orthogonal sum $TM={\cal{H}}\oplus{\cal{V}}$, where
${\cal V}=\left\langle\xi_1,\xi_2,\xi_3\right\rangle$ is the \emph{vertical} distribution.
\begin{definition}
A \emph{$3$-quasi-Sasakian manifold} is an almost
$3$-contact metric manifold $(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$
such that each almost contact structure is quasi-Sasakian.
\end{definition}
The class of $3$-quasi-Sasakian manifolds includes as special cases the well-known
$3$-Sasakian and $3$-cosymplectic manifolds.
The following theorem combines the results obtained in Theorems 3.4 and 4.2 of \cite{mag}.
\begin{theorem}\label{classification}
Let $(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ be a
$3$-quasi-Sasakian manifold. Then the $3$-dimensional distribution
$\cal V$ generated by $\xi_1$, $\xi_2$, $\xi_3$ is integrable.
Moreover, $\cal V$ defines a totally geodesic and Riemannian
foliation of $M$ and for any even permutation
$(\alpha,\beta,\gamma)$ of $\left\{1,2,3\right\}$ and for some $c\in \mathbb R$
\begin{equation*}
\left[\xi_\alpha,\xi_\beta\right]=c\xi_\gamma.
\end{equation*}
\end{theorem}
Using Theorem \ref{classification} we may divide
$3$-quasi-Sasakian manifolds in two classes according
to the behaviour of the leaves of the
foliation $\cal V$: those $3$-quasi-Sasakian manifolds for which
each leaf of $\cal V$ is locally $SO\left(3\right)$ (or
$SU\left(2\right)$) (which corresponds to take in Theorem
\ref{classification} the constant $c\neq 0$), and those for which
each leaf of $\cal V$ is locally an abelian group (this corresponds
to the case $c=0$).
The preceding theorem also allows us to define a canonical metric connection
on any $3$-quasi-Sasakian manifold. Indeed, let $\nabla^B$ be the
Bott connection associated to $\mathcal V$, that is the partial
connection on the normal bundle $TM/{\mathcal V}\cong\mathcal H$ of
$\mathcal V$ defined by
\(
\nabla^B_{V}Z:=[V,Z]_{\mathcal H}
\)
for all $V\in\Gamma(\mathcal V)$ and $Z\in\Gamma(\mathcal H)$.
Following \cite{tondeur} we may construct an adapted connection on
$\mathcal H$ putting
\begin{equation*}
\tilde\nabla_{X}Y:=\left\{
\begin{array}{ll}
\nabla^B_{X}Y, & \hbox{if $X\in\Gamma(\mathcal V)$;} \\
(\nabla_{X}Y)_{\mathcal H}, & \hbox{if $X\in\Gamma(\mathcal H)$.}
\end{array}
\right.
\end{equation*}
This connection can be also
extended to a connection on all $TM$ by requiring that
$\tilde\nabla\xi_\alpha=0$ for each $\alpha\in\left\{1,2,3\right\}$.
Some properties of this global connection have been considered in
\cite{cappellettidenicola} for any almost $3$-contact metric
manifold. Now combining Theorem \ref{classification} with \cite[Theorem
3.6]{cappellettidenicola} we have:
\begin{theorem}\label{connessionecanonica}
Let $(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ be a
$3$-quasi-Sasakian manifold. Then there exists a unique metric connection
$\tilde\nabla$ on $M$ satisfying the following properties:
\begin{enumerate}
\item[(i)]
$\tilde\nabla\eta_\alpha=0$, $\tilde\nabla\xi_\alpha=0$, for each
$\alpha\in\left\{1,2,3\right\}$,
\item[(ii)]
$\tilde T\left(X,Y\right)=2\sum_{\alpha=1}^{3}d\eta_{\alpha}(X,Y)\xi_\alpha$,
for all $X,Y\in\Gamma\left(TM\right)$, where $\tilde T$ denotes the torsion tensor of $\tilde\nabla$.
\end{enumerate}
\end{theorem}
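As a simple illustration of Theorem \ref{connessionecanonica} (included here only as a remark): in the $3$-cosymplectic case all the $1$-forms $\eta_\alpha$ are closed, so condition (ii) forces
\begin{equation*}
\tilde T=0,
\end{equation*}
and hence, $\tilde\nabla$ being metric and torsion-free, $\tilde\nabla$ coincides with the Levi-Civita connection $\nabla$. This is consistent with the well-known fact that the Reeb vector fields of a $3$-cosymplectic manifold are $\nabla$-parallel.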
\section{The rank of a $3$-quasi-Sasakian manifold}\label{ranksection}
For a $3$-quasi-Sasakian manifold one can consider the ranks of the three structures $(\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$.
The following theorem assures that these three ranks coincide.
\begin{theorem}[\cite{mag}]\label{rango}
Let $(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ be a
$3$-quasi-Sasakian manifold of dimension $4n+3$. Then the $1$-forms $\eta_1$, $\eta_2$
and $\eta_3$ have the same rank $4l+3$
or $4l+1$, for some $l\leq n$, according to
$\left[\xi_\alpha,\xi_\beta\right]=c\xi_\gamma$ with $c\neq 0$, or
$\left[\xi_\alpha,\xi_\beta\right]=0$, respectively.
\end{theorem}
According to Theorem \ref{rango}, we say that a $3$-quasi-Sasakian
manifold $(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ has \emph{rank}
$4l+3$ or $4l+1$ if each of its quasi-Sasakian structures has that rank. We may thus classify $3$-quasi-Sasakian manifolds of dimension
$4n+3$, according to their rank. For any $l\in\{0,\ldots,n\}$ we
have one class of manifolds such that
$\left[\xi_\alpha,\xi_\beta\right]=c\xi_\gamma$ with $c\neq 0$,
and one class of manifolds with
$\left[\xi_\alpha,\xi_\beta\right]=0$. The total number of classes
amounts then to $2n+2$.
In the following we will use the notation ${\cal E}^{4m}:=\{X\in\Gamma({\cal H})\; | \; i_X d\eta_\alpha=0 \mbox{ for } \alpha=1,2,3\}$, while
${\cal E}^{4l}$ will denote the orthogonal complement of ${\cal E}^{4m}$ in $\Gamma({\cal H})$ (so that $m=n-l$),
${\cal E}^{4l+3}:={\cal E}^{4l} \oplus \Gamma({\cal V})$, and
${\cal E}^{4m+3}:={\cal E}^{4m} \oplus \Gamma({\cal V})$.
We now consider the class of $3$-quasi-Sasakian manifolds such that
$[\xi_\alpha,\xi_\beta]=c\xi_\gamma$ with $c\neq 0$ and let $4l+3$ be the rank.
In this case, according to \cite{blair0}, we define for each structure
$(\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ two $(1,1)$-tensor fields
$\psi_\alpha$ and $\theta_\alpha$ by putting
\begin{equation*}
\psi_\alpha X=\left\{
\begin{array}{ll}
\phi_\alpha X, & \hbox{if $X\in {\cal{E}}^{4l+3}$;}\\
0, & \hbox{if $X\in {\cal{E}}^{4m}$;}
\end{array}
\right.
\ \textrm{ } \
\theta_\alpha X=\left\{
\begin{array}{ll}
0, & \hbox{if $X\in {\cal{E}}^{4l+3}$;}\\
\phi_\alpha X, & \hbox{if $X\in {\cal{E}}^{4m}$.}\\
\end{array}
\right.
\end{equation*}
Note that, for each $\alpha\in\left\{1,2,3\right\}$ we have
$\phi_\alpha=\psi_\alpha+\theta_\alpha$.
Next, we define a new (pseudo-Riemannian, in general) metric
$\bar{g}$ on $M $ setting
\begin{equation*}
\bar{g}\left(X,Y\right)=\left\{
\begin{array}{ll}
-d\eta_\alpha\left(X,\phi_\alpha Y\right), & \hbox{for $X,Y\in{\cal E}^{4l}$;} \\
g\left(X,Y\right), & \hbox{elsewhere.}
\end{array}
\right.
\end{equation*}
This definition is well posed by virtue of normality and of \cite[Lemma 5.3]{mag}.
$(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,\bar{g})$
is in fact a hyper-normal almost $3$-contact metric manifold, in general
non-$3$-quasi-Sasakian.
We are now able to formulate the following decomposition theorem, proven in \cite{mag}.
\begin{theorem}
Let $(M^{4n+3},\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ be a
$3$-quasi-Sasakian manifold of rank $4l+3$ with $\left[\xi_\alpha,\xi_\beta\right]=2\xi_\gamma$. Assume $[\theta_\alpha, \theta_\alpha]=0$ for some $\alpha\in\{1,2,3\}$ and $\bar{g}$ positive definite on ${\cal E}^{4l}$.
Then $M^{4n+3}$ is locally the product of a $3$-Sasakian manifold $M^{4l+3}$ and a hyper-K\"{a}hlerian manifold $M^{4m}$ with $m=n-l$.
\end{theorem}
We now consider the class of $3$-quasi-Sasakian manifolds such that
$[\xi_\alpha,\xi_\beta]=0$ and let $4l+1$ be the rank.
In this case we define for each structure
$(\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ two $(1,1)$-tensor fields
$\psi_\alpha$ and $\theta_\alpha$ by putting
\begin{equation*}
\psi_\alpha X=\left\{
\begin{array}{ll}
\phi_\alpha X, & \hbox{if $X\in {\cal{E}}^{4l}$;}\\
0, & \hbox{if $X\in {\cal{E}}^{4m+3}$;}
\end{array}
\right.
\ \textrm{ } \
\theta_\alpha X=\left\{
\begin{array}{ll}
0, & \hbox{if $X\in {\cal{E}}^{4l}$;}\\
\phi_\alpha X, & \hbox{if $X\in {\cal{E}}^{4m+3}$.}\\
\end{array}
\right.
\end{equation*}
Note that for each $\alpha$ the maps $-\psi_\alpha^2$ and
$-\theta_\alpha^2+ \eta_\alpha\otimes\xi_\alpha$ define an almost product
structure which is integrable if and only if $[-\psi_\alpha^2,-\psi_\alpha^2]=0$
or, equivalently, $[\psi_\alpha,\psi_\alpha]=0$. Under this assumption the structure
turns out to be $3$-cosymplectic:
\begin{theorem}[\cite{mag}]
Let $(M,\phi_\alpha,\xi_\alpha,\eta_\alpha,g)$ be a
$3$-quasi-Sasakian manifold of rank $4l+1$ such that
$\left[\xi_\alpha,\xi_\beta\right]=0$ for any
$\alpha,\beta\in\{1,2,3\}$ and $[\psi_\alpha, \psi_\alpha]=0$ for
some $\alpha\in\{1,2,3\}$. Then $M$ is a $3$-cosymplectic
manifold.
\end{theorem}
As we have remarked before, $3$-Sasakian and $3$-cosymplectic manifolds
belong to the class of $3$-quasi-Sasakian manifolds, having respectively rank $4n+3=\dim(M)$ and rank $1$.
We now briefly collect some additional properties of these two important subclasses.
We have seen that the vertical distribution $\cal{V}$ is integrable already in any $3$-quasi-Sasakian manifold.
Ishihara (\cite{ishihara}) has shown that if the foliation defined by $\cal{V}$
is regular then the space of leaves is a quaternionic-K\"{a}hlerian manifold.
Boyer, Galicki and Mann have proved the following more general result.
\begin{theorem}[\cite{galicki1}]\label{proiezione}
Let $\left(M^{4n+3},\phi_\alpha,\xi_\alpha,\eta_\alpha,g\right)$
be a $3$-Sasakian manifold such that the Killing vector fields
$\xi_1$, $\xi_2$, $\xi_3$ are complete. Then
\begin{description}
\item[(i)]
$M^{4n+3}$ is
an Einstein manifold of positive scalar curvature equal to
$2\left(2n+1\right)\left(4n+3\right)$.
\item[(ii)] Each leaf of the foliation $\cal{V}$
is a $3$-dimensional homogeneous spherical space form.
\item[(iii)] The space of leaves $M^{4n+3}/\cal{V}$ is a quaternionic-K\"{a}hlerian
orbifold of dimension $4n$ with positive scalar
curvature equal to $16n\left(n+2\right)$.
\end{description}
\end{theorem}
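As a consistency check on the constant appearing in (i) (an elementary computation, not part of \cite{galicki1}): the stated value is exactly the scalar curvature of an Einstein metric in dimension $d=4n+3$ with $\operatorname{Ric}=(d-1)g$, i.e.\ with the same Einstein constant as the round unit sphere $S^{4n+3}$, since
\begin{equation*}
s=d(d-1)=(4n+3)(4n+2)=2\left(2n+1\right)\left(4n+3\right).
\end{equation*}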
We consider now the horizontal distribution: on the one hand, in the
$3$-Sasakian subclass ${\cal{H}}$ is never integrable. On the other
hand, in any $3$-cosymplectic manifold $\cal H$ is integrable since
each $\eta_\alpha$ is closed. Furthermore, the projectability with
respect to $\cal{V}$ is always granted, as the following theorem
shows.
\begin{theorem}[\cite{cappellettidenicola}]\label{hyperkahler}
Every regular $3$-cosymplectic manifold projects onto a hyper-K\"{a}hlerian manifold.
\end{theorem}
As a corollary, it follows that every $3$-cosymplectic manifold is Ricci-flat.
In \cite{cappellettidenicola} the horizontal flatness of such
structures has been studied. In particular it has been proven to
be equivalent to the existence of Darboux-like coordinates, that is
local coordinates $\left\{x_1,\ldots,x_{4n},z_1,z_2,z_3\right\}$ with respect to
which, for each $\alpha\in\left\{1,2,3\right\}$, the fundamental
$2$-forms $\Phi_\alpha=d\eta_\alpha$ have constant components and
$\xi_\alpha=a^1_\alpha \frac{\partial}{\partial z_1} +
a^2_\alpha\frac{\partial}{\partial z_2} + a^3_\alpha
\frac{\partial}{\partial z_3}$, $a_\alpha^\beta$ being functions
depending only on the coordinates $z_1,z_2,z_3$. Consequently, in view of Theorem
\ref{proiezione} and Theorem \ref{hyperkahler} we have the following result.
\begin{theorem}[\cite{cappellettidenicola}]\label{sasakiano}
A $3$-Sasakian manifold does not admit any Darboux-like coordinate system.
On the other hand, a $3$-cosymplectic manifold admits a Darboux-like coordinate system
around each of its points if and only if it is flat.
\end{theorem}
\section{Final Remarks}
A number of natural questions arose during the development of our work on $3$-quasi-Sasakian manifolds.
We have seen that $3$-Sasakian manifolds do not admit any Darboux-like coordinate system,
while on $3$-cosymplectic manifolds such coordinates exist if and only if the manifold is flat. It is therefore
natural to ask whether Darboux-like coordinates can exist at all on a $3$-quasi-Sasakian manifold of rank greater than one.
Another important topic would be to study the projectability of $3$-quasi-Sasakian manifolds, in order to understand the general relation
between this class and quaternionic structures: $3$-Sasakian manifolds project onto quaternionic-K\"{a}hler
orbifolds, while in the $3$-cosymplectic case the leaf space turns out to be globally hyper-K\"{a}hlerian.
Finally, since both $3$-Sasakian and $3$-cosymplectic manifolds are Einstein, it is natural to ask
whether all $3$-quasi-Sasakian manifolds are Einstein. However, since we have already found an example of an $\eta$-Einstein,
non-Einstein $3$-quasi-Sasakian manifold in \cite{mag}, the natural problem becomes to establish whether there is any
$3$-quasi-Sasakian manifold which is not $\eta$-Einstein. We will try to address some of these questions in the near future.
\section*{Acknowledgments}
The second author acknowledges financial support by a CMUC postdoctoral fellowship.
\end{document}
\begin{document}
\vfuzz0.5pc
\hfuzz0.5pc
\title{Nodal Curves on K3 Surfaces}
\author{Xi Chen}
\address{632 Central Academic Building\\
University of Alberta\\
Edmonton, Alberta T6G 2G1, CANADA}
\email{[email protected]}
\date{August 6, 2017}
\thanks{Research partially supported by Discovery Grant 262265 from the Natural Sciences and Engineering Research Council of Canada.}
\keywords{K3 Surface, Severi Variety, Moduli Space of Curves}
\subjclass{Primary 14J28; Secondary 14E05}
\begin{abstract}
In this paper, we study the Severi variety $V_{L,g}$ of genus $g$ curves in $|L|$ on a general polarized K3 surface $(X,L)$.
We show that the closure of every component of $V_{L,g}$ contains a
component of $V_{L,g-1}$. As a consequence, we see that the general members of every component of $V_{L,g}$ are nodal.
\end{abstract}
\maketitle
\section{Introduction}
It was proved that every complete linear system on
a very general polarized K3 surface
$(X, L)$ contains a nodal rational curve \cite{C1} and
furthermore every rational
curve in $|L|$ is nodal, i.e., has only nodes $xy=0$ as singularities \cite{C2}.
The purpose of this note is to prove an analogous
result on singular curves in $|L|$ of geometric genus $g>0$.
For a line bundle $A$ on a projective surface $X$, we use the notation
$V_{A,g}$ to denote the Severi varieties of integral curves of geometric
genus $g$ in the complete linear series $|A| = {\mathbb P} H^0(A)$.
For a K3 surface $X$, it is well known that every component of $V_{A,g}$ has the expected dimension $g$. Furthermore, using the theory of deformations of maps, one can show that $\nu: \widehat{C}\to X$ is an immersion for $\nu$ the normalization of
a general member $[C]\in V_{A,g}$ if $g > 0$ \cite[Chap. 3, Sec. B]{H-M}.
It was claimed that a general member of $V_{A,g}$ is nodal on every projective K3
surface $X$ and every $A\in \mathop{\mathrm{Pic}}\nolimits(X)$ as long as $g > 0$
in \cite[Lemma 3.1]{C1}. However, as kindly pointed out to the
author by Edoardo Sernesi \cite[Sec. 3.3]{D-S}, the proof there is wrong. So this note provides a partial fix for this problem, albeit only
for singular curves in the primitive class $|L|$ on a general
polarized K3 surface $(X,L)$. Our main theorem is
\begin{thm}\label{ECK3THM000}
For a general polarized K3 surface $(X, L)$, every (irreducible) component of $\overline{V}_{L,g}$ contains a component of $V_{L,g-1}$ for all $1\le g\le p_a(L)$,
where $\overline{V}_{L,g}$ is the closure of $V_{L,g}$ in $|L|$
and $p_a(L) = L^2/2 + 1$ is the arithmetic genus of $L$.
\end{thm}
Clearly, the above theorem, combined with the fact that every rational curve in $|L|$ is nodal \cite{C2}, implies the following corollary by induction:
\begin{cor}\label{ECK3CORMODULI}
For a general polarized K3 surface $(X, L)$,
the general members of every component of $V_{L,g}$ are nodal for all $0\le g \le p_a(L)$.
\end{cor}
It was proved in \cite[Theorem 1.3, 5.3 and Remark 5.6]{KLM} that the general members of every component of $V_{L,g}$ are not trigonal for $g \ge 5$. Combined with \cite[Theorem B.4]{D-S}, this shows that the corollary holds for $5\le g\le p_a(L)$; here we settle it for all genera $g$. As an application, the corollary shows that the genus $g$ Gromov-Witten invariant computed in \cite{B-L} is the same as the number of genus $g$ curves in $|L|$ passing through $g$ general points.
A comprehensive treatment for $V_{mL,g}$ is planned in a future paper.
As another potential application of Theorem \ref{ECK3THM000}, we want to mention the conjecture of the irreducibility of universal Severi variety ${\mathcal V}_{L,g}$ on K3 surfaces:
\begin{conj}\label{ECK3CONJUNIV}
Let ${\mathcal K}_p$ be the moduli space of polarized K3 surfaces $(X,L)$ of genus $p = p_a(L)$ and let
\begin{equation}\label{ECK3E007}
{\mathcal V}_{L,g} = \{(X,L,C): (X,L)\in {\mathcal K}_p, C\in V_{L,g}\}
\end{equation}
be the universal Severi variety of genus $g$ curves in $|L|$ over ${\mathcal K}_p$.
Then ${\mathcal V}_{L,g}$ is irreducible.
\end{conj}
If we approach the conjecture along the line of argument of J. Harris for the irreducibility of Severi variety of plane curves \cite{H}, we need to establish two facts:
\begin{itemize}
\item Every component of $\overline{{\mathcal V}}_{L,g}$ contains a component of ${\mathcal V}_{L,0}$.
\item ${\mathcal V}_{L,0}$ is irreducible and the monodromy action on the $p$ nodes of a rational curve $C\in V_{L,0}$ is the full symmetric group $\Sigma_p$ as $(X,L,C)$ moves in ${\mathcal V}_{L,0}$.
\end{itemize}
The second fact comes easily for plane curves, while the establishment of the first fact is the focus of Harris' proof (see also \cite[Chap. 6, Sec. E]{H-M}). The situation for ${\mathcal V}_{L,g}$ is somewhat reversed at the moment: the first fact follows from our main theorem, while the difficulty lies in the second fact:
\begin{conj}\label{ECK3CONJUNIVRAT}
Let ${\mathcal V}_{L,0}$ be the universal Severi variety of rational curves in $|L|$ over the moduli space ${\mathcal K}_p$ of polarized K3 surfaces $(X,L)$ of genus $p$ and let
\begin{equation}\label{ECK3E008}
\begin{aligned}
{\mathcal W}_{L,0} = \big\{(X,L,C,s_1,s_2,...,s_p):
&\ (X,L,C)\in {\mathcal V}_{L,0},\\
&\ C_\text{sing} = \{s_1,s_2,...,s_p\}
\big\}.
\end{aligned}
\end{equation}
Then ${\mathcal W}_{L,0}$ is irreducible.
\end{conj}
Our above discussion shows that Conjecture \ref{ECK3CONJUNIVRAT}
implies Conjecture \ref{ECK3CONJUNIV}.
\subsection*{Conventions}
We work exclusively over $\mathbb{C}$.
A K3 surface in this paper is always projective. A polarized K3 surface is a pair $(X, L)$, where $X$ is a K3 surface and $L$ is an indivisible ample line bundle on $X$.
\subsection*{Acknowledgment}
I am very grateful to Edoardo Sernesi for pointing out the above-mentioned gap in my paper \cite{C1}. I also want to thank Thomas Dedieu for bringing the paper \cite{KLM} to my attention.
\section{Proof of Theorem \ref{ECK3THM000}}
We start with the following observation:
\begin{prop}\label{ECK3PROP000}
Let $W$ be a component of $V_{L,g}$ for a polarized K3 surface $(X,L)$ with $\mathop{\mathrm{Pic}}\nolimits(X) = {\mathbb Z}$. The following are equivalent:
\begin{enumerate}
\item\label{ECK3PROP000ITEM1}
The closure $\overline{W}$ of $W$ in $|L|$ contains a component of $V_{L,g-1}$.
\item\label{ECK3PROP000ITEM2}
$\dim (\overline{W}\backslash W) = g-1$.
\item\label{ECK3PROP000ITEM3}
For a set $\sigma$ of $g-1$ general points on $X$, $W\cap \Lambda_\sigma$ is not projective (i.e. complete), where $\Lambda_\sigma\subset |L|$ is the locus of curves $C\in |L|$ passing through $\sigma$.
\end{enumerate}
\end{prop}
\begin{proof}
(\ref{ECK3PROP000ITEM1}) $\Rightarrow$ (\ref{ECK3PROP000ITEM2}) is obvious. Since every curve in $|L|$ is integral, we have
\begin{equation}\label{ECK3E000440}
\overline{W}\backslash W \subset \bigcup_{i<g} V_{L,i}.
\end{equation}
And since $\dim V_{L,i}\le i$, we have (\ref{ECK3PROP000ITEM2}) $\Rightarrow$ (\ref{ECK3PROP000ITEM1}).
Let $\partial W = \overline{W}\backslash W$. Obviously,
$\dim (\partial W \cap \Lambda_\sigma) = \dim \partial W - (g-1)$. Therefore,
(\ref{ECK3PROP000ITEM2}) $\Rightarrow$ (\ref{ECK3PROP000ITEM3}). On the other hand, if
$W\cap \Lambda_\sigma$ is not complete, then there exists $C_\sigma\in \partial W$ passing through $\sigma$. Then $\dim \partial W \ge g-1$.
So (\ref{ECK3PROP000ITEM3}) $\Rightarrow$ (\ref{ECK3PROP000ITEM2}).
\end{proof}
So it suffices to show that $W\cap \Lambda_\sigma$ is not complete for every component $W$ of $V_{L,g}$. We prove this using a degeneration argument similar to the one in \cite{C2}.
A general K3 surface can be specialized to a {\em Bryan-Leung} (BL) K3 surface $X_0$,
which is a K3 surface with Picard lattice
\begin{equation}\label{ECK3E000}
\begin{bmatrix}
-2 & 1\\
1 & 0
\end{bmatrix}.
\end{equation}
It can be polarized by the line bundle $C + mF$, where
$C$ and $F$ are the generators of $\mathop{\mathrm{Pic}}\nolimits(X_0)$ satisfying $C^2 = -2$,
$CF=1$ and $F^2 = 0$.
A general polarized K3 surface of genus $m$ can be degenerated to $(X_0, C+mF)$.
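As a quick check that $(X_0, C+mF)$ indeed has genus $m$ (elementary, from the intersection numbers above and $p_a(L)=L^2/2+1$):
\begin{equation*}
(C+mF)^2 = C^2 + 2m\, C\cdot F + m^2 F^2 = -2+2m, \qquad p_a(C+mF)=\frac{(C+mF)^2}{2}+1=m.
\end{equation*}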
Such $X_0$ has an elliptic fibration $X_0\to {\mathbb P}^1$ with fibers in $|F|$.
For a general BL K3 surface $X_0$, there are exactly $24$ nodal fibers in $|F|$.
A key fact here is that every member
of $|C + mF|$ is ``completely'' reducible in the sense that it is a union of $C$ and $m$ fibers in $|F|$ (counted with multiplicities).
Let $X$ be a family of K3 surfaces of genus $m$ over a smooth quasi-projective
curve $T$ such that $X_0$ is a general BL K3 surface for a point $0\in T$, $X_t$ are K3 surfaces of $\mathop{\mathrm{Pic}}\nolimits(X_t) = {\mathbb Z}$ for $t\ne 0$ and $L$ is a line bundle on $X$ with $L_0 = C+mF$. After a base change, there exists $W\subset {\mathcal V}_{L,g}$ flat over $T$ such that $W_t$ is a component of $V_{L_t,g}$ for all $t\ne 0$. Let $\sigma$ be a set of $g-1$ general sections of $X/T$. It suffices to prove that $W_t\cap \Lambda_\sigma$ is not projective for $t$ general.
By stable reduction, there exists a family $f: Y\to X$ of genus $g$ stable maps over a smooth surface $S$ with the commutative diagram
\begin{equation}\label{ECK3E000150}
\begin{tikzcd}
Y \ar{r}{f} \ar{d} & X\ar{d}\\
S \ar{r}{\pi} & T
\end{tikzcd}
\end{equation}
where $S$ is flat and projective over $T$, $f_* Y_s \in \overline{W}_t \cap \Lambda_\sigma$ on $X_t$ for all $s\in S_t$ and $t\in T$ and $S$ dominates $\overline{W} \cap \Lambda_\sigma$ via the map sending $s\to [f_* Y_s]$. In other words, $f: Y\to X$ is the stable reduction of the universal family over $\overline{W}$
such that $f: Y_s\to X$ is the normalization of a general member $G\in W_t$ passing through the $g-1$ points $\sigma(t)$ for $s\in S_t$ general and $t\ne 0$.
Let us consider the moduli map $\rho: S\to \overline{{\mathcal M}}_g\times T$ sending $s\to ([Y_s], \pi(s))$, where
$\overline{{\mathcal M}}_g$ is the moduli space of stable curves of genus $g$ with ${\mathcal M}_g$ its open subset parameterizing smooth curves. To show that $W_t\cap \Lambda_\sigma$ is not complete, it suffices to show that
\begin{equation}\label{ECK3E000901}
\rho^{-1}(\Delta\times T) \cap S_t \ne \emptyset
\end{equation}
for $t\ne 0$, where $\Delta = \overline{{\mathcal M}}_g \backslash {\mathcal M}_g$ is the boundary divisor of $\overline{{\mathcal M}}_g$.
Let $F_1, F_2, ..., F_{g-1}\subset X_0$ be $g-1$ fibers in $|F|$ passing through the $g-1$ points $\sigma(0)$, respectively. Since $\sigma(0)$ are in general position, $F_1, F_2, ..., F_{g-1}$ are $g-1$ general fibers in $|F|$ and $\sigma(0)\cap C = \emptyset$.
For every $s\in S_0$, $f_* Y_s\in |C+mF|$ passes through $\sigma(0)$. Therefore, we must have
\begin{equation}\label{ECK3E000900}
f_* Y_s = C + m_1 F_1 + m_2 F_2 + ... + m_{g-1} F_{g-1} + M_s
\end{equation}
for some $m_1, m_2, ..., m_{g-1}\in {\mathbb Z}^+$. Since the curves in $W_t\cap \Lambda_\sigma$ cover $X_t$ for $t\ne 0$, $f$ is surjective. Hence
$f_* Y_s$ covers $X_0$ as $s$ moves in $S_0$.
Therefore, $M_s$ contains a moving fiber in $|F|$. More precisely, there exists a component $\Gamma$ of $S_0$ such that $\cup_{s\in \Gamma} M_s = X_0$.
For a general point $s\in \Gamma$, $M_s$ contains a general fiber $F_s$ in $|F|$. Therefore, $Y_s$ has components $\widehat{F}_{1,s}, \widehat{F}_{2,s}, ..., \widehat{F}_{g-1,s}, \widehat{F}_s$ dominating $F_1, F_2, ..., F_{g-1}, F_s$, respectively. And since
$p_a(Y_s) = g$, $\widehat{F}_{1,s}, \widehat{F}_{2,s}, ..., \widehat{F}_{g-1,s}, \widehat{F}_s$
are all elliptic curves. Indeed, it is very easy to see that its moduli $[Y_s]$ in $\overline{{\mathcal M}}_g$
\begin{equation}\label{ECK3E000889}
[Y_s] = [\widehat{C}_s \cup \widehat{F}_{1,s}\cup \widehat{F}_{2,s}\cup ...\cup \widehat{F}_{g-1,s}\cup \widehat{F}_s]
\end{equation}
is a smooth rational curve $\widehat{C}_s$ with $g$ elliptic ``tails''
$\widehat{F}_{1,s}, \widehat{F}_{2,s}, ..., \widehat{F}_{g-1,s}, \widehat{F}_s$ attached to it,
where $\widehat{C}_s$ is the component of $Y_s$ dominating $C$. Of course,
when $g \le 2$, $\widehat{C}_s$ is contracted under the moduli map.
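One can also verify directly that the configuration in \eqref{ECK3E000889} has arithmetic genus $g$ (a routine count, recorded only as a sanity check): it is a nodal curve with $g+1$ components, the rational curve $\widehat{C}_s$ and the $g$ elliptic curves $\widehat{F}_{1,s},\ldots,\widehat{F}_{g-1,s},\widehat{F}_s$, joined along $g$ nodes, so by the standard formula for nodal curves
\begin{equation*}
p_a = \sum_{\text{components}} p_a + \#\{\text{nodes}\} - \#\{\text{components}\} + 1 = g + g - (g+1) + 1 = g.
\end{equation*}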
Note that $\widehat{F}_{1,s}, \widehat{F}_{2,s}, ..., \widehat{F}_{g-1,s}, \widehat{F}_s$ are isogenous to $F_1, F_2, ..., F_{g-1}, F_s$, respectively. As $s$ moves on $\Gamma$,
$F_s$ moves in $|F|$. So $\widehat{F}_s$ has varying moduli.
This shows that $\rho$ maps $S$ generically finitely onto its image. That is,
\begin{equation}\label{ECK3E000903}
\dim \rho(S) = 2.
\end{equation}
Furthermore, when $F_s$ becomes one of $24$ nodal fibers in $|F|$, $\widehat{F}_s$ becomes a union of rational curves. Therefore, there exists $b\in {\mathbb G}amma$ such that $\widehat{F}_b$ is a connected union of rational curves with normal crossings and $p_a(\widehat{F}_b) = 1$. The moduli $[Y_b]$ of $Y_b$ is thus a smooth rational curve with $g-1$ elliptic tails and one nodal rational curve attached to it. Consequently,
\begin{equation}\label{ECK3E001}
\rho(b) \in \Delta_0\times T
\end{equation}
where $\Delta_0$ is the component of $\Delta$ whose general points parameterize curves of genus $g-1$ with one node. Combining \eqref{ECK3E000903}, \eqref{ECK3E001} and the fact that $\Delta_0$ is $\mathbb{Q}$-Cartier, we conclude that
\begin{equation}\label{ECK3E002}
\rho(S)\cap (\Delta_0\times T) \ne \emptyset \text{ has pure dimension } 1.
\end{equation}
Therefore, for every connected component $G$ of $\rho^{-1}(\Delta_0\times T)$, we have
\begin{equation}\label{ECK3E004}
\dim \rho(G) = 1.
\end{equation}
If $\rho^{-1}(\Delta_0\times T)\cap S_t\ne\emptyset$ for $t\ne 0$, then \eqref{ECK3E000901} follows and we are done. Otherwise,
\begin{equation}\label{ECK3E003}
\rho^{-1}(\Delta_0\times T) \subset S_0.
\end{equation}
Let $G$ be the connected component of $\rho^{-1}(\Delta_0\times T)$ containing the point $b$. Then $G\subset S_0$ and $\dim \rho(G) = 1$.
Let $B$ be an irreducible component of $G$ passing through $b$. For $Y_b$, we have
\begin{equation}\label{ECK3E0006}
f_* Y_b = C + m_1 F_1 + m_2 F_2 + ... + m_{g-1} F_{g-1} + M_b
\end{equation}
with $M_b$ supported on the union $F_\Sigma$ of $24$ nodal rational curves in $|F|$. Therefore,
for $s\in B$ general, $M_s$ must also be supported on $F_\Sigma$; otherwise, $M_s$ contains a general member $F_s$ of $|F|$, the moduli $[Y_s]$ of $Y_s$ is given by \eqref{ECK3E000889} and $[Y_s]\not\in \Delta_0$.
Consequently, $M_s \equiv M_b$ for all $s\in B$ and $\rho$ is constant on $B$.
For a component $Q$ of $G$ with $q\in B\cap Q \ne \emptyset$, the same argument shows that $M_s \equiv M_q$ is supported on $F_\Sigma$ for all $s\in Q$ and $\rho$ is constant on $Q$. And since $G$ is connected, we can use this argument to show that $\rho$ is constant on every component of $G$, i.e., constant on $G$. This contradicts \eqref{ECK3E004}.
\end{document}
\begin{document}
\author{Frank Calegari\footnote{Supported in part by the American Institute of Mathematics.} \\ Matthew Emerton}
\title{On the Ramification of Hecke Algebras at Eisenstein Primes}
\maketitle
\section{Introduction}
Fix a prime $p$, and a modular residual representation
$\overline{\rho}: G_{\mathbf Q} \rightarrow \mathrm{GL}_2(\overline{\mathbf F}_p)$.
Suppose $f$ is a normalised cuspidal Hecke eigenform of
some level $N$
and weight $k$ that gives rise to $\overline{\rho}$, and let
$K_{\kern-.1em{f}}$ denote the extension of $\mathbf Q_p$ generated by the
$q$-expansion coefficients $a_n(f)$ of~$f$.
The field $K_{\kern-.1em{f}}$ is a finite extension
of $\mathbf Q_p$. What can one say about the extension $K_{\kern-.1em{f}}/\mathbf Q_p$?
Buzzard \cite{buzz} has made the following conjecture:
if $N$ is fixed, and $k$ is allowed to vary,
then the degree $[K_{\kern-.1em{f}}:\mathbf Q_p]$ is bounded
independently of $k$.
Little progress has been made on this conjecture so far;
indeed, very little seems to have been proven at all regarding
the degrees $[K_{\kern-.1em{f}}:\mathbf Q_p]$. The goal of this paper
is to consider a question somewhat orthogonal to Buzzard's,
namely, to fix the weight and vary the level. Moreover, we only
consider certain \emph{reducible} representations
$\overline{\rho}$ that arise in Mazur's study of the Eisenstein
Ideal~\cite{eisenstein}. Our results
suggest that the degrees $[K_{\kern-.1em{f}}:\mathbf Q_p]$ are, in fact,
arithmetically significant.
Suppose that $N \geq 5$ is prime, and that $p$ is a prime
that exactly divides the numerator of $(N-1)/12$.
Mazur \cite{eisenstein}
has shown that there is a weight two cuspform defined over $\overline{\mathbf Q}_p$,
unique up to conjugation by $G_{\mathbf Q_p}$ (the Galois group of $\overline{\mathbf Q}_p$
over $\mathbf Q_p$),
satisfying the congruence
\begin{equation}\label{congruence}
a_{\ell}(f) \equiv 1 + \ell \pmod{\mathfrak{p}}\end{equation}
(where $\mathfrak{p}$ is the maximal ideal in the ring of integers of $K_{\kern-.1em{f}}$,
and $\ell$ ranges over primes distinct from $N$).
It follows moreover from \cite{eisenstein} (Prop.~19.1, p.~140)
that $K_{\kern-.1em{f}}$ is a \emph{totally ramified}
extension of $\mathbf Q_p$, and thus that the degree $[K_{\kern-.1em{f}}:\mathbf Q_p]$
is equal to the (absolute) ramification degree of $K_{\kern-.1em{f}}$.
Denote this ramification degree by $e_p$.
In this paper we prove the following theorem, in the case when
$p=2$.
\begin{theorem}\label{thm:main:p=2}
Suppose that $p = 2$ and that $N \equiv 9 \pmod{16}$,
and let $f$ be
a weight two eigenform on $\Gamma_0(N)$ satisfying the
congruence~(\ref{congruence}).
If $2^m$ is the largest power of $2$ dividing
the class number of the field $\mathbf Q(\sqrt{-N})$,
then $e_2 = 2^{m-1}-1$.
\end{theorem}
When $p$ is odd, we establish the following less
definitive result.
\begin{theorem}\label{thm:main:p odd}
\label{theorem:oddp}
Suppose that $p$ is an odd prime exactly dividing
the numerator of $(N-1)/12$.
Let $f$ be
a weight two eigenform on $\Gamma_0(N)$ satisfying the
congruence~(\ref{congruence}).
\begin{itemize}
\item[(i)] Suppose that $p=3$. $($Our hypothesis on $N$ thus becomes
$N \equiv 10 \textrm{ or } 19 \pmod{27})$.
Then $e_3 = 1$ if and only if the $3$-part of
the class group of $\mathbf Q(\sqrt{-3}, N^{1/3})$ is cyclic.
\item[(ii)] Suppose that $p \geq 5$. $($Our hypothesis on $N$
thus becomes $p \| N-1)$. Then $e_p = 1$ if
the $p$-part of the class group of $\mathbf Q(N^{1/p})$ is cyclic.
\end{itemize}
\end{theorem}
The question of computing $e_p$ has been addressed previously,
in the paper \cite{merel} of Merel. In this work, Merel
establishes a necessary and sufficient criterion for $e_p = 1$.
Merel's criterion for $e_p = 1$ is {\it not} expressed
in terms of class groups; rather, it is expressed in terms of
whether or not the
congruence class modulo $N$ of a certain explicit expression
is a $p$th power.
When $p = 2$,
Merel, using classical results from algebraic number theory,
was able to reinterpret his explicit criterion for $e_2=1$
so as to prove that $e_2 = 1$ if and only if $m = 2$.
(It is known that $m \geq 2$ if and only if $N \equiv 1 \pmod{8}$.)
Theorem \ref{thm:main:p=2} strengthens this result, by relating
the value of $e_2$ in all cases to the order of the 2-part of the class
group of $\mathbf Q(\sqrt{-N})$.
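Indeed, the two results are consistent, as the following immediate check from the formula in Theorem~\ref{thm:main:p=2} shows:
$$e_2 = 2^{m-1}-1 = 1 \quad\Longleftrightarrow\quad m = 2.$$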
When $p$ is odd, Merel was not able to reinterpret his explicit
criterion in algebraic number theoretic terms. However, combining
Merel's result with Theorem~\ref{thm:main:p odd}
(and the analogue of this theorem for more general $N$),
we obtain the following result.
\begin{theorem}
\label{theorem:merel}
Let $N \ge 5$ be prime.
\begin{itemize}
\item[(i)] Let $N \equiv 1 \pmod 9$.
The $3$-part of the class group of $\mathbf Q(\sqrt{-3},N^{1/3})$
is cyclic if and only if
$\left(\frac{N-1}{3} \right)!$ is not a cube modulo $N$.
Equivalently, if we let $N = \pi \bar{\pi}$ denote the factorisation
of $N$ in $\mathbf Q(\sqrt{-3})$, then the $3$-part of the class group
of $\mathbf Q(N^{1/3},\sqrt{-3})$ is cyclic if and only if the
$9$th power residue symbol $\left( \dfrac{\pi}{\bar{\pi}}\right)_9$
is non-trivial.
\footnote{The claimed equivalence follows from the
formula $\left(\left(\frac{N-1}{3}\right) !\right)^3 \equiv \pi
\pmod{\bar{\pi}}$, which was pointed out to us by Noam Elkies.
Ren\'e Schoof has told us that one can
prove part~(i) of Theorem~\ref{theorem:merel}
using class field theory. It is not apparent, however, that
(ii) can be proved in this way.}
Furthermore, if these equivalent conditions hold, then
the $3$-part of the class group of $\mathbf Q(N^{1/3})$ (which
a fortiori
is cyclic of order divisible by three) has order exactly three.
\item[(ii)] Let $p \ge 5$, and let $N \equiv 1 \pmod p$.
If the $p$-part of the class group of $\mathbf Q(N^{1/p})$ is cyclic then
$$\prod_{\ell=1}^{(N-1)/2} \ell^{\ell}$$
is not a $p$th power modulo $N$.
\end{itemize}
\end{theorem}
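As a purely illustrative aside (not taken from \cite{merel} or from the arguments below; the function name and interface are ours), the cube criterion in part (i) is easy to test numerically. The sketch below, in Python, uses only the standard fact that for a prime $N$ with $3\mid N-1$, an element $x$ prime to $N$ is a cube modulo $N$ if and only if $x^{(N-1)/3}\equiv 1 \pmod N$.
\begin{verbatim}
def merel_cube_criterion(N):
    """Illustrative sketch: return True iff ((N-1)/3)! is NOT a cube mod N.

    Assumes N is a prime with N % 9 == 1, as in part (i) above; by the
    theorem, True then corresponds to the 3-part of the class group of
    Q(sqrt(-3), N^(1/3)) being cyclic.
    """
    assert N % 9 == 1
    f = 1
    for k in range(2, (N - 1) // 3 + 1):
        f = (f * k) % N                  # ((N-1)/3)! reduced modulo N
    # x prime to N is a cube mod N (when 3 | N-1) iff x^((N-1)/3) == 1 (mod N)
    return pow(f, (N - 1) // 3, N) != 1
\end{verbatim}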
The proof of Theorems~\ref{thm:main:p=2} and \ref{thm:main:p odd}
depends on arguments using deformations of
Galois representations. Briefly, if $\mathbf T$ denotes
the completion of the Hecke algebra acting on weight two
modular forms on $\Gamma_0(N)$ at its $p$-Eisenstein
ideal, then we identify $\mathbf T$ with the universal deformation
ring for a certain deformation problem. The theorems
are then proved by an explicit analysis of this deformation
problem over Artin $\mathbf F_p$-algebras.
It may be of independent interest to note that
our identification of $\mathbf T$ as a universal deformation
ring also allows us to recover
{\it all} the results of Mazur proved in the reference
\cite{eisenstein} regarding the
structure of $\mathbf T$ and the Eisenstein ideal:
for example, that $\mathbf T$ is monogenic over $\mathbf Z_p$
(and hence Gorenstein);
that the Eisenstein ideal is principal,
and is generated by $T_{\ell} - (1 + \ell)$
if and only if $\ell \neq N$ is a good prime;
and also that $T_N = 1$ in $\mathbf T$.
\
Let us now give a more detailed explanation of our
method. For the moment, we
relax our condition on $N$, assuming simply that
$N$ and $p$ are distinct primes.
We begin by defining a continuous representation
$\overlineerline{\rho}: G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_p)$.
If $p$ is odd, we let
$$\overlineerline{\rho} = \elleft(\begin{matrix} \overlineerline{\chi}_p & 0 \\ 0 & 1 \end{matrix}\right),$$
where $\overlineerline{\chi}_p$ is the mod $p$ reduction of the cyclotomic
character.
If $p$ is even, we let
$$\overlineerline{\rho} = \elleft(\begin{matrix} 1 & \phi \\ 0 & 1 \end{matrix}\right),$$
where $\phi:G_{\mathfrak{p}athbf Q}\rightarrow \mathfrak{p}athbf F_2$ is the unique
$\mathfrak{p}athbf F_2$-valued homomorphism
inducing an isomorphism $\mathfrak{p}athrm{Gal}(\mathfrak{p}athbf Q(\sqrt{-1})/\mathfrak{p}athbf Q) \cong \mathfrak{p}athbf F_2$.
Let $\overline{V}$ denote the two dimensional vector space on which $\overlineerline{\rho}$
acts, and fix a line $\cal Lbar$ in $\overline{V}$ that is {\it not} invariant
under $G_{\mathfrak{p}athbf Q}$ (equivalently,
$G_{\mathfrak{p}athbf Q_p}$).
If $A$ is an
Artinian local ring with residue field $\mathfrak{p}athbf F_p$,
consider the set of triples $(V,L,\rho)$, where $V$ is a
free $A$-module, $L$ is a direct summand of $V$ that
is free of rank one over $A$, and $\rho$ is a continuous homomorphism
$G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{GL}(V)$, satisfying the
following
conditions:
\begin{enumerate}
\item The triple $(V,L,\rho)$ is a deformation of $(\overline{V}, \cal Lbar,\overlineerline{\rho})$.
\item The representation $\rho$ is unramified away from $p$ and $N$,
and is finite at $p$ (i.e.~$V$, regarded as a $G_{\mathfrak{p}athbf Q_p}$-module, arises as
the generic fibre of a finite flat group scheme over~$\mathfrak{p}athbf Z_p$).
\item The inertia subgroup at $N$ acts trivially on the submodule $L$ of $V$.
\item The determinant of $\rho$ is equal to the composition of
the cyclotomic character
$\chi_p: G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athbf Z_p^{\times}$ with the natural map
$\mathfrak{p}athbf Z_p^{\times} \rightarrow A^{\times}$.
\end{enumerate}
If we let $\mathcal{D}ef(A)$ denote the collection of such triples
modulo strict equivalence, then $\mathcal{D}ef$ defines a deformation
functor on the category of Artinian local rings $A$.
Note that the representation $\overline{\rho}$ is reducible,
and is either the direct sum of two characters (if $p$ is odd)
or an extension of the trivial character by itself (if $p = 2$).
Nevertheless,
one has the following result.
\begin{prop}\label{representability}
The deformation functor $\mathcal{D}ef$ is pro-representable by
a complete Noetherian local $\mathbf Z_p$-algebra $R$.
\end{prop}
The proposition follows directly from the fact that the only
endomorphisms of the triple $(\overline{V},\overline{L},\overline{\rho})$ are the scalars.
(The authors learned the idea of introducing a locally invariant line
to rigidify an otherwise unrepresentable
deformation problem from Mark Dickinson, who has applied it
to analyse the deformation theory of residually irreducible representations
that are ordinary, but not $p$-distinguished, locally at $p$.)
\
Having defined a universal deformation ring, we now introduce the
corresponding Hecke algebra. As indicated above,
we let $\mathbf T$ denote the completion at its $p$-Eisenstein
ideal of the $\mathbf Z$-algebra of Hecke operators
acting on the space of all modular forms (i.e.~the cuspforms
together with the Eisenstein series) of level $\Gamma_0(N)$
and weight two. (The $p$-Eisenstein ideal is
the maximal ideal in the Hecke algebra
generated by the elements $T_{\ell} - (1 + \ell)$
($\ell \neq N$), $T_N - 1$, and $p$).
The following result relates $R$ and $\mathbf T$.
\begin{theorem}\label{R=T}
If $\rho^{\mathrm{univ}}$ denotes the universal deformation of $\overline{\rho}$
over the universal deformation ring $R$,
then there is an isomorphism of $\mathbf Z_p$-algebras $R \cong \mathbf T$,
uniquely determined by the requirement that
the trace of Frobenius at $\ell$ under $\rho^{\mathrm{univ}}$ $($for primes $\ell \neq p,N)$
maps to the Hecke operator $T_{\ell} \in \mathbf T$.
\end{theorem}
Let us now return to the setting of Theorems~\ref{thm:main:p=2}
and~\ref{thm:main:p odd}.
Thus we suppose again that $p$ exactly divides the numerator
of $(N-1)/12$,
and let $f$ be as in the statements of the theorems.
If $\mathcal O$ denotes the ring of integers in $K_{\kern-.1em{f}}$,
and $\mathfrak{p}$ its maximal ideal,
then the results of \cite{eisenstein} imply (taking into account
the congruence satisfied by $N$) that
the Hecke algebra $\mathbf T$ admits the following description:
$$\mathbf T
= \{(a,b) \in \mathbf Z_p \times \mathcal O \, | \, a \bmod p = b \bmod \mathfrak{p} \}.$$
From this description of $\mathbf T$, one easily computes that
$\mathbf T/p$ is isomorphic
to $\mathbf F_p[X]/X^{e_p+1}.$
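Since this computation is used repeatedly, we sketch the verification (a routine check from the displayed description of $\mathbf T$, included only for convenience). Let $\pi$ be a uniformiser of $\mathcal O$, so that $\pi^{e_p}=pu$ for some unit $u$, and set $t=(0,\pi)\in\mathbf T$. Then $t^{e_p}=(0,pu)\notin p\mathbf T$, because $(0,u)\notin\mathbf T$, while
$$t^{\,e_p+1}=(0,\pi^{e_p+1})=p\,(0,\pi u)\in p\mathbf T.$$
A valuation argument shows that $1,t,\ldots,t^{e_p}$ are linearly independent in $\mathbf T/p$, and since $\mathbf T$ has $\mathbf Z_p$-rank $e_p+1$ (as $K_{\kern-.1em{f}}/\mathbf Q_p$ is totally ramified of degree $e_p$), they form a basis; sending $X\mapsto t$ therefore gives the stated isomorphism.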
Theorem~\ref{R=T} thus yields the following characterisation of $e_p$.
\begin{cor}\label{EPR}
The natural number $e_p$ is the largest integer $e$ for which
we may find a triple $(V,L,\rho)$ in $\mathcal{D}ef(\mathbf F_p[X]/X^{e+1})$
such that the induced map
$R \rightarrow \mathbf F_p[X]/X^{e+1}$ is surjective.
\end{cor}
Theorems~\ref{thm:main:p=2} and~\ref{thm:main:p odd} are
a consequence of this corollary,
together with an explicit analysis of the deformations
of $(\overline{V},\overline{L},\overline{\rho})$ over Artinian local rings of the
form $\mathbf F_p[X]/X^n$.
If $p^2$ divides the numerator
of $(N-1)/12$, then the residually
Eisenstein cusp forms of level $N$ need not be
mutually conjugate.
However, one still has an isomorphism of the
form $\mathbf T/p = \mathbf F_p[x]/x^{g_p + 1}$, where $g_p+1$ denotes
the rank of $\mathbf T$ over $\mathbf Z_p$. (Thus $g_p$ is the
rank over $\mathbf Z_p$ of the cuspidal quotient of $\mathbf T$.)
In particular, the cuspidal Hecke algebra localized at the Eisenstein
prime is isomorphic to $\mathbf Z_p$ if and only if $g_p = 1$.
In this way
our analysis of deformations over $\mathbf F_p[X]/X^n$ suffices to
prove Theorem~\ref{theorem:merel}. More generally,
our paper can
be seen as providing a partial answer to Mazur's
question (\cite{eisenstein}, p.~140): ``\emph{Is there anything
general that can be said \ldots about $g_p$?}''.
The organisation of the paper is as follows. In
Section~\ref{sec:ext}
we develop some results about group schemes that will
be required in our study of the deformation functor $\mathcal{D}ef$.
In Section~\ref{sec:RT} we prove Theorem~\ref{R=T},
using the numerical criterion of Wiles \cite{wiles}
(subsequently strengthened by Lenstra \cite{lenstra}).
As in \cite{skiles}, we use the class field theory of cyclotomic
fields to obtain the required upper bound for the size of an appropriate
Galois cohomology group; the numerical criterion is then
established by comparing this upper bound with the congruence modulus
of the weight two Eisenstein series on $\Gamma_0(N)$ (which is known
by \cite{eisenstein} to equal the numerator of $(N-1)/12$).
Finally in Sections~\ref{sec:explicit at 2} (respectively~\ref{sec:odd explicit})
we perform the analysis
necessary to deduce Theorem~\ref{thm:main:p=2} (respectively~\ref{thm:main:p odd})
from Corollary~\ref{EPR}.
Let us close this introduction by emphasising that the only
result of \cite{eisenstein} required for the proof
of Theorem~\ref{R=T}
is the computation of the congruence modulus
between the Eisenstein and cuspidal locus in
the Hecke algebra of weight two and level $N$.
(Namely, that this congruence modulus is equal
to the numerator of $(N-1)/12$.)
As remarked upon above, we are then able to deduce
all the results of \cite{eisenstein} regarding $\mathbf T$
and its quotient $\mathbf T^0$ from Theorem~\ref{R=T}.
The necessary arguments are presented
at the end of Section~\ref{sec:RT}.
\section{Some group scheme-theoretic calculations}
\label{sec:ext}
Let us fix a prime $p$, and a natural number $n$.
We begin by considering finite flat commutative group
schemes of exponent $p^n$ that are extensions of
$\mathbf Z/p^n$ by $\mu_{p^n}$.
For any scheme $S$ we let $\groups{p}{S}$ denote
the category of
commutative finite flat group schemes of exponent $p^n$
over the base $S$,
and we write $\mathrm{Ext}^1_S(\textrm{--} , \textrm{--})$ to
denote the Yoneda Ext-bifunctor on the additive category
$\groups{p}{S}$.
\begin{lemma}\label{inj-one}
The natural map $\mathrm{Ext}^1_{\mathbf Z_p}(\mathbf Z/p^n,\mu_{p^n})
\rightarrow \mathrm{Ext}^1_{\mathbf Q_p}(\mathbf Z/p^n,\mu_{p^n})$,
induced by restricting
to the generic fibre, is injective.
\end{lemma}
\begin{Proof}
Kummer theory identifies the map in the statement of the
lemma with the obviously injective map
$\mathbf Z_p^{\times}/(\mathbf Z_p^{\times})^{p^n} \rightarrow
\mathbf Q_p^{\times}/(\mathbf Q_p^{\times})^{p^n}$.
$\square$
\end{Proof}
\
If $p = 2$, we let $V^{\mathrm{min}}_n$ denote the extension of
$\mathbf Z/2^n$ by $\mu_{2^n}$ in the category $\groups{2}{\mathbf Q}$
corresponding by Kummer theory to the element
$-1 \in \mathbf Q^{\times}/(\mathbf Q^{\times})^{2^n}$.
If $p$ is odd, we let $V^{\mathrm{min}}_n$ denote the direct sum
$\mathbf Z/p^n \oplus \mu_{p^n}$ in the category $\groups{p}{\mathbf Q}$.
We may (and do) regard $V^{\mathrm{min}}_n$ as an object of the category
of $G_{\mathbf Q}$-modules annihilated by $p^n$.
More explicitly, let $\chi_p$ denote the $p$-adic cyclotomic character.
Then if $p = 2,$ the $G_{\mathbf Q}$-module $V^{\mathrm{min}}_n$ corresponds to the
representation
$$\rho^{\mathrm{min}}_n: G_{\mathbf Q} \rightarrow \mathrm{GL}_2(\mathbf Z/2^n)$$
given by
$$\sigma \mapsto \left( \begin{matrix} \chi_2(\sigma) &
(\chi_2(\sigma) - 1)/2 \\ 0 & 1 \end{matrix} \right) \pmod{2^n},$$
whilst if $p$ is odd, the $G_{\mathbf Q}$-module $V^{\mathrm{min}}_n$ corresponds to
the representation
$$\rho^{\mathrm{min}}_n: G_{\mathbf Q} \rightarrow \mathrm{GL}_2(\mathbf Z/p^n)$$
given by
$$\sigma \mapsto \left( \begin{matrix} \chi_p(\sigma) & 0
\\ 0 & 1 \end{matrix}\right)\pmod{p^n}.$$
(Here we have denoted by $\sigma$ an element of $G_{\mathbf Q}$.)
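We remark (an elementary check, not needed in what follows) that reducing the first of these formulas modulo $2$ recovers the representation $\overline{\rho}$ fixed in the introduction: $\chi_2(\sigma)\equiv 1 \pmod 2$, and $(\chi_2(\sigma)-1)/2$ is even precisely when $\chi_2(\sigma)\equiv 1\pmod 4$, i.e.\ precisely when $\sigma$ fixes $\sqrt{-1}$, so that
$$\frac{\chi_2(\sigma)-1}{2} \equiv \phi(\sigma) \pmod 2,$$
and hence $\rho^{\mathrm{min}}_1 = \overline{\rho}$ when $p=2$.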
\begin{prop}\label{unique prol}
For any natural number $M$, the $G_{\mathbf Q}$-module $V^{\mathrm{min}}_n$
has a unique prolongation to an object of $\groups{p}{\mathbf Z[1/M]}$.
\end{prop}
\begin{Proof}
The Galois module $V^{\mathrm{min}}_n$ is unramified away from $p$,
and so $V^{\mathrm{min}}_n$ has a unique prolongation to an \'etale group
scheme over $\mathbf Z[1/Mp]$.
It thus suffices to show that $V^{\mathrm{min}}_n$, regarded as
a $G_{\mathbf Q_p}$-module, has a unique prolongation to an object
of $\groups{p}{\mathbf Z_p}$.
If $p$ is odd, then this is a direct consequence
of \cite{fontaine}, Thm.~2. Thus we assume
for the remainder of the proof that $p = 2$.
In this case, $V^{\mathrm{min}}_n$ is defined to be the extension
of $\mathbf Z/2^n$ by $\mu_{2^n}$ corresponding to $-1 \in \mathbf Q_2^{\times}$.
Since $-1$ in fact lies in $\mathbf Z_2^{\times}$,
$V^{\mathrm{min}}_n$ does prolong to a finite flat group scheme over $\mathbf Z_2$.
We must show that this prolongation is unique.
We begin with the case $n=1$.
Suppose that $G$ is a finite flat group
scheme over $\mathbf Z_2$ having $(V^{\mathrm{min}}_1)_{/G_{\mathbf Q_2}}$ as its associated
Galois representation. The scheme-theoretic closure of the fixed
line in $V^{\mathrm{min}}_1$ yields an order two finite flat subgroup scheme $H$ of $G$.
Both $H$ and $G/H$ are thus finite flat group schemes of order two.
The results of \cite{OT} show that $\mathbf Z/2$ and $\mu_2$ are the only
group schemes of order 2 over $\mathbf Z_2$.
Thus $G$ is an extension of either $\mathbf Z/2$ or $\mu_2$ by
either $\mathbf Z/2$ or $\mu_2$. Since neither $G$ nor its Cartier dual
is unramified (since $V^{\mathrm{min}}_1$ is self-dual and ramified at 2),
we see that both $\mathbf Z/2$ and $\mu_2$ must appear. Since $V^{\mathrm{min}}_1$ is
a non-trivial $G_{\mathbf Q_2}$-module,
a consideration of the connected-\'etale exact sequence
attached to $G$ shows that in fact $G$ is an extension of $\mathbf Z/2$ by
$\mu_2$. The fact that $G$ is determined uniquely by $V^{\mathrm{min}}_1$
now follows from Lemma~\ref{inj-one}.
Now consider the case of $n$ arbitrary. Let $G_n$ be a prolongation
of $V^{\mathrm{min}}_n$ to a group scheme over $\mathbf Z_2$. The Galois module
$V^{\mathrm{min}}_n$ admits a filtration by submodules with successive
quotients isomorphic to $V^{\mathrm{min}}_1$.
Taking scheme-theoretic closures, and appealing to the
conclusion of the preceding paragraph, we obtain a filtration
of $G_n$ by finite flat closed subgroup schemes, with successive quotients
isomorphic to the unique extension of $\mathbf Z/2$ by $\mu_2$ over $\mathbf Z_2$
with generic fibre isomorphic to $V^{\mathrm{min}}_1$.
Consider the connected-\'etale sequence of $G_n$:
$$0 \rightarrow G_n^{0} \rightarrow G_n \rightarrow G_n^{\text{\'et}} \rightarrow 0.$$
We see that $G_n^{0}$ is a successive extension of copies of
$\mu_2$, and that $G_n^{\text{\'et}}$ is a successive extension of copies
of $\mathbf Z/2$. Such extensions (being respectively dual-to-\'etale,
or \'etale) are uniquely determined by their
corresponding Galois representations. Thus $G_n^{0}$ is in
fact isomorphic to $\mu_{2^n}$, whilst $G_n^{\text{\'et}}$ is isomorphic to
$\mathbf Z/2^n$. Thus $G_n$ is an extension of $\mathbf Z/2^n$ by $\mu_{2^n}$,
and Lemma~\ref{inj-one} shows that it is uniquely determined
by $V^{\mathrm{min}}_n$.
$
\square \ $
\end{Proof}
\begin{lemma}\ellabel{Exts}
Let $D_n$ denote the (uniquely determined, by Proposition~\ref{unique prol})
prolongation of $V^{\mathrm{min}}_n$ to an object of $\groups{p}{\mathfrak{p}athbf Z}$.
We have $\cal Ext^1_{\mathfrak{p}athbf Z}(\mathfrak{p}athbf Z/p^n,D_n)
= 0$.
\end{lemma}
\begin{Proof}
Writing $D_n$ as an extension
of $\mathfrak{p}athbf Z/p^n$ by $\mathfrak{p}u_{p^n},$
we obtain the exact sequence of Yoneda Ext groups
$$\cal Ext^1_{\mathfrak{p}athbf Z}(\mathfrak{p}athbf Z/p^n,\mathfrak{p}u_{p^n}) \rightarrow \cal Ext^1_{\mathfrak{p}athbf Z}(\mathfrak{p}athbf Z/p^n,D_n)
\rightarrow \cal Ext^1_{\mathfrak{p}athbf Z}(\mathfrak{p}athbf Z/p^n,\mathfrak{p}athbf Z/p^n).$$
The third of these groups always vanishes, since $\mathfrak{p}athbf Z$ has no non-trivial
\'etale covers. If $p$ is odd, the first of these groups always vanishes.
If $p = 2$, then the first
of these groups has order two, with the non-trivial element corresponding
by Kummer theory to $-1 \in \mathfrak{p}athbf Z^{\times}$. Since $D_n$ is itself classified
by this same element when $p = 2$, we see that the first arrow vanishes in
all cases, and thus so does the middle group.
$
\square \ $
\end{Proof}
\begin{prop}\ellabel{uniqueness}
Suppose that $M_{/\mathfrak{p}athbf Q_p}$ is a $G_{\mathfrak{p}athbf Q_p}$-module equipped with
a composition series whose subquotients are isomorphic to $V^{\mathrm{min}}_1$,
that admits a prolongation
to a finite flat group scheme $M$ over $\mathfrak{p}athbf Z_p$.
Then $M$ is unique up to unique isomorphism.
\end{prop}
\begin{Proof}
Let $M$ and $M'$ be two choices of a finite flat group scheme over $\mathfrak{p}athbf Z_p$
prolonging $M_{/\mathfrak{p}athbf Q_p}$. The results of \cite{ray} show that we may find
a prolongation of $M_{/\mathfrak{p}athbf Q_p}$ that maps (in the category of such prolongations)
to each of $M$ and $M'$. Thus we
may assume we are given a map $M \rightarrow M'$ that induces the identity on
generic fibres. By assumption we may find an embedding
$V^{\mathrm{min}}_1 \subset M_{/\mathfrak{p}athbf Q_p}$.
Passing to scheme theoretic closures in each of $M$ and $M'$,
and taking into account Lemma \ref{unique prol},
this prolongs to an embedding of $D_1$ into each of $M$ and $M'$,
so that the map $M \rightarrow M'$ restricts to the identity map
between these two copies of $D_1$.
Replacing $M_{/\mathbf Q_p}$ by $M_{/\mathbf Q_p}/V^{\mathrm{min}}_1,$
$M$ by $M/D_1$, and $M'$ by $M'/D_1$, and arguing by induction on the order
of $M$, the proposition follows from the $5$-lemma (applied,
for example, in the category of sheaves on the $f\kern-.02em{p}p\kern-.07em{f}$ site
over $\mathfrak{p}athbf Z_p$).
$
\square \ $
\end{Proof}
\begin{cor}\ellabel{uniquenesscor}
Suppose that $A$ is an Artinian local ring with maximal ideal $\mathfrak{p}$
and residue field $\mathfrak{p}athbf F_p$, that $V$ is a free $A$-module of rank two,
and that $\rho:G_{\mathfrak{p}athbf Q_p}
\rightarrow \mathfrak{p}athrm{GL}(V)$ is a deformation of $(V^{\mathrm{min}}_1)_{/ \mathfrak{p}athbf Q_p}$ that is
finite flat at $p$.
Then there is a unique up to unique isomorphism finite flat group scheme
$M$ over $\mathfrak{p}athbf Z_p$
whose generic fibre equals $V$. Furthermore, the $A$-action on
$V$ prolongs to an $A$-action on $M$, and the connected-\'etale sequence
of $M$ gives rise to a two-step filtration of $V$ by free $A$-submodules
of rank one.
\end{cor}
\begin{Proof}
If we choose a Jordan-H\"older filtration of $A$ as a module over itself,
then this induces a composition series on $V$ with successive quotients
isomorphic to $V^{\mathrm{min}}_1$. Thus we are in the situation of the
preceding Proposition, and the uniqueness of $M$ follows.
In particular $M$ is the maximal prolongation of $V,$ and by
functoriality, the $A$-action on $V$ prolongs to an $A$-action
on $M$. (More precisely, just as in the discussion of
\cite{ray}, p.~265, we see that the automorphisms of $V$ induced
by the group of units $A^{\times}$ extend to automorphisms of $M$.
Since $A$ is generated as a ring by $A^{\times}$, we conclude that in
fact the $A$-action on $V$ extends to an $A$-action on $M$.)
Finally, let $0 \rightarrow M^{0} \rightarrow M
\rightarrow M^{\text{\'et}} \rightarrow 0$ be the connected-\'etale sequence of $M$.
The functorial nature of its construction implies that it is an exact sequence
of $A$-submodules of $M$.
Thus the exact sequence $0 \rightarrow M^{0}_{/\mathfrak{p}athbf Q_p} \rightarrow V
\rightarrow M^{\text{\'et}}_{/\mathfrak{p}athbf Q_p} \rightarrow 0$ obtained by restricting to
$\mathfrak{p}athbf Q_p$ yields a two-step filtration of $V$ by $A$-submodules.
The formation of this filtration is clearly functorial in $A$.
Thus if we tensor
$M$ with $A/\mathfrak{p}$ over $A$, and take into account
that $A/\mathfrak{p} \otimes_A M = D_1,$ we find that
$A/\mathfrak{p} \otimes_A M^{0} = D_1^{0} = \mathfrak{p}u_p$
and that
$A/\mathfrak{p} \otimes_A M^{\text{\'et}} = D_1^{\text{\'et}} = \mathfrak{p}athbf Z/p.$
Thus each of $M^{0}_{/\mathfrak{p}athbf Q_p}$ and $M^{\text{\'et}}_{/\mathfrak{p}athbf Q_p}$ are cyclic
$A$-modules. Since $M_{/\mathfrak{p}athbf Q_p}$ is free of rank two over $A$,
they must both be free $A$-modules of rank one, as claimed.
$
\square \ $
\end{Proof}
\begin{prop}\ellabel{ordinary}
Let $A$ be an Artinian local $\mathfrak{p}athbf Z/p^n$-algebra
with maximal ideal $\mathfrak{p}$ and residue field $\mathfrak{p}athbf F_p$.
If $V$ is a free $A$-module of rank two and $\rho:G_{\mathfrak{p}athbf Q_p}
\rightarrow \mathfrak{p}athrm{GL}(V)$ is a deformation of $(V^{\mathrm{min}}_1)_{/\mathfrak{p}athbf Q_p}$
that is finite at $p$,
then the coinvariants of $V$ with respect to the
inertia group $I_p$ are free of rank one over $A$.
\end{prop}
\begin{Proof}
The preceding corollary shows that $V$ admits a two-step
filtration, with rank one free quotients, corresponding
to the connected-\'etale sequence of the prolongation of
$V$ to a group scheme over $\mathfrak{p}athbf Z_p$. In particular,
the inertial coinvariants $V_{I_p}$ admit a surjection
onto a free $A$-module of rank one.
On the other hand, if $\mathfrak{p}$ is the maximal ideal
of $A$, then
$(V_{I_p})/\mathfrak{p} = (V/\mathfrak{p})_{I_p} = (V^{\mathrm{min}}_1)_{I_p}.$
This latter space is directly checked to be one dimensional
over $\mathfrak{p}athbf F_p$, implying that $V_{I_p}$ is a cyclic $A$-module.
Altogether, we find that $V_{I_p}$ is free of rank one over
$A$, as claimed.
$
\square \ $
\end{Proof}
\begin{prop}\ellabel{unram def}
Let $A$ be an Artinian local $\mathfrak{p}athbf Z/p^n$-algebra
with maximal ideal $\mathfrak{p}$ and residue field $\mathfrak{p}athbf F_p$.
If $V$ is a free $A$-module of rank two and $\rho:G_{\mathfrak{p}athbf Q}
\rightarrow \mathfrak{p}athrm{GL}(V)$ is a deformation of $V^{\mathrm{min}}_1$ that is unramified away
from $p$ and finite at $p$,
then there is an isomorphism
$V \cong A\otimes_{\mathbf Z/p^n} V^{\mathrm{min}}_n.$
\end{prop}
\begin{Proof}
Corollary~\ref{uniquenesscor} shows that $V$ prolongs to a finite flat
$A$-module scheme $M$ over $\mathfrak{p}athop{\mathfrak{p}athrm{Spec}}\nolimits \mathfrak{p}athbf Z$. If we choose a Jordan-H\"older
filtration of $A$ as an $A$-module, then this gives rise to a corresponding
filtration of $V$, with successive quotients isomorphic to $V^{\mathrm{min}}_1$.
Passing to the scheme-theoretic closure in $M$, and taking into
account Proposition~\ref{unique prol}, we obtain
a filtration of $M$ by closed subgroup schemes, with successive quotients
isomorphic to $D_1$.
If $p$ is odd, then Proposition~I.4.5 of~\cite{eisenstein}
shows that $M$ is (in a canonical way) the product of a constant group scheme
and a $\mathfrak{p}u$-type group (that is, the Cartier dual of a constant group scheme).
Each of these subgroups is then an $A$-module scheme, and we easily conclude
that $V \cong A\otimes_{\mathbf Z/p^n} V^{\mathrm{min}}_n.$
If $p=2$, then Propositions~I.2.1 and~I.3.1 of \cite{eisenstein}
show that $M$ is the extension of a constant group scheme
by a $\mathfrak{p}u$-type group. Again, each of these groups is seen to be
an $A$-module scheme, and we easily conclude that $M$ is in fact an extension
of the constant $A$-module scheme $A$ by the $\mathfrak{p}u$-type $A$-module scheme
$A\otimes_{\mathfrak{p}athbf Z/2^n}\mathfrak{p}u_{2^n}$.
The group of all such extensions is classified
by
$$H^1(\mathop{\mathrm{Spec}}\nolimits \mathbf Z, A\otimes_{\mathbf Z/2^n} \mu_{2^n})\cong A\otimes_{\mathbf Z/2^n}
\mathbf Z^{\times}/(\mathbf Z^{\times})^{2^n} \cong A/\mathfrak{p} \otimes_{\mathbf F_2} \{\pm 1\}.$$
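(We sketch, only for orientation, one way to see the computation of $H^1(\mathop{\mathrm{Spec}}\nolimits \mathbf Z, \mu_{2^n})$ underlying the first isomorphism: the fppf Kummer sequence
$$1 \rightarrow \mu_{2^n} \rightarrow \mathbf G_m \stackrel{2^n}{\longrightarrow} \mathbf G_m \rightarrow 1$$
over $\mathop{\mathrm{Spec}}\nolimits \mathbf Z$, together with the triviality of $\mathrm{Pic}(\mathbf Z)$, yields
$H^1(\mathop{\mathrm{Spec}}\nolimits \mathbf Z, \mu_{2^n}) \cong \mathbf Z^{\times}/(\mathbf Z^{\times})^{2^n} = \{\pm 1\}.$)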
Since $V$ is a deformation of $V^{\mathrm{min}}_1$, which corresponds by Kummer
theory to
the non-trivial element of $\{\pm 1\}$, we see that in
fact $M$ is classified by the non-trivial element of
$A/\mathfrak{p}athfrak{p} \otimes_{\mathfrak{p}athbf F_2}\{\pm1\},$
and thus that $M \cong A\otimes_{\mathfrak{p}athbf Z/2^n} D_n,$
and hence that $V\cong A\otimes_{\mathfrak{p}athbf Z/2^n} V^{\mathrm{min}}_n.$
$
\square \ $
\end{Proof}
\
We leave it to the reader to verify
the following lemma.
\begin{lemma}\ellabel{minendos}
If $A$ is an Artinian local $\mathfrak{p}athbf Z/p^n$-algebra
with maximal ideal $\mathfrak{p}$,
then the ring of Galois equivariant endomorphisms of
$A\otimes_{\mathbf Z/p^n}V^{\mathrm{min}}_n$ admits
the following description:
\begin{itemize}
\item[(i)] If $p = 2,$ then $\cal End_{A[G_{\mathbf Q}]}(A\otimes_{\mathbf Z/2^n}V^{\mathrm{min}}_n)
=\{ \left(
\begin{matrix} a & b \\ 0 & a - 2b \end{matrix} \right) \, | \, a,b \in A\}.$
\item[(ii)] If $p$ is odd, then $\cal End_{A[G_{\mathbf Q}]}(A\otimes_{\mathbf Z/p^n}V^{\mathrm{min}}_n)
=\{ \left(
\begin{matrix} a & 0 \\ 0 & d \end{matrix} \right) \, | \, a,d \in A\}.$
\end{itemize}
\end{lemma}
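For the reader's convenience, here is one possible sketch of the verification in case (i) (case (ii) is entirely similar, and simpler); it uses only the surjectivity of $\chi_2$ onto $\mathbf Z_2^{\times}$. If a matrix
$\left( \begin{matrix} a & b \\ c & d \end{matrix} \right)$
with entries in $A$ commutes with
$\rho^{\mathrm{min}}_n(\sigma) = \left( \begin{matrix} \chi_2(\sigma) & (\chi_2(\sigma)-1)/2 \\ 0 & 1 \end{matrix} \right)$
for every $\sigma$, then comparing the two products entry by entry gives
$$c\cdot\frac{\chi_2(\sigma)-1}{2} = 0
\quad\text{and}\quad
\big(2b - (a-d)\big)\cdot\frac{\chi_2(\sigma)-1}{2} = 0
\quad\text{for all } \sigma \in G_{\mathbf Q}.$$
Since $(\chi_2(\sigma)-1)/2$ assumes the value $1$ in $\mathbf Z/2^n$, this forces $c = 0$ and $d = a - 2b$; conversely, a direct computation shows that every matrix of the stated form commutes with each $\rho^{\mathrm{min}}_n(\sigma)$.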
\section{Proving that $R=\mathfrak{p}athbf T$}
\ellabel{sec:RT}
In this section we prove Theorem \ref{R=T}. We let $\mathfrak{p}athcal Def$ denote
the deformation problem described in the introduction.
We employ the technique introduced in~\cite{wiles}: namely,
we first consider a minimal deformation $\rho^{\mathfrak{p}athrm{min}}$ of $(\overline{V},\cal Lbar,\overlineerline{\rho})$
over $\mathfrak{p}athbf Z_p$, and then verify the numerical criterion of~\cite{wiles}.
Let us define the minimal deformation problem $\mathfrak{p}athcal Defmin$, as
the subfunctor of $\mathfrak{p}athcal Def$ consisting of those deformations
of $(\overline{V},\cal Lbar,\overlineerline{\rho})$ that are unramified away from $p$.
Let us also define $\mathfrak{p}athcal Defminprime$ to be the functor that
classifies all deformations of $(\overline{V},\overlineerline{\rho})$ that are unramified
away from $p$ and finite at $p$. Forgetting the $I_N$-fixed line
$L$ gives a natural transformation $\mathfrak{p}athcal Defmin \rightarrow \mathfrak{p}athcal Defminprime.$
Let us now define the triple $(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}})$.
We take $V^{\mathrm{min}} = \mathfrak{p}athbf Z_p\bigoplus \mathfrak{p}athbf Z_p$.
If $p = 2$, then we
let $\rho^{\mathfrak{p}athrm{min}}$ denote the representation
$$\sigma \mathfrak{p}apsto \elleft( \begin{matrix} \chi_2(\sigma) &
(\chi_2(\sigma) - 1)/2 \\ 0 & 1 \end{matrix} \right)$$
$($here $\sigma$ denotes an element of $G_{\mathfrak{p}athbf Q})$,
while if $p$ is odd, we let $\rho^{\mathfrak{p}athrm{min}}$ denote the direct sum
of $\chi_p$ (the $p$-adic cyclotomic character) and $1$ (the trivial
character).
In each case,
the pair $(V^{\mathrm{min}},\rho^{\mathfrak{p}athrm{min}})$ is certainly a lifting of $(\overline{V},\overlineerline{\rho})$.
We take $\cal Lmin$ to be any free of rank one $\mathfrak{p}athbf Z_p$-submodule of $V^{\mathrm{min}}$
lifting the line $\cal Lbar$ in $\overline{V}$.
Note that
for any natural number $n$, we have $V^{\mathrm{min}}/p^n = V^{\mathrm{min}}_n$ (the Galois module
introduced in the preceding section).
\begin{prop}\ellabel{mindefiso}
The natural transformation $\mathfrak{p}athcal Defmin \rightarrow \mathfrak{p}athcal Defminprime$
is an isomorphism of functors. Moreover,
the deformation functor $\mathfrak{p}athcal Defmin$ is pro-represented by
$(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}})$ in $\mathfrak{p}athcal Defmin(\mathfrak{p}athbf Z_p).$
\end{prop}
\begin{Proof}
Let $A$ be an Artinian local $\mathfrak{p}athbf Z_p$-algebra, and
let $(V,\rho)$ be an object of $\mathfrak{p}athcal Defminprime(A)$.
Proposition~\ref{unram def} shows that there is
an isomorphism $V \cong A\otimes_{\mathfrak{p}athbf Z_p} V^{\mathrm{min}}.$
The explicit description of the endomorphisms of
$A\otimes_{\mathfrak{p}athbf Z_p}V^{\mathrm{min}}$ provided by Lemma~\ref{minendos}
shows that we may furthermore choose this isomorphism
so that it is strict. Thus we see that
$\mathfrak{p}athcal Defminprime$ is pro-represented by $\mathfrak{p}athbf Z_p$, with
$(V^{\mathrm{min}},\rho^{\mathfrak{p}athrm{min}})$ as universal object.
Now suppose that $(V,L,\rho)$ is an object
of $\mathfrak{p}athcal Defmin(A).$ Using Lemma~\ref{minendos}
again, we see that we may choose the strict isomorphism
$V \cong A\otimes_{\mathbf Z_p} V^{\mathrm{min}}$ of the preceding paragraph
in such a way that
$L$ is identified with $A\otimes_{\mathfrak{p}athbf Z_p} \cal Lmin.$
Thus $\mathfrak{p}athbf Z_p$ also pro-represents $\mathfrak{p}athcal Defmin,$ with universal
object $(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}}).$ This establishes the proposition.
$
\square \ $
\end{Proof}
\
Note that the preceding lemma implies in particular that
the class of $(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}})$ in $\mathfrak{p}athcal Def(\mathfrak{p}athbf Z_p)$ is independent
of the choice of $\cal Lmin$ (provided that it lifts $\cal Lbar$).
Of course, this is easily checked directly, using the description
of $\cal End_{G_{\mathfrak{p}athbf Q}}(V^{\mathrm{min}})$ afforded by Lemma~\ref{minendos}.
\
Let $R$ denote the universal deformation ring that pro-represents
the functor $\mathfrak{p}athcal Def$, and
let $(V^{\mathrm{univ}},\cal Luniv,\rho^{\mathfrak{p}athrm{univ}})$ denote the
universal deformation of $(\overline{V},\cal Lbar,\overlineerline{\rho})$ over~$R$.
Corresponding to $(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}})$ there is
a homomorphism $R \rightarrow \mathfrak{p}athbf Z_p$ of $\mathfrak{p}athbf Z_p$-algebras.
We let $I$ denote the kernel of this homomorphism.
The following more explicit description of $I$ will be useful.
\begin{prop}\ellabel{I described}
If $S$ is any finite set of primes containing $p$ and $N$,
then $I$ is generated by the set
$$\{1 + \ell - \mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(\mathfrak{p}athbf Frob_{\ell})) \, | \, \ell \not\in S\}.$$
\end{prop}
\begin{Proof}
Let $I_S$ denote the ideal generated by the stated set.
Clearly $I_S \subset I.$ We will show that the Galois representation
$G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{GL}_2(R/I_S)$ obtained by reducing $\rho^{\mathfrak{p}athrm{univ}}$
modulo $I_S$ is unramified at $N$. It will follow from
Proposition~\ref{mindefiso} that $I \subset I_S$, and the proposition
will be proved. The argument is a variation of that used to prove
Prop.~2.1 of~\cite{skiles}.
Suppose first that $p$ is odd. Let us choose a basis for $V^{\mathrm{univ}}$,
and write
$$\rho^{\mathfrak{p}athrm{univ}}(\sigma) = \elleft( \begin{matrix} a({\sigma}) & b(\sigma) \\
c(\sigma) & d(\sigma)\end{matrix} \right),$$
for $\sigma \in G_{\mathfrak{p}athbf Q}$. We may assume that if $c \in G_{\mathfrak{p}athbf Q}$ denotes
complex conjugation, then
$$\rho^{\mathfrak{p}athrm{univ}}(c) = \elleft( \begin{matrix} -1 & 0 \\ 0 & 1 \end{matrix}
\right).$$
We find that
$$a(\sigma) = \dfrac{1}{2}\elleft( \mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(\sigma))
- \mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(c \sigma)) \right),$$
and that
$$d(\sigma) = \dfrac{1}{2}\elleft( \mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}( \sigma))
+ \mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(c \sigma))\right).$$
Thus
$$a(\sigma) \equiv \chi_p(\sigma) \pmod{I_S},$$
whilst
$$d(\sigma) \equiv 1 \pmod{I_S}.$$
In particular, if $\sigma \in I_N,$ then
$$\rho^{\mathfrak{p}athrm{univ}}(\sigma) \equiv \elleft( \begin{matrix} 1 & b(\sigma) \\
c(\sigma) & 1 \end{matrix} \right) \pmod{I_S}.$$
The universal $I_N$-fixed line is spanned by a vector
of the form $(1,x),$ where $x \in R^{\times}$.
We conclude that if $\sigma \in I_N$ then
$$(1 + b(\sigma) x, c(\sigma) + x) \equiv (1,x) \pmod{I_S},$$
and thus that
$$b(\sigma) \equiv c(\sigma) \equiv 0 \pmod{I_S}.$$
This implies that $\rho^{\mathfrak{p}athrm{univ}} \pmod{I_S}$ is unramified at $N$,
as required.
Consider now the case $p = 2$.
Again, we write
$$\rho^{\mathfrak{p}athrm{univ}}(\sigma) = \elleft( \begin{matrix} a({\sigma}) & b(\sigma) \\
c(\sigma) & d(\sigma)\end{matrix} \right),$$
for $\sigma \in G_{\mathfrak{p}athbf Q}$. We may assume that if $c \in G_{\mathfrak{p}athbf Q}$ denotes
complex conjugation, then
$$\rho^{\mathfrak{p}athrm{univ}}(c) = \elleft( \begin{matrix} -1 & -1 \\ 0 & 1 \end{matrix}\right).$$
We may also assume that the universal $I_N$-fixed line is spanned by
the vector $(0,1)$.
By considering $\mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(c \sigma)),$ for $\sigma \in G_{\mathfrak{p}athbf Q},$
we find that
$$-a(\sigma) - c(\sigma) + d(\sigma) \equiv 1 - \chi_2(\sigma) \pmod{I_S}.$$
If $\sigma \in I_N,$ then since $\sigma$ fixes $(0,1),$ we find
that
$$b(\sigma) \equiv 0, \quad d(\sigma) \equiv 1 \pmod{I_S}.$$
The preceding equations, the fact that $\det \rho^{\mathrm{univ}} = \chi_2,$
and the fact that $\chi_2(\sigma) = 1$
for $\sigma \in I_N,$ imply that also
$$a(\sigma) \equiv 1, \quad c(\sigma) \equiv 0 \pmod{I_S}.$$
Altogether, we conclude that $\rho^{\mathfrak{p}athrm{univ}} \pmod{I_S}$ is unramified
at $N$, as required.
$
\square \ $
\end{Proof}
\
The preceding result has the following important corollary.
\begin{cor}\ellabel{traces}
If $S$ is any finite set of primes containing $p$ and $N$,
then the complete local $\mathfrak{p}athbf Z_p$-algebra $R$ is topologically
generated by the elements $\mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(\mathfrak{p}athbf Frob_{\ell}))$,
for $\ell \not\in S$.
\end{cor}
\begin{Proof}
This follows immediately from the description of
$I$ provided by Proposition~\ref{I described}, the fact that $R$
is $I$-adically complete, and the fact that $R/I \cong \mathfrak{p}athbf Z_p$.
$
\square \ $
\end{Proof}
\
We now compute the order of $I/I^2$, which is one of the
two ingredients we will eventually use in our verification
of the Wiles-Lenstra numerical criterion.
\begin{theorem}\ellabel{cohomology calculation}
The order of $I/I^2$ $($which is a power of $p)$ divides $(N^2-1)/24$.
\end{theorem}
\begin{Proof}
Let $n$ be a natural number, and let $(V^{\mathrm{min}}_n,\cal Lmin_n,\rho^{\mathfrak{p}athrm{min}}_n)$
denote the reduction modulo $p^n$ of $(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}}).$
We consider extensions of Galois modules
$$0 \rightarrow (V^{\mathrm{min}}_n,\cal Lmin_n)
\rightarrow (E,F) \rightarrow (V^{\mathrm{min}}_n,\cal Lmin_n) \rightarrow 0;$$
here the
notation indicates that $E$ is a $G_{\mathbf Q}$-module that extends $V^{\mathrm{min}}_n$
by itself, and that $F$ is a submodule of $E$ (not assumed to
be Galois invariant) providing an extension
of $\cal Lmin_n$ by itself.
We let $A_n$ denote the group of isomorphism classes of such extensions
for which
$E$ is annihilated by $p^n$, is unramified away from $p$ and $N$,
is finite at $p$, and is Cartier self-dual as an extension of
$V^{\mathrm{min}}$ by itself, and for which $F$ is fixed (element-wise)
by the inertia group $I_N$.
The usual identification of the relative tangent space
to a deformation functor with an appropriate Ext-group in an
appropriate category of Galois modules shows that
$\mathfrak{p}athop{\mathfrak{p}athrm{Hom}}\nolimits(I/I^2, \mathfrak{p}athbf Q_p/\mathfrak{p}athbf Z_p) \cong \displaystyle \lim_{\longrightarrow} A_n.$
\begin{lemma}\ellabel{one} If $(E,F)$ is an object of $A_n$ for which $E$ is
a trivial extension, then the pair $(E,F)$ is also a trivial extension.
\end{lemma}
\begin{Proof}
Let us remind the reader that
if $E$ is the trivial extension of $V^{\mathrm{min}}_n$ by itself,
then the automorphisms of $E$ are of the form
$\elleft( \begin{matrix} \mathfrak{p}athrm{Id} & A \\ 0 & \mathfrak{p}athrm{Id} \end{matrix}
\right),$
where $A$ is an element of $\cal End_{G_{\mathfrak{p}athbf Q}}(V^{\mathrm{min}}).$
This being said, the lemma is easily checked using
Lemma~\ref{minendos}.
Alternatively, we may appeal to Proposition~\ref{mindefiso}.
Since $E$ is assumed to be a trivial extension, it is in
particular unramified at $N$, and thus corresponds to a
deformation for the subproblem $\mathfrak{p}athcal Defmin$ of $\mathfrak{p}athcal Def$. The triviality of
$E$ implies that this deformation is trivial, when regarded as an
deformation for the problem $\mathfrak{p}athcal Defminprime.$ Since $\mathfrak{p}athcal Defmin$ maps isomorphically
to $\mathfrak{p}athcal Defminprime$, we obtain the assertion of the lemma.
$
\square \ $
\end{Proof}
\
If $(E,F)$ is an object of $A_n,$ then since $\mathfrak{p}athbf Z/p^n$ (with
the trivial $G_{\mathfrak{p}athbf Q}$-action) is a quotient of $V^{\mathrm{min}}_n$, whilst
$\mathfrak{p}u_{p^n}$ (with its natural $G_{\mathfrak{p}athbf Q}$-action) is a submodule
of $V^{\mathrm{min}}_n$, the extension $E$ determines an extension
$E'$ of $G_{\mathfrak{p}athbf Q}$ modules
\begin{equation}\ellabel{extension}
0 \rightarrow \mathfrak{p}athbf Z/p^n \rightarrow E' \rightarrow \mathfrak{p}u_{p^n}
\rightarrow 0.\end{equation}
\begin{lemma}\ellabel{two} If $(E,F)$ is an object of $A_n$ for
which the extension $E'$ is trivial, then $E$ is also a trivial
extension.
\end{lemma}
\begin{Proof}
We let $D_n$ denote the (unique, by Proposition~\ref{unique prol})
prolongation of $V^{\mathrm{min}}_n$ to a finite flat group scheme over $\mathfrak{p}athbf Z[1/N]$.
Proposition~\ref{uniqueness} shows that $E$ has a unique prolongation
to a finite flat group scheme $\cal E$ over $\mathfrak{p}athbf Z[1/N]$, that provides
an extension of $D_n$ by itself.
We let $D_n^{(1)}$ denote the copy of $D_n$ that appears
as a submodule of $\cal E$, and let $D_n^{(2)}$ denote the copy
of $D_n$ that appears as a quotient. Also, we let $\mathfrak{p}u_{p^n}^{(i)}$
(respectively $(\mathfrak{p}athbf Z/p^n)^{(i)}$) denote the copy of $\mathfrak{p}u_{p^n}$
(respectively $\mathfrak{p}athbf Z/p^n$) that appears as a subgroup scheme (respectively
a quotient group scheme) of $D_n^{(i)}$, for $i = 1,2$.
The quotient ${\cal E}/\mathfrak{p}u_{p^n}^{(1)}$ is an extension of $D_n^{(2)}$ by
$(\mathfrak{p}athbf Z/p^n)^{(1)}$. Thus it yields a class
$e \in \cal Ext_{\mathfrak{p}athbf Z[1/N]}(D_n^{(2)},(\mathfrak{p}athbf Z/p^n)^{(1)})$.
This latter group sits in the exact sequence
$$\cal Ext_{\mathfrak{p}athbf Z[1/N]}((\mathfrak{p}athbf Z/p^n)^{(2)},(\mathfrak{p}athbf Z/p^n)^{(1)}) \rightarrow
\cal Ext_{\mathfrak{p}athbf Z[1/N]}(D_n^{(2)},(\mathfrak{p}athbf Z/p^n)^{(1)}) \rightarrow
\cal Ext_{\mathfrak{p}athbf Z[1/N]}(\mathfrak{p}u_{p^n}^{(2)},(\mathfrak{p}athbf Z/p^n)^{(1)}).$$
By assumption the image of $e$ under the second arrow vanishes,
and thus there is a class
$e' \in \cal Ext_{\mathfrak{p}athbf Z[1/N]}((\mathfrak{p}athbf Z/p^n)^{(2)},(\mathfrak{p}athbf Z/p^n)^{(1)})$
that maps to $e$ under the first arrow.
We can construct such an extension class $e'$ concretely as follows:
we may choose a lift of $\mathfrak{p}u_{p^n}^{(2)}$ to a subgroup scheme $\mathfrak{p}u$
of ${\cal E}/\mathfrak{p}u_{p^n}^{(1)}$. The quotient
$({\cal E}/\mathfrak{p}u_{p^n}^{(1)})/\mathfrak{p}u$
is then an extension of $(\mathfrak{p}athbf Z/p^n)^{(2)}$ by $(\mathfrak{p}athbf Z/p^n)^{(1)}$,
which gives a realisation of a class $e'$ mapping to $e$.
Our assumption on the submodule $F$ of $E$ implies that
it maps surjectively onto
$(E/\mathfrak{p}u_{p^n}^{(1)})/\mathfrak{p}u$
(the generic fibre of
$({\cal E}/\mathfrak{p}u_{p^n}^{(1)})/\mathfrak{p}u$),
and thus that the action of inertia at $N$ on
$(E/\mathfrak{p}u_{p^n}^{(1)})/\mathfrak{p}u$ is trivial. Thus
$({\cal E}/\mathfrak{p}u_{p^n}^{(1)})/\mathfrak{p}u$ has a prolongation
to a finite flat group scheme over $\mathfrak{p}athbf Z$, yielding an extension of
$\mathfrak{p}athbf Z/p^n$ by itself. There are no such non-trivial extensions
that are finite flat over $\mathfrak{p}athbf Z$, and thus the extension class $e'$ is
trivial. Hence the extension class $e$ is also trivial, and so
${\cal E}/\mathfrak{p}u_{p^n}^{(1)}$ is a split extension of $D_n^{(2)}$ by
$(\mathfrak{p}athbf Z/p^n)^{(1)}$.
If ${\cal E}'$ denotes the preimage in ${\cal E}$ of the subgroup $\mathfrak{p}u_{p^n}^{(2)}
\subset D_n^{(2)}$, then (since ${\cal E}$ is Cartier self-dual),
we find that ${\cal E}'$ is Cartier dual to ${\cal E}/\mathfrak{p}u_{p^n}^{(1)}$.
The result of the preceding paragraph thus shows that
${\cal E}'$ is a split extension of $\mathfrak{p}u_{p^n}^{(2)}$ by $D_n^{(1)}$.
Consider the exact sequence
$$\cal Ext_{\mathfrak{p}athbf Z[1/N]}((\mathfrak{p}athbf Z/p^n)^{(2)},D_n^{(1)}) \rightarrow
\cal Ext_{\mathfrak{p}athbf Z[1/N]}(D_n^{(2)},D_n^{(1)})\rightarrow
\cal Ext_{\mathfrak{p}athbf Z[1/N]}(\mathfrak{p}u_{p^n}^{(2)},D_n^{(1)}).$$
If $e''$ denotes the class of ${\cal E}$ in the middle group,
then we have just seen that its image under the second arrow
vanishes. Thus we may find a class
$e''' \in \cal Ext_{\mathfrak{p}athbf Z[1/N]}((\mathfrak{p}athbf Z/p^n)^{(2)},D_n^{(1)})$
mapping to $e''$ under the first arrow.
We can construct such a class $e'''$ concretely as follows:
lift $\mathfrak{p}u_{p^n}^{(2)}$ to a subgroup scheme $\mathfrak{p}u'$ of ${\cal E}'$.
The quotient ${\cal E}/\mathfrak{p}u'$ then provides an extension
of $(\mathfrak{p}athbf Z/p^n)^{(2)}$ by $D_n^{(1)}$ whose extension class
$e'''$ maps to $e''$. Our assumption on $F$ implies
that its image in $E/\mathfrak{p}u'$ (the generic fibre of ${\cal E}/\mathfrak{p}u'$)
has non-zero image in $(\mathfrak{p}athbf Z/p^n)^{(2)}$, and thus that
inertia at $N$ acts trivially on $E/\mathfrak{p}u'$,
and so ${\cal E}/\mathfrak{p}u'$ has a prolongation to a finite flat group
scheme over $\mathfrak{p}athbf Z$ that extends $\mathfrak{p}athbf Z/p^n$ by $D_n$.
Lemma~\ref{Exts} shows that any such extension is split,
and thus that $e'''$ vanishes. Consequently $e''$ also
vanishes, and so $E$ is a split extension, as claimed.
$
\square \ $
\end{Proof}
\
Let $B_n$ denote the set of extensions
of $G_{\mathfrak{p}athbf Q}$-modules of the form~(\ref{extension}) that
are unramified away from $p$ and $N$,
and that prolong over $\mathfrak{p}athbf Z_p$ to an extension of the finite flat group scheme
$\mathfrak{p}u_{p^n}$ by the finite flat group scheme $\mathfrak{p}athbf Z/p^n$.
\begin{lemma}\ellabel{three} If $n$ is sufficiently large,
then the natural map $B_n \rightarrow B_{n+1}$ is an isomorphism,
and each side has order at most the $p$-power part of $(N^2-1)/24$.
\end{lemma}
\begin{Proof}
Let $\Sigma = \{p,N,\infty\},$ and let $G_{\Sigma}$ denote the
Galois group of the maximal extension of $\mathfrak{p}athbf Q$ in $\overlineerline{\mathfrak{p}athbf Q}$
unramified away from the elements of $\Sigma$.
Extensions of the form~(\ref{extension}) are classified by
the Galois cohomology group
$H^1(G_{\Sigma}, \mathfrak{p}u_{p^n}^{\otimes -1}).$
If such an extension prolongs to an extension of finite flat
groups over $\mathfrak{p}athbf Z_p$, then it is in fact trivial locally at $p$,
since the connected group scheme $\mathfrak{p}u_{p^n}$ cannot have
a non-trivial extension over the \'etale group scheme $\mathfrak{p}athbf Z/p^n$.
Thus $B_n$
is equal to the kernel of the natural map
$$H^1(G_{\Sigma}, \mathfrak{p}u_{p^n}^{\otimes -1})\rightarrow
H^1(G_{\mathfrak{p}athbf Q_p}, \mathfrak{p}u_{p^n}^{\otimes -1}).$$
Let $K_n$ denote the extension of $\mathfrak{p}athbf Q$ obtained by adjoining
all $p^n$th roots of unity in $\overlineerline{\mathfrak{p}athbf Q}$. Let $H$ denote the
normal subgroup of $G_{\Sigma}$ which fixes $K_n$; the quotient
$G_{\Sigma}/H$ is naturally isomorphic to $(\mathfrak{p}athbf Z/p^n)^{\times}$.
The prime $p$ is totally ramified in $K_n$. Thus, if $\pi$ denotes
the unique prime of $K_n$ lying over $p$, the quotient
$G_{\mathfrak{p}athbf Q_p}/G_{K_{n,\pi}}$ also maps isomorphically to $(\mathfrak{p}athbf Z/p^n)^{\times}$.
The inflation-restriction exact sequence gives a diagram
$$\xymatrix{
0 \ar[r] & H^1((\mathfrak{p}athbf Z/p^n)^{\times},\mathfrak{p}u_{p^n}^{\otimes {-1}}) \ar[r]\ar@{=}[d] &
H^1(G_{\Sigma},\mathfrak{p}u_{p^n}^{\otimes{-1}}) \ar[r]\ar[d] &
H^1(H,\mathfrak{p}u_{p^n}^{\otimes{-1}})^{(\mathfrak{p}athbf Z/p^n)^{\times}} \ar[d] \\
0 \ar[r] & H^1((\mathfrak{p}athbf Z/p^n)^{\times},\mathfrak{p}u_{p^n}^{\otimes {-1}})\ar[r] &
H^1(G_{\mathfrak{p}athbf Q_p},\mathfrak{p}u_{p^n}^{\otimes{-1}}) \ar[r] &
H^1(G_{K_{n,\pi}},\mathfrak{p}u_{p^n}^{\otimes{-1}})^{(\mathfrak{p}athbf Z/p^n)^{\times}}.}$$
Taking into account the discussion of the preceding paragraph,
this diagram in turn induces an injection
$$B_n \hookrightarrow
\mathop{\mathrm{ker}}\nolimits\,\big(
H^1(H,\mu_{p^n}^{\otimes{-1}})^{(\mathbf Z/p^n)^{\times}} \rightarrow
H^1(G_{K_{n,\pi}},\mu_{p^n}^{\otimes{-1}})^{(\mathbf Z/p^n)^{\times}}\big).$$
Since $H$ acts trivially on $\mathfrak{p}u_{p^n}^{\otimes -1}$,
there is an isomorphism
$$H^1(H,\mathfrak{p}u_{p^n}^{\otimes{-1}})^{(\mathfrak{p}athbf Z/p^n)^{\times}} \cong
\mathfrak{p}athop{\mathfrak{p}athrm{Hom}}\nolimits_{(\mathfrak{p}athbf Z/p^n)^{\times}}(H,\mathfrak{p}u_{p^n}^{\otimes -1}).$$
Thus $B_n$ injects into the subgroup of
$\mathfrak{p}athop{\mathfrak{p}athrm{Hom}}\nolimits_{(\mathfrak{p}athbf Z/p^n)^{\times}}(H,\mathfrak{p}u_{p^n}^{\otimes -1})$
consisting of homomorphisms that are trivial on
$G_{K_{n,\pi}}.$
Any element of $\mathfrak{p}athop{\mathfrak{p}athrm{Hom}}\nolimits_{(\mathfrak{p}athbf Z/p^n)^{\times}}(H,\mathfrak{p}u_{p^n}^{\otimes -1})$
that is trivial on $G_{K_{n,\pi}}$
factors through the Galois group $\mathfrak{p}athrm{Gal}(L_n/K_n)$, where $L_n$
is the extension of $K_n$ defined in the statement of the following
lemma.
Lemma~\ref{three} is now seen to follow from the conclusion of that lemma.
$
\square \ $
\end{Proof}
\begin{lemma}\ellabel{four} Let $L_n$ denote the maximal abelian extension
of $K_n$ of degree dividing $p^n$ that is unramified away from $N$,
in which the prime lying over $p$ splits completely,
and on whose Galois group $(\mathfrak{p}athbf Z/p^n)^{\times}
= \mathfrak{p}athrm{Gal}(K_n/\mathfrak{p}athbf Q)$ acts via $\chi_p^{-1}$. Then $L_n$ is a cyclic
extension of $K_n$, and the degree
$[L_n:K_n]$ divides the $p$-power part of $(N^2 - 1)/24$.
\end{lemma}
\begin{Proof}
Let $\zeta$
be a choice of primitive $p^n$th root of unity.
If $\mathfrak{p}athcal O_n$ denotes
the ring of integers in $K_n$, then $1-\zeta$ generates the
unique prime ideal of $\mathfrak{p}athcal O_n$ lying above $p$.
Let $((\mathfrak{p}athcal O_n/N)^{\times}/p^n)_{(-1)}$ denote the maximal quotient of
$(\mathfrak{p}athcal O_n/N)^{\times}/p^n$ on which $\mathfrak{p}athrm{Gal}(K_n/\mathfrak{p}athbf Q)$ acts via $\chi_p^{-1}$.
Since Herbrand's criterion shows that the $\chi_p^{-1}$-eigenspace
in the $p$-part of the class group of $K_n$ vanishes,
global class field theory shows that the Galois
group of $L_n/K_n$ is equal to
the cokernel of the composite
$$\mathfrak{p}athcal O_n^{\times}[1-\zeta,(1-\zeta)^{-1}] \rightarrow (\mathfrak{p}athcal O_n/N)^{\times}
\rightarrow ((\mathfrak{p}athcal O_n/N)^{\times}/p^n)_{(-1)}.$$
Fix a prime $\mathfrak{p}athfrak n$ of $K_n$ lying over $N$. The inclusion
$(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times} \hookrightarrow (\mathfrak{p}athcal O_n/N)^{\times}$
induces an isomorphism
\begin{equation}\ellabel{induction formula}
(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}/(p^n,N^2-1)
\cong ((\mathfrak{p}athcal O_n/N)^{\times}/p^n)_{(-1)}.
\end{equation}
Since $(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}$ is a cyclic group,
we conclude that $((\mathfrak{p}athcal O_n/N)^{\times}/p^n)_{(-1)}$
has order bounded by the $p$-part of $N^2-1$.
Thus if $p \geq 5$ the lemma is proved.
We now perform a more refined analysis, which will prove
the lemma in the remaining cases (i.e.~$p = 2$ or 3).
A simple computation shows that under the isomorphism~(\ref{induction formula}),
the subgroup of
$$((\mathfrak{p}athcal O_n/N)^{\times}/p^n)_{(-1)}$$ generated by $(1-\zeta)$
corresponds to the subgroup of $(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}/(p^n,N^2-1)$
generated by
$$\prod_{a \in (\mathbf Z/p^n)^{\times}/\langle N \rangle } (1 - \zeta^a)^a$$
(where $\langle N \rangle$ denotes the cyclic subgroup of $(\mathbf Z/p^n)^{\times}$
generated by $N \bmod p^n$).
Suppose first that $p$ is odd, so that the order of $(\mathbf Z/p)^{\times}/\langle N \rangle$
is prime to $p$. Let $c$ denote this order. Also, write
$N = \omega(N) N_1$ in $\mathfrak{p}athbf Z_p$, where $\omega(N)$ is the Teichm\"uller lift
and $N_1$ is a 1-unit. If $p^f$ denotes the exact power of $p$ dividing
$N^2-1$, and $p^{f'}$ denotes
the exact power of $p$ dividing $N_1 - 1$,
then $f' \geq f,$ with equality if $p = 3$. Let us assume that
$n \geq f'$, so that
$(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}/(p^n,N^2-1)$ is cyclic of order $p^f,$
generated by the image of $\zeta$, or of $-\zeta$.
Since $2c$ is prime to $p,$
the subgroup of $(\mathcal O_n/\mathfrak n)^{\times}/(p^n,N^2-1)$
generated by
$$\prod_{a \in (\mathbf Z/p^n)^{\times}/\langle N \rangle } (1 - \zeta^a)^a$$
coincides with the subgroup generated by
$$\left( \prod_{a \in (\mathbf Z/p^n)^{\times}/\langle N \rangle } (1 - \zeta^a)^a
\right)^{2c}
= \left( \prod_{a \in (\mathbf Z/p^n)^{\times}/\langle N_1 \rangle}
(1 - \zeta^a)^a \right)^2$$
$$= \prod_{a \in (\mathbf Z/p^n)^{\times}/\langle N_1 \rangle}
(1 - \zeta^a)^a (1-\zeta^{-a})^{-a}
= \prod_{a \in (\mathbf Z/p^{f'})^{\times}}(-\zeta)^{a^2}.$$
(Here the equalities hold in the quotient
$(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}/(p^n,N^2-1)$.)
If $p \geq 5,$ then since there are quadratic residues distinct from
1 in $(\mathfrak{p}athbf Z/p)^{\times},$ we compute that
$\sum_{a \in (\mathfrak{p}athbf Z/p^{f'})^{\times}} a^2\equiv 0 \pmod{p^{f'}},$
and so
$\prod_{a \in (\mathbf Z/p^n)^{\times}/\langle N \rangle } (1 - \zeta^a)^a$
generates the trivial subgroup of
$(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}/(p^n,N^2-1)$. In this case, our ``refined analysis''
adds no further restrictions to the degree of $L_n$ over $K_n$.
However, if $p = 3$, then 1 is the only quadratic residue in
$(\mathfrak{p}athbf Z/3)^{\times},$ and one computes that the power of $3$ dividing
$\sum_{a \in (\mathfrak{p}athbf Z/p^{f'})^{\times}} a^2$ is exactly
$3^{f' -1 } = 3^{f-1}$. Thus we find that the degree
$[L_n:K_n]$ is bounded above by $3^{f-1}.$ This is the exact power of
3 dividing $(N^2-1)/24$, and thus we have proved the lemma in the case
$p=3$.
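To illustrate the divisibility just used (this numerical check plays no role in the argument): for $f' = 2$ the units in $\mathbf Z/9$ are $1,2,4,5,7,8$, and
$$\sum_{a \in (\mathbf Z/9)^{\times}} a^2 = 1 + 4 + 16 + 25 + 49 + 64 = 159 = 3\cdot 53,$$
whose $3$-adic valuation is exactly $1 = f'-1$; similarly, for $f' = 1$ one finds $1 + 4 = 5$, of valuation $0$.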
Suppose now that $p=2$. Write $N = \pm 1 \cdot N_1,$ where $N_1 \equiv 1
\pmod 4$. Let $2^f$ be the exact power of 2 dividing $N^2-1,$
and let $2^{f'}$ be the exact power of 2 dividing $N_1 - 1$.
Note that $f = f' + 1.$
Also, assume that $n \geq f$. In particular, $n \geq 2$,
and so $-\zeta$ is also a primitive $2^n$th root of unity.
The quotient $(\mathfrak{p}athcal O_n/\mathfrak{p}athfrak n)^{\times}/(2^n,N^2-1)$ is then cyclic
of order $2^f$, generated by $\zeta,$ or by $-\zeta$.
We may rewrite
$\prod_{a \in (\mathbf Z/2^n)^{\times}/\langle N \rangle } (1 - \zeta^a)^a$
in the form
$$\prod_{a \in (\mathbf Z/2^n)^{\times}/\langle N \rangle } (1 - \zeta^a)^a
=\prod_{a \in (1 + 4\mathbf Z/2^n)/\langle N_1 \rangle}
(1 - \zeta^a)^a (1-\zeta^{-a})^{-a}
= \prod_{a \in (1+4\mathbf Z/2^n)/(1+2^{f'}\mathbf Z/2^n)} (-\zeta)^{a^2}.$$
One computes that the largest power of 2 dividing
$\sum_{a \in (1 + 4\mathfrak{p}athbf Z/2^n)/(1+2^{f'}\mathfrak{p}athbf Z/2^n)} a^2$
is $2^{f'-2} = 2^{f-3}$. Thus the degree $[L_n:K_n]$ is bounded above
by $2^{f-3}$.
This is the exact power of $2$
dividing $(N^2-1)/24$, and so we have proved the lemma
in the case $p=2$.
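For instance (a numerical check only, made under the assumption $f' = 4$, i.e.\ $N_1 \equiv 1 \pmod{16}$ but $N_1 \not\equiv 1 \pmod{32}$): the classes in $(1+4\mathbf Z/2^n)/(1+2^{f'}\mathbf Z/2^n)$ are represented by $1, 5, 9, 13$, and
$$1^2 + 5^2 + 9^2 + 13^2 = 276 = 2^2\cdot 69,$$
whose $2$-adic valuation is exactly $2 = f'-2 = f-3$, as claimed.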
$
\square \ $
\end{Proof}
\
{\em Conclusion of proof of Theorem~\ref{cohomology calculation}:}
Lemmas~\ref{one} and~\ref{two} yield a natural injection
$\mathop{\mathrm{Hom}}\nolimits(I/I^2,\mathbf Q_p/\mathbf Z_p) \cong \displaystyle \lim_{\longrightarrow} A_n \hookrightarrow \displaystyle \lim_{\longrightarrow} B_n.$
Together with Lemma~\ref{three},
this implies Theorem~\ref{cohomology calculation}.
$
\square \ $
\end{Proof}
\
The reduced Zariski tangent space
of the deformation ring $R$ can be computed via a calculation
similar to that used to prove Theorem~\ref{cohomology calculation}.
We state the result here, but postpone the details of the calculation
to the following sections. (See Proposition~\ref{prop:tangent space at 2}
for the case $p=2$, and
Proposition~\ref{prop:tangent space at odd p}
for the case of odd $p$.)
\begin{prop}\ellabel{tangent space}
If $\mathfrak{p}$ denotes the maximal ideal of $R$, then
the reduced Zariski tangent space $\mathfrak{p}/(\mathfrak{p}^2,p)$ of $R$ is of
dimension at most one over $\mathfrak{p}athbf F_p$.
More precisely,
$\mathfrak{p}/(\mathfrak{p}^2,p)$ vanishes unless
$p$ divides the numerator of $(N-1)/12$,
in which case it has dimension one over $\mathfrak{p}athbf F_p$.
\end{prop}
Having introduced the deformation ring $R$,
we now turn to constructing the corresponding Hecke ring $\mathfrak{p}athbf T$.
We consider the space $M_2(N)$ of all modular forms of weight
two on $\Gamma_0(N)$ defined over $\overlineerline{\mathfrak{p}athbf Q}_p$,
and the commutative $\mathfrak{p}athbf Z_p$-algebra $H$ of endomorphisms
of $M_2(N)$ generated by the Hecke operators $T_n$.
We define the $p$-Eisenstein maximal ideal of the algebra $H$ to be the
ideal generated by the elements $T_n - \sigma^*(n)$ (where
$\sigma^*(n) = \sum_{0 < d \mid n,\ (d,N)=1} d$
for any positive integer $n$)
together with the prime $p$, and let $\mathfrak{p}athbf T$ denote the completion
of~$H$ at its $p$-Eisenstein maximal ideal.
Then $\mathfrak{p}athbf T$ is a reduced $\mathfrak{p}athbf Z_p$-algebra.
We let $J$ denote the kernel of the surjection $\mathfrak{p}athbf T \rightarrow \mathfrak{p}athbf Z_p$
describing the action of $\mathfrak{p}athbf T$ on the Eisenstein series $E^*_2$,
let $\mathfrak{p}athbf T^0$ denote
the quotient of $\mathfrak{p}athbf T$ that acts faithfully on cuspforms,
and let $J^0$ denote the image of $J$ in $\mathfrak{p}athbf T^0$.
(This is the localisation at $p$ of the famous
Eisenstein ideal of \cite{eisenstein}.)
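For orientation we recall (with the normalisation of \cite{eisenstein}; we make no explicit use of this expansion) that $E^*_2$ has $q$-expansion
$$E^*_2 = \frac{N-1}{24} + \sum_{n \geq 1} \sigma^*(n) q^n,$$
so that $T_n E^*_2 = \sigma^*(n) E^*_2$ for all $n \geq 1$; this explains why the surjection $\mathbf T \rightarrow \mathbf Z_p$ just defined sends $T_n$ to $\sigma^*(n)$.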
\begin{lemma}\ellabel{congruence modulus}
The order of $\mathfrak{p}athbf T^0/J^0$ $($which is a power of $p)$
is equal to the $p$-power part of the
numerator of $(N-1)/12$.
\end{lemma}
\begin{Proof}
This is Proposition~II.9.7 of \cite{eisenstein}.
$
\square \ $
\end{Proof}
\begin{prop}\ellabel{Hecke rep}
There is an object $(V,L,\rho)$ of $\mathfrak{p}athcal Def(\mathfrak{p}athbf T)$, uniquely
determined by the property that $\mathfrak{p}athbf Trace(\rho(\mathfrak{p}athbf Frob_{\ell}))
= T_{\ell}$, for $\ell \neq p,N$. Furthermore,
the diagram
$$\xymatrix{R \ar[d] \ar[rr] && \mathfrak{p}athbf T \ar[d] \\
R/I \ar@{=}[r] & \mathfrak{p}athbf Z_p \ar@{=}[r] & \mathfrak{p}athbf T/J}$$
is commutative.
\end{prop}
\begin{Proof}
Since, by Corollary~\ref{traces}, the universal deformation ring
$R$ is generated by the traces $\mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(\mathfrak{p}athbf Frob_{\ell}))$,
there is at most one object $(V,L,\rho)$ of $\mathfrak{p}athcal Def(\mathfrak{p}athbf T)$ satisfying the
condition $\mathfrak{p}athbf Trace(\rho(\mathfrak{p}athbf Frob_{\ell})) = T_{\ell}$ for $\ell \neq p,N$.
This gives the uniqueness statement of the proposition.
In order to construct the required object $(V,L,\rho)$,
we proceed in several steps.
\begin{lemma}\ellabel{five}
Let $\overline{V}'$ be a two dimensional continuous $G_{\mathfrak{p}athbf Q}$-module
over a finite extension $k$ of $\mathfrak{p}athbf F_p$. Suppose that $\overline{V}'$ is finite at $p$,
unramified away from $p$ and $N$, contains an $I_N$-fixed
line that is {\em not} $G_{\mathfrak{p}athbf Q}$-stable, and has semi-simplification
isomorphic to the semi-simplification of $k\otimes_{\mathfrak{p}athbf F_p}\overline{V}.$
Then $\overline{V}'\cong k\otimes_{\mathbf F_p} \overline{V}.$
\end{lemma}
\begin{Proof}
Since $\overline{V}'$ and $k\otimes_{\mathfrak{p}athbf F_p}\overline{V}$ have isomorphic semi-simplifications,
we see that $\overline{V}'$ is an extension of one of $k\otimes_{\mathfrak{p}athbf F_p}\mathfrak{p}u_p$ or
$k\otimes_{\mathfrak{p}athbf F_p} \mathfrak{p}athbf Z/p$
(thought of as \'etale groups schemes over $\mathfrak{p}athbf Q$, or equivalently as
$G_{\mathfrak{p}athbf Q}$-representations) by the other. Both these one dimensional
representations are unramified at $N$, and $\overline{V}'$ contains
one or the other as a $G_{\mathfrak{p}athbf Q}$-submodule. It also contains
an $I_N$-fixed line which is {\em not} a $G_{\mathfrak{p}athbf Q}$-submodule.
Thus $\overline{V}'$ is in fact spanned by $I_N$-fixed lines, and so
is unramified at $N$. By assumption it is finite at $p$,
and so it has a prolongation to a finite flat group scheme over $\mathfrak{p}athbf Z$.
If $p$ is odd, then $\overline{V}'$ must prolong to an extension of one
of $k\otimes_{\mathbf F_p}\mu_p$ or $k\otimes_{\mathbf F_p} \mathbf Z/p$
by the other as a group scheme over
$\mathfrak{p}athop{\mathfrak{p}athrm{Spec}}\nolimits \mathfrak{p}athbf Z$ (since by \cite{fontaine}, Thm.~2, $p$-power order group
schemes over $\mathfrak{p}athbf Z$ are determined by their associated Galois
representations). There are no such non-trivial extensions
(\cite{eisenstein}, Ch.~I), and thus $\overline{V}' \cong k\otimes_{\mathfrak{p}athbf F_p}\overline{V}.$
In the case that $p = 2,$ note first that since both
$k\otimes_{\mathfrak{p}athbf F_2}\mathfrak{p}u_2$ and $k\otimes_{\mathfrak{p}athbf F_2}\mathfrak{p}athbf Z/2$
yield the trivial character of $G_{\mathfrak{p}athbf Q}$,
the module $\overline{V}'$ cannot be the direct sum of these
two characters; if it were, every line (including
the $I_N$-fixed line appearing in the statement of the lemma)
would be $G_{\mathfrak{p}athbf Q}$-stable. Taking this into account,
it is easily seen (again using the results of \cite{eisenstein},
Ch.~I) that $\overline{V}' \cong k\otimes_{\mathfrak{p}athbf F_2}\overline{V}$.
$
\square \ $
\end{Proof}
\begin{lemma}\ellabel{six}
Let $K$ be a finite extension of $\mathfrak{p}athbf Q_p,$ with ring of integers $\mathfrak{p}athcal O$.
Let $k$ denote the residue field of $\mathfrak{p}athcal O,$ and let $\mathfrak{p}athcal O'$ denote
the order in $\mathfrak{p}athcal O$ consisting of elements whose image in $k$
lies in the prime subfield $\mathfrak{p}athbf F_p$ of $k$.
Suppose given a two dimensional $K$-vector space $W$,
and a continuous representation $G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{GL}(W)$
that is finite at $p$ $($in the sense that one, or equivalently
any, $G_{\mathfrak{p}athbf Q}$-invariant $\mathfrak{p}athcal O$-lattice in $W$ is finite at $p)$,
semistable at $N$ $($in the sense that $W$ contains an $I_N$-fixed
line$)$, unramified away from $p$ and $N$, such that the semi-simple
residual representation attached to $W$ is isomorphic to the
direct sum of the trivial character and the mod $p$ cyclotomic character.
If $W$ is irreducible, then we may find a free $\mathfrak{p}athcal O'$-module of rank
two $V$, equipped with a continuous representation $\rho:
G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{GL}(V)$, and containing an $I_N$-fixed line $L$,
such that the triple $(V,L,\rho)$ deforms $(\overline{V},\cal Lbar,\overlineerline{\rho})$,
and such that $K\otimes_{\mathfrak{p}athcal O'} V \cong W$ as $G_{\mathfrak{p}athbf Q}$-modules.
\end{lemma}
\begin{Proof}
Choose any $G_{\mathfrak{p}athbf Q}$ lattice $V'$ in $W$, and let $L'$ denote
the intersection of $V'$ with the $I_N$-fixed line in $W$.
Since $W$ is irreducible, the line $L'$ is not $G_{\mathfrak{p}athbf Q}$-stable.
Thus we may find a natural number $n$ such that
$L'/p^n$ is $G_{\mathfrak{p}athbf Q}$-stable in $V'/p^n,$ but such that
$L'/p^{n+1}$ is {\em not} $G_{\mathfrak{p}athbf Q}$-stable in $V'/p^{n+1}$.
If we define $V''$ to be the preimage in $V'$ of $L'/p^n$,
then we see that $L'/p$, when regarded as a subspace of
$V''/p,$ is not $G_{\mathfrak{p}athbf Q}$-stable.
Lemma~\ref{five} implies that $V''/p \cong k\otimes_{\mathbf F_p}\overline{V}$.
Using the description of the automorphisms of $k\otimes_{\mathfrak{p}athbf F_p}\overline{V}$
afforded by Lemma~\ref{minendos}, we deduce easily that in
fact there is an isomorphism of pairs $(V''/p,L'/p) \cong k\otimes_{\mathfrak{p}athbf F_p}
(\overline{V},\cal Lbar).$ If we choose a basis for $(V'',L')$ over $\mathfrak{p}athcal O$ that reduces
to an $\mathfrak{p}athbf F_p$ basis for $(\overline{V},\cal Lbar)$, then the $\mathfrak{p}athcal O'$-span of
this basis gives rise to the required pair $(V,L)$.
$
\square \ $
\end{Proof}
\
If $\widetilde{\T}$ denotes the normalisation of $\mathfrak{p}athbf T$, then we may
write $\widetilde{\T} = \prod_{i=1}^d \mathfrak{p}athcal O_i,$
where each $\mathfrak{p}athcal O_i$ is a discrete valuation ring, of finite
index over $\mathfrak{p}athbf Z_p$. The rings $\mathfrak{p}athcal O_i$ are in bijection
with the conjugacy classes of normalised eigenforms $f_i$ in
$M_2(N)$ that satisfy the congruence $f_i \equiv E^*_2 \pmod \mathfrak{p}_i$
(where $\mathfrak{p}_i$ denotes the maximal ideal of $\mathfrak{p}athcal O_i$); as before,
$E^*_2$ denotes the weight two Eisenstein series on $\Gamma_0(N)$.
The ring $\mathfrak{p}athcal O_i$ is the ring of integers in the subfield of $\overlineerline{\mathfrak{p}athbf Q}_p$
generated by the Fourier coefficients of $f_i$.
The injection $\mathfrak{p}athbf T \rightarrow \widetilde{\T} = \prod_{i=1}^d \mathfrak{p}athcal O_i$ is characterised
by the property
$T_n \mathfrak{p}apsto (a_n(f_i))_{i=1,\elldots,d}.$
Note that $E^*_2$ is one such form $f_i$. We may choose the
labeling so that $E^*_2 = f_1$; then $\mathfrak{p}athcal O_1 = \mathfrak{p}athbf Z_p = \mathfrak{p}athbf T/J$.
As in the statement of Lemma~\ref{six}, for each $i = 1,\elldots,d$,
define $\mathfrak{p}athcal O_i'$ to be the order in $\mathfrak{p}athcal O_i$
obtained as the preimage under the map to the residue field
of the prime subfield $\mathfrak{p}athbf F_p$.
By construction $\mathfrak{p}athcal O_i'$ is a complete Noetherian local
ring with residue field $\mathfrak{p}athbf F_p$. Also, the natural map $\mathfrak{p}athbf T \rightarrow \mathfrak{p}athcal O_i$
factors through $\mathfrak{p}athcal O_i'$.
\begin{lemma}\ellabel{seven} For each $i = 1,\elldots,d$,
we may construct an object $(V_i,L_i,\rho_i) \in \mathfrak{p}athcal Def(\mathfrak{p}athcal O_i')$
with the property that $\mathfrak{p}athbf Trace(\rho_i(\mathfrak{p}athbf Frob_{\ell}))$ is equal
to the image of $T_{\ell}$ in $\mathfrak{p}athcal O_i'$, for each $\ell \neq p,N$.
\end{lemma}
\begin{Proof} If $i = 1$, so that $\mathfrak{p}athcal O_i' = \mathfrak{p}athbf Z_p,$ we take
$(V_1,L_1,\rho_1)$ to be the triple $(V^{\mathrm{min}},\cal Lmin,\rho^{\mathfrak{p}athrm{min}}).$
Suppose now that $i\geq 2,$ so that $\mathfrak{p}athcal O_i$ corresponds to a cuspform
$f_i$. If we consider the usual irreducible Galois representation
into $\mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf Q_p\otimes_{\mathfrak{p}athbf Z_p} \mathfrak{p}athcal O_i)$ attached to $f_i$,
and apply Lemma~\ref{six}, then we again obtain the required triple.
$
\square \ $
\end{Proof}
\
{\em Conclusion of proof of Proposition~\ref{Hecke rep}:}
Each of the triples $(V_i,L_i,\rho_i)$ constructed in the
previous lemma corresponds to a homomorphism $\phi_i : R \rightarrow \mathfrak{p}athcal O_i'.$
The product of all these yields a homomorphism
$\phi: R \rightarrow \prod_{i = 1}^d \mathfrak{p}athcal O_i'.$
Since $R$ is topologically generated by the elements
$\mathfrak{p}athbf Trace(\rho^{\mathfrak{p}athrm{univ}}(\mathfrak{p}athbf Frob_{\ell}))$ ($\ell \neq p,N$),
we see that $\phi$ factors through $\mathfrak{p}athbf T$. The map $\phi$ in turn
corresponds to a triple $(V,L,\rho) \in \mathfrak{p}athcal Def(\mathfrak{p}athbf T),$
satisfying the requirements of the proposition.
By construction, the diagram appearing
in the statement of the proposition commutes.
$
\square \ $
\end{Proof}
\
Let $\mathfrak{p}athbf T'$ denote the image in $\mathfrak{p}athbf T$ of the map constructed
in Proposition~\ref{Hecke rep}. Our ultimate goal
is to prove that that map is an isomorphism, and so in
particular that $\mathfrak{p}athbf T' = \mathfrak{p}athbf T$. However, we will proceed in
stages.
Write $J' = \mathfrak{p}athbf T' \bigcap J$,
let $(\mathfrak{p}athbf T')^0$ denote the image of $\mathfrak{p}athbf T'$ in $\mathfrak{p}athbf T^0$,
and let $(J')^0$ denote the image of $J'$ in $(\mathfrak{p}athbf T')^0$.
We have the morphism of short exact sequences
$$\xymatrix{ 0 \ar[r] &\mathfrak{p}athbf T' \ar[r]\ar[d] & \mathfrak{p}athbf Z_p \bigoplus (\mathfrak{p}athbf T')^0 \ar[r]\ar[d] &
(\mathfrak{p}athbf T')^0/(J')^0 \ar[r]\ar[d] & 0 \\
0 \ar[r] & \mathfrak{p}athbf T \ar[r] & \mathfrak{p}athbf Z_p \bigoplus \mathfrak{p}athbf T^0 \ar[r] &
\mathfrak{p}athbf T^0/J^0 \ar[r] & 0 .}$$
Applying the snake lemma we obtain
the following exact sequence:
$$
\begin{matrix} 0 \longrightarrow \mathop{\mathrm{ker}}\nolimits((\mathbf T')^0/(J')^0 \rightarrow \mathbf T^0/J^0)
\longrightarrow \mathop{\mathrm{coker}}\nolimits(\mathbf T' \rightarrow \mathbf T) \qquad \qquad \qquad
& \\ \qquad \qquad \qquad
\longrightarrow \mathop{\mathrm{coker}}\nolimits((\mathbf T')^0 \rightarrow \mathbf T^0)
\longrightarrow
\mathop{\mathrm{coker}}\nolimits((\mathbf T')^0/(J')^0 \rightarrow \mathbf T^0/J^0) \longrightarrow 0.\end{matrix}
$$
We also have the following tautological exact sequence:
$$\begin{matrix} 0 \longrightarrow\mathop{\mathrm{ker}}\nolimits((\mathbf T')^0/(J')^0 \rightarrow \mathbf T^0/J^0)
\longrightarrow (\mathbf T')^0/(J')^0 \qquad
& \\ \qquad \qquad \qquad
\longrightarrow \mathbf T^0/J^0
\longrightarrow
\mathop{\mathrm{coker}}\nolimits((\mathbf T')^0/(J')^0 \rightarrow \mathbf T^0/J^0)\longrightarrow 0.\end{matrix}$$
Thus we find that
\begin{equation}\label{inequality}
\#(\mathbf T')^0/(J')^0 - \#(\mathbf T^0/J^0) = \# \mathop{\mathrm{coker}}\nolimits(\mathbf T' \rightarrow \mathbf T) -
\# \mathop{\mathrm{coker}}\nolimits((\mathbf T')^0 \rightarrow \mathbf T^0).
\end{equation}
Since $\mathfrak{p}athbf T \rightarrow \mathfrak{p}athbf T^0$ is surjective, we conclude that
the right hand side of~(\ref{inequality}) is non-negative,
and thus that the order of $(\mathfrak{p}athbf T')^0/(J')^0$ is at least equal to
that of $\mathfrak{p}athbf T^0/J^0$. By Lemma~\ref{congruence modulus},
the order of this latter group has order equal to the $p$-power part
of the numerator of $(N-1)/12.$ Thus the order of
$(\mathfrak{p}athbf T')^0/(J')^0$ is at least equal to this number.
Suppose now that $N \not\equiv -1 \pmod{2p}$.
The $p$-power part of $(N^2-1)/24$ is then equal
to the $p$-power part of the numerator of $(N-1)/12$.
The numerical criterion of \cite{wiles}
(as strengthened in \cite{lenstra}) thus applies
to show that the surjection $R \rightarrow \mathfrak{p}athbf T'$ of the
preceding proposition is an isomorphism of local complete
intersections.
Furthermore, we conclude that in fact $(\mathfrak{p}athbf T')^0/(J')^0$
has order exactly equal to the power of $p$ dividing the
numerator of $(N-1)/12$, that is, to the order of $\mathfrak{p}athbf T^0/J^0$.
The equation~(\ref{inequality}) then shows
that $\mathfrak{p}athbf T' = \mathfrak{p}athbf T$ if and
only if $(\mathfrak{p}athbf T')^0 = \mathfrak{p}athbf T^0.$
\begin{lemma}\ellabel{eight} The inclusion $\mathfrak{p}athbf T' \rightarrow \mathfrak{p}athbf T$
is an isomorphism.
\end{lemma}
\begin{Proof} It follows from Corollary~\ref{traces}, together with
the construction of the map $R \rightarrow \mathfrak{p}athbf T$ of
Proposition~\ref{Hecke rep}, that $\mathfrak{p}athbf T'$ contains $T_{\ell}$
for all $\ell \neq N, p$. Proposition~\ref{ordinary} shows
that $\rho^{\mathfrak{p}athrm{univ}}$ has a rank one space of $I_p$-coinvariants,
on which $\mathfrak{p}athbf Frob_p$ then acts as multiplication by a unit. It follows from
the construction of $R \rightarrow \mathfrak{p}athbf T$, and the known structure
of Galois representations attached to modular forms, that
the image of this unit in $\mathfrak{p}athbf T$ is equal to the Hecke operator~$T_p.$
Thus $\mathfrak{p}athbf T'$ contains $T_p$.
It remains to show that $T_N$ lies in $\mathfrak{p}athbf T'$. By the remark
preceding the statement of the lemma, it in fact
suffices to show that $T_N$ lies in $(\mathfrak{p}athbf T')^0$.
The surjection $R \rightarrow (\mathfrak{p}athbf T')^0$ induces an object
$(V^0,L^0,\rho^0) \in \mathfrak{p}athcal Def((\mathfrak{p}athbf T')^0)$.
The concrete construction of the map $R \rightarrow \mathfrak{p}athbf T$
(and hence the map $R \rightarrow \mathfrak{p}athbf T^0$) shows that
this representation is built out of Galois representations
attached to cuspforms on $\Gamma_0(N)$, which are
(so to speak) genuinely semi-stable at $N$. In particular,
the line $L^0$ is not only fixed by
$I_N$, but is stable under the decomposition group at $N$.
Standard properties of Galois representations attached to cusp forms
show that the eigenvalue of $\mathfrak{p}athbf Frob_N$ on this line is furthermore
equal to $T_N$. Thus $T_N \in (\mathfrak{p}athbf T')^0,$ and so we see
that $(\mathfrak{p}athbf T')^0 = \mathfrak{p}athbf T^0,$ as required.
$
\square \ $
\end{Proof}
\
The preceding lemma completes the proof of Theorem~\ref{R=T}
in the case when $N \not \equiv -1 \pmod{2p}$.
If, on the other hand, we have $N \equiv -1 \pmod{2p},$
then Proposition~\ref{tangent space} shows that
the Zariski tangent space of $R$ is trivial.
In this case, the map $R\rightarrow \mathfrak{p}athbf Z_p$ is an isomorphism.
Also, Lemma~\ref{congruence modulus} then implies that
$\mathfrak{p}athbf T^0 = 0,$ and hence that $\mathfrak{p}athbf T = \mathfrak{p}athbf Z_p$. Thus the
map $R \rightarrow \mathfrak{p}athbf T$ is certainly an isomorphism in this case,
and we have completely proved Theorem~\ref{R=T} of the introduction.
\
Let us make two remarks:
(A) An alternative approach to proving Proposition~\ref{Hecke rep} is
as follows. The results of \cite{eisenstein}, Section~II.16,
show that if $V^0$ denotes the $p$-Eisenstein part of the $p$-adic Tate module
of $J_0(N)$, then $V^0$ is free of rank two over $\mathfrak{p}athbf T^0$, and
the $G_{\mathfrak{p}athbf Q}$-action on $V^0$
yields a deformation $\rho^0$ of $\overlineerline{\rho}$ over $\mathfrak{p}athbf T^0$. The $I_N$-invariants
in this representation form a rank one free submodule $L^0$ of this
representation. The discussion of \cite{eisenstein}, Section~II.11
shows that both the cuspidal and Shimura subgroup map isomorphically
onto the connected component group of the fibre over $N$ of
the N\'eron model of $J_0(N)$, and this in turn implies that
$(V^0,L^0,\rho^0)$ provides an object of $\mathfrak{p}athcal Def(\mathfrak{p}athbf T^0)$.
Thus we obtain a corresponding map $R \rightarrow \mathfrak{p}athbf T^0$. Taking the
product of this with
the map $R \rightarrow R/I = \mathfrak{p}athbf Z_p,$ we obtain the required map
$R \rightarrow \mathfrak{p}athbf T$ of Proposition~\ref{Hecke rep}.
Finally, the explicit description of $\mathfrak{p}athbf T^0$ provided by
\cite{eisenstein}, Cor.~II.16.2 assures us that the
map $R \rightarrow \mathfrak{p}athbf T$ is surjective.
We have chosen to present the argument given above, rather than this alternative,
both because it is more elementary (the only ingredient
required from \cite{eisenstein}, Ch.~II, is the
computation of the order of $\mathbf T^0/J^0$), and because
we are then able to recover the results of
\cite{eisenstein}, Sections~II.16, II.17, as we
explain below.
(B) In the proof of Lemma~\ref{eight}, we have struggled
slightly to prove that $T_N$ in fact lies in $\mathfrak{p}athbf T'$.
This is somewhat amusing, since in fact
$T_N = 1$ in $\mathfrak{p}athbf T$! This follows from \cite{eisenstein},
Prop.~II.17.10. When $p$ is odd, the argument is
straightforward: namely, since $T_N^2 = 1$ for general
reasons (the Galois representations attached to
modular forms on $\Gamma_0(N)$ are semi-stable at
$N$ and Cartier self-dual), it suffices to note
that $T_N \equiv 1$ modulo the maximal ideal of $\mathfrak{p}athbf T$.
When $p = 2$, Mazur's proof of this result depends
on his detailed analysis of the 2-Eisenstein torsion in
$J_0(N)$. We present an alternative proof below,
using the deformation theoretic techniques of
this paper.
\
We close this section by explaining how Theorem~\ref{R=T}
allows us to recover the main results of Section II of~\cite{eisenstein}.
\begin{cor}\ellabel{gorenstein} The $\mathfrak{p}athbf Z_p$-algebra $\mathfrak{p}athbf T$
$($and consequently also its
quotient $\mathfrak{p}athbf T^0)$ is generated by a {\em single} element over $\mathfrak{p}athbf Z_p$.
In particular, both $\mathfrak{p}athbf T$ and $\mathfrak{p}athbf T^0$ are local complete intersections,
and hence Gorenstein.
\end{cor}
\begin{Proof}
Theorem~\ref{R=T} shows
that it suffices to verify the analogous statement for the
deformation ring $R$. Proposition~\ref{tangent space} shows
that if $\mathfrak{p}$ denotes the maximal ideal of $R$,
then $\mathfrak{p}/(\mathfrak{p}^2,p)$ has dimension at most one over $\mathfrak{p}athbf F_p$,
and the corollary follows.
$
\square \ $
\end{Proof}
\
The fact that $\mathbf T^0$ is monogenic over $\mathbf Z_p$ was originally proved by Mazur
(\cite{eisenstein}, Cor.~16.2).
Since $\mathbf T$ is monogenic over $\mathbf Z_p$, and is equipped with a map
$\mathbf T \rightarrow \mathbf T/J \cong \mathbf Z_p,$ we see that we may write
$\mathbf T \cong \mathbf Z_p[X]/Xf(X),$ where $X$ generates the ideal $J$ in $\mathbf T$,
the polynomial $f(X) \in \mathbf Z_p[X]$ satisfies
$f(X) \equiv X^{g_p} \pmod p$, and there is an isomorphism
$\mathbf T^0 \cong \mathbf Z_p[X]/f(X).$
(Here we follow \cite{eisenstein} in letting $g_p$ denote the rank
of $\mathbf T^0$ over $\mathbf Z_p$.) The image of $X$ in $\mathbf Z_p[X]/f(X)$
generates the ideal $J^0$ in $\mathbf T^0$.
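To make the shape of this presentation concrete, consider the classical example $N = 11$, $p = 5$ (recorded purely as an illustration; nothing below depends on it). Here the numerator of $(N-1)/12 = 10/12$ is $5$, there is a single cuspform of level $11$, and $\mathbf T^0 \cong \mathbf Z_5$ with $J^0$ of index $5$. One may therefore take $g_5 = 1$ and $f(X) = X - 5u$ for some unit $u \in \mathbf Z_5^{\times}$, so that
$$\mathbf T \cong \mathbf Z_5[X]/\bigl(X(X - 5u)\bigr) \cong \{(a,b) \in \mathbf Z_5 \times \mathbf Z_5 \, | \, a \equiv b \bmod 5\},$$
with $X$ mapping to the element $(0,5u)$.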
In \cite{eisenstein}, Prop.~II.18.10, Mazur treats
the question of exhibiting explicit generators of $J^0$
(or equivalently, explicit choices for the element
``$X$'' of the preceding paragraph). We recall his
result here, and give a deformation-theoretic proof.
\begin{prop}\label{prop:good primes}
Suppose that $p$ divides the numerator of $(N-1)/12$.
Let $\ell$ be a prime different from $N$.
Say that $\ell$ is {\em good} $($with respect to
the pair $(p,N))$ if (i) one of $\ell$ or $p$
is odd, $\ell$ is not a $p$th power modulo
$N$, and $(\ell - 1)/2 \not\equiv 0 \pmod p;$
or (ii) $\ell = p = 2$ and $-4$ is not
an $8$th power modulo $N$.
\footnote{This definition originally
appeared in \cite{eisenstein}, p.~124. However,
condition~(ii) is misstated there.}
Then $T_{\ell} - (1 + \ell)$ generates the
ideal $J^0$ if and only if $\ell$ is a good prime.
\end{prop}
\begin{Proof}
Let $R \cong \mathbf T \rightarrow \mathbf F_p[X]/X^2$ be a map that classifies
a (unique up to scaling,
by Proposition~\ref{tangent space}) non-trivial element
in the reduced Zariski tangent space of $R$.
If $\ell$ is distinct from $p$, then
we must show that $T_{\ell} - (1 + \ell)
= \mathbf{Trace}(\rho^{\mathrm{univ}}(\mathbf{Frob}_{\ell})) - (1 + \ell)$ has non-zero
image under this map if and only if $\ell$ is a good prime.
If $\alpha_p\in R \cong \mathbf T$ denotes the
scalar by which $\mathbf{Frob}_p$ acts on the rank one quotient module
of $I_p$-coinvariants of $V^{\mathrm{univ}}$, then $T_p = \alpha_p$, and
so we must also show that $\alpha_p - (1 + p)$ has non-zero image
under this map if and only if $p$ is a good prime.
Both cases follow from Proposition~\ref{prop:tangent space at 2}
in the case when $p=2$, and from
Proposition~\ref{prop:tangent space at odd p}
in the case of odd $p$.
$
\square \ $
\end{Proof}
\
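For a concrete instance of Proposition~\ref{prop:good primes} (again only an illustration, using the same example $N = 11$, $p = 5$ as above): the $5$th powers modulo $11$ are $\{\pm 1\}$, so $\ell = 3$ is not a $5$th power modulo $11$, and $(3-1)/2 = 1 \not\equiv 0 \pmod 5$; thus $\ell = 3$ is a good prime. Since the $T_3$-eigenvalue of the unique newform of level $11$ is $-1$, the element $T_3 - (1+3) = -5$ does indeed generate the ideal $J^0$, which has index $5$ in $\mathbf T^0 \cong \mathbf Z_5$.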
As was remarked upon above, the next result (and the final result
of this section) is also originally due to Mazur.
\begin{prop}\ellabel{T_N = 1}
In $\mathfrak{p}athbf T$ we have the equality $T_N = 1$.
\end{prop}
\begin{Proof} As we recalled above, this result is straightforward
when $p$ is odd. Thus we assume that $p = 2$. The $T_N$-eigenvalue
of $E^*_2$ is equal to 1. Thus, in order to
show that $T_N = 1$, it suffices to show that for each
cuspform $f_i$ ($i = 2,\ldots,d$ -- we are using the notation
introduced during the proof of Proposition~\ref{Hecke rep}),
the image of $T_N$ in $\mathfrak{p}athcal O_i$ is equal to $1$.
If $N \not\equiv 1 \pmod 8$, then there are no cuspforms to
consider, and hence there is nothing to prove. Thus we assume
for the remainder of the argument that $N \equiv 1 \pmod 8$.
Fix a cuspform $f_i$, and let $S$ denote the local ring
$$S = \{(a,b) \in \mathbf Z/4 \times \mathcal O_i/2\mathfrak{p}_i \, | \, a \bmod 2 = b \bmod \mathfrak{p}_i\}.$$
The object $(V^{\mathrm{min}}_2,L^{\mathrm{min}}_2,\rho^{\mathrm{min}}_2) \in \mathcal Def(\mathbf Z/4)$ and
the object in $\mathcal Def(\mathcal O_i'/2\mathfrak{p}_i)$ obtained by reducing modulo
$2\mathfrak{p}_i$ the object
$(V_i,L_i,\rho_i) \in \mathcal Def(\mathcal O_i')$ (the latter was constructed in the
course of proving Proposition~\ref{Hecke rep}) glue to yield
an object $(V,L,\rho) \in \mathcal Def(S)$.
Since $N \equiv 1 \pmod 8,$ we see that $G_{\mathfrak{p}athbf Q_N}$ acts trivially
on $V^{\mathrm{min}}_2$. Since $(V_i,L_i,\rho_i)$ is constructed from
the Galois representation attached to the cuspform $f_i,$ we
see that $G_{\mathfrak{p}athbf Q_N}$ stabilises $L_i,$ and $\mathfrak{p}athbf Frob_N$ acts
as multiplication by $T_N$ on $L_i$. Thus the line $L$ is stabilised by $G_{\mathfrak{p}athbf Q_N}$
(in addition to being fixed by $I_N$), and $\mathfrak{p}athbf Frob_N$ acts
as multiplication by the image of $T_N$ in $S$. If the image of $T_N$ in
$\mathfrak{p}athcal O_i$ is equal to $-1$, then we see that the image of $T_N$
in $S$ is equal to $(1,-1)$.
Now $(1,-1) \not\equiv (1,1) \pmod{2S}$.
Thus the object $(V/2,L/2,\rho/2) \in \mathfrak{p}athcal Def(S/2)$ obtained by
reducing $(V,L,\rho)$ modulo 2 has the property that
$L$ is stable, but not trivial, under the action of $G_{\mathfrak{p}athbf Q_N}$.
On the other hand, Theorem~\ref{p=2 class field
factorisation}, together with Lemma~\ref{lem:info at N}, shows
that there are no elements of $\mathfrak{p}athcal Def(S/2)$. This contradiction
proves the proposition.
$
\square \ $
\end{Proof}
\section{Explicit deformation theory: $p$ = 2}
\ellabel{sec:explicit at 2}
Let us begin by fixing an odd prime $N$,
and recalling some class field theory
of the field $K = \mathfrak{p}athbf Q(\sqrt{(-1)^{(N+1)/2}N}).$
We let $H$ denote the 2-power part of the
strict class group $\mathfrak{p}athbb Cl(\mathfrak{p}athcal O_K)$ of the ring of integers $\mathfrak{p}athcal O_K$ of $K$,
and let $E$ denote the corresponding cyclic 2-power
extension of $K$, which is unramified at all
finite primes.
Genus theory shows that $H$ is cyclic,
and non-trivial. Thus $E$ is a non-trivial
cyclic 2-power extension of $K$; its unique
quadratic subextension is equal to $K(\sqrt{-1}).$
We let $\pi_K$ denote the unique prime
of $K$ lying over 2; its image in $H$
generates the two-torsion subgroup $H[2]$ of $H$.
The following result is classical, but we will
recall a proof for the benefit of the reader.
\begin{prop}\label{prop:2-cft}
The order of $H$ is divisible by four if and only
if $N \equiv 1 \pmod 8.$ The order of $H$ is
divisible by eight if and only if furthermore
$-4$ is an $8$th power modulo $N$.
\end{prop}
\begin{Proof}
If $N \equiv -1 \pmod 4,$ then $K$ is a real
quadratic field. If $E^+$ denotes the 2-Hilbert
class field of $K$ (so $E^+$ is the maximal
totally real subextension of $E$), then
we see that $E$ is equal to the compositum
of $E^+$ and $K(\sqrt{-1}).$ Since $E$ is
cyclic over $K$, we deduce that $E^+$ must
in fact be trivial. Thus in this case
$H$ is of order two.
Suppose now that $N \equiv 1 \pmod 4,$ and
that $E$ contains a degree four sub-extension.
Since $E/K$ is cyclic, this sub-extension is
unique, and hence Galois over $\mathfrak{p}athbf Q$.
It must contain $\mathfrak{p}athbf Q(\sqrt{-1})$,
and one sees easily that it is in fact
a biquadratic extension of $\mathfrak{p}athbf Q(\sqrt{-1})$,
unramified away from $N$.
Since it is Galois over $\mathfrak{p}athbf Q$, it must be
of the form
$\mathfrak{p}athbf Q(\sqrt{-1},\sqrt{\nu}, \sqrt{\bar{\nu}}),$
where $\nu$ is an element of $\mathfrak{p}athbf Z[\sqrt{-1}]$
(and $\bar{\nu}$ is its conjugate) satisfying
$\nu\bar{\nu} = N.$
However, for the extension
$\mathfrak{p}athbf Q(\sqrt{-1},\sqrt{\nu},\sqrt{\bar{\nu}})/\mathfrak{p}athbf Q(\sqrt{-1})$
to actually be unramified at 2, it must be that
$\nu \equiv 1 \pmod 4.$ The element $\nu$ can
be chosen in this manner if and only
if $N \equiv 1 \pmod 8.$ Thus we see that $E$
has a degree four subfield if and only
if $N$ satisfies this congruence.
Finally, let us consider the question of whether
the order of $H$ is divisible by eight. This
is the case if and only if the two-torsion
subgroup $H[2]$ of $H$ has trivial image in
$H/4$; equivalently, if and only if $\pi_K$
has trivial image in $H/4$.
This holds, in turn,
if and only if $\pi_K$ splits
completely in
$\mathfrak{p}athbf Q(\sqrt{-1},\sqrt{\nu}, \sqrt{\bar{\nu}}).$
Clearly, this is true if and only if
the ideal $(1+i)$ splits completely
in this field, regarded as an extension of $\mathfrak{p}athbf Q(\sqrt{-1}).$
This holds, in turn, if and only if
$(1+i)$ is a quadratic residue modulo $\nu$
(or equivalently modulo $\bar{\nu}$).
Raising to $4$th powers, and taking
into account the isomorphism
$\mathfrak{p}athbf Z/N \cong \mathfrak{p}athbf Z[\sqrt{-1}]/\nu,$
we see that this is equivalent
to $-4$ being an $8$th power modulo $N$.
$
\square \ $
\end{Proof}
\
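As a numerical illustration of Proposition~\ref{prop:2-cft} (an aside; these values are not used in the sequel): for $N = 17$ the $8$th powers modulo $17$ form the subgroup $\{\pm 1\}$ of the cyclic group $(\mathbf Z/17)^{\times}$ of order $16$, and $-4 \equiv 13$ does not lie in it, so the order of $H$ is divisible by $4$ (as $17 \equiv 1 \pmod 8$) but not by $8$. For $N = 41$ the $8$th powers form the subgroup of order $5$, and $(-4)^5 = -1024 \equiv 1 \pmod{41}$, so $-4$ is an $8$th power modulo $41$ and the order of $H$ is divisible by $8$.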
The following lemma is used in
the proof of Proposition~\ref{T_N = 1}.
\begin{lemma}\ellabel{lem:info at N}
The inertia group $I_N$ and the decomposition group $G_{\mathfrak{p}athbf Q_N}$
have the same image in $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q).$
\end{lemma}
\begin{Proof}
There is a unique prime lying above $N$ in $K$,
and it is principal. Thus this prime splits completely in the
Hilbert class field $E$ of $K$.
Thus $I_N$ and $G_{\mathfrak{p}athbf Q_N}$ both have trivial image in $\mathfrak{p}athrm{Gal}(E/K)$.
Since $N$ is ramified in $K/\mathfrak{p}athbf Q$, the lemma follows.
$
\square \ $
\end{Proof}
\
Let $H'$ denote the $2$-power part of the
strict ray class group of $K$ of conductor $\pi_K^2,$
and let $H''$ denote the $2$-power part of the
strict ray class group of $K$ of conductor $\pi_K^3$.
(Here ``strict'' means that in the case when $K$ is real quadratic, we
allow ramification at infinity.) We let $E'$ and $E''$ denote
the corresponding abelian extensions of $K$.
\begin{prop}\ellabel{prop:RCF} (i) The natural surjection $H'' \rightarrow H'$
is an isomorphism.
(ii) Either the natural surjection $H' \rightarrow H$ is an isomorphism,
in which case $E = E'$,
or else the kernel of this surjection has order two,
in which case $E'/E$ is a quadratic extension that is ramified at two.
(iii) The group $H'$ is cyclic.
(iv)
Let $D_2(E'/\mathfrak{p}athbf Q)$ denote the decomposition group of some prime
of $E'$ lying over $2$, and let $I_2(E'/\mathfrak{p}athbf Q)$ denote the
inertia subgroup of $D_2(E'/\mathfrak{p}athbf Q)$.
Then $I_2(E'/\mathfrak{p}athbf Q)$ has index two in $D_2(E'/\mathfrak{p}athbf Q)$.
If furthermore $E'/E$ is a quadratic extension,
then $D_2(E'/\mathfrak{p}athbf Q)$ is dihedral of order $8$.
\end{prop}
\begin{Proof}
The groups $H'$ and $H''$ sit inside the
following exact diagram:
$$\xymatrix{
\mathcal O^{\times}_K \ar[r]\ar@{=}[d] & (\mathcal O_K/\pi^3_K)^{\times} \ar[r]\ar[d]^{\psi}
& H'' \ar[r] \ar[d] & H \ar[r] \ar@{=}[d]& 0 \\
\mathcal O^{\times}_K \ar[r] & (\mathcal O_K/\pi^2_K)^{\times}
\ar[r] & H' \ar[r]
& H \ar[r] & 0.}$$
To prove that the map $H''\rightarrow H'$ is an isomorphism,
it suffices to show that the kernel
of $\psi$ maps to zero in $H''$; in other
words, that the kernel of $\psi$ consists of the images of global units.
Since $\pi^2_K = (2)$, we see that the kernel of $\psi$ is
equal to $\{\pm 1\}$; this completes the proof of~(i).
The proof of~(ii) is even more straightforward: it follows
immediately from a consideration of the exact sequence
$$\mathfrak{p}athcal O_K^{\times} \rightarrow (\mathfrak{p}athcal O_K/\pi_K^2)^{\times}
= (\mathfrak{p}athcal O_K/2)^{\times} \rightarrow H' \rightarrow H \rightarrow 0.$$
We now turn to proving~(iii). For this, it
suffices to prove that $H'/2 \cong \mathbf Z/2$.
Note that since the non-trivial element of $\mathrm{Gal}(K/\mathbf Q)$ acts on $H'$
via $h \mapsto h^{-1}$, we see that the extension $K'$ of $K$
corresponding by class field theory to $H'/2$ is abelian
over $\mathbf Q$. If $H'/2$ were isomorphic to $\mathbf Z/2 \oplus \mathbf Z/2$
(rather than $\mathbf Z/2$),
then since $H$ is cyclic,
this would imply that there
exists a subextension of $K'$, quadratic over $K$ and of conductor $2$.
Such an extension would again be abelian over $\mathbf Q$.
Using the Kronecker-Weber theorem,
it is easy to check that there are no such quadratic extensions of $K$.
Thus $H'$ must be cyclic, as claimed.
As remarked upon above, the class of $\pi_K$ has order two
in $H$.
Thus the decomposition group $D_2(E/K)$ at $2$ in the Hilbert
class field has order exactly $2$. Since $K/\mathfrak{p}athbf Q$ is ramified
at $2$, we see that the decomposition group $D_2(E/\mathfrak{p}athbf Q)$ has
order four, and that the inertia subgroup $I_2(E/\mathfrak{p}athbf Q)$ has order two.
If $E' = E$ then this completes the proof of~(iv).
If instead $E'/E$ is quadratic, then $E'/E$ is ramified at $2$,
implying that $D_2(E'/\mathfrak{p}athbf Q)$ has order $8$, and that $I_2(E'/\mathfrak{p}athbf Q)$
has order $4$.
Since $D_2(E'/K) \subseteq \mathrm{Gal}(E'/K) \cong H'$ is cyclic, by~(iii),
and since $\mathfrak{p}athrm{Gal}(E'/\mathfrak{p}athbf Q)$ is dihedral, it follows that $D_2(E'/\mathfrak{p}athbf Q)$
is dihedral of order $8$.
$
\square \ $
\end{Proof}
\
Let $(V,L,\rho)$ be an object of $\mathfrak{p}athcal Def(A)$ for some
Artinian local $\mathfrak{p}athbf F_2$-algebra, and let $F$ denote compositum
of $K$ with the fixed field of the kernel of $\rho$. The following
result greatly restricts the possibilities for~$F$.
\begin{theorem}\ellabel{p=2 class field factorisation}
The field $F$ is contained in the
strict $2$-Hilbert class field $E$ of $K$.
\end{theorem}
\begin{Proof}
Since $A$ is assumed to be of characteristic $2$,
the natural map $\mathfrak{p}athbf Z_2^{\times} \rightarrow A^{\times}$
has trivial image, and thus the image of $\rho$ is contained in
$\mathfrak{p}athrm{SL}_2(A)$.
Since $I_N$ acts trivially on both $L$ and $V/L$,
we deduce that inertia at $N$
acts through an abelian group of exponent $2$, and thus
through a cyclic group of order at most $2$.
\begin{lemma}
The extension $F/K$ is unramified at all finite primes outside $\pi_K$.
Moreover, if $K^{ab}/K$ is the maximal abelian extension
of $K$ contained in $F$, then
the finite part of the conductor of $K^{ab}/K$ divides $\pi^3_K$.
\end{lemma}
\begin{Proof} The Galois group $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$
embeds into $\mathfrak{p}athrm{Gal}(K/\mathfrak{p}athbf Q) \times \rho(G_{\mathfrak{p}athbf Q})$. In particular, it is of
2-power order, and so
the image of an inertia group $I_N$ at $N$ in $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$ is cyclic
of two-power order. As observed above, $\rho(I_N)$ is a quotient of
$I_N$ of order at most two. On the other hand,
since $K/\mathfrak{p}athbf Q$ is a quadratic extension that is ramified at $N$,
we see that $I_N$ surjects onto the order two group $\mathfrak{p}athrm{Gal}(K/\mathfrak{p}athbf Q)$.
It follows that the image of $I_N$ in $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$ has trivial
intersection with $\mathfrak{p}athrm{Gal}(F/K)$, and so $F/K$ is unramified
at the prime above $N$.
By definition, $\rho$ is unramified outside $2$ and $N$,
and so it remains to prove the result about the conductor
of $K^{ab}/K$. Since the compositum of extensions of
conductor dividing $\pi^3_K$ has conductor at most $\pi^3_K$, it
suffices to prove the result for extensions
$K'/K$ with \emph{cyclic} Galois group. Suppose
such an extension $K'/K$ with Galois group $\mathfrak{p}athbf Z/2^k \mathfrak{p}athbf Z$
had conductor divisible by $\pi^4_K$. Then the
conductor discriminant formula says
that the discriminant $\mathfrak{p}athcal Delta_{K'/K}$ is the
product over all characters of $\mathfrak{p}athbf Z/2^k \mathfrak{p}athbf Z$ of
the corresponding conductor:
$$\mathfrak{p}athcal Delta_{K'/K} = \prod_{\chi} \mathfrak{p}athfrak{f}_{\chi}.$$
Since $\mathfrak{p}athbf Z/2^k \mathfrak{p}athbf Z$ has exactly
$2^{k-1}$ faithful characters, restricting the
product to this set we find that the discriminant
is divisible by at least
$(\pi_K)^{4 \cdot 2^{k-1}}$. In particular, this
implies a lower bound for the two part of the
root discriminant of $K'$, and thus of $F$.
Explicitly,
$$\delta_{2,F} \ge \delta_{2,K'} = \delta_{2,K} N_{K/\mathfrak{p}athbf Q}(\mathfrak{p}athcal Delta_{K'/K})^{1/[K':\mathfrak{p}athbf Q]}
\ge 2 \cdot 2 = 4.$$
Yet the Fontaine bound (\cite{fbound}, Theorem 1)
for finite flat group schemes
over $\mathfrak{p}athbf Z_2$ killed by $2$ implies that $\delta_{2,F}
< 2^{1 + \frac{1}{2-1}} = 4$. Thus the result follows
by contradiction.
$
\square \ $
\end{Proof}
\
We will strengthen this lemma step-by-step,
until we eventually establish the theorem.
\begin{lemma}\ellabel{lem:intermediate} The extension $F/K$ is cyclic, and
is contained in the field $E'$.
\end{lemma}
\begin{Proof}
The preceding lemma, together with part~(i) of
Proposition~\ref{prop:RCF},
shows that the extension of $K$ cut out by
any abelian quotient
of $\mathfrak{p}athrm{Gal}(F/K)$ is contained in $E'' = E'$.
Part~(iii) of the same proposition then
implies that any such quotient is cyclic.
Thus $\mathfrak{p}athrm{Gal}(F/K)$ is a $2$-group with no
non-cyclic abelian quotients, and so is itself
cyclic. The result follows.
$
\square \ $
\end{Proof}
\
We now turn to a more careful study of the ramification
at $2$. Corollary~\ref{uniquenesscor} shows that
$V_{/\mathfrak{p}athbf Q_2}$ has a unique prolongation to
a finite flat group scheme $M_{/\mathfrak{p}athbf Z_2}$, that the action of $A$ on $V$
prolongs to an action of $A$ on $M$, and that
the connected-\'etale sequence
$$0 \rightarrow M^{0} \rightarrow M \rightarrow M^{\text{\'et}}
\rightarrow 0$$
induces a two-step filtration of $V$ by free $A$-modules
of rank one.
\begin{lemma} The action of inertia at $2$ on $ M^{0}(\overlineerline{\mathfrak{p}athbf Q}_2)$
and $ M^{\text{\'et}}(\overlineerline{\mathfrak{p}athbf Q}_2)$ is trivial.
\end{lemma}
\begin{Proof} This is clear for $M^{\text{\'et}}(\overlineerline{\mathfrak{p}athbf Q}_2)$,
since \'{e}tale implies unramified. It follows for
$ M^{0}(\overlineerline{\mathfrak{p}athbf Q}_2)$ from the Cartier self-duality of $M_{/\overlineerline{\mathfrak{p}athbf Q}_2}$.
$
\square \ $
\end{Proof}
\begin{lemma} If $\sigma \in G_{\mathfrak{p}athbf Q_2}$ then
$\sigma^2$ acts trivially on $V$.
\end{lemma}
\begin{Proof} Let us choose a basis of $V$ compatible with its
filtration arising from the connected-\'etale sequence
of $M$, and write the action of $\sigma$ on
$V$ as a matrix over $A$ in terms of this basis:
$$\sigma = \elleft(\begin{array}{cc} 1 + a & b \\
0 & 1 + c \end{array} \right)$$
Part~(iv) of Proposition~\ref{prop:RCF} implies
that $\sigma^2$ lies in the inertia subgroup.
Thus it must fix $ M^{0}(\overlineerline{\mathfrak{p}athbf Q}_2)$
and $ M^{\text{\'et}}(\overlineerline{\mathfrak{p}athbf Q}_2)$. Computing $\sigma^2$,
we find that $(1+a)^2 = (1+c)^2 = 1$, and so $a^2 = c^2 = 0$.
Since the determinant of $\sigma$ is $1$, we see that
$(1+c) = (1+a)^{-1} = 1 - a$. Now computing $\sigma^2$ we
find that it is trivial.
$
\square \ $
\end{Proof}
\
{\em Conclusion of proof of Theorem~\ref{p=2 class field factorisation}:}
If $E' = E$,
then by Lemma~\ref{lem:intermediate} there is nothing more to prove.
Otherwise, Proposition~\ref{prop:RCF} implies that
the $D_2(E'/\mathfrak{p}athbf Q)$ is dihedral of order $8$.
We have seen that for any $\sigma \in G_{\mathfrak{p}athbf Q_2}$, the element
$\sigma^2$ acts trivially. Thus the image $\rho_{|G_{\mathfrak{p}athbf Q_2}}$ factors through
an exponent $2$ group, which is therefore abelian. Yet
the dihedral group of order $8$ is not abelian, and hence
$F$ is contained in a proper subfield of $E'$ that is Galois
over $\mathfrak{p}athbf Q$. All such subfields lie inside $E$.
$
\square \ $
\end{Proof}
\
\begin{cor}\label{cor:existence} If $2^m$ denotes the order of $H$, then
there exists a surjection $R \rightarrow \mathbf F_2[X]/X^n$
if and only if $n \leq 2^{m-1}$. Furthermore, any
such surjection is unique up to applying an automorphism
of $\mathbf F_2[X]/X^n$.
\end{cor}
\begin{Proof}
Corollary~\ref{traces} implies that
there exists a surjection $R \rightarrow \mathfrak{p}athbf F_2[X]/X^n$ if and only
if there exits $(V,L,\rho) \in \mathfrak{p}athcal Def(\mathfrak{p}athbf F_2[X]/X^n)$ with
the property that the traces of $\rho$ generate $\mathfrak{p}athbf F_2[X]/X^n$
as an $\mathfrak{p}athbf F_2$-algebra
(or equivalently, with the property that there is
an element of $G_{\mathfrak{p}athbf Q}$ whose image under $\rho$ has
trace congruent to $X \pmod{X^2}$.)
\begin{lemma}\label{lem:2-powers}
Let $A$ be an $\mathbf F_2$-algebra, and let $U \in \mathrm{SL}_2(A)$.
(i) $U^2 = I + \mathbf{Trace}(U) U.$
(ii) For any $k \geq 1$, we have that $\mathbf{Trace}(U^k) \in \mathbf{Trace}(U) A.$
(iii)
If $U \in \mathrm{SL}_2(A),$ then
$$U^{2^k} = \left( \sum_{i=0}^{k-1} \mathbf{Trace}(U)^{2^k-2^{k-i}} \right) I
+ \mathbf{Trace}(U)^{2^k-1} U,$$
for any $k \geq 1$.
\end{lemma}
\begin{Proof}
Any $2\times 2$ matrix $U$ over the ring $A$
satisfies the identity $U^2 = \det(U) I + \mathbf{Trace}(U) U.$
Part~(i) is a particular case of this identity, and parts~(ii) and~(iii)
follow by induction.
$
\square \ $
\end{Proof}
\
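(For instance, writing $t = \mathbf{Trace}(U)$ and taking $k = 2$ in part~(iii), part~(i) gives
$$U^4 = (I + tU)^2 = I + t^2U^2 = (1+t^2)I + t^3U,$$
which agrees with the asserted formula, since $\sum_{i=0}^{1} t^{4-2^{2-i}} = 1 + t^2$.)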
Theorem~\ref{p=2 class field factorisation}
shows that $\rho$ factors as
$G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q) \rightarrow \mathfrak{p}athrm{SL}_2(\mathfrak{p}athbf F_2[X]/X^n).$
Now $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q)$ is a dihedral group of order $2^{m+1}$;
indeed, we may write
\begin{equation}\label{eq:presentation}
\mathrm{Gal}(E/\mathbf Q)= \langle \sigma, \tau | \sigma^{2^m} = \tau^2 =
(\sigma \tau)^2 = 1 \rangle,
\end{equation}
where $\sigma$ generates $\mathrm{Gal}(E/K)$, and $\tau$ generates the image of $I_N$
in $\mathrm{Gal}(E/\mathbf Q).$
Part~(i) of Lemma~\ref{lem:2-powers} shows that any element of order two in
the image of $\rho$ has vanishing trace.
Since any element of $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q)$ that is not of order two is a power of
$\sigma$,
we conclude from part~(ii) of the same lemma that
all the traces of $\rho$ lie in the ideal of $\mathfrak{p}athbf F_2[X]/X^n$
generated by $\mathfrak{p}athbf Trace(\rho(\sigma))$.
Since the trace of any element in the image of $\overlineerline{\rho}$ is zero,
we see that this ideal is contained in the maximal ideal of $\mathfrak{p}athbf F_2[X]/X^n$.
Applying part~(iii) of Lemma~\ref{lem:2-powers}, we deduce
that $\mathbf{Trace}(\rho(\sigma))^{2^{m-1}} = 0$ (since $\sigma^{2^m} = 1$,
and so $\rho(\sigma^{2^m}) = I$).
Thus, on the one hand, the only
way that $X$ can arise as a trace of $\rho$ is if
$\mathbf{Trace}(\rho(\sigma)) \equiv X \pmod{X^2}.$
On the other hand, if this condition holds, then $X^{2^{m-1}} = 0,$
and hence $n \leq 2^{m-1}.$ This proves one direction of the ``if and only
if'' statement of the corollary.
Let us now prove the uniqueness assertion, assuming that
we are given a surjective map $R \rightarrow \mathfrak{p}athbf F_2[X]/X^n$.
Since the corresponding triple $(V,L,\rho)$
deforms $(\overline{V},\overline{L},\overline{\rho}),$ and since
$\sigma$ has non-trivial
image in $\mathfrak{p}athrm{Gal}(\mathfrak{p}athbf Q(\sqrt{-1})/\mathfrak{p}athbf Q),$
while $\tau$ generates
the image of $I_N$ in $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q),$
we may choose a basis of
$V$ such that $\sigma$ and $\tau$ act through matrices in $\mathfrak{p}athrm{SL}_2(\mathfrak{p}athbf F_2[X]/X^n)$
of the form
$$
\rho(\sigma) = \left(\begin{matrix} a(\sigma) & b(\sigma) \\
c(\sigma) & d(\sigma) \end{matrix} \right) \equiv \left( \begin{matrix}
1 & 1 \\ 0 & 1\end{matrix} \right) \pmod X \, ,
\quad
\rho(\tau) = \left(\begin{matrix} a(\tau) & 0 \\
c(\tau) & d(\tau) \end{matrix} \right) \equiv \left( \begin{matrix}
1 & 0 \\ 0 & 1\end{matrix} \right) \pmod X.
$$
Now conjugating by matrices in
$\mathop{\mathrm{ker}}\nolimits(\mathrm{GL}_2(\mathbf F_2[X]/X^n) \rightarrow \mathrm{GL}_2(\mathbf F_2))$
of the form
$$\left(\begin{matrix} \alpha & 0 \\ \gamma & \delta \end{matrix}
\right),$$
it is easy to show that we may change our basis so that
$$\rho(\sigma) = \left(\begin{matrix} 1 + u X & 1 \\
u X & 1 \end{matrix} \right),
\qquad
\rho(\tau) = \left(\begin{matrix} 1 & 0 \\
u X & 1 \end{matrix} \right),
$$
for some $u \in (\mathbf F_2[X]/X^n)^{\times}.$
Thus, after applying the inverse of the automorphism
of $\mathbf F_2[X]/X^n$ induced by
the map $X \mapsto u X$,
we see that we may put $\rho$ in the form
\begin{equation}\label{eq:normal form}
\rho(\sigma) = \left(\begin{array}{cc} 1 + X & 1 \\
X & 1 \end{array} \right),
\qquad
\rho(\tau) = \left(\begin{array}{cc} 1 & 0 \\
X & 1 \end{array} \right).
\end{equation}
This proves the uniqueness statement.
Finally, one checks that the preceding formula
gives a well-defined homomorphism $\rho:\mathrm{Gal}(E/\mathbf Q)
\rightarrow \mathrm{SL}_2(\mathbf F_2[X]/X^n)$,
so long as $n\leq 2^{m-1}$,
and that it deforms $\overline{\rho}$. It is certainly flat at $2$,
since the inertia group at two acts through its image in
$\mathrm{Gal}(\mathbf Q(\sqrt{-1})/\mathbf Q)$.
Thus, if we let $L$ denote the line spanned by the vector $(0,1)$,
then we obtain an object of $\mathcal Def(\mathbf F_2[X]/X^n)$ of the required
sort (since $\mathbf{Trace}(\rho(\sigma)) = X$). This completes
the proof of the corollary.
$
\square \ $
\end{Proof}
\
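As a sanity check on the well-definedness assertion made at the end of the preceding proof (a routine verification, recorded here for convenience), the matrices of~(\ref{eq:normal form}) do satisfy the relations of the presentation~(\ref{eq:presentation}): in $\mathrm{SL}_2(\mathbf F_2[X]/X^n)$ one computes
$$\rho(\tau)^2 = \left(\begin{matrix} 1 & 0 \\ 2X & 1 \end{matrix}\right) = I,
\qquad
\rho(\sigma)\rho(\tau) = \left(\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}\right),
\qquad
\bigl(\rho(\sigma)\rho(\tau)\bigr)^2 = I,$$
while $\rho(\sigma)^{2^m} = I$ follows from part~(iii) of Lemma~\ref{lem:2-powers}, since $\mathbf{Trace}(\rho(\sigma)) = X$ and $X^{2^{m-1}} = 0$ once $n \leq 2^{m-1}$.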
Let us consider the particular case $n=2$ of the preceding
corollary.
\begin{prop}\label{prop:tangent space at 2}
If $N\not\equiv 1 \pmod 8,$ then $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_2[X]/X^2) = 0.$
If $N \equiv 1 \pmod 8,$ then $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_2[X]/X^2)$ is one dimensional
over $\mathfrak{p}athbf F_2.$ Furthermore, if $(V,L,\rho)$ corresponds to the
non-trivial element, then we have the following formulas for
the traces of $\rho$:
\begin{itemize}
\item[(i)] If $\ell$ is an odd prime distinct from $N$, then
$$\mathbf{Trace}(\rho(\mathbf{Frob}_{\ell}))
= \begin{cases}
0 \, \textrm{ if } \ell \equiv 1 \pmod 4 \textrm{ or } \ell \textrm{ is
a square} \pmod N \\ X \textrm{ otherwise } \end{cases}.$$
\item[(ii)] If $\alpha_2$ denotes the eigenvalue of $\mathbf{Frob}_2$ on the
rank one $\mathbf F_2[X]/X^2$-module of $I_2$-coinvariants of $V$,
then
$$\alpha_2 = \begin{cases} 1 \textrm{ if } -4 \textrm{ is an $8$th power mod }
N \\
1 + X \textrm{ if not }\end{cases}.$$
\end{itemize}
\end{prop}
\begin{Proof}
If $N \not\equiv 1 \pmod 8$ then Proposition~\ref{prop:2-cft}
shows that $H$ has order two, and
Corollary~\ref{cor:existence} shows that
any map $R \rightarrow \mathfrak{p}athbf F_2[X]/X^2$ factors through the
map $R \rightarrow \mathfrak{p}athbf F_2$. Thus in this case $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_2[X]/X^2) = 0,$
as claimed.
If $N \equiv 1 \pmod 8,$ then conversely we conclude from
Proposition~\ref{prop:2-cft} that $H$ has order divisible by $4$.
Corollary~\ref{cor:existence} then shows that there is
a unique surjection $R \rightarrow \mathfrak{p}athbf F_2[X]/X^2$, and
thus that $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_2[X]/X^2)$ is one dimensional over $\mathfrak{p}athbf F_2$.
If $F$ denotes the subextension of $E$ over $K$ cut out by
this non-trivial deformation, then $F$ is a dihedral extension
of $\mathfrak{p}athbf Q$ of degree $8$, unramified over $K$, containing $K(\sqrt{-1})$.
(Concretely, as we saw in the proof
of Proposition~\ref{prop:2-cft}, the field $F$ has the form
$\mathfrak{p}athbf Q(\sqrt{-1},\sqrt{\nu},\sqrt{\bar{\nu}}),$ for appropriate
$\nu,$ $\bar{\nu}$.)
Recall the presentation~(\ref{eq:presentation}) of $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q)$.
If we let $\bar{\sigma}$ and $\bar{\tau}$ denote the image of $\sigma$
and $\tau$ under the surjection $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q) \rightarrow \mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$,
then $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$ has the following presentation:
$$\mathrm{Gal}(F/\mathbf Q)= \langle \bar{\sigma}, \bar{\tau} | \bar{\sigma}^4 = \bar{\tau}^2 =
(\bar{\sigma} \bar{\tau})^2 = 1 \rangle.$$
Recall from the proof of Corollary~\ref{cor:existence}
that the only elements of $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$ whose images under
$\overlineerline{\rho}$ have non-zero trace (which is then equal to $X$)
are $\bar{\sigma}^{\pm 1};$ that is, the elements of $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$
that are of order $4$.
If $\ell$ is an odd prime distinct from $N$, then
$\ell$ is unramified in $F$. The final remark of the preceding
paragraph shows that
$$\mathfrak{p}athbf Trace(\rho(\mathfrak{p}athbf Frob_{\ell})) = \begin{cases} 0 \textrm{ if } \mathfrak{p}athbf Frob_{\ell}
\textrm{ has order $1$ or $2$}\\ X \textrm{ if } \mathfrak{p}athbf Frob_{\ell} \textrm{ has
order $4$ } \end{cases}.$$
Now $K$ is the maximal subfield of $F$ fixed by $\bar{\sigma}$,
while one checks that any element of $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$ of order two
fixes at least one of the subfields $\mathfrak{p}athbf Q(\sqrt{-1})$ or $\mathfrak{p}athbf Q(\sqrt{N})$
of $F$. Thus we see that $\rho(\mathfrak{p}athbf Frob_{\ell})$ has trace zero
(as opposed to trace $X$)
if and only if $\ell$ splits in at least one of the
fields $\mathfrak{p}athbf Q(\sqrt{-1})$ or $\mathfrak{p}athbf Q(\sqrt{N})$. This establishes~(i).
Again referring to the presentation~(\ref{eq:presentation})
of $\mathfrak{p}athrm{Gal}(E/\mathfrak{p}athbf Q),$
one easily checks that $D_2(E/\mathfrak{p}athbf Q)$ is generated
by $\sigma \tau$ and $\sigma^{2^{m-1}},$ with $I_2(E/\mathfrak{p}athbf Q)$
being generated by $\sigma \tau$. (Recall that
$2^m$ denotes the order of $H$.)
Thus $D_2(F/\mathfrak{p}athbf Q)$
is generated by $\bar{\sigma} \bar{\tau}$ and $\bar{\sigma}^{2^{m-1}}.$
Thus if $m \geq 3,$ then we see that $D_2(F/\mathfrak{p}athbf Q) = I_2(F/\mathfrak{p}athbf Q)$,
while if $m = 2$, then $D_2(F/\mathfrak{p}athbf Q)/I_2(F/\mathfrak{p}athbf Q)$ is generated by the
image of $\bar{\sigma}^2$.
In terms of the explicit model~(\ref{eq:normal form})
for $\rho$, we see that the coinvariants of
$I_2(F/\mathbf Q) = \langle \bar{\sigma} \bar{\tau} \rangle$ on $V$ are
spanned by the image of the basis vector $(0,1)$, and
that $\bar{\sigma}^2$ (which is central in $\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q)$,
and so does act on the space of coinvariants)
acts on the image of this vector as multiplication by
$1+X$. Combining this computation with the discussion of the
previous paragraph proves part~(ii), once we recall
from Proposition~\ref{prop:2-cft} that $m\geq 3$ if and only
if $-4$ is an $8$th power modulo $N$.
$
\square \ $
\end{Proof}
\
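By way of example (an illustrative computation only, for the value $N = 17$, which satisfies $N \equiv 1 \pmod 8$): the squares modulo $17$ are $\{1,2,4,8,9,13,15,16\}$, so for $\ell = 3$, which is $\equiv 3 \pmod 4$ and is not a square modulo $17$, part~(i) of Proposition~\ref{prop:tangent space at 2} gives $\mathbf{Trace}(\rho(\mathbf{Frob}_3)) = X$, while for $\ell = 13 \equiv 1 \pmod 4$ it gives $\mathbf{Trace}(\rho(\mathbf{Frob}_{13})) = 0$.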
The quotient $R/2$ is the universal deformation ring
classifying deformations of $(\overline{V},\overline{L},\overline{\rho})$
in characteristic $2$.
The preceding two results together imply that
$R/2 \cong \mathfrak{p}athbf F_2[X]/X^{2^{m-1}},$ where $2^m$ is
the order of $H$; formula~(\ref{eq:normal form})
then gives an explicit model for the universal deformation
over $R/2$.
\
We close this section by observing that
Theorem~\ref{thm:main:p=2}
follows from Corollaries~\ref{EPR} and~\ref{cor:existence}
taken together.
\section{Explicit deformation theory: $p$ odd}\ellabel{sec:odd explicit}
In this section we suppose that $p\geq 3$, and
that $N$ is prime to $p$.
We begin by considering the problem of analysing deformations
$(V,L,\rho) \in \mathfrak{p}athcal Def(A)$, where~$A$
is an Artinian local $\mathfrak{p}athbf F_p$-algebra with residue field $\mathfrak{p}athbf F_p$.
Our results will be less definitive than those obtained
in the case of $p = 2$.
Let $\mathfrak{p}athcal Delta$ denote the following
subgroup of $\mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_p)\subset \mathfrak{p}athrm{GL}_2(A)$:
$$\mathfrak{p}athcal Delta =
\elleft \{
\elleft( \begin{matrix} \alpha & 0 \\ 0 & 1 \end{matrix}\right)
\, \elleft | \right. \, \alpha \in \mathfrak{p}athbf F_p^{\times} \right \},$$
and let $G'$ denote the kernel of the map
$\mathfrak{p}athrm{SL}_2(A) \rightarrow \mathfrak{p}athrm{SL}_2(\mathfrak{p}athbf F_p)$
induced by reduction modulo $\mathfrak{p}$ (the maximal ideal of $A$);
note that $G'$ is a
normal subgroup of $\mathfrak{p}athrm{GL}_2(A)$.
If we let $G$ denote the subgroup of $\mathfrak{p}athrm{GL}_2(A)$ generated by
$G'$ and $\mathfrak{p}athcal Delta$, then $G$ is isomorphic to the semi-direct product
$G' \sdp \mathfrak{p}athcal Delta$, where $\mathfrak{p}athcal Delta$ acts on $G'$ via conjugation.
Explicitly, one computes that
\begin{equation}\ellabel{eq:conjugation}
\elleft( \begin{matrix} \alpha & 0 \\ 0 & 1 \end{matrix}\right)
\elleft( \begin{matrix} a & b \\ c & d \end{matrix} \right)
\elleft( \begin{matrix} \alpha^{-1} & 0 \\ 0 & 1 \end{matrix}\right)
= \elleft( \begin{matrix} a & \alpha b \\ \alpha^{-1} c & d \end{matrix} \right).
\end{equation}
\begin{lemma}\ellabel{lemma:basis} Let $(V,L,\rho)$ be an object of $\mathfrak{p}athcal Def(A)$,
and let $M$ denotes the finite flat group scheme over $\mathfrak{p}athbf Z_p$
whose generic fibre equals $V$. If
$0 \rightarrow M^{0} \rightarrow M \rightarrow M^{\text{\'et}} \rightarrow 0$
denotes the connected-\'etale exact sequence of $M$,
then there is a basis for $V$ over $A$ such that
\begin{itemize}
\item[(i)] The representation $\rho: G_{\mathfrak{p}athbf Q} \rightarrow \mathfrak{p}athrm{GL}_2(A)$
has image lying in $G$.
\item[(ii)] The submodule $M^{0}(\overlineerline{\mathfrak{p}athbf Q}_p)$ of $V$
(which is free of rank one, by Corollary~\ref{uniquenesscor})
is spanned by the vector $(1,0)$.
\item[(iii)] The line $L$ is spanned by the vector $(1,1)$.
\end{itemize}
\end{lemma}
\begin{Proof} By definition of the deformation problem
$\mathfrak{p}athcal Def,$ the determinant of $\rho$ is equal to $\overlineerline{\chi}_p$.
Thus $\mathfrak{p}athop{\mathfrak{p}athrm{im}}\nolimits(\rho)$ sits in the exact sequence of groups
$$0 \rightarrow G' \rightarrow \mathfrak{p}athop{\mathfrak{p}athrm{im}}\nolimits(\rho) \rightarrow \mathfrak{p}athbf F_p^{\times}
\rightarrow 0.$$
The order of $\mathfrak{p}athbf F_p^{\times}$ is coprime to the order of $G'$,
and so this exact sequence splits.
If we fix a splitting $s$,
then one easily sees that we may choose an eigenbasis for
the action of $s(\mathfrak{p}athbf F_p^{\times})$ so that this group acts via
the matrices in $\mathfrak{p}athcal Delta$. Thus condition~(i) is satisfied
for this basis. Condition~(ii) follows directly from condition~(i).
The stipulations of the deformation problem $\mathfrak{p}athcal Def$ then imply that
$L$ is spanned by a vector of the form $(1,u)$, for some unit
$u \in A^{\times}$. Rescaling the second basis vector by $u$,
we may assume that $L$ is in fact spanned by $(1,1)$.
$
\square \ $
\end{Proof}
\
From now on, we fix an object $(V,L,\rho) \in \mathfrak{p}athcal Def(A)$,
and choose a basis of $V$ as in the preceding lemma.
Thus we may regard $\rho$ as a homomorphism
$G_{\mathfrak{p}athbf Q} \rightarrow G \subset \mathfrak{p}athrm{GL}_2(A).$
\begin{lemma}\label{lem:inertia at N}
If $(V,L,\rho)$ is a non-trivial deformation,
then the image of
$I_N$ under $\rho$ is a cyclic subgroup of $G'$ of order $p$.
Furthermore,
if $\elleft(\begin{matrix} a & b \\ c & d \end{matrix}\right)$
is a generator of this cyclic group, then neither $b$ nor $c$
vanishes, and neither $a$ nor $d$ equals $1$.
\end{lemma}
\begin{Proof}
Since the image under $\rho$
of inertia at $N$ acts trivially on each of the lines $L$ and $V/L$
(the determinant of $\rho$ equals $\overlineerline{\chi}_p,$
which is trivial on $I_N$),
we see that $I_N$ acts via an abelian group of exponent~$p$. Since
tame inertia is pro-cyclic, inertia at $N$ must act
through a group of order dividing~$p$.
If $I_N$ has trivial image, then Proposition~\ref{mindefiso}
shows that $V = A\otimes_{\mathfrak{p}athbf F_p} \overline{V},$ and thus
that $(V,L,\rho)$ is the trivial deformation, contradicting
our assumption. Thus $I_N$ has image of order $p$.
The line $L$ is spanned by the vector $(1,1)$.
Thus
if $\gamma = \elleft(\begin{matrix} a & b \\ c & d \end{matrix}\right)$
is a generator of the image of $I_N$, it fixes such a vector.
Since the determinant of $\rho$ equals $\overlineerline{\chi}_p,$ we see
that $\det(\gamma) = 1$.
If $a$ (respectively $d$) equals $1$ then
we conclude that $b$ (respectively $c$) equals $0$.
If either $b$ or $c$ vanishes, one easily checks that $\gamma$
must be the identity, contradicting the fact that $I_N$ has
non-trivial image.
$
\square \ $
\end{Proof}
\
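To make the shape of such a generator concrete (a hypothetical illustration, taking $A = \mathbf F_p[X]/X^2$ and $t = rX$ with $r \in \mathbf F_p^{\times}$; it plays no role in the argument), consider
$$\gamma = \left(\begin{matrix} 1 + t & -t \\ t & 1 - t \end{matrix}\right).$$
This matrix has determinant $1$, fixes the vector $(1,1)$ spanning $L$, and satisfies $(\gamma - I)^2 = 0$, so that $\gamma^j = I + j(\gamma - I)$ and $\gamma$ has order exactly $p$; it exhibits all of the features asserted in Lemma~\ref{lem:inertia at N}, since $b = -t$ and $c = t$ are non-zero while $a = 1+t$ and $d = 1-t$ differ from $1$.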
If $F$ denotes the extension of $\mathfrak{p}athbf Q$ cut out by the
kernel of $\rho,$ then $F$ contains $\mathfrak{p}athbf Q(\zeta_p)$ (where
$\zeta_p$ denotes a primitive $p$th root of unity),
since $\det(\rho) = \overlineerline{\chi}_p$.
We let $F^{ab}$ denote the maximal subextension
of $F$ abelian over $\mathfrak{p}athbf Q(\zeta_p)$.
\begin{lemma} \ellabel{lemma:classf}
The $p$-part of the
conductor of $F^{ab}/\mathfrak{p}athbf Q(\zeta_p)$ divides $\pi^2$, where
$\pi = (1-\zeta_p) \mathfrak{p}athbf Z[\zeta_p]$, and the extension $F/\mathfrak{p}athbf Q(\zeta_p)$ has
inertial degree dividing $p$
at $N$ and is unramified outside $N$ and $\pi$. \end{lemma}
\begin{Proof}
Lemma~\ref{lem:inertia at N} shows that
the image under $\rho$ of inertia at $N$ is a cyclic
group of order dividing $p$.
Therefore it suffices to prove the conductor bound at $\pi$.
The image under $\rho$
of $G_{\mathfrak{p}athbf Q(\zeta_p)}$ lies in $G'$, a $p$-group, and so we see
that $\mathfrak{p}athrm{Gal}(F^{ab}/\mathfrak{p}athbf Q(\zeta_p))$ is an abelian $p$-group.
Thus it is a compositum of cyclic
extensions of $p$-power degree. The conductor of a compositum
of cyclic extensions is equal to the l.c.m.~of the
conductors of the individual cyclic extensions, and
thus it suffices to bound the conductor of a cyclic
subextension of $F^{ab}$ of degree $p^k$, for some $k\geq 1$.
Let $F'$ be such a subextension, and suppose that
the conductor of $F'$ is divisible by $\pi^3$.
There are $(p-1) p^{k-1}$ faithful characters of $\mathfrak{p}athbf Z/p^k$,
and so by the conductor discriminant formula,
the discriminant $\mathfrak{p}athcal Delta_{F'/\mathfrak{p}athbf Q(\zeta_p)}$ is divisible by
$\pi^{3(p-1)p^{k-1}}$. Thus
the $p$-root
discriminant of $F'$ satisfies
$$\delta_{F',p} \ge \delta_{\mathfrak{p}athbf Q(\zeta_p)}
N_{\mathfrak{p}athbf Q(\zeta_p)/\mathfrak{p}athbf Q}(\pi^{3(p-1)p^{k-1}})^{1/[F':\mathfrak{p}athbf Q]}
= p^{(p-2)/(p-1)} \cdot p^{3(p-1)/(p(p-1))}$$
and thus
$$v_p(\mathfrak{p}athcal D_{F'/\mathfrak{p}athbf Q}) \ge 1 + \frac{1}{p-1} + \frac{p-3}{p(p-1)}.$$
This violates Fontaine's bound \cite{fbound} when $p \ge 3$.
The result follows for $F^{ab}$.
$
\square \ $
\end{Proof}
\
In order to apply this result, we will need to classify the
relevant class fields of $\mathfrak{p}athbf Q(\zeta_p)$ that can arise in
the situation of the preceding lemma.
\begin{prop}\label{prop:odd cft}
Let $p$ be an odd prime, and let $N$ be a prime distinct from $p$.
For any value of $i$, let $K_{(i)}$ denote the maximal abelian extension
of $\mathbf Q(\zeta_p)$ satisfying the following conditions: $K_{(i)}$ has conductor
dividing $\pi^2 N$; the Galois group $\mathrm{Gal}(K_{(i)}/\mathbf Q(\zeta_p))$ has exponent
$p$;
the Galois group $\mathrm{Gal}(\mathbf Q(\zeta_p)/\mathbf Q)$ acts on $\mathrm{Gal}(K_{(i)}/\mathbf Q(\zeta_p))$
through the $i$th power of the mod $p$ cyclotomic character $\overline{\chi}_p$.
Then:
\begin{itemize}
\item[(i)] $K_{(1)} = \mathbf Q(\zeta_p,N^{1/p})$;
\item[(ii)] $K_{(0)} = \begin{cases} \textrm{ the degree $p$ subextension
of $\mathbf Q(\zeta_p, \zeta_N)/\mathbf Q(\zeta_p)$ if } N \equiv 1 \pmod p \\
\mathbf Q(\zeta_p) \textrm{ otherwise }
\end{cases};$
\item[(iii)]
$K_{(-1)} = \begin{cases} \textrm{ a degree $p$ extension of
$\mathbf Q(\zeta_p)$ if } N^2 \equiv 1 \pmod p \\ \mathbf Q(\zeta_p)
\textrm{ otherwise }
\end{cases}.$
\end{itemize}
\end{prop}
\begin{Proof}
Let $E_{(i)}$ denote the unramified extension of
$\mathfrak{p}athbf Q(\zeta_p)$ of exponent $p$ corresponding to
the maximal elementary $p$-abelian quotient of
the class group of $\mathfrak{p}athbf Q(\zeta_p)$ on which
$\mathfrak{p}athrm{Gal}(\mathfrak{p}athbf Q(\zeta_p)/\mathfrak{p}athbf Q)$ acts through $\overlineerline{\chi}_p^i$.
Then we have the short exact sequence of abelian
Galois groups
$$
0 \rightarrow \mathfrak{p}athrm{Gal}(K_{(i)}/E_{(i)}) \rightarrow
\mathfrak{p}athrm{Gal}(K_{(i)}/\mathfrak{p}athbf Q(\zeta_p)) \rightarrow \mathfrak{p}athrm{Gal}(E_{(i)}/\mathfrak{p}athbf Q(\zeta_p))
\rightarrow 0.$$
Global class field theory allows us to compute the group
$\mathrm{Gal}(K_{(i)}/E_{(i)})$.
Indeed, it sits in the exact sequence
$$ (\mathbf Z[\zeta_p])^{\times}
\longrightarrow \left( \left(
\mathbf Z[\zeta_p]/\pi^2 \times \mathbf Z[\zeta_p]/N \right)^{\times} / p \right)_{(i)}
\longrightarrow \mathrm{Gal}(K_{(i)}/E_{(i)}) \longrightarrow 0;$$
here the subscript $(i)$ denotes the maximal quotient
on which $\mathfrak{p}athrm{Gal}(\mathfrak{p}athbf Q(\zeta_p)/\mathfrak{p}athbf Q)$ acts via $\overlineerline{\chi}_p^i$.
Since the reduction mod $\pi^2$ of
the global unit $\zeta_p = 1 + (\zeta_p -1)$ generates the $p$-power
part of $(\mathfrak{p}athbf Z[\zeta_p]/\pi^2)^{\times}$, we may eliminate
this factor from the second term of the preceding exact sequence.
If we fix a prime $\mathfrak{p}athfrak n$ over $N$ in $\mathfrak{p}athbf Z[\zeta_p]$,
then as in the proof of Lemma~\ref{four},
we obtain a surjection
$$(\mathfrak{p}athbf Z[\zeta_p]/\mathfrak{p}athfrak n)^{\times}/(p,N^{1-i} - 1) \rightarrow
\mathfrak{p}athrm{Gal}(K_{(i)}/E_{(i)}).$$
Consequently, we find that
$\mathfrak{p}athrm{Gal}(K_{(i)}/E_{(i)})$ is either trivial
(when $N^{(1-i)} \not\equiv 1 \pmod p$) or of order $p$
(when $N^{(1-i)} \equiv 1 \pmod p$).
Let us now consider the particular cases $i = 1,0,-1.$
The $1$, $0$ and $-1$ eigenspaces inside the class group of
$\mathfrak{p}athbf Q(\zeta_p)$ are trivial by Kummer theory,
abelian class field theory and Herbrand's theorem respectively.
Thus for these values of $i$, we have $E_{(i)} = \mathbf Q(\zeta_p)$,
and so the preceding paragraph yields a computation of
$\mathfrak{p}athrm{Gal}(K_{(i)}/\mathfrak{p}athbf Q(\zeta_p))$. The explicit descriptions
of $K_{(i)}$ in the case when $i = 1$ or $0$ are easily
verified, and so we leave this verification to the reader.
$
\square \ $
\end{Proof}
\
We are now in a position to determine the reduced Zariski tangent space
to the deformation functor $\mathfrak{p}athcal Def$. We will also record
some useful information regarding non-trivial elements of this tangent
space (assuming that they exist).
\begin{prop}\label{prop:tangent space at odd p}
If $p$ does not divide the numerator of $(N-1)/12$,
then $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^2) = 0$; otherwise,
$\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^2)$ is one dimensional over $\mathfrak{p}athbf F_p.$
Suppose for the remainder of the statement of the proposition
that we are in the second case,
and let
$(V,L,\rho)$ correspond to a
non-trivial element of $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^2)$.
(i) If as above $F$ denotes the extension of $\mathfrak{p}athbf Q$ cut out
by the kernel of $\rho$, then
$F$ is equal to the compositum $K_{(1)}K_{(0)}K_{(-1)}$
$($where the class fields $K_{(i)}$ of $\mathfrak{p}athbf Q(\zeta_p)$
are defined as in the statement of the previous
proposition$)$.
(ii) If $p = 3$, then $\mathrm{Gal}(F/\mathbf Q(\zeta_p)) \cong
\mathrm{Gal}(K_{(1)}/\mathbf Q(\zeta_p)) \times \mathrm{Gal}(K_{(0)}/\mathbf Q(\zeta_p)),$
and the image of an appropriately chosen generator of the first $($respectively
second$)$ factor under $\rho$ has the form
$\left(\begin{matrix} 1 & -r X \\ r X & 1\end{matrix} \right)$
$($respectively $\left(\begin{matrix} 1 + r X & 0 \\ 0 & 1 - r X \end{matrix} \right))$
for some $r \in \mathbf F_p^{\times}$.
(iii) If $p \geq 5$, then $\mathrm{Gal}(F/\mathbf Q(\zeta_p)) \cong
\mathrm{Gal}(K_{(1)}/\mathbf Q(\zeta_p)) \times \mathrm{Gal}(K_{(0)}/\mathbf Q(\zeta_p)) \times \mathrm{Gal}(K_{(-1)}/\mathbf Q(\zeta_p)),$
and the image of an appropriately chosen generator of the first $($respectively
second, respectively third$)$ factor under $\rho$ has the form
$\left(\begin{matrix} 1 & -r X \\ 0 & 1\end{matrix} \right)$
$($respectively $\left(\begin{matrix} 1 + r X & 0 \\ 0 & 1 - r X \end{matrix} \right),$
respectively
$\left(\begin{matrix} 1 & 0 \\ r X & 1\end{matrix} \right))$
for some $r \in \mathbf F_p^{\times}$.
(iv)
We have the following formulas for
the traces of $\rho$:
\begin{itemize}
\item[(iv.i)] If $\ell$ is a prime distinct from $N$ and $p$,
then $$\mathbf{Trace}(\rho(\mathbf{Frob}_{\ell}))
= \begin{cases}
1 + \ell \, \textrm{ if } \ell \equiv 1 \pmod p \textrm{ or } \ell \textrm{ is
a $p$th power} \pmod N \\ 1 + \ell + u X \textrm{ otherwise } \end{cases};$$
here $u$ denotes an element of $\mathbf F_p^{\times}$.
\item[(iv.ii)] If $\alpha_p$ denotes the eigenvalue of $\mathbf{Frob}_p$ on the
rank one $\mathbf F_p[X]/X^2$-module of $I_p$-coinvariants of $V$,
then
$$\alpha_p = \begin{cases} 1 \textrm{ if } p \textrm{ is a $p$th power mod }
N \\
1 + u X \textrm{ if not }\end{cases};$$
again, $u$ denotes an element of $\mathbf F_p^{\times}$.
\end{itemize}
\end{prop}
\begin{Proof}
Let $(V,L,\rho)$ be a non-trivial element of $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^2)$,
cutting out the extension $F$ of $\mathfrak{p}athbf Q$. As above, we choose the basis
of $V$ so that the conditions of Lemma~\ref{lemma:basis} are satisfied.
Since $G'$ is abelian,
we see that $F = F^{ab}$. Equation~\ref{eq:conjugation}, together
with Lemma~\ref{lemma:classf}, thus shows
that $F$ is contained in the compositum $K_{(1)}K_{(0)}K_{(-1)}.$
Lemma~\ref{lem:inertia at N} then shows that in fact
$F$ must be equal to this compositum, proving part~(i) of the proposition;
that furthermore,
each of the extensions $K_{(1)},$ $K_{(0)}$ and $K_{(-1)}$ of
$\mathbf Q(\zeta_p)$ must be non-trivial, and thus that
$N \equiv 1 \pmod p$, by Proposition~\ref{prop:odd cft};
and that either part~(ii) or part~(iii)
of the proposition is satisfied, depending
on whether $p = 3$ or $p \geq 5$. (We choose the generator of
each group $\mathfrak{p}athrm{Gal}(K_{(i)}/\mathfrak{p}athbf Q(\zeta_p))$ to be the image of some
fixed generator of the inertia group $I_N$.)
Suppose conversely that
$N \equiv 1 \pmod p$, so that each of $K_{(1)},$ $K_{(0)}$ and
$K_{(-1)}$ is a non-trivial extension of $\mathfrak{p}athbf Q(\zeta_p)$.
Write $F = K_{(1)}K_{(0)} K_{(-1)}$.
If we fix an element $r \in \mathfrak{p}athbf F_p^{\times},$ then
we may use the formulas of parts~(ii) and~(iii) to define
a representation $\rho:\mathfrak{p}athrm{Gal}(F/\mathfrak{p}athbf Q) \rightarrow G \subset \mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_p[X]/X^2)$.
If we let $L$ denote the line spanned by $(1,1)$, then this representation
will deform the representation $(V,L)$.
Thus it will provide an element of $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^2)$ provided that
it is finite at $p$. An argument as in the proof of Lemmas~\ref{three}
and~\ref{four} shows that this is automatically the case when
$p \geq 5,$ and holds provided $p$ divides $(N-1)/12$, when $p = 3$.
This establishes the initial claim of the proposition.
It remains to prove part~(iv) of the proposition.
Suppose first that $\ell$ is a prime distinct from $p$ and $N$.
We may write
$$\rho(\mathfrak{p}athbf Frob_{\ell}) = \elleft( \begin{matrix} \ell & 0 \\ 0 & 1 \end{matrix}
\right) \elleft( \begin{matrix} 1 + a X & b X \\ c X & 1 - a X \end{matrix}
\right),$$
for some elements $a,b,c \in \mathfrak{p}athbf F_p$. Thus
$\mathfrak{p}athbf Trace(\rho(\mathfrak{p}athbf Frob_{\ell})) = 1 + \ell + (\ell -1 ) a X.$
This is distinct from $1 + \ell$ if and only if $\ell \not\equiv 1
\mathfrak{p}od p$ and $a \neq 0$. The latter occurs if and only
if the primes over $\ell$ are not split in the extension
$K_{(0)}/\mathfrak{p}athbf Q(\zeta_p),$ which in turn is the case if and only if
$\ell$ is not a $p$th power $\pmod N$. (Here we have taken into
account the explicit description of $K_{(0)}$ provided by
Proposition~\ref{prop:odd cft}.) This proves part~(iv.i).
Since the vector $(1,0)$ spans the subspace
$M^{0}(\overlineerline{\mathfrak{p}athbf Q}_p)$ of $V$, the space of $I_p$-inertial coinvariants
is spanned by the image of the vector $(0,1)$. The Frobenius element
$\mathfrak{p}athbf Frob_p$ acts non-trivially on the image of this vector if and only
if the prime over $p$ is not split in the extension $K_{(0)}$ of
$\mathbf Q(\zeta_p)$, which is the case if and only if $p$ is not a $p$th
power $\pmod N$. This proves part~(iv.ii).
$
\square \ $
\end{Proof}
\
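For instance (continuing the illustrative example $N = 11$, $p = 5$ used earlier; again, nothing in the sequel depends on it): since $3$ is neither congruent to $1$ modulo $5$ nor a $5$th power modulo $11$ (the $5$th powers being $\{\pm 1\}$), part~(iv.i) of Proposition~\ref{prop:tangent space at odd p} gives $\mathbf{Trace}(\rho(\mathbf{Frob}_3)) = 4 + uX$ for some $u \in \mathbf F_5^{\times}$, consistent with the fact that $T_3 - 4 = -5$ generates $J^0$ in that example.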
As we will see below,
for $p \ge 3$, the rank $g_p + 1$ of $\mathfrak{p}athbf T/p$ over $\mathfrak{p}athbf F_p$ is no
longer explained by an abelian extension of number fields
(and hence by a single class group), as it is in the case $p = 2$,
but by certain more complicated solvable extensions.
However, the question of whether or not
$g_p=1$ is somewhat tractable.
Indeed, from Corollary~\ref{EPR} we deduce the following criterion.
\begin{lemma}\ellabel{lem:criterion}
The rank $g_p$ of the parabolic Hecke algebra
$\mathbf T^0/p$ over $\mathbf F_p$ is greater than one $($equivalently,
$\mathbf T^0 \neq \mathbf Z_p)$
if and only if there
exists a $(V,L,\rho)$ in $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^3)$ whose
traces generate $\mathfrak{p}athbf F_p[X]/X^3$. \end{lemma}
In order to apply this lemma, we now assume that $A = \mathfrak{p}athbf F_p[X]/X^3,$
so that $(V,L,\rho)$ lies in $\mathfrak{p}athcal Def(\mathfrak{p}athbf F_p[X]/X^3)$. As always,
we assume that the basis of $V$ is chosen so as to satisfy the
conditions of Lemma~\ref{lemma:basis}.
We let
$\rho_n$ denote the composition of $\rho$ with
the natural surjection
$\mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_p[X]/X^3) \rightarrow \mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_p[X]/X^n)$,
for $n \elleq 3$.
Requiring the traces of $\rho$ to generate $\mathfrak{p}athbf F_p[X]/X^3$
is equivalent to requiring the traces of $\rho_2$ to
generate $\mathbf F_p[X]/X^2,$ which in turn is equivalent
to requiring that $\rho_2$ be a non-trivial deformation.
We assume this to be the case.
Also, we let $F_n$ denote the extension cut out
by the kernel of $\rho_n$. Thus
$F_1 =\mathfrak{p}athbf Q(\zeta_p)$, and $F_3 = F$.
Since we are assuming that $\rho_2$ is non-trivial,
Proposition~\ref{prop:tangent space at odd p}
shows that $p$ divides the numerator of $(N-1)/12$,
and that $F_2$ is equal to the compositum of the
class fields $K_{(i)}$ (for $i = 1,0,-1$).
\begin{lemma}\ellabel{lemma:shape of F_2}
(i) We have $F_2 = F^{ab}$, and $\mathfrak{p}athrm{Gal}(F_2/F_1) \cong (\mathfrak{p}athbf Z/p)^2$
$($respectively $(\mathfrak{p}athbf Z/p)^3)$ if $p = 3$ $($respectively $p \geq 5)$.
(ii) $F/F_2$ is unramified at $N$.
\end{lemma}
\begin{Proof}
Since $p \geq 3,$ we see that $G'$ has exponent $p$.
Lemma~\ref{lemma:classf} and equation~(\ref{eq:conjugation})
then imply that
$F^{ab} \subset K_{(1)}K_{(0)} K_{(-1)} = F_2$.
Certainly $F_2 \subset F^{ab}$, and so we have the equality stated
in~(i).
The claims regarding $\mathfrak{p}athrm{Gal}(F_2/F_1)$ follow from
parts~(ii) and~(iii) of Proposition~\ref{prop:tangent space at odd p}.
Part~(ii) follows from Lemma~\ref{lem:inertia at N}
and the fact that $F_2/F_1$ is ramified at $N$.
$
\square \ $
\end{Proof}
\
We now separate our analysis into two cases: $p = 3$, and $p \geq 5$.
\subsection{$p = 3$}
Throughout this subsection we set $p = 3$.
\begin{lemma} The extension $F/F_2$ is unramified everywhere
and has degree exactly three.
\end{lemma}
\begin{Proof}
The image of
$\rho_{| G_{\mathfrak{p}athbf Q(\sqrt{-3})}}$
is a subgroup of
$G' = \ker(\mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_3[X]/X^3) \rightarrow \mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_3))$
whose image in $\mathfrak{p}athrm{GL}_2(\mathfrak{p}athbf F_3[X]/X^2)$ is isomorphic to
$(\mathfrak{p}athbf Z/3)^2$, by Lemma~\ref{lemma:shape of F_2}. Thus the commutator
subgroup of the image of
$\rho_{| G_{\mathfrak{p}athbf Q(\sqrt{-3})}}$
is either trivial or cyclic of order three.
Thus the extension $F/F_2$ has degree at most three.
Consider the representation $\rho_2$, which factors through
$\mathfrak{p}athrm{Gal}(F_2/\mathfrak{p}athbf Q)$. By assumption this yields a non-trivial element of
$\mathfrak{p}athcal Def(\mathfrak{p}athbf F_3[X]/X^2)$.
Part~(ii) of Proposition~\ref{prop:tangent space at odd p} thus shows
that the image under $\rho_2$ of the element of order three coming
from the $\overlineerline{\chi}_p^1$ extension $K_{(1)} = \mathfrak{p}athbf Q(\sqrt{-3},\sqrt[3]{N})$
of $\mathfrak{p}athbf Q(\sqrt{-3})$
must be of the form
$$\left(\begin{matrix} 1 & -r X \\ r X & 1 \end{matrix} \right),$$
and that the image under $\rho_2$ of
the element of order three coming from the
$\overline{\chi}_p^{0}$ extension $K_{(0)}$
of $\mathbf Q(\sqrt{-3})$ is of the form
$$\left(\begin{matrix} 1+rX & 0 \\ 0 & 1-rX \end{matrix} \right),$$
for some $r \in \mathbf F_3^{\times}$.
Lifting these two elements (in any way) to $\mathrm{GL}_2(\mathbf F_3[X]/X^3)$ and
taking their commutator, we produce a new element in $\mathrm{Gal}(F/\mathbf Q)$
which has a lower left-hand entry equal to $r^2 X^2 = X^2$.
This element cannot be in the decomposition group at $3$ because
it doesn't preserve $M^0(\overline{\mathbf Q}_3)$, which is generated by
$(1, 0)$. Thus
$F/F_2$ has order exactly three and is unramified at
all primes above three. Part~(ii) of Lemma~\ref{lemma:shape of F_2}
shows that the extension $F/F_2$ is also unramified at all
primes above $N$, and the lemma is proved.
$
\square \ $
\end{Proof}
\
Let $K = \mathbf Q(\sqrt[3]{N})$, and as above write $K_{(1)}
= K^{gal} = K(\sqrt{-3})$.
The extension $F/K_{(1)}$ has degree $9$, and $\mathrm{Gal}(F/K_{(1)}) = (\mathbf Z/3\mathbf Z)^2$.
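For the degree count, note that $[F:\mathbf Q]=[F:F_2]\,[F_2:F_1]\,[F_1:\mathbf Q]=3\cdot 9\cdot 2=54$, while $[K_{(1)}:\mathbf Q]=6$.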
Moreover, $F/K_{(1)}$ is unramified everywhere. The following lemma
shows that the existence of such an extension $F$
is sufficient for the construction of a deformation $\rho$
of the type considered here.
This completes the proof of part one of each of
Theorems~\ref{theorem:oddp} and~\ref{theorem:merel}.
\begin{lemma} \label{lemma:hwl}
If $N \equiv 1 \mod 9$, then
the class group of $K_{(1)}=\mathbf Q(\sqrt{-3},\sqrt[3]{N})$ has
$3$-rank greater than or equal \emph{(}equivalently, equal\emph{)} to
two if and only if there exists a surjection
$R \rightarrow \mathbf F_3[X]/X^3$; the kernel of the corresponding deformation
$\rho:G_{\mathbf Q} \rightarrow \mathrm{GL}_2(\mathbf F_p[X]/X^3)$ then
cuts out the $(3,3)$ unramified class field $F$ of $K_{(1)}$.
\end{lemma}
\begin{Proof}
The preceding discussion establishes the ``if'' claim, and so
it suffices to prove the ``only if'' claim.
Genus theory and a consideration of
the ambiguous classes predict that the 3-rank of the class group
of $K_{(1)}$ is either
one or two, and hence by assumption this rank
is exactly two (see for example \cite{Gerth}).
We let $F$ denote the corresponding
unramified $(3,3)$-extension of $K_{(1)}$,
and (as above) let $F^{ab}$ denote the
unique subextension of $F$ abelian over $\mathbf Q(\sqrt{-3})$.
It is easily checked that $F^{ab}$ is in fact
the maximal abelian 3-power extension of $\mathbf Q(\sqrt{-3})$
that is unramified over $K_{(1)}$,
and that $F^{ab} = K_{(1)}K_{(0)}$.
Proposition~\ref{prop:tangent space at odd p} yields a Galois
representation $\mathrm{Gal}(F^{ab}/\mathbf Q) \rightarrow \mathrm{GL}_2(\mathbf F_3[X]/X^2)$,
while Lemma~\ref{lem:cf structure} below shows
that $\mathrm{Gal}(F/\mathbf Q(\sqrt{-3}))$ is the unique non-abelian group of
order $27$ and of exponent three.
It is then easy to see that one can lift the representation of $\mathrm{Gal}(F^{ab}/\mathbf Q)$
to a representation $\rho: \mathrm{Gal}(F/\mathbf Q) \rightarrow \mathrm{GL}_2(\mathbf F_3[X]/X^3).$
Furthermore one checks that for any such lift, the image of $I_N$
fixes an appropriate line.
To show that we have constructed an element of $\mathcal Def(\mathbf F_3[X]/X^3)$,
as required, it remains to
show that this representation
extends to a finite flat group scheme at $3$.
For this, it suffices to work over the maximal unramified extension
of $\mathbf Q_3$. Since $F/\mathbf Q(\sqrt{-3})$ is unramified at $3$
(because $N \equiv 1 \mod 9$),
the representation $\rho |_{\mathbf Q^{ur}_3}$ factors through
a group of order two, and explicitly prolongs to a product of
trivial and multiplicative group schemes. Thus $\rho$ is indeed
finite at the prime $3$.
$
\square \ $
\end{Proof}
\begin{lemma}\label{lem:cf structure}
The Galois group $\mathrm{Gal}(F/\mathbf Q(\sqrt{-3}))$ is the (unique up to isomorphism)
non-abelian group of
order $27$ and of exponent three.
\end{lemma}
\begin{Proof}
Let $\Gamma = \mathrm{Gal}(K_{(1)}/\mathbf Q(\sqrt{-3})) = \langle \gamma \rangle$.
The $3$-class group $H$ of $K_{(1)}$ is naturally a $\mathbf Z_3[\Gamma]$-module.
From class field theory we have that $H/(\gamma-1)H$ is isomorphic to
the Galois group over $K_{(1)}$
of the maximal abelian 3-extension of $\mathbf Q(\sqrt{-3})$
that is unramified over $K_{(1)}$; that is, to $\mathrm{Gal}(F^{ab}/K_{(1)})$,
a cyclic group of order 3.
Thus by Nakayama's lemma $H$ is a cyclic $\mathbf Z_3[\Gamma]$-module.
By class field theory, the quotient $H/3$ is isomorphic to $\mathrm{Gal}(F/K_{(1)})$.
Note that $\mathrm{Gal}(F/\mathbf Q(\sqrt{-3}))$ sits in the exact sequence:
$$0 \rightarrow \mathrm{Gal}(F/K_{(1)}) \longrightarrow
\mathrm{Gal}(F/\mathbf Q(\sqrt{-3})) \longrightarrow \mathrm{Gal}(K_{(1)}/
\mathbf Q(\sqrt{-3})) \longrightarrow 0,$$
which is an extension of $\Gamma \cong \mathbf Z/3\mathbf Z$ by $H/3 \cong (\mathbf Z/3\mathbf Z)^2$.
The action via conjugation of $\Gamma$
on $H/3$ is non-trivial, since otherwise $H$ could
not be cyclic as a $\Gamma$-module. Already this shows that
$\mathrm{Gal}(F/\mathbf Q(\sqrt{-3}))$ is one of the two non-abelian groups of
order $27$.
To pin down the group precisely, we must show that it has
exponent three. For this, it suffices to find a
splitting of the above exact sequence (a section from $\Gamma = \mathrm{Gal}(K_{(1)}/
\mathbf Q(\sqrt{-3}))$ back to $\mathrm{Gal}(F/\mathbf Q(\sqrt{-3}))$).
Since the inertia group above $N$ in $\mathrm{Gal}(F/\mathbf Q(\sqrt{-3}))$ has
order exactly three, and maps isomorphically to
$\Gamma$, the required splitting exists.
$
\square \ $
\end{Proof}
\
The final result of this section provides a relation between
the rank of the $3$-class group of $K_{(1)}$ and the power of $3$
dividing the class number of $K$.
\begin{lemma} \label{lemma:ump}
The $3$-class group of $K_{(1)} = \mathbf Q(\sqrt{-3},\sqrt[3]{N})$ has
$3$-rank two if the order of the $3$-class group of $K = \mathbf Q(\sqrt[3]{N})$ $($which
is cyclic$)$ is divisible by nine.
\end{lemma}
\begin{Proof}
One has a class number relation between
$K$ and $K_{(1)}$ given by
$h_{K_{(1)}} = h^2_K/3 \cdot q$, where $q$ is the index of
the units in $K_{(1)}$ coming from $K$, $K^{\gamma}$, and $\mathbf Q(\sqrt{-3})$
inside the full unit group. (Here, as above, $\gamma$ denotes a generator
of the cyclic group $\Gamma = \mathrm{Gal}(K_{(1)}/\mathbf Q(\sqrt{-3}))$.)
If $9 | h_K$, then
$27 | h_{K_{(1)}}$. Recall from the proof of the previous lemma that
the $3$-part $H$ of the class group is a cyclic $\mathbf Z_3[\Gamma]$-module,
and satisfies the condition that $H/(\gamma - 1)H$ is cyclic of order 3.
Now $\mathbf Z_3[\Gamma]$
admits no quotients $H'$ that are cyclic groups of order $27$ with the property
that $H'/(\gamma - 1)H'$ is of order $3$.
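Indeed, $\gamma$ would act on such a quotient $H'$ through a unit of $\mathbf Z/27$ of
multiplicative order dividing three, and the only such units are $1$, $10$ and $19$, each
congruent to $1 \mod 9$; hence $(\gamma - 1)H' \subseteq 9H'$, so that $H'/(\gamma - 1)H'$
would have order at least $9$.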
It follows that if $H$ is of order divisible by $27$, then it must be
non-cyclic, as claimed.
$
\square \ $
\end{Proof}
\
We conjecture that the converse to the preceding lemma is also true.
To prove this, it would suffice to show that whenever
$3 \| h_K$, the unit index $q$ is always equal to one. We have
verified this for all primes less than $50$,$000$ for which $3 \| h_K$.
\subsection{$p \ge 5$}
Throughout this section we assume that $p \geq 5,$ and that
we are given a deformation to $\mathfrak{p}athbf F_p[X]/X^3$ as in the
discussion following Lemma~\ref{lem:criterion}.
Proposition~\ref{prop:tangent space at odd p} and Lemma~\ref{lemma:shape of F_2}
together show that $F_2 = K_{(1)} K_{(0)} K_{(-1)},$
that $\mathrm{Gal}(F_2/F_1) = (\mathbf Z/p)^3$, and that $F_2 = F^{ab}$.
We see that $F_2/F_1$ is unramified at $p$ if and
only if $N \equiv 1 \mod p^2$.
It follows from our determination of $F_2$ that $\mathrm{Gal}(F/F_1)$
is the full kernel of the map from
$\mathrm{SL}_2(\mathbf F_p[x]/x^3)$ to $\mathrm{SL}_2(\mathbf F_p)$, since all the elements
of
$$\mathrm{Ker}(\mathrm{SL}_2(\mathbf F_p[x]/x^3) \rightarrow \mathrm{SL}_2(\mathbf F_p[x]/x^2))$$
are generated by commutators of lifts of elements of
$\mathrm{Ker}(\mathrm{SL}_2(\mathbf F_p[x]/x^2) \rightarrow \mathrm{SL}_2(\mathbf F_p))$.
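To verify the last assertion, note that for any $A, B \in M_2(\mathbf F_p)$ one has the
elementary congruence
$$(1+xA)(1+xB)(1+xA)^{-1}(1+xB)^{-1} \equiv 1 + x^2(AB - BA) \mod x^3,$$
and the same congruence holds for arbitrary lifts, since their $x^2$-terms are central
modulo $x^3$. As $[\mathfrak{sl}_2(\mathbf F_p),\mathfrak{sl}_2(\mathbf F_p)] =
\mathfrak{sl}_2(\mathbf F_p)$ for odd $p$, every element $1 + x^2C$ with
$\mathrm{tr}(C) = 0$ is a product of such commutators.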
\
\begin{lemma} \label{lemma:messy} If $E$ is a degree $p$
Galois extension of $F_2$ inside $F_3$ on which the matrix
$$\left(\begin{matrix} 1 + x^2 & 0 \\ 0 & 1 - x^2 \end{matrix}\right)$$
acts non-trivially, then $E/F_2$
is everywhere unramified.
\end{lemma}
\begin{Proof} Part~(ii) of Lemma~\ref{lemma:shape of F_2} shows
that this extension is unramified at primes above $N$.
To see that it is unramified at primes above $p$, it
suffices to note that the matrix
$\left(\begin{matrix} 1 + x^2 & 0 \\ 0 & 1 - x^2 \end{matrix}\right)$
does not fix the vector $(1,0)$ (which spans $M^0(\overline{\mathbf Q}_p)$).
$
\square \ $ \end{Proof}
\
Let $K = \mathbf Q(N^{1/p})$ and $L = K^{gal} = K(\zeta_p)$.
\begin{lemma}\label{lemma:final} The class group of $K$
has $p$-rank at least two.
\end{lemma}
\begin{Proof}
Let us first consider the group $\mathrm{Gal}(F/K)$.
One sees that
$\mathrm{Gal}(F/K)^{ab} \cong (\mathbf Z/p \mathbf Z)^2\times (\mathbf Z/p)^{\times}$ is explicitly generated by the images
of
$$\left(\begin{matrix} 1 + x^k & 0 \\ 0 & (1+x^k)^{-1} \end{matrix}\right),$$
for $k=1,2$, together with the image of $\Delta$.
We let $H$ be the $(p,p)$-extension of $K$ contained in $F$ that is fixed by
$\Delta$. We will show that $H$ is unramified over $K$.
We may write $H$ as a compositum $H = H_1 H_2$, where
for each of $k=1,2$, we let $H_k$ denote a $p$-extension of $K$ contained in $F$,
on which the matrix
$\left(\begin{matrix} 1 + x^k & 0 \\ 0 & (1+x^k)^{-1} \end{matrix}\right)$
acts non-trivially.
If we let $\zeta^+_N$ denote an element of $\mathbf Q(\zeta_N)$ that
generates the degree $p$ subextension over $\mathbf Q$ (so that
$K_{(0)} = \mathbf Q(\zeta_p,\zeta_N^+)$),
then we may take $H_1$ to be
$K(\zeta^+_N)$, which is clearly
unramified everywhere over $K$ (it is the genus field).
We will show that $H_2$ is also unramified
everywhere over $K$.
Lemma~\ref{lem:inertia at N} takes care of the primes above $N$,
and so it remains to treat the primes above $p$.
We begin by proving that $H_2(\zeta_p)/L$ is unramified.
Lemma~\ref{lemma:messy} shows that
the extension $H_2 \cdot F_2/F_2$ is unramified.
Since $F_2/L$ is unramified, it follows that $H_2(\zeta_p)/L$
is unramified, as claimed.
We now use the fact that $H_2(\zeta_p)/L$
is unramified to show that $H_2/K$ is unramified.
We consider two cases.
Suppose first that $p \| N - 1.$
Then $K$ is totally ramified at $p$,
and thus if $H_2/K$ is ramified
we deduce that since $H_2$ is \emph{Galois} over $K$,
$e_p(H_2) = p^2$, contradicting the fact that
$H_2(\zeta_p)/L$ is unramified.
If instead $N \equiv 1
\mod p^2$, then things are even easier: if
$H_2/K$
is ramified at at least one prime $\mathfrak{p}$ above $p$,
then again using the fact that $H_2/K$ is
Galois we deduce that
$p | e_{\mathfrak{p}}(H_2)$. Yet $p$ is tamely ramified
in $L$ and therefore also in $H_2(\zeta_p)$. Thus
$H_2/K$ is unramified everywhere, and
the class group of $K$ has $p$-rank at least two. $
\square \ $
\end{Proof}
\
This completes
the proof of part two of Theorems~\ref{theorem:oddp}
and~\ref{theorem:merel}.
We expect (based on the numerical evidence) that the condition that
the class group of $K$ has $p$-rank two
is equivalent
to the existence of an appropriate group scheme,
and thus to $g_p > 1$.
Part of this could perhaps be proved by
more sophisticated versions of Lemmas~\ref{lemma:hwl},
\ref{lem:cf structure},
and~\ref{lemma:ump}.
\section{Examples}
The first example in Mazur's
table \cite{eisenstein} where $e_2 > 1$ occurs when
$N = 41$. The class group of $\mathbf Q(\sqrt{-41})$ is
$\mathbf Z/8\mathbf Z$. Thus one has $e_2 = 3$.
Using {\tt gp} and William Stein's programmes one can verify
that the class group of $\mathbf Q(\sqrt{-21929})$ is
$\mathbf Z/256\mathbf Z$ and that $e_2 = 127$ for $N = 21929$.
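For instance, the first of these class groups can be recomputed directly in {\tt gp}; the
following two lines are a minimal check (independent of the programmes mentioned above).
\begin{verbatim}
K = bnfinit(x^2 + 41);  \\ the imaginary quadratic field Q(sqrt(-41))
K.cyc                   \\ returns [8]: the class group is Z/8Z, so e_2 = 3
\end{verbatim}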
In Mazur's table, $e_3$ always equals $1$ or $2$.
One has to go quite some distance before finding
an example where $e_3 > 2$.
For $N = 2143$, however, one has $e_3 = 3$. This
is related to the fact that $2143$ is the smallest prime
congruent
to $1 \mod 9$
such that the class group of the corresponding extension
$K_{(0)}$ of $\mathbf Q(\sqrt{-3})$ (in the terminology of
Proposition~\ref{prop:odd cft})
has an element of order $9$. The corresponding class field
contributes to the maximal unramified \emph{solvable} extension
of $K = \mathbf Q(\sqrt[3]{2143})$. Finally, let us note
that when $p=3$, Lemmas~\ref{lemma:hwl} and~\ref{lemma:ump}
show that the value of $g_p$ is related to the
\emph{size} of the $3$-power part of the class group of $\mathbf Q(\sqrt[3]{N})$,
whereas for $p \ge 5$, Lemma~\ref{lemma:final} shows
that this value is related to
the $p$-rank of the class group of $\mathbf Q(\sqrt[p]{N})$.
As an illustration, when $N = 4261$, one computes that
the class group
of $\mathbf Q(\sqrt[5]{4261})$ is $\mathbf Z/25\mathbf Z$. However, since the $5$-rank
of $\mathbf Z/25\mathbf Z$ is one, it follows that $e_5 = 1$.
\noindent \it Email addresses\rm:\tt \ [email protected]
\hskip 22mm \tt \ [email protected]
\end{document}
\begin{document}
\title{Rational polynomials of simple type}
\author{Walter D. Neumann}
\address{Department of Mathematics\\Barnard College\\Columbia University\\
NY 10027\\USA}
\email{[email protected]}
\author{Paul Norbury}
\address{Department of Mathematics and Statistics\\University of
Melbourne\\Australia 3010}
\email{[email protected]}
\keywords{}
\subjclass{14H20, 32S50, 57M25}
\thanks{This research was supported by the Australian Research Council}
\begin{abstract}
We classify two-variable polynomials which are rational of simple
type. These are precisely the two-variable polynomials with trivial
homological monodromy.
\end{abstract}
\maketitle
\section{Introduction}
A polynomial map $f\colon{\mathbb C}^2\to {\mathbb C}$ is \emph{rational} if its
\regular{} fibre, and hence every fibre, is of genus
zero. It is of \emph{simple type} if, when extended to a morphism
$\tilde f\colon X\to\bbP^1$ of a compactification $X$ of ${\mathbb C}^2$, the
restriction of $\tilde f$ to each curve $C$ of the compactification
divisor $D=X-{\mathbb C}^2$ is either degree $0$ or $1$. The curves $C$ on
which $\tilde f$ is non-constant are called \emph{horizontal curves},
so one says briefly ``each horizontal curve is degree 1''.
The classification of rational polynomials of simple type gained some
new interest through the result of Cassou-Nogues, Artal-Bartolo, and
Dimca \cite{artal-cassou-dimca} that they are precisely the
polynomials whose homological monodromy is trivial (it suffices that
the homological monodromy at infinity be trivial by an observation of
Dimca).
A classification appeared in \cite{MSuGen}, but it is incomplete. It
implicitly assumes trivial \emph{geometric} monodromy (on page 346,
lines 10--11). Trivial geometric monodromy implies isotriviality
(\regular{} fibres pairwise isomorphic) and turns out to be equivalent
to it for rational polynomials of simple type. The classification in
the non-isotrivial case was announced in the final section of
\cite{NNoMo}. The main purpose of this paper is to prove it. But we
recently discovered that there are also isotrivial rational
polynomials that are not in \cite{MSuGen}, so we have added a
classification for the isotrivial case using our methods. This case
can also be derived from Kaliman's classification \cite{KalPol} of
\emph{all} isotrivial polynomials. The fact that his list includes
rational polynomials of simple type that are not in \cite{MSuGen}
appears not to have been noticed before (it also includes rational
polynomials not of simple type).
In general, the classification of polynomial maps
$f:{\mathbb C}^2\rightarrow{\mathbb C}$ is an open problem with extremely rich
structure. One notable result is the theorem of Abhyankar-Moh and
Suzuki \cite{AMoEmb,SuzPro} which classifies all polynomials with one
fibre isomorphic to ${\mathbb C}$. The analogous result for the next
simplest case, where one fibre is isomorphic to ${\mathbb C}^*$, is open
except in special cases when the genus of the \regular{} fibre of the
polynomial is given. Kaliman \cite{KalRat} classifies all {rational}
polynomials with one fibre isomorphic to ${\mathbb C}^*$.
The basic tool we use in our study of rational polynomials is to
associate to any rational polynomial $f:{\mathbb C}^2\rightarrow{\mathbb C}$ a
compactification $X$ of ${\mathbb C}^2$ on which $f$ extends to a
well-defined map $\tilde{f}:X\rightarrow\bbP^1$ together with a map
$X\rightarrow\bbP^1\times\bbP^1$. The map to $\bbP^1\times\bbP^1$ is
not in general canonical. We will exploit the fact that for a
particular class of rational polynomials, there is an almost canonical
choice.
Although we give explicit polynomials, the classification is initially
presented in terms of the splice diagram for the link at infinity of a
\regular{} fibre of the polynomial (Theorem \ref{th:main}). This is
called the \emph{regular splice diagram} for the polynomial (``regular'' here means generic). See
\cite{NeuCom} for a description of the link at infinity and its splice
diagram. The regular splice diagram determines the embedded topology
of a \regular{} fibre and the degree of each horizontal curve. Hence
we can speak of a ``rational splice diagram of simple type''.
The first author has asked if the moduli space of polynomials with
given regular splice diagram is connected. For a rational splice
diagram of simple type we find the answer is ``yes''. We describe the
moduli space for our polynomials in Theorem \ref{th:defspace} and use
it to help give explicit normal forms for the polynomials. We also
describe how the topology of the irregular fibres varies over the
moduli space.
The more general problem of classifying all rational polynomials,
which would cover much of the work mentioned above, is still an open
and interesting problem. It is closely related to the problem of
classifying birational morphisms of the complex plane since a
polynomial is rational if and only if it is one coordinate of a
birational map of the complex plane. Russell \cite{RusGoo} calls this
a ``field generator'' and defines a good field generator to be a
rational polynomial that is one coordinate of a birational morphism of
the complex plane. A rational polynomial is good precisely when its
resolution has at least one degree one horizontal curve,
\cite{RusGoo}. Daigle \cite{DaiBir} studies birational morphisms
${\mathbb C}^2\rightarrow{\mathbb C}^2$ by associating to a compactification $X$ of
the domain plane a canonical map $X\rightarrow\bbP^2$. A birational
morphism is then given by a set of curves and points in $\bbP^2$
indicating where the map is not one-to-one. The approach we use in
this paper is similar.
The full list of rational polynomials $f\colon{\mathbb C}^2\to{\mathbb C}$ of simple
type is as follows. We list them up to polynomial automorphisms of
domain ${\mathbb C}^2$ and range ${\mathbb C}$ (so-called ``right-left
equivalence'').
\begin{thm}\label{th:summary}
Up to right-left equivalence a rational polynomial $f(x,y)$ of
simple type has one of the following forms $f_i(x,y)$, $i=1$, $2$,
or $3$.
\begin{align*}
f_1(x,y)=&x^{q_1}s^q+x^{p_1}s^p
\prod_{i=1}^{r-1}(\beta_i-x^{q_1}s^q)^{a_i}&(r\ge2)\phantom{.}\\
f_2(x,y)=&x^{p_1}s^p
\prod_{i=1}^{r-1}(\beta_i-x^{q_1}s^q)^{a_i}&(r\ge1)\phantom{.}\\
f_3(x,y)=&y\prod_{i=1}^{r-1}(\beta_i-x)^{a_i}+h(x)&(r\ge1).
\end{align*}
Here:
$0\le q_1<q,\quad 0\le p_1<p,\quad \left|
\begin{matrix}
p&p_1\\q&q_1
\end{matrix}\right|=\pm1 $;
$s=yx^k+P(x),\text{ with $k\ge1$ and $P(x)$ a polynomial of
degree $<k$}$;
$a_1,\dots,a_{r-1}\text{ are positive integers}$;
$\beta_1,\dots,\beta_{r-1}\text{ are distinct elements of }{\mathbb C}^*$;
$h(x)\text{ is a polynomial of degree }< \sum_1^{r-1}a_i$.
Moreover, if $g_1(x,y)=g_2(x,y)=x^{q_1}s^q$ and $g_3(x,y)=x$ then
$(f_i,g_i)\colon{\mathbb C}^2\to{\mathbb C}^2$ is a birational morphism for
$i=1,2,3$. In fact, $g_i$ maps a \regular{} fibre $f_i^{-1}(t)$
biholomorphically to ${\mathbb C}-\{0,t,\beta_1,\dots,\beta_{r-1}\}$,
${\mathbb C}-\{0,\beta_1,\dots,\beta_{r-1}\}$, or
${\mathbb C}-\{\beta_1,\dots,\beta_{r-1}\}$, according as $i=1,2,3$. Thus
$f_1$ is not isotrivial and $f_2$ and $f_3$ are.
\end{thm}
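For concreteness, taking $r=2$, $a_1=1$, $\beta_1=1$ and $h\equiv 0$ in the third family gives
$f_3(x,y)=y(1-x)$; its \regular{} fibre $\{y(1-x)=t\}$, $t\ne0$, is parametrised by
$x\mapsto(x,t/(1-x))$ and is therefore a copy of ${\mathbb C}-\{1\}$, in agreement with the
description of $g_3$ above.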
In \cite{MSuGen} the isotrivial case is subdivided into seven
subcases, but these do not include any $f_2(x,y)$ with $p,q,p_1,q_1$
all $>1$.
\section{Resolution}
Given a polynomial $f:{\mathbb C}^2\rightarrow{\mathbb C}$, extend it to a map
$\bar{f}:\bbP^2\rightarrow\bbP^1$ and resolve the points of
indeterminacy to get a regular map $\tilde{f}:X\rightarrow\bbP^1$ that
coincides with $f$ on ${\mathbb C}^2\subset X$. We call $D=X-{\mathbb C}^2$ the
divisor at infinity. The divisor $D$ consists of a connected union of
rational curves. An irreducible component $E$ of $D$ is {\em
horizontal} if the restriction of $\tilde{f}$ to $E$ is not a
constant mapping. The \emph{degree of a horizontal curve} $E$ is the
degree of the restriction $\tilde{f}|E$. Although the
compactification defined above is not unique, the horizontal curves
are essentially independent of choice.
Note that a \regular{} fibre $F_c:=f^{-1}(c)$ is a punctured Riemann
surface with punctures precisely where $\overline F_c$ meets a
horizontal curve. Thus $f$ has simple type if and only if $\overline
F_c$ meets each horizontal curve exactly once, so the number of
punctures equals the number of horizontal curves. For non-simple type
the number of punctures will exceed the number of horizontal curves.
We say that a rational polynomial is {\em ample} if it has at least
three degree one horizontal curves. Those polynomials with no degree
one horizontal curves, or bad field generators \cite{RusGoo}, are
examples of polynomials that are not ample. The classification of
Kaliman \cite{KalRat} mentioned in the introduction gives examples of
polynomials with exactly one degree one horizontal curve so they are
also not ample. Nevertheless, ample rational polynomials will be the
focus of our study in this paper.
We will classify all ample rational polynomials that are also of
simple type.
\section{Curves in $\bbP^1\times\bbP^1$.} \label{sec:cur}
If $\tilde{f}:X\rightarrow\bbP^1$ is a regular map with rational
fibres then $X$ can be blown down to a Hirzebruch surface, $S$, so
that $\tilde{f}$ is given by the composition of the sequence of
blow-downs $X\rightarrow S$ with the natural map $S\rightarrow\bbP^1$;
see \cite{BavdV}
for details. Moreover, by first replacing
$X$ by a blown-up version of $X$ if necessary, we may assume that
$S=\bbP^1\times\bbP^1$
and the natural map to $\bbP^1$ is projection onto the first factor.
A rational polynomial $f\colon{\mathbb C}^2\to{\mathbb C}$, once compactified to
$\tilde f\colon X={\mathbb C}^2\cup D\to \bbP^1$, may thus be given by
$\bbP^1\times\bbP^1$ together with instructions how to blow up
$\bbP^1\times\bbP^1$ to get $X$ and how to determine $D$ in $X$. For
this we give the following data:
\begin{itemize}
\item a collection $\mathcal C$ of irreducible rational curves in
$\bbP^1\times\bbP^1$ including $L_\infty:=\infty\times\bbP^1$;
\item a set of instructions on how to blow up $\bbP^1\times\bbP^1$ to
obtain $X$;
\item a sub-collection $\mathcal E$ of the curves of the exceptional
divisor of $X\to\bbP^1\times\bbP^1$;
\end{itemize}
satisfying the condition:
\begin{itemize}
\item If $D$ is the union of the curves of $\mathcal E$ and the proper
transforms of the curves of $\mathcal C$ then $X-D\isom{\mathbb C}^2$;
\end{itemize}
If $C\subset\bbP^1\times\bbP^1$ is an irreducible algebraic curve we
associate to it the pair of integers $(m,n)$ given by degrees of the
two projections of $C$ to the factors of $\bbP^1\times\bbP^1$.
Equivalently, $(m,n)$ is the homology class of $C$ in terms of
$H_2(\bbP^1\times\bbP^1)=\bbZ\oplus\bbZ$. We call $C$ an $(m,n)$
curve. The intersection number of an $(m,n)$ curve $C$ and an
$(m',n')$ curve $C'$ is $C\cdot C'=mn'+nm'$.
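For example, two distinct $(1,0)$ curves are disjoint, a $(1,0)$ curve and a $(0,1)$ curve
meet in a single point, and two distinct $(1,1)$ curves meet in $1\cdot1+1\cdot1=2$ points,
counted with multiplicity; this last count is the one used repeatedly below.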
The above collection $\mathcal C$ of curves in $\bbP^1\times\bbP^1$ will
consist of some \emph{vertical curves} (that is, $(0,1)$ curves; one
of these is $L_\infty$) and some other curves. These non-vertical
curves give the horizontal curves for $f$, so they all have $m=1$ if
$f$ is of simple type. Note that a $(1,n)$ curve is necessarily smooth
and rational (since it is the graph of a morphism $\bbP^1\to\bbP^1$).
The image in $\bbP^1\times\bbP^1$ of the fibre over infinity is the
$(0,1)$ curve $L_{\infty}$ and the image of a degree $m$ horizontal
curve is an $(m,n)$ curve. This view allows one to see as follows a
geometric proof of the result of Russell \cite{RusGoo} that a rational
polynomial $f$ is good precisely when its resolution has at least one
degree one horizontal curve. A degree one horizontal curve
for $f$ has image in $\bbP^1\times\bbP^1$ given by a $(1,n)$ curve.
Call this image $C$ and let $P$ be its intersection with $L_\infty$.
The $(1,n)$ curves that do not intersect $C-P$ form a ${\mathbb C}$--family
that sweeps out $\bbP^1\times\bbP^1- (L_\infty\cup C)$ so they
lead to a map $X\to\bbP^1$ which takes values in ${\mathbb C}$ at points that
do not lie over $L_\infty\cup C$. Restricting to ${\mathbb C}^2=X-D$ we
obtain a meromorphic function $g_1$ that has poles only at points that
belong to exceptional curves that were blown up on $C$ (and do not
belong to $\mathcal E$). However the polynomial $f$ is constant on
each such curve, so if $c_1, \dots, c_k$ are the values that $f$ takes
on these curves, then $g:=g_1(f-c_1)^{a_1}\dots(f-c_k)^{a_k}$ will
have no poles, and hence be polynomial, for $a_1, \dots, a_k$
sufficiently large. Then $(f,g)$ is the desired birational morphism
${\mathbb C}^2\to {\mathbb C}^2$. For the converse, given a birational morphism
$(f,g)\colon{\mathbb C}^2\to{\mathbb C}^2$, we compactify it to a morphism $(\tilde
f,\tilde g)\colon X\to\bbP^1\times\bbP^1$. Then the proper transform
of $\bbP^1\times\infty$ is the desired degree one horizontal curve for
$f$.
We shall use the usual encoding of the topology of $D$ by the dual
graph, which has a vertex for each component of $D$, an edge when two
components intersect, and vertex weights given by self-intersection
numbers of the components of $D$. We will sometimes speak of the
\emph{valency} of a component $C$ of $D$ to mean the valency of the
corresponding vertex of the dual graph, that is, the number of other
components that $C$ meets.
The approach we will take to get rational polynomials will be to start
with any collection $\C$ of $k$ curves in $\bbP^1\times\bbP^1$ and see
if we can produce a divisor at infinity $D$ for a map from ${\mathbb C}^2$ to
${\mathbb C}$. In order to get a divisor at infinity we must blow up
$\bbP^1\times\bbP^1$, say $m$ times, and include some of the resulting
exceptional curves in the collection so that this new collection gives
a divisor $D$ whose complement is ${\mathbb C}^2$. The exceptional curves
that we ``leave behind'' (i.e., do not include in $D$) will be called
\emph{cutting divisors}.
\begin{lemma} \label{th:propD}
(i) $D$ must have $m+2$ irreducible components, so we must
include $m-k+2$ of the exceptional divisors in the collection leaving
$k-2$ behind as cutting divisors;
(ii) $D$ must be connected and have no cycles;
(iii) $D$ must reduce to one of the ``Morrow configurations'' by a
sequence of blow-downs. The Morrow configurations are the
configurations of rational curves with dual graphs of one of the
following three types,
in which, in the last case, after replacing the central $(n,0,-n-1)$ by
a single $(-1)$ vertex the result should blow down to a single
$(+1)$ vertex by a sequence of blow-downs:
$$
\xymatrix@R=6pt@C=30pt@M=0pt@W=0pt@H=0pt{
_{1}\\
\circ}$$ $$
\xymatrix@R=6pt@C=30pt@M=0pt@W=0pt@H=0pt{
_{0}&_{l}\\
\circ\ar@{-}[r]&\circ}$$ $$
\xymatrix@R=6pt@C=30pt@M=0pt@W=0pt@H=0pt{
_{l_m}&_{\cdots}&_{l_1}&_{n}&_{0}&_{-n-1}&_{t_1}&_{\cdots}&_{t_k}\\
\circ\ar@{.}[rr]&&\circ\ar@{-}[r]&\circ\ar@{-}[r]&\circ\ar@{-}[r]&
\circ\ar@{-}[r]&\circ\ar@{.}[rr]&&\circ}
$$
These conditions are also sufficient that $X-D\isom {\mathbb C}^2$.
\end{lemma}
\begin{proof}
The first property follows from the fact that each blow-up
increases the rank of second homology by $1$. Thus $H_2(X)$ has rank
$m+2$, so $D$ must have $m+2$ irreducible components.
Notice that this implies easily the well-known result
\cite{KalTwo,MSuGen,SuzPro} that
\[\delta-1=\sum_{a\in{\mathbb C}}(r_a-1),\]
where $\delta$ is the number of horizontal curves of $f$ and $r_a$ is
the number of irreducible components of $f^{-1}(a)$. (Both sides are
equal to $k-1-\{$number of finite curves at infinity$\}$.)
The second property follows from the third property. For the third
property and sufficiency see \cite{Morrow, Rama}.
\end{proof}
Now assume that $\tilde{f}$ has at least three degree one horizontal
curves. Take these three horizontal curves and use them to map $X$ to
$\bbP^1\times\bbP^1$ as follows. The three horizontal curves define
three points in a \regular{} fibre of $\tilde{f}$. We can map this
\regular{} fibre to $\bbP^1$ by mapping these three points to
$0,1,\infty\in\bbP^1$. This defines a map from a Zariski open set of
$X$ to $\bbP^1$ which then extends to a map $\pi$ from $X$ to
$\bbP^1$. If $\pi$ is not a morphism then we blow up $X$ to get a
morphism. Rather than introducing further notation for this blow-up
we will assume we began with this blow-up and call it $X$. Together
with the map $\tilde{f}$ this gives us the desired morphism
\[X\stackrel{(\tilde{f},\pi)}{\longrightarrow}\bbP^1\times\bbP^1\]
with the property that the three horizontal curves map to $(1,0)$
curves.
If all horizontal curves for $f$ are of type $(1,0)$ then the \regular{}
fibres form an isotrivial family (briefly ``$f$ is isotrivial'').
Thus if $f$ is of simple type but not isotrivial, there must be a
horizontal curve of type $(1,n)$ in $\C$ with $n>0$. From now on,
therefore, we assume that there are at least three $(1,0)$ curves and
at least one $(1,n)$ curve in $\C$ with $n>0$.
\begin{lemma}\label{le:beyond}
Any curve of $D$ that is beyond a horizontal curve from the point of
view of $\tilde L_\infty$ has self-intersection number $\le-2$.
\end{lemma}
\begin{proof}
If the curve is an exceptional curve then it has self-intersection
$\le-1$. If $-1$, then the curve must have valency at least three
(since any $-1$ exceptional curve that could be blown down is a
cutting divisor). Any three adjacent curves must include two
horizontal curves, which contradicts the fact that the dual graph of
$D$ has no cycles. If the curve is not exceptional then it is the
proper transform of a vertical curve. But we must have blown up at
least three times on the vertical curve to get rid of cycles in the
dual graph of $D$ so in this case the self-intersection is $\le-3$.
\end{proof}
\subsection{Horizontal curves}
The next few lemmas will be devoted to finding restrictions on the
horizontal curves in the configuration $\C\subset\bbP^1\times\bbP^1$,
culminating in Proposition~\ref{th:canon}.
\begin{lemma} \label{th:n=1}
A horizontal curve of type $(1,n)$ in $\C$ must be of type $(1,1)$.
\end{lemma}
\begin{proof}
Assume we have a horizontal curve $C\in \C$ of type $(1,n)$ with
$n>1$. It intersects each of the three $(1,0)$ curves $n$ times
(counting with multiplicity) so in order to break
cycles---Lemma~\ref{th:propD} ~(ii)---we have to blow up at least
$n$ times on each $(1,0)$ horizontal curve, so the proper transforms
of the three $(1,0)$ curves have self-intersection at most $-n$ and
the proper transform of the $(1,n)$ curve has self-intersection at
most $2n-3n=-n$.
By Lemma~\ref{th:propD} ~(iii), $D$ must reduce to a Morrow
configuration by a sequence of blow-downs. Thus $D$ must contain a
$-1$ curve $E$ that blows down.
By Lemma
\ref{le:beyond}, the curve $E$ must be a proper transform of a
horizontal curve. The proper transform of each $(1,0)$ curve has
self-intersection at most $-n<-1$. Thus $E$ must come from one of
the $(1,*)$ horizontal curves. As mentioned above, the proper
transform of a $(1,k)$ curve has self-intersection $\leq -k$ so $E$
must be the proper transform of a $(1,1)$ curve, $E_0$. But $E_0$
would intersect $C$, the $(1,n)$ curve, $2n$ times and hence
$E.E\leq 2-2n<-1$ since $n>1$. This is a contradiction so any
horizontal curve of type $(1,n)$ must be a $(1,1)$ curve.
\end{proof}
Hence, the horizontal curves consist of a collection of $(1,0)$ curves
and $(1,1)$ curves. Figure~\ref{fig:conf1} shows an example of a
possible configuration of horizontal curves in $\bbP^1\times\bbP^1$.
\begin{figure}
\caption{Configuration of horizontal curves.}
\label{fig:conf1}
\end{figure}
\begin{lemma} \label{th:triple}
$\tilde{L}_{\infty}\cdot \tilde{L}_{\infty}=-1$.
\end{lemma}
\begin{proof} We blow up at a point on $L_{\infty}$
precisely when at least two horizontal curves meet in a common point
there. In general, if a horizontal curve meets $L_{\infty}$ with a
high degree of tangency then we blow up repeatedly there. But,
since all horizontal curves are $(1,0)$ and $(1,1)$ curves, they
meet $L_{\infty}$ transversally, so a point on $L_{\infty}$ will be
blown up at most once.
If there are two such points to be blown up, then after blowing up
there will be (in the dual graph) two non-neighbouring $-1$ curves
with valency $>2$. The complement of such a configuration cannot be
${\mathbb C}^2$. This is proven by Kaliman \cite{KalTwo} as Corollary 3.
Actually the result is stated for two $-1$ curves of valency 3 but
it applies to valency $\ge3$.
Thus, at most one point on $L_{\infty}$ is blown up and
$\tilde{L}_{\infty}\cdot \tilde{L}_{\infty}=0$ or $-1$. We must show
$0$ cannot occur.
Since there are at least four horizontal curves, if
$\tilde{L}_{\infty}\cdot \tilde{L}_{\infty}=0$, then
$\tilde{L}_{\infty}$ has valency at least $4$ and every other curve
has negative self-intersection. Furthermore, the only possible $-1$
curves must be horizontal curves, and these intersect
$\tilde{L}_{\infty}$ in $D$. As we attempt to blow down $D$ to get
to a Morrow configuration, the only curves that can be blown down
will always be adjacent to $\tilde L_{\infty}$. Thus the
intersection number of $\tilde L_\infty$ will become positive and
all other intersection numbers remain negative, so a Morrow
configuration cannot be reached. Hence, $\tilde{L}_{\infty}\cdot
\tilde{L}_{\infty}=-1$.
\end{proof}
\begin{lemma} \label{th:neg}
A configuration of curves that contains two branches consisting of
curves of self-intersection $<-1$ that meet at a valency $>2$ curve
of self-intersection greater than or equal to $-1$ as in
Figure~\ref{fig:neg} (where the meeting curve is drawn with valency
3 for convenience) cannot be blown down to a Morrow configuration.
\end{lemma}
\begin{proof}
Since the two branches consist of curves of self-intersection $<-1$,
they cannot be reduced before the other branches are reduced. If
the rest of the configuration of curves is blown down first then the
valency $>2$ curve becomes a valency $2$ curve with non-negative
self-intersection and no more blow-downs can be done. Since there is
no $0$ curve, we have not reached a Morrow configuration.
\end{proof}
\begin{figure}
\caption{The branches $B_1$ and $B_2$ consist of curves of
self-intersection $<-1$ and $e\geq -1$.}
\label{fig:neg}
\end{figure}
\begin{lemma} \label{th:inunion}
The intersection of any two $(1,1)$ curves in $\C$ consists of two
distinct points contained in the union of the $(1,0)$ curves in
$\C$.
\end{lemma}
\begin{proof}
We will assume otherwise and reduce to the situation of
Lemma~\ref{th:neg} to give a contradiction. Thus, assume that two
$(1,1)$ curves do not intersect in two points contained in the union
of the $(1,0)$ curves. Then in order to break cycles these curves
must be blown up at least four times---once each for at least three
of the $(1,0)$ curves and at least another time for the intersection
of the two $(1,1)$ curves. Thus they have self-intersection $<-1$.
{\em Case 1}: Suppose two $(1,1)$ curves meet on $L_{\infty}$. Then
after blowing up (twice if the $(1,1)$ curves meet at a tangent),
the exceptional curves are retained and the final exceptional curve
has self-intersection $-1$, valency 3 and two branches, which we
will call $B_1$ and $B_2$, consisting of the proper transforms of
the two $(1,1)$ curves and any other curves beyond these proper
transforms all of which have self-intersection $<-1$. Thus we are
in the situation of Lemma~\ref{th:neg} and we get a contradiction.
{\em Case 2}: Suppose two $(1,1)$ curves meet $L_{\infty}$ at
distinct points. Then at least one of the $(1,1)$ curves, $D$, must
meet $L_{\infty}$ at a point away from the $(1,0)$ curves by
Lemma~\ref{th:triple}. Also one of the $(1,0)$ curves, $H$, must
meet $L_{\infty}$ away from the $(1,1)$ curves and contain at least
two points where it intersects the $(1,1)$ curves and thus have
self-intersection $<-1$ after blowing up to break cycles. We are
once more at the situation of Lemma~\ref{th:neg} where the valency
$>2$ curve is $\tilde{L}_{\infty}$ which has self-intersection $-1$
by Lemma~\ref{th:triple}, and the branches $B_1$ and $B_2$ are the
proper transform of $D$ and any curves beyond it, respectively the
proper transform of $H$ and any curves beyond it. Thus we have a
contradiction.
Notice that both cases apply to two $(1,1)$ curves that meet
at a point of tangency, and show that this situation is
impossible.
\end{proof}
\begin{lemma} \label{th:three}
If there is more than one $(1,1)$ curve in $\C$ then there are
exactly three $(1,0)$ horizontal curves in $\C$.
\end{lemma}
\begin{proof}
Assume that there are more than three $(1,0)$ horizontal curves in
$\C$ and at least two $(1,1)$ curves, say $C_1$ and $C_2$.
{\em Case 1}: $C_1$ and $C_2$ meet on $\tilde L_\infty$. Then they
meet each of at least two $(1,0)$ curves in distinct points, so
after blowing up to destroy cycles, these $(1,0)$ curves have
self-intersection number $\le-2$ and Lemma \ref{th:neg} applies.
{\em Case 2}: $C_1$ and $C_2$ meet $\tilde L_\infty$ at distinct
points. Then one of them, say $C_1$, meets $\tilde L_\infty$ at a
point not on a $(1,0)$ curve by Lemma \ref{th:triple}. At least one
$(1,0)$ curve $C_3$ meets $C_1$ and $C_2$ in distinct points. After
breaking cycles, $C_1$ and $C_3$ have self-intersections $\le -2$ so
Lemma \ref{th:neg} applies again.
\end{proof}
\begin{lemma} \label{th:common}
A family of $(1,1)$ horizontal curves in $\C$ must pass through a
common pair of points.
\end{lemma}
\begin{proof}
The statement is trivial for one $(1,1)$ horizontal curve so assume
there are at least two $(1,1)$ horizontal curves in $\C$. By the
previous lemma, there are exactly three $(1,0)$ horizontal curves.
If there are exactly two $(1,1)$ horizontal curves in $\C$ then the
lemma is clear since the curves cannot be tangent by
Lemma~\ref{th:inunion}.
When there are more than two $(1,1)$ curves in $\C$, apply
Lemma~\ref{th:inunion} to two of them. If another $(1,1)$
horizontal curve in $\C$ does not intersect these two $(1,1)$ curves
at their common two points of intersection then, by
Lemma~\ref{th:inunion}, it must meet both these $(1,1)$ curves at
the third $(1,0)$ horizontal curve of $\C$. So the first two $(1,1)$
curves would meet there, which is a contradiction.
\end{proof}
\begin{prop} \label{th:canon}
Any configuration of horizontal curves in $\C$ is equivalent to one
of the form in Figure~\ref{fig:conf1}.
\end{prop}
\begin{proof}
By assumption and Lemma~\ref{th:n=1} there are at least three
$(1,0)$ horizontal curves and some $(1,1)$ horizontal curves in
$\C$. If there is exactly one $(1,1)$ horizontal curve then the
proposition is clear. If there is more than one $(1,1)$ horizontal
curve, then by Lemmas~\ref{th:three} and \ref{th:common} there are
precisely three $(1,0)$ horizontal curves and two of the $(1,0)$
horizontal curves contain the common intersection of the $(1,1)$
curves. Each $(1,1)$ curve also contains a distinguished point
where the curve meets the third $(1,0)$ horizontal curve. A Cremona
transformation can bring such a configuration to that in
Figure~\ref{fig:conf1} by blowing up at the two points of
intersection of the $(1,1)$ curves and blowing down the two vertical
lines containing the two points. This sends two of the $(1,0)$
horizontal curves and each $(1,1)$ curve to $(1,0)$ horizontal
curves and one of the $(1,0)$ curves to a $(1,1)$ curve that
intersects each of the other horizontal curves exactly once. Note
that since we blow up $\bbP^1\times\bbP^1$ to get the polynomial
map, two configurations of curves $\C,\C'$ in $\bbP^1\times\bbP^1$
related by a Cremona transformation give rise to the same
polynomial, so we are done.
\end{proof}
\subsection{The configuration $\C$}
The image $\C$ of $D\subset X\rightarrow\bbP^1\times\bbP^1$ will
consist of the configuration of horizontal curves in
Figure~\ref{fig:conf1} plus some $(0,1)$ vertical curves. The next
two lemmas show that in fact the only $(0,1)$ vertical curve we need
to include in $\C$ is $L_{\infty}$ and furthermore that $\C$ can be
given by Figure~\ref{fig:confC}.
\begin{lemma}\label{le:list}
The configuration $\C$ appears in Figure~\ref{fig:list} or
Figure~\ref{fig:confC}.
\end{lemma}
\begin{proof}
Let $r+2$ denote the number of horizontal curves and $k+1$ denote
the number of $(0,1)$ vertical curves in $\C$. Thus $\C$ consists
of $k+r+3$ irreducible components and by Lemma~\ref{th:propD} ~(i),
when blowing up to get $D$ from $\C$ we must leave $k+r+1$
exceptional curves behind as cutting divisors.
By Lemma~\ref{th:propD} ~(ii) we must break all cycles. The minimum
number of cutting divisors needed to do this is $kr+k+r-2{\rm
min}\{k,r\}$. This is because each of the $k$ $(0,1)$ vertical
curves different from $L_{\infty}$ must be separated from all but
one of the $r+1$ $(1,0)$ horizontal curves, so we need $kr$ cutting
divisors. Also, the $(1,1)$ horizontal curve meets each of the
$r+1$ $(1,0)$ horizontal curves and each of the $k$ $(0,1)$ vertical
curves once, so that requires $k+r$ cutting divisors (by
Lemma~\ref{th:triple} the $(1,1)$ curve must meet $L_{\infty}$ at a
triple point with a $(1,0)$ horizontal curve, so this intersection
does not produce a cycle to be broken). We would thus require
$kr+k+r$ cutting divisors except that the $(1,1)$ curve may pass
through intersections of the $(1,0)$ horizontal curves and the
$(0,1)$ vertical curves, so some of the cutting divisors may
coincide. The most such intersections possible is ${\rm
min}\{k,r\}$ and we have then over-counted required cutting
divisors by $2{\rm min}\{k,r\}$. Hence we get at least $kr+k+r-2{\rm
min}\{k,r\}$ cutting divisors.
Since the number $k+r+1$ of cutting
divisors is at least $kr+k+r-2{\rm min}\{k,r\}$, we have $k+r+1\geq
kr+k+r-2{\rm min}\{k,r\}$, so
\begin{equation} \label{eq:ineq}
1\geq k(r-2)\ \ {\rm and}\ \ 1\geq (k-2)r,\ \ \ k\geq 0,\ r\geq 2.
\end{equation}
The solutions of (\ref{eq:ineq}) are
$(k,r)=\{(0,r),(1,2),(1,3),(2,2)\}$.
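(Indeed, if $k\ge3$ then the second inequality gives $1\ge(k-2)r\ge r\ge2$, a contradiction;
if $k=2$ the first inequality forces $r=2$; if $k=1$ it forces $r\le3$; and for $k=0$ both
inequalities hold for every $r\ge2$.)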
Recall by Lemma~\ref{th:triple} that the $(1,1)$ curve must meet
$L_{\infty}$ at a triple point with a $(1,0)$ horizontal curve.
Furthermore, by keeping track of when either inequality in
(\ref{eq:ineq}) is an equality, or one away from an equality, we can
see that the $(1,1)$ curve must meet any other $(0,1)$ vertical
curves at a triple point with a $(1,0)$ horizontal curve. Thus, the
only possible configurations for $\C$ are given in
Figures~\ref{fig:list} and \ref{fig:confC}.
\end{proof}
\begin{figure}
\caption{Configuration $\C$.}
\label{fig:list}
\end{figure}
\begin{figure}
\caption{Configuration $\C$ with $r+2$ horizontal curves.}
\label{fig:confC}
\end{figure}
In the following lemmas we will exclude the configurations in
Figure~\ref{fig:list}. Label the triple points in the first two
configurations of Figure~\ref{fig:list} by $P_{\infty}\in L_{\infty}$
and $P_1$, and in the third configuration by $P_\infty,P_1,P_2$.
Also, label the exceptional divisor obtained by blowing up the triple
point $P_i$ by $E_i$ and its proper transform by $\tilde{E}_i$.
\begin{lemma} \label{th:confC}
If $E_i$ is a cutting divisor then the $(0,1)$ vertical curve
containing $P_i$ can be removed from $\C$ by a birational
transformation.
\end{lemma}
\begin{proof}
In each of the configurations of Figure~\ref{fig:list} we can perform
a Cremona transformation by blowing up $P_{\infty}$ and $P_i$ for
$i=1$ or $2$ and then blowing down $\tilde{L}_{\infty}$ and the proper
transform of the $(0,1)$ vertical curve that contains $P_i$. The
exceptional divisors $E$ and $E_i$ become $(0,1)$ curves and the
$(0,1)$ vertical curve that contains $P_i$ becomes an exceptional
divisor in a new configuration $\C$. When $E_i$ is a cutting divisor
this operation essentially removes a $(0,1)$ vertical curve from $\C$.
\end{proof}
\begin{lemma}
In a configuration from Figure~\ref{fig:list} with
$(k,r)\in\{(1,3),(2,2)\}$ at least one of the exceptional divisors $E_1$
or $E_2$ is a cutting divisor.
\end{lemma}
\begin{proof}
Suppose otherwise, that $E_1$ is not a cutting divisor and, for
$(k,r)=(2,2)$, that $E_2$ is not a cutting divisor either. The exceptional curves
$E_i$ introduce an extra intersection and hence an extra cutting
divisor is required. There is one such extra intersection in the
configuration with $(k,r)=(1,3)$ and two such extra intersections
in the configuration with $(k,r)=(2,2)$. As mentioned in the
proof of Lemma~\ref{le:list} the solution $(k,r)=(1,3)$ gives
equality in (\ref{eq:ineq}) and so it cannot sustain an extra cutting
divisor. Similarly the solution $(k,r)=(2,2)$ is $1$ away from
equality in (\ref{eq:ineq}) and so it cannot sustain two extra cutting
divisors. Hence we get a contradiction and the lemma is proven.
\end{proof}
By the previous two lemmas we can simplify any configuration from
Figure~\ref{fig:list} to lie in Figure~\ref{fig:confC} or to be the
first configuration from Figure~\ref{fig:list} (the one with
$(k,r)=(1,2)$) with the requirement that $E_1$ is not a cutting
divisor. It is this last case that we will now exclude.
The next three lemmas suppose that we have the first configuration
from Figure~\ref{fig:list} and that $E_1$ is not a cutting divisor.
We will denote the four horizontal curves by $H_i$, $i=1,\dots,4$, and
their proper transforms by $\tilde{H}_i$ where $H_4$ is the $(1,1)$
curve, $H_1$ contains $P_1$ and $H_3$ contains $P_{\infty}$. Also
denote the $(1,0)$ vertical curve that contains $P_1$ by $L_1$ and its
proper transform by $\tilde{L}_1$.
\begin{lemma} \label{th:atl}
At least one of $\tilde{H}_1$ and $\tilde{H}_2$ and at least one of
$\tilde{H}_3$ and $\tilde{H}_4$ has self-intersection $-1$.
\end{lemma}
\begin{proof}
The proper transform of each horizontal curve has self-intersection
less than or equal to $-1$ and all curves in $D$ beyond horizontal
curves have self-intersection strictly less than $-1$. If the two
horizontal curves that meet $\tilde{L}_{\infty} $, $\tilde{H}_1$ and
$\tilde{H}_2$, have self-intersection strictly less than $-1$, then
since all curves beyond the two horizontal curves also have
self-intersection strictly less than $-1$, and since
$\tilde{L}_{\infty}$ has self-intersection $-1$ and valence $3$ this
gives a contradiction by Lemma~\ref{th:neg}. The same argument
applies to $\tilde{H}_3$ and $\tilde{H}_4$ together with $E$.
\end{proof}
\begin{lemma} \label{th:pair}
$\tilde{H}_4\cdot\tilde{H}_4= -1$ if and only if
$\tilde{H}_2\cdot\tilde{H}_2= -1$.
\end{lemma}
\begin{proof}
Since $L_1$ must be separated from at least one of $H_2$ and $H_3$
then at most one of $\tilde{H}_2\cdot\tilde{H}_2= -1$ and
$\tilde{H}_3\cdot\tilde{H}_3= -1$ can be true. Similarly $E_1$ must
be separated from at least one of $H_1$ and $H_4$ so at most one of
$\tilde{H}_1\cdot\tilde{H}_1= -1$ and $\tilde{H}_4\cdot\tilde{H}_4=
-1$ can be true. By Lemma~\ref{th:atl}, if
$\tilde{H}_2\cdot\tilde{H}_2\neq -1$ then
$\tilde{H}_1\cdot\tilde{H}_1= -1$ so $\tilde{H}_4\cdot\tilde{H}_4\neq
-1$. Similarly, $\tilde{H}_1\cdot\tilde{H}_1\neq -1$ implies that
$\tilde{H}_2\cdot\tilde{H}_2= -1$ and $\tilde{H}_4\cdot\tilde{H}_4=
-1$.
\end{proof}
\begin{lemma}
The configuration from Figure~\ref{fig:list} with $(k,r)=(1,2)$
together with the requirement that $E_1$ is not a cutting divisor
cannot occur.
\end{lemma}
\begin{proof}
Suppose otherwise. Assume that $\tilde{H}_1\cdot\tilde{H}_1= -1$ and
$\tilde{H}_3\cdot\tilde{H}_3= -1$. If this is not the case, then by
Lemmas~\ref{th:atl} and \ref{th:pair} we may assume that
$\tilde{H}_4\cdot\tilde{H}_4= -1$ and $\tilde{H}_2\cdot\tilde{H}_2=
-1$ and argue similarly. The curves beyond $\tilde{H}_1$ have
self-intersection strictly less than $-1$. The curve immediately
adjacent and beyond $\tilde{H}_1$ is $\tilde{E}_1$ and this has
self-intersection strictly less than $-2$. This is because we must
blow up between $E_1$ and $H_4$ to separate cycles, and also between
$\tilde{E}_1$ and $\tilde{L}_1$ to break cycles and to maintain
$\tilde{H}_1\cdot\tilde{H}_1= -1$ and $\tilde{H}_3\cdot\tilde{H}_3=
-1$. Thus if we blow down $\tilde{H}_1$ the remaining branch beyond
$\tilde{L}_{\infty}$ consists of curves with self-intersection
strictly less than $-1$. Also $\tilde{H}_2$ has self-intersection
strictly less than $-1$ since we have to blow up the intersection
between $H_2$ and $H_4$ and the intersection between $H_2$ and $L_1$
in order to break cycles and maintain $\tilde{H}_3\cdot\tilde{H}_3=
-1$. After blowing down $\tilde{H}_1$, $\tilde{L}_{\infty}$ has
self-intersection $0$ and valency $3$ with two branches consisting of
curves of self-intersection strictly less than $-1$. Thus we can use
Lemma~ \ref{th:neg} to get a contradiction.
\end{proof}
\section{Non-isotrivial rational polynomials of simple type}
\label{sec:class}
The configuration in Figure~\ref{fig:confC} is the starting point for
any non-isotrivial rational polynomial of simple type. Notice that we
can fill one puncture in each fibre of any such map to get an
isotrivial family of curves and the puncture varies linearly with
$c\in{\mathbb C}$. Notice also that there is an irregular fibre for each of
the $r$ intersection points of the $(1,1)$ curve with $(1,0)$
horizontal curves away from $L_{\infty}$. In fact there is at most
one more irregular fibre which can only occur in rather special cases,
as we discuss in subsection \ref{subsec:irr}.
From now on the configuration $\C$ is given by Figure~\ref{fig:confC}
with $r+2$ horizontal curves. Beginning with $\C$ we will list all of
the rational polynomials of simple type generated from this
configuration. We shall give the splice diagrams for these
polynomials first. Although we compute the polynomials later,
geometric information of interest is often more easily extracted from
the splice diagram or from our construction of the polynomials than
from an actual polynomial.
The splice diagram encodes the topology of the polynomial. It
represents the link at infinity of the \regular{} fibre, or it can be
thought of as an efficient plumbing graph for the divisor at infinity,
$D\subset X$. It encodes an entire parametrised family of polynomials
with the same topology of their regular fibres. See
\cite{ENeThr,NeuCom,NeuIrr} for more details. Within this family,
polynomials can still differ in the topology of their irregular
fibres. Our methods also give all information about the irregular
fibres, as we describe in subsection \ref{subsec:irr}.
The configuration $\C$ has $r+3$ irreducible components so when we
blow up to get $D$ by Lemma~\ref{th:propD} ~(i) we will leave $r+1$
exceptional curves behind as cutting divisors. By
Lemma~\ref{th:propD} ~(ii) we must break the $r$ cycles in $\C$ with
multiple blow-ups at the points of intersection leaving $r$
exceptional curves behind as cutting divisors. We blow up multiple
times between the $r$th $(1,0)$ horizontal curve and the $(1,1)$
horizontal curve in order to break a cycle. Thus, we require those
blow-ups to satisfy the condition that the exceptional curve will
break the cycle if removed. Equivalently, each new blow-up takes
place at the intersection of the most recent exceptional curve with an
adjacent curve. We call such a multiple blow-up a {\em separating blow-up sequence{}}.
We have one extra cutting divisor. This will arise as the last
exceptional curve blown up in a sequence of blow-ups that does not
break a cycle. We will call this
sequence of blow-ups a {\em non-separating blow-up sequence{}}. A
priori, this non-separating blow-up sequence{} could be a
sequence as in Figure~\ref{fig:nonbreak},
\begin{figure}
\caption{Sequence of blow-ups starting at $P$ and ending at the $-1$ curve.}
\label{fig:nonbreak}
\end{figure}
where the final $-1$ curve is the cutting divisor. However, we shall
see that the extra nodes this introduces in the dual graph prohibit
$D$ from blowing down to a Morrow configuration, so the sequence is
simply a string of $-2$ exceptional curves followed by $-1$
exceptional curve that is the cutting divisor. This arises from
blowing up a point on a curve in the blow-up of $\C$ that does not lie
on an intersection of irreducible components.
Let us begin by just performing the separating blow-up sequence s at the points of
intersection, of $\C$ and leaving the non-separating blow-up sequence{}
until later. This gives the dual graph in Figure~\ref{fig:plumb1}
with the proper transforms of the $r+1$ $(1,0)$ horizontal curves and
the $(1,1)$ horizontal curve indicated along with $\tilde{L}_{\infty}$
and the exceptional curve $E$ arising from the blow-up of the triple
point in $\C$. There are $r$ branches heading out from the proper
transform of $(1,1)$ consisting of curves of self-intersection less
than $-1$ and beyond each of the proper transforms of the $r$ $(1,0)$
horizontal curves the curves have self-intersection less than $-1$.
\begin{figure}
\caption{Dual graph of $\C$ blown up at points of intersection.}
\label{fig:plumb1}
\end{figure}
The self-intersection of each of $(\widetilde{1,0})_0$, $E$ and
$\tilde{L}_{\infty}$ is $-1$. The self-intersections of
$(\widetilde{1,1})$ and $(\widetilde{1,0})_i$, $i=1,\dots,r$ are
negative and depend on how we blow up at each point of intersection.
\begin{lemma} \label{th:oneb}
There is at most one branch in $D$ beyond $(\widetilde{1,1})$, and
$r-1$ of the horizontal curves $(\widetilde{1,0})_i$ (those with
index $i=1,\dots,r-1$ say) have self-intersection $-1$ and only $-2$
curves beyond.
\end{lemma}
\begin{proof}
Since the self-intersection of each of the curves beyond
$(\widetilde{1,1})$ is less than $-1$ each branch beyond
$(\widetilde{1,1})$ cannot be blown down before $(\widetilde{1,1})$.
Thus, there are at most two branches.
Furthermore, since the self-intersection of each of the curves
beyond $(\widetilde{1,0})_i$, $i=1,\dots,r$ is less than $-1$, the
branch beyond $(\widetilde{1,0})_i$ can be blown down before
$(\widetilde{1,0})_i$ only if $(\widetilde{1,0})_i$ has
self-intersection $-1$ and each curve beyond has self-intersection
$-2$. Thus, at most two branches beyond $(\widetilde{1,0})_i$,
$i=1,\dots,r$ do not consist of a $-1$ curve with a string of $-2$
curves beyond. If there are two such branches then the blow-ups
that create them create corresponding branches beyond
$(\widetilde{1,1})$ (or possibly just decrease the intersection
number at $(\widetilde{1,1})$). These two branches cannot be fully
blown down until everything else connecting to the $\tilde L_\infty$
vertex are blown down, but the vertex $(\widetilde{1,1})$ and any
branches beyond it cannot blow down first. Thus $D$ cannot blow down
to a Morrow configuration. Thus there is at most one such branch,
proving the Lemma.
\end{proof}
Figure~\ref{fig:plumb2} gives the dual graph of the partially blown up
$\C$ where the label of each curve is now its self-intersection
number. The branch beyond $(\widetilde{1,0})_i$ consists of a string
of $a_i-1$ $-2$ curves and $A=\sum_{i=1}^{r-1}a_i$. We have thus far
only blown up once between the $r$th $(1,0)$ horizontal curve and the
$(1,1)$ horizontal curve, indicating the exceptional divisor by
$\otimes$. We may blow up many more times---perform a
separating blow-up sequence{}---leaving behind the final exceptional curve as cutting
divisor to get a branch beyond $(\widetilde{1,1})$ and a branch beyond
$(\widetilde{1,0})_r$. In addition, we still have to perform the
non-separating blow-up sequence{} at some point on the divisor.
\begin{figure}
\caption{Dual graph of partially blown-up configuration of curves.}
\label{fig:plumb2}
\end{figure}
\begin{lemma} \label{th:nonse}
The non-separating blow-up sequence{} occurs beyond either
$(\widetilde{1,1})$, $(\widetilde{1,0})_r$, or $(\widetilde{1,0})_0$
and in the latter case $(\widetilde{1,1})\cdot(\widetilde{1,1})=-1$.
\end{lemma}
\begin{proof}
If the non-separating blow-up sequence{} occurs on the branch beyond
$(\widetilde{1,0})_i$, $i=1,\dots,r-1$ then that branch cannot be
blown down. By the proof of lemma~\ref{th:oneb}, in order to obtain
a linear graph we must blow down $r-1$ of the branches beyond
$(\widetilde{1,0})_i$, $i=1,\dots,r$. Thus, if the
non-separating blow-up sequence{}
does occur beyond $(\widetilde{1,0})_i$ for some $i\le
r-1$, then the $(\widetilde{1,0})_r$ branch blows down, so we simply
swap the labels $i$ and $r$.
The non-separating blow-up sequence{} cannot occur on $E$ or
$\tilde{L}_{\infty}$ because the resulting cutting divisor would
not be sent to a finite value.
If the non-separating blow-up sequence{} occurs on the branch beyond
$(\widetilde{1,0})_0$ then we must be able to blow down the branch
beyond $(\widetilde{1,1})$, hence the branch must consist of
$(\widetilde{1,1})$ with self-intersection $-1$.
\end{proof}
\begin{lemma} \label{th:crem}
We may assume the non-separating blow-up sequence{} does not occur beyond
$(\widetilde{1,0})_0$.
\end{lemma}
\begin{proof}
By Lemma~\ref{th:nonse} if the non-separating blow-up sequence{} occurs
beyond $(\widetilde{1,0})_0$ then
$(\widetilde{1,1})\cdot(\widetilde{1,1})=-1$. In particular,
$1=A=\sum_1^{r-1}a_i$. Thus, $r=2$, $a_1=1$. With only four
horizontal curves, we can perform a Cremona transformation to make
$(\widetilde{1,0})_0$ the $(1,1)$ curve and hence we are in the
first case of Lemma~\ref{th:nonse}.
\end{proof}
\begin{lemma} \label{th:nons}
The non-separating blow-up sequence{} occurs on either of the last curves
beyond $(\widetilde{1,1})$ or $(\widetilde{1,0})_r$ and is a string of
$-2$ curves followed by the $-1$ curve that is a cutting divisor.
\end{lemma}
\begin{proof}
Arguing as previously, if the non-separating blow-up sequence{} occurs
anywhere else, or if it is more complicated, then it introduces a
new branch preventing the divisor $D$ from blowing down to a linear
graph.
\end{proof}
We now know that our divisor $D$ results from
Figure~\ref{fig:plumb2} by doing a separating blow-up sequence{} between the $(1,1)$
curve and the $r$-th $(1,0)$ curve, leaving behind the final $-1$
exceptional curve as a cutting divisor and then performing a
non-separating blow-up sequence{} on a curve adjacent to this cutting
divisor to produce a second cutting divisor.
A priori, it is not clear that this procedure always gives rise to a
divisor $D\subset X$ where $X$ is a blow-up of $\bbP^2$ and $D$ is the
pre-image of the line at infinity. The classification will be complete
once we show it does.
\begin{lemma} \label{th:anyt}
The above procedure always gives rise to a configuration that blows
down to a Morrow configuration (see Lemma \ref{th:propD}) and hence
determines a rational polynomial of simple type.
\end{lemma}
\begin{proof}
The calculation involves the relation between plumbing graphs and
splice diagrams described in \cite{ENeThr} or \cite{NeuIrr}, with
which we assume familiarity. In particular, we use the continued
fractions of weighted graphs described in \cite{ENeThr}. If one has
a chain of vertices with weights $-c_0,-c_1,\dots,-c_t$, its
continued fraction based at the first vertex is defined to be
$$
c_0-\cfrac1{c_1-\cfrac 1{c_2-\genfrac{}{}{0pt}{}{}{\ddots~
\genfrac{}{}{0pt}{}{}{-\genfrac{}{}{0pt}{}{}{\cfrac1{c_t}}}}}}$$
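As a concrete illustration (ours, not part of the original argument), the continued fraction attached to a chain of weights can be evaluated mechanically; the following sketch does this with exact rational arithmetic and assumes only that the intermediate denominators are nonzero.
\begin{verbatim}
# Illustrative sketch (not from the original text): evaluate the continued
# fraction c_0 - 1/(c_1 - 1/( ... - 1/c_t)) attached to a chain of vertices
# with weights -c_0, ..., -c_t, using exact rational arithmetic.
from fractions import Fraction

def chain_continued_fraction(weights):
    """weights = [-c_0, ..., -c_t]; returns c_0 - 1/(c_1 - 1/(...))."""
    cs = [Fraction(-w) for w in weights]      # recover c_0, ..., c_t
    value = cs[-1]
    for c in reversed(cs[:-1]):
        value = c - 1 / value
    return value

# Example: the chain with weights -2, -2, -3 has continued fraction
# 2 - 1/(2 - 1/3) = 7/5.
print(chain_continued_fraction([-2, -2, -3]))   # 7/5
\end{verbatim}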
The dual graph for the curve configuration of Lemma \ref{th:anyt}
has chains starting at the vertex $(\widetilde{1,1})$ and
$(\widetilde{1,0})_r$. We claim these chains have continued
fractions evaluating to $A-1+\frac PQ$ and $\frac qp$ respectively,
where $P,Q,p,q$ are arbitrary positive integers with $Pq-pQ=1$. We
describe the main ingredients of this calculation but leave the
details to the reader.
An easy induction shows that the initial separating blow-up sequence{}
leads to chains at $(\widetilde{1,1})$ and $(\widetilde{1,0})_r$
with continued fractions $A-1+\frac nm$ and $\frac mn$ with positive
coprime $n$ and $m$. The non-separating blow-up sequence{} then changes the
fraction $\frac nm$ or $\frac mn$ that it operates on as follows. If
the non-separating blow-up sequence{} consists of $k$ blow-ups at the end
of the left chain then $\frac nm$ is replaced by $\frac NM$ with
$Nm-nM=1$ and $k\le \frac Mm<\frac Nn\le (k+1)$. If the
non-separating blow-up sequence{} is on the right then $\frac mn$ is
similarly changed instead.
Renaming, we can describe this in terms of our chosen names
$p,q,P,Q$ as follows. We either have $P>p$ or $q>Q$. If $P>p$ the
initial separating blow-up sequence{} leads to chains with continued
fractions $A-1+\frac pq$ and $\frac qp$ and the
non-separating blow-up sequence{}
then consists of a sequence of $k:=\lfloor\frac Qq\rfloor$
blowups extending the left chain (and changing its continued
fraction to $A-1+\frac PQ$). If $q>Q$ the continued fractions are
$A-1+\frac PQ$ and $\frac QP$ after the separating blowup and the
non-separating blow-up consists of $k:=\lfloor\frac pP\rfloor$
blow-ups extending the right chain (and changing its continued
fraction to $\frac qp$).
To prove the Lemma we must show that the dual graph of our curve
configuration blows down to a Morrow configuration. We can blow down
the chains starting at $(\widetilde{1,0})_i$, $i=0,\dots,r-1$, to
get a chain. To check that this chain is a Morrow configuration we
must compute its determinant, which we can do with continued
fractions as in \cite{ENeThr}. We first replace the two end chains
by vertices with the rational weights $-A+1-\frac PQ$ and $-\frac
qp$ determined by their continued fractions to get a chain of four
vertices with weights
$$
-A+1-\frac PQ,\quad 0,\quad -1+A,\quad -\frac qp.$$
Then,
computing the continued fraction for this chain based at its right
vertex gives $\frac qp-\frac QP = \frac{Pq-pQ}{Pp}=\frac1{Pp}$,
showing that the determinant is $-1$ as desired, and completing the
proof.
\end{proof}
\iffalse
This can be proved by induction on the complexity of the multiple
blow-up. Alternatively we can reduce to the classification of
polynomials with \regular{} fibres isomorphic to an annulus as follows.
In Figure~\ref{fig:plumb2}, blow down the $r$ branches corresponding
to the horizontal curves $(1,0)_i$, $i=0,\dots,r-1$ and also blow down
the exceptional curve $\otimes$. This leaves a divisor with four
components as in Figure~\ref{fig:plumb3}. By repeatedly blowing up
adjacent to the bottom left $0$ curve and then blowing down the $0$
curve (a total of $A-1$ times), we can change the divisor to that in
Figure~\ref{fig:plumb4} so that it lives in $\bbP^1\times\bbP^1$.
Our problem is to blow up the divisor in Figure~\ref{fig:plumb4} and
leave behind two curves to get a divisor $D\subset X$ where $X$ is a
blow-up of $\bbP^2$ and $D$ is the pre-image of the line at infinity.
This is exactly the problem of finding all polynomials with \regular{}
fibres annuli. This is solved in the classification of Miyanishi and
Sugie \cite{MSuGen}, and in terms of splice diagrams the
classification appears in \cite{NeuCom}. The conclusion is that any
repeated blow-up as described in the lemma is admissible.
\end{proof}
\begin{figure}
\caption{Dual graph of simplified divisor.}
\label{fig:plumb3}
\end{figure}
\begin{figure}
\caption{Dual graph of a divisor in $\bbP^1\times\bbP^1$.}
\label{fig:plumb4}
\end{figure}
The classification of polynomials with \regular{} fibres isomorphic to an
annulus is given by the family of splice diagrams in
Figure~\ref{fig:ann}. It gives a polynomial for any $Q,q\geq 1$ with
$(Q,q)=1$. Usually one would require $Q\geq q$ to get rid of
duplication. We would like to keep all values of $Q$ and $q$ since
our construction will be non-symmetric.
\def\lower.2pc\hbox to 2pt{\hss$\bullet$\hss}{\lower.2pc\hbox to 2pt{\hss$\bullet$\hss}}
\def\lower.2pc\hbox to 2pt{\hss$\circ$\hss}{\lower.2pc\hbox to 2pt{\hss$\circ$\hss}}
\def\raise5pt\hbox{$\vdots$}{\raise5pt\hbox{$\vdots$}}
\begin{figure}
\caption{Splice diagram for a polynomial with \regular{} fibre isomorphic to an annulus.}
\label{fig:ann}
\end{figure}
The splice diagram is essentially an efficient plumbing graph where
the splice weights are determinants of sub-graphs in the dual graph.
See \cite{ENeThr} or \cite{NeuIrr} for details. The two arrows
indicate the two horizontal curves of the polynomial with \regular{}
fibres isomorphic to an annulus. One of these horizontal curves
corresponds to $(\widetilde{1,1})$ and the other to
$\tilde{L}_{\infty}$. We blow up this divisor to get
$(\widetilde{1,0})_i$, $i=0,\dots,r-1$, reversing the blow-downs in
the proof of Lemma~\ref{th:anyt}.
\fi
\begin{thm}\label{th:main}
Given positive integers $P,Q,p,q$ with $Pq-pQ=1$ and positive
integers $a_1,\dots,a_{r-1}$, the splice diagram of our rational
polynomial $f$ of simple type with non-isotrivial fibres is given in
Figure~\ref{fig:splice1} with
\begin{align*}
A&=a_1+\dots+a_{r-1},\\
B&=AQ+P-Q,\\
C&=Aq+p-q,\\
b_i&=qQa_i+1 \quad\text{for each $i$.}
\end{align*}
The degree of $f$ is:\quad $\operatorname{deg}(f)=A(Q+q)+P+p$.
\end{thm}
\def\lower.2pc\hbox to 2pt{\hss$\bullet$\hss}{\lower.2pc\hbox to 2pt{\hss$\bullet$\hss}}
\def\lower.2pc\hbox to 2pt{\hss$\circ$\hss}{\lower.2pc\hbox to 2pt{\hss$\circ$\hss}}
\def\raise5pt\hbox{$\vdots$}{\raise5pt\hbox{$\vdots$}}
\begin{figure}
\caption{Splice diagram for non-isotrivial rational polynomial.}
\label{fig:splice1}
\end{figure}
(In \cite{NNoMo} an ``additional'' case was given, which is, however,
of the above type with $P=Q=p=1$, $q=a_1=2$.)
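As a quick numerical illustration of the formulas in Theorem~\ref{th:main} (our addition, using hypothetical sample values), the following sketch computes $A$, $B$, $C$, the $b_i$ and $\deg(f)$ from a choice of $P,Q,p,q$ and $a_1,\dots,a_{r-1}$.
\begin{verbatim}
# Sketch (ours): evaluate the invariants of Theorem th:main for sample data.
def splice_invariants(P, Q, p, q, a):
    assert P*q - p*Q == 1, "need Pq - pQ = 1"
    A = sum(a)
    B = A*Q + P - Q
    C = A*q + p - q
    b = [q*Q*ai + 1 for ai in a]
    degree = A*(Q + q) + P + p
    return A, B, C, b, degree

# Hypothetical example: P=3, Q=2, p=1, q=1 (so Pq - pQ = 1) and a = (2, 1).
print(splice_invariants(3, 2, 1, 1, [2, 1]))   # (3, 7, 3, [5, 3], 13)
\end{verbatim}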
\begin{proof}
For the following computations we continue to assume the reader is
familiar with the relationship between resolution graphs and splice
diagrams described in \cite{NeuIrr}. The arrows signify places at
infinity of the \regular{} fibre, one on each horizontal curve. The
fact that $(\widetilde{1,0})_r$ is next to $\tilde{L}_{\infty}$ in
the dual graph says that the edge determinant of the intervening
edge is $1$. This corresponds to the fact that $Pq-pQ=1$, which we
already know. Similarly, $(\widetilde{1,0})_i$ is next to
$\tilde{L}_{\infty}$ for $i=1,\dots,r-1$ so the weight $b_i$ is
determined by the edge determinant condition $b_i=qQa_i+1$.
The
``total linking number'' at the vertex corresponding to each
horizontal curve (before blowing down $(\widetilde{1,0})_0$) is zero
(terminology of \cite{NeuIrr}; this reflects the fact that the link
component corresponding to the horizontal curve has zero linking
number with the entire link at infinity, since at almost all points
on a horizontal curve, the polynomial has no pole). The weight $C$
is determined by the zero total linking number of
$(\widetilde{1,1})$, giving $C=Aq+p-q$. For any $i$ the fact that
vertex $(\widetilde{1,0})_i$ has zero total linking gives
$B=AQ+P-Q$.
\end{proof}
It is worth summarising some consequences of our construction that
will be useful later.
\begin{lemma}
The number of blow-ups in the final non-separating blow-up sequence{} is
$k:=\max(\lfloor\frac Qq\rfloor,\lfloor\frac pP\rfloor)$ and these
blow-ups occurred at the $(Q,-q)$ branch or the $(p,-P)$ branch of
the above splice diagram according as the first or second entry of
this {\rm max} is the larger. Moreover, the non-separating blow-ups
occurred on the corresponding horizontal curve if and only if $q=1$
resp.\ $P=1$.
\end{lemma}
\begin{proof}
The first part was part of the proof of Lemma \ref{th:anyt}. For
the second part, note that if $q=1$ then certainly $q>Q$ must fail,
so $P>p$ and the non-separating blow-ups were on the left. The
continued fraction on the left was $A-1+\frac pq=A-1+p$ which is
integral, showing that the left chain consisted only of the
exceptional curve before the non-separating blow-up. Conversely, if
the non-separating blow-ups were adjacent to that exceptional curve
then the left chain was a single vertex, hence had integral
continued fraction, so $q=1$. The argument for $P=1$ is the same.
\end{proof}
\begin{thm}\label{th:defspace}
The moduli space of polynomials $f\colon{\mathbb C}^2\to{\mathbb C}$ with the
above regular splice diagram, modulo left-right equivalence (that
is, the action of polynomial automorphisms of both domain ${\mathbb C}^2$
and range ${\mathbb C}$), has dimension $r+k-2$ with $k$ determined in the
previous Lemma. In fact it is a ${\mathbb C}^k$-fibration over the
$(r-2)$-dimensional configuration space of $r-1$ distinct points in
${\mathbb C}^*$ labelled $a_1,\dots,a_{r-1}$, modulo permutations that
preserve the labelling and transformations of the form $z\mapsto
az$.
\end{thm}
\begin{proof}
The splice diagram prescribes the number of horizontal curves and
the separating blow-up sequences at each point of intersection. The
only freedom is in the placement of the horizontal curves in
$\bbP^1\times\bbP^1$, and in the choice of points, on prescribed
curves, on which to perform the string of blow-ups we call the
non-separating blow-up sequence{}. The $(1,1)$ horizontal curve is
\emph{a priori} the graph of a linear map $y=ax+b$ but can be
positioned as the graph of $y=x$ by an automorphism of the image
${\mathbb C}$.
The point in the configuration space of the Theorem determines the
placement of the horizontal curves $(1,0)_1,\dots,(1,0)_{r}$ (after
putting the $(1,0)_0$ curve at $\bbP^1\times\{\infty\}$ and the
$(1,0)_r$ curve at $\bbP^1\times\{0\}$). The fibre ${\mathbb C}^k$
determines the sequence of points for the
non-separating blow-up sequence{}.
This proves the Theorem, except that we need to be careful, since
some diagrams occur in the form of Theorem \ref{th:main} in two
different ways, which might seem to lead to a disconnected moduli
space. But the only cases that appear twice have four horizontal
curves and the configurations $\C$ are related by Cremona
transformations.
\end{proof}
This completes the classification of non-isotrivial rational
polynomials of simple type.
\subsection{The irregular fibres}\label{subsec:irr}
We can read off the topology of the irregular fibres of the polynomial
$f$ of Theorem \ref{th:main} from our construction, since any such
fibre is the proper transform of a vertical $(0,1)$ curve together
with any exceptional curves left behind as cutting divisors when
blowing up on this vertical curve.
We shall use the notation ${\mathbb C}(r)$ to mean ${\mathbb C}$ with $r$ punctures
(so ${\mathbb C}^*={\mathbb C}(1)$), and for the purpose of this subsection we use
$C\cup C'$ to mean disjoint union of curves $C$ and $C'$, and $C+C'$
to mean union with a single normal crossing. The \regular{} fibre of $f$
is ${\mathbb C}(r+1)$.
The irregular fibres of $f$ arise through the breaking of cycles
between the $(1,1)$ curve and the $(1,0)_i$ curve for $i=1,\dots,r$,
so there are $r$ of them. The non-separating blow-up also
contributes, but it usually contributes to the $r$-th irregular fibre.
However, if $P=1$ or $q=1$ then the non-separating blow-up occurs on a
horizontal curve and can thus have any $f$-value, so it generically
leads to an additional $(r+1)$-st irregular fibre.
The irregular fibres are all reduced except for the $r$-th irregular
fibre, which is always non-reduced unless one of $P,Q,p,q$ is $1$.
We first assume $q\ne1$ and $P\ne 1$, so there are exactly $r$
irregular fibres. Then for each $i=1,\dots,r-1$ the $i$-th irregular
fibre is ${\mathbb C}(r-1)+{\mathbb C}^*$ if $a_i=1$ and ${\mathbb C}(r)\cup{\mathbb C}^*$ if
$a_i>1$. The $r$-th irregular fibre is ${\mathbb C}(r)\cup{\mathbb C}^*\cup {\mathbb C}$
generically. As mentioned above, this fibre is reduced if and only if
$Q=1$ or $p=1$. There is a single parameter value in the ${\mathbb C}^k$
factor of the parameter space of Theorem \ref{th:defspace} for which
the $r$-th irregular fibre has different topology, namely
${\mathbb C}(r)\cup({\mathbb C}+{\mathbb C})$. In this case it is non-reduced even if $Q=1$
or $p=1$.
If $q=1$ or $P=1$ then write $\frac PQ$ and $\frac qp$ as $\frac 1a$
and $\frac{ak+1}k$ in some order. The non-separating blow-up creates
irregularity in a fibre which generically is distinct from the
first $r$ irregular fibres. The generic situation is that the $r$-th
irregular fibre is ${\mathbb C}(r)\cup{\mathbb C}^*$ or ${\mathbb C}(r-1)+{\mathbb C}^*$ according
as $a>1$ or $a=1$ and the $(r+1)$-st irregular fibre is
${\mathbb C}(r+1)\cup{\mathbb C}$ or ${\mathbb C}(r)+{\mathbb C}$ according as $k>1$ or $k=1$, and
both are reduced. But there are codimension $1$ subspaces of the
parameter space for which the topology is different. For instance, the
$(r+1)$-st irregular fibre will be non-reduced if one blows up more
than once on a vertical curve while doing the
non-separating blow-up sequence{} that
creates it.
\subsection{Monodromy} \label{subsec:monodromy}
We can also read off the monodromy for our polynomial $f$. Consider a
generic vertical $(0,1)$ curve $C$ in our construction. Removing its
intersections with the horizontal curves gives a regular fibre $F$ of
$f$. Since we have positioned the horizontal curve $(1,0)_0$ at
$\infty$, we think of $F$ as an $(r+1)$-punctured ${\mathbb C}$. We call the
intersection of the $(1,1)$ horizontal curve with $C$ the $0$-th
puncture of $F$ and for $i=1,\dots,r$ we call the intersection of the
$(1,0)_i$ curve with $C$ the $i$-th puncture of $F$.
If the $(r+1)$-st irregular fibre exists the local monodromy around it
is trivial. For $i=1,\dots,r$ the monodromy around the $i$-th
irregular fibre rotates the $0$-th puncture of the regular fibre
${\mathbb C}(r+1)$ around the $i$-th puncture. In terms of the braid group
on the $r+1$ punctures, with standard generators $\sigma_i$ exchanging
the $(i-1)$-st and $i$-th puncture for $i=1,\dots,r$, the local
monodromies are $h_1=\sigma_1^2$,
$h_2=\sigma_1\sigma_2^2\sigma_1^{-1}$, \dots,
$h_r=\sigma_1\dots\sigma_{r-1}\sigma_r^2\sigma_{r-1}^{-1}
\dots\sigma_1^{-1}$. The monodromy $h_\infty=h_r\dots h_1$ at
infinity is $\sigma_1\sigma_2\dots\sigma_r\sigma_r\dots\sigma_1$.
It is not hard to verify that $h_1,\dots,h_r$ freely generate a free
subgroup of the braid group.
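The identity $h_\infty=h_r\dots h_1=\sigma_1\dots\sigma_r\sigma_r\dots\sigma_1$ already holds after free reduction, without using the braid relations. The following sketch (ours, not part of the original text) checks this in the free group on $\sigma_1,\dots,\sigma_r$ for $r=4$ using SymPy.
\begin{verbatim}
# Check (ours): h_r ... h_1 = s_1 ... s_r s_r ... s_1 by free reduction in
# the free group on s_1, ..., s_r, here for r = 4.
from sympy.combinatorics.free_groups import free_group

F, s1, s2, s3, s4 = free_group("s1 s2 s3 s4")
sig = [s1, s2, s3, s4]
r = len(sig)

def h(i):
    # h_i = s_1 ... s_{i-1} s_i^2 s_{i-1}^{-1} ... s_1^{-1}
    pre = F.identity
    for j in range(i - 1):
        pre = pre * sig[j]
    return pre * sig[i - 1]**2 * pre**-1

h_inf = F.identity
for i in range(r, 0, -1):          # h_r h_{r-1} ... h_1
    h_inf = h_inf * h(i)

word = F.identity
for j in list(range(r)) + list(range(r - 1, -1, -1)):
    word = word * sig[j]           # s_1 ... s_r s_r ... s_1

assert h_inf == word
\end{verbatim}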
\section{Explicit polynomials}\label{sec:poly}
The splice diagram gives sufficient information (Newton polygon,
topological properties, etc.) that one can easily find the polynomial
without significant computation by making an educated guess and then
confirming that the guess is correct. The answer is as follows:
{\bf Case 1.} $k\le\frac pP<k+1$.
(Then $\frac pP<\frac qQ\le k+1$.)
Let $s_1=\alpha_0+\alpha_1x+\dots+\alpha_{k-1}x^{k-1}+x^ky$. Let
$\beta_1,\dots,\beta_{r-1}$ be distinct complex numbers in ${\mathbb C}^*$.
$$f(x,y)=x^{q-Qk}s_1^Q+x^{p-Pk}s_1^P
\prod_{i=1}^{r-1}(\beta_i-x^{q-Qk}s_1^Q)^{a_i}.$$
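As a sanity check (ours, with the hypothetical sample values $P=Q=1$, $p=2$, $q=3$, so $Pq-pQ=1$ and $k=2$, and $r=2$, $a_1=1$), one can confirm symbolically that this expression has total degree $A(Q+q)+P+p$, as predicted by Theorem~\ref{th:main}.
\begin{verbatim}
# Sanity check (ours): the Case 1 polynomial for sample parameters has
# total degree A(Q+q)+P+p.  Sample values: P=Q=1, p=2, q=3, k=2, a=(1,).
import sympy as sp

x, y, al0, al1, b1 = sp.symbols('x y alpha0 alpha1 beta1')
P, Q, p, q, k = 1, 1, 2, 3, 2
a = [1]
A = sum(a)

s1 = al0 + al1*x + x**k*y
f = x**(q - Q*k)*s1**Q \
    + x**(p - P*k)*s1**P*sp.Mul(*[(b1 - x**(q - Q*k)*s1**Q)**ai for ai in a])

deg = sp.Poly(sp.expand(f), x, y).total_degree()
print(deg, A*(Q + q) + P + p)       # both equal 7
\end{verbatim}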
{\bf Case 2.} $k\le\frac Qq<k+1$. (Then $\frac Qq<\frac Pp\le k+1$.)
Let $s_2=\alpha_0+\alpha_1y+\dots+\alpha_{k-1}y^{k-1}+xy^k$. Let
$\beta_1,\dots,\beta_{r-1}$ be distinct complex numbers in ${\mathbb C}^*$.
$$f(x,y)=y^{Q-qk}s_2^q+y^{P-pk}s_2^p
\prod_{i=1}^{r-1}(\beta_i-y^{Q-qk}s_2^q)^{a_i}.$$
One can compute the splice diagram and see it is correct. One can
verify that the \regular{} fibres are rational by the explicit
isomorphism:
$$f^{-1}(t) \to {\mathbb C}-\{0,\beta_1,\dots,\beta_{r-1}, t\}\qquad\left\{
\begin{matrix}
(x,y)&\mapsto &x^{q-Qk}s_1^Q&&&\text{(Case 1),}\\
(x,y)&\mapsto &y^{Q-qk}s_2^q&&&\text{(Case 2),}
\end{matrix}\right.$$
for generic
$t$. The irregular values of $t$ are
$0,\beta_1,\dots,\beta_{r-1}$ if $P\ne1$ and $q\ne1$. If $P=1$ then
$t=\alpha_0\prod \beta_i$ is the additional irregular value that our
earlier discussion predicts, and if $q=1$ then $t=\alpha_0$ is the
additional irregular value.
The space of parameters
$(\alpha_0,\dots,\alpha_{k-1},\beta_1,\dots,\beta_{r-1})$ maps to the
moduli space we computed earlier with fibre of dimension $1$. Indeed,
with $B,C$ as in Theorem \ref{th:main}, the polynomial
$$f_\lambda(x,y)=\lambda^{-1}f(\lambda^Bx,\lambda^{-C}y)$$ has the same
form with the parameters $\beta_j$ replaced by $\lambda^{-1}\beta_j$
and $\alpha_j$ replaced by $\lambda^{jB+A-1}\alpha_j$.
To put the above polynomials in the form of $f_1(x,y)$ of Theorem
\ref{th:summary}, in case 1 we
rename the exponents
$q-Qk$ to $q_1$, $p-Pk$ to $p_1$, $Q$ to $q$, $P$ to $p$.
In case 2 we rename $Q-qk$ to $q_1$, $P-pk$ to $p_1$, and then
exchange $x$ and $y$.
\section{The isotrivial case.}\label{sec:miyanishi-sugie}
After the first version of this paper was completed we realised that
the classification in \cite{MSuGen} for the isotrivial case has
omissions. In this section we therefore sketch the corrected
classification using the techniques of this paper. The discussion of
the parameter spaces and the irregular fibres for the resulting
polynomials is similar to the non-isotrivial case, so we leave it to
the reader. One can give an alternative proof using Kaliman's
classification \cite{KalPol} of all isotrivial polynomials.
We will restrict ourselves to the case of ample rational polynomials,
i.e. those with at least three $(1,0)$ horizontal curves. The case of
one $(1,0)$ horizontal curve always gives a polynomial equivalent to a
coordinate by the Abhyankar-Moh-Suzuki theorem \cite{AMoEmb,SuzPro}.
The case of two $(1,0)$ horizontal curves is dealt with from a splice
diagram perspective in \cite{NeuCom} and earlier by analytic methods
in \cite{Saito}. The result is included in our summary Theorem
\ref{th:summary}.
As before, compactify ${\mathbb C}^2$ to $X$ and construct a map
$X\rightarrow\bbP^1\times\bbP^1$. The map is essentially canonical
(up to an automorphism of one factor). The image of the divisor at
infinity $D\subset X$ in $\bbP^1\times\bbP^1$ is given by a collection
of $(1,0)$ curves: we used three of the horizontal curves to get
a map to $\bbP^1\times\bbP^1$, and in order that the fibres form an
isotrivial family, any other horizontal curves must also be $(1,0)$
curves.
When there are at least three $(1,0)$ horizontal curves, by the
following lemma the original configuration of curves in
$\bbP^1\times\bbP^1$ breaks into the two cases of no vertical curves
or one vertical curve.
\begin{lemma}
An ample rational polynomial with isotrivial fibres has at most one
vertical curve over a finite value.
\end{lemma}
\begin{proof}
We can argue as in the previous section. The curve over infinity,
$L_{\infty}$ is not blown up since there are no triple points. If
there is more than one vertical curve over a finite value then there
are precisely three $(1,0)$ horizontal curves since otherwise there
would be at least two $(1,0)$ horizontal curves that would be blown
up at least twice and since all curves beyond these horizontal
curves (exceptional curves or vertical curves) have
self-intersection $<-1$ we would get two branches $B_1$ and $B_2$
made up of the proper transforms of these two $(1,0)$ horizontal
curves and all curves beyond these, meeting at a valency $>2$ curve,
$L_{\infty}$, with self-intersection $0$. This is the impossible
situation of Lemma~\ref{th:neg}.
There can be at most two vertical curves: if there are $l$
vertical curves we need to break $2l$ cycles, but since there are
precisely three $(1,0)$ horizontal curves we begin with $l+4$ curves,
so we can break at most $l+2$ cycles by Lemma~\ref{th:propD}~(i).
Therefore $2l\leq l+2$, so $l\leq 2$.
The lemma follows once we rule out the case of two vertical curves
and three $(1,0)$ horizontal curves. The few such cases are easily
dismissed by hand.
\end{proof}
So the beginning configuration is given by Figure~\ref{fig:conf7} or
Figure~\ref{fig:conf8}. We analyse these below as Case 1 and Case 2.
\begin{figure}
\caption{Configuration of horizontal curves with $L_{\infty}$ (Case 1: no vertical curve).}
\label{fig:conf7}
\end{figure}
\begin{figure}
\caption{Configuration of horizontal curves with $L_{\infty}$ and one vertical curve (Case 2).}
\label{fig:conf8}
\end{figure}
\subsection*{Case 1.} Denote by $r$ the number of horizontal
curves.
In Figure~\ref{fig:conf7} we must leave behind $r-1$
curves as cutting divisors. To do so we perform a
non-separating blow-up sequence{}
on each of $r-1$ of the horizontal curves (anything else leads to a
configuration of curves whose intersection matrix has determinant $0$,
and which can therefore not blow down to a Morrow configuration).
Thus, on the $i$-th horizontal curve we blow up $a_i$ times and then
leave behind the final exceptional divisor, giving a string of $-2$
curves of length $a_i-1$.
The resulting splice diagram is as in Figure~\ref{fig:splice3}.
\begin{figure}
\caption{Splice diagram for case 1 of isotrivial fibres.}
\label{fig:splice3}
\end{figure}
This splice diagram has been analysed in \cite{NeuIrr}, where it is shown
that its general polynomial is
$$f(x,y)=y\prod_{i=1}^{r-1}(x-\beta_i)^{a_i}+h(x),$$
where $h(x)$ is a polynomial of degree $<\sum_{i=1}^{r-1}a_i$.
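As a small illustration (ours, with hypothetical data $r=3$, $a=(2,1)$ and $\deg h\le 2$), the fibres of such a polynomial are visibly rational: the equation $f=t$ can be solved for $y$ as a rational function of $x$.
\begin{verbatim}
# Illustration (ours): for f = y*prod_i (x - beta_i)^{a_i} + h(x) the fibre
# f = t is rational, since y is an explicit rational function of x.
import sympy as sp

x, y, t = sp.symbols('x y t')
b1, b2, c0, c1, c2 = sp.symbols('beta1 beta2 c0 c1 c2')
a = [(b1, 2), (b2, 1)]                         # r - 1 = 2 points, a = (2, 1)
h = c0 + c1*x + c2*x**2                        # deg h < a_1 + a_2 = 3

f = y*sp.Mul(*[(x - b)**ai for b, ai in a]) + h
y_of_x = sp.solve(sp.Eq(f, t), y)[0]           # y = (t - h)/prod (x - beta_i)^{a_i}
print(sp.simplify(f.subs(y, y_of_x) - t))      # 0
\end{verbatim}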
This case covers the following cases from \cite{MSuGen}: Case 1 of
Theorem 3.3., Theorem 3.7, Case I of Theorem 3.10.
\subsection*{Case 2.}
Denote by $r+1$ the number of horizontal curves.
In Figure~\ref{fig:conf8} we must perform separating blow-up sequences at
$r$ intersection points and then perform an additional
non-separating blow-up sequence{}.
As in Section \ref{sec:class}, one finds that each of
$r-1$ of the separating blow-up sequences creates a string of $-2$ curves
attached to the corresponding horizontal curve, while the last one can
be arbitrary, as described in the proof of Lemma
\ref{th:anyt}. In Figure~\ref{fig:ms1} we show the situation after
performing the first $r-1$ separating blow-up sequences and the first
step of the $r$-th one.
\begin{figure}
\caption{Dual graph of partially blown-up configuration of curves for
Fig.~\ref{fig:conf8}.}
\label{fig:ms1}
\end{figure}
Moreover, the non-separating blow-up sequence{} then occurs adjacent to the
exceptional curve left behind in the final separating blow-up sequence{}.
The analysis is almost identical to the proof of Lemma \ref{th:anyt},
with the resulting strings now having continued fractions $A+\frac PQ$
and $\frac qp$ respectively, with notation as in that proof.
The resulting splice diagram is as in Figure~\ref{fig:ms2}, with
notation exactly as in Theorem \ref{th:main}.
\def\raise5pt\hbox{$\vdots$}{\raise5pt\hbox{$\vdots$}}
\begin{figure}
\caption{Splice diagram for Case 2 of isotrivial fibres.}
\label{fig:ms2}
\end{figure}
The polynomial in this case is exactly as in Section \ref{sec:poly}
except that the first term $x^{q-Qk}s_1^Q$ respectively
$y^{Q-qk}s_2^q$ is omitted. Namely, let
$\beta_1,\dots,\beta_{r-1}$ be distinct complex numbers in ${\mathbb C}^*$ and
let $k$ be as in Theorem \ref{th:defspace}.
If $k\le\frac pP<k+1$
(so $\frac pP<\frac qQ\le k+1$),
let $s_1=\alpha_0+\alpha_1x+\dots+\alpha_{k-1}x^{k-1}+x^ky$. Then
$$f(x,y)=x^{p-Pk}s_1^P
\prod_{i=1}^{r-1}(\beta_i-x^{q-Qk}s_1^Q)^{a_i}.$$
If $k\le\frac Qq<k+1$ (so $\frac Qq<\frac Pp\le k+1$),
let $s_2=\alpha_0+\alpha_1y+\dots+\alpha_{k-1}y^{k-1}+xy^k$. Then
$$f(x,y)=y^{P-pk}s_2^p
\prod_{i=1}^{r-1}(\beta_i-y^{Q-qk}s_2^q)^{a_i}.$$
This case covers the following cases from \cite{MSuGen}: Cases 2,3,4
of Theorem 3.3 and Case II of Theorem 3.10. However, \cite{MSuGen}
only has examples in which one of $P,Q,p,q$ is equal to $1$.
Note that the isotrivial splice diagrams of Case 1
and Case 2 can be considered to belong to one family: putting
$(P,Q)=(1,0)$ in Figure~\ref{fig:ms2} gives Figure~\ref{fig:splice3}.
Nevertheless, the two cases have rather different geometric
properties.
\section{General rational polynomials.}
In this section we will give a result for ample rational polynomials
that are not necessarily of simple type.
\begin{prop} \label{th:hor-1}
An ample rational polynomial contains a $(1,0)$ horizontal curve whose
proper transform has self-intersection $-1$ and meets
$\tilde{L}_{\infty}$.
\end{prop}
\begin{proof}
By the classification of ample rational polynomials of simple type,
the proposition is true in this case. So, we may assume that there is
a horizontal curve of type $(m,n)$ for $m>1$.
Suppose there is no $(1,0)$ horizontal curve with the property of the
proposition. Then by the proof of Lemma~\ref{th:triple} there are at least two
$(1,0)$ horizontal curves whose proper transforms have
self-intersection $<-1$ and meet $\tilde{L}_{\infty}$.
By Lemma~\ref{le:beyond} any curves beyond these horizontal curves
have self-intersection $<-1$.
A horizontal curve of type $(m,n)$ must meet $L_{\infty}$ at exactly
one point, and hence with a tangency of order $m$ or at a singularity
of the curve. This is because if a horizontal curve were to meet
$L_{\infty}$ twice then we would not be able to break cycles since
when we blow up next to $L_{\infty}$, those exceptional curves are
sent to infinity under the polynomial and hence must be retained
in the configuration of curves. Thus we must blow up there to
get a configuration of curves with normal intersections. The final
exceptional curve in such a sequence of blow-ups will have
self-intersection $-1$ and valency $>2$.
If we can blow down the configuration of curves then eventually at
least one curve adjacent to the $-1$ curve is blown down and hence the
$-1$ curve ends up with non-negative self-intersection. But the final
configuration is not a linear graph since the proper transforms of the
two $(1,0)$ horizontal curves and any curves beyond give two branches.
Thus the final configuration is not a Morrow configuration which
contradicts Lemma \ref{th:propD}.
\end{proof}
The following result is a generalisation of Lemma~\ref{th:n=1}.
\begin{cor}
For any ample rational polynomial, a smooth horizontal curve of type
$(m,n)$ with $m>0$ must be of type $(m,1)$.
\end{cor}
\begin{proof}
The statement is true for $m=1$ by Lemma~\ref{th:n=1}, so we will assume
$m>1$. A curve of type $(m,n)$ will intersect the $(1,0)$ horizontal
curves $m$ times, with multiplicity, unless possibly the $(m,n)$
curve is singular at these points of intersection. The latter
possibility is ruled out by the smoothness assumption of the corollary. Hence
the $(1,0)$ horizontal curves will be blown up at least $m$ times and
their proper transforms will have self-intersection $<-m$. This
contradicts the previous proposition so the result follows.
\end{proof}
When the rational polynomial is not ample, Russell has an example of a
horizontal curve of type $(3,2)$. See the examples in the next
section. Note that smoothness of the horizontal curve is necessary in
the corollary (at the points of intersection with the $(1,0)$
horizontal curves) since we can always have two horizontal curves of
types $(l,1)$ and $(m,1)$ and together they can be considered as a
singular horizontal curve of type $(l+m,2)$.
\subsection{Adding horizontal curves}
Consider the following construction on ${\mathbb C}^2$. Blow up repeatedly
starting at a point on the $y$-axis so that the resulting exceptional
curves form a chain from the $y$-axis to the last exceptional curve
blown up. If we now remove the $y$-axis and all but the last
exceptional curve from the blown-up ${\mathbb C}^2$ we get a new ${\mathbb C}^2$
that we call ${\mathbb C}^2_1$. Any polynomial $f\colon{\mathbb C}^2\to{\mathbb C}$
induces a polynomial $f_1\colon{\mathbb C}^2_1\to{\mathbb C}$. Suppose the $y$-axis
intersects generic fibres of $f$ in $d$ points. Then the \regular{}
fibres of $f_1$ are simply \regular{} fibres of $f$ with $d$ extra
punctures. In fact, this construction simply adds an extra degree $d$
horizontal curve, namely the $y$-axis becomes a degree $d$ horizontal
curve for $f_1$.
From the point of view of the polynomials, what we have done is
replaced $f(x,y)$ by
$$f_1(x,y)=f(x,s),\quad s=a_0+a_1x+\dots+a_{k-1}x^{k-1}+x^ky,$$
that is, we have composed $f$ with the birational morphism
$(x,y)\mapsto (x,s)$ of ${\mathbb C}^2$.
Since one can compose $f$ first with a polynomial automorphism to
raise its degree, one can easily add horizontal curves of arbitrarily high
degree by this construction. This makes clear that any classification
of non-simple-type polynomials must take account of this sort of
operation, including composition with more complicated birational
morphisms.
Although this is a complication, it can also simplify some issues.
Here is a simple illustrative example. We start with the simplest
rational polynomial $g(x,y)=x$, apply a polynomial automorphism to get
$f(x,y)=x+y^2$ and then apply the above birational morphism to get
$f_1(x,y)=x+(a_0+a_1x+\dots+a_{k-1}x^{k-1}+x^ky)^2$ with one degree-one
horizontal curve and one degree-two horizontal curve. It is not hard to check
(e.g., by listing possible splice diagrams) that this gives, up to
equivalence, the only non-simple-type polynomials with \regular{} fibre
${\mathbb C}-\{0,1\}$, so with the classification of simple type polynomials,
we get:
\begin{prop}
A polynomial with general fibre ${\mathbb C}-\{0,1\}$ is left-right
equivalent to one of the form
$f_2(x,y)$ or $f_3(x,y)$ of Theorem
\ref{th:summary} with $r=2$ or $r=3$ respectively, or to
$f(x,y)=x+(a_0+a_1x+\dots+a_{k-1}x^{k-1}+x^ky)^2$.\qed
\end{prop}
This proposition also follows from
Kaliman's classification \cite{KalPol} of isotrivial
polynomials.
\section{Examples}
It is worth including some interesting known examples of rational
polynomials from the perspective used in this paper. These examples
are neither of simple type nor ample.
Russell \cite{RusGoo} (correctly presented in \cite{BCaOne})
constructed an example of a rational polynomial with no degree one
horizontal curves. This is an example of a bad field generator---a
polynomial that is one coordinate in a birational transformation but
not in a birational morphism. It is given by beginning with three
curves in $\bbP^1\times\bbP^1$ as in Figure~\ref{fig:rus}. The
$(2,1)$ curve and the $(3,2)$ curve intersect at an order three
tangency and at the same point the $(3,2)$ intersects itself at a
tangency. They are the two horizontal curves of the polynomial. The
vertical curve is $L_{\infty}$.
\begin{figure}
\caption{A bad field generator.}
\label{fig:rus}
\end{figure}
The actual polynomial in this case is, with $s=xy+1$, $$
f(x,y)=(y^2s^4+y(s+xy)s+1)(ys^5+2xys^2+x)$$
and the splice diagram is
$$\xymatrix@R=12pt@C=12pt@M=0pt@W=0pt@H=0pt{
\lower.2pc\hbox to 2pt{\hss$\circ$\hss}\ar@{-}[rr]^(.75){3}&&\lower.2pc\hbox to 2pt{\hss$\circ$\hss}\ar[ld]\ar[d]\ar[rd]
\ar@{-}[rrr]^(.25){-4}&&&
\lower.2pc\hbox to 2pt{\hss$\bullet$\hss}\ar@{-}[rrr]^(.75){-2}&&&\lower.2pc\hbox to 2pt{\hss$\circ$\hss}\ar@{-}[rr]^(.25){3}
\ar@{-}[dd]^(.75){-13}
&&\lower.2pc\hbox to 2pt{\hss$\circ$\hss}\\ &&&&&&&&&&\\
&&&&&&\lower.2pc\hbox to 2pt{\hss$\circ$\hss}\ar[lu]\ar[ld]\ar@{-}[rr]^(.25){-27}&&
\lower.2pc\hbox to 2pt{\hss$\circ$\hss}\ar@{-}[dd]^(.25){2}\\
&&&&&&&&&&\\
&&&&&&&&\lower.2pc\hbox to 2pt{\hss$\circ$\hss}}$$
Kaliman \cite{KalRat} classified all rational polynomials with one
fibre isomorphic to ${\mathbb C}^*$. Figure~\ref{fig:kal} gives three curves
in $\bbP^1\times\bbP^1$, the two horizontal curves and $L_{\infty}$.
The $(m,1)$ curve has the property that when it is mapped downwards
onto a $(1,0)$ curve, there are only two points of ramification, both
with maximal ramification of $m$, at $L_{\infty}$ and at the irregular
fibre isomorphic to ${\mathbb C}^*$. Kaliman's entire classification begins
with this configuration of curves. The only points that can be blown
up are those that are infinitely near to the point of intersection of
the two horizontal curves (besides the unnecessary blowing up where
the $(m,1)$ curve meets $L_{\infty}$) and one exceptional curve is
left behind as a component of the reducible fibre.
\begin{figure}
\caption{Classification of rational polynomials with a ${\mathbb C}^*$ fibre.}
\label{fig:kal}
\end{figure}
\iffalse
Recall from Section~\ref{sec:class} that the classification of
polynomials with \regular{} fibre isomorphic to an annulus lies behind
the classification of rational polynomials of simple type with
non-isotrivial fibres. It seems reasonable to conjecture that the
classification of polynomials with \regular{} fibre isomorphic to an
annulus along with Kaliman's classification of rational polynomials
with one fibre isomorphic to ${\mathbb C}^*$ can be used in a similar way to
classify all ample rational polynomials with one horizontal curve of
degree greater than one.
\fi
\end{document}
\begin{document}
\title{ Davies' method for anomalous diffusions}
\author[M. Murugan]{Mathav Murugan}
\address{Department of Mathematics, University of British Columbia and Pacific Institute for the Mathematical Sciences, Vancouver, BC V6T 1Z2, Canada.}
\email[M. Murugan]{[email protected]}
\author[L. Saloff-Coste]{Laurent Saloff-Coste$^\dagger$}
\thanks{$\dagger$Both the authors were partially supported by NSF grant DMS 1404435}
\address{Department of Mathematics, Cornell University, Ithaca, NY 14853, USA}
\email[L. Saloff-Coste]{[email protected]}
\date{\today}
\subjclass[2010]{60J60, 60J35}
\begin{abstract}
Davies' method of perturbed semigroups is a classical technique to obtain off-diagonal upper bounds on the heat kernel.
However Davies' method does not apply to anomalous diffusions due to the singularity of energy measures.
In this note, we overcome the difficulty by modifying Davies' perturbation method to obtain sub-Gaussian upper bounds on the heat kernel.
Our computations closely follow the seminal work of Carlen, Kusuoka and Stroock \cite{CKS}. However, a cutoff Sobolev inequality due to Andres and Barlow \cite{AB} is used to bound the energy measure.
\end{abstract}
\maketitle
\section{Introduction}
Davies' method of perturbed semigroups is a well-known method to
obtain off-diagonal upper bounds on the heat kernel. It was
introduced by E.\ B.\ Davies to obtain the explicit constants in
the exponential term for Gaussian upper bounds \cite{Dav1} using the
logarithmic-Sobolev inequality. Davies' method was extended by
Carlen, Kusuoka and Stroock to a non-local setting \cite[Section 3]{CKS} using
the Nash inequality. Moreover, Davies extended this technique to higher order elliptic operators on $\mathbb{R}^n$ \cite[Section 6 and 7]{Dav2}.
More recently Barlow, Grigor'yan and Kumagai
applied Davies' method as presented in \cite{CKS} to obtain
off-diagonal upper bounds for the heat kernel of heavy tailed jump processes
\cite[Section 3]{BGK}.
Despite these triumphs, Davies' perturbation method
has not yet been made to work in the following contexts:
\begin{enumerate}[(a)]
\item Anomalous diffusions (See \cite[Section 4.2]{Bar}).
\item Jump processes with jump
index greater than or equal to $2$ (See \cite[Remark 1(d)]{MS} and \cite[Section 1]{GHL}).
\end{enumerate}
The goal of this work is to extend Davies' method to anomalous
diffusions in order to obtain sub-Gaussian upper bounds. In the anomalous
diffusion setting, we use cutoff functions satisfying a cutoff
Sobolev inequality to perturb the corresponding heat semigroup. We use a
recent work of Andres and Barlow \cite{AB} to construct these cutoff
functions. We extend the techniques developed here in a sequel to a non-local setting for the jump processes mentioned in (b) above \cite{MS4}. In \cite{MS4}, we consider the analogue of symmetric stable processes on fractals while in this work we are motivated by Brownian motion on fractals.
Before we proceed, we briefly outline Davies' method as presented in \cite{CKS} and point out the main difficulty in extending it to the anomalous diffusion setting.
Consider a metric measure space $(M,d,\mu)$ and a Markov semigroup $(P_t)_{t \ge 0}$ symmetric with respect to $\mu$. The most classical case is that of the heat semigroup in $\mathbb{R}^n$ (corresponding to Brownian motion in $\mathbb{R}^n$) associated with the Dirichlet form $\mathcal{E}(f,f)= \int_{\mathbb{R}^n} \abs{\nabla f}^2 \,d\mu$, where $\mu$ is the Lebesgue measure.
Instead of considering the original Markov semigroup $(P_t)_{t \ge 0}$, we consider the perturbed semigroup
\begin{equation} \label{e-pet}
\left( P^\psi_t f\right) (x)= e^{\psi(x)} \left( P_t \left( e^{-\psi} f \right) \right)(x)
\end{equation}
where $\psi$ is a `sufficiently nice function'.
Given an ultracontractive estimate \begin{equation}\label{e-ult}\norm{P_t}_{1 \to \infty} \le m(t)\end{equation} for the diffusion semigroup, Davies' method yields an ultracontractive estimate for the perturbed semigroup
\begin{equation}\label{e-pult}
\norm{P^\psi_t}_{1 \to \infty} \le m_\psi(t).
\end{equation}
If $p_t(x,y)$ is the kernel of $P_t$, then the kernel of $P^\psi_t$ is $p_t^\psi(x,y) = e^{\psi(x)}p_t(x,y) e^{-\psi(y)}$.
Therefore by \eqref{e-pult}, we obtain the off-diagonal estimate
\begin{equation} \label{e-offd}
p_t(x,y) \le m_\psi(t) \exp\left( \psi(y)-\psi(x) \right).
\end{equation}
By varying $\psi$ over a class of `nice functions' to minimize the right hand side of \eqref{e-offd}, Davies obtained off-diagonal upper bounds.
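To fix ideas, here is a toy finite-dimensional sketch (ours, not from the original text): for the heat semigroup of a path graph, the perturbed semigroup of \eqref{e-pet} has kernel $e^{\psi(x)}p_t(x,y)e^{-\psi(y)}$, and the estimate \eqref{e-offd} can be read off entrywise.
\begin{verbatim}
# Toy sketch (ours): perturbed semigroup on a finite path graph, where
# P_t = exp(-tL) is a symmetric Markov kernel for counting measure.
import numpy as np
from scipy.linalg import expm

n = 6
Adj = np.eye(n, k=1) + np.eye(n, k=-1)       # adjacency matrix of a path
L = np.diag(Adj.sum(axis=1)) - Adj           # graph Laplacian (symmetric)
t = 0.7
Pt = expm(-t*L)                              # kernel p_t(x, y)

psi = np.linspace(0.0, 1.5, n)               # an arbitrary `nice' function
Pt_psi = np.diag(np.exp(psi)) @ Pt @ np.diag(np.exp(-psi))   # kernel of P_t^psi

# With m_psi(t) := max entry of the perturbed kernel, (e-offd) follows:
m_psi = Pt_psi.max()
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
assert np.all(Pt <= m_psi*np.exp(psi[Y] - psi[X]) + 1e-12)
\end{verbatim}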
In Davies' method as presented in \cite{Dav1,CKS}, it is crucial that the function $\psi$ satisfies
\[
e^{-2\psi} \Gamma(e^\psi,e^\psi) \ll \mu \hspace{0.5cm} \mbox{and} \hspace{0.5cm} e^{2\psi} \Gamma(e^{-\psi},e^{-\psi}) \ll \mu,
\]
where $\Gamma(\cdot,\cdot)$ denotes the corresponding energy measure (cf. Definition \ref{d-energy}). For the classical example of heat semigroup in $\mathbb{R}^n$ described above, the energy measure $\Gamma(f,g)$ is $\nabla f. \nabla g \,d\mu$, where $\mu$ is the Lebesgue measure.
In fact the expression of $m_\psi$ in \eqref{e-pult} depends on the uniform bound on the Radon-Nikodym derivatives of the energy measure given by (See \cite[Theorem 3.25]{CKS})
\[ \Gamma(\psi) := \norm{\frac{d e^{-2\psi} \Gamma(e^\psi,e^\psi) }{d \mu}}_\infty \vee\norm{\frac{d e^{2\psi} \Gamma(e^{-\psi},e^{-\psi}) }{d\mu}}_\infty .\]
The main difficulty in extending Davies' method to anomalous diffusions is that, for many `typical fractals' that satisfy a sub-Gaussian estimate,
the energy measure $\Gamma( \cdot, \cdot)$
is singular with respect to the underlying symmetric measure $\mu$ \cite{Hin,Kus,BST}.
This difficulty is well-known to experts (for instance, \cite[p. 1507]{BB} or \cite[p. 86]{Kum}).
In this context, the condition $e^{-2\psi} \Gamma(e^\psi,e^\psi) \ll \mu$ implies that $\psi$ is
necessarily a constant, in which case the off-diagonal estimate of \eqref{e-offd} is not an improvement over the diagonal estimate \eqref{e-pult}.
We briefly recall some fundamental notions regarding Dirichlet form and refer the reader to \cite{FOT} for details.
Let $(M,d,\mu)$ be a locally compact metric measure space where $\mu$ is a positive Radon measure on $M$ with $\operatorname{supp}(\mu)=M$. We denote by $\langle \cdot, \cdot \rangle$ the inner product on $L^2(M,\mu)$.
Let $ \mathbf{X}= (\Omega, \mathcal{F}_\infty,\mathcal{F}_t,X_t, \mathbb{P}_x)$ denote the diffusion corresponding to a strongly local, regular Dirichlet form. Here $\Omega$ denotes the totality of right continuous paths with left-limits from $[0,\infty)$ to $M$ and $\mathbb{P}_x$ denotes the law of the process conditioned to start at $X_0=x$.
The corresponding Markov semigroup $\Sett{P_t} {t \ge 0}$ of $\mathbf{X}$ is defined by
$P_tf(x):= \mathbb{E}_x[f(X_t)]$, where $\mathbb{E}_x$ denotes the expectation with respect to the measure $\mathbb{P}_x$.
These
operators $\Sett{P_t} {t \ge 0}$ form a strongly continuous semigroup of self-adjoint contractions. The
Dirichlet form $(\mathcal{E},\mathcal{D})$ associated with $\mathbf{X}$ is the symmetric, bilinear form
\[ \mathcal{E}(u,v):= \lim_{t \downarrow 0} \frac{1}{t}\langle u - P_t u, v \rangle
\]
defined on the domain
\[\mathcal{D} := \Sett{u \in L^2(M,\mu)}{ \sup_{t >0}
\frac{1}{t}\langle u - P_t u,u\rangle < \infty}.\]
Recall that a Dirichlet form $(\mathcal{E},\mathcal{D})$ on $L^2(M,\mu)$ is said to be \emph{regular} if $C_c(M) \cap \mathcal{D}$ is dense in both $(C_c(M), \norm{\cdot}_\infty)$ and the Hilbert space $(\mathcal{D},\mathcal{E}_1)$.
Here $C_c(M)$ is the space of continuous functions with compact
support in $M$ and $\mathcal{E}_1(\cdot, \cdot):= \mathcal{E}(\cdot, \cdot)+ \langle \cdot,\cdot \rangle$ denotes the inner product on $\mathcal{D}$. For a $\mu$-measurable function $u$ let $\operatorname{Supp}[u]$ denote the support of the measure $u \,d\mu$.
We say that a Dirichlet form $(\mathcal{E},\mathcal{D})$ on $L^2(M,\mu)$ is \emph{strongly local} if it satisfies the following property: for all functions $u,v \in \mathcal{D}$ such that $\operatorname{Supp}[u]$ and $\operatorname{Supp}[v]$ are compact and $v$ is constant on a neighbourhood of $\operatorname{Supp}[u]$, we have $\mathcal{E}(u,v)=0$. For example, the form corresponding to the heat semigroup on $\mathbb{R}^n$ defined by $(f \mapsto \int_{\mathbb{R}^n} \abs{\nabla f}^2 \,d\mu, W^{1,2}(\mathbb{R}^n))$ is a regular, strongly local Dirichlet form on $L^2(\mathbb{R}^n,\mu)$, where $\mu$ is the Lebesgue measure and $W^{1,2}$ denotes the Sobolev space.
We denote by $B(x,r):= \{y \in M: d(x,y)<r \}$ the ball centered at $x$ with radius $r$ and by
\[
V(x,r):= \mu(B(x,r))
\]
the corresponding volume. We assume that the metric measure space is Ahlfors regular, meaning that there exist $C_1>0$ and ${d_f} >0$ such that
\begin{equation} \label{ahl}\tag*{$\operatorname{V}({d_f})$}
C_1^{-1} r^{d_f} \le V(x,r) \le C_1 r^{d_f}
\end{equation}
for all $x \in M$ and for all $r \ge 0$. The quantity ${d_f} >0$ is called the \emph{volume growth exponent} or \emph{fractal dimension}.
Let $p_t(\cdot,\cdot)$ be the (regularised) kernel of $P_t$ with respect to $\mu$ \cite[eq.~(1.10)]{AB}.
We are interested in obtaining sub-Gaussian upper bounds of the form
\begin{equation} \tag*{$\operatorname{USG}({d_f},{d_w})$}
\label{usg} p_t(x,y) \le \frac{C_1}{t^{{d_f}/{d_w}}} \exp \left( -C_2 \left( \frac{d(x,y)^{d_w}}{t}\right)^{1/({d_w}-1)} \right)
\end{equation}
where ${d_w} \ge 2$ is the \emph{escape time exponent} or \emph{walk dimension}.
It is known that if the heat kernel $p_t$ satisfies \ref{usg}, then $d_w \ge 2$ (cf. \cite[p. 252]{Hin2}). The corresponding diffusion $X_t$ then moves at a diffusive speed of at most $t^{1/{d_w}}$ (up to constants). This means that a process starting at $x$ first exits the ball $B(x,r)$ at a time $\tau_{B(x,r)} \gtrsim r^{d_w}$ (cf. \cite[Lemma 5.3]{AB}). Moreover, if $p_t$ also satisfies a matching sub-Gaussian lower bound with different constants, then $\tau_{B(x,r)} \asymp r^{d_w}$. For comparison, recall that Brownian motion on Euclidean space has a Gaussian heat kernel and satisfies $\tau_{B(x,r)} \asymp r^{2}$.
Such sub-Gaussian estimates are typical of many fractals (cf. \cite[Theorem 8.18]{Bar0}).
We assume the on-diagonal bound corresponding to the sub-Gaussian estimate of \ref{usg}. That is, we assume that there exists $C_1>0$ such that
\begin{equation}
\label{e-diag} p_t(x,x) \le \frac{C_1}{t^{{d_f}/{d_w}}}
\end{equation}
for all $x \in M$ and for all $t>0$.
The on-diagonal estimate of \eqref{e-diag} is equivalent to the following Nash inequality (\cite[Theorem 2.1]{CKS}): there exists $C_N>0$ such that
\begin{equation} \label{nash} \tag*{$\operatorname{N}({d_f},{d_w})$}
\norm{f}_2^{2(1+ {d_w}/{d_f})} \le C_N \mathcal{E}(f,f) \norm{f}_1^{2{d_w}/{d_f}}
\end{equation}
for all $f \in \mathcal{D} \cap L^1(M,\mu)$. The Nash inequality \ref{nash} may be replaced by an equivalent Sobolev inequality, a logarithmic Sobolev inequality or a Faber-Krahn inequality (See \cite{BCLS}).
However, we will follow the approach of \cite{CKS} and use the Nash inequality. Such a Nash inequality can be obtained from geometric assumptions like a Poincar\'{e} inequality and a volume growth assumption like \ref{ahl}.
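For the reader's convenience we sketch one direction of this equivalence (this summary is ours; see \cite{CKS} for the complete statement). Let $\norm{f}_1 \le 1$ and set $u_t := P_t f$. Then
\[
\frac{d}{dt}\norm{u_t}_2^2 = -2\mathcal{E}(u_t,u_t) \le -\frac{2}{C_N}\norm{u_t}_2^{2(1+{d_w}/{d_f})},
\]
using \ref{nash} and $\norm{u_t}_1 \le \norm{f}_1 \le 1$. Integrating this differential inequality gives $\norm{u_t}_2^2 \le \left( C_N {d_f}/(2 {d_w} t)\right)^{{d_f}/{d_w}}$, that is, $\norm{P_t}_{1\to 2} \lesssim t^{-{d_f}/(2{d_w})}$. By self-adjointness and the semigroup law, $\norm{P_t}_{1 \to \infty} \le \norm{P_{t/2}}_{2\to\infty}\norm{P_{t/2}}_{1\to 2} \lesssim t^{-{d_f}/{d_w}}$, which is \eqref{e-diag}.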
Since $\mathcal{E}$ is regular, it follows that $\mathcal{E}(f,g)$ can be written in terms of a signed measure $\Gamma(f,g)$ as
\[
\mathcal{E}(f,g) = \int_M \Gamma (f,g),
\]
where the energy measure $\Gamma$ is defined as follows.
\begin{definition}\label{d-energy}
For any essentially bounded $f \in \mathcal{D}$, $\Gamma(f,f)$ is the unique Borel measure on $M$ (called the energy measure) satisfying
\[
\int_M g \,d\Gamma(f,f) = \mathcal{E}(f,fg) -\frac{1}{2} \mathcal{E}(f^2,g)
\]
for all essentially bounded $g \in \mathcal{D} \cap C_c(M)$; $\Gamma(f,g)$ is then defined by polarization.
\end{definition}
We shall use the following properties of the energy measure.
\begin{enumerate}
\item[(i)] \emph{Locality}: For all functions $f,g \in \mathcal{D}$ and all measurable sets $G \subset M$ on which $f$ is constant
\[
\mathbf{1}_G d\Gamma(f,g) = 0
\]
\item[(ii)] \emph{Leibniz and chain rules}: For $f,g \in \mathcal{D}$ essentially bounded and $\phi \in C^1(\mathbb{R})$,
\begin{eqnarray}
\label{e-lei} d\Gamma(fg,h) &=& f d\Gamma(g,h)+g d\Gamma(f,h)\\
\label{e-cha} d\Gamma(\phi(f),g) &=& \phi'(f) \, d\Gamma(f,g).
\end{eqnarray}
\end{enumerate}
We wish to obtain an off-diagonal estimate using Davies' perturbation method.
The main difference from the previous implementations of the method is that,
in addition to an on-diagonal upper bound (or equivalently Nash inequality), we also require a cutoff Sobolev inequality.
Spaces satisfying the sub-Gaussian upper bound given in \ref{usg}
necessarily satisfy the cutoff Sobolev annulus inequality \ref{csa},
a condition introduced by Andres and Barlow \cite{AB}.
The condition $\operatorname{CS}A$ simplifies the cut-off Sobolev inequalities $\operatorname{CS}$ which were originally introduced by Barlow and Bass \cite{BB} for weighted graphs.
The significance of the cut-off Sobolev inequalities $\operatorname{CS}$ and $\operatorname{CS}A$ is that they are stable under bounded perturbations of the Dirichlet form (Cf. \cite[Corollary 5.2]{AB}).
Moreover, the condition $\operatorname{CS}$ is stable under quasi-isometries (rough isometries) of the underlying space \cite[Theorem 2.21(b)]{BBK}.
Therefore cutoff Sobolev inequalities provide a robust method to obtain heat kernel estimates with anomalous time-space scaling. We now define the cutoff Sobolev inequality \ref{csa}.
\begin{definition}
Let $U \subset V$ be open sets in $M$ with $U \subset \bar{U} \subset V$. We say that a continuous function $\phi$ is a cutoff function for $U \subset V$ if $\phi \equiv 1$ on
$U$ and $\phi \equiv 0$ on $V^c$.
\end{definition}
\begin{definition} (\cite[Definition 1.10]{AB}) \label{d-csa}
We say \ref{csa} holds if there exist $C_1,C_2>0$ such that for every $x \in M$, $R > 0$, $r>0$, there exists a cutoff function $\phi$ for $B(x,R) \subset B(x,R+r)$ such that if $f \in \mathcal{D}$, then
\begin{equation} \label{csa} \tag*{$\operatorname{CSA}({d_w})$}
\int_U f^2 \, d \Gamma(\phi,\phi) \le C_1 \int_U \phi^2 \, d\Gamma(f,f) + \frac{C_2}{r^{d_w}} \int_U f^2 \, d\mu,
\end{equation}
where $U= B(x,R+r) \setminus \overline{B(x,R)}$.
\end{definition}
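For orientation (this remark is ours, not part of the original text), in the classical case $M=\mathbb{R}^n$ with $\mathcal{E}(f,f)=\int_{\mathbb{R}^n} \abs{\nabla f}^2\,d\mu$ one may take $\phi(y) = 1 \wedge \frac{(R+r-d(x,y))_+}{r}$, for which $d\Gamma(\phi,\phi)=\abs{\nabla\phi}^2\,d\mu \le r^{-2}\mathbf{1}_U\,d\mu$; thus \ref{csa} holds with ${d_w}=2$ and, for instance, $C_1=C_2=1$. The point of \ref{csa} is that a comparable family of low-energy cutoff functions still exists when the energy measure is singular with respect to $\mu$ and ${d_w}>2$.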
It is clear that the condition $\operatorname{CS}A({d_w})$ is preserved by bounded perturbations of the Dirichlet form.
The above definition is slightly different to the one introduced in \cite[Definition 1.10]{AB}, where the constant $C_1$ is taken to be $1/8$. However both definitions are equivalent due to a `self-improving' property of $\operatorname{CS}A({d_w})$ \cite[Lemma 5.1]{AB}.
Our main result is that the Nash inequality \ref{nash} and the cutoff Sobolev inequality \ref{csa} imply the desired sub-Gaussian estimate \ref{usg}.
By \cite[Theorem
1.12]{AB}, it is known that both \ref{nash} and \ref{csa} are also necessary for
the sub-Gaussian estimate \ref{usg} to hold.
More precisely,
\begin{theorem} \label{t-main}
Let $(M,d,\mu)$ be a locally compact metric measure space that satisfies \ref{ahl} with volume growth exponent ${d_f}$.
Let $(\mathcal{E},\mathcal{D})$ be a strongly local, regular, Dirichlet form whose energy measure $\Gamma$ satisfies the cutoff Sobolev inequality \ref{csa} for some $d_w \ge 2$.
Then the Nash inequality \ref{nash} implies the sub-Gaussian upper bound \ref{usg}.
\end{theorem}
\begin{remark}
The above properties given by \ref{ahl} and \ref{usg} are a special case of the more general assumptions of volume doubling and heat kernel upper bounds with a general time-space scaling of \cite{AB}.
In fact, Theorem \ref{t-main} is subsumed by \cite[Theorem
1.12]{AB}. A recent work of Lierl provides an alternate proof of the sub-Gaussian estimates in \cite{AB} using Moser's iteration method and extends the results to certain time-dependent, non-symmetric local bilinear forms \cite{Lie}. Like earlier work by Andres and Barlow and
the present work, Lierl's arguments involve improved control on some cutoff functions.
Our methods give an alternate proof to
\cite[Theorem 1.12]{AB} in a restricted setting. Moreover we show in \cite{MS4} that these techniques can be adapted to the non-local setting to provide new results and
resolve the conjecture posed in \cite[Remark 1(d)]{MS}.
\end{remark}
\section{Off diagonal estimates using Davies' method}\label{s-Davies}
Spaces satisfying \ref{csa} have a rich class of cutoff functions with low energy. We start by studying energy estimates of these cutoff functions.
\subsection{Self-improving property of $\operatorname{CS}A$}\label{ss-self}
The cutoff Sobolev inequality \ref{csa} has a self-improving property which states that the constants $C_1,C_2$ in \ref{csa} are flexible. For example, we can decrease the value of $C_1$ in \ref{csa}
by increasing $C_2$ appropriately. This is quantified in Lemma \ref{l-sip}. Lemma \ref{l-sip} is essentially contained in \cite{AB}; we simplify the proof and obtain a slightly stronger result.
\begin{lemma}\label{l-sip} Let $(M,d,\mu)$ satisfy \ref{ahl}. Let $(\mathcal{E},\mathcal{D})$ denote a strongly local, regular, Dirichlet form with energy measure $\Gamma$ that satisfies \ref{csa}.
There exist $C_1,C_2>0$ such that
for all $\rho \in (0,1]$, for all $R,r >0$ and for all $x \in M$, there exists a cutoff function $\phi_\rho = \phi_\rho(f)$ for $B(x,R) \subset B(x,R+r)$ that satisfies
\begin{equation} \label{e-sip}
\int_U f^2 \, d \Gamma(\phi_\rho,\phi_\rho) \le 4 C_1 \rho^2 \int_U d\Gamma(f,f) + \frac{C_2 \rho^{2 -{d_w}} }{r^{d_w}} \int_U f^2 \, d\mu
\end{equation}
for all $f \in \mathcal{D}$,
where $C_1,C_2$ are the constants in \ref{csa}.
Further the cutoff function $\phi_\rho$ above satisfies
\begin{equation} \label{e-siv}
\abs{ \phi_\rho(y) - \left( \frac{R+r - d(x,y)}{r} \right) } \le 2 \rho
\end{equation}
for all $y \in B(x,R+r)\setminus B(x,R)$.
\end{lemma}
\begin{remark}
Lemma \ref{l-sip} is essentially contained in work of Andres and Barlow \cite[Lemma 5.1]{AB}. More recently, following \cite[Lemma 5.1]{AB}, J. Lierl obtained a cutoff Sobolev inequality \cite[Lemma 2.3]{Lie} that is similar to Lemma \ref{l-sip}. However, the estimate \eqref{e-siv} is new and it shows that the cutoff functions converge in $L^\infty$ norm as $\rho \to 0$ to the ``linear cutoff function''. The constructions in \cite[Lemma 5.1]{AB} and \cite[Lemma 2.3]{Lie} converge in $L^\infty$ norm as $\rho \to 0$ to a somewhat more complicated cutoff function that depends on ${d_w}$. The proof below was suggested to us by Martin Barlow.
\end{remark}
\begin{proof}[Proof of Lemma \ref{l-sip}]
Let $x \in M$, $r>0$, $R>0$, $\rho>0$, $f \in \mathcal{D}$. Define $n:= \lfloor \rho^{-1} \rfloor \in [\rho^{-1}/2 ,\rho^{-1}]$. We divide the annulus $U= B(x,R+r) \setminus B(x,R)$ into $n$-annuli $U_1,U_2,\ldots,U_n$ of equal width, where
\[
U_i:= B(x,R+ ir/n) \setminus B(x,R+ (i-1)r/n), \hspace{5mm} i=1,2,\ldots,n.
\]
By \ref{csa}, there exists a cutoff function $\phi_i$ for $B(x,R+(i-1)r/n) \subset B(x,R+ir/n)$ satisfying
\begin{equation}\label{e-sip1}
\int_{U_i} f^2 \, d\Gamma(\phi_i,\phi_i) \le C_1 \int_{U_i} \, d\Gamma(f,f) + \frac{C_2}{ (r/n)^{d_w}} \int_{U_i} f^2 \, d\mu
\end{equation}
for $i=1,2,\ldots,n$. We define $\phi= n^{-1} \sum_{i=1}^n \phi_i$. By locality, we have
\begin{equation}
\label{e-sip2} d\Gamma(\phi,\phi) = \frac{1}{n^2}\sum_{i=1}^n d\Gamma(\phi_i,\phi_i).
\end{equation}
Therefore by \eqref{e-sip2}, \eqref{e-sip1} and $\rho^{-1}/2 \le n= \lfloor \rho^{-1} \rfloor \le \rho^{-1}$, we obtain
\begin{align}
\int_U f^2 \,d\Gamma(\phi,\phi) &= n^{-2} \sum_{i=1}^n \int_U f^2 \,d\Gamma(\phi_i,\phi_i) \nonumber \\
& \le n^{-2} \sum_{i=1}^n \left( C_1 \int_{U_i} d\Gamma(f,f) + \frac{C_2}{ (r/n)^{d_w}} \int_{U_i} f^2 \, d\mu \right) \nonumber \\
& \le C_1 n^{-2} \int_{U} d\Gamma(f,f) + \frac{C_2 n^{{d_w}-2} }{ r^{d_w} } \int_{U} f^2 \, d\mu \nonumber \\
& \le 4 C_1 \rho^2 \int_{U} d\Gamma(f,f) + \frac{C_2 \rho^{2-{d_w}} }{ r^{d_w} } \int_{U} f^2 \, d\mu. \nonumber
\end{align}
This completes the proof of \eqref{e-sip}.
Note that if $y \in U_i$, then $ 1 - i/n \le \phi(y) \le 1 - (i-1)/n$ and $R + (i-1)r/n \le d(x,y) < R + ir/n$, for each $1 \le i \le n$. This along with $n^{-1} \le 2 \rho$ implies \eqref{e-siv}.
\end{proof}
Observe that by \eqref{e-siv}, the cutoff function $\phi_\rho$ for $B(x,R)\subset B(x,R+r)$ satisfies
\[
\lim_{\rho \downarrow 0} \phi_\rho(y) = 1 \wedge \left(\frac{(R+r-d(x,y))_+}{r} \right).
\]
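For illustration, the self-improving property announced above can be quantified directly from \eqref{e-sip}: given any $\varepsilon \in (0, 4C_1]$, one admissible choice is $\rho = (\varepsilon/(4C_1))^{1/2} \in (0,1]$, which produces a cutoff function $\phi_\rho$ for $B(x,R) \subset B(x,R+r)$ with
\[
\int_U f^2 \, d \Gamma(\phi_\rho,\phi_\rho) \le \varepsilon \int_U d\Gamma(f,f) + \frac{C_2}{r^{d_w}} \left( \frac{4C_1}{\varepsilon} \right)^{({d_w}-2)/2} \int_U f^2 \, d\mu \qquad \text{for all } f \in \mathcal{D},
\]
so the constant in front of the energy term can be made arbitrarily small at the price of enlarging the constant in front of the $L^2$ term.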
\subsection{Estimates on perturbed forms}\label{ss-perturbed}
The key to carrying out Davies' method is the following elementary inequality.
\begin{lemma} \label{l-key}
Let $(\mathcal{E},\mathcal{D})$ be a strongly local, regular, Dirichlet form. Then
\begin{equation} \label{e-key}
\mathcal{E}(e^\psi f^{2p-1} , e^{-\psi} f) \ge \frac{1}{p} \mathcal{E}(f^p,f^p) - p \int_{M} f^{2p} \, d\Gamma (\psi,\psi)
\end{equation}
for all $f \in \mathcal{D}$, $\psi \in \mathcal{D}$ and $p \in [1,\infty)$.
\end{lemma}
\begin{proof}
Using Leibniz rule \eqref{e-lei} and chain rule \eqref{e-cha}, we obtain
\begin{eqnarray} \label{e-ky1}
\lefteqn{\Gamma(e^\psi f^{2p-1} , e^{-\psi} f) - \frac{1}{p} \Gamma(f^p,f^p) + p f^{2p} \Gamma (\psi,\psi)} \nonumber \\
&=& (p-1) \left( f^{2(p-1)} \Gamma(f,f) + f^{2p} \Gamma(\psi,\psi) - 2 f^{2p-1}\Gamma(f,\psi) \right).
\end{eqnarray}
By \cite[Theorem 3.7]{CKS} and the Cauchy-Schwarz inequality, we have
\begin{equation*}
\int_M f^{2p-1}\, d\Gamma(f,\psi) \le \left( \int_M f^{2(p-1)} \, d \Gamma(f,f) \cdot \int_M f^{2p} \, d\Gamma(\psi,\psi) \right)^{1/2}.
\end{equation*}
Therefore
\begin{equation} \label{e-ky2}
2 \int_M f^{2p-1}\, d\Gamma(f,\psi) \le \int_M f^{2(p-1)} \, d \Gamma(f,f) + \int_M f^{2p} \, d\Gamma(\psi,\psi).
\end{equation}
By integrating \eqref{e-ky1} and using \eqref{e-ky2}, we obtain \eqref{e-key}.
\end{proof}
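In the special case $p=1$, the right-hand side of \eqref{e-ky1} vanishes, so \eqref{e-key} holds with equality:
\[
\mathcal{E}(e^{\psi} f , e^{-\psi} f) = \mathcal{E}(f,f) - \int_{M} f^{2} \, d\Gamma (\psi,\psi)
\]
for all $f, \psi$ as in Lemma \ref{l-key}. An identity of this type underlies Davies' classical Gaussian upper bounds.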
Davies used the bound
\[
\int_{M} f^{2p} \, d\Gamma (\psi,\psi) \le \norm{\frac{d \Gamma(\psi,\psi)}{d \mu}}_\infty \norm{f}_{2p}^{2p}
\]
to control a term in \eqref{e-key}. However, for anomalous diffusions the energy measure is singular with respect to $\mu$.
We will instead use \ref{csa} to bound $\int_{M} f^{2p} \, d\Gamma (\psi,\psi) $ by choosing $\psi$ to be a multiple of the cutoff function satisfying \ref{csa}.
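For orientation, in the classical Euclidean setting (which is not the setting of this paper) the Dirichlet form $\mathcal{E}(f,f)=\int_{\mathbb{R}^n} \abs{\nabla f}^2 \, dx$ has energy measure $d\Gamma(\psi,\psi)=\abs{\nabla \psi}^2\,dx$, so a Lipschitz function such as $\psi(x) = \lambda \left(2r - \abs{x-x_0}\right)_+$ satisfies
\[
\int_{\mathbb{R}^n} f^{2p} \, d\Gamma(\psi,\psi) \le \lambda^2 \norm{f}_{2p}^{2p},
\]
and Davies' bound applies directly. When the energy measure is singular with respect to $\mu$, the density $\frac{d \Gamma(\psi,\psi)}{d \mu}$ appearing in Davies' bound is not available, and the cutoff functions of \ref{csa} serve as a substitute.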
The following estimate is analogous to \cite[Theorem 3.9]{CKS} but unlike in \cite{CKS},
the cutoff functions depend on both $p$ and $\lambda$. This raises new difficulties in the implementation of Davies' method.
\begin{prop} \label{p-drive}
Let $(M,d,\mu)$ be a metric measure space. Let $(\mathcal{E},\mathcal{D})$ be a strongly local, regular, Dirichlet form on $M$ satisfying \ref{csa}. There exists $C>0$ such that, for all $\lambda \ge 1$, for all $r>0$, for all $x \in M$, and
for all $p \in [1,\infty)$, there exists a cutoff function $\phi=\phi_{p,\lambda}$ for $B(x,r) \subset B(x,2r)$ such that
\begin{equation} \label{e-dri3}
\mathcal{E}(e^{\lambda \phi} f^{2p-1}, e^{-\lambda \phi} f) \ge \frac{1}{2p} \mathcal{E}(f^p,f^p) - C \frac{\lambda ^{d_w} p^{{d_w}-1}}{r^{d_w}} \norm{f}_{2p}^{2p},
\end{equation}
for all $f \in \mathcal{D}$.
There exists $C'>0$ such that the cutoff functions $\phi_{p,\lambda}$ above satisfy
\begin{equation} \label{e-step}
\norm{\exp \left( \lambda (\phi_{p,\lambda} - \phi_{2p,\lambda}) \right)}_\infty \vee \norm{\exp \left( -\lambda (\phi_{p,\lambda} - \phi_{2p,\lambda}) \right)}_\infty \le \exp(C'/p)
\end{equation}
for all $\lambda \ge 1$ and for all $p \ge 1$.
\end{prop}
\begin{proof}
This proposition follows from Lemma \ref{l-key} and Lemma \ref{l-sip}. Let $x \in M$, $r>0$, $\lambda \ge 1$ and $p \in [1,\infty)$ be arbitrary.
Using \eqref{e-key}, we obtain
\begin{equation} \label{e-ts1}
\mathcal{E}(e^{\lambda \phi} f^{2p-1}, e^{-\lambda \phi} f) \ge \frac{1}{p} \left( \mathcal{E}(f^p,f^p) - (p \lambda)^2 \int_M f^{2p} \, d\Gamma(\phi,\phi) \right).
\end{equation}
By Lemma \ref{l-sip} and fixing $\rho^2= (p\lambda)^{-2}/(8C_1)$ in \eqref{e-sip}, we obtain a cutoff function $\phi= \phi_{p,\lambda}$ for $B(x,r)\subset B(x,2r)$ and $C > 0$ such that
\begin{equation}\label{e-ts2}
(p \lambda)^2 \int_M f^{2p} \, d\Gamma(\phi,\phi) \le \frac{1}{2} \mathcal{E}(f^p,f^p) + C \frac{ (\lambda p)^{d_w}}{r^{d_w}} \int_M f^{2p}\,d\mu.
\end{equation}
By \eqref{e-ts1} and \eqref{e-ts2}, we obtain \eqref{e-dri3}.
By \eqref{e-siv} and the above choice of $\rho$, there exists $C'>0$ such that the cutoff functions $\phi_{p,\lambda}$ satisfy
\[
\norm{ \phi_{p,\lambda} - \phi_{2p,\lambda} }_\infty \le \frac{C'}{p \lambda}
\]
for all $p \ge 1$, for all $\lambda \ge 1$, for all $x \in M$ and for all $r>0$.
This immediately implies \eqref{e-step}.
\end{proof}
\begin{remark}
Estimates similar to \eqref{e-dri3} were introduced by Davies in \cite[equation (3)]{Dav2} to obtain off-diagonal estimates for higher-order (order greater than $2$) elliptic operators.
Roughly speaking, the generator $\mathcal L$ for anomalous diffusion with walk dimension ${d_w}$ behaves like an `elliptic operator of order ${d_w}$'.
However the theory presented in \cite{Dav2} is complete only when the `order' ${d_w}$ is bigger than the volume growth exponent ${d_f}$, \emph{i.e.} in the strongly recurrent case.
This is because the method in \cite{Dav2} relies on a Gagliardo-Nirenberg inequality which is true only in the strongly recurrent setting. We believe that one can adapt the methods of \cite{Dav2}
to obtain an easier proof for the strongly recurrent case. However, we will not impose any such restrictions and our proof will closely follow the one in \cite{CKS}.
\end{remark}
\subsection{Proof of Theorem \ref{t-main}}
Let $\lambda \ge 1$, $x \in M$ and $r>0$. Let $p_k=2^k$ for $k \in \mathbb{N}$ and let
$\psi_k = \lambda \phi_{p_k,\lambda}$, where $ \phi_{p_k,\lambda}$
is a cutoff function on $B(x,r) \subset B(x,2r)$ given by
Proposition \ref{p-drive}. We write
\begin{equation} \label{e-fdef}
f_{t,k}:= P_t^{\psi_{k} } f
\end{equation}
for all $k \in \mathbb{N}$, where $f \in \mathcal{D}$ and $P_t^{\psi_{k} }$ denotes the perturbed semigroup as in \eqref{e-pet}.
Using \eqref{e-dri3}, there exists $C_0>0$ such that
\begin{eqnarray}
\nonumber \frac{d}{dt} \norm{f_{t,0}}_{2}^2 &=& -2 \mathcal{E}\left(e^{\psi_0} f_{t,0}, e^{- \psi_0 } f_{t,0}\right) \\
&\le& 2 C_0 \frac{ \lambda^{d_w} }{r^{d_w}} \norm{f_{t,0}}_2^2 \label{e-sr1}
\end{eqnarray}
and
\begin{eqnarray}
\nonumber \frac{d}{dt} \norm{f_{t,k}}_{2p_{k}}^{2p_k} &=& -2 p_k \mathcal{E}\left(e^{\psi_k} f_{t,k}^{2p_k-1}, e^{-\psi_k } f_{t,k} \right) \\
&\le& - \mathcal{E} \left( f_{t,k}^{p_k}, f_{t,k}^{p_k}\right) + 2 C_0 \left(\frac{\lambda p_k}{r} \right)^{d_w} \norm{f_{t,k}}_{2 p_k}^{2 p_k} \label{e-sr2}
\end{eqnarray}
for all $k \in \mathbb{N}^*$.
By \eqref{e-sr1}, we obtain
\begin{equation}
\label{e-sr3} \norm{f_{t,0}}_{p_1} =\norm{f_{t,0}}_2 \le \exp\left( C_0 \lambda^{d_w} t /r^{d_w} \right) \norm{f}_2.
\end{equation}
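In more detail, \eqref{e-sr1} states that
\[
\frac{d}{dt} \left( e^{-2 C_0 \lambda^{d_w} t/r^{d_w}} \norm{f_{t,0}}_{2}^{2} \right) \le 0,
\]
and $f_{0,0}=P_0^{\psi_0} f=f$, so that $\norm{f_{t,0}}_2^2 \le \exp\left( 2 C_0 \lambda^{d_w} t /r^{d_w} \right) \norm{f}_2^2$, which is \eqref{e-sr3}.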
Using \eqref{e-sr2} and the Nash inequality \ref{nash}, we obtain
\begin{equation}
\label{e-sr4} \frac{d}{dt} \norm{f_{t,k}}_{2p_{k}} \le - \frac{1}{2 C_N p_k} \norm{f_{t,k}}_{2 p_k}^{1+ 2 {d_w} p_k /{d_f} }\norm{f_{t,k}}_{p_k}^{-2 {d_w} p_k/{d_f}} + C_0 p_k^{{d_w}-1}\left( \frac{\lambda}{r}\right)^{d_w}\norm{f_{t,k}}_{2p_k}
\end{equation}
for all $k \in \mathbb{N}^*$. By \eqref{e-step} and the fact that $P_t$ preserves nonnegativity, we have, for nonnegative $f$,
\begin{equation} \label{e-sc}
\exp(-2 C'/p_k) f_{t,k+1} \le f_{t,k} \le \exp(2 C'/p_k) f_{t,k+1}
\end{equation}
for all $k \in \mathbb{N}_{\ge 0}$.
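More precisely, assuming the standard convention $P_t^{\psi}f = e^{-\psi} P_t ( e^{\psi} f)$ for the perturbed semigroup in \eqref{e-pet} (any other sign convention is handled in the same way), \eqref{e-step} gives $e^{-C'/p_k} \le e^{\psi_k - \psi_{k+1}} \le e^{C'/p_k}$ pointwise, and therefore, for nonnegative $f$,
\[
f_{t,k} = e^{-\psi_k} P_t\left( e^{\psi_k} f \right) \le e^{C'/p_k}\, e^{-\psi_{k+1}} P_t\left( e^{C'/p_k} e^{\psi_{k+1}} f \right) = e^{2C'/p_k} f_{t,k+1};
\]
the lower bound in \eqref{e-sc} is obtained in the same way.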
Combining \eqref{e-sr4} and \eqref{e-sc}, we obtain
\begin{equation}
\label{e-sr5} \frac{d}{dt} \norm{f_{t,k}}_{2p_{k}} \le - \frac{1}{C_A p_k} \norm{f_{t,k}}_{2 p_k}^{1+ 2 {d_w} p_k /{d_f} }\norm{f_{t,k-1}}_{p_k}^{-2 {d_w} p_k/{d_f}} + C_0 p_k^{{d_w}-1}\left( \frac{\lambda}{r}\right)^{d_w}\norm{f_{t,k}}_{2p_k}
\end{equation}
for all $k \in \mathbb{N}^*$, where $C_A= 2 C_N \exp ( 8{d_w} C' /{d_f})$.
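The constant $C_A$ arises by inserting \eqref{e-sc}, applied with $k-1$ in place of $k$, into the factor $\norm{f_{t,k}}_{p_k}^{-2 {d_w} p_k/{d_f}}$ of \eqref{e-sr4}: since $p_{k-1}=p_k/2$,
\[
\norm{f_{t,k}}_{p_k} \le e^{4C'/p_k} \norm{f_{t,k-1}}_{p_k}, \qquad \text{hence} \qquad \norm{f_{t,k}}_{p_k}^{-2 {d_w} p_k/{d_f}} \ge e^{-8 {d_w} C'/{d_f}}\, \norm{f_{t,k-1}}_{p_k}^{-2 {d_w} p_k/{d_f}}.
\]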
To obtain off-diagonal estimates using the differential inequalities \eqref{e-sr5} we use the following lemma.
The following lemma is analogous to \cite[Lemma 3.21]{CKS} but the statement and its proof are slightly modified to suit our anomalous diffusion context with walk dimension ${d_w}$.
\begin{lemma}\label{l-dife}
Let $w:[0,\infty) \to (0,\infty)$ be a non-decreasing function and suppose that $u \in C^1([0,\infty); (0,\infty))$ satisfies
\begin{equation} \label{e-cks}
u'(t) \le - \frac{\varepsilon}{p} \left( \frac{t^{(p-2)/\theta p}}{w(t)}\right)^{\theta p } u^{1+\theta p}(t) + \delta p^{{d_w} - 1}u(t)
\end{equation}
for some positive $\varepsilon, \theta$ and $\delta$, ${d_w} \in [2,\infty)$ and $p=2^k$ for some $k \in \mathbb{N}^*$. Then $u$ satisfies
\begin{equation} \label{e-fs}
u(t) \le \left( \frac{2 p^{{d_w}} }{\varepsilon \theta} \right)^{1/(\theta p)} t^{(1-p)/(\theta p)} w(t) e^{\delta t/p} \quad \text{for all } t>0.
\end{equation}
\end{lemma}
\begin{proof}
Set $v(t) = e^{- \delta p^{{d_w}-1}t} u(t)$. By \eqref{e-cks}, we have
\[
v'(t)= e^{- \delta p^{{d_w}-1}t} \left( u'(t) - \delta p^{{d_w} -1} u(t) \right) \le - \frac{\varepsilon t^{p-2}}{p w(t)^{\theta p}} e^{\theta \delta p^{{d_w}}t} v(t)^{1+\theta p}.
\]
Hence
\[
\frac{d}{dt} \left( v(t) \right)^{-\theta p} \ge \varepsilon \theta t^{p-2} w(t)^{-\theta p} e^{\theta \delta p^{{d_w}}t}
\]
and so, since $w$ is non-decreasing
\begin{equation} \label{e-fs1}
e^{ \delta \theta p^{d_w} t} u(t)^{-\theta p} \ge \varepsilon \theta w(t)^{-\theta p} \int_0^t s^{(p-2)} e^{\theta \delta p^{{d_w}}s} \, ds.
\end{equation}
Note that
\begin{eqnarray}
\nonumber \int_0^t s^{(p-2)} e^{\theta \delta p^{{d_w}}s} \, ds &\ge& (t/\delta \theta p^{d_w})^{p-1} \int_{\delta \theta p^{d_w}(1- 1/p^{d_w})}^{\delta \theta p^{d_w}} y^{(p-2)} e^{ty}\, dy \\
\nonumber &\ge & \frac{t^{p-1}}{p-1} \exp \left( \delta \theta p^{d_w} t - \delta \theta t \right) \left[ 1 - (1- p^{-{d_w}})^{p-1} \right] \\
\label{e-fs2} &\ge & \frac{t^{p-1}}{2p^{{d_w}}} \exp \left( \delta \theta p^{d_w} t - \delta \theta t \right).
\end{eqnarray}
In the last line above, we used the bound $1-(1-p^{-{d_w}})^{p-1} \ge (p-1) p^{-{d_w}}/2$ for all $p,{d_w} \ge 2$, which follows from $(1-p^{-{d_w}})^{p-1} \le e^{-(p-1)p^{-{d_w}}} \le 1 - (p-1)p^{-{d_w}}/2$, the last inequality being valid because $(p-1)p^{-{d_w}} \le 1$.
Combining \eqref{e-fs1} and \eqref{e-fs2} yields \eqref{e-fs}.
\end{proof}
We now pick $f \in L^2(M,\mu)$ and $f \ge 0$ with $\norm{f}_2=1$.
Let $u_{k}(t)= \norm{f_{t,k-1}}_{p_{k}}$ and let \[w_k(t)= \sup \{ s^{ {d_f}(p_k-2)/(2 {d_w} p_k)} u_k(s) : s \in (0,t] \}.\]
By \eqref{e-sr3}, $w_1(t) \le \exp( C_0 \lambda^{d_w} t /r^{d_w})$. Further by \eqref{e-sr5}, $u_{k+1}$ satisfies \eqref{e-cks} with
$\varepsilon = 1/C_A$, $\theta = 2 {d_w}/{d_f}$, $\delta = C_0 (\lambda/r)^{d_w}$, $w=w_k$ and $p=p_k$. Hence by \eqref{e-fs},
\[
u_{k+1}(t) \le \left( 2^{{d_w} k +1}/(\varepsilon \theta)\right)^{1/(\theta p_k)} t^{(1-p_k)/(\theta p_k)} e^{\delta t/p_k} w_k(t).
\]
Therefore, since $\frac{p_{k+1}-2}{\theta p_{k+1}}+\frac{1-p_k}{\theta p_k}=0$ and $w_k$ is nondecreasing,
\[
w_{k+1}(t)/w_k(t) \le \left( 2^{{d_w} k +1}/(\varepsilon \theta)\right)^{1/(\theta 2^k)} e^{\delta t/2^k}
\]
for $k \in \mathbb{N}^*$. Multiplying these estimates over $k$ and using $\sum_{k \ge 1} 2^{-k}=1$, we obtain
\[
\lim_{k \to \infty} w_{k}(t) \le C_2 e^{\delta t} w_1(t) \le C_2 \exp( 2 C_0 \lambda^ {d_w} t/r^{d_w} )
\]
where $C_2 =C_2( {d_w}, \varepsilon, \theta)$.
Since $u_k(t) \le t^{-(p_k-2)/(\theta p_k)} w_k(t)$ with $(p_k-2)/(\theta p_k) \to 1/\theta = {d_f}/(2{d_w})$, and since $P_t$ is a contraction on all $L^p(M,\mu)$ for $1\le p \le \infty$, we obtain
\[
\lim_{k \to \infty} u_k(t) = \norm{ P_t^{\psi_\infty}f}_\infty \le \frac{C_2}{t^{{d_f}/2{d_w}}} \exp(2 C_0 \lambda^ {d_w} t/r^{d_w} ),
\]
where $\psi_\infty=\lim_{k \to \infty} \psi_k$. To lighten notation, we write $C_0$ for $2C_0$ in what follows.
Since the above bound holds for all $f \in L^2(M,\mu)$ with $\norm{f}_2=1$, we have
\[
\norm{P_t^{\psi_\infty} }_{2 \to \infty} \le \frac{C_2}{t^{{d_f}/2{d_w}}} \exp( C_0 \lambda^ {d_w} t/r^{d_w} ).
\]
The estimate is unchanged if we replace $\psi_k$'s by $-\psi_k$. Since $P_t^{-\psi}$ is the adjoint of $P_t^\psi$, by duality we have that
\[
\norm{P_t^{\psi_\infty}}_{1\to 2} \le \frac{C_2}{t^{{d_f}/2{d_w}}} \exp( C_0 \lambda^ {d_w} t/r^{d_w} ).
\]
Combining the above, we have
\begin{equation}
\norm{P_t^{\psi_\infty} }_{1\to \infty} \le \norm{P_{t/2}^{\psi_\infty} }_{1\to 2} \norm{P_{t/2}^{\psi_\infty} }_{2 \to \infty} \le \frac{C_2^2\, 2^{{d_f}/{d_w}}}{t^{{d_f}/{d_w}}} \exp( C_0 \lambda^ {d_w} t/r^{d_w} ).
\end{equation}
Therefore
\[
p_t(x,y) \le \frac{C_2^2\, 2^{{d_f}/{d_w}}}{t^{{d_f}/{d_w}}} \exp( C_0 \lambda^ {d_w} t/r^{d_w} + \psi_\infty(y) - \psi_\infty(x) ).
\]
for all $x,y \in M$ and for all $r,t>0$ and $\lambda \ge 1$.
If we choose $r=d(x,y)/2$, we have $\psi_\infty(y) - \psi_\infty(x)= -\lambda$. This yields
\[
p_t(x,y) \le \frac{C_3}{t^{{d_f}/{d_w}}} \exp(C_4 \lambda^ {d_w} t/d(x,y)^{d_w} - \lambda ).
\]
where $C_3,C_4>1$. Choosing $\lambda = (2C_4)^{-1/({d_w}-1)} (d(x,y)^{d_w}/t)^{1/({d_w}-1)}$, which satisfies $\lambda \ge 1$ whenever $d(x,y)^{d_w} \ge 2 C_4 t$, we obtain
\[
p_t(x,y) \le \frac{C_3}{t^{{d_f}/{d_w}}} \exp\left(- \left(\frac{d(x,y)^{d_w}}{C_5 t}\right)^{1/({d_w}-1)}\right).
\]
for all $x,y \in M$ and for all $t>0$ such that $d(x,y)^{d_w} \ge 2 C_4 t$.
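To verify the choice of $\lambda$, note that $C_4 \lambda^{{d_w}-1} t/d(x,y)^{d_w} = 1/2$, so that
\[
C_4 \lambda^ {d_w} t/d(x,y)^{d_w} - \lambda = -\frac{\lambda}{2} = - \left( \frac{d(x,y)^{d_w}}{2^{{d_w}} C_4\, t} \right)^{1/({d_w}-1)},
\]
which is the exponent displayed above with $C_5 = 2^{{d_w}} C_4$.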
If $d(x,y)^{d_w} < 2 C_4 t$, the on-diagonal estimate \eqref{e-diag} suffices to obtain the desired sub-Gaussian upper bound.
\qed
\begin{remark}
The following generalized capacity estimate is a weaker form of the cutoff Sobolev inequality \ref{csa}, where the cutoff function $\phi$ is allowed to depend on the function $f$. This generalized capacity estimate was introduced by Grigor'yan, Hu and Lau, who obtain sub-Gaussian estimates under this weaker assumption \cite[Theorem 1.2]{GHL2}.
\begin{definition} \label{d-gcap}
We say \ref{gcap} holds if there exists $C_1,C_2>0$ such that for every $x \in M$, $R > 0$, $r>0$, $f \in \mathcal{D}$, there exists a cutoff function $\phi = \phi(f)$ for $B(x,R) \subset B(x,R+r)$ such that
\begin{equation} \label{gcap} \tag*{$\operatorname{Gcap}({d_w})$}
\int_U f^2 \, d \Gamma(\phi,\phi) \le C_1 \int_U \phi^2 \, d\Gamma(f,f) + \frac{C_2}{r^{d_w}} \int_U f^2 \, d\mu,
\end{equation}
where $U= B(x,R+r) \setminus \overline{B(x,R)}$.
\end{definition}
We refer the reader to \cite[Definition 1.5]{CKW} for an analogous generalized capacity estimate in a non-local setting.
It is an interesting open problem to modify the proof of Theorem \ref{t-main} under the above weaker assumption. The main difficulty for carrying out Davies' method under the weaker generalized capacity estimate assumption is that we require the inequalities \eqref{e-sr1} and \eqref{e-sr2} as $t>0$ varies. This would require the cutoff function $\psi_k$ to depend on $f_{t,k}$ for each $t$. Therefore the derivatives computed in \eqref{e-sr1} and \eqref{e-sr2} will have additional terms, since $\psi_k$ varies with time $t$.
We were informed of the reference \cite{HL} by the referee during the revision stage. In \cite{HL}, the techniques developed here and in the companion paper \cite{MS4} are extended to Dirichlet forms on metric measure spaces (possibly non-local) with jumps satisfying a polynomial type upper bound.
\end{remark}
\end{document}
\begin{document}
\title{Compactness of Sobolev embeddings and decay of norms}
\author{Jan Lang, Zden\v ek Mihula and Lubo\v s Pick}
\address{Jan Lang, Department of Mathematics,
The Ohio State University,
231 West 18th Avenue,
Columbus, OH 43210-1174}
\email{[email protected]}
\urladdr{0000-0003-1582-7273}
\address{Zden\v{e}k Mihula, Department of Mathematical Analysis, Faculty of Mathematics and
Physics, Charles University, Sokolovsk\'a~83,
186~75 Praha~8, Czech Republic
--AND-- Department of Mathematics, Faculty of Electrical Engineering, Czech Technical University in Prague, Technick\'a~2,
166 27 Prague~6, Czech Republic}
\email{[email protected]}
\email{[email protected]}
\urladdr{0000-0001-6962-7635}
\address{Lubo\v s Pick, Department of Mathematical Analysis, Faculty of Mathematics and
Physics, Charles University, Sokolovsk\'a~83,
186~75 Praha~8, Czech Republic}
\email{[email protected]}
\urladdr{0000-0002-3584-1454}
\subjclass[2020]{46E30, 46E35}
\keywords{compactness, Sobolev embeddings, Ahlfors measures, rearrangement-invariant spaces, optimal range spaces}
\thanks{This research was partly supported by the grant P201-18-00580S of the Czech Science Foundation; by the Charles University, project GA UK No.~1056119; by Charles University Research program No.~UNCE/SCI/023; by the project OPVVV CAAS CZ.02.1.01/0.0/0.0/16\_019/0000778.}
\begin{abstract}
We investigate the compactness of embeddings of Sobolev spaces built upon rearrangement-invariant spaces into rearrangement-invariant spaces endowed with $d$-Ahlfors measures satisfying a certain restriction on the speed of their decay on balls. We show that the gateway to compactness of such embeddings, while formally describable by means of optimal embeddings and almost-compact embeddings, is quite elusive. It is known that such a Sobolev embedding is not compact when its target space has the optimal fundamental function. We show that, quite surprisingly, such a target space can actually be ``fundamentally enlarged'', and yet the resulting embedding remains noncompact. In order to do that, we develop two different approaches. One is based on enlarging the optimal target space itself, and the other is based on enlarging the Marcinkiewicz space corresponding to the optimal fundamental function.
\end{abstract}
\date{\today}
\maketitle
\setcitestyle{numbers}
\section{Introduction}
Compact embeddings of function spaces containing weakly differentiable functions defined on subdomains of a Euclidean space into other function spaces constitute an important technique that is widely applicable when solutions to partial differential equations are sought by functional-analytic or variational methods. Such embeddings are particularly handy for showing the discreteness of the spectra of linear elliptic partial differential operators defined on bounded domains.
The most classical result on the compactness of a Sobolev embedding is the Rellich--Kondrachov theorem, which originated in a lemma of Rellich~\cite{Re:30} and was proved specifically for Sobolev spaces by
Kondrachov~\cite{Ko:45}. It is often used in the form stating that, given $n\in\mathbb{N}$, $n\geq2$ (we shall assume this implicitly throughout the paper), $p\in[1,n]$ and a bounded Lipschitz domain $\Omega\subseteq\R^n$, the Sobolev space $W^{1,p}(\Omega)$ is compactly embedded into the Lebesgue space $L^{q}(\Omega)$ for any $q\in[1,\frac{np}{n-p})$ (the fraction $\frac{np}{n-p}$ is to be interpreted as $\infty$ when $p=n$). Possibly the most natural way of proving the Rellich--Kondrachov theorem is based on the fact that a bounded set in $W^{1,p}(\Omega)$ is equiintegrable in $L^{q}(\Omega)$, that is, given $\varepsilon>0$, there always exists a $\delta>0$ such that for every subset $E$ of $\Omega$ of measure not exceeding $\delta$ one has
\begin{equation*}
\sup_{\|u\|_{W^{1,p}(\Omega)}\le1} \|u\chi_{E}\|_{L^{q}(\Omega)} < \varepsilon,
\end{equation*}
where $\chi_E$ stands for the characteristic function of $E$. There are several ways to achieve this fact. One of the most successful ones is based on a combination of the (in some sense) optimal Sobolev embedding and a so-called almost-compact embedding between function spaces on the target side of the embedding relation. We shall now describe this technique in more detail.
Roughly speaking, a space $Y(\Omega)$ is entitled to be called the \textit{optimal target space} for a given space $X(\Omega)$ in the Sobolev embedding $W^1X(\Omega)\hookrightarrow Y(\Omega)$ if it cannot be replaced by any essentially smaller space from a specified category of function spaces. A precise specification of the pool of competing spaces is important. For example, if $p\in[1,n)$ and $\Omega$ is a bounded Lipschitz domain in $\mathbb{R}^{n}$, then, in the classical Sobolev embedding
\begin{equation}\label{E:classical-cobolev}
W^{1,p}(\Omega)\hookrightarrow L^{\frac{np}{n-p}}(\Omega),
\end{equation}
the space $L^{\frac{np}{n-p}}(\Omega)$ is the optimal range partner for $L^{p}(\Omega)$ \textit{in the category of Lebesgue spaces} because it cannot be replaced by any essentially smaller \textit{Lebesgue} space. While the embedding
\begin{equation*}
L^{\frac{np}{n-p}}(\Omega) \hookrightarrow L^{q}(\Omega)
\end{equation*}
is continuous, it is not compact. The next step is to observe that the embedding, while noncompact, is \textit{almost compact} in the sense that
\begin{equation*}
\lim_{k\to\infty}\sup_{\|u\|_{L^{\frac{np}{n-p}}(\Omega)}\leq1} \|u\chi_{E_k}\|_{L^{q}(\Omega)} =0
\end{equation*}
for every sequence $\{E_k\}$ of measurable subsets of $\Omega$ satisfying $E_k\searrow\emptyset$ a.e.~in $\Omega$. These observations can be summarized in a chain of embeddings, namely
\begin{equation}\label{E:scheme}
W^{1,p}(\Omega) \hookrightarrow L^{\frac{np}{n-p}}(\Omega) \stackrel{*}{\hookrightarrow} L^{q}(\Omega),
\end{equation}
where the symbol $\stackrel{*}{\hookrightarrow}$ denotes an almost-compact embedding. Not surprisingly, \eqref{E:scheme} implies that every bounded set in $W^{1,p}(\Omega)$ is equiintegrable in $L^{q}(\Omega)$. The fact that the combination of the two relations in~\eqref{E:scheme} guarantees that
\begin{equation*}
W^{1,p}(\Omega) \hookrightarrow \hookrightarrow L^{q}(\Omega),
\end{equation*}
where the symbol $\hookrightarrow\hookrightarrow$ denotes a compact embedding, is known. For instance, it is explicitly proved, in a more general setting, in~\cite[Theorem~3.2]{Sla:12}. Interestingly, one obtains the almost-compact embedding
\begin{equation*}
L^{\frac{np}{n-p}}(\Omega) \stackrel{*}{\hookrightarrow} L^{q}(\Omega)
\end{equation*}
almost for free: if $q\in[1,\frac{np}{n-p})$ and $E_k\searrow \emptyset$ a.e., then we get by H\"older's inequality that, for every function $u$ in the closed unit ball of $L^{\frac{np}{n-p}}(\Omega)$,
\begin{equation}\label{E:fundamental-lebesgue}
\|u\chi_{E_k}\|_{L^{q}(\Omega)} \le \|u\|_{L^{\frac{np}{n-p}}(\Omega)}\|\chi_{E_k}\|_{L^{\frac{qnp}{np-nq+pq}}(\Omega)} \le |E_k|^{\frac1{q}-\frac1{p}+\frac1{n}}\to 0\quad\text{as $k\to\infty$,}
\end{equation}
where $|E_k|$ denotes the $n$-dimensional Lebesgue measure of $E_k$. To explore the scheme illustrated by~\eqref{E:scheme} any deeper, we need, however, finer classes of function spaces than that of Lebesgue spaces.
Although the classical theory works almost solely with Lebesgue spaces, there are other, more complicated, function spaces that are also of considerable interest. Important generalizations of Lebesgue spaces are
Lorentz spaces and Orlicz spaces. While Lorentz spaces constitute a useful tool for certain fine tuning of Lebesgue spaces, Orlicz spaces have been successfully used when either more rapid or slower than power growth of functions is needed (cf.~\cite{Go:74}, \cite{GiTr:01}, \cite{Ci:96}). Allowing these types of spaces has a considerable impact on the quality of Sobolev embeddings. For example, in~\eqref{E:classical-cobolev}, the target space is optimal as a Lebesgue space, but it is not optimal as a Lorentz space, because it can be replaced by a strictly smaller Lorentz space $L^{\frac{np}{n-p},p}(\Omega)$. The situation is even more interesting when the Sobolev space $W^{1,n}(\Omega)$ is in play, in which the
degree of integrability coincides with the dimension of the underlying Euclidean space. In that case, there does not exist any optimal Lebesgue target space, but there does exist an optimal Orlicz space, namely the celebrated
Zygmund class $\exp L^{n'}(\Omega)$ ($n'=\frac{n}{n-1}$), a result that is nowadays considered classical and that goes back to Trudinger~\cite{Tr:67}, Poho\v{z}aev~\cite{Po:65} and Judovi\v{c}~\cite{Yu:61}.
However, neither Lorentz nor Orlicz spaces hold the key to all answers, because even this space can be improved. It can be replaced by the Lorentz--Zygmund space $L^{\infty,n;-1}(\Omega)$, which is strictly smaller than $\exp L^{n'}(\Omega)$ and which has been surfacing in various contexts and also in various disguises, see, for example, \cite{CP,Ma:11,BW,Ha}. The scale of Lorentz--Zygmund spaces, in some sense a meeting point of Orlicz and Lorentz families of spaces, was introduced in~\cite{BeRu:80} and has been later generalized in numerous ways, e.g.~\cite{OP, EvGoOp:18, EdKePi:00}.
In order to avoid technical difficulties that each individual scale of spaces inevitably brings, it is advisable to make use of a common framework that encompasses all of these function spaces. Experience shows that probably the most suitable one is that of the \textit{rearrangement-invariant spaces} (for precise definitions see \hyperref[sec:prel]{Section~\ref*{sec:prel}}). This category of function spaces is naturally built on the procedure of symmetrization, but it also involves spaces whose original definitions do not rely on rearrangement techniques, such as Lebesgue or Orlicz spaces.
In the framework of rearrangement-invariant spaces, Sobolev embeddings have been studied heavily over the past two decades. The key technique is the so-called \textit{reduction principle}, which enables us to reformulate equivalently a difficult problem involving differential operators and functions of many variables in the form of a question concerning boundedness of an integral operator involving functions acting on an interval. These advances, introduced in~\cite{EdKePi:00} and then further developed in many works, see e.g.~\cite{KePi:06,CiPiSl:15}, paved the way for studying deeper properties of Sobolev embeddings, a pivotal example of which is compactness, see~\cite{KerPi:08,Sla:15,Sla:12,CaMi:19}.
The results of Slav\'{\i}kov\'a~\cite{Sla:12,Sla:15} showed that the two-step method described above in connection with Lebesgue spaces is extendable to the general setting of rearrangement-invariant spaces. There are, however, some pitfalls. In particular, if one can prove an analogue of~\eqref{E:scheme} in the form
\begin{equation*}
W^{1}X(\Omega) \hookrightarrow Y_X(\Omega) \stackrel{*}{\hookrightarrow} Z(\Omega),
\end{equation*}
in which $X(\Omega)$ and $Z(\Omega)$ are rearrangement-invariant spaces and $Y_X(\Omega)$ is the optimal (the smallest) rearrangement-invariant space $Y(\Omega)$ rendering $W^{1}X(\Omega) \hookrightarrow Y(\Omega)$ true (such a rearrangement-invariant space always exists; we shall comment on that later), then the desired compact embedding
\begin{equation}\label{E:compact-embedding-general}
W^{1}X(\Omega) \hookrightarrow \hookrightarrow Z(\Omega)
\end{equation}
follows. What is not, however, clear at all is whether one can use an analogue of~\eqref{E:fundamental-lebesgue} to get \eqref{E:compact-embedding-general}. Such an analogue would inevitably call into play the so-called \textit{fundamental function} of a rearrangement-invariant space $X(\Omega)$. The fundamental function $\varphi_X$ of $X(\Omega)$ is defined on $[0,|\Omega|)$ as
\begin{equation*}
\varphi_X(t)=\|\chi_E\|_{X},\ \text{$t\in[0,|\Omega|)$, where $E\subseteq\Omega$ is any subset satisfying $|E|=t$.}
\end{equation*}
A proper analogue of~\eqref{E:fundamental-lebesgue} would be something like
\begin{equation}\label{E:fundamental-general}
\lim_{t\to0_+} \frac{\varphi_{Z}(t)}{\varphi_{Y_X}(t)} = 0,
\end{equation}
but the question is, does \eqref{E:fundamental-general} imply \eqref{E:compact-embedding-general}? We shall see that the situation is much more complicated when we do not restrict ourselves to the class of Lebesgue spaces.
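To see what \eqref{E:fundamental-general} amounts to in the Lebesgue setting recalled above, note that $\varphi_{L^r(\Omega)}(t)=t^{\frac1{r}}$ for $r\in[1,\infty)$; hence, for $Y_X(\Omega)=L^{\frac{np}{n-p}}(\Omega)$ and $Z(\Omega)=L^{q}(\Omega)$, condition \eqref{E:fundamental-general} reads
\begin{equation*}
\lim_{t\to0_+}\frac{\varphi_{Z}(t)}{\varphi_{Y_X}(t)} = \lim_{t\to0_+}t^{\frac1{q}-\frac1{p}+\frac1{n}}=0,
\end{equation*}
which holds precisely when $q<\frac{np}{n-p}$, in agreement with the exponent in \eqref{E:fundamental-lebesgue}.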
The concept of the fundamental function leads us to the \textit{Marcinkiewicz spaces} $M_{\varphi}(\Omega)$
and the \textit{Lorentz endpoint spaces} $\Lambda_{\varphi}(\Omega)$ (see \hyperref[sec:prel]{Section~\ref*{sec:prel}} for their definitions).
It is known that $M_{\varphi}(\Omega)$ and $\Lambda_{\varphi}(\Omega)$ are the biggest and the smallest, respectively, rearrangement-invariant spaces with the fixed fundamental function $\varphi$.
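For instance, if $\varphi(t)=t^{\frac1{p}}$ with $p\in(1,\infty)$, then, up to equivalent norms, $M_{\varphi}(\Omega)$ is the weak Lebesgue space $L^{p,\infty}(\Omega)$ and $\Lambda_{\varphi}(\Omega)$ is the Lorentz space $L^{p,1}(\Omega)$, so every rearrangement-invariant space $X(\Omega)$ whose fundamental function is equivalent to $t^{\frac1{p}}$ satisfies
\begin{equation*}
L^{p,1}(\Omega)\hookrightarrow X(\Omega)\hookrightarrow L^{p,\infty}(\Omega).
\end{equation*}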
The embedding
\begin{equation}\label{E:optimal-r.i.-embedding}
W^{1}X(\Omega)\hookrightarrow Y_X(\Omega)
\end{equation}
is known to never be compact, regardless of the choice of $X(\Omega)$, but what if we replace $Y_X(\Omega)$ with the Marcinkiewicz space $M_{Y_X}(\Omega)$, the largest space having the same fundamental function as $Y_X(\Omega)$? Since $Y_X(\Omega)$ is embedded into $M_{Y_X}(\Omega)$, we plainly have that
\begin{equation}\label{E:optimal-marc-embedding}
W^{1}X(\Omega)\hookrightarrow M_{Y_X}(\Omega).
\end{equation}
It turns out, however, that, even though \eqref{E:optimal-marc-embedding} has a possibly larger target space (hence the embedding is possibly weaker) than~\eqref{E:optimal-r.i.-embedding}, it is still never compact (we shall comment on that in more detail later). At this stage, we are left with the question, how much larger than $M_{Y_X}(\Omega)$ can a target space $Y(\Omega)$ be in order to guarantee that the embedding $W^1X(\Omega)\hookrightarrow Y(\Omega)$ is still not compact?
All of the observations that we made lead us to three natural questions, which will be formulated soon. However, in order to ensure that our results are applicable to a large number of different Sobolev-type embeddings, which are often considered and studied separately, we actually consider much more general embeddings of Sobolev-type spaces (even of higher orders) into function spaces built upon quite general measure spaces. We say that a finite Borel measure $\nu$ on $\overline{\Omega}$ is a \emph{$d$-Ahlfors measure}, where $d\in(0,n]$, if
\begin{equation*}
\sup\limits_{x\in\R^n, r>0}\frac{\nu\left(B_r(x)\cap\overline{\Omega}\right)}{r^d}<\infty,
\end{equation*}
where $B_r(x)$ is the open ball in $\R^n$ centered at $x$ with radius $r$, and there is a point $x_0\in\overline{\Omega}$ and $R>0$ such that
\begin{equation*}
\inf\limits_{r\in(0,R]}\frac{\nu\left(B_r(x_0)\cap\overline{\Omega}\right)}{r^d}>0.
\end{equation*}
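For instance, the restriction $\nu=\lambda_n\rvert_{\overline{\Omega}}$ of the $n$-dimensional Lebesgue measure is an $n$-Ahlfors measure: denoting by $\omega_n$ the Lebesgue measure of the unit ball, one has
\begin{equation*}
\sup\limits_{x\in\R^n, r>0}\frac{\lambda_n\left(B_r(x)\cap\overline{\Omega}\right)}{r^n}\leq\omega_n
\quad\text{and}\quad
\inf\limits_{r\in(0,R]}\frac{\lambda_n\left(B_r(x_0)\cap\overline{\Omega}\right)}{r^n}=\omega_n
\end{equation*}
for any $x_0\in\Omega$ and $R=\operatorname{dist}(x_0,\partial\Omega)$.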
We shall consider Sobolev-type embeddings having the form $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$, where $W^mX(\Omega)$ is the $m$-th order, $m\in\mathbb{N}$, Sobolev-type space built upon a rearrangement-invariant space $X(\Omega)$, and $Y(\overline{\Omega}, \nu)$ is a rearrangement-invariant space on $\overline{\Omega}$ endowed with a $d$-Ahlfors measure $\nu$ (for more detail, see
\hyperref[sec:prel]{Section~\ref*{sec:prel}}). Such Sobolev-type embeddings and their compactness were recently studied in \citep{CPS:20, CPS_OrlLor:20, CM:20}. This general setting encompasses, for example, not only the standard Sobolev
embeddings ($\nu=\lambda_n$, the $n$-dimensional Lebesgue measure on $\Omega$) but also Sobolev trace embeddings onto $d$-dimensional submanifolds ($\nu=\mathcal H^d\rvert_{\Omega_d}$, where $\Omega_d$ is a $d$-dimensional
Riemannian submanifold) and boundary trace embeddings ($\nu=\mathcal H^{n-1}\rvert_{\partial\Omega}$) as well as some weighted Sobolev embeddings (e.g.~${\fam0 d}\nu(x)=|x-z|^{d-n}\,{\fam0 d} x$, where $z\in\overline{\Omega}$ is a fixed
point). Now we can formulate the principal questions that we tackle in this paper.
Let $m<n$ and $d\in[n-m,n]$. Let $X(\Omega)$ be a rearrangement-invariant space such that $X(\Omega)\not\subseteq L^{\frac{n}{m},1}(\Omega)$. Let $Y_X(\overline{\Omega}, \nu)$ be the optimal rearrangement-invariant target space in $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ (its existence is guaranteed by \citep[Theorem~4.4]{CPS:20}). It was proved in \citep[Theorem~4.1]{CM:20} (cf.~\citep[Theorem~4.1]{Sla:15}, \citep[Theorem~5.1]{KerPi:08}) that, if $Y(\overline{\Omega}, \nu)\neq L^\infty(\overline{\Omega}, \nu)$, then $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is compact if and only if $Y_X(\overline{\Omega}, \nu)\overset{*}{\hookrightarrow} Y(\overline{\Omega}, \nu)$. It follows from this characterization combined with \eqref{prel:almostcompembnecessary} that the optimal embedding $W^mX(\Omega)\hookrightarrow Y_X(\overline{\Omega}, \nu)$ is never compact. Actually, since $Y_X(\overline{\Omega}, \nu)$ and $M_{Y_X}(\overline{\Omega}, \nu)$ have the same fundamental function, even the embedding $W^mX(\Omega)\hookrightarrow M_{Y_X}(\overline{\Omega}, \nu)$ is never compact. While $M_{Y_X}(\overline{\Omega}, \nu)$ is the biggest rearrangement-invariant space on the same fundamental scale as $Y_X(\overline{\Omega}, \nu)$, it is not the biggest rearrangement-invariant target space that renders the embedding noncompact. Surprising as it may appear, there is, in general, no such a space. We shall show that the set of the target spaces that renders a Sobolev embedding noncompact has, roughly speaking, no biggest element. The remarks made in this paragraph lead us to two possible ideas of how to construct bigger, noncompact target spaces.
The first idea is to enlarge the Marcinkiewicz space $M_{Y_X}(\overline{\Omega}, \nu)$ to a space $Y(\overline{\Omega}, \nu)$ in such a way that the embedding $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is still not compact.
\begin{question}\label{q:noncompactMarc}
Is there a rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ with the following properties:
\begin{itemize}
\item $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ non-compactly,
\item $M_{Y_X}(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$, where $M_{Y_X}(\overline{\Omega}, \nu)$ is the Marcinkiewicz space corresponding to $Y_X(\overline{\Omega}, \nu)$?
\end{itemize}
\end{question}
We shall give a comprehensive answer to \hyperref[q:noncompactMarc]{Question~\ref*{q:noncompactMarc}}, in which we make substantial use of a construction that ensures that, given a Marcinkiewicz space $M_\varphi$, we can construct a Marcinkiewicz space $M_\psi$ such that $M_\varphi\subsetneq M_\psi$ having the crucial property that $M_\varphi$ is not almost-compactly embedded into $M_\psi$. The construction does not, however, guarantee that $M_\psi$ is ``fundamentally bigger'' than $M_\varphi$, that is, $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=0$. In fact, the limit does not exist at all. It is thus natural to search for a ``fundamentally bigger'' space in the next step. This leads us to a new problem, closely related to \hyperref[q:noncompactMarc]{Question~\ref*{q:noncompactMarc}} but more difficult.
\begin{question}\label{q:noncompactfundamentallybigger1}
Is there a rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ with the following properties:
\begin{itemize}
\item $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ non-compactly,
\item $M_{Y_X}(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$,
\item $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{Y_X}(t)}=0$?
\end{itemize}
\end{question}
The difference between \hyperref[q:noncompactMarc]{Question~\ref*{q:noncompactMarc}} and \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}} is that, in the latter, the space $Y(\overline{\Omega}, \nu)$ is required to be ``fundamentally bigger'' than $Y_X(\overline{\Omega}, \nu)$. We deal with the first two questions in \hyperref[sec:q1q2]{Section~\ref*{sec:q1q2}}.
The other approach is to enlarge the optimal space $Y_X(\overline{\Omega}, \nu)$ to a space $Y(\overline{\Omega}, \nu)$ in such a way that the embedding $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is still not compact. This leads to the following question, which we deal with in \hyperref[prel:q3]{Section~\ref*{prel:q3}}.
\begin{question}\label{q:noncompactfundamentallybigger2}
Is there a rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ with the following properties:
\begin{itemize}
\item $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ non-compactly,
\item $Y_X(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$,
\item $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{Y_X}(t)}=0$?
\end{itemize}
\end{question}
On the one hand, since we no longer require that $Y(\overline{\Omega}, \nu)$ contains the Marcinkiewicz space $M_{Y_X}(\overline{\Omega}, \nu)$, such a space $Y(\overline{\Omega}, \nu)$ can be closer to the optimal space $Y_X(\overline{\Omega}, \nu)$; thus it may appear that this approach leads to weaker results. On the other hand, by dropping the requirement that $Y(\overline{\Omega}, \nu)$ contains the Marcinkiewicz space $M_{Y_X}(\overline{\Omega}, \nu)$, we lose a great deal of information; thus the second approach brings in a lot of technical complications, which we do not face when following the first idea. Furthermore, the results in \hyperref[prel:q3]{Section~\ref*{prel:q3}} are usable in situations not covered by the results in \hyperref[sec:q1q2]{Section~\ref*{sec:q1q2}}. Therefore, \hyperref[sec:q1q2]{Section~\ref*{sec:q1q2}} and \hyperref[prel:q3]{Section~\ref*{prel:q3}} complement each other rather than one extending the other. While pursuing the second approach, we also obtain a useful result of independent interest, namely \hyperref[prop:optimalpair]{Proposition~\ref*{prop:optimalpair}}, which characterizes when the spaces in a Sobolev embedding $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ are mutually optimal.
Before we start searching for answers, we provide some comments on the restrictions on the parameters $d,m,n$ and on the space $X(\Omega)$. First, when $m\geq n$, the rearrangement-invariant setting is not well suited to capturing fine details of corresponding Sobolev embeddings, because the space $W^{m}X(\Omega)$ is embedded into $L^{\infty}(\overline{\Omega}, \nu)$, the smallest rearrangement-invariant space over $(\overline{\Omega}, \nu)$, no matter what $X(\Omega)$ is and $L^{\infty}(\overline{\Omega}, \nu)$ is almost-compactly embedded into any rearrangement-invariant space over $(\overline{\Omega}, \nu)$ that is different from $L^{\infty}(\overline{\Omega}, \nu)$ (\citep[Theorem~5.2]{Sla:12}). In this case, a more suitable class of potential target spaces consists of various function spaces measuring smoothness and/or oscillation (rather than size). Such research, while of great interest, goes beyond the scope of this paper.
Next, the assumption $X(\Omega)\not\subseteq L^{\frac{n}{m},1}(\Omega)$ is actually completely natural. Indeed, if $X(\Omega)\subseteq L^{\frac{n}{m},1}(\Omega)$, then $Y_X(\overline{\Omega}, \nu)=L^\infty(\overline{\Omega}, \nu)$ (\citep[Theorem~3.1]{CPS_OrlLor:20}). Hence $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is always compact whenever $Y(\overline{\Omega}, \nu)\neq L^\infty(\overline{\Omega}, \nu)$ thanks to \citep[Theorem~4.1]{CM:20} combined with \citep[Theorem~5.2]{Sla:12}. Therefore, the questions are of no interest if $X(\Omega)\subseteq L^{\frac{n}{m},1}(\Omega)$.
Finally, the assumption $d\ge n-m$ is technical in nature. It goes back to~\cite{CPS:20,CPS_OrlLor:20}, where it was discovered that a balance condition between $d,m$ and $n$ constitutes a threshold that divides Sobolev embeddings with respect to $d$-Ahlfors measures into two completely different groups. It turns out that the so-called fast-decaying measures (that is, those having $d\ge n-m$) behave naturally, while their counterparts, the slowly-decaying measures ($d<n-m$), bring some rather unexpected technical anomalies, which we prefer to avoid here.
We collect the background material used in this paper and fix the notation in \hyperref[sec:prel]{Section~\ref*{sec:prel}}, which is quite lengthy because we prefer this paper to be as self-contained as possible. Readers familiar with rearrangement-invariant spaces might want to skim over the section.
\section{Background material}\label{sec:prel}
Throughout the paper, the relations $\lq\lq \lesssim "$ and $\lq\lq \gtrsim"$ between two positive expressions mean that the former is bounded by the latter and vice versa, respectively, up to a multiplicative constant independent of all important quantities in question. When both these relations hold at the same time (with possibly different constants), we write $\lq\lq \approx "$.
Throughout this section, let $(R, \mu)$ be a $\sigma$-finite, nonatomic measure space.
We set
\begin{align*}
\mathcal M(R, \mu)&= \{f: \text{$f$ is a $\mu$-measurable function on $R$ with values in $[-\infty,\infty]$}\}\\
\intertext{and}
\mathcal M_+(R, \mu)&= \{f \in \mathcal M(R, \mu)\colon f \geq 0\}.
\end{align*}
The \emph{nonincreasing rearrangement} $f^* \colon [0,\infty) \to [0, \infty ]$ of a function $f\in \mathcal M(R, \mu)$ is
defined as
\begin{equation*}
f^*(t)=\inf\{\lambda\in(0,\infty)\colon\mu\left(\{x\in R\colon|f(x)|>\lambda\}\right)\leq t\},\ t\in[0,\infty).
\end{equation*}
The \emph{maximal nonincreasing rearrangement} $f^{**} \colon (0,\infty) \to [0, \infty ]$ of a function $f\in \mathcal M(R, \mu)$ is
defined as
\begin{equation*}
f^{**}(t)=\frac1t\int_0^ t f^{*}(s)\,{\fam0 d} s,\ t\in(0,\infty).
\end{equation*}
If there is any possibility of misinterpretation, we use the more explicit notations $f^*_\mu$ and $f^{**}_\mu$ instead of $f^*$ and $f^{**}$, respectively, to stress what measure the rearrangements are taken with respect to.
If $|f|\leq |g|$ $\mu$-a.e.\ in $R$, then $f^*\leq g^*$. The operation $f\mapsto f^*$ is neither subadditive nor multiplicative. The lack of subadditivity of the operation
of taking the nonincreasing rearrangement is, up to some extent, compensated by the following fact (\cite[Chapter~2,~(3.10)]{BS}): for every
$t\in(0,\infty)$ and every $f,g\in\mathcal M(R, \mu)$, we have that
\begin{equation*}
\int_0^ t(f +g)^{*}(s)\,{\fam0 d} s\leq\int_0^ tf^{*}(s)\,{\fam0 d} s + \int_0^ tg^{*}(s)\,{\fam0 d} s.
\end{equation*}
This inequality can be also written in the form
\begin{equation*}
(f+g)^{**}\leq f^{**}+g^{**}.
\end{equation*}
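For instance, the failure of subadditivity of $f\mapsto f^*$ mentioned above can be seen already on $(0,2)$ endowed with the one-dimensional Lebesgue measure: for $f=\chi_{(0,1)}$ and $g=\chi_{(1,2)}$ we have
\begin{equation*}
(f+g)^*=\chi_{[0,2)}\quad\text{while}\quad f^*+g^*=2\chi_{[0,1)},
\end{equation*}
so $(f+g)^*(t)>f^*(t)+g^*(t)$ for every $t\in(1,2)$, whereas the averaged inequality above of course holds.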
A fundamental result in the theory of Banach function spaces is the \emph{Hardy lemma} (\citep[Chapter~2, Proposition~3.6]{BS}), which states that, if two nonnegative measurable functions $f,g$ on $(0,\infty)$ satisfy
\begin{align}
\int_0^{t}f(s)\,{\fam0 d} s&\leq \int_0^ tg(s)\,{\fam0 d} s\nonumber
\intertext{for all $t\in(0,\infty)$, then, for every nonnegative, nonincreasing function $h$ on $(0,\infty)$, one has}
\int_0^{\infty}f(s)h(s)\,{\fam0 d} s&\leq \int_0^ {\infty}g(s)h(s)\,{\fam0 d} s.\label{prel:hardy-lemma}
\end{align}
An important fact concerning rearrangements is the \emph{Hardy-Littlewood inequality} (\citep[Chapter~2, Theorem~2.2]{BS}), which asserts that, if $f, g \in\mathcal M(R, \mu)$,
then
\begin{equation}\label{prel:HL}
\int _R |fg| \,{\fam0 d}\mu \leq \int _0^{\infty} f^*(t) g^*(t)\,{\fam0 d} t.
\end{equation}
If $(R, \mu)$ and $(S, \nu)$ are two (possibly different) $\sigma$-finite measure spaces, we say that functions $f\in \mathcal M(R, \mu)$ and $g\in\mathcal M(S,\nu)$ are \emph{equimeasurable}, and we write $f\sim g$, if $f^*=g^*$ on $(0,\infty)$. Note that $f$ and $f^*$ are equimeasurable.
A functional $\varrho\colon \mathcal Mpl (R, \mu) \to [0,\infty]$ is called a \emph{Banach function norm} if, for all $f$, $g$ and $\{f_j\}_{j\in\mathbb{N}}$ in $\mathcal M_+(R, \mu)$, and every $\lambda \geq0$, the following properties hold:
\begin{enumerate}[(P1)]
\item $\varrho(f)=0$ if and only if $f=0$;
$\varrho(\lambda f)= \lambda\varrho(f)$; $\varrho(f+g)\leq \varrho(f)+ \varrho(g)$;
\item $ f \le g$ a.e.\ implies $\varrho(f)\le\varrho(g)$;
\item $ f_j \nearrow f$ a.e.\ implies
$\varrho(f_j) \nearrow \varrho(f)$;
\item $\varrho(\chi_E)<\infty$ \ for every $E\subseteq R$ of finite measure;
\item if $E\subseteq R$ is of finite measure, then $\int_{E} f\,{\fam0 d}\mu \le C_E
\varrho(f)$, where $C_E$ is a positive constant possibly depending on $E$ and $\varrho$ but not on $f$.
\end{enumerate}
If, in addition, $\varrho$ satisfies
\begin{itemize}
\item[(P6)] $\varrho(f) = \varrho(g)$ whenever
$f \sim g$,
\end{itemize}
then we say that $\varrho$ is a
\emph{rearrangement-invariant (Banach) function norm}.
If $\varrho$ is a rearrangement-invariant function norm, then the set
\begin{equation*}
X=X({\varrho})=\{f\in\mathcal M(R, \mu)\colon \varrho(|f|)<\infty\}
\end{equation*}
equipped with the norm $\|f\|_X=\varrho(|f|)$, $f\in X$, is called a~\emph{rearrangement-invariant space} (corresponding to the rearrangement-invariant function norm $\varrho$). We also sometimes write $X(R, \mu)$ to stress the underlying measure space. Note that the quantity $\|f\|_{X}$ is actually well defined for every $f\in\mathcal M(R, \mu)$ and
\begin{equation*}
f\in X\quad\Leftrightarrow\quad\|f\|_X<\infty.
\end{equation*}
With any rearrangement-invariant function norm $\varrho$, there is associated another functional, $\varrho'$, defined for $g \in \mathcal M_+(R, \mu)$ as
\begin{equation*}
\varrho'(g)=\sup\left\{\int_{R} fg\,{\fam0 d}\mu\colon f\in\mathcal M_+(R, \mu),\ \varrho(f)\leq 1\right\}.
\end{equation*}
It turns out that $\varrho'$ is also a~rearrangement-invariant function norm, which is called the~\emph{associate norm} of $\varrho$. Moreover, for every rearrangement-invariant function norm $\varrho$ and every $f\in\mathcal Mpl(R, \mu)$, we have (see~\citep[Chapter~1, Theorem~2.9]{BS}) that
\begin{equation*}
\varrho(f)=\sup\left\{\int_{R}fg\,{\fam0 d}\mu\colon g\in\mathcal M_+(R, \mu),\ \varrho'(g)\leq 1\right\}.
\end{equation*}
If $\varrho$ is a~rearrangement-invariant function norm, $X=X({\varrho})$ is the rearrangement-invariant space determined by $\varrho$, and $\varrho'$ is the associate norm of $\varrho$, then the function space $X({\varrho'})$ determined by $\varrho'$ is called the \emph{associate space} of $X$ and is denoted by $X'$. We always have that
\begin{equation}\label{prel:X''}
(X')'=X,
\end{equation}
and we shall write $X''$ instead of $(X')'$. Furthermore, the \emph{H\"older inequality}
\begin{equation}\label{prel:holder}
\int_{R}fg\,{\fam0 d} \mu\leq\|f\|_{X}\|g\|_{X'}
\end{equation}
holds for every $f,g\in \mathcal M(R, \mu)$.
An important corollary of the Hardy--Littlewood inequality~\eqref{prel:HL} is the fact that, if $f\in \mathcal M(R, \mu)$ and $X$ is a~rearrangement-invariant space over $(R, \mu)$, then we in fact have that
\begin{equation}\label{prel:assocnormwithstars}
\|f\|_{X}=\sup\left\{\int_0^{\infty}g^*(t)f^*(t)\,{\fam0 d} t\colon \|g\|_{X'}\leq 1\right\}.
\end{equation}
For every rearrangement-invariant space $X$ over a $\sigma$-finite, nonatomic measure space $(R, \mu)$, there is a~unique rearran\-gement-invariant space $X(0,\mu(R))$ over the interval $(0,\mu(R))$ endowed with the one-dimensional Lebesgue measure such that $\|f\|_X=\|f^*\|_{X(0,\mu(R))}$. This space is called the~\textit{representation space} of $X$. This follows from the Luxemburg representation theorem (see \citep[Chapter~2, Theorem~4.10]{BS}). Throughout this paper, the representation space of a rearrangement-invariant space $X$ will be denoted by $X(0,\mu(R))$. It is worth noting that, when $R=(0,a)$, $a\in(0,\infty]$, and $\mu$ is the Lebesgue measure, then every $X$ over $(R, \mu)$ coincides with its representation space.
If $X$ is a rearrangement-invariant space, we define its \textit{fundamental function}, $\varphi_X$, as
\begin{equation*}
\varphi_X(t)=\|\chi_E\|_X,\ t\in[0,\mu(R)),
\end{equation*}
where $E\subseteq R$ is any set such that $\mu(E)=t$. Property (P6) of rearrangement-invariant function norms guarantees that the fundamental function is well defined. Moreover, we have that
\begin{equation}\label{prel:fundamentalfuncsidentity}
\varphi_X(t)\varphi_{X'}(t)=t\quad\text{for every $t\in[0,\mu(R))$}.
\end{equation}
The fundamental function $\varphi_X$ is a \emph{quasiconcave function on $[0,\mu(R))$}, that is, $\varphi_X(t)=0$ if and only if $t=0$, $\varphi_X$ is nondecreasing on $[0,\mu(R))$, and the function $t\mapsto\frac{\varphi_X(t)}{t}$ is nonincreasing on $(0,\mu(R))$.
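For instance, $\varphi_{L^p}(t)=t^{\frac1{p}}$ for every $p\in[1,\infty)$, while $\varphi_{L^\infty}(t)=1$ for $t\in(0,\mu(R))$; since the associate space of $L^p$ is $L^{p'}$, the identity \eqref{prel:fundamentalfuncsidentity} reduces, for $1<p<\infty$, to the elementary computation
\begin{equation*}
\varphi_{L^p}(t)\,\varphi_{L^{p'}}(t)=t^{\frac1{p}}\,t^{\frac1{p'}}=t,\quad t\in[0,\mu(R)).
\end{equation*}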
There are always the smallest and the largest rearrangement-invariant spaces over $(R, \mu)$ whose fundamental functions are equivalent to a given quasiconcave function $\varphi$ on $[0,\mu(R))$. More precisely, the functionals $\|\cdot\|_{\Lambda_\varphi(R, \mu)}$ and $\|\cdot\|_{M_\varphi(R, \mu)}$ defined as
\begin{equation}\label{prel:defLorentzendpoint}
\|f\|_{\Lambda_\varphi(R, \mu)}=\int_{[0,\mu(R))}f^*(t)\,{\fam0 d}\tilde{\varphi}(t),\ f\in\mathcal M_+(R, \mu),
\end{equation}
where $\tilde{\varphi}$ is the least concave majorant of $\varphi$, which satisfies $\frac{1}{2}\tilde{\varphi}\leq\varphi\leq\tilde{\varphi}$ on $[0,\mu(R))$, and
\begin{equation*}
\|f\|_{M_\varphi(R, \mu)}=\sup_{t\in(0,\mu(R))}f^{**}(t)\varphi(t),\ f\in\mathcal M_+(R, \mu),
\end{equation*}
are rearrangement-invariant function norms, and the corresponding rearrangement-invariant spaces have the following properties. We have that $\varphi_{M_\varphi(R, \mu)}=\varphi$ and $\varphi_{\Lambda_\varphi(R, \mu)}=\tilde{\varphi}$, and
\begin{equation}\label{prel:endpoints}
\Lambda_\varphi(R, \mu)\hookrightarrow X\hookrightarrow M_\varphi(R, \mu)
\end{equation}
whenever $X$ is a rearrangement-invariant space over $(R, \mu)$ whose fundamental function is equivalent to $\varphi$. The integral in \eqref{prel:defLorentzendpoint} is to be interpreted as the Lebesgue-Stieltjes integral. The spaces $M_\varphi(R, \mu)$ and $\Lambda_\varphi(R, \mu)$ are sometimes called a \emph{Marcinkiewicz endpoint space} and a \emph{Lorentz endpoint space}, respectively. For more information on endpoint spaces, we refer the reader to \citep[Chapter~7, Section~10]{FSBook}.
Let $X$ and $Y$ be rearrangement-invariant spaces over the same measure space $(R, \mu)$. We write $X\hookrightarrow Y$ to denote the fact that $X$ is (continuously) embedded into $Y$, that is, there is a positive constant $C$ such that $\|f\|_{Y}\leq C\|f\|_{X}$ for every $f\in\mathcal M(R, \mu)$. Furthermore, we have that (\citep[Chapter~1, Theorem~1.8]{BS})
\begin{align}
X \subseteq Y\quad&\text{if and only if}\quad X \hookrightarrow Y,\nonumber
\intertext{and}
X \hookrightarrow Y\quad&\text{if and only if}\quad Y' \hookrightarrow X'\label{prel:embdual}
\end{align}
with the same embedding constants. We denote by $X=Y$ the fact that $X\hookrightarrow Y$ and $Y\hookrightarrow X$ simultaneously, that is, $X$ and $Y$ are equal in the set-theoretic sense and the norms on them are equivalent to each other.
We say that a function $f\in X$ has \emph{absolutely continuous norm in $X$} if $\lim\limits_{k\to\infty}\|f\chi_{E_k}\|_{X}=0$ whenever $E_k\subseteq R$, $k\in\mathbb{N}$, are measurable sets such that $\lim\limits_{k\to\infty}\chi_{E_k}(x)=0$ for $\mu$-a.e.~$x\in R$. Note that $f$ has absolutely continuous norm in $X$ if and only if $\lim\limits_{a\to0^+}\|f^*\chi_{(0,a)}\|_{X(0,\mu(R))}=0$. We say that a rearrangement-invariant space $X$ has absolutely continuous norm if every function $f\in X$ has absolutely continuous norm in $X$.
Let $X$ and $Y$ be rearrangement-invariant spaces over the same $\sigma$-finite, nonatomic measure space $(R, \mu)$. We say that $X$ is \emph{almost-compactly embedded} into $Y$, and we write $X\overset{*}{\hookrightarrow}Y$, if $\lim\limits_{k\to\infty}\sup\limits_{\|f\|_{X}\leq1}\|f\chi_{E_k}\|_{Y}=0$ whenever $E_k\subseteq R$, $k\in\mathbb{N}$, are measurable sets such that $\lim\limits_{k\to\infty}\chi_{E_k}(x)=0$ for $\mu$-a.e.~$x\in R$. If $X\overset{*}{\hookrightarrow}Y$, then $X\hookrightarrow Y$ (\citep[Theorem~7.11.5]{FSBook}) and (\citep[(3.1)]{F-MMP:10})
\begin{equation}\label{prel:almostcompembnecessary}
\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_X(t)}=0.
\end{equation}
Furthermore,
\begin{align}
X\overset{*}{\hookrightarrow}Y\quad&\text{if and only if}\quad X(0,\mu(R))\overset{*}{\hookrightarrow}Y(0,\mu(R)),\nonumber\\
\intertext{and}
X\overset{*}{\hookrightarrow}Y\quad&\text{if and only if}\quad Y'\overset{*}{\hookrightarrow} X'.\label{prel:almostcompactdual}
\end{align}
We refer the reader to \citep{Sla:12} for more information on almost-compact embeddings.
Textbook examples of rearrangement-invariant spaces are the standard Lebesgue spaces. The functional $\|\cdot\|_{L^p(R, \mu)}$ is defined as
\begin{equation*}
\|f\|_{L^p(R, \mu)}=\begin{cases}
\left(\int_Rf(x)^p\,{\fam0 d}\mu(x)\right)^\frac1{p}\quad&\text{if $p\in(0,\infty)$},\\
\esssup\limits_{x\in R}f(x)\quad&\text{if $p=\infty$},
\end{cases}
\end{equation*}
for $f\in\mathcal M_+(R, \mu)$. The functional $\|\cdot\|_{L^p(R, \mu)}$ is a rearrangement-invariant function norm if and only if $1\leq p\leq\infty$.
An important generalization of the Lebesgue functionals is constituted by the two-parameter Lorentz functionals. Let $0<p,q\leq\infty$. We define the functional $\|\cdot\|_{L^{p,q}(R, \mu)}$ as
\begin{equation*}
\|f\|_{L^{p,q}(R, \mu)}=\left\|t^{\frac1{p}-\frac1{q}}f^*(t)\right\|_{L^q(0,\mu(R))}
\end{equation*}
for $f \in\mathcal M_+(R, \mu)$. Here, and in what follows, we use the convention that $\frac1{\infty}=0$. The functional $\|\cdot\|_{L^{p,q}(R, \mu)}$ is equivalent to a~rearrangement-invariant function norm if and only if $1<p<\infty$ and $1\leq q\leq\infty$, or $p=q=1$, or $p=q=\infty$. In that case, the corresponding rearrangement-invariant space is called a \emph{Lorentz space} and
\begin{equation}\label{prel:lorentzass}
(L^{p,q})'(R, \mu)=L^{p',q'}(R, \mu).
\end{equation}
Note that $L^{p,p}(R, \mu)=L^p(R, \mu)$ with the same norms.
We say that a continuous function $b\colon(0,a]\to(0,\infty)$, where $a\in(0,\infty)$, is \emph{slowly varying} on $(0,a]$ if for every $\varepsilon>0$ there is $t_0\in(0,a)$ such that the functions $t\mapsto t^\varepsilon b(t)$ and $t\mapsto t^{-\varepsilon}b(t)$ are nondecreasing and nonincreasing, respectively, on the interval $(0,t_0)$. If $b$ is a slowly-varying function on $(0,a]$, so is $b^\alpha$ for any $\alpha\in\mathbb{R}$.
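For example, the functions
\begin{equation*}
b(t)=\left(\log\frac{ea}{t}\right)^{\alpha}\quad\text{and}\quad b(t)=\left(\log\log\frac{e^ea}{t}\right)^{\beta},\quad\alpha,\beta\in\mathbb{R},
\end{equation*}
are slowly varying on $(0,a]$, whereas no power function $t\mapsto t^{\gamma}$ with $\gamma\neq0$ is slowly varying.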
Assume now that $\mu(R)<\infty$. Let $0<p,q\le\infty$ and $b$ be a slowly-varying function on $(0,\mu(R)]$. We define the functional $\|\cdot\|_{L^{p,q;b}(R, \mu)}$ as
\begin{equation*}
\|f\|_{L^{p,q;b}(R, \mu)}=
\left\|t^{\frac{1}{p}-\frac{1}{q}}b(t)f^*(t)\right\|_{L^q(0,\mu(R))}
\end{equation*}
for $f\in\mathcal M_+(R, \mu)$. The functional $\|\cdot\|_{L^{p,q;b}(R, \mu)}$ is equivalent to a rearrangement-invariant function norm if and only if $q\in[1,\infty]$ and one of the following conditions holds (\citep[Theorem~3.35]{P}):
\begin{equation*}
\begin{cases}
1<p<\infty;\\
p=q=1,\ \text{$b$ is equivalent to a nonincreasing function on $(0,\mu(R)]$};\\
p=\infty,\ q<\infty,\ \int_0^{\mu(R)}t^{-1}b^q(t)\,{\fam0 d} t<\infty;\\
p=q=\infty,\ b\in L^\infty(0,\mu(R)).
\end{cases}
\end{equation*}
If this is the case, the corresponding rearrangement-invariant space is called a \emph{Lorentz--Karamata space}. Note that Lebesgue and Lorentz spaces are instances of Lorentz--Karamata spaces ($b\equiv1$). Another
important subclass of Lorentz-Karamata spaces is that of (generalized) \emph{Lorentz--Zygmund spaces} $L^{p,q;\alpha,\beta}(R, \mu)$, which, in the language of Lorentz--Karamata spaces, correspond to
$b(t)=\ell(t)^\alpha\ell\ell(t)^\beta$, $\alpha,\beta\in\mathbb{R}$, where $\ell(t)=\log\left(e\frac{\mu(R)}{t}\right)$ and $\ell\ell(t)=\log\left(e\log\left(e\frac{\mu(R)}{t}\right)\right)$, $t\in(0,\mu(R)]$. If $\beta=0$, we usually
write $L^{p,q;\alpha}(R, \mu)$ instead of $L^{p,q;\alpha,0}(R, \mu)$. We will also occasionally need Lorentz--Zygmund spaces with more than two levels of logarithms, and we define such spaces in the obvious way. The class of Lorentz--Zygmund
spaces (more generally, that of Lorentz--Karamata spaces) encompasses not only Lebesgue spaces and Lorentz spaces, but also all types of exponential and logarithmic classes, and also the spaces discovered independently by Maz'ya (in
a~somewhat implicit form involving capacitary estimates~\citep[pp.~105 and~109]{Ma:11}), Hansson~\citep{Ha} and Brezis--Wainger~\citep{BW}, who used them for describing the sharp target space in a limiting Sobolev embedding (the
spaces can be also traced in the works of Brudnyi~\citep{B} and, in a more general setting, Cwikel and Pustylnik~\citep{CP}). For more information on Lorentz--Karamata and Lorentz--Zygmund spaces, we refer the interested reader to \citep{OP,P}.
A large number of rearrangement-invariant spaces (including the Lorentz--Karamata ones) are actually just special instances of the so-called \emph{classical Lorentz spaces} $\Lambda^q(v)$, $q\in[1, \infty]$, for suitable choices of the weight function $v$. A \emph{weight} on $(0,\mu(R))$ is any nonnegative, measurable function on $(0,\mu(R))$ that is positive on the interval $(0,\delta)$ for some $\delta\in(0,\mu(R))$ and such that $V(t)<\infty$ for every $t\in(0,\mu(R))$, where $V(t)=\int_0^tv(s)\,{\fam0 d} s$, $t\in(0,\mu(R))$. If $v$ is a weight on $(0,\mu(R))$, we define the functional $\|\cdot\|_{\Lambda^q(v)}$ as
\begin{equation*}
\|f\|_{\Lambda^q(v)}=\begin{cases}
\left(\int_0^{\mu(R)}f^*(t)^qv(t)\,{\fam0 d} t\right)^\frac1{q}\quad&\text{if $q\in[1,\infty)$},\\
\esssup\limits_{t\in(0,\mu(R))}f^*(t)v(t)\quad&\text{if $q=\infty$},
\end{cases}
\end{equation*}
for $f\in\mathcal M_+(R, \mu)$. The functional $\|\cdot\|_{\Lambda^q(v)}$ is equivalent to a rearrangement-invariant function norm if and only if
\begin{equation}\label{prel:lambdari}
\begin{cases}
\frac1{t}\int_0^tv(u)\,{\fam0 d} u\lesssim\frac1{s}\int_0^sv(u)\,{\fam0 d} u\quad\text{for every $0<s< t<\mu(R)$}\quad&\text{if $q=1$},\\
\int_0^ts^{q'}v(s)V^{-q'}(s)\,{\fam0 d} s\lesssim t^{q'}V^{1-q'}(t)\quad\text{for every $0<t<\mu(R)$}\quad&\text{if $q\in(1,\infty)$},\\
\text{$\tilde{v}$ is a finite function}\quad\text{and}\quad\sup\limits_{t\in(0,\mu(R))}\tilde{v}(t)\frac1{t}\int_0^t\frac1{\tilde{v}(s)}\,{\fam0 d} s<\infty\quad&\text{if $q=\infty$},
\end{cases}
\end{equation}
where $\tilde{v}(t)=\esssup\limits_{s\in(0,t)}v(s)$, $t\in(0,\mu(R))$, that is, $\tilde{v}$ is the least nondecreasing (essential) majorant of $v$. We refer the reader to \citep{S:90} for $q\in(1,\infty)$, to \citep{CGS:96} for $q=1$ and to \citep{GS:14} for $q\in(1,\infty]$. The multiplicative constants in \eqref{prel:lambdari} may depend only on $q$ and $v$.
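To give the simplest example, for $p\in(1,\infty)$ and $q\in[1,\infty)$ the choice $v(t)=t^{\frac{q}{p}-1}$, $t\in(0,\mu(R))$, gives $V(t)=\frac{p}{q}\,t^{\frac{q}{p}}$ and
\begin{equation*}
\|f\|_{\Lambda^q(v)}=\left(\int_0^{\mu(R)}f^*(t)^{q}\,t^{\frac{q}{p}-1}\,{\fam0 d} t\right)^{\frac1{q}}=\left\|t^{\frac1{p}-\frac1{q}}f^*(t)\right\|_{L^q(0,\mu(R))}=\|f\|_{L^{p,q}(R, \mu)},
\end{equation*}
so the two-parameter Lorentz spaces arise as classical Lorentz spaces; more generally, the choice $v(t)=t^{\frac{q}{p}-1}b(t)^{q}$ with a slowly varying $b$ recovers the Lorentz--Karamata functional $\|\cdot\|_{L^{p,q;b}(R, \mu)}$ when $q<\infty$.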
Finally, we define \emph{Sobolev-type spaces built upon rearrangement-invariant spaces}. Let $\Omega$ be a bounded Lipschitz domain (e.g.~\citep[Chapter~4, 4.9]{AF:03}) in $\R^n$, $n \geq 2$. Given $m \in \mathbb{N}$ and a rearrangement-invariant space $X(\Omega)$ over $\Omega$ endowed with the Lebesgue measure, the $m$-th order Sobolev-type space $W^m X(\Omega)$ is defined as
\begin{equation*}
W^m X(\Omega) = \big\{u\colon \hbox{$u$ is $m$-times weakly differentiable in $\Omega$ and $|\nabla ^k u| \in X(\Omega )$ for $k=0, \dots , m$}\big\}.
\end{equation*}
Here, $\nabla^k u$ denotes the vector of all $k$-th order weak derivatives of $u$ and $\nabla ^0 u=u$. The Sobolev-type space $W^m X(\Omega)$ equipped with the norm
\begin{equation*}
\|u\|_{W^m X(\Omega )} = \sum _{k=0}^{m} \| \, |\nabla^k u| \, \|_{X(\Omega )},\ u \in W^m X(\Omega),
\end{equation*}
is a Banach space. When $X(\Omega) = L^p (\Omega )$, $p \in [1,\infty]$, one has $W^mX(\Omega) = W^{m,p}(\Omega )$ (the standard Sobolev space of $m$-th order on $\Omega$).
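To fix ideas in the simplest case, for $m=1$ the norm reads
\begin{equation*}
\|u\|_{W^1 X(\Omega)}=\|u\|_{X(\Omega)}+\big\|\,|\nabla u|\,\big\|_{X(\Omega)},\quad u\in W^1 X(\Omega),
\end{equation*}
and choosing $X(\Omega)=L^{p,q;\alpha}(\Omega)$ produces the Sobolev-type spaces built upon Lorentz--Zygmund spaces that appear in the theorems below.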
We say that $W^mX(\Omega)$ is embedded into a rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ on $\overline{\Omega}$ endowed with a $d$-Ahlfors measure $\nu$, and we write $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$, if there is a bounded linear operator $\Tr\colon W^mX(\Omega)\to Y(\overline{\Omega}, \nu)$ such that $\Tr u=u$ for every $u\in W^mX(\Omega)\cap\mathcal C(\overline{\Omega})$. If the operator is compact, we say that the embedding is compact.
\section{On enlarging Marcinkiewicz spaces}\label{sec:q1q2}
We begin with a proposition concerning concave functions, which, while elementary, is of independent interest in the theory of Marcinkiewicz spaces.
\begin{proposition}\label{prop:piecewiseconcavefunction}
Let $\varphi\colon[0,a]\rightarrow[0,\infty)$, where $a\in(0,\infty)$, be a nondecreasing, concave function vanishing only at the origin. Assume that
\begin{equation}\label{thm:piecewiseconcavefunction:assumption1}
\lim\limits_{t\to0^+}\frac{t}{\varphi(t)}=0
\end{equation}
and
\begin{equation}\label{thm:piecewiseconcavefunction:assumption2}
\lim\limits_{t\to0^+}\varphi(t)=0.
\end{equation}
There exists a nondecreasing, concave function $\psi\colon[0,a]\rightarrow[0,\infty)$ such that $\psi\leq\varphi$ on $[0,a]$, $\psi$ vanishes only at the origin, $\liminf\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=0$ and $\limsup\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=1$.
\end{proposition}
\begin{proof}
We shall find, by induction, two sequences $\{t_k\}_{k=1}^\infty$, $\{\tau_k\}_{k=1}^\infty$ of positive numbers converging to $0$ such that, for each $k\in\mathbb{N}$,
\begin{align}
&t_{k+1}<\tau_{k}<t_k,\nonumber\\
&\frac{\varphi(\tau_k)-\varphi(t_{k+1})}{\tau_k-t_{k+1}}>2^{k+1}\frac{\varphi(t_k)-\varphi(t_{k+1})}{t_k-t_{k+1}},\label{thm:piecewiseconcavefunction:property1}\\
&\varphi(t_{k+1})\leq2^{-k-1}\varphi(\tau_k)\label{thm:piecewiseconcavefunction:property2}.
\end{align}
Set $t_1=a$ and assume that we have already found $t_1,\dots,t_k$ and $\tau_1,\dots,\tau_{k-1}$ for some $k\in\mathbb{N}$. By \eqref{thm:piecewiseconcavefunction:assumption1} there exists $\tau_k\in(0,\frac{t_k}{2})$ such that $\frac{\varphi(\tau_k)}{\tau_k}>2^{k+1}\frac{\varphi(t_k)}{t_k}$. Since $\lim\limits_{t\to0^+}\frac{\varphi(\tau_k)-\varphi(t)}{\tau_k-t} = \frac{\varphi(\tau_k)}{\tau_k}$ by \eqref{thm:piecewiseconcavefunction:assumption2}, there exists $t_{k+1}\in(0,\tau_k)$ such that $\frac{\varphi(\tau_k) - \varphi(t_{k+1})}{\tau_k - t_{k+1}}>2^{k+1}\frac{\varphi(t_k)}{t_k}$. Moreover, we can find $t_{k+1}$ in such a way that $\varphi(t_{k+1})\leq2^{-k-1}\varphi(\tau_k)$ thanks to \eqref{thm:piecewiseconcavefunction:assumption2} and the fact that $\varphi(\tau_k)\neq0$.
Clearly $t_{k+1}<\tau_k<t_k$ and $t_{k+1}<\frac{a}{2^{k}}$, so that both sequences indeed converge to $0$. Since $\varphi$ is concave, we have that $\frac{\varphi(t_k)}{t_k}\geq\frac{\varphi(t_k)-\varphi(t_{k+1})}{t_k-t_{k+1}}$. Hence $\frac{\varphi(\tau_k) - \varphi(t_{k+1})}{\tau_k - t_{k+1}}>2^{k+1}\frac{\varphi(t_k)-\varphi(t_{k+1})}{t_k-t_{k+1}}$. This completes the inductive step.
We define the function $\psi\colon[0,a]\rightarrow[0,\infty)$ as
\begin{equation*}
\psi(t)=\begin{cases}\varphi(t_{k+1})+\frac{\varphi(t_k)-\varphi(t_{k+1})}{t_k-t_{k+1}}(t-t_{k+1}),\ &t\in(t_{k+1},t_k],\ k\in\mathbb{N},\\
0,\ &t=0.
\end{cases}
\end{equation*}
Note that, since $\varphi$ is concave and nondecreasing, so is $\psi$. Clearly, $\psi(t_k)=\varphi(t_k)$ for each $k\in\mathbb{N}$. Furthermore, since $\varphi$ is concave and vanishes only at the origin, we have $0<\psi\leq\varphi$ on $(0,a]$. Hence $\limsup\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=1$.
Finally, for each $k\in\mathbb{N}$,
\begin{align*}
\frac{\psi(\tau_k)}{\varphi(\tau_k)}&=\frac{\varphi(t_{k+1})+\frac{\varphi(t_k)-\varphi(t_{k+1})}{t_k-t_{k+1}}(\tau_k-t_{k+1})}{\varphi(\tau_k)} \leq \frac{\varphi(t_{k+1})+2^{-k-1}\frac{\varphi(\tau_k)-\varphi(t_{k+1})}{\tau_k-t_{k+1}}(\tau_k-t_{k+1})}{\varphi(\tau_k)}\\
&=\frac{(1-2^{-k-1})\varphi(t_{k+1}) + 2^{-k-1}\varphi(\tau_k)}{\varphi(\tau_k)}\leq\frac{2^{-k}\varphi(\tau_k)}{\varphi(\tau_k)}=2^{-k},
\end{align*}
where the first and the second inequalities follow from \eqref{thm:piecewiseconcavefunction:property1} and \eqref{thm:piecewiseconcavefunction:property2}, respectively. Hence $\liminf\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=0$.
\end{proof}
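To illustrate \hyperref[prop:piecewiseconcavefunction]{Proposition~\ref*{prop:piecewiseconcavefunction}} on a concrete function, one may take, for instance, $\varphi(t)=\sqrt{t}$ on $[0,a]$. Both assumptions are clearly satisfied, since
\begin{equation*}
\lim\limits_{t\to0^+}\frac{t}{\varphi(t)}=\lim\limits_{t\to0^+}\sqrt{t}=0=\lim\limits_{t\to0^+}\varphi(t),
\end{equation*}
and the construction yields a piecewise linear, nondecreasing, concave minorant $\psi\leq\varphi$ that coincides with $\varphi$ along a sequence $t_k\to0^+$ while satisfying $\psi(\tau_k)\leq2^{-k}\varphi(\tau_k)$ along a second sequence $\tau_k\to0^+$. In the language of Marcinkiewicz spaces, this produces, over any finite nonatomic measure space, a space $M_\psi(R, \mu)$ strictly larger than $M_\varphi(R, \mu)$ whose fundamental function nevertheless agrees with $\varphi$ along $\{t_k\}$.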
We are now in a position to give a complete answer to \hyperref[q:noncompactMarc]{Question~\ref*{q:noncompactMarc}}. Note that $X(\Omega)\not\subseteq L^{\frac{n}{m},1}(\Omega)$ is equivalent to $t^{\frac{m}{n}-1}\notin X'(0,|\Omega|)$. This follows easily from \eqref{prel:embdual} combined with \eqref{prel:lorentzass}. The latter condition is usually easier to verify, and so we use it in the statement of the following theorem.
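To see what this hypothesis amounts to in the most classical situation, consider $X(\Omega)=L^p(\Omega)$, $p\in[1,\infty]$, so that $X'(0,|\Omega|)=L^{p'}(0,|\Omega|)$. Since $m<n$, we have $t^{\frac{m}{n}-1}\in L^{p'}(0,|\Omega|)$ if and only if $\left(\frac{m}{n}-1\right)p'>-1$, that is, if and only if $p>\frac{n}{m}$ (the case $p'=\infty$ is excluded because $t^{\frac{m}{n}-1}$ is unbounded near zero). Hence the assumption $t^{\frac{m}{n}-1}\notin X'(0,|\Omega|)$ holds for $X(\Omega)=L^p(\Omega)$ precisely when $p\leq\frac{n}{m}$, i.e., in the subcritical and critical regimes, in accordance with the equivalent condition $L^p(\Omega)\not\subseteq L^{\frac{n}{m},1}(\Omega)$.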
\begin{theorem}\label{T:answer-to-Q1}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. Assume that $X(\Omega)$ is a rearrangement-invariant space such that $t^{\frac{m}{n}-1}\notin X'(0,|\Omega|)$. Furthermore, if $d=n-m$, we assume that $X(\Omega)\neq L^1(\Omega)$. There is a rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ such that $M_{Y_X}(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$ and the embedding $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is not compact.
\end{theorem}
\begin{proof}
Set $\varphi=\varphi_{Y_X}$, where $\varphi_{Y_X}$ is the fundamental function of $Y_X(\overline{\Omega}, \nu)$. Without loss of generality, we may assume that $\varphi$ is concave on $[0,\nu(\overline{\Omega})]$ (see~\citep[Chapter~2, Proposition~5.11]{BS}).
The assumption $t^{\frac{m}{n}-1}\notin X'(0,|\Omega|)$ is equivalent to $X(\Omega)\not\subseteq L^{\frac{n}{m},1}(\Omega)$, which ensures that $Y_X(\overline{\Omega}, \nu)\neq L^\infty(\overline{\Omega}, \nu)$. Hence $\varphi$
satisfies \eqref{thm:piecewiseconcavefunction:assumption2} (\citep[Theorem~5.2]{Sla:12}). Next, we claim that $Y_X(\overline{\Omega}, \nu)\neq L^1(\overline{\Omega}, \nu)$. Indeed, if $d\in(n-m,n]$, then this follows from
$Y_X(\overline{\Omega}, \nu)\subseteq Y_{L^1}(\overline{\Omega}, \nu)=L^{\frac{d}{n-m},1}(\overline{\Omega}, \nu)\subsetneq L^1(\overline{\Omega}, \nu)$ (\citep[Theorem~3.1]{CPS_OrlLor:20}). If $d=n-m$, then $X(\Omega)\neq L^1(\Omega)$ is assumed. Therefore, it follows from \citep[Theorem~5.3]{Sla:12} that $X(\Omega)\stackrel{*}{\hookrightarrow} L^1(\Omega)$. That, however, in turn implies that $Y_X(\overline{\Omega}, \nu)\neq L^1(\overline{\Omega}, \nu)$ thanks to \citep[Proposition~3.5]{CaMi:19} combined with
\citep[Theorem~3.6]{CM:20} (see also \citep[Theorem~4.6]{Sla:15}). Consequently, applying \citep[Theorem~5.3]{Sla:12} once again, we obtain that $\varphi$ satisfies
also \eqref{thm:piecewiseconcavefunction:assumption1}.
By virtue of \hyperref[prop:piecewiseconcavefunction]{Proposition~\ref*{prop:piecewiseconcavefunction}}, there is a nondecreasing, concave function $\psi\colon[0,\nu(\overline{\Omega})]\rightarrow[0,\infty)$ such that $\psi\leq\varphi$ on $[0,\nu(\overline{\Omega})]$, $\liminf\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=0$ and $\limsup\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=1$. We have that $M_{Y_X}(\overline{\Omega}, \nu)\subsetneq M_\psi(\overline{\Omega}, \nu)$ because $\psi\leq\varphi$ on $[0,\nu(\overline{\Omega})]$ and $\liminf\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=0$. Furthermore, $Y_X(\overline{\Omega}, \nu)$ is not almost compactly embedded into $M_\psi(\overline{\Omega}, \nu)$. This follows from the fact that $\limsup\limits_{t\to0^+}\frac{\psi(t)}{\varphi(t)}=1$ and \eqref{prel:almostcompembnecessary}. Therefore, $W^mX(\Omega)\hookrightarrow M_\psi(\overline{\Omega}, \nu)$ is not compact thanks to \citep[Theorem~4.1]{CM:20}. Hence $Y(\overline{\Omega}, \nu)=M_\psi(\overline{\Omega}, \nu)$ has the desired properties.
\end{proof}
\begin{remark}
We would like to point out that the assumption of Theorem~\ref{T:answer-to-Q1} that $X(\Omega)\neq L^1(\Omega)$ when $d=n-m$ brings no restriction at all, for if
$X(\Omega)=L^1(\Omega)$ and $d=n-m$, then
\begin{equation*}
Y_X(\overline{\Omega}, \nu)=L^{\frac{d}{n-m},1}(\overline{\Omega}, \nu)=L^1(\overline{\Omega}, \nu)=M_{L^1}(\overline{\Omega}, \nu).
\end{equation*}
The first identity follows from~\citep[Theorem~3.1]{CPS_OrlLor:20}, and the remaining ones are well known. Since
$L^1(\overline{\Omega}, \nu)$ is the largest rearrangement-invariant space over $(\overline{\Omega}, \nu)$ (\citep[Chapter~2, Corollary~6.7]{BS}), there is no rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ bigger than $Y_X(\overline{\Omega}, \nu)$; thus the answer to \hyperref[q:noncompactMarc]{Question~\ref*{q:noncompactMarc}} is negative in this case.
\end{remark}
Having answered \hyperref[q:noncompactMarc]{Question~\ref*{q:noncompactMarc}}, we now turn our attention to \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}}. We start with some observations.
\begin{proposition}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. If $Y(\overline{\Omega}, \nu)$ is a rearrangement-invariant space satisfying properties required by \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}}, then:
\begin{itemize}
\item $Y(\overline{\Omega}, \nu)$ is not a Marcinkiewicz space;
\item $M_{Y_X}(\overline{\Omega}, \nu)\not\hookrightarrow \Lambda_{Y}(\overline{\Omega}, \nu)$, where $\Lambda_{Y}(\overline{\Omega}, \nu)$ is the Lorentz endpoint space with the same fundamental function as $Y(\overline{\Omega}, \nu)$;
\item $Y(\overline{\Omega}, \nu)$ does not have absolutely continuous norm.
\end{itemize}
\end{proposition}
\begin{proof}
If $Y(\overline{\Omega}, \nu)$ were a Marcinkiewicz space, we would get $M_{Y_X}(\overline{\Omega}, \nu)\stackrel{*}{\hookrightarrow}Y(\overline{\Omega}, \nu)$ by~\cite[Corollary~7.5]{Sla:12} and, consequently, $W^{m}X(\Omega)\hookrightarrow\hookrightarrow Y(\overline{\Omega}, \nu)$ due to~\cite[Theorem~4.1]{CM:20}.
If $M_{Y_X}(\overline{\Omega}, \nu)\hookrightarrow \Lambda_{Y}(\overline{\Omega}, \nu)$, then \cite[Corollary~7.3]{Sla:12} would imply that $M_{Y_X}(\overline{\Omega}, \nu)\stackrel{*}{\hookrightarrow}\Lambda_{Y}(\overline{\Omega}, \nu)$. If $Y(\overline{\Omega}, \nu)$ had absolutely continuous norm, then $M_{Y_X}(\overline{\Omega}, \nu)\hookrightarrow Y(\overline{\Omega}, \nu)$ coupled with~\cite[Theorem~7.2]{Sla:12} would imply that $M_{Y_X}(\overline{\Omega}, \nu)\stackrel{*}{\hookrightarrow}\Lambda_{Y}(\overline{\Omega}, \nu)$. Either way, we would obtain from $M_{Y_X}(\overline{\Omega}, \nu)\stackrel{*}{\hookrightarrow}\Lambda_{Y}(\overline{\Omega}, \nu)$ that $W^{m}X(\Omega)\hookrightarrow\hookrightarrow\Lambda_{Y}(\overline{\Omega}, \nu)\hookrightarrow Y(\overline{\Omega}, \nu)$ thanks to \cite[Theorem~4.1]{CM:20} and \eqref{prel:endpoints}. In each case, the embedding $W^{m}X(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ would be compact, which contradicts the properties required by \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}}.
\end{proof}
While the preceding proposition limits what a potential space $Y(\overline{\Omega}, \nu)$ sought in \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}} can be, there is also a natural restriction on the space $Y_X(\overline{\Omega}, \nu)$ should the answer to \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}} be positive. It turns out that $Y_X(\overline{\Omega}, \nu)$ cannot be a Lorentz endpoint space. Indeed, suppose that $Y_X(\overline{\Omega}, \nu)=\Lambda_{Y_X}(\overline{\Omega}, \nu)$ and that $Y(\overline{\Omega}, \nu)$ is any rearrangement-invariant space satisfying $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{Y_X}(t)}=0$. Then $\Lambda_{Y_X}(\overline{\Omega}, \nu)\overset{*}{\hookrightarrow}\Lambda_{Y}(\overline{\Omega}, \nu)\hookrightarrow Y(\overline{\Omega}, \nu)$ thanks to \cite[Corollary~7.5]{Sla:12} and \eqref{prel:endpoints}; hence $W^{m}X(\Omega)\hookrightarrow\hookrightarrow\Lambda_{Y}(\overline{\Omega}, \nu)\hookrightarrow Y(\overline{\Omega}, \nu)$ by virtue of \cite[Theorem~4.1]{CM:20}.
Nevertheless, if $Y_X(\overline{\Omega}, \nu)=M_{Y_X}(\overline{\Omega}, \nu)$ is a Marcinkiewicz space, we can (at least for many customary choices of $X(\Omega)$) explicitly construct a space $Y(\overline{\Omega}, \nu)$ giving an affirmative answer to \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}}. Before doing that, we prove a general theorem, which provides a guideline on how to construct such spaces.
\begin{theorem}\label{thm:partanswto:noncompactfundamentallybigger1}
Assume that $\varphi\colon[0,a]\rightarrow[0,\infty)$, where $a\in(0,\infty)$, is a quasiconcave function satisfying
\begin{equation}
\frac1{t}\int_0^t\frac1{\varphi(s)}\,{\fam0 d} s\lesssim\frac1{\varphi(t)}\quad\text{for every $t\in(0,a)$}.\label{thm:partanswto:noncompactfundamentallybigger1:conB}
\end{equation}
Let $\tau$ be a positive, measurable function on $(0,a]$ such that
\begin{align}
\int_0^t\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s&\lesssim\varphi(t)\quad\text{for every $t\in(0,a)$},\label{thm:partanswto:noncompactfundamentallybigger1:conintphiovertau}\\
\int_t^a\frac1{\tau(s)}\,{\fam0 d} s&<\infty\quad\text{for every $t\in(0,a)$},\label{thm:partanswto:noncompactfundamentallybigger1:confunbfinite}\\
\int_0^a\frac1{\tau(s)}\,{\fam0 d} s&=\infty.\label{thm:partanswto:noncompactfundamentallybigger1:confunbblowsup}
\end{align}
Furthermore, assume that the function $b\colon(0,a]\to(0,\infty)$ defined as
\begin{equation*}
b(t)=1+\int_t^a\frac1{\tau(s)}\,{\fam0 d} s,\ t\in(0,a],
\end{equation*}
is slowly varying on $(0,a]$, and that
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:interepreoftauoverphi}
\frac{\tau(t)}{\varphi(t)}\approx\int_0^t\xi(s)\,{\fam0 d} s,\quad\text{for every $t\in(0,a)$},
\end{equation}
where $\xi$ is some positive, continuous function on $(0,a]$ that is equivalent to a nonincreasing function. Finally, assume that
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:supremumopbounded}
\frac1{\xi(t)}\int_0^t\xi(s)b(s)\,{\fam0 d} s\lesssim\int_0^tb(s)\,{\fam0 d} s\quad\text{for every $t\in(0,a)$}.
\end{equation}
Let $(R, \mu)$ be a nonatomic measure space such that $\mu(R)=a$. Set
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:normY}
\varrho_Y(f)=\sup\limits_{t\in(0,a)}\frac1{b(t)}\int_t^af^*(s)\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s,\ f\in\mathcal Mpl(R, \mu).
\end{equation}
The functional $\varrho_Y$ is equivalent to a rearrangement-invariant function norm on $(R, \mu)$. Let $Y(R, \mu)$ denote the corresponding rearrangement-invariant space. We have that $M_\varphi(R, \mu)\hookrightarrow Y(R, \mu)$ and $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi(t)}=0$, but the embedding is not almost compact.
\end{theorem}
\begin{proof}
We start off by showing that the functional $\varrho_Y$, defined by \eqref{thm:partanswto:noncompactfundamentallybigger1:normY}, is equivalent to a rearrangement-invariant function norm. We claim that $\varrho_Y$ is equivalent to a functional $\sigma$ defined as
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:defsigma}
\sigma(f)=\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af^*(t)\frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^t\xi(s)\sup_{u\in[s,a)}\frac1{\xi(u)}g^*(u)\,{\fam0 d} s\,{\fam0 d} t,\ f\in\mathcal Mpl(R, \mu),
\end{equation}
and that $\sigma$ is a rearrangement-invariant function norm. As for the equivalence, it follows from \citep[Theorem~3.33]{P} and the Hardy--Littlewood inequality \eqref{prel:HL} that
\begin{equation*}
\varrho_Y(f)=\left\|\int_t^af^*(s)\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s\right\|_{L^{\infty,\infty;b^{-1}}(0,a)}\approx\sup\limits_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^ag^*(t)\int_t^af^*(s)\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s\,{\fam0 d} t,
\end{equation*}
where $b^{-1}=\frac1{b}$. Hence, thanks to Fubini's theorem and \eqref{thm:partanswto:noncompactfundamentallybigger1:interepreoftauoverphi},
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:dualexprrho}
\begin{aligned}
\varrho_Y(f)&\approx\sup\limits_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af^*(s)\frac{\varphi(s)}{\tau(s)}\int_0^sg^*(t)\,{\fam0 d} t\,{\fam0 d} s\\
&\approx\sup\limits_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af^*(s)\frac1{\int_0^s\xi(u)\,{\fam0 d} u}\int_0^sg^*(t)\,{\fam0 d} t\,{\fam0 d} s.
\end{aligned}
\end{equation}
It plainly follows from \eqref{thm:partanswto:noncompactfundamentallybigger1:defsigma} and \eqref{thm:partanswto:noncompactfundamentallybigger1:dualexprrho} that
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:rholesssigma}
\varrho_Y(f)\lesssim\sigma(f).
\end{equation}
As for the opposite inequality, we need to introduce the supremum operator $T_\xi$, defined, for every fixed $h\in\mathcal M(0,a)$, as
\begin{equation*}
T_\xi h(t)=\xi(t)\sup_{u\in[t,a)}\frac1{\xi(u)}h^*(u),\ t\in(0,a).
\end{equation*}
Note that, for each $h\in\mathcal M(0,a)$, $T_\xi h$ is equivalent to a nonincreasing function on $(0,a)$. Furthermore, observe that assumption \eqref{thm:partanswto:noncompactfundamentallybigger1:supremumopbounded} guarantees that the operator is bounded on the rearrangement-invariant space $L^{1,1;b}(0,a)$ (cf.~\citep[Theorem~3.35]{P}) owing to \citep[Theorem~3.2]{GOP}. Hence, thanks to \eqref{thm:partanswto:noncompactfundamentallybigger1:defsigma}, Fubini's theorem, \citep[Theorem~3.32]{P} combined with H\"older's inequality \eqref{prel:holder}, and \eqref{thm:partanswto:noncompactfundamentallybigger1:interepreoftauoverphi},
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:sigmalessrho}
\begin{aligned}
\sigma(f)&=\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af^*(s)\frac1{\int_0^s\xi(u)\,{\fam0 d} u}\int_0^sT_\xi g(t)\,{\fam0 d} t\,{\fam0 d} s\\
&=\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^aT_\xi g(t)\int_t^af^*(s)\frac1{\int_0^s\xi(u)\,{\fam0 d} u}\,{\fam0 d} s\,{\fam0 d} t\\
&\approx\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^aT_\xi g(t)\int_t^af^*(s)\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s\,{\fam0 d} t\\
&\lesssim\left\|\int_t^af^*(s)\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s\right\|_{L^{\infty,\infty;b^{-1}}(0,a)}\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\|T_\xi g\|_{L^{1,1;b}(0,a)}\\
&\lesssim\varrho_Y(f).
\end{aligned}
\end{equation}
Combining \eqref{thm:partanswto:noncompactfundamentallybigger1:rholesssigma} and \eqref{thm:partanswto:noncompactfundamentallybigger1:sigmalessrho}, we obtain the desired equivalence of $\varrho_Y$ and $\sigma$. It remains to show that the functional $\sigma$ is a rearrangement-invariant function norm. We shall only prove that $\sigma$ is subadditive and that
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:sigmalocembintol1}
\int_R f(x)\,{\fam0 d}\mu(x)\lesssim\sigma(f)\quad\text{for every $f\in\mathcal Mpl(R, \mu)$}
\end{equation}
because it can be readily verified that $\sigma$ possesses all of the other properties of a rearrangement-invariant function norm. Let $f_1,f_2\in\mathcal Mpl(R, \mu)$ be given. Being the integral mean of the nonincreasing function $(0,a)\ni s\mapsto \sup\limits_{u\in[s,a)}\frac1{\xi(u)}g^*(u)$, where $g\in L^{1,1;b}(0,a)$ is a fixed function, over the interval $(0,t)$ with respect to the measure $\xi(s)\,{\fam0 d} s$, the function $(0,a)\ni t\mapsto \frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^tT_\xi g(s)\,{\fam0 d} s$ is nonincreasing on $(0,a)$, and so, thanks to Hardy's lemma \eqref{prel:hardy-lemma}, we have that
\begin{align*}
\sigma(f_1+f_2)&=\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^a\left(f_1+f_2\right)^*(t)\frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^tT_\xi g(s)\,{\fam0 d} s\,{\fam0 d} t\\
&\leq\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af_1^*(t)\frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^tT_\xi g(s)\,{\fam0 d} s\,{\fam0 d} t\\
&\quad+ \sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af_2^*(t)\frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^tT_\xi g(s)\,{\fam0 d} s\,{\fam0 d} t\\
&=\sigma(f_1)+\sigma(f_2).
\end{align*}
Hence $\sigma$ is subadditive. Let $f\in\mathcal Mpl(R, \mu)$ be given. Exploiting again the fact that the function $(0,a)\ni t\mapsto \frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^tT_\xi g(s)\,{\fam0 d} s$ is, for every fixed $g\in\mathcal M(0,a)$, nonincreasing on $(0,a)$, we have that
\begin{align*}
\sigma(f)&=\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\int_0^af^*(t)\frac1{\int_0^t\xi(u)\,{\fam0 d} u}\int_0^tT_\xi g(s)\,{\fam0 d} s\,{\fam0 d} t\\
&\gtrsim\sup_{\|g\|_{L^{1,1;b}(0,a)}\leq1}\frac1{\int_0^a\xi(u)\,{\fam0 d} u}\int_0^aT_\xi g(s)\,{\fam0 d} s\int_0^af^*(t)\,{\fam0 d} t\\
&\geq\frac1{\|\chi_{(0,a)}\|_{L^{1,1;b}(0,a)}}\frac1{\int_0^a\xi(u)\,{\fam0 d} u}\int_0^aT_\xi \chi_{(0,a)}(s)\,{\fam0 d} s\int_0^af^*(t)\,{\fam0 d} t\\
&\approx\frac1{\|\chi_{(0,a)}\|_{L^{1,1;b}(0,a)}}\frac1{\int_0^a\xi(u)\,{\fam0 d} u}\int_0^a\frac{\xi(s)}{\xi(a)}\,{\fam0 d} s\int_0^af^*(t)\,{\fam0 d} t\\
&=\frac1{\|\chi_{(0,a)}\|_{L^{1,1;b}(0,a)}}\frac1{\xi(a)}\int_0^af^*(t)\,{\fam0 d} t\\
&\geq\frac1{\|\chi_{(0,a)}\|_{L^{1,1;b}(0,a)}}\frac1{\xi(a)}\int_Rf(x)\,{\fam0 d}\mu(x),
\end{align*}
where the last inequality is true thanks to the Hardy--Littlewood inequality \eqref{prel:HL}. Hence \eqref{thm:partanswto:noncompactfundamentallybigger1:sigmalocembintol1} holds.
Now that we know that $Y(R, \mu)$ is equivalent to a rearrangement-invariant space, we turn our attention to its relation to the Marcinkiewicz space $M_\varphi(R, \mu)$. Note that $M_\varphi(R, \mu)\hookrightarrow Y(R, \mu)$. Indeed, since $\varphi$ satisfies \eqref{thm:partanswto:noncompactfundamentallybigger1:conB}, we have that
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:eq1}
\|f\|_{M_\varphi(R, \mu)}\approx\sup\limits_{t\in(0,a)}f^*(t)\varphi(t)\quad\text{for every $f\in\mathcal M(R, \mu)$}
\end{equation}
(see \cite[Lemma~2.1]{MusOl:19}); therefore, it is sufficient to verify that $\frac1{\varphi}\in Y(0,a)$, which is easy (recall \eqref{thm:partanswto:noncompactfundamentallybigger1:confunbfinite} and \eqref{thm:partanswto:noncompactfundamentallybigger1:confunbblowsup}).
Next, we claim that
\begin{equation*}
\varphi_Y(t)\lesssim\frac{\varphi(t)}{b(t)}\quad\text{for every $t\in(0,a)$}.
\end{equation*}
Indeed, thanks to \eqref{thm:partanswto:noncompactfundamentallybigger1:conintphiovertau} and the fact that the function $b$ is nonincreasing,
\begin{equation*}
\varphi_Y(t)\approx\sup\limits_{u\in(0,t]}\frac1{b(u)}\int_u^t\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s\leq\frac1{b(t)}\int_0^t\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s\lesssim\frac{\varphi(t)}{b(t)}.
\end{equation*}
Hence $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi(t)}=0$ thanks to \eqref{thm:partanswto:noncompactfundamentallybigger1:confunbblowsup}.
Finally, in order to prove that the embedding $M_\varphi(R, \mu)\hookrightarrow Y(R, \mu)$ is not almost compact, we consider functions $f_k\in \mathcal Mpl(R, \mu)$, $k\in\mathbb{N}$, such that
\begin{equation*}
f^*_k(t)=\frac1{\varphi(\frac1{k})}\chi_{\left(0,\frac1{k}\right)}(t) + \frac1{\varphi(t)}\chi_{\left[\frac1{k},a\right)}(t),\ t\in(0,a).
\end{equation*}
Note that the set $M=\{f_k\colon k\in\mathbb{N}\}$ is bounded in $M_\varphi(R, \mu)$, for, thanks to \eqref{thm:partanswto:noncompactfundamentallybigger1:eq1},
\begin{equation*}
\|f_k\|_{M_\varphi(R, \mu)}\approx\max\left\{\frac1{\varphi(\frac1{k})}\sup_{t\in(0,\frac1{k})}\varphi(t), \sup_{t\in(\frac1{k},a)}\frac1{\varphi(t)}\varphi(t)\right\}=1,
\end{equation*}
where the equivalence constants are independent of $k$. However, $M$ does not have uniformly absolutely continuous norm in $Y(R, \mu)$. Indeed, let $\delta\in(0,a)$. Owing to \eqref{thm:partanswto:noncompactfundamentallybigger1:confunbblowsup} and \eqref{thm:partanswto:noncompactfundamentallybigger1:confunbfinite}, we can find $k\in\mathbb{N}$ large enough that
\begin{align}
\frac1{k}&<\delta,\nonumber\\
b\left(\frac1{k}\right)&\leq2\int_{\frac1{k}}^a\frac1{\tau(s)}\,{\fam0 d} s,\label{thm:partanswto:noncompactfundamentallybigger1:eq3}\\
\int_{\frac1{k}}^a\frac1{\tau(s)}\,{\fam0 d} s&\geq2\int_{\delta}^a\frac1{\tau(s)}\,{\fam0 d} s.\label{thm:partanswto:noncompactfundamentallybigger1:eq4}
\end{align}
Hence
\begin{equation}\label{thm:partanswto:noncompactfundamentallybigger1:eq5}
\|f^*_k\chi_{(0,\delta)}\|_{Y(0,a)}\gtrsim\sup_{t\in[\frac1{k},\delta]}\frac1{b(t)}\int_t^{\delta}\frac1{\tau(s)}\,{\fam0 d} s\geq\frac1{b\left(\frac1{k}\right)}\int_{\frac1{k}}^{\delta}\frac1{\tau(s)}\,{\fam0 d} s\geq\frac{1}{2}\frac{\int_{\frac1{k}}^{\delta}\frac1{\tau(s)}\,{\fam0 d} s}{\int_{\frac1{k}}^a\frac1{\tau(s)}\,{\fam0 d} s}\geq\frac{1}{4},
\end{equation}
where the third and the last inequalities are valid thanks to \eqref{thm:partanswto:noncompactfundamentallybigger1:eq3} and \eqref{thm:partanswto:noncompactfundamentallybigger1:eq4}, respectively. Since $\delta\in(0,a)$ was arbitrary, \eqref{thm:partanswto:noncompactfundamentallybigger1:eq5} implies that $M$ does not have uniformly absolutely continuous norm in $Y(R, \mu)$.
\end{proof}
\begin{remark}\label{rem:partanswto:noncompactfundamentallybigger1}
Since the question of whether a rearrangement-invariant space is (almost compactly) embedded into another one is invariant with respect to equivalently renorming the spaces, \hyperref[thm:partanswto:noncompactfundamentallybigger1]{Theorem~\ref*{thm:partanswto:noncompactfundamentallybigger1}} can be used even when the function $\varphi$ is merely equivalent to a quasiconcave function on $[0,a]$.
Furthermore, since \hyperref[thm:partanswto:noncompactfundamentallybigger1]{Theorem~\ref*{thm:partanswto:noncompactfundamentallybigger1}} has a large number of assumptions, it is worth noting some concrete, important examples of functions that satisfy the assumptions.
\begin{enumerate}[(i)]
\item If $\varphi(t)=t^\alpha\ell(t)^\beta$ with $\alpha\in(0,1)$ and $\beta\in\mathbb{R}$, then we can take $\tau(t)=t$, $b(t)=\ell(t)$, and $\xi(t)=t^{-\alpha}\ell(t)^{-\beta}$.
\item If $\varphi(t)=\ell(t)^\beta$ with $\beta<0$, then we can take $\tau(t)=t\ell(t)$, $b(t)=\ell\ell(t)$, and $\xi(t)=\ell(t)^{1-\beta}$.
\end{enumerate}
In both cases (i) and (ii), the function $\varphi$ is equivalent to a quasiconcave function on $[0,a]$, and $\varphi$ together with the functions $\tau$, $b$ and $\xi$, defined above, satisfies the assumptions of \hyperref[thm:partanswto:noncompactfundamentallybigger1]{Theorem~\ref*{thm:partanswto:noncompactfundamentallybigger1}}. These examples actually illustrate how to use \hyperref[thm:partanswto:noncompactfundamentallybigger1]{Theorem~\ref*{thm:partanswto:noncompactfundamentallybigger1}} when $\varphi$ has the form $\varphi(t)=t^\alpha b(t)$, where $\alpha\in[0,1)$ and $b$ is a slowly-varying function on $(0,a]$, which is the case in many customary situations.
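To indicate how the verification goes, consider case (i). Since $\alpha\in(0,1)$ and $\ell^{\beta}$, $\ell^{-\beta}$ are slowly varying, standard estimates for integrals of regularly varying functions give
\begin{equation*}
\int_0^t\frac{\varphi(s)}{\tau(s)}\,{\fam0 d} s=\int_0^ts^{\alpha-1}\ell(s)^{\beta}\,{\fam0 d} s\approx t^{\alpha}\ell(t)^{\beta}=\varphi(t)
\quad\text{and}\quad
\int_0^t\xi(s)\,{\fam0 d} s\approx t^{1-\alpha}\ell(t)^{-\beta}=\frac{\tau(t)}{\varphi(t)}
\end{equation*}
for every $t\in(0,a)$, which yields \eqref{thm:partanswto:noncompactfundamentallybigger1:conintphiovertau} and \eqref{thm:partanswto:noncompactfundamentallybigger1:interepreoftauoverphi}, while $1+\int_t^a\frac{{\fam0 d} s}{s}=1+\log\frac{a}{t}\approx\ell(t)$, which is slowly varying. The remaining assumptions, including \eqref{thm:partanswto:noncompactfundamentallybigger1:supremumopbounded}, can be checked in the same spirit; case (ii) is analogous, with one extra logarithmic level.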
\end{remark}
We now use \hyperref[thm:partanswto:noncompactfundamentallybigger1]{Theorem~\ref*{thm:partanswto:noncompactfundamentallybigger1}} together with \hyperref[rem:partanswto:noncompactfundamentallybigger1]{Remark~\ref*{rem:partanswto:noncompactfundamentallybigger1}} to obtain examples of spaces $Y(\overline{\Omega}, \nu)$ giving a positive answer to \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}} in the case where $X(\Omega)$ is a weak Lorentz--Zygmund space. Our approach also outlines a general way that can be used to construct a space $Y(\overline{\Omega}, \nu)$ sought in \hyperref[q:noncompactfundamentallybigger1]{Question~\ref*{q:noncompactfundamentallybigger1}} in the case where $Y_X(\overline{\Omega}, \nu)$ is a Marcinkiewicz space.
\begin{theorem}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. Assume that either $p\in(1,\frac{n}{m})$ and $\alpha\in\mathbb{R}$ or $p=\frac{n}{m}$ and $\alpha\leq1$. There is a rearrangement-invariant space $Y(\overline{\Omega}, \nu)$, whose norm is induced by the function norm defined by \eqref{thm:partanswto:noncompactfundamentallybigger1:defsigma} (and equivalent to \eqref{thm:partanswto:noncompactfundamentallybigger1:normY}) with $a=\nu\left(\overline{\Omega}\right)$,
\begin{align*}
\varphi(t)&\approx\begin{cases}
t^\frac{n-mp}{dp}\ell(t)^{\alpha}\quad&\text{if $p\in(1,\frac{n}{m})$},\\
\ell(t)^{\alpha-1}\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1$},\\
\ell\ell(t)^{-1}\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1$},
\end{cases}\\
\tau(t)&=\begin{cases}
t\quad&\text{if $p\in(1,\frac{n}{m})$},\\
t\ell(t)\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1$},\\
t\ell(t)\ell\ell(t)\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1$},
\end{cases}\\
\intertext{and}
b(t)&=\begin{cases}
\ell(t)\quad&\text{if $p\in(1,\frac{n}{m})$},\\
\ell\ell(t)\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1$},\\
\ell\ell\ell(t)\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1$},
\end{cases}
\end{align*}
such that
\begin{itemize}
\item $W^mL^{p,\infty;\alpha}(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ non-compactly,
\item $M_{Y_{L^{p,\infty;\alpha}}}(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$,
\item $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{Y_{L^{p,\infty;\alpha}}}(t)}=0$.
\end{itemize}
\end{theorem}
\begin{proof}
It is known (see \cite[Theorem~5.1]{CM:20}) that
\begin{equation*}
Y_{L^{p,\infty;\alpha}}(\overline{\Omega}, \nu)=\begin{cases}
L^{\frac{dp}{n-mp},\infty;\alpha}(\overline{\Omega}, \nu)\quad&\text{if $p\in(1,\frac{n}{m})$},\\
L^{\infty, \infty;\alpha-1}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1$},\\
L^{\infty, \infty;0,-1}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1$}.
\end{cases}
\end{equation*}
Furthermore, $Y_{L^{p,\infty;\alpha}}(\overline{\Omega}, \nu)$ is equivalent to a Marcinkiewicz space $M_\psi(\overline{\Omega}, \nu)$ with $\psi\approx \varphi$. The claim now follows from \hyperref[thm:partanswto:noncompactfundamentallybigger1]{Theorem~\ref*{thm:partanswto:noncompactfundamentallybigger1}} combined with \citep[Theorem~4.1]{CM:20}.
\end{proof}
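For the sake of concreteness, in the case $p\in(1,\frac{n}{m})$ the norm of the space $Y(\overline{\Omega}, \nu)$ constructed above is, by \eqref{thm:partanswto:noncompactfundamentallybigger1:normY} and up to equivalence, given explicitly by
\begin{equation*}
\|f\|_{Y(\overline{\Omega}, \nu)}\approx\sup\limits_{t\in(0,\nu(\overline{\Omega}))}\frac1{\ell(t)}\int_t^{\nu(\overline{\Omega})}f^*(s)\,s^{\frac{n-mp}{dp}-1}\ell(s)^{\alpha}\,{\fam0 d} s\quad\text{for every $f\in\mathcal M(\overline{\Omega}, \nu)$};
\end{equation*}
analogous explicit expressions can be written down in the two remaining cases.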
\section{On enlarging general rearrangement-invariant spaces}\label{prel:q3}
The following proposition provides a general principle that can be exploited to construct a space $Y(\overline{\Omega}, \nu)$ sought in \hyperref[q:noncompactfundamentallybigger2]{Question~\ref*{q:noncompactfundamentallybigger2}}.
\begin{proposition}\label{prop:paransq2}
Let $\left(R, \mu\right)$ be a finite, nonatomic measure space. Let $Z_1(R, \mu)$ and $Z_2(R, \mu)$ be rearrangement-invariant spaces. Assume that $\lim\limits_{t\to0^+}\frac{\varphi_{Z_2}(t)}{\varphi_{Z_1}(t)}=0$ and $Z_1(R, \mu)\not\subseteq Z_2(R, \mu)$. Set $Y(R, \mu)=\left(Z_1'(R, \mu)\cap Z_2'(R, \mu)\right)'$. We have that $\lim\limits_{t\to0^+}\frac{\varphi_{Y}(t)}{\varphi_{Z_1}(t)}=0$ and $Z_1(R, \mu)\hookrightarrow Y(R, \mu)$, but the embedding is not almost compact.
Moreover, if either of the spaces $Z_1'(R, \mu)$ and $Z_2'(R, \mu)$ has absolutely continuous norm, then $Y(R, \mu)=Z_1(R, \mu)+Z_2(R, \mu)$.
\end{proposition}
\begin{proof}
Combining \eqref{prel:X''} and \eqref{prel:embdual}, we obtain that $Z_1(R, \mu)\hookrightarrow Y(R, \mu)$. Next, we have that $\varphi_{Y'}=\max\{\varphi_{Z_1'}, \varphi_{Z_2'}\}$, whence
\begin{equation*}
\lim_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{Z_1}(t)}=\lim_{t\to0^+}\frac{\varphi_{Z_1'}(t)}{\varphi_{Y'}(t)}\leq\lim_{t\to0^+}\frac{\varphi_{Z_1'}(t)}{\varphi_{Z_2'}(t)}=\lim_{t\to0^+}\frac{\varphi_{Z_2}(t)}{\varphi_{Z_1}(t)}=0
\end{equation*}
thanks to \eqref{prel:fundamentalfuncsidentity}.
It remains to show that $Z_1(R, \mu)$ is not almost-compactly embedded into $Y(R, \mu)$, which is equivalent to showing that $Z_1'(R, \mu)\cap Z_2'(R, \mu)$ is not almost-compactly embedded into $Z_1'(R, \mu)$ thanks to \eqref{prel:almostcompactdual}. Owing to \eqref{prel:embdual}, we have that $Z_2'(R, \mu)\not\subseteq Z_1'(R, \mu)$, and so the embedding $Z_1'(R, \mu)\cap Z_2'(R, \mu)\hookrightarrow Z_1'(R, \mu)$ is not almost compact by \citep[Lemma~3.7]{F-MMP:10}.
Finally, if either of the spaces $Z_1'(R, \mu)$ and $Z_2'(R, \mu)$ has absolutely continuous norm, then $\left(Z_1'(R, \mu)\cap Z_2'(R, \mu)\right)'=Z_1(R, \mu)+Z_2(R, \mu)$ thanks to \citep[Chapter~3, Exercise~5]{BS}.
\end{proof}
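It is perhaps worth stressing that the assumption $Z_1(R, \mu)\not\subseteq Z_2(R, \mu)$ cannot be dropped from \hyperref[prop:paransq2]{Proposition~\ref*{prop:paransq2}}. Indeed, take for instance $Z_1(R, \mu)=L^p(R, \mu)$ and $Z_2(R, \mu)=L^r(R, \mu)$ with $1\le r<p\le\infty$. Then $\lim\limits_{t\to0^+}\frac{\varphi_{Z_2}(t)}{\varphi_{Z_1}(t)}=\lim\limits_{t\to0^+}t^{\frac1{r}-\frac1{p}}=0$, but $Z_1(R, \mu)\subseteq Z_2(R, \mu)$, and H\"older's inequality gives
\begin{equation*}
\|f\chi_E\|_{L^r(R, \mu)}\leq\|f\|_{L^p(R, \mu)}\,\mu(E)^{\frac1{r}-\frac1{p}}\quad\text{for every $f\in L^p(R, \mu)$ and every measurable $E\subseteq R$},
\end{equation*}
so bounded subsets of $L^p(R, \mu)$ have uniformly absolutely continuous norms in $L^r(R, \mu)$; in other words, the embedding $L^p(R, \mu)\hookrightarrow L^r(R, \mu)$ is in fact almost compact.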
The following two theorems suggest how to find a space $Z_2(R, \mu)$ for a given space $Z_1(R, \mu)=\Lambda^q(v)$ in such a way that \hyperref[prop:paransq2]{Proposition~\ref*{prop:paransq2}} can be used. Since a large number of customary function spaces can be described as a $\Lambda^q(v)$ space, the theorems provide quite general tools for producing spaces $Y(\overline{\Omega}, \nu)$ sought in \hyperref[q:noncompactfundamentallybigger2]{Question~\ref*{q:noncompactfundamentallybigger2}}.
\begin{theorem}\label{thm:paransq2lambdaspaces}
Let $\left(R, \mu\right)$ be a finite, nonatomic measure space. Let $q\in(1,\infty)$ and $v$ be a weight on $(0,\mu(R))$ satisfying
\begin{equation*}
\int_0^ts^{q'}v(s)V(s)^{-q'}\,{\fam0 d} s\lesssim t^{q'}V(t)^{1-q'}\quad\text{for every $t\in(0,\mu(R))$}.
\end{equation*}
Set $Z(R, \mu)=\Lambda^q(v)$. Let $r\in[1,q)$ and $w\colon(0,\mu(R))\to(0,\infty)$ be a weight satisfying \eqref{prel:lambdari} with $v$ and $q$ replaced by $w$ and $r$, respectively,
\begin{equation}\label{thm:paransq2lambdaspaces:ratiooffundamentalfuncs}
\lim\limits_{t\to0^+}\frac{W(t)^\frac1{r}}{V(t)^\frac1{q}}=0
\end{equation}
and
\begin{equation}\label{thm:paransq2lambdaspaces:XnotintoZ}
\int_0^{\mu(R)}\left(\frac{W(t)}{V(t)}\right)^\frac{q}{q-r}v(t)\,{\fam0 d} t=\infty.
\end{equation}
Set $Y(R, \mu)=\Lambda^q(v) + \Lambda^r(w)$. We have that $\lim\limits_{t\to0^+}\frac{\varphi_{Y}(t)}{\varphi_{Z}(t)}=0$ and $Z(R, \mu)\hookrightarrow Y(R, \mu)$, but the embedding is not almost compact.
\end{theorem}
\begin{proof}
The assertion of the theorem will immediately follow from \hyperref[prop:paransq2]{Proposition~\ref*{prop:paransq2}} with $Z_1(R, \mu)=\Lambda^q(v)$ and $Z_2(R, \mu)=\Lambda^r(w)$ once we verify its assumptions. Note that $\Lambda^q(v)$ and $\Lambda^r(w)$ are equivalent to rearrangement-invariant spaces. Assumption \eqref{thm:paransq2lambdaspaces:XnotintoZ} is equivalent to the fact that $Z_1(R, \mu)\not\subseteq Z_2(R, \mu)$ by \citep[Proposition~1]{S:93}. Since $\varphi_{Z_1}\approx V^\frac1{q}$ and $\varphi_{Z_2}\approx W^\frac1{r}$, we plainly have that $\lim\limits_{t\to0^+}\frac{\varphi_{Z_2}(t)}{\varphi_{Z_1}(t)}=0$ thanks to \eqref{thm:paransq2lambdaspaces:ratiooffundamentalfuncs}. It only remains to observe that $Z_1'(R, \mu)$ has absolutely continuous norm. Indeed, owing to \citep[Theorem~1]{S:90} combined with \eqref{prel:assocnormwithstars}, we have that
\begin{align*}
\|g^*\chi_{(0,a)}\|_{Z_1'(0,\mu(R))}&\approx\left(\int_0^{\mu(R)}\left(g^*\chi_{(0,a)}\right)^{**}(t)^{q'}t^{q'}V(t)^{-q'}v(t)\,{\fam0 d} t\right)^{\frac1{q'}}\\
&\quad+V(\mu(R))^{-q'}\int_0^ag^*(t)\,{\fam0 d} t
\end{align*}
for every $g\in Z_1'(R, \mu)$ and $a\in(0,\mu(R))$. Hence the claim follows from Lebesgue's dominated convergence theorem.
\end{proof}
\begin{theorem}\label{thm:paransq2lambdaspacesqinfty}
Let $\left(R, \mu\right)$ be a finite, nonatomic measure space. Let $v$ be a weight on $(0,\mu(R))$ satisfying
\begin{equation*}
\tilde{v}(\mu(R))<\infty\quad\text{and}\quad\sup_{t\in(0,\mu(R))}\tilde{v}(t)\frac1{t}\int_0^t\frac1{\tilde{v}(s)}\,{\fam0 d} s<\infty,
\end{equation*}
where $\tilde{v}(t)=\esssup\limits_{s\in(0,t)}v(s)$, $t\in(0,\mu(R)]$. Set $Z(R, \mu)=\Lambda^\infty(v)$. Let $r\in[1,\infty)$ and $w\colon(0,\mu(R))\to(0,\infty)$ be a weight on $(0,\mu(R))$ satisfying \eqref{prel:lambdari} with $v$ and $q$ replaced by $w$ and $r$, respectively,
\begin{equation*}
\lim\limits_{t\to0^+}\frac{W(t)^\frac1{r}}{\tilde{v}(t)}=0
\end{equation*}
and
\begin{equation*}
\left\|\frac1{\tilde{v}}\right\|_{\Lambda^r(w)}=\infty.
\end{equation*}
Set $Y(R, \mu)=\Lambda^\infty(v) + \Lambda^r(w)$. We have that $\lim\limits_{t\to0^+}\frac{\varphi_{Y}(t)}{\varphi_{Z}(t)}=0$ and $Z(R, \mu)\hookrightarrow Y(R, \mu)$, but the embedding is not almost compact.
\end{theorem}
\begin{proof}
A proof of the theorem can be carried out along the lines of the proof of \hyperref[thm:paransq2lambdaspaces]{Theorem~\ref*{thm:paransq2lambdaspaces}}, and we omit the details. We just note that (cf.~\citep[Lemma~1.5]{GoPi:06})
\begin{equation*}
\|f\|_{Z(R, \mu)}=\esssup_{t\in(0, \mu(R))}f^*(t)\tilde{v}(t)\quad\text{for every $f\in\mathcal Mpl(R, \mu)$},
\end{equation*}
and so, since $\frac1{\tilde{v}}$ is a positive, nonincreasing function on $(0,\mu(R))$,
\begin{equation*}
\|g\|_{Z'(R, \mu)}=\sup_{\|f\|_{Z(R, \mu)}\leq1}\int_0^{\mu(R)}f^*(t)g^*(t)\,{\fam0 d} t=\int_0^{\mu(R)}\frac{g^*(t)}{\tilde{v}(t)}\,{\fam0 d} t\quad\text{for every $g\in\mathcal Mpl(R, \mu)$}
\end{equation*}
thanks to \eqref{prel:assocnormwithstars}.
\end{proof}
We are now in a position to provide concrete examples of spaces $Y(\overline{\Omega}, \nu)$ sought in \hyperref[q:noncompactfundamentallybigger2]{Question~\ref*{q:noncompactfundamentallybigger2}}. We shall do so in the case where $X(\Omega)$ is a Lorentz--Zygmund space.
\begin{theorem}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. Let $q\in(1,\infty]$, and assume that either $p\in(1,\frac{n}{m})$ and $\alpha\in\mathbb{R}$, or $p=\frac{n}{m}$ and $\alpha\leq1-\frac1{q}$. Fix any $s\in[1,q)$ and consider the rearrangement-invariant space $Y(\overline{\Omega}, \nu)$ defined as
\begin{equation*}
Y(\overline{\Omega}, \nu) =\begin{cases}
L^{\frac{dp}{n-mp},q;\alpha}(\overline{\Omega}, \nu)+L^{\frac{dp}{n-mp},s;\alpha+\frac1{q}-\frac1{s}}(\overline{\Omega}, \nu)\quad&\text{if $p\in(1,\frac{n}{m})$,}\\
L^{\infty,q;\alpha-1}(\overline{\Omega}, \nu) + L^{\infty,s;\alpha-1+\frac1{q}-\frac1{s},\frac1{q}-\frac1{s}}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1-\frac1{q}$,}\\
L^{\infty,q;-\frac1{q},-1}(\overline{\Omega}, \nu)+L^{\infty,s;-\frac1{s},\frac1{q}-\frac1{s}-1,\frac1{q}-\frac1{s}}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1-\frac1{q}$.}
\end{cases}
\end{equation*}
The space $Y(\overline{\Omega}, \nu)$ satisfies
\begin{itemize}
\item $W^mL^{p,q;\alpha}(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ non-compactly,
\item $Y_{L^{p,q;\alpha}}(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$,
\item $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{Y_{L^{p,q;\alpha}}}(t)}=0$.
\end{itemize}
\end{theorem}
\begin{proof}
Thanks to \citep[Theorem~5.1]{CM:20}, we have that
\begin{equation*}
Y_{L^{p,q;\alpha}}(\overline{\Omega}, \nu)=\begin{cases}
L^{\frac{dp}{n-mp},q;\alpha}(\overline{\Omega}, \nu)\quad&\text{if $p\in(1,\frac{n}{m})$,}\\
L^{\infty,q;\alpha-1}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1-\frac1{q}$,}\\
L^{\infty,q;-\frac1{q},-1}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1-\frac1{q}$.}
\end{cases}
\end{equation*}
Due to \citep[Theorem~4.1]{CM:20}, $W^mL^{p,q;\alpha}(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is not compact if and only if $Y_{L^{p,q;\alpha}}(\overline{\Omega}, \nu)\hookrightarrow Y(\overline{\Omega}, \nu)$ is not almost compact.
The fact that the space $Y(\overline{\Omega}, \nu)$ has all of the desired properties can be derived from \hyperref[thm:paransq2lambdaspaces]{Theorem~\ref*{thm:paransq2lambdaspaces}} (if $q<\infty$) or \hyperref[thm:paransq2lambdaspacesqinfty]{Theorem~\ref*{thm:paransq2lambdaspacesqinfty}} (if $q=\infty$). However, making use of computations that were already done, we can also obtain the assertion directly from \hyperref[prop:paransq2]{Proposition~\ref*{prop:paransq2}} applied with $Z_1(\overline{\Omega}, \nu)=Y_{L^{p,q;\alpha}}(\overline{\Omega}, \nu)$ and $Z_2(\overline{\Omega}, \nu)=Z(\overline{\Omega}, \nu)$, where
\begin{equation*}
Z(\overline{\Omega}, \nu)=\begin{cases}
L^{\frac{dp}{n-mp},s;\alpha+\frac1{q}-\frac1{s}}(\overline{\Omega}, \nu)\quad&\text{if $p\in(1,\frac{n}{m})$,}\\
L^{\infty,s;\alpha-1+\frac1{q}-\frac1{s},\frac1{q}-\frac1{s}}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha<1-\frac1{q}$,}\\
L^{\infty,s;-\frac1{s},\frac1{q}-\frac1{s}-1,\frac1{q}-\frac1{s}}(\overline{\Omega}, \nu)\quad&\text{if $p=\frac{n}{m}$ and $\alpha=1-\frac1{q}$.}
\end{cases}\end{equation*}
Indeed, we have that $\lim\limits_{t\to0^+}\frac{\varphi_Z(t)}{\varphi_{Y_X}(t)}=0$ thanks to \citep[Lemma~3.7]{OP}, and that $Y_X(\overline{\Omega}, \nu)\not\subseteq Z(\overline{\Omega}, \nu)$ thanks to \citep[Theorem~4.5]{OP}. Finally, since $q>1$, $Y'_{L^{p,q;\alpha}}(\overline{\Omega}, \nu)$ has absolutely continuous norm by \citep[Theorems~6.11 and~9.5]{OP}.
\end{proof}
We have seen that the question of whether the embedding $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is compact is fairly unrelated to the position of $Y(\overline{\Omega}, \nu)$ on ``the fundamental scale'' of the optimal target space $Y_X(\overline{\Omega}, \nu)$. In fact, we have witnessed that it may even happen that $Y(\overline{\Omega}, \nu)$ is ``fundamentally bigger'' than $M_{Y_X}(\overline{\Omega}, \nu)$ (i.e.~$M_{Y_X}(\overline{\Omega}, \nu)\subsetneq Y(\overline{\Omega}, \nu)$ and $\lim\limits_{t\to0^+}\frac{\varphi_Y(t)}{\varphi_{M_{Y_X}}(t)}=0$), and yet the embedding $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is still not compact. This shows that the question of whether $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$ is compact is actually far more subtle than it may misleadingly appear when only the Lebesgue (or even two-parameter Lorentz) spaces are taken into account. We conclude this paper with results illustrating even further the unrelatedness of compactness to the fundamental scale of the optimal target space. More precisely, we construct a Lorentz endpoint space $\Lambda_\psi(\overline{\Omega}, \nu)$ (i.e.~the smallest rearrangement-invariant space on the fundamental scale given by $\psi$) and rearrangement-invariant spaces $X(\Omega)$ and $Y_X(\overline{\Omega}, \nu)$ with the following properties:
\begin{itemize}
\item The spaces $X(\Omega)$ and $Y_X(\overline{\Omega}, \nu)$ are mutually optimal in $W^mX(\Omega)\hookrightarrow Y_X(\overline{\Omega}, \nu)$, that is, $Y_X(\overline{\Omega}, \nu)$ is the optimal target space for $X(\Omega)$ and, simultaneously, $X(\Omega)$ is the optimal domain space for $Y_X(\overline{\Omega}, \nu)$ (i.e.~the largest possible rearrangement-invariant space rendering the embedding true);
\item $W^mX(\Omega)\hookrightarrow \Lambda_\psi(\overline{\Omega}, \nu)$ non-compactly;
\item $Y_X(\overline{\Omega}, \nu)\hookrightarrow \Lambda_\psi(\overline{\Omega}, \nu)$;
\item $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Y_X}(t)}=0$.
\end{itemize}
The following proposition characterizes when the spaces in a Sobolev embedding are mutually optimal. We will use the proposition, which is of independent interest, to prove \hyperref[thm:answtoq:noncompactintolorentzendpoint]{Theorem~\ref*{thm:answtoq:noncompactintolorentzendpoint}}, which will tell us how to construct spaces having the properties listed above. We note that the optimal domain space $X_Y(\Omega)$ appearing in the following proposition always exists (see the proof of the proposition).
\begin{proposition}\label{prop:optimalpair}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. Let $Y(\overline{\Omega}, \nu)$ be a rearrangement-invariant space over $(\overline{\Omega}, \nu)$. Let $X_Y(\Omega)$ be the optimal domain space for $Y(\overline{\Omega}, \nu)$ in $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$. The space $Y(\overline{\Omega}, \nu)$ is the optimal target space for $X_Y(\Omega)$ in the Sobolev embedding, that is, $Y(\overline{\Omega}, \nu)$ and $X_Y(\Omega)$ are mutually optimal, if and only if $T_{d,m,n}\colon Y'(0,\nu(\overline{\Omega}))\to Y'(0,\nu(\overline{\Omega}))$ is bounded, where
\begin{equation}\label{prop:optimalpair:defsupop}
T_{d,m,n}f(t)=t^{\frac{n-m}{d}-1}\sup\limits_{s\in[t,\nu(\overline{\Omega}))}s^{1-\frac{n-m}{d}}f^*(s),\ t\in(0,\nu(\overline{\Omega})),\ f\in\mathcal M(0,\nu(\overline{\Omega})).
\end{equation}
Moreover, in that case, the norm on $X_Y(\Omega)$ is equivalent to the functional
\begin{equation}\label{prop:optimalpair:optimaldomainfunc}
\mathcal M(\Omega)\ni f\mapsto\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))}.
\end{equation}
\end{proposition}
\begin{proof}
First, we note that, for every pair of rearrangement-invariant spaces $Z_1(\Omega)$ and $Z_2(\overline{\Omega}, \nu)$, the validity of the embedding $W^mZ_1(\Omega)\hookrightarrow Z_2(\overline{\Omega}, \nu)$ is equivalent to the validity of
\begin{equation}\label{prop:optimalpair:reductionprinciple}
\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Z_2(0,\nu(\overline{\Omega}))}\lesssim\|f^*\|_{Z_1(0,|\Omega|)}\quad\text{for every $f\in\mathcal M(\Omega)$}
\end{equation}
thanks to \citep[Theorems~4.1~and~4.3]{CPS:20}. Using this characterization, we readily obtain that the rearrangement-invariant space $X_Y(\Omega)$ induced by the rearrangement-invariant function norm
\begin{equation}\label{prop:optimalpair:optimaldomainnorm}
\mathcal Mpl(\Omega)\ni f\mapsto\sup\limits_{g\sim f}\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}g\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))},
\end{equation}
where the supremum is taken over all $g\in\mathcal Mpl(0,|\Omega|)$ equimeasurable with $f$, is the optimal domain space for $Y(\overline{\Omega}, \nu)$ in $W^mX(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$. The fact that \eqref{prop:optimalpair:optimaldomainnorm} is indeed a rearrangement-invariant function norm can be proved similarly to \citep[Theorem~3.3]{KePi:06}.
Second, assume that the space $Y(\overline{\Omega}, \nu)$ is the optimal target space for $X_Y(\Omega)$ in the Sobolev embedding $W^mX_Y(\Omega)\hookrightarrow Y(\overline{\Omega}, \nu)$. Set $\alpha=1-\frac{n-m}{d}\in[0,1)$ and $T=T_{d,m,n}$. We have that
\begin{equation}\label{prop:optimalpair:optrangeforoptdomain}
\|g\|_{Y'(0,\nu(\overline{\Omega}))}\approx\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}\quad\text{for every $g\in\mathcal M(0,\nu(\overline{\Omega}))$}
\end{equation}
thanks to \citep[Theorem~4.4]{CPS:20}. Observe that, for every $g\in\mathcal M(0,\nu(\overline{\Omega}))$, the function $Tg$ is nonincreasing on $(0, \nu(\overline{\Omega}))$. Consequently, for every $g\in\mathcal M(0,\nu(\overline{\Omega}))$,
\begin{equation}\label{prop:optimalpair:odhadnasupremalni}
\begin{aligned}
\|Tg\|_{Y'(0,\nu(\overline{\Omega}))}&\approx\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}Tg\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}\\
&\lesssim\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}t^{-\alpha}\sup\limits_{\tau\in\big[t,|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}\big]}\tau^\alpha g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}\tau\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}\\
&\quad+\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}t^{-\alpha}\sup\limits_{\tau\in\big[|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n},|\Omega|\big)}\tau^\alpha g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}\tau\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}.
\end{aligned}
\end{equation}
As for the first term, using \citep[Lemma~3.3(ii)]{CP:16}, we obtain that
\begin{align*}
&\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}t^{-\alpha}\sup\limits_{\tau\in\big[t,|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}\big]}\tau^\alpha g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}\tau\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}\\
&\lesssim\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}
\end{align*}
for every $g\in\mathcal M(0,\nu(\overline{\Omega}))$. As for the second term in \eqref{prop:optimalpair:odhadnasupremalni}, we have that
\begin{align*}
&\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}t^{-\alpha}\sup\limits_{\tau\in\big[|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n},|\Omega|\big)}\tau^\alpha g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}\tau\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}\\
&\approx\Bigg\|s^{-1+\frac{m}{n}+\frac{d}{n}(1-\alpha)}\sup\limits_{\tau\in\big[|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n},|\Omega|\big)}\tau^\alpha g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}\tau\Big)\Bigg\|_{X_Y'(0,|\Omega|)}\\
&\approx\Bigg\|\sup\limits_{\tau\in\big[s,|\Omega|\big)}\tau^{\alpha\frac{d}{n}} g^*\Bigg(\frac{\nu(\overline{\Omega})}{|\Omega|^\frac{d}{n}}\tau^\frac{d}{n}\Bigg)\Bigg\|_{X_Y'(0,|\Omega|)}\lesssim\Bigg\|s^{\alpha\frac{d}{n}} g^*\Bigg(\frac{\nu(\overline{\Omega})}{|\Omega|^\frac{d}{n}}s^\frac{d}{n}\Bigg)\Bigg\|_{X_Y'(0,|\Omega|)}\\
&\leq\Bigg\|s^{\alpha\frac{d}{n}} g^{**}\Bigg(\frac{\nu(\overline{\Omega})}{|\Omega|^\frac{d}{n}}s^\frac{d}{n}\Bigg)\Bigg\|_{X_Y'(0,|\Omega|)}\approx\Bigg\|s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}g^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\Bigg\|_{X_Y'(0,|\Omega|)}
\end{align*}
for every $g\in\mathcal M(0,\nu(\overline{\Omega}))$, where the last but one inequality follows from a simple modification of \citep[Lemma~4.10]{EMMP:20}. Hence, combining these two chains of inequalities with \eqref{prop:optimalpair:odhadnasupremalni} and \eqref{prop:optimalpair:optrangeforoptdomain}, we obtain that $T$ is bounded on $Y'(0,\nu(\overline{\Omega}))$.
Finally, assume that $T$ is bounded on $Y'(0,\nu(\overline{\Omega}))$. Let $Z(\overline{\Omega}, \nu)$ be the optimal target space for $X_Y(\Omega)$ in the Sobolev embedding $W^mX_Y(\Omega)\hookrightarrow Z(\overline{\Omega}, \nu)$. Its existence is guaranteed by \citep[Theorem~4.4]{CPS:20}. Since $X_Y(\Omega)$ is the optimal domain space for $Y(\overline{\Omega}, \nu)$ and $Z(\overline{\Omega}, \nu)$ is the optimal target space for $X_Y(\Omega)$, we have that $Z(\overline{\Omega}, \nu)\hookrightarrow Y(\overline{\Omega}, \nu)$. Hence, for every $f\in\mathcal M(\Omega)$,
\begin{equation}\label{prop:optimalpair:equiYZLHS}
\begin{aligned}
&\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))}\\
&\lesssim\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Z(0,\nu(\overline{\Omega}))}\\
&\lesssim\|f^*\|_{X_Y(0,|\Omega|)} = \|f\|_{X_Y(\Omega)}
\end{aligned}
\end{equation}
thanks to \eqref{prop:optimalpair:reductionprinciple} with $Z_1(\Omega)=X_Y(\Omega)$ and $Z_2(\overline{\Omega}, \nu)=Z(\overline{\Omega}, \nu)$. Now, fix any $f\in\mathcal M(\Omega)$ and let $g\in\mathcal Mpl(0,|\Omega|)$ be equimeasurable with $f$. Observe that, for each $h\in\mathcal M(0,\nu(\overline{\Omega}))$, the function
\begin{equation*}
(0,|\Omega|)\ni s\mapsto s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}Th\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t
\end{equation*}
is nonincreasing on $(0,|\Omega|)$, for it is a constant multiple of the integral mean of a nonincreasing function with respect to the measure $t^{-\alpha}\,{\fam0 d} t$. Using the boundedness of $T$ on $Y'(0,\nu(\overline{\Omega}))$ in the final inequality, we obtain that
\begin{align*}
&\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}g\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))}\\
&=\sup\limits_{\|h\|_{Y'(0,\nu(\overline{\Omega}))}\leq1}\int_0^{\nu(\overline{\Omega})}h^*(t)\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}g\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\,{\fam0 d} t\\
&\approx\sup\limits_{\|h\|_{Y'(0,\nu(\overline{\Omega}))}\leq1}\int_0^{|\Omega|}g(s)s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}h^*\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\,{\fam0 d} s\\
&\lesssim\sup\limits_{\|h\|_{Y'(0,\nu(\overline{\Omega}))}\leq1}\int_0^{|\Omega|}g(s)s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}Th\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\,{\fam0 d} s\\
&\leq\sup\limits_{\|h\|_{Y'(0,\nu(\overline{\Omega}))}\leq1}\int_0^{|\Omega|}f^*(s)s^{-1+\frac{m}{n}}\int_0^{|\Omega|^{1-\frac{d}{n}}s^\frac{d}{n}}Th\Big(\frac{\nu(\overline{\Omega})}{|\Omega|}t\Big)\,{\fam0 d} t\,{\fam0 d} s\\
&\approx\sup\limits_{\|h\|_{Y'(0,\nu(\overline{\Omega}))}\leq1}\int_0^{\nu(\overline{\Omega})}Th(t)\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\,{\fam0 d} t\\
&\leq\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))}\sup\limits_{\|h\|_{Y'(0,\nu(\overline{\Omega}))}\leq1}\|Th\|_{Y'(0,\nu(\overline{\Omega}))}\\
&\lesssim\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))},
\end{align*}
where the equality follows from \eqref{prel:assocnormwithstars}, the equivalences follow from Fubini's theorem coupled with a change of variables, the second inequality follows from \eqref{prel:HL} and the equimeasurability of $g$ with $f$, and the last but one inequality follows from \eqref{prel:holder}. Hence
\begin{equation}\label{prop:optimalpair:equiYZRHS}
\|f\|_{X_Y(\Omega)}\lesssim\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))}
\end{equation}
for every $f\in\mathcal M(\Omega)$. Combining \eqref{prop:optimalpair:equiYZLHS} and \eqref{prop:optimalpair:equiYZRHS}, we obtain that
\begin{equation}\label{prop:optimalpair:equiYZ}
\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Y(0,\nu(\overline{\Omega}))}\approx\Bigg\|\int_{\nu(\overline{\Omega})^{1-\frac{n}{d}}t^\frac{n}{d}}^{\nu(\overline{\Omega})}f^*\Big(\frac{|\Omega|}{\nu(\overline{\Omega})}s\Big)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Bigg\|_{Z(0,\nu(\overline{\Omega}))}
\end{equation}
for every $f\in\mathcal M(\Omega)$. Let $v$ be a simple function on $(0,\nu(\overline{\Omega}))$ having the form $v=\sum\limits_{j=0}^Nc_j\chi_{(0,t_j)}$, where $N\in\mathbb{N}$, $c_j>0$, $0<t_1<\cdots<t_N<\nu(\overline{\Omega})$. By straightforwardly modifying \citep[Lemma~4.9]{EMMP:20} and using \eqref{prop:optimalpair:equiYZ}, we have that
\begin{equation}\label{prop:optimalpair:YZonsimple}
\|v\|_{Y(0,\nu(\overline{\Omega}))}\approx\|v\|_{Z(0,\nu(\overline{\Omega}))}.
\end{equation}
Since every nonnegative, nonincreasing function on $(0,\nu(\overline{\Omega}))$ is the pointwise limit of a nondecreasing sequence of simple functions of this form, it follows from \eqref{prop:optimalpair:YZonsimple} and property (P3) of rearrangement-invariant function norms that
\begin{equation*}
\|g^*\|_{Y(0, \nu(\overline{\Omega}))}\approx\|g^*\|_{Z(0, \nu(\overline{\Omega}))}
\end{equation*}
for every $g\in\mathcal M(\overline{\Omega}, \nu)$. Hence $Y(\overline{\Omega}, \nu)=Z(\overline{\Omega}, \nu)$, that is, $Y(\overline{\Omega}, \nu)$ is the optimal target space for $X_Y(\Omega)$.
\end{proof}
\begin{remark}\label{rem:boundednessofT}
Note that, in the special case where $d=n-m$, the operator $T_{d,m,n}$ collapses into the identity operator, and thus it is plainly bounded on any rearrangement-invariant space.
When $Y(\overline{\Omega}, \nu)=L^{p,q;\alpha}(\overline{\Omega}, \nu)$ is a Lorentz--Zygmund space, the operator $T_{d,m,n}$ is bounded on $Y'(0, \nu(\overline{\Omega}))$ if and only if $p\in(\frac{d}{n-m},\infty]$, or $p=\frac{d}{n-m}$, $q=1$ and $\alpha\geq0$ (see~\cite[Proposition~5.4]{Mih:20}, cf.~\citep[Theorem~3.2]{GOP}).
\end{remark}
\begin{theorem}\label{thm:answtoq:noncompactintolorentzendpoint}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. Assume that $Z(\overline{\Omega}, \nu)$ is a rearrangement-invariant space over
$(\overline{\Omega}, \nu)$ and $\Lambda_{\psi}(\overline{\Omega}, \nu)$ is a Lorentz endpoint space such that $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Z}(t)} = 0$. Assume that
$Z(\overline{\Omega}, \nu)\not\subseteq\Lambda_{\psi}(\overline{\Omega}, \nu)$. Furthermore, assume that the operator $T_{d,m,n}$, defined by \eqref{prop:optimalpair:defsupop}, is bounded on both
$M_{\frac{t}{\psi(t)}}(0,\nu(\overline{\Omega}))$ and $Z'(0,\nu(\overline{\Omega}))$. Let $X_{\Lambda_{\psi}}(\Omega)$ and $X_{Z}(\Omega)$ be the optimal domain spaces in $W^{m}X(\Omega)\hookrightarrow
\Lambda_{\psi}(\overline{\Omega}, \nu)$ and $W^{m}X(\Omega)\hookrightarrow Z(\overline{\Omega}, \nu)$, respectively. Set $X(\Omega)=X_{\Lambda_{\psi}}(\Omega)\cap X_{Z}(\Omega)$ and
$Y_X(\overline{\Omega}, \nu)=\Lambda_{\psi}(\overline{\Omega}, \nu)\cap Z(\overline{\Omega}, \nu)$. The following facts are true:
\begin{itemize}
\item The spaces $X(\Omega)$ and $Y_X(\overline{\Omega}, \nu)$ are mutually optimal in $W^mX(\Omega)\hookrightarrow Y_X(\overline{\Omega}, \nu)$;
\item $W^mX(\Omega)\hookrightarrow \Lambda_\psi(\overline{\Omega}, \nu)$ non-compactly;
\item $Y_X(\overline{\Omega}, \nu)\hookrightarrow\Lambda_{\psi}(\overline{\Omega}, \nu)$;
\item $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Y_X}(t)}=0$.
\end{itemize}
\end{theorem}
\begin{proof}
First, as $\Lambda_{\psi}(\overline{\Omega}, \nu)$ has absolutely continuous norm, we have that $\left(\Lambda_{\psi}(\overline{\Omega}, \nu)\cap Z(\overline{\Omega}, \nu)\right)'=M_{\frac{t}{\psi(t)}}(\overline{\Omega}, \nu)+Z'(\overline{\Omega}, \nu)$ (see~\citep[Chapter~3, Exercise~5]{BS}). Since the supremum operator $T_{d,m,n}$ is bounded on both $M_{\frac{t}{\psi(t)}}(0,\nu(\overline{\Omega}))$ and $Z'(0,\nu(\overline{\Omega}))$, it is also bounded on $M_{\frac{t}{\psi(t)}}(0,\nu(\overline{\Omega})) + Z'(0,\nu(\overline{\Omega}))$. Thanks to \hyperref[prop:optimalpair]{Proposition~\ref*{prop:optimalpair}}, we have that the spaces in the embeddings $W^mX_{\Lambda_{\psi}\cap Z}(\Omega)\hookrightarrow\Lambda_{\psi}(\overline{\Omega}, \nu)\cap Z(\overline{\Omega}, \nu)$, $W^mX_{\Lambda_{\psi}}(\Omega)\hookrightarrow\Lambda_{\psi}(\overline{\Omega}, \nu)$ and $W^mX_{Z}(\Omega)\hookrightarrow Z(\overline{\Omega}, \nu)$ are mutually optimal in each of these embeddings. Moreover, it follows from \eqref{prop:optimalpair:optimaldomainfunc} that $X_{\Lambda_{\psi}\cap Z}(\Omega)=X_{\Lambda_{\psi}}(\Omega)\cap X_{Z}(\Omega)$. Hence the spaces $X(\Omega)$ and $Y_X(\overline{\Omega}, \nu)$ are mutually optimal in the embedding $W^mX(\Omega)\hookrightarrow Y_X(\overline{\Omega}, \nu)$, where $X(\Omega)=X_{\Lambda_{\psi}}(\Omega)\cap X_{Z}(\Omega)$ and $Y_X(\overline{\Omega}, \nu)=\Lambda_{\psi}(\overline{\Omega}, \nu)\cap Z(\overline{\Omega}, \nu)$.
Next, as $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Z}(t)} = 0$ and $\varphi_{Y_X}\approx\max\{\psi,\varphi_Z\}$, we have that $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Y_X}(t)}=0$.
Last, thanks to the fact that $Z(\overline{\Omega}, \nu)\not\subseteq\Lambda_{\psi}(\overline{\Omega}, \nu)$, the embedding $\Lambda_{\psi}(\overline{\Omega}, \nu)\cap Z(\overline{\Omega}, \nu)\hookrightarrow\Lambda_{\psi}(\overline{\Omega}, \nu)$ is not almost compact (\citep[Lemma~3.7]{F-MMP:10}), and so $W^{m}X(\Omega)\hookrightarrow \Lambda_\psi(\overline{\Omega}, \nu)$ non-compactly owing to \citep[Theorem~4.1]{CM:20}.
\end{proof}
We conclude this paper with a concrete application of \hyperref[thm:answtoq:noncompactintolorentzendpoint]{Theorem~\ref*{thm:answtoq:noncompactintolorentzendpoint}}, which covers a relatively wide class of function spaces.
\begin{theorem}
Let $\Omega\subseteq\R^n$ be a bounded Lipschitz domain, $m\in\mathbb{N}$, $m<n$, $d\in[n-m,n]$ and $\nu$ a $d$-Ahlfors measure on $\overline{\Omega}$. Let $p\in(\frac{d}{n-m},\infty]$ and $q\in(1,\infty]$. Let $\alpha\in\mathbb{R}$ if $p<\infty$, or $\alpha+1<0$ if $p=\infty$. Set
\begin{equation*}
X(\Omega)=\begin{cases}
L^{\frac{np}{d+mp},1;\alpha}(\Omega)\cap L^{\frac{np}{d+mp},q;\alpha+1-\frac1{q}}(\Omega)\quad&\text{if $p\in(\frac{d}{n-m},\infty)$,}\\
L^{\frac{n}{m},1;\alpha+1}(\Omega)\cap X_q(\Omega)\quad&\text{if $p=\infty$ and $\alpha+1<0$},
\end{cases}
\end{equation*}
where $X_q(\Omega)$ is the rearrangement-invariant space over $\Omega$ satisfying
\begin{equation*}
\|f\|_{X_q(\Omega)}\approx\Big\|t^{-\frac1{q}}\ell^{\alpha+1-\frac1{q}}(t)\ell\ell^{1-\frac1{q}}(t)\int_{|\Omega|^{1-\frac{n}{d}}t^\frac{n}{d}}^{|\Omega|}f^*(s)s^{-1+\frac{m}{n}}\,{\fam0 d} s\Big\|_{L^q(0,|\Omega|)}.
\end{equation*}
Set
\begin{equation*}
Y_X(\overline{\Omega}, \nu)=\begin{cases}
L^{p,1;\alpha}(\overline{\Omega}, \nu)\cap L^{p,q;\alpha+1-\frac1{q}}(\overline{\Omega}, \nu)\quad&\text{if $p\in(\frac{d}{n-m},\infty)$,}\\
L^{\infty,1;\alpha}(\overline{\Omega}, \nu)\cap L^{\infty, q;\alpha+1-\frac1{q},1-\frac1{q}}(\overline{\Omega}, \nu)\quad&\text{if $p=\infty$ and $\alpha+1<0$}.
\end{cases}
\end{equation*}
The following facts are true:
\begin{itemize}
\item The spaces $X(\Omega)$ and $Y_X(\overline{\Omega}, \nu)$ are mutually optimal in $W^mX(\Omega)\hookrightarrow Y_X(\overline{\Omega}, \nu)$;
\item $W^mX(\Omega)\hookrightarrow \Lambda_\psi(\overline{\Omega}, \nu)$ non-compactly, where $\Lambda_\psi(\overline{\Omega}, \nu)$ is the Lorentz endpoint space whose fundamental function is equivalent to
\begin{equation}\label{thm:answtoq:noncompactintolorentzendpointLZ:psi}
\psi(t)\approx\begin{cases}
t^\frac1{p}\ell(t)^{\alpha}\quad&\text{if $p\in(\frac{d}{n-m},\infty)$,}\\
\ell\ell(t)^{\alpha+1}\quad&\text{if $p=\infty$ and $\alpha+1<0$;}
\end{cases}
\end{equation}
\item $Y_X(\overline{\Omega}, \nu)\hookrightarrow\Lambda_\psi(\overline{\Omega}, \nu)$;
\item $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Y_X}(t)}=0$.
\end{itemize}
\end{theorem}
\begin{proof}
The assertion follows from \hyperref[thm:answtoq:noncompactintolorentzendpoint]{Theorem~\ref*{thm:answtoq:noncompactintolorentzendpoint}} with
\begin{equation*}
Z(\overline{\Omega}, \nu)=\begin{cases}
L^{p,q;\alpha+1-\frac1{q}}(\overline{\Omega}, \nu)\quad&\text{if $p\in(\frac{d}{n-m},\infty)$,}\\
L^{\infty, q;\alpha+1-\frac1{q},1-\frac1{q}}(\overline{\Omega}, \nu)\quad&\text{if $p=\infty$ and $\alpha+1<0$,}
\end{cases}
\end{equation*}
and $\psi$ equal to a concave function on $[0,\nu(\overline{\Omega})]$ satisfying \eqref{thm:answtoq:noncompactintolorentzendpointLZ:psi}. Note that $\Lambda_\psi(\overline{\Omega}, \nu)=L^{p,1;\alpha}(\overline{\Omega}, \nu)$.
As for the assumptions of \hyperref[thm:answtoq:noncompactintolorentzendpoint]{Theorem~\ref*{thm:answtoq:noncompactintolorentzendpoint}} being satisfied, since (e.g.~\cite[Lemma~3.14]{OP})
\begin{equation*}
\varphi_{Z}(t)\approx\begin{cases}
t^\frac1{p}\ell(t)^{\alpha+1-\frac1{q}}\quad&\text{if $p\in(\frac{d}{n-m},\infty)$,}\\
\ell(t)^{\alpha+1}\ell\ell(t)^{1-\frac1{q}}\quad&\text{if $p=\infty$ and $\alpha+1<0$,}
\end{cases}
\end{equation*}
it follows that $\lim\limits_{t\to0^+}\frac{\psi(t)}{\varphi_{Z}(t)} = 0$. Furthermore, we have that $Z(\overline{\Omega}, \nu)\not\subseteq\Lambda_{\psi}(\overline{\Omega}, \nu)$ thanks to \cite[Theorem~4.5]{OP}. Next, as $p>\frac{d}{n-m}$, the supremum operator $T_{d,m,n}$ is bounded on the associate spaces of $Z(0, \nu(\overline{\Omega}))$ and $L^{p,1;\alpha}(0, \nu(\overline{\Omega}))$ (see \hyperref[rem:boundednessofT]{Remark~\ref*{rem:boundednessofT}}). Finally, the description of $X_{L^{p,1;\alpha}}(\Omega)$ and $X_Z(\Omega)$ can be obtained similarly to \citep[Theorem~5.3]{Mih:20}.
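For the case $p\in(\frac{d}{n-m},\infty)$ the limit can also be confirmed with a short computer algebra sketch (Python/sympy, with the illustrative choices $\alpha=1$ and $q=2$, and the parametrization $t=e^{-u}$ so that $\ell(t)=1+u$ and $t\to0^+$ becomes $u\to\infty$; the common factor $t^{\frac1p}$ cancels and is omitted):
\begin{verbatim}
import sympy as sp

# Illustrative check of lim_{t->0+} psi(t)/varphi_Z(t) = 0 for finite p:
# psi ~ t^{1/p} ell(t)^alpha,  varphi_Z ~ t^{1/p} ell(t)^{alpha+1-1/q}.
u = sp.Symbol('u', positive=True)   # t = exp(-u), so ell(t) = 1 + u
alpha = sp.Integer(1)               # illustrative value
q = sp.Integer(2)                   # any q > 1; illustrative value
ell = 1 + u
ratio = ell**alpha / ell**(alpha + 1 - 1/q)
print(sp.limit(sp.powsimp(ratio), u, sp.oo))   # prints 0
\end{verbatim}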
\end{proof}
\paragraph{Acknowledgments}
The second author would like to express his sincere gratitude to the Fulbright Program for supporting him and giving him the opportunity to visit the first author at the Ohio State University as a Fulbrighter. The research we conducted during the visit led us to the questions this paper deals with.
\end{document}
\begin{document}
\renewcommand{\baselinestretch}{1.5}
\begin{center}{\bf\LARGE Some Rules of Elementary Transformation in $B^{+}(E,F)$ and their applications}\\
\vskip 0.5cm Ma Jipu$^{1,2}$
\end{center}
{\bf Abstract}\quad \small{Let $E,F$ be two Banach spaces, $B(E,F)$
the set of all bounded linear operators from $E$
into $F$, and $B^+(E,F)$ the set of double splitting operators in
$B(E,F)$.
In this paper, we present some rules of elementary transformations in $B^+(E,F)$, consisting of five
theorems. Let $\Phi_{m,n}$ be the set of all Fredholm operators $T$
in $B(E,F)$ with dim$N(T)=m$ and codim$R(T)=n$, and $F_k=\{T\in
B(E,F):$rank$T=k<\infty\}$. Applying the rules we prove
that $F_k(k<$dim$F)$ and $\Phi_{m,n}(n>0)$ are path connected, so
that they are not only smooth submanifolds in $B(E,F)$ with tangent
space $M(X)=\{T\in B(E,F):TN(X)\subset R(X)\}$ at $X$ in them, but
also path connected. Hereby we obtain the following
topological construction: $B(\mathbf{R}^m,\mathbf{R}^n)=\bigcup^n\limits_{k=0}F_k(m\geq n),
B(\mathbf{R}^m,\mathbf{R}^n)=\bigcup^m\limits_{k=0}F_k(m<n),F_k$ is
path connected and smooth sub-hypersurface in
$B(\mathbf{R}^m,\mathbf{R}^n)(k<n),$ and especially
dim$F_k=(m+n-k)k$ for $k=0,1,\cdots,n$. Finally we introduce an
equivalence relation in $B^{+}(E,F)$ and prove that the equivalence
class $\tilde{T}$ generated by $T$ is path connected for any $T\in
B^+(E,F)$ with $R(T)\varsubsetneq F$.
}
{\bf Key words}\quad Elementary Transformation, Path Connected Set
of Operators, Dimension, Sub-hypersurface, Smooth Submanifold,
Equivalence Relation.
\textbf{2000 Mathematics Subject Classification:}\quad 47B38, 46T20,
58A05, 15A09.
\vskip 0.2cm{\bf 1\quad Introduction }\vskip 0.2cm
Let $E,F$ be two Banach spaces, $B(E,F)$ the set of all linear
bounded operators from $E$ into $F$, and $B^+(E,F)$ the set of all
double splitting operators in $B(E,F)$.
It is well known that the elementary transformation of a matrix
is a power tool in matrix theory.
For example, by the elementary transformation one can show that $F_k=\{T\in
B(\mathbf{R}^n):$rank$T=k\}(k<n)$ is path connected. When
dim$E=$dim$F=\infty$ there is no such elementary transformation in
$B(E,F)$, so it is difficult to prove that $F_k(k<$dim$F)$ in
$B(E,F)$ is path connected. In this paper, we present some rules of
elementary transformation in $B^+(E,F)$, which consist of five
theorems, see Section 2. Using the rules we prove that $F_k(k<$dim$F)$ and
$\Phi_{m,n}(n>0)$ are path connected, where $\Phi_{m,n}$
denotes the set of all Fredholm operators $T$ in $B(E,F)$ with
dim$N(T)=m$ and codim$R(T)=n$. Then by Theorem 4.2 in [Ma4] we
obtain the following result: $F_k(k<$dim$F)$ and $\Phi_{m,n}(n>0)$
are not only smooth submanifolds in $B(E,F)$ with tangent space
$M(X)=\{T\in B(E,F):TN(X)\subset R(X)\}$ at any $X$ in them, but
also path connected. As an application we have that
$B(\mathbf{R}^m,\mathbf{R}^n)=\bigcup^n\limits_{k=0}F_k(m\geq n),
B(\mathbf{R}^m,\mathbf{R}^n)=\bigcup^m\limits_{k=0}F_k(m<n),F_k(k<n)$,
is path connected and a smooth sub-hypersurface in
$B(\mathbf{R}^m,\mathbf{R}^n)$ and especially, dim$F_k=(m+n-k)k$
for $k=0,1,\cdots,n$. Finally we introduce an equivalence relation
in $B^+(E,F)$ and prove that the equivalence class $\tilde{T}$
generated by $T$ is path connected for $T\in B^+(E,F)$ with
$R(T)\varsubsetneq F.$
\vskip 0.2cm\begin{center}{\bf 2\quad Some Rules of Elementary Transformation in $B^{+}(E,F)$}\end{center}\vskip 0.2cm
In this section, we will introduce some rules of elementary
transformation in $B^{+}(E,F)$, which consist of five theorems. It
is useful to trace how these rules of elementary
transformation carry over from Euclidean space to Banach space.
{\bf Theorem 2.1}\quad{\it If $E = E_{1}\oplus R = E_{*}\oplus R$ , then the following conclusions hold:
$(i)$ there exists a unique $\alpha\in B(E_{*}, R)$ such that
$$
E_{1} = \left\{x + \alpha x: \forall x\in E_{*}\right\};\eqno(2.1)
$$}
conversely, for any $\alpha\in B(E_{*}, R)$ the subspace $E_{1}$ defined by $(2.1)$ satisfies $E = E_{1}\oplus R$;
$(ii)$ $$
P^{R}_{E_{1}} = P^{R}_{E_{*}} + \alpha P^{R}_{E_{*}}\,\,\mathrm{and\ so}\,\,P_{R}^{E_{1}} = P_{R}^{E_{*}} - \alpha P^{R}_{E_{*}}.
$$
{\bf Proof}\quad For the proof of $(i)$ see [Ma3] and [Abr].
Obviously,
$\left(P^{R}_{E_{*}} + \alpha P^{R}_{E_{*}}\right)^{2} = P^{R}_{E_{*}} + \alpha P^{R}_{E_{*}},$ and
$$P^{R}_{E_{*}}x + \alpha P^{R}_{E_{*}}x = 0\,\, \mathrm{for}\,\, x\in E\Leftrightarrow P^{R}_{E_{*}}x=0
\Leftrightarrow x\in R$$
Then by (2.1), one concludes
$$
P^{R}_{E_{1}} = P^{R}_{E_{*}} + \alpha P^{R}_{E_{*}},\,\,\mathrm{and\, so,}\,\,P_{R}^{E_{1}} = P_{R}^{E_{*}} - \alpha P^{R}_{E_{*}}.
$$
The proof ends. \quad $\Box$
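The formula in (ii) can be checked on a minimal finite dimensional example; the following Python (numpy) sketch takes $E=\mathbf{R}^2$, $E_{*}={\rm span}\{e_1\}$, $R={\rm span}\{e_2\}$ and $\alpha(xe_1)=cxe_2$, where the value of $c$ is an arbitrary illustrative choice:
\begin{verbatim}
import numpy as np

# Minimal 2-D illustration of Theorem 2.1 (ii); c is an arbitrary choice.
c = 0.7
P_Estar = np.array([[1.0, 0.0], [0.0, 0.0]])        # projection onto E_* along R
alpha_P = np.array([[0.0, 0.0], [c,   0.0]])        # alpha composed with P^R_{E_*}

P_E1 = P_Estar + alpha_P                            # claimed projection onto E_1 along R
assert np.allclose(P_E1 @ P_E1, P_E1)               # idempotent
e1_tilted = np.array([1.0, c])                      # a vector spanning E_1
assert np.allclose(P_E1 @ e1_tilted, e1_tilted)     # fixes E_1
assert np.allclose(P_E1 @ np.array([0.0, 1.0]), 0)  # kills R
print("Theorem 2.1 (ii) verified on a 2-D example")
\end{verbatim}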
Let $B^{+}(E)$ be the set of all double splitting operators in $B(E)$ and $C_{r}(R) = \left\{T\in B^{+}(E): E = R(T) \oplus R\right\}$.
{\bf Theorem 2.2}\quad{\it Suppose that $E = E_{*}\oplus R$ and $\dim R > 0 .$ Then $P^{R}_{E_{*}}$ and $(-P^{R}_{E_{*}})$
are path connected in the set $\left\{T\in C_{r}(R): N(T) = R\right\}.$}
{\bf Proof}\quad Due to $\dim R > 0,$ one can assume that
$B(E_{*},R)$ contains a non-zero operator $\alpha$, otherwise
$E_*=\{0\}$ and the theorem is trivial. Let $E_{1} = \left\{x+ \alpha
x:\forall x\in E_{*} \right\}.$ Then by Theorem 2.1, $E =
E_{1}\oplus R$ and
$$
P^{R}_{E_{1}} = P^{R}_{E_{*}} + \alpha P^{R}_{E_{*}}. \eqno(2.2)
$$
Consider the path
$$
P(\lambda)= (1-2\lambda)
P^{R}_{E_{*}}+(1-\lambda)\alpha P^{R}_{E_{*}},\,\,\,\, 0\leq\lambda\leq1.
$$
Clearly,
$$
R(P(\lambda))= R(P^{R}_{E_{*}} + \frac{1-\lambda}{1-2\lambda}\alpha
P^{R}_{E_{*}}),\quad\,\,\,0\leq\lambda\leq1.
$$
Then by Theorem 2.1, $R(P(\lambda))\oplus R = E$, i.e., $P(\lambda)\in
C_{r}(R)$ for all $\lambda\in [0,1]$. In addition, $P(0)=P^R_{E_1},
P(1)=-P^R_{E_*}$ and $N(P(\lambda))=R,\forall\lambda\in[0,1]$. This
shows that $P^{R}_{E_{1}}$ and $-P^{R}_{E_{*}}$ are path connected
in $\left\{T\in C_{r}(R): N(T) = R\right\}.$
We next show that $P^{R}_{E_{1}}$ and $P^{R}_{E_{*}}$ are path connected in
$\left\{T\in C_{r}(R): N(T) = R\right\}.$ Consider the path
$$
P(\lambda) = P^{R}_{E_{*}} + \lambda\alpha P^{R}_{E_{*}}\,\,\,\,0\leq\lambda\leq1.
$$
By Theorem 2.1, $P(1) = P^{R}_{E_{*}} + \alpha P^{R}_{E_{*}}= P^{R}_{E_{1}},$
and $R(P(\lambda))\oplus R = E\,\,\,\,\forall \lambda\in [0,1],$ where $E_{1} = \left\{x+\alpha x: \forall x\in E_{*}\right\}.$
Obviously, $N(P(\lambda))= R$ and $P(0)= P^{R}_{E_{*}}$. Therefore
$P^{R}_{E_{1}}$ and $P^{R}_{E_{*}}$ are path connected in
$\left\{T\in C_{r}(R): N(T) = R\right\}.$ Thus the theorem is
proved. \quad $\Box$
For simplicity, still write $C_{r}(N)=\left\{T\in B^{+}(E,F):
R(T)\oplus N= F\right\}$ in the sequel.
{\bf Theorem 2.3}\quad{\it Suppose $T_{0}\in C_{r}(N)$ and $F =
F_{*}\oplus N$. Then $T_{0}$ and $P^{N}_{F_{*}}T_{0}$ are path
connected in the set $\left\{T\in C_{r}(N): N(T) =
N(T_{0})\right\}.$}
{\bf Proof}\quad One can assume $R(T_{0}) \neq F_{*}$, otherwise the
theorem is trivial. Then by Theorem 2.1, there exists a non-zero
operator $\alpha\in B(F_{*},N)$ such that
$$
R(T_{0})=\left\{y+\alpha y : \forall y\in F_{*}\right\},\,\,\,P^{N}_{R(T_{0})}= P^{N}_{F_{*}}+\alpha P^{N}_{F_{*}},
$$
and
$P_{N}^{R(T_{0})}= P_{N}^{F_{*}}-\alpha P^{N}_{F_{*}}.$ So
$$
T_{0}=\left\{P^{N}_{F_{*}}+\alpha P^{N}_{F_{*}}\right\}T_{0}.\eqno(2.3)
$$
Let
$$
F_{\lambda}=\left\{y+\lambda\alpha y : \forall y\in
F_{*}\right\}\,\,\,\,\mathrm{for\ all}\,\,\,\,\lambda\in [0,1].
$$
Note $\lambda\alpha\in B(F_{*},N).$ According to Theorem 2.1 we
also have
$$
P^{N}_{F_{\lambda}}=P^{N}_{F_{*}}+\lambda\alpha
P^{N}_{F_{*}}\quad\mbox{and}\quad
P^{F_{\lambda}}_{N}=P^{F_{*}}_{N}-\lambda\alpha P^{N}_{F_{*}}
,\quad\quad \forall\lambda\in[0,1].$$
Consider the path
$$
P(\lambda)=P^{N}_{F_{\lambda}}T_{0},\;\;\forall \lambda\in[0,1].
$$
Because $F=R(T_{0})\oplus N=F_{\lambda}+N$, one observes
$$
R(P(\lambda))=F_{\lambda},\;\;\forall \lambda\in[0,1],
$$
and so
$$
R(P(\lambda))\oplus N=F,\quad{\rm i.\ e.,}\quad P(\lambda)\in
C_r(N).
$$
Note
$$
y\in N(P(\lambda))\Leftrightarrow P^{N}_{F_{\lambda}}T_{0}y=0\Leftrightarrow T_{0}y\in N\Leftrightarrow y\in N(T_{0})
$$
i.e., $N(P(\lambda))=N(T_{0}), \forall \lambda\in[0,1].$ Thus
$$
P(\lambda)\in \{T\in C_{r}(N):N(T)=N(T_{0})\},\;\lambda\in [0,1].
$$
In addition, $P(1)=T_{0}$ by (2.3), and $P(0)=P^{N}_{F_{*}}T_{0}$.
Finally we conclude that $T_{0}$ and $P^{N}_{F_{*}}T_{0}$ are path
connected in the set $\{T\in C_{r}(N):N(T)=N(T_{0})\}$. The proof
ends.\quad$\Box$
Let $C_{d}(R)=\{T\in B^{+}(E,F):E=N(T)\oplus R\}$.
{\bf Theorem 2.4}\quad {\it Suppose that $T_{0}\in C_{d}(R_{0})$ and
$E=E_{*}\oplus R_{0}.$ Then $T_{0}$ and $T_0P^{E_{*}}_{R_{0}}$ are
path connected in the set $\{T\in C_{d}(R_{0}):R(T)=R(T_{0})\}$}.
\textbf{Proof}\quad One can assume $E_{*}\neq N(T_{0}),$ otherwise
the theorem is trivial. Then by Theorem 2.1, there exists a non-zero
operator $\alpha\in B(E_{*},R_{0})$ such that
$$
P^{R_{0}}_{N(T_{0})}=P^{R_{0}}_{E_{*}}+\alpha P^{R_{0}}_{E_{*}},
$$
and so,
$$
T_{0}=T_{0}P^{N(T_{0})}_{R_{0}}=T_{0}(P^{E_{*}}_{R_{0}}-\alpha
P^{R_{0}}_{E_{*}}).\eqno(2.4)
$$
Consider the path as follows,
$$
P(\lambda)=T_{0}(P^{E_{*}}_{R_{0}}-\lambda\alpha
P_{E_{*}}^{R_{0}}),\;\;0\leq\lambda\leq1.
$$
Since $(P^{E_{*}}_{R_{0}}-\lambda\alpha
P_{E_{*}}^{R_{0}})x=x\;\forall x \in R_{0},$ we conclude
$R(P(\lambda))=R(T_{0})$. We also have
$N(P(\lambda))=R(P^{R_{0}}_{E_{*}}+\lambda\alpha
P^{R_{0}}_{E_{*}})$. Indeed,
$$
x\in N(P(\lambda))
\Longleftrightarrow (P^{E_{*}}_{R_{0}}-\lambda\alpha P^{R_{0}}_{E_{*}})x=0
\Longleftrightarrow x\in R(P^{R_{0}}_{E_{*}}+\lambda\alpha P^{R_{0}}_{E_{*}})
$$
since $\lambda\alpha\in B(E_{*},R_{0})$ for all $\lambda\in[0,1].$
Thus $P(\lambda)\in \{T\in C_{d}(R_{0}):R(T)=R(T_{0})\},
\forall\lambda\in[0,1].$ In addition, $P(1)=T_{0}$ by (2.4), and
$P(0)=T_{0}P^{E_{*}}_{R_{0}}.$ Then the theorem is
proved.\quad$\Box$
{\bf Theorem 2.5}\quad{\it Suppose that the subspaces $E_{1}$ and
$E_{2}$ in $E$ satisfy $dim E_{1}=dim E_{2}< \infty.$ Then $E_{1}$
and $E_{2}$ possess a common complement R, i.e.,$E=E_{1}\oplus R=
E_{2}\oplus R$.}
\textbf{Proof}\quad According to the assumption $dim E_{1}=dim
E_{2}<\infty$, we have the following decompositions:
$$
E=H\oplus(E_{1}+E_{2}),\;E_{1}=E^{*}_{1}\oplus(E_{1}\cap E_{2}),\;\mbox{and}\;E_{2}=E^{*}_{2}\oplus(E_{1}\cap E_{2}).
$$
It is easy to observe that $(E^{*}_{1}\oplus
E^{*}_{2})\cap(E_{1}\cap E_{2})=\{0\}$ and $dim E^{*}_{1}=dim
E^{*}_{2}<\infty.$ Indeed, if $e^{*}_{1}+e^{*}_{2}$ belongs to
$E_{1}\cap E_{2}$, for $e^{*}_{i}\in E^{*}_{i},\;i=1,2,$
then
$e_{2}^{*}=(e^{*}_{1}+e^{*}_{2})-e^{*}_{1}\in E_{1}$ and
$e_{1}^{*}=(e^{*}_{1}+e^{*}_{2})-e^{*}_{2}\in E_{2}$, so that
$e^{*}_{1}=e^{*}_{2}=0$ because of $E^{*}_{i}\cap(E_{1}\cap
E_{2})=\{0\},\,i=1,2.$ Hereby, one can see
$$
E_{1}+ E_{2}=(E^{*}_{1}\oplus E^{*}_{2})\oplus(E_{1}\cap
E_{2}).\eqno(2.5)
$$
We are now in a position to determine $R$. We may assume $dim
E^{*}_{1}=dim E^{*}_{2}>0,$ otherwise the theorem is trivial. Then
$B^{X}(E^{*}_{1},E^{*}_{2})$ contains an operator $\alpha$, which
gives rise to the subspace $H_{1}$ defined as follows,
$$
H_{1}=\{x+\alpha x:\forall x\in E^{*}_{1}\}=\{x+\alpha^{-1}x:\forall x\in E^{*}_{2}\}.
$$
By Theorem 2.1,
$$
E^{*}_{1}\oplus E^{*}_{2}=H_{1}\oplus E^{*}_{2}=H_{1}\oplus E^{*}_{1}
$$
Finally, according to (2.5) we have
$$
E=H\oplus(E_{1}+ E_{2})=H\oplus(E^{*}_{1}\oplus E^{*}_{2})\oplus(E_{1}\cap E_{2})=H\oplus H_{1}\oplus E_{2}^{*}\oplus(E_{1}\cap E_{2})=H\oplus H_{1}\oplus E_{2}
$$
and
$$
E=H\oplus H_{1}\oplus E_{1}^{*}\oplus(E_{1}\cap E_{2})=H\oplus H_{1}\oplus E_{1}.
$$
This says $R=H\oplus H_{1}.$ The proof ends.\quad$\Box$
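The construction of the common complement $R=H\oplus H_{1}$ can be traced on a small concrete example; the following Python (numpy) sketch takes $E=\mathbf{R}^4$, $E_{1}={\rm span}\{e_1,e_2\}$ and $E_{2}={\rm span}\{e_2,e_3\}$ (all concrete choices are illustrative):
\begin{verbatim}
import numpy as np

# Following the proof: H = span{e4}, H_1 = span{e1 + e3}, R = H + H_1.
e = np.eye(4)
E1 = np.column_stack([e[0], e[1]])
E2 = np.column_stack([e[1], e[2]])
R  = np.column_stack([e[3], e[0] + e[2]])    # candidate common complement

for Ei in (E1, E2):
    M = np.column_stack([Ei, R])
    assert np.linalg.matrix_rank(M) == 4     # E_i + R = E and E_i cap R = {0}
print("R is a common complement of E_1 and E_2 in R^4")
\end{verbatim}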
\vskip 0.2cm\begin{center}{\bf 3\quad Some
Applications}\end{center}\vskip 0.2cm
In this section we will give some applications of the rules in
Section 2.
{\bf Theorem 3.1}\quad{\it For $k < dim F$, $F_k$ is path
connected.}
{\bf Proof}\quad In what follows, we may assume $k>0$, otherwise,
the theorem is trivial. Let $T_1$ and $T_2$ be arbitrary two
operators in $F_k$, and
$$E=N(T_{i})\oplus R_{i},\quad\quad\quad i=1,2.$$
Then $0<$ dim $R_{1}=$dim $R_{2}=k<\infty$, and by Theorem 2.5,
there exists a subspace $N_{0}$
in $E$ such that
$$E=N_{0}\oplus R_{1}=N_{0}\oplus R_{2}\eqno(3.1)$$
Let
$$L_{i}x=\left\{\begin{array}{rcl}T_{i}x, & & {x\in R_{i}}\\
0, & & {x\in N_{0}}
\end{array}\right.$$
$i=1,2.$
We claim that $T_{i}$ and $L_{i}$ are path connected in $F_{k},\,i=1,2.$ Due to $(3.1)$, Theorem 2.1 shows that there exists an operator
$\alpha_{i}\in B(N_{0},R_{i})$ such that
$$
P^{N(T_{i})}_{R_{i}} = P^{N_{0}}_{R_{i}}- \alpha_{i}P^{R_{i}}_{N_{0}},\; i=1,2.
$$
Consider the following paths:
$$ P_{i}(\lambda)= T_{i}(P^{N_{0}}_{R_{i}}- \lambda\alpha_{i}P^{R_{i}}_{N_{0}}),\;0\leq \lambda \leq 1 \,\mbox{and}\, i = 1,2.$$
Obviously, $P_{i}(0)=T_{i}P^{N_{0}}_{R_{i}}=L_{i},$ and
$P_{i}(1)=T_{i}(P^{N_{0}}_{R_{i}}-\alpha_{i}P^{R_{i}}_{N_{0}})=
T_{i}P^{N(T_{i})}_{R_{i}}=T_{i},\;i=1,2.$ We next show that
$R(P_i(\lambda))=T_{i}R_{i}=R(T_{i}).$
Evidently, $R(P^{N_{0}}_{R_{i}}-
\lambda\alpha_{i}P^{R_{i}}_{N_{0}})\subset R_{i},$ and the converse
rlation follows from $(P^{N_{0}}_{R_{i}}-
\lambda\alpha_{i}P^{R_{i}}_{N_{0}})r = r, \forall r\in R_{i}.$ This
says $P_{i}(\lambda)\in F_{k}$ for $0\leq \lambda \leq 1
\,\mbox{and}\, i = 1,2,$ so that $L_{i}$ and $T_{i}$ are path
connected in $F_{k},\; i=1,2.$ Hence, in what follows, we can assume
$N(T_{1})=N(T_{2})=N_{0}.$ On the other hand, according to Theorem
2.5 we have
$$F = R(T_{1})\oplus N_{*}= R(T_{2})\oplus N_{*}\eqno(3.2)$$
where $ N_{*}$ is a closed subspace in $F$. Then by Theorem 2.3, the
proof of the theorem turns to that of the following conclusion:
$P^{N_{*}}_{R(T_{1})}T_{2} $ and $T_{1}$ are path connected in
$F_{k}$. Let
$$T^+_{1}y=\left\{\begin{array}{rcl}\left(T_{1}|_{R_{1}}\right)^{-1}y, & & {y\in R(T_{1})}\\
0, & & {y\in N_{*}}
\end{array}\right.$$
It is easy to observe
$$
P^{N_{*}}_{R(T_{1})}T_{2} =
\left(P^{N_{*}}_{R(T_{1})}T_{2}T^{+}_{1}\right)T_{1}.\eqno(3.3)
$$
In fact, note $N(T_{1})=N(T_{2})=N_{0}$ and $T^{+}_{1}T_{1}=P^{N_{0}}_{R_{1}},$ then
$$
P^{N_{*}}_{R(T_{1})}T_{2} = P^{N_{*}}_{R(T_{1})}T_{2}\left(P^{N_{0}}_{R_{1}}+P_{N_{0}}^{R_{1}}\right)
= P^{N_{*}}_{R(T_{1})}T_{2}P^{N_{0}}_{R_{1}}
= P^{N_{*}}_{R(T_{1})}T_{2}T^{+}_{1}T_{1}.
$$
We claim $P^{N_{*}}_{R(T_{1})}T_{2}T^{+}_{1}|_{R(T_{1})}\in
B^{X}(R(T_{1})).$
Evidently,
\begin{eqnarray*}
&& P^{N_{*}}_{R(T_{1})}T_{2}T^{+}_{1}y=0\,\, \mbox{for}\, y\in R(T_{1})
\Leftrightarrow T_{2}T^{+}_{1}y\in N_{*}\\
&& \Leftrightarrow T^{+}_{1}y\in N_{0}\cap R_{1} \,\,(\mbox{because of (3.2)})\\
&& \Leftrightarrow T^{+}_{1}y=0\Leftrightarrow y=0,
\end{eqnarray*}
i.e., $P^{N_{*}}_{R(T_{1})}T_{2}T^{+}_{1}|_{R(T_{1})}$ is injective.
On the other hand, since the decomposition (3.2) implies
$P^{N_{*}}_{R(T_{1})}|_{R(T_{2})}\in B^{X}\left(R(T_{2}),
R(T_{1})\right)$, for any $y\in R(T_{1})$ there is a unique $r_{2}\in
R_{2}$ such that
$$
y
= P^{N_{*}}_{R(T_{1})}T_{2}r_{2}
= P^{N_{*}}_{R(T_{1})}T_{2}\left(P^{N_{0}}_{R_{1}}r_{2}+P_{N_{0}}^{R_{1}}r_{2}\right)
= P^{N_{*}}_{R(T_{1})}T_{2}P^{N_{0}}_{R_{1}}r_{2}.
$$
Then $y_{0} = T_{1}P^{N_{0}}_{R_{1}}r_{2}$ fulfills
$$
y = P^{N_{*}}_{R(T_{1})}T_{2}P^{N_{0}}_{R_{1}}r_{2} = P^{N_{*}}_{R(T_{1})}T_{2}T_{1}^{+}y_{0}.
$$
This says that $P^{N_*}_{R(T_1)}T_2T^{+}_1|_{R(T_1)}$ is surjective.
Hence $P^{N_*}_{R(T_1)}T_2T^{+}_1|_{R(T_1)}$ belongs to the set $B^X(R(T_1))$ of all
invertible operators in $B(R(T_1))$.
It is well known that $P^{N_*}_{R(T_1)}T_2T^{+}_1|_{R(T_1)}$ is
path connected with one of $-I_{R(T_1)}$ and $I_{R(T_1)}$ in
$B^X(R(T_1))$, where $I_{R(T_1)}$ denotes the identity on
$R(T_1)$.
Let $Q(t)$ be a path in $B^X(R(T_1))$ with
$Q(0)=P^{N_*}_{R(T_1)}T_2T_1^+|_{R(T_1)}$ and $Q(1)=-I_{R(T_1)}$ (or
$I_{R(T_1)})$. Then $Q_1(t)=Q(t)T_1$ is a path in the set $S=\{T\in
B(E,F):R(T)=R(T_1)$ and $N(T)=N(T_1)\}$ satisfying
$$Q_1(0)=P^{N_*}_{R(T_1)}T_2T^+_1T_1\quad{\rm and}\quad
Q_1(1)=-T_1\ ({\rm or}\ T_1).$$ Since $S\subset F_k$, it follows that
$P^{N_*}_{R(T_1)}T_2T^+_1T_1$ and $-T_1($or $T_1)$ are path
connected in $F_k$. By (3.3) we merely need to show the following
conclusion: if $P^{N_*}_{R(T_1)}T_2T^+_1T_1$ is path connected with
$-T_1$ in $F_k$, then $P^{N_*}_{R(T_1)}T_2T^+_1T_1$ and $T_1$ are
path connected in $F_k$. Note $F=R(T_1)\oplus N_*$ and dim$N_*>0$.
By Theorem 2.2, $P^{N_*}_{R(T_1)}$ and $-P^{N_*}_{R(T_1)}$ are path
connected in the set $\{T\in C_r(N_*):N(T)=N_*\}=\{T\in
B(F):F=R(T)\oplus N_*$ and $N(T)=N_*\}$. Let $Q(t)$ be such a path
in the set with $Q(0)=-P^{N_*}_{R(T_1)}$ and $Q(1)=P^{N_*}_{R(T_1)}.$
Since $F=R(Q(t))\oplus N_*$ and $N(Q(t))=N_*$ for all $t\in[0,1]$, we have
$R(Q(t)T_1)=Q(t)R(T_1)=Q(t)F=R(Q(t))$, so $Q(t)T_1$ for $t\in[0,1]$ belongs to
$F_k$, and satisfies $Q(0)T_1=-P^{N_*}_{R(T_1)}T_1=-T_1$ and
$Q(1)T_1=T_1.$ This says that $-T_1$ and $T_1$ are path connected in
$F_k$, so that the conclusion is proved. \quad$\Box$
It is obvious that if either dim$E$ or dim$F$ is finite, then
$B(E,F)$ consists of all finite rank operators. Hence we assume
dim$E=$dim$F=\infty$ in the sequel.
Let $\Phi_{m,n} = \left\{T\in
B^{+}(E,F): \dim N(T) = m <\infty\; \mbox{and} \;\mathrm{codim} R(T)
= n < \infty\right\}$, we have
{\bf Theorem 3.2}\quad{\it $\Phi_{m,n} , (n>0)$ is path
connected.}
{\bf Proof}\quad Let $T_{1}$ and $T_{2}$ be arbitrary two operators
in $\Phi_{m,n}$ and
$$
F = R(T_{1}) \oplus N_{1} = R(T_{2}) \oplus N_{2},
$$
i.e. $T_{1}\in C_{r}(N_{1})$ and $T_{2}\in C_{r}(N_{2})$.
Clearly, $\dim N_{1} = \dim N_{2} = n<\infty$. Then by Theorem 2.5,
there exists a subspace $F_{*}$ in $F$ such that
$$
F = R(T_{1}) \oplus N_{1} =F_{*} \oplus N_{1}\, \mathrm{and}\, F = R(T_{2}) \oplus N_{2} =F_{*} \oplus N_{2}.\eqno(3.4)
$$
So, by Theorem 2.3 the proof of the theorem turns to that of the
operators $P^{N_{1}}_{F_{*}}T_{1}$ and $P^{N_{2}}_{F_{*}}T_{2}$
being path connected in $\Phi_{m,n}.$ For simplicity, still write
$P^{N_1}_{F_*}T_1$ and $P^{N_2}_{F_*}T_2$ as $T_1$ and $T_2$,
respectively. However, here $R(T_1)=R(T_2)=F_*$ while $N(T_1),
N(T_2)$ keep invariant. Due to dim$N(T_1)=$dim$N(T_2)=m<\infty$,
Theorem 2.5 shows that there exists a subspace $R$ in $E$ such that
$$E=N(T_1)\oplus R=N(T_2)\oplus R.\eqno(3.5)$$
Then by Theorem 2.4, one can conclude that $T_2P^{N(T_1)}_R$ and
$T_2$ are path connected in $\Phi_{m,n}.$ Thus the proof of the
theorem turns once more to that of $T_2P^{N(T_1)}_R$ and $T_1$ being
path connected in $\Phi_{m,n}.$ In what follows, we do this.
Let
$$T_{1}^{+}y =\left\{\begin{array}{rcl}\left(T_{1}|_{R}\right)^{-1}y, & & {y\in F_{*}},\\
0, & & {y\in N_{1}}.
\end{array}\right.$$
Then
$$
T_{2}P^{N(T_{1})}_{R} = T_{2}T^{+}_{1}T_{1}.\eqno(3.6)
$$
It is easy to observe $T_{2}T^{+}_{1}\mid _{F_{*}}\in B^{X}(F_{*}).$ Indeed,
\begin{eqnarray*}
&& T_{2}T^{+}_{1}y=0\,\, \mbox{for}\, y\in F_{*}
\Leftrightarrow T^{+}_{1}y\in N(T_{2})\cap R\\
&& \Leftrightarrow T^{+}_{1}y = 0 \,\,\mbox{because of (3.5)} \Leftrightarrow y=0,
\end{eqnarray*}
i.e., $T_{2}T^{+}_{1}|_{F_{*}}$ is injective. For any $y\in F_*$, let $r\in R$ satisfy
$T_2r=y$ and set $y_0=T_1r\in F_*$; then
$$T_2T^+_1y_0=T_2T^+_1T_1r=T_2P^{N(T_1)}_Rr=T_2r=y,$$
and so $T_2T^+_1|_{F_*}$ is also surjective. Thus
$T_2T^+_1|_{F_*}\in B^X(F_*)$ and is path connected with one of
$-I_{F_*}$ and $I_{F_*}$ in $B^X(F_*)$. Similarly to the argument in the
proof of Theorem 3.1, one can prove that $T_2T^+_1T_1$ is path
connected with one of $-T_1$ and $T_1$ in $\Phi_{m,n}$. Note
the equality $F=F_*\oplus N_1$ in (3.4) and dim$N_1>0.$ Due to dim
$N_1>0$, Theorem 2.2 shows that $-P^{N_1}_{F_*}$ and $P^{N_1}_{F_*}$
are path connected in the set $S=\{T\in C_r(N_1)\subset B(F):
N(T)=N_1\}$, say via a path $Q(\lambda)\in S$ for $\lambda\in[0,1]$ fulfilling
$Q(0)=-P^{N_1}_{F_*}$ and $Q(1)=P^{N_1}_{F_*}.$ Obviously,
$$R(Q(\lambda)T_1)=Q(\lambda)F_*=Q(\lambda)(F_*\oplus
N_1)=R(Q(\lambda))\quad({\rm note}\ R(T_1)=F_*),$$ and
$$Q(\lambda)T_1x=0\Leftrightarrow T_1x\in N_1\Leftrightarrow x\in
N(T_1)\quad({\rm note}\ F_*\cap N_1=\{0\})$$ for all
$\lambda\in[0,1].$ So $Q(\lambda)T_1$ for any $\lambda\in[0,1]$
belongs to $\Phi_{m,n}$. This says that $-T_1$ and $T_1$ are path
connected in $\Phi_{m,n}$. Therefore by (3.6) $T_2P^{N(T_1)}_R$ is
also path connected with $T_1$ in $\Phi_{m,n}$.$\Box$
By Theorem 4.2 in [Ma4] we further have
\textbf{Theorem 3.3}\quad $F_k(k<$dim$F$) and $\Phi_{m,n}(n>0)$ are
not only path connected sets but also smooth submanifolds in
$B(E,F)$ with tangent space $M(X)=\{T\in B(E,F):TN(X)\subset R(X)\}$
at any $X$ in them.
Applying the theorem to $B(\mathbf{R}^m,\mathbf{R}^n)$ we obtain the
geometrical and topological construction of
$B(\mathbf{R}^m,\mathbf{R}^n)$.
\textbf{Theorem 3.4}\quad
$B(\mathbf{R}^m,\mathbf{R}^n)=\bigcup^n\limits_{k=0}F_k(m\geq n),
B(\mathbf{R}^m,\mathbf{R}^n)=\bigcup^m\limits_{k=0}F_k(m<n),F_k$ is
a smooth and path connected submanifold in
$B(\mathbf{R}^m,\mathbf{R}^n)$, and especially, dim$F_k=(m+n-k)k$
for $k=0,1,\cdots,n.$
\textbf{Proof}\quad We need only to prove the formula
dim$F_k=(m+n-k)k$ for $k=0,1,\cdots,n$ since otherwise the theorem
is immediate from Theorem 3.3. Write $T=\{t_{i,j}\}^{m,n}_{i,j=1}$
for any $T\in B(\mathbf{R}^m,\mathbf{R}^n)$, let $I_k\in
B(\mathbf{R}^m,\mathbf{R}^n)$ be given by $t_{i,j}=0$ except $t_{i,i}=1$, $1\leq
i\leq k$, and let $I^{+}_k=\{s_{i,j}\}^{m,n}_{i,j=1}\in
B(\mathbf{R}^n,\mathbf{R}^m)$ be given by $s_{i,j}=0$ except $s_{i,i}=1$, $1\leq
i\leq k$.
Obviously, $I_kI^+_kI_k=I_k$ and $I^{+}_kI_kI^{+}_k=I^+_k.$ So
$$P^{R(I_k)}_{N(I^+_k)}=I^*_n-I_kI^+_k=\{\{s_{i,j}\}^n_1\in
B(\mathbf{R}^n):s_{i,j}=0\quad{\rm except}\quad s_{i,i}=1,k+1\leq
i\leq n\},$$
and
$$P^{R(I^+_k)}_{N(I_k)}=I^*_m-I^+_kI_k=\{\{s_{i,j}\}^m_1\in
B(\mathbf{R}^m):s_{i,j}=0\quad{\rm except}\quad s_{i,i}=1,k+1\leq i\leq
m\},$$ where $I^*_n,I^*_m$ denote the identity on $\mathbf{R}^n$
and $\mathbf{R}^m$, respectively. Let
$$M^+=\{P^{R(I_k)}_{N(I^+_k)}TP^{R(I^+_k)}_{N(I_k)}:\forall T\in
B(\mathbf{R}^m,\mathbf{R}^n)\}.$$ By direct computation,
$$M^+=\{\{\lambda_{i,j}\}^{n,m}_{i,j=1}:\lambda_{i,j}=0\quad{\rm
except}\quad\lambda_{i,j}=t_{i,j},\ k+1\leq i\leq n\quad{\rm
and}\quad k+1\leq j\leq m\},$$ so that dim$M^+=(n-k)(m-k).$ By Lemma
4.1 in [Ma4], $B(\mathbf{R}^m,\mathbf{R}^n)=M(I_k)\oplus M^+$ and
so, dim$M(I_k)=m\times n-(m-k)(n-k)=(m+n-k)k.$ Due to $F_k$ being
path connected, by Theorem 3.1, one can conclude ${\rm dim}F_k={\rm
dim}M(I_k)=(m+n-k)k.$ \quad$\Box$
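The dimension count used above can be double checked in a few lines of Python (sympy is used only for illustration; the concrete sizes are arbitrary):
\begin{verbatim}
import sympy as sp

# Symbolic identity used above: m*n - (m-k)*(n-k) = (m+n-k)*k
m, n, k = sp.symbols('m n k')
print(sp.expand(m*n - (m - k)*(n - k) - (m + n - k)*k))   # prints 0

# Concrete instance: M(I_k) consists of the matrices whose lower-right
# (n-k) x (m-k) block vanishes, so dim M(I_k) = m*n - (m-k)*(n-k).
m_, n_, k_ = 5, 4, 2                                      # illustrative sizes
print(m_*n_ - (m_ - k_)*(n_ - k_), (m_ + n_ - k_)*k_)     # 14 14
\end{verbatim}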
Let $G(\cdot)$ denote the set of all splitting subspaces in the
Banach space in the parentheses, $U_{E}(R) = \left\{H\in G(E) : E =
R \oplus H\right\}$ for any $R\in G(E) $, and $U_{F}(S) = \{L\in
G(F) :$ $ F = S \oplus L\}$ for any $S\in G(F) $. In order to
obtain more general results than those of the previous
Theorems 3.1 and 3.2, we introduce the following equivalence
relation.
{\bf Definition 3.1}\quad{\it $T_{0}$ and $T_{*}$ in $B^{+}(E,F)$
are said to be equivalent provided there exist finitely many subspaces
$N_{1}, \cdots, N_{m}$ in $G(E)$, and $F_{1}, \cdots, F_{n}$ in $G(F)$
such that all
$$
U_{E}(N(T_{0}))\cap U_{E}(N_{1}),\cdots,U_{E}(N_{m})\cap U_{E}(N(T_{*}))
$$
and
$$
U_{F}(R(T_{0}))\cap U_{F}(F_{1}),\cdots,U_{F}(F_{n})\cap U_{F}(R(T_{*}))
$$
are non-empty. For abbreviation, write it as $T_{0}\sim T_{*},$ and let
$\widetilde{T}$ denote the equivalence class generated by $T$ in $B(E,F).$}
{\bf Theorem 3.5}\quad{\it $\widetilde{T_{0}}$ for any $T_{0}\in
B^{+}(E,F)$ with $\mathrm{codim } R(T_{0}) > 0$ is path connected.}
{\bf Proof}\quad Assume that $T_{*}$ is any operator in $\widetilde{T_{0}}$, and
$$
R_{1}\in U_{E}(N(T_{0}))\cap U_{E}(N_{1}),
\cdots,R_{m}\in U_{E}(N_{m-1})\cap U_{E}(N_{m}),
R_{m+1}\in U_{E}(N_{m})\cap U_{E}(N(T_{*})).\eqno(3.7)
$$
We define by induction:
$$
T_{k} = T_{k-1}P^{N_{k}}_{R_{k}},\,\,k=1,2,\cdots,m.
$$
It is easy to observe
$$
R(T_{k}) = R(T_{0})\,\mathrm{and }\, N(T_{k}) = N_{k}, k=1,\cdots,m.\eqno(3.8)
$$
Indeed, $R(T_1)=T_0R(P^{N_1}_{R_1})=T_0R_1=R(T_0)$ and $N(T_1)=\{x\in
E:P^{N_1}_{R_1}x\in N(T_0)\}=\{x\in E:P^{N_1}_{R_1}x=0\}=N_1$
because of $E=R_1\oplus N_1$; similarly, by induction one can
conclude $R(T_k)=R(T_0)$ and $N(T_k)=N_k$ for $k=1,2,\cdots,m.$ So
$T_k\in\tilde{T}_0$ for $k=1,2,\cdots,m.$
Let $S_k=\{T\in C_d(R_k):R(T)=R(T_0)\}=\{T\in B(E,F):E=N(T)\oplus
R_k$ and $R(T)=R(T_0)\}$ for $k=1,2,\cdots,m.$ Evidently $S_k\subset
\tilde{T}_0, k=1,2,\cdots,m.$ In fact, by (3.7) $R_1\in
U_E(N(T_0))\cap U_E(N_1),\cdots,R_k\in U_E(N_{k-1})\cap U_E(N_k);$
while $R_k\in U_E(N_k)\cap U_E(N(T))$ and $R(T)=R(T_0);$ so that
$S_k\subset\tilde{T}_0,k=1,2,\cdots,m.$ We next show that $T_0$
and $T_m$ are path connected in $\tilde{T}_0$. Due to $E=N_{k-1}\oplus
R_k=N(T_{k-1})\oplus R_k=N_k\oplus R_k$, Theorem 2.4 shows that
$T_{k-1}$ and $T_k$ are path connected in $S_k$, and so in
$\tilde{T}_0$ for $k=1,2,\cdots,m$. Therefore $T_0$ and $T_m$ are
path connected in $\tilde{T}_0$.
Let $M_k=\{T\in C_d(R_k):R(T)=R(T_k)\}.$
By (3.8),
$$M_k=\{T\in
C_d(R_k):R(T)=R(T_0)\},\quad k=1,2,\cdots,m,$$ so that
$M_k\subset\tilde{T}_0$. Note $E=N(T_{k-1})\oplus R_k=N_k\oplus
R_k.$ Then by Theorem 2.4, $T_k=T_{k-1}P^{N_k}_{R_k}$ and $T_{k-1}$
are path connected in $\{T\in
C_d(R_k):R(T)=R(T_{k-1})\}\subset\tilde{T}_0, k=1,2,\cdots,m.$ This
shows that $T_0$ and $T_m$ are path connected in $\tilde{T}_0$.
Finally, we prove that $T_m$ and $T_*$ are path connected in
$\tilde{T}_0.$
On the other hand, assume
$$
S_{1}\in U_{F}(R(T_{0})) \cap U_{F}(F_{1}),\cdots,
S_{n}\in U_{F}(F_{n-1}) \cap U_{F}(F_{n}),
S_{n+1}\in U_{F}(F_{n}) \cap U_{F}(R(T_{*})).\eqno(3.9)
$$
Write $T_{m,0}=T_m.$
We define by induction,
$$
T_{m,i}=P^{S_{i}}_{F_{i}}T_{m,i-1},\, i=1, 2,\cdots,n.
$$
According to the equality $R(T_{m})= R(T_{0})$ in (3.8), we infer
$$
R(T_{m,i})=F_{i}\,\mathrm{and}\,N(T_{m,i})=N(T_{m}),\,\,i=1,\cdots,n.\eqno(3.10)
$$
In fact, write $F_0=R(T_m)$; then by (3.9)
$$
F = F_{i-1}\oplus S_{i} = F_{i} \oplus S_{i}, \,i = 1, 2,\cdots,n;
$$
and so
$$N(T_{m,i})=N(T_m)\quad{\rm and}\quad
R(T_{m,i})=F_i,i=1,2,\cdots,n.$$
Thus, take $T_{m,i-1},F_i$ and $S_i$ for $i=1,2,\cdots,n$ in the places of
$T_0,F_*$ and $N$, in Theorem 2.3, respectively, then the theorem
shows that $T_{m,k}$ and $T_{m,k-1}$ are path connected in
$\left\{T\in C_{r}(S_{k}): N(T) = N_{m}\right\} \subset
\widetilde{T_{0}}, k=1,2,\cdots,n,$ so that $T_{m}$ and $T_{m,n}$
are path connected in $\widetilde{T_{0}}$. Since $T_{0}$ and
$T_{m}$ are path connected in $\widetilde{T_{0}},$ $T_{0}$ and
$T_{m,n}$ are path connected in $\widetilde{T_{0}},$ so the proof
of the theorem reduces to that of $T_{m,n}$ being path connected
with $T_{*}$ in $\widetilde{T_{0}},$ where $T_{m,n}$ and $T_{*}$
satisfy
$$
E = N(T_{m,n})\oplus R_{m+1}= N(T_{*})\oplus R_{m+1} \,\, \mathrm{because\, of }\,\,
N(T_{m,n}) = N_{m},
$$
and
$$
F = R(T_{m,n})\oplus S_{n+1}= R(T_{*})\oplus S_{n+1} \,\, \mathrm{because\, of }\,\,
R(T_{m,n}) = F_{n}.\eqno(3.11)
$$
The first equality in (3.11) shows $T_{*}\in C_{d}(R_{m+1})$
and $E = N_{m}\oplus R_{m+1}.$ Then by Theorem 2.4,
$T_{*}P^{N_{m}}_{R_{m+1}}$ and $T_{*} $ are path connected in
$\left\{T\in C_{d}(R_{m+1}): R(T) = R(T_{*})\right\} \subset
\widetilde{T_{0}}.$
The second equality in (3.11) shows $T_{*}\in C_{r}(S_{n+1})$ and
$F = F_{n}\oplus S_{n+1}.$ Then by Theorem 2.3,
$P^{S_{n+1}}_{F_{n}}T_{*}$ and $T_{*} $ are path connected in
$\left\{T\in C_{r}(S_{n+1}): N(T) = N(T_{*})\right\} \subset
\widetilde{T_{0}}.$ Combining the preceding results, we conclude
that $P^{S_{n+1}}_{F_{n}}T_{*}P^{N_{m}}_{R_{m+1}}$ and $T_{*}$ are
path connected in $\widetilde{T_{0}}.$ So far, the proof of the
theorem reduces to that of
$P^{S_{n+1}}_{F_{n}}T_{*}P^{N_{m}}_{R_{m+1}}$ being path connected with $T_{m,n}$ in $\widetilde{T_{0}}.$
According to (3.10) we have
$$
E = N_{m}\oplus R_{m+1}\,\,\mathrm{and}\,\,F = F_{n}\oplus S_{n+1}
$$
where $N_{m}=N(T_{m,n})$ and $R(T_{m,n})=F_{n}.$
Let
$$T_{m,n}^{+}y =\left\{\begin{array}{rcl}\left(T_{m,n}|_{R_{m+1}}\right)^{-1}y, & & {y\in F_{n}},\\
0, & & {y\in S_{n+1}}.
\end{array}\right.$$
Obviously,
$$
P^{S_{n+1}}_{F_n}T_*P^{N_m}_{R_{m+1}}=P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}T_{m,n}.\eqno(3.12)
$$
We claim
$$
P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}|_{F_{n}}\in B^{X}(F_{n}).
$$
Indeed, $ P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}y=0 \,\,\mathrm{for}\,\,y\in
F_{n}\Leftrightarrow T_{*}{T^{+}_{m,n}}y=0 $ (because of (3.11))
$\Leftrightarrow T^{+}_{m,n}y=0 $ (since $T^{+}_{m,n}y\in R_{m+1}$)
$\Leftrightarrow y=0$; while from the assumptions in (3.7) and (3.9),
$R_{m+1}\in U_{E}(N_{m}) \cap U_{E}(N(T_{*})) $ and $S_{n+1} \in
U_{F}(F_{n}) \cap U_{F}(R(T_{*})),$ it follows that for any $y\in F_{n}$ there exist
$r\in R_{m+1}$ and $y_{0}\in F_{n}$ such that
$ P^{S_{n+1}}_{F_{n}}T_{*}r =
y\,\,\mathrm{and}\,\,T^{+}_{m,n}y_{0}=r$, i.e.,
$P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}|_{F_{n}} $ is surjective.
It is well known that $P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}|_{F_{n}}$
is path connected with one of $I_{F_{n}}$ and $(-I_{F_{n}})$
in $B^{X}(F_{n}).$ Hence
$P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}P^{S_{n+1}}_{F_{n}}$ is path
connected with one of $P^{S_{n+1}}_{F_{n}}$ and
$(-P^{S_{n+1}}_{F_{n}})$ in the set $S=$
$\left\{T\in B(F): R(T) =
F_{n}\,\, \mathrm{and}\,\, N(T) = S_{n+1}\right\}$ $\subset
\left\{T\in C_{r}(S_{n+1}): N(T)=S_{n+1}\right\}.$ If
$P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}P^{S_{n+1}}_{F_{n}}$ is path
connected with $(-P^{S_{n+1}}_{F_{n}})$, then by Theorem 2.2, it is
path connected with $P^{S_{n+1}}_{F_{n}}$ in $S$. Thus
$P^{S_{n+1}}_{F_{n}}T_{*}T^{+}_{m,n}P^{S_{n+1}}_{F_{n}}$ is also
path connected with $P^{S_{n+1}}_{F_{n}}$ in $\left\{T\in
C_{r}(S_{n+1}): N(T)= S_{n+1}\right\}$.
Note the equality (3.10). Finally, by an argument similar to that in
the proof of Theorem 3.2, one can infer that
$P^{S_{n+1}}_{F_n}T_*P^{N_m}_{R_{m+1}}$ and $T_{m,n}$ are path connected
in $\tilde{T}_0.$ The proof ends.\quad$\Box$
It is easy to see that $\widetilde{T} = F_{k}$ for any $T\in F_{k}
(k< \infty)$ as well as
$\widetilde{T} = \Phi_{m,n}$ for any $T\in \Phi_{m,n} ,\, n
> 0.$
\vskip 0.1cm
\begin{center}{\bf References}
\end{center}
\vskip -0.1cm
{\footnotesize
\def\REF#1{\par\hangindent\parindent\indent\llap{#1\enspace}\ignorespaces}
\REF{[Abr]}\ R. Abraham, J. E. Marsden, and T. Ratiu, Manifolds,
Tensor Analysis, and Applications, 2nd ed., Applied Mathematical
Sciences 75, Springer, New York, 1988.
\REF{[An]}\ V. I. Arnol'd, Geometrical Methods in the Theory of
Ordinary Differential Equations, 2nd ed., Grundlehren der
Mathematischen Wissenschaften 250, Springer, New York, 1988.
\REF{[Bo]}\ B. Booss and D. D. Bleecker, Topology and Analysis: the
Atiyah-Singer Index Formula and Gauge-Theoretic Physics,
Springer-Verlag, New York, 1985.
\REF{[Caf]}\ V. Cafagna, Global invertibility and finite
solvability, pp. 1-30 in Nonlinear Functional Analysis (Newark,
NJ, 1987), edited by P. S. Milojevic, Lecture Notes in Pure and
Appl. Math. 121, Dekker, New York, 1990.
\REF{[Ma1]}\ Jipu Ma, Dimensions of Subspaces in a Hilbert Space and
Index of Semi-Fredholm Operators, Sci. China Ser. A 39:12 (1986).
\REF{[Ma2]}\ Jipu Ma, Generalized Indices of Operators in $B(H)$, Sci.
China Ser. A 40:12 (1987).
\REF{[Ma3]}\ Jipu Ma, Complete Rank Theorem in Advanced Calculus and
Frobenius Theorem in Banach Space, arXiv:1407.5198v5 [math.FA], 23 Jan
2015.
\REF{[Ma4]}\ Jipu Ma, Frobenius Theorem in Banach Space, submitted to
Invent. Math. (Also see arXiv: submit/1512157 [math.FA], 19 Mar 2016.)
1. Department of Mathematics, Nanjing University, Nanjing, 210093,
P. R. China
2. Tseng Yuanrong Functional Analysis Research Center, Harbin Normal
University, Harbin, 150080, P. R. China
E-mail address: [email protected]; [email protected]
}
\end{document}
\begin{document}
\title{Quantum particles in a suddenly moving localized potential}
\author{
Miguel Ahumada-Centeno \\
Facultad de Ciencias, Universidad de Colima,\\
Bernal D\'{i}az del Castillo 340, Colima, Colima, Mexico \\
[email protected]
\and
Paolo Amore \\
Facultad de Ciencias, CUICBAS, Universidad de Colima,\\
Bernal D\'{i}az del Castillo 340, Colima, Colima, Mexico \\
[email protected]
\and
Francisco M Fern\'andez \\
INIFTA, Division Quimica Teorica, \\
Blvd. 113 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata, Argentina \\
[email protected]
\and
Jesus Manzanares \\
Universidad de Sonora, Mexico \\
[email protected]
}
\maketitle
\begin{abstract}
We study the behavior of a quantum particle, trapped in a localized potential, when the trapping potential suddenly starts to move with constant velocity. In one dimension we have reproduced the results obtained
by Granot and Marchewka, Ref.~\cite{Granot09}, for an attractive delta function, using an approach based on a spectral decomposition rather than on the propagator.
We have also considered the cases of the P\"oschl-Teller and simple harmonic oscillator potentials (in one dimension) and of the hydrogen atom (in three dimensions). In this last case we have calculated explicitly the leading contribution to the ionization probability of the hydrogen atom due to the sudden movement.
\end{abstract}
\section{Introduction}
\label{Intro}
In a recent paper, Ref.~\cite{Granot09}, Granot and Marchewka have studied the interesting problem of determining the behavior
of a quantum particle trapped in a localized potential, when the potential suddenly starts to move at constant speed at $t=0$. These
authors used an attractive Dirac delta potential in one dimension to model the problem, calculating exactly the probability that
the particle remains confined to the moving potential or that it remains in the initial position. In addition to these two possibilities, they also
found a nonvanishing probability that the particle moves at twice the speed of the potential: for an observer sitting in the rest frame
of the potential at $t>0^+$ this phenomenon can be interpreted as the quantum reflection from the well of a particle moving to the left with velocity $-v$.
Technological advances are making increasingly realistic the scenarios where a quantum particle, such as an atom, can be
trapped by a suitable attractive potential (corresponding for example to the tip of a needle in a scanning tunneling microscope or to a highly focused laser beam
in an optical tweezer) and thus be relocated to a different region~\cite{Ashkin86,Block90,Phillips14,Moffitt08}. As remarked by Granot and Marchewka, the quantum nature of this process leads to surprising results, such as the possibility that the particle moves at twice the speed of the needle.
The present paper has two different goals: first, to reproduce the analysis of Ref.~\cite{Granot09} using a more direct approach based
on a spectral decomposition rather than on the propagator; second, to extend the analysis to a wider class of problems, in one and
three dimensions.
The paper is organized as follows: in Section \ref{spectral} we discuss the general framework, using spectral decomposition; in
Section \ref{appl} we consider several examples of potentials, with spectra which can be either mixed or discrete, and calculate
{\sl explicitly} the relevant probabilities for each case; finally in Section \ref{concl} we draw our conclusions.
\section{Spectral decomposition}
\label{spectral}
Our starting point is the time dependent Schr\"odinger equation (TDSE)
\begin{eqnarray}
i \hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \Delta \Psi + V(\vec{r}-\vec{v} t) \Psi(\vec{r},t)
\label{eq_tdse}
\end{eqnarray}
where $V(\vec{r}-\vec{v} t)$ is a potential moving with velocity $\vec{v}$.
Let us define $\vec{\xi} (\vec{r},t) \equiv \vec{r}-\vec{v} t$ and let $\phi(\vec{\xi})$ be the solution of the time independent
Schr\"odinger equation (TISE)
\begin{eqnarray}
- \frac{\hbar^2}{2m} \Delta_{\xi} \phi + V(\vec{\xi}) \phi(\vec{\xi}) = E \phi(\vec{\xi})
\label{eq_tise}
\end{eqnarray}
It can be easily verified that
\begin{eqnarray}
\Psi(\vec{r},t) = e^{\frac{i m \vec{v} \cdot \vec{r}}{\hbar} -\frac{i m \vec{v}^2 t}{2\hbar}} \phi(\vec{\xi}) e^{-\frac{i E t}{\hbar}}
\label{subst}
\end{eqnarray}
is a solution to the time-dependent Schr\"odinger equation (\ref{eq_tdse}).
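As an illustrative sanity check of this statement (not needed in what follows), the following Python (sympy) sketch verifies that the ansatz (\ref{subst}) solves the TDSE in one dimension for the concrete choice of a harmonic potential $V(\xi)=\xi^2/2$ with $\hbar=m=\omega=1$; these specific choices are purely illustrative and make the verification fully algebraic:
\begin{verbatim}
import sympy as sp

x, t, v = sp.symbols('x t v', real=True)

# Harmonic-oscillator ground state: phi(xi) = exp(-xi^2/2), E = 1/2, V = xi^2/2
xi = x - v*t
E = sp.Rational(1, 2)
phi = sp.exp(-xi**2/2)
V = xi**2/2

# Galilean-boosted ansatz, eq. (subst), with hbar = m = 1
Psi = sp.exp(sp.I*v*x - sp.I*v**2*t/2) * phi * sp.exp(-sp.I*E*t)

# Residual of i dPsi/dt = -1/2 d^2Psi/dx^2 + V(x - v t) Psi
residual = sp.I*sp.diff(Psi, t) + sp.diff(Psi, x, 2)/2 - V*Psi
print(sp.simplify(residual))   # prints 0
\end{verbatim}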
The physical situation studied by Granot and Marchewka amounts to having a quantum particle at the initial time in the ground state
of the static potential and determining the probability at later times $t>0$ that the particle can be found in any of the modes
of the moving potential. For simplicity we will denote as $\psi_n(\vec{r},t)$ and $\psi_{\vec{k}}(\vec{r},t)$ the eigenmodes of the
moving potential and $\phi_n(\vec{r})$ and $\phi_{\vec{k}}(\vec{r})$ the eigenmodes of the static potential, where $n$ and $\vec{k}$ refer to bound and continuum states, respectively.
We also call $\Phi(\vec{r})$ the initial wave function at $t=0$ (for the specific case studied in Ref.~\cite{Granot09} this is
the wave function of the ground state).
Let $\Psi(\vec{r},t)$ be the wave function solution to the TDSE with a moving potential at $t>0$, subject to the condition
$\Psi(\vec{r},0) = \Phi(\vec{r})$; this wave function can be naturally decomposed in the basis of the moving potential as
\begin{eqnarray}
\Psi(\vec{r},t) = \sum_{n} a_n \psi_n(\vec{r},t) + \int \frac{d^3k}{(2\pi)^3} b(\vec{k}) \psi_{\vec{k}}(\vec{r},t)
\label{eq_arb}
\end{eqnarray}
where, using eq.~(\ref{subst}),
\begin{eqnarray}
a_n &=& \int d^3r \psi_n^\star(\vec{r},0) \Phi(\vec{r}) = \int d^3r e^{- i \frac{m \vec{v} \cdot \vec{r}}{\hbar}}
\phi_n^\star(\vec{r},0) \Phi(\vec{r}) \nonumber \\
b({\vec{k}}) &=& \int d^3r \psi_{\vec{k}}^\star(\vec{r},0) \Phi(\vec{r}) = \int d^3r e^{- i \frac{m \vec{v} \cdot \vec{r}}{\hbar}} \phi_{\vec{k}}^\star(\vec{r},0) \Phi(\vec{r}) \nonumber \ .
\end{eqnarray}
In this way $\sum_n |a_n|^2$ is the probability that the particle remains in a bound state when the potential starts moving, whereas
$\int \frac{d^3k}{(2\pi)^3} |b(\vec{k})|^2$ is the probability that the particle will escape to the continuum~\footnote{We are discussing here the three dimensional case, but the modifications for the one and two dimensional cases are straightforward.}.
Calling $N$ the number of bound states of the potential, we define the physical amplitudes
\begin{eqnarray}
\mathcal{Q}_{ij}(\vec{v}) &\equiv& \int_{-\infty}^{\infty} e^{- i m \vec{v} \cdot \vec{r} /\hbar} \left[\phi_j(\vec{r})\right]^\star \phi_i(\vec{r}) d^3 r \nonumber \\
\mathcal{P}_{i}(\vec{k},\vec{v}) &\equiv& \int_{-\infty}^{\infty} e^{- i m \vec{v} \cdot \vec{r} /\hbar} \left[\phi(\vec{k},\vec{r})\right]^\star \phi_i(\vec{r}) d^3r \nonumber \\
\tilde{\mathcal{P}}_{i}(\vec{k},\vec{v}) &\equiv& \int_{-\infty}^{\infty} e^{- i m \vec{v} \cdot \vec{r} /\hbar}
\phi_i^\star(\vec{r}) \phi(\vec{k},\vec{r}) d^3r = \tilde{\mathcal{P}}_{i}^\star(\vec{k},-\vec{v})
\nonumber
\end{eqnarray}
The wave function at later times for a particle initially in the ith state of the static potential will then be
\begin{eqnarray}
\Psi(\vec{r},t) = \sum_{j=1}^N \mathcal{Q}_{ij} \psi_j(\vec{r},t) + \int_{-\infty}^\infty \mathcal{P}_{i}(\vec{k},v) \psi(\vec{k},\vec{r},t) \frac{d^3k}{(2\pi)^3}
\label{eq_psit}
\end{eqnarray}
Clearly $|\mathcal{Q}_{ij}|^2$ represents the probability that the particle, initially in the $ith$ bound state ends up in the
$jth$ bound state; similarly $|\mathcal{P}_i(\vec{k},v)|^2 d^3 k/(2\pi)^3$ represents the probability that the particle initially in the ith bound state ends up in the continuum with momentum in an infinitesimal volume about $\hbar \vec{k}$.
The conservation of total probability requires
\begin{eqnarray}
\sum_{j=1}^N |\mathcal{Q}_{ij}|^2 + \int_{-\infty}^{\infty} |\mathcal{P}_i(\vec{k},v)|^2 \frac{d^3k}{(2\pi)^3} = 1
\end{eqnarray}
Similarly we define the amplitude
\begin{eqnarray}
\mathcal{R}(\vec{k},\vec{k}',\vec{v}) &\equiv& \int_{-\infty}^{\infty} e^{- i m \vec{v}\cdot \vec{r} /\hbar} \left[\phi(\vec{k}',\vec{r})\right]^\star \phi(\vec{k},\vec{r}) d^3r \nonumber
\end{eqnarray}
which is related to the probability that a particle initially with momentum $\hbar \vec{k}$ ends up in a state with momentum $\hbar \vec{k}'$.
Of course these arguments can be generalized to the case that the initial wave function is not a stationary state of
the static problem. In this case the time dependent wave function is
\begin{eqnarray}
\Psi(\vec{r},t) = \sum_{j=1}^N \bar{\mathcal{Q}}_{j} \psi_j(\vec{r},t) + \int_{-\infty}^\infty \bar{\mathcal{P}}(\vec{k},v) \psi(\vec{k},\vec{r},t) \frac{d^3k}{(2\pi)^3}
\label{eq_psit2}
\end{eqnarray}
where
\begin{eqnarray}
\bar{\mathcal{Q}}_j &\equiv& \int e^{- i m \vec{v} \cdot \vec{r}/\hbar} \left[\phi_j(\vec{r})\right]^\star \Phi(\vec{r}) d^3r \nonumber \\
&=& \sum_{i=1}^N a_i \mathcal{Q}_{ij} + \int b(\vec{k}) \tilde{\mathcal{P}}_j(\vec{k},\vec{v}) \frac{d^3k}{(2\pi)^3} \nonumber \\
\bar{\mathcal{P}}(k',v) &\equiv& \int e^{- i m \vec{v} \cdot \vec{r} /\hbar} \left[\phi(\vec{k}',\vec{r})\right]^\star \Phi(\vec{r}) d^3r \nonumber
\end{eqnarray}
of which Eq.(\ref{eq_psit}) is a special case.
\section{Applications}
\label{appl}
In this section we apply our general discussion to different examples.
\subsection{Attractive Dirac delta potential}
\label{delta}
We consider the attractive Dirac delta potential studied in Ref.~\cite{Granot09}:
\begin{eqnarray}
V(x,t)=\left\{ \begin{array}{ccc}
-\gamma \delta(x) & , & t \leq 0 \\
-\gamma \delta(x-vt) & , & t > 0 \\
\end{array}\right.
\end{eqnarray}
To implement the procedure explained in Section \ref{spectral} we first write explicitly
the eigenfunctions of the static potential, $V(x,0)= -\gamma \delta(x)$, reported in Ref.~\cite{Blinder88}
\begin{eqnarray}
\phi_0(x) &=& \sqrt{\beta } e^{-\beta \left| x\right| }
\label{eq_bs} \\
\phi_p^{(e)}(x) &=& \frac{\sqrt{2} (p \cos (p x)-\beta \sin (p \left| x\right| ))}{\sqrt{\beta ^2+p^2}}
\label{eq_even} \\
\phi_p^{(o)}(x) &=& \sqrt{2} \sin (p x)
\label{eq_odd}
\end{eqnarray}
where $\beta = m \gamma/\hbar^2$. Damert \cite{Damert75} and Patil \cite{Patil00} have proved the completeness
of the set of the energy eigenfunctions.
Note that the spectrum of $V(x,0)$ is mixed with a single bound state of energy $E_0 = - \frac{\gamma ^2 m}{2 \hbar ^2}$.
The orthonormality relations for this set of functions are
\begin{eqnarray}
\int_{-\infty}^{\infty} \phi_0^\star(x) \phi_0(x) dx &=& 1 \nonumber \\
\int_{-\infty}^{\infty} \phi_0^\star(x) \phi_p^{(e)}(x) dx &=& \int_{-\infty}^{\infty} \phi_0^\star(x) \phi_p^{(o)}(x) dx = 0 \nonumber \\
\int_{-\infty}^{\infty} {\phi_{p'}^{(e)}}^\star(x) \phi_p^{(e)}(x) dx &=& \int_{-\infty}^{\infty} {\phi_{p'}^{(o)}}^\star(x) \phi_p^{(o)}(x) dx = 2 \pi \delta (p-p') \nonumber \\
\int_{-\infty}^{\infty} {\phi_{p'}^{(o)}}^\star(x) \phi_p^{(e)}(x) dx &=& 0 \nonumber
\end{eqnarray}
Using the prescription (\ref{subst}) one can obtain the solutions to the time dependent problem for $t>0$ corresponding
to the static solutions of eqs. (\ref{eq_bs}), (\ref{eq_even}) and (\ref{eq_odd}):
\begin{eqnarray}
\psi_0(x,t,v) &=& \sqrt{\beta } e^{-\beta \left| x-t v\right| +\frac{i m \left(\gamma ^2 t+v \hbar ^2 (2 x-t v)\right)}{2 \hbar^3}}
\nonumber \\
\psi_k^{(e)}(x,t,v) &=& \frac{\sqrt{2}}{\sqrt{\beta ^2+k^2}} e^{-\frac{i \left(k^2 t \hbar ^2+m^2 v (t v-2 x)\right)}{2 m \hbar }}
(k \cos (k (x-t v))-\beta \sin (k \left| x-t v\right|)) \nonumber \\
\psi_k^{(o)}(x,t,v) &=& \sqrt{2} \sin (k (x-t v)) e^{-\frac{i \left(k^2 t \hbar ^2+m^2 v (t v-2 x)\right)}{2 m \hbar }} \nonumber
\end{eqnarray}
The initial wave function is the bound state of the static delta potential
\begin{eqnarray}
\Psi_0(x) &=& \sqrt{\beta } e^{-\beta \left| x\right| } \nonumber
\end{eqnarray}
and it can be decomposed in the basis of the time-dependent potential as
\begin{eqnarray}
\Psi_0(x) = \mathcal{Q}_{11}(v) \psi_0(x,0) + \int_0^\infty \frac{dk}{2\pi} \left[\mathcal{P}_1^{(e)}(k,v) \psi_k^{(e)}(x,0)+
\mathcal{P}_1^{(o)}(k,v) \psi_k^{(o)}(x,0)\right] \nonumber
\end{eqnarray}
as explained in Section \ref{spectral}.
The amplitude for transition to the bound state of the moving well is given by
\begin{eqnarray}
\mathcal{Q}_{11}(v) &=& \int_{-\infty}^\infty e^{- i m v x/\hbar} \phi_0^\star(x) \phi_0(x) dx = \frac{4}{\theta ^2+4}
\end{eqnarray}
where $\theta \equiv \hbar v/\gamma$ is the adiabatic Massey parameter \cite{Elberfeld88,Granot09}.
Clearly, the probability that the particle remains in the bound state of the moving well is simply given by
\begin{eqnarray}
P_{bound}= |\mathcal{Q}_{11}|^2 = \frac{16}{(\theta^2+4)^2}
\end{eqnarray}
in agreement with eq.~(13) of Ref.~\cite{Granot09}.
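The amplitude $\mathcal{Q}_{11}(v)$ can also be reproduced with a one-line symbolic integration (a Python/sympy sketch; by parity the integral reduces to a cosine transform of $e^{-2\beta|x|}$, and we write $mv/\hbar=\beta\theta$):
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x', real=True)
beta, theta = sp.symbols('beta theta', positive=True)
q = beta*theta          # q = m*v/hbar, since theta = hbar*v/gamma, beta = m*gamma/hbar^2

# Q_11 = int e^{-i q x} beta e^{-2 beta |x|} dx = 2 beta int_0^oo e^{-2 beta x} cos(q x) dx
Q11 = 2*beta*sp.integrate(sp.exp(-2*beta*x)*sp.cos(q*x), (x, 0, sp.oo))
print(sp.simplify(Q11))             # 4/(theta**2 + 4)
\end{verbatim}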
Similarly we can calculate the coefficients $\mathcal{P}_1^{(e)}(k,v)$ and $\mathcal{P}_1^{(o)}(k,v)$, representing the amplitudes for transitions to a state in the continuum (even and odd states, respectively)
with momentum $\hbar k$
\begin{eqnarray}
\mathcal{P}_1^{(e)}(k,v)
&=& \int_{-\infty}^\infty e^{- i m v x/\hbar} \left[{\phi_k^{(e)}}(x)\right]^\star \phi_0(x) dx \nonumber \\
&=& \frac{4 \sqrt{2} \beta^{7/2} \theta ^2 k}{\sqrt{\beta^2+k^2} \left(\beta^4 \left(\theta ^2+1\right)^2+k^4-2 \beta ^2 \left(\theta ^2-1\right) k^2\right)} \\
\mathcal{P}_1^{(o)}(k,v)
&=& \int_{-\infty}^\infty e^{- i m v x/\hbar} \left[{\phi_k^{(o)}}(x)\right]^\star \phi_0(x) dx \nonumber \\
&=& -\frac{4 i \sqrt{2} \beta ^{5/2} \theta k}{\beta ^4 \left(\theta^2+1\right)^2+k^4-2 \beta ^2 \left(\theta^2-1\right) k^2}
\end{eqnarray}
where $\beta = m v/(\hbar\theta)$.
As a result the probability that the particle ends up in the continuum is
\begin{eqnarray}
P_{continuum} &=& \int_{0}^\infty \left[ |\mathcal{P}_1^{(e)}(k,v)|^2 + |\mathcal{P}_1^{(o)}(k,v)|^2\right] \frac{dk}{2\pi} \nonumber \\
&=& \int_{-\infty}^\infty \left[ |\mathcal{P}_1^{(e)}(k,v)|^2 + |\mathcal{P}_1^{(o)}(k,v)|^2\right] \frac{dk}{4\pi} \nonumber \\
&=& \int_{-\infty}^\infty \frac{16 \beta ^5 \theta ^2 k^2 \left(\beta ^2
\left(\theta^2+1\right)+k^2\right)}{\left(\beta^2+k^2\right) \left(\beta ^4 \left(\theta^2+1\right)^2+k^4-2 \beta ^2 \left(\theta^2-1\right) k^2\right)^2} \frac{dk}{2\pi} \nonumber
\end{eqnarray}
A straightforward integration of this expression using the residue theorem provides the final result
\begin{eqnarray}
P_{continuum} &=& 1-\frac{16}{\left(\theta ^2+4\right)^2}
\end{eqnarray}
The total probability correctly sums to $1$:
\begin{eqnarray}
P_{total} = P_{bound}+P_{continuum} = 1 \nonumber
\end{eqnarray}
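This bookkeeping is easy to confirm numerically; the following Python (scipy) sketch integrates the continuum integrand displayed above and checks that $P_{bound}+P_{continuum}=1$ (the values $\beta=1$ and the sample values of $\theta$ are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def total_probability(beta, theta):
    # Continuum integrand quoted above, to be integrated over (-oo, oo)
    def integrand(k):
        D = beta**4*(theta**2 + 1)**2 + k**4 - 2*beta**2*(theta**2 - 1)*k**2
        num = 16*beta**5*theta**2*k**2*(beta**2*(theta**2 + 1) + k**2)
        return num/((beta**2 + k**2)*D**2)
    P_cont, _ = quad(integrand, -np.inf, np.inf)
    P_cont /= 2*np.pi
    P_bound = 16.0/(theta**2 + 4)**2
    return P_bound + P_cont

for theta in (0.5, 1.0, 3.0):
    print(total_probability(1.0, theta))   # each value should be 1
\end{verbatim}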
The solution at $t >0$ is then obtained using eq.~(\ref{eq_psit});
using this expression we were able to reproduce Fig.~3 of Ref.~\cite{Granot09} by performing a numerical integration
of this equation (notice, however, a typo in the second equation of (15) of Ref.~\cite{Granot09}).
\subsection{P\"oschl-Teller potentials}
The second example that we want to consider is the P\"oschl-Teller (PT) potential
\begin{eqnarray}
V(x) = -\frac{\hbar^2 \lambda (\lambda +1)}{2 a^2 m} \ \sech^2\left(\frac{x}{a}\right)
\label{eq:poschl}
\end{eqnarray}
for which exact solutions are available.
The potential (\ref{eq:poschl}) provides a nice generalization of our discussion for the attractive delta potential, both because
it has a mixed spectrum, with $\lambda$ bound states ($\lambda$ integer), and because it is known to be reflectionless~\cite{Kay56}.
The eigenfunctions of the bound states read
\begin{eqnarray}
\phi_j(x) = \frac{\mathcal{N}_j}{\sqrt{a}} \ P_{\lambda }^j\left(\tanh \left(\frac{x}{a}\right)\right) \ \ , \ \ j=1, \dots, \lambda
\end{eqnarray}
where $P_{\lambda}^j(x)$ is the associated Legendre polynomial; the corresponding eigenenergies are
$E_j= -\frac{j^2 \hbar ^2}{2 a^2 m}$. Here $\mathcal{N}_j$ is a (dimensionless) normalization constant.
As an example we consider the case $\lambda=1$, for which
\begin{eqnarray}
\phi_1(x) &=& -\frac{\sqrt{1-\tanh^2\left(\frac{x}{a}\right)}}{\sqrt{2} \sqrt{a}} \\
\phi_k(x) &=& \frac{\left(-\tanh \left(\frac{x}{a}\right)+i a k\right)}{1+i a k} \ e^{i k x}
\end{eqnarray}
and assume that the particle is in the ground state of the static potential at $t=0$.
The amplitudes for the transition to the bound state and to the continuum states of the moving potential can then be
calculated explicitly as
\begin{eqnarray}
\mathcal{Q}_{11} &\equiv& \int_{-\infty}^{\infty} e^{- i m v x /\hbar} \left[\phi_1(x)\right]^\star \phi_1(x) dx
= \frac{1}{2} \pi \kappa {\csch}\left(\frac{\pi \kappa}{2}\right) \nonumber \\
\mathcal{P}_{1}(k,v) &\equiv& \int_{-\infty}^{\infty} e^{- i m v x /\hbar} \left[\phi(k,x)\right]^\star \phi_1(x) dx
= \frac{\pi \sqrt{a} \kappa }{\sqrt{2} (a k+i)} \ {\sech}\left(\frac{1}{2} \pi (a k+\kappa )\right) \nonumber
\end{eqnarray}
where $\kappa \equiv a mv/\hbar$.
\begin{figure}
\caption{Left plot: Probability that a particle initially in the bound state of the P\"oschl-Teller potential with $\lambda=1$ stays
trapped as the well starts to move with constant velocity (blue curve); the red curve is the probability that the particle ends
up in a state of the continuum. Right plot: Probability density at four different times $t=0,5,10,15$ for a particle initially in the bound state of the static potential with $\lambda=1$. We have used $a=m=\hbar=\kappa=1$. }
\label{Fig_1}
\end{figure}
In the left plot of Fig.~\ref{Fig_1} we plot the probability that a particle initially in the bound state of the P\"oschl-Teller potential with $\lambda=1$ stays trapped as the well starts to move with constant velocity (blue curve),
$P_{bound}(\kappa) = | \mathcal{Q}_{11} |^2$, and the probability that the particle ends up in the continuum (red curve),
$P_{continuum}(\kappa) = \int_{-\infty}^{\infty} | \mathcal{P}_{1}(k,v) |^2 \frac{dk}{2\pi}$. The integral over momentum
is performed numerically and it is verified within the numerical accuracy that $P_{bound}(\kappa)+ P_{continuum}(\kappa) =1$.
The situation is qualitatively similar to the case treated in Ref.~\cite{Granot09}, but with $P_{bound}(\kappa)$ now decaying exponentially
for $\kappa \gg 1$.
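The numerical check of $P_{bound}(\kappa)+P_{continuum}(\kappa)=1$ mentioned above can be reproduced along the following lines (a minimal Python sketch with $a=1$, so that $\kappa = a m v/\hbar$; it is not the code used for the figures).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pt_probabilities(kappa, a=1.0):
    """P_bound and P_continuum for the lambda = 1 Poschl-Teller well
    that suddenly starts moving; kappa = a*m*v/hbar."""
    p_bound = (0.5*np.pi*kappa/np.sinh(0.5*np.pi*kappa))**2
    def integrand(k):
        # |P_1(k,v)|^2 = pi^2 a kappa^2 / (2 (a^2 k^2 + 1)) * sech^2(pi (a k + kappa)/2)
        return np.pi**2*a*kappa**2/(2*(a**2*k**2 + 1))/np.cosh(0.5*np.pi*(a*k + kappa))**2
    # the sech^2 factor decays exponentially, so a finite window is sufficient
    p_cont = quad(integrand, -80.0, 80.0, points=[-kappa/a])[0]/(2*np.pi)
    return p_bound, p_cont

for kappa in (0.5, 1.0, 2.0):
    pb, pc = pt_probabilities(kappa)
    print(kappa, pb, pc, pb + pc)   # the sum equals 1 within the numerical accuracy
\end{verbatim}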
The wave function at $t>0$ can be then obtained as
\begin{equation}
\begin{split}
\Psi(x,t) &= \frac{1}{2} \pi \kappa {\csch}\left(\frac{\pi \kappa}{2}\right) \psi_1(x,t) \\
&+ \int_{-\infty}^\infty \frac{\pi \sqrt{a} \kappa }{\sqrt{2} (a k+i)} \ {\sech}\left(\frac{1}{2} \pi (a k+\kappa )\right) \psi(k,x,t) \frac{dk}{2\pi} \nonumber
\end{split}
\end{equation}
where
\begin{eqnarray}
\psi_1(x,t) &=& e^{\frac{i m v x}{\hbar} -\frac{i m v^2 t}{2\hbar}} \phi_1(x-vt) e^{-\frac{i E_1 t}{\hbar}} \nonumber \\
\psi(k,x,t) &=& e^{\frac{i m v x}{\hbar} -\frac{i m v^2 t}{2\hbar}} \phi(k,x-vt) e^{-\frac{i \hbar k^2 t}{2m}} \nonumber
\end{eqnarray}
Since the P\"oschl-Teller potential is reflectionless we expect that the wave function would be qualitatively different from the wave function of the moving delta well, due to the absence of a peak moving with velocity $2v$.
The time evolution of the wave function for a particle initially in the bound state of the PT potential, which suddenly starts to move
with constant velocity, is displayed in the right plot of Fig.~\ref{Fig_1}. We have used $a=m=\hbar=\kappa=1$; the probability density is plotted at four different times $t=0,5,10,15$. In this case the absence of a reflected wave is evident, as expected given the
reflectionless nature of the potential. We also observe that the peak moves at velocity $v$ and that the corresponding
wave function does not disperse.
\begin{figure}
\caption{Left plot: Probability that a particle initially in the bound state of the P\"oschl-Teller potential with $\lambda=2$ stays
trapped as the well starts to move with constant velocity (blue and green curves); the red curve is the probability that the particle ends up in a state of the continuum. Right plot: Probability density at four different times $t=0,5,10,15$ for a particle initially in the bound state of the static potential with $\lambda=2$. We have used $a=m=\hbar=\kappa=1$. }
\label{Fig_2}
\end{figure}
The plots in Fig.~\ref{Fig_2} are analogous to those in Fig.~\ref{Fig_1}, but for a potential with $\lambda=2$;
in this case the potential possesses two bound states and, as the potential starts to move, the probability of exciting
the first excited state of the well grows to a maximum (for $\kappa \approx 1.5$) and then decreases (see the left plot in Fig.~\ref{Fig_2}).
The conservation of total probability is verified numerically to hold.
The location of the maximum of the probability of exciting a different bound state approximately corresponds to the condition that
the kinetic energy of the particle is absorbed in making a transition to the other bound state
\begin{eqnarray}
- \frac{\hbar^2}{2 m a^2} \left( \mu^2 - \lambda^2 \right) = \frac{1}{2} m v^2
\end{eqnarray}
or equivalently
\begin{eqnarray}
\kappa = \sqrt{\lambda^2 -\mu^2}
\end{eqnarray}
For the case in the left plot of Fig.~\ref{Fig_2}, $\lambda = 2$ and $\mu=1$, and therefore the maximum corresponds to
$\kappa = \sqrt{3}$.
In Fig.~\ref{Fig_3} we display the time evolution of the wave function for $\lambda=2$ but for $\kappa=2$, at which the components on
the two bound states are comparable: in this case one can clearly see the asymmetric, time-dependent shape of the peak, which reflects the fact that the particle is not in a stationary state.
\begin{figure}
\caption{Probability density at four different times $t=0,5,10,15$ for a particle initially in the bound state of the static
potential with $\lambda=2$. We have used $a=m=\hbar=1$ and $\kappa=2$. }
\label{Fig_3}
\end{figure}
\subsection{Simple harmonic oscillator}
The simple harmonic oscillator
\begin{eqnarray}
V(x) = \frac{1}{2} \ m \omega^2 x^2
\end{eqnarray}
is possibly the most important example of a quantum mechanical problem for which exact solutions are known.
In this case the spectrum is discrete, with bound states of energy
\begin{eqnarray}
E_n = \hbar \omega \left(n+\frac{1}{2}\right) \ \ \ , \ \ \ n=0,1,\dots
\end{eqnarray}
and with eigenfunctions
\begin{eqnarray}
\psi_n(x) = \frac{1}{ \sqrt{2^n n!}} \left({\frac{m \omega }{\pi \hbar }}\right)^{1/4} e^{-\frac{m x^2 \omega }{2 \hbar }}
H_n\left(x \sqrt{\frac{m \omega }{\hbar}}\right)
\end{eqnarray}
where $H_n(x)$ is the Hermite polynomial of order $n$.
We define the dimensionless parameter
\begin{eqnarray}
\kappa \equiv \frac{m v^2}{2\hbar \omega}
\end{eqnarray}
representing the ratio between the kinetic energy associated with the motion of the well and a quantum of energy $\hbar \omega$.
Assuming that the particle is initially in the ground state of the static potential, the amplitudes can be obtained explicitly;
the first few amplitudes are
\begin{eqnarray}
\mathcal{Q}_{0,0}(\kappa) = e^{-\kappa /2} \ \ \ &,& \ \ \
\mathcal{Q}_{0,1}(\kappa) = -i e^{-\kappa /2} \sqrt{\kappa } \nonumber \\
\mathcal{Q}_{0,2}(\kappa) = -\frac{e^{-\kappa /2} \kappa }{\sqrt{2}} \ \ \ &,& \ \ \
\mathcal{Q}_{0,3}(\kappa) = \frac{i e^{-\kappa /2} \kappa^{3/2}}{\sqrt{6}} \nonumber
\end{eqnarray}
In this case the maximum of the probability $\left| \mathcal{Q}_{0,n}\right|^2$ of exciting the state $n$
corresponds to a velocity given by
\begin{eqnarray}
\frac{m v^2}{2} = \hbar \omega n
\end{eqnarray}
or equivalently
\begin{eqnarray}
\kappa=n
\label{sho_cond}
\end{eqnarray}
\begin{figure}
\caption{Probability of transition from the ground state to an excited state for the simple harmonic
oscillator, due to a sudden movement. The vertical lines correspond to the condition (\ref{sho_cond}).
\label{Fig_SHO}
\end{figure}
The probabilities of transition from the ground state of the SHO to an excited state, due to a sudden movement of the
well, are plotted in Fig.~\ref{Fig_SHO}. The vertical lines in the plot correspond to the condition (\ref{sho_cond}).
It is worth noticing that at $\kappa = n$, $|\mathcal{Q}_{0,n}(n)|^2 = |\mathcal{Q}_{0,n-1}(n)|^2$, for $n>1$. The proof of this property can be found in Appendix \ref{appA}.
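The recursion derived in Appendix \ref{appA}, together with $|\mathcal{Q}_{0,0}(\kappa)|^2 = e^{-\kappa}$, implies $|\mathcal{Q}_{0,n}(\kappa)|^2 = e^{-\kappa}\kappa^n/n!$, i.e. a Poisson distribution over the oscillator levels. A few lines of Python (a sketch, not part of the original analysis) make the statements above explicit.
\begin{verbatim}
import numpy as np
from math import factorial

def P(n, kappa):
    """|Q_{0,n}(kappa)|^2 = exp(-kappa) kappa^n / n!, from the recursion of appendix A."""
    return np.exp(-kappa)*kappa**n/factorial(n)

print(sum(P(n, 3.0) for n in range(60)))        # ~1: the probabilities sum to one
print(P(3, 3.0), P(2, 3.0))                     # equal at kappa = n = 3, as stated above
print(max(range(20), key=lambda n: P(n, 3.5)))  # 3: the most likely level sits near kappa
\end{verbatim}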
\subsection{Hydrogen atom}
\label{sub:hyd}
Let us now consider a hydrogen atom, initially at rest in the ground state, which is suddenly kicked and the proton
starts to move with constant velocity $\vec{v}$.
The wave function of the bound states of the (static) hydrogen atom read
\begin{eqnarray}
\psi_{nlm}(r,\theta,\phi) &=& R_{nl}(r) Y_l^m(\theta ,\phi ) \nonumber \\
&=& 2^{l+1} e^{-\frac{r}{a_0 n}} \sqrt{\frac{(-l+n-1)!}{a_0^3 n^4 (l+n)!}}
\left(\frac{r}{a_0 n}\right)^l L_{-l+n-1}^{2 l+1}\left(\frac{2 r}{n a_0}\right) Y_l^m(\theta ,\phi )
\end{eqnarray}
with $n \geq 1$, $0 \leq l \leq n-1$ and $|m| \leq l$.
Based on our general discussion, the amplitude for the transition from the ground state to any excited state reads~\footnote{Since the
quantum number for the third component of angular momentum needs to vanish, $m=0$, we express the amplitudes as functions of $n$ and $l$ alone.}
\begin{eqnarray}
\mathcal{Q}_{1,0;n,l} &=& \int e^{-i \frac{\mu v r \cos\theta}{\hbar}} \psi^\star_{n,l,0}(\vec{r}) \psi_{1,0,0}(\vec{r}) d^3r
\end{eqnarray}
where $\mu$ here is the mass of the electron.
Using the partial wave decomposition of a plane wave
\begin{eqnarray}
e^{i \frac{\mu v r \cos\theta}{\hbar}} = \sum_{l=0}^\infty (2l+1) \ i^l \ j_l\left(\frac{\mu v r}{\hbar}\right) \ P_l(\cos\theta)
\nonumber
\end{eqnarray}
one can reduce the expressions to a one dimensional integral
\begin{eqnarray}
\mathcal{Q}_{1,0;n,l}(\kappa) &=& \sqrt{2l+1} i^l \int_0^\infty j_l\left(\frac{\mu v r}{\hbar}\right) R_{10}(r) R_{nl}(r) r^2 dr
\end{eqnarray}
We have calculated explicitly the amplitudes for $1 \leq n \leq 10$; their expressions
(not reported here) depend uniquely on the dimensionless parameter
$\kappa \equiv \frac{\hbar v}{e^2/4\pi \epsilon_0} = \mu v a_0/\hbar$ ($a_0$ is the Bohr radius).
Just to get an idea, $\kappa = 1$ corresponds to a speed $v \approx 0.007892 c$; using the root mean square speed for
hydrogen gas $v_{rms} = \sqrt{3 k_B T/M}$ we can associate a temperature $T \approx 2.2 \times 10^8 \ K$. To obtain a sizeable
ionization effect on the atoms of a gas trapped in a container by means of the elastic collisions of the individual atoms with the walls of the container, one should reach incredibly high temperatures~\footnote{Of course, this argument is only qualitative since the boundary conditions for that problem would be different and the wave functions for the moving and static systems would not be related by eq.~(\ref{subst}).}.
Using these expressions we can calculate exactly the probability of a transition $(1,0) \rightarrow (n,l)$, with
$n \leq N$, due to a sudden movement with velocity $\vec{v}$~\footnote{Notice that for the hydrogen atom there is an
infinite number of bound states.}
\begin{eqnarray}
P_{n \leq N} = \sum_{n=1}^N \sum_{l=0}^{n-1} \left| \mathcal{Q}_{1,0;n,l}(v) \right|^2
\end{eqnarray}
Of course the probability of not ionizing the atom corresponds to using $N\rightarrow \infty$ in the expression above.
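The amplitudes $\mathcal{Q}_{1,0;n,l}$ are straightforward to generate symbolically from the one dimensional radial integral given above. The following sympy sketch (with $a_0=\mu=\hbar=1$, so that $\mu v r/\hbar = \kappa r$; it is not the code used for the results quoted below) computes $|\mathcal{Q}_{1,0;n,l}(\kappa)|^2$, for which the phase $i^l$ is irrelevant, and the small-$\kappa$ expansion of $P_{n \le 3}$.
\begin{verbatim}
import sympy as sp
from sympy.physics.hydrogen import R_nl

r, kappa = sp.symbols('r kappa', positive=True)

def Q2(n, l):
    """|Q_{1,0;n,l}(kappa)|^2 with a0 = mu = hbar = 1; jn is the spherical Bessel function."""
    integrand = sp.expand_func(sp.jn(l, kappa*r))*R_nl(1, 0, r)*R_nl(n, l, r)*r**2
    amplitude = sp.integrate(integrand, (r, 0, sp.oo))
    return (2*l + 1)*amplitude**2

P3 = sum(Q2(n, l) for n in range(1, 4) for l in range(n))
print(sp.series(P3, kappa, 0, 4))   # 1 - c2*kappa**2 + ..., cf. the expansions below
\end{verbatim}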
At small velocities ($\kappa \ll 1$) we may obtain the leading behavior of this probability
\begin{eqnarray}
P_{n \leq 6} &\approx& 1 -0.302617 \kappa^2-0.576334 \kappa ^4+\dots \nonumber \\
P_{n \leq 7} &\approx& 1 -0.297702 \kappa^2-0.572154 \kappa ^4 +\dots \nonumber \\
P_{n \leq 8} &\approx& 1 -0.294468 \kappa^2-0.569285 \kappa ^4 +\dots \nonumber \\
P_{n \leq 9} &\approx& 1 -0.292225 \kappa^2-0.567241 \kappa ^4 +\dots \nonumber \\
P_{n \leq 10} &\approx& 1 -0.290603 \kappa^2 - 0.565735 \kappa ^4 +\dots
\end{eqnarray}
The coefficients of the contribution of order $\kappa^2$ form a nice monotonic sequence, so that one can use extrapolation
to estimate its value for $N \rightarrow \infty$ accurately. Moreover, since these coefficients receive contributions only from
the transition $(1,0) \rightarrow (n,1)$, a larger number of coefficients can be calculated with limited effort.
We have used Richardson extrapolation on the sequence of the first $20$ coefficients obtaining
\begin{equation}
\begin{split}
- 0.&28341221595516952094 - 0.78146725925265723860 \ \frac{1}{N^2} \\
&+ 0.78146725925251190251 \ \frac{1}{N^3} +\dots \nonumber
\end{split}
\end{equation}
showing that the ionization probability for the hydrogen atom goes as
$0.28341221595516952094 \ \kappa^2$ for $\kappa \rightarrow 0$.
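For completeness, we recall how the iterated Richardson transform works: for a sequence behaving as $s_N = L + c_1/N + c_2/N^2 + \dots$, the combination $R_k = \sum_{j=0}^{k} (-1)^{j+k} (N+j)^k s_{N+j} / \big(j! (k-j)!\big)$ removes the first $k$ correction terms. The toy Python sketch below (the actual coefficient sequence is not reproduced here) illustrates the rapid convergence.
\begin{verbatim}
from fractions import Fraction
from math import factorial, pi

def richardson(s, k):
    """k-th iterated Richardson transform, using s[0..k] interpreted as s_1..s_{k+1};
    exact for sequences of the form L + c_1/N + ... + c_k/N**k."""
    return sum(Fraction((-1)**(j + k)*(1 + j)**k, factorial(j)*factorial(k - j))*s[j]
               for j in range(k + 1))

# toy sequence: partial sums of sum 1/n^2, whose limit is pi^2/6
s = [sum(Fraction(1, n*n) for n in range(1, N + 1)) for N in range(1, 25)]
for k in (3, 8, 15):
    print(k, float(richardson(s, k)) - pi**2/6)   # the error decreases rapidly with k
\end{verbatim}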
This result can be confirmed by using the wave functions of the continuum, $\Psi_{klm}(\vec{r})$.
In this case
\begin{eqnarray}
\mathcal{P}(k) &=& \int e^{-i \frac{\mu v r \cos\theta}{\hbar}} \Psi^\star_{klm}(\vec{r}) \psi_{1,0,0}(\vec{r}) d^3r
\end{eqnarray}
is the amplitude for the transition from the ground state to the continuum (i.e. its modulus squared, integrated over the continuum, gives the probability of ionizing the atom).
The direct calculation of the ionization probability requires using the continuum wave functions. These wave functions
are reported for instance in Ref.~\cite{Peng10} and read:
\begin{eqnarray}
\Psi_{klm}(r,\theta,\phi) &=& Y_{lm}(\theta,\phi) \ \frac{\phi_{kl}(r)}{r} \equiv Y_{lm}(\theta,\phi) \ R_{l}(k,r)
\end{eqnarray}
where (note that the radial wave functions $R_{l}(k,r)$ obey the $k/(2 \pi)$ normalization)
\begin{equation}
\begin{split}
\phi_{kl}(r) &= \frac{2^{l+1} (k r)^{l+1}}{a_0 r (2 l+1)!} e^{\frac{\pi }{2 a_0 k}-i k r} \left| \Gamma\left(l-\frac{i}{k a_0}+1\right)\right| \\
&\times ~_1F_1\left(l+\frac{i}{k a_0}+1;2 (l+1);2 i k r\right) \nonumber
\end{split}
\end{equation}
Using these wave functions we obtain the exact expression for the leading behavior of the ionization probability as $\kappa \rightarrow 0$
\begin{equation}
\begin{split}
\int_0^\infty & \left| \mathcal{P}(k) \right|^2 \frac{dk}{2\pi} \approx \\
& \kappa^2 \ \int_0^\infty \frac{256 \pi u
\left(\frac{u+i}{-u+i}\right)^{-i/u} \left(-1+\frac{2 i}{u+i}\right)^{i/u}
\left(\coth \left(\frac{\pi}{u}\right)+1\right)}{3 \left(u^2+1\right)^5} \frac{du}{2\pi} + O(\kappa^4) \nonumber \\
&\approx 0.28341221595516952089 \ \kappa^2 \nonumber
\end{split}
\end{equation}
The value obtained using the extrapolation of the complementary probabilities provides a very accurate estimate
(with an error approximately of $4.7 \times 10^{-20}$).
\section{Conclusions}
\label{concl}
We have extended the one dimensional model of Granot and Marchewka in Ref.~\cite{Granot09}
for an atom dragged by a moving tip (represented by a Dirac delta function) to a number of
potentials with different spectra (both discrete and mixed), in one and three dimensions.
Our calculations are based on a spectral decomposition, rather than on the direct use of the
propagator (as done in Ref.~\cite{Granot09}) and, for the case discussed in Ref.~\cite{Granot09},
we reproduce the probability, calculated by Granot and Marchewka, that the particle stays trapped.
The remaining examples that we discuss present new and interesting features, not found in the example
considered in Ref.~\cite{Granot09}: for instance, for the case of P\"oschl-Teller potentials
we show that the reflected peak moving with velocity $2v$ found in Ref.~\cite{Granot09} is absent,
due to the reflectionless nature of PT potentials; moreover, for the case of potentials with more
than one bound state, there is a probability that the particle ends up in an excited bound state, rather
than in the continuum, and we have found a simple criterion based on energy conservation to identify
the maxima of this probability. Finally, we have calculated {\sl exactly} the leading contribution
in the velocity $v$ to the probability that a hydrogen atom gets ionized due to a sudden movement
of the proton.
\appendix
\section{Amplitudes for the simple harmonic oscillator}
\label{appA}
We can understand the property $|\mathcal{Q}_{0,n}(n)|^2 = |\mathcal{Q}_{0,n-1}(n)|^2$ in terms of the creation and annihilation operators
$\hat{a}$ and $\hat{a}^\dagger$; calling $|n\rangle$ an eigenstate of the Hamiltonian
of the simple harmonic oscillator we have
\begin{eqnarray}
\hat{a}^\dagger\hat{a}|n\rangle &=& n|n\rangle \nonumber\\
\hat{a}|n\rangle &=& \sqrt{n}|n-1\rangle \nonumber\\
\hat{a}^\dagger|n\rangle &=& \sqrt{n+1}|n+1\rangle. \nonumber
\end{eqnarray}
Following Ref.~\cite{Fernandez95} we define
\begin{eqnarray}
I_{mn} \equiv \langle m | \hat{U} | n \rangle \nonumber
\end{eqnarray}
where $\hat{U} = e^{-\alpha \hat{x}}$ and $\hat{x} = \sqrt{\frac{\hbar}{2m\omega}}\left(\hat{a} + \hat{a}^{\dagger}\right)$.
Then
\begin{equation*}
\begin{split}
I_{0,n} & = \langle 0|\hat{U}|n\rangle = \langle 0|\hat{U}\frac{\hat{a}^\dagger}{\sqrt{n}}|n-1\rangle = \frac{1}{\sqrt{n}}\langle 0|\left(\hat{a}^\dagger-\alpha\sqrt{\frac{\hbar}{2m\omega}}\right)\hat{U}|n-1\rangle \\
& = -\frac{\alpha}{\sqrt{n}}\sqrt{\frac{\hbar}{2m\omega}}\langle 0|\hat{U}|n-1\rangle = -\frac{\alpha}{\sqrt{n}}\sqrt{\frac{\hbar}{2m\omega}} I_{0,n-1}.
\end{split}
\end{equation*}
On the other hand $I_{0,0} = e^{\alpha^2 \hbar/4 m\omega}$ and for $\kappa/2 = -\frac{\hbar}{4m\omega}\alpha^2$ we obtain
\begin{equation*}
I_{0,n}(\kappa) = -i\sqrt{\frac{\kappa}{n}}I_{0,n-1}(\kappa)
\end{equation*}
from which the property
\begin{equation*}
|I_{0,n}(n)|^2 = |I_{0,n-1}(n)|^2
\end{equation*}
follows.
\end{document}
\begin{document}
\title{Quadratic homogeneous polynomial maps $H$ and Keller maps $x+H$ with
$3 \le \operatorname{rk} {\mathcal{J}} H \le 4$}
\author{Michiel de Bondt}
\maketitle
\begin{abstract}
We compute by hand all quadratic homogeneous polynomial maps $H$ and all
Keller maps of the form $x + H$, for which $\operatorname{rk} {\mathcal{J}} H = 3$, over a field
of arbitrary characteristic.
Furthermore, we use computer support to compute Keller maps of the form
$x + H$ with $\operatorname{rk} {\mathcal{J}} H = 4$, namely:
\begin{compactitem}
\item all such maps in dimension $5$ over fields with $\frac12$;
\item all such maps in dimension $6$ over fields without $\frac12$.
\end{compactitem}
We use these results to prove the following over fields of arbitrary
characteristic: for Keller maps $x + H$ for which $\operatorname{rk} {\mathcal{J}} H \le 4$,
the rows of ${\mathcal{J}} H$ are dependent over the base field.
\end{abstract}
\section{Introduction}
Let $n$ be a positive integer and let $x = (x_1,x_2,\ldots,x_n)$ be an $n$-tuple
of variables.
We write $a|_{b=c}$ for the result of substituting $b$ by $c$ in $a$.
Let $K$ be any field. In the scope of this introduction, denote
by $L$ an unspecified (but big enough) field, which contains $K$ or even $K(x)$.
For a polynomial or rational map $H = (H_1,H_2,\ldots,H_m) \in L^m$, write ${\mathcal{J}} H$ or
${\mathcal{J}}_x H$ for the Jacobian matrix of $H$ with respect to $x$. So
$$
{\mathcal{J}} H = {\mathcal{J}}_x H = \left(\begin{array}{cccc}
\parder{}{x_1} H_1 & \parder{}{x_2} H_1 & \cdots & \parder{}{x_n} H_1 \\
\parder{}{x_1} H_2 & \parder{}{x_2} H_2 & \cdots & \parder{}{x_n} H_2 \\
\vdots & \vdots & \ddots & \vdots \\
\parder{}{x_1} H_m & \parder{}{x_2} H_m & \cdots & \parder{}{x_n} H_m
\end{array}\right)
$$
Denote by $\operatorname{rk} M$ the rank of a matrix $M$, whose entries are contained in $L$,
and write $\operatorname{tr}deg_K L$ for the transcendence degree of $L$ over $K$.
It is known that $\operatorname{rk} {\mathcal{J}} H \le \operatorname{tr}deg_K K(H)$ for a rational map $H$ of any
degree, with equality if $K(H) \subseteq K(x)$ is separable, in particular if
$K$ has characteristic zero. This is proved in \cite[Th.\@ 1.3]{1501.06046},
see also \cite[Ths.\@ 10, 13]{DBLP:conf/mfcs/PandeySS16}.
Let a \emph{Keller map} be a polynomial map $F \in K[x]^n$, for
which $\det {\mathcal{J}} F \in K \setminus \{0\}$.
If $H \in K[x]^n$ is homogeneous of degree at least $2$, then $x + H$ is
a Keller map, if and only if ${\mathcal{J}} H$ is nilpotent.
We say that a matrix $M \in \operatorname{Mat}_n(L)$ is \emph{similar over $K$}
to a matrix $\tilde{M} \in \operatorname{Mat}_n(L)$, if there exists a $T \in \operatorname{GL}_n(K)$ such that
$\tilde{M} = T^{-1}MT$. If ${\mathcal{J}} H$ is similar over $K$ to a triangular
matrix, say $T^{-1}({\mathcal{J}} H)T$ is a triangular matrix, then
$$
{\mathcal{J}} \big(T^{-1}H(Tx)\big) = T^{-1}({\mathcal{J}} H)|_{x=Tx}T
$$
is triangular as well.
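The notions above are easy to experiment with by computer. As a minimal illustration (a sympy sketch, not taken from this paper), consider the quadratic homogeneous map $H = (0, x_1^2, x_2^2)$ in dimension $3$: its Jacobian matrix is strictly lower triangular, hence nilpotent, and $x + H$ is a Keller map.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

H = sp.Matrix([0, x1**2, x2**2])            # quadratic homogeneous
JH = H.jacobian([x1, x2, x3])               # strictly lower triangular

print(JH**3)                                # the zero matrix: JH is nilpotent
print((X + H).jacobian([x1, x2, x3]).det()) # 1, so x + H is a Keller map
\end{verbatim}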
Section $2$ is about quadratic homogeneous maps $H$ in general, with the
focus on compositions with invertible linear maps, and the invariant
$r = \operatorname{rk} {\mathcal{J}} H$. In section $3$, a classification is given for the
case $r = 3$.
In section $4$, a classification is given for the case $r = 3$,
combined with the nilpotency of ${\mathcal{J}} H$. Nilpotency is an invariant of
conjugations with invertible linear maps, but is not an invariant of
compositions with invertible linear maps in general.
In section $5$, we compute all Keller maps $x + H$ with $H$ quadratic homogeneous,
for which $\operatorname{rk} {\mathcal{J}} H = 4$ in dimension $5$ over fields with $\frac12$,
and in dimension $6$ over fields without $\frac12$.
We use these results to prove the following over fields of arbitrary
characteristic: for Keller maps $x + H$ with $H$ quadratic homogeneous,
for which $\operatorname{rk} {\mathcal{J}} H \le 4$, the rows of ${\mathcal{J}} H$ are dependent over the base field.
\section{rank {\mathversion{bold}$r$}}
\begin{theorem} \label{rkr}
Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map, and
$r := \operatorname{rk} {\mathcal{J}} H$.
Then there are $S \in \operatorname{GL}_m(K)$ and $T \in \operatorname{GL}_n(K)$, such that
for $\tilde{H} := S H(Tx)$, only the first $\frac12 r^2 + \frac12 r$ rows
of ${\mathcal{J}} \tilde{H}$ may be nonzero, and one of the following statements holds:
\begin{enumerate}[\upshape(1)]
\item Only the first $\frac12 r^2 - \frac12 r + 1$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero;
\item $\operatorname{char} K \ne 2$ and only the first $r$ columns of ${\mathcal{J}} \tilde{H}$ are nonzero;
\item $\operatorname{char} K = 2$ and only the first $r+1$ columns of ${\mathcal{J}} \tilde{H}$ are nonzero.
\end{enumerate}
Conversely, $\operatorname{rk} {\mathcal{J}} \tilde{H} \le r$ if either $\tilde{H}$ is as in
{\upshape(2)} or {\upshape(3)}, or $1 \le r \le 2$ and $\tilde{H}$ is as in
{\upshape(1)}.
\end{theorem}
\begin{corollary} \label{rk4}
Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map.
Suppose that $r := \operatorname{rk} {\mathcal{J}} H \le 4$.
Then there are $S \in \operatorname{GL}_m(K)$ and $T \in \operatorname{GL}_n(K)$, such that
for $\tilde{H} := S H(Tx)$, only the first $\frac12 r^2 + \frac12 r$ rows
of ${\mathcal{J}} \tilde{H}$ may be nonzero, and one of the following statements holds:
\begin{enumerate}[\upshape (1)]
\item Only the first $r+1$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero;
\item $r = 4$, $\tilde{H}_i \in K[x_1,x_2,x_3]$ for all $i \ge 2$, and
$\operatorname{char} K \neq 2$;
\item $r = 4$, $\tilde{H}_i \in K[x_1,x_2,x_3,x_4,x_5^2,x_6^2,\ldots,x_n^2]$
for all $i \ge 2$, and $\operatorname{char} K = 2$;
\item Only the first $r+1$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero;
\item $r = 4$, only the leading principal minor matrix of size $6$ of
${\mathcal{J}} \tilde{H}$ is nonzero, and $\operatorname{char} K = 2$.
\end{enumerate}
\end{corollary}
\begin{lemma} \label{23}
If $\tilde{H}$ is as in {\upshape(2)} or {\upshape(3)} of
theorem {\upshape\ref{rkr}}, then theorem {\upshape\ref{rkr}}
holds for $H$.
\end{lemma}
\begin{proof}
The number of terms of degree $d$ in $x_1, x_2, \ldots, x_r$ is
$$
\binom{r-1+d}{d}
$$
and the number of square-free terms of degree $d$ in $x_1, x_2, \ldots, x_r, x_{r+1}$
is
$$
\binom{r+1}{d}
$$
both of which are equal to $\frac12 r^2 + \frac12 r$ if $d = 2$.
If $\tilde{H}$ is as in (2) of theorem \ref{rkr}, then $\operatorname{char} K \ne 2$
and the rows of ${\mathcal{J}} \tilde{H}$ are dependent over $K$ on the $\frac12 r^2 + \frac12 r$ rows of
$$
{\mathcal{J}} (x_1^2,x_1 x_2, \ldots, x_r^2)
$$
If $\tilde{H}$ is as in (3) of theorem \ref{rkr}, then $\operatorname{char} K = 2$
and the rows of ${\mathcal{J}} \tilde{H}$ are dependent over $K$ on the $\frac12 r^2 + \frac12 r$ rows of
$$
{\mathcal{J}} (x_1 x_2, x_1 x_3, \ldots, x_r x_{r+1})
$$
So at most $\frac12 r^2 + \frac12 r$ rows of ${\mathcal{J}} \tilde{H}$ as in theorem \ref{rkr}
need to be nonzero.
If $\tilde{H}$ is as in (2) of theorem \ref{rkr}, then
$\operatorname{rk} {\mathcal{J}} \tilde{H} \le r$ is direct. If $\tilde{H}$ is as in
(3) of theorem \ref{rkr}, then $\operatorname{rk} {\mathcal{J}} \tilde{H} \le r$ follows as well,
because ${\mathcal{J}} \tilde{H} \cdot x = 0$ if $\operatorname{char} K = 2$.
\end{proof}
\begin{lemma} \label{Irlem}
Let $M$ be a nonzero matrix whose entries are linear forms in $K[x]$. Suppose
that $r := \operatorname{rk} M$ does not exceed the cardinality of $K$.
Then there are invertible matrices $S$ and $T$ over $K$, such that for $\tilde{M} := S M T$,
$$
\tilde{M} = \tilde{M}^{(1)} L_1 + \tilde{M}^{(2)} L_2 + \cdots + \tilde{M}^{(n)} L_n
$$
where $\tilde{M}^{(i)}$ is a matrix with coefficients in $K$ for each $i$,
$L_1, L_2, \ldots, L_n$ are independent linear forms, and
$$
\tilde{M}^{(1)} = \left( \begin{array}{cc}
I_r & \zeromat \\ \zeromat & \zeromat \\
\end{array} \right)
$$
\end{lemma}
\begin{proof}
Since $\operatorname{rk} M = r \ge 1$, $M$ has a minor matrix of size $r \times r$ whose determinant
is nonzero. Assume without loss of generality that the determinant of the leading
principal minor matrix of size $r \times r$ of $M$ is nonzero. Then this determinant
$f$ is a homogeneous polynomial of degree $r$. From \cite[Lemma 5.1 (ii)]{1310.7843},
it follows that there exists a $v \in K^n$ such that $f(v) \ne 0$.
Take independent linear forms $L_1, L_2, \ldots, L_n$ such that $L_i(v) = 0$
for all $i \ge 2$. Then $L_1(v) \ne 0$, and we may assume that $L_1(v) = 1$.
We can write
$$
M = M^{(1)} L_1 + M^{(2)} L_2 + \cdots + M^{(n)} L_n
$$
where $M^{(i)}$ is a matrix with coefficients in $K$ for each $i$.
If we substitute $x = v$ on both sides, we see that $\operatorname{rk} M^{(1)} \le r$ and that
the leading principal minor matrix of size $r \times r$ of $M^{(1)}$ has a
nonzero determinant.
So $\operatorname{rk} M^{(1)} = r$, and we can choose invertible matrices $S$ and $T$ over $K$,
such that
$$
S M^{(1)} T = \left( \begin{array}{cc}
I_r & \zeromat \\ \zeromat & \zeromat \\
\end{array} \right)
$$
So we can take $\tilde{M}^{(i)} = S M^{(i)} T$ for each $i$.
\end{proof}
Suppose that $\tilde{M}$ is as in lemma \ref{Irlem}. Write
\begin{equation} \label{ABCD}
\tilde{M} = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right)
\end{equation}
where $A$ is the upper left block of size $r \times r$. If we extend $A$ with one more row and one more
column of $\tilde{M}$, we get an element of $\operatorname{Mat}_{r+1}\big(K(x)\big)$ whose
determinant is zero. If we focus on the coefficients of $L_1^r$
and $L_1^{r-1}$ in this determinant, we see that
\begin{equation} \label{D0CB0}
D = 0 \qquad \mbox{and} \qquad C \cdot B = 0
\end{equation}
respectively. In particular,
\begin{equation} \label{rkCrkBler}
\operatorname{rk} C + \operatorname{rk} B \le r
\end{equation}
\begin{lemma} \label{Ccoldep1}
Let $\tilde{H} \in K[x]^m$, such that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in
lemma \ref{Irlem}.
Suppose that either $\operatorname{char} K \ne 2$ or the rows of $B$ are dependent over $K$.
Then the following hold.
\begin{enumerate}[\upshape (i)]
\item The columns of $C$ in \eqref{ABCD} are dependent over $K$.
\item If $C \ne 0$, then there exists a $v \in K^n$ of which the first $r$
coordinates are not all zero, such that
$$
({\mathcal{J}} \tilde{H}) \cdot v = \left( \begin{array}{cc} I_r & \zeromat \\
\zeromat & \zeromat \end{array} \right) \cdot x
$$
\end{enumerate}
\end{lemma}
\begin{proof}
The claims are direct if $C = 0$, so assume that $C \ne 0$. Then there exists
an $i > r$, such that $\tilde{H}_i \ne 0$.
\begin{enumerate}[\upshape (i)]
\item Take $v$ as in (ii). From $D = 0$, we deduce that $C \cdot v' = (C | D) \cdot v = 0$
for some nonzero $v' \in K^r$, which yields (i).
\item Take $v$ as in lemma \ref{Irlem}, and write $v = (v',v'')$,
such that $v' \in K^r$ and $v'' \in K^{n-r}$. Since $\tilde{H}$ is quadratic
homogeneous, we have
$$
({\mathcal{J}} \tilde{H}) \cdot v = ({\mathcal{J}} \tilde{H})|_{x=v} \cdot x = \tilde{M}^{(1)} \cdot x
= \left( \begin{array}{cc} I_r & \zeromat \\ \zeromat & \zeromat \end{array} \right) \cdot x
$$
So it remains to show that $v' \ne 0$. We distinguish two cases:
\begin{itemize}
\item \emph{the rows of $B$ are dependent over $K$.}
Take $w \in K^r$ nonzero, such that $w^{\rm t} B = 0$. Then
$$
w^{\rm t} A\,v' = w^{\rm t} A\,v' + w^{\rm t} B\,v'' = w^{\rm t} (A|B)\,v
= w^{\rm t} \left( \begin{smallmatrix} x_1 \\ x_2 \\[-5pt] \vdots \\ x_r \end{smallmatrix} \right)
= \sum_{i=1}^r w_i x_i \ne 0
$$
so $v' \ne 0$.
\item \emph{$\operatorname{char} K \ne 2$.}
From $CB = 0$, we deduce that
$$
CA\,v' = CA\,v' + CB\,v'' = C\,(A|B)\,v
= C \left( \begin{smallmatrix} x_1 \\ x_2 \\[-5pt] \vdots \\ x_r \end{smallmatrix} \right)
= 2 \left( \begin{smallmatrix} \tilde{H}_{r+1} \\ \tilde{H}_{r+2} \\[-5pt] \vdots \\
\tilde{H}_{m} \end{smallmatrix} \right)
$$
As $\tilde{H}_{i} \ne 0$ for some $i > r$, the right-hand side is nonzero, so $v' \ne 0$. \qedhere
\end{itemize}
\end{enumerate}
\end{proof}
\begin{lemma} \label{BCrkr}
Let $\tilde{H} \in K[x]^m$, such that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in
lemma \ref{Irlem}.
Suppose that $\operatorname{rk} B + \operatorname{rk} C = r$ and that the columns of $C$ are dependent
over $K$. Then the column space of $B$ contains a nonzero constant vector.
\end{lemma}
\begin{proof}
From $\operatorname{rk} C + \operatorname{rk} B = r$ and $CB = 0$, we deduce that $\ker C$ is equal
to the column space of $B$. Hence any $v' \in K^r$
such that $C v' = 0$ is contained in the column space of $B$.
\end{proof}
\begin{proof}[Proof of theorem \ref{rkr}]
From lemma \ref{Fqlem} below, it follows that we may assume that $K$ has at least
$r$ elements. Let $M = {\mathcal{J}} H$ and take $S$ and $T$ as in lemma \ref{Irlem}.
Then $S ({\mathcal{J}} H) T$ is as $\tilde{M}$ in lemma \ref{Irlem}. Let $\tilde{H} := SH(Tx)$.
Then ${\mathcal{J}} \tilde{H} = S ({\mathcal{J}} H) |_{x=Tx} T$ is as $\tilde{M}$ in lemma \ref{Irlem}
as well, but for different linear forms $L_i$.
Take $\tilde{M} = {\mathcal{J}} \tilde{H}$ and take $A$, $B$, $C$, $D$ as in \eqref{ABCD}.
We distinguish four cases:
\begin{itemize}
\item \emph{The column space of $B$ contains a nonzero constant vector.}
Then there exists an $U \in \operatorname{GL}_m(K)$, such that the column space of
$U \tilde{M}$ contains $e_1$, because $D = 0$. Consequently, the matrix which
consists of the last $m-1$ rows of ${\mathcal{J}} (U \tilde{H}) = U \tilde{M}$ has rank $r-1$.
By induction on $r$, it follows that we can choose $U$ such that
only
$$
\tfrac12(r-1)^2 + \tfrac12(r-1) = (\tfrac12 r^2 - r + \tfrac12) +
(\tfrac12 r - \tfrac12) = \tfrac12 r^2 - \tfrac12 r
$$
rows of ${\mathcal{J}} (U \tilde{H})$ are nonzero besides the first row of
${\mathcal{J}} (U \tilde{H})$. So $U \tilde{H}$ is as $\tilde{H}$ in (1) of
theorem \ref{rkr}.
\item \emph{The rows of $B$ are dependent over $K$ in pairs.}
If $B \ne 0$, then the column space of $B$ contains a nonzero
constant vector, and the case above applies because $D = 0$.
So assume that $B = 0$. Then only the first $r$ columns of
${\mathcal{J}} \tilde{H}$ may be nonzero, because $D = 0$. Since
$\operatorname{rk} {\mathcal{J}} \tilde{H} = r$, the first $r$ columns of ${\mathcal{J}} \tilde{H}$
are indeed nonzero.
Furthermore, it follows from ${\mathcal{J}} \tilde{H} \cdot x = 2 \tilde{H}$
that $\operatorname{char} K \neq 2$. So $\tilde{H}$ is as in (2) of theorem \ref{rkr},
and the result follows from lemma \ref{23}.
\item \emph{$\operatorname{char} K = 2$ and $\operatorname{rk} B \le 1$.}
If the rows of $B$ are dependent over $K$ in pairs, then the second case
above applies, so assume that the rows of $B$ are not dependent over $K$
in pairs.
On account of \cite[Theorem 2.1]{1601.00579},
the columns of $B$ are dependent over $K$ in pairs. As $D = 0$, there exists an
$U'' \in \operatorname{GL}_{n-r}(K)$ such that only the first column
of
$$
\binom{B}{D} U''
$$
may be nonzero. Hence there exists an $U \in \operatorname{GL}_n(K)$
such that only the first $r+1$ columns of $({\mathcal{J}} \tilde{H})\, U$
may be nonzero. Consequently, $\tilde{H}(Ux)$ is as
$\tilde{H}$ in (3) of theorem \ref{rkr}, and the result follows from lemma \ref{23}.
\item \emph{None of the above.}
We first show that $\operatorname{rk} C \le r - 2$. So assume that $\operatorname{rk} C \ge r - 1$.
From $\operatorname{rk} C + \operatorname{rk} B \le r$, it follows that $\operatorname{rk} B \le 1$. As the last case
above does not apply, $\operatorname{char} K \ne 2$. From (i) of lemma \ref{Ccoldep1}, it follows
that the columns of $C$ are dependent over $K$. As the first case above does not
apply, it follows from lemma \ref{BCrkr} that $\operatorname{rk} C + \operatorname{rk} B < r$.
So $\operatorname{rk} B = 0$ and the rows of $B$ are dependent over $K$ in pairs,
which is the second case above, and a contradiction.
So $\operatorname{rk} C \le r - 2$ indeed.
By induction on $r$, it follows that $C$ needs to have at most
$$
\tfrac12 (r-2)^2 + \tfrac12 (r-2) = (\tfrac12 r^2 - 2r + 2) + (\tfrac12 r - 1)
= \tfrac12 r^2 - \tfrac32 r + 1
$$
nonzero rows. As $A$ has $r$ rows, there exists an $U \in \operatorname{GL}_m(K)$ such that
$U\tilde{H}$ is as $\tilde{H}$ in (1) of theorem \ref{rkr}.
\end{itemize}
The last claim of theorem \ref{rkr} follows from lemma \ref{23}
and the fact that $\frac12 r^2 - \frac12 r + 1 = r$ if $1 \le r \le 2$.
\end{proof}
\begin{lemma} \label{Fqlem}
Let $L$ be an extension field of $K$. If theorem {\upshape\ref{rkr}}
holds for $L$ instead of $K$, then theorem {\upshape\ref{rkr}} holds.
\end{lemma}
\begin{proof}
We only prove lemma \ref{Fqlem} for the first claim of theorem \ref{rkr},
because the second claim can be treated in a similar manner, and the last
claim does not depend on the actual base field.
Suppose $H$ satisfies the first claim of theorem \ref{rkr},
but with $L$ instead of $K$. If $m \le \frac12 r^2 + \frac12 r$, then
only the first $\frac12 r^2 + \frac12 r$ rows of ${\mathcal{J}} H$ may be nonzero,
and $H$ satisfies the first claim of theorem \ref{rkr}.
So assume that $m > \frac12 r^2 + \frac12 r$. Then the rows of ${\mathcal{J}} H$ are
dependent over $L$. Since $L$ is a vector space over $K$,
the rows of ${\mathcal{J}} H$ are dependent over $K$. So we
may assume that the last row of ${\mathcal{J}} H$ is zero. By induction on $m$,
$(H_1,H_2,\ldots,H_{m-1})$ satisfies the first claim of
theorem \ref{rkr}. As $H_m = 0$,
we conclude that $H$ satisfies the first claim of theorem \ref{rkr}.
\end{proof}
\begin{proof}[Proof of corollary \ref{rk4}]
If (2) or (3) of theorem \ref{rkr} applies, then (4) of corollary \ref{rk4}
follows. So assume that (1) of theorem \ref{rkr} applies. Then only the
first $\tfrac12 r^2 - \tfrac12 r + 1$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero.
Suppose first that $r \le 3$. Then
$$
\tfrac12 r^2 - \tfrac12 r + 1 \le \tfrac32 r - \tfrac12 r + 1 = r + 1
$$
and corollary \ref{rk4} follows.
Suppose next that $r = 4$.
Take $\tilde{M} = {\mathcal{J}} \tilde{H}$ and take $A$, $B$, $C$, $D$ as in \eqref{ABCD}.
We distinguish four cases.
\begin{itemize}
\item \emph{The column space of $B$ contains a nonzero constant vector.}
Then there exists an $U \in \operatorname{GL}_m(K)$, such that the column space of
$U \tilde{M}$ contains $e_1$. So the matrix which consists of the last
$m-1$ rows of ${\mathcal{J}} (U \tilde{H}) = U \tilde{M}$ has rank $r-1$.
Make $\tilde{U}$ from $U$ by replacing its first row by the zero row.
Then $\operatorname{rk} {\mathcal{J}} (\tilde{U} \tilde{H}) = r - 1$, and we can apply theorem
\ref{rkr} to $\tilde{U} \tilde{H}$.
\begin{compactitem}
\item If case (1) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$,
then case (1) of corollary \ref{rk4} follows, because
$$
\tfrac12 (r-1)^2 - \tfrac12 (r-1) + 1 = \tfrac92 - \tfrac32 + 1 = 4 = r
$$
\item If case (2) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$,
then case (2) of corollary \ref{rk4} follows.
\item If case (3) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$,
then case (3) of corollary \ref{rk4} follows.
\end{compactitem}
\item \emph{$\operatorname{rk} B \le 1$.}
If the columns of $B$ are dependent over $K$ in pairs, then (4) of corollary
\ref{rk4} is satisfied. So assume that the columns of $B$ are not dependent over
$K$ in pairs. Then $B \ne 0$, and from \cite[Theorem 2.1]{1601.00579},
it follows that the rows of $B$ are dependent over $K$ in pairs.
Hence the first case above applies.
\item \emph{$\operatorname{rk} C \le 1$.}
Then it follows from theorem \ref{rkr} that at most one row of $C$ needs to
be nonzero. So (1) of corollary \ref{rk4} is satisfied.
\item \emph{None of the above.}
We first show that $\operatorname{rk} C = \operatorname{rk} B = 2$ and that the columns
of $C$ are independent over $K$.
Since $\operatorname{rk} C + \operatorname{rk} B \le r = 4$, we deduce from $\operatorname{rk} B > 1$ and $\operatorname{rk} C > 1$
that $\operatorname{rk} C = \operatorname{rk} B = 2$. So $\operatorname{rk} C + \operatorname{rk} B = 4 = r$. As the first case above
does not apply, it follows from lemma \ref{BCrkr} that the columns
of $C$ are independent over $K$.
From (i) of lemma \ref{Ccoldep1}, we deduce that
$\operatorname{char} K = 2$ and that the rows of $B$ are independent over $K$.
Since the $\operatorname{rk} C + 2$ columns of $C$ are independent over $K$, it follows from
theorem \ref{rkr} that $C$ needs to have at most
$$
\tfrac12 \cdot 2^2 - \tfrac12 \cdot 2 + 1 = 2 - 1 + 1 = 2
$$
nonzero rows.
Since the $\operatorname{rk} B + 2$ rows of $B$ are independent over $K$, it follows from
\cite[Theorem 2.3]{1601.00579} that $B$ needs to have at most
$2$ nonzero columns, because the first case above does not apply.
So (5) of corollary \ref{rk4} is satisfied.
\qedhere
\end{itemize}
\end{proof}
The last case in corollary \ref{rk4} is indeed necessary,
e.g.\@ ${\mathcal{J}} \tilde{H} = {\mathcal{H}} (x_1 x_2 x_3 + x_4 x_5 x_6)$, or
${\mathcal{J}} \tilde{H} = {\mathcal{H}} (x_1 x_2 x_3 + x_1 x_5 x_6 + x_4 x_2 x_6 + x_4 x_5 x_3)$.
\section{rank 3}
\begin{theorem} \label{rk3}
Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map, such that
$r := \operatorname{rk} {\mathcal{J}} H = 3$. Then we can choose $S \in \operatorname{GL}_m(K)$ and $T \in \operatorname{GL}_n(K)$,
such that for $\tilde{H} := S H(Tx)$, one of the following statements holds:
\begin{enumerate}[\upshape(1)]
\item Only the first $3$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero;
\item Only the first $4$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, and
$$
(\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4) =
(\tilde{H}_1, \tfrac12 x_1^2, x_1 x_2, \tfrac12 x_2^2)
$$
(in particular, $\operatorname{char} K \ne 2$);
\item Only the first $4$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero,
$$
{\mathcal{J}} (\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4) =
{\mathcal{J}} (\tilde{H}_1, x_1 x_2, x_1 x_3, x_2 x_3)
$$
and $\operatorname{char} K = 2$;
\item $\tilde{H}$ is as in {\upshape(2)} or {\upshape(3)} of theorem
{\upshape\ref{rkr}};
\item Only the first $4$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, and
$$
\big(\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4\big) =
\big( x_1 x_3 + c x_2 x_4, x_2 x_3 - x_1 x_4,
\tfrac12 x_3^2 + \tfrac{c}2 x_4^2, \tfrac12 x_1^2 + \tfrac{c}2 x_2^2 \big)
$$
for some nonzero $c \in K$ (in particular, $\operatorname{char} K \ne 2$).
\end{enumerate}
Conversely, $\operatorname{rk} {\mathcal{J}} \tilde{H} \le 3$ in each of the five statements above.
\end{theorem}
\begin{corollary} \label{rktrdeg}
Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map, such that
$\operatorname{rk} {\mathcal{J}} H \le 3$. If $\operatorname{char} K \neq 2$, then $\operatorname{rk} {\mathcal{J}} H = \operatorname{tr}deg_K K(H)$.
\end{corollary}
\begin{proof}
Since $\operatorname{rk} {\mathcal{J}} H \le \operatorname{tr}deg_K K(H)$, it suffices to show that
$\operatorname{tr}deg_K K(H) \le 3$ if $\operatorname{char} K \neq 2$.
In (5) of theorem \ref{rk3}, we have $\operatorname{tr}deg_K K(H) \le 3$ because
$$
\tilde{H}_1^2 + c \tilde{H}_2^2 - 4 \tilde{H}_3 \tilde{H}_4 = 0
$$
In the other cases of theorem \ref{rk3} where $\operatorname{char} K \neq 2$,
$\operatorname{tr}deg_K K(H) \le 3$ follows directly.
\end{proof}
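For the record, the map in {\upshape(5)} of theorem \ref{rk3} and the relation used in the proof above can be verified directly by computer; the following sympy sketch (with a generic parameter $c$; it is not part of the paper's computations) checks that $\operatorname{rk} {\mathcal{J}} \tilde{H} = 3$ and that $\tilde{H}_1^2 + c \tilde{H}_2^2 - 4 \tilde{H}_3 \tilde{H}_4 = 0$.
\begin{verbatim}
import sympy as sp

x1, x2, x3, x4, c = sp.symbols('x1 x2 x3 x4 c')

H = sp.Matrix([x1*x3 + c*x2*x4,
               x2*x3 - x1*x4,
               sp.Rational(1, 2)*x3**2 + c/2*x4**2,
               sp.Rational(1, 2)*x1**2 + c/2*x2**2])
J = H.jacobian([x1, x2, x3, x4])

print(sp.expand(J.det()))                            # 0, hence rk J H <= 3
print(sp.factor(J[1:, 1:].det()))                    # nonzero 3x3 minor, hence rk J H = 3
print(sp.expand(H[0]**2 + c*H[1]**2 - 4*H[2]*H[3]))  # 0: the relation from the proof
\end{verbatim}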
\begin{lemma} \label{Ccoldep2}
Let $\tilde{H} \in K[x]^m$, such that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in
lemma \ref{Irlem}.
If $\operatorname{rk} C = 1$ in \eqref{ABCD} and $r$ is odd, then the columns of $C$ in
\eqref{ABCD} are dependent over $K$.
\end{lemma}
\begin{proof}
The case where $\operatorname{char} K \ne 2$ follows from (i) of lemma \ref{Ccoldep1}, so
assume that $\operatorname{char} K = 2$.
Since $\operatorname{rk} C = 1 = \frac12 \cdot 1^2 + \frac12 \cdot 1$, we deduce from
theorem \ref{rkr} that the rows of $C$ are dependent over $K$ in pairs.
Say that the first row of $C$ is nonzero.
As $r$ is odd, it follows from proposition \ref{evenrk} below that
$\operatorname{rk} {\mathcal{H}} \tilde{H}_{r+1} < r$. Hence there exists a nonzero $v' \in K^r$
such that $({\mathcal{H}} \tilde{H}_{r+1}) \, v' = 0$, and
$$
({\mathcal{J}} \tilde{H}_{r+1}) \, v' = x^{\rm t} ({\mathcal{H}} \tilde{H}_{r+1}) \, v' = 0
$$
The row space of $C$ is spanned by ${\mathcal{J}} \tilde{H}_{r+1}$,
so $C\,v' = 0$.
\end{proof}
\begin{proposition} \label{evenrk}
Let $M \in \operatorname{Mat}_{n,n}(K)$ be either a symmetric matrix, or an anti-symmetric
matrix with zeroes on the diagonal.
Then there exists a lower triangular matrix $T \in \operatorname{Mat}_{n,n}(K)$ with
ones on the diagonal, such that $T^{\rm t} M T$ is the product of a symmetric
permutation matrix and a diagonal matrix.
In particular, $\operatorname{rk} M$ is even if $M$ is an anti-symmetric
matrix with zeroes on the diagonal.
\end{proposition}
\begin{proof}
We describe an algorithm to transform $M$ to the product of a symmetric
permutation matrix and a diagonal matrix. We distinguish three cases.
\begin{itemize}
\item \emph{The last column of $M$ is zero.}
Advance with the principal minor of $M$ that we get by removing row and
column $n$.
\item \emph{The entry in the lower right corner of $M$ is nonzero.}
Use $M_{nn}$ as a pivot to clean the rest of the last column and the last row
of $M$. Advance with the principal minor of $M$ that we get by removing row
and column $n$.
\item \emph{None of the above.}
Let $i$ be the index of the lowest nonzero entry in the last column of $M$.
Use $M_{in}$ and $M_{ni}$ as pivots to clean the rest of columns $i$ and $n$
of $M$ and rows $i$ and $n$ of $M$. Advance with the principal minor of $M$
that we get by removing rows and columns $i$ and $n$.
\end{itemize}
If $M$ is an anti-symmetric matrix with zeroes on the diagonal, then so is
$T^{\rm t} M T$. Combining this with the fact that $T^{\rm t} M T$ is the product of a symmetric
permutation matrix and a diagonal matrix, we infer the last claim.
\end{proof}
Notice that in characteristic $2$, the Hessian
matrix of any polynomial is \mbox{(anti-)}\allowbreak symmetric with zeroes on the diagonal,
and hence has even rank by extension of scalars. Furthermore, one can show
that any nondegenerate quadratic form in odd dimension $r$ over a perfect field
of characteristic $2$ is equivalent to
$$
x_1 x_2 + x_3 x_4 + \cdots + x_{r-2} x_{r-1} + x_r^2
$$
\begin{corollary} \label{symdiag}
Let $\operatorname{char} K \neq 2$ and $M \in \operatorname{Mat}_{n,n}(K)$ be a symmetric matrix.
Then there exists a $T \in \operatorname{GL}_{n}(K)$, such that $T^{\rm t} M T$ is
a diagonal matrix.
\end{corollary}
\begin{proof}
Notice that the permutation matrix $P$ in proposition \ref{evenrk}
only has cycles of length $1$ and $2$, because just like $M$,
the product of $P$ and the diagonal matrix $D$
is symmetric. Furthermore, for every cycle of length $2$, the
entries on the diagonal of $D$ which correspond to the two
coordinates of that cycle are equal. Since
$$
\left( \begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array} \right)
\left( \begin{array}{cc} 0 & c \\ c & 0 \end{array} \right)
\left( \begin{array}{cc} 1 & -1 \\ 1 & 1 \end{array} \right) =
\left( \begin{array}{cc} 2c & 0 \\ 0 & -2c \end{array} \right)
$$
we can get rid of the cycles of length $2$ in $P$.
\end{proof}
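The reduction used in proposition \ref{evenrk} and corollary \ref{symdiag} is constructive. The following sympy sketch (over the rationals, so in particular $\operatorname{char} K \ne 2$; it is not the paper's algorithm verbatim and does not keep $T$ lower triangular) produces a $T$ with $T^{\rm t} M T$ diagonal for a symmetric matrix $M$, in the spirit of corollary \ref{symdiag}.
\begin{verbatim}
import sympy as sp

def symmetric_diagonalize(M):
    """Return T with T.T * M * T diagonal, for a symmetric M over the rationals."""
    n = M.shape[0]
    A, T = sp.Matrix(M), sp.eye(n)
    for j in range(n):
        if A[j, j] == 0:
            k = next((k for k in range(j + 1, n) if A[k, k] != 0), None)
            if k is not None:                 # swap a nonzero diagonal entry into place (j, j)
                P = sp.eye(n); P[j, j] = P[k, k] = 0; P[j, k] = P[k, j] = 1
                A, T = P.T*A*P, T*P
            else:
                k = next((k for k in range(j + 1, n) if A[j, k] != 0), None)
                if k is None:
                    continue                  # row and column j are already cleared
                E = sp.eye(n); E[k, j] = 1    # add row/column k to row/column j;
                A, T = E.T*A*E, T*E           # now A[j, j] = 2*A[j, k] != 0 since char != 2
        for i in range(j + 1, n):
            if A[i, j] != 0:
                E = sp.eye(n); E[j, i] = -A[i, j]/A[j, j]   # clear A[i, j] and A[j, i]
                A, T = E.T*A*E, T*E
    return T

M = sp.Matrix([[0, 1, 2], [1, 0, 3], [2, 3, 5]])
T = symmetric_diagonalize(M)
print(T.T*M*T)   # a diagonal matrix
\end{verbatim}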
\begin{lemma} \label{rk3trafo}
Suppose that $H \in K[x_1,x_2,x_3,x_4]^4$, such that
$$
{\mathcal{J}} H_4 = (\, x_1 ~ c x_2 ~ 0 ~ 0 \,)
\qquad \mbox{and} \qquad
{\mathcal{J}} H \cdot v = \left( \begin{smallmatrix}
x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right)
$$
for some nonzero $c \in K$, and a $v \in K^4$ of which
the first $3$ coordinates are not all zero.
Suppose in addition that $\det {\mathcal{J}} H = 0$ and that the last column of
${\mathcal{J}} H$ does not generate a nonzero constant vector.
Then there are $S, T \in \operatorname{GL}_4(K)$, such that for
$\tilde{H} := S H(Tx)$, ${\mathcal{J}} \tilde{H}$ is of the form
$$
{\mathcal{J}} \tilde{H} =
\left( \begin{array}{cccc}
A_{11} & A_{12} & x_1 & c x_2 \\
A_{21} & A_{22} & x_2 & -x_1 \\
A_{31} & A_{32} & x_3 & \tilde{c}x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
$$
where the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ are zero.
\end{lemma}
\begin{proof}
Since the last row of ${\mathcal{J}} H$ is $(\, x_1 ~ c x_2 ~ 0 ~ 0 \,)$
and the last coordinate of ${\mathcal{J}} \tilde{H} \cdot v$ is zero, we deduce that
$v_1 = v_2 = 0$. As the first $3$ coordinates of $v$ are not all zero,
we conclude that $v_3 \ne 0$.
Make $S$ from $v_3 I_4$ by changing column $4$ to $e_4$.
Make $T$ from $I_4$ by changing column $3$ to $v_3^{-1} v$, i.e.\@
replacing the last entry of column $3$ by $v_3^{-1} v_4$, and by replacing
the last column by an arbitrary scalar multiple of $e_4$. Then
\begin{align*}
({\mathcal{J}} \tilde{H}) \cdot e_3
&= S\, ({\mathcal{J}} H)|_{x=Tx} \cdot T e_3
= \big(S\,({\mathcal{J}} H) \cdot v_3^{-1} v\big)\big|_{x=Tx} \\
&= v_3^{-1} S\, \left.
\left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right)
\right|_{x=Tx}
= \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right)
\end{align*}
and
$$
\tilde{H}_4 = H_4(Tx) = H_4
$$
So the third column of ${\mathcal{J}} \tilde{H}$ is as claimed. It follows that
${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in lemma \ref{Irlem}, with
$L_1 = x_3$, $L_2 = x_2$, $L_3 = x_1$, and $L_4 = x_4$. Define
$A$, $B$, $C$ and $D$ as in \eqref{ABCD}.
Just like the last column of ${\mathcal{J}} H$, the last column of ${\mathcal{J}} \tilde{H}$ does
not generate a nonzero constant vector. Consequently, $B_{11}$ and $B_{21}$ are not both
zero. From \eqref{D0CB0}, it follows that $C_{11} B_{11} + C_{12} B_{21} = 0$.
So we can choose the last column of $T$, such that $B_{11} = C_{12} = c x_2$
and $B_{21} = -C_{11} = -x_1$.
The coefficient of $x_3$ in $B_{31}$ is zero, and by changing the third row
of $I_4$ on the left of the diagonal in a proper way, we can get an
$U \in \operatorname{GL}_4$ such that
$$
U \binom{B}{D} = \left( \begin{smallmatrix}
B_{11} \\ B_{21} \\ \tilde{c} x_4 \\ D_{11}
\end{smallmatrix} \right) = \left( \begin{smallmatrix}
c x_2 \\ - x_1 \\ \tilde{c} x_4 \\ 0
\end{smallmatrix} \right)
$$
for some $\tilde{c} \in K$. Since $U^{-1}$ can be obtained
by changing the third row of $I_4$ on the left of the diagonal
in a proper way as well, we infer that
\begin{align*}
{\mathcal{J}} \big(U^{-1} \tilde{H} (Ux)\big) \cdot e_3
&= U^{-1}\, ({\mathcal{J}} \tilde{H})|_{x = Ux} \,U \cdot e_3
= U^{-1}\, ({\mathcal{J}} \tilde{H})|_{x = Ux} \cdot e_3 \\
&= U^{-1} \left.
\left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right)
\right|_{x=Ux}
= \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right)
\end{align*}
and
$$
(U^{-1})_4\, \tilde{H} (Ux) = \tilde{H}_4 (Ux) = \tilde{H}_4
$$
So if we replace $S$ and $T$ by $U^{-1} S$ and $TU$ respectively, then $\tilde{H}$ will
be replaced by $U^{-1} \tilde{H}(Ux)$.
So we can get $B_{31}$ of the form $\tilde{c}x_4$ for some $\tilde{c} \in K$.
Finally, we can clean the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$
with row operations, using $C_{11}$ as a pivot. So we can get the coefficients of
$x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ equal to zero.
\end{proof}
\begin{lemma} \label{rk3calc}
Suppose that $\det {\mathcal{J}} \tilde{H} = 0$ and ${\mathcal{J}} \tilde{H}$ is of the form
$$
\left( \begin{array}{cccc}
A_{11} & A_{12} & x_1 & c x_2 \\
A_{21} & A_{22} & x_2 & -x_1 \\
A_{31} & A_{32} & x_3 & \tilde{c}x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
$$
where $c,\tilde{c} \in K$, such that $c \neq 0$. If the coefficients of
$x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ are zero, then
$$
\tilde{H} = \left( \begin{array}{c}
x_1 x_3 + c x_2 x_4 \\ x_2 x_3 - x_1 x_4 \\
\frac12 x_3^2 + \frac{c}2 x_4^2 \\ \frac12 x_1^2 + \frac{c}2 x_2^2
\end{array} \right)
\qquad \mbox{and} \qquad
{\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc}
x_3 & c x_4 & x_1 & c x_2 \\
- x_4 & x_3 & x_2 & -x_1 \\
0 & 0 & x_3 & c x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
$$
\end{lemma}
\begin{proof}
Let $a_i$ and $b_i$ be the coefficients of $x_1$ and $x_2$ in
$A_{i2}$ respectively, for each $i \le 3$. Then
\begin{gather*}
\tilde{H} = \left( \begin{array}{c}
a_1 x_1 x_2 + \frac12 b_1 x_2^2 + x_1 x_3 + c x_2 x_4 \\
a_2 x_1 x_2 + \frac12 b_2 x_2^2 + x_2 x_3 - x_1 x_4 \\
a_3 x_1 x_2 + \frac12 b_3 x_2^2 + \frac12 x_3^2 + \frac{\tilde{c}}2 x_4^2 \\
\frac12 x_1^2 + \frac{c}2 x_2^2
\end{array} \right)
\qquad \mbox{and} \\
{\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc}
a_1 x_2 + x_3 & a_1 x_1 + b_1 x_2 + c x_4 & x_1 & c x_2 \\
a_2 x_2 - x_4 & a_2 x_1 + b_2 x_2 + x_3 & x_2 & -x_1 \\
a_3 x_2 & a_3 x_1 + b_3 x_2 & x_3 & \tilde{c} x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
\end{gather*}
Consequently, it suffices to show that $a_i = b_i = 0$ for each $i \le 3$,
and that $\tilde{c} = c$.
Since the coefficient of $x_1^4$ in $\det {\mathcal{J}} \tilde{H}$ is zero,
we see by expansion along rows $3$, $4$, $1$, in that order,
that $a_3 x_1 = 0$. Hence the third row of ${\mathcal{J}} \tilde{H}$ reads
$$
{\mathcal{J}} \tilde{H}_3 = (\, 0 ~ b_3 x_2 ~ x_3 ~ \tilde{c} x_4 \,)
$$
Since the coefficients of $x_1^3 x_2$ and $x_1^3 x_3$ in $\det {\mathcal{J}} \tilde{H}$
are zero, we see by expansion along rows $3$, $4$, $1$, in that order, that
$b_3 x_2 = a_1 x_1 = 0$. Hence the third row of ${\mathcal{J}} \tilde{H}$ reads
$$
{\mathcal{J}} \tilde{H}_3 = (\, 0 ~ 0 ~ x_3 ~ \tilde{c} x_4 \,)
$$
Since the coefficient of $x_2^3 x_3$ in $\det {\mathcal{J}} \tilde{H}$
is zero, we see by expansion along rows $3$, $4$, $2$, in that order, that
$a_2 x_2 = 0$. So
$$
{\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc}
x_3 & b_1 x_2 + c x_4 & x_1 & c x_2 \\
- x_4 & b_2 x_2 + x_3 & x_2 & -x_1 \\
0 & 0 & x_3 & \tilde{c} x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
$$
Since the coefficient of $x_1^2 x_3 x_4$ in $\det {\mathcal{J}} \tilde{H}$
is zero, we see by expansion along row $3$, and columns $2$ and $1$,
in that order, that $\tilde{c} x_4 = c x_4$.
Since the coefficient of $x_1 x_2^2 x_3$ in $\det {\mathcal{J}} \tilde{H}$
is zero, we see by expansion along row $3$, and columns $1$, $4$, in that order,
that $b_2 x_2 = 0$. Using that and that the coefficient of $x_1^2 x_2 x_3$ in
$\det {\mathcal{J}} \tilde{H}$ is zero, we see by expansion along row $3$, and columns
$1$, $2$, in that order, that $b_1 x_2 = 0$. So $\tilde{H}$ is as claimed.
\end{proof}
\begin{proof}[Proof of theorem \ref{rk3}]
Take $\tilde{M} = {\mathcal{J}} \tilde{H}$ and take $A$, $B$, $C$, $D$ as in \eqref{ABCD}.
We distinguish three cases:
\begin{itemize}
\item \emph{The column space of $B$ contains a nonzero constant vector.}
Take $U$ and $\tilde{U}$ as in the corresponding case in the proof of corollary
\ref{rk4}.
\begin{compactitem}
\item If case (1) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$,
then case (1) of theorem \ref{rk3} follows, because
$$
\tfrac12 (r-1)^2 - \tfrac12 (r-1) + 1 = \tfrac42 - \tfrac22 + 1 = 2 = r - 1
$$
\item If case (2) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$,
then case (1) or case (2) of theorem \ref{rk3} follows.
\item If case (3) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$,
then case (1) or case (3) of theorem \ref{rk3} follows.
\end{compactitem}
\item \emph{The columns of $B$ are dependent over $K$ in pairs.}
If $\operatorname{char} K = 2$, then $\tilde{H}$ is as in (3) of theorem \ref{rkr}, which
is included in (4) of theorem \ref{rk3}. So assume that $\operatorname{char} K \neq 2$.
If $B = 0$, then $\tilde{H}$ is as in (2) of theorem \ref{rkr}, which
is included in (4) of theorem \ref{rk3} as well. So assume that $B \ne 0$.
From (i) of lemma \ref{Ccoldep1}, it follows that the columns of $C$ are
dependent over $K$. If $\operatorname{rk} C \ge 2$, then $\operatorname{rk} C = 2$ because
$\operatorname{rk} B + \operatorname{rk} C \le r = 3$ and $B \ne 0$, and the column space of $B$ contains a nonzero constant
vector because of lemma \ref{BCrkr}, so that the first case above applies. If $\operatorname{rk} C = 0$, then (1) of theorem \ref{rk3}
follows. So assume that $\operatorname{rk} C = 1$.
Since $\operatorname{rk} C = 1 = \frac12 \cdot 1^2 + \frac12 \cdot 1$, we deduce from
theorem \ref{rkr} that the rows of $C$ are dependent over $K$ in pairs.
So we can choose $S$ such that only the first row of $C$ is nonzero.
From corollary \ref{symdiag}, it follows that there exists an $U' \in \operatorname{GL}_r(K)$,
such that
$$
{\mathcal{H}}_{x_1,x_2,\ldots,x_r} \big(\tilde{H}_{r+1}(U'x)\big)
= (U')^{\rm t}\, ({\mathcal{H}} \tilde{H}_{r+1})\, U'
$$
is a diagonal matrix. Furthermore, the first entry on the diagonal is nonzero
because ${\mathcal{H}} \tilde{H}_{r+1} \ne 0$, and the last entry on the diagonal is zero
because the columns of $C$ are dependent over $K$.
By adapting $S$, we can obtain that the entries on the diagonal
of ${\mathcal{H}}_{x_1,x_2,\ldots,x_r} \big(\tilde{H}_{r+1}(U'x)\big)$ are $1$, $c$ and $0$, in that
order.
By adapting $S$ and $T$, we can replace $\tilde{H}$ by
$$
\left( \begin{array}{cc}
(U')^{-1} & \zeromat \\ \zeromat & I_{m-3}
\end{array} \right) \tilde{H}\left(\left( \begin{array}{cc}
U' & \zeromat \\ \zeromat & I_{n-3}
\end{array} \right) x \right)
$$
Then the first row of $C$ becomes $(\, x_1 ~ c x_2 ~ 0 \,)$.
We distinguish two cases.
\begin{compactitem}
\item $c = 0$.
Notice that
$$
{\mathcal{J}} (\tilde{H}|_{x_1=1}) = ({\mathcal{J}} \tilde{H})|_{x_1=1} \cdot ({\mathcal{J}} (1,x_2,x_3,\ldots,x_n))
$$
so $\operatorname{rk} {\mathcal{J}} (\tilde{H}|_{x_1=1}) = r - 1 = 2$, and we can apply \cite[Theorem 2.3]{1601.00579}.
\begin{compactitem}
\item In the case of \cite[Theorem 2.3]{1601.00579} (1), case (2) of theorem
\ref{rkr} follows, which yields (4) of theorem \ref{rk3}.
\item In the case of \cite[Theorem 2.3]{1601.00579} (2), case (1) of theorem
\ref{rk3} follows, because any linear combination of the rows of ${\mathcal{J}} \tilde{H}$
which is dependent on $e_1$, is linearly dependent on $C_1$.
\item In the case of \cite[Theorem 2.3]{1601.00579} (3), case (1) or case (2)
of theorem \ref{rk3} follows.
\end{compactitem}
\item $c \ne 0$.
From (ii) of lemma \ref{Ccoldep1} and lemma \ref{rk3trafo}, it follows that we can choose
$S$ and $T$, such that the leading principal minor matrix of size $4$
of ${\mathcal{J}} \tilde{H}$ is of the form
$$
\left( \begin{array}{cccc}
A_{11} & A_{12} & x_1 & c x_2 \\
A_{21} & A_{22} & x_2 & -x_1 \\
A_{31} & A_{32} & x_3 & \tilde{c}x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
$$
where the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{21}$ are zero.
From lemma \ref{rk3calc}, case (5) of theorem \ref{rk3} follows.
\end{compactitem}
\item \emph{None of the above.}
We first show that $\operatorname{rk} B \ge 2$. For that purpose, assume that $\operatorname{rk} B \le 1$.
Since the columns of $B$ are not dependent over $K$ in pairs, we deduce that
$B \ne 0$, and it follows from \cite[Theorem 2.1]{1601.00579} that the rows of
$B$ are dependent over $K$ in pairs. This contradicts the fact that the column
space of $B$ does not contain a nonzero constant vector.
So $\operatorname{rk} B \ge 2$ indeed.
From $\operatorname{rk} B + \operatorname{rk} C = r$, we deduce that $\operatorname{rk} C \le 1$. If $C = 0$, then (1)
of theorem \ref{rk3} follows, so assume that $C \ne 0$. Then $\operatorname{rk} C = 1$ and
$\operatorname{rk} B = r - 1 = 2$. From lemmas \ref{Ccoldep2} and \ref{BCrkr}, we deduce
that the column space of $B$ contains a nonzero constant vector, which is a
contradiction.
\end{itemize}
So it remains to prove the last claim. In case of (5) of theorem \ref{rk3},
the last claim follows from the proof of corollary \ref{rktrdeg}. Otherwise,
the last claim of theorem \ref{rk3} follows in a similar manner as
the last claim of theorem \ref{rkr}.
\end{proof}
\begin{lemma} \label{F2lem}
Let $K$ be a field of characteristic $2$ and $L$ be an
extension field of $K$. If theorem {\upshape\ref{rk3}} holds,
but for $L$ instead of $K$, then theorem {\upshape\ref{rk3}}
holds.
\end{lemma}
\begin{proof}
Take $H$ as in theorem \ref{rk3}. Then $H$ is as in (1), (3) or (4)
of theorem \ref{rk3}, but with $L$ instead of $K$.
We assume that $H$ is as in (3) of theorem \ref{rk3} with $L$
instead of $K$, because the other case follows in a similar manner
as lemma \ref{Fqlem}.
Notice that
$$
\big(\,0 ~ x_3 ~ x_2 ~ x_1 ~ y_4 ~ y_5 ~ \cdots ~ y_m\,\big) \cdot
{\mathcal{J}} \tilde{H} = 0
$$
Since ${\mathcal{J}} \tilde{H} = S^{-1} ({\mathcal{J}} H)|_{Tx} T$, it follows that
$$
\big(\,0 ~ x_3 ~ x_2 ~ x_1 ~ y_4 ~ y_5 ~ \cdots ~ y_m\,\big) \cdot S \cdot {\mathcal{J}} H = 0
$$
as well.
Suppose first that $m = 4$. As $\operatorname{rk} {\mathcal{J}} H = m-1$, there exists a nonzero
$v \in K(x)^m$ such that $\ker \big(\,v_1 ~ v_2 ~ v_3 ~ v_4\,\big)$ is equal
to the column space of ${\mathcal{J}} H$. Since the column space of ${\mathcal{J}} H$ is contained
in $\ker \big((\,0~x_3~x_2~x_1\,) \cdot S\big)$, it follows that
$\big(\, v_1 ~ v_2 ~ v_3 ~ v_4 \,\big)$ is dependent on $(\,0~x_3~x_2~x_1\,) \cdot S$.
So
$$
v_1 (S^{-1})_{11} + v_2 (S^{-1})_{21} + v_3 (S^{-1})_{31} + v_4 (S^{-1})_{41} = 0
$$
and the components of $v$ are dependent over $L$. Consequently, the
components of $v$ are dependent over $K$. So $\ker \big(\,v_1 ~ v_2 ~ v_3 ~ v_4\,\big)$
contains a nonzero vector over $K$, and so does the column space of ${\mathcal{J}} H$.
Now we can follow the same argumentation as in the first case in the proof
of theorem \ref{rk3}.
Suppose next that $m > 4$. Then the rows of ${\mathcal{J}} H$ are dependent over
$L$. Hence the rows of ${\mathcal{J}} H$ are dependent over $K$ as well. So we
may assume that the last row of ${\mathcal{J}} H$ is zero. By induction on $m$,
$(H_1,H_2,\ldots,H_{m-1})$ is as $H$ in theorem \ref{rk3}. As $H_m = 0$,
we conclude that $H$ satisfies theorem \ref{rk3}.
\end{proof}
\section{rank 3 with nilpotency}
\begin{theorem} \label{rk3np}
Let $H \in K[x]^n$ be quadratic homogeneous, such that ${\mathcal{J}} H$ is nilpotent and
$\operatorname{rk} {\mathcal{J}} H \le 3$. Then there exists a $T \in \operatorname{GL}_n(K)$, such that for
$\tilde{H} := T^{-1} H(Tx)$, one of the following statements holds:
\begin{enumerate}[\upshape (1)]
\item ${\mathcal{J}} \tilde{H}$ is lower triangular with zeroes on the diagonal;
\item $n \ge 5$, $\operatorname{rk} {\mathcal{J}} H = 3$, and ${\mathcal{J}} \tilde{H}$ is of the form
$$
\left( \begin{array}{ccc@{\qquad}ccc}
0 & x_5 & 0 & * & \cdots & * \\
x_4 & 0 & -x_5 & * & \cdots & * \\
0 & x_4 & 0 & * & \cdots & * \\[10pt]
0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0
\end{array} \right)
$$
\item $n \ge 6$, $\operatorname{rk} {\mathcal{J}} H = 3$, $\operatorname{char} K = 2$ and ${\mathcal{J}} \tilde{H}$ is
of the form
$$
\left( \begin{array}{ccc@{\qquad}ccc@{\qquad}ccc}
0 & x_6 & 0 & 0 & 0 & x_2 & 0 & \cdots & 0 \\
x_5 & 0 & -x_6 & 0 & * & * & * & \cdots & * \\
0 & x_5 & 0 & 0 & x_2 & 0 & 0 & \cdots & 0 \\[10pt]
0 & 0 & 0 & 0 & x_6 & x_5 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\[10pt]
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0
\end{array} \right)
$$
\end{enumerate}
Furthermore, $x + H$ is tame if $\operatorname{char} K \neq 2$, and
there exists a tame invertible map
$x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and
${\mathcal{J}} \bar{H} = {\mathcal{J}} H$ in general.
Conversely, ${\mathcal{J}} \tilde{H}$ is nilpotent in each of the three statements
above. Furthermore, $\operatorname{rk} {\mathcal{J}} \tilde{H} = 3$ in {\upshape (2)},
$\operatorname{rk} {\mathcal{J}} \tilde{H} = 4$ in {\upshape (3)} if $\operatorname{char} K \neq 2$, and
$\operatorname{rk} {\mathcal{J}} \tilde{H} = 3$ in {\upshape (3)} if $\operatorname{char} K = 2$.
\end{theorem}
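For instance, the nilpotency of ${\mathcal{J}} \tilde{H}$ in case (2) can be checked directly:
rows $4$ up to $n$ of ${\mathcal{J}} \tilde{H}$ are zero, so if $N$ denotes the displayed
$3 \times 3$ block in the upper left corner and $B$ the $3 \times (n-3)$ block of starred
entries, then the upper left and upper right blocks of $({\mathcal{J}} \tilde{H})^k$ are
$N^k$ and $N^{k-1} B$ respectively. Since
$$
N^2 = \left( \begin{array}{ccc}
x_4 x_5 & 0 & -x_5^2 \\ 0 & 0 & 0 \\ x_4^2 & 0 & -x_4 x_5
\end{array} \right)
\qquad \mbox{and} \qquad N^3 = 0
$$
it follows that $({\mathcal{J}} \tilde{H})^4 = 0$, regardless of the starred entries.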
\begin{lemma} \label{lem3}
Let $H \in K[x]^3$ be homogeneous of degree $2$, such that
${\mathcal{J}}_{x_1,x_2,x_3} H$ is nilpotent.
If ${\mathcal{J}}_{x_1,x_2,x_3} H$ is not similar over $K$ to a triangular matrix,
then ${\mathcal{J}}_{x_1,x_2,x_3} H$ is similar over $K$ to a matrix of the form
$$
\left( \begin{array}{ccc}
0 & f & 0 \\ b & 0 & f \\ 0 & -b & 0
\end{array} \right)
$$
where $f$ and $b$ are independent linear forms in $K[x_4,x_5,\ldots,x_n]$.
\end{lemma}
\begin{proof}
Suppose that ${\mathcal{J}}_{x_1,x_2,x_3} H$ is not similar over $K$ to a triangular matrix.
Take $i$ such that the coefficient matrix of $x_i$ of ${\mathcal{J}}_{x_1,x_2,x_3} H$
is nonzero, and define
$$
N := {\mathcal{J}}_{x_1,x_2,x_3} (H|_{x_i=x_i+1}) = ({\mathcal{J}}_{x_1,x_2,x_3} H)|_{x_i=x_i+1}
$$
Then $N$ is nilpotent, and $N$ is not similar over $K$ to a triangular matrix.
$N(0)$ is similar over $K$ to the matrix $N(0)$ in either (iii) or (iv)
of \cite[Lemma 3.1]{1601.00579}, so it follows from
\cite[Lemma 3.1]{1601.00579} that $N$ is similar over $K$ to a matrix of the form
$$
\left( \begin{array}{ccc}
0 & f+1 & 0 \\ b & 0 & f+1 \\ 0 & -b & 0
\end{array} \right)
$$
where $b$ and $f$ are linear forms. $b$ and $f$ are independent, because the
coefficients of $x_i$ in $b$ and $f$ are $0$ and $1$ respectively. So
${\mathcal{J}}_{x_1,x_2,x_3} H$ is similar over $K$ to a matrix of the form
$$
\left( \begin{array}{ccc}
0 & f & 0 \\ b & 0 & f \\ 0 & -b & 0
\end{array} \right)
$$
where $b$ and $f$ are independent linear forms.
Let $\bar{H}$ be the quadratic part with respect to $x_1, x_2, x_3$ of
$H$. Then ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ is nilpotent.
By showing that $\bar{H} = 0$, we prove that $b$ and $f$ are contained in
$K[x_4,x_5,\ldots,x_n]$.
So assume that $\bar{H} \ne 0$. From \cite[Theorem 3.2]{1601.00579}, it follows
that we may assume that ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ is lower triangular. If
the coefficient matrix of $x_2$ of ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ is nonzero,
then it has rank $1$ because only the last row is nonzero, and so does the
coefficient matrix of $x_2$ of
${\mathcal{J}}_{x_1,x_2,x_3} H$. Otherwise, the coefficient matrices of $x_1$ of
${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ and of ${\mathcal{J}}_{x_1,x_2,x_3} H$ have rank $1$,
because only the first column is nonzero.
So we could have chosen $i$, such that the coefficient matrix of $x_i$ of
${\mathcal{J}}_{x_1,x_2,x_3} H$ would have rank $1$. From \cite[Lemma 3.1]{1601.00579},
it follows that ${\mathcal{J}}_{x_1,x_2,x_3} H$ is similar over $K$ to a triangular
matrix. Contradiction, so $\bar{H} = 0$ indeed.
\end{proof}
\begin{lemma} \label{lem1}
Suppose that $H \in K[x]^n$, such that ${\mathcal{J}} H$ is nilpotent.
\begin{enumerate}[\upshape (i)]
\item Suppose that ${\mathcal{J}} H$ may only be nonzero in the first row and the
first $2$ columns. Then there exists a $T \in \operatorname{GL}_n(K)$ such that for
$\tilde{H} := T^{-1} H(Tx)$, the following holds.
\begin{enumerate}[\upshape (a)]
\item ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the
first $2$ columns (just like ${\mathcal{J}} H$).
\item The Hessian matrix of the leading part with
respect to $x_2,x_3,\ldots,x_n$ of $\tilde{H}_1$ is the product of
a symmetric permutation matrix and a diagonal matrix.
\item Every principal minor determinant of the leading principal minor
matrix of size $2$ of ${\mathcal{J}} \tilde{H}$ is zero.
\end{enumerate}
\item Suppose that $\operatorname{char} K = 2$ and that ${\mathcal{J}} H$ may only be nonzero
in the first row and the first $3$ columns. Then there exists a
$T \in \operatorname{GL}_n(K)$ such that for $\tilde{H} := T^{-1} H(Tx)$, the
following holds.
\begin{enumerate}[\upshape (a)]
\item ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the
first $3$ columns (just like ${\mathcal{J}} H$).
\item The Hessian matrix of the leading part with
respect to $x_2,x_3,\ldots,x_n$ of $\tilde{H}_1$ is the product of
a symmetric permutation matrix and a diagonal matrix.
\item Every principal minor determinant of the leading principal minor
matrix of size $3$ of ${\mathcal{J}} \tilde{H}$ is zero.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
From proposition \ref{evenrk}, it follows that there exists a
$T \in \operatorname{Mat}_{n,n}(K)$ of the form
$$
\left( \begin{array} {ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & * & 1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
0 & * & \cdots & * & 1
\end{array} \right)
$$
such that the Hessian matrix of the leading part with
respect to $x_2,x_3,\ldots,x_n$ of $\tilde{H}_1 = H_1(Tx)$ is the
product of a symmetric permutation matrix and a diagonal matrix.
Furthermore, ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row
and the first $2$ or $3$ columns respectively (just like ${\mathcal{J}} H$),
because of the form of $T$.
\begin{enumerate}[\upshape (i)]
\item
Let $N$ be the leading principal minor matrix of size $2$ of ${\mathcal{J}} \tilde{H}$.
Suppose first that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first $2$
columns. From \cite[Lemma 1.2]{1601.00579}, it follows that $N$ is nilpotent.
On account of \cite[Theorem 3.2]{1601.00579}, $N$ is similar over $K$
to a triangular matrix.
Hence the rows of $N$ are dependent over $K$.
If the second row of $N$ is zero, then (i) follows. If the second row of
$N$ is not zero, then we may assume that the first row of $N$ is zero, and (i)
follows as well.
Suppose next that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and
the first $2$ columns, but not just the first $2$ columns. We distinguish two
cases.
\begin{compactitem}
\item $\parder{}{x_2} \tilde{H}_1 \in K[x_1,x_2]$.
Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 \in
K[x_1,x_2]$ as well. Let $G := \tilde{H}(x_1,x_2,0,0,\ldots,0)$. Then
${\mathcal{J}} G = ({\mathcal{J}} \tilde{H})|_{x_3=x_4=\cdots=x_n=0}$. Consequently,
the nonzero part of ${\mathcal{J}} G$ is restricted to the first two columns.
So the leading principal minor matrix of size $2$ of ${\mathcal{J}} G$
is nilpotent. But this minor matrix is just $N$, and just as for
$\tilde{H}$ before, we may assume that only one row of $N$ is nonzero.
This gives (i).
\item $\parder{}{x_2} \tilde{H}_1 \notin K[x_1,x_2]$.
Since ${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is the product of a permutation matrix and a diagonal matrix,
it follows that $\parder{}{x_2} \tilde{H}_1$ is a linear combination of $x_1$ and $x_i$,
where $i \ge 3$, such that $x_i$ does not occur in any other entry of ${\mathcal{J}} \tilde{H}$.
Looking at the coefficient of $x_i^1$ in the sum of the determinants of the
principal minors of size $2$, we infer that $\parder{}{x_1} \tilde{H}_2 = 0$.
So the second row of ${\mathcal{J}} \tilde{H}$ is dependent on $e_2^{\rm t}$. From a permuted version of
\cite[Lemma 1.2]{1601.00579}, we infer that $\parder{}{x_2} \tilde{H}_2$ is nilpotent.
Hence the second row of ${\mathcal{J}} \tilde{H}$ is zero. Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we infer (i).
\end{compactitem}
\item
Let $N$ be the leading principal minor matrix of size $3$ of ${\mathcal{J}} \tilde{H}$.
Suppose first that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first $3$ columns.
From \cite[Lemma 1.2]{1601.00579}, it follows that the leading principal
minor matrix $N$ of size $3$ of ${\mathcal{J}} \tilde{H}$ is nilpotent. On account of
\cite[Theorem 3.2]{1601.00579}, $N$ is similar over $K$ to a triangular matrix.
But for a triangular nilpotent Jacobian matrix of size $3$ over characteristic
$2$, the rank cannot be $2$. So $\operatorname{rk} N \le 1$.
Hence the rows of $N$ are dependent over $K$ in pairs.
If the second and the third row of $N$ are zero, then (ii) follows.
If the second or the third row of $N$ is not zero, then we may assume that
the first $2$ rows of $N$ are zero, and (ii) follows as well.
Suppose next that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row
and the first $3$ columns, but not just the first $3$ columns. We distinguish
three cases.
\begin{compactitem}
\item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$.
Using techniques of the proof of (i), we can reduce to the case where
${\mathcal{J}} \tilde{H}$ may only be nonzero in the first $3$ columns.
\item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3]$.
Using techniques of the proof of (i), we can deduce that
$\parder{}{x_1} \tilde{H}_2 = \parder{}{x_1} \tilde{H}_3 = 0$,
and that ${\mathcal{J}}_{x_2,x_3} (\tilde{H}_2,\tilde{H}_3)$ is nilpotent.
On account of \cite[Theorem 3.2]{1601.00579},
${\mathcal{J}}_{x_2,x_3} (\tilde{H}_2,\tilde{H}_3)$ is similar over $K$
to a triangular matrix. But for a triangular nilpotent Jacobian matrix
of size $2$ over characteristic $2$, the rank cannot be $1$.
So $\operatorname{rk} {\mathcal{J}}_{x_2,x_3} (\tilde{H}_2,\tilde{H}_3) = 0$.
Consequently, the last two rows of $N$ are zero, and (ii) follows.
\item None of the above.
Assume without loss of generality that $\parder{}{x_2} \tilde{H}_1 \in
K[x_1,x_2,x_3]$ and $\parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3]$.
Since ${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is the product of a permutation matrix and a diagonal
matrix, it follows that $\parder{}{x_3} \tilde{H}_1$ is a linear combination of
$x_1$ and $x_i$, where $i \ge 4$, such that $x_i$ does not occur in any other
entry of ${\mathcal{J}} \tilde{H}$.
Looking at the coefficient of $x_i^1$ in the sum of the determinants of the
principal minors of size $2$, we infer that $\parder{}{x_1} \tilde{H}_3 = 0$. If
$\parder{}{x_1} \tilde{H}_2 = 0$ as well, then we can advance as above, so assume that
$\parder{}{x_1} \tilde{H}_2 \neq 0$.
Looking at the coefficient of $x_i^1$ in the
sum of the determinants of the principal minors of size $3$, we infer that
$\parder{}{x_2} \tilde{H}_3 = 0$. So the third row of ${\mathcal{J}} \tilde{H}$ is dependent on
$e_3^{\rm t}$. From a permuted version of \cite[Lemma 1.2]{1601.00579}, we infer that
$\parder{}{x_3} \tilde{H}_3$ is nilpotent. Hence the third row of
${\mathcal{J}} \tilde{H}$ is zero.
From $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that
$\parder{}{x_1} \tilde{H}_1 = -\parder{}{x_2} \tilde{H}_2$. We show that
\begin{equation} \label{DN0}
\parder{}{x_1} \tilde{H}_1 = \parder{}{x_2} \tilde{H}_2 = 0
\end{equation}
For that purpose, suppose that $\parder{}{x_1} \tilde{H}_1 \neq 0$.
Since $\parder{}{x_1} x_1^2 = 0$, the coefficient of $x_1$ in
$\parder{}{x_1} \tilde{H}_1$ is zero. Similarly, the coefficient of $x_2$ in
$\parder{}{x_2} \tilde{H}_2$ is zero. As $\tilde{H}_2 \in K[x_1,x_2,x_3]$, we
infer that
$$
\parder{}{x_1} \tilde{H}_1 = -\parder{}{x_2} \tilde{H}_2 \in K x_3 \setminus \{0\}
$$
Looking at the
coefficient of $x_3^2$ in the sum of the determinants of the
principal minors of size $2$, we deduce that the coefficient of $x_3^2$ in
$(\parder{}{x_i} \tilde{H}_1) \cdot (\parder{}{x_1} \tilde{H}_i)$ is nonzero.
Consequently, the coefficient of $x_3^3$ in
$(\parder{}{x_i} \tilde{H}_1) \cdot (\parder{}{x_2} \tilde{H}_2) \cdot
(\parder{}{x_1} \tilde{H}_i) \in K x_3^3 \setminus \{0\}$ is nonzero.
This contributes to the coefficient of $x_3^3$ in the sum of the determinants
of the principal minors of size $3$. Contradiction, because this contribution
cannot be canceled.
So \eqref{DN0} is satisfied. We show that in addition,
\begin{equation} \label{N120}
\parder{}{x_2} \tilde{H}_1 = 0
\end{equation}
The coefficient of $x_1$ of $\parder{}{x_2} \tilde{H}_1$ is zero, because of \eqref{DN0}.
The coefficient of $x_2$ of $\parder{}{x_2} \tilde{H}_1$ is zero, because
$\parder{}{x_2} x_2^2 = 0$.
The coefficient of $x_3$ of $\parder{}{x_2} \tilde{H}_1$ is zero, because
the coefficient of $x_2$ of $\parder{}{x_3} \tilde{H}_1 \in K x_1 + K x_i$ is zero.
So \eqref{N120} is satisfied as well.
Recall that the third row of $N$ is zero.
From \eqref{DN0} and \eqref{N120}, it follows that the diagonal and the second column
of $N$ are zero as well. Hence every principal minor determinant of $N$ is zero, which gives (ii).
\qedhere
\end{compactitem}
\end{enumerate}
\end{proof}
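To illustrate condition (b) of lemma \ref{lem1} (an illustration only, not used in the
proofs): if, say, $n = 4$ and the leading part with respect to $x_2,x_3,x_4$ of
$\tilde{H}_1$ equals $x_2 x_3 + c x_4^2$ for some $c \in K$, then its Hessian matrix is
$$
\left( \begin{array}{ccc}
0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2c
\end{array} \right) =
\left( \begin{array}{ccc}
0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1
\end{array} \right)
\left( \begin{array}{ccc}
1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2c
\end{array} \right)
$$
which is indeed the product of a symmetric permutation matrix and a diagonal matrix.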
\begin{lemma} \label{lem2}
Let $\tilde{H}$ be as in lemma \ref{lem1}. Suppose that ${\mathcal{J}} \tilde{H}$
has a principal minor matrix $M$ of which the determinant is nonzero. Then
\begin{enumerate}[\upshape (a)]
\item $\tilde{H}$ is as in {\upshape (ii)} of lemma \ref{lem1};
\item rows $2$ and $3$ of ${\mathcal{J}} \tilde{H}$ are zero;
\item $M$ has size $2$ and $x_2 x_3 \mid \det M$;
\item Besides $M$, there exists exactly one principal minor matrix $M'$
of size $2$ of ${\mathcal{J}} H$, such that $\det M' = - \det M$.
\end{enumerate}
\end{lemma}
\begin{proof}
Take $N$ as in (i) or (ii) of lemma \ref{lem1} respectively.
Then $M$ is not a principal minor matrix of $N$. So if $M$ does not
contain the upper left corner of ${\mathcal{J}} \tilde{H}$, then the last column
of $M$ is zero. Hence $M$ does contain the upper left corner of
${\mathcal{J}} \tilde{H}$.
If $M$ has two columns outside the column range of $N$,
then both columns are dependent on $e_1$. So $M$ has exactly one column
outside the column range of $N$, say column $i$.
\begin{enumerate}[(i)]
\item Suppose first that $\tilde{H}$ is as in (i) of lemma \ref{lem1}.
Then either $M$ has size $2$ with row and column indices $1$ and $i$,
or $M$ has size $3$ with row and column indices $1$, $2$ and $i$.
The coefficient of $x_1$ in the upper right corner of $M$ is zero, because
$N_{11} = 0$. Hence the upper right corner
of $M$ is of the form $c x_j$ for some nonzero $c \in K$ and a $j \ge 2$.
If $j \ge 3$, then $\det M$ is equal to the sum of the terms which are
divisible by $x_j$ in the sum of the determinants of the principal minors of
the same size as $M$, which is zero.
So $j = 2$. Now $M$ is the only principal minor matrix of its size, of which
the determinant is nonzero, because if we would take another $i$, then $j$
changes as well. Contradiction, because the sum of the determinants of the
principal minors of the same size as $M$ is zero.
\item Suppose next that $\tilde{H}$ is as in (ii) of lemma \ref{lem1}.
If the second row of ${\mathcal{J}}\tilde{H}$ is nonzero, then the coefficient of
$x_1 x_3$ in $\tilde{H}_2$ is nonzero, because $N_{22} = 0$.
If the third row of ${\mathcal{J}}\tilde{H}$
is nonzero, then the coefficient of $x_1 x_2$ in $\tilde{H}_3$ is nonzero,
because $N_{33} = 0$.
Since every principal minor determinant of $N$ is zero, we infer that
$N_{23} N_{32} = 0$, so either the second or the third row of ${\mathcal{J}}\tilde{H}$ is zero.
Assume without loss of generality that the second row of ${\mathcal{J}} \tilde{H}$
is zero. Then either $M$ has size $2$ with row and column indices $1$ and $i$,
or $M$ has size $3$ with row and column indices $1$, $3$ and $i$.
The upper right corner of $M$ is of the form $c x_j$ for some nonzero
$c \in K$, and with the techniques in (i) above, we see that $2 \le j \le 3$.
Furthermore, we infer with the techniques in (i) above that ${\mathcal{J}} \tilde{H}$
has another principal minor matrix $M'$ of the same size as $M$, of which the
determinant is nonzero as well. The upper right corner of $M'$ can only be
of the form $c' x_{5-j}$ for some nonzero $c' \in K$.
It follows that $N_{12} \ne 0$ and $N_{13} \ne 0$. Consequently, $N_{21} = N_{31} = 0$.
This is only possible if both the second and the third
row of ${\mathcal{J}} \tilde{H}$ are zero. So $M$ has size $2$, and claims (c) and (d)
follow. \qedhere
\end{enumerate}
\end{proof}
\begin{lemma} \label{lem4}
Let $n = 4$,
$$
\tilde{H} = \left( \begin{array}{c}
x_1 x_3 + c x_2 x_4 \\ x_2 x_3 - x_1 x_4 \\
\frac12 x_3^2 + \frac{c}2 x_4^2 \\ \frac12 x_1^2 + \frac{c}2 x_2^2
\end{array} \right)
\qquad \mbox{and} \qquad
{\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc}
x_3 & c x_4 & x_1 & c x_2 \\
- x_4 & x_3 & x_2 & -x_1 \\
0 & 0 & x_3 & c x_4 \\
x_1 & c x_2 & 0 & 0
\end{array} \right)
$$
as in lemma \ref{rk3calc} (with $c \neq 0$).
Let $M \in \operatorname{Mat}_{4,4}(K)$, such that $\deg \det \big({\mathcal{J}}{\tilde{H}} + M\big) \le 2$.
Then there exists a translation $G$, such that
$$
\tilde{H}\big(G(x)\big) - \big(\tilde{H} + Mx\big) \in K^4
$$
In particular, $\det \big({\mathcal{J}}{\tilde{H}} + M\big) = \det {\mathcal{J}} \big(\tilde{H} + Mx\big) = 0$.
\end{lemma}
\begin{proof}
Since the quartic part of $\det ({\mathcal{J}}{\tilde{H}} + M)$ is zero, we deduce that
$\det ({\mathcal{J}}{\tilde{H}}) = 0$.
By way of completing the squares, we can choose a translation $G$ such that
the linear part of $F := \tilde{H}(G^{-1}(x)) + M\,G^{-1}(x)$ is of the form
$$
\left( \begin{array}{cccccccc}
a_1 x_1 &+& b_1 x_2 &+& c_1 x_3 &+& d_1 x_4 \\
a_2 x_1 &+& b_2 x_2 &+& c_2 x_3 &+& d_2 x_4 \\
a_3 x_1 &+& b_3 x_2 & & & & \\
& & & & c_4 x_3 &+& d_4 x_4
\end{array}\right)
$$
Notice that $\deg \det {\mathcal{J}} F \le 2$. Looking at the coefficients of
$x_1^3$, $x_2^3$, $x_3^3$, and $x_4^3$ of $\det {\mathcal{J}} F$, we see that
$b_3 = a_3 = d_4 = c_4 = 0$. Looking at the coefficients of
$x_1^2 x_3$, $x_1 x_3^2$, $x_2^2 x_4$, and $x_2 x_4^2$ of $\det {\mathcal{J}} F$,
we see that $b_1 = d_1 = a_1 = c_1 = 0$. Looking at the coefficients of
$x_1^2 x_4$, $x_1 x_4^2$, $x_2^2 x_3$, and $x_2 x_3^2$ of $\det {\mathcal{J}} F$,
we see that $b_2 = c_2 = a_2 = d_2 = 0$.
So $F$ has trivial linear part, and $\tilde{H} - F \in K^4$.
Hence $\tilde{H}(G) - F(G) \in K^4$, as claimed. The last claim follows from
$\det ({\mathcal{J}}{\tilde{H}}) = 0$.
\end{proof}
\begin{proof}[Proof of theorem \ref{rk3np}]
From \cite[Theorem 3.2]{1601.00579}, it follows that (1) is satisfied if
$\operatorname{rk} {\mathcal{J}} H \le 2$. So assume that $\operatorname{rk} {\mathcal{J}} H = 3$. Then we can follow the cases
of theorem \ref{rk3}.
\begin{itemize}
\item $H$ is as in (1) of theorem \ref{rk3}.
Let $\tilde{H} = S H(S^{-1}x)$. Then only the first $3$ rows of
${\mathcal{J}} \tilde{H}$ may be nonzero. If the leading principal minor
matrix $N$ of size $3$ of ${\mathcal{J}} \tilde{H}$ is similar over $K$
to a triangular matrix, then so is ${\mathcal{J}} \tilde{H}$ itself.
So assume that $N$ is not similar over $K$ to a triangular matrix.
From lemma \ref{lem3}, it follows that $N$ is similar over $K$ to
a matrix of the form
\begin{equation} \label{eq3}
\left( \begin{array}{ccc}
0 & f & 0 \\ b & 0 & f \\ 0 & -b & 0
\end{array} \right)
\end{equation}
where $f$ and $b$ are independent linear forms in $K[x_4,x_5,\ldots,x_n]$.
Consequently, we can choose $S$, such that for
$\tilde{H} := S H(S^{-1}x)$, only the first $3$ rows of ${\mathcal{J}} \tilde{H}$
are nonzero, and the leading principal minor matrix of size $3$ is as
in \eqref{eq3}. If we negate the third row and the third column of \eqref{eq3},
and replace $b$ by $x_4$ and $f$ by $x_5$, then we get
$$
\left( \begin{array}{ccc}
0 & x_5 & 0 \\ x_4 & 0 & -x_5 \\ 0 & x_4 & 0
\end{array} \right)
$$
We can even choose $S$ such that the leading principal minor matrix of size $3$
of ${\mathcal{J}} \tilde{H}$ is as above. So (2) of theorem \ref{rk3np} is satisfied for
$T = S^{-1}$.
If we replace $\tilde{H}_2$ by $0$, then all principal minor determinants of
${\mathcal{J}} \tilde{H}$ become zero. On account of \cite[Lemma 1.2]{MR1210415},
${\mathcal{J}} \tilde{H}$ becomes permutation similar to a triangular matrix.
From proposition \ref{tameZ} below, we infer that $x + H$ is tame
if $\operatorname{char} K = 0$.
\item $H$ is as in (2) of theorem \ref{rk3} or as in (2) of theorem \ref{rkr}.
Let $\tilde{H} = T^{-1} H(Tx)$. Then the rows of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$
are dependent over $K$ in pairs. Suppose first that the first $2$ rows of
${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ are zero. Then we can choose $T$ such that
only the last row of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ may be nonzero.
From \cite[Lemma 1.2]{1601.00579}, it follows that the
leading principal minor matrix $N$ of size $2$ of ${\mathcal{J}} \tilde{H}$ is
nilpotent as well. On account of \cite[Theorem 3.2]{1601.00579}, $N$ is
similar over $K$ to a triangular matrix. From \cite[Corollary 1.4]{1601.00579},
we deduce that ${\mathcal{J}} \tilde{H}$ is similar over $K$ to a triangular matrix.
So we can choose $T$ such that ${\mathcal{J}} \tilde{H}$ is lower triangular, and
(1) is satisfied. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$.
Suppose next that the first $2$ rows of
${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ are not both zero. Then we can choose
$T$ such that only the first row of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ may
be nonzero. From lemma \ref{lem1} (i) and lemma \ref{lem2}, we infer that we
can choose $T$ such that every principal minor of ${\mathcal{J}} \tilde{H}$ has determinant
zero. From \cite[Lemma 1.2]{MR1210415}, it follows that ${\mathcal{J}} \tilde{H}$ is
permutation similar to a triangular matrix. So (1) is satisfied.
Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$.
\item $H$ is as in (3) of theorem \ref{rk3} or as in (3) of theorem \ref{rkr}.
Let $\tilde{H} = T^{-1} H(Tx)$. Then the rows of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$
are dependent over $K$ in pairs. Suppose first that the first $3$ rows of
${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ are zero. Then we can choose $T$ such that
only the last row of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ may be nonzero,
and just as above, (1) is satisfied. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$.
Suppose next that the first $3$ rows of
${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ are not all zero. Then we can choose
$T$ such that only the first row of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ may
be nonzero. If we can choose $T$ such that every principal minor of
${\mathcal{J}} \tilde{H}$ has determinant zero, then (1) is satisfied, just as before.
Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$.
So assume that we cannot choose $T$ such that every principal minor of
${\mathcal{J}} \tilde{H}$ has determinant zero. From lemma \ref{lem1} (ii) and
lemma \ref{lem2}, we infer that we can choose $T$ such that $\tilde{H}$
is as in lemma \ref{lem2}. More precisely, we can choose $T$ such that
${\mathcal{J}} \tilde{H}$ is of the form
\begin{equation} \label{eq2}
\left( \begin{array}{cccccccc}
0 & x_4 & -x_5 & x_2 & -x_3 & * & * & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
x_3 & * & * & 0 & 0 & 0 & 0 & \cdots \\
x_2 & * & * & 0 & 0 & 0 & 0 & \cdots \\
* & * & * & 0 & 0 & 0 & 0 & \cdots \\
* & * & * & 0 & 0 & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots &
\vdots & \vdots & \vdots & \ddots
\end{array} \right)
\end{equation}
Since $\parder{}{x_1} x_1^2 = 0$, the coefficients of $x_1$
of the starred entries in the first column of \eqref{eq2} are zero.
Hence we can clean these starred entries by way of row operations
with rows $4$ and $5$.
We can also clean them by way of a linear
conjugation, because if a starred entry in the first column is nonzero,
the transposed entry in the first row is zero, so the corresponding
column operations which are induced by the linear conjugation will not have
any effect.
If rows $1$, $4$, and $5$ remain as the only nonzero rows, then $H$ is as in (1)
of theorem \ref{rk3}, which is the first case. So assume that another row remains
nonzero. By way of additional row operations and associated column operations,
we can get ${\mathcal{J}} \tilde{H}$ of the form
\begin{equation} \label{eq2a}
\left( \begin{array}{cccccccc}
0 & x_4 & -x_5 & x_2 & -x_3 & * & * & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
x_3 & 0 & x_1 & 0 & 0 & 0 & 0 & \cdots \\
x_2 & x_1 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & x_3 & x_2 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots &
\vdots & \vdots & \vdots & \ddots
\end{array} \right)
\end{equation}
where we do not maintain (b) of lemma \ref{lem1} (ii) any more.
Hence $P^{-1} \tilde{H}(Px)$ is as $\tilde{H}$ in (3) for a suitable
permutation matrix $P$.
In a similar manner as in the case where $H$ is as in (1) of theorem
\ref{rk3}, we infer that $x + H$ is tame if $\operatorname{char} K = 0$.
\item $H$ is as in (5) of theorem \ref{rk3}.
Then only the first $4$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, where $\tilde{H} = T^{-1} H(Tx)$.
Hence the leading principal minor matrix $N$ of size $4$ of ${\mathcal{J}} \tilde{H}$
is nilpotent. We distinguish two subcases.
\begin{compactitem}
\item The rows of $N$ are linearly independent over $K$.
Then there exists an $U \in \operatorname{GL}_4(K)$, such that $U N$ is as ${\mathcal{J}} \tilde{H}$
in lemma \ref{lem4}. Furthermore,
$$
\det (UN + U) = \det U \det (N + I_4) = \det U \in K^{*}
$$
So $\det (UN + U) \ne 0$ and $\deg \det (UN + U) \le 2$.
This contradicts lemma \ref{lem4}.
\item The rows of $N$ are linearly dependent over $K$.
Then we can apply the first case above on the map
$(\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4)$.
Since $N$ has less than $5$ columns, the case where $N$ is not similar
over $K$ to a triangular matrix cannot occur in the case above. So
$N$ is similar over $K$ to a triangular matrix, and so are ${\mathcal{J}} \tilde{H}$
and ${\mathcal{J}} H$. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$.
\end{compactitem}
\end{itemize}
So it remains to prove the last claim in the case $\operatorname{char} K \neq 0$.
This follows by way of techniques in the proof of theorem \ref{dim5} in the
next section.
\end{proof}
\begin{proposition} \label{tameZ}
Let $R$ be an integral domain of characteristic zero, and let $F \in R[x]^n$
be a polynomial map which is invertible over $R$. Let $\tilde{F} \in R[x]^n$,
such that only the $i$\textsuperscript{th} component of $\tilde{F}$ is
different from that of $F$, and such that $\det {\mathcal{J}} \tilde{F} = \det {\mathcal{J}} F$.
Then $\tilde{F}(F^{-1})$ is an elementary invertible polynomial map over $R$.
In particular, $\tilde{F}$ is invertible over $R$.
\end{proposition}
\begin{proof}
Assume without loss of generality that $i = n$. Let $K$ be the fraction field of $R$.
Then $\operatorname{rk} {\mathcal{J}} H = \operatorname{trdeg}_K K(H)$ for every $H \in K[x]^n$, because $K$ has characteristic zero.
Since
$$
\operatorname{rk} {\mathcal{J}} (F_1,F_2,\ldots,F_{n-1},0) = n-1 = \operatorname{rk} {\mathcal{J}} (F_1,F_2,\ldots,F_{n-1},\tilde{F}_n-F_n)
$$
it follows that $\tilde{F}_n-F_n$ is algebraically dependent
over $K$ on $F_1,F_2,\ldots,F_{n-1}$. As $F$ is invertible over $R$,
$$
\tilde{F}_n-F_n \in R[F_1,F_2,\ldots,F_{n-1}]
$$
So $\tilde{F}_n(F^{-1})-x_n \in R[x_1,x_2,\ldots,x_{n-1}]$. Furthermore,
$\tilde{F}_i(F^{-1}) = F_i(F^{-1}) = x_i$ for all $i < n$ by assumption, so
$\tilde{F}(F^{-1})$ is elementary.
\end{proof}
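As a simple illustration of proposition \ref{tameZ} (a toy example, not taken from the
results above): let $n = 2$, $R = \mathbb{Z}$, $F = (x_1, x_2 + x_1^2)$ and
$\tilde{F} = (x_1, x_2 + x_1^2 + x_1^3)$. Then $F$ is invertible over $R$ with
$F^{-1} = (x_1, x_2 - x_1^2)$, the maps $F$ and $\tilde{F}$ differ only in their second
component, and $\det {\mathcal{J}} \tilde{F} = 1 = \det {\mathcal{J}} F$. Indeed,
$$
\tilde{F}(F^{-1}) = \big(x_1, x_2 + x_1^3\big)
$$
is an elementary invertible polynomial map over $R$.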
\section{rank 4 with nilpotency}
Let $M \in \operatorname{Mat}_n(K)$ be nilpotent and $v \in K^n$ be nonzero.
Define the {\em image exponent} of $v$ with respect to $M$ as
$$
\operatorname{IE} (M,v) = \operatorname{IE}_K (M,v) := \max \{i \in \mathbb{N} \mid M^i v \ne 0\}
$$
and the {\em preimage exponent} of $v$ with respect to $M$ as
$$
\operatorname{PE} (M,v) = \operatorname{PE}_K (M,v) := \max \{i \in \mathbb{N} \mid M^i w = v \mbox{ for some } w \in K^n\}
$$
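For example, take $n = 3$ and let $M$ be the nilpotent matrix with $M e_1 = e_2$,
$M e_2 = e_3$ and $M e_3 = 0$. Then $M^2 e_1 = e_3 \ne 0$ and $M^3 e_1 = 0$, so
$\operatorname{IE}(M,e_1) = 2$, while $e_1$ does not lie in the image of $M$, so
$\operatorname{PE}(M,e_1) = 0$. Similarly, $\operatorname{IE}(M,e_3) = 0$ and
$\operatorname{PE}(M,e_3) = 2$, because $M^2 e_1 = e_3$ and $M^3 = 0$. In both cases,
$\operatorname{IE}(M,v) + \operatorname{PE}(M,v) = n - 1$, which is the relation that is
used below in the proof of theorem \ref{dim5} for the case $\operatorname{rk} {\mathcal{J}} H = n - 1$.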
\begin{theorem} \label{dim5}
Let $\operatorname{char} K \neq 2$ and $n = 5$. Suppose that $H \in K[x]^5$,
such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \ge 4$. Then
$\operatorname{rk} {\mathcal{J}} H = 4$, and
there exists a $T \in \operatorname{GL}_5(K)$, such that ${\mathcal{J}} \big(T^{-1}H(Tx)\big)$
is either triangular with zeroes on the diagonal, or of one of the
following forms.
$$
\left( \begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
* & 0 & 0 & 0 & 0 \\
0 & x_4 & 0 & x_2 & 0 \\
x_3 & -x_5 & x_1 & 0 & -x_2 \\
* & * & 0 & x_1 & 0
\end{array} \right)
\quad
\left( \begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
* & 0 & 0 & x_4 & 0 \\
x_2 & x_1 & 0 & -x_5 & -x_4 \\
x_3 & 0 & x_1 & 0 & x_5 \\
* & 0 & 0 & x_1 & 0
\end{array} \right)
$$
Furthermore, $x + H$ is tame.
\end{theorem}
\begin{proof}
Suppose first that $K$ is infinite.
From \cite[Theorem 4.4]{1609.09753}, it follows that $\operatorname{PE}_{K(x)}({\mathcal{J}} H,x) = 0$.
As $\operatorname{rk} {\mathcal{J}} H = n - 1$, $\operatorname{IE}_{K(x)}({\mathcal{J}} H,x) + \operatorname{PE}_{K(x)}({\mathcal{J}} H,x) = n-1$, so
$\operatorname{IE}_{K(x)}({\mathcal{J}} H,x) = n - 1$. From \cite[Corollary 4.3]{1609.09753}, it follows
that there exists a $T \in \operatorname{GL}_5(K)$, such that for
$\tilde{H} := T^{-1}H(Tx)$, we have
$$
({\mathcal{J}} \tilde{H})|_{x = e_1} =
\left( \begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{array} \right)
$$
Using this, one can compute all solutions, and match them against
the given classification. We did this with Maple 8, see {\tt dim5qdr.pdf}.
To prove that $x + H$ is tame, we show that the maps of the classification
are tame. If ${\mathcal{J}} H$ is similar over $K$ to a triangular matrix,
then $x + H$ is tame. So assume that ${\mathcal{J}} H$ is not similar over $K$ to
a triangular matrix. If
$$
G = \left( \begin{array}{c}
0 \\ z_1 x_1^2 \\ x_2 x_4 \\ x_1 x_3 - x_2 x_5 \\
z_2 x_1^2 + z_3 x_1 x_2 + z_4 x_2^2 + x_1 x_4
\end{array} \right)
\quad \mbox{or} \quad
G = \left( \begin{array}{c}
0 \\ z_1 x_1^2 + \tfrac12 x_4^2 \\ x_1 x_2 - x_4 x_5 \\
x_1 x_3 - \tfrac12 x_5^2 \\
z_2 x_1^2 + x_1 x_4
\end{array} \right)
$$
then we can apply proposition \ref{tameZ} with $R = \mathbb{Z}[z_1,z_2,z_3,z_4]$
and $i = 4$, to obtain that $x + G$ is the composition of an elementary map
and a map $x + \tilde{G}$ for which ${\mathcal{J}} \tilde{G}$ is permutation similar to
a triangular matrix with zeroes on the diagonal. So $x + G$ is tame over $R$.
Hence $x + G$ modulo $I$ is tame over $R/I$ for any ideal $I$ of $R$.
Since $x + H$ has this form up to conjugation with a linear map, we infer
that $x + H$ is tame.
Suppose next that $K$ is finite, and let $L$ be an infinite extension
field of $K$. If ${\mathcal{J}} H$ is similar over $L$ to a triangular matrix,
then by \cite[Proposition 1.3]{1601.00579}, ${\mathcal{J}} H$ is similar over $K$
to a triangular matrix as well. So assume that ${\mathcal{J}} H$ is not similar over
$K$ to a triangular matrix. Then one can check for the solutions over $L$
that \cite[Theorem 5.2]{1609.09753} is satisfied, which we did. So
\cite[Corollary 4.3]{1609.09753} holds over $K$ as well, and so does
this theorem.
\end{proof}
\begin{theorem} \label{dim6}
Let $\operatorname{char} K = 2$ and $n = 6$. Suppose that $H \in K[x]^6$,
such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \ge 4$. Then
$\operatorname{rk} {\mathcal{J}} H = 4$, and
there exists a $T \in \operatorname{GL}_6(K)$, such that ${\mathcal{J}} \big(T^{-1}H(Tx)\big)$
is either triangular with zeroes on the diagonal, or of one of the
following forms.
\begin{gather*}
\left( \begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
x_2 & x_1 & 0 & 0 & 0 & 0 \\
0 & 0 & x_5 & 0 & x_3 & 0 \\
* & * & * & x_2 & 0 & -x_3 \\
* & * & * & 0 & x_2 & 0
\end{array} \right)
\quad
\left( \begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & x_5 & 0 & 0 & x_2 & 0 \\
x_3 & \!\!-c x_5 - x_6\!\! & x_1 & 0 & -c x_2 & -x_2 \\
x_4 & c x_6 & 0 & x_1 & 0 & c x_2 \\
x_5 & 0 & 0 & 0 & x_1 & 0
\end{array} \right) \\
\left( \begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & x_4 & 0 & x_2 & 0 & 0 \\
x_3 & -x_5 & x_1 & 0 & -x_2 & 0 \\
x_4 & 0 & 0 & x_1 & 0 & 0 \\
* & * & * & * & * & 0
\end{array} \right)
\quad
\left( \begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & x_5 & x_4 & 0 \\
x_2 & x_1 & 0 & -x_6 & -2x_5 & -x_4 \\
x_3 & 0 & x_1 & 0 & x_6 & x_5 \\
x_4 & 0 & 0 & x_1 & 0 & 0 \\
x_5 & 0 & 0 & 0 & x_1 & 0
\end{array} \right)
\end{gather*}
where $c \in \{0,1\}$. Furthermore, there exists a tame invertible map
$x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and
${\mathcal{J}} \bar{H} = {\mathcal{J}} H$.
\end{theorem}
\begin{proof}
Suppose first that $K$ is infinite.
From \cite[Theorem 4.4]{1609.09753}, it follows that $\operatorname{PE}_{K(x)}({\mathcal{J}} H,x) = 0$.
Since $\operatorname{IE}_{K(x)}({\mathcal{J}} H,x) = 0$ as well, it follows from
\cite[Theorem 4.4]{1609.09753} that there exists a $T \in \operatorname{GL}_6(K)$,
such that for $\tilde{H} := T^{-1}H(Tx)$, we have
$$
({\mathcal{J}} \tilde{H})|_{x = e_1} =
\left( \begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0
\end{array} \right)
$$
Using this, one can compute all solutions, and match them against
the given classification. We did this with Maple 8, see {\tt dim6chr2qdr.pdf}.
To prove that $x + \bar{H}$ is tame for some quadratic homogeneous $\bar{H}$
such that ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$, we can use the same arguments as in
the proof of theorem \ref{dim5}. Namely, we apply proposition \ref{tameZ}
with $R = \mathbb{Z}[z_1,z_2,z_3,z_4,z_5,z_6,z_7,z_8,z_9,z_{10}]$ and
$i=5$, $i=5$, $i=4$, and $i=4$ respectively.
Suppose next that $K$ is finite, and let $L$ be any infinite extension
field of $K$. If ${\mathcal{J}} H$ is similar over $L$ to a triangular matrix,
then by \cite[Proposition 1.3]{1601.00579}, ${\mathcal{J}} H$ is similar over $K$
to a triangular matrix as well. So assume that ${\mathcal{J}} H$ is not similar over
$K$ to a triangular matrix. We distinguish two cases.
\begin{itemize}
\item The columns of ${\mathcal{J}} H$ are dependent over $L$.
Then the columns of ${\mathcal{J}} H$ are dependent over $K$ as well. So we may
assume that the last column of ${\mathcal{J}} H$ is zero. Then the leading principal
minor matrix $N$ of size $5$ of ${\mathcal{J}} H$ is nilpotent as well. Just as
$\operatorname{rk} {\mathcal{J}} H = 6 - 2 = 4$, we deduce that $\operatorname{rk} N = 5 - 2 = 3$. So we
can apply theorem \ref{rk3np} on $N$, to infer that we can get
${\mathcal{J}} \tilde{H}$ of the form of the third matrix.
\item The columns of ${\mathcal{J}} H$ are independent over $L$.
Then one can check for the solutions over $L$
that lemma \ref{lem5} below is satisfied, which we did. So
there exists a $v \in K^6$, such that $({\mathcal{J}} H)|_{x=v}^4 \neq 0$.
Hence the Jordan Normal Form of $({\mathcal{J}} H)|_{x=v}$ has Jordan blocks of
size $1$ and $5$, just like that of ${\mathcal{J}} H$.
Furthermore,
$\operatorname{IE}\big(({\mathcal{J}} H)|_{x=v},v\big) = 0 = \operatorname{IE}_{K(x)}({\mathcal{J}} H,x)$, and it follows
from \cite[Theorem 4.4]{1609.09753} that $\operatorname{PE}\big(({\mathcal{J}} H)|_{x=v},v\big) = 0
= \operatorname{PE}_{K(x)}({\mathcal{J}} H,x)$. Consequently, one can verify that
\cite[Theorem 4.2]{1609.09753} holds over $K$ as well, and so does
this theorem. \qedhere
\end{itemize}
\end{proof}
\begin{lemma} \label{lem5}
Let $L$ be an extension field of $K$. Suppose that $H \in K[x]^n$
and $T \in \operatorname{GL}_n(L)$, such that for $\tilde{H} := T^{-1}H(Tx)$,
the ideal generated by the entries of $({\mathcal{J}} \tilde{H})^s$
contains a power of a polynomial $f$.
Then there exists a $v \in K^n$, such that $({\mathcal{J}} H)|_{x=v}^s \neq 0$,
in the following cases.
\begin{enumerate}[\upshape (i)]
\item $\#K \ge \deg f + 1$;
\item $f$ is homogeneous and $\#K \ge \deg f$.
\end{enumerate}
\end{lemma}
\begin{proof}
From \cite[Lemma 5.1]{1310.7843}, it follows that there exists a $v \in K^n$
such that $f(T^{-1}v) \ne 0$. Let $I$ be the ideal over $L$ generated by the
entries of $({\mathcal{J}} \tilde{H})^s$. As the radical of $I$ contains $f$, the
radical of $I(T^{-1}v)$ contains $f(T^{-1}v)$.
$I(T^{-1}v)$ is generated over $L$ by the entries of
$({\mathcal{J}} \tilde{H})|_{x=T^{-1}v}^s$ and
$$
T ({\mathcal{J}} \tilde{H})|_{x=T^{-1}v}^s T^{-1} = ({\mathcal{J}} H)|_{x=v}^s
$$
From $f(T^{-1}v) \ne 0$, it follows that $I(T^{-1}v) \ne 0$ and
$({\mathcal{J}} H)|_{x=v}^s \ne 0$.
\end{proof}
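The cardinality assumptions in lemma \ref{lem5} are there to ensure the existence of a
point where $f$ does not vanish: if $\#K = 2$, then the nonzero polynomial
$f = x_1^2 - x_1$ of degree $2$ vanishes at every point of $K^n$, which shows that the
bound $\#K \ge \deg f + 1$ in (i) cannot be relaxed to $\#K \ge \deg f$ for
inhomogeneous $f$.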
The cardinality of $K$ may be too small for the computational method
of \cite[Theorem 4.2]{1609.09753}: the following maps $H$ satisfy neither
\cite[Theorem 4.2]{1609.09753} nor lemma \ref{Irlem}:
\begin{itemize}
\item $\#K = 3$ and $H = (0,\frac12 x_1^2,\frac12 x_2^2,(x_1+x_2)x_3,(x_1-x_2)x_4)$;
\item $\#K = 2$ and $H = (0,0,x_1 x_2,x_1 x_3,x_2 x_4,(x_1-x_2)x_5)$;
\item $\#K = 2$ and $H = (0,0,x_1 x_4,x_1 x_3-x_2 x_5,x_2 x_4,(x_1-x_4)x_5)$.
\end{itemize}
\begin{theorem}
Let $H$ be quadratic homogeneous such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \le 4$.
Then the rows of ${\mathcal{J}} H$ are dependent over $K$. Furthermore, one of the following
statements holds.
\begin{enumerate}[\upshape (1)]
\item Every set of $6$ rows of ${\mathcal{J}} H$ is dependent over $K$.
\item There exists a tame invertible map
$x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and
${\mathcal{J}} \bar{H} = {\mathcal{J}} H$.
\end{enumerate}
In addition, $x + H$ is stably tame if $\operatorname{char} K \neq 2$.
\end{theorem}
\begin{proof}
The case where $\operatorname{rk} {\mathcal{J}} H \le 3$ follows from theorem \ref{rk3np}, so assume that
$\operatorname{rk} {\mathcal{J}} H = 4$. We follow the cases of corollary \ref{rk4}.
\begin{itemize}
\item $H$ is as in (1) of corollary \ref{rk4}.
Let $\tilde{H} = S^{-1}H(Sx)$. Then only the first $5$ rows of ${\mathcal{J}} \tilde{H}$
may be nonzero. Hence every set of $6$ rows of ${\mathcal{J}} H$ is dependent over $K$.
If $n > 5$, then the sixth row of ${\mathcal{J}} \tilde{H}$ is zero.
So assume that $n \le 5$. Then it follows from (i) of lemma \ref{lem6} below that
the rows of ${\mathcal{J}} H$ are dependent over $K$.
From \cite[Theorem 4.14]{MR2927368} and theorem \ref{dim5}, it follows that
$x + \tilde{H}$ is stably tame if $\operatorname{char} K \neq 2$.
\item $H$ is as in (2) of corollary \ref{rk4}.
Let $\tilde{H} = T^{-1}H(Tx)$ and let $N$ be the leading principal minor matrix
of size $3$ of ${\mathcal{J}} \tilde{H}$.
Just like for the case where $H$ is as in (2) of theorem \ref{rk3} in the proof
of theorem \ref{rk3np}, we can reduce to the case where only the first row
and the first three columns of ${\mathcal{J}} \tilde{H}$ may be nonzero.
From proposition \ref{evenrk}, it follows that we may assume that
${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is a product of a permutation matrix
and a diagonal matrix. We distinguish three subcases:
\begin{compactitem}
\item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$.
Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 \in
K[x_1,x_2,x_3]$ as well. We may assume that the coefficient of either $x_4^2$
or $x_4 x_5$ of $\tilde{H}_1$ is nonzero. With techniques in the proof of
corollary \ref{symdiag}, $x_4x_5$ can be transformed to $x_4^2 - x_5^2$,
so we may assume that the coefficient of $x_4^2$ of $\tilde{H}_1$ is nonzero.
Let $G := \tilde{H}(x_1,x_2,x_3,x_4,0,0,\ldots,0)$. Then
${\mathcal{J}} G = ({\mathcal{J}} \tilde{H})|_{x_5=\cdots=x_n=0}$. Consequently,
the nonzero part of ${\mathcal{J}} G$ is restricted to the first four columns.
From (i) of lemma \ref{lem6} below, it follows that the rows of
${\mathcal{J}} G$ are dependent over $K$ and that $x+G$ is tame.
Since the first row of ${\mathcal{J}} G$ is independent of the other rows of ${\mathcal{J}} G$,
the rows of ${\mathcal{J}} \tilde{H}$ are dependent over $K$. From proposition
\ref{tameZ}, it follows that $x + \tilde{H}$ is tame if $\operatorname{char} K = 0$,
which gives (2) if $\operatorname{char} K = 0$. Using techniques in the proof of
theorem \ref{dim5}, the case $\operatorname{char} K \neq 0$ follows as well.
\item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \notin
K[x_1,x_2,x_3]$.
Then we may assume that the coefficients of $x_2 x_4$ and $x_3 x_5$ of
$\tilde{H}_1$ are nonzero. Looking at the coefficients of $x_4^1$ and
$x_5^1$ of the sum of the determinants of the principal minors of size $2$,
we infer that $\parder{}{x_1} \tilde{H}_2 = 0$ and
$\parder{}{x_1} \tilde{H}_3 = 0$.
From a permuted version of \cite[Lemma 2]{1601.00579}, we deduce that
${\mathcal{J}}_{x_2,x_3} \allowbreak (\tilde{H}_2,\tilde{H}_3)$ is nilpotent.
On account of theorem \ref{rk3np} or \cite[Theorem 3.2]{1601.00579},
the second and the third row of ${\mathcal{J}} \tilde{H}$ are dependent over $K$.
By applying \cite[Lemma 2]{1601.00579} twice, we see that we can
replace $\tilde{H}_1$ by $0$ without affecting the nilpotency
of ${\mathcal{J}} \tilde{H}$. Now (2) follows in a similar manner as in the
case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in
K[x_1,x_2,x_3]$ above.
\item None of the above.
Then we may assume that $\parder{}{x_2} \tilde{H}_1 \notin
K[x_1,x_2,x_3]$ and $\parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$.
Furthermore, we may assume that the coefficient of $x_2 x_4$ of
$\tilde{H}_1$ is nonzero. Now we can apply the same arguments as in the
case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in
K[x_1,x_2,x_3]$ above.
\end{compactitem}
\item $H$ is as in (3) of corollary \ref{rk4}.
Let $\tilde{H} = T^{-1}H(Tx)$ and let $N$ be the leading principal minor matrix
of size $4$ of ${\mathcal{J}} \tilde{H}$.
Just like for the case where $H$ is as in (3) of theorem \ref{rk3} in the proof
of theorem \ref{rk3np}, we can reduce to the case where only the first row
and the first four columns of ${\mathcal{J}} \tilde{H}$ may be nonzero.
From proposition \ref{evenrk}, it follows that we may assume that
${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is a product of a permutation matrix
and a diagonal matrix. We distinguish three subcases:
\begin{compactitem}
\item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1,
\parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$.
Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 \in
K[x_1,x_2,x_3,x_4]$ as well. We may assume that the coefficient of $x_5 x_6$ of
$\tilde{H}_1$ is nonzero.
Just like we reduced to the case where only
the first $4$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero above, we can reduce
to the case where only the first $6$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero.
Hence the results follow from (ii) of lemma \ref{lem6} below.
\item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1,
\parder{}{x_4} \tilde{H}_1 \notin K[x_1,x_2,x_3,x_4]$.
Then we may assume that the coefficients of $x_2 x_5$, $x_3 x_6$ and $x_4 x_7$ of
$\tilde{H}_1$ are nonzero. Looking at the coefficients of $x_5^1$, $x_6^1$ and
$x_7^1$ of the sum of the determinants of the principal minors of size $2$,
we infer that $\parder{}{x_1} \tilde{H}_2 = 0$, $\parder{}{x_1} \tilde{H}_3 = 0$
and $\parder{}{x_1} \tilde{H}_4 = 0$.
From a permuted version of \cite[Lemma 2]{1601.00579}, we deduce that
${\mathcal{J}}_{x_2,x_3,x_4} \allowbreak (\tilde{H}_2,\tilde{H}_3,\tilde{H}_4)$ is nilpotent.
On account of theorem \ref{rk3np} or \cite[Theorem 3.2]{1601.00579},
the second, the third and the fourth row of ${\mathcal{J}} \tilde{H}$ are
dependent over $K$.
By applying \cite[Lemma 2]{1601.00579} twice, we see that we can
replace $\tilde{H}_1$ by $0$ without affecting the nilpotency
of ${\mathcal{J}} \tilde{H}$. Now (2) follows in a similar manner as in the
case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1,
\parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$ above.
\item None of the above.
Then we may assume that $\parder{}{x_2} \tilde{H}_1 \notin
K[x_1,x_2,x_3,x_4]$ and $\parder{}{x_4} \tilde{H}_1 \in
K[x_1,x_2,x_3,x_4]$. Furthermore, we may assume that the
coefficient of $x_2 x_5$ of $\tilde{H}_1$ is nonzero.
Suppose first that $\parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$.
Then we can apply the same argument as in the case $\parder{}{x_2} \tilde{H}_1,
\parder{}{x_3} \tilde{H}_1, \parder{}{x_4} \tilde{H}_1 \in
K[x_1,x_2,x_3,x_4]$ above, to reduce to the case where only the
first $5$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, which
follows from (ii) of lemma \ref{lem6} below.
Suppose next that $\parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3,x_4]$.
Then we may assume that the coefficient of $x_3 x_6$ of $\tilde{H}_1$
is nonzero. Now we can apply the same arguments as in the case
$\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1, \allowbreak
\parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$ above,
to reduce to the case where only the first $6$ columns of
${\mathcal{J}} \tilde{H}$ may be nonzero, which
follows from (ii) of lemma \ref{lem6} below.
\end{compactitem}
\item $H$ is as in (4) of corollary \ref{rk4}.
Let $\tilde{H} = T^{-1}H(Tx)$. Then only the first $5$ columns of
${\mathcal{J}} \tilde{H}$ may be nonzero, so the results follow from
(i) of lemma \ref{lem6} below.
\item $H$ is as in (5) of corollary \ref{rk4}.
Let $\tilde{H} = T^{-1}H(Tx)$. Then only the first $6$ columns of
${\mathcal{J}} \tilde{H}$ may be nonzero, so the results follow from
(ii) of lemma \ref{lem6} below. \qedhere
\end{itemize}
\end{proof}
\begin{lemma} \label{lem6}
Let $H$ be quadratic homogeneous, such that ${\mathcal{J}} H$ is nilpotent.
\begin{enumerate}[\upshape (i)]
\item If $\operatorname{char} K \neq 2$ and only the first $5$ columns of ${\mathcal{J}} H$
may be nonzero, then the first $5$ rows of ${\mathcal{J}} H$ are dependent over $K$,
and $x + H$ is tame.
\item If $\operatorname{char} K = 2$ and only the first $6$ columns of ${\mathcal{J}} H$
may be nonzero, then the first $6$ rows of ${\mathcal{J}} H$ are dependent over $K$,
and there exists a tame invertible map $x + \bar{H}$ such that
$\bar{H}$ is quadratic homogeneous and ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $N$ be the leading principal minor matrix of size $5$ of
${\mathcal{J}} H$ in case of (i), and leading principal minor matrix of size $6$ of
${\mathcal{J}} H$ in case of (ii). From \cite[Lemma 2]{1601.00579}, it follows that $N$
is nilpotent.
If $\operatorname{rk} N \le 3$, then the results follow from theorem \ref{rk3np}.
If $\operatorname{rk} N \ge 4$, then the results follow from theorem \ref{dim5} in case of (i)
and from theorem \ref{dim6} in case of (ii).
\end{proof}
Notice that we only used (i) of lemma \ref{lem6} in the case where only the first $4$
columns of ${\mathcal{J}} H$ may be nonzero, and that case can be proved without the
calculations of theorem \ref{dim5}.
\end{document}
\begin{document}
\title{Multivariate Functional Additive Mixed Models}
\begin{abstract}
Multivariate functional data can be intrinsically multivariate like movement trajectories in 2D or complementary like precipitation, temperature, and wind speeds over time at a given weather station. We propose a multivariate functional additive mixed model (multiFAMM) and show its application to both data situations using examples from sports science (movement trajectories of snooker players) and phonetic science (acoustic signals and articulation of consonants). The approach includes linear and nonlinear covariate effects and models the dependency structure between the dimensions of the responses using multivariate functional principal component analysis. Multivariate functional random intercepts capture both the auto-correlation within a given function and cross-correlations between the multivariate functional dimensions. They also allow us to model between-function correlations as induced by e.g.\ repeated measurements or crossed study designs. Modeling the dependency structure between the dimensions can generate additional insight into the properties of the multivariate functional process, improves the estimation of random effects, and yields corrected confidence bands for covariate effects. Extensive simulation studies indicate that a multivariate modeling approach is more parsimonious than fitting independent univariate models to the data while maintaining or improving model fit.\\
\\
\textbf{Keywords:} functional additive mixed model; multivariate functional principal components; multivariate functional data; snooker trajectories; speech production
\end{abstract}
\section{Introduction}
With the technological advances seen in recent years, functional data sets are increasingly multivariate. They can be multivariate with respect to the domain of a function, its codomain, or both. Here, we focus on multivariate functions with a one-dimensional domain $\bm{f} = (f^{(1)},..., f^{(D)}) \colon \mathcal{I} \subset \mathbb{R} \to \mathbb{R}^{D}$ with square-integrable components $f^{(d)} \in L^2(\mathcal{I}), d = 1,..., D$. For this type of data, we can distinguish two subclasses: One has interpretable separate dimensions and can be seen as several complementary modes of a common phenomenon \citep[``multimodal'' data, cf.~][]{uludaug2014general} as in the analysis of acoustic signals and articulation processes in speech production
in one of our data examples. The codomain then simply is the Cartesian product $\mathcal{S} = \mathcal{S}^{(1)} \times ... \times \mathcal{S}^{(D)}$ of interpretable univariate codomains $\mathcal{S}^{(d)} \subset \mathbb{R}$. The other subclass is more ``intrinsically'' multivariate insofar as univariate analyses would not yield meaningful results. Consider for example two-dimensional movement trajectories as in one of our motivating applications, where the function measures Cartesian coordinates over time: for fixed trajectories, rotation or translation of the essentially arbitrary coordinate system would change the results of univariate analyses. For intrinsically multivariate functional data a multivariate approach is the natural and preferred mode of analysis, yielding interpretable results on the observation level. Even for multimodal functional data, a joint analysis may generate additional insight by incorporating the covariance structure between the dimensions. This motivates the development of statistical methods for multivariate functional data. We here propose multivariate functional additive mixed models to model potentially sparsely observed functions with flexible covariate effects and crossed or nested study designs.
Multivariate functional data have been of interest in different statistical fields such as clustering \citep{jacques2014, park2017}, functional principal component analysis \citep{chiou2014, happ2018, backenroth2018, li2020}, and registration \citep{carroll2020, steyer2020}. There is also ample literature on multivariate functional data regression such as graphical models \citep{zhu2016}, reduced rank regression \citep{liu2020}, or varying coefficient models \citep{zhu2012, Li2017}. Yet, so far, there are only a few approaches that can handle multilevel regression when the functional response is multivariate. In particular, \citet{goldsmith2016} propose a hierarchical Bayesian multivariate functional regression model that can include subject level and residual random effect functions to account for correlation between and within functions. They work with bivariate functional data observed on a regular and dense grid and assume \emph{a priori} independence between the different dimensions of the subject-specific random effects. Thus, they model the correlation between the dimensions only in the residual function. As our approach explicitly models the dependencies between dimensions for multiple functional random effects and also handles data observed on sparse and irregular grids on more than two dimensions, the model proposed by \citet{goldsmith2016} can be seen as a special case of our more general model class.
Alternatively, \citet{zhu2017} use a two stage transformation with basis functions for the multivariate functional mixed model. This allows the estimation of scalar regression models for the resulting basis coefficients that are argued to be approximately independent. The proposed model is part of the so called \gls{fmm} framework \citep{morris2017}. While \glspl{fmm} use basis transformations of functional responses (observed on equal grids) at the start of the analysis, we propose a multivariate model in the \gls{famm} framework, which uses basis representations of all (effect) functions in the model \citep{scheipl2015}. The differences between these two functional regression frameworks have been extensively discussed before \citep{greven2017, morris2017}.
The main advantages of our multivariate regression model, also compared to \citet{goldsmith2016} and \citet{zhu2017}, are that it is readily available for sparse and irregular functional data and that it allows the inclusion of multiple nested or crossed random processes, both of which are required in our data examples. Another important contribution is that our approach directly models the multivariate covariance structure of all random effects included in the model using multivariate \glspl{fpc} and thus implicitly models the covariances between the dimensions. This makes the model representation more parsimonious, avoids assumptions that are difficult to verify, and allows further interpretation of the random effect processes, such as their relative importance and their dominating modes. As part of the \gls{famm} framework, our model provides a vast toolkit of modeling options for covariate and random effects, as well as tools for estimation and inference \citep{wood2017}. The proposed \gls{mfamm} extends the \gls{famm} framework, combining ideas from multilevel modeling \citep{cederbaum2016} and multivariate functional data \citep{happ2018} to account for sparse and irregular functional data and different study designs.
We illustrate the \gls{mfamm} on two motivating examples. The first (intrinsically multivariate) data stem from a study on the effect of a training programme for snooker players with a nested study design (shots within sessions within players) \citep{snooker2014}. The movement trajectories of a player's elbow, hand, and shoulder during a snooker shot are recorded on camera, yielding six-dimensional multivariate functional data (see Figure \ref{fig:snook_obs}). In the second data example, we analyse multimodal data from a speech production study with a crossed study design (speakers crossed with words) \citep{pouplier2016} on so-called ``assimilation'' of consonants. The two measured modes (acoustic and articulatory, see Figure \ref{fig:speech_obs}) are expected to be closely related but joint analyses have not yet incorporated the functional nature of the data.
These two examples motivate the development of a regression model for sparse and irregularly sampled multivariate functional data that can incorporate crossed or nested functional random effects as required by the study design in addition to flexible covariate effects. The proposed approach is implemented in \texttt{R} \citep{rsoftware} in the package \texttt{multifamm} \citep{multifammpackage}. The paper is structured as follows: Section \ref{sec:MultiFAMM} specifies the \gls{mfamm} and Section \ref{sec:Estimation} its estimation process. Section \ref{sec:application} presents the application of the \gls{mfamm} to the data examples and Section \ref{sec:simulation} shows the estimation performance of our proposed approach in simulations. Section \ref{sec:discussion} closes with a discussion and outlook.
\section{Multivariate Functional Additive Mixed Model}
\label{sec:MultiFAMM}
\subsection{General Model}
Let $\bm{y}^{*}_i(t) = (y_i^{*(1)}(t),..., y_i^{*(D)}(t))^{\top}$ be the multivariate functional response of unit $i=1,...,N$ over $t \in \mathcal{I}$, consisting of dimensions $d=1,...,D$. Without loss of generality, we assume a common one-dimensional interval domain $\mathcal{I} = [0,1]$ for all dimensions, and square-integrable $y_i^{*(d)}\in L^2(\mathcal{I})$. Define $L^2_{D}(\mathcal{I}) := L^2(\mathcal{I}) \times ...\times L^2(\mathcal{I})$ so that $\bm{y}^{*}_i\in L^2_D(\mathcal{I})$. The underlying smooth function $\bm{y}_i^{*}$, however, is only evaluated at (potentially sparse or dimension-specific) points $\bm{y}_{it}^{*} = (y_{it}^{*(1)}, ..., y_{it}^{*(D)})^{\top}$ and the evaluation is subject to white noise, i.e.\ $\bm{y}_{it} = \bm{y}_{it}^{*} + \bm{\epsilon}_{it}$. The residual term $\bm{\epsilon}_{it}$ reflects additional uncorrelated white noise measurement error, following a $D$-dimensional multivariate normal distribution $\mathcal{N}_D$ with zero-mean and diagonal covariance matrix $\tilde{\bm{\Sigma}} = \operatorname{diag}(\sigma_{1}^2, ..., \sigma_{D}^2)$ with dimension specific variances $\sigma_{d}^2$. We construct a multivariate functional mixed model as
\begin{align} \label{eq:multivariateFLMM}
\begin{split}
\bm{y}_{it} &= \bm{y}_i^{*}(t) + \bm{\epsilon}_{it}= \bm{\mu}(\bm{x}_i, t) + \bm{U}(t)\bm{z}_i + \bm{\epsilon}_{it} \\
&= \bm{\mu}(\bm{x}_i, t) + \sum_{j=1}^q\bm{U_j}(t)\bm{z}_{ij} + \bm{E}_i(t) + \bm{\epsilon}_{it}, \quad t \in \mathcal{I},
\end{split}
\end{align}
where
\begin{align*}
\bm{U}_{j}(t) &= (\bm{U}_{j1}(t), ..., \bm{U}_{jV_{U_j}}(t)); j = 1,..., q,\\
\bm{U}_{jv}(t) &\stackrel{\text{ind.c.}}{\sim} MGP\left(\bm 0, K_{U_{j}}\right); v = 1,..., V_{U_j}; \forall j = 1,...,q, \\
\bm{E}_{i}(t) &\stackrel{\text{ind.c.}}{\sim} MGP\left(\bm 0, K_{E} \right); i = 1,..., N, \text{ and }\\
\bm{\epsilon}_{it} &\stackrel{\text{i.i.d.}}{\sim} \mathcal{N}_D\left(\bm 0, \tilde{\bm{\Sigma}} = \operatorname{diag}(\sigma_{1}^2, ..., \sigma_{D}^2)\right) ; i = 1,..., N; \quad t \in \mathcal{I}.
\end{align*}
We assume an additive predictor $\bm{\mu}(\bm{x}_i, \cdot) = \sum_{l = 1}^p \bm{f}_l (\bm{x}_{i}, \cdot)$ of fixed effects, which consists of partial predictors $\bm{f}_{l}(\bm{x}_{i}, \cdot) = (f_{l}^{(1)}(\bm{x}_{i}, \cdot), ..., f_{l}^{(D)}(\bm{x}_{i}, \cdot))^{\top}\in L^2_D(\mathcal{I}),\ l = 1,...,p$, that are multivariate functions depending on a subset of the vector of scalar covariates $\bm{x}_{i}$. This allows the inclusion of linear or smooth covariate effects as well as interaction effects between multiple covariates as in the univariate \gls{famm} \citep{scheipl2015}. Partial predictors may also depend on dimension-specific subsets of covariates.
For random effects $\bm{U}$, we focus on model scenarios with $q$ independent multivariate functional random intercepts for crossed and/or nested designs. For group level $v = 1,\dots, V_{U_j}$ within grouping layer $j=1,\dots,q$, these take the value $\bm{U}_{jv} \in L^2_D(\mathcal{I})$. For each layer, the $\bm{U}_{j1}, ..., \bm{U}_{jV_{U_j}}$ are independent copies of a multivariate smooth zero-mean Gaussian random process. Analogously to scalar linear mixed models, the $\bm{U}_{jv}$ model correlations between different response functions $\bm{y}_i^{*}$ within the same group as well as variation across groups. By arranging them in a $(D \times V_{U_j})$ matrix $\bm{U}_j(t)$ per $t$, the $j$th random intercept can be expressed in the common mixed model notation in (\ref{eq:multivariateFLMM}) using appropriate group indicators $\bm{z}_{ij} = (z_{ij1}, \dots, z_{ijV_{U_j}})^\top$ for the respective design.
Although the smooth residuals $\bm{E}_i\in L^2_D(\mathcal{I})$ are technically curve-specific functional random intercepts, we distinguish them in the notation, as they model correlation within rather than between response functions. We write $\bm{E}_v \in L^2_D(\mathcal{I}), v = 1,..., V_E$ with $V_E = N$. The $\bm{E}_i$ capture smooth deviations from the group-specific mean $\bm{\mu}(\bm{x}_i, \cdot) + \sum_{j=1}^{q}\bm{U}_j(\cdot)\bm{z}_{ij}$.
For a more compact representation, we can arrange all $\bm{U}_j(t)$ and $\bm{E}_i(t)$ together in a $(D \times (\sum_{j=1}^qV_{U_j} + N))$ matrix $\bm{U}(t)$ per $t$, and the group indicators for all layers in a corresponding vector $\bm{z}_i = (\bm{z}_{i1}^\top, \dots, \bm{z}_{iq}^\top, \boldsymbol{e}_i^\top)^\top$ with $\boldsymbol{e}_i$ the $i$-th unit vector. The resulting model term $\bm{U}(t) \bm{z}_i$ then comprises all smooth random functions, accounting for all correlation between/within response functions $\bm{y}_i^{*}$ given the covariates $\bm{x}_i$ as required by the respective experimental design.
$\bm{E}_i$ and $\bm{U}_{jv}$ are independent copies (ind.\ c.) of random processes having multivariate $D\times D$ covariance kernels $K_E, K_{U_j}$, with univariate covariance surfaces $K_{E}^{(d,e)}(t, t') = \text{Cov}\left[E_i^{(d)}(t), E_i^{(e)}(t')\right]$ and $K_{U_{j}}^{(d,e)}(t, t') = \text{Cov}\left[U_{jv}^{(d)}(t), U_{jv}^{(e)}(t')\right]$ reflecting the covariance between the process dimensions $d$ and $e$ at $t$ and $t'$. We
call these auto-covariances for $d = e$ and cross-covariances otherwise. The multivariate Gaussian processes are uniquely defined by their multivariate mean function, here the null function $\bm{0}$, and the multivariate covariance kernels $K_g$, and we write $MGP\left(\bm{0}, K_g\right), g \in \{U_1, \dots, U_q, E\}$. Note that vectorizing the matrix $\bm{U}(t)$ allows us to formulate the joint distribution assumption $\operatorname{vec}(\bm{U}(t)) \sim MGP\left(\bm{0}, K_{U}\right)$ with $K_{U}(t, t')$ having a block-diagonal structure repeating each $K_{U_j}(t,t')$ $V_{U_j}$ times and $K_E(t,t')$ $N$ times.
We assume that the different sources of variation $\bm{U}_j(t),j = 1,...,q, \bm{E}_i(t)$, and $\bm{\epsilon}_{it}$ are mutually uncorrelated random processes to assure model identification. Assuming smoothness of the covariance kernel $K_E$ further guarantees that the residual process $\bm{E}_i(t)$ can be separated from the white noise $\bm{\epsilon}_{it}$, removing the error variance from the diagonal of the smooth covariance kernel \citep[e.g.,][]{yao2005functional}.
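To fix ideas, the following minimal sketch simulates data from a simplified version of model \eqref{eq:multivariateFLMM} in base \texttt{R}, with $D = 2$ dimensions, a single random-intercept layer ($q = 1$), a smooth residual, and white noise measurement error. The squared-exponential kernel, the cross-covariance weight of $0.5$, and all object names are illustrative choices only and are not part of the proposed estimation procedure.
\begin{verbatim}
# Minimal sketch (base R): simulate bivariate (D = 2) functional data with one
# random-intercept layer plus a smooth residual and white noise; all choices
# (kernel, cross-covariance weight, sample sizes) are illustrative.
set.seed(1)
D <- 2; V <- 5; n_per_group <- 4; N <- V * n_per_group
tt <- seq(0, 1, length.out = 50)                  # common grid on I = [0, 1]
sqexp <- function(s, t, tau) exp(-(outer(s, t, "-") / tau)^2)
K_block <- function(tau) {                        # stacked-dimension covariance,
  K <- sqexp(tt, tt, tau)                         # cross-covariance = 0.5 * auto
  rbind(cbind(K, 0.5 * K), cbind(0.5 * K, K))
}
rMGP <- function(K) {                             # one zero-mean Gaussian process draw
  L <- chol(K + 1e-6 * diag(nrow(K)))
  drop(t(L) %*% rnorm(nrow(K)))
}
U  <- replicate(V, rMGP(K_block(0.4)))            # group-level random intercepts U_{1v}
mu <- sin(2 * pi * tt)                            # simple fixed mean, equal on both dimensions
group <- rep(1:V, each = n_per_group)
y <- sapply(1:N, function(i)
  c(mu, mu) + U[, group[i]] + rMGP(K_block(0.1)) +  # mean + U + smooth residual E_i
    rnorm(2 * length(tt), sd = 0.1))                # white noise eps_{it}
dim(y)   # (2 * 50) x 20: stacked dimensions by curves
\end{verbatim}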
\subsection{FPC Representation of the Random Effects}
Model \eqref{eq:multivariateFLMM} specifies a univariate \gls{flmm} as given in \citet{cederbaum2016} for each dimension $d$. The main difference lies in the multivariate random processes that introduce dependencies between the dimensions. In order to avoid restrictive assumptions about the structure of these multivariate covariance operators, which would typically be very difficult to elicit \emph{a priori} or verify \emph{ex post}, we estimate them directly from the data. The main difficulty then becomes computationally efficient estimation, which is already costly in the univariate case. Especially for higher dimensional multivariate functional data, accounting for the cross-covariances can become a complex task, which we tackle with \gls{mfpca}.
Given the covariance operators (see Section \ref{sec:Estimation}), we represent the multivariate random effects
in model \eqref{eq:multivariateFLMM} using truncated multivariate \gls{kl} expansions
\begin{equation}
\begin{aligned}
\label{eq:truncated_KL_expansion}
\bm{U}_{jv} (t) &\approx \sum_{m=1}^{M_{U_j}}\rho_{U_j v m}\bm{\psi}_{U_jm}(t),\; j = 1,...,q;\, v =1, ..., V_{U_j},\\
\bm{E}_v(t) &\approx \sum_{m=1}^{M_E}\rho_{Ev m}\bm{\psi}_{Em}(t), \; v = 1, ..., N,
\end{aligned}
\end{equation}
where the orthonormal multivariate eigenfunctions $\bm{\psi}_{gm} = (\psi_{gm}^{(1)}, ..., \psi_{gm}^{(D)})^{\top}\in L^2_D(\mathcal{I})$, $m = 1,..., M_g$, $g\in\{U_1, ..., U_q, E\}$ of the corresponding covariance operators with truncation order $M_g$ are used as basis functions, and the random scores $\rho_{gvm} \sim N(0, \nu_{gm})$ are mutually independent, with $\nu_{gm}$ the ordered eigenvalues of the corresponding covariance operator. Note that the assumption of Gaussianity for the random processes can be relaxed. For non-Gaussian random processes, the \gls{kl} expansion still gives uncorrelated (but non-normal) scores and estimation based on a \gls{pls} criterion (see Section \ref{subsec:FAMM}) remains reasonable.
Using \gls{kl} expansions gives a parsimonious representation of the multivariate random processes that is an optimal approximation with respect to the integrated squared error \citep[cf.][]{ramsay2005}, as well as interpretable basis functions capturing the most prominent modes of variation of the respective process. The distinct feature of this approach is that the multivariate \glspl{fpc} directly account for the dependency structure of each random process across the dimensions. If, by contrast, e.g.\ splines were used in the basis representation of the random effects, it would be necessary to explicitly model the cross-covariances of each random process in the model \citep[cf.][]{li2020}. Multivariate eigenfunctions, however, are designed to incorporate the dependency structure between dimensions and allow the assumption of independent (univariate) basis coefficients $\rho_{gvm}$ via the \gls{kl} theorem \citep[see e.g.~][]{happ2018}. This leads to a parsimonious multivariate basis for the random effects, where a typically small vector of scalar scores $\rho_{gvm}$ common to all dimensions represents nearly the entire information about these $D$-dimensional processes.
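As a small illustration of the truncated \gls{kl} representation in \eqref{eq:truncated_KL_expansion}, the following base \texttt{R} sketch draws one realization of a bivariate random effect from $M = 3$ given multivariate eigenfunctions and eigenvalues. The Fourier-type eigenfunctions and all numerical values are purely illustrative and are not the eigenfunctions estimated by our procedure.
\begin{verbatim}
# Minimal sketch: one draw of a multivariate random effect from its truncated KL
# expansion, given eigenfunctions evaluated on a grid and their eigenvalues.
tt <- seq(0, 1, length.out = 100); D <- 2; M <- 3
nu <- c(1, 0.5, 0.2)                          # eigenvalues nu_{gm} (illustrative)
# psi[[m]] is a (length(tt) x D) matrix holding psi_{gm}^(1), ..., psi_{gm}^(D);
# these Fourier-type functions are orthonormal w.r.t. the unweighted scalar product
psi <- lapply(1:M, function(m) cbind(sin(m * pi * tt), cos(m * pi * tt)))
rho <- rnorm(M, sd = sqrt(nu))                # independent scores rho_{gvm}
U_gv <- Reduce(`+`, Map(function(r, p) r * p, rho, psi))  # sum_m rho_m * psi_m(t)
matplot(tt, U_gv, type = "l", lty = 1, ylab = "U (dimensions 1 and 2)")
\end{verbatim}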
\section{Estimation}
\label{sec:Estimation}
We use a two-step approach to estimate the \gls{mfamm} and the respective multivariate covariance operators. In a first step (Section \ref{subsec:EigenfunctionEstimation}), the $D$-dimensional eigenfunctions $\bm{\psi}_{gm}(t)$ with their corresponding eigenvalues $\nu_{gm}$ are estimated from their univariate counterparts following \citet{cederbaum2018} and \citet{happ2018}. These estimates are then plugged into \eqref{eq:truncated_KL_expansion} and we represent the \gls{mfamm} as part of the general \gls{famm} framework (Section \ref{subsec:FAMM}) by suitable re-arrangement. We can view the estimated $\bm{\psi}_{gm}(t)$ simply as an empirically derived basis that parsimoniously represents the patterns in the observed data. While their estimation adds uncertainty, we are not interested in inferential statements for the variance modes, and our simulations (see Section \ref{sec:simulation}) suggest that the estimated eigenfunctions are reasonable approximations that work well as a basis.
\subsection{Step 1: Estimation of the Eigenfunction Basis}
\label{subsec:EigenfunctionEstimation}
\subsubsection*{Step 1 (i): Univariate Mean Estimation}
\label{subsubsec:UniMean}
In a first step, we obtain preliminary estimates of the dimension specific means $\mu^{(d)}(\bm{x}_i, t) = \sum_{l=1}^{p}f_l^{(d)}(\bm{x}_{il}, t)$ using univariate \glspl{famm}. We model
\begin{align}
y_{it}^{(d)}&= \mu^{(d)}(\bm{x}_i, t) + \epsilon^{(d)}_{it};\; d=1,\dots,D
\label{eq:uniMeanEstim}
\end{align}
independently for all $d$ with \gls{iid}~Gaussian random variables $\epsilon^{(d)}_{it}$. The estimation of $\mu^{(d)}(\bm{x}_i, t)$ proceeds analogously to the estimation of the \gls{mfamm} described in section \ref{subsec:FAMM}. It is based on the evaluation points of the $y_i^{*(d)}(t)$, whose locations on the interval $\mathcal{I}$ can vary across dimensions. Model \eqref{eq:uniMeanEstim} thus accommodates sparse and irregular multivariate functional data and implies a working independence assumption across scalar observations within and across functions.
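A minimal sketch of this step, assuming long-format data for one dimension $d$ with columns \texttt{y} (evaluations), \texttt{t} (time points), and a scalar covariate \texttt{x}, could use the \texttt{mgcv} package as below. The data object \texttt{dat\_d}, the single covariate, and the basis dimensions are illustrative and do not reproduce the exact specifications used in our applications.
\begin{verbatim}
# Minimal sketch of step 1 (i): univariate mean estimation for one dimension d
# under working independence, with a functional intercept f_0(t) and a linear
# functional effect x * f_1(t), both as penalized splines over t.
library(mgcv)
fit_d <- gam(y ~ s(t, bs = "ps", k = 8) + s(t, by = x, bs = "ps", k = 8),
             data = dat_d, method = "REML")
mu_hat_d <- predict(fit_d)   # preliminary estimate of mu^(d)(x_i, t) at the data points
\end{verbatim}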
\subsubsection*{Step 1 (ii): Univariate Covariance Estimation}
This preliminary mean function is used to centre the data $\tilde{y}_{it}^{(d)} = y_{it}^{(d)}- \hat{\mu}^{(d)}(\bm{x}_i, t)$ in order to obtain noisy evaluations of the detrended functions $\tilde{y}_i^{*(d)}(t) = y_i^{*(d)}(t) - {\mu}^{(d)}(\bm{x}_i, t)$ for covariance estimation. \citet{cederbaum2016} already find that for this purpose, the working independence assumption within functions across evaluation points in \eqref{eq:uniMeanEstim} gives reasonable results. The expectation of the crossproducts of the centred functions then coincides with the auto-covariance, i.e.\ $\mathbb{E}\left(\tilde{y}_{it}^{(d)}\tilde{y}_{i't'}^{(d)}\right) \approx \text{Cov}\left[y_{it}^{(d)}, y_{i't'}^{(d)}\right]$. For the independent random components specified in model \eqref{eq:multivariateFLMM}, this overall covariance decomposes additively into contributions from each random process as
\begin{align}
\label{eq:nested_random_effects_cov_estimation}
\mathbb{E}\left(\tilde{y}_{it}^{(d)}\tilde{y}_{i't'}^{(d)}\right) \approx \sum_{j=1}^q K_{U_j}^{(d,d)}(t, t')\delta_{v_jv_j'} + \big(K_{E}^{(d,d)}(t, t') + \sigma^2_{d}\delta_{tt'}\big)\delta_{ii'},
\end{align}
using indicators $\delta_{xx'}$ that equal one for $x=x'$ and zero otherwise. The indicator $\delta_{v_jv_j'}$ thus identifies if the curves in the crossproduct belong to the same group $v_j$ of the $j$th layer. Using $t$, $t'$, and the indicators $\delta_{v_jv_j'},\delta_{tt'}, \delta_{ii'}$ as covariates and the crossproducts of the centred data as responses, we can estimate the auto-covariances $K_{U_1}^{(d,d)}, ..., K_{U_q}^{(d,d)},$ and $K_{E}^{(d,d)}$ of the random processes using symmetric additive covariance smoothing \citep{cederbaum2018}. This extends the univariate approach proposed by \citet{cederbaum2016}. In particular, we also allow a nested random effects structure as required for the snooker training application in section \ref{subsec:snookerTraining} by specifying the indicator of the nested effect as the product of subject and session indicators. Note that estimating \eqref{eq:nested_random_effects_cov_estimation} also yields estimates of the dimension specific error variances $\sigma_d^{2}$ as a byproduct.
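The following base \texttt{R} sketch illustrates how the response and covariates for the auto-covariance smoother in \eqref{eq:nested_random_effects_cov_estimation} can be set up for one dimension and one grouping layer, given centred long-format data with (illustrative) columns \texttt{y\_tilde}, \texttt{t}, a curve identifier \texttt{id}, and a group identifier \texttt{grp}; the actual symmetric additive smoother of \citet{cederbaum2018} is considerably more elaborate.
\begin{verbatim}
# Minimal sketch: crossproducts of centred evaluations and the indicator
# covariates delta_{vv'}, delta_{ii'}, delta_{tt'} for one dimension d.
idx <- expand.grid(a = seq_len(nrow(dat_d)), b = seq_len(nrow(dat_d)))
idx <- idx[idx$a <= idx$b, ]                      # symmetry: keep each pair once
cp <- data.frame(
  yy       = dat_d$y_tilde[idx$a] * dat_d$y_tilde[idx$b],      # crossproducts
  t1       = dat_d$t[idx$a],
  t2       = dat_d$t[idx$b],
  same_grp = as.integer(dat_d$grp[idx$a] == dat_d$grp[idx$b]), # delta_{v v'}
  same_crv = as.integer(dat_d$id[idx$a]  == dat_d$id[idx$b]),  # delta_{i i'}
  same_t   = as.integer(dat_d$t[idx$a]   == dat_d$t[idx$b])    # delta_{t t'}
)
# cp$yy is then smoothed over (t1, t2) with the indicators as additional covariates
\end{verbatim}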
\subsubsection*{Step 1 (iii): Univariate Eigenfunction Estimation}
Based on the covariance kernel estimates, we apply separate univariate \glspl{fpca} for each random process by conducting an eigendecomposition of the respective linear integral operator. In practice, each estimated process- and dimension-specific auto-covariance is re-evaluated on a dense grid so that a univariate \gls{fpca} can be conducted. Alternatively, \citet{reiss2020tensor} provide an explicit spline representation of the estimated eigenfunctions. Eigenfunctions with non-positive eigenvalues are removed to ensure positive definiteness, and further regularization by truncation based on the proportion of variance explained is possible \citep[see e.g.][]{di2009multilevel, peng2009geometric, cederbaum2016}. However, we suggest keeping all univariate \glspl{fpc} with positive eigenvalues for the computation of the \gls{mfpca} in order to preserve all important modes of variation and cross-correlation in the data.
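A minimal base \texttt{R} sketch of this discretized eigendecomposition is given below; the placeholder kernel, the grid, and the rescaling by the grid width are illustrative and only meant to show the numerical recipe.
\begin{verbatim}
# Minimal sketch of step 1 (iii): univariate FPCA of an estimated auto-covariance
# re-evaluated on a dense grid (here a placeholder kernel). Eigenvalues are
# rescaled by the grid width to approximate those of the integral operator,
# eigenvectors are rescaled to approximate L2-normalized eigenfunctions, and
# components with non-positive eigenvalues are dropped.
tt <- seq(0, 1, length.out = 101); h <- diff(tt)[1]
K  <- outer(tt, tt, pmin)                        # placeholder for an estimated K_g^(d,d)
eig <- eigen((K + t(K)) / 2, symmetric = TRUE)   # enforce symmetry numerically
keep    <- eig$values > 0
nu_hat  <- eig$values[keep] * h                          # estimated eigenvalues
phi_hat <- eig$vectors[, keep, drop = FALSE] / sqrt(h)   # discretized eigenfunctions
\end{verbatim}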
\subsubsection*{Step 1 (iv): Multivariate Eigenfunction Estimation}
The estimated univariate eigenfunctions and scores are then used to conduct an \gls{mfpca} separately for each multivariate random process. The \gls{mfpca} exploits correlations between univariate \gls{fpc} scores across dimensions to reduce the number of basis functions needed to sufficiently represent the random processes. We base the \gls{mfpca} on the following definition of a (weighted) scalar product
\begin{align}
\label{eq:weighted_scalar_product}
\langle\langle \bm{f}, \bm{g} \rangle\rangle := \sum_{d=1}^D w_d \int_{\mathcal{I}}f^{(d)}(t)g^{(d)}(t)dt, \quad \bm{f}, \bm{g} \in L^2_D(\mathcal{I}),
\end{align}
for the response space with positive weights $w_d, d = 1,...,D$ and the induced norm denoted by $\vert\vert\vert\cdot\vert\vert\vert$.
The corresponding covariance operators $\Gamma_{g}: L^2_D(\mathcal{I}) \rightarrow L^2_D(\mathcal{I})$ of the multivariate random processes $\bm{U}_{jv}$ and $\bm{E}_v$ are then given by $(\Gamma_{g} \bm{f})(t) = \langle\langle \bm{f}, K_{g}(t, \cdot) \rangle\rangle$, $g\in\{U_1,..., U_q, E\}$.
The standard choice of weights in our applications is $w_1 = ... = w_D = 1$ (unweighted scalar product) but other choices are possible. Consider for example a scenario where dimensions are observed with different amounts of measurement error. If variation in dimensions with a large proportion of measurement error is to be downweighted, we propose to use $w_d = \frac{1}{\hat{\sigma}^2_d}$ with the dimension specific measurement error variance estimates $\hat{\sigma}^2_d$ obtained from \eqref{eq:nested_random_effects_cov_estimation}.
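For completeness, the weighted scalar product \eqref{eq:weighted_scalar_product} can be approximated on a common grid as in the following base \texttt{R} sketch, using the trapezoidal rule for the integrals; the function names and the default weights are illustrative.
\begin{verbatim}
# Minimal sketch: weighted scalar product and induced norm on a common grid.
# f and g are (grid x D) matrices, tt the grid, w the dimension weights
# (e.g. rep(1, D), or 1 / sigma_d^2 as suggested above).
wsp <- function(f, g, tt, w = rep(1, ncol(f))) {
  trap <- function(v) sum(diff(tt) * (head(v, -1) + tail(v, -1)) / 2)
  sum(w * sapply(seq_len(ncol(f)), function(d) trap(f[, d] * g[, d])))
}
wnorm <- function(f, tt, w = rep(1, ncol(f))) sqrt(wsp(f, f, tt, w))
\end{verbatim}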
\citet{happ2018} show that estimates of the multivariate eigenvalues $\nu_{gm}$ of $\Gamma_g$ can be obtained from an eigenanalysis of a covariance matrix of the univariate random scores. The corresponding multivariate eigenfunctions $\bm{\psi}_{gm}$ can be obtained as linear combinations of the univariate eigenfunctions with the weights given by the resulting eigenvectors. The estimates $\hat{\bm{\psi}}_{gm}$ are then substituted for the basis functions of the truncated multivariate \gls{kl} expansions of the random effects $\bm{U}_{jv}$ and $\bm{E}_v$ in \eqref{eq:truncated_KL_expansion}. Note that for each random process $g$, the maximum number of \glspl{fpc} is given by the total number of univariate eigenfunctions included in the estimation process of the \gls{mfpca} of $g$. To achieve further regularization and analogously to \citet{cederbaum2016}, we propose to choose truncation orders $M_g$ for each \gls{kl} expansion of the multivariate random processes using a prespecified proportion of explained variation.
\subsubsection*{Step 1 (v): Multivariate Truncation Order}
We offer two different approaches for the choice of truncation orders $M_g$ based on different variance decompositions (derivation in Appendix \ref{app_sec:vardecomp}):
\begin{gather}
\label{eq:total_variation_decomp}
\mathbb{E}\left(\vert\vert\vert \bm{y}_i - \bm{\mu}(\bm{x}_i) \vert\vert\vert^2 \right) = \sum_{d=1}^Dw_d\int_{\mathcal{I}}\mathrm{Var}\big(y_{i}^{(d)}(t)\big) dt =\sum_{g}\sum_{m = 1}^{\infty}\nu_{gm} + \sum_{d=1}^Dw_d\sigma_d^2 \vert\mathcal{I}\vert, \\
\label{eq:univar_variation_decomp}
\text{and} \quad \int_{\mathcal{I}}\mathrm{Var}\big(y_{i}^{(d)}(t)\big) dt = \sum_{g}\sum_{m = 1}^{\infty}\nu_{gm}\vert\vert\psi_{gm}^{(d)}\vert\vert^2 + \sigma_d^2 \vert\mathcal{I}\vert
\end{gather}
with $\vert \mathcal{I}\vert$ the length of the interval $\mathcal{I}$ (here equal to one) and $\vert\vert\cdot\vert\vert$ the $L^2$ norm. Multivariate variance decomposition \eqref{eq:total_variation_decomp} uses the (weighted) sum of total variation in the data across dimensions. We select the \glspl{fpc} with highest associated eigenvalues $\nu_{gm}$ over all random processes $g$ until their sum reaches a prespecified proportion (e.g.\ 0.95) of the total variation, thus approximating the infinite sums in \eqref{eq:total_variation_decomp} with $M_g$ summands. For the approach based on the univariate variance \eqref{eq:univar_variation_decomp}, we require $M_g$ to be the smallest truncation order for which at least a prespecified proportion of variance is explained on every dimension $d$. This second choice of $M_g$ might be preferable in situations where the variation is considerably different (in amount or structure) across dimensions, whereas the first approach gives a more parsimonious representation of the random effects. Note that both approaches can lead to a simplification of the \gls{mfamm} if $M_g = 0$ is chosen for some $g$. The simulation results of section \ref{sec:simulation} suggest that increasing the number of \glspl{fpc} improves model accuracy which is why sensitivity analyses with regard to the truncation order are recommended.
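The multivariate cut-off based on \eqref{eq:total_variation_decomp} can be summarized in a few lines of base \texttt{R}, as in the following sketch; the eigenvalue lists, error variances, and weights are illustrative toy values standing in for the estimates from the previous steps.
\begin{verbatim}
# Minimal sketch of step 1 (v): pool all estimated eigenvalues across processes g
# and keep the FPCs with the largest eigenvalues until 95% of the total variation
# (including the measurement error contribution, |I| = 1) is reached.
nu_list <- list(B = c(2.0, 0.8, 0.3), C = c(1.5, 0.6),
                E = c(1.0, 0.4, 0.2, 0.1))                 # illustrative eigenvalues
sigma2  <- c(0.05, 0.07); w <- c(1, 1)                     # illustrative variances, weights
nu_all  <- data.frame(g = rep(names(nu_list), lengths(nu_list)), nu = unlist(nu_list))
nu_all  <- nu_all[order(nu_all$nu, decreasing = TRUE), ]
total   <- sum(nu_all$nu) + sum(w * sigma2)
m_star  <- which(cumsum(nu_all$nu) / total >= 0.95)[1]     # first index reaching 95%
M_g     <- table(nu_all$g[seq_len(m_star)])                # truncation order per process
\end{verbatim}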
\subsection{Step 2: Estimation of the multiFAMM}
\label{subsec:FAMM}
In the following, we discuss estimating the \gls{mfamm} given the estimated multivariate \glspl{fpc}. We base the proposed model on the general \gls{famm} framework of \citet{scheipl2015}, which models functional responses using basis representations. To make the extension of the \gls{famm} framework to multivariate functional data more apparent, the multivariate response vectors and the respective model matrices are stacked over dimensions, so that every block has the structure of a univariate \gls{famm} over all observations $i$. This gives concatenated basis functions with discontinuities between the dimensions. The fixed effects are modelled analogously to the univariate case by interacting all covariate effects with a dimension indicator. The random effects are based on the parsimonious, concatenated multivariate \gls{fpc} basis.
\subsubsection*{Matrix Representation}
For notational simplicity we assume that the functions are evaluated on a fixed grid of time points $\boldsymbol{t} = (\boldsymbol{t}^{(1)}, ..., \boldsymbol{t}^{(D)})^{\top}$ with $\boldsymbol{t}^{(d)} = (\boldsymbol{t}_{1}^{(d)}, ..., \boldsymbol{t}_{N}^{(d)})$ and identical $\boldsymbol{t}_i^{(d)} \equiv (t_1,..., t_T)$ over all $N$ individuals and $D$ dimensions. However, our framework allows for sparse functional data using different grids per dimension and per observed function as in the two applications (Section \ref{sec:application}). Correspondingly, $\boldsymbol{y} = (\boldsymbol{y}^{(1)\top}, ..., \boldsymbol{y}^{(D)\top})^{\top}$ is the $DNT$-vector of stacked evaluation points with $\boldsymbol{y}^{(d)} = (\boldsymbol{y}_1^{(d)\top}, ..., \boldsymbol{y}_N^{(d)\top})^{\top}$ and $\boldsymbol{y}_i^{(d)} = (y_{i1}^{(d)},..., y_{iT}^{(d)})^{\top}$. Model \eqref{eq:multivariateFLMM} on this grid can be written as
\begin{align}
\label{eq:multiFAMM}
\bm{y} = \bm{\Phi\theta} + \bm{\Psi\rho} + \bm{\epsilon}
\end{align}
with $\bm{\Phi}, \bm{\Psi}$ the model matrices for the fixed and random effects, respectively, $\bm{\theta},\bm{\rho}$ the vectors of coefficients and random effect scores to be estimated, and $\bm{\epsilon} = (\bm{\epsilon}^{(1)\top}, ..., \bm{\epsilon}^{(D)\top})^{\top}$, $\bm{\epsilon}^{(d)} = (\epsilon_{11}^{(d)}, ..., \epsilon_{1T}^{(d)}, ..., \epsilon_{NT}^{(d)})^{\top}$ the vector of residuals. We have $\bm{\epsilon} \sim N(\bm{0}, \bm{\Sigma})$ with $\bm{\Sigma} = \operatorname{diag}(\sigma_1^2,..., \sigma_D^2)\otimes \bm{I}_{NT}$, the Kronecker product denoted by $\otimes$, and the $(NT\times NT)$ identity matrix $\bm{I}_{NT}$.
We estimate $\bm{\theta}$ and $\bm{\rho}$ by minimizing the \gls{pls} criterion
\begin{align}
\label{eq:pls}
(\bm{y} - \bm{\Phi\theta} - \bm{\Psi\rho})^{\top}\bm{\Sigma}^{-1}(\bm{y} - \bm{\Phi\theta} - \bm{\Psi\rho}) + \sum_{l=1}^{p}\bm{\theta}_{l}^{\top}\bm{P}_{l}(\bm{\lambda}_{xl}, \bm{\lambda}_{tl})\bm{\theta}_{l} + \sum_{g}\lambda_{g}\bm{\rho}_{g}^{\top}\bm{P}_{g}\bm{\rho}_{g}
\end{align}
using appropriate penalty matrices $\bm{P}_{l}(\bm\lambda_{xl}, \bm\lambda_{tl})$ and $\bm{P}_{g}$ for the fixed effects and random effects, respectively, and smoothing parameters $\bm{\lambda}_{xl} = \left(\lambda_{xl}^{(1)}, ..., \lambda_{xl}^{(D)}\right),\bm{\lambda}_{tl} = \left(\lambda_{tl}^{(1)}, ..., \lambda_{tl}^{(D)}\right)$, and $\lambda_g$. The model and penalty matrices as well as the parameter vectors of \eqref{eq:multiFAMM} and \eqref{eq:pls} are discussed in detail below.
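For fixed smoothing parameters, \eqref{eq:pls} is a penalized (generalized ridge) least squares problem with a closed-form solution in $(\bm{\theta}, \bm{\rho})$, as the following base \texttt{R} sketch with small toy matrices illustrates; in practice the smoothing parameters are estimated and the fit is carried out by the \gls{reml} machinery of \texttt{mgcv} rather than by a direct solve.
\begin{verbatim}
# Minimal sketch: closed-form penalized least squares solution for fixed
# smoothing parameters, with small random matrices standing in for the design
# and penalty matrices described below.
set.seed(2)
n <- 60
Phi <- cbind(1, rnorm(n))                     # toy fixed effects design
Psi <- matrix(rnorm(n * 4), n, 4)             # toy random effects design
X   <- cbind(Phi, Psi)                        # combined design matrix
P   <- diag(c(0, 0, rep(2, 4)))               # penalty: fixed part here unpenalized
Sigma_inv <- diag(n)                          # toy homoscedastic error precision
y   <- X %*% rnorm(ncol(X)) + rnorm(n, sd = 0.2)
coefs <- solve(t(X) %*% Sigma_inv %*% X + P,
               t(X) %*% Sigma_inv %*% y)      # stacked (theta_hat, rho_hat)
\end{verbatim}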
\subsubsection*{Modeling of Fixed Effects}
The block diagonal matrix $\bm{\Phi} = \operatorname{diag}\left(\bm{\Phi}^{(1)}, ..., \bm{\Phi}^{(D)}\right)$ models the fixed effects separately on each dimension as in a \gls{famm} \citep{scheipl2015}. The $(DNT \times b)$ matrix $\bm{\Phi}$ consists of the design matrices $\bm{\Phi}^{(d)} = (\bm{\Phi}_1^{(d)}\,|\,...\,|\,\bm{\Phi}_p^{(d)})$ that are constructed for the partial predictors $f_{l}^{(d)}(\bm{x}, \bm{t}^{(d)}), l = 1, ..., p$, which correspond to the $NT$-vectors of evaluations of the effect functions $f_l^{(d)}$. The vectors of scalar covariates $\bm{x}_i$ are repeated $T$ times to form the matrix of covariate information $\bm{x} = (\bm{x}_1,..., \bm{x}_1,..., \bm{x}_N)^{\top}$. We use the basis representations
\begin{align*}
f_l^{(d)}(\bm{x}, \bm{t}^{(d)}) \approx \bm{\Phi}_l^{(d)}\bm{\theta}_l^{(d)} = (\bm{\Phi}_{xl}^{(d)} \odot \bm{\Phi}_{tl}^{(d)})\bm{\theta}_l^{(d)} ,
\end{align*}
where $\bm{A}\odot\bm{B}$ denotes the row tensor product $(\bm{A}\otimes\bm{1}_b^{\top})\cdot(\bm{1}_a^{\top}\otimes\bm{B})$ of the $(h\times a)$ matrix $\bm{A}$ and the $(h\times b)$ matrix $\bm{B}$ with element-wise multiplication $\cdot$ and $\bm{1}_c$ the $c$-vector of ones. This modeling approach combines the $(NT \times b_{xl}^{(d)})$ basis matrix $\bm{\Phi}_{xl}^{(d)}$ with the $(NT \times b_{tl}^{(d)})$ basis matrix $\bm{\Phi}_{tl}^{(d)}$. These matrices contain the evaluations of suitable marginal bases in $\bm{x}$ and $\bm{t}^{(d)}$, respectively. For a linear effect, for example, the basis matrix $\bm{\Phi}_{xl}^{(d)}$ is specified as the familiar linear model design matrix $\bm{x}$ for the linear effect $f_l^{(d)}(\bm{x}, \bm{t}^{(d)}) = \bm{x}\bm{\beta}_l^{(d)}(\bm{t}^{(d)})$ with coefficient function $\bm{\beta}_l^{(d)}(\bm{t}^{(d)})$. For a nonlinear effect $f_l^{(d)}(\bm{x}, \bm{t}^{(d)})= g_l^{(d)}(\bm{x}, \bm{t}^{(d)})$, the basis matrix $\bm{\Phi}_{xl}^{(d)}$ contains an (e.g. B-spline) basis representation analogously to a scalar additive model. For the functional intercept, $\bm{\Phi}_{xl}^{(d)}$ is a vector of ones, and we generally use a spline basis for $\bm{\Phi}_{tl}^{(d)}$. For a complete list of possible effect specifications with examples, we refer to \citet{scheipl2015}. The tensor product basis is weighted by the $b_{xl}^{(d)}b_{tl}^{(d)}$ unknown basis coefficients in $\bm{\theta}_l^{(d)}$. Stacking the vectors $\bm{\theta}_l^{(d)}$ gives $\bm{\theta}^{(d)} = (\bm{\theta}_1^{(d)\top}, ..., \bm{\theta}_p^{(d)\top})^{\top}$ and finally the $b$-vector $\bm{\theta} = (\bm{\theta}^{(1)\top},..., \bm{\theta}^{(D)\top})^{\top}$ with $b = \sum_{d}\sum_{l}b_{xl}^{(d)}b_{tl}^{(d)}$.
Choosing the number of basis functions is a well known challenge in the estimation of nonlinear or functional effects. We introduce regularization by a corresponding quadratic penalty term in \eqref{eq:pls}. Let $\bm{\theta}_l$ contain the coefficients corresponding to the partial predictor $l$ and order it by dimensions. The penalty $\bm{P}_{l}(\bm{\lambda}_{xl}, \bm{\lambda}_{tl})$ is then constructed from the penalty on the marginal basis for the covariate effect, $\bm{P}_{xl}^{(d)}$, and the penalty on the marginal basis over the functional index, $\bm{P}_{tl}^{(d)}$. Specifically, $\bm{P}_{l}(\bm{\lambda}_{xl}, \bm{\lambda}_{tl})$ is a block-diagonal matrix with blocks for each $d$ corresponding to the Kronecker sums of the marginal penalty matrices $\lambda_{xl}^{(d)}\bm{P}_{xl}^{(d)}\otimes \bm{I}_{b_{tl}^{(d)}} + \lambda_{tl}^{(d)}\bm{I}_{b_{xl}^{(d)}} \otimes \bm{P}_{tl}^{(d)}$ \citep{wood2017}. A standard choice for these marginal penalty matrices given a B-splines basis representation are second or third order difference penalties, thus approximately penalizing squared second or third derivatives of the respective functions \citep{eilers1996}. For unpenalized effects such as a linear effect of a scalar covariate, the corresponding $\bm{P}_{xl}^{(d)}$ is simply a matrix of zeroes.
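The row tensor product and the Kronecker-sum penalty described above can be illustrated with the following base \texttt{R} sketch; the basis matrices, difference order, and smoothing parameters are toy values.
\begin{verbatim}
# Minimal sketch: row tensor product of two marginal basis matrices and the
# Kronecker-sum penalty block for one partial predictor on one dimension,
# using difference penalties as marginal penalties.
row_tensor <- function(A, B)   # A: (h x a), B: (h x b)  ->  (h x ab)
  kronecker(A, matrix(1, 1, ncol(B))) * kronecker(matrix(1, 1, ncol(A)), B)
diff_pen <- function(b, ord = 1) crossprod(diff(diag(b), differences = ord))
pen_block <- function(lambda_x, lambda_t, b_x, b_t)
  lambda_x * kronecker(diff_pen(b_x), diag(b_t)) +
  lambda_t * kronecker(diag(b_x), diff_pen(b_t))
# toy usage: h = 10 rows, marginal bases with 3 (covariate) and 4 (time) columns
A <- matrix(rnorm(30), 10, 3); B <- matrix(rnorm(40), 10, 4)
dim(row_tensor(A, B))        # 10 x 12
dim(pen_block(1, 1, 3, 4))   # 12 x 12
\end{verbatim}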
\subsubsection*{Modeling of Random Effects}
We represent the $DNT$-vectors $\bm{U}_j(\bm{t}) = (\bm{U}_j(\bm{t}^{(1)})^{\top}, ..., \bm{U}_j(\bm{t}^{(D)})^{\top})^{\top}$, $\bm{E}(\bm{t}) = (\bm{E}(\bm{t}^{(1)})^{\top}, ..., \bm{E}(\bm{t}^{(D)})^{\top})^{\top}$ with $\bm{U}_j(\bm{t}^{(d)})$, $\bm{E}(\bm{t}^{(d)})$ containing the evaluations of the univariate random effects for the corresponding groups and time points using the basis approximations
\begin{align*}
\bm{U}_j(\bm{t}) \approx \bm{\Psi}_{U_j} \bm{\rho}_{U_j} = (\bm{\delta}_{U_j} \odot \tilde{\bm{\Psi}}_{U_j}) \bm{\rho}_{U_j}, \quad
\bm{E}(\bm{t}) \approx \bm{\Psi}_E \bm{\rho}_{E} = (\bm{\delta}_E \odot \tilde{\bm{\Psi}}_E) \bm{\rho}_{E}.
\end{align*}
The $v$th column in the $(DNT \times V_g), g \in \{U_1,..., U_q, E\}$ indicator matrix $\bm{\delta}_g$ indicates whether a given row is from the $v$th group of the corresponding grouping layer. Thus, the rows of the indicator matrix $\bm{\delta}_g$ contain repetitions of the group indicators $\bm{z}_{ij}^{\top}$ and $\bm{e}_{i}^{\top}$ in model \eqref{eq:multivariateFLMM}. For the smooth residual, $\bm{\delta}_E$ simplifies to $\bm{1}_D\otimes(\bm{I}_N \otimes \bm{1}_T)$. The $(DNT \times M_g)$ matrix $\tilde{\bm{\Psi}}_g = (\tilde{\bm{\Psi}}_g^{(1)\top}| ... |\tilde{\bm{\Psi}}_g^{(D)\top})^{\top}$ comprises the evaluations of the $M_g$ multivariate eigenfunctions $\psi_{gm}^{(d)}(t)$ on dimensions $d = 1,...,D$ for the $NT$ time points contained in the $(NT\times M_g)$ matrix $\tilde{\bm{\Psi}}_g^{(d)}$. The $M_gV_g$ vector $\bm{\rho}_g = (\bm{\rho}_{g1}^{\top},...,\bm{\rho}_{gV_g}^{\top})^{\top}$ with $\bm{\rho}_{gv} = (\rho_{gv1},...,\rho_{gvM_g})^{\top}$ stacks all the unknown random scores for the functional random effect $g$. The $(DNT \times \sum_g M_gV_g)$ model matrix $\bm{\Psi} = (\bm{\Psi}_{U_1} | ... |\bm{\Psi}_{U_q} | \bm{\Psi}_E)$ then combines all random effect design matrices. Stacking the vectors of random scores in a $\sum_g M_gV_g$ vector $\bm{\rho}= (\bm{\rho}_{U_1}^{\top},..., \bm{\rho}_{U_q}^{\top}, \bm{\rho}_{E}^{\top})^{\top}$ lets us represent all functional random intercepts in the model via $\bm{\Psi}\bm{\rho}$.
For a given functional random effect, the penalty takes the form $\bm{\rho}_{g}^{\top}\bm{P}_{g}\bm{\rho}_{g} = \bm{\rho}_g^{\top} (\bm{I}_{V_g}\otimes \tilde{\bm{P}}_g)\bm{\rho}_g$, where $\bm{I}_{V_g}$ corresponds to the assumed independence between the $V_g$ different groups. The diagonal matrix $\tilde{\bm{P}}_g =\operatorname{diag}(\nu_{g1},..., \nu_{gM_g})^{-1}$ contains the (estimated) eigenvalues $\nu_{gm}$ of the associated multivariate \glspl{fpc}. This quadratic penalty is mathematically equivalent to a normal distribution assumption on the scores $\bm{\rho}_{gv}$ with mean zero and covariance matrix $\tilde{\bm{P}}_g^{-1}$, as implied by the \gls{kl} theorem for Gaussian random processes. Note that the smoothing parameter $\lambda_g$ allows for additional scaling of the covariance of the corresponding random process.
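Analogously, the design matrix and penalty for a single functional random effect can be illustrated as follows in base \texttt{R}; all inputs are toy values standing in for the group indicators, eigenfunction evaluations, and eigenvalues described above.
\begin{verbatim}
# Minimal sketch: design matrix and quadratic penalty for one functional random
# effect g, with delta_g indicating group membership for each row of the stacked
# data and Psi_tilde_g holding the M_g eigenfunction evaluations for the same rows.
V_g <- 3; M_g <- 2; rows <- 12
delta_g     <- diag(V_g)[rep(1:V_g, each = rows / V_g), ]   # group indicator matrix
Psi_tilde_g <- matrix(rnorm(rows * M_g), rows, M_g)         # eigenfunction evaluations
Psi_g <- kronecker(delta_g, matrix(1, 1, M_g)) *
         kronecker(matrix(1, 1, V_g), Psi_tilde_g)          # row tensor product
nu_g  <- c(1.2, 0.4)                                        # estimated eigenvalues
P_g   <- kronecker(diag(V_g), diag(1 / nu_g))               # penalty I_{V_g} (x) diag(nu_g)^{-1}
\end{verbatim}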
\subsubsection*{Estimation}
We estimate the unknown smoothing parameters in $\bm{\lambda}_{xl}, \bm{\lambda}_{tl}$, and $\lambda_g$ using fast \gls{reml}-estimation \citep{wood2017}. The standard identifiability constraints of \glspl{famm} are used \citep{scheipl2015}. In particular, in addition to the constraints for the fixed effects, the multivariate random intercepts are subject to a sum-to-zero constraint over all evaluation points as given by e.g.\ \citet{refundpackage}.
We propose a weighted regression approach to handle the heteroscedasticity assumption contained in $\bm{\Sigma}$. We weight each observation proportionally to the inverse of the estimated univariate measurement error variances $\hat{\sigma}_d^2$ from the estimation of the univariate covariances \eqref{eq:nested_random_effects_cov_estimation}. Alternatively, updated measurement error variances can be obtained from fitting separate univariate \glspl{famm} on the dimensions using the univariate components of the multivariate \glspl{fpc} basis. In practice, we find that the less computationally intensive former option gives reasonable results.
As our proposed model is part of the \gls{famm} framework, inference for the \gls{mfamm} is readily available based on inference for scalar additive mixed models \citep{wood2017}. Note, however, that these inferential statements incorporate neither the uncertainty due to the estimated multivariate eigenfunction bases nor that due to the chosen smoothing parameters. Among other quantities, the estimation process readily provides standard errors for the construction of point-wise univariate \glspl{cb}.
\subsection{Implementation}
We provide an implementation of the estimation of the proposed \gls{mfamm} in the \texttt{multifamm} \texttt{R}-package \citep{multifammpackage}. It is possible to include up to two functional random intercepts in $\bm{U}(t)$, which can have a nested or crossed structure, in addition to the curve-specific random intercept $\bm{E}_i(t)$. While including e.g.\ functional covariates is conceptually straightforward \citep[see][]{scheipl2015}, our implementation is restricted to scalar covariates and interactions thereof. We provide different alternatives for specifying the multivariate scalar product, the multivariate cut-off criterion, and the covariance matrix of the white noise error term. Note that the estimated univariate error variances have been proposed as weights for two separate and independent modeling decisions: as weights in the scalar product of the \gls{mfpca} and as regression weights under heteroscedasticity across dimensions.
\section{Applications}
\label{sec:application}
We illustrate the proposed \gls{mfamm} for two different data applications corresponding to intrinsically multivariate and multimodal functional data. The presentation focuses on the first application with a detailed description of the multimodal data application in Appendix \ref{APPsec:ca_data}. We provide the data and the code to produce all analyses in the GitHub repository \textit{\href{https://github.com/alexvolkmann/multifammPaper}{multifammPaper}}.
\subsection{Snooker Training Data}
\label{subsec:snookerTraining}
\subsubsection*{Data Set and Preprocessing}
In a study by \citet{snooker2014}, 25 recreational snooker players split into two groups, one of which was instructed to follow a self-administered training schedule over the following six weeks, consisting of exercises aimed at improving snooker-specific muscular coordination. The second was a control group. Before and after the training period, both groups were recorded on a high-speed digital camera under similar conditions to investigate the effects of the training on their snooker shot of maximal force. In each of the two recording sessions, six successful shots per participant were videotaped. The recordings were then used to manually locate points of interest (a participant's shoulder, elbow, and hand) and track them on a two-dimensional grid over the course of the video. This yields a six-dimensional functional observation per snooker shot $\bm{y}^{*} = (y^{*(\text{elbow.x})}, ..., y^{*(\text{shoulder.y})}): \mathcal{I} = [0, 1] \to \mathbb{R}^{6}$, i.e.\ a two-dimensional movement trajectory for each point of interest (see Figure \ref{fig:snook_obs}).
\begin{figure}
\caption{Observed two-dimensional movement trajectories of a player's elbow, hand, and shoulder during the recorded snooker shots.}
\label{fig:snook_obs}
\end{figure}
In their starting position (hand centred at the origin), the snooker players are positioned centrally in front of the snooker table aiming at the cue ball. From their starting position, the players draw back the cue, then accelerate it forwards and hit the cue ball shortly after their hands enter the positive range of the horizontal $x$ axis. After the impulse onto the cue ball, the hand movement continues until it is stopped at a player's chest. \citet{snooker2014} identify two underlying techniques that a player can apply: dynamic and fixed elbow. With a dynamic elbow, the cue can be moved in an almost straight line (piston stroke) whereas additionally fixing the elbow results in a pendular motion (pendulum stroke). In both cases, the shoulder serves as a fixed point and should be positioned close to the snooker table.
We adjust the data for differences in body height and relative speed \citep{steyer2020} and apply a coarsening method to reduce the number of redundant data points, thereby lowering computational demands of the analysis. Appendix \ref{app_sec:snooker_analysis} provides a detailed description of the data preprocessing. As some recordings and evaluations of bivariate trajectories are missing, the final data set contains 295 functional observations with a total of 56,910 evaluation points. These multivariate functional data are irregular and sparse, with a median of 30 evaluation points per functional observation (minimum 8, maximum 80) for each of the six dimensions.
\subsubsection*{Model Specification}
We estimate the following model
\begin{align}
\label{eq:snooker_model}
\bm{y}_{ijht}= \bm{\mu}(\bm{x}_{ij}, t) + \bm{B}_i(t) + \bm{C}_{ij}(t) + \bm{E}_{ijh}(t) + \bm{\epsilon}_{ijht},
\end{align}
with $i = 1,..., 25$ the index for the snooker player, $j = 1, 2$ the index for the session, $h = 1,..., H_{ij}$ the index for the typically six snooker shot repetitions in a session, and $t \in [0,1]$ relative time. Correspondingly, $\bm{B}_i(t)$ is a subject-specific random intercept, $\bm{C}_{ij}(t)$ is a nested subject-and-session-specific random intercept, and $\bm{E}_{ijh}(t)$ is the shot-specific random intercept (smooth residual). The nested random effect $\bm{C}_{ij}(t)$ is supposed to capture the variation within players between sessions (e.g.\ differences due to players having a good or bad day). Different positioning of participants with respect to the recording equipment or the snooker table as well as shot to shot variation are captured by the smooth residual $\bm{E}_{ijh}(t)$. The white noise measurement error $\bm{\epsilon}_{ijht}$ is assumed to follow a zero-mean multivariate normal distribution with covariance matrix $\sigma^2\bm{I}_6$, as all six dimensions are measured with the same set-up. The additive predictor is defined as
\begin{align*}
\bm{\mu}(\bm{x}_{ij},t)
&= \bm{f}_0(t) + \texttt{skill}_{i}\cdot \bm{f}_1(t) +\texttt{group}_{i} \cdot \bm{f}_2(t) + \texttt{session}_{j}\cdot \bm{f}_3(t) \\ & \quad + \texttt{group}_{i}\cdot \texttt{session}_{j}\cdot\bm{f}_4(t).
\end{align*}
The dummy covariates $\texttt{skill}_i$ and $\texttt{group}_{i}$ indicate whether player $i$ is an advanced snooker player and belongs to the treatment group (i.e.\ receives the training programme), respectively. Note that the snooker players self-select into training and control group to improve compliance with the training programme, which is why we include a group effect in the model. The dummy covariate $\texttt{session}_{j}$ indicates whether session $j$ is recorded after the training period. The effect function $\bm{f}_4(t)$ can thus be interpreted as the treatment effect of the training programme.
Cubic P-splines with first order difference penalty, penalizing deviations from constant functions over time, with 8 basis functions are used for all effect functions in the preliminary mean estimation as well as in the final \gls{mfamm}. For the estimation of the auto-covariances of the random processes, we use cubic P-splines with first order difference penalty on five marginal basis functions. We use an unweighted scalar product \eqref{eq:weighted_scalar_product} for the \gls{mfpca} to give equal weight to all spatial dimensions, as we can assume that the measurement error mechanism is similar across dimensions. Additionally, we find that hand, elbow, and shoulder contribute roughly the same amount of variation to the data, cf.\ Table \ref{APPENDIXtab:snooker_varcontr} in Appendix \ref{app_subsec:snooker_analysis}, where we also discuss potential weighting schemes for the \gls{mfpca}. The multivariate truncation order is chosen such that 95\% of the (unweighted) sum of variation \eqref{eq:total_variation_decomp} is explained.
\subsubsection*{Results}
The \gls{mfpca} gives sets of five (for $\bm{C}$ and $\bm{E}$) and six (for $\bm{B}$) multivariate \glspl{fpc} that explain 95\% of the total variation. The estimated eigenvalues allow to quantify their relative importance. Approximately 41\% of the total variation (conditional on covariates) can be attributed to the nested subject-and-session-specific random intercept $\bm{C}_{ij}(t)$, 33\% to the subject-specific random intercept $\bm{B}_i(t)$, 14\% to the shot-specific $\bm{E}_{ijh}(t)$, and 7\% to white noise. This suggests that day to day variation within a snooker player is larger than the variation between snooker players. Note that these proportions are based on estimation step 1 (see Section \ref{subsec:EigenfunctionEstimation}).
\begin{figure}
\caption{\textit{Left:} Overall mean function with a suitable multiple of the first multivariate \gls{fpc} of the subject-and-session-specific random effect $\bm{C}$ added ($+$) and subtracted ($-$). \textit{Right:} Estimated mean movement trajectories for advanced snooker players (red) and the reference group (blue), with the corresponding univariate effects and pointwise 95\% confidence intervals in the marginal panels.}
\label{fig:eval_snooker}
\end{figure}
The left plot of Figure \ref{fig:eval_snooker} displays the first \gls{fpc} for $\bm{C}$, which explains about $28\%$ of total variation. A suitable multiple of the \glspl{fpc} is added ($+$) to and subtracted ($-$) from the overall mean function (black solid line, all covariate values set to $0.5$). We find that the dominant mode of the random subject-and-session specific effect influences the relative positioning of a player's elbow, shoulder, and hand, thus suggesting a strong dependence between the dimensions. \citet{snooker2014} argue from a theoretical viewpoint that the ideal starting position should place elbow and hand in a line perpendicular to the plane of the snooker table (corresponding to the x axis). The most prominent mode of variation captures deviations from this ideal starting position found in the overall mean. The next most important \gls{fpc} $\bm{\psi}_{B1}$ of the subject-specific random effect, which explains about $15\%$ of total variation, represents a subject's tendency towards the piston or pendulum stroke (see Appendix Figure \ref{APPENDIXfig:snooker_fpc_B}). This additional insight into the underlying structure of the variance components might be helpful for e.g.\ developing personalized training programmes.
The central plot on the right of Figure \ref{fig:eval_snooker} compares the estimated mean movement trajectory for advanced snooker players (red) to that in the reference group (blue). It suggests that more experienced players tend towards the dynamic elbow technique, generating a hand trajectory resembling a straight line (piston stroke). Uncertainties in the trajectory could be represented by pointwise ellipses, but inference is more straightforward to obtain from the univariate effect functions. The marginal plots display the estimated univariate effects with pointwise 95\% confidence intervals. Even though we find only little statistical evidence for increased movement of the elbow (horizontal-left and vertical-top marginal panels), the hand and shoulder movements (horizontal centre and right, vertical centre and bottom) strongly suggest that the skill level indeed influences the mean movement trajectory of a snooker player. Further results indicate that the mean hand trajectories might slightly differ between treatment and control group at baseline as well as between sessions ($\bm{f}_2(t)$ and $\bm{f}_3(t)$, see Appendix Figure \ref{APPENDIXfig:snooker_cov_023}). The estimated treatment effect $\bm{f}_4(t)$ (Appendix Figure \ref{APPENDIXfig:snooker_cov_1_4}), however, suggests that the training programme did not change the participants' mean movement trajectories substantially. Appendix \ref{app_subsec:snooker_analysis} contains a detailed discussion of all model terms as well as some model diagnostics and sensitivity analyses.
\subsection{Consonant Assimilation Data}
\subsubsection*{Data Set and Model Specification}
\citet{pouplier2016} study the assimilation of the German /s/ and /sh/ sounds such as the final consonant sounds in ``Kürbis'' (English example: ``haggis'') and ``Gemisch'' (English example: ``dish''), respectively. The research question is how these sounds assimilate in fluent speech when combined across words such as in ``Kürbis-Schale'' or ``Gemisch-Salbe'', denoted as /s\#sh/ and /sh\#s/ with \# the word boundary. The 9 native German speakers in the study repeated a set of 16 selected word combinations five times. Two different types of functional data, i.e.\ \gls{aco} and \gls{epg} data, were recorded for each repetition to capture the acoustic (produced sound) and articulatory (tongue movements) aspects of assimilation over (relative) time $t$ within the consonant combination.
Each functional index varies roughly between $+1$ and $-1$ and measures how similar the articulatory or acoustic pattern is to its reference patterns for the first ($+1$) and second ($-1$) consonant at every observed time point \citep{cederbaum2016}. Without assimilation, the data are thus expected to shift from positive to negative values in a sine-like form (see Figure \ref{fig:speech_obs}). The data set contains 707 bivariate functional observations with differently spaced grids of evaluation points per curve and dimension, with the number of evaluation points ranging from 22 to 59 with a median of 35. Note that the consonant assimilation data are unaligned, as registration of the time domain would mask transition speeds between the consonants, which are an interesting part of assimilation.
\begin{figure}
\caption{Index curves of the consonant assimilation data set for both \gls{aco} and \gls{epg} dimensions.}
\label{fig:speech_obs}
\end{figure}
For comparability, we follow the model specification of \citet{cederbaum2016}, who analyse only the \gls{aco} dimension and ignore the second mode \gls{epg}. Our specified multivariate model is similar to \eqref{eq:snooker_model} with $i = 1,..., 9$ the speaker index, $j = 1, ..., 16$ the word combination index, $h = 1,..., H_{ij}$ the repetition index and $t \in [0,1]$ relative time. Note that the nested effect $\bm{C}_{ij}(t)$ is replaced by the crossed random effect $\bm{C}_{j}(t)$ specific to the word combinations. The additive predictor $\bm{\mu}(\bm{x}_{j}, t)$ now contains eight partial effects: the functional intercept plus main and interaction effects of scalar covariates describing characteristics of the word combination such as the order of the consonants /s/ and /sh/. The white noise measurement error $\bm{\epsilon}_{ijht}$ is assumed to follow a zero-mean bivariate normal distribution with diagonal covariance matrix $\operatorname{diag}(\sigma^2_{\text{ACO}}, \sigma^2_{\text{EPG}})$. The basis and penalty specifications follow the univariate analysis in \citet{cederbaum2016}. Given different sampling mechanisms, we also compare the \gls{mfamm} based on weighted and unweighted scalar products for the \gls{mfpca}.
\subsubsection*{Results}
The multivariate analysis supports the findings of \citet{cederbaum2016} that assimilation is asymmetric (different mean patterns for /s\#sh/ and /sh\#s/). Overall, the estimated fixed effects are similar across dimensions as well as comparable to the univariate analysis. Hence, the multivariate analysis indicates that previous results for the acoustics are consistently found also for the articulation. Compared to univariate analyses, our approach reduces the number of \gls{fpc} basis functions and thus the number of parameters in the analysis. The \gls{mfamm} can improve the model fit and can provide smaller \glspl{cb} for the \gls{aco} dimension compared to the univariate model in \citet{cederbaum2016} due to the strong cross-correlation between the dimensions. We find similar modes of variation for the multivariate and the univariate analysis as well as across dimensions. In particular, the word combination-specific random effect $\bm{C}_{j}(t)$ is dropped from the model as much of the between-word variation is already explained by the included fixed effects. The definition of the scalar product has little effect on the estimated fixed effects but changes the interpretation of the \glspl{fpc}. Appendix \ref{APPsec:ca_data} contains a more in depth description of this application.
\section{Simulations}
\label{sec:simulation}
\subsection{Simulation Set-Up}
We conduct an extensive simulation study to investigate the performance of the \gls{mfamm} depending on different model specifications and data settings (over 20 scenarios total), and to compare it to univariate regression models as proposed by \citet{cederbaum2016}, estimated on each dimension independently. Given the broad scope of analysed model scenarios, we refer the interested reader to Appendix \ref{app_sec:simulation} for a detailed report and restrict the presentation here to the main results.
We mimic our two presented data examples (Section \ref{sec:application}) and simulate new data based on the respective \gls{mfamm}-fit. Each scenario consists of model fits to 500 generated data sets, where we randomly draw the number and location of the evaluation points, the random scores, and the measurement errors according to different data settings. The accuracy of the estimated model components is measured by the \gls{mse} based on the unweighted multivariate norm but otherwise as defined by \citet{cederbaum2016}, see Appendix \ref{app_subsec:sim_description}. The \gls{mse} takes on (unbounded) positive values with smaller values indicating a better fit.
\subsection{Simulation Results}
Figure \ref{fig:sim_eval_multi} compares the \gls{mse} values over selected modeling scenarios based on the consonant assimilation data. We generate a benchmark scenario (dark blue), which imitates the original data without misspecification of any model component. In particular, the number of \glspl{fpc} is fixed to avoid truncation effects. Comparing this scenario to the other blue scenarios illustrates the importance of the number of \glspl{fpc} in the accuracy of the estimation. Choosing the truncation order via the proportion of univariate variance explained (Cut-Off Uni) as in \eqref{eq:univar_variation_decomp} gives models with roughly the same number of \glspl{fpc} (mean $\bm{B}:2.8, \bm{E}:5$) as is used for the data generation ($\bm{B}:3, \bm{E}:5$). The cut-off criterion based on the multivariate variance (Cut-Off Mul) given by \eqref{eq:total_variation_decomp} results in more parsimonious models (mean $\bm{B}:2.15, \bm{E}:4$) and thus considerably higher \gls{mse} values. The increased variation in the \gls{mse} values can also be attributed to variability in the truncation orders (cf. Appendix Figure \ref{fig:app_sim_nfpc_y}), leading to a mixture distribution. Comparing the benchmark scenario to more sparsely observed functional data (ceteris paribus) suggests a lower estimation accuracy for the Sparse Data scenario, especially for the curve-specific random effect $\bm{E}_{ijh}(t)$ and consequently the fitted curves $\bm{y}_{ijh}(t)$, but pooling the information across functions helps the estimation of $\bm{\mu}(\bm{x}_{ij}, t)$ and $\bm{B}_{i}(t)$. In particular, the estimation of the mean $\bm{\mu}(\bm{x}_{ij}, t)$ is quite robust against the increased uncertainty of these three scenarios. Only when the random scores are not centred and decorrelated as in the benchmark scenario do we find an increase in \gls{mse} values for the mean (Uncentred Scores). This corresponds to a departure from the modeling assumptions likely to occur in practice when only a few levels of a random effect are available (here for the subject-specific $\bm{B}_{i}(t)$). The model then has difficulty correctly separating the intercept in $\bm{\mu}(\bm{x}_{ij}, t)$ and the random effects $\bm{B}_{i}(t)$. The empirical (non-zero) mean of the $\bm{B}_{i}(t)$ is then absorbed by the intercept in $\bm{\mu}(\bm{x}_{ij}, t)$, resulting in higher \gls{mse} values for both of these model terms. However, this shift affects neither the overall fit to the data $\bm{y}_{ijh}(t)$ nor the estimation of the other fixed effects (cf. Appendix Figure \ref{fig:app_sim_eff_umse}). Note that the \gls{mse} values of the Sparse Data and Uncentred Scores scenarios are based on slightly different normalizing constants (i.e.\ different true data) and cannot be directly compared except for the mean.
\begin{figure}
\caption{\gls{mse} values for selected model terms across the simulation scenarios based on the consonant assimilation data.}
\label{fig:sim_eval_multi}
\end{figure}
Our simulation study thus suggests that basing the truncation orders on the proportion of explained variation on each dimension \eqref{eq:univar_variation_decomp} gives parsimonious and well fitting models. If interest lies mainly in the estimation of fixed effects, the alternative cut-off criterion based on the total variation in the data \eqref{eq:total_variation_decomp} allows even more parsimonious models at the cost of a less accurate estimation of the random effects and overall model fit. Furthermore, the results presented in Appendix \ref{app_sec:simulation} show that the mean estimation is relatively stable over different model scenarios including misspecification of the measurement error variance structure or of the multivariate scalar product, as well as in scenarios with strong heteroscedasticity across dimensions. In our benchmark scenario, the \glspl{cb} cover the true effect $89-94\%$ of the time but coverage can further decrease with additional uncertainty e.g.\ about the number of \glspl{fpc}. Overall, the covariance structure such as the leading \glspl{fpc} can be recovered well, also for a nested random effect such as in the snooker training application. The comparison to the univariate modeling approach suggests that the \gls{mfamm} can improve the mean estimation but is especially beneficial for the prediction of the random effects while reducing the number of parameters to estimate. In some cases like strong heteroscedasticity, including weights in the multivariate scalar product might further improve the modeling.
\section{Discussion}
\label{sec:discussion}
The proposed multivariate functional regression model is an additive mixed model, which allows modeling flexible covariate effects for sparse or irregular multivariate functional data. It uses \gls{fpc}-based functional random effects to model complex correlations within and between functions and dimensions. An important contribution of our approach is estimating the parsimonious multivariate \gls{fpc} basis from the data. This allows us to account not only for auto-covariances, but also for non-trivial cross-covariances over dimensions, which are difficult to adequately model using alternative approaches such as parametric covariance functions like the Matérn family or penalized splines, which imply a parsimonious covariance only within but not necessarily between functions. As a \gls{famm}-type regression model, our approach provides a wide range of covariate effect types as well as pointwise \glspl{cb}. Our applications show that the \gls{mfamm} can give valuable insight into the multivariate correlation structure of the functions in addition to the mean structure.
An apparent benefit of multivariate modeling is that it allows research questions relating to different dimensions to be answered simultaneously. In addition, using multivariate \glspl{fpc} reduces the number of parameters compared to fitting comparable univariate models, while improving the random effects estimation by incorporating the cross-covariance into the multivariate analysis. The added computational costs are small: For our multimodal application, the multivariate approach prolongs the computation time by only five percent (104 vs.\ 109 minutes on a 64-bit Linux platform).
We find that the average pointwise coverage of the \glspl{cb} can in some cases lie considerably below the nominal value. There are two main reasons for this: First, the \glspl{cb} presented here incorporate neither the uncertainty of the eigenfunction estimation nor that of the smoothing parameter selection. Second, coverage issues can arise in (scalar) mixed models if effect functions are estimated as constant when in truth they are not \citep[e.g.][]{wood2017, Grevencomment}. To resolve these issues, further research on the level of scalar mixed models might be needed. A large body of research covering \gls{cb} estimation for functional data \citep[e.g.][]{goldsmith2013corrected, choi2018, liebl2019fast} suggests that the construction of \glspl{cb} is an interesting and complex problem, also outside of the \gls{famm} framework.
It would be interesting to extend the \gls{mfamm} to more general scenarios of multivariate functional data, such as observations consisting of functions with domains of different dimension, e.g.\ functions over time and images as in \citet{happ2018}. This would require adapting the estimation of the univariate auto-covariances for spatial arguments $t, t'$. Exploiting properties of dense functional data, such as the block structure of design matrices for functions observed on a grid, could help to reduce computational cost in this case. Future research could further generalize the covariance structure of the \gls{mfamm} by allowing for additional covariate effects. In our snooker training application, for example, a treatment effect of the snooker training might show itself in the form of reduced intra-player variance \citep[cf.][]{backenroth2018}. Ideas from distributional regression could be incorporated to jointly model the mean trajectories and covariance structure conditional on covariates.
\appendix
\section{Derivation of Variance Decompositions}
\label{app_sec:vardecomp}
This section derives the variance decompositions \eqref{eq:total_variation_decomp} and \eqref{eq:univar_variation_decomp} from the model equations \eqref{eq:multivariateFLMM} and \eqref{eq:truncated_KL_expansion}. Following the argument of \cite{cederbaum2016}, the variation of the response on each dimension can be decomposed using iterated expectations as
\begin{align*}
\int_{\mathcal{I}}\mathrm{Var}\big(y_{i}^{(d)}(t)\big) dt = & \sum_{j=1}^q\sum_{v=1}^{V_{U_j}} z_{ijv}^2 \int_{\mathcal{I}}\mathrm{Var}\big(U_{jv}^{(d)}(t)\big) dt + \int_{\mathcal{I}}\mathrm{Var}\big(E_{i}^{(d)}(t)\big) dt +\int_{\mathcal{I}}\mathrm{Var}\big(\epsilon_{it}^{(d)}\big) dt \\
= & \sum_{j=1}^{q}\sum_{m = 1}^\infty\nu_{U_jm}\underbrace{\int_{\mathcal{I}}\psi_{U_jm}^{(d)}(t)\psi_{U_jm}^{(d)}(t)dt}_{\textstyle=\vert\vert\psi_{U_jm}^{(d)}\vert\vert^2} \\
& + \sum_{m = 1}^\infty\nu_{Em}\underbrace{\int_{\mathcal{I}}\psi_{Em}^{(d)}(t)\psi_{Em}^{(d)}(t)dt}_{\textstyle=\vert\vert\psi_{Em}^{(d)}\vert\vert^2} + \int_{\mathcal{I}}\sigma_{d}^2dt \\
= & \sum_{j=1}^{q}\sum_{m = 1}^\infty\nu_{U_jm}\vert\vert\psi_{U_jm}^{(d)}\vert\vert^2 + \sum_{m = 1}^\infty\nu_{Em}\vert\vert\psi_{Em}^{(d)}\vert\vert^2 + \sigma_d^2 \vert\mathcal{I}\vert
\end{align*}
assuming for the second equality that each observation belongs to exactly one group in the $j$th grouping layer, i.e.\ that exactly one indicator $z_{ijv} = z_{ijv}^2$, $v = 1, \dots, V_{U_j}$, equals one for each $i$ and $j$, and using that the scores are uncorrelated across eigenfunctions, in particular $\text{Cov}(\rho_{U_jvm}, \rho_{U_jvm'}) = 0$ if $m \neq m'$.
Similarly, the total variance as measured by the sum of the univariate variances can be decomposed as
\begin{align*}
\mathbb{E}\left(\vert\vert\vert \bm{y}_i - \bm{\mu}(\bm{x}_i) \vert\vert\vert^2 \right) = & \sum_{d=1}^{D}w_d\int_{\mathcal{I}}\mathrm{Var}\big(y_{i}^{(d)}(t)\big) dt \\ = & \sum_{j=1}^q \sum_{d=1}^{D}\sum_{v=1}^{V_{U_j}} z_{ijv}^2 w_d\int_{\mathcal{I}}\mathrm{Var}\big(U_{jv}^{(d)}(t)\big) dt + \sum_{d=1}^{D}w_d\int_{\mathcal{I}}\mathrm{Var}\big(E_{i}^{(d)}(t)\big) dt \\
& + \sum_{d=1}^{D}w_d\int_{\mathcal{I}}\mathrm{Var}\big(\epsilon_{it}^{(d)}\big) dt \\
= & \sum_{j=1}^{q}\sum_{m = 1}^\infty\nu_{U_jm}\underbrace{\sum_{d=1}^{D}w_d\int_{\mathcal{I}}\psi_{U_jm}^{(d)}(t)\psi_{U_jm}^{(d)}(t)dt}_{\textstyle=\vert\vert\vert\bm{\psi}_{U_jm}\vert\vert\vert^2 = 1} \\
& + \sum_{m = 1}^\infty\nu_{Em}\underbrace{\sum_{d=1}^{D}w_d\int_{\mathcal{I}}\psi_{Em}^{(d)}(t)\psi_{Em}^{(d)}(t)dt}_{\textstyle=\vert\vert\vert\bm{\psi}_{Em}\vert\vert\vert^2 = 1} + \sum_{d=1}^{D}w_d\int_{\mathcal{I}}\sigma_{d}^2dt \\
= & \sum_{j=1}^{q}\sum_{m = 1}^\infty\nu_{U_jm} + \sum_{m = 1}^\infty\nu_{Em} + \sum_{d=1}^{D}w_d\sigma_d^2 \vert\mathcal{I}\vert.
\end{align*}
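In practice, these decompositions translate directly into the two cut-off criteria used to choose the truncation orders. The following Python sketch is illustrative only, with hypothetical variable names, and pools the eigenvalues over the random components for simplicity (the per-component orders then follow from which eigenvalues are selected); it is not the code used for the analyses. It selects the smallest number of \glspl{fpc} reaching a prespecified level, once for the multivariate criterion \eqref{eq:total_variation_decomp} and once for the dimension-wise criterion \eqref{eq:univar_variation_decomp}.
\begin{verbatim}
import numpy as np

def truncation_total_variation(eigenvalues, error_vars, weights,
                               interval_length=1.0, level=0.95):
    # smallest M such that the M leading eigenvalues explain at least
    # `level` of the total (weighted) variation, cf. the second decomposition
    nu = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]  # leading first
    total = nu.sum() + np.sum(weights * error_vars) * interval_length
    explained = np.cumsum(nu) / total
    return int(np.searchsorted(explained, level) + 1)  # assumes level reachable

def truncation_univariate_variation(eigenvalues, sq_norms, error_vars,
                                    interval_length=1.0, level=0.95):
    # smallest M such that on *every* dimension the M leading eigenfunctions
    # explain at least `level` of the univariate variation, cf. the first
    # decomposition; sq_norms[m, d] holds ||psi_m^(d)||^2, sorted like the
    # eigenvalues in decreasing order
    nu = np.asarray(eigenvalues, dtype=float)[:, None]
    univ_total = (nu * sq_norms).sum(axis=0) + error_vars * interval_length
    explained = np.cumsum(nu * sq_norms, axis=0) / univ_total
    enough = np.all(explained >= level, axis=1)
    return int(np.argmax(enough) + 1)  # assumes the level is reachable
\end{verbatim}
For an unweighted scalar product one would pass a vector of ones as \texttt{weights}; using the estimated error variances as weights corresponds to the weighted variant considered later.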
\FloatBarrier
\section{Additional Results for the Snooker Data}
\label{app_sec:snooker_analysis}
\subsection{Data Description}
In this section, we provide additional information on the study by \cite{snooker2014}. Participants of the snooker training study could volunteer to take part in the training group. The training schedule for this treatment group recommended training two to three times a week, consisting of three exercises that are supposed to improve snooker-specific muscular coordination. Note that these training sessions were unsupervised, so there is no reliable information on whether the participants followed the recommendations.
In order to analyse the snooker shot of maximal force, the participants were asked to place themselves in the typical playing position. Yet no exact measure of the distance between the tip of their cue and the cue ball is available and thus the time of the impulse onto the cue ball cannot be exactly determined from the data. The participants were videotaped and the open source software Kinovea (\url{https://www.kinovea.org}) was used to manually locate points of interest (a participant's shoulder, elbow, and hand) and track them on a two-dimensional grid over the course of the video. Figure \ref{APPENDIXfig:snooker_univ_obs} shows the univariate functions that make up the two-dimensional trajectories of the participants' elbow, hand, and shoulder. The univariate functions for the x-axis of the hand show that for most observed curves, the time of impact might lie around $0.75$ of the relative time of the snooker shot. That is when the hand moves into the positive range on the $x$ dimension. For their shot, the snooker players can either try to fix their elbow or move the elbow dynamically. In order to move the hand in a straight line (piston stroke), the elbow has to move dynamically down and up when the hand is drawn back and accelerated towards the cue ball. The latter part of the movement trajectory (after the cue hits the cue ball) does not impact the snooker shot itself. Note that a long final downwards movement of an elbow trajectory indicates that the hand is not stopped at the chest but is rather pulled through towards the shoulder. This can give insight into a player's stance at the snooker table, e.g.\ if the upper body is close to the table or if it is more erect.
\begin{figure}
\caption{Univariate functions of the snooker trajectories as functions of the standardized time $t$. The value of the function corresponds to the position of the elbow, hand, and shoulder on the $x$ and $y$ axes at the associated time point.}
\label{APPENDIXfig:snooker_univ_obs}
\end{figure}
Note that the univariate functions of Figure \ref{APPENDIXfig:snooker_univ_obs} show different characteristics over the different dimensions, and a first-order difference penalty penalizing deviations from a constant function seems to be a sensible default choice.
\subsection{Data Preprocessing}
In order to account for differences in body height between the participants, we first rescale the observed coordinate locations by the median distance between hand and elbow. As we are interested in the movement trajectory irrespective of phase variation, i.e.\ independent of the exact timing of different parts of the stroke, we apply an elastic registration approach to the data, which aligns the two-dimensional trajectory of the hand to its respective mean trajectory across all players and shots \citep{steyer2020}. We then reparameterize the time for the elbow and shoulder trajectories according to the results of the hand alignment. Other registration approaches are possible and we provide both the original and the reparameterized time of the data. Due to the high frame rate of the high-speed camera and a comparatively coarse grid of the tracking software, the resulting multivariate functional data are dense with redundant information. The following subsection describes the coarsening method in detail. The coarsening reduces the data set from roughly 400,000 to only 56,910 scalar observations. Note that 5 functional observations are lost due to technical problems during the recording. Even bivariate trajectories within one multivariate functional observation can have different numbers of scalar observations due to inconsistencies in the tracking programme.
\subsubsection*{Coarsening Method for the Snooker Data}
\label{app_sec:snooker_analysis:coarse}
Functional response variables such as the snooker trajectories typically exhibit very high autocorrelation. Thus, up to a certain point, removing single curve measurements incurs only very limited information loss while reducing the computational cost of the model fit at least linearly.
We therefore coarsen the snooker trajectories as a preprocessing step, as they were originally sampled ``overly'' densely in that sense. In particular, there are also evaluation points that do not carry additional information as they measure the same location as the evaluation points right before and after (e.g. because the hand does not move noticeably over these three or more time points). The coarsening should a) retain original data points via subsampling to avoid pre-smoothing losses and b) be optimal in the sense that as much information as possible is preserved given the subsample size.
Finding an optimal subsample, with respect to a general criterion and for all possible subsamples, is however typically very computationally demanding in itself. We therefore propose a fast greedy coarsening algorithm recursively discarding single points from the sample. The ultimate aim of the algorithm is to find a subsample polygon (Fig. \ref{APPENDIXfig:coarsen1}) representing the whole data as well as possible.
\begin{figure}
\caption{An example subsample polygon of length four through some curve measurements. The polygon canonically connects points in the subsample according to the order of the indices $t_i$ of the curve measurements $y(t_i)$ in the subsample.}
\label{APPENDIXfig:coarsen1}
\end{figure}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\Delta}{\Delta}
\begin{figure}
\caption{One iteration of the proposed fast greedy coarsening algorithm: Given the current subsample polygon (top), the loss $\Delta^{(k)}(t)$ of removing each interior point of the subsample is computed and the point with the smallest loss is discarded.}
\label{APPENDIXfig:coarsen2}
\end{figure}
One step of the coarsening algorithm described in the following is illustrated in Figure \ref{APPENDIXfig:coarsen2}. Let $\mathcal{T}_0 = \{t_1, \dots, t_m\} \subset \mathcal{I}$ be the set of evaluation points $t_1 < \dots < t_m$ of a curve $y:\mathcal{I} \rightarrow \mathbb{R}^2$.
We select a sequence of subsamples $\mathcal{T}_0 \supset \mathcal{T}_1 \supset \mathcal{T}_2 \supset \dots$ by recursively discarding the point $t$ from $\mathcal{T}_k$ where $y(t)$ is closest to the line segment $[y(t_-), y(t_+)]$ between its adjacent points at $t_- = \max \{s\in \mathcal{T}_k : s < t\}$ and $t_+ = \min \{s\in \mathcal{T}_k : s > t\}$.
Note that this is only well-defined for $t \in \mathring \mathcal{T}_k = \mathcal{T}_k \setminus \{\min \mathcal{T}_k, \max \mathcal{T}_k\}$, in the ``interior'' of $\mathcal{T}_k$, and the first and last point $t_1$ and $t_m$ are kept throughout the algorithm.
In more detail, we proceed in the $k$th step of the algorithm as follows
\begin{enumerate}
\item $\forall t\in \mathring \mathcal{T}_k$ compute the step-to-step quadratic error
\begin{align*}
\Delta^{(k)}(t) = \min_{p\in[y(t_-), y(t_+)]} \|y(t) - p\|^2.
\end{align*}
For computing the quadratic distance $\Delta^{(k)}(t)$, we have to distinguish the case where the projection of $y(t)$ onto the line through $y(t_-)$ and $y(t_+)$ lies between the two points from the cases where it lies beyond either of them. To do so, define $y'(t) = y(t) - y(t_-)$ and $y'_+ = y'(t_+)$, as well as the unit vector $u = \frac{y'_+}{\|y'_+\|}$ pointing from $y(t_-)$ to $y(t_+)$. Then we can compute
\begin{align*}
\Delta^{(k)}(t) = \begin{cases}
\|y'(t)\|^2 & \text{ if } \langle y'(t), u\rangle \leq 0\\
\|y'(t) - \langle y'(t), u\rangle u\|^2 & \text{ if } \|y'_+\| \geq \langle y'(t), u\rangle > 0 \\
\|y'(t) - y'_+\|^2 & \text{ if } \langle y'(t), u\rangle > \|y'_+\|
\end{cases}.
\end{align*}
\item Choose $t^{(k)} \in \operatorname{argmin}_{t\in\mathring \mathcal{T}_k} \Delta^{(k)}(t)$ and set $\mathcal{T}_{k+1} = \mathcal{T}_k \setminus \{t^{(k)}\}$.
\end{enumerate}
This procedure may be repeated until $\mathcal{T}_{k+1}$ has the desired sample size. However, we consider an error-based stopping criterion more convenient in most cases.
We suggest specifying a threshold for the cumulative step-to-step error $S_k = \sum_{\kappa=1}^k \Delta^{(\kappa)}(t^{(\kappa)})$ or a relative version $R_k = S_k / \bar\Delta$ of it, where $\bar \Delta = \frac{1}{m-2} \sum_{t\in\mathring \mathcal{T}_0} \Delta^{(\infty)}(t)$ is the mean quadratic distance to the line segment $[y(t_1), y(t_m)]$ between the only two points left in the ultimate subsample $\mathcal{T}_\infty$.
In our experience, a threshold $R^* = 10^{-4}$ for $R_k$ yields a visually close-to-lossless subsample. For the model, $R^* = 0.003$ was chosen to limit computational demands.
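To make the greedy step concrete, the following Python sketch (a minimal illustration under the notation above, not the preprocessing code itself) computes the point-to-segment distance $\Delta^{(k)}(t)$ via a clamped projection, which is equivalent to the three cases listed above, and removes points until the relative cumulative error $R_k$ would exceed the chosen threshold.
\begin{verbatim}
import numpy as np

def point_segment_sqdist(p, a, b):
    # squared distance of point p to the segment [a, b]; clamping the
    # projection parameter to [0, 1] reproduces the three cases above
    d = b - a
    len2 = float(np.dot(d, d))
    if len2 == 0.0:                       # degenerate segment
        return float(np.dot(p - a, p - a))
    s = min(1.0, max(0.0, float(np.dot(p - a, d)) / len2))
    diff = p - (a + s * d)
    return float(np.dot(diff, diff))

def greedy_coarsen(t, y, rel_threshold=1e-4):
    # t: increasing evaluation points, y: (m, 2) array of curve values y(t_i);
    # returns the indices of the retained evaluation points
    m = len(t)
    if m < 4:
        return list(range(m))
    # normalising constant: mean squared distance of the interior points
    # to the chord [y(t_1), y(t_m)], i.e. the ultimate subsample
    mean_dist = np.mean([point_segment_sqdist(y[i], y[0], y[-1])
                         for i in range(1, m - 1)])
    if mean_dist == 0.0:                  # curve lies on the chord already
        return [0, m - 1]
    keep, cum_error = list(range(m)), 0.0
    while len(keep) > 2:
        losses = [point_segment_sqdist(y[keep[i]], y[keep[i - 1]], y[keep[i + 1]])
                  for i in range(1, len(keep) - 1)]
        i_min = int(np.argmin(losses)) + 1   # position within `keep`
        if (cum_error + losses[i_min - 1]) / mean_dist > rel_threshold:
            break                            # stop before exceeding the threshold
        cum_error += losses[i_min - 1]
        del keep[i_min]
    return keep
\end{verbatim}
For the hand trajectories described next, the threshold $R^* = 0.003$ mentioned above would be passed as \texttt{rel\_threshold}.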
While the \gls{mfamm} would allow for dimension-specific evaluation points of the curves, the warping algorithm applied as part of the preprocessing does not. Thus, we decided to apply the coarsening algorithm to the hand trajectories, which we consider the most informative component with the best signal-to-noise ratio. Observed time points not selected for a hand trajectory are then also dropped for the corresponding trajectories of the shoulder and elbow.
As a result, the coarsened data retain the characteristic of the snooker trajectory data such that for a given time point, we observe the location of (almost) all points of interest (i.e.\ the grid of evaluation points is identical over the dimensions for a given observation).
\subsection{Additional Results of the Analysis}
\label{app_subsec:snooker_analysis}
\subsubsection*{Analysis of the Variance Components}
The \gls{mfamm} estimates a total of 61 univariate \glspl{fpc} (20 each for $B$ and $C$, and 21 for $E$), where each process is represented by three to five univariate \glspl{fpc} on each dimension. Table \ref{APPENDIXtab:snooker_varcontr} shows that the three points of interest (hand, elbow, and shoulder) all show a similar amount of variation. The source of variation, however, differs in interpretation. While the variation in the hand trajectories stems from differing movements, the variation in the shoulder mainly reflects different positioning. Applying higher weights to dimensions associated with the hand, for example, can shift the focus of the \gls{mfpca} to favor movement based variation of the hand trajectories in the analysis. From Table \ref{APPENDIXtab:snooker_varcontr} it is also apparent that movements or positions along the horizontal $x$-axis contribute more to the variation in the data than movements or positions along the vertical $y$-axis. This also suggests that variation in the $x$ direction is the driving factor of the \gls{mfpca}. By assigning higher weights in the scalar product to dimensions associated with the $y$-axis, the analyst can ``distort'' the natural grid of observations to balance out the different variation in the axes. Note that we use an unweighted scalar product in the analysis. Also keep in mind that the variation in Table \ref{APPENDIXtab:snooker_varcontr} is calculated based on estimation step 1.
Based on the cut-off criterion \eqref{eq:total_variation_decomp}, 16 multivariate \glspl{fpc} are chosen to explain $95\%$ of total variation. For a similar amount of variance explained, 23 univariate \glspl{fpc} would be needed, which would give a more complex model that contains redundancies and ignores the multivariate nature of the data. Figures \ref{APPENDIXfig:snooker_fpc_B} and \ref{APPENDIXfig:snooker_fpc_C} show, among other things, the two most prominent modes of variation in the snooker training data. As described in the main part, the \gls{fpc} $\bm{\psi}_{C1}$ seems to represent variation in the starting position (explaining about $27\%$ of total variation). The second leading \gls{fpc} $\bm{\psi}_{B1}$ (explaining about $15\%$ of total variation) shows subject-specific variation related to the two snooker techniques identified by \cite{snooker2014}: The red trajectory ($+$) shows that the elbow moves first down and then up to draw the hand back and accelerate it towards the cue ball. This then vertically contracts the hand trajectory compared to fixing the elbow during the acceleration phase (blue $-$), which allows the hand to swing more freely (pendulum stroke). We also find that players with a personal tendency towards the pendulum stroke seem not to stop their hands at their chest. Note that for $\bm{\psi}_{B1}$ in Figure \ref{APPENDIXfig:snooker_fpc_B}, the red trajectory is overlapping and thus masks the down-up-and-down movement of the elbow.
\begin{table}
\centering
\caption{Total variation of the centred responses per univariate dimension and overall. The variation is calculated as the sum of non-negative univariate eigenvalues and the dimension-specific measurement error variance as given from Step $1$ in the estimation of the \gls{mfamm} as described in Section \ref{subsec:EigenfunctionEstimation}}
\label{APPENDIXtab:snooker_varcontr}
\begin{tabular}{l|cccccc|c}
& elbow.x & elbow.y & hand.x & hand.y & shoulder.x & shoulder.y & Total \\\hline
Variation & 0.012 & 0.004 & 0.015 & 0.001 & 0.014 & 0.008 & 0.055 \\
Proportion & 22\% & 7\% & 28\% & 2\% & 26\% & 14\% & 100\%
\end{tabular}
\end{table}
Figure \ref{APPENDIXfig:snooker_fpc_B} shows the multivariate \glspl{fpc} of the subject-specific functional random intercept included in the \gls{mfamm}. $\bm{\psi}_{B2}(t)$ (explaining about 9\% of the total variation) seems to capture variation in the positioning of the shoulder, size differences of the upper arm, and the final part of the movement trajectories after the cue ball is hit. Figure \ref{APPENDIXfig:snooker_fpc_C} shows the multivariate \glspl{fpc} of the subject-and-session-specific random intercept. The second \gls{fpc} (explaining about $6\%$ of the total variation) shifts the relative positioning of the elbow and shoulder. Figure \ref{APPENDIXfig:snooker_fpc_E} shows the multivariate \glspl{fpc} of the curve-specific random intercept. The leading \gls{fpc} (explaining about $7\%$ of total variation) captures variation in the starting position of elbow and shoulder, and in the final hand movement.
\begin{figure}
\caption{\glspl{fpc} of the subject-specific functional random intercept $\bm{B}_i(t)$ for the snooker training data.}
\label{APPENDIXfig:snooker_fpc_B}
\end{figure}
\begin{figure}
\caption{\glspl{fpc} of the subject-and-session-specific functional random intercept $\bm{C}_{ij}(t)$ for the snooker training data.}
\label{APPENDIXfig:snooker_fpc_C}
\end{figure}
\begin{figure}
\caption{\glspl{fpc} of the curve-specific functional random intercept $\bm{E}_{ijh}(t)$ for the snooker training data.}
\label{APPENDIXfig:snooker_fpc_E}
\end{figure}
\subsubsection*{Analysis of the Estimated Effect Functions}
\begin{figure}
\caption{Estimated covariate effect functions for skill (left) and treatment effect (right). The central plot shows the effect of the coefficient function in parentheses in the title (red) on the two-dimensional trajectories for the reference (without parentheses, blue). The start of each trajectory is marked with an asterisk. The marginal plots show the estimated univariate effect functions (black) with pointwise 95\% confidence intervals (dotted) and the reference (blue dashed).}
\label{APPENDIXfig:snooker_cov_1_4}
\end{figure}
The left panel of Figure \ref{APPENDIXfig:snooker_cov_1_4} shows the estimated covariate effect functions for the covariate \texttt{skill}. In addition to the effect described in the main part, we point out that for skilled players, the shoulder is positioned closer to the table as well as to the body all other things being equal. The movement trajectories in the right panel of Figure \ref{APPENDIXfig:snooker_cov_1_4} are composed of the estimated effect functions of intercept, treatment group, and session (blue trajectories), so that the interaction effect can be separated and interpreted (red trajectories). We do not find strong evidence for differences in the displayed mean trajectories, nor in the univariate effects (marginal plots). This suggests that the training programme did not considerably change the participants' mean movement trajectories.
Figure \ref{APPENDIXfig:snooker_cov_023} shows the estimated intercept (left) and effect functions of the covariates for treatment group (center) and session (right). The intercept (scalar plus functional) gives the mean movement trajectories (dark blue) in the reference group, i.e.\ an unskilled snooker player in the control group with a shot in the first session. In the middle panel, the red trajectories show the estimated effect of the treatment group added to the intercept. The marginal plots around the movement trajectories show the estimated univariate effects and their pointwise 95\% confidence intervals. We find only minor differences between movement trajectories in treatment and control group. The hand trajectories of the treatment group seem to be slightly higher and further along the $x$-axis than the control group, all other things being equal. The right panel compares the mean trajectory for the reference group of players in the first (blue) to the second (red) session. The univariate estimated covariate effects on the $y$-axis seem to indicate a slight shift in the vertical direction of the trajectories. Keep in mind that the trajectories of the hand have been centred to the origin before the analysis.
\begin{figure}
\caption{Estimated functional fixed effects of the intercept (left), covariate treatment group (center), and covariate session (right). The main plots show the effect of the coefficient function in parentheses (red) on the two-dimensional trajectories for the intercept (blue). The start of each trajectory is marked with an asterisk. The marginal plots show the estimated univariate effect functions (black) with pointwise 95\% confidence intervals (dotted) and the reference (grey/blue dashed).}
\label{APPENDIXfig:snooker_cov_023}
\end{figure}
\subsubsection*{Model Diagnostics and Sensitivity Analysis}
We use the \gls{umse} defined in Section \ref{sec:simulation} as a criterion for model fit between the different dimensions. Table \ref{APPENDIXtab:snooker_umse_fit} shows that the model fits comparatively well on all dimensions except for the $x$-axis of the elbow and the $y$-axis of the hand. In order to get an impression of the quality of the model fit, Figure \ref{APPENDIXfig:snooker_diagn_fit} shows selected fitted trajectories in solid red together with the observed trajectories in dashed grey. We choose to present the quantiles from the contribution of each observation to the \gls{mmse}, i.e.\ quantiles from $\vert\vert\vert\bm{\zeta}_i- \hat{\bm{\zeta}}_i\vert\vert\vert^2$ of the multivariate functional trajectory $\bm{\zeta}_i, i = 1,..., N$ and the corresponding estimate $\hat{\bm{\zeta}}_i$ (see Section \ref{sec:simulation}).
The presented fitted trajectories show that the estimates do not always overlap with the observed trajectories, which indicates remaining structure in the model residuals. The top panels of Figure \ref{APPENDIXfig:snooker_resids} plot the scalar residuals over time for each of the dimensions. Especially on the $y$-axis, we can discern patterns in the residuals even though this type of graphic is prone to overplotting. In order to investigate residual autocorrelation, we apply the following ad-hoc method, which approximates the well-known concept of an autocorrelation function in time series analysis. We first use fast symmetric additive covariance smoothing \citep{cederbaum2018} to estimate a smoothed correlation matrix for the residuals. Then we calculate the means of the off-diagonals, which correspond to an approximated mean autocorrelation for a given time lag. Figure \ref{APPENDIXfig:snooker_acf} shows the results based on the residuals for each dimension separately. Overall, we find that the model residuals (red) show clear signs of autocorrelation, in particular up to a time lag of $0.25$. However, compared to the autocorrelation of the original data (solid grey line), we see a considerable reduction, which can be attributed to the functional random effects in the model.
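As a rough sketch of this ad-hoc check (assuming the smoothed residual correlation matrix has already been estimated on an equidistant time grid, e.g.\ with the covariance smoother cited above), the approximated mean autocorrelation at lag $k$ grid steps is simply the mean of the $k$th off-diagonal:
\begin{verbatim}
import numpy as np

def mean_offdiagonal_acf(corr, grid_step):
    # corr: (G, G) smoothed correlation matrix of the residuals on an
    #       equidistant grid of G time points
    # grid_step: grid spacing, so lag k corresponds to time lag k * grid_step
    corr = np.asarray(corr)
    G = corr.shape[0]
    acf = np.array([np.mean(np.diag(corr, k)) for k in range(G)])
    lags = np.arange(G) * grid_step
    return lags, acf
\end{verbatim}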
Table \ref{APPENDIXtab:snooker_pred_var} also underlines the importance of the random effects in modeling the snooker trajectories. We use the predictor variance as an indicator for quantifying the effect of the fixed and random effects on the model fit. Separately on each dimension, we calculate the variance of the partial predictors on the data and give the respective proportions in Table \ref{APPENDIXtab:snooker_pred_var}. We find that, with the exception of the reference mean ($\bm{f}_0(t)$), the partial predictors of the fixed effects contribute generally little to the overall predictor variance (highest proportions for $\bm{f}_1(t)$ with around $5\%$ on most dimensions). Even the reference mean contributes little to the predictor variance on the shoulder and the $x$-axis of the elbow. Consequently, we find that most variance in the partial predictors can be assigned to the random effects. This suggests that the snooker training data might be dominated by random processes with little explanatory power of the available covariates.
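The proportions in Table \ref{APPENDIXtab:snooker_pred_var} amount to the following computation per dimension (a minimal sketch with a hypothetical input structure: a mapping from each model term to the vector of its partial-predictor evaluations at all observed points of that dimension):
\begin{verbatim}
import numpy as np

def predictor_variance_proportions(partial_predictors):
    # partial_predictors: dict mapping a term name (e.g. 'f_0', 'B_i', 'E_ijh')
    # to the vector of its evaluations at all observed (i, t) of one dimension
    variances = {term: np.var(values) for term, values in partial_predictors.items()}
    total = sum(variances.values())
    return {term: v / total for term, v in variances.items()}
\end{verbatim}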
\begin{table}
\centering
\caption{\gls{umse} values for the model fit on the snooker training data.}
\label{APPENDIXtab:snooker_umse_fit}
\begin{tabular}{l|cccccc}
& elbow.x & elbow.y & hand.x & hand.y & shoulder.x & shoulder.y \\\hline
Main Model & 0.172 & 0.028 & 0.088 & 0.370 & 0.025 & 0.039 \\
Sensitivity Analysis & 0.121 & 0.019 & 0.084 & 0.233 & 0.019 & 0.025 \\
\end{tabular}
\end{table}
\begin{figure}
\caption{Fitted (red) and observed (blue) snooker training trajectories for selected observations (grey dashed). The quantiles given in the subtitles correspond to the contribution of each observation to the \gls{mmse}.}
\label{APPENDIXfig:snooker_diagn_fit}
\end{figure}
\begin{table}
\centering
\caption{Proportion of predictor variance for each dimension.}
\label{APPENDIXtab:snooker_pred_var}
\begin{tabular}{l|cccccccc}
& $f_0^{(d)}(t)$ & $f_1^{(d)}(t)$ & $f_2^{(d)}(t)$ & $f_3^{(d)}(t)$ & $f_4^{(d)}(t)$ & $B_i^{(d)}(t)$ & $C_{ij}^{(d)}(t)$ & $E_{ijh}^{(d)}(t)$ \\ \hline
elbow.x & 0.018 & 0.020 & 0.053 & 0.003 & 0.001 & 0.210 & 0.120 & 0.574 \\
elbow.y & 0.381 & 0.048 & 0.006 & 0.002 & 0.000 & 0.144 & 0.261 & 0.157 \\
hand.x & 0.862 & 0.022 & 0.014 & 0.000 & 0.000 & 0.022 & 0.050 & 0.030 \\
hand.y & 0.755 & 0.037 & 0.012 & 0.004 & 0.000 & 0.103 & 0.076 & 0.012 \\
shoulder.x & 0.007 & 0.047 & 0.013 & 0.001 & 0.019 & 0.184 & 0.144 & 0.585 \\
shoulder.y & 0.009 & 0.068 & 0.007 & 0.018 & 0.001 & 0.105 & 0.512 & 0.280 \\
\end{tabular}
\end{table}
\begin{figure}
\caption{Model residuals per dimension over time. The top panels correspond to the model presented in the main analysis, the lower panels to the model from the sensitivity analysis. Five observations have been removed from the lower panels so that the scale of the plots is identical.}
\label{APPENDIXfig:snooker_resids}
\end{figure}
\begin{figure}
\caption{Approximated mean autocorrelation function over the time lag for the observed data (grey), the fit of the model presented in the main analysis (red), and the model from the sensitivity analysis (blue). The dashed line marks zero autocorrelation.}
\label{APPENDIXfig:snooker_acf}
\end{figure}
We conduct a sensitivity analysis, where we increase the prespecified proportion of explained variance to $0.99$. Additionally, we assume heteroscedasticity in the model given the residual plot in Figure \ref{APPENDIXfig:snooker_resids}. The model then estimates different measurement error variances with a particularly large variance on dimension $hand.x$. The model almost doubles the number of \glspl{fpc} and includes 30 multivariate \glspl{fpc} (B: $9$, C: $10$, E: $11$). The estimation of the fixed effects hardly changes (results not shown) but we find a better fit to the data (compare Table \ref{APPENDIXtab:snooker_umse_fit} and Figure \ref{APPENDIXfig:snooker_diagn_fit}). The scalar residuals in the lower panels of Figure \ref{APPENDIXfig:snooker_resids} seem to exhibit less structure for the model of the sensitivity analysis but we can still discern patterns in the residuals. Considering the residual autocorrelation, Figure \ref{APPENDIXfig:snooker_acf} overall shows a reduction of autocorrelation for small time lags compared to the main model. For larger time lags we find a somewhat surprising increase of the approximated autocorrelation function. Including a large number of comparatively trivial \glspl{fpc} might thus overall improve model fit while introducing new dependencies in the residuals. Given that our main interest lies in the leading \glspl{fpc} and the fixed effects, we conclude that the sensitivity analysis yields similar results with added computational complexity.
\FloatBarrier
\section{Detailed Analysis of the Consonant Assimilation Data}
\label{APPsec:ca_data}
\subsection{Data and Model Description}
\label{APPsubsec:phonetic_description}
\subsubsection*{Data Description}
In this section we present more information on the analysis of the consonant assimilation data. The \gls{aco} data were recorded via microphone at 32,768 Hz. In addition, all speakers wore a custom-fit palate sensor with 62 electrodes that measured the area where the tongue touched the palate during the articulation in high temporal resolution (\gls{epg} data). These primary data were summarized and transformed into a functional index over the time of articulation for the two dimensions \gls{aco} and \gls{epg} separately. Each functional index measures how similar the articulatory or acoustic pattern is to its reference patterns for the first and second consonant at every observed time point. These similarity indices vary roughly between $+1$ and $-1$, where the value $+1$ indicates patterns close to the reference for the final target consonant (i.e.\ the consonant ending the first word) and $-1$ for the initial target consonant (i.e.\ the consonant beginning the second word). Due to the specifics of the index calculation the index values can lie slightly outside the interval $[-1,1]$. The curves on the \gls{aco} dimension are generally smoother than on the \gls{epg} dimension, which reflects the index calculation: The \gls{aco} signal is much richer in information because 256 continuously valued points are considered in the calculation of the similarity index for every time point while only 62 binary points enter the index calculation for the \gls{epg} signal. Details on the data generation and functional index computation can be found in \cite{pouplier2016} and \cite{cederbaum2016}.
\subsubsection*{Model Specification}
For this application, we follow the model specification of \cite{cederbaum2016}, who analyse only the \gls{aco} dimension of the data and ignore the second (\gls{epg}) dimension of the consonant assimilation data. Our specified multivariate model is
\begin{align}
\label{eq:phonetic_model}
\bm{y}_{ijh}(t) = \bm{\mu}(\bm{x}_{ij}, t) + \bm{B}_i(t) + \bm{C}_{j}(t) + \bm{E}_{ijh}(t) + \bm{\epsilon}_{ijht},
\end{align}
with $i = 1,..., 9$ the speaker index, $j = 1, ..., 16$ the word combination index, and $h = 1,..., H_{ij}$ the repetition index. Again, $\bm{B}_i(t)$ and $\bm{E}_{ijh}(t)$ are the person-specific and curve-specific random intercepts. $\bm{C}_{j}(t)$ is the word combination-specific random intercept, which gives a crossed random effects structure. Note that here, the curve-specific random component $\bm{E}_{ijh}(t)$ also captures speaker and word interactions. Taking the different smoothness of the dimensions into account, the white noise measurement error $\bm{\epsilon}_{ijht}$ is assumed to follow a zero-mean bivariate normal distribution with diagonal covariance matrix $\operatorname{diag}(\sigma^2_{\text{ACO}}, \sigma^2_{\text{EPG}})$.
The additive predictor of the \gls{mfamm} is specified as
\begin{align*}
\bm{\mu}(\bm{x}_{ij},t) &= \bm{f}_0(t) + \texttt{order}_{j}\cdot\bm{f}_1(t) + \texttt{stress1}_{j}\cdot\bm{f}_2(t) + \texttt{stress2}_{j}\cdot\bm{f}_3(t) \\
& \quad + \texttt{vowel}_{j}\cdot\bm{f}_4(t) + \texttt{order}_{j}\cdot\texttt{stress1}_{j}\cdot\bm{f}_5(t) \\
& \quad + \texttt{order}_{j}\cdot\texttt{stress2}_{j}\cdot\bm{f}_6(t) + \texttt{order}_{j}\cdot\texttt{vowel}_{j}\cdot\bm{f}_7(t),
\end{align*}
with dummy covariates $\texttt{order}_{j}, \texttt{stress1}_{j}, \texttt{stress2}_{j}$, and $\texttt{vowel}_{j}$ indicating whether in the word combination /sh/ is followed by /s/ (instead of the reference /s\#sh/), the final target syllable is not stressed, the initial target syllable is not stressed, and the vowel context (i.e.\ the adjacent vowel for each consonant sound of interest) is /a\#i/ (instead of the reference /i\#a/ as in ``Gem\textbf{i}sch S\textbf{a}lbe''), respectively. The functional intercept $\bm{f}_0(t)$ and the effect function $\bm{f}_1(t)$, in particular their deviation from a sine-like form and from zero, respectively, are especially interesting for studying assimilation.
For comparability, we follow the univariate analysis in \cite{cederbaum2016} by specifying cubic P-splines with a third-order difference penalty and eight and five (marginal) basis functions for the covariate effect functions and the auto-covariance estimation, respectively. The \gls{mfpca} is based on an unweighted scalar product. We choose the \gls{mfpca} truncation orders so that at least 95\% of the total variation in the data is explained, as presented in \eqref{eq:total_variation_decomp}. To handle the heteroscedasticity, we use the weighted regression approach. Given the different sampling mechanisms on the dimensions, Appendix \ref{app_subsec:phon_m_wei} also contains an alternative analysis based on a weighted scalar product for the \gls{mfpca}.
\subsection{Results of the Model Presented in the Main Part}
\label{app_subsec:phon_main}
\subsubsection*{Analysis of the Variance Components}
The univariate decomposition of the auto-covariances yields five univariate \glspl{fpc} for the random components $\bm{B}$ and $\bm{E}$ on both dimensions and one and two \glspl{fpc} for $\bm{C}$ on \gls{aco} and \gls{epg}, respectively. The cut-off criterion based on the sum of total variation selects a total of eight \glspl{fpc}. The estimated \gls{mfamm} then contains five multivariate \glspl{fpc} for the smooth residual (representing $64\%$ of total variation) and three for the subject-specific random effects ($20\%$ of total variation), with $12\%$ of total variation due to measurement error. With the chosen truncation order $M_C = 0$, the crossed random component $\bm{C}$ is effectively dropped from the model, as a lot of variation in the data is already explained by the included fixed effects, all of which capture characteristics of the word combinations ($8$ fixed effects for $16$ word combinations). Table \ref{APPENDIXtab:phonetic_varexp} shows the contribution of each random process to the total variation in the consonant assimilation data according to the fitted \gls{mfamm}. The first row gives the eigenvalues of the random components $\bm{B}$ and $\bm{E}$ as well as the estimated univariate error variances and the total amount of variation in the data as computed in \eqref{eq:total_variation_decomp}. The second and third row show the univariate $L^2$ norm of the multivariate eigenfunctions. The proportion of explained variance $\pi$ is given in the final three rows. The proportions displayed in the fourth and fifth row are computed according to \eqref{eq:univar_variation_decomp} and the last row is computed according to \eqref{eq:total_variation_decomp}. Note that the fitted model uses the latter multivariate cut-off criterion to determine the number of \glspl{fpc}. With different sampling mechanisms for different dimensions and thus different measurement errors, it is advisable to check whether the proportion of explained variance on each dimension \eqref{eq:univar_variation_decomp} is adequate. For the consonant assimilation data, the selected number of \glspl{fpc} explains about $97\%$ of variation on \gls{epg} but only $94\%$ on \gls{aco}, which we still deem acceptable. If a greater disparity is revealed in an application, we recommend using the proportion of explained univariate variation as a cut-off criterion (in our case this would lead to including a sixth \gls{fpc} for component $E$).
\begin{table} \centering
\caption{Variance components included in the \gls{mfamm}. First row: Estimates of eigenvalues, univariate error variances, total variation. Second (third) row: Univariate norms of estimated \glspl{fpc} on dimension \gls{aco} (\gls{epg}). Fourth (fifth) row: Proportion of univariate variation explained on dimension \gls{aco} (\gls{epg}) by eigenfunctions and error variance, total univariate variation explained. Sixth row: Proportion of multivariate variation explained by eigenfunctions and error variances, total multivariate variation explained.}
\label{APPENDIXtab:phonetic_varexp}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccccccc|cc|c}
& $B_1$ & $B_2$ & $B_3$ & $E_1$ & $E_2$ & $E_3$ & $E_4$ & $E_5$ & $\sigma_{ACO}^2$ & $\sigma_{EPG}^2$ & Total \\\hline
\hline
Variation & $0.018$ & $0.009$ & $0.004$ & $0.060$ & $0.017$ & $0.012$ & $0.007$ & $0.003$ & $0.004$ & $0.014$ & $0.153$ \\
$\vert\vert\psi^{(ACO)}\vert\vert^2$ & $0.169$ & $0.585$ & $0.642$ & $0.153$ & $0.217$ & $0.849$ & $0.178$ & $0.713$ & $--$ & $--$ & $--$ \\
$\vert\vert\psi^{(EPG)}\vert\vert^2$ & $0.831$ & $0.415$ & $0.358$ & $0.847$ & $0.783$ & $0.151$ & $0.822$ & $0.287$ & $--$ & $--$ & $--$ \\
$\pi^{(ACO)}$ & $0.068$ & $0.126$ & $0.052$ & $0.210$ & $0.082$ & $0.226$ & $0.026$ & $0.056$ & $0.094$ & $--$ & $0.940$ \\
$\pi^{(EPG)}$ & $0.134$ & $0.036$ & $0.012$ & $0.467$ & $0.119$ & $0.016$ & $0.049$ & $0.009$ & $--$ & $0.127$ & $0.969$ \\
$\pi$ & $0.115$ & $0.062$ & $0.023$ & $0.393$ & $0.108$ & $0.077$ & $0.042$ & $0.023$ & $0.027$ & $0.091$ & $0.961$ \\
\hline
\end{tabular} }
\end{table}
We present the estimated multivariate \glspl{fpc} for component $\bm{B}$ in Figure \ref{APPENDIXfig:phonetics_fpc_B}. The estimated leading multivariate \gls{fpc} $\bm{\psi}_{B1}(t)$ of the subject-specific random effect $\bm{B}_i(t)$ (see left panels) shows a personal tendency to pronounce the final or initial target syllable distinctly, which explains roughly $12\%$ of the total variation. It changes the relative time spent pronouncing each syllable and pulls the entire curve closer to the reference sound of the preferred consonant. The second \gls{fpc} (accounting for $6\%$ of the variation), however, shows the individual tendency for assimilation. It flattens or amplifies the sine-like shape of the overall mean (black solid line), thus rendering the speaker more or less prone to distinguish between the two sounds. Note that in the univariate analysis for \gls{aco} presented in \cite{cederbaum2016}, this mode of variation is identified as the leading \gls{fpc} of the person-specific random effect. The three leading \glspl{fpc} $\bm{\psi}_{E1}(t)$, $\bm{\psi}_{B1}(t)$, and $\bm{\psi}_{E2}(t)$ all show greater univariate norms on the \gls{epg} dimension (Table \ref{APPENDIXtab:phonetic_varexp}), suggesting that this dimension is the driving source of variation in the model -- but this dimension was ignored in the analysis of \cite{cederbaum2016}. Note that only $\bm{\psi}_{B3}(t)$ impacts the dimensions differently in that more assimilation on one dimension corresponds to less assimilation on the other.
\begin{figure}
\caption{\glspl{fpc} of the subject-specific functional random intercept $\bm{B}_i(t)$ for the consonant assimilation data.}
\label{APPENDIXfig:phonetics_fpc_B}
\end{figure}
For the curve-specific random effect $\bm{E}_{ijh}(t)$ (Figure \ref{APPENDIXfig:phonetics_fpc_E}), the leading \gls{fpc} impacts primarily the final target consonant, pulling it towards or pushing it away from its reference sound. $\bm{\psi}_{E2}(t)$ has a similar shape to $\bm{\psi}_{B2}(t)$. Note that the third leading \gls{fpc} of $E$ affects the mean function in opposite directions on the two dimensions, shifting the curve up on one dimension and down on the other.
\begin{figure}
\caption{\glspl{fpc} of the curve-specific functional random intercept $\bm{E}_{ijh}(t)$ for the consonant assimilation data.}
\label{APPENDIXfig:phonetics_fpc_E}
\end{figure}
Figure \ref{APPENDIXfig:phonetic_surv_E} shows the estimated auto- and cross-covariance surfaces of the components $\bm{B}$ and $\bm{E}$. It is evident that the cross-covariances are about as large in magnitude as the auto-covariances for the dimension \gls{aco}. In univariate analyses, however, the cross-covariance is completely ignored. Note that when no covariates are included in the model, the estimated \gls{mfamm} contains one \gls{fpc} for component $C$ (results not shown).
\begin{figure}
\caption{Auto- and cross-covariance surfaces of the subject-specific functional random effect $\bm{B}_i(t)$ and the curve-specific functional random effect $\bm{E}_{ijh}(t)$.}
\label{APPENDIXfig:phonetic_surv_E}
\end{figure}
\subsubsection*{Analysis of the Estimated Effect Functions}
Figure \ref{APPENDIXfig:phonetic_covar_estim} presents the estimated covariate effects (black, top plot for consonant order /s\#sh/, bottom for /sh\#s/). Overall, we find similar shapes for the estimated effects on both dimensions. In the reference group, the functional intercept $\bm{f}_0(t)$ shows signs of assimilation for consonant /s/. The effect $\bm{f}_1(t)$ of covariate \texttt{order}, however, pushes the final target syllable (in this case /sh/) towards its reference, all other things being equal. The positive effect at the end of the observed interval then pushes the initial target syllable /s/ towards the center, thus indicating a more assimilated pronunciation. Shortly before, we find a negative effect on the dimension \gls{epg} which might indicate that for a brief section, the articulatory pattern of /s/ is indeed close to its reference but this does not necessarily translate to the produced sound. Thus, similar to \cite{cederbaum2016} (red), we find that the final target syllable /sh/ is pulled towards the reference while the initial target syllable /s/ seems to be less affected. Given the similar shape on the dimension \gls{epg}, this supports their finding that assimilation is asymmetric. Since the estimated effects are similar across dimensions and similar to the univariate results, we refer to \cite{cederbaum2016} for interpretation of the other fixed effects.
\begin{figure}
\caption{Estimated covariate effects (black) and comparison to univariate model (red). The corresponding $95\%$ confidence intervals are given by the dashed line. Upper (from left to right): reference mean and covariate effects of \texttt{stress1}, \texttt{stress2}, and \texttt{vowel} for consonant order /s\#sh/; lower: the corresponding effects for consonant order /sh\#s/.}
\label{APPENDIXfig:phonetic_covar_estim}
\end{figure}
\subsubsection*{Comparison to Univariate Analysis}
Compared to separate univariate \glspl{famm}, the multivariate model incorporates the dependency between the dimensions, thus reducing the number of \gls{fpc} bases in the analysis (for a similar amount of variance explained, ten univariate \glspl{fpc} would be needed). The shaded areas in Figure \ref{APPENDIXfig:phonetic_covar_estim} indicate where the standard errors of the univariate analysis presented in \cite{cederbaum2016} are smaller than the corresponding standard errors of the \gls{mfamm}. We find that overall, the multivariate analysis seems to give smaller standard errors. In order to compare the model fit between the univariate and the multivariate analysis, we can use the \gls{umse} defined in Section \ref{sec:simulation}. We then compare the fitted values with the observed values for the consonant assimilation data. The multivariate analysis yields \gls{umse} values of $0.970$ (\gls{aco}) and $0.966$ (\gls{epg}), whereas independent univariate \glspl{famm} give values of $0.978$ (\gls{aco}) and $0.961$ (\gls{epg}), respectively. Consequently, a univariate model analogously specified to the \gls{aco} model in \cite{cederbaum2016} gives a slightly better model fit on the \gls{epg} data as measured by the \gls{umse}. However, on the \gls{aco} dimension, the multivariate analysis is slightly preferred.
The added computational complexity of the multivariate analysis is also negligible in our case: Fitting a univariate \gls{famm} as proposed by \cite{cederbaum2016} takes around 52 minutes on a 64-bit Linux platform and requires a considerable amount of RAM (32 GB is sufficient). The multivariate analysis has the same memory requirements, while the time to fit the multivariate model (109 minutes on the aforementioned platform) increases only slightly compared to sequentially fitting two univariate models.
\subsection{Results of the Model Using a Weighted Scalar Product}
\label{app_subsec:phon_m_wei}
This section gives a short description of the considered \gls{mfamm} when a weighted scalar product is used. We only present the results of estimating the eigenfunction basis, as the effect on the estimated covariate effects is negligible. Table \ref{APPENDIXtab:phonetic_varexp_weighted} shows the contribution of each random process to the total weighted variation and is structured similarly to Table \ref{APPENDIXtab:phonetic_varexp}. The number of eigenfunctions is chosen via the weighted sum of total variation \eqref{eq:total_variation_decomp}, which results in the same number of eigenfunctions for each random component as with the unweighted approach. However, the total share of weighted variation explained is slightly different (about $22\%$ and $65\%$ explained by the components $\bm{B}$ and $\bm{E}$, respectively). Consequently, the proportion of (weighted) variation assigned to the measurement error is smaller when applying the weighted scalar product. We also find that the univariate norms of the eigenfunctions are now more evenly distributed between the dimensions \gls{aco} and \gls{epg}, with most of the norms larger on the \gls{aco} dimension. We thus see that weighting the scalar product shifts emphasis to the dimension \gls{aco}, which has a lower measurement error. With this emphasis on \gls{aco}, the proportion of univariate variation explained on \gls{epg} now falls short of $95\%$ while \gls{aco} has a proportion of univariate variation explained of about $98\%$. For this model, too, we have specified the cut-off using the weighted sum of total variation \eqref{eq:total_variation_decomp}, and the model explains $96\%$ of variation in the data.
\begin{table} \centering
\caption{Variance components included in the \gls{mfamm} using a weighted scalar product. First row: Estimates of eigenvalues, univariate error variances, total variation. Second (third) row: Univariate norms of estimated \glspl{fpc} on dimension \gls{aco} (\gls{epg}). Fourth (fifth) row: Proportion of univariate variation explained on dimension \gls{aco} (\gls{epg}) by eigenfunctions and error variance, total univariate variation explained. Sixth row: Proportion of multivariate variation explained by eigenfunctions and error variances, total multivariate variation explained.}
\label{APPENDIXtab:phonetic_varexp_weighted}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccccccc|cc|c}
& $B_1$ & $B_2$ & $B_3$ & $E_1$ & $E_2$ & $E_3$ & $E_4$ & $E_5$ & $\sigma_{ACO}^2$ & $\sigma_{EPG}^2$ & Total \\\hline
\hline
Variation & $1.937$ & $1.625$ & $0.493$ & $6.501$ & $2.168$ & $1.823$ & $0.694$ & $0.507$ & $0.004$ & $0.014$ & $18.476$ \\
$\vert\vert\psi^{(ACO)}\vert\vert^2$ & $0.630$ & $0.743$ & $0.442$ & $0.574$ & $0.741$ & $0.404$ & $0.564$ & $0.455$ & $--$ & $--$ & $--$ \\
$\vert\vert\psi^{(EPG)}\vert\vert^2$ & $0.370$ & $0.257$ & $0.558$ & $0.426$ & $0.259$ & $0.596$ & $0.436$ & $0.545$ & $--$ & $--$ & $--$ \\
$\pi^{(ACO)}$ & $0.115$ & $0.114$ & $0.021$ & $0.352$ & $0.152$ & $0.070$ & $0.037$ & $0.022$ & $0.094$ & $--$ & $0.975$ \\
$\pi^{(EPG)}$ & $0.091$ & $0.053$ & $0.035$ & $0.352$ & $0.071$ & $0.138$ & $0.038$ & $0.035$ & $--$ & $0.127$ & $0.941$ \\
$\pi$ & $0.105$ & $0.088$ & $0.027$ & $0.352$ & $0.117$ & $0.099$ & $0.038$ & $0.027$ & $0.054$ & $0.054$ & $0.960$ \\
\hline
\end{tabular} }
\end{table}
Figure \ref{APPENDIXfig:phonetics_fpc_B_wei} shows the estimated multivariate \glspl{fpc} of random component $\bm{B}$. We find that the leading \gls{fpc} again depicts an individual tendency to spend more time pronouncing the final or the initial target consonant. For $\bm{\psi}_{B2}(t)$, almost the same mode of variation is captured on the dimension \gls{aco} as with the unweighted scalar product. On the dimension \gls{epg}, however, we find the assimilation primarily in the final target syllable with the initial target syllable rather unaffected (more or less time spent in the pronunciation). The third \gls{fpc} for the subject-specific random effect is comparable to the scenario of the unweighted scalar product.
\begin{figure}
\caption{\glspl{fpc} of the subject-specific functional random intercept $\bm{B}_i(t)$ for the consonant assimilation data, based on the weighted scalar product.}
\label{APPENDIXfig:phonetics_fpc_B_wei}
\end{figure}
Figure \ref{APPENDIXfig:phonetics_fpc_E_wei} shows the estimated multivariate \glspl{fpc} of random component $\bm{E}$. The leading \gls{fpc} is somewhat unchanged by the weighting of the scalar product. While in the unweighted case the effect of $\bm{\psi}_{E2}$ was distributed equally across the two consonants, we find a stronger effect on the initial target consonant for dimension \gls{aco} and on the final target consonant for dimension \gls{epg} in the weighted case. This seems to be compensated by the third \gls{fpc}, where in the weighted scenario the emphasis lies on differences in the final target consonant on \gls{aco} and in the initial target consonant on \gls{epg}. The remaining \glspl{fpc} are comparable for both the weighted and unweighted scalar product.
\begin{figure}
\caption{\glspl{fpc} of the curve-specific functional random intercept $\bm{E}_{ijh}(t)$ for the consonant assimilation data, based on the weighted scalar product.}
\label{APPENDIXfig:phonetics_fpc_E_wei}
\end{figure}
We conclude that the model based on the weighted scalar product gives similar results. However, if some dimension were considerably more noisy than the rest, weighting the scalar product might be advisable (see results of the simulation).
\FloatBarrier
\section{Detailed Analysis of the Simulation Study}
\label{app_sec:simulation}
We conduct an extensive simulation study in order to answer the following questions: How does the performance of the proposed \gls{mfamm} depend on different model specifications? How does the model perform in different data settings? And how does the \gls{mfamm} compare to a univariate modeling approach, where univariate regression models as proposed by \cite{cederbaum2016} are estimated on each dimension independently? Additionally, we evaluate the estimation of the covariance and the fixed effects. We simulate data based on the presented model fits \eqref{eq:snooker_model} of the snooker training data (main part) and \eqref{eq:phonetic_model} of the consonant assimilation data (Appendix \ref{APPsec:ca_data}). Six different settings of the data generating process are analysed, where for each data setting, we additionally compare up to seven different model specifications. One modeling scenario (data setting plus model specification) consists of independent model fits based on 500 generated data sets. Table \ref{tab:sim_data_model} provides an overview of the analysed data settings and model specifications, which are described in the following section. Note that we use data setting 1 and model specification A, respectively, to generate benchmark estimates useful for comparison to other settings and specifications.
\begin{table}
\centering
\caption{Summary of the different data settings and model specifications analysed in the simulation study. Each analysed combination of data setting and model specification comprises 500 model fits.}
\label{tab:sim_data_model}
\begin{tabular}{ccll|cl}
\multicolumn{4}{c|}{Data Settings} & \multicolumn{2}{c}{Model Specification} \\ \hline
\multicolumn{2}{l}{1} & \multicolumn{2}{l|}{Consonant assimilation data} & A & True model \\
&2 & &Strong heteroscedasticity & B & Truncation via total variation (TV) \\
&3 & &Sparse data & C & Truncation via univariate variation (UV)\\
&4 & &Uncentred scores & D & TV with alternate scalar product \\
&5 & &Weighted scalar product & E & UV with alternate scalar product \\
\multicolumn{2}{l}{6}& \multicolumn{2}{l|}{Snooker training data} & F & Scedastic misspecification \\
&& && U & Univariate models
\end{tabular}
\end{table}
\subsection{Simulation Description}
\label{app_subsec:sim_description}
\subsubsection*{Data Settings}
Table \ref{app_tab:sim_data_settings} provides a detailed description of the data generating process for all data settings. For each data setting, 500 data sets are simulated. For data setting $1$, we generate new observations based on the model fit \eqref{eq:phonetic_model} (Appendix \ref{APPsec:ca_data}) of the consonant assimilation data by randomly drawing the evaluation points, the random scores, and the measurement errors. Note that we center and decorrelate the random scores so that the empirical means and covariances coincide with their theoretical counterparts. As in the original data, each simulated data set contains the bivariate functional observations of $i = 1,.., 9$ individuals, each repeating the $j=1,...,16$ different word combinations five times.
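A minimal Python sketch of this centering and decorrelation step (illustrative only; not the simulation code itself): scores are drawn from $N(0, \hat{\nu})$, demeaned, whitened with their empirical covariance, and rescaled so that the empirical covariance equals $\operatorname{diag}(\hat{\nu})$ exactly.
\begin{verbatim}
import numpy as np

def draw_centred_decorrelated_scores(n, eigenvalues, rng=None):
    # draw n score vectors from N(0, diag(eigenvalues)) and transform them so
    # that the empirical mean is exactly zero and the empirical covariance is
    # exactly diag(eigenvalues); assumes n exceeds the number of FPCs so that
    # the empirical covariance is invertible
    rng = np.random.default_rng(rng)
    nu = np.asarray(eigenvalues, dtype=float)
    scores = rng.normal(scale=np.sqrt(nu), size=(n, len(nu)))
    scores -= scores.mean(axis=0)                      # centre
    emp_cov = np.cov(scores, rowvar=False)             # empirical covariance
    whitener = np.linalg.cholesky(np.linalg.inv(emp_cov))
    scores = scores @ whitener                         # decorrelate to identity
    return scores * np.sqrt(nu)                        # rescale to target eigenvalues
\end{verbatim}
In data setting 4 below, this transformation is omitted, so only the raw draws from $N(0, \hat{\nu})$ are used.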
Setting 2 (strong heteroscedasticity) is analogous to setting 1 but with a larger difference between the measurement error variances of the two dimensions. Setting 2 mimics an application setting with (multimodal) multivariate data, where some dimensions are much noisier than others, in order to evaluate whether this imbalance interferes with the variance decomposition of the \gls{mfamm}.
Setting 3 (sparse data) focuses on the estimation quality for sparse functional data. Compared to setting 1, the number of evaluation points is reduced to three to ten measurements per dimension. In setting 4 (uncentred scores), the random scores are not decorrelated or centred (otherwise identical to setting 1). Especially for covariance components with few grouping levels (particularly $\bm{B}$), this can result in a considerable departure from the modeling assumptions, which is likely to occur in such settings. This setting explores the sensitivity of the \gls{mfamm} to violations of its assumptions. Setting 5 (weighted scalar product) is identical to setting 1 but all model components are simulated based on the estimated model using a weighted scalar product for the \gls{mfpca} (see Appendix \ref{app_subsec:phon_m_wei}). This data setting helps to understand the impact of weights in the scalar product. For setting 6 (snooker training data), we generate new trajectory data according to the model fit \eqref{eq:snooker_model} of the snooker training data. This allows us to evaluate a higher dimensional \gls{mfamm} as well as the estimation quality of nested random effects.
\begin{table}
\caption{Description of data settings.}
\centering
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{c|p{15cm}}
Data Setting & Description \\ \hline \hline
1 & Data based on consonant assimilation data (bivariate functions on $dim1, dim2$, each on $[0,1]$). $9$ individuals, each repeating $16$ crossed grouping levels $5$ times. Number of observed time points is an independent random draw for each dimension from a uniform discrete distribution of natural numbers in $[20,50]$. Observed time point is an independent random draw from uniform distribution on $[0,1]$. Mean consists of functional intercept and $7$ covariate effect functions as given in Appendix \ref{app_subsec:phon_main}. Covariates are given for each observation based on the crossed grouping level (word combination). Functional random effects $\bm{B},\bm{E}$ are based on estimated eigenfunctions of model in Appendix \ref{app_subsec:phon_main} ($3$ and $5$ \glspl{fpc}). Corresponding random scores are independent draws from $N(0,\hat{\nu}_{\cdot\cdot})$, then demeaned and decorrelated. Measurement errors are independent draws from $N(0, \hat{\sigma}^2_{\cdot})$ for each observed time point on the respective dimension. \\\hline
2 & Measurement errors are independent draws from $N(0, \hat{\sigma}^2_{1})$ for each observed time point on $dim1$ and independent draws from $N(0, 16\cdot\hat{\sigma}^2_{1})$ on $dim2$. Rest as in 1.\\\hline
3 & Number of observed time points is a random draw for each dimension from a uniform discrete distribution of natural numbers in $[3,10]$. Rest as in 1.\\\hline
4 & Random scores are independent draws from $N(0,\hat{\nu}_{\cdot\cdot})$ (no centering or decorrelation). Rest as in 1. \\\hline
5 & All estimated components (eigenfunctions, eigenvalues, covariate effect functions, measurement error variances) stem from the model based on a weighted scalar product for the \gls{mfpca} as presented in Appendix \ref{app_subsec:phon_m_wei}. Rest as in 1. \\\hline
6 & Data based on snooker training data (six-dimensional functions on $dim1,..., dim6$, each on $[0,1]$). $25$ individuals, each repeating $2$ nested grouping levels $6$ times. Number of observed points is an independent random draw from a uniform discrete distribution of natural numbers in $[10,50]$ per multivariate curve with observed time points identical over the dimensions. Observed time point is an independent random draw from uniform distribution on $[0,1]$. Mean consists of functional intercept and $4$ covariate effect functions as given in Appendix \ref{app_sec:snooker_analysis}. Covariates \texttt{skill}, \texttt{group} are independent random draws from Bernoulli distribution with probability $0.5$ per subject and \texttt{session} is given for each observation based on the nested grouping level. Functional random effects $\bm{B},\bm{C}, \bm{E}$ are based on estimated eigenfunctions and eigenvalues of model in Appendix \ref{app_sec:snooker_analysis} ($6,5$ and $5$ \glspl{fpc}). Measurement errors are independent draws from $N(0, \hat{\sigma}^2)$ for each observed time point on each dimension.
\end{tabular}}
\label{app_tab:sim_data_settings}
\end{table}
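To make the generating mechanism of setting 1 concrete, the following minimal Python sketch (our own illustration; the function names, array shapes, and numerical values are not taken from the actual simulation code) shows how random scores can be drawn and then centred and decorrelated so that their empirical moments match the assumed ones, and how evaluation points and measurement errors are drawn for one curve on one dimension.
\begin{verbatim}
# Sketch of the core of data setting 1 (illustrative names and values only).
import numpy as np

rng = np.random.default_rng(1)

def centre_decorrelate(scores, eigenvalues):
    """Transform i.i.d. N(0, diag(eigenvalues)) draws so that the empirical
    mean is zero and the empirical covariance equals diag(eigenvalues)."""
    z = scores - scores.mean(axis=0)
    cov = np.atleast_2d(np.cov(z, rowvar=False))
    L = np.linalg.cholesky(cov)
    white = np.linalg.solve(L, z.T).T      # empirical covariance = identity
    return white * np.sqrt(eigenvalues)    # rescale to the assumed eigenvalues

n_subjects, n_fpc_B = 9, 3
nu_B = np.array([0.5, 0.3, 0.1])           # illustrative eigenvalues
rho_B = centre_decorrelate(
    rng.normal(0, np.sqrt(nu_B), (n_subjects, n_fpc_B)), nu_B)

# per curve and dimension: 20-50 evaluation points, uniform on [0, 1]
n_obs = rng.integers(20, 51)
t_obs = np.sort(rng.random(n_obs))
sigma2 = 0.05                              # illustrative error variance
eps = rng.normal(0, np.sqrt(sigma2), n_obs)  # measurement errors
\end{verbatim}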
\subsubsection*{Model Specifications}
Table \ref{app_tab:sim_model_scen} provides a detailed description of the different modeling scenarios used in the simulation study. We denote the most accurate modeling approach as specification A (true model). This standard scenario mirrors the data generation so that there is no model misspecification. Most notably, we fix the number of \glspl{fpc} to the number used for generating the data in order to avoid truncation effects. Though somewhat unrealistic, specification A serves to separate the overall performance of the \gls{mfamm} from the impact of modeling decisions and of situations with more (realistic) uncertainty for the user.
For model specification B (\gls{tv}), the truncation orders of the \glspl{fpc} are chosen so that $95\%$ of the total variation \eqref{eq:total_variation_decomp} is explained. In scenario C (\gls{uv}), we choose the number of \glspl{fpc} so that on every dimension $95\%$ of the univariate variance \eqref{eq:univar_variation_decomp} is explained. Specifications D (\gls{tv} with alternate scalar product) and E (\gls{uv} with alternate scalar product) use cut-off criteria analogous to B and C, but we switch the scalar product on which the \gls{mfpca} is based: for data generated from a model based on an unweighted \gls{mfpca}, the scalar product used in these scenarios is weighted by $\frac{1}{\hat{\sigma}^2_d}$, and vice versa. Model specification F (scedastic misspecification) evaluates misspecifying the \gls{mfamm} with a homoscedasticity assumption. We also contrast the \gls{mfamm} with the univariate approach of fitting independent univariate models. In modeling scenarios denoted with U, we fit an independent \gls{famm} on each dimension. We use the \glspl{famm} proposed by \cite{cederbaum2016} so that we can apply the same model specifications as for the \gls{mfamm} (e.g.\ basis functions, penalties, etc.). The number of \glspl{fpc} in the model is then chosen so that $95\%$ of univariate variation is explained.
\begin{table}
\caption{Description of model specifications.}
\resizebox{\textwidth}{!}{
\begin{tabular}{c|p{15cm}}
Model Specification & Description \\ \hline \hline
A & Univariate mean, multivariate mean, univariate auto-covariance estimation based on cubic P-splines with 8, 8, 5 (marginal) basis functions and choice of penalty as in the model used for generating the data setting (third order for 1-5, first order for 6). No univariate truncation for the \gls{mfpca} except negative eigenvalues. \gls{mfpca} based on scalar product as in the model used for generating the data setting (weights of one except for 5). Number of multivariate \glspl{fpc} is fixed according to the data setting (1-5: ($\bm{B}$:3, $\bm{E}$:5), 6: ($\bm{B}$:6, $\bm{C}$:5, $\bm{E}$:5)). Weighted regression approach with weights obtained from univariate variance decompositions for heteroscedastic data settings (1-5).\\\hline
B & Number of \glspl{fpc} chosen so that $95\%$ of the (weighted) sum of total variation in the data is explained. Rest as in A. \\\hline
C & Number of \glspl{fpc} chosen so that on each dimension at least $95\%$ of the univariate variation in the data is explained. Rest as in A. \\\hline
D & Weighted scalar product for \gls{mfpca} with weights based on dimension specific measurement error variances if data setting is based on model using a scalar product with weights of one (all except 5) and vice versa (5). Number of \glspl{fpc} chosen so that $95\%$ of the (weighted) sum of total variation in the data is explained. Rest as in A. \\\hline
E & Weighted scalar product for \gls{mfpca} with weights based on dimension specific measurement error variances if data setting is based on model using a scalar product with weights of one (all except 5) and vice versa (5). Number of \glspl{fpc} chosen so that on each dimension at least $95\%$ of the univariate variation in the data is explained. Rest as in A. \\\hline
F & Homoscedasticity is assumed, no regression weights. Rest as in A. \\ \hline
U & Univariate modeling approach. Independent sparse \gls{flmm} models for each dimension. Number of univariate \glspl{fpc} chosen so that $95\%$ of the univariate variation is explained. Rest as in A.
\end{tabular}}
\label{app_tab:sim_model_scen}
\end{table}
\subsubsection*{Modeling Scenarios}
For each of the combinations of data settings and modeling scenarios indicated in Table \ref{app_tab:sim_combinations}, analyses were performed giving $500$ fitted models per combination. The results presented in the main part correspond to modeling scenarios 1A (Benchmark), 1B (Cut-Off Multi), 1C (Cut-Off Uni), 3A (Sparse Data), and 4A (Uncentred Scores).
\begin{table}
\caption{Overview of the analysed modeling scenarios.}
\centering
\begin{tabular}{cc||cccccc}
& & \multicolumn{6}{c}{Data Setting}\\
\multicolumn{2}{l||}{Model Specification} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline\hline
& A & X & X & X & X & X & X \\
& B & X & X & & & X & \\
& C & X & X & & & X & \\
& D & X & X & & & X & \\
& E & X & X & & & X & \\
& F & X & X & & & & \\
& U & X & X & & & X &
\end{tabular}
\label{app_tab:sim_combinations}
\end{table}
\subsubsection*{Evaluation Criteria}
The accuracy of the estimated model components is measured using the \gls{mse}. We analyse the accuracy of the covariance estimation, i.e.\ of the eigenfunctions, eigenvalues, and measurement error variances, as well as of the mean estimation. For evaluating the overall accuracy of the estimate $\hat {\bm{\zeta}}$ of the multivariate functional component $\bm{\zeta} = (\bm{\zeta}_1,\ldots, \bm{\zeta}_S)^{\top}$, we define the \gls{mmse} as
\begin{align*}
\mathrm{mrrMSE}(\boldsymbol\zeta, \hat{\boldsymbol{\zeta}}) = \sqrt{\frac{\frac{1}{S}\sum_{s=1}^{S}\vert\vert\vert\bm{\zeta}_s- \hat{\bm{\zeta}}_s\vert\vert\vert^2}{\frac{1}{S}\sum_{s=1}^{S}\vert\vert\vert\bm{\zeta}_s\vert\vert\vert^2}}
\end{align*}
with the multivariate norm based on the unweighted scalar product. Note that $S$ depends on the model component to be evaluated, e.g.\ for the fitted values $\bm{y}_{ijh}(t)$, $S$ equals $N$ but for the subject-specific random effect $\bm{B}_i(t)$, $S$ equals the number of individuals. The \gls{mmse} corresponds to the evaluation criterion used in section 5 of the main part. We also define a \gls{umse}
\begin{align*}
\mathrm{urrMSE}\big(\boldsymbol{\zeta}^{(d)}, \hat{\boldsymbol{\zeta}}^{(d)}\big) = \sqrt{\frac{\frac{1}{S}\sum_{s=1}^{S}\vert\vert\zeta_s^{(d)}- \hat{\zeta}_s^{(d)}\vert\vert^2}{\frac{1}{S}\sum_{s=1}^{S}\vert\vert\zeta_s^{(d)}\vert\vert^2}}
\end{align*}
for $\bm{\zeta}^{(d)} = (\zeta_1^{(d)}, \ldots, \zeta_S^{(d)})^{\top}$, which allows us to evaluate the dimension-specific estimation accuracy as well as to make a straightforward comparison to the univariate modeling approach. For scalar estimates such as eigenvalues and error variances, we define the \gls{mse} as
\begin{equation*}
\mathrm{rrMSE}(\zeta, \hat{\zeta}) = \sqrt{\frac{(\zeta - \hat{\zeta})^2}{\zeta^2}}
\end{equation*}
with estimate $\hat{\zeta}$ of the scalar value $\zeta$. The \gls{mse} takes on (unbounded) positive values with smaller values indicating a better fit. As the \gls{mse} is a relative measure, small differences between estimate and true component can result in large \gls{mse} values for true component norms close to zero. Note that eigenfunctions are defined only up to a sign change. We thus flip the estimated eigenfunction (multiply it by $(-1)$) if this results in a smaller norm for the difference between the true function and its estimate.
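The following Python sketch (our own helper functions, not code from the implementation) illustrates these criteria for functions stored on a common grid, where the norms reduce to a trapezoidal quadrature; it also shows the sign alignment of estimated eigenfunctions described above.
\begin{verbatim}
# Sketch of the evaluation criteria; arrays of shape (S, n_dim, len(t)).
import numpy as np

def fnorm2(f, t):
    """Squared (unweighted) multivariate L2 norm of one function given as an
    array of shape (n_dim, len(t)); trapezoidal rule on each dimension."""
    w = np.diff(t)
    return sum(np.sum((f[d, :-1] ** 2 + f[d, 1:] ** 2) / 2 * w)
               for d in range(f.shape[0]))

def mrrMSE(true, est, t):
    """Multivariate relative root MSE over S realizations."""
    num = np.mean([fnorm2(true[s] - est[s], t) for s in range(len(true))])
    den = np.mean([fnorm2(true[s], t) for s in range(len(true))])
    return np.sqrt(num / den)

def rrMSE(zeta, zeta_hat):
    """Relative root MSE for scalar quantities (eigenvalues, variances)."""
    return np.sqrt((zeta - zeta_hat) ** 2 / zeta ** 2)

def align_sign(psi_true, psi_hat, t):
    """Flip an estimated eigenfunction if this reduces the error norm."""
    if fnorm2(psi_true + psi_hat, t) < fnorm2(psi_true - psi_hat, t):
        return -psi_hat
    return psi_hat
\end{verbatim}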
Additionally, we evaluate the coverage of the point-wise \glspl{cb} of the estimated fixed effects.
\FloatBarrier
\subsection{Results of the Simulation Study}
\label{app_subsec:sim_results}
\subsubsection*{Impact of Model Specifications}
The results for setting 1 (consonant assimilation data) demonstrate the importance of the number of \glspl{fpc} for the accuracy of the estimation (see Figure \ref{app_fig:sim_eval_multi} and Table \ref{app_tab:sim_number_fpc}). With specifications A (true model) and F (scedastic misspecification), the sets of \glspl{fpc} are fixed, giving overall low values for the \gls{mmse} (the misspecification in F yields only slightly worse results). Similarly, choosing the truncation order via the proportion of univariate variance explained in scenario C gives models with roughly the same number of \glspl{fpc} as is used for the data generation. The cut-off criterion based on the total amount of variance in scenario B results in more parsimonious models and thus considerably higher \gls{mmse} values. The number of selected \glspl{fpc} also explains the wider boxplots of the scenarios B-E compared to A: rather than a larger overall variance in estimation, we find separate clusters based on the included \gls{fpc} sets (see also Figure \ref{fig:app_sim_nfpc_y}). For the modeling scenarios based on a weighted scalar product (1D and 1E), the number of chosen \glspl{fpc} is quite similar regardless of the cut-off criterion, but the overall \gls{mmse} values of the fitted curves ($\bm{y}_{ijh}(t)$) are higher than for the unweighted approach. The estimate of the mean $\bm{\mu}(\bm{x}_j, t)$, however, is comparatively stable over the different model specifications.
\begin{figure}
\caption{\gls{mmse} values for the different modeling scenarios and data settings.}
\label{app_fig:sim_eval_multi}
\end{figure}
\begin{table}
\resizebox{\textwidth}{!}{
\begin{tabular}{c||ccccccc||cccccc}
& \multirow{2}{*}{1A} & \multirow{2}{*}{1B} & \multirow{2}{*}{1C} & \multirow{2}{*}{1D} & \multirow{2}{*}{1E} & \multicolumn{2}{c||}{1U} & \multirow{2}{*}{2B} & \multirow{2}{*}{2C} & \multirow{2}{*}{2D} & \multirow{2}{*}{2E} & \multicolumn{2}{c}{2U} \\
& & & & & & dim1 & dim2 & & & & & dim1 & dim2\\ \hline\hline
\multirow{2}{*}{$\bm{B}_{i}(t)$} & \multirow{2}{*}{3.00} & \multirow{2}{*}{2.15} & \multirow{2}{*}{2.80} & \multirow{2}{*}{2.62} & \multirow{2}{*}{2.88} & \multicolumn{2}{c||}{3.09} & \multirow{2}{*}{2.02} & \multirow{2}{*}{2.87} & \multirow{2}{*}{2.00} & \multirow{2}{*}{3.00} & \multicolumn{2}{c}{3.06} \\
& & & && & 2.00 & 1.09 & & & & & 2.00 & 1.06 \\ \hline\hline
\multirow{2}{*}{$\bm{E}_{ijh}(t)$} & \multirow{2}{*}{5.00} & \multirow{2}{*}{4.00} & \multirow{2}{*}{5.00} & \multirow{2}{*}{4.01} & \multirow{2}{*}{4.12} & \multicolumn{2}{c||}{4.99} & \multirow{2}{*}{3.77} & \multirow{2}{*}{4.44} & \multirow{2}{*}{3.00} & \multirow{2}{*}{4.27} & \multicolumn{2}{c}{4.72} \\
& & & & & & 2.00 & 2.99 & & & & & 2.00 & 2.72
\end{tabular}}
\caption{Average number of eigenfunctions selected (scenario A fixed to underlying truth) in the 500 simulation iterations per modeling scenario (column) and random effect (row). For the univariate modeling approach (scenario U) we report the total amount (top) and the number per independent model (bottom).}
\label{app_tab:sim_number_fpc}
\end{table}
Figure \ref{fig:app_sim_nfpc_y} shows the \gls{umse} values of the fitted curves $\bm{y}_{ijh}(t)$ for different modeling scenarios (1U, 1B, 1C, 1D, 1E) depending on the number of \glspl{fpc} included in the model. For example in the univariate modeling scenario 1U, all 500 models choose two \glspl{fpc} for each random effect (B2-E2) on dimension $dim1$. On $dim2$, however, different combinations of \glspl{fpc} lead to considerably different \gls{umse} values. For the multivariate modeling scenarios, we also find that the reduction in \gls{mse} values depends on which additional \gls{fpc} is included: $\bm{\psi}_{E5}$ reduces the values more than $\bm{\psi}_{B3}$.
\begin{figure}
\caption{\gls{umse} values of the fitted curves depending on the number of \glspl{fpc} included in the model.}
\label{fig:app_sim_nfpc_y}
\end{figure}
Figure \ref{fig:app_sim_multivariate1} shows the \gls{mmse} values of the fitted curves $\bm{y}_{ijh}(t)$, the mean $\bm{\mu}(\bm{x}_i, t)$, and the random effects $\bm{B}_{i}(t)$ and $\bm{E}_{ijh}(t)$ for the different modeling scenarios of data settings 1, 2, and 5. Again, we find considerable differences for different numbers of \glspl{fpc} in the models when there is strong heteroscedasticity between the different dimensions (setting 2). The overall model fit seems to be better when applying an unweighted scalar product for the \gls{mfpca} (2B, 2C compared to 2D, 2E). Later, we will see that the model fit on single dimensions can be improved by weighting the scalar product. Note that for setting 2F, misspecifying the model assumption now has a larger negative impact on the fitted curves than in setting 1F. In data setting 5, the data are generated based on the model using a weighted scalar product. Then, the number of \glspl{fpc} in modeling scenarios 5B and 5C is very similar. Interestingly, basing the \gls{mfpca} on a scalar product using weights of one can lead to a similar estimation accuracy as the standard modeling scenario (5A compared to 5E).
\begin{figure}
\caption{\gls{mmse} values for the modeling scenarios of data settings 1, 2, and 5.}
\label{fig:app_sim_multivariate1}
\end{figure}
Our simulation study thus suggests that basing the truncation orders on the proportion of explained variation on each dimension gives parsimonious and well-fitting models. If interest lies mainly in the estimation of fixed effects, the alternative cut-off criterion based on the total variation in the data allows even more parsimonious models. Furthermore, an unweighted scalar product is a reasonable starting point for the \gls{mfamm}.
\subsubsection*{Model Performance on Different Data Settings}
To compare different data settings, we focus on model specification A (true model) in Figure \ref{app_fig:sim_eval_multi}. Note that the \gls{mmse} values of the other data settings cannot be directly compared, as the denominator of the \gls{mmse} (slightly) changes (except for the mean of 2A, 3A, and 4A). We find that strong heteroscedasticity (2A) mainly negatively affects the fitted curves and the smooth residual. Unsurprisingly, the results for scenario 3A suggest that the estimation accuracy is lower for sparse functional data. However, the mean estimation is comparable to more densely observed data. On the other hand, the mean estimation is susceptible to violations of the assumption of uncorrelated and centred realizations of the random effects (4A). The model then has difficulties separating the intercept and the random effects (see also Figure \ref{fig:app_sim_eff_umse}), which does not necessarily translate into a worse overall fit to the data. The results of scenario 5A (weighted scalar product) suggest that the accuracy of the \gls{mfamm} does not depend on the definition of the scalar product used for the \gls{mfpca}.
Figure \ref{fig:app_sim_univariate_sno} shows the \gls{umse} values for the aforementioned model components of modeling scenario 6A (snooker training data), where we compare the estimation accuracy across the dimensions. The fit of the functional curves $\bm{y}_{ijh}(t)$ suggests that there might be pronounced differences between the estimation accuracy of the dimensions, with the dimensions $dim1$ (corresponding to $elbow.x$) and $dim4$ (corresponding to $hand.y$) giving high \gls{mse} values.
\begin{figure}
\caption{\gls{umse} values for modeling scenario 6A (snooker training data) across the dimensions.}
\label{fig:app_sim_univariate_sno}
\end{figure}
We conclude that the proposed \gls{mfamm} performs well even in more challenging data settings such as sparse functional data or data with few grouping levels for the random effects. Especially the estimation of the fixed effects seems to be stable over the different analysed settings.
\subsubsection*{Comparison to Univariate Approach}
We compare the univariate modeling approach U to multivariate modeling scenarios C (\gls{uv}) and E (\gls{uv} with alternate scalar product), i.e.\ we make sure that the proportion of explained univariate variation is also at least $95\%$ for each model. Table \ref{app_tab:sim_number_fpc} shows that in data settings 1 (consonant assimilation data) and 2 (strong heteroscedasticity) the number of included \glspl{fpc} tends to be higher for scenario U. Yet Figure \ref{fig:sim_eval_uni} indicates that the \gls{mfamm} yields consistently lower \gls{umse} values on $dim1$ (smaller measurement error variance). For $dim2$, the random effects seem to be estimated more accurately and the fixed effects similarly well, but especially with the weighted scalar product (scenarios E), the overall fit for $y$ can give higher \gls{umse} values. Overall, the results suggest that the unweighted scalar product is to be preferred in data situations with similar measurement error variances of the dimensions, such as setting 1, where it gives reliably good results across all model components. However, downweighting the dimension $dim2$ with the larger error seems to be a reasonable modeling decision in setting 2 if interest lies primarily in $dim1$ (lowest \gls{umse} values for 2E). Again, we point out that the estimation of the fixed effects is relatively stable across approaches and slightly better than for the univariate modeling approach.
\begin{figure}
\caption{\gls{umse} values comparing the multivariate and the univariate modeling approaches.}
\label{fig:sim_eval_uni}
\end{figure}
Table \ref{app_tab:st_error_comparison} compares the standard errors of the fixed effects estimation for the \gls{mfamm} in scenario 1C with the univariate modeling approach of scenario 1U. We look at the ratio of standard errors $\frac{se_m}{se_u}$, where $se_m$ denotes the standard error of the \gls{mfamm} and $se_u$ the standard error of the corresponding univariate \gls{famm}. For each dimension and partial predictor, we calculate the proportion of ratios smaller and larger than one over an equidistant grid of 100 time points and all simulation runs. Especially on $dim1$, we find that the multivariate approach gives smaller standard errors. On $dim2$, the proportions are more similar with slightly more proportions larger than one. Overall, we find more ratios smaller than one in our simulation study thus indicating that the multivariate approach can yield smaller standard errors. Table \ref{app_tab:coverage_phon_sno} further shows that the coverage of these two scenarios is very similar (see subsection on the estimation of fixed effects).
\begin{table}\centering
\caption{Proportion of standard error ratios of the \gls{mfamm} ($se_{m}$) of scenario 1C and the univariate modeling approach ($se_{u}$) of scenario 1U over all simulation runs and all evaluated time points.}
\label{app_tab:st_error_comparison}
\resizebox{\textwidth}{!}{
\begin{tabular}{cc|ccccccccc}
\hline
\hline
$d$ & Ratio & $\beta_0^{(d)}$ & $f_0^{(d)}(t)$ & $f_1^{(d)}(t)$ & $f_2^{(d)}(t)$ & $f_3^{(d)}(t)$ & $f_4^{(d)}(t)$ & $f_5^{(d)}(t)$ & $f_6^{(d)}(t)$ & $f_7^{(d)}(t)$ \\
\hline
\multirow{2}{*}{$dim1$} & $\frac{se_{m}}{se_{u}}<1$ & 0.67 & 0.93 & 0.65 & 0.74 & 0.76 & 0.69 & 0.85 & 0.56 & 0.42 \\
& $\frac{se_{m}}{se_{u}}>1$ & 0.33 & 0.07 & 0.35 & 0.26 & 0.24 & 0.31 & 0.15 & 0.44 & 0.58 \\ \hline
\multirow{2}{*}{$dim2$} & $\frac{se_{m}}{se_{u}}<1$ & 0.28 & 0.48 & 0.45 & 0.36 & 0.48 & 0.41 & 0.31 & 0.54 & 0.62 \\
& $\frac{se_{m}}{se_{u}}>1$ & 0.72 & 0.52 & 0.55 & 0.64 & 0.52 & 0.59 & 0.69 & 0.46 & 0.38 \\ \hline
\end{tabular}}
\end{table}
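As an illustration of this evaluation (a sketch with our own, hypothetical array layout; the placeholder numbers are not the simulation results), the proportions reported in Table \ref{app_tab:st_error_comparison} can be computed as follows.
\begin{verbatim}
# Sketch: proportions of standard error ratios below/above one, pooled over
# simulation runs and an equidistant evaluation grid (names are ours).
import numpy as np

def ratio_proportions(se_multi, se_uni):
    """se_multi, se_uni: arrays of shape (n_runs, n_grid) per effect."""
    ratio = se_multi / se_uni
    return float(np.mean(ratio < 1)), float(np.mean(ratio > 1))

rng = np.random.default_rng(2)
se_m = rng.gamma(2.0, 0.1, size=(500, 100))   # placeholder standard errors
se_u = rng.gamma(2.0, 0.1, size=(500, 100))
print(ratio_proportions(se_m, se_u))
\end{verbatim}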
We conclude that the multivariate modeling approach can improve the mean estimation but is especially beneficial for the prediction of the random effects. In some cases, including weights in the multivariate scalar product might further improve the modeling.
\subsubsection*{Covariance Estimation}
Figure \ref{fig:app_sim_fpc} shows the \gls{mmse} values for the estimated eigenfunctions of the two random effects in the standard modeling scenario A of data settings 1 to 5. In general, we find that the leading eigenfunctions tend to have lower \gls{mmse} values. Especially the leading eigenfunctions of the smooth residual show a high accuracy. There seems to be more variance in the estimation accuracy for the subject-specific random effect. This can be confirmed with Figure \ref{fig:app_sim_eigfcts}, which contains the estimated eigenfunctions of all 500 simulation iterations for modeling scenario 1A (grey curves) and compares them to the data generating function (red curve). Overall we find that the modes can be reconstructed sufficiently well, albeit considerably better for the random smooth residual.
\begin{figure}
\caption{\gls{mmse} values of the estimated eigenfunctions in modeling scenario A of data settings 1 to 5.}
\label{fig:app_sim_fpc}
\end{figure}
Figure \ref{fig:app_sim_fpc} also allows us to compare the eigenfunction estimation across the different data settings. Keep in mind that scenario 5A is based on a different model, so the \gls{mmse} values are not directly comparable. We find that the eigenfunction estimation of the smooth residual suffers from strong heteroscedasticity (2A), whereas the subject-specific random effect seems unaffected. The same effect (even more pronounced) can be observed for sparse data (3A), where less information about the correlation within functions is available. On the other hand, $\bm{B}_{i}(t)$ suffers from the small number of individuals, as this can lead to a considerable departure from the modeling assumption when the independent draws of scores are not centred and decorrelated (4A). With 720 different grouping levels of the smooth residual, we typically get an empirical covariance for the random scores that is closer to the assumed covariance, and thus smaller differences in \gls{mmse} values compared to 1A. The estimation of the eigenfunctions is somewhat less accurate in scenario 5A as the weights of the scalar product have to be estimated as well. This additional uncertainty leads to a larger variance for the \gls{mmse} values and makes it harder to correctly identify the data generating modes of variation. Note that this does not affect the overall estimation of the random effects as discussed above.
\begin{figure}
\caption{Estimated eigenfunctions in scenario 1A for all 500 simulation iterations (grey curves). The red curves show the data generating eigenfunctions.}
\label{fig:app_sim_eigfcts}
\end{figure}
We observe similar trends for the \gls{mmse} values of the estimated eigenfunctions in scenario 6A, as presented in Figure \ref{fig:app_sim_fpc_sno}. Leading eigenfunctions tend to be more accurately estimated, and increasing the number of grouping levels ($\bm{B}_{i}(t)$: $25$, $\bm{C}_{ij}(t)$: $50$, $\bm{E}_{ijh}(t)$: $300$) seems to reduce the variance of the \gls{mmse} values.
\begin{figure}
\caption{\gls{mmse} values of the estimated eigenfunctions in scenario 6A.}
\label{fig:app_sim_fpc_sno}
\end{figure}
Figure \ref{fig:app_sim_vars} shows the \gls{mse} values of the estimated multivariate eigenvalues and measurement error variances. Compared to scenario 1A, the \gls{mse} values of the smooth residual are higher for scenarios 2A (strong heteroscedasticity) and 3A (sparse data), whereas the \gls{mse} values of the subject-specific random effect are higher for scenario 4A (correlated scores). This is in line with the findings for the eigenfunctions and the random effects. Scenario 5A is again not directly comparable, and the overall higher \gls{mse} values suggest that the uncertainty in the estimation of the weights of the scalar product influences the accuracy of the estimation of the eigenvalues. With regard to the error variance, the \gls{mse} values are comparable over the different scenarios except for the sparse data setting, where we find higher values.
\begin{figure}
\caption{\gls{mse} values of the estimated multivariate eigenvalues and measurement error variances.}
\label{fig:app_sim_vars}
\end{figure}
We conclude that modes of variation can be recovered well in most of the model scenarios. The nested random effect and its leading modes of variation can be well captured by the \gls{mfamm}.
\subsubsection*{Fixed Effects Estimation}
Figure \ref{fig:app_sim_eff_umse} shows the \gls{umse} values of the estimated effect functions in scenario 1A (blue). We find that the estimation of the functional intercept $\bm{f}_0(t)$ and the covariate effect of \texttt{order} $\bm{f}_1(t)$ yields low \gls{umse} values on both dimensions. The estimation of the other covariates ($\bm{f}_2(t)$ to $\bm{f}_4(t)$) and especially the interactions of \texttt{order} with the other covariates ($\bm{f}_5(t)$ to $\bm{f}_7(t)$) give larger \gls{umse} values and a higher variance of these values. The yellow boxplots show the corresponding values in scenario 4A, thus indicating that only the estimation of the intercept is affected by correlated and uncentred scores (as the product of a nonzero empirical score mean and the corresponding eigenfunctions is absorbed by the intercept). Figure \ref{fig:app_sim_esteff} plots all estimated effect functions against the data generating effect functions in scenario 1A. This suggests that the \gls{mfamm} can overall capture the characteristics of the true underlying effect functions.
\begin{figure}
\caption{\gls{umse} values of the estimated effect functions in scenarios 1A and 4A.}
\label{fig:app_sim_eff_umse}
\end{figure}
\begin{figure}
\caption{Estimated effect functions in scenario 1A for all 500 simulation iterations (grey curves). The red curves show the data generating effect functions.}
\label{fig:app_sim_esteff}
\end{figure}
Figure \ref{fig:app_sim_umse_eff_sno} shows the \gls{umse} values for each effect function in scenario 6A. Overall, the functional intercept $\bm{f}_0(t)$ and the covariate effect of \texttt{skill} $\bm{f}_1(t)$ have a higher estimation accuracy, as implied by lower \gls{umse} values. The other effect functions are also somewhat less pronounced, which is why the scaling with the inverse mean norm can give large \gls{umse} values. Figure \ref{fig:app_sim_esteff_sno} shows all estimated effect functions (grey curves) and the corresponding data generating functions in red. This plot suggests that especially on the dimensions $dim1, dim5$, and $dim6$ the effect estimation shows a larger variance across the simulation runs. These dimensions correspond to those dimensions in the data set ($elbow.x, shoulder.x,$ and $shoulder.y$ in Figure \ref{APPENDIXfig:snooker_univ_obs}) where the response functions are relatively constant over $t$ compared to the other dimensions.
\begin{figure}
\caption{\gls{umse} values of the estimated effect functions in scenario 6A.}
\label{fig:app_sim_umse_eff_sno}
\end{figure}
\begin{figure}
\caption{Estimated effect functions in scenario 6A for all 500 simulation iterations (grey curves). The red curves show the data generating effect functions.}
\label{fig:app_sim_esteff_sno}
\end{figure}
For all modeling scenarios, we use the average point-wise coverage to evaluate the $95\%$ \glspl{cb} of the estimated fixed effects (see Table \ref{app_tab:coverage_phon_sno}). For scenario 1A, we find the \glspl{cb} to cover the true effect $88-95\%$ of the time. Additional uncertainty, e.g.\ about the number of \glspl{fpc}, further reduces the coverage (for example 1B). Overall, the coverage of fixed effects by the corresponding \glspl{cb} is comparable over the different data settings (see Appendix Table \ref{app_tab:coverage_phon_sno}). In data settings 2 and 3 the uncertainty in the data is increased, which allows for wider \glspl{cb} and thus slightly better coverage. The averaged point-wise coverage in 4A is comparable to 1A except for the functional intercept. Here, the averaged point-wise coverages of the scalar and the functional intercept lie well below $70\%$, as the data setting makes it particularly hard to identify the true underlying mean (the scores of the random effects are not centred or decorrelated, and any deviation of the empirical score mean from zero is absorbed by the intercept). In scenario 5A, the coverage tends to be lower than for models based on an unweighted scalar product, possibly due to the added uncertainty from the estimation of the weights in the scalar product. We find the averaged point-wise coverage to be considerably lower than its nominal value in scenario 6A. In this scenario, we find that in many of the simulation runs the effects are wrongly estimated as constants (cf. also Figure \ref{fig:app_sim_esteff_sno}), which can lead to undercoverage also in scalar additive mixed models \citep{Grevencomment}.
\begin{table} \centering
\caption{Averaged point-wise coverage of the point-wise \glspl{cb} for the estimated effect functions and the averaged coverage of the scalar intercept $\beta_0^{(d)}$ for different scenarios. The coverage is averaged over the 500 simulation iterations and over 100 evaluation points along $t$.}
\label{app_tab:coverage_phon_sno}
\resizebox{\textwidth}{!}{
\begin{tabular}{cc|ccccccccc}
\\\hline
\hline
$d$ & Scenario & $\beta_0^{(d)}$ & $f_0^{(d)}(t)$ & $f_1^{(d)}(t)$ & $f_2^{(d)}(t)$ & $f_3^{(d)}(t)$ & $f_4^{(d)}(t)$ & $f_5^{(d)}(t)$ & $f_6^{(d)}(t)$ & $f_7^{(d)}(t)$ \\
\hline
\multirow{11}{*}{$dim1$} & 1A & $93.6$ & $90.6$ & $92.6$ & $89.5$ & $93.1$ & $93.9$ & $91.3$ & $89.0$ & $90.4$ \\
& 1B & $93.8$ & $83.5$ & $90.6$ & $86.6$ & $89.7$ & $90.2$ & $87.9$ & $86.6$ & $88.0$ \\
& 1C & $93.8$ & $90.7$ & $92.5$ & $89.5$ & $93.1$ & $94.1$ & $91.3$ & $88.9$ & $90.8$ \\
& 1D & $93.4$ & $87.7$ & $91.3$ & $86.2$ & $89.5$ & $91.3$ & $87.9$ & $87.1$ & $88.0$ \\
& 1E & $92.8$ & $88.7$ & $90.7$ & $86.9$ & $90.2$ & $91.3$ & $88.0$ & $86.9$ & $87.8$ \\
& 1F & $94.6$ & $92.2$ & $93.7$ & $90.1$ & $94.1$ & $94.7$ & $91.5$ & $90.3$ & $93.2$ \\
& 1U & $92.6$ & $90.8$ & $92.5$ & $90.2$ & $93$ & $92.6$ & $91.7$ & $89.5$ & $88.2$ \\
& 2A & $95.4$ & $92.8$ & $94.2$ & $90.6$ & $93.9$ & $94.8$ & $92.9$ & $90.5$ & $92.0$ \\
& 3A & $94.4$ & $89.2$ & $93.0$ & $91.2$ & $93.0$ & $91.7$ & $90.8$ & $92.4$ & $93.5$ \\
& 4A & $68.4$ & $48.7$ & $92.7$ & $89.7$ & $92.7$ & $93.4$ & $91.4$ & $89.4$ & $90.2$ \\
& 5A & $94.0$ & $88.4$ & $91.4$ & $89.8$ & $92.8$ & $92.7$ & $89.7$ & $91.4$ & $94.2$ \\ \hline
\multirow{11}{*}{$dim2$} & 1A & $93.4$ & $87.6$ & $93.3$ & $91.8$ & $94.6$ & $92.9$ & $93.3$ & $93.5$ & $94.1$ \\
& 1B & $95.0$ & $86.1$ & $93.1$ & $91.9$ & $94.0$ & $93.3$ & $93.6$ & $92.2$ & $92.5$ \\
& 1C & $93.8$ & $87.5$ & $93.2$ & $91.9$ & $94.9$ & $93.0$ & $93.4$ & $93.5$ & $94.0$ \\
& 1D & $91.2$ & $86.5$ & $90.7$ & $89.4$ & $89.4$ & $89.5$ & $90.0$ & $88.2$ & $90.0$ \\
& 1E & $91.0$ & $86.6$ & $90.8$ & $90.0$ & $90.0$ & $89.3$ & $90.2$ & $88.5$ & $90.0$ \\
& 1F & $93.4$ & $87.8$ & $92.7$ & $92.3$ & $93.7$ & $92.5$ & $93.2$ & $92.9$ & $94.0$ \\
& 1U & $93.2$ & $87.1$ & $92.3$ & $93.6$ & $94.7$ & $92.0$ & $94.0$ & $93.9$ & $93.6$ \\
& 2A & $94.8$ & $89.4$ & $92.5$ & $93.2$ & $94.5$ & $93.4$ & $93.8$ & $93.4$ & $92.7$ \\
& 3A & $94.6$ & $90.1$ & $92.7$ & $92.8$ & $95.8$ & $94.1$ & $94.0$ & $95.2$ & $91.9$ \\
& 4A & $68.6$ & $54.7$ & $93.2$ & $92.3$ & $94.3$ & $92.7$ & $93.3$ & $93.7$ & $93.8$ \\
& 5A & $90.4$ & $83.5$ & $88.1$ & $88.6$ & $93.9$ & $87.5$ & $86.7$ & $91.9$ & $91.4$ \\
\hline \\[-1.8ex]
\end{tabular} }
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{cc|cccccc}
\\\hline
\hline
$d$ & Scenario & $\beta_0^{(d)}$ & $f_0^{(d)}(t)$ & $f_1^{(d)}(t)$ & $f_2^{(d)}(t)$ & $f_3^{(d)}(t)$ & $f_4^{(d)}(t)$\\
\hline
$dim1$ & \multirow{6}{*}{6A} & $87.4$ & $84.9$ & $90.8$ & $80.7$ & $74.4$ & $76.0$ \\
$dim2$ & & $87.6$ & $88.3$ & $87.9$ & $86.9$ & $75.2$ & $81.4$ \\
$dim3$ & & $82.4$ & $85.3$ & $87.1$ & $84.2$ & $81.6$ & $71.7$ \\
$dim4$ & & $82.4$ & $82.2$ & $83.6$ & $78.5$ & $78.3$ & $74.1$ \\
$dim5$ & & $85.4$ & $82.1$ & $84.3$ & $84.3$ & $78.0$ & $76.3$ \\
$dim6$ & & $86.2$ & $74.1$ & $83.0$ & $84.2$ & $79.3$ & $65.7$ \\
\hline \\
\end{tabular} }
\end{table}
Figure \ref{fig:app_sim_coverage} shows the point-wise coverage averaged over the 500 simulation iterations in scenario 1A. It suggests that the coverage tends to lie below the nominal value in areas close to the boundary of the functional domain.
\begin{figure}
\caption{Point-wise averaged coverage over the functional index for the estimated effect functions in scenario 1A. The nominal value is $95\%$ (red dotted line).}
\label{fig:app_sim_coverage}
\end{figure}
\end{document}
|
\begin{document}
\title{On the growth of the $L^p$ norm of the Riemann zeta-function on the line $\operatorname{Re}(s)=1$}
\maketitle
\begin{abstract}
We prove that if $\delta>0$ and $p$ is real then
$$\sup_T \int_T^{T+\delta} \abs{\zeta(1+it)}^p dt <\infty,$$
if and only if $-1<p<1$. Furthermore, we show the omega estimates
\begin{gather*}
\int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm 1} dt = \Omega(\log \log \log T), \\ \int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm p} dt = \Omega((\log \log T)^{p-1}), \qquad (p>1)
\end{gather*}
which, with the exception of an additional $\log \log \log T$ factor in the second estimate, coincide with the conditional (under the Riemann hypothesis) order estimates.
We also prove weaker unconditional order estimates.
\end{abstract}
\tableofcontents
\section{Classical order and omega estimates}
The Riemann zeta-function on the line $\operatorname{Re}(s)=1$ has been studied by many authors, starting with the work of Hadamard \cite{Hadamard} and de la Vall{\'e}e-Poussin \cite{Poussin}, who proved that $\zeta(1+it) \neq 0$, which implies the prime number theorem. Assuming the Riemann hypothesis, Littlewood \cite{Littlewood} showed that
\begin{gather} \label{A1}
\zeta(1+it) \ll \log \log t, \qquad \zeta(1+it)^{-1} \ll \log \log t. \\ \intertext{Bohr and Landau \cite{BohrLandau1,BohrLandau2,BohrLandau3} proved the corresponding omega-estimates} \label{A2}
\zeta(1+it)=\Omega(\log \log t), \qquad \zeta(1+it)^{-1}=\Omega(\log \log t),
\end{gather}
unconditionally, so Littlewood's conditional bound is best possible. The best unconditional bounds are the estimates
\begin{gather} \label{A3}
\zeta(1+it) \ll (\log t)^{2/3}, \qquad \zeta(1+it)^{-1} \ll ( \log t)^{2/3} (\log \log t)^{1/3},
\end{gather}
of Vinogradov \cite{Vinogradov} and Korobov \cite{Korobov}. For a discussion of these results as well as the current record, see the recent paper of Granville-Soundararajan \cite{Sound}.
\section{The $L^p$ norm in short intervals}
\subsection{Bounds from below}
A related but less studied question concerns the $p$th moment of the Riemann zeta-function in short intervals. What can we say about
\begin{gather} \label{star}
\int_T^{T+\delta} \abs{\zeta(1+it)}^p dt?
\end{gather}
One of our recent results \cite[Theorem 7]{Andersson1} is the following.
\begin{thm} We have the following estimates for the $L^p$ norm, $p>0$, of the zeta-function and its inverse in short intervals:
\begin{align*}
(i)& \qquad \inf_{T} \p{\frac 1 {\delta} \int_T^{T+\delta} \abs{\zeta(1+it)}^p dt}^{1/p} = \frac{\pi^2 e^{-\gamma}}{24} \delta+\Oh{\delta^3}, \\
(ii)& \qquad \inf_{T} \p{\frac 1 \delta \int_T^{T+\delta} \abs{\zeta(1+it)}^{-p} dt}^{1/p} = \frac{e^{-\gamma}} 4 \delta+\Oh{\delta^3},
\end{align*}
for $\delta>0$. Furthermore, both estimates are valid if $\displaystyle \inf_T$ is replaced by $\displaystyle \liminf_{T \to \infty}$, and if $1+it$ is replaced by $\sigma+it$ and the infimum is also taken over $\sigma>1$.
\end{thm}
This result gives lower estimates for this integral. In particular, it shows that the infimum is strictly positive and thus gives an analogue of Hadamard and de la Vall\'ee Poussin's result on the non-vanishing of the Riemann zeta-function on the line $\operatorname{Re}(s)=1$.
As discussed in \cite{Andersson1}, this can be applied to the question of universality on the 1-line. It should be noted that, in a surprising turn of events, we recently managed to prove \cite{Andersson2} that a Voronin universality type result does in fact hold on the line $\operatorname{Re}(s)=1$ if, in addition to vertical shifts, we allow scaling in the argument and the addition of a positive constant in the range.
\subsection{Bounds from above}
Another question is whether, in analogy with Littlewood's and Bohr's results, we can obtain omega and order estimates for the quantity in \eqref{star}. This is the topic of the current paper. The first question is whether this quantity is bounded for some $p>0$. This is answered by the following theorem.
\begin{thm} \label{thm2}
We have for real $p$ that
\begin{gather}
\sup_T \int_T^{T+\delta} \abs{\zeta(1+it)}^p dt<\infty
\end{gather}
if and only if $-1<p<1$. Furthermore $\displaystyle \sup_T$ can be replaced by $\displaystyle \lim \sup_T$ and the same result holds.
\end{thm}
For $-1<p<1$ this implies similar results on the non-universality of the Riemann zeta-function on the 1-line as Theorem 1, which we proved in \cite{Andersson1}. More precisely, Theorem \ref{thm2} for $0<p<1$ gives an upper bound $M$ for the $L^p$-norm of the functions $\zeta_T(t):=\zeta(1+iT+it)$ on the interval $[0,\delta]$.
Thus the zeta-function cannot approximate any function $f$ with $L^p$ norm strictly greater than $M$. This gives another proof of the fact that the usual Voronin universality theorem does not extend to the line $\operatorname{Re}(s)=1$.
It is reasonable to expect that $T=-\delta/2$ should maximize this integral for $p>0$. It is clear that this would imply Theorem \ref{thm2} for positive values of $p$, since the integral with this value of $T$ is divergent exactly when
$p \geq 1$. We will not prove this, but rather leave it as an open problem. However, we will manage to prove the corresponding result for the following related integral
\begin{gather}
\sup_T \int \abs{\zeta(1+it)}^p \theta\p{\frac {t-T} \delta} dt
\end{gather}
whenever the Fourier transform
\begin{gather} \label{ft} \hat \theta(\tau)= \int_{-\infty}^\infty e^{- 2 \pi i \tau x} \theta(x)dx \end{gather}
is non negative. For the purpose of this paper we will choose the triangular function
\begin{gather} \label{thetaref}
\theta(x)=\begin{cases} 1-|x|, & |x| \leq 1,\\ 0, & |x|>1.\end{cases}
\end{gather}
For this integral kernel it is well known that its Fourier transform
\begin{gather} \label{ft2}
\hat \theta(\tau)= \frac{(\sin \pi \tau)^2}{\pi^2 \tau ^2}
\end{gather}
is non-negative\footnote{This is essentially the Fourier transform of the Fej\'er kernel}.
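For completeness, \eqref{ft2} can be verified by a direct computation, using that $\theta$ is even and supported on $[-1,1]$:
\begin{align*}
\hat \theta(\tau) &= \int_{-1}^{1} (1-\abs{x})\, e^{-2\pi i \tau x}\, dx = 2\int_{0}^{1} (1-x)\cos(2\pi \tau x)\, dx \\
&= \frac{1-\cos(2\pi\tau)}{2\pi^2\tau^2} = \frac{(\sin \pi \tau)^2}{\pi^2 \tau^2}.
\end{align*}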
\subsubsection{Integral kernels with non-negative Fourier transforms and Dirichlet series}
Before proving our main theorems we first prove some more general lemmas.
\begin{lem} \label{lem1}
Let
\begin{gather*}
L(s)=\sum_{n=1}^\infty a(n) n^{-s}
\end{gather*}
be a Dirichlet series absolutely convergent on $\operatorname{Re}(s)=\sigma$, where $a(n)=|a(n)| b(n)$ and where $b(n)$ is a completely multiplicative arithmetical function.
Then
\begin{gather*}
\limsup_{T \to \infty} \int_{T-\delta}^{T+\delta} \abs{L(\sigma+it)}^2 \, (\delta- |t-T|) dt = \int_{-\delta}^{\delta} \abs{\tilde L(\sigma+it)}^2 \, (\delta- |t|) dt, \\ \intertext{where}
\notag \tilde L(s)=\sum_{n=1}^\infty \abs{a(n)} n^{-s},
\end{gather*}
and where $\limsup_{T \to \infty}{}$ may be replaced by $\sup_T{}$. \end{lem}
\begin{proof} By using \eqref{ft}, \eqref{thetaref}, the fact that $\hat \theta(x)\geq 0$, and the triangle inequality, we see that
\begin{align}
\int_{T-\delta}^{T+\delta} \abs{L(\sigma+it)}^2 &\, (\delta-|t-T|) dt \notag \\
&= \sum_{m,n=1}^\infty \frac{a(n)\overline{a(m)}}{(nm)^\sigma} \int_{T-\delta}^{T+\delta} \p{\frac n m}^{-it} \delta \theta \p{\frac {t-T} \delta} dt, \notag \\
&=\sum_{m,n=1}^\infty \frac{a(n)\overline{a(m)}}{(nm)^\sigma} \p{\frac n m}^{-iT} \int_{-\delta}^{\delta} e^{-it \log \frac n m} \delta \theta \p{\frac t \delta} dt, \notag \\
&= \delta^2 \sum_{m,n=1}^\infty \frac{a(n)\overline{a(m)}}{(nm)^\sigma} \p{\frac n m}^{-iT} \hat \theta\p{\frac \delta {2 \pi} \log \frac n m}, \label{uiii2}
\\
&\leq \delta^2 \sum_{m,n=1}^\infty \frac{\abs{a(n)} \abs{a(m)}}{(nm)^\sigma} \hat \theta\p{\frac \delta {2 \pi} \log \frac n m}, \notag \\
&=\int_{-\delta}^{\delta} \abs{\tilde L(\sigma+it)}^2 \, (\delta- |t|) dt. \notag
\end{align}
By Kronecker's theorem we may choose $T$ such that
\begin{gather} \abs{b(P)-P^{iT}}<\varepsilon, \qquad (P \text{ prime}, \, P<N_0),
\end{gather}
and by choosing $\varepsilon$ sufficiently small and $N_0$ sufficiently large it follows, since $L(s)$ is absolutely convergent on the line $\operatorname{Re}(s)=\sigma$, that \eqref{uiii2} may be made as close to
\begin{gather*} \delta^2 \sum_{m,n=1}^\infty \frac{ \abs{a(n)} \abs{a(m)}}{(nm)^\sigma} \hat \theta\p{\frac{\delta}{2 \pi}\log \frac n m}=\int_{-\delta}^{\delta} \abs{\tilde L(\sigma+it)}^2 \, (\delta- |t|) dt
\end{gather*}
as we wish.
\end{proof}
From this Lemma we obtain the following result for the Riemann zeta-function.
\begin{lem} \label{lem2} Let $\sigma>1$ and $p \geq 0$. Then
\begin{align*}
(i)& & \sup_T \int_{T-\delta}^{T+\delta} \abs{\zeta(\sigma+it)}^p \, (\delta-|t-T|) dt&= \int_{-\delta}^{\delta} \abs{\zeta(\sigma+it)}^p \, (\delta-|t|) dt. \\
(ii)& & \sup_T \int_{T-\delta}^{T+\delta} \abs{\frac{\zeta(2 \sigma+2it)}{\zeta(\sigma+it)}}^p \, (\delta-|t-T|) dt &= \int_{-\delta}^{\delta} \abs{\zeta(\sigma+it)}^{p} \, (\delta-|t|) dt.
\end{align*}
Furthermore $\sup_T$ may be replaced by $\lim\sup_{T}$.
\end{lem}
\begin{proof} In general we have the following equality for $\operatorname{Re}(s)>1$
\begin{gather}
\zeta(s)^p=\prod_{P \text{ prime}} \left(1-P^{-s}\right)^{-p}=\sum_{n=1}^\infty d_{p}(n) n^{-s}.
\end{gather}
for any real number $p$, where $d_{p}(n)$ denotes the generalized divisor function. When $p\geq 0$ it follows that $d_p(n)$ is non-negative, from the fact that $d_p(n)$ is a multiplicative function and the fact that \begin{gather*} (1-P^{-s})^{-p} = \sum_{k=0}^\infty \binom {-p} k (-1)^k P^{-ks}, \\ \intertext{where} \binom {-p} k (-1)^k = \prod_{j=1}^k \frac{p+j-1}{j} \geq 0.\end{gather*}
Thus we may apply Lemma \ref{lem1} with $L(s)=\zeta(s)^{p/2}$, so that $a(n)=d_{p/2}(n)\geq 0$ and $\tilde L=L$, to obtain the first part of Lemma \ref{lem2}.
To prove the second part we use the following equality for $\operatorname{Re}(s)>1$
\begin{gather*}
\p{\frac{\zeta(2s)}{\zeta(s)}}^p=\prod_{P \text{ prime}} \left(1+P^{-s}\right)^{-p}=\sum_{n=1}^\infty \lambda(n) d_{p}(n) n^{-s}.
\end{gather*}
where $\lambda(n)=(-1)^{\nu(n)}$ and where $\nu(n)$ counts the number of prime factors (with multiplicity) of $n$. Applying this with exponent $p/2$, the coefficients $a(n)=\lambda(n)d_{p/2}(n)$ are the product of the non-negative function $d_{p/2}(n)$ and the completely multiplicative function $\lambda(n)$, so Lemma \ref{lem2} $(ii)$ follows from Lemma \ref{lem1}.
\end{proof}
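As a quick numerical illustration (not part of the proof; the helper names are ours), the following Python sketch computes $d_p(n)$ from its values on prime powers and checks the two Euler product identities above by truncating the Dirichlet series at a finite height.
\begin{verbatim}
# Sanity check of zeta(s)^p = sum d_p(n) n^{-s} and
# (zeta(2s)/zeta(s))^p = sum lambda(n) d_p(n) n^{-s} for real s > 1.
from sympy import factorint
from mpmath import zeta, mpf

def d_p(n, p):
    """Generalized divisor function: multiplicative with
    d_p(P^k) = prod_{j=1}^{k} (p + j - 1)/j  (non-negative for p >= 0)."""
    val = mpf(1)
    for _, k in factorint(n).items():
        for j in range(1, k + 1):
            val *= mpf(p + j - 1) / j
    return val

def liouville(n):
    """lambda(n) = (-1)^{number of prime factors with multiplicity}."""
    return (-1) ** sum(factorint(n).values())

p, sigma, N = 1.5, 2.0, 10000
s1 = sum(d_p(n, p) * mpf(n) ** (-sigma) for n in range(1, N + 1))
s2 = sum(liouville(n) * d_p(n, p) * mpf(n) ** (-sigma) for n in range(1, N + 1))
print(s1, zeta(sigma) ** p)                      # truncated sum vs zeta(2)^p
print(s2, (zeta(2 * sigma) / zeta(sigma)) ** p)  # truncated sum vs (zeta(4)/zeta(2))^p
\end{verbatim}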
\subsubsection{A stronger upper bound for $-1<p<1$.}
We will prove a somewhat stronger result than Theorem \ref{thm2} in the case $-1<p<1$.
\begin{thm} \label{thm3} Assume that $-1<p<1$ and $\delta>0$. Then
$$ 0.3 \frac{\delta^{1-|p|}}{1-|p|} \leq \limsup_{T \to \infty} \int_T^{T+\delta} \abs{\zeta(1+it)}^p dt \leq 12 \frac{\delta^{1-|p|}}{1-|p|} +6 \delta(1+\log(1+\delta)). $$
\end{thm}
\begin{proof}
Let $0 \leq q <1$. From Lemma \ref{lem2}, by letting $\sigma \to 1^+$ and using continuity, it follows that
\begin{gather} \label{ineq1}
I\p{\frac\delta 2,q} \leq \limsup_{T \to \infty} \int_T^{T+\delta} \abs{\zeta(1+it)}^q dt \leq 2I(\delta,q), \\
\intertext{and} \label{ineq2}
I\p{\frac\delta 2,q} \leq \limsup_{T \to \infty} \int_T^{T+\delta} \abs{\frac{\zeta(2+2it)}{\zeta(1+it)}}^q dt \leq 2I(\delta,q), \\ \intertext{where} \notag
I(\delta,q):=
\int_{-\delta}^{\delta}\abs{\zeta(1+it)}^q \p{1-\frac{|t|} \delta} dt.
\end{gather}
By the Laurent expansion at $s=1$ of the Riemann zeta-function
and the bound $|\zeta(1+it)| \leq 1+\log(|t|+1)$ valid for $|t| \geq 1$ we have that
\begin{gather*}
{|t|}^{-q} \leq \abs{\zeta(1+it)}^q \leq \abs{t}^{-q}+1+\log(|t|+1),
\qquad (0 \leq q \leq 1),
\end{gather*}
and it follows by integrating this inequality that
\begin{gather} \label{ineq3} \frac{2\delta^{1-q}}{(1-q)(2-q)} \leq I(\delta,q) \leq \frac{2\delta^{1-q}}{(1-q)(2-q)}+ \delta(1+ \log(\delta+1)).
\end{gather}
Our theorem follows for $p=q \geq 0$ from \eqref{ineq1} and \eqref{ineq3}, with the somewhat stronger lower constant $0.5$ (rather than $0.3$) and upper constant $4$ (rather than $12$). If $-1<p <0$ we let $q=-p$ and our result follows from \eqref{ineq2}, \eqref{ineq3} and the inequality
\begin{gather} \frac 1 3< \abs{\zeta(2+2it)}<\frac 5 3, \qquad (t \in \R),
\label{line2} \end{gather}
which follows from the Euler product bounds $\zeta(4)/\zeta(2) \leq \abs{\zeta(2+2it)} \leq \zeta(2)$.
\end{proof}
\section{Order estimates and Omega estimates}
For the final proof of Theorem \ref{thm2} we need that the $p\,$th moment of the Riemann zeta-function in short intervals on the 1-line is unbounded for $|p| \geq 1$. While this in fact follows from Theorem \ref{thm3} by letting $p \to 1^-$, we are interested in obtaining more precise omega and order estimates that answer the question of how fast such integrals may grow.
\subsection{Order estimates}
\begin{thm} \label{thm4} Assuming the Riemann hypothesis, for any $\delta>0$ we have
\begin{gather*}
\int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm 1} dt = O(\log \log \log T).
\end{gather*}
\end{thm}
\begin{proof}
We have that
\begin{gather*}
\int_T^{T+\delta}\abs{\zeta(1+it)}^{\pm 1}dt \leq \sup_{t \in [T,T+\delta]} \abs{\zeta(1+it)^{\pm 1}}^{1-p} \int_T^{T+\delta}\abs{\zeta(1+it)}^{\pm p} dt.
\end{gather*}
The result follows from choosing $p=1-1/\log \log \log T$, invoking Littlewood's bound \eqref{A1} on the first factor and using Theorem \ref{thm3} on the remaining integral.
\end{proof}
\begin{thm} \label{thm5} For any $\delta>0$ we have
\begin{gather*}
\int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm 1} dt = O(\log \log T).
\end{gather*}
\end{thm}
\begin{proof}
This follows from Theorem \ref{thm3} in the same way as Theorem \ref{thm4}, but by choosing $p=1-1/\log \log T$ and invoking the Vinogradov--Korobov estimate \eqref{A3} instead of Littlewood's bound \eqref{A1}.
\end{proof}
\begin{thm} \label{thm6} Assuming the Riemann hypothesis, for any $\delta>0$ and $p>1$ the following bound holds:
\begin{gather*}
\int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm p} dt = O(\log \log \log T \hskip 1pt (\log \log T)^{p-1}).
\end{gather*}
\end{thm}
\begin{proof}
We have that
\begin{gather*}
\int_T^{T+\delta}\abs{\zeta(1+it)}^{\pm p}dt \leq \sup_{t \in [T,T+\delta]} \abs{\zeta(1+it)^{\pm 1}}^{p-1} \int_T^{T+\delta}\abs{\zeta(1+it)}^{\pm 1} dt.
\end{gather*}
The result follows from Theorem \ref{thm4} and Littlewood's bound \eqref{A1}.
\end{proof}
\begin{thm} For any $\delta>0$ and $p>1$ the following bound holds:
\begin{gather*}
\int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm p} dt = O(\log \log T \, (\log T)^{\frac{2}{3}(p-1)}).
\end{gather*}
\end{thm}
\begin{proof}
This follows from Theorem \ref{thm5} in the same way as Theorem \ref{thm6} follows from Theorem \ref{thm4}, by invoking the Vinogradov--Korobov estimate \eqref{A3} instead of Littlewood's bound \eqref{A1}.
\end{proof}
\subsection{Omega estimates}
We have the following omega estimates.
\begin{thm} \label{thm8} We have for any fixed $\delta>0$ that
\begin{align*}
&(i) & \qquad \int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm 1} dt&=\Omega(\log \log \log T), \\
&(ii) & \qquad \int_T^{T+\delta} \abs{\zeta(1+it)}^{\pm p} dt&=\Omega(( \log \log T)^{p-1}), \qquad (p>1).
\end{align*}
\end{thm}
We would like to remark that Theorem \ref{thm8} with $p=2$ answers a question of Weber \cite[Problem 6.4]{Weber} in the case $\sigma=1$ in the affirmative\footnote{Ram\={u}nas Garunk\v{s}tis remarked that the case $1/2<\sigma<1$ in Weber's problem follows as a direct consequence of the Voronin universality theorem.}.
\begin{proof} Again we are going to use a convolution. Since it is more convenient to have the compact support on the sum side we will consider convolution by the Fourier transform of the triangular function (and higher order convolutions of the triangular function). Define recursively
\begin{align}
\theta_1(x)&=\theta(x) \\
\theta_n(x)&=(\theta_{n- 1}* \theta)(x)= \int_{-\infty}^\infty \theta_{n-1}(t) \theta(x-t) dt
\end{align}
It is clear that $0 \leq \theta_n(x) \leq 1$ is a continuous function with support on $[-n,n]$ and by \eqref{ft2} it is clear that its Fourier transform satisfies
\begin{gather} \label{ere}
\hat \theta_n(t) = (\hat \theta(t))^n = \p{\frac{\sin \pi t}{ \pi t}}^{2n}.
\end{gather}
Consider
\begin{align} \label{io2} \zeta_{n,N}^{p}(s)&:= \int_{-\infty}^\infty \p{\zeta\p{s+i\frac{x} N}}^p \hat \theta_n (x) dx, \qquad \operatorname{Re}(s)>1 \\ \intertext{and}
\label{io3} Z_{n,N}^{p}(s)&:= \int_{-\infty}^\infty
\p{\frac {\zeta\p{2s+2i\frac{x} N}} {\zeta\p{s+i\frac{x} N}} }^{p} \hat \theta_n (x) dx, \qquad \operatorname{Re}(s)>1
\end{align}
where the functions are defined by continuous extension when $\operatorname{Re}(s)=1$ and
where $\hat \theta_n(x)$ is given by \eqref{ere}. In particular $ \zeta_{n,N}^{p}$ is a smoothed version of the $p$th power of the usual Riemann zeta-function. From now on we choose $n>p/2$ to be an integer.
It follows from the convolution \eqref{io2} and the Laurent expansion of the zeta-function at $s=1$\footnote{The integrals \eqref{io2} and \eqref{io3} should here be interpreted as the limit when $s \to 1^+$.} that
\begin{gather*}
\abs{\zeta_{n,N}^{p}(1+it)} = t^{-p} \p{1+O(t)+O((Nt)^{p-2n})}, \qquad (N^{-1} \leq t \leq 1).
\end{gather*}
Thus, since $p<2n$, by calculus, we have for fixed $\delta>0$ and $p \geq 1$ that
\begin{gather} \int_0^{\delta} \abs{\zeta_{n,N}^{p}(1+it)} dt \gg \begin{cases} \log N+O(1), & p=1, \\ N^{p-1}, & p>1. \end{cases} \label{ire}
\end{gather}
By \eqref{io2}, \eqref{io3} and \eqref{thetaref} we have the Dirichlet series expansions
\begin{gather}
\zeta_{n,N}^{p}(s)=\sum_{j=1}^{\lfloor \exp(nN) \rfloor} \frac {d_{p}(j)} {j^s} \theta_n \p{\frac{\log j} N}, \label{oj1} \\ \intertext{and}
Z_{n,N}^{p}(s)=\sum_{j=1}^{\lfloor \exp(nN) \rfloor} \frac {d_{p}(j)\lambda(j)} {j^s} \theta_n \p{\frac{\log j} N}, \label{oj2}
\end{gather}
By \eqref{ere}, \eqref{io2}, \eqref{io3} and the triangle inequality it follows for $T \geq T_0$ sufficiently large and $N \geq 1$ that
\begin{gather} \label{aj}
0.1 \int_{T}^{T+\delta} \abs{\zeta_{n,N}^{p}(1+it)} dt \leq \max_{T/2 \leq X \leq 2T} \int_{X}^{X+\delta} \abs{\zeta(1+it)}^{p} dt, \\ \intertext{and} \label{ajajaj}
0.1 \int_{T}^{T+\delta} \abs{Z_{n,N}^{p}(1+it)} dt \leq \max_{T/2 \leq X \leq 2T} \int_{X}^{X+\delta} \abs{\frac{\zeta(2+2it)} {\zeta(1+it)}}^{p} dt.
\end{gather}
Thus it is sufficient to bound the left-hand side of \eqref{aj} from below. By Dirichlet's approximation theorem there exists for each $N > 0$ some \begin{gather} \label{TNineq} 0 \leq T_N \leq N^{4 \pi(\exp(nN))}, \end{gather} where $\pi(\exp(nN))$ here denotes the number of primes less than $\exp(nN)$,
such that
\begin{gather} \label{io}
\operatorname{dist}\p{\frac{T_N \log P}{2 \pi}, \mathbb{Z}}<N^{-4}, \qquad (P \text{ prime}, 2 \leq P \leq \exp(nN)).
\end{gather}
It follows from \eqref{io} that\footnote{The worst case is when $j$ is a power of $2$.}
\begin{gather}
\abs{j^{i{T_N}}-1}<N^{-2}, \qquad (1 \leq j \leq \exp(nN))
\end{gather}
For such a $T_N$ it follows from \eqref{oj1} that
\begin{gather} \label{aj99a}
\abs{\zeta_{n,N}^p(1+iT_N+it)-\zeta_{n,N}^p(1+it)} \leq N^{-1}, \qquad (t \in \R)
\end{gather}
Noticing that \eqref{TNineq} implies $\log \log T_N \ll N$, it follows from the inequalities \eqref{ire}, \eqref{aj} and \eqref{aj99a} that
\begin{gather*}
\max_{T_N/2 \leq X \leq 2T_N} \int_{X}^{X+\delta} \abs{\zeta(1+it)}^{p} dt \gg \begin{cases} \log \log \log T_N, & p=1, \\ (\log \log T_N)^{p-1}, & p>1, \end{cases}
\end{gather*}
which gives our result for $p \geq 1$.
For negative moments $p \leq -1$ we need some corresponding result for $Z_{n,N}^p(s)$. It is not sufficient to use the Dirichlet approximation theorem directly and we need some effective variant of the Kronecker approximation theorem. Bohr and Landau \cite{BohrLandau1,BohrLandau3}\footnote{see also the discussion in \cite[p.182]{Steuding}} proved that
\begin{gather}
\abs{P^{iT}+1}<\frac 1 M, \qquad (1 \leq P \leq M, \, P \text{ prime}),
\end{gather}
holds for some $0<T<\exp(M^6)$. By using $M=\lfloor \exp(nN) \rfloor$ and $T_N=T$ it follows from \eqref{oj2} that
\begin{gather} \label{aj99b}
\abs{Z_{n,N}^p(1+iT_N+it)-\zeta_{n,N}^{p}(1+it)} \leq 1, \qquad (t \in \R)
\end{gather}
holds for some $0 \leq T_N \leq \exp(\exp(6nN))$. Thus by combining \eqref{ire}, \eqref{ajajaj}, \eqref{TNineq}, \eqref{line2} and \eqref{aj99b}
it follows that
\begin{gather*}
\max_{T_N/2 \leq X \leq 2T_N} \int_{X}^{X+\delta} \abs{\zeta(1+it)}^{-p} dt \gg \begin{cases} \log \log \log T_N, & p=1, \\ (\log \log T_N)^{p-1}, & p>1. \end{cases}
\end{gather*}
\end{proof}
\section{Proof of Theorem \ref{thm2}}
Theorem \ref{thm2} follows from Theorem \ref{thm3} and Theorem \ref{thm8}. \qed
\end{document}
|
\begin{document}
\title{Explicit laws of large numbers for random nearest-neighbour type graphs}
\author{Andrew R.~Wade\footnote{e-mail: \texttt{[email protected]}}\\
\normalsize
Department of Mathematics, University of Bristol,\\
\normalsize
University Walk, Bristol BS8 1TW, England.}
\date{February 2007}
\maketitle
\begin{abstract}
Under the unifying umbrella of a general result of Penrose \& Yukich
[\emph{Ann. Appl. Probab.}, (2003) {\bf 13}, 277--303]
we give laws of large numbers (in the $L^p$ sense)
for the total power-weighted length of several nearest-neighbour type graphs
on random point sets
in ${\bf R }^d$, $d\in{\bf N}$. Some of these results are known; some are new.
We give limiting constants explicitly, where previously they have been evaluated in less
generality or not at all. The graphs we consider include
the $k$-nearest neighbours graph, the Gabriel graph,
the minimal directed spanning forest, and the on-line nearest-neighbour
graph.
\end{abstract}
\vskip 3mm
\noindent
{\em Key words and phrases:} Nearest-neighbour type graphs; laws of large numbers;
spanning forest; spatial network evolution.
\vskip 3mm
\noindent
{\em AMS 2000 Mathematics Subject Classification:} 60D05, 60F25.
\section{Introduction}
Graphs constructed
on
random point sets in ${\bf R }^d$
($d \in{\bf N}$), formed
by joining nearby points
according to some deterministic rule, have recently received
considerable interest \cite{p1,st,yu}. Such graphs include the geometric graph,
the minimal spanning tree, and (as studied in this paper)
the nearest-neighbour graph and its relatives. Applications include the modelling
of spatial networks, as well as statistical procedures.
The graphs in this paper are based on edges between nearest neighbours, sometimes
in some restricted sense. A unifying characteristic of these graphs is {\em stabilization}: roughly speaking,
the configuration of edges around any particular vertex is not affected by changes to the vertex
set outside of some sufficiently large (but finite) ball. Thus these graphs are locally determined in some sense.
A functional of particular interest is the total edge length of the graph, or, more generally, the total
power-weighted edge length (i.e.~the sum of the edge lengths each raised to a given power $\alpha \geq 0$). The
large-sample asymptotic theory for power-weighted length of
stabilizing graphs is now well understood;
see e.g.~\cite{kl,p1,p2,py1,py2,st,yu}.
In the present paper we collect several laws of large numbers (LLNs) for total
power-weighted length
from the family of
nearest-neighbour type graphs, defined on independent random points
on ${\bf R }^d$. We present these results as corollaries to
a general umbrella theorem of Penrose \& Yukich \cite{py2}. Some of the results
(for the most common graphs) are known to various extents
in the literature; others are new. We take a unified
approach which highlights the connections between these results.
In particular, all our results are explicit: we give explicit expressions for limiting constants.
In some cases these constants have been seen
previously in the literature.
Nearest-neighbour graphs and nearest-neighbour
distances in ${\bf R }^d$
are of interest in several areas of applied science, including
the social sciences, geography and ecology, where proximity data are often
important (see e.g.~\cite{ko,pi}). Ad-hoc networks, in which nodes scattered in space
are connected according to some geometric rule, are of interest
with respect to various types of communication networks. Quantities of interest
such as overall network throughput may be related to power-weighted length.
In the analysis of multivariate data, in particular via non-parametric
statistics, nearest-neighbour graphs and near-neighbour
distances have found many applications, including
goodness of fit tests, classification, regression, noise estimation,
density estimation, dimension
identification, cluster analysis,
and the two-sample and multi-sample problems;
see for example \cite{bb,bqy,ej,fr,h,hv,t}
and references therein.
In this paper we give a new LLN for the total power-weighted length
of the {\em on-line nearest-neighbour
graph} (ONG), which is one of the simplest models of network evolution.
We give a detailed description later. In the ONG on a sequence
of points arriving in ${\bf R }^d$,
each point after the first is joined by an edge
to its nearest predecessor.
The ONG appeared
in \cite{bbbcr}
as a simple model for the evolution of the Internet graph. Figure \ref{ongmdst} shows a sample
realization of an
${\rm {\textrm ONG}}$.
Recently, graphs with an
`on-line' structure, in which vertices are added
one by one and connected to existing vertices via some rule,
have been the subject of considerable study in relation to the
modelling of real-world networks. The ONG is one of the simplest
network evolution models that
captures some of the observed characteristics of real-world networks,
such as spatial structure and sequential growth.
We also consider the {\em minimal directed spanning forest} (MDSF).
The MDSF
is constructed on a partially ordered point set in ${\bf R }^d$ by
connecting each point to its nearest neighbour amongst those
points (if any) that precede it in the partial order. If an MDSF is
a tree, it is called a {\em minimal directed spanning tree} (MDST).
The MDST was introduced by Bhatt \& Roy in \cite{br} as a model for drainage
or communications networks, in $d=2$, with the `coordinatewise' partial order
$\preccurlyeqstar$, such that $(x_1,y_1) \preccurlyeqstar (x_2,y_2)$ iff $x_1 \leq x_2$ and $y_1 \leq y_2$.
In this version of the MDSF, each point is joined by an edge to its nearest neighbour
in its `south-westerly' quadrant.
In the present paper we give new LLNs for the total power-weighted
length
for
a family of MDSFs indexed by partial orderings
on ${\bf R }^2$, which include $\preccurlyeqstar$ as a special case.
Figure \ref{ongmdst}
shows an example of an MDSF under $\preccurlyeqstar$.
\begin{figure}
\caption{Realizations of the ONG (left) and MDSF under $\preccurlyeqstar$ (right), each
on 50 simulated uniform random points in the unit square.}
\label{ongmdst}
\end{figure}
\section{Notation and results}
Notions of {\em stabilizing} functionals
of point sets have recently proved to be a useful basis for
establishing limit
theorems
for functionals
of random
point sets in ${\bf R }^d$.
In particular, Penrose \& Yukich \cite{py1,py2}
prove general central limit theorems and laws of large numbers
for stabilizing functionals.
The LLNs we give in the present paper
are all derived ultimately from Theorem 2.1 of \cite{py2}, which we restate
as Theorem
\ref{llnpenyuk} below, before we present our results.
In order to describe the result of \cite{py2}, we need to introduce
some notation. Let $d\in {\bf N}$.
Let $\| \cdot\|$
be the Euclidean norm on ${\bf R }^d$. Write ${\rm card}({\cal X})$ for the cardinality
of a finite set ${\cal X} \subset {\bf R }^d$.
For a locally finite
point set
${\cal X} \subset {\bf R }^d$,
$a>0$, and ${\bf y} \in
{\bf R }^d$, let ${\bf y}+a{\cal X}$ denote the set $\{ {\bf y} + a{\bf x}
: {\bf x} \in {\cal X}\}$.
Let $B({\bf x};r)$ denote the closed
Euclidean ball
with centre ${\bf x} \in{\bf R }^d$ and radius $r>0$. Let ${\bf 0}$ denote the origin in ${\bf R }^d$.
Let $\xi ( {\bf x} ; {\cal X} )$ be a measurable $[0,\infty)$-valued function
defined for all pairs $({\bf x} , {\cal X})$, where ${\cal X} \subset {\bf R }^d$ is finite
and ${\bf x} \in {\cal X}$. Assume $\xi$ is translation invariant, that is,
for all ${\bf y} \in {\bf R }^d$,
$\xi( {\bf y} + {\bf x}; {\bf y}+{\cal X}) = \xi ({\bf x} ; {\cal X})$.
When ${\bf x} \notin {\cal X}$, we abbreviate the notation
$\xi({\bf x};{\cal X} \cup \{{\bf x}\})$ to $\xi({\bf x};{\cal X})$.
For our applications, $\xi$ will be {\em homogeneous of order} $\alpha \geq 0$,
that is $\xi ( r {\bf x} ; r {\cal X}) = r^\alpha \xi({\bf x};{\cal X})$
for all $r>0$, all finite
point sets ${\cal X}$, and all ${\bf x} \in {\cal X}$.
For any locally finite point set ${\cal X} \subset {\bf R }^d$ and any $\ell
\in {\bf N}$ define
\begin{eqnarray*} \xi^+({\cal X};\ell) := \sup_{k \in {\bf N}} \left( \mathrm{ess}
\sup \left\{ \xi({\bf 0};({\cal X} \cap B({\bf 0};\ell)) \cup \mathcal{A}^*) : \mathcal{A}
\in ( {\bf R }^d
\backslash B({\bf 0};\ell))^k
\right\} \right) \textrm{, and} \\
\xi^-({\cal X};\ell) := \inf_{k \in {\bf N}} \left( \mathrm{ess}
\inf \left\{ \xi({\bf 0};({\cal X} \cap B({\bf 0};\ell)) \cup \mathcal{A}^*) : \mathcal{A}
\in ( {\bf R }^d
\backslash B({\bf 0};\ell))^k
\right\} \right) , \end{eqnarray*}
where for $\mathcal{A} = ({\bf x}_1,\ldots,{\bf x}_k) \in ({\bf R }^d)^k$
we put $\mathcal{A}^*= \{ {\bf x}_1, \ldots, {\bf x}_k \}$ (provided all $k$ vectors are distinct).
Define the {\em limit} of
$\xi$ on ${\cal X}$ by
\[ \xi_{\infty}({\cal X}) := \limsup_{\ell \to \infty}
\xi^+({\cal X};\ell) . \] We say the functional $\xi$ \emph{stabilizes} on
${\cal X}$ if
\begin{eqnarray*}
\lim_{\ell \to \infty} \xi^+({\cal X};\ell) = \lim_{\ell \to \infty}
\xi^-({\cal X};\ell) = \xi_{\infty} ({\cal X}) .
\end{eqnarray*}
Stabilization can be interpreted loosely as the property
that
the value of the functional at a point is unaffected by changes in the
configuration of points at a sufficiently large distance from that point.
Let $f$ be a probability density function on ${\bf R }^d$.
For $n \in {\bf N}$
let ${\cal X}_n := ({\bf X}_1,{\bf X}_2,\ldots,{\bf X}_n)$ be the point process consisting of $n$ independent
random $d$-vectors
with common density $f$.
With
probability one, ${\cal X}_n$
has
distinct
inter-point distances; hence
all the
nearest-neighbour type graphs on ${\cal X}_n$ that
we consider
are almost
surely unique.
Let ${\cal H}_1$ be a homogeneous Poisson
point process of unit intensity on ${\bf R }^d$.
The following general LLN
is due to Penrose \& Yukich, and is obtained from Theorem 2.1 of \cite{py2}
together with equation (2.9) there (the homogeneous case).
\begin{theorem}
\label{llnpenyuk} Let $q \in \{1,2\}$. Suppose that $\xi$ is homogeneous of order $\alpha$
and
almost surely stabilizes on ${\cal H}_1$, with limit
$\xi_{\infty}({\cal H}_1)$.
If $\xi$ satisfies the moments condition \begin{equation}
\label{moms}
\sup_{n
\in {\bf N}} E [ \xi ( n^{1/d} {\bf X}_1 ;n^{1/d}
{\cal X}_n ) ^p ] < \infty, \end{equation}
for some $p>q$, then
as $n \to \infty$,
\begin{eqnarray*}
n^{-1}
\sum_{{\bf x} \in {\cal X}_n} \xi( n^{1/d} {\bf x}; n^{1/d} {\cal X}_n )
\stackrel{L^q}{\longrightarrow} E [ \xi_\infty ( {\cal H}_1)]
\int_{{\rm supp }(f)} f({\bf x}) ^{(d-\alpha)/d} {\rm d} {\bf x} ,
\end{eqnarray*}
and the limit is finite. \end{theorem}
From this result we will derive LLNs for the total power-weighted length
for a collection of nearest-neighbour type graphs.
Let $j \in {\bf N}$.
A point ${\bf x} \in {\cal X}$ has
a $j$-th {\em nearest neighbour} ${\bf y} \in {\cal X} \setminus \{ {\bf x}\}$ if
${\rm card} (\{ {\bf z} : {\bf z} \in {\cal X} \setminus \{ {\bf x} \}, \|{\bf z}-{\bf x}\| <
\|{\bf y} - {\bf x}\| \}) = j-1$.
For all ${\bf x},{\bf y} \in {\bf R }^d$ we define the
weight function
\begin{eqnarray*}
w_\alpha ({\bf x},{\bf y}) := \| {\bf x}-{\bf y} \|^\alpha ,\end{eqnarray*}
for some fixed parameter $\alpha \geq 0$.
By the total power-weighted edge length of a graph
with edge set $E$ (where
edges may be directed or undirected),
we mean the functional
\[ \sum_{ ({\bf u}, {\bf v}) \in E} w_\alpha ( {\bf u},{\bf v})
= \sum_{ ({\bf u},{\bf v}) \in E} \| {\bf u}-{\bf v}\|^\alpha .\]
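In computational terms this functional is straightforward to evaluate from an edge list. The short Python sketch below (an illustration only, assuming the NumPy library; it plays no role in the proofs) computes the total power-weighted length of a graph whose points are stored as the rows of an array.
\begin{verbatim}
import numpy as np

def total_power_weighted_length(points, edges, alpha=1.0):
    """Sum of ||u - v||^alpha over the listed edges.

    points: (n, d) array of coordinates; edges: iterable of index pairs.
    """
    pts = np.asarray(points, dtype=float)
    return sum(np.linalg.norm(pts[u] - pts[v]) ** alpha for u, v in edges)
\end{verbatim}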
We will often assume one
of the following conditions on the function $f$ --- either
\begin{itemize}
\item[(C1)] $f$ is supported by a convex polyhedron in ${\bf R }^d$ and is bounded away from $0$
and infinity on its support; or
\item[(C2)] for weight exponent $\alpha \in [0,d)$, we require that $\int_{{\bf R }^d} f({\bf x}) ^{(d-\alpha)/d} {\rm d} {\bf x} < \infty$
and $\int_{{\bf R }^d} \|{\bf x}\|^r f({\bf x}) {\rm d} {\bf x} < \infty$ for some $r>d/(d-\alpha)$.
\end{itemize}
In some cases, we take
$f({\bf x})=1$ for ${\bf x} \in (0,1)^d$ and $f({\bf x})=0$ otherwise, in which case we denote
${\cal X}_n ={\cal U}_n=({\bf U}_1,{\bf U}_2,\ldots,{\bf U}_n)$, the binomial
point process consisting
of $n$ independent uniform random vectors
on $(0,1)^d$.
In the remainder of this section we present our LLNs
derived from Theorem \ref{llnpenyuk}.
Theorems \ref{llnthm}, \ref{nngu}, and \ref{ggthm}
follow directly from Theorem \ref{llnpenyuk} and results in
\cite{py2}, up to evaluation of constants,
while Theorems \ref{onngthm} and \ref{mdstlln} need some more work.
These results are natural companions, as are their proofs,
which we present in Section \ref{proofs} below; in particular
the proof of Theorem
\ref{llnthm} is useful for the other proofs.
\subsection{The $k$-nearest neighbours and $j$-th
nearest neighbour graphs}
\label{subsecknng}
Let $j \in {\bf N}$. In the
$j$-th nearest-neighbour (directed) graph
on ${\cal X}$, denoted by
${j{\rm {\textrm -th~NNG}}}' ({\cal X})$, a directed edge joins each point of ${\cal X}$
to
its $j$-th nearest-neighbour.
Let $k \in {\bf N}$. In the $k$-nearest neighbours
(directed) graph on ${\cal X}$,
denoted ${k{\rm {\textrm -NNG}}}'({\cal X})$, a directed edge
joins
each point
of ${\cal X}$
to each of its first $k$ nearest neighbours in ${\cal X}$
(i.e.~each of its $j$-th nearest neighbours for
$j=1,2,\ldots,k$). Clearly the $1$-st NNG$'$ and the $1$-NNG$'$
coincide, giving the standard nearest-neighbour (directed) graph. See Figure \ref{jkfig}
for realizations of a particular ${j{\rm {\textrm -th~NNG}}}'$ and ${k{\rm {\textrm -NNG}}}'$.
\begin{figure}
\caption{Realizations of the $3$-rd NNG$'$ (left) and
$5$-NNG$'$ (right), each
on 50 simulated uniform random points in the unit square.}
\label{jkfig}
\end{figure}
We also consider the $k$-nearest neighbours
(undirected) graph on ${\cal X}$, denoted by
${k{\rm {\textrm -NNG}}}({\cal X})$, in which
an undirected edge joins
${\bf x}, {\bf y}\in {\cal X}$ if ${\bf x}$ is one of the first $k$ nearest
neighbours of ${\bf y}$, or ${\bf y}$ is one of the first $k$ nearest neighbours
of ${\bf x}$ (or both).
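For concreteness, the directed graphs just described can be generated with a $k$-d tree. The following Python sketch (illustration only, assuming NumPy and SciPy; the function names are ours and not standard terminology) returns the edge lists of the ${k{\rm {\textrm -NNG}}}'$ and the ${j{\rm {\textrm -th~NNG}}}'$.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def knng_directed_edges(points, k):
    """Directed edges (i, j): j is one of the first k nearest neighbours of i."""
    pts = np.asarray(points, dtype=float)
    # query k+1 neighbours because each point is returned as its own
    # 0-th neighbour (at distance zero)
    _, idx = cKDTree(pts).query(pts, k=k + 1)
    return [(i, int(j)) for i in range(len(pts)) for j in idx[i, 1:]]

def jth_nng_directed_edges(points, j):
    """Directed edges joining each point to its j-th nearest neighbour."""
    pts = np.asarray(points, dtype=float)
    _, idx = cKDTree(pts).query(pts, k=j + 1)
    return [(i, int(idx[i, j])) for i in range(len(pts))]
\end{verbatim}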
From now on we take the point set ${\cal X}$ to be {\em
random}, in particular, for $n \in{\bf N}$, we take ${\cal X}={\cal X}_n$.
For $d\in {\bf N}$ and $\alpha \geq 0$, let
${\cal L}^{d,\alpha}_j({\cal X}_n)$ and ${\cal L}^{d,\alpha}_{\leq k}({\cal X}_n)$ denote,
respectively, the total power-weighted edge lengths of the $j$-th nearest-neighbour (directed)
graph and the $k$-nearest neighbours (directed) graph on ${\cal X}_n \subset
{\bf R }^d$.
Note that
\begin{eqnarray}
\label{0818a}
{\cal L}^{d,\alpha}_{\leq k} ({\cal X}_n) = \sum_{j=1}^k {\cal L}^{d,\alpha}_j ({\cal X}_n) .\end{eqnarray}
For $d \in {\bf N}$, we denote the
volume of the unit
$d$-ball (see e.g.~(6.50) in \cite{hu}) by
\begin{eqnarray}
\label{0818c}
v_d
: = \pi^{d/2} \left[ \Gamma \left( 1+ (d/2) \right) \right]^{-1}.
\end{eqnarray}
Theorems \ref{llnthm} and \ref{onngthm} below feature
constants $C(d,\alpha,k)$ defined for $d, k \in {\bf N}$, $\alpha \geq 0$ by
\begin{eqnarray}
\label{0903c}
C(d,\alpha,k) := v_d^{-\alpha/d} \frac{d}{d+\alpha}
\frac{\Gamma (k+1+(\alpha/d))}{\Gamma(k)}.
\end{eqnarray}
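Both $v_d$ and $C(d,\alpha,k)$ are elementary to evaluate numerically. The following Python lines (illustration only, using the standard library; they are not needed anywhere in the arguments) implement (\ref{0818c}) and (\ref{0903c}).
\begin{verbatim}
from math import gamma, pi

def v(d):
    """Volume of the unit d-ball v_d."""
    return pi ** (d / 2) / gamma(1 + d / 2)

def C(d, alpha, k):
    """The constant C(d, alpha, k) defined in the text."""
    return (v(d) ** (-alpha / d)) * (d / (d + alpha)) \
        * gamma(k + 1 + alpha / d) / gamma(k)

# For example, C(2, 1, 1) = 0.5.
\end{verbatim}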
Our first result is Theorem \ref{llnthm} below,
which gives LLNs for
${\cal L}^{d,\alpha}_j({\cal X}_n)$ and ${\cal L}^{d,\alpha}_{\leq k}({\cal X}_n)$,
with explicit expressions for the limiting
constants; it is the natural starting point for our LLNs for nearest-neighbour
type graphs.
Let ${\rm supp } (f)$ denote the support of $f$; under
(C1), ${\rm supp }(f)$ is a convex polyhedron, under
(C2) ${\rm supp }(f)$ is ${\bf R }^d$.
\begin{theorem}
\label{llnthm}
Let $d \in {\bf N}$.
The following results
hold, with
$p=2$, for $\alpha \geq 0$ if
$f$ satisfies condition (C1), and, with $p=1$, for $\alpha \in [0,d)$ if $f$ satisfies condition
(C2).
\begin{itemize}
\item[(a)]
For ${j{\rm {\textrm -th~NNG}}}'$
on ${\bf R }^d$
we have, as $n \to \infty$,
\begin{eqnarray}
\label{0818ff}
n^{(\alpha-d)/d} {\cal L}^{d,\alpha}_j
({\cal X}_n) \inLp v_d^{-\alpha/d} \frac{\Gamma( j+(\alpha/d))}{\Gamma(j)}
\int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x}.
\end{eqnarray}
\item[(b)]
For ${k{\rm {\textrm -NNG}}}'$
on ${\bf R }^d$
we have, as $n \to \infty$,
\begin{eqnarray}
\label{0818g}
n^{(\alpha-d)/d} {\cal L}^{d,\alpha}_{\leq k}
({\cal X}_n) \inLp C(d,\alpha,k) \int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x}.
\end{eqnarray}
In particular, as $n\to \infty$,
\begin{eqnarray}
\label{0924aa}
n^{(\alpha-d)/d} {\cal L}^{d,\alpha}_{\leq k}
({\cal U}_n) \inLp C(d,\alpha,k).
\end{eqnarray}
\end{itemize}
\end{theorem}
\noindent \textbf{Remarks. }
(a) If we use a different norm on ${\bf R }^d$ from the Euclidean, Theorem \ref{llnthm}
remains valid with $v_d$ redefined as the volume of the
unit $d$-ball in the chosen norm.
(b)
Theorem \ref{llnthm}
is essentially contained in Theorem 2.4 of
\cite{py2}, with the constants evaluated explicitly.
There are several related LLN results in the literature.
Theorem 8.3 of \cite{yu} gives
LLNs (with complete convergence) for ${\cal L}^{d,1}_{\leq k}({\cal X}_n)$
(see also \cite{mcg});
the limiting
constants are not given.
Avram \& Bertsimas (Theorem
7 of \cite{ab}) state a result on the limiting expectation
(and hence the constant in the LLN)
for ${\cal L}^{2,1}_j ({\cal U}_n)$, which they attribute to Miles \cite{m}
(see also p.~101 of \cite{yu}).
The constant in \cite{ab}
is given as
\[ \frac{1}{2} \pi^{-1/2} \sum_{i=1}^j \frac{\Gamma (i-(1/2))}{\Gamma(i)},\]
which
simplifies (by induction on $j$)
to $\pi^{-1/2} \Gamma (j+(1/2))/\Gamma(j)$, the $d=2$, $\alpha=1$ case of (\ref{0818ff})
in the case ${\cal X}_n ={\cal U}_n$.
(c) Related results are the asymptotic
expectations of
$j$-th nearest neighbour distances in finite
point sets given in \cite{ejs} and \cite{pm}. The
results in \cite{pm} are consistent with the $\alpha=1$
case of our (\ref{0924aa}). The result in \cite{ejs}
includes general $\alpha$ and certain non-uniform densities, although their
conditions on $f$ are more restrictive than our (C1);
the result is consistent with (\ref{0818g}).
Also, \cite{ejs} gives (equation (6.4))
a weak LLN for the empirical
mean $k$-nearest
neighbour {\em distance}.
With Theorem 2.4 of \cite{py2},
the results in \cite{ejs} yield LLNs
for the total weight of the
${j{\rm {\textrm -th~NNG}}}'$ and ${k{\rm {\textrm -NNG}}}'$ only when $d-1<\alpha<d$
(due to the rates of convergence given in \cite{ejs}).
(d) Smith \cite{sm} gives, in some sense, expectations of randomly selected edge lengths for nearest-neighbour type
graphs on the homogeneous Poisson point process of unit intensity in ${\bf R }^d$, including
the ${j{\rm {\textrm -th~NNG}}}'$, nearest-neighbour (undirected) graph, and Gabriel graph. His results coincide with ours
only for the ${j{\rm {\textrm -th~NNG}}}'$, since here each vertex contributes a fixed number ($j$) of directed edges: equation (5.4.1) of \cite{sm}
matches the expression for our $C(d,1,k)$. \\
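As a quick empirical illustration of (\ref{0924aa}) (a Monte Carlo check only, assuming NumPy and SciPy; it is not part of any argument), one may simulate the rescaled total length directly. For $d=2$, $\alpha=1$, $k=1$ the value returned below should be close to $C(2,1,1)=1/2$ for large $n$.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def scaled_knng_length(n, d=2, alpha=1.0, k=1, seed=0):
    """n^{(alpha-d)/d} times the total alpha-weighted length of the
    k-nearest neighbours (directed) graph on n uniform points in (0,1)^d."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, d))
    dist, _ = cKDTree(pts).query(pts, k=k + 1)   # column 0 is the point itself
    return n ** ((alpha - d) / d) * (dist[:, 1:] ** alpha).sum()
\end{verbatim}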
From the results on nearest-neighbour (directed) graphs,
we may obtain results for nearest-neighbour (undirected)
graphs, in which if ${\bf x}$ is a nearest neighbour of ${\bf y}$ and vice
versa, then the edge between ${\bf x}$ and ${\bf y}$ is counted only once. As an example,
we give the following result.
For $d \in {\bf N}$ and $\alpha \geq 0$ let
${\bf NN}^{d,\alpha}({\cal X}_n)$ denote
the total power-weighted edge length
of the nearest-neighbour (undirected)
graph on ${\cal X}_n \subset
{\bf R }^d$.
For $d\in {\bf N}$, let $\omega_d$
be the volume of the union of two unit $d$-balls with centres unit distance apart in ${\bf R }^d$.
\begin{theorem}
\label{nngu}
Suppose that $d \in {\bf N}$, $\alpha \geq 0$ and
$f$ satisfies condition (C1). As $n \to \infty$,
\begin{eqnarray}
\label{0924x} & &
n^{(\alpha-d)/d} {\bf NN}^{d,\alpha} ({\cal X}_n) \nonumber\\
& & \inLL
\Gamma (1+ (\alpha/d)) \left( v_d^{-\alpha/d}
- \frac{1}{2} v_d \omega_d^{-1-(\alpha/d)} \right) \int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x}.
\end{eqnarray}
In particular, when $d=2$ we have, for $\alpha \geq 0$
\begin{eqnarray}
\label{0924b}
n^{(\alpha-2)/2} {\bf NN}^{2,\alpha} ({\cal U}_n) \inLL \Gamma (1+(\alpha/2)) \left( \pi^{-\alpha/2}
- \frac{\pi}{2} \left( \frac{6}{ 8 \pi + 3\sqrt{3}} \right)^{1+(\alpha/2)} \right),
\end{eqnarray}
and when $d=2$, $\alpha=1$, we get
\begin{eqnarray}
\label{0913a}
n^{-1/2} {\bf NN}^{2,1} ({\cal U}_n) \inLL
\frac{1}{2} - \frac{1}{4} \left( \frac{6 \pi}
{8 \pi +3 \sqrt{3}} \right)^{3/2} \approx 0.377508.
\end{eqnarray}
Finally, when $d=1$, $\alpha=1$, we have ${\bf NN}^{1,1} ({\cal U}_n) \inLL 7/18$ as $n \to \infty$.
\end{theorem}
\noindent \textbf{Remark. }
A pair of points, each of which is the other's nearest neighbour, is known
as a reciprocal pair. Reciprocal pairs are of interest in ecology (see \cite{pi}).
When $\alpha=0$, ${\bf NN}^{d,0}({\cal X}_n)$ counts the number of vertices, minus
one half of the number of reciprocal pairs.
In this case (\ref{0924x})
says
$n^{-1} {\bf NN}^{d,0} ({\cal X}_n) \inLL 1 - (v_d/(2 \omega_d))$. This is consistent with results
of Henze \cite{h}
for the fraction of points that are the $\ell$-th
nearest neighbour of their own $k$-th
nearest neighbour; in particular, (see \cite{h} and references therein)
as $n \to \infty$, the probability that
a point is in a reciprocal pair
tends to $v_d/\omega_d$.
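This limiting fraction is easy to check by simulation. The sketch below (illustrative only, assuming NumPy and SciPy) estimates the probability that a uniform point in the unit square is in a reciprocal pair, to be compared with $v_2/\omega_2 = 6\pi/(8\pi + 3\sqrt{3}) \approx 0.6215$; boundary effects make the agreement only approximate at moderate $n$.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def reciprocal_pair_fraction(n=20000, d=2, seed=0):
    """Fraction of n uniform points in (0,1)^d whose nearest neighbour
    has them, in turn, as its own nearest neighbour."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, d))
    _, idx = cKDTree(pts).query(pts, k=2)
    nn = idx[:, 1]              # index of each point's nearest neighbour
    return float(np.mean(nn[nn] == np.arange(n)))
\end{verbatim}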
\subsection{The on-line nearest-neighbour graph}
\label{subseconng}
We now consider the {\em on-line nearest-neighbour graph} (${\rm {\textrm ONG}}$).
Let $d \in {\bf N}$. Suppose ${\bf x}_1, {\bf x}_2, \ldots$
are points in $(0,1)^d$, arriving
sequentially; for $n\in{\bf N}$
form a graph
on vertex set $\{ {\bf x}_1,\ldots,{\bf x}_n\}$
by connecting each
point ${\bf x}_i$, $i=2,3,\ldots,n$ to its nearest neighbour amongst
its predecessors
(i.e.~${\bf x}_1, \ldots, {\bf x}_{i-1}$),
using the lexicographic ordering on ${\bf R }^d$ to break any ties. The
resulting
tree is
the ${\rm {\textrm ONG}}$ on $({\bf x}_1,{\bf x}_2,\ldots,{\bf x}_n)$.
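The construction is immediate to implement. The following Python sketch (illustration only, assuming NumPy; it runs in quadratic time, which is ample for small simulations) returns the total power-weighted length of the ${\rm {\textrm ONG}}$ on a given sequence of points.
\begin{verbatim}
import numpy as np

def ong_total_length(points, alpha=1.0):
    """Total alpha-weighted edge length of the ONG: each point after the
    first is joined to its nearest neighbour among its predecessors."""
    pts = np.asarray(points, dtype=float)
    total = 0.0
    for i in range(1, len(pts)):
        total += np.linalg.norm(pts[:i] - pts[i], axis=1).min() ** alpha
    return total
\end{verbatim}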
Again, we take our sequence of points to be random.
We restrict our analysis to the case
in which we have independent uniformly distributed points ${\bf U}_1,{\bf U}_2,\ldots$ on $(0,1)^d$.
For $d \in {\bf N}$, $\alpha \geq 0$ and $n \in {\bf N}$, let ${\cal O}^{d,\alpha} ({\cal U}_n)$
denote the total power-weighted edge length
of the ${\rm {\textrm ONG}}$ on sequence ${\cal U}_n=({\bf U}_1,\ldots,{\bf U}_n)$.
The next result gives a new LLN for
${\cal O}^{d,\alpha} ({\cal U}_n)$
when $\alpha < d$.
\begin{theorem}
\label{onngthm}
Suppose $d \in {\bf N}$ and
$\alpha \in [0,d)$. With $C(d,\alpha,k)$ as given by (\ref{0903c}),
we have that as $n \to \infty$
\begin{eqnarray}
\label{0915az}
n^{(\alpha-d)/d} {\cal O}^{d,\alpha} ({\cal U}_n)
\inL \frac{d}{d-\alpha} C(d,\alpha,1) = \frac{d}{d-\alpha} v_d^{-\alpha/d} \Gamma (1+(\alpha/d)).
\end{eqnarray}
\end{theorem}
Related results include those on convergence in distribution
of ${\cal O}^{d,\alpha} ({\cal U}_n)$, given in \cite{pw3} for $\alpha >d$ ($\alpha > 1/2$ in the case $d=1$)
and in \cite{p2} in the form of a central limit theorem for $\alpha \in (0,1/4)$.
Also, the ${\rm {\textrm ONG}}$ in $d=1$
is related to the `directed linear tree' considered in
\cite{pw2}.
\subsection{The minimal directed spanning forest}
\label{secmdsf}
The {\em minimal directed spanning forest}
(MDSF) is related to the standard nearest-neighbour (directed)
graph, with the additional constraint that edges can only lie in a given direction.
In general, the MDSF can be defined as a global optimization problem for directed
graphs on partially ordered sets endowed with a weight function,
and it also admits a local construction; see \cite{br,pw1,pw2}. As above, we consider
the Euclidean setting, where our points lie in ${\bf R }^d$.
Suppose that ${\cal X} \subset {\bf R }^d$ is a finite set bearing a partial order
$\preccurlyeq$.
A {\em minimal element}, or {\em sink}, of ${\cal X}$ is a vertex
${\bf v}_0 \in {\cal X}$ for which there
exists no ${\bf v}\in {\cal X} \setminus \{{\bf v}_0\}$ such that ${\bf v} \preccurlyeq {\bf v}_0$.
Let ${\cal S}$ denote the set of all sinks of ${\cal X}$. (Note that ${\cal S}$ cannot be empty.)
For ${\bf v}\in {\cal X}$,
we say that ${\bf u}\in {\cal X} \setminus \{{\bf v}\}$
is a \emph{directed nearest neighbour} of ${\bf v}$
if ${\bf u} \preccurlyeq {\bf v}$ and $\| {\bf v} - {\bf u} \| \leq \| {\bf v} - {\bf u}' \|$ for all
$ {\bf u}' \in {\cal X}\setminus \{ {\bf v} \} $ such that $ {\bf u}' \preccurlyeq {\bf v}$.
For each ${\bf v}\in {\cal X} \setminus {\cal S}$,
let $n_{\bf v}$ be a directed nearest neighbour of ${\bf v}$ (chosen
arbitrarily if ${\bf v}$ has more than one).
Then (see \cite{pw1}) the directed graph on ${\cal X}$
obtained by taking edge set
$
E := \{ ({\bf v},n_{\bf v}): {\bf v} \in {\cal X} \setminus {\cal S} \}
$
is a MDSF of ${\cal X}$. Thus, if all edge-weights are distinct, the MDSF
is unique, and is
obtained by connecting each non-minimal vertex to its directed
nearest neighbour. In the case where there is a single sink,
the MDSF is a tree (ignoring directedness of edges) and
it is called the {\em minimal directed spanning tree} (MDST).
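The local construction just described translates directly into code. The sketch below (illustration only, assuming NumPy) builds the MDSF edge list in the plane under the coordinatewise partial order $\preccurlyeqstar$; ties, which occur with probability zero for the point processes considered here, are ignored.
\begin{verbatim}
import numpy as np

def mdsf_star_edges(points):
    """MDSF edges under the coordinatewise partial order on R^2: each
    non-minimal point is joined to its nearest neighbour among the points
    lying weakly to its south-west."""
    pts = np.asarray(points, dtype=float)
    edges = []
    for i, x in enumerate(pts):
        below = np.where((pts[:, 0] <= x[0]) & (pts[:, 1] <= x[1]))[0]
        below = below[below != i]
        if below.size:                        # i is not a minimal element
            d = np.linalg.norm(pts[below] - x, axis=1)
            edges.append((i, int(below[d.argmin()])))
    return edges
\end{verbatim}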
For what follows, we consider a general type of partial order
on ${\bf R }^2$, denoted
$\stackrel{\theta,\phi}{\preccurlyeq}$, specified by the angles
$\theta \in [0 ,2 \pi)$ and $\phi \in (0,\pi ]$.
For $\mathbf{x} \in
{\bf R }^2$, let $C_{\theta,\phi}(\mathbf{x})$ be the closed half-cone of angle $\phi$
with
vertex $\mathbf{x}$ and boundaries given by the rays from
$\mathbf{x}$ at angles $\theta$ and $\theta+\phi$, measuring
anticlockwise from the upwards vertical. The partial order is such
that, for $\mathbf{x}_1, \mathbf{x}_2 \in {\bf R }^2$,
\begin{eqnarray}
\mathbf{x}_1 \stackrel{\theta,\phi}{\preccurlyeq} \mathbf{x}_2
\textrm{ iff } \mathbf{x}_1 \in C_{\theta,\phi} (\mathbf{x}_2) .
\label{0719} \end{eqnarray} We shall use $\preccurlyeq^*$ as shorthand for
the special case $\stackrel{\pi/2,\pi/2}{\preccurlyeq}$, which is
of particular interest, as in \cite{br}. In this case $(u_1,u_2)
\preccurlyeqstar (v_1,v_2)$
iff $u_1 \leq
v_1$ and $u_2 \leq v_2$. The symbol $\preccurlyeq$ will denote a
general partial order on ${\bf R }^2$. Note that in the case $\phi = \pi$,
(\ref{0719}) does not, in fact, define a partial order
on the whole of ${\bf R }^2$, since the antisymmetric property
(${\bf x} \preccurlyeq {\bf y}$ and ${\bf y} \preccurlyeq {\bf x}$ implies ${\bf x} = {\bf y}$) fails;
however it is, with probability one,
a true
partial order (in fact, a total order) on the random point sets that we consider.
We do not permit here the case $\phi=0$, which
would almost surely give us a disconnected point set.
Nor do
we allow $\phi \in(\pi, 2\pi]$, since in this case
the
directional relation (\ref{0719})
is not a partial order: the transitivity property
(if ${\bf u} \preccurlyeq {\bf v}$ and ${\bf v} \preccurlyeq {\bf w}$ then ${\bf u} \preccurlyeq {\bf w}$)
fails.
Again we take ${\cal X}$ to be random;
set ${\cal X}= {\cal X}_n$, where (as before)
${\cal X}_n$ is a point
process consisting
of $n$ independent random points on $(0,1)^2$ with common density $f$.
When the partial order
is $\preccurlyeqstar$, as in \cite{br}, we also consider the point set ${\cal X}_n^0:={\cal X}_n \cup \{ {\bf 0}\}$
(where
${\bf 0}$ is the origin in ${\bf R }^2$) on which the MDSF is a MDST
rooted at ${\bf 0}$.
In this random setting, almost surely
each point of ${\cal X}$
has a unique directed nearest neighbour, so
that ${\cal X}$ has a unique MDSF. Denote by ${\cal M}^\alpha({\cal X})$ the total
power-weighted edge length, with weight exponent $\alpha>0$, of the MDSF on ${\cal X}$.
Theorem \ref{mdstlln} presents LLNs for ${\cal M}^\alpha({\cal X}_n)$
in the uniform case ${\cal X}_n={\cal U}_n$.
However, the
proof carries through to other distributions. In particular, if the points
of ${\cal X}_n$ are
distributed in
${\bf R }^2$ with a density $f$ that satisfies condition (C1) above,
then (\ref{0728e}) holds with a factor
of $\int_{{\rm supp }(f)} f({\bf x})^{(2-\alpha)/2} {\rm d} {\bf x}$ introduced into the
right-hand side.
\begin{theorem} \label{mdstlln}
Let
$\alpha \in (0,2)$. Under the partial order
$\preccurlyeqtp$ with $\theta \in[0,2\pi)$ and $\phi\in(0,\pi]$,
we have that, as $n \to \infty$,
\begin{eqnarray} n^{(\alpha-2)/2}
{\cal M}^\alpha ( {\cal U}_n ) \inL (2/\phi)^{\alpha/2} \Gamma( 1+(\alpha/2)). \label{0728e} \end{eqnarray}
Moreover, when the
partial order is $\preccurlyeqstar$, (\ref{0728e}) remains true with
${\cal U}_n$ replaced by ${\cal U}_n^0$.
\end{theorem}
\subsection{The Gabriel graph}
\label{gg}
In the {\em Gabriel graph} (see \cite{gs})
on point set ${\cal X} \subset {\bf R }^d$, two points
are joined by an edge
iff the ball that has the line segment joining those two
points as a diameter contains no other points of ${\cal X}$.
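Equivalently, ${\bf x},{\bf y}$ are Gabriel neighbours iff no other point lies in the ball centred at $({\bf x}+{\bf y})/2$ with radius $\|{\bf x}-{\bf y}\|/2$. A brute-force Python sketch of this test (illustration only, assuming NumPy; it runs in $O(n^3)$ time and is adequate only for small point sets) is given below.
\begin{verbatim}
import numpy as np

def gabriel_edges(points):
    """Brute-force Gabriel graph: i, j are joined iff the ball having the
    segment between them as a diameter contains no other point."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            centre = (pts[i] + pts[j]) / 2.0
            r2 = np.sum((pts[i] - pts[j]) ** 2) / 4.0
            d2 = np.sum((pts - centre) ** 2, axis=1)
            d2[[i, j]] = np.inf           # the two endpoints do not count
            if np.all(d2 > r2):
                edges.append((i, j))
    return edges
\end{verbatim}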
The Gabriel
graph has been applied in many of the same contexts as nearest-neighbour graphs;
see for example \cite{t}.
For $d \in {\bf N} $ and $\alpha \geq 0$, let
${\cal G}^{d,\alpha}({\cal X})$ denote
the total power-weighted edge length of the Gabriel
graph on ${\cal X} \subset
{\bf R }^d$.
As before, we consider the random point set ${\cal X}_n$
with underlying density $f$.
A LLN for ${\cal G}^{d,\alpha}({\cal X}_n)$
was given
in \cite{py2}; in the present paper we give the limiting constant explicitly.
\begin{theorem}
\label{ggthm}
Let $d \in {\bf N}$ and $\alpha \geq 0$.
Suppose that
$f$ satisfies (C1).
As $n \to \infty$,
\begin{eqnarray}
\label{01f}
n^{(\alpha-d)/d} {\cal G}^{d,\alpha}
({\cal X}_n) \inLL v_d^{-\alpha/d} 2^{d+\alpha-1} \Gamma( 1+(\alpha/d))
\int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x}.
\end{eqnarray}
\end{theorem}
\section{Proofs}
\label{proofs}
\subsection{Proof of Theorems \ref{llnthm} and \ref{nngu}} \label{seclln}
For $j \in {\bf N}$, let $d_j ( {\bf x}; {\cal X} )$ be the (Euclidean) distance from
${\bf x}$ to its $j$-th nearest
neighbour in ${\cal X} \setminus \{{\bf x}\}$,
if such a neighbour exists, or zero otherwise.
We will use the following form of
Euler's Gamma integral (see equation 6.1.1 in \cite{as}). For $a \geq 0$ and $b, c > 0$,
\begin{eqnarray}
\label{0819a}
\int_0^\infty r^a {\rm e}^{-cr^b} {\rm d} r =
\frac{1}{b} c^{-(a+1)/b} \Gamma \left( (a+1)/b \right) .
\end{eqnarray}
\noindent
{\bf Proof of Theorem \ref{llnthm}.}
In applying Theorem \ref{llnpenyuk} to the ${j{\rm {\textrm -th~NNG}}}'$ and ${k{\rm {\textrm -NNG}}}'$ functionals, we
take $\xi({\bf x} ; {\cal X}_n)$ to be $(d_j({\bf x};{\cal X}_n))^\alpha$,
where $\alpha \geq 0$. Then
$\xi$ is translation invariant and homogeneous of order $\alpha$.
It was
shown in Theorem 2.4 of \cite{py2}
that
the ${j{\rm {\textrm -th~NNG}}}'$ total weight
functional
$\xi$ satisfies
the conditions of Theorem \ref{llnpenyuk} in the following two cases:
(i) with $q=2$, if $f$ satisfies (C1), and $\alpha \geq 0$;
and (ii) with $q=1$, if $f$ satisfies (C2), and $0 \leq \alpha <d$.
(In fact, in \cite{py2} this is proved for the ${k{\rm {\textrm -NNG}}}'$ functional
$\sum_{j=1}^k (d_j({\bf x};{\cal X}_n))^\alpha$, but this implies
that the conditions also hold for the ${j{\rm {\textrm -th~NNG}}}'$ functional
$(d_j({\bf x};{\cal X}_n))^\alpha$.)
The functional $\xi({\bf x};{\cal X}_n)
= (d_j ({\bf x} ; {\cal X}_n))^\alpha$
stabilizes on ${\cal H}_1$,
with limit $\xi_\infty ({\cal H}_1) = (d_j ({\bf 0} ; {\cal H}_1 ))^\alpha$. Also,
the moment condition
(\ref{moms}) is satisfied for some $p>1$ (if $f$ satisfies
(C2) and $\alpha<d$) or $p>2$ (if $f$ satisfies (C1)),
and so Theorem \ref{llnpenyuk}, with $q=1$ or $q=2$
respectively, yields (using
the fact that $\xi$ is homogeneous of order $\alpha$)
\begin{eqnarray}
n^{(\alpha/d)-1} {\cal L}^{d,\alpha}_j ({\cal X}_n)
= n^{-1} \sum_{{\bf x} \in{\cal X}_n} n^{\alpha/d} \xi ( {\bf x} ; {\cal X}_n )
=
n^{-1}
\sum_{{\bf x} \in {\cal X}_n} \xi( n^{1/d} {\bf x}; n^{1/d}{\cal X}_n) \nonumber\\
\inLq
E [\xi_\infty({\cal H}_1)] \int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x}. \label{0728f}
\end{eqnarray}
We now need to evaluate the expectation on the right-hand side of
(\ref{0728f}). For $r>0$
\begin{eqnarray*}
P [ \xi_{\infty} ( {\cal H}_1 ) >r ] & = &
P [ d_j ({\bf 0};{\cal H}_1) >r^{1/\alpha} ]
= \sum_{i=0} ^{j-1}
P [ {\rm card} ( B ( {\bf 0} ; r^{1/\alpha} ) \cap {\cal H}_1 ) = i
] \\ & = & \sum_{i=0}^{j-1} \frac{
(v_d r^{d/\alpha})^i}{i!} \exp ( -v_d r^{d/\alpha} ),
\end{eqnarray*}
where $v_d$ is given by (\ref{0818c}).
So
\begin{eqnarray*}
E \left[ \xi_{\infty} (
{\cal H}_1 ) \right] = \int_0^{\infty} P
\left[ \xi_\infty \left( {\cal H}_1 \right) > r \right]
{\rm d} r = \int_0^{\infty}
\sum_{i=0}^{j-1} \frac{ (v_d
r^{d/\alpha})^i}{i!} \exp ( -v_d r^{d/\alpha} ) {\rm d} r .
\end{eqnarray*}
Interchanging the order of summation and
integration, and using (\ref{0819a}), we obtain
\begin{eqnarray}
\label{0819b}
E \left[ \xi_{\infty} (
{\cal H}_1 ) \right]
= v_d^{-\alpha/d} \frac{\alpha}{d} \sum_{i=0}^{j-1} \frac{\Gamma(i +(\alpha/d))}{\Gamma(i+1)}
= v_d^{-\alpha/d} \frac{\Gamma \left(j+(\alpha/d) \right)}{\Gamma(j)},
\end{eqnarray}
where the final equality follows by induction on $j$. Then from (\ref{0818c}),
(\ref{0728f}) and (\ref{0819b}) we obtain the ${j{\rm {\textrm -th~NNG}}}'$ result (\ref{0818ff}).
By (\ref{0818a}), the ${k{\rm {\textrm -NNG}}}'$ result (\ref{0818g}) follows
from (\ref{0818ff}) with
\begin{eqnarray*}
C(d ,\alpha,k) =
v_d^{-\alpha/d} \sum_{j=1}^k \frac{\Gamma \left(j+(\alpha/d) \right)}{\Gamma(j)}
= v_d^{-\alpha/d} \frac{d}{d+\alpha} \frac{\Gamma(k+1+(\alpha/d))}{\Gamma(k)}. ~\square
\end{eqnarray*}
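Both induction steps above (the closed form in (\ref{0819b}) and the evaluation of $C(d,\alpha,k)$) can be spot-checked numerically. The following Python fragment (illustration only, using the standard library) does so for a few small values of $j$ and $k$.
\begin{verbatim}
from math import gamma, isclose

d, alpha = 3, 1.5
s = alpha / d
for j in range(1, 7):      # closed form for E[xi_infty(H_1)]
    lhs = s * sum(gamma(i + s) / gamma(i + 1) for i in range(j))
    assert isclose(lhs, gamma(j + s) / gamma(j), rel_tol=1e-12)
for k in range(1, 7):      # closed form for C(d, alpha, k)
    lhs = sum(gamma(j + s) / gamma(j) for j in range(1, k + 1))
    assert isclose(lhs, (d / (d + alpha)) * gamma(k + 1 + s) / gamma(k),
                   rel_tol=1e-12)
\end{verbatim}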
\noindent
{\bf Proof of Theorem \ref{nngu}.}
The nearest-neighbour (directed) graph counts
the weights of edges
from points that are nearest neighbours of their
own nearest neighbours twice, while the
nearest-neighbour (undirected) graph
counts such weights only once.
Let $q({\bf x};{\cal X})$ be
the distance from ${\bf x}$
to its nearest neighbour in ${\cal X} \setminus \{ {\bf x}\}$
if ${\bf x}$ is a nearest neighbour
of its own nearest neighbour, and zero otherwise. Recall
that $d_1 ({\bf x};{\cal X})$ is the distance
from ${\bf x}$ to its nearest neighbour in ${\cal X} \setminus \{{\bf x}\}$. For $\alpha \geq 0$,
define
\[ \xi' ({\bf x};{\cal X}) := (d_1 ({\bf x};{\cal X}))^\alpha - \frac{1}{2} ( q({\bf x};{\cal X}))^\alpha.\]
Then
$\sum_{{\bf x} \in {\cal X}} \xi'({\bf x},{\cal X})$ is
the total weight of the nearest-neighbour (undirected) graph on ${\cal X}$. Note
that $\xi'$ is translation invariant and homogeneous
of order $\alpha$.
One can check that $\xi'$ is stabilizing on the Poisson process
${\cal H}_1$, using similar arguments to those for the ${j{\rm {\textrm -th~NNG}}}'$ and
${k{\rm {\textrm -NNG}}}'$ functionals. Also (see \cite{py2}) if condition (C1)
holds then $\xi'$ satisfies the moments condition (\ref{moms}) for
some $p>2$, for all $\alpha \geq 0$.
Let ${\bf e}_1$ be a vector
of unit length in ${\bf R }^d$. For $d \in {\bf N}$,
let $\omega_d:= | B({\bf 0};1) \cup B( {\bf e}_1 ; 1)|$,
the volume of the union of two unit $d$-balls with
centres unit distance apart.
Now we apply Theorem \ref{llnpenyuk}
with $q=2$. We have
\begin{eqnarray}
\label{0924c}
n^{(\alpha/d)-1} {\bf NN}^{d,\alpha} ({\cal X}_n) =
n^{-1} \sum_{{\bf x} \in {\cal X}_n} \xi'(n^{1/d} {\bf x}; n^{1/d} {\cal X}_n )
\nonumber\\
\inLL
E[ \xi'_\infty ({\cal H}_1)] \int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x},
\end{eqnarray}
where $E [ \xi'_\infty ({\cal H}_1) ] = E[ ( d_1 ({\bf 0}; {\cal H}_1))^\alpha] - (1/2) E [(q ({\bf 0}; {\cal H}_1))^\alpha]$.
Now we need to evaluate $E[(q({\bf 0};{\cal H}_1))^\alpha]$. With ${\bf X}$ denoting the nearest
point of ${\cal H}_1$ to ${\bf 0}$,
\begin{eqnarray*}
P [ q({\bf 0};{\cal H}_1) \in {\rm d} r] & = &
P [ \{|{\bf X}| \in {\rm d} r \} \cap \{ {\cal H}_1 \cap ( B({\bf 0};r) \cup B({\bf X};r))= \{ {\bf X} \} \} ]
\\
& = & d v_d r^{d-1} {\rm e}^{-v_d r^d} {\rm e}^{-(\omega_d-v_d)r^d} {\rm d} r
= d v_d r^{d-1} {\rm e}^{-\omega_d r^d} {\rm d} r.
\end{eqnarray*}
So using (\ref{0819a}) we obtain
\begin{eqnarray}
\label{0924d}
E [ (q({\bf 0};{\cal H}_1))^\alpha] = \int_0^\infty d v_d r^{d-1+\alpha} {\rm e}^{-\omega_d r^d} {\rm d} r =
v_d \omega_d^{-1-(\alpha/d)} \Gamma (1+(\alpha/d)).\end{eqnarray}
Then from (\ref{0924c}) with (\ref{0924d}) and the $j=1$ case of
(\ref{0819b})
we obtain (\ref{0924x}). By some calculus, $\omega_2 = (4 \pi/3) +(\sqrt{3}/2)$,
which with the $d=2$ case of (\ref{0924x}) yields (\ref{0924b}); for (\ref{0913a})
note that $\Gamma(3/2)=\pi^{1/2}/2$ (see 6.1.9 in \cite{as}). Finally, we obtain the statement for ${\bf NN}^{1,1}({\cal U}_n)$
from the $d=1$ case of (\ref{0924x}) since $\omega_1=3$. $\square$
\subsection{Proof of Theorem \ref{onngthm}}
\label{onngprf}
In order to obtain our LLN
(Theorem \ref{onngthm}
above), we modify the setup of the ${\rm {\textrm ONG}}$ slightly. Let
${\cal U}_n$ be a {\em marked} random finite point process in ${\bf R }^d$,
consisting of $n$ independent uniform random vectors in $(0,1)^d$,
where each point ${\bf U}_i$ of ${\cal U}_n$ carries a random mark
$T({\bf U}_i)$ which is uniformly distributed on $[0,1]$, independent
of the other marks and of the point process ${\cal U}_n$.
Join each point ${\bf U}_i$ of ${\cal U}_n$ to its nearest neighbour
amongst those points of ${\cal U}_n$
with mark less than $T({\bf U}_i)$, if there are any such
points, to obtain a graph that we call the ${\rm {\textrm ONG}}$ on the
marked point set ${\cal U}_n$.
This definition extends to infinite but
locally finite point sets.
Clearly the ${\rm {\textrm ONG}}$ on the marked point process ${\cal U}_n$
has the same distribution as the ${\rm {\textrm ONG}}$ (with
the first definition) on a sequence ${\bf U}_1,{\bf U}_2,\ldots,{\bf U}_n$
of independent uniform points on $(0,1)^d$.
We apply Theorem \ref{llnpenyuk} to obtain a LLN
for ${\cal O}^{d,\alpha} ( {\cal U}_n )$, $\alpha \in [0,d)$. Once again,
the method enables us to evaluate
the limit explicitly.
We
take $f$
to be the indicator of $(0,1)^d$.
Define $D({\bf x};{\cal X})$ to be
the distance from point ${\bf x}$ with mark $T({\bf x})$ to its nearest
neighbour in ${\cal X}$ amongst those points
${\bf y} \in {\cal X}$ that have mark $T({\bf y})$ such that
$T({\bf y}) < T({\bf x})$, if such a neighbour exists, or zero otherwise.
We
take $\xi({\bf x} ; {\cal X})$ to be $(D({\bf x};{\cal X}))^\alpha$.
Again,
$\xi$ is translation invariant and homogeneous of order $\alpha$.
\begin{lemma}
\label{0919d}
The ONG functional $\xi$ almost surely stabilizes on ${\cal H}_1$.
\end{lemma}
\noindent \textbf{Proof. }
Although the notion of stabilization used in \cite{p2} is somewhat
different, the same argument as given
at the start of the proof of Theorem 3.6 of \cite{p2} applies. $\square$
\begin{lemma}
\label{0919a}
Let $d \in {\bf N}$, $\alpha \in [0,d)$, and let $p>1$ with $\alpha p<d$.
Then the ONG functional $\xi$ satisfies the moments condition
(\ref{moms}).
\end{lemma}
\noindent \textbf{Proof. }
Let $T_n$ denote the rank of the mark of ${\bf U}_1$ amongst
the marks of all the points of ${\cal U}_n$, so that
$T_n$ is distributed uniformly over
the integers
$1,2,\ldots,n$.
We have, by conditioning
on $T_n$,
\begin{eqnarray}
\label{0920a}
E [ (\xi ( n^{1/d} {\bf U}_1 ; n^{1/d} {\cal U}_n ))^p ]
& = & n^{-1} \sum_{i=1}^n E [ (d_1 ( n^{1/d} {\bf U}_1; n^{1/d} {\cal U}_i )) ^{p\alpha}
] \nonumber\\
& = & n^{-1} \sum_{i=1}^n (n/i)^{p\alpha /d} E [( d_1 (i^{1/d} {\bf U}_1;
i^{1/d} {\cal U}_i ))^{p \alpha} ].\end{eqnarray}
It was shown in \cite{pw3} that there exists $C\in(0,\infty)$ such that for all $r>0$
\[ \sup_{i \geq 1} P [ d_1(i^{1/d} {\bf U}_1;
i^{1/d} {\cal U}_i ) > r ] \leq C \exp (-r^{1/d}/C).\]
Thus the last expectation in (\ref{0920a}) is bounded by a constant independent of $i$.
So the final expression
in (\ref{0920a}) is bounded by a constant times
\[ n^{(p \alpha -d)/d} \sum_{i=1}^n i^{-p\alpha/d} ,\]
which is uniformly bounded by a constant for $\alpha p <d$. $\square$ \\
\noindent
{\bf Proof of Theorem \ref{onngthm}.}
Let $d \in {\bf N}$. Let $f$ be the indicator of $(0,1)^d$, and
$\xi$ be the ${\rm {\textrm ONG}}$ functional $\xi({\bf x};{\cal U}_n)
= (D ({\bf x} ; {\cal U}_n))^\alpha$.
By Lemmas \ref{0919d} and \ref{0919a},
$\xi$
is homogeneous of order $\alpha$,
stabilizing on ${\cal H}_1$
with limit $\xi_\infty({\cal H}_1) = ( D( {\bf 0} ;{\cal H}_1))^\alpha$,
and satisfies the moment condition
(\ref{moms}) for some $p>1$, provided $\alpha < d$. So
Theorem \ref{llnpenyuk} with $q=1$ implies
\begin{eqnarray*}
n^{(\alpha/d)-1} {\cal O}^{d,\alpha}({\cal U}_n) =
n^{-1}
\sum_{{\bf x} \in {\cal U}_n} (D (n^{1/d} {\bf x}; n^{1/d}{\cal U}_n))^\alpha
\inL
E [\xi_\infty({\cal H}_1)].
\end{eqnarray*}
For $u \in (0,1)$ the points of ${\cal H}_1$ with lower mark than $u$ form a
homogeneous Poisson point process ${\cal H}_u$ of intensity $u$, so by
conditioning on the mark of the point at ${\bf 0}$,
\begin{eqnarray*}
E [ \xi_\infty ({\cal H}_1)] = \int_0^1 E [ (d_1({\bf 0};{\cal H}_u))^\alpha ] {\rm d} u
= \int_0^1 u^{-\alpha/d} E [( d_1({\bf 0} ; {\cal H}_1))^\alpha ] {\rm d} u = \frac{d}{d-\alpha} C(d,\alpha,1),
\end{eqnarray*}
since we saw in the proof of Theorem \ref{llnthm} that $E[ (d_1({\bf 0};{\cal H}_1))^\alpha]=C(d,\alpha,1)$.
$\square$
\subsection{Proof of Theorem \ref{mdstlln}} \label{seclln2}
In applying Theorem
\ref{llnpenyuk} to the MDSF, we
take
$f$ to be the indicator
of $(0,1)^2$.
We take $\xi(\mathbf{x} ; {\cal X})$ to be
$(d({\bf x};{\cal X}))^\alpha$, where
$d({\bf x};{\cal X})$ is
the distance from point $\mathbf{x}$ to its directed nearest
neighbour in ${\cal X} \setminus \{{\bf x}\}$, if such a point exists, or
zero otherwise, i.e.
\begin{eqnarray}
\xi(\mathbf{x};{\cal X}) = ( d(\mathbf{x};{\cal X}) )^\alpha ~~~
{\rm with}~~~
d ({\bf x} ; {\cal X}) := \min
\{ \| {\bf x} - {\bf y} \| : {\bf y} \in {\cal X} \setminus \{ {\bf x} \}, {\bf y}
\preccurlyeqtp {\bf x} \}
\label{0802}
\end{eqnarray}
with the convention that $\min \emptyset = 0$.
We consider the random point set
${\cal U}_n$,
the
binomial point process consisting of $n$ independent uniformly distributed
points on $(0,1)^2$. However, as remarked before the statement of Theorem
\ref{mdstlln}, the result
(\ref{0728e}) carries through
(with virtually the same proof) to more general point sets ${\cal X}_n$.
We need to show that $\xi$ given by (\ref{0802}) satisfies the conditions
of Theorem \ref{llnpenyuk}. As before, ${\cal H}_1$ denotes a homogeneous
Poisson process on ${\bf R }^2$.
\begin{lemma} \label{stabil} The MDSF functional
$\xi$ given by (\ref{0802}) almost surely stabilizes on
$\mathcal{H}_1$
with limit
$\xi_\infty({\cal H}_1) = (d({\bf 0};{\cal H}_1))^\alpha$.
\end{lemma}
\noindent \textbf{Proof. }
Set $R := d( {\bf 0} ; {\cal H}_1 )$.
Since $\phi>0$ we have $0< R < \infty$ almost surely.
But then
for any $\ell >R$, we have $\xi({\bf 0}; ({\cal H}_1 \cap B({\bf 0};\ell)) \cup
{\cal A}) = R^\alpha$, for any finite ${\cal A} \subset {\bf R }^2 \setminus
B({\bf 0};\ell)$. Thus $\xi$ stabilizes on ${\cal H}_1$ with limit
$\xi_\infty({\cal H}_1) =R^\alpha$. $\square$\\
We now give a geometrical lemma. For $B \subset {\bf R }^2$
with $B$ bounded, and for ${\bf x} \in B$,
write ${\rm dist}({\bf x};\partial B)$ for $\sup\{r: B({\bf x};r) \subseteq B\}$,
and for $s >0$, define the region
\begin{eqnarray}
\label{Atpdef}
A_{\theta,\phi}(\mathbf{x},s;B) :=
B( \mathbf{x}; s ) \cap B \cap C_{\theta,\phi}({\bf x}).
\end{eqnarray}
\begin{lemma}
\label{lem0727}
Let $B$ be a convex bounded set in ${\bf R }^2$, and let ${\bf x} \in B$.
If $A_{\theta,\phi} ({\bf x},s;B) \cap \partial B({\bf x};s) \neq \emptyset$,
and $s > {\rm dist}({\bf x}, \partial B)$, then
$$
|A_{\theta,\phi}({\bf x},s;B)| \geq
s \sin (\phi /2) \, {\rm dist}({\bf x},\partial B) /2.
$$
\end{lemma}
\noindent \textbf{Proof. }
The condition
$A_{\theta,\phi} ({\bf x},s;B) \cap \partial B({\bf x};s) \neq \emptyset$
says that there exists ${\bf y} \in B \cap C_{\theta,\phi}({\bf x})$
with $\|{\bf y} - {\bf x}\| = s$.
The line segment ${\bf x} {\bf y}$ is contained in
the cone $C_{\theta,\phi}({\bf x})$; take a half-line ${\bf h}$ starting
from ${\bf x}$, at an angle $\phi/2$ to the line segment ${\bf x} {\bf y}$
and such that ${\bf h}$ is also contained in $ C_{\theta,\phi}({\bf x})$.
Let ${\bf z}$ be the point in $\bf h$ at a distance ${\rm dist}({\bf x},\partial B)$
from ${\bf x}$. Then the interior of the triangle ${\bf x} {\bf y} {\bf z}$ is entirely
contained in $A_{\theta,\phi}({\bf x},s;B)$, and has area
$s \sin (\phi /2) \, {\rm dist}({\bf x},\partial B)/2$. $\square$
\begin{lemma} \label{lem0k715b} Suppose
$\alpha >0$.
Then the MDSF functional $\xi$ given by (\ref{0802}) satisfies the
moments condition (\ref{moms}) for any $p \leq 2 /\alpha$.
\end{lemma}
\noindent \textbf{Proof. }
Setting $R_n :=(0,n^{1/2})^2$, conditioning on the position of ${\bf U}_1$, we have
\begin{eqnarray}
E [ \xi ( n^{1/2}\mathbf{U}_1;n^{1/2} {\cal U}_n
)^p ] = n^{-1}
\int_{R_n} E [ ( \xi
(\mathbf{x};n^{1/2} {\cal U}_{n-1}) )^p ] {\rm d} \mathbf{x}
.
\label{0728}
\end{eqnarray}
For ${\bf x} \in R_n$ set $m({\bf x}) := {\rm dist}({\bf x}, \partial R_n)$.
We divide $R_n$ into three regions
\begin{eqnarray*}
R_n(1) & : = & \{{\bf x} \in R_n: m({\bf x}) \leq n^{-1/2} \}; ~~~~
R_n(2) : = \{{\bf x} \in R_n: m({\bf x}) > 1 \};
\\
R_n(3) & : = & \{{\bf x} \in R_n: n^{-1/2} < m({\bf x}) \leq 1 \}.
\end{eqnarray*}
For all ${\bf x} \in R_n$, we have
$\xi({\bf x};n^{1/2} {\cal U}_{n-1}) \leq (2n)^{\alpha/2}$, and hence,
since $R_n(1)$ has area at most 4, we can
bound the contribution to (\ref{0728}) from ${\bf x} \in R_n(1)$ by
\begin{eqnarray}
\label{0728a}
n^{-1} \int_{R_n(1)} E [ ( \xi
({\bf x};n^{1/2}{\cal U}_{n-1}))^p ] {\rm d} {\bf x}
\leq 4 n^{-1} (2n)^{p\alpha /2} = 2^{2+ p\alpha/2} n^{(p \alpha -2)/2},
\end{eqnarray}
which is bounded if $p \alpha \leq 2$.
Now, for $\mathbf{x} \in R_n$, with $A_{\theta,\phi}$ defined
at (\ref{Atpdef}),
we have
\begin{eqnarray}
P [ d
({\bf x}; n^{1/2} {\cal U}_{n-1}) > s ] & \leq &
P [ n^{1/2} {\cal U}_{n-1}
\cap A_{\theta,\phi}({\bf x},s;R_n) = \emptyset ]
\nonumber \\
& = &
\left(1 - \frac{|A_{\theta,\phi}({\bf x},s;R_n)| }{n} \right)^{n-1}
\nonumber \\
& \leq & \exp( 1 - |A_{\theta,\phi}({\bf x},s;R_n)| ),
\label{0728b} \end{eqnarray}
since $|A_{\theta,\phi}({\bf x},s;R_n)|\leq n$.
For ${\bf x} \in R_n$ and $s>m({\bf x})$, by Lemma \ref{lem0727} we have
$$
| A_{\theta,\phi}(\mathbf{x},s;R_n)
| \geq s \sin(\phi/2 ) m({\bf x})/2 ~~~{\rm if} ~~
A_{\theta,\phi} ({\bf x},s;R_n) \cap \partial B({\bf x};s)
\neq \emptyset,
$$
and also
$$
P [ d({\bf x};n^{1/2} {\cal U}_{n-1}) > s ] = 0 ~~~{\rm if}~~~
A_{\theta,\phi} ({\bf x},s;R_n) \cap \partial B({\bf x};s)
= \emptyset.
$$
For $s \leq m({\bf x})$,
we have that
$
|
A_{\theta,\phi}(\mathbf{x},s;R_n) | = s^2
(\phi/2) \geq s^2 \sin (\phi/2).
$
Combining these observations and (\ref{0728b}),
we obtain for all ${\bf x} \in R_n$ and $s >0$ that
\begin{eqnarray*}
P [ d
({\bf x}; n^{1/2} {\cal U}_{n-1}) > s ] & \leq &
\exp ( 1 - (s/2) \min (s, m({\bf x})) \sin (\phi/2)
), ~~~ {\bf x} \in R_n.
\end{eqnarray*}
Setting $c =(1/2) \sin(\phi/2)$, we therefore have for $\mathbf{x} \in R_n$
that
\begin{eqnarray}
E [ (\xi (\mathbf{x};n^{1/2} {\cal U}_{n-1}))^p
]
= \int_0^\infty P [ d({\bf x}; n^{1/2} {\cal U}_{n-1})
> r^{1/(\alpha p)} ] {\rm d} r
\nonumber\\
\leq
\int_0^{m({\bf x})^{\alpha p}}
\exp { (1 - c r^{2/(\alpha p)} ) } {\rm d} r
+
\int_{m({\bf x})^{\alpha p }}^\infty
\exp { (1 - c m({\bf x})r^{1/(\alpha p)} ) } {\rm d} r
\nonumber \\
= O(1) + m({\bf x})^{-p\alpha} \int_{m({\bf x})^2}^\infty
{\rm e}^{1-cu} \alpha p u^{\alpha p-1} {\rm d} u
= O(1) + O(m({\bf x})^{-\alpha p}).
\label{0728d}
\end{eqnarray}
For ${\bf x} \in R_n(2)$, this bound is $O(1)$, and the area
of $R_n(2)$ is less than $n$, so that the contribution
to (\ref{0728}) from $R_n(2)$ satisfies
\begin{eqnarray}
\label{0728c}
\limsup_{n \to \infty} n^{-1}
\int_{R_n(2)} E [ ( \xi
(\mathbf{x};n^{1/2} {\cal U}_{n-1}) )^p ]
{\rm d} \mathbf{x}
< \infty.
\end{eqnarray}
Finally, by (\ref{0728d}), there is a constant $C \in (0,\infty)$ such
that
the contribution to (\ref{0728}) from $R_n(3)$ satisfies
\begin{eqnarray*}
n^{-1} \int_{R_n(3)} E [ ( \xi
(\mathbf{x};n^{1/2} {\cal U}_{n-1}) )^p ] {\rm d} \mathbf{x}
&\leq &
C n^{-1/2} \int_{y=n^{-1/2}}^1 y^{-\alpha p} {\rm d} y
\\
& \leq & C n^{-1/2} \max \{ \log n , n^{(\alpha p -1)/2} \},
\end{eqnarray*}
which is bounded if $\alpha p \leq 2$.
Combined with the bounds in (\ref{0728a}) and (\ref{0728c}),
this shows that the expression (\ref{0728})
is uniformly bounded, provided $\alpha p \leq 2$.
$\square$\\
For $k \in {\bf N}$, and for $a<b$ and $c<d$
let ${\cal U}_{k,(a,b] \times (c,d]}$ denote the point process
consisting of
$k$ independent random vectors uniformly distributed
on the rectangle $(a,b] \times (c,d]$.
Before proceeding further,
we recall
that
if $M({\cal X})$ denotes the number of minimal elements, under partial
order
$\preccurlyeqstar$, of a point
set ${\cal X} \subset {\bf R }^2 $, then
\begin{eqnarray}
E [ M({\cal U}_{k,(a,b]\times (c,d]}) ] =
E [ M({\cal U}_k) ]
= 1 + (1/2) + \cdots + (1/k) \leq 1 + \log k.
\label{harmonicbd}
\end{eqnarray}
The first equality in (\ref{harmonicbd}) comes
from some obvious scaling which shows that the
distribution of
$ M({\cal U}_{k,(a,b]\times (c,d]}) $ does not depend on $a,b,c,d$.
For the second equality
in (\ref{harmonicbd}),
see e.g.~\cite{bns}.\\
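The exact formula (\ref{harmonicbd}) is also easy to confirm by simulation; the Python sketch below (illustration only, assuming NumPy) estimates $E[M({\cal U}_k)]$ and should return a value close to the harmonic number $1 + 1/2 + \cdots + 1/k$.
\begin{verbatim}
import numpy as np

def mean_minimal_elements(k, trials=2000, seed=0):
    """Monte Carlo estimate of E[M(U_k)] under the coordinatewise partial
    order on k uniform points in the unit square."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        pts = rng.random((k, 2))
        for i in range(k):
            others = np.delete(pts, i, axis=0)
            # i is minimal iff no other point is (weakly) south-west of it
            count += not np.any(np.all(others <= pts[i], axis=1))
    return count / trials
\end{verbatim}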
\noindent
{\bf Proof of Theorem \ref{mdstlln}.}
Suppose $\alpha \in (0,2)$,
and set $f$ to be the indicator of
$(0,1)^2$.
By Lemmas
\ref{stabil} and \ref{lem0k715b} the functional $\xi$,
given at (\ref{0802}), satisfies the
conditions of Theorem \ref{llnpenyuk} with $p= 2/\alpha$
and $q=1$.
So by Theorem \ref{llnpenyuk},
we have
\begin{eqnarray}
n^{(\alpha/2)-1} {\cal M}^\alpha({\cal U}_n) =
n^{-1}
\sum_{{\bf x} \in {\cal U}_n} \xi (n^{1/2}\mathbf{x};n^{1/2}{\cal U}_n)
\inL
E [ \xi_\infty({\cal H}_1)].
\label{0728fm}
\end{eqnarray}
Since the disk sector $C_{\theta,\phi}(\mathbf{x}) \cap B({\bf x};r)$
has area $(\phi/2) r^2$, by Lemma \ref{stabil} we have
\begin{eqnarray*} P [ \xi_{\infty} (
{\cal H}_1 ) >s ] & = & P [ {\cal H}_1 \cap
C_{\theta,\phi}(\mathbf{0}) \cap B({\bf 0};s^{1/\alpha}) = \emptyset ] =
\exp (-(\phi/2) s^{2/\alpha} ).
\end{eqnarray*}
Hence the limit in (\ref{0728fm}) is, using (\ref{0819a}),
\[
E \left[ \xi_{\infty} ( {\cal H}_1 ) \right] =
\int_0^{\infty} P \left[ \xi_\infty \left( {\cal H}_1 \right)
> s \right] {\rm d} s
= \alpha 2^{(\alpha-2)/2} \phi^{-\alpha/2} \Gamma( \alpha/2 ),
\]
and this gives us (\ref{0728e}). Finally, in the case where $\preccurlyeqtp$ is $\preccurlyeqstar$,
(\ref{0728e}) remains true when ${\cal U}_n$ is replaced by ${\cal U}_n^0$, since
\begin{eqnarray}
\label{0806a}
E [n^{(\alpha/2)-1} | {\cal M}^\alpha ({\cal U}_n^0) - {\cal M}^\alpha ({\cal U}_n)| ]
\leq 2^{\alpha/2} n^{(\alpha/2)-1} E [ M({\cal U}_n)] ,
\end{eqnarray}
where $M({\cal U}_n)$ denotes the number of $\preccurlyeqstar$-minimal elements of ${\cal U}_n$. By (\ref{harmonicbd}),
$E[ M({\cal U}_n)] \leq 1+\log n$, and hence the right-hand side of (\ref{0806a})
tends to 0 as $n \to \infty$ for $\alpha<2$.
$\square$
\subsection{Proof of Theorem \ref{ggthm}} \label{secgg}
\noindent
{\bf Proof of Theorem \ref{ggthm}.}
In applying Theorem \ref{llnpenyuk} to the Gabriel graph, we
take $\xi({\bf x} ; {\cal X}_n)$ to be
{\em one half} of the total $\alpha$ power-weighted
length
of all the edges incident to ${\bf x}$ in the Gabriel graph on ${\cal X}_n \cup \{{\bf x}\}$; the factor
of one half prevents double counting.
As stated in \cite{py2} (Section 2.3(e)),
$\xi$ is translation invariant, homogeneous of order $\alpha$ and
stabilizing on ${\cal H}_1$, and
if the function $f$ satisfies condition (C1)
then the moment condition
(\ref{moms}) is satisfied for some $p>2$.
So by Theorem \ref{llnpenyuk} with $q=2$,
\begin{eqnarray}
n^{(\alpha/d)-1} {\cal G}^{d,\alpha} ({\cal X}_n) =
n^{-1}
\sum_{{\bf x} \in {\cal X}_n} \xi( n^{1/d} {\bf x}; n^{1/d}{\cal X}_n) \nonumber\\
\inLL
E [\xi_\infty({\cal H}_1)] \int_{{\rm supp }(f)} f({\bf x})^{(d-\alpha)/d} {\rm d} {\bf x}. \label{08f}
\end{eqnarray}
We need to evaluate the expectation on the right-hand side of
(\ref{08f}). The net contribution
from a vertex at ${\bf 0}$ to the total weight of the Gabriel graph on ${\cal H}_1$ is
\begin{eqnarray}
\label{01a}
\frac{1}{2} \sum_{k=1}^\infty (d_k ({\bf 0};{\cal H}_1))^\alpha \cdot {\bf 1}_{E_k},\end{eqnarray}
where the factor of one half ensures that edges are not counted twice,
$d_k({\bf 0};{\cal H}_1)$ is the distance from ${\bf 0}$ to its $k$-th nearest neighbour
in ${\cal H}_1$, and $E_k$ denotes the event that ${\bf 0}$ and its $k$-th nearest neighbour
in ${\cal H}_1$ are joined by an edge in the Gabriel graph.
Given that the point ${\bf x} \in {\cal H}_1$ is the $k$-th nearest neighbour
of ${\bf 0}$, an edge between ${\bf x}$ and ${\bf 0}$ exists in the Gabriel graph iff
the ball with ${\bf 0}$ and ${\bf x}$ diametrically opposed contains none of the other
$k-1$ points of ${\cal H}_1$ that are uniformly distributed in the ball centre ${\bf 0}$ and
radius $r := \|{\bf x}\|$. Thus for $k \in {\bf N}$,
\begin{eqnarray}
\label{01b}
P[E_k] = \left( \frac{v_d r^d - v_d(r/2)^d}{v_d r^d} \right)^{k-1}
= \left( 1 - 2^{-d} \right)^{k-1}.\end{eqnarray}
So from (\ref{01a}) and (\ref{01b}) we have
\begin{eqnarray*} E[\xi_\infty({\cal H}_1)] & = & \frac{1}{2} \sum_{k=1}^\infty
\left( 1 - 2^{-d} \right)^{k-1} E[ (d_k ({\bf 0};{\cal H}_1))^\alpha] \\
& = & \frac{1}{2} \sum_{k=1}^\infty
\left( 1 - 2^{-d} \right)^{k-1} v_d^{-\alpha/d} \frac{\Gamma(k+(\alpha/d))}{\Gamma(k)},
\end{eqnarray*}
by (\ref{0819b}). But by properties of Gauss hypergeometric series (see 15.1.1 and 15.1.8 of \cite{as})
\[ \sum_{k=1}^\infty
\left( 1 - 2^{-d} \right)^{k-1} \frac{\Gamma(k+(\alpha/d))}{\Gamma(k)}
= \Gamma (1+(\alpha/d)) 2^{d+\alpha}.\]
Then with (\ref{08f}) the proof is complete. $\square$
\begin{center}
{\bf Acknowledgements}
\end{center}
Some of this work was done when AW was
at the University of Durham, supported by an EPSRC
doctoral training account, and at the University of Bath. AW thanks Mathew Penrose for very helpful
discussions and comments, and two anonymous referees, whose comments have led to an improved presentation.
\end{document}
\begin{document}
\title{On the Characteristic Polynomial of the Gross Regulator Matrix}
\author{Samit Dasgupta and Michael Spie\ss \\
}
\maketitle
\begin{abstract} We present a conjectural formula for the principal minors and the characteristic polynomial of Gross's regulator matrix associated to a totally odd character of a totally real field. The formula is given in terms of the Eisenstein
cocycle, which was defined and studied earlier by the authors and collaborators. For the determinant of the regulator matrix, our conjecture follows from recent work of Kakde, Ventullo and the first author. For the diagonal entries, our conjecture overlaps with the conjectural formula presented in our prior work. The intermediate cases are new and provide a refinement of the Gross--Stark conjecture.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $F$ be a totally real field of degree $n$, and let \[ \chi\colon \mathrm{Gal}(\overline{F}/F) \longrightarrow \overline{\mathbf{Q}} \] be a totally odd character.
We fix once and for all a prime number $p$ and embeddings $\overline{\mathbf{Q}} \subset \mathbf{C}$ and $\overline{\mathbf{Q}} \subset \mathbf{C}_p$, so $\chi$ may be viewed as taking values in $\mathbf{C}$ or $\mathbf{C}_p$. Let $H$ denote the fixed field of the kernel of $\chi$, and let $G = \mathrm{Gal}(H/F)$. As usual we view $\chi$ also as a multiplicative map on the semigroup of integral fractional ideals of $F$ by defining $\chi(\mathfrak{q}) = \chi(\mathrm{Frob}_{\mathfrak{q}})$ if $\mathfrak{q}$ is unramified in $H$ and $\chi(\mathfrak{q}) = 0$ if $\mathfrak{q}$ is ramified in $H$.
The field $H$ is a finite, cyclic, CM extension of $F$. Let
$S_p$ denote the set of primes of $F$ lying above $p$.
We partition $S_p$ as $R \cup R_0 \cup R_1$, where $R$ denotes the primes split in $H$ (i.e.\ $\chi(\mathfrak{p}) = 1$),
$R_0$ denotes the primes ramified in $H$ (i.e.\ $\chi(\mathfrak{p}) =0$),
and $R_1$ denotes the remaining primes above $p$.
Let $S = S_{\mathrm{ram}} \cup S_p$ denote the union of $S_p$ with the set of finite places of $F$ that are ramified in $H$. The Artin $L$-function
\[ L_S(\chi, s) = \sum_{(\mathfrak{a}, S)= 1} \frac{\chi(\mathfrak{a})}{\mathrm{N}\mathfrak{a}^{s}} = \prod_{\mathfrak{q}} (1 - \chi(\mathfrak{q})\mathrm{N}\mathfrak{q}^{-s})^{-1}, \quad \real(s) > 1, \]
has an analytic continuation to the entire complex plane. If we simply write $L(\chi, s)$ for $L_{S_{\mathrm{ram}}}(\chi, s)$, then
\[ L_S(\chi, s) = \prod_{\mathfrak{p} \in R \cup R_1} (1 - \chi(\mathfrak{p}) \mathrm{N}\mathfrak{p}^{-s}) L(\chi, s). \]
It is known that $L(\chi, 0) \neq 0$. We therefore find
\begin{equation} \label{e:ordclassical}
\ord_{s = 0} L_S(\chi, s) = r_\chi, \quad \text{ where } r_\chi = \# R,
\end{equation}
and furthermore
\begin{equation}
\label{e:ltclassical}
\frac{L_S^{(r_\chi)}(\chi, 0)}{r_\chi! L(\chi, 0)} = \mathscr{R}(\chi) \prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p})), \quad \text{ where }
\mathscr{R}(\chi) = \prod_{\mathfrak{p} \in R} \log \mathrm{N}\mathfrak{p}.
\end{equation}
In this paper we refine Gross's conjectural $p$-adic analogs of (\ref{e:ordclassical}) and (\ref{e:ltclassical}), which we now recall.
Let \[ \omega: \mathrm{Gal}(F(\mu_{2p})/F) \longrightarrow (\mathbf{Z}/2p\mathbf{Z})^* \longrightarrow \mu_{2(p-1)} \] denote the Teichm{\"u}ller character.
There is a $p$-adic $L$-function \[ L_p(\chi\omega, s) \colon \mathbf{Z}_p \longrightarrow \mathbf{C}_p \] determined by the interpolation
property
\[ L_p(\chi \omega, k) = L_S(\chi\omega^k, k) \text{ for } k \in \mathbf{Z}^{\le 0}. \]
The existence of this function was proved independently by Deligne--Ribet \cite{dr} and Cassou-Nogu\`es \cite{cn} in the 1970s, and new approaches have been considered recently in \cite{pcsd}, \cite{cdg}, and \cite{spiess}.
Gross proposed the following conjecture regarding the leading term of $L_p(\chi\omega, s)$ at $s=0$.
\begin{conjecture}[Gross] \label{c:gross} We have
\begin{enumerate} \item \label{i:ov}
\[ \ord_{s=0} L_p(\chi\omega, s) \ge r_\chi. \] \label{e:g1}
\item \label{i:lt}
\[ \frac{L_p^{(r_\chi)}(\chi\omega, 0)}{r_\chi! L(\chi, 0)} = \mathscr{R}_p(\chi) \prod_{\mathfrak{p} \in R_0} (1 - \chi(\mathfrak{p})), \] where $\mathscr{R}_p(\chi)$
is a certain regulator of $p$-units of $H$ defined below. \label{e:g2}
\item $\mathscr{R}_p(\chi) \neq 0$, so in view of (\ref{e:g1}) and (\ref{e:g2}) we have $\ord_{s=0} L_p(\chi\omega, s) = r_\chi.$
\end{enumerate}
\end{conjecture}
It can be shown that for $p \neq 2$, part (\ref{e:g1}) of Gross's conjecture follows from Wiles' proof of the Main Conjecture of Iwasawa theory. However,
a more direct analytic proof that holds even for $p=2$ was given recently in \cite{pcsd} and \cite{spiess} (note that both of these latter papers use the general
cohomological results of \cite{spiesshilb}).
Part (\ref{e:g2}) has been proven recently by the first author in joint work with M.\ Kakde and K.\ Ventullo \cite{dkv} (the case $r_{\chi}=1$ had been settled earlier in \cite{ddp} and \cite{vent}).
The goal of this paper is to study and refine Gross's $p$-adic regulator $\mathscr{R}_p(\chi)$, which we now define.
For each prime $\mathfrak{p} \in S_p$, consider the group of $\mathfrak{p}$-units
\[ U_\mathfrak{p} := \{u \in H^*: \ord_\mathfrak{P}(u) = 0 \text{ for all } \mathfrak{P} \nmid \mathfrak{p}\}. \]
Write
\begin{align*}
U_{\mathfrak{p}, \chi} :=& \ (U_\mathfrak{p} \otimes \overline{\mathbf{Q}})^{\chi^{-1}} \\
=& \ \{u \in U_\mathfrak{p} \otimes \overline{\mathbf{Q}}: \sigma(u) = u \otimes \chi^{-1}(\sigma) \text{ for all } \sigma \in G \} .
\end{align*}
The Galois equivariant form of Dirichlet's unit theorem implies that
\[ \dim_{\overline{\mathbf{Q}}} U_{\mathfrak{p}, \chi} = \begin{cases}
1 & \text{ if } \mathfrak{p} \in R, \\
0 & \text{ otherwise.}
\end{cases} \]
For $\mathfrak{p} \in R$, we let $u_{\mathfrak{p}, \chi}$ denote any generator (i.e.\ non-zero element) of $U_{\mathfrak{p}, \chi}$.
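For orientation we mention the simplest example, which plays no role in the sequel: take $F = \mathbf{Q}$, let $\chi$ be the odd quadratic character of conductor $4$, so that $H = \mathbf{Q}(i)$, and let $p = 5$, which splits in $H$, so that $R = \{(5)\}$. Then $U_{(5)}$ is generated by $i$, $2+i$ and $2-i$, the space $U_{(5)} \otimes \overline{\mathbf{Q}}$ is two-dimensional, and $u_{(5), \chi} = (2+i)/(2-i)$ spans the $\chi^{-1}$-eigenspace, since complex conjugation inverts it.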
Consider the continuous homomorphisms
\begin{equation}
\begin{aligned} \label{e:olpdef}
o_\mathfrak{p}:= \ord_{\mathfrak{p}} \colon& \ F_\mathfrak{p}^* \longrightarrow \mathbf{Z} \\
\ell_\mathfrak{p}:= \log_p \circ \mathrm{Norm}_{F_{\mathfrak{p}}/\mathbf{Q}_p} \colon& \ F_\mathfrak{p}^* \longrightarrow \mathbf{Z}_p.
\end{aligned}
\end{equation}
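To make the maps in (\ref{e:olpdef}) concrete, the following toy computation (with ad hoc names of our own, and treating only the simplest case $F_\mathfrak{p} = \mathbf{Q}_p$, where the norm map is the identity and $\ell_\mathfrak{p}$ is evaluated only on principal units) computes $o_\mathfrak{p}$ as the $p$-adic valuation and $\ell_\mathfrak{p}$ by truncating the series $\log_p(u) = \sum_{n \ge 1} (-1)^{n+1}(u-1)^n/n$ modulo $p^N$.
\begin{verbatim}
# Sketch only: o_p = p-adic valuation, l_p = truncated Iwasawa logarithm,
# for F_p = Q_p and u a principal unit (u = 1 mod p).  Names are ours.
from fractions import Fraction

def o_p(x, p):
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def l_p(u, p, N=8, terms=60):
    x = Fraction(u - 1)                      # has positive p-adic valuation
    s = sum((-1)**(n + 1) * x**n / n for n in range(1, terms + 1))
    # s is a p-integral rational; reduce it modulo p^N
    return s.numerator * pow(s.denominator, -1, p**N) % p**N

p = 5
print(o_p(p**3 * 7, p))   # 3
print(l_p(1 + p, p))      # log_5(6) modulo 5^8
\end{verbatim}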
Suppose we choose for each $\mathfrak{p} \in R$, a prime $\mathfrak{P}_\mathfrak{p}$ of $H$ lying above $\mathfrak{p}$.
Then for $\mathfrak{p}, \mathfrak{q} \in R$, via
\[ U_\mathfrak{p} \subset H \subset H_{\mathfrak{P}_\mathfrak{q}} \cong F_\mathfrak{q}, \]
we can evaluate $o_\mathfrak{q}$ and $\ell_\mathfrak{q}$ on elements of $U_\mathfrak{p}$, and extend by
linearity to maps \[ o_\mathfrak{q}, \ell_\mathfrak{q}: U_{\mathfrak{p}, \chi} \ellongrightarrow \mathbf{C}_p. \] (Of course, $o_\mathfrak{q}(U_{\mathfrak{p}, \chi}) =0$
for $\mathfrak{q} \neq \mathfrak{p}$.)
Define
\[ {\mscr L}_{{\mathrm{alg}}}(\chi)_{ \mathfrak{p}, \mathfrak{q}} = - \frac{\ell_\mathfrak{q}(u_{\mathfrak{p}, \chi})}{o_{\mathfrak{p}}(u_{\mathfrak{p}, \chi})},
\]
which is clearly independent of the choice of $u_{\mathfrak{p}, \chi} \in U_{\mathfrak{p}, \chi}$.
Gross's regulator is the determinant of the $r_\chi \times r_\chi$ matrix whose entries are given by these values:
\[ \mathscr{R}_p(\chi) := \det({\mscr M}_p(\chi)), \quad \text{ where } {\mscr M}_p(\chi) := \elleft({\mscr L}_{{\mathrm{alg}}}(\chi)_{ \mathfrak{p}, \mathfrak{q}}\right)_{\mathfrak{p}, \mathfrak{q} \in R}. \]
The functions $o_\mathfrak{q}$ and $\ell_\mathfrak{q}$ evaluated on $U_{\mathfrak{p}, \chi}$ depend on the choice of prime $\mathfrak{P}_\mathfrak{q}$ above $\mathfrak{q}$
used to embed $U_{\mathfrak{p}}$ into $F_\mathfrak{q}^*$. If $\mathfrak{P}_\mathfrak{q}$ is replaced by $\mathfrak{P}_\mathfrak{q}^\sigma$ for $\sigma \in \mathbf{G}al(H/F)$, then these functions are scaled by
$\chi(\sigma)$. Accordingly, this change scales the $\mathfrak{q}$th row of ${\mscr M}_p(\chi)$ by $\chi(\sigma)^{-1}$ and the $\mathfrak{q}$th column by $\chi(\sigma)$. In particular, the diagonal entries ${\mscr L}_{{\mathrm{alg}}}(\chi)_{ \mathfrak{p}, \mathfrak{p}}$ are independent of choices, as is the
regulator $\mathscr{R}_p(\chi)$ and the characteristic polynomial of ${\mscr M}_p(\chi)$. More generally, for any subset $J \subset R$, the principal minor of ${\mscr M}_p(\chi)$ corresponding to $J$ defined by
\[ \mathscr{R}_p(\chi)_J := \det\elleft({\mscr L}_{{\mathrm{alg}}}(\chi)_{ \mathfrak{p}, \mathfrak{q}}\right)_{\mathfrak{p}, \mathfrak{q} \in J} \]
is independent of choices. Note that the characteristic polynomial of a matrix can be expressed simply in terms of the principal minors by
\[
\det(t \cdot \mathbbm{1}_r - {\mscr M}_p(\chi)) = \sum_{k=0}^r t^k (-1)^{r-k} \sum_{\stack{J\subset R}{\#J=r-k}}\mathscr{R}_p(\chi)_J
\]
where $r = r_\chi = \#R$. In this paper we present a conjectural formula for the individual $\mathscr{R}_p(\chi)_J$ as well as for the characteristic polynomial of ${\mscr M}_p(\chi)$ in terms of
purely analytic data attached to $F$ and $\chi$ (i.e.\ not presupposing any knowledge of the group of $\mathfrak{p}$-units in the extension $H$).
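Since the conjecture below is stated in terms of the individual principal minors $\mathscr{R}_p(\chi)_J$, we record a quick numerical check of the displayed expansion of the characteristic polynomial in terms of principal minors; the matrix below is arbitrary and the computation is purely illustrative.
\begin{verbatim}
# Check det(t*I - M) = sum_k t^k (-1)^{r-k} sum_{#J = r-k} det(M_J)
# for a sample 3x3 matrix, where M_J is the principal submatrix indexed by J.
import itertools
import sympy as sp

r, t = 3, sp.symbols('t')
M = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 4]])

def principal_minor(J):
    J = list(J)
    return M[J, J].det() if J else sp.Integer(1)

lhs = (t * sp.eye(r) - M).det()
rhs = sum(t**k * (-1)**(r - k)
          * sum(principal_minor(J)
                for J in itertools.combinations(range(r), r - k))
          for k in range(r + 1))
print(sp.expand(lhs - rhs))   # 0
\end{verbatim}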
Our conjectural formula is given in terms of the Eisenstein cocycle, which was defined and studied in \cite{pcsd}, \cite{cdg}, and \cite{spiess}.
For the purposes of this paper, we describe a simplified version of the cocycle, which is a certain group cohomology class
\[ \kappa_\chi \in H^{n-1}(E_R^*, {\rm Meas}(F_R, K)). \]
Here $E_R^*$ denotes the rank $n + r - 1$
group of totally positive $R$-units in $F$,
\[ F_R := \prod_{\mathfrak{p} \in R} F_{\mathfrak{p}}, \]
and $K$ is a finite extension of $\mathbf{Q}_p$ containing the values of the character $\chi$.
The group ${\rm Meas}(F_R, K)$
denotes the $K$-vector space of $K$-valued measures on $F_R$ (i.e.\ $p$-adically bounded linear forms $C_c(F_R, K)\ellongrightarrow K$ where $C_c(F_R, K)$ is the $K$-vector space of compactly supported continuous functions from $F_R$ to $K$). The space $C_c(F_R, K)$ is endowed with an $F_R^*$-action by \[ (gf)(x):= f(g^{-1}x), \] which induces an action of $E_R^*$ via the diagonal embedding $E_R^* \subset F_R^*$.
This in turn induces an action of $E_R^*$ on the dual $C_c(F_R, K)^\vee$ and its subspace ${\rm Meas}(F_R, K)$.
There are several constructions of the Eisenstein cocycle; in this paper we describe what is perhaps the simplest, in terms of Shintani cones.
In fact, to define the Eisenstein cocycle one must introduce a certain auxiliary prime $\ellambda$ of $F$ and use it to employ a smoothing operation.
For simplicity in this introduction, we suppress the prime $\ellambda$ from the notation.
Let $J \subset R$ be a subset. In \S \ref{section:mainconj} we describe how to define two $r$-cocycles \[ c_{\ell, J}, c_o \in H^r(E_R^*, C_c(F_R, K)). \]
The subscripts $\ell$ and $o$ refer to the homomorphisms $\ell_\mathfrak{p}$ and $o_\mathfrak{p}$ in (\ref{e:olpdef}) used to define the cocycles for the primes $\mathfrak{p} \in J$. Let
\begin{equation} \ellabel{e:vtdef}
\vartheta \in H_{n+r-1}(E_R^*, \mathbf{Z}) \cong \mathbf{Z} \end{equation} be a generator. We
define a constant
\begin{equation} \ellabel{e:rpandef1}
\mathscr{R}_p(\chi)_{J, {\mathrm{an}}} := (-1)^{\# J} \,\frac{c_{\ell, J} \cap (\kappa_\chi \cap \vartheta)}
{c_o \cap (\kappa_\chi \cap \vartheta)} \in K.
\end{equation}
The denominator of (\ref{e:rpandef1}) is non-zero because it can be shown
to equal $L(\chi, 0) \prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p}))$, up to a sign and a non-zero smoothing factor (cf.\ Proposition~\ref{p:denom}).
We propose:
\begin{conjecture} \ellabel{c:main} For each subset $J \subset R$, we have $\mathscr{R}_p(\chi)_J = \mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$.
\end{conjecture}
More generally for $t\in K$ we define a class $c_{to+\ell} \in H^r(E_R^*, C_c(F_R, K))$ and propose
\begin{conjecture} \ellabel{c:mainchar}
For the characteristic polynomial of ${\mscr M}_p(\chi)$ we have
\[
\det(t \cdot \mathbbm{1}_r - {\mscr M}_p(\chi)) = \frac{c_{to+\ell} \cap (\kappa_\chi \cap \vartheta)}
{c_{o} \cap (\kappa_\chi \cap \vartheta)}.
\]
\end{conjecture}
Conjecture~\ref{c:mainchar} follows from Conjecture~\ref{c:main} but is slightly weaker (see section \S \ref{section:mainconj}).
The following results provide the main theoretical justification for our conjecture.
\begin{theorem} For $J = R$, we have \[ \mathscr{R}_p(\chi)_{J, {\mathrm{an}}} = \frac{L_p^{(r)}(\chi\omega, 0)}{r! L(\chi, 0) \prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p}))}. \] \end{theorem}
Hence Conjecture~\ref{c:main} for $J=R$ is equivalent to part (2) of Conjecture~\ref{c:gross}, whence it holds by \cite{dkv}.
We also consider the other extremal case $\#J = 1$. The following result is proved for $n=2$, but we suspect that it can be shown in general.
\begin{theorem} Let $n=2$ and let $J = \{\mathfrak{p}\}$. Then Conjecture~\ref{c:main} agrees with the conjectural formula for ${\mscr L}_{{\mathrm{alg}}}(\chi)_{\mathfrak{p}, \mathfrak{p}}$ proposed in
\cite[Conjecture 3.21]{das}.
\end{theorem}
For $1 < \# J < r$, Conjecture~\ref{c:main} is a new generalization of Gross's Conjecture.
\section{The Eisenstein Cocycle} \ellabel{s:eis}
To define the Eisenstein cocycle, we first fix an ordering of the real places of $F$, yielding an embedding $F \subset \mathbf{R}^n$.
The group $(\mathbf{R}^*)^n$, and hence $F^* \subset (\mathbf{R}^*)^n$, acts on $\mathbf{R}^n$ by componentwise multiplication.
Given linearly independent vectors $x_1, \dotsc, x_m \in (\mathbf{R}^{>0})^n$, we define the simplicial cone
\begin{equation} \ellabel{e:conedef}
C(x_1, \dotsc, x_m) = \{ t_1 x_1 + \cdots + t_m x_m: t_i \in \mathbf{R}^{>0} \} \subset (\mathbf{R}^{>0})^n.
\end{equation}
The vector $Q = (1,0,0, \dots, 0)$ has the property that
its ray (i.e.\ its set of $\mathbf{R}^{>0}$ multiples) is preserved by the action of $(\mathbf{R}^{>0})^n$. We define $C^*(x_1, \dotsc, x_n)$ to be the union of
$C(x_1, \dotsc, x_n)$ with the boundary faces that are brought into the interior of the cone by a small perturbation by $Q$, i.e.~the set whose characteristic function is defined by
\[ 1_{C^*(x_1, \dotsc, x_n)}(x) := \lim_{h \rightarrow 0^+} 1_{C(x_1, \dotsc, x_n)}(x + hQ). \]
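As an illustration of this convention (and nothing more), the following sketch decides membership in $C^*(x_1, x_2)$ for $n = 2$ by solving $x + hQ = t_1 x_1 + t_2 x_2$ exactly over $\mathbf{Q}$ and asking whether both coordinates are positive for all small $h > 0$.
\begin{verbatim}
# Illustrative sketch for n = 2: the half-open cone C^*(x1, x2) keeps exactly
# those boundary faces into which a small perturbation by Q = (1, 0) pushes
# the point.  We work with exact rational coordinates.
from fractions import Fraction as Fr

def in_C_star(x, x1, x2, Q=(Fr(1), Fr(0))):
    a, b, c, d = x1[0], x2[0], x1[1], x2[1]   # columns of the matrix (x1 | x2)
    det = a * d - b * c
    assert det != 0, "generators must be linearly independent"
    solve = lambda v: ((d * v[0] - b * v[1]) / det,
                       (-c * v[0] + a * v[1]) / det)
    t0, tQ = solve(x), solve(Q)               # t(h) = t0 + h * tQ
    return all(ti > 0 or (ti == 0 and qi > 0) for ti, qi in zip(t0, tQ))

x1, x2 = (Fr(1), Fr(0)), (Fr(1), Fr(1))
print(in_C_star((Fr(2), Fr(1)), x1, x2))  # interior point: True
print(in_C_star((Fr(2), Fr(2)), x1, x2))  # face through x2 is kept: True
print(in_C_star((Fr(1), Fr(0)), x1, x2))  # face parallel to Q is dropped: False
\end{verbatim}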
Let $\mathcal{O}_{F, R}$ denote the ring of $R$-integers of $F$. For any fractional ideal $\mathfrak{b} \subset F$ relatively prime to $S$, we let
$\mathfrak{b}_{R} = \mathfrak{b} \otimes_{\mathcal{O}_F} \mathcal{O}_{F, R}$ denote the $\mathcal{O}_{F, R}$-module generated by $\mathfrak{b}$.
Let \[ U \subset F_R := \prod_{\mathfrak{p} \in R} F_{\mathfrak{p}} \] be a compact open subset.
Let $C$ be any union of simplicial cones in $ (\mathbf{R}^{>0})^n$.
For $s \in \mathbf{C}$ with $\mathbf{R}eal(s) > 1$, consider the Shintani $L$-function
\begin{equation} \ellabel{e:shinL}
L(C, \chi, \mathfrak{b}, U, s) := \sum_{\stack{\xi \in C \cap \mathfrak{b}_{R}^{-1}, \ \xi \in U}{(\xi, S \backslash R) = 1}} \frac{\chi((\xi))}{\mathrm{N}\xi^s}.
\end{equation}
Here $\chi((\xi))$ denotes $\chi$ evaluated on the image of the principal ideal $(\xi)$ under the Artin reciprocity map for the extension $H/F$.
The set $\mathfrak{b}_{R} \cap U$ can be written as the disjoint union of translates of fractional ideals of $F$, which are lattices in $\mathbf{R}^n$.
Shintani proved that the $L$-function defined in (\ref{e:shinL}) has a meromorphic continuation to $\mathbf{C}$, and that its values at nonpositive integers lie in the cyclotomic field $k$ generated by the values of $\chi$.
Furthermore, for $\chi$, $C$, $\mathfrak{b},$ and $s$ fixed, it is clear that the values $L(C, \chi, \mathfrak{b}, U, s)$ form a distribution on $F_R$ in the sense that
for disjoint compact opens $U_1, U_2 \subset F_R$, we have \[ L(C, \chi, \mathfrak{b}, U_1 \cup U_2, s) = L(C, \chi, \mathfrak{b}, U_1 , s) + L(C, \chi, \mathfrak{b}, U_2, s). \]
The space of $k$-valued distributions on $F_R$, denoted $\Dist(F_R, k)$, has an action of $F_R^*$ given by
$(x \cdot \mu)(U) = \mu(x^{-1} U).$ As in the introduction let $E_R^*$ denote the group of totally positive $R$-units in $F$ (i.e.\ the totally positive units of $\mathcal{O}_{F,R}$), which we view as a subgroup of $(\mathbf{R}^{>0})^n$.
The following proposition follows directly from \cite[Theorem 1.6]{cdg}.
\begin{prop} \ellabel{p:cocycle} Let $x_1, \dotsc, x_n \in E_R^*$ and let $x$ denote the $n \times n$ matrix whose columns are the images of the $x_i$ in $(\mathbf{R}^{>0})^n$.
For a compact open subset $U \subset F_R$ let
\[ \mu_{\chi, \mathfrak{b}}(x_1, \dotsc, x_n)(U) := \mathrm{sgn}(x) L( C^*(x_1, \dotsc, x_n), \chi, \mathfrak{b}, U, 0), \]
where $\mathrm{sgn}(x) = \sign(\det(x))$ if $\det(x)\ne 0$ and $\mathrm{sgn}(x) =0$ otherwise. Then $\mu_{\chi, \mathfrak{b}}$ is a homogeneous $(n-1)$-cocycle yielding a class $[\mu_{\chi, \mathfrak{b}}] \in H^{n-1}(E_R^*, \Dist(F_R, k)).$
\end{prop}
In order to achieve integral distributions, we introduce a smoothing operation using an auxiliary prime ideal $\ellambda$ of $F$. We assume that $\ellambda$ is cyclic in the sense that $\mathcal{O}_F/\ellambda \cong \mathbf{Z}/\ell \mathbf{Z}$ for a prime number $ \ell \in \mathbf{Z},$ and we assume that $\ell \ge n+2$. We also assume that no primes in $S$ have residue characteristic equal to $\ell$ (in particular $\ell \neq p$). We then define the smoothed Shintani $L$-function
\[ L_\ellambda(C, \chi, \mathfrak{b}, U, s) := L(C, \chi, \mathfrak{b} \ellambda^{-1}, U, s) - \chi(\ellambda)\ell^{1-s} L(C, \chi, \mathfrak{b} , U, s). \]
Using ``Cassou--Nogu\`es' trick", it is shown in \cite{cdg} (see also \cite{ds}) that if the generators of the cones comprising $C$ can be chosen to be units at the primes above $\ell$, then $L_\ellambda(C, \chi, \mathfrak{b}, U, 0) \in \mathcal{O}_k$.
For $x_1, \dotsc, x_n \in E_R^*$ and $U \subset F_R$ an open compact subset let
\[ \mu_{\chi, \mathfrak{b}, \ellambda}(x_1, \dotsc, x_n)(U) := \mathrm{sgn}(x) L_\ellambda( C^*(x_1, \dotsc, x_n), \chi, \mathfrak{b}, U, 0). \]
Let $\mathfrak{P}$ be the prime of $k$ above $p$ corresponding to $k \subset \overline{\mathbf{Q}} \subset \mathbf{C}_p$, where the second embedding is the one fixed at the outset of the paper, and let $K=k_{\mathfrak{P}}$. Since $\mu_{\chi, \mathfrak{b}, \ellambda}$ is integral, it is in particular $\mathfrak{P}$-adically bounded with values in $\mathcal{O}_K$ and can therefore be viewed as a $K$-valued measure on $F_R$, i.e.\ as having values in \[ {\rm Meas}(F_R, K) := \mathbf{H}om(C_c(F_R, \mathbf{Z}), \mathcal{O}_K)\otimes_{\mathcal{O}_K} K. \] We define
\begin{equation}
\kappa_{\chi, \mathfrak{b}, \ellambda} := [\mu_{\chi, \mathfrak{b}, \ellambda}]\in H^{n-1}(E_R^*, {\rm Meas}(F_R, K))
\ellabel{e:kapchipart}
\end{equation}
and the Eisenstein cocycle associated to $\chi$ and $\ellambda$ by
\begin{equation}
\kappa_{\chi, \ellambda} = \sum_{i=1}^h \chi(\mathfrak{b}_i) \kappa_{\chi, \mathfrak{b}_i, \ellambda} \in H^{n-1}(E_R^*, {\rm Meas}(F_R, K)).
\ellabel{e:kapchi}
\end{equation}
Here $\{\mathfrak{b}_1, \dotsc, \mathfrak{b}_h\}$ is a set of integral ideals representing the narrow class group of $\mathcal{O}_{F,R}$ (i.e.\ the group of fractional ideals of $\mathcal{O}_{F,R}$ modulo the group of fractional principal ideals generated by totally positive elements of $F$).
We conclude this section by recalling a cap product pairing that can be applied to the Eisenstein cocycle $\kappa_{\chi, \ellambda}$.
There is a canonical integration pairing
\begin{align}
\begin{split}
C_c(F_R, K)\times {\rm Meas}(F_R, K)\,&\ellongrightarrow\, K, \\
(f, \mu) & \ellongmapsto \int_{F_R} f(t) d \mu(t):= \ellim_{|| \mathcal{V} || \rightarrow 0} \sum_{V \in \mathcal{V}} f(t_V) \mu(V)
\ellabel{e:intdef}
\end{split}
\end{align}
where the limit is taken over uniformly finer covers $\mathcal{V}$ of the support of $f$ by open compacts $V$, and $t_V \in V$ is any element. More generally if $A$ is a locally profinite $K$-algebra (i.e.\ an Iwasawa algebra) we have an $F_R$-equivariant integration pairing
\begin{align}
\begin{split}
C_c(F_R, A)\times {\rm Meas}(F_R, K)\, &\ellongrightarrow\,A, \\
(f, \mu) &\ellongmapsto \int_{F_R} f(t) d \mu(t) \ellabel{e:intdef2}
\end{split}
\end{align}
(see e.g.\ \cite[\S 2]{ds}) where the $F_R^*$-action on $C_c(F_R, A)$ is given by $(x\cdot f)(y) = f(x^{-1}y)$. For each non-negative integer $m$, the pairing (\ref{e:intdef2}) induces a cap-product pairing
\begin{equation}
\cap: H^{m}(E_R^*, {\rm Meas}(F_R, K)) \times H_{m}(E_R^*, C_c(F_R, A)) \,\ellongrightarrow\, A.
\ellabel{e:cap1}
\end{equation}
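The integration pairing above can be made quite concrete. The following toy model (ours, with $F_R$ replaced by $\mathbf{Z}_p$) represents a measure by its values on the balls $a + p^j\mathbf{Z}_p$ and integrates a locally constant function as the finite Riemann sum appearing in (\ref{e:intdef}).
\begin{verbatim}
# Toy model of the pairing (f, mu) -> int f dmu: a measure on Z_p is given by
# its (bounded) values on balls a + p^j Z_p, compatible under refinement, and
# integrating a function that is constant on balls of level k is a finite sum.
p, k = 3, 2

def dirac(c):
    # Dirac measure at c: mu(a + p^j Z_p) = 1 if c = a mod p^j, else 0
    return lambda a, j: 1 if (c - a) % p**j == 0 else 0

def integrate(f, mu, level):
    return sum(f(a) * mu(a, level) for a in range(p**level))

mu = dirac(7)                    # 7 = 1 + 2*3 as an element of Z_3
f = lambda x: x % p              # locally constant of level 1 <= k
print(integrate(f, mu, k))       # f(7) = 1
\end{verbatim}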
\section{Conjecture}
\ellabel{section:mainconj}
\subsection{Statement}
We recall the following definition from \cite{spiesshilb, ds}. Let $\mathfrak{p} \in R$, let $g\colon F_\mathfrak{p}^* \ellongrightarrow K$ be a continuous homomorphism and let $f\in C_c(F_{\mathfrak{p}}, \mathbf{Z})$, i.e.\ $f\colon F_{\mathfrak{p}} \ellongrightarrow \mathbf{Z}$ is a locally constant function with compact support. For $a\in F_{\mathfrak{p}}^*$ let $af\in C_c(F_{\mathfrak{p}}, \mathbf{Z})$ be given by $(af)(x) = f(a^{-1}x)$. Since $(1-a) \cdot f= f-af$ vanishes at $0\in F_{\mathfrak{p}}$, the function \begin{align*}
F_{\mathfrak{p}}^* & \ellongrightarrow K \\
x& \ellongmapsto (f(x)-f(a^{-1}x)) \cdot g(x) \end{align*}
extends continuously to $F_{\mathfrak{p}}$ hence can be viewed as a function \[ (f-af) \cdot g \colon F_{\mathfrak{p}}\to K. \] The map
\[ z_{f, g}\colon F_\mathfrak{p}^* \ellongrightarrow C_c(F_\mathfrak{p}, K) \] given by
\begin{align}
\ellabel{e:zgdef}
z_{f,g}(a) & = `` (1 - a)(g \cdot f) " \nonumber \\ & := (af) \cdot (g-ag)+ (f-af)\cdot g \\
& = (af) \cdot g(a) + (f-af) \cdot g \nonumber
\end{align}
is an inhomogeneous 1-cocycle. Its class $[z_{f,g}] \in H^1(F_\mathfrak{p}^*, C_c(F_\mathfrak{p}, K))$ depends only on the value of $f$ at $0$, i.e. if $f,f'\in C_c(F_{\mathfrak{p}}, \mathbf{Z})$ satisfy $f(0)=f'(0)$ then $[z_{f,g}] = [z_{f',g}]$. In particular if we choose $f$ so that $f(0)=1$, e.g.\ $f= 1_{\mathcal{O}_{\mathfrak{p}}}$ then
\[
c_g := [z_{f,g}] \in H^1(F_\mathfrak{p}^*, C_c(F_\mathfrak{p}, K))
\]
depends only on $g$. Note that the expression $ (1 - a)(g \cdot f)$ has no literal meaning since the function $g$ does not necessarily extend to a continuous function
on $F_\mathfrak{p}$ (and for this reason, the cocycle $z_{f,g}$ is not necessarily a coboundary); nevertheless this expression provides the intuition for the definition of $z_{f,g}$ given by the right side of (\ref{e:zgdef}).
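The cocycle property of $z_{f,g}$, namely $z_{f,g}(ab) = z_{f,g}(a) + a\cdot z_{f,g}(b)$, can be tested numerically. The following check is purely illustrative and treats only the simplest case $F_\mathfrak{p} = \mathbf{Q}_p$, $f = 1_{\mathbf{Z}_p}$, $g = \ord_p$, where every function involved depends only on $p$-adic valuations.
\begin{verbatim}
# Check z(ab) = z(a) + a.z(b) for z = z_{f,g}, with f = 1_{Z_p}, g = ord_p,
# and (a.phi)(x) = phi(x/a).  We evaluate at nonzero powers of p only.
from fractions import Fraction as Fr
p = 5

def ordp(x):
    x, v = Fr(x), 0
    while x.numerator % p == 0:
        x /= p; v += 1
    while x.denominator % p == 0:
        x *= p; v -= 1
    return v

f = lambda x: 1 if ordp(x) >= 0 else 0      # indicator of Z_p (x != 0)

def z(a):
    # z_{f,g}(a)(x) = f(x/a) g(a) + (f(x) - f(x/a)) g(x)
    return lambda x: f(x / a) * ordp(a) + (f(x) - f(x / a)) * ordp(x)

a, b = Fr(p)**2, Fr(1, p)
lhs = z(a * b)
rhs = lambda x: z(a)(x) + z(b)(x / a)       # z(a) + a.z(b)
print(all(lhs(Fr(p)**m) == rhs(Fr(p)**m) for m in range(-4, 5)))  # True
\end{verbatim}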
The construction above in particular applies
to the homomorphisms $o_\mathfrak{p}, \ell_\mathfrak{p}$ from (\ref{e:olpdef}) and we thus obtain classes
$c_{o_\mathfrak{p}}, c_{\ell_{\mathfrak{p}}} \in H^1(F_\mathfrak{p}^*, C_c(F_\mathfrak{p}, K))$ for each $\mathfrak{p} \in R$.
Recall that $r = r_\chi = \#R$ and that $E_R^*$ denotes the rank $n + r - 1$
group of totally positive $R$-units in $F$.
As in (\ref{e:vtdef}), let $\vartheta \in H_{n+r-1}(E_R^*, \mathbf{Z}) \cong \mathbf{Z}$ be a generator (this is well-defined up to sign). Cap-product with the Eisenstein cocycle yields a class
\[ \kappa_{\chi, \ellambda} \cap \vartheta \in H_r(E_R^*, {\rm Meas}(F_R, K)). \]
Label the elements of $R$ by $\mathfrak{p}_1, \mathfrak{p}_2, \dots, \mathfrak{p}_r$ and let $J\subset R$. Define classes
\[ c_{o}, c_{\ell, J} \in H^r( F_R^*, C_c(F_R, K) ) \] by
\begin{align*}
c_{o} &= c_{o_{\mathfrak{p}_1}} \cup \cdots \cup c_{o_{\mathfrak{p}_r}}\\
c_{\ell, J} &= c_{g_1} \cup \cdots \cup c_{g_r}
\end{align*}
where
\[
g_i= \left\{\begin{array}{cc} \ell_{\mathfrak{p}_i} & \mbox{ if $\mathfrak{p}_i\in J$,}\\ o_{\mathfrak{p}_i}& \mbox{ if $\mathfrak{p}_i\not\in J$.}\end{array}\right.
\]
Here the cup-product is induced by the canonical map
\[
C_c(F_{\mathfrak{p}_1}, K) \otimes \cdots \otimes C_c(F_{\mathfrak{p}_r}, K) \ellongrightarrow C_c(F_R,K)
\]
that sends a tensor $f_1\otimes \elldots \otimes f_r$ to the function
\begin{align*}
F_R = \prod_{i=1}^r F_{\mathfrak{p}_i} &\ellongrightarrow K \\
(x_i)_{i=1,\elldots, r}&\ellongmapsto \prod_{i=1}^r f_i(x_i).
\end{align*}
Define a constant
\begin{equation} \ellabel{e:rpandef}
\mathscr{R}_p(\chi)_{J, {\mathrm{an}}} := (-1)^{\# J}\, \frac{c_{\ell, J} \cap (\kappa_{\chi, \ellambda} \cap \vartheta)
}{c_{o} \cap (\kappa_{\chi, \ellambda} \cap \vartheta)} \in K.
\end{equation}
Here the first cap-product of the numerator and denominator is the pairing (\ref{e:cap1})
for $m=r$ and $A=K$. We will show in Proposition~\ref{p:denom} below that
the denominator of (\ref{e:rpandef}) is non-zero and in
Proposition~\ref{p:kappa} that the constant $\mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$ is independent of the auxiliary prime $\ellambda$.
We propose:
\begin{conjecture} \ellabel{c:main2} For each subset $J \subset R$, we have $\mathscr{R}_p(\chi)_J = \mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$.
\end{conjecture}
For $t\in K$ we define the class $c_{to+\ell} \in H^r(F_R^*, C_c(F_R, K))$ by
\[
c_{to+\ell} = c_{(to_{\mathfrak{p}_1} + \ell_{\mathfrak{p}_1})} \cup \cdots \cup c_{(to_{\mathfrak{p}_r} + \ell_{\mathfrak{p}_r})}
\]
and propose
\begin{conjecture} \ellabel{c:main2char} The characteristic polynomial of Gross' regulator matrix is given by
\[
\det(t \cdot \mathbbm{1}_r - {\mscr M}_p(\chi)) = \frac{c_{to+\ell} \cap (\kappa_{\chi, \lambda} \cap \vartheta)
}{c_{o} \cap (\kappa_{\chi, \lambda} \cap \vartheta)}.
\]
\end{conjecture}
Conjecture \ref{c:main2char} follows from Conjecture \ref{c:main2} since we have
\begin{align*}
c_{to+\ell} &= \sum_{k=0}^r t^k \sum_{\stack{J\subset R}{\#J=r-k}} c_{\ell, J},\\
\det(t \cdot \mathbbm{1}_r - {\mscr M}_p(\chi)) & = \sum_{k=0}^r t^k (-1)^{r-k} \sum_{\stack{J\subset R}{\#J=r-k}} \mathscr{R}_p(\chi)_J.
\end{align*}
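The first of these identities is a purely formal expansion: the cup product is bilinear and the factors are kept in their original order, so no commutativity is required. A check with non-commuting placeholders for the degree-one classes (illustrative only, for $r = 2$):
\begin{verbatim}
# Expand (t*o1 + l1)(t*o2 + l2) keeping the order of the factors and compare
# with sum_k t^k sum_{#J = r-k} c_{l,J}; placeholders are non-commutative.
import sympy as sp
t = sp.symbols('t')
o1, o2, l1, l2 = sp.symbols('o1 o2 l1 l2', commutative=False)
lhs = sp.expand((t*o1 + l1) * (t*o2 + l2))
rhs = t**2 * o1*o2 + t * (l1*o2 + o1*l2) + l1*l2   # J = {}, {p1}, {p2}, {p1,p2}
print(sp.expand(lhs - rhs))   # 0
\end{verbatim}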
\begin{remark}
\rm Instead of considering only the homomorphisms $\ell_{\mathfrak{p}}$ in the definition of Gross' regulator matrix and in the numerator of (\ref{e:rpandef}), one may consider arbitrary continuous homomorphisms $\psi_1\colon F_{\mathfrak{p}_1}^*\to K, \ldots, \psi_r\colon F_{\mathfrak{p}_r}^*\to K$. So we conjecture that more generally we have
\[
\det\elleft(\elleft(- \frac{\psi_j(u_{\mathfrak{p}_i, \chi})}{o_{\mathfrak{p}_i}(u_{\mathfrak{p}_i, \chi})}\right)_{i,j=1, \elldots, r}\right) = (-1)^r\, \frac{c_{\psi_{\mathfrak{p}}, \mathfrak{p}\in R} \cap (\kappa_{\chi, \ellambda} \cap \vartheta)
}{c_{o} \cap (\kappa_{\chi, \ellambda} \cap \vartheta)}
\]
where $c_{\psi_{\mathfrak{p}}, \mathfrak{p}\in R} = c_{\psi_1} \cup \cdots \cup c_{\psi_r}$.
\end{remark}
\subsection{Well-formedness of the conjecture}
The following proposition shows that the denominator of (\ref{e:rpandef}) is non-zero. The result was essentially proved in \cite{ds}, but we explain here how to relate our present notation to the setting of {\em loc.\ cit.}
\begin{prop}
\ellabel{p:denom}
With the correct choice of sign for $\vartheta$ we have
\[
c_{o} \cap (\kappa_{\chi, \lambda} \cap \vartheta) = (-1)^{r(n-1)} (1 - \chi(\lambda)\ell)L(\chi, 0) \prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p})).
\]
\end{prop}
\begin{proof} Let ${\mscr I}_R$ be the group of fractional ideals of $F$ generated by the elements of $R$ and let ${\mscr H}_R\subseteq {\mscr I}_R$ be the subgroup of principal fractional ideals that have a totally positive generator. Let
$\mathfrak{c}_1,\elldots, \mathfrak{c}_m$ be a system of representatives for ${\mscr I}_R/{\mscr H}_R$. Recall from (\ref{e:kapchi}) that $\{\mathfrak{b}_1, \dotsc, \mathfrak{b}_h\}$ denotes a set of integral ideals representing the narrow class group of $\mathcal{O}_{F,R}$.
Note that $\{\mathfrak{b}_i\mathfrak{c}_j\mid i=1,\elldots, h, \, j=1,\elldots, m\}$ is a system of representatives for the narrow class group of $F$.
Let $E^*$ be the group of totally positive units of $\mathcal{O}_F$. Let $\mathcal{D}$ denote a signed Shintani domain for the action of $E^*$ on $(\mathbf{R}^{>0})^n$. This is a finite formal linear combination of simplicial cones (as in (\ref{e:conedef}))
\[ \mathcal{D} = \sum_{\nu} a_{\nu} C_\nu, \qquad a_\nu \in \mathbf{Z}, \quad C_\nu = \text{simplicial cone,} \]
whose characteristic function $1_\mathcal{D} := \sum_\nu a_\nu 1_{C_\nu}$ satisfies
\[ \sum_{\epsilon \in E^*}1_{\mathcal{D}}( \epsilon \cdot x) = 1 \]
for all $x \in (\mathbf{R}^{>0})^n$. We have (compare e.g.\ \cite[Lemma 5.8]{ds})
\begin{align}
\begin{split}
\ellabel{e:shinlchi}
\sum_{\nu} \sum_{i=1}^h\sum_{ j=1}^m a_\nu \chi(\mathfrak{b}_i) \mathrm{N}(\mathfrak{b}_i\mathfrak{c}_j)^{-s} L_\lambda(C_\nu, \chi, \mathfrak{b}_i, (\mathfrak{c}_j^{-1})^R, s) &= \\
(1-\chi(\lambda) \ell^{1-s}) L(\chi, s) &\prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p})\mathrm{N}\mathfrak{p}^{-s})
\end{split}
\end{align}
where $(\mathfrak{c}_j^{-1})^R := \prod_{\mathfrak{p}\in R} \mathfrak{c}_j^{-1}\otimes \mathcal{O}_{\mathfrak{p}}$. Since $(\mathfrak{c}_j^{-1})^R$ is an $E^*$-stable subset of $F_R$ we see that by restricting the cocycle $\mu_{\chi, \mathfrak{b}_i, \ellambda}$ to
$E^*$ and keeping $U=(\mathfrak{c}_j^{-1})^R$ fixed, i.e.\ by setting
\[
\mu_{\chi, \mathfrak{b}_i, \mathfrak{c}_j, \ellambda}(x_1, \dotsc, x_n) := \mu_{\chi, \mathfrak{b}_i, \ellambda}(x_1, \dotsc, x_n)((\mathfrak{c}_j^{-1})^R)
\]
for $x_1, \ldots, x_n\in E^*$ we obtain a homogeneous $(n-1)$-cocycle yielding a class \[ [\mu_{\chi, \mathfrak{b}_i, \mathfrak{c}_j, \lambda}]\in H^{n-1}(E^*, \mathcal{O}_K).\] For the correct choice of the generator $\vartheta_0$ of $H_{n-1}(E^*, \mathbf{Z})\cong \mathbf{Z}$ we obtain
\begin{equation}
\ellabel{e:shinlchi2}
(\sum_{i=1}^h\sum_{ j=1}^m \chi(\mathfrak{b}_i) [\mu_{\chi, \mathfrak{b}_i, \mathfrak{c}_j, \lambda}])\cap \vartheta_0 = (1-\chi(\lambda) \ell) L(\chi, 0)\prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p})).
\end{equation}
Indeed by \cite[Theorem 1.5]{cdg} the left hand side is equal to the left hand side of (\ref{e:shinlchi}) for $s=0$ and a specific signed Shintani domain $\mathcal{D} = \sum a_\nu C_\nu$.
This formula can also be interpreted as follows. Let $\mathbf{Z}[{\mscr I}_R]$ be the group ring of ${\mscr I}_R$.
Consider the $E_R^*$-equivariant isomorphism
\begin{equation}
\ellabel{e:shinlchi3}
(\Ind_{E^*}^{E_R^*} \mathbf{Z})^m\ellongrightarrow \mathbf{Z}[{\mscr I}_R]
\end{equation}
corresponding by Frobenius reciprocity to the
$E^*$-equivariant map
\begin{align*}
\mathbf{Z}^m &\ellongrightarrow \mathbf{Z}[{\mscr I}_R] \\
(a_1, \elldots, a_m) &\ellongmapsto \prod_{j=1}^m \mathfrak{c}_j^{a_j}.
\end{align*}
The assignment $\mathfrak{c}\mapsto 1_{\mathfrak{c}^R}$ extends to a homomorphism
\begin{equation} \ellabel{e:shinlchi4}
\delta\colon \mathbf{Z}[{\mscr I}_R]\ellongrightarrow C_c(F_R, \mathbf{Z}).
\end{equation} Composing (\ref{e:shinlchi3}) and (\ref{e:shinlchi4}) we obtain an $E_R^*$-equivariant map
\begin{equation} \ellabel{e:shinlchi5}
(\Ind_{E^*}^{E_R^*} \mathbf{Z})^m \ellongrightarrow C_c(F_R, \mathbf{Z}).
\end{equation}
By Shapiro's Lemma the homomorphism (\ref{e:shinlchi5}) induces a homomorphism
\[
H_{n-1}(E^*, \mathbf{Z})^m \ellongrightarrow H_{n-1}(E_R^*, C_c(F_R, \mathbf{Z}))
\]
and we denote by $\widetilde{\vartheta}\in H_{n-1}(E_R^*, C_c(F_R, \mathbf{Z}))$ the image of $(\vartheta_0, \ldots, \vartheta_0)\in H_{n-1}(E^*, \mathbf{Z})^m$. Tracing through the definitions, formula (\ref{e:shinlchi2}) is equivalent to \[ \kappa_{\chi, \lambda} \cap \widetilde{\vartheta} = (1 - \chi(\lambda)\ell) L(\chi, 0)\prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p})). \]
On the other hand by \cite[Lemma 3.5]{ds} we can choose $\vartheta$ so that $\widetilde{\vartheta} = c_o \cap \vartheta$. It follows that
\[
c_o \cap (\kappa_{\chi, \lambda} \cap \vartheta)
= (-1)^{r(n-1)} \kappa_{\chi, \lambda}\cap \widetilde{\vartheta} = (-1)^{r(n-1)} (1- \chi(\lambda)\ell) L(\chi, 0) \prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p}))
\] as desired.
\end{proof}
\begin{prop}
\ellabel{p:kappa}
The constant $\mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$ is independent of the choice of the auxiliary prime $\ellambda$.
\end{prop}
\begin{proof} To see this it is useful to work within the adelic framework (compare e.g.\ \cite{ds} or \cite{ds2}). Before delving into the details, let us summarize the basic idea of the proof.
Let $\ellambda'$ be another auxiliary cyclic prime ideal lying above a prime number $\ell'\ge n+2$ with $\ell\ne \ell'$, $S_{\ell'}\cap S=\emptyset$. Replacing $F_R$ with certain adelic spaces, we will describe the construction of classes
\[ \tilde{\kappa}_{\ellambda}, \tilde{\kappa}_{\ellambda'}, \text{ and } \tilde{\kappa}_{\ellambda, \ellambda'} \]
lying in cohomology groups endowed with an action of the group ${\mscr I}^S$ of fractional ideals of $F$ that are relatively prime to $S$.
It will follow directly from the definitions that
\begin{equation}
\ellabel{e:changelambda}
\widetilde{\kappa}_{\ellambda} - \ell' (\ellambda'^{-1} \cdot \widetilde{\kappa}_{\ellambda}) = \tilde{\kappa}_{\ellambda, \ellambda'} = \widetilde{\kappa}_{\ellambda'} - \ell (\ellambda^{-1} \cdot \widetilde{\kappa}_{\ellambda'}).
\end{equation}
Under cap product with the classes appearing in the numerator and denominator of the definition of
$\mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$, the action of $\ellambda^{-1}$ is given by multiplication by $\chi(\ellambda)$.
Therefore, replacing $\widetilde{\kappa}_{\ellambda}$ by the left side of (\ref{e:changelambda}) scales the numerator and denominator of $\mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$ each by the constant $1 - \ell' \chi(\ellambda')$, and leaves the ratio unchanged. The desired result then follows from (\ref{e:changelambda}). Let us now carry out the details of this construction.
Let $\mathbf{A}_f$ be the ring of finite adeles of $F$. For a finite set of finite places $\Sigma$ of $F$ we put \[ \mathbf{A}\!_f^{\Sigma} = \sideset{}{'}\prod_{v\not\in \Sigma\cup S_{\infty}} F_v, \qquad U^{\Sigma}=\prod_{v\not\in \Sigma\cup S_{\infty}} \mathcal{O}_v^* \subset (\mathbf{A}\!_f^{\Sigma})^*. \]
Let $a^SU^S\in (\mathbf{A}\!_f^{S})^*/U^S$ and let \[ \mathfrak{a} = F\cap a^S\prod_{v\not\in S\cup S_{\infty}} \mathcal{O}_v \] be the fractional $\mathcal{O}_{F, S}$-ideal attached to the idele $a^S$. For linearly independent elements $x_1, \dotsc, x_n \in F^*_+ \subset (\mathbf{R}^{>0})^n$ and a compact open set
$U \subset F_R\times F_{S-R}^*$,
the Shintani zeta function
\[ \zeta(x_1, \dots, x_n; U, \mathfrak{a}, s) = \sum_{\stack{\xi \in C^*(x_1, \dotsc, x_n)}{\xi\in \mathfrak{a} \cap U}} \frac{1}{\mathrm{N}\xi^s}, \qquad \mathbf{R}eal(s) > 1
\]
has a meromorphic continuation to $\mathbf{C}$ and its values at non-positive integers are rational.
As any compact open subset of \[ F_R \times (\mathbf{A}\!_f^{R})^*/U^S = F_R\times F_{S-R}^* \times (\mathbf{A}\!_f^{S})^*/U^S \] can be written as a disjoint union of sets of the form $U\times a^S U^S$ considered above, the assignment
\[
U\times a^S U^S \ellongmapsto \mathrm{sgn}(x) \zeta(x_1, \dots, x_n; U, \mathfrak{a}, 0)
\]
defines a $\mathbf{Q}$-valued distribution $\widetilde{\mu}(x_1, \dots, x_n)$ on $F_R \times (\mathbf{A}\!_f^{R})^*/U^S$. Moreover \[ (x_1, \dots, x_n)\ellongmapsto \widetilde{\mu}(x_1, \dots, x_n) \] is a homogeneous $(n-1)$-cocycle for the group $F_+^*$ of totally positive elements of $F$.
As before, ``smoothing" with respect to the auxiliary prime $\ellambda$ yields an integral valued distribution. For that let $S_\ell$ denote the set of primes of $F$ lying above $\ell$ and
let $(F_+^{\ell})^* \subset F^*_+ $ be the subgroup of elements with valuation $0$ at every prime in $S_{\ell}$. Given a compact open subset \[ U \subset F_R \times (\mathbf{A}\!_f^{R\cup S_{\ell}})^*/U^{S\cup S_{\ell}} \] we define two associated compact open subsets \[ U_0, U_1 \subset F_R \times (\mathbf{A}\!_f^{R})^*/U^S \] by
\[ U_0 = U \times \prod_{v \mid \ell} \mathcal{O}_v^*, \qquad U_1 = U \times \varpi_\ellambda \mathcal{O}_\ellambda^* \times \prod_{\stack{v \mid \ell}{v \neq \ellambda}}\mathcal{O}_v^*, \]
where $\varpi_{\ellambda}$ is a prime element of $\mathcal{O}_{\ellambda}$. For $x_1, \dotsc, x_n \in (F_+^{\ell})^*$, define
\[
\widetilde{\mu}_{\ellambda}(x_1, \dots, x_n)(U) = \widetilde{\mu}(x_1, \dots, x_n)(U_0) - \ell \widetilde{\mu}(x_1, \dots, x_n)(U_1).
\]
Then $(x_1, \dots, x_n)\mapsto \widetilde{\mu}_{\ellambda}(x_1, \dots, x_n)$ is a homogeneous $(n-1)$-cocycle for $(F_+^{\ell})^*$ with values in \[ \Dist(F_R \times (\mathbf{A}\!_f^{R\cup S_{\ell}})^*/U^{S\cup S_{\ell}}, \mathbf{Z})= \mathbf{H}om(C_c(F_R \times (\mathbf{A}\!_f^{R\cup S_{\ell}})^*/U^{S\cup S_{\ell}}, \mathbf{Z}), \mathbf{Z}). \] Note that there exists a canonical isomorphism of $F_+^*$-modules
\[
\Ind_{(F_+^{\ell})^*}^{F^*_+} C_c(F_R \times (\mathbf{A}\!_f^{R\cup S_{\ell}})^*/U^{S\cup S_{\ell}})\cong C_c(F_R \times (\mathbf{A}\!_f^{R})^*/U^S)
\]
and hence, by Shapiro's Lemma, an isomorphism
\begin{equation} \ellabel{e:shapiro}
H^{n-1}((F_+^{\ell})^*, \Dist(F_R \times (\mathbf{A}\!_f^{R\cup S_{\ell}})^*/U^{S\cup S_{\ell}}, \mathbf{Z})) \cong H^{n-1}(F_+^*, \Dist(F_R \times (\mathbf{A}\!_f^R)^*/U^S, \mathbf{Z})).
\end{equation}
A $\mathbf{Z}$-valued distribution is of course $p$-adically bounded and hence yields a
measure. Therefore by (\ref{e:shapiro}) the cohomology class of the cocycle $\widetilde{\mu}_{\ellambda}$ defines an element
\begin{equation}
\ellabel{e:wkappa}
\widetilde{\kappa}_{\ellambda} = \widetilde{\kappa}_{\ellambda}^R \in H^{n-1}(F_+^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K)).
\end{equation}
The canonical action of the finite ideles $\mathbf{A}_f^*$ on $C_c(F_R \times (\mathbf{A}\!_f^R)^*/U^S, \mathbf{Z})$ induces an action of the idele class group $\mathbf{A}_f^*/F_+^* U^S$ on $H^{n-1}(F_+^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K))$. In particular we obtain an action of the group ${\mscr I}^S$ of fractional ideals of $F$ that are relatively prime to $S$ on $H^{n-1}(F_+^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K))$.
Now let $\ellambda'$ be another auxiliary cyclic prime ideal as above. One can carry out the above construction of the smoothed cocycle $\widetilde{\mu}_{\ellambda}$ not only for the single primes $\ellambda, \ellambda'$, but more generally for a finite set of such primes (see \cite{ds}).
One can easily see that the image of $\widetilde{\kappa}_{\ellambda,\ellambda'}$ under the canonical map
\[
H^{n-1}((F_+^{\ell, \ell'})^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^{R\cup S_{\ell, \ell'}})^*/U^{S\cup S_{\ell, \ell'}}, K)) \cong H^{n-1}(F_+^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K))
\]
is equal to each of the left and right hand sides of (\ref{e:changelambda}); this proves (\ref{e:changelambda}).
By class field theory we can view our character $\chi$ as a character of the idele class group
\[ \chi\colon (\mathbf{A}_f)^*/F_+^*U^S\ellongrightarrow \mathcal{O}_K^*. \] By assumption the local components $\chi_{\mathfrak{p}}\colon F_{\mathfrak{p}}^* \ellongrightarrow K^*$ of $\chi$ are trivial for all $\mathfrak{p}\in R$, so we may omit them, i.e.\ we view $\chi$ as a character
\[ \chi^R\colon (\mathbf{A}\!_f^R)^*/F_+^*U^S\ellongrightarrow K^*. \]
The character $\chi^R$ can thus be viewed as an element
\begin{equation} \ellabel{e:chir}
\chi^R \in H^0(F_+^*, C((\mathbf{A}\!_f^R)^*/U^S, K)). \end{equation}
Let ${\cal F}$ denote a finite subset of $(\mathbf{A}\!_f^R)^*/U^R$ that is a fundamental domain for the action of $F_+^*/E_R^*$. For example, we may choose \[ {\cal F} = \{b_1U^R, \elldots, b_h U^R\} \] where $b_1, \elldots, b_h\in (\mathbf{A}\!_f^R)^*$ are ideles whose associated fractional $\mathcal{O}_{F,R}$-ideals are $(\mathfrak{b}_1)_R, \elldots, (\mathfrak{b}_h)_R$.
The constant function 1 yields an element $1_{\cal F} \in H^0(E_R^*, C({\cal F}, \mathbf{Z}))$, which under cap product with $\vartheta$ yields an element
\begin{equation} \ellabel{e:1fc}
1_{{\cal F}} \cap \vartheta \in H_{n+ r- 1}(E_R^*, C({\cal F}, \mathbf{Z})) \cong H_{n+r-1}(F_+^*, C_c((\mathbf{A}\!_f^R)^*/U^R, \mathbf{Z})).
\end{equation}
The isomorphism in (\ref{e:1fc}) follows from Shapiro's Lemma. We denote the image of $1_{{\cal F}} \cap \vartheta$ under this isomorphism by $\vartheta^R$. By taking the cap product with (\ref{e:chir}) we obtain a class
\begin{equation}
\chi^R \cap \vartheta^R\in H_{n+r-1}(F_+^*, C_c((\mathbf{A}\!_f^R)^*/U^S, K)).
\ellabel{e:intcohom2a}
\end{equation}
The first cap-product in (\ref{e:intcohom2a})
is induced by the canonical pairing
\begin{align*}
C((\mathbf{A}\!_f^R)^*/U^S, K) \times C_c((\mathbf{A}\!_f^R)^*/U^R, \mathbf{Z}) &\ellongrightarrow C_c((\mathbf{A}\!_f^R)^*/U^S,K), \\
(f,g) & \ellongmapsto f\cdot g.
\end{align*}
By taking the cap-product of the homology class (\ref{e:intcohom2a}) with $c_{\ell, J}, c_o \in H^r( F_+^*, C_c(F_R, K))$ we obtain classes
\[
c_{\ell, J}\cap (\chi^R \cap \vartheta^R), \, c_o\cap (\chi^R \cap \vartheta^R)\in H_{n-1}(F_+^*, C_c(F_R\times (\mathbf{A}\!_f^R)^*/U^S, K)).
\]
Similar to \eqref{e:cap1} we have a canonical pairing
\begin{equation}
\cap: H^{n-1}(F_+^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K))\times H_{n-1}(F_+^*, C_c(F_R\times (\mathbf{A}\!_f^R)^*/U^S, K)) \ellongrightarrow K
\ellabel{e:intcohom2b}
\end{equation}
induced by $p$-adic integration. Tracing through the definitions, one sees that
\begin{equation}
\ellabel{e:regadelic}
c \cap (\kappa_{\chi, \ellambda} \cap \vartheta) = c\cap (\widetilde{\kappa}_{\ellambda} \cap (\chi^R \cap \vartheta^R))
\end{equation}
for $c= c_{\ell, J}$ or $c=c_o$, and in particular
\begin{equation}
\ellabel{e:regadelic2}
\mathscr{R}_p(\chi)_{J, {\mathrm{an}}} = (-1)^{\#J}\, \frac{c_{\ell, J}\cap (\widetilde{\kappa}_{\lambda} \cap (\chi^R \cap \vartheta^R))}{c_o\cap (\widetilde{\kappa}_{\lambda}\cap (\chi^R \cap \vartheta^R))}.
\end{equation}
Note that the pairing (\ref{e:intcohom2b}) is ${\mscr I}^S$-equivariant. It follows that for $\mathfrak{a}\in {\mscr I}^S$ and any class $c\in H^r( F_+^*, C_c(F_R, K))$ we have
\begin{align*}
c\cap ((\mathfrak{a}^{-1} \cdot \widetilde{\kappa}_{\ellambda}) \cap (\chi^R \cap \vartheta^R)) &=c \cap (\widetilde{\kappa}_{\ellambda} \cap (\mathfrak{a} \cdot (\chi^R \cap \vartheta^R)))\\
&= \chi(\mathfrak{a}) \, c\cap (\widetilde{\kappa}_{\ellambda} \cap (\chi^R \cap \vartheta^R)).
\end{align*}
Consequently, the fraction on the right hand side of (\ref{e:regadelic2}) does not change if we replace $\widetilde{\kappa}_{\ellambda}$ by $\widetilde{\kappa}_{\ellambda} - \ell' (\ellambda'^{-1} \cdot \widetilde{\kappa}_{\ellambda})$. Hence by (\ref{e:changelambda}) the constant $\mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$ does not depend on the choice of the auxiliary prime $\ellambda$.
\end{proof}
\subsection{Alternate formulation}
We give now a slightly different description of $\mathscr{R}_p(\chi)_{J, {\mathrm{an}}}$ that will be used in section \ref{ss:diag}. In the following $J$ denotes a fixed non-empty subset of $R$. We set $s= \# J$ and label the elements of $J$ by $\mathfrak{p}_1, \mathfrak{p}_2, \dots, \mathfrak{p}_s$. Let $E_J^*$ be the group of totally positive $J$-units in $F$.
Given a fractional ideal $\mathfrak{b}$ of $F$ with $(\mathfrak{b}, S) = 1$, a compact open subset $U$ of $F_J$ and a union of simplicial cones $C$ in $(\mathbf{R}^{>0})^n$, we consider the Shintani zeta function
\begin{equation} \ellabel{e:szf}
\zeta(C, \mathfrak{b}, U, s):= \mathrm{N}\mathfrak{b}^{-s}\sum_{\alpha} \mathrm{N}\alpha^{-s},
\end{equation}
where $\alpha$ ranges over elements of $\mathfrak{b}^{-1}_J= \mathfrak{b}^{-1}\otimes_{\mathcal{O}_F} \mathcal{O}_{F, J}$ satisfying the conditions $(\alpha, S-R) = 1$, $\alpha \equiv 1 \pmod{\mathfrak{f}}$, $\alpha \in U$, and $\alpha \in C$. Using the auxiliary prime $\lambda \subset \mathcal{O}_F$ satisfying the conditions stated in \S\ref{s:eis}, we define
\begin{equation} \ellabel{e:szf2} \zeta_\ellambda(C, \mathfrak{b}, U, s):= \zeta(C, \mathfrak{b}, U, s) - \ell^{1-s} \zeta(C, \mathfrak{b} \ellambda^{-1}, U, s).
\end{equation}
Let $\mathfrak{f}$ be the conductor of the extension $H/F$. By $E(\mathfrak{f})$ (respectively $E(\mathfrak{f})_J$) we denote the group of totally positive units (respectively totally positive $J$-units) congruent to $1$ (mod $\mathfrak{f}$). For $x_1, \dotsc, x_n \in E(\mathfrak{f})_J$ and compact open $U \subset F_J$ we put
\begin{equation}
\nu_{\mathfrak{b}, \ellambda}^J(x_1, \dotsc, x_n)(U) := \mathrm{sgn}(x) \zeta_\ellambda( C^*(x_1, \dotsc, x_n), \mathfrak{b}, U, 0).
\ellabel{e:nupartf}
\end{equation}
Then $\nu_{\mathfrak{b}, \ellambda}^J$ is again a homogeneous $(n-1)$-cocycle on $E(\mathfrak{f})_J$ with values in the space of $\mathbf{Z}$-distribution on $F_J$, hence defines a class
\begin{equation}
\omega_{\mathfrak{f}, \mathfrak{b}, \ellambda}^J := [\nu_{\mathfrak{b}, \ellambda}^J]\in H^{n-1}(E(\mathfrak{f})_J, {\rm Meas}(F_J, K)).
\ellabel{e:nupartclass}
\end{equation}
Let $G_\mathfrak{f}$ denote the narrow ray class group of $F$ of conductor $\mathfrak{f}$. We also consider the following variant of the Eisenstein cocycle (\ref{e:kapchi})
\begin{equation}
\ellabel{e:nuchif}
\omega_{\chi, \ellambda}^J = \sum_{[\mathfrak{b}] \in G_\mathfrak{f}} \chi(\mathfrak{b}) \omega_{\mathfrak{f}, \mathfrak{b}, \ellambda}^J
\end{equation}
where the sum ranges over a system of representatives of $G_{\mathfrak{f}}$.
We also define the following classes in $H^s( F_J^*, C_c(F_J, K))$
\[ c_{o}^J= c_{o_{\mathfrak{p}_1}} \cup \cdots \cup c_{o_{\mathfrak{p}_s}}, \qquad c_{\ell}^J = c_{\ell_{\mathfrak{p}_1}} \cup \cdots \cup c_{\ell_{\mathfrak{p}_s}}.
\]
\begin{prop}
\ellabel{p:Jvariant}
Let $\vartheta'\in H_{n+s-1}(E(\mathfrak{f})_J, \mathbf{Z})$ be a generator. Then we have
\begin{equation}
\ellabel{e:Jvariant}\mathscr{R}_p(\chi)_{J, {\mathrm{an}}} = (-1)^{\#J} \, \frac{c_{\ell}^J \cap (\omega_{\chi, \ellambda}^J \cap \vartheta')
}{c_{o}^J \cap (\omega_{\chi, \ellambda}^J \cap \vartheta')} .
\end{equation}
In fact the numerator and denominator of the right hand sides of (\ref{e:rpandef}) and of (\ref{e:Jvariant}) coincide up to the same sign.
\end{prop}
\begin{proof} As in the proof of Prop.\ \ref{p:kappa} it is best to work within the adelic framework.
We put $J' = R-J$, $S' = S-J'$ and label the elements of $J'$ by $\mathfrak{p}_{s+1}, \dots, \mathfrak{p}_r$.
By replacing everywhere $R$ by $J$ in the definition of (\ref{e:wkappa}) and (\ref{e:1fc}) we obtain classes
\begin{equation*}
\widetilde{\kappa}_{\ellambda}^J \in H^{n-1}(F_+^*, {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K)).
\end{equation*}
and
\begin{equation*}
\vartheta^J \in H_{n+s-1}(F_+^*, C_c((\mathbf{A}\!_f^J)^*/U^J, \mathbf{Z})).
\end{equation*}
We claim that
\begin{eqnarray}
\ellabel{e:Jvariant1a}
c_{\ell, J} \cap (\kappa_{\chi, \ellambda} \cap \vartheta) = \pm c_{\ell}^J \cap (\widetilde{\kappa}_{\ellambda}^J \cap (\chi^J \cap \vartheta^J)), \\
\ellabel{e:Jvariant1b} c_{o} \cap (\kappa_{\chi, \ellambda} \cap \vartheta) = \pm c_o^J\cap (\widetilde{\kappa}_{\ellambda}^J \cap (\chi^J \cap \vartheta^J)).
\end{eqnarray}
For this it suffices to show by (\ref{e:regadelic}) that
\begin{equation}
\ellabel{e:Jvariant2}
c_o^{J'}\cap (\widetilde{\kappa}_{\lambda}^R \cap (\chi^R \cap \vartheta^R)) = \pm \widetilde{\kappa}_{\lambda}^J \cap (\chi^J \cap \vartheta^J),
\end{equation}
since $c_{\ell, J}= c_{\ell}^J\cup c_o^{J'}$ and $c_o = c_o^J\cup c_o^{J'}$.
To prove (\ref{e:Jvariant2}) we introduce maps
\begin{eqnarray*}
&& \delta^J: C_c((\mathbf{A}\!_f^J)^*/U^{S'}, K) \ellongrightarrow C_c(F_{J'} \times (\mathbf{A}\!_f^R)^*/U^S, K)\\
&& \delta: C_c(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K) \ellongrightarrow C_c(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K)
\end{eqnarray*}
that have as local components at the places $\mathfrak{p} \in J'$ the maps
\[
\delta_{\mathfrak{p}}: C_c(F_{\mathfrak{p}}^*/U_{\mathfrak{p}}, \mathbf{Z}) \ellongrightarrow C_c(F_{\mathfrak{p}}, \mathbf{Z}), 1_{xU_{\mathfrak{p}}} \mapsto 1_{x\mathcal{O}_{\mathfrak{p}}}
\]
introduced in \cite[Remark 3.2]{ds} and that are the identity at all other places. More precisely we have canonical isomorphisms
\begin{eqnarray*}
&& C_c((\mathbf{A}\!_f^J)^*/U^{S'}, K) \cong \bigotimes_{i=s+1}^r C_c(F_{\mathfrak{p}_i}^*/U_{\mathfrak{p}_i}, \mathbf{Z})\otimes C_c((\mathbf{A}\!_f^R)^*/U^{S}, K)\\
&& C_c(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K) \cong \bigotimes_{i=s+1}^r C_c(F_{\mathfrak{p}_i}^*/U_{\mathfrak{p}_i}, \mathbf{Z})\otimes C_c(F_J \times (\mathbf{A}\!_f^R)^*/U^{S}, K)\\
\end{eqnarray*}
and canonical monomorphisms
\begin{eqnarray}
&& \bigotimes_{i=s+1}^r C_c(F_{\mathfrak{p}_i}, \mathbf{Z})\otimes C_c((\mathbf{A}\!_f^R)^*/U^{S}, K)
\hookrightarrow C_c(F_{J'} \times (\mathbf{A}\!_f^R)^*/U^S, K) \ellabel{e:tens1}\\
&& \bigotimes_{i=s+1}^r C_c(F_{\mathfrak{p}_i}, \mathbf{Z})\otimes C_c(F_J \times (\mathbf{A}\!_f^R)^*/U^{S}, K)\hookrightarrow
C_c(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K) \ellabel{e:tens2}
\end{eqnarray}
and we define $\delta^J$ and $\delta$ as the composite of
\[
\left(\bigotimes_{i=s+1}^r \delta_{\mathfrak{p}_i}\right) \otimes \Id_{C_c((\mathbf{A}\!_f^R)^*/U^{S}, K)}\qquad \mbox{and} \qquad \left(\bigotimes_{i=s+1}^r \delta_{\mathfrak{p}_i}\right) \otimes \Id_{C_c(F_J \times (\mathbf{A}\!_f^R)^*/U^{S}, K)}
\]
with (\ref{e:tens1}) and (\ref{e:tens2}) respectively.
Dualizing $\delta$ yields
\[
\Delta: {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K)\ellongrightarrow {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K).
\]
By tracing through the definitions one sees that $\widetilde{\kappa}_{\ellambda}^J$ is the image of $\widetilde{\kappa}_{\ellambda}^R$ under the induced homomorphism
\[
\Delta_*: H^{n-1}(F_+^*, {\rm Meas}(F_R \times (\mathbf{A}\!_f^R)^*/U^S, K))\ellongrightarrow H^{n-1}(F_+^*, {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K)).
\]
On the other hand by \cite[Lemma 3.5]{ds} the following equation holds in the homology group $H_{n+s-1}(F_+^*, C_c(F_{J'} \times (\mathbf{A}\!_f^R)^*/U^S, K))$:
\[
(\delta^J)_*(\chi^J \cap \vartheta^J) = \pm c_o^{J'}\cap (\chi^R \cap \vartheta^R).
\]
We conclude
\begin{eqnarray*}
& c_o^{J'}\cap (\widetilde{\kappa}_{\ellambda}^R \cap (\chi^R \cap \vartheta^R)) = \pm \widetilde{\kappa}_{\ellambda}^R \cap (\delta^J)_*(\chi^J \cap \vartheta^J) \\
& = \pm \Delta_*(\widetilde{\kappa}_{\ellambda}^R) \cap (\chi^J \cap \vartheta^J) = \pm \widetilde{\kappa}_{\ellambda}^J \cap (\chi^J \cap \vartheta^J).
\end{eqnarray*}
Having established (\ref{e:Jvariant2}) it remains to show that numerator and denominator of the right hand side of (\ref{e:Jvariant}) are equal to the right hand side of
(\ref{e:Jvariant1a}) and (\ref{e:Jvariant1b}) respectively. Let
$\mathfrak{f} = \prod_{\mathfrak{q}\nmid \infty} \mathfrak{q}^{n_{\mathfrak{q}}}$ be the prime decomposition of $\mathfrak{f}$ and put
\[
U(\mathfrak{f})^J = \prod_{\mathfrak{q}\nmid\infty, \mathfrak{q}\not\in J } U_\mathfrak{q}^{(n_\mathfrak{q})}
\]
and
\[
\mathbf{G}amma(\mathfrak{f}) = F_+^* \cap \prod_{\mathfrak{q}\mid \mathfrak{f}} U_\mathfrak{q}^{(n_\mathfrak{q})} = \{x\in F_+^*\mid \ord_{\mathfrak{q}}(x-1) \ge n_\mathfrak{q}\,\,\forall\,\,\mathfrak{q}\mid \mathfrak{f}\}.
\]
Here as usual we have set
\[
U_{\mathfrak{q}}^{(n_\mathfrak{q})} = \left\{\begin{array}{cc} \mathcal{O}_\mathfrak{q}^*& \mbox{if $n_\mathfrak{q} =0$;}\\
1 + \mathfrak{q}^{n_{\mathfrak{q}}} \mathcal{O}_{\mathfrak{q}} & \mbox{if $n_\mathfrak{q}>0$, i.e.\ if $\mathfrak{q}\in S_{\mathrm{ram}}$.}
\end{array}\right.
\]
Let $\pi: F_J \times (\mathbf{A}\!_f^J)^*/U^{S'} \to F_J \times (\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J$ denote the canonical projection. It induces a map
\[
C_c(F_J \times (\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J, \mathbf{Z}) \to C_c(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, \mathbf{Z}), \qquad f\mapsto f\circ \pi
\]
hence by dualising a homomorphism
\begin{equation}
\ellabel{e:imagemeas}
\pi: {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K) \to {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J, K).
\end{equation}
We denote by $\widetilde{\omega}_{\mathfrak{f}, \ellambda}^J$ the image of $\widetilde{\kappa}_{\ellambda}^J$ under \begin{eqnarray*}
H^{n-1}(F_+^*, {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U^{S'}, K)) & \ellongrightarrow & H^{n-1}(F_+^*, {\rm Meas}(F_J \times (\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J, K))\\
& \stackrel{\cong}{\ellongrightarrow} & H^{n-1}(\mathbf{G}amma(\mathfrak{f}), {\rm Meas}(F_J \times (\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, K))
\end{eqnarray*}
where the first arrow is induced by (\ref{e:imagemeas}) and the second by weak approximation and Shapiro's Lemma. We also denote the image of $\vartheta^J$ under
\begin{eqnarray*}
H_{n+s-1}(F_+^*, C_c((\mathbf{A}\!_f^J)^*/U^J, \mathbf{Z})) & \ellongrightarrow & H_{n+s-1}(F_+^*, C_c((\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J, \mathbf{Z}))\\
& \stackrel{\cong}{\ellongrightarrow} & H_{n+s-1}(\mathbf{G}amma(\mathfrak{f}), C_c((\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, \mathbf{Z}))
\end{eqnarray*}
by $\vartheta_{\mathfrak{f}}^J$. Here the first map is induced by the natural projection $(\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J\to (\mathbf{A}\!_f^J)^*/U^J$ and the second again by Shapiro's Lemma.
Moreover we view the character $\chi^J$ as an element of
\begin{equation*}
H^0(F_+^*, C((\mathbf{A}\!_f^J)^*/U(\mathfrak{f})^J, K))\cong H^0(\mathbf{G}amma(\mathfrak{f}), C((\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, \mathbf{Z}))
\end{equation*}
so that we have
\begin{equation}
\ellabel{e:Jvariant3}
c\cap (\widetilde{\kappa}_{\ellambda}^J \cap (\chi^J \cap \vartheta^J)) = c\cap (\widetilde{\omega}_{\mathfrak{f}, \ellambda}^J \cap (\chi^J \cap \vartheta_{\mathfrak{f}}^J))
\end{equation}
for $c= c_{\ell}^J$ or $c= c_o^J$. Note that the cohomology groups
\[
H^{n-1}(\mathbf{G}amma(\mathfrak{f}), {\rm Meas}(F_J \times (\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, K)),\qquad H^0(\mathbf{G}amma(\mathfrak{f}), C((\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, \mathbf{Z}))
\]
and the homology group
\[
H_{n+s-1}(\mathbf{G}amma(\mathfrak{f}), C_c((\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, \mathbf{Z})), \\
\]
all carry a natural $G_{\mathfrak{f}} \cong (\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/\mathbf{G}amma(\mathfrak{f})U^{J\cup S_{\rightarrowm}}$-action. Consider the homomorphisms
\begin{equation}
\ellabel{e:shapiro3}
H_{n+s-1}(E(\mathfrak{f})_J, \mathbf{Z}) \to H_{n+s-1}(\mathbf{G}amma(\mathfrak{f}), C_c((\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, \mathbf{Z}))
\end{equation}
induced by the $E(\mathfrak{f})_J$-equivariant map $\mathbf{Z}\to C_c((\mathbf{A}\!_f^{J\cup S_{\rightarrowm}})^*/U^{J\cup S_{\rightarrowm}}, \mathbf{Z})$ sending $1$ to the characteristic function of $U^{J\cup S_{\rightarrowm}}$ and by the inclusion $E(\mathfrak{f})_J\hookrightarrow \mathbf{G}amma(\mathfrak{f})$. By abuse of notation we denote the image of the generator $\vartheta'\in H_{n+s-1}(E(\mathfrak{f})_J, \mathbf{Z})$ under (\ref{e:shapiro3}) by $\vartheta'$ as well. It is easy to see that with the correct choice of sign of $\vartheta'$ we have
\begin{equation}
\ellabel{e:raydecomp}
\vartheta_{\mathfrak{f}}^J = \sum_{[\mathfrak{b}] \in G_\mathfrak{f}} [\mathfrak{b}]\cdot \vartheta'
\end{equation}
Note that $\chi^J \cap \vartheta'$ is independent of $\chi$. In fact if $\epsilon\in H^0(\Gamma(\mathfrak{f}), C((\mathbf{A}\!_f^{J\cup S_{\mathrm{ram}}})^*/U^{J\cup S_{\mathrm{ram}}}, \mathbf{Z}))$ denotes the characteristic function of $\Gamma(\mathfrak{f}) U^{J\cup S_{\mathrm{ram}}}/U^{J\cup S_{\mathrm{ram}}}$ then we have
\begin{equation*}
\chi^J \cap \vartheta' = \epsilon \cap \vartheta'.
\end{equation*}
Thus for $c= c_{\ell}^J$ or $c= c_o^J$ it follows that
\begin{align*}
c\cap (\widetilde{\omega}_{\mathfrak{f}, \ellambda}^J \cap (\chi^J \cap \vartheta_{\mathfrak{f}}^J)) &= c \cap \elleft(\sum_{[\mathfrak{b}] \in G_\mathfrak{f}} \widetilde{\omega}_{\mathfrak{f}, \ellambda}^J \cap (\chi^J \cap ( [\mathfrak{b}]^{-1} \cdot \vartheta'))\right)\\
& = c\cap \elleft(\sum_{[\mathfrak{b}] \in G_\mathfrak{f}} ([\mathfrak{b}] \cdot \widetilde{\omega}_{\mathfrak{f}, \ellambda}^J) \cap (([\mathfrak{b}] \cdot \chi^J) \cap \vartheta')\right) \\
&= \sum_{[\mathfrak{b}] \in G_\mathfrak{f}} \chi(\mathfrak{b})\, c\cap (([\mathfrak{b}] \cdot \widetilde{\omega}_{\mathfrak{f}, \ellambda}^J) \cap (\epsilonsilon \cap \vartheta')).
\end{align*}
Passing back from the idele- to ideal-theoretic language we get
\begin{equation}
\label{e:Jvariant4a}
c \cap (([\mathfrak{b}] \cdot \widetilde{\omega}_{\mathfrak{f}, \lambda}^J) \cap (\epsilon \cap \vartheta')) = c \cap (\omega_{\mathfrak{f}, \mathfrak{b}, \lambda}^J\cap \vartheta').
\end{equation}
Multiplying (\ref{e:Jvariant4a}) by $\chi(\mathfrak{b})$, summing over the set of representatives $\mathfrak{b}$ of $G_{\mathfrak{f}}$, and combining with the computation above yields
\begin{equation}
\ellabel{e:Jvariant4}
c \cap (\widetilde{\omega}_{\mathfrak{f}, \ellambda}^J \cap (\chi^J \cap \vartheta_{\mathfrak{f}}^J)) = c \cap (\omega_{\chi, \ellambda}^J \cap \vartheta').
\end{equation}
Finally, by combining (\ref{e:regadelic}), (\ref{e:Jvariant1a}), (\ref{e:Jvariant1b}), (\ref{e:Jvariant3}) and (\ref{e:Jvariant4}) the assertion follows.
\end{proof}
\section{Evidence}
\subsection{The full Gross regulator}
The following result implies that Conjecture~\ref{c:main2} for $J=R$ is equivalent to part (2) of Conjecture~\ref{c:gross}. Hence by \cite{dkv}, Conjecture~\ref{c:main2} holds unconditionally in this case.
\begin{theorem} \ellabel{t:jr}
For $J = R$, we have \[ \mathscr{R}_p(\chi)_{J, {\mathrm{an}}} = \frac{L_p^{(r)}(\chi\omega, 0)}{r! L(\chi, 0) \prod_{\mathfrak{p} \in R_1} (1 - \chi(\mathfrak{p}))}, \]
and hence Conjecture~\ref{c:main2} holds for $J=R$.
\end{theorem}
Before proving Theorem~\ref{t:jr}, we must first relate the Eisenstein cocycle to the $p$-adic $L$-function of $\chi$.
For this, we first need to extend the definition of $\mu_{\chi,\mathfrak{b}, \ellambda}$ so that it involves all primes of $F$ above $p$. Put $R'= R_0\cup R_1$ and $\mathcal{O}_{R'}^* = \prod_{\mathfrak{p}\in R'} \mathcal{O}_{\mathfrak{p}}^*$. Instead of considering only compact open subsets of $F_R$ we may consider more generally compact open subsets $U$ of $F_R\times \mathcal{O}_{R'}^*$ in the definition of $L(C, \chi, \mathfrak{b}, U, s)$. As before we obtain a homogeneous ($n-1$)-cocycle $\widetilde{\mu}_{\chi, \mathfrak{b}, \ellambda}$ with values in ${\rm Meas}(F_R\times \mathcal{O}_{R'}^*, K)$ that is mapped to $[\mu_{\chi, \mathfrak{b},\ellambda}]$
under the map
\[
H^{n-1}(E_R^*, {\rm Meas}(F_R\times \mathcal{O}_{R'}^*, K))\ellongrightarrow H^{n-1}(E_R^*, {\rm Meas}(F_R, K))
\]
induced by $\pi_*: {\rm Meas}(F_R\times \mathcal{O}_{R'}^*, K)\to {\rm Meas}(F_R, K)$, where
$\pi: F_R\times \mathcal{O}_{R'}^*\to F_R$ denotes the projection.
Let $F_{\infty}/F$ be the cyclotomic $\mathbf{Z}_p$-extension of $F$, and let
\[ \mathbf{G}amma=\mathbf{G}al(F_{\infty}/F), \qquad \Lambda=\mathbf{Z}_p[\![\mathbf{G}amma]\!]\otimes_{\mathbf{Z}_p} K. \] The action of $\mathbf{G}amma$ on $p$-power roots of unity allows us to view $\mathbf{G}amma$ as a subgroup of $1+2p\mathbf{Z}_p$. For $\gamma\in \mathbf{G}amma$ we denote by $\iota(\gamma)$ the corresponding element in $\Lambda$. We view the reciprocity map of class field theory for the extension $F_{\infty}/F$ as a map
\begin{equation}
\rec: F_p^*\times {\mscr I}^p = \prod_{\mathfrak{p}\mid p} F_{\mathfrak{p}}^*\times {\mscr I}^p \ellongrightarrow \mathbf{G}amma
\ellabel{e:rec}
\end{equation}
where ${\mscr I}^p= {\mscr I}^{S_p}$ denotes the group of fractional ideals of $F$ that are relatively prime to $p$. The restriction of (\ref{e:rec}) to ${\mscr I}^p$ will be denoted by
\begin{equation}
{\mscr I}^p \ellongrightarrow \mathbf{G}amma,\,\, \mathfrak{a} \mapsto \gamma_{\mathfrak{a}}
\ellabel{e:recartin}
\end{equation}
and to $F_R^*\times \mathcal{O}_{R'}^*\subseteq F_p^*$ by
\begin{equation}
\rec_{F_{\infty}/F, p}: F_R^*\times \mathcal{O}_{R'}^*\ellongrightarrow \mathbf{G}amma.
\ellabel{e:recp}
\end{equation}
We can view $\iota \circ\rec_{F_{\infty}/F, p}$ as an element
\begin{equation} \ellabel{e:iotarec}
\iota \circ\rec_{F_{\infty}/F, p} \in H^0(E_R^*, C(F_R^*\times \mathcal{O}_{R'}^*, \Lambda)).
\end{equation}
Let ${\cal F}$ denote a compact open subset of $F_R^*\times \mathcal{O}_{R'}^*$ that is stable under the group of totally positive units of $F$ and such that $F_R^*\times \mathcal{O}_{R'}^*$ is the disjoint union of the cosets $x{\cal F}$ where $x$ runs through a system of representatives of $E_R^*/E^*$. As before let $\vartheta_0$ denote a generator of $H_{n-1}(E^*, \mathbf{Z})$. As in (\ref{e:1fc}), we can consider the element
\begin{equation} \label{e:2fc}
1_{{\cal F}} \cap \vartheta_0 \in H_{n-1}(E^*, C({\cal F}, \mathbf{Z})) \cong H_{n-1}(E_R^*, C_c(F_R^*\times \mathcal{O}_{R'}^*,\mathbf{Z})),
\end{equation}
where the isomorphism is by Shapiro's Lemma since
\[ \Ind_{E^*}^{E_R^*} C({\cal F}, \mathbf{Z}) \cong C_c(F_R^*\times \mathcal{O}_{R'}^*, \mathbf{Z}). \]
Taking the cap product of (\ref{e:iotarec}) and (\ref{e:2fc}) yields a
class
\begin{equation}
\rho = (\iota\circ \rec_{F_{\infty}/F, p} )\cap (1_{{\cal F}} \cap \vartheta_0)\in H_{n-1}(E_R^*, C_c(F_R\times \mathcal{O}_{R'}^*,\Lambda)).
\ellabel{e:intcohom2}
\end{equation}
The first cap-product is induced by the pairing
\begin{align*}
C(F_R^*\times \mathcal{O}_{R'}^*,\Lambda) \times C_c(F_R^*\times \mathcal{O}_{R'}^*,\mathbf{Z}) & \ellongrightarrow C_c(F_R\times \mathcal{O}_{R'}^*,\Lambda), \\
(f,g) &\ellongmapsto (f\cdot g)_!
\end{align*}
where the subscript $!$ denotes ``extension by zero". Similar to (\ref{e:cap1}) we have a cap-product pairing
\[
H^{n-1}(E_R^*, {\rm Meas}(F_R\times \mathcal{O}_{R'}^*, K)) \times H_{n-1}(E_R^*, C_c(F_R\times \mathcal{O}_{R'}^*,\Lambda))\ellongrightarrow \Lambda,
\]
so we can consider $[\widetilde{\mu}_{\chi, \mathfrak{b}, \ellambda}] \cap \rho \in \Lambda$. To link this element of the Iwasawa algebra to the $p$-adic $L$-function $L_p(\chi,s)$ we recall that there exists a canonical homomorphism
\begin{equation}
\mathbf{X}i: \Lambda \ellongrightarrow C^{{\mathrm{an}}}(\mathbf{Z}_p, K)
\ellabel{e:iwasfunc}
\end{equation}
characterized by \[ \mathbf{X}i(\iota(\gamma))(s) = \gamma^s:=\exp_p(s\ellog_p(\gamma)) \] for all $\gamma\in \mathbf{G}amma$ and $s\in \mathbf{Z}_p$. Here $C^{{\mathrm{an}}}(\mathbf{Z}_p, K)$ is the $K$-algebra of locally analytic maps $f\colon\mathbf{Z}_p\ellongrightarrow K$.
\begin{prop}
We have
\begin{equation}
\mathbf{X}i\elleft(\sum_{i=1}^h \chi(\mathfrak{b}_i) \iota(\gamma_{\mathfrak{b}_i}) [\widetilde{\mu}_{\chi, \mathfrak{b}_i, \ellambda}] \cap \rho\right)(s) \, =\, (1-\chi(\ellambda)\gamma_{\ellambda}^s \ell) L_p(\chi,s).
\ellabel{e:cap2}
\end{equation}
\end{prop}
\begin{proof}
This formula is a variant of \cite[Prop.\ 5.6]{ds} and the proof there carries over. In fact, as we now explain,
the present result can be deduced from the statement of {\em loc.\ cit.}
It is well known that the $p$-adic $L$-function interpolates to an element of $\Lambda$, i.e.\ that there exists a unique element
\[ {\mscr L}_{p, \ellambda}(\chi) \in \Lambda \] such that
\[ \mathbf{X}i({\mscr L}_{p, \ellambda}(\chi)) = (1-\chi(\ellambda)\gamma_{\ellambda}^s \ell) L_p(\chi,s). \]
We must show that the expression in parentheses on the left side of (\ref{e:cap2}) is equal to ${\mscr L}_{p, \ellambda}(\chi)$, and for this it suffices to show that they agree under application of the dense set of homomorphisms
\[ \tilde{\psi} \colon \Lambda \ellongrightarrow \mathbf{C}_p^* \]
induced by $p$-power conductor Dirichlet characters $\psi: \mathbf{G}amma \ellongrightarrow \mu_{p^{\infty}}$. (These homomorphisms are ``dense" in the sense that the intersection of their kernels in $\Lambda$ is trivial.) In other words, we must show that
\begin{equation}
\ellabel{e:lpinterp}
\tilde{\psi}\elleft(\sum_{i=1}^h \chi(\mathfrak{b}_i) \iota(\gamma_{\mathfrak{b}_i}) [\widetilde{\mu}_{\chi, \mathfrak{b}_i, \ellambda}] \cap \rho\right) =
(1-\chi\psi(\ellambda) \ell) L_S(\chi \psi,0).
\end{equation}
Now if we let $K$ be the fixed field of $\chi\psi$, set $k=0$, and apply the character $\chi\psi$ to the equation
in \cite[Prop.\ 5.6]{ds}, then we obtain exactly (\ref{e:lpinterp}).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t:jr}]
By Prop.\ \ref{p:denom} it suffices to show
\begin{equation}
c_{\ell, J} \cap (\kappa_\chi \cap \vartheta) = (-1)^{rn} (1 - \chi(\ellambda) \ell) L_p^{(r)}(\chi, 0)/r!. \ellabel{e:cellcap}
\end{equation}
In order to study the leading term of (\ref{e:cap2}) at $s=0$, we consider the homomorphism of $K$-algebras
\begin{align*}
\Ta_{\elle r}: C^{{\mathrm{an}}}(\mathbf{Z}_p, K) \, &\ellongrightarrow \, K[X]/(X^{r+1}),\\
f&\ellongmapsto \Ta_{\elle r} f = \sum_{k=0}^r \frac{f^{(k)}(0)}{k!} \overline{X}^k
\end{align*}
and the composite
\begin{equation} \ellabel{e:composite}
\Ta_{\elle r} \circ \ \mathbf{X}i \circ \iota \circ \rec_{F_{\infty}/F, p}: F_R^*\times \mathcal{O}_{R'}^*\ellongrightarrow \mathbf{G}amma \ellongrightarrow \Lambda \ellongrightarrow C^{{\mathrm{an}}}(\mathbf{Z}_p, K) \ellongrightarrow K[X]/(X^{r+1}).
\end{equation}
Restricting (\ref{e:composite}) to $F_{\mathfrak{p}}^*$ for $\mathfrak{p}\in R$ and reducing modulo $(\overline{X}^2)$ yields the homomorphism
\begin{equation} \ellabel{e:lpclass}
F_{\mathfrak{p}}^* \ellongrightarrow K[X]/(X^2), \qquad a\ellongmapsto 1- \ell_{\mathfrak{p}}(a) \overline{X}.
\end{equation}
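For later use we record the effect of $\Ta_{\le r}$ on group-like elements; this is only an unwinding of the definitions above. Since $\frac{d^k}{ds^k}\,\gamma^s\big|_{s=0}=(\log_p \gamma)^k$ for every element $\gamma$ of the Galois group, we have
\[
\Ta_{\le r}\bigl(\gamma^s\bigr) \,=\, \sum_{k=0}^{r}\frac{(\log_p \gamma)^k}{k!}\,\overline{X}^k \,\equiv\, 1+\log_p(\gamma)\,\overline{X} \pmod{\overline{X}^2}.
\]
Applied to $\gamma = \rec_{F_{\infty}/F, p}(a)$ for $a\in F_{\mathfrak{p}}^*$, this is the computation behind (\ref{e:lpclass}); the identification of the coefficient of $\overline{X}$ with $-\ell_{\mathfrak{p}}(a)$ depends on the normalization of $\ell_{\mathfrak{p}}$ fixed earlier in the paper and is not reproved here.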
As before we consider the class
\[
\overline{\rho}:=(\Ta_{\elle r}\circ \ \mathbf{X}i \circ \iota \circ \rec_{F_{\infty}/F, p})\cap (1_{{\cal F}} \cap \vartheta_0)\in H_{n-1}(E_R^*, C_c(F_R\times \mathcal{O}_{R'}^*,K[X]/(X^{r+1}))).
\]
Applying \cite[Prop.\ 3.6]{ds} we obtain
\begin{equation} \ellabel{e:rhobar}
\overline{\rho} \,=\, (-1)^r c_{\ell, J}\cap ((\overline{X}^r 1_{\mathcal{O}_{R'}^*})\cap \vartheta).
\end{equation}
(To make the connection with the notation in {\em loc.\ cit.}, note that our $\overline{\rho}$ and $1_{\mathcal{O}_{R'}^*}\cap \vartheta$ correspond to $\kappa$ and $\overline{\kappa}$ there, and that in view of (\ref{e:lpclass}) our $(-1)^r c_{\ell, J} \overline{X}^r$ corresponds to $c_{d\chi_1} \cup \cdots \cup c_{d\chi_r}$ there.)
In (\ref{e:rhobar}), the second cap-product lies in $H_{n+r-1}(E_R^*, C(\mathcal{O}_{R'}^*, K[X]/(X^{r+1})))$. Therefore
\begin{align}
(\Ta_{\elle r}\circ \ \mathbf{X}i)([\widetilde{\mu}_{\chi, \mathfrak{b}, \ellambda}] \cap \rho) & = [\widetilde{\mu}_{\chi, \mathfrak{b}, \ellambda}] \cap \overline{\rho} \nonumber \\
&= (-1)^r (-1)^{(n-1)r} \, (c_{\ell, J} \cup (\overline{X}^r 1_{\mathcal{O}_{R'}^*}) \cup [\widetilde{\mu}_{\chi, \mathfrak{b}, \ellambda}]) \cap \vartheta \nonumber \\
& = (-1)^{nr}\overline{X}^r \, c_{\ell, J} \cap ([\mu_{\chi, \mathfrak{b}, \ellambda}] \cap \vartheta). \ellabel{e:tar}
\end{align}
Combining (\ref{e:cap2}) and (\ref{e:tar}), we obtain
\begin{align}
\Ta_{\elle r}((1-\chi(\ellambda)\gamma_{\ellambda}^s \ell) L_p(\chi,s)) &=
\sum_{i=1}^h \chi(\mathfrak{b}_i) \Ta_{\elle r} \circ \ \mathbf{X}i (\iota(\gamma_{\mathfrak{b}_i}) [\widetilde{\mu}_{\chi, \mathfrak{b}_i, \ellambda}] \cap \rho) \nonumber \\
&=
(-1)^{nr}\overline{X}^r \sum_{i=1}^h \chi(\mathfrak{b}_i) (\Ta_{\elle r}\circ \mathbf{X}i)(\iota(\gamma_{\mathfrak{b}_i})) c_{\ell, J} \cap ([\mu_{\chi, \mathfrak{b}_i, \ellambda}] \cap \vartheta)\nonumber \\
&= (-1)^{nr}\overline{X}^r \sum_{i=1}^h \chi(\mathfrak{b}_i) c_{\ell, J} \cap ([\mu_{\chi, \mathfrak{b}_i, \ellambda}] \cap \vartheta) \ellabel{e:1mod} \\
&= (-1)^{nr}\, \overline{X}^r c_{\ell, J} \cap (\kappa_{\chi, \ellambda}\cap \vartheta) \ellabel{e:key}
\end{align}
where (\ref{e:1mod}) follows from $(\Ta_{\elle r}\circ \ \mathbf{X}i)(\iota(\gamma_{\mathfrak{b}_i})) \equiv 1$ modulo $\overline{X}$. Since
\[
\Ta_{\elle r}(1-\chi(\ellambda)\, \gamma_{\ellambda}^s \,\ell) \equiv 1 - \chi(\ellambda) \ell \pmod{\overline{X}}
\]
we see that $\Ta_{\elle r}(1-\chi(\ellambda)\, \gamma_{\ellambda}^s\, \ell)$ is a unit in $K[X]/(X^{r+1})$. Hence
\[ \Ta_{\elle r}(L_p(\chi,s))\equiv 0 \pmod{\overline{X}^r} \] and
\[
\Ta_{\elle r}((1-\chi(\ellambda)\, \gamma_{\ellambda}^s\, \ell) L_p(\chi,s)) = (1 - \chi(\ellambda) \ell) \, L_p^{(r)}(\chi, 0)/r! \,\overline{X}^r.
\]
Together with (\ref{e:key}) we conclude (\ref{e:cellcap}). \end{proof}
\begin{remark} \rm We would like to point out that Proposition \ref{p:denom} and formula
(\ref{e:cellcap}) could be deduced directly from \cite[Corollary 3.19(b)]{spiesshilb} and \cite[Corollary 3.22]{spiesshilb}. However, we feel that the framework developed in \cite[\S 3]{ds} is somewhat superior to that of \cite[\S 4]{spiesshilb}, and we think it worthwhile to present the application to trivial zeros of $p$-adic $L$-functions here again in some detail.
\end{remark}
\subsection{The diagonal entries}
\ellabel{ss:diag}
We now consider the other extremal case $\#J = 1$. If $J = \{ \mathfrak{p} \}$, then
\[ \mathscr{R}_p(\chi)_J = {\mscr L}_{{\mathrm{alg}}}(\chi)_{ \mathfrak{p}, \mathfrak{p}} = - \frac{\ell_\mathfrak{p}(u_{\mathfrak{p}, \chi})}{o_{\mathfrak{p}}(u_{\mathfrak{p}, \chi})}. \]
In this setting, the first named author \cite[Conjecture 3.21]{das} had previously conjectured a formula for the image of $u_{\mathfrak{p}, \chi}$ in $F_\mathfrak{p}^* \otimes K$.
We recall below the definition of this conjectural image, denoted $\mathcal{U}_{\mathfrak{p}, \chi}$.
\begin{theorem} \ellabel{t:jep} When $n=2$, Conjecture~\ref{c:main} for $J = \{\mathfrak{p}\}$ is consistent with \cite[Conjecture 3.21]{das}, i.e.\ we have
\[ \mathscr{R}_p(\chi)_{J, {\mathrm{an}}} = - \frac{\ell_\mathfrak{p}(\mathcal{U}_{\mathfrak{p}, \chi})}{o_{\mathfrak{p}}(\mathcal{U}_{\mathfrak{p}, \chi})}.
\]
\end{theorem}
\begin{remark} We expect an analogue of Theorem~\ref{t:jep} to hold when $n > 2$ as well, but we leave this as an open problem.
\end{remark}
Before proving Theorem~\ref{t:jep}, we recall the definition of $\mathcal{U}_{\mathfrak{p}, \chi}$. We keep the notation of the end of \S \ref{section:mainconj}. Let $\mathcal{D} = \sum_i a_i C_i$ denote a signed Shintani domain for the action of $E(\mathfrak{f})$ on $(\mathbf{R}^{>0})^n$, so the characteristic function $1_\mathcal{D} := \sum_i a_i 1_{C_i}$ satisfies $\sum_{\epsilonsilon \in E(\mathfrak{f})}1_{\mathcal{D}}( \epsilonsilon \cdot x) = 1$ for all $x \in (\mathbf{R}^{>0})^n$.
For each fractional ideal $\mathfrak{b}$ of $F$ relatively prime to $\mathfrak{f}$ and $p$, we will define an element $\mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D}) \in F_\mathfrak{p}^*$ and define
\begin{equation} \ellabel{e:upcdef}
\mathcal{U}_{\mathfrak{p}, \chi} = \sum_{[\mathfrak{b}] \in G_\mathfrak{f}} \mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D}) \otimes \chi(\mathfrak{b})/(1-\chi(\ellambda)\ell),
\end{equation}
where the sum ranges over a set of representatives $\mathfrak{b}$ for $G_\mathfrak{f}$. The independence of $ \mathcal{U}_{\mathfrak{p}, \chi}$ from the choices of the $\mathfrak{b}$, $\mathcal{D}$, and $\ellambda$ is somewhat subtle and is discussed in \cite[\S 5]{das}.
We now define $\mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D}).$
Let $e$ be the order of $\mathfrak{p}$ in $G_\mathfrak{f}$, and write $\mathfrak{p}^e = (\pi)$ where $\pi$ is totally positive and $\pi \equiv 1 \pmod{\mathfrak{f}}$.
For a compact open subset $U$ of $F_{\mathfrak{p}}$ we define
\[
\nu_{\mathfrak{b}, \ellambda, \mathcal{D}}(U) := \sum_i a_i \zeta_\ellambda(\mathfrak{b}, C_i, U, 0), \qquad \text{ where } \mathcal{D} = \sum_i a_i C_i
\]
and $\zeta_\ellambda(\mathfrak{b}, C_i, U, s)$ denotes the function (\ref{e:szf2}).
Our assumptions on $\ellambda$ imply $\nu_{\mathfrak{b}, \ellambda, \mathcal{D}}(U)\in \mathbf{Z}$ (see \cite[Proposition 3.12]{das}). The main contribution to the definition of $\mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D})$ is a multiplicative integral defined analogously to (\ref{e:intdef}), but with Riemann {\em products} instead of sums:
\[ \times\!\!\!\!\!\!\!\int_{\mathcal{O}_\mathfrak{p} - \pi \mathcal{O}_\mathfrak{p}} x \ d\nu_{\mathfrak{b}, \ellambda, \mathcal{D}}(x) := \ellim_{|| \mathcal{V} || \rightarrow 0} \prod_{V \in \mathcal{V}} t_V^{\nu_{\mathfrak{b}, \ellambda, \mathcal{D}}(V)},
\]
as $\mathcal{V} = \{V\}$ ranges over increasingly fine finite covers of $\mathcal{O}_\mathfrak{p} - \pi \mathcal{O}_\mathfrak{p}$ by pairwise disjoint compact open subsets, and $t_V \in V$ denotes an arbitrary choice of sample point.
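To fix intuition (a toy example only, not needed in what follows): if $\nu$ is the $\mathbf{Z}$-valued measure that assigns to a compact open set the value $1$ if it contains a fixed unit $u\in \mathcal{O}_\mathfrak{p}^*$ and $0$ otherwise, then every Riemann product reduces to the single factor $t_V$ with $u\in V$, so
\[ \times\!\!\!\!\!\!\!\int_{\mathcal{O}_\mathfrak{p} - \pi \mathcal{O}_\mathfrak{p}} x \ d\nu(x) \,=\, u. \]
Roughly speaking, the limit exists in general because the measure is $\mathbf{Z}$-valued and, once the cover is fine enough, changing the sample points $t_V$ multiplies each factor by an element congruent to $1$ modulo a high power of $\mathfrak{p}$.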
The element $\mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D}) \in F_\mathfrak{p}^*$ is defined as the product of this multiplicative integral with a certain global unit in $F$ and a power of $\pi$.
Given a formal linear combination of simplicial cones $\mathcal{D} = \sum a_i C_i$ and a totally positive $x \in F^*$, we define $x\mathcal{D} = \sum a_i \cdot x C_i$, with characteristic function $1_{x \mathcal{D}}(y) = \sum a_i 1_{C_i}(x^{-1} y)$. Given two such formal linear combinations, we define their intersection as the formal linear combination whose characteristic function is the product:
\[ 1_{\mathcal{D} \cap \mathcal{D}'} := 1_{\mathcal{D}} \cdot 1_{\mathcal{D}'}. \]
With these notations, we define
\begin{equation} \ellabel{e:edef}
\epsilonsilon_{\mathfrak{b}, \ellambda, \mathcal{D}, \pi} := \prod_{\epsilonsilon \in E(\mathfrak{f})} \epsilonsilon^{\nu_{\mathfrak{b}, \ellambda, \epsilonsilon \mathcal{D} \cap \pi^{-1} \mathcal{D}}(\mathcal{O}_\mathfrak{p})}.
\end{equation}
One easily shows that there are only finitely many $\epsilonsilon$ for which the exponent in (\ref{e:edef}) is nonzero.
Finally, we define
\[ \mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D}) := \epsilonsilon_{\mathfrak{b}, \ellambda, \mathcal{D}, \pi} \cdot \pi^{\nu_{\mathfrak{b}, \ellambda, \mathcal{D}}(\mathcal{O}_\mathfrak{p})} \cdot \times\!\!\!\!\!\!\!\int_{\mathcal{O}_\mathfrak{p} - \pi \mathcal{O}_\mathfrak{p}} x \ d\nu_{\mathfrak{b}, \ellambda, \mathcal{D}}(x)\]
and $\mathcal{U}_{\mathfrak{p}, \chi}$ as in (\ref{e:upcdef}).
We assume now that $n=2$. Recall that we have fixed an ordering of the real places of $F$, yielding an embedding $F\subset \mathbf{R}^2$. We choose the generator $\epsilon$ of $E(\mathfrak{f})$ so that it lies in the half plane $\{(x,y)\in \mathbf{R}^2\mid x<y\}$. Then we have
\[
\mathrm{sgn}(1, \epsilon)=1 \qquad \mathbbox{and} \qquad C^*(1, \epsilon) = C(1,\epsilon) \cup C(\epsilon)
\]
and $\mathcal{D} = C^*(1, \epsilon)$ is a Shintani domain for the action of $E(\mathfrak{f})$ on $(\mathbf{R}^{>0})^2$. For the cocycle (\ref{e:nupartf}) evaluated at the pair $(1, \epsilon)$ we have
\begin{equation}
\ellabel{e:jep1}
\nu_{\mathfrak{b}, \ellambda}^J(1, \epsilon) = \nu_{\mathfrak{b}, \ellambda, \mathcal{D}}
\end{equation}
Now Theorem \ref{t:jep} follows immediately from
\begin{prop}
\ellabel{p:partgs} Assume $n=2$ and $J = \{\mathfrak{p}\}$, and let $[\mathfrak{b}]\in G_{\mathfrak{f}}$. Then
\[
\ell_\mathfrak{p}( \mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D})) = \pm c_{\ell_\mathfrak{p}} \cap (\omega_{\mathfrak{f}, \mathfrak{b}, \ellambda}^J\cap \vartheta'),\qquad o_\mathfrak{p}( \mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D})) = \pm c_{o_\mathfrak{p}} \cap (\omega_{\mathfrak{f}, \mathfrak{b}, \ellambda}^J\cap \vartheta').
\]
\end{prop}
\begin{proof} After replacing $\pi$ by $\pi\epsilon^n$ for some $n\in\mathbf{Z}$ we may assume that $\pi\in \mathcal{D}$. Since $\epsilonsilon, \pi$ are a $\mathbf{Z}$-basis of $E(\mathfrak{f})_J$ the cycle
\[
\vartheta' : = [\pi | \epsilonsilon] - [\epsilonsilon| \pi]
\]
is a generator of $H_2(E(\mathfrak{f})_J, \mathbf{Z})$. Let $g: F_{\mathfrak{p}}^*\to K$ be any continuous homomorphism (e.g.\ $g= \ell_{\mathfrak{p}}$ or $g=o_{\mathfrak{p}}$). The assertion follows from
\begin{equation}
\ellabel{e:comp}
g( \mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D})) = c_g \cap (\omega_{\mathfrak{f}, \mathfrak{b}, \ellambda}^J\cap \vartheta')
\end{equation}
Since $\pi\in \mathcal{D}=C^*(1,\epsilon)$ we have
\begin{equation}
\ellabel{e:jep2}
\epsilon^n \mathcal{D}\cap \pi^{-1}\mathcal{D} = C^*(\epsilon^n, \epsilon^{n+1}) \cap C^*(\pi^{-1}, \pi^{-1}\epsilon) =\elleft\{\begin{array}{cc} C^*(1, \epsilon\pi^{-1})& \mathbbox{if $n =0$;}\\
C^*(1, \pi^{-1}) & \mathbbox{if $n=-1$;}\\
\emptyset & \mathbbox{otherwise.}
\end{array}\right.
\end{equation}
Since $E(\mathfrak{f}) = \ellangle \epsilon \rightarrowngle$ and $\mathrm{sgn}(1, \pi^{-1})=-1$ this implies
\begin{eqnarray}
g(\epsilonsilon_{\mathfrak{b}, \ellambda, \mathcal{D}, \pi}) & = & \elleft(\sum_{n\in \mathbf{Z}} n \nu_{\mathfrak{b}, \ellambda, \epsilonsilon^n \mathcal{D} \cap \pi^{-1} \mathcal{D}}(\mathcal{O}_\mathfrak{p})\right) g(\epsilon)
\ellabel{e:jep3}\\
& = & - \nu_{\mathfrak{b}, \ellambda, C^*(1, \pi^{-1})}(\mathcal{O}_\mathfrak{p}) \cdot g(\epsilon)
\nonumber\\
& = & \nu_{\mathfrak{b}, \ellambda}^J(1, \pi^{-1})(\mathcal{O}_\mathfrak{p})\cdot g(\epsilon)\nonumber\\
& = & - \nu_{\mathfrak{b}, \ellambda}^J(1, \pi)(\pi \mathcal{O}_\mathfrak{p})\cdot g(\epsilon)\nonumber
\end{eqnarray}
The last equality follows from the fact that $x\mapsto \nu_{\mathfrak{b}, \ellambda}^J(1,x)$ is an inhomogeneous 1-cocycle on $E(\mathfrak{f})_J$, so that $\nu_{\mathfrak{b}, \ellambda}^J(1, \pi^{-1}) = -\pi^{-1}\cdot \nu_{\mathfrak{b}, \ellambda}^J(1, \pi)$.
We will choose as representative of $c_g$ the inhomogeneous 1-cocycle $z: = z_{1_{\pi\mathcal{O}_{\mathfrak{p}}}, g}$
i.e.\ we choose $f= 1_{\pi\mathcal{O}_{\mathfrak{p}}} = \pi 1_{\mathcal{O}_\mathfrak{p}}$ in (\ref{e:zgdef}). A simple computation yields
\begin{equation}
\ellabel{e:jep4}
z(\epsilon) = (\pi 1_{\mathcal{O}_\mathfrak{p}}) \cdot g(\epsilon)
\end{equation}
and
\begin{equation}
\ellabel{e:jep5}
\pi^{-1} z(\pi) = 1_{\mathcal{O}_{\mathfrak{p}} - \pi \mathcal{O}_{\mathfrak{p}}} \cdot g + 1_{\mathcal{O}_{\mathfrak{p}}} \cdot g(\pi)
\end{equation}
Put $\nu = \nu_{\mathfrak{b}, \ellambda}^J$ and $\nu_{\mathcal{D}} = \nu_{\mathfrak{b}, \ellambda, \mathcal{D}}$ so that $\nu(1,\epsilon) = \nu_{\mathcal{D}}$ by (\ref{e:jep1}). Using (\ref{e:jep3}), (\ref{e:jep4}) and (\ref{e:jep5}) we get
\begin{eqnarray*}
c_g \cap (\omega_{\mathfrak{f}, \mathfrak{b}, \ellambda}^J\cap \vartheta') & = & \int_{F_{\mathfrak{p}}} z(\pi)(x) d(\pi \nu(1, \epsilon))(x) - \int_{F_\mathfrak{p}} z(\epsilon)(x) d(\epsilon\nu(1, \pi))(x)\\
& = & \int_{F_{\mathfrak{p}}} (\pi^{-1}z(\pi))(x) d\nu_{\mathcal{D}} (x) - \int_{F_\mathfrak{p}} (\epsilon^{-1} z(\epsilon))(x) d\nu(1, \pi)(x)\\
& = & \int_{\mathcal{O}_{\mathfrak{p}} - \pi \mathcal{O}_{\mathfrak{p}}} g(x) d\nu_{\mathcal{D}} (x) + \nu_{\mathcal{D}}(\mathcal{O}_{\mathfrak{p}}) \cdot g(\pi) - \nu(1, \pi)(\pi \mathcal{O}_\mathfrak{p})\cdot g(\epsilon)\\
& = & g\elleft( \times\!\!\!\!\!\!\!\int_{\mathcal{O}_\mathfrak{p} - \pi \mathcal{O}_\mathfrak{p}} x \ d\nu_{\mathcal{D}}(x)\right) + g\elleft(\pi^{\nu_{\mathcal{D}}(\mathcal{O}_{\mathfrak{p}})}\right) + g(\epsilonsilon_{\mathfrak{b}, \ellambda, \mathcal{D}, \pi})\\
& = & g( \mathcal{U}_\mathfrak{p}(\mathfrak{b}, \ellambda, \mathcal{D}))
\end{eqnarray*}
This proves (\ref{e:comp}) and hence the proposition.
\end{proof}
\begin{remark} \rm A more indirect approach towards Theorem~\ref{t:jep} is as follows. In \cite[\S 6]{ds} we have defined certain elements $\mathcal{U}'_{\mathfrak{p}}(\mathfrak{b}, \ellambda)$ of $F_{\mathfrak{p}}$ in terms of the Eisenstein cocycle. We expect that these elements agree with the elements $\mathcal{U}_{\mathfrak{p}}(\mathfrak{b}, \ellambda, \mathcal{D})$.
It should be much easier to verify Theorem~\ref{t:jep} with $\mathcal{U}'_{\mathfrak{p}}(\mathfrak{b}, \ellambda)$ replacing $\mathcal{U}_{\mathfrak{p}}(\mathfrak{b}, \ellambda, \mathcal{D})$ in (\ref{e:upcdef}). On the other hand, in \cite{das} and \cite[\S 6]{ds} a list of functorial properties for the elements $\mathcal{U}_{\mathfrak{p}}(\mathfrak{b}, \ellambda, \mathcal{D})$ and $\mathcal{U}'_{\mathfrak{p}}(\mathfrak{b}, \ellambda)$ has been established. Since these properties determine the elements uniquely up to a root of unity, neither $\ell_\mathfrak{p}(\mathcal{U}_{\mathfrak{p}, \chi})$ nor $o_{\mathfrak{p}}(\mathcal{U}_{\mathfrak{p}, \chi})$ changes when the elements $\mathcal{U}_{\mathfrak{p}}(\mathfrak{b}, \ellambda, \mathcal{D})$ are replaced by $\mathcal{U}'_{\mathfrak{p}}(\mathfrak{b}, \ellambda)$ in the definition of $\mathcal{U}_{\mathfrak{p}, \chi}$.
\end{remark}
\end{document}
\begin{document}
\title{Revisiting Latent-Space Interpolation \ via a Quantitative Evaluation Framework}
\begin{abstract}
Latent-space interpolation is commonly used to demonstrate the generalization ability of deep latent variable models. Various algorithms have been proposed to calculate the best trajectory between two encodings in the latent space. In this work, we show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space interpolation algorithms for variational autoencoders. Our framework can be used to complement the standard qualitative comparison, and also enables evaluation for domains (such as graphs) in which visualization is difficult. Interestingly, our experiments reveal that the superiority of interpolation algorithms could be domain-dependent. While normalised interpolation works best for the image domain, spherical linear interpolation achieves the best performance in the graph domain. Next, we propose a simple-yet-effective method to restrict the latent space via a bottleneck structure in the encoder. We find that all interpolation algorithms evaluated in this work can benefit from this restriction. Finally, we conduct interpolation-aware training with the labeled attributes, and show that this explicit supervision can improve the interpolation performance.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Generative latent variable models, especially variational autoencoders (VAEs) \citep{Kingma2014}, and generative adversarial networks (GANs) \citep{Goodfellow14gan}, have been gaining huge research interest in recent years. For both VAEs and GANs, a decoder (or generator, in GAN terminology) is learned to generate data samples $x$ from a latent variable $z$, which is sampled from a prior distribution. For VAE, the decoder is usually probabilistic (i.e., $p(x|z)$), while for GAN the generator is a one-to-one mapping function.
Latent-space interpolation has been widely used for the evaluation of generative latent variable models, as a way of demonstrating that a generative model generalizes well, instead of simply memorizing the training examples. It is usually implemented in an unsupervised and qualitative manner: A trajectory is formed between two latent encodings sampled from the prior distribution. Then, the decoded samples (e.g., images) from the latent encodings along that trajectory are qualitatively examined. Intuitively, a good interpolation trajectory should be decoded into meaningful outputs exhibiting a gradual transition. Meanwhile, it should also reflect the internal structure of the data.
Linear interpolation, where the selected trajectory is simply the shortest path in Euclidean space, has been most widely used in the literature \citep{Kingma2014,radford2015unsupervised}. Despite its simplicity, multiple recent studies \citep{white16spherical,agustsson2018optimal,lesniak2018distributioninterpolation} point out that linear interpolation introduces a \textit{mismatch} between the distribution of the interpolated points and the original prior distribution. Such a mismatch could prevent the interpolation from faithfully reflecting the inner structure of the generative model. From this viewpoint, several interpolation algorithms have been proposed to eliminate this distribution mismatch (to be reviewed in the background section).
However, the unsupervised nature of deep generative model training makes a principled comparison between different interpolation algorithms challenging. Most of the recent studies apply these algorithms to generate a trajectory between two points sampled from the latent space, and qualitatively compare the outputs (usually in the form of images) decoded from successive latent encodings on the trajectory. This is also due to the ``discrete'' nature of the commonly used datasets such as LSUN \citep{fisher15lsun} or CelebA \citep{liu15celeba}: Given two data samples, there are no ground-truth reference samples that are supposed to be semantically ``between'' them. Moreover, in some domains, such as graphs, it is difficult to visualize the decoded samples, making a qualitative comparison almost impossible.
In this work, we attempt to compare different interpolation algorithms in a quantitative evaluation framework, \textbf{by utilizing datasets with semantically continuous attributes}. Here is one example: Consider that we train a VAE with an image dataset containing views of a 3-D object from different angles. Naturally, we would expect the latent encodings to capture the variations of the angles. If we interpolate the latent encodings of 30-degree and 90-degree views of the same object, it is natural to expect the decoded image from the mid-point interpolation to be close to the 60-degree view. Further, we can synthesize new views of the object from any angle in between, by changing the interpolation weight. In this way, we can evaluate the interpolation algorithm by comparing the decoded image to the ground-truth image with standard metrics.
Our experiments cover image-domain and graph-domain VAE models. We briefly summarize the key messages from our experiments as follows: Our evaluation reveals that the superiority of interpolation algorithms could be domain-dependent. Normalised interpolation works best in the image domain, while spherical linear interpolation achieves the best performance in the graph domain. Next, we propose a simple-yet-effective method to restrict the latent space via a bottleneck structure in the encoder, and we report that the interpolation algorithms evaluated in this work can all benefit from this restriction. Finally, we conduct interpolation-aware training with the labeled attributes, and we show that this explicit supervision can boost the interpolation performance.
{\bm{s}}pace{-0.1cm}
\section{Background}
\label{sec:background}
{\bm{s}}pace{-0.05cm}
In this section, we review the framework of variational autoencoders, and introduce three existing interpolation algorithms. As a generative latent variable model, VAE \citep{Kingma2014} adopts a two-step generation process: (1) First, the $D$-dimensional latent variable $z$ is sampled from a fixed prior distribution $p(z)$ (e.g., standard Gaussian). (2) Then, a decoder network $p_\theta(x|z)$ maps $z$ into the data space. Here, $\theta$ denotes the trainable parameters of the decoder.
The existence of the latent variable makes direct maximum likelihood estimation (MLE) training difficult. Instead, the evidence lower bound (ELBO) loss is adopted, where an inference network $q_\phi(z|x)$ is introduced:
\begin{equation}
\small
\label{eq:std_elbo}
\mathcal{L}_\text{ELBO}=-\mathbb{E}_{z\sim q_\phi(z|x)}\log p_\theta(x|z) + D_\text{KL}(q_\phi(z|x)||p(z)),
\end{equation}
where $D_\text{KL}$ refers to the Kullback-Leibler divergence (definition given in Appendix A). Note that we will also refer to the inference network as the encoder in the following text.
It can be derived that $-\mathcal{L}_\text{ELBO}=\log p_\theta(x) -D_\text{KL}(q_\phi(z|x)||p_\theta(z|x))$. Therefore, $-\mathcal{L}_\text{ELBO}$ is a lower bound of the log-likelihood of $x$, and during the minimization of $\mathcal{L}_\text{ELBO}$, the inference network $q_\phi(z|x)$ is implicitly trained to approximate the true posterior $p_\theta(z|x)$.
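For the diagonal Gaussian posteriors used in this work, $q_\phi(z|x)=\mathcal{N}(\mu_\phi(x),\text{diag}(\sigma_\phi^2(x)))$, the KL term in Equation \ref{eq:std_elbo} admits the standard closed form (recorded here for completeness)
\begin{equation*}
\small
D_\text{KL}(q_\phi(z|x)\,||\,\mathcal{N}(0,I)) = \frac{1}{2}\sum_{d=1}^{D}\left(\mu_d^2+\sigma_d^2-\log \sigma_d^2-1\right),
\end{equation*}
while the reconstruction term is typically estimated with a single reparametrized sample $z=\mu+\sigma\odot\epsilon$, $\epsilon\sim\mathcal{N}(0,I)$.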
Latent-space interpolation has been used to qualitatively examine how well the model generalizes. It forms a trajectory between two latent encodings $z_1$ and $z_2$, which are independently sampled from the prior distribution. Following the notations from \citet{lesniak2018distributioninterpolation}, we formulate interpolation algorithms as a function $f$:
\begin{equation}
\small
f: \mathbb{R}^D \times \mathbb{R}^D \times [0,1] \ni (z_1, z_2, \lambda) \mapsto z \in \mathbb{R}^D,
\end{equation}
where $\lambda$ is referred to as the interpolation weight. Next, we review several existing interpolation algorithms.
\textit{Linear interpolation} is the most widely used interpolation algorithm, which simply forms a straight line between $z_1$ and $z_2$. We formulate it below:
\begin{equation}
\small
f^\text{linear}(z_1, z_2, \lambda) = (1-\lambda) z_1 + \lambda z_2.
\end{equation}
\textit{Spherical linear interpolation}~\citep{white16spherical} treats the interpolation as a great circle path on a $D$-dimensional hypersphere. It utilizes a formula from \citet{shoemake85animatingrotate}:
\begin{equation}
\small
f^\text{slerp}(z_1, z_2, \lambda) = \frac{\text{sin}[(1-\lambda)\Omega]}{\text{sin}(\Omega)} z_1 + \frac{\text{sin}[\lambda \Omega]}{\text{sin}(\Omega)} z_2,
\end{equation}
where $\Omega$ is the angle between $z_1$ and $z_2$.
\textit{Normalised interpolation} \citep{agustsson2018optimal} is based on optimal transport maps. It adapts the latent-space interpolation operations so that the resulting interpolated points match the prior distribution. For linear interpolation with a Gaussian prior, the formulation can be derived as:
\begin{equation}
\small
f^\text{norm}(z_1, z_2, \lambda) = \frac{(1-\lambda)z_1 + \lambda z_2}{\sqrt{(1-\lambda)^2+\lambda^2}}.
\end{equation}
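For concreteness, the following is a minimal NumPy sketch of the three interpolation functions above, written for single one-dimensional encodings; the function names are ours and do not come from any released implementation.
\begin{verbatim}
import numpy as np

def linear_interp(z1, z2, lam):
    # Straight line between the two encodings.
    return (1.0 - lam) * z1 + lam * z2

def slerp_interp(z1, z2, lam):
    # Great-circle path; omega is the angle between z1 and z2.
    cos_omega = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):
        return linear_interp(z1, z2, lam)  # degenerate case: (anti-)parallel encodings
    return (np.sin((1.0 - lam) * omega) * z1
            + np.sin(lam * omega) * z2) / np.sin(omega)

def normalised_interp(z1, z2, lam):
    # Linear interpolation rescaled so Gaussian endpoints give Gaussian interpolants.
    return ((1.0 - lam) * z1 + lam * z2) / np.sqrt((1.0 - lam) ** 2 + lam ** 2)
\end{verbatim}
For example, \texttt{slerp\_interp(z1, z2, 0.5)} returns the midpoint of the spherical path, and all three functions reduce to the endpoints at $\lambda\in\{0,1\}$.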
{\bm{s}}pace{-0.1cm}
\section{Evaluation Methodology}
\label{sec:framework}
{\bm{s}}pace{-0.1cm}
In this section, we introduce our evaluation framework for interpolation algorithms. As discussed in the introduction section, commonly used datasets for VAE training do not have reference samples for the evaluation of interpolation. Take the popular CelebA dataset \citep{liu15celeba} as an example: given two images of human faces, it is difficult to locate reference face images that are supposed to be ``between'' these two faces.
Motivated by this problem, in this work we utilize datasets with \textit{temporal} or \textit{spatial} attributes. For example, we will use citation networks that evolve with time, and 2-D renderings of 3-D objects from different angles. Formally, we denote our dataset as $\{\langle t,x_t \rangle\}$, where each data sample $x$ comes with a \textbf{semantically continuous attribute} $t \in \mathbb{R}$. For VAE baseline training, we will ignore the $t$ labels and simply train the model in an unsupervised manner with $\{x\}$. We nonetheless expect the latent encodings to capture the variation related to the attribute $t$.
Our proposed evaluation procedure is motivated by an \textit{application} point of view. Given $x_{t_1}$ and $x_{t_3}$, the generative latent variable model and the interpolation algorithm enable us to \textbf{synthesize} $x_{t_2}$ at any $t_2$ between $t_1$ and $t_3$. We now describe it in more details below.
To evaluate a given interpolation algorithm $f$, we randomly select a number of triples in the test set $\{(x_{t_1},x_{t_2},x_{t_3})\}$ with $t_1 < t_2 < t_3$. We then infer $(\hat{z}_{t_1}, \hat{z}_{t_2}, \hat{z}_{t_3})$ by taking $\hat{z}_t:=\argmax_z q_\phi(z|x_t)$.\footnote{As discussed in the background section, $q_\phi(z|x)$ is implicitly trained to approximate the true posterior $p_\theta(z|x)$.} For the VAEs used in this work, this is equivalent to simply taking the output mean vector from the inference model.
An estimation of $z_{t_2}$ can be obtained by applying the interpolation algorithm $z^\text{inter}_{t_2}:=f(\hat{z}_{t_1}, \hat{z}_{t_3}, \frac{t_3-t_2}{t_3-t_1})$. Then we feed $z^\text{inter}_{t_2}$ to the decoder to generate $x^\text{inter}_{t_2}:=\argmax_x p_\theta(x|z^\text{inter}_{t_2})$. Finally, by measuring the distance between $x^\text{inter}_{t_2}$ and $x_{t_2}$, or the distance between $z^\text{inter}_{t_2}$ and $\hat{z}_{t_2}$ with standard metrics, we can quantify the performance of the interpolation algorithm $f$.
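The procedure can be summarized by the following sketch; \texttt{encode\_mean}, \texttt{decode\_map}, \texttt{interp}, and \texttt{metric} are placeholders for the model-specific components described above and are not part of any released code.
\begin{verbatim}
def evaluate_interpolation(triples, encode_mean, decode_map, interp, metric):
    """Average a distance metric between interpolated and reference samples.

    triples     -- iterable of ((t1, x1), (t2, x2), (t3, x3)) with t1 < t2 < t3
    encode_mean -- x -> argmax_z q_phi(z|x) (the posterior mean for our VAEs)
    decode_map  -- z -> argmax_x p_theta(x|z)
    interp      -- interpolation function f(z1, z2, lam)
    metric      -- distance between a generated sample and a reference sample
    """
    scores = []
    for (t1, x1), (t2, x2), (t3, x3) in triples:
        z1, z3 = encode_mean(x1), encode_mean(x3)
        lam = (t3 - t2) / (t3 - t1)              # interpolation weight
        x_hat = decode_map(interp(z1, z3, lam))  # synthesized sample at t2
        scores.append(metric(x_hat, x2))
    return sum(scores) / len(scores)
\end{verbatim}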
{\bm{s}}pace{-0.1cm}
\section{Model Formulations}
{\bm{s}}pace{-0.1cm}
In this section, we first formulate the standard VAE models considered in this work. We then discuss how we adjust these models or the objective functions for better interpolation performance.
{\bm{s}}pace{-0.05cm}
\subsection{Variational Autoencoders}
\label{sec:model_vae}
{\bm{s}}pace{-0.05cm}
To prepare baseline models, we use a standard convolutional neural network (CNN)-based VAE model for the image-domain data, and a graph convolutional network (GCN)-based VAE model for the graph-domain data.
{\bm{s}}pace{-0.1cm}
\paragraph{CNN-based VAE} For the image-domain VAE, we use a popular implementation of the ``vanilla'' VAE from \citet{Subramanian2020}. The encoder contains 6 CNN layers \citep{Krizhevsky12cnnimage}.\footnote{The hidden dimensions of the CNN layers are [16, 32, 64, 128, 256, 512].} Each CNN layer is followed by a batch normalization layer and a LeakyReLU activation. Two fully-connected output layers then produce the $D$-dimensional mean and standard deviation vectors for the posterior Gaussian distribution $q_\phi(z|x)$. The decoder is composed of 6 deconvolution layers, followed by a convolution layer with a Tanh activation for the final output. We use a standard Gaussian as the prior distribution. The standard ELBO loss (Equation \ref{eq:std_elbo}) is adopted.
{\bm{s}}pace{-0.1cm}
\paragraph{Graph Variational Autoencoders (GVAEs)} We denote a directed graph as $\mathcal{G}=(\mathcal{V}, \mathcal{E})$, with $N=|\mathcal{V}|$ nodes and edge set $\mathcal{E}$. From $\mathcal{E}$, we construct its adjacency matrix $A$. Each node $i$ is assigned a $D$-dimensional latent encoding $z_i$, summarized in an $N \times D$ matrix $Z$.\footnote{When we apply spherical linear interpolation, we concatenate all $z_i$ and treat the result as a single long vector.} For node features, we simply use a one-hot representation, giving an $N \times N$ feature matrix $X$.
We develop a version of GVAE \citep{kipf17gcn} for directed graphs. Our GVAE model consists of a two-layer graph convolutional network (GCN) encoder, and a decoder consisting of an MLP followed by an inner product. For the inference model, we first encode the graph via the GCN and get $ \{ ( e^\mu_i , e^\sigma_i )\}=\text{GCN}(X, A) $, where $( e^\mu_i , e^\sigma_i )$ is a pair of $D$-dimensional embeddings for each node $i$. We refer readers to \citet{Kipf2016VariationalGA} for details about the GCN. We now formulate the inference model\footnote{Our code is based on \url{https://github.com/DaehanKim/vgae_pytorch}.} as follows:
\begin{equation}
\small
q_\phi(Z|X,A) = \prod^N_{i=1} \mathcal{N}(z_i|\text{MLP}_{\text{enc}_\mu}(e^\mu_i), \exp (e^\sigma_i)),
\end{equation}
where $\mathcal{N}$ denotes the Gaussian distribution. To grant the model more flexibility, a two-layer multi-layer perceptron (MLP) with a hidden dimension of $D$ is added for the mean-output branch.
For decoding, we first pass $z_i$ through a 2-layer MLP and get $z'_i:=\text{MLP}_\text{dec}(z_i)$. To model the directedness of the graph, we perform a simple half-half split of $z'_i$ into two $\frac{D}{2}$-dimensional vectors $z'_{i1}$ and $z'_{i2}$. Finally, the generative model is formulated as a simple inner product with a trainable bias term $b$ to control the graph sparsity:
\begin{equation}
\small
p_\theta(A|Z)=\prod^N_{i=1}\prod^N_{j=1}p_\theta(A_{ij}|z_i,z_j),
\end{equation}
where $p_\theta(A_{ij}|z_i,z_j)=\sigma({z'}^\top_{i1} z'_{j2} + b)$, and $\sigma(\cdot)$ is the logistic sigmoid function.
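A PyTorch-style sketch of this decoder is given below to make the half-half split and the inner product explicit; the exact layer sizes and the use of a ReLU inside the decoder MLP are our assumptions, not a description of the released code.
\begin{verbatim}
import torch
import torch.nn as nn

class DirectedInnerProductDecoder(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        # 2-layer MLP applied to each node encoding (layer sizes assumed).
        self.mlp_dec = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim))
        self.bias = nn.Parameter(torch.zeros(1))  # b, controls graph sparsity

    def forward(self, Z):
        # Z: (N, D) matrix of node encodings.
        Zp = self.mlp_dec(Z)
        half = Zp.shape[1] // 2
        Z1, Z2 = Zp[:, :half], Zp[:, half:]       # "source" / "target" halves
        logits = Z1 @ Z2.t() + self.bias          # logits[i, j] = z'_{i1} . z'_{j2} + b
        return torch.sigmoid(logits)              # p_theta(A_ij | z_i, z_j)
\end{verbatim}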
The GVAE is then optimized with the ELBO loss:
\begin{equation}
\scriptsize
\mathcal{L}^\text{GVAE}_\text{ELBO} = - \mathbb{E}_{q_\phi(Z|X,A)}[\log p_\theta(A|Z)] + D_\text{KL}(q_\phi(Z|X,A)||p(Z)),
\end{equation}
where we use standard Gaussian for the prior $p(Z)=\prod^N_{i=1} \mathcal{N}(Z_i|0, I)$.
{\bm{s}}pace{-0.1cm}
\subsection{Restricting the Latent Space}
\label{sec:model_lowrank}
{\bm{s}}pace{-0.1cm}
Intuitively, it could be easier for interpolation algorithms to locate the right $z_{t_2}$ when the latent space is simpler.
In our experiments, we explore two intuitive ways to restrict the latent space. (1) We directly shrink the latent dimension $D$ (the hidden dimension in the encoder and decoder model is not changed). (2) We enforce a bottleneck structure for the posterior-mean branch of the encoder. To be more specific, suppose the original weight parameter of the final linear layer is a matrix $W$ of size $H \times D$, where $H$ is the final hidden dimension in the encoder. Given a rank $R$, we replace $W$ by $W^R_1W^R_2$, where $W^R_1$ and $W^R_2$ are $H \times R$ and $R \times D$ matrices, respectively. In this way, the mean of the Gaussian posterior $q_\phi(z|x)$ will be restricted to the (at most) $R$-dimensional linear space spanned by the row vectors of $W^R_2$. The intuition is that by restricting the power of the inference network $q_\phi(z|x)$, we expect that the true posterior distribution $p_\theta(z|x)$ would be encouraged to be simpler as well.
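A minimal PyTorch sketch of the bottleneck is given below, assuming the encoder exposes its final hidden representation of dimension $H$; module and variable names are ours.
\begin{verbatim}
import torch.nn as nn

class LowRankMeanHead(nn.Module):
    """Factor the H x D posterior-mean layer as W1 (H x R) times W2 (R x D)."""

    def __init__(self, hidden_dim, latent_dim, rank):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, rank, bias=False)
        self.w2 = nn.Linear(rank, latent_dim, bias=False)

    def forward(self, h):
        # The output lies in the (at most) rank-dimensional subspace
        # spanned by the rows of w2's weight matrix.
        return self.w2(self.w1(h))
\end{verbatim}
The standard-deviation branch of the encoder is left untouched, matching the description above.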
{\bm{s}}pace{-0.1cm}
\subsection{Interpolation-Aware Training}
\label{sec:model_explicittrain}
{\bm{s}}pace{-0.1cm}
With the additional attributes, it is attractive to conduct \textit{interpolation-aware} training (IAT) in a supervised manner, where we encourage the model to be consistent with the interpolation algorithm we choose. Taking this a step further, we can directly parametrize the interpolation function via a function approximator such as an MLP. Below, we take the CNN-based VAE as an example, and formulate different variants of the IAT objective function.
We first consider adding a penalty to the model to encourage it to be consistent with a chosen interpolation algorithm $f$. This can be done either on latent-encoding level, or decoder-output level:
\begin{equation}
\small
\begin{split}
&\mathcal{L}_\text{IAT}^\text{latent} = \mathbb{E}_{(x_{t_1}, x_{t_2}, x_{t_3})} ||\hat{z}_{t_2} - f(\hat{z}_{t_1}, \hat{z}_{t_3}, \frac{t_3-t_2}{t_3-t_1})||^2_2, \\
&\mathcal{L}_\text{IAT}^\text{decode} = \mathbb{E}_{(x_{t_1}, x_{t_2}, x_{t_3})} \log p_\theta(x_{t_2}|z=f(\hat{z}_{t_1}, \hat{z}_{t_3}, \frac{t_3-t_2}{t_3-t_1})),
\end{split}
\end{equation}
where $\hat{z}_t$ is the output mean from the encoder model $q_\phi(z|x_t)$.\footnote{We do not do stop-gradient for $\hat{z}_t$, therefore the encoder will also be updated by $\mathcal{L}_\text{IAT}$.}
Further, we attempt to parametrize the interpolation function via a 3-layer MLP with a hidden dimension of $D$ and a ReLU activation:
\begin{equation}
\small
\begin{split}
&\mathcal{L}_\text{IAT}^\text{MLP+latent} = \mathbb{E}_{(x_{t_1}, x_{t_2}, x_{t_3})} ||\hat{z}_{t_2} - \text{MLP}^\text{inter}(\hat{z}_{t_1}, \hat{z}_{t_3}, z^\text{inter}_{t_2})||^2_2, \\
&\mathcal{L}_\text{IAT}^\text{MLP+dec} = \mathbb{E}_{(x_{t_1}, x_{t_2}, x_{t_3})} \log p_\theta(x_{t_2}|z=\text{MLP}^\text{inter}(\hat{z}_{t_1}, \hat{z}_{t_3}, z^\text{inter}_{t_2})),
\end{split}
\end{equation}
where $z^\text{inter}_{t_2}=f(\hat{z}_{t_1}, \hat{z}_{t_3}, \frac{t_3-t_2}{t_3-t_1})$. The input to the MLP is a concatenation of $\hat{z}_{t_1}$, $\hat{z}_{t_3}$, and $z^\text{inter}_{t_2}$, where the information about the interpolation weight is contained in $z^\text{inter}_{t_2}$. Note that the $\mathcal{L}^\text{MLP+latent}_\text{IAT}$ variant can be regarded as an informal upper bound on what the best interpolation algorithm can achieve.
The final joint objective is a weighted combination of $\mathcal{L}_\text{ELBO}$ and $\mathcal{L}_\text{IAT}$:
\begin{equation}
\mathcal{L}_\text{joint-IAT}=\mathcal{L}_\text{ELBO} + \lambda_\text{IAT} \mathcal{L}_\text{IAT},
\end{equation}
where $\lambda_\text{IAT}$ controls the weight of $\mathcal{L}_\text{IAT}$.
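A sketch of how the latent-level variant can be computed for a mini-batch of labeled triples is given below; \texttt{encoder\_mean} and \texttt{interp} are placeholders for the posterior-mean branch of the encoder and the chosen interpolation algorithm.
\begin{verbatim}
import torch

def iat_latent_loss(encoder_mean, interp, triples):
    """L_IAT^latent: squared distance between inferred and interpolated encodings."""
    losses = []
    for (t1, x1), (t2, x2), (t3, x3) in triples:
        z1, z2, z3 = encoder_mean(x1), encoder_mean(x2), encoder_mean(x3)
        lam = (t3 - t2) / (t3 - t1)
        z_inter = interp(z1, z3, lam)   # no stop-gradient: the encoder is also updated
        losses.append(((z2 - z_inter) ** 2).sum())
    return torch.stack(losses).mean()

# Joint objective: loss = elbo_loss + lambda_iat * iat_latent_loss(...)
\end{verbatim}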
{\bm{s}}pace{-0.1cm}
\section{Datasets and Tasks}
\label{sec:data}
{\bm{s}}pace{-0.1cm}
Three datasets are adopted for our experiments: ShapeNet and CO3D for the image domain, and Citation Network for the graph domain. We describe them below:
{\bm{s}}pace{-0.1cm}
\paragraph{ShapeNet} The dataset we introduce to evaluate interpolation quantitatively in the image domain is generated from ShapeNet~\citep{shapenet2015}. We construct a dataset including $20$ chairs of different styles, and each object is rendered at 60 different horizontal rotation angles randomly selected between $0\si{\degree}$ and $180\si{\degree}$. We use a fixed light position and a fixed distance between the camera and the objects during rendering. The renderer applied here is composed of a rasterizer and a shader, implemented with PyTorch3D~\citep{ravi2020pytorch3d}. For each chair, we use $30$ randomly selected angles as the training set, $15$ angles as the validation set, and $15$ angles as the test set. Note that the training and validation sets are only used for IAT. We design the image interpolation task as follows: we randomly select a chair and a triplet of angles $(x_{t_1},x_{t_2},x_{t_3})$, and the model is then asked to synthesize $x_{t_2}$ given $x_{t_1}$, $x_{t_3}$, and $t_2$, via latent-space interpolation. Note that for evaluation or IAT, we only interpolate between different rotation angles of the same chair. From the test set, $2000$ sampled triplets are used for evaluation.
\paragraph{CO3D} We also adopt a real-world image dataset: Common Objects in 3D (CO3D)~\citep{reizenstein2021common}. We create a dataset including $434$ chairs of different styles. Each object (chair) has about $100$ consecutive views. These views constitute a natural rotation or translation of the object. We regard the index of the view as $t$. For each object, we use $60$ randomly selected views as the training set, $20$ views as the validation set, and $20$ views as the test set. From the test set, $21700$ sampled triplets are used for evaluation.
{\bm{s}}pace{-0.1cm}
\paragraph{Citation Network} The dataset we introduce to perform the evaluation in the graph domain is generated from the high energy physics citation network~\citep{leskovec2005graphs}. To limit the total number of nodes in consideration, we first choose the most-cited paper as the center core, and then only consider papers that cite the center core. This process creates a directed graph with $661$ nodes, where each node represents a paper. Then, using the submission time of these papers, we create a temporal graph with $200$ time stamps spanning $10$ years. The adjacency matrix $A_t$ grows as papers cite each other. We randomly select $100$ time stamps as the training set, $50$ stamps as the validation set, and $50$ stamps as the test set. We design the graph interpolation task as follows: we first randomly select a triplet of time stamps $({t_1},{t_2},{t_3})$. Given $A_{t_1}$, $A_{t_3}$, and $t_2$, the model is asked to predict $A_{t_2}$ via latent-space interpolation. $2000$ sampled triplets are used for evaluation.
{\bm{s}}pace{-0.1cm}
\section{Experiments}
\label{sec:exp}
{\bm{s}}pace{-0.1cm}
Our experiments can be roughly divided into three sections: (1) We use our evaluation framework to compare existing interpolation algorithms, for the unsupervised baseline model. (2) We explore applying a low-dimensional latent space, or a low-rank encoder to restrict the latent space, and test whether the interpolation performance can benefit from this restriction. (3) Finally, we utilize the labeled attributes and conduct interpolation-aware training.
{\bm{s}}pace{-0.1cm}
\subsection{Evaluation of Interpolation Algorithms}
\label{sec:exp_evalcompare}
{\bm{s}}pace{-0.1cm}
To train the VAE models, we use stochastic gradient descent with the Adam optimizer \citep{kingma15adam}. We stop training when the ELBO loss has converged. Hyper-parameters such as the learning rate are tuned based on validation-set performance. For the baseline models on ShapeNet and Citation Network, we use latent dimension $D=512$. For CO3D we use $D=4096$, as it gives better performance. More details about our implementation are deferred to Appendix B.
The metrics we use to evaluate on ShapeNet in the image domain are the mean squared error (MSE) and the \textit{structural similarity index measure} (SSIM)~\citep{hore2010image} between the interpolated image ${x}^\text{inter}_{t_2}$ and the ground truth $x_{t_2}$. For CO3D, we use MSE and the \textit{peak signal-to-noise ratio} (PSNR)~\citep{hore2010image}. For Citation Network, we report the binary cross-entropy (BCE) and the \textit{edge intersection over union} (eIoU) \citep{jaccard12} metric for edge prediction between the adjacency matrices of the interpolated graph and the reference ground-truth graph. In addition, we quantify the distance between the interpolated encoding ${z}^\text{inter}_{t_2}$ and the inferred encoding $\hat{z}_{t_2}$ using MSE and cosine similarity (denoted by cos-sim).
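For the graph metric, the following sketch shows how an edge-IoU score can be computed from the interpolated and reference adjacency matrices; the 0.5 decision threshold is our assumption, and the exact post-processing used in the experiments may differ.
\begin{verbatim}
import numpy as np

def edge_iou(a_pred, a_true, threshold=0.5):
    """Intersection-over-union of the predicted and ground-truth edge sets."""
    e_pred = a_pred >= threshold   # binarize predicted edge probabilities
    e_true = a_true >= threshold
    inter = np.logical_and(e_pred, e_true).sum()
    union = np.logical_or(e_pred, e_true).sum()
    return inter / union if union > 0 else 1.0
\end{verbatim}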
\begin{table*}[t]
{\bm{s}}pace{-0.1cm}
\small
\centering
\addtolength{\tabcolsep}{-1.9pt}
\begin{tabular}{r|ccc|r|ccc|r|ccc}
\toprule
\multicolumn{4}{c|}{\textbf{ShapeNet}} & \multicolumn{4}{c|}{\textbf{CO3D}} & \multicolumn{4}{c}{\textbf{Citation Network}} \\ \midrule
\textbf{Metrics} & $f^\text{linear}$ & $f^\text{slerp}$ & $f^\text{norm}$ & \textbf{Metrics} & $f^\text{linear}$ & $f^\text{slerp}$ & $f^\text{norm}$ & \textbf{Metrics} & $f^\text{linear}$ & $f^\text{slerp}$ & $f^\text{norm}$ \\ \midrule
$\text{MSE}({x}^\text{inter}_{t_2},x_{t_2}) \downarrow$ & 0.133 & 0.151 & \textbf{0.131} & $\text{MSE}({x}^\text{inter}_{t_2},x_{t_2}) \downarrow$ & 0.274 & 0.226 & \textbf{0.174} & $\text{BCE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$ &0.950 &\textbf{0.932} &0.938\\
$\text{SSIM}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ & 0.733 & 0.721 & \textbf{0.736} & $\text{PSNR}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ & 13.31 & 13.65 & \textbf{14.33} & $\text{eIoU}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ &0.562 &\textbf{0.590} &0.578 \\ \midrule
$\text{MSE}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.078 & 0.021 & \textbf{0.015} & $\text{MSE}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.044 & 0.006 & \textbf{0.004} & $\text{MSE}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & \textbf{0.009} & 0.012 & 0.012 \\
$\text{cos-sim}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.493 & \textbf{0.381} & 0.502 & $\text{cos-sim}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.434 &\textbf{0.290} & 0.436 & $\text{cos-sim}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.736 & \textbf{0.715} & 0.736 \\ \bottomrule
\end{tabular}
{\bm{s}}pace{0.2cm}
\caption{A quantitative comparison of different interpolation algorithms. Our evaluation reveals that the superiority of interpolation algorithms could be domain-dependent. While normalised interpolation works best in the image domain, spherical linear interpolation achieves the best performance in the graph domain.
}
{\bm{s}}pace{-0.3cm}
\label{tab:interalg_compare}
\end{table*}
\begin{figure*}
\caption{ShapeNet: Qualitative evaluations of different interpolation algorithms including unsupervised baseline and interpolation-aware training. The ``BN encoder'' refers to the rank-8 bottleneck encoder. The quality gap between different interpolation algorithms for the unsupervised baseline is not very clear.}
\label{fig:main_qualititave}
\end{figure*}
We compare different interpolation algorithms for the baseline VAE models with the metrics described above, and the results are shown in Table \ref{tab:interalg_compare}. For the performance w.r.t. $x^\text{inter}_{t_2}$, normalised interpolation performs best on both ShapeNet and CO3D. However, for Citation Network, spherical linear interpolation outperforms the other two algorithms. While this agrees with previous works in that a more sophisticated interpolation algorithm can indeed bring a performance gain, it also suggests that the superiority of interpolation algorithms could be \textbf{domain dependent} (note that previous works have focused on the image domain).
For the performance w.r.t. $z^\text{inter}_{t_2}$, interestingly, we observe that for all datasets spherical linear interpolation achieves the best cosine-similarity score, while normalised interpolation achieves the best MSE score on ShapeNet and CO3D. Motivated by this, we attempt to combine the ideas from both sides as \textit{normalised spherical interpolation} (formulated in Appendix A). However, this combination does not bring additional performance gains, and we omit the results here.
In the upper part of Figure \ref{fig:main_qualititave}, we show a qualitative comparison on the ShapeNet data. While in some cases we do observe that normalised interpolation gives better decoded images than linear interpolation, the quality gap is often not visually obvious. Our quantitative metrics, on the other hand, provide complementary information when the qualitative comparison falls short.
{\bm{s}}pace{-0.1cm}
\subsection{Interpolation in Restricted Latent Space}
\label{sec:exp_lowdim}
{\bm{s}}pace{-0.1cm}
As described in the formulation section, we test two intuitive ways to restrict the latent space: (1) We directly shrink the latent dimension $D$. (2) We adopt a low-rank ($R$) structure for the final linear transform in the encoder. We show the performance of different interpolation algorithms in Figure \ref{fig:main_rankdim}, where we tune $D$ or $R$ on a log scale, and all other hyper-parameters are kept the same as for the baseline model.
It is shown that the low-rank encoder gives better performance than shrinking the latent dimension $D$ for all three interpolation algorithms. Especially on the Citation Network dataset, the rank-2 encoder achieves the best performance across the different interpolation algorithms. This matches our intuition that interpolation benefits from a simpler latent space. Consistent with Table \ref{tab:interalg_compare}, normalised interpolation performs best for ShapeNet and CO3D, and spherical linear interpolation performs best for Citation Network.
We report the detailed measurements in Table \ref{tab:rankdim_metric}. Surprisingly, we observe a misalignment between the performance w.r.t. $z^\text{inter}_{t_2}$ and the performance w.r.t. $x^\text{inter}_{t_2}$. For example, for the ShapeNet dataset, the best-performing (w.r.t. SSIM) rank-8 model has relatively poor MSE and cosine-similarity scores for $z^\text{inter}_{t_2}$. This could be due to the fact that the linear space in which $\hat{z}_t$ lies changes with $R$ or $D$, so these numbers are not directly comparable.
\begin{figure*}
\caption{Interpolation performance with low-dimensional ($D$) latent space, or with low-rank ($R$) encoder. The common legend is shown in the middle figure. In most cases, the low-rank encoder has better performance.}
\label{fig:main_rankdim}
\end{figure*}
\begin{table*}[t]
\small
\centering
\footnotesize
\addtolength{\tabcolsep}{-2.0pt}
\begin{tabular}{r|ccccc|ccccc}
\toprule
\textbf{Dataset} & \multicolumn{5}{c|}{\textbf{Latent Dimension}} & \multicolumn{5}{c}{\textbf{Encoder Rank}} \\ \midrule
\textbf{ShapeNet ($f^\text{norm}$)} & \textbf{D512} & \textbf{D128} & \textbf{D32} & \textbf{D8} & \textbf{D2} & \textbf{R512} & \textbf{R128} & \textbf{R32} & \textbf{R8} & \textbf{R2} \\ \midrule
$\text{MSE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$ & 0.131 & 0.136 & 0.130 & 0.131 & 0.148 & 0.131 & 0.134 & 0.132 & 0.130 & \textbf{0.127} \\$\text{SSIM}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ & 0.736 & 0.735 & 0.739 & 0.740 & 0.706 & 0.736 & 0.737 & 0.739 & \textbf{0.744} & 0.700 \\ \midrule
$\text{MSE}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.015 & 0.061 & 0.304 & 0.842 & 0.931 & 0.015 & 0.019 & \textbf{0.014} & 0.018 & 0.015 \\
$\text{cos-sim}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.502 & 0.433 & 0.402 & 0.474 & 0.604 & 0.502 & \textbf{0.426} & 0.490 & 0.480 & 0.693 \\ \midrule
\textbf{Citation Network ($f^\text{slerp}$)} & \textbf{D512} & \textbf{D128} & \textbf{D32} & \textbf{D8} & \textbf{D2} & \textbf{R512} & \textbf{R128} & \textbf{R32} & \textbf{R8} & \textbf{R2} \\ \midrule
$\text{BCE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$ &0.932 &0.944 &0.972 &0.988 &0.893 &0.932 &0.924 &0.977
&0.945 &\textbf{0.866}\\
$\text{eIoU}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ &0.590 &0.572
&0.567 &0.589 &0.526 &0.590
&0.587 &0.580 &0.604 &\textbf{0.609}
\\ \midrule
$\text{MSE}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ & 0.012 &1.586 &1.044 &0.050 &11.784 &0.012 &0.009 &0.007 &\textbf{0.006} &1.270\\
$\text{cos-sim}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ &0.715 &0.723 &0.732 &0.734 &0.731 &0.715 &\textbf{0.714}
&0.726 &0.719 &0.724 \\ \bottomrule
\end{tabular}
{\bm{s}}pace{0.2cm}
\caption{Interpolation performance with various metrics for low-dimensional ($D$) latent space, or with low-rank ($R$) encoder. For ShapeNet, we observe a steady SSIM improvement from R512 to R8. Due to lack of space, we defer results with CO3D to Appendix C.}
\label{tab:rankdim_metric}
\end{table*}
\subsection{Interpolation-Aware Training}
\label{sec:exp_intertrain}
{\bm{s}}pace{-0.1cm}
So far, the labeled attributes have been used to evaluate the interpolation algorithms, while the training remains unsupervised. As discussed in the formulation section, we apply several variants of interpolation-aware training to the baseline model, and the results are shown in Table \ref{tab:main_iat}. Note that for ShapeNet/CO3D we use normalised interpolation, while for Citation Network we use spherical linear interpolation.
As expected, IAT greatly boosts the interpolation performance compared to the unsupervised baseline. In particular, the variants in which the interpolation MLP is introduced achieve stronger performance on most metrics, showing the potential of a parametrized interpolation function. Note that the performance of the $\mathcal{L}^\text{MLP+latent}_\text{IAT}$ variant can be regarded as an informal upper bound for interpolation algorithms under unsupervised training, and the gap suggests that there could still be room for the interpolation algorithms to improve.
We also observe that the decoder variants give better performance than the latent-encoding variant. This is as expected, because in these cases the decoder is directly optimized to predict $x_{t_2}$. In Appendix C, we provide a study of how the performance improves as the amount of labeled data grows.
We show qualitative examples for the decoder variants of IAT in Figure \ref{fig:main_qualititave}. Compared to the unsupervised models, the decoded chairs from IAT are rotated to the correct angle. Consistent with the scores in Table \ref{tab:main_iat}, the $\mathcal{L}^\text{MLP+dec}_\text{IAT}$ variant generates the sharpest and clearest images.
\begin{table*}[t]
\addtolength{\tabcolsep}{-3.5pt}
\centering
\footnotesize
\begin{tabular}{c|ccccc|c|ccccc}
\toprule
\multicolumn{6}{c|}{\textbf{ShapeNet ($f^\text{norm}$)}} & \multicolumn{6}{c}{\textbf{Citation Network ($f^\text{slerp}$)}} \\ \midrule
\textbf{IAT} & \multicolumn{1}{c}{\textbf{N/A}} & \multicolumn{1}{c}{\textbf{lat.}} & \multicolumn{1}{c}{\textbf{dec.}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}mlp \\ + lat.\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}mlp \\ + dec.\end{tabular}}} & \multicolumn{1}{c|}{\textbf{IAT}} & \textbf{N/A} & \textbf{lat.} & \textbf{dec.} & \textbf{\begin{tabular}[c]{@{}c@{}}mlp \\ + lat.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}mlp \\ + dec.\end{tabular}} \\ \midrule
$\text{MSE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$ & 0.131 & 0.127 & 0.056 & 0.059 & \textbf{0.047} &\multicolumn{1}{c|}{$\text{BCE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$} &0.932 &0.071 &\textbf{0.004} &0.056 &0.039 \\
$\text{SSIM}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ & 0.736 & 0.742 & 0.815 & 0.837 & \textbf{0.859} & \multicolumn{1}{c|}{$\text{eIoU}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$} & 0.590 & 0.605 & 0.736 & 0.682 & \textbf{0.741} \\
\bottomrule
\end{tabular}
\caption{Results of interpolation-aware training (IAT). ``lat.'' refers to $\mathcal{L}^\text{latent}_\text{IAT}$, and ``dec.'' refers to $\mathcal{L}^\text{decode}_\text{IAT}$. Results on CO3D are deferred to Appendix C. The supervision provided by the labeled attributes greatly improves the interpolation performance compared with the unsupervised baseline (marked by ``N/A'').}
\vspace{-0.2cm}
\label{tab:main_iat}
\end{table*}
\vspace{-0.1cm}
\section{Discussion and Limitations}
\label{sec:discuss}
\vspace{-0.1cm}
We devote this section to discussing the limitations of this work. Our evaluation is motivated from an application point of view: we apply the interpolation algorithms to output a latent code that decodes to a data sample with a given attribute value. However, due to the unsupervised nature of standard VAE training, and the fact that existing interpolation algorithms are designed only to traverse the latent space, there is no guarantee that their behavior will match this goal. Therefore, our approach should be treated only \textbf{as a proxy} for evaluating the performance of interpolation algorithms. Still, we hope our work can motivate research that focuses more on the
application aspect of interpolation, in addition to distribution matching (also discussed in the next section).
Next, our evaluation framework requires an inference model $q_\phi(z|x)$ to be available. However, except for some variants such as BiGAN \citep{jeff16bigan}, most GAN models \citep{Goodfellow14gan} do not have an inference network. Thus, our methodology cannot be directly applied to GANs.
\vspace{-0.1cm}
\section{Related Works}
\label{sec:related}
\vspace{-0.1cm}
How best to traverse the latent space is an ongoing research topic. \citet{white16spherical} proposes spherical linear interpolation, which treats the interpolation as a great-circle path on an $n$-dimensional hypersphere. Based on optimal transport maps, normalised interpolation \citep{agustsson2018optimal} adapts latent-space operations so that they fully match the prior distribution while minimally modifying the original operation.
With a similar motivation, \citet{lesniak2018distributioninterpolation} define the distribution matching property (DMP) as a potential guideline for interpolation algorithm design. For a standard Gaussian prior, they point out that linear and spherical linear interpolation do not satisfy DMP, while normalised interpolation does. Further, they propose \textit{Cauchy-linear} interpolation, which satisfies DMP for a wide range of prior choices. Finally, \citet{lukasz19realismindex} propose a numerically efficient algorithm that maximizes the \textit{realism index} of an interpolation.
As discussed in the introduction, most evaluations in the literature are qualitative. \citet{agustsson2018optimal} performed a quantitative evaluation, where the \textit{inception score} \citep{salimans16ganinception} is used to measure the quality or diversity of samples decoded from the interpolated encodings in an unsupervised manner. Their evaluation therefore focuses more on how well the distribution of the decoded samples matches the data distribution than on how ``accurate'' the interpolation algorithm is. In this work, since we utilize datasets with attributes, we are able to provide reference $x$ or $z$ for the interpolated points.
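Concretely, our reading of this setup is as follows: for a labeled triplet with attribute values $a_{t_1}<a_{t_2}<a_{t_3}$, the interpolation weight is fixed by the attributes, e.g. $\lambda=(a_{t_2}-a_{t_1})/(a_{t_3}-a_{t_1})$; the endpoints are encoded to $z_{t_1}$ and $z_{t_3}$; and the candidate algorithm $f$ is scored by comparing ${z}^\text{inter}_{t_2}=f(z_{t_1},z_{t_3},\lambda)$ against the reference encoding $\hat{z}_{t_2}$, or its decoding ${x}^\text{inter}_{t_2}$ against $x_{t_2}$. The exact choice of $\lambda$ follows the evaluation section and is only paraphrased here.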
Similar to interpolation, \textit{analogy} \citep{mikolov-etal-2013-linguistic}, usually in the form of 4-tuple $A:B::C:D$ (often spoken as ``A is to B as C is to D''), has been used to demonstrate the structure of the latent space. As pointed out by \citet{agustsson2018optimal}, the most commonly used analogy function $\hat{z}_d:=z_c+(z_b-z_a)$ would also introduce distribution mismatch. With appropriate datasets \citep{reed15analogy}, our evaluation framework can be generalized to compare different latent-space analogy algorithms, and we leave that as future work.
In the disentanglement learning literature \citep{Higgins2017betaVAELB}, several quantitative metrics have been proposed to measure the disentanglement of the latent variable, including z-diff score \citep{Higgins2017betaVAELB}, SAP score \citep{kumar2018variational}, and factor score \citep{pmlr-v80-kim18b}. These metrics are different from our evaluation in that they do not involve traversing the latent space. Also, note that they require datasets with factor annotations (similar to the semantically continuous label used in our work).
\vspace{-0.1cm}
\paragraph{Related Applications} On the computer vision side, optimization-based methods~\citep{solomon:hal-01188953} and learning-based methods~\citep{mildenhall2020nerf, Riegler2020FVS, chen2019dibrender, DBLP:journals/corr/abs-1906-07316} have been proposed to synthesize (interpolate) new images and/or 3D shapes from existing observations. Of note, NeRF~\citep{mildenhall2020nerf} enables photorealistic image generation by incorporating neural networks into volumetric rendering. These applications are similar to the image interpolation task considered in this work. However, most of these works do not adopt generative latent variable models.
Related to our application on graph interpolation is the body of works on learning and modeling dynamic graphs, where the goal is mostly to accurately predict future graphs. These works treat a dynamic graph as a sequence of graph snapshots and rely on the combination of recurrent neural networks (RNN) and GNN to perform discrete-time dynamic graph forecasting~\citep{sankar2020dysat,hajiramezanali2019variational,manessi2020dynamic}.
There are RNN-based methods that extract features from each snapshot using a graph model and then feed them into an RNN~\citep{seo2018structured,taheri2019learning,lei2019gcn,manessi2020dynamic,chen2018gc,li2019predicting,hajiramezanali2019variational}.
There are also approaches imposing temporal constraints on top of RNN. These constraints include spatial / temporal attention mechanism \citep{sankar2020dysat}, architectural constraints on the static graph model for each snapshot \citep{goyal2018dyngem}, and dynamic parameter constraints (i.e., using another RNN to update parameters) on the static graph model \citep{pareja2020evolvegcn}.
\vspace{-0.1cm}
\section{Conclusion}
\vspace{-0.1cm}
In this work, we show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space interpolation algorithms, for variational autoencoders. Our evaluation can be used to complement the standard qualitative comparison, and also enables evaluation for domains (such as graphs) in which visualization is difficult.
Interestingly, our experiments reveal that the superiority of interpolation algorithms can be domain-dependent: while normalised interpolation works best in the image domain, spherical linear interpolation achieves the best performance in the graph domain. Next, we test a simple and effective method to restrict the latent space via a bottleneck structure in the encoder, and report that the interpolation algorithms considered in this work all benefit from this restriction. Finally, we conduct interpolation-aware training with the labeled attributes, and show that this explicit supervision can greatly boost the interpolation performance.
We also hope this work could motivate future works focusing more on the application potential of latent-space interpolation for deep generative models, in addition to its current role as a demonstration tool of generalization ability.
\appendix
\section*{Supplemental Materials}
\section{A: Auxiliary Formulations}
\label{appsec:formulation}
\textbf{Definition of $D_\text{KL}$} For distributions $P$ and $Q$ of continuous random variables, with densities $p$ and $q$, the Kullback–Leibler divergence is defined as:
\begin{equation}
D_\text{KL}(P||Q) = \int^{+\infty}_{-\infty} p(x) \log \frac{p(x)}{q(x)}\,dx.
\end{equation}
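For the diagonal-Gaussian posterior $q_\phi(z|x)=\mathcal{N}(\mu,\mathrm{diag}(\sigma^2))$ and the standard-normal prior commonly used in VAEs (assumed here for concreteness), this divergence has the familiar closed form:
\[
D_\text{KL}\big(q_\phi(z|x)\,\|\,\mathcal{N}(0,I)\big)
= \frac{1}{2}\sum_{i}\big(\mu_i^2+\sigma_i^2-\log\sigma_i^2-1\big).
\]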
\textbf{Normalised spherical interpolation} In the section on the evaluation of interpolation algorithms, we attempted to combine $f^\text{slerp}$ and $f^\text{norm}$ into a \textit{normalised spherical interpolation}:
\begin{equation}
f^\text{SN}(z_1, z_2, \lambda) = \frac{\text{sin}[(1-\lambda)\Omega] z_1 + \text{sin}[\lambda \Omega] z_2}{\sqrt{\text{sin}^2[(1-\lambda)\Omega] + \text{sin}^2[\lambda \Omega]}}.
\end{equation}
However, this combination does not bring a performance gain.
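For reference, a minimal NumPy sketch of the three interpolants is given below; the first two follow the standard formulas from the cited works as we understand them, the third implements $f^\text{SN}$ from the equation above, and the function names are ours:
\begin{verbatim}
import numpy as np

def slerp(z1, z2, lam):
    # Spherical linear interpolation (standard form; cf. White, 2016).
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos, -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):   # nearly parallel vectors: fall back to lerp
        return (1 - lam) * z1 + lam * z2
    return (np.sin((1 - lam) * omega) * z1 + np.sin(lam * omega) * z2) / np.sin(omega)

def norm_interp(z1, z2, lam):
    # Distribution-matched linear interpolation for a standard Gaussian prior
    # (our reading of Agustsson et al., 2018).
    return ((1 - lam) * z1 + lam * z2) / np.sqrt((1 - lam) ** 2 + lam ** 2)

def sn_interp(z1, z2, lam):
    # Normalised spherical interpolation f^SN (assumes z1, z2 are not parallel).
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos, -1.0, 1.0))
    num = np.sin((1 - lam) * omega) * z1 + np.sin(lam * omega) * z2
    return num / np.sqrt(np.sin((1 - lam) * omega) ** 2 + np.sin(lam * omega) ** 2)
\end{verbatim}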
\section{B: Implementation Details}
\label{appsec:implement_detail}
We report further details of the hyperparameter settings used in our evaluations below.
\textbf{ShapeNet} In the unsupervised VAE training, we use a learning rate of $0.0005$ and a minibatch size of $40$. The VAE model is trained for $40000$ iterations. In the interpolation-aware training, we use a learning rate of $0.0005$. For each training iteration, we uniformly sample a new minibatch of $20$ triplets $\{x_{t_1},x_{t_2},x_{t_3}\}$ to optimize $\mathcal{L}_\text{IAT}$. We use a $\lambda_\text{IAT}$ of $1$ during training. The model is trained for $80000$ iterations.
\textbf{CO3D} In the unsupervised VAE training, we use a learning rate of $0.0001$ and a minibatch size of $100$. The VAE model is trained for $30000$ iterations. In the interpolation-aware training, we use a learning rate of $0.0001$. For each training iteration, we uniformly sample a new minibatch of $40$ triplets $\{x_{t_1},x_{t_2},x_{t_3}\}$ to optimize $\mathcal{L}_\text{IAT}$. We use a $\lambda_\text{IAT}$ of $1$ during training. The model is trained for $30000$ iterations.
\textbf{Citation Network}
In the unsupervised VAE training, we use a learning rate of $0.0005$ and a minibatch size of $10$. The VAE model is trained for $30000$ iterations. In the interpolation-aware training, we use a learning rate of $0.0005$. For each training iteration, we uniformly sample a new minibatch of $20$ triplets $\{A_{t_1},A_{t_2},A_{t_3}\}$ to optimize $\mathcal{L}_\text{IAT}$. We use a $\lambda_\text{IAT}$ of $5$ during training. The model is trained for $50000$ iterations.
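For completeness, the pseudo-code below sketches one interpolation-aware update as we read the description above. The model interface (\texttt{encode}, \texttt{decode}, \texttt{negative\_elbo}), the attribute-to-$\lambda$ mapping, and the way $\mathcal{L}_\text{IAT}$ is combined with the unsupervised objective are our assumptions rather than the paper's exact recipe:
\begin{verbatim}
import torch.nn.functional as F

def iat_step(vae, interp_fn, triplet, optimizer, lam_iat=1.0):
    # One interpolation-aware update (sketch; the model interface is assumed).
    (x1, a1), (x2, a2), (x3, a3) = triplet   # labeled triplet x_{t1}, x_{t2}, x_{t3}
    z1 = vae.encode(x1)                      # assumed: returns the posterior mean
    z3 = vae.encode(x3)
    lam = (a2 - a1) / (a3 - a1)              # place x_{t2} between the endpoints
    x_pred = vae.decode(interp_fn(z1, z3, lam))
    # decode-variant penalty; swap MSE for BCE on graph data
    loss = vae.negative_elbo(x2) + lam_iat * F.mse_loss(x_pred, x2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}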
\section{C: Auxiliary Experiments}
\label{appsec:auxexp}
Table \ref{tab:co3d_rankdim_metric} shows the results with restricted latent space for CO3D; the results are consistent with ShapeNet and Citation Network: the low-rank encoder benefits latent-space interpolation. Table \ref{tab:co3d_iat} shows the IAT training results for CO3D.
\begin{figure}
\caption{Performance of IAT with different number of labeled training samples.}
\label{fig:app_difftrainnum}
\end{figure}
\begin{table}[h]
\addtolength{\tabcolsep}{-3.5pt}
\centering
\small
\begin{tabular}{c|ccccc}
\toprule
\multicolumn{6}{c}{\textbf{CO3D ($f^\text{norm}$)}} \\ \midrule
\textbf{IAT} & \multicolumn{1}{c}{\textbf{N/A}} & \multicolumn{1}{c}{\textbf{lat.}} & \multicolumn{1}{c}{\textbf{dec.}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}mlp \\ + lat.\end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}mlp \\ + dec.\end{tabular}}} \\ \midrule
$\text{MSE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$ &0.175 &0.174 &0.151 &\textbf{0.131} &0.134\\
$\text{PSNR}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ &14.33 &14.33 &14.76 &\textbf{15.32} &15.23 \\
\bottomrule
\end{tabular}
\caption{Results of interpolation-aware training (IAT) on the CO3D dataset. ``lat.'' refers to $\mathcal{L}^\text{latent}_\text{IAT}$, and ``dec.'' refers to $\mathcal{L}^\text{decode}_\text{IAT}$. The supervision provided by the labeled attributes greatly improves the interpolation performance compared with the unsupervised baseline (marked by ``N/A'').}
\vspace{-0.2cm}
\label{tab:co3d_iat}
\end{table}
In Figure \ref{fig:app_difftrainnum}, we provide a study of how the performance improves as the number of labeled samples grows. The objective variant is $\mathcal{L}_\text{IAT}^\text{MLP+decode}$. Interestingly, we observe that for the Citation Network dataset the improvement saturates at around 25 labeled samples, which is around 25\% of the whole training set.
\begin{table*}
\small
\centering
\footnotesize
\addtolength{\tabcolsep}{-2.0pt}
\begin{tabular}{r|cccccc|cccccc}
\toprule
\textbf{Dataset} & \multicolumn{6}{c|}{\textbf{Latent Dimension}} & \multicolumn{6}{c}{\textbf{Encoder Rank}} \\ \midrule
\textbf{CO3D ($f^\text{norm}$)} & \textbf{D4096} & \textbf{D1024} & \textbf{D256} & \textbf{D64} & \textbf{D16} & \textbf{D4} & \textbf{R4096} & \textbf{R1024} & \textbf{R256} & \textbf{R64} & \textbf{R16} & \textbf{R4}\\ \midrule
$\text{MSE}({x}^\text{inter}_{t_2},x_{t_2})\downarrow$ & 0.174 & 0.176 & 0.178 & 0.181 & 0.187 &0.191 & 0.174 & 0.178 & 0.177 & 0.175 & \textbf{0.170} &0.178 \\$\text{PSNR}({x}^\text{inter}_{t_2},x_{t_2})\uparrow$ &14.33 &14.32 &14.34 &14.22 &14.18 &13.86 &14.33 &14.34 &14.32 &14.34 &\textbf{14.37} &14.02 \\ \midrule
$\text{MSE}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ &0.004 &0.017 &0.071 &0.297 &0.963 &0.803 &0.004 &0.005 &0.005 &0.004 &0.006 &\textbf{0.003} \\
$\text{cos-sim}({z}^\text{inter}_{t_2},\hat{z}_{t_2})\downarrow$ &0.436 &0.423 &\textbf{0.410} &0.411 &0.420 &0.527 &0.436 &0.423 &0.425 &0.436 &0.480 &0.555 \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\caption{Interpolation performance with various metrics for low-dimensional ($D$) latent space, or with low-rank ($R$) encoder. The results are from normalized interpolation. For CO3D, we observe a steady PSNR improvement from R4096 to R16.}
\label{tab:co3d_rankdim_metric}
\end{table*}
\end{document}
|
\begin{document}
\subjclass[2010]{30E05, 46C20}
\keywords{Nevanlinna-Pick interpolation, Indefinite metric, Pontryagin spaces, Moment problem}
\title{Functions of classes~$\mathcal N_\varkappa^+$}
\author{Alexander Dyachenko}
\email{[email protected]}
\email{[email protected]}
\thanks{This work was financially supported by the European Research Council under the
European Union's Seventh Framework Programme (FP7/2007--2013)/ERC grant agreement no.
259173.}
\address{TU-Berlin, MA 4-2, Stra\ss e des 17. Juni 136, 10623 Berlin, Germany}
\begin{abstract}
In the present note we give an elementary proof of the necessary and sufficient condition
for a univariate function to belong to the class~$\mathcal N_\varkappa^+$. This class was
introduced mainly to deal with the indefinite version of the Stieltjes moment problem (and
corresponding $\pi$-Hermitian operators), although it is applicable beyond the original
scope. The proof relies on asymptotic analysis of the corresponding Hermitian forms. Our
result closes a gap in the criterion given by Krein and Langer in their joint paper of~1977.
The correct condition was stated by Langer and Winkler in~1998, although they provided no
proper reasoning.
\end{abstract}
\maketitle
\section{Introduction}
The function classes~$\mathcal N_\varkappa$ with $\varkappa=0,1,\dots$ were introduced in the
prominent paper~\cite{KreinLanger1} of M.~Krein and H.~Langer. They serve as a natural
generalisation of the Nevanlinna class~$\mathcal N \colonequals \mathcal N_0$ of all holomorphic
mappings~$\mathbb{C}_+\to\mathbb{C}_+$, which are also known as~$\mathcal R$-functions
(here~$\mathbb{C}_+\colonequals\{z\in\mathbb{C}:\Im z>0\}$ is the upper half of the complex
plane). A function~$\varphi(z)$ belongs to~$\mathcal N_\varkappa$ whenever it is meromorphic
in~$\mathbb{C}_+$, for any set of non-real points~$z_1$, $z_2$, \dots, $z_k$ the Hermitian
form
\begin{equation}\label{eq:quad_form}
h_\varphi(\xi_1, \dots, \xi_k \vert z_1,\dots, z_k)\colonequals
\sum_{n,m=1}^k \frac{\varphi(z_m) - \overline{\varphi(z_n)}}
{z_m-\overline z_n}\xi_m\overline\xi_n
\end{equation}
has at most~$\varkappa$ negative squares and for some set of points there are
exactly~$\varkappa$ negative squares. It is convenient (and generally accepted) to define
$\mathcal N_\varkappa$\nobreakdash-functions in the lower half of the complex plane by complex
conjugation, i.e.~$\overline{\varphi(z)}=\varphi(\overline z)$.
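As an elementary observation, included here only for orientation: for a single point ($k=1$) the form reduces to
\[
h_\varphi(\xi_1\,\vert\, z_1)=\frac{\varphi(z_1)-\overline{\varphi(z_1)}}{z_1-\overline z_1}\,|\xi_1|^2
=\frac{\Im\varphi(z_1)}{\Im z_1}\,|\xi_1|^2,
\]
so the requirement of no negative squares ($\varkappa=0$) already recovers the defining property of the Nevanlinna class, namely $\Im\varphi(z)\geqslant 0$ whenever $\Im z>0$.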
A significant particular case is presented by the classes~$\mathcal N_\varkappa^+$, which are
considered here. They contain all $\mathcal N_\varkappa$-functions~$\varphi(z)$ such
that~$z\varphi(z)$ belongs to~$\mathcal N$. Among various applications, $\mathcal N$, \
$\mathcal N_\varkappa$ and~$\mathcal N_\varkappa^+$ appear in the moment problems and have
connections to the spectral theory of operators. However, the classes~$\mathcal N_\varkappa^+$
can find even more applications as a foremost generalisation of the Stieltjes
functions~$\mathcal N_0^+$.
This note aims at obtaining the necessary and sufficient condition for a function to be in the
class~$\mathcal N_\varkappa^+$ through the asymptotic analysis of the corresponding Hermitian
forms. As a main tool, we use the basic Nevanlinna-Pick theory for the halfplane within the
framework presented, for example, in~\cite[Chapter~3]{Akhiezer65}
or~\cite[Chapters~II--III]{Donoghue}. We show that, roughly speaking, $\mathcal N_\varkappa^+$
differs from the Stieltjes class~$\mathcal N_0^+$ in having~$\varkappa$ simple negative poles,
one of which can reach the origin and merge there into another singularity. More precisely,
\emph{a function~$\varphi(z)$ belongs to the class~$\mathcal N_\varkappa^+$ if and only if it
has one of the forms
\begin{gather}
\tag{A}\label{eq:repr_th_3.8_a}
\varphi(z)=s_0+\sum_{j=1}^\varkappa\frac{\gamma_j}{\alpha_j-z}+\int_0^\infty\frac{d\nu(t)}{t-z};\\
\tag{B}\label{eq:repr_th_3.8_b}
\varphi(z)=s_0+\frac{s_1}{z}-\frac{s_2}{z^2}+\sum_{j=1}^{\varkappa-1}\frac{\gamma_j}{\alpha_j-z}
+\int_0^\infty\frac{d\nu(t)}{t-z},
\quad\text{where }
\max\{s_1,s_2\}>0,\ \nu(+0)=0;\\
\notag
\varphi(z)=s_0+\frac{s_1}{z}-\frac{s_2}{z^2}+\sum_{j=1}^{\varkappa-1}\frac{\gamma_j}{\alpha_j-z}
+\frac 1z\int_0^\infty\left(\frac{1}{t-z}-\frac t{1+t^2}\right)d\sigma(t),\\
\tag{C}\label{eq:nu_inf}
\hspace{52mm}
\text{where }
\int_0^1\frac{d\sigma(t)}t = \infty,\ \sigma(0+)=0,\
\int_0^{\infty}\frac{d\sigma(t)}{1+t^2} < \infty.
\end{gather}
Here $s_0,s_2\geqslant 0$, ~$s_1\in\mathbb{R}$, ~$\gamma_j,\alpha_j<0$ for~$j=1,2,\dots,\varkappa$ and
$\nu(t),\sigma(t)$ are nondecreasing left-continuous functions such that~$\nu(0)=\sigma(0)=0$
and~$\int_0^{\infty}\frac{d\nu(t)}{1+t} < \infty$.}
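As a minimal illustration (our addition, easily checked against the statement above): for fixed $\gamma,\alpha<0$ the function $\varphi(z)=\gamma/(\alpha-z)$ is of the form~\eqref{eq:repr_th_3.8_a} with $\varkappa=1$, $s_0=0$ and $\nu\equiv 0$. Indeed, $z\varphi(z)=-\gamma+\gamma\alpha/(\alpha-z)$ belongs to~$\mathcal N$ since $\gamma\alpha>0$; moreover $-\varphi\in\mathcal N$ is rational with a single pole and bounded at infinity, so the form~\eqref{eq:quad_form} has at most one negative square, while for $z=\alpha+i\eta$ one computes $\Im\varphi(z)/\Im z=\gamma/\eta^2<0$, so that exactly one negative square occurs and $\varphi\in\mathcal N_1^+$.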
The function~$\sigma(t)$ is intentionally denoted by a distinct letter to emphasize that its
constraints at infinity are weaker. For our purposes, it is more convenient to formulate this
criterion as Theorem~\ref{th:1}: to give the corresponding formulae for functions~$\Phi(z)$ such
that~$\varphi(z)\colonequals\frac 1z\Phi(z)$ belongs to the class~$\mathcal N_\varkappa^+$. In
this formulation, the case~\eqref{eq:repr_th_3.8_a} corresponds to the
representation~\eqref{eq:phi_form_2_with_cond}, and the cases~\eqref{eq:repr_th_3.8_b}
and~\eqref{eq:nu_inf} correspond to the representation~\eqref{eq:phi_form_1_with_cond}. The
intermediary Propositions~\ref{prop:1} and~\ref{prop:2} are more general than required for
proving Theorem~\ref{th:1} and can be interesting per se.
This result corrects Theorem~3.8 of~\cite{KreinLanger1}: the authors put the
condition~$\int_0^{\infty}\frac{d\sigma(t)}{1+t} < +\infty$ in the case~\eqref{eq:nu_inf}. As a
result,
Theorem~3.8 fails to address~$\mathcal N_\varkappa^+$\nobreakdash-functions like
\[
\psi(z)=\frac 1z\cdot\left(\frac{1}{\sqrt{z}}\cot\frac{1}{\sqrt{z}}-\sqrt{z}\cot\sqrt{z}\right)
\]
with~$\varkappa=1$. More than likely, this mistake is just an oversight: for proving the
representations~\eqref{eq:repr_th_3.8_a}--\eqref{eq:repr_th_3.8_b} the authors use, in fact, the
measure~$\frac{d\sigma(t)}t$ as~$d\nu(t)$; then they put~$\nu(t)$ instead of~$\sigma(t)$
in~\eqref{eq:nu_inf}. Furthermore, their proof seems to be less transparent since it involves
operator theory (which is convenient for the general case of~$\mathcal N_\varkappa$). Lemma~5.3
from~\cite[p.~421]{LangerWinkler} (see Theorem~\ref{th:1} herein) has a proper statement, and
the function~$\psi(z)$ is allowed as an entry of~$\mathcal N_1^+$. Unfortunately, the proof
in~\textup{\cite{LangerWinkler}} is invalid. Our proof does not depend on the results
of~\cite{KreinLanger1,LangerWinkler}.
It is worth noting that the gap in~\cite[Theorem~3.8]{KreinLanger1} does not affect dependent
results which assume that the function~$z\varphi(z)$ has an asymptotic expansion of the
form~$\sum_{n=0}^\infty c_nz^n$ as $z\to i\cdot 0+$ or of the form~$\sum_{n=0}^\infty c_nz^{-n}$
as $z\to+\infty\cdot i$. Indeed, in the former case~$\varphi(z)$ can be expressed as
in~\eqref{eq:repr_th_3.8_a} or~\eqref{eq:repr_th_3.8_b}, and in the latter case our
representation~\eqref{eq:nu_inf} reduces (see~\cite[p.~143]{Kac}) to item~3 of Theorem~3.8
in~\cite{KreinLanger1}. On the other hand, the function~$z\psi(z)$, where~$\psi(z)$ is given
above, has no such asymptotic expansions.
\section{Preliminaries}\label{sec:n-functions}
Each $\mathcal N$-function~$\Phi$ has the following integral representation
(see \emph{e.g.}~\cite[p.~92]{Akhiezer65}, \cite[p.~20]{Donoghue}):
\begin{equation}\label{eq:Phi_N}
\Phi(z)= bz + a + \int_{-\infty}^\infty\left(\frac{1}{t-z}-\frac{t}{1+t^2}\right)d\sigma(t)
\end{equation}
where $a$ is real, $b\geqslant 0$ and $\sigma(t)$ is a real non-decreasing function satisfying
$\int_{-\infty}^\infty \frac{d\sigma(t)}{1+t^2} < \infty$. The converse is also true: all functions of
the form~\eqref{eq:Phi_N} belong to~$\mathcal N$.
To be definite, we assume that the function~$\sigma(t)$ is \emph{left-continuous}, that
is~$\sigma(t)=\sigma(t-)$ for all~$t\in\mathbb{R}$. Accordingly, the notation for integrals
with respect to~$d\sigma(t)$ is as in the formula
\( \int_\alpha^\beta f(t)\,d\sigma(t) \colonequals \int_{[\alpha,\beta)} f(t)\,d\sigma(t) \) for
arbitrarily taken real numbers~$\alpha,\beta$ and function~$f(t)$.
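For instance (an illustration of this convention): if $\sigma(t)$ has a single jump of size~$1$ at the point~$\beta$ and is constant otherwise, then $\int_\alpha^\beta f(t)\,d\sigma(t)=0$ for $\alpha<\beta$, whereas $\int_\alpha^{\beta'} f(t)\,d\sigma(t)=f(\beta)$ for any $\beta'>\beta$.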
\begin{remark*}\label{rem:defN}
A function~$\Phi$ given by the formula~\eqref{eq:Phi_N} is holomorphic outside the real
line. Furthermore, it has an analytic continuation through the intervals outside the support
of $d\sigma$. The function $\varphi(z)\colonequals\Phi(z)/z$ has the same singularities with
the exception of the origin (generally speaking). We can additionally note that,
\[
\text{if}\quad z_1<z_2<t \quad\text{or}\quad t<z_1<z_2,\quad\text{then}\quad
\frac 1{t-z_2} - \frac 1{t-z_1} =\frac {z_2-z_1}{(t-z_1)(t-z_2)}>0.
\]
Consequently, given a real interval~$(\alpha,\beta)$ that has no common points with the
support of~$d\sigma$, the condition $\alpha<z_1<z_2<\beta$ implies $\Phi(z_1)<\Phi(z_2)$
unless~$\Phi(z)\equiv a$, which is seen from the representation~\eqref{eq:Phi_N}. (This fact
is also seen immediately from the definition of the class~$\mathcal N$:
see~\cite[p.~18]{Donoghue}.) Put in other words, the function~$\Phi(z)$ increases in the
interval~$(\alpha,\beta)$ unless it is identically constant.
\end{remark*}
\begin{theorem}[Coincides with Lemma~5.3 from~{\cite[p.~421]{LangerWinkler}}]\label{th:1}
Let a function~$\Phi\in\mathcal N$. The function $\varphi(z)\colonequals \Phi(z)/z$ belongs
to~$\mathcal N_\varkappa$ if and only if the representation~\eqref{eq:Phi_N} of~$\Phi(z)$ is
either of the form
\begin{subequations} \label{eq:phi_form_1_with_cond}
\begin{equation} \label{eq:phi_form_1}
\Phi(z)= bz + a + \sum_{n=1}^{\varkappa-1}\frac{\sigma_n}{\lambda_n-z}
+ \int_0^\infty\left(\frac{1}{t-z}-\frac{t}{1+t^2}\right)d\sigma(t)
\end{equation}
where~$\lambda_n<0$, \ $n=1,2,\dots,\varkappa-1$ and
\begin{equation} \label{eq:phi_form_1_cond}
0<\Phi(0-)
= a + \sum_{n=1}^{\varkappa-1}\frac{\sigma_n}{\lambda_n}
+ \int_0^\infty\frac{d\sigma(t)}{t+t^3}\leqslant\infty,
\end{equation}
\end{subequations}
or of the form
\begin{subequations}\label{eq:phi_form_2_with_cond}
\begin{equation} \label{eq:phi_form_2}
\Phi(z)= bz + a + \sum_{n=1}^{\varkappa}\frac{\sigma_n}{\lambda_n-z}
+ \int_0^\infty\left(\frac{1}{t-z}-\frac{t}{1+t^2}\right)d\sigma(t),
\end{equation}
where~$\lambda_n<0$, \ $n=1,2,\dots,\varkappa$ and
\begin{equation}\label{eq:phi_form_2_cond}
\Phi(0-) = a + \sum_{n=1}^{\varkappa}\frac{\sigma_n}{\lambda_n} +
\int_0^\infty\frac{d\sigma(t)}{t+t^3}\leqslant 0.
\end{equation}
\end{subequations}
\end{theorem}
Note that~$\mathcal N$-functions of the forms~\eqref{eq:phi_form_1} and~\eqref{eq:phi_form_2}
have the corresponding limit~$\Phi(0-)$ defined, because (when non-constant) they grow
monotonically (see Remark~\ref{rem:defN}) outside the support of corresponding
measure~$d\sigma(t)$. Moreover, all numbers~$\sigma_n$
in~\eqref{eq:phi_form_1_with_cond}--\eqref{eq:phi_form_2_with_cond} are positive due to the
condition~$\Phi\in\mathcal N$.
\section{Proofs}\label{sec:proofs}
\begin{definition*}\label{def:poi_inc}
A real point~$\lambda$ is called a \emph{point of increase} of a function~$\sigma(t)$ if
$\sigma(\lambda+\varepsilon)>\sigma(\lambda-\varepsilon)$ for every~$\varepsilon>0$ small
enough. In particular, the set of all points of increase of a non-decreasing
function~$\sigma(t)$ is the support of~$d\sigma(t)$.
\end{definition*}
In each punctured neighbourhood of the point of increase~$\lambda$ there exists~$\lambda'$ such
that the limit
\[
\sigma'(\lambda')
= \lim_{\varepsilon\to 0}
\frac{\sigma(\lambda'+\varepsilon)-\sigma(\lambda'-\varepsilon)}{2\varepsilon}
\]
is positive or nonexistent. Indeed, otherwise $\sigma'(t)$ would exist and satisfy $\sigma'(t)\leqslant 0$
(hence $\sigma'(t)=0$, since $\sigma$ is non-decreasing) throughout the closed
interval~$\lambda-\varepsilon\leqslant t\leqslant \lambda+\varepsilon$ for some~$\varepsilon$ small enough; by
the mean value theorem, $\sigma$ would then be constant on this interval, contradicting
that~$\lambda$ is a point of increase. Consequently, if we
know that the function~$\sigma(t)$ has at least $\varkappa$ negative points of increase, then we
always can select $\varkappa$ points of
increase~$\lambda_\varkappa<\lambda_{\varkappa-1}<\dots<\lambda_1<0$, in which the
derivative~$\sigma'$ is nonexistent or positive. Given such a set of points, denote
\begin{equation}\label{eq:def_delta}
\delta\colonequals
\frac 13 \min\Big\{-\lambda_1,\min_{1\leqslant n\leqslant\varkappa-1}(\lambda_{n}-\lambda_{n+1})\Big\},
\end{equation}
put $U_n\colonequals(\lambda_n-\delta,\lambda_n+\delta)$, where $n=1,\dots,\varkappa$, and
$U_0\colonequals\mathbb{R}\setminus\left(\bigcup_{n=1}^{\varkappa} U_n\right)$.
\begin{prop*}\label{prop:1}
Consider a function~$\Phi(z)=z\varphi(z)$ of the form~\eqref{eq:Phi_N}. Let the
function~$\sigma(t)$ have at least~$\varkappa$ negative points of
increase~$\lambda_\varkappa<\dots<\lambda_1$ in which the derivative~$\sigma'$ is
nonexistent or positive. Then the Hermitian
form~\( h_\varphi(\xi_1, \dots, \xi_\varkappa \vert \lambda_1+i\eta,\dots,
\lambda_\varkappa+i\eta) \) defined in~\eqref{eq:quad_form} has~$\varkappa$ negative squares
for some small values of~$\eta>0$.
\end{prop*}
\begin{proof}
Since
\begin{subequations}\label{eq:ident_1tz_1t_2}
\begin{align}
\label{eq:ident_1tz}
\frac{1}{t-z} &= \frac{z+t-z}{t(t-z)} = \frac{z}{t(t-z)}+\frac{1}{t}
\quad \text{and}\\
\label{eq:ident_1t_2}
\frac{1}{t}-\frac{t}{1+t^2} &= \frac{1+t^2 -t^2}{t(1+t^2)} = \frac{1}{t(1+t^2)},
\end{align}
\end{subequations}
from the expression~\eqref{eq:Phi_N} we obtain
\begin{equation}\label{eq:ident_phi_int}
\begin{aligned}
\varphi(z)={} & b + \frac az
+ \frac 1z \int_{-\infty}^\infty\left(\frac{1}{t-z}-\frac{t}{1+t^2}\right)d\sigma(t)
\\
={}& b + \frac az
+ \int_{-\infty}^\infty\left(\frac{1}{t-z}+\frac{1}{z(1+t^2)}\right)\frac{d\sigma(t)}t
= \sum_{n=0}^{\varkappa} \varphi_n(z),
\end{aligned}
\end{equation}
where
\[
\varphi_n(z)
= \int_{U_n}\frac{1}{t-z}\cdot\frac{d\sigma(t)}{t},\quad n=1,\dots,\varkappa,
\]
are the terms dominant on the intervals~$U_n$, and
\[
\varphi_0(z)
\colonequals b + \frac {\widetilde a}z
+ \frac 1z \int_{U_0}\left(\frac{1}{t-z}-\frac{t}{1+t^2}\right)d\sigma(t)
\quad\text{with}\quad
\widetilde a\colonequals a + \sum_{n=1}^{\varkappa} \int_{U_n}\frac{d\sigma(t)}{t+t^3}
\]
contains the remainder term. Let us additionally put $z_n\colonequals\lambda_n+i\eta$.
On the one hand, each function~$\varphi_n$ is holomorphic outside~$U_n$; therefore,
when~$m,k\ne n$ we have that the limits
\begin{equation}\label{eq:lim_boundary}
\lim_{\eta\to 0}\frac{\varphi_n(z_m) - \varphi_n(\overline z_k)}{z_m-\overline z_k}
=
\begin{cases}
\dfrac{\varphi_n(\lambda_m) - \varphi_n(\lambda_k)}{\lambda_m-\lambda_k},
&\text{if } k\ne m,\\
\varphi'_n(\lambda_m),
&\text{if } k=m\\
\end{cases}
\end{equation}
are finite as~$n=0,\dots,\varkappa$. On the other hand, with the notation
$\widetilde\sigma(t)=
\int_{-\lambda_\varkappa-\delta}^t|s|^{-1}d\sigma(s)$, that is
$\widetilde\sigma(t)=
-\int_{-\lambda_\varkappa-\delta}^ts^{-1}d\sigma(s)$ when~$t<0$, for~$n\ne 0$ we have
\[
\begin{aligned}
\rho^2_n(\eta)\colonequals{}&
-\frac{\varphi_n(z_n) - \varphi_n(\overline z_n)}{z_n-\overline z_n}
={}\int_{U_n}\frac{t - \overline z_n - t + z_n}
{(z_n-\overline z_n)(t-z_n)(t-\overline z_n)}d\widetilde\sigma(t)
={}\int_{U_n}\frac{d\widetilde\sigma(t)}{|t-z_n|^2}\\
\geqslant{}&\int_{\lambda_n-\eta}^{\lambda_n+\eta}\frac{d\widetilde\sigma(t)}{(t-\lambda_n)^2+\eta^2}
\geqslant\int_{\lambda_n-\eta}^{\lambda_n+\eta}\frac{d\widetilde\sigma(t)}{2\eta^2}
=
\frac 1\eta \cdot
\frac {\widetilde\sigma (\lambda_n+\eta)-\widetilde\sigma (\lambda_n-\eta)}{2\eta}.
\end{aligned}
\]
Consequently, $\limsup_{\eta\to0+}\left(\eta\cdot\rho^2_n(\eta)\right)$ is positive
or~$+\infty$ because~$\lambda_n$ is a point of increase of~$\sigma(t)$. Moreover, we
fix~$\rho_n(\eta)>0$ for definiteness. In terms of big $O$ notation, there exists a
sequence of positive numbers~$\eta_1, \eta_2, \dots$ tending to zero such that
\begin{equation}\label{eq:lim_rho}
\frac 1{\rho_n(\eta_k)} = O\left(\sqrt{\eta_k}\right)
\quad\text{as}\quad k\to +\infty
\quad\text{and}\quad n=1,\dots,\varkappa.
\end{equation}
According to~\eqref{eq:lim_boundary}, we additionally have
\begin{equation} \label{eq:phi_n_n}
-\frac{\varphi(z_n) - \varphi(\overline z_n)}{z_n-\overline z_n}=
-\frac{\varphi_n(z_n) - \varphi_n(\overline z_n)}{z_n-\overline z_n}
-\sum_{m \ne n}\frac{\varphi_m(z_n) - \varphi_m(\overline z_n)}{z_n-\overline z_n}
=\rho^2_n(\eta)+O(1)
\end{equation}
when~$\eta$ is assumed to be small. Furthermore,
\[
-\frac{\varphi_n(z_n) - \varphi_n(\overline z_m)}{z_n-\overline z_m}
=\int_{U_n}\frac{t - \overline z_m - t + z_n}
{(z_n-\overline z_m)(t-z_n)(t-\overline z_m)}d\widetilde\sigma(t)
=\int_{U_n} \frac{d\widetilde\sigma(t)}{(t-z_n)(t-\overline z_m)},
\]
which implies (with the help of the elementary inequality
$2\alpha\beta\leqslant \frac{\alpha^2}{c} + c\beta^2$ valid for any positive numbers)
\begin{equation}\label{eq:bound_mixed}
\begin{aligned}
\left|\frac{\varphi_n(z_n) - \varphi_n(\overline z_m)}{z_n-\overline z_m}\right|
&\leqslant \int_{U_n} \frac{d\widetilde\sigma(t)}{|t-z_n|\cdot|t-\overline z_m|}
\leqslant \int_{U_n} \frac{d\widetilde\sigma(t)}{2\rho_n(\eta)|t-z_n|^2}
+ \int_{U_n} \frac{\rho_n(\eta)\,d\widetilde\sigma(t)}{2|t-z_m|^2}
\\
&= \frac {1}{2\rho_n(\eta)}\rho^2_n(\eta)
+ \frac {\rho_n(\eta)}{2} \int_{U_n} \frac{d\widetilde\sigma(t)}{|t-z_m|^2}
\leqslant C(n,m,\delta)\rho_n(\eta),
\end{aligned}
\end{equation}
because the distance between~$U_n$ and~$z_m$ is more than~$\delta$. The
factor~$C(n,m,\delta)>0$ in~\eqref{eq:bound_mixed} is independent of~$\eta$. The finiteness
of~\eqref{eq:lim_boundary} gives
\[
\frac{\varphi(z_n) - \varphi(\overline z_m)}
{z_n-\overline z_m}
=
\frac{\varphi_n(z_n) - \varphi_n(\overline z_m)}
{z_n-\overline z_m}
+ \frac{\varphi_m(z_n) - \varphi_m(\overline z_m)}
{z_n-\overline z_m}
+ O(1)\quad\text{as}\quad\eta\to 0+.
\]
This can be combined with the estimate~\eqref{eq:bound_mixed}, thus giving us for
small~$\eta$
\begin{equation} \label{eq:phi_n_m_est}
\left|\frac{\varphi(z_n) - \varphi(\overline z_m)}
{(z_n-\overline z_m)\rho_n(\eta)\rho_m(\eta)}\right|
\leqslant
\frac{C(n,m,\delta)}{\rho_m(\eta)}+\frac{C(m,n,\delta)}{\rho_n(\eta)}
+O\left(\frac{1}{\rho_n(\eta)\rho_m(\eta)}\right).
\end{equation}
The relations~\eqref{eq:phi_n_n} and~\eqref{eq:phi_n_m_est} allow us to make the final step
in the proof. The substitution $\xi_n \mapsto \zeta_n/\rho_n(\eta)$ gives us
\begin{gather}\label{eq:h_is_neg_def}
h_\varphi\left(\frac{\zeta_1}{\rho_1(\eta)},\dots,\frac{\zeta_\varkappa}{\rho_\varkappa(\eta)}
\Big\vert z_1,\dots,z_\varkappa\right)
= \!\!\sum_{n,m=1}^{\varkappa} \!\!
\frac{\varphi(z_n) - \varphi(\overline z_m)} {z_n-\overline z_m}
\cdot \frac{\zeta_n\overline\zeta_m}{\rho_n(\eta)\rho_m(\eta)}
= R(\eta) -\sum_{n=1}^{\varkappa} |\zeta_n|^2,
\\
\label{eq:h_is_neg_def_R}
\text{where}\quad
\big|R(\eta)\big| = \sum_{n=1}^{\varkappa} \Bigg(
|\zeta_n|^2O\bigg(\frac 1{\rho_n^2(\eta)}\bigg)
+ \sum_{m\ne n}
\zeta_n\overline\zeta_m
O\bigg(\frac 1{\rho_m(\eta)} + \frac 1{\rho_n(\eta)}\bigg)
\Bigg)
.
\end{gather}
According to~\eqref{eq:lim_rho}, in each neighbourhood of zero we can
choose~$\eta\in\big\{\eta_k\big\}_{k=1}^\infty$, such that the inequality
\( \frac 1{\rho_n(\eta)} \leqslant M_n\sqrt{\eta} \) holds true for a fixed number~$M_n>0$
dependent only on~$\varphi$ and~$n$. For such choice of~$\eta$, the
estimate~\eqref{eq:h_is_neg_def_R} implies
$\big|R(\eta)\big|\leqslant M\sqrt{\eta}\cdot\sum_{n=1}^{\varkappa} |\zeta_n|^2$ with some
fixed~$M$. Therefore, the sign of the Hermitian form~\eqref{eq:h_is_neg_def} will be
determined by the last term~$-\sum_{n=1}^{\varkappa} |\zeta_n|^2$ alone for every set of
complex numbers~$\{\zeta_1,\dots,\zeta_{\varkappa}\}$ as soon
as~$\eta\in\big\{\eta_k\big\}_{k=1}^\infty$ is small enough.
\end{proof}
\begin{prop*}\label{prop:2}
Under the conditions of Proposition~\ref{prop:1} assume that $\Phi(z)$ is regular in the
interval~$(-\varepsilon,0)$ and $0<\Phi(0-)\leqslant\infty$. Then the Hermitian form
\( h_\varphi(\xi_0, \dots, \xi_\varkappa \vert z_0, \dots, z_\varkappa) \),
where~$z_m=\lambda_m+i\eta$ for $m=1,\dots,\varkappa$ and $z_0=-\sqrt\eta+i\mu$, is negative
definite when the numbers~$\eta>0$ and~$\mu>0$ are chosen appropriately.
\end{prop*}
\begin{proof}
Split the function~$\varphi(z)$ into two parts $\psi_0(z)$ and~$\psi_1(z)$ such that
$\varphi(z)=\psi_0(z)+\psi_1(z)$ and
\[
\begin{aligned}
\psi_1(z) &\colonequals
b+\frac 1z
\int_{\mathbb R\setminus(-\varepsilon,\varepsilon)}\left(\frac{1}{t-z}-\frac{t}{1+t^2}
-\frac{1}{t+t^3}\right)d\sigma(t).
\end{aligned}
\]
The integral here is analytic for~$|z|<\varepsilon$ and vanishes at the origin
(see~\eqref{eq:ident_1t_2}). The function~$\psi_1(z)$ therefore is also analytic
for~$|z|<\varepsilon$. The part~$\psi_0(z)$ has the form
\begin{align*}
\psi_0(z) \colonequals{}& \frac az
+ \frac 1z \int_{(-\varepsilon,\varepsilon)}\left(\frac{1}{t-z} - \frac{t}{1+t^2}\right)d\sigma(t)
+ \frac 1z \int_{\mathbb R\setminus(-\varepsilon,\varepsilon)}\frac{d\sigma(t)}{t+t^3}\\
={}& \frac Az
+ \frac 1z \int_0^\varepsilon\frac{d\sigma(t)}{t-z},
\quad\text{where we put}\quad A\colonequals
a + \int_{\mathbb R\setminus(-\varepsilon,\varepsilon)}\frac{d\sigma(t)}{t+t^3}
- \int_0^\varepsilon\frac{td\sigma(t)}{1+t^2},
\end{align*}
i.e. $A$ is a finite real constant. The integral over~$(-\varepsilon,0)$ is zero in the
representation of~$\psi_0$, because the function~$\Phi(z)$ is regular in this interval, and
thus~$\sigma(t)$ is constant for $-\varepsilon<t<0$.
First assume that $x$ varies on $(-\varepsilon,0)$ close enough to~$0$, so that $\Phi(x)>3M$
with some fixed~$M>0$. On the one hand, one of the Cauchy-Riemann equations and the
condition $\Phi'(x)\geqslant 0$ (see Remark~\ref{rem:defN}) imply
\begin{equation*}
\frac{\partial\Im\varphi(x+iy)}{\partial y}\bigg|_{y=0}
= \frac{d}{dx}\varphi(x)
= \frac{d}{dx}\frac{\Phi(x)}{x}
= \frac{\Phi'(x)x - \Phi(x)}{x^2}
< -\frac{3M}{x^2}.
\end{equation*}
Given~$x$, we can choose~$\mu\in(0,-x)$ such that
\begin{equation}\label{eq:choice_of_mu}
\left|\frac{\partial\Im\varphi(x+iy)}{\partial y}\bigg|_{y=0} -
\frac{\varphi(x+i\mu)-\varphi(x-i\mu)}{2i\mu}\right|\leqslant
\frac{M}{x^2}
\end{equation}
relying on the fact that~$\varphi(x+iy)$ is smooth for real~$y$. The last two inequalities
together imply that
\[
\frac{\varphi(z_0)-\varphi(\overline z_0)}{z_0-\overline z_0}
=\frac{\varphi(x+i\mu)-\varphi(x-i\mu)}{(x+i\mu)-(x-i\mu)}
< -\frac{3M}{x^2} + \frac{M}{x^2}
= -\frac{2M}{x^2}
\]
for $z_0=x+i\mu$. Therefore,
\begin{equation}\label{eq:phi_prime_at_zero}
\rho_0^2(x^2)\colonequals
-\frac{\psi_0(z_0)-\psi_0(\overline z_0)}{z_0-\overline z_0}
=-\frac{\varphi(z_0)-\varphi(\overline z_0)}{z_0-\overline z_0}
+\frac{\psi_1(z_0)-\psi_1(\overline z_0)}{z_0-\overline z_0}
\geqslant \frac{M}{x^2}
\end{equation}
for small enough~$|x|$ on account of the smoothness of~$\psi_1(z)$. We
assume~$\rho_0(x^2)>0$ for definiteness.
On the other hand, the definition of~$\psi_0(z)$ implies that
\begin{equation}\label{eq:rho0_in_terms_psi0}
\begin{aligned}
\frac{\psi_0(z_0) - \psi_0(\overline z_m)}{z_0 - \overline z_m}
={}& \frac{A/z_0-A/\overline z_m}{z_0 - \overline z_m}
+ \frac 1{z_0\overline z_m(z_0-\overline z_m)}
\int_0^\varepsilon\left(
\frac{\overline z_m}{t-z_0}-\frac{z_0}{t-\overline z_m}
\right)d\sigma(t)\\
={}&
\frac 1{z_0\overline z_m}\left(-A
+ \int_0^\varepsilon
\frac{\overline z_m(t-\overline z_m)- z_0(t-z_0)}
{(t-\overline z_m)(t-z_0)(z_0-\overline z_m)}d\sigma(t)
\right)\\
={}&
-\frac 1{z_0\overline z_m}\left(A
+ \int_0^\varepsilon \frac{t - z_0 -\overline z_m}
{(t-\overline z_m)(t-z_0)}d\sigma(t)\right),
\quad
m=0,\dots,\varkappa.
\end{aligned}
\end{equation}
The inequalities~$\Big|\frac{-z_0}{t-z_0}\Big|\leqslant 1$ and,
hence,~$\Big|\frac{-z_0}{(t-\overline z_m)(t-z_0)}\Big|\leqslant \frac{1}{|z_m|}$ are valid for
all~$t\geqslant 0$. Let us apply the latter to estimating the absolute value of the expression
\eqref{eq:rho0_in_terms_psi0}:
\begin{equation}\label{eq:finite_difference_psi0}
\begin{aligned}
\left|\frac{\psi_0(z_0) - \psi_0(\overline z_m)}{z_0 - \overline z_m}\right|
\leqslant{}&
\frac 1{|z_0z_m|}\left(|A|
+ \int_0^\varepsilon
\left|\frac{- z_0}{(t-\overline z_m)(t-z_0)}
+ \frac{1}{(t-z_0)}
\right|d\sigma(t)
\right)\\
\leqslant{}&
\frac {|A|}{|z_0z_m|}
+ \frac 1{|z_0z_m^2|} \int_0^\varepsilon d\sigma(t)
+ \frac 1{|z_0z_m|}\int_0^\varepsilon\frac {d\sigma(t)}{|t-z_0|}.
\end{aligned}
\end{equation}
In particular, putting~$m=0$ in~\eqref{eq:finite_difference_psi0} gives us
\begin{equation}\label{eq:est_rho_0}
\rho_0^2(x^2)=
\left|\frac{\psi_0(z_0) - \psi_0(\overline z_0)}{z_0 - \overline z_0}\right|
\leqslant
\frac 2{|z_0^3|}\int_0^\varepsilon d\sigma(t) + \frac{|A|}{|z_0^2|}
\leqslant
2\frac {\sigma(\varepsilon-)-\sigma(0)}{|x|^3} + \frac{|A|}{x^2},
\end{equation}
which complements
\begin{equation}\label{eq:est_psi_0_0}
\left|\frac{\varphi(z_0) - \varphi(\overline z_0)}{z_0 - \overline z_0}
+
\rho_0^2(x^2)
\right|
=
\left|
\frac{\psi_1(z_0) - \psi_1(\overline z_0)}{z_0 - \overline z_0}
\right|
= O\left(1\right)
\quad\text{as}\quad x\to 0,
\end{equation}
where~$O\left(1\right)$ on the right-hand side does not depend on~$\mu\in(0,\varepsilon)$. Now recall
that~$\Re z_\varkappa=\lambda_\varkappa<\dots<\Re z_1=\lambda_1<-\varepsilon$. Since
$|t-z_0| = t-x +\mu < t-2x$ provided that~$t\geqslant 0$ and~$\mu<|x|$,
from~\eqref{eq:finite_difference_psi0} we obtain
\begin{equation}\label{eq:est_psi_0_n}
\begin{aligned}
\left|\frac{\psi_0(z_0) - \psi_0(\overline z_m)}{z_0 - \overline z_m}\right|
&\leqslant
\frac 1{|z_0z_m^2|} \int_0^\varepsilon d\sigma(t)
+ \frac {|A|}{|z_0z_m|}
+ \frac 1{|z_0z_m|}\int_0^\varepsilon\frac {|t-z_0|}{|t-z_0|^2}d\sigma(t)
\\
&\leqslant
\frac 1{|z_0z_m^2|} \int_0^\varepsilon d\sigma(t)
+ \frac {2|A|}{|z_0z_m|}
+ \frac{|z_0|}{|z_m|}\left(
\frac {A}{|z_0|^2}
+ \frac 1{|z_0|^2}\int_0^\varepsilon\frac {t-2x}{|t-z_0|^2}d\sigma(t)\right)\\
&\xlongequal{\eqref{eq:rho0_in_terms_psi0}}
\frac 1{|z_0z_m^2|} \int_0^\varepsilon d\sigma(t)
+ \frac {2|A|}{|z_0z_m|}
+ \frac{|z_0|}{|z_m|}\left(
-\frac{\psi_0(z_0) - \psi_0(\overline z_0)}{z_0 - \overline z_0}\right)\\
&\leqslant
\frac {\sigma(\varepsilon-)-\sigma(0)}{|x|\lambda_m^2}
+ \frac {2|A|}{|x\lambda_m|}
+ \frac{2|x|}{|\lambda_m|}\rho_0^2(x^2)
=O\left(\frac 1x\right) + O\left(x\rho_0^2(x^2)\right)
,
\end{aligned}
\end{equation}
where~$x$ tends to zero and~$m=1,\dots,\varkappa$.
To implement the same technique as in the proof of Proposition~\ref{prop:1} it is enough to
put~$x\colonequals-\sqrt\eta$ and to study the order of summands in the
form~\(h_\varphi(\xi_0, \dots, \xi_\varkappa \vert z_0, \dots, z_\varkappa)\). In addition
to~$x>-\varepsilon$, we suppose that~$x>-\delta$, where the positive number~$\delta$ is
defined in~\eqref{eq:def_delta}; consequently~$\eta<\min\{\varepsilon^2,\delta^2\}$. We
regard~$\eta$ as tending to zero, so the conditions~\eqref{eq:phi_prime_at_zero}
and~\eqref{eq:est_rho_0}--\eqref{eq:est_psi_0_0} imply that
\begin{gather}\label{eq:est_rho_lu_0}
\frac{1}{\rho_0(\eta)}=O\left(\sqrt\eta\right),\quad
\rho_0(\eta) = O\left(\eta^{-\frac34}\right),
\quad\text{and thus}\\[2pt]
\label{eq:est_phi_0_0}
\frac{\varphi(z_0) - \varphi(\overline z_0)}{(z_0 - \overline z_0)\rho_0^2(\eta)}
=-1 + O\left(\eta\right).
\end{gather}
Now we make use of the same notation as in the proof of Proposition~\ref{prop:1}. If
$m,n\ne 0$ and $m\ne n$, then the estimates~\eqref{eq:phi_n_n} and~\eqref{eq:phi_n_m_est}
concerning $\varphi(z_1)$, \dots, $\varphi(z_\varkappa)$ are valid. Since the distance
between~$U_n$ and~$z_0$ is more than~$\delta$, the inequality~\eqref{eq:bound_mixed} is
satisfied on condition that~$m=0\ne n$. Then~\eqref{eq:bound_mixed}
and~\eqref{eq:est_psi_0_n} give us the following:
\begin{equation}\label{eq:est_phi_0_n}
\begin{aligned}
\left|
\frac{\varphi(z_0) - \varphi(\overline z_n)}
{z_0-\overline z_n}
\right|
\leqslant{}&
\left|
\frac{\varphi_n(z_0) - \varphi_n(\overline z_n)}
{z_0-\overline z_n}
\right|
+
\left|
\frac{\psi_0(z_0) - \psi_0(\overline z_n)}
{z_0-\overline z_n}
\right|\\
&+
\left|
\frac{\big(\varphi(z_0)-\varphi_n(z_0)-\psi_0(z_0)\big)
- \big(\varphi(\overline z_n)-\varphi_n(\overline z_n)-\psi_0(\overline z_n)\big)}
{z_0-\overline z_n}
\right|
\\
\overset{\eqref{eq:bound_mixed}}\leqslant{}&
C(n,0,\delta)\,\rho_n(\eta)+
\left|\frac{\psi_0(z_0) - \psi_0(\overline z_n)}
{z_0 - \overline z_n}\right|
+O\left(1\right)\\
\xlongequal{\eqref{eq:est_psi_0_n}}{}&
C(n,0,\delta)\,\rho_n(\eta)
+O\Big(\eta^{-\frac 12}\Big)
+O\left(\sqrt{\eta}\rho_0^2(\eta)\right)
+O\left(1\right)
\end{aligned}
\end{equation}
as~$\eta\to 0+$. Assume that~$\eta$ is taken from the
sequence~$\big\{\eta_k\big\}_{k=1}^\infty$ corresponding to~\eqref{eq:lim_rho} and that the
choice of~$\mu\in(0,\sqrt{\eta})$ satisfies the condition~\eqref{eq:choice_of_mu}.
Then
\[
\begin{aligned}
\left|\frac{\varphi(z_n) - \varphi(\overline z_0)}
{(z_n-\overline z_0)\rho_n(\eta)\rho_0(\eta)}\right|
={}&
\left|
\frac{\varphi(z_0) - \varphi(\overline z_n)}
{(z_0-\overline z_n)\rho_0(\eta)\rho_n(\eta)}
\right|
\\
\xlongequal{\eqref{eq:est_phi_0_n}\text{ and }\eqref{eq:est_rho_lu_0}}{}&
O\left(\sqrt\eta\right)
+\left(O\Big(\eta^{-\frac12+\frac12}\Big)
+O\Big(\eta^{\frac12-\frac34}\Big)
+O\Big(\eta^{\frac12}\Big)\right)\frac 1{\rho_n(\eta)}\\
\xlongequal{\eqref{eq:lim_rho}}{}&
O\left(\sqrt\eta\right)
+O\Big(\eta^{-\frac14}\Big)O\left(\sqrt\eta\right)
=O\left(\sqrt[4]{\eta}\right).
\end{aligned}
\]
This estimate together with~\eqref{eq:est_phi_0_0},~\eqref{eq:lim_rho},~\eqref{eq:phi_n_n}
and~\eqref{eq:phi_n_m_est} yields that
\[
\begin{aligned}
h\left(\frac{\zeta_0}{\rho_0(\eta)},\dots,\frac{\zeta_\varkappa}{\rho_\varkappa(\eta)}
\Big\vert z_0,\dots,z_\varkappa\right)
&= \!\sum_{n,m=0}^{\varkappa}
\frac{\varphi(z_n) - \varphi(\overline z_m)}
{(z_n-\overline z_m)\rho_n(\eta)\rho_m(\eta)}
\,\zeta_n\overline\zeta_m\\
&= - \sum_{m=0}^{\varkappa}|\zeta_m|^2 +O\left(\sqrt[4]{\eta}\right)
\sum_{n,m=0}^{\varkappa} \zeta_m\overline\zeta_n,
\end{aligned}
\]
where $O\left(\sqrt[4]{\eta}\right)$ does not depend on~$\zeta_0$, \dots, $\zeta_\varkappa$.
That is, this Hermitian form is negative definite provided that the value
of~$\eta\in\big\{\eta_k\big\}_{k=1}^\infty$ is small enough.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:1}]
Suppose that~$\Phi(z)=z\varphi(z)$ can be represented as in~\eqref{eq:phi_form_2_with_cond}.
Then for specially chosen numbers~$z_1$, \dots, $z_\varkappa\notin\mathbb R$ the Hermitian
form $h_\varphi(\xi_1, \dots, \xi_\varkappa \vert z_1,\dots, z_\varkappa)$ has $\varkappa$
negative squares by Proposition~\ref{prop:1}. Let us show that this is the greatest possible
number of negative squares in the form
$h[\varphi]\colonequals h_{\varphi}(\xi_1, \dots, \xi_k \vert z_1,\dots, z_k)$.
Denote $\widetilde\sigma(t)=\int_{0}^ts^{-1}d\sigma(s)$. Since the integral
\[
0 \leqslant \int_0^\infty\frac{d\sigma(t)}{t(1+t^2)}
=\int_0^\infty\frac{d\widetilde\sigma(t)}{1+t^2}
\overset{\eqref{eq:phi_form_2_cond}}{{}\leqslant{}}
- a - \sum_{i=1}^{\varkappa}\frac{\sigma_i}{\lambda_i}<\infty
\]
is finite, we can split the last term of~\eqref{eq:phi_form_2} divided by~$z$ into two parts
to obtain (cf.~\eqref{eq:ident_phi_int})
\[
\varphi(z)
= b + \frac az + \sum_{i=1}^{\varkappa}
\left(
\frac{\sigma_i/\lambda_i}{\lambda_i-z}+\frac{\sigma_i/\lambda_i}{z}
\right)
+ \int_0^\infty\frac{d\sigma(t)}{t(t-z)}
+ \frac 1z \int_0^\infty\frac{d\sigma(t)}{t(1+t^2)},
\]
that is~$\varphi(z)=\widetilde\varphi(z)+\varphi_0(z)$, where
\[
\widetilde\varphi(z)\colonequals
\sum_{i=1}^{\varkappa}\frac{\sigma_i/\lambda_i}{\lambda_i-z}
\quad\text{and}\quad
\varphi_0(z) \colonequals
b
+ \frac {\Phi(0-)}z
+ \int_0^\infty\frac{d\widetilde\sigma(t)}{t-z}.
\]
The functions $\varphi_0(z)$ and $-\widetilde\varphi(z)$ have the form~\eqref{eq:Phi_N},
i.e. belong to the class~$\mathcal N$. For $\mathcal N$-functions and any set of
numbers~$\{z_1, \dots, z_k\}$ the Hermitian form~\eqref{eq:quad_form} is nonnegative
definite. That is, the conditions
\begin{equation*}
h[\varphi_0]\colonequals
h_{\varphi_0}(\xi_1, \dots, \xi_k \vert z_1,\dots, z_k)\geqslant 0
\quad\text{and}\quad
h[\widetilde\varphi]\colonequals
h_{\widetilde\varphi}(\xi_1, \dots, \xi_k \vert z_1,\dots, z_k)\leqslant 0
\end{equation*}
hold true. Moreover, since~$\widetilde\varphi(z)$ is a rational function with $\varkappa$
poles, which is bounded at infinity, the rank of~$h[\widetilde\varphi]$ can be at
most~$\varkappa$ (see Theorem~3.3.3 and its proof in~\cite[pp.~105--108]{Akhiezer65} or
Theorem~1 in~\cite[p.~34]{Donoghue}). Therefore, the
form~$h[\varphi]= h_{\varphi}(\xi_1, \dots, \xi_k \vert z_1,\dots, z_k)$ has at
most~$\varkappa$ negative squares as a sum of~$h[\varphi_0]$ and~$h[\widetilde\varphi]$.
(This becomes evident after reducing the Hermitian form~$h[\widetilde\varphi]$ to
principal axes, since~$h[\varphi_0]$ is nonnegative definite irrespective of the coordinates.)
Suppose that $\Phi(z)$ can be expressed as in~\eqref{eq:phi_form_1_with_cond}, and let
$\varepsilon_0>0$ be such that~$\Phi(-\varepsilon)>0$ provided that
$0<\varepsilon<\varepsilon_0$. In particular, it implies $\max_i\lambda_i<-\varepsilon_0$
since $\Phi(\max_i\lambda_i+)<0$. Proposition~\ref{prop:2} provides a set of
points~$\{z_0,\dots,z_{\varkappa-1}\}$ such that the corresponding Hermitian
form~$h[\varphi]$ has~$\varkappa$ negative squares. Let us prove that~$h[\varphi]$ has at
most~$\varkappa$ negative squares. Consider the function
\[
\varphi_\varepsilon(z)
\colonequals
\frac{\Phi(z)-\Phi(-\varepsilon)}{z+\varepsilon} + \frac{\Phi(-\varepsilon)}{z+\varepsilon}
\xlongequal{\eqref{eq:phi_form_1}}
b + \frac{\Phi(-\varepsilon)}{z+\varepsilon}
+\sum_{i=1}^{\varkappa-1}\frac{\sigma_i / (\lambda_i+\varepsilon)}{\lambda_i-z}
+ \int_0^\infty\frac{1}{t-z}\cdot\frac{d\sigma(t)}{t+\varepsilon}.
\]
Denote~$\Phi_\varepsilon(z)\colonequals z\varphi_\varepsilon(z)$ and
$A_i\colonequals\dfrac{\sigma_i\lambda_i}{\lambda_i+\varepsilon}$, then
\begin{align*}
\Phi_\varepsilon(z)
={}& bz
+ \Phi(-\varepsilon) - \frac{\varepsilon\Phi(-\varepsilon)}{z+\varepsilon}
+ \sum_{i=1}^{\varkappa-1}\frac{A_iz / \lambda_i}{\lambda_i-z}
+ \int_0^\infty\frac{z}{t-z}\cdot\frac{d\sigma(t)}{t+\varepsilon}\\
\xlongequal{\eqref{eq:ident_1tz}}{}& bz
+ \Phi(-\varepsilon) - \frac{\varepsilon\Phi(-\varepsilon)}{z+\varepsilon}
+ \sum_{i=1}^{\varkappa-1}\frac{A_i}{\lambda_i-z} - \sum_{i=1}^{\varkappa-1}\frac{A_i}{\lambda_i}
+ \int_0^\infty\left(\frac{1}{t-z}-\frac 1t\right)\frac{t\,d\sigma(t)}{t+\varepsilon}\\
\xlongequal{\eqref{eq:ident_1t_2}}{}& bz
+ \left[
\Phi(-\varepsilon) - \sum_{i=1}^{\varkappa-1}\frac{A_i}{\lambda_i}
- \int_0^\infty\frac {d\sigma(t)}{(t+\varepsilon)(1+t^2)}
\right]\\
&- \frac{\varepsilon\Phi(-\varepsilon)}{z+\varepsilon}
+ \sum_{i=1}^{\varkappa-1}\frac{A_i}{\lambda_i-z}
+ \int_0^\infty\left(\frac{1}{t-z}-\frac t{1+t^2}\right)\frac{t\,d\sigma(t)}{t+\varepsilon},
\end{align*}
i.e. $\Phi_\varepsilon\in\mathcal N$. Moreover, $\Phi_\varepsilon(z)$ is an increasing
function when $-\varepsilon<z<0$ (see Remark~\ref{rem:defN}) which implies
\[
\Phi_\varepsilon(0-)
= \lim_{z\to0-}\int_0^\infty\frac{z}{t-z}\cdot\frac{d\sigma(t)}{t+\varepsilon}\leqslant 0,
\]
since the integrand is negative. That is, the function $\Phi_\varepsilon(z)$ has the
form~\eqref{eq:phi_form_2_with_cond}. As it is shown above, we have
$\varphi_\varepsilon\in\mathcal N_\varkappa^+$ for each~$\varepsilon$ between~$0$
and~$\varepsilon_0$.
Given a fixed set of points~$\{z_1, \dots, z_k\}$ there exists some positive
number~$\varepsilon_1<\varepsilon_0$, such that for all $0<\varepsilon<\varepsilon_1$ the
form~$h[\varphi_\varepsilon]\colonequals h_{\varphi_\varepsilon}(\xi_1, \dots, \xi_k \vert
z_1,\dots, z_k)$ has at least the same number of negative squares as the form~$h[\varphi]$.
(Indeed: the characteristic numbers of~$h[\varphi]$ depend continuously on its
coefficients.) Suppose that the Hermitian form~$h[\varphi]$ has more than~$\varkappa$
negative squares. Then~$h[\varphi_\varepsilon]$ must have more than~$\varkappa$ negative
squares as well, which is impossible. Thus, the form~$h[\varphi]$ has at most~$\varkappa$
negative squares.
Suppose that~$\varphi\in\mathcal N_\varkappa^+$. Then the function~$\Phi(z)=z\varphi(z)$ can
be represented as in~\eqref{eq:Phi_N} and the form~$h[\varphi]$ for any set of
numbers~$\{z_1, \dots, z_k\}$ has at most~$\varkappa$ negative squares (as stated in the
definition of~$\mathcal N_\varkappa^+$). By Proposition~\ref{prop:1}, the
function~$\sigma(t)$ appearing in~\eqref{eq:Phi_N} can have at most~$\varkappa$ negative
points of increase. These points are isolated, and therefore (see
Definition~\ref{def:poi_inc}) for negative~$t$ the function~$\sigma(t)$ is a step function
with at most~$\varkappa$ steps. That is, all negative singular points of~$\Phi(z)$ are
simple poles; they have negative residues since~$\Phi\in\mathcal N$, i.e. $\sigma_i>0$ for
all~$i$. Here we have two mutually exclusive options: either
$\Phi(0-)\leqslant0$, in which case $\Phi(z)$ has the form~\eqref{eq:phi_form_2_with_cond} corresponding to
some~$\varkappa_0\leqslant\varkappa$, or
$0<\Phi(0-)\leqslant\infty$, in which case $\Phi(z)$ has the form~\eqref{eq:phi_form_1_with_cond}
corresponding to some~$\varkappa_0\leqslant\varkappa+1$.
The sufficiency (first) part of the current proof shows
that~$\varphi\in\mathcal N_{\varkappa_0}^+$ in both cases. Since the
classes~$\mathcal N_{\varkappa_0}^+$ and~$\mathcal N_{\varkappa}^+$ are disjoint by
definition, we necessarily have~$\varkappa_0=\varkappa$.
\end{proof}
\section*{Acknowledgments}
This work appeared, inter alia, by virtue of my collaboration visits to Shanghai in 2014
(although they were devoted to other mathematical problems). I am grateful to Mikhail Tyaglov
for organizing these visits, to {\otherlanguage{ngerman}Technische Universit\"at Berlin} and
Shanghai Jiao Tong University for the financial support. I also thank Olga Holtz for her
encouragement and Victor Katsnelson for his attention.
\end{document}
|
\begin{document}
\title{Sufficient conditions for univalence and study of a class of meromorphic univalent functions}
\author{Bappaditya Bhowmik${}^{~\mathbf{*}}$}
\address{Bappaditya Bhowmik, Department of Mathematics,
Indian Institute of Technology Kharagpur, Kharagpur - 721302, India.}
\email{[email protected]}
\author{Firdoshi Parveen}
\address{Firdoshi Parveen, Department of Mathematics,
Indian Institute of Technology Kharagpur, Kharagpur - 721302, India.}
\email{[email protected]}
\subjclass[2010]{30C45, 30C55} \keywords{ Meromorphic functions, Univalent functions, Subordination,
Taylor coefficients}
\begin{abstract}
In this article we consider the class $\mathcal{A}(p)$ which consists of functions that
are meromorphic in the unit disc ${\mathbb D}$ having a simple pole at $z=p\in (0,1)$ with the normalization $f(0)=0=f'(0)-1$.
First we prove some sufficient conditions for univalence of such functions in ${\mathbb D}$. One of these conditions enables us to
consider the class $\mathcal{V}_{p}(\lambda)$, which consists of functions satisfying a certain differential inequality that forces univalence of
such functions. Next
we establish that $\mathcal{U}_{p}(\lambda)\subsetneq \mathcal{V}_{p}(\lambda)$, where $\mathcal{U}_{p}(\lambda)$ was introduced
and studied in \cite{BF-1}. Finally, we discuss some coefficient problems for $\mathcal{V}_{p}(\lambda)$ and end the article with a coefficient conjecture.
\end{abstract}
\thanks{}
\maketitle
\pagestyle{myheadings}
\markboth{B. Bhowmik and F. Parveen}{Sufficient conditions for univalence and study of a class of meromorphic univalent functions}
\section{Introduction and sufficient condition for univalence}
Let $\mathcal{M}$ be the set of meromorphic functions $F$ in $\Delta=\{\zeta\in {\mathbb C}:|\zeta|>1\}\cup\{\infty\}$ with the following expansion:
$$
F(\zeta)=\zeta+\sum_{n=0}^{\infty} b_{n}\zeta^{-n},\quad \zeta\in\Delta.
$$
This means that these functions have a simple pole at $\zeta=\infty$ with residue $1$.
Let $\mathcal{A}$ be the collection of all analytic functions in ${\mathbb D}$ with the normalization $f(0)=0=f'(0)-1 $.
In \cite{aksen}, Aksent\'{e}v proved a sufficient condition for a function $F\in \mathcal{M}$ to be univalent which we state now:
\begin{Thm}\label{TheoA}
If $F\in \mathcal{M}$ satisfies the inequality
$$
|F'(\zeta)-1|\leq1,\quad \zeta\in\Delta,
$$
then $F$ is univalent in $\Delta$.
\end{Thm}
This result motivated many authors to consider the classes
$\mathcal{U}(\lambda):=\{f\in \mathcal{A}: |U_f(z)|< \lambda\}, \lambda\in
(0,1]$ where $U_f(z):=(z/f(z))^2 f'(z)-1$ and this class has been studied
extensively in \cite{OP,OPW} and references therein. In \cite{BF-1}, we
studied a meromorphic analogue of the class $\mathcal{U}(\lambda)$, obtained by
introducing a nonzero simple pole for such functions in ${\mathbb D}$. More
precisely, we consider the class $\mathcal{A}(p)$ of all functions $f$ that
are holomorphic in ${\mathbb D}\setminus \{p\}$, $p\in(0,1)$ possessing a simple
pole at the point $z=p$ with nonzero residue $m$ and normalized by the
condition $f(0)=0=f'(0)-1 $. We define $\Sigma(p):=\{f\in\mathcal{A}(p): f
\mbox{ is one to one in} ~~{\mathbb D} \}$. Therefore, each $f\in \mathcal{A}(p)$ has
the Laurent series expansion of the following form
\begin{eqnarray} \label{fp4eq1}
f(z)=\frac{m}{z-p}+\sum\limits_{n=0}^\infty a_n z^n,~~z\in {\mathbb D}\setminus\{p\}.
\end{eqnarray}
In this context we proved a sufficient condition for a function $f\in
\mathcal{A}(p)$ to be univalent (see \cite[Theorem 1]{BF-1}), which we recall
now.
\begin{Thm}\label{TheoB}
Let $f\in \mathcal{A}(p)$. If $\left|U_f(z)\right|\leq\left((1-p)/(1+p)\right)^2$ for $z\in {\mathbb D}$,
then $f$ is univalent in ${\mathbb D}$.
\end{Thm}
Using Theorem~B, we constructed a subclass $\mathcal{U}_{p}(\lambda)$ of $\Sigma(p)$ which is defined as follows:
$$
\mathcal{U}_p(\lambda):=\left\{f\in \mathcal{A}(p):\left|U_{f}(z)\right|<\lambda \mu, z\in {\mathbb D}\right\}
$$
where $0<\leftarrowmbda \leq 1$ and $\mu=((1-p)/(1+p))^2$. In this note, we improve
the sufficient condition proved in Theorem~B by replacing the number $\mu=((1-p)/(1+p))^2$ with the
number $1$. We give a proof of this result below.
\begin{equation}gin{examp}in{thm}\leftarrowbel{fp4th1}
Let $f\in \mathcal{A}(p)$. If $\left|U_{f}(z)\right|<1$
holds for all $z\in {\mathbb D}$ then $f\in \Sigma(p)$.
\end{thm}
\begin{pf}
Let $\mathcal{M}_p:=\{ F\in \mathcal{M}: F(1/p)=0\}$ where $0<p<1$. Clearly,
$\mathcal{M}_p \subseteq \mathcal{M}$. For each $f\in\mathcal{A}(p)$ consider the
transformation $F(\zeta):=1/f(1/\zeta)$, $\zeta \in \Delta$. We claim that $F\in\mathcal{M}_p \subseteq \mathcal{M}$. Since $f$ has an
expansion of the form (\ref{fp4eq1}), therefore we have
\begin{eqnarray*}
F(\zeta)&=&1/f(1/\zeta)\\
&=& \left(m \zeta/(1-p\zeta)+\sum_{n=0}^{\infty}a_n\zeta^{-n}\right)^{-1}\\
&=&\zeta+ (a_1-p a_2-1)/p +\left(p(a_2-pa_3)+(a_1-pa_2)^2-(a_1-pa_2)\right)/\zeta p^2+\cdots .
\end{eqnarray*}
Here we see that
$F(1/p)=0, F(\infty)=\infty$ and $F'(\infty)=1$.
This proves that each $f\in \mathcal{A}(p)$ can be associated with the mapping $F\in\mathcal{M}_p $.
Using the change of variable ${\mathbb D} \ni z=1/\zeta$, the above association quickly yields
$$
F'(\zeta)-1=f'(1/\zeta)/(\zeta^{2}f^{2}(1/\zeta))-1=z^{2}f'(z)/f^{2}(z)-1=U_{f}(z).
$$
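To spell out the chain-rule step behind this display: since $F(\zeta)=1/f(1/\zeta)$, we have
$$
F'(\zeta)=-\frac{f'(1/\zeta)\cdot(-1/\zeta^{2})}{f(1/\zeta)^{2}}=\frac{f'(1/\zeta)}{\zeta^{2}f(1/\zeta)^{2}},
$$
and substituting $z=1/\zeta$ turns the right-hand side into $z^{2}f'(z)/f^{2}(z)$.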
Now since $\mathcal{M}_p \subseteq \mathcal{M}$, an application of Theorem~A shows that if a function $F\in\mathcal{M}_p $ satisfies
$|F'(\zeta)-1|\leq1$, $\zeta\in\Delta$, then $F$ is univalent in $\Delta$, i.e. the inequality $|U_{f}(z)|<1$ forces $f$ to be univalent in ${\mathbb D}$.
\end{pf}
In view of Theorem~\ref{fp4th1}, it is natural to consider a new subclass $\mathcal{V}_{p}(\lambda)$ of $\Sigma(p)$ defined as:
$$
\mathcal{V}_p(\lambda):=\left\{f\in \mathcal{A}(p):\left|U_{f}(z)\right|<\lambda, ~z\in {\mathbb D}\right\},\quad \mbox{for}~~ \lambda\in(0,1].
$$
We now claim that $\mathcal{U}_{p}(\lambda)\subsetneq \mathcal{V}_{p}(\lambda) \subsetneq \Sigma(p)$. To establish the first inclusion,
we note that since $\lambda \mu < \lambda$, we have $\mathcal{U}_{p}(\lambda)\subseteq \mathcal{V}_{p}(\lambda)$. Now consider the function
$$
k_p^{\lambda}(z):=\frac{-pz}{(z-p)(1-\lambda pz)}, z\in {\mathbb D}.
$$
It is easy to check that $ U_{k_p^{\lambda}}(z)=-\lambda z^2$, so that $| U_{k_p^{\lambda}}(z)|<\lambda$ in ${\mathbb D}$, whereas
$|U_{k_p^{\lambda}}(z)|<\lambda\mu$ fails for $\sqrt{\mu}\leq|z|<1$. This proves the first inclusion. Next we wish to establish the second inclusion of our claim. We see that by virtue of
Theorem~\ref{fp4th1}, $\mathcal{V}_{p}(\lambda)\subseteq \Sigma(p)$. Again considering the following two examples,
we see that $\mathcal{V}_{p}(\lambda)\subsetneq \Sigma(p)$ for $0<\lambda\leq 1$.\\
\textbf{Case 1:}\,($0<\lambda<1$). Take $a\in {\mathbb C}$ such that $\lambda<|a|<1$. Consider the functions $f_a$ defined by
$$
f_a(z)=\frac{z}{(z-p)(az-1/p)}, \quad z\in {\mathbb D}.
$$
It is easy to check that $f_a$ satisfies the normalizations $f_a(p)=\infty$ and $f_a(0)=0=f_a'(0)-1$. Also $f_a$ is univalent in ${\mathbb D}$ and $U_{f_a}(z)=-az^2$. Now as $|z|\rightarrow 1^{-}$, $|U_{f_a}(z)|\rightarrow |a| > \lambda$. Therefore $f_a\notin \mathcal{V}_{p}(\lambda)$. This shows that $\mathcal{V}_{p}(\lambda)$ is a proper subclass of $\Sigma(p)$ for $0<\lambda<1$. \\
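For completeness, here is the computation behind the identity $U_{f_a}(z)=-az^2$: writing $q(z):=z/f_a(z)=(z-p)(az-1/p)=1-(1/p+ap)z+az^{2}$ and using $z^{2}f_a'(z)/f_a^{2}(z)=q(z)-zq'(z)$, we get
$$
U_{f_a}(z)=q(z)-zq'(z)-1=az^{2}-2az^{2}=-az^{2};
$$
in particular $|U_{f_a}(z)|=|a|\,|z|^{2}<1$ in ${\mathbb D}$, so the univalence of $f_a$ also follows from Theorem~\ref{fp4th1}.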
\textbf{Case 2:}\,($\lambda=1$). It is well-known that the function
$$
g(z)=\frac{z-\frac{2p}{1+p^2}z^2}{(1-\frac{z}{p})(1-zp)}, z\in {\mathbb D},
$$
is in $\Sigma(p)$ (Compare \cite{BPW}). A little calculation shows that
$$
U_g(z)=\left(z(1-p^2)/(1+p^2)\right)^2\left(1-(2pz/(1+p^2))\right)^{-2}.
$$
Now $|U_g(z)|<1$ holds for all $|z|\leq R$ whenever $R< \frac{1+p^2}{1+2p-p^2}<1$, while $|U_g(z)|$ exceeds $1$ for real $z$ close to $1$. From this we can conclude that $g$ does not belong to the class $\mathcal{V}_{p}(\lambda)$ for $\lambda=1$, i.e. $\mathcal{V}_{p}:=\mathcal{V}_{p}(1) \subsetneq \Sigma(p)$.
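To make the last step explicit: for $|z|\leq R$ we have
$$
|U_g(z)|\leq \frac{R^{2}(1-p^{2})^{2}/(1+p^{2})^{2}}{\left(1-2pR/(1+p^{2})\right)^{2}},
$$
and the right-hand side is $<1$ exactly when $R(1-p^{2})+2pR<1+p^{2}$, that is, when $R<\frac{1+p^2}{1+2p-p^2}$. On the other hand, for real $r$,
$$
U_g(r)=\frac{r^{2}(1-p^{2})^{2}}{(1+p^{2}-2pr)^{2}}\longrightarrow\left(\frac{1+p}{1-p}\right)^{2}>1 \quad\mbox{as}\quad r\rightarrow 1^{-},
$$
so $|U_g|$ indeed exceeds $1$ somewhere in ${\mathbb D}$.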
\begin{rem}
It can be easily seen that, similar to the class $\mathcal{U}_{p}(\lambda)$, the class $\mathcal{V}_{p}(\lambda)$ is preserved under conjugation
and is not preserved under operations such as rotation, dilation, omitted-value transformation and the $n$-th root transformations.
\end{rem}
Let $f\in\mathcal{A}(p)$. We see that the function $z/f$ is analytic in ${\mathbb D}$ and non vanishing in ${\mathbb D}\setminus\{p\}$. Therefore it has a Taylor
expansion of the following form about the origin.
\begin{eqnarray}\label{fp4eq5}
\frac{z}{f(z)}=1+b_{1}z+b_{2}z^2+\cdots, ~~~z\in {\mathbb D}.
\end{eqnarray}
Now we prove some sufficient conditions for univalence of functions $f\in\mathcal{A}(p)$ which involve the second and higher order derivatives of $z/f$.
These are the contents of the next two theorems.
\begin{thm}\label{fp4th3}
Let $f\in\mathcal{A}(p)$ be such that $f/z$ is non-vanishing in ${\mathbb D} \setminus\{0\}$. If $|(z/f(z))''|\leq 2$ for $z\in {\mathbb D}$, then $f$ is univalent in ${\mathbb D}$.
This condition is only sufficient for univalence but not necessary.
\end{thm}
\begin{pf}
First we prove the univalence of $f$. Using the expansion (\ref{fp4eq5}), we have
$$
U_f(z)=-z(z/f)'+(z/f)-1=\sum_{n=2}^{\infty}(1-n)b_n z^n.
$$
We also note that $zU_f'(z)=-z^2(z/f)''$. Therefore $|(z/f)''|\leq 2$ yields $|zU_f'(z)|\leq 2|z|$. This implies that $zU_f'(z)\prec 2z$ where $\prec$ denotes usual
subordination. Now by a well-known result on subordination (compare \cite[p. 76, Theorem 3.1d.]{MM}), we get $U_f(z)\prec z$, i.e. $|U_f(z)|\leq |z|<1$.
This shows that $f$ is univalent in ${\mathbb D}$ by virtue of Theorem~\ref{fp4th1}.
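For completeness, the two identities used above can be verified directly by writing $q:=z/f$: since $f=z/q$ we have $z^{2}f'/f^{2}=q-zq'$, hence
$$
U_f(z)=q(z)-zq'(z)-1=-z(z/f)'+(z/f)-1,
$$
and differentiating this expression gives $U_f'(z)=-zq''(z)$, that is, $zU_f'(z)=-z^{2}(z/f)''$.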
In order to establish the second claim of the theorem, we consider the function
$$
h(z)= \frac {2pz}{(p-z)(2-pz(p+z))},\, z\in {\mathbb D}.
$$
Note that $h(0)=0=h'(0)-1$ and $h(p)=\infty$. Also since $|pz(p+z)|<2$, $h$ has no other poles in ${\mathbb D}$ except at $z=p$.
Consequently $h\in \mathcal{A}(p)$.
It is easy to check that $U_{h}(z)=-z^3$ and $(z/h)''= 3 z$. Hence $|U_{h}(z)|<1$ in ${\mathbb D}$, but $|(z/h)''|> 2$ for $2/3<|z|<1$.
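Indeed, expanding the factors gives
$$
\frac{z}{h(z)}=\frac{(p-z)\bigl(2-pz(p+z)\bigr)}{2p}=1-\frac{p^{3}+2}{2p}\,z+\frac{z^{3}}{2},
$$
so that $(z/h)''=3z$ and $U_h(z)=(z/h)-z(z/h)'-1=\frac{z^{3}}{2}-\frac{3z^{3}}{2}=-z^{3}$.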
This example shows that the boundedness condition in the statement of the Theorem is only sufficient but not necessary.
\end{pf}
The following theorem is also a univalence criterion described by a sharp inequality involving the
$n$-th order derivatives of $z/f$ (denoted by $(z/f)^n$), $n\geq 3$.
\begin{thm}\label{fp4th4}
Let $f\in \mathcal{A}(p)$ and $f(z)\neq 0$ for $z\in{\mathbb D}\setminus\{0\}$. If for $n\geq 3$,
\begin{eqnarray}\label{fp4eq8}
\sum_{k=0}^{n-3}\frac{k+1}{(k+2)!}|\alpha_k|+\frac{n-1}{n!}\left|\left(\frac{z}{f}\right)^n\right|\leq 1,~ z\in {\mathbb D},
\end{eqnarray}
where $\alpha_k=- (z/f)^{k+2}|_{z=0}$, then $f$ is univalent in ${\mathbb D}$. The result is sharp and equality holds in the above inequality for the function
$k_p(z)=-pz/\left((z-p)(1-pz)\right)$ for all $n\geq 3$ and for the functions
$$
f_n(z)= \frac {z}{1-\left(1/p+ p^{n-1}/(n-1)\right)z+ z^n/(n-1)}, \, z\in {\mathbb D},
$$
for each $n\geq 3$.
\end{thm}
\begin{pf}
Proceeding similarly to the proof of \cite[Theorem 1.1]{OP1}, the inequality (\ref{fp4eq8}) implies that $|U_f(z)|<1$,
which proves that $f$ is univalent in ${\mathbb D}$.
To complete the proof of remaining assertion of the theorem, we consider the univalent function $k_p(z)$, $z\in {\mathbb D}$ and compute
$$
(z/k_p(z))'=-(1/p+p)+2z,\quad (z/k_p(z))''=2 \quad \mbox{and} \quad (z/k_p(z))^n=0,\,n\geq3.
$$
Therefore we get $\alpha_0=-2$ and $\alpha_k=0$ for $k\geq 1$.
Taking account of the above computations, it can now be easily checked that the equality holds in the inequality (\ref{fp4eq8}).
Lastly, it can be proved (see the computation below) that $f_n \in \mathcal{V}_{p}(\lambda)$ for $\lambda=1$, i.e., $f_n$ is univalent in ${\mathbb D}$.
Again for the functions $f_n$, it is easy to check that
$\alpha_k=0,\,0\leq k\leq n-3$ and $(z/f_n)^n=n!/(n-1)$ for all $n\geq 3$, which essentially proves the sharpness of the result.
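To justify the membership $f_n\in \mathcal{V}_{p}(1)$ claimed above, write $q(z):=z/f_n(z)=1-\left(1/p+ p^{n-1}/(n-1)\right)z+ z^n/(n-1)$. Then $q(p)=0$ while $q'(p)=p^{n-1}-1/p\neq0$, so $f_n$ has a simple pole at $z=p$ and $f_n\in\mathcal{A}(p)$; moreover
$$
U_{f_n}(z)=q(z)-zq'(z)-1=\frac{z^{n}}{n-1}-\frac{n\,z^{n}}{n-1}=-z^{n},
$$
so $|U_{f_n}(z)|<1$ in ${\mathbb D}$ and hence $f_n\in\mathcal{V}_{p}(1)\subseteq\Sigma(p)$ by Theorem~\ref{fp4th1}.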
\end{pf}
Now in the following theorem we give sufficient conditions for a function $f\in \mathcal{A}(p)$ to be in the class $\mathcal{V}_{p}(\lambda)$
by using Theorem~\ref{fp4th1}, Theorem~\ref{fp4th3} and Theorem~\ref{fp4th4} in terms of the coefficients $b_n$ defined in (\ref{fp4eq5}).
\begin{thm}\label{fp4th5}
Let $f\in \mathcal{A}(p)$ and let $z/f$ have the expansion of the form $(\ref{fp4eq5})$. If $f$ satisfies any one of the following three conditions, namely \\
$(i) ~~\sum_{n=2}^{\infty}(n-1)|b_n|\leq \lambda$\\
$(ii)~~ \sum_{n=2}^{\infty}n(n-1)|b_n|\leq 2\lambda$\\
$(iii)~~ \sum_{k=2}^{n}(k-1)|b_k|+(n-1)\sum_{k=n+1}^{\infty}\binom{k}{n}|b_k|\leq \lambda$\\
then $f\in \mathcal{V}_{p}(\lambda)$.
\end{thm}
\begin{pf}
Since $z/f$ has the form (\ref{fp4eq5}), it is a simple exercise to see that
$$
U_f(z)=-\sum_{n=2}^{\infty}(n-1)b_n z^n,\quad (z/f)''=\sum_{n=2}^{\infty}n(n-1)b_n z^{n-2}
$$
and
$$
\left(\frac{z}{f}\right)^n= n! b_n+\sum_{k=n+1}^{\infty}\frac{k!b_k}{(k-n)!} z^{k-n}=\sum_{k=n}^{\infty}\frac{k!b_k}{(k-n)!} z^{k-n}.
$$
Therefore conditions (i) and (ii) imply that $|U_f(z)|<\lambda$ and $|(z/f)''|<2 \lambda$, respectively.
Again, following arguments similar to those in the proof of Theorem~\ref{fp4th3}, we conclude that $|(z/f)''|<2 \lambda$ implies $|U_f(z)|<\lambda$.
Now
$$\alpha_k=- (z/f)^{k+2}|_{z=0}=-(k+2)! b_{k+2}.
$$
Substituting the value of $\alpha_k$ and $(z/f)^n$ in terms of the coefficient $b_n$ in the left hand side of the inequality (\ref{fp4eq8}) we get
\begin{eqnarray*}
&& \sum_{k=0}^{n-3}(k+1)|b_{k+2}|+\frac{n-1}{n!}\left|\sum_{k=n}^{\infty}\frac{k!b_k}{(k-n)!} z^{k-n}\right|\\
&\leq& \sum_{k=2}^{n}(k-1)|b_k|+(n-1)\sum_{k=n+1}^{\infty}\binom{k}{n}|b_k| \\
&\leq& \lambda \quad \mbox{(by (iii))}.
\end{eqnarray*}
Hence an application of Theorem~\ref{fp4th4} gives $|U_f(z)|<\lambda$.
This shows that in each case $f\in \mathcal{V}_{p}(\lambda)$.
\end{pf}
In the following section we study some coefficient problems for functions in $\mathcal{V}_{p}(\lambda)$; such problems are among the important problems in geometric function theory.
\section{Coefficient problem for the class $\mathcal{V}_{p}(\lambda)$}
Let $f\in\mathcal{V}_{p}(\lambda)$ with the expansion (\ref{fp4eq5}). Proceeding in a manner similar to \cite[Theorem 12]{BF-1}, we have the sharp bound for $|b_n|, n\geq 2$, given by
$$
|b_n|\leq \frac{\lambda }{n-1}, \quad n\geq 2,
$$
and equality holds in the above inequality for the function
\begin{eqnarray}\label{fp4eq6}
f(z)= \frac {z}{1-\left(1/p+(\lambda p^{n-1})/(n-1)\right)z+\lambda z^n/(n-1)}, \, z\in {\mathbb D}.
\end{eqnarray}
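For this extremal function the relevant quantities can be read off directly: with $q(z):=z/f(z)=1-\left(1/p+(\lambda p^{n-1})/(n-1)\right)z+\lambda z^n/(n-1)$ we have $b_n=\lambda/(n-1)$, $q(p)=0$, and
$$
U_f(z)=q(z)-zq'(z)-1=-\lambda z^{n},
$$
so that $|U_f(z)|<\lambda$ in ${\mathbb D}$ and $f$ indeed belongs to $\mathcal{V}_{p}(\lambda)$.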
Each $f\in \mathcal{V}_{p}(\lambda) $ has the following Taylor expansion
\begin{eqnarray}\label{fp4eq7}
f(z)=z+\sum_{n=2}^{\infty}a_{n}(f)z^{n},\quad |z|<p.
\end{eqnarray}
Now the problem is to find out the region of variability of these Taylor coefficients $a_n, n\geq2 $. Here we note that similar to the class $\mathcal{U}_{p}(\lambda)$, every $f\in \mathcal{V}_{p}(\lambda)$ has the following representation (see \cite[Theorem 3]{BF-1}):
\begin{eqnarray}\label{fp4eq3}
\frac{z}{f(z)}=1-\left(\frac{f''(0)}{2}\right)z+\lambda z\int_{0}^{z}w(t)dt,
\end{eqnarray}
where $w\in\mathcal{B}$. Here $\mathcal{B}$ denotes the class of functions $w$ that are analytic in ${\mathbb D}$ such that $|w(z)|\leq1$ for $z\in {\mathbb D}$. By using this representation formula, in the following theorem we give the exact set of variability for the second Taylor coefficient of $f\in\mathcal{V}_{p}(\lambda)$.
\begin{thm}\label{fp2th2}
Let each $f\in\mathcal{V}_{p}(\lambda)$ have the Taylor expansion $f(z)=z+\sum_{n=2}^{\infty}a_{n}(f)z^{n}$ in the disc $\{z: |z|<p\}.$
Then the exact region of variability of the second Taylor coefficient $a_{2}(f)$ is the disc determined by the inequality
\begin{eqnarray}
\label{fp4eq4}
|a_{2}(f)-1/p| \leq \lambda p.
\end{eqnarray}
\end{thm}
\begin{pf}
Substituting $z=p$ in (\ref{fp4eq3}) we get
$$
a_2(f)=\frac{f''(0)}{2}=\frac{1+\lambda p\int_{0}^{p}w(t)dt}{p}
$$
which implies
\begin{eqnarray*}
|a_{2}(f)-1/p|&=& \left|\frac{\lambda p\int_{0}^{p}w(t)dt}{p}\right|\\
&\leq& \lambda \int_{0}^{p}|w(t)|dt \leq \lambda p.
\end{eqnarray*}
Therefore $|a_{2}(f)-1/p|\leq \lambda p$.
A point on the boundary of the disc described by (\ref{fp4eq4}) is attained for the function
$$
f_{\theta}(z)=\frac{z}{1-\frac{z}{p}\left(1+\lambda p^{2}e^{i\theta}\right)+\lambda e^{i\theta}z^2}
$$
where $\theta\in [0,2\pi].$ Also the points in the interior of the disc described in (\ref{fp4eq4}) are attained by the functions
$$
f_{a}(z)=\frac{z}{1-\frac{z}{p}\left(1+\lambda a p^{2}\right)+\lambda a z^2}
$$
where $0<|a|<1$. It is easy to see that these functions belong to the class $\mathcal{V}_{p}(\lambda)$.
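Indeed, writing $q_{\theta}(z):=z/f_{\theta}(z)=1-\frac{1+\lambda p^{2}e^{i\theta}}{p}\,z+\lambda e^{i\theta}z^{2}$ we find $q_{\theta}(p)=0$ and
$$
U_{f_{\theta}}(z)=q_{\theta}(z)-zq_{\theta}'(z)-1=-\lambda e^{i\theta}z^{2},
$$
so $|U_{f_{\theta}}(z)|<\lambda$ in ${\mathbb D}$, while $a_{2}(f_{\theta})=1/p+\lambda p e^{i\theta}$ lies on the boundary circle $|a_{2}-1/p|=\lambda p$; the same computation with $e^{i\theta}$ replaced by $a$, $0<|a|<1$, applies to $f_{a}$ and produces the interior points.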
This shows that the exact region of variability of $a_{2}(f)$ is given by the disc (\ref{fp4eq4}).
\end{pf}
The following consequence of the above theorem can be observed easily:
\begin{cor}
Let $f\in\mathcal{V}_{p}(\lambda)$ for some $\lambda \in (0,1]$ have the form $f(z)=z+\sum_{n=2}^{\infty}a_{n}(f)z^{n}$ in $|z|<p$.
Then $|a_{2}(f)|\leq 1/p+\lambda p$ and equality holds in this inequality for the function
$k_p^{\lambda}$.
\end{cor}
Now the function $k_p^{\lambda}$ is analytic in the disk $\{z: |z|<p\}$ and has the Taylor expansion
$$
k_p^{\lambda}(z)=\sum_{n=1}^{\infty}\frac{1-\lambda^n p^{2n}}{p^{n-1}(1-\lambda p^2)}z^n, \quad |z|<p.
$$
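This expansion follows from a geometric series computation: since $k_p^{\lambda}(z)=\frac{z}{(1-z/p)(1-\lambda pz)}$, the coefficient of $z^{m}$ in $\frac{1}{(1-z/p)(1-\lambda pz)}$ equals
$$
\sum_{j=0}^{m}p^{-j}(\lambda p)^{m-j}=\frac{1-(\lambda p^{2})^{m+1}}{p^{m}(1-\lambda p^{2})},
$$
and multiplying by $z$ gives the stated coefficients with $n=m+1$; both geometric series converge for $|z|<p$.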
Since the function $k_p^{\lambda}$ serves as an extremal function for the class $\mathcal{V}_{p}(\lambda)$, the above corollary enables us to make the following
\begin{conj}
Let $f\in\mathcal{V}_{p}(\lambda)$ for some $0<\lambda\leq 1$ have an expansion of the form $(\ref{fp4eq7})$.
Then the bound
$$
|a_n(f)|\leq \frac{1-\lambda^n p^{2n}}{p^{n-1}(1-\lambda p^2)}
$$
is sharp for $n\geq 3$.
\end{conj}
\begin{rem}\label{fp4r2}
Here we note that all the results proved in \cite{BF-1} and in \cite{BF-2} for the class $\mathcal{U}_p(\lambda)$ will also
be true for the bigger function class $\mathcal{V}_{p}(\lambda)$
if we substitute $\lambda$ in place of $\lambda \mu$ and follow the same method of proof.
We also remark that the authors of \cite{OPW} have also considered similar meromorphic functions and arrived at
this conjectured bound for $|a_n|$ (compare \cite[Remark~2]{OPW}), but their study of such functions comes from a different perspective.
\end{rem}
\begin{thebibliography}{99}
\bibitem{aksen} {\sc L. A. Aksent\'{e}v}, Sufficient conditions for univalence of regular
functions (Russian), \textit{Izv. Vys\v{s}. U\v{c}ebn. Zaved. Matematika}, {\bf 4}(1958), 3-7.
\bibitem{BF-1} {\sc B. Bhowmik and F. Parveen}, On a subclass
of meromorphic univalent functions, \textit{Complex Var. Elliptic Equ.}, {\bf 62} (2017), 494-510.
\bibitem{BF-2} {\sc B. Bhowmik and F. Parveen}, Criteria for univalence, integral means and Dirichlet integral
for meromorphic functions, \textit{Bull. Belg. Math. Soc. (Simon Stevin)}, To appear.
\bibitem{BPW} {\sc B. Bhowmik, S. Ponnusamy and K.-J. Wirths}, Concave functions, Blaschke products and polygonal
mappings, \textit{Siberian Mathematical Journal}, {\bf 50}(2009), 609-615.
\bibitem{MM} {\sc S. S. Miller and P. T. Mocanu}, Differential
subordinations, Theory and Applications, Marcel Dekker Inc., New York, 1999.
\bibitem{OP1} {\sc M. Obradovi\'{c} and S. Ponnusamy}, Criteria for univalent functions in the unit disk, \textit{Arch. Math.}, {\bf 100}(2013), 149-157.
\bibitem{OP} {\sc M. Obradovi\'{c} and S. Ponnusamy}, Univalence and starlikeness of certain integral transforms defined
by convolution of analytic functions, \textit{J. Math. Anal. Appl.}, {\bf 336}(2007), 758-767.
\bibitem{OPW} {\sc M. Obradovi\'{c}, S. Ponnusamy and K.-J. Wirths}, Geometric studies on the class $\mathcal{U}(\lambda)$,
\textit{Bull. Malays. Math. Sci. Soc.}, {\bf 39}(2016), 1259-1284.
\bibitem{PW} {\sc S. Ponnusamy and K.-J. Wirths}, Elementary consideration for classes of meromorphic univalent functions, arXiv:1704.08184.
\end{thebibliography}
\end{document}
\begin{document}
\title[The Periodic Plateau problem and its application]
{The Periodic Plateau problem and its application}
\author[ J. CHOE]{JAIGYOUNG CHOE}
\date{(arXiv) April 19, 2021.\,\,\,\,\,(Revised) April 26, 2021}
\thanks{Supported in part by NRF-2018R1A2B6004262}
\address{Korea Institute for Advanced Study, Seoul, 02455, Korea}
\email{[email protected]}
\begin{abstract} Given a noncompact disconnected complete periodic curve $\Gamma$ with no self intersection in $\mathbb R^3$, it is proved that there exists a noncompact simply connected periodic minimal surface spanning $\Gamma$. As an application it is shown that for any tetrahedron $T$ with dihedral angles $\leq90^\circ$ there exist four embedded minimal annuli in $T$ which are perpendicular to $\partial T$ along their boundary. It is also proved that every Platonic solid of $\mathbb R^3$ contains five types of free boundary embedded minimal surfaces of genus zero.\\
\noindent{\it Keywords}: Plateau problem, periodic, minimal surface, free boundary, Platonic solid\\
{\it MSC}\,: 53A10, 49Q05
\end{abstract}
\maketitle
\section{introduction}
The famous problem of finding a surface of least area spanning a given Jordan curve, called the Plateau problem, was settled by Douglas and Rad\'{o} independently in 1931. Since then many questions have been raised about the Douglas-Rad\'{o} solution: the uniqueness, the embeddedness, the topology of the solution and the number of solutions.
In this paper we are concerned with the Plateau problem for a noncompact disconnected complete curve $\Gamma\subset\mathbb R^3$ which is periodic. $\Gamma$ is said to be periodic if $\Gamma$ has a fundamental piece $\bar{\gamma}$ in a convex polyhedron $U$ and $\Gamma$ is the infinite union of the congruent copies of $\bar{\gamma}$ obtained in a periodic way. In particular, $\Gamma$ is {\it helically periodic} if it is the union of images of $\bar{\gamma}$ under the cyclic group $\langle\sigma\rangle$ generated by a screw motion $\sigma$. $\Gamma$ is {\it translationally periodic} if it is invariant under the cyclic group $\langle\tau\rangle$ generated by a translation $\tau$. $\Gamma$ is {\it rotationally periodic} if the congruent copies of $\bar{\gamma}$ are obtained by repeatedly extending $\bar{\gamma}$ through $180^\circ$-rotations about the lines connecting each pair of endpoints of $\bar{\gamma}$. $\Gamma$ is {\it reflectively periodic} if the congruent copies of $\bar{\gamma}$ are obtained by infinitely extending $\bar{\gamma}$ by the reflections across the planar faces of $\partial U$ (see Figure 1). The extensions by screw motions, translations, rotations and reflections are to be performed infinitely until $\Gamma$ becomes complete.
We prove that for every complete noncompact disconnected periodic curve $\Gamma$ in $\mathbb R^3$ there exists a noncompact simply connected minimal surface $\Sigma\subset\mathbb R^3$ spanning $\Gamma$ such that $\Sigma$ inherits the periodicity of $\Gamma$ (Theorem \ref{main}). Furthermore, in case $\Gamma$ consists of the $x_3$-axis and a complete connected translationally periodic curve $\gamma_1$ winding around the $x_3$-axis such that a fundamental piece of $\gamma_1$ admits a one-to-one orthogonal projection onto a convex closed curve in the $x_1x_2$-plane, we can show that $\Sigma$ is unique and embedded (Theorem \ref{plateau}).
\begin{center}
\includegraphics[width=5.85in]{helitran9.jpg}
\end{center}
These two theorems have an interesting application. Smyth \cite{Sm} showed that given a tetrahedron $T$, there exist three embedded minimal disks in $T$ which meet $\partial T$ orthogonally along their boundary. From $T$ Smyth considered a quadrilateral $\Gamma$ whose edges are perpendicular to the faces of $T$. $\Gamma$ bounds a unique minimal graph $\Sigma$. He then showed that the conjugate minimal surface of $\Sigma$ is the desired minimal surface in $T$.
In this paper we will first see that the tetrahedron $T$ gives rise to a noncompact, disconnected, translationally periodic, piecewise linear curve $\Gamma$ such that the edges (=line segments) of a fundamental piece $\bar{\gamma}$ of $\Gamma$ are perpendicular to the faces of $T$. In fact, $\bar{\gamma}$ has two components $\bar{\gamma}_0$, $\bar{\gamma}_1$, where $\bar{\gamma}_0$ has only one edge and $\bar{\gamma}_1$ has 3 edges. So one of the two components of $\Gamma$ is a straight line $\ell$. By Theorem \ref{main} $\Gamma$ bounds a noncompact simply connected translationally periodic minimal surface $\Sigma$. Let $\Sigma^*$ be its conjugate minimal surface. In Theorem \ref{fb} we will prove that if $\ell$ is properly chosen relative to $\bar{\gamma}_1$ then $\Sigma^*$ is a minimal annulus in $T$ which is perpendicular to $\partial T$ (see Figure 2). One boundary component of $\Sigma^*$ is a convex closed curve lying in one face of $T$
and the other component traces along the remaining three faces. Since there are four lines perpendicular to a face of $T$ we conclude that there exist
\begin{center}
\includegraphics[width=4.8in]{intro9.jpg}
\end{center}
four free boundary minimal annuli in $T$ if the dihedral angles of $T$ are $\leq90^\circ$. If at least one dihedral angle of $T$ is $>90^\circ$, there exist four minimal annuli which are not necessarily inside $T$ but still perpendicular to the planes containing the faces of $T$ along their boundary.
In general, one cannot generalize Theorem \ref{fb} to construct a free boundary minimal annulus in a polyhedron other than a tetrahedron. However, in case $P_y$ is a right pyramid with a regular polygonal base $B$ and apex $p$ (i.e., $P_y=p\mbox{$\times \hspace*{-0.244cm} \times$} B$, the cone), we can show the existence of a free boundary minimal annulus $\Sigma^*$ in $P_y$ such that one boundary component of $\Sigma^*$ is in $B$ and the other component in $p\mbox{$\times \hspace*{-0.244cm} \times$} \partial B$ winding around $p$ (Theorem \ref{ps}). Consequently, it is proved that every Platonic solid $P_s$ bounded by regular $n$-gons contains five types of free boundary embedded minimal surfaces $\Sigma_1,\ldots,\Sigma_5$ of genus 0. Three of them, $\Sigma_1,\Sigma_2,\Sigma_3$, intersect each face of $P_s$ along $1,n,2n$ convex closed congruent curves, respectively. $\Sigma_4$ intersects every edge of $P_s$, while $\Sigma_5$ surrounds every vertex of $P_s$ (Corollary \ref{pl}; see Figure 3). As a matter of fact, if $P_s$ is the cube, $\Sigma_1$ is the well-known Schwarz $P$-surface, $\Sigma_4$ is Neovius' surface and $\Sigma_5$ is Schoen's I-WP surface. Finally, if $P_r$ is a right pyramid whose base is a rhombus, a free boundary minimal annulus in $P_r$ can be similarly constructed (Corollary \ref{pr}).
\begin{center}
\includegraphics[width=5.5in]{polyhedron9.jpg}
\end{center}
\section{Periodic Plateau problem}
A Jordan curve is simple and closed. So it has no self intersection and is homeomorphic to a circle. If a simple curve $\Gamma\subset\mathbb R^3$ is not closed but homeomorphic to $\mathbb R^1$ and has infinite length, one cannot in general find a minimal surface spanning $\Gamma$. However, if there exists a surface of finite area spanning $\Gamma$, one can easily show the existence of a minimal surface spanning $\Gamma$. The same is true if $\Gamma$ is the union of simple open curves of infinite lengths bounding a surface of finite area. In case $\Gamma$ cannot bound a surface of finite area, one needs to impose extra conditions on $\Gamma$ to get a minimal surface spanning $\Gamma$. In this section we will see that the periodicity of $\Gamma$ is a sufficient condition for this purpose.
\begin{definition}
Let $\Gamma\subset\mathbb R^3$ be the union of complete open rectifiable curves $\gamma_1,\gamma_2,\gamma_3,\ldots$ and let $U$ be a convex polyhedral domain in $\mathbb R^3$. $\Gamma$ is said to be {\it periodic} if $\Gamma$ is the infinite union of the congruent copies of $\bar{\gamma}:=\Gamma\cap U$. $\bar{\gamma}$ is called a {\it fundamental piece} of $\Gamma$.
a) Suppose $\Gamma$ is homeomorphic to two parallel lines. $\Gamma$ is {\it translationally periodic} if it is the union of translated fundamental pieces $\tau^n(\bar{\gamma})$ for the cyclic group $\langle\tau\rangle$ generated by a parallel translation $\tau$. $\Gamma$ is invariant under $\langle\tau\rangle$. Moreover, $\Gamma$ is {\it helically periodic} if it is the union of $\sigma^n(\bar{\gamma})$ for the cyclic group $\langle\sigma\rangle$ generated by a screw motion $\sigma$. Assume that the screw motion $\sigma$ is the rotation about the $x_3$-axis by angle $\beta$ composed with the translation by $e$, that is,
\begin{equation}\label{heli}
\sigma(r\cos\theta,r\sin\theta,x_3)=(r\cos(\theta+\beta),r\sin(\theta+\beta),x_3+e).
\end{equation}
Every translationally periodic $\Gamma$ can be said to be helically periodic as well with respect to $\sigma$ for $\beta=0$.
b) Suppose the fundamental piece $\bar{\gamma}$ has at least two components. $\Gamma$ is said to be {\it rotationally periodic} (or {\it oddly periodic}) if the congruent copies of $\bar{\gamma}$ in $\Gamma$ are obtained by indefinitely extending $\bar{\gamma}$ through $180^\circ$-rotations about the lines connecting each pair of endpoints of $\bar{\gamma}$. On the other hand, $\Gamma$ is {\it reflectively periodic} (or {\it evenly periodic}) if the congruent copies of $\bar{\gamma}$ are obtained by indefinitely extending $\bar{\gamma}$ by the reflection across the planar faces of $\partial U$.
$\Gamma$ is complete because translations, screw motions, rotations and reflections are performed infinitely.
\end{definition}
\begin{theorem}\label{main}
Let $\Gamma\subset\mathbb R^3$ be the union of complete pairwise disjoint simple curves $\gamma_1,\gamma_2,\gamma_3,\ldots$ of infinite lengths. Suppose $\Gamma$ is periodic and its fundamental piece is a finite union of simple curves. Then there exists a periodic simply connected minimal surface $\Sigma$ spanning $\Gamma$. $\Sigma$ inherits the periodicity of $\Gamma$ and its fundamental region has least area among the fundamental regions of all the periodic simply connected surfaces spanning $\Gamma$.
\end{theorem}
\begin{proof}
Let's first prove the theorem when $\Gamma$ is helically periodic. We assume that $\Gamma$ is invariant under the $\sigma$ defined by (\ref{heli}). We may further assume that $\sigma$ maps the fundamental piece $\bar{\gamma}$ of $\Gamma$ to its adjoining piece, that is, $\bar{\gamma}$ is connected to $\sigma(\bar{\gamma})$ through their common endpoints. $\Gamma$ uniquely determines the angle $\beta>0$ of (\ref{heli}), which we call the {\it period} of $\Gamma$. $\hat{\Sigma}$ is a fundamental region of $\Sigma$ if and only if
$$\Sigma=\bigcup_{k\in\mathbb Z}\sigma^k(\hat{\Sigma})\,\,\,{\rm and}\,\,\, \hat{\Sigma}\cap\sigma(\hat{\Sigma})=\emptyset.$$
\begin{definition}
To each complete helically periodic curve $\Gamma$ we associate the class $\mathcal{C}_{a,\Gamma}$ of admissible maps $\varphi$ from the infinite strip $I_a:=[0,a]\times\mathbb R$ to $\mathbb R^3$ with the following properties:
\begin{enumerate}
\item $\varphi$ is a piecewise $C^{1}$ immersion in the interior of $I_a$ and is continuous in ${I_a}$;
\item $\varphi(x,y+k\beta)=\sigma^k(\varphi(x,y)),\,\, (x,y)\in I_a,\,\,k:{\rm integer},\,\, \beta:{\rm fixed}\,>0$;
\item $\varphi$ restricted to $\{0,a\}\times(0,\beta]$ is a monotone map onto a fundamental piece $\bar{\gamma}$ of $\Gamma$, i.e., $\bar{\gamma}$ is traversed once by $\varphi(\{0,a\}\times(0,\beta])$ although we allow arcs of $\{0,a\}\times(0,\beta]$ to map onto single points of $\bar{\gamma}$.
\end{enumerate}
To normalize $\mathcal{C}_{a,\Gamma}$ let's assume that $\varphi(0,0)=p$ for a fixed point $p$ of $\bar{\gamma}$.
$\varphi$ is said to be {\it invariant under the screw motion} $\sigma$ with {\it period} $\beta$ if $\varphi$ satisfies property (2).
\end{definition}
Define the {\it area functional} $A$ on $\mathcal{C}_{a,\Gamma}$ by
$$A(\varphi)=\int\int_{[0,a]\times[0,\beta]}|\varphi_x\wedge\varphi_y|dx\,dy$$
and the {\it Dirichlet integral} $D(\varphi)$ of $\varphi\in\mathcal{C}_{a,\Gamma}$ by
$$D(\varphi)=\int\int_{[0,a]\times[0,\beta]}|\nabla\varphi|^2dx\,dy.$$
Since $$|\varphi_x\wedge\varphi_y|\leq\frac{1}{2}\left(|\varphi_x|^2+|\varphi_y|^2\right)$$
we have
\begin{equation}\label{A<D}
A(\varphi)\leq\frac{1}{2}D(\varphi),\,\,\varphi\in\mathcal{C}_{a,\Gamma}
\end{equation}
where equality holds if and only if $\varphi$ is almost conformal. In order to obtain the equality case, we need to prove the existence of periodic isothermal coordinates invariant under $\sigma$ on the surface $\varphi(I_a)$.
\begin{proposition}\label{prop1}
For any $\varphi\in\mathcal{C}_{a,\Gamma}$ there exists a periodic homeomorphism $H:I_a\rightarrow I_{\bar{b}}:=[0,\bar{b}]\times\mathbb R$ such that $H^{-1}$ has period $\beta$ and the reparametrized map $\varphi\circ H^{-1}:I_{\bar{b}}\rightarrow\varphi(I_a)$ is a conformal map in $\mathcal{C}_{\bar{b},\Gamma}$.
\end{proposition}
\begin{proof}
Let $N$ be the annulus obtained from $[0,a]\times[0,\beta]$ by identifying the two line segments $[0,a]\times\{0,\beta\}$. Let $g$ be the metric on $N$ which is pulled back by $\varphi$ from the metric of $\varphi(I_a)$. $g$ is well-defined since $\varphi$ is invariant under the screw motion $\sigma$ determined by $\Gamma$. Let's consider the Dirichlet problem on $(N,g)$ for constant $b>0$:
$$\Delta u=0,\,\,u=0\,\,{\rm on}\,\,\{0\}\times[0,\beta],\,\,u=b\,\,{\rm on}\,\,\{a\}\times[0,\beta].$$
There exists a unique solution $u=h_b$ to this problem. The harmonic function $h_b$ has a conjugate harmonic function $h_b^*$ which is multi-valued on $(N,g)$. But $h_b^*$ is well-defined on its universal cover $\widetilde{N}=I_a$. Let $\tau(b)>0$ be the period of $h_b^*$ on $N$. $\tau(b)$ is an increasing function which varies from 0 to $\infty$ as $b$ does so. Hence there exists $\bar{b}>0$ such that $\tau(\bar{b})=\beta$. Note that $h_{\bar{b}}$ can also be lifted to $h_{\bar{b}}$ on $I_a$. Then the map $H:I_a\rightarrow I_{\bar{b}}$ defined by $H(q)=(h_{\bar{b}}(q),h_{\bar{b}}^*(q))$ is a periodic homeomorphism and yields a conformal map $\varphi\circ H^{-1}:I_{\bar{b}}\rightarrow\varphi(I_a)$. Note that $H^{-1}$ has period $\beta$ and $\varphi\circ H^{-1}$ is invariant under the screw motion $\sigma$. This completes the proof of the proposition.
\end{proof}
In order to prove the existence of an area-minimizing surface spanning $\Gamma$, let's define
$$a_\Gamma=\inf_{\varphi\in\mathcal{C}_{a,\Gamma} ,\,a>0}A(\varphi)\,\,\,\,{\rm and}\,\,\,\,d_\Gamma=\inf_{\varphi\in\mathcal{C}_{a,\Gamma},\,a>0}D(\varphi).$$
Then by (\ref{A<D}) and the existence of the isothermal coordinates we have
$$a_\Gamma=\frac{1}{2}\,d_\Gamma.$$
Therefore
$$D(\psi)=d_\Gamma\,\,{\rm for}\,\,{\rm some}\,\,\psi\in\mathcal{C}_{a,\Gamma} \,\, \Longleftrightarrow \,\,A(\psi)=a_\Gamma\,\,{\rm and}\,\, \psi\,\,{\rm is\,\,almost\,\,conformal}.$$
Thus, to solve the periodic Plateau problem it suffices to find $\bar{a}>0$ and a map $\psi\in\mathcal{C}_{\bar{a},\Gamma}$ which minimizes the Dirichlet integral $D(\varphi)$ on $[0,{a}]\times[0,\beta]$ among all $\varphi$ in $\mathcal{C}_{a,\Gamma}$ and all $a>0$.
First we shall fix $a>0$ and apply the periodic Dirichlet principle on $\mathcal{C}_{a,\Gamma}$ as follows.
\begin{lemma}\label{lemma1}
For each admissible map $\varphi$ in $\mathcal{C}_{a,\Gamma}$ there exists a unique harmonic admissible map $\psi\in\mathcal{C}_{a,\Gamma}$ with $\psi|_{\partial I_a}=\varphi|_{\partial I_a}$. Moreover, $D(\psi)\leq D(\varphi)$.
\end{lemma}
\begin{proof}
Let $x,y$ be the Euclidean coordinates of $\mathbb R^2$ and set $t=x+iy$. Define
$$f_1(t)=e^{\pi it/a}\,\,\,{\rm and}\,\,\,f_2(z)=\frac{iz+1}{z+i}.$$
Then $z=f_1(t)$ maps the infinite vertical strip $I_a$ one-to-one onto the upper half plane $\{{\rm Im}\,z\geq0\}\setminus\{0\}$ and $w=f_2(z)$ maps $\{{\rm Im}\,z\geq0\}\setminus\{0\}$ one-to-one onto the unit disk $\{|w|\leq1\}\setminus\{ i,-i\}$. Furthermore, we see that $f_2(f_1(\partial I_a))=\{|w|=1\}\setminus\{i,-i\}$. Let's consider the vector-valued Dirichlet problem for $u=(u_1,u_2,u_3)$ in $D:=\{w:|w|<1\}$:
\begin{equation}\label{bc0}
\Delta u=0\,\,{\rm in}\,\,D,\,\,\,\,\,\,u=\varphi\circ {f_1}^{-1}\circ f_2^{-1}\,\,{\rm on}\,\,\partial D,\,\,\,\,\,\,\varphi=(\varphi_1,\varphi_2,\varphi_3).
\end{equation}
Since $\varphi$ satisfies $\varphi(x,y+k\beta)=\sigma^k(\varphi(x,y))$ for the screw motion $\sigma$ defined by (\ref{heli}),
we see that $\varphi_1,\varphi_2$ are bounded and
\begin{equation}\label{y=x}
\varphi_3(x,y+k\beta)=\varphi_3(x,y)+ke.
\end{equation}
The Dirichlet problem (\ref{bc0}) has a unique bounded solution for $u_1,u_2$ because of the boundedness of $\varphi_1,\varphi_2$. Even though $\varphi_3$ is unbounded, by (\ref{y=x}) $\varphi_3-\frac{e}{\beta }y$ is bounded and periodic in $I_a$. So if the Dirichlet problem
\begin{equation}\label{bc1}
\Delta v=0 \,\,{\rm in}\,\,I_a,\,\,\,\,\,\,v=\varphi_3-\frac{e}{\beta }y\,\,{\rm on}\,\, \partial I_a
\end{equation}
has a bounded solution, it must be unique and periodic. To find its bounded solution, we convert it to a new Dirichlet problem on $D$:
\begin{equation}\label{bc2}
\Delta w =0\,\,{\rm in}\,\, D,\,\,\,\,\,\,w=(\varphi_3-\frac{e}{\beta}y)\circ {f_1}^{-1}\circ f_2^{-1}\,\,{\rm on}\,\,\partial D.
\end{equation}
The boundedness of $(\varphi_3-\frac{e}{\beta}y)\circ {f_1}^{-1}\circ f_2^{-1}$ gives the existence of a unique bounded solution $w=\tilde{h}_3$ to (\ref{bc2}). As $\frac{e}{\beta}y\circ f_1^{-1}\circ f_2^{-1}$ is harmonic in $D$, it is easy to see that $u_3:=\tilde{h}_3+\frac{e}{\beta}y\circ f_1^{-1}\circ f_2^{-1}$ is the third component of a desired solution to (\ref{bc0}).
Pulling back $(u_1,u_2,u_3)$ by $f_2\circ f_1$ to $I_a$, one can obtain a harmonic map $\psi:I_a\rightarrow\mathbb R^3$ having the same boundary value as $\varphi$ on $\partial I_a$. We now show that $\psi$ is invariant under the screw motion $\sigma$, in other words, $$\psi(x,y+\beta)=\sigma(\psi(x,y)).$$
Let $h_1,h_2,h_3:I_a\rightarrow\mathbb R$ be the harmonic components of $\psi$, that is,
$$\psi(x,y)=(h_1(x,y),h_2(x,y),h_3(x,y)).$$
(One easily sees that $h_3=\tilde{h}_3\circ f_2\circ f_1+\frac{e}{\beta}y$.)
Define
$$\psi_A(x,y)=\psi(x,y+\beta)\,\,{\rm and}\,\,\psi_B(x,y)=\sigma(\psi(x,y)).$$
Since $\tilde{h}_3\circ f_2\circ f_1$ is periodic with period $\beta$, we have
$$h_3(x,y+\beta)=h_3(x,y)+e.$$
So the third component of $\psi_A(x,y)$ equals that of $\psi_B(x,y)$.
On the other hand,
$$\psi_B(x,y)=(\cos\beta\, h_1(x,y)-\sin\beta\, h_2(x,y),\sin\beta \,h_1(x,y)+\cos\beta\, h_2(x,y),h_3(x,y)+e).$$
Hence $\psi_A,\psi_B$ are harmonic maps. As $h_1,h_2$ are bounded, so is $\psi_A-\psi_B$. Since $\sigma(\Gamma)=\Gamma$, $\psi_A-\psi_B$ vanishes on $\partial I_a$. Then $(\psi_A-\psi_B)\circ f_1^{-1}\circ f_2^{-1}$ is a bounded harmonic map vanishing on $\partial D$ and so $\psi_A-\psi_B\equiv0$. Therefore $\psi$ is invariant under $\sigma$.
It follows that $\psi$ is a unique admissible harmonic map in $\mathcal{C}_{a,\Gamma}$ having the same boundary values as $\varphi$.
Set $\Phi=\varphi-\psi$. Then $\Phi$ is also invariant under $\sigma$ and hence
$$D(\varphi)=D(\Phi)+D(\psi)+2D(\Phi,\psi)$$
where
$$D(\Phi,\psi)=\int\int_{[0,a]\times[0,\beta]}\left(\langle\frac{\partial\Phi}{\partial x},\frac{\partial\psi}{\partial x}\rangle+\langle\frac{\partial\Phi}{\partial y},\frac{\partial\psi}{\partial y}\rangle\right)dxdy.$$
Green's identity implies that
$$D(\Phi,\psi)=\int_{\partial([0,a]\times[0,\beta])}\langle\Phi,\frac{\partial\psi}{\partial\nu}\rangle ds-\int\int_{[0,a]\times[0,\beta]}\langle\Phi,\Delta\psi\rangle dxdy,$$
where $\nu$ is the outward unit normal to $\partial([0,a]\times[0,\beta])$.
But
$$\Phi=0 \,\,{\rm on}\,\, \{0,a\}\times[0,\beta]\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,\frac{\partial\psi}{\partial\nu}\big|_{[0,a]\times\{\beta\}}=-\frac{\partial\psi}{\partial\nu}\big|_{[0,a]\times\{0\}}$$
because of the invariance of $\psi$ under $\sigma$.
Hence $D(\Phi,\psi)=0$. It then follows that
$$D(\psi)\leq D(\varphi),$$
which completes the proof of the lemma.
\end{proof}
Define
$$d_{a,\Gamma}={\rm inf}_{\varphi\in \mathcal{C}_{a,\Gamma}}D(\varphi).$$
We claim here that $d_{a,\Gamma}$ goes to infinity as $a\rightarrow\infty$ and as $a\rightarrow0$.
\begin{eqnarray*}
D(\varphi)&\geq&\int_0^a\int_0^\beta|\varphi_y|^2dydx=\int_0^a\int_0^\beta\sum_{i=1}^3\left(\frac{\partial\varphi_i}{\partial y}\right)^2dydx\\
&\geq&\frac{1}{\beta}\int_0^a\left(\int_0^\beta\frac{\partial\varphi_3}{\partial y}dy\right)^2dx=\frac{1}{\beta}\int_0^a(\varphi_3(x,\beta)-\varphi_3(x,0))^2dx\\
&=&\frac{ae^2}{\beta}.
\end{eqnarray*}
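Here the second inequality follows by keeping only the term $i=3$ and applying the Cauchy--Schwarz inequality on $[0,\beta]$,
$$
\left(\int_0^\beta\frac{\partial\varphi_3}{\partial y}\,dy\right)^{2}\leq\beta\int_0^\beta\left(\frac{\partial\varphi_3}{\partial y}\right)^{2}dy,
$$
while the final equality uses $\varphi_3(x,\beta)-\varphi_3(x,0)=e$, which is property (2) of admissible maps applied to the screw motion (\ref{heli}).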
So $\lim_{a\rightarrow\infty}d_{a,\Gamma}=\infty$. On the other hand,
\begin{eqnarray*}
D(\varphi)&\geq&\int_0^\beta\int_0^a|\varphi_x|^2dxdy=\int_0^\beta\int_0^a\sum_{i=1}^3\left(\frac{\partial\varphi_i}{\partial x}\right)^2dxdy\\
&\geq&\frac{1}{a}\int_0^\beta\sum_{i=1}^3\left(\int_0^a\frac{\partial\varphi_i}{\partial x}dx\right)^2dy=\frac{1}{a}\int_0^\beta\sum_{i=1}^3(\varphi_i(a,y)-\varphi_i(0,y))^2dy\\
&\geq&\frac{\beta d^2}{a},
\end{eqnarray*}
where $d$ is the distance between the two components $\gamma_0$, $\gamma_1$ of $\Gamma$ which are written as $\gamma_0=\varphi(\{0\}\times\mathbb R), \gamma_1=\varphi(\{a\}\times\mathbb R)$.
Hence $\lim_{a\rightarrow0}d_{a,\Gamma}=\infty$ as well.
Therefore we can conclude that there exists a positive constant $\bar{a}$ such that
$$d_\Gamma=d_{\bar{a},\Gamma}.$$
To finish the proof of Theorem \ref{main} we need the following.
\begin{lemma}\label{lemma2}
Let $M$ be a constant $>d_{\Gamma}$. Then for any $a>0$ the family of functions
$$\mathcal{F}_a=\{\varphi|_{\partial I_{{a}}}:\varphi\in\mathcal{C}_{{a},\Gamma},\,\,D(\varphi)\leq M\}$$
is compact in the topology of uniform convergence.
\end{lemma}
\begin{proof}
For each $z\in \partial I_{{a}}$ and each $r>0$, define $C_r$ to be the intersection of $I_{{a}}$ with the circle of radius $r$ centered at $z$, and denote by $s$ the arc length parameter of $C_r$. Choose any $\varphi\in\mathcal{C}_{{{a}},\Gamma}$ with $D(\varphi)\leq M$. For $0<\delta<{\rm min}(1,{a}^2)$, consider the integral
$$K:=\int_\delta^{\sqrt{\delta}}\int_{C_r}|\varphi_s|^2ds\,dr\leq D(\varphi)\leq M.$$
One can see that
$$K=\int_\delta^{\sqrt{\delta}}f(r)\,d(\log r),\,\,\,\,f(r):=r\int_{C_r}|\varphi_s|^2ds.$$
By the mean value theorem there exists $\rho$ with $\delta\leq\rho\leq\sqrt{\delta}$ such that
$$K=f(\rho)\int_\delta^{\sqrt{\delta}}d(\log r)=\frac{1}{2}\,f(\rho)\log(\frac{1}{\delta}).$$
Hence
$$\int_{C_\rho}|\varphi_s|^2ds\leq \frac{2M}{\rho\log(\frac{1}{\delta})}.$$
Denote the length of the curve $\varphi(C_r)$ by $L(\varphi(C_r))$. Then $L(\varphi(C_\rho))=\int_{C_\rho}|\varphi_s|ds$ and from the Cauchy-Schwarz inequality it follows that
\begin{equation}\label{lc}
L(\varphi(C_\rho))^2\leq\frac{2\pi M}{\,\log(\frac{1}{\delta})}.
\end{equation}
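In detail, since $\rho\leq\sqrt{\delta}<a$, the arc $C_\rho$ has length at most $\pi\rho$, and the Cauchy--Schwarz inequality gives
$$
L(\varphi(C_\rho))^2=\left(\int_{C_\rho}|\varphi_s|\,ds\right)^2\leq \pi\rho\int_{C_\rho}|\varphi_s|^2ds\leq\pi\rho\cdot\frac{2M}{\rho\log(\frac{1}{\delta})}=\frac{2\pi M}{\,\log(\frac{1}{\delta})}.
$$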
Given a number $\varepsilon>0$, by the compactness of $\Gamma/\langle\sigma\rangle$ we see that there exists $d>0$ such that for any $p,p'$ in $\Gamma$ with $0<|pp'|<d$, the diameter of the bounded component of $\Gamma\setminus\{p,p'\}$ is smaller than $\varepsilon$. Choose $\delta<{\rm min}(1,{a}^2)$ such that ${\frac{2\pi M}{\log(\frac{1}{\delta})}}<d^2$. Then for any $z\in\partial I_{{a}}$, there exists a number $\rho$ with $\delta<\rho<\sqrt{\delta}$ such that by (\ref{lc}), $L(\varphi(C_\rho))<d$.
Let $E_z$ be the interval in $\partial I_a$ between $z_1$ and $z_2$, the two endpoints of $C_\rho$. Then $|\varphi(z_1)\varphi(z_2)|<d$ and hence the diameter of $\varphi(E_z)$ is smaller than $\varepsilon$. Therefore for any $z,z'\in\partial I_{{a}}$ with $|z-z'|<{\delta}$ and $z$ being the center of $C_\rho$, we have $\varphi(z),\varphi(z')\in\varphi(E_z)$ and thus
$$|\varphi(z)-\varphi(z')|<\varepsilon.$$
Since $\delta$ was chosen independently of $z,z'$ and $\varphi$, we obtain the equicontinuity of $\mathcal{F}_a$.
In Douglas's solution for the existence of a conformal harmonic map $\varphi:D\rightarrow\mathbb R^n$ spanning a Jordan curve $\Gamma$, it was essential to prescribe $\varphi(z_i)=p_i$ for arbitrarily chosen points $z_1,z_2,z_3\in\partial D$ and $p_1,p_2,p_3\in\Gamma$. This was for the purpose of deriving the equicontinuity in a minimizing sequence. Fortunately we do not need this kind of prescription for our compact set $\Gamma/\langle\sigma\rangle$ as $\varphi(I_a)/\langle\sigma\rangle$ is not a disk. Yet we need to avoid an unwanted situation resulting from the disconnectedness of $\Gamma$: we have to show that $\varphi(\{a\}\times\mathbb R)$ does not drift away from $\varphi(\{0\}\times\mathbb R)$ (recall that $\varphi(0,0)$ is fixed). This can be done by deriving a length bound from a bound on $D(\varphi)$ as above.
For each $y\in [0,\beta]$ let $\ell_y$ denote the line segment $[0,a]\times \{y\}$. Choose $\varphi\in\mathcal{C}_{{{a}},\Gamma}$ and suppose $D(\varphi)\leq M$. Consider the integral
$$K:=\int_0^{{\beta}}\int_{\ell_y}|\varphi_x|^2dx\,dy\leq D(\varphi)\leq M.$$
Then
$$K=\int_0^{{\beta}}\tilde{f}(y)\,dy,\,\,\,\,\,\tilde{f}(y):=\int_{\ell_y}|\varphi_x|^2dx.$$
The mean value theorem implies that there exists $0<\bar{y}<\beta$ such that
$$K=\beta\,\tilde{f}(\bar{y})\leq M.$$
Hence
\begin{equation}\label{length}
L(\varphi(\ell_{\bar{y}}))^2=\left(\int_{\ell_{\bar{y}}}|\varphi_x|dx\right)^2\leq a\,\tilde{f}(\bar{y})\leq\frac{aM}{\beta}.
\end{equation}
We say that $\varphi(\{a\}\times\mathbb R)$ drifts away from $\varphi(\{0\}\times\mathbb R)$ if $\lim_{x\rightarrow a}|\varphi_3(x,y)|=\infty$ for some $0\leq y\leq\beta$. Therefore (\ref{length}) means that no drift occurs under $\varphi$ if $D(\varphi)$ is bounded, as claimed. Thus by Arzela's theorem the equicontinuity yields the compactness of $\mathcal{F}_a$. This completes the proof of Lemma \ref{lemma2}.
\end{proof}
Finally, let $\{\varphi_n\}$ be a minimizing sequence in $\mathcal{C}_{\bar{a},\Gamma}$, that is, $\lim_{n\rightarrow\infty}D(\varphi_n)=d_\Gamma$. From Lemma \ref{lemma2} it follows that there exists a subsequence $\{\varphi_{n_i}\}$ such that $\{\varphi_{n_i}|_{\partial I_{\bar{a}}}\}$ converges uniformly to $\bar{\varphi}|_{\partial I_{\bar{a}}}$ for some $\bar{\varphi}\in\mathcal{C}_{\bar{a},\Gamma}$. By Lemma \ref{lemma1} there exist harmonic maps $\psi_{i},\psi$ $\in\mathcal{C}_{\bar{a},\Gamma}$ such that
$$\psi_{i}|_{\partial I_{\bar{a}}}=\varphi_{n_i}|_{\partial I_{\bar{a}}},\,\,\,\,\,D(\psi_i)\leq D(\varphi_{n_i}),\,\,\,\,\,\psi|_{\partial I_{\bar{a}}}=\bar{\varphi}|_{\partial I_{\bar{a}}},\,\,\,\,\,\psi=\lim_{i\rightarrow \infty}\psi_{i}.$$
Then Harnack's principle gives
$$D(\psi)\leq\lim\inf_iD(\psi_{i})\leq d_\Gamma.$$
Consequently, $D(\psi)=d_\Gamma$ and so $\psi$ is almost conformal and harmonic. This completes the proof of Theorem \ref{main} when $\Gamma$ is helically periodic and therefore when it is translationally periodic as well. Since $\psi$ is periodically area minimizing in $\mathbb R^3$ it has no interior branch point (see \cite{G}).
Let $U$ be a convex polyhedral domain in $\mathbb R^3$ and $\bar{\gamma}:=\Gamma\cap U$ a fundamental piece of $\Gamma$. While a helically periodic $\Gamma$ has only two components, the fundamental piece $\bar{\gamma}$ of a rotationally (or reflectively) periodic $\Gamma$ may have more than two components. Let $\bar{\gamma}_1,\bar{\gamma}_2,\bar{\gamma}_3,\ldots,\bar{\gamma}_n$ be the components of $\bar{\gamma}$. For $i=1,\ldots,n$, let $p_i^1,p_i^2$ be the endpoints of $\bar{\gamma}_i$. Reordering $i=1,\ldots,n$ if necessary, we may assume that the line segment $\overline{p_i^1p_{i+1}^2}$ is in a planar face of $U$ connecting $\bar{\gamma}_i$ to $\bar{\gamma}_{i+1}$ and that $\Gamma_0:=\bar{\gamma}_1\cup\cdots\cup\bar{\gamma}_n\cup\overline{p_1^1p_2^2}\cup\overline{p_2^1p_3^2}\cup\cdots\cup
\overline{p_{n-1}^1p_n^2}\cup\overline{p_n^1p_1^2}$ is a Jordan curve in ${U}\cup\partial U$. There exists a Douglas solution $\Sigma_a$ spanning $\Gamma_0$. If $\Gamma$ is rotationally periodic, $\Gamma$ can be recaptured by indefinitely rotating $\bar{\gamma}_1\ldots,\bar{\gamma}_n$ $180^\circ$ around $\overline{p_1^1p_2^2},\ldots,\overline{p_n^1p_1^2}$ and around the corresponding line segments in the adjacent polyhedra. If we perform the same indefinite rotations on $\Sigma_a$ as we obtain $\Gamma$ from $\bar{\gamma}_1,\ldots,\bar{\gamma}_n$, then $\Sigma_a$ gives rise to a rotationally periodic minimal surface $\Sigma$ spanning $\Gamma$, as desired.
Suppose $\Gamma$ is reflectively periodic with fundamental piece $\bar{\gamma}$ in $U$. By the theorem of existence of minimizers for the free boundary problem (see Section 5.3 of \cite{DHKW}) there exists a minimal surface $\Sigma_b$ of least area in $U$ such that $\partial\Sigma_b\setminus\partial U=\bar{\gamma}$ and $\Sigma_b$ is perpendicular to $\partial U$ along $\partial\Sigma_b\cap\partial U$. Apply the same indefinite reflections to $\Sigma_b$ as we do to $\bar{\gamma}$ to get $\Gamma$. Then we can obtain a reflectively periodic minimal surface $\Sigma$ spanning $\Gamma$, as desired.
\end{proof}
\begin{remark}
(a) One can similarly consider a disjoint union $\Gamma$ of complete simple curves in the hyperbolic space $\mathbb H^3$ and in the sphere $\mathbb S^3$. $\Gamma$ can be compact in $\mathbb S^3$. One easily sees that Theorem \ref{main} still holds for $\Gamma$ in $\mathbb H^3$ and in $\mathbb S^3$.
(b) A periodic minimal surface (and its boundary) may be partly rotationally periodic and partly reflectively periodic.
(c) A helically periodic $\Sigma$ spanning $\Gamma$ with invariance group $\langle\sigma\rangle$ gives rise to the quotient surface $\Sigma/\langle\sigma\rangle$ and the quotient boundary $\Gamma/\langle\sigma\rangle$.
\end{remark}
\section{Uniqueness and Embeddedness}
Under what condition can $\Gamma$ guarantee the uniqueness and embeddedness of the periodic Plateau solution $\Sigma$? For the Douglas solution with Jordan curve $\Gamma$ Nitsche \cite{N} and Ekholm-White-Wienholtz \cite{EWW} proved the uniqueness and the embeddedness, respectively, if the total curvature of $\Gamma\leq4\pi$. But even before Douglas, Rad\'{o} \cite{R} showed that the Dirichlet solution of the minimal surface equation for any continuous boundary data over the boundary of a convex domain in $\mathbb R^2$ exists as a graph, which is obviously unique and embedded. In the same spirit, we have a partial answer for our periodic Plateau problem as follows.
\begin{theorem}\label{plateau}
Let $\gamma_0$ be the $x_3$-axis and $\gamma_1$ a complete connected curve winding around $\gamma_0$. Define $\Gamma=\gamma_0\cup\gamma_1$ and let $\tau$ be a vertical translation by $e$. If $\Gamma$ is translationally periodic with respect to $\tau$ and a fundamental piece of $\gamma_1$ admits a one-to-one orthogonal projection onto a convex closed curve in the $x_1x_2$-plane, then the translationally periodic minimal surface $\Sigma$ spanning $\Gamma$ has the following properties:
\begin{itemize}
\item[(a)] The Gaussian curvature of $\Sigma$ is negative at any point $p\in\gamma_0$;
\item[(b)] $\Sigma$ is embedded and its fundamental region (not including $\gamma_0$) is a graph over its projection onto the $x_1x_2$-plane;
\item[(c)] $\Sigma$ is unique.
\end{itemize}
\end{theorem}
\begin{proof}
(a) $\gamma_0$ is parametrized by $x_3$. At any point $p(x_3)$ of $\gamma_0$, $\Sigma$ has a tangent {\it half plane} $Q_{p(x_3)}$. In a neighborhood of $p(x_3)$, $\Sigma$ is divided by $Q_{p(x_3)}$, like a half pie, into $m(\geq2)$ regions (see Figure 4). Define $\theta(x_3)$ to be the angle between $Q_{p(x_3)}$ and the positive $x_1$-axis. $\theta(x_3)$ is a well-defined analytic function satisfying $\theta(x_3+e)=\theta(x_3)+2\pi$. It is known (to be proved shortly) that
\begin{center}
\includegraphics[width=5.7in]{pie9.jpg}\\
\end{center}
\begin{equation}\label{equivalence}
m=2\,\,{\rm at}\,\,p(x_3) \,\,\, \Leftrightarrow\,\,\, K(x_3)<0 \,\,\, \Leftrightarrow\,\,\, \theta'(x_3)\neq0,
\end{equation}
where $K(x_3)$ is the Gaussian curvature of $\Sigma$ at $p(x_3)$.
We claim that $m\equiv2$ on $\gamma_0$. Suppose $m\geq3$ at $p(x_3)$ so that $Q_{p(x_3)}\cap\Sigma\setminus\gamma_0$ is the union of at least two analytic curves $C_1,C_2,...,C_k$ emanating from $p(x_3)$. Since
$Q_{p(x_3)}$ intersects $\gamma_1$, at least one of $C_1,C_2,...,C_k$ should reach $\gamma_1$. So we have two
possibilities: either {\bf (i)} only one of them, say $C_1$, reaches $\gamma_1$, or {\bf (ii)} two of them, say $C_1,C_2$, reach $\gamma_1$ (see Figure 4). In the first case, since $C_2$ is disjoint from $\gamma_1$ and
translationally periodic, it cannot be unbounded and should be in a fundamental region of $\Sigma$. Hence $C_2$ comes back to $\gamma_0$.
$C_2$ and $\gamma_{0}$ should then bound a domain $D\subset\Sigma$ with $\partial D\subset {Q_{p(x_3)}}$ as $\Sigma$ is simply connected. But this contradicts the maximum principle because $D$ has a point which attains the maximum distance from $Q_{p(x_3)}$. In case of {\bf (ii)}, set $C_1\cap\gamma_1=\{q_1\}$ and $C_2\cap\gamma_1=\{q_2\}$. Denote by $\pi$ the projection onto the $x_1x_2$-plane. Due to the convexity of $\pi(\gamma_1)$, $Q_{p(x_3)}$ intersects any fundamental piece of $\gamma_1$ only at one point. Therefore $\{q_1,q_2\}$ should be the boundary of a fundamental piece of $\gamma_1$. Hence $\tau(q_1)=q_2$, interchanging $q_1$ and $q_2$ if necessary. So the two curves $\tau(C_1)$ and $C_2$ meet at $q_2$. Then $\tau(C_1)$, $C_2$ and $\gamma_0$ bound a domain $D\subset\Sigma$. Again $\partial D$ is a subset of $Q_{p(x_3)}$, which is a contradiction to the maximum principle. Therefore $m\equiv2$ on $\gamma_0$, as claimed.
To give a proof of the equivalences (\ref{equivalence}), let's view $\Sigma$ in a neighborhood of $p\in\gamma_0$ as a graph over $Q_p$, the tangent half plane of $\Sigma$ at $p$. Introduce $x,y,z$ as the coordinates of $\mathbb R^3$ such that $z\equiv0$ on $Q_p$, $x\equiv0$ on $\gamma_0$ and $p=(0,0,0)$. Then $\Sigma$ is the graph of an analytic function $z=f(x,y)$ and the lowest order term of its Taylor series is $f_m(x,y)=c_m\,{\rm Im}(x+iy)^m, m\geq2$, when $m$ is an even integer and $f_m(x,y)=c_m\,{\rm Re }(x+iy)^m$ when $m$ is odd. It follows that $\Sigma$ is divided by $Q_p$ into $m$ regions in a neighborhood of $p$ and that $K(p)=0$ if $m\geq3$ and $K(p)<0$ if $m=2$, which is the first equivalence in (\ref{equivalence}). Hence $K<0$ on $\gamma_0$ by the claim above and this proves (a). The second equivalence follows from the expression for the Gaussian curvature in terms of the Weierstrass data on $\Sigma$, a 1-form $fdz$ and the Gauss map $g$:
\begin{equation}\label{K}
K=-\frac{16|g'|^2}{|f|^2(1+|g|^2)^4}.
\end{equation}
(b) First we show that $\Sigma\setminus\gamma_0$ has no vertical tangent plane. Suppose not; let $q$ be an interior point of $\Sigma$ at which the tangent plane ${P}$ is vertical. Remember that $\pi(\gamma_1)$ is convex. Hence ${P}$ intersects $\gamma_1$ only at two points in its fundamental piece. ${P}\cap\Sigma$ is locally the union of at least four curves $C_1,\ldots,C_k, k\geq4,$ emanating from $q$, and two of them should reach $\gamma_1$. If we assume only four curves emanate from
$q$ in ${P}\cap\Sigma$, two of them will reach $\gamma_1$ and then either the remaining two will reach $\gamma_0$ or they will be connected to each other by the translation $\tau$ as in Figure 5: {\bf (i)} $C_1$,
$C_2$ will intersect $\gamma_1$ and $C_3,C_4$ will intersect $\gamma_{0}$; {\bf (ii)} $C_1,C_2$ will intersect $\gamma_1$ and
$C_3,C_4$ will be disjoint from $\gamma_0\cup\gamma_{1}$ so that $C_4$ will be connected to $\tau(C_3)$. In case
of {\bf (i)}, $C_3\cup C_4\cup\gamma_{0}$ will bound a domain $D\subset\Sigma_{}$. But this is a contradiction to the maximum principle since $\partial D\subset {P}$. In case of {\bf (ii)}, ${\gamma}_{0}$ is disjoint from ${P}$. Then ${\gamma}_{0}$ and ${P}\cap\Sigma_{}$ bound an infinite strip $S\subset\Sigma_{}$ lying on one side of ${P}$. Since $S/\langle\tau\rangle$ is compact, there exists a point $p_S\in S$ which has the maximum distance from ${P}$ among all points of $S$. $\gamma_0$ is a constant distance away from $P$ and the inward unit conormals to $\gamma_0$ on $\Sigma$ wind around it once in its fundamental piece. So there is a point in $\gamma_0$ at which the inward unit conormal to $\gamma_0$ points away from $P$. Then in
\begin{center}
\includegraphics[width=3.11in]{cases9.jpg}\\
\end{center}
that direction the distance from $P$ increases, hence $p_S$ is not a point of $\gamma_0$ but an interior point of $S$. However, this contradicts the maximum principle. Consequently, no tangent plane to $\Sigma$ can be vertical at any point of $\Sigma\setminus\gamma_0$. Even if ${P}\cap\Sigma$ consists of six curves or more, the same argument works.
We now show that the interior of $\Sigma$ does not intersect $\gamma_0$. Let $\psi:[0,\bar{a}]\times\mathbb R\rightarrow\mathbb R^3$ be the periodically area minimizing conformal harmonic map such that $\psi([0,\bar{a}]\times\mathbb R)=\Sigma$, $\psi(\{0\}\times\mathbb R)=\gamma_0$ and $\psi(\{\bar{a}\}\times\mathbb R)=\gamma_1$. Suppose there exists an interior point $p\in(0,\bar{a})\times\mathbb R$ such that $\Sigma$ intersects $\gamma_0$ at $\psi(p)$. Define $f(q)=x_1(q)^2+x_2(q)^2$ for $q\in\Sigma$. Let $\mathcal{F}$ be the family of all arcs on $\Sigma$ connecting $\gamma_0$ to $\psi(p)$. Let's find a saddle point in $\Sigma$ for the function $f$. Define
$$A={\rm min}_{\alpha\in\mathcal{F}}\,{\rm max}_{q\in\alpha}f(q).$$
Clearly there exists a saddle point $q_0$ in $\Sigma$ such that $f(q_0)=A$. Suppose $A=0$. Then there is an arc $\tilde{\alpha}\subset[0,\bar{a}]\times\mathbb R$ connecting $\{0\}\times\mathbb R$ to $p$ such that $f\equiv0$ on $\psi(\tilde{\alpha})$. Since $\Sigma$ is periodically area minimizing, it has no interior branch point. Neither does $\Sigma$ have a boundary branch point on $\gamma_0$. Hence $\psi$ is an immersion on $[0,\bar{a})\times\mathbb R$. But $\psi$ maps $(\{0\}\times\mathbb R)\cup\tilde{\alpha}$ onto $\gamma_0$ if $f\equiv0$ on $\psi(\tilde{\alpha})$. This is not possible for the immersion $\psi$. Hence $A$ cannot be equal to $0$. Since $\nabla f=0$ at $q_0$, the tangent plane to $\Sigma$ at $q_0$ is parallel to $\gamma_0$ and hence it must be vertical. This is a contradiction. Therefore the interior of $\Sigma$ does not intersect $\gamma_0$.
Henceforth we show that $\hat{\Sigma}\setminus\gamma_{0}$ is a graph over the $x_1x_2$-plane, where $\hat{\Sigma}$ is a fundamental region of $\Sigma$. By (a) we know that $m\equiv2$ on $\gamma_0$. Hence, given a vertical half plane $Q$ emanating from $\gamma_{0}$ and a suitably chosen fundamental region $\hat{\Sigma}$ of $\Sigma$, $\overline{Q\cap\hat{\Sigma}\setminus\gamma_0}$ is a single smooth curve joining $\gamma_{0}$ to $\gamma_1$. Since the interior of $\Sigma$ does not intersect $\gamma_0$, the projection map $\pi|_{Q\cap\hat{\Sigma}\setminus\gamma_0}$ is one-to-one near $\gamma_0$. As $\pi(\gamma_1)$ is convex and $\pi|_{\hat{\Sigma}\cap\gamma_1}$ is one-to-one, hence $\pi(\Sigma)$ lies inside $\pi(\gamma_1)$ and $\pi|_{\hat{\Sigma}}$ is one-to-one near $\gamma_1$. Suppose the curve $Q\cap\hat{\Sigma}\setminus\gamma_0$ contains a point $p$ at which its tangent line is vertical. Then the tangent plane to $\Sigma$ at $p$ is also vertical, which is a contradiction. Hence $Q\cap\hat{\Sigma}\setminus\gamma_0$ admits a one-to-one projection into $\pi(Q)$ for all $Q$. It follows that $\hat{\Sigma}\setminus{\gamma}_{0}$ is a 2-dimensional graph over $\pi(\hat{\Sigma}\setminus{\gamma}_{0})$. Hence $\Sigma$ is embedded.
(c) Suppose there exist two periodic Plateau solutions $\Sigma_1,\Sigma_2$ spanning $\Gamma$. Assume that their fundamental regions $\hat{\Sigma}_1,\hat{\Sigma}_2$ are the graphs of analytic functions $f_1,f_2:D\subset x_1x_2$-plane $\rightarrow\mathbb R$, $D:=\pi({\Sigma}_1\setminus\gamma_0)=\pi({\Sigma}_2\setminus\gamma_0)$. Assume also that $f_1\geq f_2$. If there exists an interior point $p\in D$ such that $(f_1-f_2)(p)={\rm max}_{q\in D}(f_1-f_2)(q)$, we have a contradiction to the maximum principle. Hence $f_1-f_2$ has no interior maximum in $D$. Since $f_1-f_2\equiv0$ on $\pi(\gamma_1)$, it can have a maximum only at $\pi(\gamma_0)=(0,0)$. However, the maximum is attained {\it anglewise} as follows. Let $M={\rm sup}_{q\in D}(f_1-f_2)(q)$. Given a half plane $Q$ emanating from $\gamma_0$, let $M_Q={\rm sup}_{q\in {Q\cap D}}(f_1-f_2)(q).$ Then $M={\rm max}_{Q}M_Q$. Hence there exists a half plane $Q_1$ emanating from $\gamma_0$ such that
$$M=\lim_{q\in \ell,\,q\rightarrow (0,0)}(f_1-f_2)(q), \,\,\,{\rm where}\,\,\ell=Q_1\cap D.$$
Then the parallel translate of $\Sigma_2$ by $M$, denoted as $\Sigma_2+M$, still contains $\gamma_0$ as $\Sigma_1$ does, lies on one side of $\Sigma_1$ (above $\Sigma_1$) and is tangent to $\Sigma_1$ at the point of $\gamma_0$ with $x_3=q_1:=\lim_{q\in \ell,\,q\rightarrow(0,0)}f_1(q)$. Hence by the boundary maximum principle (boundary point lemma), $f_2+M\equiv f_1$, that is, $\Sigma_2+M=\Sigma_1$. Since $\Sigma_2+M$ spans $\Gamma+M$ while $\Sigma_1$ spans $\Gamma$, $M$ must equal $0$, and the uniqueness of $\Sigma$ follows.
\end{proof}
\section{Smyth's Theorem}
It was H.A. Schwarz \cite{Sc} who first constructed a triply periodic minimal surface in $\mathbb R^3$. He started from a regular tetrahedron, four edges of which form a Jordan curve, which in turn generates a unique minimal disk. Schwarz found this surface using specific Weierstrass data. By applying his reflection principle he was able to extend the minimal disk across its linear boundary to obtain the $D$-surface. Then Schwarz introduced its conjugate surface, which he called the $P$-surface. This surface is embedded and triply periodic just like the $D$-surface. Moreover, part of it is a free boundary minimal surface in a cube.
It is interesting to notice that both $D$-surface and $P$-surface have fundamental regions which are free boundary minimal disks in two specific tetrahedra, respectively. However, this is not an accident; B. Smyth \cite{Sm} showed surprisingly that {\it any} tetrahedron contains as many as {\it three} free boundary minimal disks. In the remainder of the paper we are interested in applying Smyth's method to the periodic Plateau solutions. To do so, we shall first review Smyth's theorem in this section.
Given a tetrahedron $T$ in $\mathbb R^3$, let $F_1,F_2,F_3,F_4$ be its faces and $\nu_1,\nu_2,\nu_3,\nu_4$ the outward unit normals to the faces, respectively. Then any three of $\nu_1,\nu_2,\nu_3,\nu_4$ are linearly independent, but the four of them together are linearly dependent. Hence there exist positive numbers $c_1,c_2,c_3,c_4$ such that
\begin{equation}\label{lr}
c_1\nu_1+c_2\nu_2+c_3\nu_3+c_4\nu_4=0.
\end{equation}
In fact, we may assume
$$c_i={\rm Area}(F_i),\,\,\,i=1,2,3,4.$$
This is due to the divergence theorem applied on the domain $T$ to the gradients of the harmonic functions $x_1,x_2,x_3$, the Euclidean coordinates of $\mathbb R^3$.
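Spelling out the computation: each coordinate function $x_j$ is harmonic with $\nabla x_j=e_j$, the $j$th coordinate vector, so the divergence theorem on $T$ gives
$$0=\int_T\Delta x_j\,dV=\int_{\partial T}\frac{\partial x_j}{\partial\nu}\,dA=\sum_{i=1}^4\int_{F_i}e_j\cdot\nu_i\,dA=\sum_{i=1}^4{\rm Area}(F_i)\,e_j\cdot\nu_i,\qquad j=1,2,3,$$
and the three identities together say precisely that $\sum_{i=1}^4{\rm Area}(F_i)\,\nu_i=0$.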
By (\ref{lr}) we see that there exists an oriented skew quadrilateral $\Gamma$ whose edges (as vectors) are $c_1\nu_1,c_2\nu_2,c_3\nu_3,c_4\nu_4$. The Jordan curve $\Gamma$ bounds a unique minimal disk $\Sigma$, which is the image $X(D)$ of a conformal harmonic map $X:=(x_1,x_2,x_3)$. It is well known that $x_1,x_2,x_3$ are also harmonic on $\Sigma$. Hence there exist their conjugate harmonic functions $x_1^*,x_2^*,x_3^*$ on $\Sigma$. Then $X^*:=(x_1^*,x_2^*,x_3^*)$ defines a conformal harmonic map from $D$ onto $\Sigma^*$ in $\mathbb R^3$. $X^*\circ X^{-1}:\Sigma\rightarrow\Sigma^*$ is a local isometry because of the Cauchy-Riemann equations. Therefore $\Sigma^*$ is a minimal surface locally isometric to $\Sigma$.
Let $y_i=b_i^1x_1+b_i^2x_2+b_i^3x_3$ be a linear function in $\mathbb R^3$ such that $\nabla y_i=c_i\nu_i,\,i=1,2,3,4$. Then $y_i$ is constant($=d_i$) on the face $F_i$. Suppose $u,v$ are isothermal coordinates on $\Sigma$ such that $v$ is constant along the edge $c_i\nu_i$. Then $dX(\frac{\partial}{\partial v})$ is perpendicular to the vector $c_i\nu_i$ on the edge $c_i\nu_i$. Hence $\frac{\partial y_i}{\partial v}=0$, and by Cauchy-Riemann $\frac{\partial y_i^*}{\partial u}=0$ on $c_i\nu_i$ as well, where $y_i^*:=b_i^1x_1^*+b_i^2x_2^*+b_i^3x_3^*$. Therefore $y_i^*$ is constant along the edge $c_i\nu_i$, meaning that the image $X^*(c_i\nu_i)$ lies on the plane $\{y_i^*=d_i^*\}$ for some constant $d_i^*$.
$dX(\frac{\partial}{\partial u})$ is parallel to $\nabla y_i$ along the edge $c_i\nu_i$. By Cauchy-Riemann, there exists a number $c(p)$ at $p\in c_i\nu_i$ such that
\begin{equation}\label{*}
c(p)(b_i^1,b_i^2,b_i^3)=dX(\frac{\partial}{\partial u})=dX^*(\frac{\partial}{\partial v}).
\end{equation}
Hence $dX^*(\frac{\partial}{\partial v})$ is parallel to $(b_i^1,b_i^2,b_i^3)$. Therefore $\Sigma^*$ is perpendicular to the plane $\{y_i^*=d_i^*\}$ along $X^*(c_i\nu_i)$. In conclusion, $\Sigma^*$ is locally isometric to $\Sigma$ and is a free boundary minimal surface in a tetrahedron $T'$ which is similar to $T$. Thus $T$ contains a free boundary minimal surface which is a homothetic expansion of $\Sigma^*$.
The skew quadrilateral $\Gamma$ depends on the order of $c_1\nu_1,c_2\nu_2,c_3\nu_3,c_4\nu_4$. Any edge of the four can be chosen to be the first in a quadrilateral. Hence there are $6=3!$ orderings of the four edges. But they can be paired off into three quadrilaterals with two opposite orientations. To be precise, for example, if the quadrilateral $\Gamma_1$ determined by four ordered vectors $(u,v,w,x)$ is reversely traversed, we get the quadrilateral $-\Gamma_1$ for the ordering $(-u,-x,-w,-v)$. Define an orthogonal map $\xi(p)=-p,\, p\in\mathbb R^3$, then $\xi(-\Gamma_1)$ is the quadrilateral determined by $(u,x,w,v)$. $\xi(-\Gamma_1)$ cannot be obtained from $\Gamma_1$ by a Euclidean motion. Even so, the two minimal disks spanning $\Gamma_1$ and $\xi(-\Gamma_1)$ are intrinsically isometric. Moreover, their conjugate surfaces are extrinsically isometric, i.e., they are identical modulo a Euclidean motion. Therefore the six orderings of the four edges yield three geometrically distinct conjugate minimal disks which, if properly expanded, will be free boundary minimal surfaces in $T$.
\section{Free boundary minimal annulus}
By applying Smyth's theorem to a translationally periodic solution of the periodic Plateau problem we are going to construct four free boundary minimal annuli in a tetrahedron.
\begin{theorem}\label{fb}
Let $T$ be a tetrahedron with faces $F_1,F_2,F_3,F_4$ in $\mathbb R^3$ and let $\pi_i$ be the orthogonal projection onto the plane $P_i$ containing $F_i, i=1,2,3,4$.
\begin{itemize}
\item[(a)] If every dihedral angle of $T$ is $\leq90^\circ$, there exist four free boundary minimal annuli $A_1,A_2,A_3,A_4$ in $ T$.
\item[(b)] If at least one dihedral angle of $T$ is $>90^\circ$, there exist four minimal annuli $A_1,A_2,A_3$, $A_4$ which are perpendicular to $\cup_{j=1}^4 P_j$ along $\partial A_i$. Part of $A_i$ may lie outside $T$ if a dihedral angle is nearer to $180^\circ$. {\rm (}See {\rm Figure 6, right.)} Near $\partial A_i$, however, $A_i$ lies in the same side of $P_j$ as $T$ does. Moreover, $\partial A_i$ equals $\Gamma_i^1\cup\Gamma_i^2$, where $\Gamma_i^1$ is a closed convex curve in $P_i$ and $\Gamma_i^2$ is a closed, piecewise planar curve in $P_j\cup P_k\cup P_l$ with $ \{i,j,k,l\}=\{1,2,3,4\}$.
\item[(c)] If the three dihedral angles along $\partial F_i$ are $\leq90^\circ$, then $A_i$ lies inside $ T$. $\Gamma_i^1$ is a closed convex curve in $F_i$ and $\Gamma_i^2$ is a closed, piecewise planar curve in $\partial T\setminus F_i$. {\rm (}See {\rm Figure 6, left.)}
\item[(d)] Each planar curve in $\Gamma_i^2$ is convex and is perpendicular to the lines containing the edges of $T$ at its end points.
\item[(e)] $A_i$ is an embedded graph over $\pi_i(A_i)$.
\end{itemize}
\end{theorem}
\begin{center}
\includegraphics[width=5.5in]{obtuse9.jpg}\\
\end{center}
\begin{proof}
As in the preceding section, $\nu_i$ denotes the outward unit normal to $F_i$. Again, there are positive constants $c_i={\rm Area}(F_i)$ such that $c_1\nu_1+c_2\nu_2+c_3\nu_3+c_4\nu_4=0$. Assume that $\nu_4$ is parallel to the $x_3$-axis so that $F_4$ is contained in the $x_1x_2$-plane. Denote the $x_1x_2$-plane by $P_4$ and recall that $\pi_4$ denotes the orthogonal projection onto $P_4$. Since
$$\pi_4(c_1\nu_1)+\pi_4(c_2\nu_2)+\pi_4(c_3\nu_3)=0,$$
$\pi_4(c_1\nu_1),\pi_4(c_2\nu_2),\pi_4(c_3\nu_3)$ determine the boundary of a triangle $\Delta_4\subset P_4$, that is, $\pi_4(c_i\nu_i)$ is the $i$th oriented edge of $\Delta_4$, $i=1,2,3$. $\pi_4(c_i\nu_i)$ is perpendicular to the boundary edge $F_i\cap F_4$ of $F_4$. Also $\pi_4(c_i\nu_i)$ is perpendicular to the corresponding edge of $J(\Delta_4)$, where $J$ denotes the counterclockwise $90^\circ$ rotation on $P_4$. Therefore $\Delta_4$ is similar to $F_4$.
Choose a point $q$ from the interior $\check{\Delta}_4$ of $\Delta_4$ and let $\bar{\gamma}_q$ be the vertical line segment starting from $q$ and corresponding to (i.e., having the same length and direction as) $-c_4\nu_4$. Let $\bar{\gamma}_1$ be a connected piecewise linear open curve starting from a vertex of $\Delta_4$ that is the starting point of the oriented edge $\pi_4(c_1\nu_1)$ such that $\bar{\gamma}_1$ is the union of the three oriented line segments corresponding to the ordered vectors $c_1\nu_1,c_2\nu_2,c_3\nu_3$. Then $\pi_4(\bar{\gamma}_1)=\partial\Delta_4$. Also the endpoints of $\bar{\gamma}_1$ and $\bar{\gamma}_q$ are in ${\Delta}_4$ and in its parallel translate. One can extend $\bar{\gamma}_q\cup\bar{\gamma}_1$ into a complete translationally periodic curve $\Gamma_q:=\gamma_q\cup\gamma_1$ such that $\bar{\gamma}_q\cup\bar{\gamma}_1,\bar{\gamma}_q,\bar{\gamma}_1$ become fundamental pieces of $\Gamma_q,\gamma_q,\gamma_1$, respectively. By Theorem \ref{main} and Theorem \ref{plateau} there uniquely exists a simply connected minimal surface $\Sigma_q$ spanning $\Gamma_q$. $\Sigma_q$ has the same translational periodicity as $\Gamma_q$ does. (See Figure 7.)
Let $\Sigma_q^*$ be the conjugate minimal surface of $\Sigma_q$ and denote by $Y_q^*=X_q^*\circ X_q^{-1}$ the local isometry from $\Sigma_q$ to $\Sigma_q^*$. By Smyth's arguments in the preceding section we see that the image $Y_q^*(c_i\nu_i)$ of the edge $c_i\nu_i$ is in a plane parallel to the face $F_i$. More precisely, $Y_q^*(c_i\nu_i)$ lies in the plane $\{y_i^*=d_i^*\}$, where $\nabla y_i^*=c_i\nu_i$. However, $Y_q^*(\bar{\gamma}_q)$ is not closed in general because $Y_q^*$ may have nonzero period along $\bar{\gamma}_q$. But note that by Cauchy-Riemann the period of $Y_q^*$ along $Y_q^*(\bar{\gamma}_q)$ equals the flux of $\Sigma_q$ along $\bar{\gamma}_q$. Therefore in order to make $\Sigma_q^*$ a well-defined compact minimal annulus, we need to find a suitable point $q$ in $\check{\Delta}_4$ for which the flux of $\Sigma_q$ along $\bar{\gamma}_q$ becomes the zero vector. Note here that the flux of $\Sigma_q$ along $\bar{\gamma}_1$ vanishes if and only if the flux of $\Sigma_q$ along $\bar{\gamma}_q$ does.
\begin{center}
\includegraphics[width=5.5in]{procedure9.jpg}\\
\end{center}
Let $n(p)$ be the inward unit conormal to $\bar{\gamma}_q$ on $\Sigma_q$ at $p\in\bar{\gamma}_q$ and define
$$f(q)=\int_{p\in\bar{\gamma}_q}n(p).$$
Then $f(q)$ is the flux of $\Sigma_q$ along $\bar{\gamma}_q$ and $f$ is a map from the interior $\check{\Delta}_4$ to the set $N$ of vectors parallel to the plane $P_4$. $f$ is a smooth map and can be extended continuously to the closed triangle $\Delta_4$. Let ${\Delta}_4\times\mathbb R$ be the vertical prism over ${\Delta}_4$. Obviously $\Sigma_q$ lies inside ${\Delta}_4\times\mathbb R$. Since $\bar{\gamma}_1$ winds around $\bar{\gamma}_q$ once, so does $n(p)$ as $p$ moves along $\bar{\gamma}_q$. But as $q$ approaches a point $\tilde{q}\in\partial\Delta_4$, $\Gamma_q$ converges to a complete translationally periodic curve ${\Gamma}_{\tilde{q}}:={\gamma}_{\tilde{q}}\cup{\gamma}_1$ of which $\bar{\gamma}_{\tilde{q}}\cup\bar{\gamma}_1$ is a fundamental piece. Let $\tau$ be the translation defined by $\tau(\bar{p})=\bar{p}-c_4\nu_4,\, \bar{p}\in\mathbb R^3$. Since $\bar{\gamma}_{\tilde{q}}$ intersects $\bar{\gamma}_1$, ${\Gamma}_{\tilde{q}}$ is a periodic union of Jordan curves, or more precisely, ${\Gamma}_{\tilde{q}}=\cup_n\tau^n({{\gamma}}_{1\tilde{q}})$, where ${\gamma}_{1\tilde{q}}$ is a Jordan curve which is a subset of $(\bar{\gamma}_{\tilde{q}}\cup\bar{\gamma}_1)\cup\tau(\bar{\gamma}_{\tilde{q}}\cup\bar{\gamma}_1)$. ${\gamma}_{1\tilde{q}}$ consists of five (or four if $\bar{\gamma}_{\tilde{q}}$ passes through a vertex of $\bar{\gamma}_1$) line segments. It is known that the total curvature of ${\gamma}_{1\tilde{q}}$ equals the length of its tangent indicatrix $T_{1\tilde{q}}$. $T_{1\tilde{q}}$ is comprised of (i) a geodesic triangle and a geodesic with multiplicity 2 in case ${\gamma}_{1\tilde{q}}$ consists of five line segments or (ii) four geodesics connecting the four points in $\mathbb S^2$ that correspond to $\nu_1,\nu_2,\nu_3,\nu_4$. Since the length of a geodesic triangle is less than $2\pi$ and the length of a geodesic is less than $\pi$, the total length of $T_{1\tilde{q}}$ is smaller than $4\pi$ in either case. Thus by \cite{N} there exists a unique minimal disk spanning ${\gamma}_{1\tilde{q}}$ . As a matter of fact, we can easily extend the proof of Theorem \ref{plateau} (c) to the limiting case where $\bar{\gamma}_1$ intersects $\bar{\gamma}_0$. So we can see that ${\gamma}_{1\tilde{q}}$ bounds a unique minimal surface $\hat{\Sigma}_{\tilde{q}}\subset\Delta_4\times\mathbb R$ regardless of its topology. As $q\rightarrow\tilde{q}\in\partial\Delta_4$, a fundamental region of $\Sigma_q$ converges to $\hat{\Sigma}_{\tilde{q}}$. Hence, by continuity of the extended map $f:\Delta_4\rightarrow N$, $f(q)$ converges to $f(\tilde{q})=\int_{p\in{\bar{\gamma}}_{\tilde{q}}}n(p)$ which is the flux of $\hat{\Sigma}_{\tilde{q}}$ along ${\bar{\gamma}}_{\tilde{q}}\subset\partial\Delta_4\times\mathbb R$. Therefore, as $n(p)$ points into the interior of $\Delta_4$ at any $p\in{\bar{\gamma}_{\tilde{q}}}$, $f(\tilde{q})$ is a nonzero horizontal vector pointing toward the interior of $\Delta_4$.
Now we are ready to show that there is a point $q$ in the interior $\check{\Delta}_4$ at which the flux $f(q)$ vanishes. Suppose $f(q)\neq0$ for all $q\in\check{\Delta}_4$ and define a map $\tilde{f}:{\Delta}_4\rightarrow \mathbb S^1$ by
$$\tilde{f}(q)=\frac{f(q)}{|f(q)|}.$$
Then $\tilde{f}$ is continuous and $\tilde{f}\big|_{\partial\Delta_4}$ has winding number $1$ because the nonzero horizontal vector $f(\tilde{q})$ points toward the interior $\check{\Delta}_4$ at any $\tilde{q}\in\partial\Delta_4$. But this is a contradiction: $\tilde{f}$ is defined on all of $\Delta_4$, so the loop $\tilde{f}\big|_{\partial\Delta_4}$ is null-homotopic in $\mathbb S^1$ and must have winding number $0$. Therefore there exists $q_4\in\check{\Delta}_4$ with $f(q_4)=0$, that is, the minimal surface $\Sigma_{q_4}$ has zero flux along $\bar{\gamma}_{q_4}$. Thus the conjugate surface $\Sigma_{q_4}^*$ is a well-defined minimal annulus. (See Figure 7.)
It remains to show that a homothetic expansion of $\Sigma_{q_4}^*$ is in $T$ and perpendicular to $\partial T$ along its boundary. According to the arguments of Smyth's theorem, there exist constants $d_1^*,d_2^*,d_3^*,d_4^*$ such that the curve $Y^*_{q_4}(c_i\nu_i)$ is in the plane $\{y_i^*=d_i^*\}$ and $\Sigma_{q_4}^*$ is perpendicular to that plane along $Y^*_{q_4}(c_i\nu_i)$. Moreover, the outward unit conormal to $Y_{q_4}^*(c_i\nu_i)$ on $\Sigma_{q_4}^*$ is $\nu_i$ and hence near $Y_{q_4}^*(c_i\nu_i)$, $\Sigma_{q_4}^*$ lies in the same side of the plane $\{y_i^*=d_i^*\}$ as $T'$ does. Remember that the four planes $\cup_{i=1}^4\{y_i=d_i\}$ enclose the tetrahedron $T$ and $\cup_{i=1}^4\{y_i^*=d_i^*\}$ enclose the tetrahedron $T'$. Since $y_i=b_i^1x_1+b_i^2x_2+b_i^3x_3$ and $y_i^*=b_i^1x_1^*+b_i^2x_2^*+b_i^3x_3^*$, $T'$ is similar to $T$. As $\nu_4$ is assumed to be parallel to the $x_3$ axis, $y_4^*=b_4^*x_3^*$.
Obviously a homothetic expansion of $\Sigma^*_{q_4}$ will give a minimal annulus $A_4$ which is perpendicular to $\cup_{i=1}^4\{y_i=d_i\}$ along $\partial A_4$. Working with a new plane $P_j$ containing $F_j, j=1,2,3,$ instead of $F_4$ and using the triangles $\Delta_j \subset P_j$, obtained from the relation for the projection $\pi_j$ into $P_j$:
$$\left(\sum_{i=1}^4\pi_j(c_i\nu_i)\right)-\pi_j(c_j\nu_j)=0,\,\,\,j=1,2\,\,{\rm or}\,\,3,$$
one can similarly find minimal annuli $A_1,A_2,A_3$ which are homothetic expansions of $\Sigma_{q_j}^*$ for some $q_j\in\Delta_j,j=1,2,3$. This proves (b) except for the convexity of the closed curve.
Let's denote by $F_j'$ the face of $T'$ which is similar to the face $F_j$ of $T$, $j=1,2,3,4$. Is it true that $\partial\Sigma_{q_j}^*\subset\partial T'$? Here we have to be careful because $Y^*_{q_j}(\bar{\gamma}_{q_j})$ and $Y^*_{q_j}(\bar{\gamma}_{1})$ are {\it disconnected}. (Notice that $\partial\Sigma^*$ is connected in Smyth's case.) Consequently, for $j=4$, $Y^*_{q_4}(\bar{\gamma}_{1})$ is not necessarily a subset of $\partial T'\setminus\{y_4^*=d_4^*\}$ and it may intersect the plane $\{y_4^*=d_4^*\}(=\{x_3^*=0\})$ as in Figure 6, right. To get some information about the location of $\partial\Sigma_{q_4}^*$, let's first assume that (d) and (e) are true. Since near $Y_{q_4}^*(c_i\nu_i), i=1,2,3,$ $\Sigma_{q_4}^*$ lies in the same side of the plane $\{y_i^*=d_i^*\}$ as $T'$ does and since $Y_{q_4}^*(c_i\nu_i)$ are convex and are perpendicular on their endpoints to the three lines containing the edges $F_1'\cap F_2'$, $F_2'\cap F_3'$, $F_3'\cap F_1'$, respectively, one can conclude that {\it (i)} $Y_{q_4}^*(\bar{\gamma}_1)$ lies in the tangent cone $TC_{p_4'}(\partial T')$ of $\partial T'$ at $p_4'$, the vertex of $T'$ opposite $F_4'$. As $\Sigma_{q_4}^*$ is a graph over $\pi_4(\Sigma_{q_4}^*)$, {\it (ii)} $Y_{q_4}^*(\bar{\gamma}_{q_4})$ is surrounded by $\pi_4(Y_{q_4}^*(\bar{\gamma}_1))$ in the plane $\{y_4^*=d_4^*\}$.
Now let's prove a lemma which is more general than (c). If the dihedral angles along $\partial F_4$ are $\leq90^\circ$, the unit normals $\nu_1,\nu_2,\nu_3$ are pointing upward and $\bar{\gamma}_1$ goes upward. So one can consider the following generalization.
\begin{lemma}\label{convexity}
Let $\Gamma=\gamma_0\cup\gamma_1$ be a translationally periodic curve and $\gamma_0$ the $x_3$-axis. Assume that $\Sigma_\Gamma$ is a translationally periodic Plateau solution spanning $\Gamma$. If $x_3$ is a nondecreasing function on $\gamma_1$, then the boundary component of $\Sigma_\Gamma^*$ corresponding to $\gamma_0$ is in the $x_1^*x_2^*$-plane and $\Sigma_\Gamma^*$ is on and above the $x_1^*x_2^*$-plane.
\end{lemma}
\begin{proof}
$\Sigma_\Gamma$ has no horizontal tangent plane $T_p\Sigma_\Gamma$ at any interior point $p\in\Sigma_\Gamma$. This can be verified as follows. Every horizontal plane $\{x_3=h\}$ intersects $\Gamma$ either at two points only or at infinitely many points (the second case occurs when $\{x_3=h\}\cap\gamma_1$ is a curve of positive length). If $T_p\Sigma_\Gamma=\{x_3=h\}$, then $\{x_3=h\}\cap\Sigma_\Gamma$ is the union of at least four curves emanating from $p$. But then three of them intersect $\gamma_1$ and hence there exists a domain $D\subset\Sigma_\Gamma$ with $\partial D\subset\{x_3=h\}$, which contradicts the maximum principle. Hence $\{x_3=h\}$ is transversal to $\Sigma_\Gamma$ for every $h$ and therefore, by the Cauchy-Riemann equations, $x_3^*$ is an increasing function on every horizontal section $\{x_3=h\}\cap\Sigma_\Gamma$. Since $x_3^*=0$ on $\gamma_0$, $x_3^*$ must be nonnegative on $\Sigma_\Gamma^*$.
\end{proof}
If the dihedral angles along $\partial F_4$ are $\leq90^\circ$, then by the above lemma $Y_{q_4}^*(\bar{\gamma}_1)\subset \partial T'\setminus F_4'$. By (e), which will be proved independently, $Y_{q_4}^*(\bar{\gamma}_{q_4})$ is surrounded by $\pi_4(Y_{q_4}^*(\bar{\gamma}_1))$ and hence $Y_{q_4}^*(\bar{\gamma}_{q_4})$ lies inside $F_4'$. This proves (c) (except for convexity) and (a) as well.
We now derive the convexity of $\partial\Sigma_{q_4}^*$ as follows. Henceforth our proof will be independent of (a), (b), (c). It should be mentioned that $\Sigma_{q_4}^*$ has been constructed independently of (d) and (e). Let $Q$ be a vertical half plane emanating from $\bar{\gamma}_{q_4}$, that is, $\partial Q\supset\bar{\gamma}_{q_4}$. Then $Q\cap\bar{\gamma}_1$ is a single point unless $Q$ contains the two boundary points of $\bar{\gamma}_1$. Let $q$ be a point of $\bar{\gamma}_{q_4}$ which is the end point of $Q\cap(\Sigma_{q_4}\setminus\bar{\gamma}_{q_4})$. Here we claim that in a neighborhood $U$ of $q$, $C:=U\cap Q\cap(\Sigma_{q_4}\setminus\bar{\gamma}_{q_4})$ is a single curve emanating from $q$. If not, $U\cap Q\cap(\Sigma_{q_4}\setminus\bar{\gamma}_{q_4})$ is the union of at least two curves $C_1,C_2,\ldots$ emanating from $q$. These curves can be extended all the way up to $\bar{\gamma}_{q_4}\cup\bar{\gamma}_1$. In case $Q\cap\partial\bar{\gamma}_1=\emptyset$, so that $Q\cap\bar{\gamma}_1$ is a single point, only one of $C_1,C_2,\ldots$, say $C_1$, can reach the point $Q\cap\bar{\gamma}_1$ and $C_2$ can only reach $\bar{\gamma}_{q_4}$. Since $\Sigma_{q_4}$ is simply connected, $C_2$ and $\bar{\gamma}_{q_4}$ bound a domain $D\subset\Sigma_{q_4}$ with $\partial D\subset Q$. This contradicts the maximum principle. In case $Q$ intersects $\bar{\gamma}_1$ at its boundary points $p_1,p_2$, there exist two curves, say $C_1,C_2\subset Q\cap \Sigma_{q_4}$ emanating from $q$, such that $p_1\in C_1$ and $p_2\in C_2$. Remember that $\bar{\gamma}_{q_4}\cup\bar{\gamma}_1$ is a fundamental piece of $\Gamma_{q_4}$ which is translationally periodic under the vertical translation $\tau$ by $-c_4\nu_4$. Hence $\tau(p_1)=p_2$ and therefore the two distinct curves $\tau(C_1),C_2\subset Q\cap\Sigma_{q_4}$ emanate from $p_2$. But this is not possible since in a neighborhood of $p_2$, $Q\cap\Sigma_{q_4}$ is a single curve emanating from $p_2$. Hence the claim follows.
Note that $\log g=i\,{\rm arg}\,g$ on the straight line $\gamma_{q_4}$ containing $\bar{\gamma}_{q_4}$ because $|g|\equiv1$ there. If $(d/dx_3){\rm arg}\,g=0$ at a point $q\in\gamma_{q_4}$ ($x_3$: the parameter of ${\gamma}_{q_4}$), then for the vertical half plane $Q$ tangent to $\Sigma_{q_4}$ at $q$, $Q\cap(\Sigma_{q_4}\setminus{\gamma}_{q_4})$ will be the union of at least two curves emanating from $q$, contradicting the claim. Hence $g'\neq0$ on ${\gamma}_{q_4}$. Therefore $g'\neq0$ on $\Sigma_{q_4}^*\cap \{y_4^*=d_4^*\}=Y_{q_4}^*(\gamma_{q_4})$ as well and so $\Sigma_{q_4}^*\cap \{y_4^*=d_4^*\}$ is convex. Similarly, let $Q_j$ be a half plane emanating from the line segment $L$ in $\bar{\gamma}_1$ corresponding to $c_j\nu_j$, $j=1,2,3$. Being nonvertical, $Q_j$ intersects ${\gamma}_{q_4}$ only at one point. Hence $Q_j\cap(\Sigma_{q_4}\setminus L)$ is a single curve joining a point $p\in L$ to $Q_j\cap{\gamma}_{q_4}$ and $p$ is a tangent point of $Q_j$ and $\Sigma_{q_4}$. If we rotate $\Sigma_{q_4}$ in such a way that $|g|\equiv1$ on $L$, we can conclude $g'(p)\neq0$ in the same way as above, as long as $p$ is an interior point of $L$. On the other hand, $g'=0$ at the boundary of $L$ because the interior angle at the boundary of $L$ is $<\pi$. Note that any interior point of $L$ can be a tangent point of $Q_j$ and $\Sigma_{q_4}$ for some $Q_j$ emanating from $L$ and that $Q_j$ intersects $\gamma_{q_4}$ at one point only. Therefore $g'\neq0$ in the interior of $L\subset\Sigma_{q_4}$ and hence $g'\neq0$ in the interior of $\Sigma_{q_4}^*\cap \{y_j^*=d_j^*\}=Y^*(L)$. Thus $\Sigma_{q_4}^*\cap \{y_j^*=d_j^*\}$ is convex, $j=1,2,3$.
Since $\Sigma_{q_4}^*$ is perpendicular to $\{y_i^*=d_i^*\}$ and to $\{y_j^*=d_j^*\}$ at $p=\Sigma_{q_4}^*\cap \{y_i^*=d_i^*\}\cap \{y_j^*=d_j^*\}, 1\leq i\neq j\leq3$, so is $\partial \Sigma_{q_4}^*$ to the edge $\{y_i^*=d_i^*\}\cap \{y_j^*=d_j^*\}$ at $p$. This proves (d).
Remark that $Q\cap\bar{\gamma}_1$ being a single point is the key to the convexity of $\Sigma_{q_4}^*\cap \{y_4^*=d_4^*\}$. Therefore one can easily prove the following generalization which is dual to Lemma \ref{convexity}.
\begin{lemma}
Let $\Gamma=\gamma_0\cup\gamma_1$ be a translationally periodic curve and $\gamma_0$ the $x_3$-axis. Assume that $\Sigma_\Gamma$ is a translationally periodic Plateau solution spanning $\Gamma$ and that its conjugate surface $\Sigma_\Gamma^*$ is a well-defined minimal annulus. If a fundamental piece $\bar{\gamma}_1$ of $\gamma_1$ has a one-to-one projection into the $x_1x_2$-plane $\{x_3=0\}$, then the closed curve $\Sigma_\Gamma^*\cap\{x_3^*=0\}$ is convex.
\end{lemma}
Finally let's prove (e). Theorem \ref{plateau} (b) implies that $\hat{\Sigma}_{q_4}\setminus\gamma_{q_4}$ is a graph over $\pi_4(\Sigma_{q_4}\setminus{\gamma}_{q_4})$. The two boundary curves $\partial\hat{\Sigma}_{q_4}\setminus(\gamma_{q_4}\cup\gamma_1)$ are the parallel translates of one another. Therefore $\Sigma_{q_4}$ is embedded. Now we are going to use Krust's argument (see Section 3.3 of \cite{DHKW}) to prove that $\Sigma_{q_4}^*$ is also a graph. Let $X=(x_1,x_2,x_3)$ be the immersion of $[0,a]\times[0,\beta]$ into $\Sigma_{q_4}$ and $X^*=(x_1^*,x_2^*,x_3^*)$ the immersion: $[0,a]\times[0,\beta]\rightarrow\Sigma_{q_4}^*$. We can write the orthogonal projections of $X$ and $X^*$ into the horizontal plane as respectively
$$w(z):=x_1(z)+ix_2(z),\,\,w^*(z):=x_1^*(z)+ix_2^*(z),\,\,z=x+iy,\,\,(x,y)\in[0,a]\times[0,\beta].$$
Then $w$ is a map from $[0,a]\times[0,\beta]$ onto the triangle $\Delta_4$. Given two distinct points $z_1,z_2\in(0,a]\times(0,\beta]$, we have $w(z_1)\neq w(z_2)$ because $X((0,a]\times(0,\beta])$ is a graph over $\Delta_4\setminus\{q_4\}$. Let $\ell:[0,1]\rightarrow \Delta_4$ be the line segment connecting $p_1:=w(z_1)$ to $p_2:=w(z_2)$ with constant speed, that is, $\ell(0)=p_1$, $\ell(1)=p_2$ and $|\dot{\ell}(t)|=|p_2-p_1|$ for all $t\in[0,1]$.
(1) Choosing a fundamental region $\hat{\Sigma}_{q_4}$ of $\Sigma_{q_4}$ suitably, we may suppose $\ell$ is disjoint from $\pi(\partial\hat{\Sigma}_{q_4})$. Then there is a smooth curve $c:[0,1]\rightarrow(0,a]\times(0,2\beta]$ such that $\ell(t)=w(c(t))$. Clearly $|\dot{c}(t)|>0$ for all $0\leq t\leq1$. Let $g:[0,a]\times\mathbb R\rightarrow\mathbb C$ be the Gauss map of $\Sigma_{q_4}$. Krust showed that the inner product $W$ of the two vectors $p_2-p_1$ and $i(w^*(z_2)-w^*(z_1))$ of $\mathbb R^2$ is written as
$$W:=\langle p_2-p_1,i(w^*(z_2)-w^*(z_1))\rangle=\int_0^1\frac{1}{4}|\dot{c}(t)|^2\left(|g(c(t))|^2-\frac{1}{|g(c(t))|^2}\right)dt.$$
Since $\Sigma_{q_4}\setminus{{\gamma}}_{q_4}$ is a multi-graph, we have $|g|>1$ on $(0,a]\times\mathbb R$. Hence $W>0$ and therefore $w^*(z_1)\neq w^*(z_2)$.
(2) Suppose $\ell$ intersects $\pi(\partial\hat{\Sigma}_{q_4})$ at the point $q_4$. Then $c$ is piecewise smooth and there exist $0<d_1<d_2<1$ such that $q_4\notin w(c([0,d_1)))\cup w(c((d_2,1]))$, $w(c([d_1,d_2]))=\{q_4\}$, and $|\dot{c}(t)|>0$ for $t\in[0,d_1)\cup(d_2,1]$. Clearly $$|g(c(t))|=1\,\,{\rm for}\,\, t\in[d_1,d_2],\,\,\,\,|g(c(t))|>1\,\, {\rm for}\,\, t\in[0,d_1)\cup (d_2,1].$$ Hence
$$W=\left(\int_0^{d_1}+
\int_{d_2}^1\right)\,\frac{1}{4}|\dot{c}(t)|^2\left(|g(c(t))|^2-\frac{1}{|g(c(t))|^2}\right)dt>0$$
and so $w^*(z_1)\neq w^*(z_2)$.
Thus we can conclude that $X^*((0,a]\times(0,\beta))$ is a graph over the $x_1^*x_2^*$-plane. Since $X^*([0,a]\times\{0\})$ coincides with $X^*([0,a]\times\{\beta\})$, $X^*((0,a]\times[0,\beta])=\Sigma^*_{q_4}\setminus\gamma_{q_4}$ is also a graph over its projection into the $x_1^*x_2^*$-plane. This proves (e).
\end{proof}
\section{Pyramid}
It has been possible to construct free boundary minimal annuli in a tetrahedron $T$ because $T$ is the simplest polyhedron in $\mathbb R^3$. In general one cannot find a free boundary minimal annulus in a polyhedron like a quadrilateral pyramid $P_y$ in Figure 8. Of course, given a translationally periodic curve $\Gamma_q$ with fundamental piece $\bar{\gamma}_q\cup\bar{\gamma}_1$ corresponding to $c_1\nu_1,\ldots,c_5\nu_5$, respectively, where $\nu_1,\ldots,\nu_5$ are the unit normals to the faces of $P_y$, one can show that there exists a translationally periodic minimal surface $\Sigma_{q}$ spanning $\Gamma_q$. One can also find a point $q_5\in\Delta_5$ such that $\Sigma_{q_5}^*$ is a minimal annulus. However, $\Sigma_{q_5}^*$ may be a free boundary minimal annulus not in $P_y$ but in a polyhedron $P_o$ like Figure 8 which has the same unit normals as those of $P_y$. And yet, in case $P_y$ is a regular pyramid or a rhombic pyramid, we can show that $P_y$ has a free boundary minimal annulus. Surprisingly, we can also show that there exist genus zero free boundary minimal surfaces in every Platonic solid.
\begin{center}
\includegraphics[width=5.4in]{parallel9.jpg}\\
\end{center}
\begin{theorem}\label{ps}
Let $P_y$ be a right pyramid whose base $B$ is a regular $n$-gon. Then there exists a free boundary minimal annulus $A$ in $P_y$ which is a graph over $B$. $A$ is invariant under the rotation by $2\pi/n$ about the line through the apex and the center of $B$. One component of $\partial A$ is convex and closed in $B$ and the other is convex in each remaining face of $P_y$.
\end{theorem}
\begin{proof}
Let $F_1,\ldots,F_n$ be the faces of $P_y$ other than the base $B$. Denote by $\nu_0,\nu_1,\ldots,\nu_n$ the outward unit normals to $B,F_1,\ldots,F_n$, respectively. Then, since the lateral faces $F_1,\ldots,F_n$ are congruent, the divergence theorem argument used before gives a unique positive constant $c={\rm Area}(B)/{\rm Area}(F_1)$ such that
$$c\nu_0+\nu_1+\cdots+\nu_n=0.$$
Assume that $B$ lies in the $x_1x_2$-plane with center at the origin. Let $\bar{\gamma}_0$ be a vertical line segment of length $c$\, on the $x_3$-axis and let $\bar{\gamma}_1$ be a connected piecewise linear curve determined by $\nu_1,\ldots,\nu_n$(i.e., $\nu_i$ is the $i$-th oriented line segment of $\bar{\gamma}_1$) such that the projection $\pi(\bar{\gamma}_1)$ of $\bar{\gamma}_1$ onto the $x_1x_2$-plane is a regular $n$-gon centered at the origin. Moreover, let's assume that the two end points of $\bar{\gamma}_0$ and $\bar{\gamma}_1$ have the same $x_3$-coordinates: $0$ and $c$. $\bar{\gamma}_0\cup\bar{\gamma}_1$ determines a complete helically periodic curve $\Gamma$ of which $\bar{\gamma}_0\cup\bar{\gamma}_1$ is a fundamental piece. $\Gamma$ is translationally periodic as well. Then Theorem \ref{main} guarantees that there exists a translationally periodic minimal surface $\Sigma$ spanning $\Gamma$.
Define the screw motion $\sigma$ by
$$\sigma(r\cos\theta,r\sin\theta,x_3)=\left(r\cos(\theta+\frac{2\pi}{n}), r\sin(\theta+\frac{2\pi}{n}), x_3+\frac{c}{n}\right).$$
Obviously $\Sigma$ is invariant under $\sigma^n$. The point is that $\Sigma$ is invariant under $\sigma$ as well. This is because by Theorem \ref{plateau} the periodic Plateau solution spanning $\Gamma$ uniquely exists and $\sigma(\Sigma)$ also spans $\Gamma$. So evenly divide $\bar{\gamma}_0$ into $n$ line segments $\bar{\gamma}_{0}^1,\ldots,\bar{\gamma}_{0}^n$ such that
$$\bar{\gamma}_{0}^k:=\{p\in\bar{\gamma}_{0}:\frac{k-1}{n}c\leq x_3(p)\leq\frac{k}{n}c\},\,\,\,k=1,\ldots,n.$$
Similarly, set
$$\Sigma^k=\{p\in\Sigma:\frac{k-1}{n}c\leq x_3(p)\leq\frac{k}{n}c\},\,\,\,k=1,\ldots,n.$$
It is clear that
$$\sigma(\bar{\gamma}_0^k)=\bar{\gamma}_0^{k+1},\,\,\sigma(\Sigma^k)=\Sigma^{k+1},\,\,k=1,\ldots,n-1,\,\,\,{\rm and}\,\,\, \sigma(\Sigma^n)=\sigma^n(\Sigma^1).$$
Denote by $f_\gamma(\Sigma)$ the flux of $\Sigma$ along $\gamma\subset\partial\Sigma$, that is,
$$f_\gamma(\Sigma)=\int_{p\in\gamma}n(p),$$
where $n(p)$ is the inward unit conormal to $\gamma$ on $\Sigma$ at $p\in\gamma$. Clearly
$$f_{\sigma(\gamma)}(\sigma(\Sigma))=\sigma(f_\gamma(\Sigma))\,\,\,{\rm and}\,\,\,f_{\bar{\gamma}_{0}}(\Sigma)=\sum_{k=1}^{n}f_{\bar{\gamma}_{0}^k}(\Sigma^k).$$
Hence
\begin{eqnarray*}
\sigma(f_{\bar{\gamma}_{0}}(\Sigma))&=&\sum_{k=1}^n\sigma(f_{\bar{\gamma}_{0}^k}(\Sigma^k))\,\,=\,\,\sum_{k=1}^n f_{\sigma(\bar{\gamma}_{0}^k)}(\sigma(\Sigma^k))\\
&=&\sum_{k=1}^{n-1}f_{\bar{\gamma}_{0}^{k+1}}(\Sigma^{k+1})+f_{\sigma^n(\bar{\gamma}^1_{0})}(\sigma^n(\Sigma^1))\\
&=&\sum_{k=1}^nf_{\bar{\gamma}_{0}^k}(\Sigma^k)\,\,=\,\,f_{\bar{\gamma}_{0}}(\Sigma).
\end{eqnarray*}
But $n(p)$ is horizontal along the vertical segment $\bar{\gamma}_{0}$, so $f_{\bar{\gamma}_{0}}(\Sigma)$ is a horizontal vector, and the only horizontal vector fixed by the rotational part of $\sigma$ (the rotation by $2\pi/n$ about the $x_3$-axis) is the zero vector. Hence $\sigma(f_{\bar{\gamma}_{0}}(\Sigma))=f_{\bar{\gamma}_{0}}(\Sigma)$ holds only when $f_{\bar{\gamma}_{0}}(\Sigma)=0$. In this case $f_{\bar{\gamma}_1}(\Sigma)$ also vanishes. Therefore $\Sigma^*$ is a well-defined minimal annulus.
We now show that $\Sigma^*$ is in $P_y$ with free boundary. Choose a point $p\in\Sigma^k$ with coordinates
$$X(p)=(x_1(p),x_2(p),x_3(p)).$$
Denote by $X^*(p)$ the point of $\Sigma^{k*}$ corresponding to $p\in\Sigma^k$,
$$X^*(p)=(x_1^*(p),x_2^*(p),x_3^*(p)).$$
The coordinates of $\sigma(p)$ are
$$X(\sigma(p))=\left((x_1(p),x_2(p))\cdot\left(\begin{array}{cc}
\cos\alpha&\sin\alpha\\
-\sin\alpha&\cos\alpha
\end{array}\right),\,x_3(p)+\frac{c}{n}\right),\,\,\,\alpha=\frac{2\pi}{n}.$$
Then
\begin{eqnarray*}
X^*(\sigma(p))&=&\left((x_1^*(p),x_2^*(p))\cdot\left(\begin{array}{cc}
\cos\alpha&\sin\alpha\\
-\sin\alpha&\cos\alpha
\end{array}\right),\,x_3^*(p)+0\right)\\
&=&\sigma_0(X^*(p)),
\end{eqnarray*}
where $\sigma_0$ is the rotation in $\mathbb R^3$ defined by
$$\sigma_0(r\cos\theta,r\sin\theta,x_3)=\left(r\cos(\theta+\frac{2\pi}{n}), r\sin(\theta+\frac{2\pi}{n}), x_3\right).$$
Hence
\begin{equation}\label{rot}
(\Sigma^{k+1})^*=\sigma_0(\Sigma^{k*}),\,\,k=1,\ldots,n
\end{equation}
and so
\begin{eqnarray*}
\sigma_0(\Sigma^*)&=&\sigma_0(\Sigma^{1*}\cup\cdots\cup\Sigma^{n*})\,\,=\,\,\Sigma^{2*}\cup\cdots\cup\Sigma^{n*}\cup\sigma_0(\Sigma^{n*})\\
&=&\Sigma^{2*}\cup\cdots\cup\Sigma^{n*}\cup\sigma_0^n(\Sigma^{1*})\,\,=\,\,\Sigma^*.
\end{eqnarray*}
Therefore $\Sigma^*$ is invariant under the rotation $\sigma_0$.
We know that the curve $X^*(\nu_1)$ is in the plane $\{y_1^*=d_1^*\}$ orthogonal to $\nabla y_1^*=\nu_1$ and $\Sigma^*$ is perpendicular to that plane along $X^*(\nu_1)$.
Therefore \eqref{rot} implies that $\Sigma^*$ is a free boundary minimal surface in the pyramid $P_m$ bounded by a plane perpendicular to $\nu_{0}$ and by the $n$ planes $\cup_{i=1}^{n}(\sigma_0)^i(\{y_1^*=d_1^*\})$. $P_m$ is similar to $P_y$ and a homothetic expansion $A$ of $\Sigma^*$ is a free boundary minimal annulus in $P_y$.
By the same argument as in the proof of Theorem \ref{fb} we see that $A$ is a graph over $B$ and $\partial A$ is convex on each face of $P_y$.
There is another way of constructing $\Sigma^*$: Smyth's method. Divide the regular $n$-gon $B$ into $n$ congruent triangles $B_1,\ldots,B_n$. Then one can tessellate $P_y$ by $n$ congruent tetrahedra $T_1,\ldots,T_n$ with the apex of $P_y$ as their common vertex and $B_1,\ldots,B_n$ as their bases. Smyth's theorem gives us three free boundary minimal disks in $T_1$. Among them let's choose the one that is disjoint from the line through the apex and the center of $B$. By reflections one can extend the chosen minimal disk to a free boundary minimal annulus in $P_y$. This annulus must be the same as $A$ by the uniqueness of Theorem \ref{plateau} (c).
\end{proof}
\begin{corollary}\label{pl}
Every Platonic solid with regular $n$-gon faces has five types of embedded, genus zero, free boundary minimal surfaces $\Sigma_1,\ldots,\Sigma_5$. Three of them, $\Sigma_1,\Sigma_2,\Sigma_3$, intersect each face along 1, $n$, $2n$ closed convex congruent curves, respectively. $\Sigma_4$ intersects every edge of the solid and $\Sigma_5$ surrounds every vertex of the solid. {\rm (See Figure 3.)}
\end{corollary}
\begin{proof}
Given a Platonic solid $P_s$, let $p$ be its center and $F$ one of its faces. Then the cone from $p$ over $F$ is a right pyramid with a regular $n$-gon base and hence $P_s$ is tessellated into congruent pyramids. Each pyramid contains an embedded free boundary minimal annulus by Theorem \ref{ps}. The union of all those minimal annuli in the congruent pyramids of the tessellation, denoted as $\Sigma_1$, is the analytic continuation of each minimal annulus into an embedded, genus zero, free boundary minimal surface in $P_s$.
The regular $n$-gon $F$ can be tessellated into $n$ isosceles triangles, one of which is denoted as $F_n$. $F_n$ can be divided into two congruent right triangles, one of which is $F_{2n}$. Then the cone from $p$ over $F_n$ is a tetrahedron $T_n$ and $T_{2n}$ denotes the tetrahedron determined by $p$ and $F_{2n}$. Note here that all the dihedral angles of $T_{2n}$ are $\leq90^\circ$ whereas an edge of $T_n$ has dihedral angle $=120^\circ$ in case $P_s$ is a tetrahedron, an octahedron or an icosahedron. Fortunately, the three dihedral angles of $T_n$ along $\partial F_n$ are $\leq90^\circ$. Hence by Theorem \ref{fb} there exist free boundary minimal annuli $A_2$ in $T_n$ and $A_3$ in $T_{2n}$, one boundary component of which is a closed convex curve in $F_n$ and in $F_{2n}$, respectively. Then $\Sigma_2,\Sigma_3$ are exactly the analytic continuations (by reflection) of $A_2,A_3$, respectively.
On the other hand, Smyth's theorem gives three minimal disks $S_4,S_{5},S_6$ with free boundary in $T_{2n}$. Only one of them, say $S_6$, is disjoint from $\partial F$. Then $S_6$ must be a subset of $\Sigma_1$. Since $P_s$ is tessellated by congruent copies of $T_{2n}$, both $S_4$ and $S_5$ can be extended analytically into free boundary embedded minimal surfaces of genus zero in $P_s$, which we denote as $\Sigma_4$ and $\Sigma_5$. Assuming that $S_4$ connects the two orthogonal edges of $F_{2n}$, we see that every boundary component of $\Sigma_4$ intersects exactly one edge of $P_s$ orthogonally. Then $S_5$ connects two nonorthogonal edges of $F_{2n}$ and hence each boundary component of $\Sigma_5$ surrounds exactly one vertex of $P_s$.
\end{proof}
\begin{corollary}\label{pr}
If $P_r$ is a right pyramid with rhombic base $B$, there exists a free boundary minimal annulus $A$ in $P_r$ which is a graph over $B$. One boundary component of $A$ is convex and closed in $B$ and the other one is convex in each remaining face of $P_r$.
\end{corollary}
\begin{proof}
The proof is similar to that of Theorem \ref{ps}.
\end{proof}
\begin{remark}
(a) As $n\rightarrow\infty$, $P_y$ of Theorem \ref{ps} becomes a right circular cone and then $\Sigma$ will be part of the helicoid and $\Sigma^*$ a catenoidal waist in $P_y$.
(b) In case the Platonic solid $P_s$ is a cube, the free boundary minimal surface $\Sigma_1$ with genus 0 proved to exist in $P_s$ by Corollary \ref{pl} is the same as Schwarz's $P$-surface $S$. This can be verified as follows. The cube $P_s$ is tessellated into six right pyramids with square base. Let $P_y$ be one of them. Then $(\Sigma_1\cap P_y)^*$ and $(S\cap P_y)^*$ are translationally periodic minimal surfaces. Denote their boundaries by $\Gamma_{\Sigma_1}$ and $\Gamma_S$, respectively. $\Gamma_{\Sigma_1}$ and $\Gamma_S$ are piecewise linear and translationally periodic. Since their fundamental pieces are determined by the outward unit normals to the faces of the same pyramid $P_y$, $\Gamma_{\Sigma_1}$ and $\Gamma_S$ must be identical. Let $\bar{\gamma}_0\cup\bar{\gamma}_1$ be their fundamental piece. Since the projection of $\bar{\gamma}_1$ into the base of $P_y$ is a square which is convex, there is only one periodic Plateau solution spanning $\Gamma_{\Sigma_1}$ by Theorem \ref{plateau} (c). Hence $\Sigma_1^*$ must be the same as $S^*$ and therefore $\Sigma_1=S$. Similarly, the free boundary minimal surface $\Sigma_1$ in the regular tetrahedron is the same as the one constructed by Nitsche \cite{N2}.
(c) In the cube $P_s$, $\Sigma_4$ is nothing but {\it Neovius' surface} and $\Sigma_5$ is {\it Schoen's I-WP surface} (see Figure 9). Only in the cube can one construct an extra free boundary minimal surface $\Sigma_6$ as follows. Let $F$ be a square face of the cube and let $F_2$ be a right isosceles triangle which is a half of $F$. Then the tetrahedron that is the cone from the center of $P_s$ over $F_2$ contains three free boundary minimal disks. If we choose the one of the three that connects the two orthogonal edges of $F_2$, then its analytic continuation is the desired $\Sigma_6$. This is {\it Schoen's F-RD surface} which surrounds only four vertices of the cube, whereas Schoen's I-WP surface surrounds all eight vertices of the cube (see Figure 9).
\begin{center}
\includegraphics[width=4.7in]{schoen9.jpg}\\
\end{center}
\end{remark}
We would like to conclude our paper by proposing the following interesting problems.\\
\noindent{\bf Problems.}
\begin{enumerate}
\item What kind of a pyramid $P_y$ with $n$-gon base ($n\geq4$) has a free boundary minimal annulus?
\item Let $\Gamma$ be a Jordan curve in $\mathbb R^3$ bounding a minimal disk $\Sigma$. If the total curvature of $\Gamma$ is $\leq4\pi$, we know that $\Sigma$ is unique \cite{N}. Show that $\Sigma^*$ is the unique minimal disk spanning $\partial\Sigma^*$.
\item Assume that $\Gamma\subset\mathbb R^3$ is a Jordan curve with total curvature $\leq4\pi$. It is proved that any minimal surface $\Sigma$ spanning $\Gamma$ is embedded \cite{EWW}. If $\Sigma$ is simply connected, show that $\Sigma^*$ is also embedded.
\item Let $\Gamma$ be a complete translationally (or helically) periodic curve with a fundamental piece $\bar{\gamma}$. Assume that a translationally(or helically) periodic minimal surface $\Sigma_\Gamma$ spans $\Gamma$. What is the maximum total curvature of $\bar{\gamma}$ that guarantees the uniqueness of $\Sigma_\Gamma$? What about the embeddedness of $\Sigma_\Gamma$?
\item Assume that $\Sigma\subset\mathbb R^3$ is a free boundary minimal annulus in a ball. Show that $\Sigma^*$ is a translationally periodic free boundary minimal surface in a cylinder so that $\Sigma$ is necessarily the critical catenoid.
\end{enumerate}
\end{document}
\begin{document}
\title[Revisiting calculation of moments of number of comparisons]{Revisiting calculation of moments of number of comparisons\\ used by the randomized quick sort algorithm}
\author[Sumit Kumar Jha]{Sumit Kumar Jha\\ Preprint of the following paper in\\ Discrete Mathematics, Algorithms and Applications:\\ \url{http://www.worldscientific.com/doi/pdf/10.1142/S179383091750001X}}
\address{Center for Security, Theory, and Algorithmic Research\\
International Institute of Information Technology,
Hyderabad, India}
\curraddr{}
\email{[email protected]}
\thanks{}
\begin{abstract}
We revisit the method of Kirschenhofer, Prodinger and Tichy for calculating the moments of the number of comparisons used by the randomized quick sort algorithm. We reemphasize that this approach yields these quantities with less computation. We also point out that, as observed by Knuth, this method also gives the moments of the total path length of a binary search tree built over a random set of $n$ keys.
\end{abstract}
\maketitle
\section{Introduction}
\nocite{bworld1}
\nocite{bworld2}
\nocite{bworld3}
\nocite{bworld4}
Consider the following variant of the quick sort algorithm from \cite{Sedgewick2011}: the algorithm recursively sorts the numbers in an array by partitioning it into two smaller and independent subarrays, and thereafter sorting these parts. The partitioning procedure chooses the last element of the array as the \emph{pivot} and puts it in its final position, so that the numbers to its left are smaller than it and those to its right are larger. \par
For purposes of this analysis assume that the input array to the quick sort algorithm contains distinct numbers which are randomly ordered. We may assume the input to the algorithm is simply a permutation of $\{1,2,\cdots,n\}$ (if the input array has $n$ elements).\par
Let $S_{n}$ be the set of all $n!$ permutations of $\{1,2,\cdots,n\}$. Consider a uniform probability distribution on the set $S_{n}$, and define for all $\sigma\in S_{n}$, $C_{n}(\sigma)$ to be the number of comparisons used to sort $\sigma$ by the quick sort algorithm. We wish to calculate mean and variance of $C_{n}$ over the uniform distribution on $S_{n}$.\par
Our aim here is to obtain the following result, following \cite{prodinger52} and \cite{bworld}.
\begin{theorem}[Knuth \cite{KnuthSort}]
\label{mainthm}
We have
$$\normalfont \text{Mean}(C_{n})=2((n+1)H_{n}-n);$$
and
$$\normalfont \text{Var}(C_{n})=7n^{2}-4(n+1)^{2}H_{n}^{(2)}-2(n+1)H_{n}+13n,$$
over the uniform probability distribution on $S_{n}$. Here we have used the notation $H_{n}=\sum_{k=1}^{n}\frac{1}{k}$ and $H_{n}^{(2)}=\sum_{k=1}^{n}\frac{1}{k^{2}}.$
\end{theorem}
Before proceeding we would like to point out that Hennequin \cite{bworld2} has computed the first five cumulants of the number of comparisons of Quicksort. Also, the variance of the number of comparisons of Quicksort is computed in \cite{bworld1}.
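For small $n$ the closed forms of Theorem \ref{mainthm} can be checked by brute force over all $n!$ permutations. The following minimal Python sketch does this; it charges $n+1$ comparisons to a partitioning stage on $n$ elements, with the last element as pivot, matching the recurrence \eqref{eq3} below, and it assumes a stable partition (elements keep their relative order), which preserves the uniform randomness of the subarrays.
\begin{verbatim}
import itertools
from fractions import Fraction

def comparisons(a):
    # A partitioning stage on n >= 1 elements costs n + 1 comparisons;
    # the pivot is the last element and the partition is stable.
    n = len(a)
    if n == 0:
        return 0
    pivot = a[-1]
    left  = [x for x in a[:-1] if x < pivot]
    right = [x for x in a[:-1] if x > pivot]
    return (n + 1) + comparisons(left) + comparisons(right)

def H(n, r=1):
    return sum(Fraction(1, k**r) for k in range(1, n + 1))

for n in range(1, 7):
    counts = [comparisons(list(p)) for p in itertools.permutations(range(n))]
    mean = Fraction(sum(counts), len(counts))
    var  = Fraction(sum(c * c for c in counts), len(counts)) - mean**2
    assert mean == 2 * ((n + 1) * H(n) - n)
    assert var  == 7*n**2 - 4*(n + 1)**2*H(n, 2) - 2*(n + 1)*H(n) + 13*n
\end{verbatim}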
\section{Calculation of mean and variance}
Let $a_{n,s}$ be the number of permutations of $n$ elements requiring a total of $s$ comparisons to sort by the procedure of quicksort.\par
We start by defining the corresponding \emph{probability generating function}:
$$G_{n}(z)=\sum_{k\geq 0}\frac{a_{n,k}\, z^{k}}{n!}.$$
\begin{theorem}
For $n\geq 1$
\begin{equation}
\label{eq1}
G_{n}(z)=\frac{z^{n+1}}{n}\sum_{1\leq j \leq n}G_{n-j}(z)G_{j-1}(z),
\end{equation}
and
\begin{equation}
\label{eq2}
G_{0}(z)=1.
\end{equation}
\end{theorem}
\begin{proof}
The first partitioning stage requires $n+1$ comparisons (for some other variants this might be $n-1$). If the pivot element is the $k$th largest, then the subarrays after partitioning are of sizes $k-1$ and $n-k$. Thus we can write
\begin{equation}
\label{eq3}
a_{n,s}=\sum_{1\leq k \leq n}\binom{n-1}{k-1}\sum_{i+j=s-(n+1)}a_{n-k,i}\,a_{k-1,j}.
\end{equation}
Multiplying equation \eqref{eq3} by $z^{s}$ and dividing by $n!$ we get
\begin{eqnarray*}
\frac{a_{n,s}\, z^{s}}{n!}=\sum_{1\leq k \leq n}\frac{z^{s}}{n}\, \sum_{i+j=s-(n+1)}\frac{a_{n-k,i}}{(n-k)!}\cdot \frac{a_{k-1,j}}{(k-1)!}\\
=\sum_{1\leq k \leq n}\frac{z^{s}}{n}\cdot \left\{\text{coefficient of } z^{s-(n+1)} \text{ in } G_{n-k}(z)\cdot G_{k-1}(z)\right\}\\
=\sum_{1\leq k \leq n}\frac{z^{n+1}}{n}\cdot z^{s-(n+1)} \left\{\text{coefficient of } z^{s-(n+1)} \text{ in } G_{n-k}(z)\cdot G_{k-1}(z)\right\}
\end{eqnarray*}
after which summing on $s$ gives us equation \eqref{eq1}.
\end{proof}
We will now consider the \emph{double generating function} $H(z,u)$ defined by
\begin{equation}
\label{eq4}
H(z,u)=\sum_{n\geq 0}G_{n}(z) u^{n}.
\end{equation}
\begin{corollary} We have
\begin{equation}
\label{eq5}
\frac{\partial H(z,u)}{\partial u}=z^{2}\cdot H^{2}(z,zu),
\end{equation}
and
\begin{equation}
\label{eq6}
H(1,u)=(1-u)^{-1}.
\end{equation}
\begin{proof}
From equation \eqref{eq1} we have
\begin{eqnarray*}
\frac{\partial H(z,u)}{\partial u}=z^{2}\sum_{n\geq 1}(uz)^{n-1}\sum_{1\leq j \leq n}G_{n-j}(z)G_{j-1}(z)\\
=z^{2}\sum_{n\geq 1}(uz)^{n-1} \cdot \left\{\text{coefficient of } (uz)^{n-1} \text{ in } H(z,uz)\cdot H(z,uz) \right\} \\
=z^{2}\cdot H(z,zu)\cdot H(z,zu).
\end{eqnarray*}
Equation \eqref{eq6} follows from the fact that $G_{n}(1)=1$.
\end{proof}
\end{corollary}
Now we write the $s$th factorial moments $\beta_{s}(n)$ of the random variable $C_{n}$ with the aid of the probability generating function $G_{n}(z)$:
\begin{equation}
\label{eq7}
\beta_{s}(n)=\left[\frac{d^{s}}{dz^{s}}G_{n}(z)\right]_{z=1}.
\end{equation}
The generating functions $f_{s}(u)$ of $\beta_{s}(n)$ are
\begin{equation}
\label{eq8}
f_{s}(u)=\sum_{n\geq 0}\beta_{s}(n)u^{n}.
\end{equation}
By Taylor's formula and equation \eqref{eq7} we get
\begin{equation}
\label{eq9}
H(z,u)=\sum_{s\geq 0}f_{s}(u)\frac{(z-1)^{s}}{s!}.
\end{equation}
\begin{theorem}
For integer $s\geq 0$ we have
\begin{equation}
\label{eq10}
f'_{s}(u)=s!\cdot \sum_{i+j+k+l+m=s}\frac{a_{i}\cdot f^{(k)}_{j}(u) \cdot f_{l}^{(m)}(u)\cdot u^{k+m} }{j!\cdot k!\cdot l!\cdot m!},
\end{equation}
where
$$
a_{k}=
\begin{cases}
1&\text{if $k=0$};\\
2&\text{if $k=1$};\\
1&\text{if $k=2$};\\
0&\text{if $k>2$}.
\end{cases}
$$
\end{theorem}
\begin{proof}
Using Taylor's theorem we can write
$$f_{j}(x)=\sum_{k\geq 0}\frac{f_{j}^{(k)}(u)(x-u)^{k}}{k!}$$
which on substituting $x=uz$ gives
\begin{equation}
\label{eq11}
f_{j}(uz)=\sum_{k\geq 0}\frac{f_{j}^{(k)}(u)(z-1)^{k}u^{k}}{k!}.
\end{equation}
\par
Now substituting equation \eqref{eq9} in equation \eqref{eq5} gives:
\begin{eqnarray*}
\sum_{s\geq 0}f'_{s}(u)\frac{(z-1)^{s}}{s!}=z^{2}\cdot \sum_{p\geq 0}f_{p}(uz)\frac{(z-1)^{p}}{p!} \cdot \sum_{r\geq 0}f_{r}(uz)\frac{(z-1)^{r}}{r!}\\
=\sum_{i\geq 0}a_{i}(z-1)^{i}\cdot \sum_{p\geq 0}\frac{(z-1)^{p}}{p!}\sum_{l\geq 0}\frac{f_{p}^{(l)}(u)(z-1)^{l}u^{l}}{l!}\cdot \sum_{r\geq 0}\frac{(z-1)^{r}}{r!}\sum_{m\geq 0}\frac{f_{r}^{(m)}(u)(z-1)^{m}u^{m}}{m!}\\
=\sum_{h\geq 0}(z-1)^{h}\sum_{i+j+k+l+m=h}a_{i}\cdot \frac{1}{j!}\cdot \frac{f^{(k)}_{j}(u)\, u^{k}}{k!}\cdot \frac{1}{l!}\cdot \frac{f_{l}^{(m)}(u)\, u^{m}}{m!}
\end{eqnarray*}
where in the second last line we replaced $z^{2}$ by $\sum_{i\geq 0} a_{i}(z-1)^{i}$. Now comparing coefficients on both sides of the equation gives
$$f'_{s}(u)=s!\cdot \sum_{i+j+k+l+m=s}\frac{a_{i}\cdot f^{(k)}_{j}(u) \cdot f_{l}^{(m)}(u)\cdot u^{k+m} }{j!\cdot k!\cdot l!\cdot m!}.$$
\end{proof}
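A quick consistency check of \eqref{eq10}: for $s=0$ the only contributing term in the sum is $i=j=k=l=m=0$, which gives
$$f_{0}'(u)=a_{0}\,f_{0}(u)^{2}=f_{0}(u)^{2},$$
and this is indeed satisfied by $f_{0}(u)=(1-u)^{-1}$, consistently with \eqref{eq5} and \eqref{eq6} evaluated at $z=1$.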
\begin{remark}
For the asymptotic theory of differential equations originating here we refer the reader to \cite{bworld4}.
\end{remark}
\begin{corollary}
We have
\begin{equation}
\label{eq12}
f_{0}(u)=(1-u)^{-1},
\end{equation}
\begin{equation}
\label{eq13}
f_{1}(u)=\frac{2}{(1-u)^{2}}\log\frac{1}{1-u},
\end{equation}
\begin{equation}
\label{eq14}
f_{2}(u)=\frac{8\log^{2}(1-u)}{(1-u)^{3}}-\frac{8\log(1-u)}{(1-u)^{3}}-\frac{4\log^{2}(1-u)}{(1-u)^{2}}+\frac{12\log(1-u)}{(1-u)^{2}}+\frac{6}{(1-u)^{3}}-\frac{6}{(1-u)^{2}}.
\end{equation}
\end{corollary}
\begin{proof}
The equation \eqref{eq12} follows from the fact that $\beta_{0}(n)=1$ for all $n\geq 0$.\par
Setting $s=1$ in equation \eqref{eq10} gives
\begin{eqnarray*}
f'_{1}(u)=a_{1}\cdot f_{0}^{(0)}(u)\cdot f_{0}^{(0)}(u)+f_{0}^{(1)}(u)\cdot f_{0}^{(0)}(u)\cdot u+f_{0}^{(1)}(u)\cdot f_{0}^{(0)}(u)\cdot u\\
+f_{1}^{(0)}(u)\cdot f_{0}^{(0)}(u)
+f_{0}^{(0)}(u)\cdot f_{1}^{(0)}(u)\\
=\frac{2}{(1-u)^{2}}+\frac{u}{(1-u)^{3}}+\frac{u}{(1-u)^{3}}+\frac{f_{1}(u)}{(1-u)}+\frac{f_{1}(u)}{(1-u)},
\end{eqnarray*}
where we used the fact that $f_{0}(u)=(1-u)^{-1}$. The above equation is
\begin{equation}
\label{eq15}
f'_{1}(u)-\frac{2f_{1}(u)}{1-u}=\frac{2u}{(1-u)^{3}}+\frac{2}{(1-u)^{2}}.
\end{equation}
Solving the linear differential equation \eqref{eq15} by multiplying with the integrating factor $(1-u)^{2}$ gives
$$(1-u)^{2}f_{1}(u)=2\log\frac{1}{1-u}+f_{1}(0),\qquad\text{and hence}\qquad f_{1}(u)=\frac{2}{(1-u)^{2}}\log\frac{1}{1-u},$$
since $f_{1}(0)=\beta_{1}(0)=0$.
Plugging $s=2$ in \eqref{eq10} and solving the resultant differential equation gives
$$f_{2}(u)=\frac{8\log^{2}(1-u)}{(1-u)^{3}}-\frac{8\log(1-u)}{(1-u)^{3}}-\frac{4\log^{2}(1-u)}{(1-u)^{2}}+\frac{12\log(1-u)}{(1-u)^{2}}+\frac{6}{(1-u)^{3}}-\frac{6}{(1-u)^{2}}.$$\\
\end{proof}
\begin{corollary}
We have
$$\beta_{1}(n)=2((n+1)H_{n}-n),$$
and
$$\beta_{2}(n)=4(n+1)^{2}(H_{n}^{2}-H_{n}^{(2)})+4(n+1)^{2}H_{n}-8(n+1)H_{n}+8nH_{n}-4nH_{n}(5+3n)+11n^{2}+15n.$$
\end{corollary}
\begin{proof}
We use the following expansions from \cite{Greene}
$$\frac{1}{(1-u)^{m+1}}\log\left(\frac{1}{1-u}\right)=\sum_{n\geq 0}(H_{n+m}-H_{m})\binom{n+m}{n}u^{n};$$
$$\frac{1}{(1-u)^{m+1}}\log^{2}\left(\frac{1}{1-u}\right)=\sum_{n\geq 0}((H_{n+m}-H_{m})^{2}-(H_{n+m}^{(2)}-H_{m}^{(2)}))\binom{n+m}{n}u^{n},$$
to conclude the assertion.
\end{proof}
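The coefficient extractions above can also be checked mechanically. The following Python sketch (standard library only, exact rational arithmetic) builds the power series of $f_1$ and $f_2$ by convolving the elementary series of $\log(1-u)$ and $(1-u)^{-m}$ and compares the first few coefficients with the closed forms for $\beta_{1}(n)$ and $\beta_{2}(n)$.
\begin{verbatim}
from fractions import Fraction

N = 8  # number of series coefficients to check

def mul(a, b):                     # Cauchy product of truncated power series
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def binom(n, k):
    out = 1
    for i in range(k):
        out = out * (n - i) // (i + 1)
    return out

inv = lambda m: [Fraction(binom(n + m - 1, m - 1)) for n in range(N)]  # 1/(1-u)^m
lg  = [Fraction(0)] + [Fraction(-1, k) for k in range(1, N)]           # log(1-u)
lg2 = mul(lg, lg)                                                      # log^2(1-u)

f1 = [-2 * c for c in mul(lg, inv(2))]
f2 = [8*a - 8*b - 4*c + 12*d + 6*e - 6*g
      for a, b, c, d, e, g in zip(mul(lg2, inv(3)), mul(lg, inv(3)),
                                  mul(lg2, inv(2)), mul(lg, inv(2)),
                                  inv(3), inv(2))]

H = lambda n, r=1: sum(Fraction(1, k**r) for k in range(1, n + 1))

for n in range(N):
    assert f1[n] == 2 * ((n + 1) * H(n) - n)
    assert f2[n] == (4*(n+1)**2*(H(n)**2 - H(n, 2)) + 4*(n+1)**2*H(n)
                     - 8*(n+1)*H(n) + 8*n*H(n) - 4*n*H(n)*(5 + 3*n)
                     + 11*n**2 + 15*n)
\end{verbatim}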
\begin{proof}[Proof of Theorem \ref{mainthm}]
We conclude the results after noting
$$\text{Mean}(C_{n})=\beta_{1}(n),$$
$$\text{Var}(C_{n})=\beta_{2}(n)-(\beta_{1}(n))^{2}+\beta_{1}(n).$$
\end{proof}
\section{Similar Partial Differential Functional Equations}
We point out that the following two examples from \cite{bworld3} can be analyzed using the method employed here:
\begin{itemize}
\item[1.] Moments of the total path length $L_{n}$ of a binary search tree built over a random set of $n$ keys can be extracted from the functional equation (see also the sketch following this list)
$$\frac{\partial L(z,u)}{\partial z}=L^{2}(zu,u),\quad \frac{\partial L(0,u)}{\partial z}=1,$$
where $L(z,u)=\sum_{n\geq 0}L_{n}(u)z^{n}$ is the bivariate generating function.
\item[2.] A \emph{digital} search tree for which the bivariate generating function $L(z,u)$ satisfies
$$\frac{\partial L(z,u)}{\partial z}=L^{2}\left(\frac{1}{2}zu,u\right),$$
with $L(z,0)=1.$
\end{itemize}
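To make item 1 concrete: with the cost convention used above ($n+1$ comparisons per partitioning stage) the recurrences for the number of comparisons and for the total path length of a binary search tree differ only by a deterministic offset, since the root of a subtree on $n$ keys contributes $n-1$ to the path length; the costs thus differ by $2$ per stage, hence by $2n$ in total, and since both recursions split at a uniformly distributed rank into independent subproblems of the same kind, $C_{n}$ has the same distribution as the total path length plus $2n$. The following Python sketch verifies this identity of distributions by brute force for $n\leq 6$.
\begin{verbatim}
import itertools

def comparisons(a):              # n + 1 comparisons per stage, last-element pivot
    if not a:
        return 0
    pivot = a[-1]
    left  = [x for x in a[:-1] if x < pivot]
    right = [x for x in a[:-1] if x > pivot]
    return (len(a) + 1) + comparisons(left) + comparisons(right)

def path_length(keys):           # total depth of all nodes, root at depth 0
    def insert(node, k, depth):
        if node is None:
            return (k, None, None), depth
        key, left, right = node
        if k < key:
            left, d = insert(left, k, depth + 1)
        else:
            right, d = insert(right, k, depth + 1)
        return (key, left, right), d
    root, total = None, 0
    for k in keys:
        root, d = insert(root, k, 0)
        total += d
    return total

for n in range(1, 7):
    perms = list(itertools.permutations(range(n)))
    quick = sorted(comparisons(list(p)) - 2 * n for p in perms)
    bst   = sorted(path_length(p) for p in perms)
    assert quick == bst          # identical distributions, hence identical moments
\end{verbatim}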
\end{document}
\begin{document}
\title{A sharp inequality for Sobolev functions}
\subjclass[2000]{46E35, 35J65}
\author{Pedro M.\ Gir\~{a}o}
\address{Mathematics Department, Instituto Superior T\'{e}cnico, Av.\
Rovisco Pais, 1049-001 Lisbon, Portugal}
\email{[email protected]}
\begin{abstract}
Let $N\geq 5$, $a>0$,
$\Om$ be a smooth bounded domain in ${\mathbb{R}}^{N}$,
$\ts=
\frac{2N}{N-2}$,
$\tz=\frac{2(N-1)}{N-2}$ and $||u||^2=\ngupq+a\nupq$. We prove there exists an
$\alpha_{0}>0$ such that, for all $u\in
H^1(\Om)\setminus\{0\}$,
$$\sddn\leq\frac\nhu\nupsq\lb1+\alpha_{0}
\frac{\iuz}{\pnhumeio\cdot|u|_{\ts}^{\ts\!/2}}\rb.$$
This inequality implies Cherrier's inequality.
\end{abstract}
\maketitle
\noindent Let $N\geq 5$, $a>0$, $\alpha\geq 0$, $\Om$ be a smooth
bounded domain in ${\mathbb{R}}^{N}$,
$\ts=
\frac{2N}{N-2}$
and $\tz=\frac{2(N-1)}{N-2}$.
We regard $a$ as fixed and
$\alpha$ as a parameter.
Denote the $L^p$ and $H^1$ norms of $u$ in $\Om$ by
$$|u|_{p}:=\left(\textstyle\int|u|^p\right)^\frac{1}{p}\qquad
\mbox{and}\qquad
||u||:=\left(\ngupq+a\nupq\right)^\frac{1}{2}\!\!,$$
respectively.
All our integrals are over $\Om$. We define the
functional $\delta:H^1(\Om)\setminus\{0\}\to{\mathbb{R}}$,
homogeneous of degree zero,
by
$$
\delta(u):=
\frac{\iuz}{\pnhumeio\cdot|u|_{\ts}^{\ts\!/2}}
$$
and consider the system
\addtocounter{equation}{+1}
\setcounter{um}{\theequation}
\newcommand{\rf}{$(\theum)_{\alpha}$}
\newcommand{\rfk}{$(\theum)_{\alpha_{k}}$}
\newcommand{\rfz}{$(\theum)_{\alpha_{0}}$}
$$
\left\{\begin{array}{ll}
\lb 1+\dd(u)\rb(-\Delta u+au)+\tzdois\alpha u^\tzu=\lb 1+
\tzud\alpha\delta(u)\rb u^\tsu&\mbox{in\ }\Om,\\
u>0&\mbox{in\ }\Om,\\
\frac{\partial u}{\partial\nu}=0&\!\!\!\!\mbox{on\ }\partial\Om.
\end{array}\right.\eqno{(\theequation)_{\alpha}}
$$
We claim that the solutions of \rf\ correspond to critical points of
the functional $\Phi_{\alpha}:H^1(\Om)\setminus\{0\}\to{\mathbb{R}}$, defined by
\be\label{phi}
\Phi_{\alpha}(u):=\lb
\frac 12\nhu-\frac{1}{\ts}\ius
\rb\lb1+\alpha\defdel\rb^{\frac{N}{2}}=
\Phi_{0}(u)(1+\alpha\delta(u))^{\frac{N}{2}}.
\ee
In fact, since
$$
\Phi_{\alpha}^\prime=(1+\alpha\delta)^{\frac{N}{2}-1}\left[
\Phi_{0}^\prime(1+\alpha\delta)+{\textstyle \frac{N}{2}}
\Phi_{0}\alpha\delta'
\right]
$$
and, for $\varphi\in H^1(\Om)$,
\begin{eqnarray*}
\delta'(u)(\varphi)&=&
-
\frac{\delta(u)}{\nhu}\intum
+\tz\frac{\delta(u)}{\iuz}
\int (|u|^\tzd u\varphi)\\
&&-\frac{\ts}{2\,}
\frac{\delta(u)}{\ius}\int (|u|^{\tsd}u\varphi),
\end{eqnarray*}
the critical points of $\Phi_{\alpha}$ satisfy
$$\begin{array}{rcl}\textstyle
(-\Delta u+au)\lb 1+\frac{4-N}{4}\alpha\delta(u)+\frac{N-2}{4}
\frac{\ius}{\nhu}\alpha\delta(u)\rb &&\\
+\
\frac{\tz N}{2}\alpha |u|^\tzd u\lb\frac 12\frac{||u||}{|u|_{\ts}^{\ts\!/2}}-
\frac{1}{\ts}\frac{|u|_{\ts}^{\ts\!/2}}{||u||}\rb&&\\
-\ |u|^\tsd u\lb 1+\frac{4-N}{4}\alpha\delta(u)+\frac{\ts
N}{8}\frac{\nhu}{\ius}\alpha\delta(u)\rb&=&0
\end{array}
$$
in $\Om$, $\lb\frac{\partial u}{\partial\nu}=0\mbox{\ on\
}\partial\Om\rb$.
However, multiplying this equation by $u$ and integrating over $\Om$
(i.e.\ differentiating (\ref{phi}) along the radial
direction)
we get $||u||^2=\ius$. Conversely, the solutions of \rf\ are
solutions of the previous equation\!\!: multiplying \rf\ by $u$ and integrating
over $\Om$ we get a quadratic equation in $||u||/|u|_{\ts}^{\ts\!/2}$, whose
solution is $||u||/|u|_{\ts}^{\ts\!/2}=1$.
This proves our claim.
The functional $\Phi_{\alpha}$ restricted to the Nehari manifold,
$${\mathcal{N}}:=\left\{u\in H^1(\Om)\setminus\{0\}:
\Phi_{\alpha}^\prime(u)u=0\right\}
=\left\{u\in H^1(\Om)\setminus\{0\}:||u||^2=\ius\right\},$$
is
$
\frac 1N[\beta(1+\alpha\delta)]^\frac{N}{2},
$
where $\beta:H^1(\Om)\setminus\{0\}\to{\mathbb{R}}$ is defined by
$$
\beta(u):=\frac\nhu\nupsq.
$$
So, we consider the functional
$\Psi_{\alpha}:H^1(\Om)\setminus\{0\}\to{\mathbb{R}}$,
defined by
$$\Psi_{\alpha}:=\beta(1+\alpha\delta).
$$
A {\em
least energy\/} solution of
\rf\ is a function $u\in H^1(\Om)\setminus\{0\}$, such that
$$
\Phi_{\alpha}(u)=\inf_{{\mathcal{N}}}\Phi_{\alpha}=\inf_{H^1(\Om)\setminus\{0\}}
{\textstyle\frac{1}{N}} (\Psi_{\alpha})^\frac{N}{2}.
$$
We are interested in proving existence and nonexistence of least
energy solutions of \rf. We note that
every critical point of $\Phi_{\alpha}$ is a
critical point of $\Psi_{\alpha}$.
It is easy to check that the Nehari manifold is a
natural constraint for $\Phi_{\alpha}$. So
conversely, if $u$ is a critical point of
$\Psi_{\alpha}$, then
there exists a unique $t(u)>0$, such that
$t(u)u$ is a critical point of
$\Phi_{\alpha}$ $\left(
t(u)=\left(\nhu/|u|_{\ts}^{\ts}\rb^\frac{N-2}{4}\right)$.
We consider the minimization
problem corresponding to
$$
S_{\alpha}:=\inf\left\{\Psi_{\alpha}(u)|u\in
H^1(\Om)\setminus\{0\}\right\}.
$$
We recall that
$
S:=\inf\left\{\left.\frac{|\nabla u|_{L^2({\mathbb{R}}^N)}^2}{|u|_{L^{\ts}\!({\mathbb{R}}^N)}^2}\right|u\in
L^\ts({\mathbb{R}}^{N}), \nabla u\in L^2({\mathbb{R}}^{N}),
u\neq 0\right\}
$
is achieved by the instanton
$
U(x):=\left(\frac{N(N-2)}{N(N-2)+|x|^2}\right)^{\frac{N-2}{2}}
$.
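We recall in passing (a standard fact about this instanton, recorded here because the quantity $S^{\frac N2}$ reappears in the sketch of the proof below) that $U$ solves $-\Delta U=U^{\frac{N+2}{N-2}}$ in ${\mathbb{R}}^N$ and that
$$
|\nabla U|_{L^2({\mathbb{R}}^N)}^2=\int_{{\mathbb{R}}^N}U^{\frac{2N}{N-2}}\,dx=S^{\frac N2}.
$$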
Our main result is
\begin{theorem}\label{theorem}
There exists a positive real number
$$\alpha_{0}=\alpha_{0}(a,\Om)=
\min\left\{\alpha\,|\;S_{\alpha}=S/2^\frac{2}{N}\right\}$$ such that
\begin{enumerate}
\item[(i)] if $\alpha<\alpha_{0}$, then
\rf\ has a least energy solution $u_{\alpha}$;
\item[(ii)] if
$\alpha>\alpha_{0}$, then \rf\ does not have a least energy
solution and
$
\Psi_{\alpha}\geq S/2^\frac{2}{N}.
$
The constant $ S/2^\frac{2}{N}$ is sharp.
\end{enumerate}
\end{theorem}
\begin{remark}
Obviously, $\alpha_{0}$ is a nonincreasing function of $a$.
By testing $\Psi_{\alpha}$ with constant functions and instantons we
can prove that
$$
\alpha_{0}\geq\max\left\{
[{S/(2|\Om|)^{\frac{2}{N}}-a}]/{\sqrt{a}}, C(N)\max_{\partial\Om}H
\right\}
$$
where $|\Om|$ is the Lebesgue measure of $\Om$, $H$ is the mean
curvature of $\partial\Om$ and $C(N)$ is a constant that only depends
on $N$. The least energy solutions might be constant
($a^\frac{N-2}{4}$) for $a\leq S/(2|\Om|)^{\frac{2}{N}}$ if
$\alpha\leq [{S/(2|\Om|)^{\frac{2}{N}}-a}]/{\sqrt{a}}$.
\end{remark}
\begin{corollary} For all $u\in
H^1(\Om)\setminus\{0\}$,
$$\sddn\leq\frac\nhu\nupsq\lb1+\alpha_{0}
\frac{\iuz}{\pnhumeio\cdot|u|_{\ts}^{\ts\!/2}}\rb.
$$
\end{corollary}
\begin{corollary} For all $u\in H^1(\Om)$,
$$\sddn\nupsq\leq\nhu+\alpha_{0}||u||\cdot|u|_{2}
\qquad\mbox{and}\qquad\sddn\nupsq\leq
\lb|\nabla u|_{2}+c_{a,\alpha_{0}}|u|_{2}\rb^2,
$$
with
$c_{a,\alpha_{0}}=\max\left\{\alpha_{0}/2,
\sqrt{a+\alpha_{0}\sqrt{a}\,\,}\right\}$.
\end{corollary}
\begin{proof}
From H\"older's inequality
$|u|_{\tz}^\tz\leq |u|_{2}|u|_{\ts}^{{\ts\!/2}}$, so
$\delta(u)\leq\frac{|u|_{2}}{||u||}$.
\end{proof}
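To clarify where the constant $c_{a,\alpha_{0}}$ comes from, we sketch the second estimate of the previous corollary; we assume here, consistently with the computations above, that $\nhu=||u||^2=|\nabla u|_{2}^2+a|u|_{2}^2$, so that $||u||\leq|\nabla u|_{2}+\sqrt{a}\,|u|_{2}$. Starting from the first estimate,
$$
\sddn\nupsq\leq\nhu+\alpha_{0}||u||\cdot|u|_{2}
\leq|\nabla u|_{2}^2+\alpha_{0}|\nabla u|_{2}|u|_{2}+\lb a+\alpha_{0}\sqrt{a}\rb|u|_{2}^2
\leq\lb|\nabla u|_{2}+c_{a,\alpha_{0}}|u|_{2}\rb^2,
$$
since $2c_{a,\alpha_{0}}\geq\alpha_{0}$ and $c_{a,\alpha_{0}}^2\geq a+\alpha_{0}\sqrt{a}$.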
\begin{corollary} (Cherrier's inequality). Let $\varepsilon>0$. For all $u\in H^1(\Om)$,
$$\sddn\nupsq\leq(1+\varepsilon)\nhu+\frac{\alpha_{0}^2}{4\varepsilon}|u|_{2}^2=
(1+\varepsilon)\ngupq+\lb\frac{\alpha_{0}^2}{4\varepsilon}+a\varepsilon\rb|u|_{2}^2.$$
\end{corollary}
{\em Sketch of the proof of}\/ Theorem~\ref{theorem}.
By testing $\Psi_{\alpha}$ with instantons,
$S_{\alpha}\leq\sddn$, for all $\alpha\geq 0$.
We claim that if $S_{\alpha}<\sddn$, then $S_{\alpha}$ is achieved.
This is a consequence of the concentration-compactness principle. A
minimizing sequence $\uk$ with $|\uk|_{\ts}=1$ is bounded and we can
assume $\uk\weak u$ in $H^1(\Om)$,
$\lim_{k\to\infty}\ngukpq=\ngupq+||\mu||$
and $\lim_{k\to\infty}\nukpss=\nupss+||\nu||=1$,
where $\sddn||\nu||^\frac 2\ts\leq
||\mu||$ (we remark that this inequality follows from
Cherrier's inequality).
We can write
$\beta(u)\delta(u)=\gamma(u)\sqrt{\beta(u)}$ for
$\gamma(u)={\iuz}/{\nupsz}$. The key step
in the proof of the claim is
the following observation.
Define $f,g:[0,1]\to{\mathbb{R}}$ by
$$\begin{array}{lcl}
f(x)&:=&\textstyle
\beta x^\frac{2}{\ts}+\sddn(1-x)^\frac{2}{\ts}+\alpha\gamma
x^\frac{\tz}{\ts}
\sqrt{\beta x^\frac{2}{\ts}+\sddn(1-x)^\frac{2}{\ts}}\\
&\geq& \textstyle
\beta x+\sddn(1-x)+\alpha\gamma x\sqrt{\beta x+\sddn(1-x)}
\ \ =:\ \ g(x).
\end{array}
$$
Suppose $\min f<\sddn$. It follows that $\beta<\sddn$, and this in turn
implies that
$g$ is concave. Since $f$ and $g$ coincide at 0 and 1, the minimum of
$f$ occurs at 1. This proves the claim.
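The concavity of $g$ used above can be checked directly (a sketch): writing $h(x):=\beta x+\sddn(1-x)$, so that $g(x)=h(x)+\alpha\gamma\,x\sqrt{h(x)}$, we have
$$
g''(x)=\alpha\gamma\lb\frac{h'(x)}{\sqrt{h(x)}}-\frac{x\,h'(x)^2}{4\,h(x)^{3/2}}\rb\leq 0
\quad\mbox{on\ }[0,1],
$$
because $h$ is affine and positive there, with $h'\equiv\beta-\sddn<0$ when $\beta<\sddn$.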
A similar argument
shows
$\alpha\mapsto S_{\alpha}$
is continuous. In particular, the supremum
$\alpha_{0}:=\sup\{\alpha|S_{\alpha}<S/2^\frac 2N\}$ is either $+\infty$ or a
maximum.
The map $\alpha\mapsto S_{\alpha}$ is strictly increasing on
$[0,\alpha_{0}]$.
If $\alpha\in[0,\alpha_{0}[$, then \rf\ has a least energy solution.
If $\alpha\in]\alpha_{0},+\infty[$, then \rf\ does
not have a least energy solution. It remains to prove that
$\alpha_{0}$ is finite.
Suppose
$\alpha_{0}=+\infty$. Choose $\alpha_{k}\to+\infty$ as $k\to+\infty$ and
let $\uk$ be a minimizer for $S_{\alpha_{k}}$ satisfying
\rfk.
It is easy to prove that
$\lim_{\alpha\to\infty}S_{\alpha}=S/2^\frac 2N$,
$\mk:=\max_{\bar\Om}\uk=\uk(P_{k})\to+\infty$,
$\lim_{k\to\infty}\ngukpq=\lim_{k\to\infty}\nukpss=S^\frac
N2/2$
and $\ak\delta(\uk)\to 0$. We can apply the Gidas-Spruck blow up
technique to \rfk\ because $\ak\delta(\uk)\to 0$.
Define $
\varepsilon_{k}:=M_{k}^{-{2}/(N-2)}$ and
$U_{\varepsilon,y}:=\varepsilon^{-\frac{N-2}{2}}U\left(\frac{x-y}{\varepsilon}\right)$.
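Although the details are omitted, the rescaling behind the Gidas-Spruck argument is, schematically, the following (a sketch; the precise statement also keeps track of the boundary, since $P_{k}$ may approach $\partial\Om$): one sets
$$
v_{k}(y):=\mk^{-1}\,\uk(P_{k}+\varepsilon_{k}y)=\varepsilon_{k}^{\frac{N-2}{2}}\,\uk(P_{k}+\varepsilon_{k}y),
\qquad y\in(\Om-P_{k})/\varepsilon_{k},
$$
so that $0\le v_{k}\le v_{k}(0)=1$, and, since $\ak\delta(\uk)\to 0$, the limit of $v_{k}$ (along a subsequence) solves $-\Delta v=v^{\frac{N+2}{N-2}}$ on ${\mathbb{R}}^N$ or on a half-space.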
We can prove that
$
\lim_{k\to\infty}\ak\varepsilon_{k}=0$,
$
\lim_{k\to\infty}|\nabla\uk-\nabla U_{\varepsilon_{k},P_{k}}|_{2}=0
$
and $\pkk\in\partial\Om$, for large $k$.
At this point, using the ideas of \cite{APY}, we follow the argument in
\cite{CG}, which applies with no modification. We
show
$\Psi_{\ak}(\uk)>\sddnd$, for large $k$. This is impossible.
Therefore $\alpha_{0}$ is finite.
$\Box$
\begin{remark}
The functional behavior
behind this inequality is also present, for example, in
the Dirichlet problem for $-\Delta u-au+\alpha u^{1/3}=u^{7/3}$
in $\Om\subset{\mathbb{R}}^5$, with
$0<a<\lambda_{1}\left(-\Delta,H^1_{0}(\Om)\right)$. In this case
$s:=[t(u)]^{2/3}$ is the
solution of a cubic equation,
$(|\nabla u|_{2}^2-a|u|_{2}^2)s+\alpha|u|_{4/3}^{4/3}-|u|_{10/3}^{10/3}s^3=0$.
\end{remark}
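To see where the cubic in the previous remark comes from (a sketch): multiplying the equation for $tu$ by $tu$, integrating over $\Om$, dividing by $t^{4/3}$ and setting $s=t^{2/3}$, one gets
$$
t^{2}\lb|\nabla u|_{2}^2-a|u|_{2}^2\rb+\alpha\,t^{4/3}|u|_{4/3}^{4/3}-t^{10/3}|u|_{10/3}^{10/3}=0,
\quad\mbox{that is,}\quad
\lb|\nabla u|_{2}^2-a|u|_{2}^2\rb s+\alpha|u|_{4/3}^{4/3}-|u|_{10/3}^{10/3}\,s^{3}=0.
$$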
\end{document}
\begin{document}
\begin{abstract}
We consider the Calder\'on-Zygmund kernels $K_{\alpha,n}(x)=(x_i^{2n-1}/|x|^{2n-1+\alpha})_{i=1}^d$ in ${\mathbb R}^d$ for $0<\alpha\leq 1$ and $n\in\mathbb{N}$.
We show that, on the plane, for $0<\alpha<1$, the capacity associated to the kernels $K_{\alpha,n}$ is comparable to the Riesz capacity
$C_{\frac23(2-\alpha),\frac 3 2}$ of non-linear potential theory. As consequences we deduce the semiadditivity and bilipschitz invariance of this capacity.
Furthermore we show that for any Borel set
$E\subset{\mathbb R}^d$ with finite length the $L^2(\mathcal{H}^1\lfloor E)$-boundedness of the singular integral associated to $K_{1,n}$
implies the rectifiability of the set $E$. We thus extend to any ambient dimension results previously known only in the plane.
\end{abstract}
\maketitle
\section{Introduction and statement of the results}
In this paper we continue the program started in \cite{cmpt} and \cite{cmpt2}, where an extensive study of the kernels $x_i^{2n-1}/|x|^{2n}, n \in \mathbb{N},$ was performed in the plane. We explore the kernels $K_{\alpha,n}(x)=(K_{\alpha,n}^i(x))_{i=1}^d$ in ${\mathbb R}^d$, where
$$K_{\alpha,n}^i(x)= \frac{x_i^{2n-1}}{|x|^{2n-1+\alpha}},$$
for $0<\alpha\le 1$, $n\in\mathbb{N}$, in connection with rectifiability and their corresponding capacities.
For compact sets $E\subset{\mathbb R}^d$, we define
\begin{equation}\label{capalfa}
\gamma_{\alpha}^n(E)=\sup|\langle T,1\rangle|,
\end{equation}
the supremum taken over those real distributions $T$ supported on $E$ such
that for $i=1,2$, the potentials $K_{\alpha,n}^i*T$
are in the unit ball of $L^\infty({\mathbb R}^d)$. For $n=1$ and $\alpha=1$ the capacity $\gamma_1^1$ coincides with the analytic capacity, modulo multiplicative constants
(see \cite{semiad}), and it is worth mentioning that for $\alpha=1$ and $n\in\mathbb{N}$ it was proved in \cite{cmpt2} that $\gamma_1^n$ is comparable to analytic capacity.
Recall that the analytic capacity of a compact subset of the plane is defined by $$\gamma(E)=\sup|f'(\infty)|,$$ the supremum taken over the analytic functions on
$\mathbb{C}\setminus E$ such that $|f(z)|\le 1$ for $z\in\mathbb{C}\setminus E$. Analytic capacity may be written as \eqref{capalfa} replacing real distributions by complex distributions
and the vectorial kernel $K_{\alpha,n}$
by the Cauchy kernel. Therefore, our set function $\gamma_{\alpha}^n$ can be viewed as a real variable version of analytic capacity associated to the vector-valued kernel $K_{\alpha,n}$.
There are several papers where similar capacities have been studied;
in ${\mathbb R}^d$, for $0<\alpha<1$, it was discovered in \cite{imrn} that compact sets with finite $\alpha$-dimensional Hausdorff measure have zero $\gamma_\alpha^1$ capacity
(for the case of non-integer $\alpha>1$ one has to assume some extra regularity assumptions on the set, see \cite{imrn} and \cite{illinois}).
This is in strong contrast with the situation where $\alpha\in\mathbb{Z}$ (in this case $\alpha$-dimensional smooth hypersurfaces have positive $\gamma_\alpha^1$ capacity, see \cite{mp},
where they showed that if $E$ lies on a Lipschitz graph, then $\gamma_{d-1}^1(E)$ is comparable to the $(d-1)$-Hausdorff measure ${\mathcal H}^{d-1}(E)$). In \cite{tams}
the semiadditivity of $\gamma_\alpha^1$ was proven for $0<\alpha<d$ in ${\mathbb R}^d$.
For $s>0$, $1 < p < \infty$ and $0 < sp\le 2$, the Riesz capacity $C_{s,p}$ of a compact set $K\subset{\mathbb R}^d$ is defined as
\begin{equation}\label{wolffcap}
C_{s,p}(K)=\sup_{\mu}\mu(K)^p,
\end{equation}
where the supremum runs over all positive measures $\mu$ supported on $K$ such that
$$I_s(\mu)(x)=\int\frac{d\mu(y)}{|x-y|^{2-s}}$$
satisfies $\|I_s(\mu)\|_q\le 1$, where as usual $q= p/(p-1)$. The capacity $C_{s,p}$ plays a central role in understanding the nature of Sobolev spaces
(see \cite{adamshedberg}, Chapter 1, p.~38).
In \cite{mpv} it was surprisingly shown that in ${\mathbb R}^d$, for $0<\alpha<1$,
the capacities $\gamma_\alpha^1$ and $C_{\frac 2 3(d-\alpha),\frac 3 2}$ are comparable.
In this paper we extend the main result from \cite{mpv} on the plane by establishing the equivalence between $\gamma_{\alpha}^n$, $0<\alpha<1$, $n\in\mathbb{N}$, and the capacity $C_{\frac 2 3(2-\alpha),\frac 3 2}$ of non-linear potential theory.
Our first main result reads as follows:
\begin{teo}\label{main}
For each compact set $E\subset{\mathbb R}^d$, $0 <\alpha< 1$ and $n\in\mathbb{N}$ we have
\begin{equation*}
c^{-1}\ C_{\frac 2 3(2-\alpha),\frac 3 2}(E)\le\gamma_{\alpha}^n(E)\le c\ C_{\frac 2 3(2-\alpha),\frac 3 2}(E),
\end{equation*}
where $c$ is a positive constant depending only on $\alpha$ and $n$.
\end{teo}
On the plane and for $\alpha\in (1,2)$ the equivalence of the above capacities is not known. In \cite{env} it was shown that, in ${\mathbb R}^d$, for $0<\alpha<d$ and $n=1$,
the first inequality in Theorem \ref{main} holds (replacing $C_{\frac 2 3(2-\alpha),\frac 3 2}$ by $C_{\frac 2 3(d-\alpha),\frac 3 2}$). The question concerning the validity
of the inequality $\gamma_\alpha^n(E)\lesssim C_{\frac 2 3(d-\alpha),\frac 3 2}(E)$ for all non-integer $\alpha\in (0,d)$ and $n\in\mathbb{N}$ remains open.\newline
Theorem \ref{main} has some interesting consequences. As is well known, sets with positive capacity $C_{s,p}$ have non-finite Hausdorff
measure ${\mathcal H}^{2-sp}$. Therefore, the same applies to $\gamma_\alpha^n$, for $0<\alpha<1$ and $n\in\mathbb{N}$.
Hence, as a direct corollary of Theorem \ref{main}, one can assert that $\gamma_\alpha^n$ vanishes on sets with finite ${\mathcal H}^{\alpha}$ measure.
On the other hand, since $C_{s,p}$ is a subadditive set function (see \cite{adamshedberg}, p.~26), $\gamma_\alpha^n$ is semiadditive, which means that given compact sets $E_1$ and $E_2$,
$$\gamma_\alpha^n(E_1\cup E_2)\le C(\gamma_\alpha^n(E_1)+\gamma_\alpha^n(E_2)),$$ for some constant $C$ depending on $\alpha$ and $n$. In fact $\gamma_\alpha^n$ is countably semiadditive.
Another consequence of Theorem \ref{main} is the bilipschitz invariance of $\gamma_\alpha^n$, meaning that for bilipschitz homeomorphisms of ${\mathbb R}^d$, $\phi:{\mathbb R}^d\to{\mathbb R}^d$,
namely $$L^{-1}|x-y|\le|\phi(x)-\phi(y)|\le L|x-y|,\;\;\;x,\;y\in{\mathbb R}^d,$$ one has $$\gamma_\alpha^n(E)\approx\gamma_\alpha^n(\phi(E)).$$
The fact that analytic capacity is bilipschitz invariant was a very deep result in \cite{bilipschitz}, see also \cite{gv} and \cite{gpt}.
All advances concerning analytic capacity in the last 40 years, \cite{mmv}, \cite{david}, \cite{semiad}, \cite{bilipschitz}, go through the deep geometric study of the Cauchy transform which was initiated by Calder\'on in \cite{ca}. In particular it was of great significance to understand what type of geometric regularity the $L^2(\mu)$-boundedness of the Cauchy transform imposes on the underlying measure $\mu$. From the results of David, Jones, Semmes and others, it soon became clear that rectifiability plays an important role in the understanding of the aforementioned problem. Recall that $n$-rectifiable sets in ${\mathbb R}^d$ are contained, up to an ${\mathcal H}^n$-negligible set, in a countable union of $n$-dimensional Lipschitz graphs. Mattila, Melnikov and Verdera proved in \cite{MMV} that, whenever $E$ is a $1$-Ahlfors-David regular set, the $L^2({\mathcal H}^1\lfloor E)$-boundedness of the Cauchy transform is equivalent to $E$ being $1$-uniformly rectifiable. A set $E$ is called $1$-Ahlfors-David regular, or $1$-AD-regular, if there exists some constant $c$ such that
$$c^{-1}r\leq {\mathcal H}^1(B(x,r) \cap E) \leq c\,r\quad\mbox{for all }x\in E,\
0<r\leq{\rm diam}(E).$$
Uniform rectifiability is an influential notion of quantitative rectifiability introduced by David and Semmes, \cite{DS1} and \cite{DS2}. In particular, a set $E$ is $1$-uniformly rectifiable if it is $1$-AD-regular and is contained in a $1$-AD-regular rectifiable curve.
L\'eger in \cite{Leger} proved that if $E$ has positive and finite length and the Cauchy transform is bounded in $L^2({\mathcal H}^1\lfloor E)$, then $E$ is rectifiable. It is a remarkable fact that the proofs of the aforementioned results depend crucially on a special subtle positivity property of the Cauchy kernel related to an old notion of curvature named after Menger. Given three distinct points $z_1,z_2,z_3 \in \mathbb{C}$, their Menger curvature is
\begin{equation}
\label{meng}
c(z_1,z_2,z_3)=\frac{1}{R(z_1,z_2,z_3)},
\end{equation}
where $R(z_1,z_2,z_3)$ is the radius of the circle passing through $z_1,z_2$ and $z_3$. Melnikov in \cite{Me} discovered that the Menger curvature is related to the Cauchy kernel by the formula
\begin{equation}
\label{mel}
c(z_1,z_2,z_3)^2= \sum_{s \in S_3} \frac{1}{(z_{s_2}-z_{s_1})\overline{(z_{s_3}-z_{s_1})}},
\end{equation}
where $S_3$ is the group of permutations of three elements. It follows immediately that the permutations of the Cauchy kernel are always positive. Further implications of this identity related to the $L^2$-boundedness of the Cauchy transform were illuminated by Melnikov and Verdera in \cite{MeV}.
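To indicate the flavour of these implications (we state the result loosely, and only for motivation): for a finite measure $\mu$ with linear growth, Melnikov and Verdera proved in \cite{MeV} an identity of the type
$$\|\mathcal{C}_{\varepsilon}\mu\|^2_{L^2(\mu)}=\tfrac16\,c^2_{\varepsilon}(\mu)+O(\mu(\mathbb{C})),$$
where $\mathcal{C}_{\varepsilon}$ denotes the $\varepsilon$-truncated Cauchy transform and $c^2_{\varepsilon}(\mu)$ is the triple integral of $c(z_1,z_2,z_3)^2$ over the triples of points at mutual distance bigger than $\varepsilon$.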
While the Cauchy transform is pretty well understood in this context, very few things are known for other kernels. The David-Semmes conjecture, dating from 1991, asks if the $L^2(\mu)$-boundedness of the operators associated with the $n$-dimensional Riesz kernel $x/|x|^{n+1}$ suffices to imply $n$-uniform rectifiability. For $n=1$ we are in the case of the Cauchy transform discussed in the previous paragraph. The conjecture has been very recently resolved by Nazarov, Tolsa and Volberg in \cite{NToV} in the codimension 1 case, that is for $n=d-1$, using a mix of different deep techniques, some of them depending crucially on $n$ being $d-1$. The conjecture is open for the intermediate values $n \in(1,d-1)$.
Recently in \cite{cmpt} the kernels $x_i^{2n-1} / |x|^{2n}, \, x\in {\mathbb R}^2, n \in \mathbb{N},$ were considered and it was proved that the $L^2$-boundedness of the operators associated with any of these kernels implies rectifiability. These are the only known examples of convolution kernels not directly related to the Riesz kernels with this property. In this paper we extend this result to any ambient dimension $d$.
For $n \in \mathbb{N}$ and $E \subset {\mathbb R}^d$ with finite length we consider the singular integral operator $T_n=(T^i_n)_{i=1}^d$ where, formally,
\begin{equation}
\label{operator}
T^i_{n}(f)(x)= \int_E K^i_{1,n}(x-y)f(y) d {\mathcal H}^1(y)
\end{equation}
and
\begin{equation*}
K^i_{1,n}(x)=\frac{x_i^{2n-1}}{|x|^{2n}}, \quad x =(x_1,\dots,x_d) \in {\mathbb R}^d \setminus \{0\}.
\end{equation*}
We extend Theorem 1.2 and Theorem 1.3 from \cite{cmpt} to any dimension $d$. Our result reads as follows.
\begin{teo}
\label{recthm1}
Let $E \subset {\mathbb R}^d$ be a Borel set such that $0<\mathcal{H}^1(E)<\infty$.
\begin{enumerate}
\item If $T_n$ is bounded in $L^2(\mathcal{H}^1\lfloor E)$, then the set $E$ is rectifiable.
\item If, moreover, $E$ is $1$-AD-regular, then $T_n$ is bounded in $L^2(\mathcal{H}^1\lfloor E)$ if and only if $E$ is $1$-uniformly rectifiable.
\end{enumerate}
\end{teo}
The plan of the paper is the following. In Section \ref{secsketch} we outline the proof of Theorem \ref{main}, which is based on two main
technical ingredients: positivity of the quantity obtained when symmetrizing the kernel $K_{\alpha,n}^i$ and the fact that our kernel localizes in the uniform norm. In Section \ref{secperm} we state all the necessary propositions involving the permutations of the kernels $K_{\alpha,n}$. Due to their technical nature we have included the proofs of these results in an Appendix.
Section \ref{secloc} is devoted to the proof of the Localization Theorem for our potentials. In Section \ref{secoutline} we complete the proof of the
main theorem, showing that $\gamma_\alpha^n$ is comparable to $C_{\frac 2 3(2-\alpha),\frac 3 2}$. Finally, in Section \ref{rect} we elaborate on how Theorem \ref{recthm1} follows from our symmetrization results involving the permutations of the kernels $K_{1,n}$ and \cite{cmpt}.
\section{Sketch of the proof of Theorem \ref{main}}\label{secsketch}
Our proof of Theorem \ref{main} rests on two steps:\newline
{\bf First step. } The first step is the analogue of the main result in the paper \cite{semiad}, that is, the equivalence
between the capacities $\gamma_\alpha^n$ and $\gamma_{\alpha,+}^n$, where for compact sets $K\subset{\mathbb R}^d$, $$\gamma_{\alpha,+}^n(K)=\sup \mu(K),$$ the supremum taken over those positive measures
$\mu$ with support in $K$ whose vector-valued potential $K_{\alpha,n}*\mu$ lies in the unit ball of $L^{\infty}({\mathbb R}^d,{\mathbb R}^d)$ (see also \cite{mpv},
\cite{mpv2}, \cite{tams} and \cite{cmpt2} for related results). Clearly, the quantity $\gamma_\alpha^n$ is greater than or equal to $\gamma_{\alpha,+}^n$. The reverse inequality can be obtained following Tolsa's approach in \cite{semiad}, which is
based on two main technical points: the first one is the symmetrization of the kernels $K^i_{\alpha,n}$, $n\in\mathbb{N}$, $0<\alpha\le 1$, $i=1,2$,
and the second one is a localization result for $K^i_{\alpha,n}$, $i=1,2$.\newline
In Section \ref{secperm}, we deal with the symmetrization process for our kernels. We prove not only the positivity but also an explicit description of the quantity
obtained when symmetrizing the kernels $K^i_{\alpha, n}$, for $0<\alpha\le 1$. This will allow us to study the $L^2$-boundedness of the operators with kernel $K^i_{\alpha, n}$.\newline
The localization result needed in our setting is written in Section \ref{secloc}. Specifically, we prove that there exists a positive constant $C$ such that, for each
compactly supported distribution $T$ and for each coordinate $i$, we have
\begin{equation}\label{prelloc}\left\|K_{\alpha,n}^i*\varphi_QT\right\|_{\infty}\le C\left\|K_{\alpha,n}^i*T\right\|_{\infty},\end{equation}
for each square $Q$ and each $\varphi_Q\in {\mathcal C}_0^{\infty}(Q)$ satisfying $\|\varphi_Q\|_\infty\le C$,
$\|\nabla\varphi_Q\|_\infty\le l(Q)^{-1}$ and $\|\Delta\varphi_Q\|_\infty\le l(Q)^{-2}$, where $l(Q)$ denotes the sidelength of the cube $Q$.
Once the symmetrization and \eqref{prelloc} are at our disposal, Tolsa's machinery applies straightforwardly, as was already explained in \cite[Section 2.2]{mpv}.
{\bf Second step. } Once Step 1 is performed, i.e.\ once the comparability between the capacities $\gamma_\alpha^n$ and $\gamma_{\alpha,+}^n$ is established, we complete the proof
of the main theorem in Section \ref{secoutline} by showing that $\gamma_{\alpha,+}^n$ is equivalent to $C_{\frac 2 3(2-\alpha),\frac 3 2}$.
\section{Permutations of the kernels $K_{\alpha,n}$}\label{secperm}
For any three distinct $x,y,z\in{\mathbb R}^d$, we consider the symmetrization of the kernels $K^i_{\alpha,n}$:
\begin{equation}
\label{permdef}
\begin{split}
p^i_{\alpha,n}(x,y,z) &= K^i_{\alpha,n}(x-y)\,K^i_{\alpha,n}(x-z) + K^i_{\alpha,n}(y-x)\,K^i_{\alpha,n}(y-z) \\&\quad \quad \quad + K^i_{\alpha,n}(z-x)\,K^i_{\alpha,n}(z-y).
\end{split}
\end{equation}
We prove that the permutations $p_{\alpha,n}^i(x,y,z)$, $n\in\mathbb{N}$ and $0<\alpha<1$, behave like
the inverse of the largest side of the triangle determined by the points $x,y,z$ to the power $2\alpha$. We also prove a comparability result of the permutations with
Menger curvature when $\alpha=1$, which is essential in order to extend the rectifiability results from \cite{cmpt}. It is an interesting fact that our proofs also depend on Heron's formula of Euclidean geometry. In order to enhance readability we chose to include the rather lengthy proofs of the following propositions in an Appendix.
\begin{prop}\label{lemmaperm}
Let $0<\alpha<1$ and $x,y,z$ be three distinct points in ${\mathbb R}^d$. For $1\le i\le d$ we have
\begin{equation}\label{icomparability}
\frac{A(n, d, \alpha) \ M_i^{2n}}{L(x,y,z)^{2\alpha+2n}}\le p^i_{\alpha,n}(x,y,z)\le \frac{B(n, d, \alpha)}{L(x,y,z)^{2\alpha}},
\end{equation}
where $M_i=\max\{|y_i-x_i|,|z_i-y_i|,|z_i-x_i|\}$, $L(x,y,z)$ denotes the length of the largest side of the triangle determined by the three points $x,y,z$, and $A(n, d, \alpha), B(n,d,\alpha)$
are some positive constants depending only on $d,\alpha$ and $n$.
\end{prop}
We also consider
\begin{equation}
\label{permdef2}
p_{\alpha,n}(x,y,z)=\sum_{i=1}^d p^i_{\alpha,n}(x,y,z).
\end{equation}
Proposition \ref{lemmaperm} allows us to prove the following:
\begin{co}
\label{permalfa}
Let $0<\alpha<1$ and $x,y,z$ be three distinct points in ${\mathbb R}^d$. Then the following holds:
\begin{equation*}
\frac{A(n,d, \alpha)}{L(x,y,z)^{2\alpha}}\le p_{\alpha,n}(x,y,z)\le \frac{B(n,d,\alpha)}{L(x,y,z)^{2\alpha}},
\end{equation*}
where $L(x,y,z)$ denotes the length of the largest side of the triangle determined by the three points $x,y,z$ and $A(n,d,\alpha),B(n,d,\alpha)$ are some
positive constants depending only on $n$, $d$ and $\alpha$.
\end{co}
\begin{proof} For $1\le i\le d$ set $M_i=\max\{|y_i-x_i|,|z_i-y_i|,|z_i-x_i|\}$. Without loss of generality assume that
$M_1=\max\{M_i: 1\le i\le d\}$. Then $M_1$ is comparable to $L(x,y,z)$: clearly $M_1\le L(x,y,z)$, while if, say, $|x-y|=L(x,y,z)$, then $M_1\geq\max_i|y_i-x_i|\geq L(x,y,z)/\sqrt d$. The corollary follows from Proposition \ref{lemmaperm}.\end{proof}
For any two distinct points $x_1,x_2 \in {\mathbb R}^d$ we denote by $L_{x_1,x_2}$ the line which contains them and for any two lines $L_1$ and $L_2$ we denote by
$\measuredangle (L_1,L_2)$ the smallest angle between $L_1$ and $L_2$.
\begin{prop}
\label{posperm}
For any three distinct points $x,y,z \in {\mathbb R}^d$ and $i=1,\dots,d$,
\begin{enumerate}
\item[(i)] $p^i_{1,n}(x,y,z)\geq 0$ and vanishes if and only if $x,y,z$ are collinear or the three points lie on a $(d-1)$-hypersurface perpendicular to the $i$ axis,
that is, $x_i=y_i=z_i$.
\item[(ii)] If $V_j=\{x_j=0\}$ and $\measuredangle{(V_j,L_{x,y})}+\measuredangle{(V_j,L_{x,z})}+ \measuredangle{(V_j,L_{y,z})}\geq \theta_0>0$, then
$$\sum_{i\neq j} p^i_{1,n}(x,y,z)\geq C(\theta_0)c(x,y,z)^2.$$
\end{enumerate}
\end{prop}
\section{Growth conditions and localization}\label{secloc}
\subsection{Growth conditions}
Recall that for a compactly supported
distribution $T$ with bounded Cauchy potential,
\begin{equation*}
\begin{split}
|\langle T,\varphi_Q\rangle |&=\left |\left\langle T,\frac{1}{\pi
z}*\overline\partial\varphi_Q\right\rangle \right|=
\left|\left\langle \frac{1}{\pi z}*T, \overline\partial\varphi_Q
\right\rangle\right|\\*[7pt] &\leq \frac 1 \pi\left\|\frac 1{z}*T
\right\|_\infty \,\|\overline\partial\varphi_Q\|_{L^1(Q)}\leq \,
\frac 1 \pi\left\|\frac 1{z}*T \right\|_\infty \,l(Q),
\end{split}
\end{equation*}
whenever $\varphi_Q$ satisfies
$\|\overline\partial\varphi_Q\|_{L^1(Q)} \le l(Q). $
We want to deduce a similar growth condition for the case of having bounded $T*K_{\alpha,n}^i$, $i=1,2,$ potentials. This is crucial in obtaining the localization
result in Lemma \ref{localization}. The argument above is based on the fact that one can recover $f$ using the formula $f=(1/\pi)\overline{\partial}f*1/z$.
Therefore, we need an analogous reproduction formula for
the kernels $K_{\alpha,n}^i$, $i=1,2$. In \cite{imrn} (Lemma 3.1)
a reproduction formula for $x_i/|x|^{1+\alpha}$, $0<\alpha<1$, in $\mathbb{R}^d$ was found. In our current setting the kernels depend on $n \in \mathbb{N}$, hence the arguments are more technically involved.
\begin{lemma}\label{reproductionformula}
If a function $f$ has continuous derivatives up to order two, then it is representable in the form
\begin{equation}\label{repr}
f(x)=(\varphi_1*K_1)(x)+(\varphi_2*K_2)(x),\;\;x\in{\mathbb R}^d,
\end{equation}
where for $i=1,2$,
$$\varphi_i=S_i(\Delta f)*\frac{x_i}{|x|^{3-\alpha}}:=\left(c\Delta f+\widetilde S_i(\Delta f)\right)*\frac{x_i}{|x|^{3-\alpha}},$$
for some constant $c$ and Calder\'on-Zygmund operators $\widetilde S_1$ and $\widetilde S_2$.
\end{lemma}
Once Lemma \ref{reproductionformula} is available, we obtain the desired growth condition for our compactly supported distribution $T$ with bounded potentials $K_{\alpha,n}^i*T$, $i=1,2$:
\begin{equation}\label{ourgrowth}
\begin{split}
|\langle T,\varphi_Q\rangle |&=\left |\left\langle T,K_1*S_1(\Delta\partial_1\varphi_Q)*\frac1{|x|^{1-\alpha}}+K_2*S_2(\Delta\partial_2\varphi_Q)*\frac1{|x|^{1-\alpha}}\right\rangle \right|\\*[7pt] &\le
\sum_{i=1}^2\left|\left\langle S_i(K_{\alpha,n}^i*T),\Delta\partial_i\varphi_Q*\frac1{|x|^{1-\alpha}}
\right\rangle\right|\\*[7pt] &\leq \sum_{i=1}^2\left\|S_i(K_{\alpha,n}^i*T)
\right\|_{\tiny{\mbox{BMO}}} \,\left\|\Delta\partial_i\varphi_Q*\frac1{|x|^{1-\alpha}}\right\|_{H^1({\mathbb R}^d)}\\*[7pt] &\leq\,
C \,l(Q)^\alpha,
\end{split}
\end{equation}
whenever $\varphi_Q$ satisfies the conditions
\begin{equation}\label{normalization1}
\left\|\Delta\partial_i\varphi_Q*\frac1{|x|^{1-\alpha}}\right\|_{H^1({\mathbb R}^d)} \le l(Q)^\alpha,\,\,\,\mbox{for }\, i=1,2.
\end{equation}
Observe that the penultimate inequality in \eqref{ourgrowth} comes from the fact that Calder\'on-Zygmund operators send $L^\infty$ to BMO. Recall that a function $f$ belongs to $\mbox{H}^1({\mathbb R}^d)$ if and only if
$f\in L^1({\mathbb R}^d)$ and all its Riesz transforms $R_j$, $1\le j\le2,$ (the Calder\'on-Zygmund operators with Fourier multiplier $\xi_j/|\xi|$) are also in $L^1({\mathbb R}^d)$. The norm of $H^1({\mathbb R}^d)$ is defined as
$$\|f\|_{H^1({\mathbb R}^d)}=\|f\|_{L^1({\mathbb R}^d)}+\sum_{j=1}^2\|R_j(f)\|_{L^1({\mathbb R}^d)}.$$
We now formulate a definition. We say that a distribution $T$ has growth $\alpha$ provided that
$$G_\alpha(T)=\sup_{\varphi_Q}\frac{|\langle T,\varphi_Q\rangle|}{l(Q)^\alpha}<\infty,$$
where the supremum is taken over all $\varphi_Q\in {\mathcal C}_0^\infty(Q)$ satisfying the normalization inequalities \eqref{normalization1} (see also \cite{mpv2} and \cite{tams} for similar conditions).
The normalization in the $H^1$ norm is the right condition to impose, as will
become clear later on. Recall that a positive Radon measure $\mu$ has growth $\alpha$ provided $\mu(B(x,r))\le Cr^\alpha$, for $x\in{\mathbb R}^d$ and $r\ge 0$.
For positive Radon measures $\mu$ in ${\mathbb R}^d$ the preceding notion
of $\alpha$ growth is equivalent to the usual one.
Notice that from \eqref{ourgrowth}, if $T$ is a compactly supported distribution with bounded potentials $K_1*T$ and $K_2*T$, then $T$ has growth $\alpha$.
For the proof of Lemma \ref{reproductionformula} we need to compute the Fourier transform of the kernels
$K_{\alpha,n}^i(x)=x_i^{2n-1}/|x|^{2n-1+\alpha}$, $1\le i\le 2$, $n\in\mathbb{N}$, $0<\alpha<1$ (see Lemma 12 in \cite{cmpt2}
for the case $\alpha=1$).
\begin{lemma}\label{fourier}
For $1\le i\le 2$, $n\in\mathbb{N}$ and $0<\alpha<1$,
\begin{equation*}
\widehat{K_{\alpha,n}^i}(\xi)=c\frac{\xi_i}{|\xi|^{2n+1-\alpha}}p(\xi_1,\xi_2),
\end{equation*}
for some homogeneous polynomial $p(\xi_1,\xi_2)$ of degree $2n-2$ which does not vanish outside the origin, and some positive constant $c:=c(\alpha,n)$.
\end{lemma}
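As a quick sanity check (a sketch): when $n=1$ the polynomial $p$ is a non-zero constant and the lemma reads
$$\widehat{K_{\alpha,1}^i}(\xi)=\widehat{\frac{x_i}{|x|^{1+\alpha}}}(\xi)=c\,\frac{\xi_i}{|\xi|^{3-\alpha}},$$
which agrees with the homogeneity count: in the plane, $x_i/|x|^{1+\alpha}$ is odd and homogeneous of degree $-\alpha$, so its Fourier transform is odd and homogeneous of degree $\alpha-2$.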
To prove Lemma \ref{fourier}, the following identity is vital.
\begin{lemma}\label{coeff} For $n\in\mathbb{N}$ and $0\le l\le n-1$,
$$\sum_{k=1}^{l+1}(-1)^k\frac{(1-\alpha)(3-\alpha)\cdots(2k-1-\alpha)}{2^{n-k}\ (2k-1)!\ (l+1-k)!}=-\frac{(1-\alpha)\alpha(\alpha+2)(\alpha+4)\dots(\alpha+2(l-1))}{2^{n-1-l}\ (2l+1)!}.$$
\end{lemma}
\begin{proof}Consider the polynomial
\begin{equation}\label{polynomial1}
p(\alpha)=\sum_{k=1}^{l+1}(-1)^k\frac{(1-\alpha)(3-\alpha)\dots(2k-1-\alpha)}{ 2^{n-k}\ (2k-1)!\ (l+1-k)!\ }.
\end{equation}
It follows immediately that $\alpha=1$ is a root of $p$. In the following we will show that $0,-2,-4,\dots,-2(l-1)$ are also roots of $p$. For $j=0,1,\dots,l-1$
\begin{equation*}
\begin{split}
p(-2j)&=\sum_{k=1}^{l+1} (-1)^k\frac{(1+2j)(3+2j)\dots(2k-1+2j)}{(2k-1)!2^{n-k}(l+1-k)!}\\
&=\frac{1}{1\cdot 3\dots (2j-1)}\sum_{k=1}^{l+1} (-1)^k\frac{(2k+2j)!}{(2k-1)!2^{n-k}(l+1-k)!2^{k+j}(k+j)!}\\
&=\frac{1}{1\cdot 3\dots (2j-1)\ 2^{n+j}}\sum_{k=1}^{l+1}(-1)^k \frac{2k (2k+1)\dots (2k+2j)}{(k+j)!(l+1-k)!}\\
&=\frac{1}{1\cdot 3\dots (2j-1)\ 2^{n+j}}\cdot\\
&\quad\quad\quad\sum_{k=1}^{l+1}(-1)^k \frac{2^{j+1} k\cdot (k+1) \dots (k+j) \ (2k+1)(2k+3)\dots (2k+2j-1)}{(k-1)!k \cdot (k+1)\dots (k+j)\ (l+1-k)!}.
\end{split}
\end{equation*}Hence
\begin{equation*}
\begin{split}
p(-2j)&=\frac{1}{1\cdot 3\dots (2j-1)\ 2^{n-1}}\sum_{k=1}^{l+1}(-1)^k\frac{ \ (2k+1)(2k+3)\dots (2k+2j-1)}{(k-1)!\ (l+1-k)!}\\
&=\frac{1}{1\cdot 3\dots (2j-1)\ 2^{n-1}}\sum_{k=1}^{l+1} \sum_{i=0}^j \frac{(-1)^kc_i k^i}{(k-1)!\ (l+1-k)!}\\
&=\frac{1}{1\cdot 3\dots (2j-1)\ 2^{n-1}} \sum_{i=0}^jc_i\sum_{k=1}^{l+1} \frac{ (-1)^kk^i}{(k-1)!\ (l+1-k)!}
\end{split}
\end{equation*}
Therefore in order to prove that $-2j, j=0,\dots,l-1$ are roots of $p$ it suffices to show that
\begin{equation*}
\sum_{k=1}^{l+1} \frac{(-1)^k k^i}{(k-1)!\ (l+1-k)!}=-\sum_{m=0}^l \frac{(-1)^m (m+1)^i}{m!\ (l-m)!}=0
\end{equation*}
for $i=0,\dots,j$. This will follow immediately if we show that for any $l\geq1$
\begin{equation}
\label{inductionone}
\sum_{m=0}^l \frac{(-1)^m m^i}{m!\ (l-m)!}=0
\end{equation}
for $i=0,\dots,j$.
We will prove \eqref{inductionone} by induction. For $i=0$ we have that for any $l \geq1$
\begin{equation*}
\sum_{m=0}^l \frac{(-1)^m }{m!\ (l-m)!}=\frac{1}{l!}\sum_{m=0}^l \binom{l}{m}(-1)^m=\frac{(1-1)^l}{l!}=0.
\end{equation*}
We will now assume that for any $l\geq 1$
\begin{equation}
\label{indhyp}
\sum_{m=0}^l \frac{(-1)^m m^i}{m!\ (l-m)!}=0
\end{equation}
for $i=0,\dots,j-1$. Then by \eqref{indhyp},
\begin{equation*}
\begin{split}
\sum_{m=0}^l \frac{(-1)^m m^j}{m!\ (l-m)!}&=\sum_{m=1}^l \frac{(-1)^m m^{j-1}}{(m-1)!\ (l-m)!}\\
&=\sum_{N=0}^{l-1}\frac{(-1)^{N+1} (1+N)^{j-1}}{N!\ (l-1-N)!}\\
&=\sum_{N=0}^{l-1}\frac{(-1)^{N+1} \sum_{i=0}^{j-1}\binom{j-1}{i}N^i}{N!\ (l-1-N)!}\\
&=\sum_{i=0}^{j-1}\binom{j-1}{i}\sum_{N=0}^{l-1}\frac{(-1)^{N+1} N^{i}}{N!\ (l-1-N)!}\\
&=0.
\end{split}
\end{equation*}
Hence we have shown that
\begin{equation*}
p(\alpha)=C(l)(1-\alpha)\alpha(\alpha+2)\dots(\alpha+2(l-1)).
\end{equation*}
Plugging this into \eqref{polynomial1} we see that $-C(l)$ is the coefficient of
the greatest degree monomial of the polynomial in \eqref{polynomial1}, that is,
\begin{equation*}
C(l)= -\text{coefficient of }\alpha^{l+1} =-\frac{(-1)^{l+1}(-1)^{l+1}} {(2l+1)! \ 2^{n-l-1}} =-\frac{2^{l+1}}{2^n (2l+1)!}.
\end{equation*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{fourier}] Without loss of generality fix $i=1$. Notice that for some constant $c$, $$\displaystyle{\widehat{K_{\alpha,n}^i}(\xi)=c\;\partial_1^{2n-1}|\xi|^{2n-3+\alpha}.}$$
To compute $\partial_1^{2n-1}|x|^\beta$,
for $\beta=2n-3+\alpha$, we will use the following formula from \cite{lz}:
\begin{equation*}
L(\partial)E_n=\sum_{k=0}^{n-1}\frac 1{2^k\;k!}\;\Delta^k L(x)\left(\frac 1 r\frac{\partial}{\partial r}\right)^{2n-1-k}E_n(r),
\end{equation*}
where $r=|x|$ and $L(x)=x_1^{2n-1}$. First notice that for $0\le k\le n-1$, we have
$$\Delta^k(x_1^{2n-1})=\binom{2n-1}{2k}(2k)!\;x_1^{2n-2k-1},$$
and one can check that
$$\left(\frac 1 r\frac{\partial}{\partial r}\right)^{m}r^\beta=\beta(\beta-2)\dots(\beta-2m+2)r^{\beta-2m}.$$
Therefore for $E_n=|x|^\beta$ and $\beta=2n-3+\alpha$,
\begin{equation*}
\begin{split}
\partial_1^{2n-1}|x|^\beta&=\sum_{k=0}^{n-1} \binom{2n-1}{2k}\frac{(2k)!}{2^k k!}x_1^{2n-2k-1}\beta (\beta-2)\dots(\beta-2(2n-1-k)+2)r^{\beta-2(2n-1-k)} \\
&=\frac{x_1}{|x|^{4n-2-\beta}}(2n-1)!\beta(\beta-2)\dots(\beta-2(n-2)) \cdot\\
&\quad\quad\quad\sum_{k=0}^{n-1} \frac{x_1^{2(n-k-1)}|x|^{2k}}{(2n-1-2k)!\ 2^k \ k!}(\beta-2(2n-2-(n-1)))\dots(\beta-2(2n-2-k))\\
&=c(n)\frac{x_1}{|x|^{2n+1-\alpha}}\sum_{k=0}^{n-1} \frac{x_1^{2(n-k-1)}|x|^{2k}}{(2n-1-2k)!\ 2^k\ k!}(-1+\alpha)(-3+\alpha)\dots(-2(n-k)+1+\alpha).
\end{split}
\end{equation*}
Therefore
\begin{equation}
\label{polyn}
\partial_1^{2n-1}|x|^\beta =c(n)\frac{x_1}{|x|^{2n+1-\alpha}}\sum_{k=0}^{n-1} a_k x_1^{2(n-k-1)}|x|^{2k},
\end{equation}
where $$a_k=(-1)^{n-k} \frac{(1-\alpha)(3-\alpha)\dots(2(n-k)-1-\alpha)}{(2n-1-2k)!\ 2^k \ k!}$$
We claim that the homogeneous polynomial of degree $2n-2$ in \eqref{polyn}, namely,
\begin{equation}
p(x_1,x_2)=\sum_{k=0}^{n-1} a_k x_1^{2(n-k-1)}|x|^{2k},
\end{equation}
has negative coefficients. Notice that
\begin{equation*}\begin{split}
p(x)&=\sum_{k=0}^{n-1}\;a_k \;x_1^{2(n-k-1)}\;(x_1^2+x_2^2)^{k}=\sum_{k=0}^{n-1}\sum_{j=0}^{k}a_k\binom{k}{j}x_1^{2(n-k+j-1)}x_2^{2(k-j)}\\&=\sum_{l=0}^{n-1}b_{2l}x_1^{2l}x_2^{2(n-1-l)},
\end{split}
\end{equation*}
where for $0\le l\le n-1$,
$$b_{2l}=\sum_{k=1}^{l+1}a_{n-k}\binom{n-k}{l+1-k}=\sum_{k=1}^{l+1}\frac{(-1)^k(1-\alpha)(3-\alpha)\dots(2k-1-\alpha)}{(2k-1)!\ 2^{n-k}\ (l+1-k)!\ (n-l-1)!}.$$
Applying now Lemma \ref{coeff}, we get that the coefficients $b_{2l}$, $0\le l\le n-1$, of the polynomial $p$ are negative.
\end{proof}
Now we are ready to prove the reproduction formula that will allow us to obtain the localization result that we need.
\begin{proof}[Proof of Lemma \ref{reproductionformula}] By Lemma \ref{fourier}, the Fourier transform of \eqref{repr} is
$$\widehat f(\xi)=\widehat\varphi_1(\xi)\frac{\xi_1}{|\xi|^{3-\alpha}}\frac{p(\xi_1,\xi_2)}{|\xi|^{2n-2}}+\widehat\varphi_2(\xi)\frac{\xi_2}{|\xi|^{3-\alpha}}\frac{p(\xi_2,\xi_1)}{|\xi|^{2n-2}},$$
where $p$ is some homogeneous polynomial of degree $2n-2$ which does not vanish outside the origin.
Define the operators $R_1, R_2$ associated to the kernels
$\displaystyle{\widehat r_1(\xi_1,\xi_2)=\frac{p(\xi_1,\xi_2)}{|\xi|^{2n-2}}}$ and \newline$\displaystyle{\widehat r_2(\xi_1,\xi_2)=\widehat r_1(\xi_2,\xi_1)}$ respectively.
Since $p$ is a homogeneous polynomial of degree $2n-2$, it can be decomposed as $$p(\xi_1,\xi_2)=\sum_{j=0}^{n-1}p_{2j}(\xi_1,\xi_2)|\xi|^{2n-2-2j},$$ with $p_{2j}$ being
homogeneous harmonic polynomials of degree $2j$ (see \cite[Section 3.1.2, p.~69]{stein}). Hence, the operators $R_i$, $i=1,2,$ can be written as
\begin{equation}\label{invertible}
R_if=cf+\mbox{p.v.}\frac{\Omega(x/|x|)}{|x|^2}*f,
\end{equation}
for some constant $c$ and $\Omega\in\mathcal{C}^\infty$ with zero average. Therefore, by \cite[Theorem 4.15, p.~82]{duandikoetxea} the operators $R_i$, $1\le i\le 2$,
are invertible and the inverse operators, say $S_i$, have the same form as $R_i$. This means that the operators $S_i$, $i=1,2$, with kernels
$\displaystyle{\widehat s_1(\xi_1,\xi_2)=\frac{|\xi|^{2n-2}}{p(\xi_1,\xi_2)}}$ and $\widehat s_2(\xi_1,\xi_2)=\widehat s_1(\xi_2,\xi_1)$ respectively,
can be written as in \eqref{invertible}.
Finally, setting $$\varphi_i=S_i(\Delta f)*\frac{x_i}{|x|^{3-\alpha}}$$ for $i=1,2,$ finishes the proof of the Lemma.
\end{proof}
\subsection{Localization}
In what follows, given a square $Q$, $\varphi_Q$ will denote an infinitely differentiable function supported on $Q$ and such that $\|\varphi_Q\|_\infty\le C$,
$\|\nabla\varphi_Q\|_\infty\le l(Q)^{-1}$ and $\|\Delta\varphi_Q\|_\infty\le l(Q)^{-2}$.
The localization lemma presented in the following is an extension of Lemma 14 in \cite{cmpt2} to the case $0<\alpha<1$.
\begin{lemma}\label{localization}
Let $T$ be a compactly supported distribution in ${\mathbb R}^d$ with growth $\alpha$, $0<\alpha<1$, such that $(x_i^{2n-1}/|x|^{2n-1+\alpha})*T \in L^\infty({\mathbb R}^d)$ for some $n\in\mathbb{N}$
and $1\le i\le 2$. Then $(x_i^{2n-1}/|x|^{2n-1+\alpha})*\varphi_QT\in L^\infty({\mathbb R}^d)$ and
$$\left\|\frac{x_i^{2n-1}}{|x|^{2n-1+\alpha}}*\varphi_QT\right\|_\infty\le C\left(\left\|\frac{x_i^{2n-1}}{|x|^{2n-1+\alpha}}*T\right\|_{\infty}+G_\alpha(T)\right),$$ for some positive constant $C$.
\end{lemma}
The next lemma states a sufficient condition for a test function to satisfy the normalization conditions in \eqref{normalization1}.
\begin{lemma}\label{nicecondition} Let $f_Q$ be a test function supported on a square $Q$, satisfying $\|\Delta f_Q\|_{L^1(Q)}\le C.$
Then,
$$\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{H^1({\mathbb R}^d)}\le l(Q)^\alpha,\,\,\,\mbox{for }\, i=1,2. $$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{nicecondition}]
We have to show that, for $i=1,2,$
\begin{equation}\label{L1}
\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}\le l(Q)^\alpha
\end{equation}
and
\begin{equation}\label{L1riesz}
\left\|R_j(\Delta\partial_if_Q)*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}\le l(Q)^\alpha,\;\;\;j=1,2,
\end{equation}
where $R_j$, $j=1,2$, is the $j$-th component of the Riesz operator with kernel $x_j/|x|^3$.
\begin{equation*}
\begin{split}
\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}&=\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{L^1(2Q)}+\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{L^1((2Q)^c)}\\*[7pt] &= A+B.
\end{split}
\end{equation*}
We estimate first the term $A$. By taking one derivative from $f$ to the kernel, using Fubini and the fact that $\|\Delta f_Q\|_{L^1}\le C$, we obtain
\begin{equation*}
\begin{split}
A=\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{L^1(2Q)}&\le\int_{2Q}\int_Q\frac{|\Delta f_Q(x)|}{|x-y|^{2-\alpha}}dxdy\le Cl(Q)^\alpha.
\end{split}
\end{equation*}
To estimate the term $B$ we bring the Laplacian from $f_Q$ to the kernel $|x|^{\alpha-1}$ and then use Fubini, the Cauchy-Schwarz inequality and a well
known inequality of Maz'ya, \cite[1.1.4, p.~15]{mazya} and \cite[1.2.2, p.~24]{mazya}, stating that
$\|\nabla f_Q\|_2\le C\|\Delta f_Q\|_1$. Hence,
\begin{equation*}
\begin{split}
\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{L^1((2Q)^c)}&\le C \int_{(2Q)^c}\int_Q\frac{|\partial_i f_Q(x)|}{|x-y|^{3-\alpha}}dxdy \\*[7pt]&\le C\; \|\nabla f_Q\|_{L^1(Q)}\;l(Q)^{\alpha-1}\le C\; \|\nabla f_Q\|_2 \;l(Q)^\alpha\le C\;l(Q)^\alpha,
\end{split}
\end{equation*}
the last inequality coming from the hypothesis $\|\Delta f_Q\|_{L^1}\le C$.
This finishes the proof of \eqref{L1}. To prove \eqref{L1riesz}, we remark that
\begin{equation}\label{remark}
\frac{x_j}{|x|^3}*\frac1{|x|^{1-\alpha}}=c\frac{x_j}{|x|^{2-\alpha}},
\end{equation} for some constant $c$. This can be seen by computing the Fourier transform of the above kernels. Using this fact, we obtain that
\begin{equation*}
\begin{split}
\left\|R_j(\Delta\partial_if_Q)*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}&=\left\|\Delta\partial_if_Q*\frac{x_j}{|x|^3}*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}\\*[7pt]&=c\left\|\Delta\partial_if_Q*\frac{x_j}{|x|^{2-\alpha}}\right\|_{L^1({\mathbb R}^d)}\le Cl(Q)^\alpha,
\end{split}
\end{equation*}
where the last integral can be estimated in an analogous way to \eqref{L1}. This finishes the proof of \eqref{L1riesz} and the lemma.
\end{proof}
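For the reader's convenience, the Fourier transform computation behind \eqref{remark} is, schematically, the following (a sketch in the plane, which is where the kernel $x_j/|x|^{3}$ above is the Riesz kernel; multiplicative constants are suppressed):
$$\widehat{\frac{1}{|x|^{1-\alpha}}}(\xi)=\frac{c_1}{|\xi|^{1+\alpha}},\qquad
\widehat{\frac{x_j}{|x|^{3}}}(\xi)=c_2\,\frac{\xi_j}{|\xi|},\qquad
\widehat{\frac{x_j}{|x|^{2-\alpha}}}(\xi)=c_3\,\frac{\xi_j}{|\xi|^{2+\alpha}},$$
so the product of the first two transforms is a constant multiple of the third.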
For the proof of Lemma \ref{localization} we need the following preliminary lemma.
\begin{lemma}\label{prelocalization}
Let $T$ be a compactly supported distribution in ${\mathbb R}^d$ with growth $\alpha$. Then, for each coordinate $i$, the distribution $(x_i^{2n-1} / |x|^{2n-1+\alpha})
* \varphi_Q T$ is an integrable function in the interior of $\frac 1 4 Q$ and
$$
\int_{\frac 1 4 Q}\left|\left(\frac{x_i^{2n-1}}{|x|^{2n-1+\alpha}}*\varphi_QT\right)(y)\right|dy\leq C \, G_\alpha(T)\;l(Q)^2,
$$
where $C$ is a positive constant.
\end{lemma}
For $\alpha=1$ the proof of Lemma \ref{prelocalization} can be found in \cite{cmpt2}. In $\mathbb{R}^d$, for $n=1$ and $0<\alpha<d$, the proof is given in \cite{tams}.
Although the scheme of our proof is the same as in the papers cited above, several difficulties arise due to the fact that we are considering more general kernels, namely kernels involving non-integer indices $\alpha$ and $n\in\mathbb{N}$.
For the rest of the section we will assume, without loss of generality, that $i=1$ and we will write $K_1(x)=x_1^{2n-1}/|x|^{2n-1+\alpha}$.
\begin{proof}[Proof of Lemma \ref{prelocalization}]
We will
prove that $K_1* \varphi_Q T$~is in
$L^{p}(2Q)$ for each $p$ in $1\le p<2.$ Indeed, fix any $q$ satisfying $2<q<\infty$ and call $p$ the dual exponent, so that $1<p<2$. We need to estimate the action of
$K_1 * \varphi_Q T$ on functions $\psi \in {\mathcal C}^\infty_0(2Q)$ in
terms of $\|\psi\|_{q} $. We clearly have
$$
\langle K_1 * \varphi_Q T, \psi\rangle = \langle T, \varphi_Q(K_1 * \psi)\rangle.
$$
We claim that, for an appropriate
positive constant $C $, the test function
\begin{equation*}
f_Q=\frac{\varphi_Q(K_1 * \psi)}{C \,l(Q)^{\frac{2}{p}-\alpha} \|\psi\|_{q}}
\end{equation*}
satisfies the normalization inequalities \eqref{normalization1} in
the definition of $G_\alpha(T)$. Once this is proved, by the definition of $G_\alpha(T)$ we get that $|\langle K_1 * \varphi_Q T, \psi\rangle | \le C\,
l(Q)^{\frac{2}{p}}\|\psi\|_{q} \,G_\alpha(T),$
and therefore $\|K_1 * \varphi_Q T \|_{L^{p}(2Q)} \le C\,
l(Q)^{\frac{2}{p}}G_\alpha(T).$ Hence
\begin{equation*}
\begin{split}
\frac{1}{|\frac{1}{4}Q|}\int_{\frac{1}{4} Q} |(K_1 * \varphi_Q
T)(x)|\,dx &\le 16\frac{1}{|Q|}\int_Q |(K_1 * \varphi_Q
T)(x)|\,dx \\*[7pt]
& \le 16\left(\frac{1}{|Q|}\int_Q |(K_1
* \varphi_Q T)(x)|^{p} \,dx\right)^{\frac{1}{p}}\\*[7pt]
& \le C\,G_\alpha(T),
\end{split}
\end{equation*}
which proves Lemma \ref{prelocalization}.
Notice that since $\Delta(\varphi_Q(K_1 * \psi))$ is not in $L^1(Q),$ to prove the claim we cannot use Lemma \ref{nicecondition}. Therefore we have to check that, for $i=1,2,$
\begin{equation*}
\left\|\Delta\partial_if_Q*\frac1{|x|^{1-\alpha}}\right\|_{H^1({\mathbb R}^d)}\le C\;l(Q)^\alpha.
\end{equation*}
This is equivalent to checking conditions
\begin{equation}\label{normaL1}
\left\|\Delta\partial_i\left(\varphi_Q(K_1*\psi)\right)*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}\le C\;l(Q)^{\frac2p}\|\psi\|_q
\end{equation}
and
\begin{equation}\label{normaL1riesz}
\left\|R_j(\Delta\partial_i\left(\varphi_Q(K_1*\psi)\right))*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}\le C\;l(Q)^{\frac2p}\|\psi\|_q
\end{equation}
for $i,j=1,2$.
By Fubini and H\"older,
\begin{equation}\label{pre}
\int_Q|(K_1*\psi)(y)|dy\le \int_{2Q}|\psi(z)|\int_Q\frac{dydz}{|z-y|^{\alpha}}\le C\|\psi\|_ql(Q)^{\frac{2}{p}+2-\alpha}.
\end{equation}
In the same way one can obtain
\begin{equation}\label{pre2}
\int_Q|(\partial_iK_1*\psi)(y)|dy\le \int_{2Q}|\psi(z)|\int_Q\frac{dydz}{|z-y|^{1+\alpha}}\le C\|\psi\|_ql(Q)^{\frac{2}{p}+1-\alpha},\;\;i=1,2.
\end{equation}
To check \eqref{normaL1} we compute first the $L^1$-norm in $(2Q)^c$ by bringing all derivatives to the kernel $|x|^{\alpha-1}$, using Fubini and \eqref{pre}. Then
\begin{equation}\label{outside}
\begin{split}
\left\|\Delta\partial_i\left(\varphi_Q(K_1*\psi)\right)*\frac1{|x|^{1-\alpha}}\right\|_{L^1((2Q)^c)}&\le
C\int_{Q}|(K_1*\psi)(y)|\int_{(2Q)^c}\frac{dxdy}{|y-x|^{4-\alpha}}\\*[7pt]&\le C\|\psi\|_ql(Q)^{\frac{2}{p}}.
\end{split}
\end{equation}
Now we are left to compute the $L^1$-norm in $2Q$ of the integral in \eqref{normaL1}. For this, we bring the Laplacian to the kernel $|x|^{\alpha-1}$.
Since for $i=1,2,$ we clearly have $\partial_i\left(\varphi_Q(K_1*\psi)\right)=\partial_i\varphi_Q(K_1*\psi)+\varphi_Q\partial_i(K_1*\psi)$, adding and subtracting
some terms to get integrability, we get
\begin{equation}\label{inside}
\begin{split}
\left\|\Delta\partial_i\left(\varphi_Q(K_1*\psi)\right)*\frac1{|x|^{1-\alpha}}\right\|_{L^1(2Q)} &\le
C\int_{2Q}\left|\int_Q\frac{(\varphi_Q(y)-\varphi_Q(x))(\partial_iK_1*\psi)(y)}{|y-x|^{3-\alpha}}dy\right|dx\\*[7pt] &+
C\int_{Q}|\varphi_Q(x)|\left|\left(\Delta\partial_i K_1*\psi*\frac1{|y|^{1-\alpha}}\right)(x)\right|dx\\*[7pt] &+
C\int_Q\left|\int_{Q^c}\frac{\varphi_Q(x)(\partial_iK_1*\psi)(y)}{|y-x|^{3-\alpha}}dy\right|dx\\*[7pt] &+
C\int_{2Q}\left|\int_Q\frac{(\partial_i\varphi_Q(y)-\partial_i\varphi_Q(x))(K_1*\psi)(y)}{|y-x|^{3-\alpha}}dy\right|dx\\*[7pt] &+
C\int_{Q}|\partial_i\varphi_Q(x)|\left|\left(\Delta K_1*\psi*\frac1{|y|^{1-\alpha}}\right)(x)\right|dx\\*[7pt] &+
C\int_Q\left|\int_{Q^c}\frac{\partial_i\varphi_Q(x)(K_1*\psi)(y)}{|y-x|^{3-\alpha}}dy\right|dx\\*[7pt] &= A_1+A_2+A_3+A_4+A_5+A_6,
\end{split}
\end{equation}
the last identity being a definition for $A_l$, $1\le l\le 6$.
The mean value theorem, Fubini and \eqref{pre2} give us
$$A_1\le C l(Q)^{-1}\int_{Q}|(\partial_iK_1*\psi)(y)|\int_{2Q}\frac1{|y-x|^{2-\alpha}}dx\;dy\le C\|\psi\|_ql(Q)^{\frac{2}{p}}.$$
The same reasoning, but using \eqref{pre} instead of \eqref{pre2}, gives us $A_4\le C\|\psi\|_ql(Q)^{\frac{2}{p}}.$
We deal now with the term $A_2$. By Lemma \ref{fourier}, taking the Fourier transform of the convolution $\Delta\partial_i K_1*\psi*\frac1{|y|^{1-\alpha}}$, one sees that
$$\widehat{\left(\Delta\partial_i K_1*\psi*\frac1{|y|^{1-\alpha}}\right)}(\xi)=c\frac{\xi_i\xi_1p(\xi_1,\xi_2)}{|\xi|^{2n}}\widehat{\psi}(\xi).$$
Therefore, since the homogeneous polynomial $\xi_i\xi_1p(\xi_1,\xi_2)$, of degree $2n$, does not vanish outside the origin, by \cite[Theorem 4.15, p.~82]{duandikoetxea} we obtain that
$$\left(\Delta\partial_i K_1*\psi*\frac1{|y|^{1-\alpha}}\right)(x)=c\psi(x)+cS_0(\psi)(x),$$
for some constant $c$ and some smooth homogeneous Calder\'on-Zygmund operator $S_0$.
Now using H\"older's inequality and the fact that Calder\'on-Zygmund operators preserve $L^q({\mathbb R}^d)$, $1<q<\infty$, we get $A_2\le Cl(Q)^{2/p}\|\psi\|_q$.
To estimate $A_3$, notice that $\varphi_Q$ is supported on $Q$, therefore
\begin{equation*}
\begin{split}
A_3&\le C \int_Q\left|\int_{3Q\setminus Q}\frac{(\varphi_Q(x)-\varphi_Q(y))(\partial_iK_1*\psi)(y)}{|y-x|^{3-\alpha}}dy\right| dx
\\*[7pt] & +C \int_Q|\varphi_Q(x)|\int_{(3Q)^c}\frac{\left|(\partial_iK_1*\psi)(y)\right|}{|y-x|^{3-\alpha}}dy\;dx=A_{31}+A_{32}.
\end{split}
\end{equation*}
For $A_{31}$ we use the mean value theorem and argue as in the estimate of $A_1$. We deal now with $A_{32}$:
\begin{equation*}
\begin{split}
A_{32}&\le C\int_Q\int_{(3Q)^{c}}\frac1{|y-x|^{3-\alpha}}\int_{2Q}\frac{|\psi(z)|}{|z-y|^{1+\alpha}}dz\;dy\;dx\\*[7pt]&
\le C l(Q)^{-1-\alpha}\|\psi\|_1\int_Q\int_{(3Q)^{c}}\frac1{|y-x|^{3-\alpha}}dy\;dx\\*[7pt]&\le Cl(Q)^{-1-\alpha}\|\psi\|_ql(Q)^{\frac 2 p}l(Q)^{1+\alpha}=C l(Q)^{\frac{2}{p}}\|\psi\|_q,
\end{split}
\end{equation*} using H\"older's inequality.
To estimate the terms $A_5$ and $A_6$ one argues in a similar manner; we leave the details to the reader. This finishes the proof of \eqref{normaL1}.
We are still left with checking that condition \eqref{normaL1riesz} holds. Notice that by \eqref{remark},
\begin{equation*}
\begin{split}
\left\|R_j(\Delta\partial_i\left(\varphi_Q(K_1*\psi)\right))*\frac1{|x|^{1-\alpha}}\right\|_{L^1({\mathbb R}^d)}&
=c\left\|\Delta\partial_i\left(\varphi_Q(K_1*\psi)\right)*\frac{x_j}{|x|^{2-\alpha}}\right\|_{L^1({\mathbb R}^d)}\\*[7pt]&=B_1+B_2,
\end{split}
\end{equation*}
where $B_1$ and $B_2$ denote the above $L^1$ norm in $(2Q)^c$ and in $2Q$ respectively. To estimate $B_1$ we transfer all derivatives to the kernel $x_j/|x|^{2-\alpha}$
and argue as in \eqref{outside}. The estimate of $B_2$ follows the same reasoning as \eqref{inside}.
\end{proof}
For the reader's convenience, we repeat the main points of the proof of the localization lemma; for more details see \cite{cmpt2}.
\begin{proof}[Proof of Lemma \ref{localization}]
Let $x\in(\frac32 Q)^c$. Since $|(K_1*\varphi_QT)(x)|=l(Q)^{-\alpha}|\langle T, l(Q)^\alpha\varphi_Q(y)K_1(x-y)\rangle|,$
by \eqref{ourgrowth} and Lemma \ref{nicecondition}, the required estimate of the $L^\infty$-norm of the function $K_1*\varphi_QT$ is equivalent to checking that
$f_Q(y)=l(Q)^{\alpha}\varphi_Q(y)K_1(x-y)$ satisfies $\|\Delta f_Q\|_{L^1(Q)}\le C$, which is easily seen to hold in this case.
If $x\in\frac32 Q$, the boundedness of $\varphi_Q$ and $T*K_1$ implies that
$$|(K_1*\varphi_QT)(x)|\le |(K_1*\varphi_QT)(x)-\varphi_Q(x)(K_1*T)(x)|+\|\varphi_Q\|_\infty\|K_1*T\|_\infty.$$
We consider now $\psi_Q\in{\mathcal C}_0^\infty({\mathbb R}^d)$ such that $\psi_Q\equiv 1$ in $2Q$, $\psi_Q\equiv 0$ in $(4Q)^c$, $\|\psi_Q\|_\infty\le C$,
$\|\nabla\psi_Q\|_\infty\le Cl(Q)^{-1}$ and $\|\Delta\psi_Q\|_\infty\le Cl(Q)^{-2}$. Set $K_1^x(y)=K_1(x-y)$. Then,
\begin{equation}
\label{oldfor}
\begin{split}
|(K_1*\varphi_QT)(x)-\varphi_Q(x)(K_1*T)(x)|&\leq|\langle T,\psi_Q(\varphi_Q-\varphi_Q(x))K_1^x\rangle|\\*[7pt]&
+\|\varphi_Q \|_{\infty}|\langle T,(1-\psi_Q)K_1^x\rangle|=A+B.
\end{split}
\end{equation}
In fact, for the first term on the right hand side of \eqref{oldfor} to make sense, one needs to resort to a standard regularization process, whose details may be found in \cite[Lemma 12]{mpv2}, for example.
The estimate of the term $A$ is a consequence of the $\alpha$-growth of the distribution (see \eqref{ourgrowth}) and Lemma \ref{nicecondition},
because the mean value theorem implies that $f_Q=l(Q)^\alpha\psi_Q(\varphi_Q-\varphi_Q(x))K_1^x$ satisfies $\|\Delta f_Q\|_1\le C$.
We turn now to $B$. By Lemma \ref{prelocalization}, there exists a Lebesgue point of $K_1*\psi_QT$, $x_0\in Q$, such that $|(K_1*\psi_QT)(x_0)|\le CG_\alpha(T)$.
Then $|(K_1*(1-\psi_Q)T)(x_0)|\le C(\|K_1*T\|_\infty+G_\alpha(T))$, which implies
$$B\le C|\langle T,(1-\psi_Q)(K_1^x-K_1^{x_0})\rangle|+C(\|K_1*T\|_\infty+G_\alpha(T)).$$
To estimate $|\langle
T,(1-\psi_Q)(K_1^x-K_1^{x_0})\rangle|$, we decompose
${\mathbb R}^d\setminus \{x\}$ into a union of rings $$N_j=\{z\in
{\mathbb R}^d:2^j\,l(Q)\leq|z-x|\leq 2^{j+1}\,l(Q)\},\quad j\in\mathbb{Z},$$
and consider functions $\varphi_j$ in ${\mathcal
C}^\infty_0({\mathbb R}^d)$, with support contained in
$$N^*_j=\{z\in
{\mathbb R}^d:2^{j-1}\,l(Q)\leq|z-x|\leq 2^{j+2}\,l(Q)\},\quad
j\in\mathbb{Z},$$ such that $\|\varphi_j\|_\infty\le C$, $\|\nabla\varphi_j\|_\infty\leq C
\,(2^j\,l(Q))^{-1}$, $\|\Delta\varphi_j\|_{\infty}\le C\,(2^j\,l(Q))^{-2}$ and $\sum_j\varphi_j=1$ on
${\mathbb R}^d\setminus\{x\}$. Since $x\in\frac 3 2 Q$, the smallest ring
$N^*_j$ that intersects $(2Q)^c$ is $N^*_{-3}$. Therefore we have
\begin{equation*}
\begin{split}
|\langle T,(1-\psi_Q)(K_1^{x}-K_1^{x_0})\rangle|
&=\left|\left\langle T,\sum_{j\geq -3}\varphi_j(1-\psi_Q)(K_1^{x}-K_1^{x_0})\right\rangle\right|\\*[7pt]
&\leq\left|\left\langle T,\sum_{j\in I}\varphi_{j}(1-\psi_Q)(K_1^{x}-K_1^{x_0})\right\rangle \right|\\*[7pt]
&\quad+\sum_{j\in J}|\langle T,\varphi_{j}(K_1^{x}-K_1^{x_0})\rangle|,
\end{split}
\end{equation*}
where $I$ denotes the set of indices $j\geq -3$ such that the
support of $\varphi_j$ intersects $4Q$ and $J$ denotes the remaining
indices, namely those $j \geq -3 $ such that $\varphi_j$ vanishes
on $4Q$. Notice that the cardinality of $I$ is bounded by a
positive constant.
Set
$$g =C\,l(Q)^\alpha\sum_{j\in I}\varphi_j(1-\psi_Q)\,(K_1^{x}-K_1^{x_0}),$$
and for $j\in J$
$$g_j=C\,2^j(2^{j}\,l(Q))^\alpha\,\varphi_j\,(K_1^{x}-K_1^{x_0}).$$
We leave it to the reader to verify that the test functions $g$ and $g_j$, $j\in J$,
satisfy the normalization inequalities \eqref{normalization1} in
the definition of $G_\alpha(T)$ for an appropriate choice of the (small)
constant $C$ (in fact one can check that the condition in Lemma \ref{nicecondition} holds for these functions). Once this is available, using the $\alpha$ growth
condition of $T$ we obtain
\begin{equation*}
\begin{split}
|\langle T,(1-\psi_Q)(K_1^{x}-K_1^{x_0})\rangle |&\leq C l(Q)^{-\alpha}|\langle T,g\rangle|+ C \sum_{j\in J} 2^{-j}(2^jl(Q))^{-\alpha}|\langle T,g_j\rangle |\\*[7pt]
&\leq C\,G_\alpha(T) + C\sum_{j\geq -3}2^{-j}\,G_\alpha(T)\leq C\,G_\alpha(T),
\end{split}
\end{equation*}
which completes the proof of Lemma \ref{localization}.
\end{proof}
\section{Relationship between the capacities $\gamma_\alpha^n$ and non linear potentials}\label{secoutline}
This section will complete the proof of Theorem \ref{main} by showing the equivalence between the capacities $\gamma_{\alpha,+}^n$ and $C_{\frac23(2-\alpha),\frac 3 2}$.
For our purposes, the description of Riesz capacities in terms of Wolff potentials is more useful than the definition of $C_{s,p}$ in \eqref{wolffcap}.
The Wolff potential of a positive Radon measure $\mu$ is defined by
$$W^\mu_{s,p}(x)=\int_0^\infty\left(\frac{\mu(B(x,r))}{r^{2-sp}}\right)^{q-1}\frac{dr}{r},\;\;x\in{\mathbb R}^d.$$
The Wolff energy of $\mu$ is $$E_{s,p}(\mu)=\int_{{\mathbb R}^d}W^\mu_{s,p}(x)d\mu(x).$$
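In the case relevant to Theorem \ref{main}, namely $s=\frac 23(2-\alpha)$ and $p=\frac 32$ (so that $q=3$ and $2-sp=\alpha$), the Wolff potential takes the form
$$W^\mu_{\frac 23(2-\alpha),\frac 32}(x)=\int_0^\infty\left(\frac{\mu(B(x,r))}{r^{\alpha}}\right)^{2}\frac{dr}{r}.$$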
A well known theorem of Wolff (see \cite{adamshedberg}, Theorem 4.5.4, p. 110) asserts that
\betaegin{equation}{\lambda}abel{wolffineq}
C^{-1}\sup_\mu\frac1{E_{s,p}(E)^{p-1}}{\lambda}e C_{s,p}(E){\lambda}e C\sup_\mu\frac1{E_{s,p}(E)^{p-1}},
\end{equation}
the supremum taken over the probability measures $\mu$ supported on $E$. Here $C$ stands for a positive constant depending only on $s$, $p$ and the dimension.
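For orientation, it may help to record what these objects become for the indices relevant to Theorem \ref{main}; here we assume, as in the previous sections, that $q$ denotes the exponent conjugate to $p$, and the display below is only a specialization of the definitions above:
$$
s=\tfrac23(2-\alpha),\qquad p=\tfrac32,\qquad q-1=\frac{1}{p-1}=2,\qquad 2-sp=\alpha,
$$
so that
$$
W^\mu_{s,p}(x)=\int_0^\infty\left(\frac{\mu(B(x,r))}{r^{\alpha}}\right)^{2}\frac{dr}{r}
\qquad\text{and}\qquad
C_{s,p}(E)\approx\sup_\mu\frac1{E_{s,p}(\mu)^{1/2}},
$$
the supremum again being taken over probability measures supported on $E$.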
To understand the relationship between the capacities $\gamma_{\alpha}^n$ and non-linear potentials, we need to recall the
characterization of these capacities in terms of the symmetrization method.
Let $\mu$ be a positive measure and $0<\alpha<1$. For $x\in\mathbb{R}^d$ set
$$p_{\alpha,n}^2(\mu)(x)=\iint p_{\alpha,n}(x,y,z)\,d\mu(y)\,d\mu(z),$$
$$M_\alpha\mu(x)=\sup_{r>0}\frac{\mu(B(x,r))}{r^\alpha}$$ and
$$U_{\alpha,n}^\mu(x)=M_\alpha\mu(x)+p_{\alpha,n}^2(\mu)(x).$$
We denote the energy associated to this last potential by $$\mathcal{E}_{\alpha,n}(\mu)=\int_{\mathbb{R}^d}U_{\alpha,n}^\mu(x)\,d\mu(x).$$
Notice that Corollary \ref{permalfa} states that for any $n\in\mathbb{N}$, given three distinct points $x,y,z\in\mathbb{R}^d$, $p_{\alpha,n}(x,y,z)\approx p_{\alpha,1}(x,y,z)$. Hence, for any $n\in\mathbb{N}$,
\begin{equation}\label{comparabilitympv}
{\mathcal E}_{\alpha,n}(\mu)\approx \mathcal{E}_{\alpha,1}(\mu).
\end{equation}
Recall from \cite[Lemma 4.1]{mpv} that for a compact set $K\subset\mathbb{R}^d$ and $0<\alpha<1$, $$\gamma_{\alpha,+}^1(K)\approx\sup_{\mu}\frac1{\mathcal{E}_{\alpha,1}(\mu)},$$
the supremum being taken over the probability measures $\mu$ supported on $K$.
Adapting the proof of Lemma 4.1 in \cite{mpv} to our situation (using the reproduction formula from Lemma \ref{reproductionformula} and \eqref{ourgrowth}), we get that
$$\gamma_{\alpha,+}^n(K)\approx\sup_{\mu}\frac1{\mathcal{E}_{\alpha,n}(\mu)},$$ where the supremum is taken over the probability measures $\mu$ supported on $K$.
The explanation given in Step 1 of Section \ref{secsketch} implies that $$\gamma_{\alpha}^n(K)\approx\gamma_{\alpha,+}^n(K),$$ hence we deduce that $$\gamma_{\alpha}^n(K)\approx \sup_{\mu}\frac1{\mathcal{E}_{\alpha,1}(\mu)}.$$
Lemma 4.2 in \cite{mpv} shows that for any positive Radon measure $\mu$, the energies $\mathcal{E}_{\alpha,1}(\mu)$ and $E_{\frac 2 3(2-\alpha),\frac 3 2}(\mu)$ are comparable.
Now Wolff's inequality \eqref{wolffineq}, with $s=2(2-\alpha)/3$ and $p=3/2$ (see the proof of the main theorem in \cite[p.~221]{mpv}), finishes the proof of Theorem \ref{main}.
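For the reader's convenience, the chain of comparabilities established in this section can be summarized as follows (the final passage from the Wolff energy to $C_{\frac23(2-\alpha),\frac32}$, with the exponent $p-1=\frac12$ from \eqref{wolffineq}, is carried out exactly as in \cite[p.~221]{mpv}):
$$
\gamma_{\alpha}^n(K)\;\approx\;\gamma_{\alpha,+}^n(K)\;\approx\;\sup_{\mu}\frac1{\mathcal{E}_{\alpha,n}(\mu)}\;\approx\;\sup_{\mu}\frac1{\mathcal{E}_{\alpha,1}(\mu)}\;\approx\;\sup_{\mu}\frac1{E_{\frac23(2-\alpha),\frac32}(\mu)},
$$
the suprema being taken over probability measures supported on $K$.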
\section{Rectifiability and $L^2$-boundedness of $T_n$}\label{rect}
Recalling \eqref{permdef} and \eqref{permdef2}, for any Borel measure $\mu$ we define
\begin{equation}
\label{permtriple}
p_{1,n}(\mu)=\iiint p_{1,n}(x,y,z)\, d \mu (x)\,d \mu (y)\, d \mu (z).
\end{equation}
The following lemma relates the finiteness of $p_{1,n}(\mu)$ to the $L^2(\mu)$-boundedness of the operator $T_n$.
\begin{lemma}
\label{mmvcl2}
Let $\mu$ be a continuous positive Radon measure in $\mathbb{R}^d$ with linear growth. If the operator $T_n$ is bounded in $L^2(\mu)$, then there exists a constant $C$ such
that for any ball $B$, $$\iiint_{B^3}p_{1,n}(x,y,z)\, d \mu (x)\,d \mu (y)\,d \mu (z) \leq C \,{\rm diam} (B).$$
\end{lemma}
The proof of Lemma \ref{mmvcl2} can be found in \cite[Lemma 2.1]{mmv}. There it is stated and proved for the Cauchy transform, but the proof is identical in our case.
When $p_{1,n}(x,y,z)$ is replaced by the square of the Menger curvature $c(x,y,z)$ (recall \eqref{meng} and \eqref{mel}), the triple integral in \eqref{permtriple} is called the curvature of $\mu$ and is denoted by $c^2(\mu)$. A famous theorem of David and L\'eger \cite{Leger}, which was also one of the cornerstones in the proof of Vitushkin's conjecture by David in \cite{david}, states that if $E \subset \mathbb{R}^d$ has finite length and $c^2({\mathcal H}^1 \lfloor E)< \infty$, then $E$ is rectifiable. Here we obtain the following generalization of the David--L\'eger theorem.
\begin{teo}
\label{legerperm}
Let $E \subset \mathbb{R}^d$ be a Borel set with $0<\mathcal{H}^1(E)<\infty$. If
$p_{1,n}(\mathcal{H}^1 \lfloor E)<\infty$, then the set $E$ is rectifiable.
\end{teo}
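As an aside, and only for orientation (the precise definitions used in this paper are the ones recalled in \eqref{meng} and \eqref{mel}), the Menger curvature of three distinct points is the reciprocal of the radius of the circle passing through them,
$$
c(x,y,z)=\frac{1}{R(x,y,z)},
$$
so that $c(x,y,z)=0$ exactly when $x,y,z$ are collinear; this matches the behaviour of the permutations $p^i_{n}$ on collinear triples computed in the Appendix.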
\begin{remarkthep} We first note that statement (i) of Theorem \ref{recthm1} follows immediately from Lemma \ref{mmvcl2} and Theorem \ref{legerperm}. Theorem \ref{legerperm} was earlier proved in \cite{cmpt} for $d=2$. We stress that the constraint $d=2$ in \cite[Theorem 1.2 (i)]{cmpt} is essentially used in the proofs of \cite[Proposition 2.1]{cmpt} and \cite[Lemma 2.3]{cmpt}, which only go through in the plane. Nevertheless, in no other instance do the arguments in \cite{cmpt} depend on the ambient space being $2$-dimensional. Proposition \ref{posperm} bypasses this issue by using a completely different reasoning, and generalizes \cite[Proposition 2.1]{cmpt} and \cite[Lemma 2.3]{cmpt} to Euclidean spaces
of arbitrary dimension. Furthermore, it removes the assumption that the triangles have comparable sides, which was also essential in the proofs of \cite[Proposition 2.1]{cmpt}
and \cite[Lemma 2.3]{cmpt}. With Proposition \ref{posperm} at our disposal we obtain (i) by following the arguments from \cite[Sections 3-7]{cmpt}
without any changes. In several cases in \cite[Sections 3-6]{cmpt} there are references to several components from \cite{Leger}, but
this does not create any problem, since the proof in \cite{Leger} holds in any $\mathbb{R}^d$.
The proof of (ii) from Theorem \ref{recthm1} follows, as in (i), from Proposition \ref{posperm} and \cite[Section 8]{cmpt}, as the arguments there do not depend on the dimension of the ambient space.
\end{remarkthep}
\appendix
\section{Proofs of Propositions \ref{lemmaperm} and \ref{posperm}}\label{secpermap}
For simplicity we assume that $n$ is odd. Then for $0<\alpha\leq 1$
\begin{equation*}
K^i_{\alpha,n}(x)=\frac{x_i^{n}}{|x|^{n+\alpha}}, \quad x =(x_1,\dots,x_d) \in \mathbb{R}^d \setminus \{0\}.
\end{equation*}
\begin{proof}[Proof of Proposition \ref{lemmaperm}]
Write $a=y-x$ and $b=z-y$; then $a+b=z-x$. Without loss of generality we can assume that $|a|\leq |b|\leq|a+b|$. A simple computation yields
\begin{equation}
\label{permi}
\begin{split}
&p^i_{\alpha,n}(x,y,z)\\
&\quad= K^i_{\alpha,n}(x-y)\,K^i_{\alpha,n}(x-z) + K^i_{\alpha,n}(y-x)\,K^i_{\alpha,n}(y-z) + K^i_{\alpha,n}(z-x)\,K^i_{\alpha,n}(z-y)\\
&\quad= K^i_{\alpha,n}(-a)\,K^i_{\alpha,n}(-a-b) + K^i_{\alpha,n}(a)\,K^i_{\alpha,n}(-b) + K^i_{\alpha,n}(a+b)\,K^i_{\alpha,n}(b)\\
&\quad=K^i_{\alpha,n}(a+b)K^i_{\alpha,n}(a)+K^i_{\alpha,n}(a+b)K^i_{\alpha,n}(b)-K^i_{\alpha,n}(a)K^i_{\alpha,n}(b)\\
&\quad=\frac{(a_i+b_i)^na_i^n|b|^{n+\alpha}+(a_i+b_i)^nb_i^n|a|^{n+\alpha}-a_i^nb_i^n|a+b|^{n+\alpha}}{|a|^{n+\alpha}|b|^{n+\alpha}|a+b|^{n+\alpha}}.
\end{split}
\end{equation}
If $a_ib_i=0$ the proof is immediate. Take for example $a_i=0$. Then we trivially obtain
\begin{equation}
\label{aibizero}
p^i_{\alpha,n}(x,y,z)=\frac{b_i^{2n}}{|b|^{n+\alpha}|a+b|^{n+\alpha}} \approx \frac{M_i^{2n}}{L(x,y,z)^{2\alpha+2n}}.
\end{equation}
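For the reader's convenience, the first equality in \eqref{aibizero} follows by simply setting $a_i=0$ in the numerator of \eqref{permi}:
$$
\Big[(a_i+b_i)^na_i^n|b|^{n+\alpha}+(a_i+b_i)^nb_i^n|a|^{n+\alpha}-a_i^nb_i^n|a+b|^{n+\alpha}\Big]_{a_i=0}=b_i^{2n}|a|^{n+\alpha},
$$
and the factor $|a|^{n+\alpha}$ cancels with the corresponding factor in the denominator.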
To prove the upper bound inequality in \eqref{icomparability} we distinguish two cases.\newline
{\em Case} $a_ib_i>0:$ Without loss of generality assume $a_i> 0$ and $b_i> 0$. In case $a_i<0$ and $b_i<0$,
\begin{equation*}
\begin{split}
&(a_i+b_i)^na_i^n|b|^{n+\alpha}+(a_i+b_i)^nb_i^n|a|^{n+\alpha}-a_i^nb_i^n|a+b|^{n+\alpha}\\
&\quad \quad=(|a_i|+|b_i|)^n|a_i|^n|b|^{n+\alpha}+(|a_i|+|b_i|)^n|b_i|^n|a|^{n+\alpha}-|a_i|^n|b_i|^n|a+b|^{n+\alpha}
\end{split}
\end{equation*}
and thus it can be reduced to the case where both coordinates are positive.
Notice that since $|a|\le |b|\le |a+b|$, $a_i\le |a|$ and $0<\alpha<1$,
\begin{equation*}
\begin{split}
p^i_{\alpha,n}(x,y,z)&= \frac{(a_i+b_i)^nb_i^n}{|b|^{n+\alpha}|a+b|^{n+\alpha}}+\frac{a_i^n\left((a_i+b_i)^n|b|^{n+\alpha}-b_i^n|a+b|^{n+\alpha}\right)}{|a|^{n+\alpha}|b|^{n+\alpha}|a+b|^{n+\alpha}}\\
&\le\frac{1}{|b|^{\alpha}|a+b|^{\alpha}}+\frac{(a_i+b_i)^n-b_i^n}{|a|^\alpha|a+b|^{n-\alpha}}\\
&\le \frac{1}{|b|^{\alpha}|a+b|^{\alpha}}+\frac{a_i^\alpha}{|a|^\alpha}\frac{\sum_{k=1}^n\binom{n}{k}a_i^{k-\alpha}b_i^{n-k}}{|b|^{n+\alpha}}\\
&\le \frac{1}{|b|^{\alpha}|a+b|^{\alpha}}+\frac{\sum_{k=1}^n\binom{n}{k}|a|^{k-\alpha}|b|^{n-k}}{|b|^{n+\alpha}}\\&\le \frac{1}{|b|^{\alpha}|a+b|^{\alpha}}+\frac{B(n)|b|^{n-\alpha}}{|b|^{n+\alpha}}
\le \frac{B(n,\alpha)}{|a+b|^{2\alpha}},
\end{split}
\end{equation*}
where the last inequality comes from $|a+b|\le 2|b|$, which follows from the triangle inequality and the fact that $|a|\le|b|$.\newline
{\em Case} $a_i b_i <0:$ Without loss of generality we can assume that $a_i<0$, $b_i> 0$ and $b_i\le |a_i|$; the other cases follow
analogously by interchanging the roles of $a_i$ and $b_i$. Since $b_i\le |a_i|$, $0<|a_i|-b_i<|a_i|\le|a|$ and $|a|\le|b|\le|a+b|$, we get
\begin{equation*}
\begin{split}
p^i_{\alpha,n}(x,y,z)&= \frac{(|a_i|-b_i)^n|a_i|^n|b|^{n+\alpha}-(|a_i|-b_i)^nb_i^n|a|^{n+\alpha}+|a_i|^nb_i^n|a+b|^{n+\alpha}}{|a|^{n+\alpha}|b|^{n+\alpha}|a+b|^{n+\alpha}}\\
&\le\frac{(|a_i|-b_i)^n|a_i|^n|b|^{n+\alpha}+|a_i|^nb_i^n|a+b|^{n+\alpha}}{|a|^{n+\alpha}|b|^{n+\alpha}|a+b|^{n+\alpha}}\\
&\le\frac{|a|^{2n}(|b|^{n+\alpha}+|a+b|^{n+\alpha})}{|a|^{n+\alpha}|b|^{n+\alpha}|a+b|^{n+\alpha}}=|a|^{n-\alpha}\left(\frac1{|a+b|^{n+\alpha}}+\frac1{|b|^{n+\alpha}}\right)\\
&\le\frac{2^{2\alpha}+1}{|a+b|^{2\alpha}}.
\end{split}
\end{equation*}
We now prove the lower bound estimate in \eqref{icomparability}. \newline
{\em Case} $a_ib_i > 0:$ As explained in the proof of the upper bound inequality, the proof can be reduced to the case when $a_i>0$ and $b_i>0$.
Setting $t=|b|/|a|$ in \eqref{permi} and noticing that $|a+b|/|a| \leq 1+t$ we get
\begin{equation}
\label{pfbound}
{p^i_{\alpha,n}}(x,y,z) \geq \frac{a_i^n(a_i+b_i)^nt^{n+\alpha}-b_i^n a_i^n (1+t)^{n+\alpha}+b_i^n (a_i+b_i)^n }{|b|^{n+\alpha}|a+b|^{n+\alpha}}:=\frac{f_1(t)}{|b|^{n+\alpha}|a+b|^{n+\alpha}}.
\end{equation}
Then it readily follows that the unique zero of
$$f'_1(t)=a_i^n(n+\alpha)\left(t^{n+\alpha-1} (a_i+b_i)^n-b_i^n (1+t)^{n+\alpha-1}\right)$$
is
$$t^\ast=\frac{1}{\left( \frac{a_i}{b_i}+1 \right)^{\frac{n}{n+\alpha-1}}-1}>0.$$
Moreover, $f_1$ attains its minimum at $t^\ast$ because $f_1'(0)=-(n+\alpha)a_i^n b_i^n$ and
$\lim_{t \rightarrow \infty} f_1'(t)=\lim_{t \rightarrow \infty} a_i^n(n+\alpha)\left((a_i+b_i)^n-b_i^n\right)t^{n+\alpha-1}=+\infty$.
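For completeness, the expression for $t^\ast$ is obtained by solving $f_1'(t)=0$ directly:
$$
(a_i+b_i)^n t^{n+\alpha-1}=b_i^n(1+t)^{n+\alpha-1}
\;\Longleftrightarrow\;
\left(\frac{1+t}{t}\right)^{n+\alpha-1}=\left(\frac{a_i+b_i}{b_i}\right)^{n}
\;\Longleftrightarrow\;
1+\frac1t=\left(\frac{a_i}{b_i}+1\right)^{\frac{n}{n+\alpha-1}},
$$
which gives the formula above.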
We first consider the case when $t^\ast >1$. Then we deduce that
\begin{equation}\label{s}
0<\frac{a_i}{b_i}<2^{\frac{n + \alpha-1}{n}}-1<1,
\end{equation}
the last inequality coming from $\alpha<1$.
Therefore $a_i<b_i$. Setting $s=a_i/b_i$ we obtain
\begin{equation*}
f_1(t)
=b_i^{2n} \left(s^n(1+s)^nt^{n+\alpha}-s^n (1+t)^{n+\alpha}+(s+1)^n\right)
\end{equation*}
and it follows easily that
$$f_1(t^\ast) =b_i^{2n} (1+s)^n \left(1-\frac{s^n}{((s+1)^{\frac{n}{n+\alpha-1}}-1)^{n+\alpha-1}} \right).$$
A direct computation shows that the function
$$g_1(s)=1-\frac{s^n}{((s+1)^{\frac{n}{n+\alpha-1}}-1)^{n+\alpha-1}}$$ is decreasing. Then, by \eqref{s}, $g_1$ attains its minimum at $s=2^{\frac{n + \alpha-1}{n}}-1$. Therefore
\begin{equation}
\label{tastg1}
f_1(t) \geq f_1(t^\ast) \geq b_i^{2n} \left(1- (2^{\frac{n + \alpha-1}{n}}-1)^n\right):=b_i^{2n}A_1(n,\alpha)
\end{equation}
and, since $\alpha<1$, $A_1(n, \alpha) >0.$
We now consider the case when $t^\ast \leq 1$ and notice that, since $t \geq 1$, we have $f_1(t) \geq f_1(1)$. As before, for $s=\min \{a_i, b_i\} /\max \{a_i, b_i\}$,
\begin{equation}
\label{ftast2}
\begin{split}
f_1(t) \geq f_1(1)&= a_i^n (a_i+b_i)^n-a_i^n b_i^n 2^{n+\alpha}+b_i^n (a_i+b_i)^n \\
&=(a_i+b_i)^n (\max \{a_i,b_i\})^n \left(s^n-\left( \frac{s}{s+1} \right)^n 2^{n+\alpha}+1 \right) \\
&:=(a_i+b_i)^n (\max \{a_i,b_i\})^n g_2(s).
\end{split}
\end{equation}
It follows easily that the only non-zero root of
$$g_2'(s)=n s^{n-1}\left(1-\frac{2^{n+\alpha}}{(1+s)^{n+1}}\right)$$
is $$s^\ast = 2^{\frac{n+\alpha}{n+1}}-1.$$ Since $\alpha \in (0,1)$, we have $2^{\frac{n+\alpha-1}{n}}-1<s^\ast<1.$ Furthermore, notice that
$g_2'(2^{\frac{n+\alpha-1}{n}}-1)<0$ and $g_2'(1)>0$. Hence $g_2$ attains its minimum at $s^\ast$. Therefore
\begin{equation}\label{tsmall1}
\begin{split}
g_2(s)\geq g_2(s^\ast)&=(2^{\frac{n+\alpha}{n+1}}-1)^n \left( 1-\frac{2^{n+\alpha}}{2^{\frac{(n+\alpha)n}{n+1}}} \right)+1 \\
&=1-(2^{\frac{n+\alpha}{n+1}}-1)^{n+1}:=A_2(n,\alpha)>0,
\end{split}
\end{equation}
the positivity of the constant $A_2(n,\alpha)$ coming from the inequality $\alpha<1$. Therefore \eqref{pfbound} together with \eqref{tastg1}, \eqref{ftast2} and \eqref{tsmall1} imply that
\begin{equation}
\label{aibipos2}
\begin{split}
{p^i_{\alpha,n}} (x,y,z) \geq A(n,\alpha) \frac{M_i^{2n}}{L(x,y,z)^{2n+2\alpha}},
\end{split}
\end{equation}
for some positive constant $A(n,\alpha)$. Hence we have finished the proof when $a_i \, b_i > 0$.
{\em Case} $a_i \, b_i < 0:$
Setting $t=|b| /|a|$ in \eqref{permi}, and using that $|a+b|\geq|b|$ together with the fact that $-a_i^nb_i^n>0$ (recall that $n$ is odd and $a_ib_i<0$), we get that
\begin{equation}
\label{secondf1}
\begin{split}
{p^i_{\alpha,n}} (x,y,z)& \geq \frac{1}{|b|^{n+\alpha}|a+b|^{n+\alpha}}\left(a_i^n(a_i+b_i)^nt^{n+\alpha}-b_i^n a_i^n t^{n+\alpha}+b_i^n (a_i+b_i)^n \right)\\
&:= \frac{f_2(t)}{|b|^{n+\alpha}|a+b|^{n+\alpha}}.
\end{split}
\end{equation}
Notice that $f_2$ is an increasing function because $a_i^2+a_ib_i>a_ib_i$ and $n$ is odd:
\begin{equation*}
\begin{split}f'_2(t)&=(n+\alpha)t^{n+\alpha-1}\left(a_i^n(a_i+b_i)^n-a_i^nb_i^n\right)=(n+\alpha)t^{n+\alpha-1}\left((a_i^2+a_ib_i)^n-a_i^nb_i^n\right) > 0.
\end{split}
\end{equation*}
Therefore, since $t \geq 1$, we have that
$$f_2(t) \geq f_2(1)=(a_i^n+b_i^n)(a_i+b_i)^n-b_i^n a_i^n.$$
We assume that $|b_i| \geq |a_i|$; the case where $|b_i| <|a_i|$ can be treated in the exact same manner. We first consider the case where $a_i>0$ and $b_i<0$.
Let
$$h(r)=(r-|b_i|)^n(r^n-|b_i|^n)+r^n|b_i|^n.$$
Then
$$h'(r)=n\left((r-|b_i|)^{n-1}(r^n-|b_i|^n)+r^{n-1}((r-|b_i|)^n+|b_i|^n)\right).$$
Notice that $$h'(|b_i|/2)=0.$$ Furthermore, $$h'(r)>0 \mbox{ for }|b_i|/2 <r\leq |b_i|.$$ To see this, notice that since $0<|b_i|-r<r$, we have
$(|b_i|-r)^{n-1} <r^{n-1}$. Therefore, since $r^n-|b_i|^n<0$,
\begin{equation*}
\begin{split}
h'(r)>n(r^{n-1}(r^n-|b_i|^n)+r^{n-1}((r-|b_i|)^n+|b_i|^n))=nr^{n-1}(r^n-(|b_i|-r)^n)>0.
\end{split}
\end{equation*}
With an identical argument one sees that $h'(r)<0$ for $0<r \leq |b_i|/2$. Hence it follows that, for $0<r\leq |b_i|$,
$$h(r) \geq h(|b_i|/2) \geq \frac{|b_i|^{2n}}{2^n}.$$
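For the reader's convenience, the last inequality can be checked directly (recall that $n$ is odd, so $(-|b_i|/2)^n=-|b_i|^n/2^n$):
$$
h\!\left(\tfrac{|b_i|}{2}\right)=\left(-\tfrac{|b_i|}{2}\right)^n\left(\tfrac{|b_i|^n}{2^n}-|b_i|^n\right)+\tfrac{|b_i|^n}{2^n}\,|b_i|^n
=\frac{|b_i|^{2n}}{2^n}\left(1-\frac{1}{2^n}\right)+\frac{|b_i|^{2n}}{2^n}
\geq \frac{|b_i|^{2n}}{2^n}.
$$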
Since $a_i \in (0, |b_i|]$, we get that $f_2(1) \geq \frac{|b_i|^{2n}}{2^n}$ and, by \eqref{secondf1},
\begin{equation}
\label{aibineg1}
{p^i_{\alpha,n}} (x,y,z) \geq 2^{-n} \frac{b_i^{2n}}{|b|^{n+\alpha}|a+b|^{n+\alpha}} \geq A_3(n) \frac{M_i^{2n}}{L(x,y,z)^{2n+2\alpha}}.
\end{equation}
The case where $a_i <0$ and $b_i>0$ is very similar. In this case, instead of the function $h$ we consider the function
$$l(r)=(r+b_i)^n(r^n+b_i^n)-r^nb_i^n$$
for $-|b_i|\leq r <0$ and we show that in that range
$$l(r) \geq l(-|b_i|/2) \geq b_i^{2n}/2^n.$$
Therefore, as $a_i \in [-|b_i|,0)$, $f_2(1) \geq \frac{|b_i|^{2n}}{2^n}$ and we obtain from \eqref{secondf1}
\begin{equation}
\label{aibineg2}
{p^i_{\alpha,n}} (x,y,z) \geq 2^{-n} \frac{b_i^{2n}}{|b|^{n+\alpha}|a+b|^{n+\alpha}} \geq A_3(n) \frac{M_i^{2n}}{L(x,y,z)^{2n+2\alpha}}.
\end{equation}
Therefore the proof of the lower bound follows from \eqref{aibipos2}, \eqref{aibineg1} and \eqref{aibineg2}.
\end{proof}
\begin{remark1}
\label{aibinegp1}
Notice that in the proof of the lower bound inequality when $a_ib_i <0$ we do not make use of the fact that $\alpha<1$. Therefore \eqref{aibineg1} and \eqref{aibineg2} remain valid in the case where $\alpha=1$.
\end{remark1}
\begin{proof}[Proof of Proposition \ref{posperm}] For simplicity we write $p^i_{1,n}:=p^i_n$ for $i=1,\dots,d$. Let $a=y-x$, $b=z-y$; then $a+b=z-x$ and without loss of generality
we can assume that $|a|\leq |b|\leq|a+b|=1$. If $x_i=y_i=z_i$, then trivially, by \eqref{permi}, $p^i_n(x,y,z)=0$. Hence we can assume that
$a_i \neq 0$ or $b_i \neq 0$, and, by \eqref{permi} for $\alpha=1$, assuming without loss of generality that $b_i \neq 0$, we get
\begin{equation}
\begin{split}
\label{perm1}
p^i_{n}(x,y,z)= \frac{(a_i+b_i)^n b_i^n \left(\left(\frac{a_i}{b_i} \right)^n |b|^{n+1}+|a|^{n+1}-\frac{a_i^n}{(a_i+b_i)^n}\right)}{|a|^{n+1}|b|^{n+1}}.
\end{split}
\end{equation}
If the points $x,y,z$ are collinear, then the initial assumption $|a|\leq|b|\leq|a+b|$ implies that $|a|+|b|=|a+b|$. Furthermore, $b=\lambda a$ for some $\lambda\neq 0$.
We provide the details in the case when $\lambda >0$, as the remaining case is identical. We have, by \eqref{perm1},
\begin{equation*}
\begin{split}
p^i_{n}(x,y,z)&=\frac{(a_i+\lambda a_i)^n \lambda^n a_i^n \left(\left(\frac{1}{\lambda} \right)^n \lambda^{n+1} |a|^{n+1}+|a|^{n+1}-\left(\frac{1}{1+\lambda}\right)^n\right)}{|a|^{n+1}|b|^{n+1}} \\
&=\frac{a_i^{2n} \lambda^n \left(\left( (1+\lambda)|a| \right)^{n+1}-1\right)}{|a|^{n+1}|b|^{n+1}}\\
&=\frac{a_i^{2n} \lambda^n }{|a|^{n+1}|b|^{n+1}}\left((1+\lambda)|a|-1\right)\sum_{j=0}^{n}((1+\lambda)|a|)^j=0
\end{split}
\end{equation*}
because $(1+\lambda)|a|-1=|a|+|b|-1=0$.
We will now turn our attention to the case when the points $x,y,z$ are not collinear. We will consider several cases. \newline
\textit{Case} $a_ib_i>0.$ As in the proof of Proposition \ref{lemmaperm} we only have to consider the case when $a_i,b_i>0$. We first consider the subcase $0<|a|\leq |b|<|a+b|=1$.
Setting $w=a_i/b_i$ in \eqref{perm1} we get
\begin{equation*}
p^i_{n}(x,y,z)=\frac{(a_i+b_i)^n b_i^n}{|a|^{n+1}|b|^{n+1}} f(w)
\end{equation*}
with $$f(w)=w^n|b|^{n+1}+|a|^{n+1}-\left(1+\frac{1}{w}\right)^{-n}.$$
Notice that the only non-vanishing admissible root of the equation
$$f'(w)=nw^{n-1}\left(|b|^{n+1}-\left(\frac{1}{w+1}\right)^{n+1} \right)=0$$
is $w=|b|^{-1}-1$. Furthermore, it follows easily that
$$\lim_{w\rightarrow 0^+}f(w)=|a|^{n+1}>0 \quad \text{and}\quad \lim_{w \rightarrow +\infty}f(w)=+\infty,$$
hence $f:(0,\infty) \rightarrow \mathbb{R}$ attains its minimum at $|b|^{-1}-1$. After a direct computation we get that
$$f(|b|^{-1}-1)=|a|^{n+1}-(1-|b|)^{n+1}.$$
We can now write
\begin{equation*}
\begin{split}
|a|^{n+1}&-(1-|b|)^{n+1}=|a|^{n+1}\left(1-\left( \frac{1-|b|}{|a|}\right)^{n+1} \right)\\
&=|a|^{n+1}\left(1- \frac{1-|b|}{|a|}\right) \sum_{j=0}^n \left(\frac{1-|b|}{|a|} \right)^j=|a|^n(|a|-1+|b|) \sum_{j=0}^n \left(\frac{1-|b|}{|a|} \right)^j. \\
\end{split}
\end{equation*}
Therefore
\begin{equation}
\label{estal1}p^i_{n}(x,y,z)\geq \frac{(a_i+b_i)^n b_i^n}{|a|^{n+1}|b|^{n+1}}|a|^n (|a|-1+|b|)\sum_{j=0}^n \left(\frac{1-|b|}{|a|} \right)^j.
\end{equation}
Recall that, by Heron's formula, the area of the triangle determined by $x,y,z \in \mathbb{R}^d$ is given by
$$\text{area}(T_{x,y,z})=\frac{1}{2} \sqrt{|a+b|^2|a|^2-\left(\frac{|a+b|^2+|a|^2-|b|^2}{2}\right)^2},$$
where $a=y-x$, $b=z-y$ and $a+b=z-x$.
Hence
$$16 \, \text{area}(T_{x,y,z})^2=(2|a+b||a|-(|a+b|^2+|a|^2-|b|^2))(2|a+b||a|+|a+b|^2+|a|^2-|b|^2).$$
Plugging this identity into Menger's curvature formula we get
\begin{equation*}
\begin{split}
c^2(x,y,z)&=\frac{16 \, \text{area}(T_{x,y,z})^2}{|a|^2|b|^2|a+b|^2}\\
&=\frac{(2|a+b||a|-|a+b|^2-|a|^2+|b|^2)(2|a+b||a|+|a+b|^2+|a|^2-|b|^2)}{|a|^2|b|^2|a+b|^2}\\
&=\frac{(|b|^2-(|a+b|-|a|)^2)((|a|+|a+b|)^2-|b|^2)}{|a|^2|b|^2|a+b|^2}\\
&=\frac{(|b|-|a+b|+|a|)(|b|+|a+b|-|a|)(|a|+|a+b|-|b|)(|b|+|a+b|+|a|)}{|a|^2|b|^2|a+b|^2}
\end{split}
\end{equation*}
and, since we are assuming $|a+b|=1$,
\begin{equation}
\label{herrocurv}
c^2(x,y,z)=\frac{(|b|+|a|-1)(|b|+1-|a|)(|a|+1-|b|)(|b|+1+|a|)}{|a|^2|b|^2}.
\end{equation}
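As a quick consistency check (not needed in the sequel), note that for collinear triples the normalization $|a|+|b|=|a+b|=1$ makes the first factor in \eqref{herrocurv} vanish:
$$
|a|+|b|=1\ \Longrightarrow\ |b|+|a|-1=0\ \Longrightarrow\ c^2(x,y,z)=0,
$$
in agreement with the computation made for collinear points after \eqref{perm1}.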
By \eqref{estal1} and \eqref{herrocurv} we get that
\begin{equation*}
\begin{split}
p^i_{n}(x,y,z)&\geq \frac{(a_i+b_i)^n b_i^n}{|a|^{n+1}|b|^{n+1}}\frac{|a|^n|a|^2|b|^2}{(|b|+1-|a|)(|a|+1-|b|)(|b|+1+|a|)}
\sum_{j=0}^n \left(\frac{1-|b|}{|a|} \right)^j c^2(x,y,z)\\
&\ge\frac{b_i^{2n}|a||b|}{|b|^n(|b|+1-|a|)(|a|+1-|b|)(|b|+1+|a|)}c^2(x,y,z),
\end{split}
\end{equation*}
the last inequality coming from $a_i+b_i\ge b_i$ and the fact that the sum above is greater than one. Using the triangle inequality, $1=|a+b|\le|a|+|b|$, and the fact
that $1=|a+b|\le 2|b|$, we obtain
\begin{equation*}
p^i_{n}(x,y,z)\ge \frac{b_i^{2n}}{12|b|^n}c^2(x,y,z)\ge c(n)\frac{ b_i^{2n}}{|b|^{2n}}c^2(x,y,z).
\end{equation*}
To complete the proof in the case $a_ib_i>0$, we are left with the situation $|b|=|a+b|=1$. By \eqref{perm1},
\begin{equation*}
p^i_{n}(x,y,z)=\frac{(a_i+b_i)^n b_i^n \left(\left(\frac{a_i}{b_i} \right)^n +|a|^{n+1}-\left(\frac{a_i}{a_i+b_i}\right)^n\right)}{|a|^{n+1}}\geq (a_i+b_i)^n b_i^n \geq b_i^{2n},
\end{equation*}
because $\frac{a_i}{b_i}>\frac{a_i}{a_i+b_i}$ and thus $\left(\frac{a_i}{b_i} \right)^n>\left(\frac{a_i}{a_i+b_i}\right)^n.$
Hence
$$p^i_{n}(x,y,z) \geq \frac{b_i^{2n}}{|b|^{2n}}|b|^{-2} \gtrsim \frac{b_i^{2n}}{|b|^{2n}} c^2(x,y,z).$$
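The last comparison can be justified as follows: since $|b|=|a+b|=1$, formula \eqref{herrocurv} gives
$$
c^2(x,y,z)=\frac{|a|\,(2-|a|)\,|a|\,(2+|a|)}{|a|^2}=4-|a|^2\leq 4,
$$
so that $|b|^{-2}=1\geq c^2(x,y,z)/4$.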
\textit{Case} $a_i b_i<0$. It follows from Remark \ref{aibinegp1} (see \eqref{aibineg2} with $\alpha=1$).
\textit{Case} $a_i \,b_i=0$. Since we have assumed that $b_i\neq 0$, we have that $a_i=0$ and, by \eqref{aibizero} with $\alpha=1$, we are done.
Therefore we have shown that whenever $x,y,z$ are not collinear and they do not lie in the hyperplane $x_i=y_i=z_i$, then $p^i_{n}(x,y,z)>0$. This finishes the proof of (i).
Furthermore, we have shown that if this is the case, then
\begin{equation}
\label{p11}
p^i_{n}(x,y,z) \geq C(n) \frac{ b_i^{2n}}{|b|^{2n}}c^2(x,y,z).
\end{equation}
\newline
For the proof of (ii) notice that since $\measuredangle (V_j, L_{y,z}) \geq \theta_0$ there exists some coordinate $i_0 \neq j$ such that
$$|b_{i_0}|=|y_{i_0}-z_{i_0}|\geq C(\theta_0)|y-z|=C(\theta_0)|b|,$$
hence (ii) follows from \eqref{p11}.
\end{proof}
\textbf{Acknowledgement}. We would like to thank Joan Mateu and Xavier Tolsa for valuable conversations during the preparation of this paper.
\begin{thebibliography}{CMPT1}
\bibitem[Ca]{ca} {\sc A. P. Calder\'on}, {\em Cauchy integrals on Lipschitz curves and related operators,} Proc. Nat. Acad. Sci. U.S.A. 74 (1977), no. 4, 1324--1327.
\bibitem[AH]{adamshedberg} {\sc D. R. Adams and L. I. Hedberg}, {\em Function Spaces and Potential Theory,} Grundl. Math. Wiss. {\bf 314}, Springer-Verlag, Berlin, 1996.
\bibitem[A]{Ahlfors} {\sc L. Ahlfors}, {\em Bounded analytic functions,} Duke Math. J. {\bf 14} (1947), 1--11.
\bibitem[ACrL]{acl} {\sc N. Aronszajn, T. Creese and L. Lipkin}, {\em Polyharmonic functions,} Oxford Mathematical Monographs, Oxford University Press, New York, 1983.
\bibitem[CMPT1]{cmpt} {\sc V. Chousionis, J. Mateu, L. Prat and X. Tolsa}, {\em Calder\'on-Zygmund kernels and rectifiability in the plane,} Adv. Math. 231 (2012), no. 1, 535--568.
\bibitem[CMPT2]{cmpt2} {\sc V. Chousionis, J. Mateu, L. Prat and X. Tolsa}, {\em Capacities associated with Calder\'on-Zygmund kernels,} Potential Anal. 38 (2013), no. 3, 913--949.
\bibitem[D]{david} {\sc G. David}, {\em Unrectifiable 1-sets have vanishing analytic capacity,} Rev. Mat. Iberoamericana {\bf 14} (1998), no. 2, 369--479.
\bibitem[DS1]{DS1} {\sc G. David and S. Semmes}, {\em Singular Integrals and rectifiable sets in $\mathbb{R}^n$: Au-del\`{a} des graphes lipschitziens,} Ast\'erisque 193, Soci\'{e}t\'{e} Math\'{e}matique de France, 1991.
\bibitem[DS2]{DS2} {\sc G. David and S. Semmes}, {\em Analysis of and on uniformly rectifiable sets,} Mathematical Surveys and Monographs 38, American Mathematical Society, Providence, RI, 1993.
\bibitem[D{\O}]{do} {\sc A. M. Davie and B. {\O}ksendal}, {\em Analytic capacity and differentiability properties of finely harmonic functions,} Acta Math. 149 (1982), 127--152.
\bibitem[Du]{duandikoetxea} {\sc J. Duoandikoetxea}, {\em Fourier Analysis,} American Mathematical Society, 2001.
\bibitem[ENV]{env} {\sc V. Eiderman, F. Nazarov and A. Volberg}, {\em Vector-valued Riesz potentials: Cartan-type estimates and related capacities,} Proc. London Math. Soc. (2010), 1--32.
\bibitem[GV]{gv} {\sc J. Garnett and J. Verdera}, {\em Analytic capacity, bilipschitz maps and Cantor sets,} Math. Res. Lett. 10 (2003), no. 4, 515--522.
\bibitem[GPT]{gpt} {\sc J. Garnett, L. Prat and X. Tolsa}, {\em Lipschitz harmonic capacity and bilipschitz images of Cantor sets,} Math. Res. Lett. 13 (2006), no. 5-6, 865--884.
\bibitem[L\'e]{Leger} {\sc J. C. L\'eger}, {\em Menger curvature and rectifiability,} Ann. of Math. 149 (1999), 831--869.
\bibitem[LZ]{lz} {\sc R. Lyons and K. Zumbrun}, {\em Homogeneous partial derivatives of radial functions,} Proc. Amer. Math. Soc. 121(1) (1994), 315--316.
\bibitem[MMV]{MMV} {\sc P. Mattila, M. Melnikov and J. Verdera}, {\em The Cauchy integral, analytic capacity, and uniform rectifiability,} Ann. of Math. (2) 144 (1996), no. 1, 127--136.
\bibitem[MP]{mp} {\sc P. Mattila and P. V. Paramonov}, {\em On geometric properties of harmonic $\mbox{Lip}_1$-capacity,} Pacific J. Math. {\bf 171} (1995), no. 2, 469--491.
\bibitem[MPV]{mpv} {\sc J. Mateu, L. Prat and J. Verdera}, {\em The capacity associated to signed Riesz kernels, and Wolff potentials,} J. reine angew. Math. {\bf 578} (2005), 201--223.
\bibitem[MPV2]{mpv2} {\sc J. Mateu, L. Prat and J. Verdera}, {\em Potential theory of scalar Riesz kernels,} Indiana Univ. Math. J. 60 (2011), no. 4, 1319--1361.
\bibitem[MMV]{mmv} {\sc P. Mattila, M. Melnikov and J. Verdera}, {\em The Cauchy integral, analytic capacity, and uniform rectifiability,} Ann. of Math. (2) 144 (1996), no. 1, 127--136.
\bibitem[MzS]{mazya} {\sc V. G. Maz'ya and T. O. Shaposhnikova}, {\em Theory of Multipliers in spaces of differentiable functions,} Monographs and Studies in Mathematics 23, Pitman (Advanced Publishing Program), Boston, MA, 1985.
\bibitem[Me]{Me} {\sc M. S. Melnikov}, {\em Analytic capacity: discrete approach and curvature of measure,} Sbornik: Mathematics {\bf 186} (1995), no.~6, 827--846.
\bibitem[MeV]{MeV} {\sc M. S. Melnikov and J. Verdera}, {\em A geometric proof of the $L^2$ boundedness of the Cauchy integral on Lipschitz graphs,} Inter. Math. Res. Not. {\bf 7} (1995), 325--331.
\bibitem[NToV]{NToV} {\sc F. Nazarov, X. Tolsa and A. Volberg}, {\em On the uniform rectifiability of AD-regular measures with bounded Riesz transform operator: the case of codimension 1,} to appear in Acta Math.
\bibitem[P1]{imrn} {\sc L. Prat}, {\em Potential theory of signed Riesz kernels: capacity and Hausdorff measure,} Int. Math. Res. Not. 2004, no. 19, 937--981.
\bibitem[P2]{illinois} {\sc L. Prat}, {\em Null sets for the capacity associated to Riesz kernels,} Illinois J. Math. 48 (2004), no. 3, 953--963.
\bibitem[P3]{tams} {\sc L. Prat}, {\em On the semiadditivity of the capacities associated with signed vector valued Riesz kernels,} to appear in Trans. Amer. Math. Soc.
\bibitem[St]{stein} {\sc E. M. Stein}, {\em Singular Integrals and differentiability properties of functions,} Princeton University Press, Princeton, 1970.
\bibitem[T1]{tolsaindiana} {\sc X. Tolsa}, {\em On the analytic capacity $\gamma_+$,} Indiana Univ. Math. J. 51(2) (2002), 317--344.
\bibitem[T2]{semiad} {\sc X. Tolsa}, {\em Painlev\'e's problem and the semiadditivity of analytic capacity,} Acta Math. {\bf 190} (2003), no.~1, 105--149.
\bibitem[T3]{semiad2} {\sc X. Tolsa}, {\em The semiadditivity of continuous analytic capacity and the inner boundary conjecture,} Amer. J. Math. {\bf 126} (2004), 523--567.
\bibitem[T4]{bilipschitz} {\sc X. Tolsa}, {\em Bilipschitz maps, analytic capacity, and the Cauchy integral,} Ann. of Math. (2) 162 (2005), no. 3, 1243--1304.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
An {\em almost p-K\"ahler manifold} is a triple $(M,J,\Omega)$, where $(M,J)$ is an almost complex manifold of real dimension $2n$ and $\Omega$ is a closed real tranverse $(p,p)$-form on $(M,J)$, where
$1\leq p\leq n$. When $J$ is integrable, almost $p$-K\"ahler manifolds are called $p$-{\em K\"ahler manifolds}. We produce families of almost $p$-K\"ahler structures $(J_t,\Omega_t)$ on $\C^3$, $\C^4$, and on the real torus $\mathbb{T}^6$, arising as deformations of K\"ahler structures $(J_0,g_0,\omega_0)$, such that the almost complex structures $J_t$ cannot be locally compatible with any symplectic form for $t\neq 0$. Furthermore, examples of special compact nilmanifolds with and without almost $p$-K\"ahler structures are presented.
\end{abstract}
\title{Families of almost complex structures and transverse $(p,p)$-forms}
\tableofcontents
\section{Introduction}
Let $(M,J,g,\omega)$ be a $2n$-dimensional compact K\"ahler manifold, that is $M$ is a compact $2n$-dimensional smooth manifold endowed with an integrable almost complex structure $J$ and $J$-Hermitian metric $g$ whose fundamental form $\omega$ is closed, namely $(J,g,\omega)$ is a K\"ahler structure on $M$. Then, the celebrated theorem of Kodaira and Spencer \cite[Theorem 15]{KS} states that the K\"ahler condition is stable under small $\mathcal{C}^\infty$-deformations of the complex structure $J$. As a consequence of complex Hodge Theory, the existence of a K\"ahler structure on a compact manifold imposes strong restrictions on the topology of $M$, e.g., the odd index Betti numbers of $M$ are even, the even index Betti numbers are greater than zero and the de Rham complex of $M$ is a formal differential graded algebra in the sense of Sullivan. In \cite{HL} Harvey and Lawson give an intrinsic characterization in terms of currents of compact
complex manifolds admitting a K\"ahler metric. However there are many examples of compact complex manifolds without any K\"ahler structures; examples are easily constructed by taking compact quotients of simply connected nilpotent Lie groups. The underlying differentiable manifolds of such complex manifolds may carry further structures, e.g., symplectic structures, and {\em balanced}, {\em SKT}, {\em Astheno K\"ahler} metrics, that is Hermitian metrics whose fundamental form $\omega$ satisfies $d\omega^{n-1}=0$, respectively $\partial\partialbar\omega=0$, respectively $\partial\partialbar\omega^{n-2}=0$, where $\dim_\C M=n$.
An almost K\"ahler manifold is an almost complex manifold equipped with a Hermitian metric whose fundamental form is closed.
In the almost K\"ahler setting stability properties are drastically different. First of all, a dense subset of almost complex structures $J$ on $\R^{2n}$, with $n>2$, are not compatible with any symplectic form, that is, there are no symplectic forms $\omega$, such that $g_J(\cdot,\cdot)=\omega(\cdot,J\cdot)$ is a $J$-Hermitian metric, or equivalently, $\omega$ is a closed positive $(1,1)$-form. Such almost complex structures can be extended to any K\"ahler manifold $(M,J)$ of dimension bigger than $2$, showing that there exists a curve $\{J_t\}_{t\in(-\varepsilon,\varepsilon)}$ such that $J_0=J$ and $J_t$, for $t\neq 0$, is a (non-integrable) almost complex structure on $M$, which is not even locally compatible with respect to any symplectic form. This is a direct consequence of e.g., \cite[Theorem 2.4, Corollary 2.5]{MT} which deals with the local case.
In the present paper, starting with a complex manifold $(M,J)$, we are interested
in studying the stability properties of transversely closed $(p,p)$-forms, namely the stability properties of $p$-{\em K\"ahler manifolds} in the terminology of Alessandrini and Andreatta \cite{AA} under possible non integrable small deformations of the complex structure.\newline
The notion of transversality was first introduced by Sullivan in \cite{S}, in the context of {\em cone structures}, namely a continuous
field of cones of {\em p}-vectors on a manifold, and then the $p$-K\"ahler condition was studied by several authors (see e.g., \cite{AA,AB1} and the references therein). In particular, $1$-K\"ahler manifolds correspond to K\"ahler manifolds and $(n-1)$-K\"ahler manifolds correspond to {\em balanced} manifolds in the terminology of Michelsohn (see \cite{M}): for the proofs of these results see \cite[Proposition 1.15]{AA} or \cite[Corollary 4.6]{RWZ1}. K\"{a}hler manifolds are $p$-K\"{a}hler for all $p$ (by taking the $p$-th power of the form) but $p$-K\"{a}hler manifolds may not be K\"ahler. Note that there is a difference between the case $p=n-1$ and $p<n-1$. Indeed, according to \cite[p.279]{M}, if $\Omega$ is an $(n-1)$-K\"ahler structure, then $\Omega=\omega^{n-1}$ for a suitable fundamental form $\omega$ of a Hermitian metric on $M$. Hence $d\omega^{n-1}=0$ and so $M$ admits a balanced metric. On the other hand if $p<n-1$ and $\omega$ is the fundamental form of an almost Hermitian metric on $M$, then $d\omega^p=0$ implies that $d\omega=0$, so in fact $M$ is almost K\"ahler (see \cite[Theorem 3.2]{GH}).
In contrast to the K\"ahler case, the $p$-K\"ahler condition on a compact complex manifold is not stable under small deformation of the complex structure. This was proved in \cite{AB1} by constructing a non balanced deformation of the natural complex structure for the Iwasawa manifold, which carries a balanced metric.
Recently, in \cite{RWZ}, Rao, Wan and Zhao further studied the stability of $p$-K\"ahler compact manifolds under small integrable deformations of the complex structure. Assuming that the $(p, p + 1)$-th mild $\partial\partialbar$-lemma holds, it is shown that $p$-K\"ahler structures are stable for all $1 \le p \le n-1$. Here the $(p, p + 1)$-th mild $\partial\partialbar$-lemma for a complex manifold means that each $\partial$-exact and $\partialbar$-closed
$(p, p + 1)$-form on this manifold is $\partial\partialbar$-exact. Note that such a condition does not hold in the Iwasawa example in \cite[p.1062]{AB1}. For other recent results on families of compact balanced manifolds see \cite{Sf}.\newline
In the terminology by Gray and Hervella \cite[p.40]{GH}, a $J$-Hermitian metric $g$ on an almost complex manifold $(M,J)$ of complex dimension $n$ is said to be {\em semi-K\"ahler} if $d\omega^{n-1}=0$.
The aim of this paper is to produce families of almost $p$-K\"ahler structures $(J_t,\Omega_t)$ arising as deformations of K\"ahler structures $(J_0,g_0,\omega_0)$, such that the almost complex structures $J_t$ cannot be locally compatible with any symplectic form for $t\neq 0$.
Our first main result is the following (see Theorem \ref{main-theorem-1}).
\noindent{\bf Theorem} {\em Let $(J,\omega)$ be the standard K\"ahler structure on the standard torus
$\T^{6}=\R^{6}\slash \Z^{6}$, with coordinates $(z_1,z_2,z_3)$, $z_j=x_j+iy_j$, $j=1,2,3$. Let $f=f(z_2,\overline{z}_2)$ be a $\Z^{6}$-periodic smooth complex valued function on $\R^6$ and set $f=u+iv$, where $u=u(x_2,y_2)$, $v=v(x_2,y_2)$. \newline
Let $I=(-\varepsilon,\varepsilon)$. Assume that $u$, $v$ satisfy the following condition
$$
\left(\frac{\partial u}{\partial x_2}
+
\frac{\partial v}{\partial y_2}\right)\Big\vert_{(x,y)=0}\neq 0.
$$
Then, for $\varepsilon >0$ small enough, there exists a $1$-parameter complex family of almost
complex structures $\{J_{t}\}_{t\in I}$ on $\mathbb{T}^{6}$, such that
\begin{enumerate}
\item[I)] $J_{0}=J$;
\item[II)] $J_{t}$ admits a semi-K\"ahler metric for all $t\in I$;
\item[III)] For any given $0\neq t\in I $, the almost complex structure $J_{t}$ is not locally compatible with respect to any symplectic form on $\mathbb{T}^6$.
\end{enumerate}
}
Then, starting with a balanced structure on $(J,g,\omega)$ on $\C^n$, we provide necessary conditions in order that a curve $(J_t,g_t,\omega_t)$ give rise to semi-K\"ahler structures on $\C^n$ (Theorem \ref{m-theorem-prime}). Special results are obtained for $n=3$, $p=2$ (Theorem \ref{m-theorem}, Corollaries \ref{m-cor-3} and \ref{m-cor-3-first}).
Finally we have a result for almost $2$-K\"{a}hler structures on $\C^4$, see Theorem \ref{C-4-main}.
\noindent{\bf Theorem} {\em There exists a family of almost-complex structures $J_t$ on $\C^4$ such that $J_0=i$ is the standard integrable complex structure and $J_t$ is almost $2$-K\"{a}hler for all $t \neq 0$, but $J_t$ is not locally K\"{a}hler for all $t \neq 0$.
}
It is interesting to contrast this with the linear case, where we see in Proposition \ref{linearcase} that a complex structure preserving a product $\omega^p$ automatically preserves $\omega$ (up to sign). Hence for $p<n$ the almost $p$-K\"ahler (but not almost K\"{a}hler) forms in a deformation cannot be powers of
$2$-forms $\omega_t$ (since the $\omega_t$ would necessarily be closed and hence almost K\"{a}hler). In particular it is impossible to find a deformation as in the theorem above where the almost $2$-K\"{a}hler forms remain equal to $\omega_0^p$ (where $\omega_0$ is the standard K\"{a}hler form on $\C^4$). This differs from the K\"{a}hler case, where by Moser's theorem we may apply a family of diffeomorphisms $\Phi_t$ to any deformation $(J_t,\omega_t)$, with $[\omega_t]$ constant, such that the form is unchanged, that is $(\Phi_t^*J_t,\Phi_t^*\omega_t)=(J'_t,\omega_0)$.
The paper is organized as follows: in Section \ref{preliminaries} we start by fixing notation and recalling some basic facts on $p$-K\"ahler and almost $p$-K\"ahler structures, giving a simple example of a compact $(2n-1)$-dimensional complex manifold without any $p$-K\"ahler structure, for $1\leq p\leq (n-1)$. In Section \ref{complex-curves} we prove Proposition \ref{linearcase}. In Proposition \ref{fixedomega} we provide an example of a deformation of a non-K\"{a}hler integrable complex structure into nonintegrable structures such that none of the structures are almost K\"{a}hler, in fact they cannot be tamed by any symplectic form, but they are all almost $2$-K\"{a}hler, and in fact all compatible with the same $(2,2)$ form. Finally Sections \ref{main} and \ref{main-examples} are devoted to the proofs of the main results and to the constructions of the almost $p$-K\"ahler families. In particular, in Theorem \ref{C-4-main} we obtain the family of almost $2$-K\"ahler structures in $\C^4$ which are not almost K\"ahler, and in Proposition \ref{prop-Iwasawa} we construct a curve of almost complex structures $\{J_t\}_{t\in\R}$ (non integrable for $t\neq0$) on the Iwasawa manifold such that $J_0$ admits a balanced metric and $J_t$ does not admit any semi-K\"ahler metric for $t\neq 0$.
\vskip.2truecm\noindent
\noindent {\em Acknowledgements.} We would like to thank {\em Fondazione Bruno Kessler-CIRM (Trento)} for their support and very pleasant working environment. We would like also to thank S. Rao and Q. Zhao for useful comments and remarks.
\section{Almost $p$-K\"ahler structures}\label{preliminaries}
Let $V$ be a real $2n$-dimensional vector space endowed with a complex structure $J$, that is an automorphism $J$ of $V$ satisfying $J^2=-\hbox{\rm id}_V$. Let $V^*$ be the dual space of $V$ and denote by the same symbol the complex structure on $V^*$ naturally induced by $J$ on $V$. Then the complexified
$V^{*\C}$ decomposes as the direct sum of the $\pm i$-eigenspaces, $V^{1,0}$, $V^{0,1}$, of the extension of $J$ to $V^{*\C}$, given by
$$
\begin{array}{l}
V^{1,0}=\{\varphi\in V^{*\C}\,\,\,\vert\,\,\,J\varphi=i\varphi\}=
\{\alpha-iJ\alpha \,\,\,\vert\,\,\,\alpha\in V^{*}\}\\
V^{0,1}=\{\psi\in V^{*\C}\,\,\,\vert\,\,\,J\psi=-i\psi\}=
\{\beta+iJ\beta \,\,\,\vert\,\,\,\beta\in V^{*}\},
\end{array}
$$
that is
$$V^{*\C}=V^{1,0}\oplus V^{0,1}.
$$
According to the above decomposition, the space $\Lambda^r(V^{*\C})$ of complex $r$-covectors on $V^\C$
decomposes as
$$
\Lambda^r(V^{*\C})=\bigoplus_{p+q=r}\Lambda^{p,q}(V^{*\C}),
$$
$$
where
$$
\Lambda^{p,q}(V^{*\C})=\Lambda^p(V^{1,0})\otimes\Lambda^q(V^{0,1}).
$$
If $\{\varphi^1,\ldots,\varphi^{n}\}$ is a basis of $V^{1,0}$, then
$$
\{\varphi^{i_1}\wedge\cdots\wedge\varphi^{i_p}\wedge\overline{\varphi^{j_1}}\wedge\cdots\wedge\overline{\varphi^{j_q}}\,\,\,\vert\,\,\, 1\leq i_1<\cdots<i_p\leq n,\,\,
1\leq j_1<\cdots<j_q\leq n\}
$$
is a basis of $\Lambda^{p,q}(V^{*\C})$. Set $\sigma_p=i^{p^2}2^{-p}$. Then, given any
$\varphi\in\Lambda^{p,0}(V^{*\C})$ we have that
$$
\overline{\sigma_p\varphi\wedge\overline{\varphi}}=\sigma_p\varphi\wedge\overline{\varphi},
$$
that is, $\sigma_p\varphi\wedge\overline{\varphi}$ is a real $(p,p)$-form. Consequently, setting
$$
\Lambda_{\R}^{p,p}(V^{*\C})=\{\psi\in\Lambda^{p,p}(V^{*\C})\,\,\,\vert\,\,\,\psi=\overline{\psi}\},
$$
we get that
we get that
$$
\{\sigma_p\varphi^{i_1}\wedge\cdots\wedge\varphi^{i_p}\wedge\overline{\varphi^{i_1}}\wedge\cdots\wedge\overline{\varphi^{i_p}}\,\,\,\vert\,\,\, 1\leq i_1<\cdots<i_p\leq n\}
$$
is a basis of $\Lambda_{\R}^{p,p}(V^{*\C})$.
\begin{rem}
The complex structure $J$ acts on the space of real $k$-covectors $\Lambda^k(V^*)$ by setting, for any given $\alpha\in \Lambda^k(V^*)$,
$$J\alpha (V_1,\ldots,V_k)=\alpha(JV_1,\ldots,JV_k).
$$
Then it is immediate to check that if $\psi\in\Lambda_{\R}^{p,p}(V^{*\C})$ then $J\psi=\psi$. For $k=2$, the converse holds.
\end{rem}
Denoting by
$$
\hbox{\rm Vol}=\left(\frac{i}{2}\varphi^1\wedge\overline{\varphi^1}\right)\wedge\cdots\wedge
\left(\frac{i}{2}\varphi^n\wedge\overline{\varphi^n}\right),
$$
we obtain that
$$
\hbox{\rm Vol}=\sigma_n\varphi^1\wedge\cdots \wedge\varphi^n\wedge \overline{\varphi^1}\wedge\cdots\wedge\overline{\varphi^n},
$$
that is, $\hbox{\rm Vol}$ is a volume form on $V$. A real $(n,n)$-form $\psi$ is said to be {\em positive}, respectively {\em strictly positive}, if
$$\psi=a\,{\rm Vol},
$$
where $a\geq 0$, respectively $a>0$. By definition, $\psi\in\Lambda^{p,0}(V^{*\C})$ is said to be {\em simple} or {\em decomposable} if
$$
\psi=\eta^1\wedge\cdots\wedge\eta^p,
$$
for suitable $\eta^1,\ldots,\eta^p\in V^{1,0}$. Let $\Omega\in\Lambda_{\R}^{p,p}(V^{*\C})$. Then $\Omega$ is said to be {\em transverse} if, given any non-zero simple $(n-p,0)$-covector $\psi$, the real $(n,n)$-form
$$
\Omega\wedge\sigma_{n-p}\psi\wedge \overline{\psi}
$$
is strictly positive. \newline
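As a basic example, and consistently with the fact recalled in the Introduction that K\"ahler manifolds are $p$-K\"ahler for all $p$, one can check that for $\omega=\frac{i}{2}\sum_{j=1}^n\varphi^j\wedge\overline{\varphi^j}$ and any $1\leq p\leq n$, the real $(p,p)$-form $\frac{1}{p!}\omega^p$ is transverse: completing a non-zero simple $(n-p,0)$-covector $\psi=\eta^1\wedge\cdots\wedge\eta^{n-p}$ to a coframe, a direct computation (which we only sketch, the constant $c$ depending on $\psi$) gives
$$
\frac{1}{p!}\,\omega^{p}\wedge\sigma_{n-p}\,\psi\wedge\overline{\psi}=c\,\hbox{\rm Vol},\qquad c>0 .
$$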
The notion of positivity on complex vector spaces can be transferred pointwise to almost complex
manifolds. Let $(M,J)$ be an almost complex manifold of real dimension $2n$; we will denote by
$A^{p,q}(M)$ the space of complex $(p,q)$-forms, that is the space of smooth sections of the bundle
$\Lambda^{p,q}(M)$ and by $A_{\R}^{p,p}(M)$ the space of real $(p,p)$-forms.
\begin{definition}
Let $(M,J)$ be an almost complex manifold of real dimension $2n$ and let $1\leq p\leq n$. A
$p$-{\em K\"ahler form} is a closed real transverse $(p,p)$-form $\Omega$, that is
$\Omega$ is $d$-closed and, at every $x\in M$, $\Omega_x\in\Lambda^{p,p}_{\R}(T_x^*M)$ is transverse. The triple $(M,J,\Omega)$ is said to be an {\em almost p-K\"ahler manifold}.
\end{definition}
\section{Curves of almost complex structures preserving the $p$-th power}\label{complex-curves}
Let $I=(-\varepsilon,\varepsilon)$ and
$\{J_t\}_{t\in I}$ be a smooth curve of almost
complex structures on $M$, such that $J_0=J$. Then, for small $\varepsilon$, there exists a unique $L_t:TM\to TM$,
with $L_tJ+JL_t=0$, for every $t\in I$, such that
\begin{equation}
J_t=(\ID+L_t)J(\ID+L_t)^{-1},
\end{equation}
for every $t$ (see e.g., \cite{AL}, \cite[Sec. 3]{dBM}). We can write $L_t=tL+o(t)$. Assume that $J$ is {\em compatible} with respect to a symplectic form $\omega$ on $M$, that is, at any given $x\in M$,
$$
g_x(\cdot,\cdot):=\omega_x(\cdot, J\cdot)
$$
is a positive definite Hermitian metric on $M$. Equivalently, $\omega$ is a positive $(1,1)$-form with respect to $J$. Then
$\omega$ is a positive $(1,1)$-form with respect to $J_t$ if and only if $L_t$ is $g_{J}$-symmetric and
$\Vert L_t\Vert < 1$. \newline
We can show the following
\begin{prop}\label{linearcase}
Let $\omega$ be a positive $(1,1)$-form on the complex vector space $\C^n$. Let $\{J_t\}$ be a curve of linear complex structures on $\C^n$. If $J_t$ preserves $\omega^p$ for all $t$, $1\leq p <n$, then $J_t$ preserves $\omega$.
\end{prop}
\begin{proof}
We will show that a complex structure on $\C^n$ preserving $\omega^p$ preserves $\omega$ up to sign. Hence in a $1$-parameter family with $J_0 =i$ all complex structures must preserve $\omega$ itself.
First note that a subspace $W \subset \C^n$ of real dimension at least $2p$ is symplectic if and only if $\omega^p|_W$ is nondegenerate. Also, for a symplectic subspace of dimension at least $2p$ the symplectic complement can be defined either in the usual way as $$W^{\perp} := \{ v \in \C^n \mid \omega(v,w)=0 \mbox{ for all } w \in W\}$$ or equivalently as $$W^{\perp} := \{ v \in \C^n \mid \omega^p(v,w_1, \dots, w_{2p-1})=0 \mbox{ for all } w_1, \dots, w_{2p-1} \in W\}.$$
Suppose then that $J$ is a complex structure preserving $\omega^p$ and $V$ is a $2$-dimensional symplectic plane. Then $V^{\perp} = W$ is symplectic and hence $\omega^p|_W$ is nondegenerate. As $J$ preserves $\omega^p$ we have that $\omega^p|_{JW}$ is also nondegenerate, so $JW$ is also symplectic, and using the definition of $(JW)^{\perp}$ only in terms of $\omega^p$ we see that $(JW)^{\perp}=JV$. Hence $JV$ is also a symplectic plane. Moreover, if $U$ is a symplectic plane in $V^{\perp}$ then $JU \subset J(V^{\perp}) = JW = (JV)^{\perp}$.
Let $x_1, y_1, \dots , x_n, y_n$ be a basis of $\C^n$ with $\omega(x_k, y_k)=1$ for all $k$ and such that the symplectic planes $V_k = \mbox{Span}(x_k, y_k)$ are orthogonal (for example we can take the standard symplectic basis). Then by the remark at the end of the previous paragraph the planes $JV_k$ are also symplectically orthogonal. Set $\lambda_k = \omega(Jx_k, Jy_k)$. Since $J$ preserves $\omega^p$ we have that $\lambda_{i_1} \lambda_{i_2} \dots \lambda_{i_p} =1$ for all $1 \le i_1 < \dots < i_p \le n$. Hence either all $\lambda_k =1$ or all $\lambda_k = -1$. (The second case is only possible when $p$ is even.) It follows that $J$ is either symplectic or anti-symplectic.
\end{proof}
Notice that there exist exact $2$-K\"ahler structures on $3$-dimensional compact complex manifolds, that is balanced metrics $g$, such that $\omega^2$ is $d$-exact, where $\omega$ denotes the fundamental form of $g$. This is in contrast with the almost K\"ahler case, in view of Stokes Theorem.
\begin{ex}{\em
Let $G=SL(2,\C)$. Then $G$ admits compact quotients by uniform discrete subgroups $\Gamma$, so that
$$
M=\Gamma\backslash G
$$
is a $3$-dimensional compact complex manifold. Denote by
$$
Z_1=\frac12
\left[
\begin{array}{cc}
i & 0 \\
0 & -i
\end{array}
\right],\quad
Z_2=\frac12
\left[
\begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}
\right]
\quad
Z_3=\frac12
\left[
\begin{array}{cc}
0 & i \\
i & 0
\end{array}
\right];
$$
then $\{Z_1,Z_2,Z_3\}$ is a basis of the Lie algebra of $G$. We have
$$
[Z_1,Z_2]=-Z_3,\quad [Z_1,Z_3]=Z_2,\quad [Z_2,Z_3]=-Z_1.
$$
Accordingly, the dual left-invariant coframe $\{\psi^1,\psi^2,\psi^3\}$ satisfies the Maurer-Cartan equations
$$
d\psi^1=\psi^2\wedge \psi^3,\quad d\psi^2=-\psi^1\wedge \psi^3,\quad d\psi^3=\psi^1\wedge \psi^2.
$$
Let
$$
\Omega =\frac14(\psi^{12\bar{1}\bar{2}}+\psi^{13\bar{1}\bar{3}}+\psi^{23\bar{2}\bar{3}}),
$$
where we write $\psi^{r\bar{s}}=\psi^r\wedge\overline{\psi^s}$ and similarly for longer multi-indices. Then,
\begin{equation}\label{Omega-exact}
\Omega=\frac18 d(\psi^{12\bar{3}}+\psi^{\bar{1}\bar{2}3}-
\psi^{13\bar{2}}-\psi^{\bar{1}\bar{3}2}+
\psi^{23\bar{1}}+\psi^{\bar{2}\bar{3}1}
):=d\gamma.
\end{equation}
In \cite[p.467]{OUV} it is proved that every left-invariant $(2,2)$-form is $d$-exact (see also \cite{PT} for cohomological computations).
Let us define a complex curve of almost complex structures on $\Gamma\backslash G$ through a basis of $(1,0)$-forms by setting
$$
\psi_t^1=\psi^1,\qquad
\psi_t^2=\psi^2,\qquad
\psi_t^3=\psi^3-t\overline{\psi^3},
$$
for $t\in \mathbb{B} (0,\varepsilon)$. A direct computation gives
\begin{equation}\label{mc-deformation}
\left\{
\begin{array}{lll}
d\psi^1_t&=&\frac{1}{1-\vert t\vert ^2}(\psi^{23}_t+t\psi^{2\bar{3}}_t)\\[5pt]
d\psi^2_t&=&-\frac{1}{1-\vert t\vert ^2}(\psi^{13}_t+t\psi^{1\bar{3}}_t)\\[5pt]
d\psi^3_t&=&\psi^{12}_t-t\psi^{\bar{1}\bar{2}}_t.
\end{array}
\right.
\end{equation}
In particular, the last equation shows that $J_t$ is not integrable for $t\neq 0$. Now we compute the action of $J_t$ on the forms $\psi^1,\psi^2,\psi^3$. A straightforward calculation yields
$$
J_t\psi^1=i\psi^1,\quad J_t\psi^2=i\psi^2,\quad
J_t\psi^3=\frac{i}{1-\vert t\vert^2}\big((1+\vert t\vert^2)\psi^3-2t\overline{\psi^3}\big)
$$
and
$$
J_t\overline{\psi^1}=-i\overline{\psi^1},\quad J_t\overline{\psi^2}=-i\overline{\psi^2},\quad
J_t\overline{\psi^3}=\frac{i}{1-\vert t\vert^2}\big(-(1+\vert t\vert^2)\overline{\psi^3}+2\overline{t}\psi^3\big).
$$
Therefore, it is immediate to check that
$$
J_t\Omega=\Omega,
$$
so that $\Omega$ is of type $(2,2)$ with respect to $J_t$ for every $t$. \newline
Finally, there are no symplectic structures taming $J_t$ for any given $t\in \mathbb{B}(0,\varepsilon)$. By contradiction, assume that there exists a symplectic structure $\omega_t$ on $\Gamma\backslash SL(2,\C)$ taming $J_t$. Then, since for every $t$ the almost complex structure is left-invariant, by an averaging process we may produce a left-invariant symplectic structure $\hat{\omega}$ on $\Gamma\backslash SL(2,\C)$
taming $J_t$. Let $\hat{\omega}$ be given as
$$
2\hat{\omega}=iA\psi_t^{1\bar{1}}+iB\psi_t^{2\bar{2}}+iC\psi_t^{3\bar{3}}+u\psi_t^{1\bar{2}}-\bar{u}\psi_t^{2\bar{1}} +v\psi_t^{1\bar{3}}-\bar{v}\psi_t^{3\bar{1}} +w\psi_t^{2\bar{3}}-\bar{w}\psi_t^{3\bar{2}},
$$
where $A,B,C,u,v,w\in\C$. Then a direct calculation using \eqref{mc-deformation} gives that, if $\hat{\omega}$ is closed, then $C=0$. On the other hand, evaluating $\hat{\omega}$ on a vector spanning the $J_t$-complex line on which $\psi^1_t$, $\psi^2_t$ and their conjugates vanish, the taming condition forces $C>0$. This is absurd.}
\end{ex}
Therefore, we have proved the following
\begin{prop}\label{fixedomega}
For any given $t\in\mathbb{B}(0,\varepsilon)\subset\C$, $(\Gamma\backslash SL(2,\C),J_t,\Omega)$ is a compact almost $2$-K\"ahler manifold of complex dimension $3$, such that:
\begin{enumerate}
\item[i)] the almost complex structure $J_t$ is integrable if and only if $t=0$;
\item[ii)] the almost $2$-K\"ahler structure $\Omega$ is $d$-exact;
\item[iii)] $J_t$ has no tamed symplectic structures for every given $t\in \mathbb{B}(0,\varepsilon)$.
\end{enumerate}
\end{prop}
We end this Section by proving a non-existence result for almost $p$-K\"ahler structures.
\begin{prop}\label{nop} Let $(M,J)$ be a closed almost complex manifold of (complex) dimension $n$.
Suppose $\alpha$ is a non-closed $1$-form such that the $(1,1)$ part
$$(d\alpha)^{1,1} = \sum_k c_k \psi_k \wedge \bar{\psi_k},$$
where the $\psi_k$ are $(1,0)$-covectors and the $c_k$ have the same sign. Then $(M,J)$ does not have a balanced metric.
More generally, suppose there exists a non-closed $(2n-2p-1)$-form $\beta$ such that
$$(d\beta)^{n-p,n-p} = \sum_k c_k \psi_k \wedge \bar{\psi_k},$$
where the $\psi_k$ are simple $(n-p,0)$-covectors and the $c_k$ have the same sign. Then $(M,J)$ does not admit an almost $p$-K\"{a}hler form.
\end{prop}
\begin{proof} It suffices to prove the second statement. Without loss of generality, we may assume that all the $c_k$ are positive. Arguing by contradiction, suppose that $\Omega$ is an almost $p$-K\"{a}hler form and $\beta$ is a non-zero $(2n-2p-1)$-form as above. Then, since $M$ is closed, we have
$$0 = \int_M \sigma_{n-p}\, d(\Omega \wedge \beta) = \sum_k c_k \int_M \Omega \wedge \sigma_{n-p}\, \psi_k \wedge \bar{\psi_k} >0,$$
as all integrals on the right are strictly positive. This gives a contradiction.
\end{proof}
\begin{rem} All Riemann surfaces are almost K\"{a}hler. Therefore if a $1$-form $\alpha$ as in Proposition \ref{nop} exists, it must restrict to a closed form on all $1$-dimensional subvarieties of $M$.
\end{rem}
As an application of Proposition \ref{nop}, we provide a family of $n$-dimensional compact complex manifolds which are not $p$-K\"ahler, for any given $1\leq p\leq (n-1)$.
\begin{ex}{\em
Let
$$
\mathbb{H}_{2n-1}(\R):=\Big\{
A=\left [
\begin{array}{lll}
1 & X & v\\
0 & I_{n-1} & Y\\
0 & 0 &1
\end{array}
\right]\,\,\,\vert\,\,\, X^t,Y\in\R^{n-1},\,v\in\R
\Big\}
$$
be the $(2n-1)$-dimensional real Heisenberg group, where $I_{n-1}$ denotes the identity matrix of order $n-1$. Then $\mathbb{H}_{2n-1}(\R)$ is $(2n-1)$-dimensional nilpotent Lie group and the subset
$\Gamma\subset\mathbb{H}_{2n-1}(\R)$, formed by the matrices with integer entries, is a uniform discrete subgroup of $\mathbb{H}_{2n-1}(\R)$, so that $\Gamma\backslash\mathbb{H}_{2n-1}(\R)$ is a compact $(2n-1)$-dimensional nilmanifold. Then
$$
M=\Gamma\backslash\mathbb{H}_{2n-1}(\R)\times \R\slash\Z
$$
is a $2n$-dimensional compact nilmanifold having a global coframe $\{e^1,\ldots,e^n,f^1,\ldots,f^n\}$, defined as
$$
\begin{array}{lll}
e^{\alpha}=dx^\alpha,&1\leq\alpha \leq n-1, & e^n=du\\
f^{\alpha}=dy^\alpha,&1\leq\alpha \leq n-1, & f^n=dv-\sum_{\beta=1}^{n-1}x^\beta dy^\beta ,
\end{array}
$$
where $u$ denotes the natural coordinate on $\R$. It is immediate to check that
\begin{equation}
\left\{
\begin{array}{ll}
de^{\alpha}=0 & 1\leq\alpha \leq n\\[5pt]
df^{\beta}=0 & 1\leq\beta \leq n-1\\[5pt]
df^{n}=-\sum_{\gamma =1}^{n-1}e^\gamma\wedge f^\gamma &
\end{array}
\right.
\end{equation}
Then
$$
\varphi^\alpha=e^\alpha +if^\alpha,\qquad 1\leq\alpha\leq n
$$
give rise to a complex coframe of $(1,0)$-forms on $M$ such that
$$
\left\{
\begin{array}{ll}
d\varphi^{\alpha}=0 & 1\leq\alpha \leq n-1\\[5pt]
d\varphi^n=\frac12\sum_{\beta =1}^{n-1}\varphi^\beta\wedge\overline{\varphi^\beta} &
\end{array}
\right.
$$
so that the induced almost complex structure $J$ is integrable.
Fix any $1\leq p\leq (n-1)$. We show that $(M,J)$ is not $p$-K\"ahler. Define
$$
\beta=\varphi^n\wedge\varphi^{1\overline{1}\ldots (n-p-1)\overline{(n-p-1)}}.
$$
Then,
$$
d\beta=(d\beta)^{n-p,n-p}=\Big(\sum_{\beta=1}^{n-1}\varphi^{\beta\overline{\beta}}\Big)
\wedge\varphi^{1\overline{1}\ldots (n-p-1)\overline{(n-p-1)}}
$$
and the result follows from Proposition \ref{nop}.
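To see that Proposition \ref{nop} indeed applies here, note that each non-zero summand above can be rewritten (up to a reordering of the factors whose sign depends only on $n-p$, and not on $\beta$) as
$$
\varphi^{\beta\overline{\beta}}\wedge\varphi^{1\overline{1}\ldots (n-p-1)\overline{(n-p-1)}}
=\pm\,\psi_\beta\wedge\overline{\psi_\beta},
\qquad
\psi_\beta:=\varphi^\beta\wedge\varphi^{1}\wedge\cdots\wedge\varphi^{n-p-1},
$$
so the $\psi_\beta$ are simple $(n-p,0)$-covectors, only the indices $\beta\geq n-p$ contribute, and all the coefficients share the same sign.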
\newline
Note that the smooth manifold $M$ does not carry any K\"ahler structure $(J,g,\omega)$, since it is a non-toral nilmanifold.
}
\end{ex}
\section{Semi-K\"ahler deformations of balanced metrics}
\label{main}
Let $M=\C^n$ endowed with a balanced structure $(J,g,\omega)$, that is $J$ is a complex structure on $\C^n$, $g$ is a Hermitian metric such that the fundamental form $\omega$ of $g$ satisfies $d\omega^{n-1}=0$.
Denote by $\{\varphi^1,\ldots,\varphi^{n}\}$ a complex $(1,0)$-coframe on $(\C^n,J)$ and let
$$
\omega=\frac{i}{2}\sum_{j,k=1}^n\omega_{jk}(z)\varphi^j\wedge\overline{\varphi^k}
$$
be the fundamental form of $g$, where $(\omega_{jk}(z))$ is Hermitian and positive definite.
Then, since $\omega$ is balanced,
$$\Omega:=\frac{1}{(n-1)!}\omega^{n-1}$$
is $d$-closed, that is $d\Omega=0$. Denote by $\{\zeta_1,\ldots,\zeta_n\}$ the dual $(1,0)$-frame of $\{\varphi^1,\ldots,\varphi^{n}\}$. Let $I=(-\varepsilon,\varepsilon)$ and let $\{J_t\}_{t\in I}$ be a smooth curve of almost complex structures on $\C^n$ such that $J_0=J$.
Then, as already recalled in Section \ref{complex-curves}, there exists a unique $L_t\in\hbox{\rm End}(T\C^n)$ such that $L_tJ+JL_t=0$ and
$$
J_t=(I+L_t)J(I+L_t)^{-1}.
$$
Then, $L_t$ can be identified with an element $\Phi(t)\in \Gamma(\C^n,\Lambda^{0,1}\C^n\otimes T^{1,0}\C^n)$, by defining
$$
\Phi(t)=\frac12(L_t-iJL_t).
$$
In other words, the curve of almost complex
structures $\{J_t\}_{t\in I}$ is encoded by such a $\Phi(t)\in \Gamma(\C^n,\Lambda^{0,1}\C^n\otimes T^{1,0}\C^n)$ so that, if the expression of $\Phi(t)$ is
$$
\Phi(t)=\sum_{j,k=1}^n\sigma^j_k(z,t)\overline{\varphi^k}\otimes\zeta_j,
$$
with $\sigma^j_k=\sigma^j_k(z,t)$ smooth on $(z,t)$, then
a complex $(1,0)$-coframe on $(\C^n,J_t)$ is given by
$$
\varphi^j_t=\varphi^j-\langle \Phi(t),\zeta_j\rangle,\quad j=1,\ldots,n,
$$
where
$$
\langle \Phi(t),\zeta_j\rangle=\sum_{k=1}^n\sigma^j_k\overline{\varphi^k}
$$
Then, explicitly
\begin{equation}\label{def-phi-t}
\varphi^j_t=\varphi^j-\sum_{k=1}^n\sigma^j_k\overline{\varphi^k},\quad \quad j=1,\ldots,n.
\end{equation}
According to the Kodaira and Spencer theory of small deformations of complex structures (see \cite{MK}, \cite{GHJ}) $J_t$ is integrable if and only if the Maurer-Cartan equation holds, that is
$$
\partialbar\Phi(t)+\frac12[[\Phi(t),\Phi(t)]]=0.
$$
Here, we are not assuming that $J_t$ is integrable. Let $(J_t,g_t,\omega_t)$ be a curve of almost Hermitian metrics on $\C^n$ such that $(J_0,g_0,\omega_0)=(J,g,\omega)$. Then,
\begin{equation}\label{def-omega-t-prime}
\omega_t:=\frac{i}{2}\sum_{j,k=1}^n\omega_{jk}(z,t)\varphi_t^j\wedge\overline{\varphi_t^k},
\end{equation}
where $\omega_{jk}(z,t)$ are smooth and $\omega_{jk}(z,0)=\omega_{jk}(z)$, $j,k=1,\ldots, n$.
\newline
In the sequel we will use the symbol $\dot{}$ to denote the derivative with respect to $t$, e.g., we will use the following notation
$$
\dot{\varphi}^j_0=\frac{d}{dt}{\varphi}^j_t\vert_{t=0}
$$
and we will omit the subscript $0$ in the $t$-derivative of the functions $\sigma^j_k(z_1,\ldots,z_n,t)$ evaluated at $t=0$, that is
$$
\dot{\sigma^j_k}=\frac{d}{dt}
\sigma^j_k(z_1,\ldots,z_n,t)\vert_{t=0}.
$$
Set
\begin{equation}\label{def-Omega-t}
\Omega_t:=\frac{1}{(n-1)!}\omega_t^{n-1}.
\end{equation}
Let us compute the $t$ derivative of $\Omega_t$ at $t=0$.
In view of \eqref{def-phi-t}, we easily compute
$$
\dot{\varphi}^j_0=-\sum_{k=1}^n\dot{\sigma^j_k}\overline{\varphi^k},
$$
and hence, by the definition of $\Omega_t$, we obtain that
$$\dot{\Omega}_0\in A^{n,n-2}\C^n\oplus A^{n-1,n-1}\C^n\oplus A^{n-2,n}\C^n,\qquad \dot{\Omega}_0=\overline{\dot{\Omega}}_0.
$$
Consequently, we can define the $(n-2,n)$-form $\eta$ and the $(n-1,n-1)$-form $\lambda$ on $(\C^n,J)$ respectively by
\begin{equation}\label{derivative-Omega-t-prime}
\eta:=(\dot{\Omega}_0)^{n-2,n}
\end{equation}
and
\begin{equation}\label{derivative-n-1-Omega-t-prime}
\lambda:=(\dot{\Omega}_0)^{n-1,n-1}.
\end{equation}
We have
\begin{equation}\label{Omega-eta-prime}
\dot{\Omega}_0=\eta+\overline{\eta}+\lambda.
\end{equation}
We are ready to state the following
\begin{theorem}\label{m-theorem-prime}
Let $(J,g,\omega)$ be a balanced structure on $\C^n$. Let $(J_t,g_t,\omega_t)$, for $t\in I$, be a curve of almost Hermitian metrics on $\C^n$ such that $(J_0,g_0,\omega_0)=(J,g,\omega)$. If $(J_t,g_t,\omega_t)$ is a curve of semi-K\"ahler structures on $\C^n$, then
\begin{equation}\label{del-eta-prime}
\partial\eta+\partialbar\lambda=0.
\end{equation}
\end{theorem}
\begin{proof}
By definition, $\Omega_t:=\frac{1}{(n-1)!}\omega_t^{n-1}$ and by assumption,
\begin{equation}\label{d-Omega-t-prime}
d\Omega_t=0.
\end{equation}
Thus, by taking the derivative of \eqref{d-Omega-t-prime} with respect to $t$ evaluated at $t=0$ and taking into account \eqref{Omega-eta-prime}, we obtain
$$
0=d\dot{\Omega}_0=d(\eta+\overline{\eta}+\lambda)=(\partial+\partialbar)(\eta+\overline{\eta}+\lambda)=\partial\eta+\partialbar\overline{\eta}+\partial\lambda+\partialbar\lambda,
$$
where we have used that $J_{t}$ is integrable at $t=0$, so that $d=\partial+\partialbar$, and that $\eta\in A^{n-2,n}\C^n$, so that $\partialbar\eta=0$ and $\partial\overline{\eta}=0$. Therefore, the above equation
$$
\partial\eta+\partialbar\overline{\eta}+\partial\lambda+\partialbar\lambda=0
$$
turns out to be equivalent, by type reasons (its $(n-1,n)$-component is $\partial\eta+\partialbar\lambda$, while its $(n,n-1)$-component is the conjugate expression $\partialbar\overline{\eta}+\partial\lambda$), to
$$
\partial\eta+\partialbar\lambda=0,
$$
that is, if $\Omega_t$ is $d$-closed, then \eqref{del-eta-prime} holds. The Theorem is proved.
\end{proof}
Let $M$ be a compact holomorphically parallelizable complex manifold. Then, by a Theorem of Wang (see \cite{W}), there exists a simply-connected, connected complex Lie group $G$ and a lattice $\Gamma\subset G$ such that
$M=\Gamma\backslash G$. Assume that $M$ is a solvmanifold, that is $G$ is solvable. Then, according to Nakamura \cite[Prop. 1.4]{N}, the universal covering of $M$ is biholomorphically equivalent to $\C^N$. Due to Abbena and Grassi \cite[Theorem 3.5]{AG}, the natural complex structure $J$ on $M$ admits a balanced metric $g$.
As a direct consequence of Theorem \ref{m-theorem-prime}, we get the following (see also Sferruzza \cite{Sf})
\begin{cor}\label{solvable-necessary.condition}
Let $M=\Gamma\backslash\C^N$ be a compact complex solvmanifold endowed with the natural balanced structure $(J,g,\omega)$. If $(J_t,g_t,\omega_t)$ is a curve of semi-K\"ahler structures on $M$ such
that $(J_0,g_0,\omega_0)=(J,g,\omega)$, then
\begin{equation}\label{cohomological-prime}
0=[\partial\eta]_{\partialbar}\in H_{\partialbar}^{N-1,N}(M).
\end{equation}
\end{cor}
Coming back to $\C^n$, in the particular case that
\begin{equation}\label{def-omega-t}
\omega_t:=\frac{i}{2}\sum_{j=1}^n\varphi_t^j\wedge\overline{\varphi_t^j},
\end{equation}
we obtain the following
\begin{prop}\label{m-theorem}
Let $(J,g,\omega)$ be a balanced structure on $\C^n$. Let $\{J_t\}_{t\in I}$ be a smooth curve of almost complex structures on $\C^n$ such that $J_0=J$. Let $\omega_t$ be the real $J_t$-positive $(1,1)$-form defined by \eqref{def-omega-t} and $g_t$ be the associated $J_t$-Hermitian metric on $\C^n$. If $(J_t,g_t,\omega_t)$ is a curve of semi-K\"ahler structures on $\C^n$, then
\begin{equation}\label{del-eta}
\partial\eta=0.
\end{equation}
\end{prop}
The proof of the above Proposition follows at once from Theorem \ref{m-theorem-prime} by noting that in such a case $\lambda=0$.\newline
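To make the latter point explicit: under \eqref{def-omega-t}, $\dot{\omega}_0=\frac{i}{2}\sum_{j}\big(\dot{\varphi}^j_0\wedge\overline{\varphi^j}+\varphi^j\wedge\overline{\dot{\varphi}^j_0}\big)$, and since each $\dot{\varphi}^j_0=-\sum_k\dot{\sigma^j_k}\overline{\varphi^k}$ is of type $(0,1)$, the form $\dot{\omega}_0$ has no $(1,1)$-component; hence $\dot{\Omega}_0=\frac{1}{(n-2)!}\omega^{n-2}\wedge\dot{\omega}_0$ has no $(n-1,n-1)$-component, i.e. $\lambda=0$.\newline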
Finally, under the same assumptions as in the last Proposition \ref{m-theorem}, for $n=3$, we derive the following
\begin{cor}\label{m-cor-3}
Let $(J,g,\omega)$ be a balanced structure on $\C^3$. Let $\{J_t\}_{t\in I}$ be a smooth curve of almost complex structures on $\C^3$ such that $J_0=J$. Let $\omega_t$ be the real $J_t$-positive $(1,1)$-form defined by \eqref{def-omega-t} and $g_t$ be the associated $J_t$-Hermitian metric on $\C^3$. If $(J_t,g_t,\omega_t)$ is a curve of semi-K\"ahler structures on $\C^3$, then
\begin{equation}\label{del-eta-3}
\partial \left(\left[(\dot{\sigma}^2_3-\dot{\sigma}^3_2)\varphi^1 +(\dot{\sigma}^3_1-\dot{\sigma}^1_3)\varphi^2+(\dot{\sigma}^1_2-\dot{\sigma}^2_1)\varphi^3 \right]\wedge\varphi^{\bar{1}\bar{2}\bar{3}}\right)=0.
\end{equation}
\end{cor}
\begin{cor}\label{m-cor-3-first}
Under the same assumptions as in Corollary \ref{m-cor-3}, and the additional assumption that $\varphi^j=dz_j$, $j=1,2,3$, if $(J_t,g_t,\omega_t)$ is a curve of semi-K\"ahler structures on $\C^3$, then
the following equations hold
\begin{equation}\label{necessary-condition-3}
\left\{
\begin{array}{l}
\frac{\partial}{\partial z_1}(\dot{\sigma}^3_1-\dot{\sigma}^1_3)-\frac{\partial}{\partial z_2}(\dot{\sigma}^2_3-\dot{\sigma}^3_2)=0\\[10pt]
\frac{\partial}{\partial z_1}(\dot{\sigma}^1_2-\dot{\sigma}^2_1)-\frac{\partial}{\partial z_3}(\dot{\sigma}^2_3-\dot{\sigma}^3_2)=0\\[10pt]
\frac{\partial}{\partial z_2}(\dot{\sigma}^1_2-\dot{\sigma}^2_1)-\frac{\partial}{\partial z_3}(\dot{\sigma}^3_1-\dot{\sigma}^1_3)=0
\end{array}
\right.
\end{equation}
\end{cor}
\begin{rem}
The previous constructions can be easily adapted by replacing the parameter space $I=(-\varepsilon,\varepsilon)$ with $$\mathbb{B}(0,\varepsilon)=\{t=(t_1,\ldots,t_k)\in\R^k\,\,\,:\,\,\, \vert t\vert <\varepsilon\}.$$
In particular, Theorem \ref{m-theorem-prime} can be generalized by considering
$$
\eta_j=\left(\frac{\partial}{\partial t_j}\Omega_t\vert_{t=0}\right)^{n-2,n},\quad
\lambda_j=\left(\frac{\partial}{\partial t_j}\Omega_t\vert_{t=0}\right)^{n-1,n-1}.
$$
\end{rem}
\section{Applications and examples}\label{main-examples}
First, we construct a family of semi-K\"ahler structures on the $6$-dimensional torus $\mathbb{T}^6$, obtained as a deformation of the standard K\"ahler structure on $\mathbb{T}^6$, which cannot be locally compatible with any symplectic form. More precisely, we start by showing the following
\begin{theorem}\label{main-theorem-1}
Let $(J,\omega)$ be the standard K\"ahler structure on the standard torus
$\T^{6}=\R^{6}\slash \Z^{6}$, with coordinates $(z_1,z_2,z_3)$, $z_j=x_j+iy_j$, $j=1,2,3$. Let $f=f(z_2,\overline{z}_2)$ be a $\Z^{6}$-periodic smooth complex valued function on $\R^6$ and set $f=u+iv$, where $u=u(x_2,y_2)$, $v=v(x_2,y_2)$. \newline
There is an almost complex structure $J = J(f)$ on $\mathbb{T}^6$ such that
\begin{enumerate}
\item[I)] If $f=0$ then $J=J(0)$ is the standard complex structure;
\item[II)] $J$ admits a semi-K\"ahler metric provided $f$ is sufficiently small (in the uniform norm);
\item[III)] Suppose that $u$, $v$ satisfy the following condition
\begin{equation}\label{condition-Derivative}
\left(\frac{\partial u}{\partial x_2}
+
\frac{\partial v}{\partial y_2}\right)\Big\vert_{(x,y)=0}\neq 0.
\end{equation}
Then $J$ is not locally compatible with respect to any symplectic form on $\mathbb{T}^6$.
\end{enumerate}
\end{theorem}
\begin{proof}
I) Define
\begin{equation}\label{almost-complex-structure-torus}
\left\{
\begin{array}{ll}
J\partial_{x_1}=& 2v\partial_{x_3}+\partial_{y_1}-2u\partial_{y_3}\\[5pt]
J\partial_{x_2}=& \partial_{y_2}\\[5pt]
J\partial_{x_3}=& \partial_{y_3}\\[5pt]
J\partial_{y_1}=& -\partial_{x_1}-2u\partial_{x_3}-2v\partial_{y_3}\\[5pt]
J\partial_{y_2}=& -\partial_{x_2}\\[5pt]
J\partial_{y_3}=& -\partial_{x_3}\\[5pt]
\end{array}
\right.
\end{equation}
By definition of $J$, in view of the $\Z^6$-periodicity of the functions $u$, $v$, we immediately get that $J$ gives rise to an almost complex structure on $\mathbb{T}^6$, such that $J(0)$ coincides with the standard complex structure on $\mathbb{T}^6$.\vskip.2truecm\noindent
II) The almost complex structure defined by \eqref{almost-complex-structure-torus} induces an almost complex structure on the cotangent bundle of $\mathbb{T}^6$, still denoted by $J$ and expressed as
\begin{equation}\label{almost-complex-structure-dual-torus}
\left\{
\begin{array}{ll}
J dx_1=& -dy_1\\[5pt]
J dx_2=& -dy_2\\[5pt]
J dx_3=&-dy_3+ 2vdx_1-2udy_1\\[5pt]
J dy_1=& dx_1\\[5pt]
J dy_2=& dx_2\\[5pt]
J dy_3=&dx_3- 2udx_1-2vdy_1
\end{array}
\right.
\end{equation}
Accordingly, a $(1,0)$-coframe on $\mathbb{T}^6$ with respect to $J$ is given by
$$
\varphi^1=dz_1,\quad \varphi^2=dz_2,\quad\varphi^3=dz_3-fd\bar{z}_1.
$$
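One can check directly from \eqref{almost-complex-structure-dual-torus} that these are indeed $(1,0)$-forms: $Jdz_1=i\,dz_1$, $Jdz_2=i\,dz_2$, $Jd\overline{z}_1=-i\,d\overline{z}_1$ and $Jdz_3=i\,dz_3-2if\,d\overline{z}_1$, so that $J\varphi^3=i\,dz_3-2if\,d\overline{z}_1+if\,d\overline{z}_1=i\varphi^3$.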
Thus,
$$
\sigma^3_1=f(z_2,\overline{z}_2),
$$
with the other $\sigma^j_k$ vanishing. Then, for $f$ small enough, $J$ admits a semi-K\"ahler metric $g$, whose fundamental form is provided by
$$
\omega=\frac{i}{2}\sum_{j=1}^3\varphi^j\wedge\overline{\varphi^j}.
$$
\vskip.2truecm\noindent
III) In order to show that the almost complex structure admits no locally compatible symplectic forms when \eqref{condition-Derivative} holds, we need to recall the following result.\newline
Let $P$ be an almost complex structure on $\R^6$, with coordinates $(x_1,\ldots,x_6)$. Then, according to \cite[Theorem 2.4]{MT}, if $P$ is locally compatible with respect to a symplectic form, then the following necessary conditions hold
\begin{equation}\label{equations-mt}
\left\{
\begin{array}{l}
-\frac{\partial}{\partial x_1}(P_{26}-P_{62})-\frac{\partial}{\partial x_2}(P_{16}-P_{61})
-\frac{\partial}{\partial x_3}(P_{15}-P_{51})\\[10pt]
-\frac{\partial}{\partial x_4}(P_{23}-P_{32})+\frac{\partial}{\partial x_5}(P_{13}-P_{31})
-\frac{\partial}{\partial x_6}(P_{12}-P_{21})=0\\[15pt]
-\frac{\partial}{\partial x_1}(P_{23}-P_{32})-\frac{\partial}{\partial x_2}(P_{13}-P_{31})
-\frac{\partial}{\partial x_3}(P_{12}-P_{21})\\[10pt]
-\frac{\partial}{\partial x_4}(P_{26}-P_{62})+\frac{\partial}{\partial x_5}(P_{16}-P_{61})
-\frac{\partial}{\partial x_6}(P_{15}-P_{51})=0,
\end{array}
\right.
\end{equation}
where all the derivatives are computed at $x=0$. In our notation, $x_4=y_1$, $x_5=y_2$, $x_6=y_3$.
\vskip.2truecm\noindent
Now, the assumption \eqref{condition-Derivative}, namely
$$
\left(\frac{\partial u}{\partial x_2}
+
\frac{\partial v}{\partial y_2}\right)\Big\vert_{(x,y)=0}\neq 0
$$
on the partial derivatives of $f=u+iv=u(x_2,y_2)+iv(x_2,y_2)$ at $(x,y)=0$, immediately shows that the first equation of \eqref{equations-mt} is not satisfied. Therefore, $J(f)$ cannot be locally compatible with any symplectic form.
\end{proof}
\begin{rem} We can generate $1$-parameter families of almost complex structures by setting $J_t = J(tf)$. These show that the necessary condition \eqref{necessary-condition-3} of Corollary \ref{m-cor-3-first} is also sufficient in this case. Indeed, in such a case the natural $
\omega_t=\frac{i}{2}\sum_{j=1}^3\varphi^j_t\wedge\overline{\varphi^j_t}
$ satisfies $d\omega^2_t=0$.\newline
As a consequence of III), for any given $t\neq 0$, $J_t$ is not integrable.
\end{rem}
\begin{ex}{\em (Iwasawa manifold)\,}{\em Let $\C^3$ be endowed with the product $*$ defined by
$$
(w_1,w_2,w_3)*(z_1,z_2,z_3)=(w_1+z_1,w_2+z_2,w_3+w_1z_2+z_3).
$$
Then $(\C^3,*)$ is a complex nilpotent Lie group which admits a lattice $\Gamma =\Z[i]^3$ and accordingly it turns out that
$\mathbb{I}_3=\Gamma\backslash\C^3$ is a compact complex $3$-dimensional manifold, the {\em Iwasawa manifold}. It is immediate to check that
$$
\{\varphi^1=dz_1,\quad \varphi^2=dz_2,\quad \varphi^3=dz_3-z_1dz_2\}
$$
is a complex $(1,0)$-coframe for the standard complex structure naturally induced by $\C^3$, whose dual frame is
$$
\{\zeta_1=\frac{\partial}{\partial z_1},\quad
\zeta_2=\frac{\partial}{\partial z_2}+z_1\frac{\partial}{\partial z_3},\quad
\zeta_3=\frac{\partial}{\partial z_3}\}.
$$
The following
$$
g=\sum_{j=1}^3\varphi^j\odot\overline{\varphi^j}
$$
is a balanced metric on $\mathbb{I}_3$. Indeed,
the fundamental form of $g$ is
$$
\omega=\frac{i}{2}\left(dz_1\wedge d\overline{z_1}+(1+\vert z_1\vert ^2)dz_2\wedge d\overline{z_2}-z_1dz_2\wedge d\overline{z_3}-\overline{z_1}dz_3\wedge d\overline{z_2}+dz_3\wedge d\overline{z_3}\right),
$$
which satisfies $d\omega^2=0$. \newline
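Indeed, $d\varphi^1=d\varphi^2=0$ and $d\varphi^3=-dz_1\wedge dz_2=-\varphi^{12}$, so that $d(\varphi^{3\bar{3}})=-\varphi^{12\bar{3}}+\varphi^{3\bar{1}\bar{2}}$; wedging with $\varphi^{1\bar{1}}$ or $\varphi^{2\bar{2}}$ kills both summands (a factor is repeated), and $\varphi^{1\bar{1}2\bar{2}}$ is closed, whence $d\omega^2=0$.\newline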
We will provide a smooth curve of almost complex structures $\{J_t\}_{t\in I}$ such that $J_0$ coincides with the complex structure on $\mathbb{I}_3$ and such that, for any $t\neq 0$, $J_t$ admits no semi-K\"ahler metric. To this purpose let us define $\{J_t\}_{t\in I}$ on $\mathbb{I}_3$ by assigning
$$
\Phi(t)=t\overline{\varphi^1}\otimes\zeta_2-
t\overline{\varphi^2}\otimes\zeta_1.
$$
Then, accordingly,
$$
\{\varphi^1_t=dz_1+td\overline{z_2},\quad \varphi^2_t=dz_2-td\overline{z_1},\quad \varphi^3_t=dz_3-z_1dz_2\}
$$
is a complex $(1,0)$-coframe on $(\mathbb{I}_3,J_t)$. A simple calculation yields the following structure equations
\begin{equation}\label{structure-equations-iwasawa-deformed}
d\varphi^1_t=0,\quad
d\varphi^2_t=0,\quad
d\varphi^3_t=-\frac{1}{(1+t^2)^2}\left[
\varphi^{12}_t+t(\varphi_t^{1\bar{1}}+\varphi_t^{2\bar{2}}) +t^2\varphi_t^{\bar{1}\bar{2}}
\right]
\end{equation}
The last equation implies that $J_t$ is not integrable for $t\neq 0$.
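To make the coefficient explicit: from $\varphi^1_t=dz_1+td\overline{z_2}$ and $\overline{\varphi^2_t}=d\overline{z_2}-tdz_1$ one gets $(1+t^2)dz_1=\varphi^1_t-t\overline{\varphi^2_t}$, and similarly $(1+t^2)dz_2=\varphi^2_t+t\overline{\varphi^1_t}$; hence
$$
(1+t^2)^2\,dz_1\wedge dz_2=\varphi^{12}_t+t(\varphi_t^{1\bar{1}}+\varphi_t^{2\bar{2}})+t^2\varphi_t^{\bar{1}\bar{2}},
$$
while $d\varphi^3_t=d(dz_3-z_1dz_2)=-dz_1\wedge dz_2$, which gives the third equation of \eqref{structure-equations-iwasawa-deformed}.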
First we observe that, by the definition of $\Phi(t)$,
$$\sigma^2_1=t,\quad\sigma^1_2=-t,\qquad \sigma^j_k=0,\,\hbox{\rm otherwise}.
$$
Hence
\begin{eqnarray*}
\partial \left(\left[(\dot{\sigma}^2_3-\dot{\sigma}^3_2)\varphi^1 +(\dot{\sigma}^3_1-\dot{\sigma}^1_3)\varphi^2+(\dot{\sigma}^1_2-\dot{\sigma}^2_1)\varphi^3 \right]\wedge\varphi^{\bar{1}\bar{2}\bar{3}}\right)&=&2\partial\varphi^{3\bar{1}\bar{2}\bar{3}}\\
&=&-2\varphi^{12\bar{1}\bar{2}\bar{3}}
\end{eqnarray*}
A simple calculation shows that
$$
0\neq [-2\varphi^{12\bar{1}\bar{2}\bar{3}}]_{\partialbar}\in H_{\partialbar}^{2,3}(\mathbb{I}_3),
$$
that is, the necessary condition
$$
0= [\partial\eta]_{\partialbar}\in H_{\partialbar}^{2,3}(\mathbb{I}_3)
$$
of Corollary \ref{solvable-necessary.condition} is not satisfied.
Indeed, the form
$$
-2\varphi^{12\bar{1}\bar{2}\bar{3}}
$$
is $\partialbar$-harmonic with respect to the Hermitian
metric
$$
g=\sum_{j=1}^3\varphi^j\odot\overline{\varphi^j}
$$
on $\mathbb{I}_3$. In fact a stronger statement is true. We show that $J_t$ does not admit any semi-K\"ahler metric for $t\neq 0$. By the third equation of \eqref{structure-equations-iwasawa-deformed}, we immediately get that
$$
\left(d\varphi^3_t\right)^{1,1}=-\frac{t}{(1+t^2)^2}\left(
\varphi_t^{1\bar{1}}+\varphi_t^{2\bar{2}}\right)
$$
Then, Proposition \ref{nop} applies, proving the following
\begin{prop}\label{prop-Iwasawa}
The curve $\{J_t\}_{t\in I}$ of almost complex structures on $\mathbb{I}_3$ satisfies the following:
\begin{enumerate}
\item $J_0$ coincides with the natural complex structure on $\mathbb{I}_3$ and admits a balanced metric.
\item For any given $t\neq 0$, $J_t$ is not integrable and it has no semi-K\"ahler metrics.
\end{enumerate}
\end{prop}
}
\end{ex}
\begin{ex}
{\em (A family of almost $2$-K\"ahler structures on $\C^4$) Let $M=\C^4$ with real coordinates $(x_1,\ldots,x_4,y_1,\ldots,y_4)$ and $g$ be a smooth real valued function on $\C^4$. Define an almost complex structure $\mathcal{J}=\mathcal{J}_g$ on $\C^4$ by setting
\begin{equation}\label{almost-complex-structure-C4}
\left\{
\begin{array}{ll}
\mathcal{J}pureartial_{x_1}=& gpureartial_{x_3}+pureartial_{y_1}\\[5pt]
\mathcal{J}pureartial_{x_2}=& pureartial_{y_2}\\[5pt]
\mathcal{J}pureartial_{x_3}=& pureartial_{y_3}\\[5pt]
\mathcal{J}pureartial_{x_4}=& pureartial_{y_4}\\[5pt]
\mathcal{J}pureartial_{y_1}=& -pureartial_{x_1}-gpureartial_{y_3}\\[5pt]
\mathcal{J}pureartial_{y_2}=& -pureartial_{x_1}\\[5pt]
\mathcal{J}pureartial_{y_3}=& -pureartial_{x_3}\\[5pt]
\mathcal{J}pureartial_{y_4}=& -pureartial_{x_4}
\end{array}
\right.
\end{equation}
Then, a straightforward calculation shows that
\begin{equation}\label{PHI-4}
\left\{
\begin{array}{lll}
\Phi^1&=&dx_1+idy_1,\\[5pt]
\Phi^2&=&dx_2+idy_2,\\[5pt]
\Phi^3&=&dx_3+i(-gdx_1+dy_3),\\[5pt]
\Phi^4&=&dx_4+idy_4
\end{array}
\right.
\end{equation}
is a complex $(1,0)$-coframe on $(\C^4,\mathcal{J})$. Define
$$
\Omega=\frac{1}{4}\sum_{j<k}\Phi^{j\bar{j}k\bar{k}}
$$
and set
$$
\Omega_\tau=\Omega+\tau\hbox{\rm Re}\,\Phi^{2\bar{3}4\bar{4}}
$$
We have the following
\begin{lemma}\label{lemma-C-4}
The form $\Omega_\tau$ is $(2,2)$ with respect to $\mathcal{J}$ and positive for every $\tau\in (-\varepsilon,\varepsilon)$, for $\varepsilon$ small enough. Furthermore, assume that $g=g(x_2,x_3)$. Then, for any given $\tau$, we have that
$$
d\Omega_\tau=0 \qquad \iff \qquad
\frac{\partial g}{\partial x_2}-2\tau
\frac{\partial g}{\partial x_3}=0.
$$
\end{lemma}
\begin{proof}
By \eqref{PHI-4} we obtain
$$
\begin{array}{ll}
\Phi^{1\bar{1}2\bar{2}}&=-4dx_1\wedge dy_1\wedge dx_2\wedge dy_2,\quad
\Phi^{1\bar{1}3\bar{3}}=-4dx_1\wedge dy_1\wedge dx_3\wedge dy_3\\[7pt]
\Phi^{1\bar{1}4\bar{4}}&=-4dx_1\wedge dy_1\wedge dx_4\wedge dy_4,\quad
\Phi^{2\bar{2}4\bar{4}}=-4dx_2\wedge dy_2\wedge dx_4\wedge dy_4\\[7pt]
\Phi^{2\bar{2}3\bar{3}}&=4\left(gdx_1\wedge dx_2\wedge dx_3\wedge dy_2-dx_2\wedge dy_2\wedge dx_3\wedge dy_3\right)\\[7pt]
\Phi^{3\bar{3}4\bar{4}}&=-4\left(gdx_1\wedge dx_3\wedge dx_4\wedge dy_4+dx_3\wedge dy_3\wedge dx_4\wedge dy_4\right)
\end{array}
$$
In view of the above formulae, since $g=g(x_2,x_3)$, we get
\begin{equation}\label{d-OMEGA}
d\Omega=\frac14 d\sum_{j<k}\Phi^{j\bar{j}k\bar{k}}=
\frac{\partial g}{\partial x_2}
dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_4
\end{equation}
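Here only the summand $\Phi^{3\bar{3}4\bar{4}}$ contributes: the summands not involving $g$ have constant coefficients and are closed, $d\Phi^{2\bar{2}3\bar{3}}=0$ since $dg\wedge dx_2\wedge dx_3=0$ for $g=g(x_2,x_3)$, while $d\Phi^{3\bar{3}4\bar{4}}=-4\,dg\wedge dx_1\wedge dx_3\wedge dx_4\wedge dy_4=4\,\frac{\partial g}{\partial x_2}\,dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_4$.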
Expanding $\Phi^{2\bar{3}4\bar{4}}$, we obtain
\begin{eqnarray*}
\hbox{\rm Re}\,\Phi^{2\bar{3}4\bar{4}}&=&
2\left(-gdx_1\wedge dx_2\wedge dx_4\wedge dy_4
-dx_2\wedge dy_3\wedge dx_4\wedge dy_4
+\right.\\
&{}& \left.+dy_2\wedge dx_3\wedge dx_4\wedge dy_4
\right)
\end{eqnarray*}
Consequently,
\begin{equation}\label{d-PHI-2344}
d\hbox{\rm Re}\,\Phi^{2\bar{3}4\bar{4}}=
-2\frac{\partial g}{\partial x_3}
dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_4
\end{equation}
Taking into account the definition of
$$
\Omega_\tau=\Omega +\tau\hbox{\rm Re}\,\Phi^{2\bar{3}4\bar{4}},
$$
the Lemma follows immediately from \eqref{d-OMEGA} and \eqref{d-PHI-2344}.
\end{proof}
We are ready to state and prove the following
\begin{theorem}\label{C-4-main}
Let $g$ be a smooth real valued function on $\C^4$ such that $g=g(x_2,x_3)$. Assume that, for any given $\tau\in (-\varepsilon,\varepsilon)$ the function $g$ satisfies the partial differential equation
$$
\frac{\partial g}{\partial x_2}-2\tau
\frac{\partial g}{\partial x_3}=0.
$$
Let $\mathcal{J}$ be the almost complex structure on $\C^4$ associated with $g$. Then,
\begin{enumerate}
\item[i)] For any given $\tau\in (-\varepsilon,\varepsilon)$, $(\mathcal{J},\Omega_\tau)$ is an almost $2$-K\"ahler structure on $\C^4$.
\item[ii)] If $\frac{\partial g}{\partial x_2}\vert_{(x,y)=0}\neq 0$, then $\mathcal{J}$ cannot be locally compatible with respect to any symplectic form on $\C^4$.
\end{enumerate}
\end{theorem}
\begin{proof}
i) By Lemma \ref{lemma-C-4} and by assumption on $g$, we immediately obtain that $(\mathcal{J},\Omega_\tau)$ is an almost $2$-K\"ahler structure on $\C^4$. \vskip.2truecm\noindent
ii) We will apply again \cite[Theorem 2.4]{MT}.\newline
Let $\R^6\cong\C^3_{z_1,z_2,z_3}$, where $z_j=x_j+iy_j,\,j=1,2,3$. Then the restriction $\mathcal{J}\vert_{\R^6}$ of $\mathcal{J}$ to $\R^6$ gives rise to an almost complex structure on $\R^6$. We have that
$$
\left\{
\begin{array}{ll}
\mathcal{J}\partial_{x_1}=& g\partial_{x_3}+\partial_{y_1}\\[5pt]
\mathcal{J}\partial_{x_2}=& \partial_{y_2}\\[5pt]
\mathcal{J}\partial_{x_3}=& \partial_{y_3}\\[5pt]
\mathcal{J}\partial_{y_1}=& -\partial_{x_1}-g\partial_{y_3}\\[5pt]
\mathcal{J}\partial_{y_2}=& -\partial_{x_2}\\[5pt]
\mathcal{J}\partial_{y_3}=& -\partial_{x_3}\\
\end{array}
\right.,
$$
where $g=g(x_2,x_3)$ satisfies
$$
\frac{\partial g}{\partial x_2}-2\tau
\frac{\partial g}{\partial x_3}=0.
$$
By assumption, $\frac{\partial g}{\partial x_2}\vert_{(x,y)=0}\neq 0$. Therefore, in view of \cite[Theorem 2.4]{MT}, the second equation of \eqref{equations-mt} is not satisfied. Consequently, $\mathcal{J}\vert_{\R^6}$ cannot be locally compatible with any symplectic form on $\R^6\cong\C^3_{z_1,z_2,z_3}$. \newline
By contradiction, assume that there exists a symplectic form $\omega$ on $\C^4$ locally compatible with $\mathcal{J}$. Then $(\mathcal{J},\omega)$ gives rise to an almost K\"ahler metric $\mathcal{G}$ on $\C^4$. Hence, $(\mathcal{J}\vert_{\R^6},\mathcal{G}\vert_{\R^6})$ would be an almost K\"ahler structure on the submanifold $\R^6\subset\C^4$, which is absurd.
\end{proof}
As an explicit example, we may take $g$ defined as
$$
g(x_2,x_3)=2\tau x_2+x_3.
$$
Then,
\begin{itemize}
\item[a)] $g$ verifies the partial differential equation
$$
\frac{\partial g}{\partial x_2}-2\tau
\frac{\partial g}{\partial x_3}=0.
$$
\item[b)] For every $\tau\in (-\varepsilon,\varepsilon),$ $\tau\neq 0$, we have $\frac{\partial g}{\partial x_2}\vert_{(x,y)=0}\neq 0$.
\item[c)] For $\tau=0$ the almost complex structure is not integrable and it is almost K\"ahler.
\end{itemize}
Therefore, for any $\tau\in (-\varepsilon,\varepsilon),$ $\tau\neq 0$, $(\mathcal{J},\Omega_\tau)$ is an almost $2$-K\"ahler structure on $\C^4$ such that $\mathcal{J}$ admits no compatible symplectic structures.
}
\end{ex}
\end{document}
|
\begin{document}
\title[Continuity of Oseledets subspaces for fiber-bunched cocycles]{A note on the continuity of Oseledets subspaces for fiber-bunched cocycles}
\author{Lucas Backes }
\address{\noindent IME - Universidade do Estado do Rio de Janeiro, Rua S\~ao Francisco Xavier 524, CEP 20550-900, Rio de
Janeiro, RJ, Brazil .
\newline e-mail: \rm
\texttt{[email protected]} }
\maketitle
\begin{abstract}
We prove that, restricted to the subset of fiber-bunched elements of the space of $GL(2,\mathbb{R})$-valued cocycles, Oseledets subspaces vary continuously, in measure, with respect to the cocycle.
\end{abstract}
\section{Introduction}
In its simplest form, a linear cocycle is just an invertible dynamical system $f:M \rightarrow M$ and a matrix-valued map $A:M\rightarrow GL(d, \mathbb{R})$. Sometimes one calls linear cocycle (over $f$, generated by $A$), instead, the sequence $\lbrace A^n\rbrace _{n\in \mathbb{Z}}$ defined by
\begin{equation*}\label{def:cocycles}
A^n(x)=
\left\{
\begin{array}{ll}
A(f^{n-1}(x))\ldots A(f(x))A(x) & \mbox{if } n>0 \\
Id & \mbox{if } n=0 \\
A(f^{n}(x))^{-1}\ldots A(f^{-1}(x))^{-1}& \mbox{if } n<0 \\
\end{array}
\right.
\end{equation*}
for all $x\in M$.
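With this definition one readily checks the cocycle property $A^{m+n}(x)=A^{m}(f^{n}(x))\,A^{n}(x)$ for all $m,n\in \mathbb{Z}$; for instance, $A^{2}(x)=A(f(x))A(x)$ and $A^{-1}(x)=\left(A(f^{-1}(x))\right)^{-1}$.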
A special class of cocycles is given when the base dynamics $f$ is hyperbolic and the dynamics induced by $A$ on the projective space is dominated by the dynamics of $f$. That is, the rates of contraction and expansion of the cocycle $A$ along an orbit are smaller than the rates of contraction and expansion of $f$.
Such a cocycle is called \textit{fiber-bunched} (see Section \ref{sec: definitions and statements} for the precise definitions).
Many aspects of fiber-bunched cocycles are rather well understood. For instance, it is known that their cohomology classes are completely characterized by the information on periodic points \cite{Ba, Sa}, generically they have simple Lyapunov spectrum \cite{BV, VianaAlmostAllCocyc} and, in the case when $d=2$, Lyapunov exponents are continuous as functions of the cocycle \cite{BBB}. In this short note, still in the context of fiber-bunched cocycles, we address the problem of continuity of the Oseledets subspaces. More precisely, we prove that, restricted to the subset of fiber-bunched elements of the space of $GL(2,\mathbb{R})$-valued cocycles, Oseledets subspaces vary continuously, in measure, with respect to the cocycle. The proof of this result relies on ideas from \cite{BBB} and \cite{BocV}. In a different context, a similar statement was recently obtained in \cite{DK}.
\section{Definitions and statements}\label{sec: definitions and statements}
Let $(M,{\mathsf{d}})$ be a compact metric space and $f: M \to M $ be a
homeomorphism. Given any $x\in M$ and $\varepsilon >0$, we define the
\emph{local stable} and \emph{unstable sets} of $x$ with respect to $f$ by
\begin{align*}
W^s_{\epsilon} (x) &:= \left\{y\in M : {\mathsf{d}}(f^n(x),f^n(y))\leq\epsilon ,\ \forall
n \geq 0\right\}, \\
W^u_{\epsilon } (x) &:= \left\{y\in M : {\mathsf{d}}(f^n(x),f^n(y))\leq\epsilon ,\ \forall
n \leq 0\right\},
\end{align*}
respectively.
Following \cite{AvilaVianaExtLyapInvPrin}, we say that a homeomorphism $f:M\to M$ is \emph{hyperbolic with
local product structure} (or just \emph{hyperbolic} for short)
whenever there exist constants $C_1,\epsilon ,\tau>0$ and $\lambda\in
(0,1)$ such that the following conditions are satisfied:
\begin{itemize}
\item $\; {\mathsf{d}}(f^n(y_1),f^n(y_2)) \leq C_1\lambda^n {\mathsf{d}}(y_1,y_2)$,
$\forall x\in M$, $\forall y_1,y_2 \in W^s_{\epsilon } (x)$, $\forall
n\geq 0$;
\item $\; {\mathsf{d}}(f^{-n}(y_1), f^{-n}(y_2)) \leq C_1\lambda^n
{\mathsf{d}}(y_1,y_2)$, $\forall x\in M$, $\forall y_1,y_2 \in W^u_{\epsilon } (x)$,
$\forall n\geq 0$;
\item If ${\mathsf{d}}(x,y)\leq\tau$, then $W^s_{\epsilon }(x)$ and
$W^u_{\epsilon }(y)$ intersect in a unique point which is denoted by
$[x,y]$ and depends continuously on $x$ and $y$. This property is called \textit{local product structure}.
\end{itemize}
Fix such a hyperbolic homeomorphism and let $A:M \rightarrow GL(d,\mathbb{R})$ be an $r$-H\"older continuous map. This means that there exists $C_2>0$ such that
\begin{displaymath}
\Vert A(x)-A(y)\Vert \leq C_2 {\mathsf{d}}(x,y)^r \; \textrm{for any} \; x, y\in M.
\end{displaymath}
Let us denote by $H^r(M)$ the space of such $r$-H\"older continuous maps. We endow this space with the $r$-H\"older topology which is generated by the norm
\begin{displaymath}
\parallel A \parallel _{r}:= \sup _{x\in M} \parallel A(x)\parallel + \sup _{x\neq y} \dfrac{\parallel A(x)-A(y)\parallel}{{\mathsf{d}}(x,y)^r}.
\end{displaymath}
We say that the cocycle generated by $A$ satisfies the \textit{fiber bunching condition} or that the cocycle is \textit{fiber-bunched} if there exist $C_3>0$ and $\theta <1$ such that
\begin{displaymath}
\Vert A^n(x)\Vert \Vert A^n(x)^{-1}\Vert \lambda ^{nr}\leq C_3 \theta ^n
\end{displaymath}
for every $x\in M$ and $n\geq 0$ where $\lambda$ is the constant given in the definition of hyperbolic homeomorphism.
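For instance (a simple sufficient criterion, not needed in the sequel), a constant cocycle $A(x)\equiv B$ is fiber-bunched whenever $\Vert B\Vert \Vert B^{-1}\Vert \lambda ^{r}<1$: by submultiplicativity, $\Vert A^n(x)\Vert \Vert A^n(x)^{-1}\Vert \lambda ^{nr}\leq \left(\Vert B\Vert \Vert B^{-1}\Vert \lambda ^{r}\right)^{n}$, so one may take $C_3=1$ and $\theta =\Vert B\Vert \Vert B^{-1}\Vert \lambda ^{r}$.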
Let $\mu$ be an ergodic $f$-invariant probability measure on $M$ with local product structure. Roughly speaking, the last property means that $\mu$ is locally equivalent to the product measure $\mu ^s \times \mu ^u$ where $\mu ^s $ and $\mu ^u$ are measures on the local stable and unstable manifolds respectively induced by $\mu$ via the local product structure of $f$. Since we are not going to use explicitly this property we just refer to \cite{BBB} for the precise definition.
It follows from a famous theorem due to Oseledets (see \cite{V2}) that for $\mu$-almost every point $x\in M$ there exist numbers $\lambda _1(x)>\ldots > \lambda _{k}(x)$, and a direct sum decomposition $\mathbb{R}^d=E^{1,A}_{x}\oplus \ldots \oplus E^{k,A}_{x}$ into vector subspaces such that
\begin{displaymath}
A(x)E^{i,A}_{x}=E^{i,A}_{f(x)} \; \textrm{and} \; \lambda _i(x) =\lim _{n\rightarrow \infty} \dfrac{1}{n}\log \parallel A^n(x)v\parallel
\end{displaymath}
for every non-zero $v\in E^{i,A}_{x}$ and $1\leq i \leq k$. Moreover, since our measure $\mu$ is assumed to be ergodic the \textit{Lyapunov exponents} $\lambda _i(x)$ are constant on a full $\mu$-measure subset of $M$ as well as the dimensions of the \textit{Oseledets subspaces} $E^{i,A}_{x}$. Thus, we will denote by $\lambda ^-(A,\mu)=\lambda _k(x)$ and $\lambda ^+(A,\mu)=\lambda _1(x)$ the \textit{extremal Lyapunov exponents} and by $E^{s,A}_{x}=E^{k,A}_{x}$ and $E^{u,A}_{x}=E^{1,A}_{x}$ the \textit{stable and unstable spaces} respectively. It follows by the Sub-Additive Ergodic Theorem of Kingman (see \cite{K} or \cite{V2}) that the extremal Lyapunov exponents are also given by
\begin{equation*}
\lambda ^+(A,\mu)= \lim _{n\rightarrow \infty}\dfrac{1}{n}\log \|A^n(x)\|
\end{equation*}
\begin{flalign}\label{eq: extremal Lyapunov}
& \textrm{and}&
\end{flalign}
\begin{equation*}
\lambda ^-(A,\mu)= \lim _{n\rightarrow \infty}\dfrac{1}{n}\log \| (A^n(x))^{-1}\|^{-1}
\end{equation*}
for $\mu$ almost every point $x \in M$. The objective of this note is to understand, for a fixed base dynamics $f$, how the map $A\mapsto E^{i,A}_{x}$ varies in the case when $d=2$, that is, in the case when the cocycle $A$ takes values in $GL(2,\mathbb{R})$.
Let $d$ be the distance on the projective space $\mathbb{P}(\mathbb{R}^2)$ defined by the angle between two directions. We say that an element $A$ of $H^r(M)$ with $\lambda ^+(A,\mu) >\lambda ^-(A,\mu)$ is a \emph{continuity point for the Oseledets decomposition with respect to the measure $\mu$} if the Oseledets subspaces are continuous, in measure, as functions of the cocycle. More precisely, for any sequence $\lbrace (A_k)_{k\in \mathbb{N}}\rbrace \subset H^r(M)$ converging to $A$ in the $r$-H\"older topology and any $\varepsilon >0$, we have
\begin{equation*}
\mu \Big( \Big\lbrace x\in M; \; d(E^{u,A_k}_{x}, E^{u,A}_{x})<\varepsilon \quad \textrm{and}\quad d(E^{s,A_k}_{x}, E^{s,A}_{x})<\varepsilon \Big\rbrace \Big) \xrightarrow{k\rightarrow \infty}1.
\end{equation*}
Thus, our main result is the following
\begin{theorem}\label{theorem: continuity of oseledets subspaces}
If $A \in H^r(M)$ is a fiber-bunched cocycle with $\lambda ^+(A,\mu)>\lambda ^-(A,\mu)$ then it is a continuity point for the Oseledets decomposition with respect to the measure $\mu$.
\end{theorem}
The hypotheses that $A$ is fiber-bunched and $\mu$ has local product structure are only used to apply the results about continuity of Lyapunov exponents from \cite{BBB}. Thus, more generally, if we have a sequence $\lbrace (A_k)_{k\in \mathbb{N}}\rbrace \subset H^r(M)$ \textit{converging uniformly with holonomies} to $A$ as in the main theorem of \cite{BBB}, then
\begin{equation*}
\mu \Big( \Big\lbrace x\in M; \; d(E^{u,A_k}_{x}, E^{u,A}_{x})<\varepsilon \quad \textrm{and}\quad d(E^{s,A_k}_{x}, E^{s,A}_{x})<\varepsilon \Big\rbrace \Big) \xrightarrow{k\rightarrow \infty}1.
\end{equation*}
Consequently, our result also applies if we restrict ourselves to the space of locally constant cocycles endowed with the uniform topology.
\section{Proof of the theorem}
Let us consider the \textit{projective cocycle} $F_{A}:M\times \mathbb{P}(\mathbb{R}^2)\to M\times \mathbb{P}(\mathbb{R}^2)$ associated to $A$ and $f$ which is given by
\begin{displaymath}
F_{A} (x,v)=(f(x),{\mathbb{P}} A(x)v)
\end{displaymath}
where ${\mathbb{P}} A$ denotes the action of $A$ on the projective space. We say that an $F_A$-invariant measure $m$ on $M\times \mathbb{P}(\mathbb{R}^2)$ \textit{projects} to $\mu$ if $\pi _{\ast}m=\mu$ where $\pi :M\times \mathbb{P}(\mathbb{R}^2) \to M$ is the canonical projection on the first coordinate. Given a non-zero element $v\in \mathbb{R}^2$ we are going to use the same notation to denote its equivalence class in ${\mathbb{P}}(\mathbb{R}^2)$.
Let $\mathbb{R}^2=E^{s,A}_{x}\oplus E^{u,A}_{x}$ be the Oseledets decomposition associated to $A$ at the point $x\in M$. Consider also
\begin{displaymath}
m^s=\int _{M}\delta _{(x,E^{s,A}_{x})} d\mu(x)
\end{displaymath}
and
\begin{displaymath}
m^u=\int _{M}\delta _{(x,E^{u,A}_{x})} d\mu(x)
\end{displaymath}
which are $F_A$-invariant probability measures on $M\times \mathbb{P}(\mathbb{R}^2)$ projecting to $\mu$. Moreover, by the Birkhoff ergodic theorem and \eqref{eq: extremal Lyapunov} we have that
\begin{displaymath}
\lambda ^-(A,\mu) =\int _{M\times \mathbb{P}(\mathbb{R}^2)} \varphi _{A}(x,v) dm^s (x, v)
\end{displaymath}
and
\begin{displaymath}
\lambda ^+(A,\mu) =\int _{M \times \mathbb{P}(\mathbb{R}^2)} \varphi _{A}(x,v) dm^u (x,v)
\end{displaymath}
where $\varphi _{A}: M\times \mathbb{P}(\mathbb{R}^2)\rightarrow \mathbb{R}$ is given by
\begin{equation*}
\varphi _{A} (x,v)= \log \dfrac{\parallel A(x)v\parallel}{\parallel v\parallel}.
\end{equation*}
By the (non-uniform) hyperbolicity of $(A,\mu)$ we have the following.
\begin{lemma}\label{lemma:convex combination}
Let $m$ be a probability measure on $M\times \mathbb{P}(\mathbb{R}^2)$ that projects down to $\mu$. Then, $m$ is $F_{A}$-invariant if and only if it is a convex combination of $m^s$ and $m^u$ for some $f$-invariant functions $\alpha ,\beta :M\to [0,1]$ such that $\alpha(x)+\beta (x)=1$ for every $x\in M$.
\end{lemma}
\begin{proof}
One implication is trivial. For the converse one only has to note that every compact subset of $\mathbb{P}(\mathbb{R}^2)$ disjoint from $\lbrace E^u, E^s\rbrace$ accumulates on $E^u$ in the future and on $E^s$ in the past.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem: continuity of oseledets subspaces}]
Suppose that $A$ is a fiber-bunched cocycle such that $\lambda ^+(A,\mu)>\lambda ^-(A,\mu)$. As the subset of fiber-bunched elements of $H^r(M)$ is open we may assume without loss of generality that $A_k$ is fiber-bunched for every $k\in \mathbb{N}$. Moreover, since the Lyapunov exponents depend continuously on the cocycle $A$ (see Theorem 1.1 from \cite{BBB}) and $\lambda ^+(A,\mu)>\lambda ^-(A,\mu)$ we may also assume that $\lambda ^+(A_k,\mu)>\lambda ^-(A_k,\mu)$ for every $k\in \mathbb{N}$. We will prove just the assertion about the unstable spaces, that is, that $\mu \left( \left\lbrace x\in M; \; d(E^{u,A_k}_{x}, E^{u,A}_{x})<\delta \right\rbrace \right) \xrightarrow{k\rightarrow \infty}1$. The case of the stable spaces is analogous.
For each $k\in \mathbb{N}$, let us consider the measure
\begin{displaymath}
m_k=\int _{M} \delta _{(x,E^{u,A_k}_{x})} d\mu(x)
\end{displaymath}
and let $m^u$ be the measure given by
\begin{displaymath}
m^u=\int _{M} \delta _{(x,E^{u,A}_{x})} d\mu(x).
\end{displaymath}
These are $F_{A_k}$ and $F_A$-invariant probability measures on $M\times {\mathbb{P}}(\mathbb{R}^2)$ respectively, projecting to $\mu$. Moreover, $m_k\xrightarrow{k\rightarrow \infty} m^u$. Indeed, let $(m_{k_j})_{j\in \mathbb{N}}$ be a convergent subsequence of $(m_{k})_{k\in \mathbb{N}}$ and suppose that it converges to $\eta$. Since for each $j\in \mathbb{N}$ the measure $m_{k_j}$ is $F_{A_{k_j}}$-invariant and projects to $\mu$ it follows that $\eta$ is an $F_{A}$-invariant measure projecting to $\mu$. Moreover, since
\begin{displaymath}
\lambda ^+(A_{k_j},\mu) \xrightarrow{j\rightarrow \infty}\lambda ^+(A, \mu)
\end{displaymath}
since the Lyapunov exponents are continuous as functions of the cocycle (see \cite{BBB}), and
\begin{displaymath}
\lambda ^+(A_{k_j},\mu)=\int _{M\times \mathbb{P}(\mathbb{R}^2)} \varphi _{A_{k_j}}dm_{k_j} \xrightarrow{j\rightarrow \infty} \int _{M \times \mathbb{P}(\mathbb{R}^2)} \varphi _{A}d\eta
\end{displaymath}
we get that
\begin{displaymath}
\lambda ^+(A, \mu) = \int _{M \times \mathbb{P}(\mathbb{R}^2)} \varphi _{A}d\eta.
\end{displaymath}
Thus, invoking Lemma \ref{lemma:convex combination} and using the fact that $\mu$ is ergodic it follows that $\eta=m^u$. Indeed, otherwise we would have $\eta=\alpha m^s + \beta m^u$ with $\alpha >0$ and consequently
$$\int _{M \times \mathbb{P}(\mathbb{R}^2)} \varphi _{A}d\eta= \alpha \lambda ^-(A, \mu) + \beta \lambda ^+(A, \mu) <\lambda ^+(A, \mu).$$
Therefore, $m_k\xrightarrow{k\rightarrow \infty} m^u$ as claimed.
Let $g:M \rightarrow \mathbb{P}(\mathbb{R}^2)$ be the measurable map given by
\begin{displaymath}
g(x)=E^{u, A}_{x}.
\end{displaymath}
Note that its graph has full $m^u$-measure. By Lusin's Theorem, given $\varepsilon >0$ there exists a compact set $K\subset M$ such that the restriction $g_K$ of $g$ to $K$ is continuous and $\mu (K)>1-\varepsilon$. Now, given ${\mathsf{d}}elta >0$, let $U\subset M\times \mathbb{P}(\mathbb{R}^2)$ be an open neighborhood of the graph of $g_K$ such that
\begin{displaymath}
U\cap (K\times \mathbb{P}(\mathbb{R}^2))\subset U_{\delta}
\end{displaymath}
where
\begin{displaymath}
U_{\delta}:=\lbrace (x,v)\in K\times \mathbb{P}(\mathbb{R}^2); \; d(v, g(x))<\delta \rbrace .
\end{displaymath}
By the choice of the measures $m_k$,
\begin{equation}\label{eq: auxiliary eq 1 theo 2}
m_k(U_{\delta})= \mu (\lbrace x\in K; \; d(E^{u,A_k}_{x}, E^{u,A}_{x})<\delta \rbrace ).
\end{equation}
Now, as $m_k\xrightarrow{k\rightarrow \infty} m^u$ it follows that $\liminf m_k(U)\geq m^u(U)> 1-\varepsilon$. On the other hand, as $m_k(K\times \mathbb{P}(\mathbb{R}^2))=\mu(K)> 1-\varepsilon$ for every $k\in \mathbb{N}$, it follows that
\begin{equation}\label{eq: auxiliary eq 2 theo 2}
m_k(U_{\delta})\geq m_k(U \cap (K\times \mathbb{P}(\mathbb{R}^2)))\geq 1-2\varepsilon
\end{equation}
for every $k$ large enough. Thus, combining \eqref{eq: auxiliary eq 1 theo 2} and \eqref{eq: auxiliary eq 2 theo 2}, we get that $\mu(\lbrace x\in M; \; d(E^{u,A_k}_{x}, E^{u,A}_{x})<\delta \rbrace)\geq 1-2\varepsilon$ for every $k$ large enough, completing the proof of Theorem \ref{theorem: continuity of oseledets subspaces}.
\end{proof}
\end{document}
|
\begin{document}
\title{Device-independent quantum key distribution\\
secure against collective attacks}
\author{Stefano Pironio$^{1}$\thanks{[email protected]}, Antonio Ac\'{\i}n$^{2,3}$, Nicolas Brunner$^4$,\\ Nicolas Gisin$^1$, Serge Massar$^5$, Valerio Scarani$^6$\\[0.5em]
$^1$ Group of Applied Physics, University of Geneva\\
$^2$ ICFO-Institut de Ciencies Fotoniques, 08860 Castelldefels, Spain\\
$^3$ ICREA-Instituci\'o Catalana de Recerca i Estudis
Avan\c cats\\
Pg. Lluis Companys 23, 08010 Barcelona, Spain\\
$^4$ H.H. Wills Physics Laboratory, University of Bristol\\
$^5$ Laboratoire d'Information Quantique, Universit\'{e} Libre de Bruxelles\\
C.P 225, Boulevard du Triomphe, B-1050 Bruxelles, Belgium \\
$^6$ Centre for Quantum Technologies and \\
Department of Physics,
National University of Singapore }
\date{}
\maketitle
\begin{abstract}
Device-independent quantum key distribution (DIQKD) represents a
relaxation of the security assumptions made in usual quantum key
distribution (QKD). As in usual QKD, the security of DIQKD follows
from the laws of quantum physics, but contrary to usual QKD, it
does not rely on any assumptions about the internal working of the
quantum devices used in the protocol. We present here in detail
the security proof for a DIQKD protocol introduced in [Phys. Rev.
Lett. 98, 230501 (2008)]. This proof exploits the full structure
of quantum theory (as opposed to other proofs that exploit the
no-signalling principle only), but only holds against collective
attacks, where the eavesdropper is assumed to act on the quantum
systems of the honest parties independently and identically at
each round of the protocol (although she can act coherently on her
systems at any time). The security of any DIQKD protocol
necessarily relies on the violation of a Bell inequality. We
discuss the issue of loopholes in Bell experiments in this
context.
\end{abstract}
\pagebreak
\section{Introduction}
Device-independent quantum key distribution (DIQKD) protocols
aim at generating a secret key between two parties in a
provably secure way without making assumptions about the internal
working of the quantum devices used in the protocol. In DIQKD, the
quantum apparatuses are seen as black boxes that produce classical
outputs, possibly depending on the value of some classical inputs (see Fig.~\ref{figdiqkd}).
These apparatuses are thought to implement a quantum process, but
no hypotheses in terms of Hilbert spaces, operators, or states are
made on the actual quantum process that generates the outputs
given the inputs.
DIQKD can be thought of by contrasting it with usual quantum key
distribution (QKD). In its entanglement-based version
\cite{BBM92}, traditional QKD involves two parties, Alice and Bob,
who receive entangled particles emitted from a common source and
who measure each of them in some chosen bases. The measurement
outcomes are kept secret and form the raw key. As the source of
particles is situated between Alice's and Bob's secure locations,
it is not trusted by the parties, but assumed to be under the
control of an eavesdropper Eve. The eavesdropper could for
instance have replaced the original source by one who produces
states that give her useful information about Alice's and Bob's
measurement outcomes. However, by performing measurements in
well-chosen bases on a random subset of their particles and by
comparing their results, Alice and Bob can estimate the quantum
states that they receive from the eavesdropper and decide whether
a secret key can be extracted from them.
In a device-independent analysis of this scenario, Alice and Bob
would not only distrust the source of particles, but they would
also distrust their measuring apparatuses. The measurement
directions may for instance drift with time due to imperfections
in the apparatuses, or the entire apparatuses may be untrusted
because they have been fabricated by a malicious party. Alice and
Bob have therefore no guarantee that the actual measurement bases
correspond to the expected ones. In fact they cannot even make
assumptions about the dimension of the Hilbert space in which they
are defined. In DIQKD, Alice and Bob have thus to bound Eve's
information by looking for the worst combination of states and
measurements (in Hilbert spaces of arbitrary dimension) that are
compatible with the observed classical input-output relations. In
contrast, in usual QKD Alice and Bob have a perfect knowledge of
the measurements that are carried out and of the Hilbert
space dimension of the quantum state they measure, and they exploit
this information to bound the eavesdropper's information when they
look for the worst possible states compatible with their observed
data.
\begin{figure}[h]\begin{center}
\includegraphics[scale=0.65]{diqkd2.eps}
\caption{Schematic representation of the DIQKD scenario. Alice and Bob see their quantum devices as black boxes producing classical outputs, $a$ and $b$, as a function of classical inputs $X$ and $Y$. From the observed statistics, and without making any assumption on the internal working of the devices, they should be able to conclude whether they can establish a secret key secure against a quantum eavesdropper.
}\label{figdiqkd}
\end{center}\end{figure}
\subsection{Why DIQKD?}
DIQKD represents a relaxation of the security assumptions made in
usual QKD. In this sense, it fits in the continuity of a series of
works that aim to design cryptographic protocols secure against
more and more powerful eavesdroppers.
From a fundamental point of view, DIQKD shows that the security of
a cryptographic scheme is possible based on a minimal set of
fundamental assumptions. It only requires that:
\begin{itemize}
\item Alice's and Bob's physical locations are secure, i.e., no unwanted information can leak out to the outside;
\item they have a trusted random number generator,
possibly quantum, producing a classical random output;
\item they have trusted classical devices (e.g., memories and computing devices) to store and process the classical data generated by their quantum
apparatuses;
\item they share an authenticated, but otherwise public, classical channel (this hypothesis can be ensured if Alice and Bob start off with a small shared secret key);
\item quantum physics is correct.
\end{itemize}
Other than these prerequisites, shared by all QKD protocols, no
others are necessary. In addition to these essential requirements,
usual QKD protocols assume that Alice and Bob have some knowledge
about their quantum devices.
From a practical point of view, DIQKD resolves some of the
drawbacks of usual QKD. Usual security proofs of QKD make several
assumptions about the quantum systems, such as their Hilbert space
dimension. These assumptions are often critical: as we show below,
the security of the BB84 protocol, for instance, is entirely
compromised if Alice and Bob share four-dimensional systems
instead of sharing qubits as usually assumed. The problem is that
real-life implementation of QKD protocols may differ from the
ideal design. For instance, the quantum apparatuses may be noisy
or there may be uncontrolled side channels. A possible, but
challenging, way to address these problems would be to
characterize very precisely the quantum devices and try to adapt
the security proof to the actual implementation of the protocol.
The concept of device-independent QKD, on the other hand, applies
through its remarkable generality in a simple way to these
situations as it allows us to ignore all implementation details.
DIQKD makes it also easier to test the components of a QKD
protocol. Since its security relies on the observed classical data
generated by the devices, errors or deterioration with time of the
internal working of the quantum devices, which could be exploited
by an eavesdropper, are easily monitored and accounted for in the
key rate.
A third practical motivation for DIQKD is that it covers the
adverse scenario where the quantum devices are not trusted. For
instance, someone who had access to the quantum apparatuses at
some time might have hacked or modified their mechanism. But if
the devices still produce proper classical input-output relations,
which is all what is required, this is irrelevant to the security
of the scheme. To some extent DIQKD overturns the adage that the
security of a cryptographic system is only as good as its physical
security. Of course an eavesdropper who had access to the quantum
devices might have modified their working so that they directly
send her information about the measurement settings and outcomes.
But this goes against the basic requirement that Alice's and Bob's
locations should be completely secure against Eve's scrutiny -- a
necessary requirement for cryptography to have any meaning. It is
modulo this assumption that the eavesdropper is
free to tamper with their devices.
\subsection{Usual QKD protocols are not secure in the device-independent scenario}
A consequence of adopting a more general security model is that
traditional QKD protocols may no longer be secure, as illustrated
by the following example.
Consider the entanglement-based version of BB84~\cite{BB84}. Alice
has a measuring device that takes a classical input $X\in\{0,1\}$
(her choice of measurement setting) and that produces an output
$a\in\{0,1\}$ (the measurement outcome). Similarly, Bob's device
accepts inputs $Y\in\{0,1\}$ and produces outputs $b\in\{0,1\}$.
Both measuring devices act on a two-dimensional subspace of the
incoming particles (e.g., the polarization of a photon). The
setting ``0'' is associated to the measurement of $\sigma_x$,
while the setting ``1'' corresponds to $\sigma_z$. Suppose that in
an ideal, noise-free situation they observe the following
correlations:
\begin{eqnarray}\label{bb84}
&&P(ab|00)=P(ab|11)=1/2\quad\text{if } a=b\nonumber\\
&&P(ab|01)=P(ab|10)=1/4\quad \text{for all }a,b\,,
\end{eqnarray}
where $P(ab|XY)$ is the probability to observe the pair of
outcomes $a,b$ given that they have made measurements $X,Y$. That
is, if Alice and Bob perform measurements in the same bases, they
always get perfectly correlated outcomes; while if they measure in
different bases, they get completely uncorrelated random outcomes.
In terms of the measurement operators $\sigma_x$ and $\sigma_z$ and
the two-qubit state $|\psi\rangle\in
\mathbb{C}^2\otimes\mathbb{C}^2$ that characterizes their incoming
particles, the above correlations can be rewritten as
\begin{eqnarray}\label{bb84state}
\langle \psi |\sigma_x\otimes\sigma_x|\psi\rangle = \langle \psi |\sigma_z\otimes\sigma_z|\psi\rangle =1\nonumber\\
\langle \psi |\sigma_x\otimes\sigma_z|\psi\rangle =\langle \psi |\sigma_z\otimes\sigma_x|\psi\rangle =0\,.
\end{eqnarray}
The only state compatible with this set of equations is the
maximally entangled state $(|00\rangle+|11\rangle)/\sqrt 2$. Alice
and Bob therefore rightly conclude that they can safely extract a
secret key from their measurement data.
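For instance, one checks directly that the maximally entangled state satisfies \eqref{bb84state}: writing $|\psi\rangle=(|00\rangle+|11\rangle)/\sqrt 2$ in the $\sigma_z$ basis, $\sigma_z\otimes\sigma_z|\psi\rangle=\sigma_x\otimes\sigma_x|\psi\rangle=|\psi\rangle$, while $\sigma_x\otimes\sigma_z|\psi\rangle=(|10\rangle-|01\rangle)/\sqrt 2$ is orthogonal to $|\psi\rangle$.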
In the device-independent scenario, however, Alice and Bob can no
longer assume that the measurement settings ``0'' and ``1''
correspond to the operators $\sigma_x$ and $\sigma_z$, nor that
they act on the two-qubit space $\mathbb{C}^2\otimes\mathbb{C}^2$.
It is then not difficult to find separable (hence insecure) states
that reproduce the measurement data (\ref{bb84}) for appropriate
choice of measurements \cite{Magniez,AGM}. An example is given by
the $\mathbb{C}^4\otimes\mathbb{C}^4$ state
\begin{equation}\label{bb84state2}
\rho_{AB}=\frac{1}{4}\sum_{z_0,z_1=0}^1
\left(|z_0\,z_1\rangle\langle
z_0\,z_1|\right)_A\otimes\left(|z_0\,z_1\rangle\langle
z_0\,z_1|\right)_B ,
\end{equation}
where the vectors $\ket{0}$ and $\ket{1}$ define the $z$ basis, and by the measurements
$\sigma_z\otimes I$ for the setting ``0'' and $I\otimes \sigma_z$
for the setting ``1''. Clearly this combination of state and
measurements reproduces the correlations (\ref{bb84}): Alice and
Bob find completely correlated outcomes when they use the same
measurement settings, and completely uncorrelated ones otherwise.
In contrast to the previous situation, however, Eve can now have a
perfect copy of the local states of Alice and Bob, for instance if
they share the tripartite state
\begin{equation}
\rho_{ABE}=\frac{1}{4}\sum_{z_0,z_1=0}^1 \left(|z_0\,z_1\rangle\langle
z_0\,z_1|\right)_A\otimes\left(|z_0\,z_1\rangle\langle
z_0\,z_1|\right)_B\otimes \left(|z_0\,z_1\rangle\langle
z_0\,z_1|\right)_E\,.
\end{equation}
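Note that tracing out Eve's system in this state recovers the state \eqref{bb84state2} shared by Alice and Bob, while a measurement of Eve's register in the $z$ basis reveals the pair $(z_0,z_1)$, i.e., for the measurements given above, Alice's (and Bob's) outcome for either choice of setting.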
This example illustrates the fact that in the usual security
analysis of BB84 it is crucial to assume that Alice and Bob
measurements act on a two-dimensional space, a condition difficult
to check experimentally. If we relax this assumption, the security
is no longer guaranteed.
\subsection{How can DIQKD possibly be secure?}
Understanding better why usual QKD protocols are not secure in the
device-independent scenario may help us identify physical
principles on which to base the security of a device-independent
scheme. A first observation is that the correlations (\ref{bb84})
produced in BB84 are classical: we don't need to invoke quantum
physics at all to reproduce them. They can simply be generated by
a set of classical random data shared by Alice's and Bob's systems
--- in essence this is what the separable state (\ref{bb84state2})
achieves. Formally, they can be written in the form
\begin{equation}\label{classcorr}
P(ab|XY) = \sum_{\lambda} P(\lambda)\, D(a|X,\lambda)\,D(b|Y,\lambda)
\end{equation}
where $\lambda$ is a classical variable with probability
distribution $P(\lambda)$ shared by Alice's and Bob's devices and
$D(a|X,\lambda)$ is a function that completely specifies Alice's
outputs once the input $X$ and $\lambda$ are given (and similarly
for $D(b|Y,\lambda)$ ). An eavesdropper might of course have a
copy of $\lambda$, which would give her full information about
Alice's and Bob's outputs once the inputs are announced.
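Concretely, for the correlations (\ref{bb84}) one may take $\lambda=(z_0,z_1)$ uniformly distributed over $\{0,1\}^2$, with $D(a|X,\lambda)=\delta_{a,z_X}$ and $D(b|Y,\lambda)=\delta_{b,z_Y}$: if $X=Y$ the outputs always coincide and each value occurs with probability $1/2$, while if $X\neq Y$ they are independent uniform bits, reproducing (\ref{bb84}); an eavesdropper holding a copy of $(z_0,z_1)$ thus knows both outputs.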
This trivial strategy is not available to the eavesdropper,
however, if the outputs of Alice's and Bob's apparatuses are
correlated in a non-local way, in the sense that they violate a
Bell inequality \cite{Bell}. Indeed, non-local correlations are
defined precisely as those that cannot be written in the form
(\ref{classcorr}). The violation of a Bell inequality is thus a
necessary requirement for the security of QKD protocol in the
device-independent scenario. This condition is clearly not
satisfied by BB84.
More than a necessary condition for security, non-locality is the
physical principle on which all device-independent security proofs
are based. This follows from the fact that non-local correlations
require for their generation entangled states, whose measurement
statistics cannot be known completely to an eavesdropper. To put
it in another way, Bell inequalities are the only entanglement
witnesses that are device-independent, in the sense that they do
not depend on the physical details underlying Alice's and Bob's
apparatuses.
\subsection{Earlier works and relation to QKD against no-signalling eavesdroppers}
The intuition that the security of a QKD scheme could be based on
the violation of a Bell inequality was at the origin of Ekert's
1991 celebrated proposal \cite{Ekert}. The crucial role that
non-local correlations play in a device-independent scenario was
also implicitly recognized by Mayers and Yao \cite{Mayers}.
Quantitative progress, however, has been possible only recently
thanks to the pioneering work of Barrett, Hardy, and Kent
\cite{BHK}. Barrett, Hardy, and Kent proved the security of a QKD
scheme against general attacks by a supra-quantum eavesdropper
that is limited by the no-signalling principle only (rather than
the full quantum formalism). This is possible because once the
no-signaling condition is assumed, nonlocal correlations satisfy a
monogamy condition analogous to that of entanglement in quantum
theory \cite{nlinfo}. Since quantum theory satisfies the
no-signalling condition, security against a no-signalling
eavesdropper implies security in the device-independent scenario.
Barrett, Hardy, and Kent's result is a proof of principle as their
protocol requires Alice and Bob to have a noise-free quantum
channel and generates a single shared secret bit (but makes a
large number of uses of the channel). A slight modification of
their protocol based on the results of \cite{BKP} enables the
generation of a secret key of $\log_2 d$ bits if Alice and Bob have
a channel that distributes $d$-dimensional systems. Barrett,
Hardy, and Kent's work was extended to noisy situations and
non-vanishing key rates in \cite{AGM,AMP,Scarani}, though these
works only considered security against individual attacks, where
the eavesdropper is restricted to act independently on each of
Alice's and Bob's systems. Masanes {\sl et al.} introduced a
security proof valid against arbitrary attacks by an eavesdropper
that is not able to store non-classical information
\cite{masanes-winter}. This result was improved by Masanes
\cite{masanes} who proved security in the universally-composable
sense, the strongest notion of security. Although the last two
results take into account eavesdropping strategies that act
collectively on systems corresponding to different uses of the
devices, they require the no-signalling condition to hold not only
between the devices on Alice's and on Bob's side, but also between
all individual uses of the quantum device of one party. This
condition can be enforced, although not in a practical manner, by
having the parties use in parallel $N$ devices that are space-like
separated from each other, rather than using sequentially a single
device $N$ times.
There are fundamental motivations to study the security of QKD
protocols against no-signalling eavesdroppers (NSQKD); this
improves for instance our understanding of the relationship
between information theory and physical theories. From a practical
point of view, it is also interesting to develop cryptographic
schemes that rely on physical principles independent from quantum
theory and thus that could be guaranteed secure even if quantum
theory were ever to fail.
However, given that for the moment we have no good reason (apart possibly from theoretical ones) to doubt the validity of quantum theory, nor evidence that a hypothetical breakdown of quantum theory would signify the immediate end of quantum key distribution\footnote{For instance, quantum physics might only break down at an energy scale that would remain inaccessible to human control for ages.}, it is advantageous to exploit the full quantum formalism in the device-independent context. First of all, as the entire quantum formalism is more constraining than the no-signalling principle alone, we expect to derive higher key rates and better noise resistance in the quantum case (for instance, the proof of general security given in \cite{masanes} has a key rate and a noise resistance that are not practical when applied to quantum correlations). A second advantage is that, from a technical point of view, we can exploit the entire theoretical framework of quantum theory in proving security -- as opposed to a single principle. We may, in particular, use existing results such as de Finetti theorems, efficient privacy amplification schemes against quantum adversaries, etc. (but might also have to derive new technical results that may find applications in other contexts).
\subsection{Content and structure of the paper}
Here we prove the security of a modified version of the Ekert
protocol \cite{Ekert}, proposed in Ref. \cite{AMP}. Our proof,
already introduced in \cite{aci07}, exploits the full quantum
formalism, but is restricted to collective attacks, where Eve is
assumed to act independently and identically at each use of the
devices, though she can act coherently at any time on her own
systems. In the usual security model, security against collective
attacks implies security against the most general type of attacks
\cite{coll}. It is an open question whether this is also true in
the device-independent scenario. In the protocol that we analyze,
Alice and Bob bound Eve's information by estimating the violation
of the Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{chsh}.
Our main result is a tight bound on the Holevo information between
Alice and Eve as a function of the amount of violation of the CHSH
inequality. The protocol that we use, our security assumptions,
and our main result are presented in Section~2. In particular, we
present in Subsection~2.4 all the details of our security proof,
which was only sketched in~\cite{aci07}.
It is crucial for the security of DIQKD that Alice's and Bob's
outcomes genuinely violate a Bell inequality. All experimental
tests of non-locality that have been made so far, however, are
subject to at least one of several loopholes and therefore admit
in principle a local description. We discuss in Section~3 the
issue of loopholes in Bell experiments from the perspective of~
DIQKD.
Finally, we conclude with a discussion of our results and some
open questions in Section~4.
\section{Results}
\subsection{The protocol}
The protocol that we study is a modification of the Ekert 1991
protocol \cite{Ekert} proposed in Ref.~\cite{AMP}. Alice and Bob
share a quantum channel consisting of a source that emits pairs of
particles in an entangled state $\rho_{AB}$. Alice can choose to
apply to her particle one out of three possible measurements
$A_0$, $A_1$ and $A_2$, and Bob one out of two measurements $B_1$
and $B_2$. All measurements have binary outcomes labeled by
$a_i,b_j\in\{+1,-1\}$.
The raw key is extracted from the pair $\{A_0,B_1\}$. The quantum
bit error rate (QBER) is defined as $Q=P(a\neq b|01)$.
This parameter estimates the amount of correlations between
Alice's and Bob's symbols and thus quantifies the amount of
classical communication needed for error correction. The
measurements $A_{1}$, $A_2$, $B_1$, and $B_2$ are used on a subset
of the particles to estimate the CHSH polynomial
\begin{equation}\label{CHSHeq}
{S}=\moy{a_1b_1}+\moy{a_1b_2}+ \moy{a_2b_1}- \moy{a_2b_2}\,,
\end{equation}
where the correlator $\moy{a_ib_j}$ is defined as
$P\,(a=b|ij)-P\,(a\neq b|ij)$. The CHSH polynomial is used by
Alice and Bob to bound Eve's information and, thus, governs the
privacy amplification process. We note that there is no a priori
relation between the value of ${S}$ and the value of $Q$: these
are two parameters which are available to estimate Eve's
information.
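As an illustration of how these two parameters would be estimated in practice, the following Python sketch computes $S$ and $Q$ from a table of observed conditional distributions $P(a,b|ij)$. The dictionary layout and function names are ours and purely illustrative; they are not part of the protocol specification.
\begin{verbatim}
import numpy as np

def chsh_and_qber(p):
    """Estimate S and Q from conditional distributions
    p[(i, j)][(a, b)] = P(a, b | A_i, B_j), with a, b in {+1, -1}."""
    def correlator(i, j):
        return sum(a * b * p[(i, j)][(a, b)]
                   for a in (+1, -1) for b in (+1, -1))
    S = (correlator(1, 1) + correlator(1, 2)
         + correlator(2, 1) - correlator(2, 2))
    Q = sum(p[(0, 1)][(a, b)]
            for a in (+1, -1) for b in (+1, -1) if a != b)
    return S, Q

# Ideal correlations with uniform marginals: P(a,b|ij) = (1 + a*b*E_ij)/4
E = {(1, 1): 1/np.sqrt(2), (1, 2): 1/np.sqrt(2),
     (2, 1): 1/np.sqrt(2), (2, 2): -1/np.sqrt(2), (0, 1): 1.0}
ideal = {ij: {(a, b): (1 + a * b * e) / 4
              for a in (+1, -1) for b in (+1, -1)} for ij, e in E.items()}
print(chsh_and_qber(ideal))   # (2.828..., 0.0)
\end{verbatim}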
Without loss of generality, we suppose that the marginals are
random for each measurement, i.e., $\moy{a_i}=\moy{b_j}=0$ for all
$i$ and $j$. Were this not the case, Alice and Bob could achieve
it a posteriori through public one-way communication, by agreeing
on a randomly chosen half of their bits to be flipped. This
operation would not change the value of $Q$ and $S$ and would be
known to Eve.
A particular implementation of our protocol with qubits is given
for instance by the noisy two-qubit state $\rho_{AB}= p
\ket{\Phi^+}\bra{\Phi^+}+(1-p)I/4$ and by the qubit measurements
$A_0=B_1=\sigma_z$, $B_2=\sigma_x$, $A_1=(\sigma_z+\sigma_x)/\sqrt{2}$
and $A_2=(\sigma_z-\sigma_x)/\sqrt{2}$, which maximize the CHSH
polynomial for the state $\rho_{AB}$. The state $\rho_{AB}$ corresponds to a
two-qubit Werner state and arises, for instance, from the state
$\ket{\Phi^+}=1/\sqrt{2}(\ket{00}+\ket{11})$ after going through a
depolarizing channel, or through a phase-covariant cloner. The resulting correlations
satisfy $S=2\sqrt{2} p$ and $Q=1/2-p/2$, i.e.,
$S=2\sqrt{2}(1-2{Q})$. Though these correlations can be generated
in the way that we just described, it is important to stress that
Alice and Bob do not need to assume that they perform the above
measurements, nor that their quantum systems are of dimension 2,
when they bound Eve's information.
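For concreteness, the following short numerical check (a sketch assuming NumPy; the variable names are ours) reproduces the relations $S=2\sqrt{2}\,p$ and $Q=(1-p)/2$ for this particular qubit implementation.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

p = 0.9
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = p * np.outer(phi_plus, phi_plus.conj()) + (1 - p) * np.eye(4) / 4

A0 = B1 = sz
B2 = sx
A1 = (sz + sx) / np.sqrt(2)
A2 = (sz - sx) / np.sqrt(2)

def corr(A, B):          # <A (x) B> on rho
    return np.real(np.trace(rho @ np.kron(A, B)))

S = corr(A1, B1) + corr(A1, B2) + corr(A2, B1) - corr(A2, B2)
Q = (1 - corr(A0, B1)) / 2    # P(a != b) for +/-1 outcomes, uniform marginals
print(S, 2 * np.sqrt(2) * p)  # both ~2.5456
print(Q, (1 - p) / 2)         # both 0.05
\end{verbatim}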
In the case of classically correlated data (corresponding to
$p\leq 1/\sqrt{2}$ for the above correlations), the maximum of the
CHSH polynomial (\ref{CHSHeq}) is 2, which defines the well-known
CHSH Bell inequality ${S}\leq 2$. Secure DIQKD is not possible if
the observed value of ${S}$ is below this classical limit, since
in this case there exists a trivial attack for Eve that gives her
complete information, as discussed in Subsection~1.3. On the other
hand, at the point of maximal quantum violation ${S}=2\sqrt 2$
(corresponding to $p=1$ for the above correlations), Eve's
information is zero. This follows from the work of
Tsirelson~\cite{cir80}, who showed that any quantum realization of
this violation is equivalent to the case where Alice and Bob
measure a two-qubit maximally entangled state. The main ingredient
in the security proof of our DIQKD protocol is a lower bound on
Eve's information as a function of the CHSH value. This bound
allows us to interpolate between the two extreme cases of zero and
maximal quantum violation and yields provable security for
sufficiently large violations.
\subsection{Eavesdropping strategies}
\subsection*{Most general attacks}
In the device-independent scenario, Eve is assumed not only to
control the source (as in usual entanglement-based QKD), but also
to have fabricated Alice's and Bob's measuring devices. The only
data available to Alice and Bob to bound Eve's knowledge is the
observed relation between the inputs and outputs, without
any assumption on the type of quantum measurements and systems
used for their generation.
In complete generality, we may describe this situation as follows.
Alice, Bob, and Eve share a state $\ket{\Psi}_{ABE}$ in
$H_A^{\otimes n}\otimes H_B^{\otimes n} \otimes H_E$, where $n$ is
the number of bits of the raw key. The dimension $d$ of Alice's and
Bob's Hilbert spaces $H_A=H_B=\mathbb{C}^d$ is unknown to them and
fixed by Eve. The measurement $M_k$ yielding the $k^\mathrm{th}$
outcome of Alice is defined on the $k^\mathrm{th}$ subsystem of
Alice and chosen by Eve. This measurement may depend on the
input $A_{j_k}$ chosen by Alice at step $k$ and on the
value $c_k$ of a classical register stored in the device, that is,
$M_k=M_k(A_{j_k},c_k)$. The classical memory $c_k$ can in
particular store information about all previous inputs and
outputs. Note that the quantum device may also have a quantum memory,
but this quantum memory at step $k$ of the protocol can be seen as
part of Alice's state defined in $H_A^{k}$. The value of this
quantum memory can be passed internally from step $k$ of the
protocol to step $k+1$ by teleporting it from $H_A^k$ to
$H_A^{k+1}$ using the classical memory $c_k$. The situation is
similar for Bob.
\subsubsection*{Collective attacks}
In this paper, we focus on collective attacks where Eve applies
the same attack to each system of Alice and Bob. Specifically, we
assume that the total state shared by the three parties has the
product form $\ket{\Psi_{ABE}}=\ket{\psi_{ABE}}^{\otimes n}$ and
that the measurements are a function of the current input
only, e.g., for Alice $M_k=M(A_{j_k})$. We thus assume that the
devices are memoryless and behave identically and independently at
each step of the protocol. From now on, we simply write the
measurement $M(A_j)$ as $A_j$.
For collective attacks, the asymptotic secret key rate $r$ in the limit of a key of infinite size under one-way
classical postprocessing from Bob to Alice is lower-bounded by the
Devetak-Winter rate~\cite{DW}, \begin{equation} \label{rate} r\, \geq\,
r_{DW}\,= \,I(A_0:B_1)\,-\, \chi(B_1:E)\,, \end{equation} which is the
difference between the mutual information between Alice and Bob,
\begin{equation} I(A_0:B_1)=H(A_0)+H(B_1)-H(A_0,B_1) \end{equation} and the Holevo
quantity between Eve and Bob \begin{equation}\chi(B_1:E)=S(\rho_E)-
\demi\sum_{b_1=\pm 1}S(\rho_{E|b_1})\,. \end{equation} Here $H$ and $S$
denote the standard Shannon and von Neumann entropies,
$\rho_E=\linebreak[4]\text{Tr}_{AB}\ket{\psi_{ABE}}\bra{\psi_{ABE}}$
denotes Eve's quantum state after tracing out Alice and Bob's
particles, and $\rho_{E|b_1}$ is Eve's quantum state when Bob has
obtained the result $b_1$ for the measurement $B_1$. The optimal collective attack corresponds to the case where the tripartite state $\ket{\psi_{ABE}}$ is the purification of the bipartite state $\rho_{AB}$ shared by Alice and Bob.
Since we have
assumed uniform marginals, the mutual information between Alice
and Bob is given here by \begin{equation} I(A_0:B_1)=1-h(Q)\,, \end{equation} where $h$ is the
binary entropy.
Note that the rate is given by \eqref{rate} and not by
$I(A_0:B_1)-\chi(A_0:E)$ because $\chi(A_0:E)\geq \chi(B_1:E)$
holds for our protocol \cite{AMP}; it is therefore advantageous
for Alice and Bob to do the classical postprocessing with public
communication from Bob to Alice.
\subsection{Security of our protocol against collective attacks}
To find Eve's optimal collective attack, we have to find the
largest value of $\chi(B_1:E)$ compatible with the observed
parameters $Q$ and $S$ without assuming anything about the
physical systems and the measurements that are performed. Our main
result is the following.
\begin{thm}
Let $\ket{\psi_{ABE}}$ be a quantum state and
$\{A_1,A_2,B_1,B_2\}$ a set of measurements yielding a violation
$S$ of the CHSH inequality. Then after Alice and Bob have
symmetrized their marginals, \begin{equation} \chi(B_1:E)\leq
h\left(\frac{1+\sqrt{({S}/2)^2-1}}{2}\right)\,. \label{bound} \end{equation}
\end{thm}
The proof of this Theorem will be given in
Subsection~\ref{sec:proof}. From this result, it immediately
follows that the key rate for given observed values of $Q$ and $S$
is \begin{equation}\label{keyrate} r\geq
1-h(Q)-h\left(\frac{1+\sqrt{({S}/2)^2-1}}{2}\right)\,. \end{equation} As an
illustration, we have plotted in Fig.~\ref{figcurves} the key rate
for the correlations introduced in Subsection~2.1 that satisfy
$S=2\sqrt{2}(1-2{Q})$ and which arise from the state
$\ket{\Phi^+}$ after going through a depolarizing channel. We
stress that although we have specified a particular state and
particular qubit measurements that produce these correlations, we
do not assume anything about the implementation of the
correlations when computing the key rate. For the sake of
comparison, we have also plotted the key rate under the usual
assumptions of QKD for the same set of correlations. In this case,
Alice and Bob have a perfect control of their apparatuses, which
we have assumed to faithfully perform the qubit measurements given
in Subsection~2.1. The protocol is then equivalent to Ekert's,
which in turn is equivalent to the entanglement-based version of
BB84,
and one finds
\begin{equation} \chi(B_1:E)\leq h\left(Q+{S}/(2\sqrt{2})\right)\,,
\label{boundstandard} \end{equation}
as proven in Subsection~2.5.
If
$S=2\sqrt{2}(1-2{Q})$, this expression yields the
well-known critical QBER of $11\%$ \cite{SP}, to be compared to
$7.1\%$ in the device-independent scenario (Fig.~\ref{figcurves}).
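The two critical error rates quoted above can be recovered numerically from Eqs.~(\ref{keyrate}) and (\ref{boundstandard}). The following sketch (assuming NumPy and SciPy; the function names are ours) finds the QBER at which each rate vanishes for correlations satisfying $S=2\sqrt{2}(1-2{Q})$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def h(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def r_di(Q):
    """Device-independent rate for correlations with S = 2*sqrt(2)*(1-2Q)."""
    S = 2 * np.sqrt(2) * (1 - 2 * Q)
    if S <= 2:
        return -1.0   # no CHSH violation: no security
    return 1 - h(Q) - h((1 + np.sqrt((S / 2) ** 2 - 1)) / 2)

def r_std(Q):
    """Rate under the usual assumptions, Eve bounded by h(Q + S/(2*sqrt(2)))."""
    S = 2 * np.sqrt(2) * (1 - 2 * Q)
    return 1 - h(Q) - h(Q + S / (2 * np.sqrt(2)))

print(brentq(r_di, 1e-6, 0.14))    # ~0.071: critical QBER, device-independent
print(brentq(r_std, 1e-6, 0.14))   # ~0.110: critical QBER, usual assumptions
\end{verbatim}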
\begin{figure}\begin{center}
\includegraphics[scale=0.6]{fig.eps}
\caption{Extractable secret-key rate against collective attacks in
the usual scenario [$\chi(B_1:E)$ given by
eq.~(\ref{boundstandard})] and in the device-independent scenario
[$\chi(B_1:E)$ given by eq.~(\ref{bound})], for correlations
satisfying $S=2\sqrt{2}(1-2{Q})$. The key rate is plotted as a
function of $Q$. Remember that the key rate for the BB84
protocol in the device-independent scenario is zero.
}\label{figcurves}
\end{center}\end{figure}
To illustrate further the difference between the
device-independent scenario and the usual scenario, we now give an
explicit attack which saturates our bound; this example also
clarifies why the bound (\ref{bound}) is independent of $Q$. To
produce correlations characterized by given values of $Q$ and $S$,
Eve sends to Alice and Bob the two-qubit Bell-diagonal state \begin{equation}
\rho_{AB}({S})=\frac{1+{C}}{2}\,P_{\Phi^+}\,+\,
\frac{1-{C}}{2}\,P_{\Phi^-}\,, \label{rhoabc} \end{equation} where
$P_{\Phi^\pm}$ are the projectors on the Bell states
$\ket{\Phi^\pm}=(\ket{00}\pm\ket{11})/\sqrt{2}$ and where
${C}=\sqrt{({S}/2)^2-1}$. She defines the measurements to be
$B_1=\sigma_z$, $B_2=\sigma_x$ and
$A_{1,2}=\frac{1}{\sqrt{1+{C}^2}}\sigma_z\pm\frac{{C}}{\sqrt{1+{C}^2}}\sigma_x$.
Any value of $Q$ can be obtained by choosing $A_0$ to be $\sigma_z$
with probability $1-2Q$ and to be a randomly chosen bit with
probability $2Q$. One can check that the Holevo information
$\chi(B_1:E)$ for the state (\ref{rhoabc}) and the measurement
$B_1=\sigma_z$ is equal to the right-hand side of (\ref{bound}), i.e.,
this attack saturates our bound. This attack is impossible within
the usual assumptions because here not only the state $\rho_{AB}$,
but also the measurements taking place in Alice's apparatus depend
explicitly on the observed values of ${S}$ and $Q$. The state
(\ref{rhoabc}) has a nice interpretation: it is the two-qubit
state which gives the highest violation ${S}$ of the CHSH
inequality for a given value of the entanglement, measured by the
concurrence ${C}$ \cite{vw}. Therefore, for the optimal attack,
Eve uses the quantum state achieving the observed Bell violation
with the minimal amount of entanglement between Alice and Bob.
Since entanglement is a monogamous resource, this allows her to
maximize her correlations with the honest parties.
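This attack can also be checked numerically. The short sketch below (assuming NumPy; the helper names are ours) uses the fact that, since Eve holds a purification of $\rho_{AB}({S})$ and Bob's measurement is projective, $S(\rho_E)=S(\rho_{AB})$ and $S(\rho_{E|b_1})=S(\rho_{A|b_1})$; it confirms that $\chi(B_1:E)$ equals the right-hand side of (\ref{bound}).
\begin{verbatim}
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def h(x):
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

S = 2.6
C = np.sqrt((S / 2) ** 2 - 1)
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
rho_AB = ((1 + C) / 2 * np.outer(phi_p, phi_p)
          + (1 - C) / 2 * np.outer(phi_m, phi_m))

# Bob measures B1 = sigma_z; b = 0, 1 label the projectors |0><0|, |1><1|.
chi = vn_entropy(rho_AB)              # = S(rho_E), Eve holds a purification
for b in (0, 1):
    proj = np.zeros((2, 2)); proj[b, b] = 1.0
    unnorm = np.kron(np.eye(2), proj) @ rho_AB @ np.kron(np.eye(2), proj)
    p_b = np.real(np.trace(unnorm))
    rho_A_b = np.einsum('ikjk->ij', (unnorm / p_b).reshape(2, 2, 2, 2))
    chi -= p_b * vn_entropy(rho_A_b)  # S(rho_E|b1) = S(rho_A|b1)

print(chi, h((1 + C) / 2))            # both ~0.419: the bound is saturated
\end{verbatim}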
\subsection{Proof of upper bound on the Holevo quantity}\label{sec:proof}
The proof of the bound (\ref{bound}) was only sketched in
Ref.~\cite{aci07}. We present here all the details of that proof.
For clarity, we divide the proof in four steps.
\subsubsection{Step 1: Reduction to calculations on two qubits}
\begin{lemma} \label{lemmaone}
It is not restrictive to suppose that Eve sends to Alice and Bob a
mixture $\rho_{AB}=\sum_\lambda p_\lambda\,\rho_\lambda$ of
two-qubit states, together with a classical ancilla (known to her)
that carries the value $\lambda$ and determines which measurements
$A_{i}^\lambda$ and $B_{j}^\lambda$ are to be used on
$\rho_\lambda$.
\end{lemma}
The proof of this first statement relies critically on the
simplicity of the CHSH inequality (two binary settings on each
side). We present the argument for Alice, the same holds for Bob.
First, since any generalized measurement (POVM) can be viewed as a von Neumann measurement in a larger Hilbert space \cite{NielsenChuang}, we may assume that the two measurements $A_{1},A_{2}$ of Alice are von Neumann measurements, if necessary by including
ancillas in the state $\rho_{AB}$ shared by Alice and Bob. Thus
$A_1$ and $A_2$ are hermitian operators on $\mathbb{C}^d$ with
eigenvalues $\pm 1$. We can then use the following lemma.
\begin{lemma} \label{lemmaoneB}
Let $A_1$ and $A_2$ be Hermitian operators with eigenvalues equal
to $\pm 1$ acting on a Hilbert space $H$ of finite or countably
infinite dimension. Then we can decompose the Hilbert space $H$ as
a direct sum \begin{equation} H= \oplus_\alpha H_\alpha^2 \end{equation} such that
$\mathrm{dim}(H_\alpha^2)\leq 2$ for all $\alpha$, and such that
both $A_1$ and $A_2$ act within $H_\alpha^2$, that is, if
$|\psi\rangle \in H_\alpha^2$, then $A_1|\psi\rangle \in
H_\alpha^2$ and $A_2|\psi\rangle \in H_\alpha^2$.
\end{lemma}
\begin{proof}
Previous proofs of this result have been obtained
independently by Tsirelson \cite{tsirelson} and
Masanes~\cite{masanes2}. Here, we provide an alternative and
possibly simpler proof.
Note that since the eigenvalues of $A_1$
and $A_2$ are $\pm 1$, these operators square to the identity:
$A_1^2 = A_2^2 = \leavevmode\hbox{\small1\normalsize\kern-.33em1}$. Therefore $A_2 A_1$ is a unitary operator.
Let $|\alpha\rangle$ be an eigenvector of $A_2 A_1$: \begin{equation} A_2 A_1
|\alpha\rangle = \omega |\alpha\rangle \quad \mbox{with}\quad
|\omega|=1. \label{A2A1}\end{equation} Then
$\ket{\tilde\alpha}=A_2\ket{\alpha}$ is also an eigenvector of
$A_2 A_1$ with eigenvalue $\overline\omega$, since $A_2
A_1\ket{\tilde\alpha}=A_2 A_1 A_2\ket{\alpha}\linebreak[1]=A_2
(A_2 A_1)^\dagger\ket{\alpha}=\overline \omega
A_2\ket{\alpha}=\overline\omega\ket{\tilde\alpha}$. As $A_2 A_1$ is
unitary, its eigenvectors span the entire Hilbert space $H$. It
follows that $H$ can be decomposed as the direct sum
$H=\oplus_\alpha H_\alpha^2$, where
$H_\alpha^2=\text{span}\{\ket{\alpha},\ket{\tilde\alpha}\}$ is (at
most) two-dimensional.
It remains to show that $A_1$ and $A_2$ act within $H_\alpha^2$.
By definition $A_2\ket{\alpha}=\ket{\tilde\alpha}$ and
$A_2\ket{\tilde\alpha}=\ket{\alpha}$. On the other hand,
$A_1\ket{\alpha}=A_1A_2\ket{\tilde\alpha}=\omega\ket{\tilde\alpha}$
and
$A_1\ket{\tilde\alpha}=A_1A_2\ket{\alpha}=\overline\omega\ket{\alpha}$.
Note that in the case where $\omega=\pm 1$, $A_1=\pm A_2$ on
$H_\alpha^2$, that is, $A_1$ and $A_2$ are identical operators up
to a sign.
\end{proof}
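The construction in this proof is easy to check numerically; the following sketch (assuming NumPy; the names are ours) draws two random $\pm1$-valued observables on $\mathbb{C}^6$ and verifies that every subspace $\mathrm{span}\{\ket{\alpha},A_2\ket{\alpha}\}$ is indeed left invariant by both $A_1$ and $A_2$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def random_pm1_observable(d):
    """Random Hermitian operator on C^d with eigenvalues +/-1."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    U, _ = np.linalg.qr(M)
    return U @ np.diag(rng.choice([1.0, -1.0], size=d)) @ U.conj().T

d = 6
A1, A2 = random_pm1_observable(d), random_pm1_observable(d)
w, V = np.linalg.eig(A2 @ A1)            # A2 A1 is unitary

for k in range(d):
    alpha = V[:, k]
    basis, _ = np.linalg.qr(np.column_stack([alpha, A2 @ alpha]))
    P = basis @ basis.conj().T            # projector onto span{alpha, A2 alpha}
    for A in (A1, A2):
        # A maps the subspace into itself:  P A P = A P
        assert np.allclose(P @ A @ P, A @ P, atol=1e-8)
print("two-dimensional invariant blocks verified")
\end{verbatim}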
\begin{proof}[Proof of Lemma \ref{lemmaone}]
We can rephrase Lemma \ref{lemmaoneB} as saying that
$A_j=\sum_\alpha P_\alpha A_j P_\alpha$ where the $P_\alpha$s are
orthogonal projectors of rank $1$ or $2$. From Alice's standpoint,
the measurement of $A_i$ thus amounts to projecting onto one of the
(at most) two-dimensional subspaces defined by the projectors
$P_\alpha$, followed by a measurement of the reduced observable
$P_\alpha A_i P_\alpha=\vec{a}\,^\alpha_i\cdot\vec{\sigma}$.
Clearly, it cannot be worse for Eve to perform the projection
herself before sending the state to Alice and learn the value of
$\alpha$. The same holds for Bob. We conclude that without loss of
generality, in each run of the experiment Alice and Bob receive a
two-qubit state. The deviation from usual proofs of security of
QKD lies in the fact that the measurements to be applied can
depend explicitly on the state sent by Eve.
\end{proof}
\subsubsection{Step 2: Reduction to Bell-diagonal states of two qubits}
Let $\ket{\Phi^{\pm}}=1/\sqrt{2}\left(\ket{00}\pm\ket{11}\right)$
and $\ket{\Psi^{\pm}}=1/\sqrt{2}\left(\ket{01}\pm\ket{10}\right)$
be the four Bell states.
\begin{lemma} \label{lemmatwo}In the basis of Bell states ordered as $\{\ket{\Phi^+},\ket{\Psi^-}, \ket{\Phi^-},\ket{\Psi^+}\}$, each state $\rho_\lambda$ can be taken to
be a Bell-diagonal state of the form
\begin{eqnarray}\rho_\lambda=\left(\begin{array}{cccc}
\lambda_{\Phi^+}\\
& \lambda_{\Psi^-}\\
&& \lambda_{\Phi^-}\\
&&& \lambda_{\Psi^+}
\end{array}\right)\,,\label{belldiaglemma}\end{eqnarray}
with eigenvalues satisfying
\begin{eqnarray}\lambda_{\Phi^+}\geq \lambda_{\Psi^-} &,&
\lambda_{\Phi^-}\geq\lambda_{\Psi^+}\,. \label{orderlambdalemma}\end{eqnarray}
Furthermore, the measurements $A_i^\lambda$ and $B_j^\lambda$ can
be taken to be measurements in the $(x,z)$ plane.
\end{lemma}
\begin{proof}
For fixed $\lambda$ (we now omit the index $\lambda$), we can
label the axis of the Bloch sphere on Alice's side in such a way
that $\vec{a}_1$ and $\vec{a}_2$ define the $(x,z)$ plane; and
similarly on Bob's side.
Eve is a priori distributing any two-qubit state $\rho$ of which
she holds a purification. Now, recall that we have supposed,
without loss of generality, that all the marginals are uniformly
random. Knowing that Alice and Bob are going to symmetrize their
marginals, Eve does not lose anything by providing them with a state
having the suitable symmetry. The reason is as follows. First note
that since the (classical) randomization protocol that ensures
$\moy{a_i}=\moy{b_j}=0$ is done by Alice and Bob through public
communication, we can as well assume that it is Eve who does it,
i.e., she flips the value of each outcome bit with probability one
half. But because the measurements of Alice and Bob are in the
$(x,z)$ plane, we can equivalently, i.e., without changing Eve's
information, view the classical flipping of the outcomes as the
quantum operation $\rho\rightarrow (\sigma_y\otimes\sigma_y)
\rho(\sigma_y\otimes\sigma_y)$ on the state $\rho$. We conclude
that it is not restrictive to assume that Eve is in fact sending
the mixture
\begin{eqnarray}\bar{\rho}=\demi\left[\rho+(\sigma_y\otimes\sigma_y)
\rho(\sigma_y\otimes\sigma_y)\right]\,,\end{eqnarray} i.e., that she is
sending a state invariant under $\sigma_y\otimes\sigma_y$.
Now, $\ket{\Phi^+}$ and $\ket{\Psi^-}$ are eigenstates of
$\sigma_y\otimes\sigma_y$ for the eigenvalue $-1$, whereas
$\ket{\Phi^-}$ and $\ket{\Psi^+}$ are eigenstates of
$\sigma_y\otimes\sigma_y$ for the eigenvalue $+1$. Consequently,
$\bar{\rho}$ is obtained from $\rho$ by erasing all the coherences
between states with different eigenvalues. Explicitly, in the
basis of Bell states, ordered as $\{\ket{\Phi^+},\ket{\Psi^-},
\ket{\Phi^-},\ket{\Psi^+}\}$ we have \begin{eqnarray}
\bar{\rho}&=&\left(\begin{array}{cccc}
\lambda_{\Phi^+} & r_1e^{i\phi_1}\\
r_1e^{-i\phi_1}& \lambda_{\Psi^-}\\
&& \lambda_{\Phi^-} & r_2e^{i\phi_2}\\
&& r_2e^{-i\phi_2}& \lambda_{\Psi^+}
\end{array}\right) \end{eqnarray} where all the non-zero elements coincide with those of the original $\rho$.
We now use some additional freedom that is left in the labeling:
we can select any two orthogonal axes in the $(x,z)$ plane to be
labeled $x$ and $z$, and we can also choose their orientation. We
make use of this freedom to bring $\bar{\rho}$ to the form \begin{eqnarray}
\bar{\rho}&=&\left(\begin{array}{cccc}
\lambda_{\Phi^+} & ir_1\\
-ir_1& \lambda_{\Psi^-}\\
&& \lambda_{\Phi^-} & ir_2\\
&& -ir_2& \lambda_{\Psi^+}
\end{array}\right)\,,
\end{eqnarray} with $r_1$ and $r_2$ real and with the diagonal elements
arranged as \begin{eqnarray}\lambda_{\Phi^+}\geq \lambda_{\Psi^-} &,&
\lambda_{\Phi^-}\geq\lambda_{\Psi^+}\,. \label{orderlambda}\end{eqnarray}
Indeed, let $R_y(\theta)=\cos\frac{\theta}{2}\leavevmode\hbox{\small1\normalsize\kern-.33em1} + i
\sin\frac{\theta}{2}\sigma_y$: by applying $R_y(\alpha)\otimes
R_y(\beta)$ with \begin{eqnarray}
\tan(\alpha-\beta)=\frac{2r_1\cos\phi_1}{\lambda_{\Phi^+}-\lambda_{\Psi^-}}&,&
\tan(\alpha+\beta)=-\frac{2r_2\cos\phi_2}{\lambda_{\Phi^-}-\lambda_{\Psi^+}}\,, \end{eqnarray} the off-diagonal elements become purely imaginary. In
order to further arrange the diagonal elements according to
(\ref{orderlambda}), one can make the following extra rotations:
\begin{itemize}
\item in order to relabel $\Phi^+\leftrightarrow \Psi^-$ without changing the others, one sets $\alpha-\beta=\pi$ and $\alpha+\beta=0$, i.e. $\alpha=-\beta=\frac{\pi}{2}$;
\item in order to relabel $\Phi^-\leftrightarrow \Psi^+$ without changing the others, one sets $\alpha-\beta=0$ and $\alpha+\beta=\pi$, i.e. $\alpha=\beta=\frac{\pi}{2}$;
\item in order to relabel both, one takes the sum of the previous ones, i.e. $\alpha=\pi$ and $\beta=0$.
\end{itemize}
In this way one fixes $\lambda_{\Phi^+}\geq \lambda_{\Psi^-}$ and
$\lambda_{\Phi^-}\geq \lambda_{\Psi^+}$, i.e. the order of the
diagonal elements in each sector.
Finally, we repeat an argument similar to the one given above:
since $\bar{\rho}$ and its conjugate $\bar{\rho}^{\,*}$ produce
the same statistics for Alice and Bob's measurements and provide
Eve with the same information, we can suppose without loss of
generality that Alice and Bob rather receive the Bell-diagonal
mixture \begin{eqnarray} \rho_\lambda&=&\demi
\left(\bar{\rho}+\bar{\rho}^{\,*}\right)\,=\,
\left(\begin{array}{cccc}
\lambda_{\Phi^+}\\
& \lambda_{\Psi^-}\\
&& \lambda_{\Phi^-}\\
&&& \lambda_{\Psi^+}
\end{array}\right)\,,\label{belldiag}\end{eqnarray} with the eigenvalues satisfying (\ref{orderlambda}).
\end{proof}
\subsubsection{Step 3: Explicit calculation of the bound}
\begin{lemma} \label{lemmathree}
For a Bell-diagonal state $\rho_{\lambda}$ (\ref{belldiaglemma})
with eigenvalues ${\lambda}$ ordered as in eq.
(\ref{orderlambdalemma}) and for measurements in the $(x,z)$
plane, \begin{equation} \chi_{{\lambda}}(B_1:E)\leq
h\left(\frac{1+\sqrt{({S}_{{\lambda}}/2)^2-1}}{2}\right)\,,
\label{estimate2} \end{equation} where
$S_{{\lambda}}$
is the largest violation of the CHSH
inequality by the state $\rho_{\lambda}$.
\end{lemma}
In order to prove Lemma \ref{lemmathree} we have to bound
$\chi(B_1:E)=S(\rho_E)-\sum_{b_1=\pm
1}p(b_1)S\left(\rho_{E|b_1}\right)$. For the Bell diagonal state
(\ref{belldiaglemma}) one has \begin{eqnarray}\label{chistep3}
\chi_{\lambda}(B_1:E)&=&H\left(\underline{\lambda}\right)\,-\,\demi
\left[S\left(\rho_{E|b_1=1}\right)+S\left(\rho_{E|b_1=-1}\right)\right]\,,
\end{eqnarray} where $H$ is Shannon entropy and where we have adopted the
notation $\underline{\lambda}\equiv\{ \lambda_{\Phi^+},
\lambda_{\Phi^-}, \lambda_{\Psi^+}, \lambda_{\Psi^-}\}$.
We divide the proof of Lemma \ref{lemmathree} into three parts. In
the first part, we prove that, for any given Bell-diagonal state,
Eve's best choice for Bob's measurement is $B_1=\sigma_z$, which
allows us to express~(\ref{chistep3}) solely in terms of the
eigenvalues $\underline{\lambda}$. In the second part, we obtain
an inequality between entropies. In the third part, we compute the
maximal violation of the CHSH inequality for states of the form
(\ref{belldiaglemma}).
\paragraph{Step 3, Part 1: Upper bound for given Bell-diagonal state}
\begin{lemma} \label{lemmafour}
For a Bell-diagonal state $\rho_{\lambda}$ with eigenvalues
${\lambda}$ ordered as in (\ref{orderlambdalemma}) and for
measurements in the $(x,z)$ plane, \begin{equation} \chi_{{\lambda}}(B_1:E)\leq
H\left(\underline{\lambda}\right)-h(\lambda_{\Phi^+}+\lambda_{\Phi^-})\,.\label{chilambda2}
\end{equation}
\end{lemma}
\begin{proof}
Let us compute $S\left(\rho_{E|b_1}\right)$. First, one gives Eve
the purification of $\rho_\lambda$: \begin{eqnarray} \ket{\Psi}_{ABE}&=&
\sqrt{\lambda_{\Phi^+}}\ket{\Phi^+}\ket{e_1} +
\sqrt{\lambda_{\Phi^-}}\ket{\Phi^-}\ket{e_2} +
\sqrt{\lambda_{\Psi^+}}\ket{\Psi^+}\ket{e_3} +
\sqrt{\lambda_{\Psi^-}}\ket{\Psi^-}\ket{e_4} \end{eqnarray} with
$\braket{e_i}{e_j}=\delta_{ij}$. By tracing Alice out, one obtains
$\rho_{BE}$.
Now, Bob measures in the $(x,z)$ plane. His measurement $B_1$ can be
written as \begin{eqnarray}
B_1=\cos\varphi\,\sigma_z+\sin\varphi\,\sigma_x\,.\end{eqnarray} After the
measurement, the system is projected in one of the eigenstates of
$B_1$ which can be written as
$\ket{b_1}=\sqrt{\frac{1+b_1\cos\varphi}{2}}\ket{0}+b_1\sqrt{\frac{1-b_1\cos\varphi}{2}}\ket{1}$
when $\varphi\in [0,\pi]$. The case $\varphi\in [\pi,2\pi]$
corresponds to a flip of the outcome $b_1$, but as the result that
follows is independent of the value of $b_1$, it is sufficient to
consider $\varphi\in [0,\pi]$. The reduced density matrix of Eve
conditioned on the value of $b_1$ is given by \begin{eqnarray}
\rho_{E|b_1}=\ket{\psi^+(b_1)}\bra{\psi^+(b_1)} +
\ket{\psi^-(-b_1)}\bra{\psi^-(-b_1)}\,, \end{eqnarray} where we have defined
the two non-normalized states \begin{eqnarray}
\ket{\psi^\sigma(b_1)}&=&\sqrt{\frac{1+b_1\cos\varphi}{2}}\left[
\sqrt{\lambda_{\Phi^+}}\ket{e_1} + \sigma
\sqrt{\lambda_{\Phi^-}}\ket{e_2} \right] \nonumber\\&& + b_1
\sqrt{\frac{1-b_1\cos\varphi}{2}}\left[
\sqrt{\lambda_{\Psi^+}}\ket{e_3} +
\sigma\sqrt{\lambda_{\Psi^-}}\ket{e_4}\right]\,. \end{eqnarray} The
calculation of the eigenvalues of a rank two matrix is a standard
procedure. The result is that the eigenvalues of $\rho_{E|b_1}$
are independent of $b_1$ and are given by \begin{eqnarray}
\Lambda_{\pm}=\demi\left(1\pm\,\sqrt{(\lambda_{\Phi^+}-\lambda_{\Psi^-})^2
+ (\lambda_{\Phi^-}-\lambda_{\Psi^+})^2+2\cos
2\varphi(\lambda_{\Phi^+}-\lambda_{\Psi^-})(\lambda_{\Phi^-}-\lambda_{\Psi^+})}\right)\,.
\end{eqnarray} Therefore we have obtained
$S\left(\rho_{E|b_1=1}\right)=S\left(\rho_{E|b_1=-1}\right)=h(\Lambda_+)$,
that is \begin{eqnarray}
\chi_{\lambda}(B_1:E)=H\left(\underline{\lambda}\right)-h(\Lambda_+)\,.
\end{eqnarray} Now, for any set of $\lambda$'s, Eve's information is the
largest for the choice of $\varphi$ that minimizes $h(\Lambda_+)$,
which is the one for which the difference $\Lambda_+-\Lambda_-$ is
the largest. Because of (\ref{orderlambda}), the product
$(\lambda_{\Phi^+}-\lambda_{\Psi^-})(\lambda_{\Phi^-}-\lambda_{\Psi^+})$
is non-negative and the maximum is obtained for $\varphi=0$, i.e.,
$B_1=\sigma_z$. This gives the upper bound that we wanted, \begin{eqnarray}
\chi_{\lambda}(B_1:E)\leq
H\left(\underline{\lambda}\right)-h(\lambda_{\Phi^+}+\lambda_{\Phi^-})\,.\label{chilambda}
\end{eqnarray}
\end{proof}
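The optimization over Bob's measurement angle in this proof can be checked numerically. The following sketch (assuming NumPy; the names are ours) evaluates $\chi_\lambda(B_1:E)=H(\underline{\lambda})-h(\Lambda_+)$ on a grid of angles $\varphi$ for one ordered choice of eigenvalues, and confirms that the maximum is attained at $\varphi=0$, where it coincides with the bound (\ref{chilambda}).
\begin{verbatim}
import numpy as np

def h(x):
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

def chi_lambda(lam, phi):
    """H(lambda) - h(Lambda_+) for Bob's measurement cos(phi) sz + sin(phi) sx.
    lam = (l_Phi+, l_Phi-, l_Psi+, l_Psi-)."""
    a = lam[0] - lam[3]            # lambda_Phi+ - lambda_Psi-
    b = lam[1] - lam[2]            # lambda_Phi- - lambda_Psi+
    Lplus = 0.5 * (1 + np.sqrt(a * a + b * b + 2 * np.cos(2 * phi) * a * b))
    return H(lam) - h(Lplus)

lam = (0.55, 0.25, 0.05, 0.15)     # ordered: l_Phi+ >= l_Psi-, l_Phi- >= l_Psi+
best = max(chi_lambda(lam, phi) for phi in np.linspace(0, np.pi, 2001))
print(np.isclose(best, chi_lambda(lam, 0.0)))             # True: phi = 0 is optimal
print(chi_lambda(lam, 0.0), H(lam) - h(lam[0] + lam[1]))  # same value
\end{verbatim}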
\paragraph{Step 3, Part 2: Entropic Inequality}
\begin{lemma} \label{lemmafive}
Let $\underline{\lambda}$ be probabilities, i.e.
$\lambda_{\Phi^+}, \lambda_{\Phi^-}, \lambda_{\Psi^+},
\lambda_{\Psi^-} \geq 0$ and $\lambda_{\Phi^+}+ \lambda_{\Phi^-}+
\lambda_{\Psi^+}+ \lambda_{\Psi^-}=1$. Let $R^2 =
(\lambda_{\Phi^+}-\lambda_{\Psi^-})^2 + (\lambda_{\Phi^-}-
\lambda_{\Psi^+})^2$. Then \begin{eqnarray}
F(\underline{\lambda})=H\left(\underline{\lambda}\right)-h(\lambda_{\Phi^+}+\lambda_{\Phi^-})&\leq
&
h\left(\frac{1+\sqrt{
2 R^2
-1}}{2}\right) \quad \mbox{if $R^2 >1/2$}\label{estimate4a}\\
&\leq & 1 \quad \mbox{if $R^2 \leq 1/2$}
\,, \label{estimate4} \end{eqnarray}
with equality in eq. (\ref{estimate4a}) if and only if $\lambda_{\Phi^\pm}=0$ or $\lambda_{\Psi^\pm}=0$.
\end{lemma}
\begin{proof}
We can parameterize the $\lambda$'s as:
\begin{eqnarray}
\lambda_{\Phi^+} &=& \frac{1}{4} + \frac{R}{2}\cos \theta + \delta \nonumber\\
\lambda_{\Phi^-} &=& \frac{1}{4} + \frac{R}{2}\sin \theta - \delta \nonumber\\
\lambda_{\Psi^-} &=& \frac{1}{4} - \frac{R}{2}\cos \theta + \delta \nonumber\\
\lambda_{\Psi^+} &=& \frac{1}{4} - \frac{R}{2}\sin \theta - \delta\,.
\end{eqnarray}
The conditions
$\lambda_{\Phi^+}, \lambda_{\Phi^-}, \lambda_{\Psi^+}, \lambda_{\Psi^-} \geq 0$
imply
\begin{equation}
-\frac{1}{4} + \frac{R}{2}|\cos \theta | \leq \delta \leq \frac{1}{4} - \frac{R}{2}|\sin \theta|\,.
\label{domain1}
\end{equation}
There is a solution for $\delta$ if and only if
\begin{equation}\label{cond1}
|\cos \theta | + |\sin \theta| \leq \frac{1}{R}\,.
\end{equation}
This condition is non-trivial if $R > 1/\sqrt{2}$.
When $R > 1/\sqrt{2}$, the extremal values of $\theta$, solutions
of $|\cos \theta | + |\sin \theta| = \frac{1}{R}$, correspond to
$\lambda_{\Phi^+}=0$ or $\lambda_{\Psi^-}=0$, and
$\lambda_{\Phi^-}=0$ or $\lambda_{\Psi^+}=0$. When both
$\lambda_{\Phi^\pm}=0$ or both $\lambda_{\Psi^\pm}=0$,
$F(\underline{\lambda})=h(1/2+(\sqrt{
2 R^2-1})/2)$ and one has equality in eq.~(\ref{estimate4a}). In the other cases, $F(\underline{\lambda})=0$ and the inequality~(\ref{estimate4a}) is satisfied. Our strategy is to prove that when
$R > 1/\sqrt{2}$, the maximum of $F(\underline{\lambda})$ occurs
when $|\cos \theta | + |\sin \theta| = \frac{1}{R}$, i.e., at
the edge of the allowed domain for $\theta$. This will establish
(\ref{estimate4a}).
Let us start by finding the maximum of $F$ for fixed $R$ and
$\theta$. To this end, we compute the derivative of $F$ with
respect to $\delta$ \begin{equation} \frac{\partial}{\partial \delta}
F\left(\underline{\lambda}\right) = - \log_2 \lambda_{\Phi^+}
+\log_2 \lambda_{\Phi^-} +\log_2 \lambda_{\Psi^+} -\log_2
\lambda_{\Psi^-}\,.\end{equation} The derivative with respect to $\delta$
vanishes if and only if $ \lambda_{\Phi^+} \lambda_{\Psi^-} =
\lambda_{\Phi^-} \lambda_{\Psi^+}$, which is equivalent to \begin{equation}
\delta = \delta^*(\theta)=\frac{R^2}{4}\left( \cos^2 \theta -
\sin^2 \theta \right)\,. \end{equation} Note that $\delta^*(\theta)$ always
belongs to the domain (\ref{domain1}) for $\theta$ satisfying
(\ref{cond1}), i.e., it is an extremum of $F$. We also have that
\begin{equation} \frac{\partial^2}{\partial \delta^2}
F\left(\underline{\lambda}\right) = - \frac{1}{\lambda_{\Phi^+}} -
\frac{1}{ \lambda_{\Phi^-} } - \frac{1}{\lambda_{\Psi^+} } -
\frac{1}{\lambda_{\Psi^-}} < 0\,,\end{equation} which shows that
$\delta^*(\theta)$ is a maximum of $F$ (not a minimum).
We have thus identified the unique maximum of $F$ at fixed
$\theta$. Let us now take the optimal value of $\delta =
\delta^*(\theta)$, and let $\theta$ vary. We compute the
derivative of $F$ with respect to $\theta$ along the curve $\delta
= \delta^*(\theta)$: \begin{eqnarray} \frac{d}{d \theta} F |_{\delta=\delta^*}
&=& \frac{\partial}{\partial \delta} F |_{\delta=\delta^*}\frac{d
\delta^*(\theta)}{d \theta} + \frac{\partial}{\partial \theta} F
|_{\delta=\delta^*} = \frac{\partial}{\partial \theta}
F |_{\delta=\delta^*}\nonumber\\
&=& -\frac{R}{2} \cos \theta \log_2 \left( \frac{ \lambda_{\Phi^-}
(\lambda_{\Psi^+} + \lambda_{\Psi^-} ) } {\lambda_{\Psi^+} (
\lambda_{\Phi^+} + \lambda_{\Phi^-})}\right) +\frac{R}{2} \sin
\theta \log_2 \left( \frac{ \lambda_{\Phi^+} (\lambda_{\Psi^+} +
\lambda_{\Psi^-} ) } {\lambda_{\Psi^-} ( \lambda_{\Phi^+} +
\lambda_{\Phi^-})}\right)\,. \end{eqnarray} Now, when $\delta=\delta^*$, we
have the identities: \begin{equation} \frac{ \lambda_{\Phi^+} } {
\lambda_{\Phi^+} + \lambda_{\Phi^-}} =\frac{ \lambda_{\Psi^+} } {
\lambda_{\Psi^+} + \lambda_{\Psi^-}} = \frac{1}{2} + \frac{R}{2}
\cos \theta - \frac{R}{2} \sin \theta\,, \end{equation} and \begin{equation} \frac{
\lambda_{\Phi^-} } { \lambda_{\Phi^+} + \lambda_{\Phi^-}} =\frac{
\lambda_{\Psi^-} } { \lambda_{\Psi^+} + \lambda_{\Psi^-}} =
\frac{1}{2} - \frac{R}{2} \cos \theta + \frac{R}{2} \sin \theta\,.
\end{equation} Using these relations we obtain \begin{eqnarray} \frac{d}{d \theta} F
|_{\delta=\delta^*} &=& -\frac{R}{2} \left ( \cos \theta + \sin
\theta \right) \log_2 \frac{ 1 - R\cos \theta + R \sin \theta } {
1 + R\cos \theta - R \sin \theta }\,. \end{eqnarray} This quantity vanishes
(i.e. we have an extremum) if and only if $\cos \theta + \sin
\theta=0$ or $\cos \theta - \sin \theta =0$, that is $\theta=\pm
\pi/4, \pm 3 \pi/4$.
When $R> 1/\sqrt{2}$ the points $\theta=\pm \pi/4, \pm 3 \pi/4$
lie outside the allowed domain for $\theta$. Hence the maximum of
$F$ occurs when $\theta$ lies at the edge of its allowed domain.
As discussed above, this proves our claim when $R> 1/\sqrt{2}$.
When $R\leq 1/\sqrt{2}$, the extrema can be reached. Note that
$\theta=\pm \pi/4, \pm 3 \pi/4$ implies $\delta^*=0$. One then
easily checks that the maximum of $F$ occurs when $\theta = \pi/4,
-3 \pi/4$, whereupon $F=1$. This establishes
eq.~(\ref{estimate4}).
\end{proof}
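Although the above proof is analytic, the inequality is easy to probe numerically. The sketch below (assuming NumPy; the names are ours) samples random probability vectors $\underline{\lambda}$ and checks that $F(\underline{\lambda})$ never exceeds the bounds (\ref{estimate4a}) and (\ref{estimate4}).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def h(x):
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def H(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

violations = 0
for _ in range(50000):
    lam = rng.dirichlet(np.ones(4))            # (l_Phi+, l_Phi-, l_Psi+, l_Psi-)
    R2 = (lam[0] - lam[3]) ** 2 + (lam[1] - lam[2]) ** 2
    F = H(lam) - h(lam[0] + lam[1])
    bound = h((1 + np.sqrt(2 * R2 - 1)) / 2) if R2 > 0.5 else 1.0
    violations += F > bound + 1e-9
print(violations)    # 0
\end{verbatim}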
\paragraph{Step 3, Part 3: Violation of CHSH}
\begin{lemma}\label{lemmaseven}
The maximal violation $S_\lambda$ of the CHSH inequality for a
Bell diagonal state $\rho_\lambda$ given by (\ref{belldiaglemma})
with eigenvalues ordered according to (\ref{orderlambdalemma}) is
\begin{equation}\label{chshmax} S_\lambda = \max\left\{ 2\sqrt{2}\,\sqrt{
(\lambda_{\Phi^+}- \lambda_{\Psi^-})^2 + (\lambda_{\Phi^-} -
\lambda_{\Psi^+})^2 }\,,\,2\sqrt{2}\,\sqrt{(\lambda_{\Phi^+}-
\lambda_{\Psi^+})^2 + (\lambda_{\Phi^-} - \lambda_{\Psi^-})^2}
\right\} \end{equation}
\end{lemma}
\begin{proof}
For any given two-qubit state $\rho$, the maximum value of the
CHSH expression can be computed using the following recipe
\cite{horodecki}: let $T$ be the tensor with entries
$t_{ij}=\mathrm{Tr}\left[\sigma_i\otimes\sigma_j\,\rho\right]$,
and let $\tau_1$ and $\tau_2$ be the two largest eigenvalues of
the symmetric matrix $T^TT$. Then, for optimal measurements,
$S=2\sqrt{\tau_1+\tau_2}$.
We are working with the Bell-diagonal state (\ref{belldiaglemma}),
for which \begin{eqnarray} T_\lambda=\left(\begin{array}{ccc} \lambda_{\Phi^+}-
\lambda_{\Phi^-}+ \lambda_{\Psi^+}- \lambda_{\Psi^-}\\ &
-\lambda_{\Phi^+}+\lambda_{\Phi^-}+\lambda_{\Psi^+}-\lambda_{\Psi^-}\\
&&\lambda_{\Phi^+}+\lambda_{\Phi^-}-\lambda_{\Psi^+}-\lambda_{\Psi^-}\end{array}\right).
\end{eqnarray}
Taking into account the order (\ref{orderlambdalemma}), one has
$T_{zz}\geq |T_{xx}|$.
Hence either
\begin{equation} \tau_1 +\tau_2 = T_{zz}^2 + T_{xx}^2 =
2\left[(\lambda_{\Phi^+}- \lambda_{\Psi^-})^2 +
(\lambda_{\Phi^-} - \lambda_{\Psi^+})^2\right]\,,\end{equation}
or
\begin{equation} \tau_1 +\tau_2 = T_{zz}^2 + T_{yy}^2 =
2\left[(\lambda_{\Phi^+}- \lambda_{\Psi^+})^2 +
(\lambda_{\Phi^-} - \lambda_{\Psi^-})^2\right]\,.\end{equation}
\end{proof}
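As a consistency check, the closed-form expression (\ref{chshmax}) can be compared against a direct evaluation of the recipe of Ref.~\cite{horodecki}; a small numerical sketch (assuming NumPy; the names are ours) is given below.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
bell = [np.array([1, 0, 0, 1]) / np.sqrt(2),    # Phi+
        np.array([1, 0, 0, -1]) / np.sqrt(2),   # Phi-
        np.array([0, 1, 1, 0]) / np.sqrt(2),    # Psi+
        np.array([0, 1, -1, 0]) / np.sqrt(2)]   # Psi-

def horodecki(rho):
    T = np.array([[np.real(np.trace(rho @ np.kron(si, sj))) for sj in sig]
                  for si in sig])
    t = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2 * np.sqrt(t[-1] + t[-2])

for _ in range(1000):
    x = rng.dirichlet(np.ones(4))
    lpp, lpsm = max(x[0], x[1]), min(x[0], x[1])   # lambda_Phi+ >= lambda_Psi-
    lpm, lpsp = max(x[2], x[3]), min(x[2], x[3])   # lambda_Phi- >= lambda_Psi+
    rho = sum(l * np.outer(v, v) for l, v in zip([lpp, lpm, lpsp, lpsm], bell))
    S_formula = 2 * np.sqrt(2) * max(
        np.sqrt((lpp - lpsm) ** 2 + (lpm - lpsp) ** 2),
        np.sqrt((lpp - lpsp) ** 2 + (lpm - lpsm) ** 2))
    assert np.isclose(S_formula, horodecki(rho), atol=1e-9)
print("closed form agrees with the Horodecki recipe")
\end{verbatim}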
We can now provide the proof of Lemma \ref{lemmathree}.
\begin{proof}[Proof of Lemma \ref{lemmathree}]
In the case that $S_\lambda$ is equal to the first expression in
(\ref{chshmax}), Lemma \ref{lemmathree} immediately follows from
combining Lemmas \ref{lemmafour} and \ref{lemmafive}, since
$S_\lambda=2\sqrt{2}R$. Note that the threshold $R^2=1/2$ in Lemma
\ref{lemmafive} corresponds to the threshold for violating the
CHSH inequality.
In the other case, we once again combine Lemmas \ref{lemmafour}
and \ref{lemmafive}, and note that the function $F$ of Lemma
\ref{lemmafive} is invariant under the permutation of
$\lambda_{\Psi^+}$ and $\lambda_{\Psi^-}$ with $\lambda_{\Phi^+}$,
$\lambda_{\Phi^-}$ fixed.
\end{proof}
\subsubsection{Step 4: Convexity argument}
To conclude the proof of the Theorem, note that if Eve sends a
mixture of Bell-diagonal states $\sum_{\lambda}
p_\lambda\,\rho_{\lambda}$ and chooses the measurements to be in
the $(x,z)$ plane, then $\chi(B_1:E)=\sum_\lambda p_\lambda\,
\chi_{\lambda}(B_1:E)$. Writing $F(S)\equiv
h\left(\frac{1+\sqrt{({S}/2)^2-1}}{2}\right)$ for the right-hand side of
\eqref{estimate2}, we then find
$\chi(B_1:E)\leq \sum_\lambda p_\lambda\, F(S_\lambda)\leq
F(\sum_\lambda p_\lambda\, S_\lambda)$, where the last inequality
holds because $F$ is concave. But since the observed violation $S$
of CHSH is necessarily such that $S\leq \sum_\lambda p_\lambda
S_\lambda$ and since $F$ is a monotonically decreasing function,
we find $\chi(B_1:E)\leq F(S)$.
\subsection{Derivation of the bound (\ref{boundstandard}) in the standard scenario}
In the standard scenario, Alice and Bob know that they are measuring qubits and have set their measurement settings in the best possible way for the reference state $\ket{\Phi^+}$. We assume one such possible choice (all the others being equivalent), the one specified in Subsection~2.1: $A_0=B_1=\sigma_z$, $A_1=(\sigma_z+\sigma_x)/\sqrt{2}$, $A_2=(\sigma_z-\sigma_x)/\sqrt{2}$, $B_2=\sigma_x$. Thus the CHSH operator becomes
\begin{eqnarray}
CHSH&=&\sqrt{2}\left(\sigma_x\otimes \sigma_x+\sigma_z\otimes \sigma_z\right)\,.\label{formchsh}
\end{eqnarray}
The calculation of the unconditional security bound follows exactly the usual one, as presented for instance in Appendix A of Ref.~\cite{revqkd}. As is well known, in the usual BB84 protocol, the measured parameters are the error rate in the $Z$ and in the $X$ basis, $\varepsilon_{z,x}$; if the $Z$ basis is used for the key and the $X$ basis for parameter estimation, Eve's information is bounded by \begin{eqnarray}\chi(A_Z:E)=\chi(B_Z:E)=h(\varepsilon_x)\label{boundusual}\,.\end{eqnarray}
In our case, $\varepsilon_z=Q$; but instead of $\varepsilon_x$, the parameter from which Eve's information is inferred is the average value $S$ of the CHSH polynomial. Given (\ref{formchsh}), the evaluation of $S$ on a Bell-diagonal state is straightforward: $S=2\sqrt{2}(\lambda_1-\lambda_4)$. Now, with the parametrization $\lambda_1=(1-\varepsilon_z)(1-u)$ and $\lambda_4=\varepsilon_z v$, we immediately obtain $\lambda_1-\lambda_4=1-\varepsilon_z-[(1-\varepsilon_z)u+\varepsilon_z v]= 1-\varepsilon_z-\varepsilon_x$ because of Eq.~(A7) of Ref.~\cite{revqkd}. Therefore $S=2\sqrt{2}(1-Q-\varepsilon_x)$ i.e. \begin{eqnarray}
\varepsilon_x=1-Q-S/(2\sqrt{2})\,.\end{eqnarray} Since $h(\varepsilon_x)=h(1-\varepsilon_x)$, this leads immediately to (\ref{boundstandard}).
\section{Loopholes in Bell experiments and DIQKD}\label{sec:loopholes}
The security of our protocol, like the security of any DIQKD
protocol, relies on the violation of a Bell inequality. All
experimental tests of Bell inequalities that have been made so
far, however, are subject to at least one of several loopholes and
therefore admit in principle a local description.
We discuss here how these loopholes can impact DIQKD protocols.
\subsection{Loopholes in Bell experiments}
Basically, a loophole-free Bell experiment requires two
ingredients: i) no information about the input of one party should
be known to the other party before she has produced her output;
ii) high enough detection efficiencies.
If the first requirement is not fulfilled, the premises of Bell's
theorem are not satisfied and it is trivial for a classical model
to account for the apparent non-locality of the observed
correlations. In practice, this means that the measurements should
be carried out sufficiently fast and far apart from each other so
that no sub-luminal influence can propagate from the choice of
measurement on one wing to the measurement outcome on the other
wing. Additionally, the local choices of measurement
should not be determined in advance,
i.e., they should be truly random events. Failure to satisfy one
of these two conditions is known as the locality
loophole~\cite{Bell}.
The second requirement arises from the fact that in practice not
all signals are detected by the measuring devices, either
because of inefficiencies in the devices themselves, or because of
particle losses on the path from the source to the detectors. The
detection loophole~\cite{pearle} exploits the idea that it is a
local variable that determines whether a signal will be registered
or not. The particle is detected only if the setting of the
measuring device is in agreement with a predetermined scheme. In
this way, apparently non-local correlations can be reproduced by a
purely local model provided that the efficiency $\eta$ of the
detectors is below a certain threshold. In general the efficiency
necessary to rule out a local description depends on the Bell
inequality that is tested, and is quite high for Bell inequalities
with low numbers of inputs and outputs (for the CHSH inequality,
one must have $\eta> 82.8\%$). It is an open question whether
there exist Bell inequalities (with reasonably many inputs and
outputs) allowing significantly lower detection efficiencies (see
e.g.~\cite{zoology,I4422,Vertesi2}). From the point of view of the
data analysis, to decide whether an experiment with inefficient
detectors has produced a genuine violation of a Bell inequality,
all measurement events, including no-detection events, should be
taken into account in the non-locality test.
All Bell experiments performed so far suffer from (at least) one
of the two above loopholes. On the one hand, photonic experiments
can close the locality loophole~\cite{aspect,weihs,tittel}, but
cannot reach the desired detection efficiencies. On the other
hand, experiments carried out on entangled
ions~\cite{rowe,matsukevich} manage to close the detection
loophole, but are unsatisfactory from the point of view of
locality. Note that other loopholes or variants of the above
loophole have also been identified, such as the coincidence-time
loophole~\cite{gill}, but these are not as problematic.
\subsection{Loopholes from the perspective of DIQKD}
When considering the implications of these loopholes for DIQKD, a
first point to realize is that they are mainly a technological
problem, but do not in any way undermine the concept of DIQKD
itself. An eavesdropper trying to exploit one of the above
loopholes would clearly have to tamper with Alice and Bob's
devices, but it is not necessary for Alice and Bob to ``trust'' or
characterize the inner working of their devices to be sure that
all loopholes are closed. This can be decided solely by looking at
the classical input-output relations produced by the quantum
devices (and possibly their timing). In other words, we do not
have to leave the paradigm of DIQKD to guarantee the security of
the protocol (though of course with present-day technology it
might be difficult to construct devices that pass such security
tests).
A second important observation is that there is a fundamental
difference between a Bell experiment whose aim is to establish the
non-local character of Nature and a quantum key distribution
scheme based on the violation of a Bell inequality. In the first
case, we are trying to rule out a whole set of models of Nature
(including models that can overcome the laws of physics as they
are currently known), while in the second case, we are merely
fighting an eavesdropper limited by the laws of quantum physics.
Seen in this light, the locality loophole is not problematic in
our context. In usual Bell experiments, the locality loophole is
dealt with by enforcing a space-like separation between Alice and
Bob. This guarantees that no sub-luminal signals (including signals
mediated by some yet-unknown theory) could have traveled between
Alice's and Bob's devices. In the context of DIQKD, it is
sufficient to guarantee that no quantum signals (e.g. no photon)
can travel from Alice to Bob. This can be enforced by a proper
isolation of Alice's and Bob's locations. As stated in
Section~1.1, we make here the basic assumption, shared by usual
QKD and without which cryptography would not make any sense, that
Alice's and Bob's locations are secure, in the sense that no
unwanted information can leak out to the outside. Whether this
condition is fulfilled is an important question in practice, but
it is totally alien to quantum key distribution, whose aim is to
establish a secret key between two parties given that this
assumption is satisfied. In a similar way, we assume here, as in
usual QKD, that Alice and Bob choose their measuring settings with
trusted random number generators whose outputs are unknown to Eve.
The locality loophole is thus not a fundamental loophole in the
context of DIQKD and can be dealt with using today's technology.
The detection loophole, on the other hand, is a much more
complicated issue. Experimental tests of non-locality circumvent
this problem by discarding no-detection events and recording only
the events where both measuring devices have produced an answer.
This amounts to performing a post-selection on the measurement data.
This post-selection is usually justified by the fair sampling
assumption, which says that the sample of detected particles is a
fair sample of the set of all particles, i.e., that there are no
correlations between the state of the particles and their
detection probability. Although it may be very reasonable to
expect such a condition to hold for any realistic model of Nature,
it is clearly unjustified in the context of DIQKD, where we assume
that the quantum devices are provided by an untrusted party
\cite{larsson,lo}. In our context, it is thus crucial to close the
detection loophole. This has already been done for some
experiments \cite{rowe,matsukevich} although not yet on distances
relevant for QKD.
Note that a proper security analysis of DIQKD with inefficient
detectors has to take into account all measurement outcomes
produced by the devices, which in our case would include the
outcomes ``1'', ``-1'', and the no-detection outcome ``$\bot$''. A
possible strategy to apply our proof to this new situation simply
consists in viewing the absence of a click ``$\bot$'' as a ``-1''
outcome, thus replacing a 3-output device by an effective 2-output
device. To give an idea of the amount of detection inefficiency
that can be tolerated in this way, we have plotted in
Figure~\ref{figdet} the key rate as a function of the efficiency
of the detectors for the ideal set of quantum correlations
that give the maximal violation of the CHSH inequality, obtained
when measuring a $\ket{\Phi^+}$ state. The key rate is given by
Eq.~\eqref{keyrate} with $Q=\eta(1-\eta)$ and $S= 2\sqrt
2\eta^2+2(1-\eta)^2$.
\begin{figure}\begin{center}
\includegraphics[scale=0.6]{detection.eps}
\caption{Key rate as a function of detection efficiency for the ideal correlations coming from the maximally entangled state $\ket{\Phi^+}$ and satisfying $Q=0$ and $S=2\sqrt{2}$, obtained by replacing the absence of a click by the outcome $-1$. The efficiency threshold above which a positive key rate can be extracted is $\eta=0.924$. \label{figdet}}
\end{center}\end{figure}
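The threshold quoted in Fig.~\ref{figdet} can be recovered from Eq.~\eqref{keyrate}; a minimal numerical sketch (assuming NumPy and SciPy; the names are ours) is given below.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def h(x):
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def key_rate(eta):
    """Device-independent rate when no-clicks are mapped to the outcome -1."""
    Q = eta * (1 - eta)
    S = 2 * np.sqrt(2) * eta ** 2 + 2 * (1 - eta) ** 2
    if S <= 2:
        return -1.0
    return 1 - h(Q) - h((1 + np.sqrt((S / 2) ** 2 - 1)) / 2)

print(brentq(key_rate, 0.9, 0.99))   # ~0.924: threshold detection efficiency
\end{verbatim}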
\subsection{Ideas for overcoming the detection loophole}
As mentioned above, the experiment of \cite{matsukevich}, which is based on
entanglement swapping between two ions separated by about 1 meter, is immune to the detection loophole. A natural way to implement our DIQKD protocol would thus be to improve this experiment. This would require extending the distance between the ions, but also improving the visibility and significantly
improving the data rate (currently one event every 39 seconds). This approach could of course in principle also be implemented with neutral atoms, quantum dots, etc. Here we discuss ideas on how
the problem of the detection loophole could be solved (at least partially) within an all-photonic implementation, using heralded quantum memories and trusted detectors.
In a realistic quantum key distribution scenario there are
basically two kinds of losses that should be studied separately:
line losses and detector losses. Line losses are due (in practice)
to the imperfections of the quantum channel between Alice and Bob.
However, as far as the theoretical security analysis is concerned,
these losses should be assumed to be the result of Eve's actions,
since Alice and Bob do not control the quantum channel. One
possibility for Alice and Bob to overcome this problem is to use
heralded quantum memories. Using this technique, Alice and Bob can
know whether their respective memory device is loaded or not, that
is whether a photon really arrived in their device or not. In the
case that both memories are loaded, they release the photons and
perform their measurements. This procedure thus implements a kind
of quantum non-demolition measurement of the incoming states,
which allows Alice and Bob to get rid of the losses of the quantum
channel. This should be realizable within a few years, thanks to
the development of quantum repeaters \cite{TittelRev,Qinternet}.
The second type of losses, the detector losses, are probably more
crucial. We can, however, consider the situation where Alice's and
Bob's detectors are not part of the uncharacterized quantum
devices. That is, the quantum devices of Alice and Bob are viewed
as black-boxes that receive some classical input and produce an
output signal which is later detected, and transformed into the
final classical outcome, by a separate detector\footnote{Note, we
are not considering here the ``detector'' as a complete
measurement device, but only as the part of the device that clicks
or not whenever it is hit by one or several photons. In
particular, all the machinery that determines the choice of
measurement bases (and which may include beam-splitters,
polarizers, etc.) is still assumed to be part of the black-box
device.}. The detectors may be assumed to be trusted by and under
the control of Alice and Bob or they can be tested independently
from the rest of the quantum devices. Alice and Bob can for
instance do a tomography of their quantum detectors
\cite{tomo_det99,tomo_det01,tomo_det04,tomo_det08}, which consists
in determining the measurement that these detectors
actually perform. Such detector tomography, which has been
recently demonstrated experimentally in \cite{tomo_det08}, clearly
limits Eve's ability to exploit the detection loophole. This kind
of analysis may require elaborating counter-measures against some
sort of trojan-horse attack on the detectors, in which Alice's
device (manufactured by Eve) sometimes sends nothing and
sometimes sends bright pulses in order to ensure that a detection
occurs. We believe that the power of such attacks can be severely
constrained by placing multiple detectors instead of one at each
output mode of Alice's and Bob's devices \cite{trojan}.
In the scenario that we outlined in the preceding paragraph,
we have made a move to a situation that is intermediate between
usual QKD, where all devices are assumed to be trusted, and DIQKD,
where all quantum devices are untrusted. In this new situation,
Alice and Bob either need to trust their detectors (in the same
way that they trust their random number generators or the
classical devices) or they need to test them with a trusted
calibration device (that they should get from a different provider
than Eve). Whether this is a reasonable or practical scenario to
consider depends on the respective difficulty of testing the
detectors vs the entire quantum devices, and on the advantages
that may follow from trusting part of the quantum devices (this
may still allow for instance to forget about side-channels, or
imperfections in the measurement bases).
\section{Discussion and open questions}
Identifying the minimal set of physical assumptions allowing
secure key distribution is a fascinating problem, both from a
fundamental and an applied point of view. DIQKD possibly
represents the ultimate step in this direction, since its security
relies only on a fundamental set of basic assumptions: (i) access
to secure locations, (ii) trusted randomness, (iii)
trusted classical processing devices, (iv) an authenticated
classical channel, (v) and finally the general validity of a
physical theory, quantum theory. In this work, we have shown that
for the restricted scenario of collective attacks, secure DIQKD is
indeed possible. There remain, however, plenty of interesting open
questions in the device-independent scenario.
From an applied point of view, the most relevant questions are
related to loopholes in Bell tests, particularly the detection
loophole, as discussed in Section~\ref{sec:loopholes}. The detection loophole, usually seen mostly as a foundational problem, thus becomes a relevant issue from an applied perspective, with important implications for cryptography\footnote{Recently, the role of the detector efficiency loophole in standard QKD has been analyzed in Ref.~\cite{lutkenhaus}.}. From a
theoretical point of view, it would be highly desirable to extend
the security proof presented here to other scenarios, the ultimate
goal being a general security proof. We list below several
possible directions to extend our results.
\begin{itemize}
\item As we discussed, the violation of a Bell inequality represents a
necessary condition for secure DIQKD. It would be interesting, then, to
consider other protocols, based on different Bell inequalities,
even under the additional
assumption of collective attacks. Some interesting questions are:
(i) how does the security of DIQKD change when using larger
alphabets, especially when compared with
standard QKD~\cite{qudit1,qudit2,errfiltr}?
(ii) can one establish
more general relations bounding Eve's information from the amount
of observed Bell inequality violation?
\item A key ingredient in our security
proof is the fact that it is possible to reduce the
whole analysis to a two-qubit optimization problem. This is because
any pair of quantum binary measurements can be decomposed as the direct sum of pairs of measurements
acting on two-dimensional spaces. Do similar results exist for
more complex scenarios? More generally, are all possible bipartite quantum
correlations for $m$ measurements of $n$ outcomes, for finite
$m$ and $n$,
attainable by measuring finite-dimensional
systems~\cite{qcorr}? Some progress on this question was
recently obtained in Refs.~\cite{vertpal,briet}, where it was shown that infinite
dimensional systems are needed to generate all
two-outcome ($n=2$) quantum correlations, thus proving a conjecture made in Ref. \cite{dimH}. The proof of this result,
however, is only valid when $m\rightarrow\infty$.
\item Our security analysis works for the case of
one-way reconciliation protocols. How is the security of the protocol
modified when two-way reconciliation techniques are
considered? Does then a Bell inequality violation represent a sufficient condition for
security? In this direction, it was shown in Ref.~\cite{MAG} that
all correlations violating a Bell inequality contain some
form of secrecy, although not necessarily distillable into a
key.
\item In the spirit of removing the largest number of assumptions necessary for the security of QKD, an interesting extension has been anticipated by Kofler, Paterek and Brukner \cite{kpb}. They noticed that quantum cryptography may be secure even when one allows the eavesdropper to have partial information about the measurement settings. To illustrate this scenario on our protocol, we suppose that in each run Eve has some probability to make a correct guess on the choice of measurement settings. The best way to model this situation from the perspective of Eve is to have an additional bit $f$ (``flag'') such that $f = 1$ guarantees her guess to be correct, while $f = 0$ implies that her guess is uncorrelated with the real settings: indeed, any scenario with partial knowledge may be obtained by Eve forgetting the value of $f$. We suppose that the case $f=1$ happens with probability $q$ and $f=0$ with probability $1-q$. When Eve has full information on Bob's measurement choice, she can fix in advance Alice's and Bob's outcome while at the same time engineering a violation of CHSH up to the algebraic limit of 4. If Eve follows this strategy, the observed violation will then be $S=4q+(1-q)S'$ and the security bound will be given by \begin{eqnarray}\chi(B_1:E)&=&q+(1-q)h\left([1+\sqrt{(S'/2)^2-1}] /2\right)\,.\end{eqnarray}
This proves that there cannot be any security if $q\geq \sqrt{2}-1\approx 41\%$ (a short derivation of this threshold is sketched after this list). It would be interesting to consider more elaborate situations, e.g. those in which Eve may have partial information about sequences of measurement settings.
\item In standard QKD, it is known that security against collective attacks implies security against the most general type of attacks. This follows from an application of the exponential quantum De
Finetti theorem of Ref.~\cite{renner}, but can also be proven through a direct argument \cite{coll}.
Does a similar result
hold in the device-independent scenario? In particular can the exponential de Finetti theorem \cite{renner} be extended to the device-independent scenario? If this was the case, our security proof would automatically be promoted to a general security proof.
Some preliminary results in this direction have
been obtained in Refs.~\cite{df1,barrett09}, where two different versions of
a de Finetti theorem for general
no-signaling probability distributions were derived.
Or could it be that collective attacks are strictly weaker than general attacks in the device-independent scenario? Here the main difficulty for deriving a general security proof is that, contrary to standard QKD, the devices may behave in a way that depends on previous inputs and outputs. In particular, the measurement setting could be different in each round and depend on the results of previous measurements. It is not clear what role such memory effects play in the device-independent scenario, and whether it would be possible to find an explicit attack exploiting them which would outperform any collective (hence memoryless) attack.
\item A final possibility would be to adapt the techniques developed
in Refs.\cite{masanes-winter,masanes}, valid for the general
case of no-signaling correlations, to the quantum
scenario. The results of these works prove the security of QKD protocols against
eavesdroppers limited only by the no-signaling principle. Unfortunately, the corresponding key rates and noise resistance are at present impractical when applied to correlations that can be obtained by measuring quantum states.
A natural question is then: how can one incorporate the constraints associated to the
quantum formalism to these techniques in order to obtain
better key rates and better noise-resistance for quantum correlations?
\end{itemize}
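To make the threshold $q\geq\sqrt{2}-1$ quoted above explicit, here is a minimal sketch, assuming that $h$ denotes the binary entropy and that the relevant noiseless one-way key rate is simply $1-\chi(B_1:E)$. If $q\geq\sqrt{2}-1$, Eve can choose $S'=2$ (attainable with deterministic correlations, hence with full knowledge of the outcomes) on the runs she does not control, giving
\begin{equation}
\chi(B_1:E)=q+(1-q)\,h(1/2)=1, \qquad S=4q+2(1-q)=2+2q\geq 2\sqrt{2},
\end{equation}
so that she can reproduce any observed violation up to the maximal quantum value $2\sqrt{2}$ (by applying the cheating strategy on only a suitable fraction of the $f=1$ runs if necessary) while being perfectly correlated with the raw key; no secret key can then be extracted.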
\end{document}
\begin{document}
\title[On equivalency of various geometric structures]{\bf{On equivalency of various geometric structures}}
\author[Absos Ali Shaikh and Haradhan Kundu]{Absos Ali Shaikh$^*$ and Haradhan Kundu}
\date{}
\address{\noindent\newline Department of Mathematics,\newline University of
Burdwan, Golapbag,\newline Burdwan-713104,\newline West Bengal, India}
\email{[email protected], [email protected]}
\email{[email protected]}
\begin{abstract}
In the literature we see that, after a geometric structure is introduced by imposing some restriction on the Riemann-Christoffel curvature tensor, structures of the same type obtained by imposing the same restriction on other curvature tensors are subsequently studied. The main object of the present paper is to study the equivalency of various geometric structures obtained by imposing the same restriction on different curvature tensors. For this purpose we present a tensor, formed by combining the Riemann-Christoffel curvature tensor, the Ricci tensor, the metric tensor and the scalar curvature, which describes various curvature tensors as its particular cases. Then, with the help of this generalized tensor and using an algebraic classification, we prove the equivalency of different geometric structures (see Theorems \ref{cl-1} - \ref{thm5.6}, Table \ref{stc-1} and Table \ref{stc-2}).\\
\end{abstract}
\maketitle
\noindent{\bf Mathematics Subject Classification (2010).} 53C15, 53C21, 53C25, 53C35.\\
\noindent{\bf Keywords:} generalized curvature tensor, locally symmetric manifold, recurrent space, semisymmetric manifold, pseudosymmetric manifold.
\section{\bf{Introduction}}\label{intro}
Let $M$ be a semi-Riemannian manifold of dimension $n\ge 3$, endowed with the semi-Riemannian metric $g$ with signature $(p, n-p)$, $0\le p \le n$. If (i) $p=0$ or $p=n$; (ii) $p=1$ or $p=n-1$, then $M$ is said to be (i) a Riemannian manifold; (ii) a Lorentzian manifold, respectively. Let $\nabla$, $R$, $S$ and $r$ be the Levi-Civita connection, the Riemann-Christoffel curvature tensor, the Ricci tensor and the scalar curvature of $M$ respectively. All the manifolds considered here are assumed to be smooth and connected. We note that any two 1-dimensional semi-Riemannian manifolds are locally isometric, so the study of 1-dimensional semi-Riemannian manifolds is void. Also, for $n=2$, the notions of the above three curvatures are equivalent. Hence throughout the study we confine ourselves to a semi-Riemannian manifold $M$ of dimension $n\ge 3$. In differential geometry there are various themes of research for deriving the geometric properties of a semi-Riemannian manifold. Among others, ``symmetry'' plays an important role in the study of the differential geometry of a semi-Riemannian manifold.\\
\indent As a generalization of manifolds of constant curvature, the notion of local symmetry was introduced by Cartan \cite{ca} together with a full classification in the Riemannian case. A full classification of this notion in the indefinite case was given by Cahen and Parker (\cite{CAH}, \cite{CAH1}). The manifold $M$ is said to be locally symmetric if its local geodesic symmetries are isometries, and $M$ is said to be globally symmetric if its geodesic symmetries are extendible to the whole of $M$. Every globally symmetric manifold is locally symmetric, but not conversely. For instance, every compact Riemann surface of genus $> 1$ endowed with its usual metric of constant curvature $(-1)$ is locally symmetric but not globally symmetric. We note that the famous Cartan-Ambrose-Hicks theorem implies that $M$ is locally symmetric if and only if $\nabla R = 0$, and any simply connected complete locally symmetric manifold is globally symmetric.
During the last eight decades the notion of local symmetry has been weakened by many authors in different directions by imposing some restriction(s) on the curvature tensors, and various geometric structures, such as recurrency, semisymmetry, pseudosymmetry etc., have been introduced. In differential geometry various curvature tensors arise as invariants of different transformations, e.g., the projective ($P$), conformal ($C$), concircular ($W$) and conharmonic ($K$) curvature tensors. All the above restrictions have been studied by many geometers on various curvature tensors, together with their classification, existence and applications.\\
\indent In the literature there are many papers where the same curvature restriction is studied on other curvature tensors, studies which are either meaningless or redundant due to their equivalency. Cartan \cite{ca} first studied local symmetry. In 1958 So\'os \cite{soos} and in 1964 Gupta \cite{BG} studied the symmetry condition on the projective curvature tensor, and then in 1967 Reynolds and Thompson \cite{RT} proved that the notions of local symmetry and projective symmetry are equivalent. Also, in \cite{DA2} Desai and Amur studied concircular and projective symmetry and showed that these notions are equivalent to Cartan's local symmetry. Again, the study of recurrent manifolds was initiated by Ruse (\cite{Ru1}, \cite{Ru2}, \cite{Ru3}) as the Kappa space, later named recurrent space by Walker \cite{Ag}. By analogy, various authors such as Garai \cite{RG}, Desai and Amur \cite{DA1}, Rahaman and Lal \cite{RL} etc. studied the recurrence condition on the projective as well as the concircular curvature tensor. However, all these notions are equivalent to that of a recurrent manifold (\cite{Glodek}, \cite{mik}, \cite{mik2}, \cite{ol}). Recently Singh \cite{singh} studied the recurrence condition on the $\mathcal{M}$-projective curvature tensor, but from our paper (see Section 6) it follows that such a notion is equivalent to the notion of a recurrent manifold. The main object of this paper is to prove the equivalency of various geometric structures obtained by imposing the same curvature restriction on different curvature tensors. For this purpose we present a (0,4) tensor $B$, given by a linear combination of the Riemann-Christoffel curvature tensor, the Ricci tensor, the metric tensor and the scalar curvature, such that the tensor $B$ describes various curvature tensors as its particular cases. Tensors of the form of $B$ (i.e., particular cases of $B$) are said to be $B$-tensors, and the set of all $B$-tensors will be denoted by $\mathscr B$. We classify the set $\mathscr B$ with respect to contraction in such a way that, within each class, the geometric structures obtained by the same curvature restriction are equivalent.\\
\indent We are mainly interested in those geometric structures which are obtained by imposing restrictions, given by some operators, on various curvature tensors. We call these restrictions ``curvature restrictions''. In this paper we assemble such curvature restrictions, classify them with respect to their linearity and their commutativity with contraction, and study the results for each class of restrictions together. On the basis of this study we can say, for a specific curvature restriction, how many different geometric structures arise from the different curvature tensors.\\
\indent In Section 2 we present the tensor $B$ and show that various curvature tensors already introduced in the literature are particular cases of it. Section 3 deals with preliminaries. In Section 4 we classify the curvature restrictions (these are actually generalized, extended or weaker versions of the symmetry restriction defined by Cartan) and give the definitions of various geometric structures formed by some curvature restrictions. Section 5 is concerned with basic well known results and some basic properties of the tensor $B$. In Section 6 we classify the set $\mathscr B$ and establish the main results on the equivalency of various structures. Finally, in the last section we draw conclusions from the whole work.
\section{\bf{The tensor $B$ and $B$-tensors}}\label{B}
Again recall that $M$ is an $n (\ge 3)$-dimensional connected semi-Riemannian manifold equipped with the metric $g$.
We denote by $\nabla, R, S, r$, the Levi-Civita connection, the Riemann-Christoffel curvature tensor,
Ricci tensor and scalar curvature of $M$ respectively. We define a $(0,4)$ tensor $B$ given by
\begin{eqnarray}\label{tensor}
&&B(X_1,X_2, X_3, X_4) = a_0 R(X_1, X_2, X_3, X_4) + a_1 R(X_1, X_3, X_2, X_4)\\\nonumber
&&\hspace{0.7in} + a_2 S(X_2, X_3) g(X_1, X_4) + a_3 S(X_1, X_3) g(X_2, X_4) + a_4 S(X_1, X_2) g(X_3, X_4)\\\nonumber
&&\hspace{0.7in} + a_5 S(X_1, X_4) g(X_2, X_3) + a_6 S(X_2, X_4) g(X_1, X_3) + a_7 S(X_3, X_4) g(X_1, X_2)\\\nonumber
&&\hspace{0.7in} + r\big[a_8 g(X_1, X_4) g(X_2, X_3) + a_9 g(X_1, X_3) g(X_2, X_4) + a_{10} g(X_1, X_2) g(X_3, X_4)\big],
\end{eqnarray}
where the $a_i$'s are scalars on $M$ and $X_1, X_2, X_3, X_4\in \chi(M)$, the Lie algebra of all smooth vector fields on $M$. Recently Tripathi and Gupta \cite{tri} introduced a similar tensor $\mathcal T$, named the $\mathcal T$-curvature tensor, in which two terms are absent. In fact, if $a_1 = 0 = a_{10}$, then the tensor $B$ turns out to be the $\mathcal T$-curvature tensor. Hence the tensor $B$ may be called the \textit{extended $\mathcal T$-curvature tensor}, a name suggested by M. M. Tripathi (personal communication). However, throughout the paper, by the tensor $B$ we shall always mean the extended $\mathcal T$-curvature tensor. We note that for different values of the $a_i$'s, as given in the following table (Table \ref{tab-B}), the tensor $B$ reduces to various curvature tensors such as (i) the Riemann-Christoffel curvature tensor $R$, (ii) the Weyl conformal curvature tensor $C$, (iii) the projective curvature tensor $P$, (iv) the concircular curvature tensor $W$ \cite{yano1}, (v) the conharmonic curvature tensor $K$ \cite{ishi}, (vi) the quasi conformal curvature tensor $C^*$ \cite{yano4}, (vii) the $C'$ curvature tensor \cite{LC}, (viii) the pseudo projective curvature tensor $P^*$ \cite{prasad}, (ix) the quasi-concircular curvature tensor $W^*$ \cite{prasad2}, (x) the pseudo quasi conformal curvature tensor $\tilde W$ \cite{SJ}, (xi) the $\mathcal M$-projective curvature tensor \cite{pokh3}, (xii) the $\mathcal W_i$-curvature tensors, $i=1,2,...,9$ (\cite{pokh1}, \cite{pokh2}, \cite{pokh3}), (xiii) the $\mathcal W^*_i$-curvature tensors, $i=1,2,...,9$ (\cite{pokh3}) and (xiv) the $\mathcal T$-curvature tensor \cite{tri}.
\begin{table}[H]
\begin{center}
{\footnotesize\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\hline
\textbf{Tensor}& $a_0$ & $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ & $a_6$ & $a_7$ & $a_8$ & $a_9$ & $a_{10}$\\\hline
$R$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$C$ & $1$ & $0$ & $-\frac{1}{n-2}$ & $\frac{1}{n-2}$ & $0$ & $-\frac{1}{n-2}$ & $\frac{1}{n-2}$ & $0$ & $\frac{1}{(n-1)(n-2)}$ & $-\frac{1}{(n-1)(n-2)}$ & $0$\\\hline
$P$ & $1$ & $0$ & $-\frac{1}{n-1}$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$W$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{n(n-1)}$ & $\frac{1}{n(n-1)}$ & $0$\\\hline
$K$ & $1$ & $0$ & $-\frac{1}{n-2}$ & $\frac{1}{n-2}$ & $0$ & $-\frac{1}{n-2}$ & $\frac{1}{n-2}$ & $0$ & $0$ & $0$ & $0$\\\hline
$C^*$ & $a_0$ & $0$ & $a_2$ & $-a_2$ & $0$ & $a_2$ & $-a_2$ & $0$ & $-\frac{1}{n}\left(\frac{a_0}{n-1}+2a_2\right)$ & $\frac{1}{n}\left(\frac{a_0}{n-1}+2a_2\right)$ & $0$\\\hline
$C'$ & $a_0$ & $0$ & $a_2$ & $-a_2$ & $0$ & $a_2$ & $-a_2$ & $0$ & $a_8$ & $-a_8$ & $0$\\\hline
$P^*$ & $a_0$ & $0$ & $a_2$ & $-a_2$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{n}\left(\frac{a_0}{n-1}+a_2\right)$ & $\frac{1}{n}\left(\frac{a_0}{n-1}+a_2\right)$ & $0$\\\hline
$W^*$ & $a_0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\frac{1}{n}\left(\frac{a_0}{n-1}+2 b\right)$ & $\frac{1}{n}\left(\frac{a_0}{n-1}+ 2 b\right)$ & $0$\\\hline
$\widetilde W$ & $a_0$ & $0$ & $a_2$ & $-a_2$ & $0$ & $a_5$ & $-a_5$ & $0$ & $-\frac{a_0+(n-1)(a_2+a_5)}{n(n-1)}$ & $\frac{a_0+(n-1)(a_2+a_5)}{n(n-1)}$ & $0$\\\hline
$\mathcal M$ & $1$ & $0$ & $-\frac{1}{2(n-1)}$ & $\frac{1}{2(n-1)}$ & $0$ & $-\frac{1}{2(n-1)}$ & $\frac{1}{2(n-1)}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_0$ & $1$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_0^*$ & $1$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_1$ & $1$ & $0$ & $-\frac{1}{n-1}$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_1^*$ & $1$ & $0$ & $\frac{1}{n-1}$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_2$ & $1$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_2^*$ & $1$ & $0$ & $0$ & $0$ & $0$ & $\frac{1}{n-1}$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_3$ & $1$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_3^*$ & $1$ & $0$ & $0$ & $\frac{1}{n-1}$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_4$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_4^*$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\frac{1}{n-1}$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_5$ & $1$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_5^*$ & $1$ & $0$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_6$ & $1$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_6^*$ & $1$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_7$ & $1$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_7^*$ & $1$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_8$ & $1$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_8^*$ & $1$ & $0$ & $\frac{1}{n-1}$ & $0$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_9$ & $1$ & $0$ & $0$ & $0$ & $-\frac{1}{n-1}$ & $\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\mathcal W_9^*$ & $1$ & $0$ & $0$ & $0$ & $\frac{1}{n-1}$ & $-\frac{1}{n-1}$ & $0$ & $0$ & $0$ & $0$ & $0$\\\hline
$\tau$ & $a_0$ & $0$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ & $a_6$ & $a_7$ & $a_8$ & $a_9$ & $0$\\\hline
\end{tabular}}
\end{center}
\caption{List of $B$-tensors}\label{tab-B}
\end{table}
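For instance, reading off the row of $P$ in Table \ref{tab-B} (i.e., $a_0 = 1$, $a_2 = -\frac{1}{n-1}$, $a_3 = \frac{1}{n-1}$ and all other $a_i = 0$), the definition (\ref{tensor}) gives
$$B(X_1,X_2, X_3, X_4) = R(X_1, X_2, X_3, X_4) - \frac{1}{n-1}\big[S(X_2, X_3) g(X_1, X_4) - S(X_1, X_3) g(X_2, X_4)\big],$$
which is the usual expression of the projective curvature tensor $P$; the other rows of the table are read in the same way.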
There may arise some other tensors from the tensor $B$ as its particular cases, which have not been introduced so far. We recall that the tensors arising from the tensor $B$ as its particular cases are called $B$-tensors and the set of all such $B$-tensors is denoted by $\mathscr B$. It is easy to check that $\mathscr B$ forms a module over $C^{\infty}(M)$, the ring of all smooth functions on $M$. We note that the $B$-tensor pseudo quasi-conformal curvature tensor $\widetilde W$ was studied by Shaikh and Jana in 2006 \cite{SJ}, but the same notion was studied by Prasad et al. in 2011 \cite{prasad3} as the generalized quasi-conformal curvature tensor ($G_{qc}$).
\section{\bf{Preliminaries}}\label{pre}
Let us now consider a connected semi-Riemannian manifold $M$ of dimension $n\,(\ge 3)$. Then for two $(0,2)$ tensors $A$ and $E$, the
Kulkarni-Nomizu product (\cite{D0}, \cite{DGHS}, \cite{GLOG}, \cite{GLOG1}, \cite{Kow2}) $A\wedge E$ is given by
\begin{eqnarray}\label{KN}
(A \wedge E)(X_1,X_2,X_3,X_4)&=&A(X_1,X_4)E(X_2,X_3) + A(X_2,X_3)E(X_1,X_4)\\\nonumber
&&-A(X_1,X_3)E(X_2,X_4) - A(X_2,X_4)E(X_1,X_3),
\end{eqnarray}
where $X_1, X_2, X_3, X_4\in \chi(M)$.\\
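For instance, taking $A = E = g$ in (\ref{KN}) we get
$$(g \wedge g)(X_1,X_2,X_3,X_4) = 2\big[g(X_1,X_4)g(X_2,X_3) - g(X_1,X_3)g(X_2,X_4)\big],$$
so that $\frac{1}{2}\, g\wedge g$ coincides with the tensor $G$ determined by $G(X,Y,X_1,X_2) = (X\wedge_g Y)(X_1,X_2)$, which will be used later (cf.\ Lemma \ref{lem3.5}).\\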
A tensor $D$ of type (1,3) on $M$ is said to be a generalized curvature tensor (\cite{D11}, \cite{D2}, \cite{D10}), if
\begin{eqnarray*}
&(i)&D(X_1,X_2)X_3+D(X_2,X_1)X_3=0,\\
&(ii)&D(X_1,X_2,X_3,X_4)=D(X_3,X_4,X_1,X_2),\\
&(iii)&D(X_1,X_2)X_3+D(X_2,X_3)X_1+D(X_3,X_1)X_2=0,
\end{eqnarray*}
where $D(X_1,X_2,X_3,X_4)=g(D(X_1,X_2)X_3,X_4)$, for all $X_1,X_2,$ $X_3,X_4$. Here we use the same symbol $D$ for the generalized curvature tensor of type (1,3) and of type (0,4). Moreover, if $D$ satisfies the second Bianchi identity, i.e.,
$$(\nabla_{X_1}D)(X_2,X_3)X_4+(\nabla_{X_2}D)(X_3,X_1)X_4+(\nabla_{X_3}D)(X_1,X_2)X_4=0,$$
then $D$ is called a proper generalized curvature tensor. We note that a linear combination of generalized curvature tensors over $C^{\infty}(M)$ is again a generalized curvature tensor but it is not true for proper generalized curvature tensors, in general. However, if the linear combination is taken over $\mathbb R$, then it is true.\\
\indent Now for any (1,3) tensor $D$ (not necessarily generalized curvature tensor) and given two vector fields $X,Y\in\chi(M)$, one can define an endomorphism $\mathscr{D}(X,Y)$ by
$$\mathscr{D}(X,Y)(Z)=D(X,Y)Z, \ \ \forall\mbox{$Z\in\chi(M)$}.$$
Again, if $X,Y\in\chi(M)$ then for a (0,2) tensor $A$, one can define two endomorphisms $\mathscr{A}$ and $X \wedge_A Y$, by (\cite{D11}, \cite{D2}, \cite{D10})
$$g(\mathscr{A}(X),Y)=A(X,Y),$$
$$(X \wedge_A Y)Z = A(Y,Z)X - A(X,Z)Y, \ \forall\ \mbox{$Z \in \chi(M)$}.$$
Now for a $(0,k)$-tensor $T$, $k\geq 1$, and an endomorphism $\mathscr H$, one can operate $\mathscr H$ on $T$ to produce the
tensor $\mathscr H T$, given by (\cite{D11}, \cite{D2}, \cite{D10})
\begin{eqnarray*}\label{rdot}
(\mathscr{H} T)(X_1,X_2,\cdots,X_k) = -T(\mathscr{H}X_1,X_2,\cdots,X_k) - \cdots - T(X_1,X_2,\cdots,\mathscr{H}X_k).
\end{eqnarray*}
\indent We consider that the operation of $\mathscr H$ on a scalar is zero. In particular, $\mathscr H$ may be $\mathscr{D}(X,Y)$, $X \wedge_A Y$, $\mathscr{A}$ etc. In particular for $\mathscr H = \mathscr{D}(X,Y)$ and $\mathscr{H} = (X \wedge_A Y)$, we have (\cite{D11}, \cite{D2}, \cite{D10}, \cite{tac})
\begin{eqnarray*}\label{ddt}
(\mathscr D(X,Y) T)(X_1,X_2,\cdots,X_k)=-T(\mathscr D(X,Y)(X_1),X_2,\cdots,X_k) - \cdots - T(X_1,X_2,\cdots,\mathscr D(X,Y)(X_k))\\\nonumber
=- T(D(X,Y)X_1,X_2,\cdots,X_k) - \cdots - T(X_1,X_2,\cdots, D(X,Y)X_k),
\end{eqnarray*}
\begin{eqnarray*}\label{qgr}
((X \wedge_A Y) T)(X_1,X_2,\cdots,X_k)= -T((X \wedge_A Y)X_1,X_2,\cdots,X_k) - \cdots - T(X_1,X_2,\cdots,(X \wedge_A Y)X_k)\\\nonumber
= A(X, X_1) T(Y,X_2,\cdots,X_k) + \cdots + A(X, X_k) T(X_1,X_2,\cdots,Y)\\\nonumber
- A(Y, X_1) T(X,X_2,\cdots,X_k) - \cdots - A(Y, X_k) T(X_1,X_2,\cdots,X),
\end{eqnarray*}
where $X, Y, X_i \in \chi(M)$, $i = 1,2,\cdots,k$.\\
We denote the above tensor $(\mathscr D(X,Y) T)(X_1,X_2,\cdots,X_k)$ as $D\cdot T(X_1,X_2,\cdots,X_k,X,Y)$ and the tensor $((X \wedge_A Y) T)(X_1,X_2,\cdots,X_k)$ as $Q(A,T)(X_1,X_2,\cdots,X_k,X,Y)$.\\
\indent For a 1-form $\Pi$ and a vector field $X$ on $M$, we can define an endomorphism $\Pi_{_X}$ as
$$\Pi_{_X}(X_1) = \Pi(X_1)X, \ \mbox{$\forall X_1\in \chi(M)$}.$$
Then we can define the tensor $\Pi_{_X} T$ as follows:
\begin{eqnarray*}\label{pidot}
&&(\Pi_{_X} T)(X_1,X_2, \cdots, X_k)\\\nonumber
&&= -T(\Pi_{_X}(X_1),X_2, \cdots, X_k) - \cdots - T(X_1,X_2, \cdots, \Pi_{_X}(X_k)),\\\nonumber
&&= -\Pi(X_1)T(X,X_2, \cdots, X_k) -\Pi(X_2)T(X_1,X, \cdots, X_k)- \cdots -\Pi(X_k)T(X_1,X_2, \cdots, X),
\end{eqnarray*}
$\forall X, X_i\in\chi(M), i= 1, 2, \cdots,k$.
\section{\bf{Some geometric structures defined by curvature related operators}}\label{structures}
In this section we discuss some geometric structures which arise from curvature restrictions on a semi-Riemannian manifold. We are mainly interested in those geometric structures which are obtained by curvature restrictions imposed on $B$-tensors by means of some operators, e.g., symmetry, recurrency, pseudosymmetry etc. These operators are linear over $\mathbb{R}$, and may or may not be linear over $C^{\infty}(M)$, and are thus called $\mathbb{R}$-linear operators or simply linear operators. The linear operators which are not linear over $C^{\infty}(M)$ are said to be \emph{operators of the 1st type}, and those which are linear over $C^{\infty}(M)$ are said to be \emph{operators of the 2nd type}. Some important 1st type operators are symmetry, recurrency, weak symmetry (in the sense of Tam\'assy and Binh) etc., and some important 2nd type operators are semisymmetry, Deszcz pseudosymmetry, Ricci generalized pseudosymmetry etc. We denote the set of all tensor fields on $M$ of type $(r,s)$ by $\mathcal T^r_s$, and we take $\mathcal L$ to be any $\mathbb R$-linear operator such that the operation of $\mathcal L$ on $T \in \mathcal T^r_s$ is denoted by $\mathcal L\, T$.\\
\indent Another classification of such linear operators may be given with respect to their extendibility. These operators are originally imposed on $(0,4)$ curvature tensors, but the defining condition of some of them cannot be extended to an arbitrary $(0,k)$ tensor; e.g., the symmetry, semisymmetry and weak symmetry (all three types) operators are extendible, but the weakly generalized recurrency and hyper generalized recurrency operators are not. Again, extendible operators are classified into two subclasses: (i) operators which commute with contraction, called commutative, and (ii) operators which do not commute with contraction, called non-commutative; e.g., the symmetry and semisymmetry operators are commutative but the weak symmetry operators are non-commutative. Throughout this paper, by a commutative or non-commutative operator we mean a linear operator which does or does not commute with contraction. The tree diagram of the classification of linear operators imposed on (0,4) curvature tensors is given by:\\
\noindent $\put(20,0){\framebox(120,30){operators of 1st type}} \put(300,0){\framebox(120,30){operators of 2nd type}}$
\noindent $\put(80,-16){\vector(0,1){15}} \put(380,-16){\vector(0,1){15}}
\put(80,-16){\line(1,0){300}}
\put(225,-31){\vector(0,1){15}}$\\
$\put(120,0){\framebox(200,50){$\begin{array}{c}$\textbf{Class of linear operators defined}$\\$\textbf{on (0,4) curvature tensors}$\end{array}$}}
\put(225,-1){\vector(0,-1){15}}
\put(80,-16){\line(1,0){300}}
\put(80,-16){\vector(0,-1){15}} \put(380,-16){\vector(0,-1){15}}
\put(20,-61){\framebox(182,30){non-extendible for any (0,k) tensor}}
\put(290,-61){\framebox(160,30){extendible for any (0,k) tensor}}
\put(340,-62){\vector(0,-1){15}}
\put(160,-77){\line(1,0){240}}
\put(160,-77){\vector(0,-1){15}} \put(400,-77){\vector(0,-1){15}}
\put(110,-132){\framebox(140,40){$\begin{array}{c}$commute with contraction$\\$or commutative$\end{array}$}}
\put(320,-132){\framebox(160,40){$\begin{array}{c}$not commute with contraction$\\$or non-commutative$\end{array}$}}
$
\begin{defi}\cite{ca}
Consider the covariant derivative operator $\nabla_X : \mathcal T^0_k \rightarrow \mathcal T^0_{k+1}$. A semi-Riemannian manifold is said to be \textit{$T$-symmetric} if $\nabla_X T = 0$, for all $X\in \chi(M)$, $T\in \mathcal T^0_{k}$.
\end{defi}
\indent Obviously this operator is of 1st type and commutative. The condition for $T$-symmetry is written as $\nabla T = 0$.
\begin{defi}$($\cite{Ag}, \cite{Ru1}, \cite{Ru2}, \cite{Ru3}$)$
Consider the operator $\kappa_{(X,\Pi)}: \mathcal T^0_k \rightarrow \mathcal T^0_{k+1}$ defined by $\kappa_{(X,\Pi)} T = \nabla_X T - \Pi(X)\otimes T$, where $\otimes$ is the tensor product, $\Pi$ is a 1-form and $T\in \mathcal T^0_k$. A semi-Riemannian manifold is said to be \textit{$T$-recurrent} if $\kappa_{(X,\Pi)}T = 0$ for all $X\in \chi(M)$ and some 1-form $\Pi$, called the associated 1-form or the 1-form of recurrency.
\end{defi}
\indent Obviously this operator is of 1st type and commutative. The condition for $T$-recurrency is written as $\nabla T - \Pi\otimes T = 0$ or simply $\kappa T = 0$.\\
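\indent For instance, taking $T = R$, the $T$-symmetry condition is Cartan's local symmetry $\nabla R = 0$, while the $T$-recurrency condition gives the classical recurrent manifolds of Ruse and Walker recalled in Section \ref{intro}.\\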
\indent Keeping the commutativity property, we now state some generalizations of the symmetry and recurrency operators, which are respectively said to be \textit{symmetric type} and \textit{recurrent type} operators. For this purpose we denote the $s$-th covariant derivative as
$$\nabla_{X_1}\nabla_{X_2}\cdots\nabla_{X_s} = \nabla^{s}_{X_1 X_2\cdots X_s}.$$
Now the operator
$$L^s_{X_1 X_2 \cdots X_s} = \sum_{\sigma} \alpha_{\sigma}\nabla^{s}_{X_{\sigma(1)} X_{\sigma(2)}\cdots X_{\sigma(s)}}$$
is called a \textit{symmetric type operator of order $s$}, where $\sigma$ runs over the permutations of $\{1,2,...,s\}$, the sum is taken over the set of all such permutations and the $\alpha_{\sigma}$'s are scalars, not all zero.\\
\indent A manifold is called \textit{$T$-symmetric type of order $s$} if
\begin{equation}\label{eq4.4}
L^s_{X_1 X_2 \cdots X_s} T = 0 \ \ \ \forall\ X_1, X_2, \cdots X_s \in\chi(M) \mbox{ and some scalars } \alpha_{\sigma}.
\end{equation}
The scalars $\alpha_{\sigma}$ are called the associated scalars. The condition for $T$-symmetric type of order $s$ is written as $L^s T = 0$.\\
Again for some $(0,i)$ tensors $\Pi^i_{\sigma}$ (not all together zero), $i=0,1,2,\cdots,s$ and all permutations $\sigma$ over $\{1,2,...,s\}$ (i.e., $\Pi^0_{\sigma}$ are scalars), the operator
\begin{eqnarray*}
\kappa^s_{X_1 X_2 \cdots X_s} &=&\sum_{\sigma}\left[ \Pi^0_{\sigma} \nabla^{s}_{X_{\sigma(1)} X_{\sigma(2)}\cdots X_{\sigma(s)}}\right.\\
&& + \Pi^1_{\sigma}(X_{\sigma(1)}) \nabla^{s-1}_{X_{\sigma(2)} X_{\sigma(3)}\cdots X_{\sigma(s)}}\\
&& + \Pi^2_{\sigma}(X_{\sigma(1)},X_{\sigma(2)}) \nabla^{s-2}_{X_{\sigma(3)} X_{\sigma(4)}\cdots X_{\sigma(s)}}\\
&& + \cdots \ \ \ \cdots \ \ \ \cdots \ \ \ \cdots\\
&& + \Pi^{s-1}_{\sigma}(X_{\sigma(1)},X_{\sigma(2)},\cdots, X_{\sigma(s-1)}) \nabla_{X_{\sigma(s)}}\\
&& \left. + \Pi^s_{\sigma}(X_{\sigma(1)},X_{\sigma(2)},\cdots, X_{\sigma(s)})I_d\right],
\end{eqnarray*}
where $I_d$ is the identity operator, is called a \textit{recurrent type operator of order $s$}.\\
\indent A manifold is called \textit{$T$-recurrent type of order $s$} if it satisfies
\begin{equation}\label{eq4.5}
\kappa^s_{X_1 X_2 \cdots X_s} T = 0, \ \ \forall X_1, X_2, \cdots X_s \in\chi(M) \mbox{ and some $i$-forms $\Pi^i_{\sigma}$'s}, i=0,1,2,\cdots,s.
\end{equation}
The $i$-forms $\Pi^{i}_{\sigma}$ are called the associated $i$-forms. The $T$-recurrent type condition of order $s$ will simply be written as $\kappa^s T = 0$.\\
\indent Just as recurrency is a generalization of symmetry, the recurrent type condition is a generalization of the symmetric type condition. We note that these symmetric type and recurrent type operators are, in general, of the 1st type, but some of them may be of the 2nd type. For example, the semisymmetry operator $(\nabla_X\nabla_Y-\nabla_Y\nabla_X)$ is of symmetric type as well as of recurrent type and is also a 2nd type operator.\\
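\indent For instance, taking $s=2$ with $\alpha_{\mathrm{id}}=1$ and $\alpha_{(1\,2)}=-1$ in the definition above gives
$$L^2_{X_1 X_2} = \nabla^{2}_{X_1 X_2} - \nabla^{2}_{X_2 X_1} = \nabla_{X_1}\nabla_{X_2} - \nabla_{X_2}\nabla_{X_1},$$
which is exactly the semisymmetry operator just mentioned; similarly, for $s=1$ with $\Pi^0 = 1$ and $\Pi^1 = -\Pi$, the operator $\kappa^1$ reduces to the recurrency operator $\kappa_{(X,\Pi)}$.\\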
\indent In another direction, recurrency can be generalized to some other geometric structures, defined as follows:
\begin{defi}\cite{DUB}
Consider the operator $G\kappa_{(X,\Pi,\Phi)}: \mathcal T^0_4 \rightarrow \mathcal T^0_5$ defined by
$$G\kappa_{(X,\Pi,\Phi)} T = \nabla_X T - \Pi(X)\otimes T - \Phi(X)\otimes G,$$
$\Pi$ and $\Phi$ are 1-forms and $T$ is a $(0,4)$ tensor. A semi-Riemannian manifold is said to be generalized $T$-recurrent if $G\kappa_{(X,\Pi,\Phi)}T = 0$ for all $X\in\chi(M)$ and some 1-forms $\Pi$ and $\Phi$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-extendible.
\begin{defi}\cite{SP}
Consider the operator $H\kappa_{(X,\Pi,\Phi)}: \mathcal T^0_4 \rightarrow \mathcal T^0_5$ defined by
$$H\kappa_{(X,\Pi,\Phi)} T = \nabla_X T - \Pi(X)\otimes T - \Phi(X)\otimes g\wedge S,$$
$\Pi$ and $\Phi$ are 1-forms and $T$ is a $(0,4)$ tensor. A semi-Riemannian manifold is said to be hyper-generalized $T$-recurrent if $H\kappa_{(X,\Pi,\Phi)}T = 0$ for all $X\in\chi(M)$ and some 1-forms $\Pi$ and $\Phi$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-extendible.
\begin{defi}\cite{ROY}
Consider the operator $W\kappa_{(X,\Pi,\Phi)}: \mathcal T^0_4 \rightarrow \mathcal T^0_5$ defined by
$$W\kappa_{(X,\Pi,\Phi)} T = \nabla_X T - \Pi(X)\otimes T - \Phi(X)\otimes S\wedge S,$$
$\Pi$ and $\Phi$ are 1-forms and $T$ is a $(0,4)$ tensor. A semi-Riemannian manifold is said to be weakly generalized $T$-recurrent if $W\kappa_{(X,\Pi,\Phi)}T = 0$ for all $X\in\chi(M)$ and some 1-forms $\Pi$ and $\Phi$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-extendible.
\begin{defi}\cite{ROY1}
Consider the operator $Q\kappa_{(X,\Pi,\Phi,\Psi)}: \mathcal T^0_4 \rightarrow \mathcal T^0_5$ defined by
$$Q\kappa_{(X,\Pi,\Phi,\Psi)} T = \nabla_X T - \Pi(X)\otimes T - \Phi(X)\otimes [g\wedge (g + \Psi\otimes \Psi)],$$
$\Pi$, $\Phi$ and $\Psi$ are 1-forms and $T$ is a $(0,4)$ tensor. A semi-Riemannian manifold is said to be quasi generalized $T$-recurrent if $Q\kappa_{(X,\Pi,\Phi, \Psi)}T = 0$ for all $X\in\chi(M)$ and some 1-forms $\Pi$, $\Phi$ and $\Psi$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-extendible.
\begin{defi}
Consider the operator $S\kappa_{(X,\Pi,\Phi,\Psi,\Theta)}: \mathcal T^0_4 \rightarrow \mathcal T^0_5$ defined by
$$S\kappa_{(X,\Pi,\Phi,\Psi,\Theta)} T = \nabla_X T - \Pi(X)\otimes T - \Phi(X)\otimes G - \Psi(X)\otimes g\wedge S - \Theta(X)\otimes S\wedge S,$$
$\Pi$, $\Phi$, $\Psi$ and $\Theta$ are 1-forms and $T$ is a $(0,4)$ tensor. A semi-Riemannian manifold is said to be super generalized $T$-recurrent if $S\kappa_{(X,\Pi,\Phi,\Psi,\Theta)}T = 0$ for all $X\in\chi(M)$ and some 1-forms $\Pi$, $\Phi$, $\Psi$ and $\Theta$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-extendible.\\
\indent We now state another generalization of local symmetry, given as follows:
\begin{defi}\cite{CHA}
Consider the operator $CP_{(X,\Pi)}: \mathcal T^0_k \rightarrow \mathcal T^0_{k+1}$ defined by
$$CP_{(X,\Pi)} T = \nabla_X T - 2\Pi(X)\otimes T + \Pi_{_X} T,$$
where $\Pi$ is a 1-form and $T\in \mathcal T^0_k$. A semi-Riemannian manifold is said to be Chaki $T$-pseudosymmetric \cite{CHA} if $CP_{(X,\Pi)} T = 0$ for all $X\in\chi(M)$ and some 1-form $\Pi$, called the associated 1-form.
\end{defi}
\indent Obviously this operator is of 1st type and non-commutative.\\
\indent Again, in another way, Tam\'assy and Binh \cite{tb} generalized the recurrent and Chaki pseudosymmetric structures and called the resulting notion the weakly symmetric structure. There are, however, three types of weak symmetry \cite{SD}, which are given below:
\begin{defi}
Consider the operator $W^1_{(X,\stackrel{\sigma}{\Pi})}: \mathcal T^0_k \rightarrow \mathcal T^0_{k+1}$ defined by
$$W^1_{(X,\stackrel{\sigma}{\Pi})} T = (\nabla_{X}T)(X_2,X_3,...,X_{k+1}) - \sum_{\sigma}\stackrel{\sigma}{\Pi}(X_{\sigma(1)})T(X_{\sigma(2)},X_{\sigma(3)},...,X_{\sigma(k+1)}),$$
$\stackrel{\sigma}{\Pi}$ are 1-forms, $T\in \mathcal T^0_k$ and the sum includes all permutations $\sigma$ over the set $(1,2,...,k+1)$. A semi-Riemannian manifold $M$ is said to be weakly $T$-symmetric of type-I if $W^1_{(X,\stackrel{\sigma}{\Pi})} T = 0,$ for all $X\in\chi(M)$ and some 1-forms $\stackrel{\sigma}{\Pi}$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-commutative.
\begin{defi}
Consider the operator $W^2_{(X,\Phi,\Pi_i)}: \mathcal T^0_k \rightarrow \mathcal T^0_{k+1}$ defined by
\begin{eqnarray*}
(W^2_{(X,\Phi,\Pi_i)} T)(X_1,X_2,...,X_k) &=& (\nabla_X T)(X_1,X_2,...,X_k)\\\nonumber &-&\Phi(X)T(X_1,X_2,...,X_k)-\sum^k_{i=1}\Pi_i(X_i)T(X_1,X_2,...,\underset{i-th\ place}{X},...,X_k),
\end{eqnarray*}
where $\Phi$ and $\Pi_i$ are 1-forms and $T\in \mathcal T^0_k$. A semi-Riemannian manifold $M$ is said to be weakly $T$-symmetric of type-II if $W^2_{(X,\Phi,\Pi_i)} T = 0,$ for all $X\in\chi(M)$ and some 1-forms $\Phi$ and $\Pi_i$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-commutative.
\begin{defi}
Consider the operator $W^3_{(X,\Phi,\Pi)}: \mathcal T^0_k \rightarrow \mathcal T^0_{k+1}$ defined by
$$W^3_{(X,\Phi,\Pi)} T = \nabla_X T - \Phi(X)\otimes T + \Pi_{_X}T,$$
where $\Phi$ and $\Pi$ are 1-forms and $T\in \mathcal T^0_k$. A semi-Riemannian manifold $M$ is said to be weakly $T$-symmetric of type-III if $W^3_{(X,\Phi,\Pi)} T = 0,$ for all $X\in\chi(M)$ and two 1-forms $\Phi$ and $\Pi$, called the associated 1-forms.
\end{defi}
\indent Obviously this operator is of 1st type and non-commutative.\\
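\indent For instance, the type-III operator contains both of the structures it generalizes: taking $\Pi = 0$ in $W^3_{(X,\Phi,\Pi)}$ gives back the recurrency operator $\kappa_{(X,\Phi)}$, while taking $\Phi = 2\Pi$ gives the Chaki pseudosymmetry operator $CP_{(X,\Pi)}$.\\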
\indent The weak symmetry of type-II was first introduced by Tam\'assy and Binh \cite{tb}, and the other two types of weak symmetry can be deduced from type-II (see \cite{sti}). There is another notion of weak symmetry, introduced by Selberg \cite{sel}, which is totally different from this one, and the representation of such a structure by a curvature restriction is unknown till now. However, throughout our paper we consider weak symmetry in the sense of Tam\'assy and Binh \cite{tb}.
\begin{defi}
For a $(0,4)$ tensor $D$ consider the operator $\mathcal D(X,Y): \mathcal T^0_k \rightarrow \mathcal T^0_{k+2}$ defined by
$$(\mathcal D(X,Y) T)(X_1,X_2,\cdots,X_k) = (D\cdot T)(X_1,X_2,\cdots,X_k,X,Y).$$
A semi-Riemannian manifold is said to be $T$-semisymmetric type if $\mathcal D(X, Y) T = 0$ for all $X,Y\in\chi(M)$. This condition is also written as $D\cdot T = 0$.
\end{defi}
\indent Obviously this operator is of the 2nd type, and it is commutative or non-commutative according as $D$ is or is not skew-symmetric in the 3rd and 4th places, i.e., according as $D(X_1,X_2,X_3,X_4) = -D(X_1,X_2,X_4,X_3)$ holds or not. In particular, if we take $D = R$, then the manifold is called $T$-semisymmetric \cite{sz}.
\begin{defi}$($\cite{adm}, \cite{DR1}, \cite{DES}, \cite{des7}$)$
A semi-Riemannian manifold is said to be $T$-pseudosymmetric type if $(\sum_i c_i D_i)\cdot T = 0$, where $\sum_i c_i D_i$ is a linear combination of $(0,4)$ curvature tensors $D_i$'s over $C^{\infty}(M)$, $c_i\in C^{\infty}(M)$, called the associated scalars.
\end{defi}
\indent Obviously this operator is of the 2nd type, and it is commutative or non-commutative according as all the $D_i$'s are skew-symmetric in the 3rd and 4th places or not. Consider the special cases $(R - L\, G)\cdot T = 0$ and $(R - L\, X\wedge_S Y)\cdot T = 0$. These are known as the conditions of Deszcz $T$-pseudosymmetry (\cite{adm}, \cite{DR1}, \cite{DES}, \cite{des7}) and of Ricci generalized $T$-pseudosymmetry (\cite{DEF}, \cite{DEF1}) respectively. It is clear that the operator of Deszcz pseudosymmetry is commutative but that of Ricci generalized pseudosymmetry is non-commutative.
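\indent In the notation of Section \ref{pre}, these two conditions can equivalently be written as $R\cdot T = L\, Q(g,T)$ and $R\cdot T = L\, Q(S,T)$ respectively, since the operation of $X\wedge_g Y$ (resp.\ $X\wedge_S Y$) on $T$ is precisely $Q(g,T)(\cdots,X,Y)$ (resp.\ $Q(S,T)(\cdots,X,Y)$); cf.\ also Lemma \ref{lem3.5} below.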
\section{\bf{Some basic properties of the tensor $B$}}\label{basic}
In this section we discuss some basic well known properties of the tensor $B$.
\begin{lem}\label{lemsk}
An operator $\mathcal L$ is commutative if $\mathcal L g = 0$. Moreover if $\mathcal L$ is an endomorphism then this condition is equivalent to the condition that $\mathcal L$ is skew-symmetric i.e. $g(\mathcal L X, Y) = - g(X, \mathcal L Y)$ for all $X,Y\in\chi(M)$.
\end{lem}
\noindent\textbf{Proof:} If $\mathcal L g = 0$ then, without loss of generality, we may suppose that $T$ is a $(0,2)$ tensor, and we have
$$\mathcal L(\mathscr{C}(T)) = \mathcal L (g^{ij}T_{ij}) = g^{ij} (\mathcal L T_{ij}) = \mathscr{C}(\mathcal L T),$$
where $\mathscr{C}$ is the contraction operator.
Again if $\mathcal L$ is an endomorphism then for all $X,Y\in\chi(M)$, $\mathcal L g = 0$ implies
$$(\mathcal L g)(X, Y) = -g(\mathcal L X, Y)-g(X, \mathcal L Y) = 0$$
$$\Rightarrow g(\mathcal L X, Y) = - g(X, \mathcal L Y)$$
\begin{center}$\Rightarrow \mathcal L$ is skew-symmetric.\end{center}
From the last part of this lemma we can say that
\begin{lem}\label{lem5.22}
The curvature operator $\mathscr D(X,Y)$ formed by a $(0,4)$ tensor $D$ is commutative if and only if $D$ is skew-symmetric in 3rd and 4th places, i.e., $D(X_1,X_2,X_3,X_4)=-D(X_1,X_2,X_4,X_3)$, for all $X_1,X_2,X_3,X_4$.
\end{lem}
\begin{lem}\label{lem3.1}
The contraction and covariant derivative operators commute with each other.
\end{lem}
\begin{lem}\label{lem3.5}
$Q(g,T) = G\cdot T$.
\end{lem}
\noindent\textbf{Proof:} For a (0,k) tensor $T$, we have
\begin{eqnarray*}
Q(g,T)(X_1,X_2,\cdots X_k;X,Y)=((X\wedge_g Y)\cdot T)(X_1,X_2,\cdots X_k).
\end{eqnarray*}
Now $(X\wedge_g Y)(X_1,X_2) = G(X,Y,X_1,X_2)$, so the result follows.
\begin{lem}\label{lem3.6}
Let $D$ be a generalized curvature tensor. Then\\
(1) $D(X_1,X_2,X_1,X_2) = 0$ implies $D(X_1,X_2,X_3,X_4) = 0$,\\
(2) $(\mathcal L D)(X_1,X_2,X_1,X_2) = 0$ implies $(\mathcal L D)(X_1,X_2,X_3,X_4) = 0$, $\mathcal L$ is any linear operator.
\end{lem}
\noindent\textbf{Proof:} The results follow from Lemma 8.9 of \cite{lee} and hence we omit the proof.\\
\indent We now consider the tensor $B$, take the contraction over the $i$-th and $j$-th places and get the $(i$-$j)$-th contraction tensor $^{ij}S$ for $i, j \in \{1,2,3,4\}$ as
\begin{eqnarray}
&&\left\{\begin{array}{l}
^{12}S = (-a_1 + a_2 + a_3 + a_5 + a_6 + n a_7)S + r(a_4 + a_8 + a_9 + n a_{10})g = (^{12}p) S + (^{12}q) r g\\
^{13}S = (-a_0 + a_2 + a_4 + a_5 + n a_6 + a_7)S + r(a_3 + a_8 + n a_9 + a_{10})g = (^{13}p) S + (^{13}q) r g\\
^{14}S = (a_0 + a_1 + n a_2 + a_3 + a_4 + a_6 + a_7)S + r(a_5 + n a_8 + a_9 + a_{10})g = (^{14}p) S + (^{14}q) r g\\
^{23}S = (a_0 + a_1 + a_3 + a_4 + n a_5 + a_6 + a_7)S + r(a_2 + n a_8 + a_9 + a_{10})g = (^{23}p) S + (^{23}q) r g\\
^{24}S = (-a_0 + a_2 + n a_3 + a_4 + a_5 + a_7)S + r(a_6 + a_8 + n a_9 + a_{10})g = (^{24}p) S + (^{24}q) r g\\
^{34}S = (-a_1 + a_2 + a_3 + n a_4 + a_5 + a_6)S + r(a_7 + a_8 + a_9 + n a_{10})g = (^{34}p) S + (^{34}q) r g
\end{array}\right.
\end{eqnarray}
Again, contracting all the $^{ij}S$ we get $^{ij}r$ for $i, j \in \{1,2,3,4\}$ as
\begin{eqnarray}
&&\left\{\begin{array}{l}
^{12}r = {}^{34}r = (-a_1 + a_2 + a_3 + n a_4 + a_5 + a_6+ n a_7+ n a_8+ n a_9 + n^2 a_{10}) r\\
^{13}r = {}^{24}r = (-a_0 + a_2 + n a_3 + a_4 + a_5 + n a_6 + a_7 + n a_8+ n^2 a_9 + n a_{10}) r\\
^{14}r = {}^{23}r = (a_0 + a_1 + n a_2 + a_3 + a_4 + n a_5 + a_6 + a_7 + n^2 a_8 + n a_9 + n a_{10}) r
\end{array}\right.
\end{eqnarray}
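As an illustration of these formulas, consider the conformal curvature tensor $C$ with its coefficients from Table \ref{tab-B}. Then, for example,
$$^{14}p = 1 - \frac{n}{n-2} + \frac{1}{n-2} + \frac{1}{n-2} = 0, \qquad
^{14}q = -\frac{1}{n-2} + \frac{n}{(n-1)(n-2)} - \frac{1}{(n-1)(n-2)} = 0,$$
so that $^{14}S = 0$, and similar computations show that every contraction $^{ij}S$ of $C$ vanishes. For the concircular curvature tensor $W$ one finds instead $^{14}p = 1$ and $^{14}q = -\frac{1}{n}$, i.e., $^{14}S = S - \frac{r}{n}\, g$ and $^{14}p + n\, (^{14}q) = 0$. These computations anticipate the classification given in Section \ref{Main}.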
\begin{lem}\label{lem3.8}
$(i)$ If $S = 0$, then $B = 0$ if and only if $R = 0$.\\
$(ii)$ If $\mathcal L S = 0$, then $\mathcal L B = 0$ if and only if $\mathcal L R = 0$, where $\mathcal L$ is a commutative 1st type operator and $a_i$'s are constant.\\
$(iii)$ If $\mathcal L S = 0$, then $\mathcal L B = 0$ if and only if $\mathcal L R = 0$, where $\mathcal L$ is a commutative 2nd type operator.
\end{lem}
\begin{lem}\label{lem3.9}
The tensor $B$ is a generalized curvature tensor if and only if
\begin{eqnarray}\label{eq5.3}
&&\left\{\begin{array}{c}
a_1 = a_4 = a_7 = a_{10} = 0,\\
a_2 = - a_3 = a_5 = - a_6 \mbox{ and } a_8 = -a_9.
\end{array}\right.
\end{eqnarray}
\end{lem}
\noindent\textbf{Proof:} $B$ is a generalized curvature tensor if and only if
\begin{eqnarray}\label{eq3.1}
&&\left\{\begin{array}{c}
B(X_1, X_2, X_3, X_4) + B(X_2, X_1, X_3, X_4) = 0,\\
B(X_1, X_2, X_3, X_4) - B(X_3, X_4, X_1, X_2) = 0,\\
B(X_1, X_2, X_3, X_4) + B(X_2, X_3, X_1, X_4) + B(X_3, X_1, X_2, X_4) = 0.
\end{array}\right.
\end{eqnarray}
Solving the above equations we get the result.\\
\indent Thus if $B$ is a generalized curvature tensor then $B$ can be written as
\begin{equation}\label{eqgen}
B = b_0 R + b_1 g\wedge S + b_2 r g\wedge g,
\end{equation}
where $b_0$, $b_1$ and $b_2$ are scalars.\\
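For instance, using the coefficients of the conformal curvature tensor $C$ from Table \ref{tab-B} together with (\ref{KN}), one checks that $C$ is of the form (\ref{eqgen}):
$$C = R - \frac{1}{n-2}\, g\wedge S + \frac{r}{2(n-1)(n-2)}\, g\wedge g,$$
i.e., $b_0 = 1$, $b_1 = -\frac{1}{n-2}$ and $b_2 = \frac{1}{2(n-1)(n-2)}$.\\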
\indent We note that the equation $B(X_1, X_2, X_3, X_4) + B(X_2, X_3, X_1, X_4) + B(X_3, X_1, X_2, X_4) = 0$ can be omitted from the system of equations (\ref{eq3.1}) keeping the solution unaltered. Thus the tensor $B$ turns out to be a generalized curvature tensor if and only if
\begin{eqnarray*}
&B(X_1, X_2, X_3, X_4) + B(X_2, X_1, X_3, X_4) = 0,&\\
&B(X_1, X_2, X_3, X_4) - B(X_3, X_4, X_1, X_2) = 0.&
\end{eqnarray*}
\begin{lem}\label{lem3.12}
The tensor $B$ is a proper generalized curvature tensor if and only if $B$ is some constant multiple of $R$.
\end{lem}
\noindent\textbf{Proof:} Let $B$ be a proper generalized curvature tensor. Then obviously $B$ is a generalized curvature tensor and hence it can be written as
$$B = b_0 R + b_1 g\wedge S + b_2 r g\wedge g,$$
where $b_0$, $b_1$ and $b_2$ are scalars. Now neither $g\wedge S$ nor $r\, (g\wedge g)$ is a proper generalized curvature tensor. Hence for the tensor $B$ to be a proper generalized curvature tensor, the scalars $b_1$ and $b_2$ must be zero (since $R$, $g\wedge S$ and $r\, (g\wedge g)$ are independent). Then $B = b_0 R$. Now from the condition of being a proper generalized curvature tensor, we get that $b_0$ is a constant. This completes the proof.
\begin{lem}\label{lem3.13}
The endomorphism operator $\mathscr B(X,Y)$ is skew-symmetric if $$a_2 = - a_6, \ a_3 = - a_5, \ a_8 = - a_9, \ a_1 = a_4 = a_7 = a_{10} = 0.$$
\end{lem}
\noindent\textbf{Proof:} From the Lemma \ref{lem5.22}, the operator $\mathscr B$ is skew-symmetric if
$$B(X_1,X_2,X_3,X_4) = -B(X_1,X_2,X_4,X_3) \mbox{ for all } X_1,X_2,X_3,X_4.$$
Thus the result follows from the solution of the equation $B(X_1, X_2, X_3, X_4) + B(X_1, X_2, X_4, X_3) = 0$.
\section{\bf{Main Results}}\label{Main}
In this section we first classify the set $\mathscr B$ of all $B$-tensors with respect to contraction and then find out the equivalency of some structures for the members of each class. This classification can be expressed in a tree diagram as follows:\\
$\put(115,0){\framebox(130,30){Class of $B$-tensors ($\mathscr B$)}}
\put(180,-1){\vector(0,-1){15}}
\put(80,-16){\line(1,0){200}}
\put(80,-16){\vector(0,-1){15}} \put(280,-16){\vector(0,-1){15}}
\put(30,-61){\framebox(100,30){All $(^{ij}S) = 0$}}
\put(230,-61){\framebox(100,30){Some $(^{ij}S) \neq 0$}}
\put(280,-62){\vector(0,-1){15}}
\put(100,-77){\line(1,0){240}}
\put(100,-77){\vector(0,-1){15}} \put(340,-77){\vector(0,-1){15}}
\put(50,-122){\framebox(100,30){All $(^{ij}p) = 0$}}
\put(290,-122){\framebox(100,30){Some $(^{ij}p) \neq 0$}}
\put(340,-122){\vector(0,-1){15}}
\put(160,-137){\line(1,0){240}}
\put(160,-137){\vector(0,-1){15}} \put(400,-137){\vector(0,-1){15}}
\put(110,-182){\framebox(100,30){All $(^{ij}r) = 0$}}
\put(350,-182){\framebox(100,30){Some $(^{ij}r) \neq 0$}}\\
$
Thus we get four different classes of $B$-tensors with respect to contraction given as follows:\\
(i) {\bf \textit{Class 1}:} In this class $(^{ij}S) = 0$ for all $i,j \in \{1,2,3,4\}$. Then we get the dependency of the $a_i$'s for this class as
\begin{eqnarray}\label{eq4.1}
&&\left\{\begin{array}{c}
a_0 = -a_9(n-2)(n-1),\ a_1 = a_7(n-2),\\
a_2 = a_5 = -a_7 + (n-1)a_9,\ a_3 = a_6 = -(n-1)a_9,\ a_4 = a_7,\\
a_8 = -a_9 + \frac{a_7}{(n-1)},\ a_{10} = -\frac{a_7}{(n-1)}.
\end{array}\right.
\end{eqnarray}
\indent An example of a $B$-tensor of this class is the conformal curvature tensor $C$. We take $C$ as the representative member of this class.\\
(ii) {\bf \textit{Class 2}:} In this class $(^{ij}S) \neq 0$ for some $i,j \in \{1,2,3,4\}$ but $(^{ij}p) = 0$ for all $i,j \in \{1,2,3,4\}$. The dependency of the $a_i$'s for this class is that (\ref{eq4.1}) is not satisfied (i.e., at least one of $a_4 + a_8 + a_9 + n a_{10},\ a_3 + a_8 + n a_9 + a_{10},\ a_5 + n a_8 + a_9 + a_{10},\ a_2 + n a_8 + a_9 + a_{10},\ a_6 + a_8 + n a_9 + a_{10},\ a_7 + a_8 + a_9 + n a_{10}$ is non-zero) but
\begin{eqnarray}\label{eq4.2}
&&a_0 = a_6(n-2),\ a_1 = a_7(n-2),\ a_2 = a_5 = -a_6 - a_7,\ a_3 = a_6,\ a_4 = a_7.
\end{eqnarray}
\indent An example of a $B$-tensor of this class is the conharmonic curvature tensor $K$. We take $K$ as the representative member of this class.\\
(iii) {\bf \textit{Class 3}:} In this class $(^{ij}p) \neq 0$ for some $i,j \in \{1,2,3,4\}$ but $(^{ij}r) = 0$ for all $i,j \in \{1,2,3,4\}$. For this class the $a_i$'s do not satisfy (\ref{eq4.1}) but
\begin{eqnarray}\label{eq4.3}
&&\left\{\begin{array}{c}
a_0 = (n-1)(a_3 + a_6 + n a_9), \ a_8 = -\frac{a_1 + (n-1)(a_2 + a_3 + a_5 + a_6 + n a_9)}{n(n-1)},\\
a_{10} = \frac{n(a_1 - (n-1)(a_4 + a_7))}{n(n-1)}.
\end{array}\right.
\end{eqnarray}
\indent Examples of $B$-tensors of this class are $W$, $P$, $\mathcal M$, $P^*$, $\mathcal W_0$, $\mathcal W_1$, $\mathcal W^*_3$. We take $W$ as the representative member of this class. We note that in this case $(^{ij}p) + n (^{ij}q) = 0$.\\
(iv) {\bf \textit{Class 4}:} In this class $(^{ij}p) \neq 0$ and $(^{kl}r) \neq 0$ for some $i,j,k,l \in \{1,2,3,4\}$. For this class the $a_i$'s satisfy neither (\ref{eq4.2}) nor (\ref{eq4.3}).\\
\indent Examples of $B$-tensors of this class are $R$, $\mathcal W_0^*$, $\mathcal W_1^*$, $\mathcal W_2$, $\mathcal W_2^*$, $\mathcal W_3$, $\mathcal W_4$, $\mathcal W_4^*$, $\mathcal W_5$, $\mathcal W_5^*$, $\mathcal W_6$, $\mathcal W_6^*$, $\mathcal W_7$, $\mathcal W_7^*$, $\mathcal W_8$, $\mathcal W_8^*$, $\mathcal W_9$, $\mathcal W_9^*$. We take $R$ as the representative member of this class.\\
\indent We first discuss linear combinations of $B$-tensors over $C^{\infty}(M)$. We consider two $B$-tensors $\bar B$ and $\tilde B$ with their $(i$-$j)$-th contraction tensors $(^{ij}\bar p) S + (^{ij}\bar q) r g$ and $(^{ij}\tilde p) S + (^{ij}\tilde q) r g$ respectively. Now consider a linear combination $\acute{B} = \mu \bar B + \eta \tilde B$ of $\bar B$ and $\tilde B$, where $\mu$ and $\eta$ are two scalars. Then
$\acute B$ is a $B$-tensor with $(i$-$j)$-th contraction tensor $(^{ij}\acute p) S + (^{ij}\acute q) r g$, where $(^{ij}\acute p) = \mu\, (^{ij}\bar p) + \eta\, (^{ij}\tilde p)$ and $(^{ij}\acute q) = \mu\, (^{ij}\bar q) + \eta\, (^{ij}\tilde q)$.
Now if both $\bar B$ and $\tilde B$ belong to class 1, then $(^{ij}\bar p) = (^{ij}\bar q) = (^{ij}\tilde p) = (^{ij}\tilde q) =0$ and thus $(^{ij}\acute p) = (^{ij}\acute q) =0$, i.e., $\acute B$ also remains a member of class 1. So class 1 is closed under linear combination over $C^{\infty}(M)$.
If both $\bar B$ and $\tilde B$ belong to class 2, then $(^{ij}\bar p) = (^{ij}\tilde p) = 0$ for all $i,j$ while $(^{ij}\bar q)$ and $(^{ij}\tilde q)$ do not vanish for all $i,j$, and thus $(^{ij}\acute p) = 0$. Now if $(^{ij}\acute q) = 0$ for all $i,j$, then $\acute B$ belongs to class 1, otherwise it remains a member of class 2. Here the condition $(^{ij}\acute q) = 0$ can be expressed explicitly as
\begin{eqnarray}\label{c1}
&\left(\bar a_8+\bar a_9+\bar a_{10}\right) \mu+\left(\tilde a_8+\tilde a_9+\tilde a_{10}\right) \eta =0,&\\\nonumber
&(\mu\bar a_0 +\eta\tilde a_0) + (n-1)(n-2)(\mu\bar a_9 +\eta\tilde a_9) = 0,&\\\nonumber
&(\mu\bar a_2 +\eta\tilde a_2) = (n-1)\left[\mu(\bar a_9+\bar a_{10}) +\eta(\tilde a_9+\tilde a_{10})\right].&
\end{eqnarray}
Again, if both $\bar B$ and $\tilde B$ belong to class 3, then $(^{ij}\bar p) + n(^{ij}\bar q) = (^{ij}\tilde p) + n(^{ij}\tilde q) = 0$ for all $i,j$ while $(^{ij}\bar p)$ and $(^{ij}\tilde p)$ do not vanish for all $i,j$, and thus $(^{ij}\acute p) + n(^{ij}\acute q) = 0$. Now if $(^{ij}\acute p) = 0$ or $(^{ij}\acute q) = 0$ for all $i,j$, then $\acute B$ belongs to class 1, otherwise it remains a member of class 3. Here the conditions $(^{ij}\acute p) = 0$ and $(^{ij}\acute q) = 0$ are the same and can be expressed explicitly as
\begin{eqnarray}\label{c2}
&\left(\bar a_8+\bar a_9+\bar a_{10}\right) \mu+\left(\tilde a_8+\tilde a_9+\tilde a_{10}\right) \eta =0,&\\\nonumber
&(\mu\bar a_0 +\eta\tilde a_0) = (\mu\bar a_2 +\eta\tilde a_2) = (\mu\bar a_3 +\eta\tilde a_3) = (\mu\bar a_5 +\eta\tilde a_5) = -(n-1)(n-2)(\mu\bar a_9 +\eta\tilde a_9),&\\\nonumber
&(\mu\bar a_4 +\eta\tilde a_4) = (n-1)\left[\mu(\bar a_8+\bar a_9) +\eta(\tilde a_8+\tilde a_9)\right].&
\end{eqnarray}
We also note that if both $\bar B$ and $\tilde B$ belong to class 4, then $\acute B$ may belong to any one of the classes, according to the defining conditions. Now if $\bar B$ belongs to class 1 then $\acute B$ belongs to the same class as $\tilde B$. If $\bar B$ belongs to class 4 then $\acute B$ is of class 4 whether $\tilde B$ belongs to class 2 or to class 3. Again, if $\bar B$ belongs to class 2 and $\tilde B$ belongs to class 3, then obviously $\acute B$ becomes a member of class 4. Thus we can state the following:
\begin{thm}
$(i)$ Linear combinations of any two members of class 1 over $C^{\infty}(M)$ are the members of class 1.\\
$(ii)$ Linear combinations of any two members of class 2 over $C^{\infty}(M)$ are the members of class 1 or class 2 according as (\ref{c1}) holds or does not hold.\\
$(iii)$ Linear combinations of any two members of class 3 over $C^{\infty}(M)$ are the members of class 1 or class 3 according as (\ref{c2}) holds or does not hold.\\
$(iv)$ Linear combinations of any two members of class 4 over $C^{\infty}(M)$ may belong to any class among the four classes.\\
$(v)$ Linear combinations of any member of class 1 with any other member of any one of the remaining three classes over $C^{\infty}(M)$ belongs to the latter class.\\
$(vi)$ Linear combinations of any member of class 4 with any other member of any one of the remaining three classes over $C^{\infty}(M)$ belongs to class 4.\\
$(vii)$ Linear combinations of any member of class 2 with any other member of class 3 over $C^{\infty}(M)$ are members of class 4.
\end{thm}
\indent From the above we note that the tensor $B$ belongs to one of the four classes according to the dependency of the coefficients $a_i$. The $\mathcal T$-curvature tensor and $C'$ may also belong to any class. The curvature tensors $C^*$, $\widetilde W$ and $W^*$ are combinations of two or more other $B$-tensors and thus they may belong to different classes according to the coefficients of such combinations. Thus $C^*$ is a member of class 3 if $a_0 + (n-2)a_2 \neq 0$, otherwise it reduces to the conformal curvature tensor and becomes a member of class 1. Again, $\widetilde W$ is a member of class 3 if $a_0 -a_2 + (n-1)a_5 \neq 0$, otherwise it belongs to class 1. And $W^*$ is a member of class 4 if $b \neq 0$, otherwise it belongs to class 3. We also note that $P^*$ is a combination of $P$ and $W$, both of which are in class 3, and $P^*$ remains a member of class 3.\\
\indent We now discuss the equivalency of flatness, symmetric type, recurrent type, semisymmetric type and various other curvature restrictions for the above four classes of $B$-tensors.
\begin{thm}\label{thm4.1}
The flatness of any $B$-tensor of a given class among the classes $1$--$4$ is equivalent to the flatness of the representative member of that class. (Thus the flatness conditions of all $B$-tensors within each class are equivalent.)
\end{thm}
\noindent\textbf{Proof:} We first consider that the tensor $B$ belongs to class 1 i.e. $a_i$'s satisfy (\ref{eq4.1}). We have to show $B = 0$ if and only if $C = 0$. Now
\begin{eqnarray}\label{e1}
\nonumber B_{ijkl}-(a_0 C_{ijkl} + a_1 C_{ikjl}) &=& \frac{a_0 \left(-g_{jl}S_{ik}+g_{jk} S_{il}+g_{il} S_{jk}-g_{ik} S_{jl}\right)}{(n-2)}-\frac{a_0 \left(g_{il} g_{jk}-g_{ik} g_{jl}\right) r}{(n-2) (n-1)}\\
&&+\frac{a_1 \left(-g_{kl} S_{ij}+g_{jk} S_{il}+g_{il} S_{jk}-g_{ij} S_{kl}\right)}{n-2}-\frac{a_1 \left(g_{il} g_{jk}-g_{ij} g_{kl}\right) r}{(n-1) (n-2)}\\
\nonumber &&+a_2 g_{il} S_{jk}+a_3 g_{jl} S_{ik}+a_4 g_{kl} S_{ij}+a_5 g_{jk} S_{il}+a_6 g_{ik} S_{jl}+a_7 g_{ij} S_{kl}\\
\nonumber &&+r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right).
\end{eqnarray}
As $B$ is of class 1 so simplifying the above and using (\ref{eq4.1}) we get $B_{ijkl} - (a_0 C_{ijkl} + a_1 C_{ikjl}) = 0$. Again $(a_0 C_{ijkl} + a_1 C_{ikjl}) = 0$ if and only if $C_{ijij} = 0$, i.e. if and only if $C_{ijkl} = 0$ (by Lemma \ref{lem3.6}). Thus we get $B=0$ if and only if $C = 0$.\\
\indent Next we consider that the tensor $B$ belongs to class 2 i.e. $a_i$'s satisfy (\ref{eq4.2}) but not (\ref{eq4.1}). We have to show $B = 0$ if and only if $K = 0$. Now
\begin{eqnarray*}
B_{ijkl} -(a_0 K_{ijkl} + a_1 K_{ikjl}) &=& \frac{a_0 \left(-g_{jl}S_{ik}+g_{jk} S_{il}+g_{il} S_{jk}-g_{ik} S_{jl}\right)}{n-2}\\
&+&\frac{a_1 \left(-g_{kl} S_{ij}+g_{jk} S_{il}+g_{il} S_{jk}-g_{ij} S_{kl}\right)}{n-2}\\
&+&a_2 g_{il} S_{jk}+a_3 g_{jl} S_{ik}+a_4 g_{kl} S_{ij}+a_5 g_{jk} S_{il}+a_6 g_{ik} S_{jl}+a_7 g_{ij} S_{kl}\\
&+&r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right).
\end{eqnarray*}
Since $B$ is of class 2, simplifying the above and using (\ref{eq4.2}) we get
\begin{equation}\label{e2}
B_{ijkl} -(a_0 K_{ijkl} + a_1 K_{ikjl}) = r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right).
\end{equation}
Now, since $B$ and $K$ are both of class 2, the vanishing of either $B$ or $K$ implies $r = 0$ and then
$$B_{ijkl} -(a_0 K_{ijkl} + a_1 K_{ikjl}) = 0.$$
Again, $(a_0 K_{ijkl} + a_1 K_{ikjl}) = 0$ if and only if $K_{ijij} = 0$, i.e. if and only if $K_{ijkl} = 0$ (by Lemma \ref{lem3.6}). Thus from (\ref{e2}), we get $B=0$ if and only if $K = 0$.\\
\indent Again we consider that the tensor $B$ belongs to class 3, i.e., the $a_i$'s satisfy (\ref{eq4.3}) but not (\ref{cor4.1}). We have to show $B = 0$ if and only if $W = 0$. Now
\begin{eqnarray*}
B_{ijkl} -(a_0 W_{ijkl} + a_1 W_{ikjl}) &=&
\frac{a_0 \left(g_{jk} g_{il}-g_{jl}g_{ik}+g_{il} g_{jk}-g_{ik} g_{jl}\right)}{n(n-1)}\\
&+&\frac{a_1 \left(g_{jk} g_{il}-g_{kl} g_{ij}+g_{il} g_{jk}-g_{ij} g_{kl}\right)}{n(n-1)}\\
&+&a_2 g_{il} S_{jk}+a_3 g_{jl} S_{ik}+a_4 g_{kl} S_{ij}+a_5 g_{jk} S_{il}+a_6 g_{ik} S_{jl}+a_7 g_{ij} S_{kl}\\
&+&r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right).
\end{eqnarray*}
Since $B$ is of class 3, simplifying the above and using (\ref{eq4.3}) we get
\begin{eqnarray}\label{e3}
\nonumber B_{ijkl} -(a_0 W_{ijkl} + a_1 W_{ikjl}) &=&
\frac{r}{n} [\left(a_2 g_{il} g_{jk}+a_3 g_{ik} g_{jl}+a_4 g_{ij} g_{kl}+a_5 g_{il} g_{jk}+a_6 g_{ik} g_{jl}+a_7 g_{ij} g_{kl}\right)]\\
&-& [a_2 g_{il} S_{jk}+a_3 g_{jl} S_{ik}+a_4 g_{kl} S_{ij}+a_5 g_{jk} S_{il}+a_6 g_{ik} S_{jl}+a_7 g_{ij} S_{kl}].
\end{eqnarray}
Now, since $B$ and $W$ are both of class 3, the vanishing of either $B$ or $W$ implies $S = \frac{r}{n} g$ and then
$$B_{ijkl} -(a_0 W_{ijkl} + a_1 W_{ikjl}) = 0.$$
Again, $(a_0 W_{ijkl} + a_1 W_{ikjl}) = 0$ if and only if $W_{ijij} = 0$, i.e., if and only if $W_{ijkl} = 0$ (by Lemma \ref{lem3.6}). Thus from (\ref{e3}), we get $B=0$ if and only if $W = 0$.\\
\indent Finally, we consider that the tensor $B$ belongs to class 4. We have to show $B = 0$ if and only if $R = 0$. Now, since $B$ and $R$ are both of class 4, the vanishing of either $B$ or $R$ implies $S = 0$. Thus by Lemma \ref{lem3.8} we can conclude that $B = 0$ if and only if $R = 0$.
This completes the proof.\\
\indent From the proof of the above theorem we can state the following:
\begin{cor}\label{cor4.1}
If $B$ belongs to any class out of the above four classes then $R = 0$ implies $B = 0$ and $B = 0$ implies $C = 0$.
\end{cor}
\noindent \textbf{Proof:} We know that $R=0$ implies $S= 0$ and $r=0$. So the first part of the proof is obvious. Again, we know that each of $R=0$, $W=0$ and $K=0$ individually implies $C=0$. So by the above theorem the proof of the second part is done.\\
\indent We now discuss the above four classes of $B$-tensors as equivalence classes of an equivalence relation on $\mathscr B$, the set of all $B$-tensors. Consider a relation $\rho$ on $\mathscr B$ defined by $B_1 \rho B_2$ if and only if $B_1$-flat (i.e. $B_1 = 0$) $\Leftrightarrow$ $B_2$-flat (i.e. $B_2 = 0$) , for all $B_1, B_2 \in \mathscr B$. It can be easily shown that $\rho$ is an equivalence relation. We conclude from Theorem \ref{thm4.1} that all $B$-tensors of class 1 are related to the conformal curvature tensor $C$, all $B$-tensors of class 2 are related to the conharmonic curvature tensor $K$, all $B$-tensors of class 3 are related to the concircular curvature tensor $W$, all $B$-tensors of class 4 are related to the Riemann-Christoffel curvature tensor $R$. Thus class 1 is the $\rho$-equivalence class [$C$], class 2 is the $\rho$-equivalence class [$K$], class 3 is the $\rho$-equivalence class [$W$] and class 4 is the $\rho$-equivalence class [$R$].
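As an illustrative reading of this partition (a special case of Theorem \ref{thm4.1}, stated only for concreteness): since $P^*$ and, when $a_0-a_2+(n-1)a_5\neq 0$, $\widetilde W$ belong to class 3 $=[W]$, we have
$$P^*=0\ \Longleftrightarrow\ W=0\ \Longleftrightarrow\ \widetilde W=0.$$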
\begin{thm}\label{cl-1}
$($Characteristic of class $1)$ $(i)$ All tensors of class $1$ are of the form
$$a_0 C_{ijkl} + a_1 C_{ikjl}$$
and the only generalized curvature tensor of this class is the conformal curvature tensor up to a scalar multiple.\\
$(ii)$ All curvature restrictions of type $1$ on any $B$-tensor of class $1$ are equivalent, if $a_0$ and $a_1$ are constants.\\
$(iii)$ All curvature restrictions of type $2$ on any $B$-tensor of class $1$ are equivalent.
\end{thm}
\noindent \textbf{Proof:} We see that if $B$ is of class 1, then from (\ref{e1}), $B_{ijkl} = \left[a_0 C_{ijkl} + a_1 C_{ikjl}\right]$. Now for $B$ to be a generalized curvature tensor, (\ref{eq5.3}) must be fulfilled, and we get the form of the generalized curvature tensor of this class.\\
Again, considering any curvature restriction by an operator $\mathcal L$ on $B$, we have $\mathcal L B = 0$, which implies $\mathcal L\left[a_0 C_{ijkl} + a_1 C_{ikjl}\right] = 0$. Then by Lemma \ref{lem3.6} we get the result.
\begin{thm}\label{cl-2}
$($Characteristic of class $2)$ $(i)$ All tensors of class $2$ are of the form
$$a_0 K_{ijkl} + a_1 K_{ikjl} + r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right)$$
such that $a_8 = \frac{a_0+a_1}{(n-2) (n-1)}, a_9 = -\frac{a_0}{(n-2) (n-1)}, a_{10} = -\frac{a_1}{(n-2) (n-1)}$ are not all satisfied simultaneously, otherwise it belongs to class $1$. The generalized curvature tensors of this class are of the form $a_0 K + a_8 r G$, $a_8 \neq \frac{a_0}{(n-1) (n-2)}$.\\
$(ii)$ All commutative curvature restrictions of type $1$ on any $B$-tensor of class $2$ are equivalent, if $a_0$, $a_1$, $a_8$, $a_9$ and $a_{10}$ are constants.\\
$(iii)$ All commutative curvature restrictions of type $2$ on any $B$-tensor of class $2$ are equivalent.
\end{thm}
\noindent \textbf{Proof:} We see that if $B$ is of class 2, then from (\ref{e2}),
$$B_{ijkl} = a_0 K_{ijkl} + a_1 K_{ikjl} + r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right).$$
Now for $B$ to be a generalized curvature tensor, (\ref{eq5.3}) must be fulfilled, and we get the form of the generalized curvature tensor of this class as required.\\
Again, consider any curvature restriction by a commutative operator $\mathcal L$ on $B$, i.e., $\mathcal L B = 0$, which implies
$$\mathcal L \left[a_0 K_{ijkl} + a_1 K_{ikjl} + r \left(a_8 g_{il} g_{jk}+a_9 g_{ik} g_{jl}+a_{10} g_{ij} g_{kl}\right)\right] = 0.$$
Now if $\mathcal L$ is of first type and $a_0$, $a_1$, $a_8$, $a_9$ and $a_{10}$ are constants, then $\mathcal L[a_0 K_{ijkl} + a_1 K_{ikjl}]=0$ and if $\mathcal L$ is of second type then automatically $\mathcal L[a_0 K_{ijkl} + a_1 K_{ikjl}]=0$. Thus by Lemma \ref{lem3.6} we get the result.
\begin{thm}\label{cl-3}
$($Characteristic of class $3)$ $(i)$ All tensors of class $3$ are of the form
\begin{eqnarray*}
&&(a_0 W_{ijkl} + a_1 W_{ikjl}) +\left[a_2 g_{il} S_{jk} + a_3 g_{jl} S_{ik} + a_4 g_{kl} S_{ij} + a_5 g_{jk} S_{il} + a_6 g_{ik} S_{jl} + a_7 g_{ij} S_{kl}\right]\\
&& - \frac{r}{n}\left[(a_2+a_5) g_{il} g_{jk} + (a_3+a_6) g_{jl} g_{ik} + (a_4+a_7) g_{kl} g_{ij}\right]
\end{eqnarray*}
such that $a_2 = a_5 = -\frac{a_0 + a_1}{n-2}, a_3 = a_6 = \frac{a_0}{n-2}, a_4 = a_7 = \frac{a_1}{n-2}$ are not all satisfied simultaneously, otherwise it belongs to class $1$. The generalized curvature tensors of this class are of the form $a_0 W + a_2 \left[g\wedge S - \frac{r}{n}G\right]$, $a_2 \neq \frac{a_0}{(n-1)(n-2)}$.\\
$(ii)$ All commutative curvature restrictions of type $1$ on any $B$-tensor of class $3$ are equivalent, if $a_0$, $a_1$, $a_2$, $a_3$, $a_4$, $a_5$, $a_6$ and $a_7$ are constants.\\
$(iii)$ All commutative curvature restrictions of type $2$ on any $B$-tensor of class $3$ are equivalent.
\end{thm}
\noindent \textbf{Proof:} The proof is similar to the proof of the Theorem \ref{cl-2}.
\begin{thm}\label{cl-4}
$($Characteristic of class $4)$ $(i)$ All commutative curvature restrictions of type $1$ on any $B$-tensor of class $4$ are equivalent, if $a_i$'s are all constants.\\
$(ii)$ All commutative curvature restrictions of type $2$ on any $B$-tensor of class $4$ are equivalent.
\end{thm}
\noindent \textbf{Proof:} Consider a commutative operator $\mathcal L$ and a tensor $B$ of class 4 such that $\mathcal L B = 0$. Now if $\mathcal L$ is of commutative 1st type and all $a_i$'s are constant, then by taking contraction we get $\mathcal L S = 0$ and $\mathcal L (r) = 0$, as the $a_i$'s are all constant. Putting these in the expression of $\mathcal L B = 0$, we get $\mathcal L R = 0$. Again, if $\mathcal L$ is of commutative 2nd type, then contraction yields $\mathcal L S = 0$ and $\mathcal L(r) = 0$. Substituting these in the expression of $\mathcal L B = 0$, we get $\mathcal L R = 0$. This completes the proof.\\
\indent From the proofs of the above four characteristic theorems, in analogy with Corollary \ref{cor4.1}, we can state the following:
\begin{cor}
Let the tensor $B$ belong to any one of the four classes and let $\mathcal L$ be a commutative operator. If $\mathcal L$ is of type 2, then\\
(i) $\mathcal L R = 0$ implies $\mathcal L B = 0$ and\\
(ii) $\mathcal L B = 0$ implies $\mathcal L C = 0$.\\
The results are also true for the case of type 1 if the coefficients of $B$ are all constant.
\end{cor}
\noindent \textbf{Proof:} First consider the case where $\mathcal L$ is of 2nd type and commutative. Then $\mathcal L R=0$ implies $\mathcal L S= 0$ and $\mathcal L r=0$. So the first part of the proof is obvious. Again, we can easily check that each of $\mathcal L R=0$, $\mathcal L W=0$ and $\mathcal L K=0$ individually implies $\mathcal L C=0$. So by the above four characteristic theorems the proof of the second part is done.\\ The proof for the other case (i.e., $\mathcal L$ of type 1 and commutative with the coefficients of $B$ all constant) is similar to the above.\\
\indent We now state some results which will be used to show the coincidence of class 3 and class 4 for the symmetry and recurrency conditions.
\begin{lem}\label{lem5.1}\cite{RT}
Locally symmetric and projectively symmetric semi-Riemannian manifolds are equivalent.
\end{lem}
\begin{lem}\label{lem5.2}$($\cite{Glodek}, \cite{ol}, \cite{mik}, \cite{mik2}$)$
Every concircularly recurrent as well as projective recurrent manifold is necessarily a recurrent manifold with the same recurrence form.
\end{lem}
From the above four characteristic theorems of the four classes and Lemmas \ref{lem5.1} and \ref{lem5.2}, we can state the results in a table for 1st type operators such that all conditions in a block of the following table are equivalent.
\begin{table}[H]
\begin{center}
{\footnotesize\begin{tabular}{|c|c|c|c|}\hline
\multicolumn{1}{|c|}{\textbf{Class 1}} & \multicolumn{1}{|c|}{\textbf{Class 2}} & \multicolumn{1}{|c|}{\textbf{Class 3}} & \multicolumn{1}{|c|}{\textbf{Class 4}}\\\hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{$W = 0$, $P = 0$, $\mathcal M = 0$,}&
\multicolumn{1}{|c|}{$R = 0$, $\mathcal W_0^* = 0$, $\mathcal W_1^* = 0$, $\mathcal W_2 = 0$,}\\
\multicolumn{1}{|c|}{$C = 0$}&
\multicolumn{1}{|c|}{$K = 0$}&
\multicolumn{1}{|c|}{$P^* = 0$, $\mathcal W_0 = 0$,}&
\multicolumn{1}{|c|}{$\mathcal W_2^* = 0$, $\mathcal W_3 = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{$\mathcal W_1 = 0$, $\mathcal W_3^* = 0$}&
\multicolumn{1}{|c|}{$\mathcal W_i = 0$, $\mathcal W_i^* = 0$, for all $i = 4,5,\cdots 9$}\\\hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|r}{$\nabla W = 0$, $\nabla P = 0$, $\nabla \mathcal M = 0$,}&
\multicolumn{1}{l|}{$\nabla R = 0$, $\nabla \mathcal W_0^* = 0$, $\nabla \mathcal W_1^* = 0$, $\nabla \mathcal W_2 = 0$,}\\
\multicolumn{1}{|c|}{$\nabla C = 0$}&
\multicolumn{1}{|c|}{$\nabla K = 0$}&
\multicolumn{1}{|r}{$\nabla P^* = 0$, $\nabla \mathcal W_0 = 0$,}&
\multicolumn{1}{l|}{$\nabla \mathcal W_2^* = 0$, $\nabla \mathcal W_3 = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|r}{$\nabla \mathcal W_1 = 0$, $\nabla \mathcal W_3^* = 0$,}&
\multicolumn{1}{l|}{$\nabla \mathcal W_i = 0$, $\nabla \mathcal W_i^* = 0$, for all $i = 4,5,\cdots 9$}\\\hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|r}{$\kappa W = 0$, $\kappa P = 0$, $\kappa \mathcal M = 0$,}&
\multicolumn{1}{l|}{$\kappa R = 0$, $\kappa \mathcal W_0^* = 0$, $\kappa \mathcal W_1^* = 0$, $\kappa \mathcal W_2 = 0$,}\\
\multicolumn{1}{|c|}{$\kappa C = 0$}&
\multicolumn{1}{|c|}{$\kappa K = 0$}&
\multicolumn{1}{|r}{$\kappa P^* = 0$, $\kappa \mathcal W_0 = 0$,}&
\multicolumn{1}{l|}{$\kappa \mathcal W_2^* = 0$, $\kappa \mathcal W_3 = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|r}{$\kappa \mathcal W_1 = 0$, $\kappa \mathcal W_3^* = 0$,}&
\multicolumn{1}{l|}{$\kappa \mathcal W_i = 0$, $\kappa \mathcal W_i^* = 0$, for all $i = 4,5,\cdots 9$}\\\hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{$L^s W = 0$, $L^s P = 0$,}&
\multicolumn{1}{|c|}{$L^s R = 0$, $L^s \mathcal W_0^* = 0$, $L^s \mathcal W_1^* = 0$,}\\
\multicolumn{1}{|c|}{$L^s C = 0$}&
\multicolumn{1}{|c|}{$L^s K = 0$}&
\multicolumn{1}{|c|}{$L^s \mathcal M = 0$, $L^s P^* = 0$,}&
\multicolumn{1}{|c|}{$L^s \mathcal W_2 = 0$, $L^s \mathcal W_2^* = 0$, $L^s \mathcal W_3 = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{$L^s \mathcal W_0 = 0$, $L^s \mathcal W_1 = 0$,}&
\multicolumn{1}{|c|}{$L^s \mathcal W_i = 0$, $L^s \mathcal W_i^* = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{$L^s \mathcal W_3^* = 0$}&
\multicolumn{1}{|c|}{for all $i = 4,5,\cdots 9$}\\\hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{$\kappa^s W = 0$, $\kappa^s P = 0$,}&
\multicolumn{1}{|c|}{$\kappa^s R = 0$, $\kappa^s \mathcal W_0^* = 0$, $\kappa^s \mathcal W_1^* = 0$,}\\
\multicolumn{1}{|c|}{$\kappa^s C = 0$}&
\multicolumn{1}{|c|}{$\kappa^s K = 0$}&
\multicolumn{1}{|c|}{$\kappa^s \mathcal M = 0$, $\kappa^s P^* = 0$,}&
\multicolumn{1}{|c|}{$\kappa^s \mathcal W_2 = 0$, $\kappa^s \mathcal W_2^* = 0$, $\kappa^s \mathcal W_3 = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{$\kappa^s \mathcal W_0 = 0$, $\kappa^s \mathcal W_1 = 0$,}&
\multicolumn{1}{|c|}{$\kappa^s \mathcal W_i = 0$, $\kappa^s \mathcal W_i^* = 0$,}\\
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{ }&
\multicolumn{1}{|c|}{$\kappa^s \mathcal W_3^* = 0$}&
\multicolumn{1}{|c|}{for all $i = 4,5,\cdots 9$}\\\hline
\end{tabular}}
\end{center}
\caption{List of equivalent structures for 1st type operators}\label{stc-1}
\end{table}
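\noindent For example, reading Table \ref{stc-1} (this is merely a restatement of one of its blocks): in the second row the class 3 and class 4 cells form a single block, so all conditions listed there are equivalent; in particular
$$\nabla P=0\ \Longleftrightarrow\ \nabla W=0\ \Longleftrightarrow\ \nabla R=0,$$
in accordance with Lemmas \ref{lem5.1} and \ref{lem5.2}, while $\nabla C=0$ and $\nabla K=0$ form separate blocks.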
\indent We now prove a theorem for the coincidence of class 1 with class 2 and of class 3 with class 4 for any commutative 2nd type operator.
\begin{thm}\label{thm5.6}
Let $\mathcal L$ be a commutative operator (i.e., $\mathcal L$ and contraction commute) of type $2$. Then the following hold:\\
(i) For any two $B$-tensors $B_1$ and $B_2$ of class $1$ and $2$ respectively, the conditions $\mathcal L B_1 = 0$ and $\mathcal L B_2 = 0$ are equivalent.\\
(ii) For any two $B$-tensors $B_1$ and $B_2$ of class $3$ and $4$ respectively, the conditions $\mathcal L B_1 = 0$ and $\mathcal L B_2 = 0$ are equivalent.
\end{thm}
\noindent \textbf{Proof:} From the above four characteristic theorems it is clear that to prove this theorem it is sufficient to show that, for a commutative 2nd type operator $\mathcal L$, $\mathcal L C = \mathcal L K$ and $\mathcal L W = \mathcal L R$. First consider $C$ and $K$. Then for any operator $\mathcal L$,
$$\mathcal L C = \mathcal L K + \mathcal L \left(\frac{r}{(n-1)(n-2)} G\right).$$
Thus if $\mathcal L$ is a commutative 2nd type operator, then $\mathcal L \left(\frac{r}{(n-1)(n-2)} G\right) = 0$ and hence (i) is proved.\\
Again considering $R$ and $W$, we get
$$\mathcal L W = \mathcal L R + \mathcal L \left(\frac{r}{n(n-1)} G\right).$$
So if $\mathcal L$ is a commutative 2nd type operator, then $\mathcal L \left(\frac{r}{n(n-1)} G\right) = 0$ and hence (ii) is proved.\\
\indent From the above four characteristic theorems of the classes and Theorem \ref{thm5.6}, we can state the results in a table for 2nd type operators such that all conditions in a block of the following table are equivalent.
\begin{table}[H]
\begin{center}
{\footnotesize
\begin{tabular}{|c|c|c|c|}\hline
\multicolumn{2}{|c|}{\textbf{Class 1 and Class 2}}&
\multicolumn{2}{|c|}{\textbf{Class 3 and Class 4}}\\\hline
\multicolumn{2}{|c|}{$R\cdot C=0$,}&
\multicolumn{2}{|c|}{$R\cdot W = 0$, $R\cdot P = 0$, $R\cdot P^* = 0$, $R\cdot \mathcal M = 0$, $R\cdot R = 0$,}\\
\multicolumn{2}{|c|}{$R\cdot K=0$}&
\multicolumn{2}{|c|}{$R\cdot \mathcal W_i = 0$, $R\cdot \mathcal W_i^* = 0$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$C\cdot C=0$,}&
\multicolumn{2}{|c|}{$C\cdot W = 0$, $C\cdot P = 0$, $C\cdot P^* = 0$, $C\cdot \mathcal M = 0$, $C\cdot R = 0$,}\\
\multicolumn{2}{|c|}{$C\cdot K=0$}&
\multicolumn{2}{|c|}{$C\cdot \mathcal W_i = 0$, $C\cdot \mathcal W_i^* = 0$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$K\cdot C=0$,}&
\multicolumn{2}{|c|}{$K\cdot W = 0$, $K\cdot P = 0$, $K\cdot P^* = 0$, $K\cdot \mathcal M = 0$, $K\cdot R = 0$,}\\
\multicolumn{2}{|c|}{$K\cdot K=0$}&
\multicolumn{2}{|c|}{$K\cdot \mathcal W_i = 0$, $K\cdot \mathcal W_i^* = 0$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$W\cdot C=0$,}&
\multicolumn{2}{|c|}{$W\cdot W = 0$, $W\cdot P = 0$, $W\cdot P^* = 0$, $W\cdot \mathcal M = 0$, $W\cdot R = 0$,}\\
\multicolumn{2}{|c|}{$W\cdot K=0$}&
\multicolumn{2}{|c|}{$W\cdot \mathcal W_i = 0$, $W\cdot \mathcal W_i^* = 0$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$R\cdot C=L Q(g,C)$,}&
\multicolumn{2}{|c|}{$R\cdot W = L Q(g,W)$, $R\cdot P = L Q(g,P)$, $R\cdot P^* = L Q(g,P^*)$, $R\cdot \mathcal M = L Q(g,\mathcal M)$,}\\
\multicolumn{2}{|c|}{$R\cdot K=L Q(g,K)$}&
\multicolumn{2}{|c|}{$R\cdot R = L Q(g,R)$, $R\cdot \mathcal W_i =L Q(g,\mathcal W_i)$, $R\cdot \mathcal W_i^* =L Q(g,\mathcal W_i^*)$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$C\cdot C=L Q(g,C)$,}&
\multicolumn{2}{|c|}{$C\cdot W = L Q(g,W)$, $C\cdot P = L Q(g,P)$, $C\cdot P^* = L Q(g,P^*)$, $C\cdot \mathcal M = L Q(g,\mathcal M)$,}\\
\multicolumn{2}{|c|}{$C\cdot K=L Q(g,K)$}&
\multicolumn{2}{|c|}{$C\cdot R = L Q(g,R)$, $C\cdot \mathcal W_i =L Q(g,\mathcal W_i)$, $C\cdot \mathcal W_i^* =L Q(g,\mathcal W_i^*)$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$W\cdot C=L Q(g,C)$,}&
\multicolumn{2}{|c|}{$W\cdot W = L Q(g,W)$, $W\cdot P = L Q(g,P)$, $W\cdot P^* = L Q(g,P^*)$, $W\cdot \mathcal M = L Q(g,\mathcal M)$,}\\
\multicolumn{2}{|c|}{$W\cdot K=L Q(g,K)$}&
\multicolumn{2}{|c|}{$W\cdot R = L Q(g,R)$, $W\cdot \mathcal W_i =L Q(g,\mathcal W_i)$, $W\cdot \mathcal W_i^* =L Q(g,\mathcal W_i^*)$, for all $i = 0,1,\cdots 9$}\\\hline
\multicolumn{2}{|c|}{$K\cdot C=L Q(g,C)$,}&
\multicolumn{2}{|c|}{$K\cdot W = L Q(g,W)$, $K\cdot P = L Q(g,P)$, $K\cdot P^* = L Q(g,P^*)$, $K\cdot \mathcal M = L Q(g,\mathcal M)$,}\\
\multicolumn{2}{|c|}{$K\cdot K=L Q(g,K)$}&
\multicolumn{2}{|c|}{$K\cdot R = L Q(g,R)$, $K\cdot \mathcal W_i =L Q(g,\mathcal W_i)$, $K\cdot \mathcal W_i^* =L Q(g,\mathcal W_i^*)$, for all $i = 0,1,\cdots 9$}\\\hline
\end{tabular}}
\end{center}
\caption{List of equivalent structures for 2nd type operators}\label{stc-2}
\end{table}
\indent Thus from the Table \ref{stc-2} we can state the following:
\begin{cor}\label{cor6.5}
$(1)$ The conditions $R\cdot R = 0$, $R\cdot W = 0$ and $R\cdot P = 0$ are equivalent.\\
$(2)$ The conditions $C\cdot R = 0$, $C\cdot W = 0$ and $C\cdot P = 0$ are equivalent.\\
$(3)$ The conditions $W\cdot R = 0$, $W\cdot W = 0$ and $W\cdot P = 0$ are equivalent.\\
$(4)$ The conditions $K\cdot R = 0$, $K\cdot W = 0$ and $K\cdot P = 0$ are equivalent.\\
$(5)$ The conditions $R\cdot C = 0$ and $R\cdot K = 0$ are equivalent.\\
$(6)$ The conditions $C\cdot C = 0$ and $C\cdot K = 0$ are equivalent.\\
$(7)$ The conditions $W\cdot C = 0$ and $W\cdot K = 0$ are equivalent.\\
$(8)$ The conditions $K\cdot C = 0$ and $K\cdot K = 0$ are equivalent.
\end{cor}
\indent It may be mentioned that the first four results of Corollary \ref{cor6.5} were proved in Theorem 3.3 of \cite{PV} in another way.
\begin{cor}
$(1)$ The conditions $R\cdot R = L_1 Q(g,R)$, $R\cdot W = L_1 Q(g,W)$ and $R\cdot P = L_1 Q(g,P)$ are equivalent.\\
$(2)$ The conditions $C\cdot R = L_2 Q(g,R)$, $C\cdot W = L_2 Q(g,W)$ and $C\cdot P = L_2 Q(g,P)$ are equivalent.\\
$(3)$ The conditions $W\cdot R = L_3 Q(g,R)$, $W\cdot W = L_3 Q(g,W)$ and $W\cdot P = L_3 Q(g,P)$ are equivalent.\\
$(4)$ The conditions $K\cdot R = L_4 Q(g,R)$, $K\cdot W = L_4 Q(g,W)$ and $K\cdot P = L_4 Q(g,P)$ are equivalent.\\
$(5)$ The conditions $R\cdot C = L_5 Q(g,C)$ and $R\cdot K = L_5 Q(g,K)$ are equivalent.\\
$(6)$ The conditions $C\cdot C = L_6 Q(g,C)$ and $C\cdot K = L_6 Q(g,K)$ are equivalent.\\
$(7)$ The conditions $W\cdot C = L_7 Q(g,C)$ and $W\cdot K = L_7 Q(g,K)$ are equivalent.\\
$(8)$ The conditions $K\cdot C = L_8 Q(g,C)$ and $K\cdot K = L_8 Q(g,K)$ are equivalent.\\
Here $L_i$, $(i=1,2,\cdots, 8)$ are scalars.
\end{cor}
We note that here the operator $\mathcal P$ for the projective curvature tensor is not considered, as $P$ is not skew-symmetric in the 3rd and 4th places, i.e., $\mathcal P$ is not commutative.
\section{\bf{Conclusion}}
From the above discussion we see that the set $\mathscr B$ of all $B$-tensors can be partitioned into four equivalence classes [$C$] (or class 1), [$K$] (or class 2), [$W$] (or class 3) and [$R$] (or class 4) under the equivalence relation $\rho$ such that $B_1 \rho B_2$ holds if and only if $B_1 = 0$ implies $B_2 = 0$ and $B_2 = 0$ implies $B_1 = 0$, where $B_1, B_2 \in \mathscr B$. We conclude that\\
(i) the study of any curvature restriction of type 1 (such as symmetric type, recurrent type, super generalized recurrent) on any $B$-tensor of class 1 with constant $a_i$'s is equivalent to the study of such a restriction on the conformal curvature tensor $C$, and likewise any curvature restriction of type 2 (such as semisymmetric type, pseudosymmetric type) on any $B$-tensor of class 1 is equivalent to the study of such a restriction on $C$. Thus for all such restrictions, each $B$-tensor of class 1 gives the same structure as that due to $C$.\\
(ii) the study of symmetric type and recurrent type curvature restrictions on any $B$-tensor of class 2 with constant $a_i$'s is equivalent to the study of such restrictions on the conharmonic curvature tensor $K$. The study of commutative semisymmetric type and commutative pseudosymmetric type curvature restrictions on any $B$-tensor of class 2 is equivalent to the study of such restrictions on the conformal curvature tensor $C$. Moreover, each commutative first type curvature restriction on any $B$-tensor of class 2 with constant coefficients gives rise to only one structure, i.e., the structure due to $K$. Also, each commutative second type curvature restriction on any $B$-tensor of class 2 gives rise to the same structure as that due to $C$.\\
(iii) the study of symmetric type and recurrent type curvature restrictions on any $B$-tensor of class 3 with constant $a_i$'s is equivalent to the study of such restrictions on the concircular curvature tensor $W$. Again, the studies of locally symmetric, recurrent, commutative semisymmetric type and commutative pseudosymmetric type curvature restrictions on any $B$-tensor of class 3 are equivalent to the studies of such restrictions on the Riemann-Christoffel curvature tensor $R$. Moreover, each commutative first type curvature restriction on any $B$-tensor of class 3 with constant coefficients gives rise to only one structure, i.e., the structure due to $W$. Also, each commutative second type curvature restriction on any $B$-tensor of class 3 gives rise to the same structure as that due to $R$.\\
(iv) the study of symmetric type and recurrent type curvature restrictions on any $B$-tensor of class 4 with constant $a_i$'s is equivalent to the study of such restrictions on the Riemann-Christoffel curvature tensor $R$. The study of commutative semisymmetric type and commutative pseudosymmetric type curvature restrictions on any $B$-tensor of class 4 is equivalent to the study of such restrictions on the curvature tensor $R$. Moreover, each commutative first type curvature restriction on any $B$-tensor of class 4 with constant coefficients gives rise to only one structure, i.e., the structure due to $R$. Also, each commutative second type curvature restriction on any $B$-tensor of class 4 gives rise to the same structure as that due to $R$.\\
\indent Finally, we also conclude that for the future study of any kind of curvature restriction (discussed earlier) on the various curvature tensors, it suffices to study such a restriction on the tensor $B$ alone, and as particular cases we can obtain the results for the various curvature tensors. We also mention that to study various curvature restrictions on the tensor $B$, it suffices to consider the form of $B$ given in (\ref{eqgen}) rather than the larger form given in (\ref{tensor}).\\
\indent However, the problem of comparing the various structures for two $B$-tensors belonging to different classes still remains open for further study.\\
\noindent \textbf{Acknowledgement:} The second named author gratefully acknowledges CSIR, New Delhi [File No. 09/025 (0194)/2010-EMR-I], for financial assistance.
\begin{thebibliography}{99}\baselineskip=16pt
\bibitem{adm}
Adam$\grave{\mbox{o}}$w, A. and Deszcz, R., \emph{On totally umbilical submanifolds of some class of Riemannian manifolds}, Demonstratio Math., \textbf{16} (1983), 39-59.
\bibitem{CAH}
Cahen, M. and Parker, M., \emph{Sur des classes d'espaces pseudo-riemanniens symmetriques}, Bull. Soc. Math. Belg., {\bf 22} (1970), 339-354.
\bibitem{CAH1}
Cahen, M. and Parker, M., \emph{Pseudo-Riemannian symmetric spaces}, Mem. Amer. Math. Soc., {\bf 24} (1980).
\bibitem{ca}
Cartan, E., \emph{Sur une classe remarquable d'espaces de Riemann}, Bull. Soc. Math. France, \textbf{54} (1926), 214-264.
\bibitem{CHA}
Chaki, M. C., \emph{On pseudosymmetric manifolds}, An. Stiint. Ale Univ., AL. I. CUZA Din Iasi Romania, {\bf 33} (1987), 53-58.
\bibitem{LC}
Chongshan, L., \emph{On concircular transformations in Riemannian spaces}, J. Aust. Math. Soc. (Sr. A), {\bf 40} (1986), 218-225.
\bibitem{DEF}
Defever, F. and Deszcz, R., \emph{On semi-Riemannian manifolds satisfying the condition $R.R = Q(S,R)$}, Geometry and Topology of Submanifolds, III, World Sci., {\bf 1991}, 108-130.
\bibitem{DEF1}
Defever, F. and Deszcz, R., \emph{On warped product manifolds satisfying a certain curvature condition}, Atti. Acad. Peloritana Cl. Sci. Fis. Mat. Natur., {\bf 69} (1991), 213-236.
\bibitem{D0}
Defever, F., Deszcz, R., Hotlo$\acute{\mbox{s}}$, M., Kucharski, M. and Sent$\ddot{\mbox{u}}$rk, Z., \emph{Generalisations of Robertson-Walker spaces}, Annales Univ. Sci. Budapest. E$\ddot{\mbox{o}}$tv$\ddot{\mbox{o}}$s Sect. Math., {\bf 43} (2000), 13--24.
\bibitem{DA1}
Desai, P. and Amur, K., \emph{On $W$-recurrent spaces}, Tensor N. S., {\bf 29} (1975), 98-102.
\bibitem{DA2}
Desai, P. and Amur, K., \emph{On symmetric spaces}, Tensor N. S., {\bf 29} (1975), 185-199.
\bibitem{DR1}
Deszcz, R., \emph{Notes on totally umbilical submanifolds}, in: Geometry and Topology of Submanifolds, Luminy, May 1987, World Sci. Publ., Singapore, \textbf{1989}, 89-97.
\bibitem{DES}
Deszcz, R., \emph{On pseudosymmetric spaces}, Bull. Belg. Math. Soc., Series A, {\bf 44} (1992), 1-34.
\bibitem{D11}
Deszcz, R. and Glogowska, M., \emph{Some examples of nonsemisymmetric Ricci-semisymmetric
hypersurfaces}, Colloq. Math., {\bf 94}(2002), 87--101.
\bibitem{D2}
Deszcz, R., Glogowska, M., Hotlo$\acute{\mbox{s}}$, M. and
Sent$\ddot{\mbox{u}}$rk, Z., \emph{On certain quasi-Einstein semi-symmetric hypersurfaces}, Annales Univ. Sci. Budapest. E$\ddot{\mbox{o}}$tv$\ddot{\mbox{o}}$s Sect.
Math., {\bf 41}(1998), 151--164.
\bibitem{DGHS} Deszcz, R., G\l ogowska, M., Hotlo\'{s}, M. and Sawicz, K.,
\emph{A Survey on Generalized Einstein Metric Conditions},
Advances in Lorentzian Geometry:
Proceedings of the Lorentzian Geometry Conference in Berlin,
AMS/IP Studies in Advanced Mathematics {\bf{49}}, S.-T. Yau (series ed.),
M. Plaue, A.D. Rendall and M. Scherfner (eds.), 2011, 27--46.
\bibitem{des7}
Deszcz, R. and Grycak, W., \emph{On some class of warped product manifolds}, Bull. Inst. Math. Acad. Sinica, \textbf{15} (1987), 311-322.
\bibitem{D10}
Deszcz, R. and Hotlo$\acute{\mbox{s}}$, M., \emph{On hypersurfaces with type number two
in spaces of constant curvature}, Annales Univ. Sci.
Budapest. E$\ddot{\mbox{o}}$tv$\ddot{\mbox{o}}$s Sect. Math., {\bf
46} (2003), 19--34.
\bibitem{DUB}
Dubey, R. S. D., \emph{Generalized recurrent spaces}, Indian J. Pure Appl. Math., {\bf 10} (1979), 1508-1513.
\bibitem{sti}
Ewert-Krzemieniewski, S., \emph{On some generalisation of recurrent manifolds}, Math. Pannonica, \textbf{4/2} (1993), 191-203.
\bibitem{RG}
Garai, R. K., \emph{On recurrent spaces of first order}, Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, \textbf{26(4)} (1972), 889-909.
\bibitem{Glodek}
Glodek, E., \emph{A note on riemannian spaces with recurrent projective curvature}, Pr. nauk. Inst. matem. i fiz. teor. Wr. Ser. stud. i mater. \textbf{1} (1970), 9-12.
\bibitem{GLOG}
Glogowska, M., \emph{Semi-Riemannian manifolds whose Weyl tensor is a Kulkarni-Nomizu square}, Publ. Inst. Math. (Beograd), {\bf 72:86}(2002), 95-106.
\bibitem{GLOG1}
Glogowska, M., \emph{On quasi-Einstein Cartan type hypersurfaces}, J. Geom. Phys., {\bf 58}(2008), 599-614.
\bibitem{BG}
Gupta, B., \emph{On projective-symmetric spaces}, J. Austral. Math. Soc., {\bf 4(1)}(1964), 113-121.
\bibitem{ishi}
Ishii, Y., \emph{On conharmonic transformation}, Tensor N. S., \textbf{7} (1957), 73-80.
\bibitem{Kow2}
Kowalczyk, D.,
\emph{On the Reissner-Nordstr\"{o}m-de Sitter type spacetimes}, Tsukuba J. Math. {\bf{30}} (2006), 263--281.
\bibitem{lee}
Lee, J. M., \emph{Riemannian Manifold, an Introduction to Curvature}, Springer (1997).
\bibitem{mik}
Mike\v{s}, J., \emph{Geodesic mappings of affine-connected and Riemannian spaces} (English), J. Math. Sci., New York \textbf{78(3)} (1996), 311-333.
\bibitem{mik2}
Mike\v{s}, J., Van\v{z}urov\'{a}, A. and Hinterleitner, I. \emph{Geodesic mappings and some generalisations}, p. 160, Olomouc \textbf{2009}.
\bibitem{ol}
Olszak, K. and Olsak, Z., \emph{On pseudo-Riemannian manifolds with recurrent concircular curvature tensor}, Acta Math. Hungar., \textbf{137(1-2)} (2012), 64-71.
\bibitem{PV}
Petrovi\'c-Torgasev, M. and Verstraelen, L., \emph{On the concircular curvature tensor, the projective curvature tensor and the Einstein curvature tensor of Bochner-Kaehler manifolds}, Math. Rep. Toyama Univ., {\bf 10} (1987), 37-61.
\bibitem{pokh1}
Pokhariyal, G. P., \emph{Relativistic significance of Curvature tensors}, Int. J. Math. Math. Sci., \textbf{5(1)} (1982), 133-139.
\bibitem{pokh2}
Pokhariyal, G. P. and Mishra R. S., \emph{Curvature tensor and their relativistic significance}, Yokohama Math. J., \textbf{18} (1970), 105-108.
\bibitem{pokh3}
Pokhariyal, G. P. and Mishra R. S., \emph{Curvature tensor and their relativistic significance II}, Yokohama Math. J., \textbf{19(2)} (1971), 97-103.
\bibitem{prasad}
Prasad, B., \emph{A pseudo projective Curvature tensor on a Riemannian manifold}, Bull Calcutta Math. Soc., \textbf{94(3)} (2002), 163-166.
\bibitem{prasad2}
Prasad, B. and Maurya, A., \emph{Quasi concircular Curvature tensor on a Riemannian manifold}, Bull Calcutta Math. Soc., \textbf{30} (2007), 5-6.
\bibitem{prasad3}
Prasad, B., Doulo, K., and Pandey, P. N., \emph{Generalized quasi-conformal Curvature tensor on a Riemannian manifold}, Tensor N.S., \textbf{73} (2011), 188-197.
\bibitem{RL}
Rahaman, M. S. and Lal, S., \emph{On the concircular curvature tensor of Riemannian manifolds}, International Centre for Theoretical Physics, \textbf{1990}.
\bibitem{RT}
Reynolds, R. F. and Thompson, A. H., \emph{Projective-symmetric spaces}, J. Australian Math. Soc., \textbf{7(1)} (1967), 48-54.
\bibitem{Ru1}
Ruse, H. S., \emph{On simply harmonic spaces}, J. London Math. Soc., {\bf 21} (1946), 243-247.
\bibitem{Ru2}
Ruse, H. S., \emph{On simply harmonic `kappa spaces' of four dimensions}, Proc. London Math. Soc., {\bf 50} (1949), 317-329.
\bibitem{Ru3}
Ruse, H. S., \emph{Three dimensional spaces of recurrent curvature},Proc. London Math. Soc., {\bf 50} (1949), 438-446.
\bibitem{sel}
Selberg, A., \emph{Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications
to Dirichlet series}, Indian J. Math., \textbf{20} (1956), 47-87.
\bibitem{SD}
Shaikh, A. A., Deszcz, R., Hotlo\'s, M., Je\l owicki and Kundu, H., \emph{On pseudosymmetric manifolds}, Preprint.
\bibitem{SJ}
Shaikh, A. A. and Jana, S. K.,\emph{A Pseudo quasi-conformal curvature tensor on a Riemannian manifold}, South East Asian J. Math. Math. Sci., \textbf{4(1)} (2005), 15-20.
\bibitem{SP}
Shaikh, A. A. and Patra, A., \emph{On a generalized class of recurrent manifolds}, Archivum Mathematicum, {\bf 46} (2010), 39-46.
\bibitem{ROY1}
Shaikh, A. A. and Roy, I., \emph{On quasi generalized recurrent manifolds}, Math. Pannonica, {\bf 21(2)} (2010), 251-263.
\bibitem{ROY}
Shaikh, A. A. and Roy, I., \emph{On weakly generalized recurrent manifolds}, Annales Univ. Sci. Budapest. E$\ddot{\mbox{o}}$tv$\ddot{\mbox{o}}$s Sect. Math., {\bf 54} (2011), 35-45.
\bibitem{singh}
Singh, J. P., \emph{On m-Projective Recurrent Riemannian Manifold}, Int. J. Math. Analy., \textbf{6(24)} (2012), 1173 - 1178.
\bibitem{soos}
So\'os, G., \emph{\"Uber die Geod\"atischen Abbildungen Von Riemannschen R\"aumen auf Projectiv-symmetrische Riemannsche R\"aume}, Acta Acad. Sci. Hungarica, \textbf{9} (1958), 359-361.
\bibitem{sz}
Szab$\acute{\mbox{o}}$, Z. I., \emph{Structure theorems on Riemannian spaces satisfying $R(X,Y).R=0$} I, The local version, J. Diff. Geom., \textbf{17} (1982), 531-582.
\bibitem{tac}
Tachibana, S., \emph{A Theorem on Riemannian manifolds of positive curvature operator}, Proc. Japan Acad., \textbf{50} (1974), 301-302.
\bibitem{tb}
Tam$\acute{\mbox{a}}$ssy, L. and Binh, T. Q., \emph{On weakly symmetric and weakly projective symmetric Riemannian manifolds}, Coll. Math. Soc. J. Bolyai, \textbf{50} (1989), 663-670.
\bibitem{tri}
Tripathi, M. M. and Gupta, P., \emph{$\tau$-curvature tensor on a semi-Riemannian manifold}, J. Adv. Math. Stud., \textbf{4(1)} (2011), 117-129.
\bibitem{Ag}
Walker, A. G., \emph{On Ruse's spaces of recurrent curvature}, Proc. London Math. Soc., \textbf{52} (1950), 36-64.
\bibitem{yano1}
Yano, K., \emph{Concircular geometry, I-IV}, Proc. Imp. Acad. Tokyo, \textbf{16} (1940), 195-200, 354-360, 442-448, 505-511.
\bibitem{yano4}
Yano, K. and Sawaki S., \emph{Riemannian manifolds admitting a conformal transformation group}, J. Diff. Geom., \textbf{2} (1968), 161-184.
\end{thebibliography}
\end{document}
\begin{document}
\title{Entanglement bound for multipartite pure states based on local measurements }
\author{Li-zhen Jiang, Xiao-yu Chen, Tian-yu Ye \\
{\small {College of Information and Electronic Engineering, Zhejiang
Gongshang University, Hangzhou, 310018, China}}}
\date{}
\maketitle
\begin{abstract}
An entanglement bound based on local measurements is introduced for
multipartite pure states. It is the upper bound of the geometric measure and
the relative entropy of entanglement. It is the lower bound of minimal
measurement entropy. For pure bipartite states, the bound is equal to the
entanglement entropy. The bound is applied to pure tripartite qubit states
and the exact tripartite relative entropy of entanglement is obtained for a
wide class of states.
\end{abstract}
\section{Introduction}
One of the open problems in quantum information theory is to quantify the
entanglement of a multipartite quantum state. Many entanglement measures for
pure or mixed multipartite states have been proposed \cite{Plenio} \cite
{Horodecki}, among them are the tangle \cite{Coffman} \cite{Miyake}, the
Schmidt measure \cite{Eisert} \cite{Hein} which is the logarithm of the
minimal number of product terms that comprise the state vector, the
geometric measure \cite{Shimony} \cite{Wei1} \cite{Brody} which is defined
in terms of the maximal fidelity of the state vector and the set of pure
product states, the relative entropy of entanglement \cite{Vedral1} \cite
{Vedral2}, and the robustness \cite{Vidal} \cite{Harrow}. The last three are
related with each other \cite{Hayashi1} \cite{Hayashi2} \cite{Wei2} \cite
{Cavalcanti} \cite{Wei3} \cite{Zhu} and they are equal for some of the
states such as stabilizer states, symmetric basis and anti-symmetric basis
states. All these entanglement measures are not operationally defined. In
bipartite system, however, the entanglement cost and the distillable
entanglement are operational entanglement measures. If bipartite
entanglement measures satisfy some properties, it turns out that their
regularizations are bounded by distillable entanglement from one side and by
entanglement cost from the other side\cite{Horodecki}. For a pure bipartite
state, the two bounds are equal and the entanglement is simply the entropy
of the reduced density matrix thus has a clear information theoretical
meaning. We will investigate the possibility of extending these entropic and
operational definitions of entanglement to multipartite pure states in this
paper; we will propose an entanglement measurement bound (EMB), which is an
entanglement measure for pure tripartite qubit states. The bound is based on
the results of local measurements. Local measurement or local discrimination
has been used as an upper bound of certain entanglement measures. For a graph
state, ``Pauli persistency'' has been used as an upper bound of the Schmidt
measure \cite{Hein}, and a quantity based on LOCC measurements has been used as
an upper bound of the geometric measure \cite{Markham}.
\section{The justification of the definition of entanglement measurement
bound}
What is the usefulness of a bipartite entangled state? One answer should be
that we can use it for cryptography. If Alice's part is measured in spin up,
Bob's part should definitely in spin up for a bipartite spin entangled Bell
state $\Phi =\frac 1{\sqrt{2}}(\left| \uparrow \right\rangle _A\left|
\uparrow \right\rangle _B+\left| \downarrow \right\rangle _A\left|
\downarrow \right\rangle _B).$ Thus using $n$ pairs of Bell state, we can
get a shared string of $n$ bits for Alice and Bob through measurements. If
Alice measures her part in the basis of $\left| \phi \right\rangle =\cos
\phi \left| \uparrow \right\rangle +\sin \phi \left| \downarrow
\right\rangle ,$ $\left| \phi ^{\perp }\right\rangle =-\sin \phi \left|
\uparrow \right\rangle +\cos \phi \left| \downarrow \right\rangle ,$ then
the measurement results should be that Alice and Bob are simultaneously in
state $\left| \phi \right\rangle $ or they are simultaneously in state $
\left| \phi ^{\perp }\right\rangle $ , each with probability $\frac 12.$
Thus if we have $n$ pairs of Bell state, we can get a shared string of $n$
bits regardless of the measurement basis Alice chooses. When we have a less
entangled state $\cos \theta \left| \uparrow \right\rangle _A\left| \uparrow
\right\rangle _B+\sin \theta \left| \downarrow \right\rangle _A\left|
\downarrow \right\rangle _B,$ what can we do for the purpose of
cryptography? Certainly, we can measure one of the parts, say Alice's, in the
spin up and down basis. The result turns out to be a shared string of length
$n,$ with the probability $\cos ^2\theta $ for spin up, and the probability $
\sin ^2\theta $ for spin down. The information contained in such a string
can be calculated to be $nH_2(\cos ^2\theta )$ , where $H_2(x)=-x\log
_2x-(1-x)\log _2(1-x)$ is the binary entropy function. However, if Alice
measures her part in a rather arbitrary basis $\left| \phi \right\rangle $
and $\left| \phi ^{\perp }\right\rangle ,$ Bob will get his state in $\left|
\phi _B\right\rangle \sim (\cos \phi \cos \theta \left| \uparrow
\right\rangle _B+\sin \phi \sin \theta \left| \downarrow \right\rangle _B)$
and $\left| \phi _B^{\perp }\right\rangle \sim (-\sin \phi \cos \theta
\left| \uparrow \right\rangle _B+\cos \phi \sin \theta \left| \downarrow
\right\rangle _B)$ states with probabilities $p(\theta ,\phi )=\cos ^2\theta
\cos ^2\phi +\sin ^2\theta \sin ^2\phi $ and $1-p(\theta ,\phi )$,
respectively. We can get a shared string of length $n$ with total
information content $nH_2(p(\theta ,\phi )).$ Notice that $p(\theta ,\phi
)=\cos 2\theta \cos ^2\phi +\sin ^2\theta ,$ the minimal information content
occurs when $\cos ^2\phi =1$ or $0.$ So at least we can get a
shared string with information content $nH_2(\cos ^2\theta ),$ which is the
entanglement of the $n$ pairs of the state $\cos \theta \left| \uparrow
\right\rangle _A\left| \uparrow \right\rangle _B+\sin \theta \left|
\downarrow \right\rangle _A\left| \downarrow \right\rangle _B.$ Hence, the
entanglement of a pure bipartite entangled state is the minimal shared
information content obtained by measurement. This point had been proved in
Ref. \cite{Chernyavskiy} in the context of measurement entropy. For
completeness, we will give an alternative proof in the following.
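Before giving the proof, here is a quick arithmetic check of the minimization over $\phi $ carried out above (an illustration only): for $\theta =\pi /6$ we have $\cos 2\theta =\frac 12$ and $\sin ^2\theta =\frac 14$, so
\[
p(\theta ,\phi )=\tfrac 12\cos ^2\phi +\tfrac 14,\qquad H_2(\tfrac 34)\approx 0.811 \ \ (\cos ^2\phi =1),\qquad H_2(\tfrac 12)=1 \ \ (\cos ^2\phi =\tfrac 12),
\]
and the minimal information content $H_2(\cos ^2\theta )\approx 0.811$ is indeed attained at $\cos ^2\phi =1$ or $0$.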
For a bipartite state $\left| \psi \right\rangle =\sum_{i,j=1}^dA_{ij}\left|
i\right\rangle \left| j\right\rangle $ , where $\left| i\right\rangle $ and $
\left| j\right\rangle $ are the orthogonal basis, it is well known that the
entanglement entropy is the entropy of the reduced density matrix when one
of the partite is traced out. We have $\rho _A=Tr_B(\left| \psi
\right\rangle \left\langle \psi \right| )=\sum_{i,j,k}A_{ij}A_{kj}^{*}\left|
i\right\rangle \left\langle k\right| .$ Thus in the basis $\left|
i\right\rangle $, the reduced state is $\rho _A=AA^{\dagger },$where $A$ is
the matrix with entries $A_{ij}.$ A unitary transformation $U$ diagonalizes $
\rho _A$ to $\Lambda =diag\{\lambda _1,\ldots ,\lambda _d\},$ thus $
AA^{\dagger }=U\Lambda U^{\dagger }.$ So that the singular value
decomposition of matrix $A$ is $A=U\sqrt{\Lambda }V^T,$ where $V$ is some
other unitary transformation. We have the Schmidt decomposition
\begin{equation}
\left| \psi \right\rangle =\sum_k\sqrt{\lambda _k}\left| \varphi
_k^{(1)}\right\rangle \left| \varphi _k^{(2)}\right\rangle , \label{wav2}
\end{equation}
with orthogonal basis$\left| \varphi _k^{(1)}\right\rangle
=\sum_iU_{ik}\left| i\right\rangle ,$ $\left| \varphi _k^{(2)}\right\rangle
=\sum_jV_{jk}\left| j\right\rangle .$ The entanglement of the state is
\begin{equation}
E(\left| \psi \right\rangle )=-\sum_{k=1}^d\lambda _k\log _2\lambda _k.
\label{wav3}
\end{equation}
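As a simple example: for the two-qubit state $\cos \theta \left| \uparrow \right\rangle _A\left| \uparrow \right\rangle _B+\sin \theta \left| \downarrow \right\rangle _A\left| \downarrow \right\rangle _B$ considered above, the Schmidt coefficients are $\lambda _1=\cos ^2\theta $ and $\lambda _2=\sin ^2\theta $, so (\ref{wav3}) gives
\[
E=-\cos ^2\theta \log _2\cos ^2\theta -\sin ^2\theta \log _2\sin ^2\theta =H_2(\cos ^2\theta ),
\]
in agreement with the minimal shared information content obtained in the preceding discussion.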
Now, we use the measurement to obtain shared digital information from the
state vector $\left| \psi \right\rangle =\sum_{i,j=1}^dA_{ij}\left|
i\right\rangle \left| j\right\rangle .$ Suppose the state vector is
projected to Alice's measurement base vector $\left| c\right\rangle
=\sum_i^dc_i\left| i\right\rangle ,$ then Bob's state will be proportional
to $\left\langle c\right. \left| \psi \right\rangle
=\sum_{i,j}c_i^{*}A_{ij}\left| j\right\rangle ,$ the probability of which is
\begin{equation}
p=\sum_j\left| \sum_ic_i^{*}A_{ij}\right| ^2.
\end{equation}
Let's consider which measurement basis yields the optimal probability $p.$
This is an optimization of $p$ with respect to $\{c_i\}$ subject to $
\sum_i\left| c_i\right| ^2=1.$ With the Lagrange multiplier $\lambda $, we
can write the optimal equation as $\frac{\partial L}{\partial c_i^{*}}=0$,
where $L=p-\lambda (\sum_i\left| c_i\right| ^2-1).$ The optimal equation
then reads
\begin{equation}
\sum_{k,j}A_{ij}A_{kj}^{*}c_k-\lambda c_i=0,
\end{equation}
or
\[
(AA^{\dagger }-\lambda )\mathbf{c=}0,
\]
where $\mathbf{c=}(c_1,\ldots ,c_d)^T.$ The optimal probability should be $
p=\sum_{i,j,k}c_i^{*}A_{ij}A_{kj}^{*}c_k$ $=\mathbf{c}^{\dagger }AA^{\dagger }
\mathbf{c}$ $\mathbf{=c}^{\dagger }\lambda \mathbf{c}=\lambda .$ So the
optimal probability is the eigenvalue of the reduced density matrix $\rho
_A=AA^{\dagger }.$ Hence if we use eigenvectors of $\rho _A$ as the
measurement basis, the average information of each shared digit is the
entanglement of the state. Denote the eigensystem of $\rho _A$ as $\{\lambda
_k,\mathbf{c}^k\},$ let's see if the unitary transformed basis $\{U\mathbf{c}
^k\}$ decrease the entropy $H(\mathbf{p)=-}\sum_{i=1}^dp_k\log _2p_k$ or
not, where $p_k=\mathbf{c}^{k\dagger }U^{\dagger }AA^{\dagger }U\mathbf{c}
^k. $ Denote the elements of $U$ in the basis of $\mathbf{c}^k$, we have $
U_{ij}=\mathbf{c}^{i\dagger }U\mathbf{c}^j$. Using the spectrum
decomposition of $AA^{\dagger },$ then
\begin{eqnarray}
p_k &=&\sum_i\lambda _i\mathbf{c}^{k\dagger }U^{\dagger }\mathbf{c}^i\mathbf{
c}^{i\dagger }U\mathbf{c}^k \nonumber \\
&=&\sum_i\lambda _i\left| U_{ik}\right| ^2.
\end{eqnarray}
Notice that function $f(x)=-x\log _2x$ is concave, that is, for $\alpha \in
[0,1],$ one has $f(\alpha x_1+(1-\alpha )x_2)\geq \alpha f(x_1)+(1-\alpha
)f(x_2).$ Then $f(p_k)\geq \sum_i\left| U_{ik}\right| ^2f(\lambda _i),$
where the unitarity of $U$ is used. Hence
\begin{eqnarray}
H(\mathbf{p}) &=&\sum_kf(p_k)\geq \sum_{i,k}\left| U_{ik}\right| ^2f(\lambda
_i) \nonumber \\
&=&\sum_if(\lambda _i)=E(\left| \psi \right\rangle ).
\end{eqnarray}
We get the desired result.
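A small numerical illustration of this concavity bound: take $d=2$, $\lambda =(\frac 34,\frac 14)$ and a rotation with $\left| U_{ik}\right| ^2=\frac 12$ for all $i,k$ (a Hadamard-type basis change). Then $p_k=\sum_i\lambda _i\left| U_{ik}\right| ^2=\frac 12$ for both $k$, so $H(\mathbf{p})=1\geq H_2(\tfrac 14)\approx 0.811=E(\left| \psi \right\rangle )$; the minimum is recovered when the eigenbasis of $\rho _A$ itself is used.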
\subsection{Definition}
\begin{definition}
For an $N$-partite state $\left| \psi \right\rangle ,$ let $\mathbf{p}$ be
the probability vector with multiple subscripts, the components of $\mathbf{p
}$ are $p_{i_1,i_2,\ldots ,i_N}=$ $\left| \left\langle \phi
_{i_1}^{(1)}\right| \otimes \left\langle \phi _{i_2|i_1}^{(2)}\right|
\otimes \cdots \left\langle \phi _{i_N|i_1i_2\ldots i_{N-1}}^{(N)}\right| \text{
}\cdot \left| \psi \right\rangle \right| ^2.$ Here $\left| \phi
_{i_j|i_1i_2\ldots i_{j-1}}^{(j)}\right\rangle $ ( denoted simply as $\left|
\phi _{i_j}^{(j)}\right\rangle $ hereafter) $(i_j=0,1,\ldots ,d_j-1)$ are
the orthonormal basis of $j-th$ partite when the measurement results for the
former parties are $i_1,i_2,\ldots ,i_{j-1},$ respectively. The EMB of $
\left| \psi \right\rangle $ is defined as the minimal entropy of the
measurement probability vector, that is,
\begin{equation}
E_{MB}(\left| \psi \right\rangle )=\min_{\mathbf{p}}H(\mathbf{p).}
\label{wav4}
\end{equation}
The minimization is over all possible local orthogonal measurements.
\end{definition}
Notice that the local measurements can be carried out step by step: for each
result of the first partite measurement, one can choose an orthogonal basis
to measure the second partite residue state. So one may have $d_1$ different
projection measurements for the second partite. When measuring the third
partite, one can have $d_1d_2$ projection measurements, and so on. The
choice of $j-th$ partite basis can rely on all the former measurement
results. The total number of measurements is $1+d_1+d_1d_2+\ldots
+d_1d_2\cdots d_{N-1}.$ Meanwhile, the minimization in (\ref{wav4}) is also
with respect to all permutations of the parties.
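For example, for three qubits ($N=3$, $d_1=d_2=d_3=2$) this amounts to specifying $1+d_1+d_1d_2=1+2+4=7$ orthonormal bases in total, and the minimization in (\ref{wav4}) runs over these choices together with the $3!$ orderings of the parties.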
There is a definition of an entanglement measure based on measurement \cite
{Chernyavskiy}. In the definition of \cite{Chernyavskiy}, each partite has
only one kind of (complete) measurement, the total number of the
measurements is $N-1.$ The $E_{Hmin}$ in \cite{Chernyavskiy} is no less than
our entanglement measurement bound $E_{MB}(\left| \psi \right\rangle )$ by
definition.
\section{As upper bounds of entanglement measures}
\subsection{Coarse grain}
In the multipartite case it is useful to compare EMB according to different
partitions, where the components $1,...,N$ are grouped into disjoint sets.
Any sequence $(A_1,...,A_N)$ of disjoint subsets $A\in V$ with $
\bigcup_{i=1}^NA_i=\{1,...,N\}$ will be called a partition of $V$ . We will
write
\begin{equation}
(A_1,...A_N)\leq (B_1,...,B_M),
\end{equation}
if $(A_1,...A_N)$ is a finer partition than $(B_1,...,B_M)$. EMB is
non-increasing under coarse graining of the partition. If two components are
merged to form a new component, then EMB can only decrease. This is because
the minimization in the definition of EMB, Eq. (\ref{wav4}), can also be
seen as with respect to all possible local measurement hierarchies. A local
measurement hierarchy of a finer partition $(A_1,...A_N)$ is definitely a
local measurement hierarchy of the coarser grain partition $(B_1,...,B_M),$
while the converse may not be true. So from $(A_1,...A_N)$ to $(B_1,...,B_M),$
the set of local measurement hierarchies is enlarged, and the minimization may
reach a lower value. We have
\begin{equation}
E_{MB}^{(A_1,...A_N)}(\left| \psi \right\rangle )\geq
E_{MB}^{(B_1,...B_M)}(\left| \psi \right\rangle ), \label{wav5}
\end{equation}
where we specify the partition as the superscript of EMB. So the EMB of any coarser
partition is a lower bound of the EMB of the finer partition. In particular, a lower
bound of the EMB for a tripartite pure state is the bipartite pure state
entanglement across a bipartition, which is easily obtained. There are three bipartitions of a
tripartite state; the tightest lower bound is given by the partition with the largest
entanglement.
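As a concrete illustration of this lower bound (using only standard reduced-state entropies): for the GHZ and W states considered in the next section, every single-qubit reduced state of $\left| GHZ\right\rangle $ is maximally mixed, so each bipartition has bipartite entanglement $1$, while every single-qubit reduced state of $\left| W\right\rangle $ has eigenvalues $\frac 23$ and $\frac 13$, so each bipartition has bipartite entanglement $H_2(1/3)\approx 0.918$. Hence (\ref{wav5}) gives $E_{MB}(\left| GHZ\right\rangle )\geq 1$ and $E_{MB}(\left| W\right\rangle )\geq H_2(1/3)$.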
\subsection{Upper bound of geometric measure}
Suppose $E_{MB}(\left| \psi \right\rangle )$ is achieved by the probability
vector $\mathbf{p}$ with components $p_{i_1,i_2,\ldots ,i_N}.$ For all $
p_{i_1,i_2,\ldots ,i_N}=\left| \left\langle \phi _{i_1}^{(1)}\right| \otimes
\left\langle \phi _{i_2}^{(2)}\right| \otimes \cdots \left\langle \phi
_{i_N}^{(N)}\right| \text{ }\cdot \left| \psi \right\rangle \right| ^2,$ we
may denote the largest one as $p_{0,0,\ldots ,0}=\left| \left\langle \phi
_0^{(1)}\right| \otimes \left\langle \phi _0^{(2)}\right| \otimes \cdots
\left\langle \phi _0^{(N)}\right| \text{ }\cdot \left| \psi \right\rangle
\right| ^2.$ So $p_{0,0,\ldots ,0}\geq p_{i_1,i_2,\ldots ,i_N}$. Then $
p_{i_1,i_2,\ldots ,i_N}\log _2p_{i_1,i_2,\ldots ,i_N}\leq p_{i_1,i_2,\ldots
,i_N}\log _2p_{0,0,\ldots ,0}$
\begin{eqnarray}
E_{MB}(\left| \psi \right\rangle ) &=&-\sum_{i_1,i_2,\ldots
,i_N}p_{i_1,i_2,\ldots ,i_N}\log _2p_{i_1,i_2,\ldots ,i_N} \nonumber \\
&\geq &-\sum_{i_1,i_2,\ldots ,i_N}p_{i_1,i_2,\ldots ,i_N}\log
_2p_{0,0,\ldots ,0} \nonumber \\
&\geq &-\log _2p_{0,0,\ldots ,0} \nonumber \\
&\geq &E_G(\left| \psi \right\rangle ).
\end{eqnarray}
The last inequality comes from the fact that the geometric measure $
E_G(\left| \psi \right\rangle )=\min -\log _2F,$ where $F=\left|
\left\langle \varphi ^{(1)}\right| \otimes \left\langle \varphi
^{(2)}\right| \otimes \cdots \left\langle \varphi ^{(N)}\right| \text{ }
\cdot \left| \psi \right\rangle \right| ^2,$ the minimization is over all
possible product state $\left| \varphi ^{(1)}\right\rangle \otimes \left|
\varphi ^{(2)}\right\rangle \otimes \cdots \left| \varphi
^{(N)}\right\rangle .$ The largest fidelity $F$ should be no less than some
special fidelity $p_{0,0,\ldots ,0}.$
\subsection{Upper bound of the relative entropy of entanglement}
Suppose the orthogonal expansion of $\left| \psi \right\rangle
=\sum_{i_1,i_2,\ldots ,i_N}\xi _{i_1,i_2,\ldots ,i_N}\left| \phi
_{i_1}^{(1)}\right\rangle \otimes \left| \phi _{i_2}^{(2)}\right\rangle
\otimes \cdots \left| \phi _{i_N}^{(N)}\right\rangle $ with $\xi
_{i_1,i_2,\ldots ,i_N}=\left\langle \phi _{i_1}^{(1)}\right| \otimes
\left\langle \phi _{i_2}^{(2)}\right| \otimes \cdots \left\langle \phi
_{i_N}^{(N)}\right| $ $\cdot \left| \psi \right\rangle $ be the optimal
expansion that achieves the measure entanglement, that is $E_{MB}(\left|
\psi \right\rangle )=-\sum_{i_1,i_2,\ldots ,i_N}p_{i_1,i_2,\ldots ,i_N}\log
_2p_{i_1,i_2,\ldots ,i_N}$ with $p_{i_1,i_2,\ldots ,i_N}=\left| \xi
_{i_1,i_2,\ldots ,i_N}\right| ^2.$ Let's construct the separable state
\begin{eqnarray}
\omega &=&\sum_{i_1,i_2,\ldots ,i_N}p_{i_1,i_2,\ldots ,i_N}\left| \phi
_{i_1}^{(1)}\right\rangle \left| \phi _{i_2}^{(2)}\right\rangle \cdots
\left| \phi _{i_N}^{(N)}\right\rangle \nonumber \\
&&\times \left\langle \phi _{i_1}^{(1)}\right| \left\langle \phi
_{i_2}^{(2)}\right| \cdots \left\langle \phi _{i_N}^{(N)}\right| .
\end{eqnarray}
The relative entropy of $\left| \psi \right\rangle $ with respect to $\omega
$ is $-Tr\left| \psi \right\rangle \left\langle \psi \right| \log _2\omega =$
$-\left\langle \psi \right| \log _2\omega \left| \psi \right\rangle
=-\left\langle \psi \right| \sum_{i_1,i_2,\ldots ,i_N}\left| \phi
_{i_1}^{(1)}\right\rangle \left| \phi _{i_2}^{(2)}\right\rangle \cdots
\left| \phi _{i_N}^{(N)}\right\rangle $ $\log _2p_{i_1,i_2,\ldots ,i_N}$ $
\left\langle \phi _{i_1}^{(1)}\right| \left\langle \phi _{i_2}^{(2)}\right|
\cdots \left\langle \phi _{i_N}^{(N)}\right| \left. \psi \right\rangle $ $
=E_{MB}(\left| \psi \right\rangle ).$ It is larger than or equal to the
relative entropy of entanglement $E_R(\left| \psi \right\rangle ).$ Since
the separable state $\omega $ is just one of the fully separable states, it
may not be the fully separable state that achieves the minimal relative
entropy for state $\left| \psi \right\rangle .$ So we have
\begin{equation}
E_{MB}(\left| \psi \right\rangle )\geq E_R(\left| \psi \right\rangle ).
\end{equation}
More concretely, we will consider the pure tripartite qubit state in the
next section.
\section{Pure tripartite qubit state}
It is well known that GHZ state and W state are two different kinds of pure
tripartite states that are not convertible with each other under stochastic
local operation and classical communication (SLOCC). We may write the states
in computational basis as $\left| GHZ\right\rangle =\frac 1{\sqrt{2}}(\left|
000\right\rangle +\left| 111\right\rangle )$ and $\left| W\right\rangle
=\frac 1{\sqrt{3}}(\left| 001\right\rangle +\left| 010\right\rangle +\left|
100\right\rangle ).$ The three parties are called Alice, Bob and Charlie.
When they share GHZ state, if Alice measures her part with result $0,$ then
the states of Bob and Charlie are in $0$ without further measurement. When
Alice measures $1,$ the other two parts are also in $1.$ A common string of
bits among the three parts can be established when one of them measures in
computational basis. However, when Alice measures in $\left| \phi
\right\rangle =\cos \phi \left| 0\right\rangle +\sin \phi \left|
1\right\rangle ,$ $\left| \phi ^{\perp }\right\rangle =-\sin \phi \left|
0\right\rangle +\cos \phi \left| 1\right\rangle $ basis, the joint state of
Bob and Charlie will be left to the entangled state $\cos \phi \left|
00\right\rangle +\sin \phi \left| 11\right\rangle $ or $-\sin \phi \left|
00\right\rangle +\cos \phi \left| 11\right\rangle $. Further measurement
should be performed to determine the state of Bob as well as Charlie. So GHZ
state measured in computational basis is a rather special case when only one
step of measurement is required to transform the tripartite entanglement to
shared bits. In general, we need two steps of measurements to convert the
tripartite quantum correlation to classical correlation.
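To make this quantitative with a simple worked example: measuring all three parties of $\left| GHZ\right\rangle $ in the computational basis yields $p_{000}=p_{111}=\frac 12$, so $H(\mathbf{p})=1$ and $E_{MB}(\left| GHZ\right\rangle )\leq 1$; since each bipartition of $\left| GHZ\right\rangle $ has bipartite entanglement $1$, the coarse-graining bound (\ref{wav5}) gives $E_{MB}(\left| GHZ\right\rangle )\geq 1$, hence $E_{MB}(\left| GHZ\right\rangle )=1$. Similarly, measuring $\left| W\right\rangle $ in the computational basis yields $p_{001}=p_{010}=p_{100}=\frac 13$, so $E_{MB}(\left| W\right\rangle )\leq \log _23\approx 1.585$; whether this particular measurement is optimal must be decided by the two-step minimization described below.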
In computational basis, a pure tripartite qubit state can be written as
\begin{equation}
\left| \psi \right\rangle =\sum_{i,j,k=0}^1A_{ijk}\left| i\right\rangle
\left| j\right\rangle \left| k\right\rangle , \label{wav6}
\end{equation}
where the normalization is $\sum_{i,j,k=0}^1\left| A_{ijk}\right| ^2=1.$ Let
the measurement basis of Alice be $\left| \phi _a\right\rangle
=a_0^{*}\left| 0\right\rangle +a_1^{*}\left| 1\right\rangle ,$ $\left| \phi
_a^{\perp }\right\rangle =-a_1\left| 0\right\rangle +a_0\left|
1\right\rangle ,$ the basis of Bob be $\left| \phi _b\right\rangle
=b_0^{*}\left| 0\right\rangle +b_1^{*}\left| 1\right\rangle ,$ $\left| \phi
_b^{\perp }\right\rangle =-b_1\left| 0\right\rangle +b_0\left|
1\right\rangle $ when Alice is projected to $\left| \phi _a\right\rangle $,
the basis of Bob be $\left| \phi _b^{\prime }\right\rangle =b_0^{\prime
*}\left| 0\right\rangle +b_1^{\prime *}\left| 1\right\rangle ,$ $\left| \phi
_b^{\prime \perp }\right\rangle =-b_1^{\prime }\left| 0\right\rangle
+b_0^{\prime }\left| 1\right\rangle $ when Alice is projected to $\left|
\phi _a^{\perp }\right\rangle $. Suppose the state $\left| \psi
\right\rangle $ is projected to $\left| \phi _a\right\rangle \left| \phi
_b\right\rangle $ for Alice's and Bob's parts; then Charlie is left in $
\left\langle \phi _a\phi _b\right| \left. \psi \right\rangle $ $=$ $
\sum_{k=0}^1(\sum_{i,j=0}^1A_{ijk}a_ib_j)\left| k\right\rangle $ , the
probability of measurement is $p_{ab}=\sum_{k=0}^1\left|
\sum_{i,j=0}^1A_{ijk}a_ib_j\right| ^2$. We may write $p_{ab}=p_ap_{b|a},$
where $p_a$ is the probability of projecting $\left| \psi \right\rangle $ to
$\left| \phi _a\right\rangle ,$ and $p_{b|a}$ is the probability of
projecting further to $\left| \phi _b\right\rangle .$ For all possible local
measurements, we consider the minimal entropy of the probability
distribution $\{p_{ab},p_{ab^{\perp }},p_{a^{\perp }b^{\prime }},p_{a^{\perp
}b^{\prime \perp }}\},$ alternatively, we may write it as $\{p_{00},$ $
p_{01},p_{10},p_{11}\}.$ The entropy of the measurement should be
\begin{eqnarray*}
E_{MB}(\left| \psi \right\rangle ) &=&\min (-\sum_{i,j=0}^1p_{ij}\log p_{ij})
\\
&=&\min [-\sum_{i=0}^1(p_i\log p_i+p_i\sum_{j=0}^1p_{j|i}\log p_{j|i})].
\end{eqnarray*}
So we may solve the problem by minimizing the entropy of the conditional
distribution $p_{j|i}$ after first fixing $p_i;$ that is, after Alice's part is
measured in some basis that is not known to Bob and Charlie, Bob chooses some
basis to minimize the entropy of the conditional distribution $p_{j|i}.$ The
joint state of Bob and Charlie is left in the (unnormalized) state $\left\langle
\phi _a\right. \left| \psi \right\rangle =\sum_{i,j,k=0}^1A_{ijk}a_i\left|
j\right\rangle \left| k\right\rangle $ for a quite general measurement
basis $\left| \phi _a\right\rangle $ of Alice. From the result of the bipartite
case, we have
\begin{equation}
\min -\sum_{j=0}^1p_{j|i}\log p_{j|i}=E(\left| \psi _i\right\rangle ).
\end{equation}
Thus the minimization problem turns out to be
\begin{equation}
E_{MB}(\left| \psi \right\rangle )=\min_{\{a_0,a_1\}}\left[\sum_{i=0}^1\left(-p_i\log
p_i+p_iE(\left| \psi _i\right\rangle )\right)\right], \label{wav7}
\end{equation}
where
\begin{eqnarray}
\left| \psi _0\right\rangle &=&p_0^{-1/2}\sum_{i,j,k=0}^1A_{ijk}a_i\left|
j\right\rangle \left| k\right\rangle , \label{wav7a} \\
\left| \psi _1\right\rangle
&=&p_1^{-1/2}\sum_{j,k=0}^1(-A_{0jk}a_1^{*}+A_{1jk}a_0^{*})\left|
j\right\rangle \left| k\right\rangle ; \label{wav7b}
\end{eqnarray}
with
\begin{eqnarray}
p_0 &=&\sum_{j,k=0}^1\left| A_{0jk}a_0+A_{1jk}a_1\right| ^2, \label{wav7c}
\\
p_1 &=&\sum_{j,k=0}^1\left| -A_{0jk}a_1^{*}+A_{1jk}a_0^{*}\right| ^2.
\label{wav7d}
\end{eqnarray}
In practical calculations, we can choose $a_0=\cos \theta ,$ $a_1=\sin \theta
e^{i\varphi },$ so that $E_{MB}(\left| \psi \right\rangle )$ is given by the
minimization over $\left\{ \theta ,\varphi \right\} $. The bipartite
entanglement on the RHS of (\ref{wav7}) can easily be evaluated with the
concurrence. Alternatively, we may write $E_{MB}$ as
\begin{equation}
E_{MB}(\left| \psi \right\rangle
)=\min_{\{a_0,a_1\}}[\sum_{i=0}^1S(B_iB_i^{\dagger })], \label{wav8}
\end{equation}
where $S$ is the von Neumann entropy of a matrix, $S(\varrho )=-Tr(\varrho
\log _2\varrho ),$
\begin{eqnarray}
B_0 &=&a_0A_0+a_1A_1, \\
B_1 &=&-a_1^{*}A_0+a_0^{*}A_1,
\end{eqnarray}
where $A_0$ and $A_1$ are the matrices with elements $\left( A_0\right)
_{jk}=A_{0jk}$ and $\left( A_1\right) _{jk}=A_{1jk}.$
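As an illustrative numerical sketch (an addition, not part of the original derivation), the minimization in (\ref{wav8}) can be estimated by a direct grid search over the measurement parameters $\theta $ and $\varphi $; the use of Python/NumPy, the grid resolution and the GHZ test state below are assumptions made only for this example.
\begin{verbatim}
import numpy as np

def von_neumann_entropy(m):
    """S(m) = -Tr(m log2 m) for a positive semidefinite matrix m."""
    evals = np.linalg.eigvalsh(m)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def emb(A, n_grid=200):
    """Grid-search estimate of E_MB in Eq. (wav8) for coefficients A[i, j, k]."""
    A0, A1 = A[0], A[1]                    # (A_l)_{jk} = A_{ljk}
    best = np.inf
    for theta in np.linspace(0.0, np.pi / 2, n_grid):
        for phi in np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False):
            a0, a1 = np.cos(theta), np.sin(theta) * np.exp(1j * phi)
            B0 = a0 * A0 + a1 * A1
            B1 = -np.conj(a1) * A0 + np.conj(a0) * A1
            s = (von_neumann_entropy(B0 @ B0.conj().T)
                 + von_neumann_entropy(B1 @ B1.conj().T))
            best = min(best, s)
    return best

# Example: for the GHZ state the bound evaluates to approximately 1.
A = np.zeros((2, 2, 2), dtype=complex)
A[0, 0, 0] = A[1, 1, 1] = 1 / np.sqrt(2)
print(emb(A))
\end{verbatim}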
For $E_{Hmin}$ in \cite{Chernyavskiy}, Bob's measurement is independent of
Alice's measurements. Suppose the measurement basis of Alice be $\left| \phi
_a\right\rangle =a_0^{*}\left| 0\right\rangle +a_1^{*}\left| 1\right\rangle
, $ $\left| \phi _a^{\perp }\right\rangle =-a_1\left| 0\right\rangle
+a_0\left| 1\right\rangle ,$ the basis of Bob be $\left| \phi
_b\right\rangle =b_0^{*}\left| 0\right\rangle +b_1^{*}\left| 1\right\rangle
, $ $\left| \phi _b^{\perp }\right\rangle =-b_1\left| 0\right\rangle
+b_0\left| 1\right\rangle $, respectively. Then
\begin{equation}
E_{Hmin}(\left| \psi \right\rangle )=\min (-\sum_{i,j=0}^1p_{ij}^{\prime
}\log p_{ij}^{\prime }). \label{wav8a}
\end{equation}
where $p_{lm}^{\prime }=\sum_{k=0}^1\left|
\sum_{ij}A_{ijk}a_{li}b_{mj}^{}\right| ^2,$ with $
(a_{00},a_{01},a_{10},a_{11})=(a_0,a_1,-a_1^{*},a_0^{*})$ and $
(b_{00},b_{01},b_{10},b_{11})=(b_0,b_1,-b_1^{*},b_0^{*}).$ The
minimization in (\ref{wav8a}) is more difficult than that of
(\ref{wav8}). The calculation of the geometric measure involves
minimization over the product state of three qubits and thus is
more difficult than the calculation of EMB in (\ref{wav8}). Only
for symmetric tripartite states can the calculation of the geometric
measure be reduced, as shown later.
\subsection{Superposition of GHZ and W' states}
It is known \cite{Carteret} \cite{Tamaryan} that any pure tripartite
qubit state can be transformed by local unitaries to the standard form
\begin{eqnarray}
\left| \psi \right\rangle &=&q_0\left| 000\right\rangle +q_1\left|
011\right\rangle +q_2\left| 101\right\rangle \nonumber \\
&&+q_3\left| 110\right\rangle +q_4e^{i\gamma }\left| 111\right\rangle .
\label{wav9}
\end{eqnarray}
Here the $q_i$ are positive and $\gamma \in [-\pi /2,\pi /2]$. The concurrences of
$\left| \psi _0\right\rangle $ and $\left| \psi _1\right\rangle $ are $
C_0=2p_0^{-1}\left| q_0q_1\cos ^2\theta +\text{ }q_0q_4\sin \theta \cos
\theta e^{i(\gamma +\varphi )}\text{ }-q_2q_3\sin ^2\theta e^{2i\varphi
}\right| ,$ with probability $p_0=(q_0^2+q_1^2)\cos ^2\theta
+(q_2^2+q_3^2+q_4^2)\sin ^2\theta +2q_1q_4\sin \theta \cos \theta \cos
(\gamma +\varphi )$, and $C_1=$ $2p_1^{-1}\left| q_0q_1\sin ^2\theta -\text{
}q_0q_4\sin \theta \cos \theta e^{i(\gamma +\varphi )}\text{ }-q_2q_3\cos
^2\theta e^{2i\varphi }\right| ,$ with probability $p_1=(q_0^2+q_1^2)\sin
^2\theta +(q_2^2+q_3^2+q_4^2)\cos ^2\theta -2q_1q_4\sin \theta \cos \theta
\cos (\gamma +\varphi )$, respectively.
A special case is the superposition of GHZ and W' state, $\left|
GHZ-W^{\prime }\right\rangle =\cos \alpha \left| GHZ\right\rangle +\sin
\alpha \left| W^{\prime }\right\rangle $, which is a standard tripartite
state with $q_0=q_4=\frac 1{\sqrt{2}}\cos \alpha ,$ $q_1=q_2$ $=q_3=\frac 1{
\sqrt{3}}\sin \alpha ,$ $\gamma =0.$ The state is widely used in evaluating
the tangle of symmetric tripartite mixed states. For this superposition
state, we have calculated $E_{MB}$ as a function of the parameter $x=\sin \alpha $, $\alpha \in
[-\pi /2,\pi /2].$ The results are shown in Fig.1. Also shown in Fig.1 are
the tangle, the geometric measure and the bipartition entanglement of the
state. The tangle of the state is \cite{Eltschka}
\[
\tau (\left| GHZ-W^{\prime }\right\rangle )=\left| \cos ^4\alpha +\frac 89
\sqrt{6}\sin ^3\alpha \cos \alpha \right| .
\]
Owing to the permutation symmetry of the state, the geometric measure
for this state is \cite{Hubener} \cite{Hayashi3} \cite{Wei4}
\begin{equation}
E_G=\min_\phi -\log _2\left| \left\langle GHZ-W^{\prime }\right| \left(
\left| \phi \right\rangle \right) ^{\otimes 3}\right| ^2, \label{wav8b}
\end{equation}
where $\left| \phi \right\rangle $ is a qubit state.
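As a hedged numerical sketch of (\ref{wav8b}) (an addition to the text), the single-qubit state $\left| \phi \right\rangle $ can simply be scanned on a grid; this relies on the cited result that for a symmetric state the closest product state may be taken symmetric, and the grid resolution and the GHZ test state are assumptions of the example only.
\begin{verbatim}
import numpy as np

def geometric_measure_symmetric(psi, n_grid=200):
    """Grid-search estimate of E_G = min over |phi> of -log2 |<psi|phi,phi,phi>|^2,
    for a permutation-symmetric three-qubit state psi given as an 8-vector
    in the ordering |ijk> -> 4i + 2j + k."""
    best = np.inf
    for theta in np.linspace(0.0, np.pi, n_grid):
        for chi in np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False):
            phi = np.array([np.cos(theta / 2),
                            np.sin(theta / 2) * np.exp(1j * chi)])
            overlap = abs(np.vdot(np.kron(np.kron(phi, phi), phi), psi)) ** 2
            if overlap > 1e-15:
                best = min(best, -np.log2(overlap))
    return best

# Example: for the GHZ state the geometric measure is 1.
ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
print(geometric_measure_symmetric(ghz))   # approximately 1.0
\end{verbatim}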
\begin{figure}
\caption{The entanglement with respect to $x$, the
portion of the W' state in the superposition of the W' and GHZ states. The
solid line is the entanglement measurement bound, the dotted line
is the geometric measure, and the upper dashed line is $E_{Hmin}$.}
\end{figure}
\subsection{Bipartite lower bound}
For a general state $\left| \psi \right\rangle
=\sum_{i,j,k=0}^1A_{ijk}\left| i\right\rangle \left| j\right\rangle \left|
k\right\rangle ,$ we may project it to state $\left| \phi _{ab}\right\rangle
=\sum_{i,j=0}^1c_{ij}^{*}\left| i\right\rangle \left| j\right\rangle $ with
a joint measurement of Alice and Bob; then Charlie should be left in $
\left\langle \phi _{ab}\right| \left. \psi \right\rangle $ $=$ $
\sum_{k=0}^1(\sum_{i,j=0}^1A_{ijk}c_{ij})\left| k\right\rangle $. The
bipartition is a coarse graining of a tripartition, so
\begin{equation}
E_{bi}(\left| \psi \right\rangle )\leq E_{MB}(\left| \psi \right\rangle ).
\end{equation}
The bipartite lower bound is $E_{bi}(\left| \psi \right\rangle )=\min
\{H_2(x_1),H_2(x_2),H_2(x_3)\}$, with $x_m=\frac 12(1+\sqrt{1-C_m^2}).$ The
concurrence $C_m=2\sqrt{\left|
d_{00}^{(m)}d_{11}^{(m)}-d_{01}^{(m)}d_{10}^{(m)}\right| },$ with $
d_{ii^{\prime }}^{(1)}=\sum_{j,k=0}^1A_{ijk}A_{i^{\prime }jk}^{*},$ $
d_{jj^{\prime }}^{(2)}=\sum_{i,k=0}^1A_{ijk}A_{ij^{\prime }k}^{*},$ $
d_{kk^{\prime }}^{(3)}=\sum_{i,j=0}^1A_{ijk}A_{ijk^{\prime }}^{*}.$ For a $
\left| GHZ-W^{\prime }\right\rangle $ state, the three concurrences are
equal to
\begin{equation}
C=\sqrt{\cos ^4\alpha +\frac 43\sin ^2\alpha \cos ^2\alpha +\frac 89\sin
^4\alpha }.
\end{equation}
So the lower bound of EMB of $\left| GHZ-W^{\prime }\right\rangle $ state is
$H_2(\frac 12(1+\sqrt{1-C^2})).$
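A minimal sketch of the bipartite lower bound (an addition, written in Python/NumPy) builds the single-qubit reduced matrices $d^{(m)}$ directly from the coefficients $A_{ijk}$ and takes the minimum of the three binary entropies; the test state below is the $\left| GHZ-W^{\prime }\right\rangle $ state at $x=\sin \alpha =\sqrt{3/5}$, whose five nonzero standard-form coefficients all equal $1/\sqrt{5}$, so the expected value is $H_2(\frac 12(1+\frac 1{\sqrt{5}}))\approx 0.8505$.
\begin{verbatim}
import numpy as np

def binary_entropy(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def bipartite_lower_bound(A):
    """E_bi = min_m H2(x_m), x_m = (1 + sqrt(1 - C_m^2))/2, where C_m is the
    concurrence 2*sqrt(|det d^(m)|) of the m-th single-qubit reduced matrix."""
    values = []
    for axis in range(3):
        M = np.moveaxis(A, axis, 0).reshape(2, 4)   # qubit `axis` versus the rest
        d = M @ M.conj().T                          # reduced density matrix d^(m)
        C = 2 * np.sqrt(abs(np.linalg.det(d)))
        x = 0.5 * (1 + np.sqrt(max(0.0, 1 - C ** 2)))
        values.append(binary_entropy(x))
    return min(values)

A = np.zeros((2, 2, 2), dtype=complex)
for (i, j, k) in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]:
    A[i, j, k] = 1 / np.sqrt(5)
print(bipartite_lower_bound(A))   # approximately 0.8505
\end{verbatim}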
\subsection{A special superposition state with equal tripartite EMB and
bipartite entanglement}
It can be seen from Fig.~1 that there is a superposition of GHZ and W'
state whose tripartite EMB and bipartite entanglement are equal. The state
is a $\left| GHZ-W^{\prime }\right\rangle $ state with $x=\sin \alpha =\sqrt{
\frac 35},$ we will denote it as $\left| \Omega \right\rangle $ in the
following. Then
\begin{eqnarray}
\left| \Omega \right\rangle &=&\frac 1{\sqrt{5}}(\left| 000\right\rangle
+\left| 011\right\rangle +\left| 101\right\rangle \nonumber \\
&&+\left| 110\right\rangle +\left| 111\right\rangle ).
\end{eqnarray}
The bipartite entanglement is $E_{bi}(\left| \Omega \right\rangle )=S(\rho
_C),$ where $\rho _C=Tr_{AB}(\left| \Omega \right\rangle \left\langle \Omega
\right| )=\frac 25\left| 0\right\rangle \left\langle 0\right| +\frac
15\left| 0\right\rangle \left\langle 1\right| +\frac 15\left| 1\right\rangle
\left\langle 0\right| +\frac 35\left| 1\right\rangle \left\langle 1\right| .$
Then
\begin{equation}
E_{bi}(\left| \Omega \right\rangle )=H_2[\frac 12(1+\frac 1{\sqrt{5}
})]\approx 0.8505.
\end{equation}
For the tripartite EMB, the eigenvalues of $B_0B_0^{\dagger }$ and $
B_1B_1^{\dagger }$ in Eq.(\ref{wav8}) are
\begin{eqnarray}
\lambda _{0\pm } &=&\frac 1{10}(2+K\pm \sqrt{5}K), \\
\lambda _{1\pm } &=&\frac 1{10}[3-K\pm \sqrt{5}(1-K)],
\end{eqnarray}
respectively, where $K=\left| a_1\right| ^2+a_0a_1^{*}+a_1a_0^{*}=\sin
^2\theta +\sin 2\theta \cos \varphi .$ Notice that $\lambda _{0+}+\lambda
_{1-}=\frac 1{10}(5+\sqrt{5}),\lambda _{0-}+\lambda _{1+}=\frac 1{10}(5-
\sqrt{5}),$ the minimal entropy summation in Eq.(\ref{wav8}) should be
achieved by maximal $K$ or minimal $K.$ The maximal and minimal values of $K$
are $\frac 12(1\pm \sqrt{5}),$ respectively. Either of them leads to the
same eigenvalues $\{0,0,$ $\frac 12(1+\frac 1{\sqrt{5}}),\frac 12(1-\frac 1{
\sqrt{5}})$ $\}.$ The tripartite EMB then is
\begin{equation}
E_{MB}(\left| \Omega \right\rangle )=H_2[\frac 12(1+\frac 1{\sqrt{5}
})]=E_{bi}(\left| \Omega \right\rangle ). \label{wav10}
\end{equation}
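The eigenvalue bookkeeping above can be cross-checked numerically. The sketch below (an addition, in Python/NumPy) evaluates $B_0B_0^{\dagger }$ and $B_1B_1^{\dagger }$ for the standard-form matrices of $\left| \Omega \right\rangle $ at one maximiser of $K$ (namely $\varphi =0$ and $\tan 2\theta =-2$) and compares the resulting entropy sum with $H_2[\frac 12(1+\frac 1{\sqrt{5}})]$.
\begin{verbatim}
import numpy as np

# Standard-form matrices of |Omega>: all q_i = 1/sqrt(5), gamma = 0.
A0 = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(5)
A1 = np.array([[0.0, 1.0], [1.0, 1.0]]) / np.sqrt(5)

theta = (np.pi - np.arctan(2)) / 2          # maximises K at phi = 0
a0, a1 = np.cos(theta), np.sin(theta)
B0 = a0 * A0 + a1 * A1
B1 = -a1 * A0 + a0 * A1

evals = np.concatenate([np.linalg.eigvalsh(B0 @ B0.T),
                        np.linalg.eigvalsh(B1 @ B1.T)])
evals = evals[evals > 1e-12]                # expect (5+sqrt(5))/10 and (5-sqrt(5))/10
E_MB = float(-np.sum(evals * np.log2(evals)))

p = 0.5 * (1 + 1 / np.sqrt(5))
H2 = float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))
print(E_MB, H2)                             # both approximately 0.8505
\end{verbatim}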
For any bipartition of $\left| \Omega \right\rangle $, the bipartition
relative entropy of entanglement $E_{bi}^R(\left| \Omega \right\rangle )$ is
just the entropy of the reduced density matrix, so $E_{bi}^R(\left| \Omega
\right\rangle )=E_{bi}(\left| \Omega \right\rangle ).$ However, the
tripartite relative entropy of entanglement $E_R(\left| \Omega \right\rangle
)$ should be no less than the bipartite one, as can be seen from the
definition of the relative entropy of entanglement. So we have
\begin{equation}
E_{bi}(\left| \Omega \right\rangle )=E_{MB}(\left| \Omega \right\rangle
)\geq E_R(\left| \Omega \right\rangle )\geq E_{bi}^R(\left| \Omega
\right\rangle ).
\end{equation}
Hence all of them are equal for the state $\left| \Omega \right\rangle .$ We
thus obtain the exact values of $E_{MB}(\left| \Omega \right\rangle )$ and of
the tripartite relative entropy of entanglement $E_R(\left| \Omega
\right\rangle )$ for the state $\left| \Omega \right\rangle .$
We may consider the minimal measurement entropy $E_{Hmin}$ defined in \cite
{Chernyavskiy} for the $\left| \Omega \right\rangle $ state. The measurement
basis can be $\left| \phi _a\right\rangle \left| \phi _b\right\rangle
,\left| \phi _a\right\rangle $ $\left| \phi _b^{\perp }\right\rangle ,\left|
\phi _a^{\perp }\right\rangle \left| \phi _b\right\rangle ,\left| \phi
_a^{\perp }\right\rangle $ $\left| \phi _b^{\perp }\right\rangle .$ The
probabilities of the measurements are $p_{00}^{\prime }=\frac 15[1+xy],$ $
p_{01}^{\prime }=\frac 15[1+x(1-y)],$ $p_{10}^{\prime }=\frac 15[1+(1-x)y],$
$p_{11}^{\prime }=\frac 15[1+(1-x)(1-y)],$ where $x=(\left| a_0+a_1\right|
^2-\left| a_0\right| ^2)$ $\in [\frac 12(1-\sqrt{5}),$ $\frac 12(1+\sqrt{5}
)],$ $y=(\left| b_0+b_1\right| ^2-\left| b_0\right| ^2)$ $\in [\frac 12(1-
\sqrt{5}),$ $\frac 12(1+\sqrt{5})].$ Then the minimal entropy of the
measurement is
\begin{equation}
E_{Hmin}(\left| \Omega \right\rangle )=\min -\sum_{i,j=0}^1p_{ij}^{\prime
}\log _2p_{ij}^{\prime }=H_2[\frac 12(1+\frac 1{\sqrt{5}})].
\end{equation}
We can see that $E_{Hmin}(\left| \Omega \right\rangle )=E_{MB}(\left| \Omega
\right\rangle ).$
\subsection{Conditions for equality of EMB and the minimal measurement entropy}
For a general tripartite state $\left| \psi \right\rangle $, we have $
E_{Hmin}(\left| \psi \right\rangle )\geq E_{MB}(\left| \psi \right\rangle ),$
where the equality holds only when the basis $\left| \phi _b\right\rangle
,\left| \phi _b^{\perp }\right\rangle $ coincides with the basis $\left|
\phi _b^{\prime }\right\rangle ,\left| \phi _b^{\prime \perp }\right\rangle $
. The basis $\left| \phi _b\right\rangle ,\left| \phi _b^{\perp
}\right\rangle $ are the eigenvectors of $B_0B_0^{\dagger }$, while the
basis $\left| \phi _b^{\prime }\right\rangle ,\left| \phi _b^{\prime \perp
}\right\rangle $ are the eigenvectors of $B_1B_1^{\dagger }.$ Hence only
when matrix $B_0B_0^{\dagger }$ commutes with $B_1B_1^{\dagger }$ can we
have $E_{Hmin}(\left| \psi \right\rangle )=E_{MB}(\left| \psi \right\rangle
).$
The $A_0$ and $A_1$ matrices for the standard form of tripartite state (\ref
{wav9}) are
\begin{equation}
A_0=\left[
\begin{array}{ll}
q_0 & 0 \\
0 & q_1
\end{array}
\right] ,\text{ }A_1=\left[
\begin{array}{ll}
0 & q_2 \\
q_3 & q_4e^{i\gamma }
\end{array}
\right] .
\end{equation}
Notice that $B_0B_0^{\dagger }+B_1B_1^{\dagger }=A_0^2+A_1A_1^{\dagger
}\equiv \mathcal{A},$ so the condition for the equality should be
\begin{equation}
\lbrack B_0B_0^{\dagger },\mathcal{A}]=0. \label{wav11}
\end{equation}
If we require that $B_0B_0^{\dagger }$ commutes with $B_1B_1^{\dagger }$ for all
measurements of Alice's qubit, then condition (\ref{wav11}) reduces to $
[A_0,A_1]=0,[A_0,A_1^{\dagger }]=0$ and $[A_1,A_1^{\dagger }]=0.$ These are
equivalent to $(q_0-q_1)q_2=0,$ $(q_0-q_1)q_3=0,$ $q_2=q_3$ and $q_2e^{i\gamma
}=q_3e^{-i\gamma }.$ The solutions should
be either
\begin{equation}
q_0=q_1,q_2=q_3,\gamma =0,
\end{equation}
or
\begin{equation}
q_2=q_3=0.
\end{equation}
The corresponding states are
\begin{eqnarray*}
\left| \Omega _1\right\rangle &=&q_0(\left| 000\right\rangle +\left|
011\right\rangle )+q_2(\left| 101\right\rangle +\left| 110\right\rangle
)+q_4\left| 111\right\rangle , \\
\left| \Omega _2\right\rangle &=&q_0\left| 000\right\rangle +q_1\left|
011\right\rangle +q_4e^{i\gamma }\left| 111\right\rangle .
\end{eqnarray*}
For the $\left| \Omega _1\right\rangle $ state, we choose $B_0=\cos \theta
A_0+\sin \theta A_1$ and set $\det (B_0)=0$ to determine $\theta ;$ then the
eigenvalues of $B_0B_0^{\dagger }$ are $0$ and $(TrB_0)^2.$ We have
\begin{equation}
E_{MB}(\left| \Omega _1\right\rangle )=E_{bi}(\left| \Omega _1\right\rangle
)=H_2[\frac 12(1+\sqrt{1-C^2})],
\end{equation}
with $C^2=4q_0^2[2(1-2q_0^2)-q_4^2].$ The tripartite relative entropy of
entanglement $E_R(\left| \Omega _1\right\rangle )$ is then equal
to $E_{bi}(\left| \Omega _1\right\rangle )$ since it lies between $
E_{MB}(\left| \Omega _1\right\rangle )$ and $E_{bi}(\left| \Omega
_1\right\rangle ).$ Similar results can be obtained for states $\left|
\Omega _1^{\prime }\right\rangle =q_0(\left| 000\right\rangle +\left|
101\right\rangle )+q_3(\left| 011\right\rangle +\left| 110\right\rangle
)+q_4\left| 111\right\rangle $ and $\left| \Omega _1^{\prime \prime
}\right\rangle =q_0(\left| 000\right\rangle +\left| 110\right\rangle
)+q_1(\left| 101\right\rangle +\left| 011\right\rangle )+q_4\left|
111\right\rangle .$
For the $\left| \Omega _2\right\rangle $ state, the equality of EMB and the
bipartite entanglement does not hold in general; however, when $q_1=0,$ we do
have the equality, although the situation is rather trivial.
\subsection{LOCC monotone for complete measurement of a pure tripartite
state}
A fundamental property of an entanglement measure is that it should not
increase under LOCC. Local measurement will not increase the entanglement of
a state on average. To illustrate the detailed meaning of EMB under LOCC,
let us consider the tripartite qubit state first. Given a pure tripartite
qubit state (\ref{wav6}) with coefficients $A_{ijk},$ we can calculate the
bound with formula (\ref{wav7}), where the default first-step measurement is
on Alice's qubit. We may instead first measure Bob's qubit or Charlie's
qubit; the results may differ. The bound should be the minimum of the three
by definition. We denote it as
\[
E_{MB}(\left| \psi \right\rangle )=\min \{E_{MB}^A(\left| \psi \right\rangle
),E_{MB}^B(\left| \psi \right\rangle ),E_{MB}^C(\left| \psi \right\rangle
)\},
\]
where $E_{MB}^i(\left| \psi \right\rangle )$ is calculated with formula (\ref
{wav7}) when the $i$th party is measured first. On the other hand, after a
measurement on Alice's party, the state left should be (\ref{wav7a}) with
probability (\ref{wav7c}) or (\ref{wav7b}) with probability (\ref{wav7d}).
The maximal average entanglement after a local measurement on Alice's party
can be denoted as
\begin{equation}
E_{LOCC}^A(\left| \psi \right\rangle
)=\max_{a_0,a_1}[\sum_{i=0}^1p_iE(\left| \psi _i\right\rangle )].
\end{equation}
We may also measure Bob's or Charlie's qubit first; the maximal average
entanglement after a local measurement is then
\begin{eqnarray}
E_{LOCC}(\left| \psi \right\rangle ) &=&\max \{E_{LOCC}^A(\left| \psi
\right\rangle ), \nonumber \\
&&E_{LOCC}^B(\left| \psi \right\rangle ),E_{LOCC}^C(\left| \psi
\right\rangle )\}.
\end{eqnarray}
If we have
\begin{equation}
E_{MB}(\left| \psi \right\rangle )\geq E_{LOCC}(\left| \psi \right\rangle ),
\label{wav12}
\end{equation}
then the EMB is an LOCC monotone; we may call it the measurement entanglement
and denote it as $E_M(\left| \psi \right\rangle ).$ In the following we will
prove that (\ref{wav12}) is true for a pure tripartite state in the sense of
complete measurement of the first party.
\begin{theorem}
The entanglement measurement bound for a pure tripartite qubit state is an LOCC
monotone.
\end{theorem}
Proof: Suppose that the EMB of a tripartite pure state $\left| \psi
\right\rangle $ is achieved by measuring Alice's party first. Then we have
\begin{equation}
E_{MB}(\left| \psi \right\rangle )=E_{MB}^{(A)}(\left| \psi \right\rangle
)\geq E^{(AB,C)}(\left| \psi \right\rangle )
\end{equation}
by (\ref{wav5}), where $E^{(AB,C)}(\left| \psi \right\rangle )$ is the
bipartite entanglement. When we measure Alice's or Bob's qubit of the $AB$
part, the average entanglement of the remaining part will not exceed $E^{(AB,C)}(\left|
\psi \right\rangle )$ according to the monotonicity of bipartite
entanglement \cite{Bennett}, namely,
\begin{eqnarray*}
E^{(AB,C)}(\left| \psi \right\rangle ) &\geq &E_{LOCC}^A(\left| \psi
\right\rangle ), \\
E^{(AB,C)}(\left| \psi \right\rangle ) &\geq &E_{LOCC}^B(\left| \psi
\right\rangle ).
\end{eqnarray*}
Similarly, we also have $E_{MB}^{(A)}(\left| \psi \right\rangle )\geq
E^{(B,AC)}(\left| \psi \right\rangle )$ and the monotonicity of bipartite
entanglement shows that $E^{(B,AC)}(\left| \psi \right\rangle )\geq
E_{LOCC}^C(\left| \psi \right\rangle ).$ Thus (\ref{wav12}) is proved, and
the theorem follows.
For a $d_1\times d_2\times d_3$ tripartite state with complete measurement
of each party, we have
\[
E_M(\left| \psi \right\rangle )=E_{MB}(\left| \psi \right\rangle )\geq
E_{LOCC}(\left| \psi \right\rangle ).
\]
Here a complete measurement means that the state of $N$ parties is projected
to a state of $N-1$ parties after the measurement.
\section{Conclusion}
An entanglement bound based on local measurements is introduced for
multipartite pure states. The measurement sequence is a dependent one: at
each step of the measurement, the basis relies on the former measurement results.
The entanglement measurement bound defined in this paper is a lower bound of
a multipartite entanglement measure called \textit{minimal measurement
entropy }which is based on independent measurements of the parties. The
entanglement measurement bound is also the upper bound of \textit{geometric
measure }and \textit{the relative entropy of entanglement. }The
coarse-graining property of the bound is derived. Based on this property and
the fact that in the bipartite case the bound is equal to the relative
entropy of entanglement, we obtain lower and upper bounds for the
relative entropy of entanglement of a tripartite state. For a tripartite
qubit state we derive the condition under which the lower and upper bounds coincide.
The exact relative entropy of entanglement follows for a class of tripartite
qubit states in the form of $\left| \Omega _1\right\rangle =q_0(\left|
000\right\rangle +\left| 011\right\rangle )+q_2(\left| 101\right\rangle
+\left| 110\right\rangle )+q_4\left| 111\right\rangle $ or their qubit
permutation states. It is an interesting phenomenon that the tripartite
relative entropy of entanglement is equal to the bipartite relative entropy
of entanglement while the tangle is nonzero for these states. For tripartite
qubit states, the bound itself is an entanglement monotone. Further work
can be done on whether the bound is an LOCC monotone in general.
\section*{Acknowledgment}
Funding by the National Natural Science Foundation of China (Grant No.
60972071), Natural Science Foundation of Zhejiang Province (Grant No.
Y6100421), Zhejiang Province Science and Technology Project (Grant No.
2009C31060), Zhejiang Province Higher Education Bureau Program (Grant No.
Y200906669) are gratefully acknowledged.
\end{document}
\begin{document}
\title[Analytically computable tangle for three-qubit mixed states]{Analytically computable tangle for three-qubit mixed states}
\author{Hiroyasu Tajima}
\address{Department of Physics, The University of Tokyo\\
4-6-1 Komaba, Meguro, Tokyo, 153-8505, Japan\\
TEL: +81-3-5452-6156 \quad FAX: +81-3-5452-6155\\
}
\ead{[email protected]}
\begin{abstract}
We present a new tripartite entanglement measure for three-qubit mixed states.
The new measure $t_{\mathrm{r}}(\rho)$, which we refer to as the r-tangle, is a variant of the tangle, but has a feature which the tangle does not have;
if we can derive an analytical form of $ t_{\mathrm{r}}(\rho)$ for a three-qubit mixed state $\rho$, we can also derive $t_{\mathrm{r}}(\rho')$ analytically for any states $\rho'$ which are SLOCC-equivalent to the state $\rho$.
The concurrence of two-qubit states also satisfies the feature, but the tangle does not.
These facts imply that the r-tangle $t_{\mathrm{r}}$ is the appropriate three-partite counterpart of the concurrence.
We also derive an analytical form of the r-tangle $t_{\mathrm{r}}$ for mixtures of a generalized GHZ state and a generalized W state, and hence for all states which are SLOCC-equivalent to them.
\end{abstract}
\section{Introduction}
Quantum tasks beyond the classical tasks, such as quantum
computing, teleportation, superdense coding, $etc.$, utilize the entanglement as an important resource \cite{1,2,3,4}.
On one hand, with the development of the quantum information processing, manipulating many particles entangled to each other has become possible \cite{manipulate1,manipulate2}.
On the other hand, however, the quantification of the entanglement is still a fundamental problem in the field of quantum information.
Vigorous effort has been made, and the problem has been solved for two-qubit pure and mixed states as well as for three-qubit pure states.
The concurrence \cite{5,6,7} and the negativity \cite{8} make it possible for us to quantify the entanglement analytically for two-qubit pure and mixed states.
The stochastic LOCC classification of three-qubit pure states revealed \cite{GHZW} that there exist two types of three-partite entanglement, namely the GHZ-type and the W-type. The tangle \cite{10} and $J_{5}$ \cite{18} enabled us to quantify the entanglements of these two types.
Using the concurrence, the tangle, the parameter $J_{5}$ and the parameter $Q_{\mathrm{e}}$ introduced in Ref. \cite{tajima}, a necessary and sufficient condition for the possibility of deterministic LOCC transformations is given for arbitrary three-qubit pure states \cite{tajima}.
We thereby understood the features of two-qubit pure and mixed states as well as three-qubit pure states.
Beyond the above, however, our understanding is still limited.
Although there has been much research on the tangle for three-qubit mixed states, its analytical form has been derived only in restricted regions \cite{t1,t2,Siebert,t3,t4}.
The approach for deterministic LOCC used in Ref. \cite{tajima} cannot be applied to three-qubit mixed states directly, because an important feature which holds for the tangle of pure states does $not$ hold for the tangle of mixed states; when we perform a measurement $\{M_{(i)}\}$ on the qubit $A$ of a three-qubit pure state $\left|\psi\right\rangle$, the tangle $\tau$ of the $i$th result $\left|\psi^{(i)}\right\rangle\equiv M_{(i)}\left|\psi\right\rangle/\sqrt{p_{(i)}}$ with the probability $p_{(i)}$ and the tangle $\tau$ of $\left|\psi\right\rangle$ satisfy the following equation:
\begin{eqnarray}
\tau(\left|\psi^{(i)}\right\rangle)=\alpha^2_{(i)}\tau(\left|\psi\right\rangle),\enskip\alpha_{(i)}\equiv\frac{\sqrt{\mathrm{det}M^\dagger_{(i)}M_{(i)}}}{p_{(i)}}.\label{multialphaprepre}
\end{eqnarray}
This feature does not generally hold for the tangle of mixed states; we give an example that $\tau(\rho_{(i)})\ne\alpha^2_{(i)}\tau(\rho)$ in Appendix A.
In the present paper, we introduce a new tripartite entanglement measure for three-qubit mixed states, which we refer to as the r-tangle.
The r-tangle $t_{\mathrm{r}}$ can be interpreted as a kind of the tangle; when the state is pure, the square of the r-tangle is equal to the tangle.
The r-tangle also satisfies the following equation:
\begin{equation}
t_{\mathrm{r}}(\rho_{(i)})=\alpha_{(i)} t_{\mathrm{r}}(\rho),\enskip\rho_{(i)}\equiv \frac{M_{(i)}\rho M^\dag_{(i)}}{p_{(i)}}, \label{multialphapre}
\end{equation}
where $\alpha_{(i)}$ is the same as in \eref{multialphaprepre}.
The feature \eref{multialphapre} has two merits.
First, using the r-tangle, we may be able to derive a necessary and sufficient condition of the possibility of deterministic LOCC transformations for arbitrary three-qubit mixed states;
because $t^2_{\mathrm{r}}(\rho_{(i)})=\alpha^2_{(i)} t^2_{\mathrm{r}}(\rho)$ holds, we may apply the approach in Ref. \cite{tajima} to the mixed states by employing $t^2_{\mathrm{r}}(\rho)$ as a substitute for the tangle $\tau(\rho)$.
Second, we can derive the r-tangle analytically in broader regions than the tangle;
if we can derive an analytical form of $t_{\mathrm{r}}(\rho)$ for a three-qubit mixed state $\rho$, the equation \eref{multialphapre} lets us derive $t_{\mathrm{r}}(\rho')$ analytically for any states $\rho'$ which are SLOCC-equivalent to the state $\rho$.
Moreover, we also derive an analytical form of the r-tangle for mixtures of a generalized GHZ state and a generalized W state.
For such states, the analytical form of the tangle also has been derived \cite{Siebert}.
Using \eref{multialphapre}, we can derive the r-tangle not only for the mixtures but also for any states which are SLOCC-equivalent to the mixtures.
Note again that we also cannot apply the approach to the tangle, because $\tau(\rho_{(i)})=\alpha^2_{(i)}\tau(\rho)$ does not hold generally.
\section{Main Results}
In the present section, we give two theorems for the r-tangle for three-qubit mixed states.
First, we give the definition of the r-tangle:
\begin{equation}
t_{\mathrm{r}}(\rho)=\min_{\rho=\sum q_{i}\left|\psi_{i}\right\rangle\left\langle\psi_{i}\right|}\sum_{i}q_{i}\sqrt{\tau}(\left|\psi_{i}\right\rangle),\label{r-tangle}
\end{equation}
where, for a pure state written in terms of its coefficients $C_{pqr}$ as
\begin{equation}
\left|\psi\right\rangle=\sum_{p,q,r}C_{pqr}\left|p q r\right\rangle,
\end{equation}
the quantity $\sqrt{\tau}(\left|\psi\right\rangle)$ is given by
\begin{eqnarray}
\sqrt{\tau}(\left|\psi\right\rangle)&=&\sqrt{4|d_{1}-2d_{2}+4d_{3}|},\\
d_{1}&=&C^2_{000}C^2_{111}+C^2_{001}C^2_{110}+C^2_{010}C^2_{101}+C^2_{100}C^2_{011},\\
d_{2}&=&C_{000}C_{111}C_{011}C_{100}+C_{000}C_{111}C_{101}C_{010}+C_{000}C_{111}C_{110}C_{001}\nonumber\\
&+&C_{011}C_{100}C_{101}C_{010}+C_{011}C_{100}C_{110}C_{001}+C_{101}C_{010}C_{110}C_{001},\\
d_{3}&=&C_{000}C_{110}C_{101}C_{011}+C_{111}C_{001}C_{010}C_{100}.
\end{eqnarray}
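For concreteness (this sketch is an addition and not part of the original definition), $\sqrt{\tau}$ can be evaluated directly from the coefficients; the Python/NumPy code below does so and reproduces the known value $\tau =1$ for the GHZ state.
\begin{verbatim}
import numpy as np

def sqrt_tau(C):
    """sqrt(tau) of a pure three-qubit state with coefficients C[p, q, r],
    evaluated from the polynomials d1, d2, d3 defined above."""
    d1 = (C[0,0,0]**2 * C[1,1,1]**2 + C[0,0,1]**2 * C[1,1,0]**2
          + C[0,1,0]**2 * C[1,0,1]**2 + C[1,0,0]**2 * C[0,1,1]**2)
    d2 = (C[0,0,0]*C[1,1,1]*C[0,1,1]*C[1,0,0]
          + C[0,0,0]*C[1,1,1]*C[1,0,1]*C[0,1,0]
          + C[0,0,0]*C[1,1,1]*C[1,1,0]*C[0,0,1]
          + C[0,1,1]*C[1,0,0]*C[1,0,1]*C[0,1,0]
          + C[0,1,1]*C[1,0,0]*C[1,1,0]*C[0,0,1]
          + C[1,0,1]*C[0,1,0]*C[1,1,0]*C[0,0,1])
    d3 = (C[0,0,0]*C[1,1,0]*C[1,0,1]*C[0,1,1]
          + C[1,1,1]*C[0,0,1]*C[0,1,0]*C[1,0,0])
    return np.sqrt(4 * abs(d1 - 2 * d2 + 4 * d3))

C = np.zeros((2, 2, 2), dtype=complex)
C[0, 0, 0] = C[1, 1, 1] = 1 / np.sqrt(2)    # GHZ state
print(sqrt_tau(C))                          # approximately 1.0
\end{verbatim}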
We refer to an ensemble $\{q_{i},\left|\psi_{i}\right\rangle\}$ of $\rho$ which minimizes the right-hand side of \eref{r-tangle} as the optimal ensemble.
We emphasize that $(t_{\mathrm{r}}(\rho))^2\ne\tau(\rho)$ because the mean of the square root is not equal to the square root of the mean.
The equality $(t_{\mathrm{r}}(\rho))^2=\tau(\rho)$ is valid only when $\rho$ is pure.
Second, we give two theorems for the r-tangle.
The first theorem below means that when we obtain the value of the r-tangle and the optimal ensemble of a state $\rho$, then we also obtain them for any states which are SLOCC-equivalent to $\rho$.
\begin{theorem}
Suppose that a measurement $\{M_{(j)}\}$ is performed on the qubit $A$ of an arbitrary three-qubit mixed state $\rho$ with the r-tangle $t_{\mathrm{r}}(\rho)$ and the optimal ensemble $\{q_{i},\left|\psi_{i}\right\rangle\}$.
Suppose also that the state $\rho_{(j)}=M_{(j)}\rho M^\dag_{(j)}/p_{(j)}$ is obtained as the $j$th result with the probability $p_{(j)}$.
The r-tangle $t_{\mathrm{r}}(\rho_{(j)})$ and the optimal ensemble $\{r_{i,(j)},\left|\varphi_{i,(j)}\right\rangle\}$ of $\rho_{(j)}$ are given by $t_{\mathrm{r}}(\rho)$ and $\{q_{i},\left|\psi_{i}\right\rangle\}$ as follows:
\begin{equation}
t_{\mathrm{r}}(\rho_{(j)})=\alpha_{(j)} t_{\mathrm{r}}(\rho),\enskip\alpha_{(j)}\equiv\frac{\det\sqrt{M^\dag_{(j)} M_{(j)}}}{p_{(j)}}\label{multialpha},
\end{equation}
\begin{eqnarray}
r_{i,(j)}&=&\frac{q_{i}}{p_{(j)}}\left\langle\psi_{i}\right|M^\dag_{(j)} M_{(j)}\left|\psi_{i}\right\rangle,
\\
\left|\varphi_{i,(j)}\right\rangle&=&\frac{M_{(j)}\left|\psi_{i}\right\rangle}{\sqrt{\left\langle\psi_{i}\right|M^\dag_{(j)} M_{(j)}\left|\psi_{i}\right\rangle}}.
\end{eqnarray}
\end{theorem}
We can use Theorem 1 as follows;
when we obtain the r-tangle for a mixed state $\rho$, we can also obtain it for any states in the same SLOCC class as $\rho$.
Similarly, when we obtain the optimal ensemble for a mixed state $\rho$, we can also obtain it for any states in the same SLOCC class as $\rho$.
Theorem 1 does not hold for the tangle $\tau(\rho)$;
note again that $\tau(\rho)\ne( t_{\mathrm{r}}(\rho))^2$.
We show an explicit example of the case $\tau(\rho_{(j)})\ne\alpha^2_{(j)}\tau(\rho)$ in Appendix A.
The second theorem below gives $ t_{\mathrm{r}}(\rho)$ analytically when $\rho$ is a mixture of generalized GHZ and generalized W states.
\begin{theorem}
We have
\begin{eqnarray}
t_{\mathrm{r}}(\rho(p)) &=&\left\{ \begin{array}{ll}
0 & (0\le p\le p_{0}) \\
2|ab|\frac{p-p_{0}}{1-p_{0}} & (p_{0}\le p\le1) \\
\end{array} \right\},\label{abcdf}
\end{eqnarray}
\begin{eqnarray}
p_{0}&=&\frac{s^{2/3}}{1+s^{2/3}},\\
s&=&\frac{4cdf}{a^2b}>0,
\end{eqnarray}
for the family of three-qubit mixed states
\begin{equation}
\rho(p)=p\left|\mathrm{gGHZ}_{a,b}\right\rangle\left\langle\mathrm{gGHZ}_{a,b}\right|+(1-p)\left|\mathrm{gW}_{c,d,f}\right\rangle\left\langle\mathrm{gW}_{c,d,f}\right|,
\end{equation}
which consists of a generalized GHZ state
\begin{equation}
\left|\mathrm{gGHZ}_{a,b}\right\rangle=a\left|000\right\rangle+b\left|111\right\rangle,\enskip|a|^2+|b|^2=1
\end{equation}
and a generalized W state
\begin{equation}
\left|\mathrm{gW}_{c,d,f}\right\rangle=c\left|001\right\rangle+d\left|010\right\rangle+f\left|100\right\rangle,\enskip |c|^2+|d|^2+|f|^2=1.
\end{equation}
\end{theorem}
Note that the analytical form of $ t_{\mathrm{r}}(\rho(p))$ is simpler than that of $\tau(\rho(p))$ in Ref. \cite{Siebert}.
The r-tangle \eref{abcdf} consists of two straight lines as a function of $p$, whereas the function $\tau(\rho(p))$ in Ref. \cite{Siebert} consists of two straight lines and a curve.
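As an illustration only (an addition to the text), the closed form \eref{abcdf} is straightforward to evaluate; the Python sketch below assumes, as in Theorem 2, that $s=4cdf/(a^2b)$ is positive, and uses the coefficients of the mixture considered in Appendix A as an example.
\begin{verbatim}
import numpy as np

def r_tangle_mixture(p, a, b, c, d, f):
    """Closed-form r-tangle of rho(p), Eq. (abcdf), assuming s = 4cdf/(a^2 b) > 0."""
    s = 4 * c * d * f / (a ** 2 * b)
    p0 = s ** (2 / 3) / (1 + s ** (2 / 3))
    if p <= p0:
        return 0.0
    return 2 * abs(a * b) * (p - p0) / (1 - p0)

a = b = 1 / np.sqrt(2)                     # generalized GHZ coefficients
c = d = f = 1 / np.sqrt(3)                 # generalized W coefficients
print([r_tangle_mixture(p, a, b, c, d, f) for p in (0.5, 0.8, 1.0)])
# approximately [0.0, 0.46, 1.0], since p0 is about 0.63 here
\end{verbatim}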
\section{Proofs of Theorems}
\textbf{Proof of Theorem 1}:
We first consider the case in which $\rho$ is pure.
In this case, we have the equality $t_{\mathrm{r}}(\rho)=\sqrt{\tau(\rho)}$, and therefore Theorem 1 is included in Lemma 1 of Ref. \cite{tajima}.
Next, we consider the case in which $\rho$ is mixed.
Let us refer to the optimal ensembles of $\rho$ and $\rho_{(j)}$ as $\{q_{i},\left|\psi_{i}\right\rangle\}$ and $\{r_{k_{j}},\left|\varphi_{k_{j}}\right\rangle\}$, respectively.
We will prove that $t_{\mathrm{r}}(\rho_{(j)})=\alpha_{(j)}t_{\mathrm{r}}(\rho)$ and $\{r_{k_{j}},\left|\varphi_{k_{j}}\right\rangle\}=\{r_{i,(j)},\left|\varphi_{i,(j)}\right\rangle\}$.
First, we consider the case in which $\sqrt{\mbox{det}(M^{\dag}_{(j)}M_{(j)})}=0$ holds.
In the present case, the qubit $A$ becomes separable after the measurement, and thus the equation $ t_{\mathrm{r}}(\rho_{(j)})=0$ also holds.
Thus, \eref{multialpha} is valid.
We can also prove that the ensemble $\{r_{i(j)},\left|\varphi_{i(j)}\right\rangle\}$ is optimal, because the states $\left|\varphi_{i(j)}\right\rangle$ are separable or biseparable states: the qubits $A$ of the states $\left|\varphi_{i(j)}\right\rangle$ are separable.
Thus Theorem 1 is valid when $\sqrt{\mbox{det}(M^{\dag}_{(j)}M_{(j)})}=0$ holds.
Second, let us consider the case in which $\sqrt{\mbox{det}(M^{\dag}_{(j)}M_{(j)})}\ne0$.
First, we show that if we can prove the following two inequalities, we can also prove Theorem 1:
\begin{eqnarray}
\alpha_{(j)}t_{\mathrm{r}}(\rho)\le\sum_{k_{j}}r_{k_{j}}t_{\mathrm{r}}(\left|\varphi_{k_{j}}\right\rangle),\label{1}\\
\sum_{i}r_{i(j)}t_{\mathrm{r}}(\left|\varphi_{i(j)}\right\rangle)\le\alpha_{(j)}\sum_{i}q_{i}t_{\mathrm{r}}(\left|\psi_{i}\right\rangle).\label{2}
\end{eqnarray}
Because $\{r_{k_{j}},\left|\varphi_{k_{j}}\right\rangle\}$ is the optimal ensemble of $\rho_{(j)}$ and because $\{r_{i,(j)},\left|\varphi_{i,(j)}\right\rangle\}$ is an ensemble of $\rho_{(j)}$,
\begin{equation}
t_{\mathrm{r}}(\rho_{(j)})=\sum_{k_{j}}r_{k_{j}}t_{\mathrm{r}}(\left|\varphi_{k_{j}}\right\rangle)\le\sum_{i}r_{i(j)}t_{\mathrm{r}}(\left|\varphi_{i(j)}\right\rangle)
\end{equation}
is valid.
Note that $\{q_{i},\left|\psi_{i}\right\rangle\}$ is the optimal ensemble of $\rho$, and thus $t_{\mathrm{r}}(\rho)=\sum_{i}q_{i}t_{\mathrm{r}}(\left|\psi_{i}\right\rangle)$.
Thus, if \eref{1} and \eref{2} hold,
\begin{equation}
\alpha_{(j)}t_{\mathrm{r}}(\rho)\le t_{\mathrm{r}}(\rho_{(j)})\le\sum_{i}r_{i(j)}t_{\mathrm{r}}(\left|\varphi_{i(j)}\right\rangle)\le\alpha_{(j)}t_{\mathrm{r}}(\rho)\label{cycle}
\end{equation}
also holds.
We can reduce \eref{cycle} to
\begin{equation}
\alpha_{(j)}t_{\mathrm{r}}(\rho)= t_{\mathrm{r}}(\rho_{(j)})=\sum_{i}r_{i(j)}t_{\mathrm{r}}(\left|\varphi_{i(j)}\right\rangle)=\alpha_{(j)}t_{\mathrm{r}}(\rho),
\end{equation}
and thus if we can prove Eqs. \eref{1} and \eref{2}, we can also prove Theorem 1.
Let us prove \eref{1} and \eref{2}.
First, we prove \eref{1}.
We prove \eref{1} by introducing an ensemble $\{r_{k_{j}}/L^2_{k_{j}},\left|\tilde{\varphi}_{k_{j}}\right\rangle\}$ of $\rho$ which satisfies
\begin{equation}
\alpha_{(j)}\sum_{k_{j}}\frac{r_{k_{j}}}{L^2_{k_{j}}}t_{\mathrm{r}}(\left|\tilde{\varphi}_{k_{j}}\right\rangle)=\sum_{k_{j}}r_{k_{j}}t_{\mathrm{r}}(\left|\varphi_{k_{j}}\right\rangle).\label{confor1}
\end{equation}
If we can introduce such an ensemble of $\rho$, we can prove \eref{1} from \eref{confor1}; note that because $\{r_{k_{j}}/L^2_{k_{j}},\left|\tilde{\varphi}_{k_{j}}\right\rangle\}$ is an ensemble of $\rho$,
\begin{equation}
t_{\mathrm{r}}(\rho)\le\sum_{k_{j}}\frac{r_{k_{j}}}{L^2_{k_{j}}}t_{\mathrm{r}}(\left|\tilde{\varphi}_{k_{j}}\right\rangle)
\end{equation}
is valid.
We obtain $\{r_{k_{j}}/L^2_{k_{j}},\left|\tilde{\varphi}_{k_{j}}\right\rangle\}$ explicitly as follows.
Now we consider the case in which $\mbox{det}(M_{(j)})\ne0$ holds, and thus we can take $M^{-1}_{(j)}$, which is the inverse of $M_{(j)}$.
We take the ensemble $\{r_{k_{j}}/L^2_{k_{j}},\left|\tilde{\varphi}_{k_{j}}\right\rangle\}$ as follows:
\begin{eqnarray}
\left|\tilde{\varphi}_{k_{j}}\right\rangle\equiv L_{k_{j}}\sqrt{p_{(j)}}M^{-1}_{(j)}\left|\varphi_{k_{j}}\right\rangle,
\end{eqnarray}
where $L_{k_{j}}$ are normalization constants.
We can prove that $\{r_{k_{j}}/L^2_{k_{j}},\left|\tilde{\varphi}_{k_{j}}\right\rangle\}$ satisfies \eref{confor1}, as follows;
\begin{eqnarray}
\sum_{k_{j}}r_{k_{j}} t_{\mathrm{r}}(\left|\varphi_{k_{j}}\right\rangle)
=\sum_{k_{j}}r_{k_{j}} t_{\mathrm{r}}\left(\frac{M_{(j)}}{L_{k_{j}}\sqrt{p_{(j)}}}\left|\tilde{\varphi}_{k_{j}}\right\rangle\right)\nonumber\\
=\sum_{k_{j}}r_{k_{j}}\frac{\sqrt{\mbox{det}(M^\dag_{(j)}M_{(j)})}}{L^2_{k_{j}}p_{(j)}} t_{\mathrm{r}}(\left|\tilde{\varphi}_{k_{j}}\right\rangle)=\alpha_{(j)}\sum_{k_{j}}\frac{r_{k_{j}}}{L^2_{k_{j}}} t_{\mathrm{r}}(\left|\tilde{\varphi}_{k_{j}}\right\rangle).
\end{eqnarray}
Finally, let us prove \eref{2}.
Note that we can write $\{r_{i(j)},\left|\varphi_{i(j)}\right\rangle\}$ as
\begin{equation}
r_{i(j)}=\frac{q_{i}}{N^2_{ij}},\enskip\left|\varphi_{i(j)}\right\rangle= N_{ij}\frac{M_{(j)}}{\sqrt{p_{(j)}}}\left|\psi_{i}\right\rangle,\enskip N_{ij}\equiv \frac{\sqrt{p_{(j)}}}{\sqrt{\left\langle\psi_{i}\right|M^\dag_{(j)} M_{(j)}\left|\psi_{i}\right\rangle}}.
\end{equation}
Thus, we can derive \eref{2} as follows:
\begin{eqnarray}\fl
\sum_{i}r_{i(j)} t_{\mathrm{r}}(\left|\varphi_{i(j)}\right\rangle)=\sum_{i}\frac{q_{i}}{N^2_{ij}} t_{\mathrm{r}}\left(N_{ij}\frac{M_{(j)}}{\sqrt{p_{(j)}}}\left|\psi_{i}\right\rangle\right)\nonumber\\\fl
=\sum_{i}\frac{q_{i}}{N^2_{ij}}\frac{N^2_{ij}\sqrt{\mbox{det}(M^\dag_{(j)}M_{(j)})}}{p_{(j)}} t_{\mathrm{r}}(\left|\psi_{i}\right\rangle)=\frac{\sqrt{\mbox{det}(M^\dag_{(j)}M_{(j)})}}{p_{(j)}}\sum_{i}q_{i} t_{\mathrm{r}}(\left|\psi_{i}\right\rangle).
\end{eqnarray}
This completes the proof of Theorem 1.
$\Box$
\textbf{Proof of Theorem 2}:
We prove the present theorem by a method similar to the one used in Ref. \cite{Siebert}.
First, we prove the following lemma.
\begin{lemma}
If there is a function $f(p)$ which satisfies the following three conditions, it must be $ t_{\mathrm{r}}(\rho(p))$:
\begin{description}
\item[Condition 1]{The following inequality holds for any $p$ and $\varphi$:
\begin{equation}
f(p)\le t_{\mathrm{r}}(\left|p,\varphi\right\rangle),\label{condition1}
\end{equation}
where
\begin{eqnarray}
\left|p,\varphi\right\rangle&\equiv&\sqrt{p}\left|\mathrm{gGHZ}_{a,b}\right\rangle+\sqrt{1-p}e^{i(\varphi-\tilde{\varphi}/3)}\left|\mathrm{gW}_{c,d,f}\right\rangle,\\
\tilde{\varphi}&\equiv& \mathrm{arg}\left[\frac{4cdf}{a^2b}\right].
\end{eqnarray}}
\item[Condition 2]{There exists an ensemble $\{p_{i},\left|q_{i},\varphi_{i}\right\rangle\}$ of $\rho(p)$ which satisfies the following equation:
\begin{equation}
f(p)=\sum_{i}p_{i} t_{\mathrm{r}}(\left|q_{i},\varphi_{i}\right\rangle).\label{condition2}
\end{equation}}
\item[Condition 3]{The function $f(p)$ is a convex function.}
\end{description}
\end{lemma}
\textit{Proof}: Because of Condition 2, we have $f(p)\ge t_{\mathrm{r}}(\rho(p))$.
We also prove $f(p)\le t_{\mathrm{r}}(\rho(p))$ as follows:
\begin{eqnarray}
t_{\mathrm{r}}(\rho(p))&=&\sum_{i}\tilde{p}_{i} t_{\mathrm{r}}(\left|\tilde{q}_{i},\tilde{\varphi}_{i}\right\rangle)\nonumber\\
&\ge&\sum_{i}\tilde{p}_{i}f(\tilde{q}_{i})\ge f(\sum_{i}\tilde{p}_{i}\tilde{q}_{i})=f(p),
\end{eqnarray}
where $\{\tilde{p}_{i},\left|\tilde{q}_{i},\tilde{\varphi}_{i}\right\rangle\}$ is the optimal ensemble of $\rho(p)$. We have derived the first inequality from Condition 1 and the second inequality from Condition 3. ($\Box$)
Now we only have to prove that the right-hand side of \eref{abcdf}, which we refer to as $g(p)$, satisfies Conditions 1--3.
First, the function $g(p)$ is clearly convex, and thus Condition 3 holds.
Second, we can take the ensemble of $\rho(p)$ which satisfies \eref{condition2} as follows:
\begin{eqnarray}\fl
\rho(p) =\left\{\begin{array}{ll}
\frac{p_{0}-p}{p_{0}}\left|0,0\right\rangle\left\langle 0,0\right|+\frac{p}{3p_{0}}\sum^{2}_{n=0}\left|p_{0},\frac{2n\pi}{3}\right\rangle\left\langle p_{0},\frac{2n\pi}{3}\right| & (0\le p\le p_{0}) \\
\frac{p-p_{0}}{1-p_{0}}\left|1,0\right\rangle\left\langle 1,0\right|+\frac{1-p}{3(1-p_{0})}\sum^{2}_{n=0}\left|p_{0},\frac{2n\pi}{3}\right\rangle\left\langle p_{0},\frac{2n\pi}{3}\right| & (p_{0}\le p\le1) \\
\end{array} \right\}.\label{ensemble}
\end{eqnarray}
To prove that the above ensembles satisfy \eref{condition2}, we only have to notice that
\begin{eqnarray}
t_{\mathrm{r}}(\left|p,\varphi\right\rangle)=2|ab|\sqrt{\left|p^2-\sqrt{p(1-p)^3}e^{3i\varphi}\frac{4cdf}{a^2b}\right|},\label{amountoftau}
\end{eqnarray}
and especially
\begin{eqnarray}
t_{\mathrm{r}}(\left|1,0\right\rangle)&=&2|ab|,\label{amountof10}\\
t_{\mathrm{r}}(\left|0,0\right\rangle)&=& t_{\mathrm{r}}(\left|p_{0},\frac{2n\pi}{3}\right\rangle)=0.\label{amountof00}
\end{eqnarray}
Thus, Condition 2 is valid.
Finally, let us prove Condition 1.
Because $ t_{\mathrm{r}}$ is non-negative, $g(p)$ clearly satisfies Condition 1 for $0\le p\le p_{0}$.
Note that for $p_{0}\le p\le1$, the function $g(p)$ is a linear function of $p$ and that the following three expressions hold:
\begin{eqnarray}
t_{\mathrm{r}}(\left|p,0\right\rangle)&\le& t_{\mathrm{r}}(\left|p,\varphi\right\rangle),\\
t_{\mathrm{r}}(\left|1,0\right\rangle)&=&g(1),\\
t_{\mathrm{r}}(\left|p_{0},0\right\rangle)&=&g(p_{0}).
\end{eqnarray}
Thus, if we can prove $ t_{\mathrm{r}}(\left|p,0\right\rangle)$ is concave for $p_{0}\le p\le 1$, then we can also prove $g(p)\le t_{\mathrm{r}}(\left|p,\varphi\right\rangle)$ for $p_{0}\le p\le 1$.
Let us prove the concavity of $ t_{\mathrm{r}}(\left|p,0\right\rangle)$.
For simplicity, we refer to $4cdf/(a^2b)$ as $s$.
Then,
\begin{eqnarray}\fl
&&\frac{d^2}{dp^2} t_{\mathrm{r}}(\left|p,0\right\rangle)=2|ab|\frac{d^2}{dp^2}\sqrt{p^2-\sqrt{p(1-p)^3}s}\\\fl
&=&-\frac{1}{4( t_{\mathrm{r}}(\left|p,0\right\rangle))^3}\left(s\frac{12p^3+3p/2}{\sqrt{p(1-p)}}+s^2\frac{-4p^4+20p^3-3p^2-2p+1}{4p(1-p)}\right).\label{twoterms}
\end{eqnarray}
The term $12p^3+3p/2$ is clearly positive.
The term $-4p^4+20p^3-3p^2-2p+1$ is also positive for $0\le p\le1$ as shown in Fig.~\ref{kensyouzu}.
\begin{figure}
\caption{The graph of the function $-4p^4+20p^3-3p^2-2p+1$ from 0 to 1.}
\label{kensyouzu}
\end{figure}
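As a small numerical cross-check of Fig.~\ref{kensyouzu} (an addition, not part of the original argument), one can verify that the quartic $-4p^4+20p^3-3p^2-2p+1$ has no real root in $[0,1]$ and is therefore positive on the whole interval, since it equals $1$ at $p=0$.
\begin{verbatim}
import numpy as np

coeffs = [-4, 20, -3, -2, 1]                 # -4p^4 + 20p^3 - 3p^2 - 2p + 1
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(sorted(real_roots))                    # no root lies in [0, 1]
print(np.polyval(coeffs, np.linspace(0, 1, 11)))   # all sampled values positive
\end{verbatim}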
Therefore, $t_{\mathrm{r}}(\left|p,0\right\rangle)$ is concave for $p_{0}\le p\le 1$, and thus the function $g(p)$ satisfies Conditions 1--3.
Hence, because of Lemma 1, the equation $t_{\mathrm{r}}(\rho(p))=g(p)$ is valid.
$\Box$
\section{Conclusion}
In the present article, we introduced a new entanglement measure which we call the r-tangle.
The r-tangle $t_{\mathrm{r}}$ satisfies $ t_{\mathrm{r}}(\rho_{(j)})=\alpha_{(j)} t_{\mathrm{r}}(\rho)$.
Thanks to the feature, if we derive an analytical form of the r-tangle for a three-qubit mixed state, we can also derive the r-tangle analytically for any states which are SLOCC-equivalent to the state.
Note that the concurrence also satisfies a similar feature $C(\rho_{(j)})=\alpha_{(j)}C(\rho)$, and that the tangle $\tau(\rho)$ does $not$ satisfy such a feature; we show an example with $\tau(\rho_{(j)})\ne\alpha^2_{(j)}\tau(\rho)$ in Appendix A.
These facts imply that we should consider the r-tangle instead of the tangle as the three-partite counterpart of the concurrence.
Moreover, we derived the analytical form of the r-tangle for mixtures of a generalized GHZ state and a generalized W state.
Although the tangle has also been derived for such states \cite{Siebert}, the form of $t_{\mathrm{r}}(\rho(p))$ is simpler than that of $\tau(\rho(p))$ as a function of $p$.
Using $ t_{\mathrm{r}}(\rho_{(j)})=\alpha_{(j)} t_{\mathrm{r}}(\rho)$, we can derive the r-tangle not only for the mixtures but also for any state which is SLOCC-equivalent to them.
We cannot apply this approach to the tangle, because $\tau(\rho_{(j)})=\alpha^2_{(j)}\tau(\rho)$ does not hold generally.
\appendix
\section{A counterexample for the tangle}
In the present appendix, we will show a counterexample of $\tau(\rho_{(j)})=\alpha^2_{(j)}\tau(\rho)$.
Let us consider the following three-qubit mixed state:
\begin{equation}\fl
\rho=\frac{4}{5}\left|\mathrm{gGHZ}_{1/\sqrt{2},1/\sqrt{2}}\right\rangle\left\langle\mathrm{gGHZ}_{1/\sqrt{2},1/\sqrt{2}}\right|+\frac{1}{5}\left|\mathrm{gW}_{1/\sqrt{3},1/\sqrt{3},1/\sqrt{3}}\right\rangle\left\langle\mathrm{gW}_{1/\sqrt{3},1/\sqrt{3},1/\sqrt{3}}\right|.
\end{equation}
According to a result in Ref. \cite{Siebert}, we have $\tau(\rho)=(63-\sqrt{465})/90$.
Let us perform the following measurement on the qubit $A$ of $\rho$:
\begin{eqnarray}
M_{(0)}=\left(
\begin{array}{cc}
1 & 0 \\
0 & \frac{1}{\sqrt{10}} \end{array}
\right),\enskip\enskip\enskip
M_{(1)}=\left(
\begin{array}{cc}
0 & 0 \\
0 & \frac{3}{\sqrt{10}} \end{array}
\right).
\end{eqnarray}
The probability $p_{(0)}$ that we obtain the result 0 is $29/50$, for which the state becomes
\begin{eqnarray}
\rho_{(0)}=\frac{22}{29}\left|\mathrm{gGHZ}_{\sqrt{10/11},\sqrt{1/11}}\right\rangle\left\langle\mathrm{gGHZ}_{\sqrt{10/11},\sqrt{1/11}}\right|\nonumber\\
+\frac{7}{29}\left|\mathrm{gW}_{\sqrt{10/21},\sqrt{10/21},\sqrt{1/21}}\right\rangle\left\langle\mathrm{gW}_{\sqrt{10/21},\sqrt{10/21},\sqrt{1/21}}\right|.
\end{eqnarray}
According to the result in Ref. \cite{Siebert}, $\tau(\rho_{(0)})=160(9-\sqrt{6})/7569$.
Thus,
\begin{eqnarray}
\alpha^2_{(0)}=\frac{\mbox{det}M^\dagger_{(0)} M_{(0)}}{p^2_{(0)}}=\frac{250}{841}\nonumber\\
\ne\frac{1600(9-\sqrt{6})}{841(63-\sqrt{465})}=\frac{\tau(\rho_{(0)})}{\tau(\rho)}.
\end{eqnarray}
$\Box$
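For completeness (an addition, not in the original), the quoted analytical values can be compared numerically:
\begin{verbatim}
import numpy as np

tau_rho  = (63 - np.sqrt(465)) / 90          # tau(rho) from Ref. [Siebert]
tau_rho0 = 160 * (9 - np.sqrt(6)) / 7569     # tau(rho_(0)) from Ref. [Siebert]
p0       = 29 / 50
alpha_sq = (1 / 10) / p0 ** 2                # det(M_(0)^dag M_(0)) / p_(0)^2
print(alpha_sq, tau_rho0 / tau_rho)          # about 0.2973 versus 0.3008
\end{verbatim}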
\section*{References}
\end{document}
\begin{document}
\title{Exact and heuristic algorithms for Cograph Editing}
\abstract{We present a dynamic programming algorithm for optimally solving the \textsc{Cograph Editing} problem on an $n$-vertex graph that runs in $O(3^n n)$ time and uses $O(2^n)$ space.
In this problem, we are given a graph $G = (V, E)$ and the task is to find a smallest possible set $F \subseteq V \times V$ of vertex pairs such that $(V, E \bigtriangleup F)$ is a cograph (or $P_4$-free graph), where $\bigtriangleup$ represents the symmetric difference operator.
We also describe a technique for speeding up the performance of the
algorithm in practice.
Additionally, we present a heuristic for solving the \textsc{Cograph Editing}
problem which produces good results on small to medium datasets. In
applications it is much more important to find the ground truth than some
optimal solution. For the first time, we evaluate whether the cograph
property is strict enough to recover the true graph from data to which noise has been added.
}
\section{Introduction}
A \emph{cograph}, or \emph{complement reducible graph}, is a simple undirected graph that can be built from isolated vertices using the operations of disjoint union and complement.
Specifically:
\begin{enumerate}
\item A single-vertex graph is a cograph.
\item The disjoint union of two cographs is a cograph.
\item The complement of a cograph is a cograph.
\end{enumerate}
There are several equivalent definitions of cographs \cite{corneil1981complement}, perhaps the simplest to state being that cographs are exactly the graphs that contain no induced $P_4$ (path on 4 vertices).
As a subclass of perfect graphs, they enjoy advantageous algorithmic properties: many problems that are NP-hard on general graphs, such as \textsc{Clique} and \textsc{Chromatic Number}, become polynomial-time for cographs.
Cographs can be recognised in linear time \cite{corneil1985linear}.
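For illustration (this sketch is an addition and is \emph{not} the linear-time recognition algorithm of \cite{corneil1985linear}), the induced-$P_4$ characterisation gives an obvious brute-force test:
\begin{verbatim}
from itertools import combinations, permutations

def is_cograph(vertices, edges):
    """Brute-force check that the graph contains no induced P4."""
    E = {frozenset(e) for e in edges}
    adj = lambda u, v: frozenset((u, v)) in E
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            # induced path a-b-c-d: exactly these three edges on the four vertices
            if (adj(a, b) and adj(b, c) and adj(c, d)
                    and not adj(a, c) and not adj(a, d) and not adj(b, d)):
                return False
    return True

print(is_cograph(range(4), [(0, 1), (1, 2), (2, 3)]))            # False: a P4
print(is_cograph(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))    # True: C4
\end{verbatim}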
A more difficult problem arises when we ask for the minimum number of
``edge editing'' operations required to transform a given graph into a
cograph.
Three problem variants can be distinguished: If we may only insert edges, we
have the \textsc{Cograph Completion} problem; if we may only delete edges,
we have the \textsc{Cograph Deletion} problem; if we may both insert and
delete edges, we have the \textsc{Cograph Editing} problem.
When framed as decision problems, in which the task is to determine whether
such a transformation can be achieved using at most a given number $k$ of
operations, all three problem variants are NP-complete
\cite{el-mallah1988complexity,liu2012complexity}.
(Note that the edge completion and deletion problems can be trivially
transformed into each other by taking complements.)
A general result of Cai, when combined with linear-time recognition of cographs, directly gives an $O(6^kn)$ fixed-parameter tractable (FPT) algorithm \cite{cai1996fixed} for \textsc{Cograph Editing}; more recently, an $O(4.612^k + n^{4.5})$ FPT algorithm \cite{liu2012complexity} has been described.
Concerning applications, we focus in particular on a recently developed approach for inferring phylogenetic trees from gene orthology data that involves solving the \textsc{Cograph Editing} problem \cite{hellmuth2015phylogenomics}.
Briefly, in this setting we may represent genes as vertices in a graph, with pairs of vertices linked by an edge whenever they are deemed to have arisen through a speciation (as opposed to gene duplication) event.
In a perfect world, this graph would be a cograph, and its cotree (see below) would correspond to the gene tree, which can be combined with gene trees inferred from other gene families to infer a species tree.
In the real world, measurement errors---false positive and false negative inferences of orthology---frequently cause the inferred orthology graph not to be a cograph, and in this case it is reasonable to ask for the smallest number $k^*$ of edge edits that would transform it into one.
For practical instances arising from orthology-based phylogenetic analysis, it is often the case that $k^* > n$ or even $k^* = \Omega(m)$, limiting the effectiveness of FPT approaches parameterised by the number of edits and motivating the development of ``traditional'' exponential-time algorithms---that is, algorithms that require time exponential in the number of vertices $n$.
We first give a straightforward dynamic programming algorithm that solves the more general edge-weighted versions of each of the three problem variants in $O(3^n n)$ time and $O(2^n)$ space, and which additionally offers simple implementation and predictable running time and memory usage.
We then describe modifications that are likely to significantly improve running time in practice, without sacrificing optimality (though also without improving the worst-case bound).
In addition, we describe and evaluate a heuristic for solving the
\textsc{Cograph Editing} problem based on an algorithm by Lokshtanov \etal
~\cite{lokshtanov10characterizing}.
\subsection{Definitions}
Every cograph $G = (V, E)$ determines a unique vertex-labelled tree $T_G = (U, D, \h : U \to \{0, 1\})$ called the \emph{cotree} of $G$, which encodes the sequence of basic operations needed to build $G$ from individual vertices.
The vertices of $T_G$ correspond to induced subgraphs of $G$: leaves in $T_G$ correspond to individual vertices of $G$, and internal vertices to the subgraphs produced by combining the child subgraphs in one of two ways, according to whether the vertex is labelled 0 or 1 by $\h$.
0-vertices specify \emph{parallel} combinations, which combine the subgraphs represented by the child vertices into a single graph via disjoint union, while 1-vertices specify \emph{serial} combinations, which combine these subgraphs into a single graph by adding all possible edges between vertices coming from different children (or equivalently, by complementing, forming the disjoint union, and then complementing again).
The root is labelled 1, and every path from the root alternates between 0-vertices and 1-vertices.
Given a cotree $T_G$, a postorder traversal that begins with a distinct single-vertex graph at each leaf and then applies the series or parallel combination operations specified at the internal nodes will culminate, at the root node, in the corresponding cograph $G$.
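As a small illustrative sketch (an addition to the text; the nested-tuple encoding of a cotree is an assumption made only for this example), the postorder construction just described can be written as follows:
\begin{verbatim}
from itertools import product

def cograph_from_cotree(node):
    """Postorder evaluation of a cotree: returns (vertices, edges).
    A leaf is any non-tuple vertex name; an internal node is a pair
    (label, children) with label 1 (series) or 0 (parallel)."""
    if not isinstance(node, tuple):
        return {node}, set()
    label, children = node
    vertices, edges, child_vertex_sets = set(), set(), []
    for child in children:
        v, e = cograph_from_cotree(child)
        vertices |= v
        edges |= e
        child_vertex_sets.append(v)
    if label == 1:   # series node: join all pairs coming from different children
        for i in range(len(child_vertex_sets)):
            for j in range(i + 1, len(child_vertex_sets)):
                edges |= {frozenset(p) for p in
                          product(child_vertex_sets[i], child_vertex_sets[j])}
    return vertices, edges

# A series node over two parallel pairs yields C4 = K_{2,2}, which is a cograph.
print(cograph_from_cotree((1, [(0, ['a', 'b']), (0, ['c', 'd'])])))
\end{verbatim}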
\section{An $O(3^n n)$-time and $O(2^n)$-space algorithm for weighted
cograph editing, completion, and deletion problems}
We describe here an algorithm for the weighted version of the \textsc{Cograph Editing} problem.
The deletion and completion problem variants are dealt with using simple
modifications to the base algorithm, described later.
Unweighted variants can of course be obtained by setting all edge weights to 1.
Given an undirected graph $G = (V, E)$ with $V = \{ v_1, \dots, v_n \}$ and vertex-pair weights given by $w : V \times V \to \mathbb R^{\ge 0}$, with the interpretation that $w(u, v)$ is the cost of deleting the edge $(u, v)$ when $(u, v) \in E$ and the cost of inserting it otherwise, we seek a minimum-weight edge modification set $F \subseteq V \times V$ such that $(V, E \bigtriangleup F)$ is a cograph, where $\bigtriangleup$ is the symmetric difference operator.
We compute the minimum cost of transforming every subset of vertices into a cograph using dynamic programming.
The algorithm hinges on the following property of cotrees \cite{Lerchs1972}:
\begin{property}\label{pro:lca}
In a cograph $G$, two vertices $u$ and $v$ are linked by an edge if and only if their lowest common ancestor in the cotree $T_G$ of $G$ is a series node.
\end{property}
For any subset $X$ of vertices in $V$, let $v_X$ denote the vertex with maximum index in $X$. We can compute the minimum cost $\f(X)$ of editing the induced subgraph $G[X]$ to a cograph using:
\begin{equation}
\label{eqn:dp}
\f(X) =
\begin{cases}
0, & \text{if}\ |X| < 4\\
\min_{Y \subsetneq X, v_X \in Y} (\f(Y) + \f(X \setminus Y) + \cost(Y, X \setminus Y)), & \text{otherwise}
\end{cases}
\end{equation}
\begin{equation}
\cost(A, B) = \min \{ \parCost(A, B), \serCost(A, B) \}
\end{equation}
\begin{equation}
\parCost(A, B) = \sum_{\{(u, v) \in E : u \in A, v \in B\}} w(u, v) \quad \text{(these edges need to be deleted)}
\end{equation}
\begin{equation}
\serCost(A, B) = \sum_{\{(u, v) \notin E : u \in A, v \in B\}} w(u, v) \quad \text{(these edges need to be inserted)}
\end{equation}
Because each invocation of $\f$ has a strictly smaller set of vertices as input, it suffices to compute solutions to subproblems in increasing order of subset size.
To instead solve the edge deletion (respectively, insertion) problem, replace $\serCost$ (respectively, $\parCost$) with a function that is zero when the original function is zero, and infinity otherwise.
The $3^n$ factor in the time complexity arises from there being at most one argument to the outer $\min$ for every way of partitioning $V$ into 3 parts $(V \setminus X, X \setminus Y, Y)$.
If we enumerate bipartitions $(Y \mid X \setminus Y)$ in Gray code order, then straightforward algorithms for computing $\parCost$ and $\serCost$ incrementally may be used, resulting in the additional factor of $n$.
An optimal solution can be found by back-tracing the dynamic programming matrix as usual.
It is possible to extract every optimal solution this way, but producing each of them exactly once requires a slight reformulation whereby we include the root node type (series or parallel) in the dynamic programming state, which doubles the memory requirement.
Although the above is a ``subset convolution''-style dynamic program, the possibility of achieving $O^*(2^n)$ time by applying the Möbius transform approach of Björklund et al. \cite{bjorklund2007fourier} appears to be complicated by the third term in the summation.
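The following Python sketch (an addition, not the authors' implementation) spells out the dynamic program of Equation~\ref{eqn:dp} over bitmask-encoded vertex subsets; it omits the Gray-code incremental cost computation, so it runs in $O(3^n n^2)$ time rather than $O(3^n n)$.
\begin{verbatim}
from math import inf

def cograph_editing_cost(n, edges, w=None):
    """f[X] = minimum cost of editing G[X] into a cograph, for every subset X
    encoded as a bitmask; weights default to 1 (unweighted editing)."""
    E = {frozenset(e) for e in edges}
    if w is None:
        w = lambda u, v: 1.0

    def members(X):
        return [i for i in range(n) if X >> i & 1]

    def cost(A, B):
        # min of parallel cost (delete all cross edges) and series cost (insert them)
        par = sum(w(u, v) for u in A for v in B if frozenset((u, v)) in E)
        ser = sum(w(u, v) for u in A for v in B if frozenset((u, v)) not in E)
        return min(par, ser)

    f = [0.0] * (1 << n)                 # any graph on < 4 vertices is a cograph
    for X in range(1 << n):
        verts = members(X)
        if len(verts) < 4:
            continue
        vX = verts[-1]                   # vertex of maximum index in X
        rest = X & ~(1 << vX)
        best = inf
        Z = rest                         # Z ranges over nonempty subsets of X minus vX
        while Z:
            Y = X ^ Z                    # Y contains vX and is a proper subset of X
            best = min(best, f[Y] + f[Z] + cost(members(Y), members(Z)))
            Z = (Z - 1) & rest
        f[X] = best
    return f[(1 << n) - 1]

# Example: a path on four vertices (an induced P4) needs exactly one edit.
print(cograph_editing_cost(4, [(0, 1), (1, 2), (2, 3)]))   # 1.0
\end{verbatim}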
\section{Reducing the number of partitions considered}
Given any subset $X$ of vertices, the dynamic programming algorithm above enumerates every possible bipartition to find a best one, and thereby compute $\f(X)$.
For many graphs encountered in practice, this will be overkill: The vast majority of bipartitions tried will be very bad, suggesting that there could be a way to avoid trying many of them without sacrificing optimality.
Instead of enumerating all bipartitions of $X$, we propose to use a search tree to gradually refine a \emph{set} of bipartitions defined by a series of weaker constraints, avoiding entire sets of bipartitions that can be proven to lead to suboptimal solutions.
Here we describe a branch and bound algorithm, running ``inside'' the dynamic program, that uses this strategy to find an optimal bipartition of a given vertex subset $X$.
We however note that, since $O(2^n)$ space is already needed by the ``core'' dynamic programming algorithm, and since a full enumeration of all bipartitions of $X$ would require only asymptotically the same amount of space, an $A^*$ algorithm is likely feasible.
The overall strategy is somewhat inspired by the Karmarkar-Karp heuristic \cite{KarmarkarKarp1982} for number partitioning, which achieves good empirical performance on this related problem by deferring as far as possible the question of exactly which part in the partition to assign an element to.
The basic idea is to maintain, in every subproblem $P$, a set of constraints $S_P$ of the form ``$A \subseteq X$ are all in the same part'', and another set of constraints $O_P$ of the form ``$A \subseteq X$ and $B \subseteq X$ are in different parts'' (clearly $A \cap B = \emptyset$).
We call a constraint of the former kind a \emph{$\same$-constraint} and denote it $\same(A)$; a constraint of the latter kind we call an \emph{$\opp$-constraint} and denote it $\opp(A | B)$.
Each subproblem (except the root; see below) has one extra bit of information $\lambda_P \in \{+,-\}$ that records whether it represents a series ($+$) or parallel ($-$) node in the cotree: we will see later that separating these two cases enables stronger lower bounds to be used.
A subproblem $P = (\lambda_P, C_P)$ thus represents the set of all bipartitions consistent with its \emph{constraint set} $C_P = S_P \cup O_P$ that introduce a series ($\lambda_P = +$) or parallel ($\lambda_P = -$) node in the cotree.
Note that any cotree may be represented as a binary tree with internal nodes labelled either $+$ or $-$ (though this representation is not unique).
The root subproblem $P_X$ for a vertex subset $X$ is special: it has no associated $\lambda$ value; rather it has exactly two children $P_{X^+}$ and $P_{X^-}$ having opposite values of $\lambda$, with each containing only the trivial constraints $C_{P_{X^+}} = C_{P_{X^-}} = \{ \same(\{v\}) : v \in X \}$.
These two children (which may be thought of as the roots of entirely separate search trees) thus together represent the set of all possible configurations of $X$, where a configuration is a bipartition together with a choice of cotree node type (series or parallel).
\subsection{Generating subproblems}
Before discussing the general rule we use for generating subproblems, we first give a simplified example.
If there are two vertices $u$, $v$ in $X$ that have not yet been used in any $\opp$-constraint, we may create a new subproblem in which $u$ and $v$ are forced to be in the same part of the bipartition, as well as another new subproblem in which they are forced to be in opposite parts.
(Clearly every bipartition in the original subproblem belongs to exactly one of these two subproblems.)
\subsubsection{Structure of a general subproblem}
More generally, let $S^*_P$ be the set of all inclusion-maximal subsets of $X$ appearing in a $\same$-constraint in subproblem $P$ (i.e., $S^*_P = \{ Z \subseteq X : \same(Z) \in S_P \text{ and there is no } Z' \supsetneq Z \text{ with } \same(Z') \in S_P \}$).
Then we may choose two distinct (necessarily disjoint and nonempty) subsets $A$ and $B$ from $S^*_P$ and form two new subproblems: one in which $\same(A \cup B)$ is added to the constraint set, and one in which $\opp(A | B)$ is added.
Each new subproblem inherits the $\lambda$ of the original.
As before, every bipartition in the original subproblem belongs to exactly one of these two subproblems.
When no such pair $(A, B)$ can be found, we halt this refinement process and enumerate all bipartitions consistent with the constraints, evaluating each as per Equation~\ref{eqn:dp}.
In this way, the constraint set $C_P$ of any subproblem $P$ can be represented as a directed forest, each component of which is a binary tree that may have either a $\same(\cdot)$ node or an $\opp(\cdot)$ node at the root and $\same(\cdot)$ nodes everywhere else.
The components containing only $\same(\cdot)$ nodes are exactly the members of $S^*_P$, so a standard union-find data structure \cite{tarjan1975efficiency} can be used to efficiently find a pair of vertex sets $A, B \in S^*_P$ eligible for generating a new pair of child subproblems.
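A minimal sketch of this bookkeeping (illustrative Python; the class and method names are not taken from an existing implementation): union-find roots represent the members of $S^*_P$, $\opp$-constraints are stored as pairs of roots, and an eligible pair is any two roots not yet used in an $\opp$-constraint.
\begin{verbatim}
class ConstraintForest:
    """Union-find over the vertices of X.  Each root represents a maximal
    same-constrained set (a member of S*_P); opp-constraints are recorded
    as unordered pairs of roots."""

    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.members = {v: {v} for v in vertices}   # root -> its vertex set
        self.opp = set()                            # frozensets of two roots

    def find(self, v):
        while self.parent[v] != v:                  # path halving
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def add_same(self, a, b):
        """Merge the maximal sets containing a and b (same(A | B))."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        self.parent[rb] = ra
        self.members[ra] |= self.members.pop(rb)
        # defensive: in the search described above, sets already used in an
        # opp-constraint are never merged again
        self.opp = {frozenset(ra if r == rb else r for r in pair)
                    for pair in self.opp}
        return ra

    def add_opp(self, a, b):
        """Record that the sets containing a and b lie in different parts."""
        self.opp.add(frozenset((self.find(a), self.find(b))))

    def eligible_pair(self):
        """Two members of S*_P not yet used in any opp-constraint, if any."""
        used = {r for pair in self.opp for r in pair}
        free = [r for r in self.members if r not in used]
        return (free[0], free[1]) if len(free) >= 2 else None
\end{verbatim}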
Although it would be possible and perhaps fruitful to continue adding constraints of a more complicated form, for example a constraint $\opp(A|C)$ when the constraints $\opp(A|B)$ and $\opp(C|D)$ already exist in $C_P$, there are several reasons to avoid doing so.
First, allowing such constraints destroys the simple forest structure of constraints in $C_P$.
In the presence of such constraints, a new candidate constraint may be tautological or inconsistent; these cases can be detected (for example using a 2SAT algorithm), but doing so slows down the process of finding a new candidate constraint to add.
Second, any set of constraints containing one or more such complicated constraints is ``dominated'' in the sense that some set of simple constraints exists that implies the same set of bipartitions, meaning that no additional ``power'' is afforded by these constraints.
\subsection{Strengthening lower bounds}
The procedure described above is only useful in reducing the total number of bipartitions considered if the constraints added in subproblems are able to improve a lower bound on the cost of a solution.
The lower bound $L_P$ associated with any subproblem $P$ will have the form $L_P = \sum_{x \in S^*_P}{L_S(x)} + \sum_{\opp(A|B) \in O_P}{L_O(A, B)}$.
We now examine the two kinds of terms in this lower bound, and how they may be efficiently computed for a subproblem from its parent subproblem.
\subsubsection{Lower bounds $L_S(\cdot)$ from $\same$-constraints}
Whenever two $\same$-constraints $\same(A)$ and $\same(B)$ are combined into a single $\same$-constraint $\same(A \cup B)$, we may add $\f(A \cup B) - \f(A) - \f(B)$ to the lower bound.
This represents the cost of editing the entire vertex set $A \cup B$ into a cograph, offset by subtracting the costs already paid for editing each vertex subset $A$ and $B$ into cographs.
Note that any function computing a lower bound on these costs can be used in place of $\f$, provided that its value does not change between the time at which it is first added to the lower bound (at some subproblem), and later subtracted (at some deeper subproblem).
If the bottom-up strategy is followed for computing $\f$, then we always have these function values available exactly.
\subsubsection{Lower bounds $L_O(\cdot, \cdot)$ from $\opp$-constraints}
Whenever an $\opp$-constraint $\opp(A|B)$ is added to a parallel subproblem, then for each edge $(a, b) \in E$ with $a \in A$ and $b \in B$ we may add $w(a, b)$ to the lower bound.
This represents the cost of deleting these edges, which cannot exist if they are in different subtrees of a parallel cotree node by Property~\ref{pro:lca}.
Because the sets of vertices involved in the $\opp$-constraints of a given subproblem are all disjoint, no edge is ever counted twice.
The reasoning is identical for series subproblems, except that we consider all vertex pairs $(a, b) \notin E$: these edges must be added.
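Both kinds of bound increments are straightforward to compute (illustrative sketch; \texttt{w} and \texttt{edges} as before, and \texttt{f} may be any lower bound on the editing cost of a vertex set, for example a lookup into the dynamic programming table):
\begin{verbatim}
def same_bound_increment(f, A, B):
    """Increment added to L_P when same(A) and same(B) are merged: the
    editing cost of G[A | B] minus the costs already counted for A and B."""
    return f(A | B) - f(A) - f(B)

def opp_bound(A, B, w, edges, node_type):
    """L_O(A, B): cost forced by the constraint opp(A | B).  node_type is
    '-' for a parallel subproblem (crossing edges must be deleted) and '+'
    for a series subproblem (crossing non-edges must be inserted)."""
    total = 0.0
    for a in A:
        for b in B:
            is_edge = frozenset((a, b)) in edges
            if (node_type == '-') == is_edge:   # forced edit across the cut
                total += w[a][b]
    return total
\end{verbatim}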
In fact it may be possible to strengthen this bound by considering vertices that belong neither to $A$ nor to $B$: For any triple of vertices $a \in A$, $b \in B$, $v \in X \setminus (A \cup B)$ such that neither $(v, a)$ nor $(v, b)$ is in $E$, we may in principle add $\min \{ w(v, a), w(v, b) \}$ to the lower bound, since $v$ cannot be in the same part of the bipartition as both $a$ and $b$ and so must, by Property~\ref{pro:lca}, have an edge to at least one of these vertices added.
However, doing so introduces the possibility of counting an edge multiple times.
Although this can be addressed, doing so appears to come at its own cost: For example, dividing by the maximum number of times that any edge is considered produces lower bound increases that are valid but likely weak; while partitioning vertices \emph{a priori} and then counting only bound increases from vertex triples in the same ``pristine'' part of the partition has the potential to produce stronger bounds, but entails significant extra complexity.
\subsection{Choosing a subproblem pair}
It remains to describe a way to choose disjoint vertex sets $A, B \in S^*_P$ to use for generating child subproblem pairs.
The strategy chosen is not important for correctness, but can have a dramatic effect on the practical performance of the algorithm.
Since both types of child subproblem are able to improve lower bounds, a sensible choice is to consider all $A, B \in S^*_P$ and choose the pair that maximises $\min \{ LB(S_P \cup O_P \cup \{ \same(A \cup B) \}), LB(S_P \cup O_P \cup \{ \opp(A | B) \}) \}$, where $LB(\cdot)$ denotes the lower bound computed from the given constraint set---that is, the pair $A, B$ that offers the best worst-case bound improvement.
Ties could be broken by $\max \{ LB(S_P \cup O_P \cup \{ \same(A \cup B) \}), LB(S_P \cup O_P \cup \{ \opp(A | B) \}) \}$.
Any remaining ties could be broken arbitrarily, or perhaps using more
expensive approaches such as fixed-length lookahead.
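In code, this selection rule might look as follows (a sketch: \texttt{lower\_bound} and the \texttt{constraints.with\_same}/\texttt{with\_opp} helpers are placeholders for the machinery described above):
\begin{verbatim}
from itertools import combinations

def choose_split_pair(roots, lower_bound, constraints):
    """Pick the pair (A, B) of maximal same-sets whose two child
    subproblems give the best worst-case (then best-case) improvement
    of the lower bound."""
    best, best_key = None, None
    for A, B in combinations(roots, 2):
        lb_same = lower_bound(constraints.with_same(A, B))
        lb_opp = lower_bound(constraints.with_opp(A, B))
        key = (min(lb_same, lb_opp), max(lb_same, lb_opp))
        if best_key is None or key > best_key:
            best, best_key = (A, B), key
    return best
\end{verbatim}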
\section{Heuristics for the \textsc{Cograph Editing} problem}
In the following we will assume unweighted graphs.
For graph $G = (V, E)$ we denote $V(G) = V$ and $E(G) = E$. Given vertices
$V' \subseteq
V$, $G[V']$ is the subgraph of $G$ induced by $V'$. The set of all neighbors
of $v$ in $G$ is denoted by $N_v(G)$.
Lokshtanov \etal~\cite{lokshtanov10characterizing} developed an algorithm to
find a minimal cograph completion $H = (V, E \cup F)$ of $G$ in $O
(|V|+|E|+|F|)$ time.
A cograph completion is called minimal if the set of added edges $F$ is
inclusion minimal, i.e., if there is no $F' \subsetneq F$ such that
$(V, E \cup F')$ is a cograph.
Let $G_x$ be a graph obtained from $G$ by adding a vertex $x$ and connecting
it to some vertices already contained in $G$, and let $H$ be any minimal
cograph completion of $G$.
Lokshtanov \etal showed that there exists a minimal cograph
completion $H_x$ of $G_x$ such that $H_x[V(H)] = H$.
Using this observation they describe a way to compute a minimal cograph
completion of $G$ in an iterative manner starting from an empty graph
$H_0$.
In each iteration a graph $G_{i}$ is derived from $H_{i-1}$ by adding a new
vertex $v_{i}$ from $G$. Vertex $v_{i}$ is connected to all its neighbors
$N_{v_i}(G[V(G_i)])$ in $G_i$. Now a set of additional edges $F_{v_i}$ is
computed such that $H_i = (V(G_i), E(G_i) \cup F_{v_i})$ is a minimal
cograph completion of $G_i$. Finally, $H_n$ is a minimal cograph completion
of $G$.
It is obvious that finding a minimal cograph completion gives an
upper bound on the \textsc{Cograph Editing} problem.
To find a minimal cograph deletion of $G$ we can simply find a minimal
cograph completion of its complement $\bar{G}$.
This algorithm allows us to efficiently find minimal cograph deletions and
completions. However, only adding or only deleting edges from $G$ is rather
restrictive for finding a good heuristic solution for the \textsc{Cograph
Editing} problem.
(Indeed, it is straightforward to construct instances for which
insertion-only or deletion-only strategies yield solutions that are
arbitrarily far from optimal.)
To allow a combination of edge insertions and deletions, in each iteration
step we choose a vertex $v_i$ and compute the set of edges $F_{v_i}(G_i)$ that
makes $H_i$ a minimal cograph completion of $G_i$. Furthermore, we compute the
corresponding set $F_{v_i}(\bar{G_i})$ for the complement graph.
If $|F_{v_i}(G_i)| \leq |F_{v_i}(\bar{G_i})|$ we add edges to $H_i$; otherwise
we remove edges from $H_i$ to preserve the cograph property.
In this way Lokshtanov's algorithm serves as a heuristic for the
\textsc{Cograph Editing} problem, allowing us to add and remove edges from
$G$.
Finding a set of edges $F_{v_i}$ such that $H_i$ is a
minimal cograph completion of $G_i$ takes $O(|N_{v_i}(H_i)|+1)$ time.
Computing both $F_{v_i}(G_i)$ and $F_{v_i}(\bar{G_i})$ takes $O(|V|)$ time.
Hence, allowing edge insertions and deletions in every step increases
overall running time to $O(|V|^2)$.
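The following Python sketch illustrates a single insertion step of this combined strategy. The helper \texttt{completion\_step} stands in for the incremental step of Lokshtanov \etal~\cite{lokshtanov10characterizing} and is not reproduced here; by the result cited above, the fill pairs it returns are all incident to the new vertex.
\begin{verbatim}
def insert_vertex(H_edges, placed, v, G_edges, completion_step):
    """One step of the heuristic: insert v into the current cograph H,
    choosing between the insertion-based completion of H and the
    completion of its complement (i.e. edge deletions).

    completion_step(cograph_edges, placed, v, nbrs) is assumed to return
    the fill pairs (all incident to v) that make the cograph plus v,
    joined to nbrs, a minimal cograph completion; it stands in for the
    algorithm of Lokshtanov et al.  Returns the new edge set of H and
    the number of modifications charged to this step."""
    nbrs = {u for u in placed if frozenset((u, v)) in G_edges}
    # Branch 1: keep v's edges as in G and insert the fill pairs.
    F_ins = completion_step(H_edges, placed, v, nbrs)
    # Branch 2: do the same in the complement of H, which deletes edges.
    all_pairs = {frozenset((a, b)) for a in placed for b in placed if a != b}
    H_comp = all_pairs - H_edges
    F_del = completion_step(H_comp, placed, v, set(placed) - nbrs)
    if len(F_ins) <= len(F_del):
        new_H = H_edges | {frozenset((v, u)) for u in nbrs} | F_ins
        return new_H, len(F_ins)
    new_comp = H_comp | {frozenset((v, u)) for u in set(placed) - nbrs} | F_del
    all_pairs_v = all_pairs | {frozenset((v, u)) for u in placed}
    return all_pairs_v - new_comp, len(F_del)
\end{verbatim}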
To improve the heuristic there are multiple natural modifications which can
be easily integrated into the Lokshtanov algorithm.
The resulting cograph clearly depends on the order in which vertices
from $G$ are drawn. Although it is infeasible to test all possible
orderings of vertices in $V$, it is nevertheless worthwhile to try more than
one.
In our simplest version of the heuristic, we draw random orderings of $V$ and compute a
solution for each ordering. Going further, we can test multiple vertices in each
iteration step and add the vertex $v_i$ to $H_i$ which needs the smallest
number of
edge modifications. In its most exhaustive version this leads to an
algorithm which greedily takes in each step the best of all remaining
vertices and adds it to $H_i$. This algorithm's running time increases by
a factor of $O(|V|)$.
Another version of the heuristic may apply beam search, storing the best
$k$ intermediate results in each step.
All modifications described so far restrict each iteration to performing
only insertions or only deletions. In order to search more broadly, when considering
how to compute $H_i$ from $G_i$, we may
test whether removing a single edge incident on $v_i$ in $G_i$ before
inserting edges as usual results in fewer necessary edge edits overall.
A similar strategy can be applied to the complement graph.
In this way we can insert and delete edges in a single step -- for the
price
of having to iterate over all of $v_i$'s neighbors. If we apply this
strategy in each step to $G_i$ and its complement $\bar{G_i}$, the running
time increases by a factor of $O(|V|)$.
It must be noted that by applying Lokshtanov's algorithm in the above
manner, we lose any proven guarantees such as minimality.
\subsection{Results}
We evaluate five heuristic versions. All of them consider adding edges or
deleting edges in each iteration. Unless stated
otherwise we run each heuristic 100 times and take the cograph with lowest
cost. The five versions are:
\begin{enumerate}
\item \textit{standard}: Compute a cograph using random vertex insertion
order.
\item \textit{modify}: When adding $v_i$, allow removing one vertex from
$v_i$'s neighborhood in $G_i$. Hence, multiple edges may be inserted and
a single
edge may be deleted in the same step (or vice versa when applied to
$\bar{G_i}$).
\item \textit{choose-multiple}: In each step, choose 10 random vertices,
and add the one with the lowest modification costs.
\item \textit{beam-search}: Maintain 10 candidate solutions. In each
step, for each candidate solution choose 10 random vertices and try
adding each of them;
keep the best 10 of the 100 resulting solutions. Run the heuristic 10
times instead of 100.
\item \textit{choose-all}: In each step, consider all remaining vertices from $G$,
and add the one with the lowest modification costs.
Run the heuristic just once.
\end{enumerate}
We evaluate the heuristics on simulated data. As \textsc{Cograph Editing} is
NP-complete, it is computationally too expensive to identify optimal solutions
for input graphs of reasonable size. We
simulate cographs and afterwards randomly perturb edges. The true
cograph serves as a proxy for the optimal cograph: It is a good bound, but
there is no guarantee that no other cograph is closer to the perturbed
graph.
To simulate cographs based on their recursive construction definition we
start with a graph on all vertices without any edges.
We put each vertex in its own bin and randomly merge
bins. When two bins are merged, with probability $d$ we connect every
vertex of one bin to every vertex of the other.
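This construction can be sketched as follows (illustrative Python; the density and size checks described below are omitted):
\begin{verbatim}
import random

def simulate_cograph(n, d, rng=None):
    """Simulate a cograph on vertices 0..n-1: start with singleton bins and
    no edges, repeatedly merge two random bins, and with probability d make
    the merge a series composition (connect every vertex of one bin with
    every vertex of the other); otherwise the merge is parallel."""
    rng = rng or random.Random()
    bins = [{v} for v in range(n)]
    edges = set()
    while len(bins) > 1:
        i, j = rng.sample(range(len(bins)), 2)
        a, b = bins[i], bins[j]
        if rng.random() < d:
            edges |= {frozenset((u, v)) for u in a for v in b}
        bins = [bins[k] for k in range(len(bins)) if k not in (i, j)]
        bins.append(a | b)
    return edges
\end{verbatim}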
\sloppy
We simulate cographs with different numbers of vertices
$n \in \{10,20,50,100\}$ and edge densities
$d \in \{10\%,20\%,50\%\}$ where edge density is defined as the
number of edges in a graph divided by the number of edges in a fully
connected graph.
We limit our evaluations to $d \leq 50\%$; edge densities of $x\%$ and
$(100-x)\%$ will
produce the same results as the complement of a cograph is again a cograph.
As our simulation does not force the exact edge density, we
exclude instances where the simulated cograph's edge density deviates by more
than 10\% of the intended edge density.
For each parameter setting 100 cographs are computed: these are the \textit{true cographs}.
Each true cograph is then perturbed by randomly flipping vertex pairs---making
edges non-edges and vice versa---to produce a \textit{noisy graph}, which will be given
as input to the heuristics. An edge change is only valid if it
introduces at least one new $P_4$. Each edge can only be flipped once. We
use noise rates $r \in \{1\%,5\%, 10\%, 20\%\}$. If it is not possible to
introduce a new $P_4$ in each iteration step, we simulate and perturb a new
cograph instead. It must be noted that a flipped edge that creates a new
$P_4$ will be retained even if it also removes one or more existing $P_4$s
from the graph.
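The $P_4$ condition on a flipped pair can be tested on the four-vertex subsets containing both endpoints (illustrative Python; a quadratic-time check per flip):
\begin{verbatim}
from itertools import combinations

def creates_new_p4(edges, u, v, vertices):
    """Return True if, after flipping the pair {u, v} (edges is the edge
    set after the flip), some 4-set containing u and v induces a P4."""
    others = [t for t in vertices if t not in (u, v)]
    for x, y in combinations(others, 2):
        quad = (u, v, x, y)
        sub = [p for p in combinations(quad, 2) if frozenset(p) in edges]
        if len(sub) != 3:
            continue
        degrees = sorted(sum(1 for e in sub if z in e) for z in quad)
        if degrees == [1, 1, 2, 2]:   # 3 edges + this degree sequence: a P4
            return True
    return False
\end{verbatim}
Since only the status of the flipped pair changes, four-vertex subsets missing one of the endpoints are unaffected, and any $P_4$ found by this check is necessarily new.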
The heuristic solution to each noisy graph will be denoted the \textit{heuristic cograph}.
A noise rate of 1\% on graphs with 10 vertices is not interesting as
these graphs only contain 45 vertex pairs, so this parameter combination is
excluded from evaluation.
The \emph{distance} between graphs
is the number of edge deletions and insertions needed to transform one graph
into another.
Dividing this distance by the number of edges in a complete graph with the
same number of vertices gives the \emph{normalized distance}, a value between 0 and 1,
inclusive.
In the context of an instance of the \textsc{Cograph Editing} problem, the \emph{cost}
of a graph is simply the distance between it and the
input graph; we use solution cost as a measure of solution quality.
Given two pairs of graphs, the first pair is \textit{closer} than the
second pair if the distance between the first pair is lower than
that between the second pair.
We do two kinds of evaluation: the first measures the quality of our
heuristics in solving the \textsc{Cograph Editing} problem, while the second
gauges the strength of the cograph property and this problem formulation to
recover information from noisy data.
First, to test whether the heuristics produce
good results we count how often a heuristic can find a cograph of cost
less than or equal to the cost of the true cograph. Recall that the true cograph gives the best upper bound for
the \textsc{Cograph Editing} problem we can get in practice. Such a solution
will be denoted ``fit''. It is clear that this is
not necessarily the optimal solution.
Second, we evaluate whether the heuristic solution ``improves on'' the noisy
input graph: that is, whether it produces a cograph which
is closer to the true cograph than the noisy graph is to the true cograph.
In applications like phylogenetic tree estimation, recovering the
structure of the underlying true cograph is of much more interest
than a minimum number of modifications.
The use of this optimization problem (and our heuristic as approximation) is
only justified to the
extent that a cograph that requires few edits usually corresponds closely
to this ``ground truth'' cograph.
Let $d$ be the distance between the true cograph and the noisy graph obtained through experimental measurements.
If it is frequently the case that there exist multiple different cographs at the same distance $d$ from the noisy graph, some of which are at large distances from the true cograph, then even an exact solution to the optimization problem is of limited use in such applications.
Worse yet, if it is common to find such cographs at distances strictly below
$d$, then such an optimization problem is positively misleading.
On graphs of 20 vertices or fewer, all modifications perform quite well.
To determine the best heuristic method on larger graphs, we compare results on graphs
with 50 vertices (see Fig.~\ref{fig:method_comp}).
Here, the \textit{modify} heuristic clearly outperforms the other versions. Hence,
it is interesting to see how this method performs on graphs with different numbers
of vertices.
\begin{figure}
\caption{Comparison of five heuristic versions on graphs with
50 vertices. The figures show in how many cases the heuristics can
find a fit cograph---that is, a cograph that is at least as close to
the noisy graph as the true cograph is to the noisy graph.}
\label{fig:method_comp}
\end{figure}
For small graphs with 10 or 20 vertices the \textit{modify} heuristic finds
a fit solution in almost all cases (see Fig.~\ref{fig:method_modify}),
as do the other heuristics. If
input graphs have as little as 1\% noise, even on graphs with 50 vertices
a fit cograph is found in over 98\% of the cases. For 100 vertices this rate is
still over 65\%.
For more complex graphs, having a more balanced ratio of edges and
non-edges, the number of fit solutions decreases.
Interestingly, looking only at graphs with 100 vertices and over 1\%
noise, high noise rates seem to favor a good heuristic solution.
This is likely due to the fact that for high noise the true cograph
is no longer a good bound on the optimal solution.
\begin{figure}
\caption{Performance of the \textit{modify} heuristic: the fraction of
instances for which a fit cograph is found.}
\label{fig:method_modify}
\end{figure}
In application the relevant question is whether or how well the true cograph
can be recovered from noisy data. To make different parameter combinations
comparable we evaluate relative distances. Given distances
$dist_{n}$ between the true cograph and the noisy graph and $dist_{h}$ between the true
and the heuristic cographs, the \emph{relative distance} is $dist_{rel} =
\frac{dist_{h}}{dist_{n}}$ (see Fig.~\ref{fig:similarity_modify}). A value
smaller than one implies an improvement: the true cograph is closer to
the heuristic cograph than to the noisy graph. A $dist_{rel}$ larger than
one implies a loss of
similarity, while a value of zero corresponds to a perfect match
between heuristic and true cograph.
The median $dist_{rel}$ for graphs with 20 vertices and 1\% noise is 0.0.
The
mean is 0.54, 0.34 and 0.27 for 10\%, 20\% and 50\% edge density,
respectively.
The distances relative to the maximum number of possible edges can
be seen in supplementary Fig.~\ref{suppl:fig:similarity_modify_rest} and
\ref{suppl:fig:similarity_modify_1noise}.
\begin{figure}
\caption{Relative distances $dist_{rel}$ between the true cograph and the
cograph found by the \textit{modify} heuristic, measuring how well the true
cograph is recovered from the noisy graph.}
\label{fig:similarity_modify}
\end{figure}
Interestingly, for graphs of size 10 to 50, a certain amount of complexity,
meaning greater edge density, seems to encourage a better recovery of the
true cograph. This might be due to the fact that on sparse graphs there are
often multiple options to resolve a $P_4$ which all lead to good results.
Hence, there is no unambiguous way to denoise the graph.
Particularly on graphs with 50 vertices we see that increasing edge density
leads to fewer fit cograph solutions (see Fig.~\ref{fig:method_modify}),
but on average the resulting graph is closer to the true cograph (see Fig.~\ref{fig:similarity_modify}).
This observation does not hold for graphs with 100 vertices;
but, as already explained, the true cograph no longer gives a good cost bound for large noisy
graphs and so we also cannot expect to recover it.
If we limit our evaluation to graphs with 10 and 20 vertices, we are able to
find a fit cograph in almost all cases.
The complexity of graphs with 10 vertices seems not to be sufficiently high
to reliably produce a cograph closer to the true cograph.
For graphs of size 20, heuristic and true cograph are mostly closer to each
other than the noisy graph is to the true graph. This means we are able to
partially recover the ground truth.
Nevertheless, only for 1\% noise and at least 20 vertices can we either
recover the true cograph or at least get very close to the correct solution.
Lokshtanov's algorithm has a running time linear in the number of vertices
plus
edges. The \textit{standard} heuristic is only the second-fastest method in
our evaluation because we run it 100 times and \textit{choose-all} only once
(see Fig.~\ref{fig:runningtimes}). As expected, \textit{beam-search} and
\textit{choose-multiple} are both slower than \textit{standard}.
An iteration step in Lokshtanov's algorithm for adding $v_i$ to $H_i$ is
composed of two actions. Step A consists of examining which edges need to be
added so that $H_i$ is a minimal cograph completion of $G_i$. In step B,
$v_i$ and all necessary edges are added to $H_i$
(more precisely, to its cotree).
Both \textit{choose-multiple} and \textit{beam-search} perform step A
ten times more often than \textit{standard}, but all three methods perform
the same number of B-steps.
Hence, running times do not increase by a factor of ten but rather by two to
four.
On graphs with 50 vertices no method takes more than 1.69 seconds on
average;
for 100 vertices the slowest method is \textit{modify} with 12.03 seconds on
average. Running times of \textit{modify} grow fastest. Still, it is easily
applicable to graphs with several hundreds of vertices.
The \textit{choose-all} modification is fastest because only a single
cograph is computed; but running times grow faster than for
\textit{standard},
\textit{choose-multiple} and \textit{beam-search}.
All computations were executed single-threaded on an Intel E5-2630 @ 2.3GHz.
\begin{figure}
\caption{Running times for the five heuristic versions. Noise rate is 5\%,
edge density is 20\%; the remaining parameter combinations show similar
results.}
\label{fig:runningtimes}
\end{figure}
\section{Conclusion}
We presented an exact algorithm solving the weighted \textsc{Cograph
Editing} problem in $O(3^n n)$ time and $O(2^n)$ space.
We evaluated five heuristics based on an algorithm for minimal cograph
completions.
For small and medium graphs of 10 and 20 vertices we are able to find
cographs with equal or lower cost than the ground truth, indicating
that we find (nearly) optimal solutions.
In application, the focus lies on recovering the true cograph, not the
optimal one. We showed that for small noise of 1\% we get results very
similar to this true cograph, even for large graphs with 100 vertices.
Interestingly, it is easier to recover the true edges when graphs contain
about 50\% edges.
For higher noise rates it is not possible to recover the true cograph. This
may be partly explained by the fact that we apply a heuristic and do not solve the
\textsc{Cograph Editing} problem optimally. But this observation already
holds for medium graphs with 20 vertices on which we produce good
results. We therefore argue that the cograph
constraint is not strict enough to always correctly resolve graphs with 5\%
noise and more. Therefore, if true graph structure recovery is important,
low noise rates are crucial.
The presented heuristics are fast enough to be applied to graphs with
several hundreds of vertices.
Accuracy clearly improves when we remove the restriction that each iteration
step may only add or only delete edges.
Different heuristic modifications can be easily combined. This will likely
improve results further.
\section*{Acknowledgment}
Funding was provided to W. Timothy J. White and Marcus Ludwig by Deutsche
Forschungsgemeinschaft (grant BO~1910/9).
\beginsupplement
\FloatBarrier
\section{Supplementary}
\begin{figure}
\caption{Distances between the true cograph and the \textit{modify} heuristic
cograph, normalized by the maximum number of possible edges.}
\label{suppl:fig:similarity_modify_rest}
\end{figure}
\begin{figure}
\caption{Distances between the true cograph and the \textit{modify} heuristic
cograph, normalized by the maximum number of possible edges, for graphs with
1\% noise.}
\label{suppl:fig:similarity_modify_1noise}
\end{figure}
\end{document}
\begin{document}
\begin{abstract}
Let $A$ be an abelian variety over an algebraically closed field. We show that
$A$ is the automorphism group scheme of some smooth projective variety if and
only if $A$ has only finitely many automorphisms as an algebraic group.
This generalizes a result of Lombardo and Maffei for complex abelian varieties.
\end{abstract}
\subjclass[2020]{14K05, 14J50, 14L30, 14M20}
\keywords{Abelian varieties, automorphism group schemes, Albanese morphism}
\maketitle
\tableofcontents
\section{Introduction}
\label{sec:int}
Let $X$ be a projective algebraic variety over an algebraically closed field.
The automorphism group functor of $X$ is represented by a group scheme
$\Aut_X$, locally of finite type (see \cite[p.~268]{Grothendieck}
or \cite[Thm.~3.7]{MO}).
Thus, the automorphism group $\Aut(X)$ is the group of $k$-rational
points of a smooth group scheme that we will still denote by $\Aut(X)$
for simplicity. One may ask which smooth group schemes are obtained
in this way, possibly imposing some additional conditions on $X$ such
as smoothness or normality. It is known that every finite group $G$
is the automorphism group scheme of some smooth projective curve $X$
(see e.g.~the main result of \cite{MR}).
The case of a complex abelian variety $A$ was treated recently by
Lombardo and Maffei in \cite{LM}; they showed that
$A = \Aut(X)$ for some complex projective manifold $X$ if and only if
$A$ has only finitely many automorphisms as an algebraic group.
In this note, we generalize their result as follows:
\begin{maintheorem}\label{thm:main}
Let $A$ be an abelian variety over an algebraically closed field.
Denote by $\Aut_{\gp}(A)$ the group of automorphisms of $A$ as an algebraic group.
\begin{enumerate}
\item\label{main1}
If $A = \Aut(X)$ for some projective variety $X$, then $\Aut_{\gp}(A)$
is finite.
\item\label{main2}
If $\Aut_{\gp}(A)$ is finite, then there exists a smooth projective
variety $X$ such that $A = \Aut_X$.
\end{enumerate}
\end{maintheorem}
Like in \cite{LM}, the proof of the first assertion is easy, and the
second one is obtained by constructing $X$ as a quotient
$(A \times Y)/G$, where $G \subset A$ is a finite subgroup,
$Y$ is a smooth projective variety such that $G = \Aut_Y$, and
the quotient is taken for the diagonal action of $G$ on $A \times Y$.
In \cite{LM}, $G$ is a cyclic group of prime order $\ell$, and $Y$
a surface of degree $\ell$ in $\bP^3$ equipped with a free action
of $G$. As the construction of $Y$ does not extend readily to
prime characteristics, we take for $G$ the $n$-torsion subgroup
scheme $A[n]$ for an appropriate integer $n$, and for $Y$
an appropriate rational variety.
A different construction of a variety $X$ satisfying the second
assertion has been obtained independently by Mathieu Florence,
see \cite{Florence}; it works over an arbitrary field.
Let us briefly describe the structure of this note.
Section~\ref{sec:prelim} is a short introduction to basic notation
and reminders on abelian varieties. In Section~\ref{sec:(i)},
we take an abelian variety $A$ with $\Aut_{\gp}(A)$ infinite,
assume that $A = \Aut(X)$ for some projective variety $X$,
and derive a contradiction. In Section~\ref{sec:(ii)}, we take
an abelian variety $A$ with $\Aut_{\gp}(A)$ finite and prove that
for each prime number $\ell$ different from the characteristic
of the ground field, for each $m\ge 1$ big enough, and for each
smooth rational projective variety $Y$ with $\Aut_Y\simeq A[\ell^m]$,
one has
\[\Aut_X = A\]
where $X$ is the smooth projective variety $(A \times Y)/A[\ell^m]$.
Then, Section~\ref{Sec:Y} is devoted to an explicit construction of $Y$.
\section{Preliminaries and notation}\label{sec:prelim}
We begin by fixing some notation and conventions which will be
used throughout this note. The ground field $\k$ is algebraically
closed, of characteristic $p \geq 0$. A variety $X$ is a separated
integral scheme of finite type over~$k$. By a point of $X$,
we mean a $\k$-rational point.
We use \cite{Mumford} as a general reference for abelian varieties.
We denote by~$A$ such a variety of dimension $g \geq 1$,
with group law $+$ and neutral element $0$. Then
\[ \Aut(A) = A \rtimes \Aut_{\gp}(A), \]
where $A$ acts on itself by translations.
Moreover, $\Aut_{\gp}(A) = \Aut(A,0)$ (the group of automorphisms
fixing the neutral element), see \cite[\S 4, Cor.~1]{Mumford}.
For any positive integer $n$, we denote by $A[n]$ the $n$-torsion
subgroup scheme of $A$, i.e., the schematic kernel of
the multiplication map
\[ n_A \colon A \longrightarrow A, \quad a \longmapsto n a. \]
Clearly, $A[n]$ is stable by $\Aut_{\gp}(A)$. Also, recall
from \cite[\S 6, Prop.]{Mumford} that $A[n]$ is finite; moreover,
$A[n]$ is the constant group scheme $(\bZ/n)^{2g}$ if $n$ is prime
to $p$.
We denote by
\[ q \colon A \longrightarrow A/A[n], \quad a \longmapsto \bar{a} \]
the quotient morphism. Then $n_A$ factors as $q$ followed by an isomorphism
$A/A[n] \stackrel{\simeq}{\longrightarrow} A$.
\section{Proof of Theorem \ref{thm:main}\ref{main1}}
\label{sec:(i)}
In this section, we choose an abelian variety $A$ such
that $\Aut_{\gp}(A)$ is infinite, and proceed to
the proof of Theorem~\ref{thm:main}\ref{main1}. We will need:
\begin{lemma}\label{lem:ker}
For any positive integer $n$, the kernel of the restriction map
\[ \rho_n \colon \Aut_{\gp}(A) \longrightarrow \Aut_{\gp}(A[n]) \]
is infinite.
\end{lemma}
\begin{proof}
Note that $\rho_n$ extends to a ring homomorphism
\[ \sigma_n \colon \End_{\gp}(A) \longrightarrow \End_{\gp}(A[n]) \]
with an obvious notation. Moreover, the image of $\sigma_n$
is a finitely generated abelian group (as a quotient of
$\End_{\gp}(A)$) and is killed by $n$; thus, this image
is finite. So the image of $\rho_n$ is finite as well.
\end{proof}
We assume, for contradiction, the existence of a projective variety $X$
such that $A = \Aut(X)$; in particular, $X$ is equipped with a faithful
action of $A$. By \cite[Lem.~3.2]{Brion}, there exist a finite subgroup
scheme $G$ of $A$ and an $A$-equivariant morphism
$f \colon X \to A/G$, where $A$ acts on $A/G$ via the quotient map.
Denote by $n$ the order of $G$; then $G$ is a subgroup scheme
of $A[n]$. By composing $f$ with the natural map $A/G \to A/A[n]$,
we may thus assume that $G = A[n]$.
We now adapt the proof of \cite[Thm.~2.2]{LM}.
Let $Y$ be the schematic fiber of $f$ at $\bar{0}$. Then $Y$
is a closed subscheme of $X$, stable by the action of $A[n]$.
Form the cartesian square
\[ \xymatrix{
X' \ar[r]^-{f'} \ar[d]_{r} & A \ar[d]_{q}\\
X \ar[r]^-{f} & A/A[n].\\
} \]
Then $X'$ is a projective scheme equipped with an action of $A$;
moreover, $f'$ is an $A$-equivariant morphism and its fiber at $0$
may be identified with $Y$. It follows that the morphism
\[ A \times Y \longrightarrow X', \quad (a,y) \longmapsto a \cdot y \]
is an isomorphism with inverse
\[ X' \longrightarrow A \times Y, \quad
x' \longmapsto (f'(x'), -f'(x') \cdot x'). \]
So we may identify $X'$ with $A \times Y$; then $r$ is invariant
under the action of $A[n]$ via $g \cdot (a,y) = (a - g, g \cdot y)$.
Since $q$ is an $A[n]$-torsor, so is $r$. In particular,
$X = (A \times Y)/A[n]$ and the stabilizer in $A$ of any $y \in Y$
is a subgroup scheme of $A[n]$.
By Lemma~\ref{lem:ker}, we may choose a nontrivial $v \in \Aut_{\gp}(A)$
which restricts to the identity on $A[n]$. Then $v \times \id$ is
an automorphism of $A \times Y$ that commutes with the action of
$A[n]$. Since $r$ is an $A[n]$-torsor and hence a categorical quotient,
it follows that $v \times \id \in \Aut(A \times Y)$
factors through a unique $u \in \Aut(X)$, which satisfies
$u(a \cdot y) = v(a) \cdot y$ for all $a \in A$ and $y \in Y$.
As $\Aut(X)=A$, we have $u\in A$. For any $a, b \in A$ and $y \in Y$,
we have $(a + b) \cdot y = b \cdot (a \cdot y)$.
Choosing $b=u$ in the above formula yields
$(a + u) \cdot y = u \cdot (a \cdot y) = v(a) \cdot y$. Thus,
$v(a) - a - u$ fixes every point of $Y$ for any $a \in A$.
Taking $a = 0$, it follows that $u$ fixes $Y$ pointwise,
and hence $u \in A[n]$. So $v(a) - a \in A[n]$ for any $a \in A$,
i.e., $v - \id$ factors through a homomorphism $A \to A[n]$.
Since $A$ is smooth and connected,
it follows that $v - \id = 0$, a contradiction.
\section{Proof of Theorem \ref{thm:main}\ref{main2}: first steps}
\label{sec:(ii)}
We assume from now on that $\Aut_{\gp}(A)$ is finite.
Recall that $q\colon A\to A/A[n]$ is the quotient morphism
(see Section~\ref{sec:prelim}).
\begin{lemma}\label{lem:inj}$ $
\begin{enumerate}
\item\label{inj1} The map
$q_* \colon \Aut_{\gp}(A) \to \Aut_{\gp}(A/A[n])$
is an isomorphism for any integer $n \geq 1$.
\item\label{inj2} Let $\ell \neq p$ be a prime number. Then
$\rho_{\ell^m} \colon \Aut_{\gp}(A) \to \Aut_{\gp}(A[\ell^m])$
is injective for $m \gg 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{inj1} Since $\Aut_{\gp}(A/A[n]) \simeq \Aut_{\gp}(A)$ is finite,
it suffices to show that $q_*$ is injective.
Let $u \in \Aut_{\gp}(A)$ such that $q_*(u) = \id$. Then
$u(a) - a \in A[n]$ for any $a \in A$, that is, $u - \id$ factors
through a homomorphism $A \to A[n]$. As in the very end of
the proof of Theorem \ref{thm:main}\ref{main1}
the smoothness and connectedness of $A$ yield $u = \id$.
\ref{inj2}
Let $T_\ell(A) = \lim_{\leftarrow} A[\ell^m]$; then $T_\ell(A)$ is a
$\bZ_\ell$-module and the natural map
$\Aut_{\gp}(A) \to \Aut_{\bZ_\ell}(T_\ell(A))$
is injective (see \cite[\S 19, Thm.~3]{Mumford}). Thus,
$\bigcap_{m \geq 1} \Ker(\rho_{\ell^m}) = \{ \id \}$.
Since the $\Ker(\rho_{\ell^m})$ form a decreasing sequence, we
get $\Ker(\rho_{\ell^m}) = \{ \id \}$ for $m \gg 0$.
\end{proof}
Next, consider a smooth projective variety $Y$ equipped with an
action of the finite group $G = A[n]$, for some integer
$n$ prime to $p$. Then $G$ acts freely on
$A \times Y$ via $g \cdot (a,y) = (a - g, g \cdot y)$. The quotient
$X = (A \times Y)/G$ exists and is a smooth projective variety
(see \cite[\S 7, Thm.]{Mumford}). The $A$-action on $A \times Y$
via translation on itself yields an action on $X$. The projection
$\mathrm{pr}_A \colon A \times Y \to A$
yields a morphism
\[ f \colon X \longrightarrow A/G \]
which is $A$-equivariant, where $A$ acts on $A/G$ via the quotient
map $q$. Moreover, $f$ is smooth and its schematic fiber at
$\bar{0}$ is $G$-equivariantly isomorphic to $Y$.
\begin{lemma}\label{lem:alb}
Assume that $Y$ is rational.
\begin{enumerate}
\item\label{alb1} The map $f$ is the Albanese morphism of $X$.
\item\label{alb2} The neutral component $\Aut^0(Y)$ is a linear
algebraic group.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{alb1} Let $B$ be an abelian variety, and $u \colon X \to B$ a morphism.
Composing $u$ with the quotient morphism $A \times Y \to X$
yields a $G$-invariant morphism $v \colon A \times Y \to B$.
As $Y$ is rational, $v$ factors through a morphism $A \to B$,
which must be $G$-invariant. So $u$ factors through a morphism
$A/G \to B$.
\ref{alb2} By a theorem of Nishi and Matsumura (see \cite{Brion} for a modern
proof), there exist a closed affine subgroup scheme $H \subset \Aut^0(Y)$
such that the homogeneous space $\Aut^0(Y)/H$ is an abelian variety,
and an $\Aut^0(Y)$-equivariant morphism $u \colon Y \to \Aut^0(Y)/H$.
As $Y$ is rational and $u$ is surjective, this forces $H = \Aut^0(Y)$.
\end{proof}
As a consequence of Lemma~\ref{lem:alb}, if $Y$ is rational then
$f$ induces a homomorphism
\[ f_* \colon \Aut(X) \longrightarrow \Aut(A/G), \]
and hence an exact sequence
\[ 1 \longrightarrow \Aut_{A/G}(X) \longrightarrow \Aut(X)
\stackrel{f_*}{\longrightarrow} A/G \rtimes \Aut_{\gp}(A/G), \]
where $\Aut_{A/G}(X)$ denotes the group of relative automorphisms.
The $A$-action on $X$ yields a homomorphism
$G \to \Aut_{A/G}(X)$.
Moreover, the image of $f_*$ contains the group $A/G$ of
translations, and hence equals $A/G \rtimes \Gamma$,
where $\Gamma$ denotes the subgroup of $\Aut_{\gp}(A/G)$
consisting of automorphisms which lift to $X$.
\begin{lemma}\label{lem:rel}
Let $G = A[\ell^m]$, where $\ell,m$ satisfy the assumptions of
Lemma $\ref{lem:inj}\ref{inj2}$.
Let $Y$ be a smooth projective rational $G$-variety such that $\Aut(Y) = G$.
\begin{enumerate}
\item\label{rel1} The map $G \to \Aut_{A/G}(X)$ is an isomorphism.
\item\label{rel2} The group $\Gamma$ is trivial.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{rel1} Let $u \in \Aut_{A/G}(X)$. Then $u$ restricts to an automorphism
of $Y$ (the fiber of $f$ at $0$), and hence to a unique $g \in G$.
Replacing $u$ with $g^{-1} u$, we may assume that $u$ fixes $Y$
pointwise. For any $a \in A$ and $y \in Y$, we have
$f(u(\overline{(a,y)})) = f(\overline{(a,y)}) = \bar{a}$,
where $\overline{(a,y)}$ denotes the image of $(a,y)$ in $X$.
As $f$ is $A$-equivariant, it follows that
$(-a) \cdot u(\overline{(a,y)}) \in Y$. This defines a morphism
\[ F \colon A \times Y \longrightarrow Y, \quad
(a,y) \longmapsto (-a) \cdot u(\overline{(a,y)}) \]
such that $F(0,y) = u(y) = y$ for all $y \in Y$. As $A$ is connected,
this defines in turn a morphism (of varieties) $A \to \Aut^0(Y)$,
which must be constant by Lemma~\ref{lem:alb}\ref{alb2}. So
$u(\overline{(a,y)}) = a \cdot y = \overline{(a,y)}$
identically, i.e., $u = \id$.
\ref{rel2} Let $\gamma \in \Gamma$; then there exists $u \in \Aut(X)$
such that $f_*(u) = \gamma$. Since $\gamma(\bar{0}) = \bar{0}$,
we see that $u$ stabilizes $Y$; thus, $u \vert_Y = g$ for a unique
$g \in G$. Also, there exists $v \in \Aut_{\gp}(A)$ such that
$q_*(v) = \gamma$ (Lemma~\ref{lem:inj}\ref{inj1}).
Thus, we have
$f(u(\overline{(a,y)})) = \gamma f(\overline{(a,y)}) = \overline{v(a)}$,
i.e., $(- v(a)) \cdot u(\overline{(a,y)}) \in Y$ for all $a \in A$
and $y \in Y$. Arguing as in the proof of \ref{rel1}, it follows that
\[ u(\overline{(a,y)}) = v(a) \cdot g(y) \]
identically. In particular, $g(a \cdot y) = v(a) \cdot g(y)$ for all $a \in G$
and $y \in Y$. Since $G$ is commutative, we obtain $v(a) = a$
for all $a \in G$. Thus, $v = \id$ by Lemma~\ref{lem:inj}\ref{inj2}.
So $\gamma = \id$ as well.
\end{proof}
\begin{proposition}\label{prop:aut}
Under the assumptions of Lemma~$\ref{lem:rel}$, the $A$-action
on $X$ yields an isomorphism $A \to \Aut(X)$. If in addition
$G = \Aut_Y$, then $A \to \Aut_X$ is an isomorphism as well.
\end{proposition}
\begin{proof}
We have a commutative diagram of exact sequences
\[ \xymatrix{
0 \ar[r] & G \ar[r] \ar[d] &
A \ar[r] \ar[d] & A/G \ar[r] \ar[d] & 0
\\
1 \ar[r] & \Aut_{A/G}(X) \ar[r] & \Aut(X) \ar[r]^-{f_*} & \Aut(A/G). \\
} \]
By Lemma~\ref{lem:rel}, the left vertical map is an isomorphism
and the image of $f_*$ is the group $A/G$ of translations.
This yields the first assertion.
To show the second assertion, it suffices to show that the induced
homomorphism of Lie algebras $\Lie(A) \to \Lie(\Aut_X)$
is an isomorphism when $G = \Aut_Y$. Recall that $\Lie(\Aut_X)$
is the space of global sections of the tangent bundle $T_X$
(see e.g.~\cite[Lem.~3.4]{MO}). Moreover,
as $f$ is smooth, we have an exact sequence
\[ 0 \longrightarrow T_f \longrightarrow T_X
\stackrel{df}{\longrightarrow} f^*(T_{A/G}) \longrightarrow 0, \]
where $T_f$ denotes the relative tangent bundle.
Since $T_{A/G}$ is the trivial bundle with fiber $\Lie(A/G)$,
this yields an exact sequence
\[ 0 \longrightarrow H^0(X,T_f) \longrightarrow H^0(X,T_X)
\longrightarrow \Lie(A/G) \]
such that the composition
$\Lie(A) \to H^0(X,T_X) \to \Lie(A/G)$ is $\Lie(q)$.
So it suffices in turn to show that $H^0(X,T_f) = 0$.
We have a cartesian diagram
\[ \xymatrix{
A \times Y \ar[r]^-{\mathrm{pr}_A} \ar[d] & A \ar[d]_{}\\
X \ar[r]^-{f} & A/G,\\
} \]
where the vertical arrows are $G$-torsors.
This yields an isomorphism
\[ H^0(X,T_f) \simeq H^0(A \times Y, T_{\mathrm{pr}_A})^G \]
and hence
\[ H^0(X,T_f) \simeq H^0(A \times Y, \mathrm{pr}_Y^*(T_Y))^G
\simeq (\cO_A(A) \otimes H^0(Y,T_Y))^G
\simeq H^0(Y,T_Y)^G. \]
As $G = \Aut_Y$, we have $H^0(Y,T_Y) = \Lie(G) = 0$; this
completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:main}\ref{main2}: the construction of $Y$}
\label{Sec:Y}
In this section, we fix integers $n,r\ge 2$, where $p$
does not divide $n$, and construct a smooth projective rational variety $Y$
of dimension $r$ such that $\Aut_Y=(\mathbb{Z}/n)^r$.
We define
\[G=\{(\mu_1,\ldots,\mu_r)\in \k^r\mid \mu_i^n=1
\text{ for each } i\in \{1,\ldots,r\}\}\simeq (\mathbb{Z}/n)^r\]
and let $G$ act on $(\mathbb{P}^1)^r$ by
\[\begin{array}{ccc}
G \times (\mathbb{P}^1)^r&\to & (\mathbb{P}^1)^r\\
( (\mu_1,\ldots,\mu_r),([ u_1:v_1],\ldots,[ u_r:v_r]))&
\mapsto & ([ u_1:\mu_1v_1],\ldots,[ u_r:\mu_rv_r])\end{array}\]
For each $i\in \{1,\ldots, r\}$, we denote by $\ell_i\subset (\mathbb{P}^1)^r$
the closed curve isomorphic to $\mathbb{P}^1$ given by the image of
\[\begin{array}{ccc}
\mathbb{P}^1&\to & (\mathbb{P}^1)^r\\
([u:v])&\mapsto & ([0:1],\ldots,[0:1],[u:v],[0:1],\ldots,[0:1])\end{array}\]
where $[u:v]$ appears in the $i$-th place. The curves
$\ell_1,\ldots,\ell_r\subset (\mathbb{P}^1)^r$ generate the cone of curves of $(\mathbb{P}^1)^r$.
For each $i\in \{1,\ldots, r\}$, the curve $\ell_i$ is stable by $G$
and the action of $G$ on $\ell_i$ corresponds to a cyclic action of order
$n$ on $\mathbb{P}^1$, given by $[u:v]\mapsto [\mu u:v]$, where $\mu\in \k$, $\mu^n=1$.
All orbits are of size $n$, except the two fixed points $[0:1]$ and $[1:0]$.
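As an illustration of the orbit structure in the smallest case, let $n=r=2$ (so $p\neq 2$): then $G\simeq (\mathbb{Z}/2)^2$ acts on $\mathbb{P}^1\times \mathbb{P}^1$ by
\[((\mu_1,\mu_2),([u_1:v_1],[u_2:v_2]))\longmapsto ([u_1:\mu_1 v_1],[u_2:\mu_2 v_2]),\qquad \mu_1,\mu_2\in\{1,-1\},\]
its fixed points are exactly the four points whose two coordinates lie in $\{[0:1],[1:0]\}$, and every other point of $\ell_1\cup \ell_2$ has a $G$-orbit of size $2$.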
We choose $s=(s_1,\ldots,s_r)$ to be a sequence of positive integers, all
distinct, such that $s_i\cdot n\ge 3$ for each $i$ if $r=2$, and consider a
finite subset
\[ \Delta\subset \ell_1\cup \cdots\cup \ell_r\subset (\mathbb{P}^1)^r, \]
stable
by $G$, given by a union of orbits of size $n$.
For each $i\in \{1,\ldots,r\}$, we define $\Delta_i\subset \ell_i$
to be a union of exactly $s_i\ge 1$ orbits of size $n$, and then set
$\Delta=\bigcup_{i=1}^r \Delta_i$. We moreover choose the points such that,
for each $i$, the group $H_i=\{h\in \Aut(\mathbb{P}^1)\mid h(\Delta_i)=\Delta_i,\ h([0:1])=[0:1]\}$
only consists of $\{[u:v]\mapsto [\mu u:v]\mid \mu^n=1\}$.
As the unique point of intersection of any two distinct
$\ell_i$ is fixed by $G$, each point of $\Delta$ lies on exactly one
of the curves $\ell_i$. This gives
\[\Delta=\uplus_{i=1}^r \Delta_i\]
Let $\pi\colon Y\to (\mathbb{P}^1)^r$ be the blow-up of $\Delta$.
As $\Delta$ is $G$-invariant, the action of $G$ lifts to
an action on $Y$. We want to prove that the resulting homomorphism
$G \to \Aut_Y$ is an isomorphism.
\subsection{Intersection on $(\mathbb{P}^1)^r$}
For $i=1,\ldots,r$, we denote by $H_i\subset (\mathbb{P}^1)^r$ the hypersurface
given by
\[H_i=\{([u_1:v_1],\ldots,[u_r:v_r])\in (\mathbb{P}^1)^r\mid u_i=0\}.\]
Then $H_1,\ldots,H_r$ generate the cone of effective divisors on $(\mathbb{P}^1)^r$,
and we have
\[H_i\cdot \ell_i=1, H_i\cdot \ell_j=0\]
for all $i,j \in \{1,\ldots,r\}$ with $i\not= j$. Moreover,
the canonical divisor class of $(\mathbb{P}^1)^r$ satisfies
$K_{(\mathbb{P}^1)^r}=-2H_1-2H_2-\cdots -2H_r$, so $K_{(\mathbb{P}^1)^r}\cdot \ell_i=-2$
for each $i \in \{1,\ldots,r\}$.
We also observe that $\ell_i\subset H_j$ for all $i,j \in \{1,\ldots,r\}$
with $i\not= j$ and that $\ell_i\not\subset H_i$.
\subsection{Intersection on $Y$}
For $i=1,\ldots,r$, denote by $\tilde{\ell}_i,\tilde{H}_i\subset Y$
the strict transforms of $\ell_i$ and $H_i$.
For each $p\in \Delta$, we denote by $E_p=\pi^{-1}(p)$ the exceptional
divisor, isomorphic to $\mathbb{P}^{r-1}$, and choose a line $e_p\subset E_p$.
A basis of the Picard group of $Y$ is given by the union of
$\tilde{H}_1,\ldots,\tilde{H}_r$ and of all exceptional divisors $E_p$,
with $p\in \Delta$. A basis of the vector space of curves
(up to numerical equivalence) is given by
$\tilde{\ell}_1,\ldots,\tilde{\ell}_r$ and by all $e_p$ with $p\in \Delta$.
We have
\begin{equation*}e_p\cdot E_p=-1, e_p\cdot E_q=0\end{equation*}
for all $p,q\in \Delta$, $p\not=q$.
\begin{lemma}\label{LemmaHiellj}
For all $i,j\in \{1,\ldots,r\}$ with $i\not=j$, the following hold:
\begin{enumerate}
\item\label{tildeHi}
$\tilde{H}_i=\pi^*(H_i)-\sum\limits_{p\in \Delta\cap H_i} E_p
=\pi^*(H_i)-\sum\limits_{s\not=i}\sum\limits_{p\in \Delta_s} E_p.$
\item\label{tildeelliDelta}
$\tilde{\ell}_i\cdot E_p=1$ if $p\in \Delta_i$ and
$\tilde{\ell}_i\cdot E_p=0$ if $p\in \Delta\setminus \Delta_i$.
\item\label{tildeHielli}
$\tilde{H}_i\cdot \tilde{\ell}_i=1$.
\item\label{tildeHiellj}
$\tilde{H}_i\cdot \tilde{\ell}_j=-\lvert \Delta_j\rvert=-n s_j$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{tildeHi} follows from the fact that $H_i$ is a smooth hypersurface
of $(\mathbb{P}^1)^r$ and that $\Delta\cap H_i=\bigcup\limits_{s\not=i} \Delta_s$.
\ref{tildeelliDelta}: follows from the fact that $\ell_i$ is a smooth curve,
passing through all points of $\Delta_i$ and not through any point
of $\Delta\setminus \Delta_i$.
\ref{tildeHielli}:
With \ref{tildeHi} and \ref{tildeelliDelta}, we get
$\tilde{H}_i\cdot \tilde{\ell}_i= H_i\cdot \ell_i =1$.
\ref{tildeHiellj}:
With \ref{tildeHi} and \ref{tildeelliDelta}, we get
$\tilde{H}_i\cdot \tilde{\ell}_j=H_i\cdot \ell_j-\lvert \Delta_j\rvert
=-\lvert \Delta_j\rvert=-ns_j$.
\end{proof}
\begin{lemma}\label{Lemm:gammapj}
For all $i\in \{1,\ldots, r\}$ and each $p\in \Delta\setminus\Delta_i$,
let $\gamma_{p,i}\subset (\mathbb{P}^1)^r$ denote the irreducible curve passing through
$p$ that is numerically equivalent to $\ell_i$.
\begin{enumerate}
\item\label{gammapj}
Let $j\in \{1,\ldots,r\}$ be such that $p\in \Delta_j$.
The $j$-th coordinate of $\gamma_{p,i}$ is the one of $p$,
its $i$-th coordinate is free, and all others are $[0:1]$.
\item\label{gammapjeq}
The strict transform $\tilde{\gamma}_{p,i}$ of $\gamma_{p,i}$ on $Y$
is numerically equivalent to
$\tilde{\ell}_i+\sum\limits_{q\in \Delta_i} e_q-e_p$ and satisfies
$\tilde{\gamma}_{p,i}\cdot E_p=1$
and $\tilde{\gamma}_{p,i}\cdot E_q=0$ for all $q\in \Delta\setminus \{p\}$.
\end{enumerate}
\end{lemma}
\begin{proof}\ref{gammapj}:
We write $p=(p_1,\ldots,p_r)\in (\mathbb{P}^1)^r$. Since
$\gamma_{p,i}\subset (\mathbb{P}^1)^r$ is a curve equivalent to $\ell_i$
and passing through $p$, it has to be
\[\gamma_{p,i}
=\{(p_1,\ldots,p_{i-1},t,p_{i+1},\ldots,p_r)\in (\mathbb{P}^1)^r\mid t\in \mathbb{P}^1\}
\simeq \mathbb{P}^1.\]
Moreover, for each $s\in \{1,\ldots,r\}\setminus \{j\}$, we have
$p_s=[0:1]$, as $p\in \Delta_j\subset \ell_j$. This completes
the proof of~\ref{gammapj}.
\ref{gammapjeq}: We want to prove that
$\tilde{\gamma}_{p,i}\equiv \tilde{\ell}_i+\sum\limits_{q\in \Delta_i} e_q-e_p$.
For each divisor $D$ on $(\mathbb{P}^1)^r$, we have
\begin{align*}\tilde{\gamma}_{p,i}\cdot \pi^*(D)&
=\pi(\tilde{\gamma}_{p,i})\cdot D=\gamma_{p,i}\cdot D\\
(\tilde{\ell}_i+\sum\limits_{q\in \Delta_i} e_q-e_p)\cdot \pi^*(D)&=\pi(\tilde{\ell}_{i})\cdot D
=\ell_i\cdot D=\gamma_{p,i}\cdot D\end{align*}
We moreover have (with Lemma~\ref{LemmaHiellj}\ref{tildeelliDelta})
\begin{align*}\tilde{\gamma}_{p,i}\cdot E_p&=1
=E_p\cdot (\tilde{\ell}_i+\sum\limits_{q\in \Delta_i} e_q-e_p),\\
\tilde{\gamma}_{p,i}\cdot E_{p'}&=0
=E_{p'}\cdot (\tilde{\ell}_i+\sum\limits_{q\in \Delta_i} e_q-e_p),
\text{ for all }p'\in \Delta\setminus \{p\}.\end{align*}
As the numerical class of a $1$-cycle on $Y$ is determined by its intersections with $\pi^*(H_1),\ldots,\pi^*(H_r)$ and with the $E_{p'}$, $p'\in \Delta$, this proves \ref{gammapjeq}.
\end{proof}
\begin{lemma}\label{Lemm:ConeGen}
Let $\gamma\subset Y$ be an irreducible curve. Then, one of the following holds:
\begin{enumerate}
\item\label{ConeGen1}
We have $\gamma\equiv de_p$ for some $d\ge 1$ and some $p\in \Delta$
$($where $\equiv$ denotes numerical equivalence$)$;
\item\label{ConeGen2}
There are non-negative integers $a_1,\ldots,a_r$ and $\{\mu_p\}_{p\in \Delta}$
such that
\[\gamma\equiv \sum_{i=1}^r a_i \tilde{\ell}_i+ \sum_{p\in \Delta} \mu_p e_p\]
and such that $a_1+\cdots +a_r\ge 1$.
\item\label{ConeGen3}
There are $j\in \{1,\ldots,r\}$, $q\in \Delta_j$ and integers
$a_1,\ldots,a_r\ge 0$ such that
\[\gamma\equiv a_j e_q+\sum\limits_{i\not=j} a_i \tilde{\gamma}_{q,i}\]
and such that $\sum_{i\not=j} a_i\ge 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose first that $\gamma$ is contained in some $E_p$, where $p\in \Delta$.
In this case, $\gamma$ is a curve of degree $d\ge 1$ in the projective space
$E_p\simeq \mathbb{P}^{r-1}$ (if $r=2$, then $\gamma=e_p=E_p$ and $d=1$), and thus
$\gamma\equiv d e_p$. This gives Case~\ref{ConeGen1}.
We may now assume that $\gamma$ is not contained in $E_p$ for any $p\in \Delta$.
Hence, $\gamma$ is the strict transform of the irreducible curve
$\pi(\gamma)\subset (\mathbb{P}^1)^r$, numerically equivalent to
$\sum_{i=1}^r a_i \ell_i$, with $a_1,\ldots,a_r\ge 0$ and
$\sum_{i=1}^r a_i\ge 1$. For each $p\in \Delta$, we write
$\epsilon_p= E_p\cdot \gamma \ge 0$.
We first prove that
\begin{equation}\label{gammaequiv}\tag{$\spadesuit$}
\gamma\equiv \sum_{i=1}^r a_i \tilde{\ell}_i
+\sum_{i=1}^r\sum_{p\in \Delta_i} (a_i-\epsilon_p) e_p.
\end{equation}
Intersecting both sides of \eqref{gammaequiv} with the divisor $\pi^*(D)$,
for any divisor $D$ on $(\mathbb{P}^1)^r$, gives
$\pi(\gamma)\cdot D=\sum a_i\ell_i \cdot D$. Moreover, for each $p\in \Delta$,
there is $j\in\{1,\ldots,r\}$ such that $p\in \Delta_j$.
Intersecting $E_p$ with both sides of \eqref{gammaequiv}, we obtain
$E_p\cdot \gamma=\epsilon_p
\stackrel{\text{Lemma~\ref{LemmaHiellj}\ref{tildeelliDelta}}}{=}
E_p\cdot (\sum\limits_{i=1}^r a_i \tilde{\ell}_i
+\sum\limits_{i=1}^r\sum\limits_{p\in \Delta_i} (a_i-\epsilon_p) e_p)$.
This completes the proof of \eqref{gammaequiv}.
For each $p\in \Delta$, we denote by $i\in \{1,\ldots, r\}$ the integer
such that $p\in \Delta_i$ and by $H_{p}\subset (\mathbb{P}^1)^r$ the hypersurface
consisting of points $q\in (\mathbb{P}^1)^r$ having the same $i$-th coordinate as $p$.
Hence $p\in H_{p}$, $H_{p}\cap \Delta=\{p\}$ and $H_{p}\sim H_i$.
The strict transform of $H_{p}$, which we denote by $\tilde{H}_{p}$, satisfies
$\tilde{H}_{p}\sim \pi^*(H_i)-E_p$. This gives
\begin{equation} \tag{$\heartsuit$}
\tilde{H}_{p}\cdot \gamma=a_i-E_p\cdot \gamma =a_i-\epsilon_p .\label{gammaHp}
\end{equation}
Suppose first that $ \tilde{H}_{p}\cdot \gamma\ge 0$ for each $p\in \Delta$.
This means (with \eqref{gammaHp}), that $a_i-\epsilon_p\ge 0$ for each
$i\in \{1,\ldots,r\}$ and each $p\in \Delta_i$. Hence all coefficients in
\eqref{gammaequiv} are non-negative, so we obtain ~\ref{ConeGen2}.
Suppose now that $ \tilde{H}_{q}\cdot \gamma< 0$ for some $q\in \Delta$.
This implies that $\gamma\subset \tilde{H}_q$. As $H_{q}\cap \Delta=\{q\}$,
we obtain $E_p\cap \tilde{H}_q=\emptyset$ for each $p\in \Delta\setminus \{q\}$,
which yields $\epsilon_p=E_p\cdot \gamma=0$. Denoting by $j\in\{1,\ldots,r\}$
the element such that $q\in \Delta_j$, the $j$-th component of
$\pi(\gamma)\subset (\mathbb{P}^1)^r$ is constant, so
$a_j=\pi^*(H_j)\cdot \gamma=H_j\cdot \pi(\gamma)=0$. We now prove that
\begin{equation}\tag{$\diamondsuit$}
\gamma\equiv (-\epsilon_q+\sum\limits_{i\not=j} a_i ) e_q
+\sum\limits_{i\not=j} a_i \tilde{\gamma}_{q,i}\label{sumgammaqi}
\end{equation}
Intersecting both sides of \eqref{sumgammaqi} with the divisor $\pi^*(D)$,
for any divisor $D$ on $(\mathbb{P}^1)^r$, gives
$\pi(\gamma)\cdot D=\sum a_i\ell_i \cdot D$. Intersecting $E_q$ with both sides
gives $\epsilon_q=\epsilon_q$, since $E_q\cdot \tilde{\gamma}_{q,i}=1$
for each $i\not=j$ (Lemma~\ref{Lemm:gammapj}\ref{gammapjeq}).
Intersecting with $E_p$ for $p\in \Delta\setminus \{q\}$ gives
$\epsilon_p=0$. This completes the proof of \eqref{sumgammaqi}.
As the $j$-th component of $\pi(\gamma)\subset (\mathbb{P}^1)^r$ is constant,
there is an integer $i\in \{1,\ldots,r\}\setminus \{j\}$ such that
the $i$-th component of $\pi(\gamma)$ is not constant. This implies
that $\pi(\gamma)\not\subset H_i$, so $\gamma\not\subset \tilde{H}_i$.
We obtain
\[0\le \tilde{H}_i\cdot \gamma
\stackrel{\text{Lemma~\ref{LemmaHiellj}\ref{tildeHi}}}{=}
(\pi^*(H_i)-\sum\limits_{s\not=i}\sum\limits_{p\in \Delta_s} E_p)\cdot \gamma
=a_i-\epsilon_q.\]
Hence, the coefficients of \eqref{sumgammaqi} are non-negative, giving \ref{ConeGen3}.
\end{proof}
\begin{proposition}\label{Prop:Equiv}
Let $\gamma\subset Y$ be an irreducible curve. Then, the following are equivalent:
\begin{enumerate}
\item\label{equiv1}
For all effective $1$-cycles $\gamma_1,\gamma_2$ on $Y$ such that
$\gamma\equiv \gamma_1+\gamma_2$, we have $\gamma_1=0$ or $\gamma_2=0$.
\item\label{equiv2}
$\gamma$ is numerically equivalent to $\tilde{\ell}_i$ for some
$i\in \{1,\ldots,r\}$, to $\tilde{\gamma}_{p,i}$ for some
$i\in \{1,\ldots,r\}, p\in \Delta\setminus \Delta_i$, or to $e_p$
for some $p\in \Delta$.
\item\label{equiv3}
$\gamma$ is either equal to $\tilde{\ell}_i$ for some $i\in \{1,\ldots,r\}$,
or equal to
$\tilde{\gamma}_{p,i}$ for some $i\in \{1,\ldots,r\}, p\in \Delta\setminus \Delta_i$,
or is a line in $E_p$, for some $p\in \Delta$.
\end{enumerate}
\end{proposition}
\begin{proof}
$\ref{equiv1}\Rightarrow \ref{equiv2}$:
By Lemma~\ref{Lemm:ConeGen}, $\gamma\equiv\gamma_1+\cdots+\gamma_s$
where $s\ge 1$ and where $\gamma_1,\ldots,\gamma_s$ belong to
$\{\tilde{\ell}_i\mid i\in \{1,\ldots,r\} \}\cup
\{e_p\mid p\in \Delta\}\cup \{\tilde{\gamma}_{p,i}
\mid i\in \{1,\ldots,r\},p\in \Delta\setminus \Delta_i\}$.
As \ref{equiv1} is satisfied, we have $s=1$, which implies \ref{equiv2}.
$\ref{equiv2}\Rightarrow \ref{equiv3}$:
Suppose first that $\gamma\equiv e_p$ for some $p\in \Delta$.
For an ample divisor $D$ on $(\mathbb{P}^1)^r$, we have
$0=e_p\cdot \pi^*(D)=\pi_*(\gamma)\cdot D$, which implies that
$\gamma$ is contracted by $\pi$. Hence, $\gamma$ is a curve
of degree $d\ge 1$ in some $E_q$, $q\in \Delta$, and is thus
equivalent to $d e_q$. As $-1=E_p\cdot e_p=E_p\cdot \gamma$,
we have $q=p$ and $d=1$.
Suppose now that $\gamma\equiv \tilde{\ell}_i$ for some $i\in \{1,\ldots,r\}$.
For each $j\in \{1,\ldots,r\}$ with $j\not=i$, we have
$\tilde{H}_i\cdot \gamma=\tilde{H}_i\cdot \tilde{\ell}_j
\stackrel{\text{Lemma~\ref{LemmaHiellj}\ref{tildeHiellj}}}{=}-n s_j<0$.
Hence, $\mathbb{P}i(\gamma)\subset \bigcap_{j\not=i } H_j=\ell_i$.
As $\mathbb{P}i(\gamma)\cdot H_i=\mathbb{P}i^*(H_i)\cdot \gamma=\mathbb{P}i^*(H_i)\cdot \tilde{\ell}_i=1$,
we have $\mathbb{P}i(\gamma)=\ell_i$ and $\gamma=\tilde{\ell}_i$.
In the remaining case, $\gamma\equiv\tilde{\gamma}_{p,i}$ for some
$i\in \{1,\ldots,r\}$ and some $p\in \Delta\setminus \Delta_i$.
Hence, $\mathbb{P}i(\gamma)$ is numerically equivalent to $\mathbb{P}i(\tilde{\gamma}_{p,i})$,
which is equivalent to $\ell_i$ (Lemma~\ref{Lemm:gammapj}\ref{gammapjeq}).
Hence, all coordinates of $\mathbb{P}i(\gamma)$ except the $i$-th one are constant.
As $\gamma\cdot E_p=\tilde{\gamma}_{p,i}\cdot E_p=1$
(again by Lemma~\ref{Lemm:gammapj}\ref{gammapjeq}),
the point $p$ belongs to both $\mathbb{P}i(\gamma)$ and ${\gamma}_{p,i}$,
which yields $\mathbb{P}i(\gamma)={\gamma}_{p,i}$ and thus $\gamma=\tilde{\gamma}_{p,i}$.

$\ref{equiv3}\Rightarrow \ref{equiv1}$: We take effective $1$-cycles
$\gamma_1,\gamma_2$ on $Y$ such that $\gamma\equiv \gamma_1+\gamma_2$
and prove that one of the two is zero, using $\ref{equiv3}$.
For each $i\in \{1,\ldots,r\}$, we write $a_i=\mathbb{P}i^*(H_i)\cdot \gamma$,
$b_i=\mathbb{P}i^*(H_i)\cdot \gamma_1$ and $c_i=\mathbb{P}i^*(H_i)\cdot \gamma_2$
and obtain $a_i=b_i+c_i$. As $H_i$ is nef, $\mathbb{P}i^*(H_i)$ is nef, so
$a_i,b_i,c_i\ge 0$. Moreover, $\gamma$ satisfying $\ref{equiv3}$,
we have $\sum_{i=1}^r a_i=1$, which implies that, up to exchanging
$\gamma_1$ and $\gamma_2$, we may assume that
$\sum_{i=1}^r a_i=\sum_{i=1}^r b_i$ and
$c_i=0$ for $i = 1, \ldots,r$.
In particular, $\gamma_2$ is a sum of irreducible curves contained in
the exceptional divisors $E_p$, $p\in \Delta$.
Suppose first that $\gamma=e_q$ for some $q\in \Delta$. This gives
$\sum_{i=1}^r a_i=\sum_{i=1}^r b_i = 0$,
which implies that both $\gamma_1$ and $\gamma_2$ are
sums of irreducible curves contained in
the exceptional divisors $E_p$, $p\in \Delta$.
For each $p'\in \Delta$ and each irreducible curve $c\subset E_{p'}$
of degree $d\ge 1$ we get $\sum_{p\in \Delta} E_p\cdot c=-d$. As
$\sum_{p\in \Delta} E_p\cdot \gamma=-1$, this gives $\gamma_1=0$ or $\gamma_2=0$.
We may now assume that $\gamma=\tilde{\ell}_s$ or
$\gamma=\tilde{\gamma}_{p,s}$ for some $s\in \{1,\ldots,r\}$ and some $p\in \Delta\setminus \Delta_s$.
This gives $b_s=1$ and $b_i=0$ for all
$i\in \{1,\ldots,r\}\setminus \{s\}$.
Lemma~\ref{Lemm:ConeGen} implies that $\gamma_1$ is
equivalent to a sum of curves contained in
$\{\tilde{\ell}_i\mid i\in \{1,\ldots,r\} \}\cup
\{e_p\mid p\in \Delta\}\cup \{\tilde{\gamma}_{p,i}
\mid i\in \{1,\ldots,r\},p\in \Delta\setminus \Delta_i\}$.
As $b_s=1$ and $b_i=0$ for all $i\in \{1,\ldots,r\}\setminus \{s\}$,
we have $\gamma_1\equiv \alpha+\beta$, where $\alpha$ is either equal to
$\tilde{\ell}_s$ or $\tilde{\gamma}_{p,s}$ for some
$p\in \Delta\setminus \Delta_s$ and where $\beta$ is a non-negative sum of
$e_p, p\in \Delta$. For each $p\in \Delta$, we obtain
\[E_p\cdot \gamma =
E_p\cdot \alpha +E_p\cdot \beta+E_p\cdot \gamma_2\le E_p\cdot \alpha.\]
We now use the fact that we know the intersection of $\alpha$ and $\gamma$
with $E_p$ (which is given either by
Lemma~\ref{LemmaHiellj}\ref{tildeelliDelta}
or by Lemma~\ref{Lemm:gammapj}\ref{gammapjeq}, depending if the curve is equal
to $\tilde{\ell}_s$ or $\tilde{\gamma}_{p,s}$).
If $\gamma=\tilde{\gamma}_{p,s}$ for some $p\in \Delta\setminus \Delta_s$,
then $1=E_p\cdot \gamma\le E_p\cdot \alpha$, which implies that
$\alpha=\tilde{\gamma}_{p,s}$. If $\gamma=\tilde{\ell}_s$, then
$1=E_q\cdot \gamma\le E_q\cdot \alpha$ for each $q\in \Delta_s$,
which implies that $\alpha=\tilde{\ell}_s$. In both cases, we get
$\alpha=\gamma$, which implies that $E_p\cdot \gamma_2=0$ for each
$p\in \Delta$, and thus that $\gamma_2=0$, as desired.
\end{proof}
\begin{theorem}
The map $G \to \Aut_Y$ is an isomorphism.
\end{theorem}
\begin{proof}
We first show that the map $G \to \Aut(Y)$ is an isomorphism.
Let $g\in \Aut(Y)$. For each irreducible curve $\gamma\subset Y$
that satisfies Proposition~\ref{Prop:Equiv}\ref{equiv1}, the curve
$g(\gamma)$ also satisfies Proposition~\ref{Prop:Equiv}\ref{equiv1}.
Hence, the union $F\subset Y$ of all curves satisfying this assertion
is stable under $\Aut(Y)$.
By Proposition~\ref{Prop:Equiv}, we have
\[F=(\bigcup_{p\in \Delta } E_p) \cup (\bigcup_{i=1}^r \tilde{\ell}_i) \cup
(\bigcup_{i=1}^r (\bigcup_{p\in \Delta\setminus \Delta_i}\tilde{\gamma}_{p,i})).\]
We observe that the above union is the decomposition of $F$
into irreducible components. Hence, $g$ permutes these irreducible components.
We now make the following observations:
\begin{enumerate}
\item
For each $i\in \{1,\ldots,r\}$, $\tilde{\ell}_i$ intersects exactly
$n\cdot s_i$ other irreducible components of $F$, namely the $E_p$ with
$p\in \Delta_i$.
\item
For each $p\in \Delta_i$, the divisor $E_p$ intersects exactly $r$ other
irreducible components of $F$, namely the curve $\tilde{\ell}_i$ and
the curves $\tilde{\gamma}_{p,j}$ with $j\in \{1,\ldots,r\}\setminus \{i\}$.
\item
For each
$i\in \{1,\ldots,r\}$ and $p \in \Delta \setminus \Delta_i$, the curve
$\tilde{\gamma}_{p,i}$ intersects exactly $n\cdot s_i+1$ other
irreducible components of $F$. Writing $j\in \{1,\ldots,r\}$ for
the element such that $p\in \Delta_j$, the curve intersects $E_p$
and all curves $\tilde{\gamma}_{q,j}$ for each $q\in \Delta_i$.
\end{enumerate}
If $r\ge 3$, the exceptional divisors $E_p$ are the irreducible components
of maximal dimension of $F$, so $g$ permutes them. If $r=2$, then
$g$ also permutes the $E_p$, as these are the only irreducible components
of $F$ that intersect exactly $2$ other irreducible components of $F$
(we assumed $n\cdot s_i\ge 3$ for each $i$ in the case $r=2$).
In any case, $g$ permutes the exceptional divisors $E_p$ and is thus
the lift of an automorphism $\hat{g}$ of $(\mathbb{P}^1)^r$: we observe that
the birational self-map $\hat{g}=\mathbb{P}i g \mathbb{P}i^{-1}$ of $(\mathbb{P}^1)^r$
restricts to an automorphism on the complement of $\Delta$, and as
$\Delta$ has codimension $\ge 2$, $\hat{g}$ is an automorphism.
We then use again the three observations above to see that
$g(\tilde{\ell}_i)=\tilde{\ell}_i$ for each $i\in \{1,\ldots,r\}$,
as the $s_i$ are all distinct. Hence,
$\hat{g}(\ell_i)=\ell_i$ for each $i$.
This implies that $\hat{g}$ is of the form
\[\begin{array}{ccc}
(\mathbb{P}^1)^r&\to & (\mathbb{P}^1)^r\\
([u_1:v_1],\ldots,[u_r:v_r])&\mapsto &
([ u_1:\mu_1 v_1+\kappa_1u_1],\ldots,[ u_r:\mu_r v_r+\kappa_ru_r])
\end{array}\]
for some $\mu_1,\ldots,\mu_r\in \k^*$ and $\kappa_1,\ldots,\kappa_r\in \k$.
For each $i\in \{1,\ldots,r\}$, the restriction of $\hat{g}$ to $\ell_i$
corresponds to the automorphism $[u:v]\mapsto [ u:\mu_i v+\kappa_iu]$.
As it has to stabilize the set $\Delta_i$, we have $\kappa_i=0$ and
$\mu_i\in\k^*$ is of order $n$.
This yields the isomorphism $G \simeq \Aut(Y)$.

To complete the proof, it suffices to show that $\Aut_Y$ is constant,
or equivalently that its Lie algebra is trivial.
(We refer to \cite[\S 2.1]{Martin} for background on infinitesimal
automorphisms and vector fields).
Recall that $\Lie(\Aut_Y) = H^0(Y,\cT_Y)$, where $\cT_Y$ denotes the
tangent sheaf. In other terms, $\Lie(\Aut_Y)$ consists of the
global vector fields on $Y$. Denoting by
$E = \uplus_{p \in \Delta} E_p$ the exceptional divisor, we have
an exact sequence of sheaves on $Y$
\[ 0 \longrightarrow \cT_{Y,E} \longrightarrow \cT_Y
\longrightarrow \bigoplus_{p \in \Delta}
(i_{E_p})_*(\cN_{E_p/Y}) \longrightarrow 0, \]
where $\cT_{Y,E}$ is the sheaf of vector fields that are tangent to $E$,
and $\cN_{E_p/Y}$ denotes the normal sheaf.
Moreover, for any $p \in \Delta$, we have
$E_p \simeq \bP^{r-1}$ and this identifies $\cN_{E_p/Y}$ with
$\cO_{\bP^{r-1}}(-1)$; thus, $H^0(E_p,\cN_{E_p/Y}) = 0$.
As a consequence,
$H^0(Y,\cT_{Y,E}) \stackrel{\sim}{\to} H^0(Y,\cT_Y)$.
Viewing vector fields as derivations of the structure sheaf $\cO_Y$,
this yields
\[ \Der(\cO_Y,\cO_Y(-E)) \stackrel{\sim}{\to} \Der(\cO_Y), \]
where the left-hand side denotes the Lie algebra of derivations
which stabilize the ideal sheaf of $E$.
The blow-up $\mathbb{P}i : Y \to (\bP^1)^r$ contracts $E$ to $\Delta$ and
satisfies $\mathbb{P}i_*(\cO_Y) = \cO_{(\bP^1)^r}$; also,
$\mathbb{P}i_*(\cO_Y(-E)) = \cI_{\Delta}$ (the ideal sheaf of $\Delta$).
So $\mathbb{P}i$ induces a homomorphism of Lie algebras
$\mathbb{P}i_* : \Der(\cO_Y) \to \Der(\cO_{(\bP^1)^r})$,
which is injective as $\mathbb{P}i$ is birational. Moreover, $\mathbb{P}i_*$
sends $\Der(\cO_Y,\cO_Y(-E))$ into
$\Der(\cO_{(\bP^1)^r}, \cI_{\Delta})$, the Lie algebra of
vector fields on $(\bP^1)^r$ which vanish at each $p \in \Delta$.
So it suffices to show that each such vector field is zero.
We have
\[ \Der(\cO_{(\bP^1)^r}) = H^0((\bP^1)^r, \cT_{(\bP^1)^r})
= \bigoplus_{i=1}^r H^0(\bP^1,\cT_{\bP^1}) = \Lie(\Aut_{\bP^1})^r.
\]
Moreover, $\Lie(\Aut_{\bP^1}) = M_2(\k)/\k \, \id$, the quotient of
the Lie algebra of $2 \times 2$ matrices by the scalar matrices.
Let $\xi = (\xi_1,\ldots,\xi_r) \in \Der(\cO_{(\bP^1)^r})$,
with representative $(A_1,\ldots,A_r) \in M_2(\k)^r$. Then $\xi$
vanishes at $p = ([x_1:y_1],\ldots,[x_r:y_r])$ if and only if
$(x_i,y_i)$ is an eigenvector of $A_i$ for each $i \in \{ 1,\ldots,r\}$.
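Indeed (a short verification sketch, assuming the representative acts on column vectors, which is consistent with the lower-triangularity argument below): in the chart $u\neq 0$ with coordinate $z=v/u$, a representative $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ induces the flow $(u,v)(t)=e^{tA}(u,v)$, whose velocity at $t=0$ is
\[ \dot z=\frac{d}{dt}\Big|_{t=0}\frac{v(t)}{u(t)}=c+(d-a)z-bz^2, \]
and this expression vanishes at $z$ exactly when $A(1,z)^{t}$ is proportional to $(1,z)^{t}$, i.e.\ when $(1,z)$ is an eigenvector of $A$.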
Thus, if $\xi \in \Der(\cO_{(\bP^1)^r},\cI_{\Delta})$, then
$(0,1)$ is an eigenvector of each $A_i$, i.e., $A_i$ is lower
triangular. In addition, each point of $\Delta_i$ yields an
eigenvector of $A_i$. So each $A_i$ is scalar, and $\xi = 0$
as desired.
\end{proof}
\end{document}
|
\begin{document}
\title{Entanglement-Coherence and Discord-Coherence analytical relations for X states}
\author{J.D. Young \and A. Auyuanet }
\institute{J.D. Young \at
Instituto de F\'isica, Facultad de Ingenier\'ia, Universidad de la Rep\'ublica, J. Herrera y Reissig 565, 11300, Montevideo, Uruguay \\
\and
A. Auyuanet \at
Instituto de F\'isica, Facultad de Ingenier\'ia, Universidad de la Rep\'ublica, J. Herrera y Reissig 565, 11300, Montevideo, Uruguay \\
\email{[email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
In this work we derive analytical relations between Entanglement and Coherence as well as between Discord and Coherence, for Bell-diagonal states and for X states, evolving under the action of several noise channels: Bit Flip, Phase Damping and Depolarizing. We demonstrate that for these families, Coherence is the fundamental correlation, that is: Coherence is necessary for the presence of Entanglement and Discord.
\keywords{Quantum Correlations \and Coherence \and Entanglement \and Discord}
\end{abstract}
\section{Introduction}
\label{intro}
Quantum systems can exhibit correlations that cannot be described classically; the existence of non-classical correlations in a system is a signature that the subsystems are genuinely quantum. The complete characterization of the correlations between the parts of a quantum system, as well as of the interrelations between these correlations, is an important subject from both the fundamental and the applied point of view. Understanding and quantifying quantum correlations is of paramount importance to comprehend the origin of the quantum advantages in quantum computing and quantum information processing. Historically, Entanglement was the first quantum correlation known \cite{EPR}, and over time it was the central subject of study, both theoretically and experimentally \cite{Horodecki_RMP, HarocheEPRatoms, ZeilingerTeleportation}. It was later understood that there are separable states that show non-classical behavior. Quantum Discord was introduced in 2001 \cite{Zurek_D}, and other related quantum correlations soon followed \cite{Modi}. Coherence, which underlies the interference phenomenon, has been a central topic in the framework of Quantum Optics. Recently, Baumgratz et al. \cite{Baumgratz} performed a quantitative characterization of Coherence, in which quantum coherence is treated as a physical resource. Both Entanglement and Coherence are related to the phenomenon of quantum superposition; it was therefore natural to try to understand qualitatively what their relationship is and whether there is a quantitative relationship between them \cite{XiCoherence,Adesso_ChE,Chitambar}. This naturally extended to the study of the relation between Coherence and Discord and other related quantum correlations \cite{Vedralcoherence,Fancoherence,HU20181}.\\
In this work we set out to study the existence of analytically expressible dynamical relations between Entanglement and Coherence, and between Discord and Coherence, for a simple model such as the Bell-diagonal states and for a slightly more complicated one, the X states. In Sec. \ref{sec:geomBelldiagonal} we give a geometric description of the Bell-diagonal states. In Sec. \ref{sec:geometric} we present the geometric measures of correlations. In Sec. \ref{sec:Belldiag} we find analytical dynamical relations between Entanglement and Coherence and between Discord and Coherence for the Bell-diagonal states evolving under the action of several noise channels: Bit Flip, Phase Damping and Depolarizing. In Sec. \ref{sec:Xstates} we study the X states; we analyze their region of existence and its variation with the parameters that describe the X states, and we calculate the expressions of the correlations for these states. In Sec. \ref{sec:evolX} we study the evolution of the correlations and their dynamical relations under the action of the same channels applied before. In Sec. \ref{sec:Coherencefundamental} we discuss Coherence as the fundamental quantum correlation. In Sec. \ref{sec:conclu} we summarize and discuss the results.
\section{Geometry of the Bell-diagonal states}
\label{sec:geomBelldiagonal}
The Bell-diagonal states are a paradigmatic class of states in which, due to their relative mathematical simplicity, it has been possible to study the behavior of various quantum correlations.\\
They can be written as:
\begin{equation}\label{Eq:EstadosBell}
\rho=\frac{1}{4} \Big (\mathbb{I}_2 \otimes \mathbb{I}_2 + \sum_{i=1}^{3} r_{i} \sigma_i \otimes \sigma_i \Big)
\end{equation}
where $\sigma_{i}$ are the Pauli matrices and $\vec{r}$ is the correlation vector:
\begin{equation}
\vec{r}=r_{1}\hat{i}+r_{2}\hat{j}+r_{3}\hat{k},
\end{equation}
with $r_{i}=\mathrm{Tr}(\rho\sigma_{i}\otimes\sigma_{i}),$ and
the elements of the $\rho$ matrix can be directly related to the entries of the correlation vector as follows:
\begin{gather}
\begin{aligned}
&\rho_{11}=\rho_{44}=\frac{1}{4}(1+r_3)&\\
&\rho_{22}=\rho_{33}=\frac{1}{4}(1-r_3)&\\
&\rho_{14}=\frac{1}{4}(r_1-r_2)&\\
&\rho_{23}=\frac{1}{4}(r_1+r_2)&\\
\label{Eq:CambioDeVariable}
\end{aligned}
\end{gather}
The three-dimensional representation of the Bell-diagonal states as a function of the entries of the correlation vector is well known \cite{HorodeckiTetrahedro}, and several works have studied the geometrical structure of various quantum correlations in this three-dimensional representation \cite{Caves_BDS,chinosBellDiagonal,steeringbelldiagonal}. In fig.(\ref{fig:Belldiagonal0}) we can see the tetrahedron that delimits the zone of existence of the states, and the octahedron, which marks the region where the states are separable.
Evolving under the action of different noise channels, the Bell-diagonal states describe a path within the tetrahedron \cite{Feldman17}.
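For concreteness, the construction can be checked numerically; the following is a minimal Python sketch (the correlation vector is the one used in the examples of Sec.~\ref{sec:Belldiag}, an arbitrary but physical choice) that builds $\rho$ from eq.(\ref{Eq:EstadosBell}) and verifies the matrix elements of eq.(\ref{Eq:CambioDeVariable}) together with positivity:
\begin{verbatim}
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bell_diagonal(r):
    """Bell-diagonal state built from the correlation vector r = (r1, r2, r3)."""
    rho = np.eye(4, dtype=complex)
    for ri, s in zip(r, (sx, sy, sz)):
        rho += ri * np.kron(s, s)
    return rho / 4.0

r1, r2, r3 = -0.3, 0.6, 0.4        # lies inside the tetrahedron
rho = bell_diagonal((r1, r2, r3))

# matrix elements predicted by the relations above
assert np.isclose(rho[0, 0], (1 + r3) / 4) and np.isclose(rho[3, 3], (1 + r3) / 4)
assert np.isclose(rho[1, 1], (1 - r3) / 4) and np.isclose(rho[2, 2], (1 - r3) / 4)
assert np.isclose(rho[0, 3], (r1 - r2) / 4) and np.isclose(rho[1, 2], (r1 + r2) / 4)

# positivity of rho is the tetrahedron condition
assert np.all(np.linalg.eigvalsh(rho) >= 0)
\end{verbatim}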
\begin{figure}
\caption{Three-dimensional representation of the Bell-diagonal states. The tetrahedron limits the region of existence. Inside the tetrahedron, the octahedron confines the separables Bell-diagonal states.}
\label{fig:Belldiagonal0}
\end{figure}
\section{Geometric measures of correlations}
\label{sec:geometric}
In order to quantify quantum correlations we choose a geometric approach
\cite{GeometricEntanglement,BellomoGeometric,BellomoGeometric2,Dakic,GaussianDiscord,SquareNormDistanceCorr,geometricqc}.
We determine how much Entanglement, Discord or Coherence a state $\rho$ possesses by its minimum distance to the set of states that do not possess that correlation, i.e.:
\begin{eqnarray*}
E(\rho) & = & \min_{\rho^{sep}}d(\rho,\rho^{sep}),\\
D(\rho) & = & \min_{\rho^{cc}}d(\rho,\rho^{cc}),\\
C(\rho) & = & \min_{\rho^{inc}}d(\rho,\rho^{inc}),\\
\end{eqnarray*}
where $\rho^{sep}$ belongs to the set of separable states (zero Entanglement),
$\rho^{cc}$ to the set of classical states (zero Discord) and $\rho^{inc}$ to the set of incoherent states (zero Coherence).\\
We choose to work with the Trace norm, which is a particular case of the $p$-norm: $||A||_{p}^{p}:=\mathrm{Tr}\left(\sqrt{A^{\dagger}A}\right)^{p}$, when $p=1$.
The Trace norm determines $d_{1}$ as a measure of distance:
\begin{equation*}
d_{1}(\rho,\sigma)= ||\rho-\sigma||
\end{equation*}
With the previous considerations, in the next section we will show the expressions of the three quantum correlations.
\subsection{Entanglement}
\label{sec:entanglement}
For the Entanglement we have:
\begin{equation*}
E(\rho)=\min_{\rho_{sep} \in \mathcal{S}} \frac{1}{2}||\rho-\rho_{sep}||_1=\min_{\rho_{sep} \in \mathcal{S}} \frac{1}{2}\mathrm{Tr}|\rho-\rho_{sep}|,
\end{equation*}
where $\mathcal{S}$ is the set of separable states.
For the particular case of X states, in a previous work \cite{Feldman17} we found that:
\begin{equation}\label{Eq:EntrelazamientoTN}
E(\rho)=2 \max \{0,|\rho_{14}|-\sqrt{\rho_{22}\rho_{33}},|\rho_{32}|-\sqrt{\rho_{11}\rho_{44}}\}
\end{equation}
which is the Concurrence of the quantum state \cite{YuEberlyConcX}.
\subsection{Discord}
\label{sec:discord}
We will use the expression of the Discord for the Bell-diagonal states developed in \cite{GDBell1,GDBell2}:
\begin{equation}
D(\rho_{BD})=\frac{r_{int}}{2},
\label{Eq:DiscordiaParaBell}
\end{equation}
where $r_{int}$ is the intermediate value of the $|r_i|$.
\subsection{Coherence}
\label{sec:coherence}
Applying the geometric definition for the Coherence, its expression is:
\begin{equation*}
C(\rho)=\min_{\rho_{inc} \in \mathcal{I}} ||\rho-\rho_{inc}||=\sum_{\substack{i,j \\ i \ne j}} |\rho_{i,j}|.
\end{equation*}
where we used the fact that the nearest incoherent state is represented by the same $\rho$ matrix, but with all the off-diagonal elements set to zero \cite{Baumgratz}.
For the particular case of the X states, the Coherence is:
\begin{equation*}
C=|\rho_{14}|+|\rho_{32}|+|\rho_{41}|+|\rho_{23}|,
\end{equation*}
which, under the assumption that all the entries of the matrix are real ($\rho_{14}=\rho_{41}$ and $\rho_{32}=\rho_{23}$), takes the following expression:
\begin{equation} \label{CoherenciaTN}
C=2(|\rho_{14}|+|\rho_{32}|)
\end{equation}
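As a simple illustration, these three quantities can be evaluated directly from the entries of the density matrix; the following Python sketch transcribes eqs.(\ref{Eq:EntrelazamientoTN}), (\ref{Eq:DiscordiaParaBell}) and (\ref{CoherenciaTN}) as written (the Discord function is the Bell-diagonal one, so it should only be applied to states of the form (\ref{Eq:EstadosBell})):
\begin{verbatim}
import numpy as np

def entanglement_X(rho):
    """Concurrence of an X state, eq. (EntrelazamientoTN)."""
    return 2 * max(0.0,
                   abs(rho[0, 3]) - np.sqrt(rho[1, 1].real * rho[2, 2].real),
                   abs(rho[2, 1]) - np.sqrt(rho[0, 0].real * rho[3, 3].real))

def coherence_X(rho):
    """Coherence of an X state with real entries, eq. (CoherenciaTN)."""
    return 2 * (abs(rho[0, 3]) + abs(rho[2, 1]))

def discord_bell_diagonal(rho):
    """Trace-norm Discord of a Bell-diagonal state: intermediate |r_i| over 2."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    r = [np.trace(rho @ np.kron(s, s)).real for s in (sx, sy, sz)]
    return sorted(map(abs, r))[1] / 2
\end{verbatim}
The indices follow the usual ordering of the computational basis, so that, e.g., $\rho_{14}$ corresponds to \texttt{rho[0, 3]}.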
\section{Dynamical Relations for Bell-diagonal states}
\label{sec:Belldiag}
We will consider that our system evolves with each qubit in contact with its own environment. The evolution can be described by means of the Kraus operators \cite{nielsen2000quantum,Preskill} according to:
\begin{equation*}\label{Eq:Kraus_Locales}
\rho'_{AB}=\sum_{i,j} \big (M_i^A \otimes M_j^B \big) \rho_{AB} \big (M_i^A \otimes M_j^B \big)^\dagger,
\end{equation*}
where $M_i^A, M_j^B$ are the Kraus operators acting on each qubit. By means of this tool, we will study the evolution of the two-qubit system under the action of several known quantum channels.
We start by writing the expressions of the three correlations using the Trace Norm; the expression of the Entanglement can be obtained by putting
eq.(\ref{Eq:CambioDeVariable}) into the expression of the Concurrence, eq.(\ref{Eq:EntrelazamientoTN}):
\begin{equation*}
E(\rho)=\frac{1}{2} \max [0 \ , |r_1 \pm r_2| - (1\pm r_3) ]
\end{equation*}
The expression of the Discord, eq.(\ref{Eq:DiscordiaParaBell}):
\begin{equation*}\label{Eq:Discordia_TN}
D(\rho)=\textrm{int}[|r_1| \ ,|r_2| \ ,|r_3|]
\end{equation*}
And finally, for the Coherence we put eq.(\ref{Eq:CambioDeVariable}) in the expression eq.(\ref{CoherenciaTN}):
\begin{equation*}
C(\rho)=\frac{1}{2}(|r_1-r_2|+|r_1+r_2|)=\max(|r_1|,|r_2|)
\end{equation*}
In the following subsections we will study the dynamical relations between Entanglement, Discord and Coherence when the initial Bell-diagonal states evolve under the action of three known quantum channels: Bit Flip, Phase Damping and Depolarizing.
\subsection{Bit Flip}
\label{sec:bitflipBell}
The Bit Flip channel models the simplest type of error that a qubit can suffer: it flips the state from $\ket{0}$ to $\ket{1}$ and vice versa, with probability $p$.
The Kraus operators corresponding to this channel are:
\begin{equation*}
M_1^{\textit{bf}}= \sqrt{1-p} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \textrm{and} \quad M_2^{\textit{bf}}=\sqrt{p} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\end{equation*}
Applying the Kraus operators to the initial Bell-diagonal state,
it is easy to see that all the information about the evolution is contained in the correlation vector:
\begin{equation*}
\vec{r}'=r_1\hat{i}+r_2(p-1)^2\hat{j}+r_3(p-1)^2\hat{k}
\end{equation*}
The Entanglement of the evolved state is:
\begin{equation}
E(p)=\frac{1}{2} \max [0 \ , |r_1 \pm r_2(p-1)^2| - |1\pm r_3(p-1)^2| ]
\label{EntBF}
\end{equation}
The expression of the Discord:
\begin{equation}
D(p)=\textrm{int}[|r_1| \ ,|r_2|(p-1)^2 \ ,|r_3|(p-1)^2]
\label{DiscBF}
\end{equation}
And finally the Coherence:
\begin{equation}
\begin{split}
C(p)=\frac{1}{2}(|r_1-r_2(p-1)^2|+|r_1+r_2(p-1)^2|)= \\
\max (|r_1|,|r_2|(p-1)^2)
\end{split}
\label{CoBF}
\end{equation}
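A minimal Python sketch that simply evaluates eqs.(\ref{EntBF}), (\ref{DiscBF}) and (\ref{CoBF}) as written, for the same initial values used in the figures (the grid in $p$ is an arbitrary choice), can be used to reproduce the curves discussed below:
\begin{verbatim}
import numpy as np

r1, r2, r3 = -0.3, 0.6, 0.4          # initial correlation vector of the figures
p = np.linspace(0.0, 1.0, 201)
k = (p - 1) ** 2                     # common factor in eqs. (EntBF)-(CoBF)

E = 0.5 * np.maximum.reduce([np.zeros_like(p),
                             np.abs(r1 + r2 * k) - np.abs(1 + r3 * k),
                             np.abs(r1 - r2 * k) - np.abs(1 - r3 * k)])
D = np.sort(np.abs(np.stack([r1 * np.ones_like(p), r2 * k, r3 * k])), axis=0)[1]
C = np.maximum(np.abs(r1), np.abs(r2) * k)

# plotting E against C gives the blue points of fig. (fig:entcohe); the closed
# form eq. (ECBF) is only meant to hold where |r1| < |r2|(p-1)^2, i.e. p < 0.29.
\end{verbatim}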
\subsubsection{Entanglement and Coherence}
In order to express the Entanglement as a function of Coherence, we have to distinguish two alternatives: in the region of the space where $|r_1|>|r_2|(p-1)^2$, Entanglement and Coherence are independent. On the other hand, where $|r_1|<|r_2|(p-1)^2$, we can express the Entanglement as a function of the Coherence:
\begin{equation}
E=\max \Big[0,\frac{1}{2}\big(|r_1 \pm \mathrm{sign}(r_2)\,C|- |1\pm \frac{r_3}{r_2}C|\big)\Big]
\label{ECBF}
\end{equation}
We show in fig.(\ref{fig:entcohe}) this relation for $r_1=-0.3$, $r_2=0.6$ and $r_3=0.4$. The blue points are the plot of Entanglement, eq.(\ref{EntBF}), versus Coherence, eq.(\ref{CoBF}), and the red line corresponds to eq.(\ref{ECBF}). Taking into account what we explained above, this equation is valid for $0< p < 0.29$. Note that in the figure the implicit parameter $p$ grows backwards.
\begin{figure}
\caption{Entanglement as a function of the Coherence, under the action of the Bit Flip channel. $r_1=-0.3,r_2=0.6,r_3=0.4$.}
\label{fig:entcohe}
\end{figure}
\subsubsection{Discord and Coherence}
Looking for a functional relation between Discord and Coherence, we find that it is enough to analyze the initial values of the correlation vector. It is easy to see that this is only possible when $|r_{1}|=\min{(|r_{1}|,|r_{2}|,|r_{3}|)}$.
Starting with $|r_1|<|r_2|<|r_3|$, as $\vec{r}$ evolves, we find two regions: one where $D=C$ (when $|r_1|<|r_2| (p-1)^2$), and another (when $|r_2| (p-1)^2<|r_1|$) where Discord decays quadratically, $D=|r_3|(p-1)^2$, independently of Coherence, which remains constant, $C=|r_1|$.
When initially we have $|r_1|<|r_3|<|r_2|$ we find three regions: one where $D=\frac{|r_3|}{|r_2|}C$ (when $|r_1|<|r_3| (p-1)^2$), a second region where Discord is constant, $D=|r_1|$ independent of Coherence (when $|r_3|(p-1)^2<|r_1| $), and a third region where the Discord decays quadratically, $D=|r_2|(p-1)^2$ while the Coherence remains constant, $C=|r_1|$ (when $|r_3|(p-1)^2<|r_2| (p-1)^2<|r_1| $).
We show in fig.(\ref{fig:disccohe}) one example for the case $|r_1|<|r_3|<|r_2|$. The red line corresponds to the equation of Discord as a function of Coherence; the blue points are the plot of Discord, eq.(\ref{DiscBF}), versus Coherence, eq.(\ref{CoBF}).
\begin{figure}
\caption{Relation between Discord and Coherence, under the action of the Bit Flip channel. $r_1=-0.3,r_2=0.6,r_3=0.4$. Note that for Coherence $=0.3$, $p= 0.293$; as $p$ continues to grow, Coherence remains constant and Discord decreases quadratically regardless of Coherence. Red line and blue points explained in text.}
\label{fig:disccohe}
\end{figure}
\subsection{Phase Damping}
\label{sec:phasedampingBell}
This channel describes a type of noise that is completely quantum: it is a process where quantum information is lost without loss of energy \cite{nielsen2000quantum}.
The Kraus operators corresponding to this channel are:
\begin{multline*}\label{Eq:Phase_Flip_Kraus}
M_0^{\textit{pd}}=\sqrt{1-p} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} ,\quad M_1^{\textit{pd}}=\sqrt{p}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} ,\\
M_2^{\textit{pd}}=\sqrt{p}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\end{multline*}
Applying the Kraus operators to the initial Bell-diagonal state we obtain the expression of the evolved correlation vector:
\begin{equation*}\label{Eq:R_PD}
\vec{r}'=r_1(p-1)^2\hat{i}+r_2(p-1)^2\hat{j}+r_3\hat{k},
\end{equation*}
which allows us to calculate the expressions of the three correlations under this channel.
For the Entanglement we have:
\begin{equation}
E(p)=\frac{1}{2} \max [0 \ , |r_1 \pm r_2|(p-1)^2 - |1\pm r_3| ]
\label{EntPD}
\end{equation}
The expression of the Discord is:
\begin{equation}\label{Eq:Discordia_TN_PD}
D(p)=\textrm{int}\Big[|r_1|(p-1)^2 \ ,|r_2|(p-1)^2 \ ,|r_3|\Big]
\end{equation}
The Coherence has the following expression:
\begin{equation}\label{Eq:Coherencia_TN_P}
\begin{split}
C(p)=\frac{1}{2}(|r_1-r_2|+|r_1+r_2|)(p-1)^2= \\
\max (|r_1|, \ |r_2|) (p-1)^2
\end{split}
\end{equation}
\subsubsection{Entanglement and Coherence}
For this evolution, we can always find a relation between Entanglement and Coherence:
\begin{equation}
E=\max \Big[0,\frac{C|r_1 \pm r_2|}{2\max(|r_1|,|r_2|)} - \frac{|1 \pm r_3|}{2} \Big]
\label{EnCoPD}
\end{equation}
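This relation can be verified numerically; a brief Python sketch (same parameter values as in the figure, arbitrary $p$ grid) compares eq.(\ref{EnCoPD}), evaluated at the Coherence of eq.(\ref{Eq:Coherencia_TN_P}), with the direct expression eq.(\ref{EntPD}):
\begin{verbatim}
import numpy as np

r1, r2, r3 = -0.7, 0.5, 0.3
p = np.linspace(0.0, 1.0, 101)
k = (p - 1) ** 2
M = max(abs(r1), abs(r2))

C = M * k                                             # eq. (Coherencia_TN_P)
E_direct = 0.5 * np.maximum.reduce([np.zeros_like(p),
                                    abs(r1 + r2) * k - abs(1 + r3),
                                    abs(r1 - r2) * k - abs(1 - r3)])
E_from_C = np.maximum.reduce([np.zeros_like(p),
                              C * abs(r1 + r2) / (2 * M) - abs(1 + r3) / 2,
                              C * abs(r1 - r2) / (2 * M) - abs(1 - r3) / 2])
assert np.allclose(E_direct, E_from_C)
\end{verbatim}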
\begin{figure}
\caption{Entanglement as a function of Coherence under the phase damping channel.$\, r_1=-0.7,r_2=0.5$ and $r_3=0.3$. }
\label{fig:entcohepd}
\end{figure}
We show this relation in Fig.(\ref{fig:entcohepd}): blue dots are the plot of Entanglement, eq.(\ref{EntPD}), versus Coherence, eq.(\ref{Eq:Coherencia_TN_P}). The red line corresponds to
eq.(\ref{EnCoPD}).
\subsubsection{Discord and Coherence}
Taking into consideration the expression for the Discord:
\begin{equation*}
D=\textrm{int}(|r_1|(p-1)^2,|r_2|(p-1)^2,|r_3|)
\end{equation*}
we see that it is possible to relate Discord with Coherence if
$|r_1|$ or $|r_2|$ is the intermediate value. If the intermediate value is $|r_3|$, the Discord will be constant and independent of the Coherence.
When the intermediate value is $|r_1|$ or $|r_2|$, the expression of the Discord is:
\begin{equation}\label{Eq:Vinculo_CD_TNpd}
D=\frac{ |r_i| C}{\max(|r_1|,|r_2|)},
\end{equation}
where $r_i=r_1,r_2$, depending on which is the intermediate value. This relation is shown in fig.(\ref{fig:discohepd}), where we can clearly distinguish 3 regions, each one corresponding to $|r_1|(p-1)^2$, $|r_2|(p-1)^2$ or $|r_3|$ being the intermediate value. The red line corresponds to the equation of Discord as a function of Coherence, eq.(\ref{Eq:Vinculo_CD_TNpd}); the blue points are the plot of Discord, eq.(\ref{Eq:Discordia_TN_PD}), versus Coherence, eq.(\ref{Eq:Coherencia_TN_P}).
\begin{figure}
\caption{Discord as a function of Coherence under the phase damping channel.$\, r_1=-0.7,r_2=0.5$ and $r_3=0.3$. }
\label{fig:discohepd}
\end{figure}
\subsection{Depolarizing}
\label{sec:depolarizingBell}
The depolarizing channel is a well-known channel; we can describe it by explaining its action on a qubit: with probability (1-p) the qubit remains unaffected and with probability p it is depolarized \cite{nielsen2000quantum}.
\begin{multline*}
M_0^d=\sqrt{1-p} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad M_1^d=\sqrt{\frac{p}{3}}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \\ \qquad M_2^d=\sqrt{\frac{p}{3}} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad M_3^d=\sqrt{\frac{p}{3}} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\end{multline*}
The expression of the evolved correlation vector under the depolarizing channel is:
\begin{equation*}\label{Eq:R_BF}
\vec{r}'=r_1(p-1)^2\hat{i}+r_2(p-1)^2\hat{j}+r_3(p-1)^2\hat{k}
\end{equation*}
The Entanglement, Discord and Coherence are expressed as:
\begin{equation}
E(\rho) = \frac{1}{2} \max [0, |r_1\pm r_2|(p-1)^2 - |1\pm r_3(p-1)^2|]
\label{EntDepo}
\end{equation}
\begin{equation}
D(\rho) = \textrm{int}[|r_1| \ ,|r_2| \ ,|r_3|](p-1)^2,
\label{DiscDepo}
\end{equation}
\begin{equation}
C(\rho) = \max(|r_1|,|r_2|)(p-1)^2.
\label{CoheDepo}
\end{equation}
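These expressions can again be evaluated directly; note in particular that the ratio $D/C$ does not depend on $p$, which is the linear Discord--Coherence relation discussed below. A short Python check (arbitrary admissible initial values):
\begin{verbatim}
import numpy as np

r1, r2, r3 = -0.7, 0.5, 0.3
p = np.linspace(0.0, 0.99, 100)       # p = 1 excluded to avoid 0/0 in the ratio
k = (p - 1) ** 2

D = sorted(map(abs, (r1, r2, r3)))[1] * k      # eq. (DiscDepo)
C = max(abs(r1), abs(r2)) * k                  # eq. (CoheDepo)
assert np.allclose(D / C,
                   sorted(map(abs, (r1, r2, r3)))[1] / max(abs(r1), abs(r2)))
\end{verbatim}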
\subsubsection{Entanglement and Coherence}
Also in this case we can easily write Entanglement as a function of Coherence.
\begin{equation*}
E=\frac{1}{2}\max \Big[0,\frac{(|r_1\pm r_2|)C}{\max(|r_1|,|r_2|)} - |1\pm \frac{r_3 C}{\max(|r_1|,|r_2|)}| \Big],
\end{equation*}
and again we find a linear relation between Entanglement and Coherence.
This relation is shown in fig.(\ref{fig:entcohedep}):
\begin{figure}
\caption{Entanglement as a function of Coherence under the depolarizing channel.$\, r_1=-0.7,r_2=0.5$ and $r_3=0.3$. }
\label{fig:entcohedep}
\end{figure}
\subsubsection{Discord and Coherence}
As for the previous channel, we can express Discord as a function of Coherence:
\begin{equation*}
D=\frac{C |r_i|}{\max(|r_1|,|r_2|)},
\end{equation*}
where $r_i=r_1,r_2,r_3$, depending on which is the intermediate value. The difference with the previous channel is that for the Depolarizing channel the intermediate value remains the same throughout the evolution, because the three components of the correlation vector evolve in the same way. This relation is shown in fig.(\ref{fig:discohedep}).
\begin{figure}
\caption{Discord as a function of Coherence under the depolarizing channel.$\, r_1=-0.7,r_2=0.5$ and $r_3=0.3$. }
\label{fig:discohedep}
\end{figure}
\section{Study of X states}
\label{sec:Xstates}
In this section we will work with the so-called X states, whose density matrix contains non-zero elements only along the main diagonal and the anti-diagonal \cite{YuEberlyConcX}. These states, which include the Bell-diagonal states we previously studied, are described by the following expression:
\begin{equation*}
\rho = \frac{1}{4}( \mathbb{I}_4+s \cdot \sigma_3 \otimes \mathbb{I} +\mathbb{I} \otimes c \cdot \sigma_3 + \sum_{j=1}^{3} r_j \sigma_j \otimes \sigma_j)
\end{equation*}
Unlike the Bell-diagonal states, which only need the three parameters $(r_1,r_2,r_3)$ to be described,
the X states require two additional parameters: $s$ and $c$.
In the following it will be helpful to express the elements of the density matrix, $\rho_{i,j}$, as functions of the parameters $r_i$, $s$ and $c$:
\begin{gather*}
\begin{aligned}
&\rho_{11}=\frac{1}{4}(1+r_3+s+c) \quad \rho_{22}=\frac{1}{4}(1-r_3+s-c)&\\
&\rho_{33}=\frac{1}{4}(1-r_3-s+c) \quad \rho_{44}=\frac{1}{4}(1+r_3-s-c)&\\
&\rho_{14}=\rho_{41}=\frac{1}{4}(r_1-r_2) \quad \rho_{23}=\rho_{32}=\frac{1}{4}(r_1+r_2)&\\
\label{Eq:CambioDeVariableX}
\end{aligned}
\end{gather*}
A geometric 3D representation of the Bell-diagonal states is possible since they depend only on the three components of $\vec{r}$. The X states require five parameters to be described; this makes it difficult to study the region of existence, since to visualize it in a three-dimensional diagram it is necessary to fix at least two of them.
\subsection{Regions of existence for the X states}
In order to perform this analysis we use the positivity condition of $\rho$, which, expressed in the parameters $\vec{r}$, $s$ and $c$, reads:
\begin{gather*}\label{Eq:RegionExistencia}
\begin{aligned}
&|r_1-r_2| \le \sqrt{(1+r_3)^2-(s+c)^2}&\\
&|r_1+r_2| \le \sqrt{(1-r_3)^2-(s-c)^2}&
\end{aligned}
\end{gather*}
These two inequalities determine the volume of the possible X states.
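These conditions can be cross-checked numerically. The following Python sketch (random parameters, purely illustrative) compares the two inequalities, written in squared form to avoid square roots of negative numbers, with positive semidefiniteness of the X matrix built from $(\vec{r},s,c)$:
\begin{verbatim}
import numpy as np

def x_state(r1, r2, r3, s, c):
    """X-state density matrix from the parameters (r1, r2, r3, s, c)."""
    rho = np.zeros((4, 4))
    rho[0, 0] = (1 + r3 + s + c) / 4
    rho[1, 1] = (1 - r3 + s - c) / 4
    rho[2, 2] = (1 - r3 - s + c) / 4
    rho[3, 3] = (1 + r3 - s - c) / 4
    rho[0, 3] = rho[3, 0] = (r1 - r2) / 4
    rho[1, 2] = rho[2, 1] = (r1 + r2) / 4
    return rho

rng = np.random.default_rng(0)
for _ in range(5000):
    r1, r2, r3, s, c = rng.uniform(-1, 1, 5)
    physical = np.all(np.linalg.eigvalsh(x_state(r1, r2, r3, s, c)) >= -1e-12)
    inequalities = ((r1 - r2) ** 2 <= (1 + r3) ** 2 - (s + c) ** 2 and
                    (r1 + r2) ** 2 <= (1 - r3) ** 2 - (s - c) ** 2)
    assert physical == inequalities
\end{verbatim}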
By changing the values of $s$ and $c$ we can see how the region of existence is modified.
In figure (\ref{fig:tetradeformed}), we can see how the tetrahedron that determines the region of existence of the Bell-diagonal states is deformed as the values of $s$ and $c$ change.
\begin{figure}
\caption{In color red the deformed tetrahedron for different values of $s$ and $c$. The original tetrahedron was kept for comparison.}
\label{fig:tetradeformed}
\end{figure}
Just as the tetrahedron that defines the allowed states is deformed, the octahedron defined by the states of zero entanglement also undergoes deformation.
In figure (\ref{fig:tetradeformed2}), we show the transformation of the octahedron as $s$ and $c$ change.
\begin{figure}
\caption{In color green the deformed octahedron of separable states for different values of $s$ and $c$. The original octahedron was kept for comparison. }
\label{fig:tetradeformed2}
\end{figure}
\subsection{Analytical expressions of correlations for X states}
We start with the Entanglement, which as we saw in eq.(\ref{Eq:EntrelazamientoTN}), for X states coincides with the expression of the Concurrence:
\begin{equation}\label{Eq:EntrelazamientoXtn}
E=\frac{1}{2} \max \big[0,|r_1 \pm r_2|-\sqrt{(1 \pm r_3)^2-(s \pm c)^2}\big]
\end{equation}
For the Discord, we will use the expression provided by \cite{Giocannetti_1ndA}:
\begin{equation}\label{Eq:DiscordiaXtn}
D(\rho)=\begin{cases}
\frac{|r_1|}{2} & \text{if $\Delta >0$},\\
\frac{1}{2}\sqrt{\frac{r_1^2 \max (r_3^2,r_2^2+s^2)-r_2^2 \min (r_3^2,r_1^2)}{\max (r_3^2,r_2^2+s^2)-\min (r_3^2,r_1^2)+r_1^2-r_2^2}} & \text{if $\Delta \leq 0$},
\end{cases}
\end{equation}
where $\Delta=r_3^2-r_1^2-s^2$.
Finally, the expression of the Coherence is:
\begin{equation*}\label{Eq:CoherenciaXtn}
C=\max(|r_1|,|r_2|)
\end{equation*}
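A hedged Python transcription of these three closed forms, convenient for reproducing the color maps below (the guards inside the square roots are only there to protect against parameter choices outside the region of existence):
\begin{verbatim}
import numpy as np

def entanglement_X(r1, r2, r3, s, c):
    """Eq. (EntrelazamientoXtn); both sign choices enter through the max."""
    return 0.5 * max(0.0,
                     abs(r1 + r2) - np.sqrt(max((1 + r3) ** 2 - (s + c) ** 2, 0.0)),
                     abs(r1 - r2) - np.sqrt(max((1 - r3) ** 2 - (s - c) ** 2, 0.0)))

def discord_X(r1, r2, r3, s, c):
    """Eq. (DiscordiaXtn), with Delta = r3^2 - r1^2 - s^2."""
    if r3 ** 2 - r1 ** 2 - s ** 2 > 0:
        return abs(r1) / 2
    mx = max(r3 ** 2, r2 ** 2 + s ** 2)
    mn = min(r3 ** 2, r1 ** 2)
    return 0.5 * np.sqrt((r1 ** 2 * mx - r2 ** 2 * mn) / (mx - mn + r1 ** 2 - r2 ** 2))

def coherence_X(r1, r2, r3, s, c):
    """Eq. (CoherenciaXtn); it does not depend on s and c."""
    return max(abs(r1), abs(r2))
\end{verbatim}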
\subsection{Regions of Discord}
\label{ssec:regionsofDiscord}
For X states, Discord can a priori have many different expressions in the different regions, depending on which are the maxima and minima in eq.(\ref{Eq:DiscordiaXtn}) and on which of the relations $r_3^2<r_1^2+s^2$ or $r_3^2 \ge r_1^2+s^2$ is verified. Taking this into consideration, we find five regions:
\begin{enumerate}
\item $ r_3^2 > A$
\item $\left(r_3^2 \leq A \right ) \cap \left(\max(r_3^2,B)=r_3^2\right ) \cap \left( \min(r_1^2,r_3^2)=r_3^2\right)$
\item $\left(r_3^2 \leq A\right ) \cap \left(\max(r_3^2,B)=r_3^2\right) \cap \left(\min(r_1^2,r_3^2)=r_1^2 \right)$
\item $\left(r_3^2 \leq A\right) \cap \left(\max(r_3^2,B)=B\right) \cap \left(\min(r_1^2,r_3^2)=r_3^2\right)$
\item $\left(r_3^2 \leq A\right) \cap \left(\max(r_3^2,B)=B\right) \cap \left(\min(r_1^2,r_3^2)=r_1^2\right)$
\end{enumerate}
where $A=r_1^2+s^2$ and $B=r_2^2+s^2$.
By substituting these results in eq.(\ref{Eq:DiscordiaXtn}) we find that in regions (1), (3) and (5) Discord has the same expression. Therefore we have to consider three regions for Discord:
\textbf{Region 1}, defined by: $|r_3|>|r_1|$ where the expression of the Discord is:
\begin{equation*}
D=\frac{|r_1|}{2},
\end{equation*}
and we show it in fig.(\ref{fig:Region1})
\begin{figure}
\caption{In red, the region of existence, in blue Region 1 with $s=0.2$.}
\label{fig:Region1}
\end{figure}
\textbf{Region 2}, defined by: $\left(|r_3| \leq |r_1| \right) \cap \left(r_3^2 > r_2^2+s^2\right)$
where the expression of Discord is,
\begin{equation}
D=\frac{|r_3|}{2},
\end{equation}
and it is shown in fig.~\ref{fig:Region2_2}.
\begin{figure}
\caption{In blue Region 2, with $s=0.2$.}
\label{fig:Region2_2}
\end{figure}
\textbf{Region 3} defined by: $\left(|r_3| \leq |r_1| \right) \cap \left(r_3^2 \leq r_2^2+s^2\right)$
where the expression of Discord is,
\begin{equation}
D=\frac{1}{2}\sqrt{\frac{r_1^2r_2^2-r_2^2r_3^2+r_1^2s^2}{r_1^2-r_3^2+s^2}},
\end{equation}
and in fig.\ref{fig:Region4_1} we show it.
\begin{figure}
\caption{In blue Region 3, with $s=0.2$.}
\label{fig:Region4_1}
\end{figure}
\subsection{Analysis in the $s-c$ plane}
The previous analysis was performed leaving the values of the parameters $s$ and $c$ fixed and varying $\vec{r}$. In this section we will analyze how the regions of existence change and how the correlations behave when we fix $\vec{r}$ and vary the parameters $s$ and $c$.
In the following figures \ref{fig:figENT} and \ref{fig:figDIS} we show the changes in the regions of existence and the behavior of Entanglement and Discord by varying $r_2$, on the $s-c$ plane.
As can be seen, the region of existence contracts until it forms a perfect square at $r_2 = 0$, and then stretches again until it becomes a line and finally disappears.
The images are shown with $r_1$ and $r_3$ fixed so that it is easier to see how the region of existence is modified as $r_2$ is changed. Fixing $r_2$ and $r_3$ and changing $r_1$, or fixing $r_1$ and $r_2$ and changing $r_3$, gives similar results.
\begin{figure}
\caption{Color maps of the Entanglement in the $s-c$ plane. The white region indicates the region of non-physical states.}
\label{fig:figENT}
\end{figure}
The Discord pattern is simple, because it does not depend on the parameter $c$ and grows with $s$. On the other hand, Entanglement grows as $|s \pm c|$ increases, since the term $\sqrt{(1 \pm r_3)^2 -(s\pm c)^2}$ subtracted in eq.(\ref{Eq:EntrelazamientoXtn}) decreases, showing growth in the longitudinal direction.
Coherence was not included in this analysis since it is constant in this plane, because it is independent of $s$ and $c$.
\begin{figure}
\caption{Color maps of the Discord in the $s-c$ plane. The white region indicates the region of non-physical states.}
\label{fig:figDIS}
\end{figure}
\section{Evolution of X states under noise channels}
\label{sec:evolX}
The description of the evolution of the X states is given by the evolution of the correlation vector, as in the case of the Bell-diagonal states, plus the evolution of the parameters $s$ and $c$. The only exception is made by the evolution under Phase Damping, in which the parameters $s$ and $c$ are not affected.
\subsection{Phase Damping}
\label{sec:phasedampingX}
The evolution of the correlation vector $\vec{r}$ under Phase Damping is given by:
\begin{equation*}\label{Eq:R_PD}
\vec{r}'=r_1(p-1)^2\hat{i}+r_2(p-1)^2\hat{j}+r_3\hat{k}
\end{equation*}
\subsubsection{Entanglement}
As we have already seen for the X states, each pair of values of $s$ and $c$ determines the zone of existence and the zero Entanglement octahedron. The expression of Entanglement is:
\begin{multline*}\label{Eq:EntrelazamientoXPD}
E=\frac{1}{2} \max \big[0,|r_1 \pm r_2|(p-1)^2- \\ \sqrt{(1 \pm r_3)^2 -(s \pm c)^2}\big]
\end{multline*}
\subsubsection{Coherence}
The Coherence of an X state that evolves under Phase Damping is simple: it decreases monotonically and reaches the value $0$ when $p = 1$:
\begin{equation*}\label{Eq:CoherenciaXpd}
C=\max(|r_1|,|r_2|)(p-1)^2
\end{equation*}
\subsubsection{Discord}
When an X state evolves under the Phase Damping channel, the Discord is given by eq.(\ref{Eq:DiscordiaXtn}):
\begin{equation*}
D(\rho)=\begin{cases}
\frac{|r_1|}{2}(p-1)^2 & \text{if $\Delta > 0$}.\\
\frac{1}{2} \sqrt{\frac{A \max(r_3^2,B+s^2)-B \min(r_3^2,A)}{\max(r_3^2,B+s^2)- \min(r_3^2,A)+A-B}} & \text{if $\Delta \leq 0$},
\end{cases}
\end{equation*}
where, $\Delta=r_3^2-s^2-A $, $A=r_1^2(p-1)^4$ and $B=r_2^2(p-1)^4.$
We showed in subsection \ref{ssec:regionsofDiscord} that there are three different
regions for the Discord. Depending on the initial value of the vector $\vec{r}$ and on the parameters $s$ and $c$, the state in its evolution will cross some of these regions, as shown in figure \ref{Fig:EvoPDX}. Each colored line represents a possible evolution of an X state, starting from different initial values of $\vec{r}$ and of the parameters $s$ and $c$.
\begin{figure}
\caption{Cut according to the plane $r_1-r_2 = 0.2$. Region 1 is purple, Region 2 is green, and Region 3 is orange. The yellow line shows an evolution that cross through all three regions. In blue the state always remains in Region 3. In red it jumps from Region 3 to Region 1. In pink from Region 2 to Region 1. In White, it always remains within Region 1.}
\label{Fig:EvoPDX}
\end{figure}
The figure shows how the state evolves depending on the initial conditions of $\vec{r}$. Evolution can take 5 possible paths: (1) to go through the 3 regions in descending order, Region 3, Region 2 and Region 1 (yellow path), (2) to go from Region 3 directly to Region 1 (red path) , (3) always stay in Region 3 (blue path), (4) move from Region 2 to Region 1 (pink path) or (5) always stay in Region 1 (white path). Regions 1, 2 and 3 can be written as the following inequalities:
\begin{multline*}\label{Eq:RegionesDiscordPD}
\text{Region 1: } |r_3|>|r_1|(p-1)^2 \\
\text{Region 2: } \left(|r_3|\leq|r_1|(p-1)^2\right) \cap \left(r_3^2 > r_2^2(p-1)^4+s^2\right) \\
\text{Region 3: } \left(|r_3|\leq|r_1|(p-1)^2\right) \cap \left( r_3^2 \leq r_2^2(p-1)^4+s^2\right)
\end{multline*}
Therefore the quantum state will remain in Region 1 as long as it is verified that:
\begin{equation*}\label{Eq:Region1p}
p>1-\sqrt{\frac{|r_3|}{|r_1|}}=p_{1},
\end{equation*}
and the expression of the Discord in this region is:
\begin{equation}
D=\frac{|r_1|}{2}(p-1)^2
\end{equation}
Note that if $r_1=0$ the state always stays in Region 1. \\
The quantum state will remain in Region 2 as long as it is verified that:
\begin{equation}\label{Eq:condicionesR2}
\begin{cases}
p \leq1-\sqrt{\frac{|r_3|}{|r_1|}}=p_1\\p>1-\sqrt[4]{\frac{r_3^2-s^2}{r_2^2}}=p_2,
\end{cases}
\end{equation}
and the expression of the Discord in this region is:
\begin{equation*}
D=\frac{|r_3|}{2}.
\end{equation*}
Note that if $|r_3|<|s|$, the state cannot be in Region 2.\\
Finally, it will remain in Region 3 as long as:
\begin{equation}
\begin{cases}
p \leq 1-\sqrt{\frac{|r_3|}{|r_1|}}=p_1\\ p<1-\sqrt[4]{\frac{r_3^2-s^2}{r_2^2}}=p_2\\
\end{cases}
\end{equation}
The expression of the Discord in this region is:
\begin{equation*}\label{Eq:DiscordiaPDX}
D=\frac{(p-1)^2}{2}\sqrt{\frac{r_1^2r_2^2(p-1)^4-r_2^2r_3^2+r_1^2s^2}{r_1^2(p-1)^4-r_3^2+s^2}}
\end{equation*}
If $p<p_1$ and $r_2=0$, the state will be in Region 2 or Region 3 depending on whether $|r_3|>|s|$ or $|r_3|\leq |s|$, respectively.
Looking at the figure, it is clear that the state in its evolution does not necessarily cross the three regions. We will clarify this point by giving some examples.
Let's consider an initial state defined by the following values of its parameters: $r_1=-0.6, r_2=0.4, r_3=0.3, s=0.2, c=0.3$. For these values, the state is in Region 3 and
$p_1=0.29$ and $p_2=0.25$, so $p_1>p_2$. This implies that the quantum state remains in Region 3 when $p<p_2$, is in Region 2 when $p_2<p \leq p_1$, and is in Region 1 for $p>p_1$. This behavior is shown by the yellow path in fig.(\ref{Fig:EvoPDX}).
Let's consider a second example with initial values $ r_1 = 0.5, r_2 = -0.2, r_3 = 0.3, s = 0.2, c = 0.3 $. In this case $ p_1 \approx 0.23 $ and $ p_2 \approx -0.06 $; $ p_2 $ is excluded since $ p $ can never take negative values. This implies that the state will never be in Region 3, since $ p> p_2\ \forall p $. The state will be in Region 2 for $ 0 <p \leq p_1 $ and in Region 1 for $ p> p_1 $. This is illustrated by the pink path.
As a last example, consider the following initial conditions: $ r_1 = -0.6, r_2 = 0.4, r_3 = 0.7, s = 0.2, c = 0.3 $. In this situation the state is initially in Region 1 and $ p_1=-0.08$, so $ p> p_1\, \forall p $ and the state always remains in Region 1, as can be seen in the white path of figure \ref{Fig:EvoPDX}.
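This bookkeeping is easy to script. A minimal Python sketch, transcribing the inequalities above as written (the probe values of $p$ are arbitrary), reproduces the first example:
\begin{verbatim}
import numpy as np

def pd_region(r1, r2, r3, s, p):
    """Discord region of an X state evolving under Phase Damping."""
    k2 = (p - 1) ** 2
    if abs(r3) > abs(r1) * k2:
        return 1
    if r3 ** 2 > r2 ** 2 * k2 ** 2 + s ** 2:
        return 2
    return 3

r1, r2, r3, s = -0.6, 0.4, 0.3, 0.2                 # first example above
p1 = 1 - np.sqrt(abs(r3) / abs(r1))                 # ~ 0.29
p2 = 1 - ((r3 ** 2 - s ** 2) / r2 ** 2) ** 0.25     # ~ 0.25
print(p1, p2)
print([pd_region(r1, r2, r3, s, p) for p in (0.1, 0.27, 0.5)])   # -> [3, 2, 1]
\end{verbatim}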
\subsubsection{Entanglement and Coherence}
In the region outside the octahedron of the separable states, the Entanglement and the Coherence can be related by a linear function:
\begin{equation*}\label{Eq:EntrelazCoherenciaXPD}
E=\frac{1}{2} \max \big[0,\frac{|r_1 \pm r_2| C}{\max(|r_1|,|r_2|)}- \sqrt{(1 \pm r_3)^2-(s\pm c)^2}\big]
\end{equation*}
\subsubsection{Discord and Coherence}
In order to determine the relation between Discord and Coherence, the behavior of the Discord must be taken into account in each of the three regions defined above.
In Region 1, the relation is linear:
\begin{equation*}
D=\frac{|r_1|C}{2\max(|r_1|,|r_2|)}
\end{equation*}
In Region 2, there is no relation between Discord and Coherence because the Discord is constant and depends only on $r_3$: $D=\frac{|r_3|}{2}$\\
In Region 3 the relation is:
\begin{equation*}\label{Eq:DiscordCoherencePDX}
D=\frac{C}{2\max(|r_1|,|r_2|)}\sqrt{\frac{\frac{r_1^2r_2^2C^2}{\max(|r_1|,|r_2|)^2}-r_2^2r_3^2+r_1^2s^2}{\frac{r_1^2 C^2}{\max(|r_1|,|r_2|)^2}-r_3^2+s^2}}
\end{equation*}
\subsection{Bit Flip}
\label{sec:bitflipX}
As the state evolves under this channel, we see that not only does $\vec{r}$ change, but also parameters $s$ and $c$, as follows:
\begin{gather*}
\begin{aligned}
&\vec{r}'=r_1\hat{i}+r_2(p-1)^2\hat{j}+r_3(p-1)^2\hat{k} \\
&s'=s(p-1) \quad c'=c(p-1),
\end{aligned}
\end{gather*}
and this allows us to find the expression of the different correlations. \\
The Entanglement has the following expression:
\begin{equation*}
\begin{split}
E & =\frac{1}{2} \max \Big[0,|r_1 \pm r_2(p-1)^2| \\
& -\sqrt{(1 \pm r_3(p-1)^2)^2-(s \pm c)^2(p-1)^2}\Big]
\end{split}
\end{equation*}
The expression of the Coherence is:
\begin{equation*}
C=\max(|r_1|,|r_2|(p-1)^2)
\end{equation*}
\subsubsection{Discord}
According to eq.(\ref{Eq:DiscordiaXtn}), the expression of the Discord for an X state evolving under the Bit Flip channel is:
\begin{equation*}
D(\rho)=\begin{cases}
\frac{|r_1|}{2} & \text{if $\Delta >0$}.\\
\frac{1}{2} \sqrt{\frac{r_1^2 \max(A,C)-B \min(A,r_1^2)}{\max(A,C)-\min(A,r_1^2)+r_1^2-B}} & \text{if $\Delta \leq 0$},
\end{cases}
\end{equation*}
where $A=r_3^2(p-1)^4$, $B=r_2^2(p-1)^4$, $C=r_2^2(p-1)^4+s^2(p-1)^2$ and $\Delta=A-r_1^2-s^2(p-1)^2.$
Again, to study Discord we have to analyze its behavior in 3 different regions, defined by the following relations:
\begin{multline*}\label{Eq:RegionesDiscordBF}
\text{Region 1: } |r_3|(p-1)^2>|r_1| \\
\text{Region 2: } \left(|r_3|(p-1)^2 \leq |r_1|\right) \cap \left( (r_3^2- r_2^2)(p-1)^2 > s^2\right) \\
\text{Region 3: } \left(|r_3|(p-1)^2 \leq |r_1|\right) \cap \left( (r_3^2- r_2^2)(p-1)^2 \leq s^2\right)
\end{multline*}
Carrying out the same analysis as for the Phase Damping case, we arrive at the
following results: the X state will be in Region 1 provided that:
\begin{equation*}
\begin{cases}
p<1-\sqrt{\frac{|r_1|}{|r_3|}}, \\
r_3 \neq 0
\end{cases}
\end{equation*}
In this region the expression of Discord is:
\begin{equation*}
D=\frac{|r_1|}{2}
\end{equation*}
The state will remain in Region 2 as long as it is verified that
\begin{equation*}
\begin{cases}
p \ge 1-\sqrt{\frac{|r_1|}{|r_3|}} \\ p<1-\sqrt{\frac{s^2}{r_3^2-r_2^2}}
\end{cases}
\end{equation*}
The expression of Discord in this region is:
\begin{equation*}
D=\frac{|r_3|}{2}(p-1)^2
\end{equation*}
Finally Region 3 it's defined by:
\begin{equation*}
\begin{cases}
p \ge 1-\sqrt{\frac{|r_1|}{|r_3|}} \\p>1-\sqrt{\frac{s^2}{r_3^2-r_2^2}}
\end{cases}
\end{equation*}
Note that if $r_3^2 \leq r_2^2$, or $r_3=0$, the state is in Region 3.
In this region the Discord is expressed as:
\begin{equation*}\label{}
D=\frac{(p-1)}{2}\sqrt{\frac{r_1^2r_2^2(p-1)^2-r_2^2r_3^2(p-1)^6+r_1^2s^2}{r_1^2-r_3^2(p-1)^4+s^2(p-1)^2}}
\end{equation*}
It should be noted that since the Bit Flip channel also changes the values of $s$ and $c$, it is not possible to show an evolution crossing the different regions, since for each value of s and c the entire volume that defines the space of X states as well as the interior regions varies.
\subsubsection{Entanglement and Coherence}
In order to relate the Entanglement and the Coherence we must distinguish between two regions outside the region of separable states:
Region A: $|r_1|>|r_2|(p-1)^2 $, in which the Coherence is constant, $C=|r_1|$, and
Region B:
$|r_1|<|r_2|(p-1)^2$, where
we can relate Entanglement and Coherence as follows:
\begin{equation*}
E=\frac{1}{2}\max \Bigg[0,|r_1 \pm C|-\sqrt{\Big(1\pm \frac{r_3 C}{|r_2|}\Big)^2-(s \pm c)^2\frac{C}{|r_2|}}\, \Bigg]
\end{equation*}
\subsubsection{Discord and Coherence}
By seeking to relate Discord to Coherence we have to take into account the 3 regions of Discord. In Region 1, defined by $|r_3|(p-1)^2>|r_1|$, Discord has a constant value, $D=\frac{|r_1|}{2}$, independent of the Coherence.\\
In Region 2, $|r_2|(p-1)^2 < |r_1|$ is verified, so the Coherence takes a constant value, $C=|r_1|$, and Discord has the following expression: $D=\frac{|r_3|}{2}(p-1)^2$; there is no relation between them in this region either.\\
Within Region 3, in the area where $|r_2|(p-1)^2<|r_1|$, the Coherence is constant $C=|r_1|$. \\
However, in the zone in which $|r_2|(p-1)^2>|r_1|$, Coherence takes the following expression: $C=|r_2|(p-1)^2$ and therefore we can relate it to Discord, as follows:
\begin{equation*}\label{Eq:DiscordCoherenceBFX}
D=\frac{1}{2}\sqrt{\frac{C \left(r_1^2|r_2|s^2+r_1^2|r_2|^2C-r_3^2C^3\right)}{r_1^2|r_2|^2+s^2|r_2|C-r_3^2C^2}}
\end{equation*}
\subsection{Depolarizing}
\label{sec:depolarizingX}
By evolving under the Depolarizing channel, the correlation vector $\vec{r}$ and the parameters $s$ and $c$ change as:
\begin{gather*} \label{C=0}
\begin{aligned}
&\vec{r}'=r_1(p-1)^2\hat{i}+r_2(p-1)^2\hat{j}+r_3(p-1)^2\hat{k}&\\
&s'=s(p-1) \quad c'=c(p-1)&
\end{aligned}
\end{gather*}
The expression of the Entanglement of the X state under this channel is:
\begin{equation*}
\begin{split}
E&=\frac{1}{2} \max \Bigg[0,|r_1 \pm r_2|(p-1)^2-\\&\sqrt{\Big(1 \pm r_3(p-1)^2\Big)^2-(p-1)^2(s \pm c)^2}\,\,\Bigg]
\end{split}
\end{equation*}
The Coherence is:
\begin{equation*}
C=\max(|r_1|,|r_2|)(p-1)^2
\end{equation*}
\subsubsection{Regions of Discord}
The general expression for the Discord under this channel is:
\begin{equation*}
D(\rho)=\begin{cases}
\frac{|r_1|}{2}(p-1)^2\\
\frac{(p-1)^2}{2} \sqrt{\frac{r_1^2 A-r_2^2(p-1)^2 \min(r_3^2,r_1^2)}{A+(p-1)^2( r_1^2-r_2^2-\min(r_3^2,r_1^2))}}
\end{cases}
\end{equation*}
where $A= \max(r_3^2(p-1)^2,r_2^2(p-1)^2+s^2)$. The upper expression corresponds to the condition $r_1^2(p-1)^2-r_3^2(p-1)^2+s^2< 0$ and the lower one to $r_1^2(p-1)^2-r_3^2(p-1)^2+s^2 \ge 0. $
This again gives three regions with different expressions for Discord. After a little mathematical manipulation they are:
\begin{multline*}\label{Eq:RegionesDiscordDepo}
\text{Region 1: } |r_3|>|r_1| \\
\text{Region 2: } \left(|r_3|\leq|r_1|\right) \cap \left(r_3^2(p-1)^2 > r_2^2(p-1)^2+s^2 \right) \\
\text{Region 3: } \left(|r_3|\leq|r_1|\right) \cap \left(r_3^2(p-1)^2 \leq r_2^2(p-1)^2+s^2 \right)
\end{multline*}
In Region 1, the Discord is expressed as:
\begin{equation*}
D=\frac{|r_1|}{2}(p-1)^2
\end{equation*}
The state will remain in Region 2 as long as it is verified that:
\begin{equation*}
\begin{cases}
|r_3|<|r_1|\\p<1-\sqrt{\frac{s^2}{r_3^2-r_2^2}}\\
\end{cases}
\end{equation*}
and the Discord in this region is expressed as:
\begin{equation*}
D=\frac{|r_3|}{2}(p-1)^2.
\end{equation*}
If $|r_3|\leq |r_2|$ the state will be in Region 3. \\
The quantum X state will remain in Region 3 as long as:
\begin{equation*}
\begin{cases}
|r_3|<|r_1|\\p>1-\sqrt{\frac{s^2}{r_3^2-r_2^2}}\\
\end{cases}
\end{equation*}
and in this region Discord is expressed as:
\begin{equation*}\label{}
D=\frac{(p-1)^2}{2}\sqrt{\frac{r_2^2(p-1)^2(r_1^2-r_3^2)+r_1^2s^2}{s^2+(p-1)^2(r_1^2-r_3^2)}}
\end{equation*}
\subsubsection{Entanglement and Coherence}
It is also possible in this case to relate the Entanglement and the Coherence:
\begin{multline*}
E=\frac{1}{2}\max\Bigg[0,|r_1\pm r_2|\frac{C}{\max(|r_1|,|r_2|)}-\\ \sqrt{\Bigg(1\pm \frac{Cr_3}{\max(|r_1|,|r_2|)}\Bigg)^2-\frac{C(s \pm c)^2}{\max(|r_1|,|r_2|)} }\,\,\Bigg]
\end{multline*}
\subsubsection{Discord and Coherence}
To link Discord and Coherence we must analyze the behavior of both magnitudes in each of the regions that we already discussed. It is immediate to show that in regions 1 and 2 the relation is linear.
In Region 1:
\begin{equation*}
D=\frac{|r_1|C}{2\max(|r_1|,|r_2|)}
\end{equation*}
In Region 2:
\begin{equation*}
D=\frac{|r_3|C}{2\max(|r_1|,|r_2|)}
\end{equation*}
In Region 3, the relation is a little bit complicated:
\begin{equation*}
D=\frac{C}{2\max(|r_1|,|r_2|)}\sqrt{\frac{r_2^2(r_1^2-r_3^2)C+\max(|r_1|,|r_2|)\,r_1^2s^2}{\max(|r_1|,|r_2|)\,s^2+(r_1^2-r_3^2)C}}
\end{equation*}
\section{Coherence as the fundamental correlation}
\label{sec:Coherencefundamental}
As the theories of quantum correlations have been developed, attempts have been made to understand the possible relationships between them \cite{Bruss_ED_M,Vedralcoherence,XiCoherence,Fancoherence}.
A reasonable question to ask is whether there is a correlation that is fundamental, that is, if there is a correlation whose presence is necessary for the others to exist.
We begin by analyzing the expression of Coherence:
\begin{equation*}
C=\max(|r_1|,|r_2|),
\end{equation*}
We note that for the Coherence to vanish, it is necessary and sufficient that $r_1=r_2=0.$
If we impose that condition in the expression of the Entanglement:
\begin{equation*}
E=\frac{1}{2} \max \big[0,|r_1 \pm r_2|-\sqrt{(1 \pm r_3)^2-(s \pm c)^2}\big],
\end{equation*}
we observe that necessarily the Entanglement is zero. Therefore we can affirm that:
\begin{equation*}
C=0 \Rightarrow E=0.
\end{equation*}
Making the same analysis with the expression of the Discord, eq.(\ref{Eq:DiscordiaXtn}), in the region $r_3^2>r_1^2+s^2$ we have
\begin{equation}
D=\frac{|r_1|}{2},
\end{equation}
so, using that $r_1=0$, we obtain
\begin{equation*}
D=0 \quad \text{in the region } |r_3|>|s|.
\end{equation*}
For the complementary region, $r_3^2<r_1^2+s^2$:
\begin{equation*}
D=\frac{1}{2}\sqrt{\frac{r_1^2 \max (r_3^2,r_2^2+s^2)-r_2^2 \min (r_3^2,r_1^2)}{\max (r_3^2,r_2^2+s^2)-\min (r_3^2,r_1^2)+r_1^2-r_2^2}},
\end{equation*}
imposing again $r_1=r_2=0$, we obtain:
\begin{equation*}
D=0 \quad \text{in the region } |r_3|<|s|.
\end{equation*}
From these results, we can conclude that:
\begin{gather*} \label{C=0}
\begin{aligned}
&C=0 \Rightarrow E=0&\\
&C=0 \Rightarrow D=0&
\end{aligned}
\end{gather*}
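This implication is also easy to test numerically: drawing random incoherent X states ($r_1=r_2=0$) inside the region of existence and evaluating the closed forms above, both Entanglement and Discord vanish identically. A short, self-contained Python check:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5000):
    r3, s, c = rng.uniform(-1, 1, 3)
    if (s + c) ** 2 > (1 + r3) ** 2 or (s - c) ** 2 > (1 - r3) ** 2:
        continue                       # outside the region of existence
    r1 = r2 = 0.0                      # incoherent X state: C = max(|r1|,|r2|) = 0
    E = 0.5 * max(0.0,
                  abs(r1 + r2) - np.sqrt((1 + r3) ** 2 - (s + c) ** 2),
                  abs(r1 - r2) - np.sqrt((1 - r3) ** 2 - (s - c) ** 2))
    if r3 ** 2 - r1 ** 2 - s ** 2 > 0:
        D = abs(r1) / 2
    else:
        mx, mn = max(r3 ** 2, r2 ** 2 + s ** 2), min(r3 ** 2, r1 ** 2)
        D = 0.5 * np.sqrt((r1 ** 2 * mx - r2 ** 2 * mn)
                          / (mx - mn + r1 ** 2 - r2 ** 2))
    assert E == 0.0 and np.isclose(D, 0.0)
\end{verbatim}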
What if the Entanglement is zero? What consequences does this have on Discord and Coherence? By setting the Entanglement to zero in eq.(\ref{Eq:EntrelazamientoXtn}), we obtain the region of separable states, where we have already seen that there are states with non-zero Coherence and non-zero Discord. \\
Now we will analyze the consequences of a vanishing Discord for the other correlations. We have to consider the two regions of Discord.
In the region $r_3^2>r_1^2+s^2$, $D=\frac{|r_1|}{2}$. We then have that:
\begin{equation*}
D=0 \iff r_1=0.
\end{equation*}
Clearly, for the states with $|r_2|\neq 0$, the Discord being zero does not imply that the Coherence is zero.\\
In the region $r_3^2<r_1^2+s^2$, the expression of the Discord is:
\begin{equation*}
D=\frac{1}{2}\sqrt{\frac{r_1^2 \max (r_3^2,r_2^2+s^2)-r_2^2 \min (r_3^2,r_1^2)}{\max (r_3^2,r_2^2+s^2)-\min (r_3^2,r_1^2)+r_1^2-r_2^2}}
\end{equation*}
The condition for the Discord to be zero is:
\begin{equation}\label{Eq:CondicionesDiscordia}
D=0 \iff r_1^2\max(r_3^2,r_2^2+s^2)=r_2^2\min(r_3^2,r_1^2)
\end{equation}
Here we must consider four cases:
\begin{enumerate}
\item $\max(r_3^2,r_2^2+s^2)=r_3^2$ and $\min(r_3^2,r_1^2)=r_3^2$
\item $\max(r_3^2,r_2^2+s^2)=r_2^2+s^2$ and $\min(r_3^2,r_1^2)=r_3^2$
\item $\max(r_3^2,r_2^2+s^2)=r_3^2$ and $\min(r_3^2,r_1^2)=r_1^2$
\item $\max(r_3^2,r_2^2+s^2)=r_2^2+s^2$ and $\min(r_3^2,r_1^2)=r_1^2$
\end{enumerate}
By substituting each one in eq.(\ref{Eq:CondicionesDiscordia}) we obtain:
\begin{enumerate}
\item $|r_1|=|r_2|$
\item $r_3^2=\frac{r_1^2}{r_2^2}(r_2^2+s^2)$
\item $|r_2|=|r_3|$
\item $s=0$
\end{enumerate}
None of these conditions implies zero Coherence or zero Entanglement.
This result is very important and shows that for the X states, Coherence is the fundamental correlation.\\
All these results point in the same direction as Streltsov et al. \cite{Adesso_ChE}, who showed that from a coherent state it is possible to produce Entanglement, but that it is impossible to produce it from an incoherent state.
\section{Conclusions}
\label{sec:conclu}
In this work we derived analytical relations between Entanglement and Coherence as well as between Discord and Coherence, for Bell-diagonal states and for the X states, evolving under the action of several noise channels: Bit Flip, Phase Damping and Depolarizing. For the Bell-diagonal states we found analytical relations between Entanglement and Coherence, and between Discord and Coherence, in all cases. In particular, all the relations found between these correlations are linear.
The study of the X states has the additional complexity that five parameters are required to describe the family, which modifies the region of existence: the tetrahedron of the Bell-diagonal states compresses and deforms as the two new parameters $s$ and $c$ change.
For the X states we described the different regions that determine the expression of the Discord, and in each of these regions we discussed the existence of an analytical relation with Coherence.
We found that Discord defines three different regions according to its analytical expression: there are regions where Discord remains constant regardless of Coherence, regions where it decays quadratically and cannot be directly related to Coherence, and regions where it depends on Coherence in a more or less trivial way depending on the channel applied.
Finally, we demonstrated that for the families of states studied, Coherence is the fundamental correlation, that is, Coherence is necessary for the presence of Entanglement and Discord.
Our result points in the same direction as previous results: Kok Chuan Tan et al. demonstrated that the Correlated Coherence in a bipartite system is necessary for the system to have Discord \cite{Tan}, and Streltsov et al. showed that Coherence is necessary to generate Entanglement
\cite{Adesso_ChE}.
\end{document}
\begin{document}
\title{
Energy-Based Test Sample Adaptation for Domain Generalization
}
\begin{abstract}
In this paper, we propose energy-based sample adaptation at test time for domain generalization. Where previous works adapt their models to target domains, we adapt the unseen target samples to source-trained models. To this end, we design a discriminative energy-based model, which is trained on source domains to jointly model the conditional distribution for classification and data distribution for sample adaptation. The model is optimized to simultaneously learn a classifier and an energy function. To adapt target samples to source distributions, we iteratively update the samples by energy minimization with stochastic gradient Langevin dynamics. Moreover, to preserve the categorical information in the sample during adaptation, we introduce a categorical latent variable into the energy-based model. The latent variable is learned from the original sample before adaptation by variational inference and fixed as a condition to guide the sample update. Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal.
\end{abstract}
\section{Introduction}
Deep neural networks are vulnerable to domain shifts and suffer from lack of generalization on test samples that do not resemble the ones in the training distribution \citep{recht2019imagenet,zhou2021domain,krueger2021out,shen2022association}.
To deal with the domain shifts, domain generalization has been proposed \citep{muandet2013domain,gulrajani2020search, cha2021swad}.
Domain generalization strives to learn a model exclusively on source domains in order to generalize well on unseen target domains.
The major challenge stems from the large domain shifts and the unavailability of any target domain data during training.
To address the problem, domain invariant learning has been widely studied, e.g., \citep{motiian2017unified,zhao2020domain,nguyen2021domain}, based on the assumption that invariant representations obtained on source domains are also valid for unseen target domains. However, since the target data is inaccessible during training, it is likely an ``adaptivity gap'' \citep{dubey2021adaptive} exists between representations from the source and target domains. Therefore, recent works try to adapt the classification model with target samples at \textit{test time} by further fine-tuning model parameters~\citep{sun2020test,wang2021tent} or by introducing an extra network module for adaptation~\citep{dubey2021adaptive}. Rather than adapting the model to target domains, \cite{xiao2022learning} adapt the classifier for each sample at test time. Nevertheless, a single sample would not be able to adjust the whole model due to the large number of model parameters and the limited information contained in the sample. This makes it challenging for their method to handle large domain gaps. Instead, we propose to adapt each target sample to the source distributions, which does not require any fine-tuning or parameter updates of the source model.
In this paper, we propose energy-based test sample adaptation for domain generalization. The method is motivated by the fact that energy-based models \citep{hinton2002training,lecun2006tutorial} flexibly model complex data distributions and allow for efficient sampling from the modeled distribution by Langevin dynamics \citep{du2019implicit,welling2011bayesian}. Specifically, we define a new discriminative energy-based model as the composition of a classifier and a neural-network-based energy function in the data space, which are trained simultaneously on the source domains. The trained model iteratively updates the representation of each target sample by gradient descent of energy minimization through Langevin dynamics, which eventually adapts the sample to the source data distribution. The adapted target samples are then predicted by the classifier that is simultaneously trained in the discriminative energy-based model. For both efficient energy minimization and classification, we deploy the energy functions on the input feature space rather than the raw images.
Since Langevin dynamics tends to draw samples randomly from the distribution modeled by the energy function, it cannot guarantee category equivalence. To maintain the category information of the target samples during adaptation and promote better classification performance, we further introduce a categorical latent variable in our energy-based model. Our model learns the latent variable to explicitly carry categorical information by variational inference in the classification model. We utilize the latent variable as conditional categorical attributes like in compositional generation \citep{du2020compositional,nie2021controllable} to guide the sample adaptation to preserve the categorical information of the original sample. At inference time, we simply ensemble the predictions obtained by adapting the unseen target sample to each source domain as the final domain generalization result.
We conduct experiments on six benchmarks for classification of images and microblog threads to demonstrate the promise and effectiveness of our method for domain generalization\footnote{Code available:~\url{https://github.com/zzzx1224/EBTSA-ICLR2023}.}.
\section{Methodology}
In domain generalization, we are provided source and target domains as non-overlapping distributions on the joint space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ denote the input and label space, respectively.
Given a dataset with $S$ source domains $\mathcal{D}_{s} {=} \left \{ D_{s}^{i} \right \}_{i=1}^{S}$ and $T$ target domains $\mathcal{D}_{t} {=} \left \{ D_{t}^{i}\right \}_{i=1}^{T}$, a model is trained only on $\mathcal{D}_{s}$ and required to generalize well on $\mathcal{D}_{t}$.
Following the multi-source domain generalization setting \citep{li2017deeper, zhou2021domain}, we assume there are multiple source domains with the same label space, which allows us to mimic domain shifts during training.
In this work, we propose energy-based test sample adaptation, which adapts target samples to source distributions to tackle the domain gap between target and source data.
The rationale behind our model is that adapting the target samples to the source data distributions is able to improve the prediction of the target data with source models by reducing the domain shifts, as shown in Figure~\ref{fig1_intuition} (left).
Since the target data is never seen during training, we mimic domain shifts during the training stage to learn the sample adaptation procedure. By doing so, the model acquires the ability to adapt each target sample to the source distribution at inference time.
In this section, we first provide a preliminary on energy-based models and then present our energy-based test sample adaptation.
\subsection{Energy-based model preliminary}
\label{background}
Energy-based models \citep{lecun2006tutorial} represent any probability distribution $p(\mathbf{x})$ for $\mathbf{x} \in \mathbb{R}^D$ as $p_{\theta}(\mathbf{x}) = \frac{ {\rm exp} (-E_{\theta}(\mathbf{x}))}{Z_{\theta}}$, where $E_{\theta}(\mathbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}$ is known as the energy function that maps each input sample to a scalar and $Z_{\theta} = \int {\rm exp} (-E_{\theta}(\mathbf{x})) d \mathbf{x}$ denotes the partition function.
However, $Z_{\theta}$ is usually intractable since it computes the integration over the entire input space of $\mathbf{x}$. Thus, we cannot train the parameter $\theta$ of the energy-based model by directly maximizing the log-likelihood $ {\rm log}~p_{\theta}(\mathbf{x}) = - E_{\theta}(x) - {\rm log} Z_{\theta}$.
Nevertheless, the log-likelihood has the derivative \citep{du2019implicit,song2021train} :
\begin{equation}
\label{contras_div}
\frac{\partial {\rm log}~p_{\theta}(\mathbf{x})}{\partial \theta} = \mathbb{E}_{p_d(\mathbf{x})} \Big [ - \frac{\partial E_\theta(\mathbf{x})}{\partial \theta} \Big ] + \mathbb{E}_{p_\theta(\mathbf{x})} \Big [ \frac{\partial E_\theta(\mathbf{x})}{\partial \theta} \Big ],
\end{equation}
where the first expectation term is taken over the data distribution $p_d(\mathbf{x})$ and the second one is over the model distribution $p_\theta(\mathbf{x})$.
The gradient in eq.~(\ref{contras_div}) encourages the model to assign low energy to samples from the real data distribution while assigning high energy to those from the model distribution.
To do so, we need to draw samples from $p_\theta(\mathbf{x})$, which is challenging and usually approximated by MCMC methods \citep{hinton2002training}. An effective MCMC method used in recent works \citep{du2019implicit, nijkamp2019learning, xiao2020vaebm, grathwohl2019your} is Stochastic Gradient Langevin Dynamics \citep{welling2011bayesian}, which simulates samples by
\begin{equation}
\label{langevin}
\mathbf{x}^{i+1} = \mathbf{x}^{i} - \frac{\lambda}{2} \frac{\partial E_{\theta}(\mathbf{x}^{i})}{\partial \mathbf{x}^{i}} + \epsilon, ~~~~ {\rm where}~~~~ \epsilon \sim \mathcal{N}(0, \lambda),
\end{equation}
where $\lambda$ denotes the step-size and $\mathbf{x}^0$ is drawn from the initial distribution $p_0(\mathbf{x})$, which is usually a uniform distribution \citep{du2019implicit,grathwohl2019your}.
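To make the sampling procedure concrete, a minimal sketch of the update in eq.~(\ref{langevin}) is given below (the function name and the default hyperparameter values are our own illustration, not part of the method):
\begin{verbatim}
import torch

def langevin_sample(energy_fn, x_init, steps=20, step_size=0.01):
    """Stochastic gradient Langevin dynamics:
    x_{i+1} = x_i - (step_size / 2) * dE/dx + noise, noise ~ N(0, step_size)."""
    x = x_init.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]
        x = x.detach() - 0.5 * step_size * grad
        x = x + torch.randn_like(x) * step_size ** 0.5
    return x.detach()
\end{verbatim}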
Actually, maximizing $ {\rm log}~p_{\theta}(\mathbf{x})$ is equivalent to minimizing the KL divergence $\mathbb{D}_{\rm{KL}} (p_d(\mathbf{x})||p_\theta(\mathbf{x}))$ \citep{song2021train}, which is alternatively achieved in \citep{hinton2002training} by minimizing contrastive divergence:
\begin{equation}
\label{hintoncd}
\mathbb{D}_{\rm{KL}} (p_d(\mathbf{x})||p_\theta(\mathbf{x})) - \mathbb{D}_{\rm{KL}} (q_\theta(\mathbf{x})||p_\theta(\mathbf{x})),
\end{equation}
where $q_\theta(\mathbf{x}) = \prod_\theta^t p_d(\mathbf{x})$ represents $t$ sequential MCMC transitions starting from $p_d(\mathbf{x})$ \citep{du2020improved}. Minimizing eq.~(\ref{hintoncd}) is then achieved by minimizing:
\begin{equation}
\label{hintoncd2}
\mathbb{E}_{p_d(\mathbf{x})} [E_{\theta}(\mathbf{x})] - \mathbb{E}_{\rm{stop\_grad}(q_\theta (\mathbf{x}))} [E_{\theta}(\mathbf{x})] + \mathbb{E}_{q_\theta (\mathbf{x})} [E_{\rm{stop\_grad}(\theta)}(\mathbf{x})] + \mathbb{E}_{q_\theta (\mathbf{x})} [{\rm log} q_\theta (\mathbf{x})].
\end{equation}
Eq.~(\ref{hintoncd2}) avoids drawing samples from the model distribution $p_\theta(\mathbf{x})$, which often requires an exponentially long time for MCMC sampling \citep{du2020improved}.
Intuitively, $q_{\theta}(\mathbf{x})$ is closer to $p_{\theta}(\mathbf{x})$ than $p_d(\mathbf{x})$, which guarantees that $\mathbb{D}_{\rm{KL}} (p_d(\mathbf{x})||p_\theta(\mathbf{x})) \geq \mathbb{D}_{\rm{KL}} (q_\theta(\mathbf{x})||p_\theta(\mathbf{x}))$ and eq.~(\ref{hintoncd}) can only be zero when $p_{\theta}(\mathbf{x})=p_d(\mathbf{x})$.
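As an illustration, the first two terms of eq.~(\ref{hintoncd2}) could be sketched as follows, reusing \texttt{langevin\_sample} from the previous sketch; the sample-gradient and entropy terms are omitted here, since they require back-propagating through the sampling chain and estimating ${\rm log}~q_\theta(\mathbf{x})$, respectively:
\begin{verbatim}
def cd_loss(energy_fn, x_real, x_init, steps=20, step_size=0.01):
    """First two terms of the contrastive divergence objective: push the
    energy down on real data and up on negative samples drawn by Langevin
    dynamics (already detached, which realizes the stop-gradient)."""
    x_neg = langevin_sample(energy_fn, x_init, steps, step_size)
    return energy_fn(x_real).mean() - energy_fn(x_neg).mean()
\end{verbatim}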
\subsection{Energy-based test sample adaptation}
\label{sec3.1}
\begin{figure*}
\caption{\textbf{Illustration of the proposed energy-based model.}}
\label{fig1_intuition}
\end{figure*}
We propose energy-based test sample adaptation to tackle the domain gap between source and target data distributions.
This is inspired by the fact that Langevin dynamics simulates samples of the distribution expressed by the energy-based model through gradient-based updates, with no restriction on the sample initialization if the sampling steps are sufficient \citep{welling2011bayesian,du2019implicit}.
We leverage this property to conduct test sample adaptation with Langevin dynamics by setting the target sample as the initialization and updating it iteratively.
With the energy-based model of the source data distribution, as shown in Figure~\ref{fig1_intuition} (right), target samples are gradually updated towards the source domain and with sufficient update steps, the target sample will eventually be adapted to the source distribution.
\textbf{Discriminative energy-based model.}
We propose the discriminative energy-based model $p_{\theta,\phi}(\mathbf{x},\mathbf{y})$ on the source domain, which is constructed by a classification model and an energy function in the data space.
Note that $\mathbf{x}$ denotes the feature representation of the input image $I$, generated by a neural network backbone as $\mathbf{x}{=}f_{\psi}(I)$.
Different from regular energy-based models that generate data samples from uniform noise, our goal is to promote the discriminative task, i.e., the conditional distribution $p(\mathbf{y}|\mathbf{x})$, which we prefer to model jointly with the feature distribution $p(\mathbf{x})$ of the input data.
Thus, the proposed energy-based model is defined on the joint space of $\mathcal{X} \times \mathcal{Y}$ to consider both the classification task and the energy function.
Formally, the discriminative energy-based model of a source domain is formulated as:
\begin{equation}
\label{basic}
p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) = p_{\phi}(\mathbf{y}|\mathbf{x}) \frac{ {\rm exp} (-E_{\theta}(\mathbf{x}))}{Z_{\theta, \phi}},
\end{equation}
where $p_{\phi}(\mathbf{y}|\mathbf{x})$ denotes the classification model and $E_{\theta}(\mathbf{x})$ is an energy function, which is implemented with a neural network.
Eq.~(\ref{basic}) enables the energy-based model to jointly model the feature distribution of input data and the conditional distribution on the source domains.
An unseen target sample $\mathbf{x}_t$ is iteratively adapted to the distribution of the source domain $D_s$ by Langevin dynamics update with the energy function $E_{\theta}(\mathbf{x})$ and predicted by the classification model $p_{\phi}(\mathbf{y}|\mathbf{x})$.
The model parameters $\theta$ and $\phi$ can be jointly optimized following eq.~(\ref{hintoncd}) by minimizing:
\begin{equation}
\mathbb{D}_{\rm{KL}} (p_d(\mathbf{x}, \mathbf{y})||p_{\theta,\phi}(\mathbf{x}, \mathbf{y})) - \mathbb{D}_{\rm{KL}} (q_{\theta,\phi} (\mathbf{x}, \mathbf{y})||p_{\theta,\phi} (\mathbf{x}, \mathbf{y})),
\label{cd}
\end{equation}
which is derived as:
\begin{equation}
\begin{aligned}
\label{eq7:lall}
\mathcal{L} =
& - \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[ {\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) \big] + \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[ E_{\theta}(\mathbf{x}) \big] - \mathbb{E}_{\rm{stop\_grad}(q_{\theta, \phi}(\mathbf{x}, \mathbf{y}))} \big[E_{\theta}(\mathbf{x}) \big] \\
& + \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[ E_{\rm{stop\_grad}(\theta)}(\mathbf{x}) - {{\rm log} ~ p_{\rm{stop\_grad}(\phi)}(\mathbf{y}|\mathbf{x})} \big],
\end{aligned}
\end{equation}
where $p_d(\mathbf{x},\mathbf{y})$ denotes the real data distribution and $q_{\theta, \phi}(\mathbf{x}, \mathbf{y})=\prod_{\theta}^{t} p(\mathbf{x}, \mathbf{y})$ denotes $t$ sequential MCMC samplings from the distribution expressed by the energy-based model similar to eq.~(\ref{hintoncd2}) \citep{du2020improved}.
We provide the detailed derivation in Appendix \ref{app:derive}.
In eq.~(\ref{eq7:lall}), the first term encourages to learn a discriminative classifier on the source domain.
The second and third terms train the energy function to model the data distribution of the source domain by assigning low energy on the real samples and high energy on the samples from the model distribution.
Different from the first three terms that directly supervise the model parameters $\theta$ and $\phi$, the last term stops the gradients of the energy function $E_{\theta}$ and classifier $\phi$ while back-propagating the gradients to the adapted samples $q_{\theta,\phi}(\mathbf{x},\mathbf{y})$.
Because of the stop-gradient, this term does not optimize the energy or log-likelihood of a given sample, but rather increases the probability of such samples with low energy and high log-likelihood under the modeled distribution.
Essentially, the last term trains the model $\theta$ to provide a variation for each sample that encourages its adapted version to be both discriminative on the source domain classifier and low energy on the energy function.
Intuitively, it supervises the model to learn the ability to preserve categorical information during adaptation and find a faster way to minimize the energy.
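A minimal sketch of how the first three terms of eq.~(\ref{eq7:lall}) could be computed for one source domain is given below (the names are our own; the last term of eq.~(\ref{eq7:lall}), which back-propagates only through the adapted samples, is omitted as it requires keeping the computation graph of the Langevin chain):
\begin{verbatim}
import torch.nn.functional as F

def discriminative_ebm_loss(classifier, energy_fn, x_pos, y_pos, x_init):
    """Classification on real source features, low energy on real source
    features, high energy on adapted negative features (x_init typically
    comes from the other source domains or the replay buffer)."""
    ce = F.cross_entropy(classifier(x_pos), y_pos)
    x_neg = langevin_sample(energy_fn, x_init)  # reused from the sketch above
    return ce + energy_fn(x_pos).mean() - energy_fn(x_neg).mean()
\end{verbatim}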
\textbf{Label-preserving adaptation with categorical latent variable.}
Since the ultimate goal is to correctly classify target domain samples, it is necessary to maintain the categorical information in the target sample during the iterative adaptation process.
Eq.~(\ref{eq7:lall}) contains a supervision term that encourages the adapted target samples to be discriminative for the source classification models.
However, as the energy function $E_{\theta}$ operates only in the $\mathcal{X}$ space and the sampling process of Langevin dynamics tends to result in random samples from the sampled distribution that are independent of the starting point, there is no categorical information considered during the adaptation procedure.
To achieve label-preserving adaptation, we introduce a categorical latent variable $\mathbf{z}$ into the energy function to guide the adaptation of target samples to preserve the category information.
With the latent variable, the energy function $E_{\theta}$ is defined in the joint space of $\mathcal{X} \times \mathcal{Z}$. The categorical information contained in $\mathbf{z}$ will be explicitly incorporated into the iterative adaptation.
To do so, we define the energy-based model with the categorical latent variable as:
\begin{equation}
\begin{aligned}
\label{explicit}
p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) = \int p_{\theta, \phi}(\mathbf{x}, \mathbf{y}, \mathbf{z}) d\mathbf{z}
= \int p_{\phi}(\mathbf{y}|\mathbf{z}, \mathbf{x}) p_{\phi}(\mathbf{z}|\mathbf{x}) \frac{ {\rm exp} (-E_{\theta}(\mathbf{x}|\mathbf{z}))}{Z_{\theta, \phi}}d\mathbf{z},
\end{aligned}
\end{equation}
where $\phi$ denotes the parameters of the classification model that predicts $\mathbf{z}$ and $\mathbf{y}$ and $E_\theta$ denotes the energy function that models the distribution of $\mathbf{x}$ considering the information of latent variable $\mathbf{z}$.
$\mathbf{z}$ is trained to contain sufficient categorical information of $\mathbf{x}$ and serves as the conditional attributes that guide the adapted samples $\mathbf{x}$ preserving the categorical information.
Once obtained from the original input feature representations $\mathbf{x}$, $\mathbf{z}$ is fixed and taken as the input of the energy function together with the updated $\mathbf{x}$ in each iteration.
Intuitively, when $\mathbf{x}$ is updated from the target domain to the source domain via Langevin dynamics, $\mathbf{z}$ helps it preserve the classification information contained in the original $\mathbf{x}$, without introducing additional information.
To learn the latent variable $\mathbf{z}$ with more categorical information,
we estimate $\mathbf{z}$ by variational inference and design a variational posterior $q_\phi(\mathbf{z}|\mathbf{d}_{\mathbf{x}})$, where
$\mathbf{d}_{\mathbf{x}}$ is the average representation of samples from the same category as $\mathbf{x}$ on the source domain.
Therefore, $q_\phi(\mathbf{z}|\mathbf{d}_{\mathbf{x}})$ can be treated as a probabilistic prototypical representation of a class.
By incorporating $q_\phi(\mathbf{z}|\mathbf{d}_{\mathbf{x}})$ into eq.~(\ref{explicit}), we obtain the lower bound of the log-likelihood:
\begin{equation}
\begin{aligned}
\label{explicit1}
{\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y})
& \geq \mathbb{E}_{q_{\phi}} [{\rm log}~p_{\phi}(\mathbf{y}| \mathbf{z},\mathbf{x}) - E_{\theta}(\mathbf{x}, \mathbf{z}) - {\rm log}~Z_{\theta,\phi}] + \mathbb{D}_{\rm{KL}} [q_{\phi}(\mathbf{z}|\mathbf{d}_{\mathbf{x}}) || p_{\phi}(\mathbf{z}|\mathbf{x})].
\end{aligned}
\end{equation}
Note that in eq.~(\ref{explicit1}), the categorical latent variable $\mathbf{z}$ is incorporated into both the classification model $p_{\phi}(\mathbf{y}| \mathbf{z},\mathbf{x})$ and the energy function $E_{\theta}(\mathbf{x}|\mathbf{z})$.
The energy function contains both data information and categorical information.
During the Langevin dynamics update for sample adaptation, the latent variable provides categorical information in each iteration, which enables the adapted target samples to be discriminative.
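A sketch of how the latent variable could be parameterized is given below (a diagonal Gaussian with the reparameterization trick; the layer sizes are our own assumptions, while the overall layout follows the description in Appendix~\ref{app:imp}):
\begin{verbatim}
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class LatentEncoder(nn.Module):
    """Outputs mean and std of a diagonal Gaussian over z; fed with x for
    the prior p(z|x) and with the class center d_x for the posterior
    q(z|d_x)."""
    def __init__(self, feat_dim, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * feat_dim),  # mean and log-std of z
        )

    def forward(self, inp):
        mean, log_std = self.net(inp).chunk(2, dim=-1)
        return mean, log_std.exp()

def sample_z(mean, std):
    # reparameterization trick: z = mean + std * eps, eps ~ N(0, I)
    return mean + std * torch.randn_like(std)

# KL term between posterior and prior, e.g.:
# kl = kl_divergence(Normal(q_mean, q_std), Normal(p_mean, p_std)).sum(-1).mean()
\end{verbatim}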
By incorporating eq.~(\ref{explicit1}) into eq.~(\ref{cd}), we derive the objective function with the categorical latent variable as:
\begin{equation}
\begin{aligned}
\label{eq10explicitall}
\mathcal{L}_{f}
& = \mathbb{E}_{p_d(\mathbf{x}, \mathbf{y})} \Big [\mathbb{E}_{q_{\phi}(\mathbf{z})} [- {\rm log}~p_{\phi}(\mathbf{y}| \mathbf{z},\mathbf{x})] + \mathbb{D}_{\rm{KL}} [q_{\phi}(\mathbf{z}|\mathbf{d_{\mathbf{x}}}) || p_{\phi}(\mathbf{z}|\mathbf{x})]\Big ] + \mathbb{E}_{q_{\phi} (\mathbf{z})} \Big[ \mathbb{E}_{p_d(\mathbf{x})} [ E_{\theta}(\mathbf{x}|\mathbf{z})] \\
& - \mathbb{E}_{\rm{stop\_grad}(q_{\theta}(\mathbf{x}))} [E_{\theta}(\mathbf{x}, \mathbf{z})] \Big ]
+ \mathbb{E}_{q_{\theta}(\mathbf{x})} \Big[ \mathbb{E}_{q_{\rm{stop\_grad}(\phi)}(\mathbf{z})} \big[ E_{\rm{stop\_grad}(\theta)} (\mathbf{x}, \mathbf{z}) \\
& - {\rm log}~ p_{\rm{stop\_grad}(\phi)} (\mathbf{y}|\mathbf{z}, \mathbf{x}) \big ] -\mathbb{D}_{\rm{KL}} [q_{\rm{stop\_grad} (\phi)} (\mathbf{z}|\mathbf{d}_{\mathbf{x}}) || p_{\rm{stop\_grad}(\phi)}(\mathbf{z}|\mathbf{x})] \Big],
\end{aligned}
\end{equation}
where $p_d(\mathbf{x})$ and $q_{\theta}(\mathbf{x})$ denote the data distribution and the $t$ sequential MCMC samplings from the energy-based distribution of the source domain $D_s$.
Similar to eq.~(\ref{eq7:lall}), the first term trains the classification model on the source data. The second term trains the energy function to model the source data distribution. The last term is conducted on the adapted samples to supervise the adaptation procedure.
The complete derivation is provided in Appendix \ref{app:derive}.
An illustration of our model is shown in Figure~\ref{illustration}.
We also provide the complete algorithm in Appendix \ref{app:algo}.
\begin{figure*}
\caption{\textbf{Overall process of the proposed sample adaptation by discriminative energy-based model.}}
\label{illustration}
\end{figure*}
\textbf{Ensemble inference.}
Since the target data is inaccessible during training, we train the specific parameters $\theta$ and $\phi$ to model each source distribution by adapting the samples from other source domains to the current source distribution.
In each iteration, we train the energy-based model $\theta^i$ of one randomly selected source domain $D^i_s$. The adapted samples generated from samples $\mathbf{x}^j, \mathbf{x}^k$ of the other source domains $D^j_s, D^k_s$ are used as negative samples, while $\mathbf{x}^i$ are used as positive samples to train the energy-based model.
During inference, the target sample is adapted to each source distribution with the specific energy function and predicted by the specific classifier.
After that, we combine the predictions of all source domain models to obtain the final prediction:
\begin{equation}
\label{ensemble}
p(\mathbf{y}_t) = \frac{1}{S} \sum^{S}_{i=1} \frac{1}{N} \sum^{N}_{n=1} p_{\phi^i}(\mathbf{y}|\mathbf{z}^n, \mathbf{x})~~~~~~~~ \mathbf{z}^n \sim p(\mathbf{z}^n|\mathbf{x}_t), \mathbf{x} \sim p_{\theta^i}(\mathbf{x}).
\end{equation}
Here $\phi^i$ and $\theta^i$ denote the domain specific classification model and energy function of domain $D^i_s$.
Note that since the labels of $\mathbf{x}_t$ are unknown, $\mathbf{d}_{\mathbf{x}_t}$ in eq.~(\ref{eq10explicitall}) is not available during inference. Therefore, we draw $\mathbf{z}^n$ from the prior distribution $p(\mathbf{z}^n|\mathbf{x}_t)$, where $\mathbf{x}_t$ is the original target sample without any adaptation.
With fixed $\mathbf{z}^n$, $\mathbf{x} \sim p_{\theta^i}(\mathbf{x})$ is drawn by Langevin dynamics as in eq.~(\ref{langevin}), with the target sample $\mathbf{x}_t$ as the initialization.
$p_{\theta^i}(\mathbf{x})$ denotes the distributions modeled by the energy function $E_{\theta^i}(\mathbf{x}|\mathbf{z}^n)$.
Moreover, to be efficient, {the feature extractor $\psi$ for obtaining feature representations $\mathbf{x}$} is shared by all source domains and only the energy functions and classifiers are domain specific.
We deploy the energy-based model on feature representations so that the domain-specific energy functions and classification models can be lightweight neural networks.
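A sketch of the ensemble inference in eq.~(\ref{ensemble}) is given below, reusing \texttt{langevin\_sample} and \texttt{sample\_z} from the earlier sketches; the attribute names of the per-domain models and the way $\mathbf{z}$ enters the classifier (concatenation) are our own assumptions:
\begin{verbatim}
import torch

def predict_target(x_t, domain_models, n_z=10, steps=20, step_size=0.01):
    """Adapt the target feature x_t to every source domain and average the
    class probabilities over domains and z samples, as in eq. (10)."""
    probs = []
    for m in domain_models:                    # one model per source domain
        p_mean, p_std = m.latent_encoder(x_t)  # prior p(z|x_t)
        for _ in range(n_z):
            z = sample_z(p_mean, p_std)        # fixed during adaptation
            energy_fn = lambda x, z=z: m.energy(torch.cat([x, z], dim=-1))
            x_adapted = langevin_sample(energy_fn, x_t, steps, step_size)
            logits = m.classifier(torch.cat([x_adapted, z], dim=-1))
            probs.append(logits.softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)
\end{verbatim}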
\section{Experiments}
\textbf{Datasets.}
We conduct our experiments on five widely used datasets for domain generalization, \textbf{PACS} \citep{li2017deeper}, \textbf{Office-Home} \citep{venkateswara2017deep}, \textbf{DomainNet} \citep{peng2019moment}, and \textbf{Rotated MNIST and Fashion-MNIST}.
Since we model the energy-based distribution on the feature space, our method can also handle other data formats. Therefore, we also evaluate the method on \textbf{PHEME} \citep{zubiaga2016analysing}, a dataset for natural language processing.
PACS consists of 9,991 images of seven classes from four domains, i.e., \textit{photo}, \textit{art-painting}, \textit{cartoon}, and \textit{sketch}. We use the same training and validation split as \citep{li2017deeper} and follow their ``leave-one-out'' protocol.
Office-Home also contains four domains, i.e., \textit{art}, \textit{clipart}, \textit{product}, and \textit{real-world}, which totally have 15,500 images of 65 categories.
DomainNet is more challenging since it has six domains, i.e., \textit{clipart}, \textit{infograph}, \textit{painting}, \textit{quickdraw}, \textit{real}, and \textit{sketch}, with 586,575 examples of 345 classes.
We use the same experimental protocol as PACS.
We utilize the Rotated MNIST and Fashion-MNIST datasets by following the settings in Piratla et al. \citep{piratla2020efficient}.
The images are rotated from $0^\circ$ to $90^\circ$ in intervals of $15^\circ$, covering seven domains.
We use the domains with rotation angles from $15^\circ$ to $75^\circ$ as the source domains, and images rotated by $0^\circ$ and $90^\circ$ as the target domains.
PHEME is a dataset for rumour detection. There are a total of 5,802 tweets labeled as rumourous or non-rumourous from 5 different events, i.e., \textit{Charlie Hebdo}, \textit{Ferguson}, \textit{German Wings}, \textit{Ottawa Shooting}, and \textit{Sydney Siege}. As with PACS, we evaluate our method on PHEME using the ``leave-one-out'' protocol.
\textbf{Implementation details.}
We evaluate on PACS and Office-Home with both a ResNet-18 and ResNet-50 \citep{he2016deep} and on DomainNet with a ResNet-50. The backbones are pretrained on ImageNet \citep{deng2009imagenet}.
On PHEME we conduct the experiments based on a pretrained DistilBERT \citep{sanh2019distilbert}, following \citep{wright2020transformer}.
{To increase the number of sampling steps and sample diversity of the energy functions, we introduce a replay buffer $\mathcal{B}$ that stores the past updated samples from the modeled distribution \citep{du2019implicit}.}
The details of the models and hyperparameters are provided in Appendix \ref{app:imp}.
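A sketch of the replay buffer is given below (the capacity and the decision to require a full batch before reusing stored samples are our own assumptions; the 50\% resampling probability follows Algorithm~\ref{algo:1} in the appendix):
\begin{verbatim}
import random
import torch

class ReplayBuffer:
    """Stores past adapted (negative) features; with probability 0.5 the
    Langevin chain is initialized from the buffer instead of fresh
    features from the other source domains."""
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.samples = []

    def push(self, x):
        self.samples.extend(x.detach().cpu().unbind(0))
        self.samples = self.samples[-self.capacity:]

    def init_negatives(self, x_fresh):
        if len(self.samples) >= len(x_fresh) and random.random() < 0.5:
            idx = random.sample(range(len(self.samples)), k=len(x_fresh))
            return torch.stack([self.samples[i] for i in idx]).to(x_fresh.device)
        return x_fresh
\end{verbatim}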
\begin{table}[t]
\centering
\caption{\textbf{Benefit of energy-based test sample adaptation.} Experiments on PACS using a ResNet-18 averaged over five runs. Optimized by eq.~(\ref{eq7:lall}), our model improves after adaptation. With the latent variable (eq.~\ref{eq10explicitall}) performance improves further, both before and after adaptation.}
\centering
\label{ablations}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lcccccc}
\toprule
& Adaptation & \textbf{Photo} & \textbf{Art-painting } & \textbf{Cartoon} & \textbf{Sketch} & \textit{Mean} \\ \midrule
\multirow{2}*{Without latent variable (eq.~\ref{eq7:lall})} & \ding{55} & 94.73 \scriptsize{$\pm$0.22} & 78.66 \scriptsize{$\pm$0.59} & 78.24 \scriptsize{$\pm$0.71} & 78.34 \scriptsize{$\pm$0.62} & 82.49 \scriptsize{$\pm$0.26} \\
~ & \ding{51} & 94.59 \scriptsize{$\pm$0.16} & 80.45 \scriptsize{$\pm$0.52} & 79.98 \scriptsize{$\pm$0.51} & \textbf{79.23} \scriptsize{$\pm$0.32} & 83.51 \scriptsize{$\pm$0.30} \\
\midrule
\multirow{2}*{With latent variable (eq.~\ref{eq10explicitall})} & \ding{55} & 95.12 \scriptsize{$\pm$0.41} & 79.79 \scriptsize{$\pm$0.64} & 79.15 \scriptsize{$\pm$0.37} & \textbf{79.28} \scriptsize{$\pm$0.82} & 83.33 \scriptsize{$\pm$0.43} \\
~ & \ding{51} & \textbf{96.05} \scriptsize{$\pm$0.37} & \textbf{82.28} \scriptsize{$\pm$0.31} & \textbf{81.55} \scriptsize{$\pm$0.65} & \textbf{79.81} \scriptsize{$\pm$0.41} & \textbf{84.92} \scriptsize{$\pm$0.59} \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Benefit of energy-based test sample adaptation.}
We first investigate the effectiveness of our energy-based test sample adaptation in Table~\ref{ablations}. Before adaptation, we evaluate the target samples directly by the classification model of each source domain and ensemble the predictions. After the adaptation to the source distributions, the performance on the target samples improves, especially on the \textit{art-painting} and \textit{cartoon} domains, demonstrating the benefit of the iterative adaptation by our energy-based model. The results of the models with the latent variable, i.e., trained by eq.~(\ref{eq10explicitall}), are shown in the last two rows.
With the latent variable, the performance is improved both before and after adaptation, which shows the benefit of incorporating the latent variable into the classification model.
The performance improvement after adaptation is also more prominent than without the latent variable, demonstrating the effectiveness of incorporating the latent variable into the energy function.
\begin{figure*}
\caption{\textbf{Iterative adaptation of target samples.}}
\label{visualization2a}
\end{figure*}
\textbf{Effectiveness of iterative test sample adaptation by Langevin dynamics.}
We visualize the iterative adaptation of the target samples in Figure~\ref{visualization2a}.
In each subfigure, the target and source samples have the same label.
The visualization shows that the target samples gradually approach the source data distributions during the iterative adaptation by Langevin dynamics.
After adaptation, the predictions of the target samples on the source domain classifier also become more accurate.
For instance, in Figure~\ref{visualization2a} (a), the target sample of the \textit{house} category is predicted incorrectly, with a probability of \textit{house} being only 0.02\%.
After adaptation, the probability becomes 99.7\%, which is predicted correctly. More visualizations, including several failure cases, are provided in Appendix \ref{app:vis}.
\begin{wrapfigure}{r}{0.5\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{acc_ebm_revise.pdf}
\caption{
\textbf{Adaptation with different Langevin dynamics steps.} As the number of steps increases, energy decreases while accuracy increases. When the number of steps is too large, the accuracy without $\mathbf{z}$ or with $p(\mathbf{z}|\mathbf{x})$ drops slightly while the accuracy with $q(\mathbf{z}|\mathbf{d}_{\mathbf{x}})$ is more stable and better.
}
\label{accene}
\end{wrapfigure}
\textbf{Adaptation with different Langevin dynamics steps.}
We also investigate the effect of the Langevin dynamics step numbers during adaptation.
Figure~\ref{accene} shows the variety of the average energy and accuracy of the target samples adapted to the source distributions with different updating steps.
The experiments are conducted on PACS with ResNet-18. The target domain is \textit{art-painting}.
With fewer than 80 steps, the average energy decreases consistently while the accuracy increases with the number of updating steps, showing that the target samples get closer to the source distributions.
When the number of steps becomes too large, the accuracy decreases as the number of steps increases further. We attribute this to $\mathbf{z}_t$ having imperfect categorical information, since it is approximated during inference from a single target sample $\mathbf{x}_t$ only. In this case, the label information would not be well preserved in $\mathbf{x}_t$ during the Langevin dynamics update, which causes an accuracy drop with a large number of updates.
To demonstrate this, we conduct the experiment by replacing $p(\mathbf{z}_t)$ with $q_{\phi}(\mathbf{z}|\mathbf{d}_\mathbf{x})$ during inference. $\mathbf{d}_\mathbf{x}$ is the class center of the same class as $\mathbf{x}_t$. Therefore, $q_{\phi}(\mathbf{z}|\mathbf{d}_\mathbf{x})$ contains categorical information that is closer to the ground truth label. We regard this as the oracle model.
As expected, the oracle model performs better as the number of steps increases and reaches stability after 100 steps. We also show the results without $\mathbf{z}$. We can see the performance and stability are both worse, which again demonstrates that $\mathbf{z}$ helps preserve label information in the target samples during adaptation.
{Moreover, the energy without conditioning on $\mathbf{z}$ is higher. The reason can be that without conditioning on $\mathbf{z}$, there is no guidance by categorical information during sample adaptation. In this case, the sample can be adapted randomly by the energy-based model, regardless of the categorical information. This can lead to conflicts when adapting the target features towards different categories of the source data, slowing down the decline of the energy.}
We provide more analyses of $\mathbf{z}_t$ in Appendix \ref{app:results}.
In addition, the training and test time cost grows as the number of steps increases; the corresponding comparisons and analyses are also provided in Appendix \ref{app:results}.
\begin{table}[t]
\begin{center}
\caption{\textbf{Comparisons on image and text datasets.} Our method achieves the best mean accuracy for all datasets, independent of the backbone. {More adaptation steps (e.g., 50) lead to better performance.}
}
\label{pacsoff}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lllllll}
\toprule
~ & \multicolumn{2}{c}{\textbf{PACS}} & \multicolumn{2}{c}{\textbf{Office-Home}} & \textbf{DomainNet} & \textbf{PHEME} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7}
& ResNet-18 & ResNet-50 & ResNet-18 & ResNet-50 & ResNet-50 & DistilBERT\\ \midrule
\cite{iwasawa2021test} & 81.40 & 85.10 & 57.00 & 68.30 & - & - \\
\cite{zhou2020learning} & 82.83 & 84.90 & 65.63 & 67.66 & - & -\\
\cite{gulrajani2020search} & - & 85.50 & - & 66.50 & 40.90 & - \\
\cite{wang2021tent} & 83.09 & 86.23 & 64.13 & 67.99 & - & 75.8 \scriptsize{$\pm$0.23}\\
\cite{dubey2021adaptive} & - & - & - & 68.90 & 43.90 & - \\
\cite{xiao2022learning} & 84.15 & 87.51 & 66.02 & 71.07 & - & 76.1 \scriptsize{$\pm$0.21}\\
\midrule
\textbf{\textit{This paper w/o adaptation}} & 83.33 \scriptsize{$\pm$0.43} & 86.05 \scriptsize{$\pm$0.37} & 65.01 \scriptsize{$\pm$0.47} & 70.44 \scriptsize{$\pm$0.25} & 42.90 \scriptsize{$\pm$0.34} & 75.4 \scriptsize{$\pm$0.13}\\
\textbf{\textit{This paper w/ adaptation (10 steps)}}
& 84.25 \scriptsize{$\pm$0.48} & 87.05 \scriptsize{$\pm$0.26} & 65.73 \scriptsize{$\pm$0.32} & 71.13 \scriptsize{$\pm$0.43} & 43.75 \scriptsize{$\pm$0.49} & 76.0 \scriptsize{$\pm$0.16}\\
\textbf{\textit{This paper w/ adaptation (20 steps)}}
& \textbf{84.92} \scriptsize{$\pm$0.59} & 87.70 \scriptsize{$\pm$0.28} & 66.31 \scriptsize{$\pm$0.21} & \textbf{72.07} \scriptsize{$\pm$0.38} & \textbf{44.66} \scriptsize{$\pm$0.51} & 76.5 \scriptsize{$\pm$0.18}\\
\textbf{\textit{This paper w/ adaptation (50 steps)}}
& \textbf{85.10} \scriptsize{$\pm$0.33} & \textbf{88.12} \scriptsize{$\pm$0.25} & \textbf{66.75} \scriptsize{$\pm$0.21} & \textbf{72.25} \scriptsize{$\pm$0.32} & \textbf{44.98} \scriptsize{$\pm$0.43} & \textbf{76.9} \scriptsize{$\pm$0.16}\\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\textbf{Comparisons.}
PACS, Office-Home, and DomainNet are three widely used benchmarks in domain generalization.
We conduct experiments on PACS and Office-Home based on both ResNet-18 and ResNet-50 and experiments on DomainNet based on ResNet-50.
As shown in Table~\ref{pacsoff}, our method achieves competitive and even the best overall performance in most cases.
Moreover, our method performs better than most of the recent test-time adaptation methods \citep{iwasawa2021test,wang2021tent,dubey2021adaptive}, which fine-tune the model at test time with batches of target samples.
{By contrast, we strictly follow the setting of domain generalization. We only use the source data to train the classification and energy-based models during training. At test time, we do our sample adaptation and make predictions on each individual target sample by just the source-trained models.
Our method is more data efficient at test time, avoiding the problem of data collection per target domain in real-world applications. Despite the data efficiency during inference, our method is still comparable and sometimes better, especially on datasets with more categories, e.g., Office-Home and DomainNet.}
Compared with the recent work by \cite{xiao2022learning}, our method is at least competitive and often better.
To show the generality of our method, we also conduct experiments on the natural language processing dataset PHEME. The dataset is a binary classification task for rumour detection. The results in Table~\ref{pacsoff} show similar conclusions as the image datasets.
Table~\ref{pacsoff} also demonstrates the effectiveness of our sample adaptation. For each dataset and backbone, the proposed method achieves a good improvement after adaptation by the proposed discriminative energy-based model.
{For fairness, the results without adaptation are also obtained by ensemble predictions of the source-domain-specific classifiers.
Moreover, more adaptation steps (e.g., 50) lead to better performance, although the additional improvement over 20 steps is slight. Considering the trade-off between computational efficiency and performance, we set the number of steps to 20 in our paper.}
We provide detailed comparisons, results on rotated MNIST and Fashion-MNIST datasets, as well as more experiments on the latent variable, corruption datasets, and analyses of the ensemble inference method in Appendix \ref{app:results}.
\section{Related work}
\textbf{Domain generalization.}
One of the predominant methods is domain invariant learning \citep{muandet2013domain,ghifary2016scatter,motiian2017unified,seo2020learning,zhao2020domain,xiao2021bit,mahajan2021domain,nguyen2021domain,phung2021learning,shi2021gradient}.
\cite{muandet2013domain} and \cite{ghifary2016scatter} learn domain invariant representations by matching the moments of features across source domains.
Li et al. \citep{li2018domainb} further improved on this by learning conditionally invariant features.
Recently, \cite{mahajan2021domain} introduced causal matching to model within-class variations for generalization.
\cite{shi2021gradient} provided a gradient matching to encourage consistent gradient directions across domains.
\cite{arjovsky2019invariant} and \cite{ahuja2021invariance} proposed invariant risk minimization to learn an invariant classifier.
Another widely used methodology is domain augmentation \citep{shankar2018generalizing,volpi2018generalizing,qiao2020learning, zhou2020learning,zhou2021mix, yao2022improving}, which generates more source domain data to simulate domain shifts during training.
\cite{zhou2021mix} proposed a data augmentation on the feature space by mixing the feature statistics of instances from different domains.
Meta-learning-based methods have also been studied for domain generalization \citep{li2018metalearning,balaji2018metareg,dou2019domain,du2020metanorm,bui2021exploiting,du2021hierarchical}.
\cite{li2018metalearning} introduced the model agnostic meta-learning \citep{finn2017model} into domain generalization.
\cite{du2020learning} proposed the meta-variational information bottleneck for domain-invariant learning.
\textbf{Test-time adaptation and source-free adaptation.}
Recently, adaptive methods have been proposed to better match the source-trained model and the target data at test time
\citep{sun2020test,li2020model,d2019learning,pandey2021generalization,iwasawa2021test,dubey2021adaptive,zhang2021adaptive}.
Test-time adaptation \citep{sun2020test, wang2021tent, liu2021ttt++, zhou2021training} fine-tunes (part of) a network trained on source domains by batches of target samples.
\cite{xiao2022learning} proposed single-sample generalization that adapts a model to each target sample under a meta-learning framework.
There are also some source-free domain adaptation methods \citep{liang2020we,yang2021exploiting,dong2021confident,liang2021source} that adapt the source-trained model on only the target data.
These methods follow the domain adaptation settings to fine-tune the source-trained model by the entire target set.
By contrast, we do sample adaptation at test time but strictly follow the domain generalization settings. In our method, no target sample is available during the training of the models. At test time, each target sample is adapted to the source domains and predicted by the source-trained model individually, without fine-tuning the models or requiring large amounts of target data.
\textbf{Energy-based model.} The energy-based model is a classical learning framework \citep{ackley1985learning,hinton2002training,hinton2006unsupervised,lecun2006tutorial}.
Recently, \citep{xie2016theory,nijkamp2019learning,nijkamp2020anatomy,du2019implicit,du2020improved, xie2022tale} further extended energy-based models to high-dimensional data using contrastive divergence and stochastic gradient Langevin dynamics.
Different from most of these works that only model the data distributions, some recent works model the joint distributions \citep{grathwohl2019your, xiao2020vaebm}.
In our work, we define the joint distribution of data and label to promote the classification of unseen target samples in domain generalization, and further incorporate a latent variable to incorporate the categorical information into the Langevin dynamics procedure.
Energy-based models for various tasks have been proposed, e.g., image generation \citep{du2020compositional,nie2021controllable},
out-of-distribution detection \citep{liu2020energy}, and anomaly detection \citep{dehaene2020iterative}.
Some methods also utilize energy-based models for domain adaptation \citep{zou2021unsupervised,xie2021active,kurmi2021domain}.
Different from these methods, we focus on domain generalization and utilize the energy-based model to express the source domain distributions without any target data during training.
\section{Conclusion and Discussions}
In this paper, we propose a discriminative energy-based model to adapt the target samples to the source data distributions for domain generalization.
The energy-based model is designed on the joint space of input, output, and a latent variable, which is constructed by a domain specific classification model and an energy function.
With the trained energy-based model, the target samples are adapted to the source distributions through Langevin dynamics and then predicted by the classification model.
Since we aim to promote the classification of the target samples, the model is trained to achieve label-preserving adaptation by incorporating the categorical latent variable.
We evaluate the method on six image and text benchmarks. The results demonstrate its effectiveness and generality.
We have not tested our approach beyond image and text classification tasks, but since our sample adaptation is conducted on the feature space, it should be possible to extend the method to other complex tasks based on feature representations.
Compared with recent model adaptation methods, which require batches of target samples to provide sufficient target information, our method does not need to adjust the model parameters at test time.
This is more data efficient but also more challenging at test time; as a consequence, the training procedure is more involved, with more complex optimization objectives.
One limitation of our proposed method is the iterative adaptation requirement for each target sample, which introduces an extra time cost at both training and test time. The problem can be mitigated by speeding up the energy minimization with optimization techniques during Langevin dynamics, e.g., Nesterov momentum \citep{nesterov1983method}, or by exploring one-step methods for sample adaptation. We leave these explorations for future work.
\section*{Acknowledgment}
This work is financially supported by the Inception Institute of Artificial Intelligence, the University of Amsterdam and the allowance
Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy.
\appendix
\section{Derivations}
\label{app:derive}
\textbf{Derivation of energy-based sample adaptation.}
Recall our discriminative energy-based model
\begin{equation}
p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) = p_{\phi}(\mathbf{y}|\mathbf{x}) \frac{ {\rm exp} (-E_{\theta}(\mathbf{x}))}{Z_{\theta, \phi}},
\end{equation}
where $Z_{\theta, \phi} = \int p_{\phi}(\mathbf{y}|\mathbf{x}) {\rm exp} (-E_{\theta}(\mathbf{x})) d\mathbf{x} d\mathbf{y}$ is the partition function.
$\phi$ and $\theta$ denote the parameters of the classifier and energy function, respectively.
To jointly train the parameters, we minimize the contrastive divergence proposed by \cite{hinton2002training}:
\begin{equation}
\begin{aligned}
\label{lossall}
\mathcal{L} = \mathbb{D}_{\rm{KL}}[p_d(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})] - \mathbb{D}_{\rm{KL}}[q_{\theta, \phi}(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})],
\end{aligned}
\end{equation}
where $p_d(\mathbf{x},\mathbf{y})$ denotes the real data distribution and $q_{\theta, \phi}(\mathbf{x}, \mathbf{y})=\prod_{\theta}^{t} p(\mathbf{x}, \mathbf{y})$ denotes $t$ sequential MCMC samplings from the distribution expressed by the energy-based model \citep{du2020improved}.
The gradient of the first term with respect to $\theta$ and $\phi$ is
\begin{equation}
\begin{aligned}
\label{term1grad}
\mathbf{\nabla}_{\theta, \phi} \mathbb{D}_{\rm{KL}}[p_d(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})]
& = \mathbf{\nabla}_{\theta, \phi} \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[{\rm log}~\frac{p_{d}(\mathbf{x}, \mathbf{y})}{p_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big] \\
& = \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{d}(\mathbf{x}, \mathbf{y}) - \mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big] \\
& = \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[ - \mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big], \\
\end{aligned}
\end{equation}
while the gradient of the second term is
\begin{equation}
\begin{aligned}
\label{term2grad}
& \mathbf{\nabla}_{\theta, \phi} \mathbb{D}_{\rm{KL}}[q_{\theta, \phi}(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})] \\
= & \mathbf{\nabla}_{\theta, \phi} \mathbb{E}_{q_{\theta, \phi}(\mathbf{x},\mathbf{y})} \big[{\rm log}~\frac{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})}{p_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big] \\
= & \mathbf{\nabla}_{\theta, \phi} q_{\theta, \phi}(\mathbf{x},\mathbf{y}) \mathbf{\nabla}_{q_{\theta, \phi}} \mathbb{D}_{\rm{KL}}[q_{\theta, \phi}(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})] + \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[ - \mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big]. \\
\end{aligned}
\end{equation}
Combining eq.~(\ref{term1grad}) and eq.~(\ref{term2grad}), we have the overall gradient as:
\begin{equation}
\begin{aligned}
\label{allgrad}
\mathbf{\nabla}_{\theta, \phi} \mathcal{L}_{all}
= & - (\mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big] - \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big]\\
& + \mathbf{\nabla}_{\theta, \phi} q_{\theta, \phi}(\mathbf{x},\mathbf{y}) \mathbf{\nabla}_{q_{\theta, \phi}} \mathbb{D}_{\rm{KL}}[q_{\theta, \phi}(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})]).
\end{aligned}
\end{equation}
For the first two terms, the gradient can be further derived to
\begin{equation}
\begin{aligned}
\label{term1grad2}
& \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big] - \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} {\rm log}~p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big] \\
= & \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} ({\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) - E_{\theta}(\mathbf{x}) - {\rm log}~Z_{\theta, \phi})\big] \\
- & \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[\mathbf{\nabla}_{\theta, \phi} ({\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) - E_{\theta}(\mathbf{x}) - {\rm log}~Z_{\theta, \phi})\big].
\end{aligned}
\end{equation}
Moreover, $\mathbf{\nabla}_{\theta, \phi} {\rm log}~Z_{\theta, \phi}$ can be written as the expectation $\mathbb{E}_{p_{\theta, \phi}(\mathbf{x}, \mathbf{y})} [\mathbf{\nabla}_{\phi} {\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) -\mathbf{\nabla}_{\theta} E_{\theta}(\mathbf{x})]$ \citep{song2021train,xiao2020vaebm}, which is therefore canceled out in eq.~(\ref{term1grad2}) \citep{hinton2002training}.
We then have the loss function for the first two terms as
\begin{equation}
\begin{aligned}
\label{fterm1}
\mathcal{L}_{1}
= \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[E_{\theta}(\mathbf{x}) - {\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) \big] - \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[E_{\theta}(\mathbf{x}) - {\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) \big].
\end{aligned}
\end{equation}
Furthermore, we have the loss function
\begin{equation}
\begin{aligned}
\label{fterm2}
\mathcal{L}_{2}
& = \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[{\rm log} \frac{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})}{p_{stop\_grad(\theta, \phi)}(\mathbf{x}, \mathbf{y})} \big] \\
& = - ~ \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[{{\rm log} p_{stop\_grad(\phi)}(\mathbf{y}|\mathbf{x})} - E_{stop\_grad(\theta)}(\mathbf{x}) - {\rm log} Z_{stop\_grad(\theta, \phi)}\big] \\
& ~~~~~ + \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[{\rm log} q_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big],
\end{aligned}
\end{equation}
which has the same gradient as the last term in eq.~(\ref{allgrad}) \citep{du2020improved}.
The $stop\_grad$ here means that we do not backpropagate the gradients to update the parameters by the corresponding forward functions. Thus, these parameters can be treated as constants.
Since the gradient of $\theta$ and $\phi$ is stopped in ${\rm log}~Z_{stop\_grad(\theta, \phi)}$, we treat it as a constant independent of $q_{\theta, \phi}(\mathbf{x}, \mathbf{y})$ and therefore remove it from eq.~(\ref{fterm2}).
In addition, the term $\mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[{\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) \big]$ in eq.~(\ref{fterm1}) encourages incorrect predictions for the updated samples from $q_{\theta, \phi}(\mathbf{x}, \mathbf{y})$, which goes against our goal of promoting classification by adapting target samples.
The term $\mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[{\rm log} q_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \big]$ in eq.~(\ref{fterm2}) can be treated as a negative entropy of $q_{\theta, \phi}(\mathbf{x}, \mathbf{y})$, which is always negative and hard to estimate.
Therefore, we remove these two terms in the final loss function by applying an upper bound of the combination of eq.~(\ref{fterm1}) and eq.~(\ref{fterm2}) as:
\begin{equation}
\begin{aligned}
\label{lall}
\mathcal{L} =
& - \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[ {\rm log}~p_{\phi}(\mathbf{y}|\mathbf{x}) \big] + \mathbb{E}_{p_{d}(\mathbf{x}, \mathbf{y})} \big[ E_{\theta}(\mathbf{x}) \big] - \mathbb{E}_{\rm{stop\_grad}(q_{\theta, \phi}(\mathbf{x}, \mathbf{y}))} \big[E_{\theta}(\mathbf{x}) \big] \\
& + \mathbb{E}_{q_{\theta, \phi}(\mathbf{x}, \mathbf{y})} \big[ E_{\rm{stop\_grad}(\theta)}(\mathbf{x}) - {{\rm log} ~ p_{\rm{stop\_grad}(\phi)}(\mathbf{y}|\mathbf{x})} \big].
\end{aligned}
\end{equation}
\textbf{Energy-based sample adaptation with categorical latent variable.}
To keep the categorical information during sample adaptation, we introduce a categorical latent variable $\mathbf{z}$ into our discriminative energy-based model, which is defined as $p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) = \int p_{\theta, \phi}(\mathbf{x}, \mathbf{y}, \mathbf{z}) d\mathbf{z} = \int p_{\phi}(\mathbf{y}|\mathbf{z}, \mathbf{x}) p_{\phi}(\mathbf{z}|\mathbf{x}) \frac{ {\rm exp} (-E_{\theta}(\mathbf{x}|\mathbf{z}))}{Z_{\theta, \phi}}d\mathbf{z}$.
We optimize the parameters $\theta$ and $\phi$ also by the contrastive divergence $\mathbb{D}_{\rm{KL}}[p_d(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})] - \mathbb{D}_{\rm{KL}}[q_{\theta, \phi}(\mathbf{x}, \mathbf{y})||p_{\theta, \phi}(\mathbf{x}, \mathbf{y})]$, which has similar gradient as eq.~(\ref{term1grad}) and eq.~(\ref{term2grad}).
The latent variable $\mathbf{z}$ is estimated by variational inference, leading to a lower bound of ${\rm log}~ p_{\theta, \phi}(\mathbf{x}, \mathbf{y}) \geq \mathbb{E}_{q_{\phi}(\mathbf{z})} [{\rm log}~p_{\phi}(\mathbf{y}|\mathbf{z},\mathbf{x}) - E_{\theta}(\mathbf{x}|\mathbf{z}) - {\rm log}~Z_{\theta,\phi}] + \mathbb{D}_{\rm{KL}} [q_{\phi}(\mathbf{z}|\mathbf{d}_{\mathbf{x}}) || p_{\phi}(\mathbf{z}|\mathbf{x})]$.
We obtain the final loss function of the contrastive divergence in a similar way as eq.~(\ref{lall}), by estimating the gradients and removing the terms that are hard to estimate or that conflict with our final goal.
The final objective function is:
\begin{equation}
\begin{aligned}
\label{explicitall}
\mathcal{L}_{f}
& = \mathbb{E}_{p_d(\mathbf{x}, \mathbf{y})} \Big [\mathbb{E}_{q_{\phi}(\mathbf{z})} [- {\rm log}~p_{\phi}(\mathbf{y}| \mathbf{z},\mathbf{x})] + \mathbb{D}_{\rm{KL}} [q_{\phi}(\mathbf{z}|\mathbf{d_{\mathbf{x}}}) || p_{\phi}(\mathbf{z}|\mathbf{x})]\Big ] + \mathbb{E}_{q_{\phi} (\mathbf{z})} \Big[ \mathbb{E}_{p_d(\mathbf{x})} [ E_{\theta}(\mathbf{x}|\mathbf{z})] \\
& - \mathbb{E}_{\rm{stop\_grad}(q_{\theta}(\mathbf{x}))} [E_{\theta}(\mathbf{x}, \mathbf{z})] \Big ]
+ \mathbb{E}_{q_{\theta}(\mathbf{x})} \Big[ \mathbb{E}_{q_{\rm{stop\_grad}(\phi)}(\mathbf{z})} \big[ E_{\rm{stop\_grad}(\theta)} (\mathbf{x}, \mathbf{z}) \\
& - {\rm log}~ p_{\rm{stop\_grad}(\phi)} (\mathbf{y}|\mathbf{z}, \mathbf{x}) \big ] -\mathbb{D}_{\rm{KL}} [q_{\rm{stop\_grad} (\phi)} (\mathbf{z}|\mathbf{d}_{\mathbf{x}}) || p_{\rm{stop\_grad}(\phi)}(\mathbf{z}|\mathbf{x})] \Big].
\end{aligned}
\end{equation}
\section{Algorithm}
\label{app:algo}
We provide the detailed training and test algorithm of our energy-based sample adaptation in Algorithm~\ref{algo:1}.
\begin{algorithm}[t]
\algsetup{linenosize=\tiny}
\small
\caption{Energy-based sample adaptation}
\label{algo:1}
\begin{algorithmic}
\STATE \underline{TRAINING TIME}
\STATE {\textbf{Require:}} Source domains $\mathcal{D}_{s} {=} \left \{ D_{s}^{i} \right \}_{i=1}^{S}$, each with joint distribution $p_{d_s^i}(I, \mathbf{y})$ of input image and label.
\STATE {\textbf{Require:}} Learning rate $\mu$; number of training iterations $M$; number of Langevin steps $K$ and step size $\lambda$ of the energy function.
\STATE Initialize pretrained backbone $\psi$; $\phi^i, \theta^i, \mathcal{B}^i=\varnothing$ for each source domain $D_{s}^{i}$.
\FOR{\textit{iter} in $M$}
\FOR{$D_{s}^{i}$ in $\mathcal{D}_{s}$}
\STATE Sample datapoints $\{(I^i, \mathbf{y}^i)\} \sim p_{d_s^i}{(I, \mathbf{y}})$; $\{(I^j, \mathbf{y}^j)\} \sim \{p_{d_s^j}{(I, \mathbf{y}})\}_{j \neq i}$ or $\mathcal{B}$ with 50\% probability.
\STATE Feature representations {$\mathbf{x}^i=f_{\psi}(I^i), \mathbf{x}^j=f_{\psi}(I^j)$}
\FOR{$k$ in $K$}
\STATE $\mathbf{x}^j_k \leftarrow \mathbf{x}^j_{k-1} - \lambda \mathbf{\nabla}_{\mathbf{x}} E_{\theta^i}(\mathbf{x}^j_{k-1}|\mathbf{z}^j) + \omega$, ~~~ $\mathbf{z}^j \sim q_{\phi^i}(\mathbf{z}^j|\mathbf{d}_{\mathbf{x}^j}), ~~ \omega \sim \mathcal{N}(0, \sigma)$.
\ENDFOR
\STATE $q_{\theta^i}(\mathbf{x}) \leftarrow \mathbf{x}^j_k$, ~~~~~ $p_d(\mathbf{x}) \leftarrow p_{d_s^i}(\mathbf{x}^i)$.
\STATE
$(\psi, \phi^i) \leftarrow (\psi, \phi^i) -\mu \nabla_{\psi, \phi^i} \mathcal{L}_{f}(p_d(\mathbf{x}))$
~~~~$\theta^i \leftarrow \theta^i -\mu \nabla_{\theta^i} \mathcal{L}_{f}(p_d(\mathbf{x}), q_{\theta^i}(\mathbf{x}))$.
\STATE $\mathcal{B}^i \leftarrow \mathcal{B}^i \cup \mathbf{x}^j_k$
\ENDFOR
\ENDFOR
\STATE {\hrulefill}
\STATE \underline{TEST TIME}
\STATE {\textbf{Require}}:
Target images $I_t$ from the target domain; trained backbone $\psi$; and domain-specific model $\phi^i, \theta^i$ for each source domain in $\left \{ D_{s}^{i} \right \}_{i=1}^{S}$.
\STATE Input feature representations {$\mathbf{x}_t=f_{\psi}(I_t)$}.
\FOR{$i$ in $\left \{1, \dots, S \right \}$}
\FOR{$k$ in $K$}
\STATE $\mathbf{x}_{t, k} \leftarrow \mathbf{x}_{t, k-1} - \lambda \mathbf{\nabla}_{\mathbf{x}} E_{\theta^i}(\mathbf{x}_{t, k-1}|\mathbf{z}_t) + \omega$, ~~~ $\mathbf{z}_t \sim p_{\phi^i}(\mathbf{z}_t|\mathbf{x}_t), ~~ \omega \sim \mathcal{N}(0, \sigma)$.
\ENDFOR
\STATE $\mathbf{y}^i_t=p_{\phi^i}(\mathbf{y}_t|\mathbf{x}_{t, k}, \mathbf{z}_t)$
\ENDFOR
\STATE \textbf{return} $\mathbf{y}_{t} = \frac{1}{S} \sum^{S}_{i=1} \mathbf{y}_t^i$.
\end{algorithmic}
\end{algorithm}
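For readers who prefer code, a minimal sketch of the test-time adaptation loop of Algorithm~\ref{algo:1} is given below; the \texttt{energy} module implementing $E_{\theta^i}(\mathbf{x}|\mathbf{z})$ is a hypothetical helper, and the default values mirror the implementation details reported below (20 steps, step size 50, noise standard deviation 0.001, per-value gradient clipping at 0.01).
\begin{verbatim}
import torch

def adapt_sample(x_t, z_t, energy, steps=20, step_size=50.0, noise_std=0.001):
    # Langevin-style update: x <- x - step_size * grad_x E(x|z) + omega
    x = x_t.clone().detach().requires_grad_(True)
    for _ in range(steps):
        e = energy(x, z_t).sum()
        grad = torch.autograd.grad(e, x)[0]
        grad = grad.clamp(-0.01, 0.01)                  # per-value gradient clipping
        x = x - step_size * grad + noise_std * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()
\end{verbatim}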
\section{Datasets and Implementation details}
\label{app:imp}
\textbf{Model.}
To be efficient, we train a shared backbone for all source domains, while training a domain-specific classifier and a neural-network-based energy function for each source domain.
{The feature extractor backbone is a basic Residual Network without the final fully connected layer (classifier).}
Both the prior distribution $p_{\phi}(\mathbf{z}|\mathbf{x})$ and posterior distribution $q_{\phi} (\mathbf{z}|\mathbf{d}_{\mathbf{x}})$ of the latent variable $\mathbf{z}$ are generated by a neural network $\phi$ that consists of four fully connected layers with ReLU activation, which outputs the mean and variance of the distribution.
The last layer of $\phi$ outputs both the mean and standard deviation of the distributions $p_{\phi}(\mathbf{z}|\mathbf{x})$ and $q_{\phi} (\mathbf{z}|\mathbf{d}_{\mathbf{x}})$ for further Monte Carlo sampling.
The dimension of $\mathbf{z}$ is the same as the feature representations $\mathbf{x}$, e.g., 512 for ResNet-18 and 2048 for ResNet-50.
$\mathbf{d}_{\mathbf{x}}$ is obtained as the center (mean) feature of the samples in the batch that share the category of the current sample $\mathbf{x}$ in each iteration.
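A possible way to compute $\mathbf{d}_{\mathbf{x}}$ is sketched below; this illustrative helper (not necessarily the exact implementation) assigns to each sample the mean feature of the same-class samples in the current batch.
\begin{verbatim}
import torch

def class_prototypes(features, labels):
    # features: [B, D] batch features; labels: [B] integer class labels
    protos = torch.empty_like(features)
    for c in labels.unique():
        mask = labels == c
        protos[mask] = features[mask].mean(dim=0)
    return protos          # protos[i] plays the role of d_x for sample i
\end{verbatim}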
Deployed on feature representations, the energy function consists of three fully connected layers with two dropout layers.
The latent variable $\mathbf{z}$ is incorporated into the energy function by concatenating with the feature representation $\mathbf{x}$.
The input dimension of the energy function is therefore double the feature dimension of the backbone output, i.e., 1024 for ResNet-18 and 4096 for ResNet-50.
We use the \textit{swish} function as activation in the energy functions \citep{du2020improved}.
The final output of the EBM is a scalar, which is processed by a sigmoid function following \cite{du2020improved} to bound the energy to the region $[0, 1]$ and improve the stability during training.
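Up to unstated details such as the hidden width and dropout rate, which are chosen here only for illustration, the energy function described above corresponds to a module of the following form (a sketch, not the exact implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalEnergy(nn.Module):
    # E_theta(x | z): three fully connected layers, two dropout layers,
    # swish (SiLU) activations, and a sigmoid bounding the output to [0, 1].
    def __init__(self, feat_dim=512, hidden=1024, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden),   # input: concatenation of x and z
            nn.SiLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return torch.sigmoid(self.net(torch.cat([x, z], dim=-1))).squeeze(-1)
\end{verbatim}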
During training, we introduce a replay buffer $\mathcal{B}$ to store the past updated samples from the modeled distribution \citep{du2019implicit}. By sampling from $\mathcal{B}$ with 50\% probability, we can initialize the negative samples with either the sample features from other source domains or the past Langevin dynamics procedure. This can increase the number of sampling steps and the sample diversity.
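The buffer logic can be summarized by the following sketch; the capacity and the 50\% reuse probability follow the description above, while the interface itself is an assumption made for illustration.
\begin{verbatim}
import random
import torch

class ReplayBuffer:
    def __init__(self, capacity=500):
        self.capacity, self.data = capacity, []

    def add(self, feats):
        # store past Langevin samples (detached feature vectors)
        self.data.extend(feats.detach().cpu().unbind(0))
        self.data = self.data[-self.capacity:]

    def init_negatives(self, fresh_feats):
        # with 50% probability, reuse stored samples instead of fresh
        # cross-domain features to initialize the Langevin chain
        if self.data and random.random() < 0.5:
            k = min(len(fresh_feats), len(self.data))
            idx = random.sample(range(len(self.data)), k)
            return torch.stack([self.data[i] for i in idx]).to(fresh_feats.device)
        return fresh_feats
\end{verbatim}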
\begin{table}[t]
\caption{\textbf{Implementation details} of our method per dataset and backbone.
}
\centering
\label{param}
\resizebox{0.8\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lcccc}
\toprule
Dataset & Backbone & Backbone learning rate & Step size & Number of steps \\ \midrule
\multirow{2}*{PACS} & ResNet-18 & 0.00005 & 50 & 20 \\
~ & ResNet-50 & 0.00001 & 50 & 20 \\ \midrule
\multirow{2}*{Office-Home} & ResNet-18 & 0.00001 & 100 & 20 \\
~ & ResNet-50 & 0.00001 & 100 & 20 \\ \midrule
Rotated MNIST & ResNet-18 & 0.00005 & 50 & 20 \\ \midrule
Fashion-MNIST & ResNet-18 & 0.00005 & 50 & 20 \\ \midrule
PHEME & DistilBERT & 0.00003 & 40 & 20 \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Training details and hyperparameters.}
We evaluate on PACS with both a ResNet-18 and ResNet-50 pretrained on ImageNet.
We use Adam optimization and train for 10,000 iterations with a batch size of 128. We set the learning rate to 0.00005 for ResNet-18, 0.00001 for ResNet-50, and 0.0001 for the energy-based model and classification model. We use 20 steps of Langevin dynamics sampling to adapt the target samples to source distributions, with a step size of 50.
We set the number of Monte Carlo samples $N$ in eq.~(\ref{ensemble}) to 10 for PACS.
Most of the experimental settings on Office-Home are the same as on PACS. The learning rate of the backbone is set to 0.00001 for both ResNet-18 and ResNet-50.
The number of Monte Carlo samples is 5.
For a fair comparison, we evaluate on rotated MNIST and Fashion-MNIST with ResNet-18, following \citep{piratla2020efficient}. The other settings are the same as for PACS.
On PHEME we conduct the experiments based on a pretrained DistilBERT. We set the learning rate as 0.00003 and use 20 steps of Langevin dynamics with a step size of 20.
We train all models on an NVIDIA Tesla V100 GPU for 10,000 iterations.
The learning rates of the backbone are different for different datasets as shown in Table~\ref{param}.
The learning rates of the domain-specific classifiers and energy functions are both set to 0.0001 for all datasets.
For each source domain, we randomly select 128 samples as a batch to train the backbone and classification model.
We also select 128 samples from the other source domains together with the current domain samples to train the domain-specific energy function.
We use a replay buffer with 500 feature representations and
apply spectral normalization
on all weights of the energy function \citep{du2019implicit}.
We use random noise with standard deviation $\sigma = 0.001$ and clip the gradients to individual value magnitudes of less than 0.01, similar to \citep{du2019implicit}.
The step size and number of steps for Langevin dynamics are different for different datasets as shown in Table~\ref{param}.
\begin{figure*}
\caption{\textbf{Benefit of energy-based test sample adaptation.}}
\label{visualization}
\end{figure*}
\begin{figure*}
\caption{\textbf{More visualizations of the iterative adaptation on PACS.}}
\label{visualization2}
\end{figure*}
\section{Visualizations}
\label{app:vis}
\textbf{More visualizations of the adaptation procedure.}
To further show the effectiveness of the iterative adaptation of target samples, we provide more visualizations on PACS.
Figure~\ref{visualization} visualizes the source domain features and the target domain features both before and after the adaptation to each individual source domain.
Figure~\ref{visualization2} visualizes more of the iterative adaptation procedure of the target samples. Subfigures in different rows show the adaptation of samples from different target domains to the source domains.
Similar to the visualizations in the main paper, the target samples gradually approach the source data distributions during the iterative adaptation.
Therefore, the predictions of the target samples on the source domain classifier become more accurate after adaptation.
\begin{figure*}
\caption{\textbf{Failure case visualizations of our method on PACS.}}
\label{failure}
\end{figure*}
\textbf{Failure cases.}
We also provide some failure cases on PACS in Figure~\ref{failure} to gain more insight into our method.
Our method is confused by samples that contain objects of different categories (first row) and by samples with multiple objects or complex backgrounds (last three rows).
A possible reason is that noisy information is contained in the latent variable of these samples, leading to adaptation without a clear direction. This manifests either as wrong adaptation directions, e.g., the visualization in row 1, column 4, or as unstable updates that fluctuate within small regions, e.g., the visualizations in row 2, column 3 and row 3, column 4.
Obtaining the latent variable with more accurate and clear categorical information can be one solution for these failure cases.
We can also solve the problem by achieving more stable adaptations with optimization techniques like Nesterov momentum \citep{nesterov1983method}.
Moreover, although the method fails in these cases, the adaptation of the target sample to some source domains still improves the performance, e.g., the adaptation of the \textit{photo} sample (row 1, column 2) and the \textit{cartoon} sample (row 3, column 3) to the \textit{art-painting} domain, and the adaptation of the \textit{sketch} sample (row 4, column 4) to the \textit{cartoon} domain, which further demonstrates the effectiveness of our iterative sample adaptation through the energy-based model.
The results motivate another solution for these failure cases, which is to learn to select the best source domain, or top-n source domains for adaptation and prediction of each target sample.
We leave these explorations for future work.
\section{More experimental results}
\label{app:results}
\begin{figure*}
\caption{\textbf{Visualizations of the target features $\mathbf{x}_t$ and latent variables $\mathbf{z}_t$ of the target samples.}}
\label{visualize_xz}
\end{figure*}
\textbf{Analyses and discussions of the categorical latent variables.}
{
In the proposed method, the categorical latent representation for the test sample will have high fidelity to the correct class.
This is guaranteed by the training procedure of our method.
As shown in the training objective function (eq.~\ref{eq10explicitall}), we minimize the KL divergence to encourage the prior $p_{\phi}(\mathbf{z}|\mathbf{x})$ to be close to the variational posterior $q_{\phi}(\mathbf{z}|\mathbf{d}_\mathbf{x})$. $\mathbf{d}_\mathbf{x}$ is essentially the class prototype containing the categorical information. By doing so, we train the inference model $p_{\phi}(\mathbf{z}|\mathbf{x})$ to learn to extract categorical information from a single sample.
Moreover, we also supervise the sample adaptation procedure by the predicted log-likelihood of the adapted samples (the last term in eq. (\ref{eq10explicitall})). The supervision is inherent in the objective function of our discriminative energy-based model, as in the derivation of eq. (\ref{eq7:lall}) and eq. (\ref{eq10explicitall}). Due to this supervision, the model is trained to learn to adapt out-of-distribution samples to the source distribution while maintaining the correct categorical information conditioned on $\mathbf{z}$.
Although trained only on source domains, this ability generalizes to the target domain since the model mimics different domain shifts during training.}
To further show that $\mathbf{z}$ captures the categorical information in $\mathbf{x}$, we visualize the features $\mathbf{x}_t$ and latent variables $\mathbf{z}_t$ of the target samples in Figure~\ref{visualize_xz}, which shows that $\mathbf{z}_t$ indeed captures the categorical information.
Moreover, $\mathbf{z}_t$ is more discriminative than $\mathbf{x}_t$, as shown in the figure, even though $\mathbf{z}_t$ is approximated from only $\mathbf{x}_t$ during inference.
{Moreover, the categorical latent variable benefits the correctness of the model in the case that the target samples are adapted to previously unexplored regions with very large numbers of steps.
Our optimization objective is to minimize the energy to adapt the sample; therefore it is possible that the energy of the target samples becomes lower than that of the source data after a very large number of steps.
In this case, the adapted samples could arrive in previously unexplored regions due to the limit of source data.
This is further demonstrated in Figure \ref{accene}, where the performance of the adapted samples drops after large numbers of steps, once a low energy value is reached. Additionally, the classifier may not be well trained in these unexplored regions, which might also contribute to the performance drop. This is also one reason why we set the number of steps to a small value, e.g., 20 or 50.
The categorical latent variable benefits the correctness of the model in such cases, as can also be seen in Figure \ref{accene}.
The oracle model shows almost no performance degradation even with small energy values after adaptation.
The model with the latent variable $p(\mathbf{z}|\mathbf{x})$ is also more robust to the step numbers and energy values than the model without $\mathbf{z}$.
These results show the role of the latent variable in preserving the categorical information during adaptation and in partially correcting predictions after adaptation.}
With the categorical latent variable $\mathbf{z}$, it is natural to make the final prediction directly by $p_{\phi}(\mathbf{y}|\mathbf{z})$ without the sample adaptation procedure.
However, here we would like to clarify that it is sub-optimal.
The latent variable is dedicated to preserving the categorical information in $\mathbf{x}$ during adaptation. It still contains the domain information of the target samples. Therefore, it is not optimal to directly make predictions on the latent variable $\mathbf{z}$ due to the domain shifts between the $\mathbf{z}$ and the source-trained classifiers.
By contrast, the proposed method moves the target features close to the source distributions to address domain shifts while preserving the categorical information during adaptation.
To show how direct prediction on $\mathbf{z}$ performs, we provide the experimental results of making predictions only from $\mathbf{z}$ in Table~\ref{predonz}. As expected, it is worse than prediction on the adapted target features $\mathbf{x}$, supporting the analysis above.
\begin{table}[t]
\centering
\caption{{\textbf{
Analyses on the categorical latent variable.} As expected, prediction on the adapted target samples $\mathbf{x}$ performs better than prediction directly on the categorical latent variable $\mathbf{z}$.
}}
\label{predonz}
\begin{subtable}[t]{0.9\linewidth}
\centering
\caption{Overall comparisons on PACS and Office-Home.}
\centering
\label{zall}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lllll}
\toprule
~ & \multicolumn{2}{c}{\textbf{PACS}} & \multicolumn{2}{c}{\textbf{Office-Home}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
~ & ResNet-18 & ResNet-50 & ResNet-18 & ResNet-50 \\ \midrule
predict directly on $\mathbf{z}$ & 82.46 \scriptsize{$\pm$0.34} & 85.95 \scriptsize{$\pm$0.33} & 64.49 \scriptsize{$\pm$0.25} & 70.60 \scriptsize{$\pm$0.53} \\
predict on adapted target samples $\mathbf{x}$
& \textbf{84.92} \scriptsize{$\pm$0.59} & \textbf{87.70} \scriptsize{$\pm$0.28} & \textbf{66.31} \scriptsize{$\pm$0.21} & \textbf{72.07} \scriptsize{$\pm$0.38}\\
\bottomrule
\end{tabular}
}
\end{subtable}
\begin{subtable}[t]{0.99\linewidth}
\centering
\caption{{Detailed comparisons on PACS.}}
\centering
\label{zpacs}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{llllll}
\toprule
~ & \textbf{Photo} & \textbf{Art-painting} & \textbf{Cartoon} & \textbf{Sketch} & \textit{Mean} \\ \midrule
\rowcolor{mColor2}
\multicolumn{6}{l}{\textbf{Predict directly on $\mathbf{z}$}}\\
No adaptation & 94.22 \scriptsize{$\pm$0.25} & 79.52 \scriptsize{$\pm$0.21} & 80.46 \scriptsize{$\pm$0.43} & 75.63 \scriptsize{$\pm$0.68} & 82.46 \scriptsize{$\pm$0.34} \\
\rowcolor{mColor2}
\multicolumn{6}{l}{\textbf{Predict on $\mathbf{z}$ with model adaptation (Tent)}}\\
Adaptation with 1 sample per step & 80.49 \scriptsize{$\pm$0.27} & 44.14 \scriptsize{$\pm$0.38} & 51.49 \scriptsize{$\pm$0.44} & 30.28 \scriptsize{$\pm$0.66} & 51.60 \scriptsize{$\pm$0.37} \\
Adaptation with 16 samples per step & 93.65 \scriptsize{$\pm$0.33} & 80.20 \scriptsize{$\pm$0.24} & 76.90 \scriptsize{$\pm$0.52} & 68.49 \scriptsize{$\pm$0.72} & 79.81 \scriptsize{$\pm$0.31} \\
Adaptation with 64 samples per step & 96.04 \scriptsize{$\pm$0.33} & 81.91 \scriptsize{$\pm$0.37} & 80.81 \scriptsize{$\pm$0.64} & 76.33 \scriptsize{$\pm$0.65} & 83.77 \scriptsize{$\pm$0.41} \\
Adaptation with 128 samples per step & \textbf{97.25} \scriptsize{$\pm$0.24} & \textbf{84.91} \scriptsize{$\pm$0.31} & \textbf{81.12} \scriptsize{$\pm$0.47} & 76.80 \scriptsize{$\pm$0.83} & \textbf{85.02} \scriptsize{$\pm$0.49} \\
\rowcolor{mColor2}
\multicolumn{6}{l}{\textbf{Predict on adapted target samples $\mathbf{x}$ with our method}}\\
Adaptation with 1 sample $\mathbf{x}$ & 96.05 \scriptsize{$\pm$0.37} & 82.28 \scriptsize{$\pm$0.31} & \textbf{81.55} \scriptsize{$\pm$0.65} & \textbf{79.81} \scriptsize{$\pm$0.41} & \textbf{84.92} \scriptsize{$\pm$0.59} \\
\bottomrule
\end{tabular}
}
\end{subtable}
\end{table}
To show the advantages of our method, we also combine the prediction of the latent variable $\mathbf{z}$ with model adaptation methods.
We use the online adaptation proposed by \cite{wang2021tent}, where all target samples are utilized to adapt the source-trained models in an online manner. The model keeps updating step by step. In each step, the model is adapted to one batch of target samples.
As shown in Table~\ref{zpacs}, with large numbers of target samples per step, e.g., 128, the adaptation with Tent is competitive. However, when the number of samples for online adaptation is small, e.g., 1 and 16, the performance of the adapted model even drops, especially for single sample adaptation. By contrast, our method adapts each target sample to the source distribution. All target samples are adapted and predicted equally and individually. The overall performance of our method is comparable to Tent with 128 samples per adaptation step.
\begin{table}[t]
\caption{\textbf{Training cost of the proposed method.}
Compared with ERM, our method has about 20\% more parameters, most of which come from the energy functions of source domains. Similar to test time, the time cost of training increases along with the number of steps of the energy-based model.
}
\centering
\label{traincost}
\resizebox{0.7\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lccc}
\toprule
& Parameters & Adaptation steps & 10000 iterations training time \\ \midrule
ERM & 11.18M & - & ~6.2 h \\ \midrule
\multirow{5}*{\textbf{\textit{This paper}}} & \multirow{5}*{13.73M} & ~20 & ~7.9 h \\
~ & ~ & ~40 & ~9.4 h \\
~ & ~ & ~60 & 10.6 h \\
~ & ~ & ~80 & 12.1 h \\
~ & ~ & 100 & 14.0 h \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Time cost with different adaptation steps.}
As the number of steps increases, both the training and test time costs consistently increase for all target domains.
Without adaptation, the test time cost for one test batch is about 0.05 seconds.
The 20-step adaptation takes about 0.1 seconds extra, which increases to 0.25 seconds with 50 steps.
The test time increases by more than 0.5 seconds for 100 steps, which is ten times that without adaptation and might limit the application of the proposal.
The training time cost for 100 steps is more than twice that of ERM, as shown in Table~\ref{traincost}.
Since the extra time cost is mainly caused by the iterative adaptation, potential solutions can be speeding up the Langevin dynamics with some optimization techniques like Nesterov momentum \citep{nesterov1983method}, or exploring some one-step methods for the target sample adaptation.
In other experiments on PACS in the paper we use 20 steps for all target domains considering both the overall performance and the time cost.
\begin{table}[t]
\caption{\textbf{Comparisons on PACS.} Our method achieves best mean accuracy with a ResNet-18 backbone and is competitive with ResNet-50.
}
\centering
\label{pacs}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lllllll}
\toprule
Backbone & Method & \textbf{Photo} & \textbf{Art-painting} & \textbf{Cartoon } & \textbf{Sketch } & \textit{Mean} \\ \midrule
\multirow{7}*{ResNet-18} & \cite{dou2019domain} & 94.99 & 80.29 & 77.17 & 71.69 & 81.04 \\
~ & \cite{iwasawa2021test} & - & - & - & - & 81.40 \\
~ & \cite{zhao2020domain} & \textbf{96.65} & 80.70 & 76.40 & 71.77 & 81.46 \\
~ & \cite{wang2021tent} & 95.49 & 81.55& 77.67 & 77.64 & 83.09 \\
~ & \cite{zhou2021mix} & 96.10 & \textbf{84.10} & 78.80 & 75.90 & 83.70 \\
~ & \cite{xiao2022learning} & 95.87 & 82.02 & 79.73 & 78.96 & 84.15 \\
~ & \textit{\textbf{This paper}} & 96.05 \scriptsize{$\pm$0.37} & 82.28 \scriptsize{$\pm$0.31} & \textbf{81.55} \scriptsize{$\pm$0.65} & \textbf{79.81} \scriptsize{$\pm$0.41} & \textbf{84.92} \scriptsize{$\pm$0.59}\\
\midrule
\multirow{9}*{ResNet-50} & \cite{dou2019domain} & 95.01 & 82.89 & 80.49 & 72.29 & 82.67 \\
~ & \cite{dubey2021adaptive} & - & - & - & - & 84.50 \\
~ & \cite{iwasawa2021test} & - & - & - & - & 85.10 \\
~ & \cite{zhao2020domain} & \textbf{98.25} & 87.51 & 79.31 & 76.30 & 85.34 \\
~ & \cite{gulrajani2020search} & 97.20 & 84.70 & 80.80 & 79.30 & 85.50 \\
~ & \cite{wang2021tent} & 97.96 & 86.30 & 82.53 & 78.11 & 86.23 \\
~ & \cite{seo2020learning} & 95.99 & 87.04 & 80.62 & \textbf{82.90} & 86.64 \\
~ & \cite{xiao2022learning} & 97.88 & \textbf{88.09} & 83.83 & 80.21 & \textbf{87.51} \\
~ & \textit{\textbf{This paper}} & 97.67 \scriptsize{$\pm$0.14} & \textbf{88.00} \scriptsize{$\pm$0.29} & \textbf{84.75} \scriptsize{$\pm$0.39} & 80.40 \scriptsize{$\pm$0.38} & \textbf{87.70} \scriptsize{$\pm$0.28} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\begin{center}
\caption{\textbf{Comparisons on Office-Home.} Our method achieves the best mean accuracy using both a ResNet-18 and ResNet-50 backbone.
}
\label{office}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lllllll}
\toprule
Backbone & Method & \textbf{Art} & \textbf{Clipart} & \textbf{Product} & \textbf{Real World} & \textit{Mean} \\ \midrule
\multirow{4}*{ResNet-18} & \cite{iwasawa2021test} & 47.00 & 46.80 & 68.00 & 66.10 & 57.00 \\
~ & \cite{wang2021tent} & 56.45 & 52.06 & 73.19 & 74.82 & 64.13
\\
~ & \cite{xiao2022learning} & 59.39 & \textbf{53.94} & \textbf{74.68} & 76.07 & 66.02 \\
~ & \textit{\textbf{This paper}} & \textbf{60.08} \scriptsize{$\pm$0.33} & \textbf{53.93} \scriptsize{$\pm$0.34} & \textbf{74.50} \scriptsize{$\pm$0.39} & \textbf{76.74} \scriptsize{$\pm$0.24} & \textbf{66.31} \scriptsize{$\pm$0.21}\\
\midrule
\multirow{5}*{ResNet-50} & \cite{gulrajani2020search} & 61.30 & 52.40 & 75.80 & 76.60 & 66.50 \\
~ & \cite{wang2021tent} & 62.12 & 56.65 & 75.61 & 77.58 & 67.99 \\
~ & \cite{dubey2021adaptive} & - & - & - & - & 68.90 \\
~ & \cite{xiao2022learning} & 67.21 & 57.97 & 78.61 & 80.47 & 71.07 \\
~ & \textit{\textbf{This paper}} & \textbf{69.33} \scriptsize{$\pm$0.14} & \textbf{58.37} \scriptsize{$\pm$0.30} & \textbf{79.29} \scriptsize{$\pm$0.32} & \textbf{81.26} \scriptsize{$\pm$0.26} & \textbf{72.07} \scriptsize{$\pm$0.38}\\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[t]
\begin{minipage}[t]{\linewidth}
\centering
\caption{\textbf{Generalization beyond image data.} Rumour detection on the PHEME microblog dataset. Our method achieves the best overall performance and is competitive in each domain.
}
\centering
\label{text}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lllllll}
\toprule
& \textbf{Charlie Hebdo} & \textbf{Ferguson} & \textbf{German Wings } & \textbf{Ottawa Shooting } & \textbf{Sydney Siege} & \textit{Mean} \\ \midrule
ERM Baseline & 79.4 \scriptsize{$\pm$0.25} & 77.1 \scriptsize{$\pm$0.36} & \textbf{75.7} \scriptsize{$\pm$0.12} & 68.2 \scriptsize{$\pm$0.48} & 75.0 \scriptsize{$\pm$0.28} & 75.1 \scriptsize{$\pm$0.29} \\
\cite{wang2021tent} & 80.1 \scriptsize{$\pm$0.18} & 76.9 \scriptsize{$\pm$0.56} & 74.7 \scriptsize{$\pm$0.52} & \textbf{72.0} \scriptsize{$\pm$0.48} & 75.4 \scriptsize{$\pm$0.34} & 75.8 \scriptsize{$\pm$0.23} \\
\cite{xiao2022learning} & 81.0 \scriptsize{$\pm$0.52} & 77.2 \scriptsize{$\pm$0.25} & \textbf{75.7} \scriptsize{$\pm$0.31} & 70.0 \scriptsize{$\pm$0.46} & \textbf{76.5} \scriptsize{$\pm$0.22} & 76.1 \scriptsize{$\pm$0.21} \\
\textit{\textbf{This paper}} & \textbf{81.8} \scriptsize{$\pm$0.43} & \textbf{77.8} \scriptsize{$\pm$0.37}& 75.3 \scriptsize{$\pm$0.14} & \textbf{71.9} \scriptsize{$\pm$0.28} & 75.6 \scriptsize{$\pm$0.15} & \textbf{76.5} \scriptsize{$\pm$0.18}\\
\bottomrule
\end{tabular}
}
\end{minipage}
\begin{minipage}[t]{\linewidth}
\centering
\caption{
\textbf{
Experiments on single-source domain generalization.} The model is trained on original data and evaluated on 15 different types of corruption. Our method is competitive with \cite{sun2020test}, \cite{rusak2020simple} and \cite{hendrycks2020augmix}, and is outperformed by \cite{wang2021tent}. Mimicking good domain shifts during training is important for our method.}
\centering
\label{ssdg}
\resizebox{0.7\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{lcc}
\toprule
Method & \textbf{CIFAR-10-C} & \textbf{ImageNet-C} \\ \midrule
\cite{yun2019cutmix}
& {31.1} & -\\
\cite{guo2019mixup}
& {25.8} & -\\
\cite{hendrycks2020augmix}
& {17.4} & 51.7\\
\cite{rusak2020simple}
& - & 50.2\\
\cite{sun2020test}
& 17.5 & -\\
\cite{wang2021tent}
& \textbf{14.3} & \textbf{44.0}\\
\textit{\textbf{This paper}} (noisy data as negative samples)
& 21.5 & 55.8\\
\textit{\textbf{This paper}} (corrupted data as negative samples)
& 17.0 & 51.1\\
\bottomrule
\end{tabular}
}
\end{minipage}
\end{table}
\textbf{Detailed comparisons.}
We provide the detailed performance of each target domain on PACS (Table~\ref{pacs}), Office-Home (Table~\ref{office}), and PHEME (Table~\ref{text}).
On PACS, our method achieves competitive results on each target domain and the best overall performance with both ResNet-18 and ResNet-50 as the backbone.
Moreover, our method performs better than most of the recent model adaptation methods \citep{wang2021tent,dubey2021adaptive,iwasawa2021test,xiao2022learning}.
The conclusion on Office-Home and PHEME is similar to that on PACS. We achieve competitive and even better performance on each target domain.
\textbf{Results on single source domain generalization.}
To show the ability of our method to handle corruption distribution shifts and single-source domain generalization, we conduct experiments on CIFAR-10-C and ImageNet-C. We train the model on original data and evaluate it on the data with 15 types of corruption.
Since our method needs to mimic distribution shifts to train the discriminative energy-based model during training, for the single source domain setting, we generate the negative samples by adding random noise to the image and features of the clean data. We also use the other 4 corruption types (not contained in the evaluation corruption types) as the negative samples during training, which we regard as “corrupted data as negative samples”. Note that these corrupted data are only used as negative samples to train the energy-based model.
As shown in Table~\ref{ssdg}, by mimicking better domain shifts during training, our method achieves competitive results with \cite{sun2020test}.
{
We also compare our method with some data-augmentation-based methods (e.g., Mixup \citep{guo2019mixup}, CutMix \citep{yun2019cutmix}, and AugMix \citep{hendrycks2020augmix}); our sample adaptation is also competitive with these methods.}
The proposed method performs worse with a single source domain, although we generate extra negative samples to mimic the domain shifts. The reason can be that the randomly generated domain shifts do not well simulate the domain shift at test time.
{\textbf{Analyses for ensemble prediction.}}
{We conduct several experiments on PACS to analyze the ensemble inference in our method.
We first provide the results of each source-domain-specific classifier before and after sample adaptation. As shown in Table \ref{persource}, although the performance before and after adaptation to different source domains differs due to domain shifts, the proposed sample adaptation performs better for most of the source domains.
Moreover, ensemble inference further improves the overall performance both without and with sample adaptation, and the results with sample adaptation remain better, as expected.}
We also try different aggregation methods to make the final predictions. The results are provided in Table~\ref{simpred}.
The best results in Table \ref{persource} are comparable, but it is difficult to identify the best source domain for adaptation before obtaining the results.
We tried to find the closest source domain of each target sample by the cosine similarity of feature representations and by the predicted confidence. We also tried to aggregate the predictions by a weighted average according to the cosine similarity.
With cosine similarity, the weighted-average results are comparable to the common ensemble method used in the paper, but the results of adaptation to the closest source domain are not as good.
The reason may be that the cosine measure cannot estimate the domain relationships well, showing that it is difficult to reliably estimate the relationships between source domains and single target samples.
The results obtained by using the most confident adaptation are also not as good as the ensemble method, although comparable. The reason may be that the ensemble introduces uncertainty into the predictions, which is more robust.
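The aggregation variants compared in Table~\ref{simpred} can be summarized by the sketch below; the tensor shapes (\texttt{preds} of shape $[S, C]$ containing the softmax predictions after adapting one target sample to each of the $S$ source domains, and \texttt{sims} of shape $[S]$ containing the cosine similarities to the source domains) are assumptions made for illustration.
\begin{verbatim}
import torch
import torch.nn.functional as F

def aggregate(preds, sims):
    closest   = preds[sims.argmax()]                                  # closest source domain
    weighted  = (F.softmax(sims, dim=0).unsqueeze(1) * preds).sum(0)  # similarity-weighted
    confident = preds[preds.max(dim=-1).values.argmax()]              # most confident prediction
    ensemble  = preds.mean(dim=0)                                     # plain ensemble (paper)
    return closest, weighted, confident, ensemble
\end{verbatim}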
\begin{table}[t]
\centering
\caption{{\textbf{
Sample adaptation to each source domain on PACS.} The experiments are conducted on ResNet-18. Due to the domain shifts between the target domain and different source domains, the performance before and after adaptation are different. The results with sample adaptation to most of the source domains are better than those without adaptation. The ensemble inference further improves the overall performance, where the results with sample adaptation are still better than that without adaptation.
}}
\label{persource}
\begin{subtable}[t]{0.495\linewidth}
\centering
\caption{{Photo}}
\centering
\label{pspho}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{llllll}
\toprule
~ & Art-painting & Cartoon & Sketch & \textit{Ensemble} \\ \midrule
w/o adaptation
&95.79 \scriptsize{$\pm$0.23} & 95.03 \scriptsize{$\pm$0.27} & 95.05 \scriptsize{$\pm$0.42} & 95.12 \scriptsize{$\pm$0.41} \\
w/ adaptation
& 95.81 \scriptsize{$\pm$0.27} & 94.69 \scriptsize{$\pm$0.21} & 95.99 \scriptsize{$\pm$0.45} & 96.05 \scriptsize{$\pm$0.37} \\
\bottomrule
\end{tabular}
}
\end{subtable}
\begin{subtable}[t]{0.495\linewidth}
\centering
\caption{{Art-painting}}
\centering
\label{psart}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{llllll}
\toprule
~ & Photo & Cartoon & Sketch & \textit{Ensemble} \\ \midrule
w/o adaptation
&78.52 \scriptsize{$\pm$0.43} & 79.68 \scriptsize{$\pm$0.37} & 79.83 \scriptsize{$\pm$0.52} & 79.79 \scriptsize{$\pm$0.64} \\
w/ adaptation
& 81.49 \scriptsize{$\pm$0.33} & 82.19 \scriptsize{$\pm$0.35} & 80.81 \scriptsize{$\pm$0.43} & 82.28 \scriptsize{$\pm$0.31} \\
\bottomrule
\end{tabular}
}
\end{subtable}
\begin{subtable}[t]{0.495\linewidth}
\centering
\caption{{Cartoon}}
\centering
\label{pscart}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{llllll}
\toprule
~ & Photo & Art-painting & Sketch & \textit{Ensemble} \\ \midrule
w/o adaptation
&79.05 \scriptsize{$\pm$0.33} & 78.93 \scriptsize{$\pm$0.41} & 78.80 \scriptsize{$\pm$0.55} & 79.15 \scriptsize{$\pm$0.37} \\
w/ adaptation
& 81.09 \scriptsize{$\pm$0.38} & 80.44 \scriptsize{$\pm$0.31} & 80.32 \scriptsize{$\pm$0.71} & 81.55 \scriptsize{$\pm$0.65} \\
\bottomrule
\end{tabular}
}
\end{subtable}
\begin{subtable}[t]{0.495\linewidth}
\centering
\caption{{Sketch}}
\centering
\label{pssket}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{llllll}
\toprule
~ & Photo & Art-painting & Cartoon & \textit{Ensemble} \\ \midrule
w/o adaptation
&78.32 \scriptsize{$\pm$0.56} & 76.98 \scriptsize{$\pm$0.73} & 76.16 \scriptsize{$\pm$0.82} & 79.28 \scriptsize{$\pm$0.82} \\
w/ adaptation
& 79.72 \scriptsize{$\pm$0.43} & 79.69 \scriptsize{$\pm$0.67} & 79.77 \scriptsize{$\pm$0.45} & 79.81 \scriptsize{$\pm$0.41} \\
\bottomrule
\end{tabular}
}
\end{subtable}
\end{table}
\begin{table}[t]
\centering
\caption{{
\textbf{Analyses of different aggregation methods for the predictions.} The experiments are conducted on PACS using ResNet-18. The results with different aggregation methods are similar while ensemble inference performs slightly better.
}}
\centering
\label{simpred}
\resizebox{0.99\columnwidth}{!}{
\setlength\tabcolsep{4pt}
\begin{tabular}{llllll}
\toprule
Aggregation methods & Photo & Art-painting & Cartoon & Sketch & \textit{Mean} \\ \midrule
Adaptation to the closest source domain
&95.41 \scriptsize{$\pm$0.28} & 79.86 \scriptsize{$\pm$0.41} & 79.67 \scriptsize{$\pm$0.44} & 78.97 \scriptsize{$\pm$0.72} & 83.48 \scriptsize{$\pm$0.43} \\
Weighted average of adaptation to different source domains
&95.93 \scriptsize{$\pm$0.33} & 82.18 \scriptsize{$\pm$0.37} & 81.24 \scriptsize{$\pm$0.52} & 79.54 \scriptsize{$\pm$0.77} & 84.76 \scriptsize{$\pm$0.55} \\
Most confident prediction after adaptation
&95.77 \scriptsize{$\pm$0.25} & 81.93 \scriptsize{$\pm$0.31} & 80.67 \scriptsize{$\pm$0.65} & 79.25 \scriptsize{$\pm$0.62} & 84.41 \scriptsize{$\pm$0.32} \\
Ensemble (This paper)
& 96.05 \scriptsize{$\pm$0.37} & {82.28} \scriptsize{$\pm$0.31} & {81.55} \scriptsize{$\pm$0.65} & {79.81} \scriptsize{$\pm$0.41} & {84.92} \scriptsize{$\pm$0.59} \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Benefit for larger domain gaps.}
To show the benefit of our proposal for domain generalization scenarios with large gaps, we conduct experiments on rotated MNIST and Fashion MNIST.
The results are shown in Figure~\ref{mnist}.
The models are trained on source domains with rotation angles from $15^\circ$ to $75^\circ$, $30^\circ$ to $60^\circ$, and $30^\circ$ to $45^\circ$, and always tested on target domains with angles of $0^\circ$ and $90^\circ$.
As the number of domains seen during training decreases, the domain gap between source and target increases, and the performance gap between our method and the others becomes more pronounced.
Notably, the comparison with the recent test-time adaptation of \cite{xiao2022learning}, which adapts the model to each target sample, shows that adapting target samples handles larger domain gaps better than adapting the model.
\begin{figure*}
\caption{\textbf{Benefit for larger domain gaps.}}
\label{mnist}
\end{figure*}
\end{document}
\begin{document}
\title[Normalizer of $\Gamma_0(N)$]{The group structure of the normalizer of $\Gamma_0(N)$}
\author{Francesc Bars }
\date{First version: December 22nd, 2006}
\thanks{MSC: 20H05(19B37,11G18)}
\maketitle
\begin{center}
\begin{small}
\begin{abstract}
We determine the group structure of the normalizer of
$\Gamma_0(N)$ in $SL_2(\field{R})$ modulo $\Gamma_0(N)$. These results correct the Atkin-Lehner statement
\cite[Theorem 8]{AL}.
\end{abstract}
\end{small}
\end{center}
\section{Introduction}
The modular curves $X_0(N)$ contain deep arithmetical information.
These curves are the Riemann surfaces obtained by completing with
the cusps the upper half plane modulo the
modular subgroup $$\Gamma_0(N)=\{\left(\begin{array}{cc} a&b\\
Nc&d\\
\end{array}\right)\in SL_2(\mathbb{Z})|c\in\field{Z}\}.$$
It is clear that the elements in the normalizer of $\Gamma_0(N)$
in $SL_2(\field{R})$ induce automorphisms of $X_0(N)$, and moreover one
obtains in this way all automorphisms of $X_0(N)$ for $N\neq 37$
and $63$ \cite{KM}. This is one reason, coming from the modular
world, for the interest in computing the group structure of
this normalizer modulo $\Gamma_0(N)$.
Morris Newman obtained a result for this normalizer in terms of
matrices \cite{N}, \cite{N2}; see also the work of Atkin-Lehner and
Newman \cite{NL}. Moreover, Atkin-Lehner state without proof the
group structure of this normalizer modulo $\Gamma_0(N)$
\cite[Theorem 8]{AL}. In this paper we correct this statement and
we obtain the right structure of the normalizer modulo
$\Gamma_0(N)$. The results are a generalization of some results
obtained in \cite{Ba}.
\section{The Normalizer of $\Gamma_0(N)$ in $SL_2(\field{R})$}
Denote by $\operatorname{Norm}(\Gamma_0(N))$ the normalizer of
$\Gamma_0(N)$ in $SL_2(\field{R})$.
\begin{thm}[Newman]\label{teo1} Let $N=\sigma^2 q$ with $\sigma,q\in\field{N}$ and $q$ square-free.
Let $\epsilon$ be the $\operatorname{gcd}$ of all integers of the
form $a-d$ where $a,d$ are integers such that $\left(\begin{array}{cc} a&b\\
Nc&d\\
\end{array}\right)\in\Gamma_0(N)$. Denote by $v:=v(N):=\operatorname{gcd}(\sigma,\epsilon)$.
Then $M\in \operatorname{Norm}(\Gamma_0(N))$ if and only if $M$ is
of the form
$$\sqrt{\delta}\left(\begin{array}{cc} r\Delta&\frac{u}{v\delta\Delta}\\
\frac{sN}{v\delta\Delta}&l\Delta\\
\end{array}\right)$$
with $r,u,s,l\in\field{Z}$ and $\delta|q$, $\Delta|\frac{\sigma}{v}$.
Moreover $v=2^{\mu}3^{w}$ with $\mu=\min(3,[\frac{1}{2}v_2(N)])$
and $w=\min(1,[\frac{1}{2}v_3(N)])$, where $v_{p_i}(N)$ is the
valuation at the prime $p_i$ of the integer $N$.
\end{thm}
This theorem is proved by Morris Newman in \cite{N} \cite{N2}, see
also \cite[p.12-14]{Ba}.
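For instance, for $N=48=4^2\cdot 3$ one has $v_2(48)=4$ and $v_3(48)=1$, so that $\mu=\min(3,2)=2$ and $w=\min(1,0)=0$, hence $v(48)=2^{2}=4$; this is the value of $v$ that enters the counterexample for $N=48$ given below.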
Observe that if $\operatorname{gcd}(\delta\Delta,6)=1$, then
$\operatorname{gcd}(\delta\Delta^2,\frac{N}{\delta\Delta^2})=1$,
because the determinant is one.
\section{The group structure of $\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$}
In this section we obtain some partial results on the group
structure of $\operatorname{Norm}(\Gamma_0(N))$. Let us first
introduce some particular elements of $SL_2(\field{R})$.
\begin{defin} Let $N$ be fixed. For every divisor $m'$ of $N$ with
$\operatorname{gcd}(m',N/m')=1$ the Atkin-Lehner involution
$w_{m'}$ is defined as follows,
$$w_{m'}=\frac{1}{\sqrt{m'}}\left(\begin{array}{cc} m'a&b\\
Nc&m'd\\
\end{array}\right)\in SL_2(\field{R})$$
with $a,b,c,d\in\field{Z}$.
\end{defin}
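For example (one concrete choice of entries among many), for $N=6$ and $m'=2$ one may take $a=2$, $b=c=d=1$, giving
$$w_{2}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 4&1\\
6&2\\
\end{array}\right),$$
which indeed has determinant $\frac{1}{2}(8-6)=1$; one checks that $w_2^2=\left(\begin{array}{cc} 11&3\\
18&5\\
\end{array}\right)\in\Gamma_0(6)$, so $w_2$ induces an involution modulo $\Gamma_0(6)$.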
Denote by $S_{v'}=\left(\begin{array}{cc} 1&\frac{1}{v'}\\
0&1\\
\end{array}\right)$ with $v'\in\field{N}\setminus\{0\}$. Atkin-Lehner claimed in \cite{AL} the
following:
\begin{claim}[Atkin-Lehner]\cite[Theorem 8]{AL}\label{AL}
The quotient $\operatorname{Norm}(\Gamma_{0}(N))/\Gamma_0(N)$ is
the direct product of the following groups:
\begin{enumerate}
\item \{$ w_{q^{\upsilon_q(N)}}$\} for every prime $q\ge5$ with
$q\mid N$. \item
\begin{enumerate}
\item If $\upsilon_{3}(N)=0$, \{$1$\}
\item If $\upsilon_{3}(N)=1$, \{$w_{3}$\}
\item If $\upsilon_{3}(N)=2$, \{$w_{9},S_{3}$\}; satisfying $w_{9}^2=S_{3}^3=(w_{9}S_{3})^3=1$ (factor of order 12)
\item If $\upsilon_{3}(N)\ge3$; \{$w_{3^{\upsilon_{3}(N)}},S_{3}$\}; where $w_{3^{\upsilon_{3}(N)}}^2=S_{3}^3=1$ and $w_{3^{\upsilon_{3}(N)}}S_{3}w_{3^{\upsilon_{3}(N)}}$
commutes
with $S_{3}$ (factor group with 18 elements)
\end{enumerate}
\item Let $\lambda=\upsilon_{2}(N)$ and
$\mu=\operatorname{min}(3,[\frac{\lambda}{2}])$, and denote
$\upsilon''=2^{\mu}$; then we have:
\begin{enumerate}
\item If $\lambda=0$ ; \{$1$\}
\item If $\lambda=1$; \{$w_{2}$\}
\item If $\lambda=2\mu$; \{$w_{2^{\upsilon_{2}(N)}},S_{\upsilon''}$\}
with the relations
$w_{2^{\upsilon_{2}(N)}}^2=S_{\upsilon''}^{\upsilon''}=(w_{2^{\upsilon_{2}(N)}}S_{\upsilon''})^3=1$;
these factor groups have orders 6, 24, and 96 for $\upsilon''=2,4,8$ respectively. (One needs to warn that for $\upsilon''=8$ the relations
do not totally determine this factor group.)
\item If $\lambda> 2\mu$; \{ $w_{2^{\upsilon_{2}(N)}},S_{\upsilon''}$\}; $w_{2^{\upsilon_{2}(N)}}^2=S_{\upsilon''}^{\upsilon''}=1.$
Moreover, $S_{\upsilon''}$ commutes with $w_{2^{\upsilon_{2}(N)}}S_{\upsilon''}w_{2^{\upsilon_{2}(N)}}$
(factor group of order $2 {\upsilon''}^2$).
\end{enumerate}
\end{enumerate}
\end{claim}
Let us give some partial results first.
\begin{prop}\label{propv1} Suppose that $v(N)=1$ (thus $4\nmid N$ and $9\nmid
N$). Then the Atkin-Lehner involutions generate
$\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$ and the group
structure is
$$\cong \prod_{i=1}^{\pi(N)}\field{Z}/2\field{Z}$$
where $\pi(N)$ is the number of distinct primes dividing $N$.
\end{prop}
\begin{dem} This is classically known. We recall only that $w_{m m'}=w_{m}w_{m'}$ for $(m,m')=1$ and,
easily, $w_mw_{m'}=w_{m'}w_m$; the result follows by a
straightforward computation from Theorem \ref{teo1}, see also
\cite[p.14]{Ba}.
\end{dem}
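For example, for $N=15$ one has $v(15)=1$, so the proposition gives that
$\operatorname{Norm}(\Gamma_0(15))/\Gamma_0(15)$ is represented by $\{1,w_3,w_5,w_{15}\}$ and is isomorphic to $\field{Z}/2\field{Z}\times\field{Z}/2\field{Z}$.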
When $v(N)>1$ it is clear that some element $S_{v'}$ appears in
the group structure of
$\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$ from Theorem
\ref{teo1}.
\begin{lemma}\label{lema2} If $4|N$ the involution $S_2\in \operatorname{Norm}(\Gamma_0(N))$
commutes with the Atkin-Lehner involutions $w_{m}$ with
$\operatorname{gcd}(m,2)=1$ and with the other $S_{v'}$.
\end{lemma}
\begin{dem} By the hypothesis the following matrix
belongs to $\Gamma_0(N)$
$$w_{m}S_2w_{m}S_2=\left(\begin{array}{cc} \frac{2mk^2+2Nt+mkNt}{2m}&\frac{(2+2m)(2m+2mk+Nt)}{4m}\\
\frac{Nt(2m+2mk+Nt)}{2m}&m+Nt+\frac{Nt}{m}+\frac{kNt}{2}+\frac{Nt^2}{4m}\\
\end{array}\right).$$
\end{dem}
\begin{prop}\label{propv2} Let $N=2^{v_2(N)}\prod_ip_i^{n_i}$, with the $p_i$
distinct odd primes, and assume that $v_2(N)\leq 3$ and $v_3(N)\leq
1$. Then Atkin-Lehner's Claim \ref{AL} is true.
\end{prop}
For the proof we need two lemmas.
\begin{lemma} \label{nno:teo} Let $\tilde{u}\in \operatorname{Norm}(\Gamma_{0}(N))$ and write it as:
$$\tilde{u}=\frac{1}{\sqrt{\delta\Delta^2}}\left(\begin{array}{cc}
\Delta^2\delta r&\frac{u}{2}\\
\frac{sN}{2}&l\Delta^2\delta\\
\end{array}\right),$$
following the notation of Theorem \ref{teo1}.
Then:\newline
$$w_{\Delta^2\delta}\tilde{u}=\left(\begin{array}{cc}
r'&\frac{u'}{2}\\
\frac{s'N}{2}&v'\\
\end{array}\right),\ if\ \operatorname{gcd}(\delta,2)=1,$$
$$w_{\Delta^2\frac{\delta}{2}}\tilde{u}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
2r''&\frac{u''}{2}\\
\frac{s''N}{2}&2v''\\
\end{array}\right),\ if\ \operatorname{gcd}(\delta,2)=2.$$
\end{lemma}
\begin{dem}
This is an easy calculation. \end{dem}
We study now the different elements of the type
$$a(r',u',s',v')=\left(\begin{array}{cc}
r'&\frac{u'}{2}\\
\frac{s'N}{2}&v'\\
\end{array}\right),$$
$$b(r'',u'',s'',v'')=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
2r''&\frac{u''}{2}\\
\frac{s''N}{2}&2v''\\
\end{array}\right).$$
Observe that elements of type $b(\cdot,\cdot,\cdot,\cdot)$ only appear when $N\equiv0\ (mod\ 8)$.
\begin{lemma} \label{nna:teo} For $N\equiv4(mod\ 8)$ all
the elements of the normalizer of type $a(r',u',s',v')$ belong to
the order six group
$\{S_{2},w_{4}|S_{2}^2=w_{4}^2=(w_{4}S_{2})^3=1\}$.
\end{lemma}
\begin{dem} Straightforward from the equalities:
$$a(r',u',s',v')\in\Gamma_{0}(N)\Leftrightarrow s'\equiv u'\equiv0(mod\ 2)$$
$$a(r',u',s',v')S_{2}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv u'\equiv1,\ s'\equiv0(mod\ 2)$$
$$a(r',u',s',v')w_{4}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv0,\ u'\equiv s'\equiv1(mod\ 2)$$
$$a(r',u',s',v')w_{4}S_{2}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv u'\equiv s'\equiv1,\ v'\equiv0(mod\ 2)$$
$$a(r',u',s',v')S_{2}w_{4}\in\Gamma_{0}(N)\Leftrightarrow v'\equiv u'\equiv s'\equiv1,\ r'\equiv0(mod\ 2)$$
$$a(r',u',s',v')S_{2}w_{4}S_{2}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv s'\equiv1,\ u'\equiv0(mod\ 2)$$
\end{dem}
\begin{lemma} \label{nne:teo} Let $N$ be a positive integer with $\upsilon_{2}(N)=3$. Then all the elements
of the form $a(r',u',s',v')$ and $b(r'',u'',s'',v'')$ correspond
to some element of the following group of 8 elements
$$\{ S_{2},w_{8}|S_{2}^2=w_{8}^2=1,S_{2}w_{8}S_{2}w_{8}=w_{8}S_{2}w_{8}S_{2}\}$$
\end{lemma}
\begin{dem} It follows from the equalities:
$$a(r',u',s',v')\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv1,u'\equiv s'\equiv0(mod\ 2)$$
$$a(r',u',s',v')S_{2}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv u'\equiv1,s'\equiv0(mod\ 2)$$
$$a(r',u',s',v')w_{8}S_{2}w_{8}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv s'\equiv1,u'\equiv0(mod\ 2)$$
$$a(r',u',s',v')S_{2}w_{8}S_{2}w_{8}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv u'\equiv s'\equiv1(mod\ 2)$$
$$b(r'',u'',s'',v'')w_{8}\in\Gamma_{0}(N)\Leftrightarrow r''\equiv v''\equiv0,u''\equiv s''\equiv 1(mod\ 2)$$
$$b(r'',u'',s'',v'')S_{2}w_{8}S_{2}\in\Gamma_{0}(N)\Leftrightarrow r''\equiv v''\equiv u''\equiv s''\equiv1(mod\ 2)$$
$$b(r'',u'',s'',v'')S_{2}w_{8}\in\Gamma_{0}(N)\Leftrightarrow r''\equiv0,u''\equiv s''\equiv v''\equiv1(mod\ 2)$$
$$b(r'',u'',s'',v'')w_{8}S_{2}\in\Gamma_{0}(N)\Leftrightarrow v''\equiv0,u''\equiv s''\equiv r''\equiv1(mod\ 2)$$
\end{dem}
We can now prove Proposition \ref{propv2}.
\begin{dem}[of Proposition \ref{propv2}]
Let $N=2^{\upsilon_{2}(N)}\prod_{i}p_{i}^{n_i}$, with the $p_{i}$
distinct odd primes, and assume that $9\nmid N$. If
$\upsilon_{2}(N)\le1$ we are done by proposition \ref{propv1}.
Suppose $\upsilon_{2}(N)=2$ and let $\tilde{u}\in
\operatorname{Norm}(\Gamma_{0}(N))$. By lemmas \ref{nno:teo} and
\ref{nna:teo}, $w_{\delta}\tilde{u}=\alpha$ with $\alpha\in\{
S_{2},w_{4}|S_{2}^2=w_{4}^2=(w_{4}S_{2})^3=1\}$, and it follows that
$\tilde{u}=w_{\delta}\alpha$. Since $w_{\delta}$ ($(\delta,2)=1$)
commutes with $S_{2}$ and the Atkin-Lehner involutions commute with
each other, we are already done. In the situation $8||N$ the
proof is exactly the same but using lemmas \ref{nno:teo} and
\ref{nne:teo} instead.
\end{dem}
\section{Counterexamples to Claim \ref{AL}.}
In the above section we have seen that Atkin-Lehner's claim is
true if $v(N)\leq2$ i.e. for $v_2(N)\leq 3$ and $v_3(N)\leq 1$.
Now we obtain counterexamples when $v_2(N)$ and/or $v_3(N)$ are
bigger.
\begin{lemma} Claim \ref{AL} for $N=48$ is wrong.
\end{lemma}
\begin{dem} We know by Ogg \cite{O} that $X_0(48)$ is a
hyperelliptic modular curve whose hyperelliptic involution is not of
Atkin-Lehner type. The hyperelliptic involution always belongs to
the center of the automorphism group. We know by \cite{KM} that
$\operatorname{Aut}(X_0(48))=\operatorname{Norm}(\Gamma_0(48))/\Gamma_0(48)$.
Now if Claim \ref{AL} were true this group would be isomorphic to
$\field{Z}/2\times \Pi_4$ where $\Pi_n$ is the permutation group of $n$
elements. It is clear that the center of this group is
$\field{Z}/2\times\{1\}$, generated by the Atkin-Lehner involution $w_3$,
but this involution is not the hyperelliptic one.
\end{dem}
The problem with $N=48$ is that $S_4$ does not commute with the
Atkin-Lehner involution $w_3$; thus the direct product
decomposition of Claim \ref{AL} is not possible.
This problem also appears for powers of $3$, as one can prove:
\begin{lemma}\label{lema10} Let $N=3^{v_3(N)}\prod_ip_i^{n_i}$ where the $p_i$ are distinct
primes different from $3$. Impose that $S_3\in
\operatorname{Norm}(\Gamma_0(N))$. Then $S_3$ commutes with
$w_{p_i^{n_i}}$ if and only if $p_i^{n_i}\equiv 1\ (mod\ 3)$.
Therefore if some $p_i^{n_i}\equiv -1\ (mod\ 3)$, Claim
\ref{AL} is not true.
\end{lemma}
\begin{dem} Let us show that $S_{3}$ does not commute with $w_{p_{i}^{n_i}}$
if and only if $p_{i}^{n_i}\equiv-1\ (mod\ 3)$. Observe the
equality
$w_{p_{i}^{n_i}}=\frac{1}{\sqrt{p_{i}^{n_i}}}\left(\begin{array}{cc}
p_{i}^{n_i}k&1\\
Nt&p_{i}^{n_i}\\
\end{array}\right)$:
\begin{center}
$$w_{p_{i}^{n_i}}S_{3}w_{p_{i}^{n_i}}S_{3}^2=$$
$$\frac{1}{p_{i}^{n_i}} \left(
\begin{array}{cc}
(p_{i}^{n_i}k)^2+Nt(1+\frac{p_{i}^{n_i}k}{3})&p_{i}^{n_i}k(\frac{2p_{i}^{n_i}k}{3}+1)+(\frac{p_{i}^{n_i}k}{3}+1)
(\frac{2Nt}{3}+p_{i}^{n_i})\\
Nt(p_{i}^{n_i}k)+Nt(\frac{Nt}{3}+p_{i}^{n_i})&Nt(\frac{2p_{i}^{n_i}k}{3}+1)+p_{i}^{n_i}(\frac{Nt}{3}+p_{i}^{n_i})
(\frac{2Nt}{3}+p_{i}^{n_i})\\
\end{array} \right).$$
\end{center}
For this element to belong to $\Gamma_{0}(N)$ one needs to impose
$\frac{2k^2p_{i}^{n_i}}{3}+\frac{p_{i}^{n_i}k}{3}\in\field{Z}$. Since
$p_{i}^{n_i}\equiv1$ or $-1\ (mod\ 3)$, this forces $k\equiv0$ or
$k\equiv1\ (mod\ 3)$. Now from $\operatorname{det}(w_{p_i^{n_i}})=1$ we
obtain that $p_{i}^{n_i}k\equiv1\ (mod\ 3)$, which rules out $k\equiv0$;
hence $k\equiv1\ (mod\ 3)$ and therefore $p_{i}^{n_i}\equiv1\ (mod\ 3)$.
\end{dem}
\section{The group structure of $\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$ revisited.}
In this section we correct Claim \ref{AL}. We prove here that the
quotient $$\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$$ is the
product of certain groups, each of them associated to one of the primes
dividing $N$. See Theorem \ref{Barsfi} for the explicit result.
\begin{thm}\label{Bars} Any element $w\in \operatorname{Norm}(\Gamma_0(N))$ has an expression
of the form
$$w=w_{m}\Omega,$$ where $w_m$ is an Atkin-Lehner involution of
$\Gamma_0(N)$ with $(m,6)=1$ and $\Omega$ belongs to the subgroup
generated by $S_{v(N)}$ and the Atkin-Lehner involutions
$w_{2^{v_2(N)}}$, $w_{3^{v_3(N)}}$. Moreover for
$\operatorname{gcd}(v(N),2^3)\leq 2$ the group structure of each of the
subgroups $<S_{v_2(v(N))},w_{2^{v_2(N)}}>$ and $<S_{v_3(v(N))},
w_{3^{v_3(N)}}>$ of $<S_{v(N)},w_{2^{v_2(N)}},w_{3^{v_3(N)}}>$ is
the one predicted by Atkin-Lehner in Claim \ref{AL}, but these two
subgroups do not necessarily commute with each other element-wise.
\end{thm}
\begin{dem}
Let us take any element $w$ of the
$\operatorname{Norm}(\Gamma_0(N))$. By Theorem \ref{teo1} we can
express $w$ as follows,
$$w={\sqrt{\delta}}\left(\begin{array}{cc}
r\Delta&\frac{u}{v\delta\Delta}\\
\frac{sN}{v\delta\Delta}&l\Delta\\
\end{array}\right)=\frac{1}{\Delta\sqrt{\delta}}\left(\begin{array}{cc}
r\delta\Delta^2&\frac{u}{v}\\
\frac{sN}{v}&l\delta\Delta^2\\
\end{array}\right)$$
Let us denote by $U=2^{v_2(N)}3^{v_3(N)}$. Write
$\Delta'=\operatorname{gcd}(\Delta,N/U)$ and
$\delta'=\operatorname{gcd}(\delta,N/U)$; then we obtain
$$w_{\delta'{\Delta'}^2}w=\frac{1}{\frac{\Delta}{\Delta'}\sqrt{\delta/\delta'}}\left(\begin{array}{cc}
r'\frac{\delta}{\delta'}\frac{\Delta^2}{\Delta'^2}&\frac{u'}{v(N)}\\
\frac{Nt'}{v(N)}&v'\frac{\delta}{\delta'}\frac{\Delta^2}{\Delta'^2}\\
\end{array}\right)$$
Observe that if $v(N)=1$ we are already done and we recover
proposition \ref{propv1}. This is clear if
$\operatorname{gcd}(N,6)=1$; if not, the matrix
$ww_{\delta'\Delta'^2}$ is the Atkin-Lehner involution at
$(\frac{\Delta}{\Delta'})^2\frac{\delta}{\delta'}\in\field{N}$.
Now we need only to check that any matrix of the form
\begin{equation}\label{eq5}\Omega=\frac{1}{\frac{\Delta}{\Delta'}\sqrt{\delta/\delta'}}\left(\begin{array}{cc}
r'\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2&\frac{u'}{v(N)}\\
\frac{Nt'}{v(N)}&v'\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2\\
\end{array}\right)
\end{equation}
is generated by $S_{v(N)}$ and the Atkin-Lehner involutions at
2 and 3 which are the factors of
$\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2$. To check this
observe that $\Omega=\Omega_2\Omega_3$ with
\begin{scriptsize}
\begin{equation}\label{factor}\Omega_2=\frac{1}{2^{v_2(\frac{\Delta}{\Delta'}\sqrt{\delta/\delta'})}}\left(\begin{array}{cc}
r''2^{v_2(\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2)}&\frac{u''}{2^{v_2(v(N))}}\\
\frac{Nt''}{2^{v_2(v(N))}}&v''2^{v_2(\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2)}\\
\end{array}\right)\end{equation} $$\Omega_3=\frac{1}{3^{v_3(\frac{\Delta}{\Delta'}\sqrt{\delta/\delta'})}}\left(\begin{array}{cc}
r'''3^{v_3(\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2)}&\frac{u'''}{3^{v_3(v(N))}}\\
\frac{Nt'''}{3^{v_3(v(N))}}&v'''3^{v_3(\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2)}\\
\end{array}\right).$$
\end{scriptsize}
We only consider the case of $\Omega_2$; the case of
$\Omega_3$ is similar. We can assume that
$2^{v_2(\frac{\Delta}{\Delta'}\sqrt{\delta/\delta'})}=1$
substituting $\Omega_2$ by $w_{2^{v_2(N)}}\Omega_2$ if necessary.
Thus, we are reduced to a matrix of the form
$\tilde{\Omega_2}=\left(\begin{array}{cc}
r'&\frac{u'}{2^{v_2(v(N))}}\\
\frac{Nt'}{2^{v_2(v(N))}}&v'\\
\end{array}\right)$. Now for some $i$ we can
obtain
$S_{2^{v_2(v(N))}}^i\tilde{\Omega_2}=\left(\begin{array}{cc}
r'&{u'}\\
\frac{Nt'}{2^{v_2(v(N))}}&v'\\
\end{array}\right);$ call this matrix $\overline{\Omega_2}$.
Then, it is easy to check that $w_{2^{v_2(N)}} S_{2^{v_2(v(N))}}^i
w_{2^{v_2(N)}}\overline{\Omega_2}\in \Gamma_0(N)$ for some $i$.
A similar argument applies if we multiply $w$ by $w_m$
on the right, i.e.\ $ww_m$ is also some $\Omega$ as above, yielding
the same conclusion.
Let us see now that the group generated by $S_{v_2(v(N))}$ and the
Atkin-Lehner involutions at 2, and the group generated by
$S_{v_3(v(N))}$ and the Atkin-Lehner involution at 3 have the
structure predicted in Claim \ref{AL} when
$\operatorname{gcd}(v(N),2^3)\leq 2$. We only need to check when
$v(N)$ is a power of 2 or 3 by (\ref{factor}). For $v(N)=1$ the
matrix (\ref{eq5}) is
$w_{\frac{\delta}{\delta'}(\frac{\Delta}{\Delta'})^2}$ (we denote
$w_1:=id$) (we have in this case a much deeper result, see
proposition \ref{propv1}). Take now $v(N)=2$. If
$l=gcd(3,\delta/\delta')$ let $\Omega=w_l\Omega'$; the matrix
$\Omega'$ is as (\ref{eq5}) but with
$\operatorname{gcd}(3,\delta/\delta')=1$, and
$\frac{\delta}{\delta'}\frac{\Delta^2}{\Delta'^2}$ is only a power
of 2. Then $\Omega'\in <S_2,w_{2^{v_2(N)}}>$; let us make the
group structure precise. For $v(N)=2$ we have $v_2(N)=2$ or 3, and we
have already proved the group structure of Claim \ref{AL} in
lemmas \ref{nna:teo},\ref{nne:teo} (we have moreover that Claim
\ref{AL} is true because $S_2$ commutes with the Atkin-Lehner
involutions $w_{p_i^{n_i}}$ if $(p_i,2)=1$, see proposition
\ref{propv2}). Assume now $v(N)=3$. If $l=gcd(2,\delta/\delta')$
and $\Omega=w_l\Omega'$ then $\Omega'$ is as (\ref{eq5}) but with
$gcd(2,\delta/\delta')=1$, and
$\frac{\delta}{\delta'}\frac{\Delta^2}{\Delta'^2}$ is only a power
of 3. Then $\Omega'\in <S_3,w_{3^{v_3(N)}}>$; let us make the
group structure precise. For $v(N)=3$ we have $v_3(N)\geq 2$. Let us
begin with $v_3(N)=2$; then $\Omega'$ is of the form
$$\Omega'=\left(\begin{array}{cc}
r'&\frac{u'}{3}\\
\frac{Nt'}{3}&v'\\
\end{array}\right)=:a(r',u',t',v')$$
(from the formulation of Theorem \ref{teo1} we can consider
$\frac{\Delta}{\Delta'}=1=\frac{\delta}{\delta'}$ because the
factors outside $3$ do not appear if we multiply by a
convenient Atkin-Lehner involution, and for $3$ observe that under
our condition $\Delta=1$) and we have
$$a(r',u',t',v')\in\Gamma_{0}(N)\Leftrightarrow t'\equiv u'\equiv0(mod\ 3)$$
$$a(r',u',t',v')w_{9}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}\in\Gamma_{0}(N)\Leftrightarrow r'+u'\equiv t'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}^2\in\Gamma_{0}(N)\Leftrightarrow 2r'+u'\equiv t'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}w_{9}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv qt'+v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}^2w_{9}\in\Gamma_{0}(N)\Leftrightarrow r'\equiv 2qt'+v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')w_{9}S_{3}^2\in\Gamma_{0}(N)\Leftrightarrow r'+u'\equiv v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')w_{9}S_{3}\in\Gamma_{0}(N)\Leftrightarrow r'+2u'\equiv v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')w_{9}S_{3}^2w_{9}\in\Gamma_{0}(N)\Leftrightarrow u'\equiv qt'+v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}^2w_{9}S_{3}^2\in\Gamma_{0}(N)\Leftrightarrow u'\equiv 2qt'+v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}^2w_{9}S_{3}\in\Gamma_{0}(N)\Leftrightarrow r'+u'\equiv 2t'q+v'\equiv0(mod\ 3)$$
$$a(r',u',t',v')S_{3}w_{9}S_{3}^2\in\Gamma_{0}(N)\Leftrightarrow 2r'+u'\equiv qt'+v'\equiv0(mod\ 3)$$
and these are all the possibilities, proving that the group is
$\{S_3,w_9|S_3^3=w_9^2=(w_9S_3)^3=1\}$ of order 12. Observe that
$S_3$ does not commute with $w_2$ (see for example lemma
\ref{nna:teo}).
Suppose now that $v_3(N)\geq 3$. We distinguish the cases $v_3(N)$
odd and $v_3(N)$ even. Suppose $v_3(N)$ is even, then
$\frac{\delta}{\delta'}=1$ and $\Omega'$ has the following form
$$\frac{1}{\frac{\Delta}{\Delta'}}\left(\begin{array}{cc}
r'(\frac{\Delta}{\Delta'})^2&\frac{u'}{3}\\
\frac{Nt'}{3}&v'(\frac{\Delta}{\Delta'})^2\\
\end{array}\right)$$
with $\alpha:=\Delta/\Delta'$ dividing $3^{[v_3(N)/2]-1}$. Since
this last matrix has determinant 1 we see that $\alpha$ satisfies
$\operatorname{gcd}(\alpha,N/(3^2\alpha^2))=1$; thus $\alpha=1$ or
$\alpha=3^{[v_3(N)/2]-1}$. Write
$a(r',u',t',v')=\left(\begin{array}{cc}
r'&\frac{u'}{3}\\
\frac{Nt'}{3}&v'\\
\end{array}\right)$ when we take
$\alpha=1$ and $b(r',u',t',v')=\left(\begin{array}{cc}
r'(3^{[v_3(N)/2]-1})&\frac{u'}{3^{[v_3(N)/2]}}\\
\frac{Nt'}{3^{[v_3(N)/2]}}&v'(3^{[v_3(N)/2]-1})\\
\end{array}\right)$ when
$\alpha=3^{[v_3(N)/2]-1}$. It is easy to check that
$b(r',u',t',v')=w_{3^{v_3(N)}}a(r',u',t',v')$ and that the group
structure is the predicted one, by an argument similar to the one
given above for $v(N)=2$. Suppose now that $v_3(N)$ is odd; then
$\frac{\delta}{\delta'}$ is 1 or 3 and $\frac{\Delta}{\Delta'}$
divides $3^{[v_3(N)/2]-1}$. Now, since the determinant equals 1, we
obtain that the only possibilities are
$\frac{\delta}{\delta'}=1=\frac{\Delta}{\Delta'}$, in which case we
denote the matrices for this case, following equation \ref{eq5}, by
$a(r',u',t',v')$, and
$\frac{\delta}{\delta'}=3$ with
$\frac{\Delta}{\Delta'}=3^{[v_3(N)/2]-1}$, in which case we denote
the matrices for this case, following equation \ref{eq5}, by
$c(r',u',t',v')$. It is also easy to check that
$c(r',u',t',v')=w_{3^{v_3(N)}}a(r'',u'',t'',v'')$, and that the
group structure is the predicted one.
\end{dem}
\begin{cor}\label{propv3}
Let $N=3^{v_3(N)}\prod_{i}p_{i}^{n_i}$, with $p_{i}$ different
primes such that $\operatorname{gcd}(p_{i},6)=1$. Suppose that
$v(N)=3$ and $p_{i}^{n_i}\equiv1(mod\ 3)$ for all $i$. Then Claim
\ref{AL} is true.
\end{cor}
\begin{dem} From the proof of theorem \ref{Bars} above for $v(N)=3$
with $v_3(N)\geq 2$, lemma \ref{lema10}, and the general
observation that the Atkin-Lehner involutions commute with one
another, we obtain that the direct product decomposition of
Claim \ref{AL} holds, which gives the result.
\end{dem}
We now state the corrections to Claim \ref{AL} for $v(N)=4$ and
$v(N)=8$, concerning the group structure of the subgroup of
$\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$ generated by
$S_{2^k}$ and the Atkin-Lehner involution at the prime 2.
\begin{prop} Suppose $v(N)=4$; observe that in this situation $v_2(N)=4$ or $5$. Then the group structure of the
subgroup $<w_{2^{v_2(N)}},S_4>$ of
$\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$ is given by the
relations:
\begin{enumerate}
\item For $v_2(N)=4$ we have $S_4^4=w_{16}^{2}=(w_{16}S_4)^3=1$.
\item For $v_2(N)=5$ we have $S_4^4=w_{32}^{2}=(w_{32}S_4)^4=1$.
\end{enumerate}
\end{prop}
\begin{dem} It is a straightforward computation. Observe that for
$v_2(N)=4$ the statement coincides with Claim \ref{AL} but not for
$v_2(N)=5$, where one checks that $S_4$ does not commute with
$w_{32}S_4w_{32}$.
\end{dem}
\begin{prop} Suppose $v(N)=8$ and $v_2(N)$ even (this is the case (3)(c) in Claim \ref{AL}).
Then the group
$<w_{2^{v_2(N)}},S_8>\subseteq\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$
satisfies the following relations: $S_8^8=w_{2^{v_2(N)}}^2=1$, and
\begin{enumerate}
\item for $v_2(N)=6$ we have $(w_{64}S_8)^3=1$, \item for
$v_2(N)\geq 8$ we do not have the relation
$(w_{2^{v_2(N)}}S_8)^3=1$, \item for $v_2(N)\geq 10$ we have the
relation: $S_8$ commutes with $w_{2^{v_2(N)}}S_8 w_{2^{v_2(N)}}$,
\item for $v_2(N)=6$ or $8$ we do not have the relation: $S_8$
commutes with the element $w_{2^{v_2(N)}}S_8 w_{2^{v_2(N)}}$.
\item For $v_2(N)=8$ we have the relation:
$w_{256}S_8w_{256}S_8w_{256}S_8^3w_{256}S_8^3=1$.
\end{enumerate}
\end{prop}
\begin{dem} Straightforward.
\end{dem}
\begin{prop} Suppose $v(N)=8$ and $v_2(N)$ odd (this is the case (3)(d) in Claim \ref{AL}).
Then the group
$<w_{2^{v_2(N)}},S_8>\subseteq\operatorname{Norm}(\Gamma_0(N))/\Gamma_0(N)$
satisfies the following relations: $S_8^8=w_{2^{v_2(N)}}^2=1$, and
\begin{enumerate}
\item for $v_2(N)=7$ we have $(w_{128}S_8)^4=1$, \item for $v_2(N)\geq 9$
we do not have the relation $(w_{2^{v_2(N)}}S_8)^4=1$, \item for
$v_2(N)\geq 9$ we have the Atkin-Lehner relation:
$S_8$ commutes with $w_{2^{v_2(N)}}S_8 w_{2^{v_2(N)}}$,
\item for $v_2(N)=7$ we do not have that $S_8$ commutes with
$w_{128}S_8w_{128}$.
\end{enumerate}
\end{prop}
\begin{dem} Straightforward.
\end{dem}
Let us finally state the revisited results concerning Claim
\ref{AL} that we prove:
\begin{thm}\label{Barsfi} The quotient $\operatorname{Norm}(\Gamma_{0}(N))/\Gamma_0(N)$ is
a product of the following groups:
\begin{enumerate}
\item \{$ w_{q^{\upsilon_q(N)}}$\} for every prime $q\ge5$ with
$q\mid N$. \item
\begin{enumerate}
\item If $\upsilon_{3}(N)=0$, \{$1$\}
\item If $\upsilon_{3}(N)=1$, \{$w_{3}$\}
\item If $\upsilon_{3}(N)=2$, \{$w_{9},S_{3}$\}; satisfying $w_{9}^2=S_{3}^3=(w_{9}S_{3})^3=1$ (factor of order 12)
\item If $\upsilon_{3}(N)\ge3$; \{$w_{3^{\upsilon_{3}(N)}},S_{3}$\}; where $w_{3^{\upsilon_{3}(N)}}^2=S_{3}^3=1$ and $w_{3^{\upsilon_{3}(N)}}S_{3}w_{3^{\upsilon_{3}(N)}}$
commutes
with $S_{3}$ (factor group with 18 elements)
\end{enumerate}
\item Let $\lambda=\upsilon_{2}(N)$ and
$\mu=\operatorname{min}(3,[\frac{\lambda}{2}])$, and write
$\upsilon''=2^{\mu}$; then we have:
\begin{enumerate}
\item If $\lambda=0$ ; \{$1$\}
\item If $\lambda=1$; \{$w_{2}$\}
\item If $\lambda=2\mu$ and $2\leq\lambda\leq 6$; \{$w_{2^{\upsilon_{2}(N)}},S_{\upsilon''}$\}
with the relations
$w_{2^{\upsilon_{2}(N)}}^2=S_{\upsilon''}^{\upsilon''}=(w_{2^{\upsilon_{2}(N)}}S_{\upsilon''})^3=1$;
these factor groups have orders 6, 24 and 96 for $\upsilon''=2,4,8$ respectively.
\item If $\lambda> 2\mu$ and $2\leq\lambda\leq 7$;
\{ $w_{2^{\upsilon_{2}(N)}},S_{\upsilon''}$\};
$w_{2^{\upsilon_{2}(N)}}^2=S_{\upsilon''}^{\upsilon''}=1$.
Moreover, $(w_{2^{\upsilon_{2}(N)}}S_{\upsilon''})^4=1$.
\item[($\tilde{c}$),($\tilde{d}$)] If $\lambda\geq 9$; \{$w_{2^{\upsilon_{2}(N)}},S_{8}$\}
with the relations
$w_{2^{\upsilon_{2}(N)}}^2=S_{8}^{8}=1$ and $S_8$
commutes with $w_{2^{v_2(N)}}S_8w_{2^{v_2(N)}}$.
\item[($\hat{c}$)] If $\lambda=8$; \{$w_{2^{\upsilon_{2}(N)}},S_{8}$\}
with the relations
$w_{2^{\upsilon_{2}(N)}}^2=S_{8}^{8}=1$
and $w_{256}S_8w_{256}S_8w_{256}S_8^3w_{256}S_8^3=1$.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{obs2} We warn that in the situation $v(N)=8$ the listed
relations possibly do not determine the factor group completely,
but settling this is only one more computation.
\end{obs2}
\begin{obs2} The product between the different groups appearing in
theorem \ref{Barsfi} is easily computable. Indeed, we know
that the Atkin-Lehner involutions commute, and $S_{2^{v_2(v(N))}}$
commutes with $S_{3^{v_3(v(N))}}$. Moreover $S_2$ commutes with
any element from lemma \ref{lema2}. Let $w_{p^n}$ be an
Atkin-Lehner involution for $X_0(N)$ with $p$ a prime. One obtains
the following results by using the same arguments as in the
proof of lemma \ref{lema10}:
\begin{enumerate}
\item Let $p$ be coprime to $3$ and $3|v(N)$. Then $S_3$ commutes
with $w_{p^n}$ if and only if $p^n\equiv 1(mod\ 3)$. If
$p^n\equiv-1(mod\ 3)$ then $w_{p^n}S_3=S_3^2w_{p^n}$. \item Let
$p$ be coprime to $2$ and $4|v(N)$. Then $S_4$ commutes with
$w_{p^n}$ if and only if $p^n\equiv 1(mod\ 4)$. If
$p^n\equiv-1(mod\ 4)$ then $w_{p^n}S_4=S_4^3w_{p^n}$. \item Let
$p$ be coprime to $2$ and $8|v(N)$. Then
$w_{p^n}S_8=S_8^kw_{p^n}$ if $p^n\equiv k(mod\ 8)$; in
particular $S_8$ commutes with $w_{p^n}$ if and only if $p^n\equiv
1(mod\ 8)$.
\end{enumerate}
\end{obs2}
Francesc Bars Cortina, Depart. Matem\`atiques, Universitat
Aut\`onoma de Barcelona, 08193 Bellaterra. Catalonia. Spain.
E-mail:
[email protected] \\
\end{document}
|
\begin{document}
\begin{center}
\textbf{\Large{Determination of forcing functions in the wave equation. Part I: the space-dependent case}}
\end{center}
S.O. Hussein and D. Lesnic
\\
\textit{Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT, UK}\\
E-mails: [email protected], [email protected]\\
\\
\textbf{\large Abstract.}
We consider the inverse problem for the wave equation which consists of determining an unknown space-dependent force function acting on a vibrating structure from Cauchy boundary data. Since only boundary data are used as measurements, the study is of importance and significance for non-intrusive and non-destructive testing of materials. This inverse force problem is linear, the solution is unique, but the problem is still ill-posed since, in general, the solution does not exist and, even if it exists, it does not depend continuously upon the input data. Numerically, the finite difference method combined with Tikhonov regularization is employed in order to obtain a stable solution. Several orders of regularization are investigated. The choice of the regularization parameter is based on the L-curve method. Numerical results show that the solution is accurate for exact data and stable for noisy data. An extension to the case of multiple additive forces is also addressed. In a companion paper, in Part II, the time-dependent force identification will be undertaken.\\
\\
\textbf{\large Keywords:} Inverse force problem; Regularization; L-curve; Finite difference method; Wave equation.
\section{Introduction}
The aim of this paper is to investigate an inverse force problem for the hyperbolic wave equation. The forcing function is assumed to depend only upon the space variable in order to ensure uniqueness of the solution, \cite{candunn70,engl94,vmk92,my95}. These authors have given conditions to be satisfied by the data in order to ensure uniqueness and, in the case of \cite{candunn70}, continuous dependence upon the data. However, no numerical results were presented and it is the main purpose and novelty of our study to develop an efficient numerical solution for this inverse linear, but ill-posed problem. In a previous study, \cite{hussein14}, we have used the boundary element method (BEM) to numerically discretise the wave equation with constant wave speed based on the available fundamental solution, \cite{morfesh53}. Furthermore, by assuming that the force function $f(x)$ appears as a free term in the wave equation, the method of separating variables, \cite{candunn70}, was applicable and regularisation was used to stabilise the resulting system of linear algebraic equations. However, if the wave speed is not constant or, if the force appears in a non-free term as $f(x)h(x,t)$ the above methods are not applicable. Therefore, in order to extend this range of applicability, in this paper the numerical method for discretising the wave equation is the finite difference method (FDM). The resulting system of linear equations is ill-conditioned, the original problem being ill-posed. The choice of the regularization parameter introduced by this technique is important for the stability of the numerical solution and in our study this is based on the L-curve criterion,\cite{hansen2001}.
The structure of the paper is as follows. In Section 2, we briefly describe inverse force problems for the hyperbolic wave equation, recalling the uniqueness theorems of \cite{engl94,vmk92,my95}. In Sections 3 and 4, we introduce the FDM, as applied to the direct and inverse problems, respectively. Numerical results are illustrated and discussed in Section 5, and an extension of the study is presented in Section 6. Conclusions are provided in Section 7.
\section{Mathematical Formulation}
The governing equation for a vibrating bounded structure $\Omega \subset \mathbb{R}^{n},\ n=1,2,3$, acted upon by a force $F(\underline{x},t)$ is given by the wave equation
\begin{eqnarray}
u_{tt}(\underline{x},t)=c^{2}\nabla^{2}u(\underline{x},t)+F(\underline{x},t),
\quad (\underline{x},t)\in\Omega\times(0,T), \label{eq1}
\end{eqnarray}
where $T>0$ is a given time, $u(\underline{x},t)$ represents the displacement and $c>0$ is the wave speed of propagation. For simplicity, we assume that $c$ is a constant, but we can also let $c$ be a function depending on the space variable $\underline{x}$. For example, in $n=1$-dimension, where $\Omega$ represents the interval $(0,L), \ L>0$, occupied by a vibrating inhomogeneous string, its small transversal vibrations are governed by the wave equation
\begin{eqnarray}
\omega(x)u_{tt}(x,t)=u_{xx}(x,t)+F(x,t),
\quad (x,t)\in(0,L)\times(0,T), \label{eqwx}
\end{eqnarray}
where $\omega(x)=c^{-2}(x)$ represents the mass density of the string, which is stretched by a unit force.
Equation \eqref{eq1} has to be solved subject to the initial conditions
\begin{equation}
u(\underline{x},0)=u_0(\underline{x}), \quad \underline{x}\in \Omega, \label{eq2}
\end{equation}
\begin{equation}
u_{t}(\underline{x},0)=v_{0}(\underline{x}), \quad \underline{x}\in \Omega, \label{eq3}
\end{equation}
where $u_0$ and $v_{0}$ represent the initial displacement and velocity, respectively. On the boundary of the structure $\partial{\Omega}$ we can prescribe Dirichlet, Neumann, Robin or mixed boundary conditions.
Let us consider, for the sake of simplicity, Dirichlet boundary conditions being prescribed, namely,
\begin{eqnarray}
u(\underline{x},t)=P(\underline{x},t), \quad (\underline{x},t)\in\partial{\Omega}\times (0,T), \label{eq4}
\end{eqnarray}
where $P$ is a prescribed boundary displacement.
If the force $F(\underline{x},t)$ is given, then equations \eqref{eq1}, \eqref{eq2}-\eqref{eq4} form a direct well-posed problem, see e.g. \cite{morfesh53}. However, if the force function $F(\underline{x},t)$ cannot be directly observed then it becomes unknown and, clearly, the above set of equations is not sufficient to determine uniquely the solution pair $(u(\underline{x},t),F(\underline{x},t))$. Then, we consider the additional measurement of the flux tension of the structure on a (non-zero measure) portion $\Gamma\subset\partial{\Omega}$, namely,
\begin{eqnarray}
\frac{\partial{u}}{\partial{\nu}}(\underline{x},t)=q(\underline{x},t), \quad (x,t)\in\Gamma\times(0,T), \label{eq6}
\end{eqnarray}
where $\underline{\nu}$ is the outward unit normal to $\partial{\Omega}$ and $q$ is a given function. Other additional information, such as the 'upper-base' final displacement measurement $u(\underline{x},T)$ for $\underline{x}\in\Omega$, will be investigated in a separate work.
Also, note that if instead of the Dirichlet boundary condition \eqref{eq4} we would have supplied a Neumann boundary condition then, the quantities $u$ and $\partial{u}/\partial{\nu}$ would have had to be reversed in \eqref{eq4} and \eqref{eq6}. In order to ensure a unique solution we further assume that
\begin{eqnarray}
F(\underline{x},t)=f(\underline{x})h(\underline{x},t), \quad (\underline{x},t)\in \Omega\times(0,T), \label{eq8}
\end{eqnarray}
where $h(\underline{x},t)$ is a known function and $f(\underline{x})$ represents the unknown space-dependent forcing function to be determined. This restriction is necessary because otherwise, we can always add to $u(\underline{x},t)$ any function of the form $t^{2}U(\underline{x})$ with $U\in C^{2}(\overline{\Omega})$ arbitrary with compact support in $\Omega$, and still obtain another solution satisfying \eqref{eq1}, \eqref{eq2}-\eqref{eq6}.
Note that the unknown force $f(\underline{x})$ is an interior quantity and it depends on the space variable $\underline{x}\in \Omega \subset \mathbb{R}^{n}$, whilst the additional measurement \eqref{eq6} of the flux $q(\underline{x},t)$ is a boundary quantity and it depends on $(\underline{x},t)\in\Gamma\times(0,T)$.
In the next subsection, we analyse more closely the uniqueness of solution of the inverse problem which requires finding the pair solution $(u(\underline{x},t),f(\underline{x}))$ satisfying equations \eqref{eq1}, \eqref{eq2}-\eqref{eq8}.
\subsection{Mathematical Analysis}
To start with, from \eqref{eq8}, and taking for simplicity $c=1$, equation \eqref{eq1} recasts as
\begin{eqnarray}
u_{tt}(\underline{x},t)=
\nabla^{2}u(\underline{x},t)+f(\underline{x})h(\underline{x},t),
\quad (\underline{x},t)\in\Omega\times(0,T). \label{eq9}
\end{eqnarray}
We note that in the one-dimensional case, $n=1$, and for $c=h=1$ and other compatibility conditions satisfied by the data \eqref{eq2}-\eqref{eq6}, Cannon and Dunninger \cite{candunn70}, based on the method of Fourier series, established the uniqueness of a classical solution of the inverse problem. We also have the following more general uniqueness result, see Theorem 9 of \cite{engl94}.
\\
\\
{\bf Theorem 1.} {\it Assume that $\Omega \subset \mathbb{R}^{n}$ is a bounded star-shaped domain with sufficiently smooth boundary such that $T>diam(\Omega)$. Let $h\in H^{2}(0,T;L^{\infty}(\Omega))$ be such that $h(.,0)\in L^{\infty}(\Omega)$, $h_{t}(.,0)\in L^{\infty}(\Omega)$ and
\begin{eqnarray}
H:=\frac{||h_{tt}||_{L^{2}(0,T;L^{\infty}(\Omega))}}{inf_{\underline{x}\in\Omega}|h(\underline{x},0)|} \quad \text{is sufficiently small}. \label{eqth1.1}
\end{eqnarray}
If $\Gamma=\partial\Omega$, then the inverse problem \eqref{eq2}-\eqref{eq6} and \eqref{eq9} has at most one solution $(u(\underline{x},t),f(\underline{x}))$ in the class of functions
\begin{eqnarray}
u\in L^{2}(0,T;H^{1}(\Omega)), \quad u_{t}\in L^{2}(0,T;L^{2}(\Omega)), \quad u_{tt}\in L^{2}(0,T;(H^{1}(\Omega))^{\prime}), \quad f\in L^{2}(\Omega), \label{eqth1.2}
\end{eqnarray}
where $(H^{1}(\Omega))^{\prime}$ denotes the dual of $H^{1}(\Omega)$.}\\
\\
For the notations and definitions of the function spaces involved, see \cite{lionsnewyork}.
The proof in \cite{engl94} relies on the estimate (5.25) of \cite{lions88}, namely,
\begin{eqnarray}
||h(.,0)f||_{L^{2}(\Omega)}\leq K_{1}||w_{1}||_{L^{2}(\partial{\Omega}\times(0,T))}, \label{eqth1.3}
\end{eqnarray}
for some positive constant $K_{1}$ which depends only on $\Omega$ and $T$, and $w_{1}$ is the solution of the problem
\begin{eqnarray}
w_{1tt}(\underline{x},t)=\nabla^{2}w_{1}(\underline{x},t), \quad (\underline{x},t)\in \Omega\times(0,T), \label{eqth1.4}
\end{eqnarray}
\begin{eqnarray}
w_{1}(\underline{x},0)=h(\underline{x},0)f(\underline{x}), \quad w_{1t}(\underline{x},0)=h_{t}(\underline{x},0)f(\underline{x}), \quad \underline{x}\in \Omega. \label{eqth1.6}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial{w_{1}}}{\partial{\nu}}(\underline{x},t)=0, \quad (\underline{x},t)\in \partial{\Omega}\times(0,T). \label{eqth1.5}
\end{eqnarray}
Theorem 1 also requires that the quantity $H$ in equation \eqref{eqth1.1} is sufficiently small which can be guaranteed if $||h_{tt}||_{L^{2}(0,T;L^{\infty}(\Omega))}$ is small or, if $inf_{x\in \Omega}|h(x,0)|$ is large. For example, if
\begin{eqnarray}
h(\underline{x},t)=t \ h_{1}(\underline{x})+h_{2}(\underline{x}), \quad (\underline{x},t)\in \Omega\times(0,T), \label{eqth1.7}
\end{eqnarray}
with $h_{1}\in L^{\infty}(\Omega)$ and $h_{2}\in L^{\infty}(\Omega)$ given functions, then $h_{tt}=0$ and therefore condition \eqref{eqth1.1} is satisfied if $inf_{x\in \Omega}|h_{2}(\underline{x})|>0$. In this case, the uniqueness proof follows immediately by remarking that $w_{1}=u_{tt}$, where $u$ satisfies the problem
\begin{eqnarray}
u_{tt}=\nabla^{2}u(\underline{x},t)+(t \ h_{1}(\underline{x})+h_{2}(\underline{x}))f(\underline{x}), \quad (\underline{x},t)\in \Omega\times(0,T), \label{eqth1.8}
\end{eqnarray}
\begin{eqnarray}
u(\underline{x},0)=u_{t}(\underline{x},0)=0, \quad \underline{x}\in \Omega, \label{eqth1.9}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial{u}}{\partial{\nu}}(\underline{x},0)=0, \quad (\underline{x},t)\in \partial{\Omega}\times(0,T). \label{eqth1.10}
\end{eqnarray}
In the above, $(u,f)$ represents the difference between two solutions $(u_{1},f_{1})$ and $(u_{2},f_{2})$ of the inverse problem \eqref{eq2}-\eqref{eq6} and \eqref{eq9}. Then from \eqref{eq4} it follows that
\begin{eqnarray}
u(\underline{x},t)=0, \quad (\underline{x},t)\in \partial{\Omega}\times(0,T). \label{eqth1.11}
\end{eqnarray}
Since $w_{1}=u_{tt}$, from \eqref{eqth1.11} it results that
\begin{eqnarray}
w_{1}(\underline{x},t)=0, \quad (\underline{x},t)\in \partial{\Omega}\times(0,T). \label{eqth1.12}
\end{eqnarray}
Then, the conditions on $\Omega$ and $T>diam(\Omega)$, and equations \eqref{eqth1.4}, \eqref{eqth1.5} and \eqref{eqth1.12}, imply that the uniqueness property, see Remark 1.7 of \cite{lions88}, is applicable and consequently, $w_{1}\equiv0$.
Then, \eqref{eqth1.3} and $inf_{x\in \Omega}|h_{2}(\underline{x})|>0$ immediately yield $f\equiv0$. Afterwards, the problem \eqref{eqth1.8}-\eqref{eqth1.10} with $f=0$ yields $u\equiv0$.
\\
\\
We also have the following uniqueness theorem due to Theorem 3.8 of \cite{vmk92}.
\\
\\
{\bf Theorem 2.} {\it Assume that $\Omega\subset\mathbb{R}^{n}$ is a bounded domain with piecewise smooth boundary. Let $h\in C^{3}(\overline{\Omega}\times[0,T])$ be such that
\begin{eqnarray}
h(\underline{x},0)\neq0 \quad \text{for} \quad \underline{x}\in\overline{\Omega}. \label{eqth2.1}
\end{eqnarray}
If $\Gamma=\partial{\Omega}$, then the inverse problem \eqref{eq2}-\eqref{eq6} and \eqref{eq9} has at most one solution $(u(\underline{x},t),f(\underline{x}))\in C^{3}(\overline{\Omega}\times[0,T])\times C(\overline{\Omega})$.}\\
\\
One can remark that the previously stated uniqueness Theorems 1 and 2 require that the Neumann observation \eqref{eq6} is taken over the complete boundary $\Gamma=\partial{\Omega}$. In the incomplete case, when $\Gamma\subset\partial{\Omega}$ is only a part of $\partial{\Omega}$, uniqueness still holds under the assumption that $h$ is independent of $\underline{x}$, \cite{my95}, as follows.\\
\\
{\bf Theorem 3.} {\it Assume that $\Omega\subset\mathbb{R}^{n}$ is a bounded star-shaped domain with smooth boundary such that $T>diam(\Omega)$. Let $h\in C^{1}[0,T]$ be independent of $\underline{x}$ such that equation \eqref{eq9} becomes
\begin{eqnarray}
u_{tt}(\underline{x},t)=\nabla^{2}u(\underline{x},t)+f(\underline{x})h(t), \quad (\underline{x},t)\in \Omega\times(0,T), \label{eqth3.1}
\end{eqnarray}
and assume further that $h(0)\neq0$. Then the inverse problem \eqref{eq2}-\eqref{eq6} and \eqref{eqth3.1} has at most one solution in the class of functions}
\begin{eqnarray}
u\in C^{1}([0,T];H^{1}(\Omega))\cap C^{2}([0,T];L^{2}(\Omega)), \quad f\in L^{2}(\Omega). \label{eqth3.2}
\end{eqnarray}
In Section 4, we shall consider the numerical determination of space-dependent forcing functions. But before we do that, in the next section we explain the finite-difference method (FDM) adopted for the numerical discretisation of the direct problem.
\section{Numerical Solution of the Direct Problem}
In this section, we consider the direct initial Dirichlet boundary value problem \eqref{eq1}, \eqref{eq2}-\eqref{eq4} for simplicity, in one-dimension, i.e. $n=1$ and $\Omega=(0,L)$ with $L>0$, when the force $F(x,t)$
is known and the displacement $u(x,t)$ is to be determined, namely,
\begin{eqnarray}
u_{tt}(x,t)=c^{2}u_{xx}(x,t)+F(x,t),
\quad (x,t)\in(0,L)\times(0,T], \label{eq11}
\end{eqnarray}
\begin{eqnarray}
u(x,0)=u_0(x), \quad u_{t}(x,0)=v_{0}(x), \quad x\in[0,L], \ \label{eq12}
\end{eqnarray}
\begin{eqnarray}
u(0,t)=P(0,t)=:P_{0}(t), \quad t\in(0,T], \label{eq13}
\end{eqnarray}
\begin{eqnarray}
u(L,t)=P(L,t)=:P_{L}(t), \quad t\in(0,T]. \label{eq14}
\end{eqnarray}
The compatibility conditions between \eqref{eq12}-\eqref{eq14} yield
\begin{eqnarray}
P_{0}(0)=u_{0}(0), \quad P_{L}(0)=u_{0}(L). \label{eq15}
\end{eqnarray}
The discrete form of our problem is as follows. We divide the domain $(0,L)\times(0,T)$ into $M$ and $N$ subintervals of equal lengths
$\delta x$ and $\delta t$ in space and time, respectively, where $\delta x=L/M$ and $\delta t=T/N$. We denote by $u_{i,j}:=u(x_{i},t_{j})$, where $x_{i}=i\delta x$, $t_{j}=j\delta t$, and $F_{i,j}:=F(x_{i},t_{j})$
for $i=\overline{0,M}$, $j=\overline{0,N}$. Then, a central-difference approximation to equations \eqref{eq11}-\eqref{eq14} at the mesh points $(x_{i},t_{j})=(i\delta x,j\delta t)$
of the rectangular mesh covering the solution domain $(0,L)\times(0,T)$ is, \cite{gds85},
\begin{eqnarray}
u_{i,j+1}=r^{2}u_{i+1,j}+2(1-r^{2})u_{i,j}+r^{2}u_{i-1,j}-u_{i,j-1}+(\delta t)^{2}F_{i,j}, \label{eq16}\\
\quad \quad \quad i=\overline{1,(M-1)}, \quad j=\overline{1,(N-1)},\notag
\end{eqnarray}
\begin{eqnarray}
u_{i,0}=u_{0}(x_{i}), \quad i=\overline{0,M}, \quad \frac{u_{i,1}-u_{i,-1}}{2(\delta t)}=v_{0}(x_{i}), \quad i=\overline{1,(M-1)}, \label{eq16.1}
\end{eqnarray}
\begin{eqnarray}
u_{0,j}=P_{0}(t_{j}), \quad u_{M,j}=P_{L}(t_{j}), \quad j=\overline{0,N},\label{eq16.2}
\end{eqnarray}
where $r=c(\delta t)/\delta x$. Equation \eqref{eq16} represents an explicit FDM which is stable if $r\leq1$, giving approximate values for the solution at mesh points along
$t=2\delta t,3\delta t,...,$ as soon as the solution at the mesh points along $t=\delta t$ has been determined. Putting
$j=0$ in equation \eqref{eq16} and using \eqref{eq16.1}, we obtain
\begin{eqnarray}
u_{i,1}=\frac{1}{2}r^{2}u_{0}(x_{i+1})+(1-r^{2})u_{0}(x_{i})+\frac{1}{2}r^{2}u_{0}(x_{i-1})+(\delta t)v_{0}(x_{i})+\frac{1}{2}(\delta t)^{2}F_{i,0}, \notag \\
\quad \quad \quad \quad \quad i=\overline{1,(M-1)}. \label{eq16.3}
\end{eqnarray}
The normal derivatives $\frac{\partial{u}}{\partial{\nu}}(0,t)$ and
$\frac{\partial{u}}{\partial{\nu}}(L,t)$ are calculated using the finite-difference approximations
\begin{eqnarray}
-\frac{\partial{u}}{\partial{x}}(0,t_{j})=-\frac{4u_{1,j}-u_{2,j}-3u_{0,j}}{2(\delta x)}, \quad \quad
\frac{\partial{u}}{\partial{x}}(L,t_{j})=\frac{3u_{M,j}-4u_{M-1,j}+u_{M-2,j}}{2(\delta x)}, \quad \notag \\ j=\overline{1,N}. \label{eq19}
\end{eqnarray}
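For completeness, the explicit scheme \eqref{eq16}-\eqref{eq16.3}, together with the flux approximation \eqref{eq19} at $x=0$, can be summarised in the following short Python sketch (an illustration only; the function and variable names are ours, and no claim is made that this reproduces the implementation used for the results reported below):
\begin{verbatim}
import numpy as np

def solve_direct(L, T, M, N, c, u0, v0, P0, PL, F):
    """Explicit FDM for u_tt = c^2 u_xx + F on (0,L)x(0,T];
    u0, v0, P0, PL and F are callables."""
    dx, dt = L / M, T / N
    r = c * dt / dx
    assert r <= 1.0, "stability condition r <= 1 violated"
    x, t = np.linspace(0.0, L, M + 1), np.linspace(0.0, T, N + 1)
    u = np.zeros((M + 1, N + 1))
    u[:, 0] = u0(x)                      # initial displacement
    u[0, :], u[M, :] = P0(t), PL(t)      # Dirichlet boundary data
    i = np.arange(1, M)
    # first time level, from the initial velocity
    u[i, 1] = (0.5 * r**2 * (u0(x[i + 1]) + u0(x[i - 1]))
               + (1 - r**2) * u0(x[i]) + dt * v0(x[i])
               + 0.5 * dt**2 * F(x[i], 0.0))
    # explicit time marching
    for j in range(1, N):
        u[i, j + 1] = (r**2 * (u[i + 1, j] + u[i - 1, j])
                       + 2 * (1 - r**2) * u[i, j] - u[i, j - 1]
                       + dt**2 * F(x[i], t[j]))
    # one-sided approximation of the flux -u_x(0,t_j), j = 1,...,N
    q0 = -(4 * u[1, 1:] - u[2, 1:] - 3 * u[0, 1:]) / (2 * dx)
    return x, t, u, q0
\end{verbatim}
With the data of Example 1 below, such a routine should reproduce the constant flux $q_{0}(t)=-\pi$ up to discretisation errors of the size reported in Table 2.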
\section{Numerical Solution of the Inverse Problem}
We now consider the inverse initial boundary value problem \eqref{eq2}-\eqref{eq6} and \eqref{eq9} in one-dimension, i.e. $n=1$ and $\Omega=(0,L)$, when both the force $f(x)$ and the displacement $u(x,t)$ are to be determined, from the governing equation (take $c=1$ for simplicity)
\begin{eqnarray}
u_{tt}(x,t)=u_{xx}(x,t)+f(x)h(x,t), \quad (x,t)\in(0,L)\times(0,T], \label{eqfsplite}
\end{eqnarray}
subject to the initial and boundary conditions \eqref{eq12}-\eqref{eq14} and the overspecified flux tension condition \eqref{eq6} at one end of the string, say at $x=0$, namely
\begin{eqnarray}
-\frac{\partial{u}}{\partial{x}}(0,t)=q(0,t)=:q_{0}(t), \quad t\in(0,T]. \label{equofxat0t}
\end{eqnarray}
In the case that $h$ is independent of $x$, according to Theorem 3, the inverse source problem \eqref{eq12}-\eqref{eq14}, \eqref{eqfsplite} and \eqref{equofxat0t} has at most one solution provided that $h\in C^{1}[0,T]$, $h(0)\neq0$ and $T>L$.
In discretised finite-difference form equations \eqref{eq12}-\eqref{eq14} and \eqref{eqfsplite} recast as equations \eqref{eq16.1}, \eqref{eq16.2},
\begin{eqnarray}
u_{i,j+1}-(\delta t)^{2}f_{i}h_{i,j}=r^{2}u_{i+1,j}+2(1-r^{2})u_{i,j}+r^{2}u_{i-1,j}-u_{i,j-1}, \label{eqfdmsplitef}\\
\quad \quad \quad i=\overline{1,(M-1)}, \quad j=\overline{1,(N-1)},\notag
\end{eqnarray}
and
\begin{eqnarray}
u_{i,1}-\frac{1}{2}(\delta t)^{2}f_{i}h_{i,0}=\frac{1}{2}r^{2}u_{0}(x_{i+1})+(1-r^{2})u_{0}(x_{i})+\frac{1}{2}r^{2}u_{0}(x_{i-1})+(\delta t)v_{0}(x_{i}), \label{eqatjiszero}\\
\quad \quad \quad i=\overline{1,(M-1)},\notag
\end{eqnarray}
where $f_{i}:=f(x_{i})$ and $h_{i,j}:=h(x_{i},t_{j})$.
Discretizing \eqref{equofxat0t} using \eqref{eq19} we also have
\begin{eqnarray}
q_{0}(t_{j})=-\frac{\partial{u}}{\partial{x}}(0,t_{j})=-\frac{4u_{1,j}-u_{2,j}-3u_{0,j}}{2(\delta x)},\quad j=\overline{1,N}. \label{equx0}
\end{eqnarray}
In practice, the additional observation \eqref{equx0} comes from measurement which is inherently contaminated with errors. We therefore model this by replacing the exact data $q_{0}(t)$ by the noisy data
\begin{eqnarray}
q_{0}^{\epsilon}(t_{j})=q_{0}(t_{j})+\epsilon_{j}, \ \ \ j=\overline{1,N},\label{eq4.5}
\end{eqnarray}
where $(\epsilon_{j})_{j=\overline{1,N}}$ are $N$ random noisy variables generated (using the MATLAB routine 'normrnd') from a Gaussian normal distribution with mean zero and standard deviation $\sigma=p\times \max_{t\in[0,T]}\left|q_{0}(t)\right|$, where $p$ represents the percentage of noise.
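In a Python reimplementation this perturbation could be generated, for instance, as follows (a sketch; the generator and its seed are our choice):
\begin{verbatim}
import numpy as np

def add_noise(q0, p, seed=0):
    """Perturb the flux data q0(t_j), j = 1,...,N, with additive Gaussian
    noise of mean zero and standard deviation sigma = p * max|q0|."""
    rng = np.random.default_rng(seed)
    sigma = p * np.max(np.abs(q0))
    return q0 + rng.normal(0.0, sigma, size=q0.shape)
\end{verbatim}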
Assembling \eqref{eqfdmsplitef}-\eqref{equx0} and using \eqref{eq16.1} and \eqref{eq16.2}, the discretised inverse problem reduces to solving a global linear system of $(M-1)\times N+N$ equations with $(M-1)\times N+(M-1)$ unknowns. Since this system is linear we can eliminate the unknowns $u_{i,j}$ for $i=\overline{1,(M-1)}$, $j=\overline{1,N}$, to reduce the problem to solving an ill-conditioned system of $N$ equations with $(M-1)$ unknowns of the generic form
\begin{eqnarray}
A\underline{f}=\underline{b}^{\epsilon}, \label{eqA}
\end{eqnarray}
where the right-hand side vector $\underline{b}^{\epsilon}$ incorporates the noisy measurement \eqref{eq4.5}. For a unique solution we require $N\geq M-1$. The method of least squares can be used to find an approximate solution to overdetermined systems. For the system of equations \eqref{eqA}, the least squares solution is given by
$\underline{f} =(A^{tr}A)^{-1}A^{tr}\underline{b}^{\epsilon}$, where the superscript $^{tr}$ denotes the transpose.
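Since the discretised problem is linear, in a reimplementation the matrix $A$ and the right-hand side $\underline{b}^{\epsilon}$ can conveniently be assembled by superposition: the $k$-th column of $A$ is the discrete flux response to the unit force $f=e_{k}$ (multiplied by $h$) with homogeneous initial and boundary data, while the contribution of the data themselves, computed with $f=0$, is subtracted from the measured flux. The following Python sketch (ours; it reuses the routine \texttt{solve\_direct} of Section 3 and is equivalent, by linearity, to the elimination described above) illustrates this, together with the computation of the condition number:
\begin{verbatim}
import numpy as np

def assemble_system(L, T, M, N, c, u0, v0, P0, PL, h, q0_measured):
    """Assemble A f = b by superposition, reusing solve_direct above."""
    x = np.linspace(0.0, L, M + 1)
    zero = lambda z: np.zeros_like(np.atleast_1d(z), dtype=float)
    A = np.zeros((N, M - 1))
    for k in range(1, M):
        e_k = np.zeros(M + 1)
        e_k[k] = 1.0
        # force f = e_k times h, homogeneous initial/boundary data
        Fk = lambda xx, tt: np.interp(xx, x, e_k) * h(xx, tt)
        _, _, _, A[:, k - 1] = solve_direct(L, T, M, N, c,
                                            zero, zero, zero, zero, Fk)
    # flux response of the initial/boundary data alone (f = 0)
    _, _, _, q_data = solve_direct(L, T, M, N, c, u0, v0, P0, PL,
                                   lambda xx, tt: 0.0 * np.atleast_1d(xx))
    b = q0_measured - q_data
    print("cond(A) =", np.linalg.cond(A))   # analogue of MATLAB's cond(A)
    return A, b
\end{verbatim}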
For the Examples 1-4 that will be considered in the next section, the condition numbers of the matrix $A$ in \eqref{eqA} (calculated using the command cond($A$) in MATLAB) given in Table 1 are between O($10^{3}$) and O($10^{7}$) for $M=N=80$. These large condition numbers indicate that the system of equations \eqref{eqA} is ill-conditioned. The ill-conditioned nature of the matrix $A$ can also be revealed by plotting its normalised singular values $sv(k)/sv(1)$ for $k=\overline{1,(M-1)}$, in Figure \ref{fig:normizedsvdEx1Ex2Ex3Ex4problem3}. These singular values have been calculated in MATLAB using the command svd($A$).
\begin{table}[H]
\caption{Condition number of matrix $A$ for Examples 1-4.}
\centering
\begin{tabular}{|c|c|c|c|c|c}
\hline
$$ & Example 1 & Example 2 & Example 3 & Example 4 \\
$N=M$ & $h(x,t)=1$ & $h(x,t)=1+t$ & $h(x,t)=1+x+t$ & $h(x,t)=t^{2}$ \\
\hline
$10$ &$28.55$ &$39.53$ & $33.73$ &$3394.55$ \\
\hline
$20$ &$110.98$ &$152.38$ & $131.29$ &$53232.36$ \\
\hline
$40$ &$437.93$ &$596.91$ & $518.51$ &$826827.12$ \\
\hline
$80$ &$1740.25$ &$2361.22$ & $2061.53$ &$12956244.4$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{Normalised singular values $sv(k)/sv(1)$, $k=\overline{1,(M-1)}$, of the matrix $A$ for Examples 1-4.}
\label{fig:normizedsvdEx1Ex2Ex3Ex4problem3}
\end{figure}
\section{Numerical Results and Discussion}
In all examples in this section we take, for simplicity, $c=L=T=1$. Although the geometrical condition $1=T>diam(\Omega)=L=1$ is slightly violated, it is expected that the uniqueness Theorems 1 and 3 still hold, especially in $n=1$-dimension and when the inverse problems are numerically discretised.
\subsection{Example 1 ($h(x,t)=1$)}
This is an example in which we take $h(x,t)=1$, a constant function, and consider first the direct problem \eqref{eq11}-\eqref{eq14} with the input data
\begin{eqnarray}
u(x,0)=u_{0}(x)=\sin(\pi x), \quad
u_{t}(x,0)=v_{0}(x)=1, \quad x\in[0,1], \label{eq20}
\end{eqnarray}
\begin{eqnarray}
u(0,t)=P_{0}(t)=t+\frac{t^{2}}{2},\quad u(1,t)=P_{L}(t)=t+\frac{t^{2}}{2}, \quad t\in(0,1], \label{eq21}
\end{eqnarray}
\begin{eqnarray}
F(x,t)=f(x)=1+\pi^{2}\sin(\pi x), \ \ \ x\in(0,1). \label{eq23}
\end{eqnarray}
The exact solution is given by
\begin{eqnarray}
u(x,t)&=&\sin(\pi x)+t+\frac{t^{2}}{2},\ \ \ (x,t)\in[0,1]\times[0,1]. \label{eq22}
\end{eqnarray}
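This can be verified symbolically in a few lines (our own check, using SymPy):
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
u = sp.sin(sp.pi * x) + t + t**2 / 2     # candidate exact solution
f = 1 + sp.pi**2 * sp.sin(sp.pi * x)     # space-dependent force

print(sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - f))  # 0 (here c = 1)
print(u.subs(t, 0), sp.diff(u, t).subs(t, 0))  # sin(pi*x) and 1
print(u.subs(x, 0), u.subs(x, 1))              # t + t**2/2 at both ends
\end{verbatim}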
The numerical and exact solutions for $u(x,t)$ at interior points are shown in Figure \ref{figexactnumericalerrorofuEx1problem3} and one can observe that an excellent agreement is obtained. Table 2 also gives the exact and numerical solutions for the flux tension \eqref{equofxat0t}. From this table it can be seen that the numerical results are convergent, as the mesh size decreases, and they are in very good agreement with the exact solution \eqref{eqq0}. The same excellent agreement has been obtained between the exact and numerical solutions for the flux tension at $x=1$; these results are therefore not illustrated.
\begin{figure}
\caption{Exact and numerical solutions for the displacement $u(x,t)$ and the absolute error between them for the direct problem obtained with $N=M=80$, for Example 1.}
\label{figexactnumericalerrorofuEx1problem3}
\end{figure}
\begin{table}[H]
\caption{Exact and numerical solutions for the flux tension at $x=0$, for the direct problem of Example 1.}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$t$ &$0.1$ &$0.2$ &$...$ &$0.8$ &$0.9$ &$1$ \\
\hline
$N=M=10$ &$-3.2427$ &$-3.2465$ &$...$ &$-3.2899$ &$-3.2937$ &$-3.295$\\
\hline
$N=M=20$ &$-3.1675$ &$-3.1685$ &$...$ &$-3.1790$ &$-3.1799$ &$-3.1802$\\
\hline
$N=M=40$ &$-3.1481$ &$-3.1483$ &$...$ &$-3.1510$ &$-3.1512$ &$-3.1513$\\
\hline
$N=M=80$ &$-3.1432$ &$-3.1433$ &$...$ &$-3.1439$ &$-3.1440$ &$-3.1440$\\
\hline
$exact$ &$-3.1416$ &$-3.1416$ &$...$ &$-3.1416$ &$-3.1416$ &$-3.1416$\\
\hline
\end{tabular}
\end{table}
The inverse problem given by equations \eqref{eqfsplite} with $h(x,t)=1$, \eqref{eq20}, \eqref{eq21} and
\begin{eqnarray}
-\frac{\partial{u}}{\partial{x}}(0,t)=q_{0}(t)=-\pi, \quad t\in[0,1], \label{eqq0}
\end{eqnarray}
is considered next. Since $h(0)=1\neq0$, Theorem 3 ensures the uniqueness of the solution in the class of functions \eqref{eqth3.2}, which in $n=1$-dimension rewrites as
\begin{eqnarray}
u\in C^{1}([0,T];H^{1}(0,L))\cap C^{2}([0,T];L^{2}(0,L)), \quad f\in L^{2}(0,L). \label{eqth3.2.1}
\end{eqnarray}
In fact, the exact solution $(f(x),u(x,t))$ of this inverse problem is given by equations \eqref{eq23} and \eqref{eq22}, respectively. Numerically, we employ the FDM for discretising the inverse problem, as described in Section 4.
\subsubsection{Exact Data}
We first consider the case of exact data, i.e. $p=0$ and hence $\underline{\epsilon}=\underline{0}$ in \eqref{eq4.5}. The numerical results corresponding to $f(x)$ and $u(x,t)$ are plotted in Figures \ref{fig:inverseproblemfexactnumericalEx1problem3} and \ref{fig:inverseproblemabsoulterroeofuEx1problem3}, respectively. From these figures it can be seen that convergent and accurate numerical solutions are obtained.
\begin{figure}
\caption{The exact (---) solution \eqref{eq23} and the numerical solutions for $f(x)$ obtained with $N=M\in\lbrace10,20,40,80\rbrace$ and no regularization, for exact data, for the inverse problem of Example 1.}
\label{fig:inverseproblemfexactnumericalEx1problem3}
\end{figure}
\begin{figure}
\caption{The absolute errors between the exact and numerical displacement $u(x,t)$ obtained with $N=M\in\lbrace10,20,40,80\rbrace$ and no regularization, for exact data, for the inverse problem of Example 1.}
\label{fig:inverseproblemabsoulterroeofuEx1problem3}
\end{figure}
\subsubsection{Noisy Data}
In order to investigate the stability of the numerical solution we include some ($p=1\%$) noise into the input data \eqref{equx0}, as given by equation \eqref{eq4.5}. The numerical solution for $f(x)$ obtained with $N=M=80$ and no regularization is plotted in Figure \ref{fig:inverseproblemaddnoiseatlambda0Ex1problem3}. It can be clearly seen that very high oscillations appear. This clearly shows that the inverse force problem \eqref{eq12}-\eqref{eq14}, \eqref{eqfsplite} and \eqref{equofxat0t} is ill-posed. In order to deal with this instability we employ the (zeroth-order) Tikhonov regularization which yields the solution
\begin{eqnarray}
\underline{f}_{\lambda}=(A^{tr}A+\lambda I)^{-1}A^{tr}\underline{b}^{\epsilon},
\label{eqregular}
\end{eqnarray}
where $I$ is the identity matrix and $\lambda>0$ is a regularization parameter to be prescribed. Including regularization we obtain the numerical solution \eqref{eqregular} whose accuracy error, as a function of $\lambda$, is plotted in Figure \ref{fig:normoffatexactandnumericalEx1problem3}. From this figure it can be
seen that the minimum of the error occurs around $\lambda=10^{-6}$. Clearly, this
argument cannot be used as a suitable choice for the regularization parameter $\lambda$ in the absence of an analytical (exact) solution \eqref{eq23} being available. However, one possible criterion for choosing $\lambda$ is given by the L-curve method, \cite{hansen2001},
\begin{figure}
\caption{The exact solution \eqref{eq23} and the unregularized numerical solution for $f(x)$ obtained with $N=M=80$, for $p=1\%$ noisy data, for the inverse problem of Example 1.}
\label{fig:inverseproblemaddnoiseatlambda0Ex1problem3}
\end{figure}
\ \ \quad \\
which plots the residual norm $||A\underline{f}_{\lambda}-\underline{b}^{\epsilon}||$ versus the solution norm $||\underline{f}_{\lambda}||$ for various values of $\lambda$. This is shown in Figure \ref{fig:lcurvein0thEx1problem3} for various values of $\lambda \in \lbrace10^{-9},5\times10^{-9},10^{-8},...,10^{-2}\rbrace.$ The portion to the right of the curve corresponds to large values of $\lambda$ which make the solution oversmooth, whilst the portion to the left of the curve corresponds to small values of $\lambda$ which make the solution undersmooth. The compromise is then achieved around the corner region of the L-curve where the aforementioned portions meet. Figure \ref{fig:lcurvein0thEx1problem3} shows that this corner region includes the values around $\lambda=10^{-6}$, which is a good prediction of the optimal value demonstrated in Figure \ref{fig:normoffatexactandnumericalEx1problem3}.
Finally, Figure \ref{fig:optimaloffwithexactEx1problem3} shows the regularized numerical solution for $f(x)$ obtained with various values of the regularization parameter $\lambda \in \lbrace 10^{-7},{10}^{-6},10^{-5}\rbrace$ for $p=1\%$ noisy data. From this figure it can be seen that the value of the regularization parameter $\lambda$ can also be chosen by trial and error. By plotting the numerical solution for various values of $\lambda$ we can infer when the instability starts to kick in. For example, in Figure \ref{fig:optimaloffwithexactEx1problem3}, the value of $\lambda=10^{-5}$ is too large and the solution is oversmooth, whilst the value of $\lambda=10^{-7}$ is too small and the solution becomes unstable. We could therefore inspect the value of $\lambda=10^{-6}$ and conclude that this is a reasonable choice of the regularization parameter which balances the smoothness with the instability of the solution.
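In a Python reimplementation, the regularized solution \eqref{eqregular} and the data for the L-curve can be computed along the following lines (a sketch with our own naming):
\begin{verbatim}
import numpy as np

def tikhonov0(A, b, lam):
    """Zeroth-order Tikhonov solution (A^T A + lam I)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def l_curve_points(A, b, lambdas):
    """Residual norm ||A f_lam - b|| and solution norm ||f_lam||
    for a range of regularization parameters (used for the L-curve)."""
    points = []
    for lam in lambdas:
        f = tikhonov0(A, b, lam)
        points.append((np.linalg.norm(A @ f - b), np.linalg.norm(f)))
    return points
\end{verbatim}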
\begin{figure}
\caption{The accuracy error $||\underline{f}_{\lambda}-\underline{f}_{exact}||$, as a function of $\lambda$, obtained with $N=M=80$ and $p=1\%$ noise, for the inverse problem of Example 1.}
\label{fig:normoffatexactandnumericalEx1problem3}
\end{figure}
\begin{figure}
\caption{The L-curve for the Tikhonov regularization, for $N=M=80$ and $p=1\%$ noise, for the inverse problem of Example 1.}
\label{fig:lcurvein0thEx1problem3}
\end{figure}
\begin{figure}
\caption{The exact solution \eqref{eq23} and the regularized numerical solutions for $f(x)$ obtained with $\lambda \in \lbrace 10^{-7},10^{-6},10^{-5}\rbrace$, for $N=M=80$ and $p=1\%$ noisy data, for the inverse problem of Example 1.}
\label{fig:optimaloffwithexactEx1problem3}
\end{figure}
\subsection{Example 2 ($h(x,t)=1+t$)}
This is an example in which we take $h(x,t)=1+t$, a linear function of $t$ independent of $x$, and consider first the direct problem \eqref{eq12}-\eqref{eq14} and \eqref{eqfsplite} with the input data
\begin{eqnarray}
u(x,0)=u_{0}(x)=0, \quad
u_{t}(x,0)=v_{0}(x)=0, \quad x\in[0,1], \label{eq30}
\end{eqnarray}
\begin{eqnarray}
u(0,t)=P_{0}(t)=0,\quad u(1,t)=P_{L}(t)=0, \quad t\in(0,1], \label{eq31}
\end{eqnarray}
\begin{eqnarray}
f(x)=
\begin{cases}
x \ \ \ \ \ \ \ \ \ \ \ \text{if}\ \ \ 0\leq x \leq \frac{1}{2},
\\
1-x \ \ \ \ \ \ \text{if}\ \ \ \frac{1}{2}<x\leq1. \label{eq32}
\end{cases}
\end{eqnarray}
As in Example 1, since $h(0)=1\neq0$, Theorem 3 ensures the uniqueness of the solution in the class of the functions \eqref{eqth3.2.1}. Also, remark that for this example, the force \eqref{eq32} has a triangular shape, being continuous but non-differentiable at the peak $x=1/2$. This example also does not possess an explicit analytical solution for the displacement $u(x,t)$.
The numerical solutions for the displacement $u(x,t)$ at interior points are shown in Figure \ref{fig:directproblemnumericalsolutionofuEx2problem3}. The flux tension \eqref{equofxat0t} is presented in Table 3 and Figure \ref{fig:numericalsolutionofuxatoEx2problem3}. From these figures and table it can be seen that convergent numerical solutions for both $u(x,t)$ and $q_{0}(t)$ are obtained, as $N=M$ increases.
\begin{figure}
\caption{Numerical solutions for the displacement $u(x,t)$ obtained using the direct problem with various $N=M\in \lbrace 10,20,40,80 \rbrace$ in cases (a)-(d), respectively, for Example 2.}
\label{fig:directproblemnumericalsolutionofuEx2problem3}
\end{figure}
\begin{table}[H]
\caption{The numerical solutions for the flux tension at $x=0$, for the direct problem of Example 2.}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$t$ &$0.1$ &$0.2$ &$...$ &$0.8$ &$0.9$ &$1$ \\
\hline
$N=M=10$ &$-0.00500$ &$-0.02100$ &$...$ &$-0.31900$ &$-0.35900$ &$-0.39000$\\
\hline
$N=M=20$ &$-0.00512$ &$-0.02125$ &$...$ &$-0.3095$ &$-0.34862$ &$-0.37875$\\
\hline
$N=M=40$ &$-0.00515$ &$-0.02131$ &$...$ &$-0.30712$ &$-0.34603$ &$-0.37593$\\
\hline
$N=M=80$ &$-0.00516$ &$-0.02132$ &$...$ &$-0.30653$ &$-0.34538$ &$-0.37523$\\
\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{Numerical solution for the flux tension at $x=0$, for various $N=M\in\lbrace5,10,20,80\rbrace$, for the direct problem of Example 2.}
\label{fig:numericalsolutionofuxatoEx2problem3}
\end{figure}
Consider now the inverse problem given by equations \eqref{eqfsplite} with $h(x,t)=1+t$, equations \eqref{eq30}, \eqref{eq31} and \eqref{equofxat0t} with $q_{0}(t)$ numerically simulated and given in Figure \ref{fig:numericalsolutionofuxatoEx2problem3} for $N=M=80$. We further perturb this flux by adding to it some $p\in \lbrace1,3,5\rbrace\%$ noise, as given by equation \eqref{eq4.5}. The numerical solution for $f(x)$ obtained with $N=M=80$ and no regularization has been found highly oscillatory and unstable, similar to that obtained in Figure \ref{fig:inverseproblemaddnoiseatlambda0Ex1problem3}, and therefore is not presented. In order to deal with this instability we employ and test the Tikhonov regularization of various orders, namely zeroth, first and second, which yields the solution, \cite{twos},
\begin{eqnarray}
\underline{f}_{\lambda}=(A^{tr}A+\lambda D^{tr}_{k}D_{k})^{-1}A^{tr}\underline{b}^{\epsilon},
\label{eqregularex2}
\end{eqnarray}
where $D_{k}$ is the regularization derivative operator of order $k\in \lbrace0,1,2\rbrace$ and $\lambda \geq 0$ is the regularization parameter. The regularization derivative operator $D_{k}$ imposes continuity, i.e. class $C^{0}$ for $k=0$, first-order smoothness, i.e. class $C^{1}$ for $k=1$, or second-order smoothness, i.e. class $C^{2}$ for $k=2$. Thus $D_{0}=I$,
\\
\\
$D_{1}=
\begin{pmatrix}
1 & -1 & 0 &0 & ... & 0 \\
0 & 1 & -1 & 0& ... & 0 \\
... & ... & ... & ... & ...&...\\
0 & 0 & ... & 0 & 1 &-1
\end{pmatrix}$, \ $D_{2}=
\begin{pmatrix}
1 & -2 & 1 & 0 & 0 & ... & 0 \\
0 & 1 & -2 & 1 & 0 &... & 0 \\
... & ... & ... & ... & ...& ... & ...&\\
0 & 0 & ... & 0 & 1 & -2 & 1
\end{pmatrix}$.
\\
\\
Observe that for $k=0$, equation \eqref{eqregularex2} becomes the zeroth-order regularized solution \eqref{eqregular} which was previously employed in Example 1 in order to obtain a stable solution.
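A possible Python sketch of \eqref{eqregularex2} (ours; the difference matrices below agree with $D_{1}$ and $D_{2}$ displayed above up to an irrelevant overall sign, so that $D_{k}^{tr}D_{k}$ is unchanged) is:
\begin{verbatim}
import numpy as np

def difference_operator(k, m):
    """D_0 = I; D_1, D_2 = first/second-order difference matrices
    (up to sign, which does not affect D_k^T D_k)."""
    D = np.eye(m)
    for _ in range(k):
        D = D[1:, :] - D[:-1, :]    # one forward difference per step
    return D

def tikhonov(A, b, lam, k):
    """Tikhonov solution of order k: (A^T A + lam D_k^T D_k)^{-1} A^T b."""
    D = difference_operator(k, A.shape[1])
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)
\end{verbatim}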
Including regularization we obtain the solution \eqref{eqregularex2} whose accuracy error, as a function of $\lambda$, is plotted in Figure \ref{fig:optimaloffwithexactone0th1st2ndEx2problem3} for various orders of regularization $k\in \lbrace 0,1,2 \rbrace$. From this figure it can be seen that there are wide ranges for choosing the regularization parameters in the valleys of minima of the plotted error curves. The minimum points $\lambda_{opt}$ and the corresponding accuracy errors are listed in Table 4. The L-curve criterion for choosing $\lambda$ in the zeroth-order regularisation is shown in Figure \ref{fig:lcurve0thEx2problem3} for various values of $\lambda \in \lbrace 10^{-9},10^{-8},...,10^{-2}\rbrace$ and for $p\in \lbrace1,3,5\rbrace\%$ noisy data. This figure shows that the L-corner
region includes the values around $\lambda=10^{-6}$ for $p=1\%$, $\lambda=10^{-5}$ for $p=3\%$, and $\lambda=10^{-5}$ for $p=5\%$. Similar L-curves, which plot the penalised solution norm $||D_{k}\underline{f}_{\lambda}||$ versus the residual norm $||A\underline{f}_{\lambda}-\underline{b}^{\epsilon}||$, have been obtained for the first and second-order regularizations and therefore they are not illustrated.
Figure \ref{fig:optimaloffwithexact0th1st2ndEx2problem3} shows the regularized numerical solutions \eqref{eqregularex2} for $f(x)$ obtained with the values of the regularization parameter $\lambda_{opt}$ given in Table 4 for $p\in\lbrace1,3,5\rbrace\%$ noisy data. From this figure it can be seen that the numerical results are stable and they become more accurate as the amount of noise $p$ decreases.
\begin{figure}
\caption{The accuracy error $||\underline{f}_{\lambda}-\underline{f}_{exact}||$, as a function of $\lambda$, for the regularization orders $k\in\lbrace0,1,2\rbrace$, obtained with $N=M=80$, for the inverse problem of Example 2.}
\label{fig:optimaloffwithexactone0th1st2ndEx2problem3}
\end{figure}
\begin{figure}
\caption{The L-curve for the zeroth-order Tikhonov regularization, for $N=M=80$ and $p\in \lbrace1,3,5\rbrace\%$ noise, for the inverse problem of Example 2.}
\label{fig:lcurve0thEx2problem3}
\end{figure}
\begin{table}[H]
\caption{The accuracy error $||\underline{f}_{numerical}-\underline{f}_{exact}||$ for various orders of regularization and percentages of noise $p$, for the inverse problem of Example 2. The values of $\lambda_{opt}$ are also included.}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Regularization &$p=1\%$ &$p=3\%$ &$p=5\%$ \\
\hline
zeroth &$\lambda_{opt}=10^{-6}$ &$\lambda_{opt}=10^{-5}$ &$\lambda_{opt}=10^{-5}$\\
$$ & $0.2987$ & $0.5389$ & $0.6259$ \\
\hline
first &$\lambda_{opt}=10^{-4}$ &$\lambda_{opt}=10^{-4}$ &$\lambda_{opt}=10^{-3}$\\
$$ & $0.1433$ & $0.3112$ & $0.4494$ \\
\hline
second &$\lambda_{opt}=10^{-3}$ &$\lambda_{opt}=10^{-1}$ &$\lambda_{opt}=10^{-1}$\\
$$ & $0.1264$ & $0.2876$ & $0.3576$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{The exact solution \eqref{eq32} and the regularized numerical solutions for $f(x)$ obtained with the values of $\lambda_{opt}$ given in Table 4, for $p\in\lbrace1,3,5\rbrace\%$ noisy data, for the inverse problem of Example 2.}
\label{fig:optimaloffwithexact0th1st2ndEx2problem3}
\end{figure}
\subsection{Example 3 ($h(x,t)=1+x+t$)}
All the data and details of the numerical implementation are the same as those for Example 2, except that for the present example $h(x,t)=1+x+t$ in equation \eqref{eqfsplite}. Since in this case $h$ depends also on $x$ we cannot apply Theorem 3, but we can apply instead Theorem 1, because $H=0$ in \eqref{eqth1.1} is sufficiently small. This then ensures the uniqueness of the solution in the class of functions \eqref{eqth1.2}, which in $n=1$-dimension reads as
\begin{eqnarray}
u\in L^{2}(0,T;H^{1}(0,L)), \quad u_{t}\in L^{2}(0,T;L^{2}(0,L)), \quad u_{tt}\in L^{2}(0,T;(H^{1}(0,L))^{\prime}), \notag \\
f\in L^{2}(0,L). \quad \quad \qquad \quad \quad \qquad \quad \quad \qquad \quad \quad \qquad \quad \quad \qquad \quad \quad \qquad \quad \quad \qquad \label{eqth1.2.1}
\end{eqnarray}
Figure \ref{fig:inverseoptimalfwithexact0th1st2ndEx3problem3} shows the regularized numerical solution for $f(x)$ obtained with various values of the regularization parameters listed in Table 5 for $p\in\lbrace1,3,5\rbrace\%$ noisy data. From this figure it can be seen that stable numerical solutions are obtained.
\begin{figure}
\caption{The exact solution \eqref{eq32} and the regularized numerical solutions for $f(x)$ obtained with the values of $\lambda_{opt}$ given in Table 5, for $p\in\lbrace1,3,5\rbrace\%$ noisy data, for the inverse problem of Example 3.}
\label{fig:inverseoptimalfwithexact0th1st2ndEx3problem3}
\end{figure}
\begin{table}[H]
\caption{The accuracy error $||\underline{f}_{numerical}-\underline{f}_{exact}||$ for various orders of regularization and percentages of noise $p$, for the inverse problem of Example 3. The values of $\lambda_{opt}$ are also included.}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Regularization &$p=1\%$ &$p=3\%$ &$p=5\%$ \\
\hline
zeroth &$\lambda_{opt}=10^{-5}$ &$\lambda_{opt}=10^{-5}$ &$\lambda_{opt}=10^{-5}$\\
$$ & $0.35490$ & $0.49093$ & $0.65283$ \\
\hline
first &$\lambda_{opt}=10^{-4}$ &$\lambda_{opt}=10^{-3}$ &$\lambda_{opt}=10^{-3}$\\
$$ & $0.14821$ & $0.35679$ & $0.45932$ \\
\hline
second &$\lambda_{opt}=10^{-3}$ &$\lambda_{opt}=10^{-1}$ &$\lambda_{opt}=10^{-1}$\\
$$ & $0.13326$ & $0.27424$ & $0.39021$ \\
\hline
\end{tabular}
\end{table}
\subsection{Example 4 ($h(x,t)=t^{2}$)}
All the details are the same as those for Example 2, except that for the present example $h(x,t)=t^{2}$ in equation \eqref{eqfsplite} is independent of $x$, but is a nonlinear function of $t$. Furthermore, one can see that $h(0)=0$ and also, condition \eqref{eqth1.1} is violated. Hence, we cannot apply the uniqueness Theorems 1-3 and, in this case, we expect a more severe situation than in the previous examples to occur. This is reflected in the very large condition numbers of the matrix $A$ reported in Table 1 for Example 4 in comparison with the milder condition numbers for Examples 1-3.
The numerical solution for the flux tension \eqref{equofxat0t} obtained by solving the direct problem given by equation \eqref{eqfsplite} with $h(x,t)=t^{2}$ and equations \eqref{eq30}-\eqref{eq32} is illustrated in Figure \ref{fig:directproblemnumericalsolutionuxat0Ex4problem3} for various mesh sizes. From this figure it can be seen that a rapidly convergent numerical solution is achieved. As in Example 2, we add noise to the numerical flux $q_{0}(t)$ obtained with the finer mesh $N=M=80$.
\begin{figure}
\caption{Numerical solution for the flux tension at $x=0$, for various $N=M\in\lbrace5,10,20,80\rbrace$, for the direct problem of Example 4.}
\label{fig:directproblemnumericalsolutionuxat0Ex4problem3}
\end{figure}
Figure \ref{fig:optimaloffwithexact0th1st2ndEx4problem3} shows the regularized numerical solution for $f(x)$ obtained with various regularization parameters listed in Table 6 for
$p\in \lbrace1,3,5\rbrace\%$ noisy data. As in all the previous examples, stable numerical solutions are obtained. However, in contrast to Examples 2 and 3, the first-order regularization seems to perform better than the second-order regularization, with the latter also presenting the unexpected behaviour of an increase in accuracy when $p$ increases from $1\%$ to $3\%$. These conclusions may be attributed to the severe ill-posedness of Example 4 which, as discussed above, in addition to being ill-conditioned, fails to satisfy the conditions for uniqueness of solution of Theorems 1-3.
\begin{figure}
\caption{The exact solution \eqref{eq32} and the regularized numerical solutions for $f(x)$ obtained with the values of $\lambda_{opt}$ given in Table 6, for $p\in\lbrace1,3,5\rbrace\%$ noisy data, for the inverse problem of Example 4.}
\label{fig:optimaloffwithexact0th1st2ndEx4problem3}
\end{figure}
\begin{table}[H]
\caption{The accuracy error $||\underline{f}_{numerical}-\underline{f}_{exact}||$ for various orders of regularization and percentages of noise $p$, for the inverse problem of Example 4. The values of $\lambda_{opt}$ are also included.}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Regularization &$p=1\%$ &$p=3\%$ &$p=5\%$ \\
\hline
zeroth &$\lambda_{opt}=10^{-8}$ &$\lambda_{opt}=10^{-8}$ &$\lambda_{opt}=10^{-8}$\\
$$ & $0.5947$ & $0.8082$ & $1.0863$ \\
\hline
first &$\lambda_{opt}=10^{-6}$ &$\lambda_{opt}=10^{-6}$ &$\lambda_{opt}=10^{-5}$\\
$$ & $0.1826$ & $0.2668$ & $0.4053$ \\
\hline
second &$\lambda_{opt}=10^{-5}$ &$\lambda_{opt}=10^{-4}$ &$\lambda_{opt}=10^{-4}$\\
$$ & $0.4313$ & $0.2178$ & $0.6912$ \\
\hline
\end{tabular}
\end{table}
\section{Extension to Multiple Sources}
In this section, we consider an extension of the inverse space-dependent problem, in the situation when
\begin{eqnarray}
F(\underline{x},t)=f(\underline{x})h(\underline{x},t)+g(\underline{x})\theta(\underline{x},t), \quad (\underline{x},t)\in\Omega\times(0,T]. \label{eqextension1}
\end{eqnarray}
where $h(\underline{x},t)$ and $\theta(\underline{x},t)$ are given functions and $f(\underline{x})$ and $g(\underline{x})$ are space-dependent unknown force components to be determined. Under the assumption \eqref{eqextension1}, equation \eqref{eq1} in one-dimension, i.e. $n=1$ and $\Omega=(0,L)$, becomes
\begin{eqnarray}
u_{tt}(x,t)=u_{xx}(x,t)+f(x)h(x,t)+g(x)\theta(x,t), \quad (x,t)\in(0,L)\times(0,T]. \label{eqfandgsplite}
\end{eqnarray}
This has to be solved subject to the initial and boundary conditions \eqref{eq12}-\eqref{eq14} and the overspecified flux tensions at both ends of the string, namely, \eqref{equofxat0t} and
\begin{eqnarray}
\frac{\partial{u}}{\partial{x}}(L,t)=q(L,t)=:q_{L}(t), \quad t\in(0,T]. \label{equofxatLt}
\end{eqnarray}
Then uniqueness of solution still holds in the case $h(x,t)=1$, $\theta(x,t)=t$, see Theorem 8 of \cite{engl94}, but for more general cases, e.g. $h(x,t)=1$, $\theta(x,t)=t^{2}$, the solution ($f(x),g(x),u(x,t)$) is not unique, see the counterexample to uniqueness given in \cite{engl94}.
In discretised finite-difference form equations \eqref{eq12}-\eqref{eq14} and \eqref{eqfandgsplite} recast as equations \eqref{eq16.1}, \eqref{eq16.2},
\begin{eqnarray}
u_{i,j+1}-(\delta t)^{2}f_{i}h_{i,j}-(\delta t)^{2}g_{i}\theta_{i,j}=r^{2}u_{i+1,j}+2(1-r^{2})u_{i,j}+r^{2}u_{i-1,j}-u_{i,j-1}, \label{eqfandgdmsplitef}\\
\quad \quad \quad i=\overline{1,(M-1)}, \quad j=\overline{1,(N-1)},\notag
\end{eqnarray}
and
\begin{eqnarray}
&&u_{i,1}-\frac{1}{2}(\delta t)^{2}f_{i}h_{i,0}-\frac{1}{2}(\delta t)^{2}g_{i}\theta_{i,0}=\frac{1}{2}r^{2}u_{0}(x_{i+1})+(1-r^{2})u_{0}(x_{i})+\frac{1}{2}r^{2}u_{0}(x_{i-1}) \quad \quad \quad \notag \\
&&+(\delta t)v_{0}(x_{i}), \quad \quad \quad i=\overline{1,(M-1)}.
\label{eqfandgdmsplitefjiszero}
\end{eqnarray}
where $f_{i}:=f(x_{i})$, $h_{i,j}:=h(x_{i},t_{j})$, $g_{i}:=g(x_{i})$ and $\theta_{i,j}:=\theta(x_{i},t_{j})$.
Discretizing \eqref{equofxat0t} and \eqref{equofxatLt}, using \eqref{eq19}, we also have \eqref{equx0} and
\begin{eqnarray}
q_{L}(t_{j})=\frac{\partial{u}}{\partial{x}}(L,t_{j})=\frac{3u_{M,j}-4u_{M-1,j}+u_{M-2,j}}{2(\delta x)},\quad j=\overline{1,N}. \label{equxL}
\end{eqnarray}
In practice, the additional observations \eqref{equx0} and \eqref{equxL} come from measurement which is inherently contaminated with errors. We therefore model this by replacing the exact data $q_{0}(t)$ and $q_{L}(t)$ by the noisy data \eqref{eq4.5} and
\begin{eqnarray}
q_{L}^{\epsilon}(t_{j})=q_{L}(t_{j})+\tilde{\epsilon}_{j}, \ \ \ j=\overline{1,N},\label{eq4.5uxatLEx5}
\end{eqnarray}
where $(\tilde{\epsilon}_{j})_{j=\overline{1,N}}$ are $N$ random noisy variables generated from a Gaussian normal distribution with mean zero and standard deviation $\tilde{\sigma}=p\times \max_{t\in[0,T]}\left|q_{L}(t)\right|$.
Assembling \eqref{equx0}, \eqref{eqfandgdmsplitef}-\eqref{equxL}, and using \eqref{eq16.1} and \eqref{eq16.2}, the discretised inverse problem reduces to solving a global linear system of $(M-1)\times N+2N$ equations with $(M-1)\times N+2(M-1)$ unknowns. Since this system is linear we can eliminate the unknowns $u_{i,j}$ for $i=\overline{1,(M-1)}$, $j=\overline{1,N}$, to reduce the problem to solving an ill-conditioned system of $2N$ equations with $2(M-1)$ unknowns of the form
\begin{eqnarray}
A(\underline{f},\underline{g})=\underline{b}^{\epsilon}. \label{eqAwithfandg}
\end{eqnarray}
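By the same superposition device used in Section 4, the two blocks of \eqref{eqAwithfandg} (the flux responses at the two ends to the unit components of $f$ and of $g$) can be assembled column by column, and the stacked unknown $(\underline{f},\underline{g})$ recovered by Tikhonov regularization; a minimal Python sketch (ours, assuming the blocks $A_{f}$ and $A_{g}$ have already been formed) is:
\begin{verbatim}
import numpy as np

def solve_two_forces(Af, Ag, b, lam, Dk):
    """Regularized solution of [Af | Ag] (f; g) = b, penalizing the
    derivative operator Dk on f and on g separately."""
    A = np.hstack([Af, Ag])            # 2N x 2(M-1)
    D = np.kron(np.eye(2), Dk)         # block-diagonal penalty
    fg = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)
    m = Af.shape[1]
    return fg[:m], fg[m:]
\end{verbatim}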
\subsection{Example 5}
This is an example in which we take $c=L=T=1$, $h(x,t)=1$ and $\theta(x,t)=t$ and the input data
\begin{eqnarray}
u(x,0)=u_{0}(x)=\sin(\pi x), \quad
u_{t}(x,0)=v_{0}(x)=x^{2}+1, \quad x\in[0,1], \label{eq20Ex5}
\end{eqnarray}
\begin{eqnarray}
u(0,t)=P_{0}(t)=t+\frac{t^{2}}{2},\quad u(1,t)=P_{L}(t)=2t+\frac{t^{2}}{2}, \quad t\in(0,1], \label{eq21Ex5}
\end{eqnarray}
\begin{eqnarray}
-\frac{\partial{u}}{\partial{x}}(0,t)=q_{0}(t)=-\pi, \ \ \frac{\partial{u}}{\partial{x}}(1,t)=q_{L}(t)=2t-\pi, \ \ t\in(0,1]. \label{eqq0andqL}
\end{eqnarray}
The exact solution is given by
\begin{eqnarray}
f(x)=1+\pi^{2}\sin(\pi x),\ \ g(x)=-2,\ \ u(x,t)=x^{2}t+\sin(\pi x)+t+\frac{t^{2}}{2},\notag \\
(x,t)\in[0,1]\times[0,1]. \label{eq22Ex5}
\end{eqnarray}
We first consider the case of exact data, i.e. $p=0$ and hence $\underline{\epsilon}=\underline{\tilde{\epsilon}}=\underline{0}$ in \eqref{eq4.5} and \eqref{eq4.5uxatLEx5}. The numerical results corresponding to $f(x)$ and $g(x)$ are plotted in Figure \ref{fig:inverseproblemexactsolutionoffandgwithnumericalEx5problem3}. From this figure it can be seen that convergent and accurate numerical solutions are obtained, especially for $f(x)$, although for $g(x)$ some inaccuracies are manifested near the end points $x\in\lbrace0,1\rbrace$.
We include some ($p=1\%$) noise into the input data \eqref{equx0} and \eqref{equxL}, as given by equations \eqref{eq4.5} and \eqref{eq4.5uxatLEx5}. Figure \ref{fig:optimalfandgwithexactin0th1st2ndEx5problem3} shows the regularized numerical solutions for $f(x)$ and $g(x)$ obtained with various regularizations and one can observe that reasonably stable numerical solutions are obtained.
\begin{figure}
\caption{The exact (---) solutions \eqref{eq22Ex5} and the numerical solutions for $f(x)$ and $g(x)$ obtained with no regularization, for exact data, for the inverse problem of Example 5.}
\label{fig:inverseproblemexactsolutionoffandgwithnumericalEx5problem3}
\end{figure}
\begin{figure}
\caption{The exact (---) solutions \eqref{eq22Ex5} and the regularized numerical solutions for $f(x)$ and $g(x)$ obtained with various orders of regularization, for $p=1\%$ noisy data, for the inverse problem of Example 5.}
\label{fig:optimalfandgwithexactin0th1st2ndEx5problem3}
\end{figure}
\section{Conclusions}
In this paper, the determination of space-dependent forces from boundary Cauchy data in the wave equation has been investigated. The solution of this linear inverse problem is unique, but is still ill-posed since small errors in the input flux cause large errors in the output force. The problem is discretised numerically using the FDM, and in order to stabilise the solution, the Tikhonov regularization method has been employed. The choice of the regularization parameter was based on the L-curve criterion.
Numerical examples indicate that the method can accurately recover the unknown space-dependent force. The time-dependent force identification will be investigated in Part II, \cite{husseinlesnic}.\\
\\
\textbf{\large Acknowledgments}\\
S.O. Hussein would like to thank the Human Capacity Development Programme (HCDP) in Kurdistan for their financial support in this research.
\end{document}
|
\begin{document}
\title{Contactomorphisms of the sphere without translated points}
\begin{abstract}
We construct a contactomorphism of $(S^{2n-1},\alpha_{\mathrm{std}})$ which does not have any translated points, providing a negative answer to a conjecture posed in \cite{sandon_13}.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Let $(Y^{2n-1},\alpha)$ be a contact manifold with a choice of contact form $\alpha$. Recall that this means that $\alpha$ is a $1$-form so that $\alpha\wedge \mathrm{d}\alpha^{n-1}$ is a volume form. A \emph{contactomorphism} is a diffeomorphism $\varphi:Y\to Y$ with the property that $\varphi^{*}\alpha=e^{g}\alpha$ for\footnote{Strictly speaking, our definition selects only the orientation preserving contactomorphisms.} some smooth function $g:Y\to \mathbb{R}$. The function $g$ is reasonably called the \emph{scaling factor}; indeed, we easily compute:
\begin{equation}\label{eq:volume}
\varphi^{*}(\alpha\wedge \mathrm{d}\alpha^{n-1})=e^{ng}\alpha \wedge \mathrm{d}\alpha^{n-1},
\end{equation}
i.e., $e^{ng}$ governs the change in volume due to $\varphi$.
A choice of contact form also selects a special vector field $R$ called the \emph{Reeb field}, characterized by the equations $\alpha(R)=1$ and $\mathrm{d}\alpha(R,-)=0$.
We recall the following notion from \cite{sandon_12} and \cite{sandon_13}. Given a contactomorphism $\varphi$, a point $p\in Y$ is called a \emph{translated point} provided that $g(p)=0$ and $\varphi(p)$ lies on the Reeb flow line passing through $p$.
In \cite{sandon_13}, the author conjectures that every contactomorphism $\varphi$ isotopic to the identity of a compact contact manifold $Y$ (with any choice of form $\alpha$) has at least one translated point. The goal of the present document is to give counterexamples to this conjecture on $S^{2n-1}$ with the standard contact form $\alpha_{\mathrm{std}}$, for $n>1$. The main result we will prove is:
\begin{theorem}\label{theorem:main_result}
Let $n>1$. There exist contactomorphisms $\varphi:S^{2n-1}\to S^{2n-1}$ isotopic to the identity which do not have translated points for the contact form $\alpha_{\mathrm{std}}$.
\end{theorem}
The proof is given in \S\ref{sec:proof} and \S\ref{sec:construction} below. We recall the definition of standard contact form $\alpha_{\mathrm{std}}$ in \S\ref{sec:sphere}.
\begin{remark}
For the case $Y=(S^{1},\alpha)$, the identity $\int \varphi^{*}\alpha=\int \alpha$ implies the existence of at least two points satisfying $g=0$, and every point on $Y$ can be joined to any other by the Reeb flow. Thus every contactomorphism of $S^{1}$ has translated points (for any contact form).
\end{remark}
\begin{remark}
Sandon's conjecture has been proved in multiple cases. In \cite{AFM15}, \cite{meiwes_naef}, it is shown to hold if the contact form $\alpha$ is \emph{hypertight} in the sense that it admits no contractible Reeb orbits. In \cite{shelukhin_contactomorphism} (using \cite{leaf_wise_albers_frauenfelder}'s work on leaf-wise intersection points), and \cite{oh_legendrian_entanglement}, \cite{oh_shelukhin_2}, the existence of translated points is proved under a smallness assumption on the oscillation norm $\int_{0}^{1} \max(H_{t})-\min(H_{t})\,\mathrm{d} t$.\footnote{Here $H_{t}=\alpha(X_{t})$ is the contact Hamiltonian associated to the infinitesimal generator $X_{t}$ of a path of contactomorphisms $\varphi_{t}$.} In \cite{albers_merry}, the authors prove Sandon's conjecture for the boundary of Liouville domains $X$ with non-vanishing Rabinowitz Floer homology, and \cite{merry_ulja} proves the conjecture when the symplectic homology of $X$ is infinite dimensional. The papers \cite{sandon_13}, \cite{gkps}, \cite{allais_lens} establish versions of Sandon's conjecture for (non-spherical) lens spaces ($\mathbb{RP}^{2n-1}$ is a special case), using a generating function approach and ideas from \cite{theret_rotation}, \cite{givental_quasimorphism}. This is further supported by the work of \cite{albers_kang}, where Rabinowitz Floer homology groups of lens spaces are shown to be non-zero. The work of \cite{allais_zoll} establishes Sandon's conjecture for certain unit cotangent bundles using a variational approach.
\end{remark}
\begin{remark}
Sandon's work \cite{sandon_13} claimed to show that every contactomorphism on $(S^{2n-1},\alpha_{\mathrm{std}})$ had a translated point. However, \cite{gootjes_dreesbach} has clarified the situation by pointing out a gap in the original argument, and he gives a detailed proof of a restricted statement, see \cite[Theorem 1.2]{gootjes_dreesbach}. The author wishes to thank Sandon for introducing him to \cite{gootjes_dreesbach}.
\end{remark}
\begin{remark}\label{remark:special_case}
Sandon's conjecture is straightforward to verify in the case when $\varphi_{t}$ is an autonomous family of contactomorphisms generated by a Hamiltonian $H$ which is constant on Reeb flow lines (i.e., $R\mathbin{{\tikz{\draw(-0.1,0)--(0.1,0)--(0.1,0.2)}\hspace{0.5mm}}} \mathrm{d} H=0$). In this case, any critical point of $H$ is a translated point of $\varphi_{t}$, for all $t$.
\end{remark}
\section{The standard contact form on the sphere}
\label{sec:sphere}
Consider $S^{2n-1}$ as the unit sphere in $\mathbb{R}^{2n}$, and recall that
\begin{equation*}
\alpha_{\mathrm{std}}=\sum_{i=1}^{n}(x_{i}\mathrm{d} y_{i}-y_{i}\mathrm{d} x_{i})
\end{equation*}
defines the standard contact form on $S^{2n-1}$. One readily checks that
\begin{equation*}
R=\sum_{i=1}^{n}x_{i}\partial_{y_{i}}-y_{i}\partial_{x_{i}}=J\sum_{i=1}^{n}(x_{i}\partial_{x_{i}}+y_{i}\partial_{y_{i}})
\end{equation*}
defines the Reeb vector field. In particular, flow lines are given by $z_{i}(t)=e^{it}z_{i}(0)$, i.e., the orbits of the Reeb vector field are the fibers of the Hopf fibration.
\section{Proof of the main result}
\label{sec:proof}
Here is the sketch of the argument proving Theorem \ref{theorem:main_result}. First observe that if $p,q$ are points so that $q$ does not lie on the flow line through $p$, then we can find open sets $U_{p},U_{q}$ so that \emph{no flow line passes through $U_{p}$ and $U_{q}$}. Indeed, this follows from the Hausdorffness of the space of flow lines $\mathbb{CP}^{n-1}=S^{2n-1}/S^{1}$ (this is a rather special property of the standard contact form).
Introduce the following notation: given a contactomorphism $\varphi$ with scaling factor $g$, let $\Sigma_{\varphi}$ be the set $\set{g=0}$. The existence of a translated point implies the existence of a Reeb flow line joining $\Sigma_{\varphi}$ and $\varphi(\Sigma_{\varphi})$.
Our strategy is simple: construct $\varphi$ so that $\Sigma_{\varphi}\subset U_{p}$ while $\varphi(\Sigma_{\varphi})
\subset U_{q}$; clearly $\varphi$ will have no translated points.
\begin{defn}\label{defn:focal}
For the purposes of the argument, let us say that a contactomorphism of a compact manifold $\varphi:Y\to Y$ has the \emph{focal property} for $(p,q)$ provided the following hold:
\begin{enumerate}[label=(\roman*)]
\item $p,q$ are fixed points of $\varphi$, $(\varphi^{*}\alpha)_{p} > \alpha_{p}$, and $(\varphi^{*}\alpha)_{q}<\alpha_{q}$,
\item $q$ has arbitrarily small neighbourhoods $U$ satisfying $\varphi(U)\subset U$ (attracting).
\item denoting $\varphi_{n}=\varphi\circ\dots\circ \varphi$, if $z\ne p$, then $\lim_{n\to\infty}\varphi_{n}(z)=q$.
\end{enumerate}
Notice that (iii) implies that any compact set $K$ disjoint from $p$ will eventually be mapped into arbitrarily small open sets around $q$.
\end{defn}
\begin{lemma}\label{lemma:technical_1}
Let $\varphi:Y\to Y$ be a contactomorphism satisfying the focal property for $(p,q)$. Denote by $\Sigma_{n}$ the set of points $z$ so that $((\varphi_{n})^{*}\alpha)_{z}=\alpha_{z}$. Then $$\lim_{n\to\infty}\mathrm{dist}(\Sigma_{n},p)+\mathrm{dist}(\varphi_{n}(\Sigma_{n}),q)=0,$$ i.e., $\Sigma_{n}$ eventually enters arbitrarily small neighbourhoods of $p$ and $\varphi_{n}(\Sigma_{n})$ eventually enters arbitrarily small neighbourhoods of $q$.
\end{lemma}
\begin{proof}
Bear in mind that $Y$ is assumed compact. Let $\Sigma_{n}=\set{z\in Y:(\varphi_{n}^{*}\alpha)_{z}=\alpha_{z}}$, and let $U_{p},U_{q}$ be arbitrary (small) open sets around $p,q$ respectively. It suffices to show that $\Sigma_{n}\subset U_{p}$ and $\varphi_{n}(\Sigma_{n})\subset U_{q}$ for $n$ sufficiently large. Let $g$ be the scaling factor for $\varphi$, i.e., $\varphi^{*}\alpha=e^{g}\alpha$, and let $g_{n}$ be the scaling factor for $\varphi_{n}:=\varphi\circ\dots\circ \varphi$. The focal property (i) implies that $g(p)>0$ and $g(q)<0$. A straightforward computation establishes that:
\begin{equation}
\label{eq:scaling_iterate}
g_{n}=g+g\circ \varphi+\dots+g\circ \varphi_{n-1}.
\end{equation}
It is clear that $g$ is a bounded function, so pick some $M>\sup_{z}\abs{g(z)}$.
Shrinking $U_{p},U_{q}$ if necessary, and using the focal properties for $\varphi$, we may suppose that:
\begin{enumerate}[label=(\alph*)]
\item $\varphi(U_{q})\subset U_{q}$,
\item $g>\delta$ on $U_{p}$ and $g<-\delta$ on $U_{q}$ for some $\delta>0$,
\item $\varphi_{N}(Y-U_{p})\subset U_{q}$ for some $N\in \mathbb{N}$.
\end{enumerate}
We will refer to the constants $N,M,\delta$ in the subsequent arguments.
Suppose that $z\in \Sigma_{n}$, and $z\not\in U_{p}$. Then $\varphi_{k}(z)\in U_{q}$ for all $k\ge N$. In particular, $g(\varphi_{k}(z))<-\delta$ for $k\ge N$. We then estimate:
\begin{equation*}
0=g_{n}(z)=\underbrace{g(z)+\dots+g(\varphi_{N-1}(z))}_{N\text{ terms}}+\underbrace{g(\varphi_{N}(z))+\dots+g(\varphi_{n-1}(z))}_{n-N\text{ terms}}<NM-(n-N)\delta.
\end{equation*}
Thus for $n$ sufficiently large we have a contradiction, and hence $\Sigma_{n}\subset U_{p}$ eventually.
On the other hand,\footnote{To prove that $\varphi_{n}(\Sigma_{n})\subset U_{q}$ for $n$ sufficiently large, we can also observe that $\varphi^{-1}$ has the focal property for $(q,p)$, and that:
\begin{equation*}
((\varphi_{n}^{-1})^{*}\alpha)_{\varphi_{n}(z)}=\alpha_{\varphi_{n}(z)}\iff \alpha_{z}=(\varphi_{n}^{*}\alpha)_{z},
\end{equation*}
i.e., $\varphi_{n}(\Sigma_{n})=\set{(\varphi_{n}^{-1})^{*}\alpha=\alpha}$. Thus the second part of the proof is a consequence of the first half. It is not hard to show that $\varphi^{-1}$ has the focal property for $(q,p)$, although one needs to come up with a slightly clever choice of neighbourhood basis at $p$ to establish property (ii) for the inverse.
} suppose that $z\in \Sigma_{n}$ but $\varphi_{n}(z)\not\in U_{q}$. Thanks to the previous part, we may assume that $z\in U_{p}$. Let $k$ be the smallest integer so that $\varphi_{k}(z)\not\in U_{p}$. We then estimate:
\begin{equation*}
0=\underbrace{g(z)+g(\varphi(z))+\dots+g(\varphi_{k-1}(z))}_{k\text{ terms}}+\underbrace{g(\varphi_{k}(z))+\dots+g(\varphi_{n-1}(z))}_{n-k\text{ terms}}>k\delta-(n-k)M.
\end{equation*}
Rearranging yields:
\begin{equation*}
k<\frac{n}{\delta/M+1}
\end{equation*}
For $n$ sufficiently large (depending only on $\delta,M$, and not on $z$), we have
\begin{equation*}
\frac{n}{\delta/M+1}\le n-N.
\end{equation*}
Thus $k<n-N$; since $\varphi_{k}(z)\not\in U_{p}$, property (c) gives $\varphi_{k+N}(z)\in U_{q}$, and then $\varphi_{n}(z)\in U_{q}$ by (a). Thus, since $z$ was arbitrary, $\varphi_{n}(\Sigma_{n})\subset U_{q}$, as desired.
\end{proof}
\begin{cor}\label{cor:cor}
If there exists a contactomorphism $\varphi:S^{2n-1}\to S^{2n-1}$ which has the focal property for $(p,q)$, and $q$ does not lie on the Hopf circle through $p$, then a sufficiently large iterate of $\varphi$ will have no translated points.
\end{cor}
\begin{proof}
As explained above, we can find open sets $U_{p},U_{q}$ around $p,q$, respectively, so that no Reeb orbit passes through $U_{p}$ and $U_{q}$. Since $\varphi$ has the focal property, Lemma \ref{lemma:technical_1} guarantees that eventually $\Sigma_{n}\subset U_{p}$ and $\varphi_{n}(\Sigma_{n})\subset U_{q}$. Thus there are no flow lines joining $\Sigma_{n}$ to $\varphi_{n}(\Sigma_{n})$, so the iterate $\varphi_{n}$ has no translated points, as desired.
\end{proof}
Therefore, in order to prove Theorem \ref{theorem:main_result}, it suffices to construct a contactomorphism isotopic to the identity which satisfies the focal property for $p,q$ with $q$ disjoint from the Reeb orbit through $p$. We perform this construction in the next section.
\section{Constructing contactomorphisms with the focal property}
\label{sec:construction}
First we observe that the focal property is preserved under conjugation:
\begin{lemma}
If $\varphi$ has the focal property for $(p,q)$, and $\sigma$ is any contactomorphism, then $\sigma\circ\varphi\circ\sigma^{-1}$ has the focal property for $(\sigma(p),\sigma(q))$.
\end{lemma}
\begin{proof}
Let $h$ be the scaling factor for $\sigma$ and $g$ the scaling factor for $\varphi$. Then, the scaling factor of $\sigma\circ \varphi\circ \sigma^{-1}$ equals $h\circ \varphi\circ \sigma^{-1}+g\circ \sigma^{-1}-h\circ \sigma^{-1}$. Using this formula, and the fact that $p,q$ are fixed points for $\varphi$, focal property (i) with $(\sigma(p),\sigma(q))$ is easily established for the conjugated contactomorphism. The focal properties (ii) and (iii) are straightforward to check, and are left to the reader.
\end{proof}
Now let $p,q$ be two points on $S^{2n-1}$, so that $q$ is not on the Reeb flow line through $p$ (this forces $n>1$). Since the contactomorphism group acts 2-transitively, the existence of a focal contactomorphism for any other pair $(P,Q)$ implies the existence of a focal contactomorphism for $(p,q)$. The following explicit formula proves the existence of a focal contactomorphism for a specific pair $(P,Q)$.
\begin{prop}[see Remark 8.2 in \cite{ekp}]\label{prop:explicit}
Let $a\in (0,1)$ and for $(z_{1},\dots,z_{n})\in \mathbb{C}^{n}$ consider the mapping:
\begin{equation*}
\varphi(z)=\left(\frac{(1+a^{2})z_{1}+(1-a^{2})}{(1-a^{2})z_{1}+(1+a^{2})},\frac{2a z_{2}}{(1-a^{2})z_{1}+(1+a^{2})},\dots,\frac{2a z_{n}}{(1-a^{2})z_{1}+(1+a^{2})}\right).
\end{equation*}
Then $\varphi(S^{2n-1})\subset S^{2n-1}$, and $\varphi$ induces a contactomorphism $S^{2n-1}\to S^{2n-1}$, isotopic to the identity, which is focal for $P=(-1,0,\dots,0)$ and $Q=(1,0,\dots,0)$.
\end{prop}
\begin{proof}
The $(n+1)\times(n+1)$ matrix:
\begin{equation*}
M_{a}=\frac{1}{2a}\begin{dmatrix}
{1+a^{2}}&{1-a^{2}}&{0}\\
{1-a^{2}}&{1+a^{2}}&{0}\\
{0}&{0}&{2a\,1_{(n-1)\times (n-1)}}
\end{dmatrix}
\end{equation*}
acts on $\mathbb{C}^{n+1}$ and preserves the quadratic form $q=-u_{0}\bar{u}_{0}+u_{1}\bar{u}_{1}+\dots+u_{n}\bar{u}_{n}$, i.e., $M_{a}$ lies in the group $U(n,1)$. Projectivizing via $z_{i}=u_{i}/u_{0}$, we see that the quotient group $PU(n,1)=U(n,1)/S^{1}$ acts by biholomorphisms of the unit ball in $\mathbb{C}^{n}$ (where $q<0$) and extends smoothly to the unit sphere (where $q=0$). Thus $PU(n,1)$ acts on $S^{2n-1}$ by contactomorphisms, since the contact distribution is the distribution of complex tangencies $TS^{2n-1}\cap JTS^{2n-1}$. Our formula for $\varphi$ is given by the action of $M_{a}$, and hence $\varphi$ is a contactomorphism. Moreover $U(n,1)$ (and hence $PU(n,1)$) is a connected Lie group,\footnote{Proof of connectedness: by an explicit argument, one can deform the columns of any $U(n,1)$ matrix to ensure the first column is $e_{0}$. Then the other columns form a unitary basis for $\set{0}\times \mathbb{C}^{n}$. Thus everything in $U(n,1)$ can be joined to an embedded copy of $U(n)$, which is connected.} and so $\varphi$ is isotopic to the identity.\footnote{One can also see directly that $\varphi\to \mathrm{id}$ as $a\to 1$.}
It remains only to verify the focal properties. We see that $\varphi(z)=z$ if and only if $z_{1}=\pm 1$ (hence $z_{2}=\mathrm{d}ots=z_{n}=0$), and so $P,Q$ are the only fixed points of $\varphi$.
The tangent space to $S^{2n-1}$ at $P,Q$ is equal to $i\mathbb{R}\oplus \mathbb{C}^{n-1}$, and, if $1_{i\mathbb{R}},1_{\mathbb{C}^{n-1}}$ denote the projections onto these subspaces (which are the characteristic line and the contact hyperplane, respectively), we can write the derivatives as:\footnote{This formula for the derivative is related to the rescaling contactomorphism $(x,y,z)\mapsto (cx,cy,c^{2}z)$. See \cite[pp.\ 1743]{ekp} for further details.}
\begin{equation*}
\begin{aligned}
\mathrm{d}\varphi_{P}&=c_{P}^{2}1_{i\mathbb{R}}+c_{P}1_{\mathbb{C}^{n-1}}\text{ where }c_{P}=1/a>1,\\
\mathrm{d}\varphi_{Q}&=c_{Q}^{2}1_{i\mathbb{R}}+c_{Q}1_{\mathbb{C}^{n-1}}\text{ where }c_{Q}=a<1.
\end{aligned}
\end{equation*}
The focal property (i) follows immediately. Some basic calculus also establishes focal property (ii) (i.e., by comparing the map with its derivative). Finally, recalling that focal property (iii) states $z\ne P\implies \lim_{n}\varphi_{n}(z)=Q$, we argue as follows: the sequence $\varphi_{n}(z)$ must have its limit points contained in the fixed point set $\set{P,Q}$. Moreover, if $\varphi_{n}(z)$ has $Q$ as a limit point, then, by the attracting property, $\varphi_{n}(z)$ must converge to $Q$. Therefore if $\varphi_{n}(z)$ does not converge to $Q$ it must converge to $P$. However, since the derivative at $P$ is expanding, the sequence $\varphi_{n}(z)$, with $z\ne P$, cannot converge to $P$. This completes the proof.
\end{proof}
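As a quick numerical illustration (not part of the proof; the value of $a$ and the random seed are arbitrary), one can check for $n=2$ that the formula of Proposition \ref{prop:explicit} preserves the unit sphere and that its iterates drive a generic point to $Q$:
\begin{verbatim}
import numpy as np

a = 0.5                       # any a in (0,1); a fixed sample value

def phi(z):
    # The map of Proposition (explicit) on C^2 (the case n = 2).
    d = (1 - a**2) * z[0] + (1 + a**2)
    w = 2 * a * z / d
    w[0] = ((1 + a**2) * z[0] + (1 - a**2)) / d
    return w

rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
z = z / np.linalg.norm(z)     # a random point of S^3, almost surely distinct from P

for _ in range(200):
    z = phi(z)
    assert abs(np.linalg.norm(z) - 1.0) < 1e-12   # the sphere is preserved
print(z)                       # converges to Q = (1, 0)
\end{verbatim}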
Thus we conclude the existence of a focal contactomorphism for the chosen pair $(p,q)$; simply take the explicit formula from Proposition \ref{prop:explicit}, and conjugate by a contactomorphism\footnote{The explicit formula and the conjugation argument were explained to me by Egor Shelukhin.} $\sigma$ which takes $P=(-1,0,\dots,0)$ to $p$ and $Q=(1,0,\dots,0)$ to $q$. Applying Corollary \ref{cor:cor} then completes the proof of Theorem \ref{theorem:main_result}.
\begin{remark}
It is clear that our construction is related to the \emph{contact (non)squeezing problem} for domains in the standard sphere. Indeed, the existence of contactomorphisms with the focal property implies that large domains can be squeezed inside of arbitrarily small domains. In \cite{uljarevic}, the author proves a non-squeezing property holds for domains in certain non-standard contact spheres, namely the \emph{Ustilovsky spheres}. As part of the argument, the author shows that the Ustilovsky spheres admit Liouville fillings with infinite dimensional symplectic homology, and hence Sandon's conjecture holds in this case by the results of \cite{merry_ulja}.
\end{remark}
\end{document}
|
\begin{document}
\catchline{}{}{}{}{}
\title{ENTROPY AND GEOMETRY OF QUANTUM STATES}
\author{KUMAR SHIVAM}
\address{Theoretical Physics, Raman Research Institute,\\
C.V.Raman Avenue, Sadashivnagar, Bangalore-560080
India\\
[email protected]}
\author{ANIRUDH REDDY}
\address{Theoretical Physics, Raman Research Institute,\\
C.V.Raman Avenue, Sadashivnagar, Bangalore-560080
India\\
[email protected]}
\author{JOSEPH SAMUEL}
\address{Theoretical Physics, Raman Research Institute,\\
C.V.Raman Avenue, Sadashivnagar, Bangalore-560080
India\\
[email protected]}
\author{SUPURNA SINHA}
\address{Theoretical Physics, Raman Research Institute,\\
C.V.Raman Avenue, Sadashivnagar, Bangalore-560080
India\\
[email protected]}
\maketitle
\begin{abstract}
We compare the roles of the Bures-Helstrom (BH)
and Bogoliubov-Kubo-Mori (BKM)
metrics in the subject
of quantum information geometry. We note that there are two
limits involved in state discrimination, which we call the
``thermodynamic'' limit (of $N$, the number of realizations going to infinity)
and the infinitesimal limit (of the separation of states tending to zero).
We show that these two limits do not commute in the quantum case. Taking
the infinitesimal limit first leads to the BH metric and the corresponding
Cram\'{e}r-Rao bound, which is widely accepted in this subject. Taking limits
in the opposite order leads to the BKM metric, which results in a weaker
Cram\'{e}r-Rao bound. This lack of commutation of limits is a purely quantum
phenomenon arising from quantum entanglement. We can exploit this phenomenon
to gain a quantum advantage in state discrimination and get around the limitation imposed by the Bures-Helstrom Cram\'{e}r-Rao (BHCR) bound.
We propose a technologically feasible experiment with cold atoms to demonstrate
the quantum advantage in the simple case of two qubits.
\end{abstract}
\keywords{Quantum Measurement; Metric; Distinguishability.}
\markboth{Kumar Shivam, Anirudh Reddy, Joseph Samuel, Supurna Sinha}
{Entropy and Geometry of Quantum States}
\section{Introduction}
Given two quantum states, how easily can we tell them apart?
Consider for instance, gravitational wave detection which is of considerable interest in
recent times \cite{gravwave,cramer}. Typically,
we expect a weak signal which
produces a small change in the quantum state of the
detector.
The sensitivity of our instrument is determined by our
ability to detect small changes in a quantum state. This leads to the issue of distinguishability measures
on the space of quantum states \cite{shunichi,statest, anthony, osaki, barnett}. In general, quantum states are represented by density matrices.
In this paper, we clarify the operational meaning of two Riemannian metrics on the space of density matrices: the
BH metric and the BKM metric.
In fact, even in the classical domain, one encounters similar questions while considering
drug trials, electoral predictions
or when we compare a biased coin to a fair one.
As the number $N$ of trials (or equivalently, the size of the sample) increases, our ability to distinguish
between candidate probability distributions improves. Such considerations give rise
in a natural and operational manner, to a metric on the space of probability distributions \cite{thomas}. This metric is known as
the Fisher-Rao metric and plays an important part in the theory of parameter estimation. This metric leads to the Cram\'{e}r-Rao bound
which limits the variance of any unbiased estimator.
Another example of the use of a Riemannian metric to measure distinguishability
occurs in the theory of colours \cite{geometry,weinberg}.
The space of colours is two dimensional (assuming normal vision)
and one can see this on a computer screen in several graphics software packages.
The sensation of colour is determined by the relative proportion of the RGB values, which
gives us two parameters.
The extent to which one can distinguish neighbouring colours
is usually represented by MacAdam ellipses \cite{geometry,macadam,weinberg}, which
are contours on the chromaticity diagram that are just
barely distinguishable from the centre. These ellipses give us a graphical representation of an operationally defined Riemannian
metric on the space of colours. The flat metric on the Euclidean plane would be represented by circles, whose radii are everywhere the same.
As it turns out, the metric on the space of colours is not flat and the MacAdam ellipses vary in size, orientation and eccentricity
over the space of colours. This analogy is good to bear in mind, for we provide a similar visualization of the geometry of state
space based on entropic considerations.
There is a subtlety here in that we started out with two distinct states (or colours)
represented say, by points $p_1$ and $p_2$. As the second point approaches the first, we may regard
them as represented by the first point along with a tangent vector. This involves replacing a
{\it difference} by a {\it derivative}. One is no longer working on the space of states but on
the tangent space at a point $p_{1}$. We will refer to this as the infinitesimal limit.
There is another limiting process involved in state discrimination: the limit of
$N \rightarrow \infty$, where $N$ is the number of trials. We refer to this as the
``thermodynamic" limit. Our main point in this paper is that these two limits do not
commute. If we take the infinitesimal limit first, we are led to the BH metric and the corresponding CR bound. If we choose two distinct states, no matter how small their
separation, we find in the ``thermodynamic" limit there are quantum effects that
give us the BKM metric as the relevant one.
The noncommutativity of limits is the main point of this paper.
The paper is organized as follows.
In Sec. II we review the connection between the Kullback-Leibler (KL)
divergence and the theory of statistical inference \cite{9780511813559,thomas}.
In Sec III we take the infinitesimal limit first and show
that this leads to the BH metric. Then we show that in the thermodynamic limit,
the gain in discriminating power is no better than in the classical case. In Sec IV we reverse the order of limits by taking the thermodynamic limit first.
In this case, we find that as
$N \rightarrow \infty$ our discriminating power is determined by Umegaki's quantum relative entropy between the distinct states $p_1$ and $p_2$. Taking the infinitesimal limit leads us to the BKM metric.
We illustrate these theoretical considerations by giving examples of the quantum advantage in the case of two qubits.
In Sec. V, we compare the quantum Cram\'er Rao bounds arising from the BH and BKM metrics.
In Sec. VI we translate this theoretical work
into a technologically feasible
experiment with trapped cold atoms.
Finally, we
end the paper with some concluding remarks in Sec VII. Some calculational details are relegated to Appendices A, B, and C: Appendix \ref{sec:BHmetric} computes the Bures metric as the basis-optimized Fisher-Rao metric, Appendix \ref{sec:BKMmetric} gives a simple matrix derivation of the BKM metric as the Hessian of the quantum relative entropy, and Appendix \ref{sec:Geodesic} describes the geometry of the BKM metric and plots its geodesics.
\section{KL Divergence as maximum likelihood}
Let us consider a biased coin for which the probability of getting a head is
$p_H= 1/3$ and that of getting a tail is $p_T= 2/3$. Suppose we incorrectly assume
that the coin is fair and assign probabilities $q_H= 1/2$
and $q_T= 1/2$ for getting a head and a tail respectively.
The question of interest is the number of trials needed to be able to distinguish
(at a given confidence level)
between our assumed probability distribution and the measured probability distribution.
A popular measure for distinguishing between the expected distribution and the measured distribution is
given by the relative entropy or the KL divergence (KLD)
which is widely used in the context of distinguishing
classical probability distributions \cite{jon}.
Let us consider $N$ independent tosses of a coin leading to a string $S=\{HTHHTHTHHTTTTT......\}$.
What is the probability that the string is generated by the model distribution $Q=\{q,1-q\}$?
The observed frequency distribution is $P=\{p,1-p\}$.
If there are $N_H$ heads and $N_T$ tails in a string
then the probability of getting such a string is $\frac{N!}{N_H!N_T!}q^{N_H}(1-q)^{N_T}$
which we call the likelihood function $L(N|Q)$.
If we take the average of the logarithm of this likelihood function and use Stirling's
approximation for large $N$ we get the following expression:
\begin{equation}
\frac{1}{N}\log{L(N|{Q)}}=-D_{KL}(P\|Q)+\frac{1}{N}\log{\frac{1}{\sqrt{2\pi Np(1-p)}}},
\label{likelihood}
\end{equation}
where $p=\frac{N_H}{N}$ and $D_{KL}(P\|Q)=p\log{\frac{p}{q}}+(1-p)\log{\frac{1-p}{1-q}}$.
The second term in (\ref{likelihood}) is due to the sub-leading term $\frac{1}{2}\log{2\pi N}$ of Stirling's approximation.
If $D_{KL}(P\|Q)\neq 0$ then the likelihood of the string $S$ being produced by the $Q$ distribution decreases exponentially
with $N$.
$$L(N|Q)=\frac{1}{\sqrt{2\pi Np(1-p)}} \exp\{-ND_{KL}(P\|Q)\}.$$
Thus $D_{KL}(P\|Q)$ gives us the divergence of the measured distribution from the model distribution.
The KL divergence is positive and vanishes if and only if the two distributions $P$ and $Q$ are equal. In this limit,
we find that
the exponential decay of the likelihood gives way to a power-law decay, due to the subleading term in (\ref{likelihood}).
The arguments above generalize appropriately to an arbitrary number of outcomes (instead of two) and also
to continuous random variables.
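As a concrete illustration (a small sketch using the coin example above), the divergence and the resulting exponential decay of the likelihood in (\ref{likelihood}) can be evaluated directly:
\begin{verbatim}
import numpy as np

def kl_divergence(p, q):
    # D_KL(P||Q) for discrete distributions given as arrays of probabilities.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

P = [1/3, 2/3]          # observed frequencies of the biased coin
Q = [1/2, 1/2]          # assumed fair-coin model
d = kl_divergence(P, Q)
print(d)                # about 0.0566 nats per toss

# Likelihood of the model: L ~ exp(-N D_KL) / sqrt(2 pi N p (1-p))
for N in (10, 100, 1000):
    print(N, np.exp(-N * d) / np.sqrt(2 * np.pi * N * P[0] * (1 - P[0])))
\end{verbatim}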
The relative entropy (or KLD) gives an operational measure of how distinguishable two distributions are, quantified by the number of trials needed to
distinguish two distributions at a given confidence level. However, the KLD is not a distance function on the space of probability distributions: it is not symmetric between
the distributions $P$ and $Q$. One may try to symmetrize this function, but then, the result does not satisfy the triangle inequality. However, in the infinitesimal limit,
when $Q$ approaches $P$,
the relative entropy can be Taylor expanded to second order about $P$. The Hessian matrix does define a positive definite quadratic form at $P$ and thus a Riemannian metric
on the space of probability distributions. For a classical probability distribution $P=\{ p_i, i=1, 2, \ldots, d \}$, the Fisher-Rao metric \cite{thomas,facchi} is given by
\begin{equation}\label{FR}
ds^2=\sum_{i}\frac{{dp_i}^2}{p_i}
\end{equation}
and this forms the basis of classical statistical
inference and the famous $\chi$-squared test.
The Riemannian metric then defines a distance function, based on the lengths of the shortest curves connecting any two states $P$ and $Q$.
Similar considerations also apply to the quantum case, where probability distributions are replaced by density matrices.
Consider the density matrix $\rho$ of a $d$ state system, satisfying
$\rho^\dagger=\rho$, $\text{Tr}(\rho)=1$ and $\rho>0$, where we assume $\rho$ to be {\it strictly} positive, so that we are not at
the boundary of state space. Let
$\boldsymbol{\lambda}=\{ \lambda^i,\ i=1,\dotsc, d^2-1\}$ be local coordinates on the space of density matrices.
Let
${\cal S}(\rho_1(\boldsymbol{\lambda_1})\|\rho_2(\boldsymbol{\lambda}))$ be a function on the space of density matrices which is positive and vanishes if and only if $\rho_2=\rho_1$ \cite{Nielsen}.
Let us consider ${\cal S}(\rho_1(\boldsymbol{\lambda_1})\|\rho_2(\boldsymbol{\lambda}))$ as a function of its second argument.
If the states $\rho_1$ and $\rho_2$ are infinitesimally close to each other,
we can Taylor expand the relative entropy function.
\begin{equation}
\label{exp}
{\cal S}(\rho_1\|\rho_2)={\cal S}(\rho_1\|\rho_1) + \frac{\partial {\cal S}}{\partial \lambda^i}\Delta\lambda^i +\frac{1}{2} \frac{\partial^2{\cal S}}{\partial\lambda^j\partial\lambda^i}\Delta\lambda^i\Delta\lambda^j+\cdots
\end{equation}
Notice that ${\cal S}(\rho_1\|\rho_1)$ is zero and the second term is zero because we are doing a Taylor expansion about the minimum of the relative entropy function.
The third term, which is second order in $\Delta \lambda$, gives us the metric and is positive definite
for $\Delta\lambda \neq 0$.
\begin{equation}
g_{ij}=\frac{\partial^2{\cal S}}{\partial\lambda^j\partial\lambda^i}.
\label{metric}
\end{equation}
The Hessian defines a metric {\it tensor}.
Positivity of the Hessian is guaranteed as the stationary point is {\it the absolute} minimum. The fact that density matrices do not in general commute is no obstacle to this definition.
\section{Measurements on single qubits: emergence of the Bures metric}
Let us now consider the quantum problem of distinguishing between two states $\rho_1$ and $\rho_2$ of a qubit. Here $\rho_1=\frac{\boldsymbol{\mathds{1}+X.\sigma}}{2}$
plays the role of $P$ above and $\rho_2=\frac{\boldsymbol{\mathds{1}+Y.\sigma}}{2}$ that of $Q$. A new ingredient in the quantum problem
is that we can choose our measurement basis. Suppose that we are given
a string of $N$ qubits all in the same state, which may be either $\rho_1$ or $\rho_2$. A possible strategy is to make projective measurements
on individual qubits, measuring the spin component in the direction $\boldsymbol{\hat m}$.
For each choice of $\boldsymbol{{\hat m}}$ we find $p_{\pm}=\frac{\boldsymbol{1\pm X.{\hat m}}}{2}$ and $q_{\pm}=\frac{\boldsymbol{1\pm Y.{\hat m}}}{2}$ and we can compute the KL-Divergence
or the classical relative entropy of the two distributions as :
\begin{equation}\label{kldm}
S_{\boldsymbol{m}}(\rho_{1}\|\rho_{2})=p_{+}\log{\frac{p_{+}}{q_{+}}}+p_{-}\log{\frac{p_{-}}{q_{-}}}.
\end{equation}
We will now choose $\boldsymbol{{\hat m}}$ in such a way as to maximize our discriminating power {\it i.e} $S_{\boldsymbol{m}}(\rho_{1}\|\rho_{2})$. This gives us,
\begin{equation}\label{opt1}
\delta S_{\boldsymbol{m}}=\frac{\partial S_{\boldsymbol{m}}}{\partial \boldsymbol{{\hat m}}}\cdot\delta \boldsymbol{{\hat m}}=\lambda\, \boldsymbol{{\hat m}}\cdot\delta \boldsymbol{{\hat m}},
\end{equation}
which can be rewritten as
\begin{equation}\label{opt2}
\frac{\partial S_{\boldsymbol{m}}}{\partial a_1}\boldsymbol{X}+\frac{\partial S_{\boldsymbol{m}}}{\partial a_2}\boldsymbol{Y}=\lambda \boldsymbol{{\hat m}},
\end{equation}
where $a_1=\boldsymbol{{\hat m}.X}$ and $a_2=\boldsymbol{{\hat m}.Y}$. Since the gradient $\partial S_{\boldsymbol{m}}/\partial\boldsymbol{{\hat m}}$ is a linear combination of $\boldsymbol{X}$ and $\boldsymbol{Y}$ we find
that $\boldsymbol{{\hat m}}$ must lie in the plane containing $\boldsymbol{X}$ and $\boldsymbol{Y}$, as shown in Fig. 1. Without loss of generality, we can suppose this to be the $x-z$ plane,
so that $X_2=Y_2=\hat{m}_2=0$.
We can replace $\boldsymbol{{\hat m}}=(\cos{\beta},0,\sin{\beta})$ by the angle $\beta$, which gives us $p_{\pm}=\frac{1}{2} (1\pm r_1\cos{\beta}) $ and $q_{\pm}=\frac{1}{2}(1\pm r_2\cos{(\theta+\beta)})$, where $r_{1}=|\boldsymbol{X}|$ and $r_{2}=|\boldsymbol{Y}|$. Plotting $S(\beta)$ (Fig. 2), we find that the maximum distinguishability is
attained at $\beta=\beta^*$ . This is clearly the most advantageous choice of
$\beta$. The value of $S_{\boldsymbol{m}}$ at the maximum is denoted by $S^*(r_1,r_2,\theta)=S_{\boldsymbol{m}}(r_1,r_2,\theta,\beta^*(r_1,r_2,\theta))$. $S^*(r_1,r_2,\theta)$
gives us the optimal choice for state discrimination when we measure qubits, one at a time. As we can see in Fig. 2, $S^*(r_1,r_2,\theta)$ is
never more than
Umegaki's quantum relative entropy\cite{umegaki}.
\begin{equation}
S(\rho_1(\boldsymbol{\lambda_1})\|\rho_2(\boldsymbol{\lambda}))= \text{Tr}[\rho_1 \log\rho_1-\rho_1\log\rho_2].
\label{umegaki}
\end{equation}
Equality between $S^*(\rho_1\|\rho_2)$ and $S(\rho_1\|\rho_2)$ happens
if and only if $[\rho_1,\rho_2]=0\ (\theta=0,\pi, 2\pi\approx 0)$ [See Fig. 3] {\it i.e} when the two density matrices commute with each other.
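A short numerical sketch of this optimisation (our own illustration; the Bloch vectors below use $r_1=r_2=0.9$ and $\theta=\pi/2$, one of the configurations of Fig. 3, and other pairs of states are handled by changing $\boldsymbol{X}$ and $\boldsymbol{Y}$): it evaluates $S_{\boldsymbol{m}}(\beta)$ of \eqref{kldm} on a grid, locates $\beta^{*}$, and compares the optimum with Umegaki's relative entropy \eqref{umegaki}.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2, dtype=complex)

def rho(v):
    # Qubit density matrix with Bloch vector v.
    return 0.5 * (I2 + v[0] * sx + v[1] * sy + v[2] * sz)

def logmh(m):
    # Matrix logarithm of a positive Hermitian matrix via its spectral decomposition.
    w, V = np.linalg.eigh(m)
    return (V * np.log(w)) @ V.conj().T

r1, r2, theta = 0.9, 0.9, np.pi / 2
X = np.array([r1, 0.0, 0.0])
Y = np.array([r2 * np.cos(theta), 0.0, r2 * np.sin(theta)])
rho1, rho2 = rho(X), rho(Y)

def S_classical(beta):
    # Classical relative entropy for a projective measurement along m = (cos b, 0, sin b).
    m = np.array([np.cos(beta), 0.0, np.sin(beta)])
    p, q = 0.5 * (1 + X @ m), 0.5 * (1 + Y @ m)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

betas = np.linspace(0, np.pi, 4001)
vals = np.array([S_classical(b) for b in betas])
S_umegaki = np.real(np.trace(rho1 @ (logmh(rho1) - logmh(rho2))))
print(betas[np.argmax(vals)], vals.max(), S_umegaki)   # beta*, S*, quantum value
\end{verbatim}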
\begin{figure}
\caption{The figure shows the measurement direction $\boldsymbol{{\hat m}}$ in the plane containing $\boldsymbol{X}$ and $\boldsymbol{Y}$.}
\end{figure}
\begin{figure}
\caption{The relative entropy between two density matrices
as a function of the measurement basis parametrized by $\beta$. The maximum occurs at $\beta^*=0.41$.
For comparison we also show in red the horizontal line
representing Umegaki's relative entropy (\ref{umegaki}).}
\end{figure}
We now take the infinitesimal limit and replace $(\rho_{1},\rho_{2})$ by $(\rho, d\rho)$ and represent $\rho$ by $(r,\theta)$ and $d\rho$ by $(dr,d\theta)$.
$dp_+$ and $dp_-$ are:
\begin{equation} \label{dp}
\left.\begin{aligned}
dp_+=\frac{\cos\beta dr-r\sin\beta d\theta}{2}, \\
dp_-=\frac{r\sin\beta d\theta-\cos\beta dr}{2}.
\end{aligned}
\right\}
\qquad
\end{equation}
\begin{figure}
\caption{The figure shows the quantum relative entropy and the classical
relative entropy as a function of $\theta$, for $r_1=0.9$ and $r_2=0.9$. Note that the quantum relative entropy
in general exceeds the classical one. This difference is what we call the quantum advantage, which can be exploited to beat the BHCR bound. The quantum relative entropy equals the classical one only for $\theta =0,\pi,2\pi\approx 0$.}
\end{figure}
Considering the classical relative entropy (\ref{kldm})
between infinitesimally separated states and doing a Taylor expansion,
gives us the Fisher-Rao metric \eqref{FR}, which is given by
\begin{equation}
\label{frmetric}
ds^2=\frac{dp_+^2}{p_+}+\frac{dp_-^2}{p_-}.
\end{equation}
Maximising \eqref{frmetric} with respect to the measurement basis for fixed $\rho_{1}$ and $d\rho$ gives us an expression for the metric (see \ref{sec:BHmetric} for a detailed calculation)
\begin{equation} \label{bs}
ds^2=\frac{dr^2}{1-r^2}+r^2d\theta^2.
\end{equation}
Returning to three dimensions using spherical symmetry we
get an expression for the metric
\begin{equation} \label{bs1}
ds^2=\frac{dr^2}{1-r^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).
\end{equation}
In the above derivation, we have defined the
distinguishability metric in the tangent space by optimising over all
measurement bases. The metric we arrive at is
the BH metric (Ref. \cite{bures,helstrom,caves,dittmann,Uhlmann1992,ericsson} and references therein), which was introduced by Bures \cite{bures}
from a purely mathematical point of view. Its relevance to quantum state discrimination was elucidated by Helstrom\cite{helstrom}.
It plays the role of the Fisher-Rao metric in quantum physics, {\it if one restricts oneself to measuring one qubit at a time}.
More generally, the BH metric is defined as follows\cite{helstrom, erc1, erc2}.
Let $d\rho$ be a tangent vector at $\rho$.
Consider the equation for the unknown $L$:
\begin{equation}
d\rho=\frac{1}{2}\{\rho,L\}
\label{ldef}
\end{equation}
This linear equation defines the symmetric logarithmic derivative $L$ uniquely.
Optimising the Fisher-Rao metric \eqref{FR},
$g_{FR}(\rho,d\rho)=\sum_i{{\bra{i}\rho\ket{i}^{-1}}{\bra{i}d\rho\ket{i}^{2}}}$
over all choices of orthonormal bases $b=\{\ket{i},\ i=1,2,\dots,d \}$ we find that \cite{helstrom}
(A) the optimal choice is given by the basis $b^*$ which diagonalizes $L$ and
(B) that the optimal value is given by
$$ g_{BH}(\rho,d\rho) = Tr[\rho L L]$$ which is defined as the Bures metric.
The discussion above is general and applicable to a $d$ state system.
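In matrix form this recipe is easy to implement (a sketch, assuming a strictly positive $\rho$): in the eigenbasis of $\rho$ one has $L_{ij}=2\,(d\rho)_{ij}/(\lambda_{i}+\lambda_{j})$, and for a qubit the resulting values can be checked against the coefficients of \eqref{bs}.
\begin{verbatim}
import numpy as np

def sld(rho, drho):
    # Solve drho = (rho L + L rho)/2 for the symmetric logarithmic derivative L.
    lam, V = np.linalg.eigh(rho)
    d = V.conj().T @ drho @ V
    L = 2 * d / (lam[:, None] + lam[None, :])
    return V @ L @ V.conj().T

def g_bh(rho, drho):
    # Bures-Helstrom metric g_BH(rho, drho) = Tr[rho L L].
    L = sld(rho, drho)
    return float(np.real(np.trace(rho @ L @ L)))

# Qubit check: radial and angular tangents at Bloch vector (r, 0, 0).
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
r = 0.5
rho_q = 0.5 * (np.eye(2, dtype=complex) + r * sx)
print(g_bh(rho_q, 0.5 * sx), 1 / (1 - r**2))   # dr direction: coefficient 1/(1-r^2)
print(g_bh(rho_q, 0.5 * r * sz), r**2)         # d(theta) direction: coefficient r^2
\end{verbatim}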
We now take the thermodynamic limit.
Consider $N$ qubits with the state $\rho^{\otimes N}$. We will show that
\begin{equation}
\frac{1}{N}g_{BH}(\rho^{\otimes N},d \rho^{\otimes N})=g_{BH}(\rho,d\rho)
\label{additive}
\end{equation}
The proof is by induction. For $N=1$ (\ref{additive}) is an identity. Assuming (\ref{additive}) for $N-1$, we note that
$$d\rho^{\otimes N}=d(\rho^{\otimes N-1}{\otimes}\rho)=d\rho^{\otimes N-1}\otimes \rho+\rho^{\otimes N-1}\otimes d\rho,$$
and that
$$L_N=L_{N-1}\otimes \mathds{1}+\mathds{1}\otimes L$$
uniquely solves (\ref{ldef}).
Computing $g_{BH}(\rho^{\otimes N},d \rho^{\otimes N})=Tr[\rho^{\otimes N} L_N L_N]$ and using the fact that (\ref{ldef}) implies $Tr[\rho L]=0$ we arrive at (\ref{additive}).
The optimized Fisher-Rao metric has the same discriminating power (per qubit) for $N$ qubits as for a
single qubit. This is exactly as in the classical case. This holds true in the limit $N\rightarrow \infty$. There is no quantum advantage. Note that in the above, we have taken the infinitesimal limit
first. We will see that taking the thermodynamic limit first leads to an entirely different picture.
\section{Quantum advantage : Measurements on Multiple Qubits}
Let us now take the ``thermodynamic'' limit of large $N$ first. Given
$N$ qubits, which may be a state $\rho_1^{\otimes N}$ or
$\rho_2^{\otimes N}$ we can choose a measurement basis in the Hilbert space
${\cal H}^{\otimes N}$. The optimization over measurement bases is now over an enlarged set.
Earlier we were restricted to bases of the form $b^{\otimes N}$ which are separable in the Hilbert space ${\cal H}^{\otimes N}$.
We now have the freedom to include entangled bases and this implies
\begin{equation}
\frac{S^*(\rho_1^{\otimes N}\|\rho_2^{\otimes N})}{N}\ge S^*(\rho_1\|\rho_2).
\label{quadvantage}
\end{equation}
In fact\cite{vedralrmp}, no matter how small the separation between the distinct states
$\rho_{1}$ and $\rho_{2}$, as $N\rightarrow\infty$, $\frac{1}{N} S^*(\rho_1^{\otimes N}\|\rho_2^{\otimes N}) \rightarrow S(\rho_1\|\rho_2)$,
where $S(\rho_1\|\rho_2)$ is Umegaki's quantum relative entropy. As we see in Fig. 3,
this is greater than or equal to the classical relative entropy, so the appropriate
relative entropy to use in the thermodynamic limit is Umegaki's relative entropy.
If we now take the infinitesimal limit as $\rho_2\rightarrow\rho_1$,
we effectively pass from the quantum relative entropy to a Riemannian metric defined as the Hessian of the quantum relative entropy. The form of this metric
in the case of a qubit is (see \ref{sec:BKMmetric})
\begin{equation} \label{me}
\boxed{g_{ij}=\frac{\partial^2S}{\partial x^i\partial x^j}=C(r)\frac{x^ix^j}{r^2} + D(r)\{\delta_{ij}-\frac{x^ix^j}{r^2}\}},
\end{equation}
where $C(r)=\frac{1}{1-r^2}$, $D(r)=\frac{1}{2r}\log\left(\frac{1+r}{1-r}\right)$ and $r=|\boldsymbol{Y}|$.
The corresponding line element is given in polar coordinates by:
\begin{equation} \label{linel}
ds^2=\frac{dr^2}{1-r^2}+\left[\frac{r}{2}\log\left(\frac{1+r}{1-r}\right)\right]{(d\theta^2+\sin^2\theta d\phi^2)}.
\end{equation}
This metric has been discussed earlier by Bogoliubov, Kubo and Mori (BKM) in the context of statistical mechanical fluctuations \cite{Petz,km,BALIAN}.
We refer to it as the BKM metric. For a discussion on the geometry of the BKM metric see \ref{sec:Geodesic}.
To illustrate the quantum advantage that comes from grouping qubits before measuring them,
we numerically study an example for $N=2$ and $\rho_1,\ \rho_2$ distinct and well separated.
The quantum state of the combined system is now given by $\tilde{\rho}=\rho\otimes \rho$,
where $\rho$ can refer to either $\rho_1$ or $\rho_2$. In choosing a measurement basis to distinguish $\tilde{\rho_1}$ from $\tilde{\rho_2}$,
we now have the additional advantage that we can choose bases which are not separable. This extra freedom gives us the quantum advantage
which comes from entanglement.
For example, let us choose $(r_1,r_2,\theta)=(0.9,0.5,\pi/2)$ so that
$\boldsymbol{X}=\{r_1,0,0\},\ \boldsymbol{Y}=\{r_2/\sqrt{2},0,r_2/\sqrt{2}\}$ and the direction $\boldsymbol{\hat{m}}$ in the $x$-$z$ plane
$\boldsymbol{\hat{m}}=\{\cos{\beta},0,\sin{\beta}\}$. Let the corresponding 1-qubit basis
which diagonalizes $\hat{m}.\boldsymbol{\sigma}$ be $\ket{+},\ket{-}$. We now construct the non separable basis
$|b_1\rangle=\frac{|{+-}\rangle+|{-+}\rangle}{\sqrt{2}}$,
$|b_2\rangle=\frac{|{+-}\rangle-|{-+}\rangle}{\sqrt{2}}$,
$|b_3\rangle=|{++}\rangle$
and $|b_4\rangle=|{--}\rangle$.
Note that two of these basis states are maximally entangled Bell states and two are completely separable.
(Curiously, using a basis consisting entirely of Bell states leads to no improvement over the separable basis.)
We numerically compute the relative entropy and optimize over $\beta$. This leads to an improvement over
measurements conducted on one qubit at a time. The improvement is seen in the value of the relative entropy per qubit,
which increases from $0.5839$ in the one qubit strategy to $0.5856$ in the two qubit strategy.
In fact, this number can be further
improved. By numerical Monte-Carlo searching, we have found bases (which don't have the clean form above) which yield
a relative entropy of $0.5863$ per qubit. Our Monte-Carlo search is simplified by the observation that one can by a unitary transformation
bring any two states described by $\boldsymbol{X}$ and $\boldsymbol{Y}$ to the $x$-$z$ plane of the Bloch ball, so that we are working over the real numbers
rather than complex numbers. Over the reals, unitary matrices are orthogonal matrices. We start with an initial basis
in the four dimensional real Hilbert space of the composite system and then rotate the basis by a random orthogonal matrix
close to the identity. We then compute the relative entropy using the new basis and accept the move if the new basis
has a larger relative entropy and reject it otherwise. This gives us a monotonic rise in the relative entropy and
drives us towards the optimal basis in the two qubit Hilbert space.
The method extends easily to three qubits and
more although the searches are more time consuming.
We have numerically observed that measuring three qubits at a time results in a further improvement over the two qubit
measurement strategy. However, this number (0.5880) still falls short of the quantum relative entropy which is $0.6385$.
The classically optimized relative entropy $S^{*}_N$ for $N$ qubits considered as a single system satisfies the inequality
$\frac{1}{N}S^{*}_N \leq S_Q$ \cite{vedralrmp} where $S_Q$ is the quantum relative entropy.
As $N\rightarrow\infty$ the inequality is saturated.
Thus the gap between the classically optimized relative entropy and
the quantum relative entropy (Fig. 2 and Fig. 3)
progressively reduces as one increases the number of qubits measured at a time.
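The random-rotation search described above can be sketched as follows (a reconstruction in spirit, not the code used for the quoted numbers; the starting basis, step size and number of trials are arbitrary choices):
\begin{verbatim}
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def rho(bx, bz):
    # Real qubit density matrix with Bloch vector (bx, 0, bz).
    return 0.5 * (I2 + bx * sx + bz * sz)

r1, r2 = 0.9, 0.5
R1 = np.kron(rho(r1, 0.0), rho(r1, 0.0))                    # rho_1 (x) rho_1
R2 = np.kron(rho(r2 / np.sqrt(2), r2 / np.sqrt(2)),
             rho(r2 / np.sqrt(2), r2 / np.sqrt(2)))         # rho_2 (x) rho_2

def score(B):
    # Classical relative entropy of the outcome distributions in the basis B (columns).
    p, q = np.diag(B.T @ R1 @ B), np.diag(B.T @ R2 @ B)
    return float(np.sum(p * np.log(p / q)))

def small_rotation(eps, rng, n=4):
    # Cayley transform of a small random skew-symmetric matrix: orthogonal, near 1.
    A = rng.normal(size=(n, n))
    S = eps * (A - A.T)
    return np.linalg.solve(np.eye(n) - S / 2, np.eye(n) + S / 2)

rng = np.random.default_rng(1)
B, best = np.eye(4), score(np.eye(4))
for _ in range(20000):
    Bn = B @ small_rotation(0.05, rng)
    s = score(Bn)
    if s > best:
        B, best = Bn, s
print(best / 2)   # classically optimised relative entropy per qubit for this pair
\end{verbatim}
Each accepted move keeps the basis orthonormal, and the printed value is the classically optimised relative entropy per qubit for the chosen pair of two-qubit states.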
\section{QUANTUM CRAM\'ER RAO BOUNDS}
As we have seen,
the quantum relative entropy
leads us to a metric (the BKM metric) on the tangent space.
We notice (see Fig. 2 and Fig. 3) that the quantum relative entropy dominates over the classically optimized relative
entropy computed in Sec III : $S(\rho_1\|\rho_2)\ge S^*(\rho_1\|\rho_2)$\cite{vedralrmp}. This implies that $g_{BKM}(v,v)\ge g_{BH}(v,v)$ for all tangent vectors $v$.
This can be explicitly seen by comparing Eq. (\ref{linel})
with Eq. (\ref{bs1}) and noting that $r/2\log{[(1+r)/(1-r)]}\ge r^2$.
This means that the BKM metric is more {\it discriminating} than the BH metric in the sense that distances are larger.
Figure 4 shows a graphical representation of the geometry of state space as given by the BH metric (white ellipses) and
the BKM metric \eqref{linel} (in black). Geometrically the unit sphere of the BKM metric is contained within
the unit sphere of the BH metric (Fig. 4).
The higher discrimination of the BKM metric over the BH metric translates
into a {\it less stringent} Cram\'{e}r-Rao bound, since the bound is based on the inverse of the metric.
Let $X$ be an unbiased estimator for a parameter $\theta$.
Then the variance $V=\text{Tr}[\rho X X]-(\text{Tr}[\rho X ])^2$ has to satisfy $V\ge \frac{1}{g(v,v)}$.
This is the well known Cram\'er-Rao bound.
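For a concrete comparison (a small sketch using the angular parameter of a qubit at Bloch radius $r$, with the angular coefficients read off from \eqref{bs1} and \eqref{linel}):
\begin{verbatim}
import numpy as np

r = np.array([0.3, 0.6, 0.9])
g_bh  = r**2                                    # angular coefficient of the BH metric
g_bkm = 0.5 * r * np.log((1 + r) / (1 - r))     # angular coefficient of the BKM metric
print(1 / g_bh)    # BHCR bound on the variance of an angle estimator
print(1 / g_bkm)   # BKM bound: smaller, i.e. less stringent, as discussed above
\end{verbatim}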
\begin{figure}
\caption{The figure represents the geometry of the qubit state space as given by
the BKM metric (black ellipses in the lower half) and the BH metric (white ellipses with blue (online) boundaries in the upper half). The figure shows a two dimensional slice of the
three dimensional qubit state space. The geometry is invariant under rotations due to the unitary symmetry of the state space. Note
that the ellipticity increases near the boundary of the state space. The ellipse on the right shows both BH and BKM metrics superposed.
Note that the black BKM ellipse is {\it inside} the BH ellipse.}
\label{macadam}
\end{figure}
In order to bring out this point, we propose an experimentally realizable strategy that
exploits the dominance of the BKM metric over the BH metric.
As we have seen in Sec III, restricting to measurements of one qubit at a time we find that
the BH metric sets the limit on state discrimination \cite{erc1, erc2}. However, as we
have seen in Sec IV, we can beat this limit by measuring multiple qubits at a time.
\section{Proposed Experimental Realization}
The strategy described above can be experimentally realized with current technology using cold atoms in traps.
Experimental realizations of the quantum advantage are within reach.
There have been studies involving measurements for quantum state discrimination \cite{anthony, osaki, barnett},
where the upper limit of the state distinguishability is set by the BHCR bound. In order to exploit the quantum advantage discussed
here and go beyond the BHCR bound, we need to measure in an entangled basis of the two qubit system.
The entangled basis $\ket{b_i}$ mentioned here, is related to the separable
basis $|++\rangle,\ |+-\rangle,\ |-+\rangle,\ |--\rangle$ by a unitary transformation
$U$ in the four dimensional Hilbert space.
One can equivalently apply $U$ to the separable state $\tilde{\rho}=\rho\otimes\rho$. This creates an entangled
state $U^\dagger \tilde{\rho} U$, which can then be measured in the separable basis using a
projective measurement. Consider a pair of qubits subject to the Hamiltonian
\begin{equation}
H=\vec{\sigma_1}.\vec{B_1}+\vec{\sigma_2}.\vec{B_2}+ J(t) \vec{\sigma_1}.\vec{\sigma_2},
\label{ham2}
\end{equation}
which is a standard Heisenberg Hamiltonian for spins.
This Hamiltonian evolution produces the unitary transformation $U$ for a suitable choice of
$J(t)$.
This entangling unitary transformation $U$
is the square root of the SWAP operation $U=\sqrt{SWAP}$. $U$ has already been experimentally
realized in \cite{phillips} by creating a system in the laboratory subject to the Hamiltonian (\ref{ham2}). The method used in \cite{phillips} is to load ${}^{87}Rb$ atoms in pairs
into an array of double well potentials. The experimenters have control over all the parameters in the Hamiltonian.
They can generate the transformation $U$ at will by applying a $\pi/4$ pulse for $J(t)$ using
radio frequency, site selective pulses to address the qubits in pairs (See Table 1 of \cite{phillips}),
thus effecting the entangling unitary transformation $U$. What remains to be done to implement
our proposal is to projectively measure each of the qubits separately and thus achieve
a violation of the BHCR bound in distinguishing states.
\section{Conclusion}
The main goal of this paper is to draw attention to a noncommutativity of
limits in the context of quantum state discrimination. In particular, there are two limits --- one which we call the
``thermodynamic" limit (of N, the number of realizations going to infinity)
and the infinitesimal limit (of the separation of states tending to zero) --- which do not commute in the quantum case.
We show that taking the infinitesimal limit first leads to the BH metric. In contrast, taking the ``thermodynamic'' limit first
leads to the BKM metric. The lack of commutation of limits is a purely quantum phenomenon with no classical counterpart.
We have explicitly shown by numerical methods that one can exploit this lack of commutation of limits, through quantum entanglement, to gain an advantage in state discrimination.
Questions addressed here were raised but not fully answered in an early paper of Peres and Wootters \cite{peres}. At that time it was not
fully clear whether there was a one qubit strategy which could compete with the multiqubit strategy. Subsequent work using the machinery
of $C^*$ algebras has made it clear \cite{hiai,vedralrmp} that the best one qubit strategy is inferior to the multiqubit strategy. As $N$ increases
we approach the bound set by the BKM metric. Thus the quantum Cram\'{e}r-Rao bound set by the BKM metric can be approached but not surpassed. In contrast, the
BHCR bound can be surpassed, as we have seen in Sec VI.
We have worked out the geodesics of the BKM metric and plotted them numerically. We have noticed that any two points are connected
by a unique geodesic. The BKM metric leads to a distance function on the state space that emerges naturally from entropic and geometric
considerations. In working out the geodesics, it is easily seen analytically
that the geodesics approach the boundary of the state space at right angles. However, this approach is logarithmically slow and is not
apparent in Fig. 6. The form of the geodesics on state space is reminiscent of the geodesics of the Poincar\'{e} metric which also meet
the boundary at right angles. However, there are serious differences. While both metrics have negative curvature, the Poincar\'{e} metric has a {\it constant} negative curvature, unlike the BKM metric that has a varying curvature, which diverges logarithmically at the boundary.
It is natural to ask if this is a genuine singularity or one caused by our choice of coordinates. It is easily seen that the singularity is genuine. Consider a radial geodesic starting from $r=r_0$ and reaching the boundary at $r=1$. Its length is given by $\int_{r_0}^1 dr/\sqrt{1-r^2}=\pi/2-\arcsin{r_0}$,
which is finite. So the geodesic reaches the singularity of $R$ in a finite distance. Since the length of the geodesic and the scalar curvature are
independent of coordinates, it follows that the singularity is genuine and not an artifact of the
coordinate system. The divergence of the metric as one approaches $r=1$ has a physical interpretation.
It means that pure states offer a much larger quantum advantage than mixed states. In fact, quantum advantage diverges logarithmically as we approach the pure state limit.
Conversely, even a small corruption of the purity of quantum states
will seriously undermine our ability to distinguish between them.
From the statistical physics perspective,
the BKM metric can be interpreted as a thermodynamic susceptibility of a quantum state $\rho$ (viewed as a Gibbs state for the Hamiltonian $H=-(1/\beta)\log{\rho}$), to perturbations. The Gibbs state is the state that maximizes its entropy subject to an energy constraint. However, in statistical physics, a system makes spontaneous excursions to neighbouring lower entropy states. The size of these fluctuations is determined by the Hessian of the entropy function and thus related to the susceptibility.
In the existing literature\cite{petznew,petznew2,hasenew,jencova,grasselli} researchers have discussed the BKM and other Riemannian metrics on the quantum state space but have mainly focussed on the geometrical and mathematical aspects of the metric. In the context of quantum metrology\cite{a,b} the idea that a quantum procedure leads to an improved sensitivity in parameter estimation compared to its classical counterpart has been explored.
We go beyond earlier studies in suggesting physical and statistical mechanical interpretations of the geometry
and an experimental proposal demonstrating the use of entanglement as a resource. Such an experimental demonstration would operationally
bring out a subtle aspect of quantum information. We hope to interest experimental colleagues in this endeavour.
\section{BH METRIC FOR A QUBIT}
\label{sec:BHmetric}
The Fisher-Rao metric is given by
\begin{equation}
ds^2=\frac{dp_+^2}{p_+}+\frac{dp_-^2}{p_-} \nonumber.
\end{equation}
Substituting $dp_+$, $dp_-$, $p_+$ and $p_-$, from \eqref{dp}, we get
\begin{equation} \label{dss}
ds^2=\frac{\left(dr-r\tan\beta d\theta\right)^2}{1-r^2+\tan^2\beta}.
\end{equation}
Keeping $r$, $dr$, $d\theta$ fixed and optimising with respect to $\beta$ we find
\begin{equation}
\tan\beta^{*}=-\frac{r\left(1-r^2\right)}{dr/d\theta}.
\end{equation}
Substituting $\tan\beta^{*}$ in \eqref{dss} we get the expression for the metric
\begin{equation} \label{bsBH}
ds^2=\frac{dr^2}{1-r^2}+r^2d\theta^2.
\end{equation}
\section{BKM METRIC FOR A QUBIT}
\label{sec:BKMmetric}
Consider two mixed states $\rho_1$ and $\rho_2$ of a two level quantum system commonly referred to as a qubit. These can be written as
$\rho_1=\frac{\boldsymbol{\mathds{1}+X.\sigma}}{2}$ and $\rho_2=\frac{\boldsymbol{\mathds{1}+Y.\sigma}}{2}$
where $|\boldsymbol{X}|$ and $|\boldsymbol{Y}|$ $<$ 1. $\boldsymbol{X}$ and $\boldsymbol{Y}$ are three dimensional vectors with components
$x^i$ and $y^i$.
The relative entropy function can be written as follows:
\begin{eqnarray}
S(\rho_1\|\rho_2)&=&\text{Tr}\left[\left(\frac{\boldsymbol{\mathds{1}+X.\sigma}}{2}\right)\log\left(\frac{\boldsymbol{\mathds{1}+X.\sigma}}{2}\right)\right] \nonumber \\
&&-\text{Tr}\left[\left(\frac{\boldsymbol{\mathds{1}+X.\sigma}}{2}\right)\log\left(\frac{\boldsymbol{\mathds{1}+Y.\sigma}}{2}\right)\right].
\end{eqnarray}
We can use the power series expansion of $\log(\boldsymbol{\mathds{1}+Y.\sigma})$ to evaluate the trace of the above expression.
\begin{equation}
\log(\boldsymbol{\mathds{1}+Y.\sigma})=\underbrace{\left(\sum_{m=0}^{\infty}\frac{|\boldsymbol{Y}|^{2m+1}}{2m+1}\right)}_{f_o(|\boldsymbol{Y}|)}\frac{\boldsymbol{Y.\sigma}}{|\boldsymbol{Y}|}+\underbrace{\left(-\sum_{n=1}^{\infty}\frac{|\boldsymbol{Y}|^{2n}}{2n}\right)}_{f_e(|\boldsymbol{Y}|)}\mathds{1},
\end{equation}
where $f_o(|\boldsymbol{Y}|)$ and $f_e(|\boldsymbol{Y}|)$ are respectively the odd and even parts of the function $f(r)=\log{(1+r)}$. Notice that the odd part of the expansion is traceless. Making use of the above expansion we can express $S(\rho_1\|\rho_2)$ as follows
\begin{equation}
S(\rho_1\|\rho_2)=S(\boldsymbol{X}\|\boldsymbol{Y})=f_e(|\boldsymbol{X}|)+|\boldsymbol{X}|\,f_o(|\boldsymbol{X}|)-f_e(|\boldsymbol{Y}|)-\frac{f_o(|\boldsymbol{Y}|)}{|\boldsymbol{Y}|}(\boldsymbol{X}.\boldsymbol{Y}).
\end{equation}
In order to compute the Hessian of $S(\rho_1\|\rho_2)$ we
take the second derivatives $\frac{\partial^2 S}{\partial y^i \partial y^j}$ with respect to the second argument (the terms depending only on $\boldsymbol{X}$ do not contribute)
and then set $\boldsymbol{Y}=\boldsymbol{X}$, obtaining the following metric \cite{geometry}:
\begin{equation} \label{meBKM}
g_{ij}=\frac{\partial^2S}{\partial x^i\partial x^j}=C(r)\frac{x^ix^j}{r^2} + D(r)\{\delta_{ij}-\frac{x^ix^j}{r^2}\},
\end{equation}
where $C(r)=\frac{1}{1-r^2}$, $D(r)=\frac{1}{2r}\log\left(\frac{1+r}{1-r}\right)$ and $r=|\boldsymbol{Y}|$.
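As a consistency check (our own sketch, not part of the derivation), the Hessian of Umegaki's relative entropy can also be computed by central finite differences and compared with $C(r)$ and $D(r)$:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2, dtype=complex)

def rho(v):
    return 0.5 * (I2 + v[0] * sx + v[1] * sy + v[2] * sz)

def logmh(m):
    w, V = np.linalg.eigh(m)
    return (V * np.log(w)) @ V.conj().T

def S(x, y):
    # Umegaki relative entropy between the states with Bloch vectors x and y.
    a, b = rho(x), rho(y)
    return float(np.real(np.trace(a @ (logmh(a) - logmh(b)))))

r, h = 0.6, 1e-3
x = np.array([r, 0.0, 0.0])
E = np.eye(3)
H = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        H[i, j] = (S(x, x + h*E[i] + h*E[j]) - S(x, x + h*E[i] - h*E[j])
                   - S(x, x - h*E[i] + h*E[j]) + S(x, x - h*E[i] - h*E[j])) / (4 * h * h)

print(H[0, 0], 1 / (1 - r**2))                       # radial coefficient C(r)
print(H[1, 1], np.log((1 + r) / (1 - r)) / (2 * r))  # transverse coefficient D(r)
\end{verbatim}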
\section{GEOMETRY OF THE BKM METRIC}
\label{sec:Geodesic}
The scalar curvature $R$ of the BKM metric is given by:
\begin{equation} \label{ka}
R=\frac{4r^2-4r(1+r^2)\log(\frac{1+r}{1-r})+(1+2r^2-3r^4)[\log(\frac{1+r}{1-r})]^2}{2r^2(1-r^2)[\log(\frac{1+r}{1-r})]^2}.
\end{equation}
\begin{figure}
\caption{The scalar curvature \eqref{ka} of the BKM metric as a function of $r$.}
\label{1}
\end{figure}
As we can see from Fig.~\ref{1}, the metric has negative scalar curvature, and therefore geodesics (Fig.~\ref{2}) cannot cross
more than once. It follows that any two states
are connected by a unique geodesic. The length of this geodesic gives us a distance on the space of states. This has all the properties expected of a distance function: it is symmetric, strictly positive between distinct points and satisfies the triangle inequality.
The scalar curvature is zero near the origin
and diverges logarithmically to minus infinity as $r$ goes to unity. The geodesics of this metric are easily worked out from classical mechanics.
The metric has spherical
symmetry, because the quantum state space is invariant under unitary transformations.
Setting $r=\sin{\alpha}$, we rewrite the metric as
\begin{equation}
ds^2=d\alpha^2
+F(\alpha) \left(d\theta^2+\sin^2(\theta)d\phi^2\right),
\end{equation}
where
$F(\alpha)=\frac{\sin\alpha}{2}\log\left[ \frac{1+\sin\alpha}{1-\sin\alpha} \right]$.
Because of the spherical symmetry,
there is a conserved angular momentum vector $\vec{J}$ and thus the geodesics lie in the plane
perpendicular to $\vec{J}$.
Thus we can confine our calculations to a plane,
reducing the form of the metric to
\begin{equation}
ds^2=d\alpha^2+F(\alpha) \left(d\phi^2\right),
\end{equation}
where we have set $\theta=\frac{\pi}{2}$.
The Lagrangian of the classical mechanical system is
\begin{equation}
L=\frac{1}{2}\left({\dot{\alpha}}^2+F(\alpha){\dot{\phi}}^2\right).
\end{equation}
The constants of motion for this problem are the energy and the angular momentum, which are given by
\begin{equation}
E=\frac{1}{2}\left({\dot{\alpha}}^2+F(\alpha){\dot{\phi}}^2\right), \
P_\phi=J=\frac{\partial L}{\partial {\dot{\phi}}}=F(\alpha)\dot{\phi}.
\end{equation}
Using the above equations we solve for $\dot{\alpha}$ and $\dot{\phi}$. Our numerical solution
gives us the geodesics of interest. A typical geodesic is displayed in Fig.~\ref{2}. Given any two points in the state space
(for example the red dots of Fig.~\ref{2}), the length of the unique geodesic \cite{negcurv} connecting them gives us a distance function.
This is very similar in spirit to a construction of Wootters \cite{wootters}, who introduced a distance based on distinguishability
for {\it pure} states, which ultimately yielded the Fubini-Study metric. This work can be viewed
as an application of Wootters' idea to mixed states.
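The reduced system $\dot{\alpha}=\sqrt{2E-J^2/F(\alpha)}$, $\dot{\phi}=J/F(\alpha)$ is straightforward to integrate. The sketch below is a minimal illustration (the constants of motion, initial data and integration interval are arbitrary choices, and it is not the code used for the figures).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def F(alpha):
    # F(alpha) = (sin(alpha)/2) * log((1 + sin(alpha)) / (1 - sin(alpha)))
    s = np.sin(alpha)
    return 0.5 * s * np.log((1.0 + s) / (1.0 - s))

def rhs(t, y, E, J):
    # Reduced first-order system obtained from the conserved E and J.
    # Only the branch with increasing alpha is followed; at a turning
    # point (2E = J^2/F) the sign of alpha' would flip.
    alpha, phi = y
    Fa = F(alpha)
    return [np.sqrt(max(2.0 * E - J**2 / Fa, 0.0)), J / Fa]

E, J = 0.5, 0.1                      # illustrative constants of motion
alpha0, phi0 = 0.3, 0.0              # starting point: r = sin(alpha0), phi = 0
sol = solve_ivp(rhs, (0.0, 1.0), [alpha0, phi0], args=(E, J),
                max_step=1e-3, rtol=1e-9)

r, phi = np.sin(sol.y[0]), sol.y[1]  # geodesic in the theta = pi/2 plane
\end{verbatim}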
\begin{figure}
\caption{The figure shows a geodesic connecting two
typical quantum states,
indicated by two dots on the Bloch ball.
Two more geodesics are shown with different values of $J=|\vec J|$.
We also show geodesics crossing each other once. As explained in the text, the metric on quantum state space has negative curvature and
so geodesics cannot cross more than once.}
\label{2}
\end{figure}
\pagebreak
\end{document}
|
\begin{document}
\title{A remark on the Schr\"odinger equation on Zoll manifolds}
\author{Hisashi Nishiyama}
\date{}
\maketitle
\begin{abstract}
On the one-dimensional sphere, the support of the fundamental solution to the Schr\"odinger equation consists of finitely many points at times $t\in 2\pi\mathbf{Q}$.
The paper \cite{Ka} generalized this fact to compact symmetric spaces. In this paper, we consider similar results on Zoll manifolds. We study the singularities of solutions to the equation using a functional calculus for self-adjoint operators with integer eigenvalues.
\end{abstract}
\footnotetext[1]{2000 Mathematics Subject Classification. 35B65, 35P05, 35Q40, 58J40, 58J47.}
\section{Introduction}
We consider the Schr\"odinger equation on Zoll manifolds. First we recall the setting of the problem. We study the following equation,
\begin{equation}
(i\partial_t+\Delta_M)u=0.
\end{equation}
Here $i=\sqrt{-1}$ is the imaginary unit, $t$ is the time variable and $\partial_t=\frac{\partial}{\partial t}$. $M$ is a $d$-dimensional compact smooth Riemannian manifold without boundary. $\Delta_M$ denotes the Laplace-Beltrami operator, i.e., in local coordinates $x=(x_1,\dots,x_d)\in M$,
\begin{equation*}
\Delta_M=\sum_{i,j=1}^d\frac{1}{\sqrt{{\rm det}(g_{ij})}}\partial_{x_i} g^{ij}(x)\sqrt {{\rm det}(g_{ij})}\partial_{x_j}.
\end{equation*}
Here the matrix $(g^{ij}(x))>0$ is the inverse of the metric $(g_{ij}(x))$ and ${\rm det}(g_{ij})$ is the determinant of $(g_{ij}(x))$.
Since $M$ is compact, $\Delta_M$ is a self-adjoint operator in $L^2(M)$, the space of the square integrable functions on $M$. By the spectral theory, we can write the solution operator to the equation (1.1) as $e^{it\Delta_M}$.
We also assume that $M$ is a Zoll manifold, i.e., all geodesics have the same minimal period.
In a local coordinate, the geodesic flow is generated by the Hamiltonian vector field on $T^*M\backslash \{0\}$,
\begin{equation}
H=\partial_\xi l\cdot\partial_x-\partial_x l\cdot\partial_\xi
\end{equation}
where the function $l$ is defined by $l(x,\xi)=\sqrt{\sum_{i,j=1}^d g^{ij}(x)\xi_i\xi_j}$ for $x\in M$, $\xi\not=0$.
Without loss of generality, we can assume the geodesic flow has minimal period $2\pi$.
Examples of Zoll manifolds are the so-called CROSSes (Compact Rank One Symmetric Spaces).
For the background on Zoll manifolds, we refer to \cite{Be}, \cite{Gu1}.
The following example is our starting point.
\begin{exam}({\rm \cite{Ka}, S. Doi})
Let $M=S^1\simeq \mathbf{R}/2\pi\mathbf{Z}$. We consider the following equation
\begin{equation}
\begin{cases}
(i\partial_t+\partial_x^2)G=0\\
G|_{t=0}=\delta (x).
\end{cases}
\end{equation}
Here $\delta(x)$ denotes the Dirac measure at $x=0$.
Then the fundamental solution $G(t,x)$ satisfies the following two statements:
(i) For fixed $t/2\pi\in\mathbf{Q}$, ${\rm supp} (G(t,x))$ consists of finitely many points.
(ii) For fixed $t/2\pi\in\mathbf{R}\backslash\mathbf{Q}$, ${\rm supp} (G(t,x))$ is the whole space $S^1$.
\begin{proof}
First we prove the statement (i). By using the Fourier series, the solution can be written as follows
\begin{equation}
G(t,x)=\frac{1}{2\pi}\sum_{k\in\mathbf{Z}}e^{-itk^2+ikx}.
\end{equation}
Taking $t=2\pi n/m$, $n,m\in\mathbf{Z}$, and $k=mj+l$ with $j\in\mathbf{Z}$, $0\leq l\leq m-1$, we have
\begin{align*}
G(2\pi n/m,x)&=\frac{1}{2\pi}\sum_{j\in\mathbf{Z}}\sum_{l=0}^{m-1}e^{-i2\pi\frac{n}{m}(mj+l)^2+i(mj+l)x}\\
&=\sum_{l=0}^{m-1}e^{-i2\pi\frac{n}{m}l^2+ilx}\frac{1}{2\pi}\sum_{j\in\mathbf{Z}}e^{imjx}\\
&=\sum_{l=0}^{m-1}e^{-i2\pi\frac{n}{m}l^2+ilx}\frac{1}{m}\sum_{j=0}^{m-1}\delta (x-2\pi j/m)
\end{align*}
For the final identity, we use the Poisson summation formula for $2\pi$-periodic functions. We define
\begin{equation}
g(n,m;j)=\frac{1}{m}\sum_{l=0}^{m-1}e^{-i2\pi\frac{n}{m}l^2+il2\pi\frac{j}{m}}=\frac{1}{m}\sum_{l=0}^{m-1}e^{2\pi i(jl-nl^2)/m}.
\end{equation}
We have the following representation of the solution
\begin{equation}
G(2\pi n/m,x)= \sum_{j=0}^{m-1}g(n,m;j)\delta (x-2\pi j/m).
\end{equation}
This implies (i). We prove (ii). We use the following symmetry.
\begin{lem}
$G(t,x)$ defined by (1.4) satisfies
\begin{align}
&G(t,x+2t)=e^{i(x+t)}G(t,x)\\
&G(t,x)=G(t,-x)
\end{align}
in the sense of distributions.
\end{lem}
The proof of this lemma is a direct computation. Recall that
$$G=e^{it\partial_{x}^2}\delta\in H^{-\frac{1}{2}-\epsilon}(S^1)\backslash H^{-\frac{1}{2}}(S^1)$$
for any $\epsilon>0$. Here $H^s(S^1)$, $s\in\mathbf{R}$, are the usual Sobolev spaces.
Since $S^1$ is compact, there exists a point $x_0$ such that near this point $G(t,x)$ does not have $H^{-\frac{1}{2}}$ regularity. So by $(1.7)$, $G(t,x)$ does not belong to $H^{-\frac{1}{2}}$ near $x_0+2t$. Inductively we can prove a similar singularity at $x_0+2kt$, $k\in\mathbf{Z}$. If $t/2\pi=\alpha\in\mathbf{R}\backslash\mathbf{Q}$,
these points are dense in $S^1$. Thus
$${\rm sing\ supp} (G(2\pi\alpha,x))=S^1.$$
\end{proof}
\end{exam}
\begin{remark}
In \cite{Ka}, Kakehi calculates $g(n,m;j)$. He shows that for $n,m\in\mathbf{Z}$, $m>0$, $n,m$: co-prime,
\begin{equation}
\begin{split}
g(n,m;j)\not=0, \ \text{$\forall$j if $\ m\equiv 1, \ 3$ mod $4$,}\\
\begin{cases}
&g(n,m;j)=0, \ \text{for j:even}\\
&g(n,m;j)\not=0, \ \text{for j:odd,}
\end{cases} \text{ if $m\equiv 2$ mod $4$,}\\
\begin{cases}
&g(n,m;j)\not=0,\ \text{for j:even}\\
&g(n,m;j)=0, \ \text{for j:odd}\\
\end{cases}\text{ if $m\equiv 0$ mod $4$}.
\end{split}
\end{equation}
Moreover he generalized this example to compact symmetric spaces.
\end{remark}
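The vanishing pattern (1.9) is easy to check numerically from the definition (1.5). The following illustrative snippet (with an arbitrary range of parameters) lists the vanishing coefficients for small coprime pairs $(n,m)$.
\begin{verbatim}
import numpy as np
from math import gcd

def g(n, m, j):
    # g(n, m; j) = (1/m) * sum_{l=0}^{m-1} e^{2 pi i (j*l - n*l^2)/m}, cf. (1.5)
    l = np.arange(m)
    return np.exp(2j * np.pi * (j * l - n * l * l) / m).sum() / m

for m in range(2, 9):
    for n in range(1, m):
        if gcd(n, m) != 1:
            continue
        zeros = [j for j in range(m) if abs(g(n, m, j)) < 1e-10]
        print("m=%d, n=%d: g(n,m;j)=0 for j in %s" % (m, n, zeros))
# The printed sets agree with (1.9): no zeros for odd m, the even j vanish
# when m = 2 (mod 4), and the odd j vanish when m = 0 (mod 4).
\end{verbatim}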
In this paper, we prove an analogue of this example.
Before stating our result, we give a physical interpretation of this finiteness of the support.
In quantum mechanics, the equation (1.3) represents the motion of a free particle.
On $S^1$, the plane wave is given by
\begin{equation*}
\frac{1}{2\pi}e^{-itk^2+ikx}=\frac{1}{2\pi}e^{-ik(tk-x)},\ k\in\mathbf{Z}.
\end{equation*}
This is a motion with velocity $k$. This means that on $S^1$ the velocity is quantized.
So at time $t$, the particle can reach the points
\begin{equation*}
tk-x\equiv 0\ \mod \ 2\pi\ , k\in\mathbf{Z}.
\end{equation*}
Taking $t=2\pi p/q$, this can be written as follows
\begin{equation*}
x\equiv 2\pi \frac{p}{q}k\ \mod \ 2\pi\ ,k\in\mathbf{Z}.
\end{equation*}
These are finitely many points. By this naive interpretation, it is natural to expect the same phenomenon for Zoll manifolds, since $\sqrt{-\Delta_M}$ has asymptotically integer eigenvalues and the geodesics are periodic. The main result of this article is the following.
\begin{thm}
Let $M$ be a Zoll manifold with period $2\pi$ and $x_0\in M$. We consider the following equation
\begin{equation}
\begin{cases}
(i\partial_t+\Delta_M)u=0.\\
u|_{t=0}=\delta_{x_0}.
\end{cases}
\end{equation}
Here $\delta_{x_0}$ is the Dirac measure at $x_0$. The following two statements hold for this solution:
(i) For fixed $t/2\pi\in\mathbf{Q}$, ${\rm sing\ supp} (u(t, x))$ is degenerate, i.e., contained in a finite union of submanifolds of dimension $d-1$.
(ii) For fixed $t/2\pi\in\mathbf{R}\backslash\mathbf{Q}$, ${\rm sing\ supp}(u(t,x))$ is the whole space $M$.
\end{thm}
\begin{remark}
Moreover we compute the wave front set of the solution. We can also compute the wave front set for the equation with a first-order perturbation.
For more precise statement, see Corollary 3.4 and Theorem 4.1.
\end{remark}
We should mention some related results. In $\mathbf{R}^n$, singularities propagate at infinite speed and a smoothing effect may occur.
There are many results on the singularities of solutions to the Schr\"odinger equation related to the smoothing effect.
For example, if the classical trajectories are not trapped and the potential decays faster than the quadratic order, then
the solution with compactly supported initial data is smooth for $t>0$; see e.g. \cite{Na}.
On the other hand, on compact manifolds there are few works studying such singularities, apart from \cite{Ka}.
On $\mathbf{R}^1$, Yajima \cite{Ya} proved that, for super-quadratic potentials, the fundamental solution is singular on the whole space-time.
Our result indicates a similar phenomenon on Zoll manifolds.
\section{Preliminaries}
In this paper, we use the microlocal analysis on manifolds. We recall some relevant notations, see \cite{Gr-Sj}, \cite{Ho}.
$S^m(T^*M)$, $m\in\mathbf{R}$ is the space of functions $a(x,\xi)$ on $T^*M$, which are smooth in $(x, \xi)$ and satisfy
$$\partial^\alpha _x \partial^\beta _\xi a(x,\xi)= {\cal O}((1+|\xi|)^{m-|\beta|})\ , (x,\xi) \in T^*M.$$
We also write $S^{-\infty}(T^*M)=\cap_m S^m(T^*M).$
If $a\in S^m(T^*M)$, we can define the corresponding pseudo-differential operators, $\Psi DO$s for short, in the usual way.
$\Psi^m(M)$ denotes the class of $\Psi DO$s associated with this symbol class. A smoothing operator is an operator with smooth kernel. Every operator in $\Psi^{-\infty}(M)$ is a smoothing operator.
It is useful to consider these operators as operators on the half-density bundle.
For the operator $A\in \Psi^m(M)$ and Riemannian density $\rho$
$$Bu=\rho^{\frac{1}{2}}A(u\rho^{-\frac{1}{2}}),\ u\in C^\infty(M, \Omega^\frac{1}{2})$$
defines $B$, the operator on the half-density bundle.
We write this class of operator as $\Psi^m(M;\Omega^\frac{1}{2}, \Omega^\frac{1}{2})$. By this identity, we identify the operator on $M$ with the operator on the half-density bundle.
For $A\in \Psi^m(M;\Omega^\frac{1}{2}, \Omega^\frac{1}{2})$, we have the symbol map
$\sigma(A)\in S^m(T^*M)/S^{m-2}(T^*M)$ and there exist $a_0 \in S^m(T^*M) $, $a_1 \in S^{m-1}(T^*M) $ such that $\sigma(A)-a_0-a_1\in S^{m-2}(T^*M) $. We call $a_0=\sigma_{pri}(A)$ the principal symbol of $A$ and $a_1=\sigma_{sub}(A)$ the sub-principal symbol of $A$.
Modifying above definition, $\Psi_{phg}^m(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ denotes the class of $\Psi DO$ whose principal and sub-principal symbols are polyhomogeneous functions of order $m$ and $m-1$ respectively. $\sqrt{-\Delta_M}$ is a $\Psi DO$ $\in \Psi^{1}_{phg}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ whose principal symbol is $(\sum_{i,j=1}^d g^{ij}(x)\xi_i\xi_j)^{1/2}$ and sub-principal symbol is $0$.
We also use the notation of the wave front set. If $A\in\Psi_{phg}^0(M)$ has the principal symbol $a$, the characteristic set of $A$ is given by
$${\rm Char} A=\{(x,\xi)\in T^*M\backslash \{0\}, a=0\}$$
We define the wave front set of $u \in {\cal D'}(M)$ by
$${\rm WF}(u)=\cap \{{\rm Char } A; A\in \Psi_{phg}^0(M), Au\in C^\infty(M)\}$$
where ${\cal D'}(M)$ is the space of distributions on $M$. By definition, the wave front set is a conic subset of $T^*M\backslash \{0\}$.
The wave front set is a refinement of the singular support: if $\Pi_x(x,\xi)=x$ is the projection to the $x$ variable, then
$$\Pi_x( {\rm WF}(u))={\rm sing \ supp} (u).$$
We use the following propagation of singularities result for first-order operators.
\begin{prop}
Suppose $A\in \Psi^1(M;\Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ has real principal symbol. Then we have
\begin{equation}
{\rm WF}( e^{itA}u)={\exp}(tH_a){\rm WF}(u), \text{$ \forall u \in {\cal D'}(M)$}.
\end{equation}
Here ${\rm exp}(t H_a )$ is the Hamiltonian flow of the symbol $a=\sigma_{pri}(A)$.
\end{prop}
We use a functional calculus for self-adjoint operators with integer eigenvalues. By the following proposition, we can reduce
$\sqrt{-\Delta_M}$ to an operator of this type. This result is due to Colin de Verdi\`ere, Duistermaat, Guillemin, and Weinstein; see \cite{Ho} and \cite{Gu2} for expositions of the spectral theory of the Laplace-Beltrami operator on Zoll manifolds.
\begin{prop}
Let $M$ be a Zoll manifold with period $2\pi$.
Then there exist an integer $\alpha$ and a self-adjoint operator $E \in \Psi^{-1}_{phg}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ which commutes with $\Delta_M$ such that
\begin{equation}
{\rm Spec}(\sqrt{-\Delta_M}+\alpha/4 +E)\subset \mathbf{Z},
\end{equation}
\end{prop}
For the proof, see Lemma 29.2.1 in \cite{Ho}. $\alpha$ is the so-called Maslov index; in our case this is the number of conjugate points in a closed geodesic counted with multiplicity.
By this proposition, we are reduced to considering self-adjoint operators with integer eigenvalues.
The following special case of the functional calculus is essential to our results.
\begin{prop}
Let $L$ be a self-adjoint operator on the Hilbert space H and $f$ be a function $\mathbf{Z}\rightarrow \mathbf{C}$. We assume ${\rm Spec}(L)\subset \mathbf{Z} $. Then we have the identity
\begin{equation}
f(L)=\sum_{k\in\mathbf{Z}}\frac{1}{2\pi}\int^{2\pi}_0f(k)e^{iky}e^{-iyL}dy
\end{equation}
\end{prop}
\begin{proof}
This is proved by
\begin{equation}
\frac{1}{2\pi}\int^{2\pi}_0e^{i(k-l)y}dy=\delta_{k,l}
\end{equation}
and the following identity
\begin{align}
f(L)=\sum_{k\in\mathbf{Z}}f(k)P_k, e^{-iyL}=\sum_{k\in\mathbf{Z}}e^{-iyk}P_k.
\end{align}
Here $\delta_{k,l}=1$ if $k=l$ and otherwise $\delta_{k,l}=0$, $P_k$, $k\in\mathbf{Z}$ are orthogonal projections to the space of eigenfunctions with eigenvalue $k$ i.e.
\begin{equation}
\begin{split}
\begin{cases}
P_k: \text{projections to eigenfunctions with eigenvalue $k$},\\
\ \ \ \ \text{if $k$ is an eigenvalue of $L$}\\
P_k: \ 0, \ \text{if $k$ is not an eigenvalue of $L$}.
\end{cases}
\end{split}
\end{equation}
\end{proof}
In the above identity, we shall interchange the sum and the integral.
We use self-adjoint operators $\langle D_x\rangle^{s}$ on $L^2(S^1)$ defined
by the Fourier series,
\begin{equation}
\langle D_x\rangle^{s}f(x)=\sum_{k\in\mathbf{Z}}\langle k\rangle^sf_ke^{ikx}.
\end{equation}
Here $\langle k\rangle=(k^2+1)^{1/2}$, $s\in\mathbf{R}$ and $f(x)=\sum_{k\in\mathbf{Z}} f_ke^{ikx}$.
This definition also applies to distributions on $S^1$. We also define linear operators on $H$
\begin{equation}
\langle L\rangle^{s}=\sum_{k\in\mathbf{Z}}\langle k\rangle^sP_k.
\end{equation}
Since $P_k$ are orthogonal projections, we get
\begin{equation}
\begin{split}
\langle D_y\rangle^{N}& e^{-iyL}=\sum_{l\in\mathbf{Z}} \langle l\rangle^{N}P_le^{-ily}\\
&=\sum_{l\in\mathbf{Z}}\langle l\rangle^{N}P_l \sum_{j\in\mathbf{Z}}P_je^{-ijy}= \langle L\rangle^{N} e^{-iyL}.
\end{split}
\end{equation}
\begin{prop}
Let $L$ be a self-adjoint operator on the Hilbert space H and $f$ be a function $\mathbf{Z}\rightarrow \mathbf{C}$. We assume ${\rm Spec}(L)\subset \mathbf{Z} $. Then we have the identity
\begin{equation}
f(L)=\langle L\rangle^{N}\frac{1}{2\pi}\int^{2\pi}_0\sum_{k\in\mathbf{Z}}\langle k\rangle^{-N}f(k)e^{iky} e^{-iyL}dy
\end{equation}
if $\sum_{k\in\mathbf{Z}}\langle k\rangle^{-N}f(k)e^{iky}$ is absolutely and uniformly convergent for some $N\in\mathbf{R}$.
\end{prop}
\begin{proof}
By Proposition 2.2 and (2.3), we have
\begin{equation}
\begin{split}
f(L)&=\sum_{k\in\mathbf{Z}}\frac{1}{2\pi}\int^{2\pi}_0 \langle k\rangle^{-N}f(k)e^{iky} \langle k\rangle^{N}\sum_{l\in\mathbf{Z}}P_le^{-ily}dy\\
&=\sum_{k\in\mathbf{Z}}\frac{1}{2\pi}\int^{2\pi}_0 \langle k\rangle^{-N}f(k)e^{iky} \sum_{l\in\mathbf{Z}} \langle l\rangle^{N}P_le^{-ily}dy\\
\end{split}
\end{equation}
Here $P_k$ are defined by (2.5). From (2.8), we get
\begin{align*}
f(L)=\langle L\rangle^{N}\sum_{k\in\mathbf{Z}}\frac{1}{2\pi}\int^{2\pi}_0\langle k\rangle^{-N}f(k)e^{-iyL}dy.
\end{align*}
Since $\sum_{k\in\mathbf{Z}}\langle k\rangle^{-N}f(k)e^{iky}$ is absolutely and uniformly convergent, we can interchange the sum and the integral
\begin{align*}
f(L)=\langle L\rangle^{N}\frac{1}{2\pi}\int^{2\pi}_0\sum_{k\in\mathbf{Z}}\langle k\rangle^{-N}f(k)e^{iky} e^{-iyL}dy.
\end{align*}
\end{proof}
In particular, we have the identity
\begin{equation}
\begin{split}
e^{-itL^2}&=\langle L\rangle^{N}\int^{2\pi}_0\langle D_y\rangle^{-N}G(t,y)e^{-iyL}dy
\end{split}
\end{equation}
for $G(t,y)$ defined by (1.4) and $N>1$. By (1.8), this can be written as follows
\begin{equation}
\begin{split}
e^{-itL^2}&=\langle L\rangle^{N}\int^{\pi}_0 \langle D_y\rangle^{-N}G(t,y)(e^{iyL}+e^{-iyL})dy\\
&=\langle L\rangle^{N}\int^{2\pi}_0\langle D_y\rangle^{-N}G(t,y){\rm cos}(yL) dy.
\end{split}
\end{equation}
The following proposition is an abstract form of Theorem 1.3 (i).
\begin{prop}
Let $L$ be a self-adjoint operator on the Hilbert space H. We assume ${\rm Spec}(L)\subset \mathbf{Z} $.
Then for the time $t= 2\pi n/m \in 2\pi\mathbf{Q}$, $n,m\in\mathbf{Z}$, we have the following identity,
\begin{equation}
U(2\pi n/m)=\sum_{j=0}^{m-1}g(n,m;j)V(2\pi j/m)
\end{equation}
Here $U(t)=e^{-itL^2}$, $V(t)=e^{-itL}$ and $g(n,m;j)$ is defined by (1.5).
\end{prop}
\begin{proof}
By using the identity (2.12) and (1.6), we have
\begin{align*}
U(2\pi n/m)&=\int^{2\pi}_0\langle D_y\rangle^{-N}\sum_{j=0}^{m-1}g(n,m;j)\delta (y-2\pi j/m) \langle L\rangle^{N}e^{-iyL}dy\\
&=\int^{2\pi}_0\langle D_y\rangle^{-N}\sum_{j=0}^{m-1}g(n,m;j)\delta (y-2\pi j/m) \langle D_y\rangle^{N}e^{-iyL}dy\\
&=\int^{2\pi}_0\sum_{j=0}^{m-1}g(n,m;j)\delta (y-2\pi j/m) e^{-iyL}dy\\
&=\sum_{j=0}^{m-1}g(n,m;j)V(2\pi j/m).
\end{align*}
\end{proof}
\begin{remark}
By (2.13), we can prove a similar result for $V(t)={\rm cos}(tL)$.
\end{remark}
\begin{remark}
We can prove this proposition in an elementary way from the following lemma, which is proved by linear algebra.
\begin{lem}
Let $L$ and $V(t)$ be as above and $\mathbf{P}_l=\sum_{j\in\mathbf{Z}} P_{mj+l}$. There exists an invertible
$m \times m$ complex matrix $(a_{ij})$, $a_{ij}\in\mathbf{C}$ such that the following identity holds
\begin{equation}
\mathbf{P}_i=\sum_{j=0}^{m-1}a_{ij}V(2\pi j/m).
\end{equation}
\end{lem}
\end{remark}
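Since Proposition 2.5 is purely spectral, it can also be tested numerically by taking $L$ to be any finite diagonal matrix with integer entries. The following snippet is such an illustrative test (the spectrum and the pair $(n,m)$ are arbitrary choices).
\begin{verbatim}
import numpy as np

def g(n, m, j):
    # the coefficient defined in (1.5)
    l = np.arange(m)
    return np.exp(2j * np.pi * (j * l - n * l * l) / m).sum() / m

eig = np.array([-7, -3, 0, 1, 2, 5, 8, 11], dtype=float)  # arbitrary integer spectrum
U = lambda t: np.diag(np.exp(-1j * t * eig**2))           # U(t) = exp(-i t L^2)
V = lambda t: np.diag(np.exp(-1j * t * eig))              # V(t) = exp(-i t L)

n, m = 3, 5
lhs = U(2 * np.pi * n / m)
rhs = sum(g(n, m, j) * V(2 * np.pi * j / m) for j in range(m))
print(np.max(np.abs(lhs - rhs)))   # of order machine precision, as in Proposition 2.5
\end{verbatim}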
Later we use the wave front set of $G(t,x)$.
\begin{prop}
Let $G(t,x)$ be defined by (1.4). If $t/2\pi=\alpha\in \mathbf{R}\backslash\mathbf{Q}$, we have
\begin{equation}
{\rm WF} (G(2\pi\alpha,x))=T^*S^1\backslash\{0\}.
\end{equation}
\end{prop}
\begin{proof}
By Example 1 (ii), the singularity exists at $x=0$, so we have $(0, \xi_0) \in{\rm WF} (G(2\pi\alpha,y))$ for some $\xi_0\not=0$. By (1.8), we also get $(0, -\xi_0) \in{\rm WF} (G(2\pi\alpha,y))$.
We then apply the same argument as in the proof of Example 1 (ii).
\end{proof}
\section{Proof of Theorem 1.3 (i).}
In this section, we consider the case $t/2\pi\in\mathbf{Q}$. We prove the following theorem.
\begin{thm}
Let $M$ be a Zoll manifold with period $2\pi$, $P_s\in\Psi_{phg}^1(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ be a self-adjoint operator and $P=\Delta_M+P_s$.
Then there exist self-adjoint operators $Q_1$, $Q_2\in\Psi_{phg}^1(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2}) $ which commute with $\Delta_M$, and $S\in\Psi^0(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$, such that
at the time $t= 2\pi n/m \in 2\pi\mathbf{Q}$, $n,m\in\mathbf{Z}$,
the following identity holds
\begin{equation}
U(2\pi n/m)=e^{-iS}W(2\pi n/m)\sum_{j=0}^{m-1}g(n,m;j)V(2\pi j/m){e^{iS}}+Smoothing.
\end{equation}
where $U(t)=e^{itP}$, $V(t)=e^{-itQ_1}$ and $W(t)=e^{itQ_2}$.
The principal symbols of these operators are given by
$\sigma_{pri} (Q_1)$=$\sigma_{pri} (\sqrt{-\Delta_M})$, $\sigma_{pri} (Q_2)=\sigma_{pri}(\tilde {P_s}+\alpha\sqrt{-\Delta_M} /2) $.
Here
\begin{equation}
\sigma_{pri}(\tilde {P_s})=\frac{1}{2\pi}\int^{2\pi}_{0}\sigma_{pri}(P_s)({\rm exp}(tH_l))dt
\end{equation}
where ${\rm exp}(tH_l)$ is the Hamilton flow for the symbol $l=\sigma_{pri} (\sqrt{-\Delta_M})$.
$S$ satisfies
$$e^{iS}(\Delta_M+P_s)e^{-iS}=\Delta_M+\tilde{P_s}+R$$
for a self-adjoint operator $\tilde{P_s}\in\Psi_{phg}^1(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ which commutes with $\Delta_M$ and $R\in \Psi^{-\infty}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$. $\alpha $ denotes the Maslov index.
The coefficient $g(n,m;j)\in\mathbf{C}$ is defined by $(1.5)$.
\end{thm}
\begin{remark}
$e^{iS}$ does not change the wave front set since it is invertible and it is a $\Psi DO$ whose principal symbol is defined by $e^{i\sigma_{pri}(S)}\in S^0(M)$.
For $V(t)$ and $W(t)$, we can apply the propagation of singularity.
\end{remark}
Thus we have the propagation of the wave front set.
\begin{cor}
Let $M$ and $P=\Delta_M+P_s$ be as above.
Then at the time $t=2\pi n/m$, $n,m\in\mathbf{Z}$, $m>0$, $n,m$: co-prime, we have
\begin{equation}
{\rm{WF}}(U(2\pi n/m)u)\subset {\rm exp}(2\pi n/m H_q)\bigcup_{j;\, g(n,m;j)\not=0} {\rm exp}(2\pi j/m H_l ){\rm{WF}}(u) \text{, $\forall u \in {\cal D'}(M)$}.
\end{equation}
Here ${\rm exp}(t H_l )$ and ${\rm exp}(t H_q)$ are the Hamilton flows for the symbols $l=\sigma (\sqrt{-\Delta_M})$, and $q=\sigma (\alpha \sqrt{-\Delta_M}/2+\tilde {P_s})$ respectively.
\end{cor}
The set $\{j :  g(n,m;j)\not=0\}$ can be determined from (1.9). Similarly we get
\begin{cor}
Let $M$ and $P=\Delta_M+P_s$ be as above.
Then for the time $t=2\pi n/m$, $n,m\in\mathbf{Z}$, $m>0$, $n,m$: co-prime, the following identity holds
\begin{equation}
{\rm{WF}}(U(2\pi n/m)\delta_{x_0})={\rm exp}(2\pi n/m H_q)\bigcup_{j;\, g(n,m;j)\not=0}{\rm exp}(2\pi j/m H_l ) {\rm{WF}}(\delta_{x_0}).
\end{equation}
Here $\delta_{x_0}$ is the Dirac measure at $x_0\in M$ and ${\rm exp}(t H_l )$ and ${\rm exp}(t H_q)$ are as above.
\end{cor}
This is a generalization of Theorem 1.3 (i). We prove Theorem 3.1 by reducing $e^{itP}$ to a form to which we can apply Proposition 2.5. This is essentially well known, but we include the details for the reader's convenience. First we assume $P_s=0$. By Proposition 2.2, we take a self-adjoint operator $E \in \Psi^{-1}_{phg}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ such that
\begin{equation}
{\rm Spec}(\sqrt{-\Delta_M}+\alpha/4 +E)\subset \mathbf{Z}
\end{equation}
We write
\begin{equation}
L=\sqrt{-\Delta_M}+\alpha/4 +E.
\end{equation}
Then we have
$$L^2= -\Delta_M + \alpha\sqrt{-\Delta_M}/2+\tilde E$$
Here $\tilde E\in\Psi^{0}_{phg}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ commutes with $\Delta_M$. Since $\tilde E$ commutes with $\Delta_M$, we have
\begin{equation}
e^{it\Delta_M }=e^{it (\alpha\sqrt{-\Delta_M}/2+\tilde E)}e^{-i tL^2}\\
\end{equation}
Next we shall consider the case $P_s\not =0$. First we assume that $P_s$ commutes with $\Delta_M$. Then by (3.7), we have
\begin{equation}
e^{it(\Delta_M+P_s) }=e^{it (\alpha\sqrt{-\Delta_M}/2+P_s+\tilde E)}e^{-i tL^2}\\
\end{equation}
Finally we consider the case that $P_s$ does not commute with $\Delta_M$.
We claim that there exist self-adjoint operators $S \in \Psi^{0}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ and $R\in \Psi^{-\infty}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ such that
\begin{equation}
e^{iS}(\Delta_M+P_s)e^{-iS} + R =\Delta_M+\tilde P_s.
\end{equation}
Here $\tilde P_s \in \Psi^{1}_{phg}(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ is a self-adjoint operator which commutes with $\Delta_M$. The principal symbol of $\tilde P_s$ is given by $(3.2)$.
This is essentially due to \cite{We}, see also \cite{Ho}.
Taking $Q=P_s- \alpha\sqrt{-\Delta_M}/2-\tilde E$, we will prove that $L^2+Q$ is unitarily equivalent, modulo smoothing operators, to $L^2+B$, where $B \in \Psi^{1}$ commutes with $\Delta_M$ and is given below.
We introduce the operator
$$Q_t=e^{it{L}}Qe^{-it{L}}$$
which is a self-adjoint $\Psi DO$. Taking an average,
$$B_1=\frac{1}{2\pi}\int_0^{2\pi}Q_tdt,$$
it follows that
$$[L,B_1]=\frac{1}{2\pi i}( e^{i2\pi {L}}Qe^{-i2\pi {L}} -Q).$$
since
$$\frac{d}{dt}Q_t=ie^{it{L}}[L,Q]e^{-it{L}}.$$
By (3.5), we have $[L,B_1]=0$. Applying Egorov's theorem, the sub-principal symbol of $L^2+B_1$ is given by (3.2).
We set $B= B_1.$
By similar calculus, we have
\begin{equation}
B-Q=\frac{1}{2\pi}\int_0^{2\pi}dt\int_0^t\frac{d}{ds}Q_sds=[iT, L]
\end{equation}
where
$$T=\frac{1}{2\pi}\int_0^{2\pi}dt\int_0^tQ_sds.$$
Since $L$ is elliptic, we can take its parametrix $N$.
Then $S_1=\frac{1}{2}TN$ is a zero-order self-adjoint $\Psi DO$. By the calculus of $\Psi DO$s, we have
$$[S_1,L^2]=\frac{1}{2}([T,L^2]N+T[N,L^2])=[T,L]+R_0$$
where $R_0\in \Psi^{-1}$ is a self-adjoint operator.
We set $S=S_1$ and consider $e^{i{S}}(L^2+Q)e^{-i{S}}$.
If $A\in\Psi^m$, then $e^{i{S}}Ae^{-i{S}}$ is a $\Psi DO$ whose symbol is the asymptotic sum of the following formal series
$$\sum_{j=0}^\infty({\rm ad}\ iS)^jA/ j!$$
where $({\rm ad} \ iS)A=[iS,A]$.
And by (3.10), we have
$$e^{i{S}}(L^2+Q)e^{-i{S}}=L^2+B+R_1 $$
where $R_1\in \Psi^{0}$ is a self-adjoint operator. Applying a similar argument to $R_1$, we have $B_2\in \Psi^{0}$ and $S_2\in \Psi^{-1}$ satisfying
$$e^{i{S}}(L^2+Q)e^{-i{S}}=L^2+B+R_2 $$
where $B=B_1+B_2$, $S=S_1+S_2$ and $R_2\in \Psi^{-1}$ are self-adjoint operators. Continuing this argument and taking asymptotic sums, we have the desired $B$ and $S$. This proves (3.9). Thus we have the identity
\begin{equation}
i\partial_t+\Delta_M+\tilde P_s=e^{iS}(i\partial_t+\Delta_M+P_s)e^{-iS} +R.
\end{equation}
We now give the solution operator for this equation. We write the solution operator in the form $$e^{iS}(e^{it( \Delta_M+ P_s)}+ A(t) )e^{-iS}$$ and, inserting this into the right-hand side of (3.11), we obtain the following equation for the operator $A(t)$:
\begin{equation}
\begin{cases}
(i\partial_t+\Delta_M+P_s+e^{-iS}Re^{iS} )A(t) =- e^{-iS}Re^{iS} e^{it( \Delta_M+ P_s)}\\
A(0)=0.
\end{cases}
\end{equation}
This can be solved by Duhamel's principle; we have
\begin{equation}
A(t)=i\int_0^te^{i(t-s)(\Delta_M+P_s+e^{-iS}Re^{iS} )}e^{-iS}Re^{iS} e^{is( \Delta_M+ P_s)}ds.
\end{equation}
Since $e^{-iS}Re^{iS}$ is a smoothing operator, $A(t)$ is a smoothing operator. We have the identity
\begin{equation}
e^{it( \Delta_M+ P_s)}=e^{-iS}e^{it( \Delta_M+\tilde P_s)}e^{iS}+ A(t).
\end{equation}
Since $\tilde P_s$ commutes with $\Delta_M$, by this identity and (3.8) we have proved the following.
\begin{prop}
Let $M$ be a Zoll manifold with period $2\pi$ and $P=\Delta_M+P_s$ as above. Then there exist self-adjoint operators $S, \ \tilde E\in\Psi^0(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ such that
\begin{equation}
e^{it( \Delta_M+ P_s)}=e^{-iS}e^{it (\alpha\sqrt{-\Delta_M}/2+\tilde P_s+\tilde E)}e^{-i tL^2}e^{iS}+ {\rm Smoothing}
\end{equation}
Here $L$ is defined by (3.6)
\end{prop}
We apply Proposition 2.5 to (3.15). Taking
$Q_1=L$ and $Q_2=\alpha\sqrt{-\Delta_M}/2+\tilde P_s+\tilde E$, we have Theorem 3.1.
\begin{remark}
If the manifold is an odd-dimensional rank-one symmetric space, we can easily prove a support theorem from our identity,
although Kakehi proved support theorems for more general compact symmetric spaces.
\end{remark}
We outline the argument. Essentially we can assume that the manifold is an odd-dimensional sphere. The spectrum of the Laplace-Beltrami operator is the following, see e.g. \cite{Ta},
\begin{equation}
{\rm spec}\Delta_{S^d}=\{-k(k+d-1);k\in\mathbf{N}\}.
\end{equation}
We have
$$\Delta_{S^d}= \Delta_{S^d}-(d-1)^2/4+(d-1)^2/4=-\sqrt{-\Delta_{S^d}+(d-1)^2/4}^2+(d-1)^2/4$$
By (3.16), we know $${\rm spec}\sqrt{-\Delta_{S^d}+(d-1)^2/4}\subset \mathbf{Z}$$
So we can apply the Remark 2.5 to this operator. Using the Huygens' principle, see \cite{La-Ph}, \cite{Ta},
we have the following
\begin{equation}
{\rm supp}(e^{2\pi i n/m\Delta_{S^d}}\delta_{x_0})=\bigcup_{j;\, g(n,m;j)\not=0} d(2\pi j/m){\rm supp}(\delta_{x_0}).
\end{equation}
Here $\delta_{x_0}$ is the Dirac measure at $x_0\in S^d$, $d(t)$ is defined by
$$d(t)\{x\}=\{y\in S^d; {\rm dist}_{S^d}(y,x)=t\},\ t\in\mathbf{R}, x\in S^d, $$
where ${\rm dist}_{S^d}$ is the geodesic distance on the sphere.
\section{Proof of Theorem 1.3 (ii).}
We shall consider the case $t/2\pi\in \mathbf{R}\backslash \mathbf{Q}$.
We prove the following generalization of Theorem 1.3 (ii).
\begin{thm}
Let $M$ be a Zoll manifold with period $2\pi$, $P_s\in\Psi_{phg}^1(M; \Omega^\frac{1}{2}, \Omega^\frac{1}{2})$ be a self-adjoint operator and $P=\Delta_M+P_s$.
If $t/2\pi =\alpha\in \mathbf{R}\backslash \mathbf{Q}$, the following identity holds
\begin{equation}
{\rm{WF}}(U(2\pi\alpha)\delta_{x_0})= {\rm exp}(2\pi \alpha H_q)\cup_{0\leq \tau \leq 2\pi} {\rm exp}(\tau H_l) {\rm{WF}}(\delta_{x_0})
\end{equation}
Here $\delta_{x_0}$ is the Dirac measure at $x_0\in M$, $U(t)\delta_{x_0}=e^{itP}\delta_{x_0}$,
${\rm exp}(t H_l )$ and ${\rm exp}(t H_q)$ are the Hamilton flows for the symbols $l=\sigma_{pri} (\sqrt{-\Delta_M})$, and $q=\sigma_{pri} (\tilde {P_s})$ respectively. $\sigma_{pri}(\tilde {P_s})$ is defined by (3.2).
\end{thm}
\begin{remark}
We can also prove a similar identity for ${\rm WF }_{d/2}$.
We notice the following
\begin{equation}
{\rm{WF}}(U(t)u)\subset {\rm exp}(t H_q)\cup_{0\leq \tau \leq 2\pi} {\rm exp}(\tau H_l) {\rm{WF}}(u)
\end{equation}
by (2.12) and Proposition 3.5.
\end{remark}
\begin{remark}
Since $M$ is complete, if $\sigma_{pri} (\tilde {P_s})=0$, then we have
$${\rm{sing\ supp}}(U(2\pi\alpha)\delta_{x_0})=M$$
from Theorem 4.1. This implies Theorem 1.3 (ii).
\end{remark}
By Proposition 3.5, we study the singularity of $e^{-itL^2}\delta_{x_0}$, where $L$ is defined by (3.6).
We fix the coordinates $(x, \xi)$ of the cotangent bundle where $x$ is the normal coordinate around $x_0=0$ and $\xi$ is the corresponding dual coordinate.
We study $e^{-itL^2}\delta_{x_0}$ near $x_0$. We recall that $e^{-itL} $ can be approximated, modulo smooth functions, by a Fourier integral operator; see \cite{Gr-Sj}, \cite{Ho}:
\begin{equation}
e^{-itL}u\equiv (2\pi)^{-d/2}\int e^{i(\Phi(t, x,\xi)-y\cdot\xi)}a(t, x,\xi)u(y)dyd\xi\ \ {\rm mod}\ C^\infty(M).
\end{equation}
From now on, we omit the term ${\rm mod}\ C^\infty(M)$. Here the phase $\Phi(t, x,\xi)$ and the amplitude $a(t, x,\xi)$ are chosen to satisfy
\begin{align}
\begin{cases}
&(i\partial_t-L)e^{i\Phi(t, x,\xi)}a(t, x,\xi)= {\cal O}((1+|\xi|)^{-\infty})\\
&e^{i\Phi(0, x,\xi)}a(0, x,\xi)=(2\pi)^{-d/2}e^{ix\cdot\xi}.
\end{cases}
\end{align}
These functions can be taken if $\Phi(t, x,\xi)$ satisfies the following Eikonal equation
\begin{equation}
\begin{cases}
&\partial_t\Phi+l(x,\partial_x\Phi)=0\\
&\Phi(0, x,\xi)=x\cdot\xi
\end{cases}
\end{equation}
here the symbol $l(x,\xi)$ is the principal symbol to the operator $L$ and we also inductively choose the amplitude $a(t, x,\xi)=a_0+a_1+\cdots$ by solving transport equations.
For small $t$, these equations have solutions and we get $\Phi(t, x,\xi)$, a real $C^\infty$ function homogeneous of degree one in $\xi$ and $a(t, x,\xi)=a_0+a_1+\cdots\in S_{phg}^{0}(M)$ where $a_j\in S^{-j}(M)$ is homogeneous of degree $-j$ in $\xi$.
In (4.3) we take $u=\delta_{x_0}$ and get
$$e^{-itL}\delta_{x_0}\equiv \int e^{i\Phi(t, x,\xi)}a(t, x,\xi)d\xi=:A(t,x).$$
Here the integral is defined as an oscillatory integral. Taking a point $x_1\not= x_0$ near $x_0$, we study $A(t,x)$ in some neighborhood of $x_1$.
We change to polar coordinates $(r,\theta)$ and $(\lambda,\omega)$ for $x$ and $\xi$ respectively. We have
$$A(t,r\theta)=\int_0^\infty\int_{S^{d-1}} e^{i\Phi (t,r\theta,\lambda\omega)}a(t,r\theta,\lambda\omega)\lambda^{d-1}d\lambda d\omega.$$
We write $x_1=r_1\theta_1$. Here $r_1>0$ is sufficiently small. We shall apply the stationary phase method to the $\omega$ integral.
Since $\Phi$ satisfies (4.5), we have
$$\Phi(t, x,\xi)=x\xi-t(\sum_{i,j}g^{ij}\xi_i\xi_j)^{1/2}+ {\cal O}(t^2\xi).$$
On normal coordinate, the metric satisfies
\begin{equation}
g^{ij}(x)=\delta_{ij}+{\cal O}(|x|).
\end{equation}
We take $|x|=r$ the same order as $t$. Then we get
\begin{equation}
\Phi(t, x,\xi)=x\cdot\xi-t|\xi|+ {\cal O}(t^2\xi)=\lambda(r\theta \cdot\omega-t+{\cal O}(t^2))
\end{equation}
Since $\theta \cdot\omega$ takes its maximum and minimum at $\omega=\pm\theta$ respectively,
$\omega=\pm\theta$ are the critical points of $\theta\cdot \omega$. Moreover, they are the only critical points and they are non-degenerate, i.e., the Hessian is non-degenerate.
So we may assume that the critical points of $\Phi(t, r\theta,\lambda\omega)$ in $\omega$ are non-degenerate. By the stationary phase method, we obtain
\begin{align*}
A(t,r\theta)\equiv\sum_{j}\int_0^\infty e^{i\lambda {\tilde\Phi }_j(t,r\theta )} {\tilde a}_j(t,r\theta,\lambda)\lambda^{(d-1)/2}d\lambda
\end{align*}
The phase functions are defined by
$${\tilde\Phi }_j(t,r\theta )=\Phi(t, r\theta,\omega_j)$$
where $\omega_j$ are the finitely many critical points of $\Phi$.
The amplitudes are
$${\tilde a}_j(t,r\theta,\lambda)={\tilde a}_{j 0}+{\tilde a}_{j1}+\cdots \in S_{phg}^{0}(U),$$
considering $t$ and $\theta$ as $C^\infty$ parameters, where $U$ is some neighborhood of $r_1$.
We have ${\tilde\Phi }_j=(r-t)+{\cal O}(rt)$ or ${\tilde\Phi }_j=-(r+t)+{\cal O}(rt)$ from (4.7).
We may also assume ${\partial_r\tilde\Phi_j }\not=0$ by taking $r\sim t$ smaller. If there exists a point $r_2\theta_2$ such that $\tilde\Phi_j(t,r_2\theta_2)\not=0$, then by using the identity
$$\frac{1}{i}\tilde\Phi_j^{-1}\partial_\lambda e^{i\lambda \tilde\Phi_j}=e^{i\lambda \tilde\Phi_j}$$
and integration by parts, we see that the integral is $C^\infty$ near this point.
On the other hand, by the propagation of singularities, $A(t,x)$ is singular only at $r=\pm t$.
So the essential parts of the integral are those with phases ${\tilde\Phi }_j$ of the following form
$${\tilde\Phi }_j^+=(r-t){\hat \Phi _j^+}\text{ or }{\tilde\Phi }_j^-=-(r+t){\hat \Phi _j}^-.$$
Since ${\partial_r\tilde\Phi_j }\not=0$, we have ${\hat \Phi }^\pm_j\not=0$. Near $r_1$, we get
\begin{align*}
A(t,r\theta)\equiv\sum_j \int_0^\infty&e^{i\lambda(r-t){\hat \Phi }^+_j(t,r\theta )} {\tilde a_j}(t,r\theta,\lambda)\lambda^{(d-1)/2}d\lambda\\
&+\sum_j \int_0^\infty e^{-i\lambda(r+t){\hat \Phi }^-_j(t,r\theta )} {\tilde a_j}(t,r\theta,\lambda)\lambda^{(d-1)/2}d\lambda
\end{align*}
By changing the variable to $\lambda'=\lambda{\hat \Phi }^{\pm}_j(t,r\theta )$, we get
\begin{align*}
A(t, r\theta)\equiv\int_0^\infty&e^{i\lambda'(r-t)} {\hat a}_+(t,r\theta,\lambda'){\lambda'}^{(d-1)/2}d\lambda'\\
&+\int^{\infty}_0e^{-i\lambda'(r+t)}{\hat a}_-(t,r\theta,\lambda'){\lambda'}^{(d-1)/2}d\lambda'
\end{align*}
where ${\hat a}_{\pm}=\sum_{j}{\tilde a_j}(t,r\theta,\lambda{\hat \Phi }^{\pm}_j){\hat \Phi }_j^{\pm}$. By the propagation of singularities, we know that ${\hat a_{\pm}}$ are elliptic. If $t>0$, then since $(r+t)\not=0$ the second term is $C^\infty$, and we can write the integral as follows
\begin{align*}
A(t,r\theta)&\equiv\int_0^\infty e^{ i\lambda'(r- t)} {\hat a}_{+}(t,r\theta,\lambda'){\lambda'}^{(d-1)/2}d\lambda'\\
&=\int_0^\infty\int^\infty_{-\infty}e^{ i\lambda'(r- z)} {\hat a}'_{+}(z,r\theta,\lambda'){\lambda'}^{(d-1)/2}\delta(z-t)dzd\lambda'
\end{align*}
If $t<0$, the first term is $C^\infty$ and we have
\begin{align*}
A(t,r\theta)&\equiv\int^{\infty}_0e^{-i\lambda'(r+t) }{\hat a}_-(t,r\theta,\lambda'){\lambda'}^{(d-1)/2}d\lambda'\\
&=\int_{-\infty}^0\int^\infty_{-\infty}e^{i\lambda'(r- z)} {\hat a}'_{-}(z,r\theta,\lambda'){|\lambda'|}^{(d-1)/2}\delta(z-|t|)dzd\lambda'
\end{align*}
Here ${\hat a}'_{-}(z,r\theta,\lambda')={\hat a_-}(z,r\theta,|\lambda'|)$. Thus we have
\begin{prop}
Let $M$ be a smooth Riemannian manifold, $x_0 \in M$, and let $L$ be a self-adjoint $\Psi$DO satisfying
$\sigma_{pri}(L)=\sigma_{pri}(\sqrt{-\Delta_M})$.
We take $x_1\not=x_0$ sufficiently near $x_0$ and the normal coordinates $r\theta$ around $x_0=0$. Then there exist $U$, a neighborhood of $r_1$,
and elliptic symbols ${\hat a}'_{\pm}(z,r\theta,\lambda')\in S_{phg}^{0}(U)$ with $C^\infty$ parameter $\theta$
such that for small $t>0$ we have the following identities near $x_1$ modulo smooth functions
\begin{equation}
e^{\mp itL}\delta_{x_0}(r\theta) \equiv
\begin{cases}
\displaystyle \int_0^\infty\int^\infty_{-\infty}e^{ i\lambda'(r- z)} {\hat a}'_{+}(z,r\theta,\lambda'){\lambda'}^{(d-1)/2}\delta(z-t)dzd\lambda'\\
\displaystyle \int_{-\infty}^0\int^\infty_{-\infty}e^{i\lambda'(r- z)} {\hat a}'_{-}(z,r\theta,\lambda'){|\lambda'|}^{(d-1)/2}\delta(z-t)dzd\lambda'
\end{cases}.
\end{equation}
\end{prop}
\begin{remark}
We can obtain this fact more directly by solving (4.5).
For brevity, we use the propagation of singularities instead.
\end{remark}
Now we study the singularity of $e^{- itL^2}\delta_{x_0}$ near $x_1$.
We take a smooth cut-off function $\rho(r\theta)$ near $(r_1, \theta_1)$,
i.e. $\rho=1$ on a small neighborhood of $(r_1, \theta_1)$ and $\rho$ is identically zero outside another neighborhood of $(r_1, \theta_1)$.
Then by the identity (2.13) and the propagation of singularities, we have the following identity for $N>1$
\begin{equation}
\rho e^{-itL^2}\delta_{x_0}\equiv \rho \langle L\rangle^{N}\int^{\pi}_{0}\tilde\rho(y)\{\langle D_y\rangle^{-N} G(t,y)\}(e^{iyL}+e^{-iyL})dy\,\delta_{x_0}
\end{equation}
where $\tilde\rho(y)\in C^\infty([0,\pi])$ is a cut-off function near $r_1$. By Proposition 4.4, the right hand side can be written as follows in the normal polar coordinates near $x_0$.
\begin{align*}
(e^{iyL}+e^{-iyL})\delta_{x_0}\equiv \frac{1}{2\pi}\int_{-\infty}^\infty\int_{-\infty}^\infty e^{i \lambda'(r- z)} b(z,r\theta,\lambda')\delta(z-y)dzd\lambda'
\end{align*}
Here we take
\begin{equation}
b(z,r\theta,\lambda')=
\begin{cases}
2\pi{\hat a_+}'(z,r\theta,\lambda'){\lambda'}^{(d-1)/2}\ \ \ \lambda'>0\\
2\pi{\hat a_-}'(z,r\theta,\lambda'){|\lambda'|}^{(d-1)/2}\ \ \ \lambda'<0.
\end{cases}
\end{equation}
Since $\hat a'_\pm$ are elliptic with respect to $\lambda'$, $b$ is elliptic. The right hand side of (4.9) becomes
\begin{align*}
\rho & \langle L\rangle^{N}\int^{\pi}_{0}\tilde\rho(y)\{\langle D_y\rangle^{-N} G(t,y)\} \frac{1}{2\pi}\int_{-\infty}^\infty\int_{-\infty}^\infty e^{i \lambda'(r- z)} b(z,r\theta,\lambda')\delta(z-y)dzd\lambda' dy\\
&= \rho \langle L\rangle^{N}\frac{1}{2\pi}\int_{-\infty}^\infty\int_{-\infty}^\infty e^{i \lambda'(r- z)} b(z,r\theta,\lambda')\int^{\pi}_{0}\tilde\rho(y)\{\langle D_y\rangle^{-N} G(t,y)\}\delta(z-y) dy dzd\lambda' \\
&= \rho \langle L\rangle^{N}\frac{1}{2\pi}\int_{-\infty}^\infty\int_{-\infty}^\infty e^{i \lambda'(r- z)} b(z,r\theta,\lambda')\{\tilde\rho(y)\langle D_y\rangle^{-N} G(t,y)\}|_{y=z} dzd\lambda'
\end{align*}
Here the last integral defines a $\Psi$DO with $C^\infty$ parameter $\theta$. We write this operator as $B(\theta)$ and get
\begin{prop}
Let $M$ be a Zoll manifold with period $2\pi$, $x_0 \in M$, and let $L$ be the self-adjoint $\Psi$DO defined by (3.6).
We take $x_1\not=x_0$ sufficiently near $x_0$ and the normal polar coordinates $r\theta$ around $x_0=0$. Then there exist $U$, a neighborhood of $r_1$,
and an elliptic symbol $b(z,r\theta,\lambda')\in S^{(d-1)/2}(U)$ with $C^\infty$ parameter $\theta$
such that for small $t>0$ and $N>1$, we have the following identity near $x_1$ modulo smooth functions
\begin{equation}
\rho e^{-itL^2}\delta_{x_0}\equiv \langle L\rangle^{N} B(\theta)( \tilde\rho(y)\langle D_y\rangle^{-N} G(t,y)).
\end{equation}
\end{prop}
This proposition reduces the study of the singularities of $e^{-itL^2}\delta_{x_0}$ to those of $G(t,y)$.
If $t/2\pi=\alpha\in\mathbf{R}\backslash\mathbf{Q}$, by Proposition 2.9, $\tilde G(y)=\tilde\rho(y)\langle D_y\rangle^{-N}G(2\pi\alpha,y)$ is singular at $r_1$ in both directions.
Since $B(\theta)$ is an elliptic $\Psi$DO with $C^\infty$ parameter $\theta$, it preserves the singularity in the $r$-variable. We have
\begin{equation*}
(r_1\theta_1, \pm \theta_1)\in {\rm WF}(e^{-itL^2}\delta_{x_0}).
\end{equation*}
We also know that, in normal polar coordinates,
\begin{equation}
(r_1\theta_1, \pm \theta_1)= \exp (\pm r_1 H_l)(0, \pm\theta_1).
\end{equation}
Since we can take $x_1$ arbitrarily close to $x_0$, we have
\begin{equation}
(0, \pm \theta_1)\in {\rm WF}(e^{-itL^2}\delta_{x_0}).
\end{equation}
Next we prove the existence of similar singularities at every point using (1.7). As in the previous argument, we take smooth cut-off functions $\rho(r\theta)$ near $\Pi_x(\exp ((r_1+4\pi\alpha) H_l)(0, \theta_1))$ and $\tilde\rho(y)$ near $r_1+4\pi\alpha$. We have
\begin{align*}
\rho e^{-i2\pi\alpha L^2}\delta_{x_0}&\equiv \rho \langle L\rangle^{N}\int^{\pi}_{0}\tilde\rho(y)\{\langle D_y\rangle^{-N} G(2\pi\alpha,y)\}(e^{iyL}+e^{-iyL})dy\delta_{x_0}\\
&=\rho\langle L\rangle^{N}\int^{\pi}_0 \tilde\rho(y)\{\langle D_y\rangle^{-N} e^{i(2\pi\alpha+y)}G(2\pi\alpha,y+4\pi\alpha)\} (e^{iyL}+e^{-iyL})\delta_{x_0}dy.
\end{align*}
Change the variable $y=y'-4\pi\alpha$.
\begin{align*}
&=\rho\langle L\rangle^{N}\int^{\pi}_0 \tilde\rho(y'-4\pi\alpha)\{\langle D_y\rangle^{-N} e^{i(y'-2\pi\alpha)}G(2\pi\alpha,y')\} (e^{i(y'-4\pi\alpha)L }
+e^{-i(y'-4\pi\alpha)L})\delta_{x_0}dy'\\
&=\rho e^{i4\pi\alpha L}\langle L\rangle^{N}\int^{\pi}_0 \{\tilde\rho(y'-4\pi\alpha)\langle D_y\rangle^{-N} e^{i(y'-2\pi\alpha)}G(2\pi\alpha,y')\} e^{i y'L }\delta_{x_0}dy'\\
&\ \ +\rho e^{-i4\pi\alpha L}\langle L\rangle^{N}\int^{\pi}_0 \{\tilde\rho(y'-4\pi\alpha)\langle D_y\rangle^{-N} e^{i(y'-2\pi\alpha)}G(2\pi\alpha,y')\} e^{- iy'L }\delta_{x_0}dy'.
\end{align*}
We write this as follows
\begin{equation}
\rho e^{-i2\pi\alpha L^2}\delta_{x_0}\equiv \rho e^{i4\pi\alpha L}{\rm I}+ \rho e^{-i4\pi\alpha L} \ {\rm II}.
\end{equation}
Now $\hat G(y')=\tilde\rho(y'-4\pi\alpha)\langle D_y\rangle^{-N} e^{i(y'-2\pi\alpha)}G(2\pi\alpha,y')$ is singular in both directions at $y'=r_1$, and we apply the previous argument, writing (I) as $B_1(\theta)\hat G(y')$.
Since $B_1(\theta)$ is a $\Psi DO$ elliptic in the positive direction for $r>0$, we have
\begin{align*}
(r_1\theta_1, \theta_1)\in {\rm WF}(\rm I)
\end{align*}
Taking $r_1$ sufficiently small, we may also assume
$$-r_1-8\pi\alpha\not\equiv r_1+4\pi\alpha , \ \ {\rm mod }\ 2\pi $$
By shrinking the support of $\rho$, we have
\begin{align*}
\exp (-8\pi\alpha H_l)(r_1\theta_1, -\theta_1)\not \in {\rm WF}(\rm I).
\end{align*}
By the propagation of singularities, we get
\begin{equation}
\begin{split}
\exp (4\pi\alpha H_l)(r_1\theta_1, \theta_1)\in {\rm WF}(\rho e^{i4\pi\alpha L}\rm I)\\
\exp (-4\pi\alpha H_l)(r_1\theta_1, -\theta_1)\not \in {\rm WF}(\rho e^{i4\pi\alpha L}\rm I).
\end{split}
\end{equation}
Similarly we obtain
\begin{equation}
\begin{split}
\exp (8\pi\alpha H_l)(r_1\theta_1, \theta_1)\not\in {\rm WF}(\rho e^{-i4\pi\alpha L}\rm II)\\
\exp (-4\pi\alpha H_l)(r_1\theta_1, -\theta_1)\in {\rm WF}(\rho e^{-i4\pi\alpha L}\rm II).
\end{split}
\end{equation}
(4.12), (4.13) and (4.14) imply
\begin{align*}
\exp ((r_1\pm4\pi\alpha) H_l)(0, \pm\theta_1)\in {\rm WF}(e^{-i2\pi\alpha L^2}\delta_{x_0}).
\end{align*}
Since $r_1>0$ can be taken arbitrarily small, we have
\begin{align*}
\exp ((\pm4\pi\alpha) H_l)(0, \pm\theta_1)\in {\rm WF}(e^{-i2\pi\alpha L^2}\delta_{x_0}).
\end{align*}
Inductively we can prove similar singularities at the points $\exp ((4\pi\alpha k) H_l)(0, \pm\theta_1)$, $\forall k\in\mathbf{Z}$, $\forall\theta_1$. Since the points $4\pi\alpha k$, $k\in\mathbf{Z}$, are dense in $S^1$, we get
\begin{equation}
\cup_{0\leq \tau \leq 2\pi} {\rm exp}(\tau H_l) {\rm{WF}}(\delta_{x_0})\subset {\rm{WF}}(e^{-i2\pi\alpha L^2}\delta_{x_0})
\end{equation}
Here we use periodicity, $\exp (-t H_l)= \exp ((2\pi -t )H_l)$.
By Proposition 3.5 and (4.2), this implies (4.1).
\begin{small}
\begin{center}
\end{center}
Department of Mathematics, Graduate School of Science, Osaka University,
1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan
E-mail:[email protected]
\end{small}
\end{document}
|
\begin{document}
\title[Posets in \emph{Macaulay2}]{Partially ordered sets in \emph{Macaulay2}}
\author[D.\ Cook II]{David Cook II}
\address{Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556, USA}
\email{\href{mailto:[email protected]}{[email protected]}}
\author[S.\ Mapes]{Sonja Mapes}
\email{\href{mailto:[email protected]}{[email protected]}}
\author[G.\ Whieldon]{Gwyneth Whieldon}
\address{Department of Mathematics, Hood College, Frederick, MD 21701, USA}
\email{\href{mailto:[email protected]}{[email protected]}}
\subjclass[2010]{06A06, 06A11}
\thanks{\emph{Posets} version 1.0.6 available at \url{http://www.nd.edu/~dcook8/files/Posets.m2}.}
\begin{abstract}
We introduce the package \emph{Posets} for \emph{Macaulay2}. This package provides a data structure
and the necessary methods for working with partially ordered sets, also called posets. In
particular, the package implements methods to enumerate many commonly studied classes of posets,
perform operations on posets, and calculate various invariants associated to posets.
\end{abstract}
\maketitle
\section*{Introduction}
A \emph{partial order} is a binary relation $\preceq$ over a set $P$ that is antisymmetric, reflexive, and transitive.
A set $P$ together with a partial order $\preceq$ is called a \emph{poset}, or \emph{partially ordered set}.
Posets are combinatorial structures that are used in modern mathematical research, particularly in algebra.
We introduce the package \emph{Posets} for \emph{Macaulay2} via three distinct posets or related ideals which
arise naturally in combinatorial algebra.
We first describe two posets that are generated from algebraic objects. The intersection semilattice associated
to a hyperplane arrangement can be used to compute the number of unbounded and bounded real regions cut out by
a hyperplane arrangement, as well as the dimensions of the homologies of the complex complement of a hyperplane arrangement.
Given a monomial ideal, the lcm-lattice of its minimal generators gives information on the structure of the free
resolution of the original ideal. Specifically, two monomial ideals with isomorphic lcm-lattices have the
``same'' (up to relabeling) minimal free resolution, and the lcm-lattice can be used to compute, among other things,
the multigraded Betti numbers $\beta_{i,\bf{b}}(R/M)=\dim_{\Bbbk}\text{Tor}_{i,\bf{b}}(R/M,\Bbbk)$ of the monomial ideal.
In contrast to the first two examples (associating a poset to an algebraic object), we then describe an ideal that is
generated from a poset. In particular, the Hibi ideal of a finite poset is a squarefree monomial ideal which has many
nice \emph{algebraic} properties that can be described in terms of \emph{combinatorial} properties of the poset. In
particular, the resolution and Betti numbers, the multiplicity, the projective dimension, and the Alexander dual are
all nicely described in terms of data about the poset itself.
\section*{Intersection (semi)lattices}
A \emph{hyperplane arrangement} $\mathcal{A}$ is a finite collection of affine hyperplanes in some vector space $V$.
The \emph{dimension} of a hyperplane arrangement is defined by $\dim(\mathcal{A})=\dim(V)$, and the \emph{rank} of a
hyperplane arrangement $\text{rank}(\mathcal{A})$ is the dimension of the span in $V$ of the set of normals to the
hyperplanes in $A$.
The \emph{intersection semilattice} ${\mathcal L(A)}$ of $\mathcal{A}$ is the set of the nonempty intersections
$\bigcap_{\mathcal{H}\in \mathcal{A}'}\mathcal{H}$ for subsets $\mathcal{A}'\subseteq \mathcal{A}$, ordered by reverse
inclusion. We include the empty intersection corresponding to $\mathcal{A}'=\emptyset$, which is the minimal
element in the intersection meet semilattice $\hat{0} \in {\mathcal L(A)}$. If the intersection of all
hyperplanes in $\mathcal{A}$ is nonempty, $\bigcap_{\mathcal{H}\in \mathcal{A}} \mathcal{H} \neq \emptyset$, then the intersection meet semilattice
${\mathcal L(A)}$ is actually a lattice. Arrangements with this property are called \emph{central arrangements}.
Consider the non-central hyperplane arrangement $\mathcal{A}=\{\mathcal{H}_1=V(x+y),\mathcal{H}_2=V(x),\mathcal{H}_3=V(x-y),\mathcal{H}_4=V(y+1)\}$,
where $\mathcal{H}_i=V(\ell_i(x,y))\subseteq \mathbb{R}^2$ denotes the hyperplane $H_i$ of zeros of the linear form $\ell_i(x,y)$;
see Figure~\ref{fig:hyp-arr}(i). We can construct ${\mathcal L(A)}$ in \emph{Macaulay2} as follows.
\begin{verbatim}
i1 : needsPackage "Posets";
i2 : R = RR[x,y];
i3 : A = {x + y, x, x - y, y + 1};
i4 : LA = intersectionLattice(A, R);
\end{verbatim}
Further, using the method \texttt{texPoset} we can generate \LaTeX~to display the Hasse diagram of
${\mathcal L(A)}$, as in Figure~\ref{fig:hyp-arr}(ii).
\begin{figure}
\caption{(i) The non-central hyperplane arrangement $\mathcal{A} = \{\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4\}$ and (ii) the Hasse diagram of its intersection semilattice ${\mathcal L(A)}$.}
\label{fig:hyp-arr}
\end{figure}
A theorem of Zaslavsky~\cite{Zaslavsky} provides information about the topology of the complement of
hyperplane arrangements in $\mathbb{R}^n$. Let $\mu$ denote the M\"obius function of the intersection semilattice ${\mathcal L(A)}$. Then
the number of regions that $\mathcal{A}$ divides $\mathbb{R}^n$ into is
\[
r(\mathcal{A}) = \sum_{x \in {\mathcal L(A)}} |\mu(\hat{0}, x)|.
\]
Moreover, the number of these regions that are bounded is
\[
b(\mathcal{A}) = |\mu_{{\mathcal L(A)} \cup \hat{1}}(\hat{0}, \hat{1})|,
\]
where ${\mathcal L(A)} \cup \hat{1}$ is the intersection semilattice adjoined with a maximal element.
We verify these results for the non-central hyperplane arrangement $\mathcal{A}$ using \emph{Macaulay2}:
\begin{verbatim}
i5 : realRegions(A, R)
o5 = 10
i6 : boundedRegions(A, R)
o6 = 2
\end{verbatim}
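These values can also be read off by hand from the intersection semilattice. The hyperplanes $\mathcal{H}_1,\mathcal{H}_2,\mathcal{H}_3$ meet at the origin, while $\mathcal{H}_4$ meets each of them in a separate point, so
\[
\mu(\hat{0},\hat{0})=1, \quad \mu(\hat{0},\mathcal{H}_i)=-1 \ (1\le i\le 4), \quad \mu(\hat{0},\{(0,0)\})=2, \quad \mu(\hat{0},p)=1
\]
for each of the three double points $p$. Hence $r(\mathcal{A})=1+4+2+3=10$, and adjoining $\hat{1}$ gives $\mu(\hat{0},\hat{1})=-(1-4+2+3)=-2$, so $b(\mathcal{A})=2$, in agreement with the output above.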
Moreover, in the case of hyperplane arrangements in $\mathbb{C}^n$, using a theorem of Orlik and Solomon~\cite{OrlikSolomon},
we can recover the Betti numbers (dimensions of homologies) of the complement $\mathcal{M}_\mathcal{A} = \mathbb{C}^n-\cup \mathcal{A}$ of the hyperplane
arrangement using purely combinatorial data of the intersection semilattice. In particular, $\mathcal{M}_\mathcal{A}$ has torsion-free integral
cohomology with Betti numbers given by
\[
\beta_i({\mathcal{M}}_\mathcal{A})=\dim_{\mathbb{C}}\biggl(\text{H}_i({\mathcal{M}}_\mathcal{A})\biggr)=\sum_{\substack{
x\in{\mathcal L(A)} \\
\dim_{\mathbb{C}}(x)=n-i
}} |\mu(\hat{0},x)|,
\]
where $\mu(\cdot)$ again represents the M\"obius function. See \cite{Wachs} for details and generalizations of this formula.
\emph{Posets} will compute the ranks of elements in a poset, where the ranks in the intersection lattice \texttt{LA} are
determined by the codimension of elements. Combining the outputs of our rank function with the M\"obius function allows
us to calculate $\beta_0({\mathcal{M}}_\mathcal{A}) = 1$, $\beta_1({\mathcal{M}}_{\mathcal{A}}) = 4$, and $\beta_2({\mathcal{M}}_{\mathcal{A}}) = 5$.
\begin{verbatim}
i7 : RLA = rank LA
o7 = {{ideal 0}, {ideal(x+y), ideal(x), ideal(x-y), ideal(y+1)},
{ideal(y,x), ideal(y+1,x-1),ideal(y+1,x), ideal(y+1,x+1)}}
i8 : MF = moebiusFunction LA;
i9 : apply(RLA, r -> sum(r, x -> abs MF#(ideal 0_R, x)))
o9 = {1, 4, 5}
\end{verbatim}
\section*{LCM-lattices}
Let $R = K[x_1, \dots, x_t]$ be the polynomial ring in $t$ variables over the field $K$, where the degree of $x_i$
is the standard basis vector $e_i \in \mathbb{Z}^t$. Let $M = (m_1, \dots, m_n)$ be a monomial ideal in $R$, then we define the
\emph{lcm-lattice} of $M$, denoted $L_M$, as the set of all least common multiples of subsets of the generators of
$M$ partially ordered by divisibility. It is easy to see that $L_M$ will always be a finite atomic lattice.
While lcm-lattices are nicely structured, they can be difficult to compute by hand especially for large examples
or for ideals where $L_M$ is not ranked.
Consider the ideal $M = (a^3b^2c, a^3b^2d, a^2cd, abc^2d, b^2c^2d)$ in $R = k[a,b,c,d]$. Then we can construct
$L_M$ in \emph{Macaulay2} as follows. See Figure~\ref{fig:lcm-lattice} for the Hasse diagram of $L_M$, as
generated by the \texttt{texPoset} method.
\begin{verbatim}
i10 : R = QQ[a,b,c,d];
i11 : M = ideal(a^3*b^2*c, a^3*b^2*d, a^2*c*d, a*b*c^2*d, b^2*c^2*d);
i12 : LM = lcmLattice M;
\end{verbatim}
\begin{figure}
\caption{The lcm-lattice for $M = (a^3b^2c, a^3b^2d, a^2cd, abc^2d, b^2c^2d)$}
\label{fig:lcm-lattice}
\end{figure}
Lcm-lattices, which were introduced by Gasharov, Peeva, and Welker \cite{GasharovPeevaWelker}, have become an
important tool used in studying free resolutions of monomial ideals. There have been a number of results
that use the lcm-lattice to give constructive methods for finding free resolutions of monomial ideals; for some examples, see
\cite{Clark}, \cite{PeevaVelasco}, and \cite{Velasco}.
In particular, Gasharov, Peeva, and Welker~\cite{GasharovPeevaWelker} provided a key connection between the lcm-lattice
of a monomial ideal $M$ of $R$ and its minimal free resolution, namely, one can compute the (multigraded) Betti numbers
of $R/M$ using the lcm-lattice. Let $\Delta(P)$ denote the order complex of the poset $P$, then for $i \geq 1$ we have
\[
\beta_{i,b}(R/M) = \dim \tilde{H}_{i-2}(\Delta (\hat{0},b); k),
\]
for all $b \in L_M$, and so
\[
\beta_i(R/M) = \sum_{b \in L_M} \dim \tilde{H}_{i-2}(\Delta (\hat{0},b); k).
\]
These computations can all be done using \emph{Posets} together with the package \emph{SimplicialComplexes},
by S.\ Popescu, G.\ Smith, and M.\ Stillman. In particular, we can show that $\beta_{i, a^2b^2c^2d} = 0$
for all $i$ with the following calculation.
\begin{verbatim}
i13 : D1 = orderComplex(openInterval(LM, 1_R, a^2*b^2*c^2*d));
i14 : prune HH(D1)
o14 = -1 : 0
0 : 0
1 : 0
o14 : GradedModule
\end{verbatim}
Similarly, we can show that $\beta_{2, a^3b^2cd} = 2$.
\begin{verbatim}
i15 : D2 = orderComplex(openInterval(LM, 1_R, a^3*b^2*c*d));
i16 : prune HH(D2)
o16 = -1 : 0
2
0 : QQ
o16 : GradedModule
\end{verbatim}
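This last computation can also be seen directly from the lattice: the only generators of $M$ dividing $a^3b^2cd$ are $a^3b^2c$, $a^3b^2d$ and $a^2cd$, and the least common multiple of any two of them is already $a^3b^2cd$, so the open interval $(\hat{0}, a^3b^2cd)$ consists of three incomparable elements. Its order complex is three isolated vertices, so $\tilde{H}_0 \cong k^2$ and all other reduced homology vanishes, giving $\beta_{2, a^3b^2cd}(R/M) = 2$.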
\section*{Hibi ideals}
Let $P = \{p_1, \ldots, p_n\}$ be a finite poset with partial order $\preceq$, and let $K$ be a field.
The \emph{Hibi ideal}, introduced by Herzog and Hibi~\cite{HibiHerzog}, of $P$ over $K$ is the squarefree
ideal $H_P$ in $R = K[x_1, \ldots, x_n, y_1, \ldots, y_n]$ generated by the monomials
\[
u_I := \prod_{p_i \in I} x_i \prod_{p_i \notin I} y_i,
\]
where $I$ is an order ideal of $P$, i.e., for every $i \in I$ and $p \in P$, if $p \preceq i$, then $p \in I$.
\emph{Nota bene:} The Hibi ideal is the ideal of the monomial generators of the Hibi ring, a toric ring first
described by Hibi~\cite{Hibi}.
\begin{verbatim}
i17 : P = divisorPoset 12;
i18 : HP = hibiIdeal P;
i19 : HP_*
o19 = {x x x x x x , x x x x x y , x x x x y y , x x x x y y , x x x y y y ,
0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 4 3 5 0 1 2 3 4 5 0 1 3 2 4 5
x x x y y y , x x y y y y , x x y y y y , x y y y y y , y y y y y y }
0 1 2 3 4 5 0 2 1 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5
\end{verbatim}
Herzog and Hibi~\cite{HibiHerzog} proved that every power of $H_P$ has a linear resolution, and that the
$i^{\rm th}$ Betti number $\beta_i(H_P)$ is the number of intervals of the distributive
lattice $\mathcal{L}(P)$ of $P$ isomorphic to the rank $i$ boolean lattice. Using Exercise~3.47 in
Stanley's book~\cite{Stanley}, we can recover this by looking instead at the number of elements of
$\mathcal{L}(P)$ that cover exactly $i$ elements.
\begin{verbatim}
i20 : betti res HP
             0  1  2 3
o20 = total: 1 10 12 3
          0: 1  .  . .
          5: . 10 12 3
i21 : LP = distributiveLattice P;
i22 : cvrs = partition(last, coveringRelations LP);
i23 : iCvrs = tally apply(keys cvrs, i -> #cvrs#i);
i24 : gk = prepend(1, apply(sort keys iCvrs, k -> iCvrs#k))
o24 = {1, 6, 3}
i25 : apply(#gk, i -> sum(i..<#gk, j -> binomial(j, i) * gk_j))
o25 = {10, 12, 3}
\end{verbatim}
Moreover, Herzog and Hibi~\cite{HibiHerzog} proved that the projective dimension of $H_P$ equals
the Dilworth number of $P$, i.e., the maximum size of an antichain in $P$.
\begin{verbatim}
i26 : pdim module HP == dilworthNumber P
o26 = true
\end{verbatim}
\end{document}
\begin{document}
\title{A Unified Approach to the Global Exactness of Penalty and Augmented Lagrangian Functions I: Parametric Exactness}
\begin{abstract}
In this two-part study we develop a unified approach to the analysis of the global exactness of various penalty and
augmented Lagrangian functions for constrained optimization problems in finite dimensional spaces. This approach allows
one to verify in a simple and straightforward manner whether a given penalty/augmented Lagrangian function is exact,
i.e. whether the problem of unconstrained minimization of this function is equivalent (in some sense) to the original
constrained problem, provided the penalty parameter is sufficiently large. Our approach is based on the so-called
localization principle that reduces the study of global exactness to a local analysis of a chosen merit function near
globally optimal solutions. In turn, such local analysis can usually be performed with the use of sufficient optimality
conditions and constraint qualifications.
In the first paper we introduce the concept of global parametric exactness and derive the localization
principle in the parametric form. With the use of this version of the localization principle we recover existing simple
necessary and sufficient conditions for the global exactness of linear penalty functions, and for the existence of
augmented Lagrange multipliers of Rockafellar-Wets' augmented Lagrangian. Also, we obtain completely new necessary and
sufficient conditions for the global exactness of general nonlinear penalty functions, and for the global exactness of a
continuously differentiable penalty function for nonlinear second-order cone programming problems. We briefly
discuss how one can construct a continuously differentiable exact penalty function for nonlinear semidefinite
programming problems, as well.
\end{abstract}
\section{Introduction}
One of the main approaches to the solution of a constrained optimization problem consists in the reduction of this
problem to an unconstrained one (or a sequence of unconstrained problems) with the use of \textit{merit} (or
\textit{auxiliary}) functions. Such merit functions are usually defined as a certain convolution of the objective
function and constraints, and they almost always include the penalty parameter that must be properly chosen for the
reduction to work. This approach led to the development of various penalty and barrier methods
\cite{FiaccoMcCormick,AuslenderCominettiHaddou,BenTal,Auslender}, primal-dual methods based on the use of augmented
Lagrangians \cite{BirginMartinez_book} and many other methods of constrained optimization.
There exist numerous results on the duality theory for various merit functions, such as penalty and augmented Lagrangian
functions. A modern general formulation of the augmented Lagrangian duality for nonconvex problems based on a geometric
interpretation of the augmented Lagrangian in terms of subgradients of the optimal value function was proposed by
Rockafellar and Wets in \cite{RockWets}, and further developed in
\cite{HuangYang2003,ZhouYang2004,HuangYang2005,NedichOzdaglar}. Let us also mention several extensions
\cite{GasimovRubinov,BurachikRubinov,ZhouYang2006,ZhangYang2008,ZhouYang2009,BurachikIusemMelo,ZhouYang2012,
WangYangYang2014} of this augmented Lagrangian duality theory aiming at including some other augmented Lagrangian and
penalty functions into the unified framework proposed in \cite{RockWets}. A general duality theory for
\textit{nonlinear} Lagrangian and penalty functions was developed in
\cite{Rubinov2000,RubinovYang2003,PenotRubinov2005,WangYangYang2007}. Another general approach to the study of duality
based on the \textit{image space analysis} was systematically studied in
\cite{EvtushenkoRubinovZhadanI,Giannessi_book,Giannessi2007,Mastroeni2012,LiFengZhang2013,ZhuLi2014,ZhuLi2014_2,
XuLi2014}.
In contrast to duality theory, few attempts
\cite{EvtushenkoZhadan,EvtushenkoZhadan_In_Collection,DiPilloGrippo89,DiPillo1994,EvtushenkoRubinovZhadanII} have been
made to develop a general theory of the \textit{global exactness} of merit functions, despite the abundance of particular
results on the exactness of various penalty/augmented Lagrangian functions. Furthermore, the existing general results on
global exactness are unsatisfactory, since they are very restrictive and cannot be applied to many particular cases.
Recall that a penalty function is called exact iff its points of global minimum coincide with globally optimal solutions
of the constrained optimization problem under consideration. The concept of exactness of a linear penalty function was
introduced by Eremin \cite{Eremin} and Zangwill \cite{Zangwill} in the mid-1960s, and was further investigated by many
researchers
(see~\cite{Pietrzykowski,EvansGouldTolle,Bertsekas,HanMangasarian,IoffeNSC,Rosenberg,Mangasarian,Burke,WuBaiYangZhang,
Antczak,Demyanov,Demyanov0,DemyanovDiPilloFacchinei,DiPilloGrippo,DiPilloGrippo89,ExactBarrierFunc,Zaslavski,
Dolgopolik_UT} and the references therein). A class of \textit{continuously differentiable} exact penalty functions was
introduced by Fletcher \cite{Fletcher70} in 1970. Fletcher's penalty function was modified and thoroughly investigated
in
\cite{Fletcher70,Fletcher73,MukaiPolak,GladPolak,BoggsTolle1980,Bertsekas_book,HanMangasarian_C1PenFunc,DiPilloGrippo85,
DiPilloGrippo86, Lucidi92, ContaldiDiPilloLucidi_93,FukudaSilva,AndreaniFukuda}. Di Pillo and Grippo proposed to
consider an \textit{exact augmented Lagrangian function} \cite{DiPilloGrippo1979} in 1979. This class of augmented
Lagrangian functions was studied and applied to various optimization problems in
\cite{DiPilloGrippo1980,DiPilloGrippo1982,Lucidi1988,DiPilloLucidiPalagi1993,DiPilloLucidi1996,DiPilloLucidi2001,
DiPilloEtAl2002,DiPilloLiuzzi2003,DuZhangGao2006,DuLiangZhang2006,LuoWuLiu2013,DiPilloLiuzzi2011,FukudaLourenco},
while a general theory of globally exact augmented Lagrangian functions was developed by the author in
\cite{Dolgopolik_GSP}. The theory of nonlinear exact penalty functions was developed by Rubinov and his colleagues
\cite{RubinovGloverYang1999,RubinovYangBagirov2002,RubinovGasimov2003,RubinovYang2003,YangHuang_NonlinearPenalty2003}
in the late 1990s and the early 2000s. Finally, a new class of exact penalty functions was introduced by Huyer and
Neumaier \cite{HuyerNeumaier} in 2003. Later on, this class of penalty functions was studied by many researchers, and
applied to various optimization problems, including optimal control problems
\cite{Bingzhuang,WangMaZhou,LiYu,MaLiYiu,LinWuYu,JianLin,LinLoxton,MaZhang2015,ZhengZhang2015,Dolgopolik_OptLet,
Dolgopolik_OptLet2,DolgopolikMV_UT_2}.
It should be noted that the problem of the existence of \textit{global saddle points} of augmented Lagrangian functions
is closely related to the exactness property of these functions. This problem was studied for general cone constrained
optimization problems in \cite{ShapiroSun,ZhouZhouYang2014}, for mathematical programming problems in
\cite{LiuYang,WangLi2009,LiuTangYang2009,LuoMastroeniWu2010,ZhouXiuWang,WuLuo2012b,WangZhouXu2009,SunLi,WangLiuQu}, for
nonlinear second order cone programming problems in \cite{ZhouChen2015}, for nonlinear semidefinite programming problems
in \cite{WuLuoYang2014,LuoWuLiu2015}, and for semi-infinite programming problems in
\cite{RuckmannShapiro,BurachikYangZhou2017}. A general theory of the existence of global saddle points of augmented
Lagrangian functions for cone constrained optimization problems was presented in \cite{Dolgopolik_GSP}. Finally, there
is also a problem of the existence of \textit{augmented Lagrange multipliers}, which can be viewed as the study of the
global exactness of Rockafellar-Wets' augmented Lagrangian function. Various results on the existence of augmented
Lagrange multipliers were obtained in
\cite{ShapiroSun,ZhouZhouYang2014,Dolgopolik_ALRW,RuckmannShapiro,KanSong,KanSong2,BurachikYangZhou2017}.
The analysis of the proofs of the main results of the aforementioned papers indicates that the underlying ideas of
these papers largely overlap. Our main goal is to unveil the core idea behind these results, and to present a general theory
of the global exactness of penalty and augmented Lagrangian functions for finite dimensional constrained optimization
problems that can be applied to all existing penalty and augmented Lagrangian functions. The central result of our
theory is the so-called \textit{localization principle}. This principle allows one to reduce the study of the global
exactness of a given merit function to a local analysis of the behaviour of this function near globally optimal
solutions of the original constrained problem. In turn, such local analysis can be usually performed with the use of
sufficient optimality conditions and/or constraint qualifications. Thus, the localization principle furnishes one with a
simple technique for proving the global exactness of almost any merit function with the use of the standard tools of
constrained optimization (namely, constraint qualifications and optimality conditions). The localization principle was
first derived by the author for linear penalty functions in \cite{Dolgopolik_UT}, and was further extended to other
penalty and augmented Lagrangian functions in \cite{Dolgopolik_GSP,DolgopolikMV_UT_2,Dolgopolik_ALRW}.
In order to include almost all imaginable penalty and augmented Lagrangian functions into the general theory, we
introduce and study the concept of global exactness for an arbitrary function depending on the primal variables, the
penalty parameter and some additional parameters, and do not impose any assumptions on the structure of this function.
Instead, natural assumptions on the behaviour of this function arise within the localization principle as necessary and
sufficient conditions for the global exactness.
It might seem natural to adopt the approach of the image space analysis
\cite{Giannessi_book,Giannessi2007,Mastroeni2012,LiFengZhang2013,ZhuLi2014,ZhuLi2014_2,XuLi2014} for the study of global
exactness. However, the definition of separation function from the image space analysis imposes some assumptions on the
structure of admissible penalty/aug\-mented Lagrangian functions, which create some unnecessary restrictions. In
contrast, our approach to the global exactness avoids any such assumptions.
Finally, let us note that there are several possible ways to introduce the concept of the global exactness of a merit
function. Each part of this two-part study is devoted to the analysis of one of the possible approaches to the
definition of global exactness. In this paper we study the so-called global \textit{parametric} exactness, which
naturally arises during the study of various exact penalty functions and augmented Lagrange multipliers.
The paper is organized as follows. In Section~\ref{Sect_ParametricExactness} we introduce the definition of global
parametric exactness and derive the localization principle in the parametric form. This version of localization
principle is applied to the study of the global exactness of several penalty and augmented Lagrangian functions in
Section~\ref{Sect_Appl_ParametricExactness}. In particular, in this section we recover existing necessary and sufficient
conditions for the global exactness of linear penalty functions, and for the existence of augmented Lagrange multipliers.
We also obtain completely new necessary and sufficient conditions for the global exactness of a continuously
differentiable penalty function for nonlinear second-order cone programming problems, and briefly discuss how one can
define a globally exact continuously differentiable penalty function for nonlinear semidefinite programming problems.
Necessary preliminary results are given in Section~\ref{Sect_Preliminaries}.
\section{Preliminaries}
\label{Sect_Preliminaries}
Let $X$ be a finite dimensional normed space, and $M, A \subset X$ be nonempty sets. Throughout this article, we study
the following optimization problem
$$
\min f(x) \quad \text{subject to} \quad x \in M, \quad x \in A,
\eqno{(\mathcal{P})}
$$
where $f\colon X \to \mathbb{R} \cup \{ + \infty \}$ is a given function. Denote by $\Omega = M \cap A$ the set of
feasible points of this problem. From this point onwards, we suppose that there exists $x \in \Omega$ such that
$f(x) < + \infty$, and that there exists a globally optimal solution of $(\mathcal{P})$.
Our aim is to somehow ``get rid'' of the constraint $x \in M$ in the problem $(\mathcal{P})$ with the use of an
auxiliary function $F(\cdot)$. Namely, we want to construct an auxiliary function $F(\cdot)$ such that globally optimal
solutions of the problem $(\mathcal{P})$ can be easily recovered from points of global minimum of $F(\cdot)$ on the
set $A$. To be more precise, our aim is to develop a general theory of such auxiliary functions.
\begin{remark}
It should be underlined that only the constraint $x \in M$ is incorporated into an auxiliary function $F(\cdot)$, while
the constraint $x \in A$ must be taken into account explicitly. Usually, the set $A$ represents ``simple'' constraints
such as bound or linear ones. Alternatively, one can utilize one auxiliary function in order to ``get rid'' of one kind
of constraint, and then utilize a different type of auxiliary function in order to ``get rid'' of another kind of
constraint. Overall, the division of the constraints into the main ones ($x \in M$) and the additional ones
($x \in A$) gives one more flexibility in the choice of the tools for solving constrained optimization problems.
\end{remark}
Let $\Lambda$ be a nonempty set of parameters that are denoted by $\lambda$, and let $c > 0$ be
\textit{the penalty parameter}. Hereinafter, we suppose that a function
$F \colon X \times \Lambda \times (0, + \infty) \to \mathbb{R} \cup \{ + \infty \}$, $F = F(x, \lambda, c)$, is given.
A connection between this function and the problem $(\mathcal{P})$ is specified below.
The function $F$ can be, for instance, a penalty function, in which case the parameter $\lambda$ is redundant
(formally, $\Lambda$ is a one-point set), or an augmented Lagrangian function with $\lambda$ being a Lagrange
multiplier. However, in order not to restrict ourselves to any specific case,
we call $F(x, \lambda, c)$ \textit{a separating function} for the problem $(\mathcal{P})$.
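For orientation, we mention two standard instances, given here only as illustrations (the general theory imposes no
such structure). For $M = \{ x \in X \mid g(x) = 0 \}$ with a map $g \colon X \to \mathbb{R}^m$, the linear penalty
function $F(x, c) = f(x) + c \| g(x) \|$ fits this format with $\Lambda$ being a one-point set, while the
Hestenes-Powell-Rockafellar augmented Lagrangian
$$
F(x, \lambda, c) = f(x) + \langle \lambda, g(x) \rangle + \frac{c}{2} \| g(x) \|^2,
\quad \lambda \in \Lambda = \mathbb{R}^m,
$$
fits it with $\lambda$ playing the role of a Lagrange multiplier.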
\begin{remark}
The motivation behind the term ``separating function'' comes from a geometric interpretation of many penalty and
augmented Lagrangian functions as nonlinear functions separating some nonconvex sets. This point of view on penalty and
augmented Lagrangian functions is systematically utilized within the image space analysis
\cite{Giannessi_book,Giannessi2007,Mastroeni2012,LiFengZhang2013,LuoWuLiu2013,ZhuLi2014,ZhuLi2014_2,XuLi2014}.
\end{remark}
\begin{remark}
Let us note that since we consider only separating functions depending on the penalty parameter $c > 0$, the so-called
\textit{objective penalty functions} (see, e.g., \cite{EvtushenkoRubinovZhadanII,MengHuDang2004,MengDangJiang2013})
cannot be considered within our theory.
\end{remark}
\section{A General Theory of Parametric Exactness}
\label{Sect_ParametricExactness}
In the first part of our study, we consider the simplest case when one minimizes the function $F(x, \lambda, c)$ with
respect to $x$, and views $\lambda$ as a tuning parameter. Let us introduce the formal definition of exactness of the
function $F(x, \lambda, c)$ in this case.
\begin{definition}
The separating function $F(x, \lambda, c)$ is said to be \textit{globally parametrically exact} iff there exist
$\lambda^* \in \Lambda$ and $c^* > 0$ such that for any $c \ge c^*$ one has
$$
\argmin_{x \in A} F(x, \lambda^*, c) = \argmin_{x \in \Omega} f(x).
$$
The greatest lower bound of all such $c^* > 0$ is called \textit{the least exact penalty parameter} of the function
$F(x, \lambda^*, c)$, and is denoted by $c^*(\lambda^*)$, while $\lambda^*$ is called \textit{an exact tuning
parameter}.
\end{definition}
Thus, if $F(x, \lambda, c)$ is globally parametrically exact and an exact tuning parameter $\lambda^*$ is known, then
one can choose sufficiently large $c > 0$ and minimize the function $F(\cdot, \lambda^*, c)$ over the set $A$ in order
to find globally optimal solutions of the problem $(\mathcal{P})$. In other words, if the function $F(x, \lambda, c)$ is
globally exact, then one can remove the constraint $x \in M$ with the use of the function $F(x, \lambda, c)$ without
losing any information about globally optimal solutions of the problem $(\mathcal{P})$.
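Let us give a minimal illustration (a toy example, included only to fix ideas). Take $X = A = \mathbb{R}$, $f(x) = x$,
$M = [0, + \infty)$, let $\Lambda$ be a one-point set (and omit it from the notation), and define
$$
F(x, c) = x + c \max\{ 0, - x \}.
$$
Then $\Omega^* = \{ 0 \}$, and for any $c \ge 2$ one has $F(x, c) = (1 - c) x > 0 = F(0, c)$ for all $x < 0$ and
$F(x, c) = x > 0$ for all $x > 0$, i.e. $\argmin_{x \in A} F(x, c) = \Omega^*$. Thus $F(x, c)$ is globally
parametrically exact, and its least exact penalty parameter is equal to $1$.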
Our main goal is to demonstrate that the study of the global parametric exactness of
the separating function $F(x, \lambda, c)$ can be easily reduced to the study of a local behaviour of $F(x, \lambda, c)$
near globally optimal solutions of the problem $(\mathcal{P})$. This reduction procedure is called \textit{the
localization principle}.
At first, let us describe a desired local behaviour of the function $F(x, \lambda, c)$ near optimal solutions.
\begin{definition}
Let $x^*$ be a locally optimal solution of the problem $(\mathcal{P})$. The separating function $F(x, \lambda, c)$ is
called \textit{locally parametrically exact} at $x^*$ iff there exist $\lambda^* \in \Lambda$, $c^* > 0$ and a
neighbourhood $U$ of $x^*$ such that for any $c \ge c^*$ one has
$$
F(x, \lambda^*, c) \ge F(x^*, \lambda^*, c) \quad \forall x \in U \cap A.
$$
The greatest lower bound of all such $c^* > 0$ is called \textit{the least exact penalty parameter} of the function
$F(x, \lambda^*, c)$ at $x^*$, and is denoted by $c^*(x^*, \lambda^*)$, while $\lambda^*$ is called \textit{an exact
tuning parameter} at $x^*$.
\end{definition}
Thus, $F(x, \lambda, c)$ is locally parametrically exact at a point $x^*$ with an exact tuning parameter $\lambda^*$ iff
there exists $c^* > 0$ such that $x^*$ is a local (uniformly with respect to $c \in [c^*, + \infty)$) minimizer of the
function $F(\cdot, \lambda^*, c)$ on the set $A$. Observe also that if the function $F(x, \lambda, c)$ is nondecreasing
in $c$, then $F(x, \lambda, c)$ is locally parametrically exact at $x^*$ with an exact tuning parameter $\lambda^*$ iff
there exists $c^*$ such that $x^*$ is a local minimizer of $F(\cdot, \lambda^*, c^*)$ on the set $A$.
Recall that $c > 0$ in $F(x, \lambda, c)$ is called \textit{the penalty parameter}; however, a connection of
the parameter $c$ with penalization is unclear from the definition of the function $F(x, \lambda, c)$. We need the
following definition in order to clarify this connection.
\begin{definition}
Let $\lambda^* \in \Lambda$ be fixed. One says that $F(x, \lambda, c)$ is a \textit{penalty-type} separating function
for $\lambda = \lambda^*$ iff there exists $c_0 > 0$ such that if
\begin{enumerate}
\item{$\{ c_n \} \subset [c_0, + \infty)$ is an increasing unbounded sequence;
}
\item{$x_n \in \argmin_{x \in A} F(x, \lambda^*, c_n)$, $n \in \mathbb{N}$;
}
\item{$x^*$ is a cluster point of the sequence $\{ x_n \}$,
}
\end{enumerate}
then $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$.
\end{definition}
Roughly speaking, $F(x, \lambda, c)$ is a penalty-type separating function for $\lambda = \lambda^*$ iff global
minimizers of $F(\cdot, \lambda^*, c)$ on the set $A$ tend to globally optimal solutions of the problem $(\mathcal{P})$
as $c \to + \infty$. Thus, if the separating function $F(x, \lambda, c)$ is of penalty-type, then $c$ plays the role of
penalty parameter, since an increase of $c$ forces global minimizers of $F(\cdot, \lambda^*, c)$ to get closer to
the feasible set of the problem $(\mathcal{P})$.
Note that if the function $F(\cdot, \lambda^*, c)$ does not attain a global minimum on the set $A$ for any $c$ greater
than some $c_0 > 0$, then, formally, $F(x, \lambda, c)$ is a penalty-type separating function for $\lambda = \lambda^*$.
Similarly, if all sequences $\{ x_n \}$, such that $x_n \in \argmin_{x \in A} F(x, \lambda^*, c_n)$, $n \in \mathbb{N}$
and $c_n \to + \infty$ as $n \to \infty$, do not have cluster points, then $F(x, \lambda, c)$ is a penalty-type
separating function for $\lambda = \lambda^*$, as well. Therefore we need an additional definition that allows one to
exclude such pathological behaviour of the function $F(x, \lambda, c)$ as $c \to \infty$ (see~\cite{Dolgopolik_UT},
Sections 3.2--3.4, for the motivation behind this definition).
Recall that $A$ is a subset of a finite dimensional normed space $X$.
\begin{definition}
Let $\lambda^* \in \Lambda$ be fixed. The separating function $F(x, \lambda, c)$ is said to be \textit{non-degenerate}
for $\lambda = \lambda^*$ iff there exist $c_0 > 0$ and $R > 0$ such that for any $c \ge c_0$ the function
$F(\cdot, \lambda^*, c)$ attains a global minimum on the set $A$, and there
exists $x(c) \in \argmin_{x \in A} F(x, \lambda^*, c)$ such that $\| x(c) \| \le R$.
\end{definition}
Roughly speaking, the non-degeneracy condition does not allow global minimizers of $F(\cdot, \lambda^*, c)$ on the set
$A$ to escape to infinity as $c \to \infty$. Note that if the set $A$ is bounded, then $F(x, \lambda, c)$ is
non-degenerate for $\lambda = \lambda^*$ iff the function $F(\cdot, \lambda^*, c)$ attains a global minimum on the set
$A$ for any $c$ large enough.
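To illustrate the kind of pathology that the non-degeneracy condition excludes, consider the following toy example of
ours: let $X = A = \mathbb{R}$, $M = \{ 0 \}$, let $\Lambda$ be a one-point set (omitted from the notation), and define
$$
F(x, c) = f(x) + c \varphi(x), \quad f(x) = - 2 + \frac{1}{1 + x^2}, \quad \varphi(x) = \frac{x^2}{1 + x^4}.
$$
Here $\Omega^* = \{ 0 \}$ and $f^* = - 1$. A direct computation shows that $F(x, c) \ge F(0, c)$ for all $|x| \le 1$
and $c \ge 2$, so $F(x, c)$ is locally parametrically exact at the unique globally optimal solution; on the other
hand, $F(x, c) > - 2$ for all $x \in \mathbb{R}$, while $F(x, c) \to - 2$ as $|x| \to \infty$. Hence $F(\cdot, c)$
never attains a global minimum on $A$, so $F(x, c)$ is (vacuously) of penalty-type but degenerate, and it is obviously
not globally exact. This is the reason why the non-degeneracy assumption cannot be dropped from the theorem below.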
Now, we are ready to formulate and prove the localization principle. Recall that $\Omega$ is the feasible set of the
problem $(\mathcal{P})$. Denote by $\Omega^*$ the set of globally optimal solutions of this problem.
\begin{theorem}[Localization Principle in the Parametric Form I] \label{Thrm_LocPrin_Param}
Suppose that the validity of the condition
\begin{equation} \label{InclImpliesExactness}
\Omega^* \cap \argmin_{x \in A} F(x, \lambda^*, c) \ne \emptyset
\end{equation}
for some $\lambda^* \in \Lambda$ and $c > 0$ implies that $F(x, \lambda, c)$ is globally parametrically exact with the
exact tuning parameter $\lambda^*$. Let also $\Omega$ be closed, and $f$ be l.s.c. on $\Omega$. Then the separating
function $F(x, \lambda, c)$ is globally parametrically exact if and only if there exists $\lambda^* \in \Lambda$ such
that
\begin{enumerate}
\item{$F(x, \lambda, c)$ is of penalty-type and non-degenerate for $\lambda = \lambda^*$;
}
\item{$F(x, \lambda, c)$ is locally parametrically exact with the exact tuning parameter $\lambda^*$ at every globally
optimal solution of the problem $(\mathcal{P})$.
}
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose that $F(x, \lambda, c)$ is globally parametrically exact with an exact tuning parameter $\lambda^*$. Then
for any $c > c^*(\lambda^*)$ one has
$$
\argmin_{x \in A} F(x, \lambda^*, c) = \Omega^*.
$$
In other words, for any $c > c^*(\lambda^*)$ every globally optimal solution $x^*$ of the problem $(\mathcal{P})$ is a
global (and hence local uniformly with respect to $c \in (c^*(\lambda^*), + \infty)$) minimizer of
$F(\cdot, \lambda^*, c)$ on the set $A$. Thus, $F(x, \lambda, c)$ is locally parametrically exact with the exact tuning
parameter $\lambda^*$ at every globally optimal solution of the problem $(\mathcal{P})$.
Fix arbitrary $x^* \in \Omega^*$. Then for any $c > c^*(\lambda^*)$ the point $x^*$ is a global minimizer of
$F(\cdot, \lambda^*, c)$, which implies that $F(x, \lambda, c)$ is non-degenerate for $\lambda = \lambda^*$.
Furthermore, if a sequence $\{ x_n \} \subset A$ is such that $x_n \in \argmin_{x \in A} F(x, \lambda^*, c_n)$ for all
$n \in \mathbb{N}$, where $c_n \to + \infty$ as $n \to \infty$, then due to the global exactness of $F$ one has that for all $n$
large enough the point $x_n$ coincides with one of the globally optimal solutions of $(\mathcal{P})$, which implies that
$x_n \in \Omega$, and $f(x_n) = \min_{x \in \Omega} f(x)$. Hence applying the facts that $\Omega$ is closed and $f$ is
l.s.c. on $\Omega$ one can easily verify that a cluster point of the sequence $\{ x_n \}$, if it exists, is a globally
optimal solution of $(\mathcal{P})$. Thus, $F(x, \lambda, c)$ is a penalty-type separating function
for $\lambda = \lambda^*$.
Let us prove the converse statement. Our aim is to verify that there exist $c > 0$ and $x^* \in \Omega^*$ such that
\begin{equation} \label{EqualityImplExactness}
\inf_{x \in A} F(x, \lambda^*, c) = F(x^*, \lambda^*, c).
\end{equation}
Then taking into account condition \eqref{InclImpliesExactness} one obtains that the separating function
$F(x, \lambda, c)$ is globally parametrically exact. Arguing by reductio ad absurdum, suppose that
\eqref{EqualityImplExactness} is not valid. Then, in particular, for any $n \in \mathbb{N}$ one has
\begin{equation} \label{ParamExactAdAbsurdum}
\inf_{x \in A} F(x, \lambda^*, n) < F(x^*, \lambda^*, n) \quad \forall x^* \in \Omega^*.
\end{equation}
By condition~1, the function $F(x, \lambda, c)$ is non-degenerate for $\lambda = \lambda^*$. Therefore there exist
$n_0 \in \mathbb{N}$ and $R > 0$ such that for any $n \ge n_0$ there
exists $x_n \in \argmin_{x \in A} F(x, \lambda^*, n)$ with $\| x_n \| \le R$.
Recall that $X$ is a finite dimensional normed space. Therefore there exists a subsequence $\{ x_{n_k} \}$ converging to
some $x^*$. Consequently, applying the fact that $F(x, \lambda, c)$ is a penalty-type separating function
for $\lambda = \lambda^*$ one obtains that $x^* \in \Omega^*$. By condition~2, $F(x, \lambda, c)$ is locally
parametrically exact at $x^*$ with the exact tuning parameter $\lambda^*$. Therefore there exist $c_0 > 0$ and a
neighbourhood $U$ of $x^*$ such that for any $c \ge c_0$ one has
\begin{equation} \label{LocalParamExactness}
F(x, \lambda^*, c) \ge F(x^*, \lambda^*, c) \quad \forall x \in U \cap A.
\end{equation}
Since the subsequence $\{ x_{n_k} \}$ converges to $x^*$, there exists $k_0$ such that for any $k \ge k_0$ one has
$x_{n_k} \in U$. Moreover, one can suppose that $n_k \ge c_0$ for all $k \ge k_0$. Hence with the use of
\eqref{LocalParamExactness} one obtains that
$$
F(x_{n_k}, \lambda^*, n_k) \ge F(x^*, \lambda^*, n_k),
$$
which contradicts \eqref{ParamExactAdAbsurdum} and the fact that $x_{n_k} \in \argmin_{x \in A} F(x, \lambda^*, n_k)$.
Thus, $F(x, \lambda, c)$ is globally parametrically exact.
\end{proof}
\begin{remark} \label{Rmrk_LocPrincipleParamForm}
{(i)~Condition \eqref{InclImpliesExactness} simply means that in order to prove the global
parametric exactness of $F(x, \lambda, c)$ it is sufficient to check that at least one globally optimal solution of
the problem $(\mathcal{P})$ is a point of global minimum of the function $F(\cdot, \lambda^*, c)$ instead of verifying
that the sets $\Omega^*$ and $\argmin_{x \in A} F(x, \lambda^*, c)$ actually coincide. It should be pointed out that in
most particular cases the validity of condition \eqref{InclImpliesExactness} is equivalent to global parametric
exactness. In fact, the equivalence between \eqref{InclImpliesExactness} and global parametric exactness automatically,
i.e. without any additional assumptions, holds true in all but one example (see subsection~\ref{Example_ALRW} below)
presented in this article. Note, finally, that condition \eqref{InclImpliesExactness} is needed only to prove the ``if''
part of the theorem.
}
\noindent{(ii)~The theorem above describes how to construct a globally exact separating function $F(x, \lambda, c)$.
Namely, one has to ensure that a chosen function $F(x, \lambda, c)$ is of penalty-type (which can be guaranteed by
adding a penalty term to the function $F(x, \lambda, c)$), non-degenerate (which can usually be guaranteed by the
introduction of a barrier term into the function $F(x, \lambda, c)$) and is locally exact near all globally optimal
solutions of the problem $(\mathcal{P})$, which is typically done with the use of constraint qualifications (metric
(sub-)regularity assumptions) and/or sufficient optimality conditions. Below, we present several particular examples
illustrating the usage of the previous theorem.
}
\noindent{(iii)~Note that the previous theorem can be reformulated as a theorem describing necessary and sufficient
conditions for a tuning parameter $\lambda^* \in \Lambda$ to be exact. It should also be mentioned that the theorem
above can be utilized in order to obtain necessary and/or sufficient conditions for the uniqueness of an exact tuning
parameter. In particular, it is easy to see that a globally exact tuning parameter $\lambda^*$ is unique if there
exists $x^* \in \Omega^*$ such that a locally exact tuning parameter at $x^*$ is unique.
}
\end{remark}
The theorem above can be vaguely formulated as follows. The separating function $F(x, \lambda, c)$ is globally
parametrically exact iff it is of penalty-type, non-degenerate and locally exact at every globally optimal solution
of the problem $(\mathcal{P})$. Thus, under natural assumptions the function $F(x, \lambda, c)$ is globally exact iff
it is exact near globally optimal solutions of the original problem. That is why Theorem~\ref{Thrm_LocPrin_Param} is
called \textit{the localization principle}.
Let us reformulate the localization principle in the form that is slightly more convenient for applications.
\begin{theorem}[Localization Principle in the Parametric Form II] \label{Thrm_LocPrin_Param_SublevelSets}
Suppose that the validity of the condition
$$
\Omega^* \cap \argmin_{x \in A} F(x, \lambda^*, c) \ne \emptyset
$$
for some $\lambda^* \in \Lambda$ and $c > 0$ implies that $F(x, \lambda, c)$ is globally parametrically exact with the
exact tuning parameter $\lambda^*$. Let also the sets $A$ and $\Omega$ be closed, the objective function $f$ be l.s.c.
on $\Omega$, and the function $F(\cdot, \lambda, c)$ be l.s.c. on $A$ for all $\lambda \in \Lambda$ and $c > 0$. Then
the separating function $F(x, \lambda, c)$ is globally parametrically exact if and only if there
exists $\lambda^* \in \Lambda$ such that
\begin{enumerate}
\item{$F(x, \lambda, c)$ is of penalty-type for $\lambda = \lambda^*$;
}
\item{there exist $c_0 > 0$, $x^* \in \Omega^*$ and a bounded set $K \subset A$ such that
\begin{equation} \label{SubleveBoundedness_Param}
S(c, x^*) := \Big\{ x \in A \mid F(x, \lambda^*, c) < F(x^*, \lambda^*, c) \Big\} \subset K \quad \forall c \ge c_0;
\end{equation}
}
\item{$F(x, \lambda, c)$ is locally parametrically exact at every globally optimal solution of the problem
$(\mathcal{P})$ with the exact tuning parameter $\lambda^*$.
}
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose that $F(x, \lambda, c)$ is globally parametrically exact with the exact tuning parameter $\lambda^*$. Then, as
it was proved in Theorem~\ref{Thrm_LocPrin_Param}, $F(x, \lambda, c)$ is a penalty-type separating function
for $\lambda = \lambda^*$, and $F(x, \lambda, c)$ is locally parametrically exact with the exact tuning parameter
$\lambda^*$ at every globally optimal solution of the problem $(\mathcal{P})$. Furthermore, from the definition of
global exactness it follows that $S(c, x^*) = \emptyset$ for all $c > c^*(\lambda^*)$ and $x^* \in \Omega^*$, which
implies that \eqref{SubleveBoundedness_Param} is satisfied for all $c_0 > c^*(\lambda^*)$, $x^* \in \Omega^*$ and any
bounded set $K$.
Let us prove the converse statement. By our assumption there exist $c_0 > 0$ and $x^* \in \Omega^*$ such that for all
$c \ge c_0$ the sublevel set $S(c, x^*)$ is contained in a bounded set $K$ and, thus, is bounded. Therefore taking into
account the facts that the function $F(\cdot, \lambda^*, c)$ is l.s.c. on $A$, and the set $A$ is closed one obtains
that $F(\cdot, \lambda^*, c)$ attains a global minimum on the set $A$ at a point $x(c) \in K$
(if $S(c, x^*) = \emptyset$ for some $c \ge c_0$, then $x(c) = x^*$). From the fact that $K$ is bounded it follows that
there exists $R > 0$ such that $\| x(c) \| \le R$ for all $c \ge c_0$, which implies that $F(x, \lambda, c)$ is
non-degenerate for $\lambda = \lambda^*$. Consequently, applying Theorem~\ref{Thrm_LocPrin_Param} one obtains the
desired result.
\end{proof}
Note that the definition of global parametric exactness does not specify how the optimal value of the problem
$(\mathcal{P})$ and the infimum of the function $F(\cdot, \lambda^*, c)$ over the set $A$ are connected. In some
particular cases (see subsection~\ref{Example_ALRW} below), this fact might significantly complicate the application of
the localization principle. Therefore, let us show how one can incorporate the assumption on the value of
$\inf_{x \in A} F(x, \lambda^*, c)$ into the localization principle.
\begin{definition}
The separating function $F(x, \lambda, c)$ is said to be \textit{strictly globally parametrically exact} if
$F(x, \lambda, c)$ is globally parametrically exact, and there exists $c_0 > 0$ such that
\begin{equation} \label{StrictParamExactness}
\inf_{x \in A} F(x, \lambda^*, c) = f^* \quad \forall c \ge c_0,
\end{equation}
where $\lambda^*$ is an exact tuning parameter, and $f^* = \inf_{x \in \Omega} f(x)$ is the optimal value of
the problem $(\mathcal{P})$. An exact tuning parameter satisfying \eqref{StrictParamExactness} is called
\textit{strictly exact}.
\end{definition}
Arguing in a similar way to the proofs of Theorems~\ref{Thrm_LocPrin_Param} and \ref{Thrm_LocPrin_Param_SublevelSets}
one can easily extend the localization principle to the case of strict exactness.
\begin{theorem}[Strengthened Localization Principle in the Parametric Form I] \label{Thrm_LocPrin_Param_Strict}
Suppose that the validity of the conditions
\begin{equation} \label{InclImpliesStrictExactness}
\Omega^* \cap \argmin_{x \in A} F(x, \lambda^*, c) \ne \emptyset, \quad \min_{x \in A} F(x, \lambda^*, c) = f^*
\end{equation}
for some $\lambda^* \in \Lambda$ and $c > 0$ implies that $F(x, \lambda, c)$ is strictly globally parametrically exact
with $\lambda^*$ being a strictly exact tuning parameter. Let also $\Omega$ be closed, and $f$ be l.s.c. on $\Omega$.
Then the separating function $F(x, \lambda, c)$ is strictly globally parametrically exact if and only if there exists
$\lambda^* \in \Lambda$ such that
\begin{enumerate}
\item{$F(x, \lambda, c)$ is of penalty-type and non-degenerate for $\lambda = \lambda^*$;}
\item{$F(x, \lambda, c)$ is locally parametrically exact at every globally optimal solution of the problem
$(\mathcal{P})$ with the exact tuning parameter $\lambda^*$;
}
\item{there exists $c_0 > 0$ such that $F(x^*, \lambda^*, c) = f^*$ for all $x^* \in \Omega^*$ and $c \ge c_0$.}
\end{enumerate}
\end{theorem}
\begin{theorem}[Strengthened Localization Principle in the Parametric Form II]
\label{Thrm_LocPrin_Param_Strict_Sublevel}
Suppose that the validity of the conditions
$$
\Omega^* \cap \argmin_{x \in A} F(x, \lambda^*, c) \ne \emptyset, \quad \min_{x \in A} F(x, \lambda^*, c) = f^*
$$
for some $\lambda^* \in \Lambda$ and $c > 0$ implies that $F(x, \lambda, c)$ is strictly globally parametrically exact
with $\lambda^*$ being a strictly exact tuning parameter. Let also the sets $A$ and $\Omega$ be closed, the objective
function $f$ be l.s.c. on $\Omega$, and the function $F(\cdot, \lambda, c)$ be l.s.c. on $A$ for
all $\lambda \in \Lambda$ and $c > 0$. Then the separating function $F(x, \lambda, c)$ is strictly globally
parametrically exact if and only if there exist $\lambda^* \in \Lambda$ and $c_0 > 0$ such that
\begin{enumerate}
\item{$F(x, \lambda, c)$ is of penalty-type for $\lambda = \lambda^*$;}
\item{there exists a bounded set $K$ such that
$$
\Big\{ x \in A \Bigm| F(x, \lambda^*, c) < f^* \Big\} \subset K \quad \forall c \ge c_0;
$$
}
\item{$F(x, \lambda, c)$ is locally parametrically exact with the exact tuning parameter $\lambda^*$ at every globally
optimal solution of the problem $(\mathcal{P})$;
}
\item{$F(x^*, \lambda^*, c) = f^*$ for all $x^* \in \Omega^*$ and $c \ge c_0$.}
\end{enumerate}
\end{theorem}
\section{Applications of the Localization Principle}
\label{Sect_Appl_ParametricExactness}
Below, we provide several examples demonstrating how one can apply the localization principle in the parametric form
to the study of the global exactness of various penalty and augmented Lagrangian functions.
\subsection{Example I: Linear Penalty Functions}
We start with the simplest case when the function $F(x, \lambda, c)$ is affine with respect to the penalty parameter $c$
and does not depend on any additional parameters. Let a function $\varphi \colon X \to [0, +\infty]$ be such that
$\varphi(x) = 0$ iff $x \in M$. Define
$$
F(x, c) = f(x) + c \varphi(x).
$$
The function $F(x, c)$ is called \textit{a linear penalty function} for the problem $(\mathcal{P})$.
\begin{remark}
In order to rigorously include linear penalty functions (as well as nonlinear penalty functions from the
following two examples) into the theory of parametrically exact separating functions one has to define $\Lambda$ to be a
one-point set, say $\Lambda = \{ - 1 \}$, introduce a new separating function $\widehat{F}(x, -1, c) \equiv F(x, c)$,
and consider the separating function $\widehat{F}(x, \lambda, c)$ instead of the penalty function $F(x, c)$. However,
since this transformation is purely formal, we omit it for the sake of shortness. Moreover, since in the case of penalty
functions the parameter $\lambda$ is absent, it is natural to omit the term ``parametric'', and say that $F(x, c)$ is
globally/locally exact.
\end{remark}
Let us obtain two simple characterizations of the global exactness of the linear penalty function $F(x, c)$ with the use
of the localization principle (Theorems~\ref{Thrm_LocPrin_Param} and \ref{Thrm_LocPrin_Param_SublevelSets}). These
characterizations were first obtained by the author in (\cite{Dolgopolik_UT}, Theorems~3.10 and 3.17).
Before we formulate the main result, let us note that $F(x^*, c) = f^*$ for any globally optimal solution $x^*$ of the
problem $(\mathcal{P})$ and for all $c > 0$. Therefore, in particular, the linear penalty function $F(x, c)$ is
globally exact iff it is strictly globally exact.
\begin{theorem}[Localization Principle for Linear Penalty Functions]
Let $A$ and $\Omega$ be closed, and let $f$ and $\varphi$ be l.s.c. on $A$. Then the linear penalty function $F(x, c)$
is globally exact if and only if $F(x, c)$ is locally exact at every globally optimal solution of the problem
$(\mathcal{P})$ and one of the following two conditions is satisfied
\begin{enumerate}
\item{$F$ is non-degenerate;
}
\item{there exists $c_0 > 0$ such that the set $\{ x \in A \mid F(x, c_0) < f^* \}$ is bounded.
}
\end{enumerate}
\end{theorem}
\begin{proof}
Note that $F(x^*, c) = f(x^*) = f^*$ for any $x^* \in \Omega^*$ and $c > 0$. Therefore
$$
\Omega^* \cap \argmin_{x \in A} F(x, c) \ne \emptyset \implies \Omega^* \subset \argmin_{x \in A} F(x, c).
$$
Note also that if $x \notin M$, then either $F(x, c)$ is strictly increasing in $c$ or $F(x, c) = + \infty$ for all
$c > 0$. On the other hand, if $x \in M$, then $F(x, c) = f(x)$. Consequently, if for some $c_0 > 0$ one has
$\Omega^* \subset \argmin_{x \in A} F(x, c_0)$, then for any $c > c_0$ one has $\Omega^* = \argmin_{x \in A} F(x, c)$.
Thus, the validity of the condition $\Omega^* \cap \argmin_{x \in A} F(x, c) \ne \emptyset$ for some $c > 0$ implies
the global exactness of $F(x, c)$.
Our aim, now, is to verify that $F$ is a penalty-type separating function. Then applying
Theorems~\ref{Thrm_LocPrin_Param} and \ref{Thrm_LocPrin_Param_SublevelSets} one obtains the desired result.
Indeed, let $\{ c_n \} \subset (0, + \infty)$ be an increasing unbounded sequence, $x_n \in \argmin_{x \in A} F(x, c_n)$
for all $n \in \mathbb{N}$, and let $x^*$ be a cluster point of the sequence $\{ x_n \}$. By \cite{Dolgopolik_UT},
Proposition~3.5, one has $\varphi(x_n) \to 0$ as $n \to \infty$. Hence taking into account the facts that $A$ is closed
and $\varphi$ is l.s.c. on $A$ one gets that $x^* \in A$ and $\varphi(x^*) = 0$. Therefore $x^*$ is a feasible point of
the problem $(\mathcal{P})$.
As it was noted above, for any $y^* \in \Omega^*$ one has $F(y^*, c) = f(y^*)$ for all $c > 0$. Hence taking into
account the definition of $x_n$ and the fact that the function $\varphi$ is non-negative one gets
that $f(x_n) \le f(y^*)$ for all $n \in \mathbb{N}$. Consequently, with the use of the lower semicontinuity of $f$ one
obtains that $f(x^*) \le f(y^*)$, which implies that $x^*$ is a globally optimal solution of the problem
$(\mathcal{P})$. Thus, $F(x, c)$ is a penalty-type separating function.
\end{proof}
Let us also give a different formulation of the localization principle for linear penalty functions in which the
non-degeneracy condition is replaced by some more widely used conditions.
\begin{corollary}
Let $A$ and $\Omega$ be closed, and let $f$ and $\varphi$ be l.s.c. on $A$. Suppose also that one of the following
conditions is satisfied:
\begin{enumerate}
\item{the set $\{ x \in A \mid f(x) < f^* \}$ is bounded;
}
\item{there exist $c_0 > 0$ and $\delta > 0$ such that the function $F(\cdot, c_0)$ is bounded from below on $A$ and
the set $\{ x \in A \mid f(x) < f^*, \: \varphi(x) < \delta \}$ is bounded;
}
\item{there exist $c_0 > 0$ and a feasible point $x_0$ of the problem $(\mathcal{P})$ such that the set
$\{ x \in A \mid F(x, c_0) \le f(x_0) \}$ is bounded;
}
\item{the function $f$ is coercive on the set $A$, i.e. $f(x_n) \to + \infty$ as $n \to \infty$ for any sequence
$\{ x_n \} \subset A$ such that $\| x_n \| \to + \infty$ as $n \to \infty$;
}
\item{there exists $c_0 > 0$ such that the function $F(\cdot, c_0)$ is coercive on the set $A$;
}
\item{the function $\varphi$ is coercive on the set $A$ and there exists $c_0 > 0$ such that the function
$F(\cdot, c_0)$ is bounded from below on $A$.
}
\end{enumerate}
Then $F(x, c)$ is globally exact if and only if it is locally exact at every globally optimal solution of the problem
$(\mathcal{P})$.
\end{corollary}
\begin{proof}
One can easily verify that if one of the above assumptions holds true, then the set $\{ x \in A \mid F(x, c_0) < f^* \}$
is bounded for some $c_0 > 0$. Then applying the localization principle for linear penalty functions one obtains the
desired result.
\end{proof}
\begin{remark}
The corollary above provides an example of how one can reformulate the localization principle in a particular case
with the use of some well-known and widely used conditions such as coercivity or the boundedness of a certain sublevel
set. For the sake of shortness, we do not provide such reformulations of the localization principle for particular
separating function $F(x, \lambda, c)$ studied below. However, let us underline that one can easily reformulate the
localization principle with the use of coercivity-type assumptions in any particular case.
\end{remark}
For the sake of completeness, let us also formulate simple sufficient conditions for the local exactness of the
function $F$. These conditions are well-known (see, e.g. \cite{Dolgopolik_UT}, Theorem~2.4 and Proposition~2.7) and
rely on an error bound for the penalty term $\varphi$.
\begin{proposition}
Let $x^*$ be a locally optimal solution of the problem $(\mathcal{P})$, and $f$ be H\"{o}lder continuous with exponent
$\alpha \in (0, 1]$ in a neighbourhood of $x^*$. Suppose also that there exist $\tau > 0$ and $r > 0$ such that
$$
\varphi(x) \ge \tau \big[ \dist(x, \Omega) \big]^{\alpha} \quad \forall x \in A \colon \| x - x^* \| < r,
$$
where $\dist(x, \Omega) = \inf_{y \in \Omega} \| x - y \|$. Then the linear penalty function $F(x, c)$ is locally exact
at $x^*$.
\end{proposition}
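As a one-dimensional illustration of this proposition (a toy example of ours), let $X = A = \mathbb{R}$, $f(x) = - x$,
$M = (- \infty, 0]$ and $\varphi(x) = \max\{ 0, x \}$. Then $\Omega = (- \infty, 0]$ and
$\dist(x, \Omega) = \max\{ 0, x \}$, so the assumptions of the proposition hold with $\alpha = 1$, $\tau = 1$ and the
Lipschitz constant $L = 1$ for $f$. Accordingly, for any $c \ge 1$ one has
$F(x, c) = - x + c \max\{ 0, x \} \ge 0 = F(0, c)$ for all $x \in \mathbb{R}$, i.e. the linear penalty function is
locally (in fact, globally) exact at $x^* = 0$ with $c^*(x^*) \le 1$.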
\subsection{Example II: Nonlinear Penalty Functions}
Let a function $\varphi \colon X \to [0, +\infty]$ be as above. For the sake of convenience, suppose that the objective
function $f$ is non-negative on $X$. From the theoretical point of view this assumption is not restrictive, since one
can always replace the function $f$ with the function $e^{f(\cdot)}$. Furthermore, it should be noted that the
non-negativity assumption on the objective function $f$ is standard in the theory of nonlinear penalty functions
(cf.~\cite{RubinovGloverYang1999,RubinovYangBagirov2002,RubinovGasimov2003,RubinovYang2003,
YangHuang_NonlinearPenalty2003}).
Let a function $Q \colon [0, + \infty]^2 \to (- \infty, + \infty]$ be fixed. Suppose that the restriction of $Q$ to the
set $[0, + \infty)^2$ is strictly monotone, i.e. $Q(t_1, s_1) < Q(t_2, s_2)$ for any
$(t_1, s_1), (t_2, s_2) \in [0, + \infty)^2$ such that $t_1 \le t_2$, $s_1 \le s_2$ and $(t_1, s_1) \ne (t_2, s_2)$.
Suppose also that $Q(+ \infty, s) = Q(t, + \infty) = + \infty$ for all $t, s \in [0, + \infty]$.
Define
$$
F(x, c) = Q\big( f(x), c \varphi(x) \big).
$$
Then $F(x, c)$ is \textit{a nonlinear penalty function} for the problem $(\mathcal{P})$. This type of nonlinear penalty
functions was studied in
\cite{RubinovGloverYang1999,RubinovYangBagirov2002,RubinovGasimov2003,RubinovYang2003,YangHuang_NonlinearPenalty2003}.
The simplest particular example of a nonlinear penalty function is the function $F(x, c)$ of the form
\begin{equation} \label{NonlinearPenFunc}
F(x, c) = \Big( \big( f(x) \big)^q + \big( c \varphi(x) \big)^q \Big)^{\frac1q}
\end{equation}
with $q > 0$. Here
$$
Q(t, s) = \Big( t^q + s^q \Big)^{\frac1q}.
$$
Clearly, this function is strictly monotone. In this article, the function \eqref{NonlinearPenFunc} is called \textit{the}
$q$-\textit{th order nonlinear penalty function} for the problem $(\mathcal{P})$. Let us note that the least exact
penalty parameter of the $q$-th order nonlinear penalty function is often smaller than the least exact penalty parameter
of the linear penalty function $f(x) + c \varphi(x)$ (see~\cite{RubinovYangBagirov2002,RubinovYang2003} for more
details).
Let us obtain a \textit{new} simple characterization of global exactness of the nonlinear penalty function $F(x, c)$,
which does not rely on any assumptions on the perturbation function for the problem $(\mathcal{P})$
(cf.~\cite{RubinovYangBagirov2002,RubinovYang2003}). Furthermore, to the best of the author's knowledge, \textit{exact}
nonlinear penalty functions have only been considered for mathematical programming problems, while our results are
applicable in the general case.
\begin{theorem}[Localization Principle for Nonlinear Penalty Functions]
Let the set $A$ be closed, and the functions $f$, $\varphi$ and $F(\cdot, c)$ be l.s.c. on the set $A$. Suppose also
that $Q(0, s) \to + \infty$ as $s \to +\infty$. Then the nonlinear penalty function $F(x, c)$ is globally exact if and
only if it is locally exact at every globally optimal solution of the problem $(\mathcal{P})$ and one of the two
following assumptions is satisfied:
\begin{enumerate}
\item{the function $F(x, c)$ is non-degenerate;
}
\item{there exists $c_0 > 0$ such that the set $\{ x \in A \mid Q(f(x), c_0 \varphi(x)) < Q(f^*, 0) \}$ is bounded.
}
\end{enumerate}
\end{theorem}
\begin{proof}
From the fact that $Q$ is strictly monotone it follows that for any $c > 0$ one has
$$
F(x, c) = Q(f(x), c \varphi(x)) = Q(f(x), 0) > Q(f^*, 0) \quad \forall x \in \Omega \setminus \Omega^*.
$$
Furthermore, if for some $c_0 > 0$ one has
$$
\inf_{x \in A} F(x, c_0) := \inf_{x \in A} Q(f(x), c_0 \varphi(x)) = Q(f^*, 0),
$$
then applying the strict monotonicity of $Q$ again one obtains that for any $c > c_0$ the following inequality holds true
$$
F(x, c) = Q(f(x), c \varphi(x)) > Q(f^*, 0) \quad \forall x \in A \setminus \Omega.
$$
Therefore the validity of the condition $\Omega^* \cap \argmin_{x \in A} F(x, c_0) \ne \emptyset$ for some $c_0 > 0$ is
equivalent to the global exactness of $F(x, c)$ by virtue of the fact that for any $c > 0$ and
$x^* \in \Omega^*$ one has $F(x^*, c) = Q(f^*, 0)$.
Let us verify that $F$ is a penalty-type separating function. Then applying Theorems~\ref{Thrm_LocPrin_Param} and
\ref{Thrm_LocPrin_Param_SublevelSets} one obtains the desired result.
Indeed, let $\{ c_n \} \subset (0, + \infty)$ be an increasing unbounded sequence,
$x_n \in \argmin_{x \in A} F(x, c_n)$ for all $n \in \mathbb{N}$, and let $x^*$ be a cluster point of
the sequence $\{ x_n \}$. Let us check, at first, that $\varphi(x_n) \to 0$ as $n \to \infty$. Arguing by reductio ad
absurdum, suppose that there exist $\varepsilon > 0$ and a subsequence $\{ x_{n_k} \}$ of the sequence $\{ x_n \}$
such that $\varphi(x_{n_k}) > \varepsilon$ for all $k \in \mathbb{N}$. Hence applying the monotonicity of $Q$ one
obtains that
$$
F(x_{n_k}, c_{n_k}) := Q\big( f(x_{n_k}), c_{n_k} \varphi(x_{n_k}) \big) \ge Q(0, c_{n_k} \varepsilon)
\quad \forall k \in \mathbb{N}.
$$
Consequently, taking into account the fact that $Q(0, s) \to + \infty$ as $s \to +\infty$ one gets that
$F(x_{n_k}, c_{n_k}) \to + \infty$ as $k \to \infty$, which contradicts the fact that
\begin{equation} \label{NonlinPenFunc_InfVsOptValue}
\inf_{x \in A} F(x, c) \le F(y^*, c) = Q(f^*, 0) < + \infty \quad \forall c > 0, \: y^* \in \Omega^*
\end{equation}
(the inequality $Q(f^*, 0) < + \infty$ follows from the strict monotonicity of $Q$). Thus, $\varphi(x_n) \to 0$ as $n
\to \infty$. Applying the fact that $A$ is closed and $\varphi$ is l.s.c. on $A$ one gets that the cluster point $x^*$
is a feasible point of the problem $(\mathcal{P})$.
Note that from \eqref{NonlinPenFunc_InfVsOptValue} and the monotonicity of $Q$ it follows that
$f(x_n) \le f^*$ for all $n \in \mathbb{N}$. Hence taking into account the fact that $f$ is l.s.c. on $A$ one obtains
that $f(x^*) \le f^*$, which implies that $x^*$ is a globally optimal solution of $(\mathcal{P})$. Thus, $F(x, c)$ is a
penalty-type separating function.
\end{proof}
Let us also obtain \textit{new} simple sufficient conditions for the local exactness of the function $F(x, c)$ which
can be applied to the $q$-th order nonlinear penalty function with $q \in (0, 1)$. Note that since the function $Q$ is
strictly monotone, the point $(0, 0)$ is a global minimizer of $Q$ on the set $[0, + \infty] \times [0, + \infty]$.
Therefore, if $x^*$ is a locally optimal solution of $(\mathcal{P})$ such that $f(x^*) = 0$, then $x^*$ is a global
minimizer of $F(\cdot, c)$ on $A$ for all $c > 0$, which implies that $F(x, c)$ is locally exact at $x^*$. Thus, it is
sufficient to consider the case $f(x^*) > 0$.
\begin{theorem}
Let $x^*$ be a locally optimal solution of the problem $(\mathcal{P})$ such that $f(x^*) > 0$. Suppose that $f$ is
H\"{o}lder continuous with exponent $\alpha \in (0, 1]$ near $x^*$ , and there exist $\tau > 0$ and $r > 0$ such that
\begin{equation} \label{NonlinPenFunc_MetricSubreg}
\varphi(x) \ge \tau [\dist(x, \Omega)]^{\alpha} \quad \forall x \in A \colon \| x - x^* \| < r.
\end{equation}
Suppose also that there exist $t_0 > 0$ and $c_0 > 0$ such that
\begin{equation} \label{NonlinearPenFunc_ConvFuncAssump}
Q\big( f(x^*) - t, c_0 t \big) \ge Q(f(x^*), 0) \quad \forall t \in [0, t_0).
\end{equation}
Then the nonlinear penalty function $F(x, c)$ is locally exact at $x^*$.
\end{theorem}
\begin{proof}
Since $f$ is H\"{o}lder continuous with exponent $\alpha$ near the locally optimal solution $x^*$ of the problem
$(\mathcal{P})$, there exist $L > 0$ and $\delta < r$ such that
$$
f(x) \ge f(x^*) - L \big[\dist(x, \Omega)\big]^{\alpha} \ge 0 \quad \forall x \in A \colon \| x - x^* \| < \delta
$$
(\cite{Dolgopolik_UT}, Proposition~2.7). Consequently, applying \eqref{NonlinPenFunc_MetricSubreg} and the fact that
$Q$ is monotone one obtains that for any $x \in A$ with $\| x - x^* \| < \delta$ one has
$$
Q\big( f(x), c \varphi(x) \big) \ge
Q\Big( f(x^*) - L \big[\dist(x, \Omega)\big]^{\alpha}, c \tau \big[\dist(x, \Omega)\big]^{\alpha} \Big).
$$
Hence with the use of \eqref{NonlinearPenFunc_ConvFuncAssump}, where $t_0 > 0$ and $c_0 > 0$ are as in the statement of
the theorem, one gets that for any $c \ge L c_0 / \tau$ one has
$$
Q\big( f(x), c \varphi(x) \big) \ge Q( f(x^*), 0)
\quad \forall x \in A \colon \| x - x^* \| <
\min\left\{ \delta, \left( \frac{t_0}{L} \right)^{1 / \alpha} \right\},
$$
which implies that $F(x, c)$ is locally exact at $x^*$ and $c^*(x^*) \le L c_0 / \tau$.
\end{proof}
\begin{remark}
Assumption~\eqref{NonlinearPenFunc_ConvFuncAssump} always holds true for the $q$-th order nonlinear penalty function
with $0 < q \le 1$. Indeed, applying the fact that the function $\omega(t) = t^q$ is H\"{o}lder continuous with exponent
$q$ and the H\"{o}lder coefficient $C = 1$ on $[0, + \infty)$ one obtains that
$$
\big( f(x^*) - t \big)^q + c^q t^q \ge f(x^*)^q - t^q + c^q t^q \ge f(x^*)^q
$$
for any $t \in [0, f(x^*))$ and $c \ge 1$. Hence
$$
Q( f(x^*) - t, c t ) \ge Q( f(x^*), 0 ) \quad \forall t \in [0, f(x^*)) \quad \forall c \ge 1,
$$
which implies the required result.
\end{remark}
\begin{remark}
Note that assumption~\eqref{NonlinearPenFunc_ConvFuncAssump} in the theorem above cannot be relaxed. Namely, one
can easily verify that if the nonlinear penalty function $F$ is locally exact at a locally optimal solution $x^*$ for
all Lipschitz continuous functions $f$ and all functions $\varphi$ satisfying \eqref{NonlinPenFunc_MetricSubreg}, then
\eqref{NonlinearPenFunc_ConvFuncAssump} holds true (one simply has to define $f(x) = f(x^*) - L \dist(x, \Omega)$ and
$\varphi(x) = \dist(x, \Omega)$). Note also that the $q$-th order nonlinear penalty function does not satisfy
assumption~\eqref{NonlinearPenFunc_ConvFuncAssump} for $q > 1$.
\end{remark}
\subsection{Example III: Continuously Differentiable Exact Penalty Functions}
In this section, we utilize the localization principle in order to improve existing results on the global exactness of
continuously differentiable exact penalty functions. A continuously differentiable exact penalty function for
mathematical programming problems was introduced by Fletcher in \cite{Fletcher70,Fletcher73}. Later on, Fletcher's
penalty function was modified and thoroughly investigated by many researchers
\cite{DiPilloGrippo89,DiPillo1994,MukaiPolak,GladPolak,BoggsTolle1980,Bertsekas_book,HanMangasarian_C1PenFunc,
DiPilloGrippo85,DiPilloGrippo86,Lucidi92,ContaldiDiPilloLucidi_93,FukudaSilva,AndreaniFukuda}. Here, we study a
modification of the continuously differentiable penalty function for nonlinear second-order cone programming problems
proposed by Fukuda, Silva and Fukushima in \cite{FukudaSilva}. However, it should be pointed out that the
results of this subsection can be easily extended to the case of any existing modification of Fletcher's penalty
function.
Let $X = A = \mathbb{R}^d$, and suppose that the set $M$ has the form
$$
M = \Big\{ x \in \mathbb{R}^d \Bigm| g_i(x) \in Q_{l_i + 1}, \quad i \in I, \quad h(x) = 0 \Big\},
$$
$$
where $g_i \colon X \to \mathbb{R}^{l_i + 1}$, $I = \{ 1, \ldots, r \}$, and $h \colon X \to \mathbb{R}^s$ are given
functions, and
$$
Q_{l_i + 1} = \big\{ y = (y^0, \overline{y}) \in \mathbb{R} \times \mathbb{R}^{l_i} \bigm|
y^0 \ge \| \overline{y} \| \big\}
$$
is the second order (Lorentz) cone of dimension $l_i + 1$ (here $\| \cdot \|$ is the Euclidean norm). In this case the
problem $(\mathcal{P})$ is a nonlinear second-order cone programming problem.
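For example (a standard reformulation, recalled here only for orientation), a single constraint of the form
$\| A x + u \| \le \langle v, x \rangle + \beta$ with $A \in \mathbb{R}^{l \times d}$, $u \in \mathbb{R}^{l}$,
$v \in \mathbb{R}^{d}$ and $\beta \in \mathbb{R}$ can be written as
$g(x) = \big( \langle v, x \rangle + \beta, \, A x + u \big) \in Q_{l + 1}$, and is therefore covered by the setting
above.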
Following the ideas of \cite{FukudaSilva}, let us introduce a continuously differentiable exact penalty function for
the problem under consideration. Suppose that the functions $f$, $g_i$, $i \in I$ and $h$ are twice continuously
differentiable.
For any $\lambda = (\lambda_1, \ldots, \lambda_r) \in \mathbb{R}^{l_1 + 1} \times \ldots \times \mathbb{R}^{l_r + 1}$
and $\mu \in \mathbb{R}^s$ denote by
$$
L(x, \lambda, \mu) = f(x) + \sum_{i = 1}^r \langle \lambda_i, g_i(x) \rangle + \langle \mu, h(x) \rangle,
$$
the standard Lagrangian function for the nonlinear second-order cone programming problem. Here
$\langle \cdot, \cdot \rangle$ is the inner product in $\mathbb{R}^k$.
For a chosen $x \in \mathbb{R}^d$ consider the following unconstrained minimization problem, which allows one to obtain
an estimate of Lagrange multipliers:
\begin{multline} \label{SOCP_LagrangeMultEstimate}
\min_{\lambda, \mu} \big\| \nabla_x L(x, \lambda, \mu) \big\|^2 +
\zeta_1 \sum_{i = 1}^r \Big( \langle \lambda_i, g_i(x) \rangle^2 +
\| (\lambda_i)_0 \overline{g}_i(x) + (g_i)_0(x) \overline{\lambda}_i \|^2 \Big) \\
+ \frac{\zeta_2}{2} \left( \| h(x) \|^2 + \sum_{i = 1}^r \dist^2\big( g_i(x), Q_{l_i + 1} \big) \right) \cdot
\big( \| \lambda \|^2 + \| \mu \|^2 \big),
\end{multline}
where $\zeta_1$ and $\zeta_2$ are some positive constants,
$\lambda_i = ((\lambda_i)_0, \overline{\lambda}_i) \in \mathbb{R} \times \mathbb{R}^{l_i}$, and the same notation
is used for $g_i(x)$. Observe that if $(x^*, \lambda^*, \mu^*)$ is a KKT-point of the problem $(\mathcal{P})$, then
$(\lambda^*, \mu^*)$ is a globally optimal solution of problem \eqref{SOCP_LagrangeMultEstimate}
(see~\cite{FukudaSilva}). Moreover, it is easily seen that for any $x \in \mathbb{R}^d$ there exists a globally optimal
solution of this problem, which we denote by $(\lambda(x), \mu(x))$. In order to ensure that an optimal solution is
unique one has to utilize a proper constraint qualification.
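For the reader's convenience, the following sketch illustrates how problem \eqref{SOCP_LagrangeMultEstimate} can be solved numerically for a fixed $x$, once the gradient $\nabla f(x)$, the Jacobians $J g_i(x)$ and $J h(x)$, the values $g_i(x)$ and $h(x)$, and the distances $\dist(g_i(x), Q_{l_i + 1})$ have been computed. It is given only as an illustration of the structure of the problem (the objective is a smooth unconstrained function of $(\lambda, \mu)$); the function names and the choice of a quasi-Newton method are ours and are not taken from \cite{FukudaSilva}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def multiplier_estimate(grad_f, Jg, g, Jh, h, dist_g, zeta1=1.0, zeta2=1.0):
    # grad_f : (d,) gradient of f at x
    # Jg     : list of (l_i + 1, d) Jacobians of g_i at x;  g : list of values g_i(x)
    # Jh     : (s, d) Jacobian of h at x;                   h : value h(x)
    # dist_g : list of distances dist(g_i(x), Q_{l_i + 1})
    sizes = [gi.size for gi in g]
    penalty = np.dot(h, h) + sum(di ** 2 for di in dist_g)

    def split(z):
        lams, off = [], 0
        for m in sizes:
            lams.append(z[off:off + m]); off += m
        return lams, z[off:]

    def objective(z):
        lams, mu = split(z)
        grad_L = grad_f + Jh.T @ mu
        for Jgi, lam in zip(Jg, lams):
            grad_L = grad_L + Jgi.T @ lam
        val = np.dot(grad_L, grad_L)
        for gi, lam in zip(g, lams):
            val += zeta1 * (np.dot(lam, gi) ** 2
                            + np.linalg.norm(lam[0] * gi[1:] + gi[0] * lam[1:]) ** 2)
        return val + 0.5 * zeta2 * penalty * np.dot(z, z)

    z0 = np.zeros(sum(sizes) + h.size)
    res = minimize(objective, z0, method="BFGS")
    return split(res.x)
\end{verbatim}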
Recall that a feasible point $x$ is called \textit{nondegenerate} (\cite{BonnansShapiro}, Def.~4.70), if
$$
\begin{bmatrix}
J g_1(x) \\
\vdots \\
J g_r(x) \\
J h(x)
\end{bmatrix} \mathbb{R}^d +
\begin{bmatrix}
\lineal T_{Q_{l_1 + 1}} \big( g_1(x) \big) \\
\vdots \\
\lineal T_{Q_{l_r + 1}} \big( g_r(x) \big) \\
\{ 0 \}
\end{bmatrix} =
\begin{bmatrix}
\mathbb{R}^{l_1 + 1} \\
\vdots \\
\mathbb{R}^{l_r + 1} \\
\mathbb{R}^s
\end{bmatrix},
$$
where $J g_i(x)$ is the Jacobian of $g_i(x)$, ``lin'' stands for the lineality subspace of a convex cone, i.e.
the largest linear space contained in this cone, and $T_{Q_{l_i + 1}} \big( g_i(x) \big)$ is the contingent cone to
$Q_{l_i + 1}$ at the point $g_i(x)$. Let us note that the nondegeneracy condition can be expressed as a
``linear independence-type'' condition (see~\cite{FukudaSilva}, Lemma~3.1, and \cite{BonnansRamirez}, Proposition~19).
Furthermore, by \cite{BonnansShapiro}, Proposition~4.75, the nondegeneracy condition guarantees that if $x$ is a
locally optimal solution of the problem $(\mathcal{P})$, then there exists a \textit{unique} Lagrange multiplier at
$x$.
Suppose that every feasible point of the problem $(\mathcal{P})$ is nondegenerate. Then one can verify that a
globally optimal solution $(\lambda(x), \mu(x))$ of problem \eqref{SOCP_LagrangeMultEstimate} is unique for all
$x \in \mathbb{R}^d$, and the functions $\lambda(\cdot)$ and $\mu(\cdot)$ are continuously differentiable
(\cite{FukudaSilva}, Proposition~3.3).
Now we can introduce a new continuously differentiable exact penalty function for nonlinear second-order cone
programming problems, which is a simple modification of the penalty function from \cite{FukudaSilva}. Namely, choose
$\alpha > 0$ and $\varkappa \ge 2$, and define
\begin{equation} \label{BarrierTerms_SOC}
p(x) = \frac{a(x)}{1 + \sum_{i = 1}^r \| \lambda_i(x) \|^2}, \quad
q(x) = \frac{b(x)}{1 + \| \mu(x) \|^2},
\end{equation}
where
$$
a(x) = \alpha - \sum_{i = 1}^r \dist^{\varkappa}\big( g_i(x), Q_{l_i + 1} \big), \quad
b(x) = \alpha - \| h(x) \|^2.
$$
Finally, denote $\Omega_{\alpha} = \{ x \in \mathbb{R}^d \mid a(x) > 0, \: b(x) > 0 \}$, and define
\begin{multline} \label{C1_PenaltyFunc}
F(x, c) = f(x) \\
+ \frac{c}{2 p(x)} \sum_{i = 1}^r
\left[ \dist^2\Big( g_i(x) + \frac{p(x)}{c} \lambda_i(x), Q_{l_i + 1} \Big) -
\frac{p(x)^2}{c^2} \| \lambda_i(x) \|^2 \right] \\
+ \langle \mu(x), h(x) \rangle + \frac{c}{2 q(x)} \| h(x) \|^2,
\end{multline}
if $x \in \Omega_{\alpha}$, and $F(x, c) = + \infty$ otherwise. Let us point out that $F(x, c)$ is, in essence, a
straightforward modification of the Hestenes-Powell-Rockafellar augmented Lagrangian function to the case of nonlinear
second-order cone programming problems \cite{LiuZhang2007,LiuZhang2008,ZhouChen2015} with Lagrange multipliers $\lambda$
and $\mu$ replaced by their estimates $\lambda(x)$ and $\mu(x)$. One can easily verify that the function $F(\cdot, c)$
is l.s.c. on $\mathbb{R}^d$, and continuously differentiable on its effective domain (see~\cite{FukudaSilva}).
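To make the definition \eqref{C1_PenaltyFunc} more concrete, the following sketch evaluates $F(x, c)$ at a point $x$, given the values $f(x)$, $g_i(x)$, $h(x)$ and the multiplier estimates $\lambda(x)$, $\mu(x)$. It is only an illustration (the names are ours) and reuses the function \texttt{dist\_soc} from the first sketch above.
\begin{verbatim}
import numpy as np

def penalty_value(f_x, g, h, lam, mu, c, alpha, kappa=2):
    # f_x : value f(x);  g : list of values g_i(x);  h : value h(x)
    # lam : list of estimates lambda_i(x);  mu : estimate mu(x)
    a = alpha - sum(dist_soc(gi) ** kappa for gi in g)
    b = alpha - np.dot(h, h)
    if a <= 0 or b <= 0:                         # x lies outside Omega_alpha
        return np.inf
    p = a / (1.0 + sum(np.dot(li, li) for li in lam))
    q = b / (1.0 + np.dot(mu, mu))
    val = f_x
    for gi, li in zip(g, lam):
        val += (c / (2 * p)) * (dist_soc(gi + (p / c) * li) ** 2
                                - (p / c) ** 2 * np.dot(li, li))
    return val + np.dot(mu, h) + (c / (2 * q)) * np.dot(h, h)
\end{verbatim}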
Let us obtain the \textit{first} simple \textit{necessary and sufficient} conditions for the global exactness of
continuously differentiable penalty functions.
\begin{theorem}[Localization Principle for $C^1$ Penalty Functions] \label{Thrm_SOC_C1Penalty_LP}
Let the functions $f$, $g_i$, $i \in I$, and $h$ be twice continuously differentiable, and suppose that every feasible
point of the problem $(\mathcal{P})$ is nondegenerate. Then the continuously differentiable penalty function $F(x, c)$
is globally exact if and only if it is locally exact at every globally optimal solution of the problem $(\mathcal{P})$
and one of the two following assumptions is satisfied:
\begin{enumerate}
\item{the function $F(x, c)$ is non-degenerate;
}
\item{there exists $c_0 > 0$ such that the set $\{ x \in \mathbb{R}^d \mid F(x, c_0) < f^* \}$ is bounded.
}
\end{enumerate}
In particular, if the set $\{ x \in \mathbb{R}^d \mid f(x) < f^* + \gamma, \: a(x) > 0, \: b(x) > 0 \}$ is bounded for
some $\gamma > 0$, then $F(x, c)$ is globally exact if and only if it is locally exact at every globally optimal
solution of the problem $(\mathcal{P})$.
\end{theorem}
\begin{proof}
Our aim is to apply the localization principle in the parametric form to the separating function
\eqref{C1_PenaltyFunc}. To this end, define $G(\cdot) = (g_1(\cdot), \ldots, g_r(\cdot))$,
$K = Q_{l_1 + 1} \times \ldots \times Q_{l_r + 1}$, and introduce the function
\begin{equation} \label{C1_Penalty_AuxiliaryFunc}
\Phi(x, c) = \min_{y \in K - G(x)} \left( - p(x) \langle \lambda(x), y \rangle + \frac{c}{2} \| y \|^2 \right).
\end{equation}
Note that the minimum is attained at a unique point $y(x, c)$ due to the facts that $K - G(x)$ is a closed convex set and
the function on the right-hand side of the above equality is strongly convex in $y$. Observe also that
\begin{equation} \label{C1_PenaltyFunc_Represent}
F(x, c) = f(x) + \frac{1}{p(x)} \Phi(x, c) + \langle \mu(x), h(x) \rangle + \frac{c}{2 q(x)} \| h(x) \|^2
\end{equation}
(see~\cite{ShapiroSun}, formulae (2.5) and (2.7)). Consequently, the function $F(x, c)$ is nondecreasing
in $c$.
From \eqref{C1_Penalty_AuxiliaryFunc} and \eqref{C1_PenaltyFunc_Represent} it follows that $F(x, c) \le f(x)$ for any
feasible point $x$ (in this case $y = 0 \in K - G(x)$). Let, now, $(x^*, \lambda^*, \mu^*)$ be a KKT-point of the
problem $(\mathcal{P})$. Then by \cite{FukudaSilva}, Proposition~3.3(c) one has $\lambda(x^*) = \lambda^*$ and
$\mu(x^*) = \mu^*$, which, in particular, implies that $\lambda_i(x^*) \in (Q_{l_i + 1})^*$ and
$\langle \lambda_i(x^*), g_i(x^*) \rangle = 0$, where $(Q_{l_i + 1})^*$ is the polar cone of $Q_{l_i + 1}$. Then
applying the standard first order necessary and sufficient conditions for a minimum of a convex function on a convex set
one can easily verify that the infimum in
$$
\dist^2\Big( g_i(x^*) + \frac{p(x^*)}{c} \lambda_i(x^*), Q_{l_i + 1} \Big) =
\inf_{z \in Q_{l_i + 1}} \left\| g_i(x^*) + \frac{p(x^*)}{c} \lambda_i(x^*) - z \right\|^2
$$
is attained at the point $z = g_i(x^*)$. Therefore $F(x^*, c) = f(x^*)$ for all $c > 0$ (see~\eqref{C1_PenaltyFunc}).
In particular, if $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$, then $F(x^*, c) \equiv f^*$.
Suppose that for some $c_0 > 0$ one has
\begin{equation} \label{C1_PenaltyFunc_IntersectImpliesExact}
\Omega^* \cap \argmin_{x \in \mathbb{R}^d} F(x, c_0) \ne \emptyset.
\end{equation}
Then
\begin{equation} \label{C1_PenaltyFunc_Min}
\min_x F(x, c) = f^* = F(x^*, c) \quad \forall c \ge c_0 \quad \forall x^* \in \Omega^*
\end{equation}
due to the fact that $F(x, c)$ is nondecreasing in $c$. Thus, for all $c \ge c_0$ one has
$\Omega^* \subseteq \argmin_{x \in \mathbb{R}^d} F(x, c)$.
Let, now, $c > c_0$ and $x^* \in \argmin_x F(x, c)$ be arbitrary. Clearly, if $h(x^*) \ne 0$, then
$F(x^*, c) > F(x^*, c_0)$, which is impossible. Therefore $h(x^*) = 0$. Let, now, $y(x^*, c)$ be such that
$$
\Phi(x^*, c) = - p(x^*) \langle \lambda(x^*), y(x^*, c) \rangle + \frac{c}{2} \| y(x^*, c) \|^2
$$
(see~\eqref{C1_Penalty_AuxiliaryFunc}). From the definitions of $x^*$ and $c_0$ it follows that
\begin{multline*}
f^* = F(x^*, c_0) = f(x^*) + \frac{1}{p(x^*)} \Phi(x^*, c_0) \\
\le
f(x^*) + \frac{1}{p(x^*)} \left( - p(x^*) \langle \lambda(x^*), y \rangle + \frac{c_0}{2} \| y \|^2 \right)
\end{multline*}
for any $y \in K - G(x^*)$. Hence for any $y \in (K - G(x^*)) \setminus \{ 0 \}$ one has
$$
f(x^*) + \frac{1}{p(x^*)} \left( - p(x^*) \langle \lambda(x^*), y \rangle + \frac{c}{2} \| y \|^2 \right) >
f^*.
$$
Consequently, taking into account the first equality in \eqref{C1_PenaltyFunc_Min} and the definition of $x^*$ one
obtains that $y(x^*, c) = 0$ and $\Phi(x^*, c) = 0$, which yields that $0 \in K - G(x^*)$, i.e. $x^*$ is feasible, and
$F(x^*, c) = f(x^*) = f^*$. Therefore $x^* \in \Omega^*$. Thus, $\argmin_{x \in \mathbb{R}^d} F(x, c) = \Omega^*$ for
all $c > c_0$ or, in other words, the validity of condition \eqref{C1_PenaltyFunc_IntersectImpliesExact} implies
that the penalty function $F(x, c)$ is globally exact.
Let us now check that $F(x, c)$ is a penalty-type separating function. Then applying Theorems~\ref{Thrm_LocPrin_Param}
and \ref{Thrm_LocPrin_Param_SublevelSets} we arrive at the required result.
Indeed, let $\{ c_n \} \subset (0, + \infty)$ be an increasing unbounded sequence, $x_n \in \argmin_{x} F(x, c_n)$ for
all $n \in \mathbb{N}$, and $x^*$ be a cluster point of the sequence $\{ x_n \}$. As was noted above,
$F(y^*, c) = f^*$ for any globally optimal solution $y^*$ of the problem $(\mathcal{P})$. Therefore $F(x_n, c_n) \le f^*$ for
all $n \in \mathbb{N}$. On the other hand, minimizing the function $\omega(x, t) = - \| \mu(x) \| t + c t^2 / 2 q(x)$
with respect to $t$ one obtains that
\begin{equation} \label{C1PenFunc_GlobMin_LowerEstimate}
F(x_n, c_n) \ge f(x_n) - \sum_{i = 1}^r \frac{p(x_n)}{2 c_n} \| \lambda_i(x_n) \|^2
- \frac{q(x_n)}{2 c_n} \| \mu(x_n) \|^2 \ge f(x_n) - \frac{\alpha}{c_n}.
\end{equation}
Hence passing to the limit as $n \to + \infty$ one obtains that $f(x^*) \le f^*$. Therefore it remains to show that
$x^*$ is a feasible point of the problem $(\mathcal{P})$.
Arguing by reductio ad absurdum, suppose that $x^*$ is not feasible. Let, at first, $h(x^*) \ne 0$. Then there
exist $\varepsilon > 0$ and a subsequence $\{ x_{n_k} \}$ converging to $x^*$ such that $\| h(x_{n_k}) \| \ge \varepsilon$ for
all $k \in \mathbb{N}$. Note that since $\{ x_{n_k} \}$ converges and the function $\mu(\cdot)$ is
continuous, there exists $\mu_0 > 0$ such that $\| \mu(x_{n_k}) \| \le \mu_0$ for all $k \in \mathbb{N}$. Furthermore,
it is obvious that $\| h(x_n) \|^2 < \alpha$ for all $n \in \mathbb{N}$. Consequently, one has
$$
F(x_{n_k}, c_{n_k}) \ge f(x_{n_k}) - \frac{\alpha}{2 c_{n_k}} - \mu_0 \alpha +
\frac{c_{n_k} \varepsilon^2}{2(\alpha - \varepsilon^2)}
$$
(clearly, one can suppose that $\varepsilon^2 < \alpha$). Therefore $\limsup_{k \to \infty} F(x_{n_k}, c_{n_k}) = + \infty$,
which is impossible. Thus, $h(x^*) = 0$.
Suppose, now, that $g_i(x^*) \notin Q_{l_i + 1}$ for some $i \in I$. Then there exist $\varepsilon > 0$ and a
subsequence $\{ x_{n_k} \}$ converging to $x^*$ such that $\dist( g_i(x_{n_k}), Q_{l_i + 1}) \ge \varepsilon$ for all $k \in \mathbb{N}$.
Note that $p(x) \| \lambda_i(x) \| / c < \alpha / c$. Consequently, one has
$\dist( g_i(x_{n_k}) + p(x_{n_k}) \lambda_i(x_{n_k}) / c_{n_k}, Q_{l_i + 1}) \ge \varepsilon / 2$ for any
sufficiently large $k \in \mathbb{N}$. Therefore
$$
F(x_{n_k}, c_{n_k}) \ge f(x_{n_k}) +
\frac{c_{n_k} \varepsilon^2}{2 (\alpha - \varepsilon^{\varkappa})} - \frac{\alpha}{c_{n_k}}
$$
for any $k$ large enough (obviously, we can assume that $\varepsilon^{\varkappa} < \alpha$). Passing to the limit as $k
\to \infty$ one obtains that $\limsup_{k \to \infty} F(x_{n_k}, c_{n_k}) = + \infty$, which is impossible. Thus, $x^*$ is a
feasible point of the problem $(\mathcal{P})$.
Finally, note that from \eqref{C1PenFunc_GlobMin_LowerEstimate} it follows that
$$
\big\{ x \in \mathbb{R}^d \bigm| F(x, c) < f^* \big\} \subseteq
\big\{ x \in \mathbb{R}^d \bigm| f(x) < f^* + \gamma, \: a(x) > 0, \: b(x) > 0 \big\}
$$
for all $c > \alpha / \gamma$.
\end{proof}
\begin{remark} \label{Remark_C1PenFunc}
{(i)~Let us note that the local exactness of penalty function \eqref{C1_PenaltyFunc} at a globally optimal solution of
the problem $(\mathcal{P})$ can be easily established with the use of second-order sufficient optimality conditions
(see~\cite{FukudaSilva}, Theorem~5.7).
}
\noindent{(ii)~Note that from the proof of the theorem above it follows that $F(x^*, c) = f(x^*)$ for any KKT-point
$(x^*, \lambda^*, \mu^*)$ of the problem $(\mathcal{P})$.
}
\end{remark}
Following the underlying idea of the localization principle and utilizing some specific properties of the continuously
differentiable penalty function \eqref{C1_PenaltyFunc}, we can obtain stronger necessary and sufficient conditions for
the global exactness of this function than the ones in the theorem above. These conditions do not rely on the local
exactness property and, furthermore, strengthen existing \textit{sufficient} conditions for the global exactness of
continuously differentiable exact penalty functions for nonlinear second-order cone programming problems
(\cite{FukudaSilva}, Proposition~4.9). However, it should be emphasized that these conditions heavily rely on
the particular structure of the penalty function under consideration.
\begin{theorem} \label{Thrm_SOC_C1Penalty_Mod}
Let the functions $f$, $g_i$, $i \in I$, and $h$ be twice continuously differentiable, and suppose that every feasible
point of the problem $(\mathcal{P})$ is nondegenerate. Then the continuously differentiable penalty function $F(x, c)$
is globally exact if and only if there exists $c_0 > 0$ such that
the set $\{ x \in \mathbb{R}^d \mid F(x, c_0) < f^* \}$ is bounded. In particular, it is globally exact if there exists
$\gamma > 0$ such that the set $\{ x \in \mathbb{R}^d \mid f(x) < f^* + \gamma, \: a(x) > 0, \: b(x) > 0 \}$ is bounded.
\end{theorem}
\begin{proof}
Denote $S(c) = \{ x \in \mathbb{R}^d \mid F(x, c) < f^* \}$. If $F(x, c)$ is globally exact, then, as is easy to
check, one has $S(c) = \emptyset$ for any sufficiently large $c > 0$. Therefore it remains to prove the ``if'' part of the theorem.
If $S(c) = \emptyset$ for some $c > 0$, then $F(x, c) \ge f^*$ for all $x \in \mathbb{R}^d$, and arguing in the same
way as at the beginning of the proof of Theorem~\ref{Thrm_SOC_C1Penalty_LP} one can easily obtain the desired result.
Thus, one can suppose that $S(c) \ne \emptyset$ for all $c > 0$.
Choose an increasing unbounded sequence $\{ c_n \} \subset [c_0, + \infty)$. Taking into account the facts that
$F(\cdot, c)$ is l.s.c. and nondecreasing in $c$, and applying the boundedness of the set $S(c_0)$ one obtains that for
any $n \in \mathbb{N}$ the function $F(\cdot, c_n)$ attains a global minimum at a point $x_n \in S(c_n) \subseteq
S(c_0)$. Applying the boundedness of the set $S(c_0)$ once again one obtains that there exists a cluster point $x^*$ of
the sequence $\{ x_n \}$. Replacing, if necessary, the sequence $\{ x_n \}$ with its subsequence we can suppose that
$x_n$ converges to $x^*$. As was shown in the proof of Theorem~\ref{Thrm_SOC_C1Penalty_LP}, $F(x, c)$ is a penalty-type separating
function. Therefore $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$, and $F(x^*, c) = f^*$ for all
$c > 0$.
From the fact that $x_n$ is a point of global minimum of $F(\cdot, c_n)$ it follows that $\nabla_x F(x_n, c_n) = 0$.
Then applying a direct modification of \cite{FukudaSilva}, Proposition~4.3 to the case of penalty function
\eqref{C1_PenaltyFunc} one obtains that for any $x_n$ in a sufficiently small neighbourhood of $x^*$ (i.e.
for any sufficiently large $n \in \mathbb{N}$) the triplet $(x_n, \lambda(x_n), \mu(x_n))$ is a KKT-point of the problem
$(\mathcal{P})$. Hence taking into account Remark~\ref{Remark_C1PenFunc} one gets that $F(x_n, c_n) = f(x_n) \ge f^*$
for any sufficiently large $n \in \mathbb{N}$, which contradicts our assumption that $S(c) \ne \emptyset$ for all $c >
0$ due to the definition of $x_n$. Thus, the penalty function $F(x, c)$ is globally exact.
\end{proof}
Let us note that the results of this subsection can be easily extended to the case of nonlinear semidefinite
programming problems (cf.~\cite{Dolgopolik_GSP}, Sections~8.3 and 8.4). Namely, suppose that $A = \mathbb{R}^d$, and let
$$
M = \Big\{ x \in \mathbb{R}^d \Bigm| G(x) \preceq 0, \: h(x) = 0 \Big\},
$$
where $G \colon X \to \mathbb{S}^l$ and $h \colon X \to \mathbb{R}^s$ are given functions, $\mathbb{S}^l$ is the set of
all $l \times l$ real symmetric matrices, and the relation $G(x) \preceq 0$ means that the matrix $G(x)$ is negative
semidefinite. We suppose that the space $\mathbb{S}^l$ is equipped with the Frobenius norm
$\| A \|_F = \sqrt{\trace(A^2)}$. In this case the problem $(\mathcal{P})$ is a nonlinear semidefinite programming
problem.
Suppose that the functions $f$, $G$ and $h$ are twice continuously differentiable. For any $\lambda \in \mathbb{S}^l$
and $\mu \in \mathbb{R}^s$ denote by
$$
L(x, \lambda, \mu) = f(x) + \trace( \lambda G(x) ) + \langle \mu, h(x) \rangle,
$$
the standard Lagrangian function for the nonlinear semidefinite programming problem. For a chosen $x \in \mathbb{R}^d$
consider the following unconstrained minimization problem, which allows one to compute an estimate of Lagrange
multipliers:
\begin{multline} \label{SemiDef_LagrangeMultEstimate}
\min_{\lambda, \mu} \big\| \nabla_x L(x, \lambda, \mu) \big\|^2 +
\zeta_1 \trace( \lambda^2 G(x)^2 ) \\
+ \frac{\zeta_2}{2} \left( \| h(x) \|^2 + \dist^2\big( G(x), \mathbb{S}^l_{-} \big) \right) \cdot
\big( \| \lambda \|_F^2 + \| \mu \|^2 \big),
\end{multline}
where $\zeta_1$ and $\zeta_2$ are some positive constants, and $\mathbb{S}^l_{-}$ is the cone of $l \times l$ real negative
semidefinite matrices. One can verify (cf.~\cite{Dolgopolik_GSP}, Lemma~4) that for any $x \in \mathbb{R}^d$ there
exists a unique globally optimal solution $(\lambda(x), \mu(x))$ of this problem, provided every feasible point of the
problem $(\mathcal{P})$ is non-degenerate, i.e. provided for any feasible $x$ one has
$$
\begin{bmatrix}
D G(x) \\
J h(x)
\end{bmatrix} \mathbb{R}^d +
\begin{bmatrix}
\lineal T_{\mathbb{S}^l_-} \big( G(x) \big) \\
\{ 0 \}
\end{bmatrix} =
\begin{bmatrix}
\mathbb{S}^l \\
\mathbb{R}^s
\end{bmatrix}.
$$
(see \cite{BonnansShapiro}, Def.~4.70). Let us note that, as in the case of second order cone programming problems, the
above nondegeneracy condition can be rewritten as a ``linear independence-type'' condition (see \cite{BonnansShapiro},
Proposition~5.71).
Now we can introduce the \textit{first} continuously differentiable \textit{exact} penalty function for nonlinear
semidefinite programming problems. Namely, choose $\alpha > 0$ and $\varkappa \ge 1$, and define
$$
p(x) = \frac{a(x)}{1 + \trace(\lambda(x)^2)}, \quad
q(x) = \frac{b(x)}{1 + \| \mu(x) \|^2},
$$
where
$$
a(x) = \alpha - \trace\big( [ G(x) ]_+^2 \big)^{\varkappa}, \quad
b(x) = \alpha - \| h(x) \|^2,
$$
and $[\cdot]_+$ denotes the projection of a matrix onto the cone of $l \times l$ positive semidefinite matrices.
Denote $\Omega_{\alpha} = \{ x \in \mathbb{R}^d \mid a(x) > 0, \: b(x) > 0 \}$, and define
\begin{multline} \label{C1_PenaltyFunc_SemiDef}
F(x, c) = f(x)
+ \frac{1}{2c p(x)} \Big( \trace\big( [c G(x) + p(x) \lambda(x)]_+^2 \big) -
p(x)^2 \trace(\lambda(x)^2) \Big) \\
+ \langle \mu(x), h(x) \rangle + \frac{c}{2 q(x)} \| h(x) \|^2,
\end{multline}
if $x \in \Omega_{\alpha}$, and $F(x, c) = + \infty$ otherwise. Let us point out that $F(x, c)$ is, in essence,
a direct modification of the Hestenes-Powell-Rockafellar augmented Lagrangian function to the case of nonlinear
semidefinite programming problems
\cite{SunZhangWu2006,SunSunZhang2008,ZhaoSunToh2010,Sun2011,WenGoldfarbYin2010,LuoWuChen2012,WuLuoDingChen2013,
WuLuoYang2014,YamashitaYabe2015} with Lagrange multipliers $\lambda$ and $\mu$ replaced by their estimates $\lambda(x)$
and $\mu(x)$. One can verify that the function $F(\cdot, c)$ is l.s.c. on $\mathbb{R}^d$, and continuously
differentiable on its effective domain. Furthermore, it is possible to extend Theorems~\ref{Thrm_SOC_C1Penalty_LP} and
\ref{Thrm_SOC_C1Penalty_Mod} to the case of the continuously differentiable penalty function \eqref{C1_PenaltyFunc_SemiDef},
thus obtaining the \textit{first} necessary and sufficient conditions for the global exactness of $C^1$ penalty functions
for nonlinear semidefinite programming problems. However, we do not present the proofs of these results here, and leave
them to the interested reader.
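For completeness, the following sketch (again only an illustration; the function names are ours) evaluates the penalty function \eqref{C1_PenaltyFunc_SemiDef}, computing the projection $[\cdot]_+$ via an eigenvalue decomposition.
\begin{verbatim}
import numpy as np

def psd_part(A):
    # Projection [A]_+ of a symmetric matrix onto the positive semidefinite cone
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.T

def sdp_penalty_value(f_x, G, h, lam, mu, c, alpha, kappa=1):
    # G : symmetric matrix G(x);   h : vector h(x)
    # lam : symmetric estimate lambda(x);   mu : estimate mu(x)
    Gp = psd_part(G)
    a = alpha - np.trace(Gp @ Gp) ** kappa
    b = alpha - np.dot(h, h)
    if a <= 0 or b <= 0:                          # x lies outside Omega_alpha
        return np.inf
    p = a / (1.0 + np.trace(lam @ lam))
    q = b / (1.0 + np.dot(mu, mu))
    S = psd_part(c * G + p * lam)
    val = f_x + (np.trace(S @ S) - p ** 2 * np.trace(lam @ lam)) / (2 * c * p)
    return val + np.dot(mu, h) + (c / (2 * q)) * np.dot(h, h)
\end{verbatim}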
\subsection{Example IV: Rockafellar-Wets' Augmented Lagrangian Function}
\label{Example_ALRW}
The separating functions studied in the previous examples do not depend on any additional parameters apart from
the penalty parameter $c$. This fact does not allow one to fully understand the concept of parametric exactness. In
order to illuminate the main features of parametric exactness, in this example we consider a separating function that
depends on additional parameters, namely Lagrange multipliers. Below, we apply the general theory of parametrically
exact separating functions to the augmented Lagrangian function introduced by Rockafellar and Wets in \cite{RockWets}
(see also
\cite{ShapiroSun,HuangYang2003,HuangYang2005,ZhouZhouYang2014,Dolgopolik_ALRW,RuckmannShapiro,KanSong,
KanSong2,BurachikYangZhou2017}).
Let $P$ be a topological vector space of parameters. Recall that a function
$\Phi \colon X \times P \to \mathbb{R} \cup \{ + \infty \} \cup \{ - \infty \}$ is called \textit{a dualizing
parameterization function} for $f$ iff $\Phi(x, 0) = f(x)$ for any feasible point $x$ of the problem $(\mathcal{P})$.
A function $\sigma \colon P \to [0, + \infty]$ such that $\sigma(0) = 0$ and $\sigma(p) > 0$ for all $p \ne 0$ is called
\textit{an augmenting function}. Let, finally, $\Lambda$ be a vector space of \textit{multipliers}, and let the pair
$(\Lambda, P)$ be equipped with a bilinear coupling function
$\langle \cdot, \cdot \rangle \colon \Lambda \times P \to \mathbb{R}$.
Following the ideas of Rockafellar and Wets \cite{RockWets}, define the augmented Lagrangian function
\begin{equation} \label{ALRW_Def}
\mathscr{L}(x, \lambda, c) =
\inf_{p \in P} \Big( \Phi(x, p) - \langle \lambda, p \rangle + c \sigma(p) \Big).
\end{equation}
We suppose that $\mathscr{L}(x, \lambda, c) > - \infty$ for all $x \in X$, $\lambda \in \Lambda$ and $c > 0$. Let us
obtain simple necessary and sufficient conditions for the strict global parametric exactness of the augmented Lagrangian
function $\mathscr{L}(x, \lambda, c)$ with the use of the localization principle. These conditions were first obtained
by the author in \cite{Dolgopolik_ALRW}.
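Let us also recall a standard particular case that may help the reader to interpret definition \eqref{ALRW_Def}; it is given only for illustration. In the purely equality-constrained case one may take $P = \Lambda = \mathbb{R}^s$, $\sigma(p) = \| p \|^2$ and
$$
\Phi(x, p) = \begin{cases}
f(x), & \text{if } x \in A, \: h(x) + p = 0, \\
+ \infty, & \text{otherwise}.
\end{cases}
$$
Then for any $x \in A$ the infimum in \eqref{ALRW_Def} is attained at the unique point $p = - h(x)$, and
$$
\mathscr{L}(x, \lambda, c) = f(x) + \langle \lambda, h(x) \rangle + c \| h(x) \|^2,
$$
i.e. one recovers the classical quadratic augmented Lagrangian (up to the scaling of the penalty term).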
\begin{remark}
It is worth mentioning that in the context of the theory of augmented Lagrangian functions, a vector
$\lambda^* \in \Lambda$ is a strictly exact tuning parameter of the function $\mathscr{L}(x, \lambda, c)$ iff
$\lambda^*$ \textit{supports an exact penalty representation of the problem} $(\mathcal{P})$ (see~\cite{RockWets},
Definition~11.60). Furthermore, if the infimum in \eqref{ALRW_Def} is attained for all $x$, $\lambda$ and $c$, then the
strict global parametric exactness of the augmented Lagrangian function $\mathscr{L}(x, \lambda, c)$ is equivalent to
the existence of \textit{an augmented Lagrange multiplier} (see~\cite{RockWets}, Theorem~11.61, and
\cite{Dolgopolik_ALRW}, Proposition~4 and Corollary~1). Furthermore, in this case $\lambda^*$ is a strictly exact tuning
parameter iff it is an augmented Lagrange multiplier.
\end{remark}
\begin{remark}
Clearly, the definitions of strict global parametric exactness and global parametric exactness do not coincide in the
case of the augmented Lagrangian function $\mathscr{L}(x, \lambda, c)$ defined above. For example, let $P$ be a normed
space, $\Lambda$ be the topological dual of $P$, $\sigma(p) = \| p \|^2$, and
$$
\Phi(x, p) = \begin{cases}
f(x) + \max\{ - 1, - \| p \| \}, & \text{if } x \in \Omega, \\
+ \infty, & \text{if } x \notin \Omega.
\end{cases}
$$
Then, as is easy to check, $\mathscr{L}(x, \lambda, c)$ is globally parametrically exact with the exact tuning
parameter $\lambda^* = 0$ and $c^*(\lambda^*) = 0$, but it is not strictly globally parametrically exact, since
$\mathscr{L}(x, \lambda, c) < f(x)$ for all $x \in \Omega$, $\lambda \in \Lambda$ and $c > 0$. When one compares
strict global parametric exactness with global parametric exactness for the augmented Lagrangian
$\mathscr{L}(x, \lambda, c)$, the strict notion appears to be the more natural one. Apart from the
many connections of strict global parametric exactness with existing results on the augmented Lagrangian function
$\mathscr{L}(x, \lambda, c)$ pointed out above, the application of the localization principle leads to simpler results
in the case of strict global parametric exactness. In particular, it
is rather difficult to verify that the validity of the condition $\Omega^* \cap \argmin_{x \in A} \mathscr{L}(x,
\lambda^*, c) \ne \emptyset$ implies the global parametric exactness, while the condition
\begin{equation} \label{RW_ALF_EquivExactnessCond}
\Omega^* \cap \argmin_{x \in A} \mathscr{L}(x, \lambda^*, c) \ne \emptyset, \quad
\min_{x \in A} \mathscr{L}(x, \lambda^*, c) = f^*
\end{equation}
is equivalent to the strict global parametric exactness of $\mathscr{L}(x, \lambda, c)$ under some natural assumptions
(see Theorem~\ref{Thrm_LocPrin_ALRW} below).
\end{remark}
Recall that the augmenting function $\sigma$ is said \textit{to have a valley at zero} iff for any neighbourhood
$U \subset P$ of zero there exists $\delta > 0$ such that $\sigma(p) \ge \delta$ for any $p \in P \setminus U$. The
assumption that the augmenting function $\sigma$ has a valley at zero is widely used in the literature on augmented
Lagrangian functions (see, e.g.,~\cite{BurachikRubinov,ZhouYang2009,ZhouYang2012,ZhouZhouYang2014}).
\begin{theorem}[Localization Principle for Augmented Lagrangian Functions] \label{Thrm_LocPrin_ALRW}
Suppose that the following assumptions are valid:
\begin{enumerate}
\item{$A$ and $\Omega$ are closed;
\label{Assumpt_ALRW_Closedness}
}
\item{$f$ and $\mathscr{L}(\cdot, \lambda, c)$ for all $\lambda \in \Lambda$ and $c > 0$ are l.s.c. on $A$;}
\item{$\Phi$ is l.s.c. on $A \times \{ 0 \}$;}
\item{$\sigma$ has a valley at zero;
\label{Assumpt_ALRW_ValleyAtZero}
}
\item{there exists $r > 0$ such that for any $c \ge r$, $x \in A$ and $\lambda \in \Lambda$ one has
$$
\argmin_{p \in P} \Big( \Phi(x, p) - \langle \lambda, p \rangle + c \sigma(p) \Big) \ne \emptyset
$$
i.e. the infimum in \eqref{ALRW_Def} is attained.
\label{Assumpt_ALRW_InfAttainment}
}
\end{enumerate}
Then the augmented Lagrangian function $\mathscr{L}(x, \lambda, c)$ is strictly globally parametrically exact if and
only if there exist $\lambda^*$ and $c_0 > 0$ such that $\mathscr{L}(x, \lambda, c)$ is locally parametrically exact at
every globally optimal solution of the problem $(\mathcal{P})$ with the exact tuning parameter $\lambda^*$,
$$
\mathscr{L}(x^*, \lambda^*, c) = f^* \quad \forall x^* \in \Omega^* \quad \forall c \ge c_0,
$$
and one of the following two conditions is valid:
\begin{enumerate}
\item{the function $\mathscr{L}(x, \lambda, c)$ is non-degenerate for $\lambda = \lambda^*$;
\label{Assumpt_ALRW_1}
}
\item{the set $\{ x \in A \mid \mathscr{L}(x, \lambda^*, c_0) < f^* \}$ is bounded.
\label{Assumpt_ALRW_2}
}
\end{enumerate}
\end{theorem}
\begin{proof}
The fact that the validity of \eqref{RW_ALF_EquivExactnessCond} is equivalent to strict global parametric exactness of
the function $\mathscr{L}(x, \lambda, c)$ follows directly from \cite{Dolgopolik_ALRW}, Proposition~4
and Corollary~1. Furthermore, by \cite{Dolgopolik_ALRW}, Proposition~8, the function $\mathscr{L}(x, \lambda, c)$ is
a penalty-type separating function for any $\lambda \in \Lambda$. Then applying
Theorems~\ref{Thrm_LocPrin_Param_Strict} and \ref{Thrm_LocPrin_Param_Strict_Sublevel} one obtains the desired result.
\end{proof}
\begin{remark}
Note that under the assumptions of the theorem the multiplier $\lambda^*$ is a strictly exact tuning parameter
if and only if it is an augmented Lagrange multiplier \cite{Dolgopolik_ALRW}. Thus, the theorem above, in essence,
contains necessary and sufficient conditions for the existence of augmented Lagrange multipliers for the problem
$(\mathcal{P})$. See~\cite{Dolgopolik_ALRW} for applications of this theorem to some particular optimization
problems.
\end{remark}
Note that from the localization principle it follows that for the strict global parametric exactness of the augmented
Lagrangian $\mathscr{L}(x, \lambda, c)$ it is necessary that there exists a tuning parameter $\lambda^* \in \Lambda$
such that $\lambda^*$ is a locally exact tuning parameter at \textit{every} globally optimal solution of the problem
$(\mathcal{P})$. One can give a simple interpretation of this condition in the case when $\mathscr{L}(x, \lambda, c)$ is
a proximal Lagrangian. Namely, let $\mathscr{L}(x, \lambda, c)$ be the proximal Lagrangian (see~\cite{RockWets},
Example~11.57), and suppose that it is strictly globally parametrically exact with a strictly exact tuning parameter
$\lambda^* \in \Lambda$. By the definition of strict global exactness, any globally optimal solution $x^*$ of the
problem $(\mathcal{P})$ is a global minimizer of the function $\mathscr{L}(\cdot, \lambda^*, c)$ for all sufficiently large $c$.
Then applying the first order necessary optimality condition to the function $\mathscr{L}(\cdot, \lambda, c)$ one can
easily verify that under natural assumptions the pair $(x^*, \lambda^*)$ is a KKT-point of the problem $(\mathcal{P})$
for any $x^* \in \Omega^*$ (see~\cite{ShapiroSun}, Proposition~3.1). Consequently, one gets that for the strict global
parametric exactness of the augmented Lagrangian function $\mathscr{L}(x, \lambda, c)$ it is \textit{necessary} that
there exists a Lagrange multiplier $\lambda^*$ such that the pair $(x^*, \lambda^*)$ is a KKT-point of the problem
$(\mathcal{P})$ for \textit{any} globally optimal solution $x^*$ of this problem. In particular, if there exist two
globally optimal solutions of the problem $(\mathcal{P})$ with disjoint sets of Lagrange multipliers, then the proximal
Lagrangian cannot be strictly globally parametrically exact.
For the sake of completeness, let us mention that in the case of augmented Lagrangian functions, sufficient conditions
for the local exactness are typically derived with the use of sufficient optimality conditions. In particular, the
validity of the second order sufficient optimality conditions at a given globally optimal solution $x^*$
guarantees that the proximal Lagrangian is locally parametrically exact at $x^*$ with the corresponding Lagrange
multiplier being a locally exact tuning parameter (see, e.g., \cite{Bazaraa}, Theorem~9.3.3; \cite{SunLi}, Theorem~2.1;
\cite{LiuTangYang2009}, Theorem~2; \cite{WangZhouXu2009}, Theorem~2.3; \cite{LuoMastroeniWu2010}, Theorems~3.1 and 3.2;
\cite{ZhouXiuWang}, Theorem~2.8; \cite{ZhouZhouYang2014}, Proposition~3.1; \cite{ZhouChen2015}, Theorem~2.3;
\cite{WuLuoYang2014}, Theorem~3, etc.).
\section{Conclusions}
In this paper we developed a general theory of global parametric exactness of separating functions for finite-dimensional
constrained optimization problems. This theory allows one to reduce a constrained optimization problem to an
unconstrained one, provided an exact tuning parameter is known. With the use of the general results obtained in this
article we recovered existing results on the global exactness of linear penalty functions and Rockafellar-Wets'
augmented Lagrangian function. We also obtained new simple necessary and sufficient conditions for the global exactness
of nonlinear and continuously differentiable penalty functions.
\end{document}
\begin{document}
\title{Attention Enables Zero Approximation Error}
\begin{abstract}
Deep learning models have been widely applied in various aspects of daily life. Many variant models based on deep learning structures have achieved even better performances. Attention-based architectures have become almost ubiquitous in deep learning structures. In particular, the transformer model has now surpassed convolutional neural networks in image classification tasks, becoming the most widely used tool. However, the theoretical properties of attention-based models are seldom considered. In this work, we show that with suitable adaptations, the single-head self-attention transformer with a fixed number of transformer encoder blocks and free parameters is able to generate any desired polynomial of the input with no error. The number of transformer encoder blocks is the same as the degree of the target polynomial. Even more strikingly, we find that the transformer encoder blocks in this model do not need to be trained. As a direct consequence, we show that the single-head self-attention transformer with increasing numbers of free parameters is universal. These surprising theoretical results clearly explain the outstanding performance of the transformer model and may shed light on future modifications in real applications. We also provide some experiments to verify our theoretical results.
\end{abstract}
\section{Introduction}
By imitating the structure of brain neurons, deep learning models have replaced traditional statistical models in almost every aspect of applications, becoming the most widely used machine learning tools \cite{lecun2015deep,goodfellow2016deep}. Structures of deep learning are also constantly evolving from fully connected networks to many variants such as convolutional networks \cite{krizhevsky2012imagenet}, recurrent networks \cite{mikolov2010recurrent} and the attention-based transformer model \cite{dosovitskiy2020image}. Attention-based architectures were first introduced in the areas of natural language processing, and neural machine translation \cite{bahdanau2014neural,vaswani2017attention,ott2018scaling}, and now an attention-based transformer model has also become state-of-the-art in image classification \cite{dosovitskiy2020image}. However, compared with significant achievements and developments in practical applications, theoretical properties of attention-based transformer models are not well understood.
Let us describe briefly some current theoretical progress of attention-based architectures. The universality of a sequence-to-sequence transformer model is first established in \cite{yun2019transformers}. After that, a sparse attention mechanism, BIGBIRD, is proposed by \cite{zaheer2020big} and the authors further show that the proposed transformer model is universal if its attention structure contains the star graph. Later, \cite{yun2020n} provides a unified framework to analyze sparse transformer models. Recently, \cite{shi2021sparsebert} studies the significance of different positions in the attention matrix during pre-training and shows that diagonal elements in the attention map are the least important compared with other attention positions. From a statistical machine learning point of view, the authors in \cite{gurevych2021rate} propose a classifier based on a transformer model and show that this classifier can circumvent the curse of dimensionality.
The models considered in the above works all contain attention-based transformer encoder blocks. It is worth noting that the biggest difference between a transformer encoder block and a traditional neural network layer is that it introduces an inner product operation, which not only makes its actual performance better but also provides more room for theoretical derivations.
In this paper, we consider the theoretical properties of the single-head self-attention transformer with suitable adaptations. Different from segmenting $x$ into small pieces \cite{dosovitskiy2020image} and capturing local information, we consider a global pre-processing of $x$ and propose a new vector structure for the inputs of the transformer encoder blocks. In this structure, in addition to the global information we obtain from data pre-processing, we place a one-hot vector to represent different features through the idea of positional encoding and place a zero vector to store the output values after each transformer encoder block. With such a special design, we can fix all transformer encoder blocks such that no training is needed for them, while they are still able to realize the multiplication operation and store values in the zero positions. By applying a well-known result in approximation theory \cite{zhou2018deep} stating that any polynomial $Q \in \mathcal{P}_q \left( \mathbb{R}^d\right)$ of degree at most $q$ can be represented by a linear combination of different powers of ridge forms $\xi_k \cdot x$ of $x\in \mathbb{R}^d$, we prove that the proposed model can generate any polynomial of degree $q$ with $q$ transformer encoder blocks and a fixed number of free parameters. As a direct consequence, we show that the proposed model is universal if we let the number of free parameters and transformer encoder blocks go to infinity. Our theoretical results are also verified by experiments on synthetic data. In summary, the contributions of our work are as follows:
\begin{itemize}
\item We propose a new pre-processing method that captures global information and a new structure of input vectors of transformer encoder blocks.
\item With the special structure of the inputs of the transformer encoder blocks, we can design all the transformer encoder blocks by hand in a sparse way and prove that the single-head self-attention transformer with $q$ transformer encoder blocks and a fixed number of free parameters is able to generate any desired polynomial of degree $q$ of the input with no error.
\item As a direct consequence, we show that the single-head self-attention transformer with increasing numbers of free parameters and transformer encoder blocks is universal.
\item We apply our model to noisy regression tasks with synthetic data. Our experiments show that the proposed model performs much better than traditional fully connected neural networks with a comparable number of free parameters.
\end{itemize}
\begin{figure*}
\caption{The Architecture of the single-head self-attention transformer. $W^Q, W^K, W^V$ stand for the query matrix, the key matrix, and the value matrix respectively. MatMul stands for the matrix multiplication.}
\label{arc}
\label{fig-motivation}
\end{figure*}
\section{Transformer Structures}
In this section, we formally introduce the single-head self-attention transformer considered in this paper. The overall architecture is shown in Figure \ref{arc}.
\subsection{Data Pre-processing}
\label{processing}
For an input $x \in \mathbb{R}^{d}$, which can be a vector or the flattened pixels of an image, the usual pre-processing method is to segment it into small pieces and then apply linear transforms, which can be thought of as extracting local features. Instead, we propose to directly apply a full matrix $F \in \mathbb{R}^{n \times d}$ to get global features $Fx = t \in \mathbb{R}^{n}$, where $F= [\xi_1, \cdots, \xi_{n} ]^\top$ with $\xi_i \in \mathbb{R}^{d}$ and $\left\|\xi_i\right\|\leq 1$. The matrix $F$ is obtained through the training process. Then we have $n$ global features $t_i = \langle \xi_i , x \rangle$ of the input $x$. Now we introduce the structure of the inputs for the transformer encoder blocks as follows,
$$z_i = [t_i, \overbrace{0,\cdots,0,\underbrace{1}_{(i+1)-\text{th entry}},0,\cdots,0}^{n}, \overbrace{0,\cdots,0}^q ,1 ]^\top,$$
for $i=1,\cdots,n$. Each one of them is a sparse vector in $\mathbb{R}^{n+q+2}$ and all the $n$ vectors are inputs for the transformer encoder blocks.
As mentioned above, we put a one-hot vector of dimension $n$ inside $z_i$ representing the different features $t_i$ of the input $x$, which is similar to the idea of positional encoding. We also place a $q$-dimensional zero vector to store the outputs from each transformer encoder block. At the last position, we place a constant $1$ for ease of computation in the transformer encoder blocks.
We use
$$\mathcal{F} : \mathbb{R}^d \rightarrow \mathbb{R}^{(n+q+2) \times n}$$
to denote the above transformation such that $$\mathcal{F}(x) = [z_1, \cdots,z_n].$$
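The following sketch (only an illustration; the function names and the sample values of $F$ are ours) builds the matrix $\mathcal{F}(x)$ for a given input $x$ and matrix $F$.
\begin{verbatim}
import numpy as np

def preprocess(x, F, q):
    # x : (d,) input vector;  F : (n, d) matrix with rows xi_i;  q : target degree
    n, d = F.shape
    t = F @ x                                  # global features t_i = <xi_i, x>
    Z = np.zeros((n + q + 2, n))
    Z[0, :] = t                                # first entry of each column: t_i
    Z[1:n + 1, :] = np.eye(n)                  # one-hot positional part e_i
    Z[-1, :] = 1.0                             # constant 1 in the last entry
    return Z                                   # the q slots in between stay at zero

# Example with d = 2, n = 3, q = 2
F = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) / np.array([[1.0], [1.0], [np.sqrt(2)]])
Z = preprocess(np.array([0.5, -1.0]), F, q=2)
print(Z.shape)                                 # (7, 3)
\end{verbatim}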
\subsection{Single-Head Self-Attention Transformer Encoder Blocks}
One transformer encoder block contains a self-attention layer and a fully connected layer with a linear transformation. In the self-attention layer, we have one query matrix
$$W^Q \in \mathbb{R}^{(n+1) \times (n+q+2)},$$
one key matrix
$$W^K\in \mathbb{R}^{(n+1) \times (n+q+2)},$$
and one value matrix
$$W^V\in \mathbb{R}^{(n+q+2) \times (n+q+2)}.$$
For every input $z_i$, we calculate the query vector
$$q_i=W^{Q} z_i\in \mathbb{R}^{n+1},$$
the key vector
$$k_i=W^{K} z_i \in \mathbb{R}^{n+1},$$
and the value vector
$$v_i=W^{V} z_i \in \mathbb{R}^{n+q+2}.$$
With all these values, we have $n$ attention vectors
$$\alpha_i= [\langle q_i,k_1\rangle, \cdots ,\langle q_i, k_i \rangle, \cdots,\langle q_i, k_n \rangle]^\top \in \mathbb{R}^{n}.$$
In our proposed model, the softmax function in the self-attention layer is replaced by the one-hot maximum function $\hat{m}(\alpha_i): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, which keeps the largest value unchanged and sets the other values to $0$.
We use the notation
$$\mathcal{A}_{W^Q,W^K,W^V} : \mathbb{R}^{(n+q+2) \times n} \rightarrow \mathbb{R}^{(n+q+2) \times n}$$ to denote the mapping of the self-attention layer.
Then the output of the self-attention layer is given by
$$\mathcal{A}_{W^Q,W^K,W^V}(z_1, \cdots, z_n) = [\hat{z}_1, \cdots, \hat{z}_n],$$
where
$$\hat{z}_i = z_i + W^V Z\hat m({\alpha}_i),$$
with $Z = [z_1, \cdots, z_n]$.
The fully connected layer with a linear transformation contains two matrices
$$W_1 \in \mathbb{R}^{2 \times (n+q+2)},$$
and
$$W_2 \in \mathbb{R}^{(n+q+2) \times 2},$$
and two bias vectors $b_1\in \mathbb{R}^{2}$, $b_2\in \mathbb{R}^{n+q+2}$. We use the notation
$$\mathcal{B}_{W_1,W_2,b_1,b_2} : \mathbb{R}^{(n+q+2) \times n} \rightarrow \mathbb{R}^{(n+q+2) \times n}$$
to denote the mapping of the fully connected layer with a linear transformation. Then we have
$$\mathcal{B}_{W_1,W_2,b_1,b_2}(\hat{z}_1, \cdots, \hat{z}_n) = [z'_1, \cdots, z_n'],$$
where
$$z'_i = \hat{z}_i + W_2 \sigma\left(W_1\hat{z}_i+b_1\right) +b_2,$$
and $\sigma$ is the ReLU activation function acting component-wise.
Now we define our single-head self-attention transformer model with $\ell$ transformer encoder blocks as
$$\mathcal{T}^\ell(x) = \mathcal{B}^\ell \circ \mathcal{A}^\ell \circ \cdots \circ \mathcal{B}^1 \circ \mathcal{A}^1 \circ \mathcal{F} (x),$$
where $\mathcal{F}, \mathcal{A}^i, \mathcal{B}^i$ are the mappings defined above.
We further concatenate the output matrix into one vector and apply a linear transformation with a bias term to get our final output, that is,
$$ \mathcal{C}^\ell(x) = \beta \cdot \textbf{concat}\left( \mathcal{T}^\ell(x)\right) + b,$$
with $\beta \in \mathbb{R}^{n(n+q+2)}$ and $b \in \mathbb{R}$. We require the vector $\beta$ to possess a sparse structure which will be shown in the proof. The values in $\beta$ and $b$ are obtained through the training process. The layer normalization is not considered in our model.
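The following sketch (only an illustration; the names are ours, and, as stated above, no layer normalization is used) implements the forward pass of one transformer encoder block with the one-hot maximum in place of the softmax. The final output $\mathcal{C}^\ell(x)$ is then obtained by flattening the resulting matrix and taking its inner product with $\beta$, plus the bias $b$.
\begin{verbatim}
import numpy as np

def one_hot_max(a):
    # Keep only the largest entry of a, set the other entries to zero
    out = np.zeros_like(a)
    j = np.argmax(a)
    out[j] = a[j]
    return out

def encoder_block(Z, WQ, WK, WV, W1, W2, b1, b2):
    # Z : (n+q+2, n) matrix whose columns are the input vectors z_i
    n = Z.shape[1]
    Q, K, V = WQ @ Z, WK @ Z, WV @ Z           # query, key and value vectors (columns)
    Z_hat = Z.copy()
    for i in range(n):
        alpha_i = K.T @ Q[:, i]                # attention scores <q_i, k_j>, j = 1..n
        Z_hat[:, i] += V @ one_hot_max(alpha_i)
    relu = lambda u: np.maximum(u, 0.0)
    return Z_hat + W2 @ relu(W1 @ Z_hat + b1[:, None]) + b2[:, None]
\end{verbatim}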
\section{Main Results}
In this section, we present our main result showing that the single-head self-attention transformer model can generate any desired polynomial with a fixed number of transformer encoder blocks and free parameters. Before stating our main theorem, we first present two important lemmas. For the following lemma, we construct a sparse single-head self-attention block with fixed design which is able to realize the multiplication operation and store different products in the output vectors simultaneously.
\begin{lemma}\label{main result:one layer}
For all $n$ input vectors in the form of
$$z_i = [t_i, e_i, \overbrace{x_i,y_i,0,\cdots,0}^q ,1 ]^\top \in \mathbb{R}^{(n+q+2) \times 1},$$
with $t_i, x_i, y_i \in \mathbb{R}$ and absolute values bounded by some known constant $M$ for $i=1,\cdots,n$, there exists a sparse single-head self-attention transformer encoder block with fixed matrices $W^Q$, $W^K$, $W^V$, $W_1$, $W_2$ and vectors $b_1$, $b_2$ that can produce output vectors as
$$z'_i = [t_i,e_i, \overbrace{x_i,y_i,-x_iy_i,0,\cdots,0}^q ,1 ]^\top \in \mathbb{R}^{(n+q+2) \times 1},$$
where $e_i$ denotes the one-hot vector of dimension $n$ with value $1$ in the $i$-th position of $e_i$. The softmax function is replaced by one hot maximum function. The number of non-zero entries is $2n+8$.
\end{lemma}
\begin{remark}
The above lemma shows that a fixed single-head self-attention transformer encoder block is able to simultaneously calculate the product of two elements in all $n$ input vectors within the same two entries and store the negative value in the same $0$ positions. Since the construction is fixed, these transformer encoder blocks in the whole model do not need to be trained.
\end{remark}
Now we introduce a well-known result in approximation theory showing that any polynomial function $Q \in \mathcal{P}_q \left( \mathbb{R}^d\right)$ of degree at most $q$ can be represented by a linear combination of different powers of ridge forms $\xi_k \cdot x$ of $x\in \mathbb{R}^d$. The following lemma is first presented and proved in \cite{zhou2018deep} and also plays an important role in the analysis of deep convolutional neural networks \cite{zhou2020universality,mao2021theory}.
\begin{lemma}\label{lemma: polynomial}
Let $d \in \mathbb{N}$ and $q \in \mathbb{N}$. Then there exists a set $\left\{ \xi_k\right\}_{k=1}^{n_q} \subset \left\{ \xi \in \mathbb{R}^d : \left\|\xi\right\|=1\right\}$ of vectors with $\ell_2-$norm $1$ such that for any $Q \in \mathcal{P}_q \left( \mathbb{R}^d\right)$ we can find a set of coefficients $\left\{\beta_{k, s}: k=1,\cdots, n_q, s = 1,\cdots, q\right \} \subset \mathbb{R}$ such that
\begin{equation}
\begin{aligned}
Q(x) = Q(0) + \sum_{k=1}^{n_q} \sum_{s=1}^q \beta_{k,s} \left( \xi_k \cdot x\right)^s, ~~~x \in \mathbb{R}^d,
\end{aligned}
\end{equation}
where $n_q = \binom{d-1+q}{q}$ is the dimension of $\mathcal{P}^h_q(\mathbb{R}^d),$ the space of homogeneous polynomials on $\mathbb{R}^d$ of degree $q$.
\end{lemma}
\begin{remark}
The above lemma shows that any polynomial $Q \in \mathcal{P}_q \left( \mathbb{R}^d\right)$ can be uniquely determined by $Q(0)$, $\beta_{k,s}$ and $\xi_k$. So by applying the above lemma, we can perfectly reproduce any polynomial with proper construction.
\end{remark}
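To illustrate the lemma, consider the case $d = 2$ and $q = 2$, so that $n_q = \binom{3}{2} = 3$. One admissible choice of directions (given here only as an illustration) is $\xi_1 = (1, 0)$, $\xi_2 = (0, 1)$ and $\xi_3 = \frac{1}{\sqrt{2}}(1, 1)$. Then, for instance,
$$
x_1 x_2 = (\xi_3 \cdot x)^2 - \tfrac{1}{2} (\xi_1 \cdot x)^2 - \tfrac{1}{2} (\xi_2 \cdot x)^2,
$$
and, since the remaining monomials $x_1$, $x_2$, $x_1^2$, $x_2^2$ are themselves powers of $\xi_1 \cdot x$ and $\xi_2 \cdot x$, every $Q \in \mathcal{P}_2(\mathbb{R}^2)$ can be written in the above form with this choice of $\{\xi_k\}$.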
Now we are ready to state our main result on the single-head self-attention transform model.
\begin{theorem}\label{main result:polynomial}
Let $B>0$ and $q \in \mathbb{N}$. For any polynomial function $Q\in \mathcal{P}_q (\mathbb{R}^d)$ of degree at most $q$, there exists a single-head self-attention transformer model with $q$ transformer encoder blocks such that the output function $\mathcal{C}^q$ equals $Q$ on $\left\{x\in \mathbb{R}^d: \left\|x\right\| \leq B\right\}$, that is,
$$\mathcal{C}^q(x) = Q(x),~~\forall \left\|x\right\| \leq B.$$
The number of free parameters is less than $d^{q+1}+ qd^q +1$; these parameters come from $F$, $\beta$ and $b$. The number of non-zero entries in this model is less than $d^{q+1} +3qd^q + 8q +1.$
\end{theorem}
\begin{remark}
The above theorem shows a very strong property of the self-attention transformer model: it can generate any desired polynomial with a finite number of free parameters. As we can see, the degree of the polynomial is reflected in the number of transformer encoder blocks, showing that the more blocks the transformer has, the more complex the polynomial it can represent. Clearly, this result goes beyond what classical deep learning models without attention-based structures can achieve, in at least two aspects. First, since the linear combination of the output units of a traditional ReLU neural network is only a piece-wise linear function of the input, no matter how many (finitely many) layers and free parameters it has, it can never produce a polynomial of the input with no error. Second, the transformer encoder blocks in our construction only serve as the realization of the multiplication operation. The non-zero values are all pre-designed constants, so no training is needed for these blocks. We only need to train the free parameters in $F$, $\beta$ and $b$.
\end{remark}
As a direct consequence of the above result, the proposed single-head self-attention transformer model is universal.
\begin{corollary}
Let $d\in \mathbb{N}$. For any bounded continuous function $f$ on $[0,1]^d$, there exists a sequence of single-head self-attention transformers with increasing numbers of free parameters and transformer encoder blocks such that
$$\lim_{q \rightarrow \infty} \left\|\mathcal{C}^q - f\right\|_{C([0,1]^d)} = 0.$$
\end{corollary}
The above result is a simple application of the denseness of the polynomial set, which shows that the transformer model discussed in our paper is universal if we let the number of free parameters and transformer encoder blocks go to infinity.
\section{Comparison and Discussion}
In this section, we compare our work with some existing theoretical results on the transformer model \cite{yun2019transformers,yun2020n,zaheer2020big,shi2021sparsebert}. Since these works use methods similar to those in \cite{yun2019transformers}, we focus our comparison on the theoretical contributions of \cite{yun2019transformers}.
In \cite{yun2019transformers}, the authors show that transformer models are universal approximators of continuous sequence-to-sequence functions with compact support with trainable positional encoding. The notion of contextual mappings is also formalized, and it is shown that the attention layers can compute contextual mappings, where each unique context is mapped to a unique vector.
The universality result is achieved in three key steps: \textbf{Step 1}. Approximate continuous permutation equivariant functions with piece-wise constant functions $\bar{\mathcal{F}}_{PE}(\delta)$. \textbf{Step 2}. Approximate $\bar{\mathcal{F}}_{PE}(\delta)$ with modified Transformers $\bar{\mathcal{T}}$. \textbf{Step 3}. Approximate modified Transformers $\bar{\mathcal{T}}$ with original Transformers $\mathcal{T}$.
In order to express the above steps more clearly, we show the idea of proof as follows. For an input $X \in\mathbb{R}^{d\times n}$, the authors first use a series of feed-forward layers that can quantize $X$ to an element $L$ on the extended grid $\mathbb{G}^+_\delta := \left\{-\delta^{-nd},0,\delta,\cdots,1-\delta\right\}^{d\times n}$. Activation functions that are applied to these layers are piece-wise linear functions with at most three pieces, and at least one piece is constant. Then, the authors use a series of self-attention layers in the modified transformer network to implement a contextual mapping $q(L)$. After that, a series of feed-forward layers in the modified transformer network can map elements of the contextual embedding $q(L)$ to create a desired approximator $\bar{g}$ of the piece-wise constant function $\bar{f} \in \bar{\mathcal{F}}_{PE}(\delta)$ which is the approximator of the target function.
We would like to highlight the major differences between our work and theirs. First, the output functions are different. In the above work, the goal is to approximate a continuous function from $\mathbb{R}^{n\times d}$ to $\mathbb{R}^{n\times d}$, which focuses on sequence-to-sequence functions. In our setting, we use the linear combination of the units in the last layer as our output, which focuses on regression and classification tasks. Second, the two structures we consider are slightly different. The self-attention layers and feed-forward layers in their transformer model are arranged in an alternating manner. Although this may explain the different roles of the different types of layers, it changes the structure of the transformer model used in real applications. In our setting, we preserve the integrity of the transformer encoder blocks and analyze each transformer encoder block as a whole. Last but not least, the ultimate goals and core ideas of the theoretical analyses of the two papers are different. Because the inner product operation is the biggest difference between the attention layer and the traditional network layer, we focus our analysis on this special structure. We find that if we make good use of this inner product structure, then, from the perspective of theoretical analysis, we do not have to rely on approximation but can directly generate the function we want, and the exact construction only requires a finite number of free parameters together with fixed transformer encoder blocks. This reflects the different thinking behind our theory and distinguishes our method from approaches that approximate target functions by piece-wise constant functions.
\begin{figure*}
\caption{For the target polynomial $f^*_1$, the above 3-D surface plots are the output functions of three different models after the training process. ATTENTION stands for our single-head self-attention transformer model, while NN$_{depth}$ and NN$_{width}$ stand for the two types of fully connected neural networks described in Section~\ref{experiments}.}
\label{f1}
\end{figure*}
\section{Experiments: Learning Polynomial Functions}
\label{experiments}
In this section, we verify our main results and demonstrate the superiority of our single-head self-attention transformer model by conducting experiments on two groups of synthetic data.
\paragraph{Target functions}
For these two experiments, we consider the noisy regression task
$$y = f^*(\textbf{x}) + \epsilon,$$
where $f^*$ is the target polynomial and $\epsilon$ is the standard normal noise.
For the first experiment, in order to visualize the advantages of our proposed model, we consider a simple polynomial,
\begin{equation*}
\begin{aligned}
f^*_1(\textbf{x}) = x_1^2 + x_2^2,
\end{aligned}
\end{equation*}
which satisfies $d=2$ and $q=2$.
For the second experiment, to show the strong expressiveness of our model, we consider a complicated polynomial
\begin{equation*}
\begin{aligned}
&f^*_2(\textbf{x}) =\\
&x_1^5+3x_2^4+2x_3^3+5x_3x_4+3x_5^2+2x_6x_7x_8+2x_9,
\end{aligned}
\end{equation*}
which satisfies $d=10$ and $q=5$.
\paragraph{Data generating process}
For the target function $f^*_1$, we generate 10000 i.i.d. samples $\textbf{x}$ from a multivariate Gaussian distribution $\mathcal{N}(\textbf{0}, \Sigma_1)$ with $\Sigma_1 = \text{diag}(100,100)$. We randomly choose 9000 of them for training and the remaining 1000 for testing. \\
For the target function $f^*_2$, we generate 50000 i.i.d. samples $\textbf{x}$ from a multivariate Gaussian distribution $\mathcal{N}(\textbf{0}, \Sigma_2)$ with $\Sigma_2 = \text{diag}(1,\cdots,1) \in \mathbb{R}^{10\times 10}$. We randomly choose 45000 of them for training and the remaining 5000 for testing.
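The following sketch (only an illustration; the random seed and function names are ours) generates the synthetic data for the first experiment.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)                 # fixed seed, chosen by us

def f1(X):
    return X[:, 0] ** 2 + X[:, 1] ** 2

X = rng.multivariate_normal(np.zeros(2), np.diag([100.0, 100.0]), size=10000)
y = f1(X) + rng.standard_normal(10000)         # standard normal noise

idx = rng.permutation(10000)                   # random train/test split
X_train, y_train = X[idx[:9000]], y[idx[:9000]]
X_test,  y_test  = X[idx[9000:]], y[idx[9000:]]
\end{verbatim}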
\paragraph{Experimental setting}
To demonstrate the power of attention-based structures, we compare our proposed model with two types of ReLU fully connected neural networks with a comparable number of free parameters. Since for a polynomial $Q$ of degree $q$ our proposed model has one linear transformation with matrix $F\in\mathbb{R}^{n_q \times d}$ and $q$ transformer encoder blocks, we use NN$_{depth}$ to denote the fully connected network with $q+1$ layers, and we use NN$_{width}$ to denote the shallow network with $n_q$ units in the hidden layer. For these two fully connected networks, we generate the output value in the same way as in our proposed model, namely as the linear combination of the units in the last layer plus a bias term. The detailed architectures can be found in Appendix~\ref{app_exp}. \\
In all the experiments, we use the SGD optimizer with a one-cycle learning rate schedule \cite{Smith2019SuperconvergenceVF}, with an initial learning rate of 0.0001 and a maximum learning rate of 0.001. For the polynomial $f_1^*$, we train the three models for 600 epochs with batch size 5000, and for the polynomial $f_2^*$, we train the three models for 2000 epochs with batch size 25000. Gradient clipping is used for all three models to avoid exploding gradients at the beginning of training; a sketch of this training loop is given below.
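The sketch below outlines this training loop in PyTorch under our own assumptions (the clipping threshold, the data handling and the squeezing of the model output are our choices); the hyperparameters stated above for $f_1^*$ are used as defaults.
\begin{verbatim}
import torch

def train(model, X_train, y_train, epochs=600, batch_size=5000,
          base_lr=1e-4, max_lr=1e-3, clip=1.0):
    # X_train, y_train : float tensors; model maps a batch to scalar predictions
    data = torch.utils.data.TensorDataset(X_train, y_train)
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=base_lr)
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=max_lr, total_steps=epochs * len(loader),
        div_factor=max_lr / base_lr)           # initial lr = max_lr / div_factor
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb).squeeze(-1), yb)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)  # gradient clipping
            opt.step()
            sched.step()
    return model
\end{verbatim}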
\begin{figure}
\caption{A comparison of the convergence speed and generalization gap between our single-head self-attention model and two types of fully connected neural networks.}
\label{fig-train}
\end{figure}
\paragraph{Experimental results} For the target polynomial $f_1^*$, Figure \ref{f1} demonstrates the strong ability of our proposed model to learn polynomials. With only 41 free parameters, our single-head self-attention transformer can perfectly capture the target function from noisy data. Because their output functions are piece-wise linear, the two types of fully connected neural networks obviously cannot achieve comparable results with so few parameters. \\
For the target function $f_2^*$, Table \ref{t1} and Figure \ref{fig-train} also demonstrate the superior ability of our model to learn a complicated polynomial. Our single-head self-attention transformer is the only one that can fit the ground-truth function exactly, with good convergence speed. Moreover, our model has much better generalization power than both types of fully connected neural networks with a similar number of free parameters.
\begin{table}[t]
\caption{A comparison of three models learning $f_2^*$. MSE$_{Tr}$ and MSE$_{Te}$ stand for the mean-squared error of the training data and the testing data after 2000 epochs training, respectively. We say that a model achieves convergence if the absolute difference of MSE$_{Tr}$ of two consecutive epochs is less than 0.01. \# EPOCHS stands for the number of epochs the model used before achieving convergence, and RUN TIME represents the corresponding running time of the training process. }
\label{t1}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccc}
\toprule & MSE$_{Tr}$ & MSE$_{Te}$ & \# epochs & Run time\footnote{GPU * min on NVIDIA A100 Tensor Core GPU.} \\
Attention& \textbf{0.938} & \textbf{0.109} & \textbf{212} & \textbf{1.9}\\
NN$_{depth}$ & 5.884 & 103.916 & 956& 7.6\\
NN$_{width}$ & 50.282 & 35.662 & 329 & 2.4\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\section{Proof of Main Results}
\begin{proof}[Proof of Lemma \ref{main result:one layer}]
We present explicit constructions of the matrices and biases in a single-head self-attention transformer encoder block. We define $W^Q \in \mathbb{R}^{(1+n) \times (2+n+q)}$ as follows,
\begin{equation*}
W^{Q}=\left[\begin{array}{cccccccc}
{0} & {\cdots} & {0} & {1} & {0} & {\cdots} & {0}& {0} \\
{0} & {2M^2} & {0} & {0} & {\cdots} & {\cdots} & {0}& {0} \\
{0} & {0} & {2M^2} & {0} & {\ddots} & {\ddots} & {0}& {0} \\
{0} & {0} & {0} & {\ddots} & {\ddots} & {\cdots} & {0}& {0} \\
{0} & {0} & {0} & {0} & {2M^2} & {0} & {\cdots}& {0}
\end{array}\right],
\end{equation*}
where the constant $1$ in the first row is in the $(n+2)$-th column, $W^Q_{(t,t)} = 2M^2$ for $t=2,\cdots,n+1$, and all other entries are $0$. Since the inputs are of the form
$$z_i = [t_i, e_i, \overbrace{x_i,y_i,0,\cdots,0}^q ,1 ]^\top \in \mathbb{R}^{(n+q+2) \times 1},$$
where $e_i$ denotes the one-hot vector of dimension $n$ with value $1$ in the $i$-th position, we obtain $q_i\in \mathbb{R}^{(1+n)\times 1}$ as follows
$$q_i=W^{Q} z_i = [x_i, \underbrace{0, \cdots, 0, \overbrace{2M^2}^{\text{$(i+1)$-th entry}}, 0, \cdots,0}_{n}]^\top .$$
We define $W^{K} \in \mathbb{R}^{(1+n) \times (2+n+q)}$ as follows
\begin{equation*}
W^{K}=\left[\begin{array}{cccccccc}
{0} & {\cdots} & {0} & {1} & {\cdots} & {\cdots} & {0}& {0} \\
{0} & {1} & {0} & {0} & {\cdots} & {\cdots} & {0}& {0} \\
{0} & {0} & {1} & {0} & {\ddots} & {\ddots} & {0}& {\vdots} \\
{0} & {0} & {0} & {\ddots} & {\ddots} & {\cdots} & {0}& {0} \\
{0} & {0} & {0} & {0} & {1} & {0} & {\cdots}& {0}
\end{array}\right].
\end{equation*}
where the constant $1$ in the first row is in the $(n+3)$-th column, $W^K_{(t,t)} = 1$ for $t=2,\cdots,n+1$, and all other entries are $0$. Then we have $k_i \in \mathbb{R}^{(1+n)\times 1}$ given by
$$k_i=W^{K} z_i = [y_i, \underbrace{0, \cdots, 0, \overbrace{1}^{\text{$(i+1)$-th entry}}, 0, \cdots,0}_{n}]^\top .$$
We can easily see that for each $i$, if $j = i$ then $\langle q_i, k_j\rangle = x_iy_i+2M^2$, while if $j \neq i$ then $\langle q_i, k_j\rangle = x_iy_j$. By the conditions $\left|x_i\right| < M$ and $\left|y_i\right| < M$ we have $|x_iy_i|, |x_iy_j| < M^2$, hence $x_iy_i+2M^2 > M^2 > x_iy_j$.
Then the attention vector $\alpha_i$ is
$$\alpha_i= [\langle q_i,k_1\rangle, \cdots ,\langle q_i, k_i \rangle, \cdots,\langle q_i, k_n \rangle]^\top \in \mathbb{R}^{n\times 1}.$$
Since we apply the one-hot maximum function to $\alpha_i$, by the construction we have
$$\hat{\alpha}_i= [0, \cdots ,0, \overbrace{x_i y_i + 2M^2}^{i-\text{th entry}}, 0,\cdots,0]^\top \in \mathbb{R}^{n\times 1}.$$
For the matrix $W^V \in \mathbb{R}^{(n+q+2)\times(n+q+2)}$, we set
$$
W^V_{i,j}= \left\{
\begin{array}{rcl}
1,& &i=n+4, j=n+q+2,\\
0,& &\text{others.}
\end{array}
\right.
$$
Then we know that for $i=1,\cdots,n$,
$$W^V z_i = [0,\cdots,0, \overbrace{1}^{(n+4)- \text{th entry}}, 0,\cdots,0 ]^\top.$$
By the equation
$$\hat{z}_i = z_i + W^V Z\hat{\alpha}_i,$$
we know that the outputs $\hat{z}_i \in \mathbb{R}^{(n+q+2) \times 1}$ of the self-attention layer are
$$\hat{z}_i = [t_i, e_i, \overbrace{x_i,y_i,x_iy_i+2M^2,0,\cdots,0}^q ,1 ]^\top ,$$
where $e_i$ denotes the one-hot vector of dimension $n$ with value $1$ in the $i$-th position of $e_i$.
Now we construct the fully connected layer in the transformer.
For $W_1 \in \mathbb{R}^{2 \times (2+n+q)}$, we let
$$
W_{1,(i,j)}= \left\{
\begin{array}{rcl}
1,& &i=1, j=n+4,\\
-1,& &i=2, j=n+4,\\
0,& &\text{others.}
\end{array}
\right.
$$
and $b_1= [0,0]^\top$. Then we have
$$\sigma(W_1 \hat{z}_i +b_1) = [\sigma(x_iy_i+2M^2),\sigma(-x_iy_i - 2M^2)]^\top.$$
For $W_2 \in \mathbb{R}^{(n+q+2) \times 2}$, we let
$$
W_{2,(i,j)}= \left\{
\begin{array}{rcl}
-2,& &i=n+4, j=1,\\
2,& &i=n+4, j=2,\\
0,& &\text{others.}
\end{array}
\right.
$$
And we let $b_2 \in \mathbb{R}^{(n+q+2) \times 1}$ to be
$$
b_{2,(i)}= \left\{
\begin{array}{rcl}
2M^2,& &i=n+4,\\
0,& &\text{others.}
\end{array}
\right.
$$
Then by
$$z'_i = \hat{z}_i + W_2 \sigma\left(W_1\hat{z}_i+b_1\right) +b_2,$$
we have
$$z'_i = [t_i,e_i, \overbrace{x_i,y_i,-x_iy_i,0,\cdots,0}^q ,1 ]^\top \in \mathbb{R}^{(n+q+2) \times 1}.$$
Since we assume that $M$ is known, this construction contains no free parameters. It is easy to see that the number of non-zero entries is $2n+8$. This finishes the proof.
\end{proof}
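For concreteness, the construction in the proof can be verified numerically. The following NumPy sketch (our own illustration; the hard-max attention is implemented literally as the one-hot maximum described above) builds $W^Q$, $W^K$, $W^V$, $W_1$, $W_2$, $b_2$ for a small choice of $n$, $q$, $M$ and checks that the block output stores $-x_iy_i$ in the $(n+4)$-th coordinate.
\begin{verbatim}
import numpy as np

n, q, M = 4, 3, 2.0                     # small example; |x_i|, |y_i| < M
dim = n + q + 2
rng = np.random.default_rng(1)
x, y, t = (rng.uniform(-M, M, n), rng.uniform(-M, M, n),
           rng.uniform(-1, 1, n))

# inputs z_i = [t_i, e_i, x_i, y_i, 0,...,0, 1]^T (columns of Z)
Z = np.zeros((dim, n))
Z[0], Z[1:n+1], Z[n+1], Z[n+2], Z[-1] = t, np.eye(n), x, y, 1.0

WQ = np.zeros((1+n, dim)); WQ[0, n+1] = 1.0
WQ[1:, 1:n+1] = 2*M**2*np.eye(n)
WK = np.zeros((1+n, dim)); WK[0, n+2] = 1.0
WK[1:, 1:n+1] = np.eye(n)
WV = np.zeros((dim, dim)); WV[n+3, -1] = 1.0

S = (WQ @ Z).T @ (WK @ Z)               # S[i, j] = <q_i, k_j>
hard = np.zeros_like(S)                 # one-hot maximum: keep max, zero rest
hard[np.arange(n), S.argmax(axis=1)] = S.max(axis=1)
Zhat = Z + WV @ Z @ hard.T              # residual self-attention update

W1 = np.zeros((2, dim)); W1[0, n+3], W1[1, n+3] = 1.0, -1.0
W2 = np.zeros((dim, 2)); W2[n+3, 0], W2[n+3, 1] = -2.0, 2.0
b2 = np.zeros((dim, 1)); b2[n+3] = 2*M**2
relu = lambda a: np.maximum(a, 0.0)
Zout = Zhat + W2 @ relu(W1 @ Zhat) + b2 # feed-forward part with residual

assert np.allclose(Zout[n+3], -x*y)     # (n+4)-th entry stores -x_i*y_i
\end{verbatim}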
Now we are ready to prove Theorem \ref{main result:polynomial}.
\begin{proof}[Proof of Theorem \ref{main result:polynomial}]
To prove our main result on polynomial generation, we first apply Lemma \ref{lemma: polynomial}. Since the matrix $F \in \mathbb{R}^{n_q\times d}$ can be obtained by training, we set $F= [\xi_1, \cdots, \xi_{n_q} ]^\top$ and let $\xi_i$ to be those vectors we need in Lemma \ref{lemma: polynomial} for $i=1,\cdots,n_q$. Then we know that the inputs for the transformer encoder blocks are
$$z_i = [\xi_i \cdot x, \overbrace{0,\cdots,0,\underbrace{1}_{(i+1)\text{-th entry}},0,\cdots,0}^{n}, \overbrace{0,\cdots,0}^q ,1 ]^\top,$$
for $i=1,\cdots,n_q$. Then we only need to apply Lemma \ref{main result:one layer} $q$ times, with suitable adjustments of the positions of the non-zero entries, to make sure that each product of two vector entries is stored in the correct position.
For the first transformer encoder block, we calculate the product of $\xi_i \cdot x$ and $1$ and place $-\xi_i \cdot x$ in the $(n_q+2)$-th entry. Since $\left\|\xi_i\right\|=1$, if we further assume that $\left\|x\right\| < B$, then $\left|\xi_i \cdot x\right| < B$. Hence we only need to set $M=B$ in Lemma \ref{main result:one layer}, and the output vectors are
$$z_i = [\xi_i \cdot x, e_i, \overbrace{-\xi_i \cdot x,\cdots,0}^q ,1 ]^\top,$$
where $e_i$ denotes the one-hot vector of dimension $n$ with value $1$ in the $i$-th position of $e_i$.
For the second transformer encoder block, we calculate the product of $\xi_i \cdot x$ and $-\xi_i \cdot x$ to get $\left(\xi_i \cdot x\right)^2$ and place it in the $(n_q+3)-$th entry. We set $M=B$ in Lemma \ref{main result:one layer} and the output vectors are
$$z_i = [\xi_i \cdot x, e_i, \overbrace{-\xi_i \cdot x,\left(\xi_i \cdot x\right)^2,0,\cdots,0}^q ,1 ]^\top.$$
Without loss of generality, we set $q$ to be odd. For the $i$-th block with $i=3,\cdots,q$, we set $M=B^{i-1}$. Then after $q$ transformer encoder blocks, the outputs are
$$z_i = [t_i, e_i, \overbrace{-t_i,t_i^2,\cdots,-t_i^q}^q ,1 ]^\top \in \mathbb{R}^{(n+q+2) \times 1},$$
where $t_i = \xi_i \cdot x$ for $i=1,\cdots,n_q$. Now we have different powers of $\xi_i \cdot x$ for $i=1,\cdots,n_q$. Then we only need to set elements of $\beta$ as those $\beta_{k,s}$ we need in Lemma \ref{lemma: polynomial} and $b = Q(0)$ to generate the polynomial $Q$ we want.
Since we assume that $B$ is known, there is no free parameter in the transformer encoder blocks. The free parameters in our model all come from $F$, $\beta$ and $b$. By $n_q = \binom{d-1+q}{q}$, it is easy to see that $n_q \leq d^q$. The number of free parameters in $F$ is therefore less than $d^{q+1}$. Since for each $z_i$ we only need $q$ non-zero entries in $\beta$, the number of free parameters in $\beta$ is less than $qd^q$. So the total number of free parameters is less than $d^{q+1}+ qd^q +1$.
The non-zero entries of this model are those in $F$, in $W^K$, $W^Q$, $W^V$, $W_1$, $W_2$, $b_1$, $b_2$ of each block, and in $\beta$ and $b$. A direct count shows that the number of non-zero entries is less than $d^{q+1} +3qd^q + 8q +1.$
This finishes the proof.
\end{proof}
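As an illustration of the decomposition into powers of linear forms on which the proof relies (Lemma \ref{lemma: polynomial}, not restated here), the following NumPy sketch fits $f^*_1(\textbf{x})=x_1^2+x_2^2$ exactly by a linear combination of powers of three fixed unit directions together with a constant term; the particular directions are our own choice and the snippet is only a sanity check, not part of the proof.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, q = 2, 2
xs = rng.normal(size=(200, d))
target = xs[:, 0]**2 + xs[:, 1]**2           # f*_1

# n_q = C(d-1+q, q) = 3 unit directions (our choice)
xi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
xi /= np.linalg.norm(xi, axis=1, keepdims=True)

# features: (xi_s . x)^k for s = 1..3, k = 1..q, plus a constant term
proj = xs @ xi.T                             # shape (200, 3)
feats = np.concatenate([proj**k for k in range(1, q + 1)], axis=1)
feats = np.concatenate([feats, np.ones((len(xs), 1))], axis=1)

coef, *_ = np.linalg.lstsq(feats, target, rcond=None)
print(np.max(np.abs(feats @ coef - target)))  # ~1e-13: exact representation
\end{verbatim}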
\section{Conclusion}
In this paper, we introduced a single-head self-attention transformer model and showed that any polynomial can be generated exactly by an output function of such a model with the number of transformer encoder blocks equal to the degree of the polynomial. The transformer encoder blocks in this model do not need to be trained.
In the future, many research directions are attractive. First of all, our core idea differs from the traditional approximation viewpoint: through an appropriate adjustment of the transformer model, a completely new type of theoretical result is obtained. Moreover, since in our structure the transformer encoder blocks are completely fixed, it is of great interest to test our construction in real applications and see whether these adaptations indeed bring benefits. Second, we obtained these theoretical results by considering only the single-head self-attention structure; it is natural to ask whether the multi-head structure leads to even stronger conclusions. Last but not least, it is of great interest to consider this model in the setting of statistical machine learning. As our experiments show, as long as the number of free parameters meets the theoretical requirement, our model not only learns the objective function well but also generalizes much better than the other models. To the best of our knowledge, this is the first deep learning model capable of reaching zero approximation error for certain function classes. We will investigate how such a model affects convergence rates for regression or classification problems when the target function is a polynomial of the input, and we will verify whether the convergence rates then depend only on the complexity of the proposed model.
\nocite{langley00}
\appendix
\onecolumn
\section{Experimental Details}
In this section we describe additional details of our experiments.
\subsection{Model architectures}
\label{app_exp}
Tables \ref{arc-f1} and \ref{arc-f2} illustrate the architectures of the two types of ReLU fully connected neural networks with a comparable number of free parameters used in Section \ref{experiments}. NN$_{width}$ has the same kind of linear transformation $\mathbb{R}^d \to \mathbb{R}^{n_q}$ as our single-head self-attention transformer, while NN$_{depth}$ has the same number of hidden layers, namely $q+1$, as our single-head self-attention transformer.
\begin{table}[h!]
\caption{The architecture of NN$_{width}$ and NN$_{depth}$ for the target polynomial $f^*_1$.}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{l l l l}
\toprule
\multicolumn{1}{l}{Layer} & \multicolumn{1}{l}{NN$_{width}$} & NN$_{depth}$ \\
\midrule
1 & \multicolumn{1}{l}{Linear(in=2,out=10)} & Linear(in=2,out=4) \\
\midrule
2 & \multicolumn{1}{l}{Relu} & Relu \\
\midrule
3 & \multicolumn{1}{l}{Linear(in=10,out=1)} & Linear(in=4,out=4) & \rdelim\}{2}{1mm}[$\times $ 2] \\
\midrule
4 & & Relu \\
\midrule
$\cdots$ & & \\
\midrule
7 & & Linear(in=4,out=1) \\
\bottomrule
\end{tabular}
\label{arc-f1}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
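For reference, a minimal PyTorch sketch of the two baselines in Table \ref{arc-f1} is given below; this is our own rendering of the table, and only the layer sizes come from the table.
\begin{verbatim}
import torch.nn as nn

# NN_width: one hidden layer with 10 units (Table arc-f1, left column)
nn_width = nn.Sequential(
    nn.Linear(2, 10), nn.ReLU(), nn.Linear(10, 1))

# NN_depth: three hidden layers of width 4 (Table arc-f1, right column)
nn_depth = nn.Sequential(
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, 4), nn.ReLU(),
    nn.Linear(4, 4), nn.ReLU(),
    nn.Linear(4, 1))
\end{verbatim}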
\begin{table}[h!]
\caption{The architecture of NN$_{width}$ and NN$_{depth}$ for the target polynomial $f^*_2$.}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{l l l l}
\toprule
\multicolumn{1}{l}{Layer} & \multicolumn{1}{l}{NN$_{width}$} & NN$_{depth}$ \\
\midrule
1 & \multicolumn{1}{l}{Linear(in=10,out=4368)} & Linear(in=10,out=120) \\
\midrule
2 & \multicolumn{1}{l}{Relu} & Relu \\
\midrule
3 & \multicolumn{1}{l}{Linear(in=4368,out=1)} & Linear(in=120,out=120) & \rdelim\}{2}{1mm}[$\times $ 5] \\
\midrule
4 & & Relu \\
\midrule
$\cdots$ & & \\
\midrule
13 & & Linear(in=120,out=1) \\
\bottomrule
\end{tabular}
\label{arc-f2}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\end{document}
\begin{document}
\title{Palm measures for Dirac operators and the $\Sineb$ process}
\begin{abstract}
We characterize the Palm measure of the $\operatorname{Sine}_{\beta}$ process as the eigenvalues of an associated operator with a specific boundary condition.
\end{abstract}
\section{Introduction}
The Palm measure of a point process is its distribution as seen from a typical point. We study the Palm measure of the $\operatorname{Sine}_{\beta}$ process, which is the bulk scaling limit of the eigenvalues for a large class of random matrices. (See \cite{BVBV}, \cite{BVBV_sbo}, \cite{BEY}.)
There are several reasons to do this:
\begin{itemize}
\item the secular function for (the operator associated to) the Palm measure has a simpler description than that of the original $\operatorname{Sine}_{\beta}$ process,
\item the intensity of the Palm measure is given by the two-point correlation functions of the original process, a source of many open problems and conjectures,
\item the operator description of the Palm measure can be given explicitly: this is the main goal of the current paper,
\item in the case of the circular beta ensembles, the finite version of the $\operatorname{Sine}_{\beta}$ process, the distribution of the Verblunsky coefficients can be given explicitly,
\item the Palm measures are closely related to the well-studied circular Jacobi ensembles and Hua-Pickrell measures of random matrix theory. We will describe the connection explicitly.
\end{itemize}
Let $b_1, b_2$ be independent two-sided standard Brownian motions, and for $t\in (0,1]$ set
\begin{align}\label{eq:xy}
y_t=e^{b_2(u)-u/2}, \quad x_t=-\int_u^0 e^{b_2(s)-\tfrac{s}{2}} d b_1, \quad u=u(t)=\tfrac4{\beta} \log t.
\end{align}
Let $q$ be a Cauchy distributed random variable independent of $b_1, b_2$.
Define the function $R(t):(0,1]\to {\mathbb R}^{2\times 2}$ via
\begin{align}\label{eq:R_x_y}
R=\frac{X^t X}{2\det X}, \qquad X=\mat{1}{-x}{0}{y}.
\end{align}
In this paper, we consider the ${\boldsymbol{\tau}}_\beta$ operator
\begin{equation}\label{tau_0}
{\boldsymbol{\tau}}_\beta u=R^{-1} J u', \qquad J=\mat{0}{-1}{1}{0},
\end{equation}
acting on absolutely continuous functions of the form $u:(0,1]\to {\mathbb R}^2$, with boundary conditions $[1,0]^t$ at $0$ and $[-q,-1]^t$ at $1$. The spectrum of the ${\boldsymbol{\tau}}_\beta$ operator is given by the $\operatorname{Sine}_{\beta}$ process, see \cite{BVBV_sbo}. Our main result is the following theorem.
\begin{theorem}\label{thm:Sine_Palm}
The Palm measure of the $\operatorname{Sine}_{\beta}$ process has the same law as the spectrum of ${\boldsymbol{\tau}}_\beta$ with modified boundary condition $[1,0]^t$ at $1$.
\end{theorem}
The proof of this theorem relies on two key facts: one is the shift invariance of the process and the other is the special relationship between the spectral measure and the eigenvalue distribution for the ${\boldsymbol{\tau}}_\beta$ operator. Note that in this paper the spectral measure is always a real valued measure on ${\mathbb R}$.
For an $n\times n$ unitary matrix $U$ and a nonzero vector $v$ the spectral measure at $v$ is the probability measure
$$
\sum_{\lambda} \langle v,\varphi_\lambda\rangle^2 \delta_\lambda.
$$
Here the sum is over all eigenvalues $\lambda$, and the $\varphi_\lambda$ are the unit length eigenvectors of $U$ forming an orthonormal basis.
For the Dirac operators of the type \eqref{tau_0}, we replace $\langle v,\varphi_\lambda\rangle$ by the value of $\varphi_\lambda$ at $0$ or $1$ to get the {\bf left} and {\bf right spectral measures}. More precisely,
\begin{align}\label{eq:rightspec}
\mu_{\textup{left}}=\sum_{\lambda} |\varphi_\lambda(0)|^2\delta_\lambda,
\qquad
\mu_{\textup{right}}=\sum_{\lambda} |\varphi_\lambda(1)|^2\delta_\lambda,
\end{align}
where $\varphi_\lambda$ is the eigenfunction corresponding to $\lambda$ with normalization $\int_0^1 \varphi^t_\lambda R \varphi_\lambda\, dt=1$. For the Dirac operators we consider, the spectral measures have infinite total mass.
An important ingredient in the proof of Theorem \ref{thm:Sine_Palm} is the description of how the right spectral measure of a random Dirac operator changes when it is biased by the weight of zero. This is the content of the following proposition, see Proposition \ref{prop:Dir_Spec_weight} for a more precise formulation.
\begin{proposition}\label{prop:Dir_Spec_weight_heur}
Let $\mu_q$ denote the right spectral measure of the random Dirac operator of the form \eqref{tau_0} with boundary conditions $[1,0]^t$ and $[-q,-1]^t$, with $\mu_\infty$ corresponding to boundary conditions $[1,0]^t$ at both ends.
Assume that $q$ is chosen independently of the function $R$ with a sufficiently regular distribution. Then the law of $\mu_q$ biased by $\mu_q((-\varepsilon,\varepsilon))$ has a distributional limit as $\varepsilon\to 0$ given by $\mu_{\infty}$.
\end{proposition}
Another important ingredient of the proof of Theorem \ref{thm:Sine_Palm} is the following.
\begin{proposition} \label{prop:sine_weights}
The weights of the right spectral measure of the ${\boldsymbol{\tau}}_{\beta}$ operator are independent of each other and from the sequence of eigenvalues. The weights have gamma distribution with shape parameter $\beta/2$ and mean 2.
\end{proposition}
This is closely related to Theorem 9 in \cite{najnudel2021bead}, where the spectral measure is implicitly used to define the bead process, a Markov process with stationary distribution given by the $\operatorname{Sine}_{\beta}$ point process. The connection is through the relationship between spectral measures and rank-one perturbations. Rather than exploring this connection, we deduce Proposition \ref{prop:sine_weights} from its finite-$n$ analogue.
For $n\ge 1$ and $\beta>0$ the size $n$ circular beta-ensemble is the distribution of $n$ points on the unit circle with probability density
\begin{align}
\frac{1}{Z_{n,\beta}} \prod_{j<k\le n} |z_j-z_k|^\beta.
\end{align}
The \textbf{Killip-Nenciu measure} $\mu_{n,\beta}^{\textup{KN}}$ for given $n\ge 1$ and $\beta>0$ is the random probability measure where the support is given by the size $n$ circular beta-ensemble, and the weights have Dirichlet$(\beta/2, \dots, \beta/2)$ joint distribution and are independent of the support.
In general, a finitely supported probability measure on the unit circle can be characterized by its modified Verblunsky coefficients from the theory of orthogonal polynomials. When a probability measure is supported on $n$ points, all but the first $n$ modified Verblunsky coefficients vanish. The first $n-1$ modified Verblunsky coefficients are all in the unit disk ${\mathbb U}=\{z:|z|<1\}$, with the last one being on the unit circle $\partial {\mathbb U}$ (see \cite{KillipNenciu}, \cite{BNR2009}).
The joint distribution of the modified Verblunsky coefficients $\gamma_k, 0\le k\le n-1$ of the Killip-Nenciu measure was described in \cite{KillipNenciu} and \cite{BNR2009}. It was shown that the $\gamma_k, 0\le k\le n-1$, are independent, and that $\gamma_k$ has density on $\mathbb U$ proportional to
\begin{align}\label{eq:verb_circ}
(1-|z|^2)^{\frac{\beta}{2}(n-k-1)-1}, \qquad \text{for} \quad 0\le k \le n-2,
\end{align}
and $\gamma_{n-1}$ is uniform on $\partial {\mathbb U}$.
Note that the distribution of $|\gamma_k|^2$ is Beta$(1,\frac{\beta}{2}(n-k-1))$, and $\gamma_k$ has rotationally invariant distribution.
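This can be seen by a short computation: for $a=\frac{\beta}{2}(n-k-1)$, passing to polar coordinates $z=re^{i\theta}$ in \eqref{eq:verb_circ} and substituting $u=r^2$ gives a density proportional to
\begin{align*}
(1-r^2)^{a-1}\, r\,dr\,d\theta \;=\; \tfrac12\,(1-u)^{a-1}\,du\,d\theta,
\end{align*}
so $|\gamma_k|^2$ has the Beta$(1,a)$ density $a(1-u)^{a-1}$ on $(0,1)$, while the angular part is uniform, which gives the rotational invariance.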
In \cite{BVBV_sbo} it was shown that the spectral information of a probability measure on the unit circle can also be described via a Dirac operator of the form \eqref{tau_0}, with $R$ defined from piece-wise constant paths $x, y$ via \eqref{eq:R_x_y}, that are built using the modified Verblunsky coefficients. See Section \ref{sec:finite} and Proposition \ref{prop:discrete_op_1} below. We extend the connection further by the following lemma, see Proposition \ref{prop:Dir_discrete} for the precise formulation.
\begin{lemma}\label{lem:spectral_lift}
Consider a probability measure $\mu$ supported on $e^{i \lambda_j}, 1\le j\le n$, and the corresponding Dirac operator ${\boldsymbol{\tau}}$. Then the left spectral measure of ${\boldsymbol{\tau}}$ is the same as the lifting of $\mu$ to $\mathbb R$ stretched by $n$ and multiplied by $2n$:
\[
\mu_{\textup{left}, {\boldsymbol{\tau}}}(n \lambda_j+2n k \pi)=2n \mu(e^{i \lambda_j}), \qquad k\in {\mathbb Z}, 1\le j\le n.
\]
\end{lemma}
The factor $2$ is related to our convention in \eqref{eq:R_x_y}. This convention comes from the carousel representation, see \cite{BVBV_sbo}, in which the rotation speed is then given by the eigenvalue $\lambda$. Equivalently, this is related to the intensity $1/(2\pi)$ of the Sine$_\beta$ process.
The Palm measure corresponding to a measure with random Verblunski coefficients can be explicitly characterized in certain cases. The following is a special case of Proposition \ref{prop:path} below.
\begin{proposition} \label{prop:Verblunski_biasing}
Let $\mu$ be a random probability measure supported on $n$ points on $\partial {\mathbb U}$.
Assume that its modified Verblunsky coefficients $\gamma_i, 0\le i\le n-1$ are independent and have rotationally invariant law. Then the Palm measure of $\mu$ agrees with the $\varepsilon\to 0$ limit of $\mu$ biased by the weight of the arc $\{e^{i\theta}, \theta\in (-\varepsilon,\varepsilon)\}$. The modified Verblunsky coefficients of the Palm measure are given by
\begin{align}\label{eq:gamma'}
\gamma'_i
=-\gamma_i\frac{1-\bar \gamma_i}{1-\gamma_i}, \qquad i=0,\dots, n-2,\qquad \gamma'_{n-1}=1.
\end{align}
\end{proposition}
In the setting of probability measures on the unit circle the analogue of the boundary condition for Dirac operators is given by the notion of Aleksandrov measures. The modified Verblunsky coefficients of Aleksandrov measures have a more complex description, leading to formulas such as \eqref{eq:gamma'}; see Section \ref{sec:Aleks}.
Proposition \ref{prop:Verblunski_biasing} provides an explicit description of the Verblunsky coefficients of the Palm measure of the Killip-Nenciu measure as follows.
\begin{proposition}\label{prop:biased_V}
Let $\mu$ be chosen from the Killip-Nenciu law with parameters $\beta,n$, let $\tilde \mu$ be the uniform probability measure on the support of $\mu$, and
let $\nu$ be $\mu$ biased by the weight of 1.
Let $X$ be picked from $\mu$, and let $Y$ be picked from $\tilde \mu$.
Then $\operatorname{supp} \mu(X\cdot)$, $\operatorname{supp} \tilde \mu(Y\cdot)$, and $\operatorname{supp} \nu(\cdot)$ all have the same law.
The modified Verblunsky coefficients $\gamma_k'$ of $\nu$ are independent and have density proportional to
\begin{align}\label{eq:verb_nu}
(1-|z|^2)^{\frac{\beta}{2}(n-k-1)} |1-z|^{-2}, \qquad \text{for} \quad 0\le k \le n-2,
\end{align}
and $\gamma_{n-1}'=1$.
\end{proposition}
The distribution of $\nu$ can be computed directly by noting that its support is the size $n$ circular beta-ensemble conditioned to have a point at 1, and the weights are just a size-biased version of the Dirichlet$(\beta/2, \dots, \beta/2)$ distribution.
The support of the measure $\nu$ has a point at 1, and if we remove that point then the joint density of the remaining points
is proportional to
\begin{align}\label{eq:CJ}
\prod_{i<j\le n-1} |z_i-z_j|^\beta \prod_{j=1}^{n-1} |1-z_j|^\beta.
\end{align}
This
is a circular Jacobi beta-ensemble of size $n-1$, with parameter $\delta=\beta/2$.
The weights of $\nu$ are independent of the support, and they are Dirichlet distributed with parameters $\beta/2$ ($n-1$ times) and $\beta/2+1$ (once). The last weight belongs to the point at $1$.
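This biasing step is a one-line check: if $(w_1,\dots,w_n)$ has the Dirichlet$(\beta/2, \dots, \beta/2)$ density and $w_n$ denotes the weight of the point at $1$, then biasing by $w_n$ gives a density proportional to
\begin{align*}
w_n \prod_{i=1}^{n} w_i^{\beta/2-1} \;=\; w_n^{(\beta/2+1)-1}\prod_{i=1}^{n-1} w_i^{\beta/2-1},
\end{align*}
which is, up to normalization, the Dirichlet$(\beta/2,\dots,\beta/2,\beta/2+1)$ density.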
If we remove the point at 1 from $\nu$ and renormalize the weights to sum up to 1, then the support and the weights are still independent, with the $n-1$ remaining weights distributed as Dirichlet$(\beta/2, \dots, \beta/2)$.
The modified Verblunsky coefficients $\gamma_k, 0\le k\le n-2$ of this measure were described in \cite{BNR2009}. These are independent of each other, and the density of $\gamma_k$ on ${\mathbb U}$ is proportional to
$$
(1-|z|^2)^{\frac{\beta}{2}(n-k-2)-1}|1-z|^{\beta}, \quad \text{for} \quad 0\le k\le n-3,
$$
and the density of $\gamma_{n-2}$ on $\partial {\mathbb U}$ is proportional to
$
|1-z|^{\beta}
$.
It would be interesting to find a direct derivation of this fact.
\begin{problem}
Find a simple derivation of the joint distribution of the modified Verblunsky coefficients for the random probability measure obtained by removing the point 1 from $\nu$ (and renormalizing), directly from Proposition \ref{prop:biased_V}.
\end{problem}
\section{Dirac operators and their spectral measures}
\subsection{Dirac operator basics}
Let $R:{\mathcal I}\to \mathbb R^{2\times 2}$ be a function taking values in nonnegative definite matrices. Here ${\mathcal I}$ is an interval with $\overline{{\mathcal I}}=[0,\sigma]$, which may be open at one of its endpoints. Most often we will deal with the case $[0,1)$ or $(0,1]$.
In this paper, a {\bf Dirac operator} is defined as
\begin{equation}\label{tau}
{\boldsymbol{\tau}}u=R^{-1} J u', \qquad J=\mat{0}{-1}{1}{0},
\end{equation}
acting on some subset of functions of the form $u:{\mathcal I}\to {\mathbb R}^2$.
\begin{assumption}
$R(t)$ is positive definite for all $t\in {\mathcal I}$, $\|R\|, \|R^{-1}\|$ are locally bounded on ${\mathcal I}$. Moreover, $\det R(t)=1/4$ for all $t\in {\mathcal I}$.
\end{assumption}
If $R$ satisfies Assumption 1 then it can be parametrized as
\begin{align}
R=\frac{X^t X}{2\det X} , \qquad X=\mat{1}{-x}{0}{y}, \qquad y>0, \,x\in {\mathbb R}.\label{eq:Rxy}
\end{align}
The assumption $\det R=1/4$ can be replaced by the more general condition $\int_{{\mathcal I}} \det R(s)\, ds<\infty$. This setting is equivalent to ours up to a time change.
\begin{assumption}\ \\
\begin{itemize}
\item If ${\mathcal I}$ is open on the left then there is a nonzero $\mathfrak u_0 \in {\mathbb R}^2$ so that we have
\begin{align}
\int_{{\mathcal I}} \left\|\mathfrak u_0 ^t R \right\|ds<\infty,\qquad \int\limits_{\substack{s<t\\ (s,t)\in {\mathcal I}}} \mathfrak u_0 ^t R(s) \mathfrak u_0 \, (J \mathfrak u_0 )^t R(t) J \mathfrak u_0 \, ds\, dt<\infty.\label{eq:assmpns_1}
\end{align}
\item If ${\mathcal I}$ is open on the right then there is a nonzero $\mathfrak u_1 \in {\mathbb R}^2$ so that we have
\begin{align}
\int_{{\mathcal I}} \left\|\mathfrak u_1 ^t R \right\|ds<\infty,\qquad \int\limits_{\substack{s>t\\ (s,t)\in {\mathcal I}}} \mathfrak u_1 ^t R(s) \mathfrak u_1 \, (J \mathfrak u_1 )^t R(t) J \mathfrak u_1 \, ds\, dt<\infty.\label{eq:assmpns_2}
\end{align}
\end{itemize}
\end{assumption}
Note that if ${\mathcal I}$ is closed and $R$ satisfies Assumption 1 then \eqref{eq:assmpns_1} and \eqref{eq:assmpns_2} hold for any $\mathfrak u_0, \mathfrak u_1\in {\mathbb R}^2$.
Moreover, if $\overline{{\mathcal I}}=[0,\sigma]$, $0<t<\sigma$, and $R$ satisfies Assumptions 1-2 with an appropriate $\mathfrak u_0$ or $\mathfrak u_1$ (if applicable), then it also satisfies the assumptions on ${\mathcal I}\cap [0,t]$.
The Dirac operator ${\boldsymbol{\tau}}$ with boundary conditions $\mathfrak u_0, \mathfrak u_1$ on ${\mathcal I}$ is self-adjoint on an appropriate domain. Let $
L^2_R=L^2_R({\mathcal I})$
denote the $L^2$ sub-space of ${\mathcal I}\to {\mathbb R}^2$ functions with the squared norm
\begin{align}
\|f\|_R^2=\int_{{\mathcal I}} f^t(s) R(s) f(s) ds.
\end{align}
Let ${\text{\sc ac}}$ denote the subset of ${\mathcal I}\to {\mathbb R}^2$ functions that are absolutely continuous. The following proposition follows from standard theory of Dirac operators (see e.g.~\cite{Weidmann}, \cite{BVBV_sbo}).
\begin{proposition}\label{prop:inverse_tau}
Let ${\boldsymbol{\tau}}$ be a Dirac operator satisfying Assumption 1 on ${\mathcal I}$. Let $\mathfrak u_0, \mathfrak u_1$ be non-zero vectors in ${\mathbb R}^2$ with which the appropriate case of Assumption 2 is satisfied. Then ${\boldsymbol{\tau}}$ is self-adjoint on the following domain:
\begin{align}
\operatorname{dom}_{\boldsymbol{\tau}}=\{v\in L^2_R \cap {\text{\sc ac}}\,: \, {\boldsymbol{\tau}} v\in L^2_R, \,\lim_{s\downarrow 0} v(s)^t\,J\,\mathfrak u_0 = 0, \,\,\lim_{s\uparrow \sigma}v(s)^t\,J\,\mathfrak u_1 =0\}.
\end{align}
\end{proposition}
We denote the self-adjoint operator ${\boldsymbol{\tau}}$ with boundary conditions
$\mathfrak u_0, \mathfrak u_1$ by $\mathtt{Dir}(R_{\cdot},\mathfrak u_0 ,\mathfrak u_1 )$ or $\mathtt{Dir}(x_{\cdot}+i y_{\cdot},\mathfrak u_0 ,\mathfrak u_1 )$.
When $\mathfrak u_0\not \,\parallel \mathfrak u_1$ we also make the following assumption, which is just a normalization.
\begin{assumption}
$\mathfrak u_0^t J \mathfrak u_1=1$.
\end{assumption}
Note that if $\mathfrak u_0, \mathfrak u_1$ satisfy this assumption then we have $\mathfrak u_1=\frac{1}{\|\mathfrak u_0\|^2} J^t \mathfrak u_0+c \mathfrak u_0$ with $c\in {\mathbb R}$.
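For instance, with $\mathfrak u_0=[1,0]^t$ (the left boundary condition used for ${\boldsymbol{\tau}}_\beta$), writing $\mathfrak u_1^{(2)}$ for the second entry of $\mathfrak u_1$, Assumption 3 reads
\begin{align*}
\mathfrak u_0^t J \mathfrak u_1=[1,0]\mat{0}{-1}{1}{0}\mathfrak u_1=-\mathfrak u_1^{(2)}=1,
\end{align*}
so $\mathfrak u_1=[c,-1]^t$ for some $c\in{\mathbb R}$, and the choice $c=-q$ recovers the right boundary condition $[-q,-1]^t$ used in the introduction.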
If $\mathfrak u_0\not\,\parallel \mathfrak u_1$ then ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot},\mathfrak u_0 ,\mathfrak u_1 )$ has an inverse that is a Hilbert-Schmidt integral operator (see \cite{Weidmann} or \cite{BVBV_sbo}).
\begin{proposition}
Suppose that ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot},\mathfrak u_0 ,\mathfrak u_1 )$ and that Assumptions 1-3 are satisfied. Then ${\boldsymbol{\tau}}^{-1}$ is a Hilbert-Schmidt integral operator on $L^2_R$ given by
\begin{align}
{\boldsymbol{\tau}}^{-1} f(s)=\int_{{\mathcal I}} K_{{\boldsymbol{\tau}}^{-1}}(s,t) R(t) f(t) dt, \qquad K_{{\boldsymbol{\tau}}^{-1}}(s,t)=\mathfrak u_0 \mathfrak u_1 ^t \mathbf 1(s<t)+\mathfrak u_1 \mathfrak u_0 ^t \mathbf 1(s\ge t),
\end{align}
and we have
\begin{align}
\|{\boldsymbol{\tau}}^{-1}\|_{\textup{HS}}^2=2\iint\limits_{\substack{s<t\\ (s,t)\in {\mathcal I}}}
\mathfrak u_0 ^t R(s) \mathfrak u_0 \, \mathfrak u_1^t R(t) \mathfrak u_1 \, ds\, dt.
\end{align}
\end{proposition}
Let $Y=\tfrac{X}{\sqrt{\det X}}$ with $X$ defined in \eqref{eq:Rxy}. Then the conjugated integral operator \begin{align}\label{def:res}
{\mathtt{r}\,} {\boldsymbol{\tau}}:=Y {\boldsymbol{\tau}}^{-1} Y^{-1}\end{align}
is self-adjoint on $L^2$ with integral kernel
\begin{align}\label{resint}
K_{{\mathtt{r}\,} {\boldsymbol{\tau}}}(s,t)=\tfrac{1}{2}\,a_0(s) a_1(t)^t \mathbf 1(s<t)\;+\;\tfrac{1}{2}\,a_1(s) a_0(t)^t \mathbf 1(s\ge t),
\end{align}
where
\begin{equation}\label{eq:ac}
a_0=\frac{X\mathfrak u_0 }{\sqrt{\det X}},\qquad a_1=\frac{X\mathfrak u_1 }{\sqrt{\det X}}.
\end{equation}
We have $\operatorname{spec}({\mathtt{r}\,} {\boldsymbol{\tau}})=\operatorname{spec}({\boldsymbol{\tau}}^{-1})$, in particular $\|{\mathtt{r}\,} \tau\|^2_{\textup{HS}}=\|{\boldsymbol{\tau}}^{-1}\|^2_{\textup{HS}}$.
\subsection{Secular function of a Dirac operator}
\cite{BVBV_szeta} introduced the \emph{secular function} of a Dirac operator as a generalization of the (normalized) characteristic polynomial of a matrix.
\begin{definition}
Suppose that the Dirac operator ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot},\mathfrak u_0 ,\mathfrak u_1 )$ satisfies Assumptions 1-3 on ${\mathcal I}$. We define its integral trace as
\begin{align}
\mathfrak t_{\boldsymbol{\tau}}=\int_{{\mathcal I}} \mathfrak u_0^t R(s) \mathfrak u_1\, ds,
\end{align}
and its secular function as
\begin{align}
\zeta_{{\boldsymbol{\tau}}}(z)=e^{-z\cdot {\mathfrak t}_{\boldsymbol{\tau}}}{\det}_2(I-z \,{\mathtt{r}\,} {\boldsymbol{\tau}}).
\end{align}
Here ${\det}_2$ is the (second) regularized determinant, see \cite{SimonTrace}, Chapter 9.
\end{definition}
The secular function of a Dirac operator ${\boldsymbol{\tau}}$ is an entire function with zero set given by $\operatorname{spec} {\boldsymbol{\tau}}$, and it is real on ${\mathbb R}$. As the next proposition shows, it can also be represented in terms of a canonical system. (See Proposition 13 in \cite{BVBV_szeta}.)
\begin{proposition}\label{prop:H_ODE}
Suppose that ${\mathcal I}$ is closed on the right, and $R$, $\mathfrak u_0$ satisfy Assumptions 1 and 2.
There is a unique vector-valued function $H: {\mathcal I}\times {\mathbb C}\to {\mathbb C}^2$ so that for every $z\in {\mathbb C}$ the function $H(\cdot, z)$ is the solution of the ordinary differential equation
\begin{align}
\label{eq:H}
J\frac{d}{dt}H(t,z)&=z R(t) H(t,z), \qquad t\in {\mathcal I}, \qquad\lim_{t\to 0} H(t,z)= \mathfrak u_0 .
\end{align}
For any $t\in {\mathcal I}$ the vector function $H(t,z)$ satisfies $\|H(t,z)\|>0$, and its two entries are entire functions of $z$ mapping the reals to the reals.
In addition, if ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot},\mathfrak u_0 ,\mathfrak u_1 )$ with
$\mathfrak u_1$ satisfying Assumption 3, then
\begin{align}\label{eq:zeta}
\zeta_{{\boldsymbol{\tau}}}(z)=H(1,z)^t J \mathfrak u_1.
\end{align}
\end{proposition}
The proposition immediately implies that under the listed conditions for any $0<t<\sigma$ we have
\[
\zeta_{{\boldsymbol{\tau}},t}(z)=H(t,z)^t J \mathfrak u_1,
\]
where $\zeta_{{\boldsymbol{\tau}},t}$ is the secular function of ${\boldsymbol{\tau}}$ defined on ${\mathcal I}\cap [0,t]$ with boundary conditions $\mathfrak u_0, \mathfrak u_1$.
As the next proposition shows, the function $H(t, \cdot)$ is continuous on compacts in $t$ at $t=0$, and it depends continuously on $R$ as well. We use the notation $\mathfrak t_{{\boldsymbol{\tau}},t}$ and ${\mathtt{r}\,} {\boldsymbol{\tau}}_t$ for the integral trace and resolvent of ${\boldsymbol{\tau}}$ restricted to ${\mathcal I} \cap [0,t]$.
\begin{proposition}\label{prop:H_cont}
Suppose that ${\mathcal I}$ is closed on the right, and $R$, $\mathfrak u_0$ satisfy Assumptions 1 and 2, and let $H$ be the solution of \eqref{eq:H}. Suppose that $\mathfrak u_1$ satisfies Assumption 3, and let ${\boldsymbol{\tau}}$ be the Dirac operator defined by $R, \mathfrak u_0, \mathfrak u_1$. Then there is an absolute constant $c>1$ (depending only on $\mathfrak u_0$) so that for all $t\in {\mathcal I}$, $z\in {\mathbb C}$ we have
\begin{align} \label{eq:H_cont}
|H(t,z)-\mathfrak u_0|\le \left(c^{|z|(|\mathfrak t_{{\boldsymbol{\tau}},t}|+\|{\mathtt{r}\,} {\boldsymbol{\tau}}_t\|+\int_0^t |a_0(s)|^2 ds)}-1\right) c^{\left(|z|(|\mathfrak t_{{\boldsymbol{\tau}},t}|+\|{\mathtt{r}\,} {\boldsymbol{\tau}}_t\|+\int_0^t |a_0(s)|^2 ds)+1\right)^2}.
\end{align}
Suppose that $\tilde R$, $\mathfrak u_0, \tilde{\mathfrak u}_1$ also satisfy Assumptions 1-3 on the same ${\mathcal I}$, let $\widetilde H$ be the solution of \eqref{eq:H}, and $\tilde {\boldsymbol{\tau}}$ the corresponding Dirac operator.
Then there is an absolute constant $c>1$ (depending only on $\mathfrak u_0$) so that for all $t\in {\mathcal I}$, $z\in {\mathbb C}$ we have
\begin{align}\notag
& |H(t,z)-\tilde H(t,z)|\le \left(c^{|z|\left(|\mathfrak t_{{\boldsymbol{\tau}},t}-\mathfrak t_{\tilde{\boldsymbol{\tau}},t}|+\|{\mathtt{r}\,} {\boldsymbol{\tau}}_t-{\mathtt{r}\,} \tilde{\boldsymbol{\tau}}_t\|+\sqrt{\int_{0}^t|a_0(s)-\tilde a_0(s)|^2 ds \int_{0}^t(|a_0(s)|^2+|\tilde a_0(s)|^2)ds
}\right)}-1\right)\\&\hskip150pt \times
c^{\left(|z|(|\mathfrak t_{{\boldsymbol{\tau}},t}|+|\mathfrak t_{\tilde{\boldsymbol{\tau}},t}|+\|{\mathtt{r}\,} {\boldsymbol{\tau}}_t\|+\|{\mathtt{r}\,} \tilde{\boldsymbol{\tau}}_t\|+\int_0^t \left(|a_0(s)|^2+|\tilde a_0(s)|^2 \right)ds)+1\right)^2}.\label{eq:H1_cont}
\end{align}
\end{proposition}
\begin{proof}
Theorem 9.2(c) of \cite{SimonTrace} shows that if $\kappa_1, \kappa_2$ are Hilbert-Schmidt operators on the same domain then
\begin{align}\label{det2_tri}
|{\det}_2(I-z \kappa_1)-{\det}_2(I-z \kappa_2)|\le |z|\cdot \|\kappa_1-\kappa_2\|_2 c^{ |z|^2 (\|\kappa_1\|_2^2+ \|\kappa_2\|_2^2)+1}
\end{align}
with an absolute constant $c$. Using \eqref{eq:zeta} one readily obtains bounds of the form \eqref{eq:H_cont}, \eqref{eq:H1_cont} for $|H(t,z)^tJ\mathfrak u_1-1|$ and $|H(t,z)^tJ\mathfrak u_1-\tilde H(t,z)^tJ\tilde{\mathfrak u}_1|$, without the $a_0, \tilde a_0$ integral terms.
Note that Assumptions 1-3 also hold for $R, \mathfrak u_0, \mathfrak u_1+\mathfrak u_0$ (and similarly for $\tilde R, \mathfrak u_0, \tilde{\mathfrak u}_1+\mathfrak u_0$). These yield bounds for $|H(t,z)^tJ(\mathfrak u_0+\mathfrak u_1)-1|$ and $|H(t,z)^tJ(\mathfrak u_0+\mathfrak u_1)-\tilde H(t,z)^tJ(\mathfrak u_0+\tilde{\mathfrak u}_1)|$, but with modified integral trace and ${\mathtt{r}\,} {\boldsymbol{\tau}}$. By changing the right boundary condition by $\mathfrak u_0$ we change the integral trace on ${\mathcal I}\cap [0,t]$ by $\int_0^t |a_0(s)|^2 ds$, and the integral kernel of the conjugated integral operator \eqref{resint} by the function $K_0(s,t)=a_0(s) a_0(t)^t$. We have
\[
\|K_0\|_{\textup{HS}}^2=\iint_{{\mathcal I}\times {\mathcal I}}\|a_0(s) a_0(t)^t\|^2ds dt=\left(\int_{\mathcal I} |a_0(s)|^2 ds\right)^2
\]
and
\begin{align*}
\|K_0-\tilde K_0\|^2_{\textup{HS}}&=\iint_{{\mathcal I}\times {\mathcal I}} \|a_0(s) a_0(t)^t-\tilde a_0(s) \tilde a_0(t)^t\|^2\,ds\,dt\\&\le 2\int_{{\mathcal I}}|a_0(s)-\tilde a_0(s)|^2\,ds\int_{{\mathcal I}}(|a_0(s)|^2+|\tilde a_0(s)|^2)\, ds.
\end{align*}
This implies the bounds \eqref{eq:H_cont}, \eqref{eq:H1_cont} for the terms
\begin{align*}
|H(t,z)^tJ(\mathfrak u_0+\mathfrak u_1)-1|, \qquad |H(t,z)^tJ(\mathfrak u_0+\mathfrak u_1)-\tilde H(t,z)^tJ(\mathfrak u_0+\tilde{\mathfrak u}_1)|,
\end{align*}
(now with the $a_0, \tilde a_0$ terms included), from which the proposition follows.
\end{proof}
The ODE system \eqref{eq:H} also provides a representation of the eigenfunctions of ${\boldsymbol{\tau}}$.
\begin{proposition}\label{prop:H_L2}
Suppose that ${\mathcal I}$ is closed from the right, and $R$, $\mathfrak u_0$ satisfy Assumptions 1 and 2.
The function $H(\cdot,\lambda)$ given in \eqref{eq:H} is in $L^2_R$ for any fixed $\lambda\in {\mathbb R}$. In particular, if ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot}, \mathfrak u_0, \mathfrak u_1)$ for any nonzero $\mathfrak u_1\in {\mathbb R}^2$ then the eigenvalues of ${\boldsymbol{\tau}}$ are given by the zero set of
$H(\sigma,z)^tJ\mathfrak u_1$. Moreover, if $\lambda$ is an eigenvalue then $H(\cdot,\lambda)$ is the corresponding eigenfunction.
\end{proposition}
\begin{proof}
By differentiating \eqref{eq:H} in $z$ we get
\[
J \partial_t \partial_z H=RH+z R \partial_z H, \quad \text{and} \quad
\partial_t (H^t J \partial_z H)=H^t R H.
\]
For $0<\varepsilon<\sigma$ and $\lambda\in {\mathbb R}$ we get
\[
H^tJ\partial_\lambda H(\sigma)-H^tJ\partial_\lambda H(\varepsilon)=\int_\varepsilon^{\sigma} H^t R H \,ds>0.
\]
We let $\varepsilon\to 0$. By Propositions \ref{prop:H_ODE} and \ref{prop:H_cont} the function $H$ is continuous in $t$ at $0$ in the uniform-on-compacts topology, and is analytic for each $t$. It follows that $\partial_\lambda H$ is continuous at $t=0$ and vanishes there (since $H(0,\cdot)=\mathfrak u_0$ does not depend on $\lambda$), hence the left side converges to $H^tJ\partial_\lambda H(\sigma)$.
Using monotone convergence for the right hand side, we get
\begin{align}\label{eq:H_L2}
0<\int_{{\mathcal I}} H^t R H \,ds=H^tJ\partial_\lambda H(\sigma)\in {\mathbb R}.
\end{align}
This proves that $H(\cdot,\lambda)\in L^2_R$. It also implies that if for $\lambda\in {\mathbb R}$ we have $\mathfrak u_1 \parallel H(\sigma,\lambda)$ then $\lambda$ is an eigenvalue of ${\boldsymbol{\tau}}$ with eigenfunction $H(\cdot, \lambda)$.
If $\lambda\in {\mathbb R}$ is an eigenvalue with eigenfunction $f$ then ${\boldsymbol{\tau}} f=\lambda f$ and $f$ satisfies the boundary conditions at $t=0$ and $t=\sigma$ given in the definition of $\operatorname{dom}_{\boldsymbol{\tau}}$. By classical theory of differential operators (see \cite{Weidmann}) the equation ${\boldsymbol{\tau}} g=\lambda g$ has at most one $L^2_R$ solution with a given boundary condition at $t=0$. Since $H( \cdot,\lambda)$ and $f$ are both solutions and have the same boundary conditions, they must be parallel, which means that $H( \cdot, \lambda)$ is the eigenfunction and $\mathfrak u_1 \parallel H(\sigma,\lambda)$. In particular, the zero set of $H(\sigma,z)^tJ\mathfrak u_1$ is the same as the spectrum of ${\boldsymbol{\tau}}$.
\end{proof}
\begin{corollary}\label{cor:spec} If ${\boldsymbol{\tau}}=\mathtt{Dir}(R,\mathfrak u_0,\mathfrak u_1)$ satisfies Assumptions 1 and 2, then the eigenfunctions are continuous at both endpoints of $\mathcal I$.
\end{corollary}
\begin{proof}
If $\mathcal I$ is closed from the right then these statements follow immediately from Proposition \ref{prop:H_L2}. (Note that by \eqref{eq:H_L2} the function $H(1, \cdot)$ cannot be constant.)
If $\mathcal I$ is closed from the left then we can just reverse time.
\end{proof}
\subsection{Spectral measure}
In this section we will work with the case $\bar {\mathcal I}=[0,1]$.
Consider a Dirac operator ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot}, \mathfrak u_0,\mathfrak u_1)$
satisfying Assumptions 1 and 2.
Denote the spectrum of ${\boldsymbol{\tau}}$ by $\operatorname{spec} {\boldsymbol{\tau}}$, and let $f_\lambda$ denote an eigenfunction corresponding to an eigenvalue $\lambda$.
\begin{definition}\label{def:spectral}
The {\bf left} and {\bf right spectral measures} of ${\boldsymbol{\tau}}$ are defined as
\begin{align}
\mu_{\textup{left}}=\sum_{\lambda\in \operatorname{spec} {\boldsymbol{\tau}}} \frac{f_\lambda(0)^t f_\lambda(0)}{\|f_\lambda\|_R^2} \delta_\lambda, \qquad
\mu_{\textup{right}}=\sum_{\lambda\in \operatorname{spec} {\boldsymbol{\tau}}} \frac{f_\lambda(1)^t f_\lambda(1)}{\|f_\lambda\|_R^2} \delta_\lambda.
\end{align}
\end{definition}
By Corollary \ref{cor:spec} these measures are well-defined. If ${\mathcal I}$ is closed from the right then $f_\lambda(\cdot)$ can be chosen as $H(\cdot,\lambda)$ defined in Proposition \ref{prop:H_ODE}, see also Proposition \ref{prop:H_L2}.
\begin{definition}\label{def:phase}
Let ${\mathcal I}$ be closed on the right, and $\mathfrak u_0=[1,0]^t$. Suppose
that $R$, $\mathfrak u_0$ satisfy Assumptions 1 and 2, and consider $H$ from \eqref{eq:H}. Denote $H(t,z)=[A(t,z),B(t,z)]^t$, and define the phase function of $R$ as
\begin{align}\label{eq:alpha}
\alpha(t, \lambda)=2 \Im \log(A(t,\lambda)-i B(t,\lambda)).
\end{align}
Here we take the continuous branch of the logarithm so that $\alpha(t,0)=0$ for $t\in {\mathcal I}$. We write $A(\lambda)=A(1,\lambda)$ and $B(\lambda)=B(1,\lambda)$.
\end{definition}
The next result shows how we can identify the spectral measure of a Dirac operator using the phase function.
\begin{lemma}\label{lem:phase_spectral1}
Let ${\mathcal I}$ be closed on the right, $\mathfrak u_0=[1,0]^t$, and $\mathfrak u_1=[-q,-1]^t$ or $\mathfrak u_1=[1,0]^t$. Suppose that $R, \mathfrak u_0$ satisfy Assumptions 1 and 2, let ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot},\mathfrak u_0,\mathfrak u_1)$, and consider the phase function $\alpha$ given in Definition \ref{def:phase}. Then we have
$$
\mu_{\textup{right}}=2\sum_{k\in \mathbb Z}(\alpha^{-1})'(2\pi k +u) \delta_{\alpha^{-1}(2\pi k+u)}
$$
where $\cot(u/2)=-q$ if $\mathfrak u_1=[-q,-1]^t$ and $u=0$
if $\mathfrak u_1=[1,0]^t$. The weight of an eigenvalue $\lambda$ is given by
\[ \mu_{\textup{right}}(\lambda)=\frac{2}{\partial_\lambda \alpha(1,\lambda)}
= \frac{A^2(\lambda)+B^2(\lambda)}{A'(\lambda) B(\lambda)-A(\lambda)B'(\lambda)}.
\]
\end{lemma}
\begin{proof}
By Proposition \ref{prop:H_L2}
the spectrum of ${\boldsymbol{\tau}}$ is given by the zero set of the function $H(1,z)^tJ\mathfrak u_1$. We have $H(1,z)^tJ\mathfrak u_1=A(z)-qB(z)$ if $\mathfrak u_1=[-q,-1]^t$, and $H(1,z)^tJ\mathfrak u_1=B(z)$ if $\mathfrak u_1=[1,0]^t$. By Definition \ref{def:phase} the zero set of $H(1,z)^tJ\mathfrak u_1$ is exactly the set $\{\alpha^{-1}(2\pi k+u): k\in {\mathbb Z}\}$.
If $\lambda\in {\mathbb R}$ is an eigenvalue then by \eqref{eq:H_L2} of Proposition \ref{prop:H_L2} we have
\[
\int_0^1 H^t R H\,ds= H^t J \partial_\lambda H(1)=A'(\lambda)B(\lambda)-A(\lambda)B'(\lambda).
\]
This gives the expression
\[ \mu_{\textup{right}}(\lambda)=\frac{H(1,\lambda)^t H(1,\lambda)}{H^t(1,\lambda)J\partial_{\lambda}H(1,\lambda)}= \frac{A^2(\lambda)+B^2(\lambda)}{A'(\lambda) B(\lambda)-A(\lambda)B'(\lambda)}.
\]
We also have
\[
\partial_\lambda \alpha(1,\lambda)=2 \Im \frac{A'-i B'}{A-i B}= 2\frac{A'(\lambda) B(\lambda)-A(\lambda)B'(\lambda)}{A^2(\lambda)+B^2(\lambda)},
\]
which proves $
\mu_{\textup{right}}(\lambda)=\frac{2}{\partial_\lambda \alpha(1,\lambda)}$.
\end{proof}
\begin{lemma}\label{lem:spec_convergence} Assume that ${\mathcal I}$ is closed on the right. Let ${\boldsymbol{\tau}}$, $\{{\boldsymbol{\tau}}_n, n\ge 1\}$ be Dirac operators on ${\mathcal I}$ with common left boundary condition $\mathfrak u_0$
satisfying Assumptions 1 and 2. Assume that
$$\|{\mathtt{r}\,} \tau_n -{\mathtt{r}\,} \tau\|_{\textup{HS}}\to 0, \qquad \mathfrak t_{\tau_n}\to \mathfrak t_{\tau}, \qquad \int_{{\mathcal I}} |a_0(s)-a_{0,n}(s)|^2ds\to 0,$$
as $n\to \infty$.
Then in the vague topology of measures,
\begin{align}
\mu_{\textup{right},n}\rightarrow \mu_{\textup{right}}, qquad \mu_{\textup{left},n}\rightarrow \mu_{\textup{left}}.
\end{align}
\end{lemma}
\begin{proof}
From ${\mathtt{r}\,} \tau_n\to {\mathtt{r}\,} \tau$ we get $\lambda_{k,n}\to \lambda_k$. From the continuity bound of Proposition \ref{prop:H_cont} we get that $H_n\to H$ uniformly on compacts. This implies that we must have $\mathfrak u_{1,n}\to \mathfrak u_1$. Indeed, assume that there is a subsequence with a different limit $\mathfrak{v}$. Then the zero set of $\mathfrak u_{1,n}^tJ H_n$ (the spectrum of ${\boldsymbol{\tau}}_n$) would converge locally on compacts to the
zero set of $\mathfrak{v}^t J H$ by Hurwitz's theorem, which would contradict $\lambda_{k,n}\to \lambda_k$.
To prove that the right spectral weights converge, we use the formula given by Lemma \ref{lem:phase_spectral1}.
Since $A-iB$ has no real zeros, we can extend the definition of $\alpha$ from \eqref{eq:alpha} to a neighborhood of ${\mathbb R}$. We also get that $\alpha_n\to \alpha$ uniformly on compacts in some neighborhood of $\mathbb R$. Since $\alpha$ is analytic, this implies convergence of the derivatives. Hence from Lemma \ref{lem:phase_spectral1} we get $ \mu_{\textup{right},n}(\lambda_{k,n})\rightarrow \mu_{\textup{right}}(\lambda_k)$, proving the lemma for the right spectral measure. Since $H_n(\sigma,\cdot)\to H(\sigma,\cdot)$ uniformly on compacts, this implies that $\|H_n(\cdot, \lambda_{k,n})\|_{R_n}^2\to \|H(\cdot, \lambda_{k})\|_{R}^2$, which in turn shows $\mu_{\textup{left},n}\rightarrow \mu_{\textup{left}}$, as $H_n(0,\lambda)=H(0,\lambda)=\mathfrak u_0$.
\end{proof}
\subsubsection*{Basic transformations, path reversal and spectral measure}
We introduce the projection operator
$
\mathcal P\binom{z_1}{z_2}=\frac{z_1}{z_2},
$
with the natural extension $\mathcal P \binom{a}{0}=\infty\in \partial {\mathbb H} $ for a nonzero real $a$.
For the boundary conditions of an operator $\mathtt{Dir}(R_{\cdot},\mathfrak u_0,\mathfrak u_1)$ only the directions of $\mathfrak u_0$, $\mathfrak u_1$ matter, hence we can also use $\mathcal P \mathfrak u_0, \mathcal P \mathfrak u_1$ to identify the boundary conditions.
A $2\times 2$ matrix $A$ with a nonzero determinant can be identified with a linear fractional transformation via $z\to \mathcal P A \binom{z}{1}$. For an $A$ with real entries the corresponding linear fractional transformation is an isometry of the hyperbolic plane ${\mathbb H} $. We record the form of the hyperbolic rotation of ${\mathbb H} $ about $i$ taking $r\in {\mathbb R}$ to $\infty$ both as a $2\times 2$ matrix and the corresponding linear fractional transformation.
\begin{align}\label{def:hyprot}
T_r=\frac{1}{\sqrt{1+r^2}}\mat{r}{1}{-1}{r}, \qquad \mathcal{T}_r(z)=\mathcal P T_r \binom{z}{1}=\frac{rz+1}{r-z}.
\end{align}
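As a quick check that $\mathcal{T}_r$ is indeed the rotation about $i$ sending $r$ to $\infty$:
\begin{align*}
\mathcal{T}_r(r)=\frac{r^2+1}{r-r}=\infty, \qquad
\mathcal{T}_r(i)=\frac{ri+1}{r-i}=\frac{(1+ri)(r+i)}{r^2+1}=\frac{i(1+r^2)}{r^2+1}=i.
\end{align*}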
The following lemma summarizes how an isometry of ${\mathbb H} $ (in particular, a hyperbolic rotation) acts on a Dirac operator, the statements follow from the definitions.
\begin{lemma}\label{lem:isometry}
Suppose that $Q\in {\mathbb R}^{2\times 2}$ with $\det Q=1$ and define $\mathcal Q: {\mathbb H} \to {\mathbb H} $ via $\mathcal Q z=\mathcal P Q\binom{z}{1}$. Let ${\boldsymbol{\tau}}=\mathtt{Dir}(R_{\cdot}, \mathfrak u_0, \mathfrak u_1)=\mathtt{Dir}(x+i y, \mathfrak u_0, \mathfrak u_1)$ be a Dirac operator satisfying Assumptions 1 and 2. Then $\tilde {\boldsymbol{\tau}}=Q {\boldsymbol{\tau}} Q^{-1}=\mathtt{Dir}(\tilde R, \tilde{\mathfrak u}_0, \tilde{\mathfrak u}_1)=\mathtt{Dir}(\tilde x+i \tilde y, \tilde{\mathfrak u}_0, \tilde{\mathfrak u}_1)$ with
\[
\tilde R=JQJ^{-1}R Q^{-1}, \quad \tilde x+ i \tilde y=\mathcal{Q}(x+i y), \quad \tilde{\mathfrak u}_j=Q \mathfrak u_j.
\]
The operator $\tilde {\boldsymbol{\tau}}$ satisfies Assumptions 1-2, and if ${\boldsymbol{\tau}}$ satisfies Assumption 3 then the same holds for $\tilde {\boldsymbol{\tau}}$.
If $Q=T_r$ for some $r\in {\mathbb R}$ then the right and left spectral measures of ${\boldsymbol{\tau}}$ agree with those of $\tilde {\boldsymbol{\tau}}$.
\end{lemma}
Let $\rho f(t)=f(1-t)$ be the time-reversal operator on functions defined on $[0,1)$ or $(0,1]$. Let $S=\mat{1}{0}{0}{-1}$, and $\mathfrak{r}: {\mathbb C}\to {\mathbb C}$ be defined as $\mathfrak{r}(z)=-\bar z$. The following lemma is a simple consequence of the definitions.
\begin{lemma}\label{lem:reversal}
Suppose that ${\boldsymbol{\tau}}=\mathtt{Dir}(z,\mathfrak u_0,\mathfrak u_1)$ is a Dirac operator satisfying Assumptions 1 and 2. Then $
\tilde {\boldsymbol{\tau}}=\rho^{-1} S {\boldsymbol{\tau}} S \rho
$
also satisfies Assumptions 1 and 2. We have $\tilde {\boldsymbol{\tau}}=\mathtt{Dir}(\tilde z, \tilde{\mathfrak u}_0, \tilde{\mathfrak u}_1)$ with $\tilde z=\rho \mathfrak{r} z$, $\tilde{\mathfrak u}_0=-S\mathfrak u_1$, and $\tilde{\mathfrak u}_1=-S\mathfrak u_0$. Moreover,
\[
\mu_{\textup{left},{\boldsymbol{\tau}}}=\mu_{\textup{right},\tilde{\boldsymbol{\tau}}}, \qquad \mu_{\textup{right},{\boldsymbol{\tau}}}=\mu_{\textup{left},\tilde{\boldsymbol{\tau}}}.
\]
\end{lemma}
\section{Biasing by the weight at zero and the Palm measure}\label{sec:biasing}
In this section we discuss the notion of conditioning a random measure to charge location $x$: ``biasing by the weight of $x$''. For translation-invariant random measures, this coincides with the notion of the Palm measure.
\begin{definition}\label{def:biased} Given a random measure $\mu$ on a metric space $S$, let $\mu_\varepsilon$ have Radon-Nikodym derivative $$\frac{\mu(B_\varepsilon(x))}{E\mu(B_\varepsilon(x))}$$
with respect to $\mu$. That is, $\mu_\varepsilon$ is $\mu$ biased by the weight of $B_\varepsilon(x)$.
We define {\bf $\mu$ biased by the weight of $x\in S$} as the $\varepsilon\to 0$ distributional limit of $\mu_\varepsilon$ if it exists.
\end{definition}
\begin{remark}
If $\mu$ is a random probability measure, and $X$ is a random pick from $\mu$, then $\mu$ biased by the weight of $x$ can be interpreted as the posterior distribution of $\mu$ given $X=x$ in the Bayes sense.
\end{remark}
Often a slightly more general version of Definition \ref{def:biased} is needed, where the metric space is {\bf decorated} with a random function $\xi:S\to R$, where $R$ is an arbitrary measure space. The {\bf decoration} $\xi$ may seem unnecessary at first, but it is a standard tool in Palm measure theory, and it will be useful in defining Palm measures for operators.
\begin{definition}\label{def:biased1} Let $\mu$ be a random measure on a metric space $S$, and let $\xi$ be a random measurable function $S\to R$, where $R$ is a measure space.
Let $(\mu_\varepsilon,\xi_\varepsilon)$ have Radon-Nikodym derivative $$\frac{\mu(B_\varepsilon(x))}{E\mu(B_\varepsilon(x))}$$
with respect to $(\mu,\xi)$. That is, $(\mu_\varepsilon,\xi_\varepsilon)$ is $(\mu,\xi)$ biased by the weight of $B_\varepsilon(x)$.
We define {\bf $(\mu,\xi)$ biased by the weight of $x\in S$} as the $\varepsilon\to 0$ distributional limit of $(\mu_\varepsilon,\xi_\varepsilon)$ if it exists.
\end{definition}
When $\xi$ is constant, it can be ignored and Definition \ref{def:biased1} is equivalent to Definition \ref{def:biased}.
Next, we define the Palm measure of a decorated, translation invariant random measure with finite intensity.
One could ignore the decoration $\xi$ for the first reading, and think of it as almost surely constant. Even in the general case, it is good to think of $\xi$ as ``coming along for the ride''.
Let $\mu$ be a random measure on $\mathbb R$, and let $\xi$ be a random function from $\mathbb R$ to some measure space $R$. We assume that $(\mu,\xi)$ is translation invariant:
\[
(\mu(\cdot +t),\xi(\cdot+t))\ed(\mu,\xi), \qquad \text{for all }t\in \mathbb R,
\]
and that for some positive $\rho$ \begin{equation}\label{e:mua}E\mu(A)=|A|\rho \qquad \text{for any Borel set $A\subset \mathbb R$.}
\end{equation}
The number $\rho$ is called the {\bf intensity} of $\mu$.
\begin{definition}\label{def:palm}
Let $A$ be a subset of $\mathbb R$ with positive and finite Lebesgue measure. Let $(\mu_A,\xi_A)$ be distributed as the biasing of $(\mu,\xi)$ by $\mu(A)$, namely for any measurable set $M$ we have
$$
P((\mu_A,\xi_A)\in M)=\frac{E[\mu(A);\,(\mu,\xi) \in M]}{E\mu(A)}.
$$
Given $\mu_A$, let $X$ be a random point picked from the probability measure $\mu_A(A\cap \cdot)/\mu_A(A)$, the normalized version of $\mu_A$ restricted to $A$. Then the {\bf Palm measure} $(\mu_*,\xi_*)$ of $(\mu,\xi)$ is defined as the translation $(\mu_A,\xi_A)(\cdot-X)$.
\end{definition}
When $\xi$ is constant, in words, the Palm measure is the re-centering of the $\mu(A)$-biased version $\mu_A$ of $\mu$ at a random point picked from the restriction of $\mu_A$ to $A$.
Definition \ref{def:palm} can be summarized in terms of bounded test functions $f$ as follows:
$$
Ef(\mu_*,\xi_*)=\frac{1}{E\mu(A)}E\int_A f(\mu(\cdot -x),\xi(\cdot-x)) \mu(dx),
$$
which agrees with the classical definition, see \cite{Kallenberg}, Chapter 11, equation (1). There, in Lemma 11.2 it is shown that the definition does not depend on the set $A$.
\begin{proposition} \label{prop:Palm} Let $(\mu,\xi)$ be a translation-invariant decorated random measure on $\mathbb R$, and assume that $E\mu([0,1])=\rho<\infty$. Then $(\mu,\xi)$ biased by the weight of $0$ exists and equals the Palm measure of $(\mu,\xi)$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:Palm}]
By Definition \ref{def:palm} applied with $A=B_\varepsilon(0)$ (so that $\mu_A$ is the measure $\mu_\varepsilon$ of Definition \ref{def:biased1}), for each $\varepsilon>0$ the Palm measure is given by
$\nu=\mu_\varepsilon(\cdot-X_\varepsilon)$ where $X_\varepsilon$ is picked from a probability measure supported on $B_\varepsilon(0)$. Equivalently,
$$
\mu_\varepsilon=\nu(\cdot+X_\varepsilon).
$$
Since $\nu$ does not depend on $\varepsilon$, and $X_\varepsilon$ converges in law to $0$, the pair $(\nu,X_\varepsilon)$ converges in law to $
(\nu,0)$. By the continuous mapping theorem, $\mu_\varepsilon=\nu(\cdot+X_\varepsilon)$ converges in law to $\nu(\cdot+0)=\nu$.
\end{proof}
\begin{remark}\label{rem:circle}
In Definition \ref{def:palm} we defined the Palm measure for decorated translation invariant random measures on ${\mathbb R}$ with a finite intensity. The same setup (with the appropriate modifications) works for rotation invariant random measures on the unit circle with a finite expected total weight. In particular, Proposition \ref{prop:Palm} holds for such random measures.
\end{remark}
Let $\mathbb S=[0,2\pi)$ with the topology of the circle, and let $\{\cdot \}_{2\pi}$ be the standard covering map $\mathbb R\to \mathbb S$. In other words, $\{x \}_{2\pi}=x-2\pi \lfloor x/(2\pi)\rfloor$, the single element of $[0,2\pi)\cap (x+2\pi {\mathbb Z})$.
\begin{lemma}\label{lem:pullback}
Let $\kappa>0$ and let $\lambda$ be a differentiable random bijection ${\mathbb R}\to {\mathbb R}$, and set $V=\{\lambda^{-1}(0)\}_{2\pi}$. Let $\gamma$ be a random variable taking values in a Polish space $\mathcal S$.
For $u\in \mathbb R$, consider the measure
\begin{align}\label{def:Xi}
\Xi_u=\kappa \sum_{w\in 2\pi {\mathbb Z}+u} \lambda'(w)\delta_{\lambda(w)}.
\end{align}
Let $U$ be independent of $(\langlembda,\gamma)$, and assume that the law of $\{U\}_{2\pi}$ is absolutely continuous with bounded density $f$ on $\mathbb S$. Moreover, assume that on the support of the law of the random variable $V$ the density $f$ is positive and continuous on $\mathbb S$.
Then $(\Xi_U,(U,\gamma))$ biased by the weight of zero exists and has the law of $(\Xi_{V},(V,\gamma))$ biased by $f(V)$. In particular, if $U$ is uniform on $\mathbb S$ or $\lambda(0)=0$ a.s., then $(\Xi_U,(U,\gamma))$ biased by the weight of zero has the law of $(\Xi_{V},(V,\gamma))$.
\end{lemma}
The variable $\gamma$ is just coming along for the ride; it is not essential in the proof of the lemma. However, we will use this more general statement in Proposition \ref{prop:Dir_Spec_weight} below.
\begin{proof} By definition, multiplication by a constant $\kappa$ commutes with biasing, so it suffices to show the claim for $\kappa=1$.
Let $A_\varepsilon=[-\varepsilon,\varepsilon]$ and $\varepsilon\to 0$. It suffices to show that the joint law of $(\lambda,\gamma, \{U\}_{2\pi})$ biased by $\Xi_U(A_\varepsilon)$ converges to the law of $(\lambda,\gamma, V)$ biased by $f(V)$.
Now let
$\varphi$ be a bounded continuous test function. We will show that as $\varepsilon\to 0$
\begin{equation}\label{eq:a-cont}
\frac1{E[\Xi_U[A_\varepsilon]]}
{E[\varphi(\lambda, \gamma,\{U\}_{2\pi}) \Xi_U[A_\varepsilon]]}\to \frac{1}{E[f(V)]}E[\varphi(\lambda, \gamma, V) f(V)].
\end{equation}
We have
\begin{align*}
{E[\varphi(\lambda, \gamma,\{U\}_{2\pi}) \Xi_U[A_\varepsilon]]}&=
E[\varphi(\lambda, \gamma,\{U\}_{2\pi}) \sum_{w\in 2\pi {\mathbb Z}+U} \lambda'(w) 1_{A_\varepsilon}(\lambda(w))]\\
&=E\int_{{\mathbb R}} \varphi(\lambda, \gamma,\{ u \}_{2\pi}) \lambda'(u) 1_{A_\varepsilon}(\lambda(u)) f(\{u\}_{2\pi})\, du.
\end{align*}
By a change of variables, this equals
\begin{align*}
E\int_{-\varepsilon}^{\varepsilon} \varphi(\lambda, \gamma,\{ \lambda^{-1}(x) \}_{2\pi}) f(\{\lambda^{-1}(x)\}_{2\pi})\, dx
&=2\varepsilon E \left[\varphi(\lambda, \gamma,\{ \lambda^{-1}(X_\varepsilon) \}_{2\pi}) f(\{\lambda^{-1}(X_\varepsilon)\}_{2\pi}) \right]
\end{align*}
where $X_\varepsilon$ is uniform on $(-\varepsilon,\varepsilon)$ and independent of $\lambda$.
As $\varepsilon\to 0$ we have $\{\lambda^{-1}(X_\varepsilon)\}_{2\pi}\to V$ in probability. By the bounded convergence theorem and the assumptions on $f$ we get
\[
\frac{1}{2\varepsilon} E[\varphi(\lambda, \gamma,\{U\}_{2\pi}) \Xi_U[A_\varepsilon]]\to E \left[\varphi(\lambda, \gamma,V) f(V) \right].
\]
This convergence (together with the case $\varphi=1$) now
implies \eqref{eq:a-cont}, and the statement of the lemma follows.
\end{proof}
\subsection*{Biasing and Dirac operators}
\begin{proposition}\label{prop:Dir_Spec_weight}
Suppose that ${\mathcal I}$ is closed on the right, and $R$, $\mathfrak u_0=[1,0]^t$ satisfy Assumptions 1 and 2 on ${\mathcal I}$. For an $r\in {\mathbb R}\cup\{\infty\}$ we denote by $\mu_{R,r}$ the right spectral measure of the Dirac operator ${\boldsymbol{\tau}}_r=\mathtt{Dir}(R_{\cdot}, \infty, r)$ on ${\mathcal I}$.
Let $q$ be a random variable on ${\mathbb R}\cup\{\infty\}$ that is independent of $R$, and for which the law of the angle $2\operatorname{arccot}(q)\in \mathbb S$ has bounded density which is continuous and positive at $0$.
Then $(\mu_{R,q},(R,q))$ biased by the weight of 0 has law given by $(\mu_{R,\infty},(R,\infty))$.
\end{proposition}
In words, biasing the operator ${\boldsymbol{\tau}}_q$ by the weight of zero in the right spectral measure turns the random right boundary condition $q$ into $\infty$.
\begin{proof}
Consider the phase function $\alpha$ for the operator ${\boldsymbol{\tau}}_r$ introduced in \eqref{eq:alpha}; this does not depend on the right boundary condition $r$. Let $\lambda:{\mathbb R}\to {\mathbb R}$ be the inverse of the function $\alpha(1,\cdot)$, and note that $\lambda^{-1}(0)=0$.
By Lemma \ref{lem:phase_spectral1} the spectral measure $\mu_{R, q}$ is defined as \eqref{def:Xi}
with $\kappa=2$ and $u=-2\operatorname{arccot}(q)\in (-\pi,\pi]$.
We can now apply Lemma \ref{lem:pullback} with $\gamma=(R,q)$. Using Lemma \ref{lem:phase_spectral1} again we identify $\Xi_V=\Xi_0$ as $\mu_{R,\infty}$.
\end{proof}
Proposition \ref{prop:Dir_Spec_weight} applied to ${\boldsymbol{\tau}}_\beta$ leads to the following corollary.
\begin{corollary}\label{cor:Sine_biased}
Consider $R, q$ and ${\boldsymbol{\tau}}_\beta$ introduced in \eqref{eq:xy}-\eqref{tau_0}. For $r\in {\mathbb R}$ let ${\boldsymbol{\tau}}_{\beta,r}=\mathtt{Dir}(R_{\cdot},\infty,r)$, and let $\mu=\mu_{\textup{right},{\boldsymbol{\tau}}_\beta}$ and $\mu_r=\mu_{\textup{right},{\boldsymbol{\tau}}_{\beta,r}}$. Then
$(\mu, (R,q))$ biased by the weight of 0 in $\mu$ has the same distribution as $(\mu_\infty, (R,\infty))$. In particular, $\mu$ biased by the weight of zero is $\mu_\infty$, the right spectral measure of $\mathtt{Dir}(R_{\cdot},\infty,\infty)$.
\end{corollary}
\section{Finite support measures on the unit circle}
\langlebel{sec:finite}
In \cite{BVBV_sbo} it was shown that one can associate a Dirac operator to a finitely supported probability measure on the unit circle. The construction relies on the theory of orthogonal polynomials \cite{OPUC}; see \cite{SimonOPUC1foot} for a shorter summary.
Let $\nu$ be a probability measure whose support is exactly $n$ points $e^{i\lambda_j}, 1\le j \le n$ on the unit circle $\partial {\mathbb U}$. Denote the Gram-Schmidt orthogonalization of the polynomials $1,z, \cdots, z^n$ with respect to $\nu$ by $\Phi_k, k=0, \dots, n$. These are the orthogonal polynomials with respect to $\nu$, with $\Phi_n(z)=\prod_{j=1}^n(z-e^{i \lambda_j})$. Together with the reversed polynomials $\Phi_k^*(z)=z^k \overline{\Phi_k(1/\bar z)}$, they satisfy the famous Szeg\H o recursion (see e.g.~Section 1.5, volume 1 of \cite{OPUC}):
\begin{align}\label{eq:Szego1}
\binom{\Phi_{k+1}}{\Phi_{k+1}^*}=
\mat{1}{-\bar \alpha_k}{-\alpha_k}{1} \mat{z}{0}{0}{1} \binom{\Phi_{k}}{\Phi_{k}^*}, \qquad \binom{\Phi_0}{\Phi_0^*}
=\binom{1}{1},\qquad 0\le k\le n-1.
\end{align}
Here $\alpha_k$, $0\le k\le n-1$, are the Verblunsky coefficients; they satisfy $|\alpha_k|<1$ for $0\le k\le n-2$ and $|\alpha_{n-1}|=1$. The map between the probability measures supported on $n$ points on $\partial {\mathbb U}$ and the Verblunsky coefficients $\alpha_0, \dots, \alpha_{n-1}$ is invertible, and both the map and its inverse are continuous.
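To make the correspondence concrete, the following short Python sketch (an illustration only, using nothing beyond NumPy; the helper names are ours and are not part of the construction) recovers the Verblunsky coefficients of a randomly generated $n$-point measure by Gram-Schmidt orthogonalization, using that under the recursion \eqref{eq:Szego1} one has $\alpha_k=-\overline{\Phi_{k+1}(0)}$.
\begin{verbatim}
# Illustration (ours): recover the Verblunsky coefficients of a measure
# supported on n points of the unit circle by Gram-Schmidt.
import numpy as np

rng = np.random.default_rng(0)
n = 5
lam = np.sort(rng.uniform(0, 2 * np.pi, n))   # support angles lambda_j
pts = np.exp(1j * lam)                        # support points e^{i lambda_j}
w = rng.dirichlet(np.ones(n))                 # weights of nu

def ip(p, q):
    # <p,q> = int p conj(q) d nu, for coefficient vectors (lowest degree first)
    return np.sum(w * np.polyval(p[::-1], pts) * np.conj(np.polyval(q[::-1], pts)))

# monic orthogonal polynomials Phi_0, ..., Phi_n
Phi = [np.array([1.0 + 0j])]
for k in range(1, n + 1):
    p = np.zeros(k + 1, dtype=complex)
    p[k] = 1.0                                    # start from z^k
    for q in Phi:
        p[: len(q)] -= ip(p, q) / ip(q, q) * q    # subtract projections
    Phi.append(p)

# alpha_k = -conj(Phi_{k+1}(0)) under the recursion (eq:Szego1)
alpha = np.array([-np.conj(Phi[k + 1][0]) for k in range(n)])
print(np.abs(alpha))
\end{verbatim}
The printed moduli should be strictly below $1$ for $k\le n-2$, with the last one equal to $1$ up to rounding, in line with the discussion above.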
We denote by $\mathcal{U}(z):=\frac{z-i}{z+i}$ the Cayley transform mapping $\overline {\mathbb H} $ to $ \overline {\mathbb U}$, and introduce its matrix version
\begin{equation}\label{eq:U}
U=\mat{1}{-i}{1}{i}.
\end{equation}
The following definition constructs a path and a Dirac operator from a probability measure supported on finitely many points on $\partial {\mathbb U}$.
\begin{definition}\label{def:path}
Let $\nu$ be a probability measure supported on $n$ distinct points on the unit circle, and let $\alpha_k, 0\le k\le n-1$ be its Verblunsky coefficients. Define $b_k$, $0\le k\le n$, as
\begin{align}\label{eq:discrete_path_b}
b_0&=0, \quad b_k=\mathcal P \mat{1}{\bar \alpha_0}{\alpha_0}{1}\cdots \mat{1}{\bar \alpha_{k-1}}{\alpha_{k-1}}{1} \binom{0}{1},
\qquad 1\le k\le n,
\end{align}
and $z_k=\mathcal{U}^{-1} (b_k)$, $0\le k\le n$.
Let $z(t)=z_{\lfloor nt\rfloor}$, $\mathfrak u_0=[1,0]^t$ and $\mathfrak u_1=[-z_n, -1]^t$ if $z_n \in {\mathbb R}$ and $\mathfrak u_1=[1,0]^t$ if $z_n=\infty$. We call $z_0, \dots, z_n$ the {\bf path parameter} of $\nu$ in ${\mathbb H} $, and ${\boldsymbol{\tau}}=\mathtt{Dir}(z(\cdot), \mathfrak u_0, \mathfrak u_1)$ the {\bf Dirac operator corresponding to $\nu$}. We call $b_0, \dots, b_n$ the path parameter of $\nu$ in ${\mathbb U}$.
\end{definition}
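As an illustration of Definition \ref{def:path}, the short Python sketch below (ours, assuming only NumPy; the variable names are not from the original construction) evaluates the path parameter $b_k$ of \eqref{eq:discrete_path_b} for given Verblunsky coefficients and maps it to ${\mathbb H}$ via $z_k=\mathcal U^{-1}(b_k)=i(1+b_k)/(1-b_k)$.
\begin{verbatim}
# Illustration (ours): path parameter of a measure from its Verblunsky
# coefficients, following (eq:discrete_path_b), and z_k = U^{-1}(b_k).
import numpy as np

rng = np.random.default_rng(1)
n = 6
alpha = rng.uniform(0, 0.9, n) * np.exp(2j * np.pi * rng.uniform(0, 1, n))
alpha[-1] = np.exp(2j * np.pi * rng.uniform())   # |alpha_{n-1}| = 1

b = [0.0 + 0j]
M = np.eye(2, dtype=complex)
for a in alpha:
    M = M @ np.array([[1, np.conj(a)], [a, 1]])
    b.append(M[0, 1] / M[1, 1])                  # P M (0,1)^t
b = np.array(b)

z = 1j * (1 + b) / (1 - b)   # inverse Cayley transform; z_0 = i
print(np.round(z, 3))        # Im z_k > 0 for k <= n-1; z_n is real (or infinite)
\end{verbatim}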
The following result is proved in Proposition 16 of \cite{BVBV_sbo}.
\begin{proposition}
\label{prop:discrete_op_1}
Let $\nu$ be a probability measure supported on $n$ distinct points $e^{i \lambda_j}, 1\le j\le n$ on the unit circle $\{|z|=1\}$. Let $z_k, 0\le k\le n$ be the path parameter of $\nu$ in ${\mathbb H} $ introduced in Definition \ref{def:path}, and let ${\boldsymbol{\tau}}=\mathtt{Dir}(z(\cdot), \mathfrak u_0, \mathfrak u_1)$ be the Dirac operator corresponding to $\nu$.
Then ${\boldsymbol{\tau}}$ satisfies Assumptions 1 and 2 on ${\mathcal I}=[0,1]$, and also Assumption 3 if $z_n\neq \infty$. Moreover, the spectrum of ${\boldsymbol{\tau}}$ is the set $\{n \lambda_j+2\pi n k: 1\le j\le n, k\in {\mathbb Z}\}$. The spectrum contains 0 if and only if $z_n=\infty$.
\end{proposition}
The path parameter of a probability measure supported on $n$ distinct points can be expressed in a more convenient way using the \emph{modified} Verblunsky coefficients $\gamma_k, 0\le k\le n-1$. These are defined
recursively in terms of the Verblunsky coefficients as follows:
\begin{align}
\gamma_0=\bar \alpha_0, \qquad \gamma_k=\bar \alpha_k \prod_{j=0}^{k-1} \frac{1-\bar \gamma_j}{1-\gamma_j} , \quad 1\le k\le n-1. \label{eq:gamma_alpha}
\end{align}
By definition we have $|\gamma_k|=|\alpha_k|$ for all $0\le k\le n-1$, in particular $|\gamma_{n-1}|=1$. The recursion \eqref{eq:gamma_alpha} shows that the map between the regular and modified Verblunsky coefficients is invertible.
The modified Verblunsky coefficients are connected to the affine group of isometries of the hyperbolic plane fixing a particular boundary point. In the half plane representation ${\mathbb H} $ we choose the boundary point to be $\infty$: in that case the transformations are of the form $z\to \frac{1}{y}(z-x)$ with $x+i y\in {\mathbb H} $. The matrix representation and the transformation corresponding to $x+i y\in {\mathbb H} $ are given by
\begin{equation}\label{e:A1def}
A_{x+i y, {\mathbb H} }=
\mat{1}{-x}{0}{y}, \qquad \mathcal A_{x+i y, {\mathbb H} }(z)= \mathcal P A_{x+i y, {\mathbb H} } \binom{z}{1}.
\end{equation}
Note that $x+i y$ is the pre-image of $i$ under $\mathcal A_{x+i y, {\mathbb H} }$.
In the unit disk ${\mathbb U}$ representation of the hyperbolic plane we choose the fixed point to be $1$, in this case the corresponding linear fractional transformations can be parameterized by the pre-image $\gamma \in {\mathbb U}$ of $0$.
The matrix representation and the linear fractional transformation corresponding to $\gamma \in {\mathbb U}$ are given by
\begin{equation}\label{eq:Adef}
A_{\gamma, {\mathbb U}}=
\mat{\frac{1}{1-\gamma}}
{\frac{\gamma}{\gamma-1}}
{\frac{\bar \gamma}{\bar \gamma-1}}
{\frac{1}{1-\bar \gamma}}, \qquad \mathcal A_{\gamma, {\mathbb U}}(z)= \mathcal P A_{\gamma,{\mathbb U}} \binom{z}{1}.
\end{equation}
The half-plane and disk representations can be connected via the Cayley transform $\mathcal U$ and the corresponding matrix $U$:
\begin{align*}
A_{\mathcal U^{-1}(\gamma),{\mathbb H} }=U^{-1} A_{\gamma, {\mathbb U}}\, U, \qquad \mathcal A_{\mathcal U^{-1}(\gamma), {\mathbb H} }=\mathcal U^{-1} \circ \mathcal A_{\gamma, {\mathbb U}} \circ \mathcal U.
\end{align*}
For a particular sequence of modified Verblunsky coefficients $\gamma_k, 0\le k\le n-1$ we introduce the variables $v_k, w_k\in {\mathbb R}\cup \{\infty\}$, $0\le k\le n-1$ as
\begin{align}\label{def:v_w}
v_k+i w_k=\mathcal U^{-1}(\gamma_k)-i.
\end{align}
Note that $v_k, w_k$ are finite for $0\le k \le n-2$ with $w_k>-1$. We have $w_{n-1}=-1$, with $v_{n-1}=\infty$ for $\gamma_{n-1}=1$, and $v_{n-1}\in {\mathbb R}$ otherwise. A direct computation gives
\begin{align}\label{eq:v_w}
w_k=2\Re \frac{\gamma_k}{1-\gamma_k} , \quad v_k=-2 \Im \frac{\gamma_k}{1-\gamma_k}, \quad \text{if $\gamma_k\neq 1$.}
\end{align}
The following proposition shows that the path parameter can be expressed in a simple way in terms of the modified Verblunsky coefficients.
\begin{proposition}
Let $\nu$ be a probability measure supported on $n$ distinct points $e^{i \lambda_j}, 1\le j\le n$ on the unit circle $\partial {\mathbb U}$. Let
$b_k, z_k, 0\le k\le n$ be the path parameters of $\nu$ introduced in Definition \ref{def:path}, and let $\gamma_k, 0\le k\le n-1$ be the modified Verblunsky coefficients, with $w_k, v_k$ defined in \eqref{def:v_w}. Then the following identities hold for $0\le k\le n-1$:
\begin{align}\label{eq:z_rec}
z_{k+1}&=z_k+(v_k+i w_k) \Im z_k, \\
b_{k+1}&=\frac{b_k+\gamma_k\frac{1-b_k}{1-\bar b_k}}{1+\bar b_k \gamma_k \frac{
1-b_k}{1-\bar b_k}}\label{eq:b_rec}.
\end{align}
Moreover, we have
\begin{equation}
\label{eq:bk_cA}
b_k={\mathcal A}_{\gamma_0,{\mathbb U}}^{-1}\circ \cdots\circ {\mathcal A}_{\gamma_{k-1},{\mathbb U}}^{-1}(0),
\end{equation}
with ${\mathcal A}_{\gamma_{n-1},{\mathbb U}}^{-1}(0)$ defined as the limit of ${\mathcal A}_{\gamma,{\mathbb U}}^{-1}(0)$ as $\gamma\to \gamma_{n-1}$ with $\gamma\in {\mathbb U}$.
\end{proposition}
\begin{proof}
From \eqref{eq:Szego1} and \eqref{eq:gamma_alpha} it follows by induction that
\begin{align}\label{eq:Phi_1}
\Phi_k(1)=\prod_{j=0}^{k-1} (1-\gamma_j), \qquad \Phi_k^*(1)=\prod_{j=0}^{k-1} (1-\bar \gamma_j).
\end{align}
Using this identity with \eqref{eq:gamma_alpha} again, we obtain
\[
\mat{1}{-\bar \alpha_k}{-\alpha_k}{1}=\mat{\Phi_{k+1}(1)}{0}{0}{\Phi_{k+1}^*(1)} A_{\gamma_k,{\mathbb U}} \mat{\Phi_{k}(1)}{0}{0}{\Phi_{k}^*(1)}^{-1}.
\]
If $|\alpha|<1$ then
\[
\mat{1}{-\bar \alpha}{-\alpha}{1}^{-1}=\frac{1}{1-|\alpha|^2} \mat{1}{\bar \alpha}{\alpha}{1}.
\]
Hence from \eqref{eq:discrete_path_b} we get
\begin{align*}
b_k=\mathcal P \mat{1}{-\bar \alpha_0}{-\alpha_0}{1}^{-1}\cdots \mat{1}{-\bar \alpha_{k-1}}{-\alpha_{k-1}}{1}^{-1} \binom{0}{1}=\mathcal P A_{\gamma_0, {\mathbb U}}^{-1} \cdots A_{\gamma_{k-1}, {\mathbb U}}^{-1} \binom{0}{1},
\end{align*}
proving \eqref{eq:bk_cA}. Since $A_{\gamma, {\mathbb U}}^{-1}(0)=\gamma$, for $1\le k\le n-1$ we also get the identity
\begin{align}
A_{b_k, {\mathbb U}}=A_{\gamma_{k-1}, {\mathbb U}} \cdots A_{\gamma_0, {\mathbb U}}=A_{\gamma_{k-1}, {\mathbb U}} A_{b_{k-1}, {\mathbb U}}. \label{eq:Ab_rec}
\end{align}
Since $z_k=\mathcal U^{-1}(b_k)$ and $v_k+i(w_k+1)=\mathcal U^{-1}(\gamma_k)$ equation \eqref{eq:Ab_rec} implies
\begin{align}\label{eq:Az_rec}
A_{z_k, {\mathbb H} }=A_{v_{k-1}+i(w_{k-1}+1), {\mathbb H} } \, A_{z_{k-1}, {\mathbb H} }.
\end{align}
From \eqref{eq:Ab_rec} and \eqref{eq:Az_rec} we obtain \eqref{eq:b_rec} and \eqref{eq:z_rec} for $0\le k\le n-2$. The identities for $k=n-1$ follow in both cases by taking the appropriate limits.
\end{proof}
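The recursions \eqref{eq:z_rec} and \eqref{eq:b_rec} are easy to test numerically. The following Python sketch (ours, using only NumPy; the helper names are assumptions of the illustration) computes the path parameter directly from \eqref{eq:discrete_path_b} and compares it with the output of the recursions driven by the modified Verblunsky coefficients of \eqref{eq:gamma_alpha}; it stops before the boundary step $k=n-1$, where the limiting procedure of the proof would be needed.
\begin{verbatim}
# Sanity check (ours) of the recursions (eq:z_rec)-(eq:b_rec) against the
# direct definition (eq:discrete_path_b).
import numpy as np

rng = np.random.default_rng(2)
n = 6
alpha = rng.uniform(0, 0.9, n) * np.exp(2j * np.pi * rng.uniform(0, 1, n))
alpha[-1] = np.exp(2j * np.pi * rng.uniform())

# modified Verblunsky coefficients, (eq:gamma_alpha)
gamma = np.zeros(n, dtype=complex)
gamma[0] = np.conj(alpha[0])
for k in range(1, n):
    gamma[k] = np.conj(alpha[k]) * np.prod((1 - np.conj(gamma[:k])) / (1 - gamma[:k]))

# direct path parameter and z_k = U^{-1}(b_k)
b_dir, M = [0.0 + 0j], np.eye(2, dtype=complex)
for a in alpha:
    M = M @ np.array([[1, np.conj(a)], [a, 1]])
    b_dir.append(M[0, 1] / M[1, 1])
b_dir = np.array(b_dir)
z_dir = 1j * (1 + b_dir) / (1 - b_dir)

# recursions; stop before the boundary step k = n-1 where |gamma_{n-1}| = 1
b, z = [0.0 + 0j], [1j]
for k in range(n - 1):
    t = gamma[k] * (1 - b[k]) / (1 - np.conj(b[k]))
    b.append((b[k] + t) / (1 + np.conj(b[k]) * t))          # (eq:b_rec)
    vw = 1j * (1 + gamma[k]) / (1 - gamma[k]) - 1j          # v_k + i w_k
    z.append(z[k] + vw * z[k].imag)                         # (eq:z_rec)

print(np.max(np.abs(np.array(b) - b_dir[:n])))   # should be ~ machine precision
print(np.max(np.abs(np.array(z) - z_dir[:n])))
\end{verbatim}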
The following statement extends the results of Proposition \ref{prop:discrete_op_1} to show that the left spectral measure of the Dirac operator associated to a finitely supported probability measure $\nu$ is a scaled and periodically extended version of $\nu$.
\begin{proposition}\label{prop:Dir_discrete}
Let $\nu$ be a probability measure supported on $n$ distinct points $e^{i \lambda_j}, 1\le j\le n$ on $\partial {\mathbb U}$, and let ${\boldsymbol{\tau}}$ be the Dirac operator corresponding to $\nu$ as defined in Definition \ref{def:path}.
Then
\begin{align}\label{eq:disc_Dir_spec}
\mu_{\textup{left}, {\boldsymbol{\tau}}}(n \lambda_j+2n k \pi)=2 n\, \nu(e^{i \lambda_j}), \qquad k\in {\mathbb Z}, 1\le j\le n.
\end{align}
\end{proposition}
\begin{proof}
For $\lambda\in {\mathbb R}$ let $H(t,\lambda)$ be the solution of the eigenfunction equation
\begin{align}\label{eq:ODE_H}
{\boldsymbol{\tau}} H=\lambda H, \qquad H(0,\lambda)=\mathfrak u_0=\binom{1}{0}.
\end{align}
Denote by $z_k, 0\le k\le n$ the path parameter of $\nu$ in ${\mathbb H} $, and introduce
\[
X_k=A_{z_k, {\mathbb H} }.
\]
From Definition \ref{def:path} it follows that ${\boldsymbol{\tau}}=R^{-1} J \frac{d}{dt}$ where
\begin{align}\label{eq:discrete_R}
R(t)=R_k=\frac{X_k^t X_k}{2 \det X_k}, \qquad \text{for $t\in [k/n,(k+1)/n)$}.
\end{align}
Introduce the normalized orthogonal polynomials
\begin{align}\label{def:psi_X}
\Psi_k(z)=\Phi_k(z)/\Phi_k(1), \quad \Psi_k^*(z)=\Phi_k^*(z)/\Phi_k^*(1).
\end{align}
From Proposition 32 of \cite{BVBV_szeta} it follows that
the function $H(t,\lambda)$ satisfies the identities
\begin{align}\label{eq:discrete_H_1}
H(\tfrac{k}{n},\lambda)&=H_k(\lambda)=e^{-\frac{i\lambda k}{2n}}X_k^{-1}U^{-1} \binom{\Psi_k(e^{i \lambda/n})}{\Psi_k^*(e^{i \lambda/n})}, \qquad 0\le k\le n-1,\\ \label{eq:discrete_H_2}
H(t,\lambda)&=X_{k}^{-1} U^{-1} \mat{e^{\frac{i\lambda}{2}(t-k/n)}}{0}{0}{e^{\frac{-i\lambda }{2}(t-k/n)}} U X_k\, H_k(\lambda), \qquad t\in (k/n,(k+1)/n].
\end{align}
From \eqref{eq:discrete_R}, \eqref{eq:discrete_H_1} and \eqref{eq:discrete_H_2} we obtain
\[
\|H(\cdot, \lambda)\|_R^2=\frac1{n} \sum_{k=0}^{n-1} H_k(\lambda)^t R_k H_k(\lambda)=\frac1{n} \sum_{k=0}^{n-1} \frac{1}{2 \Im z_k} |\Psi_k(e^{i \lambda/n})|^2,
\]
where in the last step we also used that $|\Phi_k(e^{i \lambda/n})|=|\Phi_k^*(e^{i \lambda/n})|$ for $\lambda\in {\mathbb R}$.
Setting $\tilde \lambda=n \lambda_j+2n k \pi$ for a $k\in {\mathbb Z}$, $1\le j\le n$ we get
\begin{align}\label{eq:disc_norm_sq}
\|H(\cdot, \tilde \lambda)\|_R^2=\frac1{n} \sum_{k=0}^{n-1} \frac{1}{2\Im z_k} |\Psi_k(e^{i \lambda_j})|^2,
\end{align}
and since $|H(0,\tilde \lambda)|=|\binom{1}{0}|=1$, this leads to
\begin{align}
\mu_{\textup{left}, {\boldsymbol{\tau}}}(n \lambda_j+2n k \pi)= \frac{2n}{\sum_{k=0}^{n-1} \frac{1}{\Im z_k} |\Psi_k(e^{i \lambda_j})|^2}.
\end{align}
Hence to prove \eqref{eq:disc_Dir_spec} we need to show that \begin{align}\label{eq:intermediate}
\nu(e^{i \lambda_j})^{-1}=\sum_{k=0}^{n-1} \frac{1}{\Im z_k} |\Psi_k(e^{i \lambda_j})|^2.
\end{align}
Fix $1\le j\le n$, and let $g$ be the polynomial of degree at most $n-1$ with $g(e^{i \lambda_j})=1$ and $g(e^{i \lambda_\ell})=0$ for $1\le \ell\le n, \ell\neq j$. The polynomials $\Psi_k, 0\le k\le n-1$ are orthogonal with respect to $\nu$, and by construction $\int \bar \Psi_k(z) g(z) d\nu(z)=\bar \Psi_k(e^{i \lambda_j}) \nu(e^{i \lambda_j})$. Hence we get
\[
g(z)=\sum_{k=0}^{n-1}\Psi_k(z) \frac{\bar \Psi_k(e^{i \lambda_j}) \nu (e^{i \lambda_j})}{\|\Psi_k\|_\nu^2},
\]
and using $z=e^{i \lambda_j}$ we obtain
\begin{align}\label{eq:disc_spec_2}
1=\nu(e^{i \lambda_j}) \sum_{k=0}^{n-1} \frac{|\Psi_k(e^{i \lambda_j})|^2}{\|\Psi_k\|^2_\nu} .
\end{align}
The needed identity \eqref{eq:intermediate} follows once we show that $\Im z_k=\|\Psi_k\|_\nu^2$.
By Section 1.5 of volume 1, \cite{OPUC} we have
\[
\|\Phi_k\|_\nu^2=\prod_{\ell=0}^{k-1}(1-|\gamma_\ell|^2),
\]
which together with \eqref{eq:Phi_1} gives $\|\Psi_k\|^2_\nu=\prod_{\ell=0}^{k-1} \frac{1-|\gamma_\ell|^2}{|1-\gamma_\ell|^2}$. On the other hand, from \eqref{eq:v_w} and \eqref{eq:z_rec} we get
\[
\Im z_k=\prod_{\ell=0}^{k-1}(1+2 \Re \frac{\gamma_\ell}{1-\gamma_\ell})=\prod_{\ell=0}^{k-1} \frac{1-|\gamma_\ell|^2}{|1-\gamma_\ell|^2}, \quad 0\le k\le n-1,
\]
which finishes the proof.
\end{proof}
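The key identity $\Im z_k=\|\Psi_k\|_\nu^2$ from the proof can also be checked numerically. The sketch below (ours, using only NumPy; the helper names are part of the illustration, not of the paper) starts from a random $n$-point measure, extracts its Verblunsky coefficients by Gram-Schmidt as in \eqref{eq:Szego1}, builds the path parameter via \eqref{eq:discrete_path_b}, and compares $\Im z_k$ with $\|\Phi_k\|_\nu^2/|\Phi_k(1)|^2$.
\begin{verbatim}
# Numerical illustration (ours) of the identity Im z_k = ||Psi_k||_nu^2
# used in the proof above, starting from a random n-point measure nu.
import numpy as np

rng = np.random.default_rng(7)
n = 5
lam = np.sort(rng.uniform(0, 2 * np.pi, n))
pts = np.exp(1j * lam)
w = rng.dirichlet(np.ones(n))

def ip(p, q):
    return np.sum(w * np.polyval(p[::-1], pts) * np.conj(np.polyval(q[::-1], pts)))

Phi = [np.array([1.0 + 0j])]                 # monic orthogonal polynomials
for k in range(1, n + 1):
    p = np.zeros(k + 1, dtype=complex)
    p[k] = 1.0
    for q in Phi:
        p[: len(q)] -= ip(p, q) / ip(q, q) * q
    Phi.append(p)
alpha = np.array([-np.conj(Phi[k + 1][0]) for k in range(n)])

b, M = [0.0 + 0j], np.eye(2, dtype=complex)  # path parameter (eq:discrete_path_b)
for a in alpha:
    M = M @ np.array([[1, np.conj(a)], [a, 1]])
    b.append(M[0, 1] / M[1, 1])
z = 1j * (1 + np.array(b)) / (1 - np.array(b))

psi_norm_sq = np.array([ip(Phi[k], Phi[k]).real / abs(np.polyval(Phi[k][::-1], 1.0)) ** 2
                        for k in range(n)])
print(np.max(np.abs(z[:n].imag - psi_norm_sq)))   # should be ~ machine precision
\end{verbatim}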
\begin{remark}
As a simple example we can fix $\theta\in (-\pi,\pi)$ and consider the uniform probability measure $\nu$ on the points $e^{i \frac{\theta+2k\pi}{n}}$, $0\le k\le n-1$. In this case
\begin{align*}
\Phi_k(z)=z^k \quad\text{ for $0\le k\le n-1$, and }\qquad \Phi_n(z)=z^n-e^{i \theta}.
\end{align*}
The Verblunsky coefficients are
\begin{align*}
\alpha_k=0 \quad \text{ for $0\le k\le n-2$, and} \qquad \alpha_{n-1}=e^{-i \theta}.
\end{align*}
The modified Verblunsky coefficients are \begin{align*}
\gamma_k=0 \quad \text{ for $0\le k\le n-2$, and }\qquad \gamma_{n-1}=e^{i \theta},
\end{align*}
the path parameter in ${\mathbb H} $ is just $z_k=i$ for $0\le k\le n-1$ with $z_n=\cot(\theta/2)$, and the corresponding function $R(t)$ is $\tfrac12 I$.
The Dirac operator is $2 J \tfrac{d}{dt}$ with boundary conditions $\mathfrak u_0=[1,0]^t$ and $\mathfrak u_1=[-\cot(\theta/2),-1]^t$. The eigenvalues are of the form $2\pi k+\theta, k\in {\mathbb Z}$, and the $L^2_R$-normalized eigenfunction corresponding to $\lambda$ is $\sqrt{2} [\cos(\lambda t/2), \sin(\lambda t/2)]^t$. Therefore the weights of the spectral measure are equal to 2 at the eigenvalues for both $\mu_{\text{left}}$ and $\mu_{\text{right}}$, illustrating the identity \eqref{eq:disc_Dir_spec}.
\end{remark}
\section{Finite approximations of the spectral measure of ${\boldsymbol{\tau}}_\beta$}
The goal of this section is to prove Proposition \ref{prop:sine_weights} identifying the right spectral weights of the ${\boldsymbol{\tau}}_\beta$ operator. Recall the definition of the circular beta-ensemble and the Killip-Nenciu measure.
\begin{definition}\label{def:circ_KN}
The size $n$ circular beta ensemble with parameter $\beta>0$ is a distribution of $n$ points $e^{i\theta_1}, \dots, e^{i \theta_n}$ on $\partial {\mathbb U}$ with joint density function
\begin{align}
\frac{1}{Z_{n,\beta}} \prod_{1\le j< k\le n} \left|e^{i \theta_j}-e^{i \theta_k}\right|^\beta.
\end{align}
The size $n$ Killip-Nenciu measure with parameter $\beta>0$ is a random probability measure $\mu_{n,\beta}^{\textup{KN}}$ with support given by the size $n$ circular beta ensemble, with weights distributed according to the Dirichlet$(\beta/2,\dots, \beta/2)$ distribution, independently of the support.
\end{definition}
\begin{theorem}[\cite{KillipNenciu}, \cite{BNR2009}]\label{thm:KillipNenciu} The Verblunsky coefficients $\alpha_0, \dots, \alpha_{n-1}$ of the random probability measure $\mu_{n,\beta}^{\textup{KN}}$ are independent, rotationally invariant, and for $0\le k\le n-2$ we have $|\alpha_k|^2\sim \textup{Beta}(1,\tfrac{\beta}{2}(n-k-1))$.
The joint distribution of the modified Verblunsky coefficients $\gamma_0, \dots, \gamma_{n-1}$ is the same as the joint distribution of $\alpha_0, \dots, \alpha_{n-1}$.
\end{theorem}
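Theorem \ref{thm:KillipNenciu} yields a direct sampler for the Verblunsky coefficients of $\mu_{n,\beta}^{\textup{KN}}$. The Python sketch below (ours, assuming only NumPy; the function name is an assumption of the illustration) implements it; since $|\alpha_{n-1}|=1$ and its law is rotationally invariant, the last coefficient is sampled uniformly from the unit circle.
\begin{verbatim}
# Sampler sketch (ours) for the Verblunsky coefficients of the Killip-Nenciu
# measure, following Theorem (thm:KillipNenciu).
import numpy as np

def kn_verblunsky(n, beta, rng):
    k = np.arange(n - 1)
    r = np.sqrt(rng.beta(1.0, beta / 2 * (n - k - 1)))   # |alpha_k|, k <= n-2
    phase = np.exp(2j * np.pi * rng.uniform(size=n - 1))
    last = np.exp(2j * np.pi * rng.uniform())            # |alpha_{n-1}| = 1
    return np.append(r * phase, last)

rng = np.random.default_rng(3)
alpha = kn_verblunsky(8, beta=2.0, rng=rng)
print(np.abs(alpha))   # all below 1 except the last entry, which has modulus 1
\end{verbatim}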
\begin{definition}
We denote by $\mathtt{Circ}_{\beta,n}$ the random Dirac operator associated to the Killip-Nenciu measure $\mu_{n,\beta}^{\textup{KN}}$ via Definition \ref{def:path}.
\end{definition}
\begin{definition}
Let $b_1, b_2$ be independent standard Brownian motions and for $t\in [0,1)$ set $v=v(t)=\frac{4}{\beta} \log(1-t)$. Let $\tilde z(t)=\tilde x(t)+i \tilde y(t)$ with $\tilde y(t)=e^{b_2(v)-v/2}$ and $\tilde x(t)=\int_0^v e^{b_2(s)-\frac{s}{2}}\, db_1(s)$. Let $\tilde z(1)=\lim_{t\to 1} \tilde z(t)$, and set $\operatorname{Sine}op=\mathtt{Dir}(\tilde z,\infty,\tilde z(1))$.
\end{definition}
The operator $\operatorname{Sine}op$ was introduced in \cite{BVBV_sbo}; in \cite{BVBV_szeta} it was shown to be orthogonally equivalent to the operator ${\boldsymbol{\tau}}_\beta$ introduced in \eqref{tau_0}.
Recall $\rho, S$ and $\mathfrak{r}$ introduced before Lemma \ref{lem:reversal}.
\begin{proposition}[Proposition 44, \cite{BVBV_szeta}]\label{prop:tau_sine} Let ${\boldsymbol{\tau}}_\beta=\mathtt{Dir}(x+i y,\mathfrak u_0, \mathfrak u_1)$ be defined via \eqref{eq:xy}-\eqref{tau_0}, and set
$z(t)=x(t)+i y(t)$. Let $T_q$ and $\mathcal{T}_q$ be defined via \eqref{def:hyprot}. Then the operator
\[\tilde {\boldsymbol{\tau}}=\rho^{-1} (S T_q){\boldsymbol{\tau}}_\beta (S T_q)^{-1} \rho
\]
is orthogonally equivalent to ${\boldsymbol{\tau}}_\beta$, and has the same distribution as $\operatorname{Sine}op$. We have $\tilde {\boldsymbol{\tau}}=\mathtt{Dir}(\tilde z,\infty,\tilde z(1))$ where $\tilde z=\rho \mathfrak{r}\mathcal{T}_q z$, in particular $\tilde z(1)=-q$.
\end{proposition}
The following statement follows from Proposition \ref{prop:tau_sine} and Lemma \ref{lem:isometry}.
\begin{corollary}
The right (left) spectral measure of ${\boldsymbol{\tau}}_\beta$ has the same distribution as the left (right) spectral measure of $\operatorname{Sine}op$.
\end{corollary}
\cite{BVBV_op} provided a coupling of the $\mathtt{Circ}_{\beta,n}$ and $\operatorname{Sine}op$ operators and their driving paths under which $\|{\mathtt{r}\,} \mathtt{Circ}_{\beta,n}-{\mathtt{r}\,} \operatorname{Sine}op\|_{\textup{HS}}\to 0$ a.s. We summarize some of the properties of this coupling in the proposition below. Let
$$
d_{{\mathbb H} }(z_1,z_2)=\operatorname{arccosh}\left(1+ \frac{|z_1-z_2|^2}{2 \Im z_1 \Im z_2}\right)
$$
denote the hyperbolic distance between two points $z_1, z_2$ of ${\mathbb H} $.
\begin{proposition}[\cite{BVBV_op}]\label{prop:coupling}
There is a coupling of the operators $\mathtt{Circ}_{\beta,n}=\mathtt{Dir}(\tilde z_n, \infty, \tilde z_{n}(1))$ and $\operatorname{Sine}op=\mathtt{Dir}(\tilde z, \infty, \tilde z(1))$ with the following properties. The driving paths $\tilde z_n, \tilde z$ satisfy $\tilde z_n(1)=\tilde z(1)$,
and there is a random $N_0$ so that for all $n\ge N_0$ the following uniform bounds hold:
\begin{align}\label{eq:coupling_1}
d_{{\mathbb H} }(\tilde z_n(t), \tilde z(t))\le& \frac{\log^{3-1/8}n}{\sqrt{(1-t)n}},
\qquad 0\le t\le t_n=1-\tfrac{1}{n}\log^6 n,\\
d_{{\mathbb H} }(\tilde z_n(t_n), \tilde z(t))\le& \frac{144}{\beta} (\log \log n)^2, \qquad t_n\le t<1.\label{eq:coupling_2}
\end{align}
\end{proposition}
We now have all the ingredients to prove Proposition \ref{prop:sine_weights}.
\begin{proof}[Proof of Proposition \ref{prop:sine_weights}]
Consider the coupling of Proposition
\ref{prop:coupling}, and denote the left spectral measure of $\mathtt{Circ}_{\beta,n}$ by $\mu_{\text{left},n}$.
By Theorem \ref{thm:KillipNenciu} and Proposition \ref{prop:Dir_discrete} we get that the support of $\mu_{\text{left},n}$ is given by $n \Lambda_n+2\pi n{\mathbb Z}$ where $\Lambda_n$ is distributed as the size $n$ circular beta ensemble. The main result of \cite{BVBV_op} is that in the coupling of Proposition
\ref{prop:coupling} the set $n \Lambda_n+2\pi n{\mathbb Z}$ converges pointwise to the spectrum of $\operatorname{Sine}op$, which has the same distribution as the spectrum of ${\boldsymbol{\tau}}_\beta$.
The weights of $\mu_{\text{left},n}$ are given by the periodic extension of the values $2n (X_{1,n}, \dots, X_{n,n})$, where $(X_{1,n}, \dots, X_{n,n})$ has Dirichlet$(\beta/2, \dots, \beta/2)$ distribution and is independent of $\Lambda_n$. The Dirichlet$(\beta/2, \dots, \beta/2)$ distribution can be obtained by normalizing $n$ independent Gamma$(\beta/2)$ random variables by their sum. Using the strong law of large numbers it now follows that the weights of $\mu_{\text{left},n}$ converge to an i.i.d.~sequence of Gamma random variables with shape parameter $\beta/2$ and expected value 2. If we show that
\begin{align}\label{eq:conv}
\mu_{\text{left},n}\to \mu_{\text{left},\operatorname{Sine}op}\quad \text{a.s.~in the vague topology of measures}
\end{align}
then the statement of the proposition follows from Proposition \ref{prop:tau_sine}. To prove \eqref{eq:conv} we will use Lemma \ref{lem:spec_convergence}, but first we need to transform the operators so that the domain is an interval that is closed on the right.
Note that the operators $\mathtt{Circ}_{\beta,n}$ and $\operatorname{Sine}op$ share the same right condition $\tilde z(1)$. Consider the transformations $\rho$ and $\mathfrak{r}$, and the matrix $S$ introduced before Lemma \ref{lem:reversal}. Let $T_*=T_{-\tilde z(1)}$ defined via \eqref{def:hyprot}. Then by Proposition \ref{prop:tau_sine} the operator $\rho (S T_*)^{-1} \operatorname{Sine}op (S T_*) \rho$ has the same distribution as ${\boldsymbol{\tau}}_\beta$, and with a minor abuse of notation we will use the notation ${\boldsymbol{\tau}}_\beta$ for it. The driving path $z$ of ${\boldsymbol{\tau}}_\beta$ satisfies $z=\rho \mathfrak{r} \mathcal{T}_*^{-1} \tilde z$.
Introduce the similarly transformed versions of the $\mathtt{Circ}_{\beta,n}$ operators
\[
{\boldsymbol{\tau}}_{\beta,n}=\rho (S T_*)^{-1} \mathtt{Circ}_{\beta,n} (S T_*) \rho.
\]
These operators have driving paths $z_n=\rho \mathfrak{r} \mathcal{T}_*^{-1} \tilde z_n$.
We will show that the operators ${\boldsymbol{\tau}}_\beta$ and ${\boldsymbol{\tau}}_{\beta,n}$ satisfy the conditions of Lemma \ref{lem:spec_convergence}. Since
\[
\mu_{\text{left},n}=\mu_{\text{right},{\boldsymbol{\tau}}_{\beta,n}}, qquad \mu_{\text{left},\operatorname{Sine}op}=\mu_{\text{right},{\boldsymbol{\tau}}_{\beta}},
\]
this implies \eqref{eq:conv} and the proposition.
In \cite{BVBV_szeta} it was shown that in the coupling of Proposition \ref{prop:coupling} one has
$$\|{\mathtt{r}\,} {\boldsymbol{\tau}}_n -{\mathtt{r}\,} {\boldsymbol{\tau}}\|_{\textup{HS}}\to 0, \qquad \mathfrak t_{{\boldsymbol{\tau}}_n}\to \mathfrak t_{{\boldsymbol{\tau}}},$$
with probability one. (See Theorem 47 and Proposition 48 and their proofs in \cite{BVBV_szeta}.) Hence we only need to show that in our coupling
\begin{align}\label{eq:a0_goal}
\int_0^1 |a_0(s)-a_{0,n}(s)|^2ds\to 0.
\end{align}
Note that in ${\boldsymbol{\tau}}$ and ${\boldsymbol{\tau}}_n$ the initial condition is $\mathfrak u_0=[1,0]^t$; hence by \eqref{eq:ac} we have
\[
|a_0(s)-a_{0,n}(s)|^2=\left|\frac{1}{\sqrt{\Im z(s)}}-\frac1{\sqrt{\Im z_n(s)}} \right|^2.
\]
An explicit computation shows that
\[
\left|\frac{1}{\sqrt{y_1}}-\frac1{\sqrt{y_2}} \right|^2\le \frac{4}{y_1} \sinh(\frac12 d_{{\mathbb H} }(y_1,y_2))^2
\]
Since $\Im z(t), t\in (0,1]$ is distributed as $y_t$ from \eqref{eq:xy}, for any $\varepsilon>0$ we can find a random $C<\infty$ so that
\begin{align}\label{eq:a0_1}
|a_0(s)|^2=(\Im z(s))^{-1}\le C s^{2/\beta-\varepsilon}.
\end{align}
From the definition of $z, z_n$ we also get
\[
d_{{\mathbb H} }(z(s),z_n(s))=d_{{\mathbb H} }(\tilde z(1-s),\tilde z_n(1-s))
\]
We can now use the coupling bounds of Proposition \ref{prop:coupling} to get the following estimates (assuming that $n$ is sufficiently large). If $1-t_n\le s\le 1$ then
\begin{align}\label{eq:a0_2}
|a_0(s)-a_{0,n}(s)|^2\le c \cdot C s^{2/\beta-\varepsilon} \left(\frac{\log^{3-1/8}n}{\sqrt{s n}}\right)^2
\end{align}
with an absolute constant $c$. If $0<s<1-t_n$ then
\begin{align}\label{eq:a0_3}
|a_0(1-t_n)-a_{0,n}(s)|^2\le C' s^{2/\beta-\varepsilon} e^{c (\log \log n)^2}
\end{align}
again with an absolute constant $c$. Choosing $0<\varepsilon<2/\beta$ and using \eqref{eq:a0_1}-\eqref{eq:a0_3} we obtain \eqref{eq:a0_goal}, which finishes the proof.
\end{proof}
We can now complete the proof of Theorem \ref{thm:Sine_Palm}.
\begin{proof}[Proof of Theorem \ref{thm:Sine_Palm}]
Consider $R, q$ and ${\boldsymbol{\tau}}_\beta$ introduced in \eqref{eq:xy}-\eqref{tau_0}. For $r\in {\mathbb R}$ let ${\boldsymbol{\tau}}_{\beta,r}=\mathtt{Dir}(R_{\cdot},\infty,r)$, and let $\mu=\mu_{\textup{right},{\boldsymbol{\tau}}_\beta}$ and $\mu_r=\mu_{\textup{right},{\boldsymbol{\tau}}_{\beta,r}}$.
By Corollary \ref{cor:Sine_biased}, $\mu$ biased by the weight of zero is $\mu_\infty$, the right spectral measure of $\mathtt{Dir}(R_{\cdot},\infty,\infty)$. It is known that the support of $\mu$ (the $\operatorname{Sine}_{\beta}$ process) is translation invariant with intensity $\frac{1}{2\pi}$ (see \cite{BVBV}). By Proposition \ref{prop:sine_weights} the weights in $\mu$ are i.i.d.~with a finite expectation, and they are independent of the support. Hence $\mu$ satisfies the conditions of Proposition \ref{prop:Palm}, which means that $\mu_\infty$ is the Palm measure of $\mu$. Using again the fact that the weights of $\mu$ are i.i.d.~with finite expectation and they are independent of the support of $\mu$, together with Definition \ref{def:palm} we get the statement of Theorem \ref{thm:Sine_Palm}.
\end{proof}
\section{Aleksandrov measures} \langlebel{sec:Aleks}
Biasing a measure on the unit circle by the weight of one is closely related to the classical notion of an Aleksandrov measure. We review some basic facts about such measures, and refer the reader to \cite{OPUC} volume 1, Section 3.2 and volume 2, Section 10.2 for more background.
\begin{definition}\label{def:Aleks}
Let $\nu$ be a probability measure supported on $n$ distinct points on the unit circle. Denote its Verblunsky coefficients by $\alpha_k, 0\le k\le n-1$. For $|\eta|=1$ the probability measure corresponding to the Verblunsky coefficients $\eta \alpha_k, 0\le k\le n-1$ is denoted by $\nu_\eta$, and it is called the {\bf Aleksandrov measure} corresponding to $\nu$ with Aleksandrov parameter $\eta$.
\end{definition}
\begin{lemma}\label{lem:Aleks_charge}
The following statements hold for the Aleksandrov measure $\nu_\eta$.
\begin{enumerate}
\item The path parameter $b_{\eta,0}, \dots, b_{\eta,n}$ of $\nu_\eta$ in ${\mathbb U}$ is given by $ \eta^{-1} b_0, \dots, \eta^{-1} b_n $.
\item $\nu_{\eta}(1)>0$ if and only if $\eta=b_n$.
\item The measure $\nu_\eta$ is continuous as a function of $\eta$.
\end{enumerate}
\end{lemma}
\begin{proof}
We have
\[
\mat{\eta}{0}{0}{1}^{-1} \mat{1}{\bar \alpha}{\alpha}{1}\mat{\eta}{0}{0}{1}=\mat{1}{\bar \eta \bar \alpha}{\eta \alpha}{1},
\]
hence the first statement follows by \eqref{eq:discrete_path_b}.
By Propositions \ref{prop:discrete_op_1} and \ref{prop:Dir_discrete} it follows that $\nu_\eta(1)>0$ exactly if $b_{\eta,n}=1$. This shows that the first statement of our lemma implies the second one.
The third statement follows from the fact that a probability measure supported on $n$ distinct points depends continuously on its Verblunsky coefficients (see Theorem 1.5.6 in Volume 1 of \cite{OPUC}).
\end{proof}
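Statement (1) of Lemma \ref{lem:Aleks_charge} can be verified numerically with a few lines of Python (ours, using only NumPy; the helper name is part of the illustration): the path parameter computed from the coefficients $\eta\alpha_k$ agrees with $\eta^{-1}b_k$.
\begin{verbatim}
# Check (ours) of statement (1) of Lemma (lem:Aleks_charge): the path
# parameter of nu_eta equals eta^{-1} times the path parameter of nu.
import numpy as np

def path_b(alpha):
    b, M = [0.0 + 0j], np.eye(2, dtype=complex)
    for a in alpha:
        M = M @ np.array([[1, np.conj(a)], [a, 1]])
        b.append(M[0, 1] / M[1, 1])
    return np.array(b)

rng = np.random.default_rng(4)
n = 5
alpha = rng.uniform(0, 0.9, n) * np.exp(2j * np.pi * rng.uniform(0, 1, n))
alpha[-1] = np.exp(2j * np.pi * rng.uniform())
eta = np.exp(2j * np.pi * rng.uniform())

print(np.max(np.abs(path_b(eta * alpha) - path_b(alpha) / eta)))  # ~ 0
\end{verbatim}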
If one averages the Aleksandrov measure in its parameter then the result is the uniform measure on the unit circle.
\begin{theorem}[Theorem 10.2.2, volume 2, \cite{OPUC}]\langlebel{thm:spectral_average}
For a finitely supported discrete probability measure $\nu$ on the unit circle the averaged measure $\frac{1}{2\pi} \int_0^{2\pi} \nu_{e^{i \varphi}} d\varphi$ is the uniform measure.
\end{theorem}
The next proposition ties Aleksandrov measures to conditioning. See Section \ref{sec:biasing}
for the definition of conditioning by the weight of a point, and recall the path parameter $b_0,\dots, b_n$ of a measure $\mu$ from Section \ref{sec:finite}.
\begin{proposition}\label{prop:Aleks_cont} Let $a_k$ be a sequence of points on the unit circle, let $\mu$ be a measure supported on $n$ points with $b_n$ defined in Definition \ref{def:path}, and let $\mu_{a_k}$ denote the Aleksandrov measure of $\mu$ with parameter $a_k$. Then the following are equivalent:
\begin{enumerate}
\item $a_k\to b_n$
\item $\mu_{a_k}\to \mu_{b_n}$
\item For every neighborhood $A$ of 1, $\mu_{a_k}(A)>0$ for all large enough $k$.
\end{enumerate}
\end{proposition}
\begin{proof}
The measure $\mu$ is continuous as a function of its Verblunsky parameters (see Theorem 1.5.6 in volume 1 of \cite{OPUC}); hence $\mu_a$ is continuous in $a$. This immediately shows that 1 implies 2. Since $\mu_{b_n}(1)>0$ by Lemma \ref{lem:Aleks_charge}, 2 implies 3.
Now assume 3, and take a converging subsequence $a_{k_m}\to \eta$. The continuity of $\mu_x$ in $x$ implies that $\mu_{a_{k_m}}\to \mu_{\eta}$; in particular the support of $\mu_{a_{k_m}}$ converges to the support of $\mu_{\eta}$. If $\eta\neq b_n$ then the support of $\mu_\eta$ does not contain 1, and we could find a neighborhood $A$ of 1 so that for large enough $m$ we have $\mu_{a_{k_m}}(A)=0$, contradicting 3. This shows that any subsequential limit of $a_k$ is equal to $b_n$, proving 1.
\end{proof}
\begin{corollary}\label{cor:cond}
Fix $\mu$ supported on $n$ points on the unit circle, and let $U$ be any random variable so that $b_n\in \operatorname{supp} \operatorname{law} U$. Then $\mu_U$ conditioned on the weight of 1 is a deterministic measure equal to $\mu_{b_n}$.
\end{corollary}
\begin{proof}
For $0<\varepsilon<1$ let $A_\varepsilon$ be the arc of the unit circle corresponding to the angles in $(-\pi\varepsilon,\pi\varepsilon)$.
First we claim that $E\mu_U(A_{\varepsilon})>0$ for all $\varepsilon$. Let $\varphi$ be a continuous function with values in $[0,1]$, with $\varphi(1)=1$, supported on $A_\varepsilon$. If $u\to b_n$ then $\mu_u\to \mu_{b_n}$, so the functional $u\mapsto \int \varphi d\mu_{u}$ is continuous. Let $B$ be the set of $u$ so that $\int \varphi d\mu_{u}>\int \varphi d\mu_{b_n}/2=:a/2$. Note that $a \ge \mu_{b_n}(1)>0$. By continuity, $B$ is an open neighborhood of $b_n$, so $P(U\in B)>0$, and on this event,
$$
\mu_U(A_{\varepsilon})\ge\int \varphi d\mu_{U}>a/2,
$$
so $E\mu_U(A_{\varepsilon})>a P(U\in B)/2>0$.
Now let $\varepsilon_k\to 0$, and let $\mu_{U_k}$ be distributed as $\mu_U$ biased by $\mu_U(A_{\varepsilon_k})$; this exists since $E\mu_U(A_{\varepsilon_k})>0$.
The sequence $\mu_{U_k}$ satisfies condition (3) of Proposition \ref{prop:Aleks_cont}, hence by the Proposition, $U_k\to b_n$ and $\mu_{U_k}\to \mu_{b_n}$ in law.
\end{proof}
\begin{corollary}[Conditioning on a point at 1.]\label{cor:cond_1}
Let $\mu$ be a random measure supported on $n$ points on the unit circle so that for every $|u|=1$ the Aleksandrov measure $\mu_u$ has the same distribution as $\mu$. Then $\mu$ conditioned on the weight of 1 has law $\mu_{b_n}$.
Moreover, if $\mu$ is rotationally invariant, then $\mu_{b_n}$ is the Palm measure of $\mu$.
\end{corollary}
\begin{proof}
Let $U$ be uniform on the unit circle. Consider the joint distribution $(X,\mu_U)$ where $X$ is a point on the unit circle with distribution given by $\mu_U$. Let $A_\varepsilon$ be the event that $X$ is in the arc corresponding to $(-\pi \varepsilon, \pi \varepsilon)$.
Take a bounded continuous functional $f$ on the space of probability measures times the unit circle. By the last claim of Corollary \ref{cor:cond} we have
$$
Y_\varepsilon:=\frac{E(f(\mu_U,U)\,;\,A_\varepsilon\,|\,\mu)}
{P(A_\varepsilon\,|\,\mu)}
\to f(\mu_{b_n},b_n) \qquad \mbox{a.s.}
$$
Note that $|Y_\varepsilon|\le \sup |f|$ a.s. By spectral averaging, Theorem \ref{thm:spectral_average}, $P(A_\varepsilon\,|\,\mu)=\varepsilon$. Therefore
$$
\varepsilon^{-1}E(f(\mu_U,U)\,;\,A_\varepsilon\,|\,\mu)\to f(\mu_{b_n},b_n) \qquad \mbox{a.s.}
$$
Taking further expectations and using the bounded convergence theorem gives the required convergence
\[
EY_\varepsilon=\varepsilon^{-1}E(f(\mu_U,U)\,;\,A_\varepsilon)\to E f(\mu_{b_n},b_n).
\]
Thus we see that the law of $\mu_U$ given $A_\varepsilon$ converges to the law of $\mu_{b_n}$. This limit is the definition of $\mu_U$ biased by the weight of 1.
By Proposition \ref{prop:Palm} and Remark \ref{rem:circle}, if $\mu$ is rotationally invariant, then $\mu_{b_n}$ is the Palm measure of $\mu$.
\end{proof}
\section{Verblunsky coefficients after biasing by the weight of one}
In this section we describe how random Verblunsky coefficients change after conditioning on the weight of one. We have already seen that this is related to Aleksandrov measures, but the connection uses the path parameter $b_k, 0\le k\le n$, and the dependence can be complicated. In many cases, we still have simple formulas. For this, we again use the connection between the modified Verblunsky coefficients and the affine group of isometries of the hyperbolic plane introduced in Section \ref{sec:finite}.
Recall that the affine group of linear fractional transformations of the unit disk fixing the point $1$ can be parametrized with the pre-image $\gamma$ of 0. The linear fractional transformation $\mathcal A_{\gamma,{\mathbb U}}$ can be represented as a $2\times 2$ matrix $A_{\gamma, {\mathbb U}}$ via \eqref{eq:Adef}.
The inverse of $A_{\gamma,{\mathbb U}}$ corresponds to a linear fractional transformation whose parameter is denoted by $\gamma^\iota$:
\begin{align}\label{eq:iota}
\gamma^\iota=-\gamma\frac{1-\bar \gamma}{1-\gamma}, \qquad A_{\gamma, {\mathbb U}}^{-1}=A_{\gamma^\iota,{\mathbb U}}.
\end{align}
Note that $\gamma\to \gamma^\iota$ is not an analytic function: it does not change the modulus, $|\gamma|=|\gamma^\iota|$, but it is not a rotation around 0.
From the definition \eqref{eq:iota} we have
\begin{align}\label{eq:A_inverse}
\mathcal A_{\gamma, {\mathbb U}}(\gamma)=0,\quad \mbox{i.e.}\quad \mathcal A_{\gamma, {\mathbb U}}^{-1}(0)=\mathcal A_{\gamma^\iota,{\mathbb U}}(0)=\gamma.
\end{align}
Using the $\gamma\to \gamma^{\iota}$ map the connection between the regular and modified Verblunsky coefficients \eqref{eq:gamma_alpha} simplifies to
\[
\gamma_0=\bar \alpha_0, \qquad -\frac{\gamma_k}{\gamma_{k-1}^\iota}=\frac{\bar \alpha_k}{\bar \alpha_{k-1}}, \quad 1\le k\le n-1.
\]
For $u\in \partial {\mathbb U},\gamma\in {\mathbb U}$ define the Poisson kernel as
$$
\operatorname{Poi}(\gamma,u)=\Re \frac{u+\gamma}{u-\gamma}=\frac{1-|\gamma|^2}{|u-\gamma|^2}.
$$
The normalization is so that if $\theta$ is uniform random on $\partial {\mathbb U}$, then the expected value of $\operatorname{Poi}(\gamma,\theta)$ is 1. We have
\begin{equation}
\label{eq:Ay}
\operatorname{Poi}(\gamma,1)= \frac{1-|\gamma|^2}{|1-\gamma|^2}=\det A_{\gamma, {\mathbb U}}=\mathcal{A}_{\gamma, {\mathbb U}}'(1).
\end{equation}
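The identity \eqref{eq:Ay} is straightforward to confirm numerically; the sketch below (ours, using only NumPy; the helper names are assumptions of the illustration) compares the Poisson kernel at $1$ with $\det A_{\gamma,{\mathbb U}}$ and with a finite-difference approximation of $\mathcal A_{\gamma,{\mathbb U}}'(1)$.
\begin{verbatim}
# Check (ours) of (eq:Ay): Poi(gamma,1) = det A_{gamma,U} = A_{gamma,U}'(1).
import numpy as np

rng = np.random.default_rng(5)
gamma = rng.uniform(0, 0.95) * np.exp(2j * np.pi * rng.uniform())

A = np.array([[1 / (1 - gamma), gamma / (gamma - 1)],
              [np.conj(gamma) / (np.conj(gamma) - 1), 1 / (1 - np.conj(gamma))]])

def mobius(M, z):
    return (M[0, 0] * z + M[0, 1]) / (M[1, 0] * z + M[1, 1])

poi = (1 - abs(gamma) ** 2) / abs(1 - gamma) ** 2
h = 1e-6
deriv = (mobius(A, 1 + h) - mobius(A, 1)) / h   # finite-difference derivative at 1
print(poi, np.linalg.det(A).real, deriv.real)   # the three values should agree
\end{verbatim}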
Our next result shows that the $\gamma\to \gamma^{\iota}$ operation appears naturally when considering the reversed (and rescaled) version of a path corresponding to a discrete probability measure on $\partial {\mathbb U}$.
\begin{lemma}\label{lem:reverse}
Let $\nu$ be a probability measure supported on $n$ distinct points on the unit circle. Denote the modified Verblunsky coefficients of $\nu$ by $\gamma_k, 0\le k\le n-1$, and consider the path $b_k, 0\le k\le n$ defined by \eqref{eq:discrete_path_b} in Definition \ref{def:path}.
For $0\le k\le n-1$ define the reversed path parameter as
\begin{align}
b'_k=
{\mathcal A}_{b_{n-1}, {\mathbb U}}(b_{n-k-1}), \qquad 0\le k\le n-1.
\end{align}
Then $b_0', \dots, b_{n-1}'$ are the first $n$ elements of the path produced by a sequence of modified Verblunsky coefficients starting with $\gamma^\iota_{n-2},\dots, \gamma^\iota_0$.
\end{lemma}
\begin{proof}
From \eqref{eq:Ab_rec} we have
\[
{\mathcal A}_{b_k, {\mathbb U}}={\mathcal A}_{\gamma_{k-1}, {\mathbb U}}\circ \cdots\circ {\mathcal A}_{\gamma_{0}, {\mathbb U}}.
\]
Hence, by \eqref{eq:bk_cA} we get
\begin{align*}
b_k'={\mathcal A}_{\gamma_{n-2}}\circ \cdots \circ {\mathcal A}_{\gamma_{0}}({\mathcal A}_{\gamma_0}^{-1}\circ \cdots\circ {\mathcal A}_{\gamma_{n-k-2}}^{-1}(0))&={\mathcal A}_{\gamma_{n-2}}\circ \cdots \circ {\mathcal A}_{\gamma_{n-k-1}}(0)\\
&={\mathcal A}_{\gamma_{n-2}^\iota}^{-1} \circ \cdots \circ {\mathcal A}_{\gamma_{n-k-1}^\iota}^{-1}(0).
\end{align*}
Equation \eqref{eq:bk_cA} now shows that $b_0', \dots, b_{n-1}'$ are the first $n$ elements of the path produced by a sequence of modified Verblunsky coefficients starting with $\gamma^\iota_{n-2},\dots, \gamma^\iota_0$.
\end{proof}
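Lemma \ref{lem:reverse} can also be checked numerically. The Python sketch below (ours, assuming only NumPy; the helper names belong to the illustration) builds the path $b_k$ from random modified Verblunsky coefficients via \eqref{eq:bk_cA}, forms the reversed path $b_k'$, and compares it with the path generated by $\gamma^\iota_{n-2},\dots,\gamma^\iota_0$.
\begin{verbatim}
# Check (ours) of Lemma (lem:reverse): the reversed path b'_k coincides with
# the path generated by gamma_{n-2}^iota, ..., gamma_0^iota.
import numpy as np

rng = np.random.default_rng(6)
n = 6
gamma = rng.uniform(0, 0.9, n - 1) * np.exp(2j * np.pi * rng.uniform(0, 1, n - 1))
# only gamma_0, ..., gamma_{n-2} are needed for b_0, ..., b_{n-1}

def A_matrix(g):                        # A_{gamma,U}, (eq:Adef)
    return np.array([[1 / (1 - g), g / (g - 1)],
                     [np.conj(g) / (np.conj(g) - 1), 1 / (1 - np.conj(g))]])

def mobius(M, z):
    return (M[0, 0] * z + M[0, 1]) / (M[1, 0] * z + M[1, 1])

def path(gs, m):                        # b_0, ..., b_m via (eq:bk_cA)
    b = [0.0 + 0j]
    for k in range(1, m + 1):
        z = 0.0 + 0j
        for g in reversed(gs[:k]):      # apply A^{-1}_{g_{k-1}} first, A^{-1}_{g_0} last
            z = mobius(np.linalg.inv(A_matrix(g)), z)
        b.append(z)
    return np.array(b)

def iota(g):                            # (eq:iota)
    return -g * (1 - np.conj(g)) / (1 - g)

b = path(gamma, n - 1)                                     # b_0, ..., b_{n-1}
b_prime = np.array([mobius(A_matrix(b[n - 1]), b[n - 1 - k]) for k in range(n)])
b_rev = path(np.array([iota(g) for g in gamma[::-1]]), n - 1)
print(np.max(np.abs(b_prime - b_rev)))   # should be ~ machine precision
\end{verbatim}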
The following lemma describes the distribution of $\alpha^\iota$ if $\alpha$ has rotationally invariant distribution on ${\mathbb U}$.
\begin{lemma} \label{lem:iota}
Assume that $\alpha \in {\mathbb U}$ is random with a rotationally invariant distribution. Then the law of $\alpha^\iota$ is absolutely continuous with respect to the law of $\alpha$ with density $\operatorname{Poi}(\alpha, 1)$.
\end{lemma}
\begin{proof}
By first conditioning on $|\alpha|=r$, we may assume without loss of generality that $\alpha=r \eta$ where $r\in (0,1)$ is deterministic, and $\eta$ is uniform on the unit circle. (Note that $\alpha=0$ exactly if $\alpha^{\iota}=0$, and $\operatorname{Poi}(0,1)=1$.)
Then
$$
\alpha^\iota=-\alpha\frac{1-\bar \alpha}{1-\alpha}=-r \eta\frac{1-r \eta^{-1}}{1-r \eta}=r g_r(\eta), \qquad
g_r(u)=\frac{u-r}{ru-1}.$$
Since $\alpha\mapsto \alpha^\iota$ is an involution, we have $g_r^{-1}=g_r$. By the change of variables formula, the density of
$\alpha^\iota$ at $ru$ with respect to uniform distribution on $r\partial {\mathbb U}$ is given by
\[
|(g_r^{-1})'(u)|=|g_r'(u)|=\left|\frac{r^2-1}{(1-ru)^2}\right|=\operatorname{Poi}(ru,1).
\qedhere\]
\end{proof}
If the modified Verblunsky coefficients of a finite discrete probability measure on $\partial {\mathbb U}$ satisfy certain invariance conditions then we can explicitly describe the effect of biasing by the weight of one. This is the content of our next proposition.
\begin{proposition} \label{prop:path} Consider a random probability measure $\mu$ supported on $n$ points on the unit circle. Let $\gamma_0, \dots, \gamma_{n-1}$ be its modified Verblunsky coefficients, and $b_0, \dots, b_n$ its path parameter in ${\mathbb U}$.
Assume that for every $\eta\in \partial {\mathbb U}$, the measures $\mu_\eta$ and $\mu$ have the same law. Assume further that $\gamma_{n-1}\in \partial {\mathbb U}$ is uniform and independent of the rest of the modified Verblunsky coefficients.
Then $\mu$ biased by the weight of 1 exists; call it $\nu$. Denote the modified Verblunsky coefficients of $\nu$ by $\hat \gamma_0, \dots, \hat \gamma_{n-1}$. Then $\hat \gamma_{n-1}=1$ and the distribution of $(\hat \gamma_0,\dots,\hat \gamma_{n-2})$ can be described in several ways as follows.
\begin{enumerate}[(1)]
\item The law of $(\gamma_0, \dots, \gamma_{n-2})$ given $b_n=1$.
\item The law of $(\gamma_0, \dots, \gamma_{n-2})$ biased by $\operatorname{Poi}(b_{n-1},1)$.
\item The law of $(\gamma_0, \dots, \gamma_{n-2})$ biased by $\mbox{\rm P}od_{i=0}^{n-2} \operatorname{Poi}(\gamma_i,1)$.
\item If we further assume that the arguments of $\gamma_i$ are conditionally independent and uniform given the moduli, then the law of $(\gamma_0^\iota, \dots, \gamma_{n-2}^\iota)$.
\item Under (4), the law of the first $n-1$ of the modified Verblunsky coefficients corresponding to the reversed path parameter
$$
b'_k=
{\mathcal A}_{\tilde b_{n-1},{\mathbb U}}( \tilde b_{n-k-1}), \qquad k<n,
$$
where $\tilde b$ is the path parameter for the coefficients $\gamma_{n-2},\dots, \gamma_0$.
\end{enumerate}
Moreover, if $\mu(\cdot u)\ed \mu$ for all $u$, then $\nu$ is the Palm measure of $\mu$.
\end{proposition}
\begin{proof}
Consider the path parameter $b_0, \dots, b_n$
of $\mu$. By Corollary \ref{cor:cond_1} $\mu$ biased by the weight of 1 exists, and it is the same as $\mu_{b_n}$. Hence $\nu\ed\mu_{b_n}$.
Let $\alpha_0, \dots, \alpha_{n-1}$ be the Verblunsky coefficients of $\mu$, and let $\eta$ be uniform on $\partial {\mathbb U}$, independent of $\alpha_0, \dots, \alpha_{n-1}$.
Then $\eta b_n\alpha_0,\dots, \eta b_n\alpha_{n-1}$ has the same distribution as $\alpha_0,\dots ,\alpha_{n-1}$. Moreover, by Lemma \ref{lem:Aleks_charge} the path parameter $b_{\eta b_n,0}, \dots, b_{\eta b_n,n}$ corresponding to the Verblunsky coefficients $\eta b_n\alpha_0,\dots, \eta b_n\alpha_{n-1}$ satisfies $b_{\eta b_n,n}=1/\eta$.
So conditioning $\eta b_n\alpha_0,\dots, \eta b_n\alpha_{n-1}$ on $b_{\eta b_n,n}=1$ just sets $\eta=1$. This means that
$(\alpha_0, \dots, \alpha_{n-1})$ conditioned on $b_n=1$ has the same distribution as $(b_n \alpha_0, \dots,b_n \alpha_{n-1})$, the Verblunsky coefficients of $\mu_{b_n}$. Since the modified Verblunsky coefficients are functions of the regular Verblunsky coefficients (see \eqref{eq:gamma_alpha}), it follows that the distribution of $(\hat \gamma_0,\dots,\hat \gamma_{n-2})$ is the same as the law described in (1).
Since $\gamma_{n-1}$ is independent of the rest, the conditional distribution of $b_n$ given $\gamma_0,\dots, \gamma_{n-2}$ is the same as the distribution of $\frac{b+e^{i \theta}\frac{1-b}{1-\bar b}}{1+\bar b e^{i \theta} \frac{
1-b}{1-\bar b}}$ with $e^{i\theta}$ uniform on $\partial {\mathbb U}$ and $b=b_{n-1}(\gamma_0, \dots, \gamma_{n-2})$. A direct change of variables computation shows that this distribution has density
$\operatorname{Poi}(b,\cdot)$. Bayes' formula now shows that the distributions in (1) and (2) are the same.
Using \eqref{eq:Ay} and \eqref{eq:Ab_rec} we have
\begin{align*}
\operatorname{Poi}(b_{n-1},1)=\det A_{b_{n-1}, {\mathbb U}}=\det (A_{\gamma_{n-2}, {\mathbb U}} \cdots A_{\gamma_{0},{\mathbb U}})=\prod_{i=0}^{n-2} \det A_{\gamma_i, {\mathbb U}} =\prod_{i=0}^{n-2} \operatorname{Poi}(\gamma_i,1),
\end{align*}
which shows the equivalence of the distributions in (2) and (3).
The equivalence of the distributions of (3) and (4) follows from Lemma \ref{lem:iota}. The equivalence of the distributions in (4) and (5) is a consequence of Lemma \ref{lem:reverse}.
The last claim follows from Proposition \ref{prop:Palm} and Corollary \ref{cor:cond_1}.
\end{proof}
We finish the paper with the proof of Proposition \ref{prop:biased_V}.
\begin{proof}[Proof of Proposition \ref{prop:biased_V}]
The modified Verblunsky coefficients $\gamma_0,\dots, \gamma_{n-1}$ of $\mu=\mu_{n,\beta}^{\textup{KN}}$ are independent, rotationally invariant, with density given by \eqref{eq:verb_circ}. Hence $\mu$ satisfies the conditions of Proposition \ref{prop:path}, including the additional condition of statement (4) and the condition $\mu(\cdot u)\ed \mu$ for all $u$. Hence by statement (3) of Proposition \ref{prop:path} the modified Verblunsky coefficients $\gamma_i'$ of $\nu$ are given by \eqref{eq:verb_nu} and $\gamma'_{n-1}=1$. It also follows that the Palm measure of $\mu$ is $\nu$. From Definition \ref{def:palm} and Remark \ref{rem:circle} the measure $\mu(X \cdot)$ is equal to the Palm measure of $\mu$. Hence $\mu(X\cdot)$ and $\nu(\cdot)$ have the same law. Finally, the weights of $\mu$ are independent of the support of $\mu$, and their expectations are equal to $1/n$, hence $\operatorname{supp} \mu(X\cdot)$ and $\operatorname{supp} \tilde \mu(Y\cdot)$
have the same law as well.
\end{proof}
\noindent {\bf Acknowledgments.}
The second author was partially supported by the NSERC Discovery grant program.
$ $\\
\noindent
Benedek Valk\'o
\\Department of Mathematics
\\University of Wisconsin - Madison
\\Madison, WI 53706, USA
\\{\tt [email protected]}
\\[20pt]
B\'alint Vir\'ag
\\Departments of Mathematics and Statistics
\\University of Toronto
\\Toronto, ON~~M5S 2E4, Canada
\\{\tt [email protected]}
\end{document}
\begin{document}
\title{An Input-Output Construction of Finite State $\rho / \mu$ Approximations for Control Design}
\begin{abstract}
We consider discrete-time plants that interact with their controllers via fixed discrete alphabets.
For this class of systems,
and in the absence of exogenous inputs,
we propose a general, conceptual procedure for constructing a sequence of finite state approximate models
starting from finite length sequences of input and output signal pairs.
We explicitly derive conditions under which the proposed construct,
used in conjunction with a particular generalized structure,
satisfies desirable properties of $\rho/\mu$ approximations, thereby
leading to nominal deterministic finite state machine models that can be used in certified-by-design controller synthesis.
We also show that the cardinality of the minimal disturbance alphabet
that can be used in this setting equals that of the sensor output alphabet.
Finally, we show that the proposed construct satisfies a relevant semi-completeness property.
\end{abstract}
\section{Introduction}
\label{Sec:Introduction}
\subsection{Motivation}
\label{SSec:Motivation}
Cyber-physical systems, involving tightly integrated physical and computational components,
are omni-present in modern engineered systems.
These systems are fundamentally complex, and pose multiple challenges to the control engineer \cite{TR:Lee2008}.
In order to effectively address these challenges,
there is an inevitable need to move to abstractions
or model reduction schemes that can handle dynamics
and computation in a unified framework.
Ideally,
an abstraction or model complexity reduction approach should provide a
lower complexity model that is more easily amenable to analysis, synthesis and optimization,
as well as a rigorously quantifiable assessment of the quality of approximation.
This would allow one to certify the performance of a controller designed for the lower complexity model
and implemented in the actual system faithfully captured by the original model,
without the need for extensive simulation or testing.
The problem of approximating systems involving dynamics and computation
(cyber-physical systems)
or discrete and analog effects (hybrid systems)
by simpler systems has been receiving much attention over the past two decades
\cite{JOUR:AlHeLaPa2000, CHAPTER:TiKh2002}.
In particular,
the problem of constructing {\it finite} state approximations of hybrid systems has
been the object of intense study,
due to the rampant use of finite state machines as models of computation or software,
as well as their amenability to tractable
analysis \cite{CONF:TaDaMe2005} and
control synthesis \cite{BOOK:MatSav2000, JOUR:KoImHi2011}
(though tractable does not always mean computationally efficient!).
\subsection{Overview of the Contribution}
\label{SSec: Contribution}
In a previous effort \cite{JOUR:Tarraf2012}, we proposed a notion of finite state
approximation for `systems over finite alphabets',
basically plants that are constrained to interact with their feedback controllers
by sending and receiving signals taking their values in fixed, finite alphabet sets.
We refer to this notion of approximation as a `$\rho/\mu$ approximation',
to highlight the fact that it is compatible with the analysis \cite{JOUR:TaMeDa2008}
and synthesis \cite{JOUR:TaMeDa2011} tools we had previously developed
for systems whose properties and/or performance objectives are described in terms of
$\rho/\mu$ gain conditions.
Note that the proposed notion of $\rho/\mu$ approximation explicitly identified
those properties that the approximate models need to satisfy in order to enable
certified-by-design controller synthesis.
However, it did not restrict us to a particular constructive algorithm
for generating these approximations.
In this paper, we propose and analyze a new\footnote{Early versions of this construct and its
analysis were presented in \cite{CONF:Tarraf2012,CONF:Tarraf2012a, CONF:Tarraf2013b}.
An implementation of this construct demonstrating its application to a specific example was presented in \cite{CONF:AalTar2012}.}
approach for generating $\rho/\mu$ approximations of a given plant and performance objective.
In contrast to the state-space based construction
presented as a simple illustrative example in \cite{JOUR:Tarraf2012},
which was specifically tailored to the dynamics in question,
the present construct is a general methodology that is applicable to
arbitrary plants over finite alphabets provided that:
(i) They are not subject to exogenous inputs,
and (ii) their outputs are a function of the state only
(i.e. analogous to strictly proper transfer functions in the LTI setting).
Our construct essentially associates states of the approximate model with
finite length subsequences of input-output pairs of the plant.
Since the underlying alphabets are finite,
the set of possible input-output pairs of a given length is also finite.
The resulting approximate models thus have finite state-space,
and are shown to satisfy desirable properties of $\rho/\mu$ approximations under some clearly identified conditions,
thereby rendering them useable for control synthesis.
Our construct is conceptual,
in the sense that we do not address computational issues
that may arise due to the complexity of the underlying dynamics.
As such, our contribution is a general methodology, as opposed to a computational framework,
for generating finite state $\rho/\mu$ approximations,
and a rigorous analysis of the properties of this construct.
\subsection{Related Work}
\label{SSec:RelatedWork}
Automata and finite state models have been previously employed
as abstractions or approximate models of more complex dynamics for the purpose of control design.
We survey the directions most relevant to our work in what follows.
One research direction makes use of non-deterministic finite state automata
constructed so that their input/output behavior contains that of the original model
(these approximations are sometimes referred to as `qualitative models')
\cite{JOUR:Lu1994,JOUR:RaOY1998,JOUR:LuNiSc1999}.
Controller synthesis can then be formulated as a supervisory control problem,
addressed using the Ramadge-Wonham framework \cite{JOUR:RaWo1987, JOUR:RaWo1989}.
More recently, progress has been made in reframing these results \cite{JOUR:MoRa1999, JOUR:MoRaOY2002}
in the context of Willems' behavioral theory and $l$-complete systems \cite{MAGAZINE:Wi2007}.
Our construct bears some resemblance to algorithms employed
in constructing qualitative models.
However, our notion of $\rho/\mu$ approximation is fundamentally different
from the notion of qualitative models, as it seeks to explicitly
quantify the approximation error in the spirit of robust control.
A second research direction,
influenced by the theory of bisimulation in concurrent processes \cite{CHAPTER:Pa1981,BOOK:Mi1989},
makes use of bisimulation and simulation abstractions of the original plant.
These approaches,
which typically address full state feedback problems, effectively ensure that
the set of state trajectories of the original model is exactly matched by (bisimulation),
contained in (simulation),
matched to within some distance $\epsilon$ by (approximate bisimulation),
or contained to within some distance $\epsilon$ in (approximate simulation),
the set of state trajectories of the finite state abstraction
\cite{JOUR:GiPa2007, JOUR:Ta2008, JOUR:TaAmJuPa2008, JOUR:PoGiTa2008}.
The performance objectives are typically formulated as constraints
on the state trajectories of the original hybrid system,
and controller synthesis is a two step procedure:
A finite state supervisory controller is first designed, and subsequently refined
to yield a certified-by-design hybrid controller for the original plant \cite{BOOK:Ta2009}.
Other related research directions make use of
symbolic models \cite{JOUR:KloBel2008, MAGAZINE:BeBiEgFrKlPa2007},
approximating automata \cite{JOUR:CuKrNi1998,JOUR:ChuKro2000, JOUR:Reiszi2011},
and finite quotients of the system \cite{JOUR:ChuKro2001, JOUR:YorBel2010}.
While the subject of input-output robustness of discrete systems
has been garnering more attention recently \cite{CONF:TaBaCSM2012},
we are not aware of any alternative notions of discrete approximation
developed in conjunction with that work.
Of course, the idea of using finite length sequences of inputs and outputs
is widely employed in system identification \cite{BOOK:Lj1999}.
However, the setup of interest to us is fundamentally different for three reasons:
First, the dynamics of the plant are exactly known.
Second, the data can be generated in its entirety.
Third, the data is exact and uncorrupted by noise.
Finally, the present construct differs from our first effort reported in \cite{CONF:TaDu2011},
as it approximates the performance objectives as well as the dynamics of the systems,
and moreover leads to a finite state nominal model with deterministic transitions.
\subsection{Organization and Notation}
\label{SSec:OrganizationNotation}
We begin in Section \ref{Sec:Preliminaries} by reviewing the relevant notion of
$\rho/\mu$ approximation as well as basic concepts that will be useful in our development.
We state the problem of interest in Section \ref{Sec:ProblemSetup}.
We revisit a special structure in Section \ref{Sec:SpecialStructure}:
We demonstrate its relevance to $\rho/\mu$ approximations,
and we address the related question of disturbance alphabet choice.
We present our construct in Section \ref{Sec:Construction}
and give the intuition behind it.
We show that the resulting approximate models satisfy several of the desired
$\rho/\mu$ approximation properties in Section \ref{Sec:Properties},
and we address the question of ensuring finiteness of the approximation
error gain.
We demonstrate further relevant properties in Section \ref{Sec:Completeness},
highlighting the completeness of this construct.
We conclude with directions for future work in Section \ref{Sec:Conclusions}.
We employ fairly standard notation:
$\mathbb{Z}_+$ and $\mathbb{R}_+$ denote the non-negative integers and non-negative reals, respectively.
Given a set $\mathcal{A}$,
$\mathcal{A}^{\mathbb{Z}_+}$ and $2^{\mathcal{A}}$
denote the set of all infinite sequences over $\mathcal{A}$
(indexed by $\mathbb{Z}_+$)
and the power set of $\mathcal{A}$, respectively.
The cardinality of a (finite) set $\mathcal{A}$ is denoted by $|\mathcal{A}|$.
Elements of $\mathcal{A}$ and $\mathcal{A}^{\mathbb{Z}_+}$ are denoted by $a$
and (boldface) $\mathbf{a}$, respectively.
For $\mathbf{a} \in \mathcal{A}^{\mathbb{Z}_+}$, $a(i)$ denotes its $i^{th}$ term.
For $f: A \rightarrow B$, $C \subset B$, $f^{-1}(C) = \{ a \in A | f(a) \in C \}$.
For $f: A \rightarrow B$ and $g: B \rightarrow C$, $g \circ f$ denotes the composition of $f$ and $g$,
that is the function $g \circ f : A \rightarrow C$ defined by $g \circ f(a) = g(f(a))$.
Given $P \subset (\mathcal{U} \times \mathcal{R})^{\mathbb{Z}_+} \times (\mathcal{Y} \times \mathcal{V})^{\mathbb{Z}_+}$
and a choice $\mathbf{u_o} \in \mathcal{U}^{\mathbb{Z}_+}$,
$\mathbf{y_o} \in \mathcal{Y}^{\mathbb{Z}_+}$,
$P|_{\mathbf{u_o,y_o}}$ denotes the (possibly empty) subset of $P$ defined as
$P |_{\mathbf{u_o},\mathbf{y_o}} = \Big\{ \Big( (\mathbf{u},\mathbf{r}),(\mathbf{y},\mathbf{v}) \Big) \in P \Big| \mathbf{u}=\mathbf{u_o} \textrm{ and } \mathbf{y}=\mathbf{y_o} \Big\}$.
\section{Preliminaries}
\label{Sec:Preliminaries}
In our development, it is often convenient to view a discrete-time dynamical system as a set of
feasible signals, even when a state-space description of the system is available.
We thus begin this section by briefly reviewing this `feasible signals' view of systems.
We then present the recently proposed notion of $\rho/\mu$ approximation
specialized to the class of systems of interest (namely systems with no exogenous inputs),
and we state the relevant control synthesis result.
\subsection{Systems and Performance Specifications}
\label{SSec:SystemsSpecs}
Readers are referred to \cite{JOUR:TaMeDa2008} for a more detailed treatment
of the basic concepts reviewed in this section.
A discrete-time signal is an infinite sequence over some prescribed set (or `alphabet').
\begin{defnt}
\label{def:system}
A discrete-time system $S$ is a set of pairs of signals, $S \subset \mathcal{U}^{\mathbb{Z}_+} \times \mathcal{Y}^{\mathbb{Z}_+}$,
where $\mathcal{U}$ and $\mathcal{Y}$ are given alphabets.
\end{defnt}
A discrete-time system is thus a process characterized by its feasible signals set.
This description can be considered an extension of the
graph theoretic approach \cite{JOUR:GeSm1997} to the finite alphabet setting,
and also shares some similarities with the behavioral approach \cite{MAGAZINE:Wi2007},
though we insist on differentiating between input and output signals upfront.
In this setting,
system properties of interest are captured by means of integral `$\rho/\mu$ constraints' on the feasible signals.
\begin{defnt}
\label{def:GainStability}
Consider a system $S \subset \mathcal{U}^{\mathbb{Z}_+} \times \mathcal{Y}^{\mathbb{Z}_+}$ and let $\rho: \mathcal{U} \rightarrow \mathbb{R}$
and $\mu: \mathcal{Y} \rightarrow \mathbb{R}$ be given functions. $S$ is \textit{$\rho / \mu$ stable} if there exists a finite
non-negative constant $\gamma$ such that
\begin{equation}
\label{eq:gain}
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma \rho (u(t)) - \mu (y(t)) > - \infty
\end{equation}
is satisfied for all $(\mathbf{u},\mathbf{y})$ in $S$.
\end{defnt}
In particular, when $\rho$, $\mu$ are non-negative (and not identically zero), a notion of `gain' can be defined.
\begin{defnt}
\label{def:Gain}
Consider a system $S \subset \mathcal{U}^{\mathbb{Z}_+} \times \mathcal{Y}^{\mathbb{Z}_+}$.
Assume that $S$ is $\rho / \mu$ stable for $\rho: \mathcal{U} \rightarrow \mathbb{R}_+$ and
$\mu: \mathcal{Y} \rightarrow \mathbb{R}_+$, and that neither function is identically zero.
The $\rho / \mu$ \textit{gain of} $S$ is the infimum of $\gamma$ such that (\ref{eq:gain}) is satisfied.
\end{defnt}
Note that these notions of `gain stability' and `gain' can be considered extensions of the
classical definitions to the finite alphabet setting.
In particular, when $\mathcal{U}$, $\mathcal{Y}$ are Euclidean vector spaces and
$\rho$, $\mu$ are Euclidean norms,
we recover $l_2$ stability and $l_2$ gain.
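As a simple illustration (and not part of the formal development), the gain condition (\ref{eq:gain}) can be checked numerically on finite prefixes of a feasible signal pair; the alphabets, signals and weighting functions in the sketch below are hypothetical placeholders.
\begin{verbatim}
# Illustrative sketch: checking the rho/mu gain condition on a finite
# prefix of a feasible signal pair.  The alphabets, signals, rho and mu
# below are hypothetical placeholders, not taken from the text.

def worst_partial_sum(u_seq, y_seq, gamma, rho, mu):
    """min over T of sum_{t=0}^{T} gamma*rho(u(t)) - mu(y(t))."""
    running, worst = 0.0, float("inf")
    for u, y in zip(u_seq, y_seq):
        running += gamma * rho(u) - mu(y)
        worst = min(worst, running)
    return worst

# Binary alphabets with rho = mu = identity on {0, 1}.
rho = mu = lambda a: float(a)
u_seq = [1, 0, 1, 1, 0, 1]
y_seq = [0, 1, 0, 1, 1, 0]
print(worst_partial_sum(u_seq, y_seq, gamma=1.0, rho=rho, mu=mu))
\end{verbatim}
A worst partial sum that remains bounded below, uniformly over all feasible pairs and over arbitrarily long prefixes, is exactly what the stability condition requires; the gain of Definition \ref{def:Gain} is then the infimum of such $\gamma$.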
We are specifically interested in discrete-time plants that interact with
their controllers through fixed discrete alphabets in a setting where no exogenous
input is present:
\begin{defnt}
\label{def:SystemOverFiniteAlphabets}
A \textit{system over finite alphabets} $S$ is a discrete-time system
$S \subset \mathcal{U}^{\mathbb{Z}_+} \times (\mathcal{Y} \times \mathcal{V})^{\mathbb{Z}_+}$
whose alphabets $\mathcal{U}$ and $\mathcal{Y}$ are finite.
\end{defnt}
Here $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$
represents the control input to the plant
while $\mathbf{y} \in \mathcal{Y}^{\mathbb{Z}_+}$ and $\mathbf{v} \in \mathcal{V}^{\mathbb{Z}_+}$
represent the sensor and performance outputs of the plant, respectively.
The plant dynamics may be analog, discrete or hybrid.
Alphabet $\mathcal{V}$ may be finite, countable or infinite.
The approximate models of the plant will be drawn from a specific class of models,
namely deterministic finite state machines:
\begin{defnt}
\label{def:DFM}
A deterministic finite state machine (DFM) is a discrete-time system
$S \subset \mathcal{U}^{\mathbb{Z}_+} \times \mathcal{Y}^{\mathbb{Z}_+}$,
with finite alphabets $\mathcal{U}$ and $\mathcal{Y}$,
whose feasible input and output signals $(\mathbf{u},\mathbf{y}) \in S$ are related by
\begin{eqnarray*}
q(t+1) & = & f (q(t), u(t)) \\
y(t) & = & g(q(t),u(t))
\end{eqnarray*}
where $t \in \mathbb{Z}_+$, $q(t) \in \mathcal{Q}$ for some finite set $\mathcal{Q}$ and
some functions $f: \mathcal{Q} \times \mathcal{U} \rightarrow \mathcal{Q}$
and $g: \mathcal{Q} \times \mathcal{U} \rightarrow \mathcal{Y}$.
\end{defnt}
$\mathcal{Q}$, $f$ and $g$ are understood to represent the set of states of the DFM,
its state transition map, and its output map, respectively, in the traditional state-space sense.
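For concreteness, a DFM can be simulated directly from $(\mathcal{Q},f,g)$; the sketch below (an illustration only) uses a hypothetical two-state machine over binary alphabets.
\begin{verbatim}
# Illustrative sketch of a DFM: the alphabets, state set and maps below
# are hypothetical and chosen only for concreteness.

class DFM:
    def __init__(self, f, g, q0):
        self.f, self.g, self.q = f, g, q0

    def step(self, u):
        y = self.g(self.q, u)        # y(t) = g(q(t), u(t))
        self.q = self.f(self.q, u)   # q(t+1) = f(q(t), u(t))
        return y

# Two-state machine over U = Y = {0, 1} that outputs its previous input;
# since g does not depend on u, it has no direct feedthrough from u to y.
machine = DFM(f=lambda q, u: u, g=lambda q, u: q, q0=0)
print([machine.step(u) for u in [1, 1, 0, 1]])   # -> [0, 1, 1, 0]
\end{verbatim}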
We single out deterministic finite state machines in which there is no direct feedthrough
from particular inputs to particular outputs:
\begin{defnt}
A DFM $S \subset (\mathcal{U}_1 \times \hdots \times \mathcal{U}_{n_{I}})^{\mathbb{Z}_+}
\times
(\mathcal{Y}_1 \times \hdots \times \mathcal{Y}_{n_{O}})^{\mathbb{Z}_+}$ is
$\mathcal{U}_i/\mathcal{Y}_j$ strictly proper if its $j^{th}$ output map is of the form
\begin{displaymath}
y_{j}(t) = g_{j} (q(t), u_{1}(t), \hdots, u_{i-1}(t), u_{i+1}(t), \hdots, u_{n_{I}}(t)),
\end{displaymath}
and strictly proper if it is $\mathcal{U}_i/\mathcal{Y}_j$ strictly proper for all
$i \in \{1,\hdots, n_I\}$ and $j \in \{1,\hdots, n_O \}$.
\end{defnt}
Finally, we introduce the following notation for convenience:
Given a system $P \subset \mathcal{U}^{\mathbb{Z}_+} \times (\mathcal{Y} \times \mathcal{V})^{\mathbb{Z}_+}$
and a choice of signals $\mathbf{u_o} \in \mathcal{U}^{\mathbb{Z}_+}$ and
$\mathbf{y_o} \in \mathcal{Y}^{\mathbb{Z}_+}$,
$P|_{\mathbf{u_o,y_o}}$ denotes the subset of feasible signals of $P$
whose first component is $\mathbf{u_o}$ and whose second component is $\mathbf{y_o}$.
That is
\begin{displaymath}
P |_{\mathbf{u_o},\mathbf{y_o}} = \Big\{ \Big( \mathbf{u},(\mathbf{y},\mathbf{v}) \Big) \in P \Big| \mathbf{u}=\mathbf{u_o} \textrm{ and } \mathbf{y}=\mathbf{y_o} \Big\}.
\end{displaymath}
Note that $P|_{\mathbf{u_o,y_o}}$ may be an empty set for specific choices of $\mathbf{u_o}$ and $\mathbf{y_o}$.
\subsection{$\rho/ \mu$ Approximations for Control Synthesis}
\label{SSec:Approximation}
The following definition is adapted from \cite{JOUR:Tarraf2012} for the case where the plant is not
subject to exogenous inputs, of interest in this paper.
Note that in the absence of exogenous input, function $\rho$ drops out of the definition.
Nonetheless, we will continue to call this a ``$\rho/\mu$ approximation"
in keeping with the previously established terminology.
\begin{figure*}
\caption{A finite state approximation of $P$}
\label{Fig:Approximation}
\end{figure*}
\begin{defnt}(Adapted from Definition 6 in \cite{JOUR:Tarraf2012})
\label{Def:DFMApproximation}
Consider a system over finite alphabets $P \subset \mathcal{U}^{\mathbb{Z}_+} \times (\mathcal{Y} \times \mathcal{V})^{\mathbb{Z}_+}$
and a desired closed loop performance objective
\begin{eqnarray}
\nonumber
\inf_{T \geq 0} \sum_{t=0}^{T} - \mu(v(t)) > -\infty \Leftrightarrow \\
\label{eq:ObjectiveP}
\sup_{T \geq 0} \sum_{t=0}^{T} \mu(v(t)) < \infty
\end{eqnarray}
for given function $\mu: \mathcal{V} \rightarrow \mathbb{R}$.
A sequence $\{\hat{M}_i\}_{i=1}^{\infty}$ of deterministic finite state machines
$\hat{M}_i \subset (\mathcal{U} \times \mathcal{W})^{\mathbb{Z}_+}
\times (\mathcal{Y} \times \hat{\mathcal{V}}_i \times \mathcal{Z})^{\mathbb{Z}_+}$
with $\hat{\mathcal{V}}_i \subset \mathcal{V}$ is a {\boldmath $\rho / \mu$}
\textbf{approximation} of $P$
if there exists a corresponding sequence of systems $\{\Delta_i\}_{i=1}^{\infty}$,
$\Delta_i \subset \mathcal{Z}^{\mathbb{Z}_+} \times \mathcal{W}^{\mathbb{Z}_+}$,
and non-zero functions $\rho_{\Delta}:\mathcal{Z} \rightarrow \mathbb{R}_+$,
$\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$, such that for every $i$:
\begin{enumerate}[a)]
\item There exists a surjective map $\psi_i: P \rightarrow \hat{P}_i$
satisfying
\begin{equation}
\label{Eq:MapCondition}
\psi_i \Big( P|_{\mathbf{u},\mathbf{y}} \Big) \subseteq \hat{P}_i |_{\mathbf{u},\mathbf{y}}
\end{equation}
for all $(\mathbf{u},\mathbf{y}) \in \mathcal{U}^{\mathbb{Z}_+} \times \mathcal{Y}^{\mathbb{Z}_+}$,
where
$\hat{P}_i \subset \mathcal{U}^{\mathbb{Z}_+} \times (\mathcal{Y} \times \hat{\mathcal{V}}_i)^{\mathbb{Z}_+} $
is the feedback interconnection of $\hat{M}_i$ and $\Delta_i$ as shown in Figure \ref{Fig:Approximation}.
\item For every feasible signal
$(\mathbf{u},(\mathbf{y},\mathbf{v})) \in P$, we have
\begin{equation}
\label{eq:Objectivenounds}
\mu(v(t)) \leq \mu(\hat{v}_{i+1}(t)) \leq \mu(\hat{v}_i(t)),
\end{equation}
for all $t \in \mathbb{Z}_+$, where
$$ (\mathbf{u},(\mathbf{\hat{y}_i},\mathbf{\hat{v}_i})) = \psi_i \Big( ( \mathbf{u},(\mathbf{y},\mathbf{v})) \Big),$$
$$(\mathbf{u},(\mathbf{\hat{y}_{i+1}},\mathbf{\hat{v}_{i+1}})) = \psi_{i+1} \Big( (\mathbf{u},(\mathbf{y},\mathbf{v})) \Big).$$
\item $\Delta_i$ is $\rho_{\Delta} / \mu_{\Delta}$ gain stable,
and moreover, the corresponding $\rho_{\Delta} / \mu_{\Delta}$ gains satisfy $\gamma_{i} \geq \gamma_{i+1}$.
\end{enumerate}
\end{defnt}
\begin{remark}
Intuitively,
the quality of the $i^{th}$ approximation is captured by the gain $\gamma_i$
of the approximation error system $\Delta_i$ (in condition c)),
and the gap between the original and auxiliary performance objectives (the outer inequality in condition b)).
We do not require strict inequalities in conditions b) and c),
to allow for instances where the sequence of approximate models recovers the original plant exactly
after a finite number of steps (i.e. for some finite value of $i$),
or alternatively,
instances where it may not converge\footnote{Indeed,
it is not clear to us that every system should admit an \emph{arbitrarily close finite state} approximation!} at all,
but nonetheless provides a good enough approximation for the control problem at hand.
\end{remark}
Next, we review a result demonstrating that a $\rho/\mu$ approximation of the plant together with a new,
appropriately defined performance objective
may be used to synthesize certified-by-design controllers for the original plant and performance objective:
\begin{thm} (Adapted from Theorems 1 and 3 in \cite{JOUR:Tarraf2012})
Consider a plant $P$ and a $\rho/\mu$ approximation $\{\hat{M}_i\}_{i=1}^{\infty}$ as in Definition \ref{Def:DFMApproximation}.
If for some index $i$, there exists a controller $K \subset \mathcal{Y}^{\mathbb{Z}_+} \times \mathcal{U}^{\mathbb{Z}_+}$
such that the feedback interconnection of $\hat{M}_i$ and $K$,
$(\hat{M}_i,K) \subset \mathcal{W}^{\mathbb{Z}_+} \times (\hat{\mathcal{V}}_i \times \mathcal{Z})^{\mathbb{Z}_+}$,
satisfies
\begin{equation}
\label{eq:ObjectiveM}
\inf_{T \geq 0} \sum_{t=0}^{T} \tau \mu_{\Delta}(w(t)) - \mu(\hat{v}(t)) - \tau \gamma_i \rho_{\Delta}(z(t)) > -\infty
\end{equation}
for some $\tau >0$,
then the feedback interconnection of $P$ and $K$,
$(P,K) \subset \mathcal{V}^{\mathbb{Z}_+}$,
satisfies (\ref{eq:ObjectiveP}).
\end{thm}
\begin{remark}
In practice, the entire sequence of approximations is not constructed upfront:
Rather, the first element is constructed and control synthesis is attempted.
If synthesis fails, the next element of the sequence is constructed,
and so the process continues.
\end{remark}
Finally, synthesizing a {\it full state} feedback controller for a given DFM in order to satisfy given
performance objectives of the form (\ref{eq:ObjectiveM}),
for a {\it given} value of $\tau >0$, is a readily solvable problem:
\begin{thm} (Adapted from Theorem 4 in \cite{JOUR:TaMeDa2011})
\label{Thm:DPsynthesis}
Consider a DFM $M$ with state transition equation
$$ q(t+1)=f(q(t),u(t),w(t)),$$
and let $\sigma : \mathcal{Q} \times \mathcal{U} \times \mathcal{W} \rightarrow \mathbb{R}$ be given.
There exists a $\varphi:\mathcal{Q} \rightarrow \mathcal{U}$ such that the closed loop system $(M,\varphi)$ satisfies
\begin{equation}
\label{eq:SigmaObjective}
\inf_{T \geq 0} \sum_{t=0}^{T} \sigma(q(t),\varphi(q(t)),w(t)) > -\infty
\end{equation}
if and only if
the sequence of functions $J_k : \mathcal{Q} \rightarrow \mathbb{R}$, $k \in \mathbb{Z}_+$, defined recursively by
\begin{eqnarray}
\label{eq:Jsequence}
J_0 & = & 0 \\
\nonumber
J_{k+1} & = & \max \{0,\mathbb{T}(J_k)\}
\end{eqnarray}
where
$\displaystyle \mathbb{T}(J(q)) = \min_{u \in \mathcal{U}} \max_{w \in \mathcal{W}} \{-\sigma(q,u,w) + J(f(q,u,w)) \}$,
converges.
\end{thm}
Note that in particular, a gain condition such as (\ref{eq:ObjectiveM}) can be written in the form
(\ref{eq:SigmaObjective}) as the outputs $\hat{v}$ and $z$ of $\hat{M}_i$
are functions of the state of $\hat{M}_i$ and its inputs.
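As an illustration of the recursion (\ref{eq:Jsequence}) (on a hypothetical machine and payoff $\sigma$, not on any plant treated here), the sketch below runs the value iteration and, upon convergence, extracts a state feedback $\varphi$ from a minimizing $u$ in the definition of $\mathbb{T}$.
\begin{verbatim}
# Illustrative sketch of the value-iteration test in the theorem above;
# Q, U, W, f and sigma below are hypothetical placeholders.

def value_iteration(Q, U, W, f, sigma, iters=1000, tol=1e-9):
    J = {q: 0.0 for q in Q}                          # J_0 = 0
    for _ in range(iters):
        T = {q: min(max(-sigma(q, u, w) + J[f(q, u, w)] for w in W)
                    for u in U) for q in Q}
        J_next = {q: max(0.0, T[q]) for q in Q}      # J_{k+1} = max{0, T(J_k)}
        if max(abs(J_next[q] - J[q]) for q in Q) < tol:
            phi = {q: min(U, key=lambda u, q=q:
                          max(-sigma(q, u, w) + J_next[f(q, u, w)] for w in W))
                   for q in Q}
            return J_next, phi                       # converged
        J = J_next
    return None, None                                # no convergence detected

# Hypothetical example: two states, binary control and disturbance.
Q, U, W = [0, 1], [0, 1], [0, 1]
f = lambda q, u, w: (q + u + w) % 2
sigma = lambda q, u, w: 1.0 - 0.5 * abs(u - w)
print(value_iteration(Q, U, W, f, sigma))
\end{verbatim}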
\section{Problem Setup}
\label{Sec:ProblemSetup}
Given a discrete-time plant $P$ described by
\begin{eqnarray}
\label{eq:Plant}
\nonumber
x(t+1) & = & f(x(t),u(t)) \\
y(t) & = & g(x(t)) \\
\nonumber
v(t) & = & h(x(t))
\end{eqnarray}
where $t \in \mathbb{Z}_+$,
$x(t) \in \mathbb{R}^n$,
$u(t) \in \mathcal{U}$, $y(t) \in \mathcal{Y}$, $v(t) \in \mathcal{V}$,
and functions $f: \mathbb{R}^n \times \mathcal{U} \rightarrow \mathbb{R}^n$,
$g: \mathbb{R}^n \rightarrow \mathcal{Y}$
and $h: \mathbb{R}^n \rightarrow \mathcal{V}$ are given.
No a priori constraints are placed on the alphabet set $\mathcal{V}$:
It may be a Euclidean space, the set of reals, or a countable or finite set.
$\mathcal{U}$ and $\mathcal{Y}$ are
given finite alphabets with $| \mathcal{U}| = m$ and $|\mathcal{Y}|=p$, respectively:
They may represent quantized values of some analog inputs and outputs,
or they may simply be symbolic inputs and outputs in general.
We are also given a performance objective
\begin{equation}
\tag{\ref{eq:ObjectiveP}}
\sup_{T \geq 0} \sum_{t=0}^{T} \mu(v(t)) < \infty.
\end{equation}
Our goals are twofold:
\begin{enumerate}
\item To provide a systematic methodology for constructing a $\rho/\mu$ approximation of $P$.
\item To rigorously analyze the relevant properties of this construct.
\end{enumerate}
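To fix ideas, the sketch below encodes one hypothetical instance of (\ref{eq:Plant}): a scalar plant with a two-symbol control alphabet, a sign-quantized sensor output, and performance output $v = h(x) = x^2$. None of these choices are imposed by the development; they serve only as a running illustration.
\begin{verbatim}
# Hypothetical instance of x(t+1)=f(x(t),u(t)), y(t)=g(x(t)), v(t)=h(x(t)):
# a scalar plant with U = {-1, +1}, a binary quantized sensor output and
# v = x^2.  These particular f, g, h and mu are illustrative assumptions.

U = (-1.0, 1.0)                      # finite control alphabet
Y = (0, 1)                           # finite sensor alphabet

def f(x, u): return 0.8 * x + 0.5 * u      # plant dynamics
def g(x):    return 1 if x >= 0.0 else 0   # quantized sensor output
def h(x):    return x * x                  # performance output
def mu(v):   return v                      # performance weight

# Simulate one control sequence from a given initial condition.
x = 2.0
for u in (1.0, -1.0, -1.0, 1.0):
    print("u =", u, " y =", g(x), " mu(v) =", round(mu(h(x)), 3))
    x = f(x, u)
\end{verbatim}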
\section{A Special Structure }
\label{Sec:SpecialStructure}
In \cite{CHAPTER:TaMeDa2007},
we proposed a special `observer-inspired' structure and used it in conjunction with
a particular state-space based construct
in order to approximate and subsequently design stabilizing controllers for a special class of systems,
namely switched second order homogeneous systems with binary outputs.
In what follows, we begin in Section \ref{SSec:ChoiceW} by proposing a slight generalization of this structure,
by modifying it to allow for arbitrary (i.e. not necessarily binary) finite sensor output alphabets.
We also address the related question of minimal construction of the disturbance alphabet set $\mathcal{W}$.
Next, we show in Section \ref{SSec:ExistenceOfMap} that under one additional assumption,
this generalized structure ensures the existence of
function $\psi_i$ as required in property a) of Definition \ref{Def:DFMApproximation}.
\subsection{Generalized Structure and Minimal Choice of $\mathcal{W}$}
\label{SSec:ChoiceW}
\begin{figure*}
\caption{Proposed generalized structure for $\hat{M}_i$ and $\Delta_i$}
\label{Fig:Structure}
\end{figure*}
Consider the structure for $\hat{M}_i$ and $\Delta_i$ shown in Figure \ref{Fig:Structure},
where $M_i$ is a DFM.
To ensure that the interconnection is well-posed,
we require $M_i$ to be $\mathcal{Y}/\mathcal{Y}$ strictly proper:
That is, its instantaneous output $\tilde{y}(t)$ is not an explicit function of its instantaneous input $y(t)$.
Noting that there is no loss of generality in
assuming that a finite set $\mathcal{W}$ with cardinality $r+1$ is given by $\mathcal{W} = \{ 0,\hdots, r\}$,
we begin by showing that when $P$ is a system over finite alphabets,
it is always possible to construct functions $\alpha$ and $\beta$ satisfying the property:
\begin{equation}
\label{Eq:AlphaBetaCondition}
\alpha \Big( \tilde{y}, \beta(\tilde{y},y) \Big)=y, \textrm{ for all } y, \tilde{y} \in \mathcal{Y}.
\end{equation}
The relevance of this property will become clear in Section \ref{SSec:ExistenceOfMap}:
Intuitively,
$\beta$ and $\alpha$ play the role of subtraction and addition in the finite alphabet setting.
\begin{prop}
\label{Prop:AlphaBetaCondition}
Consider an alphabet set $\mathcal{Y}$ with $|\mathcal{Y}|=p$
and a set $\mathcal{W} = \{0, \hdots,r \}$.
For sufficiently large $r$, there always exist functions
$\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$
and
$\alpha: \mathcal{Y} \times \mathcal{W} \rightarrow \mathcal{Y} \cup \{\epsilon\}$
such that (\ref{Eq:AlphaBetaCondition}) holds.
\end{prop}
\begin{proof}
The proof is by construction.
Let $r= p^2-1$.
Note that $|\mathcal{W}| = |\mathcal{Y}^2|=p^2$,
and there thus exists a bijective map $\beta : \mathcal{Y}^2 \rightarrow \mathcal{W}$
that associates with every pair
$(y_1,y_2) \in \mathcal{Y}^2$ a unique element of $\mathcal{W}$.
Now consider $\alpha: \mathcal{Y} \times \mathcal{W} \rightarrow \mathcal{Y} \cup \{\epsilon\}$
defined by
\begin{displaymath}
\alpha(\tilde{y},w)= \left\{ \begin{array}{cc}
y & \textrm{ if } \beta(\tilde{y},y)=w \\
\epsilon & \textrm{ otherwise}
\end{array} \right. .
\end{displaymath}
We have $ \alpha \Big( \tilde{y}, \beta(\tilde{y},y) \Big)=y $
for all $y, \tilde{y} \in \mathcal{Y}$, as desired.
\end{proof}
We next direct our attention in this setting to the choice of alphabet set $\mathcal{W}$.
A set with minimal cardinality is desirable,
as the complexity of solving the full state feedback control synthesis problem grows with the cardinality of $\mathcal{W}$,
as seen in the definition of $\mathbb{T}(J(q))$ in Theorem \ref{Thm:DPsynthesis}.
We thus answer the following question:
What is the minimal cardinality of $\mathcal{W}$ for which one can construct
functions $\beta$ and $\alpha$ with the desired property (\ref{Eq:AlphaBetaCondition})?
\begin{lemma}
\label{Lemma:CardinalityW}
Given a set $\mathcal{Y}$ with $|\mathcal{Y}| = p$,
let $\mathcal{W}^* = \{ 0, 1, \hdots, p^*-1\}$ be the smallest set for which there
exist $\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}^*$
and $\alpha: \mathcal{Y} \times \mathcal{W}^* \rightarrow \mathcal{Y} \cup \{ \epsilon \}$
satisfying $\alpha \Big( \tilde{y}, \beta(\tilde{y},y) \Big)=y$ for all $y,\tilde{y} \in \mathcal{Y}$.
We have $p^*=p$.
\end{lemma}
\begin{proof}
\begin{table}
\centering
\begin{tabular}{ l || c c c c c c}
& $y_1$ & $y_2$ & $y_3$ & $y_4$ & $\hdots$ &$y_p$ \\
\hline \hline
$y_1$ & $0$ & $p-1$ & $p-2$ & $p-3$ & $\hdots$ & $1$ \\
$y_2$ & $1$ & $0$ & $p-1$ & $p-2$ & $\hdots$ &$2$ \\
$y_3$ & $2$ & $1$ & $0$ & $p-1$ & $\hdots$ & $3$ \\
$y_4$ & $3$ & $2$ & $1$ & $0$ & & \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$& & $\ddots$ & \\
$y_p$ & $p-1$ & $p-2$ & $p-3$ & & & $0$
\end{tabular}
\caption{Definition of $\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$ when $|\mathcal{Y}|=p$}
\label{Table:BetaDefinition}
\end{table}
Let $p^*=p$, and consider a map $\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$
defined as shown in Table \ref{Table:BetaDefinition},
to be read as $\beta(y_1,y_1)=0$, $\beta(y_2,y_1)=1$, $\beta(y_1,y_2)=p-1$
and so on.
Note that by construction, each element of $\mathcal{W}$ appears {\it exactly once} in every row of the table.
Now consider function $\alpha: \mathcal{Y} \times \mathcal{W} \rightarrow \mathcal{Y}$
defined by
\begin{displaymath}
\alpha(\tilde{y},w) = y \textrm{ where } w=\beta(\tilde{y},y).
\end{displaymath}
$\alpha$ is a well-defined function,
and it is straightforward to show that
$\alpha \Big( \tilde{y}, \beta(\tilde{y},y) \Big)=y$ for all $y,\tilde{y} \in \mathcal{Y}$.
Finally, note that when $p^* <p$,
some element of $\mathcal{W}$ would have to appear twice in each row of the table.
Equivalently, for every $\tilde{y} \in \mathcal{Y}$, there exists $y_1 \neq y_2 \in \mathcal{Y}$
such that $\beta(\tilde{y},y_1) = \beta(\tilde{y},y_2)$.
Now suppose there exists a function $\alpha$
such that $\alpha \Big( \tilde{y}, \beta(\tilde{y},y) \Big)=y$ for all $y,\tilde{y} \in \mathcal{Y}$.
We then have
\begin{displaymath}
y_1 = \alpha(\tilde{y},\beta(\tilde{y},y_1)) = \alpha(\tilde{y},\beta(\tilde{y},y_2)) = y_2,
\end{displaymath}
leading to a contradiction.
\end{proof}
Note that $\alpha(\mathcal{Y} \times \mathcal{W})= \mathcal{Y}$ in the construction
presented in the proof of Lemma \ref{Lemma:CardinalityW}.
We can thus drop $\{ \epsilon \}$ from the co-domain of $\alpha$.
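If the sensor alphabet is indexed as $\mathcal{Y} = \{y_1,\hdots,y_p\} \cong \{0,\hdots,p-1\}$, the map of Table \ref{Table:BetaDefinition} amounts to a modular difference of indices, and $\alpha$ simply inverts it; the sketch below (an illustration only, with the indexing an assumption on our part) verifies property (\ref{Eq:AlphaBetaCondition}) exhaustively for a small $p$.
\begin{verbatim}
# One realization of the minimal-cardinality construction (|W| = |Y| = p):
# identify Y with {0, ..., p-1}; the table for beta then reads
#   beta(ytilde, y) = (ytilde - y) mod p,  alpha(ytilde, w) = (ytilde - w) mod p.
# The indexing of Y is an illustrative assumption, not imposed by the text.

p = 5                                  # |Y|

def beta(ytilde, y):
    return (ytilde - y) % p

def alpha(ytilde, w):
    return (ytilde - w) % p

# Exhaustive check of alpha(ytilde, beta(ytilde, y)) = y for all y, ytilde.
assert all(alpha(yt, beta(yt, y)) == y for yt in range(p) for y in range(p))
print("alpha/beta property verified for p =", p)
\end{verbatim}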
\subsection{Ensuring existence of $\psi_i$}
\label{SSec:ExistenceOfMap}
We now turn our attention to proving that, under one additional assumption on $M_i$,
the structure proposed in Section \ref{SSec:ChoiceW}
and shown in Figure \ref{Fig:Structure} ensures that condition a) of Definition \ref{Def:DFMApproximation} is met:
\begin{lemma}
\label{Lemma:ConditionA}
Consider the system shown in Figure \ref{Fig:Structure},
where $P \subset \mathcal{U}^{\mathbb{Z}_+} \times (\mathcal{Y} \times \mathcal{V})^{\mathbb{Z}_+}$,
$\mathcal{U}$ and $\mathcal{Y}$ are finite,
and $\beta:\mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$
and $\alpha: \mathcal{Y} \times \mathcal{W} \rightarrow \mathcal{Y}$
are given functions that satisfy (\ref{Eq:AlphaBetaCondition}).
For any DFM
$M_i \subset (\mathcal{U} \times \mathcal{Y})^{\mathbb{Z}_+} \times (\mathcal{Y} \times \hat{\mathcal{V}}_i)^{\mathbb{Z}_+} $
that is $\mathcal{Y} / \mathcal{Y}$ strictly proper and has fixed initial condition,
there exists a $\psi_i : P \rightarrow \hat{P}_i$,
where $\hat{P}_i$ is the interconnection of $\hat{M}_i$ and $\Delta_i$,
such that $\psi_i$ is surjective and
$ \psi_i \Big( P|_{\mathbf{u_o},\mathbf{y_o}} \Big) \subseteq\hat{P}_i|_{\mathbf{u_o},\mathbf{y_o}}$.
\end{lemma}
\begin{proof}
The proof is by construction.
We begin by noting that condition (\ref{Eq:AlphaBetaCondition})
ensures that the output $\mathbf{\hat{y}} \in \mathcal{Y}^{\mathbb{Z}_+}$ of $\hat{P}_i$ matches
the output $\mathbf{y} \in \mathcal{Y}^{\mathbb{Z}_+}$ of $P$ for every choice of
$\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$.
Now consider $\psi_{1,i} : P \rightarrow M_i$ defined by
$\psi_{1,i}\Big( (\mathbf{u},(\mathbf{y},\mathbf{v})) \Big) = ((\mathbf{u},\mathbf{y}),(\tilde{\mathbf{y}},\hat{\mathbf{v}})) \in M_i$,
where $(\tilde{\mathbf{y}},\hat{\mathbf{v}})$ is the unique output response of $M_i$ to input $(\mathbf{u},\mathbf{y})$
for fixed initial condition $q_o$.
Also consider $\psi_{2,i} : \psi_{1,i} (P) \rightarrow \hat{P}_i$ defined by:
\begin{displaymath}
\psi_{2,i} \Big( ((\mathbf{u},\mathbf{y}),(\tilde{\mathbf{y}},\hat{\mathbf{v}})) \Big) = (\mathbf{u},(\mathbf{y},\hat{\mathbf{v}}))
\end{displaymath}
This map is well-defined and its image lies in $\hat{P}_i$ by virtue of the structure considered.
Let $\psi_i= \psi_{2,i} \circ \psi_{1,i}$.
Note that $\psi_i$ is surjective since $\psi_{2,i}$ is surjective and $\psi_{2,i}^{-1}(\hat{P}_i) = \psi_{1,i}(P)$
by definition.
Moreover,
$\psi_i(P|_{\mathbf{u_o},\mathbf{y_o}}) \subseteq \hat{P}_i|_{\mathbf{u_o},\mathbf{y_o}}$ since
\begin{displaymath}
\psi_i(P|_{\mathbf{u_o},\mathbf{y_o}})
= \psi_{2,i}\Big( \psi_{1,i}(P|_{\mathbf{u_o},\mathbf{y_o}}) \Big)
= \psi_{2,i}\Big( \Big\{ ((\mathbf{u_o},\mathbf{y_o}),(\mathbf{\tilde{y}}, \mathbf{\hat{v}})) \in M_i \Big\} \Big)
\subseteq \hat{P}_i|_{\mathbf{u_o},\mathbf{y_o}}
\end{displaymath}
which concludes our proof.
\qedsymbol
\end{proof}
It follows from Lemma \ref{Lemma:ConditionA} that by restricting ourselves to
approximations $\{ \hat{M}_i \}_{i=1}^{\infty}$ with the structure shown in Figure \ref{Fig:Structure},
where $M_i$ (for each $i\in \mathbb{Z}_+$)
is a $\mathcal{Y}/\mathcal{Y}$ strictly proper DFM with {\it fixed} initial condition,
but otherwise arbitrary structure,
property a) of Definition \ref{Def:DFMApproximation} is guaranteed by construction,
and we only need to worry about constructing $\{ M_i \}_{i=1}^{\infty}$ to satisfy properties b) and c).
\section{Construction of $M_i$}
\label{Sec:Construction}
What remains is to construct a sequence of DFMs $\{M_i\}_{i=1}^{\infty}$ that,
when used in conjunction with the generalized structure proposed in Section \ref{SSec:ChoiceW} and
shown in Figure \ref{Fig:Structure},
ensures that properties b) and c) of Definition \ref{Def:DFMApproximation} are satisfied.
We begin by giving the intuition behind this construction in Section \ref{SSec:ConstructionInspiration},
before presenting the details of the construction in Section \ref{SSec:ConstructionDetails}.
\subsection{Inspiration for the Construction}
\label{SSec:ConstructionInspiration}
The inspiration for the construction comes from linear systems theory.
Indeed, consider a discrete-time SISO LTI system $S$ described by
\begin{eqnarray*}
\label{Eq:LTIDynamics}
x(t+1) & = & A x(t) + B u(t) \\
y(t) & = & C x(t) + D u(t)
\end{eqnarray*}
where $t \in \mathbb{Z}_+$, $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}$, $y(t) \in \mathbb{R}$,
$A$, $B$ and $C$ are given matrices of appropriate dimensions,
and $D$ is a given scalar.
Assume that the pair $(C,A)$ is observable and the pair $(A,B)$ is reachable.
Under these conditions, following a fairly classical derivation that is omitted here for brevity,
we can express the state of the system at the current
time in terms of its past $n$ inputs and outputs as
\begin{equation}
\label{Eq:RelatingStates}
x(t) = \left[ \begin{array}{cc} A^n O^{-1} & R - A^n O^{-1} M \end{array} \right]
\left[ \begin{array}{c}
y(t-1) \\ \vdots \\ y(t-n) \\ u(t-1) \\ \vdots \\ u(t-n)
\end{array} \right],
\end{equation}
where $R= [B \; AB \; \hdots \; A^{n-1}B]$ is the reachability matrix,
\begin{displaymath}
O= \left[ \begin{array}{c}
CA^{n-1} \\
CA^{n-2} \\
\vdots \\
C
\end{array} \right]
\end{displaymath}
is a row permutation of the observability matrix,
and $M$ is the matrix of Markov parameters
\begin{displaymath}
M= \left[ \begin{array}{cccc}
D & CB & \hdots & CA^{n-2}B \\
0 & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & CB \\
0 & \hdots & 0 & D
\end{array} \right].
\end{displaymath}
This observation suggests an approach for constructing a sequence of approximate models
of $S$ starting from finite length input-output sequence pairs of $S$:
The states of the $i^{th}$ approximate model, $\hat{S}_i$,
are then those elements of $\mathbb{R}^{2i}$ that constitute
{\it feasible} snapshots of length $i$ of the input-output behavior of $S$.
Equivalently, each state of $\hat{S}_i$ corresponds to a subset
of states of $S$, consisting of those states that are un-falsified by the observed data
of length $i$.
In particular, when $i= n$, consider the approximate model $\hat{S}_n$ with state $\hat{x}(t)$ defined as
\begin{displaymath}
\hat{x}(t) = [y(t-1), \hdots, y(t-n), u(t-1),\hdots,u(t-n)]'
\end{displaymath}
and state-space description
\begin{eqnarray*}
\hat{x}(t+1) & = & \hat{A} \hat{x}(t) + \hat{B} u(t) \\
\hat{y}(t) & = & \hat{C} \hat{x}(t) + D u(t)
\end{eqnarray*}
where $\hat{C} = \left[ \begin{array}{cc} CA^n O^{-1} & CR-CA^n O^{-1} M \end{array} \right]$
and $\hat{A}$ and $\hat{B}$ are appropriately defined\footnote{The exact expression for $\hat{A}$ and
$\hat{B}$ is not relevant to the discussion,
and is thus omitted for brevity.} matrices.
We note the following:
\begin{enumerate}
\item If systems $S$ and $\hat{S}_n$ are identically initialized,
meaning that their initial states obey
\begin{displaymath}
x(0) = \left[ \begin{array}{cc} A^n O^{-1} & R - A^n O^{-1} M \end{array} \right] \hat{x}(0)
\end{displaymath}
their outputs will be identical for any choice of input $\mathbf{u} \in \mathbb{R}^{\mathbb{Z}_+}$.
In that sense, $\hat{S}_n$ can be considered to recover the original system $S$.
\item Every state of $\hat{S}_n$ corresponds to a single state of $S$.
The converse is not true.
Indeed, there does not exist a one-to-one correspondence between the states of $S$ and $\hat{S}_n$:
The kernel of matrix $\left[ \begin{array}{cc} A^n O^{-1} & R - A^n O^{-1} M \end{array} \right]$
in (\ref{Eq:RelatingStates}) has non-zero dimension,
and one state of $S$ can correspond to several states of $\hat{S}_n$.
$\hat{S}_n$ is thus an inherently redundant model.
\end{enumerate}
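As a numerical sanity check of (\ref{Eq:RelatingStates}), the sketch below simulates a randomly generated SISO system for $n$ steps and reconstructs $x(n)$ from the past $n$ inputs and outputs; the matrices are random and purely illustrative, and the check presumes that $O$ is invertible, which holds generically.
\begin{verbatim}
# Numerical sanity check of the reconstruction of x(t) from the past n
# inputs and outputs, on a randomly generated SISO system (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = 0.3

Ak = lambda k: np.linalg.matrix_power(A, k)
R = np.hstack([Ak(k) @ B for k in range(n)])          # [B AB ... A^{n-1}B]
O = np.vstack([C @ Ak(n - 1 - k) for k in range(n)])  # rows C A^{n-1}, ..., C
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = D                                       # Markov parameters
    for j in range(i + 1, n):
        M[i, j] = (C @ Ak(j - i - 1) @ B).item()

x = rng.standard_normal((n, 1))                       # x(0)
u_seq, y_seq = [], []
for t in range(n):
    u = rng.standard_normal()
    y_seq.append((C @ x).item() + D * u)
    u_seq.append(u)
    x = A @ x + B * u                                 # after the loop, x = x(n)

Y = np.array(y_seq[::-1]).reshape(n, 1)               # [y(n-1); ...; y(0)]
U = np.array(u_seq[::-1]).reshape(n, 1)               # [u(n-1); ...; u(0)]
AnOinv = Ak(n) @ np.linalg.inv(O)
x_rec = AnOinv @ Y + (R - AnOinv @ M) @ U
print(np.allclose(x_rec, x))                          # expected: True
\end{verbatim}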
An alternative approach for comparing the responses of $S$ and $\hat{S}_n$
without explicitly matching their initial states
is by considering an ``approximation error" $\Delta_i$ with the structure shown in Figure \ref{Fig:Structure}
($P$ then corresponds to ``$S$" and $M_i$ corresponds to ``$\hat{S}_n$").
In this setup, $\hat{S}_n$ is additionally given access to the outputs of $S$,
allowing it to estimate its initial state:
State $\hat{x}(t)$ of $\hat{S}_n$ can thus be thought of as its best instantaneous estimate of the
state $x(t)$ of $S$.
At time steps $t \leq n-1$, the state set of $\hat{S}_n$ is refined as follows
\begin{displaymath}
\hat{x}(0) \in \mathbb{R}^{2n},
\end{displaymath}
\begin{displaymath}
\hat{x}(1) \in \{v \in \mathbb{R}^{2n} | v_1 = y(0), v_{n+1} = u(0) \},
\end{displaymath}
\begin{displaymath}
\hat{x}(2) \in \{ v \in \mathbb{R}^{2n} | v_1 = y(1), v_2 = y(0), v_{n+1} = u(1), v_{n+2} = u(0) \}
\end{displaymath}
and so on.
At time steps $t \geq n$, $\hat{x}(t)$ is uniquely defined by the expression in (\ref{Eq:RelatingStates}).
The $\mathcal{L}_2$ gain of $\Delta_i$,
defined here as the infimum of $\gamma \geq 0$ such that the inequality
\begin{displaymath}
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma^2 \| u(t) \|^2 - \| w(t) \|^2 > -\infty
\end{displaymath}
holds, compares how well the outputs match after a transient
(i.e. after $\hat{S}_n$ is done estimating the initial state of $S$):
Since the outputs of $S$ and $\hat{S}_n$ will exactly match for all times $t \geq n$,
the $\mathcal{L}_2$ gain of $\Delta_i$ in this case is zero.
The internal structure of $\Delta_i$
thus has a nice intuitive interpretation that may not have been
as transparent to the readers when we introduced it in \cite{CHAPTER:TaMeDa2007},
and the problem of finite state approximation is thus intricately connected to that of
state estimation and reconstruction under finite memory constraints.
Note that output $\hat{y}(t)$ cannot explicitly depend on input $y(t)$ in this setting,
otherwise $\hat{S}_n$ can trivially match the output of $S$ at every time step,
rendering the comparison meaningless.
While the use of $\hat{S}_n$ as an alternative model of $S$
is not justifiable here,
this exercise suggests a procedure for constructing
approximations of systems over finite alphabets:
In that setting, $\mathcal{U}$ and $\mathcal{Y}$ are finite
leading to approximate models with {\it finite} state-spaces.
\subsection{Details of the Construction}
\label{SSec:ConstructionDetails}
Given a plant over finite alphabets as in (\ref{eq:Plant}) and a performance objective
as in (\ref{eq:ObjectiveP}), we construct the corresponding sequence $\{M_i\}_{i=1}^{\infty}$ as follows:
For each $i \in \mathbb{Z}_+$, $M_i$ is a $\mathcal{Y} / \mathcal{Y}$ strictly proper DFM described by
\begin{eqnarray}
\nonumber
q(t+1) & = & f_i(q(t),u(t),y(t)) \\
\tilde{y}(t) & = & g_i(q(t)) \\
\nonumber
\hat{v}_i(t) & = & h_i(q(t))
\end{eqnarray}
where $t \in \mathbb{Z}_+$,
$q(t) \in \mathcal{Q}_i$, $u(t) \in \mathcal{U}$,
$y(t) \in \mathcal{Y}$, $\tilde{y}(t) \in \mathcal{Y}$,
and $\hat{v}_i(t) \in \hat{\mathcal{V}}_i$.
\noindent {\bf State Set:} The state set is
\begin{displaymath}
\mathcal{Q}_i = \mathcal{Q}_{i, F} \cup \mathcal{Q}_{i, I} \cup \{q_{\emptyset}, q_{o}\}
\end{displaymath}
where
\noindent $\mathcal{Q}_{i, F}$ - Set of final states. This is where the state of $M_i$ evolves
for $t \geq i$.
\noindent $\mathcal{Q}_{i, I}$ - Set of initial states. This is where the state of $M_i$ evolves
for $1 \leq t < i$.
\noindent $q_{\emptyset}$ - Impossible state. This is where the state of $M_i$ transitions to
when it encounters an input-output pair that does not correspond to plant $P$.
\noindent $q_{o}$ - Initial state. This is the fixed initial state of $M_i$ at $t=0$.
More precisely, using the shorthand notation $f_{u}(x)$ to denote $f(x,u)$, we have
\noindent $\rhd$
$\mathcal{Q}_{i,F} \subset \mathcal{Y}^i \times \mathcal{U}^i$,
$q=(y_1,\hdots,y_i,u_1,\hdots,u_i) \in \mathcal{Q}_{i,F}$ if $\exists x_o \in \mathbb{R}^n$ such that
\begin{eqnarray}
\nonumber
y_i & = & g\Big( x_o \Big) \\ \nonumber
y_{i-1} & = & g\Big( f_{u_i}(x_o) \Big) \\ \label{Eq:FeasibleF}
y_{i-2} & = & g \Big( f_{u_{i-1}} \circ f_{u_i}(x_o)\Big) \\ \nonumber
\vdots & = & \vdots \\ \nonumber
y_1 & = & g \Big( f_{u_2} \circ \cdots \circ f_{u_i}(x_o) \Big)
\end{eqnarray}
\noindent $\rhd$
$\mathcal{Q}_{i,I} = \mathcal{Q}_{i,I,1} \cup \hdots \cup \mathcal{Q}_{i,I,i}$
where
$\mathcal{Q}_{i,I,j} \subset \mathcal{Y}^j \times \mathcal{U}^j$
and
$q=(y_1,\hdots,y_j,u_1,\hdots,u_j) \in \mathcal{Q}_{i,I,j}$ if $\exists x_o \in \mathbb{R}^n$ such that
\begin{eqnarray}
\nonumber
y_j & = & g\Big( x_o \Big) \\ \nonumber
y_{j-1} & = & g\Big( f_{u_j}(x_o) \Big) \\ \label{Eq:FeasibleI}
\vdots & = & \vdots \\ \nonumber
y_1 & = & g \Big( f_{u_2} \circ \cdots \circ f_{u_j}(x_o) \Big)
\end{eqnarray}
\noindent {\bf Transition Function:} The transition function
$f_i: \mathcal{Q}_i \times \mathcal{U} \times \mathcal{Y} \rightarrow \mathcal{Q}_i$
is defined as follows:
\noindent $\rhd$ For $q=(y_1,\hdots,y_i,u_1,\hdots,u_i) \in \mathcal{Q}_{i,F}$, we define
\begin{displaymath}
f_i(q,u,y)=
\left\{ \begin{array}{cc}
\overline{q} = (y,y_1,\hdots,y_{i-1}, u, u_1,\hdots, u_{i-1}) & \textrm{ if } \overline{q} \in \mathcal{Q}_{i,F} \\
q_{\emptyset} & \textrm{ otherwise}
\end{array} \right.
\end{displaymath}
\noindent $\rhd$ For $q=q_o$, we define
\begin{displaymath}
f_i(q_o,u,y)=
\left\{ \begin{array}{cc}
\overline{q} = (y, u) & \textrm{ if } \overline{q} \in \mathcal{Q}_{i,I,1} \\
q_{\emptyset} & \textrm{ otherwise}
\end{array} \right.
\end{displaymath}
\noindent $\rhd$ For $q=(y_1,\hdots,y_j,u_1,\hdots,u_j) \in \mathcal{Q}_{i,I,j}$, we define
\begin{displaymath}
f_i(q,u,y)=
\left\{ \begin{array}{cc}
\overline{q} = (y,y_1,\hdots,y_j, u, u_1,\hdots, u_j) & \textrm{ if }
\overline{q} \in \mathcal{Q}_{i,I,j+1} \cup \mathcal{Q}_{i,F} \\
q_{\emptyset} & \textrm{ otherwise}
\end{array} \right.
\end{displaymath}
\noindent $\rhd$ For $q_{\emptyset}$, we define $f_i(q_{\emptyset},u,y)=q_{\emptyset}$
for all $u \in \mathcal{U}$ and $y \in \mathcal{Y}$.
\noindent {\bf Output Functions:} We begin by associating with every $q \in \mathcal{Q}_i$
a subset $X(q)$ of $\mathbb{R}^n$ defined as follows:
\noindent $\rhd$ For $q=(y_1,\hdots,y_i,u_1,\hdots,u_i) \in \mathcal{Q}_{i,F}$, let
\begin{equation}
X_o= \{\ x_o \in \mathbb{R}^n | x_o \textrm{ satisfies } (\ref{Eq:FeasibleF}) \}
\end{equation}
and define
\begin{equation}
X(q) = f_{u_1} \circ \hdots \circ f_{u_i}(X_o)
\end{equation}
\noindent $\rhd$ For $q=(y_1,\hdots,y_j,u_1,\hdots,u_j) \in \mathcal{Q}_{i,I,j}$, let
\begin{equation}
X_o= \{\ x_o \in \mathbb{R}^n | x_o \textrm{ satisfies } (\ref{Eq:FeasibleI}) \}
\end{equation}
and define
\begin{equation}
X(q) = f_{u_1} \circ \hdots \circ f_{u_j}(X_o)
\end{equation}
\noindent $\rhd$ Define
\begin{equation}
X(q) =
\left\{ \begin{array}{cc}
\mathbb{R}^n, & \; q=q_o \\
\emptyset, & \; q = q_{\emptyset}
\end{array} \right.
\end{equation}
We can also associate with every $q \in \mathcal{Q}_i$ a subset $Y(q)$ of $\mathcal{Y}$ defined as
\begin{equation}
\label{Eq:Ydefinition}
Y(q) = g(X(q))
\end{equation}
We are now ready to define the output function $g_i: \mathcal{Q}_i \rightarrow \mathcal{Y}$ as
\begin{equation}
g_i(q) =
\left\{ \begin{array}{ll}
y \textrm{ for some } y \in \mathcal{Y}, & \: \: \: \textrm{if } Y(q) =\emptyset \\
y \textrm{ for some } y \in Y(q), & \: \: \: \textrm{otherwise}
\end{array} \right.
\end{equation}
The output function $h_i: \mathcal{Q}_i \rightarrow \hat{\mathcal{V}}_i$ is defined as
\begin{equation}
\label{Eq:vOutputFunction}
h_i(q) =
\left\{ \begin{array}{ll}
h \Big( \displaystyle \argmax_{x \in X(q)} \mu(h(x)) \Big),
& \:\:\: q \in \mathcal{Q}_{i,F} \cup \mathcal{Q}_{i,I} \cup \{q_o\} \\
\displaystyle h \Big( \argmin_{x \in \mathbb{R}^n} \mu(h(x)) \Big),
& \:\:\: q=q_{\emptyset}
\end{array} \right.
\end{equation}
\noindent {\bf Output Set:} The output set $\hat{\mathcal{V}}_i$ is defined as
\begin{displaymath}
\hat{\mathcal{V}}_i = \bigcup_{q \in \mathcal{Q}_i} h_i(q)
\end{displaymath}
\begin{remark}
We conclude this section with a few observations:
\begin{enumerate}
\item The output of $M_i$ corresponding to a state $q$ is chosen {\it arbitrarily} among the
feasible options. The possibility of error is accounted for in the gain $\gamma_i$ of $\Delta_i$.
\item Our definition of the performance output function $h_i$ assumes that
the map $\mu \circ h: \mathbb{R}^n \rightarrow \mathbb{R}$ attains its maximum on each non-empty set $X(q)$ and its minimum on $\mathbb{R}^n$.
This places some mild restrictions on the original problem.
\end{enumerate}
\end{remark}
\begin{remark}
When $|\mathcal{U}| =m$ and $|\mathcal{Y}|=p$,
the cardinality of the state set $\mathcal{Q}_i$ of $M_i$ satisfies
\begin{displaymath}
m^i \leq | \mathcal{Q}_i | \leq m^i p^i.
\end{displaymath}
The bounds follow from the fact that every input sequence of length $i$ is feasible,
and for each input sequence,
the corresponding number of feasible output sequences of length $i$ can range from 1 to $p^i$.
For each state $q_i \in \mathcal{Q}_i$,
there is at least 1 and at most $p \cdot m$ possible state transitions.
\end{remark}
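Since the construct is conceptual, computing $\mathcal{Q}_{i,F}$, $\mathcal{Q}_{i,I}$ and the sets $X(q)$ exactly requires solving feasibility problems in $x_o$, which we do not address. Purely as an illustration of the bookkeeping involved, the sketch below approximates the final states and the sets $X(q)$ for the hypothetical scalar plant used earlier, by sampling initial conditions and recording the resulting length-$i$ input-output words; it is not a substitute for the exact construction.
\begin{verbatim}
# Illustration only: a sampling-based approximation of the final states of
# M_i and of the sets X(q), for the hypothetical scalar plant used earlier.
# The exact construction requires exact feasibility tests in x_o; here they
# are replaced by a finite sample of initial conditions.
import itertools
import numpy as np

i = 2                                      # memory length of M_i
U = (-1.0, 1.0)

def f(x, u): return 0.8 * x + 0.5 * u
def g(x):    return 1 if x >= 0.0 else 0
def h(x):    return x * x
def mu(v):   return v

def word_and_state(x0, u_word):
    # u_word = (u_i, ..., u_1), oldest input first; returns the state
    # q = (y_1, ..., y_i, u_1, ..., u_i) (index 1 = most recent) together
    # with the current plant state, an element of X(q).
    x, ys = x0, []
    for u in u_word:
        ys.append(g(x))
        x = f(x, u)
    return tuple(reversed(ys)) + tuple(reversed(u_word)), x

X_of_q = {}                                # q -> sampled subset of X(q)
for u_word in itertools.product(U, repeat=i):
    for x0 in np.linspace(-3.0, 3.0, 601):
        q, x = word_and_state(x0, u_word)
        X_of_q.setdefault(q, []).append(x)

# Approximate output maps: g_i picks any feasible y, h_i maximizes mu(h(.)).
g_i = {q: g(xs[0]) for q, xs in X_of_q.items()}
h_i = {q: h(max(xs, key=lambda s: mu(h(s)))) for q, xs in X_of_q.items()}
print(len(X_of_q), "sampled final states for i =", i)
\end{verbatim}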
\section{$\rho / \mu$ Approximation Properties of the Construction}
\label{Sec:Properties}
In this Section, we show that the construction of $\{M_i\}_{i=1}^{\infty}$ proposed
in Section \ref{SSec:ConstructionDetails} together with the generalized structure
proposed and analyzed in Section \ref{Sec:SpecialStructure} indeed allows us to meet
the remaining two properties of Definition \ref{Def:DFMApproximation},
namely properties b) and c).
\subsection{Conditions on the Performance Objectives}
\label{SSec:PropertyB}
\begin{figure*}
\caption{Interconnection of $P$, $M_i$ and $M_{i+1}$}
\label{Fig:PMiInterconnection}
\end{figure*}
\begin{prop}
\label{Prop:Xinclusion}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails} for some $i \geq 1$.
Consider the interconnection of $P$ and $M_i$ as shown in Figure \ref{Fig:PMiInterconnection}.
Let $x(t)$ and $q_i(t)$ be the states of $P$ and $M_i$,
respectively, at time $t$.
For any choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$,
we have
\begin{displaymath}
x(t) \in X(q_i(t)), \textrm{ for all } t \geq 0.
\end{displaymath}
\end{prop}
\begin{proof}
Pick a choice $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$.
At $t=0$, $q(0)=q_o$ and $X(q_o)=\mathbb{R}^n$ by construction.
Thus $x(0) \in X(q_i(0))$.
For $1 \leq t < i$, we can write
\begin{displaymath}
X(q_i(t)) = \Big\{ x \in \mathbb{R}^n | x = f_{u(t-1)} \circ \hdots \circ f_{u(0)} (x_o) \textrm{ for some } x_o \in \mathbb{R}^n \textrm{ that satisfies } (\ref{Eq:FeasibleI}) \Big\}.
\end{displaymath}
Thus $x(t) \in X(q_i(t))$ since it can indeed be written in that form for some $x_o$,
namely the initial state of $P$, $x_o=x(0)$,
and $x_o$ satisfies (\ref{Eq:FeasibleI}).
For $t \geq i$, we can write
\begin{displaymath}
X(q_i(t)) = \Big\{ x \in \mathbb{R}^n | x = f_{u(t-1)} \circ \hdots \circ f_{u(t-i)} (x_o) \textrm{ for some } x_o \in \mathbb{R}^n \textrm{ that satisfies } (\ref{Eq:FeasibleF}) \Big\}.
\end{displaymath}
Again we have $x(t) \in X(q_i(t))$, since $x(t)$ can be written as $x(t) = f_{u(t-1)} \circ \hdots \circ f_{u(t-i)} (x(t-i))$,
and $x(t-i)$ satisfies (\ref{Eq:FeasibleF}).
Finally, we note that our argument is independent of the specific choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$,
and is also independent of the initial state of $P$,
which concludes our proof.
\end{proof}
\begin{prop}
\label{Prop:vInequality}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails} for some $i \geq 1$.
Consider the interconnection of $P$ and $M_i$ as shown in Figure \ref{Fig:PMiInterconnection}.
For any choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$,
we have
\begin{displaymath}
\mu(v(t)) \leq \mu(\hat{v}_i(t)), \textrm{ for all } t \geq 0.
\end{displaymath}
\end{prop}
\begin{proof}
Pick a choice $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$.
It follows from Proposition \ref{Prop:Xinclusion} that the corresponding state trajectories of
$P$ and $M_i$ satisfy $x(t) \in X(q_i(t))$, for all $t \in \mathbb{Z}_+$.
We have $q_i(t) \neq q_{\emptyset}$ for all $t$,
since $M_i$ is driven by a {\it feasible} pair $(\mathbf{u},\mathbf{y})$ of $P$
in this setup.
Let $\displaystyle \overline{x}_i(t) = \argmax_{x \in X(q_i(t))} \mu(h(x))$.
It follows from (\ref{Eq:vOutputFunction}) that
\begin{equation*}
\mu(v(t)) = \mu\big(h(x(t))\big) \leq \mu \Big(h(\overline{x}_i(t)) \Big) = \mu (\hat{v}_i (t)).
\end{equation*}
Once again, noting that our argument is independent of the specific choice of
$\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$,
and of the initial state of $P$,
we conclude our proof.
\end{proof}
\begin{prop}
\label{Prop:vInequalityPlus}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
two DFM $M_i$ and $M_{i+1}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails} for some $i \geq 1$.
Consider the interconnection of $P$, $M_i$ and $M_{i+1}$ as shown in Figure \ref{Fig:PMiInterconnection}.
For any choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$,
we have
\begin{displaymath}
\mu(\hat{v}_{i+1}(t)) \leq \mu(\hat{v}_i(t)), \textrm{ for all } t \geq 0.
\end{displaymath}
\end{prop}
\begin{proof}
Pick a choice $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$.
Let $q_i(t)$ and $q_{i+1}(t)$ denote the states of $M_i$ and $M_{i+1}$,
respectively, at time $t$.
For $1 \leq t <i$, we can write
\begin{displaymath}
X(q_i(t)) = \Big\{ x \in \mathbb{R}^n | x = f_{u(t-1)} \circ \hdots \circ f_{u(0)} (x_o) \textrm{ for some }
x_o \in \mathbb{R}^n \textrm{ that satisfies } (\ref{Eq:FeasibleI}) \Big\}
\end{displaymath}
and
\begin{displaymath}
X(q_{i+1}(t)) = \Big\{ x \in \mathbb{R}^n | x = f_{u(t-1)} \circ \hdots \circ f_{u(0)} (x_o) \textrm{ for some }
x_o \in \mathbb{R}^n \textrm{ that satisfies } (\ref{Eq:FeasibleI}) \Big\}.
\end{displaymath}
Since $X(q_i(t)) = X(q_{i+1}(t))$, it follows from (\ref{Eq:vOutputFunction}) that
$\mu(\hat{v}_{i+1}(t)) = \mu(\hat{v}_i(t))$ for all $1 \leq t <i$.
For $t \geq i$, we can write
\begin{displaymath}
X(q_i(t)) = \Big\{ x \in \mathbb{R}^n | x = f_{u(t-1)} \circ \hdots \circ f_{u(t-i)} (x_o) \textrm{ for some }
x_o \in \mathbb{R}^n \textrm{ that satisfies } (\ref{Eq:FeasibleF}) \Big\}
\end{displaymath}
and
\begin{displaymath}
X(q_{i+1}(t)) = \Big\{ x \in \mathbb{R}^n | x = f_{u(t-1)} \circ \hdots \circ f_{u(t-i-1)} (x_o) \textrm{ for some } x_o \in \mathbb{R}^n \textrm{ that satisfies } (\ref{Eq:FeasibleF}) \textrm{ with } i+1 \textrm{ replacing } i \Big\}.
\end{displaymath}
Thus $X(q_{i+1}(t)) \subseteq X(q_{i}(t))$.
Letting
$$\displaystyle \overline{x}_i(t) = \argmax_{x \in X(q_i(t))} \mu(h(x))$$
and
$$\displaystyle \overline{x}_{i+1}(t) = \argmax_{x \in X(q_{i+1}(t))} \mu(h(x)),$$
it follows from (\ref{Eq:vOutputFunction}) that
\begin{equation*}
\mu(\hat{v}_{i+1}(t)) = \mu(h(\overline{x}_{i+1}(t))) \leq
\mu(h(\overline{x}_i(t))) = \mu (\hat{v}_i (t)).
\end{equation*}
Finally, we note that our argument is independent of the specific choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$,
and is also independent of the initial state of $P$,
which concludes our proof.
\qedsymbol
\end{proof}
We can now state and prove the main result in this Section:
\begin{lemma}
\label{Lemma:ConditionB}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a sequence of DFMs $\{M_i\}_{i=1}^{\infty}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}
and used with the structure shown in Figure \ref{Fig:Structure}.
There exists a surjective map $\psi_i: P \rightarrow \hat{P}_i$
satisfying (\ref{Eq:MapCondition}) such that for every $(\mathbf{u},(\mathbf{y},\mathbf{v})) \in P$, we have
\begin{equation}
\tag{\ref{eq:Objectivenounds}}
\mu(v(t)) \leq \mu(\hat{v}_{i+1}(t)) \leq \mu(\hat{v}_i(t)),
\end{equation}
for all $t \in \mathbb{Z}_+$, where
$$ (\mathbf{u},(\mathbf{\hat{y}_i},\mathbf{\hat{v}_i})) = \psi_i \Big( ( \mathbf{u},(\mathbf{y},\mathbf{v})) \Big),$$
$$(\mathbf{u},(\mathbf{\hat{y}_{i+1}},\mathbf{\hat{v}_{i+1}})) = \psi_{i+1} \Big( (\mathbf{u},(\mathbf{y},\mathbf{v})) \Big).$$
\end{lemma}
\begin{proof}
Consider the map $\psi_i : P \rightarrow \hat{P}_i$ constructed in the proof of Lemma \ref{Lemma:ConditionA}.
We have $\psi_{i} = \psi_{2,i} \circ \psi_{1,i}$ where
$\psi_{1,i} : P \rightarrow M_i$ is defined by
$$\psi_{1,i}\Big( (\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \Big) = ((\mathbf{u_o},\mathbf{y_o}),
(\tilde{\mathbf{y}}_i,\hat{\mathbf{v}}_i)) \in M_i.$$
Here $(\tilde{\mathbf{y}}_i,\hat{\mathbf{v}}_i)$ is the
{\it unique} output response of $M_i$ to input $(\mathbf{u}_o,\mathbf{y_o})$
for initial condition $q_i(0)$.
Also recall that $\psi_{2,i} : \psi_{1,i} (P) \rightarrow \hat{P}_i$ was defined by
\begin{displaymath}
\psi_{2,i} \Big( ((\mathbf{u_o},\mathbf{y_o}),(\tilde{\mathbf{y}}_i,\hat{\mathbf{v}}_i)) \Big)
= (\mathbf{u_o},(\mathbf{y_o},\hat{\mathbf{v}}_i)).
\end{displaymath}
Thus it suffices to show that for any $(\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \in P$,
the outputs of $M_i$ and $M_{i+1}$,
$(\tilde{\mathbf{y}}_i,\hat{\mathbf{v}}_i)$ and $(\tilde{\mathbf{y}}_{i+1},\hat{\mathbf{v}}_{i+1})$,
respectively, in response to input
$(\mathbf{u}_o,\mathbf{y_o})$, satisfy the desired condition.
This follows directly from Propositions \ref{Prop:vInequality} and \ref{Prop:vInequalityPlus}.
\end{proof}
\subsection{Condition on the Gains}
\label{SSec:PropertyC}
In this Section, we first show that under some mild additional assumptions,
the proposed construction of $\{ M_i \}_{i=1}^{\infty}$
together with the structure shown in Figure \ref{Fig:Structure} meet the gain inequality in
property c) of Definition \ref{Def:DFMApproximation}.
We begin by establishing some facts that will be useful in our analysis:
\begin{prop}
\label{Prop:Yinclusion}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails} for some $i \geq 1$.
Consider the interconnection of $P$ and $M_i$ as shown in Figure \ref{Fig:PMiInterconnection}.
Let $y(t)$ and $x(t)$ be the output and state, respectively, of $P$ at time $t$.
Let $q_i(t)$ be the state of $M_i$ at time $t$.
For any choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$,
we have
\begin{displaymath}
y(t) \in Y(q_i(t)), \textrm{ for all } t \geq 0,
\end{displaymath}
for $Y$ defined in (\ref{Eq:Ydefinition}).
\end{prop}
\begin{proof}
Pick a choice $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$.
By Proposition \ref{Prop:Xinclusion},
we have $x(t) \in X(q_i(t))$ for all $t \geq 0$.
It thus follows that $y(t) = g(x(t)) \in Y(q_i(t)) = g(X(q_i(t)))$, for all $t \geq 0$.
Finally, we note that our argument is independent of the specific choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$,
and is also independent of the initial state of $P$,
which concludes our proof.
\end{proof}
\begin{prop}
\label{Prop:YHierarchy}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and two DFMs $M_i$ and $M_{i+1}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}, for some $i \geq 1$.
Consider the interconnection of $P$, $M_i$ and $M_{i+1}$ as shown in Figure \ref{Fig:PMiInterconnection}.
For any choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and initial state $x(0) \in \mathbb{R}^n$
of $P$, we have
\begin{displaymath}
Y(q_{i+1}(t)) \subseteq Y(q_i(t)), \textrm{ for } t \geq 0.
\end{displaymath}
\end{prop}
\begin{proof}
Pick a choice $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and $x(0) \in \mathbb{R}^n$.
By arguments similar to those made in the proof of Proposition \ref{Prop:vInequalityPlus},
omitted here for brevity,
we have
\begin{displaymath}
\left\{ \begin{array}{ll}
X(q_{i+1}(t)) = X(q_i(t)), \textrm{ for } 0 \leq t < i \\
X(q_{i+1}(t)) \subseteq X(q_i(t)), \textrm{ for } t \geq i
\end{array} \right.
\end{displaymath}
It thus follows, taking into account (\ref{Eq:Ydefinition}), that
\begin{displaymath}
\left\{ \begin{array}{ll}
Y(q_{i+1}(t)) = Y(q_i(t)), \textrm{ for } 0 \leq t < i \\
Y(q_{i+1}(t)) \subseteq Y(q_i(t)), \textrm{ for } t \geq i
\end{array} \right.
\end{displaymath}
which concludes our proof.
\end{proof}
\begin{defnt}
\label{Def:PositiveDefinite}
Let $\mathcal{W} = \{0, 1,\hdots, p-1\}$ for some integer $p$.
A function $\mu: \mathcal{W} \rightarrow \mathbb{R}_+$ is positive definite if
$\mu(w) \geq 0$ for all $w \in \mathcal{W}$ and $\mu(w)=0$ iff $w=0$.
\end{defnt}
\begin{defnt}
\label{Def:Flat}
Let $\mathcal{W} = \{0, 1,\hdots, p-1\}$ for some integer $p$ and consider a positive definite
function $\mu: \mathcal{W} \rightarrow \mathbb{R}_+$.
$\mu$ is flat if there exists an $\alpha >0$ such that
$\mu(w)=\alpha$ for every $w \neq 0$.
\end{defnt}
\begin{defnt}
\label{Def:Child}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a sequence of DFMs $\{ M_i \}_{i=1}^{\infty}$ constructed as described in Section \ref{SSec:ConstructionDetails}.
$q_{i+1} = (y_1,\hdots, y_j, y_{j+1},u_1,\hdots, u_j, u_{j+1}) \in \mathcal{Q}_{i+1}\setminus{\{q_o,q_{\emptyset}\}}$
is said to be a child of $q_i \in \mathcal{Q}_{i}$ if
\begin{displaymath}
q _i= \left\{ \begin{array}{ll}
(y_1,\hdots,y_j,u_1,\hdots,u_j) & \textrm{ when } j=i \\
q_{i+1} & \textrm{ when } 1\leq j < i
\end{array} \right.
\end{displaymath}
We denote this by writing $q_{i+1} \in \mathcal{C}(q_i)$.
We consider $q_o$ and $q_{\emptyset}$ in $\mathcal{Q}_{i+1}$ to be children
of $q_o$ and $q_{\emptyset}$, respectively, in $\mathcal{Q}_{i}$.
\end{defnt}
\begin{prop}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a sequence of DFMs $\{ M_i \}_{i=1}^{\infty}$ constructed as described in Section \ref{SSec:ConstructionDetails}.
For every $q_{i+1} \in \mathcal{Q}_{i+1}$,
there exists a unique $q_i \in \mathcal{Q}_i$ such that $q_{i+1} \in \mathcal{C}(q_i)$.
\end{prop}
\begin{proof}
Existence follows from Definition \ref{Def:Child} and the definition of the states.
Uniqueness follows directly from Definition \ref{Def:Child}.
\end{proof}
\begin{remark}
The intuition here is that the set of states of $M_{i+1}$ can be partitioned into equivalence classes:
Elements of each equivalence class are children of the same state of $M_i$.
\end{remark}
\begin{prop}
\label{Prop}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a sequence of DFMs $\{ M_i \}_{i=1}^{\infty}$ constructed as described in Section \ref{SSec:ConstructionDetails}.
For every $q_{i+1} \in \mathcal{Q}_{i+1}$,
$q_i \in \mathcal{Q}_i$ such that $q_{i+1} \in \mathcal{C}(q_i)$,
we have
$ X(q_{i+1}) \subseteq X(q_{i})$ and
$ Y(q_{i+1}) \subseteq Y(q_{i})$.
\end{prop}
\begin{proof}
The proof follows directly from Definition \ref{Def:Child}
and the definitions of $X$ and $Y$.
\end{proof}
\begin{defnt}
\label{Def:OutputNested}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and a sequence of DFMs $\{ M_i \}_{i=1}^{\infty}$ constructed as described in Section \ref{SSec:ConstructionDetails}.
The sequence $\{M_i\}_{i=1}^{\infty}$ is output-nested if for every $i \in \mathbb{Z}_+$,
$q_{i+1} \in \mathcal{Q}_{i+1}$ and $q_{i} \in \mathcal{Q}_{i}$ such that $q_{i+1} \in \mathcal{C}(q_{i})$,
if $g_{i}(q_i) \in Y(q_{i+1})$ then $g_{i+1}(q_{i+1})=g_{i}(q_i)$.
\end{defnt}
\begin{remark}
Intuitively, a sequence is output nested if every child is associated with the same output as its parent
whenever that output is feasible for the child.
\end{remark}
We can now prove the following:
\begin{prop}
\label{Prop:MuHierarchy}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
and two DFMs $M_i$ and $M_{i+1}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}, for some $i \geq 1$.
Consider the interconnection of $P$, $M_i$ and $M_{i+1}$ as shown in Figure \ref{Fig:PMiInterconnection}.
Let $w_i(t) = \beta (\tilde{y}_i(t),y(t))$ and
$w_{i+1}(t) = \beta (\tilde{y}_{i+1}(t),y(t))$
for $\beta$ defined in Table \ref{Table:BetaDefinition},
and consider a flat, positive definite function $\mu_{\Delta}:\mathcal{W} \rightarrow \mathbb{R}_+$.
Assume that the sequence $\{ M_i \}_{i=1}^{\infty}$ is output nested.
For any choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and initial state $x(0) \in \mathbb{R}^n$
of $P$, we have
\begin{displaymath}
\mu_{\Delta} (w_{i+1}(t)) \leq \mu_{\Delta} (w_i(t)),
\end{displaymath}
for all $t \geq 0$.
\end{prop}
\begin{proof}
Fix $i$.
Pick a choice of $\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$ and an initial state $x(0) \in \mathbb{R}^n$.
Let $q_i(t)$ and $q_{i+1}(t)$ denote the states of $M_i$ and $M_{i+1}$,
respectively, at time $t$.
If $g_i(q_i(t)) \in Y(q_{i+1}(t))$,
we have $\tilde{y}_{i+1}(t) = g_{i+1} (q_{i+1}(t)) = g_{i} (q_i(t))= \tilde{y}_i(t)$
since $\{ M_i \}_{i=1}^{\infty}$ is output nested.
Thus $w_{i+1}(t) = w_{i}(t)$,
and $\mu_{\Delta} (w_{i+1}(t)) = \mu_{\Delta} (w_i(t))$.
On the other hand,
if $g_i(q_i(t)) \notin Y(q_{i+1}(t))$,
we have $y(t) \neq \tilde{y}_i(t)$ since
$y(t) \in Y(q_{i+1}(t))$ by Proposition \ref{Prop:Yinclusion}.
It follows that $w_i(t) \neq 0$ and $\mu_{\Delta}(w_i(t)) =\alpha$,
the unique positive number in the range of $\mu_{\Delta}$.
Meanwhile, $w_{i+1}(t)$ may or may not be zero;
in either case the inequality
$\mu_{\Delta} (w_{i+1}(t)) \leq \mu_{\Delta} (w_i(t))$
holds, since $\mu_{\Delta}$ is flat and positive definite and therefore never takes a value larger than $\alpha$.
It remains to note that the argument is independent of the choice of
$\mathbf{u} \in \mathcal{U}^{\mathbb{Z}_+}$, $x(0) \in \mathbb{R}^n$, and $i$.
\end{proof}
\color{black}
We are now ready to state and prove the main result in this Section:
\begin{lemma}
\label{Lemma:ConditionC}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
a disturbance alphabet $\mathcal{W} = \{0,\hdots,p-1 \}$ where $p=|\mathcal{Y}|$,
$\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$ defined as in Table \ref{Table:BetaDefinition},
a flat, positive definite function $\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$,
and a sequence of DFMs $\{M_i\}_{i=1}^{\infty}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}.
Assume that $\{M_i\}_{i=1}^{\infty}$ is output nested.
For any $i \geq 1$,
the gains of $\Delta_i$ and $\Delta_{i+1}$
satisfy $\gamma_{i} \geq \gamma_{i+1}$.
\end{lemma}
\begin{proof}
Fix $i$, and let $\gamma_i$ be the gain of $\Delta_i$.
Pick a choice of $(\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \in P$,
and consider the setup shown in Figure \ref{Fig:PMiInterconnection}.
Let $(\tilde{\mathbf{y}}_i,\hat{\mathbf{v}}_i)$ and $(\tilde{\mathbf{y}}_{i+1},\hat{\mathbf{v}}_{i+1})$
be the unique outputs of $M_i$ and $M_{i+1}$, respectively,
in response to input $(\mathbf{u}_o,\mathbf{y_o})$.
Let $w_j(t) = \beta(\tilde{y}_j(t), y_o(t))$ for $j=i,i+1$.
It follows from Proposition \ref{Prop:MuHierarchy} that
\begin{eqnarray*}
\mu_{\Delta}(w_{i+1}(t)) \leq \mu_{\Delta} (w_{i}(t)), \textrm{ } \forall t & \Leftrightarrow &
- \mu_{\Delta}(w_{i+1}(t)) \geq -\mu_{\Delta} (w_{i}(t)), \textrm{ } \forall t \\
& \Leftrightarrow &
\gamma_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w_{i+1}(t))
\geq \gamma_i \rho_{\Delta}(u_o(t)) -\mu_{\Delta} (w_{i}(t)),
\textrm{ } \forall t \\
& \Rightarrow &
\sum_{t=0}^{T }\gamma_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w_{i+1}(t))
\geq
\sum_{t=0}^{T} \gamma_i \rho_{\Delta}(u_o(t)) -\mu_{\Delta} (w_{i}(t)),
\textrm{ } \forall T \\
& \Rightarrow &
\sum_{t=0}^{T }\gamma_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w_{i+1}(t))
\geq
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma_i \rho_{\Delta}(u_o(t)) -\mu_{\Delta} (w_{i}(t)),
\textrm{ } \forall T \\
& \Rightarrow &
\inf_{T \geq 0} \sum_{t=0}^{T }\gamma_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w_{i+1}(t))
\geq
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma_i \rho_{\Delta}(u_o(t)) -\mu_{\Delta} (w_{i}(t)).
\end{eqnarray*}
Letting $\tilde{\gamma}_{i+1} = \inf \gamma$ such that
\begin{displaymath}
\inf_{T \geq 0} \sum_{t=0}^{T }\gamma \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w_{i+1}(t)) > -\infty,
\end{displaymath}
we have $\tilde{\gamma}_{i+1} \leq \gamma_i$.
Since this argument holds for any choice of $(\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \in P$,
we have
\begin{displaymath}
\gamma_{i+1} = \inf \{ \tilde{\gamma}_{i+1} \} \leq \gamma_i,
\end{displaymath}
where the `inf' is understood to be taken over all possible choices of feasible signals of $P$.
\end{proof}
\color{black}
\subsection{Ensuring Finite Error Gain}
\label{SSec:FiniteGain}
Note that Lemma \ref{Lemma:ConditionC},
while effectively establishing a hierarchy of approximations,
does not address the question: When is $\gamma_{i}$ finite?
A straightforward way to guarantee that is to require $\rho_{\Delta}(z) >0$ for all $z$.
While this may be meaningful in a setup where we have no preference for specific choices of control inputs
(since $z=u$ in our proposed structure),
this may be too restrictive in general,
particularly when we wish to retain the ability to penalize certain inputs.
In this Section, we first propose a tractable
approach for establishing an upper bound for the approximation error:
The idea is to verify instead that an appropriately constructed DFM satisfies a suitably defined gain condition.
We then use this approach as the basis for deriving a readily verifiable
sufficient condition for the gain to be finite.
We begin by associating with each approximate model $M_i$
two new DFMs:
\begin{defnt}
\label{Def:DFMExtension}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
a disturbance alphabet $\mathcal{W} = \{0,\hdots,p-1 \}$ where $p=|\mathcal{Y}|$,
$\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$ defined as in Table \ref{Table:BetaDefinition},
a positive definite function $\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$,
and a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}, for some $i \geq 1$.
The e-extension of $M_i$, denoted by $M_i^{e}$, is a new DFM,
$M_i^e \subset (\mathcal{U} \times \mathcal{Y})^{\mathbb{Z}_+} \times (\mathcal{Y \times \hat{\mathcal{V}}}_i \times \mathbb{R}_+)^{\mathbb{Z}_+}$,
obtained from $M_i$ by
introducing one additional output $e : \mathcal{Q}_i \rightarrow \mathbb{R}_+$
defined by
\begin{displaymath}
e(q_i) = \left\{ \begin{array}{ll}
0 & \textrm{ if } q_i=q_{\emptyset}\\
\displaystyle \max_{y_1,y_2 \in Y(q_i)} \mu_{\Delta} (\beta(y_1,y_2)) & \textrm{ otherwise}
\end{array} \right. .
\end{displaymath}
\end{defnt}
\begin{remark}
It follows from Definition \ref{Def:DFMExtension} that when $q_i \neq q_{\emptyset}$,
$e(q_i)=0 \Leftrightarrow |Y(q_i)|=1$, since $\mu_{\Delta}$ is positive definite and $\beta(y,y)=0$.
\end{remark}
\begin{defnt}
\label{Def:ZeroReduction}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}, for some $i \geq 1$,
and a choice $\rho_{\Delta}: \mathcal{U} \rightarrow \mathbb{R}_+$.
Let $\overline{\mathcal{U}} = \{ u\in \mathcal{U} | \rho_{\Delta}(u)=0 \}$.
The 0-reduction of $M_i$, denoted by $M_i^{0}$, is a new DFM,
$M_i^0 \subset (\overline{\mathcal{U}} \times \mathcal{Y})^{\mathbb{Z}_+} \times (\mathcal{Y \times \hat{\mathcal{V}}}_i \times \mathbb{R}_+)^{\mathbb{Z}_+}$,
obtained from $M_i^e$, the e-extension of $M_i$,
by restricting the first input of $M_i^e$ to $\overline{\mathcal{U}}$.
\end{defnt}
\begin{remark}
It follows from Definition \ref{Def:ZeroReduction} that a state $q_i$ of $M_i$,
and thus also of $M_i^e$,
$q_i=(y_1,\hdots,y_j,u_1,\hdots,u_j)$ for some $j \in \{0,\hdots,i\}$,
is a state of $M_i^0$ iff $u_k \in \overline{\mathcal{U}}$ for {\bf all} $k \in \{1,\hdots,j\}$.
The number of states of $M_i^0$ can thus be significantly lower than that of $M_i$ and $M_i^e$.
Likewise, the number of state transitions can be significantly lower.
\end{remark}
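To make the remark quantitative, here is a crude upper bound (it ignores which strings are actually feasible, so it is only indicative): a depth-$j$ state is a string $(y_1,\hdots,y_j,u_1,\hdots,u_j)$, so, besides the two special states $q_o$ and $q_{\emptyset}$, $M_i^e$ has at most
\begin{displaymath}
\sum_{j=0}^{i}\left(|\mathcal{Y}|\,|\mathcal{U}|\right)^j
\end{displaymath}
states, while $M_i^0$ retains at most $\sum_{j=0}^{i}\left(|\mathcal{Y}|\,|\overline{\mathcal{U}}|\right)^j$ of them; the ratio of the dominant terms, $\left(|\overline{\mathcal{U}}|/|\mathcal{U}|\right)^i$, shrinks geometrically with the depth $i$.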
We are now ready to present an approach for verifying an upper bound for $\gamma_i$:
\begin{lemma}
\label{Lemma:GammaUpperBound}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
a disturbance alphabet $\mathcal{W} = \{0,\hdots,p-1 \}$ where $p=|\mathcal{Y}|$,
$\rho_{\Delta}: \mathcal{U} \rightarrow \mathbb{R}_+$,
positive definite $\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$,
and a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}, for some $i \geq 1$.
Let $\gamma_i$ be the gain of the corresponding error system $\Delta_i$
shown in Figure \ref{Fig:Structure} with $\beta: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathcal{W}$
defined as in Table \ref{Table:BetaDefinition}.
Let $\hat{\gamma}_{i}$ be the infimum of $\gamma$ such that the e-extension of $M_i$, $M_i^e$, satisfies
\begin{equation}
\label{Eq:GainBound}
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma \rho_{\Delta} (u(t)) - e_i(t) > -\infty.
\end{equation}
We have $\gamma_i \leq \hat{\gamma}_i$.
\end{lemma}
\begin{proof}
Assume that $M_i^e$ satisfies (\ref{Eq:GainBound}).
Pick a choice of $(\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \in P$ and
consider the interconnection of $P$ and $M_i$ shown in Figure \ref{Fig:PMiInterconnection}.
Let $x(t)$ and $q_i(t)$ be the states of $P$ and $M_i$, respectively, at time $t$,
and let $e_i(t)$ be the output of $M_i^e$ for input $(\mathbf{u_o},\mathbf{y}_o)$.
Note that the state of $M_i^e$ at time $t$ is also $q_i(t)$.
If $|Y(q_i(t))| =1$, we have $e_i(t)=0$ by definition.
It also follows from Proposition \ref{Prop:Yinclusion}
and the fact that $Y(q_i(t))$ is a singleton
that $y(t) = \tilde{y}_i(t)$,
and thus $w(t) =0$ by the definition of $\beta$.
We thus have $e_i(t) = \mu_{\Delta}(w(t))$.
When $|Y(q_i(t))| >1$, we have
$\displaystyle e_i(t) = \max_{y_1,y_2 \in Y(q_i(t))} \mu_{\Delta}(\beta(y_1,y_2)) \geq \mu_{\Delta} (w(t))$,
where the inequality again follows from Proposition \ref{Prop:Yinclusion}.
It thus follows that $e_i(t) \geq \mu_{\Delta}(w(t))$ for all $t \geq 0$, and we can now write
\begin{eqnarray*}
\mu_{\Delta}(w(t)) \leq e_i(t), \textrm{ } \forall t & \Leftrightarrow &
- \mu_{\Delta}(w(t)) \geq - e_i(t), \textrm{ } \forall t \\
& \Leftrightarrow &
\hat{\gamma}_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w(t))
\geq \hat{\gamma}_i \rho_{\Delta}(u_o(t)) -e_i(t),
\textrm{ } \forall t \\
& \Rightarrow &
\sum_{t=0}^{T } \hat{\gamma}_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w(t))
\geq
\sum_{t=0}^{T} \hat{\gamma}_i \rho_{\Delta}(u_o(t)) - e_i(t),
\textrm{ } \forall T \\
& \Rightarrow &
\sum_{t=0}^{T }\hat{\gamma}_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w(t))
\geq
\inf_{T \geq 0} \sum_{t=0}^{T} \hat{\gamma}_i \rho_{\Delta}(u_o(t)) - e_i(t),
\textrm{ } \forall T \\
& \Rightarrow &
\inf_{T \geq 0} \sum_{t=0}^{T }\hat{\gamma}_i \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w(t))
\geq
\inf_{T \geq 0} \sum_{t=0}^{T} \hat{\gamma}_i \rho_{\Delta}(u_o(t)) -e_i(t).
\end{eqnarray*}
Letting $\tilde{\gamma}_{i} = \inf \gamma$ such that
\begin{displaymath}
\inf_{T \geq 0} \sum_{t=0}^{T }\gamma \rho_{\Delta}(u_o(t)) - \mu_{\Delta}(w(t)) > -\infty,
\end{displaymath}
we have $\tilde{\gamma}_{i} \leq \hat{\gamma}_i$.
Since this argument holds for any choice of $(\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \in P$,
we have
\begin{displaymath}
\gamma_{i} = \inf \{ \tilde{\gamma}_{i} \} \leq \hat{\gamma}_i,
\end{displaymath}
where the `inf' is understood to be taken over all possible choices of feasible signals of $P$.
\end{proof}
Lemma \ref{Lemma:GammaUpperBound} essentially establishes an upper bound for the
gain $\gamma_i$ of $\Delta_i$, verified by checking that
$M_i^e$ satisfies a suitably defined gain condition.
Verifying that a DFM satisfies a gain condition can be
systematically and efficiently done:
Readers are referred to \cite{JOUR:TaMeDa2008} for the details.
Note that in practice, this approach is typically used for computing an upper bound
to be used in lieu of the gain for control synthesis,
as the problem of computing the gain of $\Delta_i$ exactly is
difficult, if not intractable, in general.
Note that to ensure that the gain $\gamma_i$ is finite,
it suffices to ensure that its upper bound $\hat{\gamma}_i$ established using the approach in
Lemma \ref{Lemma:GammaUpperBound} is finite.
We can take this a step further,
by proposing a more refined sufficient condition expressed in terms of the 0-reduction of $M_i$,
and that requires significantly less computational effort to verify:
\begin{lemma}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
$\rho_{\Delta}: \mathcal{U} \rightarrow \mathbb{R}_+$,
a positive definite function $\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$,
and a DFM $M_i$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}, for some $i \geq 1$.
Let $\gamma_i$ be the gain of the corresponding error system $\Delta_i$
shown in Figure \ref{Fig:Structure}.
Let $M_i^0$ be the 0-reduction of $M_i$.
If $M_i^0$ satisfies (\ref{Eq:GainBound})
for some finite $\gamma$, then $\gamma_i$ is finite.
\end{lemma}
\begin{proof}
Construct a weighted graph corresponding to $M_i^e$ by associating with every state transition of $M_i^e$
a cost, namely `$\gamma \rho_{\Delta}(u) -e_i$' defined by the input $u$ that drives the transition and
the output $e_i$ associated with the beginning state of the transition.
$M_i^e$ satisfies (\ref{Eq:GainBound}) for a given $\gamma$ iff every cycle in the corresponding weighted graph
has non-negative total cost; the proof of this statement is omitted for brevity,
and readers are referred to \cite{JOUR:TaMeDa2008} for the details.
In particular, $\hat{\gamma}_i$, the infimum of $\gamma$ such that (\ref{Eq:GainBound}) is satisfied,
is infinite iff there exists a cycle in $M_i^e$, driven {\bf entirely} by inputs in $\overline{\mathcal{U}}$,
and such that $e_i \neq 0$ for at least one state along the cycle:
such a cycle has total cost $-\sum e_i < 0$ regardless of $\gamma$,
whereas any other cycle contains at least one input with $\rho_{\Delta}(u)>0$ and thus has non-negative cost once $\gamma$ is sufficiently large.
Thus, it suffices to verify that $M_i^0$ satisfies (\ref{Eq:GainBound}) for some finite $\gamma$ to ensure that
$\hat{\gamma}_i < \infty$, from which we can deduce that $\gamma_i$ is finite by Lemma \ref{Lemma:GammaUpperBound}.
\end{proof}
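To make the cycle test concrete, the following is a minimal illustrative sketch in Python; it is {\em not} the algorithm of \cite{JOUR:TaMeDa2008}, and the data format, a list of transitions \texttt{(src, dst, rho\_u, e\_src)} holding $\rho_{\Delta}(u)$ for the driving input and the output $e$ of the source state, is an assumption made purely for illustration. It detects negative-cost cycles with a Bellman--Ford pass and bisects over $\gamma$ to approximate the bound $\hat{\gamma}_i$ of Lemma~\ref{Lemma:GammaUpperBound}.
\begin{verbatim}
from math import inf

def has_negative_cycle(n_states, transitions, gamma):
    """Bellman-Ford test with edge cost gamma*rho_u - e_src.

    Returns True iff some cycle has negative total cost, i.e. the
    gain condition fails for this gamma."""
    dist = [0.0] * n_states          # equivalent to a virtual source with 0-cost edges
    for _ in range(n_states):
        updated = False
        for src, dst, rho_u, e_src in transitions:
            cost = gamma * rho_u - e_src
            if dist[src] + cost < dist[dst] - 1e-12:
                dist[dst] = dist[src] + cost
                updated = True
        if not updated:
            return False             # converged: every cycle is non-negative
    return True                      # still relaxing after n_states passes

def gain_upper_bound(n_states, transitions, gamma_max=1e6, tol=1e-6):
    """Bisection for the smallest gamma with no negative cycle (inf if none)."""
    if has_negative_cycle(n_states, transitions, gamma_max):
        return inf                   # e.g. a cycle of zero-cost inputs with e != 0
    lo, hi = 0.0, gamma_max          # feasibility is monotone in gamma (rho_u >= 0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_negative_cycle(n_states, transitions, mid):
            lo = mid
        else:
            hi = mid
    return hi
\end{verbatim}
Restricting the transition list to inputs in $\overline{\mathcal{U}}$ and checking any single finite $\gamma$ implements the sufficient condition of the lemma above.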
We conclude this section by proving that the gain {\it bounds} established in Lemma \ref{Lemma:GammaUpperBound}
satisfy the hierarchy required in condition c) of Definition \ref{Def:DFMApproximation}.
\begin{lemma}
\label{Lemma:ConditionCbound}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
a disturbance alphabet $\mathcal{W} = \{0,\hdots,p-1 \}$ where $p=|\mathcal{Y}|$,
$\rho_{\Delta}: \mathcal{U} \rightarrow \mathbb{R}_+$,
positive definite $\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$,
and a sequence of DFMs $\{M_i\}_{i=1}^{\infty}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails}.
Let $\hat{\gamma}_{i}$ be the infimum of $\gamma$ such that the e-extension of $M_i$, $M_i^e$, satisfies
(\ref{Eq:GainBound}).
We have $\hat{\gamma}_i \geq \hat{\gamma}_{i+1}$.
\end{lemma}
\begin{proof}
Fix $i$.
Pick a choice of $(\mathbf{u_o},(\mathbf{y_o},\mathbf{v})) \in P$ and
consider the interconnection of $P$, $M_i$ and $M_{i+1}$ as shown in Figure \ref{Fig:PMiInterconnection}.
Let $q_i(t)$ and $q_{i+1}(t)$ be the states of $M_{i}$ and $M_{i+1}$, respectively, at time $t$,
and let $e_i(t)$ and $e_{i+1}(t)$ be the outputs of the corresponding e-extensions $M_i^e$
and $M_{i+1}^e$, respectively, for input $(\mathbf{u_o},\mathbf{y}_o)$.
By Proposition \ref{Prop:YHierarchy},
we have $Y(q_{i+1}(t)) \subseteq Y(q_i(t))$, for all $t \geq 0$.
Thus we have for every $t \geq 0$:
\begin{displaymath}
e_{i+1}(t) =
\max_{y_1,y_2 \in Y(q_{i+1}(t))} \mu_{\Delta} (\beta(y_1,y_2))
\leq
\max_{y_1,y_2 \in Y(q_{i}(t))} \mu_{\Delta} (\beta(y_1,y_2))
= e_{i}(t).
\end{displaymath}
We can now write for any $\gamma \geq 0$
\begin{eqnarray*}
-e_i(t) \leq -e_{i+1}(t), \textrm{ } \forall t
& \Leftrightarrow&
\gamma \rho_{\Delta}(u_o(t)) - e_{i+1}(t)
\geq \gamma \rho_{\Delta}(u_o(t)) - e_{i}(t),
\textrm{ } \forall t \\
& \Rightarrow &
\sum_{t=0}^{T } \gamma \rho_{\Delta}(u_o(t)) - e_{i+1}(t)
\geq
\sum_{t=0}^{T} \gamma \rho_{\Delta}(u_o(t)) - e_{i}(t),
\textrm{ } \forall T \\
& \Rightarrow &
\sum_{t=0}^{T }\gamma \rho_{\Delta}(u_o(t)) - e_{i+1}(t)
\geq
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma \rho_{\Delta}(u_o(t)) - e_{i}(t),
\textrm{ } \forall T \\
& \Rightarrow &
\inf_{T \geq 0} \sum_{t=0}^{T }\gamma \rho_{\Delta}(u_o(t)) - e_{i+1}(t)
\geq
\inf_{T \geq 0} \sum_{t=0}^{T} \gamma \rho_{\Delta}(u_o(t)) - e_{i}(t).
\end{eqnarray*}
Since this chain of inequalities holds for every admissible input sequence, any $\gamma$ for which $M_i^e$ satisfies (\ref{Eq:GainBound}) also works for $M_{i+1}^e$;
taking infima over $\gamma$ gives $\hat{\gamma}_{i+1} \leq \hat{\gamma}_i$.
\end{proof}
Note that Lemma \ref{Lemma:ConditionCbound} does not require the additional assumptions
(output nested $\{M_i\}_{i=1}^{\infty}$ and flat $\mu_{\Delta}$) that Lemma
\ref{Lemma:ConditionC} requires to hold.
That is because the gain bounds are inherently conservative,
effectively considering a `worst case' scenario.
\section{Semi-Completeness of the Construct}
\label{Sec:Completeness}
In this Section, we prove one additional property of the given construct:
Intuitively, we show that if a deterministic finite state machine exists that can accurately predict
the sensor output of a plant after some initial transient,
then our construct recovers it.
While the resulting
DFM generated by our construct is not expected to be minimal
(due to the inherent redundancy in this description, see the discussion in Section
\ref{SSec:ConstructionInspiration}),
this property suggests that our construct is well-suited for addressing analytical questions
about convergence of the approximate models to the original plant.
\begin{thm}
Consider a plant $P$ as in (\ref{eq:Plant}),
a performance objective as in (\ref{eq:ObjectiveP}),
a positive definite choice of $\mu_{\Delta}: \mathcal{W} \rightarrow \mathbb{R}_+$,
and a sequence $\{ M_i \}_{i=1}^{\infty}$ constructed following the procedure given in
Section \ref{SSec:ConstructionDetails},
with $\{ \gamma_i\}_{i=1}^{\infty}$ denoting the gains of the corresponding approximation errors $\{ \Delta_i \}_{i=1}^{\infty}$
shown in Figure \ref{Fig:Structure}.
Assume there exists a DFM $M$ with fixed initial condition,
such that the corresponding $\Delta$ obtained by interconnecting $P$ and $M$ as in Figure
\ref{Fig:Structure} has gain $\gamma=0$.
Then $\gamma_{i^*}=0$ for some index $i^*$.
Moreover, $\gamma_i =0$ for all $i \geq i^*$.
\end{thm}
\begin{proof}
Assume a DFM $M$ with the stated properties exists,
and let $w(t)$ be the output of the system $\Delta$ constructed by interconnecting
$P$ and $M$ as shown in Figure \ref{Fig:Structure}.
By assumption, we have
\begin{displaymath}
\inf_{T \geq 0} \sum_{t=0}^{T} 0 \cdot \rho_{\Delta}(u(t)) - \mu_{\Delta}(w(t)) > -\infty
\Leftrightarrow \sup_{T \geq 0} \sum_{t=0}^{T} \mu_{\Delta}(w(t)) < \infty.
\end{displaymath}
Since $\mathcal{W}$ is finite, the set $\mu_{\Delta}(\mathcal{W})$ is also finite, as is the state set of $M$;
in particular, the nonzero values of $\mu_{\Delta}$ are bounded away from zero, so the bounded partial sums can contain only finitely many nonzero terms.
Thus there must exist a time $T^*$ such that $\mu_{\Delta}(w(t))=0$ for all $t \geq T^*$,
or equivalently $w(t)=0$ for all $t \geq T^*$ (by the positive definiteness of $\mu_{\Delta}$).
Now let $i^*=T^*$,
and consider the corresponding DFM $M_{i^*}$ in the constructed sequence.
We claim that $|Y(q_{i^*})|=1$ for every $q_{i^*} \in \mathcal{Q}_{i^*,F}$.
The proof is by contradiction: Indeed,
suppose that $|Y(q_{i^*})| >1$ for some $q_{i^*}=(y_1,\hdots,y_{i^*},u_1,\hdots,u_{i^*})' \in \mathcal{Q}_{i^*,F}$.
Thus, there exists an input sequence, namely $u(0)=u_1$, $u(1)=u_2$,$\hdots$, $u(T^*-1)=u_{i^*}$
with two corresponding feasible sensor outputs of $P$ given by
$y(0)=y_1$, $y(1)=y_2$,$\hdots$, $y(T^*-1)=y_{i^*}$, $y(T^*)=y'$
and
$y(0)=y_1$, $y(1)=y_2$,$\hdots$, $y(T^*-1)=y_{i^*}$, $y(T^*)=y''$
where $y' \neq y''$.
Since $M$ has a fixed initial condition, its response to this input sequence is fixed,
so $w(T^*) \neq 0$ for at least one of these two runs, contradicting the fact that $w(t)=0$
for all $t \geq T^*$.
Hence $|Y(q_{i^*})| =1$ for all $q_{i^*} \in \mathcal{Q}_{i^*,F}$.
It follows from this and Proposition \ref{Prop:Xinclusion} that $\tilde{y}_{i^*}(t) =y(t)$ for every $t \geq T^*$,
and thus $\gamma_{i^*}=0$.
Finally, when $i > i^*$, every $q_{i} \in \mathcal{Q}_{i,F}$ descends from some $q_{i^*} \in \mathcal{Q}_{i^*,F}$,
so $|Y(q_{i})| = 1$ by repeated application of Proposition \ref{Prop}, and the same argument gives $\gamma_{i}=0$.
\end{proof}
\color{black}
\section{Conclusions and Future Work}
\label{Sec:Conclusions}
In this paper,
we revisited the recently proposed notion of $\rho/\mu$ approximation and a corresponding
particular structure for the approximate models and approximation errors.
We generalized this structure for the non-binary alphabet setting,
and we showed that the cardinality of the minimal disturbance alphabet
that can be used in this setting equals that of the sensor output alphabet.
We then proposed a general, conceptual procedure for generating a sequence
of finite state machines for systems over finite alphabets that are not subject to exogenous inputs.
We explicitly derived conditions under which the resulting constructs,
used in conjunction with the generalized structure,
satisfy the three required properties of $\rho/\mu$ approximations,
and we proposed a readily verifiable sufficient condition to ensure that the gain of the
approximation error is finite.
We also showed that these constructs exhibit a `semi-completeness' property,
in the sense that if a finite state machine exists that can
perfectly predict the sensor output after some transient,
then our construct recovers it.
Our future work will focus on two directions:
\begin{enumerate}
\item At the theoretical level, it is clear from the construct that the problem of
approximation and that of state estimation under coarse sensing are closely intertwined.
We will thus focus on understanding the limitations of approximating certain classes of systems
using these constructs,
or at a more basic level,
the limitations of reconstructing the state under coarse sensing and finite memory constraints.
\item At the algorithmic level,
we will look into refining this procedure by developing a recursive version that
allocates available memory in a more selective manner,
in line with the dynamics of the system.
\end{enumerate}
\section{Acknowledgments}
This research was supported by NSF CAREER award 0954601
and AFOSR Young Investigator award FA9550-11-1-0118.
\end{document}
|
\begin{document}
\title{Optomechanical back-action evading measurement without parametric instability}
\author{Steven K. Steinke,$^{1,2}$ K. C. Schwab,$^2$ and Pierre Meystre$^1$}
\affiliation{$^1$B2 Institute, Department of Physics and College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA\\
$^2$Department of Applied Physics, Caltech, Pasadena, CA 91125, USA}
\date{\today}
\begin{abstract}
We review a scheme for performing a back-action evading measurement of one mechanical quadrature in an optomechanical setup. The experimental application of this scheme has been limited by parametric instabilities caused in general by a slight dependence of the mechanical frequency on the electromagnetic energy in the cavity. We find that a simple modification to the optical drive can effectively eliminate the parametric instability even at high intracavity power, allowing realistic devices to achieve sub-zero-point uncertainties in the measured quadrature.
\end{abstract}
\pacs{42.50.Dv, 03.65.Ta, 42.50.Lc, 42.50.Wk}
\maketitle
\section{Introduction}
In combination with back-action evading techniques described below, squeezed states offer the potential for important applications in optomechanical precision force sensing~\cite{Caves1}, in particular achieving sensitivity with resolution below the zero-point fluctuations $x_\mathrm{zp}$ of the mechanical component of the sensor. Unfortunately, due to the typical thermal environment of a mechanical system, it is difficult to produce direct squeezing below the zero-point level in a mechanical oscillator. It is similarly challenging to ascertain its position to that level of accuracy in a single strong measurement. One possible solution is to continuously observe the position in, e.g., an optomechanical setup~\cite{reviews}, using light transmitted through a Fabry-P\'erot cavity to probe the motion of an oscillating end-mirror or using an equivalent microwave circuit with mechanically modulated frequency. Such a measurement process gradually reduces $\Delta x$. However, this leads to a problem as the oscillator rotates through phase space. Because measuring $\hat x$ increases the uncertainty of $\hat p$ and every quarter cycle $\hat x$ and $\hat p$ are exchanged, the maximum squeezing is limited to roughly that which could be produced in only one fourth of the oscillator period, even if technical noise is neglected. Hence, the potential squeezing is limited by measurement back-action.
A back-action evading measurement of the position of a membrane in a cavity optomechanical system was proposed as early as 1980 in Refs.~\cite{Braginsky1, Braginsky2}, who suggested driving the resonator with an input field resonant with the cavity frequency $\omega_c$, but modulated at the mirror frequency $\Omega_m$. In Fourier space, the field has oscillatory components at $\omega_c \pm \Omega_m$; hence, this scheme is often known as two-tone back-action evasion. By modulating the light field frequency at $\Omega_m$ the measurement effectively turns on and off as the system oscillates. This protocol thereby measures neither position nor momentum individually, but rather, one of the mechanical {\em quadratures}. Thus, while the measurement back-action still exists, it feeds only into the unmeasured quadrature and evades the measured one, leaving in principle no lower limit on the uncertainty one quadrature might reach.
A detailed quantum mechanical analysis is presented in Ref.~\cite{Clerk1}, and a generalized version exploiting interference between additional tones is proposed in Ref.~\cite{Ono1}. The technique was originally explored for use in gravity wave detectors~\cite{Braginsky3}, including an early approximation to stroboscopic position measurement~\cite{Ono2}, but these experiments using massive oscillators were not designed to reach sensitivities near their zero-point levels. More recently, experiments have been carried out in the quantum regime. These have reached sensitivities of $4 x_\mathrm{zp}$~\cite{Hertzberg}, $2.5 x_\mathrm{zp}$~\cite{Suh13}, and $1.4x_\mathrm{zp}$~\cite{SuhTLS}, but, thus far, no experiment has achieved sub-zero-point position sensitivity.
While two-tone back-action evasion is an elegant solution, experimental reality intervenes to place a rather restrictive limit on such a scheme. Because the envelope of the driving field oscillates at the mechanical frequency, the intracavity power oscillates at twice the mechanical frequency. Through indirect effects, this leads to the frequency of the mechanical oscillator becoming slightly modulated, with the modulation oscillating at twice the natural frequency. Such a frequency modulation produces a parametric instability, which will drive the system and greatly reduce the amount of squeezing possible. To accomplish the back-action evading measurement, high optical power and low mechanical dissipation are both critical, yet these factors both worsen the parametric instability.
The variety of ways in which this instability can arise is staggering. In the experiments mentioned above, it originated respectively from non-linear terms in the optomechanical coupling~\cite{Hertzberg}, cavity heating causing a thermal shift in the frequency of the mechanical element~\cite{Suh13}, and two-level systems in surface oxides acting as non-linear dielectrics~\cite{SuhTLS}. Thus, the common limitation of these experiments is a connection between mechanical frequency and the cavity energy, $\delta\Omega_m\propto E$. In the two-tone scheme, this connection seems to lead inevitably to parametric instability.
However, it is in principle possible to work around this particular limitation. There are two critical features needed to avoid the parametric instability while performing the back-action evading measurement:
\begin{enumerate}
\item The power in the cavity must not oscillate at twice the mechanical frequency.
\item The probe light must couple only to a single quadrature of motion.
\end{enumerate}
The first of these requirements prevents the instability, as we will discuss in more detail in the next section. The latter requirement prevents measurement back-action from affecting the measured quadrature. Small deviations from this requirement will reduce the ultimate sensitivity of the measurement, but do not preclude sensitivities below the zero-point level, as shown in Ref.~\cite{Clerk1}.
We will verify in this paper that these two criteria may be pursued somewhat independently of each other. In the next section, we show that the ideal envelope for the intracavity field is a square wave, which satisfies the first criterion above. While experimental constraints prevent the realization of a perfect square wave, we show that it is sufficient to add a single additional drive tone. The rest of the paper is devoted to proving that the second criterion will also be satisfied. We review in Section III the optomechanical system used for the measurement, including a general optical driving term and outlining the main steps in extending the results of Ref.~\cite{Milburn1} for such a drive. Using reasonable simplifications, in Section IV we then derive a general master equation for the conditional evolution of the mechanics under continuous observation and show in Section V that, for the desired purpose, it can be reduced to the master equation of Ref.~\cite{Clerk1}, thereby completing the demonstration of the consistency of the two criteria. Finally, Section VI is a summary and outlook.
\section{Field Envelope}
We model a generic cavity optomechanical system as a single-mode optical or electrical resonator with resonant mode frequency $\omega_c$ whose radiation pressure drives an harmonically confined end-mirror or capacitor plate with natural oscillation frequency $\Omega_m$. The cavity is driven on-resonance but modulated with a field envelope $\alpha(t)$. In the microwave domain, this cavity field is often achieved through the application of multiple tones in order to avoid the low frequency phase noise of available sources.
We proceed by first asking what specific drive scheme can be used to avoid the mechanical parametric instability. To recap, this instability is due to a generic coupling arising through a variety of mechanisms between the mechanical frequency and the electromagnetic energy in the cavity which oscillates proportional to $|\alpha(t)|^2$. If only two tones are used to drive the system at $\omega_c\pm\Omega_m$, the cavity power then oscillates at $2\Omega_m$. Via device-dependent mechanisms, this induces a shift in the mechanical frequency,
\begin{equation}
\label{freqshiftproblem}
\Omega^\prime_m(t) = \Omega_m + \delta\Omega_m\cos2\Omega_mt.
\end{equation}
In a well-made device, the shift $\delta\Omega_m$ will be much less than $\Omega_m$, though the quality factor $Q$ will also be high. Hence the mechanical damping will be weak. The onset of parametric instability begins when the criterion for the fractional frequency shift
\begin{equation}
\label{minfreqshift}
\frac{\delta\Omega_m}{\Omega_m} > \frac{1}{Q}
\end{equation}
is satisfied~\cite{Xie1}. Similar parametric resonances occur if the mechanical frequency oscillates at subharmonics of $2\Omega_m$, but our scheme avoids these. Furthermore, the required fractional frequency shift is of order $Q^{-1/n}$ for the $n$th subharmonic, rendering all but the first instability irrelevant for our purposes. However, even in carefully engineered devices, the limit in Eq.~\eqref{minfreqshift} is reached at low pump power, reducing the maximum possible accuracy of the back-action evading measurement to above the zero-point level~\cite{SuhTLS}.
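As a rough numerical illustration (using the representative microwave parameters quoted in Section III, and assuming the mechanical quality factor is set by the thermal damping, $Q=\Omega_m/\gamma$), a device with $\Omega_m=2\pi\times3.7$~MHz and $\gamma=2\pi\times50$~Hz has $Q\approx7\times10^4$, so the threshold of Eq.~\eqref{minfreqshift} is crossed once
\begin{displaymath}
\delta\Omega_m\gtrsim\frac{\Omega_m}{Q}=\gamma\approx2\pi\times50~\mathrm{Hz},
\end{displaymath}
a fractional shift of only $\sim10^{-5}$; shifts of this size are indeed observed in practice~\cite{SuhTLS}.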
We will show in the following sections that the measurement of the membrane motion can be performed in such a way that it is dominated by the two tones at $\omega_c\pm\Omega_m$. Therefore, we are more or less free to add additional fields to the cavity, as long as they are not near those frequencies, without affecting the measurement-based squeezing. We can in that way cancel out the oscillations of the energy in the cavity at $2\Omega_m$. Of course, this will add oscillations at higher harmonics, but parametric resonance only occurs at {\em subharmonics} of $2\Omega_m$\cite{Xie1}.
Consider specifically a field envelope, i.e., the electromagnetic field in a frame rotating at $\omega_c$, with a single added drive tone at $3\Omega_m$,
\begin{equation}
\label{omega3}
\alpha(t)\propto\cos\Omega_m t + \mu e^{i(3\Omega_mt+\Phi)}.
\end{equation}
The energy $E$ in the cavity is proportional to $|\alpha|^2$,
\begin{equation}
\label{omega246}
E(t) \propto \frac{1}{2}+\mu^2+A_2\cos \left(2\Omega_mt+\Phi_2\right) +\mu\cos\left(4\Omega_mt+\Phi\right),
\end{equation}
with
\begin{eqnarray}
\label{a2phi2}
A_2 &=& \sqrt{\frac{1}{4}+\mu\cos\Phi+\mu^2},\nonumber\\
\Phi_2 &=&\arctan\frac{2\mu\sin\Phi}{1+2\mu\cos\Phi}.
\end{eqnarray}
Thus, by adding the additional $3\Omega_m$-detuned tone with $\mu=1/2$ and $\Phi=\pi$, the energy oscillating at $2\Omega_m$ is completely redirected via interference into the DC and $4\Omega_m$ terms. For small deviations from ideality, i.e., $\mu=1/2+\delta\mu$, $\Phi=\pi+\delta\Phi$, the relative amplitude of the energy oscillation at $2\Omega_m$ is
\begin{equation}
\label{nonidealcase}
A_2=\sqrt{(\delta\mu)^2+\frac{(\delta\Phi)^2}{4}},
\end{equation}
which still represents a significant reduction in the $2\Omega_m$ Fourier component. In the next section, we will discuss the drive needed to produce the cavity field of Eq.~\eqref{omega3}. If desired, one could continue to eliminate the higher harmonics of the energy oscillation by adding additional phase-locked sources detuned from the cavity by $\pm(2n+1)\Omega_m$, for increasing integer $n$. If we extend the series in Eq.~\eqref{omega3}, use tones of equal strength at each pair of red- and blue-detuned sidebands (i.e. cosines rather than complex exponentials) so that $\alpha(t)$ is real, and repeat the cancellation procedure of Eq.~\eqref{omega246} for these higher harmonics, we find that $\alpha(t)$ converges to a square wave envelope. Solving for such field envelopes requires finding roots of higher order polynomials for higher harmonics, so it must be done numerically. We plot several such solutions in Fig.~\ref{multiplot}. It is also intuitively easy to verify that the square wave has the desired properties: it is a function that oscillates at the same fundamental frequency as the mechanics, thereby permitting the measurement of a single quadrature, but, when squared, it is simply a constant. Thus, the energy in the cavity remains constant, and there is no resulting oscillatory mechanical frequency shift at any harmonic of the mechanical motion. In terms of the field itself rather than the envelope, this corresponds to a $\pi$ phase shift every half mechanical period.
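As a quick sanity check of the cancellation above (a numerical illustration only, not part of the derivation), the following short script evaluates the Fourier components of $|\alpha(t)|^2$ for the plain two-tone envelope and for the envelope of Eq.~\eqref{omega3} with $\mu=1/2$, $\Phi=\pi$; the frequency scale and sample count are arbitrary choices.
\begin{verbatim}
import numpy as np

Omega_m = 1.0                       # arbitrary units; only frequency ratios matter
t = np.linspace(0.0, 2*np.pi/Omega_m, 4096, endpoint=False)

def envelope(mu, Phi):
    # Field envelope of Eq. (omega3): two-tone drive plus one 3*Omega_m tone.
    return np.cos(Omega_m*t) + mu*np.exp(1j*(3*Omega_m*t + Phi))

def fourier_amp(signal, harmonic):
    # Magnitude of the e^{i*harmonic*Omega_m*t} component over one period.
    return abs(np.mean(signal*np.exp(-1j*harmonic*Omega_m*t)))

E_two_tone = np.abs(envelope(0.0, 0.0))**2      # plain two-tone drive
E_three    = np.abs(envelope(0.5, np.pi))**2    # with the extra tone

print(fourier_amp(E_two_tone, 2))   # ~0.25 : strong 2*Omega_m oscillation
print(fourier_amp(E_three, 2))      # ~1e-16: cancelled by interference
print(fourier_amp(E_three, 4))      # ~0.25 : pushed to the 4*Omega_m harmonic
\end{verbatim}
The $2\Omega_m$ component drops from $1/4$ to numerical zero, while a component of the same size appears at $4\Omega_m$, in agreement with Eqs.~\eqref{omega246} and \eqref{a2phi2}.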
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{multiplot.png}
\end{center}
\caption{\label{multiplot}The field envelope $\alpha(t)$ needed to cancel higher harmonic oscillations in the intracavity energy $|\alpha(t)|^2$, using (a) 1, (b) 2, (c) 4, and (d) 16 additional drive tones.}
\end{figure}
However, adding numerous phase-locked tones may be technically difficult, and so we examine the effectiveness of a single added $3\Omega_m$-detuned tone. The energy in the cavity, and hence the mechanical frequency, now has a Fourier component, aside from the DC term, at $4\Omega_m$. It is, hypothetically speaking, still possible to excite a parametric instability in this case; however, the required fractional frequency shift in the mechanics to excite a parametric instability is of order 1, i.e., the mechanical frequency would have to vary wildly during the course of the experiment~\cite{Xie1}. Such large frequency shifts are simply not seen in these devices; instead, the fractional shifts are of order less than a percent~\cite{Hertzberg,Suh13,SuhTLS}. To contrast, the fractional frequency shift required to excite a parametric instability is only $\frac{1}{Q}$ when the oscillations in frequency occur at $2\Omega_m$. A shift of this magnitude is seen experimentally, such as in Ref.~\cite{SuhTLS}, where the observed frequency shift of $\sim$ 50 Hz agrees quite well with the predicted onset of parametric instability. Based on these observations, we therefore find that a single additional tone is sufficient to eliminate the instability.
\section{Model}
Having now satisfied the first key criterion listed in the introduction we turn to the demonstration that, under realistic conditions, the probe light couples predominantly to a single quadrature of motion. Our starting point is the generic optomechanical Hamiltonian, see e.g. Ref.~\cite{Clerk1}
\begin{eqnarray}
\label{hamiltonian}
H &=& \hbar(\omega_c - G \hat x)\left [\hat a^\dagger \hat a - \langle \hat a^\dagger \hat a\rangle(t)\right ]+\hbar \Omega_m \hat c^\dagger \hat c \nonumber \\
&+&i\hbar\sqrt{\kappa}(b^*_\mathrm{in}(t)\hat a-b_\mathrm{in}(t)\hat a^\dag),
\end{eqnarray}
that describes the dynamics of a single-mode Fabry-P{\'e}rot resonator with an oscillating, harmonically bound end-mirror driven by radiation pressure. Here $\hat a$ is the annihilation operator of the cavity mode and $\kappa$ its decay rate, $b_\mathrm{in}$ is the amplitude of the (classical) driving field, and $\hat x = x_\mathrm{zp}(\hat c+\hat c^\dag)$ is the displacement of the mirror resulting from radiation pressure. $G$ is the single photon optomechanical coupling. Finally $x_\mathrm{zp}^2$ is the position variance of the membrane in its ground state.
We assume that the drive is resonant with the cavity but periodically modulated at the mechanical frequency,
\begin{eqnarray}
\label{bin}b_\mathrm{in}(t) &=& e^{-i\omega_c t}\sum_{\ell=-\infty}^{\infty}\beta_\ell e^{i\Omega_m \ell t}.
\end{eqnarray}
This differs from the scheme of Refs.~\cite{Braginsky1,Clerk1} in which only $\beta_{\pm 1} \ne 0$. We further assume that the mechanical element is thermally damped at a rate $\gamma$
due to its contact with a reservoir of temperature $T$. Thus its equilibrium thermal phonon occupancy is given
by the Bose-Einstein distribution,
\begin{equation}
\bar n = \frac{1}{\exp(\hbar\Omega_m / k_B T)-1}.
\end{equation}
However, we assume that $k_B T \ll \hbar\omega_c$ so that we can neglect thermal photons.
The effects of thermal coupling and the cavity decay are encapsulated by the master equation for the composite
density matrix $\rho$ of the oscillator-field system,
\begin{equation}
\label{master}
\frac{d \rho}{dt} = \frac{i}{\hbar}[\rho,H] + \frac{\kappa}{2}\mathcal{D}[\hat a]\rho
+\frac{\gamma}{2}\left ((\bar n+1)\mathcal{D}[\hat c]\rho+\bar n\mathcal{D}[\hat c^\dag]\rho\right ),
\end{equation}
where the dissipation super-operator $\mathcal{D}$ is defined as
\begin{equation}
\mathcal{D}[\hat o]\rho = 2\hat o\rho \hat o^\dag - \hat o^\dag \hat o\rho - \rho \hat o^\dag \hat o.
\end{equation}
We also work in the good cavity limit, specifically assuming a separation of scales
\begin{equation}
\label{separationofscales}
\gamma \ll \kappa \ll \Omega_m \ll \omega_c.
\end{equation}
The only one of the above inequalities that is not trivially satisfied in the typical experimental environment is $\kappa \ll \Omega_m$. In the optical domain, even cavities with very high finesse have $\kappa$ of order $\sim$ 10 MHz due to the high frequency of light. Working in the microwave regime eases this requirement somewhat; typical parameters in a microwave device are $\omega_c = 2\pi\times5.3$~GHz, $\Omega_m=2\pi\times3.7$~MHz, $\kappa=2\pi\times260$~kHz, and $\gamma=2\pi\times50$~Hz~\cite{SuhTLS}.
We now simplify the master equation by moving into the rotating frame for both the optical mode and the mechanical oscillator, and then displacing $\hat a$ by a classical mean value $\alpha(t)$ -- ultimately, the field envelope of the previous section -- where the time dependence here is due to the modulated nature of the drive. Mathematically, this is equivalent to applying several unitary transformations to the density matrix in succession
\begin{equation}
\rho^\prime = WVU\rho U^\dag V^\dag W^\dag,
\end{equation}
where
\begin{eqnarray}
U&=&\exp(i\Omega_m \hat c^\dag \hat c t),\nonumber \\
V&=&\exp(i\omega_c \hat a^\dag \hat a t),\nonumber \\
W&=&\exp[\alpha(t)\hat a^\dag - \alpha^*(t)\hat a].
\end{eqnarray}
Though $W(t)$ does not typically commute with $W(t^\prime)$, this fact ultimately contributes only an irrelevant net global phase to the evolution. Identifying $\langle \hat a^\dag \hat a\rangle = |\alpha|^2$ and noting that in the rotating frame the mean intracavity field satisfies
\begin{equation}
d\alpha/dt+\kappa\alpha/2=\sqrt{\kappa}b_\mathrm{in}e^{i\omega_ct}.
\end{equation}
we find readily
\begin{eqnarray}
\label{meanalpha}\alpha(t)&=&\sum_{\ell=-\infty}^{\infty}\alpha_\ell e^{i\Omega_m\ell t},\nonumber \\
\alpha_\ell &=& \frac{\sqrt{\kappa}\beta_\ell}{i\Omega_m\ell+\kappa/2}.
\end{eqnarray}
We can combine these formulae with the results of the previous section to provide the exact form of the needed drive for the $3\Omega_m$-detuned tone. Specifically, if the first red and blue $\Omega_m$-detuned sidebands are pumped with amplitude and phase given by
\begin{equation}
\beta_{\pm 1} = \left (\frac{\kappa}{2}\pm i\Omega_m\right)B,
\end{equation}
then the third sideband should be pumped with amplitude and phase
\begin{equation}
\beta_3 = -\left (\frac{\kappa}{2}+3i\Omega_m\right)B.
\end{equation}
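As a consistency check (a direct substitution, not an additional assumption), inserting these amplitudes into Eq.~\eqref{meanalpha} gives
\begin{displaymath}
\alpha_{\pm1}=\frac{\sqrt{\kappa}\left(\kappa/2\pm i\Omega_m\right)B}{\pm i\Omega_m+\kappa/2}=\sqrt{\kappa}B,
\qquad
\alpha_{3}=-\frac{\sqrt{\kappa}\left(\kappa/2+3i\Omega_m\right)B}{3i\Omega_m+\kappa/2}=-\sqrt{\kappa}B,
\end{displaymath}
so that $\alpha(t)=2\sqrt{\kappa}B\left[\cos\Omega_m t+\tfrac{1}{2}e^{i(3\Omega_m t+\pi)}\right]$, which is precisely the envelope of Eq.~\eqref{omega3} with $\mu=1/2$ and $\Phi=\pi$.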
After the above unitary transformations and substitutions, the Hamiltonian governing the evolution of the system in the primed frame is
\begin{equation}
\label{hprime}
H^\prime = g\left(\hat ce^{-i\Omega_mt}+\hat c^\dag e^{i\Omega_mt}\right)
\left(\alpha(t) \hat a^\dag+\alpha^*(t) \hat a+\hat a^\dag \hat a\right),
\end{equation}
where $g = Gx_\mathrm{zp}$. The dissipative terms remain unchanged, except for the replacement of $\rho$ with
$\rho^\prime$.
In subsequent calculations, we neglect the quadratic $\hat a^\dag \hat a$ term in the Hamiltonian (\ref{hprime}), as it is of order unity in size. On the other hand, the linear terms are multiplied by the classical mean field, $\alpha(t)$, which is the square root of the mean intracavity photon number, typically $10^6$ or more for back-action evading experiments, so these will dominate the evolution dynamics.
\section{Optomechanical Master Equation}
We now turn to the issue of measurement of the system. Our derivation follows the approach of Ref.~\cite{Milburn1}, generalized to a local oscillator with a time-dependent amplitude or phase.
By using a homodyne detection scheme, it is possible to make a measurement of one quadrature of the cavity field. The output field from the cavity is
\begin{equation}
\hat b_\mathrm{\rm out} = b_\mathrm{\rm in} + \sqrt{\kappa}\hat a
\end{equation}
and the field reaching the detector is $B_\mathrm{\rm lo} + \hat b_{\rm out}$, where $B_{\rm lo}$ is the additional local oscillator. We can fold all the classical contributions together into a ``net'' local oscillator strength given in the rotating frame by
\begin{equation}
B_\mathrm{net} = (B_\mathrm{lo}+b_\mathrm{in})e^{i\omega_ct}/\sqrt{\kappa}-\alpha(t) \equiv B(t)e^{i\phi(t)}
\end{equation}
where $\phi$ is the relative phase between the local oscillator and the output field. Note that if $B_\mathrm{lo}$ is sufficiently large, the additional terms are negligible. For a detector of efficiency $\eta$ the detected photocurrent $I$ (i.e., the measurement record) is given by
\begin{equation}\label{meas}
Idt = \left(B^2+B\langle \hat a e^{-i\phi} + \hat a^\dag e^{i\phi}\rangle\right)\eta\kappa dt + B\sqrt{\eta\kappa}dW,
\end{equation}
where $W(t)$ is a Wiener process. That is, $\xi(t) = dW/dt$ is Gaussian white noise and $(dW)^2 = dt$. The dynamical effects of this measurement on the density matrix can be computed and yield the conditional stochastic master equation (SME)~\cite{Milburn1}
\begin{eqnarray}
\label{SME}
\frac{d \rho_c}{dt} &=& \frac{i}{\hbar}[\rho_c,H^\prime] + \frac{\kappa}{2}\mathcal{D}[\hat a]\rho_c \nonumber \\
&+&\frac{\gamma}{2}\left [(\bar n+1)\mathcal{D}[\hat c]\rho_c+\bar n\mathcal{D}[\hat c^\dag]\rho_c\right ] \\
&+& \left (\hat a\rho_c e^{-i\phi}+\rho_c \hat a^\dag e^{i\phi}-\langle \hat a e^{-i\phi}+\hat a^\dag e^{i\phi}\rangle\rho_c\right )\sqrt{\eta\kappa}\xi(t).\nonumber
\end{eqnarray}
\section{Reduced Master Equation And Measurement}
In order to proceed, we now adiabatically eliminate the light field. This can be achieved by working in the weak-coupling limit. Specifically, we assume that the optomechanical interaction strength, approximately given by $g|\alpha(t) \langle\hat c \rangle |$, is much less than the cavity decay rate $\kappa$. This has multifold advantages: First, we can accurately make the rotating wave approximation (RWA), thereby removing the explicit time dependence of the interaction Hamiltonian and simplifying it to the form given below in Eqs.~\eqref{hrwa1} and~\eqref{hrwa2}.
Second, we find rate equations for those components of the density matrix needed to trace out the light field and derive an effective master equation for the mechanical subsystem alone. Those terms not involved in the trace, i.e. the off-diagonal terms, are taken to adiabatically follow the other terms due to the dominance of the decay rate $\kappa$. Because similar calculations have been reported thoroughly elsewhere, including in Ref.~\cite{Milburn1}, we do not reproduce all intermediate details here.
Our first step is to make the RWA, after which we are left with the interaction
\begin{equation}
\label{hrwa1}H^\prime = g\left(\hat C_1^\dag \hat a+\hat C_1\hat a^\dag\right),
\end{equation}
where
\begin{equation}
\hat C_1 = \alpha_1\hat c+\alpha_{-1}\hat c^\dag
\end{equation}
and
\begin{equation}
[\hat C_1,\hat C_1^\dag] = |\alpha_1|^2-|\alpha_{-1}|^2.
\end{equation}
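For the reader's convenience, we sketch the rotating wave step leading to Eq.~\eqref{hrwa1}; it uses only the expansion \eqref{meanalpha}, with no additional assumptions. Inserting $\alpha(t)=\sum_\ell\alpha_\ell e^{i\Omega_m\ell t}$ into the linear part of Eq.~\eqref{hprime} produces terms proportional to $\hat c\,\hat a^\dag e^{i(\ell-1)\Omega_m t}$ and $\hat c^\dag\hat a^\dag e^{i(\ell+1)\Omega_m t}$ (plus Hermitian conjugates); in the regime $g|\alpha|\ll\kappa\ll\Omega_m$ every term with a surviving exponential averages out, leaving only $\ell=1$ paired with $\hat c$ and $\ell=-1$ paired with $\hat c^\dag$:
\begin{displaymath}
H^\prime \approx g\left(\alpha_1\hat c+\alpha_{-1}\hat c^\dag\right)\hat a^\dag+\mathrm{H.c.}
= g\left(\hat C_1^\dag\hat a+\hat C_1\hat a^\dag\right).
\end{displaymath}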
If the red and blue sidebands of the field detuned at $\pm\Omega_m$ are balanced in intensity, $\alpha_1 = Ae^{i\theta} = \alpha_{-1}^*$, with $A$ real, and $\hat C_1$ simplifies to a constant times the Hermitian mechanical quadrature operator
\begin{equation}
\hat Q = \frac{1}{\sqrt{2}}\left(e^{i\theta}\hat c+e^{-i\theta}\hat c^\dag\right).
\end{equation}
Namely,
\begin{equation}
\hat C_1 = A\sqrt{2}\hat Q,
\end{equation}
and the Hamiltonian ultimately reduces to,
\begin{equation}
\label{hrwa2}
H^\prime = gA\sqrt{2}\left(\hat a + \hat a^\dag\right)\hat{Q}.
\end{equation}
The next step is to define the small parameter
\begin{equation}
\epsilon = \frac{gA}{\kappa}.
\end{equation}
Because of the dominant effect of dissipation on the dynamics of the intracavity light field, it always remains near its equilibrium, absent the optomechanical interaction. In the displaced frame, this is the ground state. Therefore, we can expand the density matrix in the (displaced) optical Fock basis as
\begin{eqnarray}
\label{expansion}
\rho &=& \rho_{00}|0\rangle\langle 0| + \epsilon(\rho_{01}|0\rangle\langle 1|+\mathrm{H.c.})\nonumber \\
&+&\epsilon^2 \rho_{11}|1\rangle\langle 1|+\epsilon^2(\rho_{02}|0\rangle\langle 2|+\mathrm{H.c.})
+\mathcal{O}(\epsilon^3).
\end{eqnarray}
The adiabatic elimination proceeds by substituting equation \eqref{expansion} into the stochastic master equation~\eqref{SME}. We can then obtain a dimensionless version of the SME by rescaling time and the Wiener increment, $d\tau = \kappa dt, dw = \sqrt{\kappa}dW$. (Note that $dw/d\tau$ is still Gaussian white noise because $(dw)^2 = d\tau$.) Adiabatically eliminating the off-diagonal terms and truncating after order $\epsilon^2$ yields the relations
\begin{equation}
\label{offdiag}
\rho_{02} = 2i\rho_{01}\hat Q,\hspace{24pt}\rho_{01} = 2i\sqrt{2}\rho_{00}\hat Q,
\end{equation}
and the equations of motion for the diagonal terms are (again, to order $\epsilon^2$)
\begin{eqnarray}
\label{rho00}
&&d\rho_{00}= \frac{\gamma}{2\kappa}\left [(\bar n+1)\mathcal{D}[\hat c]\rho_{00}+\bar n\mathcal{D}[\hat c^\dag]\rho_{00}\right ]d\tau \nonumber\\
&&+\epsilon^2\left[i\sqrt{2}\left(\rho_{01} \hat Q-\hat Q\rho^\dag_{01}\right)+\rho_{11}\right ] d\tau\nonumber \\
&&+\left[\rho_{01}e^{i\phi}+\rho^\dag_{01}e^{-i\phi}-\rho_{00}\mathrm{tr} \left(\rho_{01}e^{i\phi}+\rho^\dag_{01}e^{-i\phi}\right)\right ]\epsilon dw,\nonumber \\
\label{rho11}
&&d\rho_{11}=\frac{\gamma}{2\kappa}\left [(\bar n+1)\mathcal{D}[\hat c]\rho_{11}+\bar n\mathcal{D}[\hat c^\dag]\rho_{11}\right ] d\tau \nonumber\\
&&-\left [ i\sqrt{2}\left(\hat Q\rho_{01}-\rho^\dag_{01}\hat Q\right)+\rho_{11}\right ]d\tau,
\end{eqnarray}
where the trace in Eq.~\eqref{rho00} is over the mechanical degree of freedom. Though $\rho_{11}$ is a small term (of order $\epsilon^2$), its inclusion simplifies the computation of the final master equation for the mechanics.
Tracing over the optical degree of freedom is equivalent to finding an expression for $d(\rho_{00}+\epsilon^2\rho_{11})$, which is done easily by substitution of Eq.~\eqref{offdiag} into Eqs.~\eqref{rho00}. After substituting back in $t$ and $W$, this yields the stochastic master equation for the conditional density matrix of the mechanical subsystem,
\begin{eqnarray}
\label{finalME}
\frac{d \rho_m}{dt} &=& -k[\hat Q,[\hat Q,\rho_m]] \nonumber \\
&+& i\sqrt{2\eta k}(e^{i\phi}\rho_m \hat Q-e^{-i\phi}\hat Q\rho_m - 2i\sin\phi\langle \hat Q\rangle\rho_m)\xi(t)\nonumber \\
&+&\frac{\gamma}{2}((\bar n+1)\mathcal{D}[\hat c]\rho_m+\bar n\mathcal{D}[\hat c^\dag]\rho_m),
\end{eqnarray}
where $k = 4g^2 A^2/\kappa$.
The relative phase of $\alpha_1$ and $\alpha_{-1}$ defines which quadrature of motion is measured, and without loss of generality we can take $\theta = 0$ so that $\hat Q = \hat X$. We represent the orthogonal quadrature by $\hat Y$. On the other hand, Eq.~\eqref{finalME} makes explicit the importance of the relative phase $\phi$ between the local oscillator used for detection and the classical driving field. If they are $\pi/2$ out of phase, then maximum measurement strength is achieved. By contrast, if they are in phase ($\phi=0$), then the measurement acts as a stochastic unitary drive of the system, i.e., a random force displacing the mechanics. In this case, the light quadrature being measured is $\hat a+\hat a^\dag$, which amounts to replacing that term in the Hamiltonian with a fluctuating, semi-classical value.
We now take $\phi=\pi/2$. This step is included for completeness, as it allows us to reproduce the master equation and then the key results on conditional squeezing of Ref.~\cite{Clerk1},
\begin{eqnarray}
\label{finalfinalME}
\frac{d \rho_m}{dt} &=& -k[\hat X,[\hat X,\rho_m]] - \sqrt{2\eta k}(\rho_m \hat X+\hat X\rho_m -2\langle \hat X \rangle\rho_m)\xi(t)\nonumber \\
&+&
\frac{\gamma}{2}\left ((\bar n+1)\mathcal{D}[\hat c]\rho_m+\bar n\mathcal{D}[\hat c^\dag]\rho_m\right ).
\end{eqnarray}
The equations for the {\em conditioned} mean values, variances, and covariance ($C=\langle \hat X\hat Y+\hat Y\hat X\rangle/2-\langle \hat X\rangle\langle \hat Y\rangle$) are readily derived under a Gaussian state ansatz:
\begin{eqnarray}
\frac{d}{dt}\langle \hat X\rangle&=&-\frac{\gamma}{2}\langle \hat X\rangle-\sqrt{K}V_X\xi,\\
\frac{d}{dt}\langle \hat Y\rangle&=&-\frac{\gamma}{2}\langle \hat Y\rangle-\sqrt{K}C\xi,\\
\frac{dV_X}{dt} &=& -K V_X^2 -\gamma\left(V_X-\bar n-\frac{1}{2}\right),\\
\frac{dV_Y}{dt} &=& -K C^2 + 2k - \gamma\left(V_Y-\bar n-\frac{1}{2}\right),\\
\frac{dC}{dt} &=& -K V_X C -\gamma C,
\end{eqnarray}
where we have introduced the scaled measurement strength $K = 8\eta k$. These variances approach steady state values. Of particular interest is the variance of $\hat X$
\begin{eqnarray}
V_X&=&\sqrt{\frac{\gamma}{2K}\left(2 \bar n + 1+\frac{\gamma}{2K}\right)}
-\frac{\gamma}{2K},
\end{eqnarray}
which approaches 0 for sufficiently large $K$ or small $\gamma$; the uncertainty relations are maintained by a concomitant increase in $V_Y$. As mentioned above, these results agree exactly with those of Ref.~\cite{Clerk1}, despite the modification to the optical drive. The physical reason these distinct schemes produce the same results is fairly straightforward. The added light harmonics couple weakly and oscillate three times faster than the mechanical frequency, so the force they exert on the mechanical element will average out to zero over each mechanical period. Because our modified system produces all the same results from this point, we will omit the extensive additional calculations on the use of feedback to promote conditional squeezing into unconditioned, or ``real'', squeezing.
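To leading order in the strong-measurement limit $K\gg\gamma(2\bar n+1)$, the expression above reduces to
\begin{displaymath}
V_X\simeq\sqrt{\frac{\gamma\left(2\bar n+1\right)}{2K}}\ll\frac{1}{2},
\end{displaymath}
i.e., the conditional variance drops below the zero-point value $V_X=1/2$ in these units, which is the sense in which the measurement yields sub-zero-point sensitivity.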
Summarizing, then, we have shown that, provided that the RWA can be invoked (i.e., the good cavity limit of Eq.~\eqref{separationofscales} is satisfied), only the first sidebands, $\ell=\pm 1$, contribute significantly to the homodyne detection signal, demonstrating that the probe field couples predominantly to a single quadrature of motion. With this, both criteria discussed in the introduction are satisfied, establishing the viability of the proposed scheme for eliminating the parametric instability.
\section{Conclusions}
We have thus extended the proposal to perform optomechanical back-action evading measurements to the case of a multi-tone drive scheme. Specifically, by simply adding a third tone detuned from the cavity by $3\Omega_m$ with appropriate amplitude and phase, we push the oscillations in cavity energy to higher harmonics of the mechanical frequency, which in turn do not contribute to the parametric instability. We then reproduced, in the appropriate good-cavity but weakly coupled regime ($gA\ll\kappa\ll\Omega_m$), the desired back-action evading measurement of a single mechanical quadrature. Because these instabilities appear in such diverse experimental settings, we hope this modification will prove useful in reaching sub-zero-point position sensitivities deep in the quantum regime. Indeed, preliminary experimental results from our collaboration show squeezing in a microwave optomechanical device below the zero-point level; we will confirm and refine these results in the coming months.
There is still room to extend the calculations presented above. For instance, it may be instructive to include the frequency modulating terms explicitly in the Hamiltonian, $H_\mathrm{parametric} \propto a^\dag ac^\dag c$. In addition, we can go beyond the rotating wave approximation to examine the back-action contributed by the rapidly oscillating terms in the Hamiltonian. While small, this contribution is non-zero, and could eventually impose a lower limit once other technical obstacles are overcome. Finally, measurements of the output field other than homodyne detection should be considered to increase the applicability of the scheme even further.
\begin{acknowledgments}
This work was supported by the DARPA QuASAR program through a grant from AFOSR and the DARPA ORCHID program through a grant from ARO, the US Army Research Office, and by the NSF. KCS acknowledges funding provided by the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation through Grant GBF1250.
\end{acknowledgments}
\begin{thebibliography}{10}
\bibitem{Caves1}
C. M. Caves, K. S. Thorne, R. W. P. Drever, V. D. Sandberg, M. Zimmermann, Rev. Mod. Phys. {\bf 52}, 341 (1980)
\bibitem{reviews}
For recent reviews of cavity and quantum optomechanics, see T. J. Kippenberg, K. J. Vahala, Science {\bf 321} 1172 (2008); M. Aspelmeyer {\it et al.}, J. Opt. Soc. Am. B {\bf 27}, A189 (2010); F. Marquardt, S. M. Girvin, Physics {\bf 2}, 40 (2009); M. Aspelmeyer, P. Meystre, K. C. Schwab, Physics Today {\bf 65}, 29 (2012); P. Meystre, Ann. der Physik {\bf 525}, 215 (2013); D. M. Stamper-Kurn, arXiv:1204.4351, to appear in {\em Cavity Optomechanics}, edited by M. Aspelmeyer, T. Kippenberg, F. Marquardt, Springer Verlag; M. Aspelmeyer, T. J. Kippenberg, F. Marquardt, arXiv:1303.0733, to be published in Rev. Mod. Phys.
\bibitem{Braginsky1}
V. B. Braginsky, Y. I. Vorontsov, K. S. Thorne, Science {\bf 209}, 4456 (1980).
\bibitem{Braginsky2}
V. B. Braginsky, F. Ya. Khalili, {\em Quantum Measurement,} Cambridge University Press, Cambridge, UK, (1992).
\bibitem{Braginsky3}
V. B. Braginsky, F. Ya. Khalili, Rev. Mod. Phys. {\bf 68}, 1 (1996).
\bibitem{Clerk1}
A. A. Clerk, F. Marquardt, K. Jacobs, New J. Phys. {\bf 10}, 095010 (2008).
\bibitem{Ono1}
R. Onofrio, F. Bordoni, Phys. Rev. A {\bf 43}, 2113 (1991).
\bibitem{Ono2}
L. E. Marchese, M. F. Bocko, R. Onofrio, Phys. Rev. D {\bf 45}, 1869 (1992).
\bibitem{Hertzberg}
J.B. Hertzberg, T. Roucheleau, T. Ndukum, M. Savva, A. A. Clerk, K. C. Schwab, Nature Physics {\bf 6}, 213 (2009).
\bibitem{Suh13}
J. Suh, M.D. Shaw, H.G. LeDuc, A.J. Weinstein, K.C. Schwab, Nano Lett {\bf 12}, 6260 (2012).
\bibitem{SuhTLS}
J. Suh, A.J. Weinstein, K.C. Schwab, Appl. Phys. Lett. {\bf 103}, 052604 (2013).
\bibitem{Milburn1}
H. M. Wiseman, G. J. Milburn, Phys. Rev. A {\bf 47}, 642 (1993).
\bibitem{Xie1}
W. Xie, {\em Dynamic Stability of Structures,} Cambridge University Press, New York, NY, USA (2006).
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
A multisection is a decomposition of a manifold into $1$--handlebodies, where each subcollection of the pieces intersects along a $1$--handlebody except the global intersection which is a closed surface. These generalizations of Heegaard splittings and Gay--Kirby trisections were introduced by Ben Aribi, Courte, Golla and the author, who proved in particular that any $5$--manifold admits such a multisection. In arbitrary dimension, we show that two classes of manifolds admit multisections: surface bundles and fiber bundles over the circle whose fiber itself is multisected. We provide explicit constructions, with examples.
\end{abstract}
\title{Multisections of surface bundles and bundles over $S^1$}
\tableofcontents
\section{Introduction}
Heegaard splittings are standard decompositions of closed orientable $3$--manifolds into two handlebodies. In \cite{GayKirby}, Gay and Kirby introduce analogous decompositions of $4$--manifolds, the so-called trisections; they proved that all closed orientable smooth $4$--manifolds admit such trisections. More recently, in \cite{BCGM}, Ben Aribi, Courte, Golla and the author studied a notion of multisection for closed orientable manifolds of any dimension, which encompasses Heegaard splittings and trisections: a multisection of an $(n+1)$--manifold is a decomposition into $n$ $1$--handlebodies such that each subcollection intersects along a $1$--handlebody, except the global intersection which is a closed surface. They proved in particular that any smooth $5$--manifold admits a multisection. Here, we study the existence of multisections in arbitrary dimension for two classes of manifolds: surface bundles and bundles over the circle.
\begin{theo}[Theorem~\ref{th:SurfaceBundle}]
Any surface bundle admits a multisection.
\end{theo}
\begin{theo}[Theorem~\ref{th:bundleS1}]
Any fiber bundle over $S^1$, whose fiber admits a multisection globally preserved by the monodromy, admits a multisection.
\end{theo}
The proofs are constructive. We show how to get a multisection diagram of the bundle, either from a suitable decomposition of the basis in the case of surface bundles, or from a multisection diagram of the fiber in the case of bundles over $S^1$.
We work in the PL category, but our constructions also work in the smooth category. Throughout the article, all manifolds are orientable.
\section{Multisections and diagrams}
\label{sec:defs}
For $n\geq3$, an {\em $n$--dimensional handlebody} is a manifold that admits a handle decomposition with one $0$--handle and some $1$--handles; the number of $1$--handles is the {\em genus} of the handlebody.
A {\em multisection}, or {\em $n$--section}, of a closed connected $(n+1)$--manifold $W$ is a decomposition $W=\cup_{i=1}^nW_i$ where:
\begin{itemize}
\item for any non-empty proper subset $I$ of $\{1,\dots,n\}$, the intersection $W_I=\cap_{i\in I}W_i$ is a submanifold of $W$ PL--homeomorphic to an $(n-|I|+2)$--dimensional handlebody,
\item $\Sigma=\cap_{i=1}^nW_i$ is a closed connected surface.
\end{itemize}
A multisection is a {\em Heegaard splitting} when $n=2$, a {\em trisection} when $n=3$, a {\em quadrisection} when $n=4$.
The {\em genus} of the multisection is the genus of the {\em central surface} $\Sigma$.
\begin{remark}
In the smooth category, the $W_I$ cannot all be smooth submanifolds of $W$: some corners necessarily appear. This forces one to give a more detailed definition, see~\cite{BCGM}.
\end{remark}
Given a $3$--dimensional handlebody $H$ with boundary $\Sigma$, a {\em cut-system} for $H$ is a collection of disjoint simple closed curves on $\Sigma$ which bound disjoint properly embedded disks in $H$ such that the result of cutting $H$ along these disks is a $3$--ball. Recall that a cut-system is well defined up to handleslides.
A {\em multisection diagram} for a multisection as above is a tuple $\big(\Sigma;(c_i)_{1\leq i\leq n}\big)$, where $\Sigma$ is the central surface of the multisection and $c_i$ is a cut-system on $\Sigma$ for the $3$--dimensional handlebody $\cap_{j\neq i}W_j$. By \cite[Theorem 3.2]{BCGM}, a multisection diagram determines a unique PL--manifold (and a unique smooth manifold up to dimension $6$).
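For instance, in the lowest case $n=2$ a multisection diagram is just a classical Heegaard diagram: for the genus--$1$ Heegaard splitting of $S^3$, it is a tuple $(\Sigma;c_1,c_2)$ where $\Sigma$ is a torus and $c_1$, $c_2$ are simple closed curves meeting transversely in a single point, each bounding a meridian disk in one of the two solid tori.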
The connected sum of multisected manifolds can be performed in a suitable way so as to produce a multisection of the resulting manifold. This gives a simple way to define stabilizations of multisections: a {\em stabilization} of a multisection of an $(n+1)$--manifold $W$ is the connected sum of the multisected $W$ with the standard sphere $S^{n+1}$ equipped with a genus--$1$ multisection; the latter are given diagrammatically in Figure~\ref{fig:gen1spheres}. These stabilization moves can also be described as cut-and-paste operations on the multisection of $W$, see \cite{BCGM}.
\begin{figure}
\caption{Genus--$1$ multisection diagrams of the spheres, where $0<k<n$}
\label{fig:gen1spheres}
\end{figure}
\begin{proposition}
Let $W$ be an $(n+1)$--manifold with a multisection diagram $\big(\Sigma;(c_i)_{1\leq i\leq n}\big)$. Assume the diagram contains two groups of $k$ and $n-k$ parallel curves, with $0<k<n$, such that a curve of one group meets a curve of the other group in exactly one point. Then the associated multisection is the result of a stabilization.
\end{proposition}
\begin{proof}
Denote by $\alpha_1,\dots,\alpha_k$ and $\beta_{k+1},\dots,\beta_n$ the two groups of curves. Since two curves in the same collection $c_i$ have to be disjoint but not parallel, there is one of these curves in each $c_i$. If a curve of the diagram, other than the $\beta$--curves, meets the $\alpha$--curves, then it is in the same collection $c_i$ as one of the $\beta$--curves, so that it can be slid along this $\beta$--curve until it gets disjoint from the $\alpha$--curves. The same is true for a curve, other than the $\alpha$--curves, that would meet the $\beta$--curves. Hence, after handleslides, we can assume that the $\alpha$--curves and $\beta$--curves are disjoint from any other curve of the diagram. It means that the diagram is a connected sum of a lower-genus diagram with a diagram of Figure~\ref{fig:gen1spheres}. Since the multisected manifold is determined by the diagram, it is the result of a stabilization.
\end{proof}
We will use this proposition in Section~\ref{sec:bundlesS1} to simplify some multisection diagrams. We may note that, for a smooth manifold, this proof works up to dimension $6$ only.
\section{Multisecting surface bundles}
\label{sec:surfacebundles}
Surface bundles turn out to admit multisections in any dimension. A multisection of a surface bundle can be obtained from a suitable decomposition of the basis into balls. This generalizes the work of Gay--Kirby \cite{GayKirby} and Williams \cite{Williams} on trisections of $4$--dimensional surface bundles.
Let $M$ be a closed $d$--manifold. We call {\em good ball decomposition} of the manifold $M$ a decomposition $M=\cup_{i=0}^dM_i$ where, for all non-empty $I\subset\{0,\dots,d\}$, $\cap_{i\in I}M_i$ is a disjoint union of embedded $(d-|I|+1)$--balls. Such decompositions do exist.
\begin{lemma}
Any closed manifold admits a good ball decomposition.
\end{lemma}
\begin{proof}
Let $M$ be a closed $d$--manifold and $\mathcal T$ a triangulation of $M$. We consider the first and second barycentric subdivisions $\mathcal T^1$ and $\mathcal T^2$ of $\mathcal T$. For $i\in\{0,\dots,d\}$, let $V_i$ be the set of barycenters of $i$--faces of $\mathcal T$; note that the union of the $V_i$ is the set of vertices of $\mathcal T^1$. We define $M_i$ as the union of the stars of the vertices of $V_i$ in $\mathcal T^2$, see Figure~\ref{figGBD}.
We may note that $M_i$ is the union of $i$--handles in the handle decomposition of $M$ associated with the triangulation $\mathcal T$ (see for instance \cite{RS}). The condition on the intersections is direct.
\end{proof}
\begin{figure}
\caption{For $d=2$, decomposition of a $2$--simplex into the $M_i$}
\label{figGBD}
\end{figure}
Let $p:W\to M$ be a surface bundle. Taking the preimage of a good ball decomposition of $M$, we get a decomposition of $W$ into pieces that are products of a surface, the fibre, with balls. To obtain a multisection from this, we dig some ``disk tunnels'' into the different pieces, which we add to other pieces.
If $N\subset M$ is a disjoint union of contractible subspaces of~$M$, a {\em disk section of $p$ over $N$} is a submanifold $Z$ of $W$ such that $p^{-1}(x)\cap Z$ is a $2$--disk for all $x\in N$, and $p(Z)=N$.
\begin{theorem} \label{th:SurfaceBundle}
Fix $n>1$. Let $p:W\to M$ be a surface bundle, where $W$ is a closed $(n+1)$--manifold. Assume we are given a good ball decomposition $M=\cup_{i=1}^nM_i$. Fix pairwise disjoint disk sections $Z_i$ of $p$ over $M_i$, for $1\leq i\leq n$. Set $W_i=\left(\overline{p^{-1}(M_i)\setminus Z_i}\right)\cup Z_{i+1}$. Then $W=\cup_{i=1}^nW_i$ is a multisection.
\end{theorem}
\begin{proof}
Set $M_I=\cap_{i\in I}M_i$.
We slightly abuse notation by denoting $Z_i=M_i\times D_i$, where $D_i$ is a disk in the fiber $S$ above each point of $M_i$, depending on this point. In this way, $W_i$ can be written as
$$W_i=\left(M_i\times\overline{S\setminus D_i}\right)\cup\big( M_{i+1}\times D_{i+1}\big),$$
where the indices are considered modulo $n$.
More generally, for $I\subset\{1,\dots,n\}$, one can check by induction on $|I|$ that:
\begin{align*}
W_I=&\
\left(M_I\times\overline{S\setminus\cup_{i\in I}D_i}\right)
\cup\bigcup_{\substack{i\in I\\i+1\notin I}}\left(M_{I\cup\{i+1\}\setminus\{i\}}\times D_{i+1}\right)
\cup\bigcup_{\substack{i\in I\\i+1\in I}}\left(M_{I\setminus\{i\}}\times\partial D_{i+1}\right).
\end{align*}
We fix a non-empty proper subset $I$ of $\{1,\dots,n\}$ and check that $W_I$ is a handlebody.
In the first term, each connected component is a thickening of a surface with non-empty boundary, thus a handlebody. The second term adds $1$--handles to these handlebodies; indeed, $M_I$ and $M_{I\cup\{i+1\}\setminus\{i\}}$ are made of $(n-|I|)$--balls that intersect along $(n-|I|-1)$--balls. For $i\in I$ such that $i+1\in I$, $M_{I\setminus\{i\}}$ is made of $(n-|I|+1)$--balls and contains in its boundary the $(n-|I|)$--balls composing~$M_I$. Hence each component of $M_{I\setminus\{i\}}\times\partial D_{i+1}$ is a $D^{n-|I|+1}\times S^1$ glued along some $D^{n-|I|}\times S^1$, with $D^{n-|I|}$ living in the boundary of $D^{n-|I|+1}$, to some handlebodies in $M_I\times\overline{S\setminus\cup_{i\in I}D_i}$. This has the effect of gluing together some of the latter handlebodies along a $1$--handle corresponding to the boundary component $\partial D_{i+1}$ of $\overline{S\setminus\cup_{i\in I}D_i}$. Finally, $W_I$ is made of handlebodies and it remains to check that it is connected.
Thanks to the above formula for $W_I$, it is enough to check that the different connected components of $M_I$ are connected by some number of paths, each contained in $M_{I\cup\{i+1\}\setminus\{i\}}$ (which is contained in $M_{I\setminus\{i\}}$) for some $i\in I$. Note that the connectedness of $M$ implies that $\cup_{|I|=n-1}M_I$ is connected (the good ball decomposition provides a CW--complex structure where the $k$--skeleton is $\cup_{|I|=n-k}M_I$). Hence two components of $M_I$ are always connected by a path in $\cup_{|I|=n-1}M_I$, and each interval of this path which is not in $M_I$ is a component of $M_{\{1,\dots,n\}\setminus\{i\}}$ for some $i\in I$, which is contained in $M_{I\cup\{i+1\}\setminus\{i\}}$.
Finally, the central piece $\Sigma=W_{\{1,\dots,n\}}$ is given by
$$\Sigma=\left(M_{\{1,\dots,n\}}\times\overline{S\setminus\cup_{i=1}^nD_i}\right)\cup\bigcup_{i=1}^n\left(M_{\{1,\dots,n\}\setminus\{i\}}\times\partial D_{i+1}\right).$$
The first term is made of copies of $S$ with the open $D_i$ removed, while the second term is made of tubes joining the boundary components of these copies. Hence $\Sigma$ is a closed surface.
\end{proof}
\begin{remark}
The multisection obtained in the theorem has genus $vg+e-v+1$, where $v$ is the number of points in $M_{\{1,\dots,n\}}$, $e$ is the number of intervals in $\cup_{|I|=n-1}M_I$, and $g$ is the genus of the fiber.
\end{remark}
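The genus count follows from a short Euler characteristic computation: $\Sigma$ is obtained from $v$ disjoint copies of the closed fiber of genus $g$ by removing two open disks for each of the $e$ tubes and gluing in the corresponding annuli, so that
$$\chi(\Sigma)=v(2-2g)-2e \qquad \text{and} \qquad g(\Sigma)=1-\tfrac{1}{2}\chi(\Sigma)=vg+e-v+1.$$
For a bundle over $S^{n-1}$ as in the corollary below, $v=2$ and $e=n$, giving genus $2g+n-1$.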
We now present some examples. When the bundle is simply a product, we define the disk sections as $Z_i=M_i\times D_i$, where the $D_i$ are disjoint disks on the fiber.
We start with bundles over spheres. The standard sphere admits the good ball decomposition given by the following lemma, see Figure~\ref{figdecspheres}.
\begin{lemma} \label{lemmaDecomposeSphere}
The $(n-1)$--sphere admits a good ball decomposition $S^{n-1}=\cup_{i=1}^nB_i$ where $\cap_{i\in I}B_i$ is a single $(n-|I|)$--ball for $I\subsetneq\{1,\dots,n\}$ and $\cap_{i=1}^nB_i$ is made of two points.
\end{lemma}
\begin{proof}
Consider the map $\varphi:S^{n-1}\subset\mathbb{R}^{n}\to\mathbb{R}^{n-1}$ sending $(x_1,\dots,x_n)$ to $(x_1,\dots,x_{n-1})$. Its image $B^{n-1}$ can be viewed as an $(n-1)$--simplex and cut into the cones with vertex its center and bases its faces. The pull-back of this decomposition provides the required decomposition of~$S^{n-1}$.
\end{proof}
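Concretely, for $n=3$ the decomposition of $S^2$ given by the proof consists of three disks $B_1,B_2,B_3$, the preimages of the three cones: each double intersection $B_i\cap B_j$ is an arc, and $B_1\cap B_2\cap B_3$ consists of the two points of $S^2$ lying over the center of the simplex, as asserted by Lemma~\ref{lemmaDecomposeSphere}.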
\begin{figure}
\caption{Good ball decomposition of $S^k$ for small $k$}
\label{figdecspheres}
\end{figure}
\begin{corollary}
A surface bundle over $S^{n-1}$ with fiber a closed surface of genus $g$ admits a multisection of genus $2g+n-1$.
\end{corollary}
If $W$ is a surface bundle over $S^{n-1}$, the associated multisection has a central surface given by two copies of the fiber (the preimages of $\cap_{i=1}^nB_i$) joined by a tube above each $\cap_{j\neq i}B_j$.
Examples of the associated diagram are given in Figures~\ref{figS2bundles} and~\ref{figS1S1S3}. The cut-system associated to $W_{\{1,\dots,n\}\setminus\{i\}}$ is obtained as follows. A first curve is given by a meridian curve around the tube above $\cap_{j\neq i-1}B_j$. Then take a family of properly embedded arcs on $S\setminus\cup_{j\neq i} D_j$ that cut it into a punctured sphere, and build simple closed curves on $\Sigma$ by joining the two copies of these arcs on the two copies of the punctured $S$ by parallel arcs on the tubes.
\begin{figure}
\caption{Multisection diagrams for $S^2\times S^3$ and $S^2\times S^4$}
\label{figS2bundles}
\end{figure}
\begin{figure}
\caption{Quadrisection diagram for $S^1\times S^1\times S^3$}
\label{figS1S1S3}
\end{figure}
In dimension $6$, we get infinitely many $6$--manifolds admitting a $5$--section of genus $4$, namely the $S^2$--bundles over~$S^4$. Such a bundle can be constructed by gluing two copies of $S^2\times B^4$ via a map $\varphi :S^3\to SO(3)$. Writing $S^3$ as the quotient of $S^2\times [0,2\pi]$ by the shrinking of $S^2\times\{0\}$ and $S^2\times \{2\pi\}$, we define a map $\varphi_m$ that sends $(x,t)$ onto the rotation of axis given by $x$ and angle $mt$. This defines an $S^2$--bundle $W(m)$. While $W(-m)$ is diffeomorphic to $W(m)$, the group $\pi_3\big(W(m)\big)$ is finite of order $m$. To get a simple multisection diagram of $W(m)$, it appears helpful to modify the map $\varphi_m$ by a homotopy. We write $S^3$ as $S^2\times [-1,2\pi]$ with $S^2\times\{-1\}$ and $S^2\times \{2\pi\}$ shrunk, and we define $\varphi_m$ as previously on $S^2\times[0,2\pi]$ and constant equal to the identity on $S^2\times[-1,0]$. Taking a good ball decomposition $S^4=\cup_{1\leq i\leq 5}B_i$ as above, we set $B_a=B_1\cup B_2$ and $B_b=B_3\cup B_4\cup B_5$ and we assume that the parametrizations of $S^3$ as the boundary of $B_a$ and $B_b$ are such that:
\[
\begin{minipage}{5cm}
$S^3\cap\partial B_1=\big(S^2\times[-1,0]\big)/\sim$\\[0.1cm]
$S^3\cap\partial B_2=\big(S^2\times[0,2\pi]\big)/\sim$
\end{minipage}
\hspace{1.5cm}
\begin{minipage}{5cm}
$S^3\cap\partial B_3=\big(\Delta_3\times[-1,2\pi]\big)/\sim$\\[0.1cm]
$S^3\cap\partial B_4=\big(\Delta_4\times[-1,2\pi]\big)/\sim$\\[0.1cm]
$S^3\cap\partial B_5=\big(\Delta_5\times[-1,2\pi]\big)/\sim$
\end{minipage}
\]
where $S^2=\Delta_3\cup\Delta_4\cup\Delta_5$ is a good ball decomposition of $S^2$ as in Figure~\ref{figdecspheres}. Now the bundle $W(m)$ is given by the gluing of $S^2\times B_a$ and $S^2\times B_b$ via the map $\varphi_m:S^2\times\partial B_a\to S^2\times\partial B_b$. We choose $D_1$ and $D_2$ as small neighborhoods of the two points in $\Delta_3\cap\Delta_4\cap\Delta_5$, and $D_3\subset\Delta_4$, $D_4\subset\Delta_5$, $D_5\subset\Delta_3$ disjoint from $\varphi_m(D_1\cup D_2)$. This gives explicit disk sections, and a careful analysis of the gluing locus, where all the $3$--dimensional handlebodies of the multisection lie, provides the diagram in Figure~\ref{figS2bundleS4}. The only $3$--dimensional piece where the gluing is non-trivial is $W_{2345}$, represented in green.
\begin{figure}
\caption{Multisection diagram for the $S^2$--bundle $W(2)$ over $S^4$\\{\footnotesize For a diagram of $W(m)$, the green curve that differs from the diagram of $S^2\times S^4$ has to turn $2m$ times.}}
\label{figS2bundleS4}
\end{figure}
We now treat surface bundles over $S^2\times S^1$.
\begin{lemma}
A surface bundle over $S^2\times S^1$ admits a quadrisection of genus $8g+9$, where $g$ is the genus of the fiber.
\end{lemma}
\begin{proof}
We use the following good ball decomposition of $S^2\times S^1$. The factor $S^2$ is cut into two disks; the product of each of these disks with $S^1$ is then cut into two balls, see Figure~\ref{figS2xS1GBD}. In this decomposition $S^2\times S^1=\cup_{1\leq i\leq 4}M_i$, each $M_i$ is a $3$--ball, each $M_{ij}$ is made of two $2$--disks, each $M_{ijk}$ is made of four intervals and $M_{1234}$ contains exactly eight points. In the associated quadrisection of a surface bundle over $S^2\times S^1$, the central surface is made of $8$ copies of the fiber joined by $16$ tubes.
\end{proof}
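This is consistent with the genus formula of the remark above:
$$vg+e-v+1=8g+16-8+1=8g+9.$$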
\begin{figure}
\caption{Good ball decomposition of $S^2\times S^1$: the $S^2$--slices\\{\footnotesize Here, $S^1$ is regarded as $[0,24]/(0=24)$. We represent $M_1$ in blue, $M_2$ in yellow, $M_3$ in red and $M_4$ in green.}}
\label{figS2xS1GBD}
\end{figure}
\begin{figure}
\caption{Quadrisection diagram for $S^2\times S^2\times S^1$}
\label{figS2S2S1}
\end{figure}
In Figure~\ref{figS2S2S1}, we give a quadrisection diagram for $S^2\times S^2\times S^1$. To draw the diagram curves on the \mbox{genus--$9$} surface, we need to determine, for each $3$--dimensional handlebody of the quadrisection, a family of $9$ properly embedded disks that cut the handlebody into a $3$--ball. We have
$$W_{123}=\Big(M_{123}\times\big(S^2\setminus(D_1\cup D_2\cup D_3)\big)\Big)\cup\Big(M_{23}\times\partial D_2\Big)\cup\Big(M_{13}\times\partial D_3\Big)\cup\Big(M_{412}\times D_4\Big).$$
From Figure~\ref{figS2xS1GBD}, it can be read that $M_{23}$ (resp. $M_{13}$) is made of two disks, viewed as one bigon and one hexagon according to the distribution of their boundaries along $M_{123}$ and $M_{234}$ (resp. $M_{123}$ and $M_{341}$).
Our $9$ disks in $W_{123}$ are as follows:
\begin{itemize} \itemsep=0cm
\item $4$ meridional disks in the four solid tubes composing $D_4\times M_{412}$,
\item the union of:
\begin{itemize}
\item a point in $\partial D_2$ product with the bigon in $M_{23}$,
\item an arc joining $\partial D_1$ to $\partial D_2$ on $S^2$ product with the relevant interval in $M_{123}$ (the one on the boundary of the bigon in $M_{23}$),
\end{itemize}
\item the union of:
\begin{itemize}
\item a point in $\partial D_2$ product with the hexagon in $M_{23}$,
\item an arc joining $\partial D_1$ to $\partial D_2$ on $S^2$ product with the relevant three intervals in $M_{123}$,
\end{itemize}
\item disks analogous to the previous two with $\partial D_3$ instead of $\partial D_2$ and $M_{13}$ instead of $M_{23}$,
\item an arc joining $\partial D_1$ to itself around $\partial D_2$ product with an interval in $M_{123}$ (different from the ones corresponding to the bigons).
\end{itemize}
The other cut-systems are obtained similarly.
We may note that a simpler quadrisection diagram of $S^2\times S^2\times S^1$ is provided in the next section.
We end this section with surface bundles over real projective spaces.
\begin{lemma}
A surface bundle over $\mathbb{R}P^n$ admits a multisection of genus $2^ng+2^{n-1}(n-1)+1$, where $g$ is the genus of the fiber.
\end{lemma}
\begin{proof}
We need to give a suitable good ball decomposition of $\mathbb{R}P^n$. Using homogeneous coordinates $[x_0:\dots:x_n]$ for $\mathbb{R}P^n$, we set $M_i=\{[x_0:\dots:x_n] : |x_i|\geq|x_j| \ \forall j\}$ for $i=0,\dots,n$. This defines a good ball decomposition of $\mathbb{R}P^n$ where $M_{\{0,\dots,n\}}$ contains $2^n$ points and each $M_{\{0,\dots,n\}\setminus\{i\}}$ is made of $2^{n-1}$ intervals.
\end{proof}
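Here again the genus is computed by the formula $vg+e-v+1$, with $v=2^n$ and $e=(n+1)2^{n-1}$:
$$2^ng+(n+1)2^{n-1}-2^n+1=2^ng+2^{n-1}(n-1)+1.$$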
\section{Multisecting fiber bundles over the circle}
\label{sec:bundlesS1}
For a closed $3$--manifold that fibers over the circle, there is a simple construction of a Heegaard splitting, which is the one described in the previous section. It has been generalized to dimension~$4$ by Gay and Kirby \cite{GayKirby} and Koenig \cite{Koenig}. A similar construction can indeed be performed in any dimension.
\begin{theorem} \label{th:bundleS1}
Let $W$ be an $(n+1)$--manifold which fibers over $S^1$ with fiber a closed $n$--manifold $X$ and monodromy $\varphi$. Assume that $X$ admits a multisection $X=\cup_{i=1}^{n-1}X_i$ preserved by $\varphi$ and denote by $\sigma$ the permutation such that $\varphi(X_i)=X_{\sigma(i)}$. Then $W$ admits a multisection of genus $Ng+1$, where $g$ is the genus of $S=X_{\{1,\dots,n\}}$ and $N$ equals the number of cycles in the decomposition of~$\sigma$ (including fixed points as order--$1$ cycles) plus the sum of the orders of these cycles.
\end{theorem}
\begin{proof}
We construct a multisection of $W$ using the multisection of $X$ and a decomposition of $S^1$ into intervals. Identifying $W$ with $\fract{X\times I}{(x,0)\sim(\varphi(x),1)}$, we define the $W_i$ in the \mbox{product $X\times I$.}
\begin{figure}
\caption{Scheme of the quadrisection of $W$ for different permutations $\sigma$}
\label{figbundleS1n4}
\end{figure}
Assume first $n=4$. There are three cases, depending on the type of $\sigma$: identity, transposition or $3$--cycle. These are schematized in Figure~\ref{figbundleS1n4}. For instance, when $\sigma=(123)$, we first set:
\begin{align*}
W_1'&=\left(X_1\times\left[\frac45,1\right]\right)\cup_\varphi \left(X_2\times\left[0,\frac35\right]\right),&
W_2'&=\left(X_2\times\left[\frac35,1\right]\right)\cup_\varphi \left(X_3\times\left[0,\frac25\right]\right),\\
W_3'&=\left(X_3\times\left[\frac25,1\right]\right)\cup_\varphi \left(X_1\times\left[0,\frac15\right]\right),&
W_4'&=X_1\times\left[\frac15,\frac45\right].
\end{align*}
At this stage, the $W_i'$ are handlebodies, but their intersections are not. We shall again fix this by tubing. We say that a $4$--ball in $X$ is in {\em good position} if it is transverse to the $X_i$, the $X_{ij}$ and the central surface $S$, and if it intersects each of these pieces along a ball of the corresponding dimension. For each interval in our construction, namely $\left[\frac i5,\frac {i+1}5\right]$ for $i=1,2,3$ and $\left[\frac45,1\right]\cup\left[0,\frac15\right]$, we take a tube $B^4\times I$ made of a $4$--ball in good position above each point of the interval, and we add it to the $W_i'$ which doesn't appear above this interval. For instance, a tube above $\left[\frac25,\frac35\right]$ is removed from $W_1'\cup W_3'\cup W_4'$ and added to $W_2'$ to form $W_2$. We require distinct tubes to be disjoint.
Let us check that this defines a quadrisection of $W$. Each $W_i$ is a $4$--dimensional handlebody times an interval, thus a $5$--dimensional handlebody. The $W_{ij}$ are made of copies of some $X_k$ and copies of some $X_{k\ell}$ times interval, joined by two tubes, hence they are $4$--dimensional handlebodies; for instance, $W_{12}$ is the union of $X_2\times\left\lbrace\frac35\right\rbrace$ and $\left(X_{12}\times\left[\frac45,1\right]\right)\cup_{\varphi}\left(X_{23}\times\left[0,\frac25\right]\right)$ joined by the tubes above $\left[\frac25,\frac35\right]$ and $\left[\frac35,\frac45\right]$. The $W_{ijk}$ are made of two copies of some $X_{\ell m}$ and a copy of the surface $S=X_{123}$ times interval, joined by three tubes. Finally, the central surface $\Sigma=W_{1234}$ is given by four copies of $S$, above $\frac i5$ for $i=1,..,4$, joined by tubes.
The genera of the surfaces are related by $g(\Sigma)=4g+1$, where $g=g(S)$.
The schemes of Figure~\ref{figbundleS1n4} show how to apply this construction for other permutations of the trisection of $X$. The genus of $\Sigma$ is then given by $5g+1$ and $6g+1$ respectively.
\begin{figure}
\caption{Scheme of the multisection of $W$ for a monodromy inducing a cycle}
\label{figbundleS1cycle}
\end{figure}
\begin{figure}
\caption{Other schemes of multisections}
\label{figbundleS1n67}
\end{figure}
In higher dimensions, the same construction works, as long as we can determine the right scheme. The idea is to use the decomposition of $\sigma$ into disjoint cycles. Figure~\ref{figbundleS1cycle} gives a model for an $(n-1)$--cycle. Then different cycles can be stacked together as exemplified in Figures~\ref{figbundleS1n4} and~\ref{figbundleS1n67}. In doing so, we decompose the interval $[0,1]$ into $N$ intervals $I_\ell=[\frac{\ell}{N+1},\frac{\ell+1}{N+1}]$ for $\ell=1,\dots,N-1$ and $I_N=[\frac{N}{N+1},1]\cup[0,\frac{1}{N+1}]$; note that $N$ equals the number of cycles in the decomposition of $\sigma$ (including fixed points as order--$1$ cycles) plus the sum of the orders of these cycles. As above, we first define $W_i'$ as the disjoint union of the $X_k\times I_\ell$ for each interval $I_\ell$ above which $W_i$ appears in the $X_k$--line. We say that an $n$--ball in $X$ is in {\em good position} if it meets each $X_I$ transversely along a ball.
Then, for each interval $I_\ell$, we take a tube $B^n\times I_\ell$ made of a ball in good position above each point of $I_\ell$, so that the different tubes are disjoint, and we add this tube to the only $W_i$ which doesn't appear above $I_\ell$. Thus constructed, the $W_i$ are made of $n$--dimensional handlebodies times interval joined by $1$--handles, hence they are $(n+1)$--dimensional handlebodies. For a non-empty proper subset $I$ of $\{1,\dots,n\}$, $W_I$ is made of some $X_J\times I_\ell$ with $|J|=|I|$ and some $X_J\times\{\frac{\ell}{N+1}\}$ with $|J|=|I|-1$, joined by $1$--handles; hence $W_I$ is an $(n-|I|+2)$--dimensional handlebody.
Finally, $\Sigma=W_{\{1,\dots,n\}}$ is made of $N$ copies of the punctured $S$ joined by $N$ tubes.
\end{proof}
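The genus in the statement follows from the same Euler characteristic count as for surface bundles: $\Sigma$ consists of $N$ punctured copies of $S$ joined by $N$ tubes, so $\chi(\Sigma)=N(2-2g)-2N$ and the genus is $Ng+N-N+1=Ng+1$. For instance, for a trisected fiber ($n=4$) with trivial monodromy, $N=3+3=6$ and the genus is $6g+1$.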
\begin{figure}
\caption{Trisection diagram for $\mathbb{C}P^2$ (left) and scheme of the multisection (right)}
\label{figCP2S1scheme}
\end{figure}
\begin{figure}
\caption{Quadrisection diagrams of $\mathbb{C}P^2\times S^1$}
\label{figCP2xS1}
\end{figure}
We shall explain, using the example of $\mathbb{C}P^2\times S^1$, how to draw a diagram of the multisection we have constructed. We start with the diagram of $\mathbb{C}P^2$ given in the left hand side of Figure~\ref{figCP2S1scheme}, whose surface is denoted by $S$, where the red curve represents $X_{12}$, the blue curve represents $X_{23}$ and the green curve represents $X_{13}$. We use the alternative scheme given in the right hand side of Figure~\ref{figCP2S1scheme}. We draw the surface $\Sigma$ as $N=6$ copies $S_i$ of $S$ set along a cycle and joined by tubes creating the hole in the middle, see Figure~\ref{figCP2xS1}. The $3$--dimensional handlebodies of the quadrisection are made of copies of the $3$--dimensional handlebodies of the trisection of $\mathbb{C}P^2$ and copies of the punctured surface $S$ times an interval, joined by tubes. For instance, in our example, the handlebody $W_{124}$ is made of:
\begin{itemize}\itemsep=0cm
\item copies of $X_{13}$ represented by the purple curves on $S_1$ and $S_6$,
\item the product of a punctured $S$ with an interval running from $S_2$ to $S_3$, where the purple curves represent arcs on the punctured $S$ that define disks in the product,
\item the same thing between $S_4$ and $S_5$,
\item the tubes joining the above pieces, to which corresponds a meridian purple curve which could be drawn between $S_i$ and $S_{i+1}$ for any $i\neq2,4$.
\end{itemize}
The other cut-systems are obtained accordingly; the orange one represents $W_{123}$, the blue one $W_{134}$ and the green one $W_{234}$. If the monodromy $\varphi$ were non-trivial, then the green curves on $S_1/S_6$ would have to join arcs on $S_6$ to their images by $\varphi$ on $S_1$. Note that we have to represent the successive copies of $S$ with alternating orientation. The monodromy $\varphi$ reverses the orientation of $S$ precisely when $n$ is odd.
On this diagram, two destabilizations can be performed. Sliding the orange curve on $S_2$ along the orange curve on $S_1$ and the green curve on $S_2$ along the green curve on $S_3$, we get parallel green and purple curves dual to parallel blue and orange curves. This allows us to destabilize once. The second destabilization is symmetric with respect to a vertical axis.
\begin{figure}
\caption{A trisection diagram for $S^2\times S^2$}
\label{figS2S2}
\end{figure}
Starting with the trisection diagram of $S^2\times S^2$ given in Figure~\ref{figS2S2}, we get a quadrisection diagram of $S^2\times S^2\times S^1$ in a completely analogous manner. In particular, we use the same scheme (see Figure~\ref{figCP2S1scheme}) and the same colors. This first gives the genus--$13$ quadrisection diagram on the left hand side of Figure~\ref{figS2xS2xS1}. After destabilizations, we get the genus--$7$ diagram on the right hand side.
\begin{figure}
\caption{Quadrisection diagrams of $S^2\times S^2\times S^1$}
\label{figS2xS2xS1}
\end{figure}
\end{document}
\begin{document}
\setcounter{page}{1}
\title{The Cohomology of the Moduli Space of Abelian Varieties}
\author{Gerard van der Geer}
\address{University of Amsterdam}
\email{[email protected]}
\subjclass[2000]{Primary 14; Secondary: 11G, 14F, 14K, 14J10, 10D}
\keywords{Abelian Varieties, Moduli, Modular Forms}
\maketitle
\thispagestyle{empty}
\section*{Introduction}
That the moduli spaces of abelian varieties are a rich source of
arithmetic and geometric information slowly emerged
from the work of Kronecker, Klein, Fricke and many others at the end
of the 19th century.
Since the 20th century we know that the first place to dig for these
hidden treasures is the cohomology of these moduli spaces.
In this survey we are concerned with the cohomology of the moduli
space of abelian varieties. Since this is an extensive and widely
ramified topic that connects to many branches of algebraic geometry
and number theory we will have to limit ourselves. I have chosen to
stick to the moduli spaces of principally polarized abelian varieties,
leaving aside the moduli spaces of abelian varieties with non-principal
polarizations and the other variations like the moduli spaces with extra
structure (like conditions on their endomorphism rings) since often
the principles are the same, but the variations are clad in
heavy notation.
The emphasis in this survey is on the tautological ring of the moduli
space of principally polarized abelian varieties. We discuss the cycle
classes of the Ekedahl-Oort stratification, that can be expressed in
tautological classes, and discuss differential forms on the moduli
space. We also discuss complete subvarieties of~$\A{g}$.
Finally, we discuss Siegel modular forms and their relation
to the cohomology of these moduli spaces. We sketch the approach developed
jointly with Faber and Bergstr\"om to calculate the traces of the Hecke
operators by counting curves of genus $\leq 3$ over finite fields,
an approach that opens a new window on Siegel modular forms.
\setcounter{tocdepth}{1}
\tableofcontents
\section{The Moduli Space of Principally Polarized Abelian Varieties}\label{Ag}
We shall assume the existence of the moduli space of principally
polarized abelian varieties as given. So
throughout this survey $\A{g}$ will denote the Deligne-Mumford stack
of principally polarized abelian varieties of dimension $g$. It is
a smooth Deligne-Mumford stack over ${\rm Spec}({\bZ})$ of relative
dimension $g(g+1)/2$, see \cite{F-C}.
Over the complex numbers this moduli space can be described as the
arithmetic quotient (orbifold)
${\Sp}(2g,{\bZ}) \backslash {\Hg}$
of the Siegel upper half space by the symplectic group.
This generalizes the well-known
description of the moduli of complex elliptic curves as
${\rm SL}(2,{\bZ})\backslash \mathfrak{H}$ with $\mathfrak{H}=\mathfrak{H}_1$
the usual upper half plane of the complex plane. We refer to Milne's
account in this Handbook for the general theory of Shimura varieties.
The stack $\A{g}$ comes with a universal family of principally
polarized abelian varieties $\pi: \X{g} \to \A{g}$. Since abelian
varieties can degenerate the stack $\A{g}$ is not proper or complete.
The moduli space $\A{g}$ admits several compactifications. The first one
is the Satake compactification or Baily-Borel compactification. It is
defined by considering the vector space of Siegel modular forms of
sufficiently high weight, by using these to map $\A{g}$ to projective
space and then by taking the closure of the image
of $\A{g}$ in the receiving projective space. This construction was
first done by Satake and by Baily-Borel over the field of
complex numbers, cf.\ \cite{B-B}.
The Satake compactification $\SatA{g}$ is very singular for $g\geq 2$.
It has a stratification
$$
\SatA{g}=\A{g}\sqcup \SatA{g-1}=\A{g}\sqcup \A{g-1} \sqcup \cdots \sqcup
\A{1}\sqcup \A{0}.
$$
In an attempt to construct non-singular compactifications of arithmetic
quotients, such as ${\Sp}(2g,{\bZ}) \backslash {\Hg}$, Mumford with a team of
co-workers created a theory of so-called toroidal compactifications of
${\A{g}}$ in \cite{AMRT}, cf.\ also the new edition \cite{AMRT-2}.
These compactifications are not unique but depend on a
choice of combinatorial data, an admissible cone decomposition of the cone of
positive (semi)-definite symmetric bilinear forms in $g$ variables.
Each such toroidal compactification
admits a morphism $q: \barA{g} \to \SatA{g}$.
Faltings and Chai showed how Mumford's theory of compactifying quotients of
bounded symmetric domains could be used to extend
this to a compactification of $\A{g}$ over the integers,
see \cite{F1, F-C, AMRT}.
This also led to the Satake compactification over the integers.
There are a number of special choices for the cone decompositions,
such as the second
Voronoi decomposition, the central cone decomposition or the
perfect cone decomposition. Each of these choices has its advantages
and disadvantages. Moreover, Alexeev has constructed a functorial
compactification of $\A{g}$, see \cite{Alexeev, Alexeev-Nakamura}.
It has as a disadvantage that for $g\geq 4$
it is not irreducible but possesses extra components.
Its main component corresponds to
${\cA}_g^{\rm Vor}$, the toroidal compactification
defined by the second Voronoi compactification.
Olsson has adapted Alexeev's construction using log structures
and obtained a functorial compactification. The normalization
of this compactification is the compactification ${\cA}_g^{\rm Vor}$, cf.\
\cite{Olsson}.
The toroidal compactification ${\cA}_g^{\rm perf}$ defined by the perfect cone
decomposition is a canonical model of $\A{g}$ for $g \geq 12$
as was shown by Shepherd-Barron, see \cite{Sh-B} and his contribution to
this Handbook.
The partial desingularization of the Satake compactification obtained by Igusa
in \cite{Igusa}
coincides with the toroidal compactification corresponding to the central cone
decomposition.
We refer to a survey
by Grushevsky (\cite{Gr}) on the geometry of the moduli space
of abelian varieties.
These compactifications (2nd Voronoi, central and perfect cone)
agree for $g\leq 3$, but are different for higher $g$.
For $g=1$ one has $\barA{1}=\SatA{1}=\overline{\cM}_{1,1}$, the Deligne-Mumford
moduli space of $1$-pointed stable curves of genus $1$. For $g=2$
the Torelli morphism gives an identification $\barA{2}=\overline{\cM}_{2}$,
the Deligne-Mumford moduli space of stable curves of genus $2$.
The open part $\A{2}$ corresponds to stable curves of genus $2$ of
compact type. But please note that the Torelli
morphism of stacks ${\cM}_{g} \to \A{g}$
is a morphism of degree $2$ for $g\geq 3$, since every
principally polarized abelian variety
possesses an automorphism of order $2$,
but the generic curve of genus $g\geq 3$
does not.
We shall use the term Faltings-Chai compactifications for the compactifications
(over rings of integers) defined by admissible cone decompositions.
\section{The Compact Dual}\label{compactdual}
The moduli space $\A{g}(\bC)$ has the analytic description as ${\Sp}(2g,{\bZ})
\backslash {\Hg}$. The Siegel upper half space ${\Hg}$
can be realized in various ways,
one of which is as an open subset of the so-called compact dual and the
arithmetic quotient inherits various properties from this compact quotient.
For this reason we first treat the compact dual at some length.
To construct $\mathfrak{H}_g$,
we start with a non-degenerate symplectic form on a complex
vector space, necessarily of even dimension, say $2g$. To be explicit, consider
the vector space ${\bQ}^{2g}$ with basis $e_1,\ldots,e_{2g}$ and
symplectic form $\langle \, , \, \rangle $
given by $J(x,y)=x^t J y$ with
$$
J= \left( \begin{matrix} 0 & 1_g \\ -1_g & 0 \\ \end{matrix} \right) \, .
$$
We let $G={\rm GSp}(2g,{\bQ})$ be the algebraic group over
${\bQ}$ of symplectic similitudes of this symplectic vector space
$$
G= \{ g \in {\GL}(2g,{\bQ}): J(gx,gy)= \eta(g) J(x,y)\} \, ,
$$
where $\eta: G \to {\Gm}$ is the so-called multiplier.
Then $G$ is the group of matrices
$\gamma=(A \, B; C \, D)$ with $A,B,C,D$ rational $g \times g$-matrices
with
$$
A^t \cdot C = C^t \cdot A, \, B^t \cdot D = D^t \cdot B \hskip 0.6em\relax
\hbox{\rm and} \hskip 0.6em\relax A^t \cdot D - C^t \cdot B = \eta(\gamma)\, 1_{g}.
$$
We denote the kernel of $\eta$ by $G^0$.
Let $Y_g$ be the Lagrangian Grassmannian
$$
Y_g =\{ L \subset {\bC}^{2g}: \dim(L)=g,\,
J(x,y)=0 \, \text{for all $ x,y \in L$}\}
$$
that parametrizes all totally isotropic subspaces of our complex symplectic
vector space. This is a homogeneous manifold of complex dimension $g(g+1)/2$
for the action of $G({\bC})={\GSp}(2g,{\bC})$;
in fact this group acts transitively
and the quotient of ${\GSp}(2g,{\bC})$ by the central ${\Gm}({\bC})$
acts effectively.
We can write $Y_g$ as a quotient $Y_g=G({\bC})/Q$, where $Q$ is a
parabolic subgroup of $G({\bC})$. More precisely, if we fix a point
$y_0= e_{1}\wedge \ldots \wedge e_{g} \in Y_g$
then we can write $Q$ as the group of matrices $(A\, B ; C \, D)$ in
${\GSp}(2g,{\bC})$ with $C=0$. This parabolic group $Q$ has a Levi
decomposition as $Q=M \ltimes U$ with $M$ the subgroup of $G$
that respects the decomposition of the symplectic space as
${\bQ}^g\oplus {\bQ}^g$; the matrices of $M$ are of the form
$(A \, 0 ; 0 \, D)$, and $M$ is isomorphic to ${\rm GL}(g)\times {\bG}_m$,
while those of $U$ are of the form $(1_g \, B ; 0 \, 1_g)$ with
$B$ symmetric.
There is an embedding of ${\Hg}$ into $Y_g$ as follows.
We consider the group $G^0({\bR})$
and the maximal compact subgroup $K$ of elements that fix $\sqrt{-1} \, 1_g$.
Its elements are described as $(A\, -B ; B \, A)$ and assigning to
it the element $A+\sqrt{-1} B$ gives an isomorphism of $K$ with
the unitary group $U(g)$. One way to describe the Siegel upper half space
is as the orbit $X_g=G^0({\bR})/K$ under the action of $G^0$; this can be
embedded in $Y_g=(G/Q)({\bC})$ as the set of all maximal
isotropic subspaces $V$
such that $-\sqrt{-1} \langle v,\bar{v} \rangle $ is positive definite
on $V$. Each such subspace has a basis consisting of the columns of the
transpose of the matrix $(-1_g \, \tau)$ for a unique $\tau \in {\Hg}$.
The subgroup $G^{+}({\bR})$ leaves this subset invariant and
this establishes the embedding of the domain ${\Hg}$ in $Y_g$,
and this space $Y_g$ is called the {\sl compact dual} of ${\Hg}$.
It contains $X_g \thicksim {\Hg}$ as
an open subset. The standard example (for $g=1$) is that of the
upper half plane contained in ${\bP}^1$.
For later use we extend this a bit by looking not only at
maximal isotropic subspaces, but also at symplectic filtrations
on our standard symplectic space.
Consider for $i=1, \ldots ,g$ the (partial) flag variety $U_g^{(i)}$ of
symplectic flags $E_i \subset \ldots \subset E_{g-1} \subset E_g$,
of linear subspaces $E_j$ of ${\bC}^{2g}$ with $\dim (E_j)=j$ and $E_g$
totally isotropic. We have $U_g^{(g)} = Y_g$. There are natural maps
$\pi_i: U_g^{(i)} \to U_g^{(i+1)}$ and the fibre of $\pi_i$ is a
Grassmann variety of dimension $i$. We can represent $U_g^{(1)}$ as a
quotient $G/B$, where $B$ is a Borel subgroup of $G$.
These spaces $U_g^{(i)}$ come equipped with universal flags
$E_i \subset E_{i+1} \subset \ldots \subset E_g$.
The manifold $Y_g$ possesses, as all Grassmannians
do, a cell decomposition. To define it, choose a fixed
isotropic flag
$
\{ 0 \} = Z_0 \subsetneq Z_{1} \subsetneq Z_2 \subsetneq
\ldots \subsetneq Z_g$; that is, $\dim Z_i=i$ and $Z_g$ is an isotropic
subspace of our symplectic space.
We extend the filtration by setting
$Z_{g+i} = (Z_{g-i})^{\bot}$ for $i=1,\ldots, g$.
For general $V \in Y_g$ we expect that $V\cap Z_{j} = \{ 0 \} $ for
$j\leq g$. Therefore,
for $\mu=(\mu_1,\ldots, \mu_r)$ with $\mu_i$ non-negative
integers satisfying
$$
0 \leq \mu_i \leq g,\hskip 0.6em\relax
\mu_i-1\leq \mu_{i+1} \leq \mu_{i} \eqno(1)
$$
we put
$$
W_{\mu}=\{ V \in Y_g : \dim (V \cap Z_{g+1-\mu_i})=i\}\, .
$$
This gives a cell decomposition with cells only in even real dimension. The cell
$W_{\mu}$ has (complex) codimension $\sum \mu_i$.
Denote the set of $g$-tuples $\mu = (\mu_1,\ldots , \mu_g)$
satisfying (1) by $M_g$. Then $\# M_g = 2^g$. Moreover, we
have
$$
W_{\mu} \subseteq {\overline W}_{\nu}\iff \mu_i\geq
\nu_i \hskip 0.6em\relax {\rm for }\hskip 0.6em\relax 1\leq i \leq g \, .
$$
From the cell decomposition we find the homology of $Y_g$.
\begin{proposition}\label{cohcompdual}
The integral homology of $Y_g$ is generated by the cycle classes
$[{\overline W}_{\mu}]$ of the closed cells with $\mu \in M_g$.
The Poincar\'e-polynomial of
$Y_g$ is given by $\sum b_{2i} \, t^i=(1+t)(1+t^2)\cdots
(1+t^g)$.
\end{proposition}
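For instance, for $g=3$ one gets
$$\sum b_{2i}\, t^i=(1+t)(1+t^2)(1+t^3)=1+t+t^2+2t^3+t^4+t^5+t^6,$$
so $Y_3$ has $2^3=8$ cells: one in each complex dimension $0,1,2,4,5,6$ and two in dimension $3$.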
Note that $Y_g$ is a rational variety and that the usual cycle class
map identifies the Chow ring $R_g$ of $Y_g$ with the cohomology ring
of $Y_g$. On $Y_g$ we
have a sequence of tautological vector bundles
$0 \to E \to H \to Q \to 0$,
where $H$ is the trivial bundle of rank $2g$ defined by our fixed
symplectic space and where the fibre $E_y$ of $E$ over $y$ is
the isotropic subspace of dimension $g$ corresponding to $y$.
The bundle $E$ corresponds to the standard representation of $K=U(g)$,
see also Section \ref{The Hodge Bundle}.
The tangent space to a point $e=[E]$ of $Y_g$ is ${\rm
Hom}^{\rm sym}(E,Q)$, the space of symmetric homomorphisms from
$E$ to $Q$; indeed, usually the tangent space of a Grassmannian is
described as ${\rm Hom}(E,Q)$ (``move $E$ a bit and it moves infinitesimally
out of the kernel of $H\to Q$"), but we have to preserve the symplectic
form that identifies $E$ with the dual of $Q$; therefore we have
to take the `symmetric' homomorphisms.
We can identify this with ${\rm Sym}^2(E^{\vee})$.
We consider the Chern classes $u_i= c_i(E) \in R_g= {\CH}^*(Y_g)$ for $
i=1,\ldots,g$. We call them {\sl tautological classes}. The symplectic form
$J$ on $H$ can be used to identify $E$ with the dual of the
quotient bundle $Q$. The triviality of the bundle $H$
implies the following relation for the Chern classes of $E$ in $R_g$:
$$
(1+u_1+u_2+\ldots +u_g)(1-u_1+u_2+\ldots +(-1)^gu_g)=1 \, .
$$
These relations may be succinctly stated as
$${\rm ch}_{2k}(E)=0 \qquad (k\geq 1), \eqno(2)$$
where ${\rm ch}_{2k}$ is the part of degree $2k$ of the Chern character.
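In low degrees this is easily made explicit: the parts of degree $2$ and $4$ of the product relation above read
$$2u_2-u_1^2=0 \qquad \text{and}\qquad 2u_4-2u_1u_3+u_2^2=0,$$
the first of which is just $-2\,{\rm ch}_2(E)=0$, i.e.\ $u_1^2=2u_2$.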
\begin{proposition} The Chow ring $R_g$ of $Y_g$ is
the ring generated by the $u_i$ with as only relations
${\rm ch}_{2k}(E)=0$ for $k\geq 1$.
\end{proposition}
One can check algebraically that the ring which is the quotient of
${\bZ}[u_1,\ldots,u_g]$ by the relation (2) has Betti numbers as
given in Prop.\ \ref{cohcompdual}. This description implies that
this ring after tensoring with ${\bQ}$
is in fact generated by the $u_{j}$ with $j$ odd.
Furthermore, by using induction one obtains the relations
$$
u_g u_{g-1}\cdots
u_{k+1}u_k^2=0 \qquad \text{for $k=g, \ldots, 1$}. \eqno(3)
$$
It follows that this ring has the following set of $2^g$
basis elements
$$
\prod_{i \in I} u_i \, ,\qquad I \subseteq \{1,\ldots,g\}.
$$
The ring $R_g$ is a so-called Gorenstein ring with socle
$u_gu_{g-1} \cdots u_1$.
Using $R_g/ (u_g) \cong R_{g-1}$ one finds the following properties.
\begin{lemma}
In $R_g/(u_g)$ we have $u_1^{g(g-1)/2}\neq 0$ and $u_1^{g(g-1)/2+1}=0$.
\end{lemma}
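For example, for $g=3$ one has $R_3/(u_3)\cong R_2$, where $u_1^2=2u_2$ and $u_2^2=0$ by the relations (2) and (3); hence $u_1^3=2u_1u_2$ is twice the socle and therefore non-zero, while $u_1^4=4u_2^2=0$, in agreement with the lemma since $g(g-1)/2=3$.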
Define now classes in the Chow ring of the flag spaces
$U_g^{(i)}$ as follows:
$$
v_j = c_1(E_j/E_{j-1}) \hskip 0.6em\relax \in CH^*(U_g^{(i)})\hskip 0.6em\relax j=i, \ldots, g.
$$
Moreover, we set
$$
u_j^{(i)} = c_j(E_i) \hskip 0.6em\relax \in CH^*(U_g^{(i)}) \hskip 0.6em\relax j=1,\ldots, i.
$$
We can view $u_j^{(i)}$ as the $j$th elementary symmetric function in $v_1, \ldots,
v_i$. Then we have the relations under the forgetful maps $\pi_i:
U_g^{(i)} \to U_g^{(i+1)}$
$$
\pi_i^*(u_j^{(i+1)})= u_j^{(i)} + v_{i+1} u_{j-1}^{(i)}\qquad j=1,\ldots,
i+1,\eqno(4)
$$
and
$$
v_j^j-u_1^{(j)}v_j^{j-1}+ \ldots + (-1)^ju_j^{(j)}=0 \qquad j=1,\ldots,
g.\eqno(5) $$
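For instance, for $j=1$ relation (5) simply says that $u_1^{(1)}=v_1=c_1(E_1)$, while for $j=2$ it reads $v_2^2-u_1^{(2)}v_2+u_2^{(2)}=0$; this holds because the flag gives $c(E_2)=(1+v_1)(1+v_2)$, so that $u_1^{(2)}=v_1+v_2$ and $u_2^{(2)}=v_1v_2$.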
The Chow ring of $U_g^{(1)}$ has generators
$$
v_1^{\eta_1}v_2^{\eta_2} \cdots v_g^{\eta_g}
u_g^{\epsilon_g}u_{g-1}^{\epsilon_{g-1}}\cdots
u_1^{\epsilon_1}
$$
with $0\leq \eta_i < i$ and $\epsilon_i =0 $ or $1$.
\begin{lemma}\label{Gysin}
The following Gysin formulas hold:
\begin{enumerate}
\item{} $(\pi_i)_*(\pi_i^*c)=0 {\rm ~for~all~} c \in CH^*(U_g^{(i+1)})$;
\item{} $(\pi_i)_*(v_{i+1}^i)=1$;
\item{} $(\pi_i)_*(u_i^{(i)})= (-1)^i$.
\end{enumerate}
\end{lemma}
These formulas together with (4) and (5) completely determine the image of the
Gysin map $(\pi)_*= (\pi_{g-1} \cdots \pi_1)_*$.
For a sequence of $r\leq g$ positive integers $\mu_i$
with $\mu_i \geq \mu_{i+1}$ we define an element of $R_g$:
$$
\Delta_{\mu}(u)= \det \left( u_{\mu_i-i+j} \right)_{1\leq i,j \leq r}.
$$
We also define so-called $Q$-polynomials by the formula:
$$
Q_{i,j} = u_i u_j - 2 u_{i+1}u_{j-1} + \ldots
+(-1)^j 2 u_{i+j} ~~{\rm for}~~ i>j.
$$
We have $Q_{i 0} = u_i$ for $i=1, \ldots ,g$.
Let $\mu$ be a strict partition.
For $r$ even we define an anti-symmetric matrix $[x_{\mu}]
=[x_{ij}]$ as follows. Let
$$
x_{i,j} = Q_{\mu_i \mu_j}(u)
\hskip 0.6em\relax {\rm for ~~} 1\leq i < j \leq r.
$$
We then set for even $r$
$$
\Xi_{\mu} = {\rm Pf}([x_{\mu}]),
$$
while for $r$ odd we define $\Xi_{\mu} = \Xi_{\mu_1 \ldots \mu_r 0}$.
These expressions may look a bit artificial, but their purpose is
clearly demonstrated by the following Theorem, due to Pragacz \cite{Pragacz,F-P}.
\begin{theorem} (Pragacz's formula) The
class of the cycle $[{\overline W}_{\mu}]$ in the Chow ring
is given by (a multiple of) $\Xi_{\mu}(u)$.
\end{theorem}
From (2.4) we have the property:
for partitions $\mu$ and $\nu$
with $\sum \mu_i + \sum \nu_i = g(g+1)/2$ we have
$$
\Xi_{\mu} \Xi_{\nu} =
\begin{cases} 1 & \text{if $\nu = \rho -\mu $,} \\
0 & \text{if $\nu \neq \rho -\mu$,} \\
\end{cases}
$$
where $\rho = \{ g, g-1, \ldots, 1 \}$.
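For $g=2$ this can be checked directly: the strict partitions contained in $\rho=(2,1)$ give $\Xi_{(1)}=Q_{1,0}=u_1$, $\Xi_{(2)}=Q_{2,0}=u_2$ and $\Xi_{(2,1)}=Q_{2,1}=u_2u_1-2u_3=u_1u_2$ (since $u_3=0$ in $R_2$); for $\mu=(2)$ and $\nu=(1)=\rho-\mu$ the duality then amounts to $\Xi_{(2)}\Xi_{(1)}=u_2u_1=1$, the class of a point, in accordance with relation i) below.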
We have the following relations in $CH^*(Y_g)$:
$$
\begin{aligned}
i) \hskip 0.6em\relax & u_gu_{g-1}\ldots u_1=1,\cr
ii)\hskip 0.6em\relax & u_1^N= N!\prod_{k=1}^g \frac{1}{(2k-1)!!} \, ,
\end{aligned}
$$
where $1$ represents the class of a point and $N=g(g+1)/2$.
\begin{proof}
For i): We have $T_{Y_g}\cong {\rm Sym}^2(E^{\vee})$, thus the
Chern classes of the tangent bundle $T_{Y_g}$ are
expressed in the $u_i$. By \cite{Fulton}, Ex.\ 14.5.1, p.\ 265,
for the top Chern class of
${\rm Sym}^2$ of a vector bundle we have
$$
e(Y_g)= 2^g = 2^g \Delta_{(g, g-1, \ldots , 1)}(u)= 2^g \det \left(
\begin{matrix}
u_g & 0 & 0 & \dots & 0\cr
u_{g-2} & u_{g-1} & u_g & \dots & 0\cr
\vdots & & & \ddots & \vdots \cr
 & & & \dots & u_1\cr
\end{matrix}
\right).
$$
Developing the determinant gives
$$
1= u_gu_{g-1}\ldots
u_1+u_g^2A(u)+u_gu_{g-1}^2B(u)+ \ldots \, ,
$$
from which the desired identity results by using (3).
Alternatively, one may use Pragacz's formula above for the degeneracy
locus given by $Z_g = V$ for some subspace $V$.
\end{proof}
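As an illustration, take $g=2$, so that $N=3$: relation ii) predicts $u_1^3=3!\cdot\frac{1}{1!!\, 3!!}=2$, and indeed the degree $2$ part of relation (2) gives $u_1^2=2u_2$, so that by relation i)
$$
u_1^3=2u_1u_2=2\, .
$$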
\section{The Hodge Bundle}\label{The Hodge Bundle}
Since $\A{g}$ comes with a universal principally polarized abelian variety
$\pi: \X{g} \to \A{g}$ we have a rank $g$ vector bundle or locally
free sheaf
$$
{\bE}={\bE}_g = \pi_*(\Omega^1_{\X{g}/\A{g}}),
$$
called the {\sl Hodge Bundle}.
It has an alternative definition as
$$
{\bE}= s^*(\Omega^1_{\X{g}/\A{g}})\, ,
$$
i.e., as the cotangent bundle to the zero section $s: \A{g} \to \X{g}$.
If ${\cX}_g^t$ is the dual abelian variety (isomorphic to $\X{g}$ because
we stick to a principal polarization) then the Hodge bundle ${\bE}^t$
of ${\cX}_g^t$ satisfies
$$
({\bE}^t)^{\vee}={\rm Lie}({\cX}_g^t)\cong R^1\pi_* {\mathcal O}_{\X{g}}.
$$
The Hodge bundle ${\bE}$ can be extended to any toroidal compactification
$\barA{g}$ of Faltings-Chai type. In fact, over $\barA{g}$ we have a
universal family of semi-abelian varieties and one takes the dual
of ${\rm Lie}(\tilde{\cX})$, cf.\ \cite{F-C}.
We shall denote it again by ${\bE}$.
If we now go to a fine moduli space, say $\A{g}[n]$ with $n\geq 3$,
the moduli space of principally polarized abelian varieties with a
level $n$ structure, and take a smooth toroidal compactification
then we can describe the sheaf of holomorphic
$1$-forms in terms of the Hodge bundle. With
$D=\barA{g}[n]-\A{g}[n]$, the divisor at infinity,
we have the important result:
\begin{proposition}\label{OmegaisSym2}
The Hodge bundle ${\bE}$ on $\barA{g}[n]$ for $n\geq 3$
satisfies the identity
$$
{\rm Sym}^2({\bE})\cong \Omega^1_{\barA{g}[n]}(\log D).
$$
\end{proposition}
This result extends the description of the tangent space to ${\Hg}$
in the compact dual. It is proven in the general setting
in \cite{F-C}, p.\ 86.
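Note, as a consistency check, that ${\rm Sym}^2({\bE})$ has rank $g(g+1)/2$, which is precisely the dimension of $\A{g}$ and hence the rank of $\Omega^1_{\barA{g}[n]}(\log D)$.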
Recall the description of $Y_g$ as the symplectic Grassmannian
$G({\bC})/Q({\bC})$ with $G={\rm GSp}(2g,{\bQ})$ and $Q$ the parabolic subgroup
with Levi decomposition $M\ltimes U$ with $M={\rm GL}(g)\times {\bG}_m$.
The Siegel upper half space $\mathfrak{H}_g$
can be viewed as an open subset of $Y_g$. Put $G_0={\rm Sp}(2g,{\bQ})$,
$Q_0=Q\cap G_0$ and $M_0=M \cap G_0$. Then if $\rho: Q_0 \to {\rm GL}(V)$
is a finite-dimensional complex representation, we define an equivariant
vector bundle ${\cV}_{\rho}$ on $Y_g$ by
$$
{\cV}_{\rho} = G_0({\bC}) \times^{Q_0({\bC})} V,
$$
where the contracted product is defined by the usual equivalence relation
$(g,v) \sim (gq, \rho(q)^{-1} v)$ for all $g \in G_0({\bC})$ and
$q \in Q_0({\bC})$. Then our group ${\rm Sp}(2g, {\bZ})$ acts
on the bundle ${\cV}_{\rho}$ and the quotient is a vector bundle $V_{\rho}$
in the orbifold sense on
$\A{g}({\bC})={\rm Sp}(2g, {\bZ})\backslash \mathfrak{H}_g$.
A representation of ${\rm GL}(g)$ can be lifted to a representation
of $Q_0$ by letting it act trivially on the unipotent radical $U$
of $Q_0$. Carrying out this construction
with the standard (tautological) representation
of ${\rm GL}(g)$ produces the Hodge bundle.
The Hodge bundle can be extended to a bundle over our toroidal
compactification $\barA{g}$. Since any bundle $V_{\rho}$ is obtained
by applying Schur functors to powers of the Hodge bundle
(see \cite{F-H}) we can extend
the bundle $V_{\rho}$ by applying the Schur functor to the extended
power of the Hodge bundle. In this way we obtain a canonical extension
to $\barA{g}$ for all equivariant holomorphic bundles $V_{\rho}$.
\section{The Tautological Ring of $\A{g}$}
The moduli space $\A{g}$ is a Deligne-Mumford stack or orbifold
and as such it has a Chow ring. To be precise, consider the moduli
space $\A{g}\otimes k$ over an algebraically closed field $k$.
For simplicity we shall write $\A{g}$ instead.
We are interested in
the Chow rings $\CH(\A{g})$ and ${\CHQ}(\A{g})$ of this moduli space.
In general these rings seems unattainable. However, they contain subrings
that we can describe well and that play an important role.
We denote the Chern classes of the Hodge bundle by
$$
\lambda_i := c_i ({\bE}), \qquad i=1,\ldots,g.
$$
We define the {\sl tautological subring} of the Chow ring
${\CHQ}(\A{g})$ as the subring generated by the Chern classes of the Hodge
bundle ${\bE}$.
The main result is a description of the tautological ring in terms of
the Chow ring $R_{g-1}$ of the compact dual $Y_{g-1}$ (see \cite{vdG:Cycles}).
\begin{theorem}\label{tautringofAg}
The tautological subring $T_g$ of the Chow ring ${\CHQ}(\A{g})$ generated
by the $\lambda_i$ is isomorphic to the ring $R_{g-1}$.
\end{theorem}
This implies that a basis of the codimension $i$ part is given by the monomials
$$
\lambda_1^{e_1} \lambda_2^{e_2} \cdots \lambda_{g-1}^{e_{g-1}}
\hskip 0.6em\relax \text{ $e_j \in \{0,1\}$ and $\sum_{j=1}^{g-1} j\, e_j =i$.}
$$
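For example, for $g=3$ the theorem gives $T_3\cong R_2$, with basis $1,\lambda_1,\lambda_2,\lambda_1\lambda_2$ in codimensions $0,1,2,3$; Theorem~\ref{lambdarelation} below then gives $\lambda_1^2=2\lambda_2$ and $\lambda_2^2=2\lambda_1\lambda_3-2\lambda_4=0$ (as $\lambda_3=0$ by Theorem~\ref{lambdagiszero} and $\lambda_4=0$ for rank reasons), so that $\lambda_1^3=2\lambda_1\lambda_2\neq0$ while $\lambda_1^4=0$.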
This theorem follows from the following four results, each interesting
in its own right.
\begin{theorem}\label{lambdarelation}
The Chern classes $\lambda_i$ of the Hodge bundle ${\bE}$ satisfy the relation
$$
(1+\lambda_1+\cdots +\lambda_g)(1-\lambda_1+\cdots +(-1)^g \lambda_g)=1.
$$
\end{theorem}
\begin{theorem}\label{lambdagiszero}
The top Chern class $\lambda_g$ of the Hodge bundle ${\bE}$ vanishes
in ${\CHQ}(\A{g})$.
\end{theorem}
\begin{theorem}\label{lambda1isample}
The first Chern class $\lambda_1$ of ${\bE}$ is ample on $\A{g}$.
\end{theorem}
\begin{theorem}\label{completecodimg}
In characteristic $p>0$ the moduli space $\A{g}\otimes {\bF}_p$ contains
a complete subvariety of codimension $g$.
\end{theorem}
To deduce Theorem \ref{tautringofAg} from Thms \ref{lambdarelation} -
\ref{completecodimg} we argue as follows.
By Theorem \ref{lambdarelation} the tautological subring $T_g$ is
a quotient ring of $R_g$ via $u_i \mapsto \lambda_i$,
and then by Theorem \ref{lambdagiszero} also
of $R_{g-1}\cong R_g/(u_g)$. The ring $R_{g-1}$ is a Gorenstein ring with
socle $u_1u_2 \cdots u_{g-1}$ and this is a non-zero multiple of
$u_1^{g(g-1)/2}$. By Theorem \ref{lambda1isample} the ample class
$\lambda_1$ satisfies $\lambda_1^j \cdot [V]\neq 0$ on any complete
subvariety $V$ of dimension~$j$ in $\A{g}$. The existence of a complete
subvariety of codimension $g$ in positive characteristic implies that
$\lambda_1^{g(g-1)/2}$ does not vanish in ${\CHQ}(\A{g}\otimes {\bF}_p)$,
hence the class $\lambda_1^{g(g-1)/2}$ does not vanish in ${\CHQ}(\A{g})$.
If the surjection $R_{g-1} \to T_g$ defined by $u_i \mapsto \lambda_i$ had
a non-trivial kernel, this kernel would be a non-zero ideal of the Gorenstein
ring $R_{g-1}$ and would therefore contain the socle, so that
$\lambda_1^{g(g-1)/2}$ would have to vanish.
Theorem \ref{lambdarelation} was proved in \cite{vdG:Cycles}.
The proof is obtained by
applying the Grothen\-dieck-Riemann-Roch formula to the theta divisor
on the universal abelian variety ${\Xg}$ over $\A{g}$. In fact, take
a line bundle $L$ on ${\Xg}$ that provides on each fibre $X$ a theta
divisor and normalize $L$ such that it vanishes on the zero section of
${\Xg}$ over $\A{g}$. Then the Grothendieck-Riemann-Roch Theorem tells us
that
$$
\begin{aligned}
{\rm ch}(\pi_{!}L) &= \pi_{*}({\rm ch}(L) \cdot
{\rm Td}^{\vee}(\Omega^1_{{\Xg}/\A{g}}))\cr
&=\pi_{*}({\rm ch}(L) \cdot
{\rm Td}^{\vee}(\pi^*({\bE_g})))\cr
&= \pi_{*}({\rm ch}(L)) \cdot {\rm Td}^{\vee}({\bE}),\cr
\end{aligned} \eqno(1)
$$
where we used $\Omega^1_{{\Xg}/\A{g}} \cong \pi^*({\bE})$ and
the projection formula.
Here ${\rm Td}^{\vee}$ is defined for a line bundle with first Chern class
$\gamma $ by $\gamma/(e^{\gamma}-1)$.
But $R^i\pi_*L=0$ for our relatively
ample $L$ for $i>0$ and so $\pi_{!}L$ is represented by a vector bundle,
and because $L$ defines a principal polarization, by a line bundle.
We write $\theta=c_1(\pi_{!}L)$ and $\Theta=c_1(L)$. This gives the identity
$$
\sum_{k=0}^{\infty} \frac{\theta^k}{k!}=
\pi_{*}(\sum_{k=0}^{\infty} \frac{\Theta^{g+k}}{(g+k)!}) \,
{\rm Td}^{\vee}({\bE}). \eqno(2)
$$
Recall that
${\rm Td}^{\vee}({\bE})=1-\lambda_1/2+ (\lambda_1^2+\lambda_2)/12+ \ldots $.
We can compare terms of equal codimension; the codimension $1$ term gives
$$
\theta=-\lambda_1/2 +\pi_{*}(\Theta^{g+1}/(g+1)!). \eqno(3)
$$
If we now look at how both sides of (2)
behave when we replace $L$ by $L^{\otimes n}$, we
see immediately that the term $\Theta^{g+k}$ on the right hand side changes
by a factor $n^{g+k}$. As to the left hand side, for a principally polarized
abelian variety $X$ the space of sections $H^0(X,L_X^{\otimes n})$ is a representation
of the Heisenberg group; it is the irreducible representation of degree $n^g$.
This implies that ${\rm ch}(\pi_{!}L^{\otimes n})=n^g \, {\rm ch}(\pi_{!}L)$.
We see
$$
n^g \, \sum_{k=0}^{\infty} \frac{\theta^k}{k!} = \pi_*\left(\sum_{k=0}^{\infty}
\frac{ n^{g+k} \Theta^{g+k}}{(g+k)!}\right) \cdot {\rm Td}^{\vee}({\bE}).
$$
Comparing the coefficients leads immediately to the following result.
\begin{corollary} In ${\CHQ}(\A{g})$ we have
$\pi_*({\rm ch}(L))=1$ and ${\rm ch}(\pi_*L)={\rm Td}^{\vee}({\bE})$.
\end{corollary}
In particular $\pi_*(\Theta^{g+1})=0$. Substituting this in (3)
gives the following result.
\begin{corollary} (Key Formula) We have $2\theta= -\lambda_1$.
\end{corollary}
\begin{corollary}
We have ${\rm ch}_{2k}({\bE})=0$ for $k\geq 1$.
\end{corollary}
\begin{proof}
The relation (1) reduces now to
$$
e^{-\lambda_1/2}= {\rm Td}^{\vee}({\bE}).
$$
If, say $\rho_1,\ldots,\rho_g$ are the Chern roots of ${\bE}$, so that
our $\lambda_i$ is the $i$th elementary symmetric function in the $\rho_j$,
then this relation is
$$
\prod_{i=1}^g \frac{e^{\rho_i/2}-e^{-\rho_i/2}}{\rho_i} = 1\, .
$$
This is equivalent to ${\rm ch}_{2k}({\bE})=0$ for $k>0$ and also to
${\rm Td}({\bE}\oplus {\bE}^{\vee})=1$.
\end{proof}
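Spelled out in the lowest degrees, the relation of Theorem \ref{lambdarelation} gives
$$
\lambda_1^2=2\lambda_2 \hskip 0.6em\relax \text{and} \hskip 0.6em\relax \lambda_2^2=2\lambda_1\lambda_3-2\lambda_4
$$
(with $\lambda_i=0$ for $i>g$); the first identity is ${\rm ch}_2({\bE})=0$ and the two
together are equivalent to ${\rm ch}_2({\bE})={\rm ch}_4({\bE})=0$.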
The proof of Theorem \ref{lambdagiszero} is also an application of
Grothendieck-Riemann-Roch, this time applied to the structure sheaf
(i.e., $n=0$ in the preceding case). We find
$$
{\rm ch}(\pi_! {\mathcal O}_{\Xg})=
\pi_{*}({\rm ch}({\mathcal O}_{\Xg})\cdot
{\rm Td}^{\vee}(\pi^*({\bE}_g)))= \pi_{*}(1)\, {\rm Td}^{\vee}({\bE}_g)=0.
$$
For an abelian variety $X$ the cohomology group $H^i(X,{\mathcal O}_X)$
is the $i$th
exterior power of $H^1(X,{\mathcal O}_X)$ and we thus
see that the left hand side equals
(global Serre duality)
$$
{\rm ch}(1-{\bE}^{\vee}+ \wedge^2{\bE}^{\vee}- \cdots
+(-1)^g \wedge^g {\bE}^{\vee}).
$$
But by some general yoga (see \cite{BS}) we have for a vector bundle $B$
of rank $r$ the relation
$ \sum_{j=0}^r (-1)^j {\rm ch}(\wedge^j B^{\vee})= c_r(B) {\rm Td}(B)^{-1}$.
So the left hand side is $\lambda_g\, {\rm Td}({\bE})^{-1}$ and since
${\rm Td}({\bE})$ starts with $1+\ldots$ it follows that $\lambda_g$ is zero.
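For $g=1$ the left hand side is ${\rm ch}(1-{\bE}^{\vee})=1-e^{-\lambda_1}=\lambda_1-\lambda_1^2/2+\cdots$,
so the vanishing indeed forces $\lambda_1=0$ in ${\CHQ}(\A{1})$; this reflects the fact,
recalled below, that $\lambda_1$ is a torsion class of order $12$ on $\A{1}$.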
Theorem \ref{lambda1isample} is a classical result in characteristic zero.
If we present our moduli space $\A{g}$ over ${\bC}$ as the quotient
space ${\SpgZ} \backslash {\Hg}$ then the determinant of the Hodge bundle
is an (orbifold) line bundle that corresponds to the factor of automorphy
$$
\det (c\tau +d) \qquad \text{ for $\tau \in {\Hg}$ and $
\left(\begin{matrix} a & b \\ c & d \\ \end{matrix} \right) \in {\SpgZ}$.}
$$
In other words, a section of the $k$th power of $\det({\bE})$ gives by
pull back to ${\Hg}$ a holomorphic function $f: {\Hg} \to {\bC}$
with the property that
$$
f((a\tau+b)(c\tau+d)^{-1}) = \det (c\tau+d)^k f(\tau).
$$
A very well-known theorem by Baily and Borel (see \cite{B-B})
says that modular forms
of sufficiently high weight $k$ define an embedding of
${\SpgZ} \backslash {\Hg}$
into projective space. The idea is that one can construct sufficiently many
modular forms to separate points and tangent vectors on $\A{g}({\bC})$.
So $\lambda_1$ is an ample class.
Clearly this holds then also in characteristic $p$
if $p$ is sufficiently large. So if we have a complete subvariety
of dimension $j$ in $\A{g}\otimes {\bF}_p$ for every $p$, then
$\lambda_1^{j}$ cannot vanish when restricted to this subvariety.
Theorem \ref{lambda1isample} was extended to all characteristics by Moret-Bailly, see \cite{M-B}.
\section{The Tautological Ring of ${\barA{g}}$}
In this section, just as in the preceding one, we will write $\A{g}$
for $\A{g}\otimes k$ with $k$ an algebraically closed field.
The class $\lambda_g$ vanishes in the rational Chow ring of $\A{g}$.
However, in a suitable compactification of $\A{g}$ this is no longer true
and instead of the quotient $R_{g-1}=R_g/(u_g)$ we find a copy of $R_g$
in the Chow ring of such a compactification.
We consider a smooth toroidal compactification $\barA{g}$
of $\A{g}$ of the type constructed by Faltings and Chai. Over $\barA{g}$
we have a `universal' semi-abelian variety ${\cG}$ with a zero-section $s$,
see \cite{F-C}. Then the dual of $s^*{\rm Lie}({\cG})$ defines in a canonical way
an extension of the Hodge bundle ${\bE}$ on $\A{g}$ to a vector bundle
on $\barA{g}$. We will denote this extension again by ${\bE}$.
The relation (1) of the preceding section
can now be extended to ${\barA{g}}$. A proof of this extension
was given by Esnault and Viehweg, see \cite{E-V}. They show that in
characteristic $0$ for the Deligne extension of the cohomology sheaf
${\mathcal H}^1$ with its Gauss-Manin connection the Chern character
vanishes in degree $\neq 0$ by applying Grothendieck-Riemann-Roch to a
log extension and using the action of $-1$ to separate weights.
By specializing one finds the following result.
\begin{theorem}\label{extendedlambdarelation}
The Chern classes $\lambda_i$ of the Hodge bundle ${\bE}$ on ${\barA{g}}$
satisfy the relation
$$
(1+\lambda_1+\cdots + \lambda_g)(1-\lambda_1+\cdots +(-1)^g \lambda_g)=1.
$$
\end{theorem}
This implies the following result:
\begin{theorem}
The tautological subring $\tilde{T}_g$ of ${\CHQ}({\barA{g}})$
is isomorphic to $R_g$.
\end{theorem}
\begin{proof}
Clearly, by the relation of \ref{extendedlambdarelation} the tautological
ring is a quotient of $R_g$; since $\lambda_1$ is ample on an open dense part
($\A{g}$) the socle
$$
\lambda_g\lambda_{g-1}
\cdots \lambda_1 = \frac{1}{(g(g+1)/2)!} \left( \prod_{k=1}^g (2k-1)!! \right)
\lambda_1^{g(g+1)/2}
$$
does not vanish,
and the tautological ring must be isomorphic to $R_g$.
\end{proof}
The Satake compactification $\SatAg$ of $\A{g}$, defined in general as the proj of the ring of Siegel modular forms,
possesses a stratification
$$
\SatAg= \A{g} \sqcup {\cA}_{g-1} \sqcup \cdots \sqcup {\cA}_1 \sqcup {\cA}_0.
$$
By the natural map $q: \barA{g} \to \SatAg$ this induces a stratification of
${\barA{g}}$: we let ${\cA}_g^{(t)}$ be the inverse image $q^{-1}(
{\cA}_g \sqcup {\cA}_{g-1} \sqcup \cdots \sqcup {\cA}_{g-t})$, the moduli space of
abelian varieties of torus rank $\leq t$.
We have the following extension of Theorems \ref{lambdagiszero} and
\ref{completecodimg}.
\begin{theorem}
For $t<g$ we have the relation
$\lambda_g\lambda_{g-1} \cdots \lambda_{g-t}=0$
in the Chow group ${\rm CH}_{\bQ}^m({\cA}_g^{(t)})$ with $m=\sum_{i=0}^t (g-i)$.
\end{theorem}
\begin{theorem}
The moduli space ${\cA}_g^{(t)}\otimes {\bF}_p$ in characteristic $p>0$
contains a complete subvariety of codimension $g-t$, namely
the locus of abelian
varieties of $p$-rank~$\leq t$.
\end{theorem}
\begin{corollary}
We have $\lambda_1^{g(g-1)/2+t} \neq 0 $ on ${\cA}_g^{(t)}$.
\end{corollary}
\begin{proposition}
For $0\leq k \leq g$ and $r>0$ we have on the boundary
$\SatA{k} \subset \SatA{g}$ of the Satake compactification
$$
\lambda_1^{k(k+1)/2} \neq 0 \qquad \text{and} \qquad
\lambda_1^{k(k+1)/2 +r}=0 .
$$
\end{proposition}
This follows from the fact that $\lambda_1$ is an ample class on $\SatA{g}$,
that $\lambda_1 | \SatA{k}$ is again $\lambda_1$ (now of course defined
on $\SatA{k}$) and the fact that $\dim \SatA{k}= k(k+1)/2$.
\section{The Proportionality Principle}
The result on the tautological ring has the following immediate corollary,
a special case of the so-called Proportionality Principle of
Hirzebruch-Mumford:
\begin{theorem}
The characteristic numbers
of the Hodge bundle are proportional to those of the tautological bundle
on the `compact dual' $Y_g$:
$$
\lambda_1^{n_1} \cdots \lambda_g^{n_g}([{\barA{g}}])=
(-1)^{g(g+1)/2} \frac{1}{2^g}
\left( \prod_{k=1}^g \zeta(1-2k)
\right) \cdot u_1^{n_1}\cdots u_g^{n_g} ([Y_g])
$$
for all $(n_1,\ldots,n_g)$ with $\sum_i i\, n_i= g(g+1)/2$.
\end{theorem}
Indeed, any top-dimensional class $\lambda_1^{n_1}\cdots \lambda_g^{n_g}$
is a multiple $m(n_1,\ldots,n_g)$ times the top-dimensional class
$\lambda_1 \cdots \lambda_g$ where the coefficient depends only on the structure
of $R_g$. So the proportionality principle is clear, and to make it explicit
one must find the value of one top-dimensional class,
say $\lambda_1 \cdots \lambda_g$ on ${\barA{g}}$.
The top Chern class of the bundle ${\rm Sym}^2({\bE})$ on $\A{g}$ equals
$$
2^g \, \lambda_1 \cdots \lambda_g \, .
$$
We can interpret $2^g \deg (\lambda_1 \cdots \lambda_g)$ up to a sign
$(-1)^{g(g+1)/2}$
as the log version of the Euler number of $\A{g}$.
It is known via the work of Siegel and Harder:
\begin{theorem} (Siegel-Harder) The Euler number of $\A{g}$ is equal to
$$
\zeta(-1)\zeta(-3) \cdots \zeta(1-2g).
$$
\end{theorem}
\begin{example}
For $g=1$ the Euler number of $\A{1}({\bC})$ is $-1/12$. It is obtained by
integrating
$$
(-1)\frac{1}{2\pi} \frac{dx \wedge dy}{y^2}
$$
over the usual fundamental domain ($\{ \tau=x+iy \in \mathfrak{H}: |x| \leq 1/2,
x^2+y^2 \geq 1\}$) for the action of ${\rm SL}(2,{\bZ})$
(which gives $2\zeta(-1)=-1/6$) and
then dividing by $2$ since the group ${\rm SL}(2,{\bZ})$ does not
act effectively ($-1$ acts trivially).
\end{example}
The Proportionality Principle immediately extends to our equivariant bundles
$V_{\rho}$ since these are obtained by applying Schur functors to
powers of the Hodge bundle.
\begin{corollary} (The Proportionality Principle of Hirzebruch-Mumford)
The characteristic numbers of the equivariant vector bundle $V_{\rho}$
on $\tilde{\A{g}}$ are proportional to those of the corresponding bundle
on the compact dual~$Y_g$.
\end{corollary}
The proportionality factor here is
$$
p(g)= (-1)^{g(g+1)/2} \prod_{j=1}^g \frac{\zeta(1-2j)}{2}
$$
We give a few values.
\vbox{
\centerline{\def\hskip 0.6em\relax{\hskip 0.6em\relax}
\def\hskip 0.5em\relax {\hskip 0.5em\relax }
\vbox{\offinterlineskip
\hrule
\halign{&\vrule#&\strut\hskip 0.5em\relax \hfil#\hskip 0.6em\relax\cr
height2pt&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&\cr
&$g$&&$1$&&$2$&&$3$&&$4$&&$5$&\cr
height2pt&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&\cr
\noalign{\hrule}
height2pt&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&\cr
&${1/p(g) } $&&$24$&&$5760$&&$2903040$&&$1393459200$&&$367873228800$&\cr
height2pt&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&\cr
} \hrule}
}}
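As a check on the first entries: $p(1)=-\zeta(-1)/2=1/24$ and
$p(2)=-\zeta(-1)\zeta(-3)/4=1/5760$, using $\zeta(-1)=-1/12$ and $\zeta(-3)=1/120$.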
A remark about the history of this `principle'.
Hirzebruch, inspired by a paper of Igusa, found in 1958 (see \cite{H:PT})
that for a discrete torsion-free group $\Gamma$
of automorphisms of a bounded symmetric domain $D$ such that the quotient
$X_{\Gamma}=\Gamma\backslash D$ is compact, the Chern numbers of the quotient
are proportional to the Chern numbers of the compact dual $\hat{D}$ of $D$.
In the case of a subgroup $\Gamma \subset {\rm Sp}(2g,{\bR})$ that acts
freely on ${\Hg}$ this means that there is a proportionality factor
$e_{\Gamma}$ such that for all $n=(n_1,\ldots, n_{g(g+1)/2})$
with $n_i\in {\bZ}_{\geq 0}$ and $\sum_i i\, n_i=g(g+1)/2$ we have
$$
c^n(X_{\Gamma})=
e_{\Gamma}\, c^n(Y_g) \, ,
$$
where $c^n$ stands for the top Chern class $\prod_i c_i^{n_i}$
and $Y_g$ is the compact dual of $\mathfrak{H}_g$.
His argument was that the top Chern classes of $X$ are represented by
$G({\bR})$-invariant differential forms of top degree and these are
proportional to each other on $D$ and on $\hat{D}$. The principle
extends to all equivariant holomorphic vector bundles on ${\Hg}$.
Mumford has obtained (see \cite{Mumford:HPP})
an extension of Hirzebruch's Proportionality
Principle that applies to his toroidal compactifications of $\A{g}$.
\section{The Chow Rings of $\barA{g}$ for $g=1,2,3$}\label{Chowringssmallg}
For low values of $g$ the Chow rings of $\A{g}$ and their toroidal
compactifications can be explicitly described.
\begin{theorem}
The Chow ring $\CHQ({\barA1})$ is isomorphic to
${\bQ}[\lambda_1]/(\lambda_1^2)$.
\end{theorem}
Mumford determined in \cite{Mumford:Enumerative} the Chow ring of $\barA{2}$.
\begin{theorem}
The Chow ring ${\CHQ}(\barA2)$ is generated by
$\lambda_1$, $\lambda_2$ and $\sigma_1$ and isomorphic to
${\bQ}[\lambda_1,\lambda_2,\sigma_1]/(I_2),$ with $I_2$ the ideal generated by
$$
(1+\lambda_1+\lambda_2)(1-\lambda_1+\lambda_2)-1, \hskip 0.6em\relax \lambda_2\sigma_1,
\hskip 0.6em\relax \sigma_1^2-22\lambda_1\sigma_1+120 \lambda_1^2.
$$
The ranks of the Chow groups are $1,2,2,1$.
\end{theorem}
The intersection numbers in this ring are given by
\centerline{\def\hskip 0.6em\relax{\hskip 0.6em\relax}
\def\hskip 0.5em\relax {\hskip 0.5em\relax }
\vbox{\offinterlineskip
\hrule
\halign{&\vrule#&\strut\hskip 0.5em\relax \hfil#\hskip 0.6em\relax\cr
height2pt&\omit&&\omit&&\omit&\cr
&$2\backslash 1$&&$\lambda_1$&&$\sigma_1$&\cr
height2pt&\omit&&\omit&&\omit&\cr
\noalign{\hrule}
height2pt&\omit&&\omit&&\omit&\cr
&$\lambda_1^2$&&$1/2880$&&$0$&\cr
&$\lambda_1\sigma_1$&&$0$&&$-1/24$&\cr
height2pt&\omit&&\omit&&\omit&\cr
\cr } \hrule}
}
\par
\noindent
while the degrees of the top classes in the sigmas are
$$
\deg(\sigma_3)=1/4, \hskip 0.6em\relax \deg(\sigma_2\sigma_1) =-1/4, \hskip 0.6em\relax
\deg(\sigma_1^3)=-11/12.
$$
For genus $3$ the result is (cf.\ \cite{vdG:CHA3}):
\begin{theorem}
The tautological ring of ${\cA}_3$ is generated by the Chern classes
$\lambda_i$ and isomorphic to ${\bQ}[\lambda_1]/(\lambda_1^4)$.
\end{theorem}
\begin{theorem}
The Chow ring ${\CHQ}(\barA3)$ of $\tilde{\mathcal A}_3$ is generated by the
$\lambda_i$, $i=1,2,3$ and the $\sigma_i$, $i=1,2$ and is
isomorphic to the graded ring (subscript is degree)
$$
{\bQ}[\lambda_1,\lambda_2,\lambda_3,\sigma_1,\sigma_2]/I_3,
$$
with $I_3$ the ideal generated by the relations
\begin{align}\notag
&(1+\lambda_1+\lambda_2+\lambda_3)(1-\lambda_1+\lambda_2-\lambda_3)=1, \cr
&\lambda_3\sigma_1=\lambda_3\sigma_2=\lambda_1^2\sigma_2=0,\cr
&\sigma_1^3= 2016 \, \lambda_3 - 4 \, \lambda_1^2 \sigma_1 -24
\, \lambda_1\sigma_2 +\frac{11}{3}\, \sigma_2\sigma_1,\cr
&\sigma_2^2= 360\, \lambda_1^3\sigma_1 -45 \, \lambda_1^2\sigma_1^2
+15 \, \lambda_1\sigma_2\sigma_1,\cr
&\sigma_1^2\sigma_2=1080\, \lambda_1^3 \sigma_1-165 \, \lambda_1^2\sigma_1^2+
47\, \lambda_1\sigma_2\sigma_1.\cr
\end{align}
The ranks of the Chow groups are: $1, 2, 4, 6, 4, 2, 1$.
\end{theorem}
For the degrees of the top dimensional elements we refer to \cite{vdG:CHA3}.
\section{Some Intersection Numbers}
As stated in Section \ref{Ag}
the various compactifications employed for $\A{g}$
each have their own merits. For example, the toroidal compactification
associated to the perfect cone decomposition has the advantage that its boundary
is an irreducible divisor $D=D_g$.
By a result of Borel \cite{Borel} it is known that in degrees $\leq g-2$
the cohomology of ${\rm Sp}(2g,{\bZ})$ (and in fact for any finite index
subgroup $\Gamma$) is generated by classes in degree $4k+2$ for
$k=0,1,\ldots$ with $4k+2\leq g-2$:
$$
H^*(\Gamma, {\bQ})= {\bQ}[x_2,x_6,x_{10},\ldots] \, .
$$
In particular in degree $2$ there is one generator. We deduce (at least for
$g\geq 6$ and with special arguments also for $2 \leq g \leq 5$):
\begin{proposition}
The Picard group of $\A{g}$ for $g\geq 2$
is generated by the determinant $\lambda_1$
of the Hodge bundle.
\end{proposition}
By observing that on $\A{g}^{(1)}$ and $\A{g}^{\rm Perf}$ the boundary divisor
is irreducible we get:
\begin{corollary} The Picard group of $\A{g}^{(1)}$ and of
${\cA}_g^{\rm Perf}$ is generated by $\lambda_1$ and the class of $D$.
\end{corollary}
It would be nice to know the top intersection numbers $\lambda_1^n D^{G-n}$
with $G=g(g+1)/2=\dim \A{g}$. It seems that these numbers are zero
when $n$ is not of the form $k(k+1)/2$. In fact, Erdenberger, Grushevsky
and Hulek formulated
this as a conjecture, see \cite{E-G-H}.
\begin{conjecture}
The intersection number $\deg \lambda_1^n\, D^{G-n}$ on ${\cA}_g^{\rm Perf}$
vanishes unless $n$ is of the form $n=k(k+1)/2$ for $k\leq g$.
\end{conjecture}
From our results above it follows that
$$
\deg \lambda_1^G =
(-1)^G \frac{G!}{2^g} \prod_{k=1}^g \frac{\zeta(1-2k)}{(2k-1)!!}\, .
$$
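For example, for $g=2$ (so $G=3$) this gives
$$
\deg \lambda_1^3= (-1)^3\, \frac{3!}{2^2}\cdot \frac{\zeta(-1)}{1!!}\cdot \frac{\zeta(-3)}{3!!}
= -\frac{3}{2}\cdot \left(-\frac{1}{12}\right)\cdot \frac{1}{360}= \frac{1}{2880}\, ,
$$
in agreement with the intersection table for $\barA2$ in Section \ref{Chowringssmallg}.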
Erdenberger, Grushevsky and Hulek calculated the next two cases. We
quote only the first
$$
\deg \lambda_1^{g(g-1)/2} D^g = (-1)^{G-1}\frac{(g-1)!(G-g)!}{2}
\prod_{k=1}^{g-1} \frac{\zeta(1-2k)}{(2k-1)!!}
$$
and we refer for the intersection number
$\deg \lambda_1^{(g-2)(g-1)/2} D^{2g-1}$ to loc.\ cit.
\section{The Top Class of the Hodge Bundle}
As before, we shall write here $\A{g}$ for $\A{g} \otimes k$
with $k$ an algebraically closed field.
The cycle class $\lambda_g$ vanishes in $\CHQ^g(\A{g})$.
However, it does not vanish on $\CHQ^g(\barA{g})$,
for example because $\lambda_1\lambda_2\cdots \lambda_g$ is a
non-zero multiple of $\lambda_1^{g(g+1)/2}$ that has positive degree
since $\lambda_1$ is ample on $\A{g}$.
This raises two questions. First,
what is the order of $\lambda_g$ in ${\CH}^g(\A{g})$ as a torsion class?
Second, since up to torsion $\lambda_g$ comes from the boundary $\barA{g}-
\A{g}$, one can ask for a natural supporting cycle for this class in the
boundary. Since we work on stacks one has to use intersection theory
on stacks; the theory is still in its infancy, but we use Kresch's
approach \cite{Kresch}. Mumford answered the first question for $g=1$
in \cite{Mumford:12L}.
In joint work with Ekedahl (\cite{E-vdG:TOP,E-vdG:lambdag})
we considered these two questions.
Let us begin with the torsion order of $\lambda_g$ in ${\rm CH}^g(\A{g})$.
A well-known formula of Borel and Serre
(used above in Section \ref{The Hodge Bundle}) says that
the Chern character of the alternating sum of the exterior powers
of the Hodge bundle satisfies
$$
{\rm ch}(\wedge^*{\bE})= (-1)^g \lambda_g {\rm Td}({\bE})^{-1}
$$
and this implies that the terms of the Chern character in degrees $1, \ldots, g-1$ vanish
and that the $g$-th Chern class of this element of $K_0$ equals $-(g-1)!\, \lambda_g$.
\begin{lemma}
Let $p$ be a prime and $\pi: A \to S$ be a family of abelian varieties
of relative dimension $g$ and $L$ a line bundle on $A$ that is of order $p$
on all fibres of $\pi$. If $p> \min(2g,\dim S +g)$ then $p \, (g-1)! \lambda_g =0$.
\end{lemma}
Indeed, we may assume after twisting by a pull back from the base $S$
that $L^p$ is trivial. Let $[L]$ be its class in $K_0(A)$. Then
$$
0=[L]^p-1= ([L]-1)^p+p\, ([L]-1)(1+\frac{p-1}{2}([L]-1)+\ldots )\, .
$$
Now $[L]-1$ is nilpotent because it has support of codimension $\geq 1$ and
so it follows that $p\, ([L]-1)$ lies in the ideal generated by
$([L]-1)^p$ which has codimension $\geq p$. Now if $p>2g$ or $p> \dim S +g$ the
codimension is $> 2g$, hence the image under $\pi$ has codimension $>g$
or is zero. We thus can safely remove it and may assume that $p\, [L]=p$
in $K_0(A)$. Consider now the Poincar\'e bundle $P$ on $A \times \hat{A}$;
we know that $H^i(X \times \hat{X},P)$ for an abelian variety $X$ is
zero for $i\neq g$ and $1$-dimensional for $i=g$. So the derived pullback
of $R\pi_*P$ along the zero section of $A\times \hat{A}$ over $A$ is $R\pi_*{\mathcal O}_A$. We know that $p\, [P]=p[L\otimes P]$ and $p\, [R\pi_*P]=
p\, [R\pi_*(L\otimes P)]$ and $R\pi_*(L\otimes P)$ has support
along the inverse of the section of $\hat{A}$ corresponding to $L$.
This section is
everywhere disjoint from the zero section, so the pull back of
$R\pi_*(L\otimes P)$ along the zero section is trivial:
$p\, [R\pi_* {\mathcal O}_A]=0$ and since $R\pi_*{\mathcal O}_A=\wedge^*
R^1\pi_*{\mathcal O}_A=\wedge^* {\bE}$ we find $-p (g-1)! \lambda_g=0$.
We put
$$
n_g:= {\rm gcd}\{ p^{2g}-1: \text{ $p$ running through the primes $>2g+1$} \} \, .
$$
Note that for a prime $\ell$ we have $\ell^k | n_g$ if and only if
the exponent of $({\bZ}/\ell^k{\bZ})^*$ divides $2g$.
By Dirichlet's prime number theorem we have for $p>2$
$$
{\rm ord}_p(n_g)= \begin{cases}
0 & (p-1)\not| 2g \\
\max \{ k : p^{k-1}(p-1) | 2g \} & (p-1)|2g, \\
\end{cases}
$$
while
$$
{\rm ord}_2(n_g)= \max \{ k: 2^{k-2} | 2g\}.
$$
For example, we have
$$
n_1=24, \hskip 0.6em\relax n_2=240, \hskip 0.6em\relax n_3=504, \hskip 0.6em\relax n_4=480.
$$
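The value $n_1=24$ can be checked directly: the primes $p>3$ give $5^2-1=24$, $7^2-1=48$,
$11^2-1=120, \ldots$ with greatest common divisor $24$; equivalently ${\rm ord}_2(n_1)=3$,
${\rm ord}_3(n_1)=1$ and no prime $p\geq 5$ contributes since then $(p-1)\not| \, 2$.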
\begin{theorem}
Let $\pi: \X{g}\to \A{g}$ be the universal family.
Then $(g-1)! n_g \lambda_g =0$.
\end{theorem}
For the proof we consider the commutative diagram
$$
\begin{matrix}
\X{g}' & \longrightarrow & \X{g} \\
\downarrow && \downarrow \\
\A{g}' & \longrightarrow & \A{g} \\
\end{matrix}
$$
where $\A{g}'\to \A{g}$ is the degree $p^{2g}-1$ cover
obtained by adding a line bundle
of order $p$. It follows that $(g-1)! p (p^{2g}-1) \lambda_g$ vanishes.
Then the rest follows from the definition of $n_g$.
In \cite{Mumford:12L} Mumford proved that the order of $\lambda_1$ is $12$.
So our result is off by a factor $2$.
In the paper \cite{E-vdG:TOP} we also gave the vanishing orders for
the Chern classes of the de Rham bundle $\HDR$ for the universal family
$\X{g}\to \A{g}$. This bundle is provided with an integrable
connection and so its Chern classes are torsion classes in integral
cohomology. If $l$ is a prime different from the characteristic
these Chern classes are torsion too by the comparison theorems and
using specialization.
We denote these classes by $r_i \in H^{2i}(\A{g}, {\bZ}_l)$. We determined
their orders up to a factor $2$. Note that $r_i$ vanishes for $i$ odd
as $\HDR$ is a symplectic bundle.
\begin{theorem}
For all $i$ the class $r_{2i+1}$ vanishes. For $1\leq i \leq g$
the order of $r_{2i}$ in
integral (resp.\ $l$-adic) cohomology equals (resp.\ equals the
$l$-part of) $n_i/2$ or $n_i$.
\end{theorem}
We now turn to the question of a representative cycle in the
boundary for $\lambda_g$ in $\CHQ(\barA{g})$.
It is a very well-known fact that there is a cusp form
$\Delta$ of weight $12$ on ${\rm SL}(2,{\bZ})$ and that it
has only one zero, a simple zero at the cusp. This leads to
the relation $12 \, \lambda_1= \delta$, where $\delta$ is the cycle
representing the class of the cusp.
This formula has an analogue for higher $g$.
\begin{theorem}
In the Chow group ${\CHQ}({\cA}^{(1)}_g)$
of codimension $g$ cycles
on the moduli stack $\A{g}^{(1)}$
of rank $\leq 1$ degenerations
the top Chern class $\lambda_g$ satisfies the formula
$$
\lambda_g= (-1)^g \zeta(1-2g) \, \delta_g,
$$
with $\delta_g$ the ${\bQ}$-class of the locus $\Delta_{g}$ of
semi-abelian varieties which are trivial extensions of an abelian
variety of dimension $g-1$ by the multiplicative group~${\bG}_m$.
\end{theorem}
The number $\zeta(1-2g)$ is a rational number $-b_{2g}/2g$ with $b_{2g}$
the $2g$th Bernoulli number. For example, we have
$$
12 \, \lambda_1= \delta_1, \hskip 0.6em\relax 120\, \lambda_2= \delta_2, \hskip 0.6em\relax
252\, \lambda_3=\delta_3, \hskip 0.6em\relax 240\, \lambda_4=\delta_4,
\hskip 0.6em\relax 132 \, \lambda_5=\delta_5
$$
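For instance, for $g=2$ we have $(-1)^2\zeta(-3)=1/120$, which gives the relation
$120\, \lambda_2=\delta_2$ in the list above.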
One might also wish to work on the Satake compactification $\SatA{g}$,
singular though as it is. Every toroidal compactification of
Faltings-Chai type $\barA{g}$ has a canonical morphism
$q: \barA{g} \to \SatA{g}$. Then we can define a class $\ell_{\alpha}$
where $\alpha$ is a subset of $\{1,2,\ldots,g\}$ by
$$
\ell_{\alpha}= q_*(\lambda_{\alpha}) \hskip 0.6em\relax
\text{with $\lambda_{\alpha}$ equal to
$\prod_{i \in {\alpha}}\lambda_i\in {\rm CH}^{\bQ}_{d}(\barA{g})$}
$$
with $d=d(\alpha)=\sum_{i \in \alpha} i$ the degree of $\alpha$
and ${\rm CH}^{\bQ}_{d}(\SatA{g})$ the Chow homology group.
Note that this push forward does not depend on the choice of toroidal
compactification as such toroidal compactifications always allow a
common refinement and the $\lambda_i$ are compatible with pull back.
One can ask for example whether the classes of the boundary components
$\SatA{j}$ lie in the ${\bQ}$-vector space generated by the $\ell_{\alpha}$
with $d(\alpha)={\rm codim}_{\SatA{g}}(\SatA{j})$.
In \cite{E-vdG:lambdag} we made the following conjecture.
\begin{conjecture}\label{classAg*conj}
In the group ${\rm CH}^{\bQ}_d(\SatA{g})$ with $d=g(g+1)/2-k(k+1)/2$
we have
$$
[\SatA{g-k}]=
\frac{(-1)^k}{\prod_{i=1}^{k} \zeta(2i-1-2g)} \ell_{g,g-1,\ldots,g+1-k}
$$
\end{conjecture}
The evidence that Ekedahl and I gave is:
\begin{theorem}
Conjecture \ref{classAg*conj} is true for $k=1$ and $k=2$, and in
characteristic $p>0$ it is true for all $k$.
\end{theorem}
We shall see later that a multiple of the class $\lambda_g$ has a beautiful
representative cycle in $\A{g}\otimes {\bF}_p$, namely the locus of
abelian varieties of $p$-rank $0$.
\section{Cohomology}
By a result of Borel \cite{Borel} the stable cohomology of the symplectic group is known; this implies that in degrees $\leq g-2$
the cohomology of ${\rm Sp}(2g,{\bZ})$ (and in fact for any finite index
subgroup $\Gamma$) is generated by classes in degree $4k+2$ for
$k=0,1,\ldots$ with $4k+2\leq g-2$:
$$
H^*(\Gamma, {\bQ})= {\bQ}[x_2,x_6,x_{10},\ldots]\, .
$$
The Chern classes $\lambda_{2k+1}$ of the Hodge bundle provide these classes.
There are also some results on the stable homology of
the Satake compactification, see \cite{CL}; besides the $\lambda_{2k+1}$
there are other classes $\alpha_{2k+1}$ for $k\geq 1$.
Van der Kallen and Looijenga proved in \cite{vdK-L} that the rational homology
of the pair $(\A{g}, {\A{g,{\rm dec}}})$ with ${\A{g,{\rm dec}}}$
the locus of decomposable principally polarized abelian
varieties, vanishes in degree $\leq g-2$.
For low values of $g$ the cohomology of $\A{g}$ is known.
For $g=1$ one has $H^0(\A{1},{\bQ})={\bQ}$ and $H^1=H^2=(0)$.
For $g=2$ one can show that $H^0(\A{2},{\bQ})={\bQ}$,
$H^2(\A{2},{\bQ})={\bQ}(-1)$.
Hain determined the cohomology of $\A{3}$. His result (\cite{Hain}) is:
\begin{theorem}
The cohomology $H^*(\A{3}, {\bQ})$ is given by
$$
H^{j}(\A{3}, {\bQ})\cong \begin{cases}
{\bQ} & j=0 \\
{\bQ}(-1) & j=2 \\
{\bQ}(-2) & j=4 \\
E & j=6 \\
0 & \text{else}
\end{cases}
$$
where $E$ is an extension
$$
0\to {\bQ}(-3) \to E \to
{\bQ}(-6) \to 0
$$
\end{theorem}
We find for the compactly supported cohomology a similar result
$$
H_c^{j}(\A{3}, {\bQ})\cong \begin{cases}
{\bQ}(-6) & j=12\\
{\bQ}(-5) & j=10 \\
{\bQ}(-4) & j=8 \\
E' & j=6 \\
0 & \text{else}
\end{cases}
$$
where $E'$ is an extension $0\to {\bQ} \to E' \to {\bQ}(-3) \to 0$.
The natural map $H_c^* \to H^*$ is the zero map. Indeed, the classes
in $H_c^j$ for $j=12$, $10$ and $8$ are $\lambda_3\lambda_1^3$,
$\lambda_3 \lambda_1^2$ and $\lambda_3\lambda_1$,
while $\lambda_3$ gives a non-zero class in $H^6_c$. On the other hand, $1$, $\lambda_1$, $\lambda_1^2$ and $\lambda_1^3$
give non-zero classes in $H^0$, $H^2$, $H^4$ and $H^6$. Looking at the weights
shows that the map is the zero map.
By calculating the cohomology of $\barA{3}$
Hulek and Tommasi proved in \cite{H-T}
that the cohomology of the Voronoi compactification
for $g\leq 3$ coincides with the Chow ring (known by the Theorems
of Section \ref{Chowringssmallg}):
\begin{theorem}
The cycle class map gives an isomorphism
$$
\CHQ^{*}(\A{g}^{\rm Vor}) \cong H^{*}(\A{g}^{\rm Vor}) \hskip 0.6em\relax
\text{for $g=1,2,3$}\, .
$$
\end{theorem}
\section{Siegel Modular Forms}
The cohomology of $\A{g}$ itself and of the universal family $\X{g}$
and its powers $\X{g}^n$ is closely linked to modular forms.
We therefore pause to give a short description of these modular
forms on ${\rm Sp}(2g,{\bZ})$. For a general reference we refer
to the book of Freitag \cite{Freitag:SMF} and to \cite{vdG:SMF} and the
references there.
Siegel modular forms generalize the notion of usual (elliptic) modular
forms on ${\rm SL}(2, {\bZ})$ and its (finite index) subgroups.
We first need to generalize the notion of the weight of a modular
form. To define it we need a finite-dimensional complex representation
$\rho : {\rm Gl}(g,{\bC}) \to {\rm GL}(V)$ with $V$ a finite-dimensional
complex vector space.
\begin{definition}
A holomorphic map $f: \mathfrak{H}_g \to V$ is called a Siegel modular form
of weight $\rho$ if
$$
f(\gamma(\tau))= \rho(c\tau + d) f(\tau) \hskip 0.6em\relax
\text{for all $\gamma = \left( \begin{matrix} a & b \\ c & d \\ \end{matrix} \right)
\in {\Sp}(2g,{\bZ})$ and all $\tau \in \mathfrak{H}_g$},
$$
plus for $g=1$ the
extra requirement that $f$ is holomorphic at the cusp. (The latter means that
$f$ has a Fourier expansion $f=\sum_{n\geq 0} a(n)q^n$ with
$q=e^{2\pi i \tau}$.)
\end{definition}
Modular forms of weight $\rho$ form a ${\bC}$-vector space
$M_{\rho}({\Sp}(2g,{\bZ}))$; as it turns out this space is
finite-dimensional.
This vector space can be identified with the space of holomorphic sections of
the vector bundle $V_{\rho}$ defined before
(see Section \ref{The Hodge Bundle}). Since we are working on an
orbifold, one has to be careful; we could replace ${\rm Sp}(2g, {\bZ})$
by a normal congruence subgroup $\Gamma$ of finite index that acts freely
on $\mathfrak{H}_g$, take the space $M_{\rho}(\Gamma)$ of
modular forms on $\Gamma$ and consider the invariant subspace
under the action of ${\rm Sp}(2g,{\bZ})/\Gamma$.
Since we can decompose the representation $\rho$ into irreducible representations it is no loss of generality to limit ourselves to the case where
$\rho$ is irreducible.
The irreducible representations $\rho$ of ${\rm GL}(g,{\bC})$
are given by $g$-tuples of integers $(a_1,a_2,\ldots,a_g)$ with
$a_i \geq a_{i+1}$, the highest weight of the representation.
A special case is where $\rho$ is given by $a_i=k$, in other words
$$
\rho(c\tau+d)= \det(c\tau+d)^k \, .
$$
In this case the Siegel modular forms are called {\sl classical
Siegel modular forms} of weight $k$.
A modular form $f$ has a Fourier expansion
$$
f(\tau)= \sum_{n \, \text{ half-integral}} a(n) e^{2\pi i {\rm Tr}(n\tau)},
$$
where $n$ runs over the symmetric $g\times g$ matrices that are half-integral
(i.e. $2n$ is an integral matrix with even entries along the diagonal)
and
$$
{\rm Tr}(n \tau)= \sum_{i=1}^g n_{ii} \tau_{ii} +2 \sum_{1 \leq i < j \leq g}
n_{ij} \tau_{ij}.
$$
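For instance, for $g=2$ a half-integral matrix has the form
$n=\left(\begin{matrix} a & b/2 \\ b/2 & c \\ \end{matrix}\right)$ with $a,b,c \in {\bZ}$,
and then ${\rm Tr}(n\tau)=a\tau_{11}+b\tau_{12}+c\tau_{22}$.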
A classical result of Koecher (cf.\ \cite{Freitag:SMF})
asserts that $a(n)=0$ for $n$ that are not
semi-positive. This is a sort of Hartogs extension theorem.
We shall use the suggestive notation $q^n$ for $e^{2 \pi i {\rm Tr}(n\tau)}$
and then can write the Fourier expansion as
$$
f(\tau)=\sum_{n\geq 0,\, \text{half-integral}} a(n) \, q^n.
$$
We observe the invariance property
$$
a(u^t n u)= \rho(u^t) a(n) \hskip 0.6em\relax \text{for all $u \in {\rm GL}(g,{\bZ})$}
\eqno(1)
$$
This follows by the short calculation
\begin{align}
a(u^tnu)&=\int_{x \, \bmod 1} f(\tau)e^{-2\pi i {\rm Tr}(u^tnu \tau)} dx\cr
&=\rho(u^t) \int_{x \, \bmod 1}
f(u\tau u^t)e^{-2 \pi i {\rm Tr}(n \, u\tau u^t)} dx= \rho(u^t) a(n),\cr \notag \end{align}
where we wrote $\tau=x+iy$.
There is a way to extend Siegel modular forms on $\A{g}$ to modular forms on
$\A{g}\sqcup \A{g-1}$ and inductively to $\SatA{g}$ by means of the so-called
$\Phi$-operator of Siegel. For $f \in M_{\rho}$ one defines
$$
\Phi(f)(\tau^{\prime})= \lim_{t \to \infty} f\left(\begin{matrix}
\tau^{\prime} & 0 \\ 0 & it\\ \end{matrix}\right) \qquad \tau^{\prime} \in \mathfrak{H}_{g-1} \, .
$$
The limit is well-defined and gives a function that satisfies
$$
(\Phi f)(\gamma^{\prime}(\tau^{\prime}))
= \rho\left(\begin{matrix} c\tau^{\prime} +d & 0 \\ 0 & 1 \\
\end{matrix}\right) (\Phi f) (\tau^{\prime})\, ,
$$
where $\gamma^{\prime}=(a \, b ; c \, d) \in {\rm Sp}(2g-2, {\bZ})$ is embedded
in ${\rm Sp}(2g,{\bZ})$ as the automorphism group of the symplectic subspace
$\langle e_i, f_i : i=1,\ldots,g-1\rangle$.
That is, we get a linear map
$$
M_{\rho} \longrightarrow M_{\rho^{\prime}}, \hskip 0.6em\relax f \mapsto \Phi(f),
$$
for some representation $\rho'$; in fact,
with the representation $\rho^{\prime}=(a_1,a_2,\ldots,a_{g-1})$ for
irreducible $\rho=(a_1,a_2,\ldots,a_{g})$, cf.\ the proof of
\ref{corankresult}.
\begin{definition}
A Siegel modular form $f \in M_{\rho}$ is called a {\sl cusp form} if $\Phi(f)=0$.
\end{definition}
We can extend this definition by
\begin{definition}
A non-zero modular form $f \in M_{\rho}$ has co-rank $k$ if
$\Phi^{k}(f)\neq 0$ and $\Phi^{k+1}(f) =0$.
\end{definition}
That is, $f$ has co-rank $k$ if it survives (non-zero)
till $\A{g-k}$ and no further.
So cusp forms have co-rank $0$.
For an irreducible representation $\rho$
of ${\rm GL}_g$ with highest weight $(a_1,\ldots,a_g)$
we define the co-rank as
$$
{\rm co-rank}(\rho)= \# \{ i: 1\leq i \leq g, a_i=a_g\}
$$
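To illustrate: for a classical (scalar-valued) Siegel modular form the highest weight is
$(k,k,\ldots,k)$, so ${\rm co-rank}(\rho)=g$ and the result below gives no restriction;
if on the other hand $a_{g-1}>a_g$, then ${\rm co-rank}(\rho)=1$ and every non-zero
$f \in M_{\rho}$ satisfies $\Phi^2(f)=0$.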
Weissauer proved in \cite{Weissauer} the following result.
\begin{theorem}\label{corankresult}
Let $\rho$ be irreducible.
For a non-zero Siegel modular form $f \in M_{\rho}$
one has ${\rm corank}(f) \leq {\rm corank}(\rho)$.
\end{theorem}
\begin{proof}
We give Weissauer's proof.
Suppose that $f: \mathfrak{H}_g \to V$ is a form of co-rank $k$ with Fourier series
$\sum a(n)q^n$; then there exists a
semi-definite half-integral matrix $n$ such that
$$
n =\left( \begin{matrix} n^{\prime} & 0 \\ 0 & 0 \\ \end{matrix} \right)
\hskip 0.6em\relax \text{with $n^{\prime}$ a $(g-k)\times (g-k)$ matrix}
$$
such that $a(n)\neq 0$.
The identity (1) implies that $a(n)=\rho(u)a(n)$ for all $u$ in the group
$$
G_{g,k}= \{ \left(\begin{matrix} 1_{g-k} & b \\ 0 & d \\ \end{matrix}
\right): d \in {\rm GL}_k \}
$$
This group is a semi-direct product ${\rm GL}_k \ltimes N$ with $N$ the
unipotent radical. The important remark now is that the Zariski closure of
the group of integral points $G_{g,k}({\bZ})$ in $G_{g,k}$ contains ${\rm SL}_k({\bC}) \ltimes N$
and we have $a(n)=\rho(u)a(n)$ for all $u \in {\rm SL}_k({\bC}) \ltimes N$.
So the Fourier coefficient $a(n)$ lies in the subspace $V^{N}$
of $N$-invariants. The parabolic
group $P=\{ (a\, b ; 0 \, d) ; a \in {\rm GL}_{g-k}, d \in {\rm GL}_k\}$,
which is also a semi-direct product $({\rm GL}_{g-k}\times {\rm GL}_k)
\ltimes N$, acts on $V^N$ via its quotient ${\rm GL}_{g-k}\times {\rm GL}_k$.
Since this is a reductive group $V^N$ decomposes into irreducible
representations, each of which is a tensor product of an irreducible
representation of ${\rm GL}_{g-k}$ times an irreducible
representation of ${\rm GL}_{k}$.
If $U_k$ denotes the subgroup of upper triangular unipotent matrices in
${\rm GL}(k,{\bC})$,
then the highest weight space is $(V^N)^{U_{g-k} \times U_k}$
and this equals $V^{U_g}$, and this is $1$-dimensional. Thus $V^N$
is an irreducible representation of ${\rm GL}_{g-k} \times {\rm GL}_k$
and one checks that it is given by $(a_1,\ldots, a_{g-k}) \otimes (a_{g-k+1},
\ldots, a_g)$. The space of invariants $V^{G_{g,k}} \subset V^N$
under $G_{g,k}$ can only contain ${\rm SL}_k({\bC})$-invariant elements
if the ${\rm GL}_k$-representation is $1$-dimensional.
Therefore $V^{G_{g,k}}$ is zero unless $ (a_{g-k+1},\ldots, a_g)$ is a
$1$-dimensional representation, hence $a_{g-k+1}=\ldots = a_g$.
In that case the representation for ${\rm GL}_{g-k}$ is given by
$(a_1,\ldots, a_{g-k})$. Hence the Fourier coefficients of $f$ all have
to vanish if the corank of $f$ is greater than the corank of $\rho$.
This proves the result.
\end{proof}
\section{Differential Forms on $\A{g}$}\label{DF}
Here we look first at the moduli space over the field of complex numbers
$\A{g}({\bC})={\Sp}(2g,{\bZ}) \backslash {\Hg}$. We are interested
in differential forms living on $\barA{g}$. If $\eta$ is a holomorphic
$p$ form on $\barA{g}$ then we can pull the form back to $\mathfrak{H}_g$.
It will there be a section of some exterior power of $\Omega^1_{\mathfrak{H}_g}$,
hence of some exterior power of the second symmetric power of the
Hodge bundle. Such forms can be analytically described by vector-valued
Siegel modular forms.
As we saw in \ref{OmegaisSym2},
the bundle $\Omega^1(\log D)$ is associated to the
second symmetric power ${\rm Sym}^2({\bE})$ of the Hodge bundle
(at least in the stacky interpretation) and we are led to ask
which irreducible representations occur in the exterior powers
of the second symmetric power of the
standard representation of ${\rm GL}(g)$. The answer is that these
are exactly those irreducible representations $\rho$ that are of the form
$ w \eta -\eta$, where $\eta=(g,g-1,\ldots,1)$ is half the sum of the
positive roots and $w$ runs through the $2^g$ Kostant representatives
of $W({\rm GL}_g) \backslash W({\rm Sp}_{2g})$, where $W$ is the Weyl group.
They have the property that they send dominant weights for ${\rm Sp}_{2g}$
to dominant weights for ${\rm GL}_g$.
To describe this explicitly, recall that the Weyl group $W$
of ${\rm Sp}(2g,{\bZ})$
is the semi-direct product $S_g \ltimes ({\bZ}/2{\bZ})^g$ of signed
permutations. The Weyl group of ${\rm GL}_g$ is equal to the symmetric group
$S_g$. Every left coset $S_g\backslash W$ contains exactly one
Kostant element $w$. Such an element $w$ corresponds one to one
to an element of $({\bZ}/2{\bZ})^g$ that we view as a $g$-tuple
$(\epsilon_1,\ldots,\epsilon_g)$
with $\epsilon_i \in \{ \pm 1 \}$
and the action of this element $w$ on the root lattice is then via
$$
(a_1,\ldots,a_g) {\buildrel w \over \longrightarrow}
(\epsilon_{\sigma(1)} a_{\sigma(1)}, \ldots, \epsilon_{\sigma(g)} a_{\sigma(g)})$$
for all $a_1 \geq a_2 \geq \cdots \geq a_g$ and with $\sigma$ the unique
permutation such that
$$
\epsilon_{\sigma(1)} a_{\sigma(1)} \geq \cdots \geq
\epsilon_{\sigma(g)} a_{\sigma(g)}.
$$
If $f$ is a classical Siegel modular form of weight $k=g+1$ on the group
$\Gamma_g$ then $f(\tau) \prod_{i\leq j} d\tau_{ij}$
is a top differential on the smooth part of the quotient space
$\Gamma_g \backslash {\mathcal H}_g={\mathcal A}_g(\bC)$.
It can be extended over the smooth part of the
rank-$1$ compactification ${\mathcal A}_g^{(1)}$
if and only if $f$ is a cusp form. It is not difficult to see
that such a form then extends as a holomorphic form to the whole
smooth compactification $\tilde{\mathcal A}_g$.
\begin{proposition}\label{top}
The map that associates to a classical Siegel modular cusp form $f
\in S_{g+1}(\Gamma_g)$ of weight $g+1$ the top differential
$\omega= f(\tau) \prod_{i\leq j} d\tau_{ij}$ defines an
isomorphism between $S_{g+1}(\Gamma_g)$ and the space
of holomorphic top differentials
$H^0(\tilde{\mathcal A}_g, \Omega^{g(g+1)/2})$
on any smooth compactification
$\tilde{\mathcal A}_g$.
\end{proposition}
Freitag and Pommerening proved in \cite{FP} the following extension result.
\begin{theorem}
Let $p<g(g+1)/2$ and $\omega$ a holomorphic $p$-form on the smooth locus
of ${\rm Sp}(2g,{\bZ})\backslash \mathfrak{H}_{g}$.
Then it extends uniquely to a holomorphic $p$-form on any smooth
toroidal compactification $\barA{g}$.
\end{theorem}
But for $g>1$ the singular locus has codimension $\geq 2$. This implies:
\begin{corollary}
For $p<g(g+1)/2$ holomorphic $p$-forms on $\barA{g}$ correspond one-one
to ${\rm Sp}(2g,{\bZ})$-invariant
holomorphic $p$-forms on $\mathfrak{H}_g$:
$$
\Gamma(\barA{g},\Omega^p)\cong (\Omega^p(\mathfrak{H}_g))^{{\rm Sp}(2g,{\bZ})}.
$$
\end{corollary}
When can holomorphic $p$-forms exist on $\barA{g}$?
Weissauer proved in \cite{Weissauer} a vanishing theorem of the following
form.
\begin{theorem}
Let $\tilde{\mathcal A}_g$ be a smooth compactification of
${\mathcal A}_g$. If $p$ is an integer $0\leq p< g(g+1)/2$ then
the space of holomorphic $p$-forms on
$\tilde{\mathcal A}_g$ vanishes unless $p$ is of the form
$g(g+1)/2-r (r+1)/2$ with $1\leq r \leq g$
and then
$H^0(\tilde{\mathcal A}_g,\Omega^p_{\tilde{\mathcal A}_g})
\cong M_{\rho}(\Gamma_g)$
with $\rho=(g+1,\ldots,g+1,g-r,\ldots,g-r)$ with
$g-r$ occurring $r$ times.
\end{theorem}
Weissauer deduced this from the following Vanishing Theorem for
Siegel modular forms (cf.\ loc.\ cit.).
\begin{theorem}
Let $\rho=(a_1,\ldots,a_g)$ be irreducible with ${\rm co-rank}(\rho) < g-a_g$.
If we have
$$
\# \{ 1\leq i \leq g : a_i=a_g+1 \} < 2(g-a_g-\text{\rm co-rank}(\rho))
$$
then $M_{\rho}=(0)$.
\end{theorem}
\section{The Kodaira Dimension of $\A{g}$}
The Kodaira dimension of the moduli space $\A{g}$ over ${\bC}$,
the least integer
$\kappa=\kappa({\A{g}})$ such that the sequence $P_m/m^{\kappa}$,
with $P_m({\A{g}})=
\dim H^0({\A{g}},K^{\otimes m})$, is bounded, is an important invariant.
In terms of Siegel modular forms this comes down to the growth of
the dimension of the space of classical Siegel modular forms $f$
of weight $k(g+1)$ that extend to holomorphic tensors
$f(\tau) \eta^{\otimes k}$ with
$\eta= \wedge_{1\leq i \leq j \leq g} d \tau_{ij}$ on $\barA{g}$.
The first condition is that $f$ vanishes with multiplicity $\geq k$ along
the divisor at infinity. Then one has to deal with the extension over
the quotient singularities. Reid and Tai independently found a criterion
for the extension of pluri-canonical forms over quotient singularities.
\begin{proposition}\label{reid-tai}
(Reid-Tai Criterion) A pluri-canonical form $\eta$
on ${\bC}^n$ that is invariant under a finite group $G$ acting linearly
on ${\bC}^n$ extends to a resolution of singularities if
for every non-trivial element $\gamma \in G$ and every fixed point $x$
of $\gamma$ we have
$$
\sum_{j=1}^n \alpha_j \geq 1\, ,
$$
where the action of $\gamma$ on the tangent space at $x$ has eigenvalues
$e^{2 \pi i \alpha_j}$ with $0\leq \alpha_j <1$. If $G$ does not possess pseudo-reflections
then it extends if and only if $\sum_{j=1}^n \alpha_j \geq 1$.
\end{proposition}
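To illustrate the criterion: if a non-trivial element $\gamma$ acts on ${\bC}^n$ with $n\geq 2$
as $-{\rm id}$, then all $\alpha_j=1/2$ and $\sum_j \alpha_j=n/2\geq 1$, so invariant
pluri-canonical forms extend over the corresponding quotient singularity.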
Tai checked (cf.\ \cite{Tai})
that if $g\geq 5$ then every fixed point of a non-trivially
acting element of ${\rm Sp}(2g,{\bZ})$ satisfies the requirement
of the Reid-Tai criterion and thus these forms extend over the
quotient singularities of ${\A{g}}$. He also checked that the
extension over the singularities in the boundary did not present
difficulties. Thus he showed by calculating the dimension of the
space of sections of $K^{\otimes k}$ on a level cover of $\A{g}$
and calculating the space of invariants under the action of ${\rm Sp}(2g, {\bZ}/\ell {\bZ})$ that for $g \geq 9$ the space $\A{g}$ is of general type.
He thus improved earlier work of Freitag (\cite{Freitag:KD}).
Mumford extended this result in \cite{Mumford:KD} and proved:
\begin{theorem}\label{KDgatleast7}
The moduli space $\A{g}$ is of general type if $g \geq 7$.
\end{theorem}
His approach was similar to the method used in his joint paper with
Harris on the Kodaira dimension of $\M{g}$. First he works
on the moduli space $\A{g}^{(1)}$ of rank $1$-degenerations
and observes that
the Picard group ${\rm Pic}(\A{g}^{(1)})\otimes {\bQ}$ is generated by two
divisors: $\lambda_1$ and the class $\delta$ of the boundary.
By Proposition \ref{OmegaisSym2} for the coarse moduli space
we know the canonical class.
\begin{proposition}
Let $U$ be the open subset of ${\cA}_g^{(1)}$ of
semi-abelian varieties of torus rank $\leq 1$
with automorphism group $\{ \pm 1\}$. Then the canonical class of $U$
is given by $(g+1)\lambda_1 -\delta$.
\end{proposition}
Now there is an explicit effective divisor $N_0$ in ${\mathcal A}_g^{(1)}$,
namely the divisor of principally polarized abelian varieties
$(X,\Theta)$ that have a singular theta divisor. By a tricky calculation
in a $1$-dimensional family Mumford is able to calculate the divisor
class of $N_0$.
\begin{theorem}
The divisor class of $\bar{N}_0$ is given by
$$
[\bar{N}_0]= \left( \frac{(g+1)!}{2}+g!\right) \lambda_1
- \frac{(g+1)!}{12} \delta \, .
$$
\end{theorem}
The divisor $N_0$ is in general not irreducible and splits off the
divisor $\theta_0$
of principally polarized abelian varieties whose symmetric theta
divisor has a singularity at a point of order $2$. This divisor is
given by the vanishing of an even theta constant (Nullwert)
and so this divisor class can be easily computed.
The divisor class of $\theta_0$ equals
$$
[\theta_0]= 2^{g-2}(2^g+1)\, \lambda_1-2^{2g-5}\, \delta \, .
$$
\begin{proof} (of Theorem \ref{KDgatleast7})
We have $[\bar{N}_0]=\theta_0+ 2 \, [R]$ with $R$ an effective divisor
given as
$$
[R]=\left( \frac{(g+1)!}{4} + \frac{g!}{2}-2^{g-3}(2^g+1)\right) \lambda_1
-\left( \frac{(g+1)!}{24} -2^{2g-6}\right) \delta \, .
$$
For an expression $a\lambda_1-b \delta$ we call the ratio $a/b$ its slope.
If $R$ is effective with slope $\leq$ the slope of the expression for the
canonical class, we know that the canonical class is big and hence that $\A{g}$ is of general type.
From the formulas one sees that the slope of $\bar{N}_0$ is
$6+ 12/(g+1)$ and deduces that the inequality holds for $g\geq 7$.
\end{proof}
From the relatively easy equation for the class
$[\theta_0]$ one deduces that the even theta
constants provide a divisor with slope $8+2^{3-g}$, and this gives
that $\A{g}$ is of general type for $g\geq 8$; cf.\ \cite{Freitag:SMF}.
For $g\leq 5$ we know that $\A{g}$ is rational or unirational.
For $g=1$ and $2$ rationality was known in the 19th century; Katsylo proved
rationality for $g=3$, Clemens proved unirationality for $g=4$
and unirationality for $g=5$ was proved by several people (Mori-Mukai,
Donagi, Verra). But the case $g=6$ is still open.
For some results on the nef cone we refer to the survey
\cite{Gr} of Grushevsky.
\section{Stratifications}
In positive characteristic the moduli space $\A{g}\otimes{\bF}_p$
possesses stratifications that can tell us quite a
lot about the moduli space; it is not clear what the characteristic zero
analogues of these stratifications are. This makes this moduli space
in some sense more accessible in characteristic $p$ than in characteristic
zero, a fact that may sound counterintuitive to many.
The stratifications we are referring to are the Ekedahl-Oort stratification
and the Newton polygon stratification. Since the cycle classes are not known
for the latter one we shall stick to the Ekedahl-Oort
stratification, E-O for short. It was originally defined by Ekedahl and Oort
in terms of the group
scheme $X[p]$, the kernel of multiplication by $p$ for an abelian variety
in characteristic $p$. It has been studied intensively by Oort and many others,
see e.g. \cite{Oort:Stratification} and \cite{Oort:foliations}.
The alternative definition using degeneracy loci
of vector bundles was given in my paper \cite{vdG:Cycles} and worked out fully
in joint work with Ekedahl, see \cite{E-vdG:EO}. In this section
we shall write $\A{g}$ for $\A{g}\otimes {\bF}_p$. We start with the
universal principally polarized abelian variety $\pi:\X{g} \to \A{g}$.
For an abelian variety $X$ over a field $k$
the de Rham cohomology $H^1_{\rm dR}(X)$ is a vector space of rank $2g$.
We get by doing this in families a cohomology sheaf $\HDR(\X{g}/\A{g})$,
the hyperdirect image
$R^1\pi_*({\mathcal O}_{\X{g}} \to \Omega^1_{\X{g}/\A{g}})$.
Because of the polarization it comes with a symplectic pairing $\langle \, , \, \rangle: \HDR \times \HDR \to {\mathcal O}_{\A{g}}$.
We have the Hodge filtration of $\HDR$:
$$
0 \to \pi_*(\Omega^1_{\X{g}/\A{g}}) \to \HDR \to R^1\pi_*{\mathcal O}_{\X{g}}
\to 0,
$$
where the first non-zero term is the Hodge bundle ${\bE}_g$. In characteristic
$p>0$ we have additionally two maps
$$
F: \X{g} \to \X{g}^{(p)}, \hskip 0.6em\relax V: \X{g}^{(p)} \to \X{g},
$$
relative Frobenius and the Verschiebung
satisfying $F\circ V= p \cdot {\rm id}_{\X{g}^{(p)}}$ and
$V\circ F = p \cdot {\rm id}_{\X{g}}$.
Look at the simplest case $g=1$. Since Frobenius is inseparable the kernel
$X[p]$ of multiplication by $p$ is not reduced and has either $1$ or $p$
physical points.
An elliptic curve in characteristic $p>0$
is called \emph{supersingular}
if and only if $X[p]_{\rm red}=(0)$. Equivalently
this means that $V$ is also inseparable, hence the kernel of $F$
and $V$ (for $\X{g}$ instead of $\X{g}^{(p)}$) coincide. There are
finitely many points on the $j$-line $\A{1}$ that correspond to the
supersingular elliptic curves.
So in general
we will compare the relative position of the kernel of $F$ and of $V$
inside $\HDR$. As it turns out, it is better to work with the flag
space $\F{g}$ of symplectic flags on $\HDR$; by this we mean that we
consider the space of flags $(E_i)_{i=1}^{2g}$ on $\HDR$ such that
${\rm rank} (E_i)=i$, $E_{g}= {\bE}$ and $E_{2g-i}=E_i^{\bot}$.
We can then introduce a second flag on $\HDR$, say $(D_i)_{i=1}^{2g}$
defined by setting
$$
D_g= \ker(V)=V^{-1}(0), \hskip 0.6em\relax D_{g+i}=V^{-1}(E_i^{(p)}),
$$
and complementing by
$$
D_{g-i}=D_{g+i}^{\bot} \qquad \text{for $i=1,\ldots,g$}.
$$
This is called the conjugate flag. For an abelian variety $X$ we define
the \emph{canonical flag} of $X$ as the coarsest flag that is stable
under $F$: if $G$ is a member, then also $F(G^{(p)})$; usually this will
not be a full flag. The conjugate flag $D$ is a refinement of the
canonical flag.
So starting with one filtration we end up with two filtrations. We then
compare these two filtrations. In order to do this we need the Weyl group
$W_g$ of the symplectic group ${\rm Sp}(2g,{\bZ})$. This group is isomorphic
to the semi-direct product
$$
W_g \cong
S_g \ltimes ({\bZ}/2{\bZ})^g
= \{ \sigma \in S_{2g} : \sigma(i)+\sigma(2g+1-i)=2g+1, \, i=1,\ldots,g \} \, .
$$
As is well-known each element $w \in W_g$
has a \emph{length} $\ell(w)$. Let
$$
W_g^{\prime}= \{ \sigma \in W_g: \sigma\{ 1,2, \ldots,g\} = \{1,2,\ldots,g\} \}
\cong S_g \, .
$$
Now for every coset $aW_g^{\prime}$ there is a unique element $w$ of minimal
length in $aW_g^{\prime}$ such that every $w' \in aW_g^{\prime}$ can be written as $w'=wx$ with $\ell(w')=\ell(w)+\ell(x)$.
There is a partial order on $W_g$, the Bruhat-Chevalley order:
$$
w_1 \geq w_2 \hskip 0.6em\relax \text{if and only if} \hskip 0.6em\relax r_{w_1}(i,j)\leq r_{w_2}(i,j)
\hskip 0.6em\relax \text{for all $i,j$},
$$
where the function $r_w(i,j)$ is defined as
$$
r_w(i,j)= \# \{ n \leq i: w(n) \leq j \}.
$$
There are $2^g$ so-called \emph{final elements} in $W_g$: these are the
minimal elements of the cosets. In another context (see Section \ref{DF})
these are called
Kostant representatives. An element $\sigma \in W_g$ is final
if and only if $\sigma(i) < \sigma(j)$ for $i < j \leq g$. These final
elements correspond one-to-one to so-called final types: these are
increasing surjective maps
$$
\nu : \{0,1,\ldots,2g\} \to \{ 0,1,\ldots,g\}
$$
satisfying $\nu(2g-i)=\nu(i)+(g-i)$ for $0 \leq i \leq g$. The bijection
is gotten by associating to $w \in W_g$ the final type $\nu_w$ with
$\nu_w(i)=i-r_w(g,i)$.
Now the stratification on our flag space $\F{g}$ is defined by
$$
(E_{\cdot},D_{\cdot}) \in U_w \iff \dim(E_i \cap D_j) \geq r_w(i,j) \, .
$$
There is also a scheme-theoretic definition: the two filtrations mean that
we have two sections $s,t$ of a $G/B$-bundle $T$ (with structure group $G$)
over the base with $G={\rm Sp}_{2g}$
and $B$ a Borel subgroup; locally (in the \'etale topology) we may assume that
$t$ is a trivializing section; then we can view $s$ as a map of the base
to $G/B$ and we simply take $U_w$ (resp.\ $\overline{U}_w$)
to be the inverse image of the $B$-orbit $BwB$ (resp.\ its closure).
This definition is independent of the chosen trivialization.
We thus find global subschemes $U_w$ and $\bar{U}_w$ for
the $2^g (g!)$ elements of $W_g$.
It will turn out that for \emph{final} elements the projection map $\F{g}
\to \A{g}$ restricted to $U_w$ defines a finite morphism to its
image in $\A{g}$, and these will be the E-O strata. But the strata
on $\F{g}$ behave in a much better way. That is why we first study these
on $\F{g}$. We can extend the strata to strata over a compactification,
see \cite{E-vdG:EO}: the Hodge bundle extends and so does the de Rham
sheaf, namely as the logarithmic de Rham sheaf
$R^1\pi_*(\Omega^{\cdot}_{\tilde{\X{g}}/\tilde{\A{g}}})$ and
then we play the same game as above.
There is a smallest stratum: $\bar{U}_1$ associated
to the identity element of $W_g$.
\begin{proposition}
Any irreducible component of any $\bar{U}_w$ in $\F{g}$ contains a point of $\bar{U}_1$.
\end{proposition}
The advantage of working in the flag space comes out clearly if we consider
the local structure of our strata. The idea is that locally our moduli
space (flag space) looks like the space ${\rm FL}_g$
of complete symplectic flags in a
$2g$-dimensional symplectic space.
We need the notion of \emph{height $1$ maps} in characteristic $p>0$. A closed immersion $S \to S'$ is called a height $1$ map if $I^{(p)}=(0)$ with $I$
the ideal sheaf defining $S$ and $I^{(p)}$ the ideal generated by $p$-th
powers of elements. If $S={\rm Spec}(R)$ and $x: {\rm Spec}(k) \to S$
is a closed point with maximal ideal $m$, then the height $1$ neighborhood is ${\rm Spec}(R/m^{(p)})$.
We can then introduce the notions of \emph{height $1$ isomorphic} and
\emph{height $1$ smooth} in an obvious way.
Our basic result of \cite{E-vdG:EO} about our strata is now:
\begin{theorem}
Let $k$ be a perfect field. For every $k$-point $x$ of $\F{g}$ there exists
a $k$-point of ${\rm FL}_g$ such that their height $1$ neighborhoods
are isomorphic as stratified spaces.
\end{theorem}
Here the stratification on the flag space ${\rm FL}_g$ is the usual one
given by the Schubert cells.
The idea of the proof is to trivialize the de Rham bundle over a height $1$
neighborhood and then use infinitesimal Torelli for abelian varieties.
Our theorem has some immediate consequences.
\begin{corollary}\label{strataproperties}
\begin{enumerate}
\item Each stratum $U_w$ is smooth of dimension $\ell(w)$.
\item Each stratum $\bar{U}_w$ is Cohen-Macaulay, reduced and normal;
moreover $\bar{U}_w$ is the closure of $U_w$.
\item For $w$ a final element the projection $\F{g} \to \A{g}$
induces a finite \'etale map $U_w \to V_w$, with $V_w$ the image of $U_w$.
\end{enumerate}
\end{corollary}
We now descend from $\F{g}$ to $\A{g}$ to define the Ekedahl-Oort
stratification.
\begin{definition}
For a final element $w \in W_g$ the E-O stratum $V_w$ is defined to be the
image of $U_w$ under the projection $\F{g} \to \A{g}$.
\end{definition}
Over an E-O stratum $V_w \subset \A{g}$ the type of the canonical filtration
(i.e., the dimensions of the filtration steps that occur) is constant.
\begin{proposition}
The image of any $U_w$ under the projection $\F{g} \to \A{g}$ is a union
of strata $V_{v}$; the image of any $\bar{U}_w$ is a union of strata
$\bar{V}_v$.
\end{proposition}
We also have an irreducibility result (cf.\ \cite{E-vdG:EO}).
Each final element $w \in W_g$ corresponds to a partition and a
Young diagram.
\begin{theorem}
If $w \in W_g$ is a final element whose Young diagram $Y$ does not contain
all rows of length $i$ with
$\lceil (g+1)/2\rceil \leq i \leq g$ then $\bar{V}_w$ is irreducible and
$U_w \to V_w$ is a connected \'etale cover.
\end{theorem}
Harashita proved in \cite{Harashita} that the other ones are in general
reducible (with exceptions maybe for small characteristics).
A stratification on a space is not worth much unless one knows the
cycle classes of the strata. In our case one can calculate these.
On the flag space the Chern classes $\lambda_i$ of the Hodge bundle
decompose into Chern roots $l_i=c_1(E_i/E_{i-1})$:
$$
c_1(E_i)=l_1+\ldots + l_i\, .
$$
We then have
$$
c_1(D_{g+i})-c_1(D_{g+i-1})= p\, l_i.
$$
\begin{theorem}
The cycle classes of $\bar{U}_w$ are polynomials in the classes $l_i$
with coefficients that are polynomials in $p$.
\end{theorem}
We refer to \cite{E-vdG:EO} for an explicit formula. Using the analogues of
the maps $\pi_i$ of Section \ref{compactdual} and Lemma \ref{Gysin}
we can calculate the cycle classes of the E-O strata on $\A{g}$.
Instead of giving a general formula we restrict to giving the formulas
for some important strata. For example, there are the $p$-rank strata
$$
V_f:=\{ [X] \in \A{g} : \# X[p](\bar{k})\leq p^f\}
$$
for $f=g,g-1,\ldots, 0$. Besides these there are the $a$-number strata
$$
T_a:=\{ [X] \in \A{g}: \dim_k {\rm Hom}(\alpha_p,X) \geq a \} \, .
$$
Recall that the $p$-rank $f(X)$
of an abelian variety is $f$ if and only if
$\# X[p](\bar{k})=p^f$
and $0 \leq f \leq g$ with $f=g$ being the generic case. Similarly, the
$a$-number $a(X)$ of $X$ is $\dim_k {\rm Hom}(\alpha_p,X)$ and this equals the rank of
$\ker (V) \cap \ker(F)$; so $0\leq a(X) \leq g$ with $a(X)=0$ being
the generic case. The stratum $V_f$ has codimension $g-f$ while the stratum
$T_a$ has codimension $a(a+1)/2$. These codimensions were originally
calculated by Oort
and follow here easily from \ref{strataproperties}.
\begin{theorem}
The cycle class of the $p$-rank stratum $V_f$ is given by
$$
[V_f] = (p-1)(p^2-1)\cdots (p^{g-f}-1) \lambda_{g-f}.
$$
\end{theorem}
For example, for $g=1$ the stratum $V_0$ is the stratum of supersingular
elliptic curves. We have $[V_0]= (p-1)\lambda_1$. By the cycle relation
$12 \lambda_1=\delta$ with $\delta$ the cycle of the boundary we find
$$
[V_0]= \frac{p-1}{12} \delta \, .
$$
Since the degenerate elliptic curve (rational nodal curve) has
two automorphisms we find (using the stacky interpretation of our formula)
the Deuring Mass
Formula
$$
\sum_{E/\bar{k} \text{\rm \, supersingular}}
\frac{1}{\# {\rm Aut}_k(E)} = \frac{p-1}{24}
$$
for the number of supersingular elliptic curves in characteristic $p$.
One may view
all the formulas for the cycle classes as a generalization of the Deuring
Mass Formula.
The formulas for the $a$-number strata are given in \cite{vdG:Cycles} and
\cite{E-vdG:EO}.
We have
\begin{theorem}
The cycle class of the locus $T_a$ of abelian varieties with $a$-number
$\geq a$ is given by
$$
\sum Q_{\beta}({\bE}^{(p)}) \cdot Q_{\varrho(a)-\beta}({\bE}^{\vee}),
$$
where $Q_{\mu}$ is defined as in Section \ref{compactdual}
and the sum is over all partitions
$\beta$ contained in $\varrho(a)=\{a,a-1,\ldots,1\}$.
\end{theorem}
For example, the formula for the stratum $T_1$ is $p\lambda_1-\lambda_1$,
which fits since $a$-number $\geq 1$ means exactly that the $p$-rank
is $\leq g-1$. For $a=2$ the formula is
$[T_2]=(p-1)(p^2+1)(\lambda_1\lambda_2)-(p^3-1)2\lambda_3$. For $a=g$
the formula reads
$$
[T_g]=(p-1)(p^2+1)\cdots (p^g+(-1)^g) \lambda_1\lambda_2 \cdots \lambda_g.
$$
This stratum is of maximal codimension; this formula is visibly a
generalization of the
Deuring Mass Formula and counts the number of superspecial abelian varieties;
this number was first calculated by Ekedahl in \cite{Ekedahl:SS}.
We formulate a corollary of our formulas for the $p$-rank strata.
\begin{corollary}
The Chern classes $\lambda_i$ of the Hodge bundle are represented
by effective ${\bQ}$-classes.
\end{corollary}
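Indeed, the theorem above can be rewritten as
$$
\lambda_i=\frac{[V_{g-i}]}{(p-1)(p^2-1)\cdots (p^i-1)},
$$
and the right hand side is a positive multiple of the class of a subvariety.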
Another nice aspect of our formulas is that when we specialize $p=0$
in these formulas, which are polynomials in the $l_i$ and $\lambda_i$
with coefficients that are polynomials in $p$, we get back the formulas
for the cycle classes of the Schubert cells both on the Grassmannian
and on the flag space, see \cite{E-vdG:EO}.
\section{Complete subvarieties of $A_g$}
The existence of complete subvarieties of relatively small codimension
can give us interesting information about a non-complete variety.
In the case of the moduli space $\A{g}\otimes k$ (for some field $k$)
we know that $\lambda_1$
is an ample class and that $\lambda_1^{g(g-1)/2+1}$ vanishes. Since
for a complete subvariety $X$ of dimension $d$ we must have $\lambda_1^d|_X
\neq 0$, this immediately implies a lower bound on the codimension.
\begin{theorem}
A complete subvariety of $\A{g}\otimes k$ has codimension at least $g$.
\end{theorem}
This lower bound can be realized in positive characteristic as was
noted by Koblitz and Oort, see \cite{Koblitz}, \cite{Oort:SV}.
The idea is simple: a semi-abelian variety with a positive torus rank
has points of order $p$. So by requiring that our abelian varieties
have $p$-rank~$0$ we stay inside $\A{g}$ and this defines the required
complete variety.
\begin{theorem}
The moduli stack $\A{g}\otimes k$ with ${\rm char}(k)=p>0$
contains a complete substack of codimension $g$: the locus $V_0$ of
abelian varieties with $p$-rank zero.
\end{theorem}
A generalization is:
\begin{theorem}
The partial Satake compactification $\SatA{g}-\SatA{g-(t+1)}$ in
characteristic $p>0$, parametrizing
degenerations of torus rank at most $t$, contains a complete subvariety of
codimension $g-t$, namely the locus $V_{t}$ of $p$-rank $\leq t$.
\end{theorem}
In characteristic $0$ there is no such obvious complete subvariety
of codimension $g$, at least if $g\geq 3$. Oort conjectured in
\cite{Oort:CS} that it should not exist for $g\geq 3$. This was proved
by Keel and Sadun in \cite{Keel-Sadun}.
\begin{theorem}
If $X \subset \A{g}({\bC})$ is a complete subvariety with the property
that $\lambda_i | X $ is trivial in cohomology for some $1 \leq i \leq g$
then $\dim X \leq i(i-1)/2$ with strict inequality if $i\geq 3$.
\end{theorem}
This theorem implies the following corollary.
\begin{corollary}
For $g \geq 3$ the moduli space $A_g \otimes {\bC}$ does not possess
a complete subvariety of codimension $g$.
\end{corollary}
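Indeed, the class $\lambda_g$ is known to vanish in the rational cohomology
of $\A{g}({\bC})$, so a complete subvariety $X$ of codimension $g$, hence of
dimension $g(g-1)/2$, would satisfy the hypothesis of the theorem with $i=g$;
for $g\geq 3$ the strict inequality $\dim X < g(g-1)/2$ then gives a
contradiction.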
So the question arises what the maximum dimension of a complete
subvariety of $\A{g}({\bC})$ is.
One might conclude that in some sense the analogue in positive characteristic
of $\A{g}({\bC})$ is the locus of principally polarized
abelian varieties with maximal $p$-rank ($=g$) rather than $\A{g}\otimes {\bF}_p$ itself.
\section{Cohomology of local systems and relations to modular forms}
There is a close connection between the cohomology of moduli spaces
of abelian varieties and modular forms. This connection was discovered
in the 19th century and developed further in the work of Eichler, Shimura,
Kuga, Matsushima and many others. It has developed into a central
theme involving the theory of automorphic representations and
the Langlands philosophy.
We shall restrict here to just one aspect of this. This is work in
progress that is being developed in joint work with Jonas Bergstr\"om
and Carel Faber.
Let us start with $g=1$. The space of cusp forms $S_{2k}$ of weight $2k$
on ${\rm SL}(2,{\bZ})$ has a cohomological interpretation. To describe
it we consider the universal elliptic curve $\pi: \X{1} \to \A{1}$
and let $V:=R^1\pi_* {\bQ}$ be the local system of rank $2$ whose
fibre over $[X]$ is the cohomology $H^1(X,{\bQ})$ of the elliptic curve $X$.
From $V$ we can construct other local systems: define for $a\geq 1$
$$
V_a:= {\rm Sym}^a(V).
$$
This is a local system of rank $a+1$. There is also the $l$-adic analogue
$V^{(l)}=R^1\pi_*{\bQ}_l$
for $l$-adic \'etale cohomology and its variants $V_a^{(l)}$.
The basic result of Eichler and Shimura says that for $a>0$ and $a$ even
there is an isomorphism
$$
H^1_c(\A{1}\otimes {\bC},V_a \otimes {\bC})
= S_{a+2}\oplus \bar{S}_{a+2} \oplus {\bC}.
$$
So we might say that as a mixed Hodge module the compactly supported
cohomology of the local system $V_a\otimes {\bC}$ equals
$S_{a+2}\oplus \bar{S}_{a+2} \oplus {\bC}$.
But this identity can be stretched
further. The left hand side may be replaced by other flavors of cohomology,
for example by $l$-adic \'etale cohomology $H^1(\A{1}\otimes \bar{\bQ}, V_a^{(l)})$
that comes with a natural Galois action of ${\rm Gal}(\bar{\bQ}/{\bQ})$.
In view of this, we replace the left hand side by an Euler characteristic
$$
e_c(\A{1},V_a):= \sum_{i=0}^2 (-1)^i [H^i_c(\A{1},V_a)],
$$
where now the compactly supported cohomology groups are
to be interpreted in an appropriate Grothendieck group, e.g.\ of mixed
Hodge structures when we consider complex cohomology $H^*_c(\A{1}\otimes {\bC},
V_a\otimes {\bC})$ with its mixed Hodge
structure, or in the Grothendieck group of Galois representations when
we consider compactly supported \'etale $l$-adic cohomology
$H^*_{c,et}(\A{1}\otimes \bar{\bQ},V_a^{(l)})$.
On the other hand, for the right hand side Scholl defined in \cite{Scholl}
a Chow motive
$S[2k]$ associated to the space of cusp forms $S_{2k}$ for $k>1$.
Then a sophisticated form of the Eichler-Shimura isomorphism asserts
that we have an isomorphism
$$
e_c(\A{1},V_a)= -S[a+2]-1 \qquad \text{$a\geq 2$ even}\, .
$$
We have a natural algebra of operators, the Hecke
operators, acting on
the spaces of cusp forms, but also on the cohomology since the Hecke
operators are defined by correspondences.
Then the isomorphism above is compatible with the action of the Hecke
operators.
But our moduli space $\A{1}$ is defined over ${\bZ}$. We thus can study
the cohomology over ${\bQ}$ by looking at the fibres $\A{1} \otimes {\bF}_p$
and the corresponding local systems $V_a^{(l)} \otimes {\bF}_p$ (for $l\neq p$)
by using comparison theorems. Now in characteristic $p$ the Hecke operator
is defined by the correspondence $X_0(p)$
of (cyclic) $p$-isogenies $\phi: X \to X'$ between
elliptic curves (i.e.\ we require $\deg \phi =p$); it comes with two maps
$q_i: X_0(p) \to \A{1}$ ($i=1,2$)
sending $\phi$ to its source $X$ and to its target $X'$.
In characteristic $p$ the correspondence decomposes into two components
$$
X_0(p)\otimes {\bF}_p = F_p+F_p^t, \qquad \text{(the congruence relation)}
$$
where $F_p$ is the correspondence $X \mapsto X^{(p)}$ and $F_p^t$ its transpose.
This follows since for such a $p$-isogeny we have that $X'\cong X^{(p)}$ or
$X\cong (X')^{(p)}$.
This implies a relation between the Hecke operator $T(p)$ and the action
of Frobenius on $H^1(\A{1}\otimes \bar{\bF}_p, V_a^{(\ell)})$.
The result is then a relation between the action of Frobenius on
\'etale cohomology
and the action of a Hecke operator on the space of cusp forms
(\cite{Deligne:FM}, Prop.\ 4.8)
$$
{\rm Tr}(F_p,H^1_c(\A{1}\otimes \overline{\bF}_p,V_a))
= {\rm Tr}(T(p),S_{a+2})+1\, .
$$
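For instance, for $a=10$ the space $S_{12}$ is one-dimensional, spanned by
$\Delta=\sum_{n>0}\tau(n)q^n$, and the identity reads
$$
{\rm Tr}(F_p,H^1_c(\A{1}\otimes \overline{\bF}_p,V_{10}))=\tau(p)+1\, .
$$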
We can calculate the traces of the
Frobenii by counting points over finite fields. In fact, if we make a
list of all elliptic curves over ${\bF}_p$ up to isomorphism over ${\bF}_p$
and we calculate the eigenvalues $\alpha_X,\bar{\alpha}_X$
of $F_p$ acting on the \'etale
cohomology $H^1(X,{\bQ}_l)$ then we can calculate the trace of $F_p$
on the cohomology $H^1_c(\A{1}\otimes \bar{\bF}_p,V_a^{(\ell)})$
by summing the expression
$$
\frac{\alpha_X^a+ \alpha_X^{a-1}\bar{\alpha}_X+ \ldots + \bar{\alpha}_X^a}{
\# {\rm Aut}_{{\bF}_p}(X)}
$$
over all $X$ in our list.
So by counting elliptic curves over a finite field ${\bF}_p$ we can calculate
the trace of the Hecke operator $T(p)$ on $S_{a+2}$; in fact, once we have
our list of elliptic curves over ${\bF}_p$ together with the
eigenvalues $\alpha_X, \bar{\alpha}_X$ and the order of
their automorphism group, we can calculate the trace of $T(p)$ on
the space of cusp forms $S_{a+2}$
for \emph{all} $a$.
The term $-1$ in the Eichler-Shimura identity can be interpreted as
coming from the kernel
$$
-1= \sum_i (-1)^i [\ker\big( H^i_c(\A{1},V_a) \to H^i(\A{1},V_a)\big)].
$$
So to avoid this little nuisance we might replace the compactly supported
cohomology by the image of compactly supported cohomology in the usual
cohomology, i.e.\ define the \emph{interior} cohomology $H^i_!$
as the image of compactly supported cohomology in the usual cohomology.
Then the result reads
$$
e_!(\A{1},V_a)= -S[a+2] \qquad \text{for $a>2$ even}.
$$
Some words about the history of the Eichler-Shimura result may be in order
here.
Around 1954 Eichler showed
(see \cite{Eichler1954}) that for some congruence subgroup $\Gamma$ of
${\rm SL}(2,{\bZ})$ the $p$-part of the zeta function of the
corresponding modular curve $X_{\Gamma}$ in characteristic $p$ is given by the
Hecke polynomial for the Hecke operator $T(p)$ acting on the space
of cusp forms of weight $2$ for $\Gamma$. This was generalized to
some other groups by Shimura. M.\ Sato observed in 1962 that
by combining the Eichler-Selberg trace formula for modular forms
with the congruence relation (expressing the Hecke correspondence
in terms of the Frobenius correspondence and its transpose) one
could extend Eichler's results by expressing the Hecke polynomials
in terms of the zeta functions of ${\mathcal M}_{1,n} \otimes {\bF}_p$,
except for problems due to the non-completeness of the moduli spaces.
A little later Kuga and Shimura showed that Sato's idea worked for
compact quotients
of the upper half plane (parametrizing abelian surfaces).
Ihara then made Sato's idea reality in 1967 by combining the
Eichler-Selberg trace formula with results of Deuring and Hasse
on zeta functions of elliptic curves, cf.\ \cite{Ihara}, where
one also finds references to the history of this problem.
A year later Deligne solved in \cite{Deligne:FM}
the problems posed by the non-completeness
of the moduli spaces. Finally Scholl proved the existence of the motive
$S[k]$ for even $k$ in 1990. A different construction of this motive
was given by Consani and Faber in \cite{C-F}.
That the approach sketched above for calculating traces of Hecke operators
by counting over finite fields is not commonly used
is due to the fact that we
have an explicit formula for the traces of the Hecke operators, the
Eichler-Selberg trace formula, cf.\ \cite{Zagier}. But for higher genus $g$,
i.e.\ for modular forms on the symplectic group ${\rm Sp}(2g,{\bZ})$
with $g\geq 2$,
no analogue of the trace formula for Siegel modular forms is known.
Moreover, a closer inspection reveals that vector-valued Siegel modular
forms are the good analogue for higher $g$ of the modular forms on
${\rm SL}(2,{\bZ})$ (rather than classical Siegel modular forms only).
This suggests trying the analogue of the point counting
method for genus $2$ and higher. That is what Carel Faber and I did
for ${\rm Sp}(4,{\bZ})$, and in joint work with Bergstr\"om this was extended
to genus $2$ with level $2$ and also to genus $3$, see \cite{FvdG:CR,BFG,BFG:g=3}.
There are alternative methods that we should point out.
In general one tries to compare a trace formula of
Selberg type (Arthur trace formula) with the Grothendieck-Lefschetz
fixed point formula.
There is a topological trace formula for the trace
of Hecke operators acting on the compactly supported cohomology,
see for example \cite{Harder}, esp.\ the letter
to Goresky and MacPherson there.
Laumon gives a spectral decomposition of the cohomology of
local systems for $g=2$ (for the trivial one in \cite{Laumon1} and in general
in \cite{Laumon2}); cf.\ also the work of Kottwitz in general.
We also refer to Sophie Morel's book \cite{SM}.
But though these methods could in principle lead to explicit results
on Siegel modular forms, as far as I know they have not yet done so.
So we start with the universal principally polarized abelian variety
$\pi: \X{g} \to \A{g}$ and form the local system $V:=R^1\pi_* {\bQ}$
and its $l$-adic counterpart $R^1\pi_* {\bQ}_{l}$ for \'etale
cohomology.
By abuse of notation we will write simply $V$.
We consider $\pi$ as a morphism of stacks.
For any irreducible representation $\lambda$ of ${\rm Sp}_{2g}$
we can construct a corresponding local system $V_{\lambda}$; it is obtained
by applying a Schur functor, cf.\ e.g.\ \cite{F-H}.
So if we denote $\lambda$ by its highest
weight $\lambda_1\geq \lambda_2 \geq \cdots \geq \lambda_g$ then
$V=V_{1,0,\ldots,0}$ and ${\rm Sym}^a(V)=V_{a,0,\ldots,0}$.
We now consider
$$
e_c(\A{g},V_{\lambda}):= \sum_i (-1)^i [H^i_c(\A{g},V_{\lambda})],
$$
where as before the brackets indicate that we consider this in the
Grothendieck group of the appropriate category (mixed Hodge modules,
Galois representations). It is a basic result of Faltings \cite{F-C,F}
that $H^i(\A{g}\otimes {\bC},V_{\lambda} \otimes {\bC})$ and
$H^i_c(\A{g}\otimes {\bC},V_{\lambda}\otimes {\bC})$
carry a mixed Hodge structure. Moreover, the interior cohomology $H^i_!$
carries a pure Hodge structure with the weights equal to the $2^g$
sums of any of the subsets of
$\{ \lambda_1+g,\lambda_2+g-1,\ldots,\lambda_g+1\}$. He also shows that for
regular $\lambda$, that is, when $\lambda_1> \lambda_2 > \cdots
>\lambda_g >0$, the cohomology $H^i_!$ vanishes when $i\neq g(g+1)/2$.
These results of Faltings are the analogue for the non-compact case of results
of Matsushima and Murakami in \cite{MM}; they could use Hodge theory
to deduce the vanishing of cohomology groups and the decompositions
indexed by pairs of elements in the Weyl group.
For $g=2$ a local system $V_{\lambda}$ is specified by giving $\lambda=(a,b)$
with $a\geq b \geq 0$. Then the Hodge filtration is
$$
F^{a+b+3} \subseteq F^{a+2} \subseteq F^{b+1} \subseteq F^0=
H^3_!(\A{2}\otimes{\bC},V_{\lambda}\otimes {\bC}).
$$
Moreover, there is an identification of the first step in the Hodge filtration
$$
F^{a+b+3} \cong S_{a-b,b+3},
$$
with the space $S_{a-b,b+3}$
of Siegel modular cusp forms whose weight $\rho$
is the representation ${\rm Sym}^{a-b}{\rm St} \otimes \det({\rm St})^{b+3}$
with ${\rm St}$ the standard representation of ${\rm GL}(2,{\bC})$.
Again, as for $g=1$, we have an algebra of Hecke operators induced by geometric
correspondences, and it acts both on the cohomology and on the modular forms,
compatibly with the isomorphism.
Given our general ignorance of vector-valued Siegel modular forms for
$g>1$ the obvious question at this point is whether we can mimic the
approach sketched above for calculating the traces of the Hecke
operators by counting over finite fields.
Note that by Torelli we have a morphism $\M{2}\to \A{2}=\M{2}^{\rm ct}$,
where $\M{2}^{\rm ct}$ is the moduli space of curves of genus $2$
of compact type. This
means that we have to include besides the smooth curves of genus $2$
the stable curves of genus $2$ that are a union of two elliptic curves.
Can we use this to calculate the traces of the Hecke operator $T(p)$
on the spaces of vector-valued Siegel modular forms?
Two problems arise. The first is the so-called Eisenstein cohomology. This
is
$$
e_{\rm Eis}(\A{2}, V_{a,b}):= \sum_i (-1)^i [\ker\big(H^i_c(\A{2},V_{a,b}) \to H^i(\A{2},V_{a,b})\big)].
$$
In the case of genus $1$ this expression was equal to the innocent $-1$,
but for higher $g$ this is a more complicated term. The second problem
is the \emph{endoscopy}: the terms that do not see the first and the last
step of the Hodge filtration.
For torsion-free groups there is work by
Schwermer (see \cite{Schwermer})
on the Eisenstein cohomology; cf.\ also the work of Harder \cite{Harder}.
And there is an extensive literature on endoscopy.
But explicit formulas were not available.
On the basis of numerical calculations Carel Faber and I guessed
in \cite{FvdG:CR} a formula
for the Eisenstein cohomology. I was able to prove this formula in
\cite{vdG:EC} for regular $\lambda$.
We also made a guess for the endoscopic term.
Putting this together we get the following conjectural formula
for $(a,b)\neq (0,0)$.
\begin{conjecture}\label{g=2conj}
The trace of the Hecke operator $T(p)$ on the space $S_{a-b,b+3}$ of
cusp forms on ${\rm Sp}(4,{\bZ})$ equals
$$
-{\rm Tr}(F_p,e_c(\A{2}\otimes {\bF}_p,V_{a,b}))+
{\rm Tr}(F_p,e_{2,\rm extra}(a,b)),
$$
where the term $e_{2,\rm extra}(a,b)$ is defined as
$$
s_{a-b+2}-s_{a+b+4}(S[a-b+2]+1) {\bL}^{b+1} +
\begin{cases} S[b+2]+1 & a \, \text{even},\\ -S[a+3] & a \, \text{odd}, \\
\end{cases}
$$
and
$s_n= \dim S_n({\rm SL}(2,{\bZ}))$ is the dimension of the space of
cusp forms on ${\rm SL}(2,{\bZ})$ and ${\bL}=h^2({\bP}^1)$
is the Lefschetz motive.
\end{conjecture}
If $a>b=0$ or $a=b>0$ then one should put $s_2=-1$ and $S[2]=-1-{\bL}$ in
the formula.
In the case of regular local systems (i.e.\ $a>b>0$) our conjecture can be
deduced from results of Weissauer as he shows in his preprint
\cite{Weissauer:THO}. In the case of local system of non-regular
highest weight the conjecture is still open.
We have counted the curves of genus $2$ of compact type for all primes
$p \leq 37$. This implies that we can calculate the traces of the
Hecke operator $T(p)$ for $p\leq 37$ on the space of cusp forms $S_{a-b,b+3}$
{\sl for all} $a>b>0$, and assuming the conjecture also for $a>b=0$ and $a=b>0$.
Our results are in accordance with results on the numerical Euler
characteristic $\sum (-1)^i \dim H^i_c(\A{2}, V_{a,b})$ due to Getzler
(\cite{Getzler}) and with the
dimension formula for the space of cusp forms $S_{a-b,b+3}$ due to Tsushima
(\cite{Tsushima}).
We give an example. For $(a,b)=(11,5)$ we have
$$
e_c(\A{2},V_{11,5})= -{\bL}^6 -S[6,8]
$$
with $S[6,8]$ the hypothetical motive associated to the ($1$-dimensional)
space of cusp forms $S_{6,8}$.
We list a few eigenvalues $\lambda(p)$ and $\lambda(p^2)$
of the Hecke operators. This allows us to give
the characteristic polynomial of Frobenius (and the Euler factor
of the spinor $L$-function of the Siegel modular form)
$$
1-\lambda(p)X+(\lambda(p)^2-\lambda(p^2)-p^{a+b+2})X^2-\lambda(p)p^{a+b+3}X^3+
p^{2(a+b+3)} X^4
$$
and its slopes.
\begin{center}
\begin{tabular}{crrc}
\hline
$p$ & $\lambda(p)$ & $\lambda(p^2)$ & slopes \\
\hline
$2$ & $0$ & $-57344$ & $13/2,\ 25/2$ \\
$3$ & $-27000$ & $143765361$ & $3,7,12,16$ \\
$5$ & $2843100$ & $-7734928874375$ & $2,7,12,17$ \\
$7$ & $-107822000$ & $4057621173384801$ & $0,6,13,19$ \\
\hline
\end{tabular}
\end{center}
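As a small consistency check, in each row the four slopes add up to
$2(a+b+3)=38$, as the functional equation of the Euler factor above requires;
for $p=2$ the two listed slopes each occur with multiplicity two.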
Another indication that the computer counts of curves of genus $2$
are correct comes from a conjecture of Harder about congruences
between Hecke eigenvalues of cusp forms for genus $1$ and genus $2$.
Harder had the idea that there should be such congruences many
years ago (cf.\ \cite{Harder}),
but our results motivated him to make his conjecture
precise and explicit. He conjectured that if a (not too small,
or better an ordinary)
prime $\ell$ divides a critical
value $s$ of the $L$-function of an eigenform $f$ on ${\rm SL}(2,{\bZ})$
of weight $r$, then there should be a vector-valued
Siegel modular eigenform $F$ of prescribed weight depending on $s$ and $r$,
and a congruence modulo $\ell$ between the eigenvalues of the Hecke
operator $T(p)$ for $f$ and $F$.
We refer to \cite{BGHZ}, the papers
by Harder \cite{Harder:123} and van der Geer \cite{vdG:SMF}
there, for an account of this fascinating story.
These congruences generalize the famous congruence
$$
\tau(p) \equiv p^{11}+1 \, (\bmod \, 691)
$$
for the Hecke eigenvalues $\tau(p)$ ($p$ a prime)
of the modular form $\Delta=\sum_{n>0} \tau(n) q^n$
of weight $12$ on ${\rm SL}(2,{\bZ})$.
One example of such a congruence is the congruence
$$
\lambda(p) \equiv p^8+a(p)+p^{13} \, (\bmod \, 41)
$$
where $f=\sum a(n) q^n$ is the normalized ($a(1)=1$) cusp form of
weight $22$ on ${\rm SL}(2,{\bZ})$ and the $\lambda(p)$ are the Hecke
eigenvalues of the genus $2$ Siegel cusp form $F \in S_{4,10}$.
We checked this congruence for all primes $p$ with $p\leq 37$.
For example, $a(37)=22191429912035222$ and $\lambda(37)=
11555498201265580$.
In joint work with Bergstr\"om and Faber we extended this to
level~$2$. One considers the moduli space $\A{2}[2]$ of
principally polarized abelian surfaces of level~2. This moduli
space contains as a dense open subset the moduli space
$\M{2}[w^6]$ of curves of genus~$2$ together with six Weierstrass
points. It comes with an action of $S_6\cong {\rm Sp}(4,{\bZ}/2{\bZ})$.
We formulated an analogue of conjecture \ref{g=2conj} for level~$2$.
Assuming this conjecture we can calculate the traces of the Hecke
operators $T(p)$ for $p\leq 37$ on all the spaces of cusp forms of
level~$2$. Using these numerical data we could observe liftings
from genus~$1$ to genus~$2$ and could make precise conjectures
about such liftings; and again we could predict and verify numerically
congruences between genus~$1$ and genus~$2$ eigenforms.
We give an example. For $(a,b)=(4,2)$ we find
$$
e_c(\A{2}[2],V_{4,2})= -45 {\bL}^3+45 -S[\Gamma_2[2],(2,5)]
$$
and assuming our conjecture
we can calculate the traces of the Hecke operators on the space
of cusp forms of weight $(2,5)$ on the level $2$ congruence
subgroup $\Gamma_2[2]$ of ${\rm Sp}(4,{\bZ})$;
this space is a representation of
type $[2^2,1^2]$ for the symmetric group $S_6$ and is generated by
one Siegel modular form; for this Siegel modular form we have
the eigenvalue
$\lambda(23)=-323440$ for $T(23)$.
It is natural to ask how the story continues for genus $3$. The first remark is
that the Torelli map $\M{3} \to \A{3}$ is a morphism of degree $2$
in the sense of stacks. This is due to the fact that every principally
polarized abelian variety has a non-trivial automorphism $-{\rm id}$,
while the general curve of genus $3$ has a trivial automorphism group.
This has as a consequence that for local systems $V_{a,b,c}$ with $a+b+c$
odd the cohomology on $\A{3}$ vanishes, but on $\M{3}$ it need not; and in
fact in general it does not.
In joint work with Bergstr\"om and Faber we managed to formulate
an analogue of \ref{g=2conj} for genus $3$, see \cite{BFG:g=3}.
Assuming the conjecture
we are able to calculate the traces of Hecke operators $T(p)$ on the
space of cusp forms $S_{a-b,b-c,c+4}$ for
all primes $p$ for which we did the counting (at least $p\leq 19$).
Again the numerical data fit with calculations of dimensions of
spaces of cusp forms and numerical Euler characteristics. (These numerical
Euler characteristics were calculated in \cite{B-vdG:EC}.) And again
we could observe congruences between eigenvalues for $T(p)$ for
$g=3$ eigenforms and those of genus $1$ and $2$.
We give two examples:
$$
e_c(\A{3},V_{10,4,0})= -{\bL}^7+{\bL}+S[6,8],
$$
the same $S[6,8]$ for genus $2$ we met above. And for example
$$
e_c(\A{3},V_{8,4,4})= -S[12] \, {\bL}^6+S[12] +S[4,0,8],
$$
where genuine Siegel modular forms (of weight $(4,0,8)$)
of genus $3$ do occur.
We refer to \cite{BFG:g=3} for the details and to the Chapter by
Faber and Pandharipande for the cohomology of local systems
and modular forms on the moduli spaces of curves.
\noindent
{\sl Acknowledgement}
The author thanks Jonas Bergstr\"om and Carel Faber
for the many enlightening discussions we had on topics dealt with in
this survey. Thanks are due to T.\ Katsura for inviting me to Japan where
I found the time to finish this survey. I am also greatly indebted to
Torsten Ekedahl, from whom I learned so much. It was a great shock to
hear that he passed away; his gentle and generous personality
and sharp intellect will be deeply missed.
\end{document}
\begin{document}
\title[Generalizing DN operators]{Generalizing Dirichlet-to-Neumann operators}
\author{Liping Li}
\address{RCSDS, HCMS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China. }
\address{Bielefeld University, Bielefeld, Germany.}
\email{[email protected]}
\thanks{The first named author is partially supported by NSFC (No. 11931004), and Alexander von Humboldt Foundation in Germany.}
\subjclass[2010]{Primary 31C25, 60J60.}
\keywords{Dirichlet-to-Neumann operators, Dirichlet forms, Trace Dirichlet forms, Perturbations, Positivity preserving coercive forms, $h$-transformations, Irreducibility, Calder\'on's problem}
\begin{abstract}
The aim of this paper is to study the Dirichlet-to-Neumann operators in the context of Dirichlet forms and especially to figure out their probabilistic counterparts. Regarding irreducible Dirichlet forms, we will show that the Dirichlet-to-Neumann operators for them are associated with the trace Dirichlet forms corresponding to the time changed processes on the boundary. Furthermore, the Dirichlet-to-Neumann operators for perturbations of Dirichlet forms will be also explored. It turns out that for typical cases such a Dirichlet-to-Neumann operator corresponds to a quasi-regular positivity preserving (symmetric) coercive form, so that there exists a family of Markov processes associated with it via Doob's $h$-transformations.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Let $\Omega\subset {\mathbb R}^d$, $d\geq 2$, be a bounded Lipschitz domain, i.e. the boundary $\Gamma:=\partial \Omega$ is locally the graph of a Lipschitz function.
The classical Dirichlet-to-Neumann operator (DN operator in abbreviation) $D$ over $\Omega$ is roughly defined as follows: for $\varphi\in L^2(\Gamma)$ such that there exists a (unique) harmonic function $u\in H^1(\Omega)$, i.e. $\Delta u=0$ in $\Omega$ in the weak sense, with $\mathop{\mathrm{Tr}}(u)=\varphi$ and having a weak normal derivative $\partial_\mathbf{n} u\in L^2(\Gamma)$, $D\varphi$ is defined as $\frac{1}{2}\partial_\mathbf{n} u$. The domain $\mathcal{D}(D)$ of $D$ is the totality of all such $\varphi\in L^2(\Gamma)$. Here $\mathop{\mathrm{Tr}}(u)$ is the trace of $u$ on $\Gamma$ and the weak normal derivative $\partial_\mathbf{n} u$ is determined by the Green-Gauss formula \eqref{eq:GreenGauss}. The rigorous definition will be stated in Definition~\ref{DEF31}.
A systematic introduction to DN operators can be found in, e.g., the monograph of Taylor \cite[Section 12C]{T96}. Analogous DN operators can be defined for other operators, such as Schr\"odinger operators, in place of the Laplacian; see, e.g., \cite{AE20, AE14, BE17, EO14, EO19}. Under mild conditions, these DN operators are lower semi-bounded and self-adjoint on $L^2(\Gamma)$. Various analytic approaches to studying them have appeared in recent years. For example, ter Elst and Ouhabaz \cite{EO14, EO19} showed that the strongly continuous semigroup corresponding to a DN operator is given by a kernel which satisfies Poisson upper bounds. Other properties like positivity and irreducibility for the semigroup are explored in, e.g., \cite{AM12, AE20}. Based on an indirect ellipticity property called hidden compactness, Arendt et al. \cite{AE14} investigated DN operators that may be multi-valued. However, the literature on probabilistic approaches to DN operators is very scarce. To our knowledge, it was first observed in \cite{BV17} that the classical DN operator is associated with a certain Markov process on $\Gamma$.
Our first aim in this paper is to generalize DN operators to those for operators related to irreducible Dirichlet forms. For the terminology and notation concerning Dirichlet forms we refer to, e.g., \cite{FOT11, CF12}. Let ${\mathscr L}$ be a self-adjoint operator that is the generator of a regular and irreducible Dirichlet form $({\mathscr E},{\mathscr F})$ on $L^2(E,m)$. The `boundary' and the underlying measure on the `boundary' are chosen as a quasi closed set $F\subset E$ and a positive Radon smooth measure $\mu$ with $\text{qsupp}[\mu]=F$, where $\text{qsupp}[\mu]$ stands for the quasi support of $\mu$. A function $u$ is called harmonic in the `interior' $G:=E\setminus F$ with respect to ${\mathscr L}$ if $u\in {\mathscr F}_{\mathrm{e}}$ and
$${\mathscr E}(u,g)=0,\quad \forall g\in {\mathscr F}_{\mathrm{e}}^G,$$
where ${\mathscr F}_{\mathrm{e}}$ is the extended Dirichlet space of $({\mathscr E},{\mathscr F})$ and ${\mathscr F}^G_{\mathrm{e}}:=\{u\in {\mathscr F}_{\mathrm{e}}: u=0,\ {\mathscr E}\text{-q.e. on }F\}$. Then the DN operator ${\mathscr N}$ for ${\mathscr L}$ (also called the DN operator for $({\mathscr E},{\mathscr F})$) maps, by definition, `the trace on the boundary' $\varphi:=u|_F$ of a harmonic function $u$ to its `weak normal derivative on the boundary' $f$, determined by
\begin{equation}\label{eq:11}
{\mathscr E}(u,v)=\int_F f\, v|_F\, d\mu,\quad \text{for any } v\in {\mathscr F}_{\mathrm{e}} \text{ with }v|_F\in L^2(F,\mu).
\end{equation}
Here the `trace' $u|_F$ is the restriction of the (${\mathscr E}$-quasi-continuous) function $u$ to $F$. We will show in Theorem~\ref{THM26} that ${\mathscr N}$ is a well-defined positive self-adjoint operator on $L^2(F,\mu)$. More significantly, $-{\mathscr N}$ is the $L^2$-generator of the trace Dirichlet form of $({\mathscr E},{\mathscr F})$ on $L^2(F,\mu)$, whose associated Markov process is the time changed process of $X$ by the positive continuous additive functional (PCAF in abbreviation) corresponding to the Revuz measure $\mu$. Here $X$ is the Markov process associated with $({\mathscr E},{\mathscr F})$. This new approach leads to a rich family of DN operators containing those for the self-adjoint operators appearing in the literature, e.g., \cite{V21, BV17, W15, W18}. Particularly, the classical DN operator $D$ can be recovered as follows: consider $({\mathscr E},{\mathscr F})=(\frac{1}{2}\mathbf{D},H^1(\Omega))$ on $L^2(\bar{\Omega})$, where $\bar{\Omega}$ is the closure of $\Omega$ and
\[
\mathbf{D}(u,v):=\int_\Omega \nabla u(x) \nabla v(x)\,dx,\quad u,v\in H^1(\Omega),
\]
which is associated with the reflected Brownian motion on $\bar{\Omega}$. Take $F=\Gamma$ and $\mu=\sigma$, the surface measure on $\Gamma$. Then $D$ is identified with the DN operator for $({\mathscr E},{\mathscr F})$ on $L^2(\Gamma)$.
Nevertheless this new approach does not seem entirely satisfactory, because Schr\"odinger operators like $\frac{1}{2}\Delta-V$ with $V\in L^\infty(\Omega)$ are in general not associated with a Dirichlet form. To overcome this limitation, we further introduce the DN operators for perturbations of $({\mathscr E},{\mathscr F})$. Let $\mathbf{S}$ be the family of all positive smooth measures with respect to ${\mathscr E}$ and take $\kappa=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$, i.e. $\kappa^\pm\in \mathbf{S}$ and $\kappa^+\perp \kappa^-$. Then
\[
\begin{aligned}
{\mathscr F}^\kappa:={\mathscr F}\cap L^2(E,|\kappa|), \quad {\mathscr E}^\kappa(u,v):={\mathscr E}(u,v)+\int_E uv\,d\kappa,\; u,v\in {\mathscr F}^\kappa
\end{aligned}\]
is called the perturbation of $({\mathscr E},{\mathscr F})$ by $\kappa$. Under suitable conditions $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is a lower bounded symmetric closed form, whose $L^2$-generator ${\mathscr L}_\kappa$ is upper semi-bounded and self-adjoint. Particularly, $\frac{1}{2}\Delta-V$ corresponds to the perturbation of $(\frac{1}{2}\mathbf{D},H^1(\Omega))$ by $V(x)dx$. The DN operator ${\mathscr N}_\kappa$ for ${\mathscr L}_\kappa$ is defined in a similar way to ${\mathscr N}$; see Definition~\ref{DEF41}. One of the main results in this part, Theorem~\ref{THM44}, states a useful sufficient condition for the self-adjointness of ${\mathscr N}_\kappa$ on $L^2(F,\mu)$ by an argument involving a certain compact embedding property.
A problem of great interest to us is to figure out the probabilistic counterparts of DN operators. This is completely solved for the DN operators for Dirichlet forms. However, we get stuck in the perturbation case, because the $L^2$-semigroup associated with ${\mathscr N}_\kappa$ is usually not Markovian. Instead, the main result for this case, Theorem~\ref{THM312}, states that under certain conditions ${\mathscr N}_\kappa$ corresponds to a quasi-regular positivity preserving (symmetric) coercive form, so that there exists a family of Markov processes that are associated with ${\mathscr N}_\kappa$ via Doob's $h$-transformations. Several concrete perturbations, including constant perturbations of the Laplacian, perturbations of uniformly elliptic operators and perturbations supported on the boundary, will receive special attention. For most of them, the $L^2$-semigroup associated with ${\mathscr N}_\kappa$ is shown to be irreducible. As a result, ${\mathscr N}_\kappa$ has a (unique) ground state, i.e. the smallest element in the spectrum $\sigma({\mathscr N}_\kappa)$ is a simple eigenvalue admitting a strictly positive eigenfunction, called the ground state, and no other eigenvalues admit strictly positive eigenfunctions. More significantly, this ground state corresponds to the unique $h$-transformation such that the Markov process obtained by the $h$-transformation associated with ${\mathscr N}_\kappa$ is recurrent; see Example~\ref{EXA65}, Theorems~\ref{THM412} and \ref{THM55}.
The development of DN operators for perturbations is also motivated by the so-called Calder\'on's problem, initiated in the pioneering contribution \cite{C80}. This is an inverse problem asking whether one can determine the electrical conductivity of a medium by making voltage and current measurements at the boundary of the medium. A systematic survey can be found in \cite{U12}. To be more precise mathematically, take two potentials $V_1,V_2\in L^\infty(\Omega)$ and let $D_{V_i}$ be the DN operator for the Schr\"odinger operator $\frac{1}{2}\Delta-V_i$. The classical Calder\'on problem asks whether $D_{V_1}=D_{V_2}$ implies $V_1=V_2$. The case of dimension $d\geq 3$ was solved in the seminal paper \cite{SU87}, and the answer is positive. The case of dimension $2$ is considered in, e.g., \cite{B08}. Other related works include, e.g., \cite{SU91, BGU21, GSU20}. In this paper we will also give some remarks on Calder\'on's problem in the context of Dirichlet forms.
The rest of this paper is organized as follows. Section \S\ref{SEC2} is devoted to introducing and studying the DN operator for a regular and irreducible Dirichlet form. The classical DN operator will be recovered in \S\ref{SEC4}. In \S\ref{SEC3}, we will explore DN operators for perturbations.
Their probabilistic counterparts are figured out in \S\ref{SEC34}. In \S\ref{Revisiting}, the DN operator $D_\lambda$ for $\frac{1}{2}\Delta-\lambda$ with a constant $\lambda\in {\mathbb R}$ is explored. The quadratic form induced by $D_\lambda$ is formulated in Theorem~\ref{THM44-2}. As a corollary, we obtain the irreducibility of the $L^2$-semigroup associated with $D_\lambda$ in Corollary~\ref{COR410} for $\lambda>\lambda^\text{D}_1/2$, where $\lambda^\text{D}_1$ is the first eigenvalue of the Dirichlet Laplacian. Then Theorem~\ref{THM412} states further properties of the Markov processes obtained by $h$-transformations associated with $D_\lambda$. Perturbations of uniformly elliptic operators and perturbations supported on the boundary are studied in \S\ref{SEC5} and \S\ref{SEC7} respectively.
\subsection*{Notations}
We collect some notations that will be frequently used, for handy reference. Given a topological space $E$, $\mathcal{B}(E)$ represents the family of all Borel measurable functions on $E$. For a family ${\mathscr G}$ of functions, $p{\mathscr G}$ (resp. $b{\mathscr G}$) stands for the subfamily consisting of positive (resp. bounded) functions in ${\mathscr G}$. Given a measure $m$ or a function $u$ on $E$ and $F\subset E$, $m|_F$ or $u|_F$ stands for the restriction of $m$ or $u$ to $F$. For $u,v\in L^2(E,m)$, $(u,v)_m:=\int_E uv\, dm$. Given an operator ${\mathscr L}$ on a Hilbert space, $\mathcal{D}({\mathscr L})$ stands for its domain and $\sigma({\mathscr L})$ for its spectrum.
The symbol $\lesssim$ (resp. $\gtrsim$) means that the left (resp. right) term is bounded by the right (resp. left) term multiplied by an inessential constant.
For a regular or quasi-regular Dirichlet form $({\mathscr E},{\mathscr F})$ on $L^2(E,m)$, its extended Dirichlet space is denoted by ${\mathscr F}_{\mathrm{e}}$. For convenience, every function in ${\mathscr F}_{\mathrm{e}}$ is taken to be its ${\mathscr E}$-quasi continuous $m$-version unless otherwise stated. For $\alpha\geq 0$, ${\mathscr E}_\alpha(u,v):={\mathscr E}(u,v)+\alpha (u,v)_m$ and $\|u\|_{{\mathscr E}_\alpha}:={\mathscr E}_\alpha(u,u)^{1/2}$ for all $u,v\in{\mathscr F}$.
Let $\Omega\subset {\mathbb R}^d$, $d\geq 2$, be a domain. Set $$H^1(\Omega)=W^{1,2}(\Omega)=\{u\in L^2(\Omega): \partial_{x_i}u\in L^2(\Omega), 1\leq i\leq d\}$$ and for $u,v\in H^1(\Omega)$, $\mathbf{D}(u,v):=\int_\Omega \nabla u(x)\nabla v(x)\,dx$. The notation $H^1_0(\Omega)$ stands for the closure of $C_c^\infty(\Omega)$ in $H^1(\Omega)$, where $C_c^\infty(\Omega)$ is the family of all smooth functions with compact support in $\Omega$. Similarly, $H^s(\Omega)=W^{s,2}(\Omega)$ denotes the Sobolev space of order $s\geq 0$. More generally, $W^{m,p}(\Omega)$, $m\in \mathbb{N}$, $p\geq 1$, is the usual Sobolev space over $\Omega$.
\section{DN operators for irreducible Dirichlet forms}\label{SEC2}
\subsection{Basic setting}
We are concerned with a regular and irreducible Dirichlet form $({\mathscr E},{\mathscr F})$ on $L^2(E,m)$ associated with an $m$-symmetric Markov process $X=(X_t, \mathbf{P}_x)$ on $E_\partial:=E\cup \{\partial\}$, where $E$ is a locally compact separable metric space with an attached trap $\partial$, and $m$ is a fully supported Radon measure on $E$. The irreducibility implies in particular that $({\mathscr E},{\mathscr F})$ is either recurrent or transient. Denote by $\mathring{\mathbf{S}}$ the totality of positive Radon measures on $E$ charging no ${\mathscr E}$-polar sets. Take a non-zero $\mu\in \mathring{\mathbf{S}}$ and set $F:=\text{qsupp}[\mu]$ (see \cite[Definition~3.3.4]{CF12}). Note that $F$ is quasi closed. For convenience, we always take a quasi closed (resp. quasi open) set to be a nearly Borel and finely closed (resp. finely open) ${\mathscr E}$-q.e. version, so that its first hitting time, e.g., $\sigma_F$ in Lemma~\ref{LM21}, is well defined.
Let $G:=E\setminus F$. Denote by $({\mathscr E}^G,{\mathscr F}^G)$ the part Dirichlet form of $({\mathscr E},{\mathscr F})$ on $G$, i.e.
\[
{\mathscr F}^G=\{u\in {\mathscr F}: u=0,\ {\mathscr E}\text{-q.e. on }F\},\quad {\mathscr E}^G(u,v)={\mathscr E}(u,v),\; u,v\in {\mathscr F}^G.
\]
Then $({\mathscr E}^G,{\mathscr F}^G)$ is quasi-regular on $L^2(G,m|_G)$ associated with the part process of $X$ on $G$, whose extended Dirichlet space is
\[
{\mathscr F}_{\mathrm{e}}^{G}:=\{u\in {\mathscr F}_{\mathrm{e}}: u=0,\ {\mathscr E}\text{-q.e. on }F\};
\]
see \cite[Theorems~3.3.8 and 3.4.9]{CF12}. Write
\[
\mathcal{H}_F:=\{u\in {\mathscr F}_{\mathrm{e}}: {\mathscr E}(u,v)=0,\ \forall v\in {\mathscr F}_{\mathrm{e}}^{G}\}.
\]
Note that ${\mathscr F}^G_{\mathrm{e}}\cap \mathcal{H}_F=\{0\}$.
The following decomposition is elementary due to the irreducibility.
\begin{lemma}\label{LM21}
Every $u\in {\mathscr F}_{\mathrm{e}}$ can be written as a sum
\[
u=u_1+u_2,
\]
where $u_1\in {\mathscr F}_{\mathrm{e}}^{G}$ and $u_2\in \mathcal{H}_F$. This decomposition is unique, and indeed
\[
u_1=u-\mathbf{H}_F u,\quad u_2=\mathbf{H}_F u,
\]
where $\mathbf{H}_F u(x):=\mathbf{E}_x\left[u(X_{\sigma_F}); \sigma_F<\infty\right]$ for $x\in E$ and $\sigma_F:=\inf\{t>0:X_t\in F\}$.
\end{lemma}
\begin{proof}
Note that $({\mathscr E},{\mathscr F})$ is recurrent or transient due to the irreducibility. When $({\mathscr E},{\mathscr F})$ is transient, the assertions have been concluded in, e.g., \cite[Theorem~3.4.2]{CF12}. When $({\mathscr E},{\mathscr F})$ is recurrent, the above decomposition still holds in view of \cite[Theorem~3.4.8]{CF12}, and the uniqueness can be easily obtained by means of \cite[Theorem~5.2.16]{CF12}.
\end{proof}
\begin{remark}
With a slight abuse of notation, we may write
\begin{equation}\label{eq:Fe}
{\mathscr F}_{\mathrm{e}} = {\mathscr F}_{\mathrm{e}}^G\oplus \mathcal{H}_F.
\end{equation}
When $({\mathscr E},{\mathscr F})$ is transient, ${\mathscr F}_{\mathrm{e}}$ is a Hilbert space with the inner product ${\mathscr E}$, and ${\mathscr F}_{\mathrm{e}}^G$ is clearly a closed subspace of ${\mathscr F}_{\mathrm{e}}$. Then $\mathcal{H}_F$ is the orthogonal complement of ${\mathscr F}_{\mathrm{e}}^{G}$ and \eqref{eq:Fe} is an orthogonal direct sum decomposition. However, when $({\mathscr E},{\mathscr F})$ is recurrent, ${\mathscr F}_{\mathrm{e}}$ is not a Hilbert space, because $1\in {\mathscr F}_{\mathrm{e}}$ and ${\mathscr E}(1,1)=0$; see, e.g., \cite[Theorem~1.6.3]{FOT11}.
\end{remark}
\subsection{DN operators}
For $\varphi\in L^2(F,\mu)$, $u$ is called the \emph{harmonic extension} of $\varphi$ if $u\in \mathcal{H}_F$ and $u|_F=\varphi$.
Note that the harmonic extension of $\varphi$ is unique if it exists.
We introduce the following DN operator for $({\mathscr E},{\mathscr F})$.
\begin{definition}\label{DEF23}
Let $\mu\in \mathring{\mathbf{S}}$ and $F=\text{qsupp}[\mu]$. The following operator
\begin{equation}\label{eq:DN2}
\begin{aligned}
\mathcal{D}({\mathscr N})=&\big\{\varphi\in L^2(F,\mu): \exists\,u\in \mathcal{H}_F\text{ and }f\in L^2(F,\mu)\text{ such that }u|_F=\varphi, \\
&\qquad\qquad {\mathscr E}(u,v)=\int_{F} f\, v|_F\, d\mu \text{ for any }v\in {\mathscr F}_{\mathrm{e}}\text{ with }v|_F\in L^2(F,\mu)\big\},\\
{\mathscr N} \varphi=& f,\quad \varphi\in \mathcal{D}({\mathscr N})
\end{aligned}\end{equation}
is called the \emph{DN operator for $({\mathscr E},{\mathscr F})$ on $L^2(F,\mu)$}.
\end{definition}
In what follows, we figure out the probabilistic counterpart of ${\mathscr N}$. Note that $\mu$ induces a PCAF $A=(A_t)_{t\geq 0}$, whose support is $F$; see, e.g., \cite[Theorem~5.1.5]{FOT11}. Let $\zeta:=\inf\{t>0:X_t=\partial\}$ be the lifetime of $X$ and we denote $F\cup \{\partial\}$ by $F_\partial$, regarding it as a topological subspace of $E_\partial$. The right continuous inverse $\tau_t$ of $A$ is defined as
\[
\tau_t:=\left\lbrace
\begin{aligned}
&\inf\{s>0: A_s>t\},\quad t<A_{\zeta-}, \\
&\infty,\qquad\qquad\qquad \qquad\;\, t\geq A_{\zeta-}.
\end{aligned} \right.
\]
Set $\check{X}_t:=X_{\tau_t}$ for $t\geq 0$ and $\check{\zeta}:=A_{\zeta-}$.
Then $\check{X}=(\check{X}_t, \check{\zeta}, \{\mathbf{P}_x\}_{x\in F_\partial})$ is the so-called \emph{time changed process} of $X$ by the PCAF $A$. It is known that $\check{X}$ is a right process associated with the quasi-regular Dirichlet form $(\check{{\mathscr E}},\check{{\mathscr F}})$ on $L^2(F,\mu)$:
\begin{equation}\label{eq:traceDirichletform}
\begin{aligned}
\check{{\mathscr F}}&={\mathscr F}_{\mathrm{e}}|_F\cap L^2(F,\mu),\\
\check{{\mathscr E}}(\varphi, \phi)&={\mathscr E}(\mathbf{H}_F \varphi, \mathbf{H}_F\phi),\quad \varphi, \phi\in \check{{\mathscr F}},
\end{aligned}
\end{equation}
where $\mathbf{H}_F \varphi(x):=\mathbf{E}_x\left[\varphi(X_{\sigma_F}); \sigma_F<\infty\right]$ for $x\in E$; see, e.g., \cite[Theorem~5.2.7]{CF12}. The Dirichlet form $(\check{{\mathscr E}},\check{{\mathscr F}})$ is called the \emph{trace Dirichlet form} of $({\mathscr E},{\mathscr F})$ on $L^2(F,\mu)$. Let $F^*$ be the topological support of $\mu$, i.e. the smallest closed set such that $\mu(E\setminus F^*)=0$. Then $F\subset F^*$, ${\mathscr E}$-q.e. (but $F^*\setminus F$ is not necessarily ${\mathscr E}$-polar). It is worth noting that $(\check{{\mathscr E}},\check{{\mathscr F}})$ can be realized as a regular Dirichlet form on $L^2(F^*,\mu)$ ($=L^2(F,\mu)$); see, e.g., \cite[Theorem~5.2.13]{CF12}.
\begin{theorem}\label{THM26}
Let ${\mathscr N}$ be the DN operator for $({\mathscr E},{\mathscr F})$ on $L^2(F,\mu)$ and $(\check{{\mathscr E}},\check{{\mathscr F}})$ be the trace Dirichlet form of $({\mathscr E},{\mathscr F})$ on $L^2(F,\mu)$. Then $-{\mathscr N}$ is identified with the generator of $(\check{{\mathscr E}},\check{{\mathscr F}})$ on $L^2(F,\mu)$. Particularly, ${\mathscr N}$ is a positive and self-adjoint operator on $L^2(F,\mu)$ corresponding to a strongly continuous Markovian semigroup.
\end{theorem}
\begin{proof}
It suffices to show that $\varphi\in \mathcal{D}({\mathscr N})$ with ${\mathscr N} \varphi=f$ if and only if $\varphi\in \check{{\mathscr F}}$ and $\check{{\mathscr E}}(\varphi, \phi)=(f, \phi)_\mu$ for any $\phi\in \check{{\mathscr F}}$. To do this, we first take $\varphi\in \mathcal{D}({\mathscr N})$ with ${\mathscr N}\varphi=f$. Then there exists $u\in \mathcal{H}_F\subset {\mathscr F}_{\mathrm{e}}$ such that $\varphi=u|_F\in L^2(F,\mu)$ and
\begin{equation}\label{eq:29}
{\mathscr E}(u,v)=\int_{F} f \cdot v|_F \,d\mu
\end{equation}
for any $v\in {\mathscr F}_{\mathrm{e}}$ with $v|_F\in L^2(F,\mu)$. Particularly $\varphi\in {\mathscr F}_{\mathrm{e}}|_F\cap L^2(F,\mu)=\check{{\mathscr F}}$. Note that
\[
{\mathscr E}(u,v)={\mathscr E}(u,\mathbf{H}_Fv)={\mathscr E}(\mathbf{H}_Fu,\mathbf{H}_Fv).
\]
It follows from \eqref{eq:29} that for any $\phi\in \check{{\mathscr F}}$ with $\phi=v|_F$ and $v\in {\mathscr F}_{\mathrm{e}}$,
\[
\check{{\mathscr E}}(\varphi, \phi)={\mathscr E}(\mathbf{H}_Fu,\mathbf{H}_F v)=(f, \phi)_\mu.
\]
Conversely, let $\varphi\in \check{{\mathscr F}}$ and $f\in L^2(F,\mu)$ be such that $\check{{\mathscr E}}(\varphi, \phi)=(f, \phi)_\mu$ for any $\phi\in \check{{\mathscr F}}$. Then $u:=\mathbf{H}_F \varphi\in \mathcal{H}_F$ is the harmonic extension of $\varphi$. We assert that $u,f$ satisfy the condition in \eqref{eq:DN2}. To do this, set $\phi:=v|_F\in \check{{\mathscr F}}$ for $v\in {\mathscr F}_{\mathrm{e}}$ with $v|_F\in L^2(F,\mu)$. By means of Lemma~\ref{LM21} and \eqref{eq:traceDirichletform}, we obtain that
\[
(f,v|_F)_\mu=\check{{\mathscr E}}(\varphi, \phi)={\mathscr E}(\mathbf{H}_F \varphi, \mathbf{H}_F\phi)={\mathscr E}(u,v).
\]
Hence \eqref{eq:29} holds. This completes the proof.
\end{proof}
\subsection{Recovering classical DN operators}\label{SEC4}
Let $\Omega\subset \mathbb{R}^d$, $d\geq 2$, be a bounded Lipschitz domain. Let $H^1(\Omega):=\{u\in L^2(\Omega): \partial_{x_i}u\in L^2(\Omega), 1\leq i\leq d\}$ and let $L^2(\Gamma)$ be the $L^2$-space on $\Gamma$ with respect to the surface measure $\sigma$, i.e. the restriction of the $(d-1)$-dimensional Hausdorff measure to $\Gamma$. Similarly, $H^s(\Omega)$ denotes the Sobolev space of order $s\geq 0$. In addition, we can define the Sobolev spaces $H^s(\Gamma)$ for $0\leq s\leq 1$ in the usual way using local coordinate representations of $\Gamma$; see, e.g., \cite[\S2.4]{SS11}.
Since $\Omega$ is Lipschitz, there is a unique \emph{trace operator} $\mathop{\mathrm{Tr}}: H^1(\Omega)\rightarrow L^2(\Gamma)$ such that $\mathop{\mathrm{Tr}}(u)=u|_{\Gamma}$ for $u\in H^1(\Omega)\cap C(\bar{\Omega})$. The \emph{weak normal derivative} is defined as follows. For $u\in H^1(\Omega)$, we say $\Delta u\in L^2(\Omega)$ if there exists $f\in L^2(\Omega)$ such that $\int_\Omega \nabla u\nabla v\,dx+\int_\Omega fv\,dx=0$ for any $v\in H^1_0(\Omega)$. In this case $\Delta u:=f$. For $u\in H^1(\Omega)$ with $\Delta u\in L^2(\Omega)$, we say $u$ has a weak normal derivative in $L^2(\Gamma)$ provided that there exists $f\in L^2(\Gamma)$ such that
\begin{equation}\label{eq:GreenGauss}
\int_\Omega \nabla u\nabla v \,dx +\int_\Omega \Delta u\, v\,dx=\int_\Gamma f \mathop{\mathrm{Tr}}(v)\,d\sigma,\quad \forall v\in H^1(\Omega).
\end{equation}
In this case we write $\partial_{\mathbf{n}}u:=f$ for the weak normal derivative of $u$. Note that \eqref{eq:GreenGauss} holds for every function in
\begin{equation}\label{eq:35}
\begin{aligned}
\{u\in H^1(\Omega): &\Delta u\in L^2(\Omega),\ u\text{ has a weak normal derivative }\partial_\mathbf{n} u\in L^2(\Gamma)\} \\
&=H^{3/2}_{\Delta}(\Omega):=\{u\in H^{3/2}(\Omega): \Delta u\in L^2(\Omega)\} \\
&=W^1:=\{u\in H^1(\Omega): \Delta u\in L^2(\Omega),\ \mathop{\mathrm{Tr}}(u)\in H^1(\Gamma)\};
\end{aligned}\end{equation}
see, e.g., \cite[Lemma~2.2]{BV17}. Following, e.g., \cite{AM12, AE11, EO14, BV17}, we present the DN operator $D_\lambda$ with a parameter $\lambda\geq 0$ on $L^2(\Gamma)$, which maps the trace of a certain harmonic $u\in H^1(\Omega)$ to its weak normal derivative, in the following way.
\begin{definition}\label{DEF31}
For $\lambda\geq 0$, the operator
\[
\begin{aligned}
\mathcal{D}(D_\lambda)&:=\big\{\varphi\in L^2(\Gamma): \exists u\in H^1(\Omega) \text{ such that }\frac{1}{2}\Delta u=\lambda u,\ \mathop{\mathrm{Tr}}(u)=\varphi, \\
&\qquad \qquad \qquad \qquad\qquad \text{ and }u\text{ has a weak normal derivative }\partial_\mathbf{n} u\in L^2(\Gamma)\big\}, \\
D_\lambda \varphi&:=\frac{1}{2}\partial_\mathbf{n} u,\quad \varphi\in \mathcal{D}(D_\lambda) \text{ and } u\text{ as above}
\end{aligned}
\]
is called the DN operator on $L^2(\Gamma)$. Write $D:=D_0$ for the sake of brevity.
\end{definition}
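For instance, for the unit disc $\Omega\subset {\mathbb R}^2$, $\lambda=0$ and $\varphi(e^{i\theta})=\cos(n\theta)$ with $n\geq 0$, the harmonic extension is $u(re^{i\theta})=r^n\cos(n\theta)$, whose weak normal derivative on $\Gamma$ is $n\cos(n\theta)$; hence $D\varphi=\frac{n}{2}\,\varphi$.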
The main result Theorem~3.3 of \cite{BV17} concludes that $-D_\lambda$ is the $L^2$-generator of the time changed process of the $\lambda$-subprocess of the reflected Brownian motion on $\bar{\Omega}$. In what follows, we will recover it as a special case of Theorem~\ref{THM26}. To do this, consider the Dirichlet form $(\frac{1}{2}\mathbf{D}, H^1(\Omega))$, where $\mathbf{D}(u,v):=\int_\Omega \nabla u\nabla v\,dx$ for $u,v\in H^1(\Omega)$. Since $\Omega$ is Lipschitz, $(\frac{1}{2}\mathbf{D}, H^1(\Omega))$ is a regular Dirichlet form on $L^2(\bar{\Omega})$, which is associated with the reflected Brownian motion on $\bar{\Omega}$. Let
\begin{equation}\label{eq:Brownian}
{\mathscr F}:=H^1(\Omega),\quad {\mathscr E}(u,v):=\frac{1}{2}\mathbf{D}(u,v)+\lambda\cdot (u,v)_m, \; u,v\in {\mathscr F},
\end{equation}
where $\lambda\geq 0$ and $m$ is the Lebesgue measure on $\bar{\Omega}$. Then $({\mathscr E},{\mathscr F})$ is clearly a regular and irreducible Dirichlet form on $L^2(\bar{\Omega})$. When $\lambda>0$, the associated Markov process $X$ is the $\lambda$-subprocess of the reflected Brownian motion on $\bar\Omega$. Take $\mu:=\sigma\in \mathring{\mathbf{S}}$.
The following lemma is crucial to applying Theorem~{{\mathfrak{m}}athrm{e}}f{THM26}.
\begin{lemma}\label{LM32}
Let $({\mathscr E},{\mathscr F})$ be given by \eqref{eq:Brownian}. Then the following hold:
\begin{itemize}
\item[(1)] ${\mathscr F}_{\mathrm{e}}=H^1(\Omega)$.
\item[(2)] For any ${\mathscr E}$-quasi-continuous $u\in {\mathscr F}_{\mathrm{e}}$, $u|_\Gamma=\mathop{\mathrm{Tr}}(u)$, $\sigma$-a.e.
\item[(3)] $\text{qsupp}[\sigma]=\Gamma$, ${\mathscr E}$-q.e., and the topological support of $\sigma$ is also $\Gamma$.
\item[(4)] ${\mathscr F}_{\mathrm{e}}^{\Omega}=H^1_0(\Omega)$ and $\mathcal{H}_\Gamma=\{u\in H^1(\Omega): \frac{1}{2}\Delta u=\lambda u\}$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(1)] It suffices to consider the case $\lambda=0$. Since $\Omega$ is bounded and Lipschitz, the Poincar\'e inequality
\[
\| f-\bar f\|_{L^2(\Omega)}^2\leq C\mathbf{D}(f,f),\quad f\in H^1(\Omega),
\]
holds for some constant $C>0$ depending only on $\Omega$, where $\bar f:=\frac{1}{m(\Omega)}\int_\Omega f(x)dx$. Let $u\in {\mathscr F}_{\mathrm{e}}$ and let $\{u_n:n\geq 1\}\subset H^1(\Omega)$ be an approximating sequence for $u$. Then $u_n\rightarrow u$ a.e., and the Poincar\'e inequality yields that $\{u_n-\bar{u}_n\}$ is a Cauchy sequence in $L^2(\Omega)$. Hence there exists $v\in L^2(\Omega)$ such that $u_n-\bar{u}_n\rightarrow v$ in $L^2(\Omega)$. Particularly, taking a subsequence if necessary, we have $u_n-\bar{u}_n\rightarrow v$, a.e. Since $u_n\rightarrow u$, a.e., it follows that $u-v=\lim_{n\rightarrow\infty} \bar{u}_n$ is a.e. constant. Therefore $u\in L^2(\Omega)$, because $v$ and constant functions are in $L^2(\Omega)$. Eventually we can conclude that ${\mathscr F}_{\mathrm{e}}={\mathscr F}_{\mathrm{e}}\cap L^2(\Omega)={\mathscr F}=H^1(\Omega)$.
\item[(2)] For $u\in {\mathscr F}_{\mathrm{e}}=H^1(\Omega)$, we can find $\{u_n\}\subset H^1(\Omega)\cap C(\bar{\Omega})$ such that $u_n\rightarrow u$ in $H^1(\Omega)$. On account of \cite[Theorem~2.3.4]{CF12}, taking a subsequence if necessary, we get that $u_n$ converges to $u$, ${\mathscr E}$-q.e. Since $\sigma$ charges no ${\mathscr E}$-polar sets, it follows that $u_n|_\Gamma \rightarrow u|_\Gamma$, $\sigma$-a.e. Note that $u_n|_\Gamma\rightarrow \mathop{\mathrm{Tr}}(u)$ in $L^2(\Gamma)$. We can conclude that $u|_\Gamma=\mathop{\mathrm{Tr}}(u)$, $\sigma$-a.e.
\item[(3)] Clearly, the topological support of $\sigma$ is $\Gamma$. Denote $\Gamma_\sigma:=\text{qsupp}[\sigma]$. Note that $\Gamma_\sigma\subset \Gamma$, ${\mathscr E}$-q.e. It suffices to show that $\Gamma\setminus \Gamma_\sigma$ is ${\mathscr E}$-polar. Without loss of generality assume that $\Gamma_\sigma$ is nearly Borel and finely closed. It follows from \cite[Theorem~3.3.5]{CF12} and $\sigma(\Gamma\setminus \Gamma_\sigma)=0$ that the part Dirichlet space of $({\mathscr E},{\mathscr F})$ on $\Gamma_\sigma^c:=\bar{\Omega}\setminus \Gamma_\sigma$ is
\[
\begin{aligned}
{\mathscr F}^{\Gamma_\sigma^c}&=\{u\in {\mathscr F}: u=0,\ {\mathscr E}\text{-q.e. on }\Gamma_\sigma\} \\
&=\{u\in {\mathscr F}: u=0,\ \sigma\text{-a.e. on }\Gamma_\sigma\} \\
&=\{u\in H^1(\Omega): \text{Tr}(u)=0\}.
\end{aligned}\]
Using the characterization of $H^1$-functions with trivial trace on a Lipschitz domain (e.g., \cite[Theorem~4]{H20}), we obtain that ${\mathscr F}^{\Gamma_\sigma^c}$ is identified with $H^1_0(\Omega)$, the part Dirichlet space of $({\mathscr E},{\mathscr F})$ on $\Omega$. On account of \cite[Theorem~3.3.8~(iii)]{CF12}, $\Gamma^c_\sigma\setminus \Omega=\Gamma\setminus \Gamma_\sigma$ is ${\mathscr E}$-polar.
\item[(4)] Both assertions are immediate from the first one.
\end{itemize}
That completes the proof.
\end{proof}
In view of Lemma~\ref{LM32}~(2), we will not distinguish $u|_\Gamma$ and $\mathop{\mathrm{Tr}}(u)$ for $u\in H^1(\Omega)$ hereafter. Denote the PCAF of $X$ corresponding to $\sigma$ by $L^\sigma:=(L^\sigma_t)_{t\geq 0}$, also called the local time (of $X$) on $\Gamma$. Let $(\check{{\mathscr E}},\check{{\mathscr F}})$ be the trace Dirichlet form of $({\mathscr E},{\mathscr F})$ defined as \eqref{eq:traceDirichletform} with $F=\Gamma$ and $\mu=\sigma$.
Now we present the following result to recover \cite[Theorem~3.3]{BV17}.
\begin{corollary}\label{COR28}
For $\lambda\geq 0$, let $D_\lambda$ be the DN operator in Definition~\ref{DEF31}. Then the following hold:
\begin{itemize}
\item[(1)] $\mathcal{D}(D_\lambda)=H^1(\Gamma)$ and $D_\lambda \varphi=\frac{1}{2}\partial_\mathbf{n} \mathbf{H}_\Gamma \varphi$ for $\varphi \in H^1(\Gamma)$.
\item[(2)] $-D_\lambda$ is the generator of the trace Dirichlet form $(\check{{\mathscr E}},\check{{\mathscr F}})$ on $L^2(\Gamma)$, which is associated with the time changed process of $X$ by the local time $L^\sigma$. Furthermore, $(\check{{\mathscr E}},\check{{\mathscr F}})$ is regular on $L^2(\Gamma)$.
\end{itemize}
\end{corollary}
\begin{proof}
\begin{itemize}
\item[(1)] The case $\lambda=1$ has been considered in \cite[Lemma~2.2]{BV17}. Now consider any $\lambda\geq 0$. It suffices to show $\mathcal{D}(D_\lambda)=H^1(\Gamma)$. Take $\varphi \in \mathcal{D}(D_\lambda)$. It follows from \eqref{eq:35} that $u:=\mathbf{H}_\Gamma \varphi\in W^1$ and hence $\varphi=u|_\Gamma\in H^1(\Gamma)$. Conversely, let $\varphi\in H^1(\Gamma)$. Then there exists $u'\in H^1(\Omega)$ such that $u'|_\Gamma=\varphi$ due to the trace theorem. Set $u:=\mathbf{H}_\Gamma u' \in {\mathscr F}_{\mathrm{e}}=H^1(\Omega)$ by means of Lemma~\ref{LM32}~(1). We get from Lemma~\ref{LM32}~(4) that $\frac{1}{2}\Delta u=\lambda u\in L^2(\Omega)$. Since $\mathop{\mathrm{Tr}}(u)=\varphi\in H^1(\Gamma)$, it follows from \eqref{eq:35} that $u$ has a weak normal derivative $\partial_\mathbf{n} u\in L^2(\Gamma)$. This implies $\varphi\in \mathcal{D}(D_\lambda)$.
\item[(2)] It can be verified straightforwardly by applying Theorem~\ref{THM26} and Lemma~\ref{LM32} with $F=\Gamma$ and $\mu=\sigma$. The regularity of $(\check{{\mathscr E}},\check{{\mathscr F}})$ on $L^2(\Gamma)$ is due to \cite[Theorem~5.2.3]{CF12} and Lemma~\ref{LM32}~(3).
\end{itemize}
That completes the proof.
\end{proof}
When $d=2$, $\lambda=0$ and $\Omega=\mathbb{D}:=\{x: |x|<1\}$, $\check{{\mathscr E}}$ is identified with the celebrated \emph{Douglas integral}; see \cite{D31}. To be precise, $\Gamma=\partial \mathbb{D}=\{\theta:0\leq \theta<2\pi\}$ and
\[
\begin{aligned}
&\check{{\mathscr E}}(\varphi,\varphi)=\frac{1}{16\pi}\int_0^{2\pi}\int_0^{2\pi} \left(\varphi(\theta)-\varphi(\theta')\right)^2\sin^{-2}\left(\frac{\theta-\theta'}{2}\right)d\theta\, d\theta',\\
&\check{{\mathscr F}}=\{\varphi\in L^2(\partial \mathbb{D}): \check{{\mathscr E}}(\varphi,\varphi)<\infty\}.
\end{aligned}
\]
Particularly, the DN operator $D$ corresponds to the Cauchy process on $\partial \mathbb{D}$.
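This identification can be made explicit on Fourier modes; we record the elementary computation only as an illustration. For $n\geq 1$, the harmonic extension of $\varphi_n(\theta):=\cos(n\theta)$ (and likewise of $\sin(n\theta)$) to $\mathbb{D}$ is $u_n(r\mathrm{e}^{\mathrm{i}\theta})=r^{n}\cos(n\theta)$, whose outward normal derivative on $\partial\mathbb{D}$ equals $n\cos(n\theta)$. Hence
\[
D\varphi_n=\frac{1}{2}\partial_{\mathbf{n}}u_n=\frac{n}{2}\,\varphi_n,
\]
so that the semigroup generated by $-D$ acts on the $n$-th Fourier modes as multiplication by $\mathrm{e}^{-tn/2}$, which is the (time rescaled) semigroup of the Cauchy process on the circle.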
\section{DN operators for perturbations of Dirichlet forms}\label{SEC3}
Let $({\mathscr E},{\mathscr F})$ be a regular and irreducible Dirichlet form on $L^2(E,m)$.
Take $\kappa=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$ and let
$${\mathscr F}^\kappa={\mathscr F}\cap L^2(E,|\kappa|),\quad {\mathscr E}^\kappa(u,v)={\mathscr E}(u,v)+\int_E uv\, d\kappa,\quad u,v\in {\mathscr F}^\kappa$$ be the perturbation of $({\mathscr E},{\mathscr F})$ by $\kappa$, where $|\kappa|=\kappa^++\kappa^-$; see Definition~\ref{DEFB1}. Set
\[
{\mathscr F}^\kappa_{\mathrm{e}}:={\mathscr F}_{\mathrm{e}}\cap L^2(E,|\kappa|).
\]
We impose $\kappa^-\neq 0$ unless otherwise specified. It is worth pointing out that $\kappa^-\neq 0$ may lead to the failure of closedness or the Markovian property for $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$, so that it cannot be straightforwardly associated with a Markov process.
As reviewed in Appendix~\ref{APPB}, if $\kappa^-$ is ${\mathscr E}^{\kappa^+}$-form bounded, then $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is a lower bounded symmetric closed form on $L^2(E,m)$. When $\kappa\in \mathbf{S}$, it becomes a quasi-regular Dirichlet form, called the perturbed Dirichlet form of $({\mathscr E},{\mathscr F})$ by $\kappa$. Meanwhile ${\mathscr F}_{\mathrm{e}}^\kappa$ is the extended Dirichlet space of $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$, enjoying the same quasi notions as $({\mathscr E},{\mathscr F})$; see \cite[Proposition~5.1.9]{CF12}.
\subsection{Definition}
Let $\mu\in \mathring{\mathbf{S}}$ with $F:=\text{qsupp}[\mu]$. Write $G:=E\setminus F$.
Set
\[
{\mathscr F}^{\kappa,G}_{\mathrm{e}}:=\{u\in {\mathscr F}^\kappa_{\mathrm{e}}: u=0,\ {\mathscr E}\text{-q.e. on }F\}
\]
and
\[
\mathcal{H}^\kappa_F:=\{u\in {\mathscr F}^\kappa_{\mathrm{e}}: {\mathscr E}^\kappa(u,v)=0,\ \forall v\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}\}.
\]
Given $\varphi \in L^2(F,\mu)$, $u$ is called a $\kappa$-\emph{harmonic extension} of $\varphi$ provided that $u\in \mathcal{H}_F^\kappa$ and $u|_F=\varphi$. Note that the condition
\begin{equation}\label{eq:DEF41}
{\mathscr F}^{\kappa,G}_{\mathrm{e}}\cap \mathcal{H}^\kappa_F=\{0\}
\end{equation}
implies that the $\kappa$-harmonic extension of $\varphi$, denoted by $\mathbf{H}^\kappa_F \varphi$, is unique whenever it exists.
As an analogue of Definition~\ref{DEF23}, we introduce the following.
\begin{definition}\label{DEF41}
Assume \eqref{eq:DEF41}. The operator
\[
\begin{aligned}
\mathcal{D}({\mathscr N}_\kappa)=&\big\{\varphi\in L^2(F,\mu): \exists\,u\in \mathcal{H}^\kappa_F\text{ and }f\in L^2(F,\mu)\text{ such that }u|_F=\varphi, \\
&\qquad\qquad {\mathscr E}^\kappa(u,v)=\int_{F} f\, v|_F\, d\mu \text{ for any }v\in {\mathscr F}^\kappa_{\mathrm{e}}\text{ with }v|_F\in L^2(F,\mu)\big\}, \\
{\mathscr N}_\kappa \varphi=&f,\quad \varphi\in \mathcal{D}({\mathscr N}_\kappa),
\end{aligned}
\]
is called the DN operator for $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ on $L^2(F,\mu)$.
\end{definition}
In most cases we will impose a stronger assumption:
\begin{equation}\label{eq:42-3}
{\mathscr F}_{\mathrm{e}}^\kappa={\mathscr F}^{\kappa,G}_{\mathrm{e}}\oplus \mathcal{H}^\kappa_F,
\end{equation}
i.e. for any $u\in {\mathscr F}^\kappa_{\mathrm{e}}$, there exists a unique pair $(u_1,u_2)\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}\times \mathcal{H}^\kappa_F$ such that $u=u_1+u_2$.
See Appendix~\ref{APP2} for some remarks on this assumption. Meanwhile set
\begin{equation}\label{eq:45}
\check{{\mathscr F}}^\kappa:={\mathscr F}^\kappa_{\mathrm{e}}|_F\cap L^2(F,\mu)=\{u|_F\in L^2(F,\mu): u\in {\mathscr F}^\kappa_{\mathrm{e}}\},
\end{equation}
and, on account of the first assertion of the following lemma, define
\begin{equation}\label{eq:46}
\check{{\mathscr E}}^\kappa(\varphi, \phi):={\mathscr E}^\kappa(\mathbf{H}^\kappa_F \varphi, \mathbf{H}^\kappa_F \phi),\quad \varphi,\phi\in\check{{\mathscr F}}^\kappa.
\end{equation}
We call $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ the \emph{trace form of $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ on $L^2(F,\mu)$}.
\begin{lemma}\label{LM43}
Assume \eqref{eq:42-3}. The following hold:
\begin{itemize}
\item[(1)] For any $\varphi\in \check{{\mathscr F}}^\kappa$, the $\kappa$-harmonic extension of $\varphi$ exists uniquely.
\item[(2)] $\varphi\in \mathcal{D}({\mathscr N}_\kappa)$ with $f={\mathscr N}_\kappa \varphi$, if and only if $\varphi \in \check{{\mathscr F}}^\kappa$, $f\in L^2(F,\mu)$ and
\begin{equation}\label{eq:42-2}
\check{{\mathscr E}}^\kappa(\varphi, \phi)=\int_F f\phi\, d\mu,\quad \forall \phi\in \check{{\mathscr F}}^\kappa.
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
The first assertion is obvious. We only prove the second one. Take $\varphi\in \mathcal{D}({\mathscr N}_\kappa)$ with $f={\mathscr N}_\kappa \varphi$. Then the function $u$ appearing in Definition~\ref{DEF41} is the $\kappa$-harmonic extension of $\varphi$, and particularly $\varphi\in \check{{\mathscr F}}^\kappa$. In addition, for any $\phi\in \check{{\mathscr F}}^\kappa$, it holds that $\mathbf{H}^\kappa_F \phi \in {\mathscr F}^\kappa_{\mathrm{e}}$ and $\mathbf{H}^\kappa_F \phi|_F=\phi\in L^2(F,\mu)$. Hence
\[
\check{{\mathscr E}}^\kappa(\varphi, \phi)={\mathscr E}^\kappa(u,\mathbf{H}^\kappa_F \phi)=\int_F f\phi\, d\mu.
\]
Conversely, take $\varphi\in \check{{\mathscr F}}^\kappa$ satisfying \eqref{eq:42-2} for some $f\in L^2(F,\mu)$. Then $u:=\mathbf{H}^\kappa_F \varphi \in \mathcal{H}^\kappa_F$ and $u|_F=\varphi$. For any $v\in {\mathscr F}^\kappa_{\mathrm{e}}$ with $\phi:=v|_F\in L^2(F,\mu)$, \eqref{eq:42-2} yields that
\[
\int_F f\phi\, d\mu=\check{{\mathscr E}}^\kappa(\varphi, \phi)={\mathscr E}^\kappa(u, \mathbf{H}^\kappa_F \phi)={\mathscr E}^\kappa(u,v),
\]
because $v-\mathbf{H}^\kappa_F \phi \in {\mathscr F}^{\kappa,G}_{\mathrm{e}}$.
That completes the proof.
\end{proof}
\subsection{Self-adjointness of DN operators}\label{SEC41}
It is of course interesting to ask whether ${\mathscr N}_\kappa$ is self-adjoint on $L^2(F,\mu)$. Lemma~\ref{LM43} tells us that if $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a lower bounded closed form, then ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint.
\begin{definition}\label{DEF34}
Let $\kappa=\kappa^+-\kappa^-$, $\mu$ and $F$ be as above. Then $\kappa^-$ is called \emph{${\mathscr E}^{\kappa^+}$-form bounded on trace}, if there exist constants $0<\delta_0<1$ and $C_{\delta_0}>0$ such that
\begin{equation}\label{eq:47}
\int_E u^2\,d\kappa^-\leq \delta_0\cdot {\mathscr E}^{\kappa^+}(u,u)+C_{\delta_0}\cdot \int_F (u|_F)^2\, d\mu,\quad \forall u\in \mathcal{H}^\kappa_F,
\end{equation}
where ${\mathscr E}^{\kappa^+}(u,u)={\mathscr E}(u,u)+\int_E u^2\,d\kappa^+$.
\end{definition}
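For illustration, we note a trivial sufficient condition (recorded only as a sanity check, using nothing beyond the notation above): if there exists $\delta_0<1$ such that
\[
\int_E u^2\,d\kappa^-\leq \delta_0\cdot {\mathscr E}^{\kappa^+}(u,u),\quad \forall u\in \mathcal{H}^\kappa_F,
\]
then $\kappa^-$ is ${\mathscr E}^{\kappa^+}$-form bounded on trace, since the trace term in \eqref{eq:47} is non-negative and any $C_{\delta_0}>0$ will do.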
A useful sufficient condition for the self-adjointness of ${\mathscr N}_\kappa$ is presented in the following.
\begin{theorem}\label{THM44}
Assume \eqref{eq:42-3} and that $\kappa^-$ is ${\mathscr E}^{\kappa^+}$-form bounded on trace. Then ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint on $L^2(F,\mu)$.
\end{theorem}
\begin{proof}
It suffices to prove that $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a lower bounded closed form on $L^2(F,\mu)$. For $\varphi\in \check{{\mathscr F}}^\kappa$, set $u:=\mathbf{H}^\kappa_F \varphi$. Taking $\varepsilon>0$ such that $(1+\varepsilon)\delta_0<1$, letting $\alpha_0:=(1+\varepsilon)C_{\delta_0}$ and using \eqref{eq:47}, we get
\begin{equation}\label{eq:47-1}
\begin{aligned}
\check{{\mathscr E}}^\kappa(\varphi, \varphi)&+\alpha_0 \int_F \varphi^2\,d\mu \\ &={\mathscr E}^{\kappa^+}(u,u)+\varepsilon \int_E u^2\, d\kappa^--(1+\varepsilon)\int_E u^2\, d\kappa^- +\alpha_0 \int_F (u|_F)^2\, d\mu \\
&\geq\tilde{\delta}_0\cdot {\mathscr E}^{\kappa^+}(u,u)+\varepsilon \int_E u^2\, d\kappa^-\geq 0,
\end{aligned}\end{equation}
where $\tilde{\delta}_0=1-(1+\varepsilon)\delta_0>0$.
Hence $(\check{{\mathscr E}}^\kappa_{\alpha_0}, \check{{\mathscr F}}^\kappa)$ is a non-negative symmetric quadratic form.
Next we show that $\check{{\mathscr F}}^\kappa$ is a Hilbert space under the inner product $\check{{\mathscr E}}^\kappa_\alpha$ for $\alpha>\alpha_0$. To do this, take an $\check{{\mathscr E}}^\kappa_\alpha$-Cauchy sequence $\{\varphi_n\}\subset \check{{\mathscr F}}^\kappa$. Set $u_n:=\mathbf{H}^\kappa_F \varphi_n$. It follows from \eqref{eq:47-1} that $\{u_n\}$ forms an ${\mathscr E}^{|\kappa|}$-Cauchy sequence in ${\mathscr F}^\kappa_{\mathrm{e}}$. Note that ${\mathscr F}^\kappa_{\mathrm{e}}$ is a Hilbert space under the inner product ${\mathscr E}^{|\kappa|}$. Thus there exists $u\in {\mathscr F}^\kappa_{\mathrm{e}}$ such that ${\mathscr E}^{|\kappa|}(u_n-u,u_n-u)\rightarrow 0$. Taking a subsequence if necessary, we have that $u_n$ converges to $u$, ${\mathscr E}^{|\kappa|}$-q.e. as well as ${\mathscr E}$-q.e. To show $u\in \mathcal{H}^\kappa_F$, note that for any $v\in {\mathscr F}^{\kappa, G}_{\mathrm{e}}$,
\begin{equation}\label{eq:48}
|{\mathscr E}^\kappa(u_n-u,v)|\lesssim {\mathscr E}^{|\kappa|}(u_n-u,u_n-u)^{1/2}\cdot {\mathscr E}^{|\kappa|}(v,v)^{1/2}\rightarrow 0.
\end{equation}
Consequently ${\mathscr E}^\kappa(u,v)=\lim_{n\rightarrow \infty}{\mathscr E}^\kappa(u_n,v)=0$. This yields $u\in \mathcal{H}^\kappa_F$.
On the other hand, $\varphi_n=u_n|_F$ is Cauchy in $L^2(F,\mu)$. Hence $\varphi_n\rightarrow \varphi$ in $L^2(F,\mu)$ for some $\varphi\in L^2(F,\mu)$. Taking a subsequence if necessary we get that $\varphi_n\rightarrow \varphi$, $\mu$-a.e. on $F$. Since $\text{qsupp}[\mu]=F$, we obtain that $u|_F=\varphi$ and particularly, $\varphi \in \check{{\mathscr F}}^\kappa$. In addition,
\[
|\check{{\mathscr E}}^\kappa(\varphi_n-\varphi,\varphi_n-\varphi)|=|{\mathscr E}^\kappa(u_n-u,u_n-u)|\leq {\mathscr E}^{|\kappa|}(u_n-u,u_n-u)\rightarrow 0.
\]
Therefore $\check{{\mathscr E}}^\kappa_\alpha(\varphi_n-\varphi,\varphi_n-\varphi)\rightarrow 0$. That completes the proof.
\end{proof}
Now we turn to give some remarks on the condition \eqref{eq:47}. Note that ${\mathscr F}^\kappa_{\mathrm{e}}$ is a Hilbert space under the inner product ${\mathscr E}^{|\kappa|}$, and on account of \eqref{eq:48}, $\mathcal{H}^\kappa_F$ is a closed subspace of ${\mathscr F}^\kappa_{\mathrm{e}}$. In other words, $\mathcal{H}^\kappa_F$ is also a Hilbert space under the inner product ${\mathscr E}^{|\kappa|}$. The argument in the following lemma is based on an abstract version of Ehrling's lemma; see, e.g., \cite[Chapter I, Theorem~7.3]{W87}.
\begin{lemma}\label{LM45}
Assume \eqref{eq:42-3}. If $(\mathcal{H}^\kappa_F, {\mathscr E}^{|\kappa|})$ is compactly embedded in $L^2(E,\kappa^-)$, i.e. any $\{u_n\in \mathcal{H}^\kappa_F:n\geq 1\}$ with $\sup_{n\geq 1}{\mathscr E}^{|\kappa|}(u_n,u_n)<\infty$ forms a relatively compact sequence in $L^2(E,\kappa^-)$, then for any $\delta>0$, there exists $C_\delta>0$ such that
\begin{equation}\label{eq:49-2}
\int_E u^2\,d\kappa^-\leq \delta\cdot {\mathscr E}^{|\kappa|}(u,u)+C_\delta\cdot \int_F (u|_F)^2\, d\mu,\quad \forall u\in \mathcal{H}^\kappa_F.
\end{equation}
Particularly, if $({\mathscr F}^\kappa_{\mathrm{e}}, {\mathscr E}^{|\kappa|})$ is compactly embedded in $L^2(E,\kappa^-)$, then for any $\delta>0$, there exists $C_\delta>0$ such that \eqref{eq:49-2} holds.
\end{lemma}
\begin{proof}
Argue by contradiction. Suppose that for some $\delta>0$ and any $n\in \mathbb{N}$, there exists $u_n\in \mathcal{H}^\kappa_F$ such that ${\mathscr E}^{|\kappa|}(u_n,u_n)=1$ and
\begin{equation}\label{eq:4110}
\int_E u_n^2\, d\kappa^-> \delta + n \int_F (u_n|_F)^2\, d\mu.
\end{equation}
Note that $\sup_n \int_E u^2_n\,d\kappa^-\leq \sup_n {\mathscr E}^{|\kappa|}(u_n,u_n)=1$, which together with \eqref{eq:4110} yields $\int_F (u_n|_F)^2\,d\mu\rightarrow 0$. Using the relative compactness of $\{u_n\}$ in $L^2(E,\kappa^-)$, we may (and do) assume that $u_n$ converges to some $v\in L^2(E,\kappa^-)$ both strongly in $L^2(E,\kappa^-)$ and $\kappa^-$-a.e.
Taking a subsequence of $\{u_n\}$ if necessary, we get that $f_N:=\frac{1}{N}\sum_{n=1}^N u_n$ converges to some $u$ strongly in $\mathcal{H}^\kappa_F$ under the norm $\|\cdot\|_{{\mathscr E}^{|\kappa|}}$ as $N\rightarrow \infty$, and $f_N|_F:=\frac{1}{N}\sum_{n=1}^N u_n|_F$ converges to $0$ strongly in $L^2(F,\mu)$. Since ${\mathscr F}^\kappa_{\mathrm{e}}$ is the extended Dirichlet space of the Dirichlet form $({\mathscr E}^{|\kappa|}, {\mathscr F}^\kappa)$, it follows that a subsequence of $\{f_N\}$, still denoted by $\{f_N\}$, converges to $u$, ${\mathscr E}^{|\kappa|}$-q.e. as well as ${\mathscr E}$-q.e. Since $f_N|_F\rightarrow 0$ in $L^2(F,\mu)$, taking a subsequence if necessary, we may (and do) assume that $f_N|_F$ converges to $0$, $\mu$-a.e. Since $\text{qsupp}[\mu]=F$ and $\kappa^-$ charges no ${\mathscr E}$-polar sets, we obtain that $u|_F=0$, ${\mathscr E}$-q.e., and $u=v$, $\kappa^-$-a.e. As a result, $u\in \mathcal{H}^\kappa_F\cap {\mathscr F}^{\kappa,G}_{\mathrm{e}}$, so that $u=0$ by \eqref{eq:DEF41}. Particularly, $\int_E u^2_n\, d\kappa^-\rightarrow 0$, contradicting \eqref{eq:4110}. That completes the proof.
\end{proof}
The condition \eqref{eq:49-2} is stronger than \eqref{eq:47}. In fact, taking $\delta<1/2$ in \eqref{eq:49-2} and letting $\delta_0:=\delta/(1-\delta)$, $C_{\delta_0}:=C_\delta/(1-\delta)$, we arrive at \eqref{eq:47}. Hence the following corollary holds.
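For the reader's convenience we spell out this elementary rearrangement. Since ${\mathscr E}^{|\kappa|}(u,u)={\mathscr E}^{\kappa^+}(u,u)+\int_E u^2\,d\kappa^-$, the estimate \eqref{eq:49-2} gives
\[
(1-\delta)\int_E u^2\,d\kappa^-\leq \delta\cdot {\mathscr E}^{\kappa^+}(u,u)+C_\delta\cdot \int_F (u|_F)^2\,d\mu,\quad \forall u\in \mathcal{H}^\kappa_F,
\]
and dividing by $1-\delta$ yields \eqref{eq:47} with $\delta_0=\delta/(1-\delta)$ and $C_{\delta_0}=C_\delta/(1-\delta)$; the restriction $\delta<1/2$ guarantees $\delta_0<1$.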
\begin{corollary}\label{COR46}
Assume \eqref{eq:42-3} and that $({\mathscr F}^\kappa_{\mathrm{e}}, {\mathscr E}^{|\kappa|})$ is compactly embedded in $L^2(E,\kappa^-)$. Then ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint on $L^2(F,\mu)$.
\end{corollary}
A special case of great interest is $\kappa^-=V^-\cdot m$ with $V^-\in L^\infty(E,m)$. In this case $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is clearly a lower bounded closed form on $L^2(E,m)$.
\begin{corollary}\label{COR47}
Assume \eqref{eq:42-3} and that ${\mathscr F}_{\mathrm{e}}={\mathscr F}$ (endowed with the norm $\|\cdot\|_{{\mathscr E}_1}$) is compactly embedded in $L^2(E,m)$. Consider $\kappa^-=V^-\cdot m$ with $V^-\in L^\infty(E,m)$. Then ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint on $L^2(F,\mu)$.
\end{corollary}
\begin{proof}
Mimicking the proof of Lemma~\ref{LM45}, we can obtain that for any $\delta>0$, there exists $C_\delta>0$ such that
\[
\int_E u^2\, dm\leq \delta\cdot {\mathscr E}_1(u,u)+ C_\delta \int_F (u|_F)^2\,d\mu,\quad \forall u\in \mathcal{H}^\kappa_F\subset {\mathscr F}.
\]
This implies that for $0<\delta<1$,
\[
\int_E u^2\, dm\leq \frac{\delta}{1-\delta}\cdot {\mathscr E}(u,u)+ \frac{C_\delta}{1-\delta} \int_F (u|_F)^2\,d\mu,\quad \forall u\in \mathcal{H}^\kappa_F.
\]
Letting $\|V^-\|_\infty:=\|V^-\|_{L^\infty(E,m)}$ and taking $0<\delta<1$ such that $\delta_0:=\|V^-\|_\infty \cdot \delta/(1-\delta)<1$, we get
\[
\int_E u^2\,d\kappa^-=\int_E u^2 V^-\,dm\leq \delta_0\cdot {\mathscr E}(u,u)+\frac{C_\delta \|V^-\|_\infty}{1-\delta} \int_F (u|_F)^2\,d\mu,\quad \forall u\in \mathcal{H}^\kappa_F.
\]
Eventually the assertion is concluded by applying Theorem~\ref{THM44}.
\end{proof}
\subsection{Examples}\label{SEC33}
We give two examples in this short subsection.
\begin{example}\label{EXA48}
Let $\Omega\subset \mathbb{R}^d$ be a bounded Lipschitz domain, and $0<\alpha\leq 2$. Consider the Dirichlet form
\[
\begin{aligned}
{\mathscr F}&=\{ u\in L^2(\Omega): {\mathscr E}(u,u)<\infty\}, \\
{\mathscr E}(u,v)&=\frac{c(d,\alpha)}{2}\iint_{\Omega\times \Omega\setminus \mathsf{d}_\Omega} \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{d+\alpha}}\,dxdy+\lambda \int_\Omega uv\,dx,\quad u,v\in {\mathscr F},
\end{aligned}
\]
where $\lambda>0$ is a given constant if $0<\alpha<2$ and $\lambda\geq 0$ if $\alpha=2$, $c(d,\alpha)$ is the constant appearing in, e.g., \cite[(1.4.27)]{FOT11}, and $\mathsf{d}_\Omega$ is the diagonal of $\Omega\times \Omega$. Then $({\mathscr E},{\mathscr F})$ is a regular and irreducible Dirichlet form on $L^2(\bar{\Omega})$. When $\alpha=2$, $({\mathscr E},{\mathscr F})$ is identified with \eqref{eq:Brownian}, associated with the $\lambda$-subprocess of the reflected Brownian motion on $\bar{\Omega}$. When $0<\alpha<2$, it is associated with the $\lambda$-subprocess of the reflected $\alpha$-stable process on $\bar{\Omega}$; see \cite{BBC03}.
Note that ${\mathscr F}_{\mathrm{e}}={\mathscr F}=H^{\alpha/2}(\Omega)$, the Sobolev space of order $\alpha/2$, and $\|\cdot\|_{{\mathscr E}_1}$ is equivalent to the norm of $H^{\alpha/2}(\Omega)$. In view of \cite[Theorem~2.5.5]{SS11}, ${\mathscr F}=H^{\alpha/2}(\Omega)$ is compactly embedded in $L^2(\Omega)$.
Let $\mu=\sigma$, the surface measure on $\Gamma=\partial \Omega$. In the case $\alpha=2$, the identity $\text{qsupp}[\mu]=\Gamma$ has been established in Lemma~\ref{LM32}~(3). For the case $0<\alpha<2$, under the slightly stronger condition that $\Omega$ is $C^{1,1}$ (see, e.g., page 67 of \cite{K03}), the same result can be obtained by virtue of \cite[Theorem~3.14]{K03} and \cite[Lemma~5.2.9]{CF12}.
Take $\kappa=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$ satisfying \eqref{eq:42-3}. Let ${\mathscr N}_\kappa$ be the DN operator for $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ on $L^2(\Gamma)$. On account of Corollary~\ref{COR47}, if $\kappa^-(dx)=V^-(x)dx$ with $V^-\in L^\infty(\Omega)$, then ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint on $L^2(\Gamma)$.
\end{example}
\begin{example}
In this example let $d\in \mathbb{N}$ and $0<\alpha\leq 2$. Consider the Dirichlet form
\[
\begin{aligned}
{\mathscr F}&=\{ u\in L^2(\mathbb{R}^d): {\mathscr E}(u,u)<\infty\}, \\
{\mathscr E}(u,v)&=\frac{c(d,\alpha)}{2}\iint_{\mathbb{R}^d \times \mathbb{R}^d\setminus \mathsf{d}} \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{d+\alpha}}\,dxdy,\quad u,v\in {\mathscr F},
\end{aligned}
\]
where $c(d,\alpha)$ is the same constant as in Example~\ref{EXA48} and $\mathsf{d}$ is the diagonal of $\mathbb{R}^d\times \mathbb{R}^d$. Clearly $({\mathscr E},{\mathscr F})$ is a regular and irreducible Dirichlet form on $L^2(\mathbb{R}^d)$. When $0<\alpha<2$, it is associated with the isotropic $\alpha$-stable process on $\mathbb{R}^d$. When $\alpha=2$, it is associated with the Brownian motion on $\mathbb{R}^d$. The associated process is denoted by $X$.
Let $\mu\in \mathring{\mathbf{S}}$ with $F=\text{qsupp}[\mu]$. Take $\kappa=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$ satisfying \eqref{eq:42-3}. Denote by ${\mathscr N}_\kappa$ the DN operator for $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ on $L^2(F,\mu)$.
We first consider the transient case $\alpha<d$. Denote by $\mathbf{S}_\infty$ the subfamily of $\mathbf{S}_K$, the Kato class, consisting of all Green-tight smooth measures with respect to $X$; see, e.g., \cite[Definition~2.1]{T07}. Note that $$\{V(x)dx: V\in L^\infty(\mathbb{R}^d)\cap L^1(\mathbb{R}^d),\ V\geq 0\}\subset \mathbf{S}_\infty.$$ Assume that
\[
\kappa\in \mathbf{S}-\mathbf{S}_\infty.
\]
By virtue of \cite[Theorem~3.4]{T07}, we obtain that $({\mathscr F}^\kappa_{\mathrm{e}},{\mathscr E}^{|\kappa|})$ is compactly embedded in $L^2(\mathbb{R}^d, \kappa^-)$. Therefore Corollary~\ref{COR46} yields that ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint on $L^2(F,\mu)$.
Next we treat the recurrent case $\alpha\geq d=1$. For $\nu\in \mathbf{S}$, denote by $X^\nu$ the subprocess of $X$ killed by the PCAF corresponding to $\nu$. Let $\mathbf{S}^\nu_\infty$ be the family of all Green-tight smooth measures with respect to $X^\nu$; see \cite[(2.8)]{T16}. Assume that $$\kappa=\kappa^+-\kappa^-\in \mathbf{S}_K-\mathbf{S}^{\kappa^+}_\infty.$$ Using \cite[Proposition~6.3]{T16}, we get that $({\mathscr F}^\kappa_{\mathrm{e}}, {\mathscr E}^{|\kappa|})$ is compactly embedded in $L^2(\mathbb{R}^d,\kappa^-)$ in the case $\alpha>1$. If $\alpha=1$, the same compact embedding holds when a certain additional condition on $\kappa^-$ is imposed; see \cite[Remark~6.4]{T16}. Eventually ${\mathscr N}_\kappa$ is lower semi-bounded and self-adjoint on $L^2(F,\mu)$ by means of Corollary~\ref{COR46}.
\end{example}
\subsection{Markov processes $h$-associated with DN operators}\label{SEC34}
Let us now turn to the probabilistic counterpart of ${\mathscr N}_\kappa$.
Adopt the assumptions of Theorem~\ref{THM44} and assume further that $\kappa^-$ is ${\mathscr E}^{\kappa^+}$-form bounded. These assumptions mean that both $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ and $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ are lower bounded closed forms.
Set
\[
{\mathscr F}^{\kappa,G}:=\{u\in {\mathscr F}^\kappa: u=0,\ {\mathscr E}\text{-q.e. on }F\},\quad {\mathscr E}^{\kappa,G}(u,v):={\mathscr E}^\kappa(u,v),\quad u,v\in {\mathscr F}^{\kappa,G}.
\]
Then $({\mathscr E}^{\kappa,G},{\mathscr F}^{\kappa,G})$ is a lower bounded closed form on $L^2(G,m|_G)$, whose generator is denoted by ${\mathscr L}_{\kappa,G}$.
\begin{lemma}\label{LM311}
The following are equivalent:
\begin{itemize}
\item[(a)] $-{\mathscr L}_{\kappa,G}$ is positive in the sense that $(-{\mathscr L}_{\kappa,G}u,u)_m\geq 0$ for any $u\in \mathcal{D}({\mathscr L}_{\kappa,G})$.
\item[(b)] ${\mathscr E}^{\kappa,G}(u,u)\geq 0$ for any $u\in {\mathscr F}^{\kappa,G}$.
\item[(c)] ${\mathscr E}^{\kappa,G}(u,u)\geq 0$ for any $u\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}$.
\end{itemize}
\end{lemma}
\begin{proof}
Clearly (c) implies (a). Suppose (a). Since $({\mathscr E}^{\kappa,G}_{\alpha_0}, {\mathscr F}^{\kappa,G})$ is a non-negative closed form for some $\alpha_0\geq 0$, it follows that for $\alpha>\alpha_0$,
\begin{equation}\label{eq:3.11}
{\mathscr E}^{\kappa,G}_\alpha (u,u)\geq \alpha (u,u)_m,\quad \forall u\in \mathcal{D}({\mathscr L}_{\kappa, G}).
\end{equation}
Note that $\mathcal{D}({\mathscr L}_{\kappa,G})$ is $\|\cdot\|_{{\mathscr E}^{\kappa,G}_\alpha}$-dense in ${\mathscr F}^{\kappa,G}$. For $u\in {\mathscr F}^{\kappa,G}$, one can take a sequence $u_n\in \mathcal{D}({\mathscr L}_{\kappa,G})$ such that $\|u_n-u\|_{{\mathscr E}^{\kappa,G}_\alpha}\rightarrow 0$ and $(u_n,u_n)_m\rightarrow (u,u)_m$. Applying \eqref{eq:3.11} to $u_n$ and letting $n\rightarrow \infty$, we get (b). Finally suppose (b); we are to derive (c). Note that ${\mathscr F}^{\kappa,G}_{\mathrm{e}}={\mathscr F}^{\kappa^+,G}_{\mathrm{e}}$ is the extended Dirichlet space of the Dirichlet form $({\mathscr E}^{\kappa^+,G},{\mathscr F}^{\kappa^+,G})$, where ${\mathscr E}^{\kappa^+,G}(u,v)={\mathscr E}(u,v)+\int uv\, d\kappa^+$ for $u,v\in {\mathscr F}^{\kappa^+,G}$. For $u\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}={\mathscr F}^{\kappa^+,G}_{\mathrm{e}}$, take its approximating sequence $\{u_n\}\subset {\mathscr F}^{\kappa^+,G}$, and on account of \cite[Theorem~2.3.4]{CF12}, we may and do assume that $\{u_n\}$ is ${\mathscr E}^{\kappa^+,G}$-Cauchy and $u_n$ converges to $u$, ${\mathscr E}^{\kappa^+,G}$-q.e. Particularly, $u_n$ converges to $u$, $\kappa^-$-a.e. In view of (b), we have
\begin{equation}\label{eq:3.12}
{\mathscr E}^{\kappa^+,G}(u_n,u_n)\geq \int u_n^2\, d\kappa^-.
\end{equation}
Since $\{u_n\}$ is ${\mathscr E}^{\kappa^+,G}$-Cauchy, it follows that $\{u_n\}$ is $L^2(E,\kappa^-)$-Cauchy and $u_n$ converges to $u$ in $L^2(E,\kappa^-)$. Letting $n\uparrow\infty$ in \eqref{eq:3.12}, we arrive at (c). That completes the proof.
\end{proof}
\begin{remark}
Each of the conditions in this lemma allows a non-zero $\kappa^-$. For example, in Lemma~\ref{LM41} below they are also equivalent to $\lambda>\lambda^\text{D}_1/2$, due to the Poincar\'e inequality.
\end{remark}
The notion of a quasi-regular (non-negative) positivity preserving (symmetric) coercive form is reviewed in Appendix~\ref{APPD}. Note that $(\check{{\mathscr E}}^\kappa_{\alpha_0},\check{{\mathscr F}}^\kappa)$ is a non-negative closed form for some $\alpha_0\geq 0$. The quasi notions like nest, polar set and quasi-continuous function for $\check{{\mathscr E}}^\kappa$ are by definition those for $\check{{\mathscr E}}^\kappa_{\alpha_0}$. We say that $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a quasi-regular positivity preserving (symmetric) coercive form if so is $(\check{{\mathscr E}}^\kappa_{\alpha_0},\check{{\mathscr F}}^\kappa)$.
\begin{theorem}\label{THM312}
Assume that one of the (equivalent) conditions in Lemma~\ref{LM311} holds. Then $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a quasi-regular positivity preserving (symmetric) coercive form on $L^2(F,\mu)$.
\end{theorem}
\begin{proof}
We first show that $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is positivity preserving. Take $\varphi\in \check{{\mathscr F}}^\kappa$, and set $\varphi^+:=\varphi \vee 0$, $\varphi^-:=\varphi^+-\varphi$. Clearly $\varphi^\pm \in \check{{\mathscr F}}^\kappa$. Let $u:=\mathbf{H}^\kappa_F \varphi$ and $u^+:=u\vee 0$, $u^-:=u^+-u$. Since $u=\mathbf{H}^\kappa_F \varphi^+-\mathbf{H}^\kappa_F\varphi^-$, it follows that
\[
u^+-\mathbf{H}^\kappa_F\varphi^+=u^--\mathbf{H}^\kappa_F\varphi^-=:u_0\in {\mathscr F}^\kappa_{\mathrm{e}}.
\]
Note that $u_0|_F=0$, $\mu$-a.e. and hence ${\mathscr E}$-q.e. due to $\text{qsupp}[\mu]=F$. This implies that $u_0\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}$. By the definition of $\check{{\mathscr E}}^\kappa$, we have
\[
\begin{aligned}
\check{{\mathscr E}}&^\kappa(\varphi^+,\varphi^-)\\ &={\mathscr E}^\kappa(\mathbf{H}^\kappa_F \varphi^+,\mathbf{H}^\kappa_F\varphi^-)={\mathscr E}^\kappa(u^+-u_0,u^--u_0)={\mathscr E}(u^+,u^-)-{\mathscr E}^\kappa(u_0,u_0).
\end{aligned}\]
By means of (c) in Lemma~\ref{LM311} and the positivity preserving property of $({\mathscr E},{\mathscr F})$, we get that
\[
\check{{\mathscr E}}^\kappa(\varphi^+,\varphi^-)\leq {\mathscr E}(u^+,u^-)\leq 0.
\]
Hence $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is positivity preserving.
Denote by $(\check{{\mathscr E}}^{|\kappa|},\check{{\mathscr F}}^{|\kappa|})$ the trace Dirichlet form of $({\mathscr E}^{|\kappa|},{\mathscr F}^{|\kappa|})$ on $L^2(F,\mu)$. Clearly, $(\check{{\mathscr E}}^{|\kappa|},\check{{\mathscr F}}^{|\kappa|})$ is quasi-regular on $L^2(F,\mu)$. Note that
\[
\check{{\mathscr F}}^{|\kappa|}={\mathscr F}^{|\kappa|}_{\mathrm{e}}|_F\cap L^2(F,\mu)={\mathscr F}^\kappa_{\mathrm{e}}|_F\cap L^2(F,\mu)=\check{{\mathscr F}}^\kappa.
\]
For $\varphi\in \check{{\mathscr F}}^\kappa$, $\mathbf{H}^\kappa_F\varphi, \mathbf{H}^{|\kappa|}_F\varphi \in {\mathscr F}^\kappa_{\mathrm{e}}$ and $\mathbf{H}^\kappa_F\varphi-\mathbf{H}^{|\kappa|}_F\varphi\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}$. By means of (c) in Lemma~\ref{LM311}, we have that
\[
\begin{aligned}
\check{{\mathscr E}}^{|\kappa|}(\varphi,\varphi)&={\mathscr E}^{|\kappa|}\left(\mathbf{H}^{|\kappa|}_F\varphi,\mathbf{H}^{|\kappa|}_F\varphi\right)\geq {\mathscr E}^\kappa\left(\mathbf{H}^{|\kappa|}_F\varphi,\mathbf{H}^{|\kappa|}_F\varphi \right) \\
&={\mathscr E}^\kappa\left(\mathbf{H}^\kappa_F\varphi+(\mathbf{H}^{|\kappa|}_F\varphi-\mathbf{H}^\kappa_F\varphi), \mathbf{H}^\kappa_F\varphi+(\mathbf{H}^{|\kappa|}_F\varphi-\mathbf{H}^\kappa_F\varphi)\right).
\end{aligned}\]
On account of $\mathbf{H}^\kappa_F\varphi-\mathbf{H}^{|\kappa|}_F\varphi\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}$, the last term is equal to
\[
\begin{aligned}
&{\mathscr E}^\kappa(\mathbf{H}^\kappa_F\varphi, \mathbf{H}^\kappa_F\varphi)+{\mathscr E}^\kappa\left(\mathbf{H}^{|\kappa|}_F\varphi-\mathbf{H}^\kappa_F\varphi,\mathbf{H}^{|\kappa|}_F\varphi-\mathbf{H}^\kappa_F\varphi \right)\\
&\qquad \geq {\mathscr E}^\kappa(\mathbf{H}^\kappa_F\varphi, \mathbf{H}^\kappa_F\varphi)=\check{{\mathscr E}}^\kappa(\varphi,\varphi).
\end{aligned}\]
In view of Lemma~\ref{LMD3} we eventually obtain the quasi-regularity of $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$. That completes the proof.
\end{proof}
Take a constant $\alpha_0\geq 0$ such that $\check{{\mathscr E}}^\kappa_{\alpha_0}$ is non-negative.
Denote by $\check{T}^\kappa=(\check{T}^\kappa_t)_{t\geq 0}$ and $(\check{G}^\kappa_\alpha)_{\alpha>\alpha_0}$ the $L^2$-semigroup and $L^2$-resolvent of $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ respectively. For $\alpha\in \mathbb{R}$, $\varphi$ is called \emph{$\alpha$-excessive} if $\varphi\in pL^2(F,\mu)$ and $\mathrm{e}^{-\alpha t}\check{T}^\kappa_t \varphi\leq \varphi$, $\mu$-a.e. for all $t>0$. Define
\[
\mathbf{E}^+_\alpha:=\{\varphi\in \check{{\mathscr F}}^\kappa: \varphi \text{ is } \alpha\text{-excessive and } \varphi>0,\ \mu\text{-a.e.}\}.
\]
Note that $\mathbf{E}^+_\alpha\subset \mathbf{E}^+_{\alpha'}$ in case $\alpha<\alpha'$, and for $\alpha>\alpha_0$,
\[
\{\check{G}^\kappa_\alpha f: f\in L^2(F,\mu),\ f>0,\ \mu\text{-a.e.}\}\subset \mathbf{E}_\alpha^+;
\]
see \cite[Lemma~3.6]{MR95}. For $\alpha\in \mathbb{R}$ and $h\in L^2(F,\mu)$ with $h>0$, $\mu$-a.e., set
\[
\begin{aligned}
&\check{{\mathscr F}}^{\kappa,h}:=\{\varphi\in L^2(F,h^2\cdot \mu): \varphi h\in \check{{\mathscr F}}^{\kappa}\}, \\
&\check{{\mathscr E}}^{\kappa,h}_\alpha(\varphi,\phi):=\check{{\mathscr E}}^{\kappa}_\alpha(\varphi h,\phi h),\quad \varphi,\phi\in \check{{\mathscr F}}^{\kappa,h},
\end{aligned}
\]
called the $h$-transform of $(\check{{\mathscr E}}^\kappa_\alpha,\check{{\mathscr F}}^\kappa)$. Write $\check{{\mathscr E}}^{\kappa,h}:=\check{{\mathscr E}}^{\kappa,h}_0$. Clearly, $(\check{{\mathscr E}}^{\kappa,h}_\alpha,\check{{\mathscr F}}^{\kappa,h})$ is a lower bounded closed form on $L^2(F,h^2\cdot \mu)$, whose $L^2$-semigroup and $L^2$-resolvent are
\[
\check{T}^{\kappa,h,\alpha}_t\varphi=\frac{\mathrm{e}^{-\alpha t}\check{T}^\kappa_t\left(\varphi h\right)}{h},\quad \check{G}^{\kappa,h,\alpha}_{\alpha'}\varphi=\frac{1}{h}\int_0^\infty \mathrm{e}^{-(\alpha+\alpha') t} \check{T}^{\kappa}_t (\varphi h)\,dt
\]
for $t\geq 0$ and $\alpha'>(\alpha_0-\alpha)\vee 0$. Its $L^2$-generator is
\[
\begin{aligned}
&\mathcal{D}\left(\check{{\mathscr L}}^{\kappa,h}_\alpha\right)=\{\varphi \in L^2(F,h^2\cdot \mu): \varphi h\in \mathcal{D}({\mathscr N}_\kappa)\}, \\
&\check{{\mathscr L}}^{\kappa,h}_\alpha \varphi =-\frac{{\mathscr N}_\kappa (\varphi h)}{h}-\alpha \varphi,\quad \varphi \in \mathcal{D}\left(\check{{\mathscr L}}^{\kappa,h}_\alpha \right).
\end{aligned}\]
When $h\in \mathbf{E}^+_\alpha$, $\check{T}^{\kappa, h,\alpha}_t$ is Markovian, i.e. $0\leq \check{T}^{\kappa, h,\alpha}_t \varphi\leq 1$ for $\varphi\in L^2(F,h^2\cdot \mu)$ with $0\leq \varphi\leq 1$. Particularly, the resolvent can be extended to a Markovian one with parameter $\alpha'>0$ on $L^\infty(F,\mu)$, i.e.
\[
\check{G}^{\kappa,h,\alpha}_{\alpha'} \varphi=\frac{1}{h}\int_0^\infty \mathrm{e}^{-(\alpha+\alpha') t} \check{T}^{\kappa}_t (\varphi h)\,dt,\quad \alpha'>0,\ \varphi \in L^\infty(F,\mu),
\]
forms a Markovian resolvent on $L^\infty(F,\mu)$; see, e.g., \cite[page 8]{O13}.
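For the reader's convenience, here is a sketch of the one-line verification of the Markovian property claimed above; it uses the positivity preserving property of $\check{T}^\kappa_t$, which is available, e.g., under the assumptions of Theorem~\ref{THM312}. If $0\leq \varphi\leq 1$, then $0\leq \varphi h\leq h$, and hence
\[
0\leq \check{T}^{\kappa,h,\alpha}_t\varphi=\frac{\mathrm{e}^{-\alpha t}\check{T}^\kappa_t(\varphi h)}{h}\leq \frac{\mathrm{e}^{-\alpha t}\check{T}^\kappa_t h}{h}\leq 1,\quad \mu\text{-a.e.},
\]
where the middle inequality uses the positivity preserving property and the last one the $\alpha$-excessiveness of $h$.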
\begin{corollary}
Adopt the same assumptions as in Theorem~\ref{THM312}. For any $\alpha\in \mathbb{R}$ and $h\in \mathbf{E}^+_\alpha$, the $h$-transform $(\check{{\mathscr E}}^{\kappa,h}_\alpha,\check{{\mathscr F}}^{\kappa,h})$ is a quasi-regular lower bounded symmetric Dirichlet form on $L^2(F,h^2\cdot \mu)$. Furthermore, there exists an $h^2\cdot \mu$-symmetric Markov process $\check{X}^{\kappa, h, \alpha}$ such that for $\alpha'>0$ and $\varphi\in L^\infty(F,\mu)$,
\[
\check{R}^{\kappa,h,\alpha}_{\alpha'}\varphi(\cdot):=\mathbf{E}_\cdot \left[\int_0^\infty \mathrm{e}^{-\alpha' t} \varphi(\check{X}^{\kappa, h,\alpha}_t)\,dt\right]
\]
is an $\check{{\mathscr E}}^{\kappa,h}_\alpha$-quasi-continuous $\mu$-version of $\check{G}^{\kappa, h,\alpha}_{\alpha'} \varphi$.
\end{corollary}
\begin{proof}
Clearly $(\check{{\mathscr E}}^{\kappa,h}_\alpha,\check{{\mathscr F}}^{\kappa,h})$ is a lower bounded symmetric Dirichlet form. Note that $h\in \mathbf{E}^+_\alpha \subset \mathbf{E}^+_{\alpha \vee (\alpha_0+1)}$. On account of Theorem~\ref{THMD2}, we obtain the quasi-regularity of $(\check{{\mathscr E}}^{\kappa,h}_\alpha,\check{{\mathscr F}}^{\kappa,h})$. Using \cite[Theorem~3.3.4]{O13} and quasi-homeomorphism, one can conclude the existence of $\check{X}^{\kappa,h,\alpha}$ by a standard argument. That completes the proof.
\end{proof}
The Markov process $\check{X}^{\kappa, h, \alpha}$ is called \emph{$(\alpha, h)$-associated} with ${\mathscr N}_\kappa$. When $\alpha=0$, it is called \emph{$h$-associated} with ${\mathscr N}_\kappa$.
\section{Revisiting classical DN operators}\label{Revisiting}
Let us now revisit the classical DN operators $D_\lambda$ defined as in Definition~\ref{DEF31}, but with a possibly negative parameter $\lambda$. For simplicity assume that $\Omega\subset \mathbb{R}^d$, $d\geq 2$, is a bounded domain with smooth boundary (though most results in this section hold for less regular boundaries, e.g. Lipschitz ones). Let $\Delta^\text{D}$ be the Dirichlet Laplacian, i.e. $\Delta^\text{D} u=\Delta u$ with domain $\mathcal{D}(\Delta^\text{D}):=\{u\in H^1_0(\Omega): \Delta u\in L^2(\Omega)\}$. It is known that the spectrum $\sigma(\Delta^\text{D})$ consists of a decreasing sequence of eigenvalues
\[
\cdots \leq \lambda^\text{D}_3\leq \lambda^\text{D}_2<\lambda^\text{D}_1<0,
\]
where $\lambda^\text{D}_1<0$ is called the first eigenvalue of $\Delta^\text{D}$.
Throughout this section, let $({\mathscr E},{\mathscr F})=(\frac{1}{2}\mathbf{D}, H^1(\Omega))$ be the Dirichlet form on $L^2(\bar{\Omega})$ associated with the reflected Brownian motion $X$ on $\bar{\Omega}$. Denote by $(\check{{\mathscr E}},\check{{\mathscr F}})$ the trace Dirichlet form of $({\mathscr E},{\mathscr F})$ on $L^2(\Gamma)$ and by $\check{X}$ its associated Markov process on $\Gamma$.
\subsection{Form representation for DN operators}
The DN operator $D_\lambda$ for $\lambda\in \mathbb{R}$ is defined in the same way as in Definition~\ref{DEF31}. We should point out, however, that it is well defined, i.e. single-valued, if and only if $2\lambda\notin \{\lambda^\text{D}_n: n\geq 1\}$; see \cite[Proposition~4.11]{AE14}. Another way to obtain $D_\lambda$ is to use the method formulated in \S\ref{SEC3}. Take $\mu=\sigma$, the surface measure on $\Gamma=\partial \Omega$ with $\text{qsupp}[\sigma]=\Gamma$, and $\kappa=\lambda \cdot m\in \mathbf{S}-\mathbf{S}$.
\begin{lemma}\label{LM41}
Let $({\mathscr E},{\mathscr F})=(\frac{1}{2}\mathbf{D}, H^1(\Omega))$, $\mu=\sigma$ and $\kappa=\lambda\cdot m$ for $\lambda\in \mathbb{R}$. Then the following are equivalent:
\begin{itemize}
\item[(a)] \eqref{eq:DEF41} holds;
\item[(b)] \eqref{eq:42-3} holds;
\item[(c)] $2\lambda\notin \{\lambda^\text{D}_n: n\geq 1\}$.
\end{itemize}
In addition, for $\lambda\neq \lambda^\text{D}_n/2$, the following are true:
\begin{itemize}
\item[(1)] $D_\lambda$ is identified with the DN operator for $({\mathscr E}^\kappa, {\mathscr F}^\kappa)$ on $L^2(\Gamma)$.
\item[(2)] $D_\lambda$ is lower semi-bounded and self-adjoint on $L^2(\Gamma)$.
\end{itemize}
\end{lemma}
\begin{proof}
Clearly (b) implies (a), and (c) implies (b) by means of Lemma~\ref{LMB1}; see also \cite[Lemma~2.2]{AM12}. If $\lambda=\lambda^\text{D}_n/2$ for some $n$, then $D_\lambda$ is not well defined because of \cite[Proposition~4.11]{AE14}. This means that there exist $u_1, u_2\in \mathcal{H}^\kappa_\Gamma$ such that $u_1|_\Gamma=u_2|_\Gamma$ but $u_1\neq u_2$. Consequently $0\neq u:=u_1-u_2\in \mathcal{H}^\kappa_\Gamma\cap {\mathscr F}^{\kappa, \Omega}_{\mathrm{e}}$, and \eqref{eq:DEF41} fails.
For $\lambda\neq \lambda^\text{D}_n/2$, it is easy to verify the first assertion by using Lemma~\ref{LM32}. The second assertion is a consequence of Corollary~\ref{COR47}. That completes the proof.
\end{proof}
From now on we assume that $\lambda\neq \lambda^\text{D}_n/2$ for every $n\geq 1$.
As mentioned in Lemma~\ref{LM43}, $-D_\lambda$ is the $L^2$-generator of the trace form $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$. Fix a constant $\alpha_0\geq 0$ such that $\check{{\mathscr E}}^\kappa_{\alpha_0}$ is non-negative. The main purpose of this subsection is to formulate the expression of $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$. For emphasis, write
\[
\begin{aligned}
&({\mathscr E}^\lambda,{\mathscr F}^\lambda):=({\mathscr E}^\kappa,{\mathscr F}^\kappa), \quad (\check{{\mathscr E}}^\lambda,\check{{\mathscr F}}^\lambda):=(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa),\\ &\mathcal{H}^\lambda_\Gamma:=\mathcal{H}^\kappa_\Gamma,\quad \mathbf{H}^\lambda_\Gamma \varphi:=\mathbf{H}^\kappa_\Gamma \varphi, \text{ for }\varphi \in \check{{\mathscr F}}^\lambda.
\end{aligned}\]
Write $\mathbf{H}_\Gamma\varphi:=\mathbf{H}^0_\Gamma \varphi$. Clearly ${\mathscr F}^\lambda=H^1(\Omega)$ and $\check{{\mathscr F}}^\lambda=H^{1/2}(\Gamma)$. Set
\begin{equation}\label{eq:4.6}
\mathscr{U}_\lambda(\varphi\otimes \phi):=\lambda \int_\Omega \mathbf{H}^\lambda_\Gamma \varphi (x)\, \mathbf{H}_\Gamma \phi(x)\,dx,\quad \varphi, \phi\in H^{1/2}(\Gamma).
\end{equation}
Recall that $({\mathscr E}^\Omega,{\mathscr F}^\Omega)$ is the part Dirichlet form of $({\mathscr E},{\mathscr F})$ on $L^2(\Omega)$ associated with the killed Brownian motion $X^\Omega$ on $\Omega$. Denote its $L^2$-generator by ${\mathscr L}_\Omega$ with domain $\mathcal{D}({\mathscr L}_\Omega)$ and let $G^\Omega:=(-{\mathscr L}_\Omega)^{-1}$ be its $0$-resolvent, i.e. the Green operator on $L^2(\Omega)$. Let $p^\Omega_t(x,y)$ be the transition density of $X^\Omega$, i.e. $\mathbf{E}_x\left[f(X^\Omega_t) \right]=\int_\Omega p^\Omega_t(x,y)f(y)\,dy$.
\begin{lemma}\label{LM42}
Assume that $\lambda\neq \lambda^\text{D}_n/2$. Then the following hold:
\begin{itemize}
\item[(1)] For $\varphi\in H^{1/2}(\Gamma)$,
\begin{equation}\label{eq:4.2}
\mathbf{H}_\Gamma \varphi-\mathbf{H}^\lambda_\Gamma \varphi-\lambda G^\Omega (\mathbf{H}^\lambda_\Gamma \varphi)=0.
\end{equation}
\item[(2)] Assume $\lambda>\lambda^\text{D}_1/2$. For $\varphi \in b\mathcal{B}(\Gamma)\cap H^{1/2}(\Gamma)$, $\mathbf{H}^\lambda_\Gamma\varphi$ admits the following probabilistic representation:
\begin{equation}\label{eq:4.4}
\mathbf{H}^\lambda_\Gamma \varphi(x)=\mathbf{E}_x\left[\mathrm{e}^{-\lambda \tau}\varphi(X_\tau);\tau<\infty \right],
\end{equation}
where $\tau:=\inf\{t>0: X_t\notin \Omega\}$ and $\varphi$ on the right hand side takes its $\check{\mathscr E}$-quasi-continuous $\sigma$-version. Furthermore, for $\varphi, \phi\in b\mathcal{B}(\Gamma)\cap H^{1/2}(\Gamma)$,
\begin{equation}\label{eq:4.3}
\mathscr{U}_\lambda(\varphi\otimes \phi)=\int_{\Gamma \times \Gamma} \varphi(\xi)\phi(\eta)U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta),
\end{equation}
where
\begin{equation}\label{eq:4.5-2}
U_\lambda(\xi,\eta)=\frac{1}{4}\int_0^\infty \left(1-\mathrm{e}^{-\lambda t}\right) \frac{\partial^2 p^\Omega_t(\xi,\eta)}{\partial \mathbf{n}_\xi \partial \mathbf{n}_\eta}dt
\end{equation}
is symmetric and $\mathbf{n}_\xi$ denotes the inward normal vector at $\xi$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(1)] Note that $\mathbf{H}_\Gamma \varphi-\mathbf{H}^\lambda_\Gamma \varphi \in H^1(\Omega)$ and $(\mathbf{H}_\Gamma \varphi-\mathbf{H}^\lambda_\Gamma \varphi)|_\Gamma=0$. This implies $\mathbf{H}_\Gamma \varphi-\mathbf{H}^\lambda_\Gamma \varphi\in H^1_0(\Omega)$. Take $v\in H^1_0(\Omega)$. Then
\[
{\mathscr E}^\Omega(\mathbf{H}_\Gamma \varphi-\mathbf{H}^\lambda_\Gamma \varphi, v)={\mathscr E}(\mathbf{H}_\Gamma\varphi, v)-{\mathscr E}(\mathbf{H}^\lambda_\Gamma\varphi,v)=\lambda \int_\Omega \mathbf{H}^\lambda_\Gamma \varphi\, v\,dx.
\]
Thus $\mathbf{H}_\Gamma\varphi-\mathbf{H}^\lambda_\Gamma \varphi\in \mathcal{D}(\mathscr{L}_\Omega)$ and $\mathscr{L}_\Omega(\mathbf{H}_\Gamma\varphi-\mathbf{H}^\lambda_\Gamma \varphi)=-\lambda\mathbf{H}^\lambda_\Gamma \varphi$. As a result, \eqref{eq:4.2} holds true.
\item[(2)] Denote the right hand side of \eqref{eq:4.4} by $\mathbf{H}^\lambda_\tau \varphi$ and write $\mathbf{H}_\tau \varphi(x):=\mathbf{E}_x\left[\varphi(X_\tau);\tau<\infty\right]$. Then $\mathbf{H}_\tau \varphi=\mathbf{H}_\Gamma \varphi$. Let $R^\Omega$ be the (probabilistic) $0$-resolvent of $X^\Omega$, i.e.
\[
R^\Omega f(x):=\mathbf{E}_x\left[ \int_0^\infty f(X^\Omega_t)dt\right],\quad f\in b\mathcal{B}(\Omega).
\]
We assert that when $\lambda>\lambda^\text{D}_1/2$, for $\varphi\in b\mathcal{B}(\Gamma)\cap H^{1/2}(\Gamma)$,
\begin{equation}\label{eq:4.5}
\mathbf{H}_\tau \varphi(x)-\mathbf{H}^\lambda_\tau \varphi(x)-\lambda R^\Omega\left(\mathbf{H}^\lambda_\tau \varphi\right)(x)=0,\quad x\in \Omega.
\end{equation}
Note that $\lambda>\lambda_1^\text{D}/2$ amounts to saying that $(\Omega, -\lambda)$ is gaugeable, i.e. $x\mapsto \mathbf{E}_x\left[\mathrm{e}^{-\lambda \tau} \right]$ is finite and bounded; see \cite[Theorem~4.19]{CZ95}. Particularly $\mathbf{H}^\lambda_\tau \varphi$ is finite and bounded.
Since $X^\Omega$ is transient, it follows that $R^\Omega(\mathbf{H}^\lambda_\tau \varphi)<\infty$. In addition, for $\lambda\neq 0$,
\[
\begin{aligned}
\frac{1}{\lambda}\left(\mathbf{H}_\tau \varphi(x)-\mathbf{H}^\lambda_\tau \varphi(x)\right)&=\mathbf{E}_x \left[\int_0^\tau \mathrm{e}^{-\lambda t} \varphi(X_\tau)dt \right] \\
&=\mathbf{E}_x \left[\int_0^\infty 1_{\{t<\tau\}} \mathrm{e}^{-\lambda \tau\circ \theta_t} \varphi(X_\tau\circ \theta_t)dt \right]\\
&=\mathbf{E}_x\left[\int_0^\infty 1_{\{t<\tau\}}\mathbf{E}_{X_t}\left[\mathrm{e}^{-\lambda \tau} \varphi(X_\tau) \right] dt \right] \\
&=R^\Omega(\mathbf{H}^\lambda_\tau \varphi)(x),
\end{aligned}
\]
where $\theta_t$ is the translation operator of $X$, and the third identity is due to the strong Markov property of $X$. Hence we arrive at \eqref{eq:4.5}.
Since ${\mathscr F}^{\kappa,\Omega}_{\mathrm{e}}=H^1_0(\Omega)$, one can get that $\mathbf{H}^\lambda_\Gamma \varphi$ is the unique weak solution in $H^1(\Omega)$ to the following equation:
\begin{equation}\label{eq:4.1}
\frac{1}{2}\Delta u- \lambda u=0\text{ in }\Omega
\end{equation}
with the boundary condition $u|_\Gamma=\varphi$.
On the other hand, on account of \cite[Theorem~4.7]{CZ95}, $\mathbf{H}_\tau^\lambda \varphi$ is also a weak solution to \eqref{eq:4.1}. In addition, for $\check{\mathscr E}$-q.e. $x\in \Gamma$,
\[
\mathbf{H}^\lambda_\tau \varphi(x)=\mathbf{E}_x\left[\varphi(X_\tau)\right]=\mathbf{H}_\tau \varphi(x)=\varphi(x).
\]
It suffices to show $\mathbf{H}^\lambda_\tau \varphi\in H^1(\Omega)$, so that $\mathbf{H}^\lambda_\Gamma \varphi=\mathbf{H}^\lambda_\tau \varphi$. In fact, write $\varphi^+:=\varphi \vee 0$ and $\varphi^-:=\varphi^+-\varphi$. Then \eqref{eq:4.5} implies that
\[
\int_\Omega \mathbf{H}^\lambda_\tau \varphi^\pm(x)\, R^\Omega(\mathbf{H}^\lambda_\tau \varphi^\pm)(x)dx<\infty.
\]
In view of \cite[Theorem~1.5.4]{FOT11}, $R^\Omega(\mathbf{H}^\lambda_\tau \varphi^\pm)\in {\mathscr F}^\Omega_{\mathrm{e}}=H^1_0(\Omega)$. Hence
\[
R^\Omega(\mathbf{H}^\lambda_\tau \varphi)=R^\Omega(\mathbf{H}^\lambda_\tau \varphi^+)-R^\Omega(\mathbf{H}^\lambda_\tau \varphi^-)\in H^1_0(\Omega).
\]
Since $\mathbf{H}_\tau\varphi \in {\mathscr F}_{\mathrm{e}}=H^1(\Omega)$, it follows from \eqref{eq:4.5} that $\mathbf{H}^\lambda_\tau \varphi\in H^1(\Omega)$. Eventually we obtain that $\mathbf{H}^\lambda_\Gamma \varphi=\mathbf{H}^\lambda_\tau\varphi$.
As formulated in \cite[(5.8.1)]{CF12},
\begin{equation}\label{eq:4.7-2}
\mathbf{P}_x\left(\tau\in ds, X_\tau\in d\xi \right)=\frac{1}{2}\frac{\partial p^\Omega_s(x,\xi)}{\partial {\mathbf{n}_\xi}}ds\,\sigma(d\xi),\quad x\in \Omega,\ \xi\in \Gamma.
\end{equation}
Then a straightforward computation, sketched after the proof, yields \eqref{eq:4.3}. Note that \eqref{eq:4.2} and \eqref{eq:4.4} imply
\[
\begin{aligned}
\mathscr U_\lambda(\varphi\otimes \phi)&=\lambda \int_\Omega \mathbf{H}^\lambda_\tau \varphi(x) \mathbf{H}^\lambda_\tau \phi(x)dx+\lambda^2 \int_\Omega \mathbf{H}^\lambda_\tau \varphi(x)R^\Omega(\mathbf{H}^\lambda_\tau \phi)(x)dx \\
&=\lambda \int_\Omega \mathbf{H}^\lambda_\tau \varphi(x) \mathbf{H}^\lambda_\tau \phi(x)dx+\lambda^2 \int_\Omega R^\Omega(\mathbf{H}^\lambda_\tau \varphi)(x)\mathbf{H}^\lambda_\tau \phi(x)dx \\
&=\mathscr{U}_\lambda(\phi\otimes \varphi).
\end{aligned}\]
Therefore $U_\lambda(\xi,\eta)=U_\lambda(\eta,\xi)$.
\end{itemize}
That completes the proof.
\end{proof}
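For the reader's convenience we sketch the ``straightforward computation'' behind \eqref{eq:4.3}; this is only an outline, and the interchanges of integrals and normal derivatives are carried out formally. Writing $h(s,x,\xi):=\frac{1}{2}\frac{\partial p^\Omega_s(x,\xi)}{\partial \mathbf{n}_\xi}$ as in the proof of Lemma~\ref{LM46} below, \eqref{eq:4.7-2} and \eqref{eq:4.4} give
\[
\mathbf{H}^\lambda_\tau\varphi(x)=\int_0^\infty\!\!\int_\Gamma \mathrm{e}^{-\lambda s}\varphi(\xi)h(s,x,\xi)\,\sigma(d\xi)ds,\qquad \mathbf{H}_\tau\phi(x)=\int_0^\infty\!\!\int_\Gamma \phi(\eta)h(t,x,\eta)\,\sigma(d\eta)dt.
\]
By the symmetry and the Chapman--Kolmogorov equation for $p^\Omega$, one has (formally) $\int_\Omega h(s,x,\xi)h(t,x,\eta)dx=\frac{1}{4}\frac{\partial^2 p^\Omega_{s+t}(\xi,\eta)}{\partial\mathbf{n}_\xi\partial\mathbf{n}_\eta}$, so that
\[
\mathscr{U}_\lambda(\varphi\otimes\phi)
=\int_{\Gamma\times\Gamma}\varphi(\xi)\phi(\eta)\bigg(\frac{\lambda}{4}\int_0^\infty\!\!\int_0^\infty \mathrm{e}^{-\lambda s}\,\frac{\partial^2 p^\Omega_{s+t}(\xi,\eta)}{\partial\mathbf{n}_\xi\partial\mathbf{n}_\eta}\,ds\,dt\bigg)\sigma(d\xi)\sigma(d\eta),
\]
and the substitution $r=s+t$ together with $\int_0^r\lambda \mathrm{e}^{-\lambda s}ds=1-\mathrm{e}^{-\lambda r}$ turns the inner double integral into $U_\lambda(\xi,\eta)$ as given in \eqref{eq:4.5-2}.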
\begin{remark}\label{RM43}
\eqref{eq:4.2} reads as $\mathbf{H}^\lambda_\Gamma \varphi=-\lambda G^\Omega(\mathbf{H}^\lambda_\Gamma \varphi)+ \mathbf{H}_\Gamma \varphi$, where $-\lambda G^\Omega(\mathbf{H}^\lambda_\Gamma \varphi)\in {\mathscr F}^{\Omega}_{\mathrm{e}}=H^1_0(\Omega)$ and $\mathbf{H}_\Gamma \varphi\in \mathcal{H}_\Gamma$. This is actually the orthogonal decomposition of $\mathbf{H}^\lambda_\Gamma \varphi\in H^1(\Omega)$ in $H^1_0(\Omega)\oplus \mathcal{H}_\Gamma$.
Note that $U_\lambda(\xi,\eta)\geq 0$ when $\lambda\geq 0$, while $U_\lambda(\xi,\eta)< 0$ when $\lambda^\text{D}_1/2<\lambda<0$. For $\lambda>\lambda^\text{D}_1/2$, $U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)$ forms a finite symmetric measure on $\Gamma\times \Gamma$, because the gaugeability of $(\Omega, -\lambda)$ implies
\[
\left|\iint_{\Gamma\times \Gamma}U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)\right|=\left| \mathscr{U}_\lambda(1\otimes 1)\right|\leq |\lambda| \int_\Omega \mathbf{E}_x\left[\mathrm{e}^{-\lambda \tau} \right]dx<\infty.
\]
In addition,
\[
\lambda\mapsto U_\lambda(\xi,\eta)
\]
is increasing and the limit $U(\xi,\eta):=\lim_{\lambda\uparrow\infty} U_\lambda(\xi,\eta)$ exists for any $\xi,\eta\in \Gamma$. Particularly, $(\check{\mathscr E},\check{\mathscr F})$ enjoys the following Beurling--Deny representation:
\begin{equation}\label{eq:4.7}
\check{\mathscr E}(\varphi,\varphi)=\frac{1}{2}\iint_{\Gamma\times \Gamma} \left(\varphi(\xi)-\varphi(\eta)\right)^2U(\xi,\eta)\sigma(d\xi)\sigma(d\eta),\quad \forall \varphi\in \check{\mathscr F}=H^{1/2}(\Gamma);
\end{equation}
see \cite[(5.8.4)]{CF12}.
\end{remark}
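At least formally, the limit kernel $U$ admits a familiar expression in terms of the Green kernel: exchanging the normal derivatives with the time integral in \eqref{eq:4.5-2} (an interchange we do not attempt to justify here) gives
\[
U(\xi,\eta)=\frac{1}{4}\int_0^\infty \frac{\partial^2 p^\Omega_t(\xi,\eta)}{\partial\mathbf{n}_\xi\partial\mathbf{n}_\eta}\,dt
=\frac{1}{4}\,\frac{\partial^2 G^\Omega(\xi,\eta)}{\partial\mathbf{n}_\xi\partial\mathbf{n}_\eta},\qquad \xi\neq\eta,
\]
where $G^\Omega(x,y)=\int_0^\infty p^\Omega_t(x,y)dt$ denotes the kernel of the Green operator $G^\Omega$; this is consistent with the classical description of the jump kernel of the boundary trace process $\check{X}$ appearing in \eqref{eq:4.7}.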
Now we are in a position to formulate the trace form $(\check{\mathscr E}^\lambda, \check{\mathscr F}^\lambda)$ associated with the DN operator $D_\lambda$.
\begin{theorem}\label{THM44-2}
Assume that $\lambda\neq \lambda^\text{D}_n/2$. Then for any $\varphi\in \check{\mathscr F}^\lambda=H^{1/2}(\Gamma)$,
\begin{equation}\label{eq:4.8}
\check{\mathscr E}^\lambda(\varphi,\varphi)=\frac{1}{2}\iint_{\Gamma\times \Gamma} \left(\varphi(\xi)-\varphi(\eta)\right)^2U(\xi,\eta)\sigma(d\xi)\sigma(d\eta)+\mathscr{U}_\lambda(\varphi\otimes \varphi),
\end{equation}
where $U(\xi,\eta)=\lim_{\lambda\uparrow \infty}U_\lambda(\xi,\eta)$ appears in Remark~\ref{RM43} and $\mathscr{U}_\lambda$ is defined as \eqref{eq:4.6}. Furthermore, for either $\lambda\geq 0,\ \varphi\in H^{1/2}(\Gamma)$ or $\lambda>\lambda^\text{D}_1/2,\ \varphi\in b\mathcal{B}(\Gamma)\cap H^{1/2}(\Gamma)$,
\begin{equation}\label{eq:4.9}
\begin{aligned}
\check{\mathscr E}&^\lambda(\varphi,\varphi)\\
&=\frac{1}{2}\iint_{\Gamma\times \Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2\left(U(\xi,\eta)-U_\lambda(\xi,\eta)\right)\sigma(d\xi)\sigma(d\eta)+\int_\Gamma \varphi(\xi)^2V_\lambda(\xi)\sigma(d\xi),
\end{aligned}\end{equation}
where $V_\lambda(\xi)=\int_\Gamma U_\lambda(\xi,\eta)\sigma(d\eta)$.
\end{theorem}
\begin{proof}
It follows from \eqref{eq:4.2} that for $\varphi\in H^{1/2}(\Gamma)$,
\[
\begin{aligned}
{\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma \varphi, \mathbf{H}^\lambda_\Gamma \varphi)&={\mathscr E}(\mathbf{H}^\lambda_\Gamma \varphi, \mathbf{H}^\lambda_\Gamma \varphi)+\lambda \int_\Omega \left(\mathbf{H}^\lambda_\Gamma \varphi\right)^2dx \\
&={\mathscr E}(\mathbf{H}_\Gamma \varphi, \mathbf{H}_\Gamma \varphi)+\lambda^2 {\mathscr E}(G^\Omega(\mathbf{H}^\lambda_\Gamma \varphi), G^\Omega(\mathbf{H}^\lambda_\Gamma \varphi))+\lambda \int_\Omega \left(\mathbf{H}^\lambda_\Gamma \varphi\right)^2dx.
\end{aligned}\]
Since $G^\Omega(\mathbf{H}^\lambda_\Gamma\varphi)\in \mathcal{D}(\mathscr{L}_\Omega)\subset H^1_0(\Omega)$ and $-\mathscr{L}_\Omega G^\Omega(\mathbf{H}^\lambda_\Gamma \varphi)=\mathbf{H}^\lambda_\Gamma \varphi$, ${\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma \varphi, \mathbf{H}^\lambda_\Gamma \varphi)$ is equal to
\[
{\mathscr E}(\mathbf{H}_\Gamma \varphi, \mathbf{H}_\Gamma \varphi)+\lambda^2\int_\Omega \mathbf{H}^\lambda_\Gamma \varphi\, G^\Omega(\mathbf{H}^\lambda_\Gamma\varphi) dx +\lambda \int_\Omega \left(\mathbf{H}^\lambda_\Gamma \varphi\right)^2dx.
\]
Using \eqref{eq:4.2}, we get that
\[
{\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma \varphi, \mathbf{H}^\lambda_\Gamma \varphi)={\mathscr E}(\mathbf{H}_\Gamma \varphi, \mathbf{H}_\Gamma \varphi)+\mathscr U_\lambda (\varphi\otimes \varphi).
\]
In view of the definitions of $\check{\mathscr E}^\lambda$ and $\check{\mathscr E}$ together with \eqref{eq:4.7}, the formula \eqref{eq:4.8} follows.
When $\lambda\geq 0$, \eqref{eq:4.4} holds true for all $\varphi\in H^{1/2}(\Gamma)$. Hence \eqref{eq:4.3} also holds true for $\varphi\in H^{1/2}(\Gamma)$. Since $0\leq U_\lambda(\xi, \eta)\leq U(\xi,\eta)$, it follows that
\[
\begin{aligned}
\iint_{\Gamma\times \Gamma} \left(\varphi(\xi)-\varphi(\eta)\right)^2&U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)\\
&\leq \iint_{\Gamma\times \Gamma} \left(\varphi(\xi)-\varphi(\eta)\right)^2U(\xi,\eta)\sigma(d\xi)\sigma(d\eta)<\infty.
\end{aligned}\]
This implies that
\[
\int_\Gamma \varphi(\xi)^2V_\lambda(\xi)\sigma(d\xi)=\frac{1}{2}\iint_{\Gamma\times \Gamma} \left(\varphi(\xi)-\varphi(\eta)\right)^2U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)+\mathscr{U}_\lambda(\varphi\otimes \varphi)<\infty;
\]
the elementary expansion behind this identity is recorded after the proof. Particularly, \eqref{eq:4.9} holds true. When $\lambda^\text{D}_1/2<\lambda<0$, we note that $U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)$ is a finite measure on $\Gamma\times \Gamma$ and hence $V_\lambda(\xi)\sigma(d\xi)$ is a finite measure on $\Gamma$. Then the same representation \eqref{eq:4.9} holds for $\varphi\in b\mathcal{B}(\Gamma)\cap H^{1/2}(\Gamma)$ by means of Lemma~\ref{LM42}~(2). That completes the proof.
\end{proof}
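The identity invoked in the last step of the proof is a routine expansion of the square, which we record only for convenience: since $U_\lambda$ is symmetric and \eqref{eq:4.3} gives $\mathscr{U}_\lambda(\varphi\otimes\varphi)=\iint_{\Gamma\times\Gamma}\varphi(\xi)\varphi(\eta)U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)$,
\[
\frac{1}{2}\iint_{\Gamma\times\Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)
=\int_\Gamma\varphi(\xi)^2V_\lambda(\xi)\sigma(d\xi)-\iint_{\Gamma\times\Gamma}\varphi(\xi)\varphi(\eta)U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta),
\]
and adding $\mathscr{U}_\lambda(\varphi\otimes\varphi)$ to both sides yields the displayed identity.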
\begin{remark}\label{RM45}
We point out that
\begin{itemize}
\item[(a)] For $\lambda>\lambda^\text{D}_1/2$, it holds that $\check{\mathscr E}^\lambda(\varphi,\varphi)\lesssim \|\varphi\|_{H^{1/2}(\Gamma)}^2$ for all $\varphi\in H^{1/2}(\Gamma)$;
\item[(b)] For $\lambda>0$, it holds that $\|\varphi\|_{H^{1/2}(\Gamma)}^2\lesssim \check{\mathscr E}^\lambda(\varphi,\varphi)$ for all $\varphi\in H^{1/2}(\Gamma)$;
\item[(c)] For $\lambda\leq 0$ with $\lambda\neq \lambda^\text{D}_n/2$, it holds that $$\|\varphi\|_{H^{1/2}(\Gamma)}^2\lesssim \check{\mathscr E}^\lambda(\varphi,\varphi)+(1-\lambda)\int_\Omega \mathbf{H}^\lambda_\Gamma\varphi(x)^2dx,\quad \forall \varphi\in H^{1/2}(\Gamma).$$
\end{itemize}
To show (a), note that there exists a bounded extension operator $Z: H^{1/2}(\Gamma)\rightarrow H^1(\Omega)$ such that $Z\varphi|_\Gamma=\varphi$ for $\varphi\in H^{1/2}(\Gamma)$; see, e.g., \cite[Theorem~2.6.11]{SS11}. Then $Z\varphi-\mathbf{H}^\lambda_\Gamma \varphi \in H^1_0(\Omega)$ and
\[
\begin{aligned}
{\mathscr E}^\lambda(Z\varphi,Z\varphi)&={\mathscr E}^\lambda\left((Z\varphi-\mathbf{H}^\lambda_\Gamma \varphi)+\mathbf{H}^\lambda_\Gamma \varphi, (Z\varphi-\mathbf{H}^\lambda_\Gamma \varphi)+\mathbf{H}^\lambda_\Gamma \varphi\right) \\
&={\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma\varphi,\mathbf{H}^\lambda_\Gamma \varphi)+{\mathscr E}^\lambda(Z\varphi-\mathbf{H}^\lambda_\Gamma \varphi,Z\varphi-\mathbf{H}^\lambda_\Gamma \varphi) \\
&\geq {\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma\varphi,\mathbf{H}^\lambda_\Gamma \varphi)=\check{\mathscr E}^\lambda(\varphi,\varphi),
\end{aligned}
\]
where the inequality is due to the Poincar\'e inequality. By the boundedness of $Z$, one gets that
\[
\check{\mathscr E}^\lambda(\varphi,\varphi)\leq {\mathscr E}^\lambda(Z\varphi,Z\varphi)\lesssim \|Z\varphi\|_{H^1(\Omega)}^2\lesssim \|\varphi\|^2_{H^{1/2}(\Gamma)}.
\]
To prove (b) and (c), we have
\[
\|\varphi\|_{H^{1/2}(\Gamma)}^2=\|\text{Tr}(\mathbf{H}^\lambda_\Gamma \varphi)\|^2_{H^{1/2}(\Gamma)}\lesssim \|\mathbf{H}^\lambda_\Gamma \varphi\|^2_{H^1(\Omega)},
\]
since the trace operator $\text{Tr}: H^1(\Omega)\rightarrow H^{1/2}(\Gamma)$ is bounded; see, e.g., \cite[Theorem~2.6.8]{SS11}. When $\lambda>0$, it holds that $\|\varphi\|_{H^{1/2}(\Gamma)}^2\lesssim {\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma \varphi,\mathbf{H}^\lambda_\Gamma \varphi)=\check{\mathscr E}^\lambda(\varphi,\varphi)$, arriving at (b). When $\lambda\leq 0$ with $\lambda\neq \lambda^\text{D}_n/2$, since
\[
\begin{aligned}
\|\mathbf{H}^\lambda_\Gamma \varphi\|^2_{H^1(\Omega)}&={\mathscr E}^\lambda(\mathbf{H}^\lambda_\Gamma\varphi, \mathbf{H}^\lambda_\Gamma\varphi)+(1-\lambda)\int_\Omega \mathbf{H}^\lambda_\Gamma\varphi(x)^2dx\\
&=\check{\mathscr E}^\lambda(\varphi,\varphi)+(1-\lambda)\int_\Omega \mathbf{H}^\lambda_\Gamma\varphi(x)^2dx,
\end{aligned}\]
we obtain (c).
\end{remark}
Note that the representation \eqref{eq:4.9} is not established for $\lambda^\text{D}_1/2<\lambda<0$ and unbounded $\varphi\in H^{1/2}(\Gamma)$. In fact, we have the following.
\begin{corollary}
Assume that $\lambda>\lambda^\text{D}_1/2$. Then \eqref{eq:4.9} holds for $\varphi\in H^{1/2}(\Gamma)$ such that $\int_\Gamma \varphi(\xi)^2|V_\lambda(\xi)|\sigma(d\xi)<\infty$.
\end{corollary}
\begin{proof}
Fix $\varphi\in H^{1/2}(\Gamma)$ such that $\int_\Gamma \varphi(\xi)^2|V_\lambda(\xi)|\sigma(d\xi)<\infty$. Set $\varphi_n:=(-n)\vee \varphi \wedge n$. Then $\varphi_n\in H^{1/2}(\Gamma)$ and
\[
\iint_{\Gamma\times \Gamma}\left(\varphi_n(\xi)-\varphi_n(\eta)\right)^2U(\xi,\eta)\sigma(d\xi)\sigma(d\eta)\rightarrow \iint_{\Gamma\times \Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2U(\xi,\eta)\sigma(d\xi)\sigma(d\eta).
\]
Since $\iint_{\Gamma\times \Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2|U_\lambda(\xi,\eta)|\sigma(d\xi)\sigma(d\eta)\leq 4\int_\Gamma \varphi(\xi)^2|V_\lambda(\xi)|\sigma(d\xi)<\infty$, one can obtain that
\[
\iint_{\Gamma\times \Gamma}\left(\varphi_n(\xi)-\varphi_n(\eta)\right)^2U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)\rightarrow \iint_{\Gamma\times \Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta)
\]
and
\[
\int_\Gamma \varphi_n(\xi)^2V_\lambda(\xi)\sigma(d\xi)\rightarrow \int_\Gamma \varphi(\xi)^2V_\lambda(\xi)\sigma(d\xi).
\]
Therefore Remark~\ref{RM45}~(a) yields that $\check{\mathscr E}^\lambda(\varphi,\varphi)=\lim_{n\rightarrow \infty}\check{\mathscr E}^\lambda(\varphi_n,\varphi_n)$ is equal to the right hand side of \eqref{eq:4.9}. That completes the proof.
\end{proof}
\begin{example}
Consider $\Omega=\mathbb{D}=\{x\in {\mathbb R}^d: |x|<1\}$ and $\lambda>\lambda^\text{D}_1/2$. Note that the Poisson kernel for $\mathbb{D}$ is
\begin{equation}\label{eq:4.12-2}
K(x,\xi)=\frac{1}{w_d}\frac{1-|x|^2}{|x-\xi|^d},\quad x\in \mathbb{D},\ \xi\in \partial \mathbb{D},
\end{equation}
where $w_d$ is the surface area of the unit sphere $\partial\mathbb{D}$. A straightforward computation yields that
\[
V_\lambda(\xi)=\lambda\int_{\mathbb{D}} K(x,\xi)\mathbf{E}_x\mathrm{e}^{-\lambda \tau}dx.
\]
Since $x\mapsto \mathbf{E}_x \mathrm{e}^{-\lambda\tau}$ is bounded, it follows that $|V_\lambda(\xi)|\lesssim |\lambda|\int_{\mathbb{D}} K(x,\xi)dx$. By the explicit expression \eqref{eq:4.12-2} of $K$, one sees that $\xi\mapsto \int_{\mathbb{D}}K(x,\xi)dx$ is finite and constant. Therefore $V_\lambda$ is bounded and \eqref{eq:4.9} holds for all $\varphi\in H^{1/2}(\Gamma)$.
Recall that $\check{\mathscr E}^\lambda_{\alpha_0}$ is non-negative. By means of \eqref{eq:4.9}, Remark~\ref{RM45} and the boundedness of $V_\lambda$, one can obtain that for $\alpha>\alpha_0$, $\|\cdot\|_{\check{\mathscr E}^\lambda_\alpha}$ is a norm on $H^{1/2}(\Gamma)$ equivalent to $\|\cdot\|_{H^{1/2}(\Gamma)}$.
\end{example}
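One way to see that $\int_{\mathbb{D}}K(x,\xi)dx$ is constant in $\xi$, and to identify its value, is the following routine check, recorded only as an aside: by rotational invariance the integral does not depend on $\xi$, and since the Poisson kernel satisfies $\int_{\partial\mathbb{D}}K(x,\eta)\sigma(d\eta)=1$ for every $x\in\mathbb{D}$, Fubini's theorem gives
\[
\sigma(\partial\mathbb{D})\int_{\mathbb{D}}K(x,\xi)dx
=\int_{\partial\mathbb{D}}\!\int_{\mathbb{D}}K(x,\eta)\,dx\,\sigma(d\eta)
=\int_{\mathbb{D}}\!\int_{\partial\mathbb{D}}K(x,\eta)\,\sigma(d\eta)\,dx
=|\mathbb{D}|,
\]
so that $\int_{\mathbb{D}}K(x,\xi)dx=|\mathbb{D}|/\sigma(\partial\mathbb{D})=1/d$.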
Looking at \eqref{eq:4.7} and \eqref{eq:4.8}, one finds the following exchangeability of perturbation and trace transformation on $({\mathscr E},{\mathscr F})$: for $\lambda\neq \lambda^\text{D}_n/2$, the trace form of $({\mathscr E}^\lambda,{\mathscr F}^\lambda)$ is identified with the trace Dirichlet form $(\check{\mathscr E},\check{\mathscr F})$ `perturbed' by the bilinear functional $\mathscr{U}_\lambda$. When $\lambda>\lambda^\text{D}_1/2$, the perturbation term $\mathscr{U}_\lambda$ is determined by the symmetric measure $$\check\nu_\lambda(d\xi, d\eta):=U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta).$$ Particularly, if $\lambda\geq 0$, then $\check\nu_\lambda$ is a bivariate smooth measure with respect to $\check{\mathscr E}$ in the sense that $\bar{\nu}_\lambda(\cdot):=\check\nu_\lambda(\cdot, \Gamma)$ is smooth with respect to $\check{\mathscr E}$ and $\check\nu_\lambda(d\xi, d\eta)\leq U(\xi,\eta)\sigma(d\xi)\sigma(d\eta)$; see \cite{Y96-2}. On account of \cite[Theorem~4.3]{Y96-2}, there exists a multiplicative functional $\check M^\lambda=(\check M^\lambda_t)_{t\geq 0}$ of $\check{X}$ such that $\check\nu_\lambda$ is the bivariate Revuz measure of $\check M^\lambda$; see \cite{Y96} for the definitions of a multiplicative functional and its bivariate Revuz measure. Note that the perturbation of $(\check{\mathscr E},\check{\mathscr F})$ by $\check\nu_\lambda$ (more exactly, by $\mathscr{U}_\lambda$) corresponds to the killing transformation of $\check{X}$ by $\check M^\lambda$; see \cite[Theorem~3.10]{Y96}. Therefore we have the following.
\begin{corollary}
Assume $\lambda\geq 0$. Let $X^\lambda$ be the $\lambda$-subprocess of $X$ associated with $({\mathscr E}^\lambda,{\mathscr F}^\lambda)$. Then there exists a multiplicative functional $\check M^\lambda$ of $\check{X}$ such that the time-changed process of $X^\lambda$ by the local time on $\Gamma$ is identified with the subprocess of $\check{X}$ perturbed by $\check{M}^\lambda$.
\end{corollary}
\subsection{Irreducibility and ground state}
Let $(\check{T}^\lambda_t)_{t\geq 0}:=(\mathrm{e}^{-tD_\lambda})_{t\geq 0}$ be the strongly continuous semigroup associated with $(\check{\mathscr E}^\lambda,\check{\mathscr F}^\lambda)$ on $L^2(\Gamma)$. Denote its resolvent by $(\check{G}^\lambda_\alpha)_{\alpha>\alpha_0}$, i.e. $\check{G}^\lambda_{\alpha} f=\int_0^\infty \mathrm{e}^{-\alpha t}\check{T}^\lambda_t f\, dt$ for $f\in L^2(\Gamma)$.
\begin{lemma}\label{LM46}
Let $A\subset \Gamma$ be such that $1_A\in H^{1/2}(\Gamma)$ and $\int_{A\times (\Gamma\setminus A)}U(\xi, \eta)\sigma(d\xi)\sigma(d\eta)=0$. Then $\sigma(A)=0$ or $\sigma(\Gamma\setminus A)=0$.
\end{lemma}
\begin{proof}
Since $1_A\in H^{1/2}(\Gamma)=\check{\mathscr F}$, take $1_{\tilde{A}}$ to be its $\check{\mathscr E}$-quasi-continuous $\sigma$-version. Then $\tilde{A}=A$, $\sigma$-a.e., and $\tilde{A}$ is nearly Borel and simultaneously finely open and finely closed.
In view of \eqref{eq:4.5-2}, $U(\xi,\eta)=0$ if and only if $U_\lambda(\xi,\eta)=0$ for one (equivalently all) $\lambda>0$.
Set $h(s,x,\xi):=\frac{1}{2}\frac{\partial p^\Omega_s(x,\xi)}{\partial {\mathbf{n}_\xi}}$, i.e. $\mathbf{P}_x(\tau\in ds, X_\tau\in d\xi)=h(s,x,\xi)ds\,\sigma(d\xi)$; see \eqref{eq:4.7-2}. Then $U_\lambda(\xi,\eta)=\lambda\int_\Omega K_\lambda(x,\xi)K(x,\eta)dx$, where
\[
K(x,\xi)=\int_0^\infty h(s,x,\xi)ds,\quad K_\lambda(x,\eta)=\int_0^\infty \mathrm{e}^{-\lambda s}h(s,x,\eta)ds.
\]
Clearly $K(x,\xi)=0$ amounts to $K_\lambda(x,\xi)=0$. Hence for $\lambda>0$, $U_\lambda(\xi,\eta)=0$ is equivalent to $\int_\Omega K(x,\xi)K(x,\eta)dx=0$. Since $\int_{A\times (\Gamma\setminus A)}U(\xi,\eta)\sigma(d\xi)\sigma(d\eta)=0$, it follows that
\[
\int_\Omega \mathbf{P}_x(X_\tau\in \tilde A)\mathbf{P}_x(X_\tau\in \Gamma \setminus \tilde A)dx=0.
\]
Particularly, $\mathbf{P}_x(X_\tau\in \tilde A)=0$ or $\mathbf{P}_x(X_\tau\in \Gamma\setminus \tilde A)=0$ for some $x\in \Omega$. We only treat the former case $u(x):=\mathbf{P}_x (X_\tau\in \tilde A)=0$ for some $x\in \Omega$. Note that $u\geq 0$ and $u$ is harmonic in $\Omega$ (see \cite[Theorem~1.23]{CZ95}). Then the strong maximum principle implies that $u\equiv 0$ in $\Omega$. Since $u$ is ${\mathscr E}$-quasi-continuous, it follows that $u=0$, ${\mathscr E}$-q.e. on $\bar{\Omega}$. On account of \cite[Theorem~5.2.8]{CF12}, $1_{\tilde{A}}=u|_\Gamma=0$, $\check{\mathscr E}$-q.e. Particularly, $1_{\tilde{A}}=0$, $\sigma$-a.e. and hence $\sigma(A)=\sigma(\tilde{A})=0$. That completes the proof.
\end{proof}
For $\lambda>\lambda^\text{D}_1/2$, the irreducibility of $(\check{T}^\lambda_t)_{t\geq 0}$ was first obtained in \cite[Theorem~4.2]{AM12} with the help of the Krein--Rutman theorem. It is worth pointing out that $\lambda>\lambda^\text{D}_1/2$ is equivalent to either condition in Lemma~\ref{LM311} (as well as to the gaugeability of $(\Omega,-\lambda)$).
In what follows we give an alternative proof based on the form representation of $D_\lambda$.
\begin{corollary}\label{COR410}
For $\lambda>\lambda^\text{D}_1/2$, $(\check{T}^\lambda_t)_{t\geq 0}$ is irreducible.
\end{corollary}
\begin{proof}
The positivity of $(\check{T}^\lambda_t)_{t\geq 0}$ has been obtained in Theorem~\ref{THM312}.
To show the irreducibility, take an invariant set $A\subset \Gamma$ with respect to $(\check{T}^\lambda_t)_{t\geq 0}$. Since $1_\Gamma\in H^{1/2}(\Gamma)$, Theorem~\ref{THMC2} yields that $1_A\in H^{1/2}(\Gamma)$ and
\[
0=-\check{\mathscr E}^\lambda(1_A, 1_{\Gamma\setminus A})=\iint_{A\times (\Gamma \setminus A)}\left(U(\xi,\eta)-U_\lambda(\xi,\eta)\right)\sigma(d\xi)\sigma(d\eta).
\]
Clearly, $U-U_\lambda\geq 0$.
In view of \eqref{eq:4.5-2}, $U(\xi,\eta)-U_\lambda(\xi,\eta)=0$ if and only if $U(\xi,\eta)=0$. Therefore Lemma~\ref{LM46} yields $\sigma(A)=0$ or $\sigma(\Gamma\setminus A)=0$. That completes the proof.
\end{proof}
Next we establish the existence of a special eigenfunction of $D_\lambda$, called the ground state, by virtue of the Krein--Rutman theorem. Though the derivation is classical, we present a proof for the reader's convenience.
\begin{theorem}\label{THM411}
Assume that $\lambda\neq \lambda^\text{D}_n/2$. Then the following hold:
\begin{itemize}
\item[(1)] The DN operator $D_\lambda$ has a purely discrete spectrum $\sigma(D_\lambda)$ in the sense that $\sigma(D_\lambda)$ consists only of eigenvalues of finite multiplicities which have no finite accumulation points.
\item[(2)] Assume that $\lambda>\lambda^\text{D}_1/2$ and let $\lambda^\text{DN}_1$ be the smallest eigenvalue in $\sigma(D_\lambda)$. Then $\lambda^\text{DN}_1$ is a simple eigenvalue admitting a strictly positive eigenfunction $h_\lambda\in L^2(\Gamma)$, i.e. $h_\lambda>0$, $\sigma$-a.e., and $D_\lambda h_\lambda=\lambda^\text{DN}_1 h_\lambda$.
No other eigenvalues admit strictly positive eigenfunctions.
\end{itemize}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[(1)] Note that for $\alpha>\alpha_0$, $\check{G}^\lambda_\alpha(L^2(\Gamma))\subset \mathcal{D}(D_\lambda)\subset H^{1/2}(\Gamma)$ and $H^{1/2}(\Gamma)$ is compactly embedded in $L^2(\Gamma)$. Hence $\check{G}^\lambda_\alpha$ is a compact operator on $L^2(\Gamma)$. The first assertion is a result of \cite[Proposition~2.11]{S12}.
\item[(2)] The existence of $\lambda^\text{DN}_1$ is due to the lower semi-boundedness of $D_\lambda$. Note that $\beta \in \sigma(D_\lambda)$ if and only if $(\alpha+\beta)^{-1}$ is an eigenvalue of $\check{G}^\lambda_\alpha$, and that for $\varphi\in L^2(\Gamma)$, $D_\lambda \varphi=\beta \varphi$ if and only if $\check{G}^\lambda_\alpha \varphi=(\alpha+\beta)^{-1}\varphi$.
Particularly, $(\alpha+\lambda^\text{DN}_1)^{-1}$ is equal to the spectral radius of $\check{G}^\lambda_\alpha$.
It follows from the irreducibility of $(\check{T}^\lambda_t)_{t\geq 0}$ that $\check{G}^\lambda_\alpha \varphi>0$, $\sigma$-a.e., for any non-zero $\varphi\in pL^2(\Gamma)$. Applying the Krein--Rutman theorem (see, e.g., \cite[Theorem~1.2]{D06}) to $\check{G}^\lambda_\alpha$, we can obtain that $(\alpha+\lambda^\text{DN}_1)^{-1}$ is a simple eigenvalue of $\check{G}^\lambda_\alpha$ and the unique one admitting a strictly positive eigenfunction $h_\lambda$. This amounts to saying that $\lambda^\text{DN}_1$ is a simple eigenvalue of $D_\lambda$ and the only eigenvalue admitting a strictly positive eigenfunction.
\end{itemize}
That completes the proof.
\end{proof}
When $\lambda\neq \lambda^\text{D}_n/2$, $\lambda^\text{DN}_1$ is called the \emph{first eigenvalue} of $D_\lambda$. It is worth pointing out that if $\lambda<0$, then $\lambda^\text{DN}_1<0$; see \cite[Lemma~3.2]{AM12}.
When $\lambda>\lambda^\text{D}_1/2$, $\lambda^\text{DN}_1$ is a simple eigenvalue of $D_\lambda$ and we call the eigenfunction $h_\lambda$ in this theorem the \emph{ground state} of $D_\lambda$.
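Since $-D_\lambda$ generates $(\check{T}^\lambda_t)_{t\geq 0}$ and $\check{\mathscr E}^\lambda(\varphi,\varphi)=(D_\lambda\varphi,\varphi)_\sigma$ for $\varphi\in\mathcal{D}(D_\lambda)$ (as used in the proof of Theorem~\ref{THM412} below), the first eigenvalue also admits the usual Rayleigh-quotient description
\[
\lambda^\text{DN}_1=\inf\left\{\check{\mathscr E}^\lambda(\varphi,\varphi):\ \varphi\in H^{1/2}(\Gamma),\ \|\varphi\|_{L^2(\Gamma)}=1\right\},
\]
a standard consequence of the spectral theorem which we record here only for later convenience.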
\subsection{Markov processes $h$-associated with the DN operators}
We have obtained in Corollary~\ref{COR28} the probabilistic counterpart of $D_\lambda$ for $\lambda\geq 0$, i.e. the time-changed process of the $\lambda$-subprocess of reflected Brownian motion on $\bar{\Omega}$ by the local time on $\Gamma$. For $\lambda<0$, $-D_\lambda$ is not straightforwardly identified with the $L^2$-generator of a certain Markov process, since $V_\lambda<0$ in \eqref{eq:4.9} usually leads to the failure of the Markovian property for $(\check{\mathscr E}^\lambda,\check{\mathscr F}^\lambda)$. Instead, another tactic involving $h$-transformations arises in \S\ref{SEC34}. Note that either condition in Lemma~\ref{LM311} amounts to $\lambda>\lambda^\text{D}_1/2$. For $\alpha\in {\mathbb R}$ and $h\in \mathbf{E}^+_\alpha$, let $(\check{\mathscr E}^{\lambda, h}_\alpha, \check{\mathscr F}^{\lambda, h})$ be the $h$-transform of $(\check{\mathscr E}^\lambda_\alpha, \check{\mathscr F}^\lambda)$, as recalled below. Then we have the following.
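For the reader's convenience we recall, only as a reminder (the precise definition is the one given in \S\ref{SEC34}; compare also \eqref{eq:4.13} and the identities used in the proof of Theorem~\ref{THM412}), how the $h$-transform acts at the level of forms and semigroups on $L^2(\Gamma,h^2\cdot\sigma)$:
\[
\check{\mathscr F}^{\lambda,h}=\{\varphi: \varphi h\in\check{\mathscr F}^\lambda\},\qquad
\check{\mathscr E}^{\lambda,h}_\alpha(\varphi,\varphi)=\check{\mathscr E}^\lambda_\alpha(\varphi h,\varphi h),
\]
with associated $L^2$-semigroup $\check{T}^{\lambda,h,\alpha}_t\varphi=\mathrm{e}^{-\alpha t}\check{T}^\lambda_t(\varphi h)/h$.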
\begin{theorem}\label{THM412}
Assume that $\lambda>\lambda_1^\text{D}/2$. Let $\alpha\in {\mathbb R}$ and $h\in \mathbf{E}^+_\alpha$. Then the following hold:
\begin{itemize}
\item[(1)] The $h$-transform $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ is a quasi-regular and irreducible lower bounded symmetric Dirichlet form on $L^2(\Gamma, h^2\cdot \sigma)$. Particularly, the $(\alpha,h)$-associated Markov process of $D_\lambda$ is irreducible.
\item[(2)] $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ is non-negative if and only if $\alpha\geq -\lambda^\text{DN}_1$.
\item[(3)] $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ is recurrent, if and only if $\alpha=-\lambda^\text{DN}_1$ and $h=c\cdot h_\lambda$ for some constant $c>0$, where $h_\lambda$ is the ground state of $D_\lambda$.
\end{itemize}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[(1)] We only need to prove the irreducibility. In fact, the $L^2$-semigroup of $(\check{\mathscr E}^{\lambda,h}_\alpha,\check{\mathscr F}^{\lambda,h})$ is
\begin{equation}\label{eq:4.13}
\check{T}_t^{\lambda, h, \alpha}\varphi:=\frac{\mathrm{e}^{-\alpha t}\check{T}^\lambda_t(\varphi h)}{h},\quad \varphi \in L^2(\Gamma, h^2 \cdot \sigma).
\end{equation}
Since $(\check{T}^\lambda_t)_{t\geq 0}$ is irreducible, it follows that $(\check{T}_t^{\lambda, h, \alpha})_{t\geq 0}$ is also irreducible. As a result of Theorem~\ref{THMC2}, $(\check{\mathscr E}^{\lambda,h}_\alpha,\check{\mathscr F}^{\lambda,h})$ is irreducible.
\item[(2)] Note that $\lambda^\text{DN}_1=\inf\{(D_\lambda \varphi, \varphi)_\sigma: \varphi\in \mathcal{D}(D_\lambda), \|\varphi\|_{L^2(\Gamma)}=1\}$ and $\check{\mathscr E}^\lambda_\alpha(\varphi,\varphi)=(D_\lambda \varphi,\varphi)_\sigma+\alpha(\varphi, \varphi)_\sigma$ for $\varphi\in \mathcal{D}(D_\lambda)$. Hence $\check{\mathscr E}^\lambda_\alpha$ is non-negative if and only if $\alpha\geq - \lambda^\text{DN}_1$. This also amounts to saying that $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ is non-negative.
\item[(3)] Suppose $\alpha=-\lambda^\text{DN}_1$ and $h=c\cdot h_\lambda$. Clearly $1\in \check{\mathscr F}^{\lambda,h}$ and it follows from \eqref{eq:4.13} that $\check{T}^{\lambda, h,\alpha}_t 1=1$. Hence $\check{\mathscr E}^{\lambda, h}_\alpha(1,1)=0$. As a result, $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ is recurrent. Conversely, the recurrence of $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ implies its conservativeness, which particularly yields that $\check{T}^{\lambda, h,\alpha}_t 1=1$. Thus $\check{T}^\lambda_t h=\mathrm{e}^{\alpha t} h$. By the Hille--Yosida theorem, we get that $D_\lambda h=-\alpha h$. In view of Theorem~\ref{THM411}, we can eventually conclude that $\alpha=-\lambda^\text{DN}_1$ and $h=c\cdot h_\lambda$ for some $c>0$.
\end{itemize}
That completes the proof.
\end{proof}
\begin{remark}
The irreducibility of $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ implies that it is either recurrent or transient. The proof of the third assertion indicates that if $(\check{\mathscr E}^{\lambda,h}_\alpha, \check{\mathscr F}^{\lambda,h})$ is transient, then it is not conservative.
\end{remark}
Finally let us formulate the Beurling--Deny decomposition for $(\check{\mathscr E}^{\lambda, h}_{\alpha},\check{\mathscr F}^{\lambda,h})$. When $\lambda\geq 0$, $(\check{\mathscr E}^\lambda,\check{\mathscr F}^\lambda)$ is already a regular Dirichlet form on $L^2(\Gamma)$; we refer to \cite{Y98} for a related consideration of this case.
\begin{proposition}
For either $\lambda\geq 0,\ \varphi\in b\check{\mathscr F}^{\lambda,h}$ or $\lambda>\lambda^\text{D}_1/2,\ \varphi\in b\check{\mathscr F}^{\lambda,h}$ with $\varphi h\in bH^{1/2}(\Gamma)$, $\check{\mathscr E}^{\lambda,h}_{\alpha}(\varphi,\varphi)$ enjoys the following Beurling--Deny decomposition:
\[
\begin{aligned}
\check{\mathscr E}^{\lambda,h}_{\alpha}(\varphi,\varphi)&=\frac{1}{2}\iint_{\Gamma\times \Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2 h(\xi)h(\eta)\left(U(\xi,\eta)-U_\lambda(\xi,\eta)\right)\sigma(d\xi)\sigma(d\eta)\\ &\qquad +\int_\Gamma \varphi^2(\xi)k^{h}(d\xi),
\end{aligned}\]
where $\int_\Gamma \varphi(\xi)^2k^{h}(d\xi)=\check{\mathscr E}_\alpha^\lambda(\varphi^2 h,h)$.
\end{proposition}
\begin{proof}
Clearly, $1\in \check{\mathscr F}^{\lambda,h}$ and $$\int_\Gamma \varphi(\xi)^2k^{h}(d\xi)=\check{\mathscr E}^{\lambda,h}_{\alpha}(\varphi^2, 1)=\check{\mathscr E}^\lambda_\alpha(\varphi^2 h, h).$$
Note that $\check{\mathscr E}^{\lambda,h}_{\alpha}(\varphi,\varphi)=\check{\mathscr E}^\lambda_\alpha(\varphi h, \varphi h)$, and in view of Theorem~\ref{THM44-2}, $\check{\mathscr E}^\lambda(\varphi h, \varphi h)$ is equal to $I_1+I_2$, where
\[
\begin{aligned}
&I_1:=\frac{1}{2}\iint\left(\varphi(\xi)h(\xi)-\varphi(\eta)h(\eta) \right)^2U(\xi,\eta)\sigma(d\xi)\sigma(d\eta), \\
&I_2:=\iint\varphi(\xi)h(\xi)\varphi(\eta)h(\eta)U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta).
\end{aligned}\]
A straightforward computation, spelled out after the proof, yields that $I_1=I_{11}+I_{12}$, where
\[
\begin{aligned}
&I_{11}=\frac{1}{2}\iint \left(\varphi(\xi)-\varphi(\eta)\right)^2h(\xi)h(\eta)U(\xi,\eta)\sigma(d\xi)\sigma(d\eta),\\
&I_{12}=\frac{1}{2}\iint (\varphi^2(\xi)h(\xi)
-\varphi^2(\eta)h(\eta))(h(\xi)-h(\eta))U(\xi,\eta)\sigma(d\xi)\sigma(d\eta).
\end{aligned}
\]
On the other hand, $\check{\mathscr E}^\lambda(\varphi^2 h, h)=I_{12}+\mathscr{U}_\lambda(\varphi^2 h \otimes h)$. Since $\varphi^2 h\in H^{1/2}(\Gamma)$ is bounded when $\lambda^\text{D}_1/2<\lambda<0$, it follows from \eqref{eq:4.4} and $\mathbf{H}_\Gamma h(x)=\mathbf{E}_x \tilde{h}(X_\tau)$, where $\tilde{h}$ is the $\check{\mathscr E}$-quasi-continuous $\sigma$-version of $h$, that
\[
\mathscr{U}_\lambda(\varphi^2 h\otimes h)=\iint_{\Gamma\times\Gamma} \varphi(\xi)^2h(\xi)h(\eta)U_\lambda(\xi,\eta)\sigma(d\xi)\sigma(d\eta).
\]
Thus we get that
\[
\begin{aligned}
\check{\mathscr E}^{\lambda,h}_{\alpha}&(\varphi,\varphi)-\int_\Gamma \varphi^2(\xi)k^{h}(d\xi)\\
&=I_{11}+I_2-\mathscr{U}_\lambda(\varphi^2h\otimes h) \\
&=\frac{1}{2}\iint_{\Gamma\times \Gamma}\left(\varphi(\xi)-\varphi(\eta)\right)^2 h(\xi)h(\eta)\left(U(\xi,\eta)-U_\lambda(\xi,\eta)\right)\sigma(d\xi)\sigma(d\eta).
\end{aligned}\]
That completes the proof.
\end{proof}
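The decomposition $I_1=I_{11}+I_{12}$ used above is a purely algebraic identity; we record the one-line verification only for convenience. For fixed $\xi,\eta$,
\[
\left(\varphi(\xi)h(\xi)-\varphi(\eta)h(\eta)\right)^2
=\left(\varphi(\xi)-\varphi(\eta)\right)^2h(\xi)h(\eta)
+\left(\varphi^2(\xi)h(\xi)-\varphi^2(\eta)h(\eta)\right)\left(h(\xi)-h(\eta)\right),
\]
as one checks by expanding both sides; integrating against $\frac{1}{2}U(\xi,\eta)\sigma(d\xi)\sigma(d\eta)$ gives $I_1=I_{11}+I_{12}$.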
\section{DN operators for Schr\"odinger operators}\label{SEC5}
Let $\Omega$, $\Gamma=\partial \Omega$ and $\sigma$ be the same as those in \S\ref{SEC4}. Further let $(a_{ij}(x))_{1\leq i,j\leq d}$ be a matrix-valued function on $\Omega$ such that $a_{ij}=a_{ji}$ and there exists a constant $C>1$ such that for any $\xi=(\xi_1,\cdots, \xi_d)\in {\mathbb R}^d$,
\begin{equation}\label{eq:36}
C^{-1} |\xi|^2\leq \sum_{i,j=1}^d a_{ij}(x)\xi_i\xi_j\leq C|\xi|^2,\quad \forall x\in \Omega.
\end{equation}
In this section we consider the Dirichlet form:
\begin{equation}\label{eq:52-2}
\begin{aligned}
{\mathscr F}&:=H^1(\Omega),\\
{\mathscr E}(u,v)&:=\frac{1}{2}\sum_{i,j=1}^d\int_\Omega a_{ij}(x)\partial_{x_i}u\,\partial_{x_j}v\,dx,\quad u,v\in {\mathscr F}.
\end{aligned}
\end{equation}
Clearly $({\mathscr E},{\mathscr F})$ is a regular and irreducible Dirichlet form on $L^2(\bar\Omega)$ associated with a Markov process $X$ on $\bar{\Omega}$. The irreducibility is due to, e.g., \cite[Corollary~4.6.4]{FOT11}.
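Informally, and only as orientation (the precise objects are the ones defined through \eqref{eq:52-2}), the $L^2$-generator of $({\mathscr E},{\mathscr F})$ is the divergence-form operator
\[
\mathscr{L}u=\frac{1}{2}\sum_{i,j=1}^d \partial_{x_i}\big(a_{ij}(x)\partial_{x_j}u\big),
\]
subject to the weak vanishing co-normal derivative $\sum_{i,j=1}^d a_{ij}\partial_{x_j}u\,\nu_i=0$ on $\Gamma$, where $\nu$ denotes the outward normal; accordingly, the DN operators of this section produce co-normal rather than normal derivatives.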
Denote by $\mathbf{S}$ (resp. $\mathring{\mathbf{S}}$) the totality of positive smooth measures (resp. positive Radon smooth measures) on $\bar\Omega$ with respect to ${\mathscr E}$. Note that $({\mathscr E},{\mathscr F})$ enjoys the same quasi-notions as \eqref{eq:Brownian}. Hence $\mathbf{S}$ and $\mathring{\mathbf{S}}$ are also the same as those in \S\ref{SEC4}. Particularly, $\mu:=\sigma\in \mathring{\mathbf{S}}$ with $\text{qsupp}[\sigma]=\Gamma$. Denote by $(\check{\mathscr E},\check{\mathscr F})$ the trace Dirichlet form of $({\mathscr E},{\mathscr F})$ on $L^2(\Gamma)$.
Take
\[
\kappa(dx)=V(x)dx\in \mathbf{S}-\mathbf{S}
\]
with $V\in L^\infty(\Omega)$.
For emphasis, write
\[
\begin{aligned}
&({\mathscr E}^V,{\mathscr F}^V):=({\mathscr E}^\kappa,{\mathscr F}^\kappa),\\
& {\mathscr F}^{V,\Omega}:=H^1_0(\Omega),\quad {\mathscr E}^{V,\Omega}(u,v):={\mathscr E}^V(u,v),\; u,v\in {\mathscr F}^{V,\Omega}.
\end{aligned}\]
Both $({\mathscr E}^V,{\mathscr F}^V)$ and $({\mathscr E}^{V,\Omega},{\mathscr F}^{V,\Omega})$ are lower bounded closed forms. Denote their $L^2$-generators by $\mathscr{L}_V:=\mathscr{L}-V$ and $\mathscr{L}_{V,\Omega}$ respectively, where $\mathscr{L}$ is the generator of $({\mathscr E},{\mathscr F})$. From Lemma~\ref{LM32}~(1), we know that ${\mathscr F}^V_{\mathrm{e}}:={\mathscr F}_{\mathrm{e}} \cap L^2(\Omega, |V(x)|dx)=H^1(\Omega)$ and ${\mathscr F}^{V,\Omega}_{\mathrm{e}}:={\mathscr F}^{\kappa,\Omega}_{\mathrm{e}}=H^1_0(\Omega)$. Set $\mathcal{H}^V_\Gamma:=\mathcal{H}^\kappa_\Gamma=\{u\in H^1(\Omega): {\mathscr E}^V(u,v)=0,\ \forall v\in H^1_0(\Omega)\}$.
The DN operator for the Schr\"odinger operator $\mathscr{L}_V$ defined below has been widely studied in, e.g., \cite{AE14, AE20, EO14, EO19}. Note that $0\notin \sigma(\mathscr{L}_{V,\Omega})$ implies \eqref{eq:42-3}; see Lemma~\ref{LMB1}.
\begin{definition}\label{DEF45}
Let $V\in L^\infty(\Omega)$ be such that $0\notin \sigma(\mathscr{L}_{V,\Omega})$. The DN operator $D_V$ for $\mathscr{L}_V$ on $L^2(\Gamma)$ is defined as follows:
\[
\begin{aligned}
&\mathcal{D}(D_V):=\bigg\{\varphi\in L^2(\Gamma): \exists u\in \mathcal{H}^V_\Gamma, f\in L^2(\Gamma)\text{ s.t. }\mathop{\mathrm{Tr}}(u)=\varphi, \\&\qquad\qquad\qquad \qquad\ \qquad\qquad {\mathscr E}^V(u,v)=\int_\Gamma f\, v|_\Gamma\, d\sigma\text{ for all }v\in H^1(\Omega)\bigg\}, \\
&D_V\varphi:=f.
\end{aligned}\]
The function $u\in \mathcal{H}^V_\Gamma$ with $\text{Tr}(u)=\varphi$ is called the \emph{$V$-harmonic extension} of $\varphi$.
\end{definition}
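Informally, and only as orientation (the rigorous definition is the weak one above), $D_V$ maps a boundary datum $\varphi$ to the co-normal derivative of its $V$-harmonic extension: $u$ solves
\[
\frac{1}{2}\sum_{i,j=1}^d\partial_{x_i}\big(a_{ij}\partial_{x_j}u\big)-Vu=0\ \text{ in }\Omega,\qquad u|_\Gamma=\varphi,
\]
and $D_V\varphi=\frac{1}{2}\sum_{i,j=1}^d a_{ij}\partial_{x_j}u\,\nu_i\big|_\Gamma$ with $\nu$ the outward normal, as one sees by formally integrating by parts in the defining identity ${\mathscr E}^V(u,v)=\int_\Gamma f\,v|_\Gamma\,d\sigma$.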
It is easy to verify that $D_V$ is identified with the DN operator for $({\mathscr E}^V,{\mathscr F}^V)$ on $L^2(\Gamma)$. On account of Corollary~\ref{COR47}, $D_V$ is lower semi-bounded and self-adjoint. Let $(\check{\mathscr E}^V,\check{\mathscr F}^V):=(\check{\mathscr E}^\kappa,\check{\mathscr F}^\kappa)$ be the trace form associated with $D_V$. Then $\check{\mathscr F}^V=\check{\mathscr F}=H^{1/2}(\Gamma)$. Denote by $\mathbf{H}^V_\Gamma \varphi$ the $V$-harmonic extension of $\varphi \in H^{1/2}(\Gamma)$ and write $\mathbf{H}_\Gamma \varphi:= \mathbf{H}^0_\Gamma \varphi$. Define
\[
\mathscr{U}_V(\varphi\otimes \phi):=\int_\Omega \mathbf{H}^V_\Gamma \varphi(x)\, \mathbf{H}_\Gamma \phi(x)\, V(x)dx,\quad \varphi, \phi\in H^{1/2}(\Gamma).
\]
Repeating the argument used to prove \eqref{eq:4.8}, we can also represent $\check{\mathscr E}^V$ as the `perturbation' of $\check{\mathscr E}$ by the bilinear symmetric functional $\mathscr{U}_V$.
\begin{proposition}
Assume that $0\notin \sigma(\mathscr{L}_{V,\Omega})$. Then for any $\varphi, \phi\in H^{1/2}(\Gamma)$,
\[
\check{\mathscr E}^V(\varphi,\phi)=\check{\mathscr E}(\varphi,\phi)+\mathscr U_V(\varphi\otimes \phi).
\]
\end{proposition}
Let us turn to the probabilistic counterpart of $D_V$.
When $V\geq 0$, $(\check{\mathscr E}^V,\check{\mathscr F}^V)$ is clearly the trace Dirichlet form of $({\mathscr E}^V,{\mathscr F}^V)$ on $L^2(\Gamma)$ and the probabilistic counterpart of $D_V$ can be easily figured out by applying Theorem~\ref{THM26}. The more interesting case is that $V$ has a negative part. Though it is hard to formulate a representation of $\check{\mathscr E}^V$ like that of $\check{\mathscr E}^\lambda$ in \S\ref{Revisiting}, the irreducibility of the $L^2$-semigroup associated with $D_V$ is obtained in \cite[Proposition~7.4]{AE20}. It reads as follows.
\begin{lemma}\label{LM53-2}
Assume that $a_{ij}\in L^\infty(\Omega)$ for $1\leq i,j\leq d$, $0\notin \sigma(\mathscr{L}_{V,\Omega})$ and $-\mathscr{L}_{V,\Omega}$ is positive, i.e. $(-\mathscr{L}_{V,\Omega} u, u)_m\geq 0$ for any $u\in \mathcal{D}(\mathscr{L}_{V,\Omega})$. Then the $L^2$-semigroup $\check{T}^V_t:=\mathrm{e}^{-t D_V}$ associated with $-D_V$ is irreducible.
\end{lemma}
\begin{remark}
In view of Lemma~\ref{LM311}, the positivity of $-\mathscr{L}_{V,\Omega}$ amounts to saying that ${\mathscr E}^V(u,u)\geq 0$ for all $u\in H^1_0(\Omega)$. Together with the positivity of $-\mathscr{L}_{V,\Omega}$, the condition $0\notin \sigma(\mathscr{L}_{V,\Omega})$ implies additionally that ${\mathscr E}^V(u,u)=0$ for $u\in H^1_0(\Omega)$ if and only if $u=0$.
\end{remark}
Mimicking the proof of Theorem~{{\mathfrak{m}}athrm{e}}f{THM411}, one can conclude that $\lambda^V_1:=\text{min }{{\mathfrak{m}}athrm{s}}igma(D_V)$ is a simple eigenvalue admitting a strictly positive eigenfunction, i.e. there exists $h_V{\mathrm{i}}n L^2(\Gamma)$ such that $h_V>0$, ${{\mathfrak{m}}athrm{s}}igma$-a.e., and $D_Vh_V=\lambda^V_1 h_V$. No other eigenvalues admit strictly positive eigenfunctions. We call $h_V$ the \emph{ground state} of $D_V$.
For $\alpha{\mathrm{i}}n{{\mathfrak{m}}athbb R}$, denote by ${\mathfrak{m}}athbf{E}^+_\alpha$ the totality of all strictly positive $\alpha$-excessive functions in $\check{{{\mathfrak{m}}athscr F}}^V$. Particularly $h_V{\mathrm{i}}n {\mathfrak{m}}athbf{E}^+_{-\lambda_1^V}$. For $h{\mathrm{i}}n {\mathfrak{m}}athbf{E}^+_\alpha$, let $(\check{{{\mathfrak{m}}athscr E}}^{V,h}_\alpha, \check{{{\mathfrak{m}}athscr F}}^{V,h})$ be the $h$-transform of $(\check{{{\mathfrak{m}}athscr E}}^V_\alpha,\check{{{\mathfrak{m}}athscr F}}^V)$. As an extension of Theorem~{{\mathfrak{m}}athrm{e}}f{THM412}, one can obtain the following result. The proof can be completed by repeating the argument in the proof of Theorem~{{\mathfrak{m}}athrm{e}}f{THM412}, and we omit it.
\begin{theorem}\label{THM55}
Adopt the assumptions of Lemma~{{\mathfrak{m}}athrm{e}}f{LM53-2}. Let $\alpha{\mathrm{i}}n {{\mathfrak{m}}athbb R}$ and $h{\mathrm{i}}n {\mathfrak{m}}athbf{E}^+_\alpha$. Then the following hold:
\begin{itemize}
{\mathrm{i}}tem[(1)] $(\check{{{\mathfrak{m}}athscr E}}^{V,h}_\alpha, \check{{{\mathfrak{m}}athscr F}}^{V,h})$ is a quasi-regular and irreducible lower bounded symmetric Dirichlet form on $L^2(\Gamma, h^2\cdot {{\mathfrak{m}}athrm{s}}igma)$. Particularly, the $(\alpha,h)$-associated Markov process of $D_V$ is irreducible.
{\mathrm{i}}tem[(2)] $(\check{{{\mathfrak{m}}athscr E}}^{V,h}_\alpha, \check{{{\mathfrak{m}}athscr F}}^{V,h})$ is non-negative if and only if $\alpha\mathfrak{g}eq -\lambda^V_1$.
{\mathrm{i}}tem[(3)] $(\check{{{\mathfrak{m}}athscr E}}^{V,h}_\alpha, \check{{{\mathfrak{m}}athscr F}}^{V,h})$ is recurrent, if and only if $\alpha=-\lambda^V_1$ and $h=c\cdot h_\lambda$ for some constant $c>0$, where $h_V$ is the ground state of $D_V$.
\end{itemize}
\end{theorem}
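The recurrence assertion in Theorem~\ref{THM55}~(3) has an elementary finite-dimensional analogue which may help the reader's intuition: the ground-state (Doob) transform of a symmetric sub-Markovian generator is conservative. The following numerical sketch is only an illustration and is not used anywhere in the paper; the matrix below is a hypothetical killed nearest-neighbour walk, not the operator $D_V$.
\begin{verbatim}
import numpy as np

# Toy example (unrelated to D_V): symmetric generator of a nearest-neighbour
# walk on {0,...,n-1}, killed at both endpoints, so L is sub-Markovian.
n = 20
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

# Ground state: -L h = lam1 h with lam1 = min spec(-L) and h > 0 (Perron-Frobenius).
evals, evecs = np.linalg.eigh(-L)
lam1, h = evals[0], evecs[:, 0]
h = h if h[0] > 0 else -h
assert (h > 0).all()

# Doob h-transform shifted by lam1: (L^h u)_i = (1/h_i) sum_j L_ij h_j u_j + lam1 u_i.
Lh = (L * h[None, :]) / h[:, None] + lam1 * np.eye(n)

# L^h is a conservative Markov generator: nonnegative off-diagonal rates, zero row sums.
assert (Lh - np.diag(np.diag(Lh)) >= -1e-12).all()
assert np.abs(Lh.sum(axis=1)).max() < 1e-10
\end{verbatim}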
\section{DN operators for perturbations supported on the boundary}\label{SEC7}
In this section we turn our attention to perturbations supported on the boundary. Let $({\mathscr E},{\mathscr F})$ be a regular and irreducible Dirichlet form on $L^2(E,m)$ and $\mu\in \mathring{\mathbf{S}}$ with $\text{qsupp}[\mu]=F$ of positive ${\mathscr E}$-capacity. Denote by $(\check{{\mathscr E}},\check{{\mathscr F}})$ the trace Dirichlet form of $({\mathscr E},{\mathscr F})$ on $L^2(F,\mu)$ and by $\check{\mathbf{S}}$ the family of positive smooth measures with respect to $\check{{\mathscr E}}$. Take $\kappa:=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$ such that $|\kappa|(G)=0$, where $G:=E\setminus F$.
\subsection{Characterization of trace forms}\label{SEC42}
We first summarize some facts about $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ and its trace form $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$, defined as in \eqref{eq:45} and \eqref{eq:46}.
\begin{proposition}\label{THM33}
Assume that $|\kappa|(G)=0$. Then the following hold:
\begin{itemize}
\item[(1)] ${\mathscr F}^\kappa_{\mathrm{e}}={\mathscr F}^{\kappa,G}_{\mathrm{e}}\oplus \mathcal{H}^\kappa_F$, i.e. \eqref{eq:42-3} holds true.
\item[(2)] $\kappa|_F \in \check{\mathbf{S}}-\check{\mathbf{S}}$.
\item[(3)] $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is identified with the perturbation of $(\check{{\mathscr E}},\check{{\mathscr F}})$ by $\kappa|_F$.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(1)] Note that ${\mathscr F}^{\kappa,G}_{\mathrm{e}}={\mathscr F}_{\mathrm{e}}^G$ because of $|\kappa|(G)=0$. This implies $\mathcal{H}^\kappa_F=\mathcal{H}_F\cap L^2(E,|\kappa|)$. Hence for $u\in {\mathscr F}^\kappa_{\mathrm{e}}\subset {\mathscr F}_{\mathrm{e}}$, $u-\mathbf{H}_F u\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}$ and $\mathbf{H}_F u\in \mathcal{H}^\kappa_F$. In particular, ${\mathscr F}^\kappa_{\mathrm{e}}={\mathscr F}^{\kappa,G}_{\mathrm{e}}\oplus \mathcal{H}^\kappa_F$ holds.
\item[(2)] Write $\kappa$ for $\kappa|_F$ for convenience. In view of \cite[Theorems 5.2.6 and 5.2.8]{CF12} and $|\kappa|(G)=0$, we get $\kappa^\pm\in \check{\mathbf{S}}$. Hence $\kappa\in \check{\mathbf{S}}-\check{\mathbf{S}}$.
\item[(3)] Note that $\check{{\mathscr F}}^\kappa={\mathscr F}_{\mathrm{e}}|_F\cap L^2(F,|\kappa| +\mu)=\check{{\mathscr F}}\cap L^2(F,|\kappa|)$. For $\varphi \in \check{{\mathscr F}}^\kappa$, it follows from the proof of the first assertion that $\mathbf{H}_F^\kappa \varphi=\mathbf{H}_F \varphi\in \mathcal{H}^\kappa_F$. Hence for $\varphi, \phi\in \check{{\mathscr F}}^\kappa$, using $|\kappa|(G)=0$, we get
\[
\check{{\mathscr E}}^\kappa(\varphi, \phi)={\mathscr E}^\kappa(\mathbf{H}^\kappa_F \varphi, \mathbf{H}^\kappa_F \phi)={\mathscr E}^\kappa(\mathbf{H}_F \varphi, \mathbf{H}_F \phi)=\check{{\mathscr E}}(\varphi, \phi)+\int_F \varphi \phi\, d\kappa.
\]
Therefore $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is the perturbation of $(\check{{\mathscr E}},\check{{\mathscr F}})$ by $\kappa$.
\end{itemize}
That completes the proof.
\end{proof}
From now on we write $\kappa$ for $\kappa|_F$ when no confusion can arise. Let $\mathscr{N}_\kappa$ be the DN operator for $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ on $L^2(F,\mu)$. On account of Proposition~\ref{THM33}~(3) and Lemma~\ref{LM43}, if $\kappa^-$ is $\check{{\mathscr E}}^{\kappa^+}$-form bounded (for example, $\kappa^-(dx)=\beta^-(x)\mu(dx)$ with $\beta^-\in pL^\infty(F,\mu)$), then $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a lower bounded symmetric closed form on $L^2(F,\mu)$ whose generator is $-\mathscr{N}_\kappa$.
As a corollary, we obtain the following exchangeability of the killing transformation and the time change transformation.
\begin{corollary}
Let $\kappa\in \mathbf{S}$ be such that $\kappa(G)=0$. Then the trace Dirichlet form of $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ on $L^2(F,\mu)$ is identified with the perturbed Dirichlet form of $(\check{{\mathscr E}},\check{{\mathscr F}})$ by $\kappa$.
\end{corollary}
\subsection{Markov processes $h$-associated with DN operators}
Next we identify the probabilistic counterpart of $\mathscr{N}_\kappa$ under the assumption that $\kappa^-$ is $\check{{\mathscr E}}^{\kappa^+}$-form bounded.
\begin{theorem}
Assume that $\kappa^-$ is $\check{{\mathscr E}}^{\kappa^+}$-form bounded. Then $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a quasi-regular lower bounded positivity preserving (symmetric) coercive form on $L^2(F,\mu)$.
\end{theorem}
\begin{proof}
The assumption implies that $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a lower bounded closed form. Take a constant $\alpha_0\geq 0$ such that $({\mathscr A},{\mathscr G}):=(\check{{\mathscr E}}^\kappa_{\alpha_0},\check{{\mathscr F}}^\kappa)$ is non-negative. On account of Proposition~\ref{THM33}~(3), one can obtain that $({\mathscr A},{\mathscr G})$ is a non-negative positivity preserving (symmetric) coercive form. To prove the quasi-regularity, set
\[
\tilde {\mathscr G}:=\check{{\mathscr F}}^{\kappa^+},\quad \tilde{{\mathscr A}}(\varphi,\varphi):=\check{{\mathscr E}}^{\kappa^+}_{\alpha_0}(\varphi,\varphi),\; \varphi \in \tilde{{\mathscr G}}.
\]
Then $(\tilde{{\mathscr A}},\tilde{{\mathscr G}})$ is the perturbed Dirichlet form of $(\check{{\mathscr E}},\check{{\mathscr F}})$ by $\tilde{\kappa}:=\kappa^++\alpha_0\cdot \mu\in \mathbf{S}$. In particular, $(\tilde{{\mathscr A}},\tilde{{\mathscr G}})$ is a quasi-regular Dirichlet form on $L^2(F,\mu)$.
Note that ${\mathscr G}=\check{{\mathscr F}}^\kappa=\check{{\mathscr F}}^{\kappa^+}=\tilde{{\mathscr G}}$ and that for any $\varphi\in {\mathscr G}=\tilde{{\mathscr G}}$,
\[
{\mathscr A}_1(\varphi,\varphi)\leq \tilde{{\mathscr A}}_1(\varphi,\varphi).
\]
Therefore $({\mathscr A},{\mathscr G})$ is quasi-regular by means of Lemma~\ref{LMD3}. That completes the proof.
\end{proof}
As discussed in \S\ref{SEC34}, for $\alpha\in {\mathbb R}$ and $h\in \mathbf{E}^+_\alpha$, the $h$-transform $(\check{{\mathscr E}}^{\kappa, h}_\alpha,\check{{\mathscr F}}^{\kappa,h})$ of $(\check{{\mathscr E}}^\kappa_\alpha, \check{{\mathscr F}}^\kappa)$ is a quasi-regular lower bounded symmetric Dirichlet form on $L^2(F,h^2\cdot \mu)$, whose associated Markov process is called $(\alpha, h)$-associated with $\mathscr{N}_\kappa$. If $(\check{{\mathscr E}},\check{{\mathscr F}})$ is irreducible, then $(\check{{\mathscr E}}^{\kappa, h}_\alpha,\check{{\mathscr F}}^{\kappa,h})$ is also irreducible.
\subsection{Absolutely continuous case}
Consider the special case in which $\kappa^-=\beta^-\cdot\mu$ with $\beta^-\in p L^\infty(F,\mu)$. Clearly $\kappa^-$ is $\check{{\mathscr E}}$-form bounded.
The following corollary presents a simple way to obtain another Markov process related to $\mathscr{N}_\kappa$, which has been proposed in \cite[\S4]{BV17}. In short, for any constant $\alpha_0\geq \|\beta^-\|_{L^\infty(F,\mu)}$, $-\mathscr{N}_\kappa-\alpha_0$ is the $L^2$-generator of a certain Markov process.
\begin{corollary}\label{COR49}
Let $\kappa^-=\beta^-\cdot\mu$ with $\beta^-\in p L^\infty(F,\mu)$. For any $\alpha_0\geq \|\beta^-\|_{L^\infty(F,\mu)}$, $(\check{{\mathscr E}}^\kappa_{\alpha_0},\check{{\mathscr F}}^\kappa)$ is a quasi-regular Dirichlet form on $L^2(F,\mu)$, whose generator is $-\mathscr{N}_\kappa-\alpha_0$.
\end{corollary}
\begin{proof}
On account of Proposition~\ref{THM33}~(3), $(\check{{\mathscr E}}^\kappa_{\alpha_0},\check{{\mathscr F}}^\kappa)$ is the perturbed Dirichlet form of $(\check{{\mathscr E}},\check{{\mathscr F}})$ by $\tilde{\kappa}:=\kappa+\alpha_0\cdot \mu\in \mathbf{S}$. Hence $(\check{{\mathscr E}}^\kappa_{\alpha_0},\check{{\mathscr F}}^\kappa)$ is a quasi-regular Dirichlet form on $L^2(F,\mu)$. That completes the proof.
\end{proof}
The following example, which recovers \cite[\S4]{BV17}, is related to the classical Robin boundary condition for the Laplacian on $\Omega$.
\begin{example}\label{EXA65}
Adopt the notation of \S\ref{SEC5} and consider ${\mathscr F}=H^1(\Omega)$ and ${\mathscr E}(u,v):=\frac{1}{2}\mathbf{D}(u,v)$ for $u,v\in {\mathscr F}$ on $L^2(\bar{\Omega})$. Take $\kappa:=\beta\cdot \sigma$ with $\beta\in L^\infty(\Gamma)$.
It is easy to verify that $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is a lower bounded closed form on $L^2(\Omega)$ and that its generator on $L^2(\Omega)$ is the so-called Robin Laplacian on $\Omega$:
\begin{equation}\label{eq:49}
\begin{aligned}
&\mathcal{D}(\Delta_\beta):=\{u\in H^1(\Omega):\Delta u\in L^2(\Omega) \text{ and }\frac{1}{2} \partial_\mathbf{n} u +\beta \cdot u|_\Gamma=0\},\\
&\Delta_\beta u:=\frac{1}{2}\Delta u,\quad u\in \mathcal{D}(\Delta_\beta).
\end{aligned}
\end{equation}
In addition, the trace form $(\check{{\mathscr E}}^\kappa,\check{{\mathscr F}}^\kappa)$ is a lower bounded closed form on $L^2(\Gamma)$ whose generator is $-\mathscr{N}_\kappa$. One can verify that $\mathscr{N}_\kappa$ is identified with
\[
\begin{aligned}
&\mathcal{D}(\mathscr{N}_\beta):=\mathcal{D}(D),\\
&\mathscr{N}_\beta\varphi:=\frac{1}{2}\partial_\mathbf{n} u +\beta\cdot \varphi,\quad \varphi\in \mathcal{D}(\mathscr{N}_\beta),
\end{aligned}
\]
where $D$ is the classical DN operator and $u$ is the harmonic extension of $\varphi$ appearing in Definition~\ref{DEF31}. Note that $\mathscr{N}_\beta$ is the operator considered in \cite[\S4]{BV17}.
The probabilistic counterpart of $\mathscr{N}_\beta$ obtained in Corollary~\ref{COR49} can be described as follows. Take a constant $\alpha_0\geq \|\beta^-\|_{L^\infty(\Gamma)}$, where $\beta^-:=-(0\wedge \beta)$. Set $\tilde{\kappa}:=\kappa+\alpha_0\cdot \sigma\in \mathring{\mathbf{S}}$ and let $X^{\tilde{\kappa}}$ be the subprocess of $X$ killed by the PCAF corresponding to $\tilde{\kappa}$. Then $-\mathscr{N}_\beta-\alpha_0$ is the $L^2$-generator of the time-changed process $\check{X}^{\tilde \kappa}$ of $X^{\tilde{\kappa}}$ by the PCAF corresponding to $\sigma$.
Note incidentally that the $L^2$-semigroup associated with $\mathscr{N}_\beta$ is irreducible, due to the irreducibility of $(\check{{\mathscr E}},\check{{\mathscr F}})$. Mimicking Theorem~\ref{THM411}, one can show that $\mathscr{N}_\beta$ admits a strictly positive ground state, i.e. $\mathscr{N}_\beta h_\beta=\lambda^\beta_1 h_\beta$, where $\lambda^\beta_1=\min \sigma(\mathscr{N}_\beta)$ is a simple eigenvalue and $h_\beta\in L^2(\Gamma)$, $h_\beta>0$, $\sigma$-a.e. In addition, the analogue of Theorem~\ref{THM412} holds true for $\mathscr{N}_\beta$. In particular, $\lambda^\beta_1+\alpha_0\geq 0$ and $h_\beta\in \mathbf{E}^+_{-\lambda^\beta_1}\subset \mathbf{E}^+_{\alpha_0}$. Let $(\check{{\mathscr E}}^{\kappa, h_\beta}_{\alpha_0},\check{{\mathscr F}}^{\kappa,h_\beta})$ be the $h$-transform of $(\check{{\mathscr E}}^\kappa_{\alpha_0}, \check{{\mathscr F}}^\kappa)$ with $h=h_\beta$.
Then $(\check{{\mathscr E}}^{\kappa, h_\beta}_{\alpha_0},\check{{\mathscr F}}^{\kappa,h_\beta})$ is a quasi-regular and irreducible symmetric Dirichlet form on $L^2(\Gamma, h^2_\beta\cdot \sigma)$, whose associated Markov process is the $h$-transform of $\check{X}^{\tilde{\kappa}}$ with $h=h_\beta$, as studied in, e.g., \cite{Y98}.
\end{example}
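To make the formulas in Example~\ref{EXA65} concrete, consider the two-dimensional model case $\Omega=\mathbb{D}=\{x\in{\mathbb R}^2:|x|<1\}$ with a constant boundary weight $\beta\equiv\beta_0\geq 0$; this computation is included only as a consistency check and is not used elsewhere. The harmonic extension of $\varphi_n(\theta):=\mathrm{e}^{\mathrm{i}n\theta}$ is $u_n(r,\theta)=r^{|n|}\mathrm{e}^{\mathrm{i}n\theta}$, so $\partial_\mathbf{n}u_n|_\Gamma=|n|\varphi_n$ and hence
\[
\mathscr{N}_\beta\,\varphi_n=\Big(\frac{|n|}{2}+\beta_0\Big)\varphi_n,\qquad n\in\mathbb{Z}.
\]
In particular $\lambda^{\beta}_1=\beta_0$, the ground state $h_\beta$ is constant on $\Gamma$, and since $\beta^-=0$ one may take $\alpha_0=0$ in Corollary~\ref{COR49}.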
\subsection{Calder\'on's problem}\label{SEC43-2}
Finally, we give some remarks on the Calder\'on problem related to $({\mathscr E},{\mathscr F})$. This is an inverse problem asking whether one can determine the perturbation from trace information on a certain boundary; see Appendix~\ref{SEC6}.
The following result states that if the perturbation is supported on the boundary, then uniqueness holds for the Calder\'on problem.
\begin{theorem}
Take $\kappa_i\in \mathbf{S}-\mathbf{S}$ such that $|\kappa_i|(G)=0$ for $i=1,2$, and let $(\check{{\mathscr E}}^{\kappa_i},\check{{\mathscr F}}^{\kappa_i})$ be the trace form of $({\mathscr E}^{\kappa_i},{\mathscr F}^{\kappa_i})$. Then $(\check{{\mathscr E}}^{\kappa_1},\check{{\mathscr F}}^{\kappa_1})=(\check{{\mathscr E}}^{\kappa_2},\check{{\mathscr F}}^{\kappa_2})$ implies $\kappa_1=\kappa_2$.
\end{theorem}
\begin{proof}
Note that $\check{{\mathscr F}}^{\kappa_1}=\check{{\mathscr F}}^{\kappa_2}$ and Proposition~\ref{THM33}~(3) imply that $${\mathscr G}:=\check{{\mathscr F}}\cap L^2(F,|\kappa_1|)=\check{{\mathscr F}}\cap L^2(F,|\kappa_2|)=\check{{\mathscr F}}\cap L^2(F,|\kappa_1|+|\kappa_2|).$$
Take $\varphi, \phi\in {\mathscr G}$. It follows from $\check{{\mathscr E}}^{\kappa_1}(\varphi, \phi)=\check{{\mathscr E}}^{\kappa_2}(\varphi, \phi)$ that
\begin{equation}\label{eq:44}
\check{{\mathscr E}}(\varphi, \phi)+\int_F \varphi \phi\, d(\kappa^+_1+\kappa^-_2)=\check{{\mathscr E}}(\varphi, \phi)+\int_F \varphi \phi\, d(\kappa^-_1+\kappa^+_2).
\end{equation}
Set $\tilde{\kappa}_1:=\kappa_1^++\kappa^-_2$ and $\tilde{\kappa}_2:=\kappa^-_1+\kappa^+_2$. Then $\tilde{\kappa}_1,\tilde{\kappa}_2\in \mathbf{S}$. Denote by $(\check{{\mathscr E}}^{\tilde{\kappa}_i},\check{{\mathscr F}}^{\tilde{\kappa}_i})$ the perturbed Dirichlet form of $(\check{{\mathscr E}},\check{{\mathscr F}})$ by $\tilde{\kappa}_i$ for $i=1,2$. Clearly, ${\mathscr G}\subset \check{{\mathscr F}}^{\tilde{\kappa}_1} \cap \check{{\mathscr F}}^{\tilde{\kappa}_2}$.
We claim that ${\mathscr G}$ is $\check{{\mathscr E}}^{\tilde{\kappa}_i}_1$-dense in $\check{{\mathscr F}}^{\tilde{\kappa}_i}$. To see this, using \cite[Theorem~2.2.4]{FOT11} we can take an $\check{{\mathscr E}}$-nest $\{F_n:n\geq 1\}$ such that $|\kappa_1|(F_n)+|\kappa_2|(F_n)<\infty$ for every $n$. Then
\[
\tilde{{\mathscr G}}:=\bigcup_{n\geq 1} b\check{{\mathscr F}}_{F_n} \subset {\mathscr G},
\]
where $b\check{{\mathscr F}}_{F_n}$ is the family of all bounded functions in $$\check{{\mathscr F}}_{F_n}=\{u\in \check{{\mathscr F}}: u=0,\ \check{{\mathscr E}}\text{-q.e. on }F^c_n\}.$$
On account of \cite[Theorem~5.1.4]{CF12}, $\{F_n:n\geq 1\}$ is also an $\check{{\mathscr E}}^{\tilde{\kappa}_i}$-nest. Hence $\tilde{{\mathscr G}}$, and therefore ${\mathscr G}$, is $\check{{\mathscr E}}^{\tilde{\kappa}_i}_1$-dense in $\check{{\mathscr F}}^{\tilde{\kappa}_i}$. This assertion, together with \eqref{eq:44}, tells us that $(\check{{\mathscr E}}^{\tilde{\kappa}_1},\check{{\mathscr F}}^{\tilde{\kappa}_1})=(\check{{\mathscr E}}^{\tilde{\kappa}_2},\check{{\mathscr F}}^{\tilde{\kappa}_2})$. Finally, applying the Beurling--Deny formula to them, we get $\tilde{\kappa}_1=\tilde{\kappa}_2$. Therefore $\kappa_1=\kappa_2$. That completes the proof.
\end{proof}
\appendix
\section{Perturbations of Dirichlet forms}\label{APPB}
Let $({\mathscr E},{\mathscr F})$ be a regular Dirichlet form on $L^2(E,m)$ and $\kappa:=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$. Denote by $A^{\kappa^\pm}:=(A^{\kappa^\pm}_t)_{t\geq 0}$ the PCAF corresponding to $\kappa^\pm$ and set $A^\kappa:=A^{\kappa^+}-A^{\kappa^-}$.
\begin{definition}\label{DEFB1}
Let $\kappa=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$. The following quadratic form
\[
\begin{aligned}
{\mathscr F}^\kappa&:={\mathscr F}\cap L^2(E,|\kappa|), \\
{\mathscr E}^\kappa(u,v)&:={\mathscr E}(u,v)+\int_E uv\,d\kappa^+-\int_E uv\,d\kappa^-,\quad u,v\in {\mathscr F}^\kappa,
\end{aligned}
\]
is called \emph{the perturbation of $({\mathscr E},{\mathscr F})$ by $\kappa$}.
\end{definition}
When $\kappa\in \mathbf{S}$ (i.e. $\kappa^-=0$), $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is a quasi-regular Dirichlet form on $L^2(E,m)$ associated with the subprocess of $X$ obtained by killing by $A^\kappa$; see, e.g., \cite[\S5.1]{CF12}. It is then also called the \emph{perturbed Dirichlet form of $({\mathscr E},{\mathscr F})$ by $\kappa$}.
In general, if $\kappa^-\neq 0$, then $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is not necessarily closed. The following definition gives a well-known sufficient condition for the closedness of $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$.
\begin{definition}\label{DEFA2}
Let $\kappa=\kappa^+-\kappa^-\in \mathbf{S}-\mathbf{S}$. Then $\kappa^-$ is called \emph{${\mathscr E}^{\kappa^+}$-form bounded} (with ${\mathscr E}^{\kappa^+}$-form bound less than $1$) if ${\mathscr F}^{\kappa^+}\subset L^2(E,\kappa^-)$ and there exist constants $0\leq \delta<1$ and $C_\delta\geq 0$ such that
\begin{equation}\label{eq:A1}
\int_E u^2 d\kappa^-\leq \delta\cdot {\mathscr E}^{\kappa^+}(u,u)+C_\delta \int_E u^2dm,\quad \forall u\in {\mathscr F}^{\kappa^+}.
\end{equation}
The infimum of all constants $\delta$ for which \eqref{eq:A1} holds with some $C_\delta\geq 0$ is called the \emph{${\mathscr E}^{\kappa^+}$-bound} of $\kappa^-$.
\end{definition}
\begin{remark}
A well-known family of smooth measures that are ${\mathscr E}$-form bounded (hence also ${\mathscr E}^{\kappa^+}$-form bounded for any $\kappa^+\in \mathbf{S}$) is the so-called extended Kato class, which contains $\left\{V\cdot m: V\in pL^\infty(E,m)\right\}$; see, e.g., \cite{AM92, SV96}.
\end{remark}
Clearly, if $\kappa^-$ is ${\mathscr E}^{\kappa^+}$-form bounded, then $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is a lower bounded symmetric closed form, in the sense that $({\mathscr E}^\kappa_{\alpha_0}, {\mathscr F}^\kappa)$, with $\alpha_0=C_\delta$ as in \eqref{eq:A1}, is a non-negative symmetric closed form on $L^2(E,m)$. Moreover, ${\mathscr F}^\kappa={\mathscr F}\cap L^2(E,\kappa^+)$ and the $L^2$-semigroup of $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is the so-called Feynman--Kac semigroup $$P^\kappa_t f(x)=\mathbf{E}_x\left[e^{-A^\kappa_t}f(X_t) \right],$$ which satisfies $\|P^\kappa_t f\|_{L^2(E,m)}\leq e^{\alpha_0 t}\|f\|_{L^2(E,m)}$ and whose generator is upper semi-bounded and self-adjoint on $L^2(E,m)$.
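As the simplest illustration of Definition~\ref{DEFA2}, recorded here only for the reader's convenience, suppose $\kappa^-=\beta^-\cdot m$ with $\beta^-\in pL^\infty(E,m)$. Then ${\mathscr F}^{\kappa^+}\subset L^2(E,m)\subset L^2(E,\kappa^-)$ and
\[
\int_E u^2\,d\kappa^-\leq \|\beta^-\|_{L^\infty(E,m)}\int_E u^2\,dm,\qquad u\in {\mathscr F}^{\kappa^+},
\]
so \eqref{eq:A1} holds with $\delta=0$ and $C_\delta=\|\beta^-\|_{L^\infty(E,m)}$. In particular the ${\mathscr E}^{\kappa^+}$-bound of $\kappa^-$ is $0$, and $({\mathscr E}^\kappa_{\alpha_0},{\mathscr F}^\kappa)$ is non-negative for $\alpha_0=\|\beta^-\|_{L^\infty(E,m)}$.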
\section{Direct sum decomposition for perturbations of Dirichlet forms}\label{APP2}
Adopt the same notation as in Appendix~\ref{APPB}. Assume that $\kappa^-$ is ${\mathscr E}^{\kappa^+}$-form bounded, so that $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$ is lower bounded and closed on $L^2(E,m)$.
In this appendix we establish the analogue of \eqref{eq:Fe} for $({\mathscr E}^\kappa,{\mathscr F}^\kappa)$.
Take a quasi-closed set $F\subset E$ with respect to ${\mathscr E}$ and set $G=E\setminus F$. Set further
\[
{\mathscr F}^\kappa_{\mathrm{e}}:={\mathscr F}_{\mathrm{e}}\cap L^2(E,|\kappa|),\quad {\mathscr F}^{\kappa,G}_{\mathrm{e}}:=\{u\in {\mathscr F}^\kappa_{\mathrm{e}}: u=0,\ {\mathscr E}\text{-q.e. on }F\}
\]
and
\[
\mathcal{H}^\kappa_F:=\{u\in {\mathscr F}^\kappa_{\mathrm{e}}: {\mathscr E}^\kappa(u,v)=0,\ \forall v\in {\mathscr F}^{\kappa,G}_{\mathrm{e}}\}.
\]
Write ${\mathscr F}^{\kappa,G}:={\mathscr F}^{\kappa,G}_{\mathrm{e}}\cap L^2(G,m|_G)$ and ${\mathscr E}^{\kappa,G}(u,v):={\mathscr E}^\kappa(u,v)$ for $u,v\in {\mathscr F}^{\kappa, G}$. Note that $({\mathscr E}^{\kappa,G}, {\mathscr F}^{\kappa,G})$ is lower bounded and closed on $L^2(G,m|_G)$. Denote the $L^2$-generator of $({\mathscr E}^{\kappa,G}, {\mathscr F}^{\kappa,G})$ by $\mathscr{L}_{\kappa,G}$ and its spectrum by $\sigma(\mathscr{L}_{\kappa,G})$.
\begin{lemma}\label{LMB1}
Assume that $0\notin \sigma(\mathscr{L}_{\kappa, G})$. Then it holds that
\begin{equation}
{\mathscr F}^\kappa={\mathscr F}^{\kappa,G}\oplus \tilde{\mathcal{H}}^\kappa_F,
\end{equation}
where $\tilde{\mathcal{H}}^\kappa_F:=\{u\in {\mathscr F}^\kappa: {\mathscr E}^\kappa(u,v)=0,\ \forall v\in {\mathscr F}^{\kappa,G}\}$.
In particular, if ${\mathscr F}^{\kappa,G}_{\mathrm{e}}\subset L^2(G,m|_G)$, then
\begin{equation}\label{eq:31-2}
{\mathscr F}^\kappa_{\mathrm{e}}={\mathscr F}^{\kappa,G}_{\mathrm{e}}\oplus \mathcal{H}^\kappa_F.
\end{equation}
\end{lemma}
\begin{proof}
Note that ${\mathscr F}^{\kappa,G}$ is a Hilbert space under the inner product ${\mathscr E}^{|\kappa|}_1(u,v):={\mathscr E}(u,v)+\int_E uv\, d(m+|\kappa|)$.
Denote by $\left( {\mathscr F}^{\kappa,G}\right)'$ the family of all continuous linear functionals on ${\mathscr F}^{\kappa,G}$. Consider the operator $${\mathscr A}:\mathcal{D}({\mathscr A})= {\mathscr F}^{\kappa,G}\left(\subset \left( {\mathscr F}^{\kappa,G}\right)'\right)\rightarrow \left( {\mathscr F}^{\kappa,G}\right)'$$ defined as follows: for any $u\in {\mathscr F}^{\kappa,G}$,
\[
\langle {\mathscr A} u, v\rangle:={\mathscr E}^\kappa(u,v),\quad \forall v\in {\mathscr F}^{\kappa,G}.
\]
It is easy to verify that $L^2(G,m|_G)$ is continuously embedded in $\left( {\mathscr F}^{\kappa,G}\right)'$, i.e. for any $f\in L^2(G,m|_G)$, $I_f: g\mapsto \int_G fg\,dm$ defines a continuous linear functional on ${\mathscr F}^{\kappa,G}$ and $\|I_f\|_{\left( {\mathscr F}^{\kappa,G}\right)'}\lesssim \|f\|_{L^2(G,m|_G)}$. In addition, the part ${\mathscr A}_0$ of ${\mathscr A}$ in $L^2(G,m|_G)$, i.e. $\mathcal{D}({\mathscr A}_0):=\{u\in \mathcal{D}({\mathscr A})\cap L^2(G,m|_G): {\mathscr A} u\in L^2(G,m|_G)\}$, ${\mathscr A}_0 u:={\mathscr A} u$, is identified with $-\mathscr{L}_{\kappa, G}$. Applying \cite[Proposition~3.10.3]{AB01}, we get that $\sigma({\mathscr A})=\sigma(-\mathscr{L}_{\kappa,G})$. Since $0\notin \sigma(-\mathscr{L}_{\kappa,G})=\sigma({\mathscr A})$, it follows that ${\mathscr A}$ is invertible.
Take $u\in {\mathscr F}^\kappa$. Set $F_u(v):={\mathscr E}^\kappa(u,v)$ for any $v\in {\mathscr F}^{\kappa,G}$. It is easy to verify that $F_u\in \left( {\mathscr F}^{\kappa,G}\right)'$. Since ${\mathscr A}$ is invertible, there exists $u_1\in {\mathscr F}^{\kappa,G}$ such that $F_u={\mathscr E}^\kappa(u_1,\cdot)$. Let $u_2:=u-u_1\in {\mathscr F}^\kappa$. Then ${\mathscr E}^\kappa(u_2,v)=F_u(v)-{\mathscr E}^\kappa(u_1,v)=0$ for any $v\in {\mathscr F}^{\kappa,G}$. This implies $u_2\in \tilde{\mathcal{H}}^\kappa_F$. In addition, $0\notin \sigma(\mathscr{L}_{\kappa, G})$ also leads to ${\mathscr F}^{\kappa,G}\cap \tilde{\mathcal{H}}^\kappa_F=\{0\}$, because any $u\in {\mathscr F}^{\kappa,G}\cap \tilde{\mathcal{H}}^\kappa_F$ satisfies $\mathscr{L}_{\kappa,G}u=0$.
If ${\mathscr F}^{\kappa,G}_{\mathrm{e}}\subset L^2(G,m|_G)$, then ${\mathscr F}^{\kappa,G}_{\mathrm{e}}={\mathscr F}^{\kappa,G}$. For $u\in {\mathscr F}^\kappa_{\mathrm{e}}$, we still have that $F_u={\mathscr E}^\kappa(u,\cdot)\in \left( {\mathscr F}^{\kappa,G}\right)'$. Hence there exists $u_1\in {\mathscr F}^{\kappa,G}={\mathscr F}^{\kappa,G}_{\mathrm{e}}$ such that $F_u={\mathscr E}^\kappa(u_1,\cdot)$. It follows that $u_2:=u-u_1\in {\mathscr F}^\kappa_{\mathrm{e}}$ and ${\mathscr E}^\kappa(u_2,v)=0$ for any $v\in {\mathscr F}^{\kappa,G}={\mathscr F}^{\kappa,G}_{\mathrm{e}}$. In particular, $u_2\in \mathcal{H}^\kappa_F$. On the other hand,
\[
{\mathscr F}^{\kappa,G}_{\mathrm{e}}\cap \mathcal{H}^\kappa_F={\mathscr F}^{\kappa,G}\cap \mathcal{H}^\kappa_F={\mathscr F}^{\kappa,G}\cap \tilde{\mathcal{H}}^\kappa_F=\{0\}.
\]
That completes the proof.
\end{proof}
\section{Irreducibility of lower bounded closed forms}
Let $({\mathscr E},{\mathscr F})$ be a lower bounded symmetric closed form on $L^2(E,m)$, i.e. there exists a constant $\alpha_0\geq 0$ such that $({\mathscr E}_{\alpha_0},{\mathscr F})$ is a non-negative symmetric closed form on $L^2(E,m)$. Let $(T_t)_{t\geq 0}$ be the $L^2$-semigroup associated with $({\mathscr E},{\mathscr F})$.
\begin{definition}\label{DEFC1}
\begin{itemize}
\item[(1)] The semigroup $(T_t)_{t\geq 0}$ is \emph{positive} if $T_t f\in pL^2(E,m)$ for all $t\geq 0$ and $f\in pL^2(E,m)$. It is called \emph{irreducible} if for every $t>0$ and every non-zero function $f\in pL^2(E,m)$, we have $T_t f(x)>0$ for $m$-a.e. $x\in E$.
\item[(2)] A subset $A\subset E$ is called \emph{invariant} if $1_{A^c}\cdot T_t(f1_A)=0$, $m$-a.e., for any $t\geq 0$ and $f\in L^2(E,m)$.
\end{itemize}
\end{definition}
There are several conditions on $({\mathscr E},{\mathscr F})$ equivalent to the positivity of $(T_t)_{t\geq 0}$. For example, it is equivalent to
\begin{equation}\label{eq:C1}
f\in {\mathscr F} \Longrightarrow f^+, f^-\in {\mathscr F} \text{ and }{\mathscr E}(f^+,f^-)\leq 0,
\end{equation}
where $f^+:=f\vee 0$ and $f^-:=f^+-f$; see, e.g., \cite{MR95} for other equivalent conditions.
\begin{theorem}\label{THMC2}
Assume that $(T_t)_{t\geq 0}$ is positive. Then the following are equivalent:
\begin{itemize}
\item[(1)] $A\subset E$ is invariant;
\item[(2)] $f 1_A\in {\mathscr F}$ and ${\mathscr E}( f1_A, f1_{A^c})\geq 0$ for all $f\in {\mathscr F}$;
\item[(3)] $f 1_A\in {\mathscr F}$ and ${\mathscr E}(f1_A, f1_{A^c})=0$ for all $f\in {\mathscr F}$;
\item[(4)] $f 1_A\in {\mathscr F}$ and ${\mathscr E}(f,f)={\mathscr E}(f1_A, f1_A)+{\mathscr E}(f1_{A^c}, f1_{A^c})$ for all $f\in {\mathscr F}$.
\end{itemize}
Furthermore, $(T_t)_{t\geq 0}$ is irreducible if and only if $m(A)=0$ or $m(A^c)=0$ for every invariant set $A\subset E$.
\end{theorem}
\begin{proof}
The equivalence between (1) and (2) and the characterization of the irreducibility of $(T_t)_{t\geq 0}$ are consequences of \cite[Theorems~2.9 and 2.10]{O05}. Clearly, (3) and (4) are equivalent, and (3) implies (2). Now suppose that (2) holds. Then for $f\in{\mathscr F}$, $u:=f1_A-f1_{A^c}\in {\mathscr F}$ and ${\mathscr E}(u1_A, u 1_{A^c})\geq 0$, which amounts to
\[
{\mathscr E}(f1_A, -f1_{A^c})\geq 0.
\]
Therefore ${\mathscr E}(f1_A, f1_{A^c})=0$ and (3) holds true. That completes the proof.
\end{proof}
In view of this theorem, when $({\mathscr E},{\mathscr F})$ is a Dirichlet form, the irreducibility of $(T_t)_{t\geq 0}$ coincides with that of $({\mathscr E},{\mathscr F})$; see, e.g., \cite[\S1.6]{FOT11}.
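Two simple examples may help to fix ideas; they are included only for illustration. First, if $E=E_1\cup E_2$ with $E_1\cap E_2=\emptyset$, $m(E_1),m(E_2)>0$, and $({\mathscr E},{\mathscr F})$ is the direct sum of closed forms on $L^2(E_1,m|_{E_1})$ and $L^2(E_2,m|_{E_2})$, then condition (4) of Theorem~\ref{THMC2} holds for $A=E_1$, so $E_1$ is invariant and $(T_t)_{t\geq 0}$ is not irreducible. Second, for ${\mathscr E}=\frac{1}{2}\mathbf{D}$ on ${\mathscr F}=H^1(\Omega)$ with $\Omega$ a bounded connected domain, any invariant set $A$ satisfies $1_A\in H^1(\Omega)$ (take $f\equiv 1$ in (2)); since a $\{0,1\}$-valued $H^1$ function has vanishing gradient a.e.\ and $\Omega$ is connected, $1_A$ is a.e.\ constant, whence $m(A)=0$ or $m(A^c)=0$ and the form is irreducible.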
\section{Quasi-regular positivity preserving (symmetric) coercive forms}\label{APPD}
In this appendix we review the basic concepts concerning quasi-regular positivity preserving (symmetric) coercive forms introduced in \cite{MR95}. Note that only symmetric forms are considered in this paper.
Let $({\mathscr E},{\mathscr F})$ be a non-negative symmetric closed form on $L^2(E,m)$ whose associated $L^2$-semigroup $(T_t)_{t\geq 0}$ is positive in the sense of Definition~\ref{DEFC1}. This positivity preserving property amounts to \eqref{eq:C1}. In this case $({\mathscr E},{\mathscr F})$ is also called a \emph{positivity preserving (symmetric) coercive form}. For a closed set $F\subset E$, we set
\[
{\mathscr F}_F:=\{u\in {\mathscr F}: u=0,\ m\text{-a.e. on }E\setminus F\}.
\]
An increasing sequence $\{F_n:n\geq 1\}$ of closed subsets of $E$ is called an \emph{${\mathscr E}$-nest} if $\cup_{n\geq 1}{\mathscr F}_{F_n}$ is ${\mathscr E}_1$-dense in ${\mathscr F}$. A subset $N\subset E$ is called \emph{${\mathscr E}$-polar} if $N\subset \cap_{n\geq 1}(E\setminus F_n)$ for some ${\mathscr E}$-nest $\{F_n:n\geq 1\}$. We say that a property of points in $E$ holds ${\mathscr E}$-quasi-everywhere (abbreviated ${\mathscr E}$-q.e.) if the property holds outside some ${\mathscr E}$-polar set. Given an ${\mathscr E}$-nest $\{F_n:n\geq 1\}$, define
\[
C(\{F_n\}):=\{f:A\rightarrow {\mathbb R}: \cup_{n\geq 1}F_n\subset A\subset E,\ f|_{F_n}\text{ is continuous for each }n\}.
\]
An ${\mathscr E}$-q.e. defined function $u$ on $E$ is called \emph{${\mathscr E}$-quasi-continuous} if there exists an ${\mathscr E}$-nest $\{F_n:n\geq 1\}$ such that $u\in C(\{F_n\})$. For $\alpha\geq 0$, $u\in L^2(E,m)$ is called \emph{$\alpha$-excessive} if $u\geq 0$, $m$-a.e., and $\mathrm{e}^{-\alpha t}T_tu\leq u$, $m$-a.e., for all $t>0$.
\begin{definition}\label{DEFD1}
A positivity preserving (symmetric) coercive form $({\mathscr E},{\mathscr F})$ is called \emph{quasi-regular} if:
\begin{itemize}
\item[(i)] There exists an ${\mathscr E}$-nest $\{K_n:n\geq 1\}$ consisting of compact sets.
\item[(ii)] There exists an ${\mathscr E}_1$-dense subset of ${\mathscr F}$ whose elements have ${\mathscr E}$-quasi-continuous $m$-versions.
\item[(iii)] There exist $u_n\in{\mathscr F}$, $n\in \mathbb{N}$, having ${\mathscr E}$-quasi-continuous $m$-versions $\tilde{u}_n$, $n\in \mathbb{N}$, and an ${\mathscr E}$-polar set $N\subset E$ such that $\{\tilde{u}_n:n\in\mathbb{N}\}$ separates the points of $E\setminus N$.
\item[(iv)] There exists an ${\mathscr E}$-q.e. strictly positive ${\mathscr E}$-quasi-continuous $m$-version $h$ of an $\alpha$-excessive function in ${\mathscr F}$ for some $\alpha\in (0,\infty)$.
\end{itemize}
\end{definition}
Let $h\in L^2(E,m)$ with $h>0$, $m$-a.e. Define
\[
\begin{aligned}
&{\mathscr F}^h:=\{u\in L^2(E,h^2\cdot m): uh\in {\mathscr F}\}, \\
&{\mathscr E}^h(u,v):={\mathscr E}(uh,vh),\quad u,v\in {\mathscr F}^h,
\end{aligned}
\]
called the \emph{$h$-transform of $({\mathscr E},{\mathscr F})$}.
The following theorem is taken from \cite[Theorem~4.14]{MR95}. The Markov process associated with the $h$-transform $({\mathscr E}^h,{\mathscr F}^h)$ in this theorem is called \emph{$h$-associated with $({\mathscr E},{\mathscr F})$}.
\begin{theorem}\label{THMD2}
A positivity preserving (symmetric) coercive form $({\mathscr E},{\mathscr F})$ on $L^2(E,m)$ is quasi-regular if and only if for one (hence every) $m$-a.e. strictly positive function $h\in {\mathscr F}$ which is $\alpha$-excessive for some $\alpha>0$, there exists an ${\mathscr E}$-q.e. strictly positive ${\mathscr E}$-quasi-continuous $m$-version $\tilde{h}$, and the corresponding $h$-transform $({\mathscr E}^h,{\mathscr F}^h)$ is a quasi-regular symmetric Dirichlet form.
\end{theorem>
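An elementary example of the $h$-transform, independent of the rest of the paper, is the following. Take $E=(0,\pi)$ with Lebesgue measure $m$, ${\mathscr F}=H^1_0(0,\pi)$ and ${\mathscr E}(u,u)=\frac{1}{2}\int_0^\pi u'(x)^2dx$, and let $h(x)=\sin x$, so that $-\frac{1}{2}h''=\frac{1}{2}h$, $T_th=\mathrm{e}^{-t/2}h$ and $h$ is $\alpha$-excessive for every $\alpha\geq 0$. Integrating by parts one finds, at least for smooth $u$,
\[
{\mathscr E}^h(u,u)={\mathscr E}(uh,uh)=\frac{1}{2}\int_0^\pi u'(x)^2\sin^2x\,dx+\frac{1}{2}\int_0^\pi u(x)^2\sin^2x\,dx,
\]
so that ${\mathscr E}^h(u,u)-\frac{1}{2}\int_0^\pi u^2h^2\,dm$ is the Dirichlet form on $L^2((0,\pi),\sin^2x\,dx)$ of the conservative diffusion with generator $\frac{1}{2}\frac{d^2}{dx^2}+\cot x\,\frac{d}{dx}$. This is the prototype of the ground-state transforms appearing in Theorem~\ref{THM55}.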
We give a simple but useful sufficient condition for the quasi-regularity of $({\mathscr E},{\mathscr F})$.
\begin{lemma}\label{LMD3}
Let $({\mathscr E}',{\mathscr F}')$ be a quasi-regular symmetric Dirichlet form on $L^2(E,m)$. Assume that
\begin{equation}\label{eq:D1}
{\mathscr F}'={\mathscr F},\quad {\mathscr E}_1(u,u)\leq C {\mathscr E}'_1(u,u),\;\forall u\in {\mathscr F},
\end{equation}
for some constant $C>0$.
Then $({\mathscr E},{\mathscr F})$ is quasi-regular.
\end{lemma}
\begin{proof}
The condition \eqref{eq:D1} implies that an ${\mathscr E}'$-nest is also an ${\mathscr E}$-nest. Hence an ${\mathscr E}'$-polar set is also ${\mathscr E}$-polar and an ${\mathscr E}'$-quasi-continuous function is also ${\mathscr E}$-quasi-continuous. On account of the quasi-regularity of $({\mathscr E}',{\mathscr F}')$, one can easily verify that Definition~\ref{DEFD1}(i-iii) and \cite[(4.8)]{MR95} are satisfied by $({\mathscr E},{\mathscr F})$. Take $\varphi\in L^2(E,m)$ with $\varphi>0$, $m$-a.e., and set $u:=\int_0^\infty \mathrm{e}^{-\alpha t}T_t\varphi\, dt$. Then $u$ is $\alpha$-excessive and $u>0$, $m$-a.e., due to \cite[Lemma~3.6]{MR95}. Note that $u\in {\mathscr F}={\mathscr F}'$. Thus $u$ admits an ${\mathscr E}'$-quasi-continuous $m$-version $\tilde{u}$, which is also ${\mathscr E}$-quasi-continuous. Clearly $u\leq \tilde{u}$, $m$-a.e. Consequently, \cite[Assumption~4.6]{MR95} is satisfied by $({\mathscr E},{\mathscr F})$. In view of \cite[Proposition~4.11]{MR95}, we can eventually conclude the quasi-regularity of $({\mathscr E},{\mathscr F})$.
\end{proof}
\section{Classical Calder\'on's problem}\label{SEC6}
In this appendix we review some results on the classical Calder\'on problem (see \cite{U12, SU87, C80}). Let $\Omega$ be a bounded domain with smooth boundary and consider the Dirichlet form $({\mathscr E},{\mathscr F})=(\frac{1}{2}\mathbf{D}, H^1(\Omega))$ on $L^2(\bar{\Omega})$. Set $G=\Omega$, $F=\Gamma:=\partial \Omega$ and $\mu=\sigma$. Take $\kappa_i(dx)=V_i(x)dx$ with $V_i\in L^\infty(\Omega)$ for $i=1,2$.
We do not assume \eqref{eq:42-3}, and we let
\begin{equation}\label{eq:E0}
\begin{aligned}
C_{V_i}:=&\big\{(\varphi, f)\in L^2(\Gamma)\times L^2(\Gamma): \exists\,u\in \mathcal{H}^{\kappa_i}_\Gamma\text{ such that }u|_\Gamma=\varphi, \\
&\qquad\qquad {\mathscr E}^{\kappa_i}(u,v)=\int_{\Gamma} f v|_\Gamma\, d\sigma \text{ for any }v\in {\mathscr F}^{\kappa_i}_{\mathrm{e}}\text{ with }v|_\Gamma\in L^2(\Gamma)\big\},
\end{aligned}\end{equation}
called the \emph{Cauchy data} of $({\mathscr E}^{\kappa_i},{\mathscr F}^{\kappa_i})$ on $L^2(\Gamma)$. Note that if \eqref{eq:42-3} holds true, then $C_{V_i}$ is the graph of $\mathscr{N}_{\kappa_i}$. The uniqueness question for the classical Calder\'on problem asks whether $C_{V_1}=C_{V_2}$ implies $V_1=V_2$.
\begin{lemma}\label{LM53}
For $i=1,2$, it holds that
\begin{equation}\label{eq:54}
\begin{aligned}
C_{V_i}=\left\{(u|_\Gamma, \frac{1}{2}\partial_\mathbf{n} u): u\in H^{3/2}(\Omega),\ -\frac{1}{2}\Delta u +V_i u=0\right\}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Denote the family on the right-hand side of \eqref{eq:54} by ${\mathscr G}$. To prove $C_{V_i}\subset {\mathscr G}$,
take $(\varphi,f)\in C_{V_i}$ and let $u$ be the function appearing in \eqref{eq:E0}, i.e. $u\in H^1(\Omega)$, $u|_\Gamma=\varphi$, $-\frac{1}{2}\Delta u + V_i u=0$ weakly in $\Omega$ and
\begin{equation}\label{eq:53}
\frac{1}{2}\int_\Omega \nabla u\cdot \nabla v\, dx+\int_\Omega u(x)v(x)V_i(x)dx=\int_\Gamma f v|_\Gamma\, d\sigma,\quad \forall v\in H^1(\Omega).
\end{equation}
It follows that $\Delta u=2V_i u\in L^2(\Omega)$ and that for any $v\in H^1(\Omega)$,
\begin{equation}\label{eq:52}
\int_\Omega \nabla u\cdot \nabla v\, dx + \int_\Omega \Delta u\, v\,dx=2\int_\Gamma f v|_\Gamma\, d\sigma.
\end{equation}
Hence $u$ has a weak normal derivative in $L^2(\Gamma)$ and $f=\frac{1}{2}\partial_\mathbf{n} u$. On account of \eqref{eq:35}, we get $u\in H^{3/2}(\Omega)$. Therefore $C_{V_i}\subset {\mathscr G}$ is obtained. Conversely, take $u\in H^{3/2}(\Omega)$ such that $-\frac{1}{2}\Delta u +V_i u=0$ weakly in $\Omega$. Then $\Delta u=2V_i u\in L^2(\Omega)$, and \eqref{eq:35} implies that $u$ has a weak normal derivative $\partial_\mathbf{n} u$ in $L^2(\Gamma)$, i.e. \eqref{eq:52} holds for $f:=\frac{1}{2}\partial_\mathbf{n} u$. Since $-\frac{1}{2}\Delta u +V_i u=0$ weakly in $\Omega$, it follows that \eqref{eq:53} is true. Thus $(\varphi,f)\in C_{V_i}$ with $\varphi=u|_\Gamma \in L^2(\Gamma)$ and $f=\frac{1}{2}\partial_\mathbf{n} u$. Eventually we can conclude \eqref{eq:54}.
\end{proof}
The classical Calder\'on problem in dimensions $d\geq 3$ was solved in the seminal paper \cite{SU87}; see also \cite{U12}. The result reads as follows.
\begin{theorem}
Consider a bounded domain $\Omega\subset {\mathbb R}^d$, $d\geq 3$, with smooth boundary. Let $V_i\in L^\infty(\Omega)$ for $i=1,2$. If $C_{V_1}=C_{V_2}$, then $V_1=V_2$.
\end{theorem}
\begin{proof}
Take $(\varphi,f)\in C_{V_1}=C_{V_2}$.
For $i=1,2$, let $u_i\in H^{3/2}(\Omega)$ be such that $u_i|_\Gamma=\varphi$ and $\frac{1}{2}\Delta u_i-V_i u_i=0$, as in Lemma~\ref{LM53}. We assert that
\begin{equation}\label{eq:51}
\int_\Omega (V_1-V_2)u_1u_2\,dx=0.
\end{equation}
In fact, it follows from \eqref{eq:E0} that for any $v\in H^1(\Omega)$,
\[
\frac{1}{2}\int_\Omega \nabla u_i\cdot \nabla v\, dx+\int_\Omega u_i(x) v(x) V_i(x)dx=\int_\Gamma f\mathop{\mathrm{Tr}}(v)\,d\sigma.
\]
Taking $v=u_2$ for $i=1$ and $v=u_1$ for $i=2$, we get
\[
\frac{1}{2}\int_\Omega \nabla u_1\cdot \nabla u_2\, dx+\int_\Omega u_1(x) u_2(x) V_1(x)dx=\int_\Gamma f\varphi\, d\sigma
\]
and
\[
\frac{1}{2}\int_\Omega \nabla u_2\cdot \nabla u_1\, dx+\int_\Omega u_2(x) u_1(x) V_2(x)dx=\int_\Gamma f\varphi\, d\sigma.
\]
Hence \eqref{eq:51} holds.
The conclusion $V_1=V_2$ can now be obtained as follows. Note that the restrictions to $\Omega$ of the functions appearing in \cite[(18)]{U12} (with $q_i=V_i$) belong to $H^2(\Omega)\subset H^{3/2}(\Omega)$. In particular, by means of \eqref{eq:54}, \eqref{eq:51} holds for $u_1,u_2$ of the form \cite[(18)]{U12}. Hence $V_1=V_2$ follows from the same argument as in the proof of \cite[Theorem~2.5]{U12}.
\end{proof}
Regarding the two-dimensional case, Bukhgeim \cite{B08} obtains $V_1=V_2$ under a condition slightly different from $C_{V_1}=C_{V_2}$; see \cite[Theorem~2.1]{B08}.
\begin{theorem}
Consider $d=2$ and $\Omega=\mathbb{D}=\{x: |x|<1\}$. Let $V_i\in L^\infty(\Omega)$ for $i=1,2$. For $i=1,2$ and $2<p\leq \infty$, set
\[
\tilde{C}_{V_i, p}=\left\{(u|_\Gamma, \frac{1}{2}\partial_\mathbf{n} u): u\in W^{2,p}(\Omega),\ -\frac{1}{2}\Delta u +V_i u=0\right\}.
\]
If $\tilde C_{V_1, p}=\tilde C_{V_2,p}$ for some $2<p\leq \infty$, then $V_1=V_2$.
\end{theorem}
\begin{remark}
Note that $W^{2,p}(\Omega)\subset H^{3/2}(\Omega)$ for $p>2$. Hence $\tilde{C}_{V_i,p}\subset C_{V_i}$.
\end{remark}
\end{document}
\begin{document}
\title{Stable Directions for Degenerate Excited States of Nonlinear
Schr\"odinger Equations}
\author{Stephen Gustafson, \quad Tuoc Van Phan}
\maketitle
\abstract{
We consider nonlinear Schr\"{o}dinger equations,
$i\partial_t \psi = H_0 \psi + \lambda |\psi|^2\psi$ in
$\mathbb{R}^3 \times [0,\infty)$, where $H_0 = -\Delta + V$, $\lambda=\pm 1$,
the potential $V$ is radial and spatially decaying, and the linear
Hamiltonian $H_0$ has only two eigenvalues $e_0 < e_1 <0$, where $e_0$
is simple, and $e_1$ has multiplicity three. We show that there exist
two branches of small ``nonlinear excited state'' standing-wave
solutions, and in both the resonant ($e_0 < 2e_1$) and non-resonant
($e_0 > 2e_1$) cases, we construct certain finite-codimension
regions of the phase space consisting of solutions converging
to these excited states at time infinity (``stable directions'').
}\ \\
{\bf Key words.} Asymptotic dynamics, Nonlinear excited states, Schr\"{o}dinger equations.\\
{\bf AMS subject classifications.} 35Q40; 35Q55.
\section{Introduction}
We consider the nonlinear Schr\"{o}dinger equation
\begin{equation} \label{NSE}
i\partial_t \psi = H_0 \psi + \lambda |\psi|^2 \psi
\end{equation}
which arises in several physical settings, including
many-body quantum systems, and optics. Here
the wave function $\psi = \psi(x,t)$ is complex-valued,
\[
\psi : \mathbb{R}^3 \times [0,\infty) \to \mathbb{C},
\]
and the linear Hamiltonian is
\[
H_0 := -\Delta + V(x),
\]
where $V : \mathbb{R}^3 \to \mathbb{R}$ is a smooth, spatially decaying potential
function. We take $\lambda = \pm 1$.
Equation \eqref{NSE} can be expressed as a Hamiltonian system
\[
i\partial_t \psi = \frac{ \partial \mathcal{E}[\psi,\bar{\psi}]}
{\partial \bar{\psi}},
\]
where the Hamiltonian energy is defined as
\[
\mathcal{E}[\psi] = \mathcal{E}[\psi, \bar{\psi}] =
\int \left (\frac{1}{2} \nabla \psi \cdot \nabla \bar{\psi}
+ \frac{1}{2} V(x) \psi \bar{\psi} +\frac{\lambda}{4}\psi^2
\bar{\psi}^2 \right) dx.
\]
Since the energy is invariant under the time-translation
$t \mapsto t + t_0$ ($t_0 \in \mathbb{R}$) and the phase rotation
\[
\psi \mapsto e^{i r} \psi,
\quad \quad r \in \mathbb{R},
\]
the energy and the particle number
\[
\mathcal{N}[\psi] := \int |\psi(x)|^2dx
\]
are constant in time for any smooth solution
$\psi(t) = \psi(.,t) \in H^1(\mathbb{R}^3)$ of \eqref{NSE}:
\[
\mathcal{E}[\psi(t)] = \mathcal{E}[\psi(0)], \quad
\mathcal{N}[\psi(t)] = \mathcal{N}[\psi(0)].
\]
The global well-posedness of solutions with $\| \psi(0) \|_{H^1}$ small
can be established easily by using these conserved quantities, and a
continuity argument.
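For the reader's convenience we sketch the a priori bound behind this continuity argument; the constants below are not optimized. Since $V$ is bounded, the conservation laws and the Gagliardo--Nirenberg inequality $\int|\psi|^4dx\lesssim \|\nabla\psi\|_{L^2}^{3}\|\psi\|_{L^2}$ in $\mathbb{R}^3$ give
\[
\|\nabla\psi(t)\|_{L^2}^2 \leq 2\mathcal{E}[\psi(0)]+\|V\|_{L^\infty}\,\mathcal{N}[\psi(0)]
+C\,\mathcal{N}[\psi(0)]^{1/2}\,\|\nabla\psi(t)\|_{L^2}^{3}
\]
(for $\lambda=+1$ the last term can simply be dropped). If $\|\psi(0)\|_{H^1}$ is small, the cubic term can be absorbed on any time interval on which $\|\nabla\psi(t)\|_{L^2}$ remains small, and a continuity argument yields a uniform-in-time $H^1$ bound; global well-posedness then follows from the local theory.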
A very important feature of the equation \eqref{NSE} is
that it can have localized ``standing wave'' or ``nonlinear bound state''
solutions. These are solutions of the form $\psi(x,t) = e^{-iEt}Q(x)$,
where the profile function $Q$ must therefore solve the equation
\begin{equation} \label{Q.eqn}
H_0 Q + \lambda |Q|^2 Q = EQ.
\end{equation}
The solutions of \eqref{Q.eqn} are critical points of the Hamiltonian
$\mathcal{E}$ subject to the constraint that the $L^2$-norm of $Q$ is fixed.
We may obtain {\it small} solutions of \eqref{Q.eqn} as bifurcations along
the discrete eigenvalues of $H_0$. In this paper, we assume that $H_0$
has two discrete eigenvalues $e_0 < e_1 < 0$. The small solutions with $E$ close to
$e_0$ are called \textit{nonlinear ground states}, while those with $E$ close
to $e_1$ are naturally called \textit{nonlinear excited states}.
Along a simple eigenvalue, the bifurcation problem for finding the
corresponding nonlinear solutions (e.g.\ nonlinear ground states) is quite
standard, regardless of the multiplicities of the other eigenvalues, see
\cite{SW1, TY1, GNT}. Along a degenerate eigenvalue (such as $e_1$
in our setting), however, the problem becomes
more delicate due to the interaction between the different directions in the
$e_1$-eigenspace. Hence our first goal in this paper is to find all of
the nonlinear excited states when $e_1$ is degenerate.
We consider here a {\it radial} potential $V$, and we assume that
$e_1$ has multiplicity three, corresponding to the first non-zero
angular momentum spherical harmonics:
\noindent
{\bf Assumption A1:}
\textit{The potential $V(x)$ is spherically symmetric,
and the linear Hamiltonian $H_0 = -\Delta + V$ has
a discrete eigenvalue $e_1 < 0$ of multiplicity three,
with eigenspace
\[
\mathbf{V} := Null(H_0 - e_1) = \mbox{ span }
\{\phi_1, \phi_2, \phi_3\}, \quad\quad
\phi_j(x) = x_j \varphi(|x|)
\]
for a real function $\varphi$.}
\noindent
Notice that such a multiplicity-three eigenvalue is a
generic possibility in three dimensions with a radial
potential.
We will denote the orthogonal projection onto this
excited-state eigenspace by
\[
\mathbf{P}_1 = \mathbf{P}_1(H_0) := \mbox{ orthogonal projection onto }
\mathbf{V}.
\]
Since the potential $V$ is radial, equation~\eqref{Q.eqn} is
invariant not only under phase rotation, time translation, and
complex conjugation, but also under all spatial rotations
and reflections:
\[
Q(x) \mapsto Q(\Gamma x), \quad \quad
\Gamma \in O(3).
\]
This rich structure plays a central role in
understanding the existence of nonlinear excited states, as well
as the nearby dynamics.
In Section~\ref{Exi}, we shall see that there is a \textit{symmetry
breaking bifurcation phenomenon} in the bifurcation equations of the
nonlinear excited states $Q$. Here, we shall briefly describe the
phenomenon (for the systematic theory, see~\cite{GSS, GS, SA}).
By integrating \eqref{Q.eqn} against a linear excited state
$\phi_j$, it follows that there are non-trivial solutions close
to $\mathbf{V}$ only if $\lambda(E-e_1) > 0$.
We write $Q = \epsilon (v+h)$ for $v \in \mathbf{V}$,
$h \in \mathbf{V}^\perp$ and $\epsilon^2 = [E-e_1]/\lambda$. Then, the equation
\eqref{Q.eqn} is equivalent to
\begin{equation} \label{Q.bifur}
\left \{
\begin{array}{ll}
& h = \lambda \epsilon^2 [H_0-e_1]^{-1}(1- \mathbf{P}_1(H_0))
\{h - |v +h|^2(v +h) \} \\
& \mathbf{P}_1(H_0) [v - |v+h|^2(v +h)] = 0
\end{array} \right..
\end{equation}
By using the contraction
mapping theorem, we obtain the unique small solution $h = h(\epsilon,v)$
of the first equation of \eqref{Q.bifur} for sufficiently small
$\epsilon>0$. To solve the bifurcation equation
\[
N(\epsilon,v) := \mathbf{P}_1(H_0) [v - |v+h(\epsilon,v)|^2(v +h(\epsilon,v))] = 0,
\]
the standard method is to apply the implicit function theorem.
However,
due to the invariant structure of $N$, its derivative $N_v(0,v)$
has a non-trivial kernel for all $v \in \mathbf{V}$ such that $N(0,v)=0$.
To overcome this, we have to restrict $N$ onto $N$-invariant
subspaces of $\mathbf{V}$, and so eliminate the kernel of the
derivative of $N$.
In this way, we obtain two particular representatives,
denoted $Q_E$ and $\widetilde{Q}_E$, of the family of nonlinear excited
states. Then, applying the symmetries -- spatial rotation,
reflection, phase rotation and complex conjugation -- to $Q_E$
and $\widetilde{Q}_E$, we obtain all of the nonlinear excited states of
\eqref{Q.eqn}, as summarized in the following
theorem, proved in Section~\ref{Exi}:
\begin{theorem}[Existence of degenerate excited states]
\label{ex.sol}
There exist two branches of nonlinear excited state solutions to
\eqref{Q.eqn}, with $E - e_1$ small. These branches
are generated by applying phase rotations
($Q(x) \mapsto e^{i r} Q(x), \; r \in \mathbb{R}$) and spatial rotations
($Q(x) \mapsto Q(\Gamma x), \; \Gamma \in SO(3)$)
to two fixed solutions of the forms
\[
Q_E(x) = x_1f_1(x_1^2, x_2^2+x_3^2), \quad
\widetilde{Q}_E(x) = e^{i\theta}f_2(x_1^2 +x_2^2, x_3^2)
\]
for real-valued functions $f_1, f_2$, and where
$\theta \in [0,2\pi)$ is the angle between the $x_1$ axis and the
vector $x' = (x_1,x_2) \in \mathbb{R}^2$. The solutions
$Q_E$ and $\widetilde{Q}_E$ lie in the Sobolev space $H^2$,
and decay exponentially as $|x| \to \infty$.
\end{theorem}
\begin{remark}
Another way to state this result:
the excited state branches arise as the orbits, under the symmetry
group of the equation, of two particular bifurcation curves
of solutions. One of these ($Q_E$) is real, odd in one direction,
and invariant under rotations fixing that direction, while the
other ($\widetilde{Q}_E$) is even in one direction, and ``co-rotational''
with respect to rotations fixing that direction.
\end{remark}
\begin{remark}
The nature of the symmetry group, together with the symmetry
properties of $Q_E$ and $\widetilde{Q}_E$, imply that each solution
branch is a four-dimensional family: one degree of
freedom is the eigenvalue $E$, one is the phase rotation,
and two come from spatial rotations. See~\eqref{parameters}
for an explicit expression of the branches in terms of parameters.
\end{remark}
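Before turning to the dynamics, let us indicate heuristically where the two branches come from; the computation below ignores multiplicative constants, the sign of $\lambda$ and the correction $h$, and is not used in the proofs. Write $v=\sum_{j=1}^3a_j\phi_j$ with $a\in\mathbb{C}^3$. Since $\varphi$ is radial,
\[
\int_{\mathbb{R}^3}x_ix_jx_kx_l\,\varphi(|x|)^4\,dx
=c\,(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})
\]
for some constant $c>0$, so the leading-order bifurcation equation $\mathbf{P}_1(H_0)[v-|v|^2v]=0$ reduces to the algebraic system
\[
a_l=c\,\big(2|a|^2a_l+(a\cdot a)\,\bar{a}_l\big),\qquad l=1,2,3,
\qquad a\cdot a:=\sum_j a_j^2 .
\]
Up to phase rotation, $O(3)$ and complex conjugation, its non-zero solutions are of exactly two types: $a$ a multiple of a real vector, which leads to the branch $Q_E$, and $a$ with $a\cdot a=0$ (e.g.\ $a\propto(1,i,0)$), which leads to the branch $\widetilde{Q}_E$.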
Our next goal is to study the dynamics near the nonlinear excited
states. It is well-known that the family of {\it ground states} -- let
us denote them by $Q_E^0$, with nonlinear eigenvalue $E$ close to
$e_0$ -- is \textit{orbitally stable}; that is the orbit of
a ground state under the action of the symmetry group
(in this case just phase rotations, since $Q_E^0$ is
radially symmetric) is stable:
\[
||\psi(0) - Q_E^0||_{H^1} \mbox{ small } \implies
\inf_{r} \| \psi(t) - Q_E^0 e^{i r} \|_{H^1}
\mbox{ small for } t \geq 0
\]
(see \cite{RW}). One may expect that $Q_{E}^0$ is even
{\it asymptotically stable} in, say, a local $L^2$-norm:
\betagin{equation} \lambdabel{asy.st}
\lim_{t \rightarrow \infty} \inf_{E,r}
||\psi(t) - Q_{E}^0 e^{i r}||_{L^2_{loc}} = 0
\end{equation}
where one now has to consider the entire ground state family;
i.e., to allow for modulation of the eigenvalue $E$ as well as
the phase $r$. If $H_0$ has only one discrete eigenvalue,
it is proved in~\cite{SW1} that solutions initially close to a
ground state eventually converge to some (other) ground state
$e^{i r} Q_{E}^0$. In the case $H_0$ has two discrete
eigenvalues $e_0 < e_1 < 0$ (also satisfying a resonance
condition) and both of them are simple, this is proved
in \cite{TY1}. More recently, the same result was proved in \cite{ZW}
for the case where $e_1$ is degenerate, by introducing a new type of
normal form.
If the initial data is not close to a ground state, the presence of
excited states makes the problem more subtle -- see \cite{NTP, Tsai,
TY2}. The physical intuition is that excited states are unstable,
and that, generically, nearby states should radiate, and then relax
(locally) to the ground state. However, the results of \cite{TY3} show
that there are, in fact, {\it stable directions} -- finite
co-dimension families of solutions which converge to the excited
states -- at least when $H_0$ has simple eigenvalues.
Motivated by the papers \cite{ZW} and \cite{TY3}, in this paper
we shall construct solutions converging to the nonlinear excited
states of~Theorem \ref{ex.sol}. Of course, this result does not
contradict the physical intuition, since this family of solutions
has zero measure in some sense (and so should not be directly seen
in experiments or numerical simulations).
To study the stable directions for the nonlinear excited states,
it is essential that we understand the
linearized operators around these excited states, their spectral
properties, and the associated time-decay estimates.
In particular, if we denote the {\it linearized operator} around
a solitary wave solution $e^{i E t} Q(x)$ by
$\mathcal{L}_Q$, so that
\[
\mathcal{L}_Q \zetata = -i \{(H_0 -E + 2\lambdambda |Q|^2) \zetata +
\lambdambda Q^2 \bar{\zetata} \},
\]
then the analysis of Section~\ref{sp.an} shows that
for each of the nonlinear excited states, there is a
finite-codimension subspace
\[
\mathbf{E}_c(\mathcal{L}_{Q}) =
\mbox{ the continuous spectral subspace for }
\mathcal{L}_{Q}
\]
on which hold the dispersive decay estimates
\[
\norm{e^{t \mathcal{L}_{Q}} \eta}_{L^p} \lesssimssim
|t|^{-3(1/2-1/p)}\norm{\eta}_{L^{p'}},
\quad \frac{1}{p} + \frac{1}{p'} = 1.
\]
It turns out that the spectrum of the linearized operator around
$Q_E$ (and its symmetry translates) is different from that of the
linearized operator around $\widetilde{Q}_E$ (and its symmetry
translates). This
interesting phenomenon is due to the difference in the symmetry
properties of $Q_E$ and $\widetilde{Q}_E$. Moreover, because of the
degeneracy, there is an interaction between many different
directions in the same modes, and it is more complicated to study
these linearized operators, and to construct the stable directions,
than it is for the ground state.
Before stating our main theorem, we make precise our further
assumptions on the potential $V$.
\noindent {\bf Assumption A2:}
\textit{
The linear Hamiltonian $H_0 = -\Delta + V$ has only two
eigenvalues $e_0 < e_1 < 0$, with $e_0 \not= 2 e_1$.
}
\betagin{remark}
Taken together, Assumptions A1 and A2 say that: the radial
eigenfunction problem supports only the eigenvalue $e_0$;
the eigenvalue problem corresponding to the first
non-zero angular momentum sector supports only the eigenvalue
$e_1$; and there are no eigenvalues corresponding to higher
angular momenta. It is not difficult to construct examples
of such potentials (eg. among finite square well
potentials of varying depth and width).
\end{remark}
Next we make precise our assumptions of spatial decay and
regularity of the potential $V(x)$.
\noindent {\bf Assumption A3:}
\textit{
For some $\sigma>0$,
\[
|\nabla^\alphapha V(x)| \leq C (1 +|x|^2)^{-5-\sigma}, \
\forall x \in \mathbb{R}^3, \ |\alphapha| \leq 2,
\]
and there is $0<\sigma_0 <1$ such that
\[
\norm{[(x \cdot \nabla)^k V]\phi}_{L^2} \leq
\sigma_0 \norm{-\Delta \phi}_{L^2} + C \norm{\phi}_{L^2}, \
\forall \ k = 1,2,3, \ \phi \in H^1.
\]
Furthermore, $0$ (the bottom of the continuous
spectrum of $H_0$) is neither an eigenvalue nor a resonance.}
Assumption A3 ensures we can apply standard analysis tools
for linear Schr\"{o}dinger operators. It is certainly not optimal.
The final assumption will ensure, in the {\it resonant case}
$e_0 < 2 e_1$, that the resonant interaction is generic.
Denote a normalized ground-state eigenfunction by $\phi_0$,
which we may suppose is positive and radial (since $V$ is radial),
denote by $\mathbf{P}_0(H_0)$ the corresponding orthogonal projection,
\[
Null(H_0 - e_0) = \mathbb{C} \phi_0, \quad\quad
\phi_0(x) = \phi_0(|x|) > 0, \quad\quad
\mathbf{P}_0 = \mathbf{P}_0(H_0) = \lambdangle \; \phi_0 \; | \;\; ,
\]
and denote by $\, \mathbf{P}\! _\mathrm{c} \, \! $ the orthogonal projection onto the
continuous spectral subspace of $H_0$:
\[
\, \mathbf{P}\! _\mathrm{c} \, \! = \, \mathbf{P}\! _\mathrm{c} \, \! (H_0) = 1 - \mathbf{P}_0(H_0) - \mathbf{P}_1(H_0).
\]
\noindent {\bf Assumption A4:}
\textit{``Fermi Golden Rule'':
If $e_0 < 2 e_1$ (``resonant case''), there exists
$\lambdambda_0 >0$ such that
\betagin{equation} \lambdabel{FGR}
\lim_{r \to 0+} \mathrm{I}_2m \bke{\phi_{0}\phi^2 , (H_0 - 2e_1 + e_0
-ir)^{-1}\, \mathbf{P}\! _\mathrm{c} \, \! \phi_{0} \phi^2 } \geq \lambdambda_0 \norm{\phi}_{L^2}^4
\quad\quad
\forall \phi \in \mathbf{V} .
\end{equation}
}
When $e_1$ is simple, the condition \eqref{FGR} is
well-known and appears in many papers,
see \cite{BP1, BP2, BS, ZS1, NTP, Tsai, TY1, TY2, TY3, TY4}.
In the degenerate case, \eqref{FGR}
is also used in \cite{ZW}, and it is claimed there that
\eqref{FGR} holds generically.
The existence of stable directions for the excited
states is proved in Section~\ref{prom}:
\betagin{theorem}[Stable directions for degenerate excited states]
\lambdabel{m-theorem}
Assume that $H_0 = -\Delta + V$ satisfies A1-A4 above.
Then, there exists $\epsilon_0>0$ such that for any
$0 < \epsilon \leq \epsilon_0$,
there is $\deltalta_0 > 0$ such that for any $0 < \deltalta < \deltalta_0$
the following holds: if $Q$ denotes a nonlinear
excited state with $E -e_1 = \lambdambda \epsilon^2$,
$\mathcal{L}_Q$ denotes the corresponding linearized operator, and
$\eta_\infty \in W^{2,1} \cap H^2 \cap \mathbf{E}_c(\mathcal{L}_Q)$ with
$\norm{\eta_\infty}_{W^{2,1} \cap H^2} \leq \deltalta$,
then there exists a solution $\psi(x,t)$ of \eqref{NSE} such that
\[
\norm{\psi(x,t) - \psi_{as}(x,t)}_{H^2} \leq C(\epsilon)
\deltalta^{7/4}(1+t)^{-1},
\]
where
\[
\psi_{as}(x,t) := e^{-i E t} [Q + e^{t\mathcal{L}_Q} \eta_\infty]
\]
and so in particular, for $p > 2$,
\[
\| \psi(x,t) - e^{-iEt} Q(x) \|_{L^p} \to 0
\;\; \mbox{ as } \;\; t \to \infty.
\]
\end{theorem}
\section{Some notation and definitions} \lambdabel{ND}
\betagin{itemize}
\item[\textup{(1)}]
Let $L^2 := L^2(\mathbb{R}^3, \mathbb{C})$, $\mathbf{E} := L^2(\mathbb{R}^3, \mathbb{C}^2)$. We equip $\mathbf{E}$ with the standard inner product
\[ \wei{f,g} = \int_{\mathbb{R}^3}( \bar{f}_1 g_1 + \bar{f}_2 g_2) dx , \quad \forall \ f = \betagin{bmatrix} f_1 \\ f_2 \end{bmatrix}, \quad g = \betagin{bmatrix} g_1 \\ g_2 \end{bmatrix} \in \mathbf{E}.\]
Moreover, for any $u \in L^2$, we shall write $\overrightarrow{u} = \betagin{bmatrix} \mathbb{R}e (u) \\ \mathrm{I}_2m(u)\end{bmatrix} \in \mathbf{E}$.
\item[\textup{(2)}]
Recall
\[
\betagin{split}
& H_0 = -\Delta + V \\
&\mathbf{V} := Null(H_0 - e_1) = \text{span}_{\mathbb{C}}
\{\phi_1, \phi_2, \phi_3\}, \quad
Null(H_0 - e_0) = \mathbb{C} \phi_0 \\
&\mathbf{P}_0 = \lambdangle \; \phi_0 \; | \; , \quad
\mathbf{P}_1 = \sum_{j=1}^3 \lambdangle \; \phi_j \; | \;, \quad
\mathbf{P}_c = 1 - \mathbf{P}_0 - \mathbf{P}_1
\end{split}
\]
\item[\textup{(3)}]
Denote
\betagin{equation} \lambdabel{J-sigma.def}
J = \betagin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad \sigma_1 = \betagin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \sigma_2 = \betagin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad
\sigma_3 = \betagin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.
\end{equation}
\item[\textup{(4)}]
Let $\mathbf{O}(k)$ denote the group of orthogonal transformations
on $\mathbb{R}^k$. Identifying $\mathbb{R}^2$ with $\mathbb{C}$,
we can write $\mathbf{O}(2)$
as the group generated by $\{ e^{i r}, \mathop{\,\mathbf{conj}\,}: r \in [0,2\pi)\}$,
where $\mathop{\,\mathbf{conj}\,}$ denotes complex conjugation.
\betagin{definition}
Let $\mathbf{G}:= \mathbf{O}(3) \oplus \mathbf{O}(2)$. For $g = (g_1, g_2) \in \mathbf{G}$,
define its action on $L^2$ by $g * f(x) : = g_2* f(g_1*x)$
where $g_1*x$ denotes usual matrix multiplication,
and $g_2 * $ denotes complex multiplication (or conjugation).
\end{definition}
\item[\textup{(5)}] \lambdabel{Rota}
For $\alpha \in [0, 2\pi)$, we denote by $R_{jk}(\alpha)$
the rotation matrix in the $x_j x_k$-plane of $\mathbb{R}^3$ through angle
$\alpha$, for $j,k =1,2,3$, and $j < k$. Precisely,
\betagin{equation} \lambdabel{basic.rotation}
\betagin{split}
R_{12}(\alpha) & := \betagin{bmatrix} \cos(\alpha) & -\sin(\alpha) & 0 \\ \sin(\alpha) & \cos(\alpha) & 0 \\ 0 & 0 & 1 \end{bmatrix},
\quad R_{13}(\alpha) := \betagin{bmatrix} \cos(\alpha) & 0 & \sin(\alpha)\\ 0 & 1 & 0 \\ -\sin(\alpha) & 0 & \cos(\alpha) \end{bmatrix}, \\
R_{23}(\alpha) & := \betagin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\alpha) & -\sin(\alpha) \\ 0 & \sin(\alpha) & \cos(\alpha) \end{bmatrix}.
\end{split}
\end{equation}
Also, let
\betagin{equation} \lambdabel{Gamma.def}
\betagin{split}
& \mathbf{G}amma(\deltalta,\alpha,\sigma) := R_{12}(\deltalta)R_{13}(\alpha)R_{23}(\sigma), \quad \mathbf{G}amma_0(\alpha, \deltalta) : = R_{12}(\alpha)R_{13}(\deltalta), \\
& \mathbf{G}amma_1(\alpha, \deltalta) := R_{13}(\alpha)R_{23}(\deltalta), \quad \alpha, \deltalta, \sigma \in [0,2\pi).
\end{split}
\end{equation}
By the Euler-Brauer resolution of a rotation, for any rotation matrix
$A \in \mathbf{SO}(3)$, there exist unique
$(\deltalta, \alpha, \sigma) \in [0,2\pi)^3$ such that
$A =\mathbf{G}amma(\deltalta,\alpha,\sigma)$ (eg. \cite[p. 146]{Rao}).
Moreover, for any $B \in \mathbf{O}(3)$, either $B$ or $-B$ lies in
$\mathbf{SO}(3)$. A short numerical sketch of these rotation matrices and their compositions is given after this list.
\item[\textup{(6)}]
For some $s >3$, let $L^2_s$ be the weighted $L^2$ space defined by
\betagin{equation} \lambdabel{Ls}
L^2_s := \{ f: (1+|x|^2)^{s/2} f(x) \in L^2(\mathbb{R}^3, \mathbb{C})\}. \end{equation}
Then, let $\mathbf{E}_s = L^2_s \times L^2_s$ and $\mathbf{B} := \mathbf{B}(\mathbf{E}_{s}, \mathbf{E}_{-s})$ be the space of all bounded operators from $\mathbf{E}_s$ to $\mathbf{E}_{-s}$.
\end{itemize}
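The following short numerical sketch is an illustration added for convenience (it is not part of the argument): it implements the rotation matrices $R_{12}, R_{13}, R_{23}$ of item (5) and the compositions $\mathbf{G}amma$, $\mathbf{G}amma_0$, $\mathbf{G}amma_1$ of \eqref{Gamma.def}, and checks numerically that each composition lies in $\mathbf{SO}(3)$.
\betagin{verbatim}
import numpy as np

def R12(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R13(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def R23(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def Gamma(d, a, s):   # Gamma(delta, alpha, sigma) = R12(delta) R13(alpha) R23(sigma)
    return R12(d) @ R13(a) @ R23(s)

def Gamma0(a, d):     # Gamma_0(alpha, delta) = R12(alpha) R13(delta)
    return R12(a) @ R13(d)

def Gamma1(a, d):     # Gamma_1(alpha, delta) = R13(alpha) R23(delta)
    return R13(a) @ R23(d)

rng = np.random.default_rng(0)
for M in (Gamma(*rng.uniform(0, 2*np.pi, 3)),
          Gamma0(*rng.uniform(0, 2*np.pi, 2)),
          Gamma1(*rng.uniform(0, 2*np.pi, 2))):
    # each composition is orthogonal with determinant +1, i.e. lies in SO(3)
    assert np.allclose(M @ M.T, np.eye(3))
    assert np.isclose(np.linalg.det(M), 1.0)
\end{verbatim}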
\section{Existence of Nonlinear Excited States} \lambdabel{Exi}
Let $\mu := E -e_1$ and
\betagin{equation} \lambdabel{nl.ext}
F(\mu,Q) := (H_0 -e_1)Q + \lambda |Q|^2Q - \mu Q.
\end{equation}
Since $V = V(|x|)$, we see that $F$ is invariant under the action of the group $\mathbf{G}$, i.e.
\[ F(\mu, g*Q) = g *F(\mu, Q), \quad \forall \ g \in \mathbf{G}. \]
As remarked in the introduction, the equation $F(\mu,Q)=0$
has a solution for $Q$ near $\mathbf{V}$ only if
$\lambdambda \mu = \lambdambda(E-e_1) > 0$. Now, we write
$Q = \epsilon(v + h)$ where we will take
$\epsilon = \sqrt{\mu/\lambdambda} > 0$ sufficiently small (so that $\mu = \lambdambda \epsilon^2$),
$v \in \mathbf{V}$ of order one, and $h \in \mathbf{V}^\perp$.
Then the equation $F(\mu, Q) =0$ becomes
\betagin{equation} \lambdabel{hv.eqn} (H_0-e_1)h + \lambdambda \epsilon^2|v+h|^2(v+h) - \mu(v+h) =0. \end{equation}
Now, applying the projections $\mathbf{P}_1$ and
$\mathbf{P}_1^\perp := 1 - \mathbf{P}_1$ to the equation \eqref{hv.eqn}, we get
\betagin{flalign} \lambdabel{nlex.h}
& h = \lambdambda \epsilonsilon^2 [H_0-e_1]^{-1} \mathbf{P}_1^\perp
\{h - |v +h|^2(v +h) \} \\ \lambdabel{nlex.ker} &
\mathbf{P}_1 [v - |v+h|^2(v +h)] = 0.
\end{flalign}
By applying the contraction mapping theorem or implicit function
theorem, we see that for any fixed $c_1>0$, there
exists a sufficiently small number $\epsilon_1>0$, such that for all
$0 \leq \epsilon < \epsilon_1$, and each $v \in \mathbf{V}$ with $\norm{v} < c_1$,
there is a unique solution $h=h(\epsilonsilon,v) \in H^2$
of~\eqref{nlex.h} satisfying
\[
h(0,v) =0, \quad h_v(0,v) = 0.
\]
Moreover, since $\mathbf{P}_1 g = g \mathbf{P}_1 $ for all $g \in \mathbf{G}$, by the uniqueness of the solution $h$ of \eqref{nlex.h}, we also have
\betagin{equation} \lambdabel{h.inv}
h(\epsilon, g*v) = g* h(\epsilon,v), \ \forall\ g \in \mathbf{G}.
\end{equation}
Let
\[ N(\epsilonsilon,v) := \, \mathbf{P}\! _\mathrm{1} \, \! [v - |v+ h(\epsilonsilon,v)|^2(v + h(\epsilonsilon,v))]. \]
Then, we have
\betagin{equation} \lambdabel{N.inv}
N(\epsilon, g*v) = g*N(\epsilon,v), \quad \text{for all} \quad v \in \mathbf{V} \quad \text{and} \quad g \in \mathbf{G}.
\end{equation}
Moreover,
\betagin{equation} \lambdabel{N.deriv}
N_v(0,v)w = \, \mathbf{P}\! _\mathrm{1} \, \! [w - 2|v|^2 w - v^2\bar{w}], \;\; \forall \ w \in \mathbf{V}.
\end{equation}
In order to apply the implicit function theorem to solve the equation $N(\epsilon, v) =0$, we need to find $v_0 \in \mathbf{V}$ such that
\betagin{equation} \lambdabel{imp.cond}
N(0,v_0) = 0, \quad \text{and} \quad N_v(0,v_0) : \mathbf{V} \rightarrow \mathbf{V} \ \text{is invertible}.
\end{equation}
Set $v_0 = \phi_z := z\cdot \phi = z_1 \phi_1 + z_2 \phi_2 + z_3 \phi_3$ for $z \in \mathbb{C}^3$. Then,
$N(0,v_0) =0$ if and only if $(\phi_j, N(0, \phi_z)) =0$ for all
$j=1,2,3$. Set $I := (\phi_1^2, \phi_2^2)$; since the $\phi_j$ may be chosen of the form $\phi_j(x) = x_j |x|^{-1}\rho(|x|)$, we have $(\phi_j^2, \phi_l^2) = I$ for $j \not= l$ and $(\phi_j^2, \phi_j^2) = 3I$.
For each $j=1,2,3$, the equation $(\phi_j, N(0, \phi_z)) =0$ becomes
\betagin{flalign*}
z_j & = (\phi_j, |\phi_z|^2\phi_z) = |z_j|^2 z_j (\phi_j^2, \phi_j^2) + \bar{z}_j \sum_{l \not =j} z_l^2 (\phi_j^2, \phi_l^2) + 2 z_j \sum_{l \not = j} |z_l|^2 (\phi_j^2, \phi_l^2) \\
\ & = (\phi_1^2, \phi_2^2) \left \{3 |z_j|^2 z_j + \bar{z}_j \sum_{l\not=j} z_l^2 + 2z_j \sum_{l \not = j} |z_l|^2 \right \} \\
\ & = I \left \{2z_j |z|^2 + \bar{z}_j z^2 \right\}, \quad \text{where} \quad z^2 := \sum_{l} z_l^2.
\end{flalign*}
Equivalently,
\betagin{equation} \lambdabel{bi.re}
2z_j |z|^2 + \bar{z}_j z^2 =\frac{1}{I} z_j
\quad\quad \forall \ j = 1,2,3.
\end{equation}
Write $z_j = a_je^{i\alphapha_j}$, for $a_j \geq 0$ and $\alpha_j \in [0,
2\pi)$.
For some $j_0$, we have $z_{j_0} \not= 0$, and by applying
a phase rotation we may assume $z_{j_0} \in \mathbb{R}$.
Then dividing~\eqref{bi.re} by $z_{j_0}$, we get
\betagin{equation} \lambdabel{first.com}
2|z|^2 + z^2 = \frac{1}{I}.
\end{equation}
Moreover for any $j$ with $z_j \not=0$, \eqref{bi.re} implies
\betagin{equation} \lambdabel{nonzero.com}
2|z|^2 + e^{-2i\alpha_j} z^2 = \frac{1}{I},
\end{equation}
and so, comparing~\eqref{first.com} and~\eqref{nonzero.com}, we see
that either $z_j = \pm a_j$, or else $z^2 =0$.
So \eqref{bi.re} is equivalent to
\betagin{equation} \lambdabel{re.sol}
|z|^2 = \frac{1}{2I} \quad \text{and} \quad z^2 = 0; \quad\quad
\text{or} \quad |z|^2 = \frac{1}{3I} \quad \text{and} \quad
e^{i\alpha}z \in \mathbb{R}^3, \mbox{ some } \alphapha \in [0,2\pi).
\end{equation}
Thus we obtain two representative elements of the solutions of
$N(0,v)=0$:
\betagin{lemma} \lambdabel{Orbit}
Let $v_1 = (1/3I)^{1/2}(1,0, 0) \cdot \phi, \;\;
v_2 = (1/4I)^{1/2}(1, i, 0) \cdot \phi$,
and let $\mathbf{O}_1, \mathbf{O}_2$ be respectively the orbits of $v_1$ and $v_2$
under the action of $\mathbf{G}$ on $\mathbf{V}$. The set $\mathbf{O}:= \mathbf{O}_1 \cup \mathbf{O}_2$
contains all non-zero solutions of $N(0,v) = 0$.
\end{lemma}
\betagin{proof} First of all, note that $\mathbf{V}$ is invariant under the
action of $\mathbf{G}$. So, the action of $\mathbf{G}$ on $\mathbf{V}$ is
well-defined. Moreover, from \eqref{N.inv}, we see that $v$ solves
the equation $N(0,v) =0$ for all $v \in \mathbf{O}$.
Now observe that for any $v = z \cdot \phi \in \mathbf{V}$, we have
\betagin{equation}\lambdabel{g.ac.z}
g*v = g_2*[z' \cdot \phi], \quad \forall\ g = (g_1, g_2) \in \mathbf{G},
\quad z' := g_1 z.
\end{equation}
So, to prove that every $v = z\cdot \phi$ with $z = (z_1,z_2,z_3)
\in \mathbb{C}^3$ satisfying $N(0,v) =0$ belongs to $\mathbf{O}$, we need
to find some $g = (g_1, g_2) \in \mathbf{G}$ such that
\[
g_2*[z' \cdot \phi] = v_1 \ \text{or} \ = v_2,
\quad \text{with} \quad z' := g_1 z.
\]
So let $v = z\cdot \phi$ with $z = (z_1,z_2,z_3) \in \mathbb{C}^3$
such that $N(0,v) =0$. Then, $z$ satisfies \eqref{re.sol}.
If $e^{i\alpha}z \in \mathbb{R}^3$ for some $\alpha \in [0,2\pi)$ with
$|z|^2 = 1/(3I)$, then it is simple to see that there exists
$g:= (g_1,e^{-i\alpha}) \in \mathbf{G}$ with $g_1 \in \mathbf{SO}(3)$ such
that $g*v_1 = z\cdot\phi$. So, $v \in \mathbf{O}_1$. Now, assume that
$e^{i\alpha}z \notin \mathbb{R}^3$ for all $\alpha \in [0,2\pi)$. Then,
we have $|z|^2 = 1/(2I)$ and $z^2 =0$. We write
$z_j = a_j e^{i\alpha_j}$, for $a_j \geq 0$ and $0 \leq \alpha_j < 2\pi$
for all $j =1,2,3$. Applying a spatial rotation, and then a phase
rotation, we may assume that $a_1 >0$ and $\alphapha_1 = 0$.
So, $z = (a_1, a_2e^{i\alpha_2}, a_3e^{i\alpha_3})$. Now, if $a_2a_3 =0$ or $\alpha_2\alpha_3 =0$, then it is also simple to show that $v \in \mathbf{O}$. So, we assume that $a_1a_2a_3 \not=0$ and $\alpha_2\alpha_3 \not=0$.
Let $M_1 := R_{23}(\alpha)^T$ where $R_{23}$ is defined
in~\eqref{basic.rotation} and choose $\alpha$ so that
$a_2\cos(\alpha) \cos(\alpha_2) = a_3\sin(\alpha)\cos(\alpha_3)$, or in other
words $\tan(\alpha) = a_2\cos(\alpha_2)/[a_3\cos(\alpha_3)]$.
Then, $(M_1, 1) \in \mathbf{G}$ and $(M_1,1)*v = v' := z' \cdot \phi$ where $z' := (a_1, i a_2' , a_3'e^{i\alpha_3'})$ for some $a_2' \in \mathbb{R}$ and $a_3' \geq 0$. Since $z^2=0$ and $|z|^2 = 1/(2I)$, we have $(z')^2 =0$ and $|z'|^2=1/(2I)$. So,
\[
a_1^2 -(a_2')^2 + (a_3')^2 e^{2i\alpha_3'} =0.
\]
This implies that $e^{2i\alpha_3'} \in \mathbb{R}$. Therefore,
$\alpha_3' \in \{ 0, \; \pi/2, \; \pi, \; 3\pi/2 \}$.
So either $z' = (a_1, ia, ib)$ or $z' = (a_1, ia ,b)$
for some $a, b \in \mathbb{R}$ such that $|z'|^2 = 1/(2I)$.
From this, taking $M_2 = R_{23}(\thetaeta')^T$ or $M_2 = R_{13}(\thetaeta')^T$ with an appropriate $\thetaeta'$, we see that $(M_2,1)*v' = v'' :=z'' \cdot \phi$ with $z'' = (c, id,0)$ for some $c,d \in \mathbb{R}$, $c \not=0$, $|z''|^2 = 1/(2I)$ and $(z'')^2 = 0$. It follows that $d \not=0$, and hence $v'' \in \mathbf{O}_2 \subset \mathbf{O}$. This completes the proof of the lemma.
\end{proof}
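As a quick numerical illustration (not needed for the proof above), one can verify directly that the coordinate vectors of the two representatives $v_1$ and $v_2$ of Lemma~\ref{Orbit} solve the algebraic system \eqref{bi.re}. The value of $I$ only sets a scale, so the hypothetical normalization $I=1$ is used below.
\betagin{verbatim}
import numpy as np

I0 = 1.0   # I = (phi_1^2, phi_2^2); its value only fixes a scale, take I = 1 for illustration

def residual(z):
    # componentwise residual of 2 z_j |z|^2 + conj(z_j) z^2 - z_j / I
    z = np.asarray(z, dtype=complex)
    return 2*z*np.vdot(z, z).real + np.conj(z)*np.sum(z**2) - z/I0

z1 = np.array([1.0, 0.0, 0.0]) / np.sqrt(3*I0)   # coordinates of v_1: |z|^2 = 1/(3I), z real
z2 = np.array([1.0, 1j, 0.0]) / np.sqrt(4*I0)    # coordinates of v_2: |z|^2 = 1/(2I), z^2 = 0
assert np.allclose(residual(z1), 0)
assert np.allclose(residual(z2), 0)
\end{verbatim}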
Lemma \ref{Orbit} completely solves the first equation
of~\eqref{imp.cond}. For the second condition in~\eqref{imp.cond},
note that
\[
N_v(\epsilon, v)(iv) = 0, \quad \forall \ v \in \mathbf{V} \ \text{such that} \ N(\epsilon, v) =0,
\]
as a consequence of the phase invariance \eqref{N.inv}.
So, the second condition of \eqref{imp.cond} never holds. To overcome
this and solve the equation $N(\epsilon, v) =0$, we shall restrict $N$ to
the invariant subspaces of $\mathbf{V}$ and solve $N =0$ on these subspaces.
To this end, we introduce the following lemma:
\betagin{lemma} \lambdabel{Invariant} For each $j =1,2$, let $\mathbf{V}_j :=
\text{span}_{\mathbb{R}} \{ v_j\}$ where $v_1, v_2$ are defined in
Lemma \ref{Orbit}.
There are subgroups $\mathbf{G}_j \leq \mathbf{G}$ such that $\mathbf{V}_j$ is the fixed
subspace of $\mathbf{V}$ under the action of $\mathbf{G}_j$ on $\mathbf{V}$.
In other words,
\betagin{equation} \lambdabel{Fix-subspace}
\mathbf{V}_j = \{v \in \mathbf{V} : g* v = v \quad \forall \ g \in \mathbf{G}_j\},
\;\; j = 1,2.
\end{equation}
\end{lemma}
\betagin{proof} Recall that $\mathop{\,\mathbf{conj}\,}$ denotes complex
conjugation: $\mathop{\,\mathbf{conj}\,} *z = \bar{z}$ for $z \in {\mathbb C}$.
Set
\[
\iota_0 := \left(\betagin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1
\end{bmatrix} , -1 \right), \;
\iota_1 := \left(\betagin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1
\end{bmatrix} , \mathop{\,\mathbf{conj}\,} \right), \;
\iota_2 := \left( \betagin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1
\end{bmatrix} , 1 \right) \; \in \mathbf{G}.
\]
Moreover, let
\betagin{equation*}
g(\alpha):= (R_{12}(\alpha), e^{-i\alpha}) \in \mathbf{G} \quad \text{and} \quad G_1' : = \left \{ \betagin{bmatrix} 1 & 0 \\ 0 & g\end{bmatrix}, \ g \in \mathbf{O}(2) \right \} \leq \mathbf{O}(3).
\end{equation*}
Then, let $\mathbf{G}_1$ be the subgroup of $\mathbf{G}$ which is generated by
the subgroup $\mathbf{G}_1' \oplus \{ 1 , \mathop{\,\mathbf{conj}\,} \}$ and $\{\iota_0\}$.
And let $\mathbf{G}_2$ be the subgroup of $\mathbf{G}$ which is generated by
$\{\iota_1, \iota_2, g(\alpha),\ \forall \ \alpha \in [0, 2\pi)\}$.
We shall show that $\mathbf{V}_j$ is the fixed subspace of the action of
$\mathbf{G}_j$ on $\mathbf{V}$ for $j=1,2$. Note that
\[
\{ v \in \mathbf{V} : (g,1)* v = v \ \text{and} \ \iota_0*v = v, \ \forall\
g \in \mathbf{G}_1' \} = \text{span}_{\mathbb{C}} \{\phi_1 \}.
\]
Moreover, for all $z \in \mathbb{C}$, we see that $\mathop{\,\mathbf{conj}\,} * z \phi_1 =
z \phi_1$ if and only if $z \in \mathbb{R}$. So, we obtain
\eqref{Fix-subspace} for $j=1$.
On the other hand, for $v = z\cdot \phi \in \mathbf{V}$ such that $v$ is
fixed by the action of $\mathbf{G}_2$, we have $g(\alpha)* v =v$ for all $\alpha \in [0,2\pi)$. So,
\betagin{equation*}
\left \{ \betagin{array}{ll}
e^{i\alpha} z_1 & = z_1\cos(\alpha) + z_2\sin(\alpha), \\
e^{i\alpha} z_2 & = -z_1\sin(\alpha) + z_2\cos(\alpha), \\
e^{i\alpha} z_3 & = z_3, \quad \quad \forall \ \alpha \in [0,2\pi).
\end{array} \right.
\end{equation*}
Therefore, we get $z_2 = iz_1$ and $z_3 =0$. So $v = z_1 (\phi_1 + i \phi_2)$. Moreover, $\iota_1 *v = v, \iota_2*v =v$ imply that $z_1 \in \mathbb{R}$. So, $v \in \mathbf{V}_2$ and this completes the proof of the lemma.
\end{proof}
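The fixed-point relations behind Lemma~\ref{Invariant} can also be seen concretely. The sketch below uses the hypothetical stand-ins $\phi_j(x) = x_j e^{-|x|^2/2}$ (chosen only because they share the parity and rotation behaviour of the true eigenfunctions) to check numerically that $\phi_1 + i\phi_2$ is fixed by $g(\alpha) = (R_{12}(\alpha), e^{-i\alpha})$, while $\phi_1$ is fixed by rotations of the $(x_2,x_3)$-plane.
\betagin{verbatim}
import numpy as np

def phi(j, x):
    # hypothetical stand-in phi_j(x) = x_j * exp(-|x|^2/2); only its symmetry type matters here
    return x[j] * np.exp(-0.5 * np.dot(x, x))

def R12(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
x = rng.normal(size=3)
alpha, beta = rng.uniform(0.0, 2.0*np.pi, 2)

# V_2-direction: e^{-i alpha} (phi_1 + i phi_2)(R_12(alpha) x) = (phi_1 + i phi_2)(x)
v2 = lambda y: phi(0, y) + 1j*phi(1, y)
assert np.isclose(np.exp(-1j*alpha) * v2(R12(alpha) @ x), v2(x))

# V_1-direction: phi_1 is unchanged by any rotation of the (x_2, x_3)-plane
R23 = np.array([[1.0, 0.0, 0.0],
                [0.0, np.cos(beta), -np.sin(beta)],
                [0.0, np.sin(beta),  np.cos(beta)]])
assert np.isclose(phi(0, R23 @ x), phi(0, x))
\end{verbatim}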
An elementary but important observation: \eqref{N.inv}
implies that $N = N(\epsilon, \cdot)$ maps $\mathbf{V}_j$ into $\mathbf{V}_j$.
Thus we have well-defined maps
\[
N_1 : = N_{| \mathbf{V}_1} : \mathbf{V}_1 \to \mathbf{V}_1,
\quad\quad
N_2 : = N_{| \mathbf{V}_2} : \mathbf{V}_2 \to \mathbf{V}_2.
\]
Now consider the bifurcation equation $N_1=0$. From the definition of $v_1$ in Lemma \ref{Orbit}, we have $N_1(0, v_1) = 0$. Moreover, from \eqref{N.deriv}
\[ \partial_v N_1(0,v_1)(\phi_1) = (\phi_1, N_v(0,v_1) \phi_1)\phi_1 = - 2\phi_1. \]
Therefore, applying the implicit function theorem, we obtain a bifurcation solution of the equation $N_1 = 0$ as
\betagin{equation} \lambdabel{bir.1}
v_+(\epsilonsilon) = v_1 + a_+(\epsilonsilon) \phi_1,
\quad a_+(\epsilonsilon) = O(\epsilonsilon)
\end{equation}
for $0 < \epsilon < \epsilon_2$, some $0 < \epsilon_2 \leq \epsilon_1$.
Now, we consider the bifurcation equation $N_2 = 0$.
Let $\phi^* = \phi_1 + i \phi_2$, so that $(\phi^*, \phi^*) = 2$ and $\, \mathbf{P}\! _\mathrm{1} \, \! [\,|\phi^*|^2 \phi^*] = 4I \phi^*$. As before, we have $N_2(0,v_2) = 0$ and, from \eqref{N.deriv},
\[
\partial_v N_2(0,v_2)(\phi^*) = \, \mathbf{P}\! _\mathrm{1} \, \! \Big[ \phi^* - \frac{3}{4I}\,|\phi^*|^2 \phi^* \Big] = (1 - 3)\,\phi^* = -2\phi^*.
\]
Again, by the implicit function theorem, we have a bifurcation
solution of the equation $N_2 = 0$ given by
\betagin{equation} \lambdabel{bir.2}
v_-(\epsilonsilon) = v_2 + a_-(\epsilonsilon) \phi^*,
\quad a_-(\epsilonsilon) = O(\epsilonsilon)
\end{equation}
for $ 0 < \epsilon < \epsilon_2$, some $0 < \epsilon_2 \leq \epsilon_1$.
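The multiplier $-2$ obtained for both bifurcation equations can be double-checked with explicit stand-ins. The following symbolic sketch is only an illustration: the Gaussian profile $\phi_j \propto x_j e^{-|x|^2/2}$ is a hypothetical choice with the correct parity and angular structure. It verifies the identities $(\phi_j^2,\phi_j^2) = 3I$ and $\, \mathbf{P}\! _\mathrm{1} \, \! [\,|\phi^*|^2\phi^*] = 4I\phi^*$ used above, and recovers the multiplier $-2$ for both branches.
\betagin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

def ip(f, h):
    # L^2(R^3) pairing (the first argument is real in every use below)
    return sp.integrate(f*h, (x1, -sp.oo, sp.oo), (x2, -sp.oo, sp.oo), (x3, -sp.oo, sp.oo))

# hypothetical stand-ins for the degenerate triple: phi_j proportional to x_j * Gaussian
g = sp.exp(-(x1**2 + x2**2 + x3**2)/2)
c = 1/sp.sqrt(ip(x1*g, x1*g))                  # L^2 normalization constant
phi1, phi2 = c*x1*g, c*x2*g

I0 = ip(phi1**2, phi2**2)                      # I = (phi_1^2, phi_2^2)
assert sp.simplify(ip(phi1**2, phi1**2) - 3*I0) == 0        # (phi_j^2, phi_j^2) = 3I

# N_1 branch: N_v(0, v_1) phi_1 = P_1[phi_1 - (1/I) phi_1^3], and P_1[phi_1^3] = 3I phi_1
c1 = sp.simplify(1 - ip(phi1, phi1**3)/I0)                  # expected multiplier: -2

# N_2 branch: with phi* = phi_1 + i phi_2, P_1[|phi*|^2 phi*] = 4I phi*
phistar = phi1 + sp.I*phi2
w = (phi1**2 + phi2**2)*phistar
assert sp.simplify(ip(phi1, w) - 4*I0) == 0
assert sp.simplify(ip(phi2, w) - 4*sp.I*I0) == 0
c2 = sp.simplify(1 - sp.Rational(3, 4)/I0 * ip(phi1, w))    # expected multiplier: -2
print(c1, c2)   # -2 -2
\end{verbatim}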
We denote the two resulting solutions of \eqref{nl.ext} by
\betagin{equation} \lambdabel{QE.def}
\betagin{split}
Q_E & := \epsilonsilon [v_1 + a_+(\epsilonsilon)\phi_1 + h(\epsilonsilon, v_1)] = \rho(\epsilon) \phi_1 + \epsilonsilon\, h(\epsilon, v_1), \\
\widetilde{Q}_{E} & = \epsilonsilon [v_2 + a_-(\epsilonsilon)\phi^* + h(\epsilonsilon, v_2)]= \widetilde{\rho}(\epsilon)\phi^* + \epsilonsilon\, h(\epsilon, v_2),
\end{split}
\end{equation}
where $\rho(\epsilon) := \epsilon/(3I)^{1/2} + \epsilon a_+(\epsilon)$ and $\widetilde{\rho}(\epsilon) := \epsilon/(4I)^{1/2} + \epsilon a_-(\epsilon)$.
\betagin{remark} \lambdabel{sym-ex} By \eqref{h.inv}, we see that for each $j=1,2$, we have $h(\epsilonsilon, v_j) = h(\epsilon, g*v_j) = g*h(\epsilon, v_j)$ for all $g \in \mathbf{G}_j$. Therefore, we can write $Q_E$ and $\widetilde{Q}_E$ as
\[
Q_E = x_1 f_1(x_1^2, x_2^2+x_3^2), \quad
\widetilde{Q}_E = e^{i\thetaeta} f_2(x_1^2 +x_2^2, x_3^2)
\]
for some real functions $f_1, f_2$. Here, $\thetaeta \in [0,2\pi)$
is the angle between that $x_1$ axis and the vector
$x' = (x_1,x_2) \in \mathbb{R}^2$.
\end{remark}
Note that $Q_E, \; \widetilde{Q}_E \in H^2$ by construction.
Their exponential decay is standard; in particular,
the argument in the case of simple eigenvalues
-- see, eg, \cite{GNT} -- applies.
This completes the proof of Theorem~\ref{ex.sol}.
$\mathbf{B}ox$
\betagin{remark}
It is straightforward to express the full solution branches
explicitly in terms of symmetry transformations:
\betagin{equation}
\lambdabel{parameters}
Q(x) = e^{i r_1} Q_E( \mathbf{G}amma_0(r_2,r_3) x ) \quad \mbox{ and }
\quad Q(x) = e^{i r_1} \widetilde{Q}_E( \mathbf{G}amma_1(r_2,r_3) x)
\end{equation}
for parameters $E \in (e_1, e_1 + \lambdambda \epsilon_2^2)$,
$r_1, r_2, r_3 \in [0, 2\pi)$.
\end{remark}
\section{Spectral Analysis of the Linearized Operators Around Excited States} \lambdabel{sp.an}
The spectral properties are invariant under symmetry
transformations of the underlying solution, hence we
may fix one element of each branch of excited states.
So in what follows, let $Q = Q_E$ or $\widetilde{Q}_E$.
We write solution $\psi(t,x)$ of \eqref{NSE} as
$\psi(t,x) = [Q + \zetata]e^{-iEt}$. Then, from \eqref{NSE}, we have
\[
\partial_t \zetata = \mathcal{L}_Q \zetata + \ \text{nonlinear terms},
\]
where
\betagin{equation} \lambdabel{L.def}
\mathcal{L}_Q \zetata = -i\{(H_0 -E + 2\lambdambda |Q|^2) \zetata
+ \lambdambda Q^2 \bar{\zetata} \}.
\end{equation}
Decomposing $\zetata$ into its real and imaginary parts, we can extend
$\mathcal{L}_Q$ to $\overrightarrow{\mathcal{L}}_Q :\mathbf{E} \rightarrow \mathbf{E} $. That is,
\betagin{equation} \lambdabel{bL.gamma.def}
\overrightarrow{\mathcal{L}}_Q := J (H_0 - E) + W_Q, \quad \text{where} \quad
J := \betagin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad
W_Q : = \betagin{bmatrix} W_{1,Q} & W_{2,Q} \\ W_{3,Q} &
W_{4,Q} \end{bmatrix},
\end{equation}
with
\betagin{equation*}
\betagin{split}
\ & W_{1,Q} := 2 \lambdambda \mathbb{R}e Q \mathrm{I}_2m Q, \quad W_{4,Q} = - W_{1,Q}, \\
\ & W_{2, Q} := \lambdambda(2|Q|^2 + [\mathrm{I}_2m Q]^2 - [\mathbb{R}e Q]^2 ), \\
\ & W_{3, Q} := - \lambdambda(2|Q|^2 + [\mathbb{R}e Q]^2 - [\mathrm{I}_2m Q]^2).
\end{split}
\end{equation*}
For $Q = Q_{E}$ we will write $\mathbf{L}_0 := \overrightarrow{\mathcal{L}}_{Q_E}$ and for
$Q = \widetilde{Q}_E$, we will write $\mathbf{H}_0 := \overrightarrow{\mathcal{L}}_{\widetilde{Q}_E}$.
So
\betagin{equation} \lambdabel{bH.def}
\betagin{split}
& \mathbf{L}_0 = \betagin{bmatrix} 0 & L_{-} \\ -L_+ & 0 \end{bmatrix}, \quad \text{with}\quad L_\pm := H_0 - E + \lambdambda( 2 \pm 1) Q^2_E, \\
& \mathbf{H}_0 = J (H_0- E) + \betagin{bmatrix} W_1 & W_2 \\ W_3 & W_4 \end{bmatrix}, \ W_{j} := W_{j,\widetilde{Q}_E}.
\end{split}
\end{equation}
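As a consistency check between \eqref{L.def}, \eqref{bL.gamma.def} and \eqref{bH.def}, the following symbolic sketch recovers the block form of $\mathbf{L}_0$ directly from the complex linearization for real $Q$. Here $H$ stands in for $H_0 - E$ and is treated as a commuting scalar, which is harmless since $H_0 - E$ acts identically on real and imaginary parts.
\betagin{verbatim}
import sympy as sp

H, Q, lam = sp.symbols('H Q lambda', real=True)   # H stands in for H_0 - E; Q = Q_E is real
u, v = sp.symbols('u v', real=True)               # zeta = u + i v

zeta = u + sp.I*v
Lzeta = -sp.I*((H + 2*lam*Q**2)*zeta + lam*Q**2*sp.conjugate(zeta))

Lminus = H + lam*(2 - 1)*Q**2
Lplus  = H + lam*(2 + 1)*Q**2
# matches L_0 = [[0, L_-], [-L_+, 0]] acting on the vector (u, v)
assert sp.simplify(sp.re(Lzeta) - Lminus*v) == 0
assert sp.simplify(sp.im(Lzeta) + Lplus*u) == 0
\end{verbatim}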
We now state several lemmas which will be used in the next
sections. These lemmas closely parallel lemmas in~\cite{TY3},
and so we will omit some details.
\betagin{lemma} \lambdabel{Non-emb-eig}
Let $\Sigma_c : = \{ia : |a| \geq |E| \}$.
For all $\tau \in \Sigma_c$, $|\tau| \not= |E|$, the equation
$\overrightarrow{\mathcal{L}} \psi = \tau \psi$ has no non-zero solution $\psi \in \mathbf{E}$.
Moreover, $\pm i|E|$ are neither eigenvalues nor resonances.
\end{lemma}
\betagin{proof} Note that $\overrightarrow{\mathcal{L}} = J(-\Delta -E) + \widetilde{W}$ where
$\widetilde{W} = JV +W$. By the assumptions on $V$, and the exponential
decay of $Q$, the potential $\widetilde{W}$
decays quickly. The first part of the lemma then follows
as in~\cite[Section 2.4]{TY3}, making use of the
resolvent estimate Lemma~\ref{Resol.est}.
To prove that $\pm i|E|$ is not an eigenvalue or resonance, we may
use Lemma \ref{Resol.est} and the argument in \cite{JK},
or we can follow \cite[Section 2.5]{TY3}.
\end{proof}
\betagin{lemma} \lambdabel{outside}
Let $R(z) : = (\overrightarrow{\mathcal{L}} -z)^{-1}$, $R_0(z) = [JH -z]^{-1}$ and
$\Sigma_p:= \{0, \pm i(e_0-E)\}$ where $H=H_0-E$.
There exists an order one constant $C>0$ such that
\betagin{equation*}
\betagin{split}
& \norm{R(z)}_{(\mathbf{E},\mathbf{E})} \leq C[\norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})} + \norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})}^2], \ \forall z \notin \Sigma_c : dist(z, \Sigma_p) \geq \epsilon.
\end{split}
\end{equation*}
\end{lemma}
\betagin{proof} We have
\[
R(z) = [1 + R_0(z)W]^{-1} R_0(z)
= \sum_{j=0}^\infty(-1)^{j}[R_0(z)W]^jR_0(z).
\]
Using this and the facts that $R_0(z)$ is uniformly bounded in
$\mathbf{B}$ with bound $O(1/\epsilon)$ for $z$ with $dist(z, \Sigma_p) \geq \epsilon$ (see \cite{JK}), and $W$ is localized of order $\epsilon^2$, we obtain
\betagin{flalign*}
\norm{R(z)}_{(\mathbf{E}, \mathbf{E})} & \leq \norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})} + \sum_{j=1}^\infty C\norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})} \{C\epsilon^2 \norm{R_0(z)}_{\mathbf{B}}\}^{j-1} \norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})}\\
\ & \leq C\left\{\norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})} +\norm{R_0(z)}_{(\mathbf{E}, \mathbf{E})} ^2
\sum_{j=0}^\infty [C \epsilon ]^{j} \right \}
\end{flalign*}
For $\epsilon$ sufficiently small, the series on the right-hand side of the
last inequality converges, and the lemma follows.
\end{proof}
\betagin{lemma} \lambdabel{R0.sp.est} Let $\mathbf{D}_1 :=\{a+ib : |b - (e_0-E)|\leq \epsilon, 0 < a \leq \epsilon\}$ and $R_0(z) = (JH -z)^{-1}$. Then for some fixed $s>3$, there is a constant $C>0$ such that for all $z \in \mathbf{D}_1$,
\betagin{equation}\lambdabel{Ro.est.lo}
\betagin{split}
& \norm{\lambdangle x \rangle^{-s} \mathbf{P}_c R_0(z) \mathbf{P}_c \lambdangle x \rangle^{-s}}_{(L^2, L^2)} \leq C, \\
& \norm{\lambdangle x \rangle^{-s} \mathbf{P}_c\frac{d}{dz} R_0(z) \mathbf{P}_c \lambdangle x \rangle^{-s}}_{(L^2, L^2)} \leq C(\mathbb{R}e z)^{-1/2}.
\end{split}
\end{equation}
Here $\mathbf{P}_c = \mathbf{P}_c(JH) = \betagin{bmatrix}\mathbf{P}_c(H) \\ \mathbf{P}_c(H) \end{bmatrix}$. Moreover, for $z_1, z_2 \in \mathbf{D}_1$, we have
\betagin{equation} \lambdabel{R0-R0}
\norm{\lambdangle x \rangle^{-s} \mathbf{P}_c[ R_0(z_1) - R_0(z_2)] \mathbf{P}_c \lambdangle x \rangle^{-s}}_{(L^2, L^2)} \leq C[\max \{ \mathbb{R}e z_1, \mathbb{R}e z_2\}]^{-1/2} |z_1 -z_2|. \end{equation}
\end{lemma}
\betagin{proof} We write $R_0(z) = R_{01}(z) + R_{02}(z)$, where
\betagin{equation} \lambdabel{R0.dec} R_{01}(z):= \frac{1}{2}\betagin{bmatrix} i & -1 \\ 1 & i\end{bmatrix}(H -iz)^{-1}, \quad R_{02}(z):= \frac{1}{2}\betagin{bmatrix} -i & -1 \\ 1 & -i\end{bmatrix}(H + iz)^{-1}.\end{equation}
From this and since $R_{02}$ is regular in $\mathbf{D}_1$, we shall
prove the lemma with $R_0(z)$ replaced by $T_0(z):= (H
-iz)^{-1}\mathbf{P}_c(H)$. The first estimate in \eqref{Ro.est.lo} is
standard and therefore we skip its proof. The second estimate in
\eqref{Ro.est.lo} follows from \eqref{R0-R0}. So, it's sufficient to
prove \eqref{R0-R0}. The proof now is similar to that of \cite[Lemma
2.3]{TY3} and we shall only give the main steps.
For any $z_1, z_2 \in \mathbf{D}_1$, write $z_1 = a_1 +ib_1, z_2 = a_2 + ib_2$ and assume that $a_1 \leq a_2$. Let $z_3 = a_2 + ib_1$. We have $z_3 \in \mathbf{D}_1$. For any $u,v \in L^2$ with $\norm{u}_2 = \norm{v}_2 =1$, let $u_1 = \mathbf{P}_c \wei{x}^{-s}u,\ v_1 = \mathbf{P}_c\wei{x}^{-s}v$. We have $u_1, v_1 \in L^1 \cap L^2$. Now, using the decay estimate $\norm{e^{-itH}\mathbf{P}_c \phi}_{L^\infty} \leq C(1+t)^{-3/2}\norm{\phi}_{L^1}$ with $H = H_0 -E$, we get
\betagin{flalign*}
& |(u,\wei{x}^{-s}\mathbf{P}_c[T_0(z_1) -T_0(z_3)]\mathbf{P}_c \wei{x}^{-s}v)| \\
& \quad = \left |\int_0^{\infty}(u_1, [e^{-it(H-i(a_1+ib_1))} -e^{-it(H-i(a_2+ib_1))}]v_1)dt \right |\\
& \quad = \left |\int_0^{\infty}(u_1, e^{-it(H + b_1)}v_1)(e^{-a_1t} -e^{-a_2t})dt \right |\\
& \quad \leq C\int_0^{\infty}(1+t)^{-3/2}(e^{-a_1t} -e^{-a_2t}) dt \leq Ca_{2}^{-1/2}(a_2 -a_1).
\end{flalign*}
On the other hand, we have
\betagin{flalign*}
& |(u,\wei{x}^{-s}\mathbf{P}_c[T_0(z_3) -T_0(z_2)]\mathbf{P}_c \wei{x}^{-s}v)| \\
& \quad = \left |\int_0^{\infty}(u_1, [e^{-it(H-i(a_2+ib_1))} -e^{-it(H-i(a_2+ib_2))}]v_1)dt \right |\\
& \quad = \left |\int_0^{\infty}(u_1, e^{-it(H + b_2-ia_2)}v_1)(e^{it(b_2-b_1)} -1)dt \right |\\
& \quad \leq C\int_0^{\infty}(1+t)^{-3/2}e^{-ta_2}|e^{it(b_2-b_1)} -1| dt \leq Ca_{2}^{-1/2}|b_2 -b_1|.
\end{flalign*}
Therefore,
\betagin{flalign*}
& |(u,\wei{x}^{-s}\mathbf{P}_c[T_0(z_1) -T_0(z_2)]\mathbf{P}_c \wei{x}^{-s}v)| \\
& \quad \leq |(u,\wei{x}^{-s}\mathbf{P}_c[T_0(z_1) -T_0(z_3)]\mathbf{P}_c \wei{x}^{-s}v)| + |(u,\wei{x}^{-s}\mathbf{P}_c[T_0(z_3) -T_0(z_2)]\mathbf{P}_c \wei{x}^{-s}v)|\\
& \quad \leq Ca_2^{-1/2}(|a_1-a_2| + |b_1-b_2|) \leq C \mathbb{R}e (z_2)^{-1/2} |z_1-z_2|.
\end{flalign*}
This completes the proof of the lemma.
\end{proof}
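The matrix identity \eqref{R0.dec} used at the beginning of the proof can be verified symbolically, again treating $H$ as a commuting scalar; this is only a sketch of the algebra, not of the operator-theoretic statement.
\betagin{verbatim}
import sympy as sp

h, z = sp.symbols('h z')   # h stands in for H = H_0 - E, treated as a commuting scalar

J   = sp.Matrix([[0, 1], [-1, 0]])
R0  = (J*h - z*sp.eye(2)).inv()
R01 = sp.Rational(1, 2)*sp.Matrix([[ sp.I, -1], [1,  sp.I]]) / (h - sp.I*z)
R02 = sp.Rational(1, 2)*sp.Matrix([[-sp.I, -1], [1, -sp.I]]) / (h + sp.I*z)
assert all(sp.simplify(e) == 0 for e in (R0 - R01 - R02))
\end{verbatim}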
\subsection{Spectral Analysis of $\mathbf{L}_0$}
We study the spectrum of $\mathbf{L}_0$ first.
From \eqref{J-sigma.def} and \eqref{bH.def}, we have
\betagin{equation} \lambdabel{adjoint}
\sigma_1 \mathbf{L}_0 = \mathbf{L}^*_0 \sigma_1, \quad \sigma_3 \mathbf{L}_0 = -\mathbf{L}_0 \sigma_3, \quad \text{where}
\quad \mathbf{L}^*_0 = \betagin{bmatrix} 0 & -L_+ \\ L_- & 0 \end{bmatrix},
\end{equation}
\betagin{lemma}[Spectrum of $L_-$] \lambdabel{L-.spectrum}
For $Q = Q_E$, the following hold:
\betagin{itemize}
\item[\textup{(i)}]
The discrete spectrum of $L_-$ consists of $\{\widetilde{e}_0, \widetilde{e}_{2},
0\}$ where $0$ and $\widetilde{e}_0 = e_0 -e_1 +O(\epsilonsilon^2)$
are simple eigenvalues, and
$\widetilde{e}_2 = \widetilde{e}_3 = -2 \lambdambda \epsilonsilon^2/3 + O(\epsilonsilon^3)$
is a double eigenvalue. For $j = 0,1,2,3$, there exist localized orthonormal functions $\widetilde\phi_j = \phi_j + O(\epsilonsilon^2)$ such that
\[ L_- \widetilde \phi_0 = \widetilde{e_0} \widetilde\phi_0, \quad L_- \widetilde \phi_1 =0, \quad L_ - \widetilde \phi_j = \widetilde{e}_j \widetilde\phi_j, \ j = 2,3. \]
Moreover, $\widetilde \phi_0(x)$ is even with respect to $x_2, x_3$ and $\widetilde \phi_{j}(x)$ are odd with respect to $x_j$ and even with respect to $x_k$ for all $j, k>1$ and $j \not=k$.
\item[\textup{(ii)}] The continuous spectrum of $L_-$ is $\sigma_c(L_-) = [|E|, \infty)$.
\end{itemize}
\end{lemma}
\betagin{proof} We shall just prove (i), as (ii) is standard.
Since $L_-$ is a small perturbation of $H = H_0 -E$, they have the
same number of discrete eigenvalues.
From Assumptions A1 and A2, we see that $H$ has a simple eigenvalue $e_0 -E$
with eigenfunction $\phi_0$ and a triple eigenvalue $e_1 -E$ with
eigenfunctions $\phi_1, \phi_2, \phi_3$. We will compute the
discrete eigenvalues of $L_-$ which are perturbations of
the discrete eigenvalues of $H$.
Since $e_0-E$ is simple, by standard perturbation theory, we can find
an eigenfunction $\widetilde \phi_{0} = \phi_0 + O(\epsilonsilon^2)$ such that
$L_{-} \widetilde \phi_{0} = \widetilde e_{0} \widetilde \phi_{0}$ and $\widetilde e_{0} = e_0 -E
+ O(\epsilon^2) = e_0 - e_1 + O(\epsilonsilon^2)$.
Moreover, direct computation gives $L_-Q =0$.
Therefore, with $\widetilde{\phi}_1 = \frac{1}{\norm{Q}_{L^2}}Q$,
we have $L_- \widetilde{\phi}_1 =0$.
Now, we shall show that there exists a double eigenvalue
$\widetilde{e}_2 = \widetilde{e}_3 = -2\lambda\epsilon^2/3 + O(\epsilon^3)$ of $L_-$.
In other words, we need to solve $L_- \widetilde \phi = e \widetilde \phi$ for
$e$ close to $e_1 - E = O(\epsilon^2)$.
Again, we write $\widetilde \phi = z \cdot \phi + g$, where $g$ is in $\mathbf{V}^\perp$ and $z \in \mathbb{R}^3$. Then
\betagin{equation} \lambdabel{ei.Lm}
(H_0 -e_1) g = (e+ \lambda(\epsilonsilon^2 - Q^2))(z \cdot \phi + g). \end{equation}
This equation is solvable if and only if
\[
(\phi_j, (e+ \lambda \epsilonsilon^2)z \cdot \phi - \lambda Q^2 z \cdot \phi )
= \lambda(\phi_j, Q^2 g) \quad \forall \ j = 1, 2,3.
\]
Since $Q^2 = \rho^2 \phi_1^2 + 2\rho \phi_1 h + h^2$ where $h$
is a function of $x_1, |x'|$ which is odd in $x_1$, we get
\betagin{equation} \lambdabel{e.cond}
\left \{ e + \lambda \epsilonsilon^2 - \lambda[\rho^2(\phi_1^2, \phi_j^2)
+ 2\rho(\phi_j^2 \phi_1, h) + (\phi_j^2, h^2)]\right \}
z_j = \lambdambda(\phi_j, Q^2g).
\end{equation}
Moreover, since $Q_E(x)$ depends on $x_1, |x'|$ and it is odd in $x_1$ for $x=(x_1, x')$, we see that when we restrict $g$ to the class of function which is odd in $x_2$ and even in $x_1, x_3$ we have $(\phi_j, Q^2g) =0$ for $j=1,3$. Solving the equation \eqref{e.cond} in this class of functions $g$, we obtain an eigenvalue
\[
e =\widetilde e_{2} := \lambda \left[ I \rho^2 + 2\rho(\phi_2^2 \phi_1, h) +
(\phi_2^2, h^2) - \epsilonsilon^2 \right] + O(\epsilonsilon^4)
= -\frac{2}{3} \lambda \epsilonsilon^2 + O(\epsilonsilon^3)
\]
of $L_-$ with a normalized eigenfunction $\widetilde \phi_{2} = \phi_2 + g_{2}$, where $g_{2} = O(\epsilonsilon^2)$ which is odd in $x_2$ and even in $x_1, x_3$.
Now, let $\widetilde{\phi}_3 = (R_{23}(\pi/2),1) * \widetilde{\phi}_2$. Since $(R_{23}(\pi/2),1)*Q_E = Q_E$, we see that $L_-\widetilde{\phi}_3 = \widetilde{e}_2 \widetilde{\phi}_3$. Moreover, $\widetilde{\phi}_3$ is odd in $x_3$ and even in $x_1, x_2$. Therefore, $(\widetilde{\phi}_2, \widetilde{\phi}_3) =0$. So, $\widetilde{e}_2$ has multiplicity at least two. Then, by counting the number of discrete eigenvalues of $L_-$ and $H$, we see that $\widetilde{e}_2$ has multiplicity two and we have found all of the discrete eigenvalues of $L_-$. This completes the proof of the lemma.
\end{proof}
\betagin{lemma}[Spectrum of $L_+$] \lambdabel{L+.spectrum}
For $Q = Q_E$, the following statements hold
\betagin{itemize}
\item[\textup{(i)}] The discrete spectrum of $L_+$ consists of
$\{\hat{e}_0, 0, \hat{e}_1\}$ where
$\hat{e}_0 = e_0 -e_1 +O(\epsilonsilon^2)$ and
$\hat{e}_1 = 2 \lambda \epsilonsilon^2 + O(\epsilonsilon^3)$
are simple eigenvalues, and $0$ is a double eigenvalue.
For $j = 0,1,2,3$, there exist localized orthonormal functions
$\varphi_j = \phi_j + O(\epsilonsilon^2)$ such that
\[
L_+ \varphi_k = \hat{e}_k \varphi_k, \quad \text{for}
\ k = 0,1, \text{and} \quad L_+ \varphi_j = 0,
\ \text{for} \ j = 2,3.
\]
Moreover, $\varphi_0(x)$ is even with respect to $x_2, x_3$ and $\varphi_{j}(x)$ are odd with respect to $x_j$ and even with respect to $x_k$ for all $j, k>1$ and $j \not=k$.
\item[\textup{(ii)}] The continuous spectrum of $L_+$ is $\sigma_c(L_+) = [|E|, \infty)$.
\end{itemize}
\end{lemma}
\betagin{proof} Again, we only need to prove (i).
The existence of $\varphi_0$ and $\hat{e}_0$ follows from standard
perturbation theory. Now consider the eigenvalues which are a
perturbation of $e_1 -E$. Recall that
$Q_E(\mathbf{G}amma_0(r_2,r_3) x)$ are all solutions, where
$(r_2,r_3) \in [0, 2\pi)^2$.
From \eqref{basic.rotation} and \eqref{Gamma.def}, we have
\betagin{equation} \lambdabel{gamma0.def}
\mathbf{G}amma_0 = \mathbf{G}amma_0(r_2, r_3) =
\betagin{bmatrix} \cos(r_2) \cos (r_3) & - \sin(r_2) & \cos(r_2) \sin(r_3) \\
\sin(r_2)\cos(r_3) & \cos (r_2) & \sin(r_2)\sin(r_3) \\
-\sin(r_3) & 0 & \cos(r_3)
\end{bmatrix}.
\end{equation}
In particular,
\betagin{equation} \lambdabel{mo.deri}
\frac{\partial \mathbf{G}amma_0(0,0)}{\partial r_2} =
\betagin{bmatrix} 0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0 \\
\end{bmatrix}, \quad
\frac{\partial \mathbf{G}amma_0(0,0)}{\partial r_3} =
\betagin{bmatrix} 0 & 0 & 1 \\
0 & 0 & 0 \\
-1 & 0 & 0 \\
\end{bmatrix}.
\end{equation}
Now, let
\betagin{equation} \lambdabel{Z.def}
Z_1 := \frac{\partial Q_E}{\partial E}, \quad
Z_2 := \frac{\partial Q_E(\mathbf{G}amma_0(r_2,r_3) x)}{\partial r_2}
\left |_{r_2=r_3=0} \right., \quad
Z_3 := \frac{\partial Q_E(\mathbf{G}amma_0(r_2,r_3) x)}{\partial r_3}
\left |_{r_2=r_3=0} \right. .
\end{equation}
From \eqref{mo.deri}, we see that
\betagin{flalign*}
Z_1(x) & = \partial_{E} [\rho(\epsilonsilon)] \phi_1 + \partial_E h, \\
Z_2 (x) & = (x_2\partial_1 - x_1\partial_2 )Q_E(x) = \rho(\epsilonsilon) \phi_2 + (x_2\partial_1 - x_1\partial_2 ) h ,\\
Z_3(x) & = (x_3 \partial_1 - x_1 \partial_3) Q_E(x) = \rho(\epsilonsilon) \phi_3 +(x_3 \partial_1 - x_1 \partial_3) h.
\end{flalign*}
Since $Q_E(x)$ is even with respect to $x_2$ and $x_3$, so is $Z_1$. Moreover, $Z_2$ is odd with respect to $x_2$ and even with respect to $x_3$ and $Z_3$ is odd with respect to $x_3$ and even with respect to $x_2$. Therefore, we have
\betagin{equation} \lambdabel{gr1.orth}
(Z_1, Z_2) = (Z_1, Z_3) = (Z_2, Z_3) = 0.
\end{equation}
Now, recall the equation~\eqref{nl.ext} for the excited states:
\[
(H_0 -E) Q + \lambdambda |Q|^2 Q = 0.
\]
Taking $Q = Q_E(\mathbf{G}amma_0(r_2,r_3) x)$ and differentiating this
equation with respect to $E$, $r_2$ and $r_3$, we find
\[
L_+ Z_1 = Q_E, \quad L_+ Z_2 = L_+ Z_3 = 0.
\]
So
\[
\varphi_2 = \frac{1}{||Z_2||_{L^2}}Z_2, \quad
\varphi_3 = \frac{1}{||Z_3||_{L^2}}Z_3
\]
are (orthonormal) zero-eigenfunctions.
The computation of $\varphi_1$ and $\hat{e}_1$ is exactly the same
as in Lemma~\ref{L-.spectrum}. In particular, as in~\eqref{e.cond},
we get
\[
\hat{e}_1 = 3 \lambda[3I \rho^2 + 2\rho(\phi_1^3, h) + (\phi_1^2, h^2)]
- \lambda \epsilonsilon^2 + O(\epsilonsilon^4) = 2 \lambda \epsilonsilon^2 + O(\epsilonsilon^3).
\]
This completes the proof of the lemma.
\end{proof}
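Two elementary ingredients of the last two proofs can be checked numerically: the derivative formulas \eqref{mo.deri}, and the leading-order arithmetic giving $\widetilde{e}_2 = -\tfrac{2}{3}\lambda\epsilon^2$ and $\hat{e}_1 = 2\lambda\epsilon^2$. The sketch below is only an illustration; the sample values of $\lambda$, $I$ and $\epsilon$ are arbitrary.
\betagin{verbatim}
import numpy as np

def R12(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R13(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

Gamma0 = lambda r2, r3: R12(r2) @ R13(r3)

# (mo.deri): central finite differences of Gamma_0 at (0,0)
d = 1e-6
dG_dr2 = (Gamma0(d, 0.0) - Gamma0(-d, 0.0)) / (2*d)
dG_dr3 = (Gamma0(0.0, d) - Gamma0(0.0, -d)) / (2*d)
assert np.allclose(dG_dr2, [[0, -1, 0], [1, 0, 0], [0, 0, 0]], atol=1e-8)
assert np.allclose(dG_dr3, [[0, 0, 1], [0, 0, 0], [-1, 0, 0]], atol=1e-8)

# leading-order eigenvalue arithmetic, with arbitrary sample values of lambda, I, epsilon
lam, I0, eps = 0.7, 1.3, 1e-2
rho2 = eps**2 / (3*I0)                    # rho(eps)^2 to leading order
e2_tilde = lam*(I0*rho2 - eps**2)         # from (e.cond) with j = 2
e1_hat = 3*lam*(3*I0*rho2) - lam*eps**2   # analogous computation for L_+ with j = 1
assert np.isclose(e2_tilde, -2*lam*eps**2/3)
assert np.isclose(e1_hat, 2*lam*eps**2)
\end{verbatim}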
Now, with $Z_j$ as in \eqref{Z.def}, we have
\betagin{equation} \lambdabel{gr1.eign}
\mathbf{L}_0 \betagin{bmatrix} 0 \\ Q_E
\end{bmatrix} = 0, \quad
\mathbf{L}_0 \betagin{bmatrix} Z_1 \\ 0
\end{bmatrix} = -\betagin{bmatrix} 0 \\ Q_E
\end{bmatrix}, \quad
\mathbf{L}_0 \betagin{bmatrix} Z_2 \\ 0
\end{bmatrix} = \mathbf{L}_0 \betagin{bmatrix} Z_3 \\ 0
\end{bmatrix} = 0.
\end{equation}
Moreover, from Lemma \ref{L-.spectrum}, we have $\text{kernel}(L_-) = \text{span} \{Q_E\}$. Since $(Q_E, Z_2) = (Q_E, Z_3) =0$, there are two functions $Y_2$ and $Y_3$ such that
\betagin{equation} \lambdabel{Y23.def}
L_- Y_2 = Z_2 \quad L_- Y_3 = Z_3 \quad \text{and} \quad (Q_E, Y_2) = (Q_E, Y_3) = (Y_2, Y_3) = 0.
\end{equation}
\betagin{lemma} \lambdabel{inner} For all $ j =1,2,3$ let $\alphapha_j := -(Z_j, Y_j)^{-1}$ where $Y_1 = Q_E$. Then, $\alphapha_j$ is finite for all $j =1,2,3$.
\end{lemma}
\betagin{proof} We need to show that $(Z_j,Y_j) \not=0$ for all $j
=1,2,3$. For $j =1$, we have $(Y_1, Z_1) = (Q_E, \partial_E Q_E) =
(6I)^{-1} + O(\epsilonsilon) \not =0$ for $\epsilonsilon$ sufficiently small.
Now, we shall show that $(Z_j, Y_j) \not=0$ for $\epsilonsilon$
sufficiently small. Since the proof for the case $j =2$ is identical
to that
of the case $j=3$, we only need to prove $(Z_2, Y_2) \not=0$.
Recall that $Z_2 = a(\epsilonsilon) \phi_2 + \widetilde{h}$ for some localized order $\epsilonsilon^3$ function $\widetilde{h}$ which is odd in $x_2$ and even in $x_3$. We write
\[
Y_2 = \sum_{j=0}^3 b_j \widetilde{\phi}_j + g, \quad g \perp
\widetilde{\phi}_j, \ \forall \ j = 0,1,2,3.
\]
Here $\widetilde{\phi}_j$ are the eigenfunctions of $L_-$ defined in
Lemma~\ref{L-.spectrum}. By choosing $Y_2$ not to have a
component in the kernel of $L_-$, we can assume $b_1 =0$.
Moreover, since $L_- Y_2 = Z_2$ we have
\[
L_-g = - \sum_{j=0}^3 b_j \widetilde{e}_j \widetilde \phi_j +
\rho(\epsilonsilon) \phi_2 + \widetilde{h}.
\]
Taking the inner product of this equation with $\widetilde{\phi}_j$ for $j =0,1,2,3$ and using the symmetry properties of these functions, we get
\[ \widetilde{e_2}b_2 = \rho(\epsilonsilon)(\widetilde{\phi}_2, \phi_2) + (\widetilde{\phi}_2, \widetilde{h}) = \rho(\epsilonsilon) + O(\epsilonsilon^3), \quad b_j = 0, \ \forall \ j \not= 2. \]
Since $\widetilde{e}_2 = -2 \lambda \epsilonsilon^2/3 + O(\epsilonsilon^3)$, $\widetilde{\phi}_2 = \phi_2 + O(\epsilonsilon^2)$ we see that
\[
b_2 = -\frac{3 \lambda \epsilonsilon^{-1}}{2(3I)^{1/2}} + O(1).
\]
To sum up, we have $Y_2 = O(\epsilonsilon^{-1})\, \widetilde{\phi}_2 + g$, where $g$ lies in the continuous spectral subspace of $L_-$ and satisfies
\[ L_ - g = O(\epsilonsilon^3) \phi_2 + O(\epsilonsilon^3) = O(\epsilonsilon^3). \]
This implies that $g = O(\epsilonsilon)$. Therefore $(Y_2, Z_2) = O(1) + O(\epsilonsilon) \not=0$ for $\epsilonsilon$ sufficiently small. This completes the proof of the Lemma.
\end{proof}
Now, we have the following theorem on the spectrum of $\mathbf{L}_0$:
\betagin{theorem}[Invariant Subspaces of $\mathbf{L}_0$ - Resonant Case]
\lambdabel{Spectrum.R}
Assume that $e_0 < 2e_1$. The space
$\mathbf{E} := L^2(\mathbb{R}^3, \mathbb{C}^2)$
can be decomposed as the direct sum of $\mathbf{L}_0$-invariant subspaces
\betagin{equation} \lambdabel{E.R.dec}
\mathbf{E} = \oplus_{j=1}^3 \mathbf{E}_{0j} \oplus \mathbf{E}_+ \oplus \mathbf{E}_- \oplus \mathbf{E}_c.
\end{equation}
If $f$ and $g$ belong to different subspaces, then $\wei{J f, g} =0$. These subspaces and their corresponding projections satisfy the following:
\betagin{itemize}
\item[\textup{(i)}] For each $j =1,2,3$, let
\[
\Phi_{0j} := \frac{\partial}{\partial r_j}
\overrightarrow{e^{-i r_1} Q_E}(\mathbf{G}amma_0(r_2,r_3) x) \big|_{r_1=r_2=r_3=0}.
\]
Also, let $\widetilde{\Phi}_{01} = \frac{\partial Q_E}{\partial E}$
and $\widetilde{\Phi}_{0k} := \betagin{bmatrix}0 \\ Y_k \end{bmatrix}$ for $k=2,3$. Then, we have $\mathbf{L}_0 \widetilde{\Phi}_{0j} = \Phi_{0j}, \ \mathbf{L}_0 \Phi_{0j} =0$ and
\[ \alphapha_j^{-1} = \wei{J \Phi_{0j}, \widetilde{\Phi}_{0j}}, \quad \wei{J \Phi_{0j}, \Phi_{0j}} = \wei{J \widetilde{\Phi}_{0j}, \widetilde{\Phi}_{0j}} =0. \]
Moreover, the subspace $\mathbf{E}_{0j}$ is spanned by $\{\Phi_{0j}, \widetilde{\Phi}_{0j}\}$ with the projection $\mathbf{P}_{0j}$ from $\mathbf{E}$ onto $\mathbf{E}_{0j}$ defined by
\betagin{equation*}
\mathbf{P}_{0j}f = -\alphapha_j\wei{J \widetilde{\Phi}_{0j}, f} \Phi_{0j} + \alphapha_j\wei{J\Phi_{0j}, f} \widetilde{\Phi}_{0j}, \ \forall \ f \in \mathbf{E}, \ \ j = 1,2,3.
\end{equation*}
\item[\textup{(ii)}]
There are four simple eigenvalues
$\omega_1, \omega_2 := -\bar{\omega}_1, \omega_3 := \bar{\omega}_1,
\omega_4 := -\omega_1$ of $\mathbf{L}_0$ with corresponding eigenvectors
$\Phi_1, \Phi_2 : = \sigma_3 \bar{\Phi}_1, \Phi_3: = \bar{\Phi}_1,
\Phi_4:= \sigma_3 \Phi_1$ where $\Phi_1 := \betagin{bmatrix} \phi_0 \\
-i \phi_0 \end{bmatrix} + h_1$, $\omega_1 := i\kappappa + \gammamma$ with
$\kappappa := e_1 - e_0 + O(\epsilonsilon^2) >0$ and $\gammamma := \gammamma_0
\epsilonsilon^4 + O(\epsilonsilon^5)$ for some positive order one constant
$\gammamma_0$. The function $h_1$ satisfies $\norm{h_1}_{L^p} \leq
C[\epsilon^2 + \epsilon^{6-\frac{12}{p}}]$ for all $1 \leq p \leq \infty$ and
$\norm{h_1}_{L^2_{\mathrm{loc}}} \leq C\epsilon^2$ and $(J\Phi_1,\Phi_1)=0$.
The subspaces $\mathbf{E}_+, \; \mathbf{E}_-$ are defined as
\betagin{equation*}
\mathbf{E}_+ := \text{span}_{\mathbb{C}} \left \{ \Phi_1, \Phi_2\right \}
, \quad \mathbf{E}_- := \text{span}_{\mathbb{C}}
\left \{ \Phi_3, \Phi_4 \right \}.
\end{equation*}
The projections of $\mathbf{E}$ onto $\mathbf{E}_+$ and $\mathbf{E}_-$ are defined by
\betagin{equation*}
\betagin{split}
\mathbf{P}_+ f &= c_1\wei{J\Phi_1,f} \Phi_2 -\bar{c}_1\wei{J\Phi_2,f}\Phi_1, \\
\mathbf{P}_- f &= \bar{c}_1\wei{J\Phi_3,f} \Phi_4 - c_1\wei{J\Phi_4,f}\Phi_3, \quad \text{with} \quad c_1 := \wei{J\Phi_1, \Phi_2}^{-1}.
\end{split}
\end{equation*}
\item[\textup{(iii)}] $\mathbf{E}_c = \{g: \wei{J f, g} =0, \quad \forall f \in \oplus_{j=1}^3\mathbf{E}_{0j} \oplus \mathbf{E}_+ \oplus \mathbf{E}_- \}$. Its corresponding projection is $\, \mathbf{P}\! _\mathrm{c} \, \! (\mathbf{L}_0) = Id - \sum_{j=1}^3\mathbf{P}_{0j}(\mathbf{L}_0) - \mathbf{P}_+(\mathbf{L}_0) - \mathbf{P}_-(\mathbf{L}_0)$.
\end{itemize}
\end{theorem}
\setlength{\unitlength}{1mm}\noindent
\betagin{center}
\betagin{picture}(120,80)
\put (0,40){\varepsilonctor(1,0){120}}
\put (120,42){\makebox(0,0)[c]{\scriptsize x}}
\put (60,0){\varepsilonctor(0,1){80}}
\put (60,82){\makebox(0,0)[c]{\scriptsize y}}
\multiput(60, 76)(0.5, 0){30}{\circle*{0.1}}\put (75,76){\varepsilonctor(1,0){1}}
\put(82,76){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{L}_0)$}}
{\color{red}\put (60.15,60){\line(0,1){18}}\put (59.8,60){\line(0,1){18}}}
\put(64,60){\makebox(0,0)[c]{\scriptsize$|E|$}}
{\color{red}\put (60.15,20){\line(0,-1){20}}\put (59.8,20){\line(0,-1){20}}}
\put(64,20){\makebox(0,0)[c]{\scriptsize$-|E|$}}
\put(60,40){\circle*{1.5}}
\multiput(60, 40)(0.8, -0.4){30}{\circle*{0.1}}\put (84,28){\varepsilonctor(2,-1){1}}
\put(100,25){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_0 =\{\Phi_{0j}, \widetilde{\Phi}_{0j}, \ j = 1,2, 3\}$}}
\put(40,72){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_{+}= \{\Phi_1, \Phi_2\}$}}
\put(62,66){\circle*{1.5}}
\put(80,66){\makebox(0,0)[c]{\scriptsize$(\omega_1, \Phi_1),\ \mathbb{R}e(\omega_1) = O(\epsilon^4)$}}
\put(58,66){\circle*{1.5}}
\put(41,66){\makebox(0,0)[c]{\scriptsize$(\omega_2 =-\bar{\omega}_1, \Phi_2 =\sigma_3\bar{\Phi}_1)$}}
\put(62,14){\circle*{1.5}}
\put(77,14){\makebox(0,0)[c]{\scriptsize$(\omega_3 =\bar{\omega}_1, \Phi_3= \bar{\Phi}_1)$}}
\put(58,14){\circle*{1.5}}
\put(43,14){\makebox(0,0)[c]{\scriptsize$(\omega_4=\bar{\omega}_2, \Phi_4 = \bar{\Phi}_2)$}}
\put(40,8){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_{-}= \{\Phi_3, \Phi_4\}$}}
\multiput(60, 4)(0.5, 0){30}{\circle*{0.1}}\put (75,4){\varepsilonctor(1,0){1}}
\put(82,4){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{L}_0)$}}
\end{picture}
Figure 1: Spectrum of $\mathbf{L}_0$ in the resonant case.
\end{center}
\betagin{proof}
From \eqref{gr1.eign}, we have $\mathbf{L}_0 \Phi_{0j} =0$ and
$\mathbf{L}_0 \widetilde{\Phi}_{0j} = \Phi_{0j}$ for all $j=1,2,3$. So,
from \eqref{adjoint}, \eqref{gr1.eign}, \eqref{Y23.def} and
Lemma \ref{inner}, the statement (i) follows. Moreover, we have
\[
{\mathbf{L}_0}_{ \left |_{\mathbf{E}_0} \right.} =
\betagin{bmatrix} 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{bmatrix}.
\]
The proofs of (ii) and (iii) are similar and simpler than the
proofs of (iii) and (iv) of Theorem~\ref{Spectrum.C.R} below,
so they are omitted.
\end{proof}
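The algebra behind the projections $\mathbf{P}_{0j}$ in Theorem~\ref{Spectrum.R}(i) can be illustrated by a finite-dimensional toy computation with a real antisymmetric pairing (a sketch of the algebraic mechanism only, not of the operator statement): for vectors $\Phi, \widetilde{\Phi}$ the relations $\wei{J\Phi,\Phi} = \wei{J\widetilde{\Phi},\widetilde{\Phi}} = 0$ hold automatically, and with $\alphapha^{-1} = \wei{J\Phi,\widetilde{\Phi}}$ the stated formula defines a projection acting as the identity on $\text{span}\{\Phi,\widetilde{\Phi}\}$.
\betagin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
J = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])
Phi, tPhi = rng.normal(size=4), rng.normal(size=4)
alpha = 1.0 / np.dot(J @ Phi, tPhi)          # alpha^{-1} = <J Phi, tildePhi>

def P(f):
    # P f = -alpha <J tildePhi, f> Phi + alpha <J Phi, f> tildePhi
    return -alpha*np.dot(J @ tPhi, f)*Phi + alpha*np.dot(J @ Phi, f)*tPhi

f = rng.normal(size=4)
assert np.allclose(P(Phi), Phi) and np.allclose(P(tPhi), tPhi)
assert np.allclose(P(P(f)), P(f))            # P is a projection
\end{verbatim}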
Now, if we denote
\[
\mathbf{L}_r := \mbox{ the linearized operator around excited state }
\;\; e^{i r_1} Q_E(\mathbf{G}amma_0(r_2,r_3) x),
\]
for $r = (r_1, r_2, r_3) \in \mathbb{R}^3$, then
\[
\mathbf{L}_r \left( r*\betagin{bmatrix} u \\ v \end{bmatrix}\right) =
r* \left( \mathbf{L}_0 \betagin{bmatrix} u \\ v \end{bmatrix}\right), \
\text{where}\ r* \betagin{bmatrix} f(x) \\ g(x)\end{bmatrix} :=
e^{Jr_1}\betagin{bmatrix} f(\mathbf{G}amma_0(r_2,r_3)x) \\
g(\mathbf{G}amma_0(r_2,r_3)x) \end{bmatrix}.
\]
This observation leads to the following corollary.
\betagin{corollary} \lambdabel{Cor.R} For $r=(r_1,r_2,r_3) \in \mathbb{R}^3$ and for $j=1,2,3$, let $\Phi_{0j}^r : = r* \Phi_{0j}$ and $\widetilde{\Phi}_{0j}^r = r* \widetilde{\Phi}_{0j}$. Similarly, for $j = 1,2,3,4$, let $\Phi_j^r := r*\Phi_j$. Also, let
\betagin{flalign*}
\mathbf{E}_{0j}^r & := \text{span}\{\Phi_{0j}^r, \widetilde{\Phi}_{0j}^r\}, \ j = 1,2,3, \\
\mathbf{E}_+^r & := \text{span}\{\Phi_1^r, \Phi_2^r \}, \quad \mathbf{E}_-^r := \text{span}\{\Phi_3^r, \Phi_4^r \},\\
\mathbf{E}_c^r & := \{f \in \mathbf{E}\ : \wei{Jf, g} = 0, \ \forall \ g \in \oplus_{j=1}^3 \mathbf{E}_{0j}^r \oplus \mathbf{E}_-^r \oplus \mathbf{E}_+^r \}.
\end{flalign*}
Then $\mathbf{E}_{0j}^r, \mathbf{E}_k^r$ are invariant under $\mathbf{L}_r$,
$\mathbf{E} = \oplus_{j=1}^3 \mathbf{E}_{0j}^r \oplus \mathbf{E}_-^r \oplus \mathbf{E}_+^r \oplus
\mathbf{E}_c^r$, and the projections from $\mathbf{E}$ into these subspaces are
defined exactly as the corresponding projections in Theorem
\ref{Spectrum.R}.
Moreover,
\[
\mathbf{L}_r \widetilde{\Phi}_{0j}^r = \Phi_{0j}^r, \
\mathbf{L}_r \Phi_{0j}^r =0, \
\mathbf{L}_r \Phi_k^r = \omega_k \Phi_k^r, \
\forall \ j = 1,2,3, \ k = 1,2, 3,4.
\]
\end{corollary}
Analogous to Theorem~\ref{Spectrum.R}, we have the following
for the non-resonant case:
\betagin{theorem}[Invariant Subspaces of $\mathbf{L}_0$ -- Non-Resonant Case] \lambdabel{Spectrum.NR} Assume that $e_0 > 2e_1$. Then, the space $\mathbf{E} := L^2(\mathbb{R}^3, \mathbb{C}^2)$ can be decomposed as the direct sum of $\mathbf{L}_0$-invariant subspaces
\betagin{equation} \lambdabel{E.NR.dec}
\mathbf{E} = \oplus_{j=1}^3 \mathbf{E}_{0j} \oplus \mathbf{E}_1 \oplus \mathbf{E}_2 \oplus \mathbf{E}_c.
\end{equation}
If $f$ and $g$ belong to different subspaces, then $\wei{J f, g} =0$. These subspaces and their corresponding projections satisfy the following:
\betagin{itemize}
\item[\textup{(i)}]
For each $j =1,2,3$, let
\[
\Phi_{0j} :=\frac{\partial}{\partial r_j}
\overrightarrow{e^{-i r_1} Q_E}(\mathbf{G}amma_0(r_2,r_3) x)|_{r_1=r_2=r_3=0}.
\]
Also, let
$\widetilde{\Phi}_{01} = \frac{\partial \overrightarrow{Q_E}}{\partial E}$ and
$\widetilde{\Phi}_{0k} := \betagin{bmatrix}0 \\ Y_k \end{bmatrix}$ for $k=2,3$. Then, we have $\mathbf{L}_0 \widetilde{\Phi}_{0j} = \Phi_{0j}, \ \mathbf{L}_0 \Phi_{0j} =0$ and
\[ \alphapha_j^{-1} = \wei{J \Phi_{0j}, \widetilde{\Phi}_{0j}}, \quad \wei{J \Phi_{0j}, \Phi_{0j}} = \wei{J \widetilde{\Phi}_{0j}, \widetilde{\Phi}_{0j}} =0. \]
Moreover, the subspace $\mathbf{E}_{0j}$ is spanned by $\{\Phi_{0j}, \widetilde{\Phi}_{0j}\}$ with
the projection $\mathbf{P}_{0j}$ from $\mathbf{E}$ onto $\mathbf{E}_{0j}$ defined by
\betagin{equation*}
\mathbf{P}_{0j}f = -\alphapha_j\wei{J \widetilde{\Phi}_{0j}, f} \Phi_{0j} + \alphapha_j\wei{J\Phi_{0j}, f} \widetilde{\Phi}_{0j}, \ \forall \ f \in \mathbf{E}, \ \ j = 1,2,3.
\end{equation*}
\item[\textup{(ii)}]
There are two simple purely imaginary eigenvalues
$\omega_1, \omega_2 := -\omega_1$ of $\mathbf{L}_0$ with eigenvectors
$\Phi_1$ and $\Phi_2 := \bar{\Phi}_1$ where
$\Phi_1 = \betagin{bmatrix} u \\ -i v \end{bmatrix}$, $\omega_1 =
i\kappappa$ for some constant
$\kappappa = e_1 -e_0 + O(\epsilonsilon^2) >0$.
The functions $u$ and $v$ are real with $(J\Phi_1, \Phi_1) = 2i(u,v)
=i$ and satisfy $L_+ u = -\kappappa v, L_-v = -\kappappa u$.
For each $j =1,2$, the subspace $\mathbf{E}_j$ is spanned by $\Phi_j$,
and the projection of $\mathbf{E}$ onto $\mathbf{E}_j$ is defined by
\betagin{equation*}
\betagin{split}
\mathbf{P}_j(\mathbf{L}_0) f &= (-1)^ji\wei{J \Phi_j, f}\Phi_j , \ \
\forall \ f \in \mathbf{E}.
\end{split}
\end{equation*}
\item[\textup{(iii)}]
$\mathbf{E}_c = \{g: \wei{J f, g} =0, \quad \forall f \in \oplus_{j=1}^{3} \mathbf{E}_{0j} \oplus \mathbf{E}_1 \oplus \mathbf{E}_2\}$. Its corresponding projection is $\, \mathbf{P}\! _\mathrm{c} \, \! (\mathbf{L}_0) = Id - \sum_{j=1}^3 \mathbf{P}_{0j}(\mathbf{L}_0) - \sum_{j=1}^2 \mathbf{P}_j(\mathbf{L}_0)$.
\end{itemize}
\end{theorem}
\setlength{\unitlength}{1mm}\noindent
\betagin{center}
\betagin{picture}(120,80)
\put (0,40){\varepsilonctor(1,0){120}}
\put (120,42){\makebox(0,0)[c]{\scriptsize x}}
\put (60,0){\varepsilonctor(0,1){80}}
\put (60,82){\makebox(0,0)[c]{\scriptsize y}}
{\color{red}\put (60.15,60){\line(0,1){18}}\put (59.8,60){\line(0,1){18}}}
\put(64,60){\makebox(0,0)[c]{\scriptsize$|E|$}}
{\color{red}\put (60.15,20){\line(0,-1){20}}\put (59.8,20){\line(0,-1){20}}}
\put(64,20){\makebox(0,0)[c]{\scriptsize$-|E|$}}
\multiput(60, 76)(0.5, 0){30}{\circle*{0.1}}\put (75,76){\varepsilonctor(1,0){1}}
\put(83,76){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{L}_0)$}}
\put(60,40){\circle*{1.5}}
\multiput(60, 40)(0.8, -0.4){30}{\circle*{0.1}}\put (84,28){\varepsilonctor(2,-1){1}}
\put(100,25){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_0 =\{\Phi_{0j}, \widetilde{\Phi}_{0j}, \ j = 1,\cdots 3\}$}}
\put(60,56){\circle*{1.5}}
\put(81,56){\makebox(0,0)[c]{\scriptsize$(\omega_1 =i(e_1-e_0)+O(\epsilon^2), \Phi_1)$}}
\put(60,24){\circle*{1.5}}
\put(45,24){\makebox(0,0)[c]{\scriptsize$(\omega_2 =\bar{\omega}_1, \Phi_2 = \bar{\Phi}_1)$}}
\multiput(60, 4)(0.5, 0){30}{\circle*{0.1}}\put (75,4){\varepsilonctor(1,0){1}}
\put(82,4){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{L}_0)$}}
\end{picture}
Figure 2: Spectrum of $\mathbf{L}_0$ in the non-resonant case.
\end{center}
\betagin{corollary} \lambdabel{Cor.NR} For $r=(r_1,r_2,r_3) \in \mathbb{R}^3$ and for $j=1,2,3$, let $\Phi_{0j}^r : = r* \Phi_{0j}$ and $\widetilde{\Phi}_{0j}^r = r* \widetilde{\Phi}_{0j}$. Similarly, for $j = 1,2$, let $\Phi_j^r := r*\Phi_j$. Also, let
\betagin{flalign*}
\mathbf{E}_{0j}^r & := \text{span}\{\Phi_{0j}^r, \widetilde{\Phi}_{0j}^r \}, \ j = 1,2,3, \\
\mathbf{E}_k^r & := \text{span}\{\Phi_k^r \}, \ \forall \ k =1,2, \\
\mathbf{E}_c^r & := \{f \in \mathbf{E}\ : \wei{Jf, g} = 0, \ \forall \ g \in \oplus_{j=1}^3 \mathbf{E}_{0j}^r \oplus \mathbf{E}_1^r \oplus \mathbf{E}_2^r \}.
\end{flalign*}
Then $\mathbf{E}_{0j}^r, \mathbf{E}_k^r$ are invariant under
$\mathbf{L}_r$, $\mathbf{E} = \oplus_{j=1}^3 \mathbf{E}_{0j}^r \oplus \mathbf{E}_1^r \oplus \mathbf{E}_2^r
\oplus \mathbf{E}_c^r$,
and the projections from $\mathbf{E}$ into these subspaces are defined exactly as those corresponding projections in Theorem \ref{Spectrum.NR}. Moreover,
\[
\mathbf{L}_r \widetilde{\Phi}_{0j}^r = \Phi_{0j}^r, \
\mathbf{L}_r \Phi_{0j}^r =0, \
\mathbf{L}_r \Phi_k^r = \omega_k \Phi_k^r, \
\forall \ j = 1,2,3, \ k = 1,2.
\]
\end{corollary}
\subsection{Spectral Analysis of $\mathbf{H}_0$}
Recall \eqref{QE.def} that
$\widetilde{Q}_E = \epsilon[ v_2 + h_2(\epsilon) + h(\epsilon, v_2)]$
where $v_2 = 1/(2\sqrt{I}) [\phi_1 + i \phi_2]$, $h_2(\epsilon) =
O(\epsilon) v_2$ and $h(\epsilon, v_2) \in \mathbf{V}^\perp$.
In other words, we can write
$\widetilde{Q}_E = \widetilde{\rho}(\epsilon)[\phi_1 + i \phi_2] + \widetilde{h}$, where
$\widetilde{\rho}(\epsilon) = \epsilon/(2\sqrt{I}) +O(\epsilon^2)$ and
$\widetilde{h} \in \mathbf{V}^\perp$ is of order $O(\epsilon^3)$.
From \eqref{bL.gamma.def}, the linearized operator around
$\widetilde{Q}_E$ is
\begin{equation} \label{bHr.def}
\mathbf{H}_0 = J (H_0-E) + W_{\widetilde{Q}_E}, \quad
W_{\widetilde{Q}_E}:= \begin{bmatrix} W_{1,\widetilde{Q}_E} &
W_{2,\widetilde{Q}_E} \\ W_{3,\widetilde{Q}_E} & W_{4,\widetilde{Q}_E}
\end{bmatrix},
\end{equation}
where
\begin{equation} \label{W.def}
\begin{split}
& \ W_{1,Q} = 2\lambda \Re Q \, \Im Q, \quad W_{2,Q} =
\lambda [ (\Re Q)^2 + 3(\Im Q)^2], \\
& \ W_{3,Q} = -\lambda [ 3(\Re Q)^2 + (\Im Q)^2], \quad
W_{4,Q} = - W_{1,Q}.
\end{split}
\end{equation}
Now let
\begin{equation}
\mathbf{K}_0 := \begin{bmatrix} H_0 - E - W_{3} & W_{1} \\
W_{1} & H_0 - E + W_{2} \end{bmatrix}.
\end{equation}
We see that $\mathbf{K}_0$ is self-adjoint and
\[
\mathbf{H}_0 = J \mathbf{K}_0, \quad \mathbf{H}_0^* = - \mathbf{K}_0 J.
\]
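Indeed, this is immediate from the symmetry of $\mathbf{K}_0$: since $J$ is real and antisymmetric, so that $J^* = -J$, we have
\[
\mathbf{H}_0^* = (J\mathbf{K}_0)^* = \mathbf{K}_0^*\,J^* = -\mathbf{K}_0\,J .
\]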
Moreover, denoting by $\mathbf{H}_r$ the linearized operator around the excited state
$e^{i r_1} \widetilde{Q}_E (\Gamma_1(r_2,r_3) x)$,
for $r = (r_1, r_2, r_3) \in \mathbb{R}^3$, we have
\begin{equation} \label{H.sym}
\mathbf{H}_r \left( r*\begin{bmatrix} u \\ v \end{bmatrix}\right) =
r* \left( \mathbf{H}_0 \begin{bmatrix} u \\ v \end{bmatrix}\right), \
\text{where}\ r* \begin{bmatrix} f(x) \\ g(x)\end{bmatrix} =
e^{Jr_1}\begin{bmatrix} f(\Gamma_1(r_2,r_3)x) \\
g(\Gamma_1(r_2,r_3)x) \end{bmatrix}.
\end{equation}
Therefore, as above, to study the spectrum of $\mathbf{H}_r$, it
suffices to study the spectrum of $\mathbf{H}_0$.
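To make this reduction explicit, write $U_r\Psi := r*\Psi$ (introduced here only as shorthand). Then \eqref{H.sym} says $\mathbf{H}_r U_r = U_r \mathbf{H}_0$, i.e.
\[
\mathbf{H}_r = U_r\,\mathbf{H}_0\,U_r^{-1}, \qquad \mathbf{H}_0\Phi = \omega\Phi \ \Longrightarrow \ \mathbf{H}_r\,(r*\Phi) = \omega\,(r*\Phi),
\]
so the spectra of $\mathbf{H}_r$ and $\mathbf{H}_0$ coincide, and the eigenfunctions and spectral subspaces of $\mathbf{H}_r$ are obtained from those of $\mathbf{H}_0$ by applying $r*$; this is the content of Corollary \ref{Spectrum.C.R-cor} below.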
We have the following theorem:
\begin{theorem}[$\mathbf{H}_0$-Invariant Subspaces -- Resonant Case]
\label{Spectrum.C.R}
Assume that $e_0 < 2e_1$. The space
$\mathbf{E} = L^2(\mathbb{R}^3, \mathbb{C}^2)$ can be decomposed into
$\mathbf{H}_0$-invariant subspaces as
\[
\mathbf{E} = \mathbf{E}_{01} \oplus\mathbf{E}_{02} \oplus \mathbf{E}_1 \oplus \mathbf{E}_2 \oplus
\mathbf{E}_+ \oplus \mathbf{E}_- \oplus\mathbf{E}_{c}.
\]
If $f$ and $g$ belong to different subspaces, then $\wei{J f, g} =0$. These subspaces and their corresponding projections satisfy the following:
\begin{itemize}
\item[\textup{(i)}]
The subspaces
\[
\mathbf{E}_{01} := \text{span}_{\mathbb{C}} \left\{
\Phi_{00} := \frac{\partial}{\partial E} \overrightarrow{\widetilde{Q}_E}, \;
\Phi_{01}:= \frac{\partial}{\partial r_1}
\overrightarrow{e^{-i r_1} \widetilde{Q}_E} \big|_{r_1=0} \right \}
\]
and
\[
\mathbf{E}_{02} := \text{span}_{\mathbb{C}} \left\{
\Phi_{0j}:= \frac{\partial}{\partial r_j}
\overrightarrow{\widetilde{Q}_E}(\Gamma_1(r_2,r_3) x) \big|_{r_2=r_3=0}
\;\;\; j=2,3 \right\}.
\]
Moreover, $ \mathbf{H}_0 \Phi_{00} = \Phi_{01}, \quad \mathbf{H}_0 \Phi_{0j} = 0, \quad \forall j = 1,2,3$ and
\[
\wei{J\Phi_{0j}, \Phi_{0k}} = 0, \quad \text{if} \quad (j,k) \notin
\{(0,1), (1,0), (2,3), (3,2) \}.
\]
The projections $\mathbf{P}_0(\mathbf{H}_{0}) : \mathbf{E} \rightarrow \mathbf{E}_0:=\mathbf{E}_{01} \oplus
\mathbf{E}_{02}$ are defined by
$\mathbf{P}_0(\mathbf{H}_{0})= \mathbf{P}_{01}(\mathbf{H}_{0}) + \mathbf{P}_{02}(\mathbf{H}_{0})$ with
\begin{flalign*}
\mathbf{P}_{01}(\mathbf{H}_0)f & = \beta_1 \wei{J \Phi_{00}, f}\Phi_{01} - \beta_1 \wei{J\Phi_{01}, f}\Phi_{00}, \\
\mathbf{P}_{02}(\mathbf{H}_0)f & = \beta_2 \wei{J \Phi_{02}, f}\Phi_{03} -\beta_2 \wei{J \Phi_{03}, f}\Phi_{02}, \quad \forall \ f \in \mathbf{E},
\end{flalign*}
where $\beta_1 := \wei{J\Phi_{00}, \Phi_{01}}^{-1} = O(1)$ and $\beta_2 := \wei{J\Phi_{02}, \Phi_{03}}^{-1} = O(\epsilon^{-2})$.
\item[\textup{(ii)}]
There exist $\Phi_1 := \begin{bmatrix} -i\phi_1 + \phi_2 \\ \phi_1 + i\phi_2 \end{bmatrix} + O(\epsilon^2), \Phi_2 := \bar{\Phi}_1$ and purely imaginary numbers $\omega_1 := i(\lambda \epsilon^2 + O(\epsilon^3))$ and $\omega_2 := \bar{\omega}_1$ such that for $j =1,2$, the subspace $\mathbf{E}_j$ is spanned by $\Phi_j$ and
\[
\mathbf{H}_0 \Phi_j = \omega_j \Phi_j, \quad \wei{J \Phi_1, \Phi_1} =-4i,
\quad \wei{J \Phi_1, \Phi_2} =0.
\]
The projection $\mathbf{P}_j(\mathbf{H}_0) : \mathbf{E} \rightarrow \mathbf{E}_j$ is defined by
\[\mathbf{P}_1(\mathbf{H}_0) f = -\frac{1}{4i}\wei{J\Phi_1, f}\Phi_1, \ \ \mathbf{P}_2(\mathbf{H}_0) f = \frac{1}{4i}\wei{J \Phi_2, f}\Phi_2, \quad \forall \ f \in \mathbf{E}.\]
\item[\textup{(iii)}]
For $j =3,4,5,6$ there exist eigenfunctions $\Phi_j$ with
corresponding eigenvalues $\omega_j$ satisfying the condition
$\omega_{k+1} = \bar{\omega}_k$, $\Phi_{k+1} = \bar{\Phi}_k$ for
$k=3,5$, and
$\Im \omega_4 = \kappa = e_1 - e_0 + O(\epsilon^2) >0$,
$\Re \omega_4 = \gamma = \gamma_0\epsilon^4 + O(\epsilon^6)$
for some positive order one constant $\gamma_0$. Moreover, we have
\begin{flalign*}
& \omega_5 = -\omega_3, \quad \wei{J\Phi_5, \Phi_4} =1, \quad \wei{J\Phi_j,\Phi_j}=0 \quad \text{for all} \quad j=3,4,5,6.
\end{flalign*}
The subspaces are $\mathbf{E}_+ = \text{span}\{\Phi_4, \Phi_5\}$ and $\mathbf{E}_- = \text{span}\{\Phi_3, \Phi_6\}$, with
the projections
$\mathbf{P}_\pm(\mathbf{H}_0) : \mathbf{E} \rightarrow \mathbf{E}_\pm$ defined by
\begin{flalign*}
\mathbf{P}_+(\mathbf{H}_0) f & = \wei{J\Phi_5, f}\Phi_4 - \wei{J\Phi_4,f}\Phi_5, \\
\mathbf{P}_-(\mathbf{H}_0) f & = \wei{J\Phi_6,f}\Phi_3 - \wei{J\Phi_3,f}\Phi_6,
\quad \forall \ f \in \mathbf{E}.
\end{flalign*}
\item[\textup{(iv)}]
$\mathbf{E}_c = \{g \in \mathbf{E} : \wei{Jf, g} = 0, \quad \forall \ f \in \mathbf{E}_0
\oplus \mathbf{E}_1 \oplus \mathbf{E}_2 \oplus \mathbf{E}_+ \oplus \mathbf{E}_-\}$.
Its corresponding projection is
$\mathbf{P}_c(\mathbf{H}_0) = \textup{Id} - \sum_{j=0}^2 \mathbf{P}_j(\mathbf{H}_0) - \mathbf{P}_+(\mathbf{H}_0) - \mathbf{P}_-(\mathbf{H}_0)$.
\end{itemize}
\end{theorem}
\setlength{\unitlength}{1mm}\noindent
\begin{center}
\begin{picture}(120,80)
\put (0,40){\vector(1,0){120}}
\put (120,42){\makebox(0,0)[c]{\scriptsize x}}
\put (60,0){\vector(0,1){80}}
\put (60,82){\makebox(0,0)[c]{\scriptsize y}}
{\color{red}\put (60.15,60){\line(0,1){18}}\put (59.8,60){\line(0,1){18}}}
\multiput(60, 76)(0.5, 0){30}{\circle*{0.1}}\put (75,76){\vector(1,0){1}}
\put(83,76){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{H}_0)$}}
\put(64,60){\makebox(0,0)[c]{\scriptsize$|E|$}}
{\color{red}\put (60.15,20){\line(0,-1){20}}\put (59.8,20){\line(0,-1){20}}}
\put(64,20){\makebox(0,0)[c]{\scriptsize$-|E|$}}
\put(60,40){\circle*{1.5}}
\multiput(60, 40)(0.8, -0.4){30}{\circle*{0.1}}\put (84,28){\vector(2,-1){1}}
\put(100,25){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_0 =\{\Phi_{00} =\partial_EQ_{|r=0}, \Phi_{0j}=\partial_{r_j}Q_{|r=0}, \ j = 1,2,3\}$}}
\put(60,44){\circle*{1.5}}
\put(73,44){\makebox(0,0)[c]{\scriptsize$(\omega_1 =O(\epsilon^2), \Phi_1)$}}
\put(60,36){\circle*{1.5}}
\put(45,36){\makebox(0,0)[c]{\scriptsize$(\omega_2 =\bar{\omega}_1, \Phi_2 =\bar{\Phi}_1)$}}
\put(40,72){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_{+}= \{\Phi_4, \Phi_5\}$}}
\put(62,66){\circle*{1.5}}
\put(80,66){\makebox(0,0)[c]{\scriptsize$(\omega_4, \Phi_4),\ \Re(\omega_4) = O(\epsilon^4)$}}
\put(58,66){\circle*{1.5}}
\put(47,66){\makebox(0,0)[c]{\scriptsize$(\omega_5, \Phi_5)$}}
\put(62,14){\circle*{1.5}}
\put(77,14){\makebox(0,0)[c]{\scriptsize$(\omega_3 =\bar{\omega}_4, \Phi_3= \bar{\Phi}_4)$}}
\put(58,14){\circle*{1.5}}
\put(43,14){\makebox(0,0)[c]{\scriptsize$(\omega_6=\bar{\omega}_5, \Phi_6 = \bar{\Phi}_5)$}}
\put(40,8){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_{-}= \{\Phi_3, \Phi_6\}$}}
\multiput(60, 4)(0.5, 0){30}{\circle*{0.1}}\put (75,4){\vector(1,0){1}}
\put(82,4){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{H}_0)$}}
\end{picture}
Figure 3: Spectrum of $\mathbf{H}_0$ in the resonant case.
\end{center}
\begin{proof}
From Lemma \ref{Non-emb-eig} and Lemma \ref{outside}, we see that
$\mathbf{H}_0$ has no eigenvalues in
$\Sigma_c \cup \{z : \text{dist}(z,\Sigma_p) \geq \epsilon \}$.
Therefore, we shall only need to look for eigenvalues of $\mathbf{H}_0$ in
$\{z : \text{dist}(z,\Sigma_p) \leq \epsilon \ \text{and}\ z \notin
\Sigma_c \}$.
First of all, note that $\mathbf{H}_0$ is a small perturbation of the
operator $JH$ (recall that $H= H_0-E$) whose discrete spectrum
$\sigma_d(JH) =\{ \pm i(e_0-E), \pm i(e_1-E)\}$,
and whose eigenfunctions are
$\Phi_j^{\pm} := \begin{bmatrix} \phi_j \\ \pm i\phi_j \end{bmatrix}$,
for $j = 0,1,2,3$. The eigenvalues $\pm i(e_0-E)$ are simple
and the eigenvalues $\pm i(e_1-E) = O(\epsilon^2)$ have
multiplicity $3$. Moreover, the continuous spectrum of
$JH$ is $\Sigma_c :=\sigma_c(J(H_0-E)) =\{i e : |e| \geq -E\}$.
From standard perturbation theory, the dimension of eigenspaces
of $\mathbf{H}_0$ for eigenvalues near $0$ totals $6$, and the
corresponding eigenfunctions are perturbations of linear
combinations of $\Phi_{j}^{\pm}, j =1,2,3$.
On the other hand, by the resonance condition,
$|e_0 -E| = e_1 -e_0 +O(\epsilonsilon^2) > -E$. So finding the
eigenvalues of $\mathbf{H}_0$ bifurcating from $\pm i (e_1-E)$ requires
careful estimates of resolvent operators.
In general, we need to solve the problem
\begin{equation} \label{H.eig}
\mathbf{H}_0 \begin{bmatrix} u \\ v \end{bmatrix} =
\tau \begin{bmatrix} u \\ v \end{bmatrix}
\end{equation}
for some functions $u, v$ and some eigenvalue $\tau$ near zero
and near $\pm i(e_1 -e_0)$.
We shall first find the eigenvalues $\tau$ near zero.
Let's write $u = a\cdot \phi + h_1, v = b \cdot \phi + h_2$ where
$a = (a_1, \cdots, a_3)$ and $b = (b_1, \cdots, b_3)$ are of order one and $h_1, h_2 \in \mathbf{V}^\perp$. Then, we have
\begin{equation} \label{h12.eqn}
\left \{ \begin{array}{ll}
(H_0- E) h_2 + W_1 u + W_2 v & = \tau u + (E-e_1) b \cdot \phi,\\
(H_0- E)h_1 -W_3 u - W_4 v & = -\tau v + (E-e_1) a \cdot \phi.
\end{array} \right.
\end{equation}
Applying the projection $\mathbf{P}_1^\perp$, we get
\begin{equation} \label{h12.sol}
h_1 = -(H_0-e_1)^{-1}\mathbf{P}_1^\perp (\tau h_2 - W_3 u - W_4 v),
\quad h_2 = (H_0-e_1)^{-1}\mathbf{P}_1^\perp (\tau h_1 - W_1u - W_2 v).
\end{equation}
Taking the inner product of the equations \eqref{h12.eqn} with $\phi_j$, for $j=1,2,3$, we get
\begin{equation} \label{e.eqn}
(\phi_j, W_1 u + W_2 v) = \tau a_j + (E-e_1) b_j, \quad
(\phi_j, W_3u + W_4 v)= \tau b_j + (e_1-E) a_j.
\end{equation}
From \eqref{h12.sol} and \eqref{e.eqn}, we see that $h_1, h_2, \tau = O(\epsilon^2)$. We now write $\tau = \frac{\lambda \epsilon^2}{2} \tau_1 + O(\epsilon^3)$. Then, from \eqref{e.eqn} we have
\begin{equation*}
\begin{bmatrix}
0 & 1 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
-3 & 0 & 0 & 0 & -1 & 0 \\
0 & -1 & 0 & -1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ b_1 \\ b_2 \\ b_3
\end{bmatrix} = \tau_1 \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ b_1 \\ b_2 \\ b_3
\end{bmatrix}.
\end{equation*}
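For the reader's convenience, we record the elementary linear algebra behind the next assertion: the third and sixth rows and columns of the matrix above vanish, and on the remaining coordinates $(a_1,a_2,b_1,b_2)$ the characteristic polynomial is
\[
\det\begin{bmatrix} -\tau_1 & 1 & 1 & 0 \\ 1 & -\tau_1 & 0 & 3 \\ -3 & 0 & -\tau_1 & -1 \\ 0 & -1 & -1 & -\tau_1 \end{bmatrix} = \tau_1^4 + 4\tau_1^2 ,
\]
so the characteristic polynomial of the full $6\times 6$ matrix is $\tau_1^4(\tau_1^2+4)$.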
This implies that $\tau_1$ solves the equation $ \tau_1^4(\tau_1^2 + 4) = 0$. So, the eigenvalue problem \eqref{H.eig} has two purely imaginary eigenvalues $\tau = \omega_1 := \lambda \epsilon^2 i + O(\epsilon^3)$ and $\tau = \omega_2 := \bar{\omega}_1$. Respectively, the eigenvectors of $\omega_1$ and $\omega_2$ are $\Phi_1, \bar{\Phi}_1$ with
\[
\Phi_1 = \begin{bmatrix} -i \phi_1 + \phi_2 \\
\phi_1 + i\phi_2 \end{bmatrix} + O(\epsilonsilon^2).
\]
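As a consistency check of the normalization in item (ii) of Theorem \ref{Spectrum.C.R} (to leading order in $\epsilon$, and assuming, as usual, that $\phi_1, \phi_2$ are real and $L^2$-normalized), one computes
\[
J\Phi_1 = \begin{bmatrix} \phi_1 + i\phi_2 \\ i\phi_1 - \phi_2 \end{bmatrix} + O(\epsilon^2), \qquad
\wei{J\Phi_1,\Phi_1} = -2i\int_{\mathbb{R}^3}(\phi_1^2+\phi_2^2)\,dx + O(\epsilon^2) = -4i + O(\epsilon^2),
\]
in agreement with $\wei{J\Phi_1,\Phi_1} = -4i$ stated there.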
Moreover, as in the previous section, let
\begin{equation}
\begin{split}
& \Phi_{00} = \frac{\partial}{\partial E} \overrightarrow{\widetilde{Q}_E}
= \partial_E \widetilde{\rho}(\epsilon) \begin{bmatrix} \phi_1 \\
\phi_2 \end{bmatrix} + O(\epsilon^2), \\
& \Phi_{01} = \frac{\partial}{\partial r_1}
\overrightarrow{e^{-i r_1} \widetilde{Q}_E} \big|_{r_1=0} = \begin{bmatrix}
\Im \widetilde{Q} \\ -\Re \widetilde{Q} \end{bmatrix} = \widetilde{\rho}(\epsilon)
\begin{bmatrix} \phi_2 \\ -\phi_1 \end{bmatrix} + O(\epsilon^3), \\
& \Phi_{02} = \frac{\partial}{\partial r_2}
\overrightarrow{\widetilde{Q}_E}(\Gamma_1(r_2,r_3) x) \big|_{r_2=r_3=0}
= \widetilde{\rho}(\epsilon) \begin{bmatrix} 0 \\ \phi_3 \end{bmatrix}
+ O(\epsilon^3), \\
& \Phi_{03} = \frac{\partial}{\partial r_3}
\overrightarrow{\widetilde{Q}_E}(\Gamma_1(r_2,r_3) x) \big|_{r_2=r_3=0}
= \widetilde{\rho}(\epsilon) \begin{bmatrix} \phi_3 \\ 0
\end{bmatrix} + O(\epsilon^3).
\end{split}
\end{equation}
We see that $\mathbf{H}_0$ has three zero eigenvectors and one generalized zero eigenvector. Precisely, we have
\[ \mathbf{H}_0 \Phi_{00} = \Phi_{01}, \quad \mathbf{H}_0 \Phi_{0j} = 0, \ \forall \ j = 1, 2,3. \]
Next, we shall look for the eigenvalues $\tau$ of $\mathbf{H}_0$ near $\pm
i(e_0 - E)$, i.e. we shall solve $(JH -\tau +W)\Phi =0$. We shall
first solve this equation for $|\tau -i(e_0-E)| \leq \epsilon$. From Lemma
\ref{Non-emb-eig}, we may assume that $\Re \tau \not=0$. We write
$\Phi = a\Phi_0^+ + \eta$, for some $a \in \mathbb{C}$,
$\eta \perp \Phi_0^+$. Then we have
\begin{equation} \label{he.tau.eta}
\left \{
\begin{array}{ll}
& a\tau = ai(e_0-E) + \frac{1}{2}\wei{\Phi_0^+, W(a\Phi_0^+ +\eta)}, \\
& (JH -\tau) \eta = -\mathbf{P}^\perp W(a\Phi_0^+ +\eta).
\end{array} \right.
\end{equation}
Here $\mathbf{P}^\perp =\mathbf{P}_d + \mathbf{P}_c$,
\[\mathbf{P}_d f := \frac{1}{2}\wei{\Phi_0^-, f}\Phi_0^- + \sum_{j=1}^3 \frac{1}{2}\wei{\Phi_j^{\pm}, f}\Phi_j^{\pm}, \quad \mathbf{P}_c f := \mathbf{P}_c(JH)f, \quad f \in \mathbf{E}. \]
Recall that $R_0(z) = (J H - z)^{-1} =R_{01}(z) + R_{02}(z)$ where,
\[ R_{01}(z):= \frac{1}{2}\begin{bmatrix} i & -1 \\ 1 & i\end{bmatrix}(H -iz)^{-1}, \quad R_{02}(z):= \frac{1}{2}\begin{bmatrix} -i & -1 \\ 1 & -i\end{bmatrix}(H + iz)^{-1}.\]
Therefore, $R_0(z)$ is well defined for $\Re z \not=0$. From the second equation of \eqref{he.tau.eta}, we get
\begin{equation} \label{eta.spectrum.eqn}
\eta = -R_0(\tau) \mathbf{P}^\perp [W(a\Phi_0^+ + \eta)].
\end{equation}
For $j=1,2,3$, from \eqref{W.def} and by the symmetry properties of
$\widetilde{Q}$, we have
\begin{equation} \label{in.com}
\begin{split}
\wei{\Phi_j^{+}, W\Phi_0^+} & = (\phi_0\phi_j, W_1 + iW_2) - i(\phi_0\phi_j,W_3 + iW_4) \\
& =4i\lambda(\phi_0\phi_j, |\widetilde{Q}|^2) =0, \\
\wei{\Phi_j^{-}, W\Phi_0^+} & = (\phi_0\phi_j, W_1 + iW_2) + i(\phi_0\phi_j,W_3 + iW_4)\\
& = -2i\lambda (\phi_0\phi_j, \widetilde{Q}^2)=0.
\end{split}
\end{split}
\end{equation}
So,
\[ \mathbf{P}^\perp [W\Phi_0^+] = \frac{1}{2}\wei{\Phi_0^-, W\Phi_0^+}\Phi_0^- + \mathbf{P}_c W\Phi_0^+.\]
Therefore, we obtain the equation for $\eta$
\begin{equation} \label{eta.exp.sp}
\begin{split}
\eta_\tau & = \frac{1}{2[i(e_0-E) +\tau]}[a\wei{\Phi_0^-, W\Phi_0^+} + \wei{\Phi_0^-, W\eta}]\Phi_0^- \\
\ & \quad - \sum_{j=1}^3 \frac{1}{2[\pm i(e_1-E)-\tau]}\wei{\Phi_j^\pm, W\eta} \Phi_j^\pm - aR_0(\tau)\mathbf{P}_c W\Phi_0^+ - R_0(\tau)\mathbf{P}_c W\eta.
\end{split}
\end{equation}
Now suppose that $a =0$. Since $\Re(\tau)\not=0$ and $\tau$ is close
to $ i(e_0-E)$, from \eqref{eta.exp.sp} and Lemma \ref{R0.sp.est}, we
see that
$\norm{\eta}_{L^2_{\mathrm{loc}}} \leq C\epsilon^2\norm{\eta}_{L^2_{\mathrm{loc}}}$.
So, we get a contradiction because $\epsilon$ is small.
So, without loss of generality, we may assume that $a=1$. Then let's define $\tau_0 : = i(e_0-E) + \frac{1}{2}(\Phi_0^+, W \Phi_0^+)$. Since
\[
\wei{\Phi_0^+, W \Phi_0^+} = (\phi_0^2, W_1+iW_2) -i(\phi_0^2,
W_3+iW_4) = 4i\lambda(\phi_0^2, |\widetilde{Q}|^2),
\]
we see that $\tau_0$ is purely imaginary and $\Im \tau_0 = e_0-e_1 + \lambda \epsilon^2 +O(\epsilon^4)$. Moreover, let
\begin{equation}\label{tau1.def}
\begin{split}
\tau_1 & := \frac{1}{4[i(e_0-E)+\tau_0]}|\wei{\Phi_0^-, W\Phi_0^+}|^2 - 4i\lambda^2 (\phi_0|Q|^2, \frac{1}{H +i\tau_0 }\mathbf{P}_c(H_0)\phi_0|Q|^2) \\
\ & \quad -\lambda^2 i (\phi_0Q^2, \frac{1}{H -i\tau_0 -i0}\mathbf{P}_c(H_0)\phi_0Q^2).
\end{split}
\end{equation}
Then we have $|\tau_1| \lesssim \epsilon^4$. Moreover, from \eqref{FGR}, and since $(H+i\tau_0)^{-1}\mathbf{P}_c$ is regular, we have $\Re (\tau_1) \geq \lambda^2\lambda_0 \epsilon^4$. Here, $\lambda$ and $\lambda_0$ are respectively defined in \eqref{NSE} and \eqref{FGR}. From \eqref{he.tau.eta} and \eqref{eta.spectrum.eqn}, we get the equation for $\tau$:
\begin{equation} \label{tau.eqn.sp}
\tau = \tau_0 + \tau_1 - [\tau_1 + \frac{1}{2}\wei{\Phi_0^+, WR_0(\tau)\mathbf{P}^\perp W\Phi_0^+}] - \frac{1}{2}\wei{\Phi_0^+, WR_0(\tau)\mathbf{P}^\perp W\eta}.
\end{equation}
From \eqref{eta.exp.sp} and Lemma \ref{R0.sp.est} and by applying the contraction mapping theorem, for each $\tau$ in $\mathbf{D}_1$, we can find the solution $\eta_\tau$ in $L^2_{-s}$ of \eqref{eta.exp.sp} such that $\norm{\eta_\tau}_{L^2_{-s}} \lesssim \epsilon^2$. Moreover, for all $\tau, \tau' \in \mathbf{D}_1$, from \eqref{eta.exp.sp} and Lemma \ref{R0.sp.est}, we also have
\[ \norm{\eta_{\tau} -\eta_{\tau'}}_{L^2_{-s}} \lesssim \epsilon^2[\max\{\Re \tau, \Re\tau'\}]^{-1/2}|\tau-\tau'| + \epsilon^2 \norm{\eta_{\tau} -\eta_{\tau'}}_{L^2_{-s}}.\]
Therefore, there exists a constant $C>0$ such that
\begin{equation} \label{eta.contraction} \norm{\eta_{\tau} -\eta_{\tau'}}_{L^2_{-s}} \leq C\epsilon^2[\max\{\Re \tau, \Re\tau'\}]^{-1/2}|\tau-\tau'|, \quad \forall \ \tau, \tau' \in \mathbf{D}_1. \end{equation}
Next, let
\begin{equation} \label{D.def} \mathbf{D}: = \{z \in \mathbb{C}: |z-(\tau_0+\tau_1)| \leq \epsilon^5\}.\end{equation}
Note that $\mathbf{D} \subset \mathbf{D}_1$. Also, let $f(\tau)$ be the function defined by the right-hand side of \eqref{tau.eqn.sp}. We shall show that $f$ maps $\mathbf{D}$ into $\mathbf{D}$ and has a fixed point in $\mathbf{D}$. Note that
\begin{equation} \label{exp.Pperp} \begin{split}
& \wei{\Phi_0^+, WR_0(\tau)\mathbf{P}^\perp W\Phi_0^+} = \frac{1}{2}\wei{\Phi_0^-, W\Phi_0^+} \wei{\Phi_0^+, WR_0(\tau)\Phi_0^-} + \wei{\Phi_0^+, WR_0(\tau)\mathbf{P}_c W\Phi_0^+} \\
& \quad \quad \quad = -\frac{1}{2[i(e_0-E) + \tau]}\wei{\Phi_0^-, W\Phi_0^+}\wei{\Phi_0^+, W\Phi_0^-} + \wei{\Phi_0^+, WR_0(\tau)\mathbf{P}_c W\Phi_0^+} \\
& \quad \quad \quad = -\frac{1}{2[i(e_0-E) + \tau]}|\wei{\Phi_0^-, W\Phi_0^+}|^2 + \wei{\Phi_0^+, WR_0(\tau)\mathbf{P}_c W\Phi_0^+},\\
& \quad \quad \quad = -\frac{1}{2[i(e_0-E) + \tau]}|\wei{\Phi_0^-, W\Phi_0^+}|^2 + 2i\lambda^2(\phi_0Q^2, (H-i\tau)^{-1}\mathbf{P}_c(H) \phi_0Q^2)\\
& \quad \quad \quad \quad \quad \quad \quad \quad +4i\lambda^2(\phi_0|Q|^2, (H+i\tau)^{-1}\mathbf{P}_c(H)\phi_0|Q|^2).
\end{split}
\end{equation}
Here, we have used the fact that $R_0\mathbf{P}_c = R_{01}\mathbf{P}_c + R_{02}\mathbf{P}_c$ with
\[ R_{01}(z)\mathbf{P}_c := \frac{1}{2}\begin{bmatrix} i & -1 \\ 1 & i \end{bmatrix}(H-iz)^{-1}\mathbf{P}_c(H), \quad R_{02}(z)\mathbf{P}_c := \frac{1}{2}\begin{bmatrix} -i & -1 \\ 1 & -i\end{bmatrix}(H+iz)^{-1}\mathbf{P}_c(H)\]
and
\begin{equation} \label{R0z.in}
\begin{split}
\wei{\Phi_0^+, WR_{01}(\tau)\mathbf{P}_c W\Phi_0^+} & = 2i\lambda^2(\phi_0Q^2, (H-i\tau)^{-1}\mathbf{P}_c(H) \phi_0Q^2), \\
\wei{\Phi_0^+, WR_{02}(\tau)\mathbf{P}_c W\Phi_0^+} & = 8i\lambda^2(\phi_0|Q|^2, (H+i\tau)^{-1}\mathbf{P}_c(H)\phi_0|Q|^2).
\end{split}
\end{equation}
From \eqref{tau1.def}, \eqref{exp.Pperp}, Lemma \ref{R0.sp.est} and $|\tau -\tau_0| \lesssim \epsilon^4$ if $\tau \in \mathbf{D}$, we obtain
\[ |\tau_1 + \frac{1}{2}\wei{\Phi_0^+, WR_0(\tau)\mathbf{P}^\perp W\Phi_0^+}| \lesssim \epsilon^6, \ \forall \ \tau \in \mathbf{D}. \]
On the other hand,
\begin{equation*}
\begin{split}
& \wei{\Phi_0^+, WR_0(\tau)\mathbf{P}^\perp W\eta} = -\frac{1}{2[i(e_0-E) +\tau]}\wei{\Phi_0^-, W\eta}\wei{\Phi_0^+, W\Phi_0^-} \\
& \quad \quad \quad + \sum_{j=1}^3 \frac{1}{2[\pm i(e_1-E) -\tau]}\wei{\Phi_j^\pm,W\eta}\wei{\Phi_0^+,W\Phi_j^\pm} + \wei{\Phi_0^+, WR_0(\tau)\mathbf{P}_c W\eta}.
\end{split} \end{equation*}
So,
\[ |\wei{\Phi_0^+, WR_0(\tau)\mathbf{P}^\perp W\eta}| \lesssim \epsilon^6, \ \forall \ \tau \in \mathbf{D}.\]
Therefore, $|f(\tau) - (\tau_0 + \tau_1)| \lesssim \epsilon^6 \leq \epsilon^5$ for all $\tau \in \mathbf{D}$. So, $f(\tau) \in \mathbf{D}$ for all $\tau \in \mathbf{D}$. From \eqref{eta.contraction}, \eqref{exp.Pperp} and Lemma \ref{R0.sp.est} below, we can show that there exists a constant $C>0$ independent of $\epsilon$ such that $|f(\tau)-f(\tau')| \leq C\epsilon^2|\tau -\tau'|$, for all $\tau, \tau'$ in $\mathbf{D}$. So, the map $f : \mathbf{D} \rightarrow \mathbf{D}$ is a contraction when $\epsilon$ is sufficiently small. Therefore, there exists a unique $\tau_* \in \mathbf{D}$ such that $\tau_* = f(\tau_*)$. Moreover, we have
\begin{equation} \label{tau*size}
\Im \tau_* = \Im \tau_0 + O(\epsilon^2) = e_0 - e_1 + O(\epsilon^2), \ \lambda^2\lambda_0 \epsilon^4 \leq \Re \tau_* = \Re \tau_1 +O(\epsilon^6) \leq C\epsilon^4. \end{equation}
Next, we shall show that $\tau_*$ is the unique fixed point of $f$ in $\mathbf{D}_1$, where $\mathbf{D}_1$ is defined in Lemma \ref{R0.sp.est} (note that $\mathbf{D} \subset \mathbf{D}_1$). Suppose that there is $\tau' \in \mathbf{D}_1$ such that $f(\tau') = \tau'$. Using \eqref{eta.contraction} and \eqref{tau*size}, we obtain
\[ |\tau_* -\tau'| = |f(\tau_*) - f(\tau')| \leq C \epsilon^2|\tau_* - \tau'| \leq \frac{1}{2}|\tau_* - \tau'|.\]
Therefore, $\tau' = \tau_*$. So, $\tau_*$ is the unique eigenvalue of $\mathbf{H}_0$ in $\mathbf{D}_1$. In summary, let $h_3:=\eta_{\tau_*}$, $\Phi_3 = \Phi_0^{+} + h_3$, $\omega_3 = \tau_*$, $\Phi_4 =\bar{\Phi}_3$ and $\omega_4 = \bar{\omega}_3$. We have $\lambda_0 \lambda^2 \epsilon^4 \leq \Re \omega_j \lesssim \epsilon^4$, $\Im \omega_{j} = (-1)^{j-1}(e_0-e_1) + O(\epsilon^2)$ and
\[ \mathbf{H}_0 \Phi_j = \omega_j\Phi_j, \ \ \forall \ j = 3,4.\]
Moreover, from \eqref{eta.exp.sp}, we get
\begin{equation} \label{h3.def}
\begin{split}
h_3 & = \frac{1}{2[i(e_0-E) +\omega_3]}[\wei{\Phi_0^-, W\Phi_0^+} +\wei{\Phi_0^-, Wh_3}]\Phi_0^- \\
& - \sum_{j=1}^3 \frac{1}{2[\pm i(e_1-E) -\omega_3]}\wei{\Phi_j^\pm, Wh_3}\Phi_j^\pm -R_0(\omega_3)\mathbf{P}_cW\Phi_0^+ - R_0(\omega_3)\mathbf{P}_cWh_3.
\end{split}
\end{equation}
Moreover, from Lemma \ref{Lp.eta.sp} below, we see that $\norm{h_3}_{L^2} \leq C$.
Similarly, solving the bifurcation equation $(JH +W)[\Phi_0^- + \widetilde{\eta}] = z(\Phi_0^- + \widetilde{\eta})$, we also obtain two other eigenvalues and eigenfunctions $\omega_5, \omega_6$ and $\Phi_5 = \Phi_0^- + h_5, \Phi_6 = \Phi_0^+ + h_6$ with $\omega_6 =\bar{\omega}_5, \Phi_6 =\bar{\Phi}_5$, $\mathbf{H}_0 \Phi_j = \omega_j \Phi_j$, $\norm{h_j}_{L^2} \leq C$ for $j=5,6$. In particular, $h_5$ satisfies
\begin{equation} \label{h5.def}
\begin{split}
h_5 & = \frac{1}{2[-i(e_0-E) +\omega_5]}[\wei{\Phi_0^+, W\Phi_0^-} +\wei{\Phi_0^+, Wh_5}]\Phi_0^+ \\
& - \sum_{j=1}^3 \frac{1}{2[\pm i(e_1-E) -\omega_5]}\wei{\Phi_j^\pm, Wh_5}\Phi_j^\pm -R_0(\omega_5)\mathbf{P}_cW\Phi_0^- - R_0(\omega_5)\mathbf{P}_cWh_5.
\end{split}
\end{equation}
Also,
\begin{equation*}
\begin{split}
\Re \omega_5 & = -\lambda^2 \Im (\phi_0\bar{Q}^2, \frac{1}{H_0 +e_0 -2e_1 +O(\epsilon^2) -i0}\mathbf{P}_c \phi_0\bar{Q}^2) + O(\epsilon^6), \\
\Im \omega_5 & = -(e_0-e_1) + O(\epsilon^2).
\end{split}
\end{equation*}
In particular, we have $\Re \omega_j = O(\epsilon^4)$ and $\Re \omega_j \leq -\lambda_0\lambda^2 \epsilon^4$ for $j=5,6$. By the uniqueness properties of these fixed points $\omega_3, \omega_4, \cdots, \omega_6$, we see that they are all of the eigenvalues of $\mathbf{H}_0$ in the $\epsilon$-neighborhood of $\pm i e_{01}$. Therefore, we have found all of the eigenvalues of $\mathbf{H}_0$. \\
Next, we shall prove the orthogonality conditions from which the formulas of the projections follow. Firstly, from the symmetry properties of $\widetilde{Q}_E$, it follows that $\wei{Jf,g}=0$ if $f,g$ are in different spaces of $\mathbf{E}_{01}$ and $\mathbf{E}_{02}$. Secondly, we claim that for any two eigenvector--eigenvalue pairs $(u_1, \lambda_1), (u_2, \lambda_2)$ of $\mathbf{H}$ with $\bar{\lambda}_1 + \lambda_2 \not=0$, we have $\wei{Ju_1,u_2} =0$. In fact, since $\mathbf{H} = J\mathbf{K}$ and $\mathbf{K}$ is a self-adjoint operator, we get
\begin{equation} \label{com.abs}
\lambda_2 \wei{Ju_1,u_2} = \wei{Ju_1, \mathbf{H} u_2} = \wei{\mathbf{H}^* Ju_1, u_2} = \wei{\mathbf{K} u_1, u_2} =- \bar{\lambda}_1\wei{Ju_1, u_2}. \end{equation}
Then, we obtain $(\bar{\lambda}_1 +\lambda_2)\wei{Ju_1, u_2} =0$. So, $\wei{Ju_1, u_2}= 0$ since $\bar{\lambda}_1 + \lambda_2 \not=0$. Therefore, $\wei{Jf,g} =0$ if $f$ and $g$ are in different spaces of $\mathbf{E}_1, \mathbf{E}_2, \mathbf{E}_+, \mathbf{E}_-$. Moreover, $\wei{J\Phi_j,\Phi_j} =0$ for all $j=3,4,5,6$. On the other hand, for $u_1 \in \mathbf{E}_0$, we have $(\mathbf{H}^*)^2 J u_1 =(\mathbf{K} J)(\mathbf{K} J) Ju_1 = J \mathbf{H}^2u_1 =0$. Therefore, for all $u_2$ such that $\mathbf{H} u_2 = \lambda u_2$ with $\lambda \not=0$, we get
\begin{equation*}
\lambda^2\wei{Ju_1,u_2} = \wei{Ju_1, \mathbf{H}^2 u_2} = \wei{(\mathbf{H}^*)^2 Ju_1, u_2} =0. \end{equation*}
This proves that $\wei{Jf,g} =0$ if $f \in \mathbf{E}_0$ and $g$ is in one of $\mathbf{E}_1, \mathbf{E}_2, \mathbf{E}_+, \mathbf{E}_-$. So, we have proved all of the orthogonality conditions. Moreover, since $J \Phi_0^{\pm} = \pm i\Phi_0^{\pm}$, from Lemma \ref{Lp.eta.sp} below we obtain
\begin{flalign*}
\wei{J\Phi_5, \Phi_4} & = \wei{J (\Phi_0^- + h_5), \Phi_0^- + h_4} = 2i + O(\epsilon^2) + \wei{Jh_5, h_4}\\
& = 2i + O(\epsilon^2) \not=0.
\end{flalign*}
Moreover, it follows from this and \eqref{com.abs} that
\[ \omega_5 = -\bar{\omega}_4= -\omega_3. \]
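Indeed, taking $(u_1, \lambda_1) = (\Phi_5, \omega_5)$ and $(u_2, \lambda_2) = (\Phi_4, \omega_4)$ in \eqref{com.abs} gives
\[
(\bar{\omega}_5 + \omega_4)\,\wei{J\Phi_5, \Phi_4} = 0, \qquad \wei{J\Phi_5, \Phi_4} = 2i + O(\epsilon^2) \not= 0,
\]
so $\bar{\omega}_5 = -\omega_4$, and the identity above follows from $\omega_3 = \bar{\omega}_4$.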
Finally, we shall complete the proof of Theorem \ref{Spectrum.C.R} by the following two lemmas, which show that all of $\omega_3, \omega_4, \cdots, \omega_6$ are simple and $\norm{h_j}_{L^2} \leq C$ for $j =3,4,\cdots, 6$:
\end{proof}
\begin{lemma} \label{Lp.eta.sp} For $1 \leq p \leq \infty$, we have
\[ \norm{h_j}_{L^p} \leq C_p [\epsilon^2 + \epsilon^{6-\frac{12}{p}}], \quad \norm{h_j}_{H^1} \leq C, \quad |\wei{Jh_5, h_4}| \leq C\epsilon^4, \ \forall j = 3,4,5,6. \]
\end{lemma}
\begin{proof} To prove the first inequality of Lemma \ref{Lp.eta.sp}, we only need to prove it for $j=3$. From \eqref{h3.def}, we only need to show that
\begin{equation} \label{L2.eta.sp}
\norm{R_0(\omega_3)\mathbf{P}_cW\Phi_0^+}_{L^p}, \ \norm{R_0(\omega_3)\mathbf{P}_cW\eta}_{L^p} \leq C_p[\epsilon^2 + \epsilon^{6-\frac{12}{p}}].
\end{equation}
Since $R_0(\tau) = R_{01}(\tau) + R_{02}(\tau)$ and $R_{02}(\omega_3)$ is regular and $R_{01}(\omega_3) \sim (H-i\omega_3)^{-1}$, we shall only need to prove \eqref{L2.eta.sp} with $R_0(\omega_3)$ replaced by $(H-i\omega_3)^{-1}$. Now, we follow the argument in \cite{NTP}. We write
\[ H-i\omega_3 = H_0 -E -i\omega_3 = -\Delta - \nu^2 + V,\]
where $\nu^2 = E + i\omega_3$, with $\Im \nu >0$ of order $\epsilon^4$. By resolvent expansion, we have
\begin{equation} \label{L2.nu.sp}
(H-i\omega_3)^{-1}\varphi = (-\Delta - \nu^2)^{-1}\varphi +(-\Delta - \nu^2)^{-1} V(H-i\omega_3)^{-1}\varphi. \end{equation}
Because the resolvent $(-\Delta - \nu^2)^{-1}$ has the kernel $K(x) := (4\pi|x|)^{-1} \exp(i\nu|x|)$, we have
\begin{flalign*} \norm{(-\Delta - \nu^2)^{-1}\varphi}_{L^p} & \lesssim \norm{K*\varphi}_{L^p} \lesssim [\norm{K}_{L^p(B_1^c)} + \norm{K}_{L^2(B_1)}]\norm{\varphi}_{L^1 \cap L^2} \\
\ & \lesssim [1 + \epsilon^{4 - \frac{12}{p}}]\norm{\varphi}_{L^1 \cap L^2}.
\end{flalign*}
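To see where this power of $\epsilon$ comes from, one can estimate the far part of the kernel directly (we record this elementary computation; recall that $\Im \nu$ is of order $\epsilon^4$): for $1 \leq p < 3$,
\[
\norm{K}_{L^p(B_1^c)}^p = \int_{|x| \geq 1} \frac{e^{-p\,\Im\nu\,|x|}}{(4\pi |x|)^p}\,dx \lesssim \int_1^\infty r^{2-p} e^{-p\,\Im\nu\, r}\,dr \lesssim (\Im\nu)^{p-3},
\]
so that $\norm{K}_{L^p(B_1^c)} \lesssim (\Im\nu)^{1-3/p} \sim \epsilon^{4-\frac{12}{p}}$, while for $p > 3$ the integral is bounded uniformly in $\epsilon$; this accounts for the factor $1 + \epsilon^{4-\frac{12}{p}}$ above.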
On the other hand, since $V$ decays sufficiently fast, we have
\[ \norm{V(H-i\omega_3)^{-1}\varphi}_{L^1 \cap L^2} \lesssim \norm{(H-i\omega_3)^{-1}\varphi}_{L^2_{\mathrm{loc}}} \lesssim \norm{\wei{x}^{s}\varphi}_{L^2}. \]
Then, it follows from \eqref{L2.nu.sp} that
\[ \norm{(H-i\omega_3)^{-1}\varphi}_{L^p} \lesssim [1 + \epsilon^{4 - \frac{12}{p}}][\norm{\varphi}_{L^1} + \norm{\wei{x}^{s}\varphi}_{L^2}]. \]
Using this estimate, we get
\begin{flalign*}
\norm{R_0(\omega_3)\mathbf{P}_cW\Phi_0^+}_{L^p} +\norm{R_0(\omega_3)\mathbf{P}_cW\eta}_{L^p} \leq C_p \epsilon^2[1 + \epsilon^{4 -\frac{12}{p}}].
\end{flalign*}
So, we obtain the first inequality of Lemma \ref{Lp.eta.sp}. Similarly, to prove the second inequality, we only need to show that for some localized function $\varphi$
\[ \norm{\nabla v}_{L^2} \leq C, \quad v: = (H-i\omega_3)^{-1} \varphi. \]
This follows directly by multiplying the equation $(H-i\omega_3) v= \varphi$ by $\bar{v}$ and integrating over $\mathbb{R}^3$.
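For the reader's convenience, here is the computation (with $H = -\Delta + V - E$ and assuming, as usual, that $V$ is real and bounded): multiplying $(H - i\omega_3)v = \varphi$ by $\bar{v}$, integrating over $\mathbb{R}^3$, and taking real parts gives
\[
\norm{\nabla v}_{L^2}^2 = \Re \int \varphi \bar{v}\,dx - \int V|v|^2\,dx + \Re(E + i\omega_3)\,\norm{v}_{L^2}^2
\leq \big(\norm{\varphi}_{L^2} + (\norm{V}_{L^\infty} + |E| + |\omega_3|)\norm{v}_{L^2}\big)\norm{v}_{L^2},
\]
which is bounded since $\norm{v}_{L^2}$ is controlled by the first part of the lemma.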
Finally, we prove the last inequality of Lemma \ref{Lp.eta.sp}. Again, note that $h_3$ and $h_5$ are of the form
\[ h_3 = \text{ok} + A_1(H-i\omega_3)^{-1}\varphi, \quad h_5 = \text{ok} + A_2(H+i\omega_5)^{-1}\varphi_*. \]
Here, ``$\text{ok}$'' denotes terms which are localized and of order $\epsilon^2$, $\varphi, \varphi_*$ are localized functions of order $\epsilon^2$, and $A_1, A_2$ are constant matrices of order one. So, we get
\betagin{flalign*}
|\wei{Jh_5, \bar{h}_3}| & \leq C[\epsilon^4 + |((H+i\omega_5)^{-1}\varphi_*, (H+i\bar{\omega}_3)^{-1}\bar{\varphi})|] \\
& \leq C[\epsilon^4 + |((H-i\omega_3)^{-1} (H+i\omega_5)^{-1}\varphi_*, \bar{\varphi})|] \\
& \leq C\epsilon^4.
\end{flalign*}
Here, in the last step, we used the fact that for $\alpha_1, \alpha_2 \in \mathbb{C}$ such that $\Im \alpha_j >0$ and $|\Re \alpha_j| \in [a_1, a_2] \subset (0, \infty)$ with $j =1,2$, we have
\begin{equation} \label{bo.H}
\norm{(H-\alpha_1)^{-1}(H-\alpha_2)^{-1}P_c(H)g}_{L^2_{-s}} \leq C\norm{g}_{L^2_{s}}, \ \forall \ g \in L^2_s. \end{equation}
Here, the constant $C>0$ is independent of $\alpha_1$ and $\alpha_2$. One can prove \eqref{bo.H} by using Mourre estimates and the argument as in \cite{TY1}, where the authors proved similar estimates for linearized operators and $\alpha_1 = \alpha_2$. For a different approach, one can see \cite{C2}.
\end{proof}
\begin{lemma} \label{sim.ome} The eigenvalues $\omega_j$ defined in the proof of Theorem \ref{Spectrum.C.R} are simple for $j=3,4,5,6$.
\end{lemma}
\begin{proof}
It suffices to prove the lemma for $j=3$. Suppose by contradiction that there is $\widetilde{\Phi}$ such that $[\mathbf{H} - \omega_3] \widetilde{\Phi} = \Phi_0^+ +h_3$. Note that $h_3 = \eta = \eta(\omega_3)$. We write $\widetilde{\Phi} = c\Phi_0^+ +h$ where $h \in \mathbf{E}, h \perp \Phi_0^+$ and $c$ is in $\mathbb{C}$. Then, we have
\[ c[i(e_0-E) -\omega_3]\Phi_0^+ + W [c\Phi_0^+ + h] + (JH -\omega_3)h = \Phi_0^+ + \eta. \]
Equivalently,
\begin{equation}\label{sys.gen}
\left \{\begin{array}{ll}
& c[i(e_0-E) +\frac{1}{2}\wei{\Phi_0^+, W\Phi_0^+} -\omega_3] + \frac{1}{2}\wei{\Phi_0^+, Wh} =1, \\
& (JH -\omega_3)h = - c\mathbf{P}^{\perp} W\Phi_0^+ - \mathbf{P}^\perp Wh + \eta. \end{array} \right.
\end{equation}
Let's now define $h^* = h-c\eta$. From \eqref{he.tau.eta} and \eqref{sys.gen}, we see that $h^*$ solves the equation
\[ h^* = R_0(\omega_3)[\eta - \mathbf{P}^\perp Wh^*]. \]
From \eqref{eta.exp.sp}, Lemma \ref{R0.sp.est} and \eqref{bo.H}, we have
\[
\norm{R_0(\omega_3)\eta}_{L^2_{-s}} \leq C[\epsilon^2 +
\norm{R_0^2(\omega_3)\mathbf{P}_cW\Phi_0^+}_{L^2_{-s}} +
\norm{R_0^2(\omega_3)\mathbf{P}_cW\eta}_{L^2_{-s}}] \leq C\epsilon^2.
\]
Therefore, we get $\norm{h^*}_{L^2_{-s}} \leq C\epsilon^2$. Now, from the first equation of \eqref{sys.gen}, we get
\[c[i(e_0-E) +\frac{1}{2}\wei{\Phi_0^+, W(\Phi_0^++\eta)} -\omega_3] = 1 - \frac{1}{2}\wei{\Phi_0^+, Wh^*}. \]
From \eqref{he.tau.eta}, we see that $i(e_0-E) +\frac{1}{2}\wei{\Phi_0^+, W[\Phi_0^+ +\eta]} -\omega_3 =0$. Therefore, we get $1 - \frac{1}{2}\wei{\Phi_0^+, Wh^*} =0$. This is a contradiction since $|(\Phi_0^+, Wh^*)| \leq C\epsilon^4$. So, $\omega_3$ is simple and the lemma follows.
\end{proof}
As a corollary of Theorem \ref{Spectrum.C.R}, we obtain the same
spectral properties around symmetry transformed ground states:
\begin{corollary} \label{Spectrum.C.R-cor} For
$r=(r_1,r_2,r_3) \in \mathbb{R}^3,$ and for $j=0,1,2,3$, let
$\Phi_{0j}^r : = r* \Phi_{0j} = e^{-Jr_1} \Phi_{0j} \circ \Gamma_1(r_2,r_3)$. Similarly, for $j = 1,2,3,4,5,6$, let $\Phi_j^r := e^{-Jr_1}\Phi_j \circ \Gamma_1(r_2,r_3)$. Also, let $\mathbf{E}_j^r, \mathbf{E}^r_-, \mathbf{E}_+^r, \mathbf{E}_c^r$ be exactly as in Theorem \ref{Spectrum.C.R}, for $j=0,1,2$.
Then $\mathbf{E}_j^r$ are invariant under $\mathbf{H}_r$,
$\mathbf{E} = \oplus_{j=0}^2 \mathbf{E}_j^r \oplus \mathbf{E}_-^r \oplus \mathbf{E}_+^r \oplus
\mathbf{E}_c^r$,
and the projections from $\mathbf{E}$ into these subspaces are defined exactly as those corresponding projections in Theorem \ref{Spectrum.C.R}. Moreover, we have
\[
\mathbf{H}_r \Phi_{00}^r = \Phi_{01}^r, \ \mathbf{H}_r \Phi_{0j}^r =0, \
\mathbf{H}_r \Phi_k^r = \omega_k \Phi_k^r, \ \forall \ j = 1,2,3, \
k = 1,2,3,4,5,6.
\]
\end{corollary}
In the non-resonant case, we also have the following result
on the spectrum of $\mathbf{H}_0$, whose proof is simpler, and is
therefore skipped:
\begin{theorem}[$\mathbf{H}_0$-Invariant Subspaces -- Non-Resonant Case]
\label{Spectrum.C.NR}
Assume that $2e_1 < e_0$. Let $\mathbf{H}_0$ be defined as in \eqref{bH.def}. Then, the space $\mathbf{E} = L^2(\mathbb{R}^3, \mathbb{C}^2)$ can be decomposed into the $\mathbf{H}_0$-invariant subspaces as
\[ \mathbf{E} = \oplus_{j=0}^4 \mathbf{E}_j \oplus \mathbf{E}_{c}. \]
If $f$ and $g$ belong to different subspaces, then $\wei{J f, g} =0$. These subspaces and their corresponding projections satisfy the following:
\begin{itemize}
\item[\textup{(i)}]
The subspace $\mathbf{E}_0$ is generated by zero-eigenvectors
\[
\Phi_{0j} = \frac{\partial}{\partial r_j}
\overrightarrow{e^{-ir_1}\widetilde{Q}_E}(\Gamma_1(r_2,r_3) x)
\big|_{r_1=r_2=r_3=0}
\]
for $j=1,2,3$, and a generalized eigenvector
$\Phi_{00} = \partial_E \overrightarrow{\widetilde{Q}_E}$, with $ \mathbf{H}_0 \Phi_{00} = \Phi_{01}, \quad \mathbf{H}_0 \Phi_{0j} = 0, \quad \forall j = 1,2,3$, and moreover,
\[ \wei{J\Phi_{0j}, \Phi_{0k}} = 0, \quad \text{if} \quad (j,k) \notin \{(0,1), (1,0), (2,3), (3,2) \}. \]
The projection $\mathbf{P}_0(\mathbf{H}_0) : \mathbf{E} \rightarrow \mathbf{E}_0$ is defined by $\mathbf{P}_0(\mathbf{H}_0) = \mathbf{P}_{01}(\mathbf{H}_0) + \mathbf{P}_{02}(\mathbf{H}_0)$ with
\begin{flalign*}
\mathbf{P}_{01}(\mathbf{H}_0)f & = \beta_1 \wei{J \Phi_{00}, f}\Phi_{01} - \beta_1 \wei{J\Phi_{01}, f}\Phi_{00}, \\
\mathbf{P}_{02}(\mathbf{H}_0)f & = \beta_2 \wei{J \Phi_{02}, f}\Phi_{03} -\beta_2 \wei{J \Phi_{03}, f}\Phi_{02}, \quad \forall \ f \in \mathbf{E},
\end{flalign*}
where $\beta_1 := \wei{J\Phi_{00}, \Phi_{01}}^{-1} = O(1)$ and $\beta_2 := \wei{J\Phi_{02}, \Phi_{03}}^{-1} = O(\epsilon^{-2})$.
\item[\textup{(ii)}]
There exist $\Phi_1 := \begin{bmatrix} -i\phi_1 + \phi_2 \\ \phi_1 + i\phi_2 \end{bmatrix} + O(\epsilon^2), \Phi_2 := \bar{\Phi}_1$ and purely imaginary numbers $\omega_1 := i(\lambda \epsilon^2 + O(\epsilon^3))$ and $\omega_2 := \bar{\omega}_1$ such that for $j =1,2$, the subspace $\mathbf{E}_j$ is spanned by $\Phi_j$ and
\[ \mathbf{H}_0 \Phi_j = \omega_j \Phi_j, \quad \wei{J \Phi_1, \Phi_1} =-4i, \quad \wei{J \Phi_1, \Phi_2} =0. \]
The projection $\mathbf{P}_j(\mathbf{H}_0) : \mathbf{E} \rightarrow \mathbf{E}_j$ is defined by
\[\mathbf{P}_1(\mathbf{H}_0) f = -\frac{1}{4i}\wei{J\Phi_1, f}\Phi_1, \ \ \mathbf{P}_2(\mathbf{H}_0) f = \frac{1}{4i}\wei{J \Phi_2, f}\Phi_2, \quad \forall \ f \in \mathbf{E}.\]
\item[\textup{(iii)}]
There exist $\Phi_3:= \begin{bmatrix} \phi_0 \\ -i\phi_0 \end{bmatrix} + O(\epsilon^2)$, $\Phi_4:= \bar{\Phi}_3$ and purely imaginary numbers $\omega_3 = i(e_1 - e_0 + O(\epsilon^2)), \omega_4 = \bar{\omega}_3$ such that for $j=3,4$ the subspace $\mathbf{E}_j$ is spanned by $\Phi_j$ and
\[ \mathbf{H}_0 \Phi_j = \omega_j \Phi_j, \quad \wei{J \Phi_3, \Phi_3} = 2i, \quad \wei{J \Phi_3, \Phi_4} =0. \]
The projection $\mathbf{P}_j(\mathbf{H}_0) : \mathbf{E} \rightarrow \mathbf{E}_j$ is defined by
\[\mathbf{P}_3(\mathbf{H}_0) f = \frac{1}{2i}\wei{J\Phi_3, f}\Phi_3, \quad \mathbf{P}_4(\mathbf{H}_0) f = - \frac{1}{2i}\wei{J \Phi_4, f} \Phi_4, \quad \forall \ f \in \mathbf{E}.\]
\item[\textup{(iv)}] $\mathbf{E}_c = \{g \in \mathbf{E} : \wei{Jf, g} = 0, \quad \forall \ f \in \mathbf{E}_j, \ \forall \ j = 0,1,\ldots,4\}$. Its corresponding projection is $\mathbf{P}_c(\mathbf{H}_0) = \textup{Id} - \sum_{j=0}^4 \mathbf{P}_j(\mathbf{H}_0)$.
\end{itemize}
\end{theorem}
\setlength{\unitlength}{1mm}\noindent
\begin{center}
\begin{picture}(120,80)
\put (0,40){\vector(1,0){120}}
\put (120,42){\makebox(0,0)[c]{\scriptsize x}}
\put (60,0){\vector(0,1){80}}
\put (60,82){\makebox(0,0)[c]{\scriptsize y}}
\multiput(60, 76)(0.5, 0){30}{\circle*{0.1}}\put (75,76){\vector(1,0){1}}
\put(83,76){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{H}_0)$}}
{\color{red}\put (60.15,60){\line(0,1){18}}\put (59.8,60){\line(0,1){18}}}
\put(64,60){\makebox(0,0)[c]{\scriptsize$|E|$}}
{\color{red}\put (60.15,20){\line(0,-1){20}}\put (59.8,20){\line(0,-1){20}}}
\put(64,20){\makebox(0,0)[c]{\scriptsize$-|E|$}}
\put(60,40){\circle*{1.5}}
\multiput(60, 40)(0.8, -0.4){30}{\circle*{0.1}}\put (84,28){\vector(2,-1){1}}
\put(100,25){\makebox(0,0)[c]{\scriptsize$\mathbf{E}_0 =\{\Phi_{00} = \partial_EQ_{|r=0}, \Phi_{0j} =\partial_{r_j}Q_{|r=0} , \ j = 1,2,3\}$}}
\put(60,43){\circle*{1.5}}
\put(73,43){\makebox(0,0)[c]{\scriptsize$(\omega_1 =O(\epsilon^2), \Phi_1)$}}
\put(60,37){\circle*{1.5}}
\put(45,37){\makebox(0,0)[c]{\scriptsize$(\omega_2 =\bar{\omega}_1, \Phi_2 =\bar{\Phi}_1)$}}
\put(60,56){\circle*{1.5}}
\put(81,56){\makebox(0,0)[c]{\scriptsize$(\omega_3 =i(e_1-e_0)+O(\epsilon^2), \Phi_3)$}}
\put(60,24){\circle*{1.5}}
\put(45,24){\makebox(0,0)[c]{\scriptsize$(\omega_4 =\bar{\omega}_3, \Phi_4 = \bar{\Phi}_3)$}}
\multiput(60, 4)(0.5, 0){30}{\circle*{0.1}}\put (75,4){\vector(1,0){1}}
\put(82,4){\makebox(0,0)[c]{\scriptsize$\sigma_c(\mathbf{H}_0)$}}
\end{picture}
Figure 4: Spectrum of $\mathbf{H}_0$ in the non-resonant case.
\end{center}
\begin{corollary} \label{Spectrum.N.R-cor}
For $r=(r_1,r_2,r_3) \in \mathbb{R}^3,$ and for $j=0,1,2,3$, let
$\Phi_{0j}^r : = r* \Phi_{0j} = e^{-Jr_1} \Phi_{0j} \circ
\Gamma_1(r_2,r_3)$. Similarly, for $j = 1,2,3,4$, let
$\Phi_j^r := e^{-Jr_1}\Phi_j \circ \Gamma_1(r_2,r_3)$.
Also, let $\mathbf{E}_j^r$ and $\mathbf{E}_c^r$ be exactly as in
Theorem \ref{Spectrum.C.NR}, for $j=0,1,2,3,4$.
Then $\mathbf{E}_j^r$ are invariant under $\mathbf{H}_r$,
$\mathbf{E} = \oplus_{j=0}^4 \mathbf{E}_j^r \oplus \mathbf{E}_c^r$,
and the projections from $\mathbf{E}$ into these subspaces are defined exactly as those corresponding projections in Theorem \ref{Spectrum.C.NR}. Moreover,
\[
\mathbf{H}_r \Phi_{00}^r = \Phi_{01}^r, \ \mathbf{H}_r \Phi_{0j}^r =0, \
\mathbf{H}_r \Phi_k^r = \omega_k \Phi_k^r, \ \forall \ j = 1,2,3, \
k = 1,2,3,4.
\]
\end{corollary}
\subsection{Resolvent Estimates and Decay Estimates}
In this section, we shall study the resolvent
$R(z) = (\overrightarrow{\mathcal{L}}-z)^{-1}$ with $\overrightarrow{\mathcal{L}} := \overrightarrow{\mathcal{L}}_Q$
for $Q = Q_E$ or $\widetilde{Q}_E$. Set $e_{01} := e_1 - e_0$.
In the resonant case, let $\omega$ denote the eigenvalue
in the first quadrant. Recall that
$\omega = i \kappa + \gamma$ with $\kappa = e_{01} + O(\epsilon^2)$
and $\gamma = \gamma_0 \epsilon^4 + O(\epsilon^5)$ for
some constant $\gamma_0 >0$. Also, recall that we can write
\[
\overrightarrow{\mathcal{L}} = JH + W, \quad W = \begin{bmatrix} W_1 & W_2 \\
W_3 & W_4 \end{bmatrix} = O(\epsilon^2), \quad H = H_0 - E.
\]
\begin{lemma}[Resolvent Estimates] \label{Resol.est} Let $R(z) := (\overrightarrow{\mathcal{L}}-z)^{-1}$ be the resolvent of $\overrightarrow{\mathcal{L}}$, and let $\mathbf{B} := \mathbf{B}(\mathbf{E}_{s}, \mathbf{E}_{-s})$ for some $s > 3$, where $\mathbf{E}_s$ is defined in \eqref{Ls}. Then there is a constant $C>0$ independent of $\epsilon$ such that for $\tau \geq |E|$,
\begin{equation} \label{resolvent.est}
\norm{R(i\tau \pm 0)}_{\mathbf{B}}+ \norm{R(-i\tau \pm 0)}_{\mathbf{B}} \leq C [(1+\tau)^{-1/2} + (|\tau - \kappa| + \epsilon^4)^{-1}].
\end{equation}
Moreover, for $k = 1,2$, we also have,
\begin{equation}\label{D-resolvent.est}
\norm{R^{(k)}(i\tau \pm 0)}_{\mathbf{B}} +\norm{R^{(k)}(-i\tau \pm 0)}_{\mathbf{B}} \leq C [(1+\tau)^{-(1+k)/2} + (|\tau - \kappa| + \epsilon^4)^{-1}],
\end{equation}
where $R^{(k)}$ is the $k^{\text{th}}$ derivative of $R$.
\end{lemma}
\begin{proof} We prove the lemma for $\overrightarrow{\mathcal{L}} = \mathbf{H}_r$ in the resonant case. For the other cases, the proof is simpler, so we skip it. Without loss of generality, we may assume $\overrightarrow{\mathcal{L}} = \mathbf{H}_0$. First of all, we study the resolvent operator $R(z)$ for $z \in
\mathbb{C}$ near $\pm i e_{01}$. Assume that $|z - i e_{01}| <
\epsilon$ or $|z + ie_{01}| < \epsilon$. For $\Phi \in \mathbf{E}_{s}, U \in
\mathbf{E}$ such that $(\overrightarrow{\mathcal{L}} -z) U = \Phi$, we want to estimate
$\norm{U}_{\mathbf{E}_{-s}}$. We write $U = \Phi^* + \xi$, where $\Phi^* = a
\Phi^+_0 + b\Phi^- _0$ with $a,b \in \mathbb{C}$ and $\xi \perp
\Phi_0^\pm$.
Then the equation $(\overrightarrow{\mathcal{L}} -z) U = \Phi$ is equivalent to
\begin{equation} \label{sys.re}
\left \{
\begin{array}{ll}
& a[i(e_0-E) +\frac{1}{2}\wei{\Phi_0^+, W\Phi_0^+} -z] + \frac{b}{2}\wei{\Phi_0^+, W\Phi_0^-} + \frac{1}{2}\wei{\Phi_0^+, W\xi} = \frac{1}{2}\wei{\Phi_0^+, \Phi}, \\
& b[-i(e_0-E) +\frac{1}{2}\wei{\Phi_0^-, W\Phi_0^-} -z] + \frac{a}{2}\wei{\Phi_0^-, W\Phi_0^+} + \frac{1}{2}\wei{\Phi_0^-, W\xi} =\frac{1}{2} \wei{\Phi_0^-, \Phi}, \\
& \xi = R_0(z)\mathbf{P}_c[\Phi - W(\Phi^* +\xi)] + \sum_{j=1}^3\frac{\wei{\Phi_j^{\pm},\Phi - W(\Phi^* +\xi)}}{2[\pm i(e_1-E)-z]}\Phi_j^\pm.
\end{array}
\right .
\end{equation}
Here $R_0(z) = (JH -z)^{-1}$ which is well-defined if $\mathbb{R}e(z) \not=0$. From \eqref{in.com}, we get
\begin{equation}
\sum_{j=1}^3\frac{\wei{\Phi_j^{\pm},\Phi - W(\Phi^* +\xi)}}{2[\pm i(e_1-E)-z]}\Phi_j^\pm = \sum_{j=1}^3\frac{\wei{\Phi_j^{\pm},\Phi - W\xi}}{2[\pm i(e_1-E)-z]}\Phi_j^\pm.
\end{equation}
From this and \eqref{sys.re}, we can write $\xi$ as $\xi = \xi_1 + \xi_2 + \xi_3 + \xi_4 + \xi_5$ where
\begin{equation} \label{xi.dec}
\begin{split}
\xi_1 & := -a R_0(z)\mathbf{P}_c W\Phi_0^+ - b R_0(z)\mathbf{P}_c W\Phi_0^-, \\
\xi_2 & := -R_0(z)W\xi_1 - \frac{1}{2[\pm i(e_1-E)-z]}\sum_{j=1}^3\wei{\Phi_j^\pm, W\xi_1}\Phi_j^\pm,\\
\xi_3 & := R_0(z)\mathbf{P}_c \Phi + \frac{1}{2[\pm i(e_1-E)-z]}\sum_{j=1}^3\wei{\Phi_j^\pm, \Phi}\Phi_j^\pm, \\
\xi_4 & := -R_0(z)W\xi_3 - \frac{1}{2[\pm i(e_1-E)-z]}\sum_{j=1}^3\wei{\Phi_j^\pm, W\xi_3}\Phi_j^\pm,\\
\xi_5 & := -R_0(z)W(\xi-\xi_1 -\xi_3) \\
& \quad \quad \quad - \frac{1}{2[\pm i(e_1-E)-z]}\sum_{j=1}^3\wei{\Phi_j^\pm, W(\xi -\xi_1 -\xi_3)}\Phi_j^\pm.
\end{split}
\end{equation}
Now, let $z = i\tau + \delta$ with $|\tau - (e_1-e_0)| \leq \epsilon$ and $0 < \delta \leq \epsilon^5$. Then $|\pm i(e_1-E)-z| = O(|e_1-e_0|)$. So, from Lemma \ref{R0.sp.est} and \eqref{xi.dec}, we obtain
\begin{equation}\label{xi1234.est}
\begin{split}
& \norm{\xi_1}_{\mathbf{E}_{-s}} \leq C\epsilon^2[|a| + |b|], \quad \norm{\xi_2}_{\mathbf{E}_{-s}} \leq C\epsilon^4[|a| +|b|], \\
& \norm{\xi_3}_{\mathbf{E}_{-s}} \leq C\norm{\Phi}_{\mathbf{E}_{s}}, \quad \norm{\xi_4}_{\mathbf{E}_{-s}} \leq C\epsilon^2\norm{\Phi}_{\mathbf{E}_{s}}.
\end{split}
\end{equation}
Similarly, we also have
\[ \norm{\xi_5}_{\mathbf{E}_{-s}} \leq C\epsilon^2[\norm{\xi_2}_{\mathbf{E}_{-s}} + \norm{\xi_4}_{\mathbf{E}_{-s}} + \norm{\xi_5}_{\mathbf{E}_{-s}}]. \]
So,
\begin{equation} \label{xi5.est}
\norm{\xi_5}_{\mathbf{E}_{-s}} \leq C\epsilon^4[\norm{\Phi}_{\mathbf{E}_{s}} + \epsilon^2(|a| +|b|)].
\end{equation}
From \eqref{xi1234.est} and \eqref{xi5.est}, we obtain
\begin{equation} \label{xi.re.est}
\norm{\xi}_{\mathbf{E}_{-s}} \leq C [\norm{\Phi}_{\mathbf{E}_s} + \epsilon^2(|a| +|b|)].
\end{equation}
Recall that
\begin{equation} \label{tau.0}
\tau_0 = i(e_0 -E) + \frac{1}{2}\wei{\Phi_0^+, W\Phi_0^+} = i[\kappa + O(\epsilon^2)], \quad \Re \tau_0 =0. \end{equation}
Let $M = (M_{ij})_{i,j =1}^{2}$ be the $2\times 2$ matrix defined by
\begin{equation} \label{M.dec}
\begin{split}
M_{11} : & = \tau_0 -\frac{1}{2}\wei{\Phi_0^+, WR_0(z)\mathbf{P}_cW\Phi_0^+} -z, \\
M_{12} : & = \frac{1}{2}\wei{\Phi_0^+, W\Phi_0^-} - \frac{1}{2}\wei{\Phi_0^+, WR_0(z)\mathbf{P}_cW\Phi_0^-}, \\
M_{21} : & = \frac{1}{2}\wei{\Phi_0^-, W\Phi_0^+} - \frac{1}{2}\wei{\Phi_0^-, WR_0(z)\mathbf{P}_cW\Phi_0^+}, \\
M_{22} : & = \bar{\tau}_0 -\frac{1}{2}\wei{\Phi_0^-, WR_0(z)\mathbf{P}_cW\Phi_0^-} -z.
\end{split}
\end{equation}
Also, let
\begin{equation} \label{XB.def}
X : = \frac{1}{2}\begin{bmatrix}\wei{\Phi_0^+, \Phi} - \wei{\Phi_0^+, W(\xi_3 +\xi_4)} \\ \wei{\Phi_0^-, \Phi} - \wei{\Phi_0^-, W(\xi_3 +\xi_4)} \end{bmatrix},\quad B =\frac{1}{2}\begin{bmatrix} \wei{\Phi_0^+, W(\xi_2 +\xi_5)} \\ \wei{\Phi_0^-, W(\xi_2 +\xi_5)} \end{bmatrix}.
\end{equation}
Then, the first two equations of \eqref{sys.re} become
\begin{equation} \label{sys.re1} M \begin{bmatrix} a \\ b \end{bmatrix} + B = X. \end{equation}
From \eqref{xi.re.est} and \eqref{XB.def}, we have
\begin{equation} \label{XB.est}
|X| \leq C\norm{\Phi}_{\mathbf{E}_s}, \quad |B| \leq C\epsilon^6[\norm{\Phi}_{\mathbf{E}_{s}} + (|a| + |b|)]. \end{equation}
On the other hand, from \eqref{exp.Pperp}, \eqref{R0z.in} and \eqref{tau.0}, we have
\begin{equation}
\begin{split}
M_{11} & = i(\kappa -\tau + O(\epsilon^2)) - i\lambda^2 (\phi_0Q^2,(H-iz)^{-1}\mathbf{P}_c\phi_0Q^2) \\
\ & \quad - 2i\lambda^2(\phi_0|Q|^2, (H+iz)^{-1}\mathbf{P}_c\phi_0|Q|^2) - \delta,\\
M_{22} & = -i(\kappa + \tau +O(\epsilon^2)) -\delta -\frac{1}{2}\wei{\Phi_0^-, WR_0(z)\mathbf{P}_cW\Phi_0^-}.
\end{split}
\end{equation}
Because $|z - i(e_1 -e_0)| \leq \epsilon$ and by the Fermi Golden Rule, we get
\begin{equation} \label{M.o1}
\begin{split}
& \Im M_{11} = (\kappa -\tau) + O(\epsilon^2) = O(\epsilon), \\
& \Re M_{11} = \lambda^2\Im (\phi_0Q^2,(H-iz)^{-1}\mathbf{P}_c\phi_0Q^2) + O(\epsilon^5) \geq \frac{\lambda^2\lambda_0\epsilon^4}{2}, \\
& \Im M_{22} = -(\kappa +\tau) + O(\epsilon^2) = O(|e_0-e_1|), \\
& \Re M_{22} = O(\epsilon^4).
\end{split}
\end{equation}
Note that from the symmetry property of $\widetilde{Q}$, we get
\[ (\phi_0^2, (\Re \widetilde{Q})^2 - (\Im \widetilde{Q})^2) = O(\epsilon^4), \quad (\phi_0^2, \Re \widetilde{Q}\, \Im \widetilde{Q}) = 0. \]
So, it follows that
\[ c_1 := \wei{\Phi_0^+,W\Phi_0^-} = 2i\lambda(\phi_0^2, \widetilde{Q}^2), \quad \Re c_1 = 0, \quad \Im c_1 = O(\epsilon^4). \]
From this and \eqref{M.dec}, we get
\begin{equation} \label{M.o2}
M_{12} = O(\epsilon^4), \quad M_{21} = O(\epsilon^4).
\end{equation}
From \eqref{M.o1} and \eqref{M.o2}, we can find a constant $C_0>0$ depending on $\lambda, \lambda_0$ and $|e_0-e_1|$ such that
\[|\det M| \geq C_0^{-1} [|\kappa -\tau| + \epsilon^4]. \]
From this, \eqref{sys.re1}, \eqref{XB.est}, we obtain
\begin{flalign*}
|a| + |b| & \leq |M^{-1}(X-B)| \leq C_0[|\kappa -\tau| + \epsilon^4]^{-1}\{\norm{\Phi}_{\mathbf{E}_s} + \epsilon^6(|a| + |b|)\}\\
& \leq C_0[|\kappa -\tau| + \epsilon^4]^{-1}\norm{\Phi}_{\mathbf{E}_s} + C_0\epsilon^2(|a| + |b|).
\end{flalign*}
Thus, for sufficiently small $\epsilon$ such that $C_0 \epsilon^2 < \frac{1}{2}$, we get
\begin{equation} \label{ab.est} |a| + |b| \leq 2C_0[|\tau -\kappa| + \epsilon^4]^{-1} \norm{\Phi}_{\mathbf{E}_s}. \end{equation}
From \eqref{xi.re.est} and \eqref{ab.est}, we get $\norm{U}_{\mathbf{E}_{-s}}
\leq C[1+ (|\tau -\kappa| + \epsilon^4)^{-1}] \norm{\Phi}_{\mathbf{E}_s}$.
In other words, we have
\[
\norm{R(i\tau + 0)}_{(\mathbf{E}_{s}, \mathbf{E}_{-s})} \leq C[1+ (|\tau -\kappa| +
\epsilon^4)^{-1}], \ \ \forall \ \tau \in \mathbb{R} : |\tau -(e_1-e_0)|
\leq \epsilon.
\]
Also, the estimates of $\norm{R(i\tau -0)}_{\mathbf{B}}, \norm{R(-i\tau \pm0)}_{\mathbf{B}}$ can be obtained in a similar way. So, we have proved \eqref{resolvent.est} for $\tau$ near $\pm e_{01}$.
Next, for $z = i\tau +0$ where $\tau > e_{01} +\epsilon$ or $|E| \leq \tau \leq e_{01} -\epsilon$, from \cite[Theorem 9.2]{JK}, we have $\norm{R_0(z)}_{\mathbf{B}} \leq C(1+\tau)^{-1/2}$. Then, arguing as above, we also obtain \eqref{resolvent.est}.
Finally, to prove \eqref{D-resolvent.est}, we follow the argument in
\cite[Lemma 2.5]{TY3}. Basically, we obtain
\eqref{D-resolvent.est} by an induction argument and by differentiating the relation $R(z)[1 +WR_0(z)] = R_0(z)$ and using the relations $(1+WR_0)^{-1}=1- WR$, $(1+R_0W)^{-1} = 1 - RW$. The proof of the lemma is then complete.
\end{proof}
\begin{lemma}[Decay Estimates] \label{Decay.est} For any
of our excited states $Q$, let $\overrightarrow{\mathcal{L}} = \overrightarrow{\mathcal{L}}_Q$.
Then we have
\begin{itemize}
\item[\textup{(i)}]
There exists a constant $C>0$ independent of $\epsilon$ such that for all $\eta \in \mathbf{E}_c(\overrightarrow{\mathcal{L}}) \cap H^k$, we have
\[ C^{-1}\norm{\eta}_{H^k} \leq \norm{e^{t\overrightarrow{\mathcal{L}}}\eta}_{H^k} \leq C \norm{\eta}_{H^k}, \ \forall \ \eta \in \mathbf{E}_c(\overrightarrow{\mathcal{L}}) \cap H^k, \ \ k=1,2.\]
\item[\textup{(ii)}] For all $p \in [2, \infty]$, there is a constant $C =C(\epsilon,p)$ such that for all $\eta \in \mathbf{E}_c({\overrightarrow{\mathcal{L}}})$, we have
\[ \norm{e^{t\overrightarrow{\mathcal{L}}} \eta}_{L^p} \leq C(\epsilon) |t|^{-3(1/2-1/p)}\norm{\eta}_{L^{p'}}, \ \quad \text{where} \quad \frac{1}{p} + \frac{1}{p'} =1. \]
\end{itemize}
\end{lemma}
\betagin{proof}
To prove (i), let's define the quadratic form: $\mathcal{Q}[\psi] =
\wei{\overrightarrow{\mathcal{L}}\psi, J\psi} = \wei{\mathbf{K}\psi, \psi}, \ \psi \in \mathbf{E}$.
Recall $\mathbf{K}$ is self-adjoint.
Then, for all $\psi \in \mathbf{E}$, we have
\begin{flalign*}
\frac{\partial}{\partial t}\mathcal{Q}[e^{\overrightarrow{\mathcal{L}}t}\psi] & =\frac{\partial}{\partial t}\wei{\mathbf{K} e^{J\mathbf{K} t}\psi,e^{J\mathbf{K} t}\psi} \\
& =\wei{\mathbf{K} J\mathbf{K} e^{J\mathbf{K} t}\psi,e^{J\mathbf{K} t}\psi} + \wei{\mathbf{K} e^{J\mathbf{K} t}\psi,J\mathbf{K} e^{J\mathbf{K} t}\psi} =0.
\end{flalign*}
Therefore, $\mathcal{Q}[e^{\overrightarrow{\mathcal{L}}t}\psi] = \mathcal{Q}[\psi]$, for all $t \geq 0$ and
for all $\psi \in \mathbf{E}$.
Next, we claim that there exists $C>0$ such that
\begin{equation}
\label{eq.Hk}
C^{-1} \norm{\eta}_{H^1}^2 \leq \mathcal{Q}[\eta] \leq C \norm{\eta}_{H^1}^2,
\ \forall \ \eta \in \mathbf{E}_c(\overrightarrow{\mathcal{L}}) \cap H^1.
\end{equation}
The upper-bound is immediate, so we just need the lower bound
(the ``spectral gap'').
Recall $\mathbf{K} = H - JW$, with $W = O(\epsilon^2)$.
In the non-resonant cases, the lower bound follows, by a simple
perturbation-theoretic argument, from the corresponding
spectral gap for the reference operator $H$.
However, in the resonant cases, the subspaces $\mathbf{E}_c(\overrightarrow{\mathcal{L}})$
and $\mathbf{E}_c(J H)$ are not close, and the argument is more subtle --
so let us assume now we are in the resonant case.
Define the subspaces
\[
S_0 := \text{span} \left\{
\left[ \begin{array}{c} \phi_j \\ 0 \end{array} \right],
\left[ \begin{array}{c} 0 \\ \phi_j \end{array} \right]
\; j = 1,2,3 \right \}, \;\;
S_1 := \text{span} \left\{
\Phi_a, \Phi_{b} \right\}, \;\;
S := S_0 \oplus S_1,
\]
where for $\overrightarrow{\mathcal{L}} = \mathbf{L}_0$, $(a,b)=(1,3)$,
and for $\overrightarrow{\mathcal{L}} = \mathbf{H}_0$, $(a,b)=(3,4)$
(i.e. one eigenfunction from $\mathbf{E}_+$, one from $\mathbf{E}_-$).
Notice $S$ is $8$-dimensional. For any $v \in S_0$,
\[
\frac{ \langle v, \mathbf{K} v \rangle }{ \| v \|^2_{L^2} }
= e_1-E + O(\epsilon^2) = O(\epsilon^2).
\]
Similarly, if $v \in S_0$, $w \in S_1$,
\[
\langle v, \mathbf{K} w \rangle = \langle \mathbf{K} v, w \rangle
= O(\epsilon^2) \| v \|_{L^2} \|w\|_{L^2},
\]
and
\[
\langle v, \; w \rangle = O(\epsilon^2) \| v \|_{L^2} \| w \|_{L^2}
\]
since $\Phi_{a,b} = a_1 \left( \begin{array}{c}
\phi_0 \\ \pm i \phi_0 \end{array} \right) + h$,
for an order-one constant $a_1$, and
with $\|h\|_{L^\infty} = O(\epsilon^2)$.
Now, for $\Phi = \Phi_a$ or $\Phi_b$ and
$\omega = \omega_a$ or $\omega_b$:
\[
\begin{split}
\bar{\omega} \langle \Phi, J \Phi \rangle &=
\langle \overrightarrow{\mathcal{L}} \Phi, J \Phi \rangle =
\langle \mathbf{K} \Phi, \Phi \rangle =
\langle \Phi, \mathbf{K} \Phi \rangle \\
&= -\langle \Phi, J \overrightarrow{\mathcal{L}} \Phi \rangle =
-\omega \langle \Phi, J \Phi \rangle
\end{split}
\]
and since $\bar{\omega} \not= -\omega$,
$\langle \Phi, J \Phi \rangle = 0$, and so
\[
\langle \Phi_a, \mathbf{K} \Phi_a \rangle =
\langle \Phi_b, \mathbf{K} \Phi_b \rangle = 0.
\]
Finally, notice
\[
\begin{split}
\bar{\omega}_a \langle \Phi_a, J \Phi_b \rangle
&= \langle \overrightarrow{\mathcal{L}} \Phi_a, J \Phi_b \rangle
= \langle \mathbf{K} \Phi_a, \Phi_b \rangle
= \langle \Phi_a, \mathbf{K} \Phi_b \rangle \\
&= \langle J \Phi_a, \overrightarrow{\mathcal{L}} \Phi_b \rangle
= \omega_b \langle J \Phi_a, \Phi_b \rangle
= -\omega_b \langle \Phi_a, J \Phi_b \rangle
\end{split}
\]
and since $\bar{\omega}_a \not= -\omega_b$,
we have $\langle \Phi_a, J \Phi_b \rangle = 0$ and so
\[
\langle \Phi_a, \mathbf{K} \Phi_b \rangle = 0.
\]
Combining all these facts, we conclude that
\[
v \in S \; \implies \;
\frac{\langle v, \mathbf{K} v \rangle}{\| v \|^2_{L^2}} = O(\epsilon^2).
\]
We claim now that:
\begin{equation}
\label{gap}
J \eta \perp S_0 \oplus \mathbf{E}_+ \oplus \mathbf{E}_- \;\; \implies \;\;
\langle \eta , \mathbf{K} \eta \rangle \geq \frac{1}{4} |e_1|
\| \eta \|^2_{L^2}.
\end{equation}
Indeed, if not, then for any $v$ in the $9$-dimensional subspace
$S \oplus \lambdangle \eta \rangle$, we would have
$\lambdangle v, \mathbf{K} v \rangle < \frac{1}{2}|e_1| \| v \|^2_{L^2}$.
To see this, we are using
$\lambdangle v, \mathbf{K} \eta \rangle = O(\e^2) \| v \|_{L^2} \| \eta \|_{L^2}$
if $v \in S$, $J \eta \perp S$, which is easily checked,
as well as
$\| v + \eta \|_{L^2}^2 \sim \| v \|_{L^2}^2 + \| \eta \|_{L^2}^2$
for $v \in S$, $J \eta \perp S_0 \oplus \mathbf{E}_+ \oplus \mathbf{E}_-$,
which, in turn, follows from the non-degeneracy of the
matrix $\langle \Phi_j, J \Phi_k \rangle$, and a little
linear algebra.
The mini-max principle would then imply that $\mathbf{K}$ has
at least $9$ eigenvalues (counting multiplicity) below
$\frac{1}{2}|e_1|$, while standard perturbation theory shows,
in fact, that there are just $8$.
Now since the $\Phi_{0j}$ are small $L^2$ perturbations
of elements of $S_0$, we get easily from~\eqref{gap}
\[
\eta \in \mathbf{E}_c \;\; \implies \;\;
\langle \eta, \; \mathbf{K} \eta \rangle \geq C^{-1} \| \eta \|_{L^2}^2.
\]
Then it is straightforward to upgrade this lower bound to $H^1$
by using $\mathbf{K} = \delta(-\Delta + |E|) + (1-\delta) \mathbf{K} + \delta R$,
with $R$ a bounded multiplication operator, for suitably
small $\delta$. This yields~\eqref{eq.Hk}.
From this and since $\mathcal{Q}[e^{\overrightarrow{\mathcal{L}}t}\eta] = \mathcal{Q}[\eta]$, we get
\[
\mathcal{Q}[e^{\overrightarrow{\mathcal{L}}t}\eta] = \mathcal{Q}[\eta] \sim \norm{\eta}_{H^1}^2,
\]
which proves (i) for $k=1$.
A straightforward consequence of the $H^1$ estimate just proved is
that:
\[
\norm{\eta}_{H^3}^2 \sim \norm{\overrightarrow{\mathcal{L}}\eta}_{H^1}^2 \sim
\mathcal{Q}[\overrightarrow{\mathcal{L}}\eta], \ \forall \ \eta \in \mathbf{E}_c \cap H^3.
\]
Since $\mathcal{Q}[\overrightarrow{\mathcal{L}}\eta] = \mathcal{Q}[e^{t\overrightarrow{\mathcal{L}}} \overrightarrow{\mathcal{L}}\eta]$, it follows
that $\norm{\eta}_{H^3} \sim \norm{e^{t\overrightarrow{\mathcal{L}}}\eta}_{H^3}$. Then, by
interpolation, we obtain $\norm{\eta}_{H^2} \sim
\norm{e^{t\overrightarrow{\mathcal{L}}}\eta}_{H^2}$,
which is (i) for $k=2$.
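For the reader's convenience, the interpolation step can be spelled out as follows. By the cases $k=1$ and $k=3$, the operators $e^{\pm t\overrightarrow{\mathcal{L}}}$ are bounded on $\mathbf{E}_c \cap H^1$ and on $\mathbf{E}_c \cap H^3$, uniformly in $t$. Interpolating between these two bounds gives
\[
\norm{e^{\pm t\overrightarrow{\mathcal{L}}}\eta}_{H^2} \leq C \norm{\eta}_{H^2}, \quad \eta \in \mathbf{E}_c \cap H^2,
\]
uniformly in $t$, and applying this bound to both $\eta$ and $e^{t\overrightarrow{\mathcal{L}}}\eta$ yields the claimed two-sided equivalence.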
To prove (ii), we follow the argument in \cite[Section 2.7]{TY3}. Let
$H_* = -\Delta -E$, $A: = JV + W$. Then, we have
$\overrightarrow{\mathcal{L}} = JH_* +A$. Note that $H_*$ has no bound states and $A$
is localized. Now, we define the wave operator $W_+ =\lim_{t
\rightarrow \infty} e^{-t\overrightarrow{\mathcal{L}}} e^{tJH_*}$. Using Lemma
\ref{Resol.est} and the argument in \cite{C1}, it follows that $W_+$
maps $\mathbf{E}$ onto $\mathbf{E}_c(\overrightarrow{\mathcal{L}})$. Moreover, $W_+$ and its inverse
(restricted to
$\mathbf{E}_c(\overrightarrow{\mathcal{L}})$) are bounded $L^p \to L^p$. Then,
from the intertwining property, we have
\[
e^{t\overrightarrow{\mathcal{L}}}\mathbf{P}_c = W_+ e^{tJH_*} (W_+)^*\mathbf{P}_c.
\]
From this and the decay estimates of $e^{tJH_*}$, we obtain (ii).
\end{proof}
\section{Proof of Theorem \ref{m-theorem}} \lambdabel{prom}
We shall divide the proof of Theorem \ref{m-theorem} into two cases.
The first case is for $Q= \tilde{Q}_E$, and the second one is
for $Q= Q_E$. Each of these cases will be divided into two
sub-cases depending on whether $e_0 < 2e_1$ (resonant case) or
$e_0 > 2e_1$ (non-resonant case). We will give the details of the
proof of Theorem \ref{m-theorem} for the resonant case. In the
non-resonant case, the proof is similar, and therefore we only
sketch it.
We draw heavily on~\cite{TY3} in this section.
\subsection{Stable Directions for the Excited State
$\tilde{Q}_E$}
\subsubsection{The Resonant Case} \lambdabel{CR-sub}
For $r = (r_1, r_2, r_3) \in \mathbb{R}^3$, let
$Z_r := (\Phi_{01}^r, \Phi_{02}^r, \Phi_{03}^r),\ G_r := \Phi_{00}^r$
where $\Phi_{0j}^r, j =0,1,2,3$ are defined as in
Corollary \ref{Spectrum.C.R-cor}. Also, let $N_r := (\Phi_1^r, \Phi_2^r, \cdots, \Phi_6^r) \in \mathbf{E}^6$. We shall construct a solution $\psi$ of the equation \eqref{NSE} of the form
\[
\overrightarrow{\psi} = \begin{bmatrix} \mathrm{Re}\, \psi \\ \mathrm{Im}\, \psi \end{bmatrix} =
e^{JEt} [\overrightarrow{Q}_{E,r(t)} + h],
\]
where
\[ h = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} = k +\eta, \quad k= \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} : = a(t) G_r + b(t)\cdot N_r, \quad \eta \in \mathbf{E}_c^r\]
with $a \in \mathbb{C}, \ b := (b_1, b_2, \cdots, b_6) \in \mathbb{C}^6$ and
\[
\overrightarrow{Q}_{E,r} = e^{Jr_1} \begin{bmatrix} \mathrm{Re}\, \widetilde{Q}_E \circ
\Gamma_1(r_2, r_3) \\ \mathrm{Im}\, \widetilde{Q}_E \circ \Gamma_1(r_2,
r_3)\end{bmatrix} .
\]
Since \eqref{NSE} is equivalent to
\[
\partial_t \overrightarrow{\psi} = (\Delta - V) J \overrightarrow{\psi} + \lambda |\overrightarrow{\psi}|^2 J \overrightarrow{\psi}, \]
we get
\begin{equation} \label{k-CR.eqn}
\partial_t \overrightarrow{Q}_{E,r} + \partial_t h = \mathbf{H}_r h + F_1, \quad F_1 := \lambda [2\overrightarrow{Q}_{E,r} \cdot h + |h|^2] Jh + \lambda|h|^2J\overrightarrow{Q}_{E,r}.
\end{equation}
Now, letting $\hat{k} : = \nabla_r [a\cdot G_r + b\cdot N_r] = (\hat{k}_1, \hat{k}_2, \hat{k}_3)$, we get
\begin{flalign*}
& \partial_t \overrightarrow{Q}_{E,r} = \dot{r} \cdot Z_r, \quad \partial_t h = \dot{r}\cdot \hat{k} + \dot{a}G_r + \dot{b}\cdot N_r +\partial_t\eta, \\
& \mathbf{H}_r h = a \Phi_{01}^r + \sum_{j=1}^6 \omega_j b_j \Phi_j^r + \mathbf{H}_{r} \eta.
\end{flalign*}
Therefore, \eqref{k-CR.eqn} becomes
\begin{equation} \label{rab.eqn}
\dot{r}\cdot (Z_r + \hat{k}) -a \Phi_{01}^r + \dot{a} G_r + \dot{b}\cdot N_r + \partial_t\eta = \sum_{j=1}^6 \omega_j b_j \Phi_j^r + \mathbf{H}_{r} \eta +F_1.
\end{equation}
Taking the inner products of \eqref{rab.eqn} with $J\Phi_{00}^r, J\Phi_{02}^r, J\Phi_{03}^r$, we obtain
\begin{flalign*}
& \wei{J\Phi_{00}, \Phi_{01}} \dot{r}_1 + \sum_{j=1}^3 \dot{r}_j \wei{J\Phi_{00}^r, \partial_{r_j}k} = \wei{J\Phi_{00}^r, F_1} + a\wei{J\Phi_{00}, \Phi_{01}}, \\
& \wei{J\Phi_{03}, \Phi_{02}} \dot{r}_2 + \sum_{j=1}^3 \dot{r}_j \wei{J\Phi_{03}^r, \partial_{r_j}k} = \wei{J\Phi_{03}^r, F_1}, \\
& \wei{J\Phi_{02}, \Phi_{03}} \dot{r}_3 + \sum_{j=1}^3 \dot{r}_j \wei{J\Phi_{02}^r, \partial_{r_j}k} = \wei{J\Phi_{02}^r, F_1}.
\end{flalign*}
In other words, we can write $M \dot{r}^T = A$, where $M = M_0 + M_1$ with
\begin{equation} \label{M.CR.def}
M_0 : = \begin{bmatrix}\wei{J\Phi_{00}, \Phi_{01}} & 0 & 0 \\ 0 & \wei{J\Phi_{03}, \Phi_{02}} & 0 \\ 0 & 0 & \wei{J\Phi_{02}, \Phi_{03}} \end{bmatrix}, \quad M_1 : = \begin{bmatrix}\wei{J\Phi_{00}^r, \hat{k}} \\ \wei{J\Phi_{03}^r, \hat{k}} \\ \wei{J\Phi_{02}^r, \hat{k}} \end{bmatrix},
\end{equation}
and $A^T : = ( \wei{J\Phi_{00}^r, F_1} + a\wei{J\Phi_{00}, \Phi_{01}}, \wei{J\Phi_{03}^r, F_1}, \wei{J\Phi_{02}^r, F_1})$. We shall show that the matrix $M$ is invertible and therefore, $r$ satisfies $\dot{r} = [M^{-1}A]^T$. Now, let $F_2 : = \dot{r} \cdot \hat{k} = [M^{-1}A]^T \cdot \hat{k}$ and $F = F_1 -F_2$. Taking the inner product of \eqref{rab.eqn} with $J\Phi_{01}$ and $J\Phi_{j}^r$ for $j =1,2,\cdots, 6$, we obtain
\begin{equation*}
\dot{a} = \wei{J\Phi_{01}, \Phi_{00}}^{-1}\wei{J\Phi_{01}^r, F}, \quad \dot{b} = B.
\end{equation*}
Here $B = \omega + B'$ where $\omega = (\omega_1, \omega_2,\cdots, \omega_6)$ and $B' = (B_1', B_2', \cdots, B_6') \in \mathbb{C}^6$ with
\begin{equation} \label{B.CR.def}
\begin{split}
B'_j & = (J\Phi_j, \Phi_j)^{-1}\wei{J\Phi_j^r,F}, \quad j =1,2.\\
B_3' & = (J\Phi_6,\Phi_3)^{-1}\wei{J\Phi_6^r,F}, \quad B'_6=(J\Phi_3,\Phi_6)^{-1}\wei{J\Phi_3^r,F}, \\
B_4' & = (J\Phi_5,\Phi_4)^{-1}\wei{J\Phi_5^r,F}, \quad B'_5=(J\Phi_4,\Phi_5)^{-1}\wei{J\Phi_4^r,F}.
\end{split}
\end{equation}
Moreover, applying the projection $\mathbf{P}_c(\mathbf{H}_r)$ to \eqref{rab.eqn}, we obtain the equation for $\eta$
\begin{equation} \label{eta.CR.eqn} \partial_t \eta = \mathbf{H}_r \eta + \mathbf{P}_c(\mathbf{H}_r)F. \end{equation}
Since $\mathbf{H}_r$ is time dependent, it is not easy to estimate $\eta$
directly from \eqref{eta.CR.eqn}. To overcome this difficulty, we
use the following transformation: let
$\widetilde{\eta} = \mathbf{P}_c(\mathbf{H}_0)\eta$. Then \eqref{eta.CR.eqn} becomes
\begin{equation}
\partial_t \widetilde{\eta} = \mathbf{H}_0 \widetilde{\eta} + \mathbf{P}_c(\mathbf{H}_0)NL, \quad NL:=\widetilde{F} + \mathbf{P}_c(\mathbf{H}_r)F,
\end{equation}
where $\widetilde{F}= (\mathbf{H}_r - \mathbf{H}_0)\eta$. Moreover, we have
\begin{flalign} \notag
& \mathbf{P}_c(\mathbf{H}_r) -\mathbf{P}_c(\mathbf{H}_0) =\\ \label{U.est}
& = \sum_{j=0}^2 [\mathbf{P}_j(\mathbf{H}_0) -\mathbf{P}_j(\mathbf{H}_r)] +[\mathbf{P}_+(\mathbf{H}_0) -\mathbf{P}_+(\mathbf{H}_r)] +[\mathbf{P}_-(\mathbf{H}_0) -\mathbf{P}_-(\mathbf{H}_r)].
\end{flalign}
From this and Lemma \ref{L2.eta.sp}, we see that $\mathbf{P}_c(\mathbf{H}_r) -\mathbf{P}_c(\mathbf{H}_0) = O(|r|)$. Since $\eta = \widetilde{\eta} + [\mathbf{P}_c(\mathbf{H}_r) -\mathbf{P}_c(\mathbf{H}_0)]\eta$, $|r|$ is sufficiently small and $\epsilon$ is fixed, we can solve for $\eta$ in terms of $\widetilde{\eta}$ via
\begin{equation} \label{U.CR.def}
\eta = \mathbf{U}_r \widetilde{\eta}, \quad \mathbf{U}_r : = \sum_{j=0}^\infty [\mathbf{P}_c(\mathbf{H}_r) -\mathbf{P}_c(\mathbf{H}_0)]^j.
\end{equation}
Now, for given $\eta_\infty \in \mathbf{E}_c(\mathbf{H}_0)$, let $\widetilde{\eta} = e^{\mathbf{H}_0t}\eta_\infty + g$, where $g$ satisfies the equation
\begin{equation*}
\partial_t g = \mathbf{H}_0 g + \mathbf{P}_c(\mathbf{H}_0) NL,
\end{equation*}
and we want $g(t) \rightarrow 0$ in some sense as $t \rightarrow \infty$. In summary, we shall construct a solution $\psi$ of \eqref{NSE} as
\[
\overrightarrow{\psi} = e^{JEt} [\overrightarrow{Q}_{E,r(t)} + a \Phi_{00}^r + b \cdot
N_r + \mathbf{U}_r(\xi + g)],
\]
where $a, b,r$ satisfy the system of equations
\begin{equation}
\left \{ \begin{array}{ll}
\dot{a} & = \wei{J\Phi_{01}, \Phi_{00}}^{-1}\wei{J\Phi_{01}^r, F}, \\
\dot{b} & = B, \quad \dot{r} = [M^{-1}A]^T, \\
\dot g & = \mathbf{H}_0 g + \mathbf{P}_c(\mathbf{H}_0)NL.
\end{array} \right.
\end{equation}
Now, for $\delta >0$ sufficiently small, let
\begin{flalign*}
\mathcal{X} : = \{ & (a,r,b, g): [0, \infty) \rightarrow \mathbb{C} \times \mathbb{R}^3 \times \mathbb{C}^6 \times (\mathbf{E}_c(\mathbf{H}_0) \cap H^2) : \\
\ & |a(t)|, |b(t)|, |r_j(t)| \leq \delta^{7/4} (1+t)^{-2}, \quad j = 2,3; \quad |r_1(t)| \leq 2\delta^{7/4}(1+t)^{-1}, \\
\ & \norm{g(t)}_{\mathbf{E} \cap H^2} \leq \delta^{7/4}(1+t)^{-3/2}\}.
\end{flalign*}
Then, we define the map $\Omega :\mathcal{X} \rightarrow \mathcal{X}$ with $\Omega(a,r,b,g) = (a^*,r^*,b^*,g^*)$ as
\begin{flalign*}
a^*(t) & = \int_\infty^t [\wei{J\Phi_{01}, \Phi_{00}}^{-1}\wei{J\Phi_{01}^r, F}](s) ds, \\
r^*(t) & = \int_\infty^t [M^{-1}A]^T(s) ds, \\
b^*_j(t) & = \int_\infty^t e^{\omega_j(t-s)} B_j'(s) ds, \quad j = 1,2,\cdots, 4,\\
b^*_k(t) & = e^{\omega_kt}b_{k}(0) + \int_0^t e^{\omega_k(t-s)} B_k'(s) ds, \quad k = 5,6,\\
g^*(t) & = \int_{\infty}^t e^{\mathbf{H}_0(t-s)} \mathbf{P}_c NL(s) ds.
\end{flalign*}
Here $F = F_1 - F_2$ where $F_1$ is defined in \eqref{k-CR.eqn} and $F_2 = [M^{-1}A]^T \cdot \hat{k} =[M^{-1}A]^T \cdot \nabla_r k$. Moreover, $\eta = \mathbf{U}_r [e^{\mathbf{H}_0 t} \eta_\infty + g]$. Note also that for $k =5,6$, we have $\mathrm{Re}\, \omega_k <0$. Therefore, the terms $e^{\omega_kt}b_{k}(0)$ in the equations for $b^*_k$ decay exponentially and so $b_k(0)$ can be freely chosen.
\begin{lemma} The map $\Omega$ is well-defined and it is a contraction map if $\delta$ is sufficiently small and
\[ \norm{\eta_\infty}_{H^2 \cap W^{2,1}} \leq \delta \quad \text{and} \quad |b_k(0)| \leq \delta^2/4, \ \forall \ k =5,6.\]
\end{lemma}
\begin{proof} Recall that $\eta := \xi + \mathbf{U}_r g,\ \xi:=\mathbf{U}_r e^{\mathbf{H}_0 t} \eta_\infty$. Since $\norm{\eta_\infty}_{H^2 \cap W^{2,1}} \leq \delta$, we get
\[ \norm{\xi(t)}_{H^2} \leq C(\epsilon)\delta, \quad \norm{\xi(t)}_{W^{2,\infty}} \leq C(\epsilon) \delta |t|^{-3/2}. \]
Therefore,
\begin{equation*}
\norm{|\xi|^2J\xi}_{H^2} \leq C(\epsilon)\delta^3 (1+t)^{-3}.
\end{equation*}
Now, define $F_0 : = 2(\overrightarrow{Q}_{E,r} \cdot \eta) J\eta + |\eta|^2 J\overrightarrow{Q}_{E,r} + |\eta|^2 J\eta$. We have $\norm{F_0}_{H^2} \leq \delta^2 (1+t)^{-3}$. From this and \eqref{k-CR.eqn}, we see that $F_0$ is the main term of $F_1$. So, we obtain
\begin{equation} \label{F1.CR.est}
\norm{F_1}_{H^2} \leq C(\epsilon)\delta^2 (1+t)^{-3}.
\end{equation}
From \eqref{M.CR.def}, we have
\[
\norm{M_1} \leq C \norm{\hat{k}}_{L^2} \leq C [|a| +|b|] \leq
C\delta^{7/4}(1+t)^{-2}.
\]
Therefore, for sufficiently small $\delta$, $M = M_0 + M_1$ is invertible and $M^{-1} = (I_3 + M_0^{-1}M_1)^{-1}M_0^{-1}$, where $I_3$ is the $3\times 3$ identity matrix. From this and the explicit formula of $A$, we obtain
\begin{equation}\label{MA.CR.est}
\begin{split}
\norm{(M^{-1}A)_1} &\leq \frac{3}{2}\left[ |a| + \norm{M_0^{-1}} \norm{F_1}_{L^2_{-s}}\right] \leq \frac{3}{2} |a| + C(\epsilon) \norm{F_1}_{L^2_{-s}}, \\
\norm{(M^{-1}A)_k} & \leq 2\norm{M_0^{-1}} \norm{F_1}_{L^2_{-s}} \leq C(\epsilon)\norm{F_1}_{L^2_{-s}}, \ k = 2,3.
\end{split}
\end{equation}
Then, it follows from \eqref{F1.CR.est} and \eqref{MA.CR.est} that
\begin{equation} \label{F.CR.est}
\norm{F}_{H^2} \leq \norm{F_1}_{H^2} + \norm{F_2}_{H^2} \leq C(\epsilon) \delta^2 (1+t)^{-3}. \end{equation}
On the other hand, we also have
\[ \norm{\widetilde{F}}_{H^2} = \norm{(\mathbf{H}_r - \mathbf{H}_0)\eta}_{H^2} \leq C\epsilon |r| \norm{\eta}_{L^\infty} \leq C(\epsilon) \delta^2(1+t)^{-5/2}. \]
From this and \eqref{F1.CR.est}, we get
\begin{equation} \label{NL.CR.est}
\norm{NL}_{H^2} \leq C(\epsilon) \delta^2 (1+t)^{-5/2}.
\end{equation}
From Lemma \ref{Decay.est} and \eqref{NL.CR.est}, we obtain
\[
\norm{g^*(t)}_{H^2} \lesssim \int_{\infty}^t
\norm{NL(s)}_{H^2} ds \leq C(\epsilon) \delta^2 (1+t)^{-3/2} \leq
\delta^{7/4}(1+t)^{-3/2}.
\]
On the other hand, from \eqref{B.CR.def}, \eqref{MA.CR.est} and \eqref{F.CR.est} and by direct computation, we also obtain
\[
|a^*|, |b^*|, |r_j^*| \leq \delta^{7/4}(1+t)^{-2}, \quad \forall j =
2,3\quad \text{and} \quad |r_1^*(t)| \leq 2\delta^{7/4}(1+t)^{-1}.
\]
Hence, $\Omega$ maps $\mathcal{X}$ into $\mathcal{X}$. Also, it is
easily checked that $\Omega$ is a contraction map if $\delta$ is sufficiently small. So, the lemma follows.
\end{proof}
Now, to complete the proof of Theorem \ref{m-theorem}
for $Q = \tilde{Q}_E$ in the resonant case, we shall prove that
\begin{equation} \label{CR.as}
\norm{\psi_{as}(t) - \psi(t)}_{H^2} \leq C(\epsilon) (1+t)^{-3/2},
\end{equation}
where $\psi, \psi_{as}$ are defined by
\begin{flalign*}
& \psi = e^{JEt} [\overrightarrow{Q}_{E,r(t)} + a G_r + b \cdot N_r + \mathbf{U}_r(\xi + g)]\\
& \psi_{as}(t) = e^{JEt}[\overrightarrow{Q}_{E,0} + \xi], \quad \xi :=e^{\mathbf{H}_0t}\eta_0.
\end{flalign*}
We have
\[\psi_{as}(t) - \psi(t) = e^{JEt} [\overrightarrow{Q}_{E,0} - \overrightarrow{Q}_{E,r} + a G_r + b \cdot N_r + (1-\mathbf{U}_r) \xi + \mathbf{U}_r g]. \]
From Lemma \ref{Lp.eta.sp} and \eqref{U.est}, we get
\begin{equation*}
\begin{split}
& \norm{\overrightarrow{Q}_{E,0} - \overrightarrow{Q}_{E,r} + a G_r + b \cdot N_r}_{H^2} \leq C(\epsilon)[|r| +|a| +|b|] \\
& \ \quad \quad \leq C(\epsilon) \delta^{7/4}(1+t)^{-1}, \\
& \norm{\mathbf{U}_r g}_{H^2} \leq C(\epsilon) \norm{g}_{H^2} \leq C(\epsilon) (1+t)^{-3/2}.
\end{split}
\end{equation*}
On the other hand, using \eqref{U.est} again, we obtain
\[\norm{(1-\mathbf{U}_r)\xi}_{H^2} \leq C(\epsilon)|r| \norm{\xi}_{H^2} \leq C(\epsilon) \delta^{11/4}(1+t)^{-3/2}. \]
Therefore, we obtain \eqref{CR.as}.
\subsubsection{The Non-Resonant Case}
We shall construct a solution $\psi$ of the equation \eqref{NSE} of the form
\[
\overrightarrow{\psi} = \begin{bmatrix} \mathrm{Re}\, \psi \\ \mathrm{Im}\, \psi \end{bmatrix} =
e^{JEt} [\overrightarrow{Q}_{E,r} + h],
\]
where
\[ h = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} = k +\eta, \quad k= \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} : = a(t) G_r + b(t)\cdot N_r, \quad \eta \in \mathbf{E}_c^r\]
with $a \in \mathbb{C}, \ b := (b_1, b_2, \cdots, b_4) \in \mathbb{C}^4$ and
\[
\overrightarrow{Q}_{E,r} = e^{Jr_1} \begin{bmatrix} \mathrm{Re}\, \widetilde{Q}_E \circ
\Gamma_1(r_2, r_3) \\ \mathrm{Im}\, \widetilde{Q}_E \circ \Gamma_1(r_2,
r_3)\end{bmatrix}.
\]
Here, $G_r = \Phi_{00}^r$ and $N_r = (\Phi_1^r,\cdots, \Phi_4^r)$. As in the previous section, we also obtain the equation of $a,b,\eta$ as
\begin{equation} \label{NR.C.eqn}
\left \{ \begin{array}{ll}
\dot{a} & = \wei{J\Phi_{01}, \Phi_{00}}^{-1}\wei{J\Phi_{01}^r, F}, \\
\dot{b} & = B, \quad \dot{r} = [M^{-1}A]^T, \\
\dot g & = \mathbf{H}_0 g + \mathbf{P}_c(\mathbf{H}_0)NL.
\end{array} \right.
\end{equation}
where $\eta = \mathbf{U}_r[e^{\mathbf{H}_0t}\eta_\infty + g]$ and $F, M, A, NL$ are defined exactly the same way. The map $\Omega$ is defined exactly as before, except that we do not have the equations for $b^*_5$ and $b^*_6$.
\subsection{Stable Directions for the Excited State
$Q_E$}
\subsubsection{The Resonant Case} \label{R-R-sub}
For $r = (r_1, r_2, r_3) \in \mathbb{R}^3$, let $Z_r := (\Phi_{01}^r,
\Phi_{02}^r, \Phi_{03}^r), G_r := (\widetilde{\Phi}_{01}^r, \widetilde{\Phi}_{02}^r,
\widetilde{\Phi}_{03}^r) \in \mathbf{E}^3$ where $\Phi_{0j}^r, \widetilde{\Phi}_{0j}^r, j
=1,2,3$ are defined as in Corollary \ref{Cor.R}.
Also, let $N_r := (\Phi_1^r, \Phi_2^r, \Phi_3^r, \Phi_4^r) \in \mathbf{E}^4$.
We shall construct a solution $\psi$ of the equation \eqref{NSE} of the form
\[
\overrightarrow{\psi} = \begin{bmatrix} \mathrm{Re}\, \psi \\ \mathrm{Im}\, \psi \end{bmatrix} =
e^{JEt} [\overrightarrow{Q}_{E,r} + h], \quad \text{where} \quad h : = k + \eta,
\ k:= a(t) \cdot G_r + b(t)\cdot N_r
\]
with $a := (a_1, a_2, a_3) \in \mathbb{C}^3, \ b := (b_1, b_2, \cdots, b_4) \in \mathbb{C}^4, \eta \in \mathbf{E}_c^r$ and
\[
\overrightarrow{Q}_{E,r} = e^{Jr_1} \begin{bmatrix} Q_E \circ
\Gamma_0(r_1, r_2) \\ 0\end{bmatrix}.
\]
Since \eqref{NSE} is equivalent to
\[ \partial_t \overrightarrow{\psi} = (\Delta -V) J \overrightarrow{\psi} + \lambda |\overrightarrow{\psi}|^2 J \overrightarrow{\psi}, \]
we obtain
\begin{equation} \label{k-R.eqn}
\partial_t \overrightarrow{Q}_{E,r} + \partial_t h = \mathbf{L}_r h + F_1, \quad F_1 := \lambda [ 2\overrightarrow{Q}_{E,r} \cdot h + |h|^2] Jh + \lambda|h|^2J\overrightarrow{Q}_{E,r}.
\end{equation}
Letting $\hat{k} := \nabla_r [a\cdot G_r + b\cdot N_r]$, we have
\begin{flalign*}
& \partial_t \overrightarrow{Q}_{E,r} = \dot{r} \cdot Z_r, \\
& \partial_t h = \partial_t \eta + \dot{a}\cdot G_r + \dot{b} \cdot N_r + \dot{r}\cdot \hat{k}, \\
& \mathbf{L}_r h = \mathbf{L}_r \eta + a \cdot Z_r + \sum_{j=1}^4 \omega_j b_j \Phi_j^r.
\end{flalign*}
Therefore, the equation \eqref{k-R.eqn} becomes
\begin{equation} \label{R-eqn.all}
(\dot{r} -a)\cdot Z_r + \dot{r}\cdot \hat{k}+ \dot{a}\cdot G_r + \sum_{j=1}^4 (\dot{b}_j - \omega_j b_j)\Phi_j^r + \partial_t \eta = \mathbf{L}_r \eta + F_1.
\end{equation}
Now, taking the inner product of \eqref{R-eqn.all} with $\widetilde{\Phi}^r_{0j}, j =1,2,3$, we obtain
\[ (M_0 +M_1) \dot{r} = M_0 a + A, \]
where
\begin{equation} \label{R-M.def}
M_0 : = \begin{bmatrix}\wei{J\widetilde{\Phi}_{01}, \Phi_{01}} & 0 & 0 \\ 0 & \wei{J\widetilde{\Phi}_{02}, \Phi_{02}} & 0 \\ 0 & 0 & \wei{J\widetilde{\Phi}_{03}, \Phi_{03}} \end{bmatrix}, \quad M_1 : = \begin{bmatrix}\wei{J\widetilde{\Phi}_{01}^r, \hat{k}} \\ \wei{J\widetilde{\Phi}_{02}^r, \hat{k}} \\ \wei{J\widetilde{\Phi}_{03}^r, \hat{k}} \end{bmatrix},
\end{equation}
and $A : = (A_1, A_2, A_3)^T$, where $A_j = \wei{J\widetilde{\Phi}_{0j}^r, F_1}$. As we will see, the matrix $(M_0 +M_1)$ is invertible. Therefore, we obtain
\begin{equation}\label{R-r.eqn}
\dot{r} = (1 + M_0^{-1}M_1)^{-1}a + (1+M_0^{-1}M_1)^{-1}M_0^{-1}A.
\end{equation}
Now, for $\eta_\infty \in \mathbf{E}_c(\mathbf{L}_0)$, let $g$ be such that $\eta = \mathbf{U}_r[e^{\mathbf{L}_0 t}\eta_\infty +g]$ where $\mathbf{U}_r$ is defined exactly the same way as in \eqref{U.CR.def}. Taking the inner products of \eqref{R-eqn.all} with $J\Phi_{0j}^r, \Phi_k^r$ we also get
\begin{equation} \label{R.ab.eqn} \left \{
\begin{array}{lll}
& \dot{a}_j & = \wei{J\Phi_{0j}, \widetilde{\Phi}_{0j}}^{-1} \wei{J\Phi_{0j}^{r}, F}, \\
& \dot{b}_k & = \omega_k b_k + B_k, \\
& \dot{g} & = \mathbf{L}_0 g + \mathbf{P}_c(\mathbf{L}_0) NL,
\end{array} \right.
\end{equation}
where $F= F_1 -F_2, F_2 : = \dot{r} \cdot \hat{k} =[(1 + M_0^{-1}M_1)^{-1}a + (1+M_0^{-1}M_1)^{-1}M_0^{-1}A]^T \cdot \hat{k}$, $NL = (\mathbf{L}_r - \mathbf{L}_0)\eta + \mathbf{P}_c(\mathbf{L}_r)F $ and $B$ satisfies $|B| \leq \norm{F}_{L^2_{-s}}$. Now, we define
\begin{flalign*}
\mathcal{Y} : = \{ & (a,r,b, g): [0, \infty) \rightarrow \mathbb{C}^3 \times \mathbb{R}^3 \times \mathbb{C}^4 \times (\mathbf{E}_c(\mathbf{L}_r) \cap H^2) : \\
\ & |a(t)|, |b(t)| \leq \delta^{7/4} (1+t)^{-2}, \quad |r(t)| \leq 2\delta^{7/4}(1+t)^{-1}, \\
\ & \norm{g(t)}_{H^2} \leq \delta^{7/4}(1+t)^{-3/2}\}.
\end{flalign*}
Let $\Omega : \mathcal{Y} \rightarrow \mathcal{Y}$ be defined as $\Omega(a,r,b,g) = (a^*, r^*, b^*, g^*)$ where
\begin{flalign*}
a_j^*(t) &= \int_{\infty}^t \wei{J\Phi_{0j}, \widetilde{\Phi}_{0j}}^{-1} \wei{J\Phi_{0j}^{r}, F}(s) ds, \quad j = 1,2,3, \\
r^*(t) & = \int_{\infty}^t [(1 + M_0^{-1}M_1)^{-1}a + (1+M_0^{-1}M_1)^{-1}M_0^{-1}A](s) ds ,\\
b_k^*(t) & = \int_{\infty}^t e^{\omega_k(t-s)}B_k(s) ds, \quad k = 1,3,\\
b_l^*(t) & = e^{\omega_lt}b_l(0) + \int_0^t e^{\omega_l(t-s)}B_l(s) ds , \quad l = 2,4, \\
g^*(t) & = \int_{\infty}^t e^{\mathbf{L}_0(t-s)}\mathbf{P}_c(\mathbf{L}_0)NL(s) ds.
\end{flalign*}
Note that $\mathrm{Re}(\omega_l) < 0$ for $l = 2,4$. Therefore, the terms $e^{\omega_l t}b_l(0)$ decay exponentially and we only need to require $|b_l(0)| \leq \delta^2/4$. Then, as in Subsection \ref{CR-sub}, we can show that there exist $\epsilon_0$ and $\delta_0(\epsilon)$ such that for $0 < \epsilon \leq \epsilon_0$ and $0 < \delta \leq \delta_0(\epsilon)$, the map $\Omega$ is well-defined and a contraction. Therefore, we obtain the solution $\psi$ of \eqref{NSE} of the form
\[\psi(t) = e^{JEt}[\overrightarrow{Q}_{E,r} + a\cdot G_r + b\cdot N_r + \mathbf{U}_r(e^{\mathbf{L}_0 t}\eta_\infty + g)] \]
and $\psi$ satisfies Theorem \ref{m-theorem}.
\subsubsection{The Non-Resonant Case}
The construction of the solution $\psi$ is exactly the same as that of Subsection \ref{R-R-sub}. The only difference is that $b \in \mathbb{C}^2$, not $\mathbb{C}^4$ as in Subsection \ref{R-R-sub}. The equation for $b^*$ becomes
\[ b^*(t) = \int_{\infty}^t B(s) ds, \quad B = (B_1, B_2).\]
The equations for $a^*, r^*$ and $g^*$ are the same as those in subsection \ref{R-R-sub}. The proof of Theorem \ref{m-theorem} is now complete.
\section*{Acknowledgments}
We thank T.-P. Tsai and K. Nakanishi for their interest in this work.
S.G. acknowledges the support of NSERC under Grant 251124-07.
\betagin{thebibliography}{10}
\bibitem{Ag} S. AGMON, {\it Spectral properties of Schr\"{o}dinger operators and scattering theory}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 2 (1975), no. 2, pp. 151-218
\bibitem{Al} SIMON L. ALTMANN, {\it Rotations, quaternions, and double groups}, Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1986
\bibitem{BP1} V.S. BUSLAEV AND G.S. PEREL'MAN, {\it Scattering for the nonlinear Schr\"odinger equations: states close to a solitary wave}, St. Petersburg Math J. 4 (1993), pp. 1111-1142.
\bibitem{BP2} V.S. BUSLAEV AND G.S. PEREL'MAN, {\it On the stability of solitary waves for nonlinear Schr\"odinger
equations. Nonlinear evolution equations}, Amer. Math. Soc. Transl. Ser. 2, 164 (1995), pp. 75-98
\bibitem{BS} V.S. BUSLAEV AND G.S. PEREL'MAN, {\it On asymptotic stability of solitary waves for nonlinear Schr\"odinger equations},
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire, 20 (2003), no. 3, pp. 419-475.
\bibitem{C1} S. CUCCAGNA,{\it Stabilization of solutions to nonlinear Schr\"odinger equations}, Comm. Pure Appl. Math., 54
(2001), no. 9, pp. 1110-1145.
\bibitem{C2} S. CUCCAGNA,{\it On asymptotic stability of ground states of nonlinear Schr\"odinger equations}, Rev. Math. Phys., 15 (2003) no.8, pp. 877-903.
\bibitem{ZS1} Z. GANG AND I. M. SIGAL, {\it Relaxation of solitons in nonlinear Schr\"odinger equations with potential}, Adv. Math., 216(2007), no. 2, pp. 443-490.
\bibitem{ZW} Z. GANG AND M.I. WEINSTEIN, {\it Dynamics of Nonlinear Schr\"{o}dinger/Gross-Pitaevskii Equations; Mass Transfer in Systems with Solitons and Degenerate Neutral Modes}, Anal. PDE, Vol. 1 (2008), no. 3, pp. 267-322.
\bibitem{GSS} M. GOLUBITSKY, I. STEWART AND D. G. SHAEFFER, {\it Singularities and Groups in Bifurcation Theory}, Vol. II, Applied Math. Sciences, Springer-Verlag, 1988.
\bibitem{GS} M. GOLUBITSKY AND I. STEWART, {\it The Symmetry Perspective}, Birkh\"{a}user, 2000.
\bibitem{GNT} S. GUSTAFSON, K. NAKANISHI AND T.-P. TSAI, {\it Asymptotic stability and completeness in the energy space for nonlinear
Schr\"odinger equations with small solitary waves}, Int. Math. Res. Not., 66(2004), pp. 3559-3584.
\bibitem{JK} A. JENSEN AND T. KATO, {\it Spectral properties of Schr\"{o}dinger operators and time-decay of the wave functions}, Duke Math. J., 46(1979), no. 3, pp. 583-611.
\bibitem{NTP} K. NAKANISHI, T.V. PHAN AND T.-P. TSAI, {\it Small Solutions of nonlinear Schr\"{o}dinger equations
near first excited states}, Preprint.
\bibitem{Rao} K.N. SRINIVASA RAO, {\it The Rotation and Lorentz Groups and their Representations for Physicists}, John Wiley \& Sons, 1988.
\bibitem{RW} H.A. ROSE AND M.I. WEINSTEIN, {\it On the Bound States of the Nonlinear Schr\"{o}dinger Equation with a Linear Potential}, Physica D, 30(1988), pp. 207-218.
\bibitem{SA} D.H. SATTINGER, {\it Branching in the Presence of Symmetry}, CBMS-NSF Regional Conference Series in Applied Mathematics - Vol 40, Society for Industrial and Applied Mathematics, 1983.
\bibitem{SW1} A. SOFFER AND M.I. WEINSTEIN, {\it Multichannel Nonlinear Scattering Theory for Nonintegrable Equations I}, Com. Math. Phys., 133(1990),
pp. 119-146.
\bibitem{SW2} A. SOFFER AND M.I. WEINSTEIN, {\it Multichannel Nonlinear Scattering Theory for Nonintegrable Equations II}, J. Differential Equations, 98(1992), pp. 376-390.
\bibitem{Tsai} {T.-P. TSAI}, {\it Asymptotic dynamics of nonlinear Schr{\"o}dinger equations with many bound states}, J. Differential Equations, 192(2003), pp. 225-282.
\bibitem{TY1} T.-P. TSAI AND H.-T. YAU, {\it Asymptotic dynamics of nonlinear Schr\"odinger equations: resonance dominated and dispersion dominated solutions}, Comm. Pure Appl. Math., 55(2002), pp. 153-216.
\bibitem{TY2} T.-P. TSAI AND H.-T. YAU, {\it Relaxation of excited states in nonlinear Schr\"odinger equations}, Int. Math. Res. Not., no.31(2002), pp. 1629-1673.
\bibitem{TY3} T.-P. TSAI AND H.-T. YAU, {\it Stable directions for excited states of nonlinear Schr\"odinger equations}, Comm. Partial Differential Equations, 27(2002), no. 11\&12, pp. 2363-2402.
\bibitem{TY4} T.-P. TSAI AND H.-T. YAU, {\it Classification of asymptotic profiles for nonlinear Schr\"odinger equations with small initial
data}, Adv. Theor. Math. Phys., 6(2002), no.1, pp. 107-139.
\bibitem{W2} M.I. WEINSTEIN, {\it Lyapunov stability of ground states of nonlinear dispersive evolution equations}, Comm. Pure Appl. Math.
, 39(1986), pp. 51-68.
\bibitem{Y} K. YAJIMA, {\it The $W^{k,p}$ continuity of wave operators for Schr\"odinger operators}, J. Math. Soc. Japan, 47(1995), no. 3, pp. 551-581.
\end{thebibliography}
\noindent{Stephen Gustafson}, [email protected] \\
Department of Mathematics, University of British Columbia,
Vancouver, BC V6T 1Z2, Canada
\noindent{Tuoc Van Phan}, [email protected] \\
Department of Mathematics, University of British Columbia,
Vancouver, BC V6T 1Z2, Canada
\end{document}
\begin{document}
\title{\LARGE \bf
Robust LQR for Uncertain Discrete-Time Systems using Polynomial Chaos
}
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
In this paper, a polynomial chaos based framework for designing controllers for discrete time linear systems with probabilistic parameters is presented. Conditions for exponential-mean-square stability for such systems are derived and algorithms for synthesizing optimal quadratically stabilizing controllers are proposed in a convex optimization formulation. The solution presented is demonstrated on the derived discrete-time models of a nonlinear F-16 aircraft model trimmed at a set of chosen points.
\end{abstract}
\section{Introduction}
In this paper, we address the problem of designing linear quadratic regulators (LQRs) for discrete-time linear-time-invariant (LTI) systems with parametric uncertainty in the system matrices. Such systems can be defined as:
\begin{align}
\vo{x}^{t+1} = \vo{A}(\vo{\Delta})\vo{x}^t + \vo{B}(\vo{\Delta})\vo{u}^t
\label{eqn:sysEq}
\end{align}
where the system matrices $\vo{A}(\vo{\Delta}) \in \mathbb{R}^{n \times n}$ and $\vo{B}(\vo{\Delta}) \in \mathbb{R}^{n \times m}$ are assumed to be affine functions of $\vo{\Delta} \in \mathcal{D}_{\vo{\Delta}} \subset \mathbb{R}^d$, the parameter space equipped with a chosen probability density function $p(\vo{\Delta})$ \cite{barmish1997uniform}.
The objective is to design a parameter-independent state-feedback law of the form $\vo{u} = \vo{Kx}$ that optimizes the closed-loop performance of the system in an LQR sense.
We solve the problem by \textit{maximizing the lower bound}\cite{willems1971least} on the cost-to-go using a reduced order model.
In \cite{daafouz2001parameter} and \cite{de1999new}, the authors described conditions for establishing robust stability in discrete time systems with time-varying parametric uncertainties. Robust stabilization of similar systems with polytopic uncertainty by decoupling the Lyapunov and system matrices has also been addressed in the past \cite{zhang2007improved}.
However, the state of the art for handling systems like \eqref{eqn:sysEq} involve randomized algorithms \cite{tempo2012randomized}, a framework that requires large datasets to establish desired probabilistic guarantees \cite{kim2013wiener} and becomes computationally intractable for high dimensional parameter spaces.
Polynomial Chaos theory (PC) on the other hand, evokes deterministic algorithms that do not suffer from confidence issues, unlike randomized algorithms \cite{bhattacharya2014robust}.
We employ polynomial chaos theory as a means of numerical approximation \cite{xiu2002wiener} of the system and its uncertainties.
The PC framework is increasingly being used in the modeling of systems with random characteristics
\cite{fisher2009linear}. In \cite{bhattacharya2019robust}, the authors presented a convex optimization formulation in the PC setting for the design of robust quadratic regulators for linear continuous systems \cite{stengel1991stochastic, polyak2001probabilistic} using deterministic algorithms.
Through this work, we expand on their contributions by extending one of the proposed approaches to discrete time systems with parametric uncertainties.
Furthermore, we derive an expression guaranteeing stability of such systems modeled using polynomial chaos theory.
With the conjunction of PC techniques and some useful results associated with Kronecker products \cite{bhattacharya2014robust}, we have truncated the dimension of the function space, thereby making the reduced order model more amenable to control design.
The paper is organized as follows: in section 2, we introduce the polynomial chaos framework that allows for the approximation techniques that follow. Using this framework, we present conditions for stability using standard Lyapunov theory. Formulation of the control objective and the solution methodology using linear matrix inequalities (LMIs) in a convex optimization problem is presented in section 3. The theorems are then applied to an illustrative control design problem obtained from a nonlinear F-16 longitudinal aircraft model \cite{stevens2015aircraft} and the paper concludes with a summary of the findings.
\section{Preliminaries}
\subsection{Polynomial Chaos Theory}
Polynomial Chaos (PC) theory is a deterministic framework to express the evolution of uncertainty in a dynamical system with probabilistic parameters.
Using PC-based approximation techniques, $\vo{x}(t,\vo{\Delta})$ can be expanded as:
\begin{align}
\vo{x}^t(\vo{\Delta}) := \sum_{i=0}^{\infty}\vo{x}_i^t\phi_i(\vo{\Delta})
\end{align}
where $\vo{x}_i^t$ are time-varying PC coefficients, and $\phi_i(\vo{\Delta})$ are a known set of basis polynomials.
For the sake of brevity, we omit discussion over the nature of polynomials chosen, and encourage readers to refer to \cite{bhattacharya2019robust} for an in-depth commentary over the basis.
For computational tractability, we truncate the PC expansion to a finite number of terms, i.e.,
\begin{align}
\vo{x}^t(\vo{\Delta}) \approx \vo{\hat{x}}^t(\vo{\Delta}) := \sum_{i=0}^{N}\vo{x}_i^t\phi_i(\vo{\Delta})
\label{eqn:xApproxSum}
\end{align}
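As a purely illustrative aside (not part of the derivation), the truncated expansion \eqref{eqn:xApproxSum} can be evaluated numerically as in the following Python sketch; the coefficients and dimensions are placeholders, and a Legendre basis on $[-1,1]$ is assumed.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

N, n = 5, 4                    # truncation order, state dimension (placeholders)
X = np.random.randn(n, N + 1)  # columns are the PC coefficients x_0, ..., x_N

def x_hat(delta, X):
    """Evaluate the truncated PC expansion at a parameter value delta in [-1, 1]."""
    # phi[i] = P_i(delta): i-th Legendre basis polynomial evaluated at delta
    phi = np.array([L.legval(delta, np.eye(X.shape[1])[i])
                    for i in range(X.shape[1])])
    return X @ phi

print(x_hat(0.3, X))
\end{verbatim}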
\subsection{Surrogate System Modeling with Polynomial Chaos}
Define $\vo{\Phi}$ to be:
\begin{align}
\vo{\Phi}(\vo{\Delta}) :=& (\phi_0(\vo{\Delta}), \phi_1(\vo{\Delta}) {}, \hdots, {} \phi_N(\vo{\Delta})), \textrm{and} \\
\vo{\Phi}_n(\vo{\Delta}) :=& \vo{\Phi}(\vo{\Delta}) \otimes \vo{I}_n,
\end{align}
where $\vo{I}_n \in \mathbb{R}^{n \times n}$ is the identity matrix. Now, define matrix $\vo{X} \in \mathbb{R}^{n \times (N+1)}$, with PC coefficients $\vo{x}_i \in \mathbb{R}^n$ as
\begin{align}
\vo{X} = \vo{[x}_0, \hdots, \vo{x}_N].
\end{align}
Therefore, \eqref{eqn:xApproxSum} can be written compactly:
\begin{align}
\vo{\hat{x}}^t(\vo{\Delta}) := \vo{X}^t\vo{\Phi}(\vo{\Delta}).
\end{align}
Further vectorizing,
\begin{align}
\vo{\hat{x}}^t(\vo{\Delta}) &\equiv \textrm{vec}(\vo{\hat{x}}^t(\vo{\Delta})) \nonumber\\
&= \textrm{vec}(\vo{X}^t\vo{\Phi}(\vo{\Delta})) \nonumber\\
&= \textrm{vec}(\vo{I}_n \vo{X}^t \vo{\Phi}(\vo{\Delta})) \nonumber\\
&= \vo{\Phi}_n^T(\vo{\Delta})\vo{x}_{pc}^t
\end{align}
where $\vo{x}_{pc}^t := \textrm{vec}(\vo{X}^t)$.
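The last step uses the standard Kronecker identity, recalled here for completeness,
\[
\textrm{vec}(\vo{A}\vo{B}\vo{C}) = (\vo{C}^T \otimes \vo{A})\,\textrm{vec}(\vo{B}),
\]
applied with $\vo{A} = \vo{I}_n$, $\vo{B} = \vo{X}^t$ and $\vo{C} = \vo{\Phi}(\vo{\Delta})$, so that $\textrm{vec}(\vo{I}_n \vo{X}^t \vo{\Phi}) = (\vo{\Phi}^T \otimes \vo{I}_n)\,\textrm{vec}(\vo{X}^t) = \vo{\Phi}_n^T(\vo{\Delta})\,\vo{x}_{pc}^t$.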
The error due to approximation can be expressed as:
\begin{align}
\vo{e}^t(\vo{\Delta}) &:= \vo{\hat{x}}^{t+1} - \vo{A}(\vo{\Delta})\vo{\hat{x}}^t - \vo{B}(\vo{\Delta})\vo{K}\vo{\hat{x}}^t \nonumber\\
&= \vo{\Phi}_n^T(\vo{\Delta})\vo{x}_{pc}^{t+1} - \vo{A}(\vo{\Delta})\vo{\Phi}_n^T(\vo{\Delta})\vo{x}_{pc}^t - \nonumber\\ & \quad \vo{B}(\vo{\Delta})\vo{K}\vo{\Phi}_n^T(\vo{\Delta})\vo{x}_{pc}^t
\end{align}
To minimize the error in the $\mathcal{L}_2$ sense, we need the projection of the expected value of this approximation error on
the basis function to be zero, i.e.,
\begin{align}
\mathbb{E}[\vo{e}^t(\vo{\Delta})\phi_i(\vo{\Delta})] = 0 \textrm{ for } i = 0,1,\hdots,N
\end{align}
Upon simplification, we arrive at $n(N+1)$ \textit{deterministic} difference equations in $\vo{x}_{pc}^t$.
\begin{align}
\vo{x}_{pc}^{t+1} = (\mathbb{E}[\vo{\Phi}(\vo{\Delta}) \vo{\Phi}^T(\vo{\Delta})] \otimes \vo{I}_n)^{-1} \times \nonumber\\
(\mathbb{E}[(\vo{\Phi}(\vo{\Delta}) \vo{\Phi}^T(\vo{\Delta})) \otimes \vo{A}(\vo{\Delta})] + \nonumber\\
\mathbb{E}[(\vo{\Phi}(\vo{\Delta}) \vo{\Phi}^T(\vo{\Delta})) \otimes (\vo{B}(\vo{\Delta})\vo{K})])\vo{x}_{pc}^t
\label{eqn:pcODE}
\end{align}
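The expectations appearing in \eqref{eqn:pcODE} can be assembled by quadrature. The sketch below is illustrative only: the affine matrices, the gain $\vo{K}$ and a uniform density on $[-1,1]$ are placeholder assumptions; it builds the deterministic update matrix for $\vo{x}_{pc}^t$.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

n, m, N = 3, 2, 4
nodes, weights = L.leggauss(20)      # Gauss-Legendre nodes/weights on [-1, 1]
weights = weights / 2.0              # uniform density p(Delta) = 1/2 on [-1, 1]

def phi(d):
    """Legendre basis vector [P_0(d), ..., P_N(d)]."""
    return np.array([L.legval(d, np.eye(N + 1)[i]) for i in range(N + 1)])

# Placeholder affine dependence A(d) = A0 + d*A1, B(d) = B0 + d*B1, and gain K.
A0, A1 = np.diag([0.9, 0.8, 0.7]), 0.05 * np.ones((n, n))
B0, B1 = np.ones((n, m)), 0.1 * np.ones((n, m))
K = np.zeros((m, n))

E_PhiPhi  = sum(w * np.outer(phi(d), phi(d)) for d, w in zip(nodes, weights))
E_kron_A  = sum(w * np.kron(np.outer(phi(d), phi(d)), A0 + d * A1)
                for d, w in zip(nodes, weights))
E_kron_BK = sum(w * np.kron(np.outer(phi(d), phi(d)), (B0 + d * B1) @ K)
                for d, w in zip(nodes, weights))

# Deterministic update: x_pc^{t+1} = G x_pc^t, with G as in the projected recursion
G = np.linalg.solve(np.kron(E_PhiPhi, np.eye(n)), E_kron_A + E_kron_BK)
x_pc_next = G @ np.random.randn(n * (N + 1))
\end{verbatim}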
\section{Controller Synthesis}
The design objective is to determine an optimal state-feedback law of the form $\vo{u} = \vo{Kx}$, to optimize the closed-loop performance in an LQR sense.
Therefore, the closed-loop system with control $\vo{u} = \vo{Kx}$ is given by,
\begin{align}
\vo{x}^{t+1} = (\vo{A}(\vo{\Delta}) + \vo{B}(\vo{\Delta})\vo{K})\vo{x}^t
\label{eqn:sysEqCL}
\end{align}
The system in \eqref{eqn:sysEqCL} is infinite-dimensional with respect to $\vo{\Delta}$. We present a formulation that reduces the infinite-dimensional system using PC expansion and develop an optimization problem with the reduced-order system. The approximate problem is then solved exactly.
\subsection{Stability}
An almost-sure (a.s.) stability analysis of linear systems from a polynomial chaos framework is prohibitive, except for some special cases \cite{dutta2010nonlinear, ghanem2003stochastic}. However, the equivalence of a.s. stability to exponential-mean-square (EMS) stability for LTI systems \cite{chen1995linear} means that we can use EMS analysis, a far more favorable framework based on moments, to examine our system in the PC setting.
\textit{Derivation}:
Define a parameter-dependent Lyapunov function $V(\vo{x}) := \vo{x}^T \vo{P}(\vo{\Delta}) \vo{x}$.\\
Represent $\vo{P}(\vo{\Delta})$ using homogeneous polynomial-based parameter-dependent quadratic functions, i.e.,
\begin{align}
\vo{P}(\vo{\Delta}) := \vo{\Phi}_n^T (\vo{\Delta}) \bar{\vo{P}} \vo{\Phi}_n (\vo{\Delta}), \, \bar{\vo{P}} \in \mathcal{S}_{++}^{n(N+1)}
\end{align}
Note that $\vo{P}(\vo{\Delta}) > \vo{0}$ for all $\vo{\Delta}$ whenever $\bar{\vo{P}} > \vo{0}$. Partitioning $\bar{\vo{P}}$ as:
\begin{align*}
\bar{\vo{P}} := \begin{bmatrix}
\vo{P}_{00} & \hdots & \vo{P}_{0N} \\
\vdots & {} & \vdots \\
\vo{P}_{N0} & \hdots & \vo{P}_{NN}
\end{bmatrix}
\end{align*}
where $\vo{P}_{ij} = \vo{P}_{ji} \in \mathbb{R}^{n \times n}$. Therefore,
\begin{align}
\vo{P}(\vo{\Delta}) &:= \vo{\Phi}_n^T (\vo{\Delta}) \bar{\vo{P}} \vo{\Phi}_n (\vo{\Delta}) \nonumber \\
&= \begin{bmatrix}
\phi_0(\vo{\Delta})\vo{I}_n & \hdots & \phi_N (\vo{\Delta}) \vo{I}_n
\end{bmatrix} \times \nonumber \\
& \begin{bmatrix}
\vo{P}_{00} & \hdots & \vo{P}_{0N} \\
\vdots & {} & \vdots \\
\vo{P}_{N0} & \hdots & \vo{P}_{NN}
\end{bmatrix} \times \begin{bmatrix}
\phi_0(\vo{\Delta})\vo{I}_n \\ \vdots \\ \phi_N (\vo{\Delta}) \vo{I}_n
\end{bmatrix} \nonumber \\
&= \sum_{ij} \phi_i (\vo{\Delta}) \phi_j (\vo{\Delta}) \vo{P}_{ij}
\end{align}
Noting that $\vo{P}(\vo{\Delta})$ is symmetric,
\begin{align*}
\vo{P}_{ij} = \vo{P}_{ji}.
\end{align*}
\textbf{Theorem}:
The system given in \eqref{eqn:sysEq} is EMS-stable if $\exists$ $\vo{\bar{P}} = \vo{\bar{P}}^T > 0$ such that $\mathbb{E} [\vo{W}_{pc}] < 0$ where $\mathbb{E} [\vo{W}_{pc}]$ is given element-wise by the expression below.
\begin{align}
\mathbb{E} [\vo{W}_{pc_{il}}] = &\sum_{j,k = 0}^{N} \sum_{m,n = 0}^{5} \left[ \mathbb{E}[\phi_i \phi_j \phi_k \phi_m \phi_n \phi_l] \times \right. \nonumber\\
& \left. \vo{A}_m^T \vo{P}_{jk} \vo{A}_n \right]
- \sum_{j,k = 0}^{N} \left[ \mathbb{E}[\phi_i \phi_j \phi_k \phi_l] \vo{P}_{jk} \right] \nonumber\\
&\textrm{for } i, l=0,1,\hdots,N
\label{eqn:stab}
\end{align}
\textbf{\textit{Proof}}: Examining the Lyapunov difference $\Delta V$,
\begin{align*}
\Delta V &= V^{t+1} - V^{t} \\
&= (\vo{x}^{t+1})^T \vo{P}(\vo{\Delta}) \vo{x}^{t+1} - (\vo{x}^{t})^T \vo{P}(\vo{\Delta}) \vo{x}^t \\
&= (\vo{A}(\vo{\Delta}) \vo{x}^t)^T \vo{P}(\vo{\Delta})(\vo{A}(\vo{\Delta}) \vo{x}^t) - (\vo{x}^{t})^T \vo{P}(\vo{\Delta}) \vo{x}^t \\
&= (\vo{x}^{t})^T( \vo{A}^T (\vo{\Delta})\vo{P}(\vo{\Delta}) \vo{A} (\vo{\Delta})- \vo{P}(\vo{\Delta})) \vo{x}^t
\end{align*}
Dropping the $t$ superscript for notational convenience,
\begin{align*}
\Delta V = \vo{x}^T (\vo{A}^T(\vo{\Delta}) \vo{P}(\vo{\Delta}) \vo{A} (\vo{\Delta})- \vo{P}(\vo{\Delta}))\vo{x}.
\end{align*}
For stability, we require
$\mathbb{E}[\Delta V] < 0$.
Substituting $\vo{x} = \vo{\Phi}_n^T(\vo{\Delta})\vo{x}_{pc}$,
\begin{align}
\mathbb{E}[\Delta V] = \vo{x}_{pc}^T \mathbb{E}[\vo{\Phi}_n(\vo{\Delta})(\vo{A}^T (\vo{\Delta})\vo{P}(\vo{\Delta}) \vo{A}(\vo{\Delta}) - \nonumber\\ \vo{P}(\vo{\Delta})) \vo{\Phi}_n^T (\vo{\Delta})] \vo{x}_{pc}
\end{align}
Let:
\begin{align*}
\vo{W}_{pc} = \vo{\Phi}_n(\vo{\Delta})(\vo{A}^T (\vo{\Delta})\vo{P}(\vo{\Delta}) \vo{A}(\vo{\Delta}) - \vo{P}(\vo{\Delta})) \vo{\Phi}_n^T (\vo{\Delta})
\end{align*}
Expanding the expressions:
\begin{align*}
\vo{\Phi}_n (\vo{\Delta}) = (\vo{\Phi} (\vo{\Delta}) \otimes \vo{I}_n), \quad
\vo{A}(\vo{\Delta}) = \sum_{m=0}^{\textrm{nOrd}} \vo{A}_m \phi_m(\vo{\Delta})
\end{align*}
'$\textrm{nOrd}$' is the order of polynomials chosen to approximate the uncertain system matrices $\vo{A}$ and $\vo{B}$.
Using standard properties of Kronecker products and switching to index notation, we obtain \eqref{eqn:stab}; explicitly,
\begin{align*}
\mathbb{E} [\vo{W}_{pc_{il}}] = &\sum_{j,k = 0}^{N} \sum_{m,n = 0}^{5} \left[ \mathbb{E}[\phi_i \phi_j \phi_k \phi_m \phi_n \phi_l] \times \right. \nonumber\\
& \left. \vo{A}_m^T \vo{P}_{jk} \vo{A}_n \right]
- \sum_{j,k = 0}^{N} \left[ \mathbb{E}[\phi_i \phi_j \phi_k \phi_l] \vo{P}_{jk} \right] \nonumber\\
&\textrm{for } i, l=0,1,\hdots,N
\end{align*}
Note that for each $i$ and $l$, $\mathbb{E}[\vo{W}_{pc_{il}}] \in \mathbb{R}^{n \times n}$.
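As a numerical sanity check of the condition $\mathbb{E}[\vo{W}_{pc}] < 0$, one can assemble the expectation directly from the definition of $\vo{W}_{pc}$ by quadrature and test for negative definiteness; the data below (system, candidate $\bar{\vo{P}}$, uniform density) are placeholders.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

n, N = 2, 3
nodes, weights = L.leggauss(30)
weights = weights / 2.0                           # uniform density on [-1, 1]
phi   = lambda d: np.array([L.legval(d, np.eye(N + 1)[i]) for i in range(N + 1)])
Phi_n = lambda d: np.kron(phi(d).reshape(-1, 1), np.eye(n))   # (N+1)n x n

A    = lambda d: np.array([[0.5 + 0.1 * d, 0.1],
                           [0.0, 0.6 - 0.1 * d]])  # placeholder A(Delta)
Pbar = np.eye(n * (N + 1))                         # placeholder Lyapunov candidate
P    = lambda d: Phi_n(d).T @ Pbar @ Phi_n(d)      # P(Delta) = Phi_n^T Pbar Phi_n

E_W = sum(w * Phi_n(d) @ (A(d).T @ P(d) @ A(d) - P(d)) @ Phi_n(d).T
          for d, w in zip(nodes, weights))
print("EMS condition satisfied:", np.max(np.linalg.eigvalsh(E_W)) < 0)
\end{verbatim}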
\subsection{LQR}
Assuming the system given by \eqref{eqn:sysEq} is EMS-stabilizable, we consider the synthesis of an optimal state-feedback control gain that minimizes a quadratic cost, i.e., a fixed parameter-independent gain $\vo{K}$ that minimizes
\begin{align}
\mathbb{E} \left[ \sum_{t=0}^{\infty} (\vo{x}^t)^T \vo{Q} \vo{x}^t + (\vo{u}^t)^T \vo{R} \vo{u}^t\right]
\end{align}
subject to $\vo{u}^t = \vo{K}\vo{x}^t$ and dynamics given by \eqref{eqn:sysEqCL}.
For the dynamical system given in \eqref{eqn:sysEqCL} with controller $\vo{u} = \vo{Kx}$, the solution is:
\begin{align}
\vo{x}^t(\vo{\Delta}) = (\vo{A_c})^{t}(\vo{\Delta})\vo{x}_0
\end{align}
where $\vo{A}_c(\vo{\Delta}) := \vo{A}(\vo{\Delta}) + \vo{B}(\vo{\Delta})\vo{K}$. The cost-to-go from initial condition $x_0$ is therefore,
\begin{gather}
\vo{J}(\vo{x}_0) = \mathbb{E}\left[ \sum_{t=0}^{\infty} (\vo{x}^t)^T \vo{Q} \vo{x}^t + (\vo{u}^t)^T \vo{R} \vo{u}^t\right] \nonumber\\
= \vo{x_0}^T \mathbb{E} \underbrace{\left[ \sum_{t=0}^{\infty} (\vo{A}_c^{t}(\vo{\Delta}))^T(\vo{Q} + \vo{K}^T\vo{RK}) \vo{A}_c^{t}(\vo{\Delta}) \right]}_{=:\, \vo{P}(\vo{\Delta})} \vo{x_0}
\end{gather}
for some $\vo{P}(\vo{\Delta}) : \mathbb{R}^d \rightarrow \mathcal{S}_{++}^n$. Therefore, the cost-to-go is a quadratic function of the initial state.
\subsection{Model Reduction}
Rewriting \eqref{eqn:pcODE} using properties associated with Kronecker products \cite{bhattacharya2014robust}, we obtain the reduced order approximation:
\begin{align}
\vo{x}_{pc}^{t+1} = (\vo{A}_{pc} + \vo{B}_{pc}\vo{\mathcal{K}})\vo{x}_{pc}^{t}
\end{align}
where $\vo{\mathcal{K}} = \vo{I}_{N+1} \otimes \vo{K}$, and
\begin{align}
\vo{A}_{pc} &:= \vo{\Phi_K}
\mathbb{E}[(\vo{\Phi}(\vo{\Delta}) \vo{\Phi}^T(\vo{\Delta})) \otimes \vo{A}(\vo{\Delta})] \\
\vo{B}_{pc} &:= \vo{\Phi_K}
\mathbb{E}[(\vo{\Phi}(\vo{\Delta}) \vo{\Phi}^T(\vo{\Delta})) \otimes \vo{B}(\vo{\Delta})]
\end{align}
where $\vo{\Phi_K} := (\mathbb{E}[\vo{\Phi}(\vo{\Delta}) \vo{\Phi}^T(\vo{\Delta})] \otimes \vo{I}_n)^{-1}$.
The modified cost function is:
\begin{align}
\vo{J} := \sum_{t=0}^{\infty}(\vo{x}_{pc}^t)^T (\vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}})\vo{x}_{pc}^t
\end{align}
where:
\begin{align}
\vo{Q}_{pc} &:= \mathbb{E}[\vo{\Phi}_n(\vo{\Delta}) \vo{Q} \vo{\Phi}_n^T(\vo{\Delta})] \\
\vo{R}_{pc} &:= \mathbb{E}[\vo{\Phi}_m(\vo{\Delta}) \vo{R} \vo{\Phi}_m^T(\vo{\Delta})], \quad \text{where } \vo{\Phi}_m(\vo{\Delta}) := \vo{\Phi}(\vo{\Delta}) \otimes \vo{I}_m
\end{align}
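Since $\vo{Q}$ and $\vo{R}$ do not depend on $\vo{\Delta}$, these expectations simplify, by the Kronecker mixed-product rule (a remark added for clarity),
\begin{align*}
\vo{Q}_{pc} = \mathbb{E}[\vo{\Phi}(\vo{\Delta})\vo{\Phi}^T(\vo{\Delta})] \otimes \vo{Q}, \qquad
\vo{R}_{pc} = \mathbb{E}[\vo{\Phi}(\vo{\Delta})\vo{\Phi}^T(\vo{\Delta})] \otimes \vo{R},
\end{align*}
which are block-diagonal whenever the basis polynomials are orthogonal with respect to $p(\vo{\Delta})$.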
Defining the Lyapunov function in $\vo{x}_{pc}^t$, i.e.,
\begin{align}
V(\vo{x}_{pc}^t) := (\vo{x}_{pc}^t)^T \vo{P}_{pc} \vo{x}_{pc}^t
\end{align}
where $\vo{P}_{pc} \in \mathcal{S}^{n(N+1)}_{++}$. Dropping the $t$ superscript for notational convenience, we get:
\begin{align}
&[\Delta V] + [\vo{x}_{pc}^T (\vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}}) \vo{x}_{pc} ] \nonumber\\
&= \vo{x}_{pc}^T [(\vo{A}_{pc} + \vo{B}_{pc} \vo{\mathcal{K}})^T \vo{P}_{pc} (\vo{A}_{pc} + \vo{B}_{pc}\vo{\mathcal{K}}) \nonumber\\
& - \vo{P}_{pc} + \vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}}] \vo{x}_{pc}
\nonumber \\
& = \vo{x}_{pc}^T [ \vo{A}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} - \vo{P}_{pc} + \vo{Q}_{pc}\nonumber\\
& - \vo{A}_{pc}^T \vo{P}_{pc} \vo{B}_{pc} (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^{T} \vo{P}_{pc} \vo{A}_{pc} \nonumber\\
&+ (\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})^T \times \nonumber\\
&(\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc}) \times \nonumber \\
& (\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})] \vo{x}_{pc}
\label{eqn:redOrderEqn}
\end{align}
We cannot determine $\vo{K}$ by setting $\vo{\mathcal{K}} := -(\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc}$, unless we are assured that all of $\vo{A}_{pc}$, $\vo{B}_{pc}$, and $\vo{P}_{pc}$
have block diagonal structures with repeating blocks, thus leading to our formulation:
\textbf{Theorem:}
The optimal controller $\vo{K}$ and $\vo{P}_{pc}$ are obtained as solutions to the following optimization problems:
\begin{gather}
\underset{\vo{P}_{pc} \in {\mathcal{S}}_{++}^{n(N+1)}}{\textrm{max}} \textrm{tr } \vo{P}_{pc}, \nonumber
\\ \textrm{subject to} \nonumber\\
\begin{bmatrix}
\vo{A}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} + \vo{Q}_{pc} - \vo{P}_{pc} & \vo{A}_{pc}^T \vo{P}_{pc} \vo{B}_{pc} \\
\vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} & \vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc}
\end{bmatrix} \nonumber\\
\geq \vo{0}
\label{eqn:thmP} \\
\textrm{and} \nonumber\\
\underset{\vo{X} \in \vo{\mathcal{S}}_{++}^{n(N+1)}, \vo{K} \in \mathbb{R}^{m \times n}}{\textrm{min}} \textrm{tr } \vo{X}, \nonumber
\\ \textrm{subject to} \nonumber\\
\begin{bmatrix}
\vo{H}_1 & \vo{H}_2 \\
\vo{H}_2^T & \vo{H}_4
\end{bmatrix}
\geq \vo{0}
\label{eqn:thmK}
\end{gather}
where
\begin{flalign*}
\vo{H}_1 &= \vo{X} \\
\vo{H}_2 &= (\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})^T \\
\vo{H}_4 &= (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1}
\end{flalign*}
\textit{\textbf{Proof:}}
We follow an approach similar to the one described in \cite{bhattacharya2019robust}, wherein the optimal solution is obtained by maximizing the lower bound, i.e., setting \eqref{eqn:redOrderEqn} $\geq 0$:
\begin{align}
&\vo{A}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} \nonumber\\
& - \vo{A}_{pc}^T \vo{P}_{pc} \vo{B}_{pc} (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} \nonumber\\
& - \vo{P}_{pc} + \vo{Q}_{pc} \nonumber\\
& + (\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})^T \times \nonumber\\
&(\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc} ) \times \nonumber \\
&(\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc}) \geq \vo{0} \label{eqn:maxLB}\\
&\implies \Delta V \geq -\vo{x}_{pc}^T (\vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}}) \vo{x}_{pc} \nonumber
\end{align}
Summing over $t \in [0, \infty)$, we obtain:
\begin{align}
\sum_{t=0}^{\infty} \Delta V^t \geq - \sum_{t=0}^{\infty} (\vo{x}_{pc}^t)^T (\vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}})(\vo{x}_{pc}^t)
\end{align}
which is equivalent to:
\begin{align}
\lim_{t \to \infty}V(\vo{x}_{pc}^t) - V(\vo{x}_{pc}^0) \geq \nonumber\\
- \sum_{t=0}^{\infty} (\vo{x}_{pc}^t)^T (\vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}}) \vo{x}_{pc}^t
\end{align}
Since the closed-loop system is EMS stable,
\begin{align}
\lim_{t \to \infty}V(\vo{x}_{pc}^t) = 0,
\end{align}
thus implying:
\begin{align}
V(\vo{x}_{pc}^0) \leq \sum_{t=0}^{\infty} (\vo{x}_{pc}^t)^T (\vo{Q}_{pc} + \vo{\mathcal{K}}^T \vo{R}_{pc} \vo{\mathcal{K}}) \vo{x}_{pc}^t
\end{align}
The inequality \eqref{eqn:maxLB} is nonconvex and is relaxed as:
\begin{align}
&(\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})^T \times \nonumber\\
&(\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc}) \times \nonumber\\
&(\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc}) \geq \vo{0}
\end{align}
for any $\vo{K}$ (or $\vo{\mathcal{K}}$). Therefore, the relaxed inequality becomes:
\begin{align}
& - \vo{A}_{pc}^T \vo{P}_{pc} \vo{B}_{pc} (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \ \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} \nonumber\\
& + \vo{A}_{pc}^T \vo{P}_{pc} \vo{A}_{pc} - \vo{P}_{pc} + \vo{Q}_{pc} \geq \vo{0}
\label{eqn:thmPpc}
\end{align}
Subsequently, determine a $\vo{K}$ that minimizes
\begin{align}
(\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})^T \times \nonumber\\
(\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc}) \times \nonumber\\
(\vo{\mathcal{K}} + (\vo{R}_{pc} + \vo{B}_{pc}^T \vo{P}_{pc} \vo{B}_{pc})^{-1} \vo{B}_{pc}^T \vo{P}_{pc} \vo{A}_{pc})
\label{eqn:thmKpc}
\end{align}
Equations \eqref{eqn:thmPpc} and \eqref{eqn:thmKpc} can be expressed as the LMIs in \eqref{eqn:thmP} and \eqref{eqn:thmK}.
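A possible numerical realization of the two optimization problems is sketched below using \texttt{cvxpy} (an SDP solver such as SCS is assumed to be available); the reduced-order matrices are placeholders and would in practice be assembled by quadrature as described above.
\begin{verbatim}
import numpy as np
import cvxpy as cp

n, m, N = 3, 2, 2
nN, mN = n * (N + 1), m * (N + 1)
Apc = 0.5 * np.eye(nN); Bpc = np.ones((nN, mN))       # placeholder data
Qpc = np.eye(nN);       Rpc = np.eye(mN)

sym = lambda M: (M + M.T) / 2

# Step 1: maximize tr(P_pc) subject to the first LMI.
P = cp.Variable((nN, nN), symmetric=True)
lmi1 = cp.bmat([[Apc.T @ P @ Apc + Qpc - P, Apc.T @ P @ Bpc],
                [Bpc.T @ P @ Apc,           Rpc + Bpc.T @ P @ Bpc]])
cp.Problem(cp.Maximize(cp.trace(P)),
           [P >> 1e-6 * np.eye(nN), sym(lmi1) >> 0]).solve()
Pv = P.value

# Step 2: with P_pc fixed, minimize tr(X) over X and the repeated-block gain K.
S = Rpc + Bpc.T @ Pv @ Bpc
Gm = np.linalg.solve(S, Bpc.T @ Pv @ Apc)             # (R+B'PB)^{-1} B'P A
K = cp.Variable((m, n))
Kblk = cp.bmat([[K if i == j else np.zeros((m, n)) for j in range(N + 1)]
                for i in range(N + 1)])               # I_{N+1} Kronecker K
X = cp.Variable((nN, nN), symmetric=True)
lmi2 = cp.bmat([[X, (Kblk + Gm).T], [Kblk + Gm, np.linalg.inv(S)]])
cp.Problem(cp.Minimize(cp.trace(X)), [sym(lmi2) >> 0]).solve()
print(K.value)
\end{verbatim}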
\section{Example and Results}
The plant considered here is a longitudinal F-16 aircraft model. The states of the system are velocity $V$ (ft/s), angle of attack $\alpha$ (rad), pitch angle $\theta$ (rad), and pitch rate $q$ (rad/s). The control variables are thrust $T$ (lb) and elevator angle $\delta_e$ (deg). The nonlinear model is trimmed at velocities from 400 ft/s to 900 ft/s in increments of 100 ft/s, and at an altitude of 10000 ft.
The objective is to design a fixed gain $\vo{K}$ that is able to regulate perturbations in the linear plant about various equilibrium points.
The vehicle is trimmed using a constrained nonlinear least-squares optimization, solved using sequential quadratic programming with \texttt{fmincon} in MATLAB; an illustrative open-source sketch of the same trim problem is given after the table. The following constraints are imposed on the state and control magnitudes to account for aerodynamic and actuator limits:
\begin{center}
\begin{tabular}{l r}
Trim at $V_{trim}$: & $V_{trim} \leq V \leq V_{trim}$ \\
Validity of aerodynamic data: & $-20^{\circ} \leq \alpha \leq 40^{\circ}$ \\
Steady-level flight: & $0 \leq \theta - \alpha \leq 0$\\
Steady-level flight: & $0 \leq q \leq 0$\\
Thrust limits: & $1000 \leq T \leq 19000$ \\
Elevator limits: & $-25^{\circ} \leq \delta_e \leq 25^{\circ}$\\
\end{tabular}
\end{center}
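For reference, the trim computation can be posed analogously with \texttt{scipy.optimize.minimize} (SLSQP). The sketch below is illustrative only: \texttt{f16\_longitudinal\_dynamics} is a stand-in for the actual nonlinear F-16 model, which is not reproduced here, and the numbers are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def f16_longitudinal_dynamics(x, u):
    """Stand-in for the nonlinear F-16 longitudinal model: returns xdot for
    state x = [V, alpha, theta, q] and control u = [T, delta_e]."""
    V, alpha, theta, q = x
    T, de = u
    # Placeholder dynamics; replace with the actual aerodynamic model.
    return np.array([T / 20000.0 - 0.01 * V, q - 0.1 * alpha,
                     q, -0.5 * q - 0.2 * alpha - 0.01 * de])

V_trim, deg = 500.0, np.pi / 180.0

def cost(z):                       # z = [alpha, theta, q, T, delta_e]
    x = np.array([V_trim, z[0], z[1], z[2]])
    u = np.array([z[3], z[4]])
    return np.sum(f16_longitudinal_dynamics(x, u) ** 2)   # drive xdot to zero

cons = [{'type': 'eq', 'fun': lambda z: z[1] - z[0]},      # steady level: theta = alpha
        {'type': 'eq', 'fun': lambda z: z[2]}]             # q = 0
bnds = [(-20 * deg, 40 * deg), (-np.pi, np.pi), (-1.0, 1.0),
        (1000.0, 19000.0), (-25 * deg, 25 * deg)]
res = minimize(cost, x0=[2 * deg, 2 * deg, 0.0, 5000.0, -1 * deg],
               method='SLSQP', bounds=bnds, constraints=cons)
print(res.x)
\end{verbatim}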
Linear models at each of these flight velocities were obtained using MATLAB's \texttt{linmod} command. Subsequently, the discrete-time linear models were derived using the \texttt{c2d} command. Figure \ref{olPoles} shows the open-loop poles of the discretized linear system. At $V_{trim} = 400$ ft/s, the system is marginally stable. At higher velocities, the poles are all comfortably inside the unit circle.
Figure \ref{ctrbEnergy} shows $\textbf{det}(W_c^{-1})$ where $W_c$ is the controllability Gramian, for various values of $V_{trim}$ for which the open-loop system is stable. This Gramian helps gauge the energy required to move the system around the state space. The determinant is a scalar measure of the Gramian, and figure \ref{ctrbEnergy} shows that the energy increases for lower values of $V_{trim}$, indicating that closed-loop performance at these velocities degrades.
We scale $V$ and represent it as $\vo{\Delta}$, where
\begin{align*}
\vo{\Delta} = \frac{2V - (\textrm{max}(V) + \textrm{min}(V))}{\textrm{max}(V) - \textrm{min}(V)}
\end{align*}
and assume $\vo{\Delta}$ to be uniformly distributed in the interval $[-1,1]$.
The uncertain $\vo{A}(\vo{\Delta})$ and $\vo{B}(\vo{\Delta})$ are expressed as:
\begin{align*}
\vo{A}(\vo{\Delta}) := \sum_{i=0}^{5} \vo{A}_i \phi_i (\vo{\Delta}), \vo{B}(\vo{\Delta}) := \sum_{i=0}^{5} \vo{B}_i \phi_i (\vo{\Delta})
\end{align*}
where $\phi_i(\vo{\Delta})$ are $i$th-order Legendre polynomials.
The polynomial-chaos framework allows for any $\mathcal{L}_2$ function to model parametric uncertainty, not just multi-affine functions. Here, Legendre polynomials are used to capture the variation in the system matrices.
The code for simulation is publicly available\footnote{https://github.com/isrlab/LqrDiscPC}.
The components $\vo{A}_i$ and $\vo{B}_i$ can be derived from the same.
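One way to obtain the coefficient matrices $\vo{A}_i$, $\vo{B}_i$ from the six discretized models is an exact degree-5 Legendre fit over the scaled parameter. The sketch below is illustrative; the trimmed models \texttt{A\_list}, \texttt{B\_list} are placeholders here.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

V_grid = np.array([400., 500., 600., 700., 800., 900.])
delta = (2 * V_grid - (V_grid.max() + V_grid.min())) / (V_grid.max() - V_grid.min())

n, m = 4, 2
A_list = [np.random.randn(n, n) for _ in V_grid]   # placeholders for c2d models
B_list = [np.random.randn(n, m) for _ in V_grid]

def legendre_coeffs(mats, deg=5):
    """Fit each matrix entry with a Legendre series in delta; returns [M_0,...,M_deg]."""
    Y = np.stack([M.ravel() for M in mats])        # shape (6, n*k)
    C = L.legfit(delta, Y, deg)                    # shape (deg+1, n*k)
    return [C[i].reshape(mats[0].shape) for i in range(deg + 1)]

A_i = legendre_coeffs(A_list)   # A(delta) ~ sum_i A_i * P_i(delta)
B_i = legendre_coeffs(B_list)
\end{verbatim}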
For the controller synthesis, we regulate the outputs: velocity $V$, angle of attack $\alpha$, and flight path angle $\gamma := \theta - \alpha$ defined by:
\begin{align*}
\vo{y} := \begin{bmatrix}
V \\
\alpha \\
\gamma
\end{bmatrix} = \vo{Cx}, \textrm{with } \vo{C} := \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & -1 & 1 & 0
\end{bmatrix}
\end{align*}
The optimal cost-to-go is defined using $\vo{Q}$ and $\vo{R}$ where:
\begin{gather*}
\vo{Q} := \vo{C}^T \vo{Q}_y \vo{C}, \textrm{with } \vo{Q}_y := \textbf{diag}(10^{-1} \quad 10 \quad 10) \\
\vo{R} := \textbf{diag}(10^{-4} \quad 10^{-1} )
\end{gather*}
Weights $\vo{Q}$ and $\vo{R}$ are chosen to normalize the state and control trajectories with respect to the desired peak values. The PC-based synthesis algorithm is implemented with increasing orders of approximation, up to 7.
Figure \ref{kGain} depicts the increase in control gain with approximation order whereas figure \ref{yTrajRed} shows the closed-loop response for $V^t$, $\alpha^t$, and $\gamma^t$ with the gain $\vo{K}$ for the 7th order.
The response here is for the linear system for various values of $\vo{\Delta}$.
The initial condition used here is: $\vo{x}_0^T := (0\, 0\, \frac{30 \pi}{180}\, 0)$, which corresponds
to $\gamma(0) = 30^{\circ}$.
The controller is synthesized using a 7th order approximation. The colors black to blue correspond to values of $\vo{\Delta}$ ranging from -1 to 1.
In figure \ref{clLoopPoles}, we can see the closed-loop poles of the system created using the 7th order approximation. All poles lie within the unit circle, indicating stability of the closed-loop system. These observations are consistent with the trajectory response in figure \ref{yTrajRed}.
\section{Summary}
We derived a new algorithm for the synthesis of optimal and robust state feedback controllers for linear discrete-time systems with probabilistic parameters in their system matrices. Optimality ensures a minimum quadratic cost on the state and control action required to stabilize the system.
We followed a reduced order modeling approach that utilizes a finite-dimensional approximation built using polynomial chaos and develops a controller by solving this optimization problem exactly.
The simulations show that the controller is able to stabilize the system about the chosen operating points.
However, it is important to note that the proposed approach works favorably only for systems proven to be stable over the entire distribution of the random parameters.
In the future, we would like to investigate whether the infinite-dimensional model in \cite{bhattacharya2019robust} could be meaningfully extended to discrete-time systems as well.
Moreover, the impact of a multi-dimensional parameter space, i.e. $\vo{\Delta} \subset \mathbb{R}^d$ with $d \geq 2$ on closed-loop stability and controller performance requires further research.
\begin{figure}
\caption{Open-loop poles of the uncertain model and the reduced-order model with 7th-order approximation.}
\caption{Closed-loop poles for the reduced order model.}
\caption{Open and closed-loop poles.}
\label{olPoles}
\label{clLoopPoles}
\end{figure}
\begin{figure}
\caption{Degree of controllability of the system as a function of $\vo{\Delta}$.}
\caption{Variation of $\|\vo{K}\|$ with the order of PC approximation.}
\caption{Controllability and controller gain.}
\label{ctrbEnergy}
\label{kGain}
\end{figure}
\begin{figure}
\caption{Output trajectories obtained using the reduced order model. The initial condition is $\vo{x}_0^T = (0\ 0\ \frac{30\pi}{180}\ 0)$, i.e. $\gamma(0)=30^{\circ}$.}
\label{yTrajRed}
\end{figure}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
A $k$-orbit maniplex is one that has $k$ orbits of flags under the action of its automorphism group.
In this paper we extend the notion of symmetry type graphs of maps to that of maniplexes and polytopes
and make use of them to study $k$-orbit maniplexes, as well as fully-transitive 3-maniplexes.
In particular, we show that there are no fully-transitive $k$-orbit 3-maniplexes with $k > 1$ an odd number, we classify 3-orbit maniplexes and determine all face transitivities for 3- and 4-orbit maniplexes.
Moreover, we give generators of the automorphism group of a polytope or a maniplex, given its symmetry type graph.
Finally, we extend these notions to oriented polytopes, in particular we classify oriented 2-orbit maniplexes and give generators for their orientation preserving automorphism group.
\end{abstract}
\keywords{Abstract Polytope, Regular Graph, Edge-Coloring, Maniplex}
\subjclass{Primary 52B15; Secondary 05C25, 51M20}
\section{Introduction}
While abstract polytopes are a combinatorial generalisation of classical polyhedra and polytopes, maniplexes generalise maps on surfaces and (the flag graph of) abstract polytopes.
The combinatorial structure of maniplexes, maps and polytopes is completely determined by an edge-coloured $n$-valent graph with chromatic index $n$, often called the flag graph.
The symmetry type graph of a map is the quotient of its flag graph under the action of the automorphism group.
In this paper we extend the notion of symmetry type graphs of maps to that of maniplexes (and polytopes).
Given a maniplex, its symmetry type graph encapsulates all the information of the local configuration of the flag orbits under the action of the automorphism group of the maniplex.
Traditionally, the main focus of the study of maps and polytopes has been that of their symmetries.
Regular and chiral ones have been extensively studied. These are maps and polytopes with either maximum degree of symmetry or maximum degree of symmetry by rotation.
Edge-transitive maps were studied in \cite{edge-trans} by Siran, Tucker and Watkins.
Such maps have either 1, 2 or 4 orbits of flags under the action of the automorphism group.
More recently, Orbani\'c, Pellicer and Weiss extended this study and classified $k$-orbit maps (maps with $k$ orbits of flags under the automorphism group) for $k\leq4$ in \cite{k-orbitM}.
Little is known about polytopes that are neither regular nor chiral. In \cite{tesisisa} Hubard gives a complete characterisation of the automorphism groups of 2-orbit and fully-transitive polyhedra (i.e. polyhedra transitive on vertices, edges and faces) in terms of distinguished generators of these groups. Moreover, she finds generators of the automorphism group of a 2-orbit polytope of any given rank.
Symmetry type graphs of the Platonic and Archimedean Solids were determined in \cite{Archim}.
In \cite{medial} Del R\'io-Francos, Hubard, Orbani\'c and Pisanski determine symmetry type graphs of up to 5 vertices and give, for up to 7 vertices, the possible symmetry type graphs that a properly self-dual, an improperly self-dual and a medial map might have.
The possible symmetry type graphs that a truncation of a map can have is determined in \cite{trunc}.
One can find in \cite{CompSymTypeGraph} a strategy to generate symmetry type graphs.
By making use of symmetry type graphs, in this paper we classify 3-orbit polytopes and
give generators of their automorphism groups.
In particular, we show that 3-orbit polytopes are never fully-transitive, but that they are $i$-face-transitive for all but one or two values of $i$, depending on the class.
We extend further the study of symmetry type graphs to show that if a 4-orbit polytope is not fully-transitive, then it is $i$-face-transitive for all but at most three ranks $i$.
Moreover, we show that a fully-transitive 3-maniplex (or 4-polytope) that is not regular cannot have an odd number of orbits of flags, under the action of the automorphism group.
The main result of the paper is stated in Theorem~\ref{auto}, where, given a maniplex $\mathcal{M}$, we give generators for the automorphism group of $\mathcal{M}$ with respect to some base flag.
The paper is divided into six sections, organised in the following way.
In Section \ref{sec:PolyMani},
we review some basic theory of polytopes and maniplexes, and describe their respective flag graphs.
In Section \ref{sec:stg},
we define and give some properties of the symmetry type graphs of polytopes and maniplexes, extending the concept of symmetry type graphs of maps.
In Section \ref{sec:stg-highly},
we study symmetry type graphs of highly symmetric maniplexes. In particular, we
classify symmetry type graphs with 3 vertices, determine the possible transitivities that a 4-orbit maniplex can have and study some properties of fully-transitive maniplexes of rank 3.
In Section \ref{Gen-autG}
we give generators of the automorphism group of a polytope or a maniplex. In the last section of the paper we define oriented and orientable maniplexes. Further on, we define the oriented flag di-graph, which emerges from a flag graph whenever the latter is bipartite.
The oriented symmetry type di-graph of an oriented maniplex is then a quotient of the oriented flag di-graph, just as the symmetry type graph was a quotient of the flag graph.
Using these graphs we classify oriented 2-orbit maniplexes and give generators for their orientation preserving automorphism group.
\section{Abstract Polytopes and Maniplexes}
\label{sec:PolyMani}
\subsection{Abstract Polytopes}
In this section we briefly review the basic theory of abstract polytopes and their monodromy groups (for details we refer the reader to \cite{arp} and \cite{d-auto}).
An (\emph{abstract\/}) \emph{polytope of rank\/} $n$, or simply an \emph{$n$-polytope\/}, is a partially ordered set $\mathcal{M}thcal{P}$ with a strictly monotone rank function with range $\{-1,0, \ldots, n\}$.
An element of rank $j$ is called a \emph{$j$-face\/} of $\mathcal{M}thcal{P}$, and a face of rank $0$, $1$ or $n-1$ is called a \emph{vertex\/}, \emph{edge\/} or \emph{facet\/}, respectively.
A {\em chain of $\mathcal{M}thcal{P}$} is a totally ordered subset of $\mathcal{M}thcal{P}$.
The maximal chains, or \emph{flags}, all contain exactly $n + 2$ faces, including a unique least face $F_{-1}$ (of rank $-1$) and a unique greatest face $F_n$ (of rank $n$).
A polytope $\mathcal{M}thcal{P}$ has the following homogeneity property (diamond condition):\ whenever $F \leq G$, with $F$ a $(j-1)$-face and $G$ a $(j+1)$-face for some $j$, then there are exactly two $j$-faces $H$ with $F \leq H \leq G$. Two flags are said to be \emph{adjacent} ($i$-\emph{adjacent}) if they differ in a single face (just their $i$-face, respectively).
The diamond condition can be rephrased by saying that every flag $\Phi$ of $\mathcal{M}thcal{P}$ has a unique $i$-adjacent flag, denoted $\Phi^i$, for each $i=0, \dots, n-1$. Finally, $\mathcal{M}thcal{P}$ is \emph{strongly flag-connected}, in the sense that, if $\Phi$ and $\Psi$ are two flags, then they can be joined by a sequence of successively adjacent flags, each containing $\Phi \cap \Psi$.
Let $\mathcal{M}thcal{P}$ be an abstract $n$-polytope.
The {\em universal} string Coxeter group $W := [\infty,\ldots,\infty]$ of rank $n$, with distinguished involutory generators $r_0,$ $r_1,\ldots,r_{n-1}$, acts transitively on the set of flags $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$ of $\mathcal{M}thcal{P}$ in such a way that $\Psi^{r_i} = \Psi^i$, the $i$-adjacent flag of $\Psi$, for each $i=0,\dots,n-1$ and each $\Psi$ in $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$. In particular, if $w=r_{i_1}\ldots r_{i_k} \in W$ then
\[ \Psi^w =\Psi^{r_{i_{1}} r_{i_2}\ldots r_{i_{k-1}}r_{i_k}}
=: \Psi^{i_1,i_2,\ldots,i_{k-1},i_k}. \]
The {\em monodromy or connection group} of $\mathcal{M}thcal{P}$ (see for example~\cite{d-auto}), denoted $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$, is the quotient of $W$ by the normal subgroup $K$ of $W$ consisting of those elements of $W$ that act trivially on $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$ (that is, fix every flag of $\mathcal{M}thcal{P})$. Let
\[ \pi: W \to \mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})=W/K \]
denote the canonical epimorphism.
Clearly, $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$ acts on $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$ in such a way that $\Psi^{\pi(w)}=\Psi^w$ for each $w$ in $W$ and each $\Psi$ in $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$, so in particular $\Psi^{\pi(r_i)}=\Psi^{i}$ for each $i$. We slightly abuse notation and also let $r_i$ denote the $i$-th generator $\pi(r_i)$ of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$. We shall refer to these $r_i$ as the {\em distinguished} generators of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$.
Since the action of $W$ is transitive on the flags,
the action of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$ on the flags of $\mathcal{M}thcal{P}$ is also transitive; moreover, this action is faithful, since only the trivial element of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$ fixes every flag. Thus $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$ can be viewed as a subgroup of the symmetric group on $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$.
Note that for every flag $\Phi$ of $\mathcal{M}thcal{P}$ and $i, j \in \{0, \dots, n-1\}$ such that $|i-j|\geq 2$, we have that $\Phi^{r_ir_j}=\Phi^{i,j}=\Phi^{j,i}=\Phi^{r_jr_i}$. Since the action of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{P})$ is faithful in $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$, this implies that $r_ir_j=r_jr_i$, whenever $|i-j|\geq 2$.
An {\em automorphism} of a polytope $\mathcal{M}thcal{P}$ is a bijection of $\mathcal{M}thcal{P}$ that preserves the order.
We shall denote by $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{P})$ the group of automorphisms of $\mathcal{M}thcal{P}$.
Note that any automorphism of $\mathcal{M}thcal{P}$ induces a bijection of its flags that preserves the $i$-adjacencies, for every $i \in \{0, 1, \dots, n-1\}$.
A polytope $\mathcal{M}thcal{P}$ is said to be {\em regular} if the action of $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{P})$ is regular on $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$.
If $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{P})$ has exactly 2 orbits on $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$ in such a way that adjacent flags belong to different orbits, $\mathcal{M}thcal{P}$ is called a {\em chiral polytope}.
We say that a polytope is a {\em $k$-orbit polytope} if the action of $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{P})$ has exactly $k$ orbits on $\mathcal{M}thcal{F}(\mathcal{M}thcal{P})$. Hence, regular polytopes are 1-orbit polytopes and chiral polytopes are 2-orbit polytopes.
Given an $n$-polytope $\mathcal{P}$, we define the {\em graph of flags} $\mathcal{G_P}$ of $\mathcal{P}$ as follows. The vertices of $\mathcal{G_P}$ are the flags of $\mathcal{P}$, and we put an edge between two of them whenever the corresponding flags are adjacent. Hence $\mathcal{G_P}$ is $n$-valent (i.e. every vertex of $\mathcal{G_P}$ has exactly $n$ incident edges; to reduce confusion we avoid the alternative terminology `$n$-regular'). Furthermore, we can colour the edges of $\mathcal{G_P}$ with $n$ different colours as determined by the adjacencies of the flags of $\mathcal{P}$. That is, an edge of $\mathcal{G_P}$ has colour $i$, if the corresponding flags of $\mathcal{P}$ are $i$-adjacent. In this way every vertex of $\mathcal{G_P}$ has exactly one edge of each colour (see Figure \ref{fig:baricentic}).
\begin{figure}
\caption{The graph of flags of a cuboctahedron.}
\label{fig:baricentic}
\end{figure}
It is straightforward to see that each automorphism of $\mathcal{P}$ induces an automorphism of the flag graph $\mathcal{G_P}$ that preserves the colours.
Conversely, every automorphism of $\mathcal{G_P}$ that preserves the colours is a bijection of the flags that preserves all the adjacencies, inducing an automorphism of $\mathcal{P}$. That is, the automorphism group $\mathrm{Aut}(\mathcal{P})$ of $\mathcal{P}$ is the colour-preserving automorphism group $\mathrm{Aut}_p(\mathcal{G_P})$ of $\mathcal{G_P}$.
Note that the connectivity of $\mathcal{P}$ implies that the action of $\mathrm{Aut}(\mathcal{P})$ on $\mathcal{F}(\mathcal{P})$ is free (or semiregular). Hence, the action of $\mathrm{Aut}_p(\mathcal{G_P})$ is free on the vertices of the graph $\mathcal{G_P}$.
One can re-label the edges of $\mathcal{G_P}$ and assign to them the generators of $\mathrm{Mon}(\mathcal{P})$. In fact, since for each flag $\Phi$ the action of $r_i$ takes $\Phi$ to $\Phi^{r_i}$, by thinking of the edge of colour $i$ of $\mathcal{G_P}$ as the generator $r_i$, one can regard a walk along the edges of $\mathcal{G_P}$ as an element of $\mathrm{Mon}(\mathcal{P})$. That is, if $w$ is a walk along the edges of $\mathcal{G_P}$ that starts at $\Phi$ and finishes at $\Psi$, then we have that $\Phi^w=\Psi$.
Hence, the connectivity of $\mathcal{P}$ also implies that the action of $\mathrm{Mon}(\mathcal{P})$ is transitive on the vertices of $\mathcal{G_P}$.
Furthermore, since the $i$-faces of $\mathcal{P}$ can be regarded as the orbits of flags under the action of the subgroup $H_i=\langle r_j \mid j\neq i \rangle$, the $i$-faces of $\mathcal{P}$ can also be regarded as the connected components of the subgraph of $\mathcal{G_P}$ obtained by deleting all the edges of colour $i$.
\subsection{Maniplexes}
Maniplexes were first introduced by Steve Wilson in \cite{mani}, aiming to unify the notion of maps and polytopes. In this section we review the basic theory of them.
An \emph{$n$-complex} $\mathcal{M}$ is defined by a set of {\em flags} $\mathcal{F}$ and a sequence $(r_0, r_1, \dots, r_n)$, such that each $r_i$ partitions the set $\mathcal{F}$ into sets of size 2 and the partitions defined by $r_i$ and $r_j$ are disjoint when $i\neq j$. Furthermore, we ask for $\mathcal{M}$ to be {\em connected} in the following way. Thinking of the $n$-complex $\mathcal{M}$ as the graph $\mathcal{G}$ with vertex set $\mathcal{F}$, and with edges of colour $i$ corresponding to the matching $r_i$, we ask for the graph $\mathcal{G}$ determined by $\mathcal{M}$ to be connected.
An \emph{$n$-maniplex} is an $n$-complex such that the elements in the sequence $(r_0, r_1, \dots, r_n)$ correspond to the distinguished involutory generators of a string Coxeter group.
In terms of the graph $\mathcal{G}$, this means that the connected components of the induced subgraph with edges of colours $i$ and $j$, with $| i - j | \geq 2$, are 4-cycles.
We shall refer to $n$ as the {\em rank} of an $n$-maniplex.
A 0-maniplex must be a graph with two vertices joined by an edge of colour 0. A 1-maniplex is associated to a 2-polytope or $l$-gon, whose graph contains $2l$ vertices joined by two perfect matchings, of colours 0 and 1, each of size $l$. A 2-maniplex can be considered as a map and vice versa, so that maniplexes generalise the notion of maps to higher rank. Regarding polytopes, the flag graph of any $(n+1)$-polytope can be associated to an $n$-maniplex, generalising in this way the notion of polytopes.
One can think of the sequence $(r_0, r_1, \dots, r_n)$ of a maniplex $\mathcal{M}$ as permutations of the flags. In fact, if $\Phi, \Psi \in \mathcal{F}$ are flags of $\mathcal{M}$ belonging to the same part of the partition induced by $r_i$, for some $i$, we say that $\Phi^{r_i} = \Psi$ and $\Psi^{r_i} = \Phi$. In this way each $r_i$ acts as an involutory permutation of $\mathcal{F}$.
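As a concrete illustration of this description (an illustrative sketch in Python, with names of our own choosing rather than a construction from the paper), the $l$-gon of the previous paragraph can be encoded by the two matchings $r_0, r_1$ acting as involutory permutations of its $2l$ flags, and the defining properties can be checked directly:
\begin{verbatim}
# Illustrative sketch: the l-gon as a 1-maniplex, encoded by the involutions
# r_0, r_1 acting on the 2l flags 0, 1, ..., 2l-1.
def polygon_maniplex(l):
    flags = list(range(2 * l))
    r0 = {f: f ^ 1 for f in flags}                       # swaps 2k <-> 2k+1
    r1 = {f: (f - 1) % (2 * l) if f % 2 == 0
             else (f + 1) % (2 * l) for f in flags}      # swaps 2k+1 <-> 2k+2
    return flags, [r0, r1]

def is_involution(r, flags):                  # fixed-point-free involution
    return all(r[r[f]] == f and r[f] != f for f in flags)

def is_connected(flags, rs):                  # connectivity of the flag graph
    seen, stack = {flags[0]}, [flags[0]]
    while stack:
        f = stack.pop()
        for r in rs:
            if r[f] not in seen:
                seen.add(r[f]); stack.append(r[f])
    return len(seen) == len(flags)

flags, rs = polygon_maniplex(5)               # the pentagon
assert all(is_involution(r, flags) for r in rs)
assert is_connected(flags, rs)
\end{verbatim}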
In analogy with polytopes, we let
$K = \{ w \in \langle r_0, \dots, r_n \rangle \mid \Phi^w = \Phi, \ \mathcal{M}thrm{for} \ \mathcal{M}thrm{all} \ \Phi \in \mathcal{M}thcal{F} \}$ and define the {\em connection group} $\mathcal{M}thrm{Mon}(\mathcal{M})$ of $\mathcal{M}$ as the quotient of $\langle r_0, \dots, r_n \rangle$ over $K$.
As before, we abuse notation and say that $\mathcal{M}thrm{Mon}(\mathcal{M})$ is generated by $r_0, \dots, r_n$ and define the action of $\mathcal{M}thrm{Mon}(\mathcal{M})$ on the flags inductively, induced by the action of the sequence $(r_0, r_1, \dots, r_n)$. In this way, the action of $\mathcal{M}thrm{Mon}(\mathcal{M})$ on $\mathcal{M}thcal{F}$ is faithful and transitive.
Note further that since the sequence $(r_0, r_1, \dots, r_n)$ induces a string Coxeter group, then, as elements of $\mathcal{M}thrm{Mon}(\mathcal{M})$, $r_ir_j=r_jr_i$ whenever $|i-j|\geq 2$. This implies that given a flag $\Phi$ of $\mathcal{M}thcal{M}$ and $i, j \in \{0, \dots, n\}$ such that $|i-j|\geq 2$, we have that $\Phi^{i,j}=\Phi^{r_ir_j}=\Phi^{r_jr_i}=\Phi^{j,i}$.
An {\em automorphism} $\alpha$ of an $n$-maniplex is a colour-preserving automorphism of the graph $\mathcal{M}thcal{G}$.
In a similar way as it happens for polytopes, the connectivity of the graph $\mathcal{G}$ implies that the action of the automorphism group $\mathrm{Aut}(\mathcal{M})$ of $\mathcal{M}$ is free on the vertices of $\mathcal{G}$.
Hence, $\alpha$ can be seen as a permutation of the flags in $\mathcal{M}thcal{F}$ that commutes with each of the permutations in the connection group.
To have consistent concepts and notation between polytopes and maniplexes, we shall say that an $i$-face (or a face of rank $i$) of a maniplex is a connected component of the subgraph of $\mathcal{M}thcal G$ obtained by removing the $i$-edges of $\mathcal{M}thcal G$. Furthermore, we say that two flags $\Phi$ and $\Psi$ are $i$-adjacent if $\Phi^{r_i}= \Psi$ (note that since $r_i$ is an involution, $\Phi^{r_i}= \Psi$ implies that $\Psi^{r_i}= \Phi$, so the concept is symmetric).
To each $i$-face $F$ of $\mathcal{M}thcal{M}$, we can associate an $(i-1)$-maniplex $\mathcal{M}thcal{M}_F$ by identifying two flags of $F$
whenever there is a $j$-edge between them, with $j > i$. Equivalently, we can remove from $F$ all edges of colours $\{i+1, \ldots,
n \}$, and then take one of the connected components. In fact, since $\langle r_0, \ldots, r_{i-1} \rangle$
commutes with $\langle r_{i+1}, \ldots, r_n \rangle$, the connected components of this subgraph of $F$ are
all isomorphic, so it does not matter which one we pick.
If $\Phi$ is a flag of $\mathcal{M}thcal{M}$ that contains the $i$-face $F$, then it naturally induces a flag $\overline{\Phi}$ in $\mathcal{M}thcal{M}_F$.
Similarly, if $\varphi \in \mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ fixes $F$, then $\varphi$ induces an automorphism $\overline{\varphi} \in \mathcal{M}thrm{Aut}(\mathcal{M}thcal{M}_F)$,
defined by $\overline{\Phi} \overline{\varphi} = \overline{\Phi \varphi}$. To check that this is well-defined,
suppose that $\overline{\Phi} = \overline{\Psi}$; we want to show that $\overline{\Phi \varphi} = \overline{\Psi \varphi}$.
Since $\overline{\Phi} = \overline{\Psi}$, it follows that $\Psi = \Phi^w$ for some $w \in \langle r_{i+1}, \ldots, r_n \rangle$.
Then $\Psi \varphi = (\Phi^w) \varphi = (\Phi \varphi)^w$, so that $\overline{\Psi \varphi} = \overline{\Phi \varphi}$.
By definition, the edges of $\mathcal{G}$ of one given colour form a perfect matching. The 2-factors of the graph $\mathcal{G}$ are the subgraphs spanned by the edges of two different colours.
Since the automorphisms of $\mathcal{M}thcal{M}$ preserve the adjacencies between the flags, it is not difficult to see that the following lemma holds.
\begin{lemma}
\label{orbitTOorbit}
Let $\Phi$ be a flag of $\mathcal{M}thcal{M}$ and let $a \in \mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$. If ${\mathcal{M}thcal O}_1$ and ${\mathcal{M}thcal O}_2$ denote the flag orbits of $\Phi$ and $\Phi^a$ (under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$), respectively, then $\Psi \in {\mathcal{M}thcal O}_1$ if and only if $\Psi^a \in {\mathcal{M}thcal O}_2$.
\end{lemma}
We say that a maniplex $\mathcal{M}thcal{M}$ is {\em $i$-face-transitive} if $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ is transitive on the faces of rank $i$. We say that $\mathcal{M}thcal{M}$ is {\em fully-transitive} if it is $i$-face-transitive for every $i =0, \dots, n-1$.
If $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ has $k$ orbits on the flags of $\mathcal{M}thcal{M}$, we say that $\mathcal{M}thcal{M}$ is a $k$-orbit maniplex. A 1-orbit maniplex is also called a {\em reflexible} maniplex. A 2-orbit maniplex with adjacent flags belonging to different orbits is a {\em chiral} maniplex. If a maniplex has at most 2 orbits of flags and $\mathcal{M}thcal{G_M}$ is a bipartite graph, then the maniplex is said to be {\em rotary}.
\section{Symmetry type graphs of polytopes and maniplexes}
\label{sec:stg}
In this section we shall define the symmetry type graph of a polytope or a maniplex.
To this end, we shall make use of quotient of graphs.
Therefore, we now consider pregraphs; that is, graphs that allow multiple edges and semi-edges.
As it should be clear, it makes no difference whether we consider an abstract $n$-polytope or an $(n-1)$-maniplex. Hence, though we will consider maniplexes throughout the paper, similar results will apply to polytopes.
Given an edge-coloured graph $\mathcal{M}thcal G$, and a partition $\mathcal{M}thcal B$ of its vertex set $V$, the {\em coloured quotient with respect to $\mathcal{M}thcal B$}, $\mathcal{M}thcal G_{\mathcal{M}thcal B}$, is defined as the pregraph with vertex set $\mathcal{M}thcal B$, such that for any two vertices $B,C \in {\mathcal{M}thcal B}$, there is a dart of colour $a$ from $B$ to $C$ if and only if there exists $u \in B$ and $v \in C$ such that there is a dart of colour $a$ from $u$ to $v$.
Edges between vertices in the same part of the partition $\mathcal{M}thcal B$ quotient into semi-edges.
Throughout the remainder of this section, let $\mathcal{M}thcal{M}$ be an $(n-1)$-maniplex and $\mathcal{M}thcal{G_M}$ its coloured flag graph.
As we discussed in the previous section, $\mathrm{Aut}(\mathcal{M})$ acts semiregularly on the vertices of $\mathcal{G_M}$. We shall consider the orbits of the vertices of $\mathcal{G_M}$ under the action of $\mathrm{Aut}(\mathcal{M})$ as our partition ${\mathcal B}$, and denote it by ${\mathcal B}:={\mathcal O}rb$.
Note that since the action is semiregular, every two orbits $B,C \in {\mathcal{M}thcal O}rb$ have the same number of elements.
The {\em symmetry type graph} $T(\mathcal{M}thcal{M})$ of $\mathcal{M}thcal{M}$ is the coloured quotient graph of $\mathcal{M}thcal{G_M}$ with respect to ${\mathcal{M}thcal O}rb$.
Since the flag graph $\mathcal{M}thcal{G_M}$ is an undirected graph, then $T(\mathcal{M}thcal{M})$ is a pre-graph without loops or directed edges.
Furthermore, as we are taking the coloured quotient, and $\mathcal{M}thcal{G_M}$ is edge-coloured with $n$ colours, then $T(\mathcal{M}thcal{M})$ is an $n$-valent pre-graph, with one edge or semi-edge of each colour at each vertex.
It is hence not difficult to see that if $\mathcal{M}thcal{M}$ is a reflexible maniplex, then $T(\mathcal{M}thcal{M})$ is a graph consisting of only one vertex and $n$ semi-edges, all of them of different colours.
In fact, the symmetry type graph of a $k$-orbit maniplex has precisely $k$ vertices.
Figure~\ref{STGrank3A} shows the symmetry type graph of a reflexible 2-maniplex (on the left), and the symmetry type graph of the cuboctahedron: the quotient graph of the flag graph in Figure~\ref{fig:baricentic} with respect to the automorphism group of the cuboctahedron.
\begin{figure}
\caption{Symmetry type graphs of a reflexible 2-maniplex (on the left) and of the cuboctahedron (on the right).}
\label{STGrank3A}
\end{figure}
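The coloured quotient defining $T(\mathcal{M})$ is easy to compute once the flag graph and the orbit partition are known. The following sketch (with an assumed edge-list encoding of our own, not notation from the paper) projects each edge of $\mathcal{G_M}$ to an edge of $T(\mathcal{M})$, or to a semi-edge when both endpoints lie in the same orbit:
\begin{verbatim}
# Illustrative sketch: coloured quotient of a flag graph by an orbit partition.
# edges: list of (flag_u, flag_v, colour); orbit_of: dict flag -> orbit label.
def symmetry_type_graph(edges, orbit_of):
    vertices = set(orbit_of.values())
    quotient_edges = set()    # (orbit_u, orbit_v, colour) with orbit_u < orbit_v
    semi_edges = set()        # (orbit, colour)
    for u, v, c in edges:
        ou, ov = orbit_of[u], orbit_of[v]
        if ou == ov:
            semi_edges.add((ou, c))
        else:
            quotient_edges.add((min(ou, ov), max(ou, ov), c))
    return vertices, quotient_edges, semi_edges

# Example: the digon (2-gon) with flags 0..3, where r_0 matches {0,1},{2,3}
# and r_1 matches {1,2},{3,0}; it is reflexible, so all flags form one orbit
# and the quotient is a single vertex with one semi-edge of each colour.
edges = [(0, 1, 0), (2, 3, 0), (1, 2, 1), (3, 0, 1)]
orbit_of = {0: 0, 1: 0, 2: 0, 3: 0}
print(symmetry_type_graph(edges, orbit_of))
# -> ({0}, set(), {(0, 0), (0, 1)})
\end{verbatim}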
Note that by the definition of $T(\mathcal{M}thcal{M})$, there exists a surjective function $$\psi: V(\mathcal{M}thcal{G_M}) \to V(T(\mathcal{M}thcal{M}))$$ that assigns, to each vertex of $V(\mathcal{M}thcal{G_M})$ its corresponding orbit in $T(\mathcal{M}thcal{M})$. Hence, given $\Phi, \Psi \in V(\mathcal{M}thcal{G_M})$, we have that $\psi(\Phi)=\psi(\Psi)$ if and only if $\Phi$ and $\Psi$ are in the same orbit under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$.
Given vertices $u,v$ of $T(\mathcal{M}thcal{M})$, if there is an $i$-edge joining them, we shall denote such edge as $(u,v)_i$. Similarly, $(v,v)_i$ shall denote the semi-edge of colour $i$ incident to the vertex $v$.
Because of Lemma~\ref{orbitTOorbit}, we can define the action of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ on the vertices of $T(\mathcal{M}thcal{M})$. In fact, given $v \in T(\mathcal{M}thcal{M})$ and $a \in \mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$, then $v^a:=\psi(\Phi^a)$, where $\Phi\in \psi^{-1}(v)$. Note that the definition of the action does not depend on the choice of $\Phi\in \psi^{-1}(v)$; in fact, we have that $\Phi, \Psi, \in \psi^{-1}(v)$ if and only if $\psi(\Phi)=\psi(\Psi)$ and this in turn is true if and only if $\Phi$ and $\Psi$ are in the same orbit under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$. By Lemma~\ref{orbitTOorbit}, the fact that $\Phi$ and $\Psi$ are in the same orbit under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ implies that, for any $a \in \mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$, the flags $\Phi^a$ and $\Psi^a$ are also in the same orbit under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$. Hence $\psi(\Phi^a)=\psi(\Psi^a)$ and therefore the definition of $v^a$ does not depend on the choice of the element $\Phi\in \psi^{-1}(v)$.
Since $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ is transitive on the vertices of $\mathcal{M}thcal{G_M}$, then it is also transitive on the vertices of $T(\mathcal{M}thcal{M})$, implying that $T(\mathcal{M}thcal{M})$ is a connected graph.
Furthermore, the action of each generator $r_i$ of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ on a vertex $v$ of $T(\mathcal{M}thcal{M})$ corresponds precisely to the (semi-)edge of colour $i$ incident to $v$.
Hence, the orbit $v^{ \langle r_j \mid j \neq i \rangle}$ corresponds to the orbit under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ of an $i$-face $F$ of $\mathcal{M}thcal{M}$ such that $F \in \Psi$, for some $\Psi \in \psi^{-1}(v)$ (as before, different choices of flag $\Psi\in \psi^{-1}(v)$ induce the same orbit of $i$-faces). Therefore, the connected components of the subgraph $T^i(\mathcal{M}thcal{M})$ of $T(\mathcal{M}thcal{M})$ with edges of colours $\{0, \dots, n-1\} \setminus \{i\}$ correspond to the orbits of the $i$-faces under $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$. In particular this implies the following proposition.
\begin{proposition}
Let $\mathcal{M}$ be a maniplex, $T(\mathcal{M})$ its symmetry type graph and let $T^i(\mathcal{M})$ be the subgraph obtained by erasing the $i$-edges of $T(\mathcal{M})$. Then $\mathcal{M}$ is $i$-face-transitive if and only if $T^i(\mathcal{M})$ is connected.
\end{proposition}
We shall say that a symmetry type graph $T$ is $i$-face-transitive if $T^i$ is connected, and that $T$ is a fully-transitive symmetry type graph if it is $i$-face-transitive for all $i$.
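Computationally, the proposition above gives a direct test for $i$-face-transitivity: delete the (semi-)edges of colour $i$ from $T(\mathcal{M})$ and check connectivity. A small sketch (again with an assumed edge-list encoding of our own):
\begin{verbatim}
# Illustrative sketch: M is i-face-transitive iff T^i(M) is connected.
# vertices: the vertices of T(M); edges: list of (u, v, colour), where a
# semi-edge of colour c at v is recorded as (v, v, c).
def is_i_face_transitive(vertices, edges, i):
    vertices = list(vertices)
    adj = {v: [] for v in vertices}
    for u, v, c in edges:
        if c != i and u != v:       # semi-edges do not affect connectivity
            adj[u].append(v); adj[v].append(u)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == len(vertices)

# Example: a 3-orbit 2-maniplex in class 3^{0,1} (edges of colours 0 and 1,
# all remaining colours appearing as semi-edges).
V = ['u', 'v', 'w']
E = [('u', 'v', 0), ('v', 'w', 1), ('u', 'u', 1), ('u', 'u', 2),
     ('v', 'v', 2), ('w', 'w', 0), ('w', 'w', 2)]
print([is_i_face_transitive(V, E, i) for i in range(3)])  # [False, False, True]
\end{verbatim}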
Recall that to each $i$-face $F$ of $\mathcal{M}thcal{M}$, there is an associated $(i-1)$-maniplex $\mathcal{M}thcal{M}_F$.
The symmetry type graph $T(\mathcal{M}thcal{M}_F)$ is related in a natural way to the connected component of $T^i(\mathcal{M}thcal{M})$ that corresponds to $F$:
\begin{proposition}
\label{STGofFaces}
Let $F$ be an $i$-face of the maniplex $\mathcal{M}thcal{M}$, and let $\mathcal{M}thcal{M}_F$ be the corresponding $(i-1)$-maniplex. Let
$\mathcal{M}thcal C$ be the connected component of $T^i(\mathcal{M}thcal{M})$ corresponding to $F$. Then there is a surjective
function $\pi: V(\mathcal{M}thcal C) \to V(T(\mathcal{M}thcal{M}_F))$. Furthermore, if $j < i$ then each $j$-edge
$(u, u^j)_j$ of $\mathcal{M}thcal C$ yields a $j$-edge $(\pi(u), \pi(u^j))_j$ in $T(\mathcal{M}thcal{M}_F)$, and if $j > i$, then $\pi(u) = \pi(u^j)$.
\end{proposition}
\begin{proof}
First, let $\Phi$ and $\Psi$ be flags of $\mathcal{M}thcal{M}$ that are both in the connected component $F$, and suppose that
they lie in the same flag orbit, so that $\Psi = \Phi \varphi$ for some $\varphi \in \mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$.
Then the induced automorphism $\overline{\varphi}$ of $\mathcal{M}thcal{M}_F$ sends $\overline{\Phi}$ to $\overline{\Psi}$, and
therefore $\overline{\Phi}$ and $\overline{\Psi}$ lie in the same orbit. Furthermore, every flag of
$\mathcal{M}thcal{M}_F$ is of the form $\overline{\Phi}$ for some $\Phi$ in $F$. Thus, each orbit of $\mathcal{M}thcal{M}$ that
intersects $F$ induces an orbit of $\mathcal{M}thcal{M}_F$, and it follows that there is a surjective function
$\pi: V(\mathcal{M}thcal C) \to V(T(\mathcal{M}thcal{M}_F))$.
Consider an edge $(u, u^j)_j$ in $\mathcal{M}thcal C$. Then $u = \psi(\Phi)$ for some flag $\Phi$ in $F$, and we can
take $u^j = \psi(\Phi^j)$. Both $\Phi$ and $\Phi^j$ induce flags in $\mathcal{M}thcal{M}_F$. If $j < i$, then
$\overline{\Phi^j} = \overline{\Phi}^j$. Therefore, there must be a $j$-edge from the orbit of
$\overline{\Phi}$ to the orbit of $\overline{\Phi^j}$; in other words, a $j$-edge from $\pi(u)$ to $\pi(u^j)$.
On the other hand, if $j > i$, then $\overline{\Phi^j} = \overline{\Phi}$, and so $\overline{\Phi}$ and
$\overline{\Phi^j}$ lie in the same orbit and thus $\pi(u) = \pi(v)$.
\end{proof}
Note that the edges of a given colour $i$ of $T(\mathcal{M}thcal{M})$ form a perfect matching (where, of course, we are allowing to match a vertex with itself by a semi-edge). Given two colours $i$ and $j$, the subgraph of $T(\mathcal{M}thcal{M})$ consisting of all the vertices of $T(\mathcal{M}thcal{M})$ and only the $i$- and $j$-edges shall be called a $(i,j)$ 2-factor of $T(\mathcal{M}thcal{M})$.
Because $r_ir_j=r_jr_i$ whenever $| i-j |\geq 2$, in $\mathcal{M}thcal{G_M}$, the alternating cycles of colours $i$ and $j$ have length 4. By Lemma~\ref{orbitTOorbit} each of these 4-cycles should then factor, in $T(\mathcal{M}thcal{M})$, into one of the five graphs in Figure~\ref{4cyclequotient}. Hence, if $| i-j |\geq 2$, then the connected components of the $(i,j)$ 2-factors of $T(\mathcal{M}thcal{M})$ are precisely one of these graphs.
\begin{figure}
\caption{Possible quotients of $i-j$ coloured 4-cycles.}
\label{4cyclequotient}
\end{figure}
In light of the above observations we state the following lemma.
\begin{lemma}
\label{2factors4vertices}
Let $T(\mathcal{M}thcal{M})$ be the symmetry type graph of a maniplex. If there are vertices $u,v,w \in V(T(\mathcal{M}thcal{M}))$ such that $(u,v)_i, (v,w)_j \in E(T(\mathcal{M}thcal{M}))$ with $| i - j | \geq 2$, then the connected component of the $(i,j)$ 2-factor that contains $v$ has four vertices.
\end{lemma}
\section{Symmetry type graphs of highly symmetric maniplexes}
\label{sec:stg-highly}
One can classify maniplexes with small number of flag orbits (under the action of the automorphism group of the maniplex) in terms of their symmetry type graphs. The number of distinct possible symmetry types of a $k$-orbit $(n-1)$-maniplex is the number of connected pre-graphs on $k$ vertices that are $n$-valent and that can be edge-coloured with exactly $n$ colours. Furthermore, given a symmetry type graph, one can read from the appropriate coloured subgraphs the different types of face transitivities that the maniplex has.
As pointed out before, the symmetry type graph of a reflexible $(n-1)$-maniplex consists of one vertex and $n$ semi-edges. The classification of two-orbit maniplexes (see \cite{2-orbit}) in terms of the local configuration of their flags follows immediately from considering symmetry type graphs. In fact, for each $n$, there are $2^{n}-1$ symmetry type graphs with 2 vertices and $n$ (semi)-edges, since given any proper subset $I$ of the colours $\{0,1, \dots, n-1\}$, there is a symmetry type graph with two vertices, $| I |$ semi-edges corresponding to the colours of $I$, and where all the edges between the two vertices use the colours not in $I$ (see Figure~\ref{2orbitSTG}).
This symmetry type graph corresponds precisely to polytopes in class $2_I$, see \cite{2-orbit}.
\begin{figure}
\caption{The symmetry type graph of a maniplex in class $2_I$.}
\label{2orbitSTG}
\end{figure}
Highly symmetric maniplexes can be regarded as those with few flag orbits or those with many (or all) face transitivities.
In \cite{medial} one can find the complete list of symmetry type graphs of $2$-maniplexes with at most 5 vertices. In this section we classify symmetry type graphs with 3 vertices and study some properties of symmetry type graphs of 4-orbit maniplexes and fully-transitive 3-maniplexes.
\begin{proposition}\label{stg_3-orbit}
There are exactly $2n-3$ different possible symmetry type graphs of 3-orbit maniplexes of rank $n-1$.
\end{proposition}
\begin{proof}
Let $\mathcal{M}$ be a 3-orbit $(n-1)$-maniplex and $T(\mathcal{M})$ its symmetry type graph. Then, $T(\mathcal{M})$ is an $n$-valent well edge-coloured graph with vertices $v_1, v_2$ and $ v_3$. Recall that the set of colours $\{0,1, \dots, n-1\}$ correspond to the distinguished generators $r_0, r_1, \dots, r_{n-1}$ of the connection group of $\mathcal{M}$, and that by $(u,v)_i$ we mean the edge between vertices $u$ and $v$ of colour $i$.
Since $T(\mathcal{M})$ is a connected graph, without loss of generality, we can suppose that there is at least one edge joining $v_1$ and $v_2$ and another joining $v_2$ and $v_3$.
Let $j,k \in \{0,1, \dots, n-1\}$ be the colours of these edges, respectively. That is, without loss of generality we may assume that $(v_1,v_2)_j$ and $(v_2,v_3)_k$ are edges of $T(\mathcal{M}thcal{M})$.
By Lemma~\ref{2factors4vertices}, we must have that $k = j \pm 1$, as otherwise $T(\mathcal{M}thcal{M})$ would have to have at least 4 vertices.
This implies that the only edges of $T(\mathcal{M}thcal{M})$ are either $(v_1,v_2)_j$ and $(v_2,v_3)_{j+1}$, $(v_1,v_2)_j$ and $(v_2,v_3)_{j-1}$ or $(v_1,v_2)_j$, $(v_2,v_3)_{j+1}$ and $(v_2,v_3)_{j-1}$, with $j \in \{1, 2, \dots, n-2\}$. (See Figure~\ref{i-transitive3}).
\begin{figure}
\caption{Possible symmetry type graphs of 3-orbit $(n-1)$-maniplexes with edges of colours $j-1$, $j$, and $j+1$, with $j \in \{1, 2, \dots, n-2\}$.}
\label{i-transitive3}
\end{figure}
An easy computation now shows that there are $2n-3$ possible different symmetry type graphs of 3-orbit maniplexes of rank $n-1$.
\end{proof}
Given a 3-orbit $(n-1)$-maniplex $\mathcal{M}thcal{M}$ with symmetry type graph having exactly two edges $e$ and $e'$ of colours $j$ and $j+1$, respectively, for some $j \in \{0, \dots, n-2\}$, we shall say that $\mathcal{M}thcal{M}$ is in class $3^{j,j+1}$.
If, on the other hand, the symmetry type graph of $\mathcal{M}thcal{M}$ has one edge of colour $j$ and parallel edges of colours $j-1$ and $j+1$, for some $j \in \{1, \dots, n-2\}$, then we say that $\mathcal{M}thcal{M}$ is in class $3^j$.
From Figure~\ref{i-transitive3} we observe that a maniplex in class $3^{j, j+1}$ is $i$-face-transitive whenever $i \neq j, j+1$, while a maniplex in class $3^j$ is $i$-face-transitive for every $i \neq j$.
\begin{proposition}
A $3$-orbit maniplex is $j$-face-transitive if and only if it does not belong to any of the classes $3^j$, $3^{j,j+1}$ or $3^{j-1,j}$.
\end{proposition}
\begin{theorem}
\label{no3-orbitfully}
There are no fully-transitive 3-orbit maniplexes.
\end{theorem}
Using Proposition~\ref{STGofFaces}, we get some information about the number of flag orbits that the $j$-faces have:
\begin{proposition}
A $3$-orbit maniplex in class $3^j$ or $3^{j,j+1}$ has reflexible $j$-faces.
\end{proposition}
\begin{proof}
If $\mathcal{M}thcal{M}$ is a $3$-orbit maniplex, then the orbits of the $j$-faces correspond to the connected components of
$T^j(\mathcal{M}thcal{M})$. Assuming that $\mathcal{M}thcal{M}$ is in class $3^j$ or $3^{j,j+1}$, the graph $T^j(\mathcal{M}thcal{M})$ has two connected components;
an isolated vertex, and two vertices that are connected by a $(j+1)$-edge (and a $(j-1)$-edge, if $\mathcal{M}thcal{M}$ is in class
$3^{j,j+1}$. Then by Proposition~\ref{STGofFaces}, the $j$-faces that correspond to the isolated vertex
are reflexible (that is, 1-orbit), and the edge with label $j+1$ forces an identification between the two
vertices of the second component, so the $j$-faces in that component are also reflexible.
\end{proof}
\subsection{On the symmetry type graphs of 4-orbit maniplexes}
\label{sec:4notfully}
It does not take long to realise that counting the number of symmetry type graphs with $k\geq4$ vertices, and perhaps classifying them in a similar fashion as was done for 2 and 3 vertices, becomes considerably more difficult.
In this section, we shall analyse symmetry type graphs with 4 vertices and determine how far a 4-orbit maniplex can be from being fully-transitive. The following lemma is a consequence of the fact that by taking away the $i$-edges of a symmetry type graph $T(\mathcal{M})$, the resulting $T^i(\mathcal{M})$ cannot have too many components.
\begin{lemma}
\label{4orbMi-faceorb}
Let $\mathcal{M}thcal{M}$ be a 4-orbit $(n-1)$-maniplex and let $i \in \{0, \dots, n-1\}$. Then $\mathcal{M}thcal{M}$ has one, two or three orbits of $i$-faces.
\end{lemma}
If an $(n-1)$-maniplex $\mathcal{M}thcal{M}$ is not fully-transitive, there exists at least one $i \in \{0, \dots, n-1\}$ such that $T^i(\mathcal{M}thcal{M})$ is disconnected.
We shall divide the analysis of the types into three parts: when $T^{i}(\mathcal{M})$ has three connected components (two of them with one vertex and one with two vertices), when $T^i(\mathcal{M})$ has a connected component with one vertex and another connected component with three vertices, and finally when $T^{i}(\mathcal{M})$ has two connected components with two vertices each.
Before we start the case analysis, we let $v_1,v_2,v_3,v_4$ be the vertices of $T(\mathcal{M}thcal{M})$.
Suppose that $T^{i}(\mathcal{M})$ has three connected components with $v_2$ and $v_3$ in the same component.
Without loss of generality we may assume that $T(\mathcal{M}thcal{M})$ has edges $(v_1,v_2)_{i}$ and $(v_3,v_4)_{i}$.
Let $k\in\{0,1, \dots, n-1\} \setminus \{i\}$ be the colour of an edge between $v_2$ and $v_3$.
Since there is no edge of $T(\mathcal{M}thcal{M})$ between $v_1$ and $v_4$, Lemma~\ref{2factors4vertices} implies that there are at most two such possible $k$, namely $k = i-1$ and $k = i+1$.
If $i \neq 0, n-1$, $T(\mathcal{M}thcal{M})$ can have either both edges or exactly one of them, while if $i\in \{0, n-1\}$ there is one possible edge (see Figure~\ref{3components4}).
\begin{figure}
\caption{Symmetry type graphs of an $(n-1)$-maniplex $\mathcal{M}$ with four orbits on its flags and three orbits on its $i$-faces.}
\label{3components4}
\end{figure}
Let us now assume that $T^{i}(\mathcal{M})$ has two connected components, one consisting of the vertex $v_1$ and the other one containing vertices $v_2, v_3$ and $v_4$.
This means that the $i$-edge incident to $v_1$ is the unique edge that connects this vertex with the rest of the graph and, without loss of generality, $T(\mathcal{M}thcal{M})$ has the edge $(v_1,v_2)_{i}$.
As with the previous case, Lemma~\ref{2factors4vertices} implies that an edge between $v_2$ and $v_3$ has colour either $i-1$ or $i+1$.
First observe that having either $(v_2,v_3)_{i-1}$ or $(v_2,v_3)_{i+1}$ in $T(\mathcal{M}thcal{M})$ immediately implies (by Lemma~\ref{2factors4vertices}) that there is no edge between $v_2$ and $v_4$.
Now, if both edges $(v_2,v_3)_{i-1}$ and $(v_2,v_3)_{i+1}$ are in $T(\mathcal{M}thcal{M})$, then an edge between $v_3$ and $v_4$ would have to have colour $i$, contradicting the fact that $T^i(\mathcal{M}thcal{M})$ has two connected components.
Hence, there is exactly one edge between $v_2$ and $v_3$.
It is now straightforward to see that $T(\mathcal{M}thcal{M})$ should be as one of the graphs in Figure~\ref{2components4}, implying that
there are exactly four symmetry type graphs with these conditions for each $i\neq 0,1,n-2, n-1$, but only two symmetry type graphs of this kind when $i=0, 1, n-2,$ or $n-1$.
\begin{figure}
\caption{Symmetry type graphs of $(n-1)$-maniplexes with four orbits on its flags, and two orbits on its $i$-faces such that one contains three flag orbits and the other contains a single flag orbit.}
\label{2components4}
\end{figure}
It is straightforward to see from Figure~\ref{2components4} that the next lemma follows.
\begin{lemma}
Let $\mathcal{M}thcal{M} $ be a 4-orbit $(n-1)$-maniplex with two orbits of $i$-faces such that $T^{i}(\mathcal{M})$ has a connected component consisting of one vertex, and another one consisting of three vertices. Then either $T^{i-1}(\mathcal{M})$ or $T^{i+1}(\mathcal{M})$ has two connected components, each with two vertices.
\end{lemma}
Finally, we turn our attention to the case where $T^{i}(\mathcal{M})$ has two connected components, with two vertices each. Suppose that $v_1$ and $v_2$ belong to one component, while $v_3$ and $v_4$ belong to the other.
As the two components must be connected by the edges of colour $i$, we may assume that $(v_1,v_3)_{i}$ is an edge of $T(\mathcal{M})$.
If the vertices $v_2$ and $v_4$ have semi-edges of colour $i$, Lemma~\ref{2factors4vertices} implies that $T(\mathcal{M}thcal{M})$ is one of the graphs shown in Figure~\ref{2-2components4}.
\begin{figure}
\caption{Six of the symmetry type graphs of $(n-1)$-maniplexes with four orbits on its flags, and two orbits on its $i$-faces such that each contains two flag orbits.}
\label{2-2components4}
\end{figure}
On the other hand, if $(v_1,v_3)_{i}$ and $(v_2,v_4)_{i}$ are both edges of $T(\mathcal{M})$, given $j \in \{0,1, \dots, n-1\} \setminus \{i-1, i, i+1\}$, we use again Lemma~\ref{2factors4vertices} to see that $(v_1,v_2)_j$ is an edge of $T(\mathcal{M}thcal{M})$ if and only if $(v_3,v_4)_j$ is also an edge of $T(\mathcal{M}thcal{M})$.
By contrast, $T(\mathcal{M}thcal{M})$ can have either two edges of colour $i\pm1$ (each joining the vertices of each connected component of $T^{i}(\mathcal{M}thcal{M})$), four semi-edges or an edge and two semi-edges of colour $i\pm1$.
Hence, if $i \neq 0, n-1$, for each $J \subset \{0, 1, \dots, n-1\} \setminus \{i-1, i, i+1\}$ there are ten symmetry type graphs with semi-edges of colours in $J$ and edges of colours not in $J$, as shown in Figures~\ref{3(n-2)_2compA} and~\ref{3(n-2)_2compB},
while for $J=\{0,1, \dots, n-1\} \setminus \{i-1, i, i+1\}$ there are six such graphs (shown in Figure~\ref{3(n-2)_2compB}).
On the other hand if $i \in\{0, n-1\}$, for each $J \subset \{0, 1, \dots, n-1\} \setminus \{i-1, i, i+1\}$ there are two graphs as in Figure~\ref{3(n-2)_2compA} and one as in Figure~\ref{3(n-2)_2compB},
while for $J=\{0,1, \dots, n-1\} \setminus \{i-1, i, i+1\}$, there is only one of the graphs in Figure~\ref{3(n-2)_2compB}.
\begin{figure}
\caption{Four families of possible symmetry type graphs of $(n-1)$-maniplexes with four orbits on its flags, and two orbits on its $i$-faces such that each contains two flag orbits.}
\label{3(n-2)_2compA}
\end{figure}
\begin{figure}
\caption{The remaining six families of possible symmetry type graphs of $(n-1)$-maniplexes with four orbits on its flags, and two orbits on its $i$-faces such that each contains two flag orbits.}
\label{3(n-2)_2compB}
\end{figure}
We summarize our analysis of the transitivity of 4-orbit maniplexes below.
\begin{theorem}
Let $\mathcal{M}thcal{M}$ be a 4-orbit maniplex. Then, one of the following holds.
\begin{enumerate}
\item $\mathcal{M}thcal{M}$ is fully-transitive.
\item There exists $i \in \{0, \dots, n-1\}$ such that $\mathcal{M}thcal{M}$ is $j$-face-transitive for all $j\neq i$.
\item There exist $i, k \in \{0, \dots, n-1\}$, $i \neq k$, such that $\mathcal{M}thcal{M}$ is $j$-face-transitive for all $j\neq i, k$.
\item There exists $i \in \{0, \dots, n-1\}$ such that $\mathcal{M}thcal{M}$ is $j$-face-transitive for all $j\neq i, i \pm 1$.
\end{enumerate}
\end{theorem}
\subsection{On fully-transitive $n$-maniplexes for small $n$}
Every 1-maniplex is reflexible and hence fully-transitive.
Fully-transitive 2-maniplexes correspond to fully-transitive maps. It is well-known (and easy to see from the symmetry type graph) that if a map is edge-transitive, then it should have one, two or four orbits. Moreover, a fully-transitive map should be regular, a two-orbit map in class 2, $2_0$, $2_1$ or $2_2$, or a four-orbit map in class $4_{Gp}$ or $4_{Hp}$ (see, for example, \cite{medial}).
When considering fully-transitive $n$-maniplexes, $n \geq 3$, the analysis becomes considerably more complicated.
In \cite{2-orbit} Hubard shows that there are $2^{n+1} - n -2$ classes of fully-transitive two-orbit $n$-maniplexes. By Theorem~\ref{no3-orbitfully}, there are no 3-orbit fully-transitive $n$-maniplexes.
We note that there are 20 symmetry type graphs of 4-orbit 3-maniplexes that are fully transitive (see Figure~\ref{4orbitfully}).
\begin{figure}
\caption{Symmetry type graphs of 4-orbit fully-transitive 3-maniplexes}
\label{4orbitfully}
\end{figure}
The following theorem shall be of great use to show that a fully-transitive 3-maniplex must have an even number of flag orbits unless it is reflexible.
\begin{theorem}
Let $\mathcal{M}thcal{M}$ be a fully-transitive 3-maniplex and let $T(\mathcal{M}thcal{M})$ be its symmetry type graph. Then either $\mathcal{M}thcal{M}$ is reflexible or $T(\mathcal{M}thcal{M})$ has an even number of vertices.
\end{theorem}
\begin{proof}
Suppose, to the contrary, that $T(\mathcal{M})$ has an odd number of vertices, different from 1.
Whenever $|i-j|>1$, the connected components of the $(i,j)$ 2-factor of a symmetry type graph are as in Figure~\ref{4cyclequotient}.
Hence, there is a connected component of the $(0,2)$ 2-factor of $T(\mathcal{M}thcal{M})$ with exactly one vertex $v$ (and, hence, semi-edges of colours 0 and 2). The connectivity of $T(\mathcal{M}thcal{M})$ implies that there is a vertex $v_1$ adjacent to $v$ in $T(\mathcal{M}thcal{M})$.
If $v_1$ is the only neighbour of $v$, then $T(\mathcal{M}thcal{M})$ has the edges $(v,v_1)_1$ and $(v,v_1)_3$ as otherwise $\mathcal{M}thcal{M}$ is not fully-transitive.
Since the connected components of the $(0,3)$ 2-factor of $T(\mathcal{M}thcal{M})$ are as in Figure~\ref{4cyclequotient}, $v_1$ has a 0 coloured semi-edge.
Because $T(\mathcal{M}thcal{M})$ has more than two vertices, the edge of $v_1$ of colour 2 joins $v_1$ to another vertex, say $u$.
But removing the edge $(v_1,u)_2$ disconnects the graph contradicting the fact that $\mathcal{M}thcal{M}$ is 2-face-transitive.
On the other hand, if $v$ has more than one neighbour it has exactly two, say $v_1$ and $u$ and $T(\mathcal{M}thcal{M})$ has the two edges $(v,v_1)_1$ and $(v,u)_3$. This implies that the connected component of the $(1,3)$ 2-factor containing $v$ has four vertices: $v, v_1, u$ and $v_2$. (Therefore $(v_1,v_2)_3$ and $(u,v_2)_1$ are edges of $T(\mathcal{M}thcal{M})$.) Using the $(0,3)$ 2-factor one sees that $u$ has a semi-edge of colour 0.
Now, if $(v_1,v_2)_0$ is an edge of $T(\mathcal{M}thcal{M})$, then the vertices $v, v_1, v_2$ and $u$ are joined to the rest of $T(\mathcal{M}thcal{M})$ by the edges of colour 2, implying that removing them shall disconnect $T(\mathcal{M}thcal{M})$ (there exists at least another vertex in $T(\mathcal{M}thcal{M})$ as it has an odd number of vertices), which is again a contradiction.
On the other hand, if $v_1$ (or $v_2$) has an edge of colour 0 to a vertex $v_3$, then by Lemma~\ref{2factors4vertices} $v_2$ (or $v_1$) has a 0-edge to a vertex $v_4$. Again, if $(v_3, v_4)_1$ is an edge of $T(\mathcal{M}thcal{M})$, since the number of vertices of the graph is odd, removing the edges of colour 2 will leave only the vertices $u,v,v_1,\dots, v_4$ in one component, which is a contradiction. Proceeding now by induction on the number of vertices one can conclude that $T(\mathcal{M}thcal{M})$ cannot have an odd number of vertices
\end{proof}
\section{Generators of the automorphism group of a $k$-orbit maniplex}
\label{Gen-autG}
It is well-known among polytopists that the automorphism group of a regular $n$-polytope can be generated by $n$ involutions. In fact, given a base flag $\Phi \in \mathcal{F}(\mathcal{P})$, the distinguished generators of $\mathrm{Aut}(\mathcal{P})$ with respect to $\Phi$ are involutions $\rho_0, \rho_1, \dots, \rho_{n-1}$ such that $\Phi \rho_i = \Phi^i$.
Generators for the automorphism group of a two-orbit $n$-polytope can also be given in terms of a base flag (see \cite{2-orbit}). In this section we give a set of distinguished generators (with respect to some base flag) for the automorphism group of a $k$-orbit $(n-1)$-maniplex in terms of the symmetry type graph $T(\mathcal{M}thcal{M})$, provided that $T(\mathcal{M}thcal{M})$ has a hamiltonian path.
Given two walks $w_{1}$ and $w_{2}$ along the edges and semi-edges of $T(\mathcal{M}thcal{M})$ such that the
final vertex of $w_{1}$ is the starting vertex of $w_{2}$, we define
the sequence $w_{1}w_{2}$ as the walk that traces all the edges of
$w_{1}$ and then all the edges of $w_{2}$ in the same order; the
inverse of $w_{1}$, denoted by $w_{1}^{-1}$, is the walk which has
the final vertex of $w_{1}$ as its starting vertex, and traces all
the edges of $w_{1}$ in reversed order. Since each of the elements
of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ associated to the edges of $T(\mathcal{M}thcal{M})$
is its own inverse, we shall forbid walks that trace the same edge
two times consecutively (or just remove the edge from such
walk, shortening its length by two).
Given a set $\mathcal{W}$ of walks in
$T(\mathcal{M})$, we say that a subset $\mathcal{W}'\subseteq \mathcal{W}$ is {\em a
generating set of $\mathcal{W}$} if each $w\in \mathcal{W}$ can be expressed as a sequence
of elements of $\mathcal{W}'$ and their inverses.
Now, let $\mathcal{W}$ be the set of closed walks along the edges and semi-edges
of $T(\mathcal{M}thcal{M})$ starting at a distinguished vertex $v_0$.
Recall that the walks along the edges and semi-edges of $T(\mathcal{M}thcal{M})$
correspond to permutations of the flags of $\mathcal{M}thcal{M}$; moreover,
each closed walk of $\mathcal{W}$ corresponds to an automorphism of $\mathcal{M}thcal{M}$.
Thus, by finding a generating set of $\mathcal{W}$, we will find a set of automorphisms
of $\mathcal{M}$ that generates $\mathrm{Aut}(\mathcal{M})$. (However, this correspondence is not one-to-one, as an automorphism of $\mathcal{M}$ may be described in more than one way as a closed walk of $T(\mathcal{M})$.) Given $T(\mathcal{M})$, we may easily find such a generating set. The construction
goes as follows:
Let $\mathcal{M}thcal{M}$ be a $k$-orbit maniplex of rank $n-1$ such that
$\mathcal{M}thcal{C}=(v_{0},v_{1},v_{2},...,v_{q})$ is a walk of minimal length that visits all the vertices of $T(\mathcal{M})$.
The sets of vertices and edges (and semi-edges) of $T(\mathcal{M}thcal{M})$
will be denoted by $V$ and $E$, respectively. The set of edges visited
by $\mathcal{M}thcal{C}$ will be denoted by $E_{\mathcal{M}thcal{C}}$.
In this section, the edges
joining two vertices $v_{i}$ and $v_{j}$ will be denoted by $(v_{i},v_{j})_{1}$,
$(v_{i},v_{j})_{2}$, $(v_{i},v_{j})_{3}$,...,$(v_{i},v_{j})_{h}$;
if $j=i+1$ then $(v_{i},v_{j})_{1}\in E_{\mathcal{M}thcal{C}}$.
(Note that in order to avoid carrying too many subindices, we modify the notation for the edges of $T(\mathcal{M})$ that we had used throughout the paper. To be consistent with the notation of the previous sections, one would have to say that the edges between $v_i$ and $v_j$ are $(v_i,v_j)_{a_1}$, $(v_i,v_j)_{a_2}, \dots, (v_i,v_j)_{a_h}$.)
Similarly,
we denote all semi-edges incident to a vertex $v_{i}$ by $(v_{i},v_{i})_{1}$,
$(v_{i},v_{i})_{2}$, $(v_{i},v_{i})_{3}$,...,$(v_{i},v_{i})_{l}$.
For the sake of simplicity, $(v_{i},v_{j})_{1}$ will be just called
$(v_{i},v_{j})$. Let $\mathcal{W}$ be the set of all closed walks in $T(\mathcal{M}thcal{M})$
with $v_{0}$ as its starting vertex. We shall now construct $G(\mathcal{W})\subseteq \mathcal{W}$,
a generating set of $\mathcal{W}$.
For each edge $(v_{i},v_{j})_{m}\in E\setminus E_{\mathcal{M}thcal{C}}$ we
shall define the walk
$w_{i,j,m}=((v_{0},v_{1}),(v_{1},v_{2}),...,(v_{i-1},v_{i}),(v_{i},v_{j})_{m},(v_{j},v_{j-1}),(v_{j-1},v_{j-2}),...,(v_{1},v_{0})).$
That is, we walk from $v_0$ to $v_i$ in $E_{\mathcal{M}thcal{C}}$, and then we take the edge $(v_{i},v_{j})_{m}$, and then we walk back from
$v_j$ to $v_0$ in $E_{\mathcal{M}thcal{C}}$.
Let $\mathcal{W}_{e}\subseteq \mathcal{W}$ be the set of all such walks.
For each semi-edge $(v_{i},v_{i})_{l}\in E\setminus E_{\mathcal{M}thcal{C}}$
we shall define the walk $w_{i,i,l}=((v_{0},v_{1}),(v_{1},v_{2}),...,(v_{i-1},v_{i}),(v_{i},v_{i})_{l},(v_{i},v_{i-1}),(v_{i-1},v_{i-2}),...,(v_{1},v_{0}))$.
That is, we walk from $v_0$ to $v_i$ in $E_{\mathcal{M}thcal{C}}$, and then we take the semi-edge $(v_{i},v_{i})_{l}$, and then we walk back from
$v_i$ to $v_0$ in $E_{\mathcal{M}thcal{C}}$.
Let $\mathcal{W}_{s}\subseteq \mathcal{W}$ be the set of all such walks.
We define $G(\mathcal{W})=\mathcal{W}_{e}\cup \mathcal{W}_{s}$.
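The construction of $G(\mathcal{W})$ is easily mechanised. The sketch below (with an assumed encoding of $T(\mathcal{M})$, the walk $\mathcal{C}$ and its edge set $E_{\mathcal{C}}$; the names are ours) returns, for every edge and semi-edge outside $E_{\mathcal{C}}$, the corresponding closed walk $w_{i,j,m}$ or $w_{i,i,l}$ based at $v_0$:
\begin{verbatim}
# Illustrative sketch of the construction of G(W) (names are ours).
# C_vertices = [v_0, ..., v_q];  C_edges[k] = (u, v, colour) is the edge of
# E_C joining C_vertices[k] and C_vertices[k+1];  all_edges lists every edge
# and semi-edge of T(M), a semi-edge being written as (v, v, colour).
def generating_walks(C_vertices, C_edges, all_edges):
    used = {(frozenset((a, b)), c) for (a, b, c) in C_edges}
    position = {v: k for k, v in enumerate(C_vertices)}
    def to_v(i):                      # the walk v_0, v_1, ..., v_i inside E_C
        return list(C_edges[:i])
    walks = []
    for (u, v, colour) in all_edges:
        if (frozenset((u, v)), colour) in used:
            continue                  # edges of E_C are not used as generators
        i, j = sorted((position[u], position[v]))
        mid = (C_vertices[i], C_vertices[j], colour)
        back = [(b, a, c) for (a, b, c) in reversed(to_v(j))]
        walks.append(to_v(i) + [mid] + back)
    return walks

# Example: with C_vertices = ['u', 'v', 'w'] and
# C_edges = [('u','v',0), ('v','w',1)], the semi-edge ('v','v',2) yields the
# closed walk [('u','v',0), ('v','v',2), ('v','u',0)].
\end{verbatim}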
\begin{lemma}
\label{generatingwalks}
With the notation from above, $G(\mathcal{W})$ is a generating set for $\mathcal{W}$.
\end{lemma}
\begin{proof}
We shall prove that any $w\in \mathcal{W}$ can be expressed
as a sequence of elements of $G(\mathcal{W})$ and their inverses. Let $w\in \mathcal{W}$
be a closed walk among the edges and semi-edges of $T(\mathcal{M}thcal{M})$ starting at $v_0$.
From now on, semi-edges will be referred to simply as ``edges''.
We shall proceed by induction over $n$, the number of edges in $E\setminus E_{\mathcal{M}thcal{C}}$
visited by $w$. If $w$ visits only one edge in $E\setminus E_{\mathcal{M}thcal{C}}$,
then $w\in G(\mathcal{W})$ or $w^{-1}\in G(\mathcal{W})$. Let us suppose that,
if a closed walk among the edges of $T(\mathcal{M}thcal{M})$
visits $m$ different edges in $E\setminus E_{\mathcal{M}thcal{C}}$, with
$m<n$, then it can be expressed as a sequence of elements of $G(\mathcal{W})$
and their inverses.
Let $w\in \mathcal{W}$ be a walk that visits exactly $n$ edges in $E\setminus E_{\mathcal{M}thcal{C}}$.
Let $(v_{a},v_{b})_{l}\in E\setminus E_{\mathcal{M}thcal{C}}$ be the last
edge of $E\setminus E_{\mathcal{M}thcal{C}}$
visited by $w$.
Without loss of generality we may assume that the vertex
$v_{b}$ was visited after $v_{a}$, so let $(v_c, v_a)_m$ be the edge that $w$ visits just before $(v_a,v_b)_l$ (note that $(v_c, v_a)_m$ may or may not be in $E_{\mathcal{M}thcal{C}}$).
Let $w_{1}\in \mathcal{W}$ be the closed walk that
traces the same edges (in the same order) as $w$ until reaching
$(v_{c},v_{a})_{m}$ and then traces the edges $(v_{a},v_{a-1})$,
$(v_{a-1},v_{a-2})$, ...,$(v_{1},v_{0})$, and let $w_{2}\in \mathcal{W}$
be the closed walk
that traces the edges $(v_{0},v_{1}),(v_{1},v_{2}),...,(v_{a-1},v_{a})$ and then traces $(v_a,v_b)_l$ and continues the way $w$ does to return to $v_0$.
It is clear that $w_{1}$ visits exactly $n-1$ edges in $E\setminus E_{\mathcal{M}thcal{C}}$
and that $w_{2}$ visits only one.
By inductive hypothesis both $w_{1}$
and $w_{2}$ can be expressed as a sequence of elements of $G(\mathcal{W})$,
and therefore so can $w$, since $w=w_{1}w_{2}$.
\end{proof}
Let $\Phi$ be a base flag of $\mathcal{M}$ that projects to the initial vertex of a walk that contains all the vertices of the symmetry type graph $T(\mathcal{M})$.
Following the notation of \cite{d-auto}, given $w \in \mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ such that $\Phi^w$ is in the same orbit as $\Phi$ (that is, $ w \in \mathcal{M}thrm{Norm(Stab (\Phi))}$), we denote by $\alpha_w$ the automorphism taking $\Phi$ to $\Phi^w$. Moreover, if $w=r_{i_1}r_{i_2}\dots r_{i_k}$ for some $i_1, \dots i_k \in \{0, \dots, n-1\}$, then we may also denote $\alpha_w$ by $\alpha_{i_1,i_2, \dots i_k}$.
The following theorem gives distinguished generators (with respect to some base flag) of the automorphism group of a maniplex $\mathcal{M}$ in terms of a distinguished walk of $T(\mathcal{M}thcal{M})$, that travels through all the vertices of $T(\mathcal{M}thcal{M})$. Its proof is a consequence of the previous lemma.
\begin{theorem}
\label{auto}
Let $\mathcal{M}$ be a $k$-orbit $(n-1)$-maniplex and let $T(\mathcal{M})$ be its symmetry type graph.
Suppose that $v_1, e_1, v_2, e_2 \dots, e_{q-1}, v_q$ is a distinguished walk that visits every vertex of $T(\mathcal{M})$, with the edge $e_i$ having colour $a_i$, for each $i= 1, \dots q-1$.
Let $S_i \subset \{0, \dots, n-1\}$ be such that $v_i$ has a semi-edge of colour $s$ if and only if $s \in S_i$.
Let $B_{i,j}\subset \{0, \dots, n-1\}$ be the set of colours of the edges between the vertices $v_i$ and $v_j$ (with $i<j$) that are not in the distinguished walk
and
let $\Phi \in \mathcal{M}thcal{F}(\mathcal{M})$ be a base flag of $\mathcal{M}thcal{M}$ such that $\Phi$ projects to $v_1$ in $T(\mathcal{M}thcal{M})$.
Then, the automorphism group of $\mathcal{M}$ is generated by the union of the sets
$$\{\alpha_{a_1, a_2, \dots, a_i, s, a_i, a_{i-1}, \dots, a_1} \mid i=1, \dots, k-1, s \in S_i \},$$
and
$$\{ \alpha_{a_1, a_2, \dots, a_i, b, a_j, a_{j-1}, \dots, a_1} \mid i,j \in \{1, \dots, k-1\}, i<j, b \in B_{i,j} \}.$$
\end{theorem}
We note that, in general, a set of generators of $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ obtained from Theorem~\ref{auto} can be reduced since
there might be more than one element of $G(\mathcal{W})$ representing the same automorphism.
For example, the closed walk $w$ through an edge of colour $2$, then a $0$-semi-edge and finally a 2-edge corresponds to the element $r_2r_0r_2 = r_0$ of $\mathcal{M}thrm{Mon}(\mathcal{M})$. Hence, the group generator induced by the walk $w$ is the same as that induced by the closed walk consisting only of the semi-edge of colour $0$.
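To illustrate how the distinguished generators of Theorem~\ref{auto} are read off from the data of a distinguished walk, the following Python sketch (a hypothetical helper, not part of the original construction; the input data and the indexing follow the statement of the theorem literally) lists the index words $a_1,\dots,a_i,s,a_i,\dots,a_1$ and $a_1,\dots,a_i,b,a_j,\dots,a_1$.
\begin{verbatim}
# Hypothetical helper: list the index words of Theorem "auto" from the data of
# a distinguished walk.  'a' is the list [a_1, ..., a_{q-1}] of edge colours
# along the walk, 'S' maps an index i to the set S_i of semi-edge colours,
# and 'B' maps a pair (i, j), i < j, to the set B_{i,j} of off-walk colours.

def semi_edge_words(a, S):
    """Words a_1, ..., a_i, s, a_i, ..., a_1 with s in S_i."""
    words = []
    for i in range(1, len(a) + 1):          # prefix a_1, ..., a_i
        prefix = a[:i]
        for s in sorted(S.get(i, set())):
            words.append(prefix + [s] + prefix[::-1])
    return words

def cross_edge_words(a, B):
    """Words a_1, ..., a_i, b, a_j, ..., a_1 with b in B_{i,j}."""
    words = []
    for (i, j), colours in B.items():       # i < j by convention
        for b in sorted(colours):
            words.append(a[:i] + [b] + a[:j][::-1])
    return words

# Illustrative (made-up) data: a = [0, 2], with 1 in S_1 and 3 in B_{1,2}.
a, S, B = [0, 2], {1: {1}}, {(1, 2): {3}}
print(semi_edge_words(a, S))                # [[0, 1, 0]]
print(cross_edge_words(a, B))               # [[0, 3, 2, 0]]
\end{verbatim}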
The following two corollaries give a set of generators for 2- and 3-orbit polytopes, respectively, in a given class. The notation follows that of Theorem~\ref{auto}, where if the indices of some $\alpha$ do not fit into the parameters of the set, we understand that such automorphism is the identity.
\begin{corollary}
{\bf \cite{tesisisa}}
Let $\mathcal{M}thcal{M}$ be a 2-orbit $(n-1)$-maniplex in class $2_I$, for some $I \subset \{0, \dots, n-1\}$ and let $j_0 \notin I$. Then
$$ \big\{ \alpha_i, \alpha_{j_0, i, j_0}, \alpha_{k,j_0} \mid i \in I, \ k \notin I \big\}$$
is a generating set for $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$.
\end{corollary}
\begin{corollary}
Let $\mathcal{M}thcal{M}$ be a 3-orbit $(n-1)$-maniplex.
\begin{enumerate}
\item If $\mathcal{M}thcal{M}$ is in class $3^{i}$, for some $i \in \{1, \dots, n-2\}$, then
$$ \big\{ \alpha_j, \alpha_{i,i-1,i+1,i}, \alpha_{i,i+1,i+2,i+1,i}, \alpha_{i,i+1,i,i+1,i} \mid j \in \{0, \dots, n-1\} \setminus \{i\} \big\} $$
is a generating set for $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$.
\item If $\mathcal{M}thcal{M}$ is in class $3^{i,i+1}$, for some $i \in \{0, \dots, n-2\}$, then
$$\big\{ \alpha_j, \alpha_{i,i-1,i}, \alpha_{i,i+1,i+2,i+1,i}, \alpha_{i,i+1,i,i+1,i} \mid j \in \{0, \dots, n-1\} \setminus \{i\} \big\}$$
is a generating set for $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$.
\end{enumerate}
\end{corollary}
\section{Oriented and orientable maniplexes}
\label{sec:orient}
A maniplex $\mathcal{M}thcal{M}$ is said to be {\em orientable} if its flag graph $\mathcal{M}thcal{G_M}$ is a bipartite graph.
Since a subgraph of a bipartite graph is also bipartite, all the sections of an orientable maniplex are orientable maniplexes themselves.
An {\em orientation} of an orientable maniplex is a colouring of the parts of $\mathcal{M}thcal{G_M}$, with exactly two colours, say black and white. An {\em oriented maniplex} is an orientable maniplex with a given orientation.
Note that any oriented maniplex $\mathcal{M}thcal{M}$ has an enantiomorphic maniplex (or mirror image) $\mathcal{M}thcal{M}^{en}$. One can think of the enantiomorphic form of an oriented maniplex simply as the orientable maniplex with the opposite orientation.
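Since orientability of $\mathcal{M}thcal{M}$ is, by definition, bipartiteness of the flag graph $\mathcal{M}thcal{G_M}$, it can be tested by an ordinary two-colouring; a minimal Python sketch is given below (it assumes, purely for illustration, that the flag graph is available as an adjacency list mapping each flag to its neighbours).
\begin{verbatim}
from collections import deque

def is_orientable(adj):
    """Return True iff the flag graph is bipartite (i.e. the maniplex is
    orientable).  adj maps each flag to the list of its adjacent flags,
    one neighbour per colour 0, ..., n-1."""
    colour = {}
    for start in adj:                    # loop over components, just in case
        if start in colour:
            continue
        colour[start] = 0                # paint the first flag "black"
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]   # opposite part: "white"
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False         # an odd closed walk: not orientable
    return True

# Toy example: two flags joined by a single edge give a bipartite flag graph.
print(is_orientable({0: [1], 1: [0]}))   # True
\end{verbatim}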
If the connection group $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ of $\mathcal{M}thcal{M}$ is generated by $r_0, r_1, \dots, r_{n-1}$,
for each $i \in \{0, \dots, n-2\}$ let us define the element $t_i:= r_{n-1}r_i \in \mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$. Then $t_i^2=1$ for $i=0, \dots, n-3$. The subgroup $\mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M})$ of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ generated by $t_0, \dots, t_{n-2}$ is called the {\em even connection group of $\mathcal{M}thcal{M}$}. Note that $\mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M})$ has index at most two in $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$.
In fact $(\mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M}))^{r_{n-1}} = \mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M}^{en})$.
It should be clear then that any maniplex and its enantiomorphic form are in fact isomorphic as maniplexes.
An {\em oriented flag di-graph} $\mathcal{M}thcal{G_M}^+$ of an oriented maniplex $\mathcal{M}thcal{M}$ is constructed in the following way.
The vertex set of $\mathcal{M}thcal{G_M}^+$ consists of one of the parts of the bipartition of $\mathcal{M}thcal{G_M}$. That is, the black (or white) vertices of the flag graph of $\mathcal{M}thcal{M}$. The darts of $\mathcal{M}thcal{G_M}^+$ will be the 2-arcs of $\mathcal{M}thcal{G_M}$ of colours $n-1,i$, for each $i \in\{0,\dots, n-2\}$. We then identify two darts to obtain an edge if they have the same vertices, but go in opposite directions.
Note that for $i = 0, \dots, n-3$ and each flag $\Phi$ of $\mathcal{M}thcal{M}$, the 2-arc starting at $\Phi$ and with edges coloured $n-1$ and $i$ has the same end vertex as the 2-arc starting at $\Phi$ and with edges coloured $i$ and $n-1$.
Hence, all the darts corresponding to 2-arcs of colours ${n-1}$ and $i$, with $i=0, \dots, n-3$, will have both directions in $\mathcal{M}thcal{G_M}^+$, giving us, at each vertex, $n-2$ different edges.
On the other hand, the 2-arcs on edges of the two colours ${n-1}$ and ${n-2}$ will in general be directed darts of $\mathcal{M}thcal{G_M}^+$. An example of an oriented flag di-graph is shown in Figure~\ref{orientedgraphflag}.
We note that the oriented flag di-graph of $\mathcal{M}thcal{M}^{en}$ can be obtained from $\mathcal{M}thcal{G_M}^+$ by reversing the directions of the $n-2, n-1$ darts.
\begin{figure}
\caption{The oriented flag di-graph of an oriented cuboctahedron from its flag graph.}
\label{orientedgraphflag}
\end{figure}
Note that the 2-arcs with edges of colours $n-1$ and $i$ correspond to the generators $t_i$ of $\mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M})$. In fact, as $\mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M})$ consists precisely of the even words of $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$, a maniplex is orientable if and only if the index of $\mathcal{M}thrm{Mon}^+(\mathcal{M}thcal{M})$ in $\mathcal{M}thrm{Mon}(\mathcal{M}thcal{M})$ is exactly two.
We can then colour the edges and darts of $\mathcal{M}thcal{G_M}^+$ with the elements $t_i$.
The fact that $t_i^2=1$ for every $i=0, \dots, n-3$ indeed implies that the edges of $\mathcal{M}thcal{G_M}^+$ are labelled by these first $n-2$ elements, while the darts are labelled by $t_{n-2}$.
We can see now that for each $i \in \{0, \dots, n-2\}$, the $i$-faces of $\mathcal{M}thcal{M}$ are in correspondence with the connected components of the subgraph of $\mathcal{M}thcal{G_M}^+$ with edges of colours $\{0, \dots, n-2\}\setminus \{i\}$. To identify the facets of $\mathcal{M}thcal{M}$ as subgraphs of $\mathcal{M}thcal{G_M}^+$, we first consider some oriented paths on the edges of $\mathcal{M}thcal{G_M}^+$. We shall say that an oriented path on the edges of $\mathcal{M}thcal{G_M}^+$ is {\em facet-admissible}
if no two darts of colour $t_{n-2}$ are consecutive on the path. Then, two vertices of $\mathcal{M}thcal{G_M}^+$ are in the same facet of $\mathcal{M}thcal{M}$ if there exists a facet-admissible oriented path from one of the vertices to the other.
For the remainder of this section, by a maniplex we shall mean an oriented maniplex, with one part of the flags coloured black and the other white.
An {\em orientation preserving automorphism} of an (oriented) maniplex $\mathcal{M}thcal{M}$ is an automorphism of $\mathcal{M}thcal{M}$ that sends black flags to black flags and white flags to white flags. An {\em orientation reversing automorphism} is an automorphism that interchanges black and white flags.
A {\em reflection} is an orientation reversing involutory automorphism. The group of orientation preserving automorphisms of $\mathcal{M}thcal{M}$ shall be denoted by $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$.
The orientation preserving automorphism group $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$ of a maniplex $\mathcal{M}thcal{M}$ is a subgroup of index at most two in $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$. In fact, the index is exactly two if and only if $\mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$ contains an orientation reversing automorphism. Note that in this case, there exists an orientation reversing automorphism that sends $\mathcal{M}thcal{M}$ to its enantiomorphic form $\mathcal{M}thcal{M}^{en}$.
Pisanski~\cite{tomo} defines a maniplex to be {\em chiral-a-la-Conway} if $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M}) = \mathcal{M}thrm{Aut}(\mathcal{M}thcal{M})$.
If a maniplex $\mathcal{M}thcal{M}$ is chiral-a-la-Conway, then its enantiomorphic maniplex $\mathcal{M}thcal{M}^{en}$ is isomorphic to $\mathcal{M}thcal{M}$, but there is no automorphism of the maniplex sending one to the other.
It follows from the definition that $\mathcal{M}thcal{M}$ is chiral-a-la-Conway if and only if the automorphisms of $\mathcal{M}thcal{M}$ preserve the bipartition of $\mathcal{M}thcal{G_M}$ and therefore we have the following proposition.
\begin{proposition}
\label{oddcycles}
Let $\mathcal{M}thcal{M}$ be an oriented maniplex and let $T(\mathcal{M}thcal{M})$ be its symmetry type graph. Then, $\mathcal{M}thcal{M}$ is chiral-a-la-Conway if and only if $T(\mathcal{M}thcal{M})$ has no odd cycles.
\end{proposition}
Similarly as before, the orientation preserving automorphisms of a maniplex $\mathcal{M}thcal{M}$ correspond to colour preserving automorphisms of the bipartite graph $\mathcal{M}thcal{G_M}$ that preserve the two parts. But these correspond to colour preserving automorphisms of the di-graph $\mathcal{M}thcal{G_M}^+$, implying that $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M}) \cong \mathcal{M}thrm{Aut}_p(\mathcal{M}thcal{G_M}^+)$.
Note that the action of $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$ on the set $\mathcal{M}thcal{B(\mathcal{M}thcal{M})}$ of all the black flags of $\mathcal{M}thcal{M}$ is semiregular, and
hence, the action of $\mathcal{M}thrm{Aut}_p(\mathcal{M}thcal{G_M}^+)$ on the vertices of $\mathcal{M}thcal{G_M}^+$ is semiregular.
An oriented maniplex $\mathcal{M}thcal{M}$ is said to be {\em rotary (or orientably regular)} if the action of $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$ is regular on $\mathcal{M}thcal{B}(\mathcal{M}thcal{M})$.
Equivalently, $\mathcal{M}thcal{M}$ is rotary if the action of $\mathcal{M}thrm{Aut}_p(\mathcal{M}thcal{G_M}^+)$ is regular on its vertices.
We say that $\mathcal{M}thcal{M}$ is {\em orientably $k$-orbit} if the action of $\mathcal{M}thrm{Aut}_p(\mathcal{M}thcal{G_M}^+)$ has exactly $k$ orbits on the vertices of $\mathcal{M}thcal{G_M}^+$. The following lemma is straightforward.
\begin{lemma}
\label{orientablekorbit}
Let $\mathcal{M}thcal{M}$ be a chiral-a-la-Conway maniplex. Then $T(\mathcal{M}thcal{M})$ has no semi-edges and if $\mathcal{M}thcal{M}$ is an orientably $k$-orbit maniplex, then $\mathcal{M}thcal{M}$ is a $2k$-orbit maniplex.
\end{lemma}
\subsection{Oriented symmetry type di-graphs of oriented maniplexes}
We now consider the semiregular action of $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$ on the vertices of $\mathcal{M}thcal{G_M}^+$, and let ${\mathcal{M}thcal B}= {\mathcal{M}thcal O}rb^+$ be the partition of the vertex set of $\mathcal{M}thcal{G_M}^+$ into the orbits with respect to the action of $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$. (As before, since the action is semiregular, all orbits are of the same size.) The {\em oriented symmetry type di-graph} $T^+(\mathcal{M}thcal{M})$ of $\mathcal{M}thcal{M}$ is the quotient colour di-graph with respect to ${\mathcal{M}thcal O}rb^+$. Similarly as before, if $\mathcal{M}thcal{M}$ is rotary, then the oriented symmetry type di-graph of $\mathcal{M}thcal{M}$ consists of one vertex with one loop and $n-2$ semi-edges.
Note that for oriented symmetry type di-graphs we shall not identify two darts with the same vertices, but different directions.
If we now turn our attention to oriented symmetry type di-graphs with two vertices, one can see that for each $I\subset \{0, \dots, n-2\}$, there is an oriented symmetry type di-graph with two vertices having semi-edges (or loops) of colours $i$ at each vertex for every $i \in I$, and having edges (or both darts) of colour $j$, for each $j \notin I$. An oriented maniplex with such an oriented symmetry type di-graph shall be said to be in class $2_I^+$. Hence, there are $2^{n-2}-1$ classes of oriented 2-orbit $(n-1)$-maniplexes.
Note that if $\mathcal{M}$ is a $k$-orbit maniplex, then $T^+(\mathcal{M}thcal{M})$ has either $k$ or $\frac{k}{2}$ vertices.
The next result follows from Proposition~\ref{oddcycles} and Lemma~\ref{orientablekorbit}.
\begin{theorem}
Let $\mathcal{M}thcal{M}$ be an oriented maniplex. Then, $T(\mathcal{M}thcal{M})$ and $T^+(\mathcal{M}thcal{M})$ have the same number of vertices if and only if $T(\mathcal{M}thcal{M})$ has a semi-edge or an odd cycle.
\end{theorem}
It is not difficult to see that if we consider an oriented symmetry type di-graph $T^+$ with an (undirected) hamiltonian path, then the construction of Section~\ref{Gen-autG} gives a way to construct a generating set of the closed walks based at the starting vertex of the path (and Lemma~\ref{generatingwalks} implies that this set actually generates). Hence, one can find generators for the group of orientation preserving automorphisms of an oriented maniplex, provided that it has an (undirected) hamiltonian path. In particular, we have the following theorem.
\begin{theorem}
Let $\mathcal{M}thcal{M}$ be an oriented 2-orbit $(n-1)$-maniplex in class $2_I^+$, for some $I\subset \{0, \dots, n-2\}$. Then
\begin{enumerate}
\item If $n-2 \in I$, let $j_0 \notin I$, then
$$\big\{
\alpha_{i,n-1}, \alpha_{j_0, n-1, i, j_0}, \alpha_{k, n-1, j_0, n-1} \mid i \in I, \ k \notin I
\big\}$$
is a generating set for $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$.
\item If $n-2 \notin I$ but there exists $j_0 \notin I$, $j_0 \neq n-2$, then
$$\big\{
\alpha_{i,n-1}, \alpha_{j_0, n-1, i, j_0}, \alpha_{k, n-1, j_0, n-1}, \alpha_{n-1, n-2, j_0, n-1}\mid i \in I, \ k \notin I
\big\}$$
is a generating set for $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$.
\item If $I =\{0, \dots, n-3\}$, then
$$\big\{
\alpha_{i,n-1}, \alpha_{n-2, n-1, i, n-1, n-2}, \alpha_{n-1, n-2, n-1, n-2} \mid i \in I
\big\}$$
is a generating set for $\mathcal{M}thrm{Aut}^+(\mathcal{M}thcal{M})$.
\end{enumerate}
\end{theorem}
Given an oriented maniplex $\mathcal{M}$ and its symmetry type graph $T(\mathcal{M})$, we shall say that $T^+(\mathcal{M})$ is the associated oriented symmetry type di-graph of $T(\mathcal{M}thcal{M})$. Hence, given a symmetry type graph $T$ one can find its associated oriented symmetry type di-graph $T^+$ by erasing all edges of $T$ and replacing them by the $n-1,i$ paths of $T$. Note that this replacement of the edges may disconnect the new graph. If that is the case, we take $T^+$ to be one of the connected components.
\subsection{Oriented symmetry type graphs with three vertices}
In a similar way as one can classify maniplexes with a small number of flag orbits (under the action of the automorphism group of the maniplex) in terms of their symmetry type graph, one can classify oriented maniplexes with a small number of flag orbits (under the action of the orientation preserving automorphism group of the maniplex) in terms of their oriented symmetry type di-graph.
Let $\mathcal{M}thcal{M}$ be a $6$-orbit chiral-a-la-Conway $(n-1)$-maniplex, with $n\geq4$.
Let $T(\mathcal{M}thcal{M})$ be its symmetry type graph and $T^{+}(\mathcal{M}thcal{M})$ be its oriented symmetry type di-graph.
Recall that $T(\mathcal{M}thcal{M})$ is a graph with 6 vertices and no semi-edges
or odd cycles, and that $T^{+}(\mathcal{M}thcal{M})$ is a di-graph with 3
vertices.
Let $V=\{v_{1},v_{2},..,v_{6}\}$ be the vertex set of $T(\mathcal{M}thcal{M})$.
We may label the vertices of $T(\mathcal{M}thcal{M})$ in such a way that
the edges $(v_{1},v_{2})$, $(v_{3},v_{4})$, $(v_{5},v_{6})$ are
coloured with the colour $(n-1)$, and that no two vertices of the
set $\{v_{1},v_{3},v_{5}\}$ are adjacent. Let $\mathcal{W}=\{w_{1},w_{3},w_{5}\}$
be the vertex set of $T^{+}(\mathcal{M}thcal{M})$. Each $w_{i}\in \mathcal{W}$ corresponds
to the vertex $v_{i}\in V$, $i\in\{1,3,5\}$.
In what follows, in the same way as in Section~\ref{sec:stg}, $(v_{i},v_{j})_{k}$
denotes the $k$-coloured edge joining the vertices $v_{i}$ and
$v_{j}$, $v_{i},v_{j}\in V$, $k\in\{0,1,...,n-1\}$; and $(w_{i},w_{j})_{k}$
denotes the $(k,n-1)$-coloured edge joining the vertices $w_{i}$
and $w_{j}$, $w_{i},w_{j}\in \mathcal{W}$ and $k\in\{0,1,...,n-3\}$.
Since there are no semi-edges in $T(\mathcal{M}thcal{M})$, for each colour
$i\in\{0,...,n-3\}$ there is one edge (and one semi-edge) of colour
$(i,n-1)$ in $T^{+}(\mathcal{M}thcal{M})$ if and only if the 2-factor of
$T(\mathcal{M}thcal{M})$ of colours $i$ and $(n-1)$ consists of one 4-cycle
and one 2-cycle of alternating colours. Likewise, there are three
semi-edges of colour $(i,n-1)$ in $T^{+}(\mathcal{M}thcal{M})$ if and only
if the 2-factor of $T(\mathcal{M}thcal{M})$ of colours $i$ and $(n-1)$
consists of three 2-cycles.
It is straightforward to see that there
are two consecutive edges of colour $(i,n-1)$ and $(j,n-1),$ $i\neq j$,
in $T^{+}(\mathcal{M}thcal{M})$ if and only if the 2-factor of colours $i$
and $j$ consists of a single 6-cycle. It follows that if there are
two consecutive edges of colour $(i,n-1)$ and $(j,n-1)$ in $T^{+}(\mathcal{M}thcal{M})$,
then $\left|i-j\right|<2$.
Notice that the possible 2-factors of colour $(n-1)$ and $(n-2)$
in $T(\mathcal{M}thcal{M})$ are either a single 6-cycle of alternating colours,
a 4-cycle along with a 2-cycle, or three separate 2-cycles. Hence,
the darts in $T^{+}(\mathcal{M}thcal{M})$ are arranged in either a 3-cycle,
a 2-cycle along with a loop, or three separate loops. We proceed case
by case.
Consider the case when there are three loops in $T^{+}(\mathcal{M}thcal{M})$.
Since oriented symmetry type di-graphs are connected, without loss
of generality $(w_{1},w_{3})_{i}$ and $(w_{3},w_{5})_{i+1}$
must be edges of $T^{+}(\mathcal{M}thcal{M})$. We may suppose that $(w_{1},w_{3})_{i}$
is the only edge joining $w_{1}$ and $w_{3}$. If there is a third
edge in $T^{+}(\mathcal{M}thcal{M})$, then it is necessarily $(w_{3},w_{5})_{i-1}$.
Note that, since the edges coloured by $(n-1)$ and $(n-2)$ do not
lie on a 6-cycle in $T(\mathcal{M}thcal{M})$, there are no restrictions on
the semi-edges of $T^{+}(\mathcal{M}thcal{M})$. Thus, there is one oriented
symmetry type di-graph for each pair of colours $i$ and $i+1$, with
$i\in\{0,...,n-3\}$ and one for each triple $i-1$, $i$ and $i+1$,
$i\in\{1,...,n-3\}$. Therefore, there are $2n-7$ oriented symmetry
type di-graphs with 3 loops.
Consider the case when $T^{+}(\mathcal{M}thcal{M})$ has only one loop. We
may suppose that the loop is at $w_{5}$ and that the vertices $w_{1}$
and $w_{3}$ are joined by darts. This implies that $(v_{1},v_{4})_{(n-2)}$,
$(v_{2},v_{3})_{(n-2)}$ and $(v_{5},v_{6})_{(n-2)}$ are edges of
$T(\mathcal{M}thcal{M})$. As $T^{+}(\mathcal{M}thcal{M})$ is connected, there must
be an edge joining $w_{3}$ and $w_{5}$ of colour $(i,n-1).$
Necessarily
$i=n-3$, since the edges $(v_{1},v_{2})_{i}$, $(v_{2},v_{3})_{(n-2)}$,
$(v_{3},v_{6})_{i}$, $(v_{6},v_{5})_{n-2}$, $(v_{5},v_{4})_{i}$,
$(v_{4},v_{1})_{n-2}$ form a 6-cycle in $T(\mathcal{M}thcal{M})$. Notice
that there are no restrictions on the semi-edges of $T^{+}(\mathcal{M}thcal{M})$.
Hence, there are exactly two oriented symmetry type di-graphs with
a single loop: one with a single edge of colour $(n-3,n-1)$ between $w_3$ and $w_5$, and
one with two edges of colours $(n-3,n-1)$ and $(n-4,n-1)$ between them.
Consider the case when the darts in $T^{+}(\mathcal{M}thcal{M})$ are arranged
in a 3-cycle. It is clear that the 2-factor of $T(\mathcal{M}thcal{M})$ of
colours $(n-2)$ and $(n-1)$ is a single 6-cycle. Therefore, if $i\in\{0,...,n-4\}$,
the 2-factor of $T(\mathcal{M}thcal{M})$ of colours $i$ and $(n-1)$ cannot
consist of three 2-cycles, as this implies the existence of a 6-cycle
of alternating colours $i$ and $(n-2)$ such that $\left|i-(n-2)\right|\geq2$.
That is, $T^{+}(\mathcal{M}thcal{M})$ has one edge (and one semi-edge) of
colour $(i,n-1)$ for each $i\in\{0,...,n-4\}$ and either one edge
and a semi-edge, or three semi-edges for colour $(n-3,n-1).$ Note
that if $n\geq7$, the set $\{0,...,n-4\}$ has more than three elements
and thus all edges of colour $(i,n-1)$ in $T^{+}(\mathcal{M}thcal{M})$, $i\in\{0,...,n-4\}$,
must be joining the same pair of vertices. Otherwise, there would
be at least two consecutive edges of colours $(i,n-1)$ and $(j,n-1)$,
with $\left|i-j\right|\geq2$. Figure~\ref{3orient} below shows the only four
possible oriented symmetry type di-graphs with a 3-cycle of darts
and at least two consecutive edges. Two correspond to 4-maniplexes,
one to 3-maniplexes and one to 5-maniplexes. These will be treated as
special cases.
\begin{figure}
\caption{Oriented symmetry type di-graphs with 3 vertices and one directed 3-cycle, of 3-, 4- and 5-maniplexes.}
\label{3orient}
\end{figure}
We may suppose that $T^{+}(\mathcal{M}thcal{M})$ has no consecutive edges.
It follows that there are exactly two oriented symmetry type di-graphs
with a 3-cycle of darts: one with an edge joining the same pair of
vertices for each colour $i\in\{0,...,n-3\},$ and one with three
semi-edges of colour $(n-3,n-1)$ and an edge joining the same pair
of vertices for each colour $i\in\{0,...,n-4\}$.
Considering all the cases above, there are $(n-3)+(n-4)+2+2=2n-3$
oriented symmetry type graphs with three vertices for oriented maniplexes of
rank $n\geq6$; $2n-2=6$ for oriented maniplexes of rank 3; $2n-1=9$ for oriented maniplexes
of rank 4; and $2n-2=10$ for oriented maniplexes of rank 5.
\section*{Acknowledgments}
This work was done with the support of ``Programa de Apoyo a Proyectos de Investigaci\'on e Innovaci\'on Tecnol\'ogica (PAPIIT) de la UNAM, IB101412 {\em Grupos y gr\'aficas asociados a politopos abstractos}''. The second author was partially supported by the Slovenian Research Agency (ARRS) and the third author was partially supported by CONACyT under project 166951 and by the program ``Para las mujeres en la ciencia L'Oreal-UNESCO-AMC 2012''.
\end{document}
\begin{document}
\title{Augmented Lagrangian Optimization \\under Fixed-Point Arithmetic}
\author{Yan~Zhang,~\IEEEmembership{Student Member,~IEEE,}
Michael~M.~Zavlanos,~\IEEEmembership{Member,~IEEE}
\thanks{The authors are with the Department
of Mechanical Engineering and Material Science, Duke University, Durham, NC, 27708, USA e-mail: [email protected]; [email protected].}}
\maketitle
\begin{abstract}
In this paper, we propose an inexact Augmented Lagrangian Method (ALM) for the optimization of convex and nonsmooth objective functions subject to linear equality constraints and box constraints where errors are due to fixed-point data. To prevent data overflow we also introduce a projection operation in the multiplier update. We analyze theoretically the proposed algorithm and provide convergence rate results and bounds on the accuracy of the optimal solution. Since iterative methods are often needed to solve the primal subproblem in ALM, we also propose an early stopping criterion that is simple to implement on embedded platforms, can be used for problems that are not strongly convex, and guarantees the precision of the primal update. To the best of our knowledge, this is the first fixed-point ALM that can handle non-smooth problems, data overflow, and can efficiently and systematically utilize iterative solvers in the primal update. Numerical simulation studies on a utility maximization problem are presented that illustrate the proposed method.
\end{abstract}
\begin{IEEEkeywords}
convex optimization, Augmented Lagrangian Method, embedded systems, fixed-point arithmetic.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\label{sec:intro}
Embedded computers, such as FPGAs (Field-Programmable Gate Arrays), are typically low-cost and low-power and perform fast computations; for these reasons they have long been used for the control of systems with fast dynamics and power limitations, e.g., automotive, aerospace, medical, and robotics. While embedded computers have been primarily used for low-level control, they have recently also been suggested to obtain real-time solutions to more complex optimization problems \cite{jerez2014embedded,richter2017resource}.
The main challenges in implementing advanced optimization algorithms on resource-limited embedded devices are providing complexity certifications and designing the data precision to control the solution accuracy and the dynamic range to avoid overflow.
While recent methods, such as \cite{jerez2014embedded,richter2017resource}, address these challenges and provide theoretical guarantees, they do so for quadratic problems. In this paper, we provide such guarantees for general convex problems. Specifically, we propose an Augmented Lagrangian Method (ALM) to solve problems of the form
\begin{equation}
\label{pb:problem}
\begin{split}
\min \quad f(x) \;\;\; \text{s.t.} \;\;\; Ax = b, \;\;\; x \in \mathcal{X}
\end{split}
\end{equation}
on embedded platforms under fixed-point arithmetic, where $x \in \mathbb{R}^n$, $A \in \mathbb{R}^{p \times n}$, $b \in \mathbb{R}^p$ and $f(x)$ is a scalar-valued function.
The set $\mathcal{X}$ is convex.
ALM falls in the class of first order methods, which have been demonstrated to be efficient in solving problem~\eqref{pb:problem} on embedded computers due to their simple operations and low memory requirements \cite{richter2017resource}.
Recent work on error analysis of inexact first-order methods is presented in \citet*{devolder2014first,patrinos2015dual,necoara2016iteration}. Specifically, \citet{devolder2014first} proposed a first-order inexact oracle to characterize the iteration complexity and suboptimality of the primal gradient and fast gradient methods. \citet{patrinos2015dual,necoara2016iteration} extend these results to the dual domain.
However, these analyses assume strong convexity of the objective functions. \citet*{nedelcu2014computational,necoara2017complexity} analyze the convergence of ALM using an inexact oracle for general convex problems. The inexactness comes from the approximate solution of the subproblems in ALM. Similar analysis has been conducted in \citet*{rockafellar1976monotone,eckstein2013practical,lan2016iteration}. However, none of the above works on ALM considers the error in the multiplier update under fixed-point arithmetic. Moreover, since no projection is used in the multiplier update, the above works cannot provide an upper bound on the multiplier iterates and therefore cannot avoid data overflow. Perhaps the most relevant work to the method proposed here is \citet*{jerez2014embedded}. Specifically, \cite{jerez2014embedded} analyzes the behavior of the Alternating Direction Method of Multipliers (ADMM), a variant of the ALM, on fixed-point platforms. However, the analysis in \cite{jerez2014embedded} can only be applied to quadratic objective functions. Moreover, to prevent data overflow, the proposed method needs to monitor the iteration history of the algorithm to estimate a bound on the multiplier iterates.
Compared to existing literature on inexact ALM, we propose a new inexact ALM that incorporates errors in both the primal and dual updates and contains a projection operation in the dual update. Assuming a uniform upper bound on the norm of the optimal multiplier is known, this projection step can guarantee no data overflow during the whole iteration history. To the best of our knowledge, this is the first work to provide such guarantees for general convex and non-smooth problems. Furthermore, we show that our proposed algorithm has $O(1/K)$ convergence rate and provide bounds on the achievable primal and dual suboptimality and infeasibility. In general, iterative solvers are needed to solve the subproblems in ALM but the theoretical complexity of such solvers is usually conservative. Therefore, we present a stopping criterion that allows us to terminate the primal iteration in the ALM early while guaranteeing the precision of the primal update. This stopping condition is simple to check on embedded platforms and can be used for problems that are not necessarily strongly convex.
We note that in this paper we do not provide a theoretical uniform upper bound on the optimal multiplier for general convex problems. Instead, our contribution is to develop a new projected ALM method that relies on such bounds to control data overflow on fixed-point platforms.
The rest of this paper is organized as follows. In Section~\ref{sec:prelim}, we formulate the problem and introduce necessary notations and lemmas needed to prove the main results. In Section~\ref{sec:converg}, we characterize the convergence rate of the algorithm and present bounds on the primal suboptimality and infeasibility of the solution. In Section~\ref{sec:fp}, we present the stopping criterion for the solution of the primal subproblem under fixed-point arithmetic. In Section~\ref{sec:sim}, we present simulations that verify the theoretical analysis in the previous sections. In Section~\ref{sec:conclude}, we conclude the paper.
\section{Preliminaries}
\label{sec:prelim}
We make the following assumptions on problem~\eqref{pb:problem}.
\begin{assumption}
\label{assum:prob_form}
The function $f(x)$ is convex and is not necessarily differentiable. The problem~\eqref{pb:problem} is feasible.
\end{assumption}
\begin{algorithm}[t]
\caption{Augmented Lagrangian Method}
\label{alg:exactAL}
\begin{algorithmic}[1]
\Require{Initialize $\lambda_0 \in \mathbb{R}^p$, $k=0$}
\While{$Ax_{k} \neq b$}
\State{$x_{k} = \arg \min_{x\in \mathcal{X}} L_\rho(x;\lambda_k)$}
\State{$\lambda_{k+1} = \lambda_k + \rho(Ax_{k} - b)$}
\Let{$k$}{$k + 1$}
\EndWhile
\end{algorithmic}
\end{algorithm}
The Lagrangian function of problem~\eqref{pb:problem} is defined as
$L(x; \lambda) = f(x) + \langle Ax - b, \lambda \rangle$ and the dual function is defined as
$ \Phi(\lambda) \triangleq \min_{x\in\mathcal{X}} L(x;\lambda)$, where $\lambda \in \mathbb{R}^p$ is the Lagrangian multiplier \citep*{ruszczynski2006nonlinear} and $\langle \cdot , \cdot \rangle$ is the inner product between two vectors. Then the dual problem associated with problem~\eqref{pb:problem} can be defined as
$\max_{\lambda \in \mathbb{R}^p} \Phi(\lambda)$.
Suppose $x^\star$ is an optimal solution of the primal problem~\eqref{pb:problem} and $\lambda^\star$ is an optimal solution of the dual problem. Then, we make the following assumption:
\begin{assumption}
\label{assum:strong_dual}
Strong duality holds for the problem~\eqref{pb:problem}. That is, $f(x^\star) = \Phi(\lambda^\star)$.
\end{assumption}
Assumption~\ref{assum:strong_dual} implies that $(x^\star, \lambda^\star)$ is a saddle point of the Lagrangian function $L(x;\lambda)$ \citep*{ruszczynski2006nonlinear}. That is, $\forall x \in \mathcal{X}, \lambda \in \mathbb{R}^p$,
\begin{equation}
\label{eq:saddle_point}
L(x^\star; \lambda) \leq L(x^\star; \lambda^\star) \leq L(x; \lambda^\star).
\end{equation}
The Augmented Lagrangian function of the primal problem~\eqref{pb:problem} is defined by
$ L_\rho(x; \lambda) = f(x) + \langle Ax-b, \lambda \rangle + \frac{\rho}{2}\| Ax - b \|^2 $,
where $\rho$ is the penalty parameter and $\|\cdot\|$ is the Euclidean norm of a vector. Moreover, we define the augmented dual function
$\Phi_\rho(\lambda) = \min_{x \in \mathcal{X}} L_\rho(x; \lambda)$. Note that $\Phi_\rho(\lambda)$ is always differentiable with respect to $\lambda$ and its gradient is $\nabla \Phi_\rho(\lambda) = A x_\lambda^\star - b$,
where $x_\lambda^\star = \arg\min_{x\in\mathcal{X}} L_\rho(x; \lambda)$. Moreover, $\nabla \Phi_\rho(\lambda)$ is Lipschitz continuous with constant $L_\Phi = \frac{1}{\rho}$ \citep*{rockafellar1976augmented}. The ALM can be viewed as a gradient ascent method on the multiplier $\lambda$ with step size $\frac{1}{L_\Phi} = \rho$. We present the ALM in Algorithm~\ref{alg:exactAL}. Discussion on the convergence of Algorithm~\ref{alg:exactAL} can be found in \cite{rockafellar1976augmented,ruszczynski2006nonlinear} and the references therein.
ALM converges faster than the dual method when the problem is not strongly convex, due to its smoothing effect on the dual objective function $\Phi(\lambda)$ \citep{rockafellar1976augmented}.
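For concreteness, the following Python sketch runs Algorithm~\ref{alg:exactAL} on a toy quadratic instance; the data are made up, and we take $\mathcal{X}=\mathbb{R}^n$ so that line 2 has a closed-form solution (the general case requires an inner solver, which is the subject of the following sections).
\begin{verbatim}
import numpy as np

# Toy instance (illustrative data only): f(x) = 0.5 x'Qx - c'x, constraint
# Ax = b, and X = R^n, so the primal subproblem of Algorithm 1 is a linear
# system in x.
n, p, rho = 4, 2, 1.0
rng = np.random.default_rng(0)
Q = np.eye(n)
c = rng.standard_normal(n)
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)

lam = np.zeros(p)
for k in range(200):
    # line 2: x_k = argmin_x f(x) + <Ax - b, lam> + (rho/2)||Ax - b||^2
    x = np.linalg.solve(Q + rho * A.T @ A, c - A.T @ lam + rho * A.T @ b)
    # line 3: dual gradient ascent on the multiplier with step size rho
    lam = lam + rho * (A @ x - b)

print("constraint violation:", np.linalg.norm(A @ x - b))
\end{verbatim}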
In practice, Algorithm~\ref{alg:exactAL} cannot be implemented exactly on a fixed-point platform. We present the modified ALM in Algorithm~\ref{alg:inexactAL} to include fixed-point arithmetic errors.
In Algorithm~\ref{alg:inexactAL}, $x^\star_k = \arg \min_{x \in \mathcal{X}} L_\rho(x; \lambda_k)$.
The effect of the fixed-point arithmetic is incorporated in the error terms $\epsilon_{in}^k$ and $\epsilon_{out}^k$.
Moreover, $K_{out}$ is the number of outer iterations of the ALM and $\frac{1}{L}$ is the step size used in the dual update, where $L$ is defined in Lemma~\ref{lem:inexact_oracle}. Finally, $D$ is a convex and compact set containing at least one optimal multiplier $\lambda^\star$ and $\Pi_D$ denotes the projection onto the set $D$.
This projection step is the major difference that makes the analysis in this paper different from other works on inexact ALM, e.g., \cite{nedelcu2014computational,necoara2017complexity}.
The set $D$ is predetermined before running the algorithm. We make the following assumption on $D$:
\begin{assumption}
\label{assum:choice_D}
The set $D$ is a box that contains $0$, $2\lambda^\star$ and $\lambda^\star + \mathbf{1}$, where $\mathbf{1}$ is a vector of appropriate dimension and its entries are all $1$.
\end{assumption}
The above choice of $D$ is discussed in more detail in Section~\ref{sec:converg}. Note that this set $D$ depends on a uniform bound on $\|\lambda^\star\|$ over a set of problem data. The methods proposed in \citet*{ruszczynski2006nonlinear,mangasarian1985computable,nedic2009approximate,devolder2012double} establish such bounds on $\|\lambda^\star\|$ for fixed problem parameters. On the other hand, \citet*{patrinos2014accelerated,richter2011towards} provide uniform bounds on $\|\lambda^\star\|$ assuming $ b \in \mathcal{B}$ in the constraints in problem~\eqref{pb:problem}. However, the methods in \cite{patrinos2014accelerated,richter2011towards} can only be applied to quadratic problems. For general problems considered in this paper, the interval analysis method in \citet*{hansen1993bounds} can be used to estimate a uniform bound on $\|\lambda^\star\|$. However, the interval analysis method requires restrictive assumptions on the set of problem data and gives impractical bounds \citep*{jerez2015low}. In practice, we can approximate these bounds using sampling and then apply an appropriate scaling factor. Our contribution in this paper is to develop a new projected ALM method that employs such bounds to control data overflow on fixed-point platforms.
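As an illustration of the dual step of Algorithm~\ref{alg:inexactAL}, the following Python sketch implements line 3 for a box-shaped $D$; the error term and the box limits are placeholders chosen for illustration, not values prescribed by the method.
\begin{verbatim}
import numpy as np

def project_box(v, lo, hi):
    """Projection onto the box D = [lo, hi] (elementwise clipping)."""
    return np.minimum(np.maximum(v, lo), hi)

def dual_update(lam, x_tilde, A, b, rho, lo, hi, eps_out=0.0):
    """Line 3 of Algorithm 2: projected multiplier step with step size
    1/L = rho/2; eps_out models the fixed-point rounding error."""
    L = 2.0 / rho
    return project_box(lam + (1.0 / L) * (A @ x_tilde - b + eps_out), lo, hi)
\end{verbatim}
The clipping step is precisely what keeps the multiplier iterates inside the range representable by the chosen fixed-point word.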
\section{Convergence Analysis}
\label{sec:converg}
\begin{algorithm}[t]
\caption{Augmented Lagrangian Method under inexactness}
\label{alg:inexactAL}
\begin{algorithmic}[1]
\Require{Initialize $\lambda_0 \in \mathbb{R}^p$, $k=0$}
\While{$k \leq K_{out}$}
\State{$\tilde{x}_k \approx \arg \min_{x \in \mathcal{X}} L_\rho(x; \lambda_k)$ so that:
\setlength{\belowdisplayskip}{10pt}
\setlength{\abovedisplayskip}{10pt}
\begin{equation*}
L_\rho(\tilde{x}_k, \lambda_k) - L_\rho(x^\star_k, \lambda_k) \leq \epsilon_{in}^k
\end{equation*}}
\State{$\lambda_{k+1} = \Pi_{D}[\lambda_k + \frac{1}{L}(A \tilde{x}_k - b + \epsilon_{out}^k)]$}
\Let{$k$}{$k + 1$}
\EndWhile
\end{algorithmic}
\end{algorithm}
In this section, we show convergence of Algorithm~\ref{alg:inexactAL} to a neighborhood of the optimal dual and primal objective value. First, we make some necessary assumptions on the boundedness of errors appearing in the algorithm.
\begin{assumption}
\label{assum:bound_var}
At every iteration of Algorithm~\ref{alg:inexactAL}, the errors are uniformly bounded, i.e.,
\begin{equation*}
\label{eq:bd_epsilon}
0 \leq \epsilon_{in}^k \leq B_{in}, \quad \|\epsilon_{out}^k\| \leq B_{out}, \text{ for all } k.
\end{equation*}
\end{assumption}
This assumption is satisfied by selecting appropriate data precision and subproblem solver parameters. Specifically, $\epsilon_{in}^k \geq 0$ if $\tilde{x}_k$ is always feasible, that is, $\tilde{x}_k \in \mathcal{X}$, which is possible if, e.g., the projected gradient method is used to solve the subproblem in line 2 in Algorithm~\ref{alg:inexactAL}.
Due to
the projection operation in line 3 of Algorithm~\ref{alg:inexactAL}, we have that for all $\lambda_1, \lambda_2 \in D_{\delta}$, $\|\lambda_1 - \lambda_2\| \leq B_{\lambda}$,
where $D_\delta = D\cup\{\lambda_\delta \in \mathbb{R}^p : \lambda_\delta = \lambda + \epsilon_{out}^k, \text{ for all }\lambda \in D\}$. Since $D$ is compact and $\|\epsilon_{out}^k\|$ is bounded according to Assumption~\ref{assum:bound_var}, $D_\delta$ is also compact. $D_\delta$ is only introduced for the analysis. In practice, we only need to compute the size of $D$ to implement Algorithm~\ref{alg:inexactAL}.
\subsection{Inexact Oracle}
Consider the concave function $\Phi_\rho(\lambda)$ with $L_\Phi$-Lipschitz continuous gradient. For any $\lambda_1$ and $\lambda_2 \in \mathbb{R}^p$, we have
$0 \geq \Phi_\rho(\lambda_2) - [\Phi_\rho(\lambda_1) + \langle \nabla\Phi_\rho(\lambda_1), \lambda_2 - \lambda_1 \rangle] \geq -\frac{L_\Phi}{2}\|\lambda_1 - \lambda_2\|^2$.
Recall the expression of $\nabla \Phi_\rho(\lambda)$.
Since step 2 of Algorithm~\ref{alg:inexactAL} can only be solved approximately, $\nabla \Phi_\rho(\lambda)$ can only be evaluated inexactly.
Therefore, the above inequalities cannot be satisfied exactly.
We extend the results in \cite{devolder2014first, necoara2017complexity} and propose the following inexact oracle to include the effect of $\epsilon_{out}^k$.
\begin{lemma}
\label{lem:inexact_oracle}
(Inexact Oracle) Let assumptions~\ref{assum:prob_form}, \ref{assum:strong_dual} and \ref{assum:bound_var} hold. Moreover, consider the approximations $\Phi_{\delta,L}(\lambda_k) = L_\rho(\tilde{x}_k;\lambda_k) + B_{out}B_{\lambda} $ to $\Phi_\rho(\lambda_k)$ and $ s_{\delta, L}(\lambda_k) = A \tilde{x}_k - b + \epsilon_{out}^k $
to $\nabla \Phi_\rho(\lambda_k)$. Then these approximations constitute a $(\delta, L)$ inexact oracle to the concave function $\Phi_\rho(\lambda)$ in the sense that, for all $\lambda \in D_{\delta}$,
\begin{equation}
\label{eq:inexact_oracle}
\begin{split}
0 \geq \Phi_\rho(\lambda) - (\Phi_{\delta, L}(\lambda_k) + & \langle s_{\delta, L}(\lambda_k), \lambda - \lambda_k \rangle) \\
& \geq -\frac{L}{2}\|\lambda - \lambda_k\|^2 - \delta,
\end{split}
\end{equation}
where $L = 2L_\Phi = \frac{2}{\rho}$ and $\delta = 2B_{in} + 2B_{out}B_\lambda$.
\end{lemma}
\begin{proof}
The proof is similar to \cite{necoara2017complexity} and therefore is omitted.
\end{proof}
\subsection{Dual Suboptimality}
Showing the convergence of the dual variable is similar to showing the convergence of the projected gradient method with the inexact oracle used in \cite{patrinos2015dual}. Therefore, we omit the proof and directly summarize the dual suboptimality results of our algorithm. Specifically, we have the following inequality:
\begin{equation}
\label{proof:dual_optimality_5}
\begin{split}
\Phi_\rho(\lambda^\star) & - \Phi_\rho(\lambda_{k+1}) \\
& \leq \frac{L}{2}(\|\lambda_k - \lambda^\star\|^2 - \|\lambda_{k+1} - \lambda^\star\|^2) + \delta. \\
\end{split}
\end{equation}
Furthermore, similar to Theorem 5 in \cite{patrinos2015dual}, we have the following convergence result for the dual variable:
\begin{theorem}
Let assumptions~\ref{assum:prob_form}, \ref{assum:strong_dual} and \ref{assum:bound_var} hold. Define $\bar{\lambda}_K = \frac{1}{K}\sum_{k = 1}^{K}\lambda_k$. Then, we have
$\Phi_\rho(\lambda^\star) - \Phi_\rho(\bar{\lambda}_K) \leq \frac{L}{2K}\|\lambda_0 - \lambda^\star\|^2 + \delta$.
\end{theorem}
\subsection{Primal Infeasibility and Suboptimality}
Define the Lyapunov/Merit function
$\phi^k(\lambda) = \frac{L}{2}\|\lambda_k - \lambda\|^2 + \frac{1}{2}\|\lambda_{k-1} - \lambda^\star\|^2$.
Also define the residual function $r(x) = Ax - b$. We have the following intermediate result:
\begin{lemma}
\label{lem:intermediate_lemma}
Let assumptions~\ref{assum:prob_form},\ref{assum:strong_dual} and \ref{assum:bound_var} hold. For all $k \geq 1$, and for all $\lambda \in D$, we have that
\begin{equation}
\label{eq:intermediate_lemma}
f(\tilde{x}_k) - f(x^\star) + \langle \lambda, r(\tilde{x}_k) \rangle \leq \phi^k(\lambda) - \phi^{k+1}(\lambda) + E,
\end{equation}
where $E = (1+\frac{4}{L})B_\lambda B_{out} + (1 + \frac{4}{L})B_{in} + (\frac{1}{2} + \frac{1}{2L})B_{out}^2$.
\end{lemma}
\begin{proof}
We recall that $\tilde{x}_k$ is a suboptimal solution as defined in line 2 of Algorithm~\ref{alg:inexactAL} that satisfies
$ L_\rho(\tilde{x}_k;\lambda_k) - L_\rho(x_k^\star;\lambda_k) \leq \epsilon_{in}^k$.
Moreover, due to the optimality of $x_k^\star$, we also have that
$L_\rho(x_k^\star;\lambda_k) \leq L_\rho(x^\star;\lambda_k) = f(x^\star)$,
where $x^\star$ is the optimal solution to problem~\eqref{pb:problem}. The equality is because $r(x^\star) = Ax^\star - b = 0$. Combining these two inequalities, we have that
$L_\rho(\tilde{x}_k;\lambda_k) - f(x^\star) \leq \epsilon_{in}^k$.
Expanding $L_\rho(\tilde{x}_k;\lambda_k)$ and rearranging terms, we get
$ f(\tilde{x}_k) - f(x^\star) \leq -\langle \lambda_k, r(\tilde{x}_k) \rangle - \frac{\rho}{2}\|r(\tilde{x}_k)\|^2 + \epsilon_{in}^k$.
Adding $\langle \lambda, r(\tilde{x}_k) \rangle$ to both sides of the above inequality, we get
\setlength{\belowdisplayskip}{10pt}
\setlength{\abovedisplayskip}{10pt}
\begin{equation}
\label{eq:lemma6_5}
\begin{split}
f(\tilde{x}_k) - & f(x^\star) + \langle \lambda, r(\tilde{x}_k) \rangle \\ & \leq \langle \lambda - \lambda_k , r(\tilde{x}_k) \rangle - \frac{\rho}{2}\|r(\tilde{x}_k)\|^2 + \epsilon_{in}^k.
\end{split}
\end{equation}
In what follows, we show that the right hand side of \eqref{eq:lemma6_5} is upper bounded by $\phi^k(\lambda) - \phi^{k+1}(\lambda) + E$. First, we focus on the term $\langle \lambda - \lambda_k , r(\tilde{x}_k) \rangle$. For all $\lambda \in D$, we have that
$ \|\lambda_{k+1} - \lambda\|^2 = \|\Pi_D[\lambda_k + \frac{1}{L}(r(\tilde{x}_k) + \epsilon_{out}^k)] - \lambda\|^2 \nonumber \leq \|\lambda_k + \frac{1}{L}(r(\tilde{x}_k) + \epsilon_{out}^k) - \lambda\|^2 = \|\lambda_k - \lambda\|^2 + 2\langle \frac{1}{L}(r(\tilde{x}_k) + \epsilon_{out}^k), \lambda_k - \lambda \rangle + \frac{1}{L^2}\|r(\tilde{x}_k) + \epsilon_{out}^k\|^2 $,
where the inequality follows from the contraction of the projection onto a convex set. Rearranging terms in the above inequality and multiplying both sides by $\frac{L}{2}$, we have
$ \langle r(\tilde{x}_k), \lambda - \lambda_k \rangle \leq \frac{L}{2}(\|\lambda_k - \lambda\|^2 - \|\lambda_{k+1} - \lambda\|^2)
- \langle \epsilon_{out}^k, \lambda - \lambda_k \rangle
+ \frac{1}{2L}\|r(\tilde{x}_k)\|^2 + \frac{1}{L}\langle r(\tilde{x}_k), \epsilon_{out}^k \rangle + \frac{1}{2L}\|\epsilon_{out}^k\|^2$.
Applying Assumption~\ref{assum:bound_var} and the Cauchy-Schwartz inequality, we obtain
\begin{flalign}
\label{eq:lemma6_7}
& \langle r(\tilde{x}_k), \lambda - \lambda_k \rangle \leq
\frac{L}{2}(\|\lambda_k - \lambda\|^2 - \|\lambda_{k+1} - \lambda\|^2) & \\
& +B_{out}B_\lambda + \frac{1}{2L}\|r(\tilde{x}_k)\|^2 + \frac{1}{L}\langle r(\tilde{x}_k), \epsilon_{out}^k \rangle + \frac{1}{2L}B_{out}^2. & \nonumber
\end{flalign}
To upper bound the term $\langle r(\tilde{x}_k), \epsilon_{out}^k \rangle$ in \eqref{eq:lemma6_7}, first we add and subtract $\epsilon_{out}^k$ from $r(\tilde{x}_k)$, and rearrange terms to get
$ \langle r(\tilde{x}_k), \epsilon_{out}^k \rangle = \langle r(\tilde{x}_k) + \epsilon_{out}^k, \epsilon_{out}^k \rangle - \|\epsilon_{out}^k\|^2$.
Defining $\lambda_{\delta k} = \lambda_k + \epsilon_{out}^k$, recalling the definition of $s_{\delta,L}(\lambda_k)$ in Lemma~\ref{lem:inexact_oracle}, and ignoring the term $-\|\epsilon_{out}^k\|^2$, we obtain
$ \langle r(\tilde{x}_k), \epsilon_{out}^k \rangle \leq \langle s_{\delta,L}(\lambda_k), \lambda_{\delta k} - \lambda_k \rangle$.
Next we apply the second inequality in Lemma~\ref{lem:inexact_oracle} to upper bound $\langle s_{\delta,L}(\lambda_k), \lambda_{\delta k} - \lambda_k \rangle$. In order to apply Lemma~\ref{lem:inexact_oracle}, both $\lambda_{\delta k}$ and $\lambda_k$ need to belong to $D_\delta$. Due to the projection in line 3 in Algorithm~\ref{alg:inexactAL}, $\lambda_k$ always belongs to $D$. Recalling the definition of $D_\delta$, it is straightforward to verify that $\lambda_k$, $\lambda_{\delta k} \in D_\delta$. Thus applying Lemma~\ref{lem:inexact_oracle} we get
\setlength{\belowdisplayskip}{10pt}
\setlength{\abovedisplayskip}{10pt}
\begin{equation}
\label{eq:lemma6_8}
\begin{split}
& \langle r(\tilde{x}_k), \epsilon_{out}^k \rangle \leq \langle s_{\delta,L}(\lambda_k), \lambda_{\delta k} - \lambda_k \rangle \\
& \leq \Phi_\rho(\lambda_{\delta k}) - \Phi_{\delta,L}(\lambda_k) + \frac{L}{2}B_{out}^2 + \delta.
\end{split}
\end{equation}
Since $\lambda^\star$ is the global maximizer of the function $\Phi_\rho(\lambda)$, we get $\Phi_\rho(\lambda^\star) \geq \Phi_\rho(\lambda_{\delta k})$. We can also show that $\Phi_{\delta, L}(\lambda_k) \geq \Phi_\rho(\lambda_k)$ always holds because
$ \Phi_{\delta,L}(\lambda_k) = L_\rho(\tilde{x}_k;\lambda_k) + B_{out}B_\lambda \geq L_\rho(\tilde{x}_k;\lambda_k) \geq L_\rho(x_k^\star;\lambda_k)$. Combining these two inequalities we obtain that $\Phi_\rho(\lambda^\star) - \Phi_\rho(\lambda_k) \geq \Phi_\rho(\lambda_{\delta k}) - \Phi_{\delta,L}(\lambda_k)$. Substituting this inequality into \eqref{eq:lemma6_8}, we have that
$ \langle r(\tilde{x}_k), \epsilon_{out}^k \rangle \leq \Phi_\rho(\lambda^\star) - \Phi_\rho(\lambda_k) + \frac{L}{2}B_{out}^2 + \delta $.
Combining this inequality, \eqref{proof:dual_optimality_5} and \eqref{eq:lemma6_7}, we get
\begin{equation}
\label{eq:lemma6_11}
\begin{split}
& \langle r(\tilde{x}_k), \lambda - \lambda_k \rangle \leq \frac{L}{2}(\|\lambda_k - \lambda\|^2 - \|\lambda_{k+1} - \lambda\|^2) \\
& + \frac{1}{2}(\|\lambda_{k-1} - \lambda^\star\|^2 - \|\lambda_{k} - \lambda^\star\|^2) + \frac{1}{2L}\|r(\tilde{x}_k)\|^2 \\
& + B_{out}B_\lambda + (\frac{1}{2} + \frac{1}{2L})B_{out}^2 + \frac{2}{L}\delta.
\end{split}
\end{equation}
Combining \eqref{eq:lemma6_11} with \eqref{eq:lemma6_5}, we have that
$ f(\tilde{x}_k) - f(x^\star) + \langle \lambda, r(\tilde{x}_k) \rangle \leq \frac{L}{2}(\|\lambda_k - \lambda\|^2 - \|\lambda_{k+1} - \lambda\|^2) + \frac{1}{2}(\|\lambda_{k-1} - \lambda^\star\|^2 - \|\lambda_{k} - \lambda^\star\|^2) + (\frac{1}{2L} - \frac{\rho}{2})\|r(\tilde{x}_k)\|^2 + B_{out}B_\lambda + (\frac{1}{2} + \frac{1}{2L})B_{out}^2 + \frac{2}{L}\delta + \epsilon_{in}^k$.
From Lemma~\ref{lem:inexact_oracle}, we have that $L = 2L_\Phi = \frac{2}{\rho}$, and $\frac{1}{2L} - \frac{\rho}{2} < 0$. Therefore, the term $(\frac{1}{2L} - \frac{\rho}{2})\|r(\tilde{x}_k)\|^2 < 0$ can be neglected. Recalling the definition of $\phi^k(\lambda)$, $\delta$ and $E$, we obtain \eqref{eq:intermediate_lemma}, which completes the proof.
\end{proof}
Next, we apply Lemma~\ref{lem:intermediate_lemma} to prove the primal suboptimality and infeasibility of Algorithm \ref{alg:inexactAL}.
\begin{theorem}
\label{thm:primal_opt_feas}
(Primal Suboptimality and Infeasibility) Let assumptions~\ref{assum:prob_form}, \ref{assum:strong_dual} and \ref{assum:bound_var} hold. Define $\bar{x}_K = \frac{1}{K}\sum_{k=1}^{K}\tilde{x}_k$. Then, we have that (a) primal optimality:
\begin{equation}
\label{eq:primal_opt_1}
-(\frac{1}{K}\phi^1(2\lambda^\star) + E) \leq f(\bar{x}_K) - f(x^\star) \leq \frac{1}{K}\phi^1(0) + E,
\end{equation}
\noindent (b) primal feasibility:
\begin{equation}
\label{eq:primal_feas}
\|r(\bar{x}_K)\| \leq \frac{1}{K}\phi^1(\lambda^\star + \frac{r(\bar{x}_K)}{\|r(\bar{x}_K)\|}) + E.
\end{equation}
\end{theorem}
\begin{proof}
Summing inequality (\ref{eq:intermediate_lemma}) in Lemma~\ref{lem:intermediate_lemma} for $k = 1, 2, \dots, K$, we have that
$ \sum_{k=1}^{K} f(\tilde{x}_k) - Kf(x^\star) + \langle \lambda, \sum_{k=1}^{K}r(\tilde{x}_k) \rangle \leq \phi^1(\lambda) - \phi^{K+1}(\lambda) + KE \leq \phi^1(\lambda) + KE$,
where the second inequality follows from $\phi^k(\lambda) \geq 0$. Dividing both sides of the above inequality by $K$ and using the fact that $\frac{1}{K}\sum_{k=1}^{K}r(\tilde{x}_k) = r(\bar{x}_K)$, we get
$\label{proof:final_2}
\frac{1}{K}\sum_{k=1}^{K}f(\tilde{x}_k) - f(x^\star) + \langle \lambda, r(\bar{x}_K) \rangle \leq \frac{1}{K}\phi^1(\lambda) + E$.
From the convexity of the function $f(x)$, we have that $f(\bar{x}_K) \leq \frac{1}{K}\sum_{k=1}^{K}f(\tilde{x}_k)$. Combining the above two inequalities, we obtain
\begin{equation}
\label{proof:final_3}
f(\bar{x}_K) - f(x^\star) + \langle \lambda, r(\bar{x}_K) \rangle \leq \frac{1}{K}\phi^1(\lambda) + E.
\end{equation}
To prove the right inequality in \eqref{eq:primal_opt_1}, let $\lambda = 0$ in \eqref{proof:final_3}. Then, we have that
$ f(\bar{x}_K) - f(x^\star) \leq \frac{1}{K}\phi^1(0) + E$.
To prove the left inequality in \eqref{eq:primal_opt_1}, according to the inequality~\eqref{eq:saddle_point} and Assumption~\ref{assum:strong_dual}, we have that
\begin{equation}
\label{proof:final_4_1}
f(x^\star) - f(\bar{x}_K) \leq \langle \lambda^\star, r(\bar{x}_K) \rangle.
\end{equation}
Adding $\langle \lambda^\star, r(\bar{x}_K) \rangle$ to both sides of \eqref{proof:final_4_1} and rearranging terms, we have that
\begin{equation}
\label{proof:final_4_2}
\langle \lambda^\star, r(\bar{x}_K) \rangle \leq f(\bar{x}_K) - f(x^\star) + \langle 2\lambda^\star, r(\bar{x}_K) \rangle.
\end{equation}
Combining inequalities (\ref{proof:final_4_1}) and (\ref{proof:final_4_2}), we obtain
$ f(x^\star) - f(\bar{x}_K) \leq f(\bar{x}_K) - f(x^\star) + \langle 2\lambda^\star, r(\bar{x}_K) \rangle$.
Combining this inequality and the inequality in~\eqref{proof:final_3} with $\lambda = 2\lambda^\star$, we get
$ f(x^\star) - f(\bar{x}_K) \leq \frac{1}{K}\phi^1(2\lambda^\star) + E$.
To prove \eqref{eq:primal_feas}, let $\lambda = \lambda^\star + \frac{r(\bar{x}_K)}{\|r(\bar{x}_K)\|}$ in \eqref{proof:final_3}, which also belongs in $D$ by Assumption~\ref{assum:choice_D}. Rearranging terms, we obtain
$f(\bar{x}_K) - f(x^\star) + \langle \lambda^\star, r(\bar{x}_K) \rangle + \|r(\bar{x}_K)\| \leq \frac{1}{K}\phi^1(\lambda^\star + \frac{r(\bar{x}_K)}{\|r(\bar{x}_K)\|}) + E$.
Letting $x = \bar{x}_K$ in the second inequality in \eqref{eq:saddle_point}, we have that $f(\bar{x}_K) + \langle \lambda^\star, r(\bar{x}_K) \rangle - f(x^\star) \geq 0$. Combining this inequality with the above inequality, we get
$\|r(\bar{x}_K)\| \leq \frac{1}{K}\phi^1(\lambda^\star + \frac{r(\bar{x}_K)}{\|r(\bar{x}_K)\|}) + E$,
which completes the proof.
\end{proof}
\section{Fixed-point Implementation}
\label{sec:fp}
To apply Algorithm~\ref{alg:inexactAL} to problem~\eqref{pb:problem} on fixed-point platforms, we need another assumption.
\begin{assumption}
\label{assum:4.1}
The set $\mathcal{X}$ is compact and simple.
\end{assumption}
The compactness is needed to bound the primal iterates $\tilde{x}_k$ in Algorithm~\ref{alg:inexactAL}.
$\mathcal{X}$ is simple in the sense that computing the projection onto $\mathcal{X}$ is of low complexity on fixed-point platforms, for example, $\mathcal{X}$ is a box.
We note that our convergence analysis in Section~\ref{sec:converg} does not rely on the Assumption~\ref{assum:4.1}. This assumption is needed only for fixed-point implementation.
Due to the projection operation in the multiplier update in Algorithm~\ref{alg:inexactAL}, $\|\lambda_k\|$ is always bounded. Since now both $x_k$ and $\lambda_k$ lie in compact sets, the bounds on the norms of these variables can be used to design the word length of the fixed-point data to avoid overflow, similar to Section 5.3 in \cite{patrinos2015dual}. Meanwhile, according to the results in \eqref{eq:primal_opt_1} and \eqref{eq:primal_feas}, we can choose the number $K_{out}$ of the outer iterations, the accuracy of the multiplier update $B_{out}$ and the accuracy of the subproblem solution $B_{in}$ in Algorithm~\ref{alg:inexactAL} to achieve an $\epsilon$-solution to the problem~\eqref{pb:problem}.
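As a simple illustration of the word-length design step, a standard rule of thumb (stated here as an assumption, not as part of the proposed method) is to allocate enough integer bits to cover the largest magnitude guaranteed by these bounds:
\begin{verbatim}
import math

def integer_bits(bound):
    """Integer bits (excluding the sign bit) so that any value of magnitude
    at most 'bound' fits in a two's-complement fixed-point word."""
    return max(1, math.floor(math.log2(bound)) + 1)

# e.g. if the projection onto D guarantees |lambda_i| <= 12.5, then
# integer_bits(12.5) = 4 integer bits (plus a sign bit and fractional bits).
print(integer_bits(12.5))
\end{verbatim}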
Specifically, to achieve the accuracy $B_{in}$, we can determine the iteration complexity of the subproblem solver based on the studies in \cite{devolder2014first,patrinos2015dual, schmidt2011convergence}. However, these results are usually conservative in practice.
Therefore, in this section, we propose a stopping criterion for early termination of the subproblem solver with guarantees on the solution accuracy. This stopping criterion can be used for box-constrained convex optimization problems that are not necessarily strongly convex, and is simple to check on embedded devices.
A similar stopping criterion has been proposed in \cite{nedelcu2014computational}, which however requires the objective function of the subproblem to be strongly convex. In what follows, we relax the strong convexity assumption so that this stopping criterion can be applied.
\begin{lemma}
\label{lem:stopcondition}
Consider a convex and nonsmooth function $f(x)$, with support $\mathcal{X} \subset \mathbb{R}^n$ and set of minimizers $X^\star$. Assume $f(x)$ satisfies the quadratic growth condition,
\begin{equation}
\label{eq:qgc_1}
f(x) \geq f^\star + \frac{\sigma}{2}\mathtt{dist}^2(x, X^\star), \text{ for all } x \in \mathcal{X},
\end{equation}
where $\mathtt{dist}(x, X^\star)$ is the distance from $x$ to the set $X^\star$ and $\sigma>0$ is a scalar. Then, we have that for any $x \in \mathcal{X}$ and any $s \in \mathcal{N}_{\mathcal{X}}(x)$,
\begin{equation}
\label{eq:qgc_2}
f(x) - f^\star \leq \frac{2}{\sigma}\|\partial f(x) + s\|^2,
\end{equation}
where $\mathcal{N}_\mathcal{X}(x)$ is the normal cone at $x$ with respect to $\mathcal{X}$.
\end{lemma}
\begin{proof}
The proof is similar to the proof in \cite{nedelcu2014computational} and therefore is omitted.
\end{proof}
\citet*{necoara2018linear} have shown that the function $h(Ax)$ satisfies the inequality~\eqref{eq:qgc_1} when the set $\mathcal{X}$ is polyhedral and $h(y)$ is strongly convex. However, since the subproblem objective $L_\rho(x,\lambda_k)$ is in the form $\sum_i h_i(A_ix)$ rather than $h(Ax)$, in what follows we present a generalization of Theorem 8 in \cite{necoara2018linear} so that the function $L_\rho(x,\lambda_k)$ satisfies the inequality~\eqref{eq:qgc_1} and the condition~\eqref{eq:qgc_2} can be applied.
\begin{lemma}
\label{lem:qg_func}
Consider a convex function $H(x) = \sum_i h_i(A_ix - b_i)$ with the polyhedral support $\mathcal{X} = \{ x:Cx \leq d \}$, where $h_i(y)$ is $\sigma_i-$strongly convex with respect to $y$ for any $i$. Then we have
\begin{equation}
\label{eq:qg_func}
H(x) \geq H^\star + \frac{\sigma}{2}\mathtt{dist}^2(x, X^\star),
\end{equation}
where $\sigma = \frac{\min_i\{\sigma_i\}}{\theta^2(\tilde{A},C)}$, the matrix $\tilde{A}$ is in the form
$[\cdots | A_i^T | \cdots ]^T$
and $\theta(\tilde{A}, C)$ is a constant only related to the matrices $\{A_i\}$ and $C$.
\end{lemma}
\begin{proof}
Given $x^\star \in X^\star$, for any $x \in \mathcal{X}$, from the strong convexity of the function $h_i$, we have that
$ h_i(A_ix - b_i) \geq h_i(A_ix^\star-b_i) + \langle \partial h_i(y)|_{y = A_ix^\star - b_i}, A_i(x - x^\star) \rangle + \frac{\sigma_i}{2}\|A_ix - A_ix^\star\|^2 = h_i(A_ix^\star-b_i) + \langle x - x^\star, A_i^T \partial h_i(y)|_{y = A_ix^\star - b_i} \rangle + \frac{\sigma_i}{2}(x - x^\star)^TA_i^TA_i(x - x^\star)$.
Adding up the above inequalities for all $i$, we have
$H(x) \geq H(x^\star) + \langle \sum_i A_i^T\partial h_i(y)|_{y = A_ix^\star - b_i}, x - x^\star \rangle + \sum_i \frac{\sigma_i}{2}(x - x^\star)^TA_i^TA_i(x - x^\star)$.
Recall that $\partial H(x^\star) = \sum_i A_i^T\partial h_i(y)|_{y = A_ix^\star - b_i}$. From the optimality of $x^\star$, we have that $\langle \sum_i A_i^T\partial h_i(y)|_{y = A_ix^\star - b_i}, x - x^\star \rangle \geq 0$ for any $x \in \mathcal{X}$. Therefore, we obtain
\begin{equation}
\label{eq:qgc_func_1}
H(x) \geq H(x^\star) + \sum_i \frac{\sigma_i}{2}(x - x^\star)^TA_i^TA_i(x - x^\star).
\end{equation}
Next, we show that the optimal solution set is $X^\star = \{\tilde{x}^\star: \tilde{A}\tilde{x}^\star = t^\star ; C\tilde{x}^\star \leq d \}$, where $t^\star = \tilde{A}x^\star$. To do so, we decompose the space $\mathbb{R}^n$ into two mutually orthogonal subspaces, $\mathcal{V}$ and $\mathcal{U}$, where $\mathcal{V}$ is the intersection of the kernel spaces of all matrices $A_i$ and $\mathcal{U}$ is the sum of the row spaces of all matrices $A_i$. Showing that ${X}^\star = \{\tilde{x}^\star: \tilde{A}\tilde{x}^\star = t^\star ; C\tilde{x}^\star \leq d \}$ is equivalent to showing that $X^\star = \tilde{X}^\star$, where $\tilde{X}^\star = \{\tilde{x}^\star: \tilde{x}^\star = x^\star + v, \text{ where } v \in \mathcal{V} ; C\tilde{x}^\star \leq d \}$. First, we show that $\tilde{X}^\star \subset X^\star$. If $\tilde{x}^\star \in \tilde{X}^\star$, we have that $\tilde{x}^\star$ is feasible and $H(\tilde{x}^\star) = H(x^\star)$ because $A_i\tilde{x}^\star = A_i(x^\star + v) = A_ix^\star$, for any $i$. Therefore, $\tilde{X}^\star \subset X^\star$. Second, we prove that $X^\star \subset \tilde{X}^\star$ by showing that if $\tilde{x} \notin \tilde{X}^\star$, then $\tilde{x}$ is not optimal. If $C\tilde{x} \nleq d$, then $\tilde{x}$ is not feasible. On the other hand, if $C\tilde{x} \leq d$ but $\tilde{x} = x^\star + \alpha v + u$, where $\alpha$ is any real number and $u \in \mathcal{U}$ is nonzero, then $H(\tilde{x}) > H(x^\star)$. This is due to the inequality~\eqref{eq:qgc_func_1} and $\sum_i \frac{\sigma_i}{2}(\tilde{x} - x^\star)^TA_i^TA_i(\tilde{x} - x^\star) > 0$. Therefore, $X^\star \subset \tilde{X}^\star$ and we have shown that $X^\star = \tilde{X}^\star$.
Since $X^\star = \{\tilde{x}^\star: \tilde{A}\tilde{x}^\star = t^\star ; C\tilde{x}^\star \leq d \}$, according to Lemma 15 in \cite{wang2014iteration}, for any $x \in \mathcal{X}$, we have that $\mathtt{dist}(x, X^\star) \leq \theta(\tilde{A},C) \|\tilde{A}(x - x^\star)\|$. Furthermore, we have that $\mathtt{dist}^2(x, X^\star) \leq \theta^2(\tilde{A},C)\|\tilde{A}(x - x^\star)\|^2 \leq \frac{\theta^2(\tilde{A},C)}{\min_i\{\sigma_i\}}\sum_i \sigma_i(x - x^\star)^TA_i^TA_i(x - x^\star)$. Combining this inequality with \eqref{eq:qgc_func_1}, we get the desired result \eqref{eq:qg_func}.
\end{proof}
The constant $\theta(\tilde{A},C)$ can be obtained as in \citet*{necoara2018linear}.
If the objective function $f(x)$ in problem~\eqref{pb:problem} is of the form $h(Ax - b)$ where $h(y)$ is strongly convex and the set $\mathcal{X} = \{x:l \leq x \leq u\}$, according to Lemma \ref{lem:qg_func}, the subproblem objective $L_\rho(x, \lambda_k)$ satisfies the inequality~\eqref{eq:qgc_1}. Therefore, according to Lemma~\ref{lem:stopcondition}, we have that the iterate $x_t$ returned from the subproblem solver satisfies $L_\rho(x_t;\lambda_k) - L_\rho(x_k^\star;\lambda_k) \leq B_{in}$ if $ \|\partial L_\rho(x_t;\lambda_k) + s_t^\star\| \leq \sqrt{\frac{\sigma}{2}B_{in}}$, where $s_t^\star = \arg\min_{s_t \in \mathcal{N}_{\mathcal{X}}(x_t)}\|\partial L_\rho(x_t;\lambda_k) + s_t\|$. As discussed in \cite{nedelcu2014computational}, $\|\partial L_\rho(x_t;\lambda_k) + s_t^\star\|$ can be efficiently evaluated on the embedded platform.
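To make this test concrete for the box constraint $\mathcal{X} = \{x:l \leq x \leq u\}$, note that the minimization over the normal cone decouples coordinate-wise: at an active bound, any gradient component pointing into the infeasible direction can be cancelled by a normal-cone element, and only the remaining components contribute to the residual. The following sketch (Python/NumPy, with illustrative names; it is a floating-point illustration, not the fixed-point implementation) evaluates $\|\partial L_\rho(x_t;\lambda_k) + s_t^\star\|$ and the resulting stopping test:
\begin{verbatim}
import numpy as np

def box_stationarity_residual(grad, x, l, u, tol=1e-12):
    # || grad + s* ||, where s* is the element of the normal cone of the box
    # {l <= x <= u} at x that best cancels grad (coordinate-wise, assuming l < u)
    r = grad.copy()
    at_lower = x <= l + tol
    at_upper = x >= u - tol
    # at an active lower bound, some s_i <= 0 cancels any grad_i >= 0, so only
    # a negative component survives (and symmetrically at an active upper bound)
    r[at_lower] = np.minimum(grad[at_lower], 0.0)
    r[at_upper] = np.maximum(grad[at_upper], 0.0)
    return np.linalg.norm(r)

# hypothetical use inside the subproblem solver:
# if box_stationarity_residual(grad_L, x_t, l, u) <= np.sqrt(0.5 * sigma * B_in):
#     break   # L_rho(x_t) - L_rho(x_k_star) <= B_in is then certified
\end{verbatim}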
\section{Numerical Simulations}
\label{sec:sim}
In this section, we present simulation results for a utility maximization example to verify the convergence and error analysis results in Section~\ref{sec:converg} and the design of the fixed-point implementation in Section~\ref{sec:fp}. The simulations are conducted using the Fixed-Point Designer in Matlab R2015a on a Macbook Pro with 2.6GHz Intel Core i5 and 8GB, 1600MHz memory.
Consider an undirected graph $G = (\mathcal{N}, \mathcal{E})$, where $\mathcal{N} = \{1,2,\dots,N\}$ is the set of nodes and $\mathcal{E}$ is the set of edges, so that $(i,j) \in \mathcal{E}$ if the nodes $i$ and $j$ are connected in the graph $G$. Denote the set of neighbors of node $i$ as $\mathcal{N}_i$. The set $\mathcal{N}$ is partitioned into two subsets $S$ and $D$, the sets of source and destination nodes, respectively. Each node $i \in S$ generates data at a rate $s_i$, where $s_{\min} \leq s_i \leq s_{\max}$. The data flows from node $i$ to node $j$ through edge $(i,j) \in \mathcal{E}$ at a rate $t_{ij}$, where $0 \leq t_{ij} \leq c_{ij}$. All generated data finally flows into the destination nodes, which are modeled as sinks and can absorb the incoming data at any rate. The nodes collaboratively solve the following network utility maximization (NUM) problem
\begin{flalign}
\label{eq:pb_num}
& \max_{\{s_i\}, \{t_{ij}\}} \; \sum_{i \in S} \log(s_i) & \nonumber \\
& \text{s.t. } \; \sum_{j \in \mathcal{N}_i} t_{ij} - \sum_{j \in \mathcal{N}_i} t_{ji} = s_i, \;\;\; \forall i \in S & \\
& s_{\min} \leq s_i \leq s_{\max}, \; \forall i \in S, \;\;\; 0 \leq t_{ij} \leq c_{ij}, \;\;\; \forall (i,j) \in \mathcal{E}, \nonumber
\end{flalign}
where the constraint $\sum_{j \in \mathcal{N}_i} t_{ij} - \sum_{j \in \mathcal{N}_i} t_{ji} = s_i$ expresses the flow conservation law at the node $i$.
The logarithm objective function is used to measure the utility of the data generation rate.
To solve problem~\eqref{eq:pb_num} in a distributed fashion, distributed ALM schemes \citep*{chang2015multi,chatzipanagiotis2015augmented,chatzipanagiotis2016distributed,chatzipanagiotis2017convergence,lee2017complexity} have been proposed that converge much faster than the dual decomposition method, although at the cost of solving nontrivial subproblems locally at each iteration. Here we employ the consensus-ADMM method in \cite{chang2015multi} to solve problem~\eqref{eq:pb_num}. Specifically, let node $i$ keep a local decision variable $[s_i, t_{(i)}^T]^T$, where $t_{(i)} = [t_{i1}^{(i)}, \dots, t_{i|\mathcal{N}_i|}^{(i)}, t_{1i}^{(i)}, \dots, t_{|\mathcal{N}_i|i}^{(i)}]^T$. Then, at the $t$-th iteration, each node needs to solve a local problem
\begin{flalign}
\label{eq:pb_resource_local}
& \min_{s_i, t_{ij}^{(i)}, t_{ji}^{(i)}} \; -\log(s_i) + \langle p_i^t, t_{(i)} \rangle + \mu\|t_{(i)} - g_i^t\|^2 & \nonumber \\
& \text{s.t. } \; \sum_{j \in \mathcal{N}_i} t_{ij}^{(i)} - \sum_{j \in \mathcal{N}_i} t_{ji}^{(i)} = s_i, & \\
& s_{\min} \leq s_i \leq s_{\max}, \;\;\; 0 \leq t_{ij}^{(i)} \leq c_{ij}, \;\;\; 0 \leq t_{ji}^{(i)} \leq c_{ji}, & \nonumber
\end{flalign}
where the variables $p_i^t$ and $g_i^t$ are updated at every iteration of the consensus-ADMM to finally achieve consensus $t_{ij}^{(i)} = t_{ij}^{(j)}$ on all edges. Problem \eqref{eq:pb_resource_local} has the form of problem \eqref{pb:problem} and we can apply Algorithm \ref{alg:inexactAL} to solve it using fixed-point data. Since the objective function in \eqref{eq:pb_resource_local} is not quadratic, the results in \cite{jerez2014embedded} cannot be applied.
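For reference, the local subproblem \eqref{eq:pb_resource_local} can be prototyped in floating point with a generic convex modelling tool. The sketch below (Python with \texttt{cvxpy} and made-up data for one node, assuming \texttt{cvxpy} with a conic solver supporting the exponential cone is available) only illustrates the structure of the subproblem; the experiments instead solve it with Algorithm~\ref{alg:inexactAL} in fixed-point arithmetic:
\begin{verbatim}
import numpy as np
import cvxpy as cp

d = 4                                      # |N_i|, number of neighbours (example)
rng = np.random.default_rng(0)
p = rng.standard_normal(2 * d)             # price vector p_i^t
gref = rng.uniform(0.0, 1.0, 2 * d)        # consensus reference g_i^t
c_out, c_in = np.ones(d), np.ones(d)       # capacities c_ij and c_ji
s_min, s_max, mu = 0.05, 10.0, 1.0

s = cp.Variable()                          # local rate s_i
t = cp.Variable(2 * d)                     # t_(i) = [t_ij ; t_ji] stacked
t_out, t_in = t[:d], t[d:]

obj = cp.Minimize(-cp.log(s) + p @ t + mu * cp.sum_squares(t - gref))
cons = [cp.sum(t_out) - cp.sum(t_in) == s,
        s >= s_min, s <= s_max,
        t_out >= 0, t_out <= c_out,
        t_in >= 0, t_in <= c_in]
cp.Problem(obj, cons).solve()
print(float(s.value), np.round(t.value, 3))
\end{verbatim}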
For our numerical simulations, we randomly generate problem \eqref{eq:pb_num} on a network of $10$ nodes and apply Algorithm~\ref{alg:inexactAL} to solve the subproblem \eqref{eq:pb_resource_local}.
In what follows we present results for the node whose subproblems are the largest, with dimension $9$.
The parameters $p_i^t$ and $g_i^t$ in \eqref{eq:pb_resource_local} are obtained by running $30$ iterations of consensus-ADMM on problem~\eqref{eq:pb_num} with double-precision floating-point data. This creates $30$ instances of subproblem~\eqref{eq:pb_resource_local} which we solve using Algorithm~\ref{alg:inexactAL}. The bounds on $\|\lambda^\star\|$ for each problem instance were obtained by solving each problem using the Matlab function \texttt{fmincon}. Table~\ref{table:primal_feas_opt_num} shows the achieved optimality and feasibility for the worst-case scenario and for three solution accuracies, $\epsilon = 1, 0.1$ and $0.01$. We observe that the theoretical bounds are around 20 to 30 times higher than the actual algorithm performance.
\begin{table}[t]
\centering
\scriptsize
\caption{Primal optimality and feasibility achieved by Algorithm~\ref{alg:inexactAL} for the NUM problem }
\label{table:primal_feas_opt_num}
\begin{tabular}{| c | c | c | c |}
\multicolumn{4}{c}{} \\
\hline
$\epsilon$ & $fl$-$wl$ & low. \eqref{eq:primal_opt_1}, $\;$ opt., $\;$ up. \eqref{eq:primal_opt_1} & feas., $\;$ \eqref{eq:primal_feas} \\
\hline
1 & 10-14 & -0.9861, $\;$ 0.0338, $\;$ 0.7026 & 0.0439, $\;$ 1.0 \\
0.1 & 14-18 & -0.0995, $\;$ 0.0034, $\;$ 0.0707 & 0.0044, $\;$ 0.1 \\
0.01 & 17-21 & -0.0100, $\;$ 0.00035, $\;$ 0.0071 & 0.00044, $\;$ 0.01 \\
\hline
\end{tabular}
\end{table}
Note that this simulation serves only the purpose of showing the ability of Algorithm~\ref{alg:inexactAL} to solve non-quadratic convex optimization problems, such as the subproblems \eqref{eq:pb_resource_local}.
The method in \cite{jerez2014embedded} can also be extended to solve such problems if it is used as the QP solver in the Sequential Quadratic Programming (SQP) framework. However, a new KKT matrix needs to be inverted at each iteration of SQP according to \cite{jerez2014embedded} and the solutions to the QP problems will be inexact on a fixed-point platform. The effect of this inexactness on the iteration complexity and final solution accuracy of SQP under fixed-point arithmetic has not been theoretically studied.
A systematic solution of \eqref{eq:pb_num} using distributed ALM methods with fixed-point data is an open problem and is left for future research.
\section{Conclusion}
\label{sec:conclude}
In this paper we proposed an Augmented Lagrangian Method to solve convex and non-smooth optimization problems using fixed-point arithmetic. To avoid data overflow, we introduced a projection operation in the multiplier update. Moreover, we presented a stopping criterion to terminate the primal subproblem iterations early, while ensuring a desired accuracy of the solution. We presented convergence rate results as well as bounds on the optimality and feasibility gaps. To the best of our knowledge, this is the first fixed-point ALM that can handle non-smooth problems, data overflow, and can efficiently and systematically utilize iterative solvers in the primal update.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document}
|
\begin{document}
\title{Optimal architecture for a non-deterministic noiseless linear
amplifier}
\author{N.~A.~McMahon} \author{A.~P.~Lund} \author{T.~C.~Ralph}
\affiliation{ Centre for Quantum Computation and Communications Technology,
School of Mathematics and Physics,\\
University of Queensland, St Lucia Queensland 4072, Australia}
\begin{abstract}
Non-deterministic quantum noiseless linear amplifiers are a new technology
with interest in both fundamental understanding and new applications. With a
noiseless linear amplifier it is possible to perform tasks such as
improving the performance of quantum key distribution and purifying lossy
channels. Previous designs for noiseless linear amplifiers involving linear
optics and photon counting are non-optimal because they have a probability
of success lower than the theoretical bound given by the theory of
generalised quantum measurement. This paper develops a theoretical model
which reaches this limit. We calculate the fidelity and probability of
success of this new model for coherent states and Einstein-Podolsky-Rosen
(EPR) entangled states.
\end{abstract}
\maketitle
A deterministic noiseless, phase insensitive, linear amplifier, as seen in
classical systems, is unphysical in quantum theory~\cite{CAV82}. However, it has
been demonstrated that an analogous probabilistic amplifier is approximately
physically realisable~\cite{ref:Ralph2008, ref:Xiang2010, ZAV11} and has a wide
variety of potential uses in quantum computing and communication technology
protocols. These protocols include error correction~\cite{ref:Ralph2011},
quantum key distribution~\cite{ref:Blandino2012} and other protocols where
distillation of entanglement is desirable~\cite{ref:Xiang2010}.
In order to translate these systems to useful quantum technologies an
investigation into the optimal probabilities of success that can be achieved is
important. Low probabilities of success reduce the range of possible
experimental and commercial applications of these devices. Ralph and Lund
\cite{ref:Ralph2008} proposed a linear optics implementation of a heralded
noiseless linear amplifier which has been theoretically investigated
\cite{FIU09, GAG12, ref:Walk2013} and experimentally demonstrated with good
agreement in visibility and effective gain for small amplitudes $\alpha < 0.04$
and gains $|g|^{2}\leq 5$ \cite{ref:Xiang2010, ref:Ferreyrol2008, MIC12,OSO12,
KOS13}. The probability of success for low amplitude inputs $\alpha \ll 1$
using this design is $P = \frac{1}{g^{2}+1}$. The probability of success of
other linear optical designs is similar \cite{ZAV11, KIM12}. For higher
amplitude inputs, the probability scales as $P \approx
\frac{1}{(g^{2}+1)^N}$, where $N \gg |\alpha|^2$. The theoretical maximum
probability of success for a noiseless linear amplifier in the low photon
number regime is $P = \frac{1}{g^{2}}~$\cite{ref:Ralph2008} and is expected to
scale as $\frac{1}{g^{2N}}$.
Our aim in this paper is to identify and analyse a physical model for noiseless
linear amplification which saturates this maximum probability of success. Our
approach is related to the idea that noiseless amplification can be implemented
via a weak measurement model~\cite{ref:Menzies2009}. The paper is arranged in
the following way. In the first section we will introduce a measurement model
for noiseless amplification. In section 2 we will translate this into a
physical model for the amplifier and particularly look at the low photon number
limit. The following two sections will analyse the performance of the amplifier
with respect to coherent state inputs and the distillation and purification of
Einstein, Podolsky, Rosen (EPR) entanglement (2-mode squeezing). In the final
section we will conclude.
\section{Noiseless amplification as a general measurement}
An ideal noiseless amplifier performs the operation $g^{a^\dagger
a}$~\cite{ref:Ralph2008}, that is it takes an input state $\ket{\psi}$ to
$g^{a^\dagger a}\ket{\psi}$. This operator takes the coherent state
$\ket{\alpha}$ to the coherent state $\ket{g\alpha}$ and is inherently not
unitary. This suggests that a measurement process with post-selection on the
measurement outcomes is required to implement it. The case we are most
interested in here is where $g > 1$. In this situation the operator is
unbounded and can only be implemented perfectly over the entire Hilbert space
via a measurement process with probability zero. In many experimental
situations the action of this operator on states with high occupation number
is not important as these states have negligible amplitude. Therefore this operator
is generally chosen to be truncated at some occupation number $N$, which will
be chosen depending on the desired performance of an experimental apparatus.
This truncation allows for non-zero probabilities of successfully implementing
the desired amplification transformation. Lower values of $N$ will generally
result in higher probabilities of success at the cost of a lower fidelity of
operation when compared to the ideal operation. In current experiments with
low energy inputs $N=1$ is sufficient to achieve high fidelity, and this very
simple case has non-trivial implications.
When constructing a measurement which implements the amplification, it
suffices to consider the case where there are only two outcomes, a success
outcome and a failure outcome. When a success outcome is achieved the state
is transformed in the required way. Measurement outcomes, which we will label
$i$ are represented by $S$ for success and $F$ for failure. The action on the
input state due to each measurement result can be represented by the generally
non-unitary operator $\hat{M}_{i}$ called the measurement operator. The
probability of obtaining this measurement outcome when the measurement is
applied to the state $\ket{\psi}$ is given by
\begin{equation}
P_{i} = \bra{\psi} \hat{M}_{i}^{\dagger} \hat{M}_{i} \ket{\psi}
\end{equation}
and the resultant output state having achieved the result $i$ is
\begin{equation}
\ket{\psi^{'}_{i}} = \frac{\hat{M}_{i} \ket{\psi}}{\sqrt{P_i}}.
\end{equation}
To ensure that these operators define a probability measure the condition
\begin{equation}
\label{eq:MeasurementModelRestriction}
\hat{M}_S^\dagger \hat{M}_S + \hat{M}_F^\dagger\hat{M}_F = \hat{I}
\end{equation}
must be satisfied~\cite{ref:MikeandIke}.
To implement the amplification we require $\hat{M}_S \propto g^{a^\dagger a}$.
To ensure \eqref{eq:MeasurementModelRestriction} holds over the entire Hilbert
space, the proportionality constant would have to vanish, i.e. $\hat{M}_S = 0
\cdot g^{\hat{a}^\dagger \hat{a}}$, as the eigenvalues of $g^{\hat{n}}$ are
unbounded for $g > 1$ and
$\hat{M}_F^\dagger\hat{M}_F$ must be a positive operator. Now we can make the
truncation of this operator to achieve a non-zero probability. We do this by
requiring the action on the first $N+1$ Fock states ($n = 0,\dots,N$) to be
proportional to that of the perfect amplification operator and leaving the
action on higher occupation number states arbitrary. In this case the success
measurement operator can be written as
\begin{equation}
\hat{M}_S = \mathcal{N} \sum_{n = 0}^{N} g^{n}\ket{n}\bra{n} +
\sum_{n = N+1}^{\infty} S_{n} \ket{n}\bra{n},
\end{equation}
where $S_{n}$ is a sequence of complex numbers with norm between zero and
one. This will then allow the operation to satisfy
\eqref{eq:MeasurementModelRestriction} with $\mathcal{N}$ playing the role of
the proportionality constant and will in general be non-zero. The probability
of success for an arbitrary input state $\ket{\psi}$ is
\begin{widetext}
\begin{equation}
\label{eq:Probability}
P_S = \bra{\psi} M_{S}^{\dagger} M_{S} \ket{\psi}
= \left|\mathcal{N}\right|^{2}
\sum_{n=0}^{N} g^{2n} \left| \braket{n}{\psi}\right|^{2}
+ \sum_{n=N+1}^{\infty} \left| S_{n}\right|^{2} \left|\braket{n}{\psi}\right|^{2}.
\end{equation}
To ensure that $0 \leq P_S \leq 1$ for all possible input states we require
$|\mathcal{N}| \leq g^{-N}$. Here we can see that any complex phase factor within each $S_n$
will not influence the probability of success. The fidelity of the success
operation for pure state inputs is
\begin{equation}
\label{eq:Fidelity}
\mathcal{F}
= \frac{\left|\bra{\psi} g^{a^\dagger a} M_S \ket{\psi}\right|^{2}}
{\bra{\psi} M_{S}^{\dagger} M_{S} \ket{\psi}}
= P^{-1} \left|
\mathcal{N} \sum_{n=0}^{N} g^{2n} \left|\braket{n}{\psi}\right|^2
+ \sum_{n=N+1}^{\infty} S_n g^n \left|\braket{n}{\psi}\right|^2\right|^2.
\end{equation}
\end{widetext}
Here the complex phase factors of the $S_n$ are important. However, if the
$S_n$ are not real and positive then this can only act to reduce the fidelity.
Therefore, maximising the fidelity and probability over the widest set of
states requires $\mathcal{N} = g^{-N}$ and $S_n = 1$. This optimised measurement
operator is then
\begin{equation}
\label{eq:SuccessMO}
\hat{M}_{S} = g^{-N}\sum_{n = 0}^{N} g^{n}\ket{n}\bra{n}
+ \sum_{n = N+1}^{\infty} \ket{n}\bra{n}
\end{equation}
\section{Measurement model for noiseless amplification}
We can construct a model for the generalised measurement described in
Eq.~\ref{eq:SuccessMO} by considering a measurement apparatus consisting of a
two level system which interacts with the bosonic input mode as shown in
Figure~\ref{fig:model}.
\begin{figure}
\caption{A bosonic system (labeled ``System'') interacts with a two-level
apparatus (labeled ``Apparatus''). The apparatus is prepared into a $Z$
axis spin eigenstate. The interaction applies a conditional unitary
rotation where the conditioning depends on the number of bosons in the
input. The apparatus is measured and if the spin has flipped then a success
is heralded.}
\label{fig:model}
\end{figure}
After the interaction the apparatus is measured using a projective measurement
scheme. The apparatus orthonormal basis states represent success and failure
and will be written as $\ket{S}$ and $\ket{F}$ respectively. This basis is
arbitrary, but the interaction will depend on the particular choice of basis.
We will assume that the apparatus is prepared in the $\ket{F}$ state before
the interaction. The interaction is given by the unitary operator
\begin{equation}
\hat{U} = \hat{M}_{S} \otimes \ket{S} \bra{F} +
\hat{M}_{F} \otimes \ket{F} \bra{F} +
\hat{B}_{1} \otimes \ket{F} \bra{S} +
\hat{B}_{2} \otimes \ket{S} \bra{S}
\end{equation}
where $\hat{M}_{S}$ is the operator which will be applied to the system input
state when a success result is measured and $\hat{M}_{F}$ is the operator
applied to the system on measuring the failure result. The particular form of
the operators $\hat{B}_{1,2}$ are not of concern as they are dependent on the
apparatus being initialised in the $\ket{S}$ state. They are included to
provide enough freedom to ensure that $\hat{U}$ remains unitary. Using the
Kronecker product representation of the tensor product the unitarity
requirement can be written as
\begin{equation}
\left(
\begin{matrix}
\hat{M}^{\dagger}_{F} & \hat{M}^{\dagger}_{S} \\
\hat{B}^{\dagger}_{1} & \hat{B}^{\dagger}_{2}
\end{matrix}
\right)
\left(
\begin{matrix}
\hat{M}_{F} & \hat{B}_{1} \\
\hat{M}_{S} & \hat{B}_{2}
\end{matrix}
\right)
= \left(
\begin{matrix}
\hat{I} & 0 \\
0 & \hat{I}
\end{matrix}
\right)
\end{equation}
which can be rewritten as
\begin{eqnarray}
\hat{M}_{F}^{\dagger}\hat{M}_{F} + \hat{M}_{S}^{\dagger}\hat{M}_{S} &=& \hat{I} \\
\hat{M}_{F}^{\dagger} \hat{B}_{1} + \hat{M}_{S}^{\dagger} \hat{B}_{2} &=& 0 \\
\hat{B}_{1}^{\dagger}\hat{B}_{1} + \hat{B}_{2}^{\dagger}\hat{B}_{2} &=& \hat{I}
\label{eq:Requirements}
\end{eqnarray}
Provided $\hat{M}_{S}$ and $\hat{M}_{F}$ define a set of measurement operators
(in particular the requirement in equation
\eqref{eq:MeasurementModelRestriction}) then the first and last equations are
always satisfied if $\hat{B}_{1} = \pm \hat{M}_{S}$ and $\hat{B}_{2} = \pm
\hat{M}_{F}$. The second equation could never be satisfied had we swapped the
success and failure operators in this assignment. If $\hat{M}_{S}$ and
$\hat{M}_{F}$ are Hermitian and commute, as is the case we are considering
here, then we can always satisfy the second equation by choosing $\hat{B}_{1} =
-\hat{M}_{S}$ and $\hat{B_{2}} = \hat{M}_{F}$.
Now we can substitute our success operator from equation~\eqref{eq:SuccessMO}
into this interaction unitary. This unitary can then be rearranged to be
written as
\begin{equation}
\label{eq:AmpUnitary}
\hat{U} = \sum_{n=0}^{\infty} \ket{n}\bra{n} \otimes \hat{R}_{n}
\end{equation}
where $\hat{R}_{n}$ is defined as
\begin{equation}
\label{eq:Rotation}
\hat{R}_{n} =
\left(
\begin{matrix}
\sqrt{1-G_{n}^{2}} & G_{n} \\
-G_{n} & \sqrt{1-G_{n}^{2}}
\end{matrix}
\right),
\end{equation}
\begin{equation}
G_{n} = \min(1,g^{(n-N)}).
\end{equation}
The operator $\hat{R}_{n}$ is a Pauli Y-rotation of $\theta =
2\arcsin\left(\min(1, g^{n-N})\right)$ on the heralding qubit which depends on
the number of bosons in the input mode. This unitary can be generated by the
Hamiltonian
\begin{eqnarray}
\label{eq:Hamiltonian}
\hat{H} &=& \frac{\hbar}{\tau} \left[
\sum_{n = 0}^{\infty} \arcsin(\min(1,g^{n-N}))\, \ket{n}\bra{n} \otimes \hat{Y}
\right] \nonumber \\
&=& \frac{\hbar}{\tau} \arcsin(\min(1,g^{\hat{a}^\dagger \hat{a}-N})) \otimes \hat{Y},
\end{eqnarray}
where $\tau$ is the interaction time, which is chosen so that the appropriate
rotation angle $\theta$ is implemented.
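As a consistency check of this construction, one can assemble a truncated version of the unitary in equation~\eqref{eq:AmpUnitary} numerically and verify both its unitarity and that its success block reproduces $\hat{M}_{S}$. The sketch below (Python/NumPy) assumes the apparatus basis is ordered $(\ket{S},\ket{F})$, which is the ordering consistent with the choices $\hat{B}_{1} = -\hat{M}_{S}$ and $\hat{B}_{2} = \hat{M}_{F}$ made above:
\begin{verbatim}
import numpy as np

def R(n, g, N):
    Gn = min(1.0, g ** (n - N))
    c = np.sqrt(1.0 - Gn ** 2)
    return np.array([[c, Gn], [-Gn, c]])      # apparatus basis ordered (|S>, |F>)

g, N, dim = 2.0, 1, 10                        # dim: arbitrary Fock-space truncation
proj = [np.outer(np.eye(dim)[n], np.eye(dim)[n]) for n in range(dim)]
U = sum(np.kron(proj[n], R(n, g, N)) for n in range(dim))
assert np.allclose(U.T @ U, np.eye(2 * dim))  # U is unitary (real orthogonal here)

# the |S><F| block of U is the operator applied on a heralded success
bra_S = np.kron(np.eye(dim), np.array([1.0, 0.0]))
ket_F = np.kron(np.eye(dim), np.array([0.0, 1.0])).T
MS = np.diag([min(1.0, g ** (n - N)) for n in range(dim)])
assert np.allclose(bra_S @ U @ ket_F, MS)
\end{verbatim}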
\subsection{Low photon number limit}
In the limit of low amplitude inputs we can implement the amplifier with $N=1$.
The system can then be considered a qubit and the gate between the system and
the apparatus is locally equivalent to a standard controlled rotation.
To see this, we take the unitary from equation~\ref{eq:AmpUnitary}
\begin{equation}
\hat{U}_{N=1} = \ket{0}\bra{0} \otimes \left(
\begin{matrix}
\sqrt{1-1/g^2} & 1/g \\
-1/g & \sqrt{1-1/g^2}
\end{matrix}
\right) +
\ket{1}\bra{1} \otimes \left(
\begin{matrix}
0 & 1 \\
-1 & 0
\end{matrix}
\right)
\end{equation}
and then decompose it into
\begin{equation}
\hat{U}_{N=1} = -(X\otimes X) (I\otimes Z) C(R_y(\theta)) (X\otimes I)
\end{equation}
where $X$ and $Z$ are the standard Pauli matrices, $C(R_y(\theta))$ is a
controlled Pauli $Y$ rotation by $\theta$, and $\theta$ is as defined above with
$n=0$ and $N=1$. Applying this unitary to states of the form $\ket{0} + \alpha
\ket{1}$, where $\alpha$ is small, results in a success probability of
approximately $\frac{1}{g^2}$ for the noiseless amplification.
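The success branch of this low photon number amplifier can be checked directly on a small example. The sketch below (Python/NumPy, with arbitrarily chosen $g$ and $\alpha$) applies the $N=1$ success operator to a truncated state $\ket{0} + \alpha\ket{1}$ and recovers the heralded amplitude ratio $g\alpha$ together with a success probability close to $1/g^{2}$:
\begin{verbatim}
import numpy as np

g, alpha, dim = 3.0, 0.05, 8
n = np.arange(dim)
MS = np.diag(np.minimum(1.0, g ** (n - 1.0)))     # M_S for N = 1
psi = np.zeros(dim)
psi[0], psi[1] = 1.0, alpha                       # |0> + alpha |1>
psi /= np.linalg.norm(psi)

out = MS @ psi
print("success probability:", out @ out, " vs 1/g^2 =", 1.0 / g ** 2)
out /= np.linalg.norm(out)
print("heralded amplitude ratio:", out[1] / out[0], " vs g*alpha =", g * alpha)
\end{verbatim}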
\section{Coherent state inputs}
We can now calculate the performance of this model for particular situations.
First we will calculate the action on coherent states. Coherent states are an
ideal test of the amplification process as the expected output from the
amplification is easy to define. The ideal amplification action on a coherent
state is
\begin{equation}
g^{a^\dagger a} \ket{\alpha} = e^{(g^2-1)|\alpha|^2/2}\ket{g\alpha}.
\end{equation}
This can then be used to calculate the probability of success and the fidelity
of our model amplifier for coherent state inputs denoted by $P_c$ and
$\mathcal{F}_c$ respectively,
\begin{widetext}
\begin{equation}
\label{eq:ProbabilityCoherent}
P_c = \bra{\alpha} M_S^{\dagger} M_S \ket{\alpha}
= e^{-\left|\alpha\right|^{2}}
\left[g^{-2N} \sum_{n = 0}^{N} g^{2n} \frac{\left|\alpha\right|^{2n}}{n!} +
\sum_{n = N+1}^{\infty} \frac{\left|\alpha\right|^{2n}}{n!}\right],
\end{equation}
\begin{equation}
\label{eq:FidelityCoherent}
\mathcal{F}_c
= P_c^{-1} \left|\bra{g \alpha} M_{S} \ket{\alpha}\right|^{2}
= P_c^{-1} e^{-(1+g^2)\left|\alpha\right|^{2}}
\left| g^{-N} \sum_{n=0}^{N} g^{2n} \frac{\left|\alpha\right|^{2n}}{n!}
+ \sum_{n=N+1}^{\infty} g^{n} \frac{\left|\alpha\right|^{2n}}{n!}\right|^{2}.
\end{equation}
These expressions can be written in terms of incomplete gamma functions
\begin{equation}
\label{eq:coherentprobability}
P_c = P(N+1,|\alpha|^2) + g^{-2N} e^{(g^2-1)|\alpha|^2} Q(N+1, |g\alpha|^2),
\end{equation}
\begin{equation}
\label{eq:coherentfidelity}
F_c = P_c^{-1} e^{-(1+g^2)|\alpha|^2} \left| g^{-N} e^{|g\alpha|^2} Q(N+1,|g\alpha|^2)
+ e^{g|\alpha|^2} P(N+1, g|\alpha|^2) \right|^2
\end{equation}
\end{widetext}
where $Q(N,\lambda)$ is the regularised incomplete gamma function defined as
\begin{equation}
Q(N,\lambda)=\Gamma(N,\lambda)/\Gamma(N)
\end{equation}
where $\Gamma(N,\lambda)$ is the incomplete gamma function, $\Gamma(N)$ is the
complete gamma function and $P(N,\lambda) = 1 - Q(N,\lambda)$~\cite{gammafunc}.
The appearance of the incomplete gamma functions here is expected as this
function is the cumulative distribution function for the Poissonian
distribution which is the distribution that would result when measuring a
coherent state in the Fock basis. In this form these equations can be rapidly
computed numerically for particular values of $g$, $\alpha$ and $N$.
Figure~\ref{fig:coherent0.8} shows $P_c$ and $F_c$ for $\alpha=0.8$ and
$N=1,2,3$ and $4$.
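These closed forms are easy to evaluate with standard special-function libraries. The sketch below (Python/SciPy, where \texttt{gammainc} and \texttt{gammaincc} are the regularised lower and upper incomplete gamma functions $P$ and $Q$ defined above) computes $P_c$ and $\mathcal{F}_c$ and cross-checks them against truncated versions of the direct Fock sums in equations~\eqref{eq:ProbabilityCoherent} and~\eqref{eq:FidelityCoherent}:
\begin{verbatim}
import numpy as np
from scipy.special import gammainc, gammaincc, factorial
# gammainc(a, x) = P(a, x),  gammaincc(a, x) = Q(a, x)

def coherent_P_F(g, alpha, N):
    a2 = abs(alpha) ** 2
    Pc = gammainc(N + 1, a2) \
         + g ** (-2 * N) * np.exp((g ** 2 - 1) * a2) * gammaincc(N + 1, g ** 2 * a2)
    amp = g ** (-N) * np.exp(g ** 2 * a2) * gammaincc(N + 1, g ** 2 * a2) \
          + np.exp(g * a2) * gammainc(N + 1, g * a2)
    return Pc, np.exp(-(1 + g ** 2) * a2) * amp ** 2 / Pc

def coherent_P_F_direct(g, alpha, N, nmax=60):
    a2 = abs(alpha) ** 2
    n = np.arange(nmax)
    w = np.exp(-a2) * a2 ** n / factorial(n)      # Poisson weights |<n|alpha>|^2
    low, high = n <= N, n > N
    Pc = g ** (-2 * N) * np.sum(g ** (2 * n[low]) * w[low]) + np.sum(w[high])
    amp = g ** (-N) * np.sum(g ** (2 * n[low]) * w[low]) + np.sum(g ** n[high] * w[high])
    return Pc, np.exp((1 - g ** 2) * a2) * amp ** 2 / Pc

for g in (1.0, 1.5, 2.0, 3.0):
    assert np.allclose(coherent_P_F(g, 0.8, 2), coherent_P_F_direct(g, 0.8, 2))
\end{verbatim}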
\begin{figure}
\caption{Probability of success and fidelity for an input coherent state
with amplitude $\alpha=0.8$ for $N=1,2,3$ and $4$. These curves are
calculated from equations~\ref{eq:coherentprobability} and~\ref{eq:coherentfidelity}.}
\label{fig:coherent0.8}
\end{figure}
The probability drops away from $1$ for small gains and the rate at which this
occurs increases as $N$ increases. The fidelity initially stays close to $1$
for small gains but eventually drops, and the gain at which this occurs
increases as $N$ increases. Whilst these properties are evident in the
figure, they are general features given that $\alpha$ is fixed.
Low fidelity operation is not of great interest for building a device which
performs linear amplification. Therefore we will set a bound on performance
that is deemed acceptable. Quantitatively we will require a minimum fidelity
$\mathcal{F} \geq 0.99$. The fidelity will increase towards $1$ as $N$
increases hence in any particular situation we can choose an $N$ to achieve
this fidelity requirement. Figure~\ref{fig:GraphsCoherent} shows the effect
of enforcing this minimum acceptable fidelity.
\begin{figure}
\caption{Probability of success for coherent state inputs with amplitude
$\alpha=0.1,0.3,0.8$ for gains between $g=1$ and $4$. Cut-off $N$ is
chosen to ensure an output fidelity more than $0.99$. Discontinuous jumps
occur when the fidelity bound is reached and the value of $N$ is
incremented. The corresponding values for $N$ are shown in the lower
plot.}
\label{fig:GraphsCoherent}
\end{figure}
The most notable effect that can be seen is the discontinuous jumps in the
probability of success. A jump occurs when the cut-off $N$ is incremented to
enforce the minimum fidelity. This means that the probability of success is
made up of pieces from the probabilities like what is shown in
figure~\ref{fig:coherent0.8} for $\alpha=0.8$. Also of note, is that for low
amplitude inputs (here $\alpha=0.1$) then choosing $N=1$ provides an
acceptable reproduction of linear amplification over a wide range of gain
(here $1 \leq g \leq 3$).
\section{EPR state inputs}
An important application of this type of amplification is distilling
continuous variable entanglement~\cite{ref:Xiang2010,ref:Blandino2012}. The
action of the amplifier is easiest to calculate for an ideal
Einstein-Podolsky-Rosen (EPR) state
\begin{equation}
\label{eq:EPRState}
\ket{EPR} = \sqrt{1-\chi^{2}} \sum_{n=0}^\infty \chi^n \ket{n, n},
\end{equation}
where the parameter $0\leq\chi<1$ is representative of the strength of the
continuous variable entanglement. The ideal amplification of this state is
then
\begin{equation}
g^{a^\dagger a}\ket{EPR} \propto \sum_{n=0}^\infty (g \chi)^n \ket{n,n}.
\end{equation}
The action of the amplifier preserves the form of the EPR state but increases
the entanglement. Note that this places an upper bound on $g$. For if $g >
1/\chi$ then the coefficients in the summation diverge. What this means is
that when an implementation chooses an $N$ cut-off, the output state does not
converge towards a particular state in the limit as $N \rightarrow \infty$.
This phenomenon is also found when applying ideal amplification to a
distribution of coherent states which forms a mixed state~\cite{ref:Walk2013}.
The EPR state can be generalised to include losses. Here we will concentrate
on the case where only one of the EPR modes undergoes loss of amplitude
$\eta$. The state from this is a three mode state
\begin{equation}
\label{eq:EPRloss}
\frac{\ket{EPR_l}}{\sqrt{1-\chi^{2}}} = \sum_{n=0}^{\infty} \sum_{t=0}^{n} \chi^{n}
\sqrt{\binom{n}{t} \eta^{t} \left(1-\eta\right)^{n-t}} \ket{n, t, n-t}
\end{equation}
where the third mode represents the loss mode which is assumed to be
inaccessible to any experiment.
\begin{figure}
\caption{\small{The state generated by an ideal noiseless linear amplifier on
a single sided lossy EPR state is another single sided lossy EPR state but
with different variables for the strength of the squeezing and loss. The
parameters of the state after the amplification $\chi^\prime$ and
$\eta^\prime$ are related to the input state parameters $\chi$ and $\eta$
and the gain of the amplification $g$
(equations~\ref{eq:ChiRelation}--\ref{eq:AuxRelation}).}}
\label{fig:EPREquiv}
\end{figure}
As in the case of the pure EPR state, the lossy EPR state under ideal
amplification is another lossy EPR state but with different parameters, see
figure~\ref{fig:EPREquiv}. Applying the ideal amplification to the second
mode in equation~\ref{eq:EPRloss} introduces a $g^t$ into the coefficients.
Then equating this to another lossy EPR state characterised by squeezing
$\chi^\prime$ and transmission $\eta^\prime$ gives the relations
\begin{equation}
\chi^n g^t \sqrt{\eta}^t \left(\sqrt{1-\eta}\right)^{n-t} =
{\chi^\prime}^n \sqrt{\eta^\prime}^t \left(\sqrt{1-\eta^\prime}\right)^{n-t}
\end{equation}
which must hold true for all integers $n \geq 0$ and $0\leq t \leq n$. Two
separate equations can be obtained from this,
\begin{equation}
\chi \sqrt{1-\eta} = \chi^\prime \sqrt{1-\eta^\prime}
\end{equation}
\begin{equation}
\chi g \sqrt{\eta} = \chi^\prime \sqrt{\eta^\prime},
\end{equation}
which can be inverted to give
\begin{equation}
\label{eq:ChiRelation}
\chi^\prime = f \chi,
\end{equation}
\begin{equation}
\label{eq:EtaRelation}
\eta^\prime = \frac{g^2}{f^2} \eta,
\end{equation}
\begin{equation}
\label{eq:AuxRelation}
f = \sqrt{1-\eta + \eta g^2}.
\end{equation}
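A quick numerical check of these relations (Python, with arbitrarily chosen parameter values) compares the coefficients of the two lossy EPR states term by term:
\begin{verbatim}
import numpy as np
from math import comb

chi, eta, g = 0.4, 0.25, 2.5
f = np.sqrt(1.0 - eta + eta * g ** 2)
chi_p, eta_p = f * chi, g ** 2 * eta / f ** 2     # chi' and eta' from above

for n in range(6):
    for t in range(n + 1):
        lhs = chi ** n * g ** t * np.sqrt(comb(n, t) * eta ** t * (1 - eta) ** (n - t))
        rhs = chi_p ** n * np.sqrt(comb(n, t) * eta_p ** t * (1 - eta_p) ** (n - t))
        assert np.isclose(lhs, rhs)
\end{verbatim}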
The possibility of non-convergence of the output state, just as seen for pure
EPR inputs, is present here as well. Convergence will be achieved provided
$\chi^\prime < 1$.
We will consider $\eta$ to be a fixed value and choose $\chi^\prime$ to be
fixed, in the sense that some target squeezing strength is desired. In this
way we can avoid choosing gains for which the output is not convergent.
We will focus here on the ability of the state to demonstrate the EPR
paradox~\cite{reid1998,ou1992}. This is achieved by EPR criterion
$\varepsilon_{EPR} < 1$ where
\begin{equation}
\varepsilon_{EPR} = V_{B|A}^+ V_{B|A}^-
\end{equation}
and $V_{B|A}^\pm$ is the conditional variance of the $B$ mode on $A$ and the
superscript represents the quadrature in which the variance is calculated.
The conditional variance is defined as
\begin{equation}
V_{B|A}^\pm = \min_{0 \leq g \leq 1} \left< (X_B^\pm \mp g X_A^\pm)^2 \right>,
\end{equation}
and for the EPR state with one sided loss the optimisation gives~\cite{bernu}
\begin{equation}
V^+_{B|A} = V^-_{B|A} = 1 - \frac{2 \chi^2 \eta}{1+\chi^2},
\end{equation}
and hence the EPR criterion in this case is
\begin{equation}
\label{eq:EPRCondition}
\varepsilon_{EPR} = \left(1-\frac{2\chi^2 \eta}{1+\chi^2}\right)^{2}.
\end{equation}
When the amplifier succeeds, both the effective squeezing and transmission are
greater than their initial counterparts. The amplifier has a purifying action
on this state. This means that it is possible to reach a lower EPR
criterion than would otherwise be possible.
\begin{figure}
\caption{The EPR criterion as a function of gain with a target output
squeezing of $\chi^{'} = 0.5$ and a channel transmission of $\eta = 0.25$.}
\label{fig:EPRIdeal}
\end{figure}
Figure~\ref{fig:EPRIdeal} shows the EPR criterion for an output squeezing of
$\chi^{'} = 0.5$ with a channel transmission of $\eta = 0.25$. The lowest EPR
criterion possible without amplification given the channel loss (i.e. $\chi
\rightarrow 1$) is reached by the amplified lossy EPR state when $g\approx
2.5$.
The state conditional on achieving success is
\begin{widetext}
\begin{equation}
\label{eq:EPRsuccess}
\hat{M}_S \ket{EPR_l} = \sqrt{1-\chi^2} \sum_{t=0}^\infty \sum_{n=t}^\infty
\mathrm{min}(g^{t-N},1) \chi^n
\sqrt{\binom{n}{t} \eta^t (1-\eta)^{n-t}} \ket{n,t,n-t}.
\end{equation}
The probability of success for our model amplifier on this type of input state
can be simply computed as $P_{EPR} = \bra{EPR_l}\hat{M}_{S}^{\dagger}
\hat{M}_{S} \ket{EPR_l}$ just as before
\begin{equation}
\label{eq:ProbabilityEPR}
P_{EPR} = (1-\chi^2)\left(
\frac{g^{-2N}}{1-\chi^2(1+(g^2-1)\eta)} +
\sum_{n=N+1}^\infty \chi^{2n}
\sum_{t=N+1}^n \binom{n}{t} \left(1-g^{2(t-N)}\right) \eta^t (1-\eta)^{n-t}
\right).
\end{equation}
A sum can be removed from this equation by using the relationship
\begin{equation}
\sum_{t=N+1}^n \binom{n}{t} a^t b^{n-t} =
(a+b)^n I_\frac{a}{a+b}(N+1,n-N),
\end{equation}
where $I_x(a,b)$ is the regularised incomplete beta function~\cite{betafunc},
giving \begin{equation}
P_{EPR} = (1-\chi^2)\left(
\frac{g^{-2N}}{1-{\chi^\prime}^2} +
\sum_{n=N+1}^\infty \left(
\chi^{2n} I_\eta(N+1,n-N)
- g^{-2N} {\chi^\prime}^{2n} I_{\eta^\prime}(N+1,n-N)
\right)
\right).
\end{equation}
Computing the fidelity is more difficult because when the loss mode is traced
out the resulting state is mixed. We can calculate a lower bound on the
fidelity by computing the fidelity of the amplified state compared to the
purified lossy EPR state with squeezing $\chi^\prime$ and loss $\eta^\prime$,
i.e. $\mathcal{F}_{EPR} = P_{EPR}^{-1} \left|\bra{EPR_l^{'}}\hat{M}_{S}
\ket{EPR_l}\right|^{2}$
\begin{equation}
\frac{\sqrt{\mathcal{F}_{EPR} P_{EPR}}}{\sqrt{(1-\chi^2)(1-\chi^{\prime 2})}} =
\frac{g^{-N}}{1-(g\sqrt{\eta\eta^\prime}+\sqrt{(1-\eta)(1-\eta^\prime)})\chi \chi^\prime}
+ \sum_{n=N+1}^\infty (\chi \chi^\prime)^n \sum_{t=N+1}^n
\binom{n}{t} (1 - g^{t-N}) \sqrt{\eta\eta^\prime}^t
\sqrt{(1-\eta)(1-\eta^\prime)}^{n-t}
\end{equation}
Introducing the quantities
\begin{equation}
\eta_1 = \sqrt{\eta\eta^\prime} + \sqrt{(1-\eta)(1-\eta^\prime)}
= \frac{1-\eta+g\eta}{f}
\end{equation}
\begin{equation}
\eta_2 = g\sqrt{\eta\eta^\prime} + \sqrt{(1-\eta)(1-\eta^\prime)} = f
\end{equation}
the previous expression can be written more compactly as
\begin{equation}
\frac{\sqrt{\mathcal{F}_{EPR} P_{EPR}}}{\sqrt{(1-\chi^2)(1-\chi^{\prime 2})}} =
\frac{g^{-N}}{1-\eta_2 \chi \chi^\prime}
+ \sum_{n=N+1}^\infty (\chi \chi^\prime)^n \left(
\eta_1^n I_{\sqrt{\eta\eta^\prime}/\eta_1} (N+1,n-N)
- g^{-N} \eta_2^n I_{g\sqrt{\eta\eta^\prime}/\eta_2} (N+1,n-N)
\right)
\end{equation}
\end{widetext}
where $f$ is defined in Equation~\ref{eq:AuxRelation}.
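As a sanity check on the binomial-tail identity used above to collapse the inner sums, the following short sketch (Python/SciPy, where \texttt{betainc}$(a,b,x)$ is the regularised incomplete beta function $I_x(a,b)$) compares both sides for randomly drawn parameters:
\begin{verbatim}
import numpy as np
from math import comb
from scipy.special import betainc

rng = np.random.default_rng(1)
for _ in range(200):
    n, N = int(rng.integers(1, 15)), int(rng.integers(0, 5))
    a, b = rng.uniform(0.05, 2.0, size=2)
    lhs = sum(comb(n, t) * a ** t * b ** (n - t) for t in range(N + 1, n + 1))
    rhs = (a + b) ** n * betainc(N + 1, n - N, a / (a + b)) if n > N else 0.0
    assert np.isclose(lhs, rhs)
\end{verbatim}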
\begin{figure*}
\caption{Probability and fidelity for the EPR state characterised by an
effective squeezing of $\chi' = 0.5$ and a transmission of $\eta =
0.3$ undergoing amplification with truncation numbers $N = 1$ to $N =
5$.}
\label{fig:EPRRealNLA}
\end{figure*}
The probability and fidelity for $N = 1$ to $5$ with $\chi^\prime = 0.5$ and
$\eta=0.3$ are shown in figure \ref{fig:EPRRealNLA}. The probabilities drop
exponentially with gain, but the fidelity drops slowly. This is because as
the gain increases a lower $\chi$ is used to ensure that $\chi^\prime$ stays
fixed. The asymptotic behaviour of these functions as $g \rightarrow \infty$
is
\begin{equation}
P_{EPR} = g^{-2N} \left(
\frac{1-{\chi^\prime}^{2N+2}}{1-{\chi^\prime}^2}
\right) + O (g^{-2N-1}),
\end{equation}
\begin{equation}
\mathcal{F}_{EPR} P_{EPR} = g^{-2N}
\frac{(1-{\chi^\prime}^{2N+2})^2}{1-{\chi^\prime}^2} + O(g^{-2N-1}).
\end{equation}
Hence we find that the fidelity asymptotically approaches a constant value
\begin{equation}
\mathcal{F}_{EPR} = 1-{\chi^\prime}^{2N+2} + O(g^{-1}).
\end{equation}
The fidelity is always $1$ at $g = 1$ and, for larger $g$, approaches
this constant value from above. Therefore this number constitutes a lower
bound on the fidelity.
As was indicated before in the analysis for coherent state inputs, the low
fidelity operation is not usually of interest. When designing an experiment
there is usually some minimum fidelity and probability of success that is
deemed acceptable. The order of magnitude for these is dependent on the
experimental conditions. We will now consider these factors to further analyse
the action of this model amplifier.
We can use this expression for the limiting case of the fidelity to explicitly
compute the cut-off $N$ required under restrictions on the fidelity and
entanglement. A fidelity minimum $\mathcal{F}_{min} < 1$ is chosen and at all
times the performance of the amplification must remain above this number. If,
in addition, the squeezing after successful amplification is not allowed to
exceed some $\chi^\prime < 1$, then the asymptotic lower bound on the fidelity
derived above guarantees this for every gain provided
\begin{equation}
N \geq
\frac{\log{\left(1-\mathcal{F}_{min}\right)}}{2 \log{\chi^\prime}} - 1.
\end{equation}
Note that this requirement is independent of the probability of success.
To consider both a probability and fidelity bound we consider a numerical
optimisation of the EPR criterion for an amplified EPR state which results a
particular output squeezing $\chi^\prime$ which has undergone one sided loss
$1-\eta$. The optimisation we will consider here enforces a fidelity greater
than $0.99$ and the probability of success greater than $0.1$, $0.01$,
or $0.001$. Because of the monotonic nature of the fidelity and probability
under such conditions, we find that this optimisation always occurs on the
boundary of either the probability constraint or the fidelity constraint.
Figure~\ref{fig:BestEPRCond} shows the results of this optimisation when
$\chi^\prime = 0.5$ and $0.8$ as a function of loss.
\begin{figure*}
\caption{Plots of the lowest possible EPR criterion $\varepsilon_{EPR}$,
together with the cut-off $N$, gain $g$, lower bound on the fidelity and
probability of success which achieve it, as a function of loss for
$\chi^\prime = 0.5$ and $0.8$.}
\label{fig:BestEPRCond}
\end{figure*}
The results of this optimisation are best understood by starting at the case
where $\eta=1$. For this case we want to find if we are at the
boundary of the fidelity or probability constraints whilst ensuring that both
constraints are satisfied. Also, the largest possible gain which achieves the
fidelity constraint will occur at the lowest value for $N$. Therefore we seek
the gain and lowest $N$ such that our fidelity and probability constraints are
satisfied. As the loss is increased, less signal is amplified and the fidelity
and probability increase. Therefore a larger gain can be chosen which still
satisfies the constraints. This continues until such point as the input signal
is weak enough so that the next lowest $N$ satisfies the constraints. This
results in a discontinuous jump in the output. Also, if the probability was
the saturated constraint, when $N$ is decremented this may change to the
fidelity constraint being the one that is saturated. As loss is increased
further there will be a point where the saturation of these constraints will
swap. This results in sharp corners appearing in the maximised curves for the
gain and EPR criterion.
The figures also show a comparison of this best EPR criterion to particular
situations not involving any amplification process. The amplification process
always produces a lower EPR criterion when compared with doing no
amplification. However, it is probably of more interest to compare the
situation to that of assuming the entanglement source could in principle
produce a maximally entangled EPR state (i.e. $\chi = 1$). Because of the
loss, the EPR criterion for this limiting case is not zero. Our amplification
model can succeed in producing a lower EPR criterion than that of the maximally
entangled source. As shown in Figure~\ref{fig:BestEPRCond} this improvement
occurs in high loss situations. The parameters for which this improvement
occurs will depend on the value of $\chi^\prime$ chosen. But as shown in
figure~\ref{fig:BestEPRCond} the range of losses for which this occurs can
cover a significant range.
\section{Conclusion}
This paper has demonstrated a new model which could be used as a noiseless
phase insensitive linear amplifier. We have presented a unitary for the
non-conditional evolution of a coupled harmonic oscillator system and a
heralding qubit. This evolution can then be used as a probabilistic amplifier
by measuring the heralding qubit after the unitary evolution. The evolution is
not that of a linear optical transformation, but does achieve the highest
theoretically possible probability of success. The action of our noiseless
amplification model on a coherent state and an EPR state was computed. For an
EPR state undergoing one sided loss, we found that for sufficiently high loss
it is possible for the amplifier to achieve an EPR criterion lower than that
possible using an unamplified infinite squeezed source passing through the same
loss. By choosing our parameters such that we target a particular level of
two-mode squeezing when the amplification succeeds, we have shown that, for the
case of single sided loss, the fidelity of the amplification has a lower bound.
This model and the results we have computed here may be used as a guide to
future experiments which wish to operate near the optimal probability of
success.
\end{document}
|
\begin{document}
\title[Mixed boundary conditions as limits of dissipative boundary conditions]{Mixed boundary conditions as limits of dissipative boundary conditions in dynamic perfect plasticity}
\author[J.-F. Babadjian]{Jean-Fran\c cois Babadjian}
\address[Jean-Fran\c cois Babadjian]{Universit\'e Paris--Saclay, Laboratoire de Math\'ematiques d'Orsay, 91405, Orsay, France}
\email{[email protected]}
\author[R. Llerena]{Randy Llerena}
\address[Randy Llerena]{Research Platform MMM ``Mathematics-Magnetism-Materials" - Fak. Mathematik Univ. Wien, A1090 Vienna}
\email{[email protected]}
\date{\today}
\begin{abstract}
This paper addresses the well posedness of a dynamical model of perfect plasticity with mixed boundary conditions for general closed and convex elasticity sets. The proof relies on an asymptotic analysis of the solution of a perfect plasticity model with relaxed dissipative boundary conditions obtained in \cite{BC}. One of the main issues consists in extending the measure theoretic duality pairing between stresses and plastic strains, as well as a convexity inequality to a more general context where deviatoric stresses are not necessarily bounded. Complete answers are given in the pure Dirichlet and pure Neumann cases. For general mixed boundary conditions, partial answers are given in dimension $2$ and $3$ under additional geometric hypothesis on the elasticity set and the reference configuration.
\end{abstract}
\subjclass[2010]{74C10, 35Q74, 49J45, 49Q20, 35F31}
\keywords{Elasto-plasticity, Boundary conditions, Convex analysis, Functionals of measures, Functions of bounded deformation, Calculus of variations, Dynamic evolution}
\maketitle
\section{Introduction}
Elasto-plasticity is a classical theory of continuum mechanics \cite{H,Lj} that predicts the appearance of permanent deformations in materials when an internal critical stress is reached. At the atomistic level, these plastic deformations occur when the crystal lattice of the atoms is misaligned due to the accumulation of slip defects, called dislocations. These dislocations determine the change of behavior of a body from an elastic and reversible state to a plastic and irreversible one.
At the continuum level, and in the context of small deformations, the theory involves the {\it displacement field} $u:\Omega \times (0,T) \to \mathbb{R}^n$ and the {\it Cauchy stress tensor} $\sigma: \Omega \times (0,T) \to \mathbb{M}^n_{\rm sym}$, both defined on the reference configuration $\Omega$ of the body, a bounded open subset of $\mathbb{R}^n$ ($n=2,3$). They first satisfy the {\it equation of motion}
\begin{equation}
\ddot{u} - {\rm div} \sigma = f \quad \text{ in }\Omega \times (0,T), \label{eq:eqmotion}
\end{equation}
for some (given) external body load $f : \Omega \times (0,T) \to \mathbb{R}^n$. In the previous expression, and in the sequel, the dot stands for the partial derivative with respect to time. One particular feature of perfect plasticity is that the stress tensor is constrained to take its values into a fixed closed and convex set $\mathbf K$ of the space $\mathbb{M}^n_{\rm sym}$ of symmetric $n \times n$ matrices, also called {\it elasticity set}:
\begin{equation}
\label{eq:stressconstraint}
\sigma \in {\bf K}.
\end{equation}
In classical elasticity, the linearized strain is purely elastic and it is represented by the symmetric part of the gradient of displacement, i.e. $Eu:=(Du+Du^T)/2$. In perfect elasto-plasticity, the elastic strain $e:\Omega \times (0,T) \to \mathbb{M}^n_{\rm sym}$ only represents a part of the linearized strain $Eu$. It stands for the reversible part of the total deformation and it is related to $\sigma$ by means of {\it Hooke's law}, which we assume to be isotropic:
\begin{equation}\label{eq:Hooke}
\sigma=\mathbf A e=\lambda (\tr e) {\rm Id} + 2\mu e,
\end{equation}
for some constants $(\lambda,\mu) \in \mathbb{R}^2$, called Lam\'e coefficients, which satisfy the ellipticity conditions $\mu>0$ and $n\lambda+2\mu>0$. The remaining part of the strain,
\begin{equation}\label{eq:additive}
p:=Eu-e
\end{equation}
stands for the plastic strain leading to irreversible deformations. It is a new unknown of the problem whose evolution is described by means of a {\it flow rule}. Assuming that $\mathbf K$ has nonempty interior, it stipulates that if $\sigma$ belongs to the interior of $\mathbf K$, then the material behaves elastically and no additional inelastic strains are created, i.e. $\dot p=0$. On the other hand, if $\sigma$ reaches the boundary of $\mathbf K$, then $\dot p$ may develop in such a way that a non–trivial permanent plastic strain $p$ may remain after unloading. The evolution of $p$ is described by the {\it Prandtl-Reuss law}
$$\dot p \in N_{\mathbf K}(\sigma),$$
where $N_{\mathbf K}(\sigma)$ stands for the normal cone to $\mathbf K$ at $\sigma$, or equivalently, thanks to convex analysis, by {\it Hill’s principle of maximum plastic work}
\begin{equation}
H(\dot p) = \sigma : \dot p \label{eq:hillslaw},
\end{equation}
where $H(q) := \sup_{\tau \in {\bf K}} \tau : q$ is the support function of ${\bf K}$. The system \eqref{eq:eqmotion}--\eqref{eq:hillslaw} has to be supplemented by initial conditions
\begin{equation}\label{eq:initialcondition}
(u(0), \dot{u}(0), e(0), p(0)) = (u_0, v_0,e_0,p_0)
\end{equation}
as well as suitable boundary conditions to be discussed later, and which will be one of the main focus of this work.
For most metals and alloys, standard models of perfect plasticity involve elasticity sets $\mathbf K$ which are invariant in the direction of hydrostatic matrices (multiples of the identity) and bounded in the direction of deviatoric (trace free) ones. This is for example the case of the Von Mises and Tresca models (see e.g. \cite{A1,AG,Tem85} in the static case, \cite{A2,FG,Suquet,DMDSM} in the quasi-static case and \cite{AL,DS} in the dynamic one). In other situations, like in the context of soil mechanics, it is of importance to consider elasticity sets $\mathbf K$ that are not necessarily invariant with respect to hydrostatic matrices. So-called Drucker-Prager or Mohr-Coulomb models fall within this framework (see \cite{BC,BMo,BMo2}). In this paper, we treat, as far as possible, the case of a general elasticity set ${\bf K}$.
Let us now discuss the boundary conditions. Having in mind that the system of dynamic elasto-plasticity described so far has a hyperbolic nature, one has to consider boundary conditions compatible with this hyperbolic structure, in particular, with the finite speed propagation of the initial data along the characteristic lines. A general approach to this type of initial--boundary value constrained hyperbolic systems has been studied in \cite{DMS} (see also \cite{DLS}) where a class of so-called admissible dissipative boundary conditions has been introduced. This problem has subsequently been specified to the case of plasticity, first in \cite{BM} for a simplified scalar model, and then in \cite{BC} for the general vectorial model as described before. In this context, all admissible (homogeneous) {\it dissipative boundary conditions} take the form (see \cite[Section 3]{BC})
\begin{equation}
\label{eq:dissipativebdr}
S \dot u + \sigma \nu = 0 \quad \text{ on }\partial\Omega \times (0,T),
\end{equation}
where $\nu$ denotes the outer unit normal to $\Omega$, and $S:\partial \Omega \to \mathbb{M}^n_{\rm sym}$ is a spatially dependent positive definite boundary matrix. The well posedness of the initial--boundary value system \eqref{eq:eqmotion}--\eqref{eq:dissipativebdr} has been carried out in \cite{BC}, where existence and uniqueness of two equivalent notions of relaxed solutions (variational and entropic solutions) have been established. The relaxation phenomenon is a simple consequence of the fact that, formally, the stress constraint \eqref{eq:stressconstraint} might not be compatible with the boundary condition \eqref{eq:dissipativebdr}. Indeed, if $\sigma(t) \in \mathbf K$ in $\Omega$, we would expect that $\sigma(t)\nu \in \mathbf K\nu$ on $\partial\Omega$, while $\sigma(t)\nu=-S \dot u(t)$ is free on the boundary. Thus, the boundary condition and the stress constraint have to accommodate each other and the dissipative boundary condition \eqref{eq:dissipativebdr} has to be relaxed into
\begin{equation}
\label{eq:relaxedBC}
P_{-\mathbf K\nu}(S \dot u)+\sigma\nu=0 \quad \text{ on }\partial\Omega \times (0,T),
\end{equation}
where, for $x \in \partial\Omega$, $P_{-\mathbf K\nu(x)}$ stands for the orthogonal projection in $\mathbb{R}^n$ onto the convex set $-\mathbf K \nu(x)$ with respect to a suitable scalar product. This is indeed a relaxation in the sense of the Calculus of Variations, because the energy balance involves a term of the form
$$\int_\Omega H(\dot p)\, dx + \frac12 \int_{\partial\Omega} S \dot u\cdot \dot u\, d\mathcal H^{n-1} + \frac12 \int_{\partial\Omega}S^{-1}(\sigma\nu)\cdot (\sigma\nu)\, d\mathcal H^{n-1}.$$
The previous energy functional turns out not to be lower semicontinuous with respect to weak convergence in the energy space, and its relaxation with respect to this topology is explicitly given by
$$\int_\Omega H(\dot p)\, dx + \int_{\partial\Omega} \psi(x,\dot u)\, d\mathcal H^{n-1} + \frac12 \int_{\partial\Omega}S^{-1}(\sigma\nu)\cdot (\sigma\nu)\, d\mathcal H^{n-1},$$
where $\psi(x,\cdot)$ is the inf-convolution of the functions $z \mapsto \frac12 S(x)z \cdot z$ and $z \mapsto H(-z\odot \nu(x))$. The connexion between the relaxed energy and the modified boundary condition \eqref{eq:relaxedBC} comes from a first order minimality condition and the following formula (see \cite[Section 4]{BC})
$$D_z\psi(x,\dot u(t,x))= P_{-\mathbf K\nu(x)}(S(x)\dot u(t,x)).$$
Unfortunately, Dirichlet, Neumann and mixed boundary conditions are not admissible because the matrix $S$ is allowed neither to vanish nor to take the value $\infty$. It is the main focus of the present work to show that these types of natural boundary conditions can actually be obtained by means of an asymptotic analysis letting $S \to \infty$ on a portion of the boundary where we want to recover a Dirichlet condition, and letting $S \to 0$ on the complementary part where one wishes to formulate a Neumann condition. This type of analysis has already been performed in \cite{BM} in the simplified case of antiplane scalar plasticity, where pure Dirichlet and pure Neumann boundary conditions have been derived. We extend here this analysis to the general vectorial case, where additional issues arise, and to the case of mixed boundary conditions.
To be more precise, in the spirit of \cite{FG,KT, DMDSM}, we partition $\partial \Omega$ into the disjoint union of $\Gamma_D, \Gamma_N$ and $\Sigma$, where $\Gamma_D$ and $\Gamma_N$ stand for the Dirichlet and Neumann parts of the boundary, respectively, and $\Sigma$ is the interface between $\Gamma_D$ and $\Gamma_N$ which is $\mathcal H^{n-1}$-negligible. We consider a boundary matrix of the form
\begin{equation}\label{eq:Slambda}
S_\lambda(x) := \left( \lambda \textbf{1}_{\Gamma_D} + \frac{1}{\lambda} \textbf{1}_{\Gamma_N} \right)\textrm{Id}
\end{equation}
for some parameter $\lambda >0$ which will be sent to $\infty$. Denoting by $(u_\lambda,e_\lambda,p_\lambda,\sigma_\lambda)$ the unique weak solutions of the system \eqref{eq:eqmotion}--\eqref{eq:initialcondition} with the relaxed dissipative boundary condition \eqref{eq:relaxedBC} associated to the boundary matrix $S_\lambda$, using the results of \cite{BC}, we easily derive bounds in the energy space for this quadruple, which allow one to get weak limits $(u,e,p,\sigma)$ and pass to the limit into the equation of motion \eqref{eq:eqmotion}, the stress constraint \eqref{eq:stressconstraint}, Hooke's law \eqref{eq:Hooke}, the additive decomposition \eqref{eq:additive} and the initial condition \eqref{eq:initialcondition}. This is the object of Lemma \ref{lem:compactness}. As usual in plasticity, the main difficulty consists in passing to the limit in the flow rule expressed by \eqref{eq:hillslaw} and in the relaxed boundary condition \eqref{eq:relaxedBC}. In accordance with \cite{BC,BM,BMo}, the idea consists in taking the limit as $\lambda\to\infty$ into the energy balance. The main difficulty is concerned with the term
$$\int_\Omega H(\dot p_\lambda)\, dx + \int_{\partial\Omega} \psi_\lambda(x,\dot u_\lambda)\, d\mathcal H^{n-1} + \frac12 \int_{\partial\Omega}S_\lambda^{-1}(\sigma_\lambda\nu)\cdot (\sigma_\lambda\nu)\, d\mathcal H^{n-1},$$
where
$$\psi_\lambda (x,z): = \inf_{w \in \mathbb{R}^n}{\left\{ \frac{1}{2} \left( \lambda {\bf 1}_{\Gamma_D} + \frac 1 \lambda {\bf 1}_{\Gamma_N} \right)|w|^2 + H((w- z) \odot \nu (x) ) \right\} }.$$
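At a purely formal, pointwise level, the two asymptotic regimes can already be read off from this definition (assuming for instance that $0\in\mathbf K$, so that $H\geq 0$): on $\Gamma_N$ the weight in front of $|w|^2$ is $\frac{1}{2\lambda}$ and the choice $w=z$ shows that $\psi_\lambda(x,z)\to 0$, while on $\Gamma_D$ the weight $\frac\lambda2$ blows up and forces the minimizing $w$ towards $0$, so that $\psi_\lambda(x,z)\to H(-z\odot\nu(x))$ whenever the latter quantity is finite. In other words, formally,
$$\psi_\lambda(x,z) \longrightarrow \begin{cases} H(-z\odot\nu(x)) & \text{ if } x \in \Gamma_D,\\[1mm] 0 & \text{ if } x \in \Gamma_N,\end{cases}\qquad \text{ as } \lambda \to \infty,$$
which is consistent with the boundary term appearing in the lower bound stated below.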
A uniform bound on the previous energy easily shows that
$$\int_{\Gamma_N} |\sigma_\lambda\nu|^2\, d\mathcal H^{n-1} \leq \frac{C}{\lambda} \to 0, \quad \text{ as }\lambda \to \infty,$$
which leads to the Neumann boundary condition $\sigma\nu=0$ on $\Gamma_N$. Obtaining the Dirichlet boundary condition on $\Gamma_D$ is more involved because, as usual in perfect plasticity, concentration phenomena might occur. A convex analysis argument based on the Moreau-Yosida approximation of $H$ yields the following lower bound on the energy (see Lemma \ref{prop:relaxationdirichletpart})
$$\int_{\Omega}H( \dot p)\, dx +\int_{{\Gamma_D}}H(- \dot u \odot \nu)\, d\mathcal H^{n-1}\le \liminf_{\lambda \rightarrow \infty}{\left( \int_\Omega H( \dot p_\lambda)\, dx + \int_{\partial \Omega}{\psi_\lambda(x, \dot u_\lambda)\, d \mathcal{H}^{n-1}} \right)}.$$
Proving that this lower bound is also an upper bound is formally a consequence of the convexity inequality
$$H(\dot p) \geq \sigma:\dot p$$
(because $\sigma \in \mathbf K$), and integrations by parts in space and time. Unfortunately, this formal convexity inequality is very difficult to justify in the context of perfect plasticity because the Cauchy stress $\sigma$ and the plastic strain rate $\dot p$ are not in duality. Indeed, the natural energy space gives $\sigma(t) \in H( {\rm div},\Omega)$ while $\dot p(t) \in \mathcal M(\Omega \cup \Gamma_D;\mathbb{M}^n_{\rm sym})$ since the support function $H$ grows linearly with respect to its argument. In particular, the plastic dissipation
$$\int_\Omega H(\dot p(t))\, dx$$
has to be understood as a convex function of a measure (see \cite{DT1,DT2,GS}). Whenever the quadruple $(u,e,p,\sigma)$ belongs to the energy space, it follows that $(\dot u(t),\dot e(t),\dot p(t))$ belongs to the space of all kinematically admissible triples
\begin{multline*}
\Big\{(v,\eta,q) \in [BD(\Omega) \cap L^2(\Omega;\mathbb{R}^n)] \times L^2(\Omega;\mathbb M^n_{\rm sym}) \times \mathcal M(\Omega \cup \Gamma_D ;\mathbb M^n_{\rm sym}) : \\
Ev=\eta+q \text{ in }\Omega, \quad q=-v\odot \nu\mathcal H^{n-1} \text{ on }\Gamma_D\Big\},
\end{multline*}
and $\sigma(t)$ belongs to the space of all statically and plastically admissible stresses
$$\{\tau \in H( {\rm div},\Omega): \; \tau\nu=0 \text{ on }\Gamma_N, \; \tau(x) \in \mathbf K \text{ a.e. in }\Omega\}.$$
In the spirit of \cite{FG,KT,DMDSM}, this allows one to consider a generalized stress/strain duality (see Definition \ref{definition:dualityMixed}) as the first order distribution $[\sigma(t) \colon \dot p(t)] \in \mathcal{D}' (\mathbb{R}^n)$, compactly supported in $\overline\Omega$, defined by
\begin{equation}
\label{eq:introstrain}
\langle [\sigma (t)\colon \dot p(t)], \varphi \rangle = -\int_\Omega \varphi\, \sigma(t) : \dot e(t) \, dx- \int_\Omega \dot u(t) \cdot {\rm div} \sigma(t) \, \varphi \, dx - \int_\Omega \sigma(t) \colon \big(\dot u(t) \odot \nabla \varphi\big) \, dx
\end{equation}
for any $\varphi \in C^\infty_c (\mathbb{R}^n)$. The question now reduces to proving that
\begin{equation}
\label{eq:introconvexinea}
H(\dot p(t)) \ge [\sigma(t) \colon \dot p(t)] \quad \textrm{in } \mathcal M (\mathbb{R}^n),
\end{equation}
and this is the object of Section \ref{sec:duality}. In Proposition \ref{prop:an1} we show that this generalized convexity inequality is always satisfied in the pure Dirichlet ($\Gamma_D=\partial\Omega$) and pure Neumann ($\Gamma_N=\partial\Omega$) cases. In the case of mixed boundary conditions, there might be some concentration effects at the interface $\Sigma$ between the Dirichlet and the Neumann parts, and the previous convexity inequality is shown to hold only in $\mathcal M (\mathbb{R}^n \setminus \Sigma)$ in Proposition \ref{prop:convexinequalityMixed}. Unfortunately, this weaker result is not enough to conclude the energy upper bound because, although $\Sigma$ is $\mathcal H^{n-1}$-negligible, some undesirable energy concentration might accumulate on that set. We further exhibit special cases in dimensions $n=2$ and $n=3$ which guarantee the validity of \eqref{eq:introconvexinea} also in the case of mixed boundary conditions (see Propositions \ref{prop:n=2} and \ref{prop:n=3}). In dimension $n=2$, it is enough to assume that $\Sigma$ is a finite set (as in \cite{FG}), while in dimension $n=3$, we suppose that the convex set $\mathbf K$ is invariant in the direction of hydrostatic matrices and bounded in the direction of deviatoric ones, and we make additional regularity assumptions on the reference configuration $\Omega$ (as in \cite{KT}).
To conclude this introduction, let us mention that our method only allows one to derive homogeneous mixed boundary conditions. Indeed, at a formal level, even starting from a nonhomogeneous dissipative boundary condition of the form $S\dot u+\sigma\nu=g$ on $\partial\Omega \times (0,T)$, for some nontrivial source term $g$ (or its relaxed counterpart $P_{-\mathbf K\nu}(S\dot u-g)+\sigma\nu=0$ on $\partial\Omega \times (0,T)$, given by an adaptation of \cite{BC}), we obtain an energy balance involving the following additional term
$$\int_0^T \int_{\partial \Omega}S^{-1} g \cdot g \, d \mathcal{H}^{n-1}\, dt .$$
Specializing the problem to a boundary matrix $S=S_\lambda$ of the form \eqref{eq:Slambda} and some $\lambda$-dependent source term $g_\lambda \in L^2 (\partial \Omega \times (0,T); \mathbb{R}^n)$, the previous discussion shows that a uniform bound on the solution $(u_\lambda,e_\lambda,p_\lambda,\sigma_\lambda)$ in the energy space would require that
$$\sup_{\lambda>0} \left\{\frac{1}{\lambda} \int_{\Gamma_D} |g_\lambda|^2\, d\mathcal H^{n-1} + \lambda \int_{\Gamma_N} |g_\lambda|^2\, d\mathcal H^{n-1} \right\}<\infty.$$
This would imply that
$$\sigma_\lambda\nu=g_\lambda-\lambda^{-1} \dot u_\lambda \to 0 \quad \text{ on }\Gamma_N \times (0,T)$$ in a weak sense as $\lambda \to \infty$ (because the trace of $\dot u_\lambda$ is bounded in $L^1(\partial\Omega \times (0,T);\mathbb{R}^n)$), leading to a homogeneous Neumann condition on $\Gamma_N$. Concerning the Dirichlet part, formally plugging this information back into the dissipative boundary condition restricted to $\Gamma_D$ would lead to
$$\dot u_\lambda = \lambda^{-1}g_\lambda-\lambda^{-1}\sigma_\lambda\nu \to 0 \quad \text{ in }\Gamma_D \times (0,T),$$
in some weak sense as $\lambda \to \infty$ (because $\sigma_\lambda\nu$ is bounded in $L^2(0,T;H^{-1/2}(\partial\Omega;\mathbb{R}^n))$), leading to a homogeneous Dirichlet boundary condition. Strictly speaking, one should rather consider the relaxed boundary condition, which would lead to a strain concentration on $\Gamma_D$ associated with a homogeneous Dirichlet boundary condition.
The paper is organized as follows. In Section 2, we introduce various notation and basic facts used throughout this paper. In Section 3, we discuss the notion of duality between plastic strains and Cauchy stresses, and we prove generalized convexity inequalities of the form \eqref{eq:introconvexinea} involving these two quantities, which are not in duality in the energy space. Finally, in Section 4, we state and prove our main result, Theorem \ref{thm:compactness}, about the convergence of the solutions obtained in \cite{BC} to the (unique) solution of a dynamical elasto-plastic model with homogeneous mixed boundary conditions.
\section{Notation and preliminaries}
\subsection{Linear algebra}
If $a$ and $b \in \mathbb{R}^n$, we write $a \cdot b:=\sum_{i=1}^n a_i b_i$ for the Euclidean scalar product, and we denote by $|a|:=\sqrt{a \cdot a}$ the corresponding norm.
We denote by $\mathbb M^n$ the set of $n \times n$ matrices and by $\mathbb M^{n}_{\rm sym}$ the space of symmetric $n \times n$ matrices. The set of all (deviatoric) trace free symmetric matrices will be denoted by $\mathbb M^{n}_{D}$. The space $\mathbb M^n$ is endowed with the Frobenius scalar product $A:B:=\tr(A^T B)$ and with the corresponding Frobenius norm $|A|:=\sqrt{A:A}$. If $a \in \mathbb{R}^n$ and $b \in \mathbb{R}^n$, we denote by $a \odot b:=(ab^T+b a^T)/2 \in \mathbb M^{n}_{\rm sym}$ their symmetric tensor product.
If $A \in \mathbb M^{n}_{\rm sym}$, there exists an orthogonal decomposition of $A$ with respect to the Frobenius scalar product as follows
$$ A = A_D + \frac{1}{n} (\tr A){\rm Id} ,$$
where $A_D \in \mathbb M^n_D$ stands for the deviatoric part of $A$.
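As a simple two-dimensional illustration, for $A=\begin{pmatrix} 2 & 1\\ 1 & 0\end{pmatrix}$ one has $\tr A=2$ and
$$A = \underbrace{\begin{pmatrix} 1 & 1\\ 1 & -1\end{pmatrix}}_{=A_D\,\in\,\mathbb M^2_D} + \underbrace{\begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}}_{=\frac{1}{2} (\tr A)\,{\rm Id}},$$
the two summands being orthogonal for the Frobenius scalar product since $B:{\rm Id}=\tr B=0$ for every $B\in\mathbb M^2_D$.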
\subsection{Measures}
The Lebesgue measure in $\mathbb{R}^n$ is denoted by $\mathcal L^n$, and the $(n-1)$-dimensional Hausdorff measure by $\mathcal H^{n-1}$. If $X \subset \mathbb{R}^n$ is a Borel set and $Y$ is a Euclidean space, we denote by $\mathcal M(X;Y)$ the space of $Y$-valued bounded Radon measures in $X$ endowed with the norm $\|\mu\|:=|\mu|(X)$, where $|\mu|$ is the variation of the measure $\mu$. If $Y=\mathbb{R}$ we simply write $\mathcal M(X)$ instead of $\mathcal M(X;\mathbb{R})$.
If the relative topology of $X$ is locally compact, by the Riesz representation theorem, $\mathcal M(X;Y)$ can be identified with the dual space of $C_0(X;Y)$, the space of continuous functions $\varphi:X \to Y$ such that $\{|\varphi|\geq \varepsilon\}$ is compact for every $\varepsilon>0$. The (vague) weak* topology of $\mathcal M(X;Y)$ is defined using this duality.
Let $\mu \in \mathcal M(X;Y)$ and $f:Y \to [0,+\infty]$ be a convex, positively one-homogeneous function. Using the theory of convex functions of measures developed in \cite{DT1,DT2, GS}, we introduce the nonnegative {Borel measure $f(\mu)$}, defined by
$$f(\mu)=f\left(\frac{d \mu}{d |\mu|}\right)|\mu|\,,$$
where $\frac{d \mu}{d |\mu|}$ stands for the Radon-Nikod\'ym derivative of $\mu$ with respect to $|\mu|$.
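As an elementary illustration, if $\mu=a\,\mathcal L^n\mres A+b\,\delta_{x_0}$ for some bounded Borel set $A\subset X$, some point $x_0\in X$ and some $a,b\in Y\setminus\{0\}$, then $|\mu|=|a|\,\mathcal L^n\mres A+|b|\,\delta_{x_0}$ and, by the positive one-homogeneity of $f$,
$$f(\mu)=f(a)\,\mathcal L^n\mres A+f(b)\,\delta_{x_0}.$$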
\subsection{Functional spaces}
We use standard notation for Lebesgue spaces ($L^p$) and Sobolev spaces ($W^{s,p}$ and $H^s=W^{s,2}$).
The space of functions of bounded deformation is defined by
$$BD(\Omega)=\{u \in L^1(\Omega;\mathbb{R}^n) : \; E u \in \mathcal M(\Omega;\mathbb M^n_{\rm sym})\}\,,$$
where $E u:=(Du+Du^T)/2$ stands for the distributional symmetric gradient of $u$. We recall (see \cite{Bab, Tem85}) that, if $\Omega$ has a Lipschitz boundary, every function $u \in BD(\Omega)$ admits a trace, still denoted by $u$, which belongs to $L^1(\partial\Omega;\mathbb{R}^n)$, and such that the integration by parts formula holds: for all $\varphi \in C^1(\overline \Omega;\mathbb M^n_{\rm sym})$,
$$\int_{\partial\Omega} u\cdot (\varphi\nu)\, d\mathcal H^{n-1}=\int_\Omega {\rm div} \varphi \cdot u\, dx + \int_\Omega \varphi:d E u\,.$$
Note that the trace operator is continuous with respect to the strong convergence of $BD(\Omega)$ but not with respect to the weak* convergence in $BD(\Omega)$.
Let us define
$$H({\rm div},\Omega)=\{\sigma \in L^2(\Omega;\mathbb M^n_{\rm sym}) :\; {\rm div} \sigma \in L^2(\Omega;\mathbb{R}^n)\}\,.$$
If $\Omega$ has Lipschitz boundary, for any $\sigma \in H(\mathrm{div}, \Omega)$ we can define the normal trace $\sigma\nu$ as an element of $H^{-\frac12}(\partial \Omega;\mathbb{R}^n)$ (cf. e.g.\ \cite[Theorem~1.2, Chapter~1]{Tem85}) by setting
\begin{equation}\label{2911181910}
\langle \sigma \nu, \psi \rangle_{H^{-\frac12}(\partial \Omega;\mathbb{R}^n),H^{\frac12}(\partial \Omega;\mathbb{R}^n)}:= \int_\Omega \psi \cdot {\rm div} \sigma \, dx + \int_\Omega \sigma \colon E \psi \, dx
\end{equation}
for every $\psi \in H^1(\Omega;\mathbb{R}^n)$.
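We simply record that, when $\sigma$ is smooth, say $\sigma \in C^1(\overline\Omega;\mathbb M^n_{\rm sym})$, the classical Green formula shows that this distributional normal trace coincides with the pointwise one, namely
$$\langle \sigma \nu, \psi \rangle_{H^{-\frac12}(\partial \Omega;\mathbb{R}^n),H^{\frac12}(\partial \Omega;\mathbb{R}^n)}= \int_{\partial\Omega} (\sigma\nu)\cdot \psi \, d\mathcal H^{n-1} \quad \text{for all } \psi \in H^1(\Omega;\mathbb{R}^n).$$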
\section{Duality between stress and plastic strain}\label{sec:duality}
In the spirit of \cite{KT,FG,BMo}, we define a generalized notion of stress/strain duality.
\noindent $(H_1)$ {\bf The reference configuration.} Let $\Omega \subset \mathbb{R}^n$ be a bounded open set with Lipschitz boundary. We assume that $\partial \Omega$ is decomposed as the following disjoint union
$$\partial \Omega = \Gamma_D \cup \Gamma_N \cup \Sigma,$$
where $\Gamma_D$ and $\Gamma_N$ are open sets in the relative topology of $\partial\Omega$, and $\Sigma = \partial _{| \partial \Omega}\Gamma_D = \partial _{| \partial \Omega}\Gamma_N$ is $\mathcal{H}^{n-1}$-negligible.
On the Neumann part $\Gamma_N$, we will prescribe a surface load given by a function $g \in L^\infty(\Gamma_N;\mathbb{R}^n)$. The space of {\it statically admissible stresses} is defined by
$$\mathcal S_g:=\{\sigma \in H({\rm div},\Omega) : \;\sigma\nu=g \text{ on }\Gamma_N\}.$$
In the sequel we will also be interested in stresses $\sigma$ taking values in a given set.
\noindent $(H_2)$ {\bf Plastic properties.} Let $\mathbf K \subset \mathbb{M}^n_{{\rm sym}}$ be a closed convex set such that $0$ belongs to the interior of ${\bf K}$. In particular, there exists $r>0$ such that
\begin{equation}
\label{eq:ms2}
\lk \tau \in \mathbb{M}^n_{{\rm sym}} : \abs{\tau} \le r \rk \subset {\bf K}.
\end{equation}
The support function $H : \mathbb{M}^n_{{\rm sym}} \rightarrow \left[ 0, + \infty \right]$ of $\bf K$ is defined by
$$H(q) := \sup_{\sigma \in {\bf K}}{\sigma : q} \quad \textrm{for all } q \in \mathbb{M}^n_{{\rm sym}}.$$
We can deduce from (\ref{eq:ms2}) that
\begin{equation}\label{eq:coercH}
H(q) \ge r \abs{q} \quad \textrm{for all } q \in \mathbb{M}^n_{{\rm sym}}.
\end{equation}
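For instance, in the Von Mises case where $\mathbf K=\{\sigma \in \mathbb{M}^n_{{\rm sym}} : |\sigma_D|\le \kappa\}$ for some $\kappa>0$, splitting $\sigma=\sigma_D+\frac1n(\tr\sigma)\,{\rm Id}$ in the supremum defining $H$ gives
$$H(q)=\begin{cases}\kappa\,|q| & \text{if } \tr q=0,\\ +\infty & \text{otherwise,}\end{cases}$$
which shows in particular that $H$ may genuinely take the value $+\infty$.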
If $p \in \mathcal{M} (\Omega \cup \Gamma_D; \mathbb{M}^n_{{\rm sym}}),$ we denote the convex function of a measure $H(p)$ by
$$H(p) := H \left( \frac{dp}{d \abs{p}} \right) \abs{p},$$
and the plastic dissipation is defined by
$$\mathcal{H} (p) := \int_{\Omega \cup \Gamma_D} H\left( \frac{dp}{d \abs{p}} \right) d\abs{p}. $$
We define the set of all {\it plastically admissible stresses} by
$$\mathcal K:=\{ \sigma \in H( {\rm div},\Omega) \colon \, \sigma(x) \in \mathbf K \text{ for a.e.\ } x \in \Omega \}$$
which defines a closed and convex subset of $H( {\rm div},\Omega) $.
The portion $\Gamma_D$ of $\partial\Omega$ stands for the Dirichlet part of the boundary where a given displacement $w$ will be prescribed. We assume that it extends into a function $w \in H^1(\Omega;\mathbb{R}^n)$ (so that $w_{|\Gamma_D}\in H^{1/2}(\Gamma_D;\mathbb{R}^n)$). We define the space of {\it kinematically admissible triples} by
\begin{eqnarray*}
\mathcal A_w & := & \Big\{(u,e,p) \in [BD(\Omega) \cap L^2(\Omega;\mathbb{R}^n)] \times L^2(\Omega;\mathbb M^n_{\rm sym}) \times \mathcal M(\Omega \cup \Gamma_D ;\mathbb M^n_{\rm sym}) : \\
&& Eu=e+p \text{ in }\Omega, \quad p=(w-u)\odot \nu\mathcal H^{n-1} \text{ on }\Gamma_D\Big\},
\end{eqnarray*}
where $\nu$ is the outer unit normal to $\Omega$. The function $u$ stands for the displacement, $e$ is the elastic strain and $p$ is the plastic strain. The following result provides an approximation for triples $(u,e,p) \in \mathcal A_w$ and its proof follows the lines of Step 1 in \cite[Theorem 6.2]{FG}.
\begin{lemma}
\label{lem:approximationBDkin}
Let $(u,e,p) \in [BD(\Omega) \cap L^2(\Omega; \mathbb{R}^n)] \times L^2(\Omega; \mathbb{M}^n_{\rm sym}) \times \mathcal{M} (\Omega; \mathbb{M}^n_{\rm sym})$ be such that $Eu = e+ p$ in $\Omega$. Then, there exists a sequence $\{(u_k,e_k, p_k)\}_{k \in \mathbb{N}}$ in $C^{\infty} (\overline{\Omega}; \mathbb{R}^n \times \mathbb{M}^n_{\rm sym} \times \mathbb{M}^n_{\rm sym}) $ such that
$$E u_k = e_k + p_k \quad \text{ in }\Omega,$$
\begin{equation}\label{eq:approximationuep}
\begin{cases}
u_k \rightarrow u \quad \textrm{strongly in } L^2( \Omega; \mathbb{R}^n), \\
e_k \rightarrow e \quad \textrm{strongly in } L^2( \Omega; \mathbb{M}^n_{\rm sym}), \\
p_k \rightharpoonup p \quad \textrm{weakly* in } \mathcal{M} (\Omega; \mathbb{M}^n_{\rm sym}),\\
|p_k|(\Omega) \to |p|(\Omega),\\
|Eu_k|(\Omega) \to |Eu|(\Omega),\\
u_k \to u \quad \textrm{strongly in }L^1(\partial\Omega;\mathbb{R}^n),
\end{cases}
\end{equation}
and for all $\varphi \in C^\infty_c(\mathbb{R}^n)$ with $\varphi \geq 0$,
\begin{equation}\label{eq:Res}
\limsup_{k \to \infty}\int_\Omega \varphi\, dH(p_k)\leq \int_\Omega \varphi\, dH(p).
\end{equation}
\end{lemma}
\begin{proof}
The construction of a sequence $\{(u_k,e_k, p_k)\}_{k \in \mathbb{N}}$ in $C^{\infty} (\overline{\Omega}; \mathbb{R}^n \times \mathbb{M}^n_{\rm sym} \times \mathbb{M}^n_{\rm sym}) $ such that $E u_k = e_k + p_k$ in $\Omega$, together with the first four convergences of \eqref{eq:approximationuep}, results from Step 1 in \cite[Theorem 6.2]{FG}. Moreover, a careful inspection of that proof also shows that $|Eu_k|(\Omega) \to |Eu|(\Omega)$. The strong convergence of the trace in $L^1(\partial\Omega;\mathbb{R}^n)$ is a consequence of \cite[Proposition 3.4]{Bab}. The last condition \eqref{eq:Res} follows as well from the proof of \cite[Theorem 6.2]{FG}, using the subadditivity and the positive one-homogeneity of $H$. Note that \eqref{eq:Res} cannot be directly obtained from the strict convergence of $\{p_k\}_{k \in \mathbb N}$ and the Reshetnyak continuity Theorem (see \cite[Theorem 2.39]{AFP} or \cite{S}) because $H$ is only lower semicontinuous and may take infinite values.
\end{proof}
We now define a distributional duality pairing between statically admissible stresses and plastic strains.
\begin{definition}
\label{definition:dualityMixed}
Let $\sigma \in \mathcal S_g$ and $(u,e,p)\in \mathcal A_w$. We define the first order distribution
$[\sigma \colon p]\in \mathcal D'(\mathbb{R}^n)$ by
$$\langle [\sigma \colon p], \varphi \rangle :=\int_\Omega \varphi \sigma : (E w-e)\, dx+\int_\Omega (w-u) \cdot {\rm div} \sigma \, \varphi \, dx + \int_\Omega \sigma \colon \big((w-u) \odot \nabla \varphi\big) \, dx + \int_{\Gamma_N}{\varphi g \cdot u \, d \mathcal{H}^{n-1} }$$
for all $\varphi \in C^\infty_c(\mathbb{R}^n)$.
\end{definition}
\begin{remark}
{\rm If $\varphi \in C^\infty_c(\Omega)$, thanks to the integration by parts formula in $H^1(\Omega;\mathbb{R}^n)$, the expression of the stress/strain duality becomes independent of $w$ and $g$, and it reduces to
\begin{equation}\label{1410191537}
\langle [\sigma \colon p], \varphi \rangle= -\int_\Omega \varphi \sigma \colon e \,dx- \int_\Omega u\cdot {\rm div} \sigma \, \varphi \,dx - \int_\Omega \sigma \colon (u\odot \nabla \varphi) \, dx\,.
\end{equation}
}
\end{remark}
As already observed in \cite{BMo}, contrary to \cite{FG,KT}, we are not able to show in general that $[\sigma:p]$ extends into a bounded Radon measure. This is due to the fact that, in our context, $\sigma_D$ need not belong to $L^\infty(\Omega;\mathbb{M}^n_D)$. However, provided $\sigma \in \mathcal K$ and under suitable assumptions on $\Omega$ and $\mathbf K$, we are going to show a convexity inequality which will ensure that $H(p)-[\sigma:p]$ is a nonnegative distribution, hence that $[\sigma:p]$ actually defines a bounded Radon measure supported in $\overline\Omega$.
\subsection{Pure Dirichlet or pure Neumann boundary conditions}
As the following result shows, the distribution $[\sigma:p]$ always extends into a bounded Radon measure in the pure Dirichlet or pure Neumann cases.
\begin{proposition}
\label{prop:an1}
Let $\Omega \subset \mathbb{R}^n$ be a bounded open set with Lipschitz boundary. Assume that either $\partial\Omega=\Gamma_D$ or $\partial\Omega=\Gamma_N$. Then, for every $\sigma \in \mathcal S_g \cap \mathcal K$ and $(u,e,p) \in \mathcal A_w$ with $H(p) \in \mathcal M(\Omega \cup \Gamma_D)$, the distribution $[\sigma:p]$ extends to a bounded Radon measure supported in $\overline \Omega$ and
\begin{equation}
\label{eq:conv-ineqNeumann}
H(p) \ge \left[ \sigma : p \right] \quad {{\rm in }} \ \mathcal{M}(\mathbb{R}^n).
\end{equation}
\end{proposition}
\begin{proof} In the case of pure Dirichlet boundary conditions, $\partial\Omega=\Gamma_D$, we first note that $\mathcal S_g=H({\rm div},\Omega)$. The duality pairing is then independent of $g$ and reduces to
$$\langle [\sigma \colon p], \varphi \rangle = \int_\Omega \varphi \sigma : (E w-e)\, dx+ \int_\Omega (w-u) \cdot {\rm div} \sigma \, \varphi \,dx + \int_\Omega \sigma \colon \big((w-u) \odot \nabla \varphi\big) \,dx$$
for all $\varphi \in C^\infty_c(\mathbb{R}^n)$. This case has already been addressed in \cite[Section 2]{BMo}. The result is a direct consequence of an approximation result for $\sigma \in \mathcal K$ by smooth functions (see e.g. \cite[Lemma 2.3]{DMDSM}) as well as the integration by parts formula in $BD(\Omega)$ (see \cite[Theorem 3.2]{Bab}).
In the case of pure Neumann boundary conditions, $\partial\Omega=\Gamma_N$, using the integration by parts formula in $H^1(\Omega;\mathbb{R}^n)$ for the function $w$, the duality pairing becomes independent of $w$ and reduces to
$$\langle [\sigma \colon p], \varphi \rangle := -\int_\Omega \varphi \sigma : e \,dx- \int_\Omega u \cdot {\rm div} \sigma \, \varphi \,dx - \int_\Omega \sigma \colon \big(u \odot \nabla \varphi\big)\, dx+\int_{\partial\Omega} { \varphi} g \cdot u \, d\mathcal H^{n-1}$$
for all $\varphi \in C^\infty_c(\mathbb{R}^n)$. According to {Lemma \ref{lem:approximationBDkin}}, there exists a sequence $\{(u_k,e_k, p_k)\}_{k \in \mathbb{N}}$ in $C^{\infty} (\overline{\Omega}; \mathbb{R}^n \times \mathbb{M}^n_{\rm sym} \times \mathbb{M}^n_{\rm sym}) $ such that $E u_k = e_k + p_k$ in $\Omega$ and \eqref{eq:approximationuep}--\eqref{eq:Res} hold. By definition of the duality pairing $\left[ \sigma: p_k \right]$, for all $\varphi \in C^\infty_c (\mathbb{R}^n)$ we have
\begin{equation}
\left< \left[ \sigma: p_k \right], \varphi \right> := - \int_{\Om}{\sigma : e_k \varphi \, dx} -\int_{\Om}{\varphi u_k \cdot {\rm div} \sigma \, dx} - \int_{\Om}{\sigma : (u_k \odot \nabla \varphi) \, dx} +{ \int_{\partial\Omega} { \varphi} g \cdot u_k \, d\mathcal H^{n-1} }, \label{eq:an4}
\end{equation}
and using the integration by parts formula \eqref{2911181910} for $\sigma \in H({\rm div},\Omega)$, we get that
\begin{equation}
\left< \left[ \sigma: p_k \right], \varphi \right> := \int_\Omega \sigma : p_k \varphi \, dx. \label{eq:an4bis}
\end{equation}
By definition of the support function $H$, we have that $H(p_k) \geq \sigma:p_k$ a.e. in $\Omega$, hence, if $\varphi \geq 0$, combining \eqref{eq:an4} and \eqref{eq:an4bis} yields
\begin{eqnarray*}
\int_\Omega H(p_k)\varphi\, dx & \geq & \int_\Omega \sigma:p_k \varphi \,dx \\
& =& - \int_{\Om}{\sigma : e_k \varphi \, dx}-\int_{\Om}{\varphi u_k \cdot {\rm div} \sigma \, dx} - \int_{\Om}{\sigma : (u_k \odot \nabla\varphi) \, dx} { + \int_{\partial \Omega}{\varphi g \cdot u_k \, d\mathcal{H}^{n-1}}}.
\end{eqnarray*}
Hence, passing to the limit as $k \to \infty$ thanks to the convergences \eqref{eq:approximationuep}--\eqref{eq:Res} yields
\begin{eqnarray*}
\int_\Omega \varphi\, dH(p) & \ge & - \int_{\Om}{\sigma : e \varphi \, dx} -\int_{\Om}{\varphi u \cdot {\rm div} \sigma \, dx} - \int_{\Om}{\sigma : (u \odot \nabla \varphi) \, dx} {+ \int_{\partial \Omega}{\varphi g \cdot u \, d\mathcal{H}^{n-1}}} \\
& =: & \left<\left[ \sigma: p \right], \varphi \right>,
\end{eqnarray*}
where we used once more the definition of duality $[\sigma:p]$. As a consequence, the distribution $H(p)-[\sigma:p]$ is nonnegative, hence it extends into a bounded Radon measure in $\mathbb{R}^n$. Thus, $[\sigma:p]$ extends as well into a bounded Radon measure in $\mathbb{R}^n$. Finally $[\sigma:p]$ is clearly supported in $\overline\Omega$ from its very definition.
\end{proof}
\subsection{Mixed boundary conditions}
When $\Gamma_D \neq \emptyset$ and $\Gamma_N\neq \emptyset$, the situation is much more delicate, as in \cite{FG}. We first prove the following general result giving the required convexity inequality, but only outside $\Sigma$ (see \cite[Theorem 6.2]{FG}), which, unfortunately, will not be enough for our purpose. We will later make additional assumptions in dimensions $n=2$ and $3$ which will ensure the validity of the convexity inequality in the whole of $\mathbb{R}^n$.
\begin{proposition}
\label{prop:convexinequalityMixed}
Let $\Omega \subset \mathbb{R}^n$ be a bounded open set with Lipschitz boundary. For every $\sigma \in \mathcal S_g \cap \mathcal K$ and $(u,e,p) \in \mathcal A_w$ with $H(p) \in \mathcal M(\Omega \cup \Gamma_D)$, the restriction of the distribution $[\sigma:p]$ to $\mathbb{R}^n \setminus \Sigma$ extends to a bounded Radon measure in $\mathbb{R}^n \setminus \Sigma$ and
\begin{equation}
\label{eq:ar1}
H(p) \ge [ \sigma : p ] \quad \text{ in } \mathcal M(\mathbb{R}^n \setminus \Sigma ).
\end{equation}
\end{proposition}
\begin{proof}
Without loss of generality, we can assume $w=0$ in Definition \ref{definition:dualityMixed}. Let us fix a test function $\varphi \in C^\infty_c (\mathbb{R}^n \setminus \Sigma)$, and let $U \subset \mathbb{R}^n$ be an open set such that $\Sigma \subset U$ and $U \cap {\rm supp} (\varphi) = \emptyset$. Let us consider another open set $W \subset \mathbb{R}^n$ such that $\Gamma_N \setminus U \subset W$ and $\overline W \cap \partial \Omega \subset \Gamma_N$. Finally, let $W'\subset \mathbb{R}^n$ be a further open set such that $W'\subset\subset W$, $\Gamma_N \setminus U \subset W'$ and ${\rm supp} (\varphi ) \cap \Gamma_N \subset W'$. Let $\psi \in C^\infty_c(\mathbb{R}^n)$ be a cut-off function such that $0 \le \psi \le 1$, ${\rm Supp}(\psi) \subset W$ and $\psi=1$ on $W'$. We decompose $\sigma$
as follows,
$$\sigma = \psi \sigma + (1 - \psi) \sigma =: \sigma_1 + \sigma_2.$$
Note that, for $i =1,2$, we have that $ \sigma_i \in H({\rm div}, \Omega)$. Moreover,
\begin{equation}
\label{eq:extensionsigma1}
\sigma_1 \nu = \psi (\sigma \nu) = \psi g \quad \text{on } \partial\Omega \qquad \text{and} \qquad \sigma_2 = 0 \quad \text{on } W'.
\end{equation}
Substituting $\sigma$ with this decomposition in Definition \ref{definition:dualityMixed} we get that
\begin{align}\label{eq:sigma12}
\langle [\sigma \colon p], \varphi \rangle &:= -\int_\Omega \varphi \sigma : e \,dx- \int_\Omega u \cdot {\rm div} \sigma \, \varphi \, dx - \int_\Omega \sigma \colon \big(u\odot \nabla \varphi\big) \, dx + \int_{ \Gamma_N}\varphi g \cdot u \, d \mathcal{H}^{n-1} \nonumber \\
& = -\int_\Omega \varphi \sigma_1 : e \, dx- \int_\Omega u \cdot {\rm div} \sigma_1 \, \varphi \, dx - \int_\Omega \sigma_1 \colon \big(u\odot \nabla \varphi\big) \, dx + \int_{ \Gamma_N}\varphi g \cdot u \, d \mathcal{H}^{n-1}\nonumber \\
& \quad -\int_\Omega \varphi \sigma_2 : e \, dx- \int_\Omega u \cdot {\rm div} \sigma_2 \, \varphi \, dx - \int_\Omega \sigma_2 \colon \big(u\odot \nabla \varphi\big) \, dx.
\end{align}
We first approximate $(u,e,p)$ in the expression \eqref{eq:sigma12} involving $\sigma_1$. Indeed, thanks to Lemma \ref{lem:approximationBDkin}, there exists a sequence $\{(u_k,e_k, p_k)\}_{k \in \mathbb{N}}$ in $C^{\infty} (\overline{\Omega}; \mathbb{R}^n \times \mathbb{M}^n_{\rm sym} \times \mathbb{M}^n_{\rm sym}) $ such that $E u_k = e_k + p_k$ in $\Omega$ and \eqref{eq:approximationuep}--\eqref{eq:Res} hold. On the one hand, we have
\begin{multline}\label{eq:divers-conv}
-\int_\Omega \varphi \sigma_1 : e_k \, dx- \int_\Omega u_k \cdot {\rm div} \sigma_1 \, \varphi \, dx - \int_\Omega \sigma_1 \colon \big(u_k\odot \nabla \varphi\big) \, dx + \int_{ \Gamma_N}\varphi g \cdot u_k \, d \mathcal{H}^{n-1} \\
\to -\int_\Omega \varphi \sigma_1 : e \, dx- \int_\Omega u \cdot {\rm div} \sigma_1 \, \varphi \, dx - \int_\Omega \sigma_1 \colon \big(u\odot \nabla \varphi\big) \, dx + \int_{ \Gamma_N}\varphi g \cdot u \, d \mathcal{H}^{n-1}.
\end{multline}
On the other hand, for any $k \in \mathbb{N}$, thanks to the integration by parts formula for $\sigma_1 \in H( {\rm div},\Omega)$ together with \eqref{eq:extensionsigma1}, we can observe that
\begin{align}
& -\int_\Omega \varphi \sigma_1 : e_k \, dx- \int_\Omega u_k \cdot {\rm div} \sigma_1 \, \varphi \, dx - \int_\Omega \sigma_1 \colon \big(u_k\odot \nabla \varphi\big) \, dx + \int_{ \Gamma_N}\varphi g \cdot u_k \, d \mathcal{H}^{n-1}\nonumber \\
&\qquad = \int_{\Om}{\varphi \sigma_1 : p_k} \, dx - \langle \sigma_1 \nu, \varphi u_k \rangle_{H^{-\frac{1}{2}} (\partial \Omega;\mathbb{R}^n), H^{\frac{1}{2}} (\partial \Omega;\mathbb{R}^n) } + \int_{\Gamma_N}{\varphi g \cdot u_k \, d \mathcal{H}^{n-1} } \nonumber \\
& \qquad = \int_{\Om}{\varphi \sigma_1 : p_k} \, dx - \int_{\partial \Omega}{\varphi { \psi g} \cdot u_k \, d \mathcal{H}^{n-1} } + \int_{\Gamma_N}{\varphi g \cdot u_k \,d \mathcal{H}^{n-1} }\nonumber\\
& \qquad = \int_{\Om}{\varphi \sigma_1 : p_k} \, dx,\label{eq:integrationbypartsrobin1}
\end{align}
where we used that $\psi=1$ on ${\rm Supp}(\varphi) \cap \Gamma_N$ and $\psi=0$ on $\partial\Omega \setminus \Gamma_N$. Moreover, by definition of the support function $H$, we have that $H(p_k) \geq \sigma:p_k$ a.e. in $\Omega$. As a consequence, if $\varphi \geq 0$,
\begin{eqnarray*}
\int_{\Omega } H(p_k) \psi \varphi \, dx &\geq & \int_\Omega \sigma_1:p_k \varphi \, dx\\
& =& -\int_{\Om}{\varphi u_k \cdot {\rm div} \sigma_1 \, dx} - \int_{\Om}{\sigma_1 : (u_k \odot \nabla \varphi) \, dx} \\
&&\qquad- \int_{\Om}{\sigma_1 : e_k \varphi \,dx} { + \int_{\Gamma_N}{\varphi g \cdot u_k \,d\mathcal{H}^{n-1}}}.
\end{eqnarray*}
We can pass to the limit as $k \to \infty$ owing to \eqref{eq:Res} and \eqref{eq:divers-conv}. We deduce that
\begin{eqnarray}\label{eq:rob2}
\int_{W\cap \Omega } \varphi \, dH(p) & \ge & \int_\Omega \varphi\psi\, dH(p)\nonumber\\
& \geq & -\int_{\Om}{\varphi u \cdot {\rm div} \sigma_1 \, dx} - \int_{\Om}{\sigma_1 : (u \odot \nabla \varphi) \, dx}\nonumber\\
&&\qquad - \int_{\Om}{\sigma_1 : e \varphi \, dx} { + \int_{\Gamma_N}{\varphi g \cdot u \, d\mathcal{H}^{n-1}}},
\end{eqnarray}
where we have used the fact that $p$, hence $H(p)$, does not charge $\Gamma_N$.
Coming back to \eqref{eq:sigma12}, we now approximate the last term in the right-hand side by approximating $\sigma_2$. Arguing as in \cite[Lemma 2.3]{DMDSM} or Step 2 in \cite[Theorem 6.2]{FG} and using \eqref{eq:extensionsigma1}, there exists a sequence $\lk \sigma^{k}_2 \rk_{k \in \mathbb{N}} \subset C^\infty(\overline \Omega; \mathbb{M}^n_{\rm sym})$ such that $\sigma_2^k (x) \in \mathbf K$ for all $x \in \overline\Omega$ and
\begin{equation}
\label{eq:approximatiosigma2}
\begin{cases}
\sigma_2^k \to \sigma_2 & \textrm{strongly in }H({\rm div},\Omega),\\
\sigma_2^k \nu = 0 & \textrm{on $W' \cap \Gamma_N$}.
\end{cases}
\end{equation}
Therefore, using the integration by parts formula in $BD(\Omega)$, we infer that
\begin{equation}
\label{eq:approximationitgdirichlet}
\begin{split}
& -\int_\Omega \varphi \sigma_2^k : e \, dx- \int_\Omega u \cdot {\rm div} \sigma_2^k \, \varphi \, dx - \int_\Omega \sigma_2^k \colon \big(u\odot \nabla \varphi\big) \, dx \\
&\qquad = \int_{\Omega}{\varphi \sigma_2^k :\, dp} - \int_{\partial \Omega}{\varphi (\sigma_2^k \nu)\cdot u \,d\mathcal{H}^{n-1}} \\
&\qquad = \int_{\Omega}{\varphi \sigma_2^k :\, dp} - \int_{ \Gamma_D}{\varphi (\sigma_2^k \nu)\cdot u \, d\mathcal{H}^{n-1}} \\
&\qquad=: \int_{\Omega \cup \Gamma_D}{\varphi \sigma_2^k : \, dp}
\end{split}
\end{equation}
where in the second equality, we have used the fact that ${\rm supp} (\varphi) \cap \partial\Omega \subset \Gamma_D \cup (\Gamma_N \cap W')$ and the last condition of \eqref{eq:approximatiosigma2}, while in the third equality we used that $p\mres \Gamma_D= - u \odot \nu \mathcal{H}^{n-1}\mres \Gamma_D$. Using that $\sigma_2^k (x) \in \mathbf K$ for all $x \in \overline\Omega$, we get that
$$\int_{\Omega \cup \Gamma_D} \varphi \,dH(p) \geq \int_{\Omega \cup \Gamma_D}{\varphi \sigma_2^k : \, dp},$$
hence passing to the limit as $k \to \infty$ using \eqref{eq:approximatiosigma2} and \eqref{eq:approximationitgdirichlet} leads to
\begin{equation}
\label{eq:rob1}
\int_{\Omega \cup \Gamma_D} \varphi \,dH(p) \geq -\int_\Omega \varphi \sigma_2 : e \, dx- \int_\Omega u \cdot {\rm div} \sigma_2 \, \varphi \, dx - \int_\Omega \sigma_2 \colon \big(u\odot \nabla \varphi\big) \, dx.
\end{equation}
Combining \eqref{eq:sigma12}, \eqref{eq:rob2} and \eqref{eq:rob1}, we conclude that
$$\langle [\sigma:p],\varphi\rangle \leq \int_{\Omega \cup \Gamma_D} \varphi\, dH(p)+\int_{W\cap \Omega } \varphi \, dH(p) .$$
Let us finally consider a decreasing sequence of open sets $\{W_j\}_{j \in \mathbb{N}}$ such that $\Gamma_N \setminus U \subset W_j$ and $W_j \cap \partial \Omega \subset \Gamma_N$ for all $j \in \mathbb{N}$, and $\bigcap_j W_j=\overline{\Gamma_N \setminus U}$. Passing to the limit in the previous expression as $j \to \infty$ owing to the monotone convergence theorem yields
$$\langle [\sigma:p],\varphi\rangle \leq \int_{\Omega \cup \Gamma_D} \varphi \,dH(p)+\int_{\overline{\Gamma_N \setminus U}} \varphi \, dH(p) .$$
As $\overline{\Gamma_N \setminus U} \subset \Gamma_N \cup \Sigma$ and $p$ is concentrated on $\Omega \cup \Gamma_D$, we deduce that
$$\langle [\sigma:p],\varphi\rangle \leq \int_{\Omega \cup \Gamma_D} \varphi \,dH(p)$$
which completes the proof of the proposition.
\end{proof}
In the remaining part of this section, we exhibit some particular cases where we can extend inequality \eqref{eq:ar1} above into one in $\mathcal{M} (\mathbb{R}^n)$. The following result deals with the two-dimensional case where the convexity inequality holds provided $\Sigma$ is a finite set.
\begin{proposition}\label{prop:n=2}
Under the same assumptions as in Proposition \ref{prop:convexinequalityMixed}, assume further that $n=2$ and that $\Sigma$ is a finite set. Then, for all $\sigma \in \mathcal S_g \cap \mathcal K$ and all $(u,e,p) \in \mathcal A_w$,
$$H(p) \geq [\sigma:p] \quad \text{ in } \mathcal M(\mathbb{R}^2).$$
\end{proposition}
\begin{proof}
We again reduce to the case $w=0$. Arguing as in \cite[Example 2]{FG}, for all $(u,e,p) \in \mathcal A_0$, there exists a sequence $\lk (u_k, e_k, p_k) \rk_{k \in \mathbb{N}}$ in $\mathcal A_0$ such that, for each $k \in \mathbb{N}$, $(u_k, e_k, p_k) =0 $ in an open neighborhood $U_k$ of $\Sigma$ and
\begin{equation}
\label{eq:weakapproximation}
\begin{cases}
u_k \to u \quad \textrm{strongly in $L^2(\Omega; \mathbb{R}^2)$}, \\
e_k \to e \quad \textrm{strongly in $L^2(\Omega; \mathbb{M}^2_{\rm sym})$}, \\
p_k \rightharpoonup p \quad \textrm{weakly* in $\mathcal{M}(\Omega\cup \Gamma_D; \mathbb{M}^2_{\rm sym})$},\\
|p_k|(\Omega \cup \Gamma_D) \to |p|(\Omega\cup \Gamma_D).
\end{cases}
\end{equation}
A careful inspection of the argument used in \cite[Example 2]{FG} shows that $|Eu_k|(\Omega) \to |Eu|(\Omega)$. Thus, applying \cite[Proposition 3.4]{Bab}, we deduce the convergence of the trace
\begin{equation}\label{eq:convtrace1}
u_k \to u\quad \text{ strongly in }L^1(\partial\Omega; \mathbb{R}^2).
\end{equation}
Moreover, for all $\varphi \in C^\infty_c(\mathbb{R}^2)$ with $\varphi \geq 0$,
\begin{equation}\label{eq:Res2}
\limsup_{k \to \infty} \int_{\Omega \cup \Gamma_D} \varphi \, dH(p_k) \leq \int_{\Omega \cup \Gamma_D} \varphi \, dH(p).
\end{equation}
Once more, \eqref{eq:Res2} does not follow from the Reshetnyak continuity Theorem because our $H$ does not fulfill the assumptions of that result.
Let $V_k$ be an open set satisfying $\Sigma \subset V_k \subset\subset U_k$, and let $\psi_k \in C^\infty_c(\mathbb{R}^2;[0,1])$ be a cut-off function such that $\psi_k=1$ in $V_k$ and ${\rm Supp}(\psi_k) \subset U_k$. For every $\varphi \in C^\infty_c(\mathbb{R}^2)$ with $\varphi\geq 0$, then $(1-\psi_k)\varphi \in C^\infty_c(\mathbb{R}^2 \setminus \Sigma)$ so that by Proposition \ref{prop:convexinequalityMixed},
$$\int_{\Omega \cup \Gamma_D} \varphi \, dH(p_k) \geq \int_{\Omega \cup \Gamma_D} \varphi(1-\psi_k) \, dH(p_k) \geq \langle [\sigma:p_k],\varphi(1-\psi_k)\rangle.$$
Since by construction ${\rm Supp}(u_k,e_k,p_k) \subset \mathbb{R}^2 \setminus U_k$, it is easily seen that ${\rm Supp}([\sigma:p_k]) \subset \mathbb{R}^2 \setminus U_k$ hence $ \langle [\sigma:p_k],\varphi\psi_k\rangle=0$. As a consequence
$$\int_{\Omega \cup \Gamma_D} \varphi \, dH(p_k) \geq \langle [\sigma:p_k],\varphi\rangle,$$
and the conclusion follows passing to the limit as $k \to \infty$ owing to the convergences \eqref{eq:weakapproximation}--\eqref{eq:Res2}.
\end{proof}
The three-dimensional case requires additional regularity assumptions for the domain $\Omega$, and a particular geometric structure for the elasticity set $\mathbf K$ which has to be a cylinder whose axis is given by the set of spherical matrices. Note that these assumptions cover the physical cases of Von Mises and Tresca models.
\begin{proposition}\label{prop:n=3}
Under the same assumptions as in Proposition \ref{prop:convexinequalityMixed}, assume further that $n=3$ and that:
\begin{itemize}
\item[(i)] $\Omega \subset \mathbb{R}^3$ is a bounded open set of class $C^2$ and $\Sigma$ is a $1$-dimensional submanifold of class $C^2$;
\item[(ii)] $\mathbf K=K_D \oplus (\mathbb{R}\, {\rm Id})=\{\sigma \in \mathbb M^3_{\rm sym} : \, \sigma_D\in K_D \}$ where $K_D \subset \mathbb M^3_D$ is a compact and convex set containing $0$ in its interior.
\end{itemize}
Then, for all $\sigma \in \mathcal S_g \cap \mathcal K$ and all $(u,e,p) \in \mathcal A_w$,
$$H(p) \geq [\sigma:p] \quad \text{ in } \mathcal M(\mathbb{R}^3).$$
\end{proposition}
\begin{proof}
Since $\sigma \in L^2(\Omega; \mathbb{M}^3_{\rm sym}) $ satisfies $ {\rm div}\sigma \in L^2 (\Omega; \mathbb{R}^3) $ and $\sigma_D \in L^\infty (\Omega;\mathbb{M}^3_{D} )$ (because $\sigma \in \mathcal K$ implies $\sigma_D(x) \in K_D$ a.e. in $\Omega$), we claim that $\sigma \in L^6 (\Omega; \mathbb{M}^3_{\rm sym})$. Indeed, arguing as in \cite[Proposition 6.1]{FG}, using the decomposition $\sigma = \sigma_D + \frac{1}{3} (\tr\sigma){\rm Id}$, we have that
$\frac13 \nabla (\tr\sigma)= {\rm div} \sigma - {\rm div} \sigma_D \in L^2(\Omega;\mathbb{R}^3) + W^{-1,\infty}(\Omega;\mathbb{R}^3)$, hence by the Sobolev embedding,
$$\nabla(\tr\sigma) \in W^{-1,6} (\Omega) + W^{-1, \infty} (\Omega) \subset W^{-1,6} (\Omega).$$
Applying Ne\v cas Lemma (see \cite{Nec}), we infer that $\tr\sigma \in L^6(\Omega)$, hence $\sigma \in L^6(\Omega;\mathbb M^3_{\rm sym})$.
In particular, $\sigma \in L^3(\Omega;\mathbb M^3_{\rm sym})$, $\sigma_D \in L^\infty(\Omega;\mathbb M^3_D)$, $ {\rm div} \sigma \in L^{3/2}(\Omega;\mathbb{R}^3)$ and $\sigma\nu\in L^\infty(\Gamma_N;\mathbb{R}^3)$. These conditions turn out to be sufficient to apply \cite[Proposition 2.7]{KT} (with, in the notation of \cite{KT}, $n=3$, $p=3/2$ and $p^*=3$). Then, an immediate adaptation of the proof of \cite[Lemma 3.5]{KT} (using \cite[Proposition 2.7]{KT} instead of \cite[Corollary 2.8]{KT}) shows the validity of the so-called Kohn-Temam condition:
$$\lim_{\delta \to 0} \frac{1}{\delta}\int_{\Sigma_\delta} |\sigma| |u|\, dx =0,$$
where $\Sigma_\delta:=\Omega \cap \{x \in \mathbb{R}^3 : \, {\rm dist}(x,\Sigma)<\delta\}$. We are thus in position to argue as in the proof of \cite[Theorem 6.5]{FG} to get the conclusion. Indeed, let $\psi_\delta \in C^\infty_c(\Sigma_\delta;[0,1])$ be a cut-off function such that $\psi_\delta=1$ in a neighborhood of $\Sigma$ and $|\nabla \psi_\delta|\leq 2/\delta$. Then, for all $\varphi \in C^\infty_c(\mathbb{R}^3)$ with $\varphi\geq 0$, we have
\begin{multline*}
\langle[\sigma:p],(1-\psi_\delta)\varphi\rangle=-\int_\Omega (1-\psi_\delta)\varphi \sigma : e \,dx- \int_\Omega u \cdot {\rm div} \sigma \, (1-\psi_\delta)\varphi \, dx - \int_\Omega (1-\psi_\delta) \sigma \colon \big(u\odot \nabla \varphi\big) \, dx\\
+ \int_\Omega \varphi \sigma \colon \big(u\odot \nabla \psi_\delta\big) \, dx + \int_{ \Gamma_N}(1-\psi_\delta)\varphi g \cdot u \, d \mathcal{H}^{n-1}.
\end{multline*}
Since $\psi_\delta \to 0$ pointwise outside $\Sigma$, and
$$\left| \int_\Omega \varphi \sigma \colon \big(u\odot \nabla \psi_\delta\big) \, dx\right| \leq \frac{2\|\varphi\|_{L^\infty(\Omega)}}{\delta}\int_{\Sigma_\delta} |\sigma| |u|\, dx \to 0,$$
the dominated convergence Theorem allows us to pass to the limit as $\delta \to 0$, and get that
$$\langle[\sigma:p],(1-\psi_\delta)\varphi\rangle \to\langle[\sigma:p],\varphi\rangle.$$
On the other hand, since $(1-\psi_\delta)\varphi \in C^\infty_c(\mathbb{R}^3 \setminus \Sigma)$, Proposition \ref{prop:convexinequalityMixed} ensures that
$$\int_{\Omega \cup \Gamma_D} \varphi \, dH(p) \geq \int_{\Omega \cup \Gamma_D} (1-\psi_\delta)\varphi \, dH(p) \geq \langle[\sigma:p],(1-\psi_\delta)\varphi\rangle.$$
The conclusion follows passing to the limit as $\delta \to 0$.
\end{proof}
\section{Dynamic elasto-plasticity}
\subsection{The model with dissipative boundary conditions}
We consider a small strain dynamical perfect plasticity problem under the following assumptions:
\noindent $(H_3)$ {\bf The elastic properties.} We assume that the material is isotropic, which means that the constitutive law, expressed by Hooke's tensor, is given by
$${\bf A} \xi = \lambda ({\rm tr}\,\xi) {\rm Id} + 2 \mu \xi \quad {\textrm{for all } } \xi \in \mathbb{M}^n_{{\rm sym}},$$
where $\lambda$ and $ \mu$ are the Lam\'e coefficients satisfying $\mu > 0$ and $2 \mu + n \lambda >0$. These conditions imply the existence of constants $\alpha>0$ and $\beta>0$ such that
$$\alpha |\xi|^2 \leq \mathbf A \xi:\xi \leq \beta |\xi|^2 \quad \text{ for all }\xi \in \mathbb{M}^n_{{\rm sym}}.$$
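Indeed, using the orthogonal splitting $\xi=\xi_D+\frac1n(\tr\xi)\,{\rm Id}$, a direct computation gives
$$\mathbf A\xi:\xi=2\mu|\xi_D|^2+\frac{2\mu+n\lambda}{n}(\tr\xi)^2 \quad \text{ for all }\xi \in \mathbb{M}^n_{{\rm sym}},$$
so that one admissible choice is $\alpha=\min\{2\mu,\,2\mu+n\lambda\}$ and $\beta=\max\{2\mu,\,2\mu+n\lambda\}$.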
We define the following quadratic form
$$Q(\xi) := \frac{1}{2 }\mathbf A \xi:\xi = \frac{\lambda}{2 } ({\rm tr}\,\xi)^2 + \mu \abs{\xi}^2 \quad {\textrm{for all } } \xi \in \mathbb{M}^n_{{\rm sym}}. $$
If $e \in L^2 (\Omega; \mathbb{M}^n_{{\rm sym}})$, we further define the elastic energy by
$$\mathcal{Q} (e) := \int_{\Om}{Q(e) \, dx}.$$
$(H_4)$ {\bf The dissipative boundary conditions.} Let $S \in L^\infty(\partial\Omega;\mathbb M^n_{\rm sym})$ be a boundary matrix satisfying the following coercivity condition: there exists a constant $c>0$ such that
$$S(x)z\cdot z \geq c|z|^2 \quad \text{for $\mathcal H^{n-1}$-a.e. $x \in \partial\Omega$} \text{ and for all }z \in \mathbb{R}^n.$$
$(H_5)$ {\bf The external forces.} We assume the body is subjected to external body forces
$$f \in H^1(0,T; L^2(\Omega; \mathbb{R}^n)).$$
$(H_6)$ {\bf The initial conditions.} Let $u_0 \in H^1(\Omega; \mathbb{R}^n)$, $v_0 \in H^2(\Omega; \mathbb{R}^n)$, $e_0 \in L^2(\Omega; \mathbb{M}^n_{{\rm sym}})$ and $p_0 \in L^2(\Omega; \mathbb{M}^n_{{\rm sym}})$ be such that
\begin{equation*}
\begin{cases}
\sigma_0 := {\bf A} e_0 \in \mathcal K,\\
Eu_0 = e_0 + p_0 & \textrm{ in } \Omega, \\
S v_0 + \sigma_0 \nu =0 & \textrm{ on } \partial \Omega.
\end{cases}
\end{equation*}
In order to formulate the main result of \cite{BC}, we further need to introduce the function $\psi:\partial\Omega \times \mathbb{R}^n \to \mathbb{R}^+$ defined by
\begin{equation}\label{eq:psi}
\psi (x,z) = \inf_{w \in \mathbb{R}^n} \left\{\frac12 S(x)w\cdot w + H((w-z)\odot \nu(x))\right\} \text{ for $\mathcal H^{n-1}$-a.e. $x \in \partial\Omega$ and all $z \in \mathbb{R}^n$,}
\end{equation}
where $\nu(x)$ is the outer normal to $\Omega$ at $x \in \partial\Omega$. We recall (see \cite[Remark 4.7]{BC}) that the differential of $\psi$ in the $z$-direction is given by
$$D_z \psi (x,z) = {\rm{P}}_{-{\mathbf K} \nu(x) } (S (x) z),$$
where ${\rm P}_{-\mathbf K\nu(x)}$ is the orthogonal projection in $\mathbb{R}^n$ onto the closed and convex set $-\mathbf K\nu(x)$ with respect to the scalar product $(u,v) \in \mathbb{R}^n \times \mathbb{R}^n \mapsto \langle u,v\rangle_{S(x)^{-1}}:= S(x)^{-1} u \cdot v$. We further denote by $\| \cdot\|_{S(x)^{-1}}$ its associated norm.
The following well posedness result with homogeneous dissipative boundary conditions has been established in \cite{BC}.
\begin{theorem}
\label{theorem:wellposednesslevelS}
Assume that assumptions $(H_1)$--$(H_6)$ hold. Then, there exists a unique triple $(u,e,p)$ such that
\begin{equation*}
\begin{cases}
u \in W^{2, \infty} (0,T; L^2 (\Omega; \mathbb{R}^n)) \cap C^{0,1} ( \left[0, T\right]; BD (\Omega)), \\
e \in W^{1, \infty} (0,T; L^2 (\Omega ; \mathbb{M}^n_{{\rm sym}} )), \\
p \in C^{0,1} ( \left[0, T\right]; \mathcal{M} (\Omega;\mathbb{M}^n_{{\rm sym}})) , \\
\end{cases}
\end{equation*}
\begin{equation*}
\sigma:= \mathbf{A} e \in L^\infty (0,T; H( {\rm div}, \Omega)), \quad \sigma \nu \in L^\infty (0,T; L^2 (\partial \Omega; \mathbb{R}^n)),
\end{equation*}
and satisfying
\begin{enumerate}
\item The initial conditions:
$$u(0) = u_0, \quad \dot{u}(0) = v_0, \quad e(0)=e_0, \quad p(0) = p_0;$$
\item The additive decomposition: for all $t \in \left[0,T \right]$,
$$E u(t) = e(t) + p(t) \quad \textrm{in }\mathcal{M} (\Omega;\mathbb{M}^n_{{\rm sym}});$$
\item The equation of motion:
$$\ddot {u} - {\rm div} \sigma = f \quad \textrm{in }L^2 (0,T; L^2(\Omega ; \mathbb{R}^n)); $$
\item The relaxed dissipative boundary condition:
$${\rm P}_{-\mathbf{K} \nu} (S \dot{u}) + \sigma \nu = 0 \quad \textrm{in } L^2 (0,T; L^2 (\partial \Omega; \mathbb{R}^n));$$
\item The stress constraint: for every $t \in \left[ 0, T \right]$,
$$\sigma (t) \in \mathbf{K} \quad \text{ a.e. in }\Omega;$$
\item The flow rule: for a.e. $t \in \left[0, T \right]$,
$$H (\dot {p} (t) )= \left[ \sigma(t): \dot{p}(t) \right] \quad \textrm{in } \mathcal{M} (\Omega);$$
\item The energy balance: for every $t \in \left[0, T \right]$
\begin{align}
&\frac{1}{2} \int_{\Omega}{\abs{\dot{u} (t)}^2 \, dx} + \mathcal{Q}(e(t)) + \int_0^t H(\dot{p}(s))(\Omega)\, ds + \int_{0}^{t}{\int_{\partial \Omega}{\psi(x, \dot{u})\, d\mathcal{H}^{n-1}}\, ds} \nonumber \\
& \quad + \frac{1}{2} \int_{0}^{t}{\int_{\partial \Om}{ S^{-1} (\sigma \nu) \cdot (\sigma \nu) \, d \mathcal{H}^{n-1}}\, ds}= \frac{1}{2} \int_{\Omega}{\abs{v_0}^2 \, dx} + \mathcal{Q}(e_0) + \int_{0}^{t}{\int_{\Om}{f \cdot \dot{u}\, dx}\, ds}.
\label{eq:energybalance}
\end{align}
\end{enumerate}
Moreover, the following uniform estimate holds
\begin{equation}
\norm{\ddot{u}}^2_{L^\infty (0,T; L^2 (\Omega; \mathbb{R}^n))} + \norm{\dot{e}}^2_{L^\infty(0,T; L^2 (\Omega; \mathbb{M}^n_{{\rm sym}}))} \le C_* ,
\label{posteriori}
\end{equation}
for some constant $C_*>0$ depending on $\|u_0\|_{H^1(\Omega;\mathbb{R}^n)}$, $\|v_0\|_{H^2(\Omega;\mathbb{R}^n)}$, $\|e_0\|_{L^2(\Omega;\mathbb M^n_{\rm sym})}$, $\|\sigma_0\|_{H({\rm div},\Omega)}$ and $\|p_0\|_{L^2(\Omega;\mathbb M^n_{\rm sym})}$, but which is independent of $S$.
\end{theorem}
\subsection{Derivation of mixed boundary condition}
Our aim is to show through an asymptotic analysis how it is possible to obtain homogeneous mixed boundary conditions starting from dissipative boundary conditions. We consider a boundary matrix of the form
\begin{equation}\label{eq:Slambda}
S(x)=S_\lambda(x):= \left( \lambda {\bf 1}_{\Gamma_D}(x) + \frac{1}{\lambda} {\bf 1}_{\Gamma_N}(x) \right){\rm Id}, \quad \lambda>0.
\end{equation}
\begin{remark}
{\rm Note that since
$$\| \cdot \|_{S_\lambda(x)^{-1}}=\left( \lambda { {\bf 1}_{\Gamma_D} (x) + \frac{1}{\lambda} {\bf 1}_{\Gamma_N}(x) } \right)^{-1} | \cdot |,$$
for any $\lambda >0$ and all $x \in \partial \Omega \setminus \Sigma$, the orthogonal projection ${\rm P}_{-\mathbf K\nu(x)}$ onto the closed and convex set $-\mathbf K\nu(x)$ with respect to the scalar product $\langle\cdot,\cdot\rangle_{S_\lambda(x)^{-1}}$ coincides with the orthogonal projection with respect to the canonical Euclidean scalar product of $\mathbb{R}^n$. It is in particular independent of $\lambda$. }
\end{remark}
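Note also that, for $\mathcal H^{n-1}$-a.e.\ $x\in\partial\Omega$,
$$S_\lambda(x)^{-1}=\left( \frac{1}{\lambda}\, {\bf 1}_{\Gamma_D}(x) + \lambda\, {\bf 1}_{\Gamma_N}(x) \right){\rm Id},$$
so that the last boundary term on the left-hand side of the energy balance \eqref{eq:energybalance} reads
$$\frac{1}{2\lambda} \int_{0}^{t}\int_{\Gamma_D}|\sigma\nu|^2\, d \mathcal{H}^{n-1}\, ds+\frac{\lambda}{2} \int_{0}^{t}\int_{ \Gamma_N}|\sigma\nu|^2\, d \mathcal{H}^{n-1}\, ds.$$
This elementary observation is the source of the $\lambda$-weighted bounds on $\sigma_\lambda\nu$ used in the compactness argument below.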
We will need to strengthen assumption $(H_1)$ into
\noindent $(H'_1)$ {\bf Reference configuration.} Let $\Omega \subset \mathbb{R}^n$ be a bounded open set with $C^3$ boundary. We assume that $\partial \Omega$ is decomposed as the following disjoint union
$$\partial \Omega = \Gamma_D \cup \Gamma_N \cup \Sigma,$$
where $\Gamma_D$ and $\Gamma_N$ are open sets in the relative topology of $\partial\Omega$, and $\Sigma = \partial _{| \partial \Omega}\Gamma_D = \partial _{| \partial \Omega}\Gamma_N$ is an $(n-2)$-dimensional submanifold of class $C^3$.
Moreover, the initial condition needs to be adapted to our mixed boundary conditions.
\noindent $(H'_6)$ {\bf The initial conditions.} Let $u_0 \in H^1(\Omega; \mathbb{R}^n)$, $v_0 \in H^2(\Omega; \mathbb{R}^n) $, $e_0 \in L^2(\Omega; \mathbb{M}^n_{{\rm sym}})$, $p_0 \in L^2(\Omega; \mathbb{M}^n_{{\rm sym}})$ and $\sigma_0 := {\bf A} e_0 \in H^2(\Omega;\mathbb{M}^n_{\rm sym})$ be such that
$$\begin{cases}
Eu_0 = e_0 + p_0& \text{ in }\Omega,\\
v_0 = 0& \text{ on }\Gamma_D,\\
\sigma_0 \nu =0 & \text{ on }\Gamma_N,\\
\sigma_0 + B(0,r) \subset {\bf K} & \text{ in }\Omega \text{ for some }r>0.
\end{cases}$$
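As a simple (though degenerate) example, the trivial data $u_0=v_0=0$, $e_0=p_0=0$, hence $\sigma_0=0$, satisfy $(H'_6)$: the first three conditions are immediate, while $\sigma_0+B(0,r)=B(0,r)\subset \mathbf K$ for the radius $r$ appearing in \eqref{eq:ms2}.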
First, we are going to construct a sequence of initial data $(u^\lambda_0, v^\lambda_0, e^\lambda_0, p^\lambda_0)$ satisfying $(H_6)$ with $S=S_\lambda$, and approximating $(u_0,v_0,e_0,p_0)$ as $\lambda \to \infty$. This is the object of the following result.
\begin{lemma}
\label{lemma:6.1}
Let $n=2$, $3$. Under assumptions $(H'_1)$ and $(H'_6)$, for every $\lambda>0$, there exists $(v_0^\lambda, \sigma_0^\lambda) \in H^2(\Omega;\mathbb{R}^n) \times \mathcal K $ such that $(v_0^\lambda ,\sigma_0^\lambda) \to (v_0,\sigma_0)$ strongly in $ H^2(\Omega;\mathbb{R}^n) \times H( {\rm div},\Omega)$ as $\lambda \to \infty$ and
\begin{equation}
\label{approximate:initialdata}
\left( \lambda {\bf 1}_{\Gamma_D} + \frac 1\lambda {\bf 1}_{\Gamma_N} \right) v_0^\lambda + \sigma_0^\lambda \nu = 0 \quad \mathcal H^{n-1}\textrm{-a.e. on $\partial \Omega$}.
\end{equation}
\end{lemma}
\begin{proof}
Since $\Omega$ has a $C^3$ boundary, its outer normal $\nu$ belongs to $C^2(\partial\Omega;\mathbb{R}^n)$ and, thanks to the Trace Theorem in Sobolev spaces, the trace of $\sigma_0$ belongs to $H^\frac{3}{2}(\partial \Omega;\mathbb M^n_{\rm sym})$. As a consequence, the product $\sigma_0 \nu$ belongs to $H^{\frac32}(\partial\Omega;\mathbb{R}^n)$ and there exists an extension $\hat{v}_0 \in H^2(\Omega; \mathbb{R}^n)$ whose trace on $\partial\Omega$ coincides with $-\sigma_0 \nu$, with the estimate
$$\|\hat v_0\|_{H^2(\Omega;\mathbb{R}^n)} \leq C \|\sigma_0\nu\|_{H^{3/2}(\partial\Omega;\mathbb{R}^n)},$$
where $C>0$ is a constant only depending on $n$ and $\Omega$. For each $\lambda>0$, let us define
$$v_0^\lambda := v_0 + \lambda^{-1}\hat{v}_0 \in H^2(\Omega; \mathbb{R}^n).$$
It follows
that $v_0^\lambda \rightarrow v_0$ strongly in $H^2(\Omega; \mathbb{R}^n)$ as $\lambda \to \infty$. Now, we consider $z_0 \in H^1(\Omega;\mathbb{R}^n)$ as the unique weak solution of the boundary value problem
\begin{equation}
\label{eqlem:5.1}
\begin{cases}
z_0-{{\rm div}} (e(z_0)) =0 & {{\rm in}} \ \Omega ,\\
e(z_0) \nu = - v_0 & {{\rm on}} \ \partial \Omega.
\end{cases}
\end{equation}
According to Korn's inequality and the Lax-Milgram Lemma such a solution exists and is unique. Using that $\Omega$ has a $C^3$-boundary and that $v_0 \in H^{\frac32}(\partial\Omega;\mathbb{R}^n)$, elliptic regularity ensures that $z_0 \in H^3 (\Omega; \mathbb{R}^n)$. Let us define
$$\sigma_0^\lambda := \sigma_0+\lambda^{-1} e(z_0).$$
In particular, $ \sigma_0^\lambda \to \sigma_0$ strongly in $H({\rm div},\Omega)$ as $\lambda \to \infty$. On $\Gamma_D$, we observe that
$$\lambda {v_0^\lambda}_{| \Gamma_D} +{\sigma_0^\lambda \nu}_{| \Gamma_D} = \lambda {v_0}_{| \Gamma_D} + \hat{v_0}_{| \Gamma_D} + \sigma_0\nu_{| \Gamma_D}+ \frac{1}{\lambda} {e(z_0) \nu}_{| \Gamma_D} = 0,$$
where we have used the fact that $e(z_0) \nu = -v_0 = 0$ and $\hat v_0=-\sigma_0\nu$ on $\Gamma_D$.
Similarly, on $\Gamma_N$ we have
$$\frac{1}{\lambda} {v_0^\lambda}_{| \Gamma_N} +{\sigma_0^\lambda \nu}_{| \Gamma_N} = \frac{1}{\lambda} {v_0}_{| \Gamma_N} +\frac{1}{\lambda^2} \hat{v_0}_{| \Gamma_N} + \sigma_0\nu_{| \Gamma_N}+ \frac{1}{\lambda} {e(z_0) \nu}_{| \Gamma_N} = 0,$$
where we have used the fact that $\hat v_0=-\sigma_0\nu=0$ and $e(z_0)\nu = -v_0$ on $\Gamma_N$. We conclude \eqref{approximate:initialdata} thanks to the fact that $\partial \Omega = \Gamma_D \cup \Gamma_N \cup \Sigma$ and $\mathcal{H}^{n-1}(\Sigma)=0$.
It remains to check that $\sigma_0^\lambda \in \mathbf K$ a.e. in $\Omega$. To this aim, we have by the Sobolev embedding (recall that $n=2$ or $3$) that $e(z_0) \in H^2(\Omega;\mathbb{M}^n_{\rm sym}) \subset L^\infty(\Omega;\mathbb{M}^n_{\rm sym})$. Let $r>0$ be the constant given by the last property of hypothesis $(H'_6)$ and let $\lambda>0$ be large enough so that $\lambda^{-1} \|e(z_0)\|_{L^\infty(\Omega;\mathbb{M}^n_{\rm sym})} < r$. It then follows that $\sigma_0^\lambda \in \sigma_0 + B(0,r) \subset \mathbf K$ a.e. in $\Omega$.
\end{proof}
Given the initial data $(u_0,v_0^\lambda,e_0^\lambda:=\mathbf A^{-1}\sigma^\lambda_0,p_0^\lambda:=Eu_0-\mathbf A^{-1}\sigma_0^\lambda)$ satisfying $(H_6)$, we denote by $(u_\lambda,e_\lambda,p_\lambda)$ the associated solution given by Theorem \ref{theorem:wellposednesslevelS}. Our aim is to study the asymptotic behavior of the solutions $(u_\lambda,e_\lambda,p_\lambda)$ as $\lambda \to \infty$ in order to recover Dirichlet ($\Gamma_N = \emptyset$), Neumann ($\Gamma_D = \emptyset$) and, in all other cases, mixed boundary conditions.
Our main result is the following:
\begin{theorem}
\label{thm:compactness}
Assume that $(H'_1)$, $(H_2)$, $(H_3)$, $(H_5)$ and $(H'_6)$ hold. For each $\lambda>0$, let $(v_0^\lambda,\sigma_0^\lambda)$ be given by Lemma \ref{lemma:6.1}, and let $(u_\lambda,e_\lambda,p_\lambda)$ be the solution given by Theorem \ref{theorem:wellposednesslevelS} associated with the boundary matrix $S_\lambda$ defined in \eqref{eq:Slambda} and the initial data $(u_0,v_0^\lambda,e_0^\lambda:=\mathbf A^{-1}\sigma^\lambda_0,p_0^\lambda:=Eu_0-\mathbf A^{-1}\sigma_0^\lambda)$. Then,
$$\begin{cases}
u_\lambda \rightharpoonup u & \text{ weakly* in } W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n)),\\
e_\lambda \rightharpoonup e & \text{ weakly* in } W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),\\
\sigma_\lambda \rightharpoonup \sigma & \text{ weakly* in } W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),\\
p_\lambda(t) \rightharpoonup p(t) & \text{ weakly* in }\mathcal M(\Omega;\mathbb M^n_{\rm sym}) \text{ for all }t \in [0,T],
\end{cases}
$$
where $(u,e, p)$ is the unique triple satisfying
$$
\begin{cases}
u \in W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n)) \cap C^{0,1}([0,T];BD(\Omega)),\\
e \in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),\\
\sigma:=\mathbf A e \in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})) \cap L^\infty(0,T;H({\rm div},\Omega)),\\
p\in C^{0,1}([0,T];\mathcal M(\Omega \cup \Gamma_D;\mathbb M^n_{\rm sym})),
\end{cases}
$$
together with
\begin{enumerate}
\item The initial conditions:
$$u(0) = u_0, \quad \dot{u} (0) = v_0,\quad e (0) = e_0, \quad p (0) = p_0; $$
\item The kinematic compatibility: for all $t \in \left[0, T \right]$,
$$\begin{cases}
E u(t) = e(t) + p(t) & \textrm{in $\Omega$},\\
p(t) =- u(t) \odot \nu \mathcal{H}^{n-1} & \textrm{on $\Gamma_D$};
\end{cases}$$
\item The equation of motion:
$$\ddot{u} - {\rm{div}}\sigma =f \quad \text{in } L^2 (0,T;L^2(\Omega; \mathbb{R}^n)); $$
\item The stress constraint: for every $t \in \left[0, T \right]$,
$$\sigma(t) \in \mathbf{K} \quad \text{a.e.\ in } \Omega; $$
\item The boundary condition:
$$ \sigma \nu = 0 \quad \text{in } L^2(0,T; L^2(\Gamma_N; \mathbb{R}^n)); $$
\item The flow rule: if one of the following conditions is satisfied:
\begin{itemize}
\item[(i)] Dirichlet case: $\partial\Omega=\Gamma_D$,
\item[(ii)] Neumann case: $\partial\Omega=\Gamma_N$,
\item[(iii)] Mixed case in dimension $n=2$: $\Gamma_D \neq \emptyset$, $\Gamma_N\neq \emptyset$ and $\Sigma$ finite,
\item[(iv)] Mixed case in dimension $n=3$: $\Gamma_D \neq \emptyset$, $\Gamma_N\neq \emptyset$ and
$$\mathbf K=K_D \oplus (\mathbb{R} {\rm Id}):=\{\sigma \in \mathbb M^3_{\rm sym} : \, \sigma_D \in K_D\},$$
for some compact and convex set $K_D \subset \mathbb M^3_D$ containing zero in its interior,
\end{itemize}
then, for a.e. $t \in [0,T]$,
$$H (\dot{p}(t)) = [ \sigma(t): \dot{p}(t) ] \quad \text{in } \mathcal{M}(\Omega \cup \Gamma_D).$$
\end{enumerate}
\end{theorem}
As explained before, the solution $(u,e,p)$ to the previous boundary value problem will be obtained by means of an asymptotic analysis, as $\lambda \to \infty$, of the solution $(u_\lambda,e_\lambda,p_\lambda)$ of the dissipative boundary value problem of Theorem \ref{theorem:wellposednesslevelS}. This analysis is carried out in the spirit of \cite[Theorem 5.1]{BM}, which deals with the antiplane case.
\subsection{Weak compactness and passing to the limit into linear equations}
We observe that the constant $C_*>0$ appearing in estimate \eqref{posteriori} of Theorem \ref{theorem:wellposednesslevelS} depends on the norms $ \|u_0\|_{H^1(\Omega;\mathbb{R}^n)}$, $\|v_0^\lambda\|_{H^2(\Omega;\mathbb{R}^n)}$, $\|e_0^\lambda\|_{L^2(\Omega;\mathbb M^n_{\rm sym})}$, $\|\sigma^\lambda_0\|_{H({\rm div},\Omega)}$ and $\|p_0^\lambda\|_{L^2(\Omega;\mathbb M^n_{\rm sym})}$ of the initial data. Since, by Lemma \ref{lemma:6.1}, these quantities are bounded uniformly with respect to $\lambda$, the constant $C_*$ can be chosen independent of $\lambda$ as well. This is essential to get uniform bounds on the sequence $\{(u_\lambda,e_\lambda,p_\lambda)\}_{\lambda>0}$, and then weak compactness thereof.
The following compactness result follows from standard arguments as, e.g., in \cite[Section 5]{BM}. The weak convergences allow us to obtain, in the limit, the initial conditions, the kinematic compatibility, the equation of motion and the stress constraint.
\begin{lemma}\label{lem:compactness}
Assume that $(H'_1)$, $(H_2)$, $(H_3)$, $(H_5)$ and $(H'_6)$ hold. There exist a subsequence (not relabeled) and
$$
\begin{cases}
u \in W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n)) \cap C^{0,1}([0,T];BD(\Omega)),\\
e \in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),\\
\sigma\in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})) \cap L^\infty(0,T;H({\rm div},\Omega)),\\
p \in C^{0,1}([0,T];\mathcal M(\Omega;\mathbb M^n_{\rm sym})),
\end{cases}
$$
such that as $\lambda \to \infty$,
$$
\begin{cases}
u_\lambda \rightharpoonup u & \text{ weakly* in } W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n)),\\
e_\lambda \rightharpoonup e & \text{ weakly* in } W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),\\
\sigma_\lambda \rightharpoonup \sigma & \text{ weakly* in } W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),
\end{cases}
$$
and, for every $t \in [0,T]$,
$$
\begin{cases}
u_\lambda(t) \rightharpoonup u(t) & \text{ weakly in } L^2(\Omega;\mathbb{R}^n),\\
u_\lambda(t) \rightharpoonup u(t) & \text{ weakly* in } BD(\Omega),\\
\dot u_\lambda(t) \rightharpoonup \dot u(t) & \text{ weakly in } L^2(\Omega;\mathbb{R}^n),\\
e_\lambda(t) \rightharpoonup e(t) & \text{ weakly in } L^2(\Omega;\mathbb M^n_{\rm sym}),\\
\sigma_\lambda(t) \rightharpoonup \sigma(t) & \text{ weakly in } L^2(\Omega;\mathbb M^n_{\rm sym}),\\
p_\lambda(t) \rightharpoonup p(t) & \text{ weakly* in }\mathcal M(\Omega;\mathbb M^n_{\rm sym}).
\end{cases}
$$
Moreover, the following properties hold:
\begin{itemize}
\item the initial conditions: $ u(0) = u_0, \ \dot{u} (0) = v_0,\ e (0) = e_0, \ p (0) = p_0$;
\item the additive decomposition: for all $t \in [0, T]$,
$$E u(t) = e(t) + p(t) \quad \text{ in } \mathcal M(\Omega;\mathbb M^n_{\rm sym});$$
\item the equation of motion: $\ddot{u} - {\rm{div}}\sigma=f$ in $L^2 (0,T;L^2(\Omega; \mathbb{R}^n))$;
\item the stress constraint: for every $t \in \left[0, T \right]$,
$\sigma (t) =\mathbf A e(t) \in \mathbf{K}$ a.e in $\Omega$;
\item the Neumann condition: $\sigma\nu=0$ in $L^2(0,T;L^2(\Gamma_N;\mathbb{R}^n))$.
\end{itemize}
\end{lemma}
\begin{proof}
According to the energy balance \eqref{eq:energybalance} and estimate \eqref{posteriori}, we infer that
\begin{multline}
\|\dot{u}_\lambda\|_{L^\infty(0,T;L^2(\Omega;\mathbb{R}^n))} +\|\sigma_\lambda\|_{L^\infty(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))} + \|\dot p_\lambda\|_{L^1(0,T;\mathcal M(\Omega;\mathbb M^n_{\rm sym}))}\\
+ \frac{1}{\sqrt\lambda} \|\sigma_\lambda \nu\|_{L^2(0,T;L^2({ \Gamma_D};\mathbb{R}^n))} { + \sqrt{\lambda} \|\sigma_\lambda \nu\|_{L^2(0,T;L^2({\Gamma_N};\mathbb{R}^n))} }\\
+ \int_0^T\int_{\partial \Omega}{\psi_\lambda(x, \dot{u}_\lambda)\, d\mathcal{H}^{n-1}}\, ds \leq C, \label{eq:differenceenergies2}
\end{multline}
where $\psi_\lambda$ is given by \eqref{eq:psi} with $S=S_\lambda$, and
$$\norm{\ddot{u}_\lambda}^2_{L^\infty (0,T; L^2 (\Omega; \mathbb{R}^n))} + \norm{\dot{e}_\lambda}^2_{L^\infty(0,T; L^2 (\Omega; \mathbb{M}^n_{{\rm sym}}))} \le C_*.$$
In both previous estimates, the constants $C>0$ and $C_*>0$ are independent of $\lambda$. Using that $u_\lambda \in W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n))$ and $u_0\in L^2(\Omega;\mathbb{R}^n)$, we get
$$\sup_{\lambda>0}\|u_\lambda\|_{W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n))}<\infty,$$
and similarly, since $e_\lambda \in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))$ and $e_0 \in L^2(\Omega;\mathbb M^n_{\rm sym})$,
$$\sup_{\lambda>0}\|e_\lambda\|_{W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))}<\infty.$$
We can thus extract a subsequence (not relabeled) and find $u \in W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n))$ and $e\in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))$ such that, as $\lambda \to \infty$,
$$
\begin{cases}
u_\lambda \rightharpoonup u\quad \text{ weakly* in } W^{2,\infty}(0,T;L^2(\Omega;\mathbb{R}^n)),\\
e_\lambda \rightharpoonup e \quad \text{ weakly* in } W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})).
\end{cases}
$$
Setting $\sigma:=\mathbf Ae\in W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))$ we also have
$$\sigma_\lambda \rightharpoonup \sigma\quad \text{ weakly* in } W^{1,\infty}(0,T;L^2(\Omega;\mathbb M^n_{\rm sym})),$$
and using the equation of motion leads to
$${\rm div}\sigma_\lambda=\ddot u_\lambda-f \rightharpoonup \ddot u-f \quad \text{ weakly* in }L^\infty(0,T;L^2(\Omega;\mathbb{R}^n)).$$
By uniqueness of the distributional limit, we infer that $ {\rm div} \sigma=\ddot u-f \in L^\infty(0,T;L^2(\Omega;\mathbb{R}^n))$ and, thus, $\sigma \in L^\infty(0,T;H({\rm div},\Omega))$.
Owing to the Ascoli--Arzel\`a Theorem, for every $t \in [0,T]$,
$$
\begin{cases}
u_\lambda(t) \rightharpoonup u(t) & \text{ weakly in } L^2(\Omega;\mathbb{R}^n),\\
\dot u_\lambda(t) \rightharpoonup \dot u(t) & \text{ weakly in } L^2(\Omega;\mathbb{R}^n),\\
e_\lambda(t) \rightharpoonup e(t) & \text{ weakly in } L^2(\Omega;\mathbb M^n_{\rm sym}),\\
\sigma_\lambda(t) \rightharpoonup \sigma(t) & \text{ weakly in } L^2(\Omega;\mathbb M^n_{\rm sym}).
\end{cases}
$$
We now derive weak compactness on the sequence $\{p_\lambda\}_{\lambda>0}$ of plastic strains. Thanks to the energy balance between two arbitrary times $0 \leq t_1 \leq t_2 \leq T$ together with \eqref{eq:coercH},
\begin{align}
r\int_{t_1}^{t_2}|\dot p_\lambda(s)|(\Omega)\, ds \leq \int_{t_1}^{t_2}{H(\dot{p}_\lambda (s))(\Omega)\, ds} & \leq \frac{1}{2} \int_{\Omega}(\dot u_\lambda(t_1)-\dot u_\lambda(t_2))\cdot (\dot u_\lambda(t_1)+\dot u_\lambda(t_2))\, dx \nonumber\\
&\quad + \frac{1}{2} \int_{\Omega}(\sigma_\lambda(t_1)-\sigma_\lambda(t_2)): (e_\lambda(t_1)+e_\lambda(t_2))\,dx\nonumber\\
&\quad +\int_{t_1}^{t_2}\int_\Omega f\cdot \dot u_\lambda\, dx \, ds. \label{eq:differenceenergies}
\end{align}
By $(H_5)$, using that $f \in L^\infty(0,T;L^2(\Omega;\mathbb{R}^n))$, that $\{\dot u_\lambda\}_{\lambda>0}$ is bounded in $L^\infty(0,T;L^2(\Omega;\mathbb{R}^n))$ and that $\{\sigma_\lambda\}_{\lambda>0}$ is bounded in $L^\infty(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))$, we can find a constant $C>0$ independent of $\lambda$ such that
$$|p_\lambda(t_1)-p_\lambda(t_2)|(\Omega) \leq \int_{t_1}^{t_2}|\dot p_\lambda(s)|(\Omega)ds\leq C(t_2-t_1).$$
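To make the Lipschitz bound explicit (a standard computation, which also uses the uniform bounds on $\ddot u_\lambda$ and $\dot e_\lambda$ from the second estimate above), note that by the Cauchy--Schwarz inequality the first term on the right-hand side of \eqref{eq:differenceenergies} satisfies
$$\frac{1}{2} \int_{\Omega}(\dot u_\lambda(t_1)-\dot u_\lambda(t_2))\cdot (\dot u_\lambda(t_1)+\dot u_\lambda(t_2))\, dx \le \|\ddot u_\lambda\|_{L^\infty(0,T;L^2(\Omega;\mathbb{R}^n))}\, \|\dot u_\lambda\|_{L^\infty(0,T;L^2(\Omega;\mathbb{R}^n))}\,(t_2-t_1),$$
and the two remaining terms are estimated analogously (for the stress term, one uses that $\dot\sigma_\lambda=\mathbf A\dot e_\lambda$ is bounded in $L^\infty(0,T;L^2(\Omega;\mathbb M^n_{\rm sym}))$). Dividing by $r>0$ then yields the bound above.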
Applying the Ascoli--Arzel\`a Theorem, we extract a further subsequence (independent of time) and find $p \in C^{0,1}([0,T];\mathcal M(\Omega;\mathbb M^n_{\rm sym}))$ such that for all $t \in [0,T]$,
$$p_\lambda(t) \rightharpoonup p(t) \quad \text{ weakly* in }\mathcal M(\Omega;\mathbb M^n_{\rm sym}).$$
Using the additive decomposition $Eu_\lambda=e_\lambda+p_\lambda$ in $\Omega$, the previously established weak convergences show that $u \in C^{0,1}([0,T];BD(\Omega))$ and, for all $t \in [0,T]$,
$$u_\lambda(t) \rightharpoonup u(t) \quad \text{ weakly* in }BD(\Omega).$$
It is now possible to pass to the limit in the initial condition
$$ u(0) = u_0, \quad \dot{u}(0) = v_0,\quad e(0) = e_0, \quad p(0) = p_0,$$
in the additive decomposition: for all $t \in [0, T]$,
$$E u(t) = e(t) + p(t) \quad \textrm{in $\mathcal M(\Omega;\mathbb M^n_{\rm sym})$},$$
and in the equation of motion
$$\ddot{u} - {\rm{div}}\sigma =f \quad \text{ in } L^2 (0,T;L^2(\Omega; \mathbb{R}^n)).$$
Since the constraint set $\mathbf{K}$ is convex and closed, hence closed under weak $L^2(\Omega;\mathbb M^n_{\rm sym})$ convergence, we further obtain that for every $t \in [0, T ]$, $\sigma(t) \in \mathbf{K}$ a.e. in $\Omega$.
It remains to show the Neumann boundary condition $\sigma \nu = 0$ on $\Gamma_N$. Since $\sigma_\lambda \rightharpoonup \sigma$ weakly in $L^2(0,T;H( {\rm div},\Omega))$, we deduce that $\sigma_\lambda\nu \rightharpoonup \sigma\nu$ weakly in $L^2(0,T;H^{-1/2}(\partial\Omega;\mathbb{R}^n))$. On the other hand, using estimate \eqref{eq:differenceenergies2}, we have
$$ \|\sigma_\lambda \nu \|_{L^2(0,T;L^2(\Gamma_N;\mathbb{R}^n))} \leq \frac {C}{\sqrt\lambda} \to 0,$$
as $\lambda\to \infty$, hence $\sigma \nu=0$ in $L^2(0,T;L^2(\Gamma_N;\mathbb{R}^n))$.
\end{proof}
\subsection{Flow rule}
It remains to prove the flow rule, which will be done by passing to the limit in the energy balance obtained in Theorem \ref{theorem:wellposednesslevelS}, namely, for all $t\in [0,T]$,
\begin{multline}
\frac{1}{2} \int_{\Omega}{ \abs{\dot{u}_\lambda (t)}^2\, dx} + \int_{\Om} {Q}(e_\lambda(t))\, dx + \int_0^t H (\dot p_\lambda(s))(\Omega)\,ds +\int_0^t \int_{\partial \Omega} \psi_\lambda(x, \dot{u}_\lambda)\, d\mathcal{H}^{n-1}\, ds \\
\leq \frac{1}{2} \int_{\Omega}{\abs{v_0}^2 \, dx} + \int_{\Om} {Q}(e_0) \, dx + \int_0^t \int_{\Om}{f\cdot \dot{u}_\lambda \, dx}\, ds .\label{eq:inqdifferenceenergies}
\end{multline}
The first two terms easily pass to the lower limit by lower semicontinuity of the norm with respect to weak $L^2$-convergence. The main issue is to pass to the (lower) limit in the last two terms on the left-hand side of the previous inequality. The following result will enable us to obtain a lower bound.
\begin{lemma}
\label{prop:relaxationdirichletpart}
Let $\lk (\hat u_\lambda, \hat e_\lambda, \hat p_\lambda) \rk_{\lambda>0} \subset [BD(\Omega) \cap L^2(\Omega;\mathbb{R}^n)] \times L^2(\Omega;\mathbb M^n_{\rm sym}) \times \mathcal M(\Omega;\mathbb M^n_{\rm sym})$ be such that $E\hat u_\lambda=\hat e_\lambda+\hat p_\lambda$ in $\Omega$,
and
$$
\begin{cases}
\hat u_\lambda \rightharpoonup \hat u & \text{ weakly in } L^2(\Omega;\mathbb{R}^n),\\
\hat u_\lambda \rightharpoonup \hat u & \text{ weakly* in } BD(\Omega),\\
\hat e_\lambda \rightharpoonup \hat e & \text{ weakly in } L^2(\Omega;\mathbb M^n_{\rm sym}),\\
\hat p_\lambda \rightharpoonup \hat p & \text{ weakly* in }\mathcal M( \Omega ;\mathbb M^n_{\rm sym}),
\end{cases}
$$
as $\lambda \to \infty$, for some $(\hat u, \hat e, \hat p) \in [BD(\Omega) \cap L^2(\Omega;\mathbb{R}^n)] \times L^2(\Omega;\mathbb M^n_{\rm sym}) \times \mathcal M(\Omega;\mathbb M^n_{\rm sym})$. Then,
\begin{equation}
\label{eq:rd1}
H(\hat p)(\Omega)+\int_{{\Gamma_D}} H(-\hat u \odot \nu)\, d\mathcal H^{n-1}\le \liminf_{\lambda \rightarrow \infty}{\left( H(\hat p_\lambda)(\Omega) + \int_{\partial \Omega}{ \psi_\lambda(x,\hat u_\lambda)\, d \mathcal{H}^{n-1}} \right)}.
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality, we assume that the right-hand side of \eqref{eq:rd1} is finite. Let $(\lambda_k)_{k\in \mathbb{N}}$ be such that $\lambda_k \nearrow \infty$ and
$$\liminf_{\lambda \rightarrow \infty}{\left( H(\hat p_\lambda)(\Omega) + \int_{\partial \Omega}{ \psi_\lambda(x,\hat u_\lambda)\, d \mathcal{H}^{n-1}} \right)}= \lim_{k \rightarrow \infty}{\left( H(\hat p_{\lambda_k})(\Omega)+ \int_{\partial \Omega}{ \psi_{\lambda_k}(x,\hat u_{\lambda_k})\, d \mathcal{H}^{n-1}} \right)}.$$
As a consequence, there exists a constant $c>0$ (independent of $k$) such that
$$\int_{\partial \Omega}{ \psi_{\lambda_k}(x,\hat u_{\lambda_k})\, d \mathcal{H}^{n-1}}\leq c$$
for all $k\in \mathbb N$. By definition \eqref{eq:psi} of $\psi_\lambda$ (see also \cite[Lemma 4.9]{BC}), there exists a function $v_{k} \in L^2 (\partial \Omega; \mathbb{R}^n)$ such that
\begin{eqnarray*}
\int_{\partial \Omega}{\psi_{\lambda_k}(x,\hat u_{\lambda_k})\, d \mathcal{H}^{n-1}} & =& \frac{1}{2} \int_{\partial \Omega} S_{\lambda_k}(\hat u_{\lambda_k} - v_k)\cdot (\hat u_{\lambda_k} - v_k)\, d \mathcal{H}^{n-1} + \int_{\partial \Omega}{H(-v_{k} \odot \nu ) \, d \mathcal{H}^{n-1}}\\
& \geq & \frac{\lambda_k}{2} \int_{\Gamma_D}|\hat u_{\lambda_k} - v_k|^2 \, d \mathcal{H}^{n-1} + \int_{\partial \Omega}{ H(-v_{k} \odot \nu ) \, d \mathcal{H}^{n-1}}.
\end{eqnarray*}
By nonnegativity of $H$ and since $\lambda_k \nearrow \infty$, we infer that $\hat u_{\lambda_k} - v_{k}\to 0$ in $L^2(\Gamma_D;\mathbb{R}^n)$ as $k \to \infty$. Moreover,
\begin{align}
&H(\hat p_{\lambda_k})(\Omega) + \int_{\partial \Omega}{ \psi_{\lambda_k}(x,\hat u_{\lambda_k})\, d \mathcal{H}^{n-1}} \nonumber \\
& \qquad \ge H(\hat p_{\lambda_k})(\Omega) + \int_{\Gamma_D}{ H(-v_{k} \odot \nu)\, d \mathcal{H}^{n-1}} \nonumber \\
& \qquad \ge H_\mu(\hat p_{\lambda_k})(\Omega) + \int_{{ \Gamma_D}}{ H_\mu(-v_{k} \odot \nu)\, d \mathcal{H}^{n-1}},\label{eq:fre1}
\end{align}
where $H_\mu:\mathbb M^n_{\rm sym} \to \mathbb{R}^+$ is the Moreau--Yosida transform of $H$ (see \cite[Lemma 1.61]{AFP} or \cite[Lemma 5.30]{FL}), defined by
$$H_\mu(p) := \inf_{q \in \mathbb M^n_{\rm sym} }\{H(q) + \mu \abs{p-q}\} \quad \text{ for all }p \in \mathbb M^n_{\rm sym} .$$
We recall that the Moreau--Yosida transform $H_\mu$ enjoys the following properties:
\begin{enumerate}
\item For all $\mu > 0$ we have that $H_\mu \le H$;
\item The function $H_\mu$ is $\mu$-Lipschitz;
\item The function $H_\mu$ is convex as the inf-convolution between the proper convex functions $H$ and $\mu| \cdot |$ (see e.g. \cite[Theorem 5.4]{R});
\item For all $p \in \mathbb M^n_{\rm sym}$, $H_\mu (p) \to H(p)$ as $\mu \rightarrow \infty$.
\end{enumerate}
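As a simple one-dimensional illustration of these properties (this special case is not used in the sequel), take $H(q)=c\,|q|$ on $\mathbb{R}$ with $c>0$; a direct computation of the inf-convolution gives
$$H_\mu(p)=\inf_{q\in\mathbb{R}}\bigl\{c\,|q|+\mu\,|p-q|\bigr\}=\min(c,\mu)\,|p|,$$
which is indeed bounded by $H$, $\mu$-Lipschitz, convex, positively one-homogeneous, and converges to $H(p)$ as $\mu\to\infty$.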
By the $\mu$-Lipschitz continuity of $H_\mu$, adding and subtracting the term $ \int_{\Gamma_D}{H_\mu (-\hat u_{\lambda_k} \odot \nu)\, d \mathcal{H}^{n-1}}$ in (\ref{eq:fre1}) yields
\begin{align}
& H(\hat p_{\lambda_k})(\Omega)+ \int_{\partial \Omega}{\psi_{\lambda_k}(x,\hat u_{\lambda_k})\, d \mathcal{H}^{n-1}} \nonumber \\
& \ge H_\mu(\hat p_{\lambda_k})(\Omega)+ \int_{{\Gamma_D}}{H_\mu(-\hat u_{\lambda_k} \odot \nu)\, d \mathcal{H}^{n-1}} -\mu \int_{ \Gamma_D}{\abs{\hat u_{\lambda_k} - v_{k}} \, d\mathcal{H}^{n-1}} .\label{eq:fr3}
\end{align}
Passing to the limit as $k \to \infty$ in \eqref{eq:fr3}, we obtain
\begin{align}
&\lim_{k \rightarrow \infty}{\left( H(\hat p_{\lambda_k})(\Omega) + \int_{\partial \Omega}{\psi_{\lambda_k}(x,\hat u_{\lambda_k})\, d \mathcal{H}^{n-1}} \right)} \nonumber \\
& \quad \ge \liminf_{k \rightarrow \infty}{\left( H_\mu(\hat p_{\lambda_k})(\Omega)+ \int_{\Gamma_D}{ H_\mu(-\hat u_{\lambda_k} \odot \nu)\, d \mathcal{H}^{n-1}} \right)}. \end{align}
Let $U \subset \mathbb{R}^n$ be an open set such that $\Gamma_D=U \cap \partial\Omega$, and let $\tilde \Omega:=\Omega \cup U$. We extend $(\hat u_\lambda,\hat e_\lambda,\hat p_\lambda)$ to $\tilde \Omega$ as
$$\tilde u_\lambda:=
\begin{cases}
\hat u_\lambda & \text{ in }\Omega,\\
0 & \text{ in }\tilde \Omega \setminus \Omega,
\end{cases}
\qquad
\tilde e_\lambda:=
\begin{cases}
\hat e_\lambda & \text{ in }\Omega,\\
0 & \text{ in }\tilde \Omega \setminus \Omega,
\end{cases}
$$
and
$$\tilde p_\lambda:=E\tilde u_\lambda-\tilde e_\lambda=\hat p_\lambda \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \Omega - \hat u_\lambda \odot \nu \mathcal H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} {\Gamma_D}.$$
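For the reader's convenience, we recall the standard computation in $BD$ behind the last identity: since $\tilde u_\lambda$ vanishes on $\tilde \Omega \setminus \Omega$ and $\partial\Omega\cap\tilde\Omega=\Gamma_D$, the measure $E\tilde u_\lambda$ consists of the part inside $\Omega$ and of a jump part across $\Gamma_D$,
$$E\tilde u_\lambda = E\hat u_\lambda \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \Omega + (0-\hat u_\lambda)\odot \nu\, \mathcal H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \Gamma_D \quad \text{in }\mathcal M(\tilde\Omega;\mathbb M^n_{\rm sym}),$$
where $\hat u_\lambda$ stands for its trace on $\Gamma_D$ and $\nu$ is the outer unit normal to $\Omega$; subtracting $\tilde e_\lambda$ gives the above expression of $\tilde p_\lambda$.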
Similarly, we set
$$\tilde u:=
\begin{cases}
\hat u & \text{ in }\Omega,\\
0 & \text{ in }\tilde \Omega \setminus \Omega,
\end{cases}
\qquad
\tilde e:=
\begin{cases}
\hat e & \text{ in }\Omega,\\
0 & \text{ in }\tilde \Omega \setminus \Omega.
\end{cases}
$$
Note that $\tilde p_\lambda \rightharpoonup \tilde p$ weakly* in $\mathcal M(\tilde\Omega;\mathbb M^n_{\rm sym})$ with $\tilde p=E\tilde u-\tilde e=\hat p \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \Omega - \hat u \odot \nu \mathcal H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} { \Gamma_D}$. Using that $H_\mu$ is a continuous, convex and positively one homogeneous function with $H_\mu(0)=0$, we can apply Reshetnyak's lower semicontinuity Theorem (see \cite[Theorem 2.38]{AFP}) to get that
\begin{multline*}
\liminf_{k \rightarrow \infty}{\left( H_\mu(\hat p_{\lambda_k})(\Omega)+ \int_{{ \Gamma_D}}{H_\mu(-\hat u_{\lambda_k} \odot \nu)\, d \mathcal{H}^{n-1}} \right)}\\
=\liminf_{k \rightarrow \infty} H_\mu (\tilde p_{\lambda_k})(\tilde\Omega) \geq H_\mu (\tilde p)(\tilde \Omega)\\
= H_\mu (\hat p)(\Omega)+ \int_{{ \Gamma_D}}{ H_\mu(-\hat u \odot \nu)\, d \mathcal{H}^{n-1}}.
\end{multline*}
We have thus established that for all $\mu>0$,
$$\liminf_{\lambda \rightarrow \infty}{\left( H(\hat p_\lambda)(\Omega) + \int_{\partial \Omega}{\psi_\lambda(x,\hat u_\lambda)\, d \mathcal{H}^{n-1}} \right)}
\geq H_\mu (\hat p)(\Omega) + \int_{{ \Gamma_D}}{H_\mu(-\hat u \odot \nu)\, d \mathcal{H}^{n-1}}.$$
We can now pass to the limit as $\mu \to \infty$ owing to the Monotone Convergence Theorem to get that
$$\liminf_{\lambda \rightarrow \infty}{\left( H(\hat p_\lambda)(\Omega) + \int_{\partial \Omega}{\psi_\lambda(x,\hat u_\lambda)\, d \mathcal{H}^{n-1}} \right)}
\geq H (\hat p)(\Omega) + \int_{{ \Gamma_D}}{H(-\hat u \odot \nu)\, d \mathcal{H}^{n-1}},$$
which leads to the desired lower bound.
\end{proof}
We are now in a position to prove a lower bound energy inequality. Since, for all $t \in [0,T]$, we have $\dot u_\lambda(t) \rightharpoonup \dot u(t)$ weakly in $L^2(\Omega;\mathbb{R}^n)$ and $e_\lambda(t) \rightharpoonup e(t)$ weakly in $L^2(\Omega;\mathbb M^n_{\rm sym})$, we get by weak lower semicontinuity of the norm that
$$\frac{1}{2} \int_{\Omega}{|\dot{u}(t)|^2 \, dx} + \mathcal{Q}(e(t))\leq \liminf_{\lambda \to \infty} \left\{\frac{1}{2} \int_{\Omega}{\abs{\dot{u}_\lambda (t)}^2\, dx} + \mathcal{Q}(e_\lambda(t))\right\}.$$
To pass to the lower limit in the remaining terms in the left-hand side of the energy inequality \eqref{eq:inqdifferenceenergies}, we consider a partition $0=t_0 \le t_1 \le \ldots \le t_N = t$ of the time interval $[0, t]$. By convexity of $H$ and $\psi_\lambda(x,\cdot)$, we infer from Jensen's inequality that
\begin{multline*}
\int_0^t H(\dot{p}_\lambda(s))(\Omega) \, ds + \int_0^t \int_{\partial \Omega} \psi_\lambda(x, \dot{u}_\lambda (s))\, d\mathcal{H}^{n-1}\, ds \\
\ge \sum_{i =1}^{N} \left\{ H(p_\lambda(t_i) - p_\lambda(t_{i-1}) )(\Omega) + \int_{\partial \Omega} \psi_\lambda(x, u_\lambda (t_{i})-u_\lambda (t_{i-1}))\, d \mathcal{H}^{n-1}\right\}.
\end{multline*}
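For instance, for the first term, the convexity and positive one-homogeneity of $p\mapsto H(p)(\Omega)$ give, on each subinterval with $t_{i-1}<t_i$ (the case $t_{i-1}=t_i$ being trivial),
$$\int_{t_{i-1}}^{t_i} H(\dot{p}_\lambda(s))(\Omega) \, ds \ \ge\ (t_i-t_{i-1})\, H\Big(\frac{p_\lambda(t_i)-p_\lambda(t_{i-1})}{t_i-t_{i-1}}\Big)(\Omega) \ =\ H\bigl(p_\lambda(t_i)-p_\lambda(t_{i-1})\bigr)(\Omega),$$
which, after summing over $i$, accounts for the first sum on the right-hand side.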
Since, for all $0 \le i \le N$ we have that
$$
\begin{cases}
u_\lambda(t_i) \rightharpoonup u(t_i) & \text{ weakly in } L^2(\Omega;\mathbb{R}^n),\\
u_\lambda(t_i) \rightharpoonup u(t_i) & \text{ weakly* in } BD(\Omega),\\
e_\lambda(t_i) \rightharpoonup e(t_i) & \text{ weakly in } L^2(\Omega;\mathbb M^n_{\rm sym}),\\
p_\lambda(t_i) \rightharpoonup p(t_i) & \text{ weakly* in }\mathcal M(\Omega;\mathbb M^n_{\rm sym}),
\end{cases}
$$
we can apply Lemma \ref{prop:relaxationdirichletpart} to get that
$$ \liminf_{\lambda \rightarrow \infty}\left(\int_0^t H(\dot{p}_\lambda(s))(\Omega)\, ds + \int_0^t \int_{\partial \Omega} \psi_\lambda(x, \dot{u}_\lambda)\, d\mathcal{H}^{n-1}\, ds\right) \ge \sum_{i =1}^{N} H(p(t_i) - p(t_{i-1}) )(\Omega \cup \Gamma_D),$$
where the measure $p(t)$ is extended to $\Gamma_D$ by setting
$$p(t)\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\Gamma_D := -u(t)\odot\nu \mathcal H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}\Gamma_D.$$
Passing to the supremum with respect to all partitions, we deduce that
$$\mathcal V_{\mathcal H}(p;0,t):=\sup\left\{\sum_{i=1}^N H(p(t_i) - p(t_{i-1}) )(\Omega \cup \Gamma_D) :\; 0=t_0 \leq t_1 \leq \cdots \leq t_N=t, \, N \in \mathbb N\right\}<\infty.$$
Using \cite[Theorem 7.1]{DMDSM}\footnote{Note that \cite[Theorem 7.1]{DMDSM} is stated for functions $H$ which are bounded from above, which is not our case here because $H$ is allowed to take the value $+\infty$. However, a careful inspection of the proof of \cite[Theorem 7.1]{DMDSM} shows the validity of this result in our case thanks to the additional property $\mathcal V_{\mathcal H}(p;0,t)<\infty$.}, we get that
$$ \liminf_{\lambda \rightarrow \infty}\left(\int_0^t H(\dot{p}_\lambda(s))(\Omega)\, ds + \int_0^t \int_{\partial \Omega} \psi_\lambda(x, \dot{u}_\lambda)\, d\mathcal{H}^{n-1}\, ds\right) \ge \int_0^t H(\dot p(s) )(\Omega \cup \Gamma_D)\, ds.$$
Passing to the lower limit in \eqref{eq:inqdifferenceenergies} as $\lambda \to \infty$ yields
\begin{multline}\label{eq:ineq1}
\frac{1}{2} \int_{\Omega}{|\dot{u} (t)|^2 \, dx} + \mathcal{Q}(e(t))+\int_0^t H(\dot p(s) )(\Omega \cup \Gamma_D) \, ds \\
\leq \frac{1}{2} \int_{\Omega}{ \abs{v_0}^2 \, dx} + \mathcal Q(e_0) + \int_0^t \int_{\Om}{ f\cdot \dot{u}\, dx}\, ds.
\end{multline}
The proof of the other energy inequality relies on the convexity inequality proved in Section \ref{sec:duality}. Indeed, under one of the following assumptions:
\begin{itemize}
\item $\partial\Omega=\Gamma_D$;
\item $\partial\Omega=\Gamma_N$;
\item $n=2$ and $\Sigma$ is a finite set;
\item $n=3$ and $\mathbf K=K_D \oplus (\mathbb{R}\, {\rm Id})$, for some compact and convex set $K_D \subset \mathbb M^3_D$ containing $0$ in its interior;
\end{itemize}
we can appeal to Proposition \ref{prop:an1}, Proposition \ref{prop:n=2} or Proposition \ref{prop:n=3}. Indeed, for a.e. $t \in [0,T]$, we have $(\dot u(t),\dot e(t),\dot p(t)) \in \mathcal A_0$, $\sigma(t) \in \mathcal K \cap \mathcal S_0$ and $H(\dot p(t))$ is a finite measure (by \eqref{eq:ineq1}). As a consequence, for a.e. $t \in [0,T]$, the duality pairing $[\sigma(t) \colon \dot p(t)] \in \mathcal D'(\mathbb{R}^n)$ is well defined and it extends to a bounded Radon measure supported in $\overline \Omega$ with
\begin{equation}
H(\dot p(t)) \geq [\sigma(t) \colon \dot p(t)]\quad\text{in }\mathcal M(\mathbb{R}^n)\,. \label{ea:reverseinequality}
\end{equation}
Since the nonnegative measure $H(\dot p(t)) - [\sigma(t) \colon \dot p(t)]$ is compactly supported in $\overline\Omega$, we can evaluate its mass by taking the test function $\varphi \equiv 1$ in Definition \ref{definition:dualityMixed}. We then obtain that for a.e. $t \in [0,T]$,
$$0 \leq H(\dot p(t))(\Omega \cup \Gamma_D) +\int_\Omega \sigma(t) : \dot e(t)\, dx + \int_\Omega \dot u(t) \cdot {\rm div} \sigma(t) \, dx.$$
Using the equation of motion and the regularity properties of $\dot u$ and $e$, we can integrate by parts with respect to time and get that
\begin{multline*}
0 \leq \int_0^t H(\dot p(s))(\Omega \cup \Gamma_D)\, ds+ \mathcal Q(e(t)) -\mathcal Q(e_0)\\
+\frac12 \int_\Omega |\dot u(t)|^2 \, dx - \frac12 \int_\Omega |v_0|^2 \, dx -\int_0^t \int_\Omega f\cdot \dot u\, dx\, ds.
\end{multline*}
Owing to the first energy inequality \eqref{eq:ineq1}, we deduce that the last expression is zero, which implies that the nonnegative measure $H(\dot p(t)) - [\sigma(t) \colon \dot p(t)]$ has zero mass in $\overline\Omega$. This in turn implies that this measure vanishes in $\overline \Omega$, in other words the flow rule $H(\dot p(t)) = [\sigma(t) \colon \dot p(t)]$ in $\mathcal M(\overline \Omega)$ is satisfied. Finally, since $H(\dot p(t))$ is concentrated on $\Omega \cup \Gamma_D$, it follows that $[\sigma(t):\dot p(t)]$ vanishes on $\partial\Omega \setminus \Gamma_D$ and that the flow rule $H(\dot p(t))=[\sigma(t):\dot p(t)]$ holds in $\mathcal M(\Omega \cup \Gamma_D)$.
\subsection{Uniqueness}
Let $(u_1, e_1,p_1)$ and $(u_2,e_2,p_2)$ be two solutions given by Theorem \ref{thm:compactness}. Subtracting the equations of motion of each solution, we have
$$\ddot{u}_1 - \ddot{u}_2 - {\rm div}(\sigma_1 - \sigma _2) = 0 \quad \textrm{in } L^2(0,T; L^2 (\Omega; \mathbb{R}^n)).$$
Taking the test function $\varphi := {\bf 1}_{[0,t]} (\dot{u}_1 - \dot{u}_2) \in L^2(0,T; L^2(\Omega; \mathbb{R}^n))$, we deduce
\begin{equation}
\label{eq:uniqueness1}
\int_{0}^{t}{\int_{\Om}{(\ddot{u}_1 - \ddot{u}_2)\cdot(\dot{u}_1 - \dot{u}_2)\,dx}\,ds} - \int_{0}^{t}{\int_{\Om}{ ( {\rm div}(\sigma_1 - \sigma _2)) \cdot (\dot{u}_1 - \dot{u}_2)\, dx }\, ds} = 0.
\end{equation}
Since $\ddot{u}_1 - \ddot{u}_2 \in L^2(0,T; L^2 (\Omega; \mathbb{R}^n))$ and $\dot{u} _1 (0) = \dot{u} _2 (0) = v_0$, we infer that
\begin{equation}\label{eq:1406}
\int_{0}^{t}{\int_{\Om}{(\ddot{u}_1 (s) - \ddot{u}_2(s))\cdot(\dot{u}_1 (s)- \dot{u}_2 (s))\,dx} \,ds} = \frac{\norm{\dot{u}_1 (t) - \dot{u}_2 (t) }^2_{L^2(\Omega; \mathbb{R}^n)} }{2}.
\end{equation}
We already know that, for a.e. $s \in [0,T]$, the distributions $[\sigma_1(s):\dot p_1(s)]$ and $[\sigma_2(s):\dot p_2(s)]$ belong to $\mathcal M(\Omega \cup \Gamma_D)$. Moreover, since $(\dot u_1(s),\dot e_1(s),\dot p_1(s)),\, (\dot u_2(s),\dot e_2(s),\dot p_2(s)) \in \mathcal A_0$, $\sigma_1(s), \,\sigma_2(s) \in \mathcal S_0 \cap \mathcal K$ and $H(\dot p_1(s)), \, H(\dot p_2(s))$ are finite measures, we can appeal to Propositions \ref{prop:an1}, \ref{prop:n=2} and \ref{prop:n=3}, which state that $[\sigma_2(s) \colon \dot p_1(s)]$ and $ [\sigma_1(s) \colon \dot p_2(s)]$ extend to bounded Radon measures supported in $\overline \Omega$ with
$$[\sigma_1 (s): \dot{p}_1 (s)] = {H} (\dot{p}_1 (s)) \ge [\sigma_2 (s): \dot{p}_1 (s)] \quad \text{ in }\mathcal M(\mathbb{R}^n),$$
and
$$ [\sigma_2 (s): \dot{p}_2 (s)] = {H} (\dot{p}_2 (s)) \ge [\sigma_1 (s): \dot{p}_2 (s)] \quad \text{ in }\mathcal M(\mathbb{R}^n).$$
As a consequence, the measure $ [(\sigma_1(s) - \sigma_2(s)) : (\dot{p}_1(s) - \dot{p}_2(s))] $ is nonnegative. Furthermore, by the definition of stress duality (see Definition \ref{definition:dualityMixed} with the test function $\varphi\equiv 1$ and $g=0$), we infer that
\begin{eqnarray}
0 & \leq & \int_0^t [(\sigma_1(s) - \sigma_2(s)) : (\dot{p}_1(s) - \dot{p}_2(s))](\overline\Omega)\nonumber\\
& = & -\int_{0}^{t}{\int_{\Om}{ (\sigma_1(s) - \sigma_2(s)) : (\dot{e}_1(s) - \dot{e}_2(s)) \, dx }\, ds}\nonumber\\
&&\qquad\qquad- \int_{0}^{t}{\int_{\Om}{ ( {\rm div}(\sigma_1 (s) - \sigma _2(s))) \cdot (\dot{u}_1(s) - \dot{u}_2(s)) \, dx }\, ds}\nonumber\\
& = & - \mathcal{Q}(e_1 (t) - e_2 (t) )- \int_{0}^{t}{\int_{\Om}{ ( {\rm div}(\sigma_1 (s) - \sigma _2(s))) \cdot (\dot{u}_1(s) - \dot{u}_2(s)) \, dx }\, ds} , \label{eq:uniqueness2}
\end{eqnarray}
where we have used the fact that $e_1 (0) = e_2(0) = e_0$. By \eqref{eq:uniqueness1}, \eqref{eq:1406} and \eqref{eq:uniqueness2}, we infer that
$$ \frac{\norm{\dot{u}_1 (t) - \dot{u}_2 (t) }^2_{L^2(\Omega; \mathbb{R}^n)} }{2} + \mathcal{Q}(e_1 (t) - e_2 (t) ) \le 0. $$
From the expression above, we infer that $e_1 = e_2$ and $\dot u_1=\dot u_2$. Since $u_1 (0) = u_2 (0) = u_0$, we conclude that $u_1 = u_2$, and by the kinematic compatibility $p_1 = p_2$. This concludes the proof of the uniqueness. In particular, by uniqueness of the limit, there is no need to extract subsequences when passing to the limit as $\lambda \to \infty$. The proof of Theorem \ref{thm:compactness} is now complete.
\subsection*{Acknowledgements} R. Llerena acknowledges support from the Austrian Science Fund (FWF) through projects P 29681 and TAI 293-N, and from BMBWF through the OeAD-WTZ project HR 08/2020. This work was supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH.
\nocite{*}
\end{document}
\begin{document}
\title[Non-trivial extensions in equivariant cohomology]{Non-trivial extensions in equivariant cohomology with constant coefficients}
\maketitle
\begin{abstract}
In this paper, we prove some computational results about equivariant cohomology over the cyclic group $C_{p^n}$ of prime power order. We show that there is an inductive formula when the dimension of the $C_p$-fixed points of the grading is large. Among other calculations, we also show the existence of non-trivial extensions when $n\geq 3$.
\end{abstract}
\section{Introduction}
The equivariant stable homotopy category has a rich structure coming from the desuspension along representation spheres. This equips equivariant cohomology groups with a grading over the virtual representations $RO(G)$. The resulting structure is usually difficult to compute even in the case of ordinary cohomology. Their computations give interesting results and also have surprising consequences.
The coefficients of ordinary cohomology are Mackey functors, the important ones being the Burnside ring Mackey functor $\underline{A}$ and the constant Mackey functor $\underline{\mathbb{Z}}$. For the group $C_p$, the $RO(C_p)$-graded commutative ring $\underline{\pi}_{-\bigstar}^{C_p} H\underline{A} \cong \underline{H}^\bigstar_{C_p}(S^0;\underline{A})$ was computed by Lewis \cite{Lew88}, and analogous results for $\underline{\mathbb{Z}}$ and $\underline{\mathbb{Z}/p}$ were obtained by Stong and Lewis. There are computations for a few other groups (see \cite{BG19}, \cite{BG20}, \cite{Zen17}, \cite{KL20}); however, for most groups, even among Abelian ones, very little is known.
In this paper, we study the Mackey functors $\underline{\pi}_{-\alpha}^{C_{p^n}} H\underline{\mathbb{Z}} \cong \underline{H}^\alpha_{C_{p^n}}(S^0;\underline{\mathbb{Z}})$ for $\alpha \in RO(C_{p^n})$, which form the additive structure for equivariant cohomology over the group $C_{p^n}$. We prove the following result.
\begin{thma}
If $|\alpha^{C_p}|\leq -2n+2$ or $|\alpha^{C_p}|\geq 2n$, the Mackey functor $\underline{H}^\alpha_{C_{p^n}}(S^0;\underline{\mathbb{Z}})$ can be computed directly from the Mackey functor $\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0;\underline{\mathbb{Z}})$. (see Table \ref{comp-highfix})
\end{thma}
We also point out various computations of these Mackey functors not covered by the above theorem. The formulas that appear here are mostly written as a direct sum of Mackey functors of the form $\underline{\mathbb{Z}}_T$ and $\underline{B}_{T,S}$, a notation inspired by \cite{HHR17}. A consequence of these results is the complete calculation of the additive structure for the group $C_{p^2}$ (see Table \ref{comp-tab}). A new feature of these groups starting from $n\geq 2$ is the existence of $a_\lambda$-periodic classes. For the group $C_p$, the classes were either part of a polynomial algebra or were $a_\lambda$-torsion.
In these computations, we also point out a non-trivial extension of Mackey functors. These extensions first occur in the case $n=3$, and hence also for higher $n$. In cohomology over the Burnside ring $\underline{A}$, one has extensions $\underline{A}[d]$ for the group $C_p$ of the form $0\to \langle \mathbb{Z} \rangle \to \underline{A}[d] \to \underline{\mathbb{Z}}\to 0$, that occur in the additive structure. On the other hand, over $\underline{\mathbb{Z}}$ coefficients, the Mackey functors occurring in the additive structure over $C_p$ are a direct sum of those of the form $\underline{\mathbb{Z}}_T$ and $\underline{B}_{T,S}$. For the group $C_{p^n}$, many special cases have been shown to be of this type (see for example \cite[Theorem 5.7]{HHR17}). However, we point out through examples that this does not happen in general, and in these cases, the $a_{\lambda_i}$-multiplication gives non-trivial extensions.
\begin{mysubsection}{Organization}
In Section \ref{eqcoh}, we recall some preliminaries on equivariant cohomology, and their computational methods. In Section \ref{Z-mod}, we discuss the category of $\underline{\mathbb{Z}}$-modules, and their extensions, constructing important examples used in later sections. In Section \ref{largedimcomp}, we compute the equivariant cohomology at large $C_p$-fixed points. In Section \ref{nontriv}, we compute the equivariant cohomology over $C_{p^2}$ and use them to point out the non-trivial extensions.
\end{mysubsection}
\begin{notation}
Throughout this paper, $G$ denotes the cyclic group $C_{p^n}$ of order $p^n$, where $p$ is an odd prime, and $g$ denotes a fixed generator of $G$. For an orthogonal $G$-representation $V$, $S(V)$ denotes the unit sphere, $D(V)$ the unit disk, and $S^V$ the one-point compactification $\cong D(V)/S(V)$.
\end{notation}
\section{Equivariant cohomology}\label{eqcoh}
Ordinary cohomology theories are defined for Abelian groups, and these are represented by spectra with homotopy concentrated in degree $0$. In the equivariant world, the analogous role is played by \emph{Mackey functors}. In this section we briefly recall their definition, and relate them to equivariant cohomology (see \cite{May96} for details).
The \emph{Burnside category} $\sf Burn_G$ is the category whose objects are finite $G$-sets
and whose morphism set $\operatorname{Mor}_{\sf Burn_G}(S,T)$ is the group completion of the monoid of isomorphism classes of spans between $S$ and $T$ in the category of finite $G$-sets. It is a fact that $\sf Burn_G$ is self-dual, that is, the duality map $D\colon \sf Burn_G \to \sf Burn_G^{\operatorname{op}}$ that is the identity on objects and switches the legs of the spans is an isomorphism of categories.
\begin{defn}
A functor $\underline{M}: \sf Burn_G^{op} \to \sf Ab$ from the Burnside category to Abelian groups is called a \emph{Mackey functor}.
\end{defn}
In this paper we restrict our attention to $G=C_{p^n}$. For the remainder of the paper $G$ will always refer to this group.
Explicitly, a $G$-Mackey functor\footnote{This is a simplification in the case $G$ is Abelian. Otherwise the double coset formula (4) has a slightly more complicated expression.} $\underline{M}$ is a collection of Abelian groups $\underline{M}(G/H)$ with an action of $W_G(H)$, one for each subgroup $H \le G$, each accompanied by \emph{transfer} $\operatorname{tr}^H_K\colon \underline{M}(G/K) \to \underline{M}(G/H)$ and \emph{restriction} $\operatorname{res}^H_K \colon\underline{M}(G/H) \to \underline{M}(G/K)$ for $K\le H\le G$ such that
\begin{enumerate}
\item $\operatorname{tr}^H_J = \operatorname{tr}^H_K\operatorname{tr}^K_J$ and $\operatorname{res}^H_J = \operatorname{res}^K_J \operatorname{res}^H_K$ for all $J \le K \le H.$
\item $\operatorname{tr}^H_K(\gamma.x)= \operatorname{tr}^H_K(x)$ for all $x \in \underline{M}(G/K)$ and $\gamma \in W_H(K).$
\item $\gamma. \operatorname{res}^H_K(x) =\operatorname{res}^H_K(x)$ for all $x \in \underline{M}(G/H)$ and $\gamma \in W_H(K).$
\item $\operatorname{res}^H_K\operatorname{tr}^H_J(x)= \sum_{\gamma \in H/JK} \gamma.\operatorname{tr}^{K}_{J\cap K}\operatorname{res}^{J}_{J\cap K}(x)$
for all subgroups $J,K \leq H.$
\end{enumerate}
We will often write $\underline{M}(H)$ for $\underline{M}(G/H)$.
A morphism between two Mackey functors is a natural transformation. We denote the category of Mackey functors of the group $G$ by $\sf Mack_G.$ It is a fact that $\sf Mack_G$ is an Abelian category. The Burnside ring Mackey functor is the representable functor $\sf Burn_G(-, G/G)$. This is denoted by $\underline{A}$. For an Abelian group $C$, the constant Mackey functor $\underline{C}$ is described as $\underline{C}(G/H)=C$ with $\operatorname{res}^H_K=\mathrm{Id}$, and $\operatorname{tr}^H_K=$ multiplication by $[H:K]$. Following Lewis \cite{Lew88}, the data of a Mackey functor for the group $C_p$ may be organized in a diagram as demonstrated below.
$$\xymatrix@R=0.1cm{ & \mathbb{Z} \oplus \mathbb{Z} \ar@/_.5pc/[dd]_{[\begin{smallmatrix} 1 & p \end{smallmatrix}]} && & & \mathbb{Z} \ar@/_.5pc/[dd]_{\mathrm{Id}} \\
\underline{A} : & && &\underline{\mathbb{Z}} : \\
& \mathbb{Z} \ar@/_.5pc/[uu]_{\left[ \substack{0 \\ 1}\right]} &&& & \mathbb{Z} \ar@/_.5pc/[uu]_{p}}$$
The top row gives the value of the Mackey functor at $C_p/C_p$, and the bottom row gives the Mackey functor at $C_p/e$. For the groups $C_{p^n}$, there are analogous diagrams arranged vertically with $n+1$ different levels.
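As a quick sanity check of the axioms on these diagrams (this observation is not used later): for $\underline{A}$ over $C_p$, axiom (4) with $H=C_p$ and $J=K=e$ predicts $\operatorname{res}^{C_p}_e\operatorname{tr}^{C_p}_e(x)=\sum_{\gamma\in C_p}\gamma\cdot x=p\,x$, the Weyl action on $\underline{A}(C_p/e)\cong\mathbb{Z}$ being trivial; this matches the composite $[\begin{smallmatrix} 1 & p \end{smallmatrix}]\circ\left[ \substack{0 \\ 1}\right]=p$ in the diagram above. Similarly, for $\underline{\mathbb{Z}}$ the composite $\operatorname{res}^{C_p}_e\operatorname{tr}^{C_p}_e$ is multiplication by $p$.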
\begin{exam}
For a $G$-spectrum $X$, the equivariant homotopy groups forms a Mackey functor $\underline{\pi}_n(X)$, defined by the formula
\[ \underline{\pi}_n(X)(G/H):= \pi_n(X^H).
\]
\end{exam}
For a Mackey functor $\underline{M}$, one may define an Eilenberg-MacLane spectrum \cite{GM95} $H\underline{M}$ such that $\underline{\pi}_n(H\underline{M})$ is concentrated in degree $0$, where it is the Mackey functor $\underline{M}$. This constructs an $RO(G)$-graded cohomology theory by the formula
\[
H^\alpha_G(X;\underline{M}) \cong \mbox{ Ho-G-spectra } (X, \Sigma^\alpha H\underline{M}).
\]
Recall that $RO(G)$ denotes the group completion of the monoid of isomorphism classes of orthogonal $G$-representations under direct sum. A general element $\alpha \in RO(G)$ can be represented as a formal difference $\alpha = V - W$ for $G$-representations $V, W.$ For a representation $V$ of $G$, we denote by $S^V$ the one-point compactification of $V$. Analogously, for a virtual representation $\alpha= V-W$, $S^\alpha$ denotes the $G$-spectrum $\Sigma^{-W} S^V$. Using the functor $G/H\mapsto X\wedge G/H_+$, the cohomology groups are part of a Mackey functor denoted by $\underline{H}_G^\alpha(S^0;\underline{M})$.
One may put a symmetric monoidal structure on the category $\sf Mack_G$ by using the \emph{box product}. For two Mackey functors $\underline{M}$ and $\underline{N} \in \sf Mack_G$, the box product $\underline{M} \square \underline{N}$ is the left Kan extension of tensor product of Abelian groups along the functor $\times : \sf Burn_G^{\operatorname{op}} \times \sf Burn_G^{\operatorname{op}}\to \sf Burn_G^{\operatorname{op}}$ given by $(S, T) \mapsto S\times T.$ The Burnside ring Mackey functor $\underline{A}$ plays the role of unit object in the symmetric monoidal structure of $\sf Mack_G.$
\begin{defn}
A ({\it commutative}) \emph{Green functor} for $G$, is a (commutative) monoid in the symmetric monoidal category $\sf Mack_G$ defined as above.
\end{defn}
Both $\underline{A}$ and $\underline{\mathbb{Z}}$ are examples of commutative Green functors. Given a commutative Green functor $\underline{R}$, an \emph{$\underline{R}$-module} is a Mackey functor $\underline{M}$ equipped with $\mu_{\underline{M}} \colon \underline{R}\square \underline{M}\to \underline{M}$ satisfying the usual relations.
The category of $\underline{R}$-modules will be denoted by $\underline{R}$-$\mathrm{Mod}_G$. This in turn has the structure of a symmetric monoidal category with the induced box product.
For a commutative Green functor $\underline{R}$, the corresponding equivariant cohomology has a graded commutative ring structure.
\begin{notation}
The representation ring $RO(C_{p^n})$ is generated by the trivial representation $1$ and the $2$-dimensional representations $\lambda (k)$ given by rotation by the angle $\frac{2 \pi k}{p^n}$ for $k=1, \cdots, \frac{p^n-1}{2}$. Denote the representation $\lambda (p^m)$ by $\lambda_m.$ Write $RO_0(C_{p^n}) \subset RO(C_{p^n})$ for the subgroup of those $\alpha$ whose virtual dimension is zero.
\end{notation}
We now describe some equivariant cohomology classes in $\underline{H}^\bigstar_G(S^0;\underline{\mathbb{Z}})(G/G) \cong \pi_{-\bigstar}^G(H\underline{\mathbb{Z}})$. The generators used are defined in \cite [Section 3]{HHR16} which we now recall.
\begin{defn}\label{uagen}
Let $V$ be a $G$-representation. We have the $G$-map $S^0 \to S^V$\footnote{This is given by the inclusion of $\{0,\infty\}\subseteq S^V$.} which induces
$$S^0 \to S^V\wedge S^0 \to S^V \wedge H\underline{\mathbb{Z}}$$
which we call $a_V\in \underline{H}_G^V(S^0;\underline{\mathbb{Z}})$. If $V$ is an oriented $G$-representation, a choice of orientation gives a class
$$u_V\in H_G^{V-\dim(V)}(S^0;\underline{\mathbb{Z}}) \cong \mathbb{Z}.$$
\end{defn}
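We note in passing the standard observation that if $V^G\neq 0$, then the inclusion $S^0\to S^V$ is equivariantly null-homotopic (one may slide the non-basepoint to $\infty$ along a fixed line), so that $a_V=0$ in that case. The interesting classes $a_V$ are therefore those with $V^G=0$, such as the representations $\lambda(k)$ above.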
We also have relations among these generators, namely $u_V u_W = u_{V\oplus W}$ and $a_V a_W = a_{V\oplus W}$. It follows that these classes are products of $u_{\lambda(m)}$ and $a_{\lambda(m)}$ for $0\leq m <n$. These classes satisfy the relations
\begin{myeq}\label{a-reln}
p^{n-i} a_{\lambda_i}=0,
\end{myeq}
and
\begin{myeq}\label{au-reln}
u_{\lambda_i} a_{\lambda_j} = p^{j-i} u_{\lambda_j} a_{\lambda_i} \mbox{ if } i < j.
\end{myeq}
\begin{defn}\label{ratio-classes}
Observe that the map $a:S^{\lambda_i} \to S^{\lambda_j}$ for $i<j<n$, described as the map on one-point compactifications induced by $z\mapsto z^{p^{j-i}}$, satisfies $a\circ a_{\lambda_i} = a_{\lambda_j}$. This class is denoted by $a_{\lambda_j/\lambda_i}$. In the case $j=n$, this construction also makes sense and gives a class which after multiplication with $u_{\lambda_i}$ gives $p^{n-i}$, so it is denoted $[p^{n-i}u_{\lambda_i}^{-1}]$.
\end{defn}
In this notation, for $G=C_p$, the equivariant cohomology of $S^0$ in gradings $n+m\lambda_0$ is given by (see \cite{Lew88}, \cite{Zen17})
\[
\underline{H}^{\cdot + \cdot \lambda_0}_{C_p}(S^0;\underline{\mathbb{Z}})(C_p/C_p) \cong \mathbb{Z}[u_{\lambda_0},a_{\lambda_0}] \oplus_{j\geq 1} \mathbb{Z}\{[pu_{\lambda_0}^{-j}]\} \oplus_{j,k\geq 1} \Sigma^{-1} \mathbb{Z}/p\{u_{\lambda_0}^{-j} a_{\lambda_0}^{-k}\}.
\]
For $G=C_{p^n}$, the ring $\underline{H}^{\bigstar}_{C_{p^n}}(S^0;\underline{\mathbb{Z}})(C_{p^n}/C_{p^n})$ in gradings which are combinations of integers and positive multiples of $\lambda_i$ is described as \cite[Remark 4.6]{HHR17}
\[ \mathbb{Z}[a_{\lambda_0},u_{\lambda_0}, \cdots, a_{\lambda_{n-1}}, u_{\lambda_{n-1}}]/(p^{n-i}a_{\lambda_i}, u_{\lambda_i}a_{\lambda_{i+k}}-p^k u_{\lambda_{i+k}}a_{\lambda_i}, i,k\geq 0).
\]
\begin{notation} Recall \cite{Web00} for $H\le G$, there is the \emph{restriction} functor
\[\downarrow^G_H : \sf Mack_G \to \sf Mack_H\] given by $\downarrow^G_H(\underline{N})(H/L): = \underline{N}(G\times_H H/L)$ where $\underline{N} \in \sf Mack_G$ and $L \le H.$ Given a Mackey functor $\underline{M}$, one defines $\operatorname{Hom}_L(\underline{M}, \mathbb{Z})$ as the composition of the functors
\[
\xymatrix{\sf Burn_G^{\operatorname{op}} \ar[r]^D & \sf Burn_G^{\operatorname{op}} \ar[r]^{\underline{M}} & \sf Ab \ar[rr]^{\operatorname{Hom}_{\mathbb{Z}}(-, \mathbb{Z})} && \sf Ab}
\]
and similarly $\operatorname{Ext}_L(\underline{M}, \mathbb{Z})$ as the composition of the functors
\[
\xymatrix{\sf Burn_G^{\operatorname{op}} \ar[r]^D & \sf Burn_G^{\operatorname{op}} \ar[r]^{\underline{M}} & \sf Ab \ar[rr]^{\operatorname{Ext}^1_{\mathbb{Z}}(-, \mathbb{Z})} && \sf Ab}.
\]
We often denote $\operatorname{Ext}_L(\underline{M}, \mathbb{Z})$ by $\underline{M}^E.$ The Mackey functor $\operatorname{Hom}_L(\underline{\mathbb{Z}},\mathbb{Z})$ has the same groups as $\underline{\mathbb{Z}}$ with the restriction and transfer maps switched and is denoted by $\underline{\mathbb{Z}}^\ast$.
\end{notation}
These Mackey functors play a crucial role in the equivariant analog of the universal coefficient theorem discussed below along the lines of \cite{And69}.
\begin{mysubsection}{Anderson duality}
Let $\operatorname{I}_{\mathbb{Q}}$ and $\operatorname{I}_{\mathbb{Q}/\mathbb{Z}}$ be the spectra representing the cohomology theories given by $X \mapsto \operatorname{Hom}(\pi_{-\ast}^G(X), \mathbb{Q})$ and $X \mapsto \operatorname{Hom}(\pi_{-\ast}^G(X), \mathbb{Q}/\mathbb{Z})$ respectively. The natural map $\mathbb{Q} \to \mathbb{Q}/\mathbb{Z}$ induces the spectrum map $\operatorname{I}_{\mathbb{Q}} \to \operatorname{I}_{\mathbb{Q}/\mathbb{Z}}$, and the homotopy fibre is denoted by $\operatorname{I}_{\mathbb{Z}}$. For a $G$-spectrum $X$, the \emph{Anderson dual} $\operatorname{I}_{\mathbb{Z}}X$ of $X$ is the function spectrum $\operatorname{Fun}(X, \operatorname{I}_{\mathbb{Z}})$. For $X=H\underline{\mathbb{Z}}$, one easily computes $\operatorname{I}_{\mathbb{Z}}H\underline{\mathbb{Z}} \cong \Sigma^{2-\lambda_0}H\underline{\mathbb{Z}}.$
In general, for $G$-spectra $E$, $X$, and $\alpha \in RO(G),$ there is a short exact sequence
\begin{myeq}\label{end_dual}
0 \to \operatorname{Ext}_L(\underline{E}_{\alpha -1}(X), \mathbb{Z}) \to \operatorname{I}_{\mathbb{Z}}(E)^{\alpha}(X) \to \operatorname{Hom}_L(\underline{E}_{\alpha}(X), \mathbb{Z})\to 0.
\end{myeq}
In particular, for $E=H\underline{\mathbb{Z}}$ and $X=S^0,$ we have the equivalence $\underline{E}_\alpha(X) \cong \underline{H}_G^{-\alpha}(S^0; \underline{\mathbb{Z}})$. Therefore, one may rewrite \eqref{end_dual} as
\begin{myeq}\label{and_comp}
0 \to \operatorname{Ext}_L(\underline{H}^{3-\lambda_0 -\alpha}_{G}(S^0; \underline{\mathbb{Z}}), \mathbb{Z}) \to \underline{H}^{\alpha}_{G}(S^0; \underline{\mathbb{Z}}) \to \operatorname{Hom}_L(\underline{H}^{2-\lambda_0-\alpha}_{G}(S^0; \underline{\mathbb{Z}}), \mathbb{Z}) \to 0
\end{myeq}
for each $\alpha \in RO(G).$
\end{mysubsection}
Anderson duality provides a relation in the equivariant cohomology ring of $S^0$. A naive method to give another such relation is to build up $S^V$ using a filtration such that the filtration quotients are computable, and then use this to relate $\underline{H}^\alpha_G(S^0)$ to $\underline{H}^{\alpha - V}_G(S^0)$. This method is commonly used (see for example \cite{Lew88}, \cite{BG19}, \cite{BG20}). More explicitly, for each $m \le n$, we have homotopy cofibration sequences in $C_{p^n}$-spectra,
\begin{myeq}\label{rep_sphere}
{C_{p^n}/C_{p^m}}_+ \stackrel{1-g}{\to} {C_{p^n}/C_{p^m}}_+ \to S(\lambda_m)_+
\end{myeq}
and
\begin{myeq}\label{sphere}
S(\lambda_m)_+ \to S^0 \to S^{\lambda_m}.
\end{myeq}
Here $g$ is a chosen generator for the quotient group ${C_{p^n}/C_{p^m}}.$
For a non-negative integer $0 \le k \le n,$ and a Mackey functor $\underline{M} \in \sf Mack_{C_{p^k}}$ define
$
\Theta_k : \sf Mack_{C_{p^k}} \to \sf Mack_{C_{p^n}}
$
as
\[\Theta_k(\underline{M})(G/H)= \begin{cases} \underline{\mathbb{Z}}(G/H)\otimes_\mathbb{Z}\underline{M}(C_{p^k}/C_{p^k}) & \mbox{ if } C_{p^k} \subseteq H \\ \underline{M}(C_{p^k}/H) & \mbox{ otherwise.}\end{cases} \]
The restrictions and transfers are clear from this description. In a similar fashion, we define another functor $\Theta^\ast_k: \sf Mack_{C_{p^k}} \to \sf Mack_{C_{p^n}}$ as
\[\Theta^\ast_k(\underline{M})(G/H)= \begin{cases} \underline{\mathbb{Z}}^\ast(G/H)\otimes_\mathbb{Z}\underline{M}(C_{p^k}/C_{p^k}) & \mbox{ if } C_{p^k} \subseteq H \\ \underline{M}(C_{p^k}/H) & \mbox{ otherwise.}\end{cases} \]
\begin{prop}\label{sph_coh}
For each non-negative integer $m \le n$, $\underline{M} \in \sf Mack_{C_{p^n}},$ and $\alpha \in RO(C_{p^n})$, there is a short exact sequence
\[ \underline{0} \to \Theta^\ast_m(\underline{H}^{\alpha-1}_{C_{p^m}}(S^0; \downarrow^{p^n}_{{p^m}}\underline{M})) \to \underline{H}^\alpha_{C_{p^n}}(S(\lambda_m)_+; \underline{M}) \to \Theta_m(\underline{H}^\alpha_{C_{p^m}}(S^0; \downarrow^{p^n}_{{p^m}}\underline{M}))\to \underline{0}\]
in $\sf Mack_{C_{p^n}}.$
\end{prop}
\begin{proof}
The cofiber sequence \eqref{rep_sphere} yields the cohomology long exact sequence
\[
\xymatrix@C=0.5cm{\cdots \underline{H}^{\alpha-1}_{C_{p^n}}({C_{p^n}/C_{p^m}}_+) \ar[r] & \underline{H}^\alpha_{C_{p^n}}(S(\lambda_m)_+) \ar[r] & \underline{H}^{\alpha}_{C_{p^n}}({C_{p^n}/C_{p^m}}_+) \ar[r]^{(1-g)^\ast} & \underline{H}^{\alpha}_{C_{p^n}}({C_{p^n}/C_{p^m}}_+)\cdots}
\]
An immediate computation gives $\ker((1-g)^\ast) \cong \Theta_m(\underline{H}^\alpha_{C_{p^m}}(S^0; \downarrow^{p^n}_{{p^m}}\underline{M}))$ and the cokernel of $(1-g)^\ast$ is $\Theta^\ast_m(\underline{H}^{\alpha-1}_{C_{p^m}}(S^0; \downarrow^{p^n}_{{p^m}}\underline{M})).$ (See Proposition 4.3. and \S 4.5 in \cite{BG20} for an analogous computation.) Hence the result follows.
\end{proof}
\begin{exam} \label{lambda0}
Observe that $\Theta_0 (C)$ for an Abelian group $C$ is the constant Mackey functor $\underline{C}$, while $\Theta_0^\ast(C)$ is the dual $\underline{C}^\ast$. In Proposition \ref{sph_coh}, $\underline{H}^\alpha_{e}(S^0;\underline{\mathbb{Z}})$ is $0$ for $|\alpha|\neq 0$, and $\mathbb{Z}$ for $|\alpha|=0$. It follows that
\begin{myeq}\label{lambda0coh}
\underline{H}^{\alpha}_{C_{p^n}}(S(\lambda_0)_+;\underline{\mathbb{Z}}) \cong \begin{cases} \underline{\mathbb{Z}} & \mbox{ if } |\alpha|=0 \\
\underline{\mathbb{Z}}^\ast &\mbox{ if } |\alpha|=1 \\
0 &\mbox{ otherwise}. \end{cases}
\end{myeq}
For each $m< n$, consider the following long exact sequence in cohomology associated to \eqref{sphere}
\refstepcounter{theorem}\label{eq:base}
\begin{equation}
\xymatrix@C=0.6cm{\cdots \underline{H}^{\alpha+\lambda_m-1}_{C_{p^n}}(S({\lambda_m}_+)) \ar[r] & \underline{H}^{\alpha}_{C_{p^n}}(S^0) \ar[r]^{\cdot a_{\lambda_m}} & \underline{H}^{\alpha+\lambda_m}_{C_{p^n}}(S^0) \ar[r] & \underline{H}^{\alpha+\lambda_m}_{C_{p^n}}(S({\lambda_m}_+)) \cdots}\tag*{\myTagFormat{eq:base}{m}}\label{eq:base-m}
\end{equation}
At $m=0$, \eqref{lambda0coh} implies that multiplication by $a_{\lambda_0}$ is an isomorphism unless $|\alpha|=-2,-1,0$. At $|\alpha|=-2$, multiplication by $a_{\lambda_0}$ is injective while at $|\alpha|=-1,0$, multiplication by $a_{\lambda_0}$ is surjective. The injectivity at $|\alpha|=-1$ requires some additional argument which is essentially the same as \cite[Proposition 4.5]{BG20}.
\end{exam}
\section{$\underline{\mathbb{Z}}$-modules \& extensions involving them}\label{Z-mod}
In this section, we discuss the category of $\underline{\mathbb{Z}}$-modules and a few particular $\underline{\mathbb{Z}}$-modules that are important in the rest of the paper. Along the way we study certain extensions in the additive category $\underline{\mathbb{Z}}$-$\mathrm{Mod}_{G}.$ We first note that $\underline{\mathbb{Z}}$-modules satisfy an additional condition on the restriction and transfer maps.
\begin{rmk}\label{cohomological}
For any $\underline{M}\in \underline{\mathbb{Z}}$-$\mathrm{Mod}_G$, $\operatorname{tr}^H_K \operatorname{res}^H_K$ equals the multiplication by index $[H: K]$ for $K \le H \le G$ \cite[Theorem 4.3]{Yos83}.
\end{rmk}
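For example, for the constant Mackey functor $\underline{\mathbb{Z}}$ itself this is immediate from the description above: $\operatorname{res}^H_K=\mathrm{Id}$ and $\operatorname{tr}^H_K$ is multiplication by $[H:K]$, so $\operatorname{tr}^H_K \operatorname{res}^H_K=[H:K]\cdot \mathrm{Id}$; the same holds for $\underline{\mathbb{Z}}^\ast$, where the roles of restriction and transfer are switched.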
This condition puts certain restrictions on which Mackey functors may actually be $\underline{\mathbb{Z}}$-modules. One such example is the lemma below.
\begin{lemma}\label{torsion}
If $\underline{M} \in \underline{\mathbb{Z}}$-$\mathrm{Mod}_G$ satisfies $\underline{M}(G/e)=0$, then $\underline{M}(G/H)\otimes_{\mathbb{Z}} \mathbb{Q}=0$ for all $H \le G.$
\end{lemma}
\begin{proof}
Applying Remark \ref{cohomological} to each $x \in \underline{M}(G/H)$, $|H|x= \operatorname{tr}^H_e \operatorname{res}^H_e(x)= \operatorname{tr}^H_e(0)=0.$ This implies each element in $\underline{M}(G/H)$ is torsion.
\end{proof}
We now interpret Lemma \ref{torsion} for equivariant cohomology with $\underline{\mathbb{Z}}$-coefficients.
\begin{cor}\label{cortors}
Let $\alpha \in RO(G)\setminus RO_0(G)$. Then the Mackey functor $\underline{H}^\alpha_G(S^0; \underline{\mathbb{Z}})$ is torsion.
\end{cor}
\begin{proof}
Note that the Mackey functors $\underline{H}^\alpha_G(S^0;\underline{\mathbb{Z}})$ are all $\underline{\mathbb{Z}}$-modules. For $\alpha$ with $|\alpha|\neq 0,$ one sees that $\underline{H}^{\alpha}_G(S^0; \underline{\mathbb{Z}})(G/e)\cong \tilde{H}^{|\alpha|}(S^0; \mathbb{Z})$, hence it is zero. By Lemma \ref{torsion}, the result follows.
\end{proof}
We now observe that if $\underline{M} \in \underline{\mathbb{Z}}$-$\mathrm{Mod}_G$ satisfies $\underline{M}(G/e)=0$, then $\operatorname{Hom}_L(\underline{M}, \mathbb{Z})=0$. Applying this fact to Corollary \ref{cortors} using \eqref{and_comp}, we note
\begin{myeq}\label{anders-comp}
|\alpha|\neq 0 \implies \underline{H}^\alpha_{C_{p^n}}(S^0;\underline{\mathbb{Z}}) \cong \operatorname{Ext}_L(\underline{H}^{3-\lambda_0 -\alpha}_{C_{p^n}}(S^0; \underline{\mathbb{Z}}), \mathbb{Z}).
\end{myeq}
The following proposition allows us to reduce the grading from $RO(C_{p^n})$ to the linear combinations of $\lambda_k$.
\begin{prop}[\cite{Zen17}, Proposition 4.25]
There is an equivalence $H\underline{\mathbb{Z}}\wedge S^{\lambda(p^k)}\simeq H\underline{\mathbb{Z}} \wedge S^{\lambda(rp^k)}$ whenever $p \nmid r$.
\end{prop}
The above implies the existence of invertible classes in $H\underline{\mathbb{Z}}$-cohomology in grading $\lambda(p^k)-\lambda(rp^k)$ for $p\nmid r$. One may make a coherent choice of these units, so that the $H\underline{\mathbb{Z}}$-cohomology is, up to some units and their inverses, the part which lies in the graded pieces given by linear combinations of $1,\lambda_0, \cdots, \lambda_{n-1}$. From now onwards, we assume that $\alpha\in RO(G)$ means that $\alpha = c + \sum_{k\geq 0} a_k \lambda_k$.
\begin{defn}
Let $S\subseteq \overline{n}:=\{1,\cdots, n\}.$
Denote by $\underline{\mathbb{Z}}_S$ the Mackey functor
\[ \underline{\mathbb{Z}}_S(C_{p^n}/H)= \mathbb{Z} \mbox{ for } H\le C_{p^n},\]
\[ \operatorname{res}^{C_{p^i}}_{C_{p^{i-1}}}= p^{\chi_S(i)}, \qquad \operatorname{tr}^{C_{p^i}}_{C_{p^{i-1}}}= p^{1-\chi_S(i)} \quad \text{ for } 1\le i \le n.\]
Here $\chi_S$ is the characteristic function of $S$; the transfers are chosen so that $\operatorname{tr}^{C_{p^i}}_{C_{p^{i-1}}}\operatorname{res}^{C_{p^i}}_{C_{p^{i-1}}}=p$, in accordance with Remark \ref{cohomological}.
\end{defn}
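For instance, for $G=C_{p^2}$ and $S=\{1\}$, the Mackey functor $\underline{\mathbb{Z}}_{\{1\}}$ has value $\mathbb{Z}$ at every level, with
$$\operatorname{res}^{C_{p^2}}_{C_p}=1,\quad \operatorname{res}^{C_p}_{e}=p,\quad \operatorname{tr}^{C_{p^2}}_{C_p}=p,\quad \operatorname{tr}^{C_p}_{e}=1,$$
cf.\ the extension diagram later in this section.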
We note that $\underline{\mathbb{Z}}_{\emptyset}=\underline{\mathbb{Z}}$, and $\underline{\mathbb{Z}}_{\overline{n}}=\underline{\mathbb{Z}}^\ast$. For subsets $S \subseteq T \subseteq \overline{n}$, there is a unique map $f_{T, S}: \underline{\mathbb{Z}}_T \to \underline{\mathbb{Z}}_S$ which is the identity at $C_{p^n}/e$\footnote{If $S \nsubseteq T$, there is no Mackey functor morphism from $\underline{\mathbb{Z}}_T \to \underline{\mathbb{Z}}_S$ which induces the identity at $C_{p^n}/e.$}. The Mackey functor structure then forces $f_{T,S}(C_{p^n}/{C_{p^k}})$ to be multiplication by $p^{\alpha_{T\setminus S,k}}$, where $\alpha_{T \setminus S,k}:=\#((T\setminus S)\cap \overline{k}).$ Denoting the cokernel of $f_{T,S}$ by $\underline{B}_{T,S}$, we get a short exact sequence
\begin{myeq}\label{nz_ext1}
\underline{0}\to \underline{\mathbb{Z}}_T \to \underline{\mathbb{Z}}_S\to \underline{B}_{T, S}\to \underline{0}.
\end{myeq}
where the Mackey functor $\underline{B}_{T,S}$ is given by
\[
\underline{B}_{T, S}(C_{p^n}/C_{p^k})=
\frac{\mathbb{Z}}{{p^{\alpha_{{T \setminus S} ,k}}\mathbb{Z}}}, ~~
\operatorname{res}^{C_{p^k}}_{C_{p^{k-1}}}=
\begin{cases}
p & \text{ if } k \in S \\
1 & \text{ if } k \in S^c,~~
\end{cases}
\operatorname{tr}^{C_{p^k}}_{C_{p^{k-1}}}=
\begin{cases}
1 & \text{ if } k \in S \\
p & \text{ if } k \in S^c.
\end{cases}
\]
For $k\leq \min(S \cup T)$, we readily observe that $\underline{B}_{T,S} = \underline{B}_{T\cup \ov{k}, S\cup \ov{k}}$. Applying the functor $\operatorname{Hom}_L(-, \mathbb{Z})$ to the short exact sequence \eqref{nz_ext1} yields the long exact sequence
\[ \operatorname{Hom}_L(\underline{B}_{T, S}, \mathbb{Z}) \to \operatorname{Hom}_L(\underline{\mathbb{Z}}_S, \mathbb{Z}) \to \operatorname{Hom}_L(\underline{\mathbb{Z}}_T, \mathbb{Z})\to \operatorname{Ext}_L(\underline{B}_{T, S}, \mathbb{Z})\to \operatorname{Ext}_L(\underline{\mathbb{Z}}_S, \mathbb{Z})\to\cdots\]
One readily observes that the first and last terms of the long exact sequence above are zero, which simplifies the expression to the short exact sequence
\[\underline{0} \to \underline{\mathbb{Z}}_{S^c} \to \underline{\mathbb{Z}}_{T^c} \to \operatorname{Ext}_L(\underline{B}_{T, S}, \mathbb{Z})\to \underline{0}.
\]
Comparing the short exact sequence above with \eqref{nz_ext1}, we deduce $\operatorname{Ext}_L(\underline{B}_{T, S}, \mathbb{Z}) = \underline{B}_{S^c, T^c}$.
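As a quick check of this identity in a small case, take $G=C_{p^2}$, $T=\ov{2}$ and $S=\{2\}$: both $\underline{B}_{\ov{2},\{2\}}$ and $\underline{B}_{S^c,T^c}=\underline{B}_{\{1\},\emptyset}$ have values $(\mathbb{Z}/p,\mathbb{Z}/p,0)$ at the levels $(C_{p^2}/C_{p^2},C_{p^2}/C_p,C_{p^2}/e)$, and they differ only in that the restriction $\operatorname{res}^{C_{p^2}}_{C_p}$ and the transfer $\operatorname{tr}^{C_{p^2}}_{C_p}$ (namely $0$ and $1$) are interchanged, as expected since $\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/p^a,\mathbb{Z})\cong \mathbb{Z}/p^a$ and the duality $D$ switches the two legs of a span.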
\begin{prop}\label{zb}
Let $\underline{M}$ be a torsion free $\underline{\mathbb{Z}}$-module for the group $C_{p^n}$ that fits into the short exact sequence
\[\underline{0} \to \underline{\mathbb{Z}}_S \to \underline{M} \to \underline{B}_{S,T} \to \underline{0}.
\]
Then, there is an isomorphism $\underline{M} \cong \underline{\mathbb{Z}}_T$ of $\underline{\mathbb{Z}}$-modules.
\end{prop}
\begin{proof}
Since $\underline{M}$ is torsion free, for each $0 \le k \le n$ the Abelian group $\underline{M}(C_{p^n}/C_{p^k})$ has no torsion, and hence $\underline{M}(C_{p^n}/C_{p^k})\cong \mathbb{Z}.$ Therefore, there is a subset $T' \subseteq \ov{n}$ such that $\underline{M} \cong \underline{\mathbb{Z}}_{T'},$ and then the cokernel of $\underline{\mathbb{Z}}_S \to \underline{M}$ is $\underline{B}_{S,T'}$. However, from the formula for $\underline{B}_{S,T}$ we easily observe that $\underline{B}_{S,T'} \cong \underline{B}_{S,T}$ implies $T=T'$.
\end{proof}
We demonstrate an exact sequence as in Proposition \ref{zb} using Mackey functor diagrams for the group $C_{p^3}$.
\[
\xymatrix{&\mathbb{Z} \ar@/_1pc/[d]_{p} \ar[rrrr]^-{p^3}& &&& \mathbb{Z} \ar@/_1pc/[d]_{1} \ar[rrrr] & & & & \mathbb{Z}/p^3 \ar@/_1pc/[d]_{1} \\
\underline{\mathbb{Z}}^\ast : &\mathbb{Z} \ar@/_1pc/[d]_{p} \ar@/_1pc/[u]_{1} \ar@/^1pc/[rrrr]^-{p^2}&&& \underline{\mathbb{Z}}: & \mathbb{Z} \ar@/_1pc/[d]_{1} \ar@/_1pc/[u]_{p} \ar@/^1pc/[rrrr] & & &\underline{B}_{\ov{3}, \emptyset}: & \mathbb{Z}/p^2 \ar@/_1pc/[d]_{1} \ar@/_1pc/[u]_{p} \\
&\mathbb{Z} \ar@/_1pc/[d]_{p} \ar@/_1pc/[u]_{1} \ar[rrrr]^-{p} &&& & \mathbb{Z} \ar@/_1pc/[d]_{1} \ar@/_1pc/[u]_{p} \ar[rrrr] & & & & \mathbb{Z}/p \ar@/_1pc/[d]_{1} \ar@/_1pc/[u]_{p} \\
&\mathbb{Z} \ar@/_1pc/[u]_{1} \ar[rrrr]^{1} & &&& \mathbb{Z} \ar@/_1pc/[u]_{p} \ar[rrrr] & && & 0 \ar@/_1pc/[u]_{p}}
\]
\begin{prop}\label{Bker} Let $k\le n$. \\
1) A map $f: \underline{B}_{\ov{n}, \ov{k}^c} \to \underline{B}_{\ov{1}, \emptyset}$ is uniquely determined by $f(C_{p^n}/C_p)$. \\
2) If $f(C_{p^n}/C_p)$ is an isomorphism, then $\mbox{Ker}(f) \cong \underline{B}_{\ov{1}^c, \ov{1+k}^c}$, and $\mbox{Coker}(f) \cong \underline{B}_{\{k+1\}, \emptyset}$.
\end{prop}
\begin{proof}
The proof relies on a careful examination of the restriction and transfer maps of $\underline{B}_{\ov{n},\ov{k}^c}$. These are given by
\[
\underline{B}_{\ov{n}, \ov{k}^c}(C_{p^n}/C_{p^r})=
\frac{\mathbb{Z}}{{p^{\min(r,k)}\mathbb{Z}}}, ~~
\operatorname{res}^{C_{p^r}}_{C_{p^{r-1}}}=
\begin{cases}
1 & \text{ if } r \leq k \\
p & \text{ if } r>k,~~
\end{cases}
\operatorname{tr}^{C_{p^r}}_{C_{p^{r-1}}}=
\begin{cases}
p & \text{ if } r\leq k \\
1 & \text{ if } r>k.
\end{cases}
\]
The unique extension of $f$ from level $C_{p^n}/C_p$ is guaranteed by the fact that the restriction maps in $\underline{B}_{\ov{1},\emptyset}$ are the identity above this level. If $f(C_{p^n}/C_p)$ is an isomorphism, we have
\[
f(C_{p^n}/C_{p^l}) \text{ is } \begin{cases}
\text{onto} & \text{ if } l\le k \\
0 & \text{ if } l \ge k+1.
\end{cases}
\]
This implies the required conclusion about the cokernel of $f$. Note that the part of $f$ between the levels $C_{p^n}/C_{p^k}$ and $C_{p^n}/C_{p^{k+1}}$ may be described as
\[
\xymatrix@C=0.7cm@R=0.3cm{\ker(f) && \underline{B}_{\ov{n}, \ov{k}^c} & & \underline{B}_{\ov{1}, \emptyset} \\
\mathbb{Z}/p^k \ar@/_/[dd]_1 \ar[rr]^{\cong} && \mathbb{Z}/p^k \ar@/_/[dd]_p \ar[rr]^{0} & &\mathbb{Z}/p \ar@/_/[dd]_1\\
& & & && \\
\mathbb{Z}/p^{k-1}\{ p\} \ar@/_/[uu]_p \ar[rr] && \mathbb{Z}/p^k \ar@/_/[uu]_1 \ar[rr]^{f(C_{p^n}/C_{p^{k}})} && \mathbb{Z}/p \ar@/_/[uu]_0}
\]
Therefore, $\downarrow^{p^n}_{p^{k+1}}\ker(f) \cong \underline{B}_{\ov{1}^c,\emptyset}.$ The part of this Mackey functor above the level $C_{p^n}/C_{p^{k+1}}$ is unchanged, that is, it is the same as that of $\underline{B}_{\ov{n}, \ov{k}^c}.$ Hence the result follows.
\end{proof}
\begin{mysubsection}{Pullback Mackey functors}
The $\underline{\mathbb{Z}}$-modules may also be defined as Abelian group-valued functors $\underline{M}: \mathcal{B}\underline{\mathbb{Z}}_G^{\operatorname{op}}\to \sf Ab$. The category $\mathcal{B}\underline{\mathbb{Z}}_G$ has finite $G$-sets as objects and $\operatorname{Mor}_{\mathcal{B}\underline{\mathbb{Z}}_G}(S,T):= \operatorname{Mor}_G(\mathbb{Z}[S], \mathbb{Z}[T])$ (see \cite[Proposition 2.15]{Zen17}). Suppose $N$ is a normal subgroup of $G$. The quotient map $ G \to G/N$ induces $\phi: \mathcal{B}\underline{\mathbb{Z}}_{G/N}^{\operatorname{op}} \to \mathcal{B}\underline{\mathbb{Z}}_{G}^{\operatorname{op}}$. Define $\Phi^\ast_{N}: \underline{\mathbb{Z}}\text{-}\mathrm{Mod}_{G/N}\to \underline{\mathbb{Z}}\text{-}\mathrm{Mod}_{G}$ as $\Phi^\ast_{N}(\underline{M}):= Lan_{\phi}(\underline{M})$, the left Kan extension of $\underline{M}$ along $\phi$. For $G=C_{p^n}$ we write, $\Phi^\ast_{p^m}$ for $\Phi^\ast_{C_{p^m}}$. The Mackey functor $\Phi^\ast_N\underline{M}$ is given by the formula
\begin{myeq}\label{Phi_Comp}\Phi^\ast_N\underline{M}(G/H): = \underset{(G/H \to G/L) \in \operatorname{Mor}_{\mathcal{B}_G}}\operatorname{colim} \underline{M}((G/N)/(L/N))=\begin{cases} \underline{M}((G/N)/(H/N)) & \text{ if } N \subseteq H \\ \underline{M}((G/N)/(N/N)) & \text{ if } H \subset N.\end{cases}
\end{myeq}
and the restriction $\operatorname{res}^N_e$ is $\mathrm{Id}$. From this formula, we note that $\Phi_p^\ast$ commutes with $\operatorname{Ext}_L$ on the $\underline{\mathbb{Z}}$-modules which are $0$ at $C_{p^n}/e$.
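As a small illustration of \eqref{Phi_Comp}, take $G=C_{p^2}$ and $N=C_p$, and identify $G/N$ with $C_p$. The formula then gives $\Phi^\ast_p\underline{M}(C_{p^2}/C_{p^2})=\underline{M}(C_p/C_p)$ and $\Phi^\ast_p\underline{M}(C_{p^2}/C_p)=\Phi^\ast_p\underline{M}(C_{p^2}/e)=\underline{M}(C_p/e)$, the two lower levels being identified via $\operatorname{res}^{C_p}_e=\mathrm{Id}$; in particular, if $\underline{M}$ vanishes at $C_p/e$, then $\Phi^\ast_p\underline{M}$ vanishes at both lower levels.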
\begin{exam}
The formula above implies $\Phi^\ast_N\underline{\mathbb{Z}}\cong \underline{\mathbb{Z}}$. Also, $\Phi_{p^m}^\ast (\underline{B}_{T,S})\cong \underline{B}_{T^{(m)}, S^{(m)}}$, where $T^{(m)}$ is the image of $T$ under the map $\ov{n-m} \to \ov{n}$ given by $r \mapsto r+m$, and similarly for $S^{(m)}$.
\end{exam}
\end{mysubsection}
We now point out a non-trivial extension of $\underline{\mathbb{Z}}$-modules that arises in equivariant cohomology computations over $C_{p^2}$.
\[
\xymatrix{&\mathbb{Z}/p \ar@/_1pc/[d]_{0} \ar[rrrr] & &&& \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{{\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}} \ar[rrrr] & & & & \mathbb{Z} \ar@/_1pc/[d]_{1} \\
\underline{B}_{\ov{2},\{2\}} : &\mathbb{Z}/p \ar@/_1pc/[d]_{0} \ar@/_1pc/[u]_{1} \ar@/^1pc/[rrrr] &&& \underline{T}(2): & \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{{\begin{bmatrix} p & 0 \end{bmatrix}}} \ar@/_1pc/[u]_{{\begin{bmatrix} p & 0 \\ -1 & 1 \end{bmatrix}}} \ar@/^1pc/[rrrr] & & &\underline{\mathbb{Z}}_{\{1\}}: & \mathbb{Z} \ar@/_1pc/[d]_{p} \ar@/_1pc/[u]_{p} \\
&0 \ar@/_1pc/[u] \ar[rrrr] &&& & \mathbb{Z} \ar@/_1pc/[u]_{{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}} \ar[rrrr] & & & & \mathbb{Z} \ar@/_1pc/[u]_{1} }
\]
A change of basis gives the following isomorphic presentation of the Mackey functor $\underline{T}(2)$ occurring in the diagram above.
\[
\xymatrix{ \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}} \\
\mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{{\begin{bmatrix} p & 0 \end{bmatrix}}} \ar@/_1pc/[u]_{{\begin{bmatrix} p & 0 \\ 0 & 1 \end{bmatrix}}} \\
\mathbb{Z} \ar@/_1pc/[u]_{{\begin{bmatrix} 1 \\ -1 \end{bmatrix}}} }
\]
This generalizes to $C_{p^n}$-Mackey functors $\underline{T}(n)$, obtained by repeating the top part of the diagram above. This gives a non-trivial extension of Mackey functors
\[
0 \to \underline{B}_{\ov{n}, \ov{1}^c} \to \underline{T}(n) \to \underline{\mathbb{Z}}_{\ov{1}} \to 0.
\]
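For $n=2$ this recovers the extension displayed above, since the complement $\ov{1}^c$ taken inside $\ov{2}$ is $\{2\}$, so that the kernel is the Mackey functor $\underline{B}_{\ov{2},\{2\}}$ appearing there.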
\section{Computations for large $C_p$-fixed point dimensions}\label{largedimcomp}
This section deals with computations of $\underline{H}^{\alpha}_{C_{p^n}}(S^0, \underline{\mathbb{Z}})$ when the dimension of the $C_p$-fixed points of $\alpha$ is large. More precisely, we prove that if $|\alpha^{C_p}| \in (-2n+2, 2n)^c$, then the Mackey functor $\underline{H}^{\alpha}_{C_{p^n}}(S^0;\underline{\mathbb{Z}})$ can be computed explicitly in terms of $|\alpha|$ and $\underline{H}_{C_{p^{n-1}}}^{\alpha^{C_p}}(S^0; \underline{\mathbb{Z}})$ (see Table \ref{comp-highfix}). We now drop $\underline{\mathbb{Z}}$ in the notation to write $\underline{H}^{\alpha}_{C_{p^n}}(S^0)$ for $\underline{H}^{\alpha}_{C_{p^n}}(S^0;\underline{\mathbb{Z}})$, and throughout assume $n\geq 2$.
\begin{lemma}\label{pullbackalpha}
Let $\alpha \in \operatorname{im}(RO(C_{p^n}/C_{p^m}) \to RO(C_{p^n}))$. Then there is an equivalence
\[\underline{H}^\alpha_{C_{p^n}}(S^0) \cong \Phi^\ast_{p^m}(\underline{H}^{\alpha^{C_{p^m}}}_{C_{p^n}/C_{p^m}}(S^0)).
\]
\end{lemma}
\begin{proof}
The hypothesis implies that $\alpha$, and hence also $S^{-\alpha}$, is $C_{p^m}$-fixed. Let $q:C_{p^n} \to C_{p^n}/C_{p^m}$ be the quotient map, which induces the adjunction \cite[Proposition V.3.10]{MM02}
\[
\{ q^\ast(S^{-\alpha^{C_{p^m}}}), H\underline{\mathbb{Z}}\}^{C_{p^n}} \cong \{S^{-\alpha^{C_{p^m}}}, H\underline{\mathbb{Z}}^{C_{p^m}} \}^{C_{p^{n}}/C_{p^m}}.
\]
The result now follows from $H\underline{\mathbb{Z}}^{C_{p^m}} \simeq H\underline{\mathbb{Z}}$, and a calculation of the restriction and transfers as in the proof of \cite[Proposition 6.12]{BG20}.
\end{proof}
Now apply Lemma \ref{pullbackalpha} together with the fact that there are invertible classes in degree $\lambda(rp^k)-\lambda(p^k)$ for $p\nmid r$. This implies that the conclusion remains valid whenever the $|\alpha^K|$ are all equal for $K\leq C_{p^m}$. Further applying the techniques of Example \ref{lambda0}, the hypothesis may be weakened to the condition that the $|\alpha^K|$ all have the same sign for $K\leq C_{p^m}$.
\begin{prop}\label{signs}
Suppose $\alpha \in RO(C_{p^n})$ is such that the numbers $|\alpha^{C_{p^k}}|$ for $k\leq m$ are either all $0$ or all of the same sign. Then $\underline{H}^\alpha_{C_{p^n}}(S^0) \cong \Phi^\ast_{p^m}(\underline{H}^{\alpha^{C_{p^m}}}_{C_{p^n}/C_{p^m}}(S^0))$.
\end{prop}
We now readily deduce that in a range of degrees, the cohomology is $0$, starting from the fact that for the trivial group, the cohomology is $0$ if the degree is non-zero.
\begin{prop}\label{cohzero}
Let $\alpha \in RO(C_{p^n})$.\\
(a) If $|\alpha^{C_{p^m}}|$ is positive for all $m$, or negative for all $m$, then
$
\underline{H}^{\alpha}_{C_{p^n}}(S^0) \cong \underline{0}.
$ \\
(b) If $|\alpha^{C_{p^m}}|\le 1$ for all $m$, then $\underline{H}^{\alpha}_{C_{p^n}}(S^0)\cong \underline{0}.$\\
(c) For $\alpha$ odd satisfying $|\alpha^{C_{p^m}}| < 1 \implies |\alpha^H| \le 1$ for all $H \supseteq C_{p^m}$, we have $\underline{H}^{\alpha}_{C_{p^n}}(S^0)\cong \underline{0}.$
\end{prop}
\begin{proof}
Proposition \ref{signs} directly implies (a). For (b), the result follows from \eqref{anders-comp} and (a), since $|\alpha|\le -1$ and $|\alpha^H|\leq 1$ imply $|(3-\lambda_0 - \alpha)^H| >0$ for all $H$. For $|\alpha|= 1$, we have $|(3-\lambda_0-\alpha)|=0$, and all the other fixed point dimensions are $>0$. Using ~\myTagFormat{eq:base}{0} along with (a), this implies $\underline{H}^{3-\lambda_0-\alpha}_{C_{p^n}}(S^0)\cong \underline{\mathbb{Z}}^\ast$, so that $\operatorname{Ext}_L$ is trivial. Hence (b) follows.
For (c), Example \ref{lambda0} implies that $a_{\lambda_0}$ multiplication is an isomorphism if $|\alpha|\neq -1$ and is surjective if $|\alpha|=-1$. We now assume that (c) is true for $G=C_{p^k}$, $k< n$. For $|\alpha|>0$ and $|\alpha^{C_p}|>0$, Proposition \ref{signs} applies to prove the result. For $|\alpha|>0$ and $|\alpha^{C_p}|<0$, using the $a_{\lambda_0}$-multiplication, we get a surjective map from $\underline{H}^\beta_{C_{p^n}}(S^0)$ with $|\beta|<0$ and $|\beta^{C_{p^l}}|=|\alpha^{C_{p^l}}|$ for $l>1$, so that Proposition \ref{signs} implies the result again. In the remaining cases, using the $a_{\lambda_0}$-multiplication, it suffices to prove for $|\alpha|=-1$, $|\alpha^{C_p}|=1$. The proof is now completed by repeated applications of Proposition \ref{signs} and Anderson duality\eqref{anders-comp} as
\[
\underline{H}^{\alpha}_{C_{p^n}}(S^0) \cong \operatorname{Ext}_L(\underline{H}^{3-\lambda_0 -\alpha}_{C_{p^n}}(S^0), \mathbb{Z}) \cong \operatorname{Ext}_L( \Phi_p^\ast \underline{H}^{3-\alpha^{C_p}}_{C_{p^{n-1}}}(S^0),\mathbb{Z}) \cong \Phi_p^\ast \underline{H}^{\alpha^{C_p} - \lambda_0}_{C_{p^{n-1}}}(S^0) \cong 0.
\]
\end{proof}
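For instance, if $\alpha=k$ is a positive integer (a trivial virtual representation), then every fixed point dimension equals $k>0$, and part (a) recovers the familiar fact that the integer-graded cohomology $\underline{H}^{k}_{C_{p^n}}(S^0)$ vanishes for $k>0$.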
Proposition \ref{cohzero} has many applications in showing that certain even equivariant cell complexes have free cohomology. Examples include complex projective spaces and Grassmannians \cite{Lew88}, \cite{BG19}, and linear actions on simply connected $4$-manifolds \cite{BDK21}.
\begin{prop}\label{comp_1}
Suppose $\alpha \in RO_0(C_{p^n})$ satisfying $|\alpha^{C_{p^k}}|>0$ for $k>1$. Then,
\[\underline{H}^\alpha_{C_{p^n}}(S^0)\cong \begin{cases} \underline{\mathbb{Z}}_{\ov{1-\frac{|\alpha^{C_p}|}{2}}^c} & \text{ if } 4-2n \le |\alpha^{C_p}| \le 0\\
\underline{\mathbb{Z}} & \text{ if } |\alpha^{C_p}| \le 2-2n \\
\underline{\mathbb{Z}}^\ast & \text{ if }|\alpha^{C_p}| \ge 2. \end{cases}
\]
\end{prop}
\begin{proof}
From the proof of Proposition \ref{cohzero} (b), observe that $\underline{H}^\alpha_{C_{p^{n}}}(S^0) \cong \underline{\mathbb{Z}}^\ast$ for $|\alpha^{C_p}|>0$. We use induction on $F(\alpha)= 1-\frac{|\alpha^{C_p}|}{2}$.
We now assume the result for $F(\alpha)<k$, and prove it first for those $\alpha$ with $F(\alpha)=k\leq n$.
Let $\beta\in RO_0(C_{p^n})$ with $|\beta^{C_{p^k}}|>0$ for $k>1$. Note that $F(\beta+\lambda_1 -\lambda_0) =F(\beta)-1$ and $\beta+\lambda_1-\lambda_0 \in RO_0(C_{p^n})$, so that it satisfies the induction hypothesis. Hence, using ~\myTagFormat{eq:base}{0} for index $\beta+\lambda_1-\lambda_0$, we obtain the long exact sequence
\[
\cdots \underline{H}^{\beta+\lambda_1-1}_{C_{p^n}}(S(\lambda_0)_+)\to \underline{H}^{\beta+\lambda_1-\lambda_0}_{C_{p^n}}(S^0) \to \underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S^0) \to \underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S(\lambda_0)_+) \cdots
\]
In the above, by \eqref{lambda0coh} we have $\underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S(\lambda_0)_+)=0$ and $\underline{H}^{\beta+\lambda_1-1}_{C_{p^n}}(S(\lambda_0)_+) =\underline{\mathbb{Z}}^\ast.$ The term $\underline{H}^{\beta+\lambda_1-\lambda_0}_{C_{p^n}}(S^0) \cong \underline{\mathbb{Z}}_{\ov{F(\beta)-1}^c}$ by the induction hypothesis. Hence, by \eqref{nz_ext1}, $\underline{H}^{\beta + \lambda_1}_{C_{p^n}}(S^0) \cong \underline{B}_{\ov{n}, \ov{F(\beta)-1}^c}$.
Making use of ~\myTagFormat{eq:base}{1}, we obtain
\[
\xymatrix@R=0.3cm{0 \ar[r] & \underline{H}^{\beta+\lambda_1-1}_{C_{p^n}}(S(\lambda_1)_+)\ar[r]
\ar@{=}[d]^{(\text{by } \ref{sph_coh})} & \underline{H}^{\beta}_{C_{p^n}}(S^0) \ar[r] &\underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S^0)\ar@{=}[d] \ar[r] & \underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S(\lambda_1)_+)\ar@{=}^{(\text{by } \ref{sph_coh})}[d]\cdots \\ & \underline{\mathbb{Z}}_{\ov{1}^c} & & \underline{B}_{\ov{n},\ov{F(\beta)-1}^c}& \underline{B}_{\ov{1}, \emptyset}}
\]
The identifications at the two ends of the sequence are $\underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S(\lambda_1)_+)\cong \Theta_1(\underline{B}_{\ov{1}, \emptyset})\cong \underline{B}_{\ov{1}, \emptyset}$, and $\underline{H}^{\beta+\lambda_1-1}_{C_{p^n}}(S(\lambda_1)_+) \cong \Theta^\ast_1(\underline{\mathbb{Z}}) \cong \underline{\mathbb{Z}}_{\ov{1}^c}.$ Proposition \ref{Bker} computes the kernel of $\underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S^0) \to \underline{H}^{\beta+\lambda_1}_{C_{p^n}}(S(\lambda_1)_+)$ as $\underline{B}_{\ov{1}^c,\ov{F(\beta)}^c}.$ Hence we deduce the short exact sequence
\begin{myeq}\label{ind_step}
\underline{0} \to \underline{\mathbb{Z}}_{\ov{1}^c}\to \underline{H}^{\beta}_{C_{p^n}}(S^0) \to \underline{B}_{\ov{1}^c,\ov{F(\beta)}^c} \to \underline{0}.
\end{myeq}
Using ~\myTagFormat{eq:base}{0} and Proposition \ref{signs}, it follows that $\underline{H}^{\beta}_{C_{p^n}}(S^0) \to \underline{H}^{\beta}_{C_{p^n}}(S(\lambda_0)_+)\cong \underline{\mathbb{Z}}$ is injective, hence $\underline{H}^{\beta}_{C_{p^n}}(S^0)$ is torsion free. Thus \eqref{ind_step} along with Proposition \ref{zb} implies that the middle term $\underline{H}^{\beta}_{C_{p^n}}(S^0) \cong \underline{\mathbb{Z}}_{\ov{F(\beta)}^c}$.
If $F(\alpha)=n$, then by the above computation $\underline{H}^\alpha_{C_{p^n}}(S^0; \underline{\mathbb{Z}})\cong \underline{\mathbb{Z}}.$ Working as above, we get $\underline{H}^{\alpha+\lambda_0}_{C_{p^n}}(S^0)\cong \underline{B}_{\ov{n}, \emptyset}$ (using ~\myTagFormat{eq:base}{0}), and that the kernel of the map $\underline{B}_{\ov{n},\emptyset} \to \underline{B}_{\ov{1},\emptyset}$ is $\underline{B}_{\ov{1}^c,\emptyset}$. Analogously we obtain $\underline{H}^{\alpha-\lambda_1 + \lambda_0}_{C_{p^n}}(S^0) \cong \underline{\mathbb{Z}}$.
\end{proof}
\begin{table}[ht]
\begin{tabular}{ |p{3.8cm}|p{3.7cm}||p{3.8cm}| p{3.7cm}| }
\hline
{ \small $|\alpha|$ \text{ even} } & { \small $\underline{H}_{C_{p^n}}^\alpha(S^0)$} & { \small $|\alpha|$ \text{odd} } & { \small $\underline{H}_{C_{p^n}}^\alpha(S^0)$} \\
\hline
{ \small $ |\alpha| >0$, $|\alpha^{C_p}|\leq -2n +2$} & {\small $\underline{B}_{\ov{n}, \emptyset} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ }
& { \small $|\alpha| <0 $, $|\alpha^{C_p}| \leq -2n+3$} & { $\Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ } \\
\hline
{ \small $ |\alpha| =0$, $|\alpha^{C_p}|\leq -2n +2$} & {\small $\underline{\mathbb{Z}} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ } & { \small $|\alpha| >0 $, $|\alpha^{C_p}| \leq -2n+3$} & {\small $\Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ } \\
\hline
{ \small $ |\alpha| <0$, $|\alpha^{C_p}|\leq -2n +2$} & {\small $\Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ }
& { \small $|\alpha| >0 $, $|\alpha^{C_p}| \geq 2n+1$} & {\small $\Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ } \\
\hline
{ \small $ |\alpha| \neq 0$, $|\alpha^{C_p}|\geq 2n $} & {\small $ \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ } & { \small $|\alpha| <0 $, $|\alpha^{C_p}| \geq 2n+1$} & {\small $\underline{B}_{\ov{n},\emptyset} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ } \\
\hline
{ \small $ |\alpha| =0$, $|\alpha^{C_p}|\geq 2n $} & {\small $\underline{\mathbb{Z}}^\ast \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ }
& & \\
\hline
\end{tabular}
\caption{Formula for $\underline{H}_{C_{p^n}}^\alpha(S^0)$ at large $C_p$-fixed points.}
\label{comp-highfix}
\end{table}
Incorporating the values obtained in Proposition \ref{comp_1} into the long exact sequence ~\myTagFormat{eq:base}{0}, we derive
\begin{cor}\label{comp_2}
Suppose $\alpha \in RO(C_{p^n})$ satisfying $|\alpha^H|>0$ for all $H \neq C_p.$ Then,
\[\underline{H}^\alpha_{C_{p^n}}(S^0; \underline{\mathbb{Z}})\cong \begin{cases} \underline{B}_{\ov{n}, \ov{1-\frac{|\alpha^{C_p}|}{2}}^c} & \text{ if } 4-2n \le |\alpha^{C_p}| \le 0\\ \underline{B}_{\ov{n}, \emptyset} & \text{ if } |\alpha^{C_p}| \le 2-2n \\ 0 & \text{ if }|\alpha^{C_p}| \ge 2. \end{cases}
\]
\end{cor}
We next use the multiplicative structure to derive a computation with $|\alpha|=0$ and $|\alpha^{C_p}|$ a sufficiently large negative number.
\begin{thm}\label{negfree}
If $\alpha \in RO_0(C_{p^n})$ with $|\alpha^{C_p}| \le -2n+2$, then the torsion-free part of $\underline{H}^\alpha_{C_{p^n}}(S^0)$ is $\underline{\mathbb{Z}}$, and there is a decomposition
\[
\underline{H}^\alpha_{C_{p^n}}(S^0) \cong \underline{\mathbb{Z}} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) ).
\]
For $|\alpha|<0$ and $|\alpha^{C_p}|\le -2n+2$ even, $\underline{H}^\alpha_{C_{p^n}}(S^0)\cong \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) ).$
\end{thm}
\begin{proof}
The last statement follows from Proposition \ref{signs}. In the rest of the proof, we use the $\underline{H}^\bigstar_{C_{p^n}}(S^0)$-module structure on $\underline{H}^\bigstar_{C_{p^n}}(S(\lambda_0)_+)$ which we denote by $\mu_{S(\lambda_0)}$. The multiplication in $\underline{H}^\bigstar_{C_{p^n}}(S^0)$ is denoted by $\mu$.
Proposition \ref{comp_1} implies that $\underline{H}^\alpha_{C_{p^n}}(S^0) \cong \underline{\mathbb{Z}}$ for $\alpha \in RO_0(C_{p^n})$ satisfying $|\alpha^H|>0$ for all $H \neq e, C_p$ and $|\alpha^{C_p}|\le 2-2n$. Let $\beta\in RO_0(C_{p^n})$ satisfying $|\beta^{C_p}|=0$ and $|\beta^H|\le 0$ for all $H \neq e, C_p$. Applying Anderson duality \eqref{and_comp} and Propositions \ref{comp_1}, \ref{cohzero} (a), we deduce $\underline{H}^\beta_{C_{p^n}}(S^0; \underline{\mathbb{Z}})\cong \underline{\mathbb{Z}}.$
We have the commutative diagram ($\pi : S(\lambda_0)_+ \to S^0$ is the map induced by adding disjoint base-points to $S(\lambda_0)\to \ast$ )
\[
\xymatrix{\underline{\mathbb{Z}} \cong \underline{H}^\alpha_{C_{p^n}}(S^0) \square_{\underline{\mathbb{Z}}}\underline{H}^\beta_{C_{p^n}}(S^0)\ar[d]^{\pi^\ast \square_{\underline{\mathbb{Z}}}\underline{\mathbb{Z}}} \ar[rr]^-{\mu} && \underline{H}^{\alpha+\beta}_{C_{p^n}}(S^0) \ar[d]^{\pi^\ast}\\
\underline{\mathbb{Z}} \cong \underline{H}^\alpha_{C_{p^n}}(S(\lambda_0)_+) \square_{\underline{\mathbb{Z}}}\underline{H}^\beta_{C_{p^n}}(S^0) \ar[rr]^-{\mu_{S(\lambda_0)}} && \underline{H}^{\alpha+\beta}_{C_{p^n}}(S(\lambda_0)_+)\cong \underline{\mathbb{Z}}}
\]
Note that $\pi^\ast \square_{\underline{\mathbb{Z}}}\underline{\mathbb{Z}}$ and $\mu_{S(\lambda_0)}$ are isomorphisms, so that $\mu$ is a section for $\pi^\ast$ up to isomorphism. This yields the decomposition (using ~\myTagFormat{eq:base}{0})
\[
\underline{H}^{\alpha+\beta}_{C_{p^n}}(S^0) \cong \underline{\mathbb{Z}} \oplus \underline{H}^{\alpha+\beta-\lambda_0}_{C_{p^n}}(S^0; \underline{\mathbb{Z}})\cong \underline{\mathbb{Z}} \oplus \Phi^\ast_p(\underline{H}^{(\alpha+\beta)^{C_p}}_{C_{p^{n-1}}}(S^0; \underline{\mathbb{Z}})) \quad (\text{by Proposition } \ref{signs}).
\]
Hence the result follows, since for each $H \neq e, C_p$ the dimension $|(\alpha+\beta)^H|$ can be made arbitrary.
\end{proof}
For the case of $|\alpha|$ odd and positive, the following is a direct consequence of Theorem \ref{negfree}.
\begin{cor}\label{negtor}
Suppose $\alpha\in RO(C_{p^n})$ satisfies $|\alpha|>0$ odd and $|\alpha^{C_p}| \le -2n+3$. Then $\underline{H}^{\alpha}_{C_{p^n}}(S^0) \cong \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0))$.
\end{cor}
\begin{proof}
From Example \ref{lambda0}, it suffices to prove for $|\alpha|=1$.
Then by ~\myTagFormat{eq:base}{0}, one obtains
\[
\xymatrix@R=0.5cm@C=0.5cm{\cdots \underline{H}^{\alpha-1}_{C_{p^n}}(S^0) \ar@{=}[d]^{\text{(by Theorem } \ref{negfree})}\ar[r]
&\underline{H}^{\alpha-1}_{C_{p^n}}(S(\lambda_0)_+) \ar@{=}[d]\ar[r]
& \underline{H}^{\alpha-\lambda_0}_{C_{p^n}}(S^0) \ar[r] & \underline{H}^{\alpha}_{C_{p^n}}(S^0) \ar[r]
& \underline{H}^{\alpha}_{C_{p^n}}(S(\lambda_0)_+)\ar@{=}[d] \cdots \\
\underline{\mathbb{Z}} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}-1}_{C_{p^{n-1}}}(S^0; \underline{\mathbb{Z}}) )
& \underline{\mathbb{Z}} & &
& \underline{\mathbb{Z}}^\ast}
\]
The map $\underline{H}^{\alpha-1}_{C_{p^n}}(S^0)\to \underline{H}^{\alpha-1}_{C_{p^n}}(S(\lambda_0)_+)$ has a section according to the proof of Theorem \ref{negfree}. Thus, there is an isomorphism
$
\underline{H}^{\alpha-\lambda_0}_{C_{p^n}}(S^0) \stackrel{\cong}{\to} \underline{H}^{\alpha}_{C_{p^n}}(S^0).
$
The result follows from $\underline{H}^{\alpha-\lambda_0}_{C_{p^n}}(S^0)\cong \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0))$ using Proposition \ref{signs}.
\end{proof}
The following corollary is a direct consequence of Anderson duality and Theorem \ref{negfree}.
\begin{cor}\label{zerpos}
The Mackey functor $\underline{H}^{\alpha}_{C_{p^n}}(S^0)\cong \underline{\mathbb{Z}}^* \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0))$ if $|\alpha|=0$ and $|\alpha^{C_p}|\ge 2n.$
\end{cor}
\begin{proof}
Using Theorem \ref{negfree} and Corollary \ref{negtor}, the short exact sequence \eqref{and_comp} turns out to be
\[
\underline{0} \to \Phi^\ast_p(\underline{H}^{3-\alpha^{C_p}}_{C_{p^{n-1}}}(S^0))^E \to \underline{H}^{\alpha}_{C_{p^n}}(S^0) \to \underline{\mathbb{Z}}^\ast \to \underline{0}.
\]
The map $\underline{\mathbb{Z}}^\ast \cong \underline{H}^{\alpha+\lambda_0 -1 }_{C_{p^n}}(S^0)\to \underline{H}^\alpha_{C_{p^n}}(S^0)$ in ~\myTagFormat{eq:base}{0} serves as a splitting for this sequence. Applying \eqref{anders-comp} to $\underline{H}^{3-\alpha^{C_p}}_{C_{p^{n-1}}}(S^0)$, we obtain the result using the identifications in Example \ref{lambda0}.
\end{proof}
The following result is readily deduced from Corollary \ref{negtor} and Anderson duality \eqref{anders-comp}.
\begin{cor}\label{postor}
Suppose $\alpha\in RO(C_{p^n})$ satisfies $|\alpha|<0$ even and $|\alpha^{C_p}| \ge 2n$. Then $\underline{H}^{\alpha}_{C_{p^n}}(S^0) \cong \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0))$.
\end{cor}
The following result is obtained by applying ~\myTagFormat{eq:base}{0} to the results of Theorem \ref{negfree}.
\begin{cor}\label{posevtor}
Suppose $\alpha \in RO(C_{p^n})$ satisfies $|\alpha|>0$ even and $|\alpha^{C_p}|\le -2n+2$. Then $\underline{H}^{\alpha}_{C_{p^n}}(S^0) \cong \underline{B}_{\ov{n}, \emptyset} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) ).$ \end{cor}
\begin{proof}
From Example \ref{lambda0}, it suffices to assume $|\alpha|=2$. We then have the following reduction of ~\myTagFormat{eq:base}{0}
\[
\underline{0} \to \underline{\mathbb{Z}}^* \to \underline{\mathbb{Z}} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) ) \to \underline{H}^{\alpha}_{C_{p^n}}(S^0) \to \underline{0}.
\]
As $\underline{\mathbb{Z}}^\ast$ is generated by transfers on the element $1 \in \mathbb{Z} = \underline{\mathbb{Z}}^\ast(C_{p^n}/e)$, the image of $\underline{\mathbb{Z}}^* \to \underline{\mathbb{Z}} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0) )$ sits inside $\underline{\mathbb{Z}}$, and the map is uniquely determined by the fact that it is an isomorphism at $C_{p^n}/e$. The result follows since the cokernel of $\underline{\mathbb{Z}}^\ast \to \underline{\mathbb{Z}}$ is $\underline{B}_{\ov{n},\emptyset}$.
\end{proof}
For the group $C_p$, the equivariant cohomology ring (in grading $n+m\lambda_0$) consists of a polynomial algebra $\mathbb{Z}[u_{\lambda_0}, a_{\lambda_0}]$ together with the modules $\mathbb{Z}\{[pu_{\lambda_0}^{-j}]\}$ and $\mathbb{Z}/p\{u_{\lambda_0}^{-j} a_{\lambda_0}^{-k}\}$. The latter are all $a_{\lambda_0}$-torsion, and the former gives a region where multiplication by $a_{\lambda_0}$ is injective, but there are no $a_{\lambda_0}$-periodic pieces. For $n\geq 2$, the Mackey functor $\Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0))$ forms an $a_{\lambda_0}$-periodic piece, as observed in the proofs of Theorem \ref{negfree} and Corollary \ref{posevtor}, and in Corollaries \ref{zerpos} and \ref{postor}. Using Lemma \ref{pullbackalpha}, this also produces $a_{\lambda_i}$-periodic pieces over $C_{p^n}$ whenever $n\geq i+2$.
\begin{exam}
For the group $C_{p^2}$, an example where $a_{\lambda_0}$-periodicity occurs is in degrees $2\lambda_1 + c \lambda_0$, where the periodic piece is the Mackey functor $\underline{B}_{\{2\},\emptyset}$ which is $\mathbb{Z}/p$ at $C_{p^2}/C_{p^2}$, and $0$ at other levels. The Mackey functor in degree $2\lambda_1 - 2\lambda_0$ is demonstrated in Table \ref{per0}, where the symbols for the generating classes are written alongside. The class $[p^2 u_{\lambda_0}^{-2}]$ is $a_{\lambda_0}$-torsion, and the periodic piece is represented in degree $2\lambda_1$ by $a_{\lambda_0}^2 a_{\lambda_1/\lambda_0}^2 = a_{\lambda_1}^2$. This construction easily generalizes so that over $C_{p^n}$ for $n\geq 2$, the class $a_{\lambda_1}^n$ is $a_{\lambda_0}$-periodic, and for $n\geq i+2$, $a_{\lambda_{i+1}}^{n-i}$ is $a_{\lambda_i}$-periodic.
\begin{table}[ht]
\begin{tabular}{ |p{4.5cm}|p{1.7cm}| }
\hline
{ \text{Mackey functor diagram}} & { \text{ \;\; \;\; Generating elements}} \\
\hline
{\xymatrix{ \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} p & 0\end{bmatrix}} \\
\mathbb{Z} \ar@/_1pc/[d]_p \ar@/_1pc/[u]_{\small \begin{bmatrix} 1 \\ 0 \end{bmatrix}} \\
\mathbb{Z} \ar@/_1pc/[u]_1} } & { \xymatrix@R=0.70cm{ u_{\lambda_1}^2[p^2u_{\lambda_0}^{-2}], (a_{\lambda_1/\lambda_0})^2 -u_{\lambda_1}^2[p^2u_{\lambda_0}^{-2}] \\
[pu_{\lambda_0}^{-2}] \\
1}}
\\
\hline
\end{tabular}
\caption{Formula for $\underline{H}_{C_{p^2}}^{2\lambda_1-2\lambda_0} (S^0)$.}
\label{per0}
\end{table}
\end{exam}
Note that $\operatorname{Ext}_L(\underline{B}_{\ov{n}, \emptyset}, \mathbb{Z}) \cong \underline{B}_{\ov{n}, \emptyset}$, hence \eqref{anders-comp} along with Corollary \ref{posevtor} readily implies
\begin{cor}\label{posoddtor}
For an element $\alpha \in RO(C_{p^n})$ satisfying $|\alpha|<0$ odd and $|\alpha^{C_p}| \ge 2n+1$, $\underline{H}^{\alpha}_{C_{p^n}}(S^0) \cong \underline{B}_{\ov{n}, \emptyset} \oplus \Phi^\ast_p(\underline{H}^{\alpha^{C_p}}_{C_{p^{n-1}}}(S^0; \underline{\mathbb{Z}}) ).$
\end{cor}
\section{Examples of non-trivial extensions}\label{nontriv}
In this section, we compute the coefficient Mackey functor $\underline{H}^{\bigstar}_{C_{p^2}}(S^0; \underline{\mathbb{Z}})$ completely, and using that we observe non-trivial extensions for the group $C_{p^n}$ when $n\ge 3$. The ring structure of $\underline{H}^\bigstar_{C_{p^2}}(S^0; \underline{\mathbb{Z}})(C_{p^2}/C_{p^2})$ is completely determined in \cite{Zen17} using the Tate square. Our computations follow mainly from the general results of \S \ref{largedimcomp}.
For $C_p$, the $\underline{\mathbb{Z}}$-cohomology has the following additive structure \cite[Corollary B.10]{Fer99}
\begin{myeq}\label{cohCp}
\underline{H}^\alpha_{C_p}(S^0;\underline{\mathbb{Z}}) \cong \begin{cases}
\underline{\mathbb{Z}} & \mbox{if}~|\alpha|=0, |\alpha^{C_p}|\leq 0 \\
\underline{\mathbb{Z}}^\ast & \mbox{if}~|\alpha|=0, |\alpha^{C_p}|> 0\\
\underline{B}_{\ov{1}, \emptyset} & \mbox{if}~|\alpha|>0, |\alpha^{C_p}|\leq 0~\mbox{even} \\
\underline{B}_{\ov{1}, \emptyset} & \mbox{if}~|\alpha|<0, |\alpha^{C_p}|\ge 3 ~\mbox{odd} \\
0 &\mbox{otherwise}.
\end{cases}
\end{myeq}
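As a quick illustration of \eqref{cohCp} (with the convention, suggested by the notation used below, that $\lambda_0$ denotes the faithful two-dimensional representation, so that $|m\lambda_0|=2m$ and $|(m\lambda_0)^{C_p}|=0$): for $m>0$ the formula gives $\underline{H}^{m\lambda_0}_{C_p}(S^0;\underline{\mathbb{Z}})\cong \underline{B}_{\ov{1}, \emptyset}$, generated at the fixed point level by the Euler class $a_{\lambda_0}^m$, while $\underline{H}^{-m\lambda_0}_{C_p}(S^0;\underline{\mathbb{Z}})=0$.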
\begin{mysubsection}{The additive structure of $C_{p^2}$ cohomology}
We compute the Mackey functors $\underline{H}^\alpha_{C_{p^2}}(S^0; \underline{\mathbb{Z}})$ for each $\alpha \in RO(C_{p^2})$. The entire computation is summarized in Table \ref{comp-tab}.
\end{mysubsection}
\begin{table}[ht]
\begin{tabular}{ |p{3.5cm}|p{1.7cm}||p{3.5cm}| p{1.8cm}| }
\hline
{ \small $|\alpha|$ \text{ even \& positive}} & { \small $\underline{H}_{C_{p^2}}^\alpha(S^0)$} & { \small $|\alpha|=0$ } & { \small $\underline{H}_{C_{p^2}}^\alpha(S^0)$} \\
\hline
{ \small $ |\alpha^{C_p}| >0 , |\alpha^{C_{p^2}}| >0$} & { $0$ }
& { \small $|\alpha^{C_p}| \le 0 , |\alpha^{C_{p^2}}| \le 0$} & { $\underline{\mathbb{Z}}$ } \\
\hline
{\small $|\alpha^{C_p}| >0,|\alpha^{C_{p^2}}|\le 0$} & { $\underline{B}_{\ov{1}^c, \emptyset}$ } & {\small $|\alpha^{C_p}| >0,|\alpha^{C_{p^2}}|> 0$} & { $\underline{\mathbb{Z}}^{\ast}$ } \\
\hline
{ \small $|\alpha^{C_p}|=0,|\alpha^{C_{p^2}}|\le 0$} & { $\underline{B}_{\ov{2}, \emptyset}$ }
& { \small $|\alpha^{C_p}|\ge 4,|\alpha^{C_{p^2}}|\le 0$} & { $\underline{B}_{\ov{1}^c, \emptyset}\oplus \underline{\mathbb{Z}}^\ast$ } \\
\hline
{ \small $|\alpha^{C_p}|= 0, |\alpha^{C_{p^2}}| >0$} & { $\underline{B}_{\ov{2}, \ov{1}^c}$ } & { \small $|\alpha^{C_p}|= 2, |\alpha^{C_{p^2}}| \le 0$} & { $\underline{\mathbb{Z}}_{\ov{1}}$ } \\
\hline
{ \small $|\alpha^{C_p}|<0, |\alpha^{C_{p^2}}|>0$} & { $\underline{B}_{\ov{2}, \emptyset}$ }
& { \small $|\alpha^{C_p}|=0, |\alpha^{C_{p^2}}|>0$} & { $\underline{\mathbb{Z}}_{\ov{1}^c}$ }\\
\hline
{ \small $|\alpha^{C_p}|\le 0, |\alpha^{C_{p^2}}|\le 0$} & { $\underline{B}_{\ov{2}, \emptyset}$} & { \small $|\alpha^{C_p}|<0, |\alpha^{C_{p^2}}|> 0$} & { $\underline{\mathbb{Z}}$ } \\
\Cline{1.3pt}{1-4}
\hline
{ \small $|\alpha|$ \text{ odd \& negative} } & { \small $\underline{H}_{C_{p^2}}^\alpha(S^0)$} & {\small $|\alpha|$ \text{ even \& negative} } & { \small $\underline{H}_{C_{p^2}}^\alpha(S^0)$} \\
\hline
{ \small $ |\alpha^{C_p}|\le 1, |\alpha^{C_{p^2}}|\leq 1$} & { $0$ }
&{\small $|\alpha^{C_p}| \ge 4,|\alpha^{C_{p^2}}| \le 0$} & { $\underline{B}_{\ov{1}^c, \emptyset}$ } \\
\hline
{\small $|\alpha^{C_p}| \le 1,|\alpha^{C_{p^2}}|> 1$} & { $\underline{B}_{\ov{1}^c, \emptyset}$ } & { \small otherwise } & { 0 } \\
\hline
\Cline{1pt}{3-4}
{ \small $|\alpha^{C_p}|=3,|\alpha^{C_{p^2}}|> 1$} & { $\underline{B}_{\ov{2}, \emptyset}$ }
&{\small $|\alpha|$ \text{ odd \& positive} } & { \small $\underline{H}_{C_{p^2}}^\alpha(S^0)$} \\
\hline
{ \small $|\alpha^{C_p}|= 3, |\alpha^{C_{p^2}}| \le 1$} & { $\underline{B}_{\ov{1}, \emptyset}$ } & {\small $|\alpha^{C_p}|\le -1, |\alpha^{C_{p^2}}| >1$} & { $\underline{B}_{\ov{1}^c, \emptyset}$ } \\
\hline
{ \small $|\alpha^{C_p}|\ge 5$} & { $\underline{B}_{\ov{2}, \emptyset}$ }
& {\small otherwise} & $0$ \\
\hline
\end{tabular}
\caption{Formula for $\underline{H}_{C_{p^2}}^\alpha(S^0;\underline{\mathbb{Z}})$.}
\label{comp-tab}
\end{table}
\begin{theorem}\label{Cp2-comp}
The Mackey functors $\underline{H}_{C_{p^2}}^\alpha(S^0;\underline{\mathbb{Z}})$ are as demonstrated in Table \ref{comp-tab}.
\end{theorem}
\begin{proof}
The starting point is the application of Proposition \ref{signs} to \eqref{cohCp}, which gives the result whenever $|\alpha|$ and $|\alpha^{C_p}|$ are either both $0$ or both have the same sign. The remaining cases with $|\alpha|=0$ follow from Proposition \ref{comp_1}, Theorem \ref{negfree}, and Corollary \ref{zerpos}. For $|\alpha|>0$ odd, apart from the above, the remaining cases follow from Corollary \ref{negtor}. Applying Anderson duality \eqref{anders-comp} to the cases with $|\alpha|>0$ odd, we obtain the computations for $|\alpha|<0$ even.
We next consider the case $|\alpha|>0$ even. Summarizing the results from \S \ref{largedimcomp}, we note that, apart from the case $|\alpha^{C_p}|=0$, $|\alpha^{C_{p^2}}|\leq 0$, the results follow from Proposition \ref{signs}, Corollary \ref{comp_2} and Corollary \ref{posevtor}. Using the calculations in Example \ref{lambda0}, it suffices to consider $|\alpha|=2$, in which case we have already computed the cohomology in the grading $\alpha-\lambda_0$ to be $\underline{\mathbb{Z}}$ (Table \ref{comp-tab}). The short exact sequence
\[
\underline{0} \to \underline{\mathbb{Z}}^\ast \to \underline{H}^{\alpha-\lambda_0}_{C_{p^2}}(S^0; \underline{\mathbb{Z}}) \to \underline{H}^{\alpha}_{C_{p^2}}(S^0; \underline{\mathbb{Z}}) \to \underline{0}
\]
computes $\underline{H}^\alpha_{C_{p^2}}(S^0)\cong \underline{B}_{\ov{2}, \emptyset}$. Finally for $|\alpha|<0$ odd, the result follows from Anderson duality \eqref{anders-comp}.
\end{proof}
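As a consistency check, the entry of Table \ref{comp-tab} with $|\alpha|=0$, $|\alpha^{C_p}|=0$ and $|\alpha^{C_{p^2}}|>0$ agrees with Proposition \ref{comp_1} for $n=2$: there the range $4-2n\le |\alpha^{C_p}|\le 0$ forces $|\alpha^{C_p}|=0$, and the answer is $\underline{\mathbb{Z}}_{\ov{1}^c}$.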
\begin{mysubsection}{Examples of non-trivial extensions}
We now point out cohomology computations where the Mackey functors which arise are not a direct sum of copies of $\underline{\mathbb{Z}}_T$ and $\underline{B}_{T,S}$.
We start by assuming $n\ge 3$ and $\alpha \in RO_0(C_{p^n})$ satisfying $|\alpha^{C_p}|=2n-2$ and $|\alpha^H|\le 0$ for all $H \neq e, C_p.$ By Proposition \ref{sph_coh}, there is a short exact sequence
\begin{myeq}\label{nonsplit}
\underline{0} \to \underline{B}_{\ov{n},\ov{1}^c} \cong \Theta^\ast_1(\underline{B}_{\ov{1}}) \to \underline{H}^\alpha_{C_{p^n}}(S(\lambda_1)_+; \underline{\mathbb{Z}}) \to \Theta_1(\underline{\mathbb{Z}}_{\ov{1}}) \cong \underline{\mathbb{Z}}_{\ov{1}}\to \underline{0}
\end{myeq}
in $\sf Mack_{C_{p^n}}$. At each level $C_{p^n}/H$, the short exact sequence \eqref{nonsplit} splits since the right end is free. We claim that there does not exist any splitting in $\sf Mack_{C_{p^n}}$. For if it did, then $\downarrow^{p^n}_{p^2} \underline{H}^\alpha_{C_{p^n}}(S(\lambda_1)_+; \underline{\mathbb{Z}}) \cong \underline{B}_{\ov{2},\ov{1}^c} \oplus \underline{\mathbb{Z}}_{\ov{1}}$. Now we may apply ~\myTagFormat{eq:base}{1}, substituting the values from Table \ref{comp-tab}, to obtain the following exact sequence
\[
\underline{0} \to \underline{B}_{\ov{1}^c,\emptyset} \to \underline{\mathbb{Z}}^\ast \oplus \underline{B}_{\ov{1}^c, \emptyset }\to \underline{B}_{\ov{2},\ov{1}^c} \oplus \underline{\mathbb{Z}}_{\ov{1}} \to \underline{H}^{\alpha-\lambda_1+1}_{C_{p^2}}(S^0)\to \underline{0}.
\]
The restriction $\operatorname{res}^{C_{p^2}}_{C_p}$ in $\underline{B}_{\ov{2},\ov{1}^c}$ is $0$ and in $\underline{\mathbb{Z}}_{\ov{1}}$ is divisible by $p$; on the other hand, the restriction in $\underline{H}^{\alpha-\lambda_1+1}_{C_{p^2}}(S^0)\cong \underline{B}_{\ov{2},\emptyset}$ (Table \ref{comp-tab}) is the usual quotient $\mathbb{Z}/p^2 \to \mathbb{Z}/p$. Thus, the short exact sequence \eqref{nonsplit} does not split in $\sf Mack_{C_{p^n}}.$
Given that \eqref{nonsplit} does not split, a direct computation implies that, up to isomorphism, there is only one choice for $\underline{H}^\alpha_{C_{p^n}}(S(\lambda_1)_+)$, which we call $\underline{T}(n)$. The groups are given by
\[
\underline{T}(n)(C_{p^n}/H) \cong \begin{cases} \mathbb{Z} \oplus \mathbb{Z}/p & \mbox{if } H\neq e \\
\mathbb{Z} & \mbox{if } H=e, \end{cases}
\]
and the restrictions and transfers are given by
\[\operatorname{res}^{C_{p^i}}_{C_{p^{i-1}}}= \begin{cases} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} & \mbox{ for } 2\le i \le n \\ \begin{pmatrix} p & 0\end{pmatrix} & \mbox{ for } i=1. \end{cases} \text{ and } \operatorname{tr}^{C_{p^i}}_{C_{p^{i-1}}}= \begin{cases} \begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix} & \mbox{ for } 2\le i \le n \\ \begin{pmatrix} 1\\ -1\end{pmatrix} & \mbox{ for } i=1. \end{cases}
\]
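Note that for $n=2$ these restrictions and transfers specialize exactly to the change-of-basis presentation of $\underline{T}(2)$ displayed earlier.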
Now we restrict our attention to the group $C_{p^3}$ and $\alpha \in RO_0(C_{p^3})$ satisfying $|\alpha^{C_p}|=4$, $|\alpha^{C_{p^2}}|\le 0$, and $|\alpha^{C_{p^3}}|\le 0$. Consider the long exact sequence
\begin{myeq}\label{p-seq3}
\cdots \underline{H}^{\alpha-\lambda_1}_{C_{p^3}}(S^0) \to \underline{H}^{\alpha}_{C_{p^3}}(S^0) \to \underline{H}^{\alpha}_{C_{p^3}}(S(\lambda_1)_+) \to \underline{H}^{\alpha-\lambda_1+1}_{C_{p^3}}(S^0) \to \underline{H}^{\alpha+1}_{C_{p^3}}(S^0)\cdots
\end{myeq}
From \eqref{anders-comp} and Proposition \ref{cohzero} (a), we get $\underline{H}^{\alpha-\lambda_1}_{C_{p^3}}(S^0)\cong \underline{0}.$ Using \eqref{anders-comp}, we obtain $\underline{H}^{\alpha-\lambda_1+1}_{C_{p^3}}(S^0) \cong \operatorname{op}eratorname{Ext}_L(\underline{H}^{2-\lambda_0+\lambda_1-\alpha}_{C_{p^3}}(S^0), \mathbb{Z})$, which is $\underline{B}_{\ov{3},\ov{1}^c}^E \cong \underline{B}_{\ov{1}, \emptyset}$ by Corollary \ref{comp_2}. Finally, $\underline{H}^{\alpha+1}_{C_{p^3}}(S^0)=0$ by Proposition \ref{cohzero} (b). Thus \eqref{p-seq3} reduces to the short exact sequence
\[
\underline{0} \to \underline{H}^{\alpha}_{C_{p^3}}(S^0) \to \underline{T}(3) \to \underline{B}_{\ov{1}, \emptyset} \to \underline{0}.
\]
A direct computation of the kernel of the map $\underline{T}(3) \to \underline{B}_{\ov{1}, \emptyset}$ gives
\[
\xymatrix{\mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}} \\
\mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} p & 0\end{bmatrix}} \ar@/_1pc/[u]_{\small \begin{bmatrix} p & 0\\ 0 & 1 \end{bmatrix}} \\
\mathbb{Z} \ar@/_1pc/[d]_p \ar@/_1pc/[u]_{\small \begin{bmatrix} 1 \\ -1 \end{bmatrix}} \\
\mathbb{Z} \ar@/_1pc/[u]_1}
\]
Applying a change of basis $\epsilon_1 \mapsto \epsilon_1-\epsilon _2$ and $\epsilon_2\mapsto \epsilon_2$ at $C_{p^3}/C_{p^2}$, we may rewrite the Mackey functor above as
\[
\xymatrix{& \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} 1 & 0\\ 1 & 0 \end{bmatrix}} \\
\underline{H}^{\alpha}_{C_{p^3}}(S^0): & \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} p & 0\end{bmatrix}} \ar@/_1pc/[u]_{\small \begin{bmatrix} p & 0\\ -1 & 1 \end{bmatrix}} \\
& \mathbb{Z} \ar@/_1pc/[d]_p \ar@/_1pc/[u]_{\small \begin{bmatrix} 1 \\ 0 \end{bmatrix}} \\ & \mathbb{Z} \ar@/_1pc/[u]_1}
\]
Note that $\downarrow^{p^3}_{p^2}\underline{H}^{\alpha}_{C_{p^3}}(S^0) \cong \underline{\mathbb{Z}}^\ast \oplus \underline{B}_{\ov{1}^c, \emptyset}$ as in Table \ref{comp-tab}. We now look at \eqref{p-seq3}
\[
\underline{0} \to \underline{H}^{\alpha+\lambda_0-1}_{C_{p^3}}(S(\lambda_0)_+) \to \underline{H}^{\alpha}_{C_{p^3}}(S^0) \to \underline{H}^{\alpha+\lambda_0}_{C_{p^3}}(S^0) \to \underline{H}^{\alpha+\lambda_0}_{C_{p^3}}(S(\lambda_0)_+) \cong \underline{0}.
\]
and put in the values to get
\[
\xymatrix{& \mathbb{Z} \ar@/_/[d]_p \ar[rrr]^{\small \begin{bmatrix} p \\ -1 \end{bmatrix}}&& & \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} 1 & 0\\ 1 & 0 \end{bmatrix}} \ar[rrr] &&& \mathbb{Z}/p^2 \ar@/_1pc/[d]_{1}\\
\underline{0} \ar[r] & \mathbb{Z} \ar@/_/[d]_p \ar@/_/[u]_1 \ar[rrr]^(0.3){\small \begin{bmatrix} 1 \\ 0 \end{bmatrix}} & && \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} p & 0\end{bmatrix}} \ar@/_1pc/[u]_{\small \begin{bmatrix} p & 0\\ -1 & 1 \end{bmatrix}} \ar[rrr]&&& \mathbb{Z}/p \ar@/_1pc/[u]_{p} \ar@/_/[d] \ar[r] & \underline{0},\\
& \mathbb{Z} \ar@/_/[d]_p\ar@/_/[u]_1 \ar[rrr]^{1} &&& \mathbb{Z} \ar@/_/[d]_p \ar@/_1pc/[u]_{\small \begin{bmatrix} 1 \\ 0 \end{bmatrix}} \ar[rrr] &&& 0 \ar@/_/[d] \ar@/_/[u] \\
& \mathbb{Z} \ar@/_/[u]_1 \ar[rrr]^{1} &&& \mathbb{Z} \ar@/_/[u]_1 \ar[rrr] &&& 0 \ar@/_/[u]}
\]
a non-trivial extension. One may check directly that $\underline{H}^\alpha_{C_{p^3}}(S^0)$ does not split as a direct sum of Mackey functors of the type $\underline{\mathbb{Z}}_T$ and $\underline{B}_{T,S}$.
\begin{table}[ht]
\begin{tabular}{ |p{4.5cm}|p{1.7cm}| }
\hline
{ \text{ Mackey functor diagram}} & { \text{ \;\; \;\; Generating elements}} \\
\hline
{\xymatrix{ \mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} 1 & 0\\ 1 & 0 \end{bmatrix}} \\
\mathbb{Z} \oplus \mathbb{Z}/p \ar@/_1pc/[d]_{\small \begin{bmatrix} p & 0\end{bmatrix}} \ar@/_1pc/[u]_{\small \begin{bmatrix} p & 0\\ -1 & 1 \end{bmatrix}} \\ \mathbb{Z} \ar@/_1pc/[d]_p \ar@/_1pc/[u]_{\small \begin{bmatrix} 1 \\ 0 \end{bmatrix}} \\
\mathbb{Z} \ar@/_1pc/[u]_1} } & { \xymatrix@R=.65cm{ (a_{\lambda_1/\lambda_0})^2, p\cdot(a_{\lambda_1/\lambda_0})^2 -u_{\lambda_1}^2[p^3u_{\lambda_0}^{-2}]\\
u_{\lambda_1}^2[p^2u_{\lambda_0}^{-2}], (a_{\lambda_1/\lambda_0})^2 -u_{\lambda_1}^2[p^2u_{\lambda_0}^{-2}] \\
[pu_{\lambda_0}^{-2}] \\
1}}
\\
\hline
\end{tabular}
\caption{Formula for $\underline{H}_{C_{p^3}}^{2\lambda_1 - 2 \lambda_0} (S^0)$.}
\label{comp-cp3}
\end{table}
\begin{exam}
An example of such an $\alpha$ is $2\lambda_1 - 2\lambda_0$. Table \ref{comp-cp3} compares the Mackey functor diagram with the corresponding generating classes.
\end{exam}
\end{mysubsection}
\end{document}
\begin{document}
\title{Curves on a Double Surface}
\author{Scott Nollet}
\address{Department of Mathematics,
Texas Christian University, Fort Worth, TX 76129, USA}
\email{[email protected]}
\author{Enrico Schlesinger}
\address{Dipartimento di Matematica, Politecnico di Milano, Piazza
Leonardo da Vinci 32,
20133 Milano, Italy}
\email{[email protected]}
\date{}
\dedicatory{Dedicated to Silvio Greco on the occasion of his sixtieth birthday}
\keywords{Hilbert schemes, double surfaces, deformations}
\subjclass{14C05,14H50}
\begin{abstract}
Let $X$ be a doubling of a smooth surface $F$ in a smooth
threefold and let $C \subset X$ be a locally Cohen-Macaulay curve.
Then $C$ gives rise to two effective divisors on $F$, namely the
curve part $P$ of $C \cap F$ and the curve $R$ residual
to $C \cap F$ in $C$.
We show that a general deformation of $R$ on $F$ lifts to a
deformation of $C$ on $X$ when a certain cohomology group
vanishes and give applications to the study of
Hilbert schemes of locally Cohen-Macaulay space curves.
\end{abstract}
\maketitle
\section{Introduction}
It is usually difficult to determine when a fixed curve $C \subset \mathbb{P}^3$
is in the closure of another family of curves.
Beyond semicontinuity conditions, there are few known obstructions.
Hartshorne showed that curves of certain degree and genus cannot
specialize to stick figures by analyzing the specific quadric and
cubic surfaces on which the curves lie \cite{zeuthen}.
The problem reduces to that of linear equivalence if all the curves lie
on a {\it fixed smooth} surface $F$, since the families of divisor classes
are both open and closed in the corresponding Hilbert scheme \cite{F}.
When the curves in question are nonreduced, smooth surfaces are of little help.
In our analysis of curves of degree four in $\mathbb{P}^3$ \cite{NS}, we
used families of curves on double planes and double quadric surfaces to
produce various specializations in the Hilbert schemes: these were
critical in determining irreducible components and showing
connectedness. If $X$ is a doubling of a smooth surface $F$
in a smooth threefold - or more generally, if $X$ is a ribbon
supported on $F$ in the sense of Bayer and Eisenbud \cite{beis} -
we will describe the Hilbert scheme of curves on $X$, using
as our model the rather complete study of curves on a double plane in $\mathbb{P}^3$
\cite{2h}.
In section 2 we describe the natural triple $T(C)$ associated to a curve
$C \subset X$ defined in \cite{2h}: the scheme-theoretic
intersection $C \cap F$ has a divisorial part $P$ and a zero-dimensional
part $Z$, and when we form the residual curve $R$ to $C \cap F$ in $C$, we
obtain the triple $T(C)=\{Z,R,P\}$. Here $Z$ is a generalized Gorenstein
divisor on $R$ and $R \subset P$ are effective divisors on $F$.
We describe the curves $C$ giving rise to a fixed triple $\{Z,R,P\}$
and give practical conditions (\ref{exist} and \ref{practical})
under which this space is non-empty. The existence of such curves $C$
is subtle when our conditions fail.
Using the triples above, we stratify $H_{d,g}(X)$ in section 3 to obtain
locally closed $H_{z,r,p} \subset H_{d,g} (X)$ with natural projection
maps $t$ to the relevant Hilbert flag schemes $D_{z,r,p}$.
Relativizing results from section 2, we
find (\ref{structureofpi}) that $t$ has the local structure of an open
immersion followed by an affine bundle projection over the locus
$V \subset D_{z,r,p}$ of triples satisfying $H^{1}({\mathcal O}_{R}(Z+P-F))=0$
(the fibres are nonempty if condition (2) of (\ref{exist}) holds).
Combining with the structure of the projection map
$D_{z,r} \rightarrow H_{r}$ (\ref{dominant}), we find (Theorem \ref{limits}) that
if $C \subset X$ is a curve whose triple $\{Z,R,P\}$
satisfies $H^{1}({\mathcal O}_{R}(Z+P-F))=0$, then a general deformation
of $R$ lifts to a deformation of $C$. We close with some applications
to families of space curves.
\section{Curves on a ribbon} \label{two}
Let $F$ be a smooth surface over an algebraically closed ground field $k$.
If $F$ is contained in a smooth threefold $T$ and $X$ is the effective divisor
$2F$ on $T$, then
\begin{enumerate}
\item
${\mbox{Supp}} X = F$;
\item
$\ideal{F,X} \cong {\mathcal O}_{F} (-F)$ is an invertible ${\mathcal O}_{F}$-module.
\end{enumerate}
In other words, $X$ is a {\it ribbon} over $F$
in the sense of Bayer and Eisenbud \cite{beis}.
Since $F$ is smooth, any ribbon $X$ is locally split, hence appears
locally as a doubling of $F$ in a smooth threefold.
We use the notation ${\mathcal O}_{F} (-F)= \ideal{F}$ and
${\mathcal O}_F (F) = {\mathcal H} om_{{\mathcal O}_F} ( \ideal{F}, {\mathcal O}_F)$.
Here we will further assume $X$ is projective, although many of
our constructions work more generally.
We will study curves on a ribbon $X$ over $F$ using the triples introduced
in ~\cite{2h}. We adopt the following conventions:
A subscheme $C \subset X$ is a {\it curve} if
all of its associated points have dimension one; thus $C$ is locally
Cohen-Macaulay of pure dimension one or empty. If $Y$ is a subscheme of
$X$, $\ideal{Y}$ denotes the ideal sheaf of $Y$ in $X$.
If $R$ is a Gorenstein scheme and $Z$ a generalized divisor on
$R$ \cite{hgd}, then ${\mathcal O}_R (Z) = {\mathcal H} om (
\ideal{Z,R}, {\mathcal O}_R)$ denotes the reflexive sheaf associated to the
divisor $Z$. If further $R \subset F$, we write ${\mathcal O}_R (Z-F)$ for
${\mathcal O}_R (Z) \otimes {\mathcal O}_F (-F)$.
\begin{prop} \label{1.1}
To each curve $C$ in $X$ is associated a triple $T(C) = \{Z,R,P \}$
in which $R \subset P$ are effective divisors on $F$, $Z \subset R$ is
Gorenstein and zero-dimensional (possibly empty), and
$$
\ideal{P, C} \cong {\mathcal O}_R (Z-F).
$$
The arithmetic genera are related by
\begin{equation}
p_{a}(C)= p_{a}(P) + p_{a}(R) + \deg_R {\mathcal O}_{R}(F) - \deg (Z) -1.
\end{equation}
\end{prop}
\begin{proof}
We proceed as in~\cite[$\S 2$]{2h}. Extracting the
possible embedded points from the one dimensional scheme-theoretic
intersection $C \cap F \subset F$, we may write
$$
\ideal{C \cap F,F} = \ideal{Z,F} (-P)
$$
where $P$ is an effective divisor and $Z$ is zero-dimensional.
The inclusion $P \subset C \cap F$ yields a commutative diagram
\begin{equation} \label{diagram}\begin{array}{lllllllll}
& & 0 & & 0 & & 0 & & \\
& & \downarrow & & \downarrow & & \downarrow & & \\
0 & \rightarrow & \ideal{R,F}(-F) & \rightarrow & \ideal{C,X} & \rightarrow &
\ideal{Z,F}(-P) & \rightarrow & 0 \\
& & \downarrow & & \downarrow & & \downarrow & & \\
0 & \rightarrow & {\mathcal O}_F (-F) & \rightarrow & \ideal{P,X} & \rightarrow &
{\mathcal O}_{F}(-P) & \rightarrow & 0 \\
& & \downarrow & & \downarrow
& & \downarrow & & \\
0 & \rightarrow & {\mathcal O}_{R}(-F) & \stackrel{\sigma}{\rightarrow} & {\mathcal L} &
\rightarrow & {\mathcal O}_{Z}(-P) & \rightarrow & 0 \\
& & \downarrow & & \downarrow & & \downarrow & & \\
& & 0 & & 0 & & 0 & & \\
\end{array}\end{equation}
which defines the residual scheme $R$ to $C \cap F$ in $C$.
The inclusion ${\mathcal O}_R (-F) \hookrightarrow {\mathcal O}_C$ shows that the
associated points of $R$ are among those of $C$, hence $R$ is a curve.
By construction, $P$ is the largest curve in $F \cap C$,
hence $R \subseteq P$ and $C \subset F$ if and only if $R$ is empty.
We now show that $Z$ is Gorenstein on $R$ and that
${\mathcal L} \cong {\mathcal O}_{R}(Z-F)$ is a rank one reflexive
${\mathcal O}_{R}$-module.
In view of the bottom row of diagram
\ref{diagram}, the submodule $\ideal{R} {\mathcal L} \subset {\mathcal L}$ is
supported on $Z$, but ${\mathcal L} = \ideal{P,C} \subset {\mathcal O}_{C}$ has
only associated points of dimension one (because $C$ is purely
one-dimensional), hence $\ideal{R} {\mathcal L} =0$ and ${\mathcal L}$ is an
${\mathcal O}_{R}$-module.
It follows that ${\mathcal O}_{Z}(-P)$ is an ${\mathcal O}_{R}$-module as well,
hence $Z \subset R$.
Applying the bifunctor ${\mathcal H}om_{{\mathcal O}_R} (\: - \:, \: - \: )$ to
the sequence $\exact{\ideal{Z,R}}{{\mathcal O}_R}{{\mathcal O}_Z}$ and
the bottom row of diagram~\ref{diagram} we obtain
\begin{equation*} \begin{array}{cccccccc}
& & 0 & & 0 & & {\mathcal H}om_{{\mathcal O}_R} ({\mathcal O}_Z, {\mathcal O}_Z (-P)) \\
& & \downarrow & & \downarrow && \downarrow
\mbox{\scriptsize{$\alpha$}} \\
0 & \rightarrow &
{\mathcal O}_R (-F) & \stackrel{\pi}{\rightarrow} &
{\mathcal O}_R (Z-F) & \rightarrow &
{\mathcal E}xt^{1}_{{\mathcal O}_R} ({\mathcal O}_Z, {\mathcal O}_R (-F)) \\
& & \downarrow & & \downarrow \mbox{\scriptsize{$\beta$}} & &
\downarrow 0\\
0 & \rightarrow &
{\mathcal L} & \stackrel{\phi}{\rightarrow} &
{\mathcal H}om_{{\mathcal O}_R} (\ideal{Z,R}, {\mathcal L}) & \stackrel{\gamma}{\rightarrow} &
{\mathcal E}xt^{1}_{{\mathcal O}_R} ({\mathcal O}_Z, {\mathcal L}) \\
& & \downarrow & & \downarrow & & \\
{\mathcal O}_Z (-P) & \stackrel{\delta}{\rightarrow} &
{\mathcal O}_Z (-P) & \stackrel{0}{\rightarrow} &
{\mathcal H}om_{{\mathcal O}_R} (\ideal{Z,R}, {\mathcal O}_Z (-P)) & & \\
\end{array}\end{equation*}
The morphisms $\pi$, $\phi$ and $\alpha$ are injective because
${\mathcal H}om_{{\mathcal O}_R} ({\mathcal O}_Z, {\mathcal L})=0$ as ${\mathcal L}$ has no zero dimensional associated
point.
Since $F$ is smooth, $R$ is Gorenstein, hence
$\omega_{Z} \cong {\mathcal E}xt^{1}_{{\mathcal O}_R} ({\mathcal O}_Z,{\mathcal O}_R (-F))$ and $\alpha$
can be thought of as a morphism ${\mathcal O}_Z \rightarrow \omega_Z$. Since ${\mathcal O}_Z$ and
$\omega_Z$ have the same length, $\alpha$ is an isomorphism (which
explains the $0$ map at the right of the diagram) and $Z$ is
Gorenstein.
Thinking of ${\mathcal L}$ and ${\mathcal O}_R (Z-F)$ as subsheaves of
${\mathcal H}om_{{\mathcal O}_R} (\ideal{Z,R}, {\mathcal L})$, the $0$ at the bottom
of the diagram yields ${\mathcal L} \subset {\mathcal O}_R (Z-F)$
while the $0$ at the right gives ${\mathcal O}_R (Z-F) \subset {\mathcal L}$, hence
${\mathcal L}={\mathcal O}_R (Z-F)$.
For the arithmetic genus formula, note that $p_{a}(C)-p_{a}(P) =
- \chi \ideal{P,C} = - \chi {\mathcal L}$, which can be read off from the
bottom row of diagram \ref{diagram}, keeping in
mind that $\deg Z = \chi \,{\mathcal O}_Z$ and $\deg_R {\mathcal E} = \chi \, {\mathcal E}
-\chi \, {\mathcal O}_R$ for an invertible sheaf ${\mathcal E}$ on $R$.
\end{proof}
\begin{prop}\label{moduli}
Given a triple $\{Z,R,P \}$ of closed subschemes of $F$ as above, the
set of curves $C \subset X$ with $T(C)=\{Z,R,P \} $ is in one-to-one
correspondence with an open subset of the vector space
$$
\mbox{\rm H}^{0} (R, {\mathcal O}_{R} (Z+P-F)) \cong
\mbox{\rm Hom}_{R} ({\mathcal O}_{R}(-P), {\mathcal O}_{R} (Z-F) ).
$$
\end{prop}
\begin{proof}
We study the fibres of the map $C \mapsto T(C)$.
We have seen that the bottom row of diagram (\ref{diagram})
is a sequence of ${\mathcal O}_{R}$-modules, hence tensoring with ${\mathcal O}_R$
we obtain a new diagram
\begin{equation}\label{phi}
\begin{CD}
0 @>>> {\mathcal O}_{R}(-F) @>{\tau}>>
\ideal{P} \otimes {\mathcal O}_{R}
@>{\pi}>> {\mathcal O}_{R} (-P) @>>> 0 \\
&& @V{=}VV @V{\phi}VV @VVV \\
0 @>>> {\mathcal O}_{R}(-F) @>{\sigma}>> {\mathcal O}_{R}(Z-F)
@>{\gamma}>> {\mathcal O}_{Z}(-P) @>>> 0
\end{CD}
\end{equation}
in which $\phi$ is surjective, the top row is the conormal sequence of
$P$ in $X$ restricted to $R$, and the bottom row is obtained by dualizing
$ \exact{\ideal{Z,R}}{{\mathcal O}_R}{{\mathcal O}_Z}$. It is clear that any
surjection ${\phi}$ with ${\phi} \circ \tau = \sigma$ yields a curve
with triple $\{Z,R,P\}$. As in \cite{2h}, we obtain a
one-to-one correspondence between curves $C$ in $X$ with triple $\{Z,R,P \}$
and surjections ${\phi}$ satisfying ${\phi} \circ \tau = \sigma$.
Since the cokernel of $\tau$ is isomorphic to ${\mathcal O}_R (-P)$, the set of
such surjections may be identified with an open subset of
$\mbox{\rm Hom}_{R} ({\mathcal O}_{R}(-P), {\mathcal O}_{R} (Z-F) )$.
\end{proof}
It is useful to know when a triple actually arises from a curve.
\begin{prop} \label{exist}
Let $\{Z,R,P \}$ be a triple of subschemes of $F$ as in
proposition ~\ref{1.1}. Suppose that
\begin{enumerate}
\item\label{a} $\mbox{\rm H}^{1} (R, {\mathcal O}_{R}(Z+P-F)) = 0$; and
\item\label{b} the map
$\mbox{\rm H}^{0} ({\mathcal O}_{R} (Z+P-F)) \otimes {\mathcal O}_{R}
\rightarrow {\mathcal O}_{Z}$ induced by $\gamma$ is
surjective.
\end{enumerate}
Then the set of curves $C \subset X$ with
$T(C) = \{Z,R,P \}$ is parametrized by a non-empty open subset
$U \subset \mbox{\rm H}^{0} (R, {\mathcal O}_{R} (Z+P-F))$ of dimension
$\deg Z + \chi {\mathcal O}_{R} (P-F)$.
\end{prop}
\begin{proof}
The triple $\{Z,R,P \}$ gives rise to the two exact rows of
diagram~(\ref{phi}).
Condition $(1)$ gives
$$
\mbox{\rm Ext}^{1}({\mathcal O}_{R}(-P),{\mathcal O}_{R}(Z-F)) \cong
\mbox{H}^{1}({\mathcal O}_{R}(Z+P-F))=0,
$$
hence there exists
$\phi_{0} \in \mbox{\rm Hom} (\ideal{P} \otimes {\mathcal O}_R, {\mathcal O}_R (Z-F))$
such that $\phi_{0} \circ \tau = \sigma$. Moreover, any such
morphism $\phi$ can be written $\phi = \phi_{0} + \alpha \circ \pi$ for
$\alpha \in \mbox{\rm Hom} ( {\mathcal O}_{R} (-P), {\mathcal O}_R (Z-F)) \subset
\mbox{\rm Hom} ( \ideal{P} \otimes {\mathcal O}_{R}, {\mathcal O}_{R} (Z-F))$.
Let ${\overline {\phi_{0}}}:{\mathcal O}_{R}(-P) \rightarrow {\mathcal O}_{Z}(-P)$ be the
morphism induced by $\phi_{0}$. The snake lemma shows that the morphism
$\phi_{0} + \alpha \circ \pi$ is surjective if and only if
${\overline {\phi_{0}}} + \gamma \circ \alpha$ is. Tensoring with
${\mathcal O}_{R}(P)$, we view $\alpha$ as a global section of
${\mathcal O}_{R}(Z+P-F)$. The images of these global sections under
$\gamma$ generate ${\mathcal O}_{Z}$ by condition $(2)$.
Since $Z$ is finitely supported, it follows
that for a general such section $s \in H^{0} {\mathcal O}_R (Z+P-F)$, the global
section $\gamma(s) + {\overline {\phi_{0}}}(1)$ is a unit in ${\mathcal O}_{Z,z}$
at each point $z \in Z$. Thus $\alpha$ with $\alpha(1)=s$ corresponds
to a surjective morphism $\phi$.
\end{proof}
\begin{example}
The hypotheses of Proposition~\ref{exist} are not necessary
for the existence of a curve $C$ with a given triple. Let
$F \subset \mathbb{P}^3$ be a smooth surface of degree $d = \deg F$ which contains a
line $L$. The effective divisor $X = 2F$ on $\mathbb{P}^3$ is a ribbon
over $F$ which contains all double lines $C$ supported on $L$. If
$p_{a}(C) \neq 1-d$, then $C \not \subset F$ and the triple $T(C)$
must take the form $\{Z,L,L\}$, where $Z$ is an effective
divisor of degree $d-1-p_a(C)$ on $L$ (see the verification after this example). Since ${\mathcal O}_{L}(Z+L-F) \cong
\omega_{L}(\deg Z + 4 - 2 \deg F)$ it is clear that
$H^{1}( {\mathcal O}_{L} (Z+L-F)) \neq 0$ and
$H^{0}( {\mathcal O}_{L} (Z+L-F))=0$ for $d \gg 0$, hence neither hypothesis of
\ref{exist} holds, yet the existence of $C$ shows that there are curves with
the triple $\{Z,L,L\}$. Replacing $L$ by any smooth curve gives
similar examples.
\end{example}
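As a quick check, the degree of $Z$ in the example above can be read off from the genus formula of Proposition \ref{1.1}: with $P=R=L$ we have $p_{a}(L)=0$ and $\deg_{L} {\mathcal O}_{L}(F) = \deg F = d$, so that $p_{a}(C) = 0 + 0 + d - \deg (Z) - 1$, that is, $\deg (Z) = d - 1 - p_{a}(C)$.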
\begin{remark}\label{practical} The following practical conditions
imply the hypotheses of Prop. \ref{exist}:
\begin{enumerate}
\item $\mbox{H}^{1} (R, {\mathcal O}_{R}(Z+P-F)) = 0$ and
${\mathcal O}_{R}(Z+P-F)$ is generated by global sections.
\item
$\mbox{H}^1 (R, {\mathcal O}_{R} (Z+P-F-H))=0 $ for some very ample
divisor $H$ on $R$.
\item
$\mbox{H}^{1} (R, {\mathcal O}_{R}(P-F))=0$.
\end{enumerate}
Indeed, the first condition is clearly stronger than the hypotheses
of~\ref{exist}.
The second condition implies the first by Mumford's regularity theorem.
If the third condition is satisfied, then for {\em any} effective
generalized divisor
$Z \subset R$ the exact sequence
$$0 \rightarrow {\mathcal O}_{R} (P-F) \rightarrow {\mathcal O}_{R} (Z+P-F)
\stackrel{\gamma}{\rightarrow} {\mathcal O}_{Z} \rightarrow 0$$
shows that $\mbox{H}^{1} (R, {\mathcal O}_{R} (Z+P-F))=0$ and that
$\gamma$ is surjective on global sections, which implies hypothesis
$(2)$ of~\ref{exist} since $Z$ has finite length.
\end{remark}
\begin{remark}\label{split}
Perhaps the simplest situation occurs when the restriction to $R$ of the
conormal sequence associated to $P \subset F$ (the top row of diagram
\ref{phi}) splits. This happens in case (3) of Remark \ref{practical}.
\begin{enumerate}
\item This splitting occurs if and only if
the triple $\{\emptyset,R,P\}$ arises from a curve $C$, since in
this case ${\mathcal O}_{Z}=0$ and $\sigma$ is the identity map. In
case $R=P$ is a general smooth space curve, we expect
that $\{\emptyset,R,P\}$ does {\em not} arise from a curve $C$, as this
would be equivalent to splitting of the normal bundle.
\item If $P$ is the intersection of a surface $F \subset {\mathbb{P}}^{n}$
with a hypersurface $H$ of degree $d$, then restricting the
natural map ${\mathcal O}_{{\mathbb{P}}^{n}}(-d) \rightarrow \ideal{P}$ to $R$ provides
such a splitting and the triple $\{\emptyset,P,P\}$ arises from
the curve $X \cap H$. If $F$ is a general surface of
degree $\geq 4$ in $\mathbb{P}^3$, then every curve $P \subset F$ arises
in this way since ${\mbox{Pic}} F \cong \mathbb{Z}$ with
${\mathcal O}_{F}(1)$ as generator \cite{gh}.
\item When the splitting {\it does} occur, there is no
obstruction to finding maps $\phi$ such that $\phi \circ \tau = \sigma$
and the existence of a surjective such $\phi$ is {\em equivalent}
to condition (2) of Proposition \ref{exist}.
\end{enumerate}
\end{remark}
\begin{example}
If $X=2H$ is a double plane in $\mathbb{P}^3$, then {\it every} triple
$\{Z,R,P\}$ with $Z$ Gorenstein arises from a curve $C \subset X$ by
Remark \ref{practical}(3) cf.~\cite{2h}.
\end{example}
In the next three examples, we consider behavior of triples
$\{Z,R,P\}$ for the double quadric $X=2Q \subset \mathbb{P}^3$, using the
standard isomorphism ${\mbox{Pic}} \; Q \cong \mathbb{Z} \oplus \mathbb{Z}$ \cite[II, 6.6.1]{AG}.
\begin{example}\label{quadric1}
If $P$ has type $(a,b)$ with $a,b > 0$ and $R < P$, then {\it every}
triple $\{Z,R,P\}$ with $Z$
Gorenstein arises from a curve $C \subset X$. Indeed, the exact sequence
$$0 \rightarrow {\mathcal O}_{Q}(P-R-Q) \rightarrow {\mathcal O}_{Q}(P-Q) \rightarrow {\mathcal O}_{R}(P-Q) \rightarrow 0$$
yields the vanishing of Remark \ref{practical}(3); a direct check is given after this example. Since Remark
\ref{split} applies here, it is common for the normal bundle of a curve
$P \subset Q$ to split when restricted to a {\em proper} subcurve $R$,
while this is quite rare when $R=P$ \cite{hulek}.
\end{example}
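For completeness, here is one way to see the vanishing used in Example \ref{quadric1}, by a K\"unneth computation on $Q \cong \mathbb{P}^{1} \times \mathbb{P}^{1}$: since ${\mathcal O}_{Q}(Q) \cong {\mathcal O}_{Q}(2,2)$, the sheaf ${\mathcal O}_{Q}(P-Q)$ has type $(a-2,b-2)$ with both entries $\geq -1$, hence $\mbox{H}^{1}({\mathcal O}_{Q}(P-Q))=0$; and if $(c,d)$ denotes the type of the nonzero effective divisor $P-R$, then ${\mathcal O}_{Q}(P-R-Q)$ has type $(c-2,d-2)$, whose entries are both $\leq -2$ only when $(c,d)=(0,0)$, which is excluded, so $\mbox{H}^{2}({\mathcal O}_{Q}(P-R-Q))=0$. The long exact cohomology sequence of the short exact sequence displayed in the example then gives $\mbox{H}^{1}({\mathcal O}_{R}(P-Q))=0$.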
\begin{example}\label{quadric2}
If $0 < a \leq b$ and $R=P$, then $\mbox{H}^{1}({\mathcal O}_{R}(P-Q)) \cong k$ and
condition \ref{practical}(3) fails.
\begin{enumerate}
\item If $a=b$, then $P$ is a complete intersection and the
triple $\{\emptyset,P,P\}$ arises from the complete intersection
of $X$ and a surface of degree $a$ containing $P$.
Not every triple $\{Z,P,P\}$ arises from a
curve, however: if $P$ has type $(1,1)$ and $\deg (Z) =1$,
the triple $\{Z,P,P\}$ could only be associated to a curve
of degree $4$ and genus $2$ by (\ref{1.1}), but there is no such
curve in $\mathbb{P}^3$ \cite[3.1 and 3.3]{genus}.
On the other hand, if $\deg (Z) \geq 2$ and $P$ is a smooth conic,
then there exists a curve $C \subset X$ with $T(C) = \{Z,P,P\}$ by
Proposition~\ref{exist}.
\item If $1=a < b$ or $(a,b)=(1,0)$ and $P$ is a {\it smooth} rational
curve, then the triple $\{Z,P,P\}$ arises from a curve if and only
if $ \deg (Z) > 0$ (condition (2) of ~\ref{exist} fails when
$\deg (Z) =1$, but any nonzero map $\phi$
in diagram (\ref{phi}) is surjective in this case). The normal bundle
splits (as described in \cite[Theorem 1]{hulek} over $\mathbb{C}$), but the
top row of diagram (\ref{phi}) does not.
\item If $1 < a < b$ and the pair $Z \subset P$ is sufficiently
general with $\deg (Z) > 0$, then the triple $\{Z,P,P\}$ arises
from a curve on $X$. Since the proof uses a degeneration argument,
we postpone it until the following section on families (see Example
\ref{degen} below).
\end{enumerate}
\end{example}
\begin{example}\label{quadric3}
Now suppose that $P$ has type $(0,b)$ for some $b > 0$.
\begin{enumerate}
\item Suppose that $R \subset P$ is a disjoint union of {\it reduced}
lines. Then applying Example \ref{quadric2}(2)
above to each line $L \subset R$, we see that the triple
$\{Z,R,P\}$ arises from a curve $C \subset X$ if and only if
$Z \cap L \neq \emptyset$ for each line $L \subset R$
if and only if
$\mbox{H}^{1}({\mathcal O}_{R}(Z+P-Q)) \cong \mbox{H}^{1}({\mathcal O}_{R}(Z-Q))=0$.
\item Let $R \subset P$ be a double line on $Q$. In this case
$Z$ need not be contained in the underlying reduced line.
In fact, if $L$ is the underlying support, then the triple
$Z \subset R \subset P$ satisfies the conditions of
\ref{practical}(2) if $2 \leq \deg (Z \cap L) \leq \deg (Z) -2$.
To see this, let $W = Z \cap L $ and let $Y$ be the residual scheme
to $W$ in $Z$. Since $R^{2}=0$, the sequence relating $Y$ and $W$ to
$Z$ takes the form
$$ 0 \rightarrow \ideal{Y,L} \rightarrow \ideal{Z,R} \rightarrow \ideal{W,L} \rightarrow 0$$
and applying ${\mathcal H}om_{{\mathcal O}_{R}}(-,{\mathcal O}_{R})$ yields the exact sequence
$$0 \rightarrow {\mathcal O}_{L}(W) \rightarrow {\mathcal O}_{R}(Z) \rightarrow {\mathcal O}_{L}(Y) \rightarrow 0$$
(one checks locally that ${\mathcal E}xt^{1}_{{\mathcal O}_{R}}({\mathcal O}_{L},{\mathcal O}_{R})=0$
and ${\mathcal H}om_{{\mathcal O}_{R}}({\mathcal O}_{L}(a),{\mathcal O}_{R}) \cong {\mathcal O}_{L}(-a)$ by
\cite[III,6.7]{AG}). Tensoring by ${\mathcal O}_{R}(-Q-H)$ and taking the
long exact cohomology sequence now gives the desired vanishing.
One can formulate more complicated criteria for higher order
multiple lines on $Q$.
\end{enumerate}
\end{example}
\section{Families}
In this section we study families of curves in $X$ and their
corresponding triples. We prove that, if $V$ denotes
the open subset of the flag Hilbert scheme $D$ consisting of
triples satisfying the conditions of Proposition \ref{exist},
the set of curves $C$ with $T(C) \in V$ is an open dense subset
of an affine fibre bundle over $V$ (\ref{structureofpi}).
Combining this with the structure of
the projection maps on Hilbert flag schemes (\ref{dominant}), we find that a
curve $C$ with triple $\{Z,R,P\}$ satisfying the first condition of
Proposition \ref{exist} is the flat limit of curves with triples
$\{Z^{\prime},R^{\prime},P^{\prime}\}$ for which $R^{\prime}$ is
general (\ref{limits}).
We first extend our constructions to the relative case. Let
$Sch_{k}$ be the category of locally Noetherian schemes over the
ground field $k$.
For $S \in Sch_{k}$, let $H(S)$ be the set of families of curves
$C \subset X \times S$ such that the sheaves ${\mathcal O}_{C},
{\mathcal O}_{C \cap (F \times S)}$ and
${\mathcal E}_C = {\mathcal E}xt^{1}_{{\mathcal O}_{F \times S}}
(\ideal{C \cap (F \times S)},{\mathcal O}_{F \times S})$
are all flat over $S$.
Then $H: Sch_{k} \rightarrow Sets$ defines a contravariant functor:
if $\phi:T \rightarrow S$ is a morphism in $Sch_{k}$, we define
$H(\phi): H(S) \rightarrow H(T)$ by sending a family $C \in H(S)$ to its
pull-back $C_{T} = C \times_{S} T$. We have to check that this is well
defined, i.e., that $C_T \in H(T)$. Here the point is that
${\mathcal E}_{C_T}$ is the pullback of ${\mathcal E}_{C}$: indeed, on the
fibres we have ${\mathcal E}xt^{2} (\ideal{C_{s}.F},{\mathcal O}_{F})=0$, so the theorem of
base change for the ${\mathcal E}xt$ functors~\cite{BPS,JS} tells us that ${\mathcal E}_C$
commutes with base change; that is, the natural map
$({\rm Id}_{F} \times \phi)^{*} {\mathcal E}_C \rightarrow {\mathcal E}xt^{1}_{{\mathcal O}_{F \times T}}
(\ideal{C_{T} \cap (T \times F)},{\mathcal O}_{F \times T})$
is an isomorphism.
We now claim that to a family of curves $C \in H(S)$ we can associate
a triple $T(C) = \{Z,R,P\}$ where $Z \subseteq R \subseteq P$ are
closed subschemes of $F \times S$, flat over $S$, such that for every
closed point $s \in S$ the triple $\{Z_{s},R_{s},P_{s}\}$ is
precisely the triple $T(C_{s})$.
To construct $P$, we need to show that the sheaf
$$
\mathcal{H}_C =
{\mathcal H}om_{{\mathcal O}_{F \times S}} (\ideal{C.F},{\mathcal O}_{F \times
S})
$$
is an invertible sheaf on $F \times S$. By definition of $H(S)$, we
know that ${\mathcal E}_C $ is flat
over $S$, and its formation commutes with base change. The theorem of
base change for the functors ${\mathcal E}xt^{i}$ implies that $ \mathcal{H}_C $
itself is flat over $S$ and commutes with base change.
In particular, the natural map
$$
\mathcal{H} \otimes k(s) \rightarrow {\mathcal H}om (\ideal{C_{s}.F},{\mathcal O}_{F})
$$
is an isomorphism for every closed point $s \in S$. Thus the
restriction of $\mathcal{H}$ to each fibre is an invertible sheaf,
hence so is $\mathcal{H}$.
By a standard argument \cite[7.4.1]{kollar}, the inclusion
$\ideal{C.F} \hookrightarrow {\mathcal O}_{F \times S}$ defines a global
section of $\mathcal{H}$ whose zero scheme is an effective Cartier
divisor $P \subset F \times S$, flat over $S$.
Now define $Z (C) \subset F \times S$ to be the residual scheme to $P$ in
$C \cap (F \times S)$, so that
$\ideal{C \cap (F \times S)}= \ideal{P}\ideal{Z}$. To see that $Z$ is
flat over $S$ we note ${\mathcal O}_{Z} (-P) \cong \ideal{P,C \cap (F \times S)}$
and use \cite[7.4.1]{kollar}.
Finally, define $R(C) \subset X \times S$ to be the residual
scheme to the intersection of $C$ with $F \times S$. The exact sequence
$$
0 \rightarrow
{\mathcal O}_{R}(-F \times S) \rightarrow {\mathcal O}_{C} \rightarrow
{\mathcal O}_{C.F} \rightarrow 0
$$
shows that $R$ is flat over $S$, and that for each $s \in S$ the fibre
$R_{s}$ is the residual scheme to the intersection of $C_{s}$ with
$F$. Since $Z_{s} \subseteq R_{s} \subseteq P_{s}$ for
each $s \in S$, we have $Z \subseteq R \subseteq P$.
Summing up, to any $C \in H(S)$ we can associate a triple $T(C) =
\{Z,R,P \}$ where $Z \subseteq R \subseteq P$ are closed subschemes
of $F \times S$, flat over $S$, and this construction is compatible with base
change. Thus we have a natural transformation $T: H \rightarrow D$
where $D$ is the functor that to a scheme $S$ associates flags
$Z \subset R \subset P \subset F \times S$, with $Z$, $R$, $P$ flat over
$S$, $Z$ zero dimensional, and $R \subset P$ effective Cartier divisors.
Both $H$ and $D$ are represented by quasiprojective schemes. This is
well known for $D$. Using Mumford's flattening stratification, we see
$H$ is representable by a subscheme of the Hilbert scheme of curves in
$X$. Since giving the Hilbert polynomials of $C$,
$C \cap (F \times S)$ and
${\mathcal E}_C$ is the same as giving the Hilbert polynomials of $Z$, $R$ and
$P$, $H$ is represented by the disjoint union of locally closed
subschemes $H_{z,r,p}$ of the Hilbert scheme of curves in $X$, where
$\{z,r,p\}$ vary in the set of possible Hilbert polynomials for
$Z$, $R$ and $P$. Furthermore, the natural transformation $T$ induces
a morphism of schemes $t: H_{z,r,p} \rightarrow D_{z,r,p}$.
\begin{thm} \label{structureofpi}
Let $V \subset D_{z,r,p}$ be the open subscheme corresponding to
triples $\{Z,R,P\}$ satisfying $H^{1}({\mathcal O}_{R}(Z+P-F))=0$. Then
the map $t^{-1}(V) \rightarrow V$ has the structure of an open immersion
followed by a projection from an affine bundle over $V$.
\end{thm}
\begin{proof}
Given a triple $\{Z,R,P\} \in D(S)$, we define
$$
{\mathcal O}_R (Z - F \times S) =
{\mathcal H}om_{{\mathcal O}_R} ( \ideal{Z,R} , {\mathcal O}_R (-F \times S)).
$$
If $s \in S$ is a closed point, we have ${\mathcal E} xt^{1}_{{\mathcal O}_{R_s}}
(\ideal{Z_s,R_s}, {\mathcal O}_{R_s} (-F) ) = 0$ because $R_s$ is
Gorenstein. It follows that
${\mathcal O}_R (Z - F \times S)$
is flat over $S$ and its formation
commutes with base change \cite{BPS,JS}, so that for every
morphism $g: T \rightarrow S$ in $Sch_{k}$ the pull back of ${\mathcal O}_R (Z - F
\times S)$ is ${\mathcal O}_{R_T} (Z_T - F \times T)$.
Hence there is a functor
$A=A_{z,r,p}$ that assigns to the scheme $S$ the set of flat families
of flags $Z \subset R \subset P \subset F \times S$ with Hilbert
polynomials $z$,$r$,$p$ along with a morphism
$\phi:\ideal{P} \otimes {\mathcal O}_{R} \rightarrow {\mathcal O}_{R} (Z-F \times S)$.
The exact sequence
\begin{equation} \label{m}
0 \rightarrow {\mathcal O}_{R} (- F \times S) \stackrel{\tau}{\rightarrow} \ideal{P}
\otimes {\mathcal O}_{R}
\stackrel{\pi}{\rightarrow} {\mathcal O}_{R} (- P) \rightarrow 0.
\end{equation}
and the sequence
\begin{equation} \label{l}
0 \rightarrow
{\mathcal O}_{R} (- F \times S) \stackrel{\sigma}{\rightarrow} {\mathcal O}_R (Z - F \times S)
\rightarrow
{\mathcal E} xt^{1}({\mathcal O}_{Z_S} , {\mathcal O}_{R_S} (- F \times S) ) \rightarrow 0
\end{equation}
obtained by dualizing
$$\exact{\ideal{Z,R}}{{\mathcal O}_{R}}{{\mathcal O}_{Z}}$$
are both compatible with base change, thus $A$ has a subfunctor
$M=M_{z,r,p}$ corresponding to
morphisms $\phi$ satisfying $\phi \circ \tau = \sigma$.
Now we claim that $H_{z,r,p}$ is an open subfunctor of $M$.
Indeed, given $C \in H(S)$, we may write a diagram analogous to diagram
(\ref{diagram}):
\begin{equation} \label{fourth}
\begin{CD}
&& && && 0&& \\
&&&& && @VVV \\
&&&& \ideal{C} \otimes {\mathcal O}_R @>>>
\ideal{Z,R} (-P)@>>> 0 \\
&&&& @VVV @VVV \\
0 @>>> {\mathcal O}_{R} (-F \times S) @>\tau>> \ideal{P} \otimes {\mathcal O}_R @>>>
{\mathcal O}_{R} (-P) @>>> 0\\
&&@VVV @VV{\phi}V @VVV \\
0 @>>> {\mathcal O}_{R} (-F \times S) @>\sigma>> {\mathcal L} @>>> {\mathcal O}_{Z} (-P) @>>> 0 \\
&&&& @VVV @VVV \\
&&&& 0 && 0&& \\
\end{CD}
\end{equation}
As in the proof of~\ref{1.1}, we obtain a morphism
$\psi: {\mathcal L} \rightarrow {\mathcal O}_R (Z-F \times S)$. These sheaves are flat over
$S$ and compatible with pull back. Since $\psi$ induces isomorphisms
$\psi_s$ on the fibres by the proof of~\ref{1.1},
$\psi$ is an isomorphism. Thus the diagram gives us a morphism
$\phi:\ideal{P} \otimes {\mathcal O}_{R} \rightarrow {\mathcal O}_{R} (Z-F)$ with
$\phi \circ \tau = \sigma$, and we obtain a natural transformation from
$H$ to $M$ that makes $H$ into a subfunctor of $M$. It is open because
it corresponds to the open condition that the map $\phi$ be surjective.
It remains to show that when we take inverse images over $V \subset D$,
the induced map $M_{V} \stackrel{t}{\rightarrow} V$ has the structure of an
affine bundle.
Let $U \subset V$ be an affine open set equipped with
universal flat flag
$$\begin{matrix}
Z & \subset & R & \subset & P & \subset & F \times U\\
&&&&&& \downarrow f \\
&&&&&& U.\\
\end{matrix}$$
Since $\mbox{H}^{1} ({\mathcal O}_{R_{u}}(Z_{u}+P_{u}-F))=0$ for each $u \in U$,
we deduce \cite[III, 8.5 and 12.9]{AG} that $R^{1} f_{*}
{\mathcal O}_{R}(Z+P-F)=0$ and hence that
$${\rm {Ext}}^{1}({\mathcal O}_{R}(-P),{\mathcal O}_{R}(Z-F)) \cong
H^{1}({\mathcal O}_{R}(Z+P-F))=0.$$
In particular, there exists $\phi_{0}:\ideal{P} \otimes {\mathcal O}_{R}
\rightarrow {\mathcal O}_{R}(Z-F)$ such that $\phi_{0} \circ \tau = \sigma$.
Now let $G: Sch_{U} \rightarrow Sets$ be the functor that to a scheme $T$
over $U$ associates
the set
$$ G(T)= \mbox{Hom}_{R_T} ( {\mathcal O}_{R_T} (-P_T),{\mathcal O}_{R_T} (Z_T - F
\times T)).$$
By Lemma \ref{bundle} below,
${\mathcal E} = f_{*} {\mathcal H} om_{{\mathcal O}_{R}} ({\mathcal O}_{R}(-P), {\mathcal O}_{R}(Z-F))$ is
locally free on $U$, and $G$ is represented by the geometric vector
bundle $B \stackrel{p}{\rightarrow} U$ whose sheaf of sections is ${\mathcal E}$.
Thus there is a universal map
$\alpha:{\mathcal O}_{R_{B}}(-P_{B}) \rightarrow {\mathcal O}_{R_{B}}(Z_{B}-F)$
on the pullback of the universal flag to $B$. We now show that
the pair $(B,\phi=p^{*}(\phi_{0})+\alpha \circ \pi)$ represents
$M_{U}$.
To this end, let $S$ be a scheme,
$Z_{S} \subset R_{S} \subset P_{S} \subset F \times S$ be a flag
corresponding to a map $h: S \rightarrow D$ that factors through $U$, and
$\psi:\ideal{P_{S}} \otimes {\mathcal O}_{R_{S}} \rightarrow {\mathcal O}_{R_{S}} (Z-F)$
be a map satisfying
$\psi \circ \tau_{S} = \sigma_{S}$. By construction the map
$\psi - h^{*}(\phi_{0})$ is the image of a map
in $\mbox{Hom} ({\mathcal O}_{R_{S}} (-P_{S}), {\mathcal O}_{R_{S}} (Z_{S}-F_{S}))$,
hence the universal property of $B \rightarrow U$ yields a unique lifting
${\tilde h}:S \rightarrow B$ of $h$. Moreover, it is clear from
construction that $\psi = {\tilde h}^{*}(\phi)$. This shows that
$(B,\phi)$ represents $M_{U}$, finishing the proof.
\end{proof}
The following lemma, which we used in the above proof, is an immediate
consequence of the theorems of base change
for cohomology and for the ${\mathcal E}xt$ functors.
\begin{lemma}\label{bundle}
Let $f: R \rightarrow U$ be a morphism of locally Noetherian schemes
over $k$, and let ${\mathcal F}$, ${\mathcal G}$ be coherent sheaves on $R$. Let
$G=G_{{\mathcal F},{\mathcal G}}: Sch_{U} \rightarrow Sets$ be the contravariant
functor that to a locally Noetherian $U$-scheme $T$ associates the set
$$
G(T) = \mbox{\em Hom}_{R_T} ( {\mathcal F}_T,{\mathcal G}_T)
$$
where $R_T$, ${\mathcal F}_T$, ${\mathcal G}_T$ are the base extensions to $T$. Suppose
that $f$ is projective and flat, and ${\mathcal F}$,${\mathcal G}$ are flat over $U$.
Furthermore, suppose that for every point $u \in U$:
\begin{enumerate}
\item
${\mathcal E} xt^{1}_{{\mathcal O}_{R_u}} ( {\mathcal F}_u, {\mathcal G}_u ) = 0$;
\item
$\text{\em H}^{1} (R_u, {\mathcal H} om_{{\mathcal O}_{R_u}} ( {\mathcal F}_u, {\mathcal G}_u ) ) = 0$.
\end{enumerate}
Then the sheaf $\, {\mathcal E} = f_{*} {\mathcal H} om_{{\mathcal O}_{R}} ( {\mathcal F}, {\mathcal G} )$ is
locally free on $U$, and $G$ is represented by the geometric vector
bundle over $U$ whose sheaf of sections is ${\mathcal E}$.
\end{lemma}
\begin{corollary} \label{comps1}
Let $Y$ be an irreducible component of $D_{z,r,p}$ and let $U \subset Y$ be
the open subset consisting of triples $\{Z,R,P\}$ for which
$H^{1}({\mathcal O}_{R}(Z+P-F))=0$. If $t^{-1}(U)$ is nonempty,
then ${\overline {t^{-1}(U)}}$ is an irreducible component of $H_{z,r,p}$.
\end{corollary}
\begin{proof}
From the structure of $t$ given in Theorem \ref{structureofpi},
${t^{-1}(U)} \subset H_{z,r,p}$ is an irreducible open subset of
${t^{-1}(Y)}$.
Let $W$ be an irreducible component of $H_{z,r,p}$ containing
${\overline {t^{-1}(U)}}$. Then $t(W)$ is irreducible and contains
a nonempty open subset of $U$ ($t|_{t^{-1}(U)}$ is an open map
by \ref{structureofpi}),
hence $Y= {\overline U} \subset \overline{t(W)}$.
Since $Y$ is an irreducible component, we must have
$Y=\overline{t(W)}$, hence $W \subset {t^{-1}(Y)}$.
It follows that $t^{-1}(U) = t^{-1}(U) \cap W$ is a nonempty open
subset of $W$ and $W = {\overline {t^{-1}(U)}}$.
\end{proof}
\begin{remark}
Note that $t^{-1}(V)$ (resp. $t^{-1}(U)$) may be empty in Theorem
\ref{structureofpi} (resp. Cor. \ref{comps1}), as in the case of the smooth
conic of type $(1,1)$ on the quadric surface and $\deg Z = 1$ (Example
\ref{quadric2}(1)). These sets are guaranteed to be nonempty if
there exists a triple $\{Z,R,P\}$ in $V$ (resp. $U$) satisfying
condition two of Proposition \ref{exist}.
\end{remark}
To use Corollary~\ref{comps1}, we need to understand the Hilbert scheme of
flags $D_{z,r,p}(F)$. Now $D_{z,r,p}$
breaks up as the disjoint union of closed subschemes $D_{z,\xi,\eta}$ where
$\xi$ (resp. $\eta$) varies in the set of numerical equivalence
classes of divisors
in $F$ with Hilbert polynomial $r$ (resp. $p$).
We have a decomposition
$$D_{z,\xi,\eta} \cong D_{z,\xi} \times H_{\eta - \xi}$$
where $D_{z,\xi}$ denotes the Hilbert scheme of flags $Z \subset R \subset F$,
with $Z$ zero dimensional of degree $z$, and $R$ an effective divisor of class
$\xi$, and $H_{\eta - \xi}$ is the Hilbert scheme of effective divisors in $F$
of class $\eta - \xi$; this is because we can tack on the effective divisor $P-R$
after choosing the flag $Z \subset R$. The following lemma helps to
identify the irreducible components of the Hilbert flag scheme.
\begin{lemma}\label{dominant}
Let $q:D_{z,\xi} \rightarrow H_{\xi}$ be the projection.
Then $q$ is surjective and maps generic points of $D_{z,\xi}$ to
generic points of $H_{\xi}$.
\end{lemma}
\begin{proof}
The argument is due to Brun and Hirschowitz \cite[3.2]{brun-hirsch}.
Since $q$ is proper and surjective, it is enough to show that, if
$A$ is an irreducible component of $ D_{z,\xi}$ and $B$ is an
irreducible component of
$H_{\xi}$ that contains $q(A)$, then $B= q(A)$.
Let $M = {\mbox{Hilb}}_{z}(F)$. $D_{z,\xi}$ is constructed as the scheme of zeros
of a global section of a rank $z$ vector bundle on $M \times H_{\xi}$
\cite{kleppe,sernesi}.
Thus the codimension of $D_{z,\xi}$ in $M \times H_{\xi}$ is $\leq z$
at each point.
In particular, the irreducible component $A$ has
dimension at least $\dim B + z$.
On the other hand, let $J \subset B$ denote the image
of $A$. The fibre over any fixed curve $Y \in B$ has dimension
$\leq z$ by the theorem of Brian\c con \cite{briancon,iar} which
describes the punctual Hilbert scheme. It follows that
$$\dim B + z \leq \dim A \leq \dim J + z,$$
hence these are equalities and $J=B$.
\end{proof}
\begin{remark}
If $Z$ is Cartier on $R$, it follows from deformation theory
that the map $q$ of Lemma \ref{dominant} is smooth at the point $(Z,R)$ of $D$,
because $H^1({\mathcal N}_{Z,R})=0$.
\end{remark}
\begin{remark}\label{irred}
If $B \subset H_{\xi}$ is an irreducible component whose general
member is a smooth connected curve, then $q^{-1}(B)$ is an
irreducible component of $D_{z,\xi}$ (\cite[4.3]{2h}). Indeed, the
irreducible components of $D_{z,\xi}$ contained in $q^{-1}(B)$ map
dominantly to $B$, but the general fibre of $q$ is irreducible, so there
is only one such component.
\end{remark}
\begin{example}\label{ex1}
(1) If $F = \mathbb{P}^2$, the class of a divisor is determined by its degree $d$.
If $B={\mathbb{P}} H^{0}({\mathcal O}_{\mathbb{P}^2}(d))$, then $D_{z,d} = q^{-1}(B)$ is
irreducible by Remark \ref{irred}. It now follows from
Corollary \ref{comps1} and Remark \ref{practical}(3) that the schemes
$H_{z,r,p}$ are irreducible. In fact, their closures are precisely the
irreducible components of $H_{d,g} (X)$ \cite[5.1]{2h}.
\end{example}
\begin{example}\label{ex2}
If $F=Q \subset \mathbb{P}^3$ is the smooth quadric surface,
then the numerical equivalence class of a divisor is determined
by its bidegree $(a,b)$ \cite[II,6.6.1]{AG} and $H_{(a,b)} = |{\mathcal O}_{Q}(a,b)|$
is a projective space.
If $a$ and $b$ are both positive, then the general element of $H_{a,b}$
is smooth and irreducible, hence $D_{z,(a,b)}$ is irreducible by
Remark \ref{irred}.
If $a=0$ and $b > 0$, then the general element in $H_{(a,b)}$ is a
disjoint union of $b$ lines and $D_{z,(a,b)}$ has irreducible components
corresponding to various partitions of $z$ as a sum of $b$
non-negative integers, depending on how the zero-dimensional scheme
$Z$ is distributed among the generic lines in the family. In
particular, $q^{-1}(H_{(a,b)})$ is not irreducible unless $z \leq 1$
or $b=1$.
\end{example}
We now prove that if $T(C)=\{Z,R,P\}$ satisfies
$H^{1}({\mathcal O}_{R}(Z+P-F))=0$, then a general deformation
of $R$ lifts to a deformation of $C$.
\begin{thm}\label{limits}
Let $C \subset X$ be a curve with triple $T(C)=\{Z,R,P\}$
such that $H^{1}({\mathcal O}_{R}(Z+P-F))=0$.
Suppose that $B$ is an irreducible
component of $H_{r} (F)$ containing $R$. Then there is an irreducible
component $W$ of $H_{z,r,p}$ containing $C$ such that the natural
map $H_{z,r,p} \rightarrow H_{r}$ induces a dominant map $W \rightarrow B$.
\end{thm}
\begin{proof}
By Lemma~\ref{dominant} there is an irreducible component
$X \subset D_{z,r}$ containing $(Z,R)$ such that
$q(X)=B$. Since
$D_{z,\xi,\eta} \cong D_{z,\xi} \times H_{\eta-\xi}$ (here $\eta$ and $\xi$
are the numerical equivalence classes of $P$ and $R$; see discussion
following Cor. \ref{comps1}), we obtain an
irreducible component $Y = X \times K$ of $D_{z,\xi,\eta}$ containing
$T(C)$ and mapping dominantly to $B$ for a suitable irreducible
component $K \subset H_{\eta-\xi}$. Letting $U \subset Y$ be the
open set of triples for which the vanishing occurs, $U$ is dense in $Y$
and the generic point of $U$ maps to the generic point of $B$. By
Lemma~\ref{comps1} $W=\overline{t^{-1}(U)}$ is an irreducible
component of $H_{z,r,p}$, and by construction the generic
point of $t^{-1} (U)$ maps to the generic point of $B$.
\end{proof}
\begin{example}\label{thick}
The conclusion of Theorem \ref{limits} fails for a general thick
$4$-line $C$ of genus $g$ on the double quadric $X=2Q$ in $\mathbb{P}^3$.
Recall that a {\it thick} $4$-line is a curve of degree $4$ supported
on a line $L$ and containing the first infinitesimal neighborhood $L^{(2)}$
\cite{banica}.
We claim that such a curve is {\bf not} a flat limit of disjoint unions
of double lines on $X$.
To see this, we first note that the family of
double lines of genus $g_{1}$ with fixed support is irreducible of
dimension $1-2g_{1}$ by \cite[1.6]{nthree}. Since the lines on $Q$
form a one-dimensional family, the disjoint unions of two double lines
of genera $g_{1}$ and $g_{2}$ form a family of dimension
$4-2g_{1}-2g_{2}=2-2g$.
On the other hand, the thick $4$-lines on fixed support $L$ are determined
by surjections in
$${\rm {Hom}} (\ideal{L^{(2)}}, {\mathcal O}_{L}(-g-1)) \cong
{\rm {Hom}} ({\mathcal O}_{L}(-2)^{3}, {\mathcal O}_{L}(-g-1)) \cong
H^{0}({\mathcal O}_{L}(-g+1)^{3})$$
by \cite[$\S 4$]{banica}, hence these form an irreducible family of
dimension $5-3g$. We are interested in the subset of
those which send the equation of $X$ to zero. If $L = \{x=y=0\}$
and $Q = \{xz-yw=0\}$, then $X = \{x^{2}z^{2}-2xyzw+y^{2}w^{2}=0\}$
and hence the thick $4$-lines with support $L$ lying on $X$ correspond
to the triples
$\{(a,b,c) \in H^{0}({\mathcal O}_{L}(-g+1)^{3}) : az^{2}-2zwb+cw^{2}=0\}$.
These form a vector subspace of codimension $-g+4$
(provided char $k \neq 2$), hence the family has dimension $1-2g$.
Varying the support line $L$ on $Q$, we obtain a family of dimension $2-2g$
and conclude that the general thick $4$-line $C$ cannot be the limit
of a family whose general member is a disjoint union of two double lines.
\end{example}
\begin{example}\label{degen} In Example \ref{quadric2}(3) we claimed that if
$X = 2Q \subset \mathbb{P}^3$ is the double quadric, then the general triple
$\{Z,P,P\}$ arises from a curve on $X$ if $\deg Z > 0$ and $P$ has
type $(a,b)$ with $1 < a < b$. We now explain why.
Let $C$ be a smooth rational curve of type $(1,b-a+1)$ on $Q$ and
$Z \subset C$ a divisor with $\deg Z > 0$.
By Example \ref{quadric2}(2), $H^{1} ({\mathcal O}_{C} (Z+C-Q))=0$ and the triple
$\{Z,P,P\}$ arises from a curve ${\tilde C}$ on $X$. If $H$ is a
general hypersurface of degree $a-1$, then $H \cap {\tilde C}$
consists of $(a-1)(b-a+2)$ double points and $H \cap Z = \emptyset$.
Letting ${\tilde E}=H \cap X$, we have $T({\tilde E})=\{\emptyset,E,E\}$
where $E$ is a divisor on $Q$ of type $(a-1,a-1)$ (see \ref{quadric2}(1)).
The triple for ${\tilde C} \cup {\tilde E}$ has form
$\{{\tilde Z},C \cup E,C \cup E\}$, but $Z \subset {\tilde Z}$ by
local considerations and the genus formula forces $Z = {\tilde Z}$,
hence $T({\tilde C} \cup {\tilde E})=\{Z,C \cup E,C \cup E\}$.
Since $H^{1} ({\mathcal O}_{C} (Z+C-Q))=0$, when we tensor the exact sequence
$$0 \rightarrow {\mathcal O}_{C}(C) \rightarrow {\mathcal O}_{C \cup E}(C+E) \rightarrow {\mathcal O}_{E}(C+E) \rightarrow 0$$
by ${\mathcal O}_{C \cup E}(Z-Q)$ we see that
$H^{1} ({\mathcal O}_{C \cup E} (Z+C+E-Q))=H^{1} ({\mathcal O}_{P} (Z+P-Q))=0$. Note here that
$H^{1} ({\mathcal O}_{E}(Z+C+E-Q))=0$ via the exact sequence
$$0 \rightarrow {\mathcal O}_{E}(Z+E-Q) \rightarrow {\mathcal O}_{E}(Z+E+C-Q) \rightarrow {\mathcal O}_{E \cap
C}(Z+E+C-Q) \rightarrow 0.$$
We can now apply Theorem \ref{limits} and its proof to
${\tilde C} \cup {\tilde E}$. By Remark \ref{irred}, $B = H_{(a,b)}$ is
irreducible as is $D_{z,r} \cong D_{z,r,p}$, hence the
general triple $\{Z,P,P\}$ with $\deg Z > 0$ and $P$ of type $(a,b)$
arises from a curve.
\end{example}
\begin{example}\label{3.3}
Let $W$ be a quasi-primitive triple line of type $(0,b)$ in $\mathbb{P}^3$ for
some $b \geq 0$.
Then the underlying double line $D$ necessarily lies on a smooth
quadric surface $Q$ \cite[1.5]{nthree} and hence $W$ lies on the
double quadric $X=2Q$.
The associated triple is $T(W)=\{Z,L,D\}$, where $L$ is the support of $W$
and $Z \subset L$ is a divisor of degree $b+2$ by the genus formula
of Proposition \ref{1.1} ($g(W)=-2-b$ by \cite[2.3a]{nthree}).
If $H$ denotes the hyperplane divisor, then
$$\mbox{H}^{1}({\mathcal O}_{L}(Z+D-Q-H)) \cong \mbox{H}^{1} ({\mathcal O}_{L}(b-1))=0$$
since $b \geq 0$ and Remark \ref{practical}(2) applies.
We deduce from Theorem \ref{limits} that $W$ is the
limit of a family of curves on $2Q$ whose general member is the
disjoint union of a line and a double line. This generalizes the
deformation used in the proof of \cite[3.3]{nthree}.
\end{example}
\begin{example}\label{4lines}
This is the example that inspired the present paper. Let $R=P$ be a
double line $2L$ on the smooth quadric surface $Q \subset \mathbb{P}^3$.
Let $c \geq b \geq 0$ be integers and let $Z \subset R$ be a divisor
consisting of $c-b$ simple points and $b+2$ double points which are
not contained in $L$. One can show \cite[3.2]{NS} that the triple
$\{Z,R,P\}$ arises from a general quasiprimitive 4-line $C$ of type
$(0,b,c)$. Since $Z$ contains $ \geq 2$ double points not
contained in $L$, the sufficient condition of Example
\ref{quadric3}(2) holds; thus the conditions of Proposition
\ref{exist} hold for the triple $\{Z,R,P\}$ and
we can apply Theorem \ref{limits} to see that $C$ is in the closure
of some family of disjoint unions of double lines, since the general member of
$|{\mathcal O}_{Q}(0,2)|$ is a disjoint union of two lines.
\end{example}
\end{document}
\begin{document}
\title{Quantum Color-Coding Is Better}
\author{Joshua Von Korff$^3$}
\author{Julia Kempe$^{1,2,4}$}
\affiliation{Departments of Chemistry$^1$, Computer
Science$^2$, and Physics$^3$, University of California, Berkeley, CA 94720\\
$^4$CNRS-LRI, UMR 8623, Universit\'e de Paris-Sud, 91405 Orsay, France }
\date{\today}
\begin{abstract}
We describe a quantum scheme to ``color-code'' a set of objects in order to record which one is which. In the classical case, $N$ distinct colors are required to color-code $N$ objects. We show that in the quantum case, only $\frac{N}{e}$ distinct ``colors'' are required, where $e \approx 2.71828$. If the number of colors is less than optimal, the objects may still be correctly distinguished with some success probability less than $1$.
We show that the success probability of the quantum scheme is better than the corresponding classical one and is information-theoretically optimal.
\end{abstract}
\pacs{}
\maketitle
\section{Introduction} \label{Section::Introduction}
We will describe a quantum scheme for ``color-coding'' a set of objects in order to record which one is which. That is, we want to be able to tell which is the first object, which is the second, and so on, by looking only at the ``color-code'' quantum labels on the objects.
First, we consider a few examples to clarify the problem. The classical version of color-coding can be stated as follows: suppose Alice has $N$ identical boxes. Inside each box, she writes an integer between $1$ and $N$, using no integer twice. Alice wants to send the boxes to Bob and have him guess which number is in which box. Alice helps Bob using a classical color-code: she paints a colored dot (say, red or green) on the outside of each box. Alice cannot control the order in which Bob receives the boxes, or mark the boxes in any other way. In other words, Alice is sending the boxes through a classical channel that applies an unknown permutation, and Bob is to guess which permutation was applied. What procedure should Alice and Bob follow to maximize the probability that Bob will guess the permutation correctly?
This problem is an instance of process tomography: reverse engineering an operation (a permutation in this case) by examining its effects on an initial state.
For $N = 2$ with two colors, Alice need only paint a red dot on box $1$, and a green dot on box $2$. Bob can then state the correct order with perfect certainty. In general, Alice needs $N$ distinct colors if Bob is to distinguish $N$ boxes with perfect accuracy. In the quantum case, however, we will prove that Alice only needs $\frac{N}{e}$ different colors, where $e \approx 2.71828$ is the base of the natural logarithm.
We analyze the classical case first. Alice is to choose an initial color sequence such as $\psi = $ ``Red Red Green.'' The first color in this list corresponds to box $1$, and so on. Alice may choose $\psi$ either deterministically or randomly, but randomness never helps her (by concavity \cite{Vonkorff:04}).
Given a $\psi$ with $n$ red dots, the number $n$ is conserved by the permuting channel, so Bob can only receive $N \choose n$ distinct messages. Then the success probability is at most ${N \choose n}/N! \le {N \choose \frac{N}{2}}/N! = (N! / (\frac{N}{2}!)^2) / N! = 1/ (\frac{N}{2}!)^2$.
To achieve this maximum, Alice could label boxes $1, \ldots \frac{N}{2}$ with red dots, and boxes $\frac{N}{2} + 1, \ldots N$ with green dots. Bob would then have to guess the ordering within the red set and within the green set.
If Alice is allowed to use $d$ different colors instead of just $2$, her optimal strategy is to label an equal number of boxes with each color, and her success probability is $1/{(\frac{N}{d}!)}^d$ (with slight variations if $d$ does not divide $N$).
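For concreteness, these classical numbers are easy to tabulate. The short Python sketch below is purely illustrative and not part of the original analysis; it evaluates the success probability $1/(n_1! \cdots n_d!)$ for the most even split of the $N$ boxes into $d$ colour classes, which minimizes the product of factorials, and for $N=3$, $d=2$ it reproduces the value $\frac{1}{2}$ used in the next section.
\begin{verbatim}
from math import factorial

def classical_success(N, d):
    """Optimal classical success probability: label the N boxes with d colours
    as evenly as possible; Bob must guess the ordering inside each colour
    class, so the probability is 1 / (n_1! * ... * n_d!)."""
    base, extra = divmod(N, d)
    sizes = [base + 1] * extra + [base] * (d - extra)
    prob = 1.0
    for n in sizes:
        prob /= factorial(n)
    return prob

for N, d in [(3, 2), (4, 2), (6, 3), (10, 5)]:
    print(N, d, classical_success(N, d))   # N = 3, d = 2 gives 0.5
\end{verbatim}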
\section{Quantum Colors on Three Objects} \label{Section::Quantum}
Now let's consider the quantum version. Instead of labelling boxes with classical colors (red or green), Alice labels them with quantum spins that can point $\ket{\uparrow}$ or $\ket{\downarrow}$. As a starting example, suppose there are $N = 3$ boxes.
Alice can ``color'' the boxes with any quantum state $\ket{\Psi} \in (\mathbb{C}^2)^{\otimes 3}$, including entangled states. When Alice initializes the state, the first copy of $(\mathbb{C}^2)$ corresponds to box number $1$, and so on. As in the classical case, Alice may as well choose $\ket{\Psi}$ deterministically.
Then Bob receives a state $\Gamma(\sigma) \ket{\Psi}$, where $\sigma$ is a random permutation, and $\Gamma(\sigma)$ is the unitary operator that permutes the $3$ spins via $\sigma$. Bob wants to perform some measurement on this state that allows him to deduce $\sigma$.
We want to know: can Alice improve on the classical
protocol, perhaps by entangling the $3$ quantum systems? Remember that in the classical case, the states that Bob can receive all have the same number of red boxes. This limits the distinguishability of the received states. But in the quantum case, Alice can use a signal that is in a superposition of several different numbers of red boxes. So the classical limitation may no longer hold.
In the classical case, the optimal protocol lets Bob guess the correct permutation with probability $\frac{1}{2}$. To understand the quantum case, we have to consider the irreducible representations of the action of the permutation group $S_3$ on $V = (\mathbb{C}^2)^{\otimes 3}$. That is, we must divide $V$ as finely as possible into subspaces that are preserved by $\Gamma(\sigma)$ for all $\sigma$. The vector space is $8$-dimensional, and there are $6$ irreducible representations: $4$ one-dimensional and $2$ two-dimensional. The one-dimensional representations are the spans of the vectors $\ket{\uparrow \uparrow \uparrow}, \ket{\uparrow \uparrow \downarrow} + \ket{\uparrow \downarrow \uparrow} + \ket{\downarrow \uparrow \uparrow}, \ket{\uparrow \downarrow \downarrow} + \ket{\downarrow \uparrow \downarrow} + \ket{\downarrow \downarrow \uparrow}$, and $\ket{\downarrow \downarrow \downarrow}$. The first two-dimensional representation is spanned by two vectors, $\ket{1, 1}$ and $\ket{1, 2}$. Define $\ket{1, 1} \equiv \frac{1}{\sqrt{3}} (\ket{\uparrow \downarrow \downarrow} + e^{2 \pi \imath/3} \ket{\downarrow \uparrow \downarrow} + e^{- 2 \pi \imath/3} \ket{\downarrow \downarrow \uparrow})$. Then the coefficients of $\ket{1, 2}$ are the complex conjugates of the coefficients of $\ket{1, 1}$. The second two-dimensional representation is spanned by $\ket{2, 1}, \ket{2, 2}$, which are like $\ket{1, 1}, \ket{1, 2}$ except that the directions of all spins are flipped.
Now, suppose Alice uses $\ket{\Psi} = \sqrt{1 / 5} \ket{\uparrow \uparrow \uparrow} + \sqrt{2 / 5} \ket{1, 1} + \sqrt{2 / 5} \ket{2, 2}$. This state has the interesting property that $|\bra{\Psi} \Gamma(\sigma) \ket{\Psi}| = \frac{1}{5}$ for all $\sigma \ne \epsilon$, where $\epsilon$ is the identity permutation. That is, all permutations $\Gamma(\sigma)\ket{\Psi}$ of the state $\ket{\Psi}$ are nearly distinguishable from each other,
which hints that it may be a useful state for our purposes.
It turns out that the six positive operators $\{\Gamma(\sigma) \ket{\Psi} \bra{\Psi} \Gamma^\dagger (\sigma)\, | \, \sigma \in S_3\}$, rescaled by $\frac{5}{6}$ and completed by the projector onto the orthogonal complement of their common five-dimensional support, define a POVM (a generalized measurement \cite{Nielsen:00,Preskill:98}) that Bob can use to guess the permutation $\sigma$ that was applied to $\ket{\Psi}$. The success probability is $\frac{5}{6}$, which is better than the classical $\frac{1}{2}$.
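These numbers are easy to check numerically. The sketch below is illustrative only; it assumes the basis conventions above (with $0$ encoding $\uparrow$ and $1$ encoding $\downarrow$), includes the $\frac{5}{6}$ rescaling just mentioned, and verifies that $|\bra{\Psi} \Gamma(\sigma) \ket{\Psi}| = \frac{1}{5}$ for $\sigma \ne \epsilon$, that the rescaled operators sum to the projector onto their five-dimensional support, and that Bob's average success probability equals $\frac{5}{6}$.
\begin{verbatim}
import numpy as np
from itertools import permutations

def idx(bits):              # basis index of |b0 b1 b2>, with 0 = up and 1 = down
    return bits[0] * 4 + bits[1] * 2 + bits[2]

def ket(bits):
    v = np.zeros(8, dtype=complex)
    v[idx(bits)] = 1.0
    return v

def perm_op(sigma):         # unitary sending the spin in slot i to slot sigma[i]
    G = np.zeros((8, 8))
    for b in np.ndindex(2, 2, 2):
        dst = [0, 0, 0]
        for i in range(3):
            dst[sigma[i]] = b[i]
        G[idx(dst), idx(b)] = 1.0
    return G

w = np.exp(2j * np.pi / 3)
k11 = (ket((0, 1, 1)) + w * ket((1, 0, 1)) + w.conj() * ket((1, 1, 0))) / np.sqrt(3)
k22 = (ket((1, 0, 0)) + w.conj() * ket((0, 1, 0)) + w * ket((0, 0, 1))) / np.sqrt(3)
psi = np.sqrt(1 / 5) * ket((0, 0, 0)) + np.sqrt(2 / 5) * k11 + np.sqrt(2 / 5) * k22

perms = list(permutations(range(3)))
print([round(abs(np.vdot(psi, perm_op(s) @ psi)), 6) for s in perms])
# 1.0 for the identity and 0.2 for the five non-trivial permutations

E = [(5 / 6) * perm_op(s) @ np.outer(psi, psi.conj()) @ perm_op(s).T for s in perms]
print(np.round(np.linalg.eigvalsh(sum(E)), 6))    # five eigenvalues 1, three 0
print(np.mean([np.vdot(perm_op(s) @ psi, E[i] @ perm_op(s) @ psi).real
               for i, s in enumerate(perms)]))    # 5/6
\end{verbatim}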
\section{Quantum Color Coding Theorem}
In general, Alice has $N$ boxes, and labels them with $d$-state quantum systems. Let $p(N, d)$ be the probability that Bob measures the permutation correctly for the optimal quantum protocol. We prove:
\begin{theorem}\label{Theorem::MainTheorem}
Let $r$ be a constant, $d=\lfloor r N \rfloor$. \\
1) If $r > \frac{1}{e}$ then $\lim_{N \rightarrow \infty} p(N, d) = 1$. \\ 2) If $r < \frac{1}{e}$ then $p(N, d) \sim \frac{d^N}{N!}$ as $N \rightarrow \infty$.
\end{theorem}
In particular we need only $\approx \frac{N}{e}$ quantum colors to order $N$ objects, a distinct improvement over the classical case! If we have fewer than $\frac{N}{e}$ colors, we still attain the information-theoretic maximal success probability, $(\mbox{ \# channel states})/(\mbox{ \# message states}) = d^N / N!$ \cite{Vonkorff:04}.
We prove \refthm{MainTheorem} using the following steps:
\begin{enumerate}
\item First we derive the measurement that Bob can make to determine the correct permutation, under some general assumptions.
\item Next, we maximize this measurement's success probability, and state it in terms of the dimensions and multiplicities of the irreducible representations of the permutation group $S_N$ on $(\mathbb{C}^d)^{\otimes N}$.
\item Finally, we prove that the success probability satisfies \refthm{MainTheorem}. This requires an in-depth look at the representation theory of the symmetric group.
\end{enumerate}
\begin{step}
Analyze the possibilities for Bob's measurement.
\end{step}
Our techniques to derive Bob's measurement are inspired by \cite{Massar:95, Gisin:99, Massar:00, Bagan:00, Peres:01, Peres:01b, Fiurasek:02}.
Bob's measurement is a POVM described by positive operators $\{E_\sigma\}, \sum_\sigma E_\sigma = Id$. Here, $\sigma$ indexes the measurement result, which is a permutation. Bob ``wins'' if he measures the correct $\sigma$.
Then Bob's success probability, given the state $\Gamma(\sigma) \ket{\Psi}$, is the probability that his measurement result corresponds to the operator $E_\sigma$, which is $P(E, \sigma) = \bra{\Psi} \Gamma^\dagger(\sigma) E_\sigma \Gamma(\sigma) \ket{\Psi}$. Therefore, if Bob is given a random $\sigma$, his average success probability is $P_{\mbox{av}}(E) = \frac{1}{N!} \sum_\sigma \bra{\Psi} \Gamma(\sigma)^\dagger E_\sigma \Gamma(\sigma) \ket{\Psi}$.
Consider the new measurement operators $E'_{\sigma '} = \frac{1}{N!} \sum_\sigma \Gamma(\sigma)^\dagger E_{\sigma \circ \sigma '} \Gamma(\sigma)$, where $\sigma \circ \sigma '$ refers to group multiplication in $S_N$. These ``symmetrized'' operators are still measurement operators, and give the same success probability as before \cite{Vonkorff:04}. So we may as well assume that Bob uses such operators.
It is also straightforward to prove that the new operators $\{E'_{\sigma}\}$ satisfy the following useful property:
\begin{equation} \label{Equation::POVMconstraint}
\forall \sigma :\ \ E'_{\sigma } = \Gamma(\sigma)E'_{\epsilon} \Gamma(\sigma)^\dagger
\end{equation}
where $\epsilon$ is the identity permutation. This property simplifies our task, because all relevant positive operators can be deduced from $E'_\epsilon$. From now on, we drop the prime sign and assume that $E_\sigma$ satisfies \refeqn{POVMconstraint}.
The condition $\sum_\sigma E_{\sigma} = Id$ imposes stringent constraints on $E_\epsilon$. Using \refeqn{POVMconstraint},
\begin{equation}\label{Equation::POsSumToIdentity}
\sum_{\sigma} \Gamma(\sigma) E_\epsilon \Gamma(\sigma)^\dagger = Id.
\end{equation}
To analyze this constraint, we decompose the space $V=(\mathbb{C}^d)^{\otimes N}$ into irreducible subspaces of the unitary matrices $\Gamma(\sigma)$, corresponding to irreducible representations of $S_N$. Let $V = \bigoplus_{\rho, b} V^{(\rho, b)}$, where $\rho$ indexes the irreducible representation up to equivalence, and $b$ indexes copies of a given $\rho$.
Let $\Gamma^{(\rho, b)}(\sigma)$ be the projection of $\Gamma(\sigma)$ onto $V^{(\rho, b)}$. Then
\begin{eqnarray}
Id &=&\sum_{\sigma \in S_N} \Gamma(\sigma) E_\epsilon \Gamma(\sigma)^\dagger
\nonumber \\
&=&\sum_{\sigma} (\sum_{\rho, b} \Gamma^{(\rho, b)}(\sigma)) E_\epsilon (\sum_{\rho ', b '} \Gamma^{(\rho ', b ')}(\sigma)^\dagger)
\nonumber \\
&=&\sum_{\sigma, \rho, b, \rho ', b '} \Gamma^{(\rho, b)}(\sigma) E_\epsilon \Gamma^{(\rho ', b ')}(\sigma)^\dagger . \label{Equation::ConstraintOnE}
\end{eqnarray}
Next, we can select a basis within each irreducible space, indexed by $a$. That is, $\ket{\rho, b, a}$ are basis vectors for $V$, and the operator $\Gamma^{(\rho, b)}(\sigma)$ has matrix elements $(\Gamma^{(\rho, b)}(\sigma))_{a, a '}$. We choose the bases of $V^{(\rho,b)}$ such that $\forall \sigma, a, a '$ : $(\Gamma^{(\rho, b)}(\sigma))_{a, a '} = (\Gamma^{(\rho , b ')}(\sigma))_{a, a '}$, i.e. equivalent irreducible representations have the same matrix elements. We can analyze \refeqn{ConstraintOnE} adapting the orthogonality relation for matrix representations \cite{Cornwell:84} to our case:
\begin{eqnarray}\label{Equation::OrthogonalityTheoremEqual}
\sum_{\sigma} (\Gamma^{(\rho b)*}(\sigma))_{\alpha \beta} (\Gamma^{(\rho ' b ')}(\sigma))_{\gamma \kappa} &=&0 \quad \quad \mbox{if} \; \rho \neq \rho '\nonumber \\
\frac{1}{|G|} \sum_{\sigma} (\Gamma^{(\rho b)*}(\sigma))_{\alpha \beta} (\Gamma^{(\rho b ')}(\sigma))_{\gamma \kappa} &=& \delta_{\gamma \alpha} \delta_{\beta \kappa} / D_\rho,
\end{eqnarray}
where $D_\rho$ is the dimension of representation $\rho$.
At this point, we will assume that $E_\epsilon = \ket{\Phi} \bra{\Phi}$ for some state $\ket{\Phi}$, not necessarily normalized.
(This assumption will turn out to be a good one: such a POVM is sufficient to establish the result of \refthm{MainTheorem}.)
Now, we want to find the constraints on $\ket{\Phi}$ resulting from \refeqn{ConstraintOnE}. We can write $\ket{\Phi} = \sum_{ \rho ,b,a} C_{ \rho ,b, a} \ket{ \rho ,b ,a}$.
Applying \refeqn{ConstraintOnE} and \refeqn{OrthogonalityTheoremEqual}, after a lot of algebra (details in \cite{Vonkorff:04}), we obtain the condition
\begin{equation} \label{Equation::OrthoPhiComponents}
\forall \rho, b, b ' : \quad \sum_a C_{ \rho, b, a} C^*_{ \rho , b ', a} = \delta_{b b '} \frac{D_\rho}{N!}
\end{equation}
Hence the projections of $\ket{\Phi}$ into distinct but equivalent irreducible representations $V^{(\rho , b)}$ and $V^{(\rho , b ')}$ must be ``orthogonal'' in the sense of \refeqn{OrthoPhiComponents}. Since the dimension of the irreducible representation $\rho$ is $D_\rho$, this implies that, for a given $\rho$, $b$ can have at most $D_\rho$ different values. If there are more than $D_\rho$ copies of an irreducible representation $\rho$, the projection of $\ket{\Phi}$ on the remaining copies must be $0$. Let $W$ be the space spanned by all the $V^{(\rho ,b)}$ that have non-zero overlap with $\ket{\Phi}$. The $E_\sigma$ only span the space $W$. To make our measurement a POVM on the whole space $V$ we complete it with the operator $E_{V/W}=Id_{V/W}$.
Following \refeqn{OrthoPhiComponents}, we can write
\begin{equation} \label{Equation::PhiDecomposition}
\ket{\Phi} = \sum_{\rho, b | V^{(\rho, b)} \subseteq W} \sqrt{\frac{D_\rho}{ N!}} \ket{\Phi_{\rho b}}
\end{equation}
where $\ket{\Phi_{\rho b}}$ is the (normalized) component of $\ket{\Phi}$ in the $(b, \rho)$ subrepresentation. Without loss of generality we can select $\ket{\Phi_{\rho b}}$ to be a basis state $\ket{\rho,b,a^{\rho , b}}$ inside $V^{(\rho, b)}$, such that $a^{\rho ,b'}= a^{\rho , b}$ if and only if $b= b'$.
\begin{step}
With this measurement, what is the maximal success probability?
\end{step}
Now that we have specified a POVM $\{E_\sigma\}$, we want a signal state $\ket{\Psi} \in W$ that maximizes the success probability $P_{\mbox{av}}(E) = \frac{1}{N!} \sum_\sigma \bra{\Psi} \Gamma^\dagger(\sigma) E_\sigma \Gamma(\sigma) \ket{\Psi} = \frac{1}{N!} \sum_\sigma \bra{\Psi} E_\epsilon \ket{\Psi}$ (using \refeqn{POVMconstraint}). Therefore $P(E) = \bra{\Psi} E_\epsilon \ket{\Psi} = |\braket{\Psi}{\Phi}|^2$. This is maximized by the choice $\ket{\Psi} \propto \ket{\Phi}$, up to normalization.
Since $\ket{\Psi}$ is normalized and $\ket{\Phi}$ in general is not, we can write
\begin{equation}
\nonumber
P(E) = |\braket{\Phi}{\Psi}|^2 = \braket{\Phi}{\Phi}
= \sum_{b, \rho | V^{(b \rho)} \subseteq W} \frac{D_\rho}{N!}
= \frac{\mbox{dim } W}{N!}
\end{equation}
where the last steps use \refeqn{PhiDecomposition}. What, then, is the maximum possible dimension of $W$?
To begin with, we have $\dim V = \sum_\rho m_\rho D_\rho = d^N$, where $m_\rho$ is the multiplicity of the irreducible representation $\rho$, and $D_\rho$ is its dimension. If we could set $W = \oplus_{\rho, b} V^{(\rho, b)} = V$, then $\dim V = \dim W = \sum_\rho m_\rho D_\rho$. However, as discussed above, $W$ can include at most $D_\rho$ copies of any given $\rho$. So the maximum dimension of $W$ is $\sum_\rho \mbox{min }(m_\rho, D_\rho) D_\rho$. Therefore the maximum success probability is given by:
\begin{equation} \label{Equation::MaxProbSuccess}
P_{\max} = \frac{1}{N!} \sum_\rho \min (m_\rho, D_\rho) D_\rho
\end{equation}
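For small $N$ the right-hand side of \refeqn{MaxProbSuccess} can be evaluated directly. The following sketch is illustrative only and not part of the derivation; it lists the partitions of $N$, computes $D_\rho$ via the hook length formula and $m_\rho$ via the hook content formula for $(\mathbb{C}^d)^{\otimes N}$, and sums $\min(m_\rho, D_\rho) D_\rho / N!$. For $N=3$, $d=2$ it reproduces the value $\frac{5}{6}$ of the three-box example.
\begin{verbatim}
from fractions import Fraction
from math import factorial

def partitions(n, largest=None):            # partitions of n, weakly decreasing
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hooks(shape):                           # hook lengths of the Young diagram
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [[shape[i] - j + cols[j] - i - 1 for j in range(shape[i])]
            for i in range(len(shape))]

def dim_sym(shape):                         # D_rho, hook length formula
    prod = 1
    for row in hooks(shape):
        for h in row:
            prod *= h
    return factorial(sum(shape)) // prod

def mult(shape, d):                         # m_rho, hook content formula
    if len(shape) > d:                      # more than d rows: rho does not occur
        return 0
    m, hk = Fraction(1), hooks(shape)
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            m *= Fraction(d + j - i, hk[i][j])
    return int(m)

def p_max(N, d):
    return float(sum(Fraction(min(mult(s, d), dim_sym(s)) * dim_sym(s), factorial(N))
                     for s in partitions(N)))

for N, d in [(3, 2), (5, 2), (8, 3), (10, 4)]:
    print(N, d, p_max(N, d))                # N = 3, d = 2 gives 5/6
\end{verbatim}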
\begin{step}
It remains to evaluate the success probability and show that it is information-theoretically optimal.
\end{step}
We need to
take an in-depth look at the irreducible representations $\rho$ of $S_N$, their dimensions $D_\rho$, and their multiplicities $m_\rho$ in $V = (\mathbb{C}^d)^{\otimes N}$. As is well known in representation theory, the irreducible representations of $S_N$ are in one-to-one correspondence with partitions of $N$ as an (unordered) sum of integers.
Such partitions are drawn as Young diagrams, which are rows of boxes, where the total number of boxes is $N$, and each row has no more boxes than the row above it. The rows are ``left-justified'', i.e. the first boxes from each row form a column.
For the case $r > \frac{1}{e}$, $d = \lfloor r N \rfloor$, we want to prove (\refthm{MainTheorem}) that $P_{\max} \approx 1$ for large $N$. Our proof of this statement will involve the following steps:
\begin{enumerate}
\item If the columns of a Young diagram $\rho$ are short, then $D_\rho < m_\rho$ (where ``short'' has a precise meaning that will be defined in the proof).
\item Define the Plancherel measure $\mu_N$ of a diagram $\rho$ by $\mu_N(\rho) = \frac{D_\rho^2}{N!}$. Then the sum of the Plancherel measures of all Young diagrams with ``long'' columns
approaches $0$ for large $N$.
\item $\sum_\rho \mu_N(\rho) = 1$
\item Using \#2 and \#3, we deduce that $\sum_{\rho \in \{\rho_{short}\}} \mu_N(\rho) \approx 1$, where the sum is over all diagrams with short columns.
\item Using \refeqn{MaxProbSuccess} and \#1, we find that $P_{\max} \geq \frac{1}{N!} \sum_{\rho \in \{\rho_{short}\}} D_\rho^2 = \sum_{\rho \in \{\rho_{short}\}} \mu_N(\rho) \approx 1$.
\end{enumerate}
\begin{substep}
Young diagrams with short columns have $D_\rho < m_\rho$.
\end{substep}
To determine which of $D_\rho, m_\rho$ is smaller, we can calculate the ratio $D_\rho / m_\rho$, given by \cite{Fulton:91}:
\begin{equation} \label{Equation::DimMultRatio}
\frac{D_\rho}{m_\rho} = \frac{N!}{\prod_{i,j} (d - i + j)}
\end{equation}
where the product runs over the boxes $(i,j)$ (row $i$, column $j$) of the Young diagram.
\begin{lemma} \label{Lemma::ShortColumns}
Given any $A > 0$ and $r > \frac{1}{e}$, the following statement holds for sufficiently large $N$: If $d > r N$, then any Young diagram of $N$ boxes whose first column is shorter than $A \sqrt{N}$ must have $D_\rho < m_\rho$.
\end{lemma}
\begin{proof}
If the first column of the diagram is shorter than $A \sqrt{N}$, then all columns are shorter than $A \sqrt{N}$. So in \refeqn{DimMultRatio}, we must have $i < A \sqrt{N}$. Therefore the denominator is at least $(d - A \sqrt{N})^N
> (r- \frac{A}{\sqrt{N}})^N N^N$. For large enough $N$, $r - \frac{A}{\sqrt{N}} > \frac{1}{e}$. Let's pick an $\epsilon$ such that $\frac{d}{N} - \frac{A}{\sqrt{N}} > \frac{1}{e}(1 + \epsilon)$. Then the denominator of \refeqn{DimMultRatio} is greater than $(\frac{N}{e})^N (1 + \epsilon)^N$. Now, using Stirling's approximation, $N! \approx (\frac{N}{e})^N \sqrt{2 \pi N}$, which is less than $(\frac{N}{e})^N (1 + \epsilon)^N$ for sufficiently large $N$. Therefore the numerator is smaller than the denominator, and hence $D_\rho < m_\rho$ for sufficiently large $N$.
\end{proof}
\begin{substep}
The Young diagrams with long columns have small total Plancherel measure.
\end{substep}
Let $\lambda_1$ be the length of the first column. Then \cite{Kerov:03}:
\begin{equation}
\mu_N(\rho) \le e^{- 2 \lambda_1 (\log \frac{\lambda_1}{\sqrt{N}} - 1)}
\end{equation}
According to a theorem of Erd\H{o}s~\cite{Erdos:41}, the total number of diagrams of size $N$ is less than $e^{C \sqrt{N}}$, for some constant $C$. Therefore the total Plancherel measure of diagrams with $\lambda_1 \ge A \sqrt{N}$ is at most $e^{C\sqrt{N}} e^{-2 A \sqrt{N} (\log A - 1)}=e^{(C - 2 A (\log A - 1)) \sqrt{N}}$. If we choose some $A_0$ such that $ 2 A_0 (\log A_0 - 1) > C$, then this quantity goes to zero as $N \rightarrow \infty$. Therefore $\lim_{N \rightarrow \infty} \sum_{(\rho | \lambda_1 \ge A_0 \sqrt{N})} \frac{D_\rho^2}{N!} = 0$.
\begin{substep}
$\sum_\rho \mu_N(\rho) = 1$.
\end{substep}
It is well known that $\sum_\rho D_\rho^2 = |G|$, if the sum is taken over all irreducible representations $\rho$ of any finite group, and $|G|$ is the order of the group. In our case $|S_N| = N!$, so $\sum_\rho D_\rho^2 / N! = 1$.
\begin{substep}
$\sum_{\rho \in \rho_{short}} \mu_N(\rho) \approx 1$, where the sum is over all diagrams with short columns.
\end{substep}
Combining the results of the two previous steps, we obtain $\lim_{N \rightarrow \infty} \sum_{(\rho | \lambda_1 < A_0 \sqrt{N})} D_\rho^2 / N! = 1$.
\begin{substep} $P_{\max} \stackrel{ N \rightarrow \infty} {\longrightarrow} 1$.
\end{substep}
For large enough $N$, all Young diagrams with $\lambda_1 < A_0 \sqrt{N}$ will also have $D_\rho \leq m_\rho$, and
\begin{eqnarray*}
P_{\max}
& \geq & \sum_{\rho | D_\rho \leq m_\rho} \frac{D_\rho^2}{N!}
\geq \sum_{\rho | \lambda_1 < A_0 \sqrt{N}} \frac{D_\rho^2}{N!} \stackrel{ N \rightarrow \infty} {\longrightarrow} 1
\end{eqnarray*}
We have proved \refthm{MainTheorem} for $d > \frac{N}{e}$, so it remains to consider $d < \frac{N}{e}$. The proof is similar, so we merely hint at which steps are different. First, we replace \reflma{ShortColumns} with:
\begin{lemma} \label{Lemma::ShortRows}
Given any $A > 0$ and $r < \frac{1}{e}$, the following statement holds for sufficiently large $N$: If $d < r N$, then any Young diagram of $N$ boxes whose first row is shorter than $A \sqrt{N}$ must have $m_\rho < D_\rho$.
\end{lemma}
The proof is almost identical to that of \reflma{ShortColumns}.
Now, the Young diagrams with long rows have small total $\mu_{N, d}$ measure, where each diagram is given weight $\mu_{N,d}(\rho)=\frac{m_\rho D_\rho}{N!}$.
\cite{Kerov:03} states that for large enough $N$, if $\frac{d}{N} \rightarrow r$, we have
\begin{equation}
\mu_{N, d}(\rho) \le e^{- \tilde{\lambda}_1 (2 (\log \frac{\tilde{\lambda}_1}{\sqrt{N}} - 1) - \frac{1}{2 r})}
\end{equation}
where $\tilde{\lambda}_1$ is the length of the first row.
The rest of the proof is completely analogous to the preceding proof.
We find that the success probability is lower bounded by $\sum_{(\rho | \tilde{\lambda}_1 < \tilde{A}_0 \sqrt{N})} \frac{m_\rho D_\rho}{N!} \sim \frac{d^N}{N!}$ as $N \rightarrow \infty$.
Hence we have attained the information-theoretic bound for both $r < \frac{1}{e}$ and $r > \frac{1}{e}$, as promised.\\
\section{Concluding notes}\label{Section::Conclusion}
We have shown how quantum coding can give a distinct advantage in protecting information against a random permutation. Note that our results up to Step 3 are completely general and can be formulated for any group.
For instance, suppose we replace $S_N$ with $SO(3)$, the group of rotations in space; and we replace the $N$ $d$-state systems with $N$ $2$-state spins that can be rotated in space. Then we obtain the problem of transmitting a reference direction using $N$ quantum spins, which has been studied in \cite{Massar:95, Gisin:99, Massar:00, Bagan:00, Peres:01, Peres:01b, Fiurasek:02}, among other papers.
We hope that our techniques also provide a toolbox for quantum process tomography of other channels.
We thank J. Fern, A. Harrow and O. Regev for discussions and K.B. Whaley for support.
This work was sponsored by
DARPA
and the
Air Force Laboratory,
Air Force Material Command, USAF,
under Contract No. F30602-01-2-0524,
NSF
through Grant No. 0121555, ACI-SI 2003-n24 and RESQ IST-2001-37559.
\end{document}
\begin{document}
\begin{titlePage}
\end{titlePage}
\thispagestyle{plain}
\phantom{a}
\abstract{
\addcontentsline{toc}{section}{Abstract}
We show that affine coordinate subspaces of dimension at least two in Euclidean
space are of Khintchine type for divergence. For affine coordinate subspaces of dimension
one, we prove a result which depends on the dual Diophantine type of the base point of
the subspace. These results provide evidence for the conjecture that all affine subspaces of
Euclidean space are of Khintchine type for divergence. We also prove a partial analogue regarding the Hausdorff measure theory.
Furthermore, we obtain various results relating weighted Diophantine approximation and Dirichlet improvability. In particular, we show that weighted badly approximable vectors are weighted Dirichlet improvable, thus generalising a result by Davenport and Schmidt. We also provide a relation between non-singularity and twisted inhomogeneous approximation. This extends a result of Shapira to the weighted case.}
\addcontentsline{toc}{section}{Contents}
\tableofcontents
\addcontentsline{toc}{section}{Acknowledgements}
\afterPreface{
Now to the sentimental bits. Mathematics is an incredible success story within the history of humankind, being built on the ideas of (almost) uncountably many curious minds and spanning millennia as well as continents. I have been lucky to get a chance to add some of the most negligible bits to this story. However, even this smallest of contributions would not have been possible without the help of many people.
First I would like to thank Sanju and Evgeniy, for always keeping an open ear and an open mind, and for letting me work at my own speed. I have never been fast, but I have always tried to be thorough. A big thanks also to my co-authors Felipe and David, without whom this thesis would not look the same. Many others have led me to the path of mathematics or helped keeping me on it. I will never know if I could have made it through my undergraduate degree without the company of Lisa and Salome.
I will always be grateful to my parents and my sister for letting me grow up in an environment where everything was possible, for supporting me in any way they could and for being welcoming and surprisingly excited each time I came home to visit.
York has become a second home to me and this is mainly due to the people I have met here, be they colleagues, housemates or just other fellow lost souls. These include, but are not limited to: Spiros, James, Oliver \& Ellie, Ben, all the Daves, Demi (a special thanks for reading through this thesis), Mirjam, Vicky, the Italians, the pool \& snooker lot and Derek.
Whenever I went back to Switzerland, I tried (and failed) to visit all my dear old friends. Out of all those I would like to specially mention Paddy, Pagi and Janine. Thanks for being there through all the highs and lows.
Last, but by no means least, I would like to thank Henna.
}
\thispagestyle{plain}
\vspace*{10ex}
\noindent
\textit{``And the mercy seat is waiting\\ And I think my head is burning\\ And in a way I'm yearning\\ To be done with all this measuring of truth\\ An eye for an eye\\ And a tooth for a tooth\\ And anyway there was no proof\\ Nor a motive why"\\[2ex]}
- Nick Cave, 1988
\chapter{Introduction}\label{Ch:Introduction}
The most fundamental problem in Diophantine approximation is the characterisation of points in Euclidean space $\RR^n$ as to how well they can be approximated by rational points. Obviously, the set $\QQ^n$ is dense in $\RR^n$ and so for a given $\balpha=(\alpha_1,\dots,\alpha_n)$ in $\RR^n$ we can find infinitely many rational points contained in an arbitrarily small ball around $\balpha$. On the other hand, this requires considering rationals with arbitrarily large denominators, which gives rise to the idea of relating the quality of approximation to the size of the denominator. More formally, given $\balpha\in\RR^n$, we are looking for solutions $q\in \NN$ to the inequality
\begin{equation}\label{Eqn:Psi}
\parallel q\balpha\parallel < \psi(q),
\end{equation}
where
\begin{equation*}
\parallel \bx \parallel = \min\limits_{\bz\in\ZZ^n}\left\{|\bx-\bz|\right\} = \min\limits_{\bz\in\ZZ^n}\left\{\max\limits_{1\leq j\leq n}\{|x_j-z_j|\}\right\}
\end{equation*}
denotes the distance from a point $\bx\in\RR^n$ to the nearest integer $\bz=(z_1,\dots,z_n)\in\ZZ^n$ and $\psi:\RR\rightarrow\RR$ is a positive real-valued function. It is readily seen that for any $\balpha\in\RR^n\backslash\QQ^n$ and $q\in \NN$, we can find $\bp=(p_1,\dots,p_n)\in \ZZ^n$ with
\begin{equation*}
\left|\alpha_j-\frac{p_j}{q}\right|<\frac{1}{2q},\quad (1\leq j\leq n)
\end{equation*}
and so any $q\in \NN$ is a solution to \eqref{Eqn:Psi} if $\psi$ is taken to be the constant function $1/2$. Hence, we will want to consider functions $\psi$ which tend to zero with growing $q$. It is also worth noting that the case of $\balpha=\boldsymbol{a}/b\in\QQ^n$ with $\boldsymbol{a}\in\ZZ^n$ and $b\in\NN$ is of little interest as $\parallel q\balpha\parallel$ is equal to zero when $q$ is a multiple of $b$ and bounded from below by $1/b$ otherwise. It follows that we will restrict our attention to irrational points.
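To make these objects concrete, the following small Python illustration (not part of the text; the one-dimensional examples $\alpha = 3/7$, $\alpha = \sqrt{2}$ and $\psi(q)=1/q$ are arbitrary choices) computes the distance to the nearest integer and lists the solutions of $\parallel q\alpha\parallel < \psi(q)$. Beyond $q=7$ the rational example only admits the multiples of $7$, where the distance vanishes, whereas the irrational example keeps producing solutions, in line with Dirichlet's theorem below.
\begin{verbatim}
from math import sqrt

def dist_to_Z(x):
    """|| x ||: distance from the real number x to the nearest integer."""
    return abs(x - round(x))

psi = lambda q: 1 / q              # an arbitrary choice of approximating function

for alpha, name in [(3 / 7, "3/7"), (sqrt(2), "sqrt(2)")]:
    sols = [q for q in range(1, 200) if dist_to_Z(q * alpha) < psi(q)]
    print(name, sols)
# For alpha = 3/7, every solution q > 7 is a multiple of 7 (there ||q alpha|| = 0,
# while otherwise ||q alpha|| >= 1/7 > psi(q)); for sqrt(2), solutions keep appearing.
\end{verbatim}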
\section[Basic metric properties of simultaneous Diophantine approximation]{Basic metric properties of simultaneous\\ Diophantine approximation}\label{Sec:Basic}
The first fundamental result in Diophantine approximation was proved by Dirichlet and is a consequence of the pigeonhole principle, a rather simple but powerful concept.
\theoremstyle{plain} \newtheorem*{pigeon}{Pigeonhole Principle}
\begin{pigeon}
If $m$ objects are placed into $n$ boxes, where $m>n$, then at least one of the boxes contains at least two objects.
\end{pigeon}
\begin{theorem}[Dirichlet, 1842]
For any $\alpha\in\RR$ and $Q\in\NN$, there exist integers $p$ and $q$, such that
\begin{equation*}
\left|\alpha-\frac{p}{q}\right|<\frac{1}{qQ} \quad \text{ and } \quad 1\leq q\leq Q.
\end{equation*}
\end{theorem}
\begin{proof}
Let $\lfloor x\rfloor =\max\{z\in\ZZ:z\leq x\}$ and $\{x\}=x-\lfloor x\rfloor$ denote the integer and fractional part of a real number $x$, respectively. For any $x\in\RR$, we have $0\leq\{x\}<1$ and so the $Q+1$ numbers $\{0\alpha\},\{\alpha\},\{2\alpha\},\dots,\{Q\alpha\}$ are all contained in the half-open unit interval $[0,1)$. We can divide this interval into $Q$ disjoint subintervals of the form
\begin{equation*}
[u/Q,(u+1)/Q),\quad u\in\{0,1,\dots,Q-1\}.
\end{equation*}
Hence, by the Pigeonhole Principle, one of these intervals contains two points $\{q_1\alpha\},\ \{q_2\alpha\}$ with $q_1< q_2\in \{0,1,\dots,Q\}$ and it follows that
\begin{equation*}
|\{q_2\alpha\}-\{q_1\alpha\}|<\frac{1}{Q}.
\end{equation*}
As $\{q_k\alpha\}=q_k\alpha-p_k$ with $p_k=\lfloor q_k\alpha\rfloor$ for $k=1,2$, this implies
\begin{equation*}
|\{q_2\alpha\}-\{q_1\alpha\}|=|(q_2\alpha-p_2)-(q_1\alpha-p_1)|=|(q_2-q_1)\alpha-(p_2-p_1)|
\end{equation*}
and so letting $q=q_2-q_1$ and $p=p_2-p_1$ we have found integers $p$ and $q$, with $1\leq q\leq Q$, satisfying
\begin{equation*}
|q\alpha-p|<\frac{1}{Q},
\end{equation*}
or, in other words,
\begin{equation*}
\left|\alpha-\frac{p}{q}\right|<\frac{1}{qQ}.\\ \qedhere
\end{equation*}
\end{proof}
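The pigeonhole argument above is completely constructive and can be carried out on a computer. The following Python sketch (illustrative only; the choices of $\alpha$ and $Q$ are arbitrary) implements the collision search from the proof and returns a pair of integers $q$ and $p$ satisfying the conclusion of the theorem.
\begin{verbatim}
import math

def dirichlet_pair(alpha, Q):
    # Place {q*alpha}, q = 0..Q, into the Q subintervals [u/Q, (u+1)/Q);
    # the first collision yields q = q2 - q1 and p = round(q*alpha).
    boxes = {}
    for q2 in range(Q + 1):
        u = int((q2 * alpha) % 1.0 * Q)   # index of the subinterval
        if u in boxes:
            q = q2 - boxes[u]
            return q, round(q * alpha)
        boxes[u] = q2
    raise AssertionError("pigeonhole guarantees a collision")

alpha, Q = math.sqrt(2), 1000
q, p = dirichlet_pair(alpha, Q)
assert 1 <= q <= Q and abs(alpha - p / q) < 1 / (q * Q)
print(q, p, abs(q * alpha - p))
\end{verbatim}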
There is a higher-dimensional analogue to Dirichlet's Theorem concerning the approximation of points $\balpha\in\RR^n$ by rational points $\bp/q=(p_1/q,\dots,p_n/q)$ with $(\bp,q)\in\ZZ^n\times\NN$.
\begin{theorem}[Dirichlet]\label{Dir}
For any $\balpha=(\alpha_1,\dots,\alpha_n)\in\RR^n$ and $Q\in\NN$, there exist $\bp=(p_1,\dots,p_n)\in\ZZ^n$ and $q\in\NN$, such that
\begin{equation}\label{eqDir}
\max\limits_{1\leq j\leq n}\left|\alpha_j-\frac{p_j}{q}\right|<\frac{1}{qQ^{1/n}} \quad \text{ and } \quad 1\leq q\leq Q.
\end{equation}
\end{theorem}
We will skip the proof for now, but the statement can be easily obtained as a consequence of Minkowski's Theorem for systems of linear forms, which will be a later topic; see Section \ref{sec:weighted}. Of course, whenever $q\leq Q$ is a solution to \eqref{eqDir}, it will also satisfy the same inequality with $Q=q$. This, and the fact that $\parallel q\balpha\parallel$ is bounded away from zero for any given $q\in\NN$ and $\balpha\in\RR^n\backslash\QQ^n$, gives rise to the following corollary.
\begin{corollary}\label{DirCor}
Let $\balpha\in\RR^n$. There exist infinitely many $q\in\NN$ satisfying
\begin{equation}\label{Eqn:DirCor}
\parallel q\balpha\parallel <\frac{1}{q^{1/n}}.
\end{equation}
\end{corollary}
\begin{remark*}
Corollary \ref{DirCor} is trivially true for rational points. If $\balpha=(\boldsymbol{a}/b)\in\QQ^n$, then $\parallel q\balpha\parallel=0$ for any $q\in\NN$ of the form $q=kb$ with $k\in\NN$.
\end{remark*}
We note that inequality \eqref{Eqn:DirCor} has the form $\parallel q\balpha\parallel <\psi(q)$ as introduced above with $\psi(q)=q^{-1/n}$. We would like to extend this concept to a suitable class of functions. A positive-valued decreasing function $\psi:\mathbb{R}^+\rightarrow\mathbb{R}^+$ is called an \textit{approximating function}. Given such a function $\psi$, we look at the set of points in ${\rm I}^n=[0,1]^n$ which are \textit{simultaneously $\psi$-approximable}.
Namely, this is the set
\begin{equation}\label{approx}
W_n(\psi)=\left\{\balpha\in {\rm I}^n: \parallel q\balpha\parallel<\psi(q)\text{ infinitely often}\right\},
\end{equation}
where infinitely often means that the inequality holds for infinitely many $q\in\NN$. If $n=1$, we simply write $W(\psi)$. An often occurring case is when the approximating function $\psi$ has the form $\psi(q)=q^{-\tau}$ for $\tau>0$. In this case we speak of \textit{simultaneously $\tau$-approximable} points and denote the corresponding set by $W_n(\tau)$.
\begin{remark*}
The restriction to the unit cube ${\rm I}^n$ is purely for simplicity and does not affect the generality of results. This is due to the fact that the fractional part of the product $q\balpha$ does not depend on the integer part of $\balpha$ and thus we have that
\begin{equation*}
\parallel q\balpha\parallel=\parallel q(\balpha+\boldsymbol{k})\parallel
\end{equation*}
for any integer vector $\boldsymbol{k}\in\ZZ^n$. In other words, $\balpha$ is $\psi$-approximable if and only if any element of the set $\balpha+\ZZ^n$ is $\psi$-approximable.
\end{remark*}
\begin{remark*}
Simultaneous approximation is one of the two main types of Diophantine approximation, the other one being \textit{dual approximation}. In the dual case, points in $\RR^m$ are approximated by rational hyperplanes of the form
\begin{equation*}
\{\bq\cdot \bx=p:(p,\bq)\in\ZZ\times\ZZ^m\setminus\{\0\}\},
\end{equation*}
where $\bq\cdot\bx=q_1 x_1+\dots +q_m x_m$ is the standard scalar product in $\RR^m$. Given an approximating function $\psi$, one can define the set of \textit{dually $\psi$-approximable} points as
\begin{equation*}
W_m^D(\psi)=\{\balpha\in\RR^m:\norm{\bq\cdot\balpha}<\psi(|\bq|)\text{ infinitely often}\}.
\end{equation*}
Clearly, when $n=m=1$, the sets $W(\psi)$ and $W^D(\psi)$ coincide. The two approximation problems are also closely connected in higher dimension. Of course we can combine the two forms and this leads to a system of linear forms. Famous results like the Khintchine--Groshev Theorem or Minkowski's Theorem (see Theorem \ref{Minkowski}) are formulated in a generality which allows us to deduce both simultaneous and dual statements. Theorem \ref{thm:lines}, one of the main results in Chapter \ref{fibres}, is dependent on dual approximation properties, and in Section \ref{Sec:KTP} we will make use of a transference principle relating the two types of approximation. Other than this, we will not be concerned with dual approximation and refer the reader to \cite{Cassels} or \cite{BBDV} for a more extensive theory.
\end{remark*}
Dirichlet's Theorem tells us that $W_n(1/n)={\rm I}^n$. Since \eqref{approx} requires infinitely many solutions, we can conclude that $W_n(\psi)={\rm I}^n$ for any function $\psi$ which eventually dominates $q^{-1/n}$. However, Theorem \ref{Dir} by itself cannot reveal anything more about functions not falling in this category. To make further progress, we start by noting that $W_n(\psi)$ can be written as a so-called $\limsup$ set. Given a countable collection of sets $(A_k)_{k\in\NN}$, we denote by
\begin{equation*}
\limsup\limits_{k\rightarrow\infty}A_k:=\bigcap\limits_{l=1}^{\infty}\bigcup\limits_{k=l}^{\infty}A_k
\end{equation*}
the set of points contained in infinitely many of the $A_k$. In our case, let
\begin{equation*}
A_n(\psi,q)=\bigcup\limits_{|\bp|\leq q} B\left(\frac{\bp}{q},\frac{\psi(q)}{q}\right)\cap {\rm I}^n,
\end{equation*}
where $(\bp/q)=(p_1/q,\dots,p_n/q)\in\QQ^n$ and $B(\bx,r)$ is the ball of radius $r$ with respect to $\max$-norm centred at $\bx\in\RR^n$. As always, $|\bp|=\max\{|p_1|,\dots,|p_n|\}$ and the index $|\bp|\leq q$ means that the union runs over all integer vectors $\bp$ satisfying this condition. It follows directly from definition \eqref{approx} that
\begin{equation*}
W_n(\psi)=\limsup\limits_{q\rightarrow\infty}A_n(\psi,q).
\end{equation*}
The structure of $\limsup$ sets proves to be very useful when trying to investigate the measure theoretic properties of $W_n(\psi)$ with respect to $n$-dimensional Lebesgue measure $\lambda_n$. In fact, it directly fits the requirements for the convergence part of the Borel--Cantelli Lemma, a fundamental result in probability theory.
\begin{lemma}[Borel--Cantelli]\label{BoCa}
Let $(\Omega,m)$ be a finite measure space and let $(A_k)_{k\in\NN}$ be a collection of $m$-measurable subsets of $\Omega$. Then
\begin{equation*}
\sum\limits_{k=1}^{\infty}m(A_k)<\infty\quad \Rightarrow\quad m\left(\limsup\limits_{k\rightarrow\infty}A_k\right)=0.
\end{equation*}
\end{lemma}
Clearly, ${\rm I}^n$ with measure $\lambda_n$ is a finite measure space and we get
\begin{align*}
\lambda_n(A_n(\psi,q))&=\lambda_n\left(\bigcup\limits_{|\bp|\leq q} B\left(\frac{\bp}{q},\frac{\psi(q)}{q}\right)\cap {\rm I}^n\right)\\[1ex]
&\leq\sum\limits_{|\bp|\leq q}\lambda_n\left(B\left(\frac{\bp}{q},\frac{\psi(q)}{q}\right)\cap {\rm I}^n\right)\\[1ex]
&=\sum\limits_{|\bp|\leq q}\lambda_n\left(B\left(\frac{\bp}{q},\frac{\psi(q)}{q}\right)\right)\\[1ex]
&=2^n q^n\frac{\psi(q)^n}{q^n}=2^n\psi(q)^n,
\end{align*}
where we can do the shift in summation to compensate for the portions of the balls lying outside ${\rm I}^n$. Equality holds whenever the balls contained in $A_n(\psi,q)$ are not overlapping, i.e. for $\psi(q)<1/2$, which will always be satisfied in our considerations. Lemma \ref{BoCa} implies that
\begin{equation}\label{Eqn:KhinCon}
\lambda_n(W_n(\psi))=0 \quad \text{ if } \quad \sum\limits_{q=1}^{\infty}\psi(q)^n<\infty.
\end{equation}
Observe that in the above argument we have not made use of the fact that $\psi$ is monotonic.
It is much more work to obtain a converse statement to \eqref{Eqn:KhinCon}. Divergence alone is not enough for the converse version of the Borel--Cantelli Lemma to apply; the sets in question are also required to be \textit{pairwise independent}. For example, consider the sets of the form $A_k=[0,\frac{1}{k}]$ with $k\in\NN$. It is easily seen that
\begin{equation*}
\sum\limits_{k=1}^{\infty}\lambda(A_k)=\sum\limits_{k=1}^{\infty}\frac{1}{k}=\infty,
\end{equation*}
where in the one-dimensional case we just write $\lambda$ for the Lebesgue measure $\lambda_1$. However, as we are dealing with a collection of nested intervals, it follows that
\begin{equation*}
\limsup\limits_{k\rightarrow\infty}A_k=\lim\limits_{k\rightarrow\infty}A_k=\{0\},
\end{equation*}
which is a null-set with respect to Lebesgue measure. Similar overlaps occur between the sets $A_n(\psi,q)$, from which $W_n(\psi)$ is constructed and so pairwise independence is not satisfied in our case. There is a proof using the notion of \textit{quasi-independence on average} (see \cite{DAaspects} for an outline of the proof), but originally the following statement was proved by Khintchine using methods of classical measure and integration theory \cite{Khintchine}.
\begin{theorem}[Khintchine, 1924]\label{Khintchine}
Let $\psi$ be an approximating function. Then
\begin{equation*}
\lambda_n(W_n(\psi))=
\begin{dcases}
1\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty} \psi(q)^n=\infty,\\[2ex]
0\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty} \psi(q)^n<\infty.
\end{dcases}
\end{equation*}
\end{theorem}
We will not present a proof here, but later we will show how Khintchine's Theorem can be obtained as a consequence of \textit{ubiquity theory}. See Section \ref{Sec:ClassicalResults}.
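Although Theorem \ref{Khintchine} is a statement about infinitely many solutions and therefore cannot be verified by a finite computation, the dichotomy is already visible in simple experiments. The Python sketch below (a heuristic illustration only; the choices of $\psi$ and of the cut-off are arbitrary) counts solutions up to a fixed bound for randomly chosen $\alpha$ in dimension $n=1$, once for a divergent and once for a convergent choice of $\psi$.
\begin{verbatim}
import math, random

def count_solutions(alpha, psi, q_max):
    # number of q <= q_max with ||q*alpha|| < psi(q)   (here n = 1)
    return sum(1 for q in range(1, q_max + 1)
               if abs(q * alpha - round(q * alpha)) < psi(q))

random.seed(0)
q_max = 10**5
divergent = lambda q: 1.0 / q        # sum of psi(q) diverges
convergent = lambda q: 1.0 / q**2    # sum of psi(q) converges

for _ in range(3):
    alpha = random.random()
    print(count_solutions(alpha, divergent, q_max),
          count_solutions(alpha, convergent, q_max))
\end{verbatim}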
\begin{remark*}
As mentioned above, the convergence part of Theorem \ref{Khintchine} does not need the function $\psi$ to be decreasing. In fact, it has been shown by Gallagher that even for the divergence part this assumption can be removed if $n\geq 2$ \cite{Gallagherkt}. This is an important improvement of Khintchine's Theorem and will be vital to our proof of Theorem \ref{thm:subspaces}.
\end{remark*}
However, monotonicity of $\psi$ is essential when $n=1$. In 1941, Duffin and Schaeffer proved the existence of a non-monotonic function $\vartheta:\RR^+\rightarrow\RR^+$, for which the sum $\sum_{q\in\NN}\vartheta(q)$ diverges, but $\lambda(W(\vartheta))=0$ \cite{Duffin}. The construction of $\vartheta$ uses the following facts. For any square-free positive integer $N$ and any $s>1$,
\begin{equation}\label{Eqn:Fact1}
\sum\limits_{q\in\NN,\ q|N}q=\prod\limits_{p\in\mathbb{P},\ p|N}(1+p)
\end{equation}
and
\begin{equation}\label{Eqn:Fact2}
\prod\limits_{p\in\mathbb{P}}(1+p^{-s})=\frac{\zeta(s)}{\zeta(2s)},
\end{equation}
where $\mathbb{P}\subset\NN$ is the set of prime numbers. Here, $\zeta$ denotes the Riemann zeta function, which on our domain of interest is defined by the infinite series
\begin{equation*}
\zeta(s):=\sum\limits_{k=1}^{\infty}k^{-s}=\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\cdots .
\end{equation*}
As $s\rightarrow 1^+$, the value $\zeta(s)$ tends to infinity, since the harmonic series diverges, whereas $\zeta(2s)$ tends to the finite value $\zeta(2)=\pi^2/6$. Letting $s\rightarrow 1^+$ in \eqref{Eqn:Fact2} therefore implies that
\begin{equation*}
\prod\limits_{p\in\mathbb{P}}(1+p^{-1})=\infty.
\end{equation*}
Thus, we can find a sequence of square-free positive integers $(N_i)_{i\in\NN}$ such that $N_i$ and $N_j$ are coprime whenever $i\neq j$ and which satisfy
\begin{equation}\label{Eqn:Fact3}
\prod\limits_{p\in\mathbb{P},\ p|N_i}(1+p^{-1})>2^i+1.
\end{equation}
We define the function $\vartheta$ on the positive integers by
\begin{equation*}
\vartheta(q):=
\begin{cases}
2^{-i-1}\frac{q}{N_i}\ &\text{ if }q>1\text{ and }q|N_i\text{ for some }i,\\
0\ &\text{ otherwise.}
\end{cases}
\end{equation*}
As above, let
\begin{equation*}
A(\vartheta,q)=\bigcup\limits_{p=0}^q B\left(\frac{p}{q},\frac{\vartheta(q)}{q}\right)\cap{\rm I}.
\end{equation*}
If $q$ divides $N_i$, then $A(\vartheta,q)\subseteq A(\vartheta,N_i)$, since
\begin{equation*}
\frac{\vartheta(q)}{q}=2^{-i-1}\frac{q}{qN_i}=\frac{2^{-i-1}}{N_i}=\frac{\vartheta(N_i)}{N_i}
\end{equation*}
for any divisor $q$ of $N_i$. Hence,
\begin{equation*}
\bigcup\limits_{q\in\NN,\ q|N_i}A(\vartheta,q)=A(\vartheta,N_i)
\end{equation*}
and so
\begin{equation*}
\lambda\left(\bigcup\limits_{q\in\NN,\ q|N_i}A(\vartheta,q)\right)=\lambda\left(A(\vartheta,N_i)\right)=2\vartheta(N_i)=2^{-i}.
\end{equation*}
By definition
\begin{equation*}
W(\vartheta)=\limsup\limits_{q\rightarrow\infty}A(\vartheta,q)=\limsup\limits_{i\rightarrow\infty}A(\vartheta,N_i)
\end{equation*}
and, moreover, we have that
\begin{equation*}
\sum\limits_{i=1}^{\infty}\lambda(A(\vartheta,N_i))=\sum\limits_{i=1}^{\infty}2^{-i}=1<\infty.
\end{equation*}
Thus, the Borel--Cantelli Lemma implies that
\begin{equation*}
\lambda(W(\vartheta))=0.
\end{equation*}
On the other hand, using the equations \eqref{Eqn:Fact1} and \eqref{Eqn:Fact3}, we can show that
\begin{align*}
\sum\limits_{q=1}^{\infty}\vartheta(q)&=\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}\sum\limits_{q>1,\ q|N_i}q\\[1ex]
&=\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}\left(\prod\limits_{p\in\mathbb{P},\ p|N_i}(1+p)-1\right)\\[1ex]
&=\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}\left(\prod\limits_{p\in\mathbb{P},\ p|N_i}(1+p^{-1})p-1\right)\\[1ex]
&>\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}\left((2^i+1)N_i-1\right)\\[1ex]
&>\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}2^i N_i\\[1ex]
&=\sum\limits_{i=1}^{\infty}2^{-1}=\infty.
\end{align*}
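Both number-theoretic inputs \eqref{Eqn:Fact1} and \eqref{Eqn:Fact2} of this construction are easy to check numerically. The Python sketch below (illustrative only; the square-free integer $N$ and the truncation bounds are arbitrary choices) verifies \eqref{Eqn:Fact1} for a sample $N$ and compares a finite Euler product with a truncation of $\zeta(s)/\zeta(2s)$ at $s=2$.
\begin{verbatim}
from functools import reduce

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

# Fact (Eqn:Fact1): for square-free N, the divisor sum equals prod(1 + p)
N = 2 * 3 * 7 * 11
assert sum(divisors(N)) == reduce(lambda a, p: a * (1 + p), prime_factors(N), 1)

# Fact (Eqn:Fact2): partial Euler products of (1 + p^{-s}) approach zeta(s)/zeta(2s)
def zeta(s, terms=10**5):
    return sum(k ** (-s) for k in range(1, terms + 1))

s = 2.0
primes = [p for p in range(2, 1000)
          if all(p % d for d in range(2, int(p**0.5) + 1))]
partial = reduce(lambda a, p: a * (1 + p ** (-s)), primes, 1.0)
print(partial, zeta(s) / zeta(2 * s))   # both close to 15/pi^2 = 1.5198...
\end{verbatim}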
In the same paper, Duffin and Schaeffer also discuss a variation of Khintchine's Theorem for arbitrary positive functions $\psi$. The inequality $\norm{q\alpha}<\psi(q)$ implies the existence of an integer $p$ satisfying
\begin{equation}\label{Eqn:DS}
\left|\alpha-\frac{p}{q}\right|<\frac{\psi(q)}{q}.
\end{equation}
By requiring $\gcd(p,q)=1$, the rational point $p/q$ is written in lowest terms and is therefore associated with the unique approximation error $\psi(q)/q$. Let $W'(\psi)$ denote the set of points $\alpha$ in ${\rm I}$ for which inequality~\eqref{Eqn:DS} holds for infinitely many coprime pairs $(p,q)\in\ZZ\times\NN$. Clearly, $W'(\psi)\subseteq W(\psi)$. The Borel--Cantelli Lemma directly implies that
\begin{equation*}
\lambda(W'(\psi))=0\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}\varphi(q)\frac{\psi(q)}{q}<\infty,
\end{equation*}
where $\varphi$ is the Euler totient function, i.e.
\begin{equation*}
\varphi(q):=\left|\left\{k\in\NN: 1\leq k\leq q \text{ and } (k,q)=1\right\}\right|.
\end{equation*}
\begin{Conjecture}[Duffin--Schaeffer, 1941]\label{Con:DS}
For any positive real-valued function $\psi$,
\begin{equation*}
\lambda(W'(\psi))=1\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}\varphi(q)\frac{\psi(q)}{q}=\infty.
\end{equation*}
\end{Conjecture}
Duffin and Schaeffer proved the following weaker version of their conjecture.
\begin{theorem}\label{Thm:DS}
Conjecture \ref{Con:DS} holds under the additional assumption that
\begin{equation}\label{Eqn:AssumDS}
\limsup\limits_{k\rightarrow\infty}\left(\sum\limits_{q=1}^{k}\varphi(q)\frac{\psi(q)}{q}\right)\left(\sum\limits_{q=1}^{k}\psi(q)\right)^{-1}>0.
\end{equation}
\end{theorem}
Theorem \ref{Thm:DS} will be referred to as the Duffin--Schaeffer Theorem.
\begin{remark*}
The Duffin--Schaeffer conjecture is one of the most important and difficult open problems in Diophantine approximation. Various partial results have been established (see \cite{Harman} or \cite{Sprindzuk2}) and Gallagher's ``$0-1$ law'' shows that $W'(\psi)$ has either zero or full measure \cite{Gallagher01}. Conjecture \ref{Con:DS} and Khintchine's Theorem are equivalent in the case when $\psi$ is monotonic.
It is also worth noting that while $\vartheta$ as defined above is a counterexample to Khintchine's Theorem without monotonicity, it is not a counterexample to the Duffin--Schaeffer conjecture. Indeed, using the fact that $\sum_{q|N}\varphi(q)=N$, we see that
\begin{align*}
\sum\limits_{q=1}^{\infty}\varphi(q)\frac{\vartheta(q)}{q}&=\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}\sum\limits_{q>1,\ q|N_i}\varphi(q)\\[1ex]
&<\sum\limits_{i=1}^{\infty}2^{-i-1}\frac{1}{N_i}N_i\\[1ex]
&=\sum\limits_{i=1}^{\infty}2^{-i-1}=\frac{1}{2}<\infty.
\end{align*}
\end{remark*}
Turning back to Khintchine's Theorem, as a direct consequence we get that Corollary \ref{DirCor} is optimal in the sense that almost no $\balpha\in{\rm I}^n$ is $\tau$-approximable for $\tau>1/n$. On the other hand, if we define a collection of approximating functions $(\psi_k)_{k\in\NN}$ by
\begin{equation}\label{Eqn:DefPsiK}
\psi_k:q\mapsto\psi_k(q):=\frac{1}{k}q^{-\frac{1}{n}},
\end{equation}
then almost all $\balpha\in{\rm I}^n$ are $\psi_k$-approximable for any $k\in\NN$. We call a point $\balpha\in{\rm I}^n$ \textit{badly approximable} if there exists a $k\in\NN$ such that $\balpha\notin W_n(\psi_k)$ and we denote the set of badly approximable points in ${\rm I}^n$ by $\mathbf{Bad}_n$. For simplicity, we will write $\mathbf{Bad}$ for $\mathbf{Bad}_1$. In other words, $\mathbf{Bad}_n$ is the set of points $\balpha\in{\rm I}^n$, for which
\begin{equation*}
\liminf\limits_{q\rightarrow\infty}q^{1/n}\parallel q\balpha\parallel>0.
\end{equation*}
Khintchine's Theorem immediately implies the following.
\begin{theorem}
$\lambda_n(\mathbf{Bad}_n)=0$.
\end{theorem}
\begin{proof}
We define an approximating function $\psi$ by
\begin{equation*}
\psi(q):=\frac{1}{(q\log (q+1))^{1/n}}.
\end{equation*}
For any $k\in\NN$, there exists a $Q(k)\in\NN$ such that
\begin{equation*}
\psi_k(q)=\frac{1}{k}q^{-1/n}>\frac{1}{(\log (q+1))^{1/n}}q^{-1/n}=\psi(q)
\end{equation*}
for all $q\geq Q(k)$ and thus $W_n(\psi)\subseteq W_n(\psi_k)$ for all $k\in\NN$. This implies that
\begin{equation*}
\mathbf{Bad}_n=\bigcup\limits_{k=1}^{\infty}\left({\rm I}^n\setminus W_n(\psi_k)\right)\subseteq {\rm I}^n\setminus W_n(\psi).
\end{equation*}
Now, observe that
\begin{equation*}
\sum\limits_{q=1}^{\infty}\psi(q)^n=\sum\limits_{q=1}^{\infty}\frac{1}{q\log (q+1)}=\infty,
\end{equation*}
and hence, by Khintchine's Theorem, $\lambda_n(W_n(\psi))=1$ and so, in turn, $\lambda_n(\mathbf{Bad}_n)=0$.
\end{proof}
A priori, the set $\mathbf{Bad}_n$ could be empty. However, while being a null set with respect to Lebesgue measure $\lambda_n$, $\mathbf{Bad}_n$ is maximal with respect to Hausdorff dimension (see Section \ref{Sec:Hausdorff} for the definition).
\begin{theorem}\label{Thm:DimBad}
$\dim(\mathbf{Bad}_n)=n$.
\end{theorem}
In dimension one, Theorem \ref{Thm:DimBad} was proved by Jarn\'ik using a Cantor set construction \cite{JarnikBad}. The proof for arbitrary dimensions was done by Schmidt within the more general context of \textit{Schmidt games} and \textit{winning} sets \cite{Schmidtgames}. A concise account of both methods can be found in \cite{DAaspects}. The sets of badly approximable numbers are of great importance in Diophantine approximation and have been studied thoroughly. In particular, in the one-dimensional case they can be completely characterised using the theory of \textit{continued fractions}.
Any number $\alpha\in\RR$ can be written as an iterated fraction of the form
\begin{equation*}
\alpha=a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots}}}}
\end{equation*}
with $a_0\in\ZZ$ and $a_k\in\NN$ for $k\geq 1$. For simplicity, we usually prefer the notation
\begin{equation*}
\alpha=[a_0;a_1,a_2,a_3,\dots].
\end{equation*}
The numbers $a_i$, $i\in\NN$, are called the \textit{partial quotients} of the continued fraction. The first entry is the integral part of $\alpha$ and so our usual restriction to $\alpha\in{\rm I}$ corresponds to only considering continued fractions with $a_0=0$. Clearly, rational numbers are represented by finite length continued fractions. On the other hand, every irrational number $\alpha\in{\rm I}$ has a unique infinite continued fraction expansion. Given a number $\alpha\notin\QQ$ and $k\in\NN$, we define the $k$-th \emph{convergent} of $\alpha$ by
\begin{equation*}
\frac{p_k}{q_k}:=[a_0;a_1,a_2,a_3,\dots,a_k].
\end{equation*}
The convergents have very useful properties. They provide explicit solutions to the inequality in Corollary \ref{DirCor}. That is,
\begin{equation*}
\left| \alpha-\frac{p_k}{q_k}\right| < \frac{1}{q_k^2}\quad \text{ for all }k\in\NN.
\end{equation*}
Furthermore, they are also so-called \textit{best approximates}. This means that, given $1\leq q<q_k$, any rational $\frac{p}{q}$ satisfies
\begin{equation*}
\left| \alpha-\frac{p_k}{q_k}\right| < \left| \alpha-\frac{p}{q}\right|.
\end{equation*}
For the proofs of these facts and an extensive account of the theory on continued fractions, see \cite{HardyWright} or \cite{Khintchine-book}. Coming back to badly approximable numbers, we know the following to be true.
\begin{theorem}
Let $\alpha\in{\rm I}\setminus\QQ$ with continued fraction expansion $[0;a_1,a_2,a_3,\dots]$. Then
\begin{equation*}
\alpha \in \mathbf{Bad}\ \Longleftrightarrow\ \exists\ M=M(\alpha)\geq 1\ \text{ s.t. } a_i\leq M\ \text{ for all } i\in\NN.
\end{equation*}
\end{theorem}
In other words, $\mathbf{Bad}$ consists exactly of the numbers whose continued fractions have bounded partial quotients. In particular, this implies that quadratic irrationals are in $\mathbf{Bad}$, since they are precisely the numbers with a periodic continued fraction expansion. It is widely believed that no higher degree algebraic irrationals are badly approximable, but this has not been verified for any single number. In Section \ref{sec:weighted} we will introduce a more general class of badly approximable numbers and we will show later how being badly approximable implies good approximation behaviour with regard to twisted Diophantine approximation. See Theorem \ref{Thm:Kurzweil} and Theorem \ref{Con1}.
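The partial quotients and convergents of a given real number are easily computed. The Python sketch below (illustrative only; floating point arithmetic limits the number of reliable partial quotients) computes an initial segment of the continued fraction expansion of $\sqrt{2}$, whose partial quotients are bounded, and checks the inequality $|\alpha-p_k/q_k|<1/q_k^2$ for the corresponding convergents.
\begin{verbatim}
from fractions import Fraction
import math

def continued_fraction(alpha, k):
    # first partial quotients [a_0; a_1, ..., a_k], computed from a float
    a, x = [], alpha
    for _ in range(k + 1):
        a.append(math.floor(x))
        frac = x - a[-1]
        if frac == 0:
            break
        x = 1 / frac
    return a

def convergents(a):
    # recurrence p_k = a_k p_{k-1} + p_{k-2} and q_k = a_k q_{k-1} + q_{k-2}
    p_prev, q_prev, p, q = 1, 0, a[0], 1
    out = [Fraction(p, q)]
    for ak in a[1:]:
        p_prev, q_prev, p, q = p, q, ak * p + p_prev, ak * q + q_prev
        out.append(Fraction(p, q))
    return out

alpha = math.sqrt(2)        # = [1; 2, 2, 2, ...], bounded partial quotients
a = continued_fraction(alpha, 10)
print(a)
for pq in convergents(a):
    # each convergent satisfies |alpha - p_k/q_k| < 1/q_k^2
    assert abs(alpha - pq) < 1 / pq.denominator ** 2
\end{verbatim}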
Corollary \ref{DirCor} shows that all numbers are $\psi_1$-approximable, with $\psi_k$ as defined in \eqref{Eqn:DefPsiK}. In other words, this means that the complement of $W_n(\psi_1)$ is empty. Letting $k$ grow, Khintchine's Theorem tells us that $W_n(\psi_k)$ still has full measure for any $k\in\NN$, but the complement might be non-empty. In dimension one, Hurwitz first showed that
\begin{equation*}
{\rm I}\setminus W(\psi_3)\neq\emptyset,
\end{equation*}
implying the existence of badly approximable numbers \cite{Hurwitz}. As $k$ tends to infinity, these complements exhaust the badly approximable numbers, that is, $\bigcup_{k\in\NN}\left({\rm I}\setminus W(\psi_k)\right)=\mathbf{Bad}$. This example shows the limitations of Theorem \ref{Khintchine}. Being purely a zero-one law, it cannot illustrate the difference between those exceptional sets. Even more importantly, let $1/n<\tau_1<\tau_2$. Then, clearly
\begin{equation*}
W_n(\tau_2)\subseteq W_n(\tau_1),
\end{equation*}
but
\begin{equation*}
\lambda_n(W_n(\tau_1))=\lambda_n(W_n(\tau_2))=0.
\end{equation*}
From a heuristic point of view, one would expect $W_n(\tau_2)$ to be strictly smaller than $W_n(\tau_1)$, but Khintchine's Theorem is not powerful enough to verify this claim. Hence, we need some finer means to distinguish the sizes of Lebesgue null-sets. This leads us to the concepts of \textit{Hausdorff measure} and \textit{dimension}.
\section[Hausdorff measures and dimension and Jarn\'ik's Theorem]{Hausdorff measures and dimension\\and Jarn\'ik's Theorem}\label{Sec:Hausdorff}
The definition of Hausdorff measures involves several steps. First, a \textit{dimension function} $f:\RR^+\rightarrow\RR^+$ is a left-continuous monotonic function such that
\begin{equation}\label{Eqn:DimFct}
\lim\limits_{t\rightarrow 0}f(t)=0.
\end{equation}
Let $A$ be a subset of $\RR^n$. Then, given a real number $\rho>0$, a $\rho$\textit{-cover} of $A$ is a countable collection $\{A_k\}_{k\in\NN}$ of subsets of $\RR^n$ such that
\begin{equation*}
A\subseteq\bigcup\limits_{k=1}^{\infty}A_k
\end{equation*}
and
\begin{equation*}
d_k=d(A_k)<\rho\quad \text{ for all } k\in\NN,
\end{equation*}
where the \textit{diameter} of a set $B\subset\RR^n$ is given by
\begin{equation*}
d(B):=\sup\{|\bx-\by|:\bx,\by\in B\}.
\end{equation*}
Let
\begin{equation*}
\mathcal{H}^f_{\rho}(A)=\inf\sum\limits_{k=1}^{\infty}f(d(A_k)),
\end{equation*}
where the infimum is taken over all $\rho$-covers of $A$.
As $\rho$ decreases, the class of possible $\rho$-covers for $A$ is reduced and thus $\mathcal{H}^f_{\rho}$ increases. Hence, the limit
\begin{equation*}
\mathcal{H}^f(A)=\lim\limits_{\rho\rightarrow 0}\mathcal{H}^f_{\rho}(A)
\end{equation*}
exists and is either finite or equal to $+\infty$. $\mathcal{H}^f(A)$ is called the \textit{Hausdorff} $f$\textit{-measure} of $A$. In the case that $f(t)=t^s$ with $s> 0$, the measure $\mathcal{H}^f$ is called the $s$\textit{-dimensional Hausdorff measure} and denoted by $\mathcal{H}^s$. This definition can be extended to $s=0$, even though the function
\begin{equation*}
f:t\mapsto t^0=1\quad \text{for all }t\in\RR^+
\end{equation*}
does not satisfy condition \eqref{Eqn:DimFct}. It is easily verified that $\mathcal{H}^0(A)$ is the cardinality of $A$ and for $s\in\NN$, $\mathcal{H}^s$ is a constant multiple of the $s$-dimensional Lebesgue measure on $\RR^s$. Importantly, this means their notions of null sets and full measure coincide. It is also worth noting that $\mathcal{H}^g(A)=0$ if
\begin{equation*}
\cH^f(A)<\infty\quad \text{ and }\quad \lim\limits_{t\rightarrow 0}\frac{g(t)}{f(t)}=0.
\end{equation*}
In particular, unless $A$ is finite, this shows there exists a unique $s_0$ where $\cH^s(A)$ drops from infinity to zero, i.e.
\begin{equation*}
\cH^s(A)=
\begin{cases}
\infty\ &\text{ if }\ s<s_0,\\
0\ &\text{ if }\ s>s_0.
\end{cases}
\end{equation*}
This critical point is called the \textit{Hausdorff dimension} of $A$. Formally,
\begin{equation*}
\dim A=\inf\{s>0:\cH^s(A)=0\}.
\end{equation*}
If $\dim A=s$, then $\cH^s(A)$ may be zero or infinite or satisfy $0<\cH^s(A)<\infty$. For example, a ball in $\RR^n$ has finite measure $\cH^n$. Often it is easier to find the dimension of a set than to obtain the actual measure at the critical value. Showing $\dim A=s$ is usually done by proving $\dim A\leq s$ and $\dim A\geq s$ separately. In most cases the upper bound is more easily attained since it is enough to provide specific covers as $\rho\rightarrow 0$, whereas for the lower bound we need to show no other sequence of covers could lead to a smaller limit.
For more details on Hausdorff measure and dimension as well as related notions and constructions, see \cite{Falconer} and \cite{Mattila}. A standard example for a non-integral dimensional subset of $\mathbb{R}$ is the middle third Cantor set $C$. The set $C$ is constructed by removing the middle third from the unit interval ${\rm I}=[0,1]\subset\mathbb{R}$ and then successively removing the middle third from all the resulting intervals. This means that after the first iteration the resulting set consists of the intervals $\left[0,\frac{1}{3}\right]$ and $\left[\frac{2}{3},1\right]$, after removing the middle thirds of those intervals the next set comprises the intervals $\left[0,\frac{1}{9}\right]$, $\left[\frac{2}{9},\frac{1}{3}\right]$, $\left[\frac{2}{3},\frac{7}{9}\right]$ and $\left[\frac{8}{9},1\right]$. Continuing in the same fashion the $k$-th step leaves $2^k$ disjoint intervals of length $3^{-k}$. We will refer to them as level $k$ intervals and the Cantor set is the resulting infinite limit or infinite intersection of this process. It is possible to describe $C$ in explicit ways, for example as all the numbers in the unit interval with a $3$-adic expansion which does not contain the digit 1; i.e. if we write any $\alpha\in {\rm I}$ as
\begin{equation*}
\alpha=\sum\limits_{k=0}^{\infty}a_k 3^{-k},\quad a_k\in\{0,1,2\}\ \text{ for all }k\in\NN,
\end{equation*}
then $C$ comprises exactly the numbers which have a representation with $a_k\neq 1$ for all $k$. This is true since the $k$-th step of the process above removes precisely the numbers where $a_k$ is the first coefficient equal to one.
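The self-similar structure of $C$ can be made concrete: at level $k$ the construction leaves $2^k$ intervals of length $3^{-k}$, so the natural covering exponent $\log(2^k)/\log(3^k)=\log 2/\log 3$ is the same at every level, in line with the value established in the lemma below. The following Python sketch (illustrative only) generates the level-$k$ intervals and prints this exponent.
\begin{verbatim}
import math

def cantor_intervals(k):
    # level-k intervals of the middle-third construction
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt += [(a, a + third), (b - third, b)]
        intervals = nxt
    return intervals

for k in range(1, 11):
    intervals = cantor_intervals(k)
    count, length = len(intervals), intervals[0][1] - intervals[0][0]
    # covering exponent log(count)/log(1/length) = log 2 / log 3 at every level
    print(k, math.log(count) / math.log(1 / length))
\end{verbatim}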
\begin{lemma} [Example 2.7. of \cite{Falconer}]
Let $s=\frac{\log 2}{\log 3}$. Then the Cantor set satisfies $\dim C=s$. Furthermore, $\cH^s(C)=1$.
\end{lemma}
\begin{proof}
As the level $k$ intervals are a collection of $2^k$ intervals of length $3^{-k}$ covering $C$, it is natural to use those sets as a $3^{-k}$-cover of $C$. We directly see that
\begin{equation*}
\mathcal{H}_{3^{-k}}^s(C)\leq 2^k 3^{-ks}=1,
\end{equation*}
which implies $\mathcal{H}^s(C)\leq 1$ by letting $k\rightarrow\infty$ and thus it follows that $\dim C\leq s$.
To prove that $\mathcal{H}^s(C)\geq\frac{1}{2},$ we will show that
\begin{equation} \label{cantor1}
\sum\limits_{j\in J}d_j^s\geq\frac{1}{2}=3^{-s}
\end{equation}
for any cover $\mathcal{U}=\{U_j\}_{j\in J}$ of $C$ with $d_j=d(U_j)$. Clearly, we can assume all the $U_j$ to be intervals and the compactness of $C$ implies the existence of a finite subcover of $\mathcal{U}$. Hence, by choosing the closure of those intervals, it is enough to prove \eqref{cantor1} when $\mathcal{U}$ is a finite collection of closed subintervals of $[0,1]$. For each $j\in J$, let $k$ be the unique integer such that
\begin{equation} \label{cantor2}
3^{-(k+1)}\leq d_j<3^{-k}.
\end{equation}
Then $U_j$ can intersect at most one of the level $k$ intervals since all those intervals lie at least $3^{-k}$ apart. Let $l\geq k$, then by the $l$-th iteration any level $k$ interval has been replaced by $2^{l-k}$ level $l$ intervals and hence $U_j$ can intersect at most
\begin{equation*}
2^{l-k}=2^l 3^{-sk}\leq 2^l 3^s d_j^s
\end{equation*}
intervals of level $l$, where the last inequality is due to \eqref{cantor2}. We can choose $l$ large enough such that $3^{-(l+1)}\leq d_j$ holds for all $j\in J$ and then, since $\mathcal{U}$ is a cover of $C$ and hence the $U_j$ intersect all $2^l$ level $l$ intervals, it follows that
\begin{equation*}
2^l\leq\sum\limits_{j\in J}2^l 3^s d_j^s
\end{equation*}
which reduces to \eqref{cantor1}.
\end{proof}
\begin{remark*}
This proof actually shows that $\frac{1}{2}\leq\cH^s(C)\leq 1$. Proving that the upper bound is sharp requires a little more effort and will be omitted here.
\end{remark*}
Turning back to the $\limsup$ set of $\psi$-approximable points, we can make use of its structure similarly to the above example to obtain an upper bound for $\dim W_n(\psi)$. For simplicity we will stick to the case $n=1$. Recall that
\begin{equation*}
W(\psi)=\limsup\limits_{q\rightarrow\infty}A(\psi,q),
\end{equation*}
where
\begin{equation*}
A(\psi,q)=\bigcup\limits_{0\leq p\leq q} B\left(\frac{p}{q},\frac{\psi(q)}{q}\right)\cap {\rm I}.
\end{equation*}
For each $t\in\NN$, the balls contained in the sets $A(\psi,q)$ with $q\geq t$ form a cover of $W(\psi)$. Let $\rho>0$. Assuming that $\psi$ is monotonic and $\psi(q)<1$ for $q$ large enough, $\rho>2\psi(t)/t$ holds for $t$ large enough. Fixing such a $t$, the balls contained in the collection $\{A(\psi,q)\}_{q\geq t}$ form a $\rho$-cover of $W(\psi)$. It follows that
\begin{align*}
\cH^s(W(\psi))&\leq 2^s \sum\limits_{q=t}^{\infty}q\left(\frac{\psi(q)}{q}\right)^s\\[1ex]
&=2^s\sum\limits_{q=t}^{\infty}q^{1-s}\psi(q)^s\rightarrow 0
\end{align*}
as $t\rightarrow\infty$ (i.e. as $\rho\rightarrow 0$) if the sum $\sum_{q\in\NN}q^{1-s}\psi(q)^s$ converges. It can be shown with some extra effort that one can remove the monotonicity condition, giving us a Hausdorff measure analogue of the convergence part of Khintchine's Theorem. Formally:
\begin{lemma}\label{BCHd}
Let $\psi:\RR^+\rightarrow\RR^+$ be a function and $s>0$. Then
\begin{equation*}
\cH^s(W(\psi))=0\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}q^{1-s}\psi(q)^s<\infty.
\end{equation*}
\end{lemma}
If we let $\psi(q)=q^{-\tau}$ with $\tau>1$ and $s>\frac{2}{\tau+1}$, then $1-s(1+\tau)<-1$ and so
\begin{equation*}
\sum\limits_{q=1}^{\infty}q^{1-s}\psi(q)^s=\sum\limits_{q=1}^{\infty}q^{1-s(1+\tau)}<\infty.
\end{equation*}
Hence $\cH^s(W(\tau))=0$ for $s>\frac{2}{\tau+1}$, which shows that $\dim W(\tau)\leq\frac{2}{\tau+1}$ for $\tau>1$.
This result relies on the $\limsup$ nature of $W(\psi)$ and proves the easier half of the Jarn\'ik--Besicovitch Theorem.
\begin{theorem}[Jarn\'ik--Besicovitch]\label{JB}
Let $\tau>\frac{1}{n}$. Then
\begin{equation*}
\dim(W_n(\tau))=\frac{n+1}{\tau+1}.
\end{equation*}
\end{theorem}
Theorem \ref{JB} was proved independently and with different methods by Jarn\'ik in 1929 and Besicovitch in 1934 \cite{Jarnikold}, \cite{Besicovitch}. This confirms the intuition that given ${\tau_1<\tau_2}$, $W_n(\tau_2)$ is strictly smaller than $W_n(\tau_1)$. However, it is not able to provide us with the measure $\cH^s(W_n(\tau))$ at the critical exponent and it can only deal with functions of the form $\psi(q)=q^{-\tau}$. These gaps were subsequently filled by Jarn\'ik~\cite{Jarnik}.
\begin{theorem}[Jarn\'ik, 1931]\label{Jarnik}
Let $\psi$ be an approximating function and $s\in(0,n)$. Then
\begin{equation*}
\cH^s(W_n(\psi))=
\begin{dcases}
\infty\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s=\infty,\\[2ex]
0\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s<\infty.
\end{dcases}
\end{equation*}
\end{theorem}
\begin{remark*}
Here we have to exclude the case when $s=n$ since Khintchine's Theorem shows that
\begin{equation*}
\cH^n(W_n(\psi))=\cH^n({\rm I}^n)<\infty\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}\psi(q)^n=\infty.
\end{equation*}
It is worth noting that the original statement required a stronger monotonicity condition and other additional conditions were imposed on $\psi$. Those restrictions were removed in \cite{limsup}. Again, $\psi$ being monotonic is only needed for the divergence case.
\end{remark*}
As an immediate consequence of Theorem \ref{Jarnik}, we can generalise Theorem \ref{JB} to arbitrary approximation functions.
\begin{corollary}
Let $\psi$ be an approximating function. Then
\begin{equation*}
\dim(W_n(\psi))=\inf\left\{s>0:\sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s<\infty\right\}.
\end{equation*}
Moreover, $\cH^s(W_n(\tau))=\infty$, where $s=\frac{n+1}{\tau+1}$.
\end{corollary}
As with Khintchine's Theorem, we will see how Jarn\'ik's Theorem can be derived from ubiquity theory. However, it can also be deduced as a consequence of Khintchine's Theorem using the \textit{Mass Transference Principle}, which in turn implies we can completely remove the monotonicity condition if $n\geq 2$. Before we go to that proof, we notice that we can combine the statements of Khintchine and Jarn\'ik by making use of the fact that
\begin{equation*}
\cH^s({\rm I}^n)=
\begin{dcases}
\infty\quad &\text{ if }\quad s<n,\\
C(n)\quad &\text{ if }\quad s=n,
\end{dcases}
\end{equation*}
where $C(n)$ is a constant only depending on $n$ since $\cH^n$ is a constant multiple of $\lambda_n$. This gives rise to the following concise theorem, which describes the measure theoretic behaviour of $W_n(\psi)$.
\begin{theorem}[Khintchine--Jarn\'ik]\label{KhiJa}
Let $\psi$ be an approximating function and $s\in(0,n]$. Then
\begin{equation*}
\cH^s(W_n(\psi))=
\begin{dcases}
\cH^s({\rm I}^n)\ &\text{ if }\quad \sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s=\infty,\\[2ex]
0\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s<\infty.
\end{dcases}
\end{equation*}
\end{theorem}
This notation allows us to get rid of the problem when $s=n$ and illustrates how closely the Theorems of Khintchine and Jarn\'ik are related. While one could think of the Hausdorff theory as a subtle refinement of the Lebesgue measure theory, the following section shows how the Hausdorff theory can actually be derived as a consequence of the Lebesgue theory.
\section{The Mass Transference Principle}\label{secMTP}
In this section, we describe a general principle that allows us to deduce Jarn\'ik's Theorem from Khintchine's Theorem. Let $(\Omega,d)$ be a locally compact metric space and suppose there exist constants
\begin{equation*}
\delta>0,\quad 0<c_1<1<c_2<\infty,\quad \text{ and }\ r_0>0,
\end{equation*}
such that the inequalities
\begin{equation}\label{MTdim}
c_1r^{\delta}<\cH^{\delta}(B(\bx,r))<c_2r^{\delta}
\end{equation}
are satisfied for any ball $B(\bx,r)\subset\Omega$ with $\bx\in\Omega$ and $r<r_0$. We have only defined Hausdorff measures and dimension for $\RR^n$, but the theory can be extended to arbitrary metric spaces (see \cite{Falconer}, \cite{Mattila}). It follows from \eqref{MTdim} that
\begin{equation*}
0<\cH^{\delta}(\Omega)\leq\infty\quad \text{ and }\quad \dim\Omega=\delta.
\end{equation*}
Now, given a dimension function $f$ and a ball $B=B(\bx,r)\subset\Omega$, we define the scaled balls
\begin{equation*}
B^f=B\left(\bx,f(r)^\frac{1}{\delta}\right)\quad \text{ and }\quad B^s=B\left(\bx,r^\frac{s}{\delta}\right)
\end{equation*}
in the special case where $f(r)=r^s$ for some $s>0$, respectively. Clearly, $B=B^\delta$. When dealing with $\limsup$ sets in $\Omega$, the Mass Transference Principle allows us to derive $\cH^f$-measure theoretic results from statements concerning $\cH^{\delta}$-measure. In the `typical' case where $\delta=n\in\NN$ and $\Omega=\RR^n$ this means we can transform Lebesgue measure theoretic statements into results on Hausdorff measures. The following has been established by Beresnevich and Velani in 2006. The complete theory and proofs can be found in \cite{MassTrans}.
\begin{samepage}
\begin{theorem}[Mass Transference Principle]\label{MTP}
Let $\{B_k\}_{k\in\NN}$ be a sequence of balls in $\Omega$ with $r(B_k)\rightarrow 0$ as $k\rightarrow\infty$. Let $f$ be a dimension function such that $t^{-\delta}f(t)$ is monotonic. For any ball $B\subset\Omega$ with $\cH^{\delta}(B)>0$,
\begin{equation}\label{Eqn:MTPcond}
\text{ if}\quad \cH^{\delta}\left(B\cap\limsup\limits_{k\rightarrow\infty}B_k^f\right)=\cH^{\delta}(B),
\end{equation}
\begin{equation*}
\text{ then }\quad \cH^f\left(B\cap\limsup\limits_{k\rightarrow\infty}B_k^{\delta}\right)=\cH^f(B).
\end{equation*}
\end{theorem}
\end{samepage}
It is worth noting that Theorem \ref{MTP} has no monotonicity requirements on the radii of balls and even the condition that $r(B_k)\rightarrow 0$ as $k\rightarrow\infty$ is simply of cosmetic nature. Before showing how the Mass Transference Principle can be used to deduce Jarn\'ik's Theorem from Khintchine's Theorem, we will prove the Jarn\'ik--Besicovitch Theorem as a consequence of Dirichlet's Theorem.
\begin{proof}[Proof of Theorem \ref{JB}]
Let $\psi(q)=q^{-\tau}$ for $\tau>1/n$. This means, given $q\in\NN$, we are dealing with balls of radius $r=q^{-(\tau+1)}$ centred at rational points $\bp/q$. Corollary \ref{DirCor} of Dirichlet's Theorem states that for any $\balpha\in\RR^n$ there are infinitely many rational points $\bp/q$, $q\in\NN$, satisfying
\begin{equation*}
\left|\balpha-\frac{\bp}{q}\right|<q^{-\left(1+\frac{1}{n}\right)}.
\end{equation*}
Letting $s=\frac{n+1}{\tau+1}$, we see that
\begin{equation*}
r^{\frac{s}{n}}=\left(q^{-(\tau+1)}\right)^{\frac{n+1}{n(\tau+1)}}=\left(q^{-(\tau+1)}\right)^{\frac{1+1/n}{\tau+1}}=q^{-(1+\frac{1}{n})}
\end{equation*}
and thus \eqref{Eqn:MTPcond} is satisfied with $f(r)=r^s$. This implies that $\dim W_n(\tau)\geq s$. The upper bound follows easily from a covering argument as in Lemma \ref{BCHd}.
\end{proof}
\begin{remark*}
This actually proves more than just Theorem \ref{JB}. We have also showed that $\cH^s(W_n(\tau))=\infty$, a fact we were previously only able to deduce once Jarn\'ik's Theorem was established. We note that we will also apply the Mass Transference Principle to a similar setting to complete the proof of Theorem \ref{HDfibres} in Section \ref{Sec:HDfibres}.
\end{remark*}
\begin{remark*}
In the following proof and later throughout the text, we will be using the Vinogradov symbol $\ll$. Given two real-valued functions $f$ and $g$, $f(q)\ll g(q)$ means there exist positive constants $c$ and $Q$ such that $f(q)\leq c g(q)$ for all $q\geq Q$. When applied to infinite sums,
\begin{equation*}
\sum\limits_{q=1}^{\infty}f(q)\ll\sum\limits_{q=1}^{\infty}g(q)\quad :\Longleftrightarrow\quad \sum\limits_{q=1}^{Q}f(q)\leq c\sum\limits_{q=1}^{Q}g(q)
\end{equation*}
for a constant $c$ and all $Q$ large enough.
\end{remark*}
\begin{proof}[Proof of Jarn\'ik's Theorem modulo Khintchine's Theorem]
Without loss of generality we can assume that $\psi(q)/q\rightarrow 0$ as $q\rightarrow\infty$. Otherwise, $W_n(\psi)={\rm I}^n$ and obviously $\cH^s(W_n(\psi))=\infty$ for any $s<n$. Hence, the decay condition on the radii in Theorem \ref{MTP} is satisfied. With respect to the above setup, $(\Omega,d)$ is the unit cube ${\rm I}^n$ equipped with the supremum norm, $\delta=n$ and $f(r)=r^s$ with $s\in(0,n)$. We assume that $\sum_{q\in\NN} q^{n-s}\psi(q)^s=\infty$. Letting
\begin{equation*}
\vartheta(q)=\left(q^{n-s}\psi(q)^s\right)^{\frac{1}{n}}=q^{1-\frac{s}{n}}\psi(q)^{\frac{s}{n}},
\end{equation*}
we see that $\sum_{q\in\NN}\vartheta(q)^n=\infty$. For now, we suppose that either $\vartheta$ is decreasing or that $n\geq 2$. Hence, Khintchine's Theorem implies that
\begin{equation*}
\cH^n\left(B\cap W_n(\vartheta)\right)=\cH^n(B)
\end{equation*}
for any ball $B$ in $\RR^n$. Here we are dealing with a $\limsup$ set of balls with radii $r(B_k^s)=\vartheta(q)/q=q^{-s/n}\psi(q)^{s/n}$ for some $q\in\NN$ and thus the Mass Transference Principle tells us that
\begin{equation*}
\cH^s\left(B\cap\limsup\limits_{k\rightarrow\infty}B_k^{n}\right)=\cH^s(B),
\end{equation*}
where $r(B_k^n)=q^{-1}\psi(q)$ for $q\in\NN$. Those are exactly the balls that contribute to the $\limsup$ set $W_n(\psi)$ and thus $\cH^s(W_n(\psi))=\infty$.
In the case where $n=1$ and $\vartheta$ is non-monotonic, we will make use of the Duffin--Schaeffer Theorem (see Theorem \ref{Thm:DS}). For any $q\in\NN$, there is an integer $k\geq 0$ such that $q\in(2^{k-1},2^k]$. As both $q^{1-s}$ and $\psi(q)$ are monotonic functions, we see that
\begin{align*}
\vartheta(q)&=q^{1-s}\psi(q)^s\\[1ex]
&\leq \left(2^k\right)^{1-s}\psi\left(2^{k-1}\right)^s\\[1ex]
&=2^{1-s}\left(2^{k-1}\right)^{1-s}\psi\left(2^{k-1}\right)^s.
\end{align*}
There are $2^{k-1}$ integers in the interval $q\in(2^{k-1},2^k]$ and, using a shift in summation, this shows that
\begin{equation*}
\sum\limits_{q=1}^{\infty}\vartheta(q)\ll\sum\limits_{k=1}^{\infty}\left(2^k\right)^{2-s}\psi\left(2^{k}\right)^s.
\end{equation*}
On the other hand, the monotonicity of the functions $q^{1-s}$ and $\psi(q)$ also gives us the lower bound
\begin{align*}
\vartheta(q)\frac{\varphi(q)}{q}&=q^{1-s}\psi(q)^s\frac{\varphi(q)}{q}\\[1ex]
&\geq \left(2^{k-1}\right)^{1-s}\psi\left(2^k\right)^s\frac{\varphi(q)}{q}\\[1ex]
&=2^{s-1}\left(2^{k}\right)^{1-s}\psi\left(2^k\right)^s\frac{\varphi(q)}{q}.
\end{align*}
This implies that
\begin{align*}
\sum\limits_{q=1}^{\infty}\vartheta(q)\frac{\varphi(q)}{q}&\gg\sum\limits_{k=1}^{\infty}\left(2^{k}\right)^{1-s}\psi\left(2^k\right)^s\sum\limits_{2^{k-1}<q\leq 2^k}\frac{\varphi(q)}{q}\\[1ex]
&\gg\sum\limits_{k=1}^{\infty}\left(2^k\right)^{2-s}\psi\left(2^{k}\right)^s\\[1ex]
&\gg\sum\limits_{q=1}^{\infty}\vartheta(q),
\end{align*}
where we use the fact that the Euler totient function $\varphi$ satisfies
\begin{equation*}
\sum\limits_{q=1}^Q\frac{\varphi(q)}{q}=\frac{6}{\pi^2}Q+\mathcal{O}(\log Q).
\end{equation*}
For a proof of this property, see \cite[Theorem 111]{HardyWright}. Thus, the function $\vartheta$ satisfies the condition \eqref{Eqn:AssumDS} and, as $\sum_{q\in\NN}\vartheta(q)=\infty$, the Duffin--Schaeffer Theorem implies that $\lambda(W'(\vartheta))=1$. The rest of the argument works as in the previous case, which completes the proof of the divergence part of Jarn\'ik's Theorem. The convergence part follows directly from the $n$-dimensional analogue of Lemma~\ref{BCHd}.
\end{proof}
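The asymptotic formula for $\sum_{q\leq Q}\varphi(q)/q$ used in the last step is easy to confirm numerically. The Python sketch below (illustrative only; the cut-off is an arbitrary choice) compares the partial sum with the main term $6Q/\pi^2$.
\begin{verbatim}
import math

def phi(q):
    # Euler's totient function via trial-division factorisation
    result, n, p = q, q, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

q_max = 10**4
partial = sum(phi(q) / q for q in range(1, q_max + 1))
print(partial, 6 / math.pi**2 * q_max)   # the discrepancy is O(log q_max)
\end{verbatim}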
\section[Inhomogeneous approximation: standard and twisted case]{Inhomogeneous approximation:\\ standard and twisted case}
We introduced simultaneous Diophantine approximation as the study of rational points $\bp/q\in\QQ^n$ lying close to our point of consideration $\balpha\in\RR^n$. However, already from the inequality $\parallel q \balpha \parallel<\psi(q)$ we can obtain another point of view to characterise this problem. We take a point $\balpha\in\RR^n$ and want to investigate how close its natural multiples get to integers in $\ZZ^n$, or in other words, how closely their fractional parts $\{q\balpha\}=(\{q\alpha_1\},\dots,\{q\alpha_n\})$ approach points in the set $\{0,1\}^n\subset\RR^n$. Essentially, this corresponds to rotations of the torus $\mathbb{T}^n=\RR^n/\ZZ^n$ by the angle $\balpha\in\RR^n$ and to the question of how often the trajectory of the point $\boldsymbol{0}$ under this action returns to a small neighbourhood of the origin. We will mostly stay clear of this dynamical point of view, but a formal introduction as well as many far reaching consequences can be found in \cite{Einsiedler}.
All of what we have done so far focusses on approximation of the origin. However, we might as well fix any point $\boldsymbol{\gamma}=(\gamma_1,\dots,\gamma_n)\in {\rm I}^n$ and investigate how close we can get with terms of the form $\{q\balpha\}$, where $q$ is in $\NN$. Formally speaking, given an approximating function $\psi$ and a point $\boldsymbol{\gamma}\in {\rm I}^n$, let
\begin{equation*}
W_n(\psi,\boldsymbol{\gamma})=\left\{\balpha\in {\rm I}^n:\parallel q\balpha-\boldsymbol{\gamma}\parallel<\psi(q)\text{ infinitely often}\right\}
\end{equation*}
denote the \emph{inhomogeneous} set of \emph{$\psi$-approximable} points in ${\rm I}^n$. Hence, a point $\balpha$ is in $W_n(\psi,\boldsymbol{\gamma})$ if there exist infinitely many `shifted' rational points
\begin{equation*}
\frac{\bp-\boldsymbol{\gamma}}{q}=\left(\frac{p_1-\gamma_1}{q},\dots,\frac{p_n-\gamma_n}{q}\right),\quad q\in\NN,\ p_j\in\ZZ \text{ for } j\in\{1,\dots,n\}
\end{equation*}
such that the inequalities
\begin{equation*}
\left| \alpha_j-\frac{(p_j-\gamma_j)}{q} \right|<\frac{\psi(q)}{q}
\end{equation*}
are simultaneously satisfied for all $j\in\{1,\dots,n\}$. As above, when $\psi$ has the form $\psi(q)=q^{-\tau}$ with $\tau>0$, we write $W_n(\tau,\boldsymbol{\gamma})$ for $W_n(\psi,\boldsymbol{\gamma})$. Of course, in the case where $\boldsymbol{\gamma}=\0$, we are dealing with the classical homogeneous theory concerning the sets $W_n(\psi)$ and $W_n(\tau)$, respectively. The analogue of Khintchine's Theorem with a fixed inhomogeneous constant $\boldsymbol{\gamma}\in{\rm I}^n$ has been proved by Sz\"usz \cite{Szusz}. From this result we can deduce a Jarn\'ik type statement using the Mass Transference Principle. The proof is identical to the homogeneous case; we simply consider balls with shifted centres. Thus, we obtain the following generalisation of Theorem \ref{KhiJa}. Originally, the part where $s\in(0,n)$ was proved by Schmidt using classical methods \cite{Schmidtjarnik}.
\begin{samepage}
\begin{theorem}[Inhomogeneous Khintchine--Jarn\'ik]\label{Thm:InhomKJ}
Let $\psi$ be an approximating function, $\boldsymbol{\gamma}\in{\rm I}^n$ and $s\in(0,n]$. Then
\begin{equation*}
\cH^s(W_n(\psi,\boldsymbol{\gamma}))=
\begin{dcases}
\cH^s({\rm I}^n)\ &\text{ if }\quad \sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s=\infty,\\[2ex]
0\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty}q^{n-s}\psi(q)^s<\infty.
\end{dcases}
\end{equation*}
\end{theorem}
\end{samepage}
\begin{remark}\label{Rmk:InhomDir}
Theorem \ref{Thm:InhomKJ} shows that given any inhomogeneous constant $\boldsymbol{\gamma}$, we obtain the same measure theoretic statements for $W_n(\psi,\boldsymbol{\gamma})$ as in the homogeneous case. However, the inhomogeneous analogue to Dirichlet's Theorem is not true for arbitrary $\boldsymbol{\gamma}\in{\rm I}^n$. Even stronger, Cassels showed the following \cite[Chapter III, Theorem III]{Cassels}.
\begin{samepage}
\begin{theorem}\label{Thm:NoInhomDir}
Let $\psi$ be a positive real-valued function with $\psi(q)\rightarrow 0$ as $q\rightarrow\infty$. Then, there exist $\alpha\notin\QQ$ and $\gamma\in\RR$ such that the system
\begin{equation*}
\norm{q\alpha-\gamma}<\psi(Q),\quad 1\leq q\leq Q
\end{equation*}
has no integer solution $q$ for infinitely many $Q\in\NN$.
\end{theorem}
\end{samepage}
Obviously, for any $\alpha\in\QQ$ the sequence $(\{q\alpha\})_{q\in\NN}$ only takes finitely many distinct values and so cannot approximate any other points. Hence, the irrationality of $\alpha$ is a central part of this result. Still, Theorem \ref{Thm:NoInhomDir} is not a big impediment. For any irrational $\alpha$ and real $\gamma$ there are infinitely many $q\in\NN$ satisfying
\begin{equation*}
\norm{q\alpha-\gamma}<\frac{1+\varepsilon}{\sqrt{5}q},
\end{equation*}
where $\varepsilon>0$ can be chosen arbitrarily small \cite{KhintchineFr}. The result in higher dimensions looks slightly different. Cassels proved the existence of $\balpha$ and $\boldsymbol{\gamma}$ in ${\rm I}^n$ such that the inequality
\begin{equation}\label{Eqn:Cassels}
\max\limits_{1\leq j\leq n}\norm{q\alpha_j-\gamma_j}<Cq^{-1/n}
\end{equation}
has only finitely many solutions $q\in\NN$ for arbitrarily large $C>0$. Namely, such a $\boldsymbol{\gamma}$ exists for \emph{singular} $\balpha$, a notion which will be introduced in Chapter \ref{dirichlet} and coincides with rational points if and only if $n=1$. For the precise statement involving \eqref{Eqn:Cassels}, see Theorem \ref{Thm:Casselsnonsing}.
\end{remark}
As well as fixing the inhomogeneous constant $\boldsymbol{\gamma}\in{\rm I}^n$ and investigating properties of $W_n(\psi,\boldsymbol{\gamma})$, we can also consider a converse point of view. Given a fixed $\balpha\in{\rm I}^n$, we would like to know the distribution of the sequence $(\{q\balpha\})_{q\in\NN}$ inside ${\rm I}^n$. Concretely, given an approximating function $\psi$ and $\balpha\in{\rm I}^n$, we are interested in the set
\begin{equation*}
W_n^{\balpha}(\psi)=\left\{\boldsymbol{\gamma}\in{\rm I}^n:\norm{q\balpha-\boldsymbol{\gamma}}<\psi(q)\text{ infinitely often}\right\}.
\end{equation*}
As above, when $\psi$ has the form $\psi(q)=q^{-\tau}$ with $\tau>0$, we write $W_n^{\balpha}(\tau)$ for $W_n^{\balpha}(\psi)$. This situation is slightly different from the previously considered theory. Given $q$, we are dealing with a single ball of radius $\psi(q)$ centred at $\{q\balpha\}$ rather than having roughly $q^n$ balls of radius $\psi(q)/q$ around (shifted) rational points. Still, the total measures for each $q$ are comparable and so the Borel--Cantelli Lemma tells us that
\begin{equation}\label{Eqn:BCtwist}
\lambda_n\left(W_n^{\balpha}(\psi)\right)=0 \quad \text{ if } \quad \sum\limits_{q=1}^{\infty}\psi(q)^n<\infty.
\end{equation}
The converse situation is a bit more complicated. A point $\balpha=(\alpha_1,\dots,\alpha_n)\in{\rm I}^n$ is called \emph{totally irrational} if the numbers $1,\alpha_1,\dots,\alpha_n$ are linearly independent over $\QQ$. If $\balpha$ is not totally irrational, then the elements of $(\{q\balpha\})_{q\in\NN}$ are contained in a finite union of affine hyperplanes of $\RR^n$ defined over $\QQ$, intersected with ${\rm I}^n$, and so $\lambda_n\left(W_n^{\balpha}(\psi)\right)=0$ for any approximating function $\psi$. On the other hand, it is known that the sequence $(\{q\balpha\})_{q\in\NN}$ is equidistributed in ${\rm I}^n$ when $\balpha$ is totally irrational \cite{Weyl}. Roughly speaking, this means that for any ball $B\subseteq{\rm I}^n$, the asymptotic proportion of elements of $(\{q\balpha\})_{q\in\NN}$ contained in $B$ equals $\lambda_n(B)$. However, having this property is still not strong enough to guarantee $\lambda_n\left(W_n^{\balpha}(\psi)\right)=1$ for any approximating function $\psi$ with ${\sum_{q\in\NN}\psi(q)^n=\infty.}$ To see this, define the set of \emph{twisted $\psi$-approximable} points as
\begin{equation*}
W_n^{\times}(\psi)=\left\{\balpha\in{\rm I}^n:\lambda_n(W_n^{\balpha}(\psi))=1\right\}.
\end{equation*}
The twisted analogue to Khintchine's Theorem was proved by Kurzweil among other results in his fundamental paper on inhomogeneous Diophantine approximation \cite{Kurzweil}.
\begin{theorem}[Twisted Khintchine]\label{Thm:TwistedKhin}
Let $\psi$ be an approximating function. Then we have
\begin{equation*}
\lambda_n(W_n^{\times}(\psi))=1\quad \text{ if }\quad \sum\limits_{q=1}^{\infty} \psi(q)^n=\infty
\end{equation*}
and
\begin{equation*}
W_n^{\times}(\psi)=\emptyset\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}\psi(q)^n<\infty.
\end{equation*}
\end{theorem}
\begin{remark*}
Of course, the convergence part of Theorem \ref{Thm:TwistedKhin} simply follows from \eqref{Eqn:BCtwist}, which is a consequence of the Borel--Cantelli Lemma, and the divergence part is the main achievement. It is worth noting that for this problem there is no Jarn\'ik type theory since $W_n^{\times}(\psi)$ either has full measure or is the empty set.
\end{remark*}
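This circle of ideas is also amenable to experiment. For a fixed $\alpha$ and an approximating function $\psi$ with divergent sum, one can compute the Lebesgue measure of the finite unions $\bigcup_{q_{\min}\leq q\leq Q}B(\{q\alpha\},\psi(q))$ and watch whether the tail unions remain of full measure. The Python sketch below (a heuristic illustration only; the choices of $\alpha$, $\psi$ and the cut-offs are arbitrary) does this for a badly approximable $\alpha$, in the spirit of the theorem stated next.
\begin{verbatim}
import math

def covered_measure(alpha, psi, q_min, q_max):
    # Lebesgue measure of the union, over q_min <= q <= q_max, of the arcs
    # ({q*alpha} - psi(q), {q*alpha} + psi(q)) taken modulo 1   (n = 1)
    pieces = []
    for q in range(q_min, q_max + 1):
        c, r = (q * alpha) % 1.0, psi(q)
        pieces.append((max(0.0, c - r), min(1.0, c + r)))
        if c + r > 1.0:
            pieces.append((0.0, c + r - 1.0))
        if c - r < 0.0:
            pieces.append((c - r + 1.0, 1.0))
    pieces.sort()
    total, right = 0.0, 0.0
    for a, b in pieces:           # sweep-line union of intervals in [0,1]
        if b > right:
            total += b - max(a, right)
            right = b
    return total

alpha = (math.sqrt(5) - 1) / 2    # badly approximable
psi = lambda q: 0.5 / q           # the sum of psi(q) diverges
for q_min in (1, 10**2, 10**4):
    print(q_min, covered_measure(alpha, psi, q_min, 10**5))
\end{verbatim}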
Remarkably, Kurzweil also proved the following identity, establishing a deep connection between homogeneous and twisted inhomogeneous Diophantine Approximation.
\begin{theorem}[Kurzweil]\label{Thm:Kurzweil}
Denote by $\Psi^{\infty}_n$ the set of all approximating functions $\psi$ such that $\sum\psi(q)^n=\infty$. Then,
\begin{equation*}
\bigcap\limits_{\psi\in\Psi^{\infty}_n}W_n^{\times}(\psi)=\mathbf{Bad}_n.
\end{equation*}
\end{theorem}
Rewritten in a contrapositive way, Theorem \ref{Thm:Kurzweil} tells us that
\begin{equation*}
\balpha\notin\mathbf{Bad}_n\ \Longleftrightarrow\ \exists\ \psi\in\Psi^{\infty}_n\ \text{ such that }\ \lambda_n\left(W_n^{\balpha}(\psi)\right)<1,
\end{equation*}
where the choice of the exceptional $\psi$ might depend on $\balpha$.
\begin{remark*}
The original version of Theorem \ref{Thm:Kurzweil} was stated in a different and more general fashion than above. This might be a reason why the result and its significance were somewhat overlooked for a long period of time. However, in recent years the work of Kurzweil has become a focal point in Diophantine approximation and advances were made in different directions by Fayad \cite{Fayad}, Kim \cite{Kim}, \cite{Fuchs}, Tseng \cite{Tseng}, Chaika \cite{Chaika} and Simmons \cite{SimmonsKurzweil}. An important extension due to Harrap will be discussed shortly.
\end{remark*}
\section[Weighted Diophantine approximation: standard and twisted case]{Weighted Diophantine approximation:\\ standard and twisted case}\label{sec:weighted}
While so far we have only been concerned with equidistant approximation, i.e. $\limsup$ sets built from balls around rational points (or indeed `shifted' rational points in the case of inhomogeneous approximation) according to the $\max$-norm, we can also vary this approach by applying different weights to different coordinate directions. In the following, an $n$-tuple $\bi=(i_1,\dots,i_n)\in\RR^n$ satisfying
\begin{equation}\label{Eqn:WV1}
0\leq i_j\leq 1,\quad (1\leq j\leq n)
\end{equation}
and
\begin{equation}\label{Eqn:WV2}
i_1+i_2+\dots+i_n=1
\end{equation}
will be called a \textit{weight vector}. We will consider simultaneous approximation with respect to rectangles given by a weight vector $\bi$ rather than balls given by $\max$-norm. In this case we have the following generalisation of Dirichlet's Theorem.
\begin{theorem}[Weighted Dirichlet]\label{wDir}
Let $\bi\in\RR^n$ be a weight vector. Then, for any $\balpha\in\RR^n$ and $Q\in\NN$, there exists $q\in\NN$, $q\leq Q$ such that
\begin{equation}\label{Eqn:wDir}
\parallel q\alpha_j\parallel<Q^{-i_j},\quad (1\leq j\leq n).
\end{equation}
\end{theorem}
Analogously to Corollary \ref{DirCor} in the non-weighted case, Theorem \ref{wDir} immediately implies the following.
\begin{corollary}\label{Cor:wDir}
Let $\bi\in\RR^n$ be a weight vector. Then, for any $\balpha\in\RR^n$ there exist infinitely many $q\in\NN$ such that
\begin{equation*}
\max\limits_{1\leq j\leq n}q^{i_j}\parallel q\alpha_j\parallel<1.
\end{equation*}
\end{corollary}
Theorem \ref{wDir} shows that we have
\begin{equation*}
\max\left\{\parallel q\alpha_1\parallel^{1/i_1},\dots,\parallel q\alpha_n\parallel^{1/i_n}\right\}<Q^{-1},
\end{equation*}
which illustrates that instead of a cube given by $\max$-norm, we are dealing with an $n$-dimensional rectangle, the side lengths of which are scaled by the powers $i_j$. Note that while the shape of this rectangle depends on the weight vector $\bi$, the conditions \eqref{Eqn:WV1} and \eqref{Eqn:WV2} ensure that the volume is the same for any choice of $\bi$. The proof of Theorem \ref{wDir} follows from a surprisingly simple geometric observation by Minkowski.
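As a purely illustrative aside (not taken from any cited source; the function names are ad hoc), the following short Python sketch searches by brute force for a denominator $q\leq Q$ witnessing Theorem \ref{wDir} for a sample point and weight vector.
\begin{verbatim}
# A minimal numerical sketch of the weighted Dirichlet theorem: brute-force
# search for q <= Q with ||q*alpha_j|| < Q**(-i_j) for all j.
import math

def dist_to_int(x):
    """Distance from x to the nearest integer, i.e. ||x||."""
    return abs(x - round(x))

def weighted_dirichlet_witness(alpha, weights, Q):
    """Return some q <= Q satisfying the weighted Dirichlet inequalities,
    which the theorem guarantees to exist."""
    for q in range(1, Q + 1):
        if all(dist_to_int(q * a) < Q ** (-i) for a, i in zip(alpha, weights)):
            return q
    return None  # should not be reached if the theorem applies

alpha = (math.sqrt(2), math.sqrt(3))   # a sample point in R^2
weights = (1/3, 2/3)                   # a weight vector: i_1 + i_2 = 1
for Q in (10, 100, 1000):
    q = weighted_dirichlet_witness(alpha, weights, Q)
    print(Q, q, [dist_to_int(q * a) for a in alpha])
\end{verbatim}
For every $Q$ the reported distances lie below the respective thresholds $Q^{-i_j}$, in line with the theorem.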
\begin{samepage}
\begin{theorem}[Minkowski's Convex Body Theorem]\label{Thm:MinkCBT}
Let $K$ be a bounded convex set in $\RR^m$, symmetric about $\0$, i.e. $\bx\in K\Leftrightarrow -\bx\in K$. Assume that either $\lambda_m(K)>2^m$ or that $K$ is closed and $\lambda_m(K)\geq 2^m$. Then $K$ contains an integer point $\bz\neq\0$.
\end{theorem}
\end{samepage}
\begin{proof}
We will follow the argument given by Schmidt \cite{schmidt}. Suppose that $K$ satisfies ${\lambda_m(K)>2^m}$. For $q\in\NN$, define
\begin{equation*}
Z_q(K)=\left\{\frac{\bp}{q}=\left(\frac{p_1}{q},\dots,\frac{p_m}{q}\right)\in K:\bp\in\ZZ^m\right\}.
\end{equation*}
Then it follows that
\begin{equation*}
\lim\limits_{q\rightarrow\infty}\frac{\# Z_q(K)}{q^m\lambda_m(K)}=1
\end{equation*}
and thus we have
\begin{equation*}
\# Z_q(K)>q^m2^m=(2q)^m
\end{equation*}
for $q$ large enough. Every point in $Z_q(K)$ is determined by $m$ integer coordinates, each of which lies in one of $2q$ residue classes modulo $2q$, so there are only $(2q)^m$ possible residue vectors. By the pigeonhole principle, there must be two distinct points $\bp/q$ and $\tilde{\bp}/q$ in $Z_q(K)$ satisfying
\begin{equation}\label{Eqn:Mod}
p_j\equiv\tilde{p}_j\mod 2q,\quad \quad (1\leq j\leq m).
\end{equation}
By symmetry, $-\tilde{\bp}/q\in K$ and by convexity,
\begin{equation*}
\bz=\frac{1}{2}\frac{\bp}{q}+\frac{1}{2}\frac{-\tilde{\bp}}{q}=\frac{\bp-\tilde{\bp}}{2q}\in K.
\end{equation*}
Clearly, $\bz\neq\0$ and \eqref{Eqn:Mod} shows that $\bz$ is contained in $\ZZ^m$, which completes the proof of the first case.

The case when $K$ is closed and $\lambda_m(K)= 2^m$ can be reduced to the previous one. The sets $K$ and $\ZZ^m\setminus K$ are closed and disjoint subsets of $\RR^m$. Since $\RR^m$ is a normal space, $K$ is contained in an open neighbourhood $K'$ satisfying $K'\cap(\ZZ^m\setminus K)=\emptyset$. Clearly, $K'$ can be chosen to be convex and symmetric about zero. It follows that $\lambda_m(K')>\lambda_m(K)=2^m$ and so, by the previous case, $K'$ contains a non-zero integer point $\bz$, which must be contained in $K$.
\end{proof}
We now use Theorem \ref{Thm:MinkCBT} to deduce the following fundamental theorem in the geometry of numbers.
\begin{theorem}[Minkowski's Linear Forms Theorem]\label{Minkowski}
Given $m\in\NN$, suppose that $a_{j,k}$ with $1\leq j,k\leq m$, are real numbers with determinant $\det (a_{j,k})=\pm 1$ and let $A_1,\dots,A_m$ be positive real numbers with product $A_1 A_2\cdots A_m=1$. Then, there exists an integer point $\bz=(z_1,\dots,z_m)\in\ZZ^m\setminus\{\0\}$ such that the inequalities
\begin{equation}\label{Eqn:Mink}
\begin{aligned}
|a_{j,1}z_1+\dots+a_{j,m}z_m&|<A_j,\quad \quad (1\leq j\leq m-1) \\
\text{and }\quad \quad |a_{m,1}z_1+\dots+a_{m,m}z_m&|\leq A_m
\end{aligned}
\end{equation}
are satisfied simultaneously.
\end{theorem}
\begin{proof}
Denote the linear forms in question by
\begin{equation*}
L_j(\bx)=a_{j,1}x_1+\dots+a_{j,m}x_m,\quad \quad (1\leq j\leq m)
\end{equation*}
and let
\begin{equation*}
\tilde{L}_j(\bx)=\frac{1}{A_j}L_j(\bx),\quad \quad (1\leq j\leq m).
\end{equation*}
Then \eqref{Eqn:Mink} can be rewritten as
\begin{align*}
|\tilde{L}_j(\bx)&|<1,\quad \quad (1\leq j\leq m-1)\\
\text{and }\quad \quad |\tilde{L}_m(\bx)&|\leq 1.
\end{align*}
This modified system of linear forms still has determinant $\det(\tilde{a}_{j,k})=\pm 1$, so we can restrict ourselves to the case when $A_1=\dots=A_m=1$. The set $K\subset\RR^m$ of all $\bx\in\RR^m$ satisfying
\begin{equation*}
|L_j(\bx)|\leq 1,\quad \quad (1\leq j\leq m)
\end{equation*}
is the image of the closed cube $[-1,1]^m$ under a linear transformation of determinant $\pm 1$ and thus $K$ is a closed parallelepiped, symmetric about $\0$, with volume $\lambda_m(K)=2^m$. By Theorem \ref{Thm:MinkCBT}, there is an integer point $\bz\neq \0$ contained in $K$.
To get strict inequality for the first $m-1$ linear forms, we need to modify this argument slightly.
For each $\varepsilon>0$, the system of inequalities
\begin{align*}
|{L}_j(\bx)&|<1,\quad \quad \quad (1\leq j\leq m-1)\\
\text{and }\quad \quad |{L}_m(\bx)&|\leq 1+\varepsilon
\end{align*}
defines a symmetric parallelepiped $K_{\varepsilon}$ of volume $\lambda_m(K_{\varepsilon})=2^m(1+\varepsilon)>2^m$. By Theorem \ref{Thm:MinkCBT}, there is an integer point $\bz_{\varepsilon}\neq \0$ contained in $K_{\varepsilon}$. For $\varepsilon<1$, all the sets $K_{\varepsilon}$ will be contained in $K_1$. As a bounded body, $K_1$ only contains finitely many integer points and so there must be a sequence $(\varepsilon_k)_{k\in\NN}$ tending to zero as $k\rightarrow\infty$, such that all the $\bz_{\varepsilon_k}$ are the same, say $\bz$. On letting $k\rightarrow\infty$ we see that $\bz$ must satisfy \eqref{Eqn:Mink}.
\end{proof}
\begin{samepage}
We will often refer to this result simply as Minkowski's Theorem. Theorem \ref{wDir} follows from Theorem \ref{Minkowski} by setting $m=n+1$ and the following choice of coefficients:
\begin{align*}
a_{j,1}=\alpha_j,\quad a_{j,j+1}=-1,\quad A_j&=Q^{-i_j},\quad (1\leq j\leq n),\\
a_{n+1,1}=1,\quad A_{n+1}&=Q
\end{align*}
and all the other coefficients $a_{j,k}=0$. The special case where $A_1=\dots=A_n=Q^{-1/n}$ proves the original version of Dirichlet's Theorem.
\end{samepage}
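To illustrate this reduction, here is a small Python sketch (purely illustrative, with ad hoc names) that realises the linear forms $L_j(\bz)=\alpha_j z_1-z_{j+1}$ and $L_{n+1}(\bz)=z_1$ for a sample point and searches for an integer point satisfying the inequalities of Theorem \ref{Minkowski} with $A_j=Q^{-i_j}$ and $A_{n+1}=Q$; the search is restricted to $z_1>0$ and to nearest integers $z_{j+1}$, which is enough for this purpose.
\begin{verbatim}
# A hedged sketch of the coefficient choice for Theorem (wDir): we look for
# an integer point z = (q, p_1, ..., p_n) with |alpha_j*q - p_j| < Q**(-i_j)
# and |q| <= Q, as guaranteed by Minkowski's Linear Forms Theorem.
import math

def witness(alpha, weights, Q):
    n = len(alpha)
    for q in range(1, Q + 1):                  # z_1 = q, so |L_{n+1}(z)| <= Q
        p = [round(q * a) for a in alpha]      # best possible z_{j+1}
        if all(abs(q * a - pj) < Q ** (-i)
               for a, pj, i in zip(alpha, p, weights)):
            return (q, *p)
    return None

print(witness((math.sqrt(2), math.sqrt(3)), (1/3, 2/3), Q=20))
\end{verbatim}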
Analogously to the standard theory in Section \ref{Sec:Basic}, we can define the class of \textit{simultaneously $(\bi,\psi)$-approximable} numbers in ${\rm I}^n$ as
\begin{equation*}
W_n(\bi,\psi)=\left\{\balpha\in {\rm I}^n: \max\limits_{1\leq j\leq n}\psi(q)^{-i_j}\parallel q\alpha_j\parallel<1\text{ infinitely often}\right\},
\end{equation*}
where $\psi$ is an approximating function and $\bi=(i_1,\dots,i_n)$ is a weight vector. Observe that, by Corollary \ref{Cor:wDir}, $W_n(\bi,\psi)={\rm I}^n$ for $\psi(q)=q^{-1}$ and any weight vector $\bi$. More generally, Khintchine himself extended his classical result from $\bi=(1/n,\dots,1/n)$ to arbitrary weight vectors \cite{Khintchine2}.
\begin{samepage}
\begin{theorem}[Weighted Khintchine]\label{Thm:WeightedKhin}
Let $\psi$ be an approximating function and $\bi$ an $n$-dimensional weight vector. Then
\begin{equation*}
\lambda_n(W_n(\bi,\psi))=
\begin{dcases}
1\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty} \psi(q)=\infty,\\[2ex]
0\quad &\text{ if }\quad \sum\limits_{q=1}^{\infty} \psi(q)<\infty.
\end{dcases}
\end{equation*}
\end{theorem}
\end{samepage}
\begin{remark*}
When $n=1$, then $i=1$ is the only $1$-dimensional weight vector and so this is just Khintchine's Theorem for $\RR$. For $n\geq 2$, we get the classical result by choosing $\bi=(1/n,\dots,1/n)$. Note that in this formulation the exponent within the sum does not depend on $n$. This is simply because the condition $\norm{q\alpha_j}<\psi(q)$ has been replaced by $\norm{q\alpha_j}<\psi(q)^{i_j}$ and these powers satisfy $i_1+\dots+i_n=1$. For $n\geq 2$ we can drop the monotonicity assumption on $\psi$ in accordance with the previously developed theory.
\end{remark*}
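As a hedged numerical aside (not part of the cited literature), the divergence case of Theorem \ref{Thm:WeightedKhin} can be observed empirically: with a divergent $\psi$, the proportion of randomly chosen points that already admit a solution $q\leq N$ of the weighted inequalities grows with $N$, consistent with (though of course not a proof of) the full-measure statement.
\begin{verbatim}
# Monte Carlo sketch for the weighted Khintchine theorem (divergence case)
# with psi(q) = 1/(10q), which has a divergent sum, and weights (1/3, 2/3).
import math, random

def dist_to_int(x):
    return abs(x - round(x))

def psi(q):
    return 1.0 / (10 * q)

def hit(alpha, weights, N):
    """Is there some q <= N with ||q*alpha_j|| < psi(q)**(i_j) for all j?"""
    return any(all(dist_to_int(q * a) < psi(q) ** i
                   for a, i in zip(alpha, weights))
               for q in range(1, N + 1))

random.seed(0)
points = [(random.random(), random.random()) for _ in range(500)]
for N in (10, 100, 1000):
    frac = sum(hit(p, (1/3, 2/3), N) for p in points) / len(points)
    print(N, frac)
\end{verbatim}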
Analogously to the non-weighted case we can introduce the problem of twisted Diophantine Approximation. For any fixed $\balpha\in\RR^n$, let
\begin{equation*}
W_n^{\balpha}(\bi,\psi)=\left\{\bbeta\in {\rm I}^n: \max\limits_{1\leq j\leq n}\psi(q)^{-i_j}\parallel q\alpha_j-\beta_j\parallel<1\text{ infinitely often}\right\}
\end{equation*}
and
\begin{equation*}
W_n^{\times}(\bi,\psi)=\left\{\balpha\in{\rm I}^n: \lambda_n\left(W_n^{\balpha}(\bi,\psi)\right)=1\right\}.
\end{equation*}
As for the previous results, the Borel--Cantelli Lemma implies that $\lambda_n(W_n^{\balpha}(\bi,\psi))=0$ if $\sum_{q\in\NN}\psi(q)<\infty$, independent of the choice of $\balpha$. As for non-weighted approximation, it follows that in this case the set $W_n^{\times}(\bi,\psi)$ will be empty. This gives us the convergence part for a Khintchine-type result. The divergence part was proved by Harrap in a recent paper \cite{Harraptwisted}.
\begin{samepage}
\begin{theorem}[Weighted Twisted Khintchine]
Let $\psi$ be an approximating function and $\bi$ an $n$-dimensional weight vector. Then we have
\begin{equation*}
\lambda_n(W_n^{\times}(\bi,\psi))=1\quad \text{ if }\quad \sum\limits_{q=1}^{\infty} \psi(q)=\infty
\end{equation*}
and
\begin{equation*}
W_n^{\times}(\bi,\psi)=\emptyset\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}\psi(q)<\infty.
\end{equation*}
\end{theorem}
\end{samepage}
In the same paper, Harrap also extended Kurzweil's Theorem to the more general setting of arbitrary weight vectors, giving us the following result.
\begin{samepage}
\begin{theorem}[Weighted Kurzweil]
Denote by $\Psi^{\infty}$ the set of all approximating functions $\psi$ such that $\sum_{q\in\NN}\psi(q)=\infty$. Then, for any weight vector $\bi\in{\rm I}^n$,
\begin{equation*}
\bigcap\limits_{\psi\in\Psi^{\infty}}W_n^{\times}(\bi,\psi)=\mathbf{Bad}_n(\bi).
\end{equation*}
\end{theorem}
\end{samepage}
The set $\mathbf{Bad}_n(\bi)$ mentioned here is the natural generalisation of $\mathbf{Bad}_n$ to the setting of weighted Diophantine Approximation. Corollary \ref{Cor:wDir} states that for any weight vector $\bi$ and $\balpha\in\RR^n$ there are infinitely many integers $q$ satisfying
\begin{equation}\label{Eqn:WBad}
\norm{q\alpha_j}<q^{-i_j},\quad (1\leq j\leq n).
\end{equation}
Analogously to the basic theory in Section \ref{Sec:Basic}, we can ask if we still get infinitely many solutions $q$ for \eqref{Eqn:WBad} if we multiply the right-hand side by a constant factor $c<1$. If this is not true for arbitrarily small constants, we call $\balpha$ \emph{$\bi$-badly approximable} or say $\balpha\in \mathbf{Bad}_n(\bi)$. More formally,
\begin{equation*}
\balpha\in\mathbf{Bad}_n(\bi)\ \Leftrightarrow\ \liminf\limits_{q\rightarrow\infty}\max\limits_{1\leq j\leq n}q^{i_j}\norm{q\alpha_j}>0.
\end{equation*}
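For a concrete, purely numerical impression of this definition in the unweighted one-dimensional case, the following Python sketch (illustrative only) tracks $\min_{q\leq N}q\norm{q\alpha}$: for the golden ratio the quantity stabilises near $1/\sqrt{5}$, reflecting that it is badly approximable, whereas for $e$, whose continued fraction has unbounded partial quotients, it keeps creeping downwards.
\begin{verbatim}
# Numerical sketch of the liminf in the definition of Bad (n = 1, i = 1).
import math

def dist_to_int(x):
    return abs(x - round(x))

def min_q_norm(alpha, N):
    """min over 1 <= q <= N of q*||q*alpha||, a finite-range proxy for the
    liminf; for irrational alpha it is bounded away from 0 precisely when
    alpha is badly approximable."""
    return min(q * dist_to_int(q * alpha) for q in range(1, N + 1))

golden = (1 + math.sqrt(5)) / 2
for N in (10**2, 10**4, 10**6):
    print(N, min_q_norm(golden, N), min_q_norm(math.e, N))
\end{verbatim}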
Of course, the set $\mathbf{Bad}_n(1/n,\dots,1/n)$ is simply the previously introduced $\mathbf{Bad}_n$. As one would expect, these more general sets $\mathbf{Bad}_n(\bi)$ have many of the properties of $\mathbf{Bad}_n$. As in the non-weighted case, we can use Theorem \ref{Thm:WeightedKhin} to show that $\lambda_n(\mathbf{Bad}_n(\bi))=0$ for any weight vector $\bi\in\RR^n$. The proof is completely analogous to Theorem \ref{Thm:DimBad} and will be skipped here. Regarding the dimension theory, Pollington and Velani showed in \cite{PolVel} that $\dim\mathbf{Bad}_n(\bi)=n$ for any weight vector $\bi$. Special interest has been taken in the intersection of sets of the form $\mathbf{Bad}_2(\bi)$, in particular the following famous conjecture of Schmidt \cite{SchmidtBad}.
\begin{Conjecture}[Schmidt, 1983]\label{ConSchmidt}
Let $i,j$ be in ${\rm I}$ with $i+j=1$. Then,
\begin{equation}\label{Eqn:SchmidtCon}
\mathbf{Bad}_2(i,j)\cap\mathbf{Bad}_2(j,i)\neq\emptyset.
\end{equation}
\end{Conjecture}
\begin{remark*}
To be exact, Schmidt formulated Conjecture \ref{ConSchmidt} for the pair $i=1/3$ and $j=2/3$, but, of course, the question is relevant for any choice of weight vector.
\end{remark*}
The proof of Schmidt's Conjecture was part of a fairly recent paper by Badziahin, Pollington and Velani \cite{Badziahin}. In fact, they established the following much more general result.
\begin{theorem}\label{Thm:Badziahin}
Let $(i_k,j_k)_{k\in\NN}$ be a sequence of two-dimensional weight vectors in ${\rm I}^2$ and assume that
\begin{equation}\label{Eqn:BPV1}
\liminf\limits_{k\rightarrow\infty}\min\{i_k,j_k\}>0.
\end{equation}
Then,
\begin{equation}\label{Eqn:Badziahin}
\bigcap\limits_{k=1}^{\infty}\mathbf{Bad}_2(i_k,j_k)\neq\emptyset.
\end{equation}
\end{theorem}
In particular, condition \eqref{Eqn:BPV1} is trivially satisfied for any finite sequence of weight vectors, which proves \eqref{Eqn:SchmidtCon}. It was later shown by An that \eqref{Eqn:Badziahin} still holds without requiring condition \eqref{Eqn:BPV1} \cite{An}. He actually proved the stronger statement that $\mathbf{Bad}_2(i,j)$ is \textit{winning} in the sense of Schmidt games. Theorem \ref{Thm:Badziahin} was subsequently extended to arbitrary dimensions by Beresnevich \cite{Berschmidt}. As in dimension two, the higher dimensional analogue originally required a condition on the weights. This condition was removed by Yang \cite{Yang}. However, the question of winning is still open in higher dimensions. Schmidt's Conjecture is closely related to one of the most famous and deepest open problems in Diophantine approximation.
\begin{Conjecture}[Littlewood, 1930s]
For any pair $(\alpha,\beta)\in{\rm I}^2$,
\begin{equation}\label{Eqn:LW}
\liminf\limits_{q\rightarrow\infty}q\norm{q\alpha}\norm{q\beta}=0.
\end{equation}
\end{Conjecture}
It is easily seen that a counterexample to Schmidt's Conjecture would have implied the truth of Littlewood's Conjecture. Indeed, assume there exists a weight vector $(i,j)$ such that
\begin{equation*}
\mathbf{Bad}_2(i,j)\cap\mathbf{Bad}_2(j,i)=\emptyset.
\end{equation*}
Then, any element of ${\rm I}^2$ is either not contained in $\mathbf{Bad}_2(i,j)$ or not in $\mathbf{Bad}_2(j,i)$. Fix $(\alpha,\beta)\in{\rm I}^2$ and assume without loss of generality that $(\alpha,\beta)\notin\mathbf{Bad}_2(i,j)$. This means that for any $\varepsilon>0$, there are infinitely many $q\in\NN$ satisfying
\begin{equation*}
\norm{q\alpha}<\varepsilon q^{-i}\quad\text{ and }\quad \norm{q\beta}<\varepsilon q^{-j}.
\end{equation*}
Since $i+j=1$, it follows that there are infinitely many $q\in\NN$, for which
\begin{align*}
q\norm{q\alpha}\norm{q\beta}&<q\cdot\varepsilon q^{-i}\cdot\varepsilon q^{-j}\\
&=\varepsilon^2
\end{align*}
and since $\varepsilon$ was chosen arbitrarily small, this implies that
\begin{equation*}
\liminf\limits_{q\rightarrow\infty}q\norm{q\alpha}\norm{q\beta}=0.
\end{equation*}
Even more generally, suppose that
\begin{equation}\label{Eqn:LWtrue}
\bigcap\limits_{(i,j)}\mathbf{Bad}_2(i,j)=\emptyset,
\end{equation}
where the intersection runs over all two-dimensional weight vectors. By the same argument this would immediately imply Littlewood's Conjecture. However, there is no indication why \eqref{Eqn:LWtrue} should be true and proving its negation would not shed any new light on Littlewood's Conjecture.
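Although nothing can be proved this way, it is instructive to look at the Littlewood quantity numerically. The following Python sketch (illustrative only) tracks the running minimum of $q\norm{q\alpha}\norm{q\beta}$ for the badly approximable pair $(\sqrt{2},\sqrt{3})$; the running minimum appears to decrease slowly, in line with what Littlewood's Conjecture predicts.
\begin{verbatim}
# Numerical sketch of q*||q*alpha||*||q*beta|| for (alpha, beta) = (sqrt 2, sqrt 3).
import math

def dist_to_int(x):
    return abs(x - round(x))

alpha, beta = math.sqrt(2), math.sqrt(3)
running_min = float("inf")
for q in range(1, 10**6 + 1):
    running_min = min(running_min,
                      q * dist_to_int(q * alpha) * dist_to_int(q * beta))
    if q in (10**2, 10**4, 10**6):
        print(q, running_min)
\end{verbatim}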
Despite having been studied for many decades, Littlewood's Conjecture has not been solved yet. However, many partial results have been obtained. It is easily seen that for any pair $(\alpha,\beta)$ not satisfying \eqref{Eqn:LW}, both $\alpha$ and $\beta$ must be badly approximable. Suppose $\alpha\notin\mathbf{Bad}$. Then, for any $\varepsilon>0$, there exist infinitely many $q\in\NN$ such that $q\norm{q\alpha}<\varepsilon$, and since $\norm{q\beta}<1$ for any $q\in\NN$, it follows that
\begin{equation*}
q\norm{q\alpha}\norm{q\beta}<q\norm{q\alpha}<\varepsilon
\end{equation*}
infinitely often, which implies \eqref{Eqn:LW}. Hence, we can restrict our attention to badly approximable pairs, in which case the following statement was proved by Pollington and Velani \cite{Pollington}.
\begin{theorem}\label{Thm:PV}
Let $\alpha\in\mathbf{Bad}$. Then
\begin{equation*}
\dim\left(\left\{\beta\in\mathbf{Bad}:\liminf\limits_{q\rightarrow\infty}q\log q\norm{q\alpha}\norm{q\beta}=0\right\}\right)=1.
\end{equation*}
\end{theorem}
Note that the points in Theorem \ref{Thm:PV} satisfy an even stronger property than \eqref{Eqn:LW}. In fact, this is true for any $\alpha$ and almost all $\beta\in{\rm I}$. Khintchine's Theorem tells us that
\begin{equation*}
\liminf\limits_{q\rightarrow\infty}q\log q\norm{q\beta}=0
\end{equation*}
for almost all $\beta\in{\rm I}$ and $\norm{q\alpha}$ is bounded from above by $1$ for any $q\in\NN$.
Regarding potential counterexamples to Littlewood's Conjecture, Einsiedler, Katok and Lindenstrauss proved the following fundamental result \cite{EKL}.
\begin{theorem}
\begin{equation*}
\dim\left(\left\{(\alpha,\beta)\in{\rm I}^2:\liminf\limits_{q\rightarrow\infty}q\norm{q\alpha}\norm{q\beta}>0\right\}\right)=0.
\end{equation*}
\end{theorem}
\begin{remark*}
An analogue to Littlewood's Conjecture can be formulated for arbitrary dimensions. Namely, given a point $\balpha=(\alpha_1,\dots,\alpha_n)\in{\rm I}^n$, we can ask whether
\begin{equation}\label{Eqn:LWhigh}
\liminf\limits_{q\rightarrow\infty}q\ \prod\limits_{j=1}^n\ \norm{q\alpha_j}=0.
\end{equation}
However, this question is of limited interest when $n\neq 2$. In the one-dimensional case, the set of exceptions to \eqref{Eqn:LWhigh} is simply equal to $\mathbf{Bad}$. Furthermore, when $n\geq 3$, any counterexample to \eqref{Eqn:LWhigh} must satisfy
\begin{equation*}
\liminf\limits_{q\rightarrow\infty}q\norm{q\alpha_j}\norm{q\alpha_k}>0
\end{equation*}
whenever $j\neq k$ and thus proving Littlewood's Conjecture would immediately imply the higher-dimensional analogue.
\end{remark*}
\section{Outlook}
In Chapter \ref{ubiquity} we will introduce the concept of ubiquity and give proofs for the classical theorems of Khintchine and Jarn\'ik. Moreover, this will provide us with tools to advance these basic results to the more specific setting of affine coordinate subspaces.
The Khintchine--Jarn\'ik Theorem is very powerful in the sense that it allows us to determine $\cH^s(W_n(\psi))$ for any given approximating function $\psi$ and $s\in(0,n]$. However, it does not reveal which exact points are $\psi$-approximable or not. In particular, if we consider a set $A\subset {\rm I}^n$ with $\cH^s(A)=0$, knowing $\cH^s(W_n(\psi))$ will not tell us anything about the intersection $W_n(\psi)\cap A$. This set could be all of $A$, a non-trivial subset of $A$, or even the empty set. In Chapter \ref{fibres} we will develop a theory for the case when the set in question is an affine coordinate subspace, i.e. a subset of ${\rm I}^n$ for which one or more coordinates are fixed. Theorems \ref{thm:subspaces} and \ref{thm:lines} are the main results for the Khintchine-type theory and Theorem \ref{HDfibres} is a partial analogue regarding the Hausdorff theory. These results strengthen the classical theory in a new and natural manner.
In Chapter \ref{dirichlet} we will define $\bi$-Dirichlet improvable vectors as the points in $\RR^n$ for which Theorem \ref{wDir} still holds if the right-hand side of \eqref{Eqn:wDir} is multiplied by a constant $c<1$. If this is possible for $c$ arbitrarily small, we call a point $\bi$-singular. Much research has been done when $\bi=(1/n,\dots,1/n)$, but less so in the weighted case. We will show that $\bi$-badly approximable vectors are $\bi$-Dirichlet improvable, thus extending a result by Davenport and Schmidt to the weighted case. The second main result of Chapter \ref{dirichlet} shows that non-$\bi$-singular vectors $\balpha$ are well suited for weighted twisted approximation in the following sense: For any $\varepsilon>0$ and $\psi(q)=\varepsilon q^{-1}$, the set $W_n^{\balpha}(\bi,\psi)$ has full Lebesgue measure $\lambda_n$. This generalises a theorem of Shapira.
\chapter{Ubiquity Theory}\label{ubiquity}
This chapter serves to introduce the concept of ubiquity and how it can be used to prove results similar to the theorems of Khintchine and Jarn\'ik. In fact, we are able to derive those classical statements directly from results in ubiquity theory, see Section \ref{Sec:ClassicalResults}. The main theorems in Section \ref{Sec:MainThms} are formulated for the general setting of a compact metric space equipped with a probability measure. This includes the typical case of the unit interval ${\rm I}^n$ equipped with the Lebesgue measure $\lambda_n$, but it also shows how Diophantine approximation can be interpreted in a wider sense. On the other hand, we can use ubiquity theory to gain new insight exactly for this very specific classical setting, as it allows us to work around the limitations of the Khintchine--Jarn\'ik Theorem, see Chapter \ref{fibres}.
Throughout this chapter we will completely follow the theory presented in \cite{limsup}, which means that all the definitions and results including some proofs are taken from \cite{limsup} without any major modifications. However, we omit most of the proofs and we only adopt those parts which are relevant to our problems. While developing the theory we show how it connects to the main concepts and results of Chapter \ref{Ch:Introduction}. For simplicity we limit the strict derivation of these illustrations to the one-dimensional case. Still, the multi-dimensional case can usually be done in an analogous fashion through minor adjustments, as we remark throughout the text. The content of this chapter is based on \cite[Chapter 3]{master} but has been fully revised and heavily modified.
\section{The basic problem}
Let $(\Omega,d)$ be a compact metric space equipped with a non-atomic probability measure $m$. An \textit{atom} is a measurable set of positive measure which contains no measurable subset of strictly smaller but still positive measure, and \textit{non-atomic} means that there exist no atoms in $\Omega$ with respect to the measure $m$. Let
\begin{equation*}
\mathcal{R}:=\{R_{\alpha}\subseteq\Omega:\alpha\in J\}
\end{equation*}
be a collection of subsets of $\Omega$ indexed by an infinite but countable set $J$, called the \textit{resonant sets}. Furthermore, let
\begin{equation*}
\beta:J\rightarrow\mathbb{R}^+,\quad \alpha\mapsto\beta_{\alpha}
\end{equation*}
be a positive function on $J$, which we will refer to as the \textit{weight function}. We further require that the set $\{\alpha\in J:\beta_{\alpha}<k\}$ is finite for every positive $k$. For a subset $A$ of $\Omega$, we define
\begin{equation*}
\Delta(A,\delta):=\{x\in\Omega:\ d(x,A)<\delta\}
\end{equation*}
where $d(x,A)=\inf\{d(x,a):a\in A\}$. Hence, $\Delta(A,\delta)$ is the $\delta$-neighbourhood of $A$. Given a decreasing, positive valued function $\varphi:\mathbb{R}^+\rightarrow\mathbb{R}^+$, let
\begin{equation*}
\Lambda(\varphi):=\{x\in\Omega:x\in\Delta(R_{\alpha},\varphi(\beta_{\alpha}))\text{ for infinitely many }\alpha\in J\}.
\end{equation*}
The definition of $\Lambda(\varphi)$ reveals its nature as a $\limsup$ set. This is made more explicit by the following construction. For $n\in\mathbb{N}$ and a fixed $k>1$, we define
\begin{equation*}
\Delta(\varphi,n):=\bigcup\limits_{\alpha\in J_k(n)}\Delta(R_{\alpha},\varphi(\beta_{\alpha})),
\end{equation*}
where
\begin{equation*}
J_k(n):=\{\alpha\in J:k^{n-1}<\beta_{\alpha}\leq k^n\}.
\end{equation*}
By the condition on the weight function $\beta$, the set $J_k(n)$ is finite for any given values of $k$ and $n$. Thus, $\Lambda(\varphi)$ is the set of points in $\Omega$ lying in infinitely many of the sets $\Delta(\varphi,n)$ and we get the identity
\begin{equation*}
\Lambda(\varphi)=\limsup\limits_{n\rightarrow\infty}\Delta(\varphi,n)=\bigcap\limits_{m=1}^{\infty}\bigcup\limits_{n=m}^{\infty}\Delta(\varphi,n).
\end{equation*}
As in Chapter \ref{Ch:Introduction}, we are now interested in determining the measure theoretic properties of the set $\Lambda(\varphi)$. Since we are dealing with a $\limsup$ set in a probability space, the Borel--Cantelli Lemma (see Lemma \ref{BoCa}) directly implies that $m\left(\Lambda(\varphi)\right)=0$ if
\begin{equation}\label{Eqn:UbiConv}
\sum\limits_{n=1}^{\infty}m\left(\Delta(\varphi,n)\right)<\infty.
\end{equation}
Obtaining a converse statement is much more intricate. This is done by the first main theorem in Section \ref{Sec:MainThms} under mild conditions on the measure $m$. Assuming a diverging sum condition as well as a `global ubiquity' hypothesis, Theorem \ref{mT1} shows that $\Lambda(\varphi)$ has strictly positive $m$ measure. Moreover, replacing `global ubiquity' by the stronger `local ubiquity' condition implies that $\Lambda(\varphi)$ has full measure, which gives us a Khintchine-type statement for this more general setting.
Regarding the case when \eqref{Eqn:UbiConv} is satisfied, the $\limsup$ set $\Lambda(\varphi)$ is a null-set with respect to the ambient measure $m$. However, as in Section \ref{Sec:Hausdorff}, we can rely on the Hausdorff measures $\cH^f$ to obtain a finer means for investigating the size of
$\Lambda(\varphi)$. The problem of determining $\cH^f(\Lambda(\varphi))$ is much more subtle than the one regarding $m$-measure and imposes stronger conditions on the measure $m$ as well as mild conditions on the dimension function $f$. Assuming an `$f$-volume' divergent sum condition and a `local ubiquity' hypothesis, Theorem \ref{hT1} implies that $\cH^f(\Lambda(\varphi))=\infty$. It is often the case that one can obtain a converse statement where convergence of the `$f$-volume' sum implies that $\cH^f(\Lambda(\varphi))=0$. Then, $\cH^f(\Lambda(\varphi))$ satisfies a `zero-infinity' law. In particular, this is satisfied for the Hausdorff $s$-measures $\cH^s$, allowing us to deduce the Hausdorff dimension of $\Lambda(\varphi)$.
As a particular example, our set of interest, the set $W_n(\psi)$ of $\psi$-approximable points in ${\rm I}^n$ can be expressed in the form $\Lambda(\varphi)$ with $\varphi(q)=\psi(q)/q$ by choosing
\begin{equation}\label{Eqn:list}
\begin{aligned}
\Omega&=[0,1]^n,\quad J=\{(\bp,q)\in\mathbb{Z}^n\times\NN:0\leq |\bp|\leq q\},\\
\alpha&=(\bp,q)\in J,\quad \beta_{\alpha}=q,\quad\text{and}\quad R_{\alpha}=\frac{\bp}{q}.
\end{aligned}
\end{equation}
As usual, $d$ is the metric induced by the $\max$-norm, and we get
\begin{equation*}
\Delta(R_{\alpha},\varphi(\beta_{\alpha}))=B\left(\frac{\bp}{q},\varphi(q)\right).
\end{equation*}
Hence, in this case the resonant sets are rational points $\bp/q\in\QQ^n$ and the associated sets $\Delta(R_{\alpha},\varphi(\beta_{\alpha}))$ are balls centred at those points. It follows that
\begin{equation*}
\Delta(\varphi,m)=\bigcup\limits_{k^{m-1}<q\leq k^m}\bigcup\limits_{0\leq |\bp|\leq q}B\left(\frac{\bp}{q},\varphi(q)\right)
\end{equation*}
and
\begin{equation*}
W_n(\psi)=\limsup\limits_{m\rightarrow\infty}\Delta(\varphi,m).
\end{equation*}
This is basically the same characterisation as obtained in Chapter \ref{Ch:Introduction}. Here we just take the union over all $q$ in the range $(k^{m-1},k^m]$ in one step instead of doing it for each $q$ separately. The slight inconvenience of having to deal with the function $\psi(q)/q$ instead of $\psi$ itself is due to our definition of $W_n(\psi)$. The statements in ubiquity theory are more easily formulated using conditions of the form $|\balpha-\bp/q|$ while we generally prefer the notation $\norm{q\balpha-\bp}$. However, this is easily adjusted.
Thus, the basic problems in simultaneous Diophantine approximation of determining $\lambda_n(W_n(\psi))$ or $\cH^s(W_n(\psi))$ are covered by this more general problem of investigating the measure theoretic properties of a $\limsup$ set $\Lambda(\varphi)$.
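As a small illustrative companion (not from the cited source), the following Python sketch lists, for the sample point $x=\sqrt{2}-1$ and $\psi(q)=1/q$ in dimension $n=1$, those indices $m$ for which $x$ lies in the block $\Delta(\varphi,m)$ with $k=2$; since $W(\psi)={\rm I}$ for this $\psi$ by Dirichlet's Theorem, the point belongs to infinitely many blocks, and the finite computation already produces a long list.
\begin{verbatim}
# Sketch of the limsup structure: which blocks Delta(phi, m) contain x?
# Here phi(q) = psi(q)/q with psi(q) = 1/q, and k = 2.
def psi(q):
    return 1.0 / q

def in_block(x, m, k=2):
    # x lies in Delta(phi, m) iff ||q*x|| < psi(q) for some k^(m-1) < q <= k^m
    return any(abs(x * q - round(x * q)) < psi(q)
               for q in range(k ** (m - 1) + 1, k ** m + 1))

x = 2 ** 0.5 - 1
print([m for m in range(1, 16) if in_block(x, m)])
\end{verbatim}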
\section{Ubiquitous systems}
Let $l=(l_n)_{n\in\NN}$ and $u=(u_n)_{n\in\NN}$ be positive increasing sequences such that eventually
\begin{equation*}
l_n<l_{n+1}\leq u_n\quad \text{ and }\quad\lim\limits_{n\rightarrow\infty}l_n=\infty.
\end{equation*}
For obvious reasons, $l$ and $u$ will be referred to as the \textit{lower sequence} and \textit{upper sequence}, respectively. Now, given a positive, decreasing function $\varphi$, we define
\begin{equation*}
\Delta_l^u(\varphi,n)=\bigcup\limits_{\alpha\in J_l^u(n)}\Delta(R_{\alpha},\varphi(\beta_{\alpha})),
\end{equation*}
where
\begin{equation*}
J_l^u(n)=\{\alpha\in J:l_n<\beta_{\alpha}\leq u_n\}.
\end{equation*}
Again, the condition bestowed upon the weight function $\beta_{\alpha}$ provides finiteness of any fixed set $J_l^u(n)$ and, since $l_n$ tends to infinity, we get
\begin{equation*}
\Lambda(\varphi)=\limsup\limits_{n\rightarrow\infty}\Delta_l^u(\varphi,n)=\bigcap\limits_{m=1}^{\infty}\bigcup\limits_{n=m}^{\infty}\Delta_l^u(\varphi,n),
\end{equation*}
independent of the choice of sequences $l$ and $u$.
Recall that our space $\Omega$ is equipped with a probability measure $m$. Throughout the whole discussion we need to assume some conditions on the measure $m$. Firstly, any open ball centred at an arbitrary point in $\Omega$ has strictly positive measure and, secondly, the measure $m$ is \textit{doubling}. That means there exists a constant $C\geq 1$ such that
\begin{equation*}
m(B(x,2r))\leq Cm(B(x,r))
\end{equation*}
for any point $x\in\Omega$ and any radius $r>0$. This allows us to maintain control over the measure while shrinking or blowing up balls in $\Omega$. Furthermore, it implies that
\begin{equation*}
m(B(x,tr))\leq C(t)m(B(x,r))
\end{equation*}
for any $t>1$, where $C(t)$ is an increasing function, which does not depend on $x$ or $r$ and satisfies $C(2^k)\leq C^k$. In the case that $m$ is doubling we will also refer to the measure space $(\Omega,d,m)$ as doubling.
We need to introduce two more properties of the measure $m$, which are not very restrictive but will be needed in the problems of determining $m(\Lambda(\varphi))$ and $\mathcal{H}^f(\Lambda(\varphi))$, respectively. For the first problem, we want to assure that balls of the same radius have roughly the same measure when they are centred at points contained in resonant sets $R_{\alpha}$ with all $\alpha$ belonging to the same set $J_l^u(n)$ for some $n$.
\textbf{(M1) } There exist constants $a, b, r_0>0$, which only depend on the sequences $l$ and $u$, such that for any $c\in R_{\alpha}, c'\in R_{\alpha'}$ with $\alpha, \alpha'\in J_l^u(n)$ and $r\leq r_0$
\begin{equation*}
a\leq\frac{m(B(c,r))}{m(B(c',r))}\leq b.
\end{equation*}
When considering the Hausdorff measure problem we need a stronger condition. Namely, we want that the measure of any ball centred at a point in $\Omega$ is proportional to a fixed power of its radius.
\textbf{(M2) } There exist constants $a, b, r_0>0$ and $\delta\geq 0$ such that for any $x\in\Omega$ and $r\leq r_0$
\begin{equation*}
ar^{\delta}\leq m(B(x,r))\leq br^{\delta}.
\end{equation*}
Such measures are often referred to as \textit{Ahlfors regular} measures. Without loss of generality we can choose $0<a<1<b$. Obviously, the condition (M2) implies (M1) with constants $a/b$, $b/a$ and $r_0$, independent of the choice of $l$ and $u$. Moreover, the condition (M2) implies that $\dim\Omega=\delta$.
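For Lebesgue measure on $\Omega=[0,1]$ the condition $\mathrm{(M2)}$ holds with $\delta=1$: boundary effects force $a<b$, but the ratio $m(B(x,r))/r$ always stays between $1$ and $2$ for $r\leq 1/2$. The following short Python check (purely illustrative) confirms this.
\begin{verbatim}
# (M2) for Lebesgue measure on [0,1]: m(B(x,r))/r lies in [1, 2] for r <= 1/2.
def ball_measure(x, r):
    return min(1.0, x + r) - max(0.0, x - r)   # length of B(x,r) inside [0,1]

r = 0.1
ratios = [ball_measure(i / 1000, r) / r for i in range(1001)]
print(min(ratios), max(ratios))
\end{verbatim}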
\begin{remark*}
In addition to (M1) and (M2), the original paper \cite{limsup} also introduces the so-called intersection conditions (IC). These conditions control the intersection of resonant sets $R_{\alpha}$ with balls centred at points contained in resonant sets. The intersection conditions are trivially satisfied when the resonant sets are points. We are only interested in this case and thus will not state the conditions (IC). It is worth noting that this also simplifies the conditions within the main theorems and their corollaries in Section \ref{Sec:MainThms}.
\end{remark*}
Now all the preliminaries are given and we are able to define the notion of a ubiquitous system. For this, let $\rho$ be a function with $\lim\limits_{r\rightarrow\infty}\rho(r)=0$ and let
\begin{equation*}
\Delta_l^u(\rho,n)=\bigcup\limits_{\alpha\in J_l^u(n)}\Delta(R_{\alpha},\rho(u_n)).
\end{equation*}
In accordance with the following definitions, $\rho$ will be called the \textit{ubiquitous function}. Let $B=B(x,r)$ be an arbitrary ball with centre $x$ in $\Omega$ and radius $r\leq r_0$, where $r_0$ is given by either (M1) or (M2). Suppose there exist a function $\rho$, sequences $l$ and $u$ and an absolute constant $\kappa>0$ such that
\begin{equation}\label{ubiq.cond}
m(B\cap\Delta_l^u(\rho,n))>\kappa m(B)\quad \text{ for } n\geq n_0(B).
\end{equation}
Then the pair $(\mathcal{R},\beta)$ is called a \textit{local $m$-ubiquitous system relative to $(\rho,l,u)$.} Suppose there exist a function $\rho$, sequences $l$ and $u$ and an absolute constant $\kappa>0$ such that for $n\geq n_0$, \eqref{ubiq.cond} is satisfied for $B=\Omega$. Then the pair $(\mathcal{R},\beta)$ is called a \textit{global $m$-ubiquitous system relative to $(\rho,l,u)$.}
Since $m$ is a probability measure, in the global case the condition \eqref{ubiq.cond} simply reduces to $m(\Delta_l^u(\rho,n))\geq\kappa$. Here, all we require is that the sets $\Delta_l^u(\rho,n)$, $n\geq n_0$, cover a certain proportion of the whole space $\Omega$ with respect to the measure $m$. In the local case the same property is required to hold for any small enough ball. Clearly this condition is much stronger and it can be easily seen that local ubiquity implies global ubiquity. Simply take an arbitrary ball $B$ centred at $x\in\Omega$ with radius $\leq r_0$. Then for $n$ sufficiently large
\begin{equation*}
m(\Delta_l^u(\rho,n))\geq m(B\cap \Delta_l^u(\rho,n))\geq \kappa m(B)\eqqcolon\kappa_1>0.
\end{equation*}
Hence, local ubiquity with constant $\kappa$ implies global ubiquity with a constant $\kappa_1$, $0<\kappa_1\leq \kappa$. The converse is not true in general. However, there is a simple and very useful condition under which global ubiquity implies local ubiquity. Namely, if
\begin{equation*}
\lim\limits_{n\rightarrow\infty}m(\Delta_l^u(\rho,n))=1=m(\Omega).
\end{equation*}
This can be seen as follows. Suppose we have a global $m$-ubiquitous system and let $B\subseteq\Omega$ be an arbitrary ball with $m(B)=\varepsilon>0$ (the statement holds trivially for any null set). For $n$ sufficiently large, we get
\begin{equation*}
m(\Delta_l^u(\rho,n))>m(\Omega)-\frac{\varepsilon}{2},
\end{equation*}
and thus
\begin{equation*}
m(B\cap\Delta_l^u(\rho,n))>\frac{\varepsilon}{2},
\end{equation*}
which shows local $m$-ubiquity.
To establish the inequality \eqref{ubiq.cond} in either case of ubiquity, we do not need the presence of the lower sequence $l$. To show this, assume for $n\geq n_0$ the modified inequality
\begin{equation}\label{mod.ubiq.cond}
m\left(B\cap\bigcup\limits_{\alpha\in J:\beta_{\alpha}\leq u_n}\Delta(R_{\alpha},\rho(u_n))\right)\geq\kappa m(B)
\end{equation}
is satisfied. Now let $t\in\mathbb{N}$. Since $\lim\limits_{r\rightarrow\infty}\rho(r)=0$ there exists $n_t\in\mathbb{N}$ such that for $n\geq n_t$ we have
\begin{equation*}
m\left(B\cap\bigcup\limits_{\alpha\in J:\beta_{\alpha}\leq t}\Delta(R_{\alpha},\rho(u_n))\right)<\frac{1}{2}\kappa m(B).
\end{equation*}
Without loss of generality we have $n_{t+1}\geq n_t+1$ and hence for every $n\in\mathbb{N}$ there is exactly one $t=t(n)$ such that $n$ lies in the interval $[n_{t(n)},n_{t(n)+1})$. Thus the lower sequence $l$ defined by $l_n= t(n)$ is increasing and diverging and for $n\geq n_0$ we have
\begin{equation*}
m(B\cap \Delta_l^u(\rho,n))=m\left(B\cap\bigcup\limits_{\alpha\in J:l_n<\beta_{\alpha}\leq u_n}\Delta(R_{\alpha},\rho(u_n))\right)\geq\frac{1}{2}\kappa m(B)
\end{equation*}
which shows $(\mathcal{R},\beta)$ is a local $m$-ubiquitous system relative to $(\rho,l,u)$. Hence whenever \eqref{mod.ubiq.cond} is satisfied we know there exists a sequence $l$ such that we get ubiquity relative to $(\rho,l,u)$. It is also worth noting and easy to see that ubiquity relative to $(\rho,l,u)$ implies ubiquity relative to $(\rho,l,s)$ for any subsequence $s$ of $u$.
In the case where we deal with the set $W(\psi)$ of one-dimensional $\psi$-approximable points, the considered measure $m$ is simply the one-dimensional Lebesgue measure $\lambda$, which satisfies condition (M2) with $\delta=1$. A direct application of Dirichlet's Theorem (see Theorem \ref{Dir}) yields the following statement.
\begin{lemma}\label{K-J-lemma}
There exists a constant $k>1$ such that the pair $(\mathcal{R},\beta)$ defined in \eqref{Eqn:list} is a local $m$-ubiquitous system relative to $(\rho,l,u)$ for $l_{n+1}=u_n=k^n$ and $\rho:t\rightarrow kt^{-2}$.
\end{lemma}
\begin{proof}
Let $A=[a,b]\subset {\rm I}=[0,1]$. By Dirichlet's Theorem, for any $x\in A$ and for any $k^n>1$ there are coprime integers $p$ and $q$ with $1\leq q\leq k^n$ such that
\begin{equation*}
\left\lvert x-\frac{p}{q}\right\rvert <\frac{1}{qk^n}.
\end{equation*}
Clearly, $p/q$ has to lie in the interval $\left[a-\frac{1}{q},b+\frac{1}{q}\right]$ which implies
\begin{equation*}
aq-1\leq p\leq bq+1.
\end{equation*}
Hence, for a fixed $q$ there exist at most $\lambda(A)q+3$ possible values of $p$ satisfying the above inequality. For $n$ large enough it follows that
\begin{align*}
\lambda\left(A\cap\bigcup\limits_{q\leq k^{n-1}}\bigcup\limits_{0\leq p\leq q} B\left(\frac{p}{q},\frac{1}{qk^n}\right)\right)&\leq\sum\limits_{q\leq k^{n-1}}\frac{2}{qk^n}\left(\lambda(A)q+3\right)\\[1ex]
&=2\sum\limits_{q\leq k^{n-1}}\left(\frac{\lambda(A)}{k^n}+\frac{3}{qk^n}\right)\\[1ex]
&=\frac{2}{k}\lambda(A)+6\sum\limits_{q\leq k^{n-1}}\frac{1}{qk^n}\\[1ex]
&\leq\frac{3}{k}\lambda(A)
\end{align*}
since the last sum tends to zero for $n\rightarrow\infty$. If we take $k\geq6$, we get
\begin{equation*}
\lambda\left(A\cap\bigcup\limits_{k^{n-1}<q\leq k^n}\bigcup\limits_{0\leq p\leq q} B\left(\frac{p}{q},\frac{k}{k^{2n}}\right)\right)\geq \lambda(A)-\frac{3}{k}\lambda(A)\geq\frac{1}{2}\lambda(A).
\qedhere
\end{equation*}
\end{proof}
The divergence parts of the theorems of Jarn\'ik and Khintchine will be an immediate consequence of Lemma \ref{K-J-lemma} and the main theorems of the ubiquity theory, which we state in the next section.
\begin{remark*}
In the $n$-dimensional case, when dealing with the $n$-dimensional Lebesgue measure $\lambda_n$ of $W_n(\psi)$, the condition $\mathrm{(M2)}$ is satisfied with $\delta=n$. The same proof as above with adjusted constants tells us that $(\mathcal{R},\beta)$ is a local $\lambda_n$-ubiquitous system relative to $(\rho,l,u)$ for $l_{m+1}=u_m=k^m$ and $\rho:t\rightarrow kt^{-\left(1+\frac{1}{n}\right)}$, where the difference in exponent is due to Dirichlet's Theorem.
\end{remark*}
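As a rough empirical counterpart to Lemma \ref{K-J-lemma} (illustrative only, with ad hoc names), the following Python sketch estimates by Monte Carlo the proportion of an interval $A$ covered by the balls $B\left(p/q,k/k^{2n}\right)$ with $k^{n-1}<q\leq k^n$; the printed estimates can be compared with the lower bound $\tfrac{1}{2}$ from the proof, which is only claimed there for $n$ large enough.
\begin{verbatim}
# Monte Carlo estimate of the covered proportion of A = [0.2, 0.5] by the
# balls B(p/q, k/k^(2n)) with k^(n-1) < q <= k^n, as in Lemma (K-J-lemma).
import random

def covered(x, n, k=6):
    for q in range(k ** (n - 1) + 1, k ** n + 1):
        # |x - p/q| < k/k^(2n) for the nearest p is the same as
        # ||q*x|| < q*k/k^(2n)
        if abs(x * q - round(x * q)) < q * k / k ** (2 * n):
            return True
    return False

random.seed(0)
a, b = 0.2, 0.5
for n in (2, 3):
    pts = [a + (b - a) * random.random() for _ in range(3000)]
    print(n, sum(covered(x, n) for x in pts) / len(pts))
\end{verbatim}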
\section{The main theorems}\label{Sec:MainThms}
Next we state the main theorems of ubiquity theory. While we do not present any proofs here, it is worth mentioning that the main parts of \cite{limsup} serve to prove those results. Once the theorems are established, both the theorems of Khintchine and Jarn\'ik are rather simple consequences, which illustrates the power of this theory.
We start with some notation. Let $m$ be a measure satisfying the condition (M1) with respect to the sequences $l$ and $u$. By $B_n(r)$ we denote a generic ball of radius $r$ centred at a point of a resonant set $R_{\alpha}$ with $\alpha\in J_l^u(n)$. The condition (M1) ensures that for any ball $B(c,r)$ with $c\in R_{\alpha}$ and $\alpha\in J_l^u(n)$ we have $m(B(c,r))\asymp m(B_n(r))$.
\begin{theorem}\label{mT1}
Let $(\Omega,d)$ be a compact metric space equipped with a probability measure $m$ satisfying condition $\mathrm{(M1)}$ with respect to lower and upper sequences $l$ and $u$. Suppose that $(\mathcal{R},\beta)$ is a global $m$-ubiquitous system relative to $(\rho,l,u)$ and that $\varphi$ is an approximating function. Furthermore, either assume that
\begin{equation*}
\limsup\limits_{n\rightarrow\infty}\frac{\varphi(u_n)}{\rho(u_n)}>0
\end{equation*}
or assume that both
\begin{equation*}
\sum\limits_{n=1}^{\infty}\frac{m(B_n(\varphi(u_n)))}{m(B_n(\rho(u_n)))}=\infty
\end{equation*}
and for $Q$ sufficiently large
\begin{equation*}
\sum\limits_{s=1}^{Q-1}\frac{1}{m(B_s(\rho(u_s)))}\sum\limits_{\substack{s+1\leq t\leq Q:\\ \varphi(u_s)<\rho(u_t)}}m(B_t(\varphi(u_t)))
\ll\left(\sum\limits_{n=1}^{Q}\frac{m(B_n(\varphi(u_n)))}{m(B_n(\rho(u_n)))}\right)^2.
\end{equation*}
Then, $m(\Lambda(\varphi))>0$. In addition, if any open subset of $\Omega$ is $m$-measurable and $(\mathcal{R},\beta)$ is locally $m$-ubiquitous relative to $(\rho,l,u)$, then $m(\Lambda(\varphi))=1$.
\end{theorem}
Before we are able to state the Hausdorff measure analogue of Theorem \ref{mT1} we need to introduce one more definition. Given a sequence $u$, a positive real-valued function $h$ is said to be \textit{$u$-regular} if there exists a constant $\lambda\in(0,1)$ such that for $n$ large enough the inequality
\begin{equation*}
h(u_{n+1})\leq\lambda h(u_n)
\end{equation*}
is satisfied where $\lambda$ is independent of $n$ but may depend on $u$. If $h$ is $u$-regular then the function $h$ is eventually strictly decreasing along the sequence $u$. Furthermore, $u$-regularity implies $s$-regularity for any subsequence $s$ of $u$.
\begin{theorem} \label{hT1}
Let $(\Omega,d)$ be a compact metric space equipped with a probability measure $m$ satisfying condition $\mathrm{(M2)}$. Suppose that $(\mathcal{R},\beta)$ is a locally $m$-ubiquitous system relative to $(\rho,l,u)$ and that $\varphi$ is an approximating function. Let $f$ be a dimension function such that $r^{-\delta}f(r)\rightarrow\infty$ as $r\rightarrow 0$ and such that $r^{-\delta}f(r)$ is decreasing. Let $g$ be the real, positive function given by
\begin{equation*}
g(r)= f(\varphi(r))\rho(r)^{-\delta},\quad \text{ and }\quad G=\limsup\limits_{n\rightarrow\infty} g(u_n).
\end{equation*}
\begin{itemize}
\item
Suppose that $G=0$ and that $\rho$ is $u$-regular. Then,
\begin{equation} \label{ineq10}
\mathcal{H}^f(\Lambda(\varphi))=\infty\quad \text{ if }\quad \sum\limits_{n=1}^{\infty}g(u_n)=\infty.
\end{equation}
\item
Suppose that $0<G\leq\infty$. Then, $\mathcal{H}^f(\Lambda(\varphi))=\infty$.
\end{itemize}
\end{theorem}
\begin{remark*}
Since the condition $\mathrm{(M2)}$ is independent of both the sequences $l$ and $u$ and, as shown above, the sequence $l$ is irrelevant for establishing ubiquity, results like Theorem \ref{hT1} do not require mention of the sequence $l$. However, the condition $\mathrm{(M1)}$ clearly depends on $l$ and $u$ and hence statements like Theorem \ref{mT1} rely on the used sequences and $l$ cannot be dropped from the preconditions.
\end{remark*}
\subsection{Corollaries}
In \cite{limsup}, multiple corollaries are stated for both Theorem \ref{mT1} and Theorem \ref{hT1}. These corollaries mainly illustrate how we get stronger results when certain restrictions hold. Most of the restrictions are naturally satisfied for typical applications, which makes the corollaries especially useful.
For Theorem \ref{mT1} we state a corollary which applies if the considered measure does not only satisfy condition (M1), but also condition (M2).
\begin{corollary} \label{cor2}
Let $(\Omega,d)$ be a compact metric space equipped with a probability measure $m$ satisfying condition $\mathrm{(M2)}$. Suppose that $(\mathcal{R},\beta)$ is a global $m$-ubiquitous system relative to $(\rho,l,u)$ and that $\varphi$ is an approximating function. Moreover, assume that either $\varphi$ or $\rho$ is $u$-regular and that
\begin{equation*}
\sum\limits_{n=1}^{\infty}\left(\frac{\varphi(u_n)}{\rho(u_n)}\right)^{\delta}=\infty.
\end{equation*}
Then $m(\Lambda(\varphi))>0$. If in addition any open subset of $\Omega$ is $m$-measurable and $(\mathcal{R},\beta)$ is locally $m$-ubiquitous relative to $(\rho,l,u)$, then $m(\Lambda(\varphi))=1$.
\end{corollary}
\begin{remark*}
By choosing the right sequence $u$, the additional requirement that the function $\varphi$ is $u$-regular is easily satisfied in the classical example of $\psi$-approximability with $\varphi(q)=\psi(q)/q$. Hence, this corollary is particularly useful as it allows us to prove the divergence case of Khintchine's Theorem. Furthermore, Corollary \ref{cor2} will be vital in the proof of Theorem \ref{thm:lines}, a Khintchine-type result for affine coordinate subspaces (see Section \ref{sec:ubiquity}).
\end{remark*}
Next we turn to subsequent results of Theorem \ref{hT1}. While Theorem \ref{hT1} itself is all we need to complete the proof of Jarn\'ik's Theorem, the following statement is formulated in a slightly simpler way and will be referred to in the proof of Theorem \ref{HDfibres}, a partial Jarn\'ik-type analogue to Theorems \ref{thm:subspaces} and \ref{thm:lines}.
\begin{corollary} \label{cor4}
Let $(\Omega,d)$ be a compact metric space equipped with a probability measure $m$ satisfying condition $\mathrm{(M2)}$. Suppose that $(\mathcal{R},\beta)$ is a locally $m$-ubiquitous system relative to $(\rho,l,u)$ and that $\varphi$ is an approximating function. For $0<s<\delta$, define
\begin{equation*}
g(r)=\varphi(r)^{s}\rho(r)^{-\delta},\quad \text{ and }\quad G=\limsup\limits_{n\rightarrow\infty}g(u_n).
\end{equation*}
\begin{itemize}
\item
Suppose that $G=0$ and that either $\varphi$ or $\rho$ is $u$-regular. Then
\begin{equation*}
\mathcal{H}^s(\Lambda(\varphi))=\infty\quad \text{ if }\quad \sum\limits_{n=1}^{\infty}g(u_n)=\infty.
\end{equation*}
\item
Suppose that $0<G\leq\infty$. Then $\mathcal{H}^s(\Lambda(\varphi))=\infty.$
\end{itemize}
\end{corollary}
Note that Corollary \ref{cor4} only applies to Hausdorff $s$-measures rather than the more general class of measures $\cH^f$. This means that the growth conditions on the dimension function in Theorem \ref{hT1} are trivially satisfied, which allows us to weaken the regularity condition on $\rho$. As a consequence of the second part of Corollary \ref{cor4} we obtain the following dimension formulae for $\Lambda(\varphi)$.
\begin{corollary}\label{cor5}
Let $(\Omega,d)$ be a compact metric space equipped with a probability measure $m$ satisfying condition $\mathrm{(M2)}$. Suppose that $(\mathcal{R},\beta)$ is a locally $m$-ubiquitous system relative to $(\rho,l,u)$ and that $\varphi$ is an approximating function.
\begin{itemize}
\item
If $\ \lim\limits_{n\rightarrow\infty}\varphi(u_n)/\rho(u_n)=0$, then
\begin{equation*}
\dim\Lambda(\varphi)\geq \sigma\delta,\quad \text{ where }\quad \sigma:=\limsup\limits_{n\rightarrow\infty}\frac{\log\rho(u_n)}{\log\varphi(u_n)}.
\end{equation*}
Furthermore, if $\ \liminf\limits_{n\rightarrow\infty}\rho(u_n)/\varphi(u_n)^{\sigma}<\infty$, then $\mathcal{H}^{\sigma\delta}(\Lambda(\varphi))=\infty$.
\item
If $\ \limsup\limits_{n\rightarrow\infty}\varphi(u_n)/\rho(u_n)>0,$ then $0<\mathcal{H}^{\delta}(\Lambda(\varphi))<\infty$ and so $\dim\Lambda(\varphi)=\delta$.
\end{itemize}
\end{corollary}
\section{The classical results}\label{Sec:ClassicalResults}
Next we show how we can derive the divergence parts of the theorems of Khintchine and Jarn\'ik as consequences of Lemma \ref{K-J-lemma} and the statements above. We will also make use of the following observation.
\begin{lemma}[Cauchy condensation test]\label{sum-lemma}
Let $\phi:\mathbb{R}^+\rightarrow\mathbb{R}^+$ be a positive decreasing function and let $k>1$. Then
\begin{equation}\label{Eqn:Cauchy}
\sum\limits_{q=1}^{\infty}\phi(q)=\infty\quad \Longleftrightarrow\quad \sum\limits_{n=1}^{\infty} k^n\phi(k^n)=\infty.
\end{equation}
\end{lemma}
\begin{proof}
Fix an integer $k>1$. Any $q\in\NN$ is contained in an interval of the form $[k^n,k^{n+1})$ with $0\leq n\in\ZZ$. The function $\phi$ is decreasing, hence
\begin{equation*}
\phi(k^n)\geq\phi(q)\geq\phi(k^{n+1}).
\end{equation*}
The interval $[k^n,k^{n+1})$ contains
\begin{equation*}
k^{n+1}-k^n=k^n(k-1)=k^{n+1}\left(1-\frac{1}{k}\right)
\end{equation*}
integer points. Thus, we get upper and lower bounds for $\sum_{q\in\NN}\phi(q)$ by
\begin{align*}
(k-1)\sum\limits_{n=0}^{\infty}k^n\phi(k^n)&=\sum\limits_{n=0}^{\infty}(k^{n+1}-k^n)\phi(k^n)\\[1ex]
&\geq\sum\limits_{q=1}^{\infty}\phi(q)\\[1ex]
&\geq\sum\limits_{n=0}^{\infty}(k^{n+1}-k^n)\phi(k^{n+1})=\left(1-\frac{1}{k}\right)\sum\limits_{n=1}^{\infty}k^n\phi(k^n).
\end{align*}
The two condensed series above differ only by the single term $\phi(1)$, which does not affect whether they converge or diverge. Hence, the sum $\sum_{q\in\NN}\phi(q)$ is bounded from above and below by constant multiples of $\sum_{n\in\NN}k^n\phi(k^n)$, which implies \eqref{Eqn:Cauchy}.
\end{proof}
\begin{remark*}
Note that the argument above only proves \eqref{Eqn:Cauchy} for integers $k$. However, it can easily be extended to any real number $k>1$ by making use of the fact that
\begin{equation*}
\sum\limits_{q=1}^{\infty}\phi(q)=\infty\quad \Longleftrightarrow\quad \int\limits_{1}^{\infty}\phi(t)\,\mathrm{d}t=\infty
\end{equation*}
for any positive decreasing function $\phi$ and by splitting the integration domain into intervals of the form $[k^n,k^{n+1})$ as above.
\end{remark*}
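A quick numerical illustration of Lemma \ref{sum-lemma} (purely illustrative) with $k=2$: for $\phi(q)=1/(q\log^2(q+1))$ both the direct and the condensed partial sums settle down, while for $\phi(q)=1/(q\log(q+1))$ both keep growing, albeit very slowly.
\begin{verbatim}
# Comparing partial sums of sum phi(q) and sum k^n phi(k^n) for k = 2.
import math

def direct(phi, N):
    return sum(phi(q) for q in range(1, N + 1))

def condensed(phi, k, T):
    return sum(k ** n * phi(k ** n) for n in range(T))

conv = lambda q: 1.0 / (q * math.log(q + 1) ** 2)   # convergent series
div = lambda q: 1.0 / (q * math.log(q + 1))         # divergent series

for N, T in ((10**2, 7), (10**4, 14), (10**6, 20)):
    print(N, direct(conv, N), condensed(conv, 2, T),
          direct(div, N), condensed(div, 2, T))
\end{verbatim}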
Now we can turn to the proof of the divergence case of Khintchine's Theorem in the one-dimensional case, see Theorem \ref{Khintchine}.
\begin{corollary}
Let $W(\psi)$ be the set of $\psi$-approximable points in $[0,1]$. Then $\lambda(W(\psi))=1$ if $\ \sum\limits_{q=1}^{\infty}\psi(q)=\infty$.
\end{corollary}
\begin{proof}
Recall that in Lemma \ref{K-J-lemma} we have established that the pair $(\mathcal{R},\beta)$ defined in \eqref{Eqn:list} is locally ubiquitous with respect to $(\rho,l,u)$, where $l_{n+1}=u_n=k^n$ and $\rho:r\rightarrow k r^{-2}$. Clearly, $\rho$ is $u$-regular, so we can apply Corollary \ref{cor2} to $\varphi(q)=\psi(q)/q$. This tells us that $\lambda(W(\psi))=1$, if
\begin{equation*}
\frac{1}{k}\sum\limits_{n=1}^{\infty}\frac{\psi(k^n)k^{2n}}{k^n}=\frac{1}{k}\sum\limits_{n=1}^{\infty}\psi(k^n)k^n=\infty.
\end{equation*}
Now the statement directly follows by applying Lemma \ref{sum-lemma}.
\end{proof}
\begin{remark*}
In the $n$-dimensional case, by adjusting $\rho$ and $\delta$ appropriately, we get in an analogous fashion that $\lambda_n(W_n(\psi))=1$ if
\begin{equation*}
\frac{1}{k^n}\sum\limits_{l=1}^{\infty}\frac{\psi(k^l)^n \left(\left(k^l\right)^{1+\frac{1}{n}}\right)^n}{k^{ln}}=\frac{1}{k^n}\sum\limits_{l=1}^{\infty}\psi(k^l)^n k^{l} =\infty,
\end{equation*}
which by Lemma \ref{sum-lemma} is equivalent to $\sum_{q\in\NN}\psi(q)^n=\infty$.
\end{remark*}
This proves the divergence part of Khintchine's Theorem for arbitrary dimensions and hence completes the proof of the theorem since we already obtained the convergence part in Chapter \ref{Ch:Introduction}.
The divergence part of Jarn\'ik's Theorem follows directly from Theorem \ref{hT1}. Note that we will not distinguish between the two cases $G=0$ and $G>0$ since the function $\rho$ is $u$-regular and $G>0$ obviously implies divergence in \eqref{ineq10}.
\begin{samepage}
\begin{corollary}
Let $W(\psi)$ be the set of $\psi$-approximable points in $[0,1]$ and $s\in(0,1)$. Then
\begin{equation*}
\mathcal{H}^s(W(\psi))=\infty\quad \text{ if }\quad \sum\limits_{q=1}^{\infty}q^{1-s}\psi(q)^s=\infty.
\end{equation*}
\end{corollary}
\end{samepage}
\begin{proof}
We apply Theorem \ref{hT1} to the situation where $\varphi(q)=\psi(q)/q$, $\delta=1$, $u_n=k^n$, the $u$-regular ubiquitous function is $\rho:r\rightarrow kr^{-2}$ and the dimension function is given by $r\rightarrow r^s$. Since $s<1$, the function $r\rightarrow r^{s-1}$ is decreasing and diverges for $r\rightarrow 0$. We get that
\begin{equation*}
g(r)=\frac{\psi(r)^s\, r^2}{k\, r^s}
\end{equation*}
and hence, $\mathcal{H}^s(W(\psi))=\infty$, if
\begin{equation*}
\frac{1}{k}\sum\limits_{n=1}^{\infty}\frac{\psi(k^n)^s k^{2n}}{k^{ns}}=\frac{1}{k}\sum\limits_{n=1}^{\infty}\psi(k^n)^s(k^n)^{1-s}k^n=\infty
\end{equation*}
which again due to Lemma \ref{sum-lemma} is exactly the case when $\sum_{q\in\NN}q^{1-s}\psi(q)^s=\infty$.
\end{proof}
This completes the proof of Jarn\'ik's Theorem in dimension $1$. In the $n$-dimensional case it can be established analogously that $\mathcal{H}^s(W_n(\psi))=\infty$ for $s\in(0,n)$ if
\begin{equation*}
\frac{1}{k^n}\sum\limits_{l=1}^{\infty}\frac{\psi(k^l)^s k^{(n+1)l}}{k^{ls}}=\frac{1}{k^n}\sum\limits_{l=1}^{\infty}\psi(k^l)^s(k^l)^{n-s}k^l=\infty,
\end{equation*}
which happens precisely when $\sum_{q\in\NN}q^{n-s}\psi(q)^s$ diverges.
\begin{remark*}
These proofs illustrate how easily both the theorems of Khintchine and Jarn\'ik follow once the major theorems of ubiquity theory are established. Note that Jarn\'ik's Theorem can be directly deduced from Khintchine's Theorem using the Mass Transference Principle as shown in Section \ref{secMTP}. Indeed, given the Mass Transference Principle, the real power of ubiquity is that it enables us to establish Khintchine type theorems with respect to the ambient measure.
\end{remark*}
\chapter{Rational approximation of affine coordinate subspaces of Euclidean space}\label{fibres}
\chaptermark{Approximation on fibres}
Khintchine's Theorem is sufficient to determine the Lebesgue measure $\lambda_n(W_n(\psi))$ for any given approximating function $\psi$, but it fails to answer more specific questions arising in a natural manner. For instance, if we consider an approximating function $\psi$ such that $\sum_{q\in\NN}\psi(q)^2=\infty$, we know that almost every pair $(\alpha,\beta)\in{\rm I}^2$ is $\psi$-approximable. Now we would like to know what happens if we fix the coordinate $\alpha$. Is it still true that almost every pair $(\alpha,\beta)\in\{\alpha\}\times{\rm I}$ is $\psi$-approximable? Essentially this means we want to obtain a one-dimensional Lebesgue measure statement from a two-dimensional setting. Khintchine's Theorem together with Fubini's Theorem implies that
\begin{equation*}
\lambda\left(\{\alpha\}\times{\rm I}\cap W_2(\psi)\right)=1\quad \text{ for }\lambda\text{-almost all }\alpha\in{\rm I}.
\end{equation*}
However, given any fixed $\alpha\in{\rm I}$, the set $\{\alpha\}\times{\rm I}$ is a null-set with respect to two-dimensional Lebesgue measure and so, a priori, we cannot say anything about the intersection $\{\alpha\}\times{\rm I}\cap W_2(\psi)$. This consideration is easily extended to higher dimensions.
\section{Preliminaries and the main results}
We want to investigate the following question. Let $\ell$ and $m$ be positive integers with $\ell+m=n$, and let $\psi$ be an approximating function. Fix $\balpha\in{\rm I}^{\ell}$ and define the \textit{fibre above $\balpha$} by
\begin{equation*}
{\rm F}_n^{\balpha}:=\{\balpha\}\times{\rm I}^m\subset {\rm I}^n.
\end{equation*}
Then, is it true that
\begin{equation}\label{Eqn:Khin?}
\lambda_{m}({\rm F}_n^{\balpha}\cap W_n(\psi))=
\begin{dcases}
1,\quad &\text{ if }\quad\sum\limits_{q=1}^{\infty}\psi(q)^n=\infty \\[2ex]
0,\quad &\text{ if }\quad\sum\limits_{q=1}^{\infty}\psi(q)^n<\infty
\end{dcases}
\quad\quad\textbf{?}
\end{equation}
Upon choosing a rational vector $\balpha$, it is easily established that the convergence part of \eqref{Eqn:Khin?} cannot be true. To see this, fix $\alpha=a/b\in\QQ$ with $b\in\NN$. Dirichlet's Theorem implies that for any $\beta\in{\rm I}$ there exist infinitely many $q\in\NN$ such that $\norm{q\beta}<1/q$. Hence,
\begin{equation*}
\norm{bq\alpha}=0<\frac{b^2}{bq}=\psi(bq)
\end{equation*}
and
\begin{equation*}
\norm{bq\beta}<\frac{b}{q}=\frac{b^2}{bq}=\psi(bq),
\end{equation*}
where $\psi:\NN\rightarrow\RR^+$ is the approximating function given by $\psi(q)=b^2/q$. This shows that every point $(\alpha,\beta)$ in the set ${\rm F}_2^{\alpha}=\{\alpha\}\times{\rm I}\subset\RR^2$ is $\psi$-approximable and thus
\begin{equation*}
\lambda({\rm F}_2^{\alpha}\cap W_2(\psi))=1.
\end{equation*}
On the other hand, $\psi$ satisfies
\begin{equation*}
\sum\limits_{q=1}^{\infty}\psi(q)^2=\sum\limits_{q=1}^{\infty}b^4q^{-2}<\infty.
\end{equation*}
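A hedged numerical companion to this example (with $a=1$, $b=3$, hence $\psi(q)=9/q$): for a randomly chosen $\beta$, the short Python count below already finds many $q$ for which both $\norm{q\alpha}<\psi(q)$ and $\norm{q\beta}<\psi(q)$ hold, even though $\sum\psi(q)^2$ converges.
\begin{verbatim}
# Counting simultaneous solutions for the rational fibre alpha = 1/3 and
# the approximating function psi(q) = 9/q.
import random

a, b = 1, 3

def psi(q):
    return b * b / q

def dist_to_int(x):
    return abs(x - round(x))

random.seed(1)
beta = random.random()
hits = [q for q in range(1, 2001)
        if dist_to_int(q * a / b) < psi(q) and dist_to_int(q * beta) < psi(q)]
print(len(hits), hits[:10])
\end{verbatim}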
Analogous examples work for any choice of $n$ and $\ell$. This shows it is worth treating the two sides of the problem separately. We will concentrate on the divergence side. A similar argument to above shows that any rational vector $\balpha$ satisfies the divergence part of \eqref{Eqn:Khin?}. Again, for simplicity we will just consider the case when $n=2$ and $\alpha=a/b\in\QQ$. Assume that $\psi$ is an approximating function satisfying $\sum_{q\in\NN}\psi(q)^2=\infty$ and define the function $\bar{\psi}$ by
\begin{equation*}
\bar{\psi}(q)=\frac{\psi(bq)}{b}.
\end{equation*}
Clearly, monotonicity of $\psi$ implies monotonicity of $\bar{\psi}$ and it follows that
\begin{align*}
\sum\limits_{q=1}^{\infty}\bar{\psi}(q)&=\sum\limits_{q=1}^{\infty}\frac{\psi(bq)}{b}=\frac{1}{b^2}\sum\limits_{q=1}^{\infty}b\psi(bq)\\[1ex]
&\geq\frac{1}{b^2}\sum\limits_{q=1}^{\infty}\left(\psi(bq)+\psi(bq+1)+\dots+\psi(bq+b-1)\right)\\[1ex]
&=\frac{1}{b^2}\sum\limits_{q=b}^{\infty}\psi(q)=\infty.
\end{align*}
Hence, by Khintchine's Theorem, almost every $\beta\in{\rm I}$ is $\bar{\psi}$-approximable. This implies that there are infinitely many $q\in\NN$ satisfying $\norm{q\beta}<\bar{\psi}(q)$ and thus
\begin{equation*}
\norm{bq\beta}\leq b\norm{q\beta}<b\bar{\psi}(q)=\frac{b\,\psi(bq)}{b}=\psi(bq)
\end{equation*}
for infinitely many $q\in\NN$. On the other hand, $\norm{bq\alpha}=0<\psi(bq)$ for any $q\in\NN$ and so for almost every $\beta\in{\rm I}$ the pair $(\alpha,\beta)$ is $\psi$-approximable.
\begin{remark*}
Note that the above argument only makes use of the property that $\sum_{q\in\NN}\psi(q)=\infty$ rather than the stronger assumption that $\sum_{q\in\NN}\psi(q)^2=\infty$. This argument extends to arbitrary dimensions and illustrates that picking a rational vector $\balpha=\boldsymbol{a}/b\in\QQ^{\ell}$ essentially reduces the problem of Diophantine approximation within ${\rm F}_n^{\balpha}$ to the $m$-dimensional case of Khintchine's Theorem. Throughout the rest of this chapter we will assume that $\balpha\notin\QQ^{\ell}$.
\end{remark*}
An affine coordinate subspace $\{\balpha\}\times\RR^m \subseteq\RR^n$ is said to be of \emph{Khintchine type for divergence} if ${\rm F}_n^{\balpha}$ satisfies the divergence case of \eqref{Eqn:Khin?}, i.e. if for any approximating function $\psi:\RR\to\RR^+$ such that $\sum_{q\in\NN}\psi(q)^n$ diverges, almost every point on $\{\balpha\}\times\RR^m$ is $\psi$-approximable. Intuitively, $\{\balpha\}\times\RR^m$ is of Khintchine type for divergence if its typical points behave like the typical points of Lebesgue measure with respect to the divergence case of Khintchine's theorem. The recent article~\cite{hyperplanes} addresses the issue for certain affine coordinate hyperplanes in $\RR^n$, where $n\geq 3$. There, sufficient conditions are given for a hyperplane to be of Khintchine type for divergence. However, the techniques of \cite{hyperplanes} are not capable of handling subspaces of codimension greater than one, nor those of large Diophantine type. Here, we overcome these difficulties by taking a different approach. We show that affine coordinate subspaces of dimension at least two are of Khintchine type for divergence, and we make substantial progress on the one-dimensional case. All of the following, unless otherwise noted, is joint work with Ram\'irez and Simmons \cite{mine}.
\begin{remark*}
The question \eqref{Eqn:Khin?} can also be extended to other types of manifolds. A manifold $\mathcal{M}\subset\RR^n$ is called \textit{non-degenerate} if it is sufficiently curved to deviate from any hyperplane. Clearly, this differs from the case of affine (coordinate) subspaces, which are often referred to as \textit{degenerate} manifolds. It is widely believed that non-degeneracy is the right condition to impose on a manifold in order for a Khintchine-type theorem for $\mathcal{M}\cap W_n(\psi)$ to hold in both the convergence and the divergence case (see \cite{DAaspects} for more background information).
\begin{Conjecture}\label{Con:DreamThm}
Let $\mathcal{M}$ be a $d$-dimensional non-degenerate submanifold of $\RR^n$, let $\mu_d$ be the normalised $d$-dimensional Lebesgue measure induced on $\mathcal{M}$ and let $\psi$ be an approximating function. Then
\begin{equation}\label{Eqn:DreamThm}
\mu_d(\mathcal{M}\cap W_n(\psi))=
\begin{dcases}
1,\quad &\text{ if }\quad\sum\limits_{q=1}^{\infty}\psi(q)^n=\infty, \\[2ex]
0,\quad &\text{ if }\quad\sum\limits_{q=1}^{\infty}\psi(q)^n<\infty.
\end{dcases}
\end{equation}
\end{Conjecture}
The following list shows the various contributions that have been made towards Conjecture \ref{Con:DreamThm}.
\begin{itemize}
\item
\textit{Extremal manifolds.} A submanifold $\mathcal{M}$ of $\RR^n$ is called \textit{extremal} if
\begin{equation*}
\mu_d(\mathcal{M}\cap W_n(\tau))=0\quad \text{for all }\tau>\frac{1}{n}.
\end{equation*}
Note that $\mathcal{M}\cap W_n(\tau)=\mathcal{M}$ for $\tau\leq 1/n$ by Dirichlet's Theorem. Hence, a manifold is extremal if and only if \eqref{Eqn:DreamThm} holds for any approximating function $\psi$ of the form $\psi:q\mapsto q^{-\tau}$. Kleinbock and Margulis proved that any non-degenerate submanifold $\mathcal{M}$ of $\RR^n$ is extremal \cite{KleinbockMargulis}.
\item
\textit{Planar curves.} Conjecture \ref{Con:DreamThm} is true when $\mathcal{M}$ is a non-degenerate planar curve, i.e. when $n=2$ and $d=1$. The convergence part of \eqref{Eqn:DreamThm} was established in \cite{VaughanVelani} and strengthened in \cite{Zorin}. The divergence part was proved in \cite{BDVplanarcurves}.
\item
\textit{Beyond planar curves.} The divergence case of Conjecture \ref{Con:DreamThm} is true for analytic non-degenerate submanifolds of $\RR^n$ \cite{Bermanifolds} as well as non-degenerate curves and manifolds that can be `fibred' into such curves \cite{BVVZ2}. This category includes non-degenerate manifolds which are smooth but not necessarily analytic. The convergence case has been shown to be true for non-degenerate manifolds of high enough dimension $d$ relative to $n$ \cite{BVVZ}, \cite{Simmons-convergence-case}. Earlier work proved the convergence part of Conjecture \ref{Con:DreamThm} for manifolds satisfying a geometric curvature condition \cite{Dodson}.
\end{itemize}
\end{remark*}
Coming back to the present problem, we prove the following:
\begin{theorem}\label{thm:subspaces}
Every affine coordinate subspace of Euclidean space of dimension at
least two is of Khintchine type for divergence.
\end{theorem}
\begin{remark*}
Combining Theorem \ref{thm:subspaces} with Fubini's theorem shows
that every submanifold of Euclidean space which is foliated by
affine coordinate subspaces of dimension at least two is of
Khintchine type for divergence. For example, given $a,b,c\in\RR$
with $(a,b)\neq (0,0)$, the three-dimensional affine subspace
\[
\{(x,y,z,w) : a x + b y = c\} \subseteq \RR^4
\]
is of Khintchine type for divergence, being foliated by the
two-dimensional affine coordinate subspaces $\{(x,y)\}\times\RR^2$ with $a x + b y = c$.
\end{remark*}
The reason for the restriction to subspaces of dimension at least two is that Gallagher's Theorem is used in the proof. Recall that this removes the monotonicity condition from Khintchine's Theorem, but only in dimension two and higher. Regarding one-dimensional affine coordinate subspaces, we have the following theorem.
\begin{theorem}\label{thm:lines}
Consider a one-dimensional affine coordinate subspace
$\{\balpha\}\times\RR \subseteq \RR^n$, where $\balpha\in\RR^{n - 1}$.
\begin{itemize}
\item[(i)] If the dual Diophantine type of $\balpha$ is strictly greater
than $n$, then $\{\balpha\}\times \RR$ is contained in the set of very
well approximable vectors
\[
\mathrm{VWA}_n = \{\bx : \exists \varepsilon > 0 \; \exists^\infty
q\in\NN \;\; \norm{q\bx} < q^{-1/n - \varepsilon}\}.
\]
\item[(ii)] If the dual Diophantine type of $\balpha$ is strictly less
than $n$, then $\{\balpha\}\times\RR$ is of Khintchine type for
divergence.
\end{itemize}
\end{theorem}
\noindent Here the \emph{dual Diophantine type} of a point
$\balpha\in\RR^\ell$ is the number
\begin{equation}
\label{taudef}
\tau_D(\balpha)=\sup\left\{\tau\in\mathbb{R}^+: \norm{\bq\cdot\balpha} < |\bq|^{-\tau} \textrm{ for i.m. } \bq\in\ZZ^{\ell}\backslash\{\boldsymbol{0}\} \right\}.
\end{equation}
\begin{remark*}
The inclusion $\{\balpha\}\times\RR \subseteq \mathrm{VWA}_n$ in part
(i) is philosophically ``almost as good'' as being of Khintchine
type for divergence, since it implies that for sufficiently ``nice''
functions $\psi:\NN\to\RR^+$ such that $\sum_{q\in\NN} \psi(q)^n$
diverges, almost every point on $\{\balpha\}\times\RR$ is
$\psi$-approximable. For example, call a function $\psi$ \emph{good}
if for each $c > 0$, we have either $\psi(q)\geq q^{-c}$ for all $q$
sufficiently large or $\psi(q) \leq q^{-c}$ for all $q$ sufficiently
large. Then by the comparison test, if $\psi$ is a good function
such that $\sum_{q\in\NN} \psi(q)^n$ diverges, then for all
$\varepsilon > 0$, we have $\psi(q) \geq q^{-1/n - \varepsilon}$ for all $q$
sufficiently large and thus, by Theorem \ref{thm:lines}(i), every
point of $\{\balpha\}\times\RR$ is $\psi$-approximable. The class of
good functions includes the class of \emph{Hardy $L$-functions}
(those that can be written using the symbols $+,-,\times,\div,\exp$,
and $\log$ together with constants and the identity function), see
\cite[Chapter III]{Hardy} or \cite{AvdD} for further
discussion and examples.
\end{remark*}
Taken together, parts (i) and (ii) of Theorem \ref{thm:lines} imply
that if $\psi$ is a Hardy $L$-function such that
$\sum_{q\in\NN} \psi(q)^n$ diverges, and if $\balpha\in{\rm I}^{n-1}$ is a vector
whose dual Diophantine type is not exactly equal to $n$, then almost
every point of $\{\balpha\}\times\RR \subseteq \RR^n$ is
$\psi$-approximable. This situation is somewhat frustrating, since it
seems strange that points in ${\rm I}^{n - 1}$ with dual Diophantine type
exactly equal to $n$ should have any special properties (as opposed to
those with dual Diophantine type $(n - 1)$, which are the ``not very
well approximable'' points). However, it seems to be impossible to
handle these points using our techniques.
Even if $\sum_{q\in\NN} \psi(q)^n$ converges, we might be interested in the set of $\psi$-approximable points. Jarn\'ik's Theorem gives us a means to determine the Hausdorff $s$-measure of $W_n(\psi)$ for any given $s<n$ as well as its Hausdorff dimension. Still, as in the above case, given a base point $\balpha$ in ${\rm I}^{\ell}$, we cannot say much about the intersection ${\rm F}_n^{\balpha}\cap W_n(\psi)$. Clearly, we are only interested in base points $\balpha\in W_{\ell}(\psi)$, as otherwise no point in $\{\balpha\}\times {\rm I}^m$ can be $\psi$-approximable. Focussing on the case where $\psi$ is a monomial of the form $\psi(q)=q^{-\tau}$, we have proved the following result:
\begin{theorem}\label{HDfibres}
Let $\ell,m\in\NN$ with $\ell+m=n$, let $\balpha\in W_{\ell}(\tau)\subset{\rm I}^{\ell}$, and define
\begin{equation*}
s^{\balpha}_n(\tau):=\dim({\rm F}^{\balpha}_n\cap W_n(\tau)).
\end{equation*}
Then
\begin{equation*}
s^{\balpha}_n(\tau)\geq s_n^{\ell}(\tau):=
\begin{dcases}
m\ &\text{ if }\quad \tau\leq\frac{1}{n},\\[2ex]
\frac{n+1}{\tau+1}-\ell\ &\text{ if }\quad \frac{1}{n}<\tau\leq\frac{1}{\ell},\\[2ex]
\frac{m}{\tau+1}\ &\text{ if }\quad \tau>\frac{1}{\ell}.
\end{dcases}
\end{equation*}
Furthermore, $\cH^{s_n^{\ell}(\tau)}({\rm F}_n^{\balpha}\cap W_n(\tau))=\cH^{s_n^{\ell}(\tau)}({\rm I}^m)$.
\end{theorem}
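For concreteness, the piecewise lower bound $s_n^{\ell}(\tau)$ of Theorem \ref{HDfibres} is recorded in the short Python sketch below; it is purely illustrative, and the sample values of $n$, $\ell$ and $\tau$ are arbitrary choices covering the three regimes.
\begin{verbatim}
# Illustration only: the piecewise lower bound s_n^ell(tau) from Theorem HDfibres.
def s_lower_bound(n, ell, tau):
    """Lower bound for the dimension of F_n^alpha intersected with W_n(tau)."""
    m = n - ell
    if tau <= 1 / n:
        return m
    if tau <= 1 / ell:
        return (n + 1) / (tau + 1) - ell
    return m / (tau + 1)

n, ell = 3, 1
for tau in (0.2, 1 / n, 0.5, 1 / ell, 2.0):   # values in all three regimes
    print(tau, s_lower_bound(n, ell, tau))
\end{verbatim}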
We will show that for most base points $\balpha\in W_{\ell}(\tau)$ we get the exact dimension result
\begin{equation*}
s^{\balpha}_n(\tau)= s_n^{\ell}(\tau).
\end{equation*}
In the first case, where $\tau\leq 1/n$, Theorem \ref{HDfibres} is trivially true with $s_n^{\balpha}(\tau)=m$ for all $\balpha\in{\rm I}^{\ell}$. This follows directly from Dirichlet's Theorem. In the second case, where $1/n<\tau\leq 1/\ell$, we still have $W_{\ell}(\tau)={\rm I}^{\ell}$, so we can consider any $\balpha\in{\rm I}^{\ell}$. In the third case, where $\tau> 1/\ell$, the set of suitable base points $W_{\ell}(\tau)$ is a proper subset of ${\rm I}^{\ell}$ of dimension $\frac{\ell+1}{\tau+1}$, satisfying
\begin{equation*}
\cH^{\frac{\ell+1}{\tau+1}}(W_{\ell}(\tau))=\infty,
\end{equation*}
by Jarn\'ik's Theorem. In both of the latter cases there are base points $\balpha\in W_{\ell}(\tau)$, for which $s_n^{\balpha}(\tau)>s_n^{\ell}(\tau)$. However, this set of exceptions is ``small" as shown by the following result.
\begin{corollary}\label{Cor:HD}
If $1/n<\tau\leq 1/\ell$, then the collection of points $\balpha\in {\rm I}^{\ell}$ such that
\begin{equation}\label{Eqn:HDslice}
s_n^{\balpha}(\tau)=\dim({\rm F}_n^{\balpha}\cap W_n(\tau))>\frac{n+1}{\tau+1}-\ell
\end{equation}
is a null set with respect to the Lebesgue measure $\lambda_{\ell}$. If $\tau> 1/\ell$, then the collection of points $\balpha\in {\rm I}^{\ell}$ such that $s^{\balpha}_n(\tau)>\frac{m}{\tau+1}$ is a null set with respect to the measure $\mathcal{H}^{\frac{\ell+1}{\tau+1}}$.
\end{corollary}
We will prove Corollary \ref{Cor:HD} in Section \ref{sec:HDrmk}. We have not investigated the set of exceptions any further, but it trivially includes rational points and, depending on $\tau$, points with rational dependencies between different coordinates.
\section[Proof of Theorem~\ref{thm:subspaces}: Subspaces of dimension at least two]{Proof of Theorem~\ref{thm:subspaces}:\\ Subspaces of dimension at least two}\label{Sec:Subspaces2}
Consider an affine coordinate subspace $\{\balpha\}\times\RR^m$, where
$\balpha\in{\rm I}^\ell$ and $\ell + m = n$. Given a non-increasing function
$\psi:\NN\to\RR^+$, for each $M,N$ with $M < N$ let
\begin{equation*}
Q_{\psi}(M,N):= \Abs{\left\{M < q\leq N : \norm{q\balpha} < \psi(N) \right\}},
\end{equation*}
and write $Q_{\psi}(N) : = Q_{\psi}(0, N)$. Since any real number $\delta > 0$ may
be thought of as a constant function, the expression $Q_\delta(M,N)$
makes sense.
\begin{lemma}\label{lem:count}
For all $N\in\NN$,
\begin{equation*}
Q_\delta(N) = \Abs{\left\{q\in\NN : \norm{q\balpha} < \delta, q\leq N\right\}}\geq N\delta^\ell - 1.
\end{equation*}
\end{lemma}
\begin{proof}
Let
\begin{equation*}
\cQ_\delta(N) = \left\{q\in\NN : \norm{q\balpha} < \delta, q\leq
N\right\},
\end{equation*}
so that $Q_\delta(N) = \Abs{\cQ_\delta(N)}$. We first claim that
$Q_\delta(N)\geq Q_{\frac{\delta}{2},\boldsymbol{\gamma}}(N) - 1$ for any
$\boldsymbol{\gamma}\in\RR^\ell$ and $N\in\NN$, where
\begin{equation*}
\cQ_{\delta,\boldsymbol{\gamma}}(N) : = \left\{q\in\NN : \norm{q\balpha + \boldsymbol{\gamma}} < \delta, q\leq N\right\}
\end{equation*}
and $Q_{\delta,\boldsymbol{\gamma}}(N) = \Abs{\cQ_{\delta,\boldsymbol{\gamma}}(N)}$. Simply
notice that if $q_1 < q_2\in\cQ_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)$, then,
by the triangle inequality, $q_2 -
q_1\in\cQ_{\delta}(N)$.
Therefore, letting $q_0 = \min\cQ_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)$, we
have that
\begin{equation*}
\cQ_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)-q_0:=\left\{q-q_0:q\in\cQ_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)\right\}\subseteq\cQ_\delta(N)\cup\{0\},
\end{equation*}
which implies that
\begin{equation*}
Q_\delta(N)\geq Q_{\delta/2,\boldsymbol{\gamma}}(N) - 1.
\end{equation*}
Now we show that for any $N\in\NN$ there is some $\boldsymbol{\gamma}$ such that
$Q_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)\geq N\delta^\ell$. Notice that
\begin{align*}
\int_{\TT^\ell} Q_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)\,d\boldsymbol{\gamma} & = \int_{\TT^\ell} \sum_{q = 1}^N \mathbf{1}_{\left( - \frac{\delta}{2}, \frac{\delta}{2}\right)^\ell}(q\balpha + \boldsymbol{\gamma})\,d\boldsymbol{\gamma} = N\delta^\ell,
\end{align*}
where $\TT^{\ell}=\RR^{\ell}/\ZZ^{\ell}$ is the $\ell$-dimensional torus and $\mathbf{1}$ is the characteristic function.
Therefore, $Q_{\frac{\delta}{2},\boldsymbol{\gamma}}(N)$ must take some value
$\geq N \delta^\ell$ at some $\boldsymbol{\gamma}$. Combining this with the
previous paragraph proves the lemma.
\end{proof}
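Lemma \ref{lem:count} is easily probed numerically. The Python sketch below is purely illustrative; the choice of $\balpha$ and of $\delta$ is arbitrary. It compares $Q_\delta(N)$ with the lower bound $N\delta^{\ell}-1$ for a sample vector with $\ell=2$.
\begin{verbatim}
# Illustration only: Q_delta(N) >= N*delta^ell - 1 (Lemma lem:count), ell = 2.
import math

def dist_to_int(x):
    return abs(x - round(x))

alpha = (math.sqrt(2) - 1, math.sqrt(3) - 1)
ell = len(alpha)

def Q(delta, N):
    """Number of q <= N with ||q*alpha_i|| < delta for every coordinate i."""
    return sum(1 for q in range(1, N + 1)
               if all(dist_to_int(q * a) < delta for a in alpha))

for N in (10**3, 10**4, 10**5):
    delta = N ** (-1 / (2 * ell))        # any delta with N*delta^ell large will do
    print(N, Q(delta, N), N * delta**ell - 1)
\end{verbatim}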
\begin{samepage}
\begin{lemma}
\label{lem:series}
Let $\balpha\in{\rm I}^{\ell}$ and $m\in\NN$ with $\ell+m=n$. Suppose that $\psi:\NN\to\RR^+$ is an approximating function such that $\sum_{q\in\NN}\psi(q)^n$ diverges. Then,
\begin{equation}
\label{psikseries}
\sum_{\norm{q\balpha} < \psi(q)}\psi(q)^{m} = \infty.
\end{equation}
\end{lemma}
\end{samepage}
\begin{remark*}
The index $\norm{q\balpha}<\psi(q)$ in the above sum is short for ``$q\in\NN,$ $\norm{q\balpha}<\psi(q)$'' and will be used throughout this chapter.
\end{remark*}
\begin{proof}
We may assume without loss of generality that $\psi$ is a step function of the form $\psi(q) = 2^{-k_q}$
where $k_q\in\NN$. Indeed, given any $\psi$ as in the theorem
statement, we can let $k_q = \ceil{-\log_2\psi(q)}$ and replace
$\psi(q)$ with $2^{-k_q}$. For any $q\in\NN$, this decreases $\psi(q)$ by at most
a factor of $2$, hence preserving the divergence of the
series $\sum_{q\in\NN}\psi(q)^n$. On the other hand, since this modified function is no larger
than the old $\psi$, divergence of \eqref{psikseries} for the new
function implies divergence of \eqref{psikseries} for the old $\psi$.
We may also assume that $\psi(q)\to 0$ as $q\to\infty$: otherwise $\psi$ is bounded below by some constant $c>0$, in which case $\norm{q\balpha}<\psi(q)$ holds for infinitely many $q\in\NN$ by Dirichlet's Theorem and \eqref{psikseries} diverges trivially. Now,
\begin{align*}
\sum_{\norm{q\balpha} < \psi(q)}\psi(q)^{m} &\geq \sum_{k\in\NN}\psi(2^k)^{m}\Abs{\left\{2^{k - 1} < q\leq 2^k : \norm{q\balpha} < \psi(2^k)\right\}}\\[1ex]
& = \sum_{k\in\NN}\psi(2^k)^{m}Q_{\psi}(2^{k - 1}, 2^k) \\[1ex]
& = \sum_{k\in\NN}\sum_{j\geq k}\left(\psi(2^j)^{m} - \psi(2^{j + 1})^{m}\right)Q_{\psi}(2^{k - 1}, 2^k)\\[1ex]
& = \sum_{j\in\NN}\left(\psi(2^j)^{m} - \psi(2^{j + 1})^{m}\right)\sum_{k = 1}^j Q_{\psi}(2^{k - 1}, 2^k) \\[1ex]
&\geq \sum_{j\in\NN}\left(\psi(2^j)^{m} - \psi(2^{j + 1})^{m}\right) Q_{\psi}(2^j) \\[1ex]
&\geq \sum_{j\in\NN}\left(\psi(2^j)^{m} - \psi(2^{j + 1})^{m}\right)
[2^j\psi(2^j)^\ell - 1] \quad (\text{by Lemma \ref{lem:count}})\\[1ex]
&= -\psi(2)^m + \sum_{j\in\NN}\left(\psi(2^j)^{m} - \psi(2^{j + 1})^{m}\right)
2^j\psi(2^j)^\ell.
\end{align*}
Let $(j_d)_{d = 1}^\infty$ be the sequence indexing the set
$\{j\in\NN : k_{2^{j}}\neq k_{2^{j + 1}}\}$ in increasing
order. Then we have
\begin{equation*}
\psi(2^{j_d})^m - \psi(2^{j_d + 1})^m \gg \psi(2^{j_d})^m,
\end{equation*}
and hence,
\begin{align*}
\sum_{j\in\NN}\left(\psi(2^j)^{m} - \psi(2^{j + 1})^{m}\right)
2^j\psi(2^j)^\ell
&\gg \sum_{d\in\NN}2^{j_d} \psi(2^{j_d})^{m + \ell} \\
&\gg \sum_{d\in\NN}\left(\sum_{k = j_{d - 1} + 1}^{j_d}2^k\right)\psi(2^{j_d})^n \\[1ex]
& = \sum_{k\in\NN}2^k\psi(2^k)^{n},
\end{align*}
which diverges by Cauchy's condensation test (see Lemma \ref{sum-lemma}).
\end{proof}
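As a numerical sanity check on Lemma \ref{lem:series} (and no more than that), the Python sketch below tracks the partial sums of \eqref{psikseries} for the sample choices $n=2$, $\ell=m=1$, $\alpha=\sqrt{2}-1$ and $\psi(q)=q^{-1/2}$, for which $\sum_q\psi(q)^n$ diverges.
\begin{verbatim}
# Illustration only: partial sums of sum_{||q*alpha|| < psi(q)} psi(q)^m
# for n = 2, ell = m = 1, alpha = sqrt(2) - 1, psi(q) = q^(-1/2).
import math

def dist_to_int(x):
    return abs(x - round(x))

alpha = math.sqrt(2) - 1
m = 1
psi = lambda q: q ** (-0.5)

partial = 0.0
for q in range(1, 200001):
    if dist_to_int(q * alpha) < psi(q):
        partial += psi(q) ** m
    if q in (10**3, 10**4, 10**5, 2 * 10**5):
        print(q, round(partial, 3))      # grows slowly (roughly logarithmically)
\end{verbatim}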
\begin{proof}[Proof of Theorem~\ref{thm:subspaces}]
Suppose that $m\geq 2$. Then, by Lemma \ref{lem:series}, we can apply
Gallagher's extension of Khintchine's theorem \cite{Gallagherkt} to
the function
\begin{equation}
\label{psix}
\psi_{\balpha}(q) = \begin{cases}
\psi(q) & \text{if }\ \norm{q\balpha} < \psi(q), \\
0 & \text{otherwise},
\end{cases}
\end{equation}
and get that $\{\balpha\}\times\RR^m$ is of Khintchine type for
divergence. But $\balpha\in{\rm I}^\ell$ was chosen arbitrarily, and applying
permutation matrices does not affect whether a manifold is of
Khintchine type for divergence. This completes the proof.
\end{proof}
\section[Proof of Theorem~\ref{thm:lines}(i): Base points of high Diophantine type]{Proof of Theorem~\ref{thm:lines}(i):\\ Base points of high Diophantine type}\label{Sec:KTP}
The proof of Theorem \ref{thm:lines}(i) is based on the following
standard fact, which can be found for example in~\cite[Theorem
V.IV]{Cassels}:
\theoremstyle{plain} \newtheorem*{ktp}{Khintchine's transference
principle}
\begin{ktp}
Let $\balpha\in{\rm I}^n$ and define the numbers
\begin{equation*}
\omega_D=\omega_D(\balpha) = \sup\left\{\omega\in\mathbb{R}^+: \norm{\inner{\bq}{\balpha}} \leq \abs{\bq}^{-(n + \omega)} \textrm{ for i.m. } \bq\in\ZZ^n\backslash\{\boldsymbol{0}\} \right\}
\end{equation*}
and
\begin{equation*}
\omega_S=\omega_S(\balpha) = \sup\left\{\omega\in\mathbb{R}^+: \norm{q\balpha} \leq q^{-(1 + \omega)/n} \textrm{ for i.m. } q\in\NN \right\}.
\end{equation*}
Then
\begin{equation*}
\frac{\omega_D}{n^2 + (n - 1)\omega_D} \leq \omega_S \leq \omega_D,
\end{equation*}
where the cases $\omega_D = \infty$ and $\omega_S = \infty$ should
be interpreted in the obvious way.
\end{ktp}
Note that $\omega_D$ is related to the dual Diophantine type $\tau_D$
defined in \eqref{taudef} via the formula
$\tau_D(\bx) = \omega_D(\bx) + n$.
\begin{proof}
[Proof of Theorem~\ref{thm:lines}(i)]
We fix $\balpha = (\alpha_1,\dots, \alpha_{n - 1})\in{\rm I}^{n - 1}$ such that
$\tau_D(\balpha) > n$, and we consider a point
$(\balpha,\beta) \in \{\balpha\}\times\RR$. It is clear from \eqref{taudef} that
$\tau_D(\balpha,\beta) \geq \tau_D(\balpha)$, so $\tau_D(\balpha,\beta) > n$ and thus
$\omega_D(\balpha,\beta) > 0$. Then, by Khintchine's transference principle,
$\omega_S(\balpha,\beta) > 0$, {\it i.e.} $(\balpha,\beta) \in \mathrm{VWA}_n$.
\end{proof}
\section[Proof of Theorem~\ref{thm:lines}(ii): Base points of low Diophantine type]{Proof of Theorem~\ref{thm:lines}(ii):\\ Base points of low Diophantine type}\label{sec:ubiquity}
We start by stating a result of Cassels' \cite{Cassels-01law}, which will be used in the proof.
\begin{theorem}[Cassels]\label{Thm:Cassels01}
Let $(\phi(i))_{i\in\NN}$ be any sequence of non-negative numbers and let $(q_i)_{i\in\NN}$ be any sequence of integers. Then the inequality $\parallel q_i\alpha \parallel < \phi(i)$ has infinitely many solutions $i\in\NN$ either for almost all or for almost no $\alpha\in\RR$.
\end{theorem}
\begin{remark*}
Theorem \ref{Thm:Cassels01} is also known as Cassels' ``0-1 law''. Gallagher's ``0-1 law'', which was referred to in Chapter \ref{Ch:Introduction}, is an extension of Theorem \ref{Thm:Cassels01} to the coprime setting of the Duffin--Schaeffer Conjecture, see Conjecture \ref{Con:DS}.
\end{remark*}
We will also need to make use of a property of lattices. Let $\Lambda=\Lambda(A)=A\ZZ^m\subset\RR^m$ be a full-rank lattice generated by $A\in\RR^{m\times m}$ satisfying $\det A\neq 0$. For $j\in\{1,\dots,m\},$ we define the \textit{$j$-th successive minimum} of $\Lambda$ as
\begin{equation*}
\mu_j=\mu_j(\Lambda):=\inf\left\{r>0:\Lambda\cap \bar{B}(\0,r)\text{ contains }j\text{ linearly independent vectors}\right\},
\end{equation*}
where $\bar{B}$ denotes a closed ball. Clearly, $0<\mu_1\leq\mu_2\leq\dots\leq\mu_m<\infty$. Furthermore, let the \textit{Dirichlet fundamental domain} of $\Lambda$ centred at $\0$ be defined as
\[
\cD = \{\br\in\RR^{m} : \mathrm{dist}(\br,\Lambda) =
\mathrm{dist}(\br,\0) = |\br|\}.
\]
Then the following holds.
\begin{lemma}\label{Lma:Lattice}
Suppose that $\cD \not\subseteq B_{m}(\0,R)$. Then the last
successive minimum of $\Lambda$ satisfies $\mu_m\geq R/m$.
\end{lemma}
\begin{proof}
Assume that $\mu_m< R/m$. As $\cD \not\subseteq B_{m}(\0,R)$, there exists $\bx\in\cD$ with $|\bx|>R$. Since $\bx\in\cD$, it follows that $\bar{B}(\bx,R)\cap\Lambda=\emptyset$. There are $m$ linearly independent vectors $\bv_1,\dots,\bv_m\in\Lambda$ satisfying $|\bv_1|=\mu_1,\dots,|\bv_m|=\mu_m$. These vectors span $\RR^m$ and so we can write
\begin{equation*}
\bx=s_1\bv_1+\dots+s_m\bv_m,\quad \text{ with } s_j\in\RR,\quad (1\leq j\leq m).
\end{equation*}
Let
\begin{equation*}
\bz=\lfloor s_1\rfloor \bv_1+\dots+\lfloor s_m\rfloor \bv_m\in\Lambda.
\end{equation*}
It follows that
\begin{align*}
|\bx-\bz|=\left|\sum\limits_{j=1}^{m}(s_j-\lfloor s_j\rfloor)\bv_j\right|\leq \sum\limits_{j=1}^{m}|\bv_j|=\sum\limits_{j=1}^{m}\mu_j\leq m\mu_m< R
\end{align*}
and so $\bz\in\bar{B}(\bx,R)\cap\Lambda$, which is a contradiction.
\end{proof}
\begin{remark}
The lower bound $\mu_m\geq R/m$ is not optimal, but for our purposes we only need the fact that $\mu_m\geq cR$ for some constant $c>0$ which does not depend on the lattice $\Lambda$.
\end{remark}
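The successive minima of a concrete planar lattice can be computed by brute force, which may help to make the quantities appearing in Lemma \ref{Lma:Lattice} more tangible. The Python sketch below is purely illustrative: it uses the Euclidean norm, a naive search box, and an arbitrary sample generator $A$.
\begin{verbatim}
# Illustration only: successive minima of the planar lattice A*Z^2 by brute force.
import itertools, math

A = [[1.0, 0.3],
     [0.0, 0.01]]                        # det A = 0.01, a rather "flat" lattice

def lattice_point(z):
    return (A[0][0]*z[0] + A[0][1]*z[1], A[1][0]*z[0] + A[1][1]*z[1])

B = 200                                  # search box for the integer coefficients
pts = [lattice_point(z) for z in itertools.product(range(-B, B + 1), repeat=2)
       if z != (0, 0)]
pts.sort(key=lambda p: math.hypot(*p))

v1 = pts[0]                              # a shortest non-zero vector
mu1 = math.hypot(*v1)
mu2 = next(math.hypot(*p) for p in pts   # shortest vector not parallel to v1
           if abs(p[0]*v1[1] - p[1]*v1[0]) > 1e-12)
print(mu1, mu2, mu1 * mu2)               # mu1*mu2 is comparable to det A
\end{verbatim}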
Now to the proof of Theorem~\ref{thm:lines}(ii).
Let $\psi:\RR\to\RR^+$ be non-increasing and such that
$\sum_{q\in\NN}\psi(q)^n$ diverges. Our goal here is to use the ideas
of ubiquity theory introduced in Chapter \ref{ubiquity} to show that almost every point on
$\{\balpha\}\times\RR\subseteq\RR^n$ is $\psi$-approximable, where
$\balpha\in\RR^{n - 1}$ has been fixed with dual Diophantine type strictly
less than $n$. The ubiquity approach begins with the fact that for any
$N\in\NN$ such that
\begin{equation}
\label{Nreq}
N^{-1/(n - 1)} < \psi(N) < 1,
\end{equation}
we have
\begin{equation}
\label{eq:mink}
[0,1] \subseteq \bigcup_{\substack{q\leq N \\ \norm{q \balpha} < \psi(N)}}\bigcup_{p = 0}^q B\left(\frac{p}{q},\frac{2}{q N\psi(N)^{n - 1}}\right),
\end{equation}
which is a simple consequence of Minkowski's theorem. The basic aim is
to show that a significant proportion of the measure of the above
double-union set is represented by integers $q$ that are closer to $N$ than to
$0$. Specifically, we must show that for some $k\geq 2$, the following
three conditions hold:
\begin{enumerate}
\item[\textbf{(U)}] In accordance with the theory presented in Chapter \ref{ubiquity}, we define the following objects:
\begin{align*}
J &= \{(p,q)\in\ZZ\times\NN : \norm{q\balpha} < \psi(q)\},&
R_{(p,q)} &= \left\{p/q\right\} \;\;\; ((p,q) \in J),\\
\cR &= \{R_{(p,q)} : (p,q)\in J\},&
\beta_{(p,q)} &= q \;\;\; ((p,q)\in J),\\
l_j &= k^{j - 1} \;\;\; (j\in\NN),&
u_j &= k^j \;\;\; (j\in\NN).
\end{align*}
Furthermore, we define the function $\rho:\NN\rightarrow\RR^{+}$ by
\begin{equation*}
\rho(q) = \frac{c}{q^2 \psi(q)^{n - 1}},
\end{equation*}
where $c > 0$ will be chosen later. Then the pair $(\cR,\beta)$ forms a global
ubiquitous system with respect to the triple $(\rho,l,u)$. This means that there is some
$\kappa > 0$ such that
\begin{align*}
\lambda\left([0,1]\cap\bigcup_{\substack{k^{j - 1} < q \leq k^j \\ \norm{q\balpha} < \psi(k^j)}}\bigcup_{p = 0}^q
B\left(\frac{p}{q},\frac{c}{k^{2j}\psi(k^j)^{n - 1}}\right)\right) \geq \kappa
\end{align*}
for all $j$ sufficiently large. \item[\textbf{(R)}] The function
$\varphi(q)= \psi(q)/q$ is $u$-regular, meaning that there
is some constant $c < 1$ such that
$\varphi(k^{j + 1})\leq c \varphi(k^j)$ for all $j$ sufficiently
large.
\item[\textbf{(D)}] The sum
$\sum_{j\in\NN}\frac{\varphi(k^j)}{\rho(k^j)}$ diverges.
\end{enumerate}
Then Corollary \ref{cor2} will imply that the set of
$\psi_{\balpha}$-approximable numbers (see \eqref{psix}) in $\RR$ has
positive measure, and Theorem \ref{Thm:Cassels01} will
imply that it has full measure. Since the set of
$\psi_{\balpha}$-approximable numbers is just the set of $y\in\RR$ for which
$(\balpha,y)$ is $\psi$-approximable, this will show that the set of
$\psi$-approximable points on the line
$\{\balpha\}\times\RR\subseteq\RR^n$ has full one-dimensional Lebesgue
measure. The following lemma establishes \textbf{(R)} and~\textbf{(D)}.
\begin{lemma}
If $\psi:\RR\to\RR^+$ is non-increasing, then~\textup{\textbf{(R)}}
holds. Furthermore, if $\sum_{q\in\NN}\psi(q)^n$ diverges,
then~\textup{\textbf{(D)}} holds.
\end{lemma}
\begin{proof}
In the first place, we have
\begin{equation*}
\frac{\varphi(k^{j + 1})}{\varphi(k^j)} = \frac{\psi(k^{j + 1})}{k\psi(k^j)}
\leq \frac{1}{k},
\end{equation*}
which proves~\textbf{(R)}. For~\textbf{(D)},
\begin{equation*}
\sum_{j\in\NN}\frac{\varphi(k^j)}{\rho(k^j)} = \sum_{j\in\NN}k^j\psi(k^j)^n,
\end{equation*}
which diverges by Cauchy's condensation test.
\end{proof}
The challenge then is to establish~\textbf{(U)}.
\begin{lemma}\label{ubiqlemma}
Let $\psi:\RR\to\RR^+$ be non-increasing such that \eqref{Nreq} holds
for all sufficiently large $N$. Assume that for all $k\geq 2$
there exists $j_k\geq 1$ such that, for all $j\geq j_k$,
\begin{equation*} \Abs{\left\{0 < q\leq k^{j - 1} : \norm{q\balpha} <
\psi(k^j)\right\}} \ll k^{j - 1}\psi(k^j)^{n - 1},
\end{equation*}
where the implied constant in $\ll$ is
assumed to be independent of $k$. Then~\textup{\textbf{(U)}} holds
for some $k\geq 2$.
\end{lemma}
\begin{proof}
For all $k\geq 2$ and $j\geq j_k$, we have
\begin{equation*}
\lambda\left([0,1]\cap\bigcup_{\substack{q\leq k^{j - 1} \\ \norm{q\balpha} < \psi(k^j)}}\bigcup_{p = 0}^q B\left(\frac{p}{q},\frac{2}{q k^j \psi(k^j)^{n - 1}}\right)\right)
\leq\sum_{\substack{q\leq k^{j - 1} \\ \norm{q\balpha} < \psi(k^j)}}\frac{4}{k^j\psi(k^j)^{n - 1}}
\ll\frac{1}{k}.
\end{equation*}
After choosing $k$ to be larger than the implied constant in the
``$\ll$'' comparison, we see that the left hand side is
$\leq 1 - \kappa < 1$ for some $\kappa > 0$.
Combining this with \eqref{eq:mink}, we see that for all $j\geq j_k$
large enough so that \eqref{Nreq} holds for $N = k^j$, we have
\begin{equation*}
\lambda\left([0,1]\cap\bigcup_{\substack{k^{j - 1} < q\leq k^j \\ \norm{q\balpha} < \psi(k^j)}}\bigcup_{p = 0}^q
B\left(\frac{p}{q},\frac{2}{q k^j \psi(k^j)^{n - 1}}\right)\right)
\geq \kappa > 0,
\end{equation*}
and this implies \textbf{(U)} with $c = 2k$.
\end{proof}
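Condition \textbf{(U)} can also be probed numerically. The Python sketch below is purely illustrative (it verifies nothing): it fixes the sample data $n=2$, $\balpha=\sqrt{2}-1$, $\psi(q)=q^{-1/2}$, $k=4$, $c=2k$ and a single value of $j$, and measures the union of intervals appearing in \textbf{(U)} by merging them.
\begin{verbatim}
# Illustration only: the measure of the union of intervals in condition (U)
# for n = 2, alpha = sqrt(2) - 1, psi(q) = q^(-1/2), k = 4, c = 2k, j = 5.
import math

def dist_to_int(x):
    return abs(x - round(x))

alpha = math.sqrt(2) - 1
n, k, j = 2, 4, 5
psi = lambda q: q ** (-0.5)
c = 2 * k
radius = c / (k ** (2 * j) * psi(k ** j) ** (n - 1))

intervals = []
for q in range(k ** (j - 1) + 1, k ** j + 1):
    if dist_to_int(q * alpha) < psi(k ** j):
        for p in range(q + 1):
            centre = p / q
            intervals.append((max(centre - radius, 0.0), min(centre + radius, 1.0)))

intervals.sort()                         # merge overlapping intervals in [0, 1]
measure, cur_lo, cur_hi = 0.0, None, None
for lo, hi in intervals:
    if cur_hi is None or lo > cur_hi:
        if cur_hi is not None:
            measure += cur_hi - cur_lo
        cur_lo, cur_hi = lo, hi
    else:
        cur_hi = max(cur_hi, hi)
if cur_hi is not None:
    measure += cur_hi - cur_lo
print(measure)                           # a positive proportion of [0, 1]
\end{verbatim}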
Thus the goal is to show that the conditions of Lemma \ref{ubiqlemma} are satisfied.
The one-dimensional case of the following lemma was originally proven
by Beresnevich, Haynes, and Velani using a continued fraction
argument \cite{nalpha}.
\begin{lemma}
\label{nalphalemma}
Fix $\balpha\in\RR^\ell$ and $\tau > \tau_D(\balpha)$. Then for all $N$
sufficiently large and for all $\delta \geq N^{-1/\tau}$, we have
\begin{equation}
\label{BHVformula}
\Abs{\{q\in\mathbb{N}:\norm{q\balpha} < \delta,q< N\}}\leq 4^{\ell + 1} N \delta^\ell.
\end{equation}
\end{lemma}
\begin{proof}
Consider the lattice $\Lambda = g_t u_{\balpha} \ZZ^{\ell + 1}$, where
\begin{align*}
g_t & = \left[\begin{array}{ll}
e^{t/\ell}I_\ell &\\
& e^{-t}
\end{array}\right],\\[2ex]
u_{\balpha} & = \left[\begin{array}{ll}
I_\ell & - \balpha\\
&\ \ \ 1
\end{array}\right],
\end{align*}
and where $t$ is chosen so that $R : = e^{t/\ell} \delta = e^{-t} N$,
i.e.
\begin{equation*}
t=\frac{\log(N/\delta)}{1+1/\ell}.
\end{equation*}
Let $\br=(p_1,\dots,p_{\ell},q)\in\ZZ^{\ell+1}$. Then
\begin{equation*}
g_t u_{\balpha}\br=(e^{t/\ell}(p_1-q\alpha_1),\dots,e^{t/\ell}(p_{\ell}-q\alpha_{\ell}),e^{-t}q),
\end{equation*}
and so, since each $q$ counted in \eqref{BHVformula} gives rise to a distinct point $\br\in\Lambda$ with $|\br|<R$ (by taking $p_i$ to be the integer nearest to $q\alpha_i$), it suffices to show that
\[
\Abs{\{\br\in \Lambda : |\br| < R\}} \leq (4R)^{\ell + 1}.
\]
Let $\cD$ be the Dirichlet fundamental domain for $\Lambda$
centred at $\0$, i.e.
\[
\cD = \{\br\in\RR^{\ell + 1} : \mathrm{dist}(\br,\Lambda) =
\mathrm{dist}(\br,\0) = |\br|\}.
\]
Since $\Lambda$ is unimodular, $\cD$ is of volume 1, so
\begin{align*}
\Abs{\{\br\in \Lambda : |\br| < R\}} &=
\lambda_{\ell+1}\left(\bigcup_{\substack{\br\in\Lambda \\ |\br|<
R}} (\br + \cD)\right)\\[1ex] &\leq^* \lambda_{\ell+1}(B_{\ell + 1}(\0,2R)) =
(4R)^{\ell + 1},
\end{align*}
where the starred inequality is true as long as
$\cD \subseteq B_{\ell + 1}(\0,R)$. So we need to show that
$\cD\subseteq B_{\ell + 1}(\0,R)$ assuming that $N$ is large enough.
Suppose that $\cD \not\subseteq B_{\ell + 1}(\0,R)$. Then, by Lemma \ref{Lma:Lattice}, the last
successive minimum of $\Lambda$ is $\gg R$, so by \cite[Theorem
VIII.5.VI]{Cassels-geometry}, some point $\bs$ in the dual lattice
\begin{equation*}
\Lambda^* = \{\bs\in \RR^{\ell + 1} : \br\cdot\bs\in\ZZ \text{ for all }\br\in\Lambda\}
\end{equation*}
satisfies $0 < |\bs|\ll R^{-1}$. Given a lattice $\Lambda(A)$, its dual lattice is generated by the inverse transpose of $A$ and so we can write
$\bs = g_t' u_{\balpha}' (\bq,p)$ for some $p\in\ZZ$, $\bq\in\ZZ^\ell$,
where $g_t'$ and $u_{\balpha}'$ denote the inverse transposes of $g_t$ and
$u_{\balpha}$, respectively. Then the inequality $|\bs|\ll R^{-1}$
becomes
\begin{align*}
\begin{split}
e^{-t/\ell} |\bq| &\ll R^{-1}\\
e^t |\inner\bq\balpha + p| &\ll R^{-1}
\end{split}
\hspace{.6in}\text{ i.e. }
\begin{split}
|\bq| &\ll \delta^{-1}\\
|\inner\bq\balpha + p| &\ll N^{-1}.
\end{split}
\end{align*}
Since $\delta \geq N^{-1/\tau},$ we get
\begin{equation}
\label{final}
|\inner\bq\balpha + p| \ll \delta^\tau \ll |\bq|^{-\tau}.
\end{equation}
Because $\tau > \tau_D(\balpha)$, there are only finitely many pairs
$(p,\bq)$ satisfying \eqref{final}. Hence, for all sufficiently large
$N$, we have $\cD \subseteq B_{\ell + 1}(\0,R)$ and thus
\eqref{BHVformula} holds.
\end{proof}
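The counting bound \eqref{BHVformula} can be tested numerically in the simplest case $\ell=1$. The Python sketch below is purely illustrative: it takes the badly approximable number $\alpha=\sqrt{2}-1$ (whose dual Diophantine type is $1$), the arbitrary choice $\tau=3/2$ and $\delta=N^{-1/\tau}$, and compares the count on the left hand side of \eqref{BHVformula} with the bound $4^{2}N\delta$.
\begin{verbatim}
# Illustration only: Lemma nalphalemma for ell = 1, alpha = sqrt(2) - 1, tau = 3/2.
import math

def dist_to_int(x):
    return abs(x - round(x))

alpha = math.sqrt(2) - 1
tau = 1.5                                # any tau > tau_D(alpha) = 1 will do

for N in (10**3, 10**4, 10**5):
    delta = N ** (-1 / tau)
    count = sum(1 for q in range(1, N) if dist_to_int(q * alpha) < delta)
    print(N, count, 16 * N * delta)      # the count stays below 4^(ell+1)*N*delta^ell
\end{verbatim}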
From this we can deduce the following consequence.
\begin{corollary}\label{cor:nalpha}
Let $\balpha\in{\rm I}^{n - 1}$ be of dual Diophantine type
$\tau_D(\balpha) < n$ and suppose that, for any $\varepsilon > 0$, we have
$\psi(q) \geq q^{-1/n - \varepsilon}$ for all $q$ sufficiently large. Then,
for any $k\geq 2$ and $h\in\ZZ$, we have
\begin{equation*}
\Abs{\left\{ 0 < q \leq k^{j + h} : \norm{q\balpha} < \psi(k^j) \right\}}\ll k^{j + h}\psi(k^j)^{n - 1}
\end{equation*}
for $j$ large enough.
\end{corollary}
\begin{proof}
We show that for large enough $j$ we are in a situation where we can
apply Lemma~\ref{nalphalemma} with $N = k^{j + h}$ and
$\delta = \psi(k^j)$. Since $\tau_D(\balpha) < n$ we can choose
$\tau\in (\tau_D,n)$ and then, for all large enough $j$,
\begin{equation*}
N^{-1/\tau} = k^{-(j + h)/\tau} < \psi(k^j);
\end{equation*}
hence Lemma~\ref{nalphalemma} applies.
\end{proof}
Armed with Corollary \ref{cor:nalpha}, we are now ready to finish the proof.
\begin{proof}
[Proof of Theorem~\ref{thm:lines}(ii)]
Let $\balpha\in{\rm I}^{n - 1}$ be a point whose dual Diophantine type is
strictly less than $n$, and let $\psi:\NN\to\RR^+$ be a non-increasing
function such that $\sum_{q\in\NN}\psi(q)^n$ diverges. Furthermore,
assume that, for every $\varepsilon > 0$, the inequality
\begin{equation}\label{Eqn:psibounds}
1 > \psi(q) \geq q^{-1/n - \varepsilon}
\end{equation}
holds for all sufficiently large
$q$. Then, by Corollary~\ref{cor:nalpha}, we satisfy all the parts of
Lemma~\ref{ubiqlemma}, so there exists $k\geq 2$ such that
\textbf{(U)} holds. Thus, by the argument given earlier, we can use
Corollary \ref{cor2} to conclude that almost every point on the
line $\{\balpha\}\times\RR\subseteq\RR^n$ is $\psi$-approximable.
We now show that assumption \eqref{Eqn:psibounds} can be made
without loss of generality. If $\psi(q) \geq 1$ for all $q$, then all
points are $\psi$-approximable and the theorem is trivial. If
$\psi(q) < 1$ for some $q$, then, by monotonicity, $\psi(q) < 1$ for all
$q$ sufficiently large. So we just need to show that the assumption
$\psi(q) \geq q^{-1/n - \varepsilon}$ can be made without loss of
generality. Let
\begin{equation*}
\phi(q) = (q(\log q)^2)^{-1/n}
\end{equation*}
and define the function
\begin{equation*}
\overline\psi(q) = \max\{\psi(q),\phi(q)\}.
\end{equation*}
Then $\overline\psi$ satisfies our assumptions and, therefore, almost every
point on $\{\balpha\}\times\RR$ is $\overline\psi$-approximable.
Corollary \ref{cor:nalpha} implies that
\begin{align*}
\sum_{\norm{q\balpha} < \phi(q)}\phi(q) &\leq \sum_{j\in\NN}\phi(2^j)\Abs{\left\{0 < q\leq 2^{j + 1} : \norm{q\balpha} < \phi(2^j)\right\}}\\[1ex]
&\ll \sum_{j\in\NN}2^{j + 1} \phi(2^j)^n,
\end{align*}
which converges because $\sum_{q\in\NN}\phi(q)^n$ does. Hence, by the
Borel--Cantelli Lemma, almost every point on $\{\balpha\}\times\RR$ is not
$\phi$-approximable. But every $\overline\psi$-approximable point
which is not $\phi$-approximable is $\psi$-approximable. Therefore,
the set of $\psi$-approximable points on the line
$\{\balpha\}\times\RR\subseteq\RR^n$ is of full measure, and the theorem
is proved.
\end{proof}
\section{Proof of Theorem \ref{HDfibres}}\label{Sec:HDfibres}
As mentioned previously, the first case follows directly from Dirichlet's Theorem, so it remains to prove the two latter cases. Let $\balpha\in W_{\ell}(\tau)$ and
\begin{equation*}
\cQ(\balpha,\tau)=\left\{q\in\NN:\parallel q\balpha\parallel<q^{-\tau}\right\}.
\end{equation*}
This is an infinite set, so it can be written as an increasing sequence $(q_i)_{i\in\NN}$. The collection $\cQ=\bigcup_{i\in\NN}Q_i$ of sets
\begin{equation*}
Q_i:=\left\{\frac{\bp}{q} : (\bp,q)\in\mathbb{Z}^{m}\times\mathbb{N},\ 0< q\leq q_i,\ \parallel q\balpha\parallel<q_i^{-\tau}\right\}\cap [0,1]^{m},\quad i\in\NN,
\end{equation*}
forms a locally $\lambda_{m}$-ubiquitous system with respect to the infinite increasing sequence $(q_i)_{i\in\NN}$, the weight function $\beta(\bp/q)=q$ and the function $\rho(q)=q^{-1}$. This follows directly from the fact that the intervals of the form
\begin{equation*}
\left(\frac{k}{q_i},\frac{k+1}{q_i}\right),\quad k\in\{0,1,\dots,q_i-1\},
\end{equation*}
or their multi-dimensional analogues, respectively, cover ${\rm I}^{m}$. We do not know the growth rate of the sequence $(q_i)_{i\in\mathbb{N}}$ for an arbitrary $\balpha$. However, as long as we can guarantee
\begin{equation*}
G=\limsup\limits_{i\rightarrow\infty}g(q_i)>0,\quad \text{ where }\quad g(r)=\varphi(r)^{s}\rho(r)^{-\delta},
\end{equation*}
we still get full $\mathcal{H}^s$-measure for ${\rm F}^{\balpha}_n\cap W_n(\psi)$ by Corollary \ref{cor4}. In our case, we have $\varphi(r)=\psi(r)/r$ and $\delta=m$. Thus, we get
\begin{equation*}
g(q_i)=\left(\frac{\psi(q_i)}{q_i}\right)^s\rho(q_i)^{-m}=q_i^{(-\tau-1)s+m},
\end{equation*}
and if $s\leq\frac{m}{\tau+1}=s_n^{\ell}(\tau)$, then $G\geq 1$. Hence, we have shown that
\begin{equation*}
\cH^s({\rm F}^{\balpha}_n\cap W_n(\psi))=\infty
\end{equation*}
for $s\leq s_n^{\ell}(\tau)$ and thus
\begin{equation*}
s_n^{\balpha}(\tau)\geq s_n^{\ell}(\tau)= \frac{m}{\tau+1}
\end{equation*}
for $\balpha\in W_{\ell}(\tau)$, which proves the third case of Theorem \ref{HDfibres}.
\begin{remark*}
This dimension result was already proved by the author in \cite{master}. Corollary \ref{cor5} was used for the conclusion instead of Corollary \ref{cor4} and so the statement regarding Hausdorff $s_n^{\ell}(\tau)$-measure was missing. The remaining case was mentioned as a conjecture in \cite{master}, but no progress towards a proof had been made.
\end{remark*}
It is worth noting that the above argument does not depend on the choice of $\tau$. However, this fact is not sufficient to prove the second case as
\begin{equation*}
\frac{n+1}{\tau+1}-\ell>\frac{m}{\tau+1}\quad \text{ for }\quad \tau\in\left(\frac{1}{n},\frac{1}{\ell}\right).
\end{equation*}
For this case we will need to use the Mass Transference Principle (see Theorem \ref{MTP}). A similar argument can be found in \cite{Lee}. By Minkowski's Theorem (see Theorem~\ref{Minkowski}), for any $\bbeta\in{\rm I}^m$ there are infinitely many numbers $q\in\NN$ simultaneously satisfying
\begin{equation*}
\parallel q\alpha_i \parallel < q^{-\tau},\quad (1\leq i \leq \ell)
\end{equation*}
and
\begin{equation*}
\parallel q\beta_j \parallel <q^{-\left(\frac{1-\ell\tau}{m}\right)},\quad (1\leq j \leq m).
\end{equation*}
In other words, for any $\bbeta\in{\rm I}^m$ there are infinitely many numbers $q$ in the intersection
\begin{equation*}
\cQ(\balpha,\tau)\cap\cQ\left(\bbeta,\frac{1-\ell\tau}{m}\right).
\end{equation*}
Switching to the language of Theorem \ref{MTP}, this tells us that for any ball $B\subset {\rm I}^m$, we get
\begin{equation*}
\cH^{m}\left(B\cap\limsup\limits_{k\rightarrow\infty}B_k^{\left(\frac{m+1-\ell\tau}{\tau+1}\right)}\right)=\cH^{m}(B)
\end{equation*}
where $B_k$ runs over all balls of radius $q^{-(\tau+1)}$ centred at points $\frac{\bp}{q}$ with $\bp\in\ZZ^m$ and $q\in \cQ(\balpha,\tau)$. Applying the Mass Transference Principle shows that
\begin{equation*}
\cH^s\left(B\cap\limsup\limits_{k\rightarrow\infty}B_k\right)=\cH^s(B)=\infty,
\end{equation*}
where
\begin{equation*}
s=\frac{m+1-\ell\tau}{\tau+1}=\frac{n-\ell+1-\ell\tau}{\tau+1}=\frac{n+1-\ell(\tau+1)}{\tau+1}=\frac{n+1}{\tau+1}-\ell.
\end{equation*}
Thus, the set of $\bbeta\in{\rm I}^m$, for which there are infinitely many numbers $q\in\cQ(\balpha,\tau)$ with $\parallel q\bbeta\parallel < q^{-\tau}$, has full $\cH^s$-measure. Equivalently, the set of $(\balpha,\bbeta)\in {\rm F}_n^{\balpha}$ which are $\tau$-approximable has full $\cH^s$-measure for $s=\frac{n+1}{\tau+1}-\ell$, which finishes the proof of Theorem \ref{HDfibres}.
\subsection{Proof of Corollary \ref{Cor:HD}}\label{sec:HDrmk}
Corollary \ref{Cor:HD} is a consequence of the Jarn\'ik--Besicovitch Theorem applied to both $W_n(\tau)$ and $W_m(\tau)$ and the following theorem (Theorem 7.11 in \cite{Falconer}), which provides a measure formula for fibres above a base set.
\begin{theorem}\label{slicing}
Let $F$ be a subset of ${\rm I}^n={\rm I}^{\ell}\times{\rm I}^m$ and let $E$ be any subset of ${\rm I}^{\ell}$. Let $s,t\geq 0$ and suppose there is a constant $c$ such that
\begin{equation*}
\mathcal{H}^t(F\cap {\rm F}_n^{\balpha})\geq c
\end{equation*}
for all $\balpha\in E$. Then
\begin{equation*}
\mathcal{H}^{s+t}(F)\geq bc\mathcal{H}^s(E),
\end{equation*}
where $b>0$ only depends on $s$ and $t$.
\end{theorem}
Reformulated as a statement about Hausdorff dimension, Theorem \ref{slicing} can be interpreted as follows.
\begin{corollary}\label{slicingcor}
Let $F$ be a subset of ${\rm I}^n={\rm I}^{\ell}\times{\rm I}^m$ and let $E$ be any subset of ${\rm I}^{\ell}$. Suppose that
\begin{equation*}
\dim(F\cap {\rm F}_n^{\balpha})\geq s
\end{equation*}
for all $\balpha\in E$. Then
\begin{equation*}
\dim(F)\geq \dim(E)+s.
\end{equation*}
\end{corollary}
Equipped with Corollary \ref{slicingcor} we are ready to prove Corollary \ref{Cor:HD}.
\begin{proof}[Proof of Corollary \ref{Cor:HD}]
Let $\frac{1}{n}<\tau\leq\frac{1}{\ell}$ and $\varepsilon=\frac{1}{k}$ for $k\in\mathbb{Z}^+$. Then the set ${\rm I}^{\ell}_k(\tau)$ of $\balpha\in {\rm I}^{\ell}$ such that
\begin{equation*}
s^{\balpha}_n(\tau)\geq \frac{n+1}{\tau+1}-\ell+\varepsilon
\end{equation*}
is a set of dimension less than $\ell$. Indeed, if it had dimension at least $\ell$, then the union of the sets ${\rm F}_n^{\balpha}\cap W_n(\tau)$ over $\balpha\in{\rm I}^{\ell}_k(\tau)$ would be a set of dimension at least
\begin{equation*}
\ell+\frac{n+1}{\tau+1}-\ell+\varepsilon=\frac{n+1}{\tau+1}+\varepsilon>\dim W_n(\tau),
\end{equation*} by the slicing formula for product-like sets given in Corollary \ref{slicingcor}. However, this is impossible, since this union is contained in $W_n(\tau)$ and Hausdorff dimension is monotone. Hence, $\dim {\rm I}^{\ell}_k(\tau)<\ell$, and, in particular,
\begin{equation*}
\lambda_{\ell}({\rm I}^{\ell}_k(\tau))=0.
\end{equation*}
Now, the set of $\balpha\in {\rm I}^{\ell}$ satisfying \eqref{Eqn:HDslice} is the countable union over all sets ${\rm I}^{\ell}_k(\tau)$ with $k\in\mathbb{Z}^+$ and hence also a zero set with respect to $\lambda_{\ell}$. The second case works completely analogously.
\end{proof}
\chapter{Dirichlet improvability and singular vectors}\label{dirichlet}
\chaptermark{Dirichlet improvability}
We recall that the notion of a weight vector was introduced in Section \ref{sec:weighted} and that given any weight vector $\bi\in\RR^n$ and $Q\in\NN$, Theorem \ref{wDir} states that the system of inequalities given by
\begin{equation*}
\norm{q\alpha_j}<Q^{-i_j},\quad j\in \{1,\dots,n\},
\end{equation*}
has a non-zero integer solution $q\leq Q$. We will say that $\balpha\in\RR^n$ is \emph{$\bi$-Dirichlet improvable} or $\balpha\in \cD_n(\bi)$ if there exists a positive constant $c<1$ such that the system of inequalities
\begin{equation}\label{DI}
\norm{q\alpha_j}<cQ^{-i_j},\quad j\in \{1,\dots,n\},
\end{equation}
has a non-zero integer solution $q\leq Q$ for all $Q$ large enough. If this is true for $c$ arbitrarily small, then we call $\balpha$ \emph{$\bi$-singular} or we say $\balpha\in\cS_n(\bi)$. We omit the $\bi$ in both notations in the case where $\bi=(1/n,\dots,1/n)$, i.e. when we are dealing with an improved version of the classical non-weighted theorem by Dirichlet. In dimension one, we simply denote the sets in question by $\cD$ and $\cS$, respectively. Of course, in this case, the only choice of weight vector is $i=1$.
\section{Background and our results}
Khintchine introduced the notion of singular vectors in the 1920s \cite{KhintchineSing}. He showed in the one-dimensional case that the rationals are the only singular numbers. This is a direct consequence of the following statement.
\begin{lemma}\label{Lma:Wald}
Let $\alpha$ be a real number. Assume there exists $Q_0=Q_0(\alpha)$ such that for each integer $Q\geq Q_0$ there exist $p=p(Q)\in\ZZ$ and $q=q(Q)\in\NN$ satisfying $q\leq Q$ and
\begin{equation}\label{Eqn:Wald}
|q\alpha-p|<\frac{1}{3Q}.
\end{equation}
Then $\alpha$ is rational and $p(Q)/q(Q)=\alpha$ for each $Q\geq Q_0$.
\end{lemma}
\begin{proof}
We use a slightly modified version of an argument from \cite{Waldschmidt}. Assume that $Q\geq Q_0$ and that $p$ and $q$ are the integers satisfying \eqref{Eqn:Wald}. Moreover, denote by $p'$ and $q'$ the integers such that
\begin{equation*}
|q'\alpha-p'|<\frac{1}{3(Q+1)}\quad\text{ and }\quad 1\leq q'\leq Q+1.
\end{equation*}
We want to show that $p/q=p'/q'$. The integer $qp'-q'p$ satisfies
\begin{align*}
|qp'-q'p|&=|q(p'-q'\alpha)+q'(q\alpha-p)|\\[1ex]
&\leq |q(p'-q'\alpha)|+|q'(q\alpha-p)|\\[1ex]
&<\frac{Q}{3(Q+1)}+\frac{Q+1}{3Q}\\[1ex]
&<\frac{1}{3}+\frac{2}{3}=1,
\end{align*}
hence it vanishes. This implies that $qp'=q'p$ and thus the rational number $p/q=p'/q'$ does not depend on $Q\geq Q_0$. On the other hand, \eqref{Eqn:Wald} gives
\begin{equation*}
\left|\alpha-\frac{p(Q)}{q(Q)}\right|<\frac{1}{3q(Q)Q}\leq\frac{1}{3Q}\longrightarrow 0\quad\text{ as }Q\rightarrow\infty,
\end{equation*}
so $\lim\limits_{Q\rightarrow\infty}p(Q)/q(Q)=\alpha$, which shows that $\alpha=p/q$.
\end{proof}
\begin{remark*}
The constant $1/3$ in \eqref{Eqn:Wald} is not optimal. In fact, Lemma \ref{Lma:Wald} is proved with the constant $1/2$ in both \cite{KhintchineSing} and \cite{Waldschmidt}. However, Khintchine's argument uses results about the convergents of continued fractions and Waldschmidt defines a slightly stronger form of Dirichlet improvability, requiring that the solution in \eqref{DI} satisfies the strict inequality $q<Q$.
\end{remark*}
On the other hand, it is easily seen that any rational $a/b$ is singular. Indeed, for any $c>0$ and for $Q\geq b$, the choice of $q=b$ trivially satisfies \eqref{DI}.
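Both observations are easy to illustrate numerically. The Python sketch below is purely illustrative; the range of $Q$ and the sample numbers are arbitrary. It counts, for a rational and for a badly approximable $\alpha$, how many $Q$ in a range admit a solution $q\leq Q$ of the inequality $\norm{q\alpha}<\frac{1}{3Q}$ from Lemma \ref{Lma:Wald}.
\begin{verbatim}
# Illustration only: for how many Q does  min_{q <= Q} ||q*alpha|| < 1/(3Q)  hold?
import math

def dist_to_int(x):
    return abs(x - round(x))

def has_solution(alpha, Q):
    return any(dist_to_int(q * alpha) < 1 / (3 * Q) for q in range(1, Q + 1))

for label, alpha in (("rational 3/7", 3 / 7), ("sqrt(2)", math.sqrt(2))):
    ok = sum(1 for Q in range(10, 500) if has_solution(alpha, Q))
    print(label, ok, "of", 490)
# For the rational every tested Q admits a solution (q = 7 works);
# for sqrt(2) the threshold 1/(3Q) turns out to be too strict in this range.
\end{verbatim}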
Davenport and Schmidt were the first to introduce the set $\cD$ and proved that $\alpha\in\RR\setminus\QQ$ is Dirichlet improvable if and only if $\alpha$ is badly approximable \cite{Davenport}. Their argument relies on the theory of continued fractions. This completes the one-dimensional theory.
\begin{theorem}
$\cS=\QQ$ and $\cD\setminus\cS=\mathbf{Bad}$.
\end{theorem}
The situation in higher dimensions is more intricate. It is straightforward to show that points on any rational hyperplane are singular. Hence, ${\dim\cS_n\in[n-1,n]}$. Also, on utilising the Borel--Cantelli Lemma it can be verified that the set $\cS_n$ has zero $n$-dimensional Lebesgue measure \cite[Chapter V, \S 7]{Cassels}. However, by means of a geometric argument, Khintchine proved the existence of totally irrational vectors contained in $\cS_n$ for $n\geq 2$ \cite{KhintchineSing}. Furthermore, in a recent groundbreaking paper, Cheung showed that the Hausdorff dimension of $\cS_2$ is equal to $4/3$ \cite{Cheung} and later Cheung and Chevallier extended this result to arbitrary dimensions \cite{Cheung2}.
\begin{theorem}[Cheung--Chevallier]
For $n\geq 2$,
\begin{equation*}
\dim\cS_n=\frac{n^2}{n+1}.
\end{equation*}
\end{theorem}
Significantly, this shows that $\cS_n$ is much bigger than the set of rationally dependent vectors, which is only of dimension $n-1$. These articles make use of dynamical methods. Previously, and through a classical approach, partial results towards establishing $\dim\cS_n$ had been made by Baker \cite{Baker1}, \cite{Baker2} and Rynne \cite{Rynne}. Regarding Dirichlet improvable vectors, Davenport and Schmidt showed that $\lambda_n(\cD_n)=0$ \cite{Davenport2}. They also proved the following result \cite{Davenport}.
\begin{theorem}[Davenport--Schmidt]\label{Thm:DSoriginial}
$\mathbf{Bad}_n\subset\cD_n.$
\end{theorem}
\begin{remark*}
As an immediate consequence of Theorem \ref{Thm:DSoriginial} we see that $\dim\cD_n=n$.
\end{remark*}
All of these results concern the standard set-up and until recently very little research had been done for the weighted case. However, in a very recent article \cite{Tamam}, Liao, Shi, Solan and Tamam have managed to extend Cheung's two-dimensional result to general weight vectors.
\begin{theorem}
Let $\bi=(i_1,i_2)$ be a weight vector. Then
\begin{equation*}
\dim\cS_2(\bi)=2-\frac{1}{1+\max\{i_1,i_2\}}.
\end{equation*}
\end{theorem}
Before stating our results, we recall the definition of the set $\mathbf{Bad}_n(\bi)$ for arbitrary weight vectors $\bi$:
\begin{equation*}
\balpha\in\mathbf{Bad}_n(\bi)\ \Longleftrightarrow\ \liminf\limits_{q\rightarrow\infty}\max\limits_{1\leq j\leq n}q^{i_j}\norm{q\alpha_j}>0.
\end{equation*}
We start by extending the above theorem of Davenport and Schmidt to the weighted case:
\begin{theorem}\label{ThmDS}
$\mathbf{Bad}_n(\bi)\subseteq\cD_n(\bi).$
\end{theorem}
\begin{remark*}
Clearly, no badly approximable vector can be singular, so we know that $\mathbf{Bad}_n(\bi)$ and $\cS_n(\bi)$ are disjoint. Interestingly, apart from the trivial case $n=1$, it is not currently known if the sets $\cD_n\setminus (\mathbf{Bad}_n\cup\cS_n)$ or their weighted analogues are empty.
\end{remark*}
Our second main result will show that non-singular vectors are well-suited for twisted inhomogeneous approximation. The following non-weighted result has been proved by Shapira using dynamical methods \cite{Shapira}.
\begin{theorem}[Shapira]
Let $\balpha\notin\cS_n$. Then for almost all $\bbeta \in {\rm I}^n$,
\begin{equation*}
\liminf\limits_{q\rightarrow\infty}q^{1/n}\max\limits_{1\leq j\leq n}\norm{q\alpha_j-\beta_j}=0.
\end{equation*}
\end{theorem}
We extend Shapira's result to general weight vectors. Our approach is classical.
\begin{theorem}\label{Con1}
Let $\balpha\notin\cS_n(\bi)$. Then, for almost all $\bbeta\in {\rm I}^n$,
\begin{equation*}
\liminf\limits_{q\rightarrow\infty}\max\limits_{1\leq j\leq n}q^{i_j}\norm{q\alpha_j-\beta_j}=0.
\end{equation*}
\end{theorem}
\begin{remark*}
Another way to state Theorem \ref{Con1} is by using the notation introduced in Section \ref{sec:weighted}. Given $\varepsilon>0$, let $\psi_{\varepsilon}(q)=\varepsilon q^{-1}$. Then, for any $\balpha\notin\cS_n(\bi)$ and for any $\varepsilon>0$, the set $W_n^{\balpha}(\bi,\psi_{\varepsilon})$ has full Lebesgue measure $\lambda_n$. Hence,
\begin{equation*}
{\rm I}^n\setminus\cS_n(\bi)\subset\bigcap\limits_{\varepsilon>0} W_n^{\times}(\bi,\psi_{\varepsilon}).
\end{equation*}
\end{remark*}
\section{Preliminary results}
In this section we introduce various auxiliary statements which will be used to prove Theorems \ref{ThmDS} and \ref{Con1}.
\subsection{Results needed to prove Theorem \ref{ThmDS}.}
Haj{\'o}s proved the following statement for systems of linear forms \cite{Hajos}:
\begin{theorem}[Haj\'os]\label{HajThm2}
Let $L_1,\dots,L_m$ be linear forms in $m$ variables given by
\begin{equation*}
L_j(\bx)=a_{j,1}x_1+\dots+a_{j,m}x_m,\quad \quad (1\leq j \leq m),
\end{equation*}
with real coefficients $a_{j,k}$, $1\leq j,k \leq m$. Assume that the system of linear forms has determinant $\det(a_{j,k})=\pm 1$ and satisfies
\begin{equation}\label{maxeq2}
\max\left\{|L_1(\bx)|,\dots,|L_m(\bx)|\right\}\geq 1
\end{equation}
for all integer vectors $\bx=(x_1,\dots,x_m)\neq \0\in\ZZ^m$. Then, one of the linear forms has only integer coefficients.
\end{theorem}
Any matrix $A=(a_{j,k})\in\RR^{m\times m}$ satisfying $\det A\neq 0$ gives rise to a full-rank lattice
\begin{equation*}
\Lambda(A)=\left\{A\bz:\bz\in\ZZ^m\right\}.
\end{equation*}
Two lattices $\Lambda(A_1)$ and $\Lambda(A_2)$ are identical if and only if there exists a unimodular matrix $B\in\ZZ^{m\times m}$ such that $A_1=A_2 B$. This implies that
\begin{equation*}
\Lambda(A)=\Lambda(AB)
\end{equation*}
for any matrix $B\in\ZZ^{m\times m}$ with $\det B=\pm 1$. Clearly, $A\bz=AB(B^{-1}\bz)$ and thus, turning back to the linear forms defined through $A$, we see that
\begin{equation*}
L_j(\bx)=\sum\limits_{k=1}^{m}a_{j,k}x_k=\sum\limits_{k=1}^{m}(ab)_{j,k}y_k=L_j'(\by),\quad (1\leq j \leq m),
\end{equation*}
where we perform a change of variables, substituting $\by=B^{-1}\bx$, and where $(ab)_{j,k}$ denotes the entries of $AB$. Importantly, any non-zero integer vector $\by$ is the image of a non-zero integer vector $\bx$ under multiplication by $B^{-1}$. Hence, rather than investigating $L_1(\bx),\dots,L_m(\bx)$ for $\bx\in\ZZ^{m}\setminus\{\0\}$, we can consider a modified collection of linear forms $L'_1(\by),\dots,L'_m(\by)$ at integer vectors $\by\neq\0$. This will prove to be very useful.
For our purpose, we will need the following corollary of Theorem \ref{HajThm2}.
\begin{corollary}\label{HajThm1}
Let $L_1,\dots,L_m$ be given as in Theorem \ref{HajThm2}. After a possible permutation $\pi$ of the linear forms $L_{j}$, there is an integral linear transformation of determinant $\pm 1$ from the variables $x_1,\dots,x_m$ to $y_1,\dots,y_m$, such that in the new variables $y_{\ell}$ the linear forms have lower triangular form and all diagonal elements are equal to $1$.
\end{corollary}
In other words, we get
\begin{equation*}
L_{\pi^{-1}(j)}(\bx)=\phi_{j,1}y_1+\dots+\phi_{j,j-1}y_{j-1}+y_{j},\quad (1\leq j\leq m),
\end{equation*}
with all further coefficients being $0$. According to Haj{\'o}s, Theorem \ref{HajThm2} and Corollary~\ref{HajThm1} are equivalent. However, he does not provide a proof for this equivalence, so we will give the details.
\begin{proof}
The main difficulty is to show that Corollary \ref{HajThm1} follows from Theorem \ref{HajThm2}.
By Theorem \ref{HajThm2}, one of the linear forms has only integer coefficients; after permuting the forms we may assume that it is the first one. Then, we need to show the following: given a matrix $A\in\mathbb{R}^{m\times m}$ with $\det A=\pm 1$ and $a_{1,1},\dots,a_{1,m}\in\ZZ$ satisfying $|A\bx|\geq 1$ for all $\bx\in \ZZ^m\backslash\{\0\}$, there exists a matrix $B\in\ZZ^{m\times m}$ with determinant $\pm 1$ such that $AB$ has lower triangular form with all diagonal entries equal to $1$. Corollary \ref{HajThm1} is trivially true for $m=1$, so, by induction, it is sufficient to show that we can choose $B$ in such a way that $AB$ has first row entries
\begin{equation*}
ab_{1,1}=1,\ ab_{1,2}=\dots=ab_{1,m}=0,
\end{equation*}
since then we can apply the statement to the matrix
\begin{equation*}
AB^*=(ab_{j,k})_{j,k=2}^{m}\in \RR^{(m-1)\times(m-1)},
\end{equation*}
which satisfies $\det AB^{*}=\det AB=\pm 1$.
We do this in two steps. First we show that we can get $AB$ with first row entries
\begin{equation*}
ab_{1,1}=\gcd(a_{1,1},\dots,a_{1,m}),ab_{1,2}=\dots=ab_{1,m}=0
\end{equation*}
and then we conclude that $\gcd(a_{1,1},\dots,a_{1,m})=1$.
Multiplying $A$ by the diagonal matrix $B_0$ whose $i$-th diagonal entry is $\operatorname{sgn}(a_{1,i})$ if $a_{1,i}\neq 0$ and $1$ otherwise gives us a matrix $AB_0$ with only non-negative entries in the first row, and our plan is then to emulate the Euclidean algorithm. We take the last two entries of the first row of $AB_0$ (if non-zero) and keep on subtracting the lesser entry from the greater one until the two of them are equal to $\gcd(a_{1,m-1},a_{1,m})$ and zero. If needed, we then swap those entries so that the latter one equals zero.
\noindent The subtractions correspond to multiplication on the right by matrices of the form
\begin{align*}
\begin{bmatrix}
I_{m-2} & & \\
& 1 & 0 \\
& -1 & 1
\end{bmatrix}\quad \text{ or }\quad
\begin{bmatrix}
I_{m-2} & & \\
& 1 & -1 \\
& 0 & 1
\end{bmatrix},
\end{align*}
respectively. Swapping is done via multiplication by a matrix of the form
\begin{align*}
\begin{bmatrix}
I_{m-2} & & \\
& 0 & 1 \\
& 1 & 0
\end{bmatrix}.
\end{align*}
We are only interested in the entries of the first row and thus are not concerned with how these multiplications affect the other rows. We then continue analogously by applying the same procedure to the next two non-zero entries of the first row, and so on, using suitably adjusted multiplication matrices. Any such operation is done via multiplication by a matrix of determinant $\pm 1$, and in the end we obtain a matrix $AB$ with first row entries as desired.
Finally, it is easy to see that $\gcd(a_{1,1},\dots,a_{1,m})=1$. Assume this is not the case. Then, it follows that
\begin{equation*}
|\det AB^*|=1/\gcd(a_{1,1},\dots,a_{1,m})<1.
\end{equation*}
Hence, by Minkowski's Theorem (see Theorem \ref{Minkowski}), there exists a non-zero vector $\bz^*\in\ZZ^{m-1}$ such that $0<|AB^*\bz^*|<1$. Let $\bz=(0,\bz^*)$. It follows that $0<|AB\bz|<1$, contradicting the assumption \eqref{maxeq2}.
Now we show that Corollary \ref{HajThm1} implies Theorem \ref{HajThm2}. Let $AB$ be as given above. The matrix $B^{-1}$ has only integer entries and so the same is true for the first row of $A=ABB^{-1}$.
\end{proof}
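The reduction of the first row used in the proof is nothing but the Euclidean algorithm carried out by elementary integer column operations. The Python sketch below is purely illustrative of this single step (it is not an implementation of the proof): given an integer row vector, it produces an integer matrix $B$, built as a product of elementary column operations and hence of determinant $\pm 1$, with $\mathrm{row}\cdot B=(\gcd,0,\dots,0)$.
\begin{verbatim}
# Illustration only: given an integer row vector, build an integer matrix B of
# determinant +-1 (a product of elementary column operations) such that
# row * B = (gcd, 0, ..., 0).
import math
from functools import reduce

def reduce_row(row):
    m = len(row)
    r = list(row)
    B = [[int(i == j) for j in range(m)] for i in range(m)]      # identity

    def col_addmul(j, i, t):      # column_j -= t * column_i (applied to r and B)
        r[j] -= t * r[i]
        for k in range(m):
            B[k][j] -= t * B[k][i]

    def col_swap(i, j):
        r[i], r[j] = r[j], r[i]
        for k in range(m):
            B[k][i], B[k][j] = B[k][j], B[k][i]

    for i in range(m):            # make all entries non-negative (negate columns)
        if r[i] < 0:
            r[i] = -r[i]
            for k in range(m):
                B[k][i] = -B[k][i]

    while sum(1 for x in r if x != 0) > 1:      # Euclidean algorithm on the row
        i = min((j for j in range(m) if r[j] != 0), key=lambda j: r[j])
        for j in range(m):
            if j != i and r[j] != 0:
                col_addmul(j, i, r[j] // r[i])
    i = next(j for j in range(m) if r[j] != 0)  # move the gcd to the front
    if i != 0:
        col_swap(0, i)
    return r, B

row = [12, 18, -30, 7]
reduced, B = reduce_row(row)
assert reduced == [sum(row[k] * B[k][j] for k in range(4)) for j in range(4)]
print(reduced, reduce(math.gcd, map(abs, row)))  # [1, 0, 0, 0] and 1
\end{verbatim}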
We also need to make use of another property of lattices. Recall that, given a full-rank lattice $\Lambda=\Lambda(A)\subset\RR^m$, for $j\in\{1,\dots,m\}$, its \textit{$j$-th successive minimum} is defined as
\begin{equation*}
\mu_j=\mu_j(\Lambda):=\inf\left\{r>0:\Lambda\cap \bar{B}(\0,r)\text{ contains }j\text{ linearly independent vectors}\right\},
\end{equation*}
where $\bar{B}$ denotes a closed ball. Clearly, $0<\mu_1\leq\mu_2\leq\dots\leq\mu_m<\infty$. Furthermore, Minkowski proved the following fundamental theorem.
\begin{theorem}[Minkowski's Second Theorem]\label{Thm:Mink2nd}
Let $\Lambda=\Lambda(A)\subset\RR^m$ be a full-rank lattice with successive minima $\mu_1,\dots,\mu_m$. Then
\begin{equation*}
\frac{2^m}{m!}\det A\leq\prod\limits_{j=1}^{m}\mu_j\leq 2^m\det A.
\end{equation*}
\end{theorem}
In particular, Theorem \ref{Thm:Mink2nd} implies that if we have a lower bound $\mu_1>\varepsilon>0$, then all the successive minima are uniformly bounded by $\mu_j<2^m\varepsilon^{-(m-1)}\det A$.
\subsection{Results needed to prove Theorem \ref{Con1}}
We will be using the following standard result in measure theory.
\begin{theorem}[Lebesgue Density Theorem]\label{LDT}
Let $\mu$ be a Radon measure on $\RR^n$. If $A\subset\RR^n$ is $\mu$-measurable, then the limit
\begin{equation}\label{Eqn:LDT}
\lim\limits_{r\rightarrow 0}\frac{\mu\left(A\cap B_r(\bx)\right)}{\mu\left( B_r(\bx)\right)}
\end{equation}
exists and equals $1$ for $\mu$-almost all $\bx\in A$ and equals $0$ for $\mu$-almost all $\bx\in\RR^n\backslash A$.
\end{theorem}
Theorem \ref{LDT} can be found in \cite{Harman}. Importantly, it does not depend on the choice of metric, so we can apply it to the distance $d$ as defined below. In our case, the measure $\mu$ is simply the Lebesgue measure $\lambda_n$.
Given a vector $\bi=(i_1,\dots,i_n)\in{\rm I}^n$, let $i_{min}\coloneqq\min\limits_{1\leq j\leq n}i_j$. Then $\bar{i}_j\coloneqq\frac{i_{min}}{i_j}\leq 1$ for $1\leq j\leq n$ and so $d_j(u,v)\coloneqq |u-v|^{\bar{i}_j}$ is a metric on $\RR$ for $1\leq j\leq n$. Thus,
\begin{equation}\label{Eqn:DefMetric}
d(\bx,\by)\coloneqq\max\limits_{1\leq j\leq n}d_j(x_j,y_j)
\end{equation}
is a metric on $\RR^n$. We will refer to balls with respect to the metric $d$ as $d$-balls and denote by $B_r^d(\bx)$ a $d$-ball centred at $\bx\in\RR^n$ of radius $r$. The following generalisation of a result in \cite{VBSV} from balls with respect to $\max$-norm to $d$-balls will be used to prove Theorem \ref{Con1}. For better readability within the proof, we will use the notation~$|\cdot |$ to refer to the $n$-dimensional Lebesgue measure $\lambda_n$ of a set.
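Since the metric $d$ is perhaps less familiar than the $\max$-norm, the Python sketch below (purely illustrative; the weight vector and radius are arbitrary) checks by sampling that a $d$-ball of radius $r$ is an axis-parallel box with $j$-th half-side $r^{1/\bar{i}_j}$, and that doubling the radius scales its volume by the factor $2^{s}$ with $s=\sum_j i_j/i_{min}$, which is exactly the ratio used in the proof of Theorem \ref{ConBV} below.
\begin{verbatim}
# Illustration only: d-balls are boxes; doubling the radius scales volume by 2^s.
import random

i = (0.5, 0.3, 0.2)                      # sample weight vector (i_1, ..., i_n)
i_min = min(i)
ibar = [i_min / ij for ij in i]          # exponents ibar_j <= 1

def d(x, y):
    return max(abs(xj - yj) ** e for xj, yj, e in zip(x, y, ibar))

r = 0.1
half_sides = [r ** (1 / e) for e in ibar]  # half-sides of the d-ball of radius r

mismatches = 0
for _ in range(10000):                   # sampled check of the box description
    x = [random.uniform(-2 * h, 2 * h) for h in half_sides]
    in_box = all(abs(xj) < h for xj, h in zip(x, half_sides))
    mismatches += (in_box != (d(x, (0.0, 0.0, 0.0)) < r))

s = sum(ij / i_min for ij in i)
ratio = 1.0
for e in ibar:
    ratio *= ((2 * r) ** (1 / e)) / (r ** (1 / e))
print(mismatches, ratio, 2 ** s)         # expect 0 mismatches and ratio = 2^s
\end{verbatim}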
\begin{theorem}\label{ConBV}
Let $(A_k)_{k\in\NN}$ be a sequence of $d$-balls in $\RR^n$ with $|A_k|\rightarrow 0$ as $k\rightarrow\infty$. Let $(U_k)_{k\in\NN}$ be a sequence of Lebesgue measurable sets such that $U_k\subset A_k$ for all $k$. Assume that, for some $c>0$, $|U_k|>c|A_k|$ for all $k$. Then the sets
\begin{equation*}
\mathcal{U}=\limsup\limits_{k\rightarrow\infty}U_k\coloneqq \bigcap\limits_{j=1}^{\infty}\bigcup\limits_{k=j}^{\infty}U_k\quad \text{ and }\quad \mathcal{A}=\limsup\limits_{k\rightarrow\infty}A_k\coloneqq \bigcap\limits_{j=1}^{\infty}\bigcup\limits_{k=j}^{\infty}A_k
\end{equation*}
have the same Lebesgue measure.
\end{theorem}
\begin{remark*}
Given a constant $c>0$, we already know from the weighted version of Khintchine's Theorem that the sets $W_n(\bi,\psi)$ and $W_n(\bi,c\psi)$ have the same Lebesgue measure (see Theorem \ref{Thm:WeightedKhin}). Theorem \ref{ConBV} implies that this property is shared by more general $\limsup$ sets.
\end{remark*}
\begin{proof}
Let $\mathcal{U}_j\coloneqq \bigcup_{k\geq j}U_k$ and $\mathcal{C}_j\coloneqq \mathcal{A}\backslash\mathcal{U}_j$. Then $\mathcal{U}_j\supset\mathcal{U}_{j+1}$ and $\mathcal{C}_j\subset\mathcal{C}_{j+1}$. Define
\begin{equation*}
\mathcal{C}\coloneqq\mathcal{A}\backslash\mathcal{U}=\mathcal{A}\backslash\bigcap\limits_{j=1}^{\infty}\mathcal{U}_j=\bigcup\limits_{j=1}^{\infty}\left(\mathcal{A}\backslash\mathcal{U}_j\right)=\bigcup_{j=1}^{\infty}\mathcal{C}_j.
\end{equation*}
We are to show that $\mathcal{C}$ has measure zero or, equivalently, that every $\mathcal{C}_j$ has measure zero.
Assume the contrary. Then there is an $\ell\in\NN$ such that $|\mathcal{C}_\ell|>0$ and therefore there is a density point $\bx_0$ of $\mathcal{C}_\ell$, i.e. a point $\bx_0$ for which the limit in \eqref{Eqn:LDT} is equal to $1$. Since $\bx_0\in\mathcal{A}$, we know that $\bx_0\in A_{j_k}$ for a sequence $(j_k)_{k\in\NN}$. As $|A_{j_k}|$ tends to zero, we can conclude that
\begin{equation}\label{Eqn:Claim}
|\mathcal{C}_{\ell}\cap A_{j_k}|\sim|A_{j_k}|,\quad\text{as }k\rightarrow\infty,
\end{equation}
which is shown by the following considerations.
If $A_{j_k}$ is a $d$-ball of radius $r_{j_k}$ containing $\bx_0$, then $A_{j_k}$ will be contained in a $d$-ball $B^d_{2r_{j_k}}(\bx_0)$. Indeed, doubling the radius corresponds to extending the $j$-th side length by the factor
\begin{equation*}
2^{1/\bar{i}_j}=2^\frac{i_j}{i_{min}}\geq 2,\quad j\in\{1,\dots,n\}.
\end{equation*}
Comparing Lebesgue measures, it follows that
\begin{equation}\label{Eqn:LDT1}
\frac{|B^d_{2r_{j_k}}(\bx_0)|}{|A_{j_k}|}=2^s, \text{\qquad where \qquad} s\coloneqq\sum\limits_{j=1}^n\frac{i_j}{i_{min}},
\end{equation}
and thus
\begin{equation}\label{Eqn:LDT2}
\frac{|B^d_{2r_{j_k}}(\bx_0)\backslash A_{j_k}|}{|B^d_{2r_{j_k}}(\bx_0)|}=1-\frac{1}{2^s},
\end{equation}
since $A_{j_k}$ is fully contained in $B^d_{2r_{j_k}}(\bx_0)$. The Lebesgue density theorem tells us that for any $\varepsilon>0$ and $\delta$ small enough,
\begin{equation}\label{Eqn:LDT3}
\frac{|\mathcal{C}_{\ell}\cap B^d_{\delta}(\bx_0)|}{|B^d_{\delta}(\bx_0)|}>1-\varepsilon.
\end{equation}
Combining \eqref{Eqn:LDT1}, \eqref{Eqn:LDT2} and \eqref{Eqn:LDT3}, we see that
\begin{align*}
\frac{|\mathcal{C}_{\ell}\cap A_{j_k}|}{|A_{j_k}|}&=\frac{|\mathcal{C}_{\ell}\cap A_{j_k}|}{|B^d_{2r_{j_k}}(\bx_0)|}\frac{|B^d_{2r_{j_k}}(\bx_0)|}{|A_{j_k}|}\\[1ex]
&\geq\frac{|\mathcal{C}_{\ell}\cap B^d_{2r_{j_k}}(\bx_0)|-|B^d_{2r_{j_k}}(\bx_0)\backslash A_{j_k}|}{|B^d_{2r_{j_k}}(\bx_0)|}\frac{|B^d_{2r_{j_k}}(\bx_0)|}{|A_{j_k}|}\\[1ex]
&=\left(\frac{|\mathcal{C}_{\ell}\cap B^d_{2r_{j_k}}(\bx_0)|}{|B^d_{2r_{j_k}}(\bx_0)|}-\frac{|B^d_{2r_{j_k}}(\bx_0)\backslash A_{j_k}|}{|B^d_{2r_{j_k}}(\bx_0)|}\right)\frac{|B^d_{2r_{j_k}}(\bx_0)|}{|A_{j_k}|}\\[1ex]
&>\left(1-\varepsilon-\left( 1-\frac{1}{2^s}\right)\right)2^s\\[1ex]
&=1-2^s\varepsilon,\quad \text{ for } r_{j_k} \text{ small enough}.
\end{align*}
The value of $\varepsilon$ can be chosen to be arbitrarily small by \eqref{Eqn:LDT3}. Hence, this quotient tends to~$1$ as $k\rightarrow\infty$, which proves \eqref{Eqn:Claim}.
Since $\mathcal{C}_j\supset\mathcal{C}_{\ell}$ for all $j\geq \ell$, it follows that
\begin{equation}\label{cont}
|\mathcal{C}_{j_k}\cap A_{j_k}|\sim|A_{j_k}|\quad \text{ as }k\rightarrow\infty.
\end{equation}
On the other hand, by definition, $\mathcal{C}_{j_k}\cap U_{j_k}=\emptyset$. Using that $|U_k|>c|A_k|$ for all $k$, we get that
\begin{equation*}
|A_{j_k}|\geq |U_{j_k}|+|\mathcal{C}_{j_k}\cap A_{j_k}|\geq c|A_{j_k}|+|\mathcal{C}_{j_k}\cap A_{j_k}|,
\end{equation*}
and thus
\begin{equation*}
|\mathcal{C}_{j_k}\cap A_{j_k}|<(1-c)|A_{j_k}|
\end{equation*}
for $k$ sufficiently large. This is a contradiction to \eqref{cont}. Thus, every set $\mathcal{C}_j$ has zero Lebesgue measure, which completes the proof.
\end{proof}
The final auxiliary result is due to Cassels \cite{Cassels}. It relates homogeneous and inhomogeneous approximation properties of linear forms.
\begin{theorem}\label{Cassels}
Let $L_1,\dots,L_{\ell}$ be linear forms in the $\ell$ variables $\bz=(z_1,\dots,z_\ell)$ given by
\begin{equation*}
L_k(\bz)=a_{k,1}z_1+\dots+a_{k,\ell}z_\ell,\quad (1\leq k\leq \ell),
\end{equation*}
with real coefficients $a_{k,j}$, $1\leq k,j\leq \ell$. Assume the system of linear forms has determinant $\Delta=\det(a_{k,j}) \neq 0$ and suppose that the only integer solution of
\begin{equation}\label{maxeq4}
\max\limits_{1\leq k\leq \ell}|L_k(\bz)|<1
\end{equation}
is $\bz=\0$. Then, for all real vectors $\boldsymbol{\gamma}=(\gamma_1,\dots,\gamma_\ell)\in\RR^{\ell}$, there are integer solutions of
\begin{equation*}
\max\limits_{1\leq k\leq \ell}|L_k(\bz)-\gamma_k|<\frac{1}{2}(\lfloor|\Delta|\rfloor+1),
\end{equation*}
where $\lfloor\cdot\rfloor$ denotes the integer part of a real number.
\end{theorem}
Essentially, Theorem \ref{Cassels} tells us the following. If the values of a collection of linear forms at non-zero integer points are uniformly bounded away from the origin, then these linear forms can approximate any inhomogeneous target vector $\boldsymbol{\gamma}$ at integer points with a uniformly bounded error. This will be used to deduce twisted approximation properties of non-singular vectors, see Theorem \ref{Thm1}.
\section{Proof of Theorem \ref{ThmDS}}
We will show that any vector not contained in $\cD_n(\bi)$ cannot be badly approximable. If $\balpha=(\alpha_1,\dots,\alpha_n)$ is not in $\cD_n(\bi)$, then for any $c<1$ there exists an infinite sequence of integers $Q$ such that \eqref{DI} has no solution $q<Q$. This implies that there is a sequence $(Q_k)_{k\in\NN}$ such that
\begin{equation*}
\norm{q\alpha_j}>(1-\frac{1}{2^k})Q_k^{-i_j}
\end{equation*}
for all $q\leq Q_k$ and some $j\in \{1,\dots,n\}$. In other words, we have
\begin{equation}\label{maxeq1}
\max\left\{Q_k^{-1}|q|,Q_k^{i_1}|q\alpha_1+p_1|,\dots,Q_k^{i_n}|q\alpha_n+p_n|\right\}>1-\frac{1}{2^k}
\end{equation}
for all integers $q,p_1,\dots,p_n$ not all equal to $0$. The $n+1$ linear forms
\begin{equation*}
Q_k^{-1}q,\ Q_k^{i_1}(q\alpha_1+p_1),\dots,\ Q_k^{i_n}(q\alpha_n+p_n)
\end{equation*}
define a lattice of determinant $1$ in $(n+1)$-dimensional space. By \eqref{maxeq1}, this lattice has no non-zero point within distance $1/2$ of the origin. By Theorem \ref{Thm:Mink2nd}, such a lattice has a basis of $n+1$ points, the coordinates of which are all bounded from above by a numerical constant. This implies that there exists a linear transformation with integral coefficients and determinant $1$ from $q,p_1,\dots,p_n$ to $x_0,\dots,x_n$ such that
\begin{align}\label{trafo1}
\begin{cases}
Q_k^{-1}q&=\ \theta_{0,0}^{(k)}x_0+\theta_{0,1}^{(k)}x_1+\dots+\theta_{0,n}^{(k)}x_n,\\
Q_k^{i_1}(q\alpha_1+p_1)&=\ \theta_{1,0}^{(k)}x_0+\theta_{1,1}^{(k)}x_1+\dots+\theta_{1,n}^{(k)}x_n,\\
\vdots\\
Q_k^{i_n}(q\alpha_n+p_n)&=\ \theta_{n,0}^{(k)}x_0+\theta_{n,1}^{(k)}x_1+\dots+\theta_{n,n}^{(k)}x_n,
\end{cases}
\end{align}
where the absolute values of the $\theta_{\ell,m}^{(k)}$ are bounded by a uniform constant $C$ for all $\ell,m,k$. The transformations depend on $k$, but, for each $k$, the determinant of $(\theta_{\ell,m}^{(k)})$ is equal to $1$. Define the matrices $\Theta_k=(\theta_{\ell,m}^{(k)})$ for $k\in\NN$. The sequence $(\Theta_k)_{k\in\NN}$ is contained in a compact subset of $\SL_{n+1}(\RR)$. Hence there is a subsequence $(\Theta_{\kappa})_{\kappa}$ converging to an element of $\SL_{n+1}(\RR)$. We denote this limit by $\Theta=(\theta_{\ell,m})$ and get the linear forms
\begin{align}\label{trafo2}
\begin{cases}
X_0&=\ \theta_{0,0}x_0+\theta_{0,1}x_1+\dots+\theta_{0,n}x_n\\
X_1&=\ \theta_{1,0}x_0+\theta_{1,1}x_1+\dots+\theta_{1,n}x_n\\
\vdots\\
X_n&=\ \theta_{n,0}x_0+\theta_{n,1}x_1+\dots+\theta_{n,n}x_n
\end{cases}
\end{align}
of determinant $1$ with the property that
\begin{equation*}
\max\left\{|X_0|,|X_1|,\dots,|X_n|\right\}\geq 1
\end{equation*}
for all integer vectors $(x_0,\dots,x_n)\neq (0,\dots,0)$. Indeed, if there was a non-zero tuple $(x_0^*,\dots,x_n^*)$ satisfying
\begin{equation*}
\max\left\{|X_0|,|X_1|,\dots,|X_n|\right\}<1,
\end{equation*}
then putting $(x_0^*,\dots,x_n^*)$ in \eqref{trafo1} would violate \eqref{maxeq1} for large enough values of $k$. By Corollary \ref{HajThm1}, after a possible integral transformation of determinant $\pm 1$, we get
\begin{equation*}
X_{\pi^{-1}(\ell)}=\phi_{\ell 0}y_0+\phi_{\ell 1}y_1+\dots+y_{\ell},\quad (0\leq \ell\leq n),
\end{equation*}
with all other coefficients being equal to zero. Independent of the permutation $\pi$, it is always possible to satisfy either
\begin{equation*}
X_0=0\quad\quad \text{ or }\quad\quad X_1=\dots=X_n=0
\end{equation*}
with the non-zero integer vector
\begin{equation*}
(y_0,\dots,y_{n-1},y_n)=(0,\dots,0,1).
\end{equation*}
Hence, the same is true of the linear forms in \eqref{trafo2} with integers $x_0,\dots,x_n$ not all equal to $0$. On substituting these into \eqref{trafo1}, we obtain, for any $\varepsilon>0$ and $k$ sufficiently large (depending on $\varepsilon$), either a solution of
\begin{equation*}
Q_k^{-1}q<C_{1},\ Q_k^{i_1}|q\alpha_1+p_1|<\varepsilon,\dots,\ Q_k^{i_n}|q\alpha_n+p_n|<\varepsilon,
\end{equation*}
with $C_{1}$ independent of $\varepsilon$, or a solution to
\begin{equation*}
Q_k^{-1}q<\varepsilon,\ Q_k^{i_1}|q\alpha_1+p_1|<C_{1},\dots,\ Q_k^{i_n}|q\alpha_n+p_n|<C_{1}.
\end{equation*}
In the first case, setting $N_k=C_1Q_k$ shows the existence of $q\in\NN$ such that
\begin{equation*}
q^{i_j}\norm{q\alpha_j}\leq N_k^{i_j}\norm{q\alpha_j}<\varepsilon C_1^{i_j}\leq \varepsilon C_1,\quad (1\leq j\leq n).
\end{equation*}
In the second case, setting $N_k=\varepsilon Q_k$ gives us a $q\in\NN$ satisfying
\begin{equation*}
q^{i_j}\norm{q\alpha_j}\leq N_k^{i_j}\norm{q\alpha_j}<\varepsilon^{i_j} C_1\leq \varepsilon^{i_{min}} C_1,\quad (1\leq j\leq n).
\end{equation*}
Since $\varepsilon>0$ can be chosen arbitrarily small, in both cases $\balpha$ cannot be $\bi$-badly approximable. This completes the proof of Theorem \ref{ThmDS}.
\section{Proof of Theorem \ref{Con1}}
Theorem \ref{ConBV} is the main ingredient used in the proof of Theorem \ref{Con1}. Hence, we start by preparing the ground for applying Theorem \ref{ConBV}.
If $\balpha=(\alpha_1,\dots,\alpha_n)$ is not in $\cS_n(\bi)$, then there exists an $\varepsilon(\balpha)\in(0,1)$ such that the system of inequalities
\begin{equation*}
\norm{q\alpha_j}<\varepsilon(\balpha)Q_k^{-i_j},\quad j\in \{1,\dots,n\}
\end{equation*}
has no integer solution $q\leq Q_k$ for an infinite increasing sequence $(Q_k)_{k\in\NN}$. In other words, for each $q\leq Q_k$ there is some $j\in \{1,\dots,n\}$ with $\norm{q\alpha_j}\geq\varepsilon(\balpha)Q_k^{-i_j}$. This implies that
\begin{equation}\label{maxeq3}
\max\left\{Q_k^{-1}|q|,\varepsilon(\balpha)^{-1} Q_k^{i_1}|q\alpha_1+p_1|,\dots,\varepsilon(\balpha)^{-1} Q_k^{i_n}|q\alpha_n+p_n|\right\}\geq 1
\end{equation}
for all integers $q,p_1,\dots,p_n$ not all equal to $0$.
For any fixed $Q_k$, the $n+1$ linear forms appearing in \eqref{maxeq3} in the $n+1$ variables $q,p_1,\dots,p_n$ form a system of linear forms of determinant $\varepsilon(\balpha)^{-n}$ satisfying condition \eqref{maxeq4}. Hence, by Theorem \ref{Cassels}, for any $Q_k$ and any $\boldsymbol{\gamma}=(\gamma_1,\dots,\gamma_n)\in {\rm I}^n$, there exists a non-zero integer vector $(q,p_1,\dots,p_n)$ satisfying
\begin{equation*}
\max\left\{Q_k^{-1}|q|,\varepsilon(\balpha)^{-1} Q_k^{i_1}\left|q\alpha_1+p_1-\frac{\gamma_1}{\varepsilon(\balpha)}\right|,\dots,\varepsilon(\balpha)^{-1} Q_k^{i_n}\left|q\alpha_n+p_n-\frac{\gamma_n}{\varepsilon(\balpha)}\right|\right\}< \delta,
\end{equation*}
where $\delta=\frac{1}{2}(\lfloor|\varepsilon(\balpha)^{-n}|\rfloor+1)$. Setting $\tilde{Q}_k=\delta Q_k$ and $\tilde{\gamma}_j=\gamma_j/\varepsilon(\balpha)$, it follows that there exists a positive integer solution $q$ to the system of inequalities
\begin{equation*}
\begin{aligned}
\norm{ q\alpha_j-\tilde{\gamma}_j} &<\delta^{1+i_j}\varepsilon(\balpha) \tilde{Q}_k^{-i_j},\quad (1\leq j\leq n),\\[1ex]
q &< \tilde{Q}_k.
\end{aligned}
\end{equation*}
By letting
\begin{equation*}
C(\balpha)=\max\limits_{1\leq j\leq n}\delta^{1+i_j}\varepsilon(\balpha),
\end{equation*}
we have proved the following statement.
\begin{samepage}
\begin{theorem}\label{Thm1}
Let $\balpha\notin\cS_n(\bi)$. Then, all $\bbeta \in {\rm I}^n$ are uniformly $\bi$-approximable by the sequence $(q\balpha)_{q\in\NN}$, i.e. there exists a constant $C(\balpha)>0$ such that for all $\bbeta \in {\rm I}^n$ there are infinitely many $q\in\NN$ satisfying
\begin{equation*}
\max\limits_{1\leq j\leq n}q^{i_j}\norm{q\alpha_j-\beta_j}<C(\balpha).
\end{equation*}
\end{theorem}
\end{samepage}
In other words, every point $\bbeta\in {\rm I}^n$ is contained in infinitely many sets of the form
\begin{equation*}
\prod\limits_{j=1}^n[q\alpha_j-p_j-C(\balpha)q^{-i_j},q\alpha_j-p_j+C(\balpha)q^{-i_j}].
\end{equation*}
These sets are not $d$-balls, but, assuming without loss of generality that $C(\balpha)\geq 1$ (which can be arranged by enlarging the constant) and extending the sets slightly, we get that
\begin{equation}\label{LSup}
\bigcap\limits_{k=1}^{\infty}\bigcup\limits_{q=k}^{\infty}\left(\bigcup\limits_{|\bp|<q}A^{\balpha}_{\bp,q}\right)={\rm I}^n,
\end{equation}
where our $\limsup$ set is built from the sets
\begin{equation}\label{Eqn:Sets}
A^{\balpha}_{\bp,q}:=\prod\limits_{j=1}^n[q\alpha_j-p_j-C(\balpha)^{\frac{i_j}{i_{min}}}q^{-i_j},q\alpha_j-p_j+C(\balpha)^{\frac{i_j}{i_{min}}}q^{-i_j}]
\end{equation}
with $q\in\NN$ and $\bp\in\ZZ^n$. By setting $r(\balpha,q)=C(\balpha)q^{-i_{min}}$ it follows that
\begin{equation*}
A^{\balpha}_{\bp,q}=B^d_{r(\balpha,q)}(q\balpha-\bp).
\end{equation*}
Indeed, if we recall the definition of the metric $d$ given in \eqref{Eqn:DefMetric}, we see that
\begin{align*}
d_j(u,v)\leq C(\balpha)q^{-i_{min}}&\Leftrightarrow |u-v|^{\bar{i}_j}\leq C(\balpha)q^{-i_{min}}\\[1ex]
&\Leftrightarrow |u-v|\leq (C(\balpha)q^{-i_{min}})^{\frac{i_j}{i_{min}}}\\[1ex]
&\Leftrightarrow |u-v|\leq C(\balpha)^{\frac{i_j}{i_{min}}}q^{-i_j}.
\end{align*}
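To make these computations concrete, the following short Python sketch (included here purely as an illustration, with function names of our own choosing) evaluates the metric $d$ and the Lebesgue measure of a $d$-ball for a given weight vector $\bi$, and confirms the scaling behaviour used in the proof of Theorem \ref{ConBV}:
\begin{verbatim}
# Illustrative sketch (not part of the formal argument): the weighted
# metric d and the Lebesgue measure of a d-ball of radius r.
def d(x, y, weights):
    i_min = min(weights)
    return max(abs(u - v) ** (i_min / i_j)
               for u, v, i_j in zip(x, y, weights))

def d_ball_measure(r, weights):
    # the j-th side half-length of a d-ball of radius r is r**(i_j/i_min)
    i_min = min(weights)
    measure = 1.0
    for i_j in weights:
        measure *= 2 * r ** (i_j / i_min)
    return measure

# doubling the radius scales the measure by 2**s with s = sum(i_j)/i_min
weights = (0.25, 0.75)
print(d_ball_measure(0.2, weights) / d_ball_measure(0.1, weights))  # 2**4 = 16
\end{verbatim}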
As we are dealing with a countable collection of $d$-balls of the form $A^{\balpha}_{\bp,q}$, it is possible to rewrite them as a sequence $(A_k)_{k\in\NN}$ and \eqref{LSup} is equivalent to the statement that
\begin{equation*}
\limsup_{k\rightarrow\infty}A_k={\rm I}^n.
\end{equation*}
This means we are in a situation where we can apply Theorem \ref{ConBV}.
\begin{proof}[Proof of Theorem \ref{Con1}]
We are to show that, for any given $\varepsilon>0$, the $\limsup$ set
\begin{equation*}
\bigcap\limits_{k=1}^{\infty}\bigcup\limits_{q=k}^{\infty}\left(\bigcup\limits_{|\bp|<q}U^{\balpha}_{\bp,q}(\varepsilon)\right)
\end{equation*}
has full Lebesgue measure, where
\begin{equation*}
U^{\balpha}_{\bp,q}(\varepsilon)=\prod\limits_{j=1}^n[q\alpha_j-p_j-\varepsilon q^{-i_j},q\alpha_j-p_j+\varepsilon q^{-i_j}].
\end{equation*}
For $\varepsilon$ small enough, any such set $U^{\balpha}_{\bp,q}(\varepsilon)$ is contained in a set $A^{\balpha}_{\bp,q}$ as defined by \eqref{Eqn:Sets}. Furthermore,
\begin{equation*}
\frac{|U^{\balpha}_{\bp,q}(\varepsilon)|}{|A^{\balpha}_{\bp,q}|}=\frac{2^n\varepsilon^n q^{-1}}{2^n C(\balpha)^{\sum\limits_{j=1}^n\frac{i_j}{i_{min}}}q^{-1}}=\frac{\varepsilon^n}{C(\balpha)^s},
\end{equation*}
a ratio which does not depend on $\bp$ or $q$. Thus the conditions for Theorem \ref{ConBV} are satisfied and using \eqref{LSup} we can conclude that
\begin{equation*}
\left|\bigcap\limits_{k=1}^{\infty}\bigcup\limits_{q=k}^{\infty}\left(\bigcup\limits_{|\bp|<q}U^{\balpha}_{\bp,q}(\varepsilon)\right)\right|=
\left|\bigcap\limits_{k=1}^{\infty}\bigcup\limits_{q=k}^{\infty}\left(\bigcup\limits_{|\bp|<q}A^{\balpha}_{\bp,q}\right)\right|=1.
\qedhere
\end{equation*}
\end{proof}
\section{Future development}
It is worth noting that, in the non-weighted case, Theorem \ref{Thm1} is known to be an equivalence.
\begin{theorem}[Theorem V.XIII in \cite{Cassels}]\label{Thm:Casselsnonsing}
Let $\balpha\in {\rm I}^n$. Then, $\balpha$ is non-singular if and only if all $\bbeta \in {\rm I}^n$ are uniformly approximable by the sequence $(q\balpha)_{q\in\NN}$, i.e. there exists a constant $C(\balpha)>0$ such that for all $\bbeta \in {\rm I}^n$ there are infinitely many $q\in\NN$ satisfying
\begin{equation*}
\max\limits_{1\leq j\leq n}\norm{q\alpha_j-\beta_j}<C(\balpha)q^{-1/n}.
\end{equation*}
\end{theorem}
This is shown through the construction of a vector $\bbeta\in{\rm I}^n$ for which the inequality
\begin{equation*}
\max\limits_{1\leq j\leq n}\norm{q\alpha_j-\beta_j}<Cq^{-1/n}
\end{equation*}
has only finitely many solutions $q\in\NN$ for any given constant $C>0$. This also implies that there is no analogue of Dirichlet's Theorem for an arbitrary inhomogeneous constant, as discussed in Remark \ref{Rmk:InhomDir}. We are confident that Theorem~\ref{Thm:Casselsnonsing} can be extended to the weighted case, but we have not yet finalised a proof.
Another interesting question, in both the standard and the weighted case, is whether the conclusion of Theorem \ref{Con1} holds if and only if $\balpha$ is non-singular. This would be a strengthening of Theorem \ref{Thm:Casselsnonsing} and give us the following Kurzweil-type statement:
\begin{Conjecture}
For $\varepsilon>0$, denote by $\psi_{\varepsilon}$ the approximating function given by $\psi_{\varepsilon}(q)=\varepsilon q^{-1}$. Then
\begin{equation*}
\bigcap\limits_{\varepsilon>0}W^{\times}_n(\bi,\psi_{\varepsilon})={\rm I}^n\setminus \cS_n(\bi).
\end{equation*}
\end{Conjecture}
\begin{bibdiv}
\setcounter{chapter}{0}
\begin{biblist}
\addcontentsline{toc}{chapter}{List of References}
\bib{An}{article}{author = {An, J.},
title = {Badziahin--Pollington--Velani's Theorem and Schmidt's game},
journal = {Bull. London Math. Soc.}, volume = {45}, year = {2013},
number = {4}, pages = {712--733}, }
\bib{AvdD}{incollection}{ author = {Aschenbrenner, M.}, author =
{van den Dries, L.}, title = {Asymptotic differential algebra},
booktitle = {Analyzable functions and applications}, pages =
{49--85}, series = {Contemp. Math., 373}, publisher =
{Amer. Math. Soc., Providence, RI}, year = {2005}, }
\bib{Badziahin}{article}{ author = {Badziahin, D.}, author = {Pollington, A. D.},
author = {Velani, S. L.},
title = {On a problem in simultaneous Diophantine approximation: Schmidt's conjecture},
journal = {Ann. of Math. (2)}, volume = {174}, year = {2011},
number = {3}, pages = {1837--1883}, }
\bib{Baker1}{article}{ author = {Baker, R. C.},
title={Singular n-tuples and Hausdorff dimension},
journal = {Math. Proc. Cambridge Philos. Soc.},
volume = {81}, year = {1977}, number = {3}, pages = {377--385}, }
\bib{Baker2}{article}{ author = {Baker, R. C.},
title={Singular n-tuples and Hausdorff dimension II},
journal = {Math. Proc. Cambridge Philos. Soc.},
volume = {111}, year = {1992}, number = {3}, pages = {577--584}, }
\bib{Bermanifolds}{article}{ author = {Beresnevich, V. V.}, title
= {Rational points near manifolds and metric Diophantine
approximation}, journal = {Ann. of Math. (2)}, volume = {175},
date = {2012}, number = {1}, pages = {187--235}, }
\bib{Berschmidt}{article}{ author = {Beresnevich, V. V.},
title = {Badly approximable points on manifolds},
journal = {Inventiones Mathematicae}, volume = {202},
date = {2015}, number = {3}, pages = {1199--1240}, }
\bib{BBDV}{misc}{ author = {Beresnevich, V. V.}, author = {Bernik, V. I.},
author = {Dodson, M.}, author = {Velani, S. L.},
title = {Classical metric Diophantine approximation revisited},
series = {Roth Festschrift - essays in honour of Klaus Roth on the occasion
of his 80th birthday},
publisher = {Cambridge University Press},
year = {2009}, pages = {38--61}, }
\bib{limsup}{article}{ author = {Beresnevich, V. V.}, author =
{Dickinson, D.}, author = {Velani, S. L.}, title = {Measure
theoretic laws for lim sup sets}, journal =
{Mem. Amer. Math. Soc.}, volume = {179}, date = {2006}, number =
{846}, pages = {x+91}, }
\bib{BDVplanarcurves}{article}{ author = {Beresnevich, V. V.},
author = {Dickinson, D.}, author = {Velani, S. L.}, title =
{Diophantine approximation on planar curves and the distribution
of rational points. With an Appendix II by
R. C. Vaughan}, journal = {Ann. of Math. (2)}, volume =
{166}, date = {2007}, number = {2}, pages = {367--426}, }
\bib{nalpha}{misc}{ author = {Beresnevich, V. V.}, author =
{Haynes, A. K.}, author = {Velani, S. L.}, title = {Sums of reciprocals of fractional parts and multiplicative Diophantine approximation},
note = {\url{https://arxiv.org/abs/1511.06862}, preprint 2016}, }
\bib{Lee}{article}{ author = {Beresnevich, V. V.}, author = {Lee, L.},
author = {Vaughan, R. C.}, author = {Velani, S. L.},
title = {Diophantine approximation on manifolds and lower bounds for Hausdorff dimension},
note = {In preparation (2017)}, }
\bib{DAaspects}{misc}{ author = {Beresnevich, V. V.}, author = {Ram{\'i}rez, F.~A.},
author = {Velani, S. L.}, title = {Metric Diophantine Approximation: aspects of recent work},
note = {\url{https://arxiv.org/abs/1601.01948}, preprint 2016},}
\bib{BVVZ}{article}{ author = {Beresnevich, V. V.}, author = {Vaughan, R. C.},
author = {Velani, S. L.}, author = {Zorin, E.},
title = {Diophantine approximation on manifolds and the distribution of rational points: contributions to the convergence theory},
journal = {Int. Math. Res. Notices},
volume = {2017}, year = {2017},
number = {10}, pages = {2885--2908}, }
\bib{BVVZ2}{article}{ author = {Beresnevich, V. V.}, author = {Vaughan, R. C.},
author = {Velani, S. L.}, author = {Zorin, E.},
title = {Diophantine approximation on manifolds and the distribution of rational points: contributions to the divergence theory},
note = {In preparation (2017)}}
\bib{MassTrans}{article}{ author = {Beresnevich, V. V.}, author = {Velani, S. L.},
title = {A Mass Transference Principle and the Duffin--Schaeffer conjecture for Hausdorff measures}, journal = {Ann. Math.}, volume = {164}, year = {2006}, pages = {971--992}, }
\bib{VBSV}{article}{ author = {Beresnevich, V. V.}, author = {Velani, S. L.},
title = {A note on zero-one laws in metrical Diophantine approximation},
journal = {Acta Arith.}, volume = {133}, number = {4}, year = {2008}, pages = {363--374}, }
\bib{Zorin}{article}{ author = {Beresnevich, V. V.}, author = {Zorin, E.},
title = {Explicit bounds for rational points near planar curves and metric
Diophantine approximation}, journal = {Adv. Math.}, volume = {225}, year = {2010},
number = {6}, pages = {3064--3087}, }
\bib{Besicovitch}{article}{ author = {Besicovitch, A. S.},
title = {On the sum of digits of real numbers represented in the dyadic system},
journal = {Math. Ann.}, volume = {110}, year = {1934}, pages = {321--330}, }
\bib{Cassels-01law}{article}{ author = {Cassels, J. W. S.}, title
= {Some metrical theorems in Diophantine approximation. I.},
journal = {Proc. Cambridge Philos. Soc.}, volume = {46}, year =
{1950}, pages = {209--218}, }
\bib{Cassels}{book}{ author = {Cassels, J. W. S.}, title = {An
introduction to Diophantine approximation}, series =
{Cambridge Tracts in Mathematics and Mathematical Physics,
No. 45}, publisher = {Cambridge University Press, New York},
date = {1957}, pages = {x + 166}, }
\bib{Cassels-geometry}{book}{ author = {Cassels, J. W. S.}, title
= {An introduction to the geometry of numbers. {C}orrected
reprint of the 1971 edition}, series = {Classics in
Mathematics}, publisher = {Springer-Verlag, Berlin}, year =
{1997}, pages = {viii+344}, }
\bib{Chaika}{article}{ author = {Chaika, J.},
title = {Shrinking targets for IETs: Extending a theorem of Kurzweil},
journal = {Geom. Funct. Anal}, volume = {21}, year = {2011},
number = {5}, pages = {1020--1042}, }
\bib{Cheung}{article}{ author = {Cheung, Y.},
title = {Hausdorff dimension of the set of Singular Pairs},
journal = {Ann. Math.}, volume = {173}, year = {2011}, pages = {127--167},}
\bib{Cheung2}{article}{ author = {Cheung, Y.}, author = {Chevallier, N.},
title = {Hausdorff dimension of singular vectors},
journal = {Duke Math. J.}, volume = {165}, year = {2016}, pages = {2273--2329},}
\bib{Davenport}{article}{ author = {Davenport, H.}, author = {Schmidt, W. M.},
title = {Dirichlet's theorem on diophantine approximation}, journal = {Symposia Mathematica},
volume = {IV}, date = {1970}, pages = {113--132}, }
\bib{Davenport2}{article}{ author = {Davenport, H.}, author = {Schmidt, W. M.},
title = {Dirichlet's theorem on diophantine approximation II}, journal = {Acta Arith.},
volume = {16}, date = {1970}, pages = {413--424}, }
\bib{Dodson}{article}{ author = {Dodson, M.}, author = {Rynne, B.}, author = {Vickers, J.},
title = {Khintchine type theorems on manifolds}, journal = {Acta Arith.},
volume = {57}, year = {1991}, number = {2}, pages = {115--130}, }
\bib{Duffin}{article}{ author = {Duffin, R. J.}, author = {Schaeffer, A. C.},
title = {Khintchine's problem in metric Diophantine approximation},
journal = {Duke Math. J.}, volume = {8}, date = {1941}, pages = {243--255}, }
\bib{Einsiedler}{book}{ author = {Einsiedler, M.}, author = {Ward, T.},
title = {Ergodic Theory with a view towards Number Theory},
series = {Graduate Texts in Mathematics}, publisher = {Springer-Verlag, Berlin},
year={2011},}
\bib{EKL}{article}{ author = {Einsiedler, M.}, author = {Katok, A.},
author = {Lindenstrauss, E.},
title = {Invariant measures and the set of exceptions to Littlewood’s conjecture},
journal = {Ann. of Math. (2)}, volume = {164}, year = {2006},
number = {2}, pages = {513--560}, }
\bib{Falconer}{book}{ author = {Falconer, K.},
title = {Fractal Geometry: Mathematical Foundations and Applications},
publisher = {John Wiley and Sons}, year = {2003},}
\bib{Fayad}{article}{ author = {Fayad, B.},
title = {Mixing in the absence of the shrinking target property},
journal = {Bull. London Math. Soc.}, volume = {38},
year = {2006}, number = {5}, pages = {829--838},}
\bib{Fuchs}{article}{ author = {Fuchs, M.}, author = {Kim, D. H.},
title = {On Kurzweil's 0-1 Law in Inhomogeneous Diophantine Approximation},
journal = {Acta Arith.}, volume = {173}, year = {2016}, number = {1},
pages = {41--57}, }
\bib{Gallagher01}{article}{ author = {Gallagher, P. X.}, title =
{Approximation by reduced fractions},
journal = {J. Math. Soc. Japan}, volume = {13}, date = {1961}, pages = {342--345},}
\bib{Gallagherkt}{article}{ author = {Gallagher, P. X.}, title =
{Metric simultaneous Diophantine approximation. II}, journal =
{Mathematika}, volume = {12}, date = {1965}, pages = {123--127},}
\bib{Hajos}{article}{ author = {Haj{\'o}s, G.},
title = {{\"U}ber einfache und mehrfache Bedeckung des n-dimensionalen Raumes mit einem W{\"u}rfelgitter}, journal = {Math. Z. Zeitschrift}, volume = {47},
Year = {1941}, pages = {427--467}, }
\bib{Hardy}{book}{ author = {Hardy, G. H.},
title = {Orders of infinity. {T}he {I}nfinit\"arcalc\"ul of {P}aul du {B}ois--{R}eymond},
series = {Cambridge Tracts in Mathematics and Mathematical Physics, No. 12},
publisher = {Hafner Publishing Co., New York}, year = {1971}, }
\bib{HardyWright}{book}{ author = {Hardy, G. H.}, author = {Wright, E. M.},
title = {Introduction to the Theory of Numbers},
publisher = {Oxford University Press}, year = {1938},}
\bib{Harman}{book}{ author = {Harman, G.}, title = {Metric Number Theory},
series = {London Mathematical Society Monographs New Series},
publisher = {Oxford University Press}, year = {1998},}
\bib{Harraptwisted}{article}{ author = {Harrap, S. G.},
title = {Twisted inhomogeneous Diophantine approximation and badly approximable sets},
journal = {Acta Arith.}, volume = {151}, year = {2012}, pages = {55--82}, }
\bib{Hurwitz}{article}{ author = {Hurwitz, A.},
title = {\"Uber die angen\"aherte Darstellung der Irrationalzahlen durch rationale Br \"uche},
journal = {Math. Ann.}, volume = {39}, date = {1891}, pages = {279--284}, }
\bib{JarnikBad}{article}{ author = {Jarn\'ik, V.},
title = {Zur metrischen Theorie der diophantischen Approximationen},
journal = {Prace Mat.-Fiz.}, volume = {36}, year = {1928}, pages = {371--382}, }
\bib{Jarnikold}{article}{ author = {Jarn\'ik, V.},
title = {Diophantische Approximationen und Hausdorffsches Mass},
journal = {Mat. Sbornik}, volume = {36}, date = {1929}, pages = {371--381}, }
\bib{Jarnik}{article}{ author ={Jarn\'ik, V.},
title = {\"Uber die simultanen diophantischen Approximationen},
journal = {Math. Z.}, volume = {33}, date = {1931}, pages = {505--543}, }
\bib{Khintchine}{article}{ author = {Khintchine, A. Y.}, title =
{Einige S\"atze \"uber Kettenbr\"uche, mit Anwendungen auf die Theorie der Diophantischen
Approximationen}, journal = {Math. Ann.}, volume = {92}, date =
{1924}, pages = {115--125}, }
\bib{Khintchine2}{article}{ author = {Khintchine, A. Y.}, title =
{Zur metrischen Theorie der diophantischen Approximationen},
journal = {Math. Z.}, volume = {24}, date =
{1926}, number = {1}, pages = {706--714}, }
\bib{KhintchineSing}{article}{ author = {Khintchine, A. Y.}, title =
{\"Uber eine Klasse linearer Diophantischer Approximationen},
journal = {Rendiconti Circ. Mat. Soc. Palermo}, volume = {50}, date =
{1926}, number = {1}, pages = {170--195}, }
\bib{KhintchineFr}{article}{ author = {Khintchine, A. Y.}, title =
{Sur le probl\`eme de Tchebycheff}, journal = {Izv. Akad. Nauk SSSR, Ser. Mat.},
volume = {10}, year = {1946}, pages = {281--294}, }
\bib{Khintchine-book}{book}{author = {Khintchine, A. Y.},
title = {Continued Fractions}, publisher = {University of Chicago Press, Chicago},
year = {1964},}
\bib{Kim}{article}{ author = {Kim, D. H.},
title = {The shrinking target property of irrational rotations},
journal = {Nonlinearity}, volume = {20}, year = {2007},
number = {7}, pages = {1637--1643}, }
\bib{KleinbockMargulis}{article}{ author={Kleinbock, D. Y.},
author={Margulis, G. A.}, title={Flows on homogeneous spaces and
Diophantine approximation on manifolds}, journal={Ann. of
Math. (2)}, volume={148}, date={1998}, number={1},
pages={339--360}, }
\bib{Kurzweil}{article}{ author = {Kurzweil, J.},
title = {On the metric theory of inhomogeneous Diophantine approximations},
journal = {Studia Math.}, volume = {15}, year = {1955}, pages = {84--112}, }
\bib{Tamam}{article}{ author = {Liao, L.}, author = {Shi, R.}, author = {Solan, O. N.},
author = {Tamam, N.},
title = {Hausdorff dimension of weighted singular vectors in $\RR^2$},
note = {\url{https://arxiv.org/abs/1605.01287}, preprint 2016}, }
\bib{Mattila}{book}{ author = {Mattila, P.},
title = {Geometry of Sets and Measures in Euclidean Space: Fractals and rectifiability},
publisher = {Cambridge University Press}, year = {1995},}
\bib{Pollington}{article}{ author = {Pollington, A. D.}, author = {Velani, S. L.},
title = {On a problem in simultaneous Diophantine approximation: Littlewood's conjecture}, journal = {Acta Mathematica}, volume = {185}, date = {2000}, pages = {287--306}, }
\bib{PolVel}{article}{ author = {Pollington, A. D.}, author = {Velani, S. L.},
title = {On simultaneously badly approximable numbers},
journal = {J. London Math. Soc. (2)}, volume = {66}, year = {2002},
pages = {29--40}, }
\bib{hyperplanes}{article}{ author = {Ram{\'i}rez, F. A.}, title =
{Khintchine types of translated coordinate hyperplanes}, journal
= {Acta Arith.}, volume = {170}, date = {2015},
pages = {243--273}, }
\bib{mine}{article}{ author = {Ram\'irez, F. A.}, author = {Simmons, D. S.},
author = {S\"uess, F.}, title =
{Rational approximation of affine coordinate subspaces of Euclidean space},
journal = {Acta Arith.}, volume = {177}, date = {2017}, pages = {91--100}, }
\bib{Rynne}{article}{ author = {Rynne, B. P.},
title = {A lower bound for the Hausdorff dimension of sets of singular n-tuples},
journal = {Math. Proc. Cambridge Philos. Soc.}, volume = {107},
year = {1990}, number = {2}, pages = {387--394}, }
\bib{Schmidtjarnik}{article}{ author = {Schmidt, W. M.},
title = {Metrical theorems on fractional parts of sequences},
journal = {Trans. Amer. Math. Soc.}, volume = {110}, year = {1964},
pages = {493--518}, }
\bib{Schmidtgames}{article}{ author = {Schmidt, W. M.},
title = {On badly approximable numbers and certain games},
journal = {Trans. Amer. Math. Soc.}, volume = {123}, year = {1966},
pages = {178--199}, }
\bib{schmidt}{book}{ author = {Schmidt, W. M.}, title = {Diophantine Approximation},
series = {Lecture Notes in Mathematics, Vol. 785}, publisher = {Springer-Verlag},
year = {1975}, }
\bib{SchmidtBad}{book}{ author = {Schmidt, W. M.},
title = {Open problems in Diophantine approximation},
series = {Approximations diophantiennes et nombres transcendants (Luminy 1982)},
publisher = {Birkh\"auser}, year = {1983}, }
\bib{Shapira}{article}{author = {Shapira, U.},
title = {Grids with dense values}, journal = {Comment. Math. Helv.}, volume = {88},
year = {2013}, pages = {485--506}, }
\bib{SimmonsKurzweil}{article}{ author = {Simmons, D. S.},
title = {An analogue of a theorem of Kurzweil},
journal = {Nonlinearity}, volume = {28}, year = {2015}, number = {5},
pages = {1401--1408}, }
\bib{Simmons-convergence-case}{misc}{
author = {Simmons, D. S.},
title = {Some manifolds of {K}hinchin type for convergence},
note = {\url{http://arxiv.org/abs/1602.01727}, preprint 2015},}
\bib{Sprindzuk2}{book}{ author = {Sprind{\v{z}}uk, V. G.},
title = {Metric theory of Diophantine approximations},
publisher = {John Wiley}, year = {1979},
note = {Translated by R. A. Silverman}, }
\bib{master}{misc}{ author = {S\"uess, F.}, title =
{Simultaneous Diophantine Approximation: Juicing the Fibres},
year = {2013}, note = {MSc Thesis, ETH Z\"urich}, }
\bib{Szusz}{article}{ author = {Sz\"usz, P.},
title = {\"Uber die metrische Theorie der Diophantischen Approximation},
journal = {Acta Math. Sci. Hungar.}, volume = {9},
year = {1958}, pages = {177--193}, }
\bib{Tseng}{article}{ author = {Tseng, J.},
title = {On circle rotations and the shrinking target properties},
journal = {Discrete Contin. Dyn. Sys.}, volume = {20}, year = {2008},
number = {4}, pages = {1111--1122}, }
\bib{VaughanVelani}{article}{ author = {Vaughan, R.C.}, author = {Velani, S.L.},
title = {Diophantine approximation on planar curves: the convergence
theory}, journal = {Invent. Math.}, volume = {166}, year = {2006}, pages = {103--124}, }
\bib{Waldschmidt}{misc}{ author = {Waldschmidt, M.},
title = {Report on some recent advances in Diophantine approximation},
publisher = {Springer-Verlag},
series = {Special volume in honor of Serge Lang},
note = {\url{https://arxiv.org/abs/0908.3973}, preprint 2009}, }
\bib{Weyl}{article}{ author = {Weyl, H.},
title = {\"Uber die Gleichverteilung von Zahlen mod. Eins},
journal = {Math. Ann.}, volume = {77}, year = {1916}, number = {3},
pages = {313--352}, }
\bib{Yang}{article}{ author = {Yang, L.},
title = {Badly approximable points on curves and unipotent orbits in homogeneous spaces},
note = {\url{https://arxiv.org/abs/1703.03461}, preprint 2017}, }
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{A Survey on Oversmoothing in Graph Neural Networks}
\begin{abstract}
Node features of graph neural networks (GNNs) tend to become more similar with the increase of the network depth. This effect is known as {\em over-smoothing}, which we axiomatically define as the exponential convergence of suitable similarity measures on the node features. Our definition unifies previous approaches and gives rise to new quantitative measures of over-smoothing. Moreover, we empirically demonstrate this behavior for several over-smoothing measures on different graphs (small-, medium-, and large-scale). We also review several approaches for mitigating over-smoothing and empirically test their effectiveness on real-world graph datasets.
Through illustrative examples, we demonstrate that mitigating over-smoothing is a necessary but not sufficient condition for building deep GNNs that are expressive on a wide range of graph learning tasks. Finally, we extend our definition of over-smoothing to the rapidly emerging field of continuous-time GNNs.
\end{abstract}
\section{Introduction}
Graph Neural Networks (GNNs) \citep{sperduti1994encoding,goller1996learning,sperduti1997supervised,frasconi1998general,gori2005new,scarselli2008graph,bruna2013spectral,chebnet,gcn,MoNet,mpnn} have emerged as a powerful tool for learning on relational and interaction data. These models have been successfully applied on a variety of different tasks, e.g. in computer vision and graphics \cite{MoNet}, recommender systems \cite{ying2018graph}, transportation \cite{derrowpinion2021traffic}, computational chemistry \citep{mpnn}, drug discovery \cite{gaudelet2021utilizing}, particle physics \citep{shlomi2020graph}, and analysis of social networks (see \citet{zhou,gdlbook} for additional applications).
The number of layers in a neural network (referred to as ``depth'' and giving the name to the entire field of ``deep learning'') is often considered to be crucial for its performance on real-world tasks. For example, convolutional neural networks (CNNs) used in computer vision often use tens or even hundreds of layers.
In contrast, most GNNs encountered in applications are relatively shallow and often have just a few layers.
This is related to several issues impairing the performance of deep GNNs in realistic graph-learning settings: graph bottlenecks \citep{alon2020bottleneck}, over-squashing \citep{topping2021understanding,egp}, and over-smoothing \citep{li2018deeper,os1,os2}. In this article we focus on the over-smoothing phenomenon, which \emph{loosely} refers to the exponential convergence of all node features towards the same constant value as the number of layers in the GNN increases. While it has been shown that small amounts of smoothing are desirable for regression and classification tasks \citep{kerivennot}, excessive smoothing (or `over-smoothing') results in convergence to a non-informative limit. Besides being a key limitation in the development of deep multi-layer GNNs, over-smoothing can also severely impact the ability of GNNs to handle {\em heterophilic} graphs \citep{syn_cora}, in which node labels tend to differ from the labels of the neighbors and thus long-range interactions have to be learned.
Recent literature has focused on precisely defining over-smoothing through measures of node feature similarity such as the graph Dirichlet energy \citep{graphcon,dirich_first,pairnorm,energetic_gnn}, cosine similarity \citep{mad}, and other related similarity scores \citep{group_ratio}. With the abundance of such measures, however, there is currently still a conceptual gap: a general definition of over-smoothing that unifies existing approaches is missing. Moreover, previous work mostly measures the similarity of node features and does not explicitly consider the rate of convergence of over-smoothing measures with respect to an increasing number of GNN layers.
In this article, we aim to unify several recent approaches and define over-smoothing in a formal and tractable manner through an axiomatic construction.
Through our definition, we rule out problematic measures such as the Mean Average Distance, which does not provide a sufficient condition for over-smoothing.
We then review several approaches to mitigate over-smoothing and provide their extensive empirical evaluation. These empirical studies lead to the insight that meaningfully resolving over-smoothing in deep GNNs requires more than simply forcing the node features not to converge towards the same value as the number of layers is increased. Rather, there needs to be a subtle balance between the expressive power of the deep GNN and its ability to preserve the diversity of node features in the graph. Finally, we extend our definition to the rapidly emerging sub-field of continuous-time GNNs \citep{grand,blend,graphcon,g2,sheaf,grad_flow}.
\section{Definition of over-smoothing}
Let $\mathcal{G}=(\mathcal{V},\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V})$ be an undirected graph with $|\mathcal{V}|=v$ nodes and $|\mathcal{E}|=e$ edges (unordered pairs of nodes $\{ i,j \}$ denoted $i \sim j$).
The \emph{$1$-neighborhood} of a node $i$ is denoted
$
\mathcal{N}_i =
\{j \in \mathcal{V} : i\sim j \}
$.
Furthermore, each node $i$ is endowed with an $m$-dimensional feature vector $\mathbf{X}_i$; the node features are arranged into a $v\times m$ matrix
${\bf X} = ({\bf X}_{ik})$ with $i=1,\hdots, v$ and $k=1,\hdots, m$.
A message-passing GNN (MPNN) updates the node features by performing several iterations of the form,
\begin{equation}
\label{eq:mp}
{\bf X}^n = \sigma({\bf F}_{\theta^n}({\bf X}^{n-1},\mathcal{G})), \quad \forall n=1,\dots,N,
\end{equation}
where ${\bf F}_{\theta^n}$ is a \emph{learnable} function with parameters $\theta^n$, ${\bf X}^n \in \mathbb{R}^{v \times m_n}$ are the hidden node features of layer $n$ (with $m_n$ feature dimensions), and $\sigma$ is an element-wise non-linear activation function. Here, $n \geq 1$ denotes the $n$-th layer, with $n=0$ being the input layer and $N$ the total number of layers (depth). In particular, we consider local (1-neighborhood) coupling of the form
$(\mathbf{F}(\mathbf{X},\mathcal{G}))_i = \mathbf{F}(\mathbf{X}_i, \mathbf{X}_{j\in \mathcal{N}_i})$ operating on the multiset of 1-neighbors of each node. Examples of such functions used in the graph machine learning literature \citep{gdlbook} include {\em graph convolutions} \citep{gcn} and {\em attentional message passing} \citep{gat}.
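To make \eqref{eq:mp} concrete, the following minimal NumPy sketch (our own illustration, not tied to any particular library; all names are ours) implements one graph-convolution-style message-passing layer with a dense adjacency matrix:
\begin{verbatim}
import numpy as np

def gcn_layer(X, A, W, b=None, activation=np.tanh):
    # One message-passing layer: X^n = sigma(D^{-1/2}(A+I)D^{-1/2} X^{n-1} W),
    # where A is the (dense) adjacency matrix and X the node feature matrix.
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    X_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    if b is not None:
        X_new = X_new + b
    return activation(X_new)

# toy example: a path graph on 4 nodes with 3-dimensional node features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X, W = np.random.randn(4, 3), np.random.randn(3, 3)
print(gcn_layer(X, A, W).shape)             # (4, 3)
\end{verbatim}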
There exist a variety of different approaches to quantify over-smoothing in deep GNNs, e.g. measures based on the Dirichlet energy on graphs \citep{graphcon,dirich_first,pairnorm,energetic_gnn}, as well as measures based on the mean-average distance (MAD) \citep{mad,group_ratio}, and references therein. However, most previous approaches lack a formal definition of over-smoothing, and some of the proposed measures are not sufficient to quantify this issue. Thus, the aim of this survey is to establish a unified, rigorous, and tractable definition of over-smoothing, which we provide in the following.
\begin{defi}[Over-smoothing]
\label{def:oversmoothing}
Let $\mathcal{G}$ be an undirected, connected graph and ${\bf X}^n \in \mathbb{R}^{v \times m}$ denote the $n$-th layer hidden features of an $N$-layer GNN defined on $\mathcal{G}$.
Moreover, we call $\mu:\mathbb{R}^{v \times m} \longrightarrow \mathbb{R}_{\geq 0}$ a \textbf{node-similarity measure} if it satisfies the following axioms:
\begin{enumerate}
\item $\exists {\bf c} \in \mathbb{R}^{m}$ with ${\bf X}_i={\bf c}$ for all nodes $i \in \mathcal{V}$ $\Leftrightarrow$ $\mu({\bf X})=0$, ~for ${\bf X} \in \mathbb{R}^{v \times m}$
\item $\mu({\bf X} + {\bf Y}) \leq \mu({\bf X}) + \mu({\bf Y})$, ~for all ${\bf X},{\bf Y} \in \mathbb{R}^{v \times m}$
\end{enumerate}
We then define {\bf over-smoothing with respect to $\mu$} as the layer-wise exponential convergence of the node-similarity measure $\mu$ to zero, i.e.,
\begin{enumerate}
\item[3.] $\mu({\bf X}^n) \leq C_1e^{-C_2n}$, for $n=0,\dots,N$ with some constants $C_1,C_2>0$.
\end{enumerate}
\end{defi}
Note that without loss of generality we assume that the node-similarity measure $\mu$ converges to zero (any node-similarity measure that converges towards a non-zero constant can easily be recast). Further remarks about Definition \ref{def:oversmoothing} are in order.
\begin{remark}
Condition 1 in Definition \ref{def:oversmoothing} simply formalizes the widely accepted notion that over-smoothing is caused by node features converging to a constant node vector whereas condition 3 provides a more stringent, quantitative measure of this convergence. Note that the triangle inequality or subadditivity (condition 2) rules out degenerate choices of similarity measures.
\end{remark}
\begin{remark}
Definition \ref{def:oversmoothing} only considers the case of connected graphs. However, this definition can be directly generalized to disconnected graphs, where we apply a node-similarity measure $\mu_S$ on every connected component $S \subseteq \mathcal{V}$, and define the global similarity measure as the sum of the node-similarity measures on each connected component, i.e., $\mu = \sum_{S}\mu_S$. This way we ensure to cover the case of different connected components converging to different constant node values.
\end{remark}
\section{Over-smoothing measures}
\label{sec:measures}
Existing approaches to measure over-smoothing in deep GNNs have mainly been based on the concept of the \emph{Dirichlet energy} on graphs,
\begin{equation}
\label{eq:dirichlet}
\EuScript{E}({\bf X}^n) = \frac{1}{v}\sum_{i\in \mathcal{V}} \sum_{j \in \mathcal{N}_i} \|{\bf X}^n_i - {\bf X}^n_j \|^2_2,
\end{equation}
(note that instead of normalizing by $1/v$ we can equivalently normalize the terms inside the norm based on the node degrees $d_i$, i.e. $\|\frac{{\bf X}^n_i}{\sqrt{1+d_i}} - \frac{{\bf X}^n_j}{\sqrt{1+d_j}} \|^2_2$). It is straightforward to check that the measure,
\begin{equation}
\label{eq:dirsim}
\mu({\bf X}^n) = \sqrt{\EuScript{E}({\bf X}^n)},
\end{equation}
satisfies conditions 1 and 2 in Definition \ref{def:oversmoothing} and thus constitutes a bona fide node-similarity measure. Note that in the remainder of this article, we will refer to the square root of the Dirichlet energy simply as the Dirichlet energy.
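As an illustration, the node-similarity measure \eqref{eq:dirsim} can be computed with a few lines of NumPy (a sketch of our own; the dense adjacency matrix encodes the 1-neighborhoods):
\begin{verbatim}
import numpy as np

def dirichlet_energy(X, A):
    # Square root of the graph Dirichlet energy for node features X (v x m)
    # and dense adjacency matrix A (v x v); sums over both edge directions.
    v = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]     # pairwise feature differences
    sq_dist = (diff ** 2).sum(axis=-1)       # ||X_i - X_j||_2^2
    return np.sqrt((A * sq_dist).sum() / v)
\end{verbatim}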
In the literature, the \emph{Mean Average Distance} (MAD),
\begin{equation}
\label{eq:mad}
\mu({\bf X}^n) = \frac{1}{v}\sum_{i\in \mathcal{V}} \sum_{j \in \mathcal{N}_i} \left(1 - \frac{{{\bf X}^n_i}^\top{\bf X}^n_j}{\|{\bf X}^n_i\|\|{\bf X}^n_j\|}\right),
\end{equation}
has often been suggested as a measure of over-smoothing.
We see that, in contrast to the Dirichlet energy, \emph{MAD is not a node-similarity measure}, as it fulfills neither condition 1 nor condition 2 of Definition \ref{def:oversmoothing}. In fact, in the scalar case MAD is identically zero whenever all node features share the same sign. This makes MAD a problematic measure of over-smoothing, since $\mu({\bf X})=0$ does not imply that the node features are constant. However, as we will see in the subsequent section, in the multi-dimensional case ($m>1$) MAD does converge exponentially to zero for an increasing number of layers if the GNN over-smooths, and thus fulfills condition 3 of Definition \ref{def:oversmoothing}. Therefore, we conclude that, with careful consideration of the specific use case, MAD may be used as a measure of over-smoothing. However, since the Dirichlet energy fulfills all three conditions of Definition \ref{def:oversmoothing} and is numerically more stable to compute, it should always be favored over MAD.
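For comparison, MAD as in \eqref{eq:mad} can be computed analogously; the following sketch (again our own) also illustrates the degenerate scalar case discussed above:
\begin{verbatim}
import numpy as np

def mean_average_distance(X, A, eps=1e-12):
    # MAD over the 1-neighborhoods encoded by the dense adjacency matrix A,
    # for node features X of shape v x m.
    v = X.shape[0]
    norms = np.linalg.norm(X, axis=1, keepdims=True) + eps
    cos_sim = (X @ X.T) / (norms * norms.T)  # pairwise cosine similarities
    return (A * (1.0 - cos_sim)).sum() / v

# scalar features of equal sign: MAD vanishes although X is far from constant
X = np.array([[1.0], [2.0], [3.0]])
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(mean_average_distance(X, A))           # approximately 0
\end{verbatim}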
It is natural to ask if there exist other measures that constitute a node-similarity measure as of Definition \ref{def:oversmoothing} and can thus be used to define over-smoothing. While the Dirichlet energy denotes a canonical choice in this context, there are other measures that can be used. For instance, instead of basing the Dirichlet energy in \eqref{eq:dirichlet} on the $L^2$ norm, any other $L^p$-norm ($p>1$) can be used.
\subsection{Empirical evaluation of different measures for over-smoothing}
\citet{graphcon} have empirically demonstrated the qualitative behavior described in Definition \ref{def:oversmoothing} on a $10\times10$ regular 2-dimensional grid with one-dimensional uniform random (hidden) node features. We extend this empirical study in two directions, first to higher dimensional node features and also to real-world graphs, namely Texas \citep{geom_gcn}, Cora \citep{cora}, and Cornell5 (Facebook 100 dataset).
Note that as mentioned above, the extension to higher dimensional node features is necessary in order to empirically evaluate MAD, as MAD is zero for any one-dimensional node features sharing the same sign.
Since we are interested only in the dynamics of the Dirichlet energy and MAD associated with the propagation of node features through different GNN architectures, we omit the original input node features of the real-world graph dataset Cora and exchange them for standard normal random variables, i.e., ${\bf X}_{jk} \sim \mathcal{N}(0,1)$ for all nodes $j$ and every feature $k$. In \fref{fig:os_measures_plot} we set the input and hidden dimension of the node features to $128$ and plot the (logarithm of the) Dirichlet energy \eqref{eq:dirichlet} and MAD \eqref{eq:mad} of each layer's node features against the (logarithm of the) layer number for three popular GNN models, namely GCN, GAT and the GraphSAGE architecture of \cite{graphsage}. We can see that all three GNN architectures \emph{over-smooth}, with both layer-wise measures converging exponentially fast to zero for an increasing number of layers. Moreover, we observe that this behavior is not restricted to the structured and regular grid dataset of \cite{graphcon}: the same behavior (i.e., exponential convergence of the measures with respect to an increasing number of layers) can be seen on all three real-world graph datasets considered here.
\begin{figure}
\caption{Dirichlet energy and Mean Average Distance (MAD) of layer-wise node features ${\bf X}^n$ propagated through GCN, GAT, and GraphSAGE on the Texas, Cora, and Cornell5 graphs, plotted against the layer number (log-log scale).}
\label{fig:os_measures_plot}
\end{figure}
It is important to emphasize the role of the \textbf{exponential convergence} of the layer-wise over-smoothing measure $\mu$ to zero in Definition \ref{def:oversmoothing}: algebraic convergence alone is not sufficient for the GNN to suffer from over-smoothing. This can be seen in \fref{fig:os_measures_plot}, where the Dirichlet energy of GCN, GraphSAGE and GAT reaches machine-precision zero after at most $64$ layers, whereas a linear convergence of the Dirichlet energy would still yield a value of around $1$ for an initial energy of around $100$, even after $128$ hidden layers.
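In practice, the exponential rate $C_2$ in Definition \ref{def:oversmoothing} can be estimated from the measured layer-wise values of $\mu$, e.g. by a least-squares fit of $\log\mu({\bf X}^n)$ against $n$; the following sketch (ours) illustrates this:
\begin{verbatim}
import numpy as np

def estimate_decay_rate(mu_values):
    # Fit log(mu(X^n)) ~ log(C1) - C2 * n by least squares and return C2.
    # A clearly positive C2 (with a good linear fit) indicates exponential
    # convergence of the similarity measure, i.e. over-smoothing.
    mu_values = np.asarray(mu_values, dtype=float)
    n = np.arange(len(mu_values))
    slope, _ = np.polyfit(n, np.log(mu_values + 1e-300), deg=1)
    return -slope
\end{verbatim}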
\section{Reducing over-smoothing}
\subsection{Methods}
Several methods to mitigate (or at least reduce) the effect of over-smoothing in deep GNNs have recently been proposed. While we do not discuss each individual approach, we highlight several recent methods in this context, all of which can be classified into one of the following classes.
\paragraph{Normalization and Regularization}
A proven way to reduce the over-smoothing effect in deep GNNs is to regularize the training procedure. This can be done either explicitly by penalizing deviations of over-smoothing measures during training or implicitly by normalizing the node feature embeddings and by adding noise to the optimization process.
An example of explicit regularization techniques can be found in Energetic Graph Neural Networks (EGNNs) \citep{energetic_gnn}, where the authors measure over-smoothing using the Dirichlet energy and propose to optimize a GNN within a constrained range of the underlying layer-wise Dirichlet energy.
DropEdge \citep{dropedge} on the other hand represents an example of implicit regularization by adding noise to the optimization process. This is done by randomly dropping edges of the underlying graph during training. Graph DropConnect (GDC) \citep{gdc} generalizes this approach by allowing the GNNs to draw different random masks for each channel and edge independently.
Another example of implicit regularization is PairNorm \citep{pairnorm}, where the total pairwise distance of the node features is kept approximately constant across the layers of the deep GNN. This is obtained by performing the following normalization on the node features ${\bf X}$ after each GNN layer,
\begin{equation}
\begin{aligned}
\hat{{\bf X}}_i &= {\bf X}_i - \frac{1}{v}\sum_{j=1}^v {\bf X}_{j}, \\ {\bf X}_i &= \frac{s\hat{{\bf X}}_i}{\sqrt{\frac{1}{v}\sum_{j=1}^v \|\hat{{\bf X}}_{j}\|_2^2}},
\end{aligned}
\end{equation}
where $s>0$ is a hyperparameter.
Similarly, \citet{group_ratio} have suggested normalizing within groups of nodes with the same label, leading to Differentiable Group Normalization (DGN). Moreover, \citet{nodenorm} have suggested normalizing each feature vector node-wise, yielding NodeNorm.
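For concreteness, the PairNorm update above amounts to the following few lines (a sketch in our own notation, with $s$ the scale hyperparameter):
\begin{verbatim}
import numpy as np

def pairnorm(X, s=1.0, eps=1e-12):
    # Centre the node features and rescale them so that the average
    # squared feature norm equals s**2 (cf. the PairNorm update above).
    X_centred = X - X.mean(axis=0, keepdims=True)
    scale = np.sqrt((X_centred ** 2).sum(axis=1).mean()) + eps
    return s * X_centred / scale
\end{verbatim}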
\paragraph{Change of GNN dynamics}
A rapidly emerging strategy to mitigate over-smoothing in deep GNNs is to qualitatively change the (discrete or continuous) dynamics of the message-passing propagation. A recent example is the use of non-linear oscillators which are coupled through the graph structure, yielding the Graph-Coupled Oscillator Network (GraphCON) \citep{graphcon},
\begin{equation}
\begin{aligned}
\label{eq:graphCON}
{\bf Y}^n &= {\bf Y}^{n-1} + {\Delta t} [\sigma({\bf F}_{\theta^n}({\bf X}^{n-1},\mathcal{G})) - \gamma{\bf X}^{n-1} - \alpha {\bf Y}^{n-1}], \\
{\bf X}^n &= {\bf X}^{n-1} + {\Delta t}{\bf Y}^n,
\end{aligned}
\end{equation}
where ${\bf Y}^n$ are auxiliary node features and $\Delta t>0$ denotes the time-step (usually set to $\Delta t =1$). The idea of this work is to exchange the diffusion-like dynamics of GCNs (and their variants) for that of non-linear oscillators, which can provably be guaranteed to have a Dirichlet energy that does not vanish exponentially (in the sense of Definition \ref{def:oversmoothing}).
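A single GraphCON step \eqref{eq:graphCON} can be sketched as follows (with \texttt{coupling} standing for the learnable map ${\bf X}\mapsto\sigma({\bf F}_{\theta^n}({\bf X},\mathcal{G}))$; the names are ours):
\begin{verbatim}
def graphcon_step(X, Y, coupling, dt=1.0, alpha=1.0, gamma=1.0):
    # One step of the graph-coupled oscillator update (GraphCON):
    # Y^n = Y^{n-1} + dt*(coupling(X^{n-1}) - gamma*X^{n-1} - alpha*Y^{n-1}),
    # X^n = X^{n-1} + dt*Y^n.
    Y_new = Y + dt * (coupling(X) - gamma * X - alpha * Y)
    X_new = X + dt * Y_new
    return X_new, Y_new
\end{verbatim}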
A similar approach has been taken in \cite{pde-gcn}, where the dynamics of a deep GCN is modelled as a wave-type partial differential equation (PDE) on graphs, yielding PDE-GCN. Another approach inspired by physical systems is Allen-Cahn Message Passing (ACMP) \citep{allen_cahn_gnn}, where the dynamics is constructed based on the Allen-Cahn equation, modeling an interacting particle system with attractive and repulsive forces. A related effort is the Gradient Flow Framework (GRAFF) \citep{grad_flow}, where the proposed GNN framework can be interpreted as inducing attractive and repulsive forces between adjacent node features.
A recent example in this direction, that is not directly inspired by physical systems, is that of Gradient Gating ({\bf G$^2$}) \citep{g2}, where a learnable node-wise early-stopping mechanism is realized through a gating function leveraging the graph-gradient,
\begin{equation}
\begin{aligned}
\label{eq:g2}
\hat{\boldsymbol{\tau}}^n &= {\sigma}(\hat{{\bf F}}_{\hat{\theta}^n}({\bf X}^{n-1},\mathcal{G})),\\
\boldsymbol{\tau}^n_{ik} &=
\tanh\left(\sum_{j\in\mathcal{N}_i}|\hat{\boldsymbol{\tau}}^n_{jk} - \hat{\boldsymbol{\tau}}^n_{ik}|^p\right), \\
{\bf X}^n &= (1 - \boldsymbol{\tau}^n)\odot {\bf X}^{n-1} + \boldsymbol{\tau}^n \odot \sigma({\bf F}_{\theta^n}({\bf X}^{n-1},\mathcal{G})),
\end{aligned}
\end{equation}
with $p\geq0$. This mechanism slows down the message-passing propagation for each individual node (and each individual channel), as $\boldsymbol{\tau}^n_{ik}$ goes to zero before local over-smoothing occurs in the $k$-th channel of node $i$.
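In code, the gating mechanism \eqref{eq:g2} takes roughly the following form (a sketch with a dense adjacency matrix; \texttt{F} and \texttt{F\_hat} stand for the two learnable message-passing functions):
\begin{verbatim}
import numpy as np

def g2_step(X, A, F, F_hat, sigma=np.tanh, p=2.0):
    # One Gradient Gating (G^2) update with dense adjacency matrix A (v x v).
    tau_hat = sigma(F_hat(X))                                    # v x m
    # graph gradient per channel: sum_{j in N(i)} |tau_hat_jk - tau_hat_ik|^p
    diff = np.abs(tau_hat[None, :, :] - tau_hat[:, None, :]) ** p
    tau = np.tanh((A[:, :, None] * diff).sum(axis=1))            # gates in [0,1)
    return (1.0 - tau) * X + tau * sigma(F(X))
\end{verbatim}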
\paragraph{Residual connections}
Motivated by the success of residual neural networks (ResNets) \citep{resnet} in conventional deep learning, there have been many suggestions to add residual connections to deep GNNs. An early example is \citet{resGCN}, where the authors equip a GNN with a residual connection \cite{resnet}, i.e.,
\begin{equation}
\label{eq:resGCN}
{\bf X}^n = {\bf X}^{n-1} + {\bf F}_{\theta^n}({\bf X}^{n-1},\mathcal{G}).
\end{equation}
Instantiating the GNN in \eqref{eq:resGCN} with a GCN leads to major improvements over competing methods.
Another example is GCNII \citep{gcnii} where a scaled residual connection of the initial node features is added to every layer of a GCN,
\begin{equation}
\label{eq:gcnii}
{\bf X}^n = \sigma\left[\left((1-\alpha_n)\hat{{\bf D}}^{-\frac{1}{2}}\hat{{\bf A}} \hat{{\bf D}}^{-\frac{1}{2}}{\bf X}^{n-1} + \alpha_n{\bf X}^0\right) \left((1-\beta_n)\mathrm{I} + \beta_n{\bf W}^n \right)
\right],
\end{equation}
where $\alpha_n,\beta_n \in [0,1]$ are fixed hyperparameters for all $n=1,\dots,N$.
This allows for constructing very deep GCNs, outperforming competing methods on several benchmark tasks. Similar approaches aggregate not just the initial node features but the node features of every layer of a deep GNN at the final layer. Examples of such models include Jumping Knowledge Networks (JKNets) \citep{jknet} and Deep Adaptive Graph Neural Networks (DAGNNs) \citep{dagnn}.
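A GCNII layer \eqref{eq:gcnii} can be written in a few lines (a dense sketch in our own notation, where \texttt{A\_norm} denotes the symmetrically normalised adjacency matrix with self-loops, $\hat{{\bf D}}^{-1/2}\hat{{\bf A}}\hat{{\bf D}}^{-1/2}$):
\begin{verbatim}
import numpy as np

def gcnii_layer(X, X0, A_norm, W, alpha, beta):
    # One GCNII layer: initial residual connection (alpha) combined with an
    # identity mapping of the weight matrix (beta), followed by a ReLU.
    m = W.shape[0]
    H = (1.0 - alpha) * (A_norm @ X) + alpha * X0
    H = H @ ((1.0 - beta) * np.eye(m) + beta * W)
    return np.maximum(H, 0.0)
\end{verbatim}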
\subsection{Empirical evaluation}
In order to evaluate the effectiveness of the different methods that have been suggested to mitigate over-smoothing in deep GNNs, we follow the experimental set-up of section \ref{sec:measures}. To this end, we choose two representative methods of each of the different strategies to overcome over-smoothing, namely DropEdge and PairNorm as representatives from ``normalization and regularization'' strategies, GraphCON and \textbf{G}$^2$ from ``change of GNN dynamics'', and Residual GCN (Res-GCN) and GCNII from ``residual connections''. We consider the same three different graphs as in section \ref{sec:measures}, namely the small-scale Texas, medium-scale Cora, and large-scale Cornell5 graph. Since we are only interested in the qualitative behavior of the different methods, we fix one node-similarity measure, namely the Dirichlet energy. We can then see in \fref{fig:os_mitigation} that DropEdge-GCN and Res-GCN suffer from an exponential convergence of the layer-wise Dirichlet energy to zero (and thus from over-smoothing) on all three graphs.
In contrast to that, all other methods we consider here mitigate over-smoothing by keeping the layer-wise Dirichlet energy approximately constant.
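For completeness, the following sketch computes such a layer-wise Dirichlet energy profile from stored hidden states; the degree-normalized form of the energy used below is one common choice and is an assumption of this sketch, which may differ in normalization constants from the precise definition in section \ref{sec:measures}.
\begin{verbatim}
# Sketch: tracking the layer-wise Dirichlet energy of stored hidden states.
# The degree-normalized form of the energy below is an assumption about the
# node-similarity measure; other normalizations behave analogously.
import numpy as np

def dirichlet_energy(X, A):
    # A is the adjacency matrix without self-loops; d its degree vector.
    d = A.sum(axis=1)
    energy = 0.0
    for i in range(X.shape[0]):
        for j in np.nonzero(A[i])[0]:
            diff = X[i] / np.sqrt(1.0 + d[i]) - X[j] / np.sqrt(1.0 + d[j])
            energy += np.dot(diff, diff)
    return energy / X.shape[0]

# Usage: given hidden_states = [X^1, ..., X^N] from a trained model, plot
# [dirichlet_energy(X_n, A) for X_n in hidden_states] on a logarithmic scale.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(dirichlet_energy(np.eye(3), A))
\end{verbatim}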
\begin{figure}
\caption{Layer-wise Dirichlet energy of hidden node features propagated through the considered over-smoothing mitigation methods (DropEdge-GCN, PairNorm, GraphCON, \textbf{G}$^2$, Res-GCN, and GCNII) on the Texas, Cora, and Cornell5 graphs.}
\label{fig:os_mitigation}
\end{figure}
\section{Risk of sacrificing expressivity to mitigate over-smoothing}
Since several of the previously suggested methods designed to mitigate over-smoothing successfully prevent the layer-wise Dirichlet energy (and other node-similarity measures) from converging exponentially fast to zero, it is natural to ask if this is already sufficient to construct (possibly very) deep GNNs which also efficiently solve the learning task at hand.
To answer this question, we start by constructing a deep GCN which keeps the Dirichlet energy constant while at the same time its performance on a learning task is as poor as a standard deep multi-layer GCN.
It turns out that simply adding a bias vector to a deep GCN with shared parameters among layers, i.e.,
\begin{equation*}
{\bf X}^n = \sigma(\hat{{\bf D}}^{-\frac{1}{2}}\hat{{\bf A}} \hat{{\bf D}}^{-\frac{1}{2}}{\bf X}^{n-1}{\bf W} + {\bf b}), \quad \forall n=1,\dots,N,
\end{equation*}
with weights ${\bf W} \in \mathbb{R}^{m\times m}$ and bias ${\bf b} \in \mathbb{R}^{m}$, is sufficient for the optimizer to keep the resulting layer-wise Dirichlet energy of the model approximately constant. This can be seen in \fref{fig:os_pitfall_dirichlet}, where the layer-wise Dirichlet energy is shown (among others) for a standard GCN as well as a GCN with an additional bias term after training on the Cora graph dataset in the fully supervised setting. We observe that while the Dirichlet energy converges exponentially fast to zero for the standard GCN, simply adding a bias term results in an approximately constant layer-wise Dirichlet energy. Moreover, \fref{fig:os_pitfall_dirichlet} shows the test accuracy of the same models for different numbers of layers. We can see that both the standard GCN and the GCN with bias vector suffer from a significant decrease of performance for an increasing number of layers. Interestingly, while the GCN with bias keeps the Dirichlet energy perfectly constant and the GCN without bias exhibits a Dirichlet energy converging exponentially fast to zero, both models suffer similarly from a drastic impairment of performance (in terms of test accuracy) for an increasing number of layers. We thus observe that simply keeping the node-similarity measure approximately constant (around $1$) is not sufficient for successfully constructing deep GNNs.
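A minimal NumPy sketch of this pitfall model is given below; the ReLU nonlinearity, the helper names, and the random initialization are illustrative assumptions, and the returned per-layer states can be fed to a node-similarity measure such as the Dirichlet energy.
\begin{verbatim}
# Minimal NumPy sketch of the pitfall model: a deep GCN with weights shared
# across layers plus a bias, X^n = relu(A_norm X^{n-1} W + b). Helper names
# and the ReLU choice are assumptions of this sketch.
import numpy as np

def forward_shared_gcn(X0, A_hat, W, b, num_layers):
    d = A_hat.sum(axis=1)
    A_norm = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)
    X, states = X0, []
    for _ in range(num_layers):
        X = np.maximum(A_norm @ X @ W + b, 0.0)
        states.append(X)
    return states   # feed each state to a node-similarity measure (e.g. Dirichlet energy)

rng = np.random.default_rng(0)
A_hat = np.minimum((rng.random((10, 10)) < 0.3) + np.eye(10), 1.0)
A_hat = np.maximum(A_hat, A_hat.T)
states = forward_shared_gcn(rng.standard_normal((10, 4)), A_hat,
                            0.1 * rng.standard_normal((4, 4)), 0.1 * np.ones(4), 64)
\end{verbatim}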
This observation is further supported in \fref{fig:os_pitfall_dirichlet} by looking at the Dirichlet energy of the PairNorm method, which behaves similarly to the Dirichlet energy of the GCN with bias, i.e., it remains approximately constant around $1$. However, its performance in terms of test accuracy on the Cora graph dataset drops sharply once more than $32$ layers are used. Interestingly, \textbf{G}$^2$-GCN exhibits an approximately constant layer-wise Dirichlet energy and at the same time does not lose performance with an increasing number of layers. In fact, the performance of \textbf{G}$^2$-GCN increases slightly with the number of layers.
Therefore, we argue that solving the over-smoothing issue defined in Definition \ref{def:oversmoothing} is necessary in order to construct well-performing deep GNNs; otherwise the network is not able to learn any meaningful function defined on the graph. However, as this experiment shows, it is not sufficient.
Based on this experiment, we conclude that a major pitfall in designing deep GNNs that mitigate over-smoothing is to sacrifice the expressive power of the GNN merely to keep the node-similarity measure approximately constant. In fact, based on our experiments, only \textbf{G}$^2$ (among the models considered here) fully mitigates the over-smoothing issue by keeping the node-similarity measure approximately constant while at the same time increasing its expressive power with an increasing number of layers.
\begin{figure}
\caption{Layer-wise Dirichlet energy and test accuracy for increasing numbers of layers of trained \textbf{G}$^2$-GCN, PairNorm, standard GCN, and GCN-with-bias models on the Cora dataset in the fully supervised setting.}
\label{fig:os_pitfall_dirichlet}
\end{figure}
\section{Extension to continuous-time GNNs}
A rapidly growing sub-field of graph representation learning deals with GNNs that are continuous in depth. This is achieved by formulating the message-passing propagation in terms of graph dynamical systems modelled by (neural \citep{node}) Ordinary Differential Equations (ODEs) or Partial Differential Equations (PDEs), i.e., the message-passing framework \eqref{eq:mp} in which the forward propagation is modelled by a differential equation:
\begin{equation}
\label{eq:ct_mp}
{\bf X}^\prime(t) = \sigma({\bf F}_{\theta}({\bf X}(t),\mathcal{G})),
\end{equation}
with ${\bf X}(t)$ referring to the node features at time $t\geq0$. Different choices of the vector field (i.e., the right-hand side of \eqref{eq:ct_mp}) yield different architectures. Moreover, we note that the right-hand side in \eqref{eq:ct_mp} can potentially arise from a discretization of a differential operator defined on a graph, leading to a PDE-inspired architecture.
We refer to this class of graph-learning models as \emph{continuous-time GNNs}.
Early examples of continuous-time GNNs include Graph Neural Ordinary Differential Equations (GDEs) \citep{gde} and Continuous Graph Neural Networks (CGNN) \citep{cgnn}. More recent examples include Graph-Coupled Oscillator Networks (GraphCON) \citep{graphcon}, Graph Neural Diffusion (GRAND) \citep{grand}, Beltrami Neural Diffusion (BLEND) \citep{blend}, Neural Sheaf Diffusion (NSD) \citep{sheaf}, and the Gradient Flow Framework (GRAFF) \citep{grad_flow}.
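In practice, \eqref{eq:ct_mp} is solved numerically; for instance, an explicit (forward) Euler discretization with step $\Delta t$ recovers a residual-type discrete architecture. The following sketch illustrates this, where the specific coupling function (a normalized graph convolution with $\tanh$ activation) is an assumption of this sketch.
\begin{verbatim}
# Sketch: forward-Euler integration of the continuous-time message-passing
# ODE (eq:ct_mp). The specific coupling F (a normalized graph convolution
# with tanh activation) is an illustrative assumption.
import numpy as np

def integrate_ct_gnn(X0, A_hat, W, T=1.0, dt=0.1):
    d = A_hat.sum(axis=1)
    A_norm = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)
    X = X0.copy()
    for _ in range(int(round(T / dt))):
        X = X + dt * np.tanh(A_norm @ X @ W)   # explicit Euler step of X'(t) = sigma(F(X, G))
    return X

rng = np.random.default_rng(0)
A_hat = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
X_T = integrate_ct_gnn(rng.standard_normal((4, 2)), A_hat, rng.standard_normal((2, 2)), T=5.0)
\end{verbatim}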
Based on this framework, we can easily extend our definition of over-smoothing (Definition \ref{def:oversmoothing}) to continuous-time GNNs by defining over-smoothing as the exponential convergence in time of a node-similarity measure. More concretely, we define it as follows.
\begin{defi}[Continuous-time over-smoothing]
\label{def:ct_oversmoothing}
Let $\mathcal{G}$ be an undirected, connected graph and ${\bf X}(t) \in \mathbb{R}^{v \times m}$ denote the hidden node features of a continuous-time GNN \eqref{eq:ct_mp} at time $t\geq0$ defined on $\mathcal{G}$. Moreover, let $\mu$ be a node-similarity measure as in Definition \ref{def:oversmoothing}.
We then define over-smoothing with respect to $\mu$ as the exponential convergence in time of the node-similarity measure $\mu$ to zero, i.e.,
\begin{enumerate}
\item[] $\mu({\bf X}(t)) \leq C_1e^{-C_2 t}$, for $t\geq 0$ with some constants $C_1,C_2>0$.
\end{enumerate}
\end{defi}
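A simple numerical check of Definition \ref{def:ct_oversmoothing} is sketched below for plain graph diffusion, ${\bf X}'(t) = -\big(\mathrm{I} - \hat{{\bf D}}^{-\frac{1}{2}}\hat{{\bf A}}\hat{{\bf D}}^{-\frac{1}{2}}\big){\bf X}(t)$, which is expected to over-smooth; the degree-normalized Dirichlet energy used as $\mu$ and the random test graph are assumptions of this sketch.
\begin{verbatim}
# Sketch: numerically checking Definition (def:ct_oversmoothing) for plain
# graph diffusion X'(t) = -(I - A_norm) X(t), which is expected to over-smooth.
# The degree-normalized Dirichlet energy used as mu is an assumption.
import numpy as np

def dirichlet_energy(X, A_hat):
    d = A_hat.sum(axis=1)        # degrees of the self-looped adjacency A_hat
    return sum(np.sum((X[i] / np.sqrt(d[i]) - X[j] / np.sqrt(d[j])) ** 2)
               for i in range(X.shape[0]) for j in np.nonzero(A_hat[i])[0]) / X.shape[0]

rng = np.random.default_rng(0)
n, m, dt, steps = 20, 4, 0.05, 400
A_hat = np.minimum((rng.random((n, n)) < 0.2) + np.eye(n), 1.0)
A_hat = np.maximum(A_hat, A_hat.T)
d = A_hat.sum(axis=1)
A_norm = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)
X = rng.standard_normal((n, m))
energies = []
for _ in range(steps):
    X = X + dt * (A_norm @ X - X)             # Euler step of the diffusion
    energies.append(dirichlet_energy(X, A_hat))
C2 = -np.polyfit(dt * np.arange(1, steps + 1), np.log(energies), 1)[0]
print(f"fitted exponential decay rate C_2 ~ {C2:.3f}")   # positive => decay in time
\end{verbatim}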
\section{Conclusion}
Stacking multiple message-passing layers (i.e., a deep GNN) is necessary in order to effectively process information on relational data where the underlying computational graph exhibits (higher-order) long-range interactions. This is of particular importance for learning heterophilic graph data, where node labels may differ significantly from those of their neighbors.
Besides several other identified problems (e.g., over-squashing, exploding and vanishing gradients problem), the over-smoothing issue denotes a central challenge in constructing deep GNNs.
Since previous work has measured over-smoothing in various ways, we unify those approaches by providing an axiomatic definition of over-smoothing through the layer-wise \emph{exponential convergence} of \emph{similarity measures} on the node features. Moreover, we review recent measures for over-smoothing and, based on our definition, rule out the commonly used MAD in the context of measuring over-smoothing. Additionally, we test the qualitative behavior of those measures on three different graph datasets, i.e., small-scale Texas graph, medium-scale Cora graph, and large-scale Cornell5 graph, and observe an exponential convergence to zero of all measures for standard GNN models (i.e., GCN, GAT, and GraphSAGE).
We further review prominent approaches to mitigate over-smoothing and empirically test whether these methods are able to successfully overcome over-smoothing by plotting the layer-wise Dirichlet energy on different graph datasets.
We conclude by highlighting the need to balance the ability of models to mitigate over-smoothing with preserving the expressive power of the underlying deep GNN. This phenomenon was illustrated by the example of a simple deep GCN with shared parameters among all layers as well as a bias, where the optimizer rapidly finds a state of parameters during training that leads to a mitigation of over-smoothing (i.e., an approximately constant Dirichlet energy). However, in terms of performance (or accuracy) on the Cora graph-learning task, this model fails to outperform its underlying baseline (i.e., the same GCN model without a bias), which suffers from over-smoothing. This behavior is further observed in other methods that are specifically designed to mitigate over-smoothing, where the over-smoothing measure remains approximately constant but at the same time the accuracy of the model drops significantly for an increasing number of layers. However, we also want to highlight that there exist methods, e.g., \textbf{G}$^2$, that are able to mitigate over-smoothing while at the same time maintaining their expressive power on our task.
We thus conclude that \emph{mitigating over-smoothing is only a necessary condition}, among many others, for building deep GNNs, while a particular focus in designing methods in this context has to be on the maintenance or potential enhancement of the expressive power of the underlying model.
\end{document}
\begin{document}
\title{Yaglom-type limit theorems for branching Brownian motion with absorption}
\author{Pascal Maillard\thanks{Institut de Mathématiques de Toulouse, CNRS UMR5219. \emph{Postal address:} Institut de Mathématiques de Toulouse, Université de Toulouse 3 Paul Sabatier, 118 route de Narbonne, 31062 Toulouse cedex 9, France} \\ Université de Toulouse \and Jason Schweinsberg\thanks{\emph{Postal address:} Department of Mathematics 0112; 9500 Gilman Drive; La Jolla, CA 92093-0112} \\ University of California San Diego}
\maketitle
\begin{abstract}
We consider one-dimensional branching Brownian motion in which particles are absorbed at the origin. We assume that when a particle branches, the offspring distribution is supercritical, but the particles are given a critical drift towards the origin so that the process eventually goes extinct with probability one. We establish precise asymptotics for the probability that the process survives for a large time $t$, building on previous results by Kesten (1978) and Berestycki, Berestycki, and Schweinsberg (2014). We also prove a Yaglom-type limit theorem for the behavior of the process conditioned to survive for an unusually long time, providing an essentially complete answer to a question first addressed by Kesten (1978). An important tool in the proofs of these results is the convergence of a certain observable to a continuous state branching process. Our proofs incorporate new ideas which might be of use in other branching models.
\end{abstract}
\noindent {\it AMS 2020 subject classifications}. Primary 60J80; Secondary 60J65, 60J25
\noindent {\it Key words and phrases}. Branching Brownian motion, Yaglom limit theorem, continuous-state branching process
\section{Introduction and main results}
We will consider one-dimensional branching Brownian motion with absorption, which evolves according to the following rules. At time zero, all particles are in $(0, \infty)$. Each particle independently moves according to one-dimensional Brownian motion with a drift of $-\mu$. Particles are absorbed when they reach zero. Each particle independently branches at rate $\beta$. When a branching event occurs, the particle dies and is replaced by a random number of offspring. We denote by $p_k$ the probability that an individual has $k$ offspring and assume that the numbers of offspring produced at different branching events are independent. We define $m$ so that $m + 1 = \sum_{k=1}^{\infty} kp_k$ is the mean of the offspring distribution, and we assume $m > 0$. We also assume that $\sum_{k=1}^{\infty} k^2 p_k < \infty$, so the offspring distribution has finite variance. Finally, we assume that $\beta = 1/2m$, which by scaling arguments entails no real loss of generality because speeding up time by a factor of $r$ while scaling space by a factor of $1/\sqrt{r}$ multiplies the branching rate by $r$ and the drift by $\sqrt{r}$.
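To fix ideas, the following Monte Carlo sketch simulates this process and estimates the probability that it is still alive at a given time $t$ when started from a single particle at $x$; the binary offspring distribution ($p_2 = 1$, so $m = 1$ and $\beta = 1/2$), the Euler time discretization, and the cap on the population size are simplifying assumptions of this illustration only.
\begin{verbatim}
# Monte Carlo sketch of branching Brownian motion with absorption at 0 and
# critical drift mu = 1. Binary branching (p_2 = 1, so m = 1 and beta = 1/2),
# the Euler time step and the population cap are simplifying assumptions.
import numpy as np

def survives(x0, t, dt=0.02, beta=0.5, mu=1.0, cap=100000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    pos = np.array([x0], dtype=float)
    for _ in range(int(round(t / dt))):
        if pos.size == 0:
            return False                                 # extinct before time t
        # Brownian step with drift -mu, then absorption at the origin.
        pos = pos - mu * dt + np.sqrt(dt) * rng.standard_normal(pos.size)
        pos = pos[pos > 0.0]
        # Each particle is replaced by two offspring with probability beta*dt.
        pos = np.concatenate([pos, pos[rng.random(pos.size) < beta * dt]])[:cap]
    return pos.size > 0

rng = np.random.default_rng(1)
est = np.mean([survives(1.0, 10.0, rng=rng) for _ in range(1000)])
print(f"estimated survival probability up to time 10 from x = 1: {est:.3f}")
# a crude numerical counterpart of the survival asymptotics established below
\end{verbatim}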
Kesten \cite{kesten} showed that if $\mu < 1$, the process survives forever with positive probability, while if $\mu \geq 1$, the process eventually goes extinct almost surely. In this paper, we will assume that $\mu = 1$, so we are considering the case of critical drift. Our aim is to establish some new results for this process, focusing on the question of how the process behaves when conditioned to survive for an unusually long time. These results sharpen some of the results in the seminal paper of Kesten \cite{kesten} and build on more recent work of Berestycki, Berestycki, and Schweinsberg \cite{bbs, bbs3}.
There are several reasons to study branching Brownian motion with absorption:
\begin{enumerate}
\item To study branching Brownian motion without absorption, for example its extremal particles, it is often useful to kill the particles at certain space-time barriers, as pioneered by Bramson~\cite{Bramson1978}. It is therefore natural to study these processes for their own sake.
\item Branching Brownian motion with absorption, or more complicated models that build on it, can be interpreted as a model of a biological population under the influence of evolutionary selection. In this setting, particles represent individuals in a population, the position of a particle represents the fitness of the individual, and the absorption at zero models the deaths of individuals with low fitness. See, for example, the work of Brunet, Derrida, Mueller, and Munier \cite{bdmm1, bdmm2}.
\item There are close connections between branching Brownian motion and partial differential equations, going back to the early work of McKean \cite{mckean}. Branching Brownian motion with absorption was used in \cite{hhk06} to study the equation $\frac{1}{2} f'' - \rho f' + \beta (f^2 - f) = 0$, which describes traveling wave solutions to the FKPP equation, under the boundary conditions $\lim_{x \downarrow 0} f(x) = 1$ and $\lim_{x \rightarrow \infty} f(x) = 0$.
\item The branching random walk with absorption, a discrete-time analogue of branching Brownian motion with absorption, appears directly or indirectly in other mathematical models such as infinite urn models
\cite{mr2016} or in the study of algorithms for finding vertices of large labels in a labelled tree generated by a branching random walk \cite{Aldous1992}. Also, branching Brownian motion with absorption is a toy model for certain noisy travelling wave equations (see again \cite{bdmm1,bdmm2}).
\item Branching Brownian motion with absorption can be regarded as a non-conservative Markov process living in an infinite-dimensional and unbounded state space (the space of finite collections of points on $\ensuremath{\mathbb{R}}_+$). As such, it is an interesting testbed for quasi-stationary distributions and Yaglom limits, which have seen a great deal of attention in the last decade \cite{cclmms,mv12,cv16}, particularly regarding approximating particle systems \cite{afgj,delmoral}.
\end{enumerate}
\subsection{Some notation}
We introduce here some notation that we will use throughout the paper. When the branching Brownian motion starts with a single particle at the position $x$, we denote probabilities and expectations by $\ensuremath{\mathbf{P}}_x$ and $\ensuremath{\mathbf{E}}_x$ respectively. More generally, we may start from a fixed or random initial configuration of particles in $(0, \infty)$, which we represent by the measure $\nu$ consisting of a unit mass at the position of each particle. We then denote probabilities and expectations by $\ensuremath{\mathbf{P}}_{\nu}$ and $\ensuremath{\mathbf{E}}_{\nu}$. To avoid trivialities, we will always assume that the initial configuration of particles is nonempty. When the initial configuration $\nu$ is random, $\ensuremath{\mathbf{P}}_{\nu}$ and $\ensuremath{\mathbf{E}}_{\nu}$ refer to unconditional probabilities and expectations, not conditional probabilities and expectations given the random measure $\nu$. In particular, if $A$ is an event, then $\ensuremath{\mathbf{P}}_{\nu}(A)$ is a number between 0 and 1, not a random variable whose value depends on the realization of $\nu$. That is, using the language of random walks in random environments, ${\bf P}_{\nu}$ represents the “annealed” law rather than the “quenched” law. We will denote by $({\cal F}_t, t \geq 0)$ the natural filtration of the process.
We will denote by $N_s$ the set of particles that are alive at time $s$, meaning they have not yet been absorbed at the origin. If $u \in N_s$, we denote by $X_u(s)$ the position at time $s$ of the particle $u$. We also define the critical curve
\begin{equation}\label{Lcdef}
L_t(s) = c(t - s)^{1/3}, \hspace{.5in} c = \bigg( \frac{3 \pi^2}{2} \bigg)^{1/3}.
\end{equation}
This critical curve appeared in the original paper of Kesten \cite{kesten}. As will become apparent later, it can be interpreted, very roughly, as the position where a particle must be at time $s$ in order for it to be likely to have a descendant alive in the population at time $t$. We will also define
\begin{equation}\label{Zdef}
Z_t(s) = \sum_{u \in N_s} z_t(X_u(s), s), \hspace{.5in}z_t(x,s) = L_t(s) \sin \bigg( \frac{\pi x}{L_t(s)} \bigg) e^{x - L_t(s)} \ensuremath{\mathbbm{1}}_{x \in [0, L_t(s)]}.
\end{equation}
The process $(Z_t(s), 0 \leq s \leq t)$ will be extremely important in what follows. Lemma \ref{lem:Zexp} below shows that, in some sense, this process is very close to being a martingale. Let $M(s)$ be the number of particles at time $s$, and let $R(s)$ denote the position of the right-most particle at time $s$. In symbols, we define
\begin{equation}\label{MRdef}
M(s) = \#N_s, \hspace{.4in} R(s) = \sup\{X_u(s): u \in N_s\}.
\end{equation}
Finally, let $$\zeta = \inf\{s: M(s) = 0\}$$ be the extinction time for the process.
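For later reference, the quantities in (\ref{Lcdef})--(\ref{MRdef}) are straightforward to evaluate on a finite particle configuration; the following sketch, with positions stored in an array, is purely illustrative.
\begin{verbatim}
# Sketch: evaluating the quantities in (Lcdef)-(MRdef) on a finite particle
# configuration at time s, stored simply as an array of positions.
import numpy as np

C_CRIT = (3 * np.pi ** 2 / 2) ** (1 / 3)       # the constant c from (Lcdef)

def L(t, s):
    return C_CRIT * (t - s) ** (1 / 3)

def Z(positions, t, s):
    Lts = L(t, s)
    x = np.asarray(positions, dtype=float)
    x = x[(x >= 0) & (x <= Lts)]                # indicator in the definition of z_t
    return np.sum(Lts * np.sin(np.pi * x / Lts) * np.exp(x - Lts))

def M(positions):
    return len(positions)                       # number of particles

def R(positions):
    return max(positions)                       # position of the right-most particle

# Example: a single particle at x = 2 at time s = 0, time horizon t = 100.
print(L(100, 0), Z([2.0], 100, 0), M([2.0]), R([2.0]))
\end{verbatim}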
We will often be working to prove asymptotic results as $t \rightarrow \infty$ where, for each $t$, we are working under a different probability measure such as ${\bf P}_{\nu_t}$ or the conditional probability $\ensuremath{\mathbf{P}}_{\nu_t}(\:\cdot \:|\zeta > t)$. We use $\Rightarrow$ to denote convergence in distribution and $\rightarrow_p$ to denote convergence in probability. If $X_t$ is a random variable for each $t$, then by $X_t \rightarrow_p c$ under $\ensuremath{\mathbf{P}}_{\nu_t}$ we mean that $\ensuremath{\mathbf{P}}_{\nu_t}(|X_t - c| > \varepsilon) \rightarrow 0$ as $t \rightarrow \infty$ for all $\varepsilon > 0$, while $X_t \rightarrow_p \infty$ under $\ensuremath{\mathbf{P}}_{\nu_t}$ means $\ensuremath{\mathbf{P}}_{\nu_t}(X_t > K) \rightarrow 1$ as $t \rightarrow \infty$ for all $K \in \ensuremath{\mathbb{R}}_+$. We also write $f(t) \sim g(t)$ if $\lim_{t \rightarrow \infty} f(t)/g(t) = 1$ and $f(t) \ll g(t)$ if $\lim_{t \rightarrow \infty} f(t)/g(t) = 0$.
Throughout the paper, $C$ denotes a positive constant whose value may change from line to line. Numbered constants $C_k$ keep the same value from one occurrence to the next.
\subsection{The probability of survival until time \texorpdfstring{$t$}{t}}
For branching Brownian motion started with a single particle at $x > 0$, we are interested in calculating the probability that the process survives at least until time $t$. Kesten \cite{kesten} showed that there exists a positive constant $K_1$ such that for every fixed $x > 0$, we have for sufficiently large $t$,
$$x e^{x - L_t(0) - K_1(\log t)^2} \leq \ensuremath{\mathbf{P}}_x(\zeta > t) \leq (1 + x)e^{x - L_t(0) + K_1(\log t)^2}.$$
Berestycki, Berestycki, and Schweinsberg \cite{bbs} tightened these bounds by showing that there are positive constants $C_1$ and $C_2$ such that for all $x > 0$ and $t > 0$ such that $x \leq L_t(0) - 1$, we have
\begin{equation}\label{oldbound}
C_1 z_t(x,0) \leq \ensuremath{\mathbf{P}}_x(\zeta > t) \leq C_2 z_t(x,0).
\end{equation}
Note that the results in \cite{bbs} are stated in the case when $p_2 = 1$, which means two offspring are produced at each branching event. However, the proof of (\ref{oldbound}) can be extended, essentially without change, to the case of the more general supercritical offspring distributions considered here. Also, a slightly different scaling, with $\beta = 1$ and $\mu = \sqrt{2}$, was used in \cite{bbs}. Theorem \ref{survival} below is our main result regarding survival probabilities.
\begin{theorem}\label{survival}
Suppose that for each $t > 0$, we have a possibly random initial configuration of particles $\nu_t$. Then there is a constant $\alpha > 0$ such that the following hold:
\begin{enumerate}
\item Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \Rightarrow Z$ and $L_t(0) - R(0) \rightarrow_p \infty$ as $t \rightarrow \infty$. Then $$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t) = 1 - \ensuremath{\mathbf{E}}[e^{-\alpha Z}].$$
\item Suppose that each $\nu_t$ is deterministic, and that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \rightarrow 0$ and $L_t(0) - R(0) \rightarrow \infty$ as $t \rightarrow \infty$. Then as $t \rightarrow \infty$, we have $$\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t) \sim \alpha Z_t(0).$$ In particular, if $x > 0$ is fixed, then
\begin{equation}\label{scsurvival}
\ensuremath{\mathbf{P}}_x(\zeta > t) \sim \alpha \pi x e^{x-L_t(0)}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{remark}
The constant $\alpha$ in the statement of Theorem~\ref{survival} has the expression $\alpha = \pi^{-1}e^{-a_{\ref{eq:abelian}}-3/4}$, where $a_{\ref{eq:abelian}}$ is a constant related to the tail of the \emph{derivative martingale} of the branching Brownian motion and defined in Lemma~\ref{neveuW} below.
\end{remark}
Note that (\ref{scsurvival}) improves upon (\ref{oldbound}) when the initial configuration has only a single particle. Derrida and Simon \cite{ds07} had previously obtained (\ref{scsurvival}) by nonrigorous methods.
Theorem \ref{survival} applies when there is no particle at time zero that is close to $L_t(0)$. This condition is important, here and throughout much of the paper, because it ensures that no individual particle at time zero has a high probability of having descendants alive at time $t$. Theorem~\ref{survivex} below applies when the process starts with one particle near $L_t(0)$. Here and throughout the rest of the paper, we denote by $q$ the extinction probability for a Galton-Watson process with offspring distribution $(p_k)_{k=0}^{\infty}$. Note that Theorem \ref{survivex} implies that when $q = 0$ and the process starts from one particle near $L_t(0)$, the fluctuations in the extinction time are of the order $t^{2/3}$, which can also be seen from Theorem 2 in \cite{bbs}.
\begin{theorem}\label{survivex}
There is a function $\phi: \ensuremath{\mathbb{R}} \rightarrow (0,1)$ such that for all $x \in \ensuremath{\mathbb{R}}$,
\begin{equation}\label{convphi}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{L_t(0) + x}(\zeta \leq t) = \phi(x)
\end{equation}
and, more generally, for all $x \in \ensuremath{\mathbb{R}}$ and $v \in \ensuremath{\mathbb{R}}$,
\begin{equation}\label{convphi2}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{L_t(0)+x}(\zeta \leq t + vt^{2/3}) = \phi\Big(x - \frac{cv}{3}\Big).
\end{equation}
We have $\lim_{x \rightarrow -\infty} \phi(x) = 1$ and $\lim_{x \rightarrow \infty} \phi(x) = q$. The function $\phi$ also satisfies $$\frac{1}{2} \phi'' - \phi' = \beta(\phi - f \circ \phi),$$ where $f(s) = \sum_{k=0}^{\infty} p_k s^k$ is the generating function for the offspring distribution $(p_k)_{k=1}^{\infty}$. In fact, $\phi(x) = \psi(x+\log(\alpha\pi))$, where $\psi$ is the function from Lemma~\ref{neveuW} below and $\alpha$ is the constant from Theorem \ref{survival}.
\end{theorem}
\subsection{The process conditioned on survival}
Our main goal in this paper is to understand the behavior of branching Brownian motion conditioned to survive for an unusually long time. The results in this section can be viewed as the analogs of the theorem of Yaglom \cite{yaglom} for critical Galton-Watson processes conditioned to survive for a long time.
Proposition \ref{extinctiondist}, which is a straightforward consequence of Theorem \ref{survival}, gives the asymptotic distribution of the survival time for the process, conditional on $\zeta > t$. We see that the amount of additional time for which the process survives is of the order $t^{2/3}$, and has approximately an exponential distribution.
\begin{proposition}\label{extinctiondist}
Suppose that for each $t > 0$, we have a deterministic initial configuration of particles $\nu_t$. Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have
\begin{equation}\label{maininitial}
\lim_{t \rightarrow \infty} Z_t(0) = 0, \hspace{.5in}\lim_{t \rightarrow \infty} \big(L_t(0) - R(0)\big) = \infty.
\end{equation}
Let $V$ have an exponential distribution with mean 1. Then, under the conditional probability measures $\ensuremath{\mathbf{P}}_{\nu_t}( \: \cdot \:| \zeta > t)$, we have $t^{-2/3}(\zeta - t) \Rightarrow \frac{3}{c} V$ as $t \rightarrow \infty$.
\end{proposition}
We are also able to get rather precise information regarding what the configuration of particles looks like at or near time $t$, conditional on the process surviving until time $t$. Recall the definitions of $M(s)$ and $R(s)$ from (\ref{MRdef}). Kesten proved (see (1.12) of \cite{kesten}) that for fixed $x > 0$, there is a positive constant $K_2$ such that
\begin{equation}\label{KestenM}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_x(M(t) > e^{K_2 t^{2/9} (\log t)^{2/3}} \, | \, \zeta > t) = 0.
\end{equation}
Kesten also showed (see (1.11) of \cite{kesten}) that there is a positive constant $K_3$ such that
\begin{equation}\label{KestenR}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_x(R(t) > K_3 t^{2/9} (\log t)^{2/3} \,| \, \zeta > t) = 0.
\end{equation}
Theorem \ref{larges} below provides sharper results regarding the behavior of the number of particles in the system and the position of the right-most particle near time $t$, when the process is conditioned to survive until time $t$. Note that the time $s$ depends on $t$.
\begin{theorem}\label{larges}
Suppose that for each $t > 0$, we have a deterministic initial configuration of particles $\nu_t$ such that (\ref{maininitial}) holds under $\ensuremath{\mathbf{P}}_{\nu_t}$.
Suppose $s \in [0, t]$. Let $V$ have an exponential distribution with mean 1. Under the conditional probability measures $\ensuremath{\mathbf{P}}_{\nu_t}(\: \cdot\:|\,\zeta > t)$, the following hold:
\begin{enumerate}
\item If $t^{-2/3}(t - s) \rightarrow \sigma \geq 0$, then
\begin{equation}\label{J1as}
\big(t^{-2/3}(\zeta - t), t^{-2/9} \log M(s), t^{-2/9} R(s) \big) \Rightarrow \bigg( \frac{3V}{c}, \: c \Big(\sigma + \frac{3V}{c}\Big)^{1/3}, \: c \Big(\sigma + \frac{3V}{c}\Big)^{1/3} \bigg).
\end{equation}
\item If $t^{2/3} \ll t - s \ll t$, then letting $a(s,t) = ((t-s)/t)^{2/3}$ and $b(s,t) = c(t-s)^{1/3} - \log(t-s)$, we have
\begin{equation}\label{J2as}
\big(t^{-2/3}(\zeta - t), a(s,t)(\log M(s) - b(s,t)), a(s,t)(R(s) - b(s,t)) \big) \Rightarrow \bigg( \frac{3V}{c}, V, V \bigg).
\end{equation}
\end{enumerate}
\end{theorem}
Part 1 of Theorem \ref{larges} with $\sigma = 0$ implies that if $t - s \ll t^{2/3}$, and in particular if $s = t$, then conditional on $\zeta > t$, we have $t^{-2/9} \log M(s) \Rightarrow (3c^2)^{1/3} V^{1/3}$ and $t^{-2/9} R(s) \Rightarrow (3c^2)^{1/3} V^{1/3}$. When we start with one particle at $x$, these results improve upon (\ref{KestenM}) and (\ref{KestenR}). Note also that when $t^{2/3} \ll t - s \ll t/(\log t)^{3/2}$, the $\log(t - s)$ term can be dropped from $b(s,t)$.
\begin{remark}
The assumption in Proposition~\ref{extinctiondist} and Theorem~\ref{larges} that the initial configurations are deterministic is important. Suppose we allow the $\nu_t$ to be random and assume, similar to part 1 of Theorem \ref{survival}, that under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \rightarrow_p 0$ and $L_t(0) - R(0) \rightarrow_p \infty$. To see that the conclusions of Proposition~\ref{extinctiondist} and Theorem~\ref{larges} can fail, consider the example in which $\nu_t$ consists of a single particle at 1 with probability $1 - 1/t$ and a single particle at $2L_t(0)$ with probability $1/t$. Conditional on the initial particle being at 1, the probability that the process survives until time $t$ is approximately $\alpha \pi e^{1 - ct^{1/3}}$ by (\ref{scsurvival}), while conditional on the initial particle being at $2L_t(0)$, the probability of survival until time $t$ is approximately $1 - q$. Therefore, conditional on survival, with overwhelming probability the initial particle was at $2L_t(0)$, and on this event, the configuration of particles at time $t$ will be quite different from what is predicted by Theorem~\ref{larges}.
\end{remark}
\begin{remark}
It is possible to define the process conditioned to survive \emph{for all time}, through a certain \emph{spine decomposition}, which is classical for branching processes. One can easily convince oneself that in this process the number of particles at time $t$ is of the order $\exp(O(t^{1/2}))$, which is of a very different magnitude from the $\exp(O(t^{2/9}))$ obtained in the above theorems. This is in stark contrast to the classical case of (critical) Galton-Watson processes conditioned to survive until time $t$ or forever, where the number of particles at time $t$ is of the order of $t$ in both cases (see e.g.~\cite{lpp}); in fact, both are related through a certain change of measure.
\end{remark}
\section{Tools, heuristics, and further results}
In this section, we describe some of the tools required to prove the main results stated in the introduction, along with the heuristics that allow us to see why these results are true. We also state some further results (Theorems \ref{CSBPcond}, \ref{meds}, and \ref{condconfigprop}) which provide information about the behavior of the branching Brownian motion during the time interval $[\delta t, (1 - \delta)t]$, where $\delta > 0$ is small, conditioned on survival of the process until time $t$.
Theorems \ref{survival} and \ref{survivex} and Proposition \ref{extinctiondist} depend heavily on a connection between branching Brownian motion with absorption and continuous-state branching processes. This connection is explained in Section~\ref{CSBPintro}, where Theorems~\ref{CSBPthm} and \ref{CSBPcond} are stated. To prepare for the proof of Theorem~\ref{larges}, we present in Section~\ref{sec:particle_configurations} a slight generalization of a result on particle configurations from \cite{bbs3}, which is Proposition~\ref{configpropnew}. We also state in that section two more results complementing Theorem~\ref{larges}, namely Theorems \ref{meds} and \ref{condconfigprop}. To be able to apply Proposition~\ref{configpropnew} for proving Theorem~\ref{larges}, we develop a method which allows us to predict the extinction time starting from an arbitrary initial configuration. This method is outlined in Section~\ref{sec:predicting}. Section~\ref{sec:descendants} recalls a result on the number of descendants of a single particle in branching Brownian motion with absorption, which is used in the proof of Theorem~\ref{CSBPthm}. Finally, Section~\ref{sec:organization} explains the organization of the rest of the paper.
\subsection{Connections with continuous-state branching processes}\label{CSBPintro}
The primary tool that allows us to improve upon previous results is a connection between branching Brownian motion with absorption and continuous-state branching processes. This connection is a variation of a result of Berestycki, Berestycki, and Schweinsberg \cite{bbs2}, who considered branching Brownian motion with absorption in which the drift $\mu$ was slightly supercritical and was chosen so that the number of particles in the system remained approximately stable over the longest possible time. They showed that under a suitable scaling, the number of particles in branching Brownian motion with absorption converges to a continuous-state branching process with jumps. The intuition behind why we get a jump process in the limit is that, on rare occasions, a particle will move unusually far to the right. Many descendants of this particle will then be able to survive, because they will avoid being absorbed at zero. Such events can lead to a large rapid increase in the number of particles, and such events become instantaneous jumps in the limit as $t \rightarrow \infty$. To prove the main results of the present paper, we will need to establish a version of this result when $\mu = 1$, so that the drift is critical.
\paragraph{Continuous-state branching processes.}
A continuous-state branching process is a $[0, \infty]$-valued Markov process $(\Xi(t), t \geq 0)$ whose transition functions satisfy the branching property $p_t(x + y, \: \cdot) = p_t(x, \: \cdot) * p_t(y, \: \cdot),$ which means that the sum of two independent copies of the process started from $x$ and $y$ has the same finite-dimensional distributions as the process started from $x + y$. It is well-known that continuous-state branching processes can be characterized by their branching mechanism, which is a function $\Psi: [0, \infty) \rightarrow \ensuremath{\mathbb{R}}$. If we exclude processes that can make an instantaneous jump to $\infty$, the function $\Psi$ is of the form $$\Psi(q) = \gamma q + \beta q^2 + \int_0^{\infty} (e^{-qx} - 1 + qx \ensuremath{\mathbbm{1}}_{x \leq 1}) \: \nu(dx),$$ where $\gamma \in \ensuremath{\mathbb{R}}$, $\beta \geq 0$, and $\nu$ is a measure on $(0, \infty)$ satisfying $\int_0^{\infty} (1 \wedge x^2) \: \nu(dx) < \infty$. If $(\Xi(t), t \geq 0)$ is a continuous-state branching process with branching mechanism $\Psi$, then for all $\lambda \geq 0$,
\begin{equation}\label{csbpLaplace}
E[e^{-\lambda \Xi(t)} \,| \, \Xi_0 = x] = e^{-x u_t(\lambda)},
\end{equation}
where $u_t(\lambda)$ can be obtained as the solution to the differential equation
\begin{equation}\label{diffeq}
\frac{\partial}{\partial t} u_t(\lambda) = -\Psi(u_t(\lambda)), \hspace{.5in} u_0(\lambda) = \lambda.
\end{equation}
We will be interested here in the case
\begin{equation}\label{Psidef}
\Psi(q) = \Psi_{a,b}(q) = aq + bq \log q
\end{equation}
for $a \in \ensuremath{\mathbb{R}}$ and $b > 0$.
It is not difficult to solve (\ref{diffeq}) to obtain
\begin{equation}\label{utlambda}
u_t(\lambda) = \lambda^{e^{-bt}} e^{a(e^{-bt} - 1)/b}.
\end{equation}
This process was first studied by Neveu \cite{nev92} when $a = 0$ and $b = 1$. It is therefore also called \emph{Neveu's continuous state branching process.}
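For the reader's convenience, here is the short computation behind (\ref{utlambda}): the substitution $v_t = \log u_t(\lambda)$ linearizes (\ref{diffeq}),
\begin{equation*}
\frac{d}{dt} v_t = \frac{\partial_t u_t(\lambda)}{u_t(\lambda)} = -\big(a + b v_t\big), \qquad v_0 = \log \lambda,
\end{equation*}
so that $v_t = \big(\log \lambda + \tfrac{a}{b}\big)e^{-bt} - \tfrac{a}{b}$, and exponentiating gives (\ref{utlambda}).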
\paragraph{Relation with branching Brownian motion.}
The following result is the starting point in the study of branching Brownian motion with absorption at critical drift. Note that in contrast to the case of weakly supercritical drift considered in \cite{bbs2}, a nonlinear time change appears here.
\begin{theorem}\label{CSBPthm}
Suppose that for each $t > 0$, we have a possibly random initial configuration of particles $\nu_t$. Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \Rightarrow Z$ and $L_t(0) - R(0) \rightarrow_p \infty$ as $t \rightarrow \infty$.
Then there exists $a \in \ensuremath{\mathbb{R}}$ such that
the finite-dimensional distributions of the processes $$(Z_t((1 - e^{-u}) t), u \geq 0),$$ under $\ensuremath{\mathbf{P}}_{\nu_t}$, converge as $t \rightarrow \infty$ to the finite-dimensional distributions of a continuous-state branching process $(\Xi(u), u \geq 0)$ with branching mechanism $\Psi_{a,2/3}(q) = aq + \frac{2}{3} q \log q$, whose distribution at time zero is the distribution of $Z$.
\end{theorem}
The strategy for proving Theorem~\ref{CSBPthm} will be similar to the one followed in \cite{bbs2}, but the proof is more involved due to the time inhomogeneity emerging in the analysis as a result of the non-linear time change. Yet, thanks to the introduction of several new ideas, we were able to significantly reduce the length of the proof.
\begin{remark}
The constant $a$ in the statement of Theorem~\ref{CSBPthm} has the expression $a=\frac{2}{3}(a_{\ref{eq:abelian}}+\log \pi)+\frac{1}{2}$, with $a_{\ref{eq:abelian}}$ the constant defined in Lemma~\ref{neveuW} below.
\end{remark}
\begin{remark} To understand the time change, let $s$ denote the original time scale on which the branching Brownian motion is defined, and let $u$ denote the new time parameter under which the process will converge to a continuous-state branching process. From \cite{bbs2}, we know that the jumps in the process described above will happen at a rate proportional to $L_t(s)^{-3}$ or, equivalently, proportional to $(t - s)^{-1}$. This corresponds to the time scaling by $(\log N)^3$ in \cite{bbs2}. Therefore, to get a time-homogeneous limit, we need to set $du = (t - s)^{-1} \: ds.$ Integrating this equation gives
$$u = \int_0^u \: dv = \int_0^s (t - r)^{-1} \: dr = \log \bigg( \frac{t}{t-s} \bigg).$$ Rearranging, we get $s = (1 - e^{-u}) t$, which is the time change that appears in Theorem \ref{CSBPthm}.
\end{remark}
\paragraph{The probability of survival.}
Let $(\Xi(u), u \geq 0)$ be the continuous-state branching process that appears in Theorem \ref{CSBPthm}. It follows from well-known criteria due to Grey \cite{grey} that $(\Xi(u), u \geq 0)$ neither goes extinct nor explodes in finite time. That is, if $\Xi(0) \in (0, \infty)$, then $P(\Xi(u) \in (0, \infty) \mbox{ for all }u \geq 0) = 1$.
Let \begin{equation}\label{alphadef}
\alpha = e^{-3a/2}.
\end{equation}
The process $(e^{-\alpha \Xi(u)}, u \geq 0)$ is a martingale taking values in $(0, 1)$, as can be seen either by observing that $u_t(\alpha) = \alpha$ for all $t \geq 0$ and making a direct calculation using (\ref{csbpLaplace}), or by observing that $\Psi(\alpha) = 0$ and following the discussion on p.~716 of \cite{bfm08}. By the Martingale Convergence Theorem, this martingale converges to a limit, and it is not difficult to see that the only possible values for the limit are $0$ and $1$. Therefore, using $P_x$ to denote probabilities when $\Xi(0) = x$, we have, as noted in \cite{bfm08},
\begin{equation}\label{CSBPsurvival}
P_x \Big( \lim_{u \rightarrow \infty} \Xi(u) = \infty \Big) = 1 - e^{-\alpha x}, \hspace{.3in} P_x \Big( \lim_{u \rightarrow \infty} \Xi(u) = 0 \Big) = e^{-\alpha x}.
\end{equation}
As can be guessed from Theorem \ref{CSBPthm}, the event that $\lim_{u \rightarrow \infty} \Xi_u = \infty$ corresponds to the event that the branching Brownian motion survives until time $t$, and this correspondence leads to Theorem \ref{survival}. Note that the constant $\alpha$ in Theorem \ref{survival} and the constant $a$ in the definition of the continuous-state branching process in Theorem \ref{CSBPthm} are related by the formula (\ref{alphadef}).
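For completeness, we record the short computation behind the martingale property of $(e^{-\alpha \Xi(u)}, u \geq 0)$ mentioned above. With $\alpha = e^{-3a/2}$ as in (\ref{alphadef}), we have $\Psi_{a,2/3}(\alpha) = \alpha\big(a + \tfrac{2}{3}\log\alpha\big) = 0$, and, by (\ref{utlambda}) with $b = 2/3$,
\begin{equation*}
u_t(\alpha) = \exp\Big(e^{-2t/3}\log \alpha + \tfrac{3a}{2}\big(e^{-2t/3} - 1\big)\Big) = e^{-3a/2} = \alpha \qquad \mbox{for all } t \geq 0.
\end{equation*}
Hence, by (\ref{csbpLaplace}) and the Markov property, $E[e^{-\alpha \Xi(u+s)} \,|\, \Xi(s)] = e^{-\Xi(s) u_u(\alpha)} = e^{-\alpha \Xi(s)}$ for all $s, u \geq 0$.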
\paragraph{Conditioning on survival.}
To make a connection between continuous-state branching processes and branching Brownian motion conditioned on survival until time $t$, we need to consider the continuous-state branching process conditioned to go to infinity. Let $(\Xi(u), u \geq 0)$ be a continuous-state branching process with branching mechanism $\Psi(q) = aq + \frac{2}{3} q \log q$, started from $\Xi(0) = x$. Bertoin, Fontbona, and Martinez \cite{bfm08} interpreted this process as describing a population in which a random number (possibly zero) of so-called prolific individuals have the property that their number of descendants in the population at time $u$ tends to infinity as $u \rightarrow \infty$. The number $N$ of such prolific individuals at time zero has a Poisson distribution with mean $\alpha x$, which is consistent with Theorem \ref{survival}. As noted in Section~3 of \cite{bfm08}, the branching property entails that $(\Xi(u), u \geq 0)$ can be decomposed as the sum of $N$ independent copies of a process $(\Phi(u), u \geq 0)$, which describes the number of descendants of a prolific individual, plus a copy of the original process conditioned to go to zero as $u \rightarrow \infty$, which accounts for the descendants of the non-prolific individuals. Conditioning on the event $\lim_{u \rightarrow \infty} \Xi(u) = \infty$ is the same as conditioning on $N \geq 1$. Furthermore, as $x \rightarrow 0$, the conditional probability that $N = 1$ given $N \geq 1$ tends to one. Consequently, if we condition on $\lim_{u \rightarrow \infty} \Xi(u) = \infty$ and then let $x \rightarrow 0$, we obtain in the limit the process $(\Phi(u), u \geq 0)$. Therefore, the process $(\Phi(u), u \geq 0)$ can be interpreted as the continuous-state branching process started from zero but conditioned to go to infinity as $u \rightarrow \infty$. See \cite{bkm11, fm19} for further developments in this direction. The following result, which we will deduce from Theorem~\ref{CSBPthm}, describes the finite-dimensional distributions of the branching Brownian motion with absorption, conditioned to survive for an unusually long time.
\begin{theorem}\label{CSBPcond}
Suppose that for each $t > 0$, we have a deterministic initial configuration of particles $\nu_t$ such that (\ref{maininitial}) holds under $\ensuremath{\mathbf{P}}_{\nu_t}$. Then the finite-dimensional distributions of $(Z_t((1 - e^{-u})t), u \geq 0),$ under the conditional probability measures $\ensuremath{\mathbf{P}}_{\nu_t}(\:\cdot\:|\,\zeta > t)$, converge as $t \rightarrow \infty$ to the finite-dimensional distributions of $(\Phi(u), u \geq 0)$.
\end{theorem}
\begin{remark}
Theorem \ref{CSBPcond} provides another way of understanding Proposition \ref{extinctiondist}. It is known that
\begin{equation}\label{csbpasymp}
\lim_{u \rightarrow \infty} e^{-2u/3} \log \Xi(u) = - \log W \hspace{.2in} \mbox{a.s.},
\end{equation}
where $W$ has an exponential distribution with rate parameter $\alpha x$. This result was stated for the case when the branching mechanism is $\Psi(q) = q \log q$ in \cite{nev92} by Neveu, who attributed the result as being essentially due to Grey \cite{grey77}. A complete proof is given in Appendix A of \cite{fs04}, and by using (\ref{utlambda}), this proof can be adapted to give (\ref{csbpasymp}) when $\Psi(q) = aq + \frac{2}{3} q \log q$. By conditioning on the event $\lim_{u \rightarrow \infty} \Xi(u) = \infty$, which is equivalent to conditioning on $-\log W > 0$, and then letting $x \rightarrow 0$, we obtain
\begin{equation}\label{CSBPV}
\lim_{u \rightarrow \infty} e^{-2u/3} \log \Phi(u) = V \hspace{.2in} \mbox{ a.s.},
\end{equation}
where $V$ has the exponential distribution with mean $1$. This exponential limit law was derived also in Proposition 7 of \cite{fm19}. It turns out that the random variable $V$ in (\ref{CSBPV}) is the same random variable that appears in Proposition \ref{extinctiondist} and Theorem \ref{larges} above. To see this, note that (\ref{CSBPV}) combined with Theorem \ref{CSBPcond} implies that when $u$ is large, we can write $Z_t((1 - e^{-u})t) \approx \exp(e^{2u/3} V)$. Using the Taylor approximation $c(t + s)^{1/3} - ct^{1/3} \approx \frac{c}{3} s t^{-2/3}$ when $s \ll t$, we have
\begin{align}\label{CSBPheuristic}
Z_{t + vt^{2/3}}((1 - e^{-u})t) &\approx \exp \Big( e^{2u/3} V - L_{t + vt^{2/3}}((1 - e^{-u})t) + L_t((1 - e^{-u})t) \Big) \nonumber \\
&\approx \exp \Big( e^{2u/3} V - \frac{c}{3}(v t^{2/3}) (e^{-u} t)^{-2/3} \Big) \nonumber \\
&= \exp\Big( e^{2u/3} V - \frac{vc}{3} e^{2u/3} \Big).
\end{align}
The process should survive until approximately time $t + vt^{2/3}$, where $v$ is chosen so that $Z_{t + vt^{2/3}}((1 - e^{-u})t)$ is neither too close to zero nor too large. This will happen when the expression inside the exponential in (\ref{CSBPheuristic}) is close to zero, which occurs when $v = \frac{3}{c}V$. That is, conditional on survival until at least time $t$, the process should survive for approximately time $t + \frac{3}{c} V t^{2/3}$, consistent with Proposition \ref{extinctiondist}.
\end{remark}
\subsection{Particle configurations}
\label{sec:particle_configurations}
After branching Brownian motion with absorption has been run for a sufficiently long time, the particles will settle into a fairly stable configuration. Specifically, as long as $Z_t(s)$ is neither too small nor too large, the “density” of particles near $y$ at time $s$ is likely to be roughly proportional to
\begin{equation}\label{roughdensity}
\sin \bigg( \frac{\pi y}{L_t(s)} \bigg) e^{-y}.
\end{equation}
Berestycki, Berestycki, and Schweinsberg \cite{bbs3} obtained some results that made this idea precise, in the case of binary branching when the branching Brownian motion starts from a single particle that is far from the origin. The proposition below extends the results in \cite{bbs3} to more general initial configurations and more general offspring distributions.
\begin{proposition}\label{configpropnew}
Consider a possibly random sequence of initial configurations $(\nu_n)_{n=1}^{\infty}$, along with possibly random times $(t_n)_{n=1}^{\infty}$, where $t_n$ may depend only on $\nu_n$ and $t_n \rightarrow_p \infty$ as $n \rightarrow \infty$. Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_n}$, the sequences $(Z_{t_n}(0))_{n=1}^{\infty}$ and $(Z_{t_n}(0)^{-1})_{n=1}^{\infty}$ are tight, and $L_{t_n}(0) - R(0) \rightarrow_p \infty$ as $n \rightarrow \infty$.
Let $0 < \delta < 1/2$. Then the following hold:
\begin{enumerate}
\item For all $\varepsilon > 0$, there exist positive constants $C_3$ and $C_4$, depending on $\delta$ and $\varepsilon$, such that if $\delta t_n \leq s \leq (1 - \delta)t_n$ and $n$ is sufficiently large, then
\begin{equation}\label{configconc1}
\ensuremath{\mathbf{P}}_{\nu_n} \bigg( \frac{C_3}{L_{t_n}(s)^3} e^{L_{t_n}(s)} \leq M(s) \leq \frac{C_4}{L_{t_n}(s)^3} e^{L_{t_n}(s)} \bigg) > 1 - \varepsilon.
\end{equation}
\item For all $\varepsilon > 0$, there exist positive constants $C_5$ and $C_6$, depending on $\delta$ and $\varepsilon$, such that if $\delta t_n \leq s \leq (1 - \delta)t_n$ and $n$ is sufficiently large, then
\begin{equation}\label{configconc2}
\ensuremath{\mathbf{P}}_{\nu_n} \big( L_{t_n}(s) - \log t_n - C_5 \leq R(s) \leq L_{t_n}(s) - \log t_n + C_6 \big) > 1 - \varepsilon.
\end{equation}
\item Let $N_{s,n}$ denote the set of particles alive at time $s$ for branching Brownian motion started from the initial configuration $\nu_n$. Let $(s_n)_{n=1}^{\infty}$ be a sequence of times such that $\delta t_n \leq s_n \leq (1 - \delta) t_n$ for all $n$. Define the probability measures $$\chi_n = \frac{1}{M(s_n)} \sum_{u \in N_{s_n,n}} \delta_{X_u(s_n)}, \hspace{.1in} \eta_n = \bigg( \sum_{u \in N_{s_n,n}} e^{X_u(s_n)} \bigg)^{-1} \sum_{u \in N_{s_n,n}} e^{X_u(s_n)} \delta_{X_u(s_n)/L_{t_n}(s_n)}.$$ Let $\mu$ be the probability measure on $(0, \infty)$ with density $g(y) = ye^{-y}$, and let $\xi$ be the probability measure on $(0,1)$ with density $h(y) = \frac{\pi}{2} \sin(\pi y)$. Then $\chi_n \Rightarrow \mu$ and $\eta_n \Rightarrow \xi$ as $n \rightarrow \infty$, where $\Rightarrow$ denotes convergence in distribution for random elements in the Polish space of probability measures on $(0, \infty)$, endowed with the weak topology.
\end{enumerate}
\end{proposition}
\begin{remark}
Parts 1 and 2 of Proposition \ref{configpropnew} give estimates on the number of particles at time $s$ and the position of the right-most particle at time $s$. Part 3 of Proposition \ref{configpropnew} states two limit theorems which together make precise the idea described in (\ref{roughdensity}). If we choose a particle at random from the particles alive at time $s$, then most likely we will choose a particle near the origin. Using the $\sin(x) \approx x$ approximation for small $x$, we get that the density of the position of this randomly chosen particle is approximately $g$. If instead we choose a particle at random such that a particle at $y$ is chosen with probability proportional to $e^y$, and then we scale the location of the chosen particle such that the right-most particle is located near $1$, then the density of the chosen particle is approximately $h$.
\end{remark}
\begin{remark}
Proposition \ref{configpropnew} also allows us to see why Theorem \ref{larges} should be true. For simplicity, we focus on the case when $s = t$. Consider a branching Brownian motion that has already survived for time $t$ and will ultimately survive until time $t + v$. We expect $Z_{t+v}(t)$ to be neither too close to zero (in which case the process would most likely die out before time $t + v$) nor too large (in which case the process would most likely survive beyond time $t + v$). Furthermore, because the process has evolved for a long time, we expect the density of particles at time $t$ to follow approximately (\ref{roughdensity}). It follows that the position of the right-most particle at time $t$ should be close to $L_{t+v}(t) = cv^{1/3}$, while the number of particles at time $t$ should be within a constant multiple of $v^{-1} e^{cv^{1/3}}$. The key to proving Theorem \ref{larges} is to argue that as long as $t - s \ll t$, the extinction time can be predicted fairly accurately from the configuration of particles at time $s$, so that we can apply Proposition \ref{configpropnew} with the predicted extinction time of the process in place of $t_n$. Proposition \ref{extinctiondist} tells us that conditional on survival until time $t$, the amount of additional time for which the process survives can be approximated by $\frac{3}{c}V t^{2/3}$, where $V$ has an exponential distribution with mean one. Therefore, using $\frac{3}{c}Vt^{2/3}$ in place of $v$, we expect $\log M(t) \approx R(t) \approx c(\frac{3}{c} V t^{2/3})^{1/3} = (3c^2 V)^{1/3} t^{2/9}$, consistent with Theorem \ref{larges}.
\end{remark}
\paragraph{More results conditioned on survival.} The following two results complement Theorem~\ref{larges} and will be proved using the same methods, explained in Section~\ref{sec:predicting}. As in Theorem~\ref{larges}, the time $s$ depends on $t$.
\begin{theorem}\label{meds}
Suppose that for each $t > 0$, we have a deterministic initial configuration of particles $\nu_t$ such that (\ref{maininitial}) holds under $\ensuremath{\mathbf{P}}_{\nu_t}$. Let $0 < \delta < 1/2$, and suppose $s \in [\delta t, (1 - \delta) t]$. For all $\varepsilon > 0$, there exist positive constants $C_3$, $C_4$, $C_5$, and $C_6$ such that if $t$ is sufficiently large, then
$$\ensuremath{\mathbf{P}}_{\nu_t} \bigg( \frac{C_3}{L_t(s)^3} e^{L_t(s)} \leq M(s) \leq \frac{C_4}{L_t(s)^3} e^{L_t(s)} \,\Big|\, \zeta > t \bigg) > 1 - \varepsilon$$ and $$\ensuremath{\mathbf{P}}_{\nu_t}\big(L_t(s) - \log t - C_5 \leq R(s) \leq L_t(s) - \log t + C_6 \,\big|\, \zeta > t\big) > 1 - \varepsilon.$$
\end{theorem}
\begin{theorem}\label{condconfigprop}
Suppose that for each $t > 0$, we have a deterministic initial configuration of particles $\nu_t$ such that (\ref{maininitial}) holds under $\ensuremath{\mathbf{P}}_{\nu_t}$.
Suppose $s \in [0, t]$, and suppose $$\liminf_{t \rightarrow \infty} \frac{s}{t} > 0.$$ Define the probability measures $$\chi_s = \frac{1}{M(s)} \sum_{u \in N_s} \delta_{X_u(s)}, \hspace{.2in} \eta_s = \bigg( \sum_{u \in N_s} e^{X_u(s)} \bigg)^{-1} \sum_{u \in N_s} e^{X_u(s)} \delta_{X_u(s)/R(s)}.$$ Then, under the conditional probability measures $\ensuremath{\mathbf{P}}_{\nu_t}(\:\cdot\:|\,\zeta > t)$, we have $\chi_s \Rightarrow \mu$ and $\eta_s \Rightarrow \xi$ as $t \rightarrow \infty$, where $\mu$ and $\xi$ are defined as in Proposition \ref{configpropnew}. If $\limsup_{t \rightarrow \infty} s/t < 1$, then we may replace $R(s)$ by $L_t(s)$ in the formula for $\eta_s$.
\end{theorem}
\subsection{Predicting the extinction time}
\label{sec:predicting}
Our strategy for proving Theorem \ref{larges} will be to use Proposition \ref{configpropnew} to deduce results about the configuration of particles at time $s$, where $t - s \ll t$, by allowing the configuration of particles at some time $r \leq s$ to play the role of the initial configuration of particles. To do this, we will need to show that the configuration of particles at time $r$ satisfies the hypotheses of Proposition \ref{configpropnew}. However, because the number of particles near time $t$ is highly variable, there is no deterministic choice of $t_n$ that will allow the tightness criterion in Proposition \ref{configpropnew} to be satisfied.
Consequently, we will develop a method for associating with an arbitrary configuration of particles a random time, which represents approximately how long the branching Brownian motion is likely to survive, starting from that configuration. This technique may be of independent interest. For all $s \geq 0$, let
\begin{equation}\label{Tdef}
T(s) = \inf\{t: L_{s+t}(s) \geq R(s) + 2 \mbox{ and }Z_{s+t}(s) \leq 1/2\}.
\end{equation}
For any fixed $s \geq 0$, we have $\lim_{t \rightarrow \infty} L_{s+t}(s) = \infty$, and for any fixed $s \geq 0$ and $x > 0$, we have $\lim_{t \rightarrow \infty} z_{s+t}(x,s) = 0$. Therefore, $T(s)$ is well-defined and finite.
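As an illustration, $T(s)$ can be evaluated numerically from the particle positions at time $s$ by scanning candidate horizons; the grid resolution and the upper bound in the following sketch are arbitrary choices, and the scan only approximates the infimum in (\ref{Tdef}).
\begin{verbatim}
# Sketch: numerically evaluating the predicted extinction time T(s) of (Tdef)
# from the particle positions at time s. The grid scan only approximates the
# infimum, and the step size / upper bound are arbitrary choices.
import numpy as np

C_CRIT = (3 * np.pi ** 2 / 2) ** (1 / 3)        # the constant c from (Lcdef)

def Z_horizon(positions, t):
    # Z_{s+t}(s) as a function of the remaining time t = (s + t) - s.
    L = C_CRIT * t ** (1 / 3)
    x = np.asarray(positions, dtype=float)
    x = x[(x >= 0) & (x <= L)]
    return np.sum(L * np.sin(np.pi * x / L) * np.exp(x - L))

def predicted_extinction_time(positions, dt=0.5, t_max=1e6):
    t0 = ((max(positions) + 2.0) / C_CRIT) ** 3  # first t with L_{s+t}(s) >= R(s) + 2
    for t in np.arange(t0, t_max, dt):
        if Z_horizon(positions, t) <= 0.5:
            return t
    return t_max                                 # increase t_max for far-out configurations

print(predicted_extinction_time([1.0, 2.5, 4.0]))
# roughly the additional time this configuration can be expected to survive
\end{verbatim}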
The following result allows us to interpret $T(s)$ as being approximately the amount of additional time we expect the process to survive, given what the configuration of particles looks like at time $s$, provided that no particle at time $s$ is too close to $L_{T(s)}(0)$.
\begin{lemma}\label{tightextinct}
Let $\varepsilon > 0$. There exist positive constants $k'$, $t'$, and $a'$ such that for all initial configurations $\nu$ such that $T(0) \geq t'$ and $L_{T(0)}(0) - R(0) \geq a'$, we have $$\ensuremath{\mathbf{P}}_{\nu}(|\zeta - T(0)| \leq k' T(0)^{2/3}) > 1 - \varepsilon.$$
\end{lemma}
To apply Proposition \ref{configpropnew} to the configuration of particles at time $r$, we will need to know that with high probability, no particle at time $r$ is too close to $L_{T(r)}(0)$. The key to this argument will be Lemma \ref{mainLRlemma}, which says that starting from any configuration of particles at time zero, there will typically be no particle close to this right boundary a short time later.
\begin{lemma}\label{mainLRlemma}
Let $\varepsilon > 0$ and $A > 0$. There exist positive real numbers $t_0 > 0$ and $d > 0$, depending on $\varepsilon$ and $A$, such that if $\nu$ is any initial configuration of particles, then $$\ensuremath{\mathbf{P}}_{\nu}(\{R(d) \geq L_{T(d)}(0) - A\} \cap \{T(d) \geq t_0\}) < \varepsilon.$$
\end{lemma}
\subsection{Descendants of a single particle}
\label{sec:descendants}
Recall that $(p_k)_{k=1}^{\infty}$ denotes the offspring distribution when a particle branches. Let $L$ be a random variable such that $P(L = k) = p_k$. Recall that we suppose that $\ensuremath{\mathbf{E}}[L^2]<\infty$. Let $f(s) = \ensuremath{\mathbf{E}}[s^L]$ be the probability generating function of the offspring distribution, and let $q$ be the smallest root of $f(s) = s$, which is the extinction probability for a Galton-Watson process with offspring distribution $(p_k)_{k=1}^{\infty}$. We record the following lemma, which is a consequence of results in Chapter 4 of \cite{thesis}.
\begin{lemma}\label{neveuW}
Suppose the branching Brownian motion is started with a single particle at zero, and there is no absorption at the origin. For each $y \geq 0$, let $K(y)$ be the number of particles that reach $-y$ if particles are killed upon reaching $-y$. Then there exists a random variable $W$ such that $$\lim_{y \rightarrow \infty} y e^{-y} K(y) = W \hspace{.2in}\textup{a.s.}$$ We have $\ensuremath{\mathbf{P}}(W > 0) = 1 - q$ and ${\bf E}[e^{-e^x W} ] = \psi(x)$, where $\psi$ is a solution to the equation $$\frac{1}{2} \psi'' - \psi' = \beta(\psi - f \circ \psi)$$ with $\lim_{x \rightarrow -\infty} \psi(x) = 1$, $\lim_{x \rightarrow \infty} \psi(x) = q$ and $1-\psi(-x)\sim xe^{-x}$ as $x\rightarrow\infty$. In fact, there exists $a_{\ref{eq:abelian}}\in\ensuremath{\mathbb{R}}$ such that as $\lambda \rightarrow 0$,
\begin{equation}
\label{eq:abelian}
\ensuremath{\mathbf{E}}[e^{-\lambda W}] = \exp\left(\Psi_{a_{\ref{eq:abelian}},1}(\lambda) + o(\lambda)\right),
\end{equation}
where $\Psi_{a,b}(\lambda) = a\lambda + b\lambda \log \lambda$ is the function from \eqref{Psidef}.
\end{lemma}
In the case of binary branching, the existence of the random variable $W$ in Lemma~\ref{neveuW} goes back to the work of Neveu \cite{nev87}. Proposition 4.1 in Chapter 2 of \cite{thesis} establishes that
\begin{equation}\label{Wasymp1}
\ensuremath{\mathbf{P}}(W > x) \sim \frac{1}{x} \hspace{.3in}\textup{as }x \rightarrow \infty
\end{equation}
and
\begin{equation}\label{Wasymp2}
\ensuremath{\mathbf{E}}[W \ensuremath{\mathbbm{1}}_{\{W \leq x\}}] - \log x \rightarrow C \hspace{.3in}\textup{as }x \rightarrow \infty.
\end{equation}
The results (\ref{Wasymp1}) and (\ref{Wasymp2}) were proved earlier in \cite{bbs2} for binary branching.
As indicated in \cite{thesis}, the result (\ref{eq:abelian}) follows from (\ref{Wasymp1}) and (\ref{Wasymp2})
by de Haan's Tauberian Theorem (see Theorem 2 of \cite{dh76}).
\begin{remark}
Lemma \ref{neveuW} holds under weaker assumptions on the offspring distribution; see \cite{thesis}. Also, an analogous result for branching random walk has been proven recently in \cite{bim20}.
The random variable $W$ appearing in Lemma~\ref{neveuW} is equal to the limit of the so-called \emph{derivative martingale} \cite{nev87}, but we will not use this fact explicitly.
\end{remark}
\subsection{Organization of the paper}
\label{sec:organization}
In Sections \ref{survivalsec} and \ref{18sec}, we prove the main results of the paper, assuming Theorem \ref{CSBPthm} and Proposition \ref{configpropnew}. The most novel arguments in the paper are in these two sections. In Section~\ref{survivalsec}, we prove Theorems \ref{survival} and \ref{survivex} and Proposition \ref{extinctiondist}, all of which pertain to survival times for the process, as well as Theorem \ref{CSBPcond}, whose proof requires similar ideas. In Section \ref{18sec}, we consider the process conditioned to survive until a large time $t$. We prove Theorems \ref{larges}, \ref{meds}, and \ref{condconfigprop}, along with Lemmas \ref{tightextinct} and \ref{mainLRlemma}.
The last four sections of the paper are devoted to the proofs of Theorem \ref{CSBPthm} and Proposition \ref{configpropnew}. In Section \ref{momsec}, we establish some preliminary heat kernel and moment estimates that will be needed to prove those results. In Section~\ref{configsec}, we show how to use results from \cite{bbs3} to deduce Proposition~\ref{configpropnew}. Finally, Theorem \ref{CSBPthm} is proved in Sections \ref{CSBPsec} and \ref{sec:csbp_proof}.
\section{The probability of survival until time \texorpdfstring{$t$}{t}}\label{survivalsec}
Let $(\Xi(u), u \geq 0)$ denote a continuous-state branching process with branching mechanism $\Psi(q) = aq + \frac{2}{3} q \log q$, where $a$ is the constant from Theorem \ref{CSBPthm}. Use $P_x$ and $E_x$ to denote probabilities and expectations for this process started from $\Xi(0) = x$. Recall (\ref{CSBPsurvival}), and let ${\cal E}$ be the event that $\lim_{u \rightarrow \infty} \Xi(u) = 0$, so that
\begin{equation}\label{CSBPext}
P_x({\cal E}) = e^{-\alpha x},
\end{equation}
where $\alpha = \exp(-3a/2)$ as defined in (\ref{alphadef}). Throughout this section, we also use the notation $$\phi_t(u) = (1 - e^{-u})t.$$
We begin with the following lemma, which can be deduced from (\ref{oldbound}) and gives an initial rough estimate of the survival probability.
\begin{lemma}\label{survivalgen}
There exist positive constants $C_2$ and $C_7$ such that for all $t > 0$ and all initial configurations $\nu$ such that $R(0) \leq L_t(0) - 1$, we have
\begin{equation}\label{roughsurvival}
1 - e^{-C_7 Z_t(0)} \leq \ensuremath{\mathbf{P}}_{\nu}(\zeta > t) \leq C_2 Z_t(0),
\end{equation}
and the lower bound holds even if the condition $R(0) \leq L_t(0) - 1$ is removed.
\end{lemma}
\begin{proof}
Recall that (\ref{oldbound}) implies that if $0 \leq x \leq L_t(0) - 1$, then
\begin{equation}\label{zsurvive}
C_1 z_t(x,0) \leq \ensuremath{\mathbf{P}}_x(\zeta > t) \leq C_2 z_t(x,0).
\end{equation}
One easily checks that there exists $C>0$ such that $z_t(x,0) \le C$ and $z_t(L_t(0)-1,0) \ge C^{-1}$ for $t$ sufficiently large. Furthermore, $\ensuremath{\mathbf{P}}_x(\zeta > t)$ is an increasing function of $x$. Hence, the lower bound in (\ref{zsurvive}) holds even if $x > L_t(0) - 1$, with the constant $C_1$ replaced by a different constant $C_7$. Now consider a general initial configuration of particles $\nu$. It follows from Boole's Inequality and (\ref{zsurvive}) that $$\ensuremath{\mathbf{P}}_{\nu}(\zeta > t) \leq \sum_{u \in N_0} \ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t) \leq C_2 Z_t(0),$$ which is the upper bound in (\ref{roughsurvival}). To see the lower bound, note that by the inequality $1-x\le e^{-x}$ for $x\in [0,1]$,
\begin{align*}
\ensuremath{\mathbf{P}}_{\nu}(\zeta > t)
&= 1 - \prod_{u \in N_0} (1 - \ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t)) \\
&\geq 1 - \exp \bigg( - \sum_{u \in N_0} \ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t) \bigg) \geq 1 - e^{-C_7Z_t(0)},
\end{align*}
as claimed.
\end{proof}
\begin{remark}
Once we prove Theorem \ref{survivex}, we will know that the condition $R(0) \leq L_t(0) - 1$ keeps the probabilities $\ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t)$ bounded away from one. This means there is a positive constant $C$ for which $1 - \ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t) \geq \exp\left(-C \ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t)\right)$ for all $u \in N_0$; indeed, if these probabilities are bounded above by some $p_0 < 1$, then the concavity of $p \mapsto \log(1-p)$ allows us to take $C = -p_0^{-1} \log(1 - p_0)$. Therefore, letting $C_8 = C C_2$, it will follow as in the above proof that
\begin{equation}\label{strongupper}
\ensuremath{\mathbf{P}}_{\nu}(\zeta > t) \leq 1 - \exp \bigg(- C \sum_{u \in N_0} \ensuremath{\mathbf{P}}_{X_u(0)}(\zeta > t) \bigg) \leq 1 - e^{-C_8 Z_t(0)}.
\end{equation}
This stronger form of the upper bound will be used in the proof of Lemma \ref{mainLRlemma} below.
\end{remark}
\begin{lemma}\label{zrare}
Suppose that, for each $t > 0$, we have a deterministic configuration of particles $\nu_t$. Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $L_t(0) - R(0) \rightarrow \infty$ and $Z_t(0) \rightarrow z \in (0, \infty)$ as $t \rightarrow \infty$. Let $\delta > 0$ and $r \in (0, 1)$. Then there exist $\varepsilons > 0$ and $y > 0$, depending on $\delta$ but not on $r$, such that for sufficiently large $t$, we have
\begin{equation}\label{smallzeq}
\ensuremath{\mathbf{P}}_{\nu_t}(\{\zeta > t\} \cap \{Z_t(rt) \leq \varepsilons\}) < \delta
\end{equation}
and
\begin{equation}\label{bigzeq}
\ensuremath{\mathbf{P}}_{\nu_t}(\{\zeta \leq t\} \cap \{Z_t(rt) \geq y\}) < \delta.
\end{equation}
\end{lemma}
\begin{proof}
Write $s = rt$, and let $A_{s,t}$ be the event that all particles at time $s$ are in the interval $[0, L_t(s) - 1]$. By applying the Markov property at time $s$ along with the upper bound in Lemma \ref{survivalgen}, and noting that $L_t(s) = L_{t-s}(0)$, we get that on the event $A_{s,t}$, we have $\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t \,|\, {\cal F}_s) \leq C_2 Z_t(s)$. Therefore,
$$\ensuremath{\mathbf{P}}_{\nu_t}(\{\zeta > t\} \cap \{Z_t(s) \leq \varepsilons\} \cap A_{s,t}) \leq \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t \,|\, A_{s,t} \cap \{Z_t(s) \leq \varepsilons\}) \leq C_2 \varepsilons.$$ Also, it follows from the conclusion (\ref{configconc2}) of Proposition \ref{configpropnew} that $\ensuremath{\mathbf{P}}_{\nu_t}(A_{s,t}^c) < \delta/2$ for sufficiently large $t$. The result (\ref{smallzeq}) follows by choosing $\varepsilons < \delta/(2C_2)$. Likewise, the lower bound in Lemma \ref{survivalgen}, in combination with the Markov property applied at time $s$, gives $\ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t \,|\, {\cal F}_s) \leq e^{-C_7 Z_t(s)}$. Therefore,
$$\ensuremath{\mathbf{P}}_{\nu_t}(\{\zeta \leq t\} \cap \{Z_t(s) \geq y\}) \leq \ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t \,|\, Z_t(s) \geq y)\leq e^{-C_7 y},$$
and thus (\ref{bigzeq}) holds for sufficiently large $y$.
\end{proof}
\begin{lemma}\label{trilem}
Suppose that, for each $t > 0$, we have a deterministic configuration of particles $\nu_t$. Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $L_t(0) - R(0) \rightarrow \infty$ and $Z_t(0) \rightarrow z \in (0, \infty)$ as $t \rightarrow \infty$. Let $\delta > 0$. There exist $\varepsilons > 0$, $y > 0$, and $u_0 > 0$ such that for each fixed $u \geq u_0$, we have for sufficiently large $t$,
\begin{align*}
&\ensuremath{\mathbf{P}}_{\nu_t}(\{Z_t(\phi_t(u)) \leq \varepsilons\} \: \triangle \: \{\zeta \leq t\}) < 3\delta \\
&\ensuremath{\mathbf{P}}_{\nu_t}(\{Z_t(\phi_t(u)) > y\} \: \triangle \: \{\zeta > t\}) < 3\delta \\
&P_z(\{\Xi(u) \leq \varepsilons\} \: \triangle \: {\cal E}) < 3\delta \\
&P_z(\{\Xi(u) > y\} \: \triangle \: {\cal E}^c) < 3\delta
\end{align*}
where $\triangle$ denotes the symmetric difference between two events.
\end{lemma}
\begin{proof}
Choose $\varepsilons > 0$ small enough that $P_{\varepsilons}({\cal E}) \geq 1 - \delta$ and (\ref{smallzeq}) holds. Choose $y > 0$ large enough that $P_y({\cal E}) \leq \delta$ and (\ref{bigzeq}) holds. Fix $u_0$ large enough that $P_z(\varepsilons < \Xi(u) \leq y) < \delta$ for $u \geq u_0$, which is possible because the limit in (\ref{CSBPsurvival}) exists. By Theorem \ref{CSBPthm}, for $u \geq u_0$,
$$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(\varepsilons < Z_t(\phi_t(u)) \leq y) = P_z(\varepsilons < \Xi(u) \leq y) < \delta.$$ The first two statements of the lemma follow from this result and Lemma \ref{zrare}. Likewise, it follows from the Markov property of $(\Xi(u), u \geq 0)$ that $P_z(\{\Xi(u) \leq \varepsilons\} \cap {\cal E}^c) \leq P_{\varepsilons}({\cal E}^c) < \delta$ and $P_z(\{\Xi(u) > y\} \cap {\cal E}) \leq P_y({\cal E}) \leq \delta$. The third and fourth statements of the lemma follow.
\end{proof}
\begin{proof}[Proof of Theorem \ref{survival}]
The proof is similar to the proof of Proposition 6 in \cite{bbs4}. Suppose the initial configuration $\nu_t$ is deterministic, and, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \rightarrow z \in (0, \infty)$ and $L_t(0) - R(0) \rightarrow \infty$ as $t \rightarrow \infty$. Let $\delta > 0$. Choose $\varepsilons > 0$, $y > 0$, and $u_0 > 0$ as in Lemma \ref{trilem}. By Theorem~\ref{CSBPthm}, for each fixed $u \geq u_0$, we have $$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(Z_t(\phi_t(u)) \leq \varepsilons) = P_z(\Xi(u) \leq \varepsilons).$$ Therefore, using the first and third statements in Lemma \ref{trilem}, we obtain for each fixed $u \geq u_0$,
$$\limsup_{t \rightarrow \infty} |\ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t) - P_z({\cal E})| \leq 6 \delta + \limsup_{t \rightarrow \infty} |\ensuremath{\mathbf{P}}_{\nu_t}(Z_t(\phi_t(u)) \leq \varepsilons) - P_z(\Xi(u) \leq \varepsilons)| = 6 \delta.$$ Since $\delta > 0$ was arbitrary, it follows that
\begin{equation}\label{zpositive}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t) = P_z({\cal E}) = e^{-\alpha z},
\end{equation}
which gives part~1 of Theorem \ref{survival} when each $\nu_t$ is deterministic and $z > 0$.
Next, suppose $\nu_t$ is deterministic and, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \rightarrow 0$ and $L_t(0) - R(0) \rightarrow \infty$ as $t \rightarrow \infty$. We may consider $t$ large enough that $0 < Z_t(0) < 1$. Let $\nu_t^*$ denote the initial configuration with $\lfloor 1/Z_t(0) \rfloor$ particles at the location of each particle in the configuration $\nu_t$. Then, adding a star to the notation when referring to the process started from $\nu_t^*$, we have $Z_t^*(0) \rightarrow 1$ as $t \rightarrow \infty$. Also, we have $L_t^*(0) - R^*(0) \rightarrow \infty$. Thus, we can apply (\ref{zpositive}) to get $$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t^*}(\zeta \leq t) = e^{-\alpha}.$$ Because the process started from $\nu_t^*$ goes extinct by time $t$ if and only if each of the $\lfloor 1/Z_t(0) \rfloor$ independent copies of the process started from $\nu_t$ goes extinct by time $t$, we have $$\ensuremath{\mathbf{P}}_{\nu_t^*}(\zeta \leq t) = (1 - \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t))^{\lfloor 1/Z_t(0) \rfloor}.$$ It follows that $\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t) \sim \alpha Z_t(0)$, which establishes part~2 of Theorem \ref{survival}. It follows that $\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t) = 1,$ so (\ref{zpositive}) also holds when $z = 0$.
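For completeness, here is the elementary step behind the asymptotic $\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t) \sim \alpha Z_t(0)$. Write $p_t = \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t)$, so that the two displays above give $(1 - p_t)^{\lfloor 1/Z_t(0) \rfloor} \rightarrow e^{-\alpha}$, and hence
$$\lfloor 1/Z_t(0) \rfloor \log(1 - p_t) \rightarrow -\alpha.$$
Since $\lfloor 1/Z_t(0) \rfloor \rightarrow \infty$, this forces $p_t \rightarrow 0$; then $\log(1 - p_t) \sim -p_t$ and $\lfloor 1/Z_t(0) \rfloor \sim 1/Z_t(0)$, so $p_t/Z_t(0) \rightarrow \alpha$, which is the claimed asymptotic.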
It remains only to establish part~1 of Theorem~\ref{survival} when the initial configuration of particles may be random. Consider an arbitrary subsequence of times $(t_n)_{n=1}^{\infty}$ tending to infinity. Because, under $\ensuremath{\mathbf{P}}_{\nu_{t_n}}$, we have $Z_{t_n}(0) \Rightarrow Z$ and $L_{t_n}(0) - R(0) \rightarrow_p \infty$, we can use Skorohod's Representation Theorem to construct the sequence of random initial configurations $(\nu_{t_n})_{n=1}^{\infty}$ on one probability space $(\Omega, {\cal F}, \ensuremath{\mathbf{P}})$ so that $Z_{t_n}(0) \rightarrow Z$ and $L_{t_n}(0) - R(0) \rightarrow \infty$ almost surely. Then, for $\ensuremath{\mathbf{P}}$-almost every $\omega \in \Omega$, we can apply the result (\ref{zpositive}) for deterministic initial configurations to get $$\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_{t_n}(\omega)}(\zeta \leq t_n) = e^{-\alpha Z(\omega)}.$$ Taking expectations of both sides and applying the Dominated Convergence Theorem gives $\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_{t_n}}(\zeta \leq t_n) = \ensuremath{\mathbf{E}}[e^{-\alpha Z}]$, which implies part~1 of Theorem~\ref{survival}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{survivex}]
The proof is similar to the proof of Theorem 1 in \cite{bbs4}. Recalling Lemma~\ref{neveuW}, we first start a branching Brownian motion with a single particle at zero and stop particles when they reach $-y$. Let $T_y$ be the time at which the last particle is killed at $-y$. Let $g: (0, \infty) \rightarrow (0, \infty)$ be an increasing function such that
\begin{equation}\label{Tygy}
\lim_{y \rightarrow \infty} \ensuremath{\mathbf{P}}(T_y > g(y)) = 0.
\end{equation}
Fix $x \in \ensuremath{\mathbb{R}}$, and let $t \mapsto y(t)$ be an increasing function which tends to infinity slowly enough that the following three conditions hold:
\begin{equation}\label{ycond}
\lim_{t \rightarrow \infty} y(t) = \infty, \hspace{.5in} \lim_{t \rightarrow \infty} \frac{y(t)}{L_t(0)} = 0, \hspace{.5in} \lim_{t \rightarrow \infty} t^{-2/3} g(y(t)) = 0.
\end{equation}
Now we begin a branching Brownian motion with a single particle at $L_t(0) + x$. Let $K_t$ denote the number of particles that reach $L_t(0) + x - y(t)$ before time $t$, if particles are stopped upon reaching this level. For the process to go extinct before time $t$, the descendants of each of these $K_t$ particles must go extinct before time $t$.
Let $w_1, \dots, w_{K_t}$ denote the times when these particles reach the level $L_t(0) + x - y(t)$. Then, $$\ensuremath{\mathbf{P}}_{L_t(0) + x}(\zeta \leq t) \leq \ensuremath{\mathbf{E}} \bigg[ \prod_{i=1}^{K_t} \ensuremath{\mathbf{P}}_{L_t(0) + x - y(t)}(\zeta \leq t - w_i) \bigg].$$ Let $\nu_t$ denote the random configuration with $K_t$ particles at the position $L_t(0) + x - y(t)$. Recall that $\ensuremath{\mathbf{P}}_{\nu_t}$ is an unconditional probability measure, and does not refer to conditional probability given the value of $\nu_t$. Then for $t$ large enough that $g(y(t)) < t$,
\begin{equation}\label{extsqueeze}
\ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t - g(y(t))) - \ensuremath{\mathbf{P}}(T_{y(t)} > g(y(t))) \leq \ensuremath{\mathbf{P}}_{L_t(0) + x}(\zeta \leq t) \leq \ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t).
\end{equation}
For the initial configuration $\nu_t$, we have
$$Z_t(0) = K_t L_t(0) \sin \bigg( \frac{\pi (L_t(0) + x - y(t))}{L_t(0)} \bigg) e^{x - y(t)}.$$
In view of the first two conditions in (\ref{ycond}), we have $$\sin \bigg( \frac{\pi (L_t(0) + x - y(t))}{L_t(0)} \bigg) \sim \frac{\pi y(t)}{L_t(0)},$$
where $\sim$ means that the ratio of the two sides tends to one as $t \rightarrow \infty$.
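Indeed, writing $\theta_t = \pi(y(t) - x)/L_t(0)$, we have
$$\sin \bigg( \frac{\pi (L_t(0) + x - y(t))}{L_t(0)} \bigg) = \sin(\pi - \theta_t) = \sin \theta_t \sim \theta_t = \frac{\pi (y(t) - x)}{L_t(0)} \sim \frac{\pi y(t)}{L_t(0)},$$
since $\theta_t \rightarrow 0$ by the second condition in (\ref{ycond}) and $y(t) \rightarrow \infty$ by the first.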
Also, by Lemma~\ref{neveuW}, the processes for all $t$ can be constructed on one probability space in such a way that
$y(t) e^{-y(t)} K_t \rightarrow W$ a.s., where $W$ is the random variable introduced in Lemma \ref{neveuW}. Therefore, as $t \rightarrow \infty$, we have $$Z_t(0) \rightarrow \pi e^x W \hspace{.1in}\textup{a.s.}$$ Also, $L_t(0) - R(0) = y(t) - x \rightarrow \infty$ as $t \rightarrow \infty$.
Thus, by Theorem \ref{survival},
\begin{equation}\label{sq1}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t) = \ensuremath{\mathbf{E}}[e^{-\alpha \pi e^x W}].
\end{equation}
For the lower bound, let $t' = t - g(y(t))$. By the third condition in (\ref{ycond}), we have $L_t(0) - L_{t'}(0) = c t^{1/3} - c (t - g(y(t)))^{1/3} \rightarrow 0$ as $t \rightarrow \infty$. Therefore, by repeating the arguments above, we see that as $t \rightarrow \infty$, we have $Z_{t'}(0) \rightarrow \pi e^x W$ and $L_{t'}(0) - R(0) \rightarrow \infty$ almost surely. Therefore,
\begin{equation}\label{sq2}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t}(\zeta \leq t - g(y(t))) = \ensuremath{\mathbf{E}}[e^{-\alpha \pi e^x W}].
\end{equation}
It follows from (\ref{Tygy}), (\ref{extsqueeze}), (\ref{sq1}), and (\ref{sq2}) that $$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{L_t(0) + x}(\zeta \leq t) = \ensuremath{\mathbf{E}}[e^{-\alpha \pi e^x W}],$$ which gives (\ref{convphi}). Finally, if we define $\psi$ as in Lemma \ref{neveuW} and $\phi(x) = \ensuremath{\mathbf{E}}[e^{-\alpha \pi e^x W}]$, then $\phi(x) = \psi(x + \log(\alpha \pi))$, so the properties of $\phi$ claimed in the statement of the theorem follow from Lemma \ref{neveuW}.
To prove (\ref{convphi2}), write $t'' = t + v t^{2/3}$. By differentiating, we get
\begin{equation}\label{tpt}
\lim_{t \rightarrow \infty} \big( L_{t''}(0) - L_t(0) \big) = \lim_{t \rightarrow \infty} \big( c(t + v t^{2/3})^{1/3} - c t^{1/3} \big) = \frac{cv}{3}.
\end{equation}
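For instance, the first-order expansion $(1+u)^{1/3} = 1 + u/3 + O(u^2)$ as $u \rightarrow 0$, applied with $u = v t^{-1/3}$, gives
$$c(t + vt^{2/3})^{1/3} = ct^{1/3}\big(1 + vt^{-1/3}\big)^{1/3} = ct^{1/3} + \frac{cv}{3} + O(t^{-1/3}),$$
which is the computation behind (\ref{tpt}).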
Using (\ref{convphi}), it follows that
$$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{L_t(0) + x}(\zeta \leq t'') = \lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{L_{t''}(0) + x - cv/3}(\zeta \leq t'') = \phi(x - cv/3),$$
as claimed.
\end{proof}
\begin{proof}[Proof of Proposition \ref{extinctiondist}]
Let $v > 0$. By Theorem \ref{survival}, as $t \rightarrow \infty$ we have $$\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t + v t^{2/3}\,|\,\zeta > t) = \frac{\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t + v t^{2/3})}{\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t)} \sim \frac{Z_{t + vt^{2/3}}(0)}{Z_t(0)}.$$
Note that here both $Z_{t+vt^{2/3}}(0)$ and $Z_t(0)$ are being evaluated under the same initial measure $\ensuremath{\mathbf{P}}_{\nu_t}$. Therefore, by (\ref{tpt}),
$$\lim_{t \rightarrow \infty} \frac{Z_{t + vt^{2/3}}(0)}{Z_t(0)} = \lim_{t \rightarrow \infty} e^{L_t(0) - L_{t + vt^{2/3}}(0)} = e^{-cv/3},$$
which gives the result.
\end{proof}
\begin{proof}[Proof of Theorem \ref{CSBPcond}]
We begin by following a similar strategy to the proof of part 2) of Theorem~\ref{survival}. Let $z > 0$. Let $\nu_t^*$ denote the initial configuration with $\lfloor z/Z_t(0) \rfloor$ particles at the location of each particle in the configuration $\nu_t$. Adding the star to the notation when considering the process started from $\nu_t^*$, we have $Z_t^*(0) \rightarrow z$ and $L_t(0) - R^*(0) \rightarrow \infty$ as $t \rightarrow \infty$. Equation (\ref{CSBPext}) and Theorem \ref{survival} give
\begin{equation}\label{starprob}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu^*_t}(\zeta > t) = 1 - e^{-\alpha z} = P_z({\cal E}^c).
\end{equation}
Also, by Theorem \ref{CSBPthm}, the finite-dimensional distributions of $(Z_t^*((1 - e^{-u})t), u \geq 0)$ converge as $t \rightarrow \infty$ to the finite-dimensional distributions of $(\Xi(u), u \geq 0)$ started from $\Xi(0) = z$.
Fix $k \in \ensuremath{\mathbb{N}}$ and times $0 \leq u_1 < \dots < u_k$. Let $\delta > 0$. Choose $\varepsilons > 0$, $y > 0$, and $u_0 > 0$ as in Lemma \ref{trilem}, and then fix $u \geq u_0$. Let $g: \ensuremath{\mathbb{R}}^k \rightarrow \ensuremath{\mathbb{R}}$ be bounded and uniformly continuous, and let $h: \ensuremath{\mathbb{R}}^+ \rightarrow [0,1]$ be a continuous nondecreasing function such that $h(x) = 0$ if $x \leq \varepsilons$ and $h(x) = 1$ if $x \geq y$.
By the convergence result stated at the end of the previous paragraph,
\begin{equation}\label{ghlim}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{E}}_{\nu_t^*}[g(Z_t^*(\phi_t(u_1)), \dots, Z_t^*(\phi_t(u_k)) )h(Z_t^*(\phi_t(u)))] = E_z[g(\Xi(u_1), \dots, \Xi(u_k)) h(\Xi(u))].
\end{equation}
Lemma \ref{trilem} implies that for sufficiently large $t$, we have
\begin{equation}\label{happ1}
\ensuremath{\mathbf{P}}_{\nu_t^*}(h(Z_t^*(\phi_t(u))) \neq \ensuremath{\mathbbm{1}}_{\{\zeta > t\}}) < 6 \delta
\end{equation}
and
\begin{equation}\label{happ2}
P_z(h(\Xi(u)) \neq \ensuremath{\mathbbm{1}}_{{\cal E}^c}) < 6 \delta.
\end{equation}
By combining (\ref{starprob}), (\ref{ghlim}), (\ref{happ1}), and (\ref{happ2}), we get
\begin{equation}\label{condlim}
\lim_{t \rightarrow \infty} \frac{\ensuremath{\mathbf{E}}_{\nu_t^*}[g(Z_t^*(\phi_t(u_1)), \dots, Z_t^*(\phi_t(u_k))) \ensuremath{\mathbbm{1}}_{\{\zeta > t\}}]}{\ensuremath{\mathbf{P}}_{\nu_t^*}(\zeta > t)} = \frac{E_{z}[g(\Xi(u_1), \dots, \Xi(u_k)) \ensuremath{\mathbbm{1}}_{{\cal E}^c}]}{P_{z}({\cal E}^c)},
\end{equation}
which means the finite-dimensional distributions of $(Z_t^*((1 - e^{-u})t), u \geq 0)$ conditional on $\zeta > t$ converge as $t \rightarrow \infty$ to the finite-dimensional distributions of $(\Xi(u), u \geq 0)$ started from $\Xi(0) = z$ and conditioned to go to infinity.
We now take a limit as $z \rightarrow 0$. We can write the branching Brownian motion started from $\nu_t^*$ as the sum of $\lfloor z/Z_t(0) \rfloor$ independent branching Brownian motions started from $\nu_t$. Let $N_{t,z}$ denote the number of these independent branching Brownian motions that have a descendant alive at time $t$. Conditioning on survival of the process until time $t$ is the same as conditioning on $N_{t,z} \geq 1$. Therefore, the process conditioned on survival until time $t$ can be constructed by summing three processes, in the following way.
\begin{enumerate}
\item The first process is branching Brownian motion started from $\nu_t$ conditioned on survival until time $t$.
\item Choose a random variable $M_{t,z}$ whose distribution is the conditional distribution of $N_{t,z}$ given $N_{t,z} \geq 1$. The second process is the sum of $M_{t,z} - 1$ independent branching Brownian motions started from $\nu_t$ conditioned on survival until time $t$.
\item The third process is the sum of $\lfloor z/Z_t(0) \rfloor - M_{t,z}$ independent branching Brownian motions started from $\nu_t$, each conditioned to go extinct before time $t$.
\end{enumerate}
We will denote the contributions from these three processes by $Z_t^{(1)}$, $Z_t^{(2)}$, and $Z_t^{(3)}$ and let $Z_t' = Z_t^{(1)} + Z_t^{(2)} + Z_t^{(3)}$. This means that the law of $(Z_t'(s), 0 \leq s < t)$ is the same as the conditional law of $(Z_t^*(s), 0 \leq s < t)$ given $\zeta > t$. Therefore, for all $t \geq 0$, we have
\begin{align}\label{ZZprime}
&\ensuremath{\mathbf{E}}[g(Z_t^{(1)}(\phi_t(u_1)), \dots, Z_t^{(1)}(\phi_t(u_k)))] \nonumber \\
&\hspace{.4in}= \frac{\ensuremath{\mathbf{E}}_{\nu_t^*}[g(Z_t^*(\phi_t(u_1)), \dots, Z_t^*(\phi_t(u_k))) \ensuremath{\mathbbm{1}}_{\{\zeta > t\}}]}{\ensuremath{\mathbf{P}}_{\nu_t^*}(\zeta > t)} \nonumber \\
&\hspace{.8in}+ \ensuremath{\mathbf{E}}\big[g(Z_t^{(1)}(\phi_t(u_1)), \dots, Z_t^{(1)}(\phi_t(u_k))) - g(Z_t'(\phi_t(u_1)), \dots, Z_t'(\phi_t(u_k)))\big].
\end{align}
Define $\|g\| = \sup_x |g(x)|$ and
$$w_g(\delta) = \sup\big\{|g(x_1, \dots, x_k) - g(y_1, \dots, y_k)|: |x_i - y_i| < \delta \mbox{ for all }i \in \{1, \dots, k\} \big\}.$$ Let $$p(z,t) = \ensuremath{\mathbf{P}}(Z_t^{(2)}(s) > 0 \mbox{ for some }s \geq 0)$$ and $$q(z,t,\delta) = \ensuremath{\mathbf{P}}(Z_t^{(3)}(\phi_t(u_i)) > \delta \mbox{ for some }i \in \{1, \dots, k\}).$$ Then, the absolute value of the second term on the right-hand side of (\ref{ZZprime}) is bounded above by
$$2 \|g\| (p(z,t) + q(z,t,\delta)) + w_g(\delta).$$
It is easy to see, for example by splitting the initial population into two groups of approximately equal size and applying (\ref{starprob}) with $z/2$ in place of $z$, that there is a positive constant $C$ such that for each $z > 0$, we have $\ensuremath{\mathbf{P}}_{\nu_t^*}(N_{t,z} \geq 2) \leq C z^2$ for sufficiently large $t$. Therefore,
\begin{equation}\label{pzlim}
\lim_{z \rightarrow 0} \lim_{t \rightarrow \infty} p(z,t) = \lim_{z \rightarrow 0} \lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_t^*}(N_{t,z} \geq 2 \,|\, N_{t,z} \geq 1) = 0.
\end{equation}
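For the quadratic bound above, here is a brief alternative sketch, using that $\lfloor z/Z_t(0) \rfloor \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t) \rightarrow \alpha z$, which follows as in the proof of part~2 of Theorem \ref{survival} when $\nu_t$ satisfies the hypotheses of that theorem. Since the $\lfloor z/Z_t(0) \rfloor$ copies are independent, a union bound over pairs gives, for sufficiently large $t$,
$$\ensuremath{\mathbf{P}}_{\nu_t^*}(N_{t,z} \geq 2) \leq \binom{\lfloor z/Z_t(0) \rfloor}{2} \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t)^2 \leq \frac{1}{2} \Big( \lfloor z/Z_t(0) \rfloor \, \ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t) \Big)^2 \leq \alpha^2 z^2.$$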
By Theorem \ref{CSBPthm}, the finite-dimensional distributions of $(Z_t^{(3)}((1 - e^{-u})t), u \geq 0)$, if the process were not being conditioned to go extinct, would converge as $t \rightarrow \infty$ to the finite-dimensional distributions of $(\Xi(u), u \geq 0)$ started from $\Xi(0) = z$. As $z \rightarrow 0$, the limiting extinction probability for the branching Brownian motion as $t \rightarrow \infty$ tends to one, while the process $(\Xi(u), u \geq 0)$ started from $\Xi(0) = z$ converges to the zero process. These observations imply that for all $\delta > 0$, we have
\begin{equation}\label{qzlim}
\lim_{z \rightarrow 0} \lim_{t \rightarrow \infty} q(z,t,\delta) = 0.
\end{equation}
From (\ref{pzlim}), (\ref{qzlim}), and the fact that $w_g(\delta) \rightarrow 0$ as $\delta \rightarrow 0$ by the uniform continuity of $g$, we obtain
$$\lim_{z \rightarrow 0} \lim_{t \rightarrow \infty} \ensuremath{\mathbf{E}}\big[g(Z_t^{(1)}(\phi_t(u_1)), \dots, Z_t^{(1)}(\phi_t(u_k))) - g(Z_t'(\phi_t(u_1)), \dots, Z_t'(\phi_t(u_k)))\big] = 0.$$
Finally, as noted in Section \ref{CSBPintro}, the finite-dimensional distributions of $(\Xi(u), u \geq 0)$ started from $\Xi(0) = z$ and conditioned on ${\cal E}^c$ converge as $z \rightarrow 0$ to the finite-dimensional distributions of $(\Phi(u), u \geq 0)$. Thus, by taking limits in (\ref{ZZprime}), observing that the left-hand side of (\ref{ZZprime}) does not depend on $z$, and applying (\ref{condlim}), we obtain
\begin{align*}
\lim_{t \rightarrow \infty} \ensuremath{\mathbf{E}}[g(Z_t^{(1)}(\phi_t(u_1)), \dots, Z_t^{(1)}(\phi_t(u_k)))] &= \lim_{z \rightarrow 0} \lim_{t \rightarrow \infty} \frac{\ensuremath{\mathbf{E}}_{\nu_t^*}[g(Z_t^*(\phi_t(u_1)), \dots, Z_t^*(\phi_t(u_k))) \ensuremath{\mathbbm{1}}_{\{\zeta > t\}}]}{\ensuremath{\mathbf{P}}_{\nu_t^*}(\zeta > t)} \\
&= \lim_{z \rightarrow 0} \frac{E_{z}[g(\Xi(u_1), \dots, \Xi(u_k)) \ensuremath{\mathbbm{1}}_{{\cal E}^c}]}{P_{z}({\cal E}^c)} \\
&= E[g(\Phi(u_1), \dots, \Phi(u_k))].
\end{align*}
The result follows.
\end{proof}
\section{Conditioning on survival}\label{18sec}
In this section, we prove our main results concerning the behavior of branching Brownian motion conditioned to survive for an unusually large time $t$, namely Theorems \ref{larges}, \ref{meds}, and \ref{condconfigprop}.
We will frequently need estimates on $z_t(x,0)$. Because $2x/\pi \leq \sin(x) = \sin(\pi - x) \leq x$ for all $x \in [0, \pi/2]$, we have
\begin{equation}\label{detzbound}
2 \min\{x, L_t(0) - x\} e^{x - L_t(0)} \leq z_t(x,0) \leq \pi \min\{x, L_t(0) - x\} e^{x - L_t(0)}
\end{equation}
for all $t > 0$ and $x \in [0, L_t(0)]$.
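In more detail, recall that $z_t(x,0) = L_t(0) \sin(\pi x/L_t(0)) e^{x - L_t(0)}$ for $x \in [0, L_t(0)]$. If $x \leq L_t(0)/2$, then $\pi x/L_t(0) \in [0, \pi/2]$ and the bounds above give
$$2x \leq L_t(0) \sin\bigg(\frac{\pi x}{L_t(0)}\bigg) \leq \pi x,$$
while if $x > L_t(0)/2$, then $\sin(\pi x/L_t(0)) = \sin(\pi (L_t(0) - x)/L_t(0))$ and the same bounds give
$$2(L_t(0) - x) \leq L_t(0) \sin\bigg(\frac{\pi x}{L_t(0)}\bigg) \leq \pi (L_t(0) - x).$$
Combining the two cases yields (\ref{detzbound}).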
Recall the definition of $T(s)$ from (\ref{Tdef}). The following result shows that $Z_{T(0)}(0)$ will be exactly $1/2$ as long as $T(0)$ is sufficiently large, and will allow us to prove Lemma \ref{tightextinct}.
\begin{lemma}\label{monotone}
Given any initial configuration of particles, the function $t \mapsto Z_t(0)$ is monotone decreasing on $\{t\ge0:L_t(0) \ge R(0) + 2\}$. Also, there is a positive number $t^*$ such that if $T(0) \geq t^*$, then $T(0)$ is the unique positive real number $t$ such that $L_t(0) \geq R(0) + 2$ and $Z_t(0) = 1/2$.
\end{lemma}
\begin{proof}
To prove the first claim, note that $$\frac{d}{dL} L e^{x-L} \sin \bigg( \frac{\pi x}{L} \bigg) = e^{x-L} \bigg[ (1 - L) \sin \bigg( \frac{\pi x}{L} \bigg) - \frac{\pi x}{L} \cos \bigg( \frac{\pi x}{L} \bigg) \bigg].$$ If $0 \leq x < L/2$, then both terms inside the brackets are negative when $L > 1$. Suppose instead $L/2 \leq x \leq L - 2$. Then $\sin(\pi x/L) \geq \sin(2 \pi/L) \geq 4/L$, so
$$\frac{d}{dL} L e^{x-L} \sin \bigg( \frac{\pi x}{L} \bigg) \leq e^{x-L} \bigg( \frac{4 (1 - L)}{L} + \frac{(L-2) \pi}{L} \bigg) < 0.$$
It follows that $t \mapsto Z_t(0)$ is a monotone decreasing function on $\{t\ge0:L_t(0) \ge R(0) + 2\}$. Therefore, either $L_{T(0)}(0) = R(0) + 2$, or $T(0)$ is the unique positive real number $t$ such that $L_t(0) \geq R(0) + 2$ and $Z_t(0) = 1/2$. Because $\lim_{t \rightarrow \infty} z_t(L_t(0) - 2, 0) = \lim_{t \rightarrow \infty} L_t(0) \sin(2\pi/L_t(0)) e^{-2} = 2 \pi/e^2 > 1/2$, the first possibility can be ruled out if $T(0)$ is sufficiently large, which completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Lemma \ref{tightextinct}]
It suffices to show that for any deterministic sequence of initial configurations $(\nu_n)_{n=1}^{\infty}$ such that $T(0) \rightarrow \infty$ and $L_{T(0)}(0) - R(0) \rightarrow \infty$ as $n \rightarrow \infty$, we have
\begin{align}
\lim_{k \rightarrow \infty} \limsup_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_n}(\zeta \leq T(0) - kT(0)^{2/3}) = 0, \label{limk1} \\
\lim_{k \rightarrow \infty} \liminf_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_n}(\zeta \leq T(0) + k T(0)^{2/3}) = 1. \label{limk2}
\end{align}
For $k \geq 0$, let $t_n$, $t_n^-(k)$ and $t_n^+(k)$ denote the values of $T(0)$, $T(0) - kT(0)^{2/3}$ and $T(0) + kT(0)^{2/3}$ respectively under $\ensuremath{\mathbf{P}}_{\nu_n}$.
Recall by (\ref{tpt}) that for every fixed $k$,
\begin{equation}
\label{tptadapted}
L_{t_n^-(k)}(0) = L_{t_n}(0) + O(1) = L_{t_n^+(k)}(0).
\end{equation}
Furthermore, by Lemma \ref{monotone}, we have $Z_{T(0)}(0) = 1/2$ under $\ensuremath{\mathbf{P}}_{\nu_n}$ for sufficiently large $n$. If $(x_n)_{n=1}^{\infty}$ is a sequence of positive numbers for which $L_{t_n}(0) - x_n \rightarrow \infty$, then using \eqref{tptadapted},
$$\lim_{n \rightarrow \infty} \frac{z_{t_n^-(k)}(x_n, 0)}{z_{t_n}(x_n, 0)} = \lim_{n \rightarrow \infty} \frac{L_{t_n^-(k)}(0) \sin\big(\frac{\pi x_n}{L_{t_n^-(k)}(0)}\big) e^{x_n - L_{t_n^-(k)}(0)}}{L_{t_n}(0) \sin\big(\frac{\pi x_n}{L_{t_n}(0)}\big) e^{x_n - L_{t_n}(0)}} = \lim_{n \rightarrow \infty} e^{L_{t_n}(0) - L_{t_n^-(k)}(0)} = e^{ck/3}.$$
From this calculation, and a similar calculation with $t_n^+(k)$ in place of $t_n^-(k)$, it follows that
\begin{equation}\label{zchange}
\lim_{n \rightarrow \infty} Z_{t_n^-(k)}(0) = \frac{e^{ck/3}}{2}, \hspace{.5in}\lim_{n \rightarrow \infty} Z_{t_n^+(k)}(0) = \frac{e^{-ck/3}}{2}.
\end{equation}
Because $L_{t_n}(0) - R(0) \rightarrow \infty$, it now follows from Theorem~\ref{survival} that
$$\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_n}(\zeta \leq t_n^-(k)) = e^{-(\alpha/2) e^{ck/3}}, \hspace{.5in} \lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_n}(\zeta \leq t_n^+(k)) = e^{-(\alpha/2) e^{-ck/3}},$$
which imply (\ref{limk1}) and (\ref{limk2}).
\end{proof}
\begin{lemma}\label{smallTlem}
Let $\varepsilons > 0$ and $K > 0$. Then there exists $t > 0$, depending on $\varepsilons$ and $K$, such that for all initial configurations $\nu$ for which $T(0) \leq K$ under $\ensuremath{\mathbf{P}}_\nu$, we have $\ensuremath{\mathbf{P}}_{\nu}(\zeta > t) < \varepsilons$.
\end{lemma}
\begin{proof}
Let $u \leq K \leq t$.
It follows from (\ref{detzbound}) that if $0 \leq x \leq \min\{L_u(0), L_t(0)\}$, then
$$\frac{z_t(x,0)}{z_u(x,0)} = \frac{L_t(0) \sin(\frac{\pi x}{L_t(0)}) e^{-L_t(0)}}{L_u(0) \sin(\frac{\pi x}{L_u(0)}) e^{-L_u(0)}} \leq \frac{\pi}{2} \cdot \frac{\min\{x, L_t(0) - x\}}{\min\{x, L_u(0) - x\}} \cdot e^{L_u(0) - L_t(0)}.$$ Consequently, if $x \leq L_u(0) - 2$, then
\begin{equation}\label{KZratio}
\frac{z_t(x,0)}{z_u(x,0)} \leq \frac{\pi}{2} \cdot \frac{L_t(0)}{2} \cdot e^{L_u(0) - L_t(0)} \leq \frac{\pi e^K}{4} \cdot L_t(0) e^{-L_t(0)}.
\end{equation}
By the definition of $T(0)$, we have $Z_{T(0)}(0) \leq 1/2$ and $R(0) \leq L_{T(0)}(0) - 2$. Therefore, we can choose $t$ sufficiently large that for all initial configurations $\nu$ for which $T(0) \leq K$ under $\ensuremath{\mathbf{P}}_\nu$, we have
$$Z_t(0) \leq Z_{T(0)}(0) \cdot \frac{\pi e^K}{4} \cdot L_t(0) e^{-L_t(0)} \leq \frac{\pi e^K}{8} \cdot L_t(0) e^{-L_t(0)} < \frac{\varepsilons}{C_2},$$ with $C_2$ the constant from Lemma \ref{survivalgen}. It follows from that lemma that the probability of survival until time $t$ is bounded above by $\varepsilons$, as claimed.
\end{proof}
We now work towards the proof of Lemma \ref{mainLRlemma}. To prepare for this proof, we record some bounds on the position of the right-most particle $R(t)$ in branching Brownian motion with absorption. For branching Brownian motion without absorption, Bramson \cite{bram83} considered this problem when $q = 0$. He showed that if $m_x(t)$ denotes the median of the distribution of $R(t)$ when we start with a single particle at $x$, then there is a positive constant $C$ such that for all $t \geq 1$, we have
\begin{equation}\label{mainBram}
\bigg|m_x(t) - \bigg(x - \frac{3}{2} \log t \bigg) \bigg| \leq C.
\end{equation}
Bramson also showed (see equation (8.17) of \cite{bram83}) that there is another positive constant $C'$ such that for all $x \in \ensuremath{\mathbb{R}}$, $t \geq 1$, and $y \geq 1$, we have $\ensuremath{\mathbf{P}}_x(R(t) > m_x(t) + y) \leq C' y e^{-y}$.
Combining this result with (\ref{mainBram}) and noting that absorption at zero can only reduce the likelihood that there is a particle above a certain level at time $t$, we get that for branching Brownian motion with absorption, there is a positive constant $C''$ such that for all $x > 0$, $t \geq 1$, and $y \geq 1$, we have
\begin{equation}\label{Bramabs}
\ensuremath{\mathbf{P}}_x\bigg(R(t) > x - \frac{3}{2} \log t + y \bigg) \leq C'' y e^{-y}.
\end{equation}
We now claim that (\ref{Bramabs}) holds even when $q > 0$. To see this, we construct the branching Brownian motion process in the following way. First, we define a branching Brownian motion process with no killing at the origin. If we ignore the spatial positions of the particles, this process is simply a continuous-time Galton-Watson process. Next, we color particles red if they have an infinite line of descent, and blue if all of their descendants eventually die out. It follows from results in \cite{gr} that the red particles form a continuous-time Galton-Watson process in which the offspring distribution still has finite variance but particles can never die. Furthermore, this process has the same growth rate as the original process. After coloring the particles red and blue, we again consider the spatial motion, which is independent of the branching structure, and add the killing at the origin by truncating paths once they hit the origin. Now the red particles form a branching Brownian motion whose offspring distribution satisfies $q = 0$, and so (\ref{Bramabs}) holds. Because, conditional on the configuration of particles at time $t$, each particle is red with probability $1-q$ and blue with probability $q$, the result (\ref{Bramabs}) must also hold for the original process that includes particles of both colors, after dividing the constant by $1 - q$.
We will also need an alternative bound when $x$ is small that allows us to take the absorption into account. For this, let $$V(s) = \sum_{u \in N_s} X_u(s) e^{X_u(s)}.$$ It is well-known (see, for example, Lemma 2 in \cite{hh07}) that $(V(s), s \geq 0)$ is a nonnegative martingale, and its value is at least $ye^y$ when there is a particle above $y$. Since $\ensuremath{\mathbf{E}}_x[V(t)] = V(0) = x e^x$, it follows from Markov's Inequality that
\begin{equation}\label{derMG}
\ensuremath{\mathbf{P}}_x(R(t) > y) \leq \ensuremath{\mathbf{P}}_x(V(t) \ge ye^y) \leq \frac{x}{y} e^{x - y}.
\end{equation}
\begin{proof}[Proof of Lemma \ref{mainLRlemma}]
Consider the set $N_0$ of particles at time zero. Rank the particles $u_1, u_2, \dots$ in decreasing order by position, so that $X_{u_1}(0) \geq X_{u_2}(0) \geq \dots$. Now construct an extension of the process in which the absorption is suppressed, so that the trajectories of particles continue past the origin. Let $G$ be the smallest integer $g$ such that the particle $u_g$ has descendants alive at time $d$ in this extended process.
Note that if $q_d$ denotes the probability that a Galton-Watson process with offspring distribution $(p_k)_{k=0}^{\infty}$ dies before time $d$, then $\ensuremath{\mathbf{P}}(G = k|\#N_0 \geq k) = q_d^{k-1}(1 - q_d)$.
Let $\nu^*$ denote the initial configuration consisting of the particles $u_i$ with $i \geq G$. Let ${\cal F}_0^*$ denote the $\sigma$-field generated by $N_0$ and $G$. Note that, conditional on ${\cal F}_0^*$, the descendants of the particles $u_i$ for $i \geq G+1$ behave as they would in the original branching Brownian motion process, while the descendants of the particle $u_G$ are conditioned to survive until time $d$ in the extended process.
Let $T^*(0)$ be defined as in (\ref{Tdef}) for the configuration $\nu^*$.
We will show that given $0 < \varepsilons < 1$ and $A > 0$, we can choose $d$ sufficiently large and then $t_0$ sufficiently large that
\begin{equation}\label{Rd}
\ensuremath{\mathbf{P}}_{\nu}(R(d) \geq L_{T^*(0)}(0) - 2A) < \frac{\varepsilons}{2}
\end{equation}
and
\begin{equation}\label{needL1}
\ensuremath{\mathbf{P}}_{\nu}\Big(\{L_{T(d)}(0) \leq L_{T^*(0)}(0) - A\} \cap \{T(d) \geq t_0\} \Big) < \frac{\varepsilons}{2}.
\end{equation}
These two results immediately imply the statement of the lemma.
We first show that (\ref{Rd}) holds if $d$ is sufficiently large. Let $N^*_0 = N_0 \setminus \{u_1, \dots, u_{G-1}\},$ and let $N^*_s$ denote the set of descendants of these particles alive at time $s$. Let $$\kappa = \frac{e^{2A}}{\varepsilons (1-q)}.$$ Let $S_1 = \{u \in N^*_0: L_{T^*(0)}(0) - X_u(0) \geq \kappa\}$ and $S_2 = N^*_0 \setminus S_1$. Let $Z_t^*(s)$ be defined as in (\ref{Zdef}), but summing only over particles in $N^*_s$. To bound the probability that some particle in $N_0^*$ has a descendant above $L_{T^*(0)}(0) - 2A$ at time $d$, we apply (\ref{derMG}) to particles in $S_1$ and (\ref{Bramabs}) to particles in $S_2$. The behavior of the descendants of the particle $u_G$ is affected by conditioning. However, because the probability that a continuous-time Galton-Watson process with branching rate $\beta$ and offspring distribution $(p_k)_{k=1}^{\infty}$ survives until time $d$ is greater than $1 - q$, we can apply the results (\ref{Bramabs}) and (\ref{derMG}) to all particles in our process if we divide the upper bounds there by $1 - q$.
Consider first the particles in $S_2$. Assume for now that $L_{T^*(0)}(0) \geq 2 \kappa$, so that all particles in $S_2$ are above $\frac{1}{2} L_{T^*(0)}(0)$. Using that $X_{u_G}(0) \leq L_{T^*(0)}(0) - 2$ by (\ref{Tdef}) as well as the lower bound in (\ref{detzbound}), we get $z_{T^*(0)}(X_u(0), 0) \geq 4 e^{-\kappa}$ for all $u \in S_2$. Because $Z^*_{T^*(0)}(0) \leq 1/2$, it follows that there can be at most $e^{\kappa}/8$ particles in $S_2$. In view of (\ref{Bramabs}), the probability that one of these particles has a descendant above $L_{T^*(0)}(0) - 2A$ at time $d$ tends to zero as $d \rightarrow \infty$. Therefore, given $\varepsilons$ and $A$, we can choose $d$ large enough to keep this probability below $\varepsilons/4$. Using also (\ref{derMG}) to handle the particles in $S_1$, we get that on $\{L_{T^*(0)}(0) \geq 2 \kappa\}$,
$$\ensuremath{\mathbf{P}}_{\nu}(R(d) \geq L_{T^*(0)}(0) - 2A\,|\,{\cal F}^*_0) < \frac{\varepsilons}{4} + \frac{1}{1-q} \sum_{u \in S_1} \frac{X_u(0) e^{X_u(0) - L_{T^*(0)}(0) + 2A}}{L_{T^*(0)}(0) - 2A}.$$
The lower bound in (\ref{detzbound}), applied separately when $x \leq \frac{1}{2} L_{T^*(0)}(0)$ and $x > \frac{1}{2}L_{T^*(0)}(0)$, yields
$$\sum_{u \in S_1} \frac{X_u(0) e^{X_u(0) - L_{T^*(0)}(0) + 2A}}{L_{T^*(0)}(0) - 2A} \leq \frac{e^{2A}}{2} \sum_{u \in S_1} \frac{z_{T^*(0)}(X_u(0), 0)}{L_{T^*(0)}(0) - 2A} \max \bigg\{1, \frac{X_u(0)}{L_{T^*(0)}(0) - X_u(0)} \bigg\}.$$
Recall that $L_{T^*(0)}(0) - X_u(0) \geq \kappa$ for all $u \in S_1$, and therefore using that $\kappa \geq 2A$, we also have $X_u(0) \leq L_{T^*(0)}(0) - 2A$ for all $u \in S_1$ and $L_{T^*(0)}(0) - 2A \geq \kappa$ on the event $\{L_{T^*(0)}(0) \geq 2 \kappa\}$. It follows that for all $u \in S_1$, we have
$$\frac{1}{L_{T^*(0)}(0) - 2A} \max \bigg\{1, \frac{X_u(0)}{L_{T^*(0)}(0) - X_u(0)} \bigg\} \leq \frac{1}{\kappa}.$$
Therefore,
$$\frac{1}{1-q} \sum_{u \in S_1} \frac{X_u(0) e^{X_u(0) - L_{T^*(0)}(0) + 2A}}{L_{T^*(0)}(0) - 2A} \leq \frac{\varepsilons Z^*_{T^*(0)}(0)}{2} \leq \frac{\varepsilons}{4},$$
and thus $$\ensuremath{\mathbf{P}}_{\nu}(R(d) \geq L_{T^*(0)}(0) - 2A\,|\, {\cal F}^*_0) < \frac{\varepsilons}{2}$$ on the event $\{L_{T^*(0)}(0) \geq 2 \kappa\}$. Lemma \ref{smallTlem} implies that we can choose $d$ large enough that $\ensuremath{\mathbf{P}}_{\nu}(R(d) \geq L_{T^*(0)}(0) - 2A\,|\, {\cal F}^*_0) \leq \ensuremath{\mathbf{P}}_{\nu}(\zeta > d\,|\,{\cal F}_0^*) < \varepsilons/2$ on the event $\{L_{T^*(0)}(0) < 2 \kappa\}$. It follows that (\ref{Rd}) holds when $d$ is chosen to be sufficiently large.
It remains to establish (\ref{needL1}). Choose $\delta > 0$ small enough that
\begin{equation}\label{deltadef}
\frac{2 \delta e^{C_8/2}}{1 - q + \delta} < \frac{\varepsilons}{2},
\end{equation}
where $C_8$ is the constant from (\ref{strongupper}). Let $k'$, $t'$, and $a'$ be the constants from Lemma \ref{tightextinct} with $\delta$ in place of $\varepsilons$. Choose $a_0$ large enough that $a_0 \geq a'$, $a_0 > ck'/6$, and $\phi(a_0) \leq q + \delta/2$, where $\phi$ is the function from Theorem \ref{survivex}. We will assume that $A \geq 2a_0$, which can be done because the statement of the lemma is weaker when $A < 2a_0$. Next, choose $d$ large enough that (\ref{Rd}) holds, and large enough that the probability that a continuous-time Galton-Watson process with branching rate $\beta$ and offspring distribution $(p_k)_{k=1}^{\infty}$ survives until time $d$ is at most $1 - q + \delta$. Finally, choose $t_0 > 0$ large enough that the following hold:
\begin{enumerate}
\item We have $t_0 \geq t'$.
\item We have $t - k't^{2/3} \geq d$ for all $t \geq t_0$.
\item If $x \geq a_0$ and $t \geq t_0$, then $\ensuremath{\mathbf{P}}_{L_t(0) + x}(\zeta \geq t + d) \geq 1 - q - \delta$. Note that Theorem \ref{survivex} and our assumption that $\phi(a_0) \leq q + \delta/2$ imply that $t_0$ can be chosen this way.
\item If $t \geq t_0$, then $ct^{1/3} - 2a_0 \leq c(t - k' t^{2/3} - d)^{1/3}$. Note that this is possible because $ct^{1/3} - c(t - k't^{2/3} - d)^{1/3} \sim ck'/3$ as $t \rightarrow \infty$, and $a_0 > ck'/6$.
\end{enumerate}
Let $T'$ be the time such that $L_{T'}(0) = L_{T^*(0)}(0) - A$. Our strategy will be to show that with high probability, the process will survive until time $T' + d$, which will preclude $T(d)$ from being too small. In particular, we claim that on $\{T' \geq t_0\}$, we have
\begin{equation}\label{Bsurvival}
\ensuremath{\mathbf{P}}_{\nu}(\zeta > T' + d\,|\,{\cal F}_0^*) \geq \frac{1 - q - \delta}{1 - q + \delta}.
\end{equation}
Assume for now that (\ref{Bsurvival}) holds. It follows that
\begin{equation}\label{intzeta1}
\ensuremath{\mathbf{P}}_{\nu}(\{\zeta \leq T' + d\} \cap \{T' \geq t_0\}) \leq \frac{2 \delta}{1 - q + \delta}.
\end{equation}
Because $Z_{d + T(d)}(d) \leq 1/2$ and $R(d) \leq L_{T(d)}(0) - 2$ by definition, it follows from (\ref{strongupper}) that $$\ensuremath{\mathbf{P}}_{\nu}(\zeta \leq T' + d\,|\,t_0 \leq T(d) \leq T') \geq \ensuremath{\mathbf{P}}_{\nu}(\zeta \leq T(d) + d\,|\,t_0 \leq T(d) \leq T') \geq e^{-C_8/2},$$ and therefore
\begin{align}\label{intzeta2}
\ensuremath{\mathbf{P}}_{\nu}(\{\zeta \leq T' + d\} \cap \{T' \geq t_0\}) &\geq \ensuremath{\mathbf{P}}_{\nu}(\{\zeta \leq T' + d\} \cap \{t_0 \leq T(d) \leq T'\}) \nonumber \\
&= \ensuremath{\mathbf{P}}_{\nu}(t_0 \leq T(d) \leq T') \ensuremath{\mathbf{P}}_{\nu}(\zeta \leq T' + d \,|\, t_0 \leq T(d) \leq T') \nonumber \\
&\geq e^{-C_8/2} \ensuremath{\mathbf{P}}_{\nu}(t_0 \leq T(d) \leq T').
\end{align}
From (\ref{intzeta1}), (\ref{intzeta2}), and (\ref{deltadef}), we get
$$\ensuremath{\mathbf{P}}_{\nu}(t_0 \leq T(d) \leq T') \leq \frac{2 \delta e^{C_8/2}}{1 - q + \delta} < \frac{\varepsilons}{2},$$
which by the definition of $T'$ is precisely (\ref{needL1}).
It remains to prove (\ref{Bsurvival}).
Let $B = \{X_{u_G}(0) \geq L_{T^*(0)}(0) - A/2\} \in {\cal F}_0^*$. On the event $B$, the particle $u_G$ begins above $L_{T'}(0) + A/2$.
Our choices of $a_0$ and $t_0$ ensure that as long as $A \geq 2 a_0$ and $T' \geq t_0$, the probability that a particle started at the position $X_{u_G}(0)$ has descendants alive at time $T' + d$ is at least $1 - q - \delta$. Also, our choice of $d$ ensures that the probability that, without absorption at zero, such a particle would have descendants alive until time $d$ is at most $1 - q + \delta$. Because our definition of $G$ entails conditioning on the latter event, and because the presence of other particles in the initial configuration can only increase the probability that the process survives beyond time $T' + d$, the inequality (\ref{Bsurvival}) holds on the event $B \cap \{T' \geq t_0\}$.
On the event $B^c$, the configuration $\nu^*$ has no particles above $L_{T^*(0)}(0) - A/2$. Then we can apply Lemma~\ref{tightextinct}, which implies that on the event $B^c \cap \{T^*(0) \geq t_0\}$ we have
\begin{equation}\label{Bcpre}
\ensuremath{\mathbf{P}}_{\nu}(\zeta \geq T^*(0) - k' T^*(0)^{2/3} \,|\, {\cal F}_0^*) > 1 - \delta.
\end{equation}
Note that this result holds even though, as noted at the beginning of the proof, conditioning on ${\cal F}_0^*$ means the descendants of the particle at $u_G$ are conditioned to survive until time $d$ in the extended process. This conditioning can only increase the chance that descendants of the particle at $u_G$ survive beyond time $T^*(0) - k' T^*(0)^{2/3}$ because, by our choice of $t_0$, particles can not survive this long if they die out before time $d$ even in the extended process. The fourth condition above on our choices of $a_0$ and $t_0$ guarantees that on the event $\{T^*(0) \geq t_0\}$, we have $T' + d < T^*(0) - k' T^*(0)^{2/3}$. Also, $(1 - q - \delta)/(1 - q + \delta) \leq 1 - \delta$, so (\ref{Bcpre}) implies that (\ref{Bsurvival}) holds also on $B^c \cap \{T^*(0) \geq t_0\}$, and therefore on $\{T' \geq t_0\}$.
\end{proof}
\begin{lemma}\label{MGsurvival}
Let $(\nu_n)_{n=1}^{\infty}$ be a sequence of deterministic initial configurations. Let $(s_n)_{n=1}^{\infty}$ and $(t_n)_{n=1}^{\infty}$ be sequences of times such that:
\begin{equation}\label{sntn}
\mbox{1) } 0 \leq s_n \leq t_n \mbox{ for all } n, \hspace{.3in}\mbox{2) } \lim_{n \rightarrow \infty} (t_n - s_n) = \infty, \hspace{.3in}\mbox{3) } \lim_{n \rightarrow \infty} s_n/t_n = 1.
\end{equation}
Suppose that, under $\ensuremath{\mathbf{P}}_{\nu_n}$, we have $Z_{t_n}(0) \rightarrow 0$ and $L_{t_n}(0) - R(0) \rightarrow \infty$ as $n \rightarrow \infty$. For $0 \leq u \leq t_n$, define
\begin{equation}\label{Wdef}
W_n(u) = \ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n \,|\, {\cal F}_u).
\end{equation}
Under the conditional probability measure $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \:|\,\zeta > t_n)$, we have $W_n(s_n) \rightarrow_p 1$ as $n \rightarrow \infty$. Moreover, for all $\varepsilons > 0$ and $a \in (0, 1)$, there exists $\delta > 0$ such that for sufficiently large $n$,
\begin{equation}\label{infW}
\ensuremath{\mathbf{P}}_{\nu_n}\Big( \inf_{at_n \leq u \leq t_n} W_n(u) \leq \delta \,\big|\, \zeta > t_n \Big) < \varepsilons.
\end{equation}
\end{lemma}
\begin{proof}
Suppose conditions 1), 2), and 3) hold.
Let $\varepsilons > 0$. Choose $m$ sufficiently large that $e^{-C_7 m} < \varepsilons^2$, where $C_7$ is the constant from Lemma \ref{survivalgen}. By Theorem \ref{CSBPcond}, conditional on $\zeta > t_n$, the finite-dimensional distributions of the processes $(Z_{t_n}((1 - e^{-u}) t_n), u \geq 0)$ converge as $n \rightarrow \infty$ to the finite-dimensional distributions of $(\Phi(u), u \geq 0)$, which is a continuous-state branching process started at zero and conditioned to go to infinity as $u \rightarrow \infty$. Therefore, we can choose $v \in (0, 1)$ sufficiently close to $1$ that
\begin{equation}\label{vsurvival}
\ensuremath{\mathbf{P}}_{\nu_n}(Z_{t_n}(vt_n) > m \,|\, \zeta > t_n) > 1 - \varepsilons
\end{equation}
for sufficiently large $n$. Lemma \ref{survivalgen} implies that $\ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n\,|\,{\cal F}_{vt_n}) \geq 1 - e^{-C_7 m} > 1 - \varepsilons^2$ on the event $\{Z_{t_n}(vt_n) > m\}$ for sufficiently large $n$. That is, we have $W_n(vt_n) > 1 - \varepsilons^2$ on $\{Z_{t_n}(vt_n) > m\}$ for sufficiently large $n$. Therefore, (\ref{vsurvival}) implies that for sufficiently large $n$, we have
\begin{equation}\label{Weq1}
\ensuremath{\mathbf{P}}_{\nu_n}(W_n(vt_n) > 1 - \varepsilons^2 \,|\, \zeta > t_n) > 1 - \varepsilons.
\end{equation}
Since $(W_n(u), 0 \leq u \leq t_n)$ is a $[0,1]$-valued martingale, it follows from the Optional Sampling Theorem that
\[
\ensuremath{\mathbf{P}}_{\nu_n}\Big( \inf_{vt_n \leq u \leq t_n} W_n(u) > 1 - \varepsilons \,\big| \, W_n(vt_n) > 1 - \varepsilons^2\Big) \geq 1 - \varepsilons.
\]
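Indeed, on the event $\{W_n(vt_n) > 1 - \varepsilons^2\}$, let $\tau = \inf\{u \geq vt_n: W_n(u) \leq 1 - \varepsilons\}$. The Optional Sampling Theorem gives
$$1 - \varepsilons^2 < W_n(vt_n) = \ensuremath{\mathbf{E}}_{\nu_n}[W_n(\tau \wedge t_n)\,|\,{\cal F}_{vt_n}] \leq (1 - \varepsilons) \ensuremath{\mathbf{P}}_{\nu_n}(\tau \leq t_n\,|\,{\cal F}_{vt_n}) + \ensuremath{\mathbf{P}}_{\nu_n}(\tau > t_n\,|\,{\cal F}_{vt_n}),$$
and rearranging yields $\ensuremath{\mathbf{P}}_{\nu_n}(\tau \leq t_n\,|\,{\cal F}_{vt_n}) < \varepsilons$ on this event.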
We claim that we also have,
\begin{equation}\label{Weq2}
\ensuremath{\mathbf{P}}_{\nu_n}\Big( \inf_{vt_n \leq u \leq t_n} W_n(u) > 1 - \varepsilons \,\big| \, \{W_n(vt_n) > 1 - \varepsilons^2\} \cap \{\zeta > t_n\}\Big) \geq 1 - \varepsilons.
\end{equation}
To see this, note that the further conditioning on the event $\{\zeta > t_n\} = \{W_n(t_n) = 1\}$ can only increase the probability that the martingale stays above $1 - \varepsilons$ because the martingale can not stay above $1 - \varepsilons$ between times $vt_n$ and $t_n$ on the event $\{\zeta > t_n\}^c = \{W_n(t_n) = 0\}$. From (\ref{sntn}), (\ref{Weq1}), and (\ref{Weq2}), we get that for sufficiently large $n$, $$\ensuremath{\mathbf{P}}_{\nu_n}(W_n(s_n) > 1 - \varepsilons \,|\, \zeta > t_n) \geq \ensuremath{\mathbf{P}}_{\nu_n}\Big(\inf_{vt_n \leq u \leq t_n} W_n(u) > 1 - \varepsilons \,\big|\, \zeta > t_n \Big) > (1 - \varepsilons)^2,$$ which immediately gives the first conclusion of the lemma when conditions 1), 2), and 3) hold.
It remains to prove (\ref{infW}). There exists $b > 0$ such that $P(\Phi(-\log(1-a)) > b) > 1 - \varepsilons/2$. Then Theorem \ref{CSBPcond} implies that
$$\ensuremath{\mathbf{P}}_{\nu_n}(Z_{t_n}(a t_n) > b \,|\, \zeta > t_n) > 1 - \frac{\varepsilons}{2}$$ for sufficiently large $n$. It follows from Lemma \ref{survivalgen} that, for sufficiently large $n$, we have $W_n(at_n) > 1 - e^{-C_7 b}$ on the event $\{Z_{t_n}(at_n) > b\}$, and therefore, writing $d = 1 - e^{-C_7 b} > 0$, we have
\begin{equation}\label{infW1}
\ensuremath{\mathbf{P}}_{\nu_n}(W_n(at_n) > d \,|\, \zeta > t_n) > 1 - \frac{\varepsilons}{2}.
\end{equation}
Let $\delta = d \varepsilons/2$, and let $D$ be the event that $\inf_{at_n \leq u \leq t_n} W_n(u) \leq \delta$. Using Bayes' Rule followed by the Optional Sampling Theorem, along with the trivial bound $\ensuremath{\mathbf{P}}_{\nu_n}( D \,|\, W_n(at_n) > d ) \leq 1$, we get
\begin{align}\label{infW2}
\ensuremath{\mathbf{P}}_{\nu_n} ( D \,|\, \{W_n(at_n) > d\} \cap \{\zeta > t_n\}) &= \frac{\ensuremath{\mathbf{P}}_{\nu_n}( D \,|\, W_n(at_n) > d ) \ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n\,|\, D \cap \{W_n(at_n) > d\})}{\ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n\,|\,W_n(at_n) > d)} \nonumber \\
&\leq \frac{\delta}{d}.
\end{align}
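In more detail, let $\sigma = \inf\{u \geq at_n: W_n(u) \leq \delta\}$, so that $\sigma \leq t_n$ and $W_n(\sigma) \leq \delta$ on $D$. Since $D \cap \{W_n(at_n) > d\}$ is measurable with respect to ${\cal F}_{\sigma \wedge t_n}$, the Optional Sampling Theorem gives
$$\ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n\,|\,D \cap \{W_n(at_n) > d\}) = \ensuremath{\mathbf{E}}_{\nu_n}[W_n(\sigma \wedge t_n)\,|\,D \cap \{W_n(at_n) > d\}] \leq \delta,$$
while $\ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n\,|\,W_n(at_n) > d) = \ensuremath{\mathbf{E}}_{\nu_n}[W_n(at_n)\,|\,W_n(at_n) > d] \geq d$, which are the two bounds used in (\ref{infW2}).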
It follows from (\ref{infW1}) and (\ref{infW2}) that for sufficiently large $n$,
$$\ensuremath{\mathbf{P}}_{\nu_n}(D^c\,|\,\zeta > t_n) \geq \ensuremath{\mathbf{P}}_{\nu_n}(W_n(at_n) > d\,|\,\zeta > t_n) \ensuremath{\mathbf{P}}_{\nu_n}(D^c\,|\,\{W_n(at_n) > d\} \cap \{\zeta > t_n\}) \geq \bigg(1 - \frac{\varepsilons}{2} \bigg)^2,$$
which implies (\ref{infW}).
\end{proof}
\begin{lemma}\label{gensequence}
Let $(\nu_n)_{n=1}^{\infty}$ be a sequence of deterministic initial configurations. Let $(s_n)_{n=1}^{\infty}$ and $(t_n)_{n=1}^{\infty}$ be sequences of times such that
\begin{equation}\label{sntn2}
\mbox{1) } 0 \leq s_n \leq t_n \mbox{ for all } n, \hspace{.3in}\mbox{2) } \lim_{n \rightarrow \infty} (t_n - s_n) = \infty, \hspace{.3in}\mbox{3) } \liminf_{n \rightarrow \infty} s_n/t_n > 0.
\end{equation}
Suppose, under $\ensuremath{\mathbf{P}}_{\nu_n}$, we have $Z_{t_n}(0) \rightarrow 0$ and $L_{t_n}(0) - R(0) \rightarrow \infty$ as $n \rightarrow \infty$. Then, under the conditional probability measure $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \:|\,\zeta > t_n)$, we have $T(s_n) \rightarrow_p \infty$ and $L_{T(s_n)}(0) - R(s_n) \rightarrow_p \infty$ as $n \rightarrow \infty$.
\end{lemma}
\begin{proof}
Let $\varepsilons > 0$ and $A > 0$. Define the martingale $(W_n(u), 0 \leq u \leq t_n)$ as in Lemma \ref{MGsurvival}. Choose $a > 0$ such that $\liminf_{n \rightarrow \infty} s_n/t_n > 2a$, and choose $\delta > 0$ such that (\ref{infW}) holds for sufficiently large $n$. It follows from (\ref{infW}) that
\begin{equation}\label{Wsn}
\ensuremath{\mathbf{P}}_{\nu_n} (W_n(s_n) \leq \,\delta \,|\, \zeta > t_n) < \varepsilons.
\end{equation}
By Lemma \ref{smallTlem} and the fact that $t_n - s_n \rightarrow \infty$, for any fixed $K > 0$, we have $W_n(s_n) < \delta$ on the event $\{T(s_n) \leq K\}$ for sufficiently large $n$. Therefore, for sufficiently large $n$, we have $\ensuremath{\mathbf{P}}_{\nu_n}(T(s_n) \leq K\,|\,\zeta > t_n) < \varepsilons$. It follows that $T(s_n) \rightarrow_p \infty$ as $n \rightarrow \infty$ under $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \:|\,\zeta > t_n)$.
Choose $d$ and $t_0$ as in Lemma \ref{mainLRlemma}, with $\delta \varepsilons$ playing the role of $\varepsilons$. Because $s_n - d > at_n$ for sufficiently large $n$, the reasoning that led to (\ref{Wsn}) also gives
\begin{equation}\label{Wsnd}
\ensuremath{\mathbf{P}}_{\nu_n} (W_n(s_n - d) \leq \,\delta \,|\, \zeta > t_n) < \varepsilons.
\end{equation}
By applying Lemma~\ref{mainLRlemma} with the configuration of particles at time $s_n - d$ playing the role of $\nu$, we get
$$\ensuremath{\mathbf{P}}_{\nu_n}(\{R(s_n) \geq L_{T(s_n)}(0) - A\} \cap \{T(s_n) \geq t_0\}\,|\,{\cal F}_{s_n - d}) < \delta \varepsilons.$$
In particular, because $\{W_n(s_n - d) > \delta\} \in {\cal F}_{s_n - d}$, we have
\begin{equation}\label{condW}
\ensuremath{\mathbf{P}}_{\nu_n}(\{R(s_n) \geq L_{T(s_n)}(0) - A\} \cap \{T(s_n) \geq t_0\}\,|\,W_n(s_n - d) > \delta) < \delta \varepsilons.
\end{equation}
Elementary probability results imply that if $B$, $C$, $D$, and $E$ are events, then
\begin{align*}
P(B|E) &\leq P(B \cap C \cap D|E) + P(C^c|E) + P(D^c|E) \\
&= P(B \cap C \cap E|D) \cdot \frac{P(D|E)}{P(E|D)} + P(C^c|E) + P(D^c|E).
\end{align*}
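To verify the last equality, note that
$$P(B \cap C \cap E|D) \cdot \frac{P(D|E)}{P(E|D)} = \frac{P(B \cap C \cap D \cap E)}{P(D)} \cdot \frac{P(D \cap E)}{P(E)} \cdot \frac{P(D)}{P(D \cap E)} = \frac{P(B \cap C \cap D \cap E)}{P(E)} = P(B \cap C \cap D|E).$$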
Now write $B = \{R(s_n) \geq L_{T(s_n)}(0) - A\}$, $C = \{T(s_n) \geq t_0\}$, $D = \{W_n(s_n - d) > \delta\}$, and $E = \{\zeta > t_n\}$. Note that $P(E|D) > \delta$ by definition, and $P(D|E) > 1 - \varepsilons$ by (\ref{Wsnd}). Also, $P(B \cap C \cap E|D) \leq \delta \varepsilons$ by (\ref{condW}), and $P(C^c|E) < \varepsilons$ for sufficiently large $n$ because we already know that $T(s_n) \rightarrow_p \infty$ as $n \rightarrow \infty$ under $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \: |\, \zeta > t_n)$. Thus, for sufficiently large $n$,
$$\ensuremath{\mathbf{P}}_{\nu_n}(R(s_n) \geq L_{T(s_n)}(0) - A\,|\, \zeta > t_n) < \delta \varepsilons \cdot \frac{1}{\delta} + 2 \varepsilons = 3 \varepsilons.$$ Because $\varepsilons > 0$ and $A > 0$ were arbitrary, it follows that $L_{T(s_n)}(0) - R(s_n) \rightarrow_p \infty$ under the conditional probability measure $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \:|\,\zeta > t_n)$ as $n \rightarrow \infty$.
\end{proof}
\begin{lemma}\label{Ttcompare}
Let $(\nu_n)_{n=1}^{\infty}$ be a sequence of deterministic initial configurations. Let $(t_n)_{n=1}^{\infty}$ be a sequence of times tending to infinity. Let $\delta > 0$, and let $(s_n)_{n=1}^{\infty}$ be a sequence of times such that $\delta t_n \leq s_n \leq (1 - \delta) t_n$ for all $n$. Suppose, under $\ensuremath{\mathbf{P}}_{\nu_n}$, we have $Z_{t_n}(0) \rightarrow 0$ and $L_{t_n}(0) - R(0) \rightarrow \infty$ as $n \rightarrow \infty$. Then, under the conditional probability measure $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \:|\,\zeta > t_n)$, we have $L_{t_n}(s_n) - R(s_n) \rightarrow_p \infty$.
\end{lemma}
\begin{proof}
We will show that for all $\varepsilons > 0$, there is a positive constant $C$, depending on $\delta$ and $\varepsilons$, such that
\begin{equation}\label{Ttnts}
\ensuremath{\mathbf{P}}_{\nu_n}(|T(s_n) - (t_n - s_n)| > C t_n^{2/3} \,|\, \zeta > t_n) < \varepsilons.
\end{equation}
Because $L_{T(s_n)}(0) - R(s_n) \rightarrow_p \infty$ under $\ensuremath{\mathbf{P}}_{\nu_n}( \: \cdot \:|\,\zeta > t_n)$ by Lemma \ref{gensequence}, we can see from (\ref{tpt}) that (\ref{Ttnts}) implies the result of the lemma.
By Lemma \ref{MGsurvival}, there exists $\eta > 0$ such that $\ensuremath{\mathbf{P}}_{\nu_n}(W_n(s_n) \leq \eta\,|\,\zeta > t_n) < \varepsilons/4$ for sufficiently large $n$. By Lemmas \ref{tightextinct} and \ref{gensequence} there is a constant $k'$ such that, if $H_n$ denotes the random variable $\ensuremath{\mathbf{P}}_{\nu_n}(|(\zeta - s_n) - T(s_n)| \leq k' T(s_n)^{2/3}\,|\,{\cal F}_{s_n})$, then $\ensuremath{\mathbf{P}}_{\nu_n}(H_n > 1 - \eta \varepsilons/4 \,|\, \zeta > t_n) \rightarrow 1$ as $n \rightarrow \infty$. Elementary probability results imply that if $B$, $C$, and $D$ are events, then
$$P(B|D) \leq P(C^c|D) + P(B|C \cap D) \leq P(C^c|D) + \frac{P(B|C)}{P(D|C)}.$$ By taking $B = \{|(\zeta - s_n) - T(s_n)| > k'T(s_n)^{2/3}\}$, $C = \{H_n > 1 - \eta \varepsilons/4\} \cap \{W_n(s_n) > \eta\} \in {\cal F}_{s_n}$, and $D = \{\zeta > t_n\}$, we get that for sufficiently large $n$,
\begin{equation}\label{Tt1}
\ensuremath{\mathbf{P}}_{\nu_n}(|(\zeta - s_n) - T(s_n)| > k'T(s_n)^{2/3} \,|\, \zeta > t_n) \leq \frac{\varepsilons}{4} + \frac{(\eta \varepsilons/4)}{\eta} = \frac{\varepsilons}{2}.
\end{equation}
Proposition \ref{extinctiondist} implies that there is another positive constant $k$ such that
\begin{equation}\label{Tt2}
\ensuremath{\mathbf{P}}_{\nu_n}(\zeta > t_n + kt_n^{2/3} \,|\, \zeta > t_n) < \varepsilons/4.
\end{equation}
Now (\ref{Tt1}) and (\ref{Tt2}) imply
\begin{equation}\label{Tt3}
\ensuremath{\mathbf{P}}_{\nu_n}(T(s_n) > (t_n - s_n) + kt_n^{2/3} + k'T(s_n)^{2/3} \,|\, \zeta > t_n) < 3 \varepsilons/4.
\end{equation}
To obtain the necessary lower bound on $T(s_n)$, first note that by Theorem \ref{CSBPcond} and the assumptions on $s_n$, there exists $\delta' > 0$ such that
\begin{equation}\label{Tt4}
\ensuremath{\mathbf{P}}_{\nu_n}(Z_{t_n}(s_n) < \delta'\,|\,\zeta > t_n) < \varepsilons/4
\end{equation}
for sufficiently large $n$. Enlarging $k$ if necessary (which does not affect (\ref{Tt2})), we may assume that $e^{-ck/3}/2 < \delta'$. Lemmas \ref{monotone} and \ref{gensequence} imply that under $\ensuremath{\mathbf{P}}_{\nu_n}(\: \cdot \: |\, \zeta > t_n)$, with probability tending to one as $n \rightarrow \infty$, we have $Z_{s_n + T(s_n)}(s_n) = 1/2$ and therefore, in view of (\ref{zchange}), we also have $Z_{s_n + T(s_n) + kT(s_n)^{2/3}}(s_n) < \delta'$ for sufficiently large $n$. On this event, by the monotonicity established in Lemma \ref{monotone}, if $t_n > s_n + T(s_n) + kT(s_n)^{2/3}$ then $Z_{t_n}(s_n) < \delta'$. Combining this observation with (\ref{Tt4}), we see that for sufficiently large $n$,
\begin{equation}\label{Tt5}
\ensuremath{\mathbf{P}}_{\nu_n}(T(s_n) < (t_n - s_n) - kT(s_n)^{2/3} \,|\, \zeta > t_n) < \varepsilons/4.
\end{equation}
Now (\ref{Ttnts}) can be deduced from (\ref{Tt3}) and (\ref{Tt5}), together with the observation that on the relevant events $T(s_n) = O(t_n)$, so that the terms $kT(s_n)^{2/3}$ and $k'T(s_n)^{2/3}$ are both $O(t_n^{2/3})$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{larges}]
If $t^{-2/3}(t - s) \rightarrow \sigma \geq 0$, then let $r = s - t^{2/3}$. If $t^{2/3} \ll t - s \ll t$, then let $r = 2s - t$, so that $s - r = t - s$. Throughout the proof, we will work under the conditional distribution $\ensuremath{\mathbf{P}}_{\nu_t}( \: \cdot \:|\, \zeta > t)$. We will repeatedly make use of the fact that $\ensuremath{\mathbf{P}}_{\nu_t}(\zeta > t\,|\,{\cal F}_r) \rightarrow_p 1$ as $t \rightarrow \infty$, by Lemma \ref{MGsurvival}. Indeed, this allows us to remove the conditioning when applying results (namely, Lemma \ref{tightextinct} and Proposition \ref{configpropnew}) with the particle configuration at time $r$ playing the role of the initial configuration.
We first claim that
\begin{equation}\label{Tzeta}
t^{-2/3}(T(r) - (\zeta - r)) \rightarrow_p 0 \hspace{.2in}\mbox{as }t \rightarrow \infty.
\end{equation}
Our choice of $r$ ensures that $1 \ll t - r \ll t$, and Proposition \ref{extinctiondist} states that
\begin{equation}
\label{eq:zetaV}
t^{-2/3}(\zeta - t) \Rightarrow \frac{3}{c}V.
\end{equation}
Combining these facts, we get
\begin{equation}\label{zetasmall}
t^{-1}(\zeta - r) \rightarrow_p 0 \hspace{.2in}\mbox{as }t \rightarrow \infty.
\end{equation}
On the other hand, Lemma \ref{gensequence} implies that $T(r) \rightarrow_p \infty$ and $L_{T(r)}(0) - R(r) \rightarrow_p \infty$ as $t \rightarrow \infty$. Then Lemma \ref{tightextinct} implies that for all $\varepsilons > 0$, there is a constant $k'$ such that
\begin{equation}\label{zetaTdiff}
\ensuremath{\mathbf{P}}_{\nu_t}(|(\zeta - r) - T(r)| \leq k' T(r)^{2/3} \,|\, \zeta > t) > 1 - \varepsilons
\end{equation}
for sufficiently large $t$. Now we see from (\ref{zetasmall}) and (\ref{zetaTdiff}) that $t^{-1}T(r) \rightarrow_p 0$ as $t \rightarrow \infty$, so that $T(r)^{2/3} = o_p(t^{2/3})$, and then another application of (\ref{zetaTdiff}) yields (\ref{Tzeta}) as claimed.
Now let $V_t = t^{-2/3}(T(r) - (t - r))$. It follows from (\ref{Tzeta}) and \eqref{eq:zetaV} that
\begin{equation}\label{Tudist}
V_t = t^{-2/3}(T(r) - (t - r)) \Rightarrow \frac{3}{c} V
\end{equation}
and
\begin{equation}\label{simcon1}
t^{-2/3}(\zeta - t) - V_t \rightarrow_p 0.
\end{equation}
We now apply Proposition~\ref{configpropnew} with the configuration of particles at time $r$ playing the role of the initial configuration of particles and the time $T(r)$ playing the role of $t_n$. The assumptions of Proposition~\ref{configpropnew} are satisfied, because, as mentioned above, $T(r) \rightarrow_p \infty$ and $L_{T(r)}(0) - R(r) \rightarrow_p \infty$ as $t \rightarrow \infty$, which implies that $Z_{r+T(r)}(r) \rightarrow_p 1/2$ as $t\rightarrow\infty$, by the definition of $T(r)$. If $t^{-2/3}(t - s) \rightarrow \sigma \geq 0$, then using (\ref{Tudist}), we have
\begin{equation}\label{strat1}
\frac{s - r}{T(r)} = \frac{t^{2/3}}{(t - s) + (s - r) + (T(r) - (t - r))} \Rightarrow \frac{1}{\sigma + 1 + \frac{3}{c}V}.
\end{equation}
The limiting random variable on the right-hand side is $(0,1)$-valued, so given $\varepsilons > 0$, we can find $\delta > 0$ such that $\ensuremath{\mathbf{P}}_{\nu_t}(\delta T(r) \leq s - r \leq (1 - \delta)T(r) \,|\, \zeta > t) > 1 - \varepsilons/2$.
If instead $t^{2/3} \ll t - s \ll t$, then using again \eqref{Tudist},
\begin{equation}\label{strat2}
\frac{s - r}{T(r)} = \frac{t - s}{2(t - s) + (T(r) - (t - r))} \Rightarrow \frac{1}{2}.
\end{equation}
It follows that in both cases, we can apply Proposition \ref{configpropnew} to get that if $\varepsilons > 0$, then for sufficiently large $t$ we have
\begin{equation}\label{Me1}
\ensuremath{\mathbf{P}}_{\nu_t} \bigg( \frac{C_{9}}{L_{T(r)}(s-r)^3} e^{L_{T(r)}(s-r)} \leq M(s) \leq \frac{C_{10}}{L_{T(r)}(s-r)^3} e^{L_{T(r)}(s-r)} \,\Big|\, \zeta > t \bigg) > 1 - \varepsilons
\end{equation}
and
\begin{equation}\label{Re1}
\ensuremath{\mathbf{P}}_{\nu_t} \big( L_{T(r)}(s-r) - \log T(r) - C_{11} \leq R(s) \leq L_{T(r)}(s-r) - \log T(r) + C_{12} \,\big|\, \zeta > t \big) > 1 - \varepsilons.
\end{equation}
We write that $W_t$ is $O_p(1)$ if for all $\varepsilons > 0$, there exists a positive real number $K$ such that $P(|W_t| \leq K) > 1 - \varepsilons$ for sufficiently large $t$, and we write that $W_t$ is $o_p(1)$ if $W_t \rightarrow_p 0$. Then, by (\ref{Tudist}) and (\ref{Me1}),
\begin{align*}
\log M(s) &= L_{T(r)}(s - r) - 3 \log L_{T(r)}(s - r) + O_p(1) \\
&= c(t - s + t^{2/3} V_t)^{1/3} - \log (t - s + t^{2/3} V_t) + O_p(1).
\end{align*}
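Here we used that $T(r) = t - r + t^{2/3} V_t$ by the definition of $V_t$, so that $L_{T(r)}(s-r) = c\big(T(r) - (s - r)\big)^{1/3} = c\big(t - s + t^{2/3} V_t\big)^{1/3}$ and $3 \log L_{T(r)}(s-r) = \log\big(t - s + t^{2/3} V_t\big) + 3 \log c$, the constant $3 \log c$ being absorbed into the $O_p(1)$ term.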
Likewise, by (\ref{Tudist}) and (\ref{Re1}),
\begin{align*}
R(s) &= c (t - s + t^{2/3} V_t)^{1/3} - \log (t - r + t^{2/3} V_t) + O_p(1).
\end{align*}
When $t^{-2/3}(t - s) \rightarrow \sigma \geq 0$, it follows that $$t^{-2/9} \log M(s) = c(\sigma + V_t)^{1/3} + o_p(1)$$ and $$t^{-2/9} R(s) = c(\sigma + V_t)^{1/3} + o_p(1).$$ These two results, combined with (\ref{Tudist}) and (\ref{simcon1}), give (\ref{J1as}).
When instead $t^{2/3} \ll t - s \ll t$, the Mean Value Theorem implies that for some random variable $\xi_t$ such that $0 \leq \xi_t \leq t^{2/3} V_t$, we have
$$\log M(s) = c (t - s)^{1/3} - \log(t - s) + \frac{c}{3} (t - s + \xi_t)^{-2/3} t^{2/3} V_t + O_p(1).$$
Because $(t - s + \xi_t)/(t - s) \rightarrow_p 1$, it follows that
$$\bigg( \frac{t - s}{t} \bigg)^{2/3} \big( \log M(s) - c(t - s)^{1/3} + \log(t - s) \big) = \frac{c}{3} V_t + o_p(1).$$ By the same reasoning, we get
$$\bigg( \frac{t - s}{t} \bigg)^{2/3} \big( R(s) - c(t - s)^{1/3} + \log(t - s) \big) = \frac{c}{3} V_t + o_p(1).$$
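Here we also used that $t - r = 2(t - s)$ in this regime, so that $\log(t - r + t^{2/3} V_t) - \log(t - s) = \log\big(2 + t^{2/3} V_t/(t - s)\big) = O_p(1)$, and the $O_p(1)$ terms disappear after multiplication by $\big((t-s)/t\big)^{2/3} \rightarrow 0$.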
These results, combined with (\ref{Tudist}) and (\ref{simcon1}), imply (\ref{J2as}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{meds}]
Consider a sequence of times $(t_n)_{n=1}^{\infty}$ tending to infinity, and choose $(s_n)_{n=1}^{\infty}$ such that $\delta t_n \leq s_n \leq (1 - \delta) t_n$ for all $n$. We will condition on $\zeta > t_n$ and then apply Proposition \ref{configpropnew} with the configuration of particles at time $\delta t_n/2$ playing the role of the initial configuration of particles. Because $P(0 < \Phi(u) < \infty) = 1$ for all $u > 0$, it follows from Theorem \ref{CSBPcond} that, under $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(\:\cdot\:|\,\zeta > t_n)$, the distributions of the sequences $(Z_{t_n}(\delta t_n/2))_{n=1}^{\infty}$ and $(Z_{t_n}(\delta t_n/2)^{-1})_{n=1}^{\infty}$ are tight. Lemma \ref{Ttcompare} implies that, under $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(\:\cdot\:|\,\zeta > t_n)$, we have $L_{t_n}(\delta t_n/2) - R(\delta t_n/2) \rightarrow_p \infty$ as $n \rightarrow \infty$. Therefore, the hypotheses of Proposition~\ref{configpropnew} are satisfied.
To deduce the result of Theorem \ref{meds} from Proposition \ref{configpropnew}, we need to show that the conclusions are unaffected by conditioning on $\zeta > t_n$. We proceed as in the proof of Lemma~\ref{Ttcompare}. By Lemma \ref{MGsurvival}, there exists $\eta > 0$ such that $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(W_n(\delta t_n/2) \leq \eta \,|\,\zeta > t_n) < \varepsilons/2$ for sufficiently large $n$. By Proposition \ref{configpropnew}, if we define the random variables $$H_n = \ensuremath{\mathbf{P}}_{\nu_{t_n}} \bigg( \frac{C_3}{L_{t_n}(s_n)^3} e^{L_{t_n}(s_n)} \leq M(s_n) \leq \frac{C_4}{L_{t_n}(s_n)^3} e^{L_{t_n}(s_n)} \,\bigg|\, {\cal F}_{\delta t_n/2} \bigg)$$ and
$$J_n = \ensuremath{\mathbf{P}}_{\nu_{t_n}}\big( L_{t_n}(s_n) - \log t_n - C_5 \leq R(s_n) \leq L_{t_n}(s_n) - \log t_n + C_6 \, \big| \, {\cal F}_{\delta t_n/2} \big),$$ then $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(H_n > 1 - \eta \varepsilons/2 \,|\, \zeta > t_n) \rightarrow 1$ and $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(J_n > 1 - \eta \varepsilons/2 \,|\, \zeta > t_n) \rightarrow 1$ as $n \rightarrow \infty$, provided that we choose the values of the constants so that (\ref{configconc1}) and (\ref{configconc2}) hold with $\eta \varepsilons/2$ in place of $\varepsilons$.
Following the steps in the derivation of (\ref{Tt1}) then yields the two conclusions in Theorem~\ref{meds}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{condconfigprop}]
Consider any sequence of times $(t_n)_{n=1}^{\infty}$ tending to infinity, and let $s_n$ be the value of $s$ associated with the time $t_n$. We first consider the case in which $t_n - s_n \ll t_n$.
Let $r_n = s_n - t_n^{2/3}$ if $t_n - s_n \leq t_n^{2/3}$, and let $r_n = 2s_n - t_n$ if $t_n - s_n \geq t_n^{2/3}$. Let $A_n^{\delta}$ be the event that $\delta T(r_n) \leq s_n - r_n \leq (1 - \delta) T(r_n)$. By the reasoning used to establish (\ref{strat1}) and (\ref{strat2}), we can see that for all $\varepsilons > 0$, there is a $\delta > 0$ such that $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(A_n^{\delta} \,|\, \zeta > t_n) > 1 - \varepsilons$ for sufficiently large $n$.
We apply Proposition \ref{configpropnew} with the configuration of particles at time $r_n$ playing the role of the initial configuration of particles, the time $T(r_n)$ playing the role of $t_n$, and $s_n - r_n$ playing the role of $s_n$. The result of part 3 of Proposition \ref{configpropnew} only applies on the event $A_n^{\delta}$. Therefore, we will define the probability measure $\chi_n^{\delta}$ to be equal to $\chi_{t_n}$ on the event $A_n^{\delta}$ and to be equal to $\mu$ otherwise. Likewise, we will define the probability measure $\eta_n^{\delta}$ in the same way as $\eta_{t_n}$, except with $L_{T(r_n)}(s_n - r_n)$ in place of $R(s_n)$ in the definition, on the event $A_n^{\delta}$. Otherwise, we define $\eta_n^{\delta}$ to be the probability measure $\xi$. Define $\eta_t^*$ to be the same as $\eta_t$, except with $L_{T(r_n)}(s_n - r_n)$ in place of $R(s_n)$ in the definition. Then part 3 of Proposition~\ref{configpropnew} implies that for all $\delta > 0$, we have $\chi_n^{\delta} \Rightarrow \mu$ and $\eta_n^{\delta} \Rightarrow \xi$ as $n \rightarrow \infty$. Note that Lemma \ref{MGsurvival} ensures that the conditioning on $\zeta > t$ does not affect the result when we apply Proposition~\ref{configpropnew}.
Therefore, letting $\rho$ denote the Prohorov metric on the space of probability measures on $\ensuremath{\mathbb{R}}$, we have $$\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_{t_n}}(\rho(\chi_n^{\delta}, \mu) > \varepsilons\,|\,\zeta > t_n) = 0, \hspace{.4in}\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_{t_n}}(\rho(\eta_n^{\delta}, \xi) > \varepsilons\,|\,\zeta > t_n) = 0.$$ Because $\ensuremath{\mathbf{P}}_{\nu_{t_n}}(A_n^{\delta} \,|\, \zeta > t_n) > 1 - \varepsilons$ for sufficiently large $n$, it follows that
$$\limsup_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_{t_n}}(\rho(\chi_{t_n}, \mu) > \varepsilons\,|\, \zeta > t_n) \leq \varepsilons, \hspace{.4in}\limsup_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_{t_n}}(\rho(\eta^*_{t_n}, \xi) > \varepsilons\,|\,\zeta > t_n) \leq \varepsilons.$$ Because $\varepsilons > 0$ is arbitrary, it follows that $\chi_t \Rightarrow \mu$ and $\eta_t^* \Rightarrow \xi$. Finally, $R(s)/L_{T(r)}(s - r) \rightarrow_p 1$ as $t \rightarrow \infty$ by part 2 of Proposition \ref{configpropnew}, so $\eta_t \Rightarrow \xi$, as claimed.
By a subsequence argument, it remains only to consider the case in which, for some $\delta > 0$, we have $\delta t_n \leq s_n \leq (1-\delta)t_n$ for all $n$. In this case, we can apply part 3 of Proposition \ref{configpropnew} with the configuration of particles at time $\delta t_n/2$ playing the role of the initial configuration of particles, as in the proof of Theorem \ref{meds}, to obtain the result. Because the limit distributions $\mu$ and $\xi$ are concentrated on a single measure, the result of Lemma \ref{MGsurvival} is again enough to ensure that the conditioning on $\zeta > t$ does not affect the conclusion.
\end{proof}
\section{Moment estimates}\label{momsec}
\subsection{Heat kernel estimates} \label{sec:heat_kernel}
First, consider a single Brownian particle which is killed when it reaches $0$ or $1$. Let $w_s(x,y)$ denote the ``density'' of the position of this particle at time $s$, meaning that if the Brownian particle starts at the position $x \in (0, 1)$ at time zero, then the probability that it is in the Borel subset $U$ of $(0,1)$ at time $s$ is $$\int_U w_s(x,y) \: dy.$$ It is well-known (see, for example, p. 188 of \cite{lawler}) that
\begin{align}
\label{wfourier}
w_s(x,y) &= 2 \sum_{n=1}^{\infty} e^{-\pi^2 n^2 s/2} \sin (n \pi x) \sin (n \pi y).
\end{align}
Equation \eqref{wfourier} yields that for every $x\in[0,1]$ and $s\ge0$,
\begin{equation}
\label{wsin}
\int_0^1 \sin(\pi y) w_s(x,y)\,dy = e^{-\pi^2 s/2}\sin(\pi x).
\end{equation}
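Indeed, for $s > 0$, \eqref{wsin} follows from \eqref{wfourier} and the orthogonality relation $\int_0^1 \sin(n \pi y) \sin(\pi y) \: dy = \frac{1}{2} \ensuremath{\mathbbm{1}}_{\{n = 1\}}$ for integers $n \geq 1$, so only the $n = 1$ term of the series contributes.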
Furthermore, by the reasoning in Lemma 5 of \cite{bbs2}, if we define
\begin{equation}\label{vdef}
v_s(x,y) = 2 e^{-\pi^2 s/2} \sin(\pi x) \sin(\pi y)
\end{equation}
and
\begin{equation}\label{Ddef}
D(s) = \sum_{n=2}^{\infty} n^2 e^{-\pi^2 (n^2 - 1) s/2},
\end{equation}
then
\begin{equation}\label{udef}
w_s(x,y) = v_s(x,y)(1 + D_s(x,y)),
\end{equation}
where $|D_s(x,y)| \leq D(s)$ for all $x, y \in (0, 1)$.
We further recall (see Lemma 7.1 of \cite{maillard}) that
\begin{align}
\label{wint}
\int_0^s e^{\pi^2 r/2}w_r(x,y)\,dr = 2s\sin(\pi x)\sin(\pi y) + O\big((x\wedge y)(1-(x\vee y))\big),
\end{align}
and
\begin{align}
\label{wprimeint}
\int_0^s e^{\pi^2 r/2}\left(-\frac 1 2 \partial_yw_r(x,1)\right)\,dr
&= \pi s\sin(\pi x) + O(x).
\end{align}
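For orientation, differentiating \eqref{wfourier} term by term gives
\[
-\frac 1 2 \partial_y w_r(x,1) = \pi \sum_{n=1}^{\infty} (-1)^{n+1} n e^{-\pi^2 n^2 r/2} \sin(n \pi x),
\]
and the $n = 1$ term, multiplied by $e^{\pi^2 r/2}$ and integrated over $r \in [0,s]$, already yields the main term $\pi s \sin(\pi x)$ in \eqref{wprimeint}; the $O(x)$ term accounts for the contribution of the remaining terms.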
We will also need the following two lemmas.
\begin{lemma}
\label{lem:wintunif}
For all $x \in (0,1)$ and $y \in (0, 1/2]$, we have
\[
\int_0^s e^{\pi^2 r/2}\sup_{y'\in[0,y]} w_r(x,y')\,dr = O(y(s\sin(\pi x)+(1-x))).
\]
\end{lemma}
\begin{proof}
For $r \ge 1$, we have by (\ref{vdef}) and (\ref{udef}),
\begin{equation}\label{wlarges}
\sup_{y'\in[0,y]} w_r(x,y') = O(e^{-\pi^2 r/2}\sin(\pi x)y).
\end{equation}
It therefore suffices to show that
\begin{equation}
\label{letsshowit}
\int_0^1 \sup_{y'\in[0,y]} w_r(x,y')\,dr = O(y(1-x)).
\end{equation}
We bound $w_r(x,y)$ by the heat kernel of Brownian motion killed at $0$, i.e.
\[
w_r(x,y) \le \frac 1 {\sqrt{2\pi r}} \left(e^{-\frac{(x-y)^2}{2r}} - e^{-\frac{(x+y)^2}{2r}}\right) = \frac 1 {\sqrt{2\pi r}} e^{-\frac{(x-y)^2}{2r}}(1-e^{-\frac{2xy}r}).
\]
Using the inequality $1 - e^{-z} \leq 1 \wedge z$ for $z \geq 0$, we get
\begin{equation}\label{wrbound}
w_r(x,y) \leq \frac{1}{\sqrt{2 \pi r}} e^{-(x - y)^2/2r} \left(1 \wedge \frac{2 x y}{r} \right).
\end{equation}
The first step in proving (\ref{letsshowit}) is to show the weaker statement
\begin{equation}
\label{letsshowit_weaker}
\int_0^1 \sup_{y'\in[0,y]} w_r(x,y')\,dr = O(y).
\end{equation}
To do this, we distinguish between two cases. When $x \leq 2y$, equation (\ref{wrbound}) gives
\[
\sup_{y'\in[0,y]}w_r(x,y') \le \frac 1 {\sqrt{2\pi r}} \left(1\wedge \frac {4y^2}{r}\right).
\]
Integrating over $r$ and changing variables by $r = y^2 u$, this gives
\begin{equation}
\label{intbound1}
\int_0^1 \sup_{y'\in[0,y]} w_r(x,y')\,dr \le y \int_0^\infty \frac 1 {\sqrt{2\pi u}} \left(1\wedge \frac {4}{u}\right)\,du = O(y),
\end{equation}
because the last integral converges. When $x > 2y$, we use that $x-y' \ge x/2$ for all $y'\le y$ to get
\[
\sup_{y'\in[0,y]} w_r(x,y') \le \frac {2xy} {\sqrt{2\pi}r^{3/2}} e^{-x^2/8r}.
\]
Integrating over $r$ and changing variables by $r = x^2 u$,
\begin{equation}
\label{intbound2}
\int_0^1 \sup_{y'\in[0,y]} w_r(x,y')\,dr \le 2 y \int_0^\infty \frac 1 {\sqrt{2\pi} u^{3/2}} e^{-1/8u}\,du = O(y),
\end{equation}
because the last integral converges.
Equations~\eqref{intbound1} and \eqref{intbound2} together yield \eqref{letsshowit_weaker}.
When $x \leq 3/4$, equation (\ref{letsshowit}) follows immediately from (\ref{letsshowit_weaker}). Therefore, it remains to show (\ref{letsshowit}) when $x \geq 3/4$. By symmetry, for all $x,y \in (0,1)$ and $r\ge0$, we have $w_r(x,y) = w_r(1-x, 1-y)$ and so, using (\ref{wrbound}) for the last step,
\begin{align*}
w_r(x,y) &= \int_0^1 w_{r/2}(x,z) w_{r/2}(z,y) \: dz \\
&\leq \sup_{z \in (0,1)} w_{r/2}(x,z) w_{r/2}(z,y) \\
&= \sup_{z \in (0,1)} w_{r/2}(1-x, 1-z) w_{r/2}(z,y) \\
&\leq \sup_{z\in (0,1)} \frac{1}{\sqrt{\pi r}} e^{-(x-z)^2/r} \left( 1 \wedge \frac{4(1-x)}{r} \right) \cdot \frac{1}{\sqrt{\pi r}} e^{-(z-y)^2/r} \left(1 \wedge \frac{4y}{r} \right).
\end{align*}
Now note that when $x \geq 3/4$ and $y \leq 1/2$, for all $z \in (0,1)$ we have either $(x-z)^2 \geq 1/64$ or $(y-z)^2 \geq 1/64$. Hence, for all $x \geq 3/4$ and $y \leq 1/2$, we have
\begin{align*}
w_r(x,y) \leq \frac{16\, y(1-x)}{\pi r^3}\, e^{-1/64r}.
\end{align*}
It follows that when $y \leq 1/2$, we have
$$\int_0^1 \sup_{y' \in [0, y]} w_r(x,y') \: dr \leq \frac{16\, y(1-x)}{\pi} \int_0^1 \frac{1}{r^3} e^{-1/64r} \: dr = O(y(1-x)),$$
because the integral converges.
\end{proof}
\begin{lemma}\label{wint2}
For all $x \in (0,1)$, we have $$\int_0^s e^{\pi^2 r/2} \int_0^1 w_r(x,y) \: dy \: dr = O(s \sin(\pi x) + (1-x)).$$
\end{lemma}
\begin{proof}
Exchanging integrals, this is an immediate consequence of \eqref{wint}.
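In detail, integrating \eqref{wint} over $y \in (0,1)$ and using that $\int_0^1 \sin(\pi y) \: dy = 2/\pi$ and that $(x \wedge y)(1 - (x \vee y)) \leq 1 - x$ for all $y \in (0,1)$, we get
$$\int_0^s e^{\pi^2 r/2} \int_0^1 w_r(x,y) \: dy \: dr = \frac{4s}{\pi} \sin(\pi x) + O(1 - x) = O(s \sin(\pi x) + (1-x)).$$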
\end{proof}
We now wish to estimate the density of the position of the Brownian particle at time $s$ when the particle is killed if it reaches either $0$ or $K(u)$ at some time $u \leq s$, where $K$ is a smooth positive function. That is, the right boundary at which the Brownian particle is killed moves over time. We will need somewhat sharper estimates than those provided in \cite{bbs}. To obtain such estimates, we will follow almost exactly the approach used by Roberts \cite{roberts}, which in turn was inspired by the work of Novikov \cite{nov}. We will use the following general lemma.
\begin{lemma}\label{densityK}
Let $T > 0$. Let $K: [0, T] \rightarrow (0, \infty)$ be twice differentiable. Let $x\in [0,K(0)]$. Let $(\Omega, {\cal F}, P)$ be a probability space and
$(B_s, s \geq 0)$ be Brownian motion started at $x$ on this space. For $s \in [0, T]$, let
\begin{equation}\label{rhodef}
\rho_s = \bigg( \frac{K(0)}{K(s)} \bigg)^{1/2} \exp \bigg( \frac{K'(s) B_s^2}{2K(s)} - \frac{K'(0)B_0^2}{2K(0)} - \int_0^s \frac{K''(u)B_u^2}{2K(u)} \: du \bigg)
\end{equation}
and
\begin{equation}\label{taudef}
\tau(s) = \int_0^s \frac{1}{K(u)^2} \: du.
\end{equation}
Then $(\rho_s)_{s\in[0,T]}$ is a martingale and under the measure $Q$ defined by $dQ/dP = \rho_T$, $(B_s)_{s\in[0,T]}$ is equal in law to $(K(s) W_{\tau(s)})_{s\in[0,T]}$, where $(W_u)_{u\ge0}$ is a Brownian motion started at $x/K(0)$. In particular, for all bounded measurable
functions $g: [0, 1] \rightarrow \ensuremath{\mathbb{R}}$ and all $s \in (0, T]$, we have
$$E \bigg[ \rho_s g \bigg( \frac{B_s}{K(s)} \bigg) \ensuremath{\mathbbm{1}}_{\{0 < B_u < K(u) \: \forall u \in [0, s]\}} \bigg] = \int_0^1 g(y) w_{\tau(s)}(\tfrac x {K(0)}, y) \: dy.$$
\end{lemma}
\begin{proof}
Denote by $({\cal G}_s, s \geq 0)$ the Brownian filtration, i.e.~the smallest complete, right-continuous filtration to which $(B_s, s\ge0)$ is adapted.
For $s \in [0, T]$, let
\begin{equation}\label{Xdef}
X_s = \frac{K(s)}{K(0)} x + K(s) \int_0^s \frac{1}{K(u)} \: dB_u.
\end{equation}
A short calculation gives
\begin{equation}\label{sdeX}
dX_s = \frac{K'(s)}{K(0)} x \: ds + K'(s) \left(\int_0^s \frac{1}{K(u)} \: dB_u\right) \: ds + dB_s = \frac{K'(s)}{K(s)} X_s \: ds + dB_s.
\end{equation}
That is, $(X_s, 0 \leq s \leq T)$ is a Brownian motion with a time and space dependent drift whose drift at time $s$ is given by $K'(s) X_s/K(s)$. For $s \in [0, T]$, let $$\gamma_s = \exp \bigg( \int_0^s \frac{K'(u) B_u}{K(u)} \: dB_u - \frac{1}{2} \int_0^s \frac{K'(u)^2 B_u^2}{K(u)^2} \: du \bigg).$$ We show below by an integration by parts argument that $\gamma_s = \rho_s$ for all $s\in[0,T]$, where $\rho_s$ is defined in (\ref{rhodef}), and assume this for the moment. Because $K$ is twice differentiable and positive on the compact interval $[0, T]$, the function $K'(u)/K(u)$ is bounded over $u \in [0, T]$, so it follows, for example, from Corollary 3.5.14 in \cite{ks} that the process $(\gamma_s, 0 \leq s \leq T)$ is a martingale. Therefore, we can define a new probability measure $Q$ on $(\Omega, {\cal F})$ such that for $s \in [0, T]$, we have $$\frac{dQ}{dP} \bigg|_{{\cal G}_s} = \gamma_s.$$ By Girsanov's Theorem, the law of the process $(B_s, 0 \leq s \leq T)$ under $Q$ is the same as the law of $(X_s, 0 \leq s \leq T)$ under $P$.
Furthermore, we can see from (\ref{Xdef}) that by a standard time-change argument due to Dambis, Dubins, and Schwarz (see, for example, Theorem 3.4.6 of \cite{ks}), we can write $$\frac{X_s}{K(s)} = W_{\tau(s)},$$ where $(W_s, s \geq 0)$ is a Brownian motion under $P$ with $W_0 = x/K(0)$ and $\tau(s)$ is given by (\ref{taudef}). This proves the first part of the lemma. In particular, if $g: [0, 1] \rightarrow \ensuremath{\mathbb{R}}$ is a bounded measurable function, then using $E$ to denote expectations under $P$ and $E_Q$ to denote expectations under $Q$, we have for $s \in [0, T]$,
\begin{align*}
E \bigg[ \gamma_s g \bigg( \frac{B_s}{K(s)} \bigg) \ensuremath{\mathbbm{1}}_{\{0 < B_u < K(u) \: \forall u \in [0, s]\}} \bigg] &= E_Q \bigg[ g \bigg( \frac{B_s}{K(s)} \bigg) \ensuremath{\mathbbm{1}}_{\{0 < B_u < K(u) \: \forall u \in [0, s]\}} \bigg] \nonumber \\
&= E \bigg[ g \bigg( \frac{X_s}{K(s)} \bigg) \ensuremath{\mathbbm{1}}_{\{0 < X_u < K(u) \: \forall u \in [0, s]\}} \bigg]\\
&= E\big[g(W_{\tau(s)}) \ensuremath{\mathbbm{1}}_{\{0 < W_u < 1 \: \forall u \in [0, \tau(s)]\}} \big] \\
&= \int_0^1 g(y) w_{\tau(s)}(\tfrac x {K(0)}, y) \: dy.
\end{align*}
To prove the lemma, it remains only to show that $\gamma_s = \rho_s$ for all $s \in [0, T]$. Observe that if we write $Z_s = K'(s)B_s/2K(s)$, then $$dZ_s = \frac{K'(s)}{2K(s)} \: dB_s + \bigg( \frac{K''(s)}{2K(s)} - \frac{K'(s)^2}{2K(s)^2} \bigg) B_s \: ds,$$ and therefore $$\langle B, Z \rangle_s = \int_0^s \frac{K'(u)}{2K(u)} \: du = \frac{1}{2} \log \bigg( \frac{K(s)}{K(0)} \bigg).$$ Integrating by parts gives
\begin{align*}
&\frac{K'(s) B_s^2}{2K(s)} - \frac{K'(0)B_0^2}{2K(0)} = Z_sB_s-Z_0B_0 = \int_0^sZ_udB_u + \int_0^s B_udZ_u + \langle B, Z \rangle_s\\
&\hspace{.1in}= \int_0^s \frac{K'(u) B_u}{2K(u)} \: dB_u + \int_0^s \frac{K'(u)B_u}{2K(u)} \: dB_u + \int_0^s \bigg( \frac{K''(u)}{2K(u)} - \frac{K'(u)^2}{2K(u)^2} \bigg) B_u^2 \: du + \frac{1}{2} \log \bigg( \frac{K(s)}{K(0)} \bigg),
\end{align*}
and rearranging this equation, we get that $\gamma_s = \rho_s$, as claimed.
\end{proof}
Next, for any fixed constant $A \geq 0$, define
\begin{equation}\label{LAdef}
L_{t,A}(s) = c(t - s)^{1/3} - A,
\end{equation}
where $c$ was defined in (\ref{Lcdef}).
We now consider the case in which $K(s) = L_{t,A}(s)$. Then $L_{t,A}(s)$ is defined for $s \in [0,t_A]$, with $t_A = t - (A/c)^3$. Suppose there is a single Brownian particle at $x \in (0, L_{t,A}(r))$, where $0 \leq r < s$, which is killed if it reaches $0$ or $L_{t,A}(u)$ at time $u \in (r, s]$. Let $q_{r,s}^A(x,y)$ denote the ``density" for the position of this particle at time $s$, meaning that the probability that the particle is in the Borel subset $U$ of $(0, L_{t,A}(s))$ at time $s$ is $$\int_U q_{r,s}^A(x,y) \: dy.$$ Define for $0\le r\le s < t_A$,
\begin{equation} \label{taursdef}
\tau_A(r,s) = \int_r^s \frac{1}{L_{t,A}(u)^2} \: du
\end{equation}
(we omit the parameter $t$ in the notation of $\tau_A$).
\begin{proposition}
\label{prop:qrs}
For $0\le r\le s< t_A$, $x\in[0,L_{t,A}(r)]$ and $y\in[0,L_{t,A}(s)]$, we have
$$q_{r,s}^A(x,y) = \frac{e^{O((t-s)^{-1/3})}}{(L_{t,A}(r) L_{t,A}(s))^{1/2}} \: w_{\tau_A(r,s)} \bigg( \frac{x}{L_{t,A}(r)}, \frac{y}{L_{t,A}(s)} \bigg).$$
\end{proposition}
\begin{proof}
Let $(B_u, u \geq r)$ denote Brownian motion started at $x$ at time $r$. Let
$$\rho_{r,s} = \bigg( \frac{L_{t,A}(r)}{L_{t,A}(s)} \bigg)^{1/2} \exp \bigg( \frac{L_{t,A}'(s) B_s^2}{2L_{t,A}(s)} - \frac{L_{t,A}'(r)B_r^2}{2L_{t,A}(r)} - \int_r^s \frac{L_{t,A}''(u)B_u^2}{2L_{t,A}(u)} \: du \bigg).$$
By Lemma \ref{densityK}, if $h: [0, L_{t,A}(s)] \rightarrow \ensuremath{\mathbb{R}}$ is a bounded measurable function, then
\begin{equation}\label{LAdensity}
E \big[ \rho_{r,s} h(B_s) \ensuremath{\mathbbm{1}}_{\{0 < B_u < L_{t,A}(u) \: \forall u \in [r, s]\}} \big] = \frac{1}{L_{t,A}(s)} \int_0^{L_{t,A}(s)} h(z) w_{\tau_A(r,s)}\bigg(\frac{x}{L_{t,A}(r)}, \frac{z}{L_{t,A}(s)} \bigg) \: dz.
\end{equation}
We have $$L_{t,A}'(s) = -\frac{c}{3} (t - s)^{-2/3}, \hspace{.3in}L_{t,A}''(s) = -\frac{2c}{9} (t - s)^{-5/3}.$$ On the event that $0 < B_u < L_{t,A}(u)$ for all $u \in [r, s]$, we have
\begin{align*}
&\bigg| \frac{L_{t,A}'(s) B_s^2}{2L_{t,A}(s)} - \frac{L_{t,A}'(r)B_r^2}{2L_{t,A}(r)} - \int_r^s \frac{L_{t,A}''(u) B_u^2}{2L_{t,A}(u)} \bigg| \\
&\hspace{.8in}\leq \bigg| \frac{L_{t,A}'(s) L_{t,A}(s)}{2} \bigg| + \bigg| \frac{L_{t,A}'(r) L_{t,A}(r)}{2} \bigg| + \frac{1}{2} \bigg| \int_r^s L_{t,A}''(u) L_{t,A}(u) \: du \bigg| \\
&\hspace{.8in}\leq C (t - s)^{-1/3}
\end{align*}
for some positive constant $C$. Therefore,
\begin{equation}\label{rhors}
\rho_{r,s} = \bigg( \frac{L_{t,A}(r)}{L_{t,A}(s)} \bigg)^{1/2}e^{O((t-s)^{-1/3})}.
\end{equation}
It now follows from (\ref{LAdensity}) and (\ref{rhors}) that $$E \big[ h(B_s) \ensuremath{\mathbbm{1}}_{\{0 < B_u < L_{t,A}(u) \: \forall u \in [r,s]\}} \big] = \frac{e^{O((t-s)^{-1/3})}}{(L_{t,A}(r) L_{t,A}(s))^{1/2}} \int_0^{L_{t,A}(s)} h(z) w_{\tau_A(r,s)} \bigg( \frac{x}{L_{t,A}(r)}, \frac{z}{L_{t,A}(s)} \bigg) \: dz.$$ This implies the result.
\end{proof}
\subsection{First moment estimates}
\label{sec:first_moment_estimates}
We now return to the original setting of the paper, in which each Brownian particle drifts to the left at rate $1$ and branching events, each producing an average of $m+1$ offspring, occur at rate $\beta = 1/(2m)$. Suppose there is a single particle at $x \in (0, L_{t,A}(r))$ at time $r$, where $0 \leq r < s$, and particles are killed if they reach $0$ or $L_{t,A}(u)$ at time $u \in (r, s]$. Let $p_{r,s}^A(x,y)$ denote the ``density'' for the process at time $s$, meaning that the expected number of particles in the Borel subset $U$ of $(0, L_{t,A}(s))$ at time $s$ is $$\int_U p_{r,s}^A(x,y) \: dy.$$ By Girsanov's Theorem, the addition of the drift multiplies the density by $e^{(x - y) - (s-r)/2}$, and by the Many-to-one Lemma, the branching multiplies the density by $e^{(s-r)/2}$. It follows that $$p_{r,s}^A(x,y) = e^{x-y} q_{r,s}^A(x,y).$$ In this section and the next one, we use this fact to estimate first and second moments of various quantities of this process.
Define $N_{s,A}$ to be the set of particles at time $s$ that stay below the curve $L_{t,A}$ until time $s$. We define
\begin{align*}
Z_{t,A}(s) &= \sum_{u\in N_{s,A}} z_{t,A}(X_u(s),s),\quad z_{t,A}(x,s) = L_{t,A}(s)\sin\left(\frac{\pi x}{L_{t,A}(s)}\right)e^{x-L_t(s)}\ensuremath{\mathbbm{1}}_{x\in[0,L_{t,A}(s)]},\\
Y_{t,A}(s) &= \sum_{u\in N_{s,A}} y_{t,A}(X_u(s),s),\quad y_{t,A}(x,s) = \tfrac x {L_{t,A}(s)} e^{x-L_t(s)}, \\
{\tilde Y}_{t,A}(s) &= \sum_{u\in N_{s,A}} {\tilde y}_{t,A}(X_u(s), s),\quad {\tilde y}_{t,A}(x,s) = e^{x-L_t(s)}.
\end{align*}
We also define $$y_t(x,s) = y_{t,0}(x,s), \quad {\tilde y}_t(x,s) = {\tilde y}_{t,0}(x,s).$$
Note that $Y_{t,A}(s) \leq {\tilde Y}_{t,A}(s)$. We further define $R_{t,A}(r,s)$, for $r\le s$, to be the number of particles absorbed at the curve $L_{t,A}$ between the times $r$ and $s$. The notation $\ensuremath{\mathbf{P}}_{(x,r)}$ and $\ensuremath{\mathbf{E}}_{(x,r)}$ denotes probabilities and expectations for our branching Brownian motion process started from a particle at the space-time point $(x,r)$.
We now collect a few estimates for $L_{t,A}(s)$ and $\tau_A(r,s)$, which were defined in \eqref{LAdef} and \eqref{taursdef} respectively. Recall that $t_A = t - (A/c)^3$, and define
\[
s_A = t - \left(\frac{2A}{c}\right)^3 \le t_A,
\]
so that $A/L_t(s) \le 1/2$ for every $s\le s_A$. It follows that for $s\le s_A$, we have
\begin{align}
\label{eq:LtA_Lt}
L_{t,A}(s) = L_t(s)e^{O(A(t-s)^{-1/3})}.
\end{align}
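Indeed, $L_{t,A}(s)/L_t(s) = 1 - A/L_t(s)$ with $A/L_t(s) \leq 1/2$ for $s \leq s_A$, and $|\log(1-u)| \leq 2u$ for $u \in [0, 1/2]$, so $\log\big(L_{t,A}(s)/L_t(s)\big) = O\big(A/L_t(s)\big) = O\big(A(t-s)^{-1/3}\big)$.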
Also, a simple calculation gives, for $r\le s\le s_A$,
\begin{align}
\tau_A(r, s) &= \int_r^s \frac{1}{c^2 (t - u)^{2/3}} \: du + \int_r^s \frac{2A}{c^3 (t - u)} \: du + O(A^2(t - s)^{-1/3}) \nonumber \\
&= \frac{3}{c^2} \big( (t - r)^{1/3} - (t - s)^{1/3} \big) + \frac{2A}{c^3} \log \bigg( \frac{t - r}{t-s} \bigg) + O(A^2(t - s)^{-1/3})\nonumber\\
\label{tauasym}
&= \frac 2 {\pi^2} \left(L_t(r) - L_t(s) + \frac {2 A}{3} \log \bigg( \frac{t - r}{t-s} \bigg) + O(A^2(t - s)^{-1/3})\right).
\end{align}
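In the last step we used that $L_t(u) = c(t-u)^{1/3}$ and that the constant $c$ from (\ref{Lcdef}) satisfies $c^3 = 3\pi^2/2$, so that $3/c^3 = 2/\pi^2$ and $2A/c^3 = (2/\pi^2) \cdot (2A/3)$.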
It follows that for $r\le s\le s_A$,
\begin{equation}\label{tauLexpasym}
e^{-\frac{\pi^2}2\tau_A(r,s)} = e^{L_t(s) - L_t(r) + O(A^2(t - s)^{-1/3})} \bigg( \frac{t - s}{t-r} \bigg)^{\frac{2A} 3}.
\end{equation}
Furthermore, since $L_{t,A}(s) \le L_t(s)$ for every $s\le t_A$, we get by definition and a simple calculation, for every $s\le t_A$ (in particular, every $s\le s_A$),
\begin{align}
\label{taucomparison}
\tau_A(r,s) \ge \tau_0(r,s) = \frac 2 {\pi^2} (L_t(r) - L_t(s)),
\end{align}
and also, by \eqref{eq:LtA_Lt} and the definition of $\tau_A$ from \eqref{taursdef}, for every $s\le s_A$,
\begin{align}
\label{taucomparisonweak}
\tau_A(r,s) = \tau_0(r,s)e^{O(A(t-s)^{-1/3})}.
\end{align}
\begin{lemma} We have for $r\le s\le s_A$ and $x\in [0,L_{t,A}(r)]$,
\label{lem:Zexp}
$$ \ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)] = e^{O((1 \vee A^2)(t - s)^{-1/3})} \left(\frac{t-s}{t-r}\right)^{\frac{2A} 3 + \frac 1 2} z_{t,A}(x,r).$$
\end{lemma}
\begin{proof}
By applying Proposition \ref{prop:qrs} followed by equations (\ref{wsin}) and (\ref{tauLexpasym}), we get
\begin{align*}
\ensuremath{\mathbf{E}}_{(x,r)}&[Z_{t,A}(s)] = \int_0^{L_{t,A}(s)} e^{x-y} q_{r,s}^A(x,y) z_{t,A}(y,s)\,dy\\
&=e^{O((t - s)^{-1/3})} \frac{L_{t,A}(s)^{1/2}}{L_{t,A}(r)^{1/2}} e^{x-L_t(s)} \int_0^{L_{t,A}(s)} \sin\bigg(\frac{\pi y}{L_{t,A}(s)} \bigg) w_{\tau_A(r,s)} \bigg( \frac{x}{L_{t,A}(r)}, \frac{y}{L_{t,A}(s)} \bigg)\,dy \\
&=e^{O((t - s)^{-1/3})} \frac{L_{t,A}(s)^{3/2}}{L_{t,A}(r)^{1/2}} e^{x-L_t(s)} e^{-\frac{\pi^2}2\tau_A(r,s)}\sin\bigg(\frac{\pi x}{L_{t,A}(r)} \bigg) \\
&=e^{O((1 \vee A^2)(t - s)^{-1/3})} \frac{L_{t,A}(s)^{3/2}}{L_{t,A}(r)^{3/2}} \bigg( \frac{t - s}{t-r} \bigg)^{\frac{2A} 3} z_{t,A}(x,r).
\end{align*}
The lemma follows, since by \eqref{eq:LtA_Lt} we have $L_{t,A}(s)^{3/2}/L_{t,A}(r)^{3/2} = \big((t-s)/(t-r)\big)^{1/2} e^{O(A(t-s)^{-1/3})}$.
\end{proof}
\begin{lemma}\label{lem:Yexp}
Let $\gamma > 0$. There exists a positive constant $C$, depending on $\gamma$, such that if $r\le s\le t_A$ and $\tau_A(r,s) \geq \gamma$, then for $x\in[0,L_{t,A}(r)]$,
$$\ensuremath{\mathbf{E}}_{(x,r)}[{\tilde Y}_{t,A}(s)] \leq C e^{O((t - s)^{-1/3})} \frac{z_{t,A}(x,r)}{L_{t,A}(r)}.$$
\end{lemma}
\begin{proof}
By Proposition \ref{prop:qrs},
\begin{align*}
\ensuremath{\mathbf{E}}_{(x,r)}[{\tilde Y}_{t,A}(s)] &= \int_0^{L_{t,A}(s)} e^{x-y} q^A_{r,s}(x,y) e^{y - L_t(s)} \: dy \\
&= \frac{e^{O((t - s)^{-1/3})} e^{x - L_t(s)}}{L_{t,A}(r)^{1/2} L_{t,A}(s)^{1/2}} \int_0^{L_{t,A}(s)} w_{\tau_A(r,s)} \bigg( \frac{x}{L_{t,A}(r)}, \frac{y}{L_{t,A}(s)} \bigg) \: dy.
\end{align*}
Because $\tau_A(r,s) \geq \gamma$, it follows from (\ref{vdef}) and (\ref{udef}) that
\begin{align*}
\ensuremath{\mathbf{E}}_{(x,r)}[{\tilde Y}_{t,A}(s)] &\leq \frac{C e^{O((t - s)^{-1/3})}e^{x - L_t(s)} e^{-\frac{\pi^2}{2} \tau_A(r,s)}}{L_{t,A}(r)^{1/2} L_{t,A}(s)^{1/2}} \int_0^{L_{t,A}(s)} \sin \bigg( \frac{\pi x}{L_{t,A}(r)} \bigg) \sin \bigg( \frac{\pi y}{L_{t,A}(s)} \bigg) \: dy \\
&\leq C e^{O((t - s)^{-1/3})}e^{x - L_t(s)} e^{-\frac{\pi^2}{2} \tau_A(r,s)} \bigg( \frac{L_{t,A}(s)}{L_{t,A}(r)} \bigg)^{1/2} \sin \bigg( \frac{\pi x}{L_{t,A}(r)}\bigg).
\end{align*}
Therefore, using \eqref{taucomparison} and the fact that $L_{t,A}$ is decreasing, we get
$$\ensuremath{\mathbf{E}}_{(x,r)}[{\tilde Y}_{t,A}(s)] \leq C e^{O((t - s)^{-1/3})} e^{x - L_t(r)} \sin \bigg( \frac{\pi x}{L_{t,A}(r)} \bigg) = C e^{O((t - s)^{-1/3})} \frac{z_{t,A}(x,r)}{L_{t,A}(r)},$$
as claimed.
\end{proof}
To calculate the first moment of $R_{t,A}$, we will use the following well-known result on the hitting time of a curve by a Brownian motion.
\begin{lemma}
\label{lem:heat_flow}
Let $b_+,b_-:\ensuremath{\mathbb{R}}_+\rightarrow\ensuremath{\mathbb{R}}$ be smooth functions and let $x \in (b_-(0),b_+(0))$. Let $u(y,s)$ be the density at position $y$ and time $s$ of Brownian motion started at $x$ and killed when hitting one of the curves $b_+$ and $b_-$. Let $H_+$ and $H_-$ denote the hitting times of the curves $b_+$ and $b_-$, respectively. Then
\[
\ensuremath{\mathbf{P}}_x(H_+ \in ds,\ H_+ < H_-) = - \frac 1 2 \partial_y u(y,s)\Big|_{y = b_+(s)}\, ds
\]
\end{lemma}
In words, Lemma~\ref{lem:heat_flow} says that the density at time $s$ of the hitting time of the boundary $b_+$ is equal to the heat flow of $u$ out of the boundary at time $s$. This result is so classical that it is difficult to find a complete proof of it in the literature. See e.g. \cite[p.~154, eq.~32]{ito_mckean} for an early appearance (without proof) in the case of constant boundaries and note that in our one-dimensional setting, one can easily reduce to this case by a suitable change of variables. For two different proof ideas, one more elegant, the other one more robust, both directly applicable for non-constant boundaries, one may consult \cite[Lemma~I.1.4]{lerche} and \cite[Section~3]{daniels}, respectively. For a general discussion of parabolic measure on the boundary of a space-time domain and its relation to hitting times, see \cite[Section 2.IX.13]{doob}. Lemma~\ref{lem:heat_flow} can also be deduced from the formula given in Section 1.XV.7 of that book. A more readable, but non-rigorous discussion in the time-homogeneous case can be found in \cite[Section 5.2.1]{gardiner}.
\begin{lemma}\label{lem:Rexp}
We have for $r\le s\le s_A$ and $x\in [0,L_{t,A}(r)]$,
\begin{equation}\label{Rexp1}
\begin{split}
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)] &\leq \pi e^{A + O((1 \vee A^2)(t - s)^{-1/3})} \left(\frac{\tau_0(r,s)}{L_t(r)} z_{t,A}(x,r) + O(y_{t,A}(x,r))\right)\\
&\leq \left(\frac{t-r}{t-s}\right)^{\frac{2A} 3 + \frac 1 6}\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)].
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:heat_flow} together with the many-to-one lemma, we get
\begin{equation}
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)] =\int_r^s \left(-\frac 1 2 \frac{d}{dy}p^A_{r,u}(x,y)\Big|_{y=L_{t,A}(u)}\right)\,du.
\label{eq:heat_flow}
\end{equation}
Equation (\ref{eq:heat_flow}) implies
\begin{align*}
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)] &=\int_r^s \left(-\frac 1 2 \frac{d}{dy}e^{x-y}q^A_{r,u}(x,y)\Big|_{y=L_{t,A}(u)}\right)\,du\\
&= \int_r^s e^{x-L_{t,A}(u)}\left(-\frac 1 2 \partial_y q^A_{r,u}(x,L_{t,A}(u))\right)\,du.
\end{align*}
Because $-\partial_y q^A_{r,u}(x,L_{t,A}(u)) = \lim_{y\uparrow L_{t,A}(u)} q^A_{r,u}(x,y)/(L_{t,A}(u)-y)$, the uniform bounds on $q^A_{r,u}(x,y)$ in Proposition~\ref{prop:qrs} directly turn into uniform bounds on its derivative at $y=L_{t,A}(u)$. Therefore,
$$\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)] = e^A e^{O((t - s)^{-1/3})} \int_r^s \frac 1 {L_{t,A}(r)^{1/2}L_{t,A}(u)^{3/2}} e^{x-L_t(u)} \left(-\frac 1 2 \partial_y w_{\tau_A(r,u)} \big( \tfrac{x}{L_{t,A}(r)}, 1 \big)\right)\,du.$$
Now (\ref{tauLexpasym}) and \eqref{eq:LtA_Lt} give
\begin{align}
\nonumber
&\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)] = e^A e^{O((1 \vee A^2)(t - s)^{-1/3})} e^{x-L_t(r)} \\
\label{Rexpeqtoshow1}
&\hspace{.3in} \times \int_r^s \frac{1}{L_{t,A}(u)^2} \bigg( \frac{t-u}{t-r} \bigg)^{\frac{2A} 3 + \frac 1 6} e^{\frac{\pi^2}2\tau_A(r,u)} \left(-\frac 1 2 \partial_y w_{\tau_A(r,u)} \big( \tfrac{x}{L_{t,A}(r)}, 1 \big)\right)\,du.
\end{align}
We claim that
\begin{align}
\nonumber
T &:= e^{x-L_t(r)}\int_r^s \frac{1}{L_{t,A}(u)^2} e^{\frac{\pi^2}2\tau_A(r,u)} \left(-\frac 1 2 \partial_y w_{\tau_A(r,u)} \big( \tfrac{x}{L_{t,A}(r)}, 1 \big)\right)\,du \\
\label{Rexpeqtoshow}
&= \pi \left(\frac{\tau_A(r,s)}{L_{t,A}(r)} z_{t,A}(x,r) + O(y_{t,A}(x,r))\right).
\end{align}
Then \eqref{Rexpeqtoshow1} and \eqref{Rexpeqtoshow}, along with \eqref{eq:LtA_Lt} and \eqref{taucomparisonweak},
imply the lemma because $\frac{t-u}{t-r} \le 1$ for every $u\in [r,s]$.
To prove the claim, we transform the integral in \eqref{Rexpeqtoshow} using the change of variables $\tau_A(r,u) = u'$ along with \eqref{taursdef}, to get
\begin{align*}
T= e^{x-L_t(r)} \int_0^{\tau_A(r,s)} e^{\frac{\pi^2}2 u'} \left(-\frac 1 2 \partial_y w_{u'} \big( \tfrac{x}{L_{t,A}(r)}, 1 \big)\right)\,du'.
\end{align*}
Equation~\eqref{wprimeint} now gives
\begin{align*}
T = \pi e^{x-L_t(r)} \left(\tau_A(r,s) \sin\big( \tfrac{\pi x}{L_{t,A}(r)}\big) + O(\tfrac{x}{L_{t,A}(r)})\right),
\end{align*}
which is exactly \eqref{Rexpeqtoshow}.
\end{proof}
\subsection{Second moment estimates}
\label{sec:second_moment_estimates}
\begin{lemma}
\label{lem:Zvar}
Let $\varepsilons$, $\gamma_1$, and $\gamma_2$ be positive numbers. Suppose $r \le s \leq (1 - \varepsilons) t\wedge s_A$. Suppose also that $\tau_A(r,s) \geq \gamma_1$ and $(1 \vee A^2)(t - s)^{-1/3} \leq \gamma_2$. Then
there exists a positive constant $C$, depending on $\varepsilons$, $\gamma_1$, and $\gamma_2$, such that
\[
\ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)^2] \leq C e^{-A} \left(\frac{\tau_0(r,s)}{L_t(r)} z_{t,A}(x,r) + y_{t,A}(x,r)\right).
\]
\end{lemma}
\begin{proof}
Let $m_2$ be the second factorial moment of the offspring distribution. Standard second moment calculations (see, for example, p. 146 of \cite{inw}) give
\begin{align}
\ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)^2] &= \ensuremath{\mathbf{E}}_{(x,r)}\left[\sum_{u\in N_{s,A}} z_{t,A}(X_u(s),s)^2\right] \nonumber \\
&\hspace{.3in}+ \beta m_2 \int_r^s \int_0^{L_{t,A}(u)} e^{x-y}q^A_{r,u}(x,y)\ensuremath{\mathbf{E}}_{(y,u)}[Z_{t,A}(s)]^2\,dy\, du \nonumber \\
\label{T1T2}
&=: T_1 + T_2.
\end{align}
We first bound the first term in \eqref{T1T2}. By Proposition \ref{prop:qrs},
$$T_1 \leq \frac{C}{(L_{t,A}(r) L_{t,A}(s))^{1/2}} \int_0^{L_{t,A}(s)} e^{x-y} w_{\tau_A(r,s)} \big( \tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(s)} \big) L_{t,A}(s)^2 \sin \big( \tfrac{\pi y}{L_{t,A}(s)} \big)^2 e^{2(y - L_t(s))} \: dy.$$
Now using (\ref{vdef}), (\ref{Ddef}), and (\ref{udef}), along with the fact that $\tau_A(r,s) \geq \gamma_1$, we get
$$T_1 \leq \frac{C L_{t,A}(s)^{3/2} e^{x}}{L_{t,A}(r)^{1/2}} \int_0^{L_{t,A}(s)} e^{-\frac{\pi^2}{2} \tau_A(r,s)} e^{y - 2 L_t(s)} \sin \big( \tfrac{\pi x}{L_{t,A}(r)} \big) \sin \big( \tfrac{\pi y}{L_{t,A}(s)} \big)^3 \: dy.$$
Using \eqref{taucomparison}, we get
\begin{align}\label{T1}
T_1 &\leq \frac{C L_{t,A}(s)^{3/2} e^{x - L_t(r)}}{L_{t,A}(r)^{1/2}} \sin \bigg( \frac{\pi x}{L_{t,A}(r)} \bigg) \int_0^{L_{t,A}(s)} e^{y-L_t(s)} \sin \big( \tfrac{\pi y}{L_{t,A}(s)} \big)^3 \: dy \nonumber \\
&\leq \frac{C e^{-A} z_{t,A}(x,r)}{L_{t,A}(r)^{3/2} L_{t,A}(s)^{3/2}}.
\end{align}
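For the last inequality in \eqref{T1} we used that $e^{x - L_t(r)} \sin ( \pi x / L_{t,A}(r) ) = z_{t,A}(x,r)/L_{t,A}(r)$ and that, since $\sin \big( \tfrac{\pi y}{L_{t,A}(s)} \big)^3 \leq \pi^3 (L_{t,A}(s) - y)^3/L_{t,A}(s)^3$ and $e^{y - L_t(s)} = e^{-A} e^{y - L_{t,A}(s)}$,
$$\int_0^{L_{t,A}(s)} e^{y - L_t(s)} \sin \big( \tfrac{\pi y}{L_{t,A}(s)} \big)^3 \: dy \leq \frac{\pi^3 e^{-A}}{L_{t,A}(s)^3} \int_0^{\infty} u^3 e^{-u} \: du.$$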
We now bound the term $T_2$ in \eqref{T1T2}.
By Proposition~\ref{prop:qrs} and Lemma~\ref{lem:Zexp},
\begin{align*}
T_2 &\leq C \int_r^s \int_0^{L_{t,A}(u)} \frac{e^{x-y}}{L_{t,A}(r)^{1/2}L_{t,A}(u)^{1/2}}w_{\tau_A(r,u)}\big( \tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(u)} \big)z_{t,A}(y,u)^2\,dy\,du.
\end{align*}
Applying the inequality $z_{t,A}(y,u) \le \pi(L_{t,A}(u)-y)e^{y-L_t(u)}$ and using that $L_{t,A}$ is decreasing and that $L_{t,A}\le L_t$ gives
\begin{align*}
T_2 \leq CL_t(r)\int_r^s \int_0^{L_{t,A}(u)} \frac{e^{x-L_t(u)+y-L_{t,A}(u) - A}}{L_{t,A}(u)^2}w_{\tau_A(r,u)}\big( \tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(u)} \big)(L_{t,A}(u)-y)^2\,dy\,du.
\end{align*}
Changing variables $y \mapsto L_{t,A}(u) - y$, and using the equality $w_u(x',y') = w_u(1-x',1-y')$ for all $x',y'\in[0,1]$ together with \eqref{taucomparison} gives
\begin{equation}\label{T2prelim}
T_2 \leq CL_t(r)e^{x-L_t(r)-A} \int_r^s \frac{e^{\frac{\pi^2}{2}\tau_A(r,u)}}{L_{t,A}(u)^2} \int_0^{L_{t,A}(u)} y^2e^{-y} w_{\tau_A(r,u)}\big(1-\tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(u)}\big)\,dy \,du.
\end{equation}
Now making the additional change of variables $\tau_A(r,u) \mapsto u$, using \eqref{taursdef}, and letting $h(u)$ be the number such that $\tau_A(r, h(u)) = u$, we get
$$T_2 \leq CL_t(r) e^{x-L_t(r)-A} \int_0^{\tau_A(r,s)} e^{\pi^2 u/2} \int_0^{L_{t,A}(h(u))} y^2 e^{-y} w_u \big(1-\tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(h(u))}\big)\,dy \,du.$$
We now split the inner integral into two pieces and use Tonelli's Theorem and the fact that $L_{t,A}$ is decreasing for the first piece to get
\begin{align}\label{T3T4}
T_2 &\leq Ce^{x-L_t(r)-A} L_t(r) \int_0^{\tau_A(r,s)} e^{\pi^2 u/2} \int_0^{\frac{1}{2}L_{t,A}(s)} y^2 e^{-y} w_u \big(1-\tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(h(u))}\big)\,dy \,du \nonumber \\
&\hspace{.2in}+ Ce^{x-L_t(r)-A} L_t(r) \int_0^{\tau_A(r,s)} e^{\pi^2 u/2} \int_{\frac{1}{2} L_{t,A}(s)}^{L_{t,A}(h(u))} y^2 e^{-y} w_u \big(1-\tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(h(u))}\big)\,dy \,du \nonumber \\
&\leq Ce^{x-L_t(r)-A} L_t(r) \int_0^{\frac{1}{2} L_{t,A}(s)} y^2 e^{-y} \int_0^{\tau_A(r,s)} e^{\pi^2 u/2} \sup_{y' \in [0, y/L_{t,A}(s)]} w_u\big(1-\tfrac{x}{L_{t,A}(r)}, y' \big)\,du \,dy \nonumber \\
&\hspace{.2in} + Ce^{x-L_t(r)-A} L_t(r)^3 e^{-\frac{1}{2} L_{t,A}(s)} \int_0^{\tau_A(r,s)} e^{\pi^2 u/2} \int_0^{L_{t,A}(h(u))} w_u \big(1-\tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(h(u))}\big)\,dy \,du \nonumber \\
&=: T_3 + T_4.
\end{align}
By Lemma \ref{lem:wintunif}, and then using \eqref{eq:LtA_Lt} and the assumptions on $s$ (in particular that $s \leq (1 - \varepsilons)t$),
\begin{align}\label{T3}
T_3 &\leq \frac{C e^{x - L_t(r) - A} L_t(r)}{L_{t,A}(s)} \left[\tau_A(r,s) \sin\left(\tfrac{\pi x}{L_{t,A}(r)}\right) + \tfrac x {L_{t,A}(r)}\right] \int_0^{\infty} y^3 e^{-y} \: dy \nonumber \\
&\leq C e^{-A} \left(\frac{\tau_A(r,s)}{L_t(r)} z_{t,A}(x,r) + y_{t,A}(x,r)\right).
\end{align}
By Lemma \ref{wint2}, and using again \eqref{eq:LtA_Lt} and the assumptions on $s$,
\begin{align}\label{T4}
T_4 &\leq Ce^{x-L_t(r)-A} L_t(r)^4 e^{-\frac{1}{2} L_{t,A}(s)} \left[\tau_A(r,s) \sin\left(\tfrac{\pi x}{L_{t,A}(r)}\right) + \tfrac x {L_{t,A}(r)}\right] \nonumber \\
&\leq C e^{-A} \left(\frac{\tau_A(r,s)}{L_t(r)} z_{t,A}(x,r) + y_{t,A}(x,r)\right).
\end{align}
The lemma now follows from \eqref{T1T2}, \eqref{T1}, \eqref{T3T4}, \eqref{T3}, and \eqref{T4}, together with \eqref{taucomparisonweak}.
\end{proof}
\begin{lemma}
\label{lem:Rvar}
Let $\varepsilons$, $\gamma_1$, and $\gamma_2$ be positive numbers. Suppose $r \le s \leq (1 - \varepsilons) t\wedge s_A$. Suppose also that $\tau_A(r,s) \geq \gamma_1$ and $(1 \vee A^2)(t - s)^{-1/3} \leq \gamma_2$. Then
there exists a positive constant $C$, depending on $\varepsilons$, $\gamma_1$, and $\gamma_2$, such that
\[
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)^2] \le Ce^A\left(\frac{\tau_0(r,s)}{L_t(r)} z_{t,A}(x,r) + y_{t,A}(x,r)\right).
\]
\end{lemma}
\begin{proof}
As in the proof of Lemma~\ref{lem:Zvar}, we have
\begin{align}
\nonumber
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)^2] &= \ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s)] + \beta m_2 \int_r^s \int_0^{L_{t,A}(u)} e^{x-y}q^A_{r,u}(x,y)(\ensuremath{\mathbf{E}}_{(y,u)}[R_{t,A}(u,s)])^2\,dy\,du\\
\label{R_T1T2}
&=: T_1 + T_2.
\end{align}
In view of (\ref{Rexp1}), it only remains to bound $T_2$. For every $u\in[r,s]$ and $y\in [0,L_{t,A}(u)]$, we get, using Lemma~\ref{lem:Rexp} and the fact that $\tau_0(u,s) \leq C L_t(u)$ when $s \leq (1 - \varepsilons)t$,
\begin{align*}
(\ensuremath{\mathbf{E}}_{(y,u)}[R_{t,A}(u,s)])^2 &\leq C e^{2A}\left(\frac{\tau_0(u,s)}{L_t(u)} z_{t,A}(y,u)+y_{t,A}(y,u)\right)^2 \\
&\leq C e^{2A} \big( z_{t,A}(y,u)^2+y_{t,A}(y,u)^2 \big) \\
&\leq C e^{2A} \big( (L_{t,A}(u) - y)^2 e^{2(y - L_t(u))} + e^{2(y - L_t(u))} \big) \\
&= C e^{-2(L_{t,A}(u) - y)} \big((L_{t,A}(u) - y)^2 + 1 \big).
\end{align*}
Plugging this into \eqref{R_T1T2} and using Proposition \ref{prop:qrs}, we get
\begin{align*}
T_2 &\leq C \int_r^s \int_0^{L_{t,A}(u)} \frac{e^{x-y}}{L_{t,A}(r)^{1/2} L_{t,A}(u)^{1/2}} w_{\tau_A(r,u)}\big( \tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(u)} \big) \\
&\hspace{2in} \times e^{-2(L_{t,A}(u) - y)} \big((L_{t,A}(u) - y)^2 + 1 \big) \: dy \: du.
\end{align*}
Now making the change of variables $y \mapsto L_{t,A}(u) - y$, using that $w_u(x',y') = w_u(1 - x', 1 - y')$, and then using (\ref{taucomparison}) as in the proof of Lemma~\ref{lem:Zvar}, we get
\begin{align*}
T_2 &\leq C L_t(r) \int_r^s \int_0^{L_{t,A}(u)} \frac{e^{x + y - L_{t,A}(u)}}{L_{t,A}(u)^2} w_{\tau_A(r,u)}\big( 1 - \tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(u)} \big) e^{-2y}(y^2 + 1) \: dy \: du \\
&\leq C L_t(r) e^{x - L_t(r) + A} \int_r^s \frac{e^{\frac{\pi^2}{2} \tau_A(r,u)}}{L_{t,A}(u)^2} \int_0^{L_{t,A}(u)} w_{\tau_A(r,u)}\big(1 - \tfrac{x}{L_{t,A}(r)}, \tfrac{y}{L_{t,A}(u)} \big) e^{-y}(y^2 + 1) \: dy \: du.
\end{align*}
Note that this expression is identical to the expression in (\ref{T2prelim}) except that the sign of $A$ in the exponential in front of the integral is reversed, and we have $y^2 + 1$ in place of $y^2$ in the integrand. Consequently, we can follow the same steps as in the proof of Lemma~\ref{lem:Zvar} to obtain
$$T_2 \leq C e^A \left(\frac{\tau_0(r,s)}{L_t(r)} z_{t,A}(x,r) + y_{t,A}(x,r)\right),$$
which completes the proof of the lemma.
\end{proof}
\section{Particle configurations}\label{configsec}
Our goal in this section is to deduce Proposition \ref{configpropnew} from results in \cite{bbs3}.
The strategy of the proofs in \cite{bbs3} is to show that if at time zero there is a single particle at $x > 0$, then for all $\kappa > 0$, the configuration of particles at time $\kappa t^{2/3}$ will satisfy certain conditions. The rest of the proofs then use only what has been established about the configuration of particles at time $\kappa t^{2/3}$. Consequently, the results in \cite{bbs3} immediately extend to any initial configuration of particles for which these conditions hold at time $\kappa t^{2/3}$. This observation yields Lemma \ref{configlemma} below. We define
$${\tilde Y}_t(s) = \sum_{u\in N_s} {\tilde y}_t(X_u(s),s),$$
which is similar to ${\tilde Y}_{t,A}(s)$ defined at the beginning of Section \ref{sec:first_moment_estimates}, except that here particles are only killed at the origin and not at the curve $L_{t,A}$.
\begin{lemma}\label{configlemma}
Suppose we have a sequence of possibly random initial configurations $(\nu_n)_{n=1}^{\infty}$ such that the following conditions hold for a corresponding sequence of times $(t_n)_{n=1}^{\infty}$:
\begin{enumerate}
\item The times $t_n$ do not depend on the evolution of the branching Brownian motion after time zero, and $t_n \rightarrow_p \infty$ as $n \rightarrow \infty$.
\item For all $\varepsilons > 0$ and $\kappa > 0$, there is a positive constant $C_{13}$, depending on $\varepsilons$ and $\kappa$, such that for sufficiently large $n$,
\begin{equation}\label{configasm2}
\ensuremath{\mathbf{P}}_{\nu_n} \bigg( {\tilde Y}_{t_n}(\kappa t_n^{2/3}) \leq \frac{C_{13}}{L_{t_n}(\kappa t_n^{2/3})} \bigg) > 1 - \varepsilons.
\end{equation}
\item For all $\varepsilons > 0$ and $\kappa > 0$, there are positive constants $C_{14}$ and $C_{15}$, depending on $\varepsilons$ and $\kappa$, such that for sufficiently large $n$,
\begin{equation}\label{configasm3}
\ensuremath{\mathbf{P}}_{\nu_n}(C_{14} \leq Z_{t_n}(\kappa t_n^{2/3}) \leq C_{15}) > 1 - \varepsilons.
\end{equation}
\item For all $\kappa > 0$ and $A \geq 0$, we have
\begin{equation}\label{configasm4}
\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_n}\big(R(\kappa t_n^{2/3}) < L_{t_n}(\kappa t_n^{2/3}) - A \big) = 1.
\end{equation}
\end{enumerate}
Let $0 < \delta < 1/2$. Then the three conclusions of Proposition \ref{configpropnew} hold.
\end{lemma}
\begin{proof}
This lemma essentially restates the results of \cite{bbs3} in the context of the present paper. The second, third, and fourth conditions that we require of the sequences $(\nu_n)_{n=1}^{\infty}$ and $(t_n)_{n=1}^{\infty}$ are the three conclusions of Lemma 15 of \cite{bbs3}, while the first condition, that $t_n \rightarrow \infty$ in probability, corresponds to the condition in \cite{bbs3} that the position $x$ of the initial particle tends to infinity. The first conclusion of Proposition \ref{configpropnew} is Theorem 1 of \cite{bbs3}, the second is Theorem 2 in \cite{bbs3}, and the third is a combination of Theorems 3 and 4 in \cite{bbs3}. The conclusions of Proposition \ref{configpropnew} therefore hold because these four theorems in \cite{bbs3} are deduced from Lemma 15 in \cite{bbs3}. When $q = 0$, the following adaptations are required to obtain the result in the present context:
\begin{itemize}
\item In \cite{bbs3}, the branching rate is $1$ and the drift is $-\sqrt{2}$. However, it is straightforward to translate results into our setting by a simple scaling.
\item Lemma 15 of \cite{bbs3} includes a stronger form of (\ref{configasm3}), in which the bounds are proved when the term $\sin(\pi x/L_t(s))$ in the definition of $Z_t(s)$ from (\ref{Zdef}) is replaced by $\sin(\pi x/(L_t(0) + \alpha))$ for any $\alpha \in \ensuremath{\mathbb{R}}$. However, we have $$|(L_t(0) + \alpha) - L_t(\kappa t^{2/3})| \leq C(\kappa + |\alpha|)$$ for some positive constant $C$, so the ratio of the two sine terms will be bounded above and below by positive constants with high probability as long as (\ref{configasm4}) holds and $t_n \rightarrow \infty$ in probability. Therefore, establishing (\ref{configasm3}) is sufficient.
\item Theorems 2, 3, and 4 of \cite{bbs3} are stated for the case when $s = ut$ for some $u \in (0, 1)$. However, it is not hard to see that the proof extends to the case where $s \sim ut$ as $t \rightarrow \infty$, with the constants being uniform over $u \in [\delta, 1 - \delta]$, and then a subsequence argument gives the results in the form stated here.
\item The results in \cite{bbs3} are stated for a fixed initial configuration of particles. However, because the proof in \cite{bbs3} ultimately works from the random configuration at time $\kappa t^{2/3}$, the only possible complication comes from the randomness of the times $t_n$. In \cite{bbs3}, Theorems 1 and 2 are probability statements that hold when the position $x$ of the initial particle tends to infinity, while Theorems 3 and 4 establish convergence in distribution as $x \rightarrow \infty$. The requirement that the random times $t_n$ tend to infinity in probability is therefore sufficient for these results to carry over to the present context.
\item In \cite{bbs3}, it is assumed that at the time of a birth event, a particle splits into two other particles. However, as long as $q = 0$, the only change that results from considering a general offspring distribution is that a different constant appears in front of the second moment estimates, which does not affect the results. Results of Bramson \cite{bram83} are needed to prove Theorem 2 in \cite{bbs3}, but those results hold under the more general offspring distributions considered here when $q = 0$. Note in particular that equation (1.2$'$) on page 5 of \cite{bram83} is satisfied when the offspring distribution has finite variance.
\end{itemize}
The claim that Proposition \ref{configpropnew} holds even when $q > 0$ requires a bit more care. Indeed, the initial configuration with a single particle at $x_n$, with $x_n \rightarrow \infty$, does not fulfill the four conditions in the lemma when $q > 0$ because of the possibility that all descendants of the initial particle could die out. Nevertheless, once these four conditions, which correspond to Lemma 15 of \cite{bbs3}, are established, one deduces Theorems 1, 3, and 4 in \cite{bbs3} using moment estimates, which change only by a constant factor when $q > 0$. Therefore, the first and third conclusions of Proposition \ref{configpropnew} follow from the arguments in \cite{bbs3} without change. Some additional argument is needed, however, to obtain the second conclusion of Proposition \ref{configpropnew} because the proof of Theorem 2 in \cite{bbs3} uses a result of Bramson \cite{bram83} which is valid only when $q = 0$.
To extend the second conclusion of Proposition \ref{configpropnew} to the case $q > 0$, we modify the process as follows. First, we construct the original branching Brownian motion in two stages. In the first stage, we construct the process without absorption at zero. At the second stage, we truncate any particle trajectories that hit zero. Now we can construct a modified process by deleting all particles that do not have an infinite line of descent in the first stage of this construction. This yields a new branching Brownian motion with $q = 0$ that includes a subset of the particles in the original branching Brownian motion. In particular, for any fixed $s > 0$, the law of the new process at time $s$, conditioned on the original branching Brownian motion at time $s$, is obtained by independently retaining each particle of the original process with probability $1 - q$.
We check that the four conditions of the lemma hold for the new process. Condition 1 is immediate because we will use the same times $t_n$ as in the original process, while conditions 2 and 4 and the upper bound in (\ref{configasm3}) hold because the particles in the new process are a subset of the particles in the original process. To establish the lower bound in (\ref{configasm3}), note that (\ref{configasm4}) implies that for all $\theta > 0$, with probability tending to one as $n \rightarrow \infty$, no individual particle in the original process contributes more than $\theta$ to $Z_{t_n}(\kappa t_n^{2/3})$. Now, suppose $z_1, \dots, z_m$ is a sequence of numbers such that $z_1 + \dots + z_m = z$ and $z_i \leq \theta$ for all $i$. Let $\xi_1, \dots, \xi_m$ be independent Bernoulli$(1-q)$ random variables, and let $Z = z_1 \xi_1 + \dots + z_m \xi_m$. Then $E[Z] = (1 - q) z$ and $\mbox{Var}(Z) = q(1-q)(z_1^2 + \dots + z_m^2) \leq q(1 - q) \theta z$. By applying this observation to the numbers $z_{t_n}(X_u(\kappa t_n^{2/3}), \kappa t_n^{2/3})$ for $u \in N_{\kappa t_n^{2/3}}$ and $\theta$ sufficiently small, and then using Chebyshev's Inequality, we obtain the lower bound in (\ref{configasm3}).
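For instance, with the notation above, Chebyshev's Inequality gives
$$P\Big( Z \leq \tfrac{1}{2}(1-q)z \Big) \leq \frac{\mbox{Var}(Z)}{\big(\tfrac{1}{2}(1-q)z\big)^2} \leq \frac{4 q \theta}{(1-q) z},$$
which is small once $z = Z_{t_n}(\kappa t_n^{2/3})$ is bounded below, with high probability, using (\ref{configasm3}) for the original process, and $\theta$ is then chosen sufficiently small.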
It now follows from the result when $q = 0$ that the conclusion (\ref{configconc2}) holds for the new process. Because the particles in the new process are a subset of the particles in the original process, we immediately get the lower bound in (\ref{configconc2}) for the original process. Finally, recall that for any time $s$, the position of the right-most particle is the same in the new process as in the original process with probability $1 - q$. Therefore, the upper bound in (\ref{configconc2}) for the original process holds with probability at least $1 - \varepsilons/(1 - q)$, which is sufficient.
\end{proof}
We are now able to prove Proposition \ref{configpropnew} by showing that the hypotheses of Proposition~\ref{configpropnew} imply those of Lemma \ref{configlemma}.
\begin{proof}[Proof of Proposition \ref{configpropnew}]
Suppose that the hypotheses of Proposition \ref{configpropnew} are satisfied.
The first condition of Lemma \ref{configlemma} holds by assumption.
Using that $\sin(x) \geq 2x/\pi$ and $\sin(\pi - x) \geq 2x/\pi$ for all $x \in [0, \pi/2]$, we have for all $x \in [0, L_{t_n}(0) - A]$,
\begin{equation}\label{yzrat}
\frac{y_{t_n,0}(x,0)}{z_{t_n}(x,0)} = \frac{x}{L_{t_n}(0)^2 \sin( \frac{\pi x}{L_{t_n}(0)})} \leq \frac{1}{2A}.
\end{equation}
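In more detail, for $n$ large enough that $L_{t_n}(0) \geq A$, the two sine bounds give
$$\frac{x}{L_{t_n}(0)^2 \sin\big( \frac{\pi x}{L_{t_n}(0)}\big)} \leq \begin{cases} \frac{1}{2 L_{t_n}(0)} & \mbox{if } 0 \leq x \leq L_{t_n}(0)/2, \\ \frac{x}{2A L_{t_n}(0)} & \mbox{if } L_{t_n}(0)/2 < x \leq L_{t_n}(0) - A, \end{cases}$$
and both expressions are bounded by $1/(2A)$.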
Because $A$ is arbitrary and $(Z_{t_n}(0))_{n=1}^{\infty}$ is tight, the assumption that $L_{t_n}(0) - R(0) \rightarrow_p \infty$ implies that $Y_{t_n}(0) \rightarrow_p 0$ as $n \rightarrow \infty$.
Let $\varepsilons > 0$ and $\kappa > 0$. To establish the second, third, and fourth conditions in Lemma \ref{configlemma}, we consider the branching Brownian motion with particles killed when they reach either the origin or the curve $s \mapsto L_{t_n}(s)$, run for time $\kappa t_n^{2/3}$.
We will need to make some moment calculations, conditional on the initial configuration of particles. By Markov's Inequality,
Lemma~\ref{lem:Rexp} with $A = 0$, and equation (\ref{tauasym}), there is a positive constant $C$, depending on $\kappa$, such that
$$\ensuremath{\mathbf{P}}_{\nu_n}(R_{t_n}(0, \kappa t_n^{2/3}) \geq 1|{\cal F}_0) \leq \ensuremath{\mathbf{E}}_{\nu_n}[R_{t_n}(0, \kappa t_n^{2/3})|{\cal F}_0] \leq C \bigg( \frac{Z_{t_n}(0)}{L_{t_n}(0)} + Y_{t_n}(0) \bigg).$$ Because $L_{t_n}(0) \rightarrow_p \infty$ and $Y_{t_n}(0) \rightarrow_p 0$ as $n \rightarrow \infty$, and $(Z_{t_n}(0))_{n=1}^{\infty}$ is tight, we can deduce that
\begin{equation}\label{Ris0}
\lim_{n \rightarrow \infty} \ensuremath{\mathbf{P}}_{\nu_n}(R_{t_n}(0, \kappa t_n^{2/3}) \geq 1) = 0.
\end{equation}
Thus, we may disregard the possibility that particles are killed at $L_{t_n}(s)$ before time $\kappa t_n^{2/3}$.
By Lemma \ref{lem:Yexp} with $A = 0$,
\begin{equation}\label{Ytnexp}
\ensuremath{\mathbf{E}}_{\nu_n}[{\tilde Y}_{t_n,0}(\kappa t_n^{2/3})|{\cal F}_0] \leq \frac{C Z_{t_n}(0)}{L_{t_n}(0)},
\end{equation}
where the positive constant $C$ depends on $\kappa$. Because the sequence $(Z_{t_n}(0))_{n=1}^{\infty}$ is tight and
$L_{t_n}(0) \geq L_{t_n}(\kappa t_n^{2/3})$, the second condition (\ref{configasm2}) in Lemma \ref{configlemma} follows from (\ref{Ytnexp}) and Markov's Inequality, along with (\ref{Ris0}).
From Lemma \ref{lem:Zexp} with $A = 0$, and the fact $(Z_{t_n}(0))_{n=1}^{\infty}$ is tight, we conclude that for all $\varepsilons > 0$ and $\delta > 0$, for sufficiently large $n$ we have, on an event of probability at least $1 - \varepsilons/2$,
$$\delta \leq \ensuremath{\mathbf{E}}_{\nu_n}[Z_{t_n,0}(\kappa t_n^{2/3})|{\cal F}_0] \leq \frac{1}{\delta}.$$
By Lemma \ref{lem:Zvar} with $A = 0$, there is a positive constant $C$ such that
$$\textup{Var}_{\nu_n}(Z_{t_n,0}(\kappa t_n^{2/3})|{\cal F}_0) \leq C \bigg( \frac{Z_{t_n}(0)}{L_{t_n}(0)} + Y_{t_n}(0) \bigg),$$
and the right-hand side tends to zero in probability as $n \rightarrow \infty$ by the argument before (\ref{Ris0}). In view of our assumptions on the initial configurations as well as (\ref{Ris0}), the third condition (\ref{configasm3}) in Lemma \ref{configlemma} now follows from an application of Chebyshev's Inequality.
Because ${\tilde y}_{t_n,0}(L_{t_n}(\kappa t_n^{2/3}) - A, \kappa t_n^{2/3}) = e^{-A},$ the fourth condition (\ref{configasm4}) in Lemma \ref{configlemma} follows immediately from (\ref{configasm2}).
\end{proof}
\section{Convergence to the CSBP: small time steps}\label{CSBPsec}
In this section we state and prove a result (Proposition~\ref{prop:csbp_small_step}) which will be at the heart of the proof of Theorem~\ref{CSBPthm} in Section~\ref{sec:csbp_proof}.
\subsection{Notation in this section}
We will make heavy use of the results in Sections~\ref{sec:first_moment_estimates} and \ref{sec:second_moment_estimates}. In particular, we use all the notation introduced in Section~\ref{sec:first_moment_estimates}. Whenever the symbol $A$ appears, we will always tacitly assume that $A\ge1$.
In what follows, it will be necessary for us to let both $t$ and $A$ go to infinity. To this end, \emph{we will always first let $t$, then $A$ go to infinity}. We therefore introduce the following two symbols:
\begin{itemize}
\item $\varepsilons_t$: denotes a quantity which is bounded in absolute value by a function $h(A,t)$ satisfying:
\[
\forall A\ge 1: \lim_{t\rightarrow\infty} h(A,t) = 0.
\]
\item $\varepsilons_{A,t}$: denotes a quantity which is bounded in absolute value by a function $h(A,t)$ satisfying:
\[
\lim_{A\rightarrow\infty} \limsup_{t\rightarrow\infty} h(A,t) = 0.
\]
\end{itemize}
Note that the first condition is stronger than the second one.
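For instance, a quantity bounded in absolute value by $e^{-A} + t^{-1}$ may be written as $\varepsilons_{A,t}$ but not as $\varepsilons_t$, while a quantity bounded by $A t^{-1}$ may be written as either.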
Furthermore, as above, the symbol $O(\cdot)$ denotes a quantity bounded in absolute value by a constant times the quantity inside the parentheses. Also, throughout the section, we fix $\Lambda > 1$ and a positive function $\bar\theta$ such that $\bar\theta(A)A^2\rightarrow 0$ as $A\rightarrow\infty$. The functions $h$ above and the constant in the definition of $O(\cdot)$ may only depend on the offspring distribution of the branching Brownian motion and on $\Lambda$ and $\bar{\theta}$.
Throughout the section, let $r\le s$ be such that $s\le (1-\Lambda^{-1})t$ and $t-s=e^{-\theta}(t-r)$, for some $\theta\in [\bar\theta(A)/2,\bar\theta(A)]$. All estimates are meant to be uniform in $r$ and $s$ respecting these constraints.
Note that with this notation, we have
\begin{align}\label{26}
\frac{\tau_0(r,s) }{ L_t(r)} =2\pi^{-2} (1-e^{-\theta/3}) = \frac2{3\pi^{2}}\theta ( 1 + O(\theta)).
\end{align}
In particular, for all $r\le r'\le s'\le s$,
\begin{align}\label{27}
\frac{\tau_0(r',s') }{ L_t(r')} &=O(\theta).
\end{align}
The main step in the proof of Theorem \ref{CSBPthm} will be to show the following proposition.
\begin{proposition}
\label{prop:csbp_small_step}
Set $a=\frac{2}{3}(a_{\ref{eq:abelian}}+\log \pi)+\frac{1}{2}$. Then, uniformly in $\lambda\in [\Lambda^{-1},\Lambda]$, on the event $\{\forall u\in N_r: X_u(r) \le L_{t,A}(r)\}$, we have
\[
\ensuremath{\mathbf{E}}[e^{-\lambda Z_t(s)}\,|\,\ensuremath{\mathcal{F}}_r] = \exp\{(-\lambda + \theta (\Psi_{a,2/3}(\lambda)+\varepsilons_{A,t}))Z_t(r) + O(AY_t(r))\}.
\]
\end{proposition}
The proof of this proposition will be decomposed into several steps. Inspired by \cite{bbs2}, we decompose the particles into those crossing the curve $L_{t,A}$ and those staying below it. The particles crossing the curve are exactly the ones causing the jumps in the CSBP. In Section~\ref{sec:csbp_jump}, we give an asymptotic result for the Laplace transform of such a jump. In Section~\ref{sec:csbp_small_step}, we use this result to prove Proposition~\ref{prop:csbp_small_step}.
\subsection{One particle at \texorpdfstring{$L_{t,A}$}{L\_\{t,A\}}}
\label{sec:csbp_jump}
\begin{lemma}\label{lem:jump}
Uniformly in $\lambda \in [\Lambda^{-1},\Lambda]$ and $q\in[r,s-t^{2/3}],$
\begin{equation}
\label{eq:jump}
\ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)} [e^{-\lambda Z_t(s)}] =\exp \left\{\pi e^{-A}(\Psi_{a_{\ref{eq:jump}},1}(\lambda) - A\lambda + \varepsilons_{A,t}) \right\},
\end{equation}
with $a_{\ref{eq:jump}} = a_{\ref{eq:abelian}}+\log \pi$.
\end{lemma}
The following lemma will be needed for the proof of Lemma~\ref{lem:jump}.
\begin{lemma}\label{lem:deriv_appears}
Let $y: (0, \infty) \rightarrow (0, \infty)$ be a function such that $y(t)\rightarrow\infty$ and $y(t) = o(t^{1/3})$ as $t\rightarrow\infty$.
Let $f: (0, \infty) \rightarrow (0, \infty)$ be a function such that $f(t) = o(t^{2/3})$ as $t\rightarrow\infty$.
Then uniformly in $q\in[r,s-t^{2/3}]$, $q'\in [q,q+f(t)]$, and $\lambda \in [\Lambda^{-1},\Lambda]$, as $t\rightarrow\infty$, we have
\begin{equation}
\ensuremath{\mathbf{E}}_{(L_t(q)-y(t),q')}[e^{-\lambda Z_t(s)}] = \exp\left\{-(\lambda+\varepsilons_t+O(\theta))\pi y(t)e^{-y(t)}\right\}.
\end{equation}
\end{lemma}
\begin{proof}
Write $x' = L_t(q) - y(t)$. Under $\ensuremath{\mathbf{P}}_{(x',q')}$, we have $Z_t(s) = Z_{t,0}(s)$ on the event $\{R_{t,0}(q',s) = 0\}$. Hence,
\begin{align}
\label{eq:701}
\left|\ensuremath{\mathbf{E}}_{(x',q')}[e^{-\lambda Z_t(s)}] - \ensuremath{\mathbf{E}}_{(x',q')}[e^{-\lambda Z_{t,0}(s)}]\right| \le \ensuremath{\mathbf{P}}_{(x',q')}(R_{t,0}(q',s) \ge 1) \le \ensuremath{\mathbf{E}}_{(x',q')}[R_{t,0}(q',s)].
\end{align}
By Lemma~\ref{lem:Rexp} and \eqref{27},
\begin{align}
\label{eq:702}
\ensuremath{\mathbf{E}}_{(x',q')}[R_{t,0}(q',s)] \le C(\theta z_t(x',q') + y_t(x',q')).
\end{align}
Furthermore, using that $e^{-z} = 1-z+O(z^2)$ for $z\ge 0$, we have
\begin{align}
\label{eq:703}
\ensuremath{\mathbf{E}}_{(x',q')}[e^{-\lambda Z_{t,0}(s)}] = 1 - \lambda \ensuremath{\mathbf{E}}_{(x',q')}[Z_{t,0}(s)] + O( \ensuremath{\mathbf{E}}_{(x',q')}[Z_{t,0}(s)^2]).
\end{align}
By Lemma~\ref{lem:Zexp},
\begin{align}
\label{eq:704}
\ensuremath{\mathbf{E}}_{(x',q')}[Z_{t,0}(s)] = (1+O(\theta) + \varepsilons_t) z_t(x',q').
\end{align}
As for the second moment, to apply Lemma~\ref{lem:Zvar}, note that $\tau_0(q',s) \ge \gamma_1$ for some $\gamma_1>0$, since $q' \le s-t^{2/3} + f(t)$ and $f(t) = o(t^{2/3})$ by assumption. Hence, for $t$ large enough, by Lemma~\ref{lem:Zvar} and \eqref{27},
\begin{align}
\label{eq:705}
\ensuremath{\mathbf{E}}_{(x',q')}[Z_{t,0}(s)^2] \le C\left(\theta z_t(x',q') + y_t(x',q')\right).
\end{align}
Combining \eqref{eq:701}, \eqref{eq:702}, \eqref{eq:703}, \eqref{eq:704} and \eqref{eq:705}, we have for large enough $t$,
\begin{align}
\label{eq:706}
\ensuremath{\mathbf{E}}_{(x',q')}[e^{-\lambda Z_t(s)}] = 1 - (\lambda +\varepsilons_t+O(\theta)) z_t(x',q') + O(y_t(x',q')).
\end{align}
Now using that $x' = L_t(q) - y(t)$ and $y(t) = o(t^{1/3}) = o(L_t(q'))$, along with the fact that $L_t(q) - L_t(q') \rightarrow 0$ as $t \rightarrow \infty$ because $q' \in [q, q + f(t)]$, we get
$$z_t(x',q') = L_t(q') \sin\left(\frac{\pi (y(t) - (L_t(q) - L_t(q')))}{L_t(q')}\right) e^{L_t(q) - L_t(q') - y(t)} = (1+\varepsilons_t)\pi y(t)e^{-y(t)}.$$
Furthermore,
\[
y_t(x',q') = \frac{x'}{L_t(q')} e^{L_t(q) - L_t(q') - y(t)} \le (1 + \varepsilons_t) e^{-y(t)}.
\]
It is also easy to check that
\[
z_t(x',q')^2 + y_t(x',q')^2 = O(y_t(x',q')).
\]
It follows from the above that the RHS of \eqref{eq:706} is at least $1/2$ for $t$ large enough, since $y(t) \rightarrow\infty$ as $t\rightarrow\infty$ by assumption. Using the equality $1-x = e^{-x + O(x^2)}$ for $x\in [0,1/2]$, equation~\eqref{eq:706} together with the above equations gives
\begin{align}
\ensuremath{\mathbf{E}}_{(x',q')}[e^{-\lambda Z_t(s)}]
&= \exp\left(-(\lambda +\varepsilons_t+O(\theta)) z_t(x',q') + O(y_t(x',q')) + O(z_t(x',q')^2+y_t(x',q')^2)\right) \nonumber\\
&= \exp\left(-[(\lambda +\varepsilons_t+O(\theta)) \pi y(t) + O(1)]e^{-y(t)}\right),\nonumber
\end{align}
which implies the statement of the lemma, since $y(t)\rightarrow\infty$ as $t\rightarrow\infty$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:jump}]
We start by proceeding as in the proof of Theorem~\ref{survivex}. Let $g: (0, \infty) \rightarrow (0, \infty)$ be an increasing function that satisfies (\ref{Tygy}). Let $y: (0, \infty) \rightarrow (0, \infty)$ be defined so that, similarly to (\ref{ycond}), we have
$$\lim_{t \rightarrow \infty} y(t) = \infty, \hspace{.5in} \lim_{t \rightarrow \infty} \frac{y(t)}{L_t(0)} = 0, \hspace{.5in} \lim_{t \rightarrow \infty} t^{-2/3} g(A + y(t)) = 0.$$
Starting with one particle at $L_{t,A}(q)$ at time $q$, we stop particles as soon as they hit the point $L_{t,A}(q)-y(t) = L_t(q) - A - y(t)$. We denote again by $K_t$ the number of particles hitting that point and by $w_1,\ldots,w_{K_t}$ the times they hit it. Then $w_i \in [q,q+g(A + y(t))]$ for all $i=1,\ldots,K_t$ with probability $1-\varepsilons_t$ by (\ref{Tygy}). We can apply Lemma~\ref{lem:deriv_appears} with $A + y(t)$ in place of $y(t)$ and $f(t) = g(A + y(t))$ to get
\begin{align}
\nonumber
\ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)}[e^{-\lambda Z_t(s)}]
&= \ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)}\left[\prod_{i=1}^{K_t} \ensuremath{\mathbf{E}}_{(L_t(q) - A - y(t),w_i)}[e^{-\lambda Z_t(s)}]\right]\\
&= \ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)}\left[\exp\left(-K_t (\lambda +\varepsilons_t+O(\theta)) \pi (A+y(t))e^{-A-y(t)}\right)\right]+ \varepsilons_t.
\end{align}
Recall that $y(t)e^{-y(t)} K_t$ converges in law to $W$, the random variable from Lemma~\ref{neveuW}. It follows that
\begin{align}
\label{eq:714}
\ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)}[e^{-\lambda Z_t(s)}] =
\ensuremath{\mathbf{E}}[\exp(-\pi e^{-A}(\lambda +O(\theta)) W)] + \varepsilons_t.
\end{align}
Note that $\lambda + O(\theta) =\lambda(1+O(\theta))$, uniformly in $\lambda\ge \Lambda^{-1}$. Hence, by \eqref{eq:714}, combined with Lemma~\ref{neveuW}, as $A\rightarrow\infty$, we have
\begin{align}\label{eq:715}
\ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)}[e^{-\lambda Z_t(s)}]
\nonumber
&= \exp\left\{\Psi_{a_{\ref{eq:abelian}},1}(\pi e^{-A}(1 +O(\theta))\lambda)+o(e^{-A}) + \varepsilons_t\right\}\\
&= \exp\{\pi e^{-A} (1+O(\theta))\lambda(\log \lambda + a_{\ref{eq:abelian}} + \log \pi -A + O(\theta) + \varepsilons_{A,t})\}.
\end{align}
Setting $a_{\ref{eq:jump}} = a_{\ref{eq:abelian}} + \log \pi$ and using the fact that $\theta A \le \bar\theta(A) A \rightarrow 0$ as $A\rightarrow\infty$, equation \eqref{eq:715} implies
\begin{align}
\label{eq:716}
\ensuremath{\mathbf{E}}_{(L_{t,A}(q),q)}[e^{-\lambda Z_t(s)}] = \exp\{\pi e^{-A}(\Psi_{a_{\ref{eq:jump}},1}(\lambda) -A\lambda + \varepsilons_{A,t})\},
\end{align}
which finishes the proof of the lemma.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:csbp_small_step}}
\label{sec:csbp_small_step}
Decomposing into the descendants of the particles living at time $r$, it is enough to show that for every $x\in[0,L_{t,A}(r)]$, we have
\begin{align}
\label{eq:720}
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] = \exp\{(-\lambda + \theta (\Psi_{a,2/3}(\lambda)+\varepsilons_{A,t}))z_t(x,r) + O(Ay_t(x,r))\}.
\end{align}
Fix $x\in[0,L_{t,A}(r)]$ throughout the section. We adapt an idea from \cite{bbs2} and stop the particles the moment they hit the curve $L_{t,A}$ during the time interval $[r,s]$. We denote by $\mathcal{L}_{t,A}$ the set of those particles, identifying a particle with the time it hits the curve (one can do this more formally using the concept of \emph{stopping lines} from \cite{chauvin91}). For every particle hitting the curve at time $u$, we denote by $Z_t^{(u)}(s)$ the contribution to $Z_t(s)$ of the descendants of $u$. We then have the following decomposition:
\begin{equation}\label{master decomposition}
Z_t(s) = Z'_{t,A}(s)+\sum_{u \in \mathcal{L}_{t,A}}Z_t^{(u)}(s),
\end{equation}
where
\begin{align*}
Z'_{t,A}(s) = \sum_{u\in N_{s,A}} z_t(X_u(s),s),
\end{align*}
with $N_{s,A}$ defined in Section~\ref{sec:first_moment_estimates}. In what follows, we will also make use of the quantities $Z_{t,A}$, $Y_{t,A}$ etc.~defined in that section.
By the (strong) branching property, conditionally on $\mathcal{L}_{t,A}$, the $Z^{(u)}$ are independent and independent of $Z'_{t,A}(s)$. Therefore, we can write
\begin{align*}
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] &= \ensuremath{\mathbf{E}}_{(x,r)}\left[ e^{-\lambda Z'_{t,A}(s)} \prod_{u\in\mathcal{L}_{t,A}} e^{-\lambda Z^{(u)}_t(s)}\right]\\
&= \ensuremath{\mathbf{E}}_{(x,r)}\left[ e^{-\lambda Z'_{t,A}(s)} \prod_{u\in\mathcal{L}_{t,A}} \ensuremath{\mathbf{E}}_{(L_{t,A}(u),u)}[e^{-\lambda Z_t(s)}]\right].
\end{align*}
Define $s'=s-t^{2/3}$.
Using Markov's inequality and conditioning on $\ensuremath{\mathcal{F}}_{s'}$, then applying Lemma~\ref{lem:Rexp}, Lemma~\ref{lem:Zexp}, and Lemma~\ref{lem:Yexp}, we have
\begin{align*}
\ensuremath{\mathbf{P}}_{(x,r)}(\mathcal{L}_{t,A} \cap [s',s]\ne \emptyset) &\le \ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(s',s)] \\
&\le \ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s')\varepsilons_t + O(e^A Y_{t,A}(s'))(1 + \varepsilons_t)]\\
&= z_{t,A}(x,r)\varepsilons_t.
\end{align*}
Hence,
\begin{align}
\label{eq:721}
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] = \ensuremath{\mathbf{E}}_{(x,r)}\left[ e^{-\lambda Z'_{t,A}(s)} \prod_{u\in\mathcal{L}_{t,A}\cap[r,s']} \ensuremath{\mathbf{E}}_{(L_{t,A}(u),u)}[e^{-\lambda Z_t(s)}]\right] + z_{t,A}(x,r)\varepsilons_t.
\end{align}
Equation \eqref{eq:721} and Lemma~\ref{lem:jump} now give
\begin{align}
\label{eq:722}
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] = \ensuremath{\mathbf{E}}_{(x,r)}\left[ e^{-\lambda Z'_{t,A}(s) + R_{t,A}(r,s')\pi e^{-A}(\Psi_{a_{\ref{eq:jump}},1}(\lambda) - A\lambda + \varepsilons_{A,t})}\right] + z_{t,A}(x,r)\varepsilons_t.
\end{align}
We next claim that \eqref{eq:722} implies
\begin{align}
\nonumber
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] &= 1-\lambda \ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)] + \pi e^{-A}(\Psi_{a_{\ref{eq:jump}},1}(\lambda) - A\lambda + \varepsilons_{A,t})\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s')]\\
\label{eq:723}
&+ O(\ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)^2 + (Ae^{-A}R_{t,A}(r,s'))^2 + AY_{t,A}(s)]) + z_{t,A}(x,r)\varepsilons_t.
\end{align}
Indeed, the upper bound follows by first using the fact that $Z'_{t,A}(s)\ge Z_{t,A}(s)$, which can be seen by observing that $z_t(x,s) \ge z_{t,A}(x,s)$ for every $x\ge0$ because the function $L \mapsto L \sin(\pi x/L)$ is increasing on $[x, \infty)$, and then using that $e^{-x} = 1-x+O(x^2)$ for $x\ge0$. Note that the second summand in the exponent on the RHS of \eqref{eq:722} is always negative, because the product in the expectation on the RHS of \eqref{eq:721} is bounded by $1$. The lower bound, on the other hand, follows from the equality $Z'_{t,A}(s) = Z_{t,A}(s) + O(A Y_{t,A}(s))$, which is a consequence of the fact that $z_t(x,s) = z_{t,A}(x,s) + O(Ay_{t,A}(x,s))$, together with the inequality $e^{-x} \ge 1-x$ for every $x\ge 0$.
We now gather the following estimates:
\begin{align*}
\ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)] &= e^{-\theta(\frac{2}{3}A+\frac{1}{2})+\varepsilons_t} z_{t,A}(x,r) && \text{by Lemma~\ref{lem:Zexp}}\\
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s')] &= \pi e^{A+O(\theta A)+\varepsilons_t}\\
&\hspace{-.5in}\times \left( \left(\frac{2}{3\pi^2}\theta(1+ O(\theta)) + \varepsilons_t \right) z_{t,A}(x,r) + O(y_{t,A}(x,r))\right) && \text{by Lemma~\ref{lem:Rexp} and \eqref{26}}\\
\ensuremath{\mathbf{E}}_{(x,r)}[Z_{t,A}(s)^2] &\le Ce^{-A}(\theta z_{t,A}(x,r)+y_{t,A}(x,r)) && \text{by Lemma~\ref{lem:Zvar}}\\
\ensuremath{\mathbf{E}}_{(x,r)}[R_{t,A}(r,s')^2] &\le Ce^A(\theta z_{t,A}(x,r)+y_{t,A}(x,r)) && \text{by Lemma~\ref{lem:Rvar}}\\
\ensuremath{\mathbf{E}}_{(x,r)}[Y_{t,A}(s)] &\le z_{t,A}(x,r)\varepsilons_t && \text{by Lemma~\ref{lem:Yexp}}
\end{align*}
Using that $\theta A^2 \le \bar\theta(A)A^2 \rightarrow 0$ as $A\rightarrow\infty$, equation~\eqref{eq:723} together with the above estimates gives after some calculation, with $a_{\ref{eq:724}} = \frac{2}{3}a_{\ref{eq:jump}}+\frac{1}{2}$,
\begin{align}
\label{eq:724}
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] = 1 + (-\lambda + \theta(\Psi_{a_{\ref{eq:724}},2/3}(\lambda)+\varepsilons_{A,t}))z_{t,A}(x,r) + O(Ay_{t,A}(x,r)).
\end{align}
Using that $z_{t,A}(x,r) =O(A e^{-A})$ and $y_{t,A}(x,r) \le e^{-A}$ for $x\le L_{t,A}(r)$, as well as $z_{t,A}(x,r)^2 = O(y_{t,A}(x,r))$, we get
\begin{align}
\label{eq:725}
\ensuremath{\mathbf{E}}_{(x,r)}[e^{-\lambda Z_t(s)}] = \exp\left((-\lambda + \theta(\Psi_{a_{\ref{eq:724}},2/3}(\lambda)+\varepsilons_{A,t}))z_{t,A}(x,r) + O(Ay_{t,A}(x,r))\right).
\end{align}
Using again the equality $z_t(x,r) = z_{t,A}(x,r) + O(Ay_{t,A}(x,r))$, equation \eqref{eq:725} implies \eqref{eq:720} with $a = a_{\ref{eq:724}}$ and concludes the proof of Proposition~\ref{prop:csbp_small_step}.
\section{Convergence to the CSBP: proof of Theorem~\ref{CSBPthm}}
\label{sec:csbp_proof}
Before getting to the heart of the proof, we perform a series of reductions. First, it is enough to consider initial conditions such that $Z$ is positive almost surely. For, suppose that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0)\rightarrow_p 0$ as $t\rightarrow\infty$. If we superpose $\lfloor 1/Z_t(0)\rfloor$ independent copies of the system, we can reduce this case to the case where $Z_t(0)\rightarrow_p 1$ as $t\rightarrow\infty$.
Indeed, once we have established that the finite-dimensional distributions of these superposed processes converge to the CSBP $(\Xi(u), u \geq 0)$ started from $1$, which almost surely stays finite for all times, it will follow that when $Z_t(0)\rightarrow0$ in probability as $t\rightarrow\infty$, the finite-dimensional distributions of the process converge to those of the process that is identically zero.
This argument is easily generalized to the general case where $Z$ has an atom at $0$ of arbitrary positive mass.
Next, the finite-dimensional convergence can be easily deduced from the one-dimensional convergence result and the Markov property of the process. For this, it is enough to show that for every $u\in (0,1)$, with high probability, the configuration of particles at time $ut$ again satisfies the hypotheses, with $(1-u)t$ instead of $t$, i.e.~that $Z_t(ut) \ensuremath{\mathbb{R}}ightarrow Z$ for some random variable $Z>0$ and $L_t(ut) - R(ut) \rightarrow \infty$ in probability (note that $L_t(ut) = L_{(1-u)t}(0)$ and $z_t(x,ut) = z_{(1-u)t}(x,0)$). The first is precisely a consequence of the one-dimensional convergence result, together with the fact that Neveu's CSBP does not hit 0. The second on the other hand follows from the second part of Proposition~\ref{configpropnew}.
Finally, by a simple conditioning argument, it is enough for the one-dimensional convergence result to assume an initial condition such that, under $\ensuremath{\mathbf{P}}_{\nu_t}$, we have $Z_t(0) \rightarrow_p z_0$ as $t\rightarrow\infty$, for some constant $z_0>0$. We assume this for the rest of the section. Also, all probabilities and expectations for the rest of this section will be taken under $\ensuremath{\mathbf{P}}_{\nu_t}$, so we will omit the subscript.
We now go on to prove the one-dimensional convergence. Fix $\tau > 0$. It is enough to show the following: for every $\lambda > 0$, we have
\begin{align}
\label{eq:750}
\lim_{t\rightarrow\infty} \ensuremath{\mathbf{E}}[e^{-\lambda Z_t(t(1-e^{-\tau}))}] = e^{-z_0u_\tau(\lambda)},
\end{align}
where $u_\tau(\lambda)$ is the function from \eqref{csbpLaplace} corresponding to the CSBP with branching mechanism $\Psi_{a,2/3}$, with $a$ being the number from Proposition~\ref{prop:csbp_small_step}.
We do this by discretizing time. As in Section~\ref{sec:csbp_small_step}, we introduce a parameter $A$ which goes slowly to $\infty$ with $t$. Recall the notation $\varepsilons_t$ and $\varepsilons_{A,t}$ from that section, as well as the function $\bar\theta$. Quantities denoted by $\varepsilons_t$ and $\varepsilons_{A,t}$ now may also depend on the initial condition and on $\tau$. For $A$ sufficiently large, choose $\theta\in [\bar{\theta}(A)/2,\bar{\theta}(A)]$ such that $\tau = K\theta$ for some $K\in\ensuremath{\mathbb{N}}$. Define $t_k = t(1-e^{-k\theta})$ for $k=0,\ldots,K$, so that $t_K = t(1-e^{-\tau})$.
Set $\ensuremath{\mathcal{F}}_k = \ensuremath{\mathcal{F}}_{t_k}$. By assumption, there exists a sequence $a_t\rightarrow \infty$ such that $L_t(0)-a_t-R(0) \rightarrow\infty$ and $a_tY_t(0)\rightarrow 0$ in probability as $t\rightarrow\infty$. We assume without loss of generality that $a_t \le t^{1/6}$ for every $t\ge0$. Define the events
\begin{align*}
G_k = \{\forall j\in\{0,\ldots,k\}: R(t_j) \le L_{t,A}(t_j),\ Y_t(t_j) \le Z_t(t_j)/a_t\},\quad k=0,\ldots,K,
\end{align*}
so that $G_k\in \ensuremath{\mathcal{F}}_k$ for all $k\in\{0,\ldots,K\}$.
\begin{lemma}
\label{lem:Gk}
We have
\(
\ensuremath{\mathbf{P}}(G_K) \ge 1-\varepsilons_t.
\)
\end{lemma}
\begin{proof}
We have $\ensuremath{\mathbf{P}}(R(0) \le L_{t,A}(0),\ Y_t(0) \le Z_t(0)/a_t)\ge 1-\varepsilons_t$ by assumption. Let $k\in\{1,\ldots,K\}$. By part 2 of Proposition~\ref{configpropnew}, we have $L_{t,A}(t_k) - R(t_k) \rightarrow \infty$ in probability as $t\rightarrow\infty$. Furthermore, by part 3 of Proposition~\ref{configpropnew}, we have $L_t(t_k)Y_t(t_k)/Z_t(t_k)\rightarrow c$ in probability as $t\rightarrow\infty$, for some constant $c\in(0,\infty)$. Hence, since $a_t \le t^{1/6}$ by assumption, $a_t Y_t(t_k)/Z_t(t_k)\rightarrow 0$ in probability as $t\rightarrow\infty$. A union bound shows that $\ensuremath{\mathbf{P}}(G_K) = 1-\varepsilons_t$.
\end{proof}
Now fix $\lambda > 0$. For every $\delta \in\ensuremath{\mathbb{R}}$, define recursively,
\begin{align*}
\lambda_K^{(\delta)} &= \lambda\\
\lambda_k^{(\delta)} &= \lambda_{k+1}^{(\delta)} - \theta(\Psi_{a,2/3}(\lambda_{k+1}^{(\delta)}) - \delta).
\end{align*}
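The recursion above is the explicit Euler scheme, run backwards in $k$ with step $\theta$, for the ODE $y' = -(\Psi_{a,2/3}(y)-\delta)$, as made precise in the proof of Lemma~\ref{lem:csbp_lambda_delta} below. The following short Python sketch is purely illustrative and not part of the argument; the mechanism \texttt{Psi} is a stand-in (a Neveu-type $u \log u$, chosen only because the corresponding ODE with $\delta = 0$ has the closed-form solution $\lambda^{e^{-\tau}}$).
\begin{verbatim}
import numpy as np

def euler_lambdas(Psi, lam, tau, theta, delta=0.0):
    # Backward recursion lambda_k = lambda_{k+1} - theta*(Psi(lambda_{k+1}) - delta),
    # k = K-1, ..., 0, with lambda_K = lam and K = round(tau/theta).
    K = int(round(tau / theta))
    lams = np.empty(K + 1)
    lams[K] = lam
    for k in range(K - 1, -1, -1):
        lams[k] = lams[k + 1] - theta * (Psi(lams[k + 1]) - delta)
    return lams

Psi = lambda u: u * np.log(u)      # stand-in branching mechanism
lam, tau = 2.0, 1.0
for theta in (0.1, 0.01, 0.001):
    print(theta, euler_lambdas(Psi, lam, tau, theta)[0], lam ** np.exp(-tau))
\end{verbatim}
As $\theta$ decreases, $\lambda_0^{(0)}$ approaches the value of the ODE solution at time $\tau$, which is the mechanism behind parts 1 and 2 of the lemma below.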
\begin{lemma}
\label{lem:csbp_lambda_delta}
Fix $\lambda > 0$.
\begin{enumerate}
\item There exists $\Lambda > 1$ such that for $|\delta|$ small enough and for $\theta$ small enough (a priori depending on $\delta$), we have $\lambda_k^{(\delta)} \in [\Lambda^{-1},\Lambda]$ for all $k=0,\ldots,K$.
\item For every $\varepsilons>0$, there exists $\delta > 0$ such that for all $\theta$ sufficiently small,
$$\lambda_0^{(\delta)},\lambda_0^{(-\delta)} \in [u_\tau(\lambda)-\varepsilons,u_\tau(\lambda)+\varepsilons].$$
\item For every $\delta>0$, we have for sufficiently large $A$ and $t$, for every $k=0,\ldots,K$,
\begin{align}
\label{eq:752}
\ensuremath{\mathbf{E}}[e^{-\lambda_k^{(\delta)}Z_t(t_k)}\ensuremath{\mathbbm{1}}_{G_k}] -\ensuremath{\mathbf{P}}(G_K\backslash G_k) \le \ensuremath{\mathbf{E}}\left[e^{-\lambda Z_t(t_K)}\ensuremath{\mathbbm{1}}_{G_K}\right] \le \ensuremath{\mathbf{E}}[e^{-\lambda_k^{(-\delta)}Z_t(t_k)}\ensuremath{\mathbbm{1}}_{G_k}].
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
Parts 1 and 2 follow from standard results on convergence of Euler schemes for ordinary differential equations, after suitable localization arguments. We provide the details for completeness.
Write $\Psi = \Psi_{a,2/3}$ for simplicity. Fix $\lambda > 0$. Choose $\Lambda > 1$ such that $u_t(\lambda) \in (\Lambda^{-1},\Lambda)$ for all $t\in [0,\tau]$. Define $\Psi^\Lambda:\ensuremath{\mathbb{R}}\rightarrow\ensuremath{\mathbb{R}}$ by
\[
\Psi^\Lambda(x) =
\begin{cases}
\Psi(x) & \ensuremath{\text{ if }} x\in [\Lambda^{-1},\Lambda]\\
\Psi(\Lambda^{-1}) & \ensuremath{\text{ if }} x\le \Lambda^{-1}\\
\Psi(\Lambda) & \ensuremath{\text{ if }} x \ge \Lambda.
\end{cases}
\]
Then $\Psi^\Lambda$ is a Lipschitz function. If we define $(\lambda_k^{(\delta,\Lambda)})_{k=0,\ldots,K}$ recursively by
\begin{align*}
\lambda_K^{(\delta, \Lambda)} &= \lambda\\
\lambda_k^{(\delta, \Lambda)} &= \lambda_{k+1}^{(\delta,\Lambda)} - \theta(\Psi^\Lambda(\lambda_{k+1}^{(\delta,\Lambda)}) - \delta),
\end{align*}
then $(\lambda_{K-k}^{(\delta,\Lambda)})_{k=0,\ldots,K}$ is the explicit Euler scheme for the ODE
\begin{align}
\label{eq:ode_delta_lambda}
y' = -(\Psi^\Lambda(y)-\delta),\quad y(0) = \lambda
\end{align}
on the interval $[0,\tau]$, with timestep $\theta$. Since the right-hand side is a Lipschitz function of $y$, it is well known that the Euler scheme converges, i.e., if $y^{(\delta,\Lambda)}$ denotes the solution to the ODE \eqref{eq:ode_delta_lambda}, then as $\theta\rightarrow 0$,
\[
\max_{k=0,\ldots,K} |\lambda_{K-k}^{(\delta,\Lambda)} - y^{(\delta,\Lambda)}(k\theta)| \rightarrow 0.
\]
Furthermore, because the right-hand side of \eqref{eq:ode_delta_lambda} depends continuously on the parameter $\delta$, we have $y^{(\delta,\Lambda)} \rightarrow y^{(0,\Lambda)} \eqqcolon y^{(\Lambda)}$ as $\delta\rightarrow 0$, uniformly on $[0,\tau]$. Finally, since $\Psi^\Lambda = \Psi$ on $[\Lambda^{-1},\Lambda]$, and $(u_t(\lambda))_{t\in[0,\tau]}$ is the solution to the ODE \eqref{diffeq} and satisfies $u_t(\lambda)\in [\Lambda^{-1},\Lambda]$ for all $t\in[0,\tau]$, we have $y^{(\Lambda)}(t) = u_t(\lambda)$ for all $t\in[0,\tau]$. Altogether, the above arguments show
\begin{align}
\label{eq:756}
\lim_{\delta\rightarrow0} \lim_{\theta\rightarrow0} \max_{k=0,\ldots,K} |\lambda_{K-k}^{(\delta,\Lambda)} - u_{k\theta}(\lambda)| = 0.
\end{align}
It remains to remove the localization: since $u_t(\lambda)$ is contained in the open interval $(\Lambda^{-1},\Lambda)$ for all $t\in [0,\tau]$, by \eqref{eq:756}, there exists $\delta_0>0$ such that for all $|\delta|\le \delta_0$, for $\theta$ sufficiently small, $\lambda_k^{(\delta,\Lambda)} \in [\Lambda^{-1},\Lambda]$ for all $k\in\{0,\ldots,K\}$. But since $\Psi^\Lambda = \Psi$ on $[\Lambda^{-1},\Lambda]$, a direct induction argument shows that $\lambda_k^{(\delta,\Lambda)} = \lambda_k^{(\delta)}$ for all $k=0,\ldots,K$. This proves part 1. Part~2 immediately follows, using again \eqref{eq:756}.
We now prove part 3 of the lemma. Fix $\delta > 0$. Choose $\Lambda>1$ such that $e^{-\tau} > \Lambda^{-1}$ and such that the first part of the lemma holds with this $\Lambda$. By Proposition~\ref{prop:csbp_small_step}, we have for $A$ and $t$ sufficiently large, for every $\lambda'\in [\Lambda^{-1},\Lambda]$, and every $k=0,\ldots,K-1$, almost surely,
\[
e^{(-\lambda' +\theta (\Psi_{a,2/3}(\lambda')-\delta)) Z_t(t_k)}\ensuremath{\mathbbm{1}}_{G_k} \le \ensuremath{\mathbf{E}}\left[e^{-\lambda' Z_t(t_{k+1})}\,|\,\ensuremath{\mathcal{F}}_k\right]\ensuremath{\mathbbm{1}}_{G_k} \le e^{(-\lambda' +\theta (\Psi_{a,2/3}(\lambda')+\delta)) Z_t(t_k)}\ensuremath{\mathbbm{1}}_{G_k}.
\]
\]
In particular, using the first part of the lemma, for every $\delta>0$ small enough, for $A$ and $t$ sufficiently large, we have for every $k=0,\ldots,K-1$, almost surely,
\begin{align}
\label{eq:753+}
\ensuremath{\mathbf{E}}\left[e^{-\lambda_{k+1}^{(\delta)} Z_t(t_{k+1})}\,|\,\ensuremath{\mathcal{F}}_k\right]\ensuremath{\mathbbm{1}}_{G_k} &\ge e^{-\lambda_k^{(\delta)} Z_t(t_k)}\ensuremath{\mathbbm{1}}_{G_k},\\
\label{eq:753-}
\ensuremath{\mathbf{E}}\left[e^{-\lambda_{k+1}^{(-\delta)} Z_t(t_{k+1})}\,|\,\ensuremath{\mathcal{F}}_k\right]\ensuremath{\mathbbm{1}}_{G_k} &\le e^{-\lambda_k^{(-\delta)} Z_t(t_k)}\ensuremath{\mathbbm{1}}_{G_k}.
\end{align}
We now prove \eqref{eq:752} by induction. For $k=K$, the inequalities trivially hold. Let $k<K$ and assume \eqref{eq:752} holds for $k+1$, i.e.
\begin{align}
\label{eq:752bis}
\ensuremath{\mathbf{E}}[e^{-\lambda_{k+1}^{(\delta)}Z_t(t_{k+1})}\ensuremath{\mathbbm{1}}_{G_{k+1}}] -\ensuremath{\mathbf{P}}(G_K\backslash G_{k+1}) \le \ensuremath{\mathbf{E}}\left[e^{-\lambda Z_t(t_K)}\ensuremath{\mathbbm{1}}_{G_K}\right] \le \ensuremath{\mathbf{E}}[e^{-\lambda_{k+1}^{(-\delta)}Z_t(t_{k+1})}\ensuremath{\mathbbm{1}}_{G_{k+1}}].
\end{align}
Using that $G_{k+1}\subset G_k$, equation \eqref{eq:752bis} easily implies
\begin{align}
\label{eq:752ter}
\ensuremath{\mathbf{E}}[e^{-\lambda_{k+1}^{(\delta)}Z_t(t_{k+1})}\ensuremath{\mathbbm{1}}_{G_k}] -\ensuremath{\mathbf{P}}(G_K\backslash G_k) \le \ensuremath{\mathbf{E}}\left[e^{-\lambda Z_t(t_K)}\ensuremath{\mathbbm{1}}_{G_K}\right] \le \ensuremath{\mathbf{E}}[e^{-\lambda_{k+1}^{(-\delta)}Z_t(t_{k+1})}\ensuremath{\mathbbm{1}}_{G_k}].
\end{align}
Equations \eqref{eq:753+}, \eqref{eq:753-} and \eqref{eq:752ter} now show that \eqref{eq:752} holds for $k$. This finishes the induction.
\end{proof}
We can now wrap up the proof of \eqref{eq:750}. By Lemma~\ref{lem:Gk}, we have $\ensuremath{\mathbf{P}}(G_K) = 1-\varepsilons_t$, and so
\begin{equation}\label{Gbound}
\ensuremath{\mathbf{E}}[e^{-\lambda Z_t(t_K)}] = \ensuremath{\mathbf{E}}[e^{-\lambda Z_t(t_K)}\ensuremath{\mathbbm{1}}_{G_K}] + \varepsilons_t.
\end{equation}
Now fix $\varepsilon>0$ and choose $\delta>0$ as in the second part of Lemma~\ref{lem:csbp_lambda_delta}. We then have by the third part of that lemma and (\ref{Gbound}), for $A$ and $t$ sufficiently large,
\[
\ensuremath{\mathbf{E}}[e^{-(u_\tau(\lambda)+\varepsilon)Z_t(0)}\ensuremath{\mathbbm{1}}_{G_0}]-\varepsilons_t\le \ensuremath{\mathbf{E}}[e^{-\lambda Z_t(t_K)}] \le \ensuremath{\mathbf{E}}[e^{-(u_\tau(\lambda)-\varepsilon)Z_t(0)}\ensuremath{\mathbbm{1}}_{G_0}]+\varepsilons_t.
\]
Hence, letting $t\rightarrow\infty$, and using the assumption on the initial configuration, we have
\[
e^{-(u_\tau(\lambda)+\varepsilon)z_0}\le \liminf_{t\rightarrow\infty} \ensuremath{\mathbf{E}}[e^{-\lambda Z_t(t(1-e^{-\tau}))}] \le \limsup_{t\rightarrow\infty} \ensuremath{\mathbf{E}}[e^{-\lambda Z_t(t(1-e^{-\tau}))}] \le e^{-(u_\tau(\lambda)-\varepsilon)z_0}.
\]
Letting $\varepsilon \rightarrow0$ proves \eqref{eq:750} and thus finishes the proof of Theorem~\ref{CSBPthm}.
\subsection*{Acknowledgments} The authors warmly thank Julien Berestycki for a number of productive discussions while the ideas of this project were being formulated, along with some further discussions throughout the course of the project. They also thank Louigi Addario-Berry, Nathana\"el Berestycki, \'Eric Brunet, Simon Harris, and Matt Roberts for helpful discussions at an early stage of the project, and they thank a referee for comments which improved the exposition of the paper. Pascal Maillard was supported in part by grants ANR-20-CE92-0010-01 and ANR-11-LABX-0040 (ANR program ``Investissements d'Avenir''). Jason Schweinsberg was supported in part by NSF Grants DMS-1206195 and DMS-1707953.
\end{document}
\begin{document}
\title{From low-rank approximation to an efficient rational Krylov subspace method for the Lyapunov equation}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[3]{Skolkovo Institute of Science and Technology,
Novaya St.~100, Skolkovo, Odintsovsky district, 143025
Moscow Region, Russia ([email protected], [email protected])}
\footnotetext[5]{Institute of Numerical Mathematics,
Gubkina St. 8, 119333 Moscow, Russia}
\renewcommand{\thefootnote}{\arabic{footnote}}
\newcounter{Ivan}
\newcommand{\Ivan}[1]{
\refstepcounter{Ivan}{
\todo[inline,color={green!11!black!22},size=\small]{\textbf{[IVAN\theIvan]:}~#1}
}
}
\newcounter{Denis}
\newcommand{\Denis}[1]{
\refstepcounter{Denis}{
\todo[inline,color={green!11!green!22},size=\small]{\textbf{[Denis\theDenis]:}~#1}
}
}
\newcommand{\hlc}[2][yellow]{ {\sethlcolor{#1} \hl{#2}} }
\newcolumntype{C}[1]{>{\Centering}m{#1}}
\renewcommand\tabularxcolumn[1]{C{#1}}
\begin{abstract}
We propose a new method for the approximate solution of the Lyapunov
equation with rank-$1$ right-hand side, which is based on extended rational
Krylov subspace approximation with adaptively computed shifts. The
shift selection is obtained from the connection between the Lyapunov
equation, solution of systems of linear ODEs and alternating least
squares method for low-rank approximation. The numerical experiments
confirm the effectiveness of our approach.
\end{abstract}
\begin{keywords}
Lyapunov equation, rational Krylov subspace, low-rank approximation, model order reduction
\end{keywords}
\begin{AMS}
65F10, 65F30, 15A06, 93B40.
\end{AMS}
\pagestyle{myheadings} \thispagestyle{plain}
\section{Introduction}
Let $A$ be an $n \times n$ stable matrix and let $y_0$ be a vector of length $n$.
We consider the continuous-time Lyapunov equation with rank-$1$
right-hand side:
\begin{equation}\label{alr:lyap}
AX + XA^{\top} = -y_0 y^{\top}_0.
\end{equation}
For large $n$ it is impossible to store $X$, thus a low-rank
approximation of the solution is sought:
\begin{equation}\label{alr:rank1.0}
X \approx U Z U^{\top}, \quad U \in \mathbb{R}^{n \times r}, \quad
Z \in \mathbb{R}^{r \times r}.
\end{equation}
The Lyapunov equation plays a fundamental role in many application areas, such
as signal processing \textcolor{black}{\cite{hinamoto-filter-1993, sanches-med-2008}} and systems
and control theory
\textcolor{black}{\cite{benner-numlineareq-2008, li-lyapappr-1999, pomet-lyapadaptreg-1992}.}
There are many approaches for the solution of the Lyapunov equation.
Alternating direction implicit (ADI)
methods \textcolor{black}{\cite{lebedev-zolotarev-1977, lu-lyapadi-1991,
li-lrlyapadi-2002, jbilou-adiprec-2010}}
are powerful techniques that arise from solution methods for elliptic and parabolic partial
differential equations \textcolor{black}{\cite{wachspress-iterlyap-1988,
damm-genlyapadi-2008, benner-selfshifts-2013, hochbruck-preckrylov-1995}}.
Krylov subspace methods have been successful in solving linear
systems and eigenvalue problems. They utilize Arnoldi-type
\textcolor{black}{\cite{jbilou-proj-2006, stykel-krylov-2012, jaimoukha-krylov-1994}} or
Lanczos-type \textcolor{black}{\cite{saad-overview-1989, bai-krylov-2002}} algorithms to construct low-rank approximations using
Krylov subspaces. Krylov subspace methods have the advantage of
simplicity, but the convergence can be slow for ill-conditioned $A$
\textcolor{black}{\cite{simdru-lyapadapt-2009, mikkelsen-curve-2010}}.
Rational Krylov subspace methods (extended Krylov subspace method
\cite{druskin-eks-1998,sim-krylov-2007}, adaptive rational Krylov
\cite{simdru-lyapadapt-2009,druskin-adapt-2010,druskin-rksm-2011,druskin-trks-2014}, Smith method
\cite{gugercin-modsmith-2003, penzl-cyclicsmith-1999}) are often the method
of choice. Manifold-based approaches have been proposed in
\cite{vandereycken-riemlyap-2010, vandereycken-riemopt-2010} where the solution is sought directly in the
low-rank format \eqref{alr:rank1.0}. The main computational cost in such
algorithms is the solution of linear systems with
matrices of the form $A + \lambda_i I$.
A comprehensive review of the solution of linear matrix equations in general, and of the Lyapunov equation in particular, can be found in \cite{simoncini-review-2013}.
In this work we start from the Lyapunov equation and a simple method
that doubles the size of $U$ at each step using the solution of an
auxiliary Sylvester equation. \textcolor{black}{The main disadvantage of this approach is that} too many linear system solves are required at each step.
Using the rank-$1$ approximation to the correction equation, we obtain a simple formula
for the new vector. In our experiments we also found that it is a good idea to add a Krylov vector to the subspace. This increases the accuracy significantly at almost no additional cost. We compare the effectiveness of the new method with
the extended rational Krylov method \textcolor{black}{\cite{druskin-eks-1998, sim-krylov-2007}} and the adaptive rational Krylov
approach \textcolor{black}{\cite{druskin-adapt-2010, druskin-rksm-2011}} on several model examples with symmetric and non-symmetric
matrices $A$ coming from discretizations of two-dimensional
\textcolor{black}{and three-dimensional} elliptic PDEs on different grids.
\section{Minimization problem}
How do we define the best low-rank approximation to the solution of the Lyapunov equation? A natural way is to formulate the initial problem
as a minimization problem
$$
R(X) \rightarrow \min,
$$
and then restrict the minimization to the manifold of low-rank matrices. A popular choice is the residual:
\begin{equation}\label{alr:residual}
R(X) = \Vert A X + X A^{\top} + y_0 y^{\top}_0 \Vert^2_F,
\end{equation}
which is easy to compute for a low-rank matrix $X$. The disadvantage of
the functional \eqref{alr:residual} is well known: it may lead to large condition
numbers \textcolor{black}{\cite[p. 19]{laub-schur-1979}}. For the symmetric positive definite case another functional is often used:
$$
R(X) =\mathop{\mathrm{tr}}\nolimits(X A X) + \mathop{\mathrm{tr}}\nolimits(X y_0 y^{\top}_0).
$$
For the non-symmetric case we can use a different functional, which is based on the connection between the low-rank solution of the Lyapunov equation
and a low-dimensional subspace approximation to the solution of a
system of linear ODEs. Consider an ODE with the matrix $A$:
$$
\frac{dy}{dt} = Ay, \quad y(0) = y_0.
$$
It is natural to look for the solution in the low-dimensional subspace form
$$
y(t) \approx \widetilde{y}(t) = U c(t),
$$
where $U$ is an $n \times r$ orthogonal matrix and $c(t)$ is an $r \times
1$ vector. Then the columns of the \textcolor{black}{orthogonal} matrix $U$ that minimizes
\begin{equation*}
\begin{split}
\min_{U \in O(n), c(t)} \int^{\infty}_0\limits \Vert y(t) - \widetilde{y}(t)
\Vert^2dt &= \\
= \textcolor{black}{\min_{U \in O(n)} \int^{\infty}_0\limits \Vert y(t) - UU^{\top} y(t)
\Vert^2dt} &+ \textcolor{black}{\min_{U \in O(n), c(t)} \int^{\infty}_0\limits \Vert
U U^{\top} y(t) - \widetilde{y}(t) \Vert^2dt} = \\
= \min_{U \in O(n)} \int^{\infty}_0\limits \Vert y(t) - UU^{\top}
y(t) \Vert^2dt &= \textcolor{black}{\min_{U \in O(n)} \Big( \mathop{\mathrm{tr}}\nolimits X - \mathop{\mathrm{tr}}\nolimits U^{\top} X U \Big)}
\end{split}
\end{equation*}
are the eigenvectors corresponding to the largest eigenvalues of the matrix $X$ \textcolor{black}{\cite{sorensen-sylv-2002}} that solves
the Lyapunov equation \eqref{alr:lyap}. However, the computation of
\textcolor{black}{the optimal $\widehat{c}(t) = U^{\top}y(t)$} requires knowledge of the true solution $y(t)$, which
is not known. Instead, we can consider the Galerkin projection:
\begin{equation}\label{alr:odeappr}
\begin{split}
\widetilde{y} &= U c(t),\\
\frac{dc}{dt} &= U^{\top} A U c, \quad c_0 = U^{\top} y_0,
\end{split}
\end{equation}
The final approximation is then
\begin{equation}\label{alr:odefinappr}
\begin{split}
\widetilde{y} = U e^{Bt} U^{\top} y_0,
\end{split}
\end{equation}
where $B = U^{\top} A U$.
The functional to be minimized is
\begin{equation}\label{alr:fun1}
F(U) = \int^{\infty}_0\limits\Vert y - \widetilde{y} \Vert^2 dt.
\end{equation}
Note that the functional depends only on $U$. Given $U$, the
approximation to the solution of the Lyapunov equation can be recovered
from the solution of the
``small'' Lyapunov equation
\begin{equation} \label{lyap:appr}
\begin{split}
X \approx U Z U^{\top}, \quad
B Z + Z B^{\top} = -c_0 c^{\top}_0.
\end{split}
\end{equation}
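To make the projection step concrete, the following Python sketch (using NumPy and SciPy; the stable matrix $A$, the vector $y_0$ and the one-column basis $U$ in the example are illustrative placeholders) computes the approximation \eqref{lyap:appr} for a given orthonormal $U$:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def galerkin_lyapunov(A, y0, U):
    # Project onto span(U): B = U^T A U, c0 = U^T y0, then solve the small
    # Lyapunov equation B Z + Z B^T = -c0 c0^T, so that X ~ U Z U^T.
    B = U.T @ (A @ U)
    c0 = U.T @ y0
    Z = solve_continuous_lyapunov(B, -np.outer(c0, c0))
    return B, c0, Z

n = 100                                   # 1D Laplacian: stable, symmetric
A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) \
    + np.diag(np.ones(n - 1), -1)
y0 = np.ones(n)
U = (y0 / np.linalg.norm(y0)).reshape(n, 1)
B, c0, Z = galerkin_lyapunov(A, y0, U)
X_approx = U @ Z @ U.T                    # rank-1 approximation to X
\end{verbatim}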
The functional \eqref{alr:fun1} cannot be computed efficiently. However, a simple expansion of the norm gives
\begin{equation}\label{alr:FU}
F(U) = \int^{\infty}_0\limits\Vert y \Vert^2 dt - 2 \int^{\infty}_0\limits\langle y, \widetilde{y} \rangle dt + \int^{\infty}_0\limits \Vert \widetilde{y} \Vert^2 dt.
\end{equation}
\textcolor{black}{
Let us represent the functional in the following form}
\begin{equation*}
F(U) = F_1(U) - 2 F_2(U),
\end{equation*}
where
$$
F_1(U) =\int^{\infty}_0\limits (\Vert y \Vert^2 - \Vert \widetilde{y}
\Vert^2) dt , \quad F_2(U) = \int^{\infty}_0\limits(\langle y,
\widetilde{y} \rangle - \Vert \widetilde{y} \Vert^2) dt.
$$
\begin{lemma} \textcolor{black}{Assume that the matrices $A$ and $B$ are stable.}
Then the functionals $F_1(U), F_2(U)$ can be calculated as follows:
\begin{equation}\label{f12}
\begin{split}
F_1(U) &= \mathop{\mathrm{tr}}\nolimits X - \mathop{\mathrm{tr}}\nolimits Z,\\
F_2(U) &= \mathop{\mathrm{tr}}\nolimits U^{\top} (P - U Z),
\end{split}
\end{equation}
where $P$ is the solution of the Sylvester equation and $Z$ is the solution of
the Lyapunov equation:
\begin{equation}\label{auxiliary1}
\begin{split}
A P + P B^{\top} &= -y_0 c_0^{\top},\\
B Z + Z B^{\top} &= -c_0 c_0^{\top}.
\end{split}
\end{equation}
\end{lemma}
{\it Proof: \\}
It is easy to see that
\begin{equation*}
\begin{split}
-\int^{\infty}_0\limits\langle y, \widetilde{y} \rangle dt
&= -\mathop{\mathrm{tr}}\nolimits \int^{\infty}_0\limits e^{At}y_0 c_0^{\top}
e^{B^{\top}t}U^{\top} dt =\left. -\mathop{\mathrm{tr}}\nolimits \left(e^{At}P e^{B^{\top}t}
U^{\top} \right)\right|^{\infty}_0 = \mathop{\mathrm{tr}}\nolimits U^{\top} P,\\
\int^{\infty}_0\limits \Vert \widetilde{y} \Vert^2 dt &=
\mathop{\mathrm{tr}}\nolimits \int^{\infty}_0\limits U e^{Bt}c_0 c_0^{\top}
e^{B^{\top}t}U^{\top} dt = -\mathop{\mathrm{tr}}\nolimits \left. \left(e^{Bt}Z
e^{B^{\top}t}\right)\right|^{\infty}_0 = \mathop{\mathrm{tr}}\nolimits Z.
\end{split}
\end{equation*}
In the same way $\int^{\infty}_0\limits \Vert y \Vert^2 dt =
\mathop{\mathrm{tr}}\nolimits X$ and we can write
\begin{equation*}
\begin{split}
F_1(U) &= \int^{\infty}_0\limits \Big(\Vert y \Vert^2 - \Vert \widetilde{y}
\Vert^2\Big) dt = \mathop{\mathrm{tr}}\nolimits X - \mathop{\mathrm{tr}}\nolimits Z,\\
F_2(U) &= \int^{\infty}_0\limits\Big(\langle y,
\widetilde{y} \rangle - \Vert \widetilde{y} \Vert^2\Big) dt = \mathop{\mathrm{tr}}\nolimits
(U^{\top}P - Z) =\mathop{\mathrm{tr}}\nolimits U^{\top} (P - U Z). \blacksquare
\end{split}
\end{equation*}
\textcolor{black}{Note that the following integral representation of the solution of the Lyapunov
equation is a well-known fact \cite{saad-numsol-1989}:}
\begin{equation*}
\begin{split}
X = \int_0^{\infty} \limits e^{At}y_0 y_0^{\top} e^{A^\top t} dt.
\end{split}
\end{equation*}
\textcolor{black}{The Sylvester equation can be solved using a standard method
\cite[Theorem 3.1]{sorensen-sylv-2002}} since the
matrix
$B$ is $r \times r$ with $r \ll n$. We compute the Schur decomposition of $B^{\top}$ and the
equation is reduced to $r$ linear systems with the matrices $A + \lambda_i
I,
\quad i = 1, \ldots, r.$
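As an illustration of this reduction, here is a small Python sketch; a dense NumPy solver stands in for whatever sparse or preconditioned solver would be used for the shifted systems $A + \lambda_i I$ in practice:
\begin{verbatim}
import numpy as np
from scipy.linalg import schur

def solve_sylvester_schur(A, B, C, solve_shifted=None):
    # Solve A P + P B^T = C, with A of size n x n and B of size r x r,
    # via the complex Schur decomposition B^T = Q T Q^*.  Each column of
    # the transformed unknown requires one solve with A + T[j, j]*I.
    if solve_shifted is None:
        solve_shifted = lambda shift, rhs: np.linalg.solve(
            A + shift * np.eye(A.shape[0]), rhs)
    T, Q = schur(B.T, output='complex')
    Chat = C @ Q
    n, r = C.shape
    Phat = np.zeros((n, r), dtype=complex)
    for j in range(r):
        rhs = Chat[:, j] - Phat[:, :j] @ T[:j, j]
        Phat[:, j] = solve_shifted(T[j, j], rhs)
    return (Phat @ Q.conj().T).real       # P is real for real data

# e.g. P = solve_sylvester_schur(A, B, -np.outer(y0, c0))
\end{verbatim}
The diagonal entries $T[j,j]$ are exactly the eigenvalues $\lambda_i$ of $B$, so the cost is $r$ shifted linear solves, as stated above.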
\begin{lemma} \textcolor{black}{Assume that the matrices $A$ and $B$ are stable.}
The gradient of $F(U)$ can be computed as:
\begin{equation}\label{f:grad}
\begin{split}
\grad F(U) = -2 P + 2 y_0 (
c_0^{\top} Z_I - y_0^{\top} P_U) &+ 2 A U (Z Z_I - P^{\top} P_U) + \\
&+ 2 A^{\top} U ( Z_I Z - P_U^{\top} P),
\end{split}
\end{equation}
where $P, Z$ are defined by \eqref{auxiliary1} and
\begin{equation}\label{auxiliary2}
\begin{split}
A^{\top} P_U + P_U B &= -U, \\
B^{\top} Z_I + Z_I B &= -I_r.
\end{split}
\end{equation}
\end{lemma}
{\it Proof: \\}
The variation of $Z$ can be expressed as the solution of a Lyapunov equation with
another right-hand side:
$$ B\delta Z + \delta Z B^{\top} = - \delta c_0 c_0^{\top} - c_0
\delta c_0^{\top} - \delta B Z - Z \delta B^{\top}
$$
Using the well-known integral form of the solution of the Lyapunov equation we get that:
\begin{equation*}
\begin{split}
-\mathop{\mathrm{tr}}\nolimits\delta Z &= -\mathop{\mathrm{tr}}\nolimits \int_0^{\infty} e^{Bt} (\delta c_0 c_0^{\top} + c_0
\delta c_0^{\top} + \delta B Z + Z \delta B^{\top}) e^{B^{\top} t} dt
= \\
&= -\mathop{\mathrm{tr}}\nolimits (\int_0^{\infty} e^{B^{\top} t} I e^{Bt} dt) (\delta c_0 c_0^{\top} + c_0
\delta c_0^{\top} + \delta B Z + Z \delta B^{\top}) = \\
&= -\mathop{\mathrm{tr}}\nolimits Z_I (\delta c_0 c_0^{\top} + c_0
\delta c_0^{\top} + \delta B Z + Z \delta B^{\top}) = \\
&= -2 \mathop{\mathrm{tr}}\nolimits\delta U^{\top} (y_0 c_0^{\top} Z_I + A U Z Z_I + A^{\top} U Z_IZ).
\end{split}
\end{equation*}
Similarly for $P$:
$$ A \delta P + \delta P B^{\top} = -y_0 \delta c_0^{\top} - P \delta
B^{\top}, $$
therefore,
\begin{equation*}
\begin{split}
\delta \mathop{\mathrm{tr}}\nolimits (U^{\top} P) &=\mathop{\mathrm{tr}}\nolimits (\delta U^{\top} P + U^{\top}
\delta P ) = \\
&= \mathop{\mathrm{tr}}\nolimits (\delta U^{\top} P + U^{\top} \int_0^{\infty} e^{At} (y_0 \delta c_0^{\top} + P \delta
B^{\top}) e^{B^{\top}t}dt) = \\
&=\mathop{\mathrm{tr}}\nolimits \delta U^{\top} P + \mathop{\mathrm{tr}}\nolimits (\int_0^{\infty} e^{B^{\top}t} U^{\top} e^{At} dt)(y_0 \delta c_0^{\top} + P \delta
B^{\top}) = \\
&= \mathop{\mathrm{tr}}\nolimits \delta U^{\top} P + \mathop{\mathrm{tr}}\nolimits P_U^{\top} (y_0 \delta c_0^{\top}
+ P \delta B^{\top}) = \\
&=\mathop{\mathrm{tr}}\nolimits \delta U^{\top} (P + y_0 y_0^{\top} P_U + A^{\top} U P_U^{\top} P + A U P^{\top} P_U).
\end{split}
\end{equation*}
Finally,
\begin{equation*}
\begin{split}
\delta F(U) = \mathop{\mathrm{tr}}\nolimits \delta U^{\top} (\grad F) = \mathop{\mathrm{tr}}\nolimits
\delta(-2 U^{\top} P &+ Z),\\
\grad F(U) = -2 P + 2 y_0 (
c_0^{\top} Z_I - y_0^{\top} P_U) &+ 2 A U (Z Z_I - P^{\top} P_U) + \\
&+ 2 A^{\top} U ( Z_I Z - P_U^{\top} P). \blacksquare
\end{split}
\end{equation*}
Now denote by $R_1(U)$ and $R_2(U)$ the residuals of the Lyapunov and
Sylvester equations:
\begin{equation}\label{residual}
\begin{split}
R_1(U) &= \Vert A (U Z U^{\top}) + (U Z U^{\top}) A^{\top} + y_0
y_0^{\top}\Vert, \\
R_2(U) &= \Vert A (UZ) + (UZ) B^{\top} + y_0 c_0^{\top} \Vert.
\end{split}
\end{equation}
\begin{lemma}\label{alr:lemres}
Assume that \textcolor{black}{the matrices $A$ and $B$ are stable and} $y_0$ lies in the column space of $U$.
Then the following equality holds:
\begin{equation}\label{residual:equality}
R_1(U) = \sqrt{2} R_2(U)
= \sqrt{2} \Vert (A U - U B) Z\Vert .
\end{equation}
\end{lemma}
{\it Proof: \\}
Since $Z$ is the \textcolor{black}{unique} solution of the Lyapunov equation, we get that
\begin{equation*}
\begin{split}
R_1(U)^2 &= \Vert A U Z U^{\top} + U Z U^{\top} A^{\top} + y_0
y_0^{\top}\Vert^2 = \\
&=\Vert y_0 y_0^{\top} - U U^{\top} y_0 y_0^{\top} U
U^{\top} + (A U - U B)Z U^{\top} + U Z (A U - U B)^{\top} \Vert^2 = \\
&= \Vert (A U - U B)Z U^{\top} \Vert^2 + \Vert U Z (A U - U B)^{\top}
\Vert^2 = 2 \Vert (A U - U B)Z\Vert^2.
\end{split}
\end{equation*}
We can use the same trick for the residual of the Sylvester equation:
\begin{equation*}
\begin{split}
R_2(U)^2 &= \Vert A (UZ) + (UZ) B^{\top} + y_0 c_0^{\top} \Vert^2 =\\
&= \Vert (I - U U^{\top})y_0 c_0^{\top} + (A U - U B) Z\Vert^2 = \Vert
(A U - U B) Z\Vert^2. \blacksquare
\end{split}
\end{equation*}
\textcolor{black}{Lemma \ref{alr:lemres} is well known for the special case when $U$ is the basis of a (rational) Krylov
subspace \cite[Theorem 2.1]{jaimoukha-krylov-1994}.}
Lemma \ref{alr:lemres} is valid if $y_0 \in \mathop{\mathrm{span}}\nolimits U$ and we will always make sure that $y_0 =
U U^{\top} y_0$.
The next lemma shows that if the residual of the Lyapunov equation goes to zero,
so does the value of the functional $F(U)$.
\begin{lemma}\label{alr:lemmabound}
Assume that \textcolor{black}{the matrices $A$ and $B$ are stable and} $y_0$ lies in the column space of $U$.
Then $$ F(U) \leq C R_1(U) $$
with a constant
\textcolor{black}{
$$C = \Vert \Big(I \otimes A + A \otimes I \Big)^{-1}\Vert_F +
\sqrt{2r} \Vert \Big(I \otimes A + B \otimes I \Big)^{-1}\Vert_F .$$
}
\end{lemma}
{\it Proof: \\}
The matrix $X - U Z U^{\top}$ satisfies the Lyapunov equation
\begin{equation*}
A (X - UZU^{\top}) + (X - UZU^{\top}) A^{\top} = -(A U - U B)Z U^{\top}
- U Z (A U - U B)^{\top},
\end{equation*}
therefore
$$
(I \otimes A + A \otimes I) \mathop{\mathrm{vec}}\nolimits \Big(X - UZU^{\top}\Big) = -\mathop{\mathrm{vec}}\nolimits \Big( (A U - U B)Z U^{\top}
+ U Z (A U - U B)^{\top} \Big),
$$
and
$$
\Vert X - U Z U^{\top} \Vert_F \leq \Vert \Big(I \otimes A + A \otimes I \Big)^{-1} \Vert_F ~R_1(U)
$$
Thus,
$$| \mathop{\mathrm{tr}}\nolimits X - \mathop{\mathrm{tr}}\nolimits Z | \leq C_1 R_1(U).$$
In the same way,
$$
\Vert P - U Z \Vert_F \leq \Vert \Big(I \otimes A + B \otimes I \Big)^{-1}\Vert_F~R_2(U) = C_2 R_2(U),
$$
therefore
$$
|F_2(U)| = |\mathop{\mathrm{tr}}\nolimits U^{\top} (P - UZ)| \leq \Vert U \Vert_F ~\Vert P - U Z \Vert_F \leq \sqrt{r} C_2 R_2(U).
$$
Finally,
$$
F(U) \leq |F_1(U)| + 2|F_2(U)| \leq (C_1 + \sqrt{2r} C_2) R_1(U)
\textcolor{black}{ = C R_1(U)}.
$$
$\blacksquare$
Lemma~\ref{alr:lemmabound} shows that $R_1(U)$, the residual of the Lyapunov equation, is a viable
error bound for our functional $F(U)$.
\begin{lemma}\label{alr:funcbound}
\textcolor{black}{
Assume that the matrices $A$ and $B$ are stable, $y_0$ lies in the column
space of $U$, and the matrices $P$ and $Z$ are the solutions of the Sylvester and
the small Lyapunov equations \eqref{auxiliary1}, respectively. Then $$\Vert X - P U^{\top} - U P^{\top} +
U Z U ^{\top} \Vert_F \leq F(U).$$}
\end{lemma}
{\it Proof: \\}
\textcolor{black}{
Let us consider the matrix
$$M = \int_0^{\infty}\limits \Big(y(t)-\widetilde y(t)\Big)
\Big(y^{\top}(t)-\widetilde y^{\top}(t)\Big) dt.$$
We use exponential representations of the Lyapunov and the Sylvester
equations solutions \eqref{auxiliary1}:
\begin{equation*}
\begin{split}
M &= \int_0^{\infty}\limits y(t) y^{\top}(t) dt
-\int_0^{\infty}\limits\widetilde y(t) y^{\top}(t) dt
-\int_0^{\infty}\limits y(t) \widetilde y^{\top}(t) dt
+ \int_0^{\infty}\limits \widetilde y(t) \widetilde y^{\top}(t) dt =
\\
&= X - P U^{\top} - U P^{\top} +
U Z U ^{\top}.
\end{split}
\end{equation*}
The matrix $M$ is symmetric and non-negative definite and therefore:
\begin{equation}\label{alr:fbound1}
\begin{split}
\Vert X - P U^{\top} - U P^{\top} +
U Z U ^{\top} \Vert_F = \Vert M \Vert_F \leq \mathop{\mathrm{tr}}\nolimits M &= F(U)
\end{split}
\end{equation}
}
$\blacksquare$
\textcolor{black}{
We can also compare the functional $F(U)$, based on the Galerkin projection, with the
functional based on the optimal projection $\widehat{y} = U U^{\top} y(t)$:
\begin{equation}\label{alr:fbound2}
\begin{split}
\int_0^{\infty}\limits \Vert y(t)-\widehat y(t) \Vert^2 dt &=
\mathop{\mathrm{tr}}\nolimits (I - U U^{\top}) X (I - U U^{\top}) = \\
&= \mathop{\mathrm{tr}}\nolimits (I - U U^{\top}) M (I - U U^{\top}) \leq \mathop{\mathrm{tr}}\nolimits M = F(U).
\end{split}
\end{equation}
Lemma~\ref{alr:funcbound} shows that if $F(U)$ is small, the solution $X$ can be well-approximated
by a rank-$2r$ matrix.}
\section{Methods for basis enrichment}
Notice that the column vectors of the gradient are linear combinations of
the column vectors of $P$, $AU$, $A^{\top}U$ and $y_0$. We need a method to
enlarge the basis $U$, so the first idea is to use the matrix $P$ to extend the basis.
Note that the matrix $UZ$ can also be considered as an approximation to the solution of
the Sylvester equation. So to enrich the basis we will use $P_1 = P - UZ$
instead; the matrix $P_1$ satisfies the equation:
\begin{equation}\label{sylvester:lr}
\begin{split}
& A(P - U Z) + (P - U Z) B^{\top} = -y_0 c_0^{\top} - A U Z - U
ZB^{\top} = \\
&= - (I - U U^{\top}) y_0 c_0^{\top} - (A U - U B) Z, \\
&A P_1 + P_1 B^{\top} = - (A U - U B) Z,
\end{split}
\end{equation}
where we have used that $y_0 = U U^{\top} y_0$.\\
The method is summarized in Algorithm \ref{twicer:int}.
The main disadvantage of Algorithm~\ref{twicer:int} is that the
computational cost grows dramatically at each step. If $U$ has $r$ columns,
the next step will require $r$ solutions of $n\times n$ linear systems with
matrices of the form $A + \lambda_i I$.
\begin{algorithm}[H] \label{twicer:int}
\DontPrintSemicolon
\SetKwComment{tcp}{$\triangleright$~}{}
\KwData{Input matrix $A \in \mathbb{R}^{n \times n}$, vector $y_0 \in \mathbb{R}^{n \times 1}$, maximal rank $r_{\max}$, accuracy parameter $\varepsilon$.}
\KwResult{Orthonormal matrix $U \in \mathbb{R}^{n \times r}$.}
\Begin{
\nl set $U = \frac{y_0}{\Vert y_0 \Vert}$ \tcp*[r]{Initialization}
\nl \For{$\rank U \leq r_{\max}$}
{
\nl Compute $c_0 = U^{\top}y_0 ,\quad B = U^{\top}AU$
\nl Compute $Z$ as Lyapunov equation solution: $B Z + Z B^{\top} = -
c_0 c_0^{\top}$
\nl Compute error estimate $\delta = \Vert (A U - U B) Z \Vert$
\nl \If{$\delta\leq \varepsilon$}
{
\nl Stop
}
\nl Compute $P_1$ as Sylvester equation solution: $A P_1 + P_1 B^{\top} = -(A U - U B) Z$
\nl Update $U = \mathrm{orth}[U, P_1]$
}
}
\caption{The doubling method}
\end{algorithm}
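For completeness, the following Python sketch mirrors Algorithm~\ref{twicer:int}; it is purely illustrative and reuses the routines \texttt{galerkin\_lyapunov} and \texttt{solve\_sylvester\_schur} sketched above:
\begin{verbatim}
import numpy as np

def doubling_method(A, y0, r_max=40, eps=1e-8):
    # Doubling method: enlarge U with the Sylvester correction P1 until the
    # error estimate delta = ||(AU - UB) Z|| drops below eps.
    U = (y0 / np.linalg.norm(y0)).reshape(-1, 1)
    while U.shape[1] <= r_max:
        B, c0, Z = galerkin_lyapunov(A, y0, U)
        residual_factor = (A @ U - U @ B) @ Z
        delta = np.linalg.norm(residual_factor)
        if delta <= eps:
            break
        P1 = solve_sylvester_schur(A, B, -residual_factor)
        U, _ = np.linalg.qr(np.hstack([U, P1]))   # U = orth[U, P1]
    return U
\end{verbatim}
The final approximation $X \approx U Z U^{\top}$ is then recovered by one more projection as in \eqref{lyap:appr}.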
\textcolor{black}{
\begin{remark}
Algorithm \ref{twicer:int}
is similar to the IRKA method \cite{beattie-irkah2-2007,
gugercin-irka-2005, gugercin-irkareduction-2008,
flagg-irkaconv-2012}. The IRKA method starts from initial orthogonal matrices
$V_0, W_0$ with the given rank $r$ and replaces the matrices $V_{i-1}$ and
$V_{i-1}$ by the new ones $V_i$ and $W_i$ that are the
union of the rational
Krylov subspaces $(A + s_i I)^{-1}b$ and $(A^{\top} + s_i I)^{-1}c$, correspondingly, in every step.
The shifts $s_i$ are obtained as eigenvalues
of the matrix $V_{i-1}^{\top} A W_{i-1}$. The main difference of Algorithm
\ref{twicer:int} algorithm from IRKA is that IRKA is usually used with the fixed rank.
\end{remark}
}
To reduce the number of linear solves required by the algorithm, we propose
two improvements. The first is to add the last Krylov vector to the subspace;
in this case, as we will show, the residual always has rank $1$.
The second improvement is to add only one new vector at a time; to do
so, we use a simple rank-$1$ approximation to $P_1$.
\subsection{Adding a Krylov vector and a rational Krylov vector to the
subspace}
\textcolor{black}{
In the following we show that, under a special basis enrichment
strategy, the rank-$1$ approximation to $P_1$ can be replaced by adding
one vector of the form $(A + sI)^{-1} w$ to the subspace.
In this case, as already described in \cite{sim-krylov-2007, simdru-lyapadapt-2009}, adding a Krylov vector preserves the
rank-$1$ structure of the residual of equation \eqref{sylvester:lr}.
The next lemma generalizes the well-known
fact that the residual in the Arnoldi iteration has rank $1$
\cite{saad-methodsbook-2003}.
}
\begin{lemma}\label{lyap:1rkres}
Let $A$ be an $n\times n$ \textcolor{black}{stable} matrix and let $y_0$ be a vector of size $n$.
Assume that an $n\times r$ orthogonal matrix $U$, vectors $w, v$
of size $n$ and a vector $q$ of size $r$ satisfy the following equations:
$$ (I-UU^{\top}) A U = w q^{\top}, \quad v = (A + s I)^{-1}w.$$
Denote by $U_1$ and $U_2$ orthonormal bases of the spans of the columns of the matrices $\left[U,
v\right]$ and $\left[U, v, w\right]$, respectively. Then there exists a vector $\widehat{q}$ such that
$$ \left(I-U_2U_2^{\top}\right) A U_2 = \left(I-U_2U_2^{\top}\right)Aw \widehat{q} ^{\top}.$$
\end{lemma}
{\it Proof: \\}
Since $(I-UU^{\top})AU = w q^{\top}$ and both the columns of $U$ and the vector $w$ lie in the span of the columns of $U_2$, we get that $(I-U_2U_2^{\top})AU = 0.$\\
On the other hand, we have $$ (I-U_2U_2^{\top})Av = (I-U_2U_2^{\top})((A+sI)v -s v) = (I-U_2U_2^{\top})(w - s v) = 0.$$
Therefore, since every column of $U_2$ is a linear combination of the columns of $U$, of $v$ and of $w$, we obtain $(I-U_2U_2^{\top})A U_2 = (I-U_2U_2^{\top})Aw\, \widehat{q}^{\top}$, where $\widehat{q}$ collects the coefficients of $w$ in these combinations.
\textcolor{black}{Note that if $U_1$ and $U_2$ are obtained by using the Gram--Schmidt process,
then $(I-U_2U_2^{\top})A U_1 = 0$ and the matrix $(I-U_2U_2^{\top})A U_2$
has only one non-zero column, namely the last one.}
$\blacksquare$\\
\textcolor{black}{Similar statements about the rank structure of the residual
are well known; see \cite{jaimoukha-krylov-1994, sim-krylov-2007}.}
Lemma \ref{lyap:1rkres} shows that if the approximation algorithm starts from $U_0 =
\frac{y_0}{\Vert y_0 \Vert}$ and at each step adds a vector from the Krylov
subspace and a corresponding vector from the rational Krylov subspace,
then the residual matrix $(AU - UB)Z = (I - UU^{\top})AUZ$
has rank $1$ at every step.
This is the main benefit of using the
Krylov subspaces in our approach. Moreover, the residual has the form
$\widehat{w} \widehat{q}^{\top}$, where $\widehat{w} = (I - U_1 U^{\top}_1) A w$
is the next \emph{Krylov vector}. This fact is important and will be
used later.
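The statement of Lemma~\ref{lyap:1rkres} can also be observed numerically. The sketch below (illustrative only) builds a Krylov basis $U$, for which the residual $(I-UU^{\top})AU$ has rank $1$, appends the vectors $v=(A+sI)^{-1}w$ and $w$, and checks that the enlarged residual still has numerical rank $1$.
\begin{verbatim}
import numpy as np

np.random.seed(1)
n, r, s = 40, 4, -1.0
M = np.random.randn(n, n)
A = -(M @ M.T / n + np.eye(n))           # stable test matrix
y0 = np.random.randn(n, 1)

# orthonormal basis of the Krylov subspace K_r(A, y0): residual has rank 1
K = [y0 / np.linalg.norm(y0)]
for _ in range(r - 1):
    nxt = A @ K[-1]
    K.append(nxt / np.linalg.norm(nxt))
U, _ = np.linalg.qr(np.hstack(K))

I = np.eye(n)
R = (I - U @ U.T) @ A @ U                # only the last column is non-zero
w = R[:, [-1]]
v = np.linalg.solve(A + s * I, w)        # rational Krylov vector
U2, _ = np.linalg.qr(np.hstack([U, v, w]))
R2 = (I - U2 @ U2.T) @ A @ U2
print(np.linalg.matrix_rank(R, tol=1e-8),
      np.linalg.matrix_rank(R2, tol=1e-8))   # both print 1
\end{verbatim}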
\subsection{Rank-$1$ approximation to the correction equation}
Suppose that the matrix $U$ has been constructed by the sequential addition of
rational Krylov and Krylov vectors to the subspace, and that the equation for $P_1$ has the form \eqref{sylvester:lr}.
Note that, due to the equality
$$
(A U - UB) = (I - UU^{\top}) AU,
$$
the matrix $(AU - UB) = w q^{\top}$ has rank $1$; moreover, since the new vectors are always
appended as the rightmost columns, by the remark at the end of the proof of
Lemma~\ref{lyap:1rkres} only its last column is non-zero. Consequently,
$$
(AU - UB) Z = w z^{\top},
$$
where $z^{\top}$ is the last row of the matrix $Z$.
Therefore, the equation for $P_1$ takes the form
\begin{equation}\label{alr:rank1corr}
AP_1 + P_1 B^{\top} = -(AU - UB) Z = -w z^{\top}.
\end{equation}
The last step from the doubling method to the final algorithm is to find a
rank-$1$ approximation to the solution of \eqref{alr:rank1corr}. If $U$ is known,
we apply one step of an alternating iteration, looking for the solution
in the form $P_1 \approx v q^{\top}$, where $q = \frac{z}{\Vert z \Vert}$ is the
normalized last row of the matrix $Z$. \textcolor{black}{This choice of $q$ is natural
due to the right-hand side of \eqref{alr:rank1corr}: $-(AU - UB) Z = -\Vert z \Vert\, w q^{\top}$, which is proportional to $w q^{\top}$.}
The Galerkin condition for $v$ leads to the equation
\begin{equation*}
\left(A + (q^{\top} B^{\top} q) ~I\right) v= w.
\end{equation*}
\textcolor{black}{
This is the main formula for the next shift.}
Due to the simple rank-$1$ structure of the residual, its norm can be
computed efficiently as
$$R_1(U) = \sqrt{2} \Vert w \Vert ~\Vert z \Vert. $$
The final algorithm, which we call the \emph{alternating low-rank (ALR) method}, is
presented in Algorithm \ref{al:int}. The main computational cost per iteration is the solution of one shifted linear system.
\begin{algorithm}[H] \label{al:int}
\DontPrintSemicolon
\SetKwComment{tcp}{$\triangleright$~}{}
\KwData{Input matrix $A \in \mathbb{R}^{n \times n}$, vector $y_0 \in \mathbb{R}^{n \times 1}$, maximal rank $r_{\max}$, accuracy parameter $\varepsilon$.}
\KwResult{Orthonormal matrix $U \in \mathbb{R}^{n \times r}$.}
\Begin{
\nl set $U = \frac{y_0}{\Vert y_0 \Vert}, w_0 = \frac{y_0}{\Vert y_0 \Vert}$ \tcp*[r]{Initialization}
\nl \For{$ \rank U \leq r_{\max}$}
{
\nl Compute $w_k = (I_n - UU^{\top})Aw_{k-1}$
\nl Compute $c_0 = U^{\top}y_0 ,\quad B = U^{\top}AU$
\nl Compute $Z$ as Lyapunov equation solution: $B Z + Z B^{\top} = -c_0 c_0^{\top}$
\nl Compute $z$ as the last row of the matrix $Z$.
\nl Compute error estimate $\delta = \Vert w_k\Vert ~\Vert z\Vert$
\nl \If{$\delta \leq \varepsilon$}
{
\nl Stop
}
\nl Compute $q = \frac{z}{\Vert z \Vert}$ and the shift $s = q^{\top} B q$
\nl Compute $v_k = (A + s I)^{-1}w_k$
\nl Update $U := \mathrm{orth}[U, v_k, w_k].$
}
}
\caption{The alternating low-rank (ALR) method}
\end{algorithm}
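A direct Python transcription of Algorithm~\ref{al:int} reads as follows. This is a simplified dense prototype; the implementation referenced in the next section works with sparse matrices.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, orth

def alr(A, y0, r_max, eps):
    """ALR method (sketch of Algorithm 2); y0 is an n-by-1 column vector."""
    n = A.shape[0]
    U = y0 / np.linalg.norm(y0)
    w = U.copy()                                   # w_0 = y_0 / ||y_0||
    while U.shape[1] <= r_max:
        w = (np.eye(n) - U @ U.T) @ (A @ w)        # next Krylov vector
        c0, B = U.T @ y0, U.T @ A @ U
        Z = solve_continuous_lyapunov(B, -c0 @ c0.T)
        z = Z[-1, :]                               # last row of Z
        if np.linalg.norm(w) * np.linalg.norm(z) <= eps:
            break
        q = z / np.linalg.norm(z)
        s = q @ B @ q                              # next shift
        v = np.linalg.solve(A + s * np.eye(n), w)  # rational Krylov vector
        U = orth(np.hstack([U, v, w]))
    return U
\end{verbatim}
In this naive form the small matrices $c_0$ and $B$ are recomputed from scratch at every step; the dominant cost per iteration is nevertheless the solve with the shifted matrix $A+sI$.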
\textcolor{black}{
\textbf{Remark.} The restriction to rank-$1$ right-hand sides is not essential, because there
are several ways to generalize the ALR algorithm to the case of arbitrary rank.
We can either use a rank-$1$ approximation to the solution of the Sylvester equation (and obtain one new rational Krylov vector at a time),
or use a block version of the ALR algorithm. We plan to compare these
approaches in future work.
}
\section{Numerical experiments}
We have implemented the ALR method in Python using SciPy and NumPy
packages available in the Anaconda Python distribution. The
implementation is available online at
\url{https://github.com/dkolesnikov/rkm_lyap}. The
matrices, Python code and IPython notebooks which reproduce all the figures in this work are available at \url{https://github.com/dkolesnikov/alr-paper}, where the .mat files with test matrices and vectors can be found as well.
We have compared the ALR method with two methods that have publicly
available \textcolor{black}{MATLAB implementations. We have ported all the methods to Python for a fair comparison,
and these implementations are also available online}.
The first method uses the extended Krylov subspace approach proposed
in \cite{druskin-eks-1998}.
Its main idea is to use as the approximation basis the \emph{extended Krylov
subspace} of the form
$$\mathop{\mathrm{span}}\nolimits \Big(A^{-k} y_0, \ldots, y_0, \ldots,
A^l y_0 \Big).$$
Note that the residual in this approach also has rank $1$
and can be computed cheaply. This approach was implemented as the
Krylov plus Inverted Krylov algorithm (hereafter KPIK) in
\cite{sim-krylov-2007}, where a convergence estimate was also obtained.
The second approach is the Rational Krylov Subspace Method (RKSM) which was
proposed in \cite{druskin-adapt-2010}. Its main idea is to compute
vectors step by step from the rational Krylov subspaces
$$\mathop{\mathrm{span}}\nolimits \Big((A + s_i I)^{-1} y_0, \quad i = 1, \ldots\Big).$$
The shifts $s_i$ are selected by a
special procedure. There are different algorithms to compute the shifts
(the method proposed in this paper falls into this class; there is also
a recent algorithm \cite{druskin-trks-2014} based on tangential interpolation).
We use the RKSM method described in
\cite{druskin-rksm-2011}, which has a publicly available implementation.
The MATLAB code of both methods can be downloaded
from \url{http://www.dm.unibo.it/~simoncin/software.html}.
Note that it is not entirely fair to compare the efficiency
of ALR and KPIK
with RKSM. The first two methods use vectors from the Krylov subspace and have
approximation subspaces of size $2r + 1$ after $r$ iteration steps,
whereas the ``pure'' RKSM method has a basis of size $(r+1)$ after $r$ iterations.
We therefore also tried to ``extend'' the RKSM method by adding Krylov vectors;
we call this extended approach \emph{ERKSM}.
\subsection{Efficient implementation}
\textcolor{black}
{
Note that the KPIK method has an important feature: it solves linear systems only
with the matrix $A$ at every step of the algorithm, so a precomputed factorization of $A$ can significantly
reduce the complexity. The other methods (ALR, RKSM, ERKSM) solve linear systems with different
matrices, so this approach is not applicable to them.
}
\textcolor{black}
{
Instead, we propose to use algebraic multigrid to speed up the solution of the linear systems
and to reuse the same multigrid hierarchy for different shifted systems.
Algebraic multigrid (AMG) is a method for the solution of linear systems that is based on multigrid principles
but requires no explicit knowledge of the problem geometry.
AMG determines coarse grids, intergrid
transfer operators, and coarse-grid equations
based solely on the matrix entries.
Denote by $\mathbb{R}^n$ and $\mathbb{R}^{n_c}$ the vector spaces that
correspond to the fine and the coarse grids, respectively. Interpolation
(prolongation) maps the coarse grid to the
fine grid and is represented by the $n \times n_c$ matrix
$P_c : \mathbb{R}^{n_c} \to \mathbb{R}^{n}$.
There are also a few options for defining the coarse system $A_c$; the most common
approach is to use the Galerkin projection $A_c = P_c^{\top} A P_c.$
}
\textcolor{black}
{
Our main idea is to use the algebraic multigrid method with the Galerkin
projection to construct a fast solver for the ``shifted'' linear systems with matrices $(A
+ s I_n)$. Note that if the transfer operators are chosen to be the same for all shifts $s$, then
the coarse matrix of the shifted system is obtained by simply ``shifting'' the original coarse matrix:
$P_c^{\top} (A + s I_{n}) P_c = A_c + s\, P_c^{\top} P_c$, where the Gram matrix $P_c^{\top} P_c$ is computed only once.
}
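The following sketch (illustrative only; a random sparse matrix stands in for $A$ and a random matrix \texttt{Pc} for the fixed prolongation) verifies that, once the Galerkin coarse operator and the Gram matrix $P_c^{\top}P_c$ are precomputed, the coarse operator of any shifted system is available without touching the fine-grid matrix again.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

n, nc = 64, 16
A = sp.random(n, n, density=0.1, random_state=0, format="csr")
A = -(A @ A.T + sp.identity(n))          # stable fine-grid test matrix
Pc = sp.random(n, nc, density=0.2, random_state=1, format="csr")

Ac = Pc.T @ A @ Pc                        # Galerkin coarse operator, built once
G = Pc.T @ Pc                             # Gram matrix, built once

s = -0.7                                  # a shift with negative real part
Ac_shifted = Pc.T @ (A + s * sp.identity(n)) @ Pc
err = abs((Ac_shifted - (Ac + s * G)).toarray()).max()
print(err)                                # zero up to rounding
\end{verbatim}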
\textcolor{black}
{
We use the Python implementation of AMG \cite{PyAMG-2}
to construct the coarse grid hierarchy for the matrix $A$ once and then reuse this hierarchy for
the shifted linear systems. This trick is possible because
the matrix $A$ is stable and the shifts always have negative real parts. In our numerical
experiments we found that this approach works well in both the symmetric and the non-symmetric
case.
}
\textcolor{black}
{
We have chosen $9$ methods for the comparison. The first $4$ methods are ALR,
KPIK, RKSM and ERKSM with the shifted linear systems solved by direct sparse solvers. The next $4$ methods
use the modified AMG solver described above. The last method is KPIK
with a precomputed LU factorization of $A$, which we denote by KPIK(LU).
}
\subsection{Model problem 1: Laplace2D matrix}
\textcolor{black}
{
We start our tests with symmetric problems. The first problem is the discretization of the two-dimensional
Laplace operator $$Lu = u_{xx} + u_{yy} $$ with Dirichlet boundary conditions on the unit square using the 5-point stencil.
The vector $y_0$ is obtained by discretizing the
function $$f(x, y)= e^{-(x-0.5)^2 - 1.5(y-0.7)^2}$$ on the grid. The results are
shown in Table \ref{lyaptab:laplace2d}; the column ``final time'' reports the computational time needed to reach $10^{-8}$
accuracy.
}
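For reproducibility, the test matrix and the vector $y_0$ for this problem can be generated, for instance, as follows (a sketch of the standard 5-point finite-difference construction; the grid ordering used in our stored test files may differ in details).
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def laplace2d(m):
    """Discrete 2D Laplacian (5-point stencil), unit square, Dirichlet BC."""
    h = 1.0 / (m + 1)
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2
    I = sp.identity(m)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()  # negative definite => stable

m = 64
x = np.arange(1, m + 1) / (m + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
y0 = np.exp(-(X - 0.5) ** 2 - 1.5 * (Y - 0.7) ** 2).reshape(-1, 1)
A = laplace2d(m)
\end{verbatim}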
\begin{table}[H]
\begin{minipage}{\linewidth}
\centering
\captionof{table}{Model problem 1 timings}\label{lyaptab:laplace2d}\begin{tabular}{lllllr}
\toprule
grid &method &precomp. &1 solver time &final time &it-s/rank\\
\midrule
&ALR & -- &0.0190 &0.2303 &10/21 \\
&ALR(AMG) &0.0695 &0.0498 &0.5703 &10/21 \\
&KPIK(LU) &\bf{0.0341} &\bf{0.0018} &\bf{0.1755} &15/31 \\
&KPIK(AMG) &0.0695 &0.0319 &0.6104 &15/31 \\
64$\times$64 &KPIK & -- &0.0196 &0.3654 &15/31 \\
&RKSM &0.1345 &0.0194 &1.8460 &21/22 \\
&RKSM(AMG) &0.2040 &0.0434 &2.4254 &21/22 \\
&ERKSM &0.1345 &0.0210 &2.7598 &18/37 \\
&ERKSM(AMG) &0.2040 &0.0533 &3.4273 &18/37 \\
\midrule
&ALR & -- &0.0981 &1.3168 &12/25 \\
&ALR(AMG) &0.1941 &0.1431 &2.5503 &12/25 \\
&KPIK(LU) &\bf{0.1746} &\bf{0.0066} &\bf{1.0401} &20/41 \\
&KPIK(AMG) &0.1941 &0.1018 &2.9015 &20/41 \\
128$\times$128 &KPIK & -- &0.1051 &2.7162 &20/41 \\
&RKSM &1.0598 &0.1024 &5.1593 &22/23 \\
&RKSM(AMG) &1.2539 &0.1751 &7.4457 &23/24 \\
&ERKSM &1.0598 &0.1063 &9.2354 &25/51 \\
&ERKSM(AMG) &1.2539 &0.1819 &10.5334 &23/47 \\
\midrule
&ALR & -- &0.6541 &10.7695 &15/31 \\
&ALR(AMG) &\bf{0.6764} &0.6083 &13.1072 &15/31 \\
&KPIK(LU) &1.1841 &\bf{0.0399} &\bf{9.5395} &26/53 \\
&KPIK(AMG) &\bf{0.6764} &0.4087 &17.2390 &26/53 \\
256$\times$256 &KPIK & -- &0.6728 &21.4517 &26/53 \\
&RKSM &29.7068 &0.7498 &55.8941 &27/28 \\
&RKSM(AMG) &30.3832 &0.9082 &59.5665 &26/27 \\
&ERKSM &29.7068 &0.7158 &58.3618 &24/49 \\
&ERKSM(AMG) &30.3832 &1.0387 &68.0542 &24/49 \\
\bottomrule
\end{tabular}\par
\end{minipage}
\end{table}
\textcolor{black}
{
Note that for the $256 \times 256$ grid the RKSM-based methods have a significant precomputation time, in which
the eigenvalue bounds are estimated. The KPIK(LU) method is significantly faster than the other methods on all grids.
Note also that for this problem the AMG solver is slower than the direct solver.
Another noticeable feature of the AMG solver is the significant variation of
the average linear system solution time on the $256 \times 256$ grid for different shift selection strategies.
The main reason for this variation is that solving with the
``shifted'' multigrid hierarchy may take more time for relatively large shifts.
An example of the shift distribution is presented in Figure \ref{alr:shift-distrib}.
}
\begin{figure}
\caption{Shift distributions for different methods}
\label{alr:shift-distrib}
\end{figure}
\subsection{Model problem 2: Laplace3D matrix}
\textcolor{black}
{
The second problem is taken from \cite[Example 5.3]{sim-krylov-2007};
it is the discretization of the three-dimensional
Laplace operator $$Lu = u_{xx} + u_{yy} + u_{zz}$$
with Dirichlet boundary conditions on the unit cube using the 7-point stencil.
The vector $y_0$ was taken to be the vector of all ones. The results are
shown in Table \ref{lyaptab:sim53}.
\begin{table}[h!]
\begin{minipage}{\linewidth}
\centering
\captionof{table}{Model problem 2 timings}\label{lyaptab:sim53}\begin{tabular}{lllllr}
\toprule
grid &method &precomp. &1 solver time &final time &it-s/rank\\
\midrule
&ALR & -- &0.0147 &0.0822 &5/11 \\
&ALR(AMG) &0.0712 &0.0113 &0.1286 &5/11 \\
&KPIK(LU) &0.0163 &\bf{0.0006} &\bf{0.0303} &6/13 \\
&KPIK(AMG) &0.0712 &0.0136 &0.1521 &6/13 \\
10$\times$10$\times$10 &KPIK & -- &0.0116 &0.0684 &6/13 \\
&RKSM &\bf{0.0054} &0.0102 &0.3186 &9/10 \\
&RKSM(AMG) &0.0766 &0.0099 &0.3205 &9/10 \\
&ERKSM &\bf{0.0054} &0.0142 &0.3081 &6/13 \\
&ERKSM(AMG) &0.0766 &0.0146 &0.3729 &6/13 \\
\midrule
&ALR & -- &0.6237 &3.8685 &7/15 \\
&ALR(AMG) &0.1385 &0.0993 &0.7724 &7/15 \\
&KPIK(LU) &0.8283 &\bf{0.0085} &0.9671 &8/17 \\
&KPIK(AMG) &0.1385 &0.0766 &\bf{0.7419} &8/17 \\
20$\times$20$\times$20 &KPIK & -- &0.6313 &4.4834 &8/17 \\
&RKSM &\bf{0.0608} &0.6303 &6.0856 &10/11 \\
&RKSM(AMG) &0.1993 &0.0846 &1.3038 &10/11 \\
&ERKSM &\bf{0.0608} &0.6811 &6.9582 &10/21 \\
&ERKSM(AMG) &0.1993 &0.0832 &1.5275 &9/19 \\
\midrule
&ALR & -- &8.9842 &62.6705 &8/17 \\
&ALR(AMG) &0.4265 &0.3449 &3.0680 &8/17 \\
&KPIK(LU) &19.0281 &\bf{0.0535} &19.9242 &10/21 \\
&KPIK(AMG) &0.4265 &0.2354 &\bf{2.9497} &10/21 \\
30$\times$30$\times$30 &KPIK & -- &9.1710 &82.3441 &10/21 \\
&RKSM &\bf{0.3609} &9.0977 &120.1328 &14/15 \\
&RKSM(AMG) &0.7874 &0.3443 &6.2245 &14/15 \\
&ERKSM &\bf{0.3609} &10.2733 &115.8494 &12/25 \\
&ERKSM(AMG) &0.7874 &0.3836 &6.5964 &11/23 \\
\bottomrule
\end{tabular}\par
\end{minipage}
\end{table}
}
\subsection{Model problem 3}
\textcolor{black}
{
The third problem is taken from \cite[Example 5.1]{sim-krylov-2007} and describes a
model of heat flow with convection in the given domain.
The associated differential operator is $$Lu = u_{xx} + u_{yy} - 10 x u_{x} - 1000 y u_{y}$$ on the unit square with Dirichlet boundary conditions.
The matrices $A$ are obtained from the central finite difference
discretization of the differential operator using the 5-point stencil
and are non-symmetric with complex eigenvalues.
Once again, the vector $y_0$ was
taken to be the vector of all ones.
The results are shown in Table \ref{lyaptab:sim51}.
}
\begin{table}[h!]
\begin{minipage}{\linewidth}
\centering
\captionof{table}{Model problem 3 timings}\label{lyaptab:sim51}\begin{tabular}{lllllr}
\toprule
grid &method &precomp. &1 solver time &final time &it-s/rank\\
\midrule
&ALR & -- &0.0184 &0.1834 &8/17 \\
&ALR(AMG) &0.1744 &0.0253 &0.3649 &8/17 \\
&KPIK(LU) &\bf{0.0197} &\bf{0.0017} &\bf{0.1575} &11/23 \\
&KPIK(AMG) &0.1744 &0.0284 &0.5586 &11/23 \\
64$\times$64 &KPIK & -- &0.0250 &0.3809 &11/23 \\
&RKSM &0.0452 &0.0177 &0.7313 &12/13 \\
&RKSM(AMG) &0.2196 &0.0185 &0.7349 &12/13 \\
&ERKSM &0.0452 &0.0183 &0.7154 &9/19 \\
&ERKSM(AMG) &0.2196 &0.0187 &1.0360 &10/21 \\
\midrule
&ALR & -- &0.1189 &1.2058 &10/21 \\
&ALR(AMG) &0.5329 &0.0875 &1.6560 &10/21 \\
&KPIK(LU) &\bf{0.1136} &\bf{0.0098} &\bf{0.6804} &11/23 \\
&KPIK(AMG) &0.5329 &0.1052 &2.0300 &11/23 \\
128$\times$128 &KPIK & -- &0.1278 &1.5739 &11/23 \\
&RKSM &0.6967 &0.1241 &3.0168 &13/14 \\
&RKSM(AMG) &1.2296 &0.0758 &2.7024 &13/14 \\
&ERKSM &0.6967 &0.1225 &3.5379 &12/25 \\
&ERKSM(AMG) &1.2296 &0.0698 &3.2370 &11/23 \\
\midrule
&ALR & -- &1.1270 &14.2010 &10/21 \\
&ALR(AMG) &\bf{0.7507} &0.3281 &5.1982 &10/21 \\
&KPIK(LU) &1.0185 &\bf{0.0313} &\bf{3.0908} &12/25 \\
&KPIK(AMG) &\bf{0.7507} &0.2982 &5.6122 &12/25 \\
256$\times$256 &KPIK & -- &1.4760 &18.2555 &12/25 \\
&RKSM &6.8556 &1.0914 &27.0884 &17/18 \\
&RKSM(AMG) &7.6063 &0.2696 &14.1912 &17/18 \\
&ERKSM &6.8556 &1.1761 &29.3055 &15/31 \\
&ERKSM(AMG) &7.6063 &0.2524 &15.1329 &13/27 \\
\bottomrule
\end{tabular}\par
\end{minipage}
\\
\end{table}
\textcolor{black}
{
For this problem the KPIK method with LU factorization is the fastest
on all grids.
}
\subsection{Model problem 4}
\textcolor{black}
{
The fourth problem is taken from \cite[Example 5.2]{sim-krylov-2007} and it is a three-dimensional variant of the previous problem.
The differential operator is given by
$$ Lu = u_{xx} + u_{yy} + u_{zz} - 10x u_{x} - 1000y u_{y} - u_{z}$$ with Dirichlet boundary conditions on the unit cube, discretized using the 7-point stencil.
Once again, the vector $y_0$ was taken to be the vector of all ones.
The results are shown in Table \ref{lyaptab:sim52}.
}
\begin{table}[h]
\begin{minipage}{\linewidth}
\centering
\captionof{table}{Model problem 4 timings}\label{lyaptab:sim52}\begin{tabular}{lllllr}
\toprule
grid &method &precomp. &1 solver time &final time &it-s/rank\\
\midrule
&ALR & -- &0.0117 &0.0671 &5/11 \\
&ALR(AMG) &0.0444 &0.0066 &0.0952 &5/11 \\
&KPIK(LU) &0.0181 &\bf{0.0004} &\bf{0.0386} &7/15 \\
&KPIK(AMG) &0.0444 &0.0099 &0.1206 &7/15 \\
10$\times$10$\times$10 &KPIK & -- &0.0257 &0.1760 &7/15 \\
&RKSM &\bf{0.0041} &0.0108 &0.2189 &7/8 \\
&RKSM(AMG) &0.0485 &0.0094 &0.2259 &7/8 \\
&ERKSM &\bf{0.0041} &0.0128 &0.2201 &5/11 \\
&ERKSM(AMG) &0.0485 &0.0085 &0.2276 &5/11 \\
\midrule
&ALR & -- &0.6424 &3.2881 &6/13 \\
&ALR(AMG) &0.2231 &0.0466 &\bf{0.5020} &6/13 \\
&KPIK(LU) &0.6687 &\bf{0.0086} &0.8351 &9/19 \\
&KPIK(AMG) &0.2231 &0.0436 &0.6787 &9/19 \\
20$\times$20$\times$20 &KPIK & -- &0.8533 &7.0838 &9/19 \\
&RKSM &\bf{0.0265} &0.6610 &5.5919 &9/10 \\
&RKSM(AMG) &0.2496 &0.0491 &0.9061 &9/10 \\
&ERKSM &\bf{0.0265} &0.7303 &4.8876 &7/15 \\
&ERKSM(AMG) &0.2496 &0.0541 &0.9108 &6/13 \\
\midrule
&ALR & -- &9.2135 &53.2061 &7/15 \\
&ALR(AMG) &0.9827 &0.1987 &\bf{2.3924} &7/15 \\
&KPIK(LU) &10.1743 &\bf{0.0497} &11.0392 &9/19 \\
&KPIK(AMG) &0.9827 &0.2055 &3.0282 &9/19 \\
30$\times$30$\times$30 &KPIK & -- &12.7750 &105.1641 &9/19 \\
&RKSM &\bf{0.1215} &9.3640 &85.8903 &10/11 \\
&RKSM(AMG) &1.1042 &0.1792 &3.2822 &10/11 \\
&ERKSM &\bf{0.1215} &9.5372 &58.2637 &7/15 \\
&ERKSM(AMG) &1.1042 &0.1609 &3.2703 &8/17 \\
\bottomrule
\end{tabular}\par
\end{minipage}\\
\end{table}
\textcolor{black}
{
For 3D problems the LU factorization takes too long on the finest grid, so the KPIK method with the AMG solver is faster than
KPIK(LU). The ALR(AMG) method is the fastest on the $30 \times 30 \times 30$ grid.
}
\section{Conclusions and future work}
\textcolor{black}
{
The existing methods for the low-rank approximation of the solution of Lyapunov equations perform
quite well on different problems, provided that an efficient implementation of the linear system solver with shifted matrices is available.
The ALR method proposed in this paper produces different shifts than the RKSM solver: the shifts computed by the ALR method
are less spread out, which makes it possible to reuse the multigrid hierarchy more efficiently; we consider this the most
promising feature of our method (while the number of iterations is similar to that of the RKSM/ERKSM methods).
}
\textcolor{black}
{
In future work we plan to investigate the properties of the proposed approach. First of all,
we would like to prove the convergence of the method. It would also be very interesting to generalize it to
rank-$r$ right-hand sides and to extend it to other matrix functions. The effect of inexact solvers on the convergence
of the ALR method also needs to be studied.
}
\end{document}
\begin{document}
\newcommand{{\partial}isplaystyle}{{\partial}isplaystyle}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb R}n}{{\mathbb R}^n}
\newcommand{{ \mathcal H}}{{ \mathcal H}}
\newcommand{{{\mathbf 1}}}{{{\mathbf 1}}}
\newcommand{{ l^1S}}{{ l^1S}}
\renewcommand{{\partial}ot{H}}{{\partial}ot{H}}
\newcommand{\tilde{S}}{\tilde{S}}
\newcommand{\tilde{T}}{\tilde{T}}
\newcommand{\tilde{W}}{\tilde{W}}
\newcommand{\tilde{X}}{\tilde{X}}
\newcommand{\tilde{K}}{\tilde{K}}
\newcommand{\tilde{x}}{\tilde{x}}
\newcommand{\tilde{x}i}{\tilde{\xi}}
\newcommand{\tilde{A}}{\tilde{A}}
\newcommand{{\tilde I}}{{\tilde I}}
\newcommand{\tilde{P}}{\tilde{P}}
\newcommand{{\lambdaangle}}{{{\lambdaangle}ngle}}
\newcommand{{\rangle}}{{{\rangle}ngle}}
\newcommand{{\text{supp }}}{{\text{supp }}}
\newcommand{\begin{2}\begin{com}}{\begin{2}\begin{com}}
\newcommand{\epsilonnd{com}\epsilonnd{2}}{\epsilonnd{com}\epsilonnd{2}}
\renewcommand{\epsilon}{\epsilonpsilon}
\renewcommand{{\text{div}\,}}{{\text{div}\,}}
\newcommand{{\R^n\backslash\Omega}}{{{\mathbb R}^n\backslash\Omega}}
\newcommand{{{\partial}artial\Omega}}{{{\partial}artial\Omega}}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{assumption}[theorem]{Assumption}
\newcommand{\beta}{\beta}
\newcommand{\epsilon}{\epsilonpsilon}
\newcommand{\omega}{\omega}
\newcommand{{\partial}elta}{{\partial}elta}
\renewcommand{{\partial}}{{{\partial}artial}}
\renewcommand{\lambda}{{\lambdaangle}mbda}
\newcommand{{\partial}}{{{\partial}artial}}
\newcommand{{\not\negmedspace\nabla}}{{\not\negmedspace\nabla}}
\newenvironment{com}{\begin{quotation}{\lambdaeftmargin .25in\rightmargin .25in}\sffamily \footnotesize $\clubsuit$}
{$\spadesuit$\epsilonnd{quotation}{\partial}ar
}
\newenvironment{com2}{\sffamily\footnotesize $\clubsuit$ }{ $\spadesuit$}
\title{Decay estimates for variable coefficient wave equations
in exterior domains}
\author[J. Metcalfe]
{Jason Metcalfe}
\address{Department of Mathematics, University of North Carolina,
Chapel Hill, NC 27599-3250, USA}
\email{[email protected]}
\author[D. Tataru]
{Daniel Tataru}
\address{Mathematics Department, University of California \\
Berkeley, CA 94720-3840, USA}
\email{[email protected]}
\thanks{
The work of the first author was supported in part by NSF grant DMS0800678.
The work of
the second author was supported in part by
NSF grants DMS0354539 and DMS0301122.}
\baselineskip 18pt
\begin{abstract}
In this article we consider variable coefficient, time dependent
wave equations in exterior domains ${\mathbb R} \times ({\mathbb R}^n \setminus
\Omega)$, $n\geq 3$. We prove localized energy estimates if
$\Omega$ is star-shaped, and global in time Strichartz estimates if
$\Omega$ is strictly convex.
\end{abstract}
\includeversion{jm}
\maketitle
\section{Introduction}
Our goal, in this article, is to prove analogs of the well known
Strichartz estimates and localized energy estimates for variable
coefficient wave equations in exterior domains. We consider
long-range perturbations of the flat metric, and we take the obstacle
to be star-shaped. The localized energy estimates are obtained under
a smallness assumption for the long range perturbation.
Global-in-time Strichartz estimates are then proved assuming the
local-in-time Strichartz estimates, which are known to hold for
strictly convex obstacles.
For the constant coefficient wave equation $\Box =
\partial_t^2-\Delta$ in ${\mathbb R}\times{\mathbb R}^n$, $n\ge 2$, we have that
solutions to the Cauchy problem
\begin{equation}
\Box u = f,\quad u(0)=u_0,\quad \partial_t u(0)=u_1,
\label{cccp}
\end{equation}
satisfy the Strichartz estimates\footnote{Here and throughout, we
shall use $\nabla$ to denote a space-time gradient unless otherwise
specified with subscripts.}
\[
\||D_x|^{-\rho_1} \nabla u\|_{L^{p_1}L^{q_1}}\lesssim \|\nabla
u(0)\|_{L^2} + \||D_x|^{\rho_2} \Box
u\|_{L^{p_2'}L^{q_2'}},
\]
for
Strichartz admissible exponents $(\rho_1,p_1,q_1)$ and
$(\rho_2,p_2,q_2)$. Here, exponents $(\rho,p,q)$ are called
Strichartz admissible if $2\le p,q\le \infty$,
\[ \rho = \frac{n}{2}-\frac{n}{q}-\frac{1}{p},\quad \frac{2}{p}\le \frac{n-1}{2}\Bigl(1-\frac{2}{q}\Bigr),\]
and $(\rho,p,q)\neq (1,2,\infty)$ when $n=3$.
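For the reader's convenience, here is a quick check of these conditions in a concrete case: in dimension $n=3$ the triple $(\rho,p,q)=\bigl(\tfrac12,4,4\bigr)$ is Strichartz admissible, since
\[ \frac{n}{2}-\frac{n}{q}-\frac{1}{p} = \frac32-\frac34-\frac14=\frac12, \qquad \frac{2}{p}=\frac12 \le \frac{n-1}{2}\Bigl(1-\frac{2}{q}\Bigr)=\frac12,\]
while the energy triple $(\rho,p,q)=(0,\infty,2)$ is admissible in every dimension $n\ge 2$.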
The Strichartz estimates follow via a $TT^*$ argument and the
Hardy-Littlewood-Sobolev inequality from the dispersive estimates,
\[
\| |D_x|^{-\frac{n+1}2 (1-\frac2q)} \nabla u(t)\|_{L^{q}} \lesssim
t^{-\frac{n-1}2 (1-\frac2q)} \|u_1\|_{L^{q'}}, \qquad 2 \leq q < \infty
\]
for solutions to \eqref{cccp} with $u_0 = 0$, $f=0$. This in turn is obtained
by interpolating between a $L^2 \to L^2$ energy estimate and an
$L^1\to L^\infty$ dispersive bound which provides $O(t^{-(n-1)/2})$
type decay. Estimates of this form originated in the work \cite{Str},
and as stated are the culmination of several subsequent works. The
endpoint estimate $(p,q)=\Bigl(2,\frac{2(n-1)}{n-3}\Bigr)$ was most
recently obtained in \cite{MR1646048}, and we refer the interested
reader to the references therein for a more complete history.
The second estimate which shall be explored is the localized energy
estimate, a version of which states
\begin{equation}\label{kss}
\sup_j \|\langle x\rangle^{-1/2} \nabla u\|_{L^2({\mathbb R}\times \{|x|\in
[2^{j-1},2^j]\})}
\lesssim \|\nabla u(0)\|_{L^2} + \sum_k \|\langle x\rangle^{1/2} \Box
u\|_{L^2({\mathbb R}\times \{|x|\in [2^{k-1},2^k]\})}\end{equation}
in the constant coefficient case. These estimates can be proved using
a positive commutator argument with a multiplier which is roughly of
the form $f(r)\partial_r$ when $n\ge 3$ and are quite akin to the
bounds found in, e.g., \cite{morawetz}, \cite{Strauss},
\cite{MR1308623}, \cite{smithsogge},
\cite{KSS}, and \cite{MR2128434}. See also \cite{Alinhac},
\cite{MetSo}, \cite{globalw} for certain estimates for small
perturbations of the
d'Alembertian.
Variants of these estimates for constant coefficient wave equations
are also known in exterior domains. Here, $u$ is replaced by a
solution to
\[\Box u = F,\quad u|_{{\partial}artial \Omega}=0,\quad u(0)=u_0,\quad
{\partial}artial_t u(0)=u_1,\quad (t,x)\in {\mathbb R}\times {\R^n\backslash\Omega}\] where $\Omega$ is a
bounded set with smooth boundary. The localized energy estimates have
played a key role in proving a number of long time existence results
for nonlinear wave equations in exterior domains. See, e.g.,
\cite{KSS} and \cite{MetSo2, MetSo} for their proof and application.
Here, it is convenient to assume that the obstacle $\Omega$ is
star-shaped, though certain estimates are known (see e.g.
\cite{MetSo2}, \cite{BurqGlobal}) in more general settings. Exterior
to star-shaped obstacles, the estimates for small perturbations of
$\Box$ continue to hold (see \cite{MetSo2}). This, however, only
works for $n\ge 3$, and the bound which results is not strong enough
in order to prove the Strichartz estimates which we desire. As such,
we shall, in the sequel, couple this bound with certain frequency
localized versions of the estimate in order to prove the Strichartz
estimates. For time independent perturbations, one may permit more
general geometries. See, e.g., \cite{BurqGlobal}.
Certain global-in-time Strichartz estimates are also known in exterior
domains, but, except for certain very special cases (see \cite{BLP},
\cite{bss}, which are closely based on \cite{SmSo}), require that the
obstacle be strictly convex. Local-in-time estimates were shown in
\cite{SmSoLocal} for convex obstacles, and using these estimates,
global estimates were constructed in \cite{smithsogge} for $n$ odd and
\cite{BurqGlobal} and \cite{MetGlobal} for general $n$. See, also,
\cite{hmssz}.
In the present article, we explore variable coefficient cases of these
estimates. Here, $\Box$ is replaced by the second order hyperbolic
operator
\[
P(t,x,D) = D_i a^{ij}(t,x) D_j+ b^i(t,x) D_i + c(t,x),
\]
where $D_0=D_t$ is understood. We assume that $(a^{ij})$ has
signature $(n,1)$ and that $a^{00}<0$, i.e. that time slices are
space-like. We shall then consider the initial value boundary value
problem
\begin{equation}
{\lambdaangle}bel{maineq}
Pu=f,\quad u|_{{\partial}artial\Omega}=0,\quad u(0)=u_0,\quad {\partial}artial_t
u(0)=u_1,\quad (t,x)\in {\mathbb R}\times {\R^n\backslash\Omega}.
\epsilonnd{equation}
When $\Omega=\epsilonmptyset$ and $b^i \epsilonquiv c\epsilonquiv 0$, the problem of
proving Strichatz estimates is understood locally, and of course,
localized energy estimates are trivial locally-in-time. For smooth
coefficients, Strichartz estimates were first proved in
\cite{MR1168960} using Fourier integral operators. Using a wave
packet decomposition, Strichartz estimates were obtained in
\cite{MR1644105} for $C^{1,1}$ coefficients in spatial dimensions
$n=2,3$. Using instead an approach based on the FBI transform, these
estimates were extended to all dimensions in \cite{nlw, cs, lp}. For
rougher coefficients, the Strichartz estimates as stated above are
lost (see \cite{SS}, \cite{MR1909638}) and only certain estimates with
losses are available \cite{cs, lp}. When the boundary is nonempty,
far less is known, and we can only refer to the results of
\cite{SmSoLocal} for smooth time independent coefficients, $b^i\epsilonquiv
c\epsilonquiv 0$, and $\Omega$ strictly geodesically convex. The proof of
these estimates is quite involved and uses a Melrose-Taylor parametrix
to approximate the reflected solution.
For the boundaryless problem, global-in-time localized energy
estimates and Strichartz estimates were recently shown in
\cite{globalw} for small, $C^2$, long-range perturbations. The former
follow from a positive commutator argument with a multiplier which is
akin to what we present in the sequel. For the latter, an outgoing
parametrix is constructed using a time-dependent FBI transform in a
fashion which is reminiscent to that of the preceding work \cite{gS}
on Schr\"odinger equations. Upon conjugating the half-wave equation
by the FBI transform, one obtains a degenerate parabolic equation due
to a nontrivial second order term in the asymptotic expansion. Here,
the bounds from \cite{gS}, which are based on the maximum principle,
may be cited. The errors in this parametrix construction are small in
the localized energy spaces, which again are similar to those below,
and it is shown that the global Strichartz estimates follow from the
localized energy estimates.
The aim of the present article is to combine the approach of
\cite{globalw} with analogs of those from \cite{smithsogge},
\cite{BurqGlobal}, and \cite{MetGlobal} to show that global-in-time
Strichartz estimates in exterior domains follow from the localized
energy estimates and local-in-time Strichartz estimates for the
boundary value problem. As we shall show the localized energy
estimates for small perturbations outside of star-shaped obstacles,
the global Strichartz estimates shall then follow for convex obstacles
from the estimates of \cite{SmSoLocal}.
Let us now more precisely describe our assumptions. We shall look at
certain long range perturbations of Minkowski space. To state this,
we set
\[D_0=\{|x|\lambdae 2\},\quad D_j = \{2^j\lambdae |x|\lambdae 2^{j+1}\},\quad
j=1,2,{\partial}ots\]
and
\[
A_j = {\mathbb R} \times D_j, \qquad A_{<j} = {\mathbb R} \times \{|x| \lambdaeq
2^{j}\}.
\]
We shall then assume that
\begin{equation}
{\lambdaangle}bel{coeff}
\sum_{j \in {\mathbb N}} \sup_{A_j\cap ({\mathbb R}\times{\R^n\backslash\Omega})} {\lambdaangle} x{\rangle}^2 |\nabla^2 a(t,x)| + {\lambdaangle} x{\rangle} |\nabla a(t,x)|
+ |a(t,x)-I_n| \lambdaeq \epsilon
\epsilonnd{equation}
and, for the lower order terms,
\begin{equation}
{\lambdaangle}bel{coeffb}
\sum_{j \in {\mathbb N}} \sup_{A_j\cap ({\mathbb R}\times{\R^n\backslash\Omega})} {\lambdaangle} x{\rangle}^2 |\nabla b(t,x)|
+ {\lambdaangle} x{\rangle} |b(t,x)| \lambdaeq \epsilon
\epsilonnd{equation}
\begin{equation}
{\lambdaangle}bel{coeffcc}
\sum_{j \in {\mathbb N}} \sup_{A_j\cap ({\mathbb R}\times{\R^n\backslash\Omega})} {\lambdaangle} x{\rangle}^2 |c(t,x)|
\lambdaeq \epsilon.
\epsilonnd{equation}
If $\epsilon$ is small enough then \epsilonqref{coeff} precludes the existence of trapped
rays, while for arbitrary $\epsilon$ it restricts the trapped rays to
finitely many dyadic regions.
We now define the localized energy spaces that we shall use.
We begin with an initial choice which is convenient for
the local energy estimates but not so much for the Strichartz
estimates. Precisely, we define the localized energy space $LE_0$ as
\[
\|{\partial}hi\|_{LE_{0}} = \sup_{j\ge 0}\Bigl(2^{-j/2} \|\nabla
{\partial}hi\|_{L^2(A_j\cap ({\mathbb R}\times{\R^n\backslash\Omega}))} + 2^{-3j/2}
\|{\partial}hi\|_{L^2(A_j\cap({\mathbb R}\times{\R^n\backslash\Omega}))}\Bigr),
\]
while for the forcing term we set
\[
\|f\|_{LE_0^*} = \sum_{k\ge 0} 2^{k/2} \|f\|_{L^2(A_k\cap
({\mathbb R}\times{\R^n\backslash\Omega}))}.
\]
The local energy bounds in these spaces shall follow from the
arguments in \cite{MetSo}.
On the other hand, for the Strichartz estimates, we shall introduce
frequency localized spaces as in \cite{globalw}, as well as the
earlier work \cite{gS}. We use a Littlewood-Paley decomposition in
frequency,
\[ 1= \sum_{k=-\infty}^\infty S_k(D), \quad \text{supp
}s_k(\xi)\subset \{2^{k-1}<|\xi|<2^{k+1}\}\]
and for each $k\in {\mathbb Z}$, we use
\[\|{\partial}hi\|_{X_k}=2^{-k^-/2} \|{\partial}hi\|_{L^2(A_{<k^-})} + \sup_{j\ge k^-}
\||x|^{-1/2}{\partial}hi\|_{L^2(A_j)}\]
to measure functions of frequency $2^k$. Here $k^-=\frac{|k|-k}{2}$. We then define the global
norm
\[
\|{\partial}hi\|^2_X=\sum_{k=-\infty}^\infty \|S_k {\partial}hi\|^2_{X_k}.
\]
Then for the local energy norm we use
\[
\|{\partial}hi\|^2_{LE_\infty} = \|\nabla {\partial}hi\|^2_X.
\]
For the inhomogeneous term we introduce the
dual space $Y=X'$ with norm defined by
\[
\|f\|_Y^2=\sum_{k=-\infty}^\infty \|S_k f\|_{X_k'}^2.
\]
To relate these spaces to the $LE_0$ respectively $LE_0^*$
we use Hardy type inequalities which are summarized
in the following proposition:
\begin{proposition}{\lambdaangle}bel{phardy}
We have
\begin{equation}
{\lambdaangle}bel{Hardy} \sup_j \||x|^{-1/2} u\|_{L^2(A_j)}\lambdaesssim \|u\|_X
\epsilonnd{equation}
and
\begin{equation}{\lambdaangle}bel{reverseHardy}
\|u\|_Y\lambdaesssim \sum_j \||x|^{1/2} u\|_{L^2(A_j)}.
\epsilonnd{equation}
In addition,
\begin{equation}{\lambdaangle}bel{hardy.ge4}
\||x|^{-3/2} {\partial}hi\|_{L^2}\lambdaesssim \|\nabla_x {\partial}hi\|_X,\quad n\ge
4.\epsilonnd{equation}
\epsilonnd{proposition}
The first bound \epsilonqref{Hardy} is a variant of a Hardy inequality, see
\cite[(16), Lemma 1]{globalw}, and also \cite{gS}. The second
\epsilonqref{reverseHardy} is its dual. The bound \epsilonqref{hardy.ge4}, proved
in \cite[Lemma 1]{globalw}, fails in dimension three.
Now we turn our attention to the obstacle problem. For $R$
fixed so that $\Omega\subset\{|x|<R\}$, we select a smooth cutoff
$\chi$ with $\chi\epsilonquiv 1$ for $|x|<2R$ and ${\text{supp }}\chi\subset\{|x|<4R\}$.
We shall use $\chi$ to partition the analysis into a portion near the
obstacle and a portion away from the obstacle. In particular, we
define the localized energy space $LE \subset LE_0$ as
\[
\|{\partial}hi\|^2_{LE} = \| {\partial}hi\|^2_{LE_{0}} +
\|(1-\chi){\partial}hi\|^2_{LE_\infty}.
\]
For the forcing term, we will respectively construct $LE^* \supset
LE_0^*$ by
\[
\|f\|^2_{LE^*} = \|\chi f\|^2_{LE_{0}^*} +
\|(1-\chi)f\|^2_{Y}, \qquad n \geq 4.
\]
This choice is no longer appropriate in dimension $n=3$, as otherwise
the local $L^2$ control of the solution is lost. Instead we simply set
\[
\|f\|^2_{LE^*} = \| f\|^2_{LE_{0}^*}, \qquad n = 3.
\]
Using these space, we now define what it means for a solution to
satisfy our stronger localized energy estimates.
\begin{definition}
We say that the operator $P$ satisfies the localized energy
estimates if for each initial data $(u_0,u_1) \in {\partial}ot H^{1} \times
L^2$ and each inhomogeneous term $f \in LE^*$,
there exists a unique solution $u$ to \epsilonqref{maineq}
with $u \in LE$
which satisfies the bound
\begin{equation}
\|u\|_{LE} + \Bigl\|\frac{{\partial}artial u}{{\partial}artial
\nu}\Bigr\|_{L^2({\partial}artial \Omega)} \lambdaesssim
\|\nabla u(0)\|_{L^2} + \|f\|_{LE^*}.
\epsilonnd{equation}
\epsilonnd{definition}
We prove that the localized energy estimates hold under the assumption
that $P$ is a small perturbation of the d'Alembertian:
\begin{theorem}{\lambdaangle}bel{l3}
Let $\Omega$ be a star-shaped domain. Assume that the coefficients
$a^{ij}$, $b^i$, and $c$ satisfy \epsilonqref{coeff}, \epsilonqref{coeffb}, and
\epsilonqref{coeffcc} with an $\epsilon$ which is sufficiently small. Then the
operator $P$ satisfies the localized energy estimates
globally-in-time for $n\ge 3$.
\epsilonnd{theorem}
These results correspond to the $s=0$ results of \cite{globalw}. Some
more general results are also available by permitting $s\neq 0$, but
for simplicity we shall not provide these details.
Once we have the local energy estimates, the next step is to
prove the Strichartz estimates. To do so, we shall assume that the
corresponding Strichartz estimate holds locally-in-time.
\begin{definition}
For a given operator $P$ and domain $\Omega$, we say that the
local Strichartz estimate holds if
\begin{equation}{\lambdaangle}bel{locStr}
\|\nabla u\|_{|D_x|^{\rho_1}L^{p_1}L^{q_1}([0,1]\times {\R^n\backslash\Omega})}
\lambdaesssim \|\nabla u(0)\|_{L^2} + \|f\|_{|D_x|^{-\rho_2}L^{p'_2}L^{q_2'}([0,1]\times{\R^n\backslash\Omega})}
\epsilonnd{equation}
for any solution $u$ to \epsilonqref{maineq}.
\epsilonnd{definition}
As mentioned previously, \epsilonqref{locStr} is only known under some
fairly restrictive hypotheses. We show a conditional result which
says that the global-in-time Strichartz estimates follow from the
local-in-time estimates as well as the localized energy estimates.
\begin{theorem}{\lambdaangle}bel{globalStrichartz}
Let $\Omega$ be a domain such that $P$ satisfies both the localized
energy estimates and the local Strichartz estimate. Let $a^{ij},
b^i, c$ satisfy \epsilonqref{coeff}, \epsilonqref{coeffb}, and \epsilonqref{coeffcc}. Let
$(\rho_1,p_1,q_1)$ and $(\rho_2,p_2,q_2)$ be two Strichartz pairs.
Then the solution $u$ to \epsilonqref{maineq} satisfies
\begin{equation}{\lambdaangle}bel{glStr}
\|\nabla u\|_{|D_x|^{\rho_1}L^{p_1}L^{q_1}}\lambdaesssim \|\nabla
u(0)\|_{L^2} + \|f\|_{|D_x|^{-\rho_2}L^{p_2'}L^{q_2'}}.
\epsilonnd{equation}
\epsilonnd{theorem}
Notice that this conditional result does not require the $\epsilon$ in
\epsilonqref{coeff}, \epsilonqref{coeffb}, and \epsilonqref{coeffcc} to be small. We
do, however, require this for our proof of the localized energy
estimates which are assumed in Theorem \ref{globalStrichartz}.
As an example of an immediate corollary of the localized energy
estimates of Theorem \ref{l3} and the local Strichartz estimates of
\cite{SmSoLocal}, we have:
\begin{corollary}
Let $n\ge 3$, and let $\Omega$ be a strictly convex domain. Assume
that the coefficients $a^{ij}$, $b^i$ and $c$ are time-independent
in a neighborhood of $\Omega$ and satisfy \epsilonqref{coeff},
\epsilonqref{coeffb} and \epsilonqref{coeffcc} with an $\epsilon$ which is sufficiently
small. Let
$(\rho_1,p_1,q_1)$ and $(\rho_2,p_2,q_2)$ be two Strichartz pairs
which satisfy
\[ \frac{1}{p_1} =
\Bigl(\frac{n-1}{2}\Bigr)\Bigl(\frac{1}{2}-\frac{1}{q_1}\Bigr),\quad
\frac{1}{p_2'}=\Bigl(\frac{n-1}{2}\Bigr)\Bigl(\frac{1}{2}-\frac{1}{q_2'}\Bigr).\]
Then the
solution $u$ to \epsilonqref{maineq} satisfies
\begin{equation}
\| \nabla u\|_{|D_x|^{\rho_1} L^{p_1}L^{q_1}}
\lambdaesssim \|\nabla u(0)\|_{L^2} +
\|f\|_{|D_x|^{-\rho_2} L^{p'_2}L^{q'_2}}.
{\lambdaangle}bel{fse} \epsilonnd{equation}
{\lambdaangle}bel{tfse}\epsilonnd{corollary}
This paper is organized as follows.
In the next section,
we prove the localized energy estimates for small perturbations of the
d'Alembertian exterior to a star-shaped obstacle. In the
last section, we prove Theorem \ref{globalStrichartz} which says that
global-in-time Strichartz estimates follow from the localized energy
estimates as well as the local Strichartz estimates.
\section{The localized energy estimates}
{\lambdaangle}bel{mora}
In this section, we shall prove Theorem~\ref{l3}.
By combining the inclusions $LE \subset LE_0$, $LE_0^* \subset LE^*$ and
the bounds \epsilonqref{hardy.ge4}, \epsilonqref{coeffb}, and \epsilonqref{coeffcc}, one
can easily prove the following which permits us to treat the lower
order terms perturbatively. See, also, \cite[Lemma 3]{globalw}.
\begin{proposition}{\lambdaangle}bel{bcbound}
Let $b^i$, $c$ be as in \epsilonqref{coeffb} and \epsilonqref{coeffcc}
respectively. Then,
\begin{align}
\|b\nabla u\|_{LE^*} &\lambdaesssim \epsilon \|u\|_{LE}, {\lambdaangle}bel{bbound}\\
\|cu\|_{LE^*}&\lambdaesssim \epsilon\|u\|_{LE}. {\lambdaangle}bel{cbound}
\epsilonnd{align}
\epsilonnd{proposition}
We now look at the proof of the localized energy estimates.
Due to Proposition \ref{bcbound} we can assume that
$b=0, c=0$. To prove the theorems, we use positive commutator
arguments. We first do the analysis separately in the two regions.
\subsection{Analysis near $\Omega$ and classical Morawetz-type estimates}
Here we sketch the proof from \cite{MetSo} which gives an estimate
which is similar to \epsilonqref{kss} for small perturbations of the d'Alembertian.
This estimate shall allow us to gain control of the solution near the
boundary. It also permits local $L^2$ control of the solution, not
just the gradient in three dimensions. The latter is necessary as the
required Hardy inequality which can be utilized in higher dimensions
corresponds to a false endpoint estimate in three dimensions.
The main estimate is the following:
\begin{proposition}{\lambdaangle}bel{prop.kss}
Let $\Omega$ be a star-shaped domain. Assume that the coefficients
$a^{ij}$, $b^i$, $c$ satisfy \epsilonqref{coeff}, \epsilonqref{coeffb}, and
\epsilonqref{coeffcc} respectively with an $\epsilon$ which is sufficiently
small. Suppose that ${\partial}hi$ satisfies $P{\partial}hi=F$, ${\partial}hi|_{{\partial}artial\Omega}=0$. Then
\begin{equation}{\lambdaangle}bel{main.kss}
\|{\partial}hi\|_{LE_0} + \|\nabla {\partial}hi\|_{L^\infty L^2}
+ \|{\partial}artial_\nu {\partial}hi\|_{L^2({{\partial}artial\Omega})}\lambdaesssim \|\nabla {\partial}hi(0)\|_2
+ \|F\|_{LE_0^*}.
\epsilonnd{equation}
\epsilonnd{proposition}
\begin{proof}
We provide only a terse proof. The interested reader can refer to
\cite{MetSo} for a more detailed proof. For $f=\frac{r}{r+\rho}$,
where $\rho$ is a fixed positive constant, we use a multiplier of
the form
\[
{\partial}artial_t{\partial}hi + f(r){\partial}artial_r{\partial}hi +\frac{n-1}{2}\frac{f(r)}{r}{\partial}hi.
\]
By multiplying $P{\partial}hi$ and integrating by parts, one obtains
\begin{equation}{\lambdaangle}bel{commutator}
\begin{split}
\!\! \int_0^T \!\! \int_{\R^n\backslash\Omega} \frac{1}{2}f'(r)({\partial}artial_r{\partial}hi)^2 & +
\Bigl(\frac{f(r)}{r}-\frac{1}{2}f'(r)\Bigr)|{\not\negmedspace\nabla}{\partial}hi|^2 +
\frac{1}{2}f'(r)({\partial}artial_t {\partial}hi)^2 -
\frac{n-1}{4}\Delta\Bigl(\frac{f(r)}{r}\Bigr){\partial}hi^2\:dxdt \!\! \\ &
-\frac{1}{2}\!\int_0^T\!\!\int_{{\partial}artial\Omega} \!\!\frac{f(r)}{r}({\partial}artial_\nu {\partial}hi)^2
{\lambdaangle} x,\nu{\rangle} (a^{ij}\nu_i\nu_j)\:d\sigma dt + (1+O(\epsilonpsilon)) \|\nabla
{\partial}hi(T)\|^2_2\\\lambdaesssim & \ \|\nabla
{\partial}hi(0)\|^2_2 + \int_0^T\int_{\R^n\backslash\Omega}
|F|\Bigl(|{\partial}artial_t{\partial}hi|+|f(r){\partial}artial_r{\partial}hi|+
\Bigl|\frac{f(r)}{r}{\partial}hi\Bigr|\Bigr)\:dx\:dt \\ &\ +
\int_0^T\int_{\R^n\backslash\Omega} O\Bigl(\frac{|a-I|}{r}+|\nabla a|\Bigr)|\nabla
{\partial}hi|\Bigl(|\nabla {\partial}hi| + \Bigl|\frac{{\partial}hi}{r}\Bigr|\Bigr)\:dx\:dt.
\\\lambdaesssim & \ \|\nabla {\partial}hi(0)\|^2_2 + \|F\|_{LE_0^*(0,T)}
\|{\partial}hi\|_{LE_0(0,T)} + \epsilonpsilon \|{\partial}hi\|_{LE_0(0,T)}^2.
\epsilonnd{split}
\epsilonnd{equation}
Here, we have used the Hardy inequality $\||x|^{-1}{\partial}hi\|_2\lambdaesssim
\|\nabla {\partial}hi\|_2$, $n\geq 3$, as well as \epsilonqref{coeff}.
All terms on the left are nonnegative. By direct computation, the
first term controls
\[
\rho^{-1} \|\nabla {\partial}hi\|^2_{L^2([0,T]\times \{|x|\approx \rho\})}
+ \rho^{-3} \|{\partial}hi\|^2_{L^2([0,T]\times \{|x|\approx \rho\})}.
\]
Taking a supremum over dyadic $\rho$ provides a bound for the
$\|{\partial}hi\|_{LE_0(0,T)}$. In the second term
we have $-{\lambdaangle} x,\nu{\rangle} \gtrsim 1$, which follows from the assumption
that $\Omega$ is star-shaped, and also $a^{ij}\nu_i\nu_j\gtrsim 1$
which follows from \epsilonqref{coeff}. By simply taking
$\rho=1$, one can bound the third term in the left of
\epsilonqref{main.kss} by the right side of \epsilonqref{commutator}.
Thus we obtain
\[
\|{\partial}hi\|_{LE_0(0,T)} + \|\nabla {\partial}hi(T)\|_{L^\infty L^2}
+ \|{\partial}artial_\nu {\partial}hi\|_{L^2({{\partial}artial\Omega})}\lambdaesssim \|\nabla {\partial}hi(0)\|^2_2 + \|F\|_{LE_0^*(0,T)}
\|{\partial}hi\|_{LE_0(0,T)} + \epsilonpsilon \|{\partial}hi\|_{LE_0(0,T)}^2.
\]
The $LE_0$ terms on the right can be bootstrapped for $\epsilonpsilon$ small
which yields \epsilonqref{main.kss}.
\epsilonnd{proof}
\subsection{Analysis near $\infty$ and frequency localized estimates}
In this section, we briefly sketch the proof from \cite{globalw} for some
frequency localized versions of the localized energy estimates for the
boundaryless equation. The main estimate here, which is from
\cite{globalw}, is the following.
\begin{proposition} {\lambdaangle}bel{glw}
Suppose that $a^{ij}$ are as in Theorem~\ref{l3} and $b=0$, $c=0$.
Then for each initial data $(u_0,u_1)\in {\partial}ot{H}^1\times L^2$ and
each inhomogeneous term $f\in Y\cap L^1L^2$, there exists a unique solution $u$
to the boundaryless equation
\[Pu=f,\quad u(0)=u_0,\quad {\partial}artial_t u(0)=u_1\]
satisfying
\begin{equation}{\lambdaangle}bel{mtle}
\|\nabla u\|_{L^\infty L^2\cap X}\lambdaesssim \|\nabla u(0)\|_{L^2} +
\|f\|_{L^1L^2 + Y}.\epsilonnd{equation}
\epsilonnd{proposition}
The proof here uses a multiplier of the form
\[D_t + {\partial}elta_0 Q + i{\partial}elta_1 B.\]
Here the parameters are chosen so that
\[\epsilon\lambdal {\partial}elta_1\lambdal {\partial}elta\lambdal {\partial}elta_0\lambdal 1.\]
The multiplier $Q$ is given by
\[
Q = \sum_k S_k Q_k S_k
\]
where $Q_k$ are differential operators of the form
\[
Q_k = (D_x x {\partial}hi_k( |x|) + {\partial}hi_k(|x|) xD_x).
\]
The ${\partial}hi_k$ are functions of the form
\[
{\partial}hi_k (x) = 2^{-k^-} {\partial}si_k(2^{-k^-} {\partial}elta x)
\]
where for each $k$ the functions ${\partial}si_k$ have the following
properties:
\begin{enumerate}
\item[(i)] ${\partial}si_k(s) \approx (1+s)^{-1}$ for $s > 0$ and $|{\partial}artial^j
{\partial}si_k(s)| \lambdaesssim (1+s)^{-j-1}$ for $j \lambdaeq 4$,
\item[(ii)] $ {\partial}si_k(s) + s {\partial}si_k'(s) \approx (1+s)^{-1} \alpha_k(s)$
for $s > 0$,
\item[(iii)] ${\partial}si_k(|x|)$ is localized at frequency $ \lambdal 1$.
\epsilonnd{enumerate}
The $\alpha_k$ are slowly varying functions that are related to the
bounds of the individual summands in \epsilonqref{coeff}. This construction
is reminiscent of those in \cite{gS}, \cite{MMT}, and \cite{globalw}.
For the Lagrangian term $B$, we fix a function $b$ satisfying
\[ b(s)\approx \frac{\alpha(s)}{1+s},\quad |b'(s)|\lambdal b(s).\]
Then, we set $B=\sum_k S_k 2^{-k^-} b(2^{-k^-}x)S_k$.
The computations, which are carried out in detail in \cite{globalw},
are akin to those outlined in the previous section.
\subsection{Proof of Theorem~\ref{l3}}
Consider first the three dimensional case. For $f \in LE^* = LE_0^*$
we can use Proposition~\ref{prop.kss} to obtain
\[
\|u\|_{LE_0} + \|\nabla u\|_{L^\infty L^2}
+ \|{\partial}artial_\nu u\|_{L^2({{\partial}artial\Omega})}\lambdaesssim \|\nabla u(0)\|_2
+ \|f\|_{LE_0^*}.
\]
It remains to estimate $\| (1-\chi)u\|_{LE_\infty}$ with $\chi$ as
in the definition of $LE$. By \epsilonqref{mtle} we have
\[
\| (1-\chi)u\|_{LE_\infty} \lambdaesssim \| \nabla
(1-\chi)u(0)\|_{L^2} + \| P [(1-\chi)u] \|_{Y} \lambdaesssim \|\nabla
u(0)\|_{L^2} + \| P [(1-\chi)u] \|_{LE^*_0}.
\]
Finally, to bound the last term we write
\[
P [(1-\chi)u] = -[P,\chi]u +(1-\chi) f.
\]
The commutator has compact spatial support; therefore
\[
\| P [(1-\chi)u] \|_{LE^*_0} \lambdaesssim \|u\|_{LE_0} + \|f\|_{LE_0^*}
\]
and the proof is concluded.
Consider now higher dimensions $n \geq 4$.
For fixed $f\in LE^*$, we first solve the boundaryless problem
\[
Pu_\infty = (1-\chi)f \in Y, \qquad u_\infty(0) = 0, \ {\partial}artial_t u_{\infty}(0) = 0
\]
using Proposition~\ref{glw}. We consider $\chi_\infty$ which is
identically $1$ in a neighborhood of infinity and vanishes on ${\text{supp }}
\chi$. For the function $\chi_\infty u_\infty$ we use the
Hardy inequalities in Proposition~\ref{phardy} to write
\[
\|\chi_\infty u_\infty\|_{LE} \approx \|\nabla(\chi_\infty u_\infty)\|_{X}
\lambdaesssim \| \nabla u_\infty\|_{X} \lambdaesssim \|(1-\chi_\infty)f\|_{Y}.
\]
The remaining part ${\partial}si = u - {\partial}si_\infty u_\infty$ solves
\[
P {\partial}si = \chi_\infty f + [P,\chi_\infty] u_\infty;
\]
therefore
\[
\|P {\partial}si\|_{LE_0^*} \lambdaesssim \|f\|_{LE^*} + \|u_\infty\|_{LE_0}
\lambdaesssim \|f\|_{LE^*} + \|\nabla u_\infty\|_{X}
\lambdaesssim \|f\|_{LE^*}.
\]
Then we estimate ${\partial}si$ as in the three dimensional case. The
proof is concluded.
\section{The Strichartz estimates}
In this final section, we prove Theorem \ref{globalStrichartz}, the
global Strichartz estimates. We use fairly standard arguments to
accomplish this. In a compact region about the obstacle, we prove the
global estimates using the local Strichartz estimates and the
localized energy estimates. Near infinity, we use \cite{globalw}.
The two regions can then be glued together using the localized energy
estimates.
We shall utilize the following two propositions. The first gives the
result when the forcing term is in the dual localized energy space.
\begin{proposition}{\lambdaangle}bel{hom}
Let $(\rho,p,q)$ be a Strichartz pair.
Let $\Omega$ be a domain such that $P$ satisfies both the localized
energy estimates and the homogeneous local Strichartz estimate with exponents $(\rho,p,q)$.
Then for each ${\partial}hi\in LE$
with $P {\partial}hi\in LE^*$, we have
\begin{equation}{\lambdaangle}bel{homest}
\||D_x|^{-\rho}\nabla {\partial}hi\|^2_{L^pL^q}\lambdaesssim \|\nabla
{\partial}hi(0)\|_{L^2}^2 + \|{\partial}hi\|_{LE}^2 + \|P {\partial}hi\|^2_{LE^*}.
\epsilonnd{equation}
\epsilonnd{proposition}
The second proposition allows us to gain control when the forcing term
is in a dual Strichartz space.
\begin{proposition}{\lambdaangle}bel{nonhom}
Let $(\rho_1,p_1,q_1)$ and $(\rho_2,p_2,q_2)$ be Strichartz pairs.
Let $\Omega$ be a domain such that $P$ satisfies both the localized
energy estimates and the local Strichartz estimate with exponents
$(\rho_1,p_1,q_1)$, $(\rho_2,p_2,q_2)$. Then there is a parametrix
$K$ for $P$ with
\begin{equation}{\lambdaangle}bel{nonhomest}
\|\nabla Kf\|^2_{L^\infty L^2} + \|Kf\|^2_{LE} + \||D_x|^{-\rho_1}\nabla
Kf\|^2_{L^{p_1}L^{q_1}}\lambdaesssim \||D_x|^{\rho_2}f\|^2_{L^{p_2'}L^{q_2'}}
\epsilonnd{equation}
and
\begin{equation}{\lambdaangle}bel{errorest}
\|P Kf - f\|_{LE^*}\lambdaesssim \||D_x|^{\rho_2}f\|_{L^{p_2'}L^{q_2'}}.
\epsilonnd{equation}
\epsilonnd{proposition}
We briefly delay the proofs and first apply the propositions to prove
Theorem \ref{globalStrichartz}.
\begin{proof}[Proof of Theorem \ref{globalStrichartz}]
For
\[Pu=f+g,\quad f\in |D_x|^{-\rho_2}L^{p_2'}L^{q_2'},\, g\in LE^*,\]
we write
\[u=Kf+v.\]
The bound for $\nabla Kf$ follows immediately from \epsilonqref{nonhomest}.
To bound $v$, we note that
\[P v = (1-PK)f+g.\]
Applying \epsilonqref{homest} and the localized energy estimate, we have
\[\||D_x|^{-\rho_1}\nabla v\|_{L^{p_1}L^{q_1}}\lambdaesssim \|\nabla
u(0)\|_{L^2} + \|\nabla Kf\|_{L^\infty L^2} + \|(1-PK)f\|_{LE^*} + \|g\|_{LE^*}.\]
The Strichartz estimates \epsilonqref{glStr} then
follow from \epsilonqref{nonhomest} and \epsilonqref{errorest}.
\epsilonnd{proof}
\begin{proof}[Proof of Proposition \ref{hom}]
We assume $P {\partial}hi\in Y$, and we write
\[{\partial}hi = \chi {\partial}hi + (1-\chi){\partial}hi\]
with $\chi$ as in the definition of the $LE$ norm.
Since, using \epsilonqref{reverseHardy}, the fundamental theorem of calculus, and
\epsilonqref{Hardy}, we have
\[ \|[P, \chi] {\partial}hi\|_{LE^*}\lambdaesssim \|{\partial}hi\|_{LE},\]
it suffices to show the estimate for ${\partial}hi_1=\chi {\partial}hi$,
${\partial}hi_2=(1-\chi){\partial}hi$ separately.
To show \epsilonqref{homest} for ${\partial}hi_1$, we need only assume that ${\partial}hi_1$
and $P{\partial}hi_1$ are compactly supported, and we write
\[{\partial}hi_1 = \sum_{j\in {\mathbb Z}} \beta(t-j){\partial}hi_1\]
for an appropriately chosen, smooth, compactly supported function
$\beta$. By commuting $P$ and $\beta(t-j)$, we easily obtain
\[ \sum_{j\in {\mathbb N}} \|\beta(t-j){\partial}hi_1\|^2_{LE}
+ \|P
(\beta(t-j){\partial}hi_1)\|^2_{L^1L^2}
\lambdaesssim \|{\partial}hi_1\|^2_{LE}+ \|P {\partial}hi_1\|^2_{LE^*}.\]
Here, as above, we have also used \epsilonqref{reverseHardy}, the
fundamental theorem of calculus, and \epsilonqref{Hardy}. Applying the
homogeneous local Strichartz estimate to each piece $\beta(t-j){\partial}hi_1$
and using Duhamel's formula, the bound \epsilonqref{homest} for ${\partial}hi_1$ follows
immediately from the square summability above.
On the other hand, ${\partial}hi_2$ solves a boundaryless equation, and the
estimate \epsilonqref{homest} is just a restatement of \cite[Theorem 7]{globalw} with $s=0$.
This follows directly when $n\ge 4$ and easily from
\epsilonqref{reverseHardy} when $n=3$.
\epsilonnd{proof}
\begin{proof}[Proof of Proposition \ref{nonhom}]
We split $f$ in a fashion similar to the above:
\[ f = \chi f + (1-\chi) f = f_1 + f_2.\]
For $f_1$, we write
\[f_1 = \sum_j \beta(t-j) f_1\]
where $\beta$ is supported in $[-1,1]$. Let $\psi_j$ be the solution
to
\[P \psi_j = \beta(t-j)f_1.\]
By the local Strichartz estimate, we have
\[ \||D_x|^{-\rho_1} \nabla\psi_j\|_{L^{p_1}L^{q_1}(E_j)} + \|\nabla
\psi_j\|_{L^\infty L^2(E_j)} \lesssim \|\beta(t-j)|D_x|^{\rho_2}f_1\|_{L^{p_2'}L^{q_2'}}\]
where $E_j = [j-2,j+2]\times (\{|x|<2\}\cap {\R^n\backslash\Omega})$. Letting
$\tilde{\beta}(t-j,r)$ be a cutoff which is supported in $E_j$ and is
identically one on the support of $\beta(t-j)\chi$, set $\phi_j = \tilde{\beta}(t-j,r)\psi_j$.
Then,
\begin{equation}\label{locnonhomest}
\||D_x|^{-\rho_1} \nabla \phi_j\|_{L^{p_1}L^{q_1}} + \|\nabla
\phi_j\|_{L^\infty L^2}\lesssim
\|\beta(t-j)|D_x|^{\rho_2} f_1\|_{L^{p_2'}L^{q_2'}}.\end{equation}
Moreover,
\[ P \phi_j - \beta(t-j)f_1 = [P,\tilde{\beta}(t-j,r)]\psi_j,\]
and thus,
\begin{equation}\label{locerror}
\|P \phi_j - \beta(t-j)f_1\|_{L^2}\lesssim \|\beta(t-j)|D_x|^{\rho_2}f_1\|_{L^{p_2'}L^{q_2'}}.
\end{equation}
Setting
\[Kf_1 = \sum_j \phi_j\]
and summing the bounds \eqref{locnonhomest} and \eqref{locerror}
yields the desired result for $f_1$.
For $f_2$, we solve the boundaryless equation
\[P \psi = f_2.\]
For a second cutoff $\tilde{\chi}$ which is 1 on the support of
$1-\chi$ and vanishes for $\{r<R\}$, we set
\[Kf_2=\tilde{\chi}\psi.\]
The following lemma, which is in essence from \cite[Theorem
6]{globalw}, applied to $\psi$ then easily yields the desired bounds.
\begin{lemma}
Let $f\in |D_x|^{-\rho_2}L^{p_2'}L^{q_2'}$. Then the forward solution
$\psi$ to the boundaryless equation $P\psi = f$ satisfies the bound
\begin{equation}
\|\nabla \psi\|^2_{L^\infty L^2}+\|\psi\|^2_{LE} + \||D_x|^{-\rho_1}\nabla
\psi\|^2_{L^{p_1}L^{q_1}} \lesssim \||D_x|^{\rho_2}f\|^2_{L^{p_2'}L^{q_2'}}.
\end{equation}
\end{lemma}
It remains to prove the lemma. From \cite[Theorem 6]{globalw}, we
have that
\begin{equation}
\label{backref}
\|\nabla \psi\|^2_X + \||D_x|^{-\rho_1}\nabla
\psi\|^2_{L^{p_1}L^{q_1}} \lesssim \||D_x|^{\rho_2}f\|^2_{L^{p_2'}L^{q_2'}}.
\end{equation}
By \eqref{Hardy} we have
\[
\sup_{j\ge 0} 2^{-j/2} \|\nabla
\psi\|_{L^2(A_j)}
\lesssim \|\nabla \psi\|_X.
\]
It remains only to show the uniform bound
\begin{equation}
\label{ll2}
2^{-\frac{3j}2}\|\psi\|_{L^2(A_j)}\lesssim \||D_x|^{\rho_2} f\|_{L^{p_2'}L^{q_2'}}
\end{equation}
when $n=3$.
Let $H(t,s)$ be the forward fundamental solution to $P$. Then
\[
\psi(t) = \int_{-\infty}^t H(t,s)f(s) \ ds.
\]
Therefore \eqref{ll2} can be rewritten as
\[
2^{-\frac{3j}2}\left\| \int_{-\infty}^t H(t,s)f(s) \ ds
\right\|_{L^2(A_j)}\lesssim \||D_x|^{\rho_2} f\|_{L^{p_2'}L^{q_2'}}.
\]
Since $p_2'<2$ for
Strichartz pairs in $n=3$, by the Christ-Kiselev lemma
\cite{christkiselev} (see also \cite{smithsogge}) it suffices to show that
\begin{equation}
\label{ll2a}
2^{-\frac{3j}{2}}\Bigl\|\int_{-\infty}^\infty H(t,s)f(s)\:ds\Bigr\|_{L^2(A_j)}\lesssim
\||D_x|^{\rho_2}f\|_{L^{p_2'}L^{q_2'}}.
\end{equation}
The function
\[
\psi_1(t) = \int_{-\infty}^\infty H(t,s)f(s)\:ds
\]
solves $P \psi_1 = 0$, and from \eqref{backref} we have
\[
\|\nabla \psi_1\|_{L^\infty L^2} \lesssim \||D_x|^{\rho_2}
f\|_{L^{p_2'}L^{q_2'}}.
\]
On the other hand, from \eqref{main.kss} with $P\psi_1 = 0$ and $\Omega=\emptyset$, we
obtain
\[
2^{-\frac{3j}2}\|\psi_1\|_{L^2(A_j)}
\lesssim \|\nabla \psi_1(0)\|_{2}.
\]
Hence \eqref{ll2a} follows, and the proof is concluded.
\end{proof}
\end{document}
\begin{document}
\begin{abstract}
Define $\cpx{n}$ to be the \emph{complexity} of $n$, the smallest number of ones
needed to write $n$ using an arbitrary combination of addition and
multiplication. The set $\mathscr{D}$ of \emph{defects}, differences
$\delta(n):=\cpx{n}-3\log_3 n$, is known to be a well-ordered subset of
$[0,\infty)$, with order type $\omega^\omega$. This is proved by
showing that, for any $s$, there is a finite set ${\mathcal S}_s$ of certain
multilinear polynomials, called low-defect polynomials, such that $\delta(n)\le
s$ if and only if one can write $n = f(3^{k_1},\ldots,3^{k_r})3^{k_{r+1}}$
for some $f\in{\mathcal S}_s$ \cite{paperwo, theory}.
In this paper we show that, in addition to it being true that $\mathscr{D}$ (and thus
$\overline{\mathscr{D}}$) has order type $\omega^\omega$, this set satisfies a sort of
self-similarity property, with $\overline{\mathscr{D}}' = \overline{\mathscr{D}} + 1$. This is
proven by restricting attention to \emph{substantial} low-defect polynomials,
ones that can themselves be written efficiently in a certain sense, and showing
that in a certain sense the values of these polynomials at powers of $3$ have
complexity equal to the na\"ive upper bound most of the time.
As a result, we also prove that, under appropriate conditions on $a$ and $b$,
numbers of the form $b(a3^k+1)3^\ell$ will, for all sufficiently large
$k$, have complexity equal to the na\"ive upper bound. These results resolve
various earlier conjectures of the second author \cite{Arias}.
\end{abstract}
\title{Integer complexity: Stability and self-similarity}
\section{Introduction}
In this paper we prove a new stability theorem and a new self-similarity theorem
for integer complexity; among other things, we show that
$\cpx{a3^k+1}=\cpx{a}+3k+1$ for stable $a$ and large $k$, and that
$\overline{\D}'=\overline{\D}+1$ where $\mathscr{D}$ is the set of defects. (In fact, although we will
not show it here, our main results are equivalent to one another.) We achieve
this via the mechanism of {\em substantial low-defect polynomials}, which we
will define below.
In this context, the \emph{complexity} of a natural number $n$,
denoted $\cpx{n}$, is the least number of $1$'s needed to write it using any
combination of addition and multiplication, with the order of the operations
specified using parentheses grouped in any legal nesting. For instance, $n=11$
has a complexity of $8$, since it can be written using $8$ ones as \[
11=(1+1+1)(1+1+1)+1+1,\] but not with any fewer than $8$. This notion was
implicitly introduced in 1953 by Kurt Mahler and Jan Popken \cite{MP}, and was
later popularized by Richard Guy \cite{Guy, UPINT}. There are a number of other
related ways of measuring the complexity of a number \cite{ AMchains, BSS,
TAOCP2, subreview}, at least one of which is related to the $P$ vs $NP$ problem
for Blum-Shub-Smale machines \cite{BSS}.
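For concreteness, the definition can be turned directly into a (very naive) dynamic program;
the following Python sketch is purely illustrative and is far cruder than the algorithms of
\cite{paperalg} discussed later in the paper:
\begin{verbatim}
# Naive dynamic program for integer complexity ||n||:
# ||n|| = least number of 1's needed to build n from + and *.
def complexities(limit):
    cpx = [0] * (limit + 1)
    cpx[1] = 1
    for n in range(2, limit + 1):
        best = n                                  # n = 1 + 1 + ... + 1
        for a in range(1, n // 2 + 1):            # additive splits
            best = min(best, cpx[a] + cpx[n - a])
        d = 2
        while d * d <= n:                         # multiplicative splits
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx

print(complexities(11)[11])  # prints 8, matching ||11|| = 8 above
\end{verbatim}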
Integer complexity is approximately logarithmic, and in particular it satisfies the bounds
\begin{equation*}\label{eq1}
3 \log_3 n= \frac{3}{\log 3} \log n\le \cpx{n} \le \frac{3}{\log 2} \log n =
3\log_2 n
,\qquad n>1.
\end{equation*}
The lower bound can be deduced from the results of Mahler and Popken, and was
explicitly proved by John Selfridge \cite{Guy}. It is attained with equality for
$n=3^k$ for all $k \ge1$. The upper bound can be obtained by writing $n$ in
binary and finding a representation using Horner's algorithm. It is not sharp,
and the constant $\frac{3}{\log2} $ can be improved \cite{upbds}.
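To make the binary/Horner bound concrete: $11$ is $1011$ in binary, and Horner's scheme gives
\[ 11=((1{+}1)(1{+}1)+1)(1{+}1)+1, \]
an expression with $2\lfloor\log_2 11\rfloor+(s_2(11)-1)=8$ ones, where $s_2$ denotes the binary
digit sum; in general this scheme uses $2\lfloor\log_2 n\rfloor+s_2(n)-1\le 3\log_2 n$ ones for
$n>1$.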
Based on the lower bound, earlier work \cite{paper1} introduced the
notion of the \emph{defect} of $n$, denoted $\delta(n)$, which is defined to be
the difference $\cpx{n}-3\log_3 n$. Subsequent work \cite{paperwo} showed that
the set of defects is in fact a well-ordered subset of the real line, with order
type $\omega^\omega$. An equivalent form of this theorem had been conjectured
earlier by the second author \cite{Arias}. Indeed, the paper \cite{Arias}
made eleven conjectures about integer complexity, some of which can be
expressed in terms of the defect. Some of these conjectures have since been
proven \cite{paper1, paperwo}, but a few remain outstanding. In this paper we
will prove the outstanding conjectures that are true as stated, and prove
salvaged versions of the ones that require additional hypotheses.
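To fix ideas, here are a few values computed directly from the definition of the defect:
$\delta(3^k)=0$ for $k\ge 1$, $\delta(2)=2-3\log_3 2\approx 0.107$, and
$\delta(11)=8-3\log_3 11\approx 1.452$.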
We focus on two in particular, \cite[Conjecture~2]{Arias} and
\cite[Conjecture~8]{Arias}; we will derive \cite[Conjectures~9--11]{Arias} as
corollaries. Let us begin by describing what it is we aim to prove.
\subsection{Stability of $a(b3^k+1)$}
\label{cj2-intro}
Consider the following definition and theorem, the latter of which was also
\cite[Conjecture~1]{Arias}:
\begin{defn}
A number $n$ is called \emph{stable} if $\cpx{3^k n}=3k+\cpx{n}$ holds for every
$k \ge 0$.
\end{defn}
\begin{thm}[{\cite[Theorem~13]{paper1}}]
\label{stab}
For any natural number $n$, there exists $K\ge 0$ such that $n3^K$ is stable;
i.e., for all $k\ge K$, one has
\[ \cpx{n3^k} = \cpx{n3^K} + 3(k-K). \]
\end{thm}
One might phrase this as follows: once $k$ is large enough, increasing $k$ by $1$
increases the complexity by exactly $3$. It is a stability theorem for
numbers of the form $a3^k$. In this paper we prove that
a similar phenomenon occurs for numbers of the form $b(a3^k+1)$:
\begin{thm}[Degree-$1$ stability theorem]
\label{cj2thm}
We have:
\begin{enumerate}
\item Suppose $a$ is stable. Then there exists $K$ such that, for all $k\ge K$
and all $\ell\ge 0$,
\[\cpx{(a3^k+1)3^\ell}=\cpx{a}+3k+3\ell+1 .\]
\item Suppose $ab$ is stable and $\cpx{ab}=\cpx{a}+\cpx{b}$. Then there exists
$K$ such that, for all $k\ge K$ and all $\ell\ge 0$,
\[\cpx{b(a3^k+1)3^\ell}=\cpx{a}+\cpx{b}+3k+3\ell+1 .\]
\end{enumerate}
Moreover, in both these cases, it is possible to algorithmically compute how
large $K$ needs to be.
\end{thm}
This theorem essentially resolves a conjecture of the second author
\cite[Conjecture~2]{Arias}, who suggested that given any two natural numbers $a$
and $b$, there exists $K$ such that for all $k\ge K$,
\[ \cpx{b(a3^k+1)} = \cpx{a} + \cpx{b} + 3k + 1. \]
This conjecture is too strong as stated, as there are cases where there turn out
to be simpler ways of writing the numbers in question. For instance, consider
the case of $b=2$, $a=1094$. Since
$\cpx{1094}=22$, if \cite[Conjecture~2]{Arias} were true as stated, then for $k$
sufficiently large, one would have
\[ \cpx{2(1094\cdot3^k+1)}=25 + 3k.\] However,
it turns out that $\cpx{2\cdot 1094}=\cpx{2188}=22$ as well (since $2188 = 3^7
+1$), which means that, for any $k\ge 0$,
\[ \cpx{2(1094\cdot3^k+1)} = \cpx{2188\cdot3^k + 2} \le 24 + 3k. \]
However, as our theorem above shows, the statement is true if we require that
$\cpx{ab}=\cpx{a}+\cpx{b}$, i.e., that there is no more efficient way of writing
$ab$ than factoring it into $a$ and $b$, and additionally require that $ab$ is
stable. The obvious modification for the case $b=1$ is also true.
We consider this modified version of the conjecture to essentially answer
the original question. There are other ways to modify
\cite[Conjecture~2]{Arias} by adding additional hypotheses. In
Section~\ref{secdragon} we give an analogue where complexities are off by one.
(We hope to prove other variants in a future paper \cite{deg1}.)
\subsubsection{The off-by-one case}
\label{secdragon}
We can also get an analogue of Theorem~\ref{cj2thm} that holds even if the
complexities are off by one, if the polynomial considered is just barely
insubstantial (see Section~\ref{secsubst}).
\begin{thm}[Off-by-one stability theorem]
\label{dragons}
We have:
\begin{enumerate}
\item Suppose $ab$ is stable, $\cpx{a}+\cpx{b}=\cpx{ab}+1$, and $b>1$.
Suppose further that $a$ is stable. Then there exists $K$ such that for all
$k\ge K$,
\[\cpx{b(a3^k+1)}=\cpx{a}+\cpx{b}+3k+1 .\]
\item Suppose further that $b$ is also stable. Then there exists $K$ such that
for all $k\ge K$ and $\ell\ge 0$,
\[\cpx{b(a3^k+1)3^\ell}=\cpx{a}+\cpx{b}+3k+3\ell+1 .\]
\end{enumerate}
Moreover, in both these cases, it is possible to algorithmically compute how
large $K$ needs to be.
\end{thm}
This theorem does not neatly fit into the framework laid out by the rest of the paper.
For instance, Theorem~\ref{cj2thm} is generalized beyond degree $1$ by
Proposition~\ref{usu}, and in a future paper \cite{hyperplanes} we will prove a
stronger generalization to higher degrees. But it is less clear how
Theorem~\ref{dragons} might be generalized to higher degrees; it is something of
an outlier. The idea behind the proof is related to
\cite[Conjectures~9--11]{Arias}. It remains open how to extend these
conjectures (proved here as Theorem~\ref{cj9corsep}) or this theorem to higher
degrees.
Note that one cannot extend this theorem to cases that are off by $2$, as
demonstrated by the case of $2(1094\cdot3^k+1)$ considered above. So, there is
a certain irregularity to this case. We will address the case of more general
degree-$1$ low-defect polynomials in a future paper \cite{deg1}.
\subsection{Self-similarity of the defect set}
The set $\mathscr{D}$ of all defects is a well-ordered
subset of the real line, with order type $\omega^\omega$ \cite{paperwo}.
Moreover, it is known that the supremum of the first $\omega^k$ elements of $\mathscr{D}$
is precisely $k$.
We introduce some notation in order to restate the well-ordering theorem in
terms of the closure $\overline{\D}$ of the defect set $\mathscr{D}$. This change of emphasis
will make many statements simpler; moreover, the set $\overline{\D}$ has a structural
characterization which will be given in Theorem~\ref{closure-intro}.
\begin{notn}
\label{index}
Given a well-ordered set $S$, we will use $S[\alpha]$ to denote the $\alpha$'th
element of $S$, and when $\alpha>0$, will use $S(\alpha)$ to denote
$S[-1+\alpha]$. (Here, for $\alpha>0$, $-1+\alpha$ denotes the unique $\beta$
such that $1+\beta=\alpha$.)
\end{notn}
The reason for this latter notation is that, for reasons that will become clear,
we want to think of closed sets as being $1$-indexed, rather than $0$-indexed.
So we will use the $S(\alpha)$ notation when $S$ is closed, and the
$S[\alpha]$ notation when $S$ is discrete, since we want to think of discrete
sets as being $0$-indexed, as is usual.
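Concretely, $S(1)=S[0]$ is the least element of $S$, $S(n+1)=S[n]$ for finite $n$, and at
infinite positions the two notations agree, e.g.\ $S(\omega)=S[\omega]$.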
So, with this notation, we may write
\begin{equation}
\label{omegak}
\overline{\D}(\omega^k) = k.
\end{equation}
This statement says that $\overline{\D}$, in addition to being a copy of
$\omega^\omega$, is in fact a copy of $\omega^\omega$ that is embedded in ${\mathbb R}$
in a somewhat nice way.
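For instance, the two smallest cases of \eqref{omegak} say that $\overline{\D}(1)=0$ and
$\overline{\D}(\omega)=1$: the least element of $\overline{\D}$ is the defect $0$ of the powers
of $3$, while (as the indexing of limit points discussed below makes precise) its least limit
point is $1$.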
Now, for another point of view on this equation, we might also write it as
\[\overline{\D}(\omega^k \cdot 1) = k = 0 + k = \overline{\D}(1) + k.\]
In this paper we prove a generalization, that shows that there is indeed much
more structure to the way that $\overline{\D}$ is embedded in ${\mathbb R}$:
\begin{thm}[Combined index-value shift relation]
\label{cj8weak-intro}
Given $1\le \alpha<\omega^\omega$ an ordinal and $k$ a whole number,
\[\overline{\D}(\omega^k \alpha)=\overline{\D}(\alpha)+k.\]
\end{thm}
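For instance, taking $\alpha=\omega^j$ in Theorem~\ref{cj8weak-intro} recovers \eqref{omegak},
since $\overline{\D}(\omega^{k+j})=\overline{\D}(\omega^k\cdot\omega^j)=\overline{\D}(\omega^j)+k=j+k$.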
However, this statement can still be strengthened. Theorem~\ref{cj8weak-intro}
discusses what happens for $\overline{\D}$ as a whole, but in previous papers the set
$\mathscr{D}$ was broken down into sets $\mathscr{D}^{u}$, where $\mathscr{D}^{u}$ is the set of
defects of numbers $n$ with $\cpx{n}\equiv u\pmod{3}$. (See \cite{paperwo} and
Section~1.5 of \cite{intdft} for more about the reason for this decomposition.)
So, what about the component
$\mathscr{D}^{u}$? Proving things about these can be more difficult than proving
things about $\mathscr{D}$ as a whole. In \cite{intdft} it was proven that
\begin{equation}
\label{omegak3}
\overline{\mathscr{D}^{u+k}}(\omega^k) = \overline{\mathscr{D}^{u}}(1) + k.
\end{equation}
But while proving $\overline{\D}(\omega^k) = k$ required only a crude version of the
method of low-defect polynomials, determining the value of $\overline{\mathscr{D}^{u}}(\omega^k)$
required further refinements.
This paper's methods, however, go further and are powerful enough to resolve
these component sets, proving \cite[Conjecture~8]{Arias}:
\begin{thm}[Split index-value shift relation]
\label{cj8}
Given $1\le \alpha<\omega^\omega$ an ordinal, $k$ a whole number, and $u$ a
congruence class modulo $3$,
\[\overline{\mathscr{D}^{u+k}}(\omega^k \alpha)=\overline{\mathscr{D}^{u}}(\alpha)+k.\]
\end{thm}
Note that we have rephrased this conjecture somewhat from its original language; see
the Appendix of \cite{paperwo}, as well as \cite{compactum}, for more on
translating between this language and the original, and see
Corollary~\ref{cj8orig} later in the paper for a form that is closer to the
original.
It is also worth noting here how we may rewrite Theorem~\ref{cj8weak-intro}.
This theorem may be equivalently stated:
\begin{thm}[Combined self-similarity theorem]
\label{selfsim-intro}
\[\overline{\D}' = \overline{\D} + 1\]
\end{thm}
Here $S'$ means the set of limit points of $S$. Of course, $\overline{\D}' = \mathscr{D}'$, but
we write $\overline{\D}'$ so that we have the same set on both sides of the equation.
It is this version of the theorem that gives the paper its title, as it means
that the set $\overline{\D}$ has a sort of ``self-similarity'' property; if one shifts
it over by $1$ one obtains its limit points. So, the original set $\overline{\D}$ can
be obtained by taking the set $\overline{\D}+1$ and attaching a ``tail'' to the left of
each point to make it a limit point (at least, if one knows where to put the
points in this ``tail'').
The result is striking if one looks at the sets $\overline{\D} \cap [k, k+1]$; or better
yet, if one translates them to obtain $(\overline{\D} \cap [k, k+1])-k$. Then each set
in the sequence looks like the previous, except that each point has sprouted a
tail and become a limit point.
Just as one can rewrite Theorem~\ref{cj8weak-intro} as
Theorem~\ref{selfsim-intro}, one may rewrite Theorem~\ref{cj8} similarly;
see Corollary~\ref{selfsim3}.
Moreover, while we will not show it in this paper, these theorems are actually
equivalent to a weak version of Theorem~\ref{cj2thm}. In this paper we will
prove this weak version as a side effect of our proof of
Theorem~\ref{cj8weak-intro}, but in fact either can be used to prove the other.
Also, as mentioned earlier, versions of \cite[Conjectures~9--11]{Arias} also
follow from Theorem~\ref{cj8} and Theorem~\ref{cj2thm}; we prove these later as
Theorem~\ref{cj9corsep}.
These theorems have conjectured analogues for some other well-ordered subsets of
the real numbers; see Section~\ref{analogues} and Appendix~\ref{addchains}.
\subsubsection{Defects of expressions and stable defects}
\label{dftexp}
Theorem~\ref{cj8weak-intro} has an important consequence, which we will prove
later:
\begin{thm}[Defect closure theorem]
\label{closure-intro}
\[\overline{\D} = \mathscr{D} + {\mathbb Z}_{\ge 0} \]
\end{thm}
This result has an interesting interpretation. Consider what an element $\eta$
of $\mathscr{D}+{\mathbb Z}_{\ge 0}$ looks like. Such a number may be written as $k-3\log_3
n$, where $k\ge \cpx{n}$. But this means that $k$ is the number of $1$'s used
in \emph{some} $(1,+,\cdot)$-expression, not necessarily minimal, for $n$.
Indeed, we could define $\eta$ to be the defect of the expression. That
is to say, given a $(1,+,\cdot)$-expression $E$, we could define $\cpx{E}$, the
complexity of $E$, to be the number of $1$'s it uses, and then define
\[\delta(E) = \cpx{E} - 3\log_3 \mathrm{val}(E),\]
where $\mathrm{val}(E)$ denotes the number that $E$ evaluates to.
Then, Theorem~\ref{closure-intro} states that the closure of the set of all
defects of \emph{numbers} is equal to the set of all defects of
\emph{expressions}.
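For example, the minimal expression $(1+1+1)(1+1+1)+1+1$ for $11$ has defect
$\delta(11)\approx 1.452$, while the expression consisting of a sum of eleven $1$'s also
evaluates to $11$ but has defect $11-3\log_3 11=\delta(11)+3$; both values lie in
$\mathscr{D}+{\mathbb Z}_{\ge 0}$.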
There is also an analogue of this result when we restrict to particular
congruence classes modulo $3$, which appears later as Corollary~\ref{closure3};
it may similarly be interpreted as stating that the closure $\overline{\mathscr{D}^{u}}$ is equal to
the set of defects $\delta(E)$ for $(1,+,\cdot)$-expressions $E$ with
$\cpx{E}\equiv u \pmod{3}$.
We can also give a stronger statement of Theorem~\ref{closure-intro}. To do
this, we need the notion of a \emph{stable} defect.
\begin{defn}
We define $\mathscr{D}_{\mathrm{st}}$, the set of stable defects, to be the set of $\delta(n)$ for all
stable natural numbers $n$.
\end{defn}
Many of the theorems stated above are more naturally stated in terms of $\overline{\Dst}$
rather than $\overline{\D}$, and will be proved in that form. However, these sets are
the same, yielding the stronger version of Theorem~\ref{closure-intro}:
\begin{thm}[Stable defect closure theorem]
\label{closure}
\[\overline{\D} = \overline{\Dst} = \mathscr{D} + {\mathbb Z}_{\ge 0} = \mathscr{D}_{\mathrm{st}} + {\mathbb Z}_{\ge 0} \]
\end{thm}
We will prove this in Section~\ref{corsec}. Note that, by Theorem~\ref{stab},
$\mathscr{D} \subseteq \mathscr{D}_{\mathrm{st}} + {\mathbb Z}_{\ge 0}$, so the hard part of this is proving that $\overline{\Dst}
= \mathscr{D}_{\mathrm{st}} + {\mathbb Z}_{\ge 0}$; the rest follows from that.
So while we have thus far discussed results in terms of defects $\mathscr{D}$ and
$\overline{\D}$, the rest of this paper will be primarily in terms of stable defects
$\mathscr{D}_{\mathrm{st}}$ and $\overline{\Dst}$.
On the whole, then, these results suggest that a point of view focused more on
defects of expressions, rather than defects of numbers, may be fruitful.
Indeed, Theorem~29 from \cite{paper1}, the theorem that provides the basis for
the method of low-defect polynomials which has been used to prove the
well-ordering results so far, could just as easily, and more strongly, be stated
in terms of expressions rather than in terms of numbers. This would then lead
directly to a proof that $\mathscr{D} + {\mathbb Z}_{\ge 0}$ is well-ordered, of order type
$\omega^\omega$, exactly the same as the proof for $\mathscr{D}$ in \cite{paperwo}. Of
course, there are other ways one could prove that statement without making such
modifications to the method, including, of course, noting as we do here that $\mathscr{D}
+ {\mathbb Z}_{\ge 0} = \overline{\Dst}$, although that is certainly a much more difficult route than
necessary if one merely wishes to prove the order type of $\mathscr{D} + {\mathbb Z}_{\ge 0}$. But the
point of view suggested by Theorem~\ref{closure} is an interesting one; and in a
future paper \cite{varx}, we will show it can indeed be a fruitful one.
There are drawbacks to this approach; for instance, this approach makes it
natural to talk about $\mathscr{D}_{\mathrm{st}}$ (as an element $\eta\in\overline{\D}$ satisfies
$\eta\in\mathscr{D}_{\mathrm{st}}$ if and only if $\eta$ is minimal among all elements of $\overline{\D}$
that are congruent to $\eta$ modulo $1$), but not to talk about $\mathscr{D}$. That
said, the fruitfulness of this approach would seem to indicate that $\mathscr{D}$ is not
always the set one wants to talk about.
\subsubsection{Comparison of $\overline{\D}$ to other well-ordered sets}
\label{analogues}
One can compare $\overline{\D}$ to other well-ordered subsets of $\mathbb{R}$,
particularly ones of order type $\omega^\omega$. One close analogue is to the
set of addition chain defects; we consider this in Appendix~\ref{addchains},
which see. Other points of comparison include the set of volumes of hyperbolic
$3$-manifolds, which is known to have order type $\omega^\omega$ \cite{3man},
and the set of commuting probabilities of finite groups, which is known to be
reverse-well-ordered, with order type the reverse of $\omega^\omega$
\cite{Eberhard, Browning}. Also of interest are the fusible numbers of order
type $\varepsilon_0$ \cite{fusibletype}. We might then ask, to what extent does
the equation $\overline{\D}'=\overline{\D}+1$ have an analogue in these other settings?
In the case of the set $\mathcal{P}$ of commuting probabilities, which is closed
under multiplication rather than addition (see Corollary~\ref{addclosed}), the
first limit point (going from the top) is $\frac{1}{2}$, so the analogue
would be $\overline{\mathcal{P}}'=\frac{1}{2}\overline{\mathcal{P}}$; however,
this cannot be true, as each $\frac{1}{p}$ is a limit point of $\mathcal{P}$
(where $p$ is any prime).
But in fact, $\mathcal{P}$ does satisfy a similar law; it simply takes a more
complicated form. Namely, as shown by Browning \cite{Browning}, one instead has
\[
\overline{\mathcal{P}}'=\bigcup_{p\ \mathrm{prime}}
\frac{1}{p}\overline{\mathcal{P}},
\]
and more generally $\overline{\mathcal{P}}^{(n)}$ follows a similar law. (It is
also worth noting here that Browning also showed that
$\overline{\mathcal{P}}=\mathcal{P}\cup\{0\}$.)
One way of thinking about this is in terms of ``primitive limit points''; the
law $\overline{\D}'=\overline{\D}+1$ says that, for the additively closed set $\overline{\D}$, the
number $1$ is the only limit point which cannot be written as $\eta_1+\eta_2$,
where at least one of the $\eta_i$ is a limit point and neither is zero; it is
the only ``primitive limit point''. Browning's result then says that, for the
multiplicatively closed set $\overline{\mathcal{P}}$, the points $\frac{1}{p}$
are the only limit points that cannot be written as $\eta_1\eta_2$, where at
least one of the $\eta_i$ is a limit point and neither is equal to one; so there
is one ``primitive limit point'' for each prime.
Meanwhile, for the fusible numbers, which have a much larger order type, such a
law must take a different form. Instead of $\overline{\D}'=\overline{\D}+1$, we look at
$\overline{\D}(\alpha)+1=\overline{\D}(\omega\alpha)$. In fact, while it is not proven, it is
conjectured \cite{fusibleintro} that the fusible numbers $\mathcal{F}$ satisfy
$\mathcal{F}(\alpha)+1=\mathcal{F}(\omega^\alpha)$, which would be quite
analogous.
As for the case of volumes of hyperbolic $3$-manifolds, it is unknown to the
present authors to what extent it may or may not satisfy a similar law.
However, there are some additional sets of interest, even if they may not in
fact be well-ordered.
One could define the set $\mathscr{D}^\pm$, the set of defects when subtraction
is allowed. This set will not be well-ordered; for instance, the sequence
$\delta^\pm(3^k-1)$ will approach $1$ from above. Despite this, one might ask the
question, does this set still obey the law $\overline{\mathscr{D}^{\pm}}'=\overline{\mathscr{D}^{\pm}}+1$? If so,
this would be an analogy in a non-well-ordered set. One might also ask the same
question of $\mathscr{D}^{\pm,\ell}$, the set of defects arising from
addition-subtraction chains (see Appendix~\ref{addchains}).
If such an equation does hold, it would suggest that the order types of these
sets might be the same as that of the set of Pisot-Vijayaraghavan numbers, as
determined by Boyd and Mauldin \cite{pisot}. However these sets could also have
this order type without satisfying such a law; indeed, it is unknown to the
authors whether the set of PV numbers itself satisfies such a law.
\subsection{The method: Low-defect polynomials, substantial
polynomials, and small exceptional sets}
\label{substintro}
In order to prove Theorems~\ref{cj2thm} and \ref{cj8}, we introduce a
notion we call substantial low-defect polynomials.
A low-defect polynomial is a particular type of multilinear polynomial,
introduced in \cite{paperwo} and expanded upon in \cite{theory}, used for
studying numbers with defect below a given bound; see Section~\ref{dftsec} for
details. In \cite{paperwo} it was proved that, given any positive real number
$s$, one can write down a finite set of low-defect polynomials ${\mathcal S}$ such that
every number $n$ with $\delta(n)\le s$ can be written in the form
$f(3^{n_1},\ldots,3^{n_d})3^{n_{d+1}}$ for some $f\in{\mathcal S}$; and that, moreover,
such an $n$ can always be represented ``efficiently'' in such a fashion.
Moreover, it was shown in \cite{theory} that one can choose ${\mathcal S}$ such that for
any $f\in{\mathcal S}$, one has $\deg f\le s$. (Note that the degree of a low-defect
polynomial is always equal to the number of variables it is in, since low-defect
polynomials are multilinear and always include a term containing all the
variables.)
The defects arising from a low-defect polynomial $f$ are bounded above by a
quantity we denote $\delta(f)$ (see Definition~\ref{deltaf}). In \cite{theory} it
was shown that this quantity satisfies the inequality
\begin{equation}
\label{dftineq}
\delta(f) \ge \delta(a) + \deg f \ge \delta_{\mathrm{st}}(a) + \deg f,
\end{equation}
where $a$ is the leading coefficient of $f$. The inequality $\deg f\le
s$ mentioned above is actually a consequence of the inequality $\delta(f) \le s$,
combined with \eqref{dftineq}. We will define a substantial
polynomial to be a polynomial that saturates \eqref{dftineq}, one where
\[ \delta(f) = \delta_{\mathrm{st}}(a) + \deg f. \]
We call such polynomials ``substantial'' because, as we will show in
Proposition~\ref{cj9prop}, one has that $f$ is substantial if and only if the
degree of $f$ is maximal among all low-defect polynomials $g$ with
$\delta(g)=\delta(f)$. Lower-degree polynomials will have their contributions to
the clustering of defects below $\delta(f)$ absorbed and overshadowed by those of
larger degree, rendering them ``insubstantial''.
However, even a substantial polynomial $f$ will only affect the clustering of
defects below $\delta(f)$ if its defects do indeed approach $\delta(f)$, rather than
capping out at some smaller $\delta(f)-k$. We will show, though, as
Proposition~\ref{usu}, that the former case always occurs: if $f$ is substantial,
then numbers coming from it will ``usually'' have the expected complexity and
defect, and the set of exceptions is ``small'' in an appropriate sense.
Theorem~\ref{cj2thm} is then simply the special case of substantial polynomials
of degree $1$ (if we ignore computability).
Note that we only prove that the exceptional set is small in a fairly weak
sense; however, we will prove in a future paper \cite{hyperplanes} that the
exceptional set is in fact small in a stronger sense.
\subsection{Structure of the paper}
Section~\ref{dftsec} reviews preliminaries, both from previous papers on the
integer complexity defect and more general preliminaries from topology and order
theory. Section~\ref{secsubst} introduces substantial polynomials and how they
work, and then Section~\ref{secproof} uses them to prove Theorem~\ref{cj8} and
related statements, including a weak version of Theorem~\ref{cj2thm}, and
\cite[Conjectures~9--11]{Arias}. Finally
Section~\ref{deg1} focuses on the degree $1$ case; it proves
Theorem~\ref{cj2thm}, Theorem~\ref{dragons}, and corollaries of these.
\section{Preliminaries}
\label{dftsec}
In this section we will review existing results on defects and low-defect
polynomials, as well as some other preliminary results we will need.
\subsection{The defect, stable defect, and stable complexity}
We start with some basic facts about the defect:
\begin{prop}[{\cite[Theorem~2.1]{theory}}]
\label{oldprops}
We have:
\begin{enumerate}
\item For all $n$, $\delta(n)\ge 0$.
\item For $k\ge 0$, $\delta(3^k n)\le \delta(n)$, with equality if and only if
$\cpx{3^k n}=3k+\cpx{n}$. The difference $\delta(n)-\delta(3^k n)$ is a nonnegative
integer.
\item A number $n$ is stable if and only if for any $k\ge 0$, $\delta(3^k
n)=\delta(n)$.
\item If the difference $\delta(n)-\delta(m)$ is rational, then $n=m3^k$ for some
integer $k$ (and so $\delta(n)-\delta(m)\in\mathbb{Z}$).
\item Given any $n$, there exists $k$ such that $3^k n$ is stable.
\item For a given defect $\alpha$, the set $\{m: \delta(m)=\alpha \}$ has either
the form $\{n3^k : 0\le k\le L\}$ for some $n$ and $L$, or the form $\{n3^k :
0\le k\}$ for some $n$. The latter occurs if and only if $\alpha$ is the
smallest defect among $\delta(3^k n)$ for $k\in \mathbb{Z}$.
\item If $\delta(n)=\delta(m)$, then $\cpx{n}\equiv\cpx{m} \pmod{3}$.
\item $\delta(1)=1$, and for $k\ge 1$, $\delta(3^k)=0$. No other integers occur as
$\delta(n)$ for any $n$.
\item If $\delta(n)=\delta(m)$ and $n$ is stable, then so is $m$.
\end{enumerate}
\end{prop}
We will want to consider the set of all defects:
\begin{defn}
We define the \emph{defect set} $\mathscr{D}$ to be $\{\delta(n):n\in{\mathbb N}\}$, the
set of all defects.
In addition, for $u$ a congruence class modulo $3$, we
define \[\mathscr{D}^u = \{\delta(n):n>1,\ \cpx{n}\equiv u\pmod{3}\}.\]
\end{defn}
\begin{prop}
\label{disjbasic}
For distinct congruence classes $u$ modulo $3$, the sets $\mathscr{D}^u$ are
disjoint.
\end{prop}
\begin{proof}
This follows from part (7) of Proposition~\ref{oldprops}.
\end{proof}
The paper \cite{paperwo} also defined the notion of a \emph{stable defect}:
\begin{defn}
We define a \emph{stable defect} to be the defect of a stable number, and define
$\mathscr{D}_\mathrm{st}$ to be the set of all stable defects. Also, for $u$ a
congruence class modulo $3$, we define $\mathscr{D}^u_\mathrm{st}=\mathscr{D}^u \cap
\mathscr{D}_\mathrm{st}$.
\end{defn}
Because of part (9) of Proposition~\ref{oldprops}, this definition makes sense; a
stable defect $\alpha$ is not just one that is the defect of some stable number,
but one for which any $n$ with $\delta(n)=\alpha$ is stable. Stable defects can
also be characterized by the following proposition from \cite{paperwo}:
\begin{prop}[{\cite[Proposition~2.4]{theory}}]
\label{modz1}
A defect $\alpha$ is stable if and only if it is the smallest
$\beta\in\mathscr{D}$ such that $\beta\equiv\alpha\pmod{1}$.
\end{prop}
We can also define the \emph{stable defect} of a given number, which we denote
$\delta_\mathrm{st}(n)$.
\begin{defn}
For a positive integer $n$, define the \emph{stable defect} of $n$, denoted
$\delta_\mathrm{st}(n)$, to be $\delta(3^k n)$ for any $k$ such that $3^k n$ is stable.
(This is well-defined: if $3^k n$ and $3^\ell n$ are both stable, then $k\ge \ell$
implies $\delta(3^k n)=\delta(3^\ell n)$, and $\ell\ge k$ implies this as well.)
\end{defn}
Note that the statement ``$\alpha$ is a stable defect'', which earlier we were
thinking of as ``$\alpha=\delta(n)$ for some stable $n$'', can also be read as the
equivalent statement ``$\alpha=\delta_\mathrm{st}(n)$ for some $n$''.
Similarly we have the stable complexity:
\begin{defn}
For a positive integer $n$, define the \emph{stable complexity} of $n$, denoted
$\cpx{n}_\mathrm{st}$, to be $\cpx{3^k n}-3k$ for any $k$ such that $3^k n$ is stable.
\end{defn}
We then have the following facts relating the notions of $\cpx{n}$, $\delta(n)$,
$\cpx{n}_\mathrm{st}$, and $\delta_\mathrm{st}(n)$:
\begin{prop}
\label{stoldprops}
We have:
\begin{enumerate}
\item $\delta_\mathrm{st}(n)= \min_{k\ge 0} \delta(3^k n)$
\item $\delta_\mathrm{st}(n)$ is the smallest $\alpha\in\mathscr{D}$ such that
$\alpha\equiv \delta(n) \pmod{1}$. In particular, if two stable defects are
congruent modulo $1$, then they are equal.
\item $\cpx{n}_\mathrm{st} = \min_{k\ge 0} (\cpx{3^k n}-3k)$
\item $\delta_\mathrm{st}(n)=\cpx{n}_\mathrm{st}-3\log_3 n$
\item $\delta_\mathrm{st}(n) \le \delta(n)$, with equality if and only if $n$ is stable.
\item $\cpx{n}_\mathrm{st} \le \cpx{n}$, with equality if and only if $n$ is stable.
\item $\cpx{3n}_\mathrm{st} = \cpx{n}_\mathrm{st}+3$
\item If $\delta_\mathrm{st}(n)=\delta_\mathrm{st}(m)$, then $\cpx{n}_\mathrm{st}\equiv\cpx{m}_\mathrm{st}\pmod{3}$.
\item $\cpx{nm}_\mathrm{st} \le \cpx{n}_\mathrm{st} + \cpx{m}_\mathrm{st}$
\end{enumerate}
\end{prop}
\begin{proof}
Parts (1)-(8) are Proposition~2.7 from \cite{intdft}. Part (9) is Proposition~9
from Section~7 of \cite{paperalg}.
\end{proof}
Remember that $1$ is not stable, so one has $\cpx{1}=1$ and $\cpx{1}_\mathrm{st}=0$.
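For instance, $\cpx{3\cdot 1}=3<\cpx{1}+3$, so $1$ is not stable, while $3$ is; hence
$\delta_\mathrm{st}(1)=\delta(3)=0$ and $\cpx{1}_\mathrm{st}=\cpx{3}-3=0$, even though
$\delta(1)=\cpx{1}=1$.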
Note, by the way, that just as $\mathscr{D}_\mathrm{st}$ can be characterized either as
defects $\delta(n)$ with $n$ stable or as defects $\delta_\mathrm{st}(n)$ for any $n$,
$\mathscr{D}^u_\mathrm{st}$ can be characterized either as defects $\delta(n)$ with $n$
stable and $\cpx{n}\equiv u\pmod{3}$, or as defects $\delta_\mathrm{st}(n)$ for any $n$
with $\cpx{n}_\mathrm{st}\equiv u\pmod{3}$.
We also make the following definition:
\begin{defn}
For $n\in {\mathbb N}$, define $\Delta(n) = \delta(n)-\delta_\mathrm{st}(n) = \cpx{n}-\cpx{n}_\mathrm{st}$.
\end{defn}
By the above, one always has $\Delta(n)\in{\mathbb Z}_{\ge 0}$.
Also, in order to further discuss stabilization, it is useful here to define:
\begin{defn}
\label{defk}
Given $n\in {\mathbb N}$, define $K(n)$ to be the smallest $k$ such that $n3^k$ is
stable.
\end{defn}
Then it was shown in \cite{paperalg} that:
\begin{thm}
\label{computk}
The function $K$ is computable; the function $n\mapsto\cpx{n}_\mathrm{st}$ is
computable; and the set of stable numbers is a computable set.
\end{thm}
We will use this later in proving that the bounds in Theorem~\ref{cj2thm} and
its variants in Section~\ref{deg1} can be computed.
There is one final fact about stability that we will use repeatedly.
\begin{prop}[{\cite[Section~7, Corollary~1]{paperalg}}]
We have:
\label{goodfac}
\begin{enumerate}
\item If $N$ is stable, $N=n_1\cdots n_k$, and
$\cpx{N}=\cpx{n_1}+\ldots+\cpx{n_k}$, then the $n_i$ are also stable.
\item If $n_i$ are stable numbers, $N=n_1\cdots n_k$, and
$\cpx{N}_\mathrm{st}=\cpx{n_1}_\mathrm{st}+\ldots+\cpx{n_k}_\mathrm{st}$, then $N$ is also stable.
\end{enumerate}
\end{prop}
\subsection{Low-defect polynomials and the exceptional set}
\label{secpoly}
We represent the set of numbers with defect at most $r$ by substituting in
powers of $3$ into certain multilinear polynomials we call \emph{low-defect
polynomials}. We will associate with each one a ``base complexity'' to form a
\emph{low-defect pair}. These notions can also be formalized in terms of
\emph{low-defect expression} or \emph{low-defect trees}, which we will discuss
shortly.
\begin{defn}
\label{polydef}
We define the set $\mathscr{P}$ of \emph{low-defect pairs} as the smallest
subset of ${\mathbb Z}[x_1,x_2,\ldots]\times {\mathbb N}$ such that:
\begin{enumerate}
\item For any constant polynomial $k\in {\mathbb N}\subseteq{\mathbb Z}[x_1, x_2, \ldots]$ and any
$C\ge \cpx{k}$, we have $(k,C)\in \mathscr{P}$.
\item Given $(f_1,C_1)$ and $(f_2,C_2)$ in $\mathscr{P}$, we have $(f_1\otimes
f_2,C_1+C_2)\in\mathscr{P}$, where, if $f_1$ is in $d_1$ variables and $f_2$ is
in $d_2$ variables,
\[ (f_1\otimes f_2)(x_1,\ldots,x_{d_1+d_2}) :=
f_1(x_1,\ldots,x_{d_1})f_2(x_{d_1+1},\ldots,x_{d_1+d_2}). \]
\item Given $(f,C)\in\mathscr{P}$, $c\in {\mathbb N}$, and $D\ge \cpx{c}$, we have
$(f\otimes x_1 + c,C+D)\in\mathscr{P}$ where $\otimes$ is as above.
\end{enumerate}
The polynomials obtained this way will be referred to as \emph{low-defect
polynomials}. If $(f,C)$ is a low-defect pair, $C$ will be called its
\emph{base complexity}. If $f$ is a low-defect polynomial, we will define its
\emph{absolute base complexity}, denoted $\cpx{f}$, to be the smallest $C$ such
that $(f,C)$ is a low-defect pair.
We will also associate to a low-defect polynomial $f$ the \emph{augmented
low-defect polynomial}
\[ \xpdd{f} = f\otimes x_1; \]
if $f$ is in $d$ variables, this is $fx_{d+1}$.
\end{defn}
So, for instance, $(3x_1+1)x_2+1$ is a low-defect polynomial, as is
$(3x_1+1)(3x_2+1)$, as is $(3x_1+1)(3x_2+1)x_3+1$, as is
\[2((73(3x_1+1)x_2+6)(2x_3+1)x_4+1).\] In this paper we will primarily concern
ourselves with low-defect pairs $(f,C)$ where $C=\cpx{f}$, so in much of what
follows, we will dispense with the formalism of low-defect pairs and just discuss
low-defect polynomials.
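For instance, the first of these polynomials can be built up explicitly: $(3,3)$ is a low-defect
pair by rule (1); applying rule (3) with $c=1$ gives the pair $(3x_1+1,\,4)$; and applying rule
(3) once more gives $((3x_1+1)x_2+1,\,5)$. In particular $\cpx{(3x_1+1)x_2+1}\le 5$, its leading
coefficient (the coefficient of $x_1x_2$) is $3$, and its augmented version is
$\xpdd{(3x_1+1)x_2+1}=((3x_1+1)x_2+1)x_3$.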
Note that the degree of a low-defect polynomial is also equal to the number of
variables it uses; see Proposition~\ref{polystruct}.
Also note that augmented low-defect polynomials are never themselves low-defect
polynomials; as we will see in a moment (Proposition~\ref{polystruct}),
low-defect polynomials always have nonzero constant term, whereas augmented
low-defect polynomials always have zero constant term. We can also observe
that low-defect polynomials are in fact read-once polynomials as discussed in
for instance \cite{ROF}.
Note that we do not really care about what variables a low-defect polynomial is
in -- if we permute the variables of a low-defect polynomial or replace them
with others, we will still regard the result as a low-defect polynomial. From
this perspective, the meaning of $f\otimes g$ could be simply regarded as
``relabel the variables of $f$ and $g$ so that they do not share any, then
multiply $f$ and $g$''. Helpfully, the $\otimes$ operator is associative not
only with this more abstract way of thinking about it, but also in the concrete
way it was defined above.
From \cite{paperwo}, we have the following propositions about low-defect
polynomials:
\begin{prop}[{\cite[Proposition~4.2]{paperwo}}]
\label{polystruct}
Suppose $f$ is a low-defect polynomial of degree $d$. Then $f$ is a
polynomial in the variables $x_1,\ldots,x_d$, and it is a multilinear
polynomial, i.e., it has degree $1$ in each of its variables. The coefficients
are non-negative integers. The constant term is nonzero, and so is the
coefficient of $x_1\cdots x_d$, which we will call the \emph{leading
coefficient} of $f$.
\end{prop}
\begin{prop}[{\cite[Proposition~2.10]{theory}}]
\label{basicub}
If $f$ is a low-defect polynomial of degree $d$, then
\[\cpx{f(3^{n_1},\ldots,3^{n_d})}\le \cpx{f}+3(n_1+\ldots+n_d).\]
and
\[\cpx{\xpdd{f}(3^{n_1},\ldots,3^{n_{d+1}})}\le \cpx{f}+3(n_1+\ldots+n_{d+1}).\]
\end{prop}
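For instance, for $f=(3x_1+1)x_2+1$ we noted above that $\cpx{f}\le 5$, so the first bound gives
$\cpx{f(3,3)}=\cpx{31}\le 5+3(1+1)=11$; this is witnessed by the expression
$((1{+}1{+}1)(1{+}1{+}1)+1)(1{+}1{+}1)+1$.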
Because of this, it makes sense to define:
\begin{defn}
Given a low-defect pair $(f,C)$ (say of degree $r$) and a number $N$, we will
say that $(f,C)$ \emph{efficiently $3$-represents} $N$ if there exist
nonnegative integers $n_1,\ldots,n_r$ such that
\[N=f(3^{n_1},\ldots,3^{n_r})\ \textrm{and}\ \cpx{N}=C+3(n_1+\ldots+n_r).\]
We will say $(\xpdd{f},C)$ efficiently
$3$-represents $N$ if there exist $n_1,\ldots,n_{r+1}$ such that
\[N=\xpdd{f}(3^{n_1},\ldots,3^{n_{r+1}})\ \textrm{and}\
\cpx{N}=C+3(n_1+\ldots+n_{r+1}).\]
More generally, we will also say $f$ $3$-represents $N$ if there exist
nonnegative integers $n_1,\ldots,n_r$ such that $N=f(3^{n_1},\ldots,3^{n_r})$,
and similarly with $\xpdd{f}$.
\end{defn}
Note that if $(f,C)$ (or $(\xpdd{f},C)$) efficiently $3$-represents some $N$,
then $(f,\cpx{f})$ (respectively, $(\xpdd{f},\cpx{f})$) efficiently
$3$-represents $N$, which means that in order for $(f,C)$ (or $(\xpdd{f},C)$) to
$3$-represent anything efficiently at all, we must have $C=\cpx{f}$. However it
is still worth using low-defect pairs rather than just low-defect polynomials
since we may not always know $\cpx{f}$. In our applications here, where we wish
to perform computations by means of these objects, taking the time to compute
$\cpx{f}$, rather than just making do with an upper bound, may not be desirable.
For this reason it makes sense to use ``$f$ efficiently $3$-represents $N$'' to
mean ``some $(f,C)$ efficiently $3$-represents $N$'' or equivalently
``$(f,\cpx{f})$ efficiently $3$-represents $N$''. Similarly with $\xpdd{f}$.
In keeping with the name, numbers $3$-represented by low-defect polynomials, or
their augmented versions, have bounded defect. Let us make some definitions
first:
\begin{defn}
\label{deltaf}
Given a low-defect pair $(f,C)$, we define $\delta(f,C)$, the defect of $(f,C)$,
to be $C-3\log_3 a$, where $a$ is the leading coefficient of $f$. We will also
define $\delta(f)$ to mean $\delta(f,\cpx{f})$, since much of the time we will not
be concerned with keeping track of base complexities.
\end{defn}
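For example, the low-defect pair $((3x_1+1)x_2+1,\,5)$ considered above has leading coefficient
$3$, so its defect is $\delta((3x_1+1)x_2+1,5)=5-3\log_3 3=2$.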
One fact about defects of polynomials that does not seem to have been noted
previously is the following:
\begin{prop}
\label{fdft}
Let $(f,C)$ and $(g,D)$ be low-defect pairs. If $\delta(f,C)=\delta(g,D)$, then
$C\equiv D \pmod{3}$.
\end{prop}
\begin{proof}
The proof is exactly the same as the proof of part (7) from
Proposition~\ref{oldprops}. If $\delta(f,C)=\delta(g,D)$, then
\[ C - 3\log_3 a = D - 3\log_3 b, \]
where $a$ and $b$ are the leading coefficients of $f$ and $g$, respectively.
So $C-D = 3\log_3(\frac{a}{b})\in {\mathbb Z}$, and so in particular it is a rational
number, meaning $\log_3(\frac{a}{b})$ is in turn a rational number, which can
only happen if it is in fact an integer, so $3\mid C-D$ as required.
\end{proof}
\begin{defn}
\label{fdftdef}
Given a low-defect pair $(f,C)$ of degree $r$, we define
\[\delta_{f,C}(n_1,\ldots,n_r) =
C+3(n_1+\ldots+n_r)-3\log_3 f(3^{n_1},\ldots,3^{n_r}).\]
We will also define $\delta_f$ to mean $\delta_{f,\cpx{f}}$ when we are not
concerned with keeping track of base complexities.
\end{defn}
Then we have:
\begin{prop}[{\cite[Proposition~2.15]{intdft}}]
\label{dftbd}
Let $(f,C)$ be a low-defect pair of degree $r$, and let $n_1,\ldots,n_{r+1}$ be
nonnegative integers.
\begin{enumerate}
\item We have
\[ \delta(\xpdd{f}(3^{n_1},\ldots,3^{n_{r+1}}))\le \delta_{f,C}(n_1,\ldots,n_r)\]
and the difference is an integer.
\item We have \[\delta_{f,C}(n_1,\ldots,n_r)\le\delta(f,C)\]
and if $r\ge 1$, this inequality is strict.
\item The function $\delta_f$ is strictly increasing in each variable, and
\[ \delta(f) = \sup_{n_1,\ldots,n_r} \delta_f(n_1,\ldots,n_r).\]
\end{enumerate}
\end{prop}
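To illustrate parts (2) and (3) for the pair $(f,5)$ with $f=(3x_1+1)x_2+1$: one computes
$\delta_{f,5}(0,0)=5-3\log_3 5\approx 0.61$ and $\delta_{f,5}(1,1)=11-3\log_3 31\approx 1.62$,
and as $n_1,n_2\to\infty$ the values $\delta_{f,5}(n_1,n_2)$ increase to the bound
$\delta(f,5)=2$ without ever attaining it.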
The defects we get from a low-defect polynomial $f$ form a well-ordered set of
order type approximately $\omega^d$:
\begin{prop}[{\cite[Proposition~2.16]{intdft}}]
\label{indivtype}
Let $f$ be a low-defect polynomial of degree $d$. Then:
\begin{enumerate}
\item The image of $\delta_f$ is a well-ordered subset of $\mathbb{R}$, with
order type $\omega^d$.
\item The set of $\delta(N)$ for all $N$ $3$-represented by the augmented
low-defect polynomial $\xpdd{f}$ is a well-ordered subset of $\mathbb{R}$, with
order type at least $\omega^d$ and at most $\omega^d(\lfloor \delta(f)
\rfloor+1)<\omega^{d+1}$. The same is true if $f$ is used instead of the
augmented version $\xpdd{f}$.
\end{enumerate}
\end{prop}
The reason we care about low-defect polynomials is that all numbers of
sufficiently low defect can be efficiently $3$-represented by them. First,
some definitions:
\begin{defn}
A natural number $n$ is called a \emph{leader} if it is the smallest number with
a given defect. By part (6) of Proposition~\ref{oldprops}, this is equivalent
to saying that either $3\nmid n$, or, if $3\mid n$, then $\delta(n)<\delta(n/3)$,
i.e., $\cpx{n}<3+\cpx{n/3}$.
\end{defn}
\begin{defn}
For any real $s\ge0$, define the set $\overline{A}_s$ to be
\[\overline{A}_s := \{n\in\mathbb{N}:\delta(n)\le s\}.\]
Define the set $\overline{B}_s$ to be
\[
\overline{B}_s:= \{n \in \overline{A}_s : n\ \mbox{is a leader}\}.
\]
\end{defn}
Obviously, one has:
\begin{prop}
\label{arbr}
For every $n\in \overline{A}_r$, there exists a unique $m\in \overline{B}_r$ and
$k\ge 0$ such that $n=3^k m$ and $\delta(n)=\delta(m)$; then $\cpx{n}=\cpx{m}+3k$.
\end{prop}
\begin{proof}
This is exactly Proposition~2.6 from \cite{paperwo}, except that we are looking
at $n$ with $\delta(n)\le r$, instead of $\delta(n)<r$.
\end{proof}
Then, it was shown in \cite{theory} (Theorem~A.7) that:
\begin{thm}
\label{covering}
For any real $s\ge 0$, there exists a finite set ${\mathcal S}_s$ of low-defect pairs
satisfying the following conditions:
\begin{enumerate}
\item For any $n\in \overline{B}_s$, there is some low-defect pair in ${\mathcal S}_s$
that efficiently $3$-represents $n$.
\item Each pair $(f,C)\in {\mathcal S}_s$ satisfies $\delta(f,C)\le s$, and hence $\deg
f\le \lfloor s\rfloor$.
\end{enumerate}
We refer to such a set ${\mathcal S}_s$ as a \emph{good covering} of $\overline{B}_s$.
\end{thm}
Moreover, it was shown in \cite{paperalg} (see Algorithm~5 and the appendix)
that:
\begin{thm}
\label{goodcomput}
Given a real number $s$ of the form $q+r\log_3 n$ with $n\in {\mathbb N}$, $q,r\in {\mathbb Q}$,
it is possible to algorithmically compute a good covering of $\overline{B}_s$.
\end{thm}
(The assumption on the form of $s$ here is inessential and is just to restrict
to a computable subset of real numbers; one can state the theorem more generally
than this.)
In this paper we will be concerned with proving that certain low-defect pairs,
the \emph{substantial} low-defect pairs, efficiently $3$-represent ``most'' of
the numbers that they $3$-represent. In order to discuss this, it helps to make
the following definition:
\begin{defn}
\label{except}
Let $(f,C)$ be a low-defect pair of degree $d$. Define its
\emph{exceptional set} to be
\[
\{(n_1,\ldots,n_d): \cpx{f(3^{n_1},\ldots,3^{n_d})}_\mathrm{st}<C+3(n_1+\ldots+n_d)\}
\]
We will also say ``the exceptional set of $f$'' to simply mean the exceptional
set of $(f,\cpx{f})$.
\end{defn}
Finally, one more property of low-defect polynomials we will need is the
following:
\begin{prop}[{\cite[Proposition~3.24]{theory}}]
\label{ineq}
Let $f$ be a low-defect polynomial, and suppose that $a$ is the leading
coefficient of $f$. Then $\cpx{f}\ge \cpx{a} + \deg f$, which also implies
$\cpx{f}\ge \cpx{a}_\mathrm{st} + \deg f$.
In particular, $\delta(f) \ge \delta(a) + \deg f$ and
$\delta(f) \ge \delta_\mathrm{st}(a) + \deg f$.
\end{prop}
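For example, for $f=(3x_1+1)x_2+1$ the leading coefficient is $a=3$ and $\deg f=2$, so
Proposition~\ref{ineq} gives $\cpx{f}\ge\cpx{3}+2=5$; combined with the upper bound
$\cpx{f}\le 5$ obtained earlier, this shows $\cpx{f}=5$, and hence
$\delta(f)=5-3\log_3 3=2=\delta_\mathrm{st}(3)+\deg f$, so this particular $f$ saturates the
inequality \eqref{dftineq} from the introduction.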
With this, we have the preliminary notions and terminology out of the way.
However, we will also take a moment to discuss some alternate formalisms.
\subsection{Low-defect expressions and trees}
Now, it is mathematically convenient to phrase things in terms of polynomials,
but sometimes we want a finer-grained view of things. Rather than look at a
polynomial $f$, we may want to look at the expression that gives rise to it.
That is to say, if we have a low-defect polynomial $f$, it was constructed
according to rules (1)--(3) in Definition~\ref{polydef}; each of these rules
though gives a way not just of building up a polynomial, but an expression. For
instance, we can build up the polynomial $4x+2$ by using rule (1) to make $2$,
then using rule (3) to make $2x+1$, then using rule (2) to make $2(2x+1)=4x+2$.
The polynomial $4x+2$ itself does not remember its history, of course; but
perhaps we want to remember its history -- in which case we do not want to consider
the \emph{polynomial} $4x+2$, but rather the \emph{expression} $2(2x+1)$, which
is different from the expression $4x+2$.
So, with that, we define:
\begin{defn}
A \emph{low-defect expression} is defined to be an expression in positive
integer constants, $+$, $\cdot$, and some number of variables, constructed
according to the following rules:
\begin{enumerate}
\item Any positive integer constant $c$ by itself forms a low-defect expression.
\item Given two low-defect expressions using disjoint sets of variables, their
product is a low-defect expression. If $E_1$ and $E_2$ are low-defect
expressions, we will use $E_1 \otimes E_2$ to denote the low-defect expression
obtained by first relabeling their variables to be disjoint and then multiplying
them.
\item Given a low-defect expression $E$, a positive integer constant $c$, and a
variable $x$ not used in $E$, the expression $E\cdot x+c$ is a low-defect
expression. (We can write $E\otimes x+c$ if we do not know in advance that $x$
is not used in $E$.)
\end{enumerate}
\end{defn}
And, naturally, we also define:
\begin{defn}
We define an \emph{augmented low-defect expression} to be an expression of the
form $E\cdot x$, where $E$ is a low-defect expression and $x$ is a variable not
appearing in $E$. If $E$ is a low-defect expression, we also denote the
augmented low-defect expression $E\otimes x$ by $\xpdd{E}$.
\end{defn}
It is clear from the definitions that evaluating a low-defect expression yields
a low-defect polynomial, and that evaluating an augmented low-defect expression
yields an augmented low-defect polynomial. Note also that low-defect
expressions are read-once expressions, so, as mentioned earlier, low-defect
polynomials are read-once polynomials.
All of the results of Section~\ref{secpoly}, which were stated in terms of
low-defect pairs, can instead be stated in terms of low-defect expressions,
though we will not restate them in this way here. Note that for this we need
the notion of the complexity of a low-defect expression:
\begin{defn}
We define the complexity of a low-defect expression $E$, denoted $\cpx{E}$, as
follows:
\begin{enumerate}
\item If $E$ is a positive integer constant $n$, we define $\cpx{E}=\cpx{n}$.
\item If $E$ is of the form $E_1 \cdot E_2$, where $E_1$ and $E_2$ are
low-defect expressions, we define $\cpx{E}=\cpx{E_1}+\cpx{E_2}$.
\item If $E$ is of the form $E' \cdot x + c$, where $E'$ is a low-defect
expression, $x$ is a variable, and $c$ is a positive integer constant, we define
$\cpx{E}=\cpx{E'}+\cpx{c}$.
\end{enumerate}
\end{defn}
In addition, we can helpfully represent a low-defect expression by a rooted
tree, with the vertices and edges both labeled by positive integers. Some
information is lost in this representation, but nothing of much relevance. This
representation does away with such problems as, for instance, $4$ and $2\cdot 2$
being separate expressions. In addition, trees can be treated more easily
combinatorially, which will prove useful in a sequel paper \cite{seqest}.
\begin{defn}
Given a low-defect expression $E$, we define a corresponding \emph{low-defect
tree} $T$, which is a rooted tree where both edges and vertices are labeled with
positive integers. We build this tree as follows:
\begin{enumerate}
\item If $E$ is a constant $n$, $T$ consists of a single vertex labeled with
$n$.
\item If $E=E'\cdot x + c$, with $T'$ the tree for $E'$, $T$ consists of $T'$
with a new root attached to the root of $T'$. The new root is labeled with a
$1$, and the new edge is labeled with $c$.
\item If $E=E_1 \cdot E_2$, with $T_1$ and $T_2$ the trees for $E_1$ and $E_2$
respectively, we construct $T$ by ``merging'' the roots of $T_1$ and $T_2$ --
that is to say, we remove the roots of $T_1$ and $T_2$ and add a new root, with
edges to all the vertices adjacent to either of the old roots; the new edge
labels are equal to the old edge labels. The label of the new root is equal
to the product of the labels of the old roots.
\end{enumerate}
\end{defn}
We can define an associated base complexity for these too:
\begin{defn}
The complexity of a low-defect tree, $\cpx{T}$, is defined to be the smallest
$\cpx{E}$ among all low-defect expressions yielding $T$.
\end{defn}
We also, for expressions and trees, have the following concrete expression for
the complexity:
\begin{prop}[{\cite[Proposition~3.23]{theory}}]
\label{treecpx}
We have:
\begin{enumerate}
\item Let $E$ be a low-defect expression. Then $\cpx{E}$ is equal to the sum of
the complexities of all the integer constants occurring in $E$.
\item Let $T$ be a low-defect tree. Then
\[ \cpx{T} = \sum_{e\ \textrm{an edge}} \cpx{w(e)} +
\sum_{v\ \textrm{a leaf}} \cpx{w(v)}
+ \sum_{\substack{v\ \textrm{a non-leaf vertex}\\ w(v)>1}} \cpx{w(v)},\]
where $w$ denotes the label of the given vertex or edge.
\end{enumerate}
\end{prop}
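For example, the expression $2(2x+1)$ discussed earlier yields the tree built as follows: the
constant $2$ gives a single vertex labeled $2$; forming $2x+1$ attaches a new root labeled $1$ to
it by an edge labeled $1$; and multiplying by $2$ merges in the one-vertex tree of the constant
$2$, so the final tree has a root labeled $2\cdot 1=2$, one edge labeled $1$, and one leaf labeled
$2$. Part (2) then gives $\cpx{T}=\cpx{1}+\cpx{2}+\cpx{2}=5$, matching the sum of the
complexities of the constants appearing in $2(2x+1)$, in accordance with part (1).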
Also worth noting is the following:
\begin{prop}
\label{treelead}
Let $T$ be a low-defect tree, let $V$ and $E$ be its vertex set and edge set,
let $f$ be the low-defect polynomial arising from it, and let $N$ be its leading
coefficient. Then $N$ is equal to the product of all vertex labels in $T$, and
$\deg f = |V|-1 = |E|$.
\end{prop}
\begin{proof}
This is just Proposition~3.14 from \cite{theory} combined with
Proposition~\ref{polystruct} above.
\end{proof}
Note that for a low-defect polynomial $f$, $\cpx{f}$ can equivalently be
characterized as the smallest $\cpx{E}$ among all low-defect expressions $E$
yielding $f$, or as the smallest $\cpx{T}$ among all low-defect trees $T$ yielding $f$.
So, we get a chain from most information preserved to least information
preserved. Most specific is the low-defect expression $E$; this can then be
represented by a tree $T$; this can then be evaluated to get a polynomial $f$,
which we can associate with a base complexity $\cpx{T}$ to get the low-defect
pair $(f,\cpx{T})$; and finally we can just look at $f$ itself, getting the
low-defect pair $(f,\cpx{f})$.
In truth, we could add a few more steps here, such as a tree-pair $(T,C)$ or
expression-pair $(E,C)$; or, most specific of all, a low-defect expression $E$
where each numerical constant $n$ is replaced by a specific
$(1,+,\cdot)$-expression that represents it. But none of this will be necessary
here; expressions and trees will suffice for now.
\subsection{Some topological and order preliminaries}
Before we proceed, we should make notes of some facts from topology and order
theory that we will need.
\begin{prop}[\cite{MOtopologies}]
\label{topologies}
Let $X$ be a totally ordered set with the least upper bound property, and let
$S\subseteq X$ be closed. Then the subspace topology and the order topology on
$S$ coincide.
\end{prop}
This proposition allows us to ignore questions of what topology we are using.
One key reason we need this proposition is for the following:
\begin{prop}
\label{limitpoints}
Let $X$ be a totally ordered set with the least upper bound property, and let
$S\subseteq X$ be a closed, well-ordered set. Then the limit points of $S$
within $X$ are the points of $S$ of the form $S(\omega\alpha)$ for ordinals
$\alpha$.
\end{prop}
\begin{proof}
The set $\{S(\omega\alpha): \omega\le\omega\alpha<\type(S)\}$ is the set of
limit points of $S$ within itself under its order topology, and by
Proposition~\ref{topologies}, this coincides with the subspace topology it
inherits as a subset of $X$. Since $S$ is closed in $X$, this is the same as
the set of limit points of $S$ within $X$.
\end{proof}
We will need to know a few more things about well-orders, closures, and limit
points.
First off, we need to know how indices in the closure relate to indices in the
original set. We will focus on the case where the original set is discrete, as
will be the case for the sets we consider. We will make use of the following
fact, a proof of which can be found in \cite{adcwo}, where it is
Proposition~5.5:
\begin{prop}
\label{closuretype}
Let $X$ be a totally ordered set, and let $S$ be a well-ordered subset of order
type $\alpha$. Then $\overline{S}$ is also well-ordered, and has order type
either $\alpha$ or $\alpha+1$. If $\alpha=\gamma+k$ where $\gamma$ is a limit
ordinal and $k$ is finite, then $\overline{S}$ has order type $\alpha+1$ if and
only if the initial segment of $S$ of order type $\gamma$ has a supremum in $X$
which is not in $S$.
\end{prop}
With this we prove:
\begin{prop}
\label{barshift}
Let $X$ be a totally-ordered set with the least upper bound property, and let
$S\subseteq X$ be well-ordered and discrete. Suppose $\alpha$ is an ordinal
less than the order type of $S$.
Then \[ S[\alpha] = \overline{S}(\alpha+1). \]
\end{prop}
\begin{proof}
Let $\eta = S[\alpha]$, and look at $S_\eta = \{ \zeta\in S: \zeta<\eta \}$ and
at $\overline{S}_\eta = \{ \zeta\in\overline{S}: \zeta<\eta \}$; note that
since subspace topologies commute with closures, $\overline{S}_\eta$ is in fact
the closure of $S_\eta$. Now, certainly $S_\eta$ has order type $\alpha$. So
by Proposition~\ref{closuretype}, $\overline{S}_\eta$ has order type either $\alpha$ or
$\alpha+1$.
If $\alpha$ is infinite, write $\alpha=\gamma+k$ with $\gamma$ a
limit ordinal and $k$ finite; then the initial segment of $S$ of order type
$\gamma$ is bounded above by $\eta$, so it has a supremum in $X$ (since $X$ has
the least upper bound property), and this supremum lies outside $S$ by the
assumption that $S$ is discrete. So $\overline{S}_\eta$ has order type
$\alpha+1$, which since $\alpha$ is infinite is equal to $-1+(\alpha+1)$; that
is, $S[\alpha]=\overline{S}(\alpha+1)$.
On the other hand, if $\alpha$ is finite, then $S_\eta$ and $\overline{S}_\eta$ are
certainly equal, so both have order type $\alpha$. Since $\alpha$ is finite,
$\alpha+1=1+\alpha$, and so $S[\alpha]=\overline{S}(\alpha+1)$.
\end{proof}
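As a quick illustration of Proposition~\ref{barshift}, take $X=\mathbb{R}$
(which has the least upper bound property) and
\[ S=\{1-1/m : m\ge 1\}\cup\{3-1/m : m\ge 1\}, \]
a discrete well-ordered set of order type $\omega2$ whose closure is
$\overline{S}=S\cup\{1,3\}$. Then $S[\omega]=2$, while the new limit point $1$
occupies position $\overline{S}(\omega)$ in the closure, so that indeed
$S[\omega]=\overline{S}(\omega+1)=2$.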
We will additionally need to know how the type of limit that can occur at a
point relates to its index.
\begin{notn}
Suppose $S$ is a well-ordered set of order type $\alpha>0$, and write the
Cantor normal form of $\alpha$ as $\alpha=\omega^{\alpha_0}a_0 + \ldots +
\omega^{\alpha_r}a_r$. Then we will define $\ord \alpha = \alpha_r$.
\end{notn}
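For example, if $\alpha=\omega^3+\omega^2\cdot 2$ then $\ord\alpha=2$; if
$\alpha=\omega^2+\omega+5$ then $\ord\alpha=0$, since the final term of the
Cantor normal form is $\omega^0\cdot 5$; and $\ord\omega^k=k$ for any $k$.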
It is a well-known fact from order theory that $\omega^{\ord \alpha}$ is the
smallest order type among nonzero final segments of $\alpha$. We will require
another characterization:
\begin{prop}
\label{ordercof}
Suppose $S$ is a well-ordered set of order type $\alpha>0$. Then
$\ord\alpha$ is equal to the largest $\beta$ such that $S$ has a cofinal subset
of order type $\omega^\beta$.
\end{prop}
While this fact is also well known, we did not find a good reference for it so
we supply a proof of our own.
\begin{proof}
Firstly, $S$ has a cofinal subset of order type $\omega^{\ord \alpha}$, because
the final Cantor block is such a set. It thus only remains to show that nothing
larger is possible.
Suppose $T$ is cofinal in $S$, and that the order type of $T$ is equal to
$\omega^\beta$. Let $F$ be the final Cantor block of $S$. Then the order type
of $T\cap F$ is at most $\omega^{\ord\alpha}$. But since $F$ is a final segment
of $S$, $T\cap F$ is a final segment of $T$; and since $T$ is cofinal in $S$, it
is a nonempty final segment. Since the order type of $T$ is a power of
$\omega$, all nonempty final segments of $T$, including $T\cap F$, have that
same order type, $\omega^\beta$. Therefore, $\beta\le\ord\alpha$.
\end{proof}
The main reason we care about this is the following:
\begin{prop}
\label{order}
Let $X$ be a totally-ordered set with the least upper bound property, and let
$S\subseteq X$ be well-ordered. Consider a point $\eta\in\overline{S}$, and
write $\eta=\overline{S}(\alpha)$, so $\alpha>0$.
Then $\ord\alpha$ is the largest $\beta$ such that there is a subset of $S$
of order type $\omega^\beta$ with supremum equal to $\eta$. In particular,
$\ord\alpha = 0$ if and only if $\eta$ is an isolated point of $S$, and
$\ord\alpha > 0 $ if and only if $\eta$ is a limit point of $S$.
\end{prop}
\begin{proof}
First, if $T$ is a subset of $S$ with $\sup T=\eta$ and with order type equal to
$\omega^\beta$, then $T$ is also a subset of $\overline{S}$, so
$\beta\le\ord\alpha$ by Proposition~\ref{ordercof}.
Now, for the reverse, let $U$ be the final Cantor block of
$\overline{S}\cap(-\infty,\eta]$, and let $T=U\cap S$, so $U=\overline{T}$. So
$U$ has order type $\omega^{\ord\alpha}$. Thus, by
Proposition~\ref{closuretype}, $T$ must also have order type
$\omega^{\ord\alpha}$ (including if $\ord\alpha=0$, since if $T$ has order type
$0$ rather than $1$, so would $U$).
Also, $\eta\in U=\overline{T}$, so we must have $\sup T=\eta$. So $T$ is a
subset of $S$ of order type $\omega^{\ord\alpha}$ with $\sup T=\eta$,
proving the claim.
\end{proof}
Finally, one last key fact that we will use about well-orders is the following:
\begin{prop}
\label{cutandpaste}
We have:
\begin{enumerate}
\item If $S$ is a well-ordered set and $S=S_1\cup\ldots\cup S_n$, and $S_1$
through $S_n$ all have order type less than $\omega^k$, then so does $S$.
\item If $S$ is a well-ordered set of order type $\omega^k$ and
$S=S_1\cup\ldots\cup S_n$, then at least one of $S_1$ through $S_n$ also has
order type $\omega^k$.
\end{enumerate}
\end{prop}
Proofs of the more general statements of which this is a special case can be
found in \cite{carruth} and \cite{wpo}; this particular statement is deduced
from those more powerful principles as Proposition~5.4 in \cite{paperwo}.
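To illustrate Proposition~\ref{cutandpaste} in the simplest case $k=1$: a set
of order type less than $\omega$ is just a finite set, so part~(1) says that a
finite union of finite sets is finite; and part~(2) says that if a set of order
type $\omega$ (such as the natural numbers split into their even and odd
elements) is written as a finite union, then at least one piece is infinite and
hence again has order type $\omega$.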
\section{Substantial polynomials}
\label{secsubst}
The key new concept in this paper is that of the substantial low-defect
polynomial.
\begin{defn}
Let $(f,C)$ be a low-defect pair, and let $a$ be the leading coefficient of $f$.
We will say $(f,C)$ is \emph{substantial} if $C = \cpx{a}_\mathrm{st} + \deg f$. We
will say $f$ is substantial if $\cpx{f} = \cpx{a}_\mathrm{st} + \deg f$. (Since for a
low-defect pair to be substantial we must have $C=\cpx{f}$ by
Proposition~\ref{ineq}, we will often just talk about substantial
polynomials and ignore the formalism of pairs.)
\end{defn}
We call such polynomials (or pairs) ``substantial'' because, among all
low-defect pairs $(f,C)$ with a fixed value of $\delta(f,C)$, these are the ones
of maximum degree (see also Section~\ref{substintro}).
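For a first example, note that $2$ is stable and $\cpx{2}=2$ (the bound
$\cpx{m}\ge 3\log_3 m$ forces $\cpx{2\cdot 3^k}=3k+2$ for all $k$), so the pair
$(2x_1+1,\,3)$ is substantial: its leading coefficient is $2$, its degree is
$1$, and $3=\cpx{2}_\mathrm{st}+1$. By contrast, $(x_1+1,\,2)$ is not
substantial, since its leading coefficient $1$ is not stable
($\cpx{3^k}=3k<\cpx{1}+3k$ for $k\ge 1$), so that $\cpx{1}_\mathrm{st}=0$ and
$\cpx{1}_\mathrm{st}+1=1\ne 2$.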
\begin{prop}
\label{polydft}
Suppose $(f,C)$ is a low-defect pair; then we may write $\delta(f,C)=\eta+k$,
where $\eta$ is a stable defect and $k$ is a whole number.
\end{prop}
\begin{proof}
Since $\delta(f,C) = C-3\log_3 a$ where $C\ge\cpx{a}$ (by Proposition~\ref{ineq}),
we have
\[ \delta(f,C) = (C-\cpx{a}) + \delta(a) = (C-\cpx{a}) + \Delta(a) + \delta_\mathrm{st}(a). \]
\end{proof}
\begin{prop}
\label{cj9prop}
Suppose $(f,C)$ is a low-defect pair, and write $\delta(f,C)=\delta(q)+k$, where $q$
is stable and $k$ is a whole number. Let $a$ be the leading coefficient of $f$.
Then
\[ \deg f + (C-\deg f-\cpx{a}_\mathrm{st}) = k .\]
In particular, we always have $\deg f\le k$, and $(f,C)$ is substantial if and
only if $\deg f=k$.
\end{prop}
\begin{proof}
Since $C-3\log_3 a = \delta(q)+k$, we have that $\delta(a) \equiv \delta(q) \pmod{1}$.
So, by part (2) of Proposition~\ref{stoldprops}, since $q$ is stable, we have
$\delta_\mathrm{st}(a)=\delta(q)$.
Therefore,
\[C-\cpx{a}_\mathrm{st} = C - 3\log_3 a -\delta_\mathrm{st}(a)= k,\]
as required. The second part is then just Proposition~\ref{ineq}
and the definition of ``substantial''.
\end{proof}
The question then is, how can we determine whether a given low-defect pair is
substantial? One easy criterion is the following:
\begin{prop}
\label{basecase}
If $\delta(f,C)<\deg f+1$, then $(f,C)$ is substantial.
\end{prop}
\begin{proof}
If $\delta(f,C)<\deg f+1$, then (letting $a$ be the leading coefficient of $f$)
we have
\[ C-\deg f-\cpx{a}_\mathrm{st} = \delta(f,C) -\delta_\mathrm{st}(a)-\deg f
< 1-\delta_\mathrm{st}(a)\le 1,\] so $C-\deg f -\cpx{a}_\mathrm{st} =0$, that is, $(f,C)$ is
substantial.
\end{proof}
However, substantial polynomials can be much more general than this. The
easiest way to tell if a low-defect polynomial is substantial is if we know how
it was formed.
\begin{prop}
\label{substrecur}
Let $(f,C)$ be a low-defect pair. Then:
\begin{enumerate}
\item If $f$ is a constant $n$ and $C=\cpx{n}$, then $(f,C)$ is substantial if
and only if $n$ is stable.
\item If $f = g_1 \otimes g_2$ and $C=D_1+D_2$, and $a$ is the leading
coefficient of $f$ and $b_i$ is the leading coefficient of $g_i$, then $(f,C)$
is substantial if and only if $(g_1, D_1)$ is substantial, $(g_2, D_2)$ is
substantial, $a$ is stable, and $\cpx{a}=\cpx{b_1}+\cpx{b_2}$.
\item If $f = g \otimes x + c$, and $C=D+\cpx{c}$, then $(f,C)$ is substantial
if and only if $(g,D)$ is substantial and $c=1$.
\end{enumerate}
\end{prop}
It is worth remembering here (per Theorem~\ref{computk}) that it can be computed
algorithmically whether a given number is stable or not.
\begin{proof}
For part (1), the leading coefficient of $f$ is $n$, and $\deg f=0$, so $(f,C)$
is substantial if and only if $C=\cpx{n}_\mathrm{st}+\deg f=\cpx{n}_\mathrm{st}$; since
$C=\cpx{n}$, this holds if and only if $n$ is stable.
For part (2), $(f, C)$ is substantial if and only if
\[
D_1 + D_2 = \cpx{b_1 b_2}_\mathrm{st} + \deg g_1 + \deg g_2.
\]
By part (9) of Proposition~\ref{stoldprops}, we know that $\cpx{b_1 b_2}_\mathrm{st} \le
\cpx{b_1}_\mathrm{st} + \cpx{b_2}_\mathrm{st}$, so in particular for $(f,C)$ to be substantial
we must have
\[
D_1 + D_2 \le \cpx{b_1}_\mathrm{st} + \cpx{b_2}_\mathrm{st} + \deg g_1 + \deg g_2.
\]
But we know by Proposition~\ref{ineq} that $D_i \ge \cpx{b_i}_\mathrm{st} + \deg g_i$,
so the only way this can happen is if $D_i = \cpx{b_i}_\mathrm{st} + \deg g_i$, and both
sides of the equation are in fact equal. So we conclude that each $(g_i, D_i)$
is substantial, and moreover that $\cpx{a}_\mathrm{st} = \cpx{b_1}_\mathrm{st} + \cpx{b_2}_\mathrm{st}$.
Since $(f,C)$, $(g_1,D_1)$, and $(g_2, D_2)$ are all substantial, we know that
$a$, $b_1$, and $b_2$ are all stable, and so we conclude that $\cpx{a} =
\cpx{b_1} + \cpx{b_2}$.
This shows that if $(f,C)$ is substantial, then the conditions of part (2) are
satisfied; and checking the converse is straightforward.
For part (3), let $a$ be the leading coefficient of $g$, which is also the
leading coefficient of $f$; then $(f, C)$ is substantial if and only if
\[
D + \cpx{c} = \cpx{a}_\mathrm{st} + \deg g + 1.
\]
We know that $D\ge \cpx{a}_\mathrm{st} + \deg g$, and that $\cpx{c}=1$, so this holds if
and only if $D = \cpx{a}_\mathrm{st} + \deg g$ (i.e., $(g,D)$ is substantial) and
$\cpx{c}=1$ (i.e., $c=1$).
\end{proof}
So, for instance, we can conclude:
\begin{prop}
\label{exists}
If $\eta$ is a stable defect (say $\eta=\delta(q)$ for $q$ stable) and $k$ a whole
number, then there is a substantial polynomial $f$ with leading coefficient $q$
such that $\delta(f)=\eta+k$ and $\deg f=k$.
\end{prop}
\begin{proof}
Since $q$ is stable, by repeatedly applying Proposition~\ref{substrecur}, the
polynomial $(((qx_1+1)x_2+1)\cdots)x_k+1$ is substantial.
\end{proof}
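For instance, with the stable number $q=2$ and $k=2$, this construction gives
the substantial polynomial $f(x_1,x_2)=(2x_1+1)x_2+1$, which has
$\cpx{f}=\cpx{2}+2=4$, leading coefficient $2$, degree $2$, and
$\delta(f)=\delta(2)+2$.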
It makes sense to define substantiality for low-defect trees and expressions,
too:
\begin{defn}
If $E$ is a low-defect expression, then we define $E$ to be substantial if
$(f,\cpx{E})$ is substantial, where $f$ is the low-defect polynomial obtained by
evaluating $E$. Similarly, if $T$ is a low-defect tree, we define $T$ to be
substantial if $(f,\cpx{T})$ is substantial, where $f$ is the low-defect
polynomial arising from $T$.
\end{defn}
We will not actually require these latter notions for our proof, but they are
useful to keep in mind in order to get a picture of what substantial polynomials
look like and how we can form them. Obviously the analogues of
Proposition~\ref{substrecur} will work just as well for expressions or trees; we
will not repeat the proof here. Note that for trees, we can to some extent read
substantiality directly off the tree:
\begin{prop}
\label{substree}
Let $T$ be a low-defect tree with vertex set $V$, and let $N$ be the product of
the vertex labels. Then $T$ is substantial if and only if all edge labels are
$1$, no leaf vertex labels are equal to $1$, $N$ is stable, and \[ \cpx{N} =
\sum_{\substack{v\in V \\ w(v)> 1}} \cpx{w(v)}, \]
where $w(v)$ denotes the label of $v$.
\end{prop}
\begin{proof}
By Proposition~\ref{treelead}, the leading coefficient of the polynomial
corresponding to $T$ is equal to $N$, and its degree is equal to the number of
edges $|E|$
(i.e., one less than the number of vertices). So $T$ is substantial if and only
if $\cpx{T}=\cpx{N}_\mathrm{st} + |E|$. Apply Proposition~\ref{treecpx}, and note
firstly
that $|E|\le\sum \cpx{w(e)}$, with equality if and only if $\cpx{w(e)}=1$,
i.e.~$w(e)=1$, for all edges $e$;
and secondly that since \[ N= \prod_{\substack{v\in V \\ w(v)> 1}} w(v), \]
we also have that $\cpx{N}$ is at most the rest of the sum, with equality if
and only if
\[ \cpx{N} = \sum_{\substack{v\in V \\ w(v)> 1}} \cpx{w(v)}; \]
and none of the leaf labels are $1$; and of course $\cpx{N}_\mathrm{st}\le\cpx{N}$ with
equality if and only if $N$ is stable.
\end{proof}
Again, this is useful for getting a concrete idea of what substantial
polynomials look like. We will be particularly concerned with the degree-$1$
case:
\begin{prop}
\label{1subst}
A low-defect polynomial of degree $1$ is substantial if and only if it can be
written as $ax+1$, for $a$ stable, or $b(ax+1)$, where $ab$ is stable and
$\cpx{ab}=\cpx{a}+\cpx{b}$.
\end{prop}
\begin{proof}
This is immediate from Proposition~\ref{substrecur} or
Proposition~\ref{substree}.
\end{proof}
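For example (granting the values $\cpx{2}=2$, $\cpx{4}=4$, $\cpx{8}=6$ and the
stability of $8$, all of which are forced by the bound $\cpx{m}\ge 3\log_3 m$),
the polynomial $2(4x+1)$ is substantial: here $a=4$, $b=2$, the product $ab=8$
is stable, and $\cpx{8}=\cpx{2}+\cpx{4}$.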
\section{Proving the main theorems}
\label{secproof}
We now begin to prove the main theorem. We start by proving a restricted
version of Theorem~\ref{closure}, where we show that $\overline{\Dst} \subseteq \mathscr{D}+{\mathbb N}n$,
together with its analogue for when we split into congruence classes.
\begin{prop}
\label{subclosure}
We have:
\begin{enumerate}
\item $\overline{\Dst} \subseteq \mathscr{D}st + {\mathbb N}n$.
\item Say $q$ is a stable number, $k\in {\mathbb N}n$, and let $\eta=\delta(q)+k$. If
$\eta\in\overline{\Dst}a{u}$, then $u \equiv \cpx{q} + k \pmod{3}$.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose $\eta\in \overline{\Dst}$. Now, we know that the set $\mathscr{D}st$ is well-ordered, so
this means that $\eta\in \overline{\mathscr{D}st\cap[0,\eta]}$. That is to say,
\[\eta = \sup (\mathscr{D}st\cap[0,\eta]).\]
Choose a good covering
${\mathcal S}$ of $\overline{B}_\eta$ (see Theorem~\ref{covering}). Now, we know that
\[\mathscr{D}st\cap[0,\eta]\subseteq
\bigcup_{(f,C)\in{\mathcal S}}
\{\delta(f(3^{k_1},\ldots,3^{k_d})): k_i\in{\mathbb N}n,\ d=\deg f\}.
\]
So, $\sup(\mathscr{D}st\cap[0,\eta])$ is at most the maximum of the suprema of these
individual sets. But, applying Proposition~\ref{dftbd}, this means that
\[\eta = \sup (\mathscr{D}st\cap[0,\eta]) \le \max_{(f,C)\in{\mathcal S}} \delta(f,C)\le \eta,\]
where the last inequality comes from the definition of ${\mathcal S}$ (see
Theorem~\ref{covering}, part (2)).
Therefore,
\[\eta = \max_{(f,C)\in{\mathcal S}} \delta(f,C);\]
or, in other words, there is some $(f,C)\in{\mathcal S}$ such that $\delta(f,C)=\eta$.
So, if $b$ is the leading coefficient of this $f$, then by
Proposition~\ref{ineq}, we have $C=\cpx{b}+\ell$ for some $\ell\in{\mathbb N}n$, and so
\[ \eta = C - 3\log_3 b = \ell + \delta(b) = \delta_\mathrm{st}(b) + k, \]
where $k=\ell+\Delta(b)$. This proves part (1).
For the second part, define
\[\zeta = \max (\{ \delta(f,C): (f,C)\in {\mathcal S},\ \delta(f,C)<\eta \} \cup
\{0\}).\]
We will show that if $\delta(n)\in(\zeta,\eta]$, then
$\cpx{n}\equiv\cpx{q}+k\pmod{3}$. Since $\mathscr{D}$ is well-ordered, there is also
some interval $(\eta,\theta)$ that is free of defects not equal to $\eta$, so
this will show that $\eta\notin\overline{\Dst}a{u}$ for $u\not\equiv\cpx{q}+k\pmod{3}$.
So, suppose $\delta(n)\in(\zeta,\eta]$. Then $n$ is efficiently $3$-represented
by some $(f,C)\in{\mathcal S}$; and since $\zeta<\delta(n)\le\delta(f,C)$, this means we must
have $\delta(f,C)=\eta$. Note that since $n$ is efficiently $3$-represented by
$(f,C)$, this means we have $\cpx{n}\equiv C\pmod{3}$.
For comparison, let us consider the polynomial
\[ g(x_1,\ldots,x_k) = (((qx_1+1)x_2+1)\cdots)x_k+1 \]
and the low-defect pair $(g,\cpx{q}+k)$; note that $\delta(g,\cpx{q}+k)=\eta$.
Since $\delta(f,C)=\delta(g,\cpx{q}+k)$, we conclude by
Proposition~\ref{fdft} that $C\equiv\cpx{q}+k\pmod{3}$.
So, if $\delta(n)\in(\zeta,\eta]$, then
\[ \cpx{n}\equiv C\equiv\cpx{q}+k\pmod{3}. \]
As noted above, this proves the second part of the theorem.
\end{proof}
Now we ask, given a point in $\overline{\Dst}\subseteq \mathscr{D}+{\mathbb N}n$, what can we say about the
type of limit leading up to it? That is to say, the points leading up to it
will form a set of order type $\omega^m$ for some $m$; the question is, what is
$m$? We will answer this question shortly, but for now we can only put an upper
bound on it.
\begin{prop}
\label{step3}
Let $q$ be a stable number, let $\eta=\delta(q)+k$ for some $k\in {\mathbb N}n$, and
suppose $\eta = \overline{\Dst}(\alpha)$. Then $\ord \alpha\le k$. Moreover, if
$\eta=\overline{\Dst}a{\cpx{q}+k}(\beta)$, then $\ord\beta\le k$.
\end{prop}
\begin{proof}
We prove both parts simultaneously, using Proposition~\ref{order}.
Let ${\mathcal S}$ be a good covering of $\overline{B}_\eta$, and let
\[\zeta = \max (\{ \delta(f,C): (f,C)\in {\mathcal S},\ \delta(f,C)<\eta \} \cup
\{0\}).\]
Let us examine $\mathscr{D}\cap(\zeta,\eta]$. If $\delta(n)\in(\zeta,\eta]$, then $n$ is
efficiently $3$-represented by $(\hat{f},C)$ for some $(f,C)\in {\mathcal S}$. Since
$\delta(n)>\zeta$, we must have $\delta(f,C)\ge \delta(n)>\zeta$ and so
$\delta(f,C)=\eta$. Also, since the $3$-representation is efficient, we have that
$\delta(n)$ is a value of $\delta_{f,C}$.
Therefore, $\mathscr{D}\cap(\zeta,\eta]$ is contained in the union of the images of
finitely many $\delta_{f,C}$, with each $(f,C)$ having $\delta(f,C)=\eta$. But by
Proposition~\ref{cj9prop}, this means that we have $\deg f\le k$ for each of
these, and so, by Propositions~\ref{indivtype} and \ref{cutandpaste}, the order
type of $\mathscr{D}\cap(\zeta,\eta]$ is strictly less than $\omega^{k+1}$.
So, if $S$ is a subset of $\mathscr{D}st$ or $\mathscr{D}a{u}$ with supremum equal to $\eta$ and
order type $\omega^\gamma$, we must have $\gamma\le k$, as otherwise
$S\cap(\zeta,\eta]$ would have this same order type, but the latter's order type
must be less than $\omega^{k+1}$. Therefore, by Proposition~\ref{order}, we
have $\ord\alpha\le k$ and $\ord\beta\le k$.
\end{proof}
We now prove a crucial fact about substantial polynomials: They usually give the
right complexity (and their outputs are usually stable). That is, if $f$ is a
low-defect polynomial, then its exceptional set (Definition~\ref{except}) is
small; most inputs do not lie in the exceptional set.
However, this proposition will only prove that the exceptional set is small in a
weak sense. When $\deg f=1$, we conclude that the exceptional set is finite,
which is about as strong a conclusion as one could expect, and we will elaborate
more on this case in Section~\ref{deg1}. But when $\deg f>1$, the resulting
conclusion is quite weak. Fortunately, it is possible to prove that the
exceptional set is small in a much stronger sense, and we will do this in a
subsequent paper \cite{hyperplanes}. For now, though, this weak notion of a
small exceptional set will be all that we prove and all that we need.
\begin{prop}
\label{usu}
Suppose $f$ is a substantial low-defect polynomial of degree $k$, and let $S$ be
its exceptional set. Then the order type of $\delta_f(S)$ is less than
$\omega^k$. Equivalently, the order type of the set
\[ E := \{ \delta_\mathrm{st}(f(3^{n_1},\ldots,3^{n_k})) : (n_1,\ldots,n_k)\in S\} \]
is less than $\omega^k$, as is the order type of the set
\[ \{ \delta(f(3^{n_1},\ldots,3^{n_k})) : (n_1,\ldots,n_k)\in S\}. \]
\end{prop}
\begin{proof}
The order type of $\delta_f({\mathbb N}n^k)$ is at most $\omega^k$ by
Proposition~\ref{indivtype}. We want to show that the order type of $\delta_f(S)$
is strictly less than this, so assume the contrary, that they are equal.
Now, for any
$(n_1,\ldots,n_k)\in S$, we have
\[ \delta_\mathrm{st}(f(3^{n_1},\ldots,3^{n_k})) = \delta_f(n_1,\ldots,n_k) - \ell \]
for some whole number $\ell$ with $1\le \ell \le \lfloor \delta(f)\rfloor$. (Here
we know $\ell \ge 1$ by the assumption that $(n_1,\ldots,n_k)\in S$, and we know
$\ell\le\lfloor\delta(f)\rfloor$ by Proposition~\ref{dftbd}.)
So define
\[ S_\ell :=
\{ (n_1,\ldots,n_k)\in S :
\delta_\mathrm{st}(f(3^{n_1},\ldots,3^{n_k})) = \delta_f(n_1,\ldots,n_k)-\ell
\}.
\]
Then if we define $E_\ell = \delta_f(S_\ell)-\ell$, we can write
\[ E = \bigcup_{1\le\ell\le\lfloor\delta(f)\rfloor} E_\ell .\]
By our assumption that $\delta_f(S)$ has order type $\omega^k$, the same must be
true for at least one $E_\ell$ by Proposition~\ref{cutandpaste}. However, since
$E_\ell+\ell$ and $\delta_f(S)$ both have order type $\omega^k$, the former must
be cofinal in the latter; so $\sup E_\ell = \delta(f) - \ell$, and therefore
$\delta(f)-\ell\in\overline{\Dst}$.
Suppose that $f$ has leading coefficient $q$, so $\delta(f)=\delta(q)+k$ with
$q$ stable. Then $\delta(f)-\ell = \delta(q) + k - \ell$; since $\ell\ge 1$, this
means that $\delta(f) - \ell = \delta(q) + r$ for some integer $r$ strictly less
than $k$.
But by Propositions~\ref{stoldprops}, \ref{step3}, and \ref{subclosure}, this is
impossible. If $r<0$, then by Proposition~\ref{subclosure} and part (2) of
Proposition~\ref{stoldprops}, the point $\delta(q)+r$ cannot lie in $\overline{\Dst}$ at all.
While if $r\ge 0$, if we write $\delta(q)+r = \overline{\Dst}(\alpha)$, then by
Proposition~\ref{step3} and part (2) of Proposition~\ref{stoldprops}, we must
have $\ord\alpha\le r<k$, but (applying Proposition~\ref{order}) we have just shown
$\ord\alpha\ge k$.
Therefore, no $E_\ell$ can have order type $\omega^k$; and therefore neither can
$E$; and therefore neither can $\delta_f(S)$. Moreover, neither can
\[ \{ \delta(f(3^{n_1},\ldots,3^{n_k})) : (n_1,\ldots,n_k)\in S\}, \]
as it is also covered by finitely many translates of $\delta_f(S)$.
\end{proof}
We now prove a stronger version of Proposition~\ref{step3}, where we bootstrap
its inequalities into equations. We can now say exactly what type of limit we
are looking at.
\begin{thm}
\label{precursor}
Let $q$ be a stable number and let $\eta=\delta(q)+k$ for some $k\in {\mathbb N}n$. Then
$\eta\in\overline{\Dst}a{\cpx{q}+k}$. Moreover, if we write $\eta = \overline{\Dst}(\alpha)$, then
$\ord \alpha = k$. Similarly, if we write $\eta=\overline{\Dst}a{\cpx{q}+k}(\beta)$, then
$\ord\beta = k$.
\end{thm}
\begin{proof}
By Proposition~\ref{exists}, we can choose a substantial polynomial $f$ with
leading coefficient $q$ with $\delta(f)=\eta$ and degree $k$. Let $S$ be its
exceptional set. By Proposition~\ref{indivtype}, the order type of
$\delta_f({\mathbb N}n^k)$ is $\omega^k$, but by Proposition~\ref{usu}, the order type of
$\delta_f(S)$ is strictly less than $\omega^k$; by Proposition~\ref{cutandpaste},
this implies that $\delta_f({\mathbb N}n^k\setminus S)$ has order type $\omega^k$ as well.
Moreover, $\delta_f({\mathbb N}n^k\setminus S)$ must obviously be cofinal within
$\delta_f({\mathbb N}n^k)$, as otherwise it would have strictly smaller order type;
therefore its supremum is (by Proposition~\ref{dftbd}) equal to $\delta(f)=\eta$.
But for $(n_1,\ldots,n_k)\notin S$, we have, by definition, that
\[
\delta(f(3^{n_1},\ldots,3^{n_k})) = \delta_f(n_1,\ldots,n_k),
\]
and that moreover this defect is a stable one. Moreover, if
$(n_1,\ldots,n_k)\notin S$, then
\[
\cpx{f(3^{n_1},\ldots,3^{n_k})} = \cpx{f} + 3(n_1+\ldots+n_k) \equiv \cpx{f}
\pmod{3},
\]
and $\cpx{f}=\cpx{q}+k$.
Therefore, the set $\delta_f({\mathbb N}n^k\setminus S)$ is a subset of $\mathscr{D}st^{\cpx{q}+k}$,
has order type $\omega^k$, and has supremum $\eta$. This shows that
$\eta\in\overline{\Dst}a{\cpx{q}+k}$, and also (by Proposition~\ref{order}) that
$\ord\beta\ge k$ and $\ord\alpha\ge k$.
And we already know from Proposition~\ref{step3} that $\ord\beta\le k$ and that
$\ord\alpha\le k$, so we conclude that $\ord\alpha = \ord\beta = k$, proving the
theorem.
\end{proof}
We have now essentially done the work of proving the main theorem. All that
remains is to use order theory and topology to convert Theorem~\ref{precursor}
into more usable forms.
\subsection{From Theorem~\ref{precursor} to Theorem~\ref{cj8}}
\label{corsec}
Having proven Proposition~\ref{usu} and Theorem~\ref{precursor}, we now apply
them to yield a number of corollaries, including Theorem~\ref{cj8} and other
theorems discussed in the introduction. We will begin with a weak version of
Theorem~\ref{cj2thm}.
\begin{thm}[Weak version of Theorem~\ref{cj2thm}]
\label{cj2weak}
We have:
\begin{enumerate}
\item Suppose $a$ is stable. Then there exists $K$ such that, for all $k\ge K$
and all $\ell\ge 0$,
\[\cpx{(a3^k+1)3^\ell}=\cpx{a}+3k+3\ell+1 .\]
\item Suppose $ab$ is stable and $\cpx{ab}=\cpx{a}+\cpx{b}$. Then there exists
$K$ such that, for all $k\ge K$ and all $\ell\ge 0$,
\[\cpx{b(a3^k+1)3^\ell}=\cpx{a}+\cpx{b}+3k+3\ell+1 .\]
\end{enumerate}
\end{thm}
This theorem is the same as Theorem~\ref{cj2thm}, just without the computability
requirement. We will prove the full Theorem~\ref{cj2thm}, with the
computability requirement, in Section~\ref{deg1}.
\begin{proof}
In either case, we are considering a substantial polynomial $f$ of degree $1$;
in case (1), $f(x) = ax+1$, with $\cpx{f}=\cpx{a}+1$, and in case (2),
$f(x)=b(ax+1)$, with $\cpx{f}=\cpx{a}+\cpx{b}+1$ (by Propositions~\ref{1subst}
and \ref{ineq}).
Let $S$ be the exceptional set of $f$. Then by Proposition~\ref{usu},
$\delta_f(S)$ has order type less than $\omega$, i.e., is finite; since $\delta_f$
is strictly increasing, this implies $S$ is finite as well. So, choose $K$ to
be larger than any element of $S$. Then, for $k\ge K$,
\[ \cpx{f(3^k)3^\ell} = \cpx{f} + 3k + 3\ell \]
by definition of $S$. Substituting in the particular values of $f$ and
$\cpx{f}$ yields the theorem.
\end{proof}
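As a numerical illustration of part~(1), take $a=2$, which is stable: the
formula predicts $\cpx{(2\cdot 3^k+1)3^\ell}=3k+3\ell+3$ for all sufficiently
large $k$, and indeed one can check by hand that $\cpx{7}=6$ and $\cpx{19}=9$,
matching the cases $k=1,2$ and $\ell=0$ (though the theorem itself only
guarantees the formula from some threshold $K$ onwards).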
\begin{prop}
\label{selfsim3}
If $u$ is a congruence class modulo $3$, then
we have \[\overline{\Dst}a{u}'=\overline{\Dst}a{u-1}+1\] and
$\overline{\Dst}a{u}'''=\overline{\Dst}a{u}+3$.
\end{prop}
\begin{proof}
The set $\overline{\Dst}a{u}$ is closed, so $\overline{\Dst}a{u}'$ is a subset of it. The question
then is, which points of $\overline{\Dst}a{u}$ are limit points? By
Proposition~\ref{limitpoints}, they are the
points $\overline{\Dst}a{u}(\alpha)$ with $\ord\alpha\ge 1$. However, by
Theorem~\ref{precursor} and Proposition~\ref{subclosure}, any such point can be
written as $\delta(q)+k$, where $q$ is stable, $k=\ord \alpha$, and
$u\equiv\cpx{q}+k \pmod{3}$.
Let \[\eta=\overline{\Dst}a{u}(\alpha)-1=\delta(q)+k-1.\] Since $k\ge 1$, $k-1\ge 0$, and
so by Theorem~\ref{precursor}, \[\eta\in\overline{\Dst}a{\cpx{q}+k-1}=\overline{\Dst}a{u-1}.\] This
shows that $\overline{\Dst}a{u}'\subseteq \overline{\Dst}a{u-1}+1$.
Conversely, if we start with $\eta\in\overline{\Dst}a{u-1}$, we may similarly write
$\eta=\delta(q)+k$ for some $k\ge 0$ and some stable $q$ with
$u-1\equiv\cpx{q}+k\pmod{3}$. So $\eta+1=\delta(q)+k+1$. So by
Theorem~\ref{precursor}, \[\eta+1\in\overline{\Dst}a{\cpx{q}+k+1}=\overline{\Dst}a{u};\] moreover,
since $k+1\ge 1$, we conclude by Theorem~\ref{precursor} and
Proposition~\ref{limitpoints} that it is a limit point of the set. This proves
that $\overline{\Dst}a{u-1}+1\subseteq\overline{\Dst}a{u}'$, and so $\overline{\Dst}a{u}'=\overline{\Dst}a{u-1}+1$.
The second statement, that $\overline{\Dst}a{u}'''=\overline{\Dst}a{u}+3$, then just follows from
iterating the previous statement three times.
\end{proof}
We can now prove Theorem~\ref{cj8}.
\begin{proof}[Proof of Theorem~\ref{cj8}]
It suffices to prove the case $k=1$, as the more general case follows from
iterating this.
Fix $u$ and consider the sets $\overline{\Dst}a{u}$ and $\overline{\Dst}a{u+1}$. By
Proposition~\ref{selfsim3}, we have $\overline{\Dst}a{u+1}'=\overline{\Dst}a{u}+1$. Therefore,
for $1\le \alpha<\omega^\omega$,
\[ \overline{\Dst}a{u+1}'(\alpha) = \overline{\Dst}a{u}(\alpha) + 1.\]
Since the limit points of $\overline{\Dst}a{u+1}$ are by Proposition~\ref{limitpoints}
precisely the points of the form $\overline{\Dst}a{u+1}(\omega\beta)$ for some $1\le
\beta<\omega^\omega$, and since we are $1$-indexing, this means that
\[ \overline{\Dst}a{u+1}(\omega\alpha) = \overline{\Dst}a{u+1}'(\alpha) = \overline{\Dst}a{u}(\alpha) + 1,\]
as desired.
\end{proof}
We can now proceed to various corollaries.
We begin with the split-up analogue of Theorem~\ref{closure-intro}. See
Section~\ref{dftexp} for a discussion of how this result may be interpreted.
\begin{cor}
\label{closure3}
For $u$ a congruence class modulo $3$,
\[\overline{\Dst}a{u}=(\mathscr{D}^u_\mathrm{st}+3{\mathbb N}n)\cup (\mathscr{D}^{u-1}_\mathrm{st}+3{\mathbb N}n+1)\cup (\mathscr{D}^{u-2}_\mathrm{st}+3{\mathbb N}n+2).
\]
Equivalently,
\[\overline{\Dst}a{u} = \{k-3\log_3 n: k\ge \cpx{n},\ k\equiv u\pmod{3}\}.\]
Moreover, each $\mathscr{D}^u_\mathrm{st}$ is a discrete set.
\end{cor}
\begin{proof}
By Proposition~\ref{subclosure}, if $\eta\in\overline{\Dst}a{u}$, then $\eta=\delta(n)+\ell$
for some stable $n$ and some $\ell\ge0$ with $\cpx{n}+\ell\equiv u\pmod{3}$. So
$\eta=(\cpx{n}+\ell)-3\log_3 n$, and we can take $k=\cpx{n}+\ell$.
Conversely,
if we have $k$ and $n$ with $k\ge\cpx{n}$ and $k\equiv u\pmod{3}$, then in
particular we have $k\ge\cpx{n}_\mathrm{st}$; say $k=\cpx{n}_\mathrm{st}+\ell$. Now, if we
let $K=K(n)$, and write $n'=3^K n$ and $k'=k+3K$, then we also have
$k'=\cpx{n'}+\ell$, and $k'\equiv k \pmod{3}$. So by
Theorem~\ref{precursor},
\[k - 3\log_3 n = k'-3\log_3 n' = \delta(n')+\ell \in \overline{\Dst}a{\cpx{n'}+\ell} =
\overline{\Dst}a{k'} = \overline{\Dst}a{k} = \overline{\Dst}a{u}.\]
This proves that
\[\overline{\Dst}a{u} = \{k-3\log_3 n: k\ge \cpx{n},\ k\equiv u\pmod{3}\};\]
in the process, we have also shown that
\[\overline{\Dst}a{u} = \{k-3\log_3 n: k\ge \cpx{n}_\mathrm{st},\ k\equiv u\pmod{3}\}.\]
The statement
\[\overline{\Dst}a{u}=(\mathscr{D}^u_\mathrm{st}+3{\mathbb N}n)\cup (\mathscr{D}^{u-1}_\mathrm{st}+3{\mathbb N}n+1)\cup (\mathscr{D}^{u-2}_\mathrm{st}+3{\mathbb N}n+2).
\]
then just consists of breaking the latter statement up by congruence class.
Finally, note that if $\eta\in\mathscr{D}^u_\mathrm{st}$, say $\eta=\delta(n)$ with $n$ stable,
then $\eta=\delta(n)+0$, so by Propositions~\ref{step3} and \ref{limitpoints}, it
is not a limit point of $\overline{\Dst}a{u}$, i.e.~not a limit point of $\mathscr{D}^u_\mathrm{st}$. That
is to say, $\mathscr{D}^u_\mathrm{st}$ contains none of its own limit points; it is a discrete
set.
\end{proof}
\begin{cor}
The different $\overline{\Dst}a{u}$ are disjoint.
\end{cor}
\begin{proof}
This follows immediately from Proposition~\ref{subclosure}.
\end{proof}
\begin{cor}
If $u_1$ and $u_2$ are congruence classes modulo $3$, then
\[ \overline{\Dst}a{u_1}+\overline{\Dst}a{u_2} \subseteq \overline{\Dst}a{u_1+u_2}. \]
\end{cor}
\begin{proof}
If $\eta_1\in\overline{\Dst}a{u_1}$ and $\eta_2\in\overline{\Dst}a{u_2}$, then by
Corollary~\ref{closure3}, we may write $\eta_i=k_i-3\log_3 n_i$ where $k_i\ge
\cpx{n_i}$ and $k_i\equiv u_i \pmod{3}$. Then $\cpx{n_1n_2}\le k_1+k_2$, so
\[ \eta_1+\eta_2 = k_1 + k_2 - 3\log_3(n_1n_2) \in \overline{\Dst}a{u_1+u_2}, \]
where here we have applied Corollary~\ref{closure3} again.
\end{proof}
\begin{cor}
\label{translate3}
If $u$ is a congruence class modulo $3$, then
$\overline{\Dst}a{u}=\overline{\mathscr{D}^u}$.
\end{cor}
\begin{proof}
Since $\mathscr{D}^u_\mathrm{st}\subseteq \mathscr{D}^u$, we immediately have $\overline{\Dst}a{u}\subseteq
\overline{\Dst}ax{u}$. For the reverse, let $\eta=\delta(n)\in\mathscr{D}^u$; then we may write
$\eta=\delta_\mathrm{st}(n)+k$, where $k = \cpx{n}-\cpx{n}_\mathrm{st} \ge 0$. Since
$\cpx{n}\equiv u\pmod{3}$, this means $\cpx{n}_\mathrm{st} \equiv u-k \pmod{3}$.
Thus $\eta \in \mathscr{D}^{u-k}_\mathrm{st} + k$. By Corollary~\ref{closure3}, this means
$\eta\in \overline{\Dst}a{u}$. So $\mathscr{D}^u\subseteq \overline{\Dst}a{u}$ and thus $\overline{\Dst}ax{u}\subseteq
\overline{\Dst}a{u}$, i.e., $\overline{\Dst}a{u} = \overline{\Dst}ax{u}$.
\end{proof}
\begin{cor}
\label{selfsim}
We have $\overline{\Dst}'=\overline{\Dst}+1$.
\end{cor}
\begin{proof}
By Proposition~\ref{selfsim3},
\[
\overline{\Dst}' = (\overline{\Dst}a{0}\cup\overline{\Dst}a{1}\cup\overline{\Dst}a{2})'
= \overline{\Dst}a{0}'\cup\overline{\Dst}a{1}'\cup\overline{\Dst}a{2}'
= (\overline{\Dst}a{2}+1)\cup(\overline{\Dst}a{0}+1)\cup(\overline{\Dst}a{1}+1)
= \overline{\Dst}+1.
\]
\end{proof}
\begin{rem}
Note that we could have proved Corollary~\ref{selfsim} exactly the same way we
proved Proposition~\ref{selfsim3} (or as part of Proposition~\ref{selfsim3}), rather
than as a separate corollary of it. But this proof highlights that something
similar will still hold if we put
together the different $\overline{\Dst}a{u}$ in a different way.
For instance, if we use the $R$ function from \cite{intdft}, and
define $\mathscr{R}=\{ R(n): n\in{\mathbb N} \}$, and define
$\mathscr{R}^u$ similarly, then these will satisfy
\[
\overline{\mathscr{R}^0}' = \frac{2}{3}\overline{\mathscr{R}^2},\quad
\overline{\mathscr{R}^1}' = \frac{3}{4}\overline{\mathscr{R}^0},\quad
\textrm{and}\
\overline{\mathscr{R}^2}' = \frac{2}{3}\overline{\mathscr{R}^1},
\]
and thus $\overline{\mathscr{R}^u}''' = \frac{1}{3}\overline{\mathscr{R}^u}$
for each congruence class $u$, and therefore
$\overline{\mathscr{R}}'''=\frac{1}{3}\overline{\mathscr{R}}$ when put together.
Alternatively, if we normalize things differently and define $\mathscr{A}^i$
to be like the original sets $A_i$ from \cite{Arias}, then we would obtain
\[
\overline{\mathscr{A}^0}' = \frac{1}{3}\overline{\mathscr{A}^2},\quad
\overline{\mathscr{A}^1}' = \overline{\mathscr{A}^0},\quad
\textrm{and}\
\overline{\mathscr{A}^2}' = \overline{\mathscr{A}^1},
\]
and then the same consequences as for $\mathscr{R}$. However, we will not
include a formal proof of these statements here.
\end{rem}
\begin{cor}
\label{cj8weak}
Given $1\le \alpha<\omega^\omega$ and $k\in {\mathbb N}n$,
$\overline{\Dst}(\omega^k \alpha)=\overline{\Dst}(\alpha)+k$.
\end{cor}
\begin{proof}
By Proposition~\ref{limitpoints}, we know that the limit points of
$\overline{\Dst}$ are the points of the form $\overline{\Dst}(\omega\beta)$ for some
$1\le\beta<\omega^\omega$. In other words, $\overline{\Dst}'(\alpha)=\overline{\Dst}(\omega\alpha)$.
So by Corollary~\ref{selfsim},
\[ \overline{\Dst}(\omega\alpha) = \overline{\Dst}'(\alpha) = \overline{\Dst}(\alpha) + 1.\]
Iterating this, we obtain $\overline{\Dst}(\omega^k \alpha) = \overline{\Dst}(\alpha)+k.$
\end{proof}
\begin{cor}
\label{closure-weak}
We have $\overline{\Dst}=\mathscr{D}_\mathrm{st}+{\mathbb N}n$; equivalently,
\[\overline{\Dst} = \{k-3\log_3 n: k\ge \cpx{n}\}.\]
Moreover, $\mathscr{D}_\mathrm{st}$ is a discrete set.
\end{cor}
\begin{proof}
The first statement follows immediately from Proposition~\ref{subclosure} and
Theorem~\ref{precursor}, and the equivalence of the two forms is obvious.
To prove the second statement, note that if $\eta\in\mathscr{D}st$, say $\eta=\delta(n)$
with $n$ stable, then $\eta=\delta(n)+0$, so by Propositions~\ref{step3} and
\ref{limitpoints}, it is not a limit point of $\overline{\Dst}$, i.e., not a limit point
of $\mathscr{D}st$. That is to say, $\mathscr{D}st$ contains none of its own limit
points; it is a discrete set.
\end{proof}
\begin{cor}
\label{addclosed}
The set $\overline{\Dst}$ is closed under addition.
\end{cor}
\begin{proof}
If $\eta_1, \eta_2 \in\overline{\Dst}$, then by Corollary~\ref{closure-weak}, we may write
$\eta_i=k_i-3\log_3 n_i$ where $k_i\ge \cpx{n_i}$. Then $\cpx{n_1n_2}\le
k_1+k_2$, so \[ \eta_1+\eta_2 = k_1 + k_2 - 3\log_3(n_1n_2) \in \overline{\Dst}, \] where
here we have applied Corollary~\ref{closure-weak} again.
\end{proof}
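As a small concrete instance: both $2$ and $4$ are stable (the bound
$\cpx{m}\ge 3\log_3 m$ forces $\cpx{2\cdot 3^k}=3k+2$ and
$\cpx{4\cdot 3^k}=3k+4$), so $\delta(2)=2-3\log_3 2$ lies in $\overline{\Dst}$, and the
sum $\delta(2)+\delta(2)=4-3\log_3 4=\delta(4)$ lies in $\overline{\Dst}$ as well; in the
notation of Corollary~\ref{closure-weak}, this is the element $k-3\log_3 n$
with $n=4$ and $k=4\ge\cpx{4}$.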
\begin{cor}
\label{translate}
We have $\overline{\Dst}=\overline{\mathscr{D}}$, and, given $u$ a congruence class modulo $3$,
$\overline{\Dst}a{u}=\overline{\mathscr{D}^u}$.
\end{cor}
\begin{proof}
Since $\mathscr{D}st^u\subseteq \mathscr{D}^u$, we immediately have $\overline{\Dst}a{u}\subseteq \overline{\Dst}ax{u}$;
and similarly $\overline{\Dst}\subseteq\overline{\D}$. For the reverse, if $\eta\in\mathscr{D}^u$, that
means there is some $n$ such that $\eta=\cpx{n}-3\log_3 n$ with $\cpx{n}\equiv u
\pmod{3}$; since certainly $\cpx{n}\ge\cpx{n}$ (or since
$\cpx{n}\ge\cpx{n}_\mathrm{st}$), this means that by Corollary~\ref{closure3},
$\eta\in\overline{\Dst}a{u}$. So $\mathscr{D}^u\subseteq\overline{\Dst}a{u}$ and thus
$\overline{\Dst}ax{u}\subseteq\overline{\Dst}a{u}$, and so $\overline{\Dst}ax{u}=\overline{\Dst}a{u}$. The same reasoning
using Corollary~\ref{closure-weak} yields that $\mathscr{D}\subseteq\overline{\Dst}$ and so
$\overline{\mathscr{D}}=\overline{\Dst}$.
\end{proof}
Having proven all this, let us now prove the claims from the introduction.
\begin{proof}[Proofs of Theorems~\ref{cj8weak-intro},
\ref{selfsim-intro}, \ref{closure-intro}, and \ref{closure}]
Theorem~\ref{cj8weak-intro} is just Corollary~\ref{cj8weak} together with
Corollary~\ref{translate}. Theorem~\ref{selfsim-intro} is simply
Corollary~\ref{selfsim} together with Corollary~\ref{translate}.
Theorem~\ref{closure} follows from Corollary~\ref{closure-weak} together with
the fact that $\mathscr{D}st\subseteq\mathscr{D}\subseteq \mathscr{D}st+{\mathbb N}n$. Theorem~\ref{closure-intro}
is then simply a weaker version of this.
\end{proof}
Finally, having now proven all these forms of \cite[Conjecture~8]{Arias},
let us prove something closer to the original form.
\begin{cor}
\label{cj8orig}
For $u$ a congruence class modulo $3$ and $0\le \alpha<\omega^\omega$,
\[\lim_{k\to\infty} \mathscr{D}^{u+1}_\mathrm{st} [\omega\alpha+k]=\mathscr{D}^u_\mathrm{st} [\alpha]+1.\]
Similarly,
\[\lim_{k\to\infty} \mathscr{D}_\mathrm{st} [\omega\alpha+k]=\mathscr{D}_\mathrm{st} [\alpha]+1.\]
\end{cor}
\begin{proof}
From Theorem~\ref{cj8}, we know that
\[\overline{\Dst}a{u+1}(\omega(\alpha+1))=\overline{\Dst}a{u}(\alpha+1)+1.\]
Using Corollary~\ref{closure3} and Proposition~\ref{barshift}, we know that
$\overline{\Dst}a{u}(\alpha+1)=\mathscr{D}a{u}[\alpha]$. Moreover, since $\overline{\Dst}a{u+1}$ is a closed
set, and we know by Proposition~\ref{topologies} that the subspace topology on
it coincides with the order topology, we have that
\[\overline{\Dst}a{u+1}(\omega(\alpha+1)) = \lim_{k\to\infty} \overline{\Dst}a{u+1}(\omega\alpha+k).\]
Applying Proposition~\ref{barshift} again, we conclude
\[\overline{\Dst}a{u+1}(\omega(\alpha+1)) = \lim_{k\to\infty} \mathscr{D}^{u+1}_\mathrm{st}[\omega\alpha+k],\]
and therefore
\[\lim_{k\to\infty} \mathscr{D}^{u+1}_\mathrm{st} [\omega\alpha+k]=\mathscr{D}^u_\mathrm{st} [\alpha]+1,\]
proving the first claim. The proof of the second claim is similar.
\end{proof}
\subsection{Conjectures~9, 10, and 11 from \cite{Arias}}
\label{cj9sec}
We have now proven \cite[Conjecture~8]{Arias} (as Theorem~\ref{cj8}), and a
number of variants. Now we address Conjectures~9, 10, and 11 from \cite{Arias},
which deal with the degree $1$ case. (It is possible to write down
generalizations beyond this case, but these generalizations are not
interesting.) Specifically, we prove:
\begin{thm}
\label{cj9corsep}
Let $n$ be a number not divisible by $3$, with $\cpx{n}_\mathrm{st}\equiv u\pmod{3}$,
and write $\delta_\mathrm{st}(n)=\mathscr{D}a{u}[\alpha]$.
Then the set
\[\{\mathscr{D}a{u+1}[\omega\alpha+k]: k\in{\mathbb N}n\},\]
which may be equivalently written as
\[\mathscr{D}a{u+1} \cap [\mathscr{D}a{u+1}[\omega\alpha], \delta_\mathrm{st}(n)+1),\]
has finite symmetric
difference with the set
\[
\{ \delta_\mathrm{st}(b(a3^k + 1)):
\ k\ge 0,\ ab=n,\ \cpx{n}_\mathrm{st} =\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}\}.
\]
Similarly, if instead $\delta_\mathrm{st}(n)=\mathscr{D}st[\alpha]$, then the set $\{\mathscr{D}st[\omega\alpha+k]: k\in{\mathbb N}n\}$, which may equivalently
be written as $\mathscr{D}st\cap[\mathscr{D}st[\omega\alpha],\delta_\mathrm{st}(n)+1)$, has finite symmetric
difference with the set
\[
\{ \delta_\mathrm{st}(b(a3^k + 1)):
\ k\ge 0,\ ab=n,\ \cpx{n}_\mathrm{st} =\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}\}.
\]
\end{thm}
This is intended as a repair of \cite[Conjectures~9--11]{Arias}, whose original
statements are a bit too strong. These conjectures state that
if $q$ is a stable number with $\delta(q)=\mathscr{D}a{u}[\alpha]$, then the set
$A := \{\mathscr{D}a{u+1}[\omega\alpha+k]: k\in{\mathbb N}n\}$ has finite symmetric difference
with the set
\[
E := \{ \delta_\mathrm{st}(b(a3^k + 1)):
\ k\ge 0,\ ab=q \},
\]
with the one-sided difference $A\setminus E$ being a finite subset of
$\{\delta_\mathrm{st}(2^k): k\in{\mathbb N}\}$.
(Here we have rephrased these conjectures somewhat from their original language;
see the Appendix of \cite{paperwo} for more on translating between these two
frameworks.)
As you can see from comparison to Theorem~\ref{cj9corsep}, the
removal of the final clause is the only major alteration.
Let us present counterexamples to the final clause for all three congruence
classes. For the case where $\cpx{q}\equiv0\pmod{3}$, we can consider $q=64$,
with $\delta(64)=\mathscr{D}a{0}[2]$, and observe that $\delta_\mathrm{st}(70)=\mathscr{D}a{1}[\omega2+1]$,
even though $70$ cannot be written as $b(a3^k+1)3^\ell$ with $ab=64$. For the
case where $\cpx{q}\equiv1\pmod{3}$, we can consider $q=32$, with
$\delta(32)=\mathscr{D}a{1}[1]$, and observe that $\delta_\mathrm{st}(35)=\mathscr{D}a{2}[\omega+1]$, even
though $35$ cannot be written as $b(a3^k+1)3^\ell$ with $ab=32$. And for the
case $\cpx{q}\equiv 2\pmod{3}$, we can consider $q=5$, with
$\delta(5)=\mathscr{D}a{2}[2]$, and observe that $\delta_\mathrm{st}(1280)=\mathscr{D}a{0}[\omega2]$, even
though $1280$ cannot be written as $5(3^k+1)3^\ell$ or as $(5\cdot3^k+1)3^\ell$. These
counterexamples have been chosen to have minimal defect. Note that we have
omitted the computations to verify these, but all of this may be easily verified
from a good cover of $\overline{B}_{14\delta(2)}$, which can be computed using the
algorithms from \cite{paperalg}.
Now, it is worth noting that here we look not at factorizations satisfying
$\cpx{ab}=\cpx{a}+\cpx{b}$, but rather $\cpx{ab}_\mathrm{st}=\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}$.
Of course, if all divisors of $n$ (other than $1$) are stable, then these two
conditions are the same (aside from the cases where $a=1$ or $b=1$). But in
general they may not be. For instance, the stable number $856$ can be factored
as $8\cdot 107$. We do not have $\cpx{8}+\cpx{107}=\cpx{856}$, but we do have
$\cpx{8}_\mathrm{st}+\cpx{107}_\mathrm{st} = \cpx{856}_\mathrm{st}$, and it is important that we do not
exclude this case. (The stability of $856$, and the various values of
$\cpx{n}_\mathrm{st}$, were verified using the algorithms from \cite{paperalg}). See
Section~7 of \cite{paperalg} for more information on the relation between these
two conditions on factorizations.
Note also that the condition that $n$ is not divisible by $3$ is not essential;
we include it because the theorem does not distinguish between a number $n$ and
$n3^k$ for $k\in \mathbb{Z}$, so we have chosen to require that $n$ not be
divisible by $3$ in order to keep things concrete and canonical. No generality
is lost in this way.
In addition, while we have not stated this theorem constructively, it is
possible to prove it in a constructive manner. We will skip doing this here
because we do not expect getting effective numbers for this to be of much
relevance, compared to the theorems in the next section where effectivity may be
of more use.
We now prove the theorem.
\begin{proof}[Proof of Theorem~\ref{cj9corsep}]
We include only a proof of the first part, as the proof of the second part is
exactly analogous.
Let $q=n3^{K(n)}$, so that $q$ is stable and $\delta(q)=\delta_\mathrm{st}(n)$.
So we know from Corollary~\ref{cj8orig} that
\[\lim_{k\to\infty} \mathscr{D}^{u+1}_\mathrm{st} [\omega\alpha+k]=\delta(q)+1=\delta_\mathrm{st}(n)+1.\]
Choose $a$ and $b$ with $ab=n$ and $\cpx{n}_\mathrm{st}=\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}$.
Let $A=a3^{K(a)}$, $B=b3^{K(b)}$, and $N=AB$. Then we have
\[ \cpx{N}_\mathrm{st}=\cpx{n}_\mathrm{st}+3K(a)+3K(b)
= \cpx{a}_\mathrm{st}+3K(a)+\cpx{b}_\mathrm{st}+3K(b)
= \cpx{A}_\mathrm{st} + \cpx{B}_\mathrm{st},\]
so by Proposition~\ref{goodfac}, $N$ is also stable, and so
\[ \cpx{N}=\cpx{A}+\cpx{B}. \]
By Theorem~\ref{cj2weak}, we know that all but finitely many $k$ will satisfy
\[ \cpx{B(A3^k+1)}_\mathrm{st} = \cpx{N} + 3k + 1 \]
and therefore
\[ \delta_\mathrm{st}(B(A3^k+1)) =
\cpx{N}+1-3\log_3 (N+B3^{-k}),
\]
meaning
\[
\lim_{k\to\infty} \delta_\mathrm{st}(B(A3^k+1)) = \cpx{N}+1-3\log_3 N = \delta(N)
+1=\delta_\mathrm{st}(n)+1.
\]
Moreover,
\[ \delta_\mathrm{st}(B(A3^k+1)) = \delta_\mathrm{st}(b(A3^k+1)), \]
and for all $k\ge K(a)$,
\[ \delta_\mathrm{st}(b(a3^k+1)) = \delta_\mathrm{st}(b(A3^{k-K(a)}+1)). \]
So in fact,
\[
\lim_{k\to\infty} \delta_\mathrm{st}(b(a3^k+1)) = \delta_\mathrm{st}(n)+1.
\]
Since for $k\ge K(a)$ we have $\delta_\mathrm{st}(b(a3^k+1))\in \mathscr{D}a{\cpx{q}+1}=\mathscr{D}a{u+1}$,
this means that all but finitely many of the $\delta_\mathrm{st}(b(a3^k+1))$ must be
numbers of the form $\mathscr{D}a{u+1}[\omega\alpha+k]$.
For the converse, take a good covering ${\mathcal S}$ of $\overline{B}_{\delta(q)+1}$.
Let
\[\zeta = \max (\{ \delta(f,C): (f,C)\in {\mathcal S},\ \delta(f,C)<\delta(q)+1 \} \cup
\{0\}).\]
Now, since
\[\lim_{k\to\infty} \mathscr{D}^{u+1}_\mathrm{st} [\omega\alpha+k]=\delta(q)+1,\]
all but finitely many $\mathscr{D}a{u+1}[\omega\alpha+k]$ must lie in
$(\zeta,\delta(q)+1)$. This means
they are of the form $\delta(m)$ for some leader $m$ that is efficiently
$3$-represented by some $(f,C)\in {\mathcal S}$ (note we must actually have $C=\cpx{f}$).
But also,
\[\zeta<\delta(m)\le \delta(f,C)\le \delta(q)+1,\]
so we must have $\delta(f,C)=\delta(q)+1$. By Proposition~\ref{cj9prop}, this
implies $\deg f\le 1$. But we cannot have $\deg f=0$ as then we would have
$\delta(m)=\delta(f,C)$, in contradiction to the assumption that
$\delta(m)\in(\zeta,\delta(q)+1)$. So $\deg f=1$, which again by
Proposition~\ref{cj9prop}, means $f$ is substantial.
So, by Proposition~\ref{1subst}, this means $f$ either has the form
$f(x)=B(Ax+1)$, with $AB$ stable and $\cpx{AB}=\cpx{A}+\cpx{B}$, and
$\cpx{f}=\cpx{AB}+1$; or $f(x)=Ax+1$, with $A$ stable, and $\cpx{f}=\cpx{A}+1$.
(Here $A$ and $B$ are new quantities, not the $A$ and $B$ from the first half
of the proof.)
In this latter case, let $B=1$. Since $\delta(f)=\delta(q)+1$, and differing stable
defects cannot be congruent modulo $1$, we must have $\delta(AB)=\delta(q)$; so
$AB=q3^j$ for some $j\in\mathbb{Z}$, and $\cpx{AB}=\cpx{q}+3j$.
Now, if $j<0$, then we may define $g(x)=B(A3^{-j}x+1)$, and $D=C-3j$, with the
result that all but finitely many $m$ that are efficiently $3$-represented by
$(f,C)$ will also be efficiently $3$-represented by $(g,D)$; and if we consider
$g$ instead of $f$, then we will have $B(A3^{-j})=q$ exactly. So it suffices to
consider the case where $j\ge 0$, as otherwise we may replace $f$ by $g$ to
obtain $j=0$, at the loss of only finitely many defects.
So now we have that all but finitely many of our defects are of the form
$\delta(m)$, where $m$ is of the form $B(A3^k+1)$, $AB=q3^j$ is stable, and either
$\cpx{A}+\cpx{B}=\cpx{q3^j}$ or $B=1$. By Theorem~\ref{cj2weak} again, all but
finitely many $B(A3^k+1)$ are stable, and we have only finitely many ordered
pairs $(A,B)$, meaning all but finitely many of our defects are of the form
$\delta_\mathrm{st}(B(A3^k+1))$.
Now let $a$ be the part of $A$ that is not divisible by $3$, and $b$ be the
part of $B$ that is not divisible by $3$, so that $ab=n$. Moreover, since
either $B=1$ or $\cpx{A}+\cpx{B}=\cpx{AB}$, we have
\[\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}=\cpx{n}_\mathrm{st}.\] And we just said that all
but finitely many of our defects are of the form
$\delta_\mathrm{st}(B(A3^k+1))$, but this is the same as
$\delta_\mathrm{st}(b(a3^{k+k_0}+1))$, where $A=a3^{k_0}$.
So, all but finitely many of our defects have the required form.
\end{proof}
\section{The degree-$1$ case and its variants}
\label{deg1}
In this section, we will investigate the implications of Proposition~\ref{usu}
and Theorem~\ref{cj9corsep} for low-defect polynomials of degree $1$. The
simplest case is Theorem~\ref{cj2thm}.
Of course, if we ignore the part about computability, then we could prove
Theorem~\ref{cj2thm} by direct application of Proposition~\ref{usu}; indeed, we
already did this as Theorem~\ref{cj2weak}. Since we want a slightly stronger
statement, however, we will have to do slightly more work.
\begin{proof}[Proof of Theorem~\ref{cj2thm}]
In either case, we are considering a substantial polynomial $f$ of degree $1$;
in case (1), $f(x) = ax+1$, with $\cpx{f}=\cpx{a}+1$, and in case (2),
$f(x)=b(ax+1)$, with $\cpx{f}=\cpx{a}+\cpx{b}+1$ (by Propositions~\ref{1subst}
and \ref{ineq}).
So, let $q$ be the leading coefficient of $f$ (which is $a$ in case (1) and $ab$
in case (2)), and let $\eta = \delta(q)$, so $\delta(f) = \eta + 1$. Now, given any
$r\in{\mathbb N}n$, we can, by Theorem~\ref{goodcomput},
compute a good covering ${\mathcal S}_r$ of $\overline{B}_{\eta-r}$.
So given
$0\le r\le\lfloor \eta\rfloor$, let
\[\zeta_r = \max (\{ \delta(g,C): (g,C)\in {\mathcal S}_r,\ \delta(g,C)<\eta-r \} \cup
\{0\}).\]
Now if we had a number $n$ with $\delta(n)\in (\zeta_r, \eta-r)$, then $n$ would
be efficiently $3$-represented by $(\xpdd{g},C)$ for some $(g,C)\in {\mathcal S}_r$; this
would imply $\delta(n)\le\delta(g,C)\le \eta-r$ and therefore $\delta(g,C)=\eta-r$.
However, by Proposition~\ref{polydft}, $\delta(g,C)$ is equal to a stable defect
plus a nonnegative integer, and as such not equal to any $\eta-r$ except
possibly when $r=0$;
therefore no such $n$ can exist unless $r=0$. But if $r=0$, then
$\delta(g,C)=\eta$. Since $\eta$ is a stable defect, this forces $\deg g = 0$ by
Proposition~\ref{cj9prop} again. But this means that $\delta(n)=\eta-r$, in
contradiction to the assumption that $\delta(n)<\eta-r$. Therefore, whether or
not $r>0$, we obtain $\mathscr{D} \cap (\zeta_r, \eta-r)=\emptyset$.
So, for each $0\le r\le \lfloor\eta\rfloor$, compute $K_r$ such that
$\delta_f(K_r)-r-1 > \zeta_r$; this is
straightforward as (by Definition~\ref{fdftdef})
\[\delta_f(k)=\cpx{a}+1-3\log_3(a+3^{-k})\] in case (1) and
\[\delta_f(k)=\cpx{ab}+1-3\log_3(b(a+3^{-k}))\] in case (2).
(Note that either way, $\lim_{k} \delta_f(k)=\delta(f)=\eta+1$, so that $\delta(f)-r-1=\eta-r>\zeta_r$.)
Now let \[K = \max_{0\le r\le\lfloor \eta\rfloor} K_r.\] Then for $k\ge K$, we
know $\delta_\mathrm{st}(f(3^k))\equiv\delta_f(k)\pmod {1}$, and also
\[\delta_\mathrm{st}(f(3^k))\le\delta_f(k)<\delta(f)=\eta+1.\]
This means we must have $\delta_\mathrm{st}(f(3^k))=\delta_f(k)$, because otherwise we would
have $\delta_\mathrm{st}(f(3^k)) = \delta_f(k)-1-r$ for some $r\ge 0$; but we know
\[ \zeta_r < \delta_f(K_r)-r-1 \le \delta_f(k)-r-1
< \delta(f) - 1 -r = \eta-r, \]
i.e., $\delta_f(k)-1-r\in(\zeta_r,\eta-r)$, while $\delta_\mathrm{st}(f(3^k))\in\mathscr{D}st$, so
these quantities cannot be equal as these sets are disjoint.
Therefore, we conclude that for such $k$, we have $\cpx{f(3^k)}_\mathrm{st} =
\cpx{f}+3k$. Or, in other words, for $k\ge K$ and $\ell\ge 0$, we have
$\cpx{\xpdd{f}(3^k, 3^\ell)}=\cpx{f}+3k+3\ell$. Recalling once again that in
case (1) we have $\cpx{f}=\cpx{a}+1$ and in case (2) we have
$\cpx{f}=\cpx{a}+\cpx{b}+1$, and noting that all the steps in determining $K$
were computable, this proves the theorem.
\end{proof}
As was mentioned in Section~\ref{secproof}, this proof raises the question of
whether, for substantial polynomials of higher degree, the exceptional set can
be shown to be ``small'' in a stronger sense than that implied by
Proposition~\ref{usu}; in a future paper \cite{hyperplanes} we shall show that
it can be.
Returning to the degree $1$ case, however, we can immediately write down the
following corollary of Theorem~\ref{cj2thm}:
\begin{cor}
\label{cj2cor}
We have:
\begin{enumerate}
\item Let $a$ be a natural number. Then there exists $K$ such that, for all
$k\ge K$ and all $\ell\ge 0$,
\[\cpx{(a3^k+1)3^\ell}=\cpx{a}_\mathrm{st}+3k+3\ell+1 .\]
\item Suppose $a$ and $b$ are natural numbers and
$\cpx{ab}_\mathrm{st}=\cpx{a}_\mathrm{st}+\cpx{b}$. Then there exists $K$ such that, for all
$k\ge K$ and all $\ell\ge 0$,
\[\cpx{b(a3^k+1)3^\ell}=\cpx{a}_\mathrm{st}+\cpx{b}+3k+3\ell+1 .\]
\item Suppose $a$ and $b$ are natural numbers and
$\cpx{ab}_\mathrm{st}=\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}$. Then there exists $K$ such that,
for all $k\ge K$ and all $\ell\ge K(b)$,
\[\cpx{b(a3^k+1)3^\ell}=\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}+3k+3\ell+1 .\]
\end{enumerate}
Moreover, in all these cases, it is possible to algorithmically compute how
large $K$ needs to be.
\end{cor}
\begin{proof}
For part (1), pick $k_0$ such that $a3^{k_0}$ is stable; let $A=a3^{k_0}$.
(Note that by Theorem~\ref{computk}, $k_0$ can be computed from $a$.) Then
by part (1) of Theorem~\ref{cj2thm}, for all sufficiently large $k$ (and we can
compute how large from $k_0$ and $A$), we have
\begin{multline*}
\cpx{(a3^k + 1)3^\ell} =
\cpx{(A 3^{k-k_0} +1)3^\ell} = \\
\cpx{A} + 3(k-k_0) + 3\ell + 1 =
\cpx{a}_\mathrm{st} + 3k + 3\ell + 1.
\end{multline*}
For part (2), pick $k_0$ large enough such that $a3^{k_0}$ and $ab3^{k_0}$ are
both stable; let $A=a3^{k_0}$. Again, note that $k_0$ may be computed from $a$
and $b$.
Then $Ab$ is stable, and
\[\cpx{Ab} = \cpx{ab}_\mathrm{st} + 3k_0
= \cpx{a}_\mathrm{st} + \cpx{b} + 3k_0
= \cpx{A} + \cpx{b},\]
so we may apply part (2) of Theorem~\ref{cj2thm}. So for all sufficiently large
$k$ (and we can compute how large from $A$, $b$, and $k_0$), we have
\begin{multline*}
\cpx{b(a3^k + 1)3^\ell} =
\cpx{b(A 3^{k-k_0} +1)3^\ell} = \\
\cpx{A} + \cpx{b} + 3(k-k_0) + 3\ell + 1 =
\cpx{a}_\mathrm{st} + \cpx{b} + 3k + 3\ell + 1.
\end{multline*}
Finally, for part (3), let $\ell_0=K(b)$, and pick $k_0$ large enough such that
$a3^{k_0}$ and $ab3^{k_0+\ell_0}$ are both stable; again, these quantities
can be computed from $a$ and $b$. Let $A=a3^{k_0}$ and
let $B=b3^{\ell_0}$.
Then $AB$ is stable, and
\[\cpx{AB} = \cpx{ab}_\mathrm{st} + 3k_0 + 3\ell_0
= \cpx{a}_\mathrm{st} + \cpx{b}_\mathrm{st} + 3k_0 + 3\ell_0
= \cpx{A} + \cpx{B},\]
so we may again apply part (2) of Theorem~\ref{cj2thm}. So for all sufficiently
large $k$ (and how large can be computed from $A$, $B$, and $k_0$), and any
$\ell\ge \ell_0$, we have
\begin{multline*}
\cpx{b(a3^k + 1)3^\ell} =
\cpx{B(A 3^{k-k_0} +1)3^{\ell-\ell_0}} = \\
\cpx{A} + \cpx{B} + 3(k-k_0) + 3(\ell-\ell_0) + 1 =
\cpx{a}_\mathrm{st} + \cpx{b}_\mathrm{st} + 3k + 3\ell + 1.
\end{multline*}
\end{proof}
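As a sanity check on part~(1) in the simplest case $a=1$: since $\cpx{3^k}=3k$
for $k\ge 1$, we have $\cpx{1}_\mathrm{st}=0$, and the corollary asserts that
$\cpx{(3^k+1)3^\ell}=3k+3\ell+1$ for all sufficiently large $k$; indeed, for
small values one can verify directly that $\cpx{4}=4=3+1$ and
$\cpx{10}=7=6+1$.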
However, this is just a corollary. In fact, it is possible to go further than
this, and prove Theorem~\ref{dragons}, which requires venturing out of the realm
of substantial polynomials and into the realm of polynomials that are, one might
say, just barely insubstantial. To do this, we use the idea of
Theorem~\ref{cj9corsep}, even though that exact statement will not appear
in the proof.
Now one might say that the polynomials considered in Theorem~\ref{dragons} have
``insubstantiality $1$'', but we will not actually attempt to define a general
notion of ``insubstantiality'', as it is not entirely clear how to do that. As
noted in Section~\ref{secdragon}, however, one cannot extend
Theorem~\ref{dragons} to cases of ``insubstantiality $2$'', as demonstrated by
the case of $2(1094x+1)$. (Note that the numbers $2$, $1094$, and $2188$ are
all stable, as can be computed with the algorithms from \cite{paperalg}.)
Let us make another note about this theorem before we prove it. The division of
Theorem~\ref{dragons} into two parts, both of which require $a$ to be stable and
the second of which requires $b$ to be stable, may make it seem that something
has been lost in moving to barely-insubstantial polynomials; after all,
Theorem~\ref{cj2thm} has no stability condition on $a$ or $b$, only on $ab$.
However, that is because in Theorem~\ref{cj2thm}, the stability conditions on
$a$ and $b$ are implicit. By Proposition~\ref{goodfac}, if $ab$ is stable and
$\cpx{ab}=\cpx{a}+\cpx{b}$, then $a$ and $b$ are themselves stable. But in
Theorem~\ref{dragons}, we do not have $\cpx{ab}=\cpx{a}+\cpx{b}$, but rather
$\cpx{ab}=\cpx{a}+\cpx{b}-1$. This requires adding explicit stability
conditions on $a$ and $b$, where in Theorem~\ref{cj2thm} they were implicit.
We now prove Theorem~\ref{dragons}.
\begin{proof}[Proof of Theorem~\ref{dragons}]
Suppose $ab$ is stable, $\cpx{a}+\cpx{b}=\cpx{ab}+1$, $b>1$, and $a$ is stable.
Let $q=ab$, and let $f(x)=b(ax+1)$ and
\[C=\cpx{a}+\cpx{b}+1=\cpx{q}+2,\] so
$\delta(f,C)=\delta(q)+2$. Let \[\eta = \delta(q) + 1 = \delta(f,C) - 1.\]
We wish to find a $K$ such that, for all $k\ge K$, we have
$\cpx{f(3^k)}=C+3k$; and so that, if $b$ is stable, we moreover have
$\cpx{f(3^k)}_\mathrm{st} = C+3k$. Now, if the first of these statements fails, then we
obtain
\[ \cpx{f(3^k)} \le C + 3k - 1, \]
and if the second fails we obtain
\[ \cpx{f(3^k)}_\mathrm{st} \le C + 3k - 1. \]
These in turn imply
\[ \delta(f(3^k)) < \delta(f,C) - 1 = \eta \]
and
\[ \delta_\mathrm{st}(f(3^k)) < \delta(f,C) - 1 = \eta, \]
respectively.
So, we will find a $K$ such that, for $k\ge K$, we can rule out the first of these
possibilities; and such that, under the assumption that $b$ is stable, we can
rule out the second as well.
Now, given any
$r\in \mathbb{Z}_{\ge 0}$, we can, by Theorem~\ref{goodcomput},
compute a good covering ${\mathcal S}_r$ of $\overline{B}_{\eta-r}$.
So, given $0\le r\le\lfloor \eta\rfloor$, let
\[\zeta_r = \max (\{ \delta(g,D): (g,D)\in {\mathcal S}_r,\ \delta(g,D)<\eta-r \} \cup
\{0\}).\]
Now if we had a number $n$ with $\delta(n)\in (\zeta_r, \eta-r)$, then $n$ would
be efficiently $3$-represented by some $(\xpdd{g},D)$ for $(g,D)\in {\mathcal S}_r$; this
would imply $\zeta_r<\delta(n)\le\delta(g,D)\le \eta-r$, and so, by the definition of $\zeta_r$, that $\delta(g,D)=\eta-r$.
However, as before (again using Proposition~\ref{polydft}), $\delta(g,D)$ is equal
to a defect plus a nonnegative integer, and as such not equal to any $\eta-r$
except possibly when $r\le 1$, since $\eta$ is equal to $1$ plus a stable
defect. So for $r>1$, we have $\mathscr{D}\cap (\zeta_r, \eta-r)=\emptyset$.
Moreover, when $r=1$, we have $\delta(g,D)=\eta-1=\delta(q)$. Since this is a
stable defect, this once again forces $\deg g = 0$. So this means that
$\delta(n)=\eta-r$, in contradiction to the assumption that $\delta(n)\in
(\zeta_r,\eta-r)$, and we again obtain $\mathscr{D}\cap(\zeta_r, \eta-r)=\emptyset$.
So for each $r>0$, we can as before compute $K_r$ such that $\delta_{f,C}(K_r)-r-1
> \zeta_r$ (since $\lim_k \delta_{f,C}(k)=\delta(f,C)=\eta+1$); we can then be
assured that, for $k\ge K_r$, we cannot have
$\delta(f(3^k))\in(\zeta_r,\eta-r)$ nor can we have
$\delta_\mathrm{st}(f(3^k))\in(\zeta_r,\eta-r)$. This leaves the problem of determining a
suitable $K_0$ for the case of $r=0$.
We claim that in this case, we may pick $K_0$ in the same way; that is, it
suffices to choose $K_0$ such that $\delta_{f,C}(K_0)-1 > \zeta_0$. In other
words, we wish to show that for $k\ge K_0$ it is not possible to have
$\delta(f(3^k))\in (\zeta_0,\eta)$; and that if $b$ is stable, it is not possible
to have $\delta_\mathrm{st}(f(3^k))\in (\zeta_0,\eta)$.
Now in this $r=0$ case, if $\delta(n)\in (\zeta_0, \eta)$, then we can as before
take $(g,D)\in{\mathcal S}_0$ that efficiently $3$-represents $n$. By
Proposition~\ref{cj9prop}, as $\eta=\delta(q)+1$, this implies $\deg g\le 1$, with
$\deg g=1$ if and only if $g$ is substantial. However, by the same reasoning as
in the $r=1$ case, we cannot have $\deg g=0$, as this would force
$\delta(n)=\eta$. So $g$ must be a substantial polynomial of degree $1$.
Such a $g$ takes the form
$g(x)=d(cx+1)$, where $cd=q$ and either $\cpx{c}+\cpx{d}=\cpx{q}$ or $d=1$.
So what we wish to show is that we cannot have $\delta(f(3^k))=\delta(g(3^\ell))$
for any $\ell\in\mathbb{Z}_{\ge 0}$; and that, if $b$ is stable, we moreover cannot have
$\delta_\mathrm{st}(f(3^k))=\delta(g(3^\ell))$ for any $\ell\in\mathbb{Z}_{\ge 0}$. So, again, assume that
such an equality does hold.
In the former case, $f(3^k)$ is efficiently $3$-represented by our
$(\xpdd{g},D)$; in other words, $f(3^k)=g(3^\ell)3^j$ for some substantial $g\in
{\mathcal S}_0$ and $j\in\mathbb{Z}_{\ge 0}$, with $g(3^\ell)$ a leader. In the latter case, where we
assume $b$ stable, we merely obtain that $f(3^k)=g(3^\ell)3^j$ for some $j\in
{\mathbb Z}$. So let us combine these assumptions and say that either $j\ge 0$ or $b$ is
stable, and from this derive a contradiction.
So, in either case, we may write
\[ b(a3^k + 1) = d(c3^\ell+1)3^j, \]
which we may rewrite as
\[ q3^k + b = q3^{\ell+j} + d3^j.\]
Now, we know that $b,d\le q$, and so in particular $b\le q3^k$ and $d3^j \le
q3^{\ell+j}$. Therefore, we know that
\[ q3^k \le q3^k + b \le 2q3^k \]
and
\[ q3^{\ell+j} \le q3^{\ell+j} + d3^j \le 2q3^{\ell+j}. \]
Since the quantities being bounded are equal, both sets of bounds must apply to
this one quantity, which is only possible if $k=\ell+j$, as otherwise the
described intervals are disjoint.
So $q3^k + b = q3^k + d3^j$, or in other words, $b=d3^j$, which since $ab=cd$
implies $a=c3^{-j}$. Now, if $d=1$, then $c=q$, so
$b=3^j$ (and therefore $j>0$, as $b>1$) and
$a=q3^{-j}$. But we assumed $a$ is stable, so
\[\cpx{q}=\cpx{a3^j}=\cpx{a}+3j=\cpx{a}+\cpx{b},\]
contrary to the assumption that $\cpx{q}=\cpx{a}+\cpx{b}-1$.
So we instead must have $d>1$, $\cpx{c}+\cpx{d}=\cpx{q}$, and therefore $c$ and
$d$ both stable by Proposition~\ref{goodfac}. Then in this case note that since
$a$ and $c$ are both stable, we have $\cpx{a}=\cpx{c}-3j$ regardless of the sign
of $j$.
Now, here is where we make use of the alternative we set up above, that
either $j\ge 0$ or $b$ is stable.
If $j\ge 0$, then we may conclude that $\cpx{b}=\cpx{d}+3j$, because $d$ is
stable. While if $b$ is stable, then that means $b$ and $d$ are both stable; so
we may again conclude that $\cpx{b}=\cpx{d}+3j$, using the stability of $d$ if
$j\ge 0$ and using the stability of $b$ if $j\le 0$. But this means that
\[\cpx{a}+\cpx{b}=\cpx{c}+\cpx{d}=\cpx{q},\] again contrary to the assumption
that $\cpx{a}+\cpx{b}=\cpx{q}+1$, and we have reached a contradiction.
This shows that our choice of $K_0$ satisfies the required conditions; it is not
possible to have $\delta(f(3^k))\in (\zeta_0,\eta)$, and if $b$ is stable, it is
not possible to have $\delta_\mathrm{st}(f(3^k))\in (\zeta_0,\eta)$.
So we may once again let \[K = \max_{0\le r\le\lfloor \eta\rfloor} K_r.\] Then
for $k\ge K$, we know $\delta(f(3^k))\equiv\delta_{f,C}(k)\pmod {1}$, and also
\[\delta(f(3^k))\le\delta_{f,C}(k)<\delta(f,C)=\eta+1;\]
and the same is true of $\delta_\mathrm{st}(f(3^k))$.
So once again we conclude that $\delta(f(3^k))=\delta_{f,C}(k)$, and that if $b$ is
stable then $\delta_\mathrm{st}(f(3^k))=\delta_{f,C}(k)$, because otherwise
the defect under consideration would equal $\delta_{f,C}(k)-1-r$ for some $r\ge
0$, and therefore lie in some interval $(\zeta_r,\eta-r)$, a possibility we have
just ruled out (unconditionally for $\delta(f(3^k))$ and under the condition that
$b$ is stable for $\delta_\mathrm{st}(f(3^k))$).
Therefore, we conclude that for $k\ge K$, we have $\cpx{f(3^k)} = C+3k$, and if
$b$ is stable, $\cpx{f(3^k)}_\mathrm{st}=C+3k$. Applying the definitions of $f$, $C$,
and stable complexity yields the desired equations. Finally, we note that all
the steps in determining $K$ were computable, so this proves the theorem.
\end{proof}
Finally, just as we generalized Theorem~\ref{cj2thm} to Corollary~\ref{cj2cor},
let us generalize Theorem~\ref{dragons} in the same way.
\begin{cor}
\label{cordragons}
We have:
\begin{enumerate}
\item Suppose $\cpx{a}_\mathrm{st}+\cpx{b}=\cpx{ab}_\mathrm{st}+1$, and $b\ne 1$. Then there
exists $K$ such that for all $k\ge K$,
\[\cpx{b(a3^k+1)}=\cpx{a}_\mathrm{st}+\cpx{b}+3k+1.\]
\item Suppose $\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}=\cpx{ab}_\mathrm{st}+1$. Then there exists $K$
such that for all $k\ge K$ and $\ell\ge K(b)$,
\[\cpx{b(a3^k+1)3^\ell}=\cpx{a}_\mathrm{st}+\cpx{b}_\mathrm{st}+3k+3\ell+1.\]
\end{enumerate}
Moreover, in both these cases, it is possible to algorithmically compute how
large $K$ needs to be.
\end{cor}
\begin{proof}
For part (1), pick $k_0$ large enough such that $a3^{k_0}$ and
$ab3^{k_0}$ are both stable; let $A=a3^{k_0}$. (Note that by
Theorem~\ref{computk}, $k_0$ can be computed from $a$ and $b$.)
Then
\[\cpx{Ab} = \cpx{ab}_\mathrm{st} + 3k_0
= \cpx{a}_\mathrm{st} + \cpx{b} + 3k_0 - 1
= \cpx{A} + \cpx{b} - 1,\]
so we may apply part (1) of Theorem~\ref{dragons}. So for all sufficiently
large $k$ (and how large can be computed from $A$, $b$, and $k_0$), we have
\begin{multline*}
\cpx{b(a3^k + 1)} =
\cpx{b(A 3^{k-k_0} +1)} = \\
\cpx{A} + \cpx{b} + 3(k-k_0) + 1 =
\cpx{a}_\mathrm{st} + \cpx{b} + 3k + 1.
\end{multline*}
For part (2), let $\ell_0=K(b)$, and pick $k_0$ large enough so that $a3^{k_0}$
and $ab3^{k_0+\ell_0}$ are both stable; let $A=a3^{k_0}$ and $B=b3^{\ell_0}$.
Again, all these quantities may be computed from $a$ and $b$.
Then
\[\cpx{AB} = \cpx{ab}_\mathrm{st} + 3k_0 + 3\ell_0
= \cpx{a}_\mathrm{st} + \cpx{b}_\mathrm{st} + 3k_0 + 3\ell_0 - 1
= \cpx{A} + \cpx{B} - 1,\]
so we may apply part (2) of Theorem~\ref{dragons}. So for all sufficiently
large $k$ (and how large can be computed from $A$, $B$, and $k_0$), and all
$\ell\ge\ell_0$, we have
\begin{multline*}
\cpx{b(a3^k + 1)3^\ell} =
\cpx{B(A 3^{k-k_0} +1)3^{\ell-\ell_0}} = \\
\cpx{A} + \cpx{B} + 3(k-k_0) + 3(\ell-\ell_0) + 1 =
\cpx{a}_\mathrm{st} + \cpx{b}_\mathrm{st} + 3k + 3\ell + 1.
\end{multline*}
\end{proof}
\section{Comparison to addition chains}
\label{addchains}
It is worth discussing here the possibility of analogous theorems to
Theorem~\ref{cj8weak-intro} and Theorem~\ref{selfsim-intro} for addition chains.
An \emph{addition chain} for $n$ is defined to be a sequence
$(a_0,a_1,\ldots,a_r)$ such that $a_0=1$, $a_r=n$, and, for any $1\le k\le r$,
there exist $0\le i, j<k$ such that $a_k = a_i + a_j$; the number $r$ is called
the length of the addition chain. The shortest length among addition chains for
$n$, called the \emph{addition chain length} of $n$, is denoted $\ell(n)$.
Addition chains were introduced in 1894 by H.~Dellac \cite{Dellac} and
reintroduced in 1937 by A.~Scholz \cite{aufgaben}; extensive surveys on the
topic can be found in Knuth \cite[Section 4.6.3]{TAOCP2} and Subbarao
\cite{subreview}.
The notion of addition chain length has obvious similarities to that of integer
complexity; each is a measure of the resources required to build up the number
$n$ starting from $1$. Both allow the use of addition, but integer complexity
supplements this by allowing the use of multiplication, while addition chain
length supplements this by allowing the reuse of any number at no additional
cost once it has been constructed. Furthermore, both measures are approximately
logarithmic; the function $\ell(n)$ satisfies
\[ \log_2 n \le \ell(n) \le 2\log_2 n. \]
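For instance, $(1,2,3,6,12,15)$ is an addition chain for $15$ of length $5$ (in fact $\ell(15)=5$), and indeed
$\log_2 15 \approx 3.91 \le 5 \le 7.81 \approx 2\log_2 15$.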
A difference worth noting is that unlike integer complexity, there is no known
way to compute addition chain length via dynamic programming. Specifically, to
compute integer complexity this way, one may use the fact that for any $n>1$,
\begin{displaymath}
\cpx{n}=\min_{\substack{a,b\in\mathbb{N},\ a,b<n \\ a+b=n\ \mathrm{or}\ ab=n}}
\bigl(\cpx{a}+\cpx{b}\bigr).
\end{displaymath}
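This recursion translates directly into a short dynamic program; the following sketch (an illustration
written for this discussion only, not one of the refined algorithms of \cite{paperalg}) computes
$\cpx{n}$ for all $n$ up to a bound $N$:
\begin{verbatim}
def integer_complexity(N):
    # cpx[n] will hold ||n||; by definition ||1|| = 1
    cpx = [0, 1] + [None] * (N - 1)
    for n in range(2, N + 1):
        # best additive split n = a + (n - a)
        best = min(cpx[a] + cpx[n - a] for a in range(1, n // 2 + 1))
        # best multiplicative split n = d * (n // d)
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx
\end{verbatim}
For example, the table returned for $N=10$ gives $\cpx{6}=5$ (via $6=(1+1)(1+1+1)$) and
$\cpx{10}=7$ (via $10=(1+1+1)(1+1+1)+1$).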
By contrast, addition chain length seems to be harder to compute. Suppose we
have a shortest addition chain $(a_0,\ldots,a_{r-1},a_r)$ for $n$; one might
hope that $(a_0,\ldots,a_{r-1})$ is a shortest addition chain for $a_{r-1}$, but
this need not be the case. An example is provided by the addition chain
$(1,2,3,4,7)$; this is a shortest addition chain for $7$, but $(1,2,3,4)$ is not
a shortest addition chain for $4$, as $(1,2,4)$ is shorter. Moreover, there is
no way to assign to each natural number $n$ a shortest addition chain
$(a_0,\ldots,a_r)$ for $n$ such that $(a_0,\ldots,a_{r-1})$ is the addition
chain assigned to $a_{r-1}$ \cite{TAOCP2}. This can be an obstacle both to
computing addition chain length and proving statements about addition chains.
Despite this, if we define an \emph{addition chain defect}, analogous to
integer complexity defects, we find that they act quite similarly.
As mentioned above, the set of all integer complexity defects
is a well-ordered subset of the real numbers, with order type $\omega^\omega$.
If we define
\[\delta^{\ell}(n):=\ell(n)-\log_2 n,\]
then it was shown in \cite{adcwo} that this is also true for addition chain
defects:
\begin{thm}[Addition chain well-ordering theorem]
\label{adcwothm}
The set \[\mathscr{D}^\ell := \{ \delta^{\ell}(n) : n \in \mathbb{N} \},\] considered as a
subset of the real numbers, is well-ordered and has order type $\omega^\omega$.
\end{thm}
Moreover, as also shown in \cite{adcwo}, stabilization has its analogue as
well:
\begin{thm}
\label{adcstab}
For any natural number $n$, there exists $K\ge 0$ such that, for any $k\ge K$,
\[ \ell(2^k n)=(k-K)+\ell(2^K n). \]
\end{thm}
So we can then ask if the results of this paper will translate to addition
chains. In \cite{adcwo} it was conjectured:
\begin{conj}
\label{adcconjk}
For each whole number $k$, $\mathscr{D}^\ell\cap[0,k]$ has order type
$\omega^k$.
\end{conj}
We could, then, ask whether an even stronger statement, analogous to the results of this paper, might hold:
\begin{conj}
\label{adcconj}
Given $1\le \alpha<\omega^\omega$ an ordinal and $k$ a whole number,
\[\overline{\mathscr{D}^\ell}(\omega^k \alpha)={\mathscr{D}^\ell}(\alpha)+k.\]
Furthermore,
\[ \overline{\mathscr{D}^\ell_\mathrm{st}} = \mathscr{D}^\ell_\mathrm{st} + \mathbb{Z}_{\ge 0}. \]
\end{conj}
We could also ask the same for restricted types of addition chains, such as star
chains or Hansen chains. Actually,
Theorems~\ref{adcwothm} and \ref{adcstab} were proven for these types of chain
as well in \cite{adcwo}, and
for other sorts obeying quite general conditions, but it is not clear what sort
of conditions would be needed to achieve stronger results such as
Conjecture~\ref{adcconjk} or \ref{adcconj}.
Note that there is no equivalent of the modulo-$3$ results for addition chains,
due to our basic inequality being $\ell(2n)\le \ell(n)+1$, rather than
$\cpx{3n}\le\cpx{n}+3$; see Section~1E of \cite{intdft} for a more detailed
discussion of this point.
\end{document}
\begin{document}
\begin{abstract}
We present an elementary construction of the non-connective algebraic $K$-theory spectrum
associated to an additive category following the contracted functor approach due to Bass.
It comes with a universal property that easily allows us to identify it with other
constructions, for instance with the one of Pedersen-Weibel in terms of $\mathbb{Z}^i$-graded
objects and bounded homomorphisms.
\end{abstract}
\title{Non-connective $K$- and Nil-spectra of additive categories}
\typeout{----------------------- Introduction ------------------------}
\section*{Introduction}
In this paper we present a construction of the non-connective $K$-theory spectrum
${\mathbf K}infty(\mathcal{A})$ associated to an additive category $\mathcal{A}$. Its $i$-th homotopy
group is the $i$-th algebraic $K$-group of $\mathcal{A}$ for each $i \in \mathbb{Z}$. The construction
is a spectrum version of the construction of negative $K$-groups in terms of contracted
functors due to Bass~\cite[\S7 in Chapter XII]{Bass(1968)}.
By construction, this non-connective delooping of connective $K$-theory is the
universal one that satisfies the Bass-Heller-Swan decomposition:
roughly speaking, the passage from the connective algebraic $K$-theory spectrum
${\mathbf K}$ to the non-connective algebraic $K$-theory spectrum ${\mathbf K}infty$
is up to weak homotopy equivalence uniquely determined by the properties
that the Bass-Heller-Swan map for ${\mathbf K}infty$ is a weak equivalence
and the comparison map ${\mathbf K} \to {\mathbf K}infty$ is
bijective on homotopy groups of degree $\ge 1$. This universal property will easily
allow us to identify our model of a
non-connective algebraic $K$-theory spectrum with the construction due to
Pedersen-Weibel~\cite{Pedersen-Weibel(1985)} based on
$\mathbb{Z}^i$-graded objects and bounded homomorphisms.
We will use this construction to explain how the twisted Bass-Heller-Swan
decomposition
for connective $K$-theory of additive categories can be extended to the
non-connective
version, compare~\cite{Lueck-Steimle(2013twisted_BHS)}. We will also discuss that
the
compatibility of the connective $K$-theory spectrum with filtered colimits
passes to the non-connective $K$-theory spectrum. Finally we deal with homotopy
$K$-theory and applications
to the $K$-theoretic Farrell-Jones Conjecture.
In the setting of functors defined on quasi-separated quasi-compact schemes, a
non-connective
delooping based on Bass's approach was carried out by Thomason-Trobaugh
\cite[section 6]{Thomason-Trobaugh(1990)}. Later Schlichting
\cite{Schlichting(2004deloop_exact), Schlichting(2006)} defined a non-connective
delooping in the wider context of exact categories and even more generally of
``Frobenius pairs''; it is the universal delooping of connective $K$-theory that
satisfies the Thomason-Trobaugh localization theorem, i.e., takes exact sequences of
triangulated categories to cofiber sequences of spectra. Most recently,
Cisinski-Tabuada \cite{Cisinski-Tabuada(2011)} and Blumberg-Gepner-Tabuada
\cite{Blumberg-Gepner-Tabuada(2013)} have used higher category theory to show that
non-connective $K$-theory, defined as a functor on dg categories and stable
$\infty$-categories, respectively, is the universal theory satisfying a list of
axioms containing the localization theorem.
While the latter characterizations are more general in that they apply to a broader
context and emphasize the role of the localization theorem, the approach presented
here has the charm of being elementary and of possessing a universal property in the
context of additive categories.
\tableofcontents
This paper has been financially supported by the Leibniz-Award
of the first author granted by the Deutsche Forschungsgemeinschaft. The second author was also supported by ERC Advanced Grant 288082 and the
Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92).
The authors want to thank the referee for his or her useful comments.
\typeout{----------------------- Section 1: The Bass-Heller-Swan map ------------------------}
\section{The Bass-Heller-Swan map}
\label{sec:The_Bass-Heller-Swan_map}
Let ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ be a covariant functor from the category
$\matheurm{Add\text{-}Cat}$ of additive categories to the category $\matheurm{Spectra}$ of (sequential) spectra. Our main example
will be the functor ${\mathbf K}$ which assigns to an additive category $\mathcal{A}$ its (connective) algebraic
$K$-theory spectrum ${\mathbf K}(\mathcal{A})$, with the property that
$K_i(\mathcal{A}) = \pi_i({\mathbf K}(\mathcal{A}))$ for $i \ge 0$ and $\pi_i({\mathbf K}(\mathcal{A})) = 0$ for $i \le -1$. Let
$\mathcal{I}$ be the groupoid which has two objects $0$ and $1$ and for which the set of
morphisms between any two objects consists of precisely one element. Equip $\mathcal{A} \times
\mathcal{I}$ with the obvious structure of an additive category. For an additive category
$\mathcal{A}$ let $j_i \colon \mathcal{A} \to \mathcal{A} \times \mathcal{I}$ be the functor of additive
categories which sends a morphism $f \colon A \to B$ in $\mathcal{A}$ to the morphism
$f \times \id_i \colon A \times i \to B \times i$ for $i = 0,1$.
\begin{condition} \label{con:condition_bfu}
For every additive category $\mathcal{A}$, we require the existence of a map
\[
{\mathbf u} \colon {\mathbf E}(\mathcal{A}) \wedge [0,1]_+ \to {\mathbf E}(\mathcal{A} \times \mathcal{I})
\]
such that ${\mathbf u}$ is natural in $\mathcal{A}$ and, for $i = 0,1$ and
$k_i \colon {\mathbf E}(\mathcal{A}) \to {\mathbf E}(\mathcal{A}) \wedge [0,1]_+$
the obvious inclusion coming from the inclusion $\{i\} \to [0,1]$, the
composite ${\mathbf u}\circ k_i$ agrees with ${\mathbf E}(j_i)$.
\end{condition}
The connective algebraic $K$-theory spectrum functor ${\mathbf K}$ satisfies
Condition~\ref{con:condition_bfu}, cf.~\cite[Proposition~1.3.1 on page~330]{Waldhausen(1985)}.
\begin{lemma} Suppose that ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ satisfies Condition~\ref{con:condition_bfu}.
\label{lem:nat_equiv_homotopy}
\begin{enumerate}
\item \label{lem:nat_equiv_homotopy:homotopy}
Let $F_i \colon \mathcal{A} \to \mathcal{B}$, $i=0,1$ be two
functors of additive categories and let $T \colon F_0 \xrightarrow{\simeq} F_1$ be a
natural isomorphism. Then we obtain a homotopy
\[
{\mathbf h} \colon {\mathbf E}(\mathcal{A}) \wedge [0,1]_+ \to {\mathbf E}(\mathcal{B})
\]
with ${\mathbf h} \circ k_i = {\mathbf E}(F_i)$ for $i =0,1$. This construction is natural in $F_0$,
$F_1$ and $T$;
\item \label{lem:nat_equiv_homotopy_equivalence} An equivalence $F \colon \mathcal{A} \to \mathcal{B}$
of additive categories induces a homotopy equivalence ${\mathbf E}(F) \colon {\mathbf E}(\mathcal{A}) \to {\mathbf E}(\mathcal{B})$.
\end{enumerate}
\end{lemma}
\begin{proof}~\ref{lem:nat_equiv_homotopy:homotopy}
The triple $(F_0,F_1,T)$ induces an additive functor $H\colon \mathcal{A} \times \mathcal{I} \to \mathcal{B}$ with
$H \circ j_i = F_i$ for $i = 0,1$. Define ${\mathbf h}$ to be the composite ${\mathbf E}(H) \circ {\mathbf u}$.
\\[1mm]~\ref{lem:nat_equiv_homotopy_equivalence}
Let $F' \colon \mathcal{B} \to \mathcal{A}$ be a functor such that $F' \circ F$ is naturally isomorphic to $\id_{\mathcal{A}}$ and
$F \circ F'$ is naturally isomorphic to $\id_{\mathcal{B}}$. Assertion~\ref{lem:nat_equiv_homotopy:homotopy}
implies ${\mathbf E}(F') \circ {\mathbf E}(F) \simeq \id_{{\mathbf E}(\mathcal{A})}$ and ${\mathbf E}(F) \circ {\mathbf E}(F') \simeq \id_{{\mathbf E}(\mathcal{B})}$.
\end{proof}
Let $\mathcal{A}$ be an additive category. Define the \emph{associated Laurent category}
$\mathcal{A}[t,t^{-1}]$ as follows. It has the same objects as $\mathcal{A}$. Given two objects $A$
and $B$, a morphism $f \colon A \to B$ in $\mathcal{A}[t,t^{-1}]$ is a formal sum
$f = \sum_{k \in \mathbb{Z}} f_k \cdot t^k$, where $f_k \colon A \to B$ is a morphism in $\mathcal{A}$ and only
finitely many of the morphisms $f_k$ are non-trivial. If $f = \sum_{i \in \mathbb{Z}} f_i \cdot t^i\colon A \to B$
and $g = \sum_{j \in \mathbb{Z}} g_j \cdot t^j \colon B \to C$ are morphisms in $\mathcal{A}[t,t^{-1}]$,
we define the composite $g \circ f \colon A \to C$ by
\[
g \circ f := \sum_{k \in \mathbb{Z}} \;\biggl( \sum_{\substack{i,j \in \mathbb{Z},\\i+j = k}} g_j \circ f_i\biggr) \cdot t^k.
\]
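As a small illustration of this composition rule, if $f = f_0 \cdot t^0 + f_1 \cdot t$ and
$g = g_0 \cdot t^0 + g_{-1} \cdot t^{-1}$, then
\[
g \circ f = (g_{-1} \circ f_0)\cdot t^{-1} + (g_0 \circ f_0 + g_{-1}\circ f_1)\cdot t^0 + (g_0 \circ f_1)\cdot t.
\]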
The direct sum and the structure of an abelian group on the set of morphism from $A$ to
$B$ in $\mathcal{A}[t,t^{-1}]$ is defined in the obvious way using the
corresponding structures of $\mathcal{A}$.
Let $\mathcal{A}[t]$ and $\mathcal{A}[t^{-1}]$ respectively be the additive subcategory of
$\mathcal{A}[t,t^{-1}]$ whose set of objects is the set of objects in $\mathcal{A}$ and whose morphisms
from $A$ to $B$ are given by finite Laurent series $\sum_{k \in \mathbb{Z}} f_k \cdot t^k$
with $f_k = 0$ for $k < 0$ and $k > 0$ respectively.
If $\mathcal{A}$ is the additive category of finitely generated free $R$-modules, then
$\mathcal{A}[t]$ and $\mathcal{A}[t,t^{-1}]$ respectively are equivalent to the additive category of
finitely generated free modules over $R[t]$ and $R[t,t^{-1}]$ respectively.
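Concretely, in this case a morphism $R^m \to R^n$ in $\mathcal{A}[t,t^{-1}]$ can be identified with an
$(n \times m)$-matrix with entries in the Laurent polynomial ring $R[t,t^{-1}]$.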
Define functors
\[
z[t,t^{-1}], z[t], z[t^{-1}] \colon \matheurm{Add\text{-}Cat} \to \matheurm{Add\text{-}Cat}
\]
by sending an object $\mathcal{A}$ to the object $\mathcal{A}[t,t^{-1}]$, $\mathcal{A}[t]$ and $\mathcal{A}[t^{-1}]$ respectively.
Their definition on morphisms in $\matheurm{Add\text{-}Cat}$ is obvious.
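Explicitly, a functor $F \colon \mathcal{A} \to \mathcal{B}$ of additive categories is sent to the functor which agrees
with $F$ on objects and sends a morphism $\sum_{k \in \mathbb{Z}} f_k \cdot t^k$ to $\sum_{k \in \mathbb{Z}} F(f_k) \cdot t^k$.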
Next we define natural transformations of functors $\matheurm{Add\text{-}Cat} \to \matheurm{Add\text{-}Cat}$
\begin{equation}\label{eq:square_of_functors}\xymatrix{
& z[t] \ar[rd]^{j_+} \ar@/_2ex/[ld]_{\ev_0^+}\\
\id_{\matheurm{Add\text{-}Cat}} \ar[ru]_{i_+} \ar[rd]^{i_-} \ar[rr]^{i_0} && z[t,t^{-1}]\\
& z[t^{-1}] \ar[ru]_{j_-} \ar@/^2ex/[lu]^{\ev_0^-}
}\end{equation}
We have to specify for every object $\mathcal{A}$ in $\matheurm{Add\text{-}Cat}$ their values on $\mathcal{A}$.
The functors $i_0(\mathcal{A})$, $i_+(\mathcal{A})$ and $i_-(\mathcal{A})$ send a morphism $f \colon A \to B$ in
$\mathcal{A}$ to the morphism $f \cdot t^0 \colon A \to B$. The functors
$j_{\pm}(\mathcal{A})$ are the obvious inclusions.
The functor $\ev_0^{\pm}(\mathcal{A}) \colon \mathcal{A}[t^{\pm 1}] \to \mathcal{A}$
sends a morphism $\sum_{k \ge 0} f_k \cdot t^k$ in $\mathcal{A}[t]$
or $\sum_{k \le 0} f_k \cdot t^k$ in $\mathcal{A}[t^{-1}]$
respectively to $f_0$. Notice that $\ev_0^+\circ i_+= \ev_0^-\circ i_-= \id_{\mathcal{A}}$
and $i_0 = j_+ \circ i_+ = j_- \circ i_-$ holds.
Given a functor ${\mathbf E}\colon\matheurm{Add\text{-}Cat}\to\matheurm{Spectra}$, we now define a number of new functors of the same type. Put
\begin{eqnarray*}
{\mathbf Z}_{\pm}{\mathbf E} & := & {\mathbf E} \circ z[t^{\pm1}];
\\
{\mathbf Z}{\mathbf E} & := & {\mathbf E} \circ z[t,t^{-1}].
\end{eqnarray*}
The square \eqref{eq:square_of_functors} induces a square of natural transformations
\[\xymatrix{
& {\mathbf Z}_+{\mathbf E} \ar[rd]^{{\mathbf j}_+} \ar@/_2ex/[ld]_{{\mathbf e}v_0^+}\\
{\mathbf E} \ar[ru]_{{\mathbf i}_+} \ar[rd]^{{\mathbf i}_-} \ar[rr]^{{\mathbf i}_0} && {\mathbf Z}{\mathbf E}\\
& {\mathbf Z}_-{\mathbf E} \ar[ru]_{{\mathbf j}_-} \ar@/^2ex/[lu]^{{\mathbf e}v_0^-}
}\]
Next we define a natural transformation
\[
{\mathbf a} \colon {\mathbf E}\wedge (S^1)_+ \to {\mathbf Z}{\mathbf E}.
\]
Let $T \colon i_0 \to i_0$ be the natural transformation of functors $\mathcal{A} \to
\mathcal{A}[t,t^{-1}]$ of additive categories whose value at an object $A$ is given by the
isomorphism $\id_A \cdot t \colon A \to A$ in $\mathcal{A}[t,t^{-1}]$. Because of
Lemma~\ref{lem:nat_equiv_homotopy}~\ref{lem:nat_equiv_homotopy:homotopy} it induces a
homotopy ${\mathbf h} \colon {\mathbf E}(\mathcal{A}) \wedge [0,1]_+ \to {\mathbf E}(\mathcal{A}[t,t^{-1}])$ such that
${\mathbf h}_0 = {\mathbf h}_1 = {\mathbf E}(i_0)$ holds, where ${\mathbf h}_i := {\mathbf h} \circ k_i$ for the obvious
inclusion $k_i \colon {\mathbf E}(\mathcal{A}) \to {\mathbf E}(\mathcal{A}) \wedge [0,1]_+$ for $i = 0,1$.
Since we have the obvious pushout
\[
\xymatrix{{\mathbf E}(\mathcal{A})\wedge \{0,1\}_+ \ar[r] \ar[d]
& {\mathbf E}(\mathcal{A}) \wedge [0,1]_+ \ar[d]
\\
{\mathbf E}(\mathcal{A}) \ar[r]
&
{\mathbf E}(\mathcal{A}) \wedge (S^1)_+}
\]
we obtain a map ${\mathbf a}(\mathcal{A}) \colon {\mathbf E}(\mathcal{A}) \wedge (S^1)_+ \to {\mathbf E}(\mathcal{A}[t,t^{-1}])$.
This defines a transformation
\[
{\mathbf a} \colon {\mathbf E}\wedge (S^1)_+ \to {\mathbf Z}{\mathbf E}.
\]
In order to guarantee the existence of ${\mathbf a}$, we have imposed the Condition~\ref{con:condition_bfu}
which is stronger than just demanding that ${\mathbf E}$ sends equivalences of additive categories
to (weak) homotopy equivalences of spectra.
Define ${\mathbf N}_{\pm}{\mathbf E}$ to be the homotopy fiber of the map of
spectra ${\mathbf e}v_0^{\pm} \colon {\mathbf Z}_{\pm}{\mathbf E} \to {\mathbf E}$.
Let
\[
{\mathbf b}_{\pm} \colon {\mathbf N}_{\pm}{\mathbf E}\to {\mathbf Z}_{\pm}{\mathbf E} \xrightarrow{{\mathbf j}_\pm} {\mathbf Z}{\mathbf E}
\]
be the composite map, where the first map is the canonical one.
Define
\begin{eqnarray*}
{\mathbf B}{\mathbf E} & = & \bigl({\mathbf E} \wedge (S^1)_+\bigr) \vee {\mathbf N}_+{\mathbf E} \vee {\mathbf N}_-{\mathbf E};
\\
{\mathbf B}_r{\mathbf E} & = & {\mathbf E} \vee {\mathbf N}_+{\mathbf E} \vee {\mathbf N}_-{\mathbf E}
\end{eqnarray*}
and let
\begin{eqnarray*}
{\mathbf B}HS := {\mathbf a} \vee {\mathbf b}_+ \vee {\mathbf b}_- & \colon & {\mathbf B}{\mathbf E} \to {\mathbf Z}{\mathbf E};
\\
{\mathbf B}HS_r := {\mathbf i}_0 \vee {\mathbf b}_+ \vee {\mathbf b}_- & \colon & {\mathbf B}_r{\mathbf E} \to {\mathbf Z}{\mathbf E}.
\end{eqnarray*}
We sometimes call ${\mathbf B}HS$ the \emph{Bass-Heller-Swan map} and
${\mathbf B}HS_r$ the \emph{restricted Bass-Heller-Swan map}.
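For the algebraic $K$-theory spectrum of the additive category of finitely generated projective $R$-modules,
the map ${\mathbf B}HS$ induces in degrees $n \ge 1$ the classical Bass-Heller-Swan homomorphism
$K_n(R) \oplus K_{n-1}(R) \oplus N\!K_n(R) \oplus N\!K_n(R) \to K_n(R[t,t^{-1}])$, which explains the terminology.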
We have the following commutative diagram
\begin{eqnarray}
&
\xymatrix{{\mathbf E} \ar[d] \ar[r]^-{{\mathbf l}}
&{\mathbf E} \wedge (S^1)_+ \ar[d]^{{\mathbf a}}
\\
{\mathbf B}_r{\mathbf E} \ar[r]^-{{\mathbf B}HS_r} &
{\mathbf Z}{\mathbf E}
}
\label{homotopy_pushout_for_bfl}
\end{eqnarray}
where the left vertical arrow is the canonical inclusion,
and ${\mathbf l}$ is the obvious inclusion coming from the
identification ${\mathbf E} = {\mathbf E} \wedge \pt_+$ and the inclusion $\pt_+ \to (S^1)_+$.
It induces a map ${\mathbf s}'' \colon \hocofib({\mathbf l}) \to \hocofib({\mathbf B}HS_r)$.
Let ${\mathbf p}r \colon \hocofib({\mathbf l}) \to \Sigma {\mathbf E} := {\mathbf E} \wedge S^1$ be the obvious projection
which is a homotopy equivalence. Define ${\mathbf L}{\mathbf E}$ to be the homotopy pushout
\begin{eqnarray}
\xymatrix{
\hocofib({\mathbf l}) \ar[r]^-{{\mathbf p}r}_-{\simeq} \ar[d]_{{\mathbf s}''}
&
\Sigma {\mathbf E} \ar[d]^{{\mathbf s}'}
\\
\hocofib({\mathbf B}HS_r) \ar[r]_-{\overline{{\mathbf p}r}}^-{\simeq}
&
{\mathbf L}{\mathbf E}
}
\label{diagram_for_bfL}
\end{eqnarray}
By construction we obtain a homotopy cofiber sequence
\[
{\mathbf B}_r{\mathbf E} \xrightarrow{{\mathbf B}HS_r} {\mathbf Z}{\mathbf E} \to {\mathbf L}{\mathbf E}.
\]
Denote by
\[
{\mathbf s} \colon {\mathbf E} \to \Omega {\mathbf L}{\mathbf E}
\]
the adjoint of ${\mathbf s}'$.
Summarizing, we have now defined functors
\[
{\mathbf B}{\mathbf E}, {\mathbf B}_r{\mathbf E}, {\mathbf L}{\mathbf E}, {\mathbf N}_{\pm}{\mathbf E}, {\mathbf Z}_{\pm}{\mathbf E},{\mathbf Z}{\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}
\]
and natural transformations
\begin{eqnarray*}
{\mathbf i}_0 \colon {\mathbf E} & \to & {\mathbf Z}{\mathbf E};
\\
{\mathbf i}_{\pm} \colon {\mathbf E} & \to & {\mathbf Z}_{\pm}{\mathbf E};
\\
{\mathbf j}_{\pm} \colon {\mathbf Z}_{\pm} {\mathbf E} & \to & {\mathbf Z}{\mathbf E};
\\
{\mathbf e}v_0^{\pm} \colon {\mathbf Z}_{\pm} {\mathbf E} & \to & {\mathbf E};
\\
{\mathbf a} \colon {\mathbf E} \wedge (S^1)_+ & \to & {\mathbf Z} {\mathbf E};
\\
{\mathbf b}_{\pm} \colon {\mathbf N}_{\pm}{\mathbf E} & \to & {\mathbf Z}{\mathbf E};
\\
{\mathbf B}HS \colon {\mathbf B}{\mathbf E} & \to & {\mathbf Z}{\mathbf E};
\\
{\mathbf B}HS_r \colon {\mathbf B}_r{\mathbf E} & \to & {\mathbf Z}{\mathbf E};
\\
{\mathbf s} \colon {\mathbf E} & \to & \Omega {\mathbf L}{\mathbf E}.
\end{eqnarray*}
\begin{definition}[Compatible transformations]\label{def:funcc}
Let ${\mathbf E},{\mathbf F}\colon \matheurm{Add\text{-}Cat}\to\matheurm{Spectra}$ be two functors satisfying
Condition~\ref{con:condition_bfu}. A natural transformation $\phi\colon {\mathbf E}\to {\mathbf F}$ is called
\emph{compatible} if the obvious diagram
\[\xymatrix{
{\mathbf E}(\mathcal{A})\wedge [0,1]_+ \ar[rr]^{\mathbf u} \ar[d]^\phi && {\mathbf E}(\mathcal{A}\times\mathcal{I}) \ar[d]^\phi\\
{\mathbf F}(\mathcal{A})\wedge [0,1]_+ \ar[rr]^{\mathbf u} && {\mathbf F}(\mathcal{A}\times\mathcal{I}) }\]
is commutative. The category
\[\funcc(\matheurm{Add\text{-}Cat}, \matheurm{Spectra})\]
is the category of functors satisfying Condition~\ref{con:condition_bfu} whose morphisms
are compatible natural transformations.
\end{definition}
We leave the proof of the following lemma to the reader:
\begin{lemma}\label{lem:inheritance_of_condition_under_general_constructions}
\begin{enumerate}
\item If ${\mathbf E}\colon \matheurm{Add\text{-}Cat}\to\matheurm{Spectra}$ satisfies Condition~\ref{con:condition_bfu}
then so do ${\mathbf E}\wedge X$ and $\map(X, {\mathbf E})$ for any space $X$.
\item If ${\mathbf E}$ and ${\mathbf E}'$ satisfy Condition~\ref{con:condition_bfu} then so does
${\mathbf E}\vee{\mathbf E}'$.
\item If
\[{\mathbf E}_1 \rightarrow {\mathbf E}_0 \leftarrow {\mathbf E}_2\]
is a diagram in $\funcc(\matheurm{Add\text{-}Cat}, \matheurm{Spectra})$, then its homotopy pullback satisfies
Condition~\ref{con:condition_bfu}.
\item If $\mathcal{J}$ is a small category and $F\colon \mathcal{J}\to \funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})$ is
a functor, then $\hocolim F$ satisfies Condition~\ref{con:condition_bfu}.
\end{enumerate}
\end{lemma}
\begin{remark}
The category $\matheurm{Spectra}$ is the ``naive'' one with strict morphisms of spectra as described for instance
in~\cite{Davis-Lueck(1998)}. Our model for $\Omega{\mathbf E}$ is the spectrum $\map(S^1, {\mathbf E})$ defined levelwise,
and analogously for the homotopy pushout, homotopy pullback, homotopy fiber,
and more generally for homotopy colimits and homotopy limits
over arbitrary index categories. For more details see for
instance~\cite{Davis-Lueck(1998),Hollender-Vogt(1992)}.
\end{remark}
As an application of Lemma~\ref{lem:inheritance_of_condition_under_general_constructions}, we deduce:
\begin{lemma} \label{lem:cond_inherited}
If ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ satisfies Condition~\ref{con:condition_bfu}, then so do
${\mathbf B}{\mathbf E}$, ${\mathbf B}_r{\mathbf E}$, ${\mathbf L}{\mathbf E}$, ${\mathbf N}_{\pm}{\mathbf E}$, ${\mathbf Z}_{\pm}{\mathbf E}$, and ${\mathbf Z}{\mathbf E}$.
\end{lemma}
We will apply this as well as the following result without further remarks.
\begin{lemma} \label{lem:f_equiv_implies_Bf,Zf,Nf_equiv}
Let ${\mathbf f} \colon {\mathbf E} \to {\mathbf F}$ be a transformation of functors $\matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$.
Suppose that it is a weak equivalence, i.e., ${\mathbf f}(\mathcal{A})$ is a weak equivalence for any object $\mathcal{A}$
in $\matheurm{Add\text{-}Cat}$. Then the same is true for the transformations
${\mathbf B}{\mathbf f}$, ${\mathbf B}_r{\mathbf f}$, ${\mathbf L}{\mathbf f}$, ${\mathbf N}_{\pm}{\mathbf f}$, ${\mathbf Z}_{\pm}{\mathbf f}$, and ${\mathbf Z}{\mathbf f}$.
\end{lemma}
\typeout{----------------------- Section 2: Contracted functors ------------------------}
\section{Contracted functors}
\label{sec:Contracted_functors}
Let ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ be a covariant functor satisfying Condition~\ref{con:condition_bfu}.
\begin{definition}[$c$-contracted]
\label{def:c-contracted_functor}
For $c \in \mathbb{Z}$, we call ${\mathbf E}$ \emph{$c$-contracted} if it satisfies the following two
conditions:
\begin{enumerate}
\item For every $i \in \mathbb{Z}$ the natural transformation
$\pi_i({\mathbf B}HS_r) \colon \pi_i({\mathbf B}_r{\mathbf E}) \to\pi_i({\mathbf Z}{\mathbf E})$ is split injective, i.e., there exists
a natural transformation of functors from $\matheurm{Add\text{-}Cat}$ to the category of abelian groups
\[
\rho_i \colon \pi_i({\mathbf Z}{\mathbf E}) \to \pi_i({\mathbf B}_r{\mathbf E})
\]
such that the composite $\pi_i({\mathbf B}_r{\mathbf E}) \xrightarrow{\pi_i({\mathbf B}HS_r)} \pi_i({\mathbf Z}{\mathbf E})
\xrightarrow{\rho_i} \pi_i({\mathbf B}_r{\mathbf E})$ is the identity;
\item For $i \in \mathbb{Z}, i \ge -c+1$ the transformation
\[
\pi_i({\mathbf B}HS) \colon \pi_i({\mathbf B}{\mathbf E}) \to \pi_i({\mathbf Z}{\mathbf E})
\]
is an isomorphism, i.e., its evaluation at any additive category $\mathcal{A}$ is bijective.
\end{enumerate}
We call ${\mathbf E}$ \emph{$\infty$-contracted} if
${\mathbf B}HS \colon {\mathbf B}{\mathbf E} \to {\mathbf Z}{\mathbf E}$ is a weak homotopy equivalence.
\end{definition}
\begin{lemma} \label{lem:c_contracted_and_vee} Let ${\mathbf E},{\mathbf E}' \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$
be covariant functors satisfying Condition~\ref{con:condition_bfu}.
\begin{enumerate}
\item \label{lem:c_contracted_and_vee:pi_i_iso}
Suppose that ${\mathbf E}$ and ${\mathbf E}'$ satisfy Condition~\ref{con:condition_bfu}. Consider $i \in \mathbb{Z}$.
Then both $\pi_i({\mathbf B}HS({\mathbf E}))$ and $\pi_i({\mathbf B}HS({\mathbf E}'))$ are isomorphisms if and only if
$\pi_i({\mathbf B}HS({\mathbf E} \vee {\mathbf E}'))$ is an isomorphism;
\item \label{lem:c_contracted_and_vee:contracted}
Suppose that ${\mathbf E}$ and ${\mathbf E}'$ satisfy Condition~\ref{con:condition_bfu}. Consider $c \in \mathbb{Z}$. Then
${\mathbf E} \vee {\mathbf E}'$ is $c$-contracted if and
only if both ${\mathbf E}$ and ${\mathbf E}'$ are $c$-contracted.
\end{enumerate}
\end{lemma}
\begin{proof}
The transformation
\[
{\mathbf b}_{\pm} \vee {\mathbf i}_{\pm} \colon {\mathbf N}_{\pm}{\mathbf E} \vee {\mathbf E} \to {\mathbf Z}_{\pm}{\mathbf E}
\]
is a weak equivalence, i.e., its evaluation at any additive category $\mathcal{A}$ is a weak equivalence of spectra,
since ${\mathbf e}v_0^{\pm} \circ {\mathbf i}_{\pm} = \id_{{\mathbf E}}$ holds. Note that
\[
{\mathbf Z}_{\pm}{\mathbf E}\vee {\mathbf Z}_{\pm}{\mathbf E}' = {\mathbf Z}_{\pm}({\mathbf E} \vee {\mathbf E}')
\]
so
\[
{\mathbf N}_{\pm}{\mathbf E} \vee {\mathbf N}_{\pm}{\mathbf E}' = {\mathbf N}_{\pm}({\mathbf E} \vee {\mathbf E}').
\]
The obvious map
\[
\bigl({\mathbf E} \wedge (S^1)_+\bigr) \vee \bigl({\mathbf E}' \wedge (S^1)_+\bigr) \to ({\mathbf E} \vee {\mathbf E}')\wedge (S^1)_+
\]
is an isomorphism. Hence the following two obvious transformations are weak equivalences
\begin{eqnarray*}
{\mathbf B}_r{\mathbf E} \vee {\mathbf B}_r{\mathbf E}' & \to & {\mathbf B}_r({\mathbf E} \vee {\mathbf E}');
\\
{\mathbf B}{\mathbf E} \vee {\mathbf B}{\mathbf E}' & \to & {\mathbf B}({\mathbf E} \vee {\mathbf E}').
\end{eqnarray*}
Now the claim follows from the following two commutative diagrams
\[
\xymatrix{\pi_i({\mathbf B}{\mathbf E}) \oplus \pi_i({\mathbf B}{\mathbf E}')
\ar[r]^-{\cong} \ar[d]_{\pi_i({\mathbf B}HS({\mathbf E})) \oplus \pi_i({\mathbf B}HS({\mathbf E}'))}
&
\pi_i({\mathbf B}({\mathbf E} \vee {\mathbf E}'))
\ar[d]^{\pi_i({\mathbf B}HS({\mathbf E} \vee {\mathbf E}'))}
\\
\pi_i({\mathbf Z}{\mathbf E}) \oplus \pi_i({\mathbf Z}{\mathbf E}')
\ar[r]^-{\cong}
&
\pi_i({\mathbf Z}({\mathbf E} \vee {\mathbf E}'))}
\]
and
\belowdisplayskip=-12pt
\[
\xymatrix{\pi_i({\mathbf B}_r{\mathbf E}) \oplus \pi_i({\mathbf B}_r{\mathbf E}')
\ar[r]^-{\cong} \ar[d]_{\pi_i({\mathbf B}HS_r({\mathbf E})) \oplus \pi_i({\mathbf B}HS_r({\mathbf E}'))}
&
\pi_i({\mathbf B}_r({\mathbf E} \vee {\mathbf E}'))
\ar[d]^{\pi_i({\mathbf B}HS_r({\mathbf E} \vee {\mathbf E}'))}
\\
\pi_i({\mathbf Z}{\mathbf E}) \oplus \pi_i({\mathbf Z}{\mathbf E}')
\ar[r]^-{\cong}
&
\pi_i({\mathbf Z}({\mathbf E} \vee {\mathbf E}'))}
\]
\end{proof}
Define
\begin{eqnarray}
&
{\mathbf K} \colon \matheurm{Add\text{-}Cat}\to \matheurm{Spectra}
&
\label{bfK}
\end{eqnarray}
to be the connective $K$-theory spectrum functor in the sense of Quillen~\cite[page~95]{Quillen(1973)}
by regarding $\mathcal{A}$ as an exact category or in the sense of Waldhausen~\cite[page~330]{Waldhausen(1985)}
by regarding $\mathcal{A}$ as a Waldhausen category. (These approaches are equivalent,
see~\cite[Section~1.9]{Waldhausen(1985)}).
\begin{theorem}[Bass-Heller-Swan Theorem for ${\mathbf K}$]
\label{the:Bass-Heller-Swan_Theorem_for_bfK}
The functor ${\mathbf K}$ is $1$-contract\-ed in the sense of Definition~\ref{def:c-contracted_functor}.
\end{theorem}
\begin{proof}
The proof that the Bass-Heller-Swan map induces bijections on $\pi_i$ for $i \ge 1$ can
be found in~\cite[ Theorem~0.4~(i)]{Lueck-Steimle(2013twisted_BHS)} provided that $\mathcal{A}$ is idempotent
complete. Denote by $\eta \colon \mathcal{A} \to \Idem(\mathcal{A})$ the inclusion of
$\mathcal{A}$ into its idempotent completion. By cofinality~\cite[Theorem~A.9.1]{Thomason-Trobaugh(1990)} the maps ${\mathbf Z}{\mathbf K}(\eta)$ and
${\mathbf B}_r{\mathbf K}(\eta)$ induce isomorphisms on $\pi_1$ for $i\ge 1$; the map
${\mathbf B}{\mathbf K}(\eta)$ induces isomorphisms at least for $i\geq 2$. The commutativity of the
diagram
\[\xymatrix{
{\mathbf B} {\mathbf K}(\mathcal{A}) \ar[rr]^{{\mathbf B}HS} \ar[d]^{{\mathbf B} {\mathbf K}(\eta)}
&& {\mathbf Z} {\mathbf K}(\mathcal{A}) \ar[d]^{{\mathbf Z}{\mathbf K}(\eta)}
\\
{\mathbf B} {\mathbf K}(\Idem(\mathcal{A})) \ar[rr]^{{\mathbf B}HS}
&& {\mathbf Z} {\mathbf K}(\Idem(\mathcal{A}))
}\]
shows that the Bass-Heller-Swan map for $\mathcal{A}$ induces isomorphisms of $\pi_i$ for $i\geq
2$ and that the restricted Bass-Heller-Swan map for $\mathcal{A}$ is split injective on $\pi_i$
for $i\geq 1$.
Since all spectra are connective, it remains to show that the restricted Bass-Heller-Swan
map for $\mathcal{A}$ is split injective on $\pi_0$. Notice that
\[\pi_0 ({\mathbf K}(\mathcal{A}))\to \pi_0( {\mathbf K}(\mathcal{A}[t^{\pm 1}]))\]
is surjective as the categories $\mathcal{A}$ and $\mathcal{A}[t^{\pm 1}]$ have the same objects. It follows that
$\pi_0 ({\mathbf N}_{\pm}{\mathbf K}(\mathcal{A}))=0$ and we need to show that the map induced by the inclusion
\[\pi_0 ({\mathbf K}(\mathcal{A}))\to \pi_0 ({\mathbf K}(\mathcal{A}[t, t^{-1}]))\]
is split mono. Such a splitting is induced by evaluation at $t=1$, i.e., by the additive functor
$\mathcal{A}[t,t^{-1}] \to \mathcal{A}$ which is the identity on objects and sends a morphism
$\sum_{k \in \mathbb{Z}} f_k \cdot t^k$ to $\sum_{k \in \mathbb{Z}} f_k$; it is a retraction of $i_0(\mathcal{A})$.
\end{proof}
Denote by $\Idem\colon \matheurm{Add\text{-}Cat}\to\matheurm{Add\text{-}Cat}$ the idempotent completion functor, and let
\[{\mathbf K}idem:={\mathbf K}\circ\Idem \colon \matheurm{Add\text{-}Cat}\to \matheurm{Spectra}.\]
\begin{example}[Algebraic $K$-theory of a ring $R$]
\label{exa:ring}
Given a ring $R$, then the idempotent completion $\Idem(\mathcal{R})$ of the additive category
$\mathcal{R}$ of finitely generated free $R$-modules is equivalent to the additive category of
finitely generated projective $R$-modules. Moreover, the map $\mathbb{Z} \to \pi_0 {\mathbf K}(\mathcal{R})$
sending $n$ to $[R^n]$ is surjective (even bijective if $R^n \cong R^m$ implies $m =
n$), whereas $\pi_0 {\mathbf K}idem(\mathcal{R})$ is the projective class group of $R$.
\end{example}
For an additive category we define
its algebraic $K$-group
\begin{eqnarray}
K_i(\mathcal{A}) & := & \pi_i({\mathbf K}idem(\mathcal{A})) \quad \text{for}\; i \ge 0.
\label{K_i(cala)_i_ge_0}
\end{eqnarray}
We already showed that by cofinality, the map induced by the inclusion
\[\pi_i {\mathbf K}(\mathcal{A}) \to \pi_i {\mathbf K}idem(\mathcal{A}) = K_i(\mathcal{A})\]
is an isomorphism for $i\geq 1$.
\begin{theorem}[Bass-Heller-Swan Theorem for connective algebraic $K$-theory]
\label{the:Bass-Heller-Swan_Theorem_for_bfKIdem}
The functor ${\mathbf K}idem$ is $0$-contracted in the sense of Definition~\ref{def:c-contracted_functor}.
\end{theorem}
\begin{proof}
In view of the proof of Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfK},
the Bass-Heller-Swan map is bijective on $\pi_i$ for $i\geq 1$. It remains to show split injectivity on $\pi_0$.
We will abbreviate $\mathcal{B}= \mathcal{A}[s,s^{-1}]$. Notice for the sequel
that
\[\mathcal{B}[t^{\pm1}] = \bigl(\mathcal{A}[t^{\pm1}]\bigr)[s,s^{-1}], \quad \mathcal{B}[t,t^{-1}] = \bigl(\mathcal{A}[t,t^{-1}]\bigr)[s,s^{-1}].\]
Put
\begin{eqnarray*}
N\!K_i(\mathcal{A}[t^{\pm1}])
& = &
\pi_i({\mathbf N}_{\pm}{\mathbf K}idem(\mathcal{A})) = \ker\bigl(K_i(\mathcal{A}[t^{\pm1}]) \to K_i(\mathcal{A})\bigr);
\\
N\!K_i(\mathcal{B}[t^{\pm1}])
& = &
\pi_i({\mathbf N}_{\pm}{\mathbf K}idem(\mathcal{B})) = \ker\bigl(K_i(\mathcal{B}[t^{\pm1}]) \to K_i(\mathcal{B})\bigr).
\end{eqnarray*}
Because of Lemma~\ref{lem:c_contracted_and_vee} also the Bass-Heller-Swan map for ${\mathbf N}_{\pm}{\mathbf K}idem$
induces isomorphisms on $\pi_1$.
In particular we get split injections
\begin{eqnarray*}
\alpha \colon K_0(\mathcal{A}) & \to & K_1(\mathcal{B});
\\
\beta_\pm \colon N\!K_0(\mathcal{A}[t^{\pm1}]) & \to & N\!K_1(\mathcal{B}[t^{\pm1}]);
\\
j \colon K_0(\mathcal{A}[t,t^{-1}]) & \to & K_1(\mathcal{B}[t,t^{-1}]).
\end{eqnarray*}
We obtain the following commutative diagram
\[
\xymatrix@!C= 17em{
K_0(\mathcal{A}) \oplus N\!K_0(\mathcal{A}[t]) \oplus N\!K_0(\mathcal{A}[t^{-1}]) \ar[r]^-{\pi_0({\mathbf B}HS_r(\mathcal{A}))}
\ar[d]_{\alpha\oplus\beta_+\oplus \beta_-}
&
K_0(\mathcal{A}[t,t^{-1}]) \ar[d]^{j}
\\
K_1(\mathcal{B}) \oplus N\!K_1(\mathcal{B}[t]) \oplus N\!K_1(\mathcal{B}[t^{-1}])
\ar[r]^-{\pi_1({\mathbf B}HS_r(\mathcal{B}))}_-{\cong}
&
K_1(\mathcal{B}[t,t^{-1}])
}\]
which is compatible with the splitting. So $\pi_0({\mathbf B}HS_r(\mathcal{A}))$ is a split mono,
being a retract of the split mono $\pi_1({\mathbf B}HS_r(\mathcal{B}))$.
\end{proof}
\begin{lemma}\label{E-n-contracted_implies_LE_is_n-contracted}
If ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ is $c$-contracted, then $\Omega{\mathbf L}{\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ is
$(c+1)$-contracted and the map $\pi_i({\mathbf s}) \colon \pi_i({\mathbf E}) \to \pi_i(\Omega({\mathbf L}{\mathbf E}))$ is bijective for $i \ge - c$.
\end{lemma}
\begin{proof}
Obviously it suffices to show that ${\mathbf L}{\mathbf E}$ is $c$-contracted and that the map
${\mathbf s}' \colon \Sigma {\mathbf E} \to {\mathbf L}{\mathbf E}$, which is the adjoint of ${\mathbf s}$,
induces an isomorphism on $\pi_i$ for $i \ge -c+1$.
Since ${\mathbf E}$ is $c$-contracted, ${\mathbf Z}{\mathbf E}$, ${\mathbf Z}_+{\mathbf E}$ and ${\mathbf Z}_-{\mathbf E}$ are $c$-contracted.
We have the obvious cofibration sequence
$ {\mathbf E} \to {\mathbf E} \wedge (S^1)_+ \to {\mathbf E} \wedge S^1$ and the retraction ${\mathbf E} \wedge (S^1)_+ \to {\mathbf E}$.
There is a weak equivalence ${\mathbf N}_{\pm}{\mathbf E} \vee {\mathbf E} \to {\mathbf Z}_{\pm}{\mathbf E}$. We conclude from
Lemma~\ref{lem:c_contracted_and_vee} that ${\mathbf N}_{\pm}{\mathbf E}$, ${\mathbf B}_r{\mathbf E}$ and ${\mathbf B}{\mathbf E}$ are $c$-contracted.
By construction we have the homotopy cofibration sequence ${\mathbf B}_r{\mathbf E} \to {\mathbf Z}{\mathbf E} \to {\mathbf L}{\mathbf E}$.
It induces a long exact sequence of homotopy groups. The existence
of the retractions $\rho_i$ imply that it breaks up into short split exact sequences
of transformations of functors from $\matheurm{Add\text{-}Cat}$ to the category of abelian groups
\[
0 \to \pi_i({\mathbf B}_r{\mathbf E}) \to \pi_i({\mathbf Z}{\mathbf E}) \to \pi_i({\mathbf L}{\mathbf E}) \to 0.
\]
We obtain a commutative diagram with short split exact sequences as rows,
where the retractions from the middle term to the left term are also compatible
with the vertical maps.
\[
\xymatrix{0 \ar[r]
&
\pi_i({\mathbf Z}_{\pm}{\mathbf B}_r{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf e}v_0^{\pm}({\mathbf B}_r{\mathbf E}))}
&
\pi_i({\mathbf Z}_{\pm}{\mathbf Z}{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf e}v_0^{\pm}({\mathbf Z}{\mathbf E}))}
&
\pi_i({\mathbf Z}_{\pm}{\mathbf L}{\mathbf E}) \ar[d]^{\pi_i({\mathbf e}v_0^{\pm}({\mathbf L}{\mathbf E}))} \ar[r]
& 0
\\
0 \ar[r]
&
\pi_i({\mathbf B}_r{\mathbf E}) \ar[r]
&
\pi_i({\mathbf Z}{\mathbf E}) \ar[r]
&
\pi_i({\mathbf L}{\mathbf E}) \ar[r]
& 0
}
\]
Since we have the isomorphism
\[
\pi_i({\mathbf b}_{\pm}) \oplus \pi_i({\mathbf i}_{\pm}) \colon \pi_i({\mathbf N}_{\pm}{\mathbf E}) \oplus \pi_i({\mathbf E})
\xrightarrow{\cong} \pi_i({\mathbf Z}_{\pm}{\mathbf E}),
\]
and $\pi_i({\mathbf e}v^{\pm}_0) \circ \pi_i({\mathbf i}_{\pm}) = \id$, we obtain the short split exact sequence
\[
0 \to \pi_i({\mathbf N}_{\pm}{\mathbf B}_r{\mathbf E}) \to \pi_i({\mathbf N}_{\pm}{\mathbf Z}{\mathbf E}) \to \pi_i({\mathbf N}_{\pm}{\mathbf L}{\mathbf E}) \to 0.
\]
We have the obvious short split exact sequences
\[
0 \to \pi_i\bigl({\mathbf B}_r{\mathbf E} \wedge (S^1)_+\bigr) \to \pi_i\bigl({\mathbf Z}{\mathbf E}\wedge (S^1)_+ \bigr)
\to \pi_i\bigl({\mathbf L}{\mathbf E}\wedge (S^1)_+ \bigr) \to 0.
\]
Taking direct sums shows that we obtain short split exact sequences
\[
0 \to \pi_i({\mathbf B}{\mathbf B}_r{\mathbf E}) \to \pi_i({\mathbf B}{\mathbf Z}{\mathbf E}) \to \pi_i({\mathbf B}{\mathbf L}{\mathbf E}) \to 0,
\]
and
\[
0 \to \pi_i({\mathbf B}_r{\mathbf B}_r{\mathbf E}) \to \pi_i({\mathbf B}_r{\mathbf Z}{\mathbf E}) \to \pi_i({\mathbf B}_r{\mathbf L}{\mathbf E}) \to 0.
\]
Thus we obtain for all $i \in \mathbb{Z}$ a commutative diagram with exact rows
\[\xymatrix{
0 \ar[r]
&
\pi_i({\mathbf B}{\mathbf B}_r{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS({\mathbf B}_r{\mathbf E}))}
&
\pi_i({\mathbf B}{\mathbf Z}{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS({\mathbf Z}{\mathbf E}))}
&
\pi_i({\mathbf B}{\mathbf L}{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS({\mathbf L}{\mathbf E}))}
&
0
\\
0 \ar[r]
&
\pi_i({\mathbf Z}{\mathbf B}_r{\mathbf E}) \ar[r]
&
\pi_i({\mathbf Z}{\mathbf Z}{\mathbf E}) \ar[r]
&
\pi_i({\mathbf Z}{\mathbf L}{\mathbf E}) \ar[r]
&
0
}
\]
Since $\pi_i({\mathbf B}HS({\mathbf B}_r{\mathbf E}))$ and $\pi_i({\mathbf B}HS({\mathbf Z}{\mathbf E}))$
are isomorphisms for $i \ge -c+1$, the same is true for $\pi_i({\mathbf B}HS({\mathbf L}{\mathbf E}))$
by the Five-Lemma.
The following diagram commutes and has exact rows
\begin{eqnarray}
& \xymatrix{
0 \ar[r]
&
\pi_i({\mathbf B}_r{\mathbf B}_r{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS_r({\mathbf B}_r{\mathbf E}))}
&
\pi_i({\mathbf B}_r{\mathbf Z}{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS_r({\mathbf Z}{\mathbf E}))}
&
\pi_i({\mathbf B}_r{\mathbf L}{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS_r({\mathbf L}{\mathbf E}))}
&
0
\\
0 \ar[r]
&
\pi_i({\mathbf Z}{\mathbf B}_r{\mathbf E}) \ar[r]
&
\pi_i({\mathbf Z}{\mathbf Z}{\mathbf E}) \ar[r]
&
\pi_i({\mathbf Z}{\mathbf L}{\mathbf E}) \ar[r]
&
0
}
&
\label{diagram_1}
\end{eqnarray}
The first two vertical arrows are split injective. We claim that the retractions fit into the
following commutative square
\begin{eqnarray}
&
\xymatrix{\pi_i({\mathbf Z}{\mathbf B}_r{\mathbf E}) \ar[d]_{\rho_i({\mathbf B}_r{\mathbf E})} \ar[r]
&
\pi_i({\mathbf Z}{\mathbf Z}{\mathbf E}) \ar[d]^{\rho_i({\mathbf Z}{\mathbf E})}
\\
\pi_i({\mathbf B}_r{\mathbf B}_r{\mathbf E}) \ar[r]
&
\pi_i({\mathbf B}_r{\mathbf Z}{\mathbf E})
}
&
\label{diagram_2}
\end{eqnarray}
This follows from the fact that we have the commutative diagram with isomorphisms as horizontal arrows
\[
\xymatrix@!C=20em{
\pi_i({\mathbf Z}{\mathbf E}) \oplus \pi_i({\mathbf Z}{\mathbf N}_+{\mathbf E}) \oplus \pi_i({\mathbf Z}{\mathbf N}_-{\mathbf E})
\ar[r]^-{\pi_i({\mathbf Z}{\mathbf i}_0) \oplus \pi_i({\mathbf Z}{\mathbf b}_+) \oplus \pi_i({\mathbf Z}{\mathbf b}_-)}_-{\cong}
\ar[d]_{\rho_i({\mathbf E}) \oplus \rho_i({\mathbf N}_+{\mathbf E}) \oplus \rho_i({\mathbf N}_-{\mathbf E})}
&
\pi_i({\mathbf Z} {\mathbf B}_r {\mathbf E}) \ar[d]^{\rho_i({\mathbf B}_r{\mathbf E})}
\\
\pi_i({\mathbf B}_r{\mathbf E}) \oplus \pi_i({\mathbf B}_r{\mathbf N}_+{\mathbf E}) \oplus \pi_i({\mathbf B}_r{\mathbf N}_-{\mathbf E})
\ar[r]^-{\pi_i({\mathbf B}_r{\mathbf i}_0) \oplus \pi_i({\mathbf B}_r{\mathbf b}_+) \oplus \pi_i({\mathbf B}_r{\mathbf b}_-)}_-{\cong}
&
\pi_i({\mathbf B}_r {\mathbf B}_r {\mathbf E})
}
\]
and the following commutative diagrams
\[
\xymatrix@!C=7em{\pi_i({\mathbf Z}{\mathbf N}_{\pm}{\mathbf E}) \ar[d]_{\rho_i({\mathbf N}_{\pm}{\mathbf E})} \ar[r]^{\pi_i({\mathbf Z}{\mathbf b}_{\pm})}
&
\pi_i({\mathbf Z}{\mathbf Z}{\mathbf E}) \ar[d]^{\rho_i({\mathbf Z}{\mathbf E})}
\\
\pi_i({\mathbf B}_r{\mathbf N}_{\pm}{\mathbf E}) \ar[r]_{\pi_i({\mathbf B}_r{\mathbf b}_{\pm})}
&
\pi_i({\mathbf B}_r{\mathbf Z}{\mathbf E})
}
\]
and
\[
\xymatrix@!C=6em{\pi_i({\mathbf Z}{\mathbf E}) \ar[d]_{\rho_i({\mathbf E})} \ar[r]^{\pi_i({\mathbf Z}{\mathbf i}_0)}
&
\pi_i({\mathbf Z}{\mathbf Z}{\mathbf E}) \ar[d]^{\rho_i({\mathbf Z}{\mathbf E})}
\\
\pi_i({\mathbf B}_r{\mathbf E}) \ar[r]_{\pi_i({\mathbf B}_r{\mathbf i}_0)}
&
\pi_i({\mathbf B}_r{\mathbf Z}{\mathbf E})
}
\]
The two diagrams~\eqref{diagram_1} and~\eqref{diagram_2} imply that
$\pi_i({\mathbf B}HS_r({\mathbf L}{\mathbf E})) \colon \pi_i({\mathbf B}_r{\mathbf L}{\mathbf E}) \to \pi_i({\mathbf Z}{\mathbf L}{\mathbf E})$
is split injective for all $i \in \mathbb{Z}$. This finishes the proof
that ${\mathbf L}{\mathbf E}$ is $c$-contracted.
We have the following diagram which has homotopy cofibration sequences
as rows and which commutes up to homotopy.
\[
\xymatrix{{\mathbf B}_r{\mathbf E} \ar[r] \ar[d]^{\id}
&
{\mathbf B}{\mathbf E} \ar[d]^{{\mathbf B}HS} \ar[r]
&
\Sigma {\mathbf E} \ar[d]^{{\mathbf s}'}
\\
{\mathbf B}_r{\mathbf E} \ar[r]^{{\mathbf B}HS_r}
&
{\mathbf Z}{\mathbf E} \ar[r]
&
{\mathbf L}{\mathbf E}
}
\]
The long exact homotopy sequences associated to the rows and the fact that
$\pi_i({\mathbf B}HS_r) \colon \pi_i({\mathbf B}_r{\mathbf E}) \to \pi_i({\mathbf Z}{\mathbf E})$ is split injective for
$i \in \mathbb{Z}$ imply that we obtain for all $i \in \mathbb{Z}$ a commutative diagram with exact rows.
\[
\xymatrix{0 \ar[r]
&
\pi_i({\mathbf B}_r{\mathbf E}) \ar[r] \ar[d]^{\id}
&
\pi_i({\mathbf B}{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf B}HS)}
&
\pi_i(\Sigma{\mathbf E}) \ar[r] \ar[d]^{\pi_i({\mathbf s}')}
&
0
\\
0 \ar[r]
&
\pi_i({\mathbf B}_r{\mathbf E}) \ar[r]
&
\pi_i({\mathbf Z}{\mathbf E}) \ar[r]
&
\pi_i({\mathbf L}{\mathbf E}) \ar[r]
&
0
}
\]
Since $\pi_i({\mathbf B}HS)$ is bijective for $i \ge -c+1$ by assumption, the same is true for $\pi_i({\mathbf s}')$.
This finishes the proof of Lemma~\ref{E-n-contracted_implies_LE_is_n-contracted}.
\end{proof}
\typeout{----------------------- Section 3: The delooping construction ------------------------}
\section{The delooping construction}
\label{sec:The_delooping_construction}
Let ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ be a covariant functor satisfying Condition~\ref{con:condition_bfu}.
Next we define inductively a sequence of spectra
\begin{eqnarray}
& ({\mathbf E}[n])_{n \ge 0} &
\label{sequence_bfE[n]}
\end{eqnarray}
together with maps of spectra
\begin{eqnarray}
{\mathbf s}[n] \colon {\mathbf E}[n] \to {\mathbf E}[n+1] & & \text{for} \; n \ge 0.
\label{bfs[n]}
\end{eqnarray}
We define ${\mathbf E}[0]$ to be ${\mathbf E}$. In the induction step we have to explain how we construct
${\mathbf E}[n + 1]$ and ${\mathbf s}[n]$ provided that we have defined ${\mathbf E}[n]$. Define
${\mathbf E}[n+1] = \Omega {\mathbf L} {\mathbf E}[n]$ and let ${\mathbf s}[n]$ be the map
${\mathbf s} \colon {\mathbf E}[n] \to \Omega {\mathbf L} {\mathbf E}[n]$ associated to ${\mathbf E}[n]$.
\begin{definition}[Delooping ${{\mathbf E}[\infty]}$]
\label{def:E[infty]}
Define the \emph{delooping} ${\mathbf E}[\infty]$ of ${\mathbf E}$ to be the homotopy colimit of the sequence
\[
{\mathbf E} = {\mathbf E}[0] \xrightarrow{{\mathbf s}[0]} {\mathbf E}[1] \xrightarrow{{\mathbf s}[1]}{\mathbf E}[2] \xrightarrow{{\mathbf s}[2]} \cdots.
\]
Define the map of spectra
\[
{\mathbf d} \colon {\mathbf E} \to {\mathbf E}[\infty]
\]
to be the zero-th structure map of the homotopy colimit.
\end{definition}
\begin{theorem}[Main property of the delooping construction]
\label{the:Main_property_of_the_delooping_construction}
Fix an integer $c$. Suppose that ${\mathbf E}$ is $c$-contracted. Then
\begin{enumerate}
\item \label{the:Main_property_of_the_delooping_construction:pi_i(bfd)}
The map $\pi_i({\mathbf d}) \colon \pi_i({\mathbf E}) \to \pi_i({\mathbf E}[\infty])$ is bijective for $i \ge -c$;
\item \label{the:Main_property_of_the_delooping_construction:E[infty]}
${\mathbf E}[\infty]$ is $\infty$-contracted;
\item \label{the:Main_property_of_the_delooping_construction:d} ${\mathbf E}$ is $\infty$-contracted
if and only if ${\mathbf d} \colon {\mathbf E} \to {\mathbf E}[\infty]$ is a weak equivalence.
\end{enumerate}
\end{theorem}
\begin{proof}~\ref{the:Main_property_of_the_delooping_construction:pi_i(bfd)}
This follows from the fact that $\colim_{n \to \infty} \pi_i({\mathbf E}[n]) = \pi_i({\mathbf E}[\infty])$
and the conclusion of
Lemma~\ref{E-n-contracted_implies_LE_is_n-contracted} that
$\pi_i({\mathbf s}[n]) \colon \pi_i({\mathbf E}[n]) \to \pi_i({\mathbf E}[n+1])$ is bijective for $i \ge -c$.
\\[2mm]~\ref{the:Main_property_of_the_delooping_construction:E[infty]} By induction over $n$ we
conclude from Lemma~\ref{E-n-contracted_implies_LE_is_n-contracted} that ${\mathbf E}[n]$ is
$(n+c)$-contracted for $n \ge 0$. Obviously $\hocolim$ and ${\mathbf Z}_{\pm}$ commute as well as
$\hocolim$ and ${\mathbf Z}$. Hence $\hocolim$ and ${\mathbf N}_{\pm}$ commute up to weak equivalence,
since $\hocolim$ is compatible with $\vee$ up to weak equivalence and we have a natural equivalence
${\mathbf E} \vee {\mathbf N}_{\pm}{\mathbf E} \to {\mathbf Z}_{\pm}{\mathbf E}$. This implies that $\hocolim$ and ${\mathbf B}$ commute up to weak
equivalence. Obviously $\hocolim$ commutes with $- \wedge (S^1)_+$.
Hence we obtain for each $i \in \mathbb{Z}$ the following commutative diagram
with isomorphisms as horizontal maps
\[
\xymatrix{\colim_{n \to \infty} \pi_i({\mathbf B}{\mathbf E}[n]) \ar[r]^-{\cong} \ar[d]_{\colim_{n \to \infty} \pi_i({\mathbf B}HS({\mathbf E}[n]))}
& \pi_i({\mathbf B} {\mathbf E}[\infty]) \ar[d]^{\pi_i({\mathbf B}HS({\mathbf E}[\infty]))}
\\
\colim_{n \to \infty} \pi_i({\mathbf Z}{\mathbf E}[n]) \ar[r]^-{\cong}
& \pi_i({\mathbf Z} {\mathbf E}[\infty])
}
\]
Since ${\mathbf E}[n]$ is $(n+c)$-contracted, the left arrow and hence the right arrow are
isomorphisms for all $i \in \mathbb{Z}$.
\\[2mm]~\ref{the:Main_property_of_the_delooping_construction:d} If ${\mathbf d}$ is a weak
equivalence, then ${\mathbf B}HS \colon {\mathbf B}{\mathbf E} \to {\mathbf Z}{\mathbf E}$ is a weak equivalence by
assertion~\ref{the:Main_property_of_the_delooping_construction:E[infty]} and the fact that
the following diagram commutes and has weak equivalences as horizontal arrows
\[
\xymatrix{{\mathbf B}{\mathbf E} \ar[r]^-{{\mathbf B}{\mathbf d}}_-{\simeq} \ar[d]_{{\mathbf B}HS({\mathbf E})}
& {\mathbf B} {\mathbf E}[\infty] \ar[d]^{{\mathbf B}HS({\mathbf E}[\infty])}
\\
{\mathbf Z}{\mathbf E} \ar[r]_-{{\mathbf Z}{\mathbf d}}^-{\simeq} &
{\mathbf Z}{\mathbf E}[\infty]
}
\]
Suppose that ${\mathbf B}HS \colon {\mathbf B}{\mathbf E} \to {\mathbf Z}{\mathbf E}$ is a weak equivalence.
Then ${\mathbf E}$ is $c$-contracted for all $c \in \mathbb{Z}$. Because of
Lemma~\ref{E-n-contracted_implies_LE_is_n-contracted} ${\mathbf E}[n]$ is $c$-contracted for all $c \in \mathbb{Z}$
and $\pi_i({\mathbf s}[n]) \colon \pi_i({\mathbf E}[n]) \to \pi_i({\mathbf E}[n+1])$ is bijective for all $i \in \mathbb{Z}$ and $n \ge 0$.
This implies that $\pi_i({\mathbf d})$ is bijective for all $i \in \mathbb{Z}$.
\end{proof}
\begin{remark}[Retraction needed in all degrees]
\label{rem:retraction_needed_in_all_degrees}
One needs the retractions $\rho_i$ appearing in Definition~\ref{def:c-contracted_functor} in
each degree $i \in \mathbb{Z}$ and not only in degree $-c$. The point is that one has a
$c$-contracted functor ${\mathbf E}$ and wants to prove that ${\mathbf E}[n]$ is
$(c+n)$-contracted. For this purpose one needs the retraction to split up certain long
exact sequences into pieces in dimensions $i \ge -c$ to verify bijectivity on $\pi_i$ for
$i \ge -c$, but also in degree $i = -c-1$, to construct the retraction for ${\mathbf E}[1]$ in
degree $-c-1$. For this purpose one needs the retraction for ${\mathbf E}$ also in degree
$-c-2$. In order to be able to iterate this construction, namely, to pass from
${\mathbf E}[1]$ to ${\mathbf E}[2]$, we have the same problem, the retraction for ${\mathbf E}[1]$ must also
be present in degree $-c-3$. Hence we need a priori the retractions for ${\mathbf E}$ also in degree
$-c-3$. This argument goes on and on and forces us to require the retractions in all
degrees.
One needs retractions rather than injective maps in the definition of $c$-contract\-ed.
Injectivity would suffice
to reduce the long exact sequences obtained after taking homotopy groups to short exact sequences,
and most of the arguments only involve the Five-Lemma, where short exactness rather than split short
exactness is needed. However, at one point we want to argue for a commutative diagram with
exact rows of abelian groups
\[
\xymatrix{0 \ar[r] & A_0 \ar[r] \ar[d] & A_1 \ar[r] \ar[d] & A_2 \ar[r] \ar[d]& 0
\\
0 \ar[r] & B_0 \ar[r] & B_1 \ar[r] & B_2 \ar[r] & 0
}
\]
that the third vertical arrow admits a retraction if the first and the second vertical arrow admit retractions compatible
with the first two horizontal arrows. This is true. But the corresponding statement is wrong
if we replace ``admitting a retraction'' by ``injective''.
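A minimal example illustrating the failure in the ``injective'' case (included only as an illustration) is the commutative diagram of abelian groups with exact rows
\[
\xymatrix{0 \ar[r] & \mathbb{Z} \ar[r]^-{2} \ar[d]_{2} & \mathbb{Z} \ar[r] \ar[d]_{\id} & \mathbb{Z}/2 \ar[r] \ar[d]& 0
\\
0 \ar[r] & \mathbb{Z} \ar[r]_-{\id} & \mathbb{Z} \ar[r] & 0 \ar[r] & 0
}
\]
in which the first two vertical arrows, multiplication by $2$ and the identity, are injective, whereas the third vertical arrow $\mathbb{Z}/2 \to 0$ is not.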
\end{remark}
\begin{lemma} \label{lem:alpha_bijective} Suppose that the covariant functors ${\mathbf E}, {\mathbf F}
\colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ satisfy Condition~\ref{con:condition_bfu} and are
$\infty$-contracted. Let ${\mathbf f} \colon {\mathbf E} \to {\mathbf F}$ be a compatible transformation. Suppose that
there exists an integer $N$ such that $\pi_i({\mathbf f}(\mathcal{A}))$ is bijective for all $i \ge
N$ and all objects $\mathcal{A}$ in $\matheurm{Add\text{-}Cat}$.
Then ${\mathbf f}\colon {\mathbf E} \to {\mathbf F}$ is a weak equivalence.
\end{lemma}
\begin{proof}
We show by downward induction over $i$ that $\pi_i({\mathbf f}(\mathcal{A}))$ is bijective for $i = N, N-1, N-2, \ldots$
and all objects $\mathcal{A}$ in $\matheurm{Add\text{-}Cat}$.
The induction beginning $i = N$ is trivial; the induction step from $i$ to $i-1$ is done as follows.
We have the following commutative diagram whose horizontal arrows come from
the Bass-Heller-Swan maps and hence are bijective by assumption
\[
\xymatrix@!C= 14em{\pi_{i-1}({\mathbf E}(\mathcal{A})) \oplus \pi_{i}({\mathbf E}(\mathcal{A})) \oplus \pi_{i}({\mathbf N}_+{\mathbf E}(\mathcal{A})) \oplus \pi_{i}({\mathbf N}_-{\mathbf E}(\mathcal{A}))
\ar[r]^-{\cong} \ar[d]_{\pi_{i-1}({\mathbf f}(\mathcal{A})) \oplus \pi_i({\mathbf f}(\mathcal{A})) \oplus \pi_i({\mathbf N}_+{\mathbf f}(\mathcal{A})) \oplus \pi_i({\mathbf N}_-{\mathbf f}(\mathcal{A}))}
&\pi_i({\mathbf Z}{\mathbf E}(\mathcal{A})) \ar[d]^{\pi_i({\mathbf Z}{\mathbf f}(\mathcal{A}))}
\\
\pi_{i-1}({\mathbf F}(\mathcal{A})) \oplus \pi_{i}({\mathbf F}(\mathcal{A})) \oplus \pi_{i}({\mathbf N}_+{\mathbf F}(\mathcal{A})) \oplus \pi_{i}({\mathbf N}_-{\mathbf F}(\mathcal{A}))
\ar[r]_-{\cong}
&\pi_i({\mathbf Z}{\mathbf F}(\mathcal{A}))
}
\]
Since ${\mathbf Z}{\mathbf f}(\mathcal{A}) = {\mathbf f}(\mathcal{A}[t,t^{-1}])$, the induction hypothesis applied to the object
$\mathcal{A}[t,t^{-1}]$ shows that $\pi_i({\mathbf Z}{\mathbf f}(\mathcal{A}))$ is bijective. Hence $\pi_{i-1}({\mathbf f}(\mathcal{A}))$ is bijective, since under the
horizontal isomorphisms it is a direct summand of $\pi_i({\mathbf Z}{\mathbf f}(\mathcal{A}))$.
\end{proof}
Theorem~\ref{the:Main_property_of_the_delooping_construction} and Lemma~\ref{lem:alpha_bijective} imply
\begin{corollary}
\label{cor:turning_a_transformation_into_a_weak_equivalence}
Suppose that the functors ${\mathbf E}, {\mathbf F} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ satisfy
Condition~\ref{con:condition_bfu}. Suppose that ${\mathbf E}$ and ${\mathbf F}$ are $c$-contracted
for some integer $c$. Let ${\mathbf f} \colon {\mathbf E} \to {\mathbf F}$ be a compatible transformation.
Suppose that there exists an integer $N$ such that
$\pi_i({\mathbf f}) \colon \pi_i({\mathbf E}) \to \pi_i({\mathbf F})$ is bijective for $i \ge N$.
Then the following diagram commutes
\[
\xymatrix@!C= 5em{{\mathbf E} \ar[d]_{{\mathbf f}} \ar[r]^{{\mathbf d}({\mathbf E})}
&
{\mathbf E}[\infty]
\ar[d]^{{\mathbf f}[\infty]}_{\simeq}
\\
{\mathbf F} \ar[r]_{{\mathbf d}({\mathbf F})}
& {\mathbf F}[\infty]
}
\]
and the right vertical arrow is a weak equivalence.
\end{corollary}
\begin{remark}[Universal property of the delooping construction in the homotopy category]
\label{rem:Universal_property_of_the_delooping_construction_in_the_homotopy_category}
Suppose that the covariant functors ${\mathbf E}, {\mathbf F} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ satisfy
Condition~\ref{con:condition_bfu}. Suppose that ${\mathbf E}$ is $c$-contracted for some
integer $c$, and let ${\mathbf f} \colon {\mathbf E} \to {\mathbf F}$ be a compatible transformation to an
$\infty$-contracted functor.
Then, in the homotopy category (of functors $\matheurm{Add\text{-}Cat}\to\matheurm{Spectra}$), the transformation ${\mathbf f}$
factors uniquely through ${\mathbf d}({\mathbf E})\colon {\mathbf E}\to {\mathbf E}[\infty]$:
\[{\mathbf f} = {\mathbf d}({\mathbf F})^{-1} \circ {\mathbf f}[\infty] \circ {\mathbf d}({\mathbf E}).\]
Here ${\mathbf d}({\mathbf F})$ is invertible in the homotopy category by
Theorem~\ref{the:Main_property_of_the_delooping_construction}~\ref{the:Main_property_of_the_delooping_construction:d},
since ${\mathbf F}$ is $\infty$-contracted.
\end{remark}
\typeout{----------------------- Section 4: Delooping algebraic K-theory of additive categories------------------------}
\section{Delooping algebraic K-theory of additive categories}
\label{sec:Delooping_algebraic_K-theory_of_additive_categories}
Now we treat our main example for ${\mathbf E}$, the functor
${\mathbf K} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ that assigns to an additive category $\mathcal{A}$
the connective $K$-theory spectrum ${\mathbf K}(\mathcal{A})$ of $\mathcal{A}$.
\begin{definition}[Non-connective algebraic $K$-theory spectrum {${\mathbf K}^{\infty}$}]
\label{def:non-connective_algebraic_K-theory_spectrum_bfK[infty]}
We call the functor
\[
{\mathbf K}^{\infty} := {\mathbf K}[\infty] \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}
\]
associated to ${\mathbf K} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ in Definition~\ref{def:E[infty]} the
\emph{non-connective algebraic $K$-theory functor}.
If $\mathcal{A}$ is an additive category, then $K_i(\mathcal{A}) := \pi_i({\mathbf K}^{\infty}(\mathcal{A}))$
is the $i$-th algebraic $K$-group of $\mathcal{A}$ for $i \in \mathbb{Z}$.
\end{definition}
Notice that by Lemma~\ref{lem:alpha_bijective} we could just as well have defined ${\mathbf K}^{\infty}$
to be ${\mathbf K}idem[\infty]$. In particular, by
Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfKIdem} and
Theorem~\ref{the:Main_property_of_the_delooping_construction}~\ref{the:Main_property_of_the_delooping_construction:pi_i(bfd)},
Definition~\ref{def:non-connective_algebraic_K-theory_spectrum_bfK[infty]} extends the
previous definition $K_i(\mathcal{A}) := \pi_i\bigl({\mathbf K}idem(\mathcal{A})\bigr)$ for $i \ge 0$
of~\eqref{K_i(cala)_i_ge_0}.
We conclude from Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfK}
and Theorem~\ref{the:Main_property_of_the_delooping_construction}
~\ref{the:Main_property_of_the_delooping_construction:E[infty]}
\begin{theorem}[Bass-Heller-Swan-Theorem for ${\mathbf K}^{\infty}$]
\label{the:Bass-Heller-Swan-Theorem_for_non-connective_algebraic_K-theory}
The Bass-Heller-Swan transformation
\[
{\mathbf B}HS \colon {\mathbf K}^{\infty}
\wedge (S^1)_+ \vee {\mathbf N}_+{\mathbf K}^{\infty} \vee {\mathbf N}_-{\mathbf K}^{\infty} \xrightarrow{\simeq} {\mathbf Z}{\mathbf K}^{\infty}
\]
is a weak equivalence.
In particular we get for every $i \in \mathbb{Z}$ and every additive category $\mathcal{A}$ an in
$\mathcal{A}$-natural isomorphism
\[
K_{i-1}(\mathcal{A}) \oplus K_i(\mathcal{A}) \oplus N\!K_i(\mathcal{A}[t]) \oplus N\!K_i(\mathcal{A}[t^{-1}]) \xrightarrow{\cong} K_i(\mathcal{A}[t,t^{-1}]),
\]
where $N\!K_i(\mathcal{A}[t^{\pm}])$ is defined as the kernel of $K_i(\ev_0^{\pm}) \colon K_i(\mathcal{A}[t^{\pm}]) \to K_i(\mathcal{A})$.
\end{theorem}
We will extend Theorem~\ref{the:Bass-Heller-Swan-Theorem_for_non-connective_algebraic_K-theory}
later to the twisted case.
\begin{remark}[Fundamental sequence]
\label{rem:fundamental_sequence}
Theorem~\ref{the:Bass-Heller-Swan-Theorem_for_non-connective_algebraic_K-theory}
is equivalent to the statement that there exists for each $i \in \mathbb{Z}$ a fundamental sequence of algebraic $K$-groups
\begin{multline*}
0 \to K_i(\mathcal{A}) \xrightarrow{(K_i(i_+), -K_i(i_-))} K_i(\mathcal{A}[t]) \oplus K_i(\mathcal{A}[t^{-1}])
\\
\xrightarrow{K_i(j_+) \oplus K_i(j_-)}
K_i(\mathcal{A}[t,t^{-1}]) \xrightarrow{\partial_i} K_{i-1}(\mathcal{A}) \to 0
\end{multline*}
which comes with a splitting $s_{i-1} \colon K_{i-1}(\mathcal{A}) \to K_i(\mathcal{A}[t,t^{-1}])$ of $\partial_i$, natural in $\mathcal{A}$.
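One way to pass between the two formulations (recorded only for orientation): under the isomorphism of Theorem~\ref{the:Bass-Heller-Swan-Theorem_for_non-connective_algebraic_K-theory} one may take
\[
\partial_i = \operatorname{pr}_{K_{i-1}(\mathcal{A})} \colon K_i(\mathcal{A}[t,t^{-1}]) \to K_{i-1}(\mathcal{A})
\quad\text{and}\quad
s_{i-1} = \operatorname{incl}_{K_{i-1}(\mathcal{A})},
\]
the projection onto and the inclusion of the summand $K_{i-1}(\mathcal{A})$; exactness is then checked summandwise, using the splittings $K_i(\mathcal{A}[t^{\pm 1}]) \cong K_i(\mathcal{A}) \oplus N\!K_i(\mathcal{A}[t^{\pm 1}])$ coming from $\ev_0^{\pm}$ and $i_{\pm}$.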
\end{remark}
\begin{remark}[Identification with the original negative $K$-groups]
\label{rem:Identification_with_the_original_negative_K-groups}
Bass~\cite[page~466 and page~462]{Bass(1968)} (see also~\cite[Chapter~3, Section~3]{Rosenberg(1994)})
defines negative $K$-groups $K_i(\mathcal{A})$ for $i = -1,-2, \ldots$ inductively
by putting
\[
K_{i-1}(\mathcal{A}) := \cok\left(K_i(j_+) \oplus K_i(j_-) \colon K_i(\mathcal{A}[t]) \oplus K_i(\mathcal{A}[t^{-1}]) \to K_i(\mathcal{A}[t,t^{-1}])\right).
\]
We conclude from Remark~\ref{rem:fundamental_sequence}
that the negative $K$-groups of
Definition~\ref{def:non-connective_algebraic_K-theory_spectrum_bfK[infty]}
are naturally isomorphic to the negative $K$-groups defined by Bass.
\end{remark}
\begin{remark}[Identification with the Pedersen-Weibel construction]
\label{rem:identification_with_Pedersen-Weibel}
Pedersen-Weibel~\cite{Pedersen-Weibel(1985)} construct another functor ${\mathbf K}_{\PW} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$
which models negative algebraic $K$-theory. We conclude from
Corollary~\ref{cor:turning_a_transformation_into_a_weak_equivalence}
that there exist weak equivalences
\[
{\mathbf K}^{\infty} \xrightarrow{\simeq} {\mathbf K}_{\PW}[\infty] \xleftarrow{\simeq} {\mathbf K}_{\PW}
\]
since there is a natural map ${\mathbf K} \to {\mathbf K}_{\PW}$ inducing bijections on $\pi_i$ for $i \ge 1$, and
the Bass-Heller-Swan map for ${\mathbf K}_{\PW}$ is a weak equivalence, as
$\pi_i({\mathbf K}_{\PW})$ agrees in a natural way
with the $i$-th homotopy group of the connective $K$-theory spectrum for $i \ge 1$ and
with the negative $K$-groups of Bass for $i \le 0$, see~\cite[Theorem~A]{Pedersen-Weibel(1985)}.
\end{remark}
\typeout{--------------- Section 5: Compatibility of the delooping construction with homotopy colimits ----------------------}
\section{Compatibility with colimits}
\label{sec:Compatibility_of_the_delooping_construction_with_homotopy_colimits}
Let $\mathcal{J}$ be a small category, not necessarily filtered or finite. Recall the notation
\[
\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})
\]
from Definition~\ref{def:funcc}. Consider a $\mathcal{J}$-diagram in
$\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})$, i.e., a covariant functor ${\mathbf E} \colon \mathcal{J} \to
\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})$. There is the functor homotopy colimit
\[
\hocolim_{\mathcal{J}} \colon \func(\mathcal{J},\matheurm{Spectra}) \to \matheurm{Spectra}
\]
which sends a $\mathcal{J}$-diagram of spectra, i.e., a covariant functor $\mathcal{J} \to \matheurm{Spectra}$,
to its homotopy colimit. As a consequence of Lemma~\ref{lem:inheritance_of_condition_under_general_constructions},
it induces a functor, denoted by the same symbol
\[
\hocolim_{\mathcal{J}} \colon \func\bigl(\mathcal{J},\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})\bigr) \to
\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra}),
\]
that sends a $\mathcal{J}$-diagram $({\mathbf E}(j))_{j \in \mathcal{J}}$ to the functor $\matheurm{Add\text{-}Cat} \to
\matheurm{Spectra}$ which assigns to an additive category $\mathcal{A}$ the spectrum $\hocolim_{\mathcal{J}}
{\mathbf E}(j)(\mathcal{A})$.
\begin{theorem}[Compatibility of the delooping construction with homotopy colimits]
\label{the:Compatibility_of_the_delooping_construction_with_homotopy_colimits}
Given a $\mathcal{J}$-diagram ${\mathbf E}$ in $\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})$, there is a morphism in
$\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})$, natural in ${\mathbf E}$,
\[
\gamma({\mathbf E}) \colon \hocolim_{\mathcal{J}} \bigl({\mathbf E}(j)[\infty]\bigr) \xrightarrow{\simeq}
\bigl(\hocolim_{\mathcal{J}} {\mathbf E}(j)\bigr)[\infty]
\]
that is a weak equivalence, i.e., its evaluation at any object in $\matheurm{Add\text{-}Cat}$ is a weak
homotopy equivalence of spectra.
\end{theorem}
The proof uses some well-known properties of homotopy colimits of spectra,
which we record here for the reader's convenience.
\begin{lemma}\label{lem:diagrams_of_spectra}
Let ${\mathbf E}$ and ${\mathbf F}$ be $\mathcal{J}$-diagrams of spectra and let ${\mathbf f} \colon {\mathbf E} \to {\mathbf F}$
be a morphism between them.
\begin{enumerate}
\item \label{lem:diagrams_of_spectra:vee}
The canonical map
\[\bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr) \vee \bigl(\hocolim_{\mathcal{J}} {\mathbf F} \bigr)
\xrightarrow{\simeq} \hocolim_{\mathcal{J}} \bigl({\mathbf E} \vee {\mathbf F} \bigr)
\]
is an isomorphism;
\item \label{lem:diagrams_of_spectra:smash}
If $Y$ is a pointed space, then we obtain an isomorphism, natural in ${\mathbf E}$,
\[
\hocolim_{\mathcal{J}} \bigl({\mathbf E} \wedge Y\bigr) \xrightarrow{\simeq} \bigl(\hocolim_{\mathcal{J}} {\mathbf E} \bigr) \wedge Y;
\]
\item \label{lem:diagrams_of_spectra:Omega}
There is a weak homotopy equivalence, natural in ${\mathbf E}$,
\[
\hocolim_{\mathcal{J}} \bigl(\Omega{\mathbf E}\bigr) \xrightarrow{\simeq} \Omega \bigl(\hocolim_{\mathcal{J}} {\mathbf E} \bigr);
\]
\item \label{lem:diagrams_of_spectra:commute}
If $\mathcal{K}$ is another small category and we have a ${\mathbf J} \times {\mathbf K}$ diagram ${\mathbf E}$ of spectra.
Then we have isomorphisms of spectra, natural in ${\mathbf E}$,
\[
\hocolim_{\mathcal{J}} \bigl(\hocolim_{\mathcal{K}} {\mathbf E} \bigr) \xrightarrow{\cong}
\hocolim_{\mathcal{J} \times \mathcal{K}} {\mathbf E} \xleftarrow{\cong}
\hocolim_{\mathcal{K}} \bigl(\hocolim_{\mathcal{J}} {\mathbf E} \bigr);
\]
\item \label{lem:diagrams_of_spectra:homotopy_fibers}
Let $\hofib({\mathbf f})$ and $\hocofib({\mathbf f})$ respectively be the $\mathcal{J}$-diagram
of spectra which assigns to an object $j$ in $\mathcal{J}$
the homotopy fiber and homotopy cofiber respectively
of ${\mathbf f}(j) \colon {\mathbf E}(j) \to {\mathbf F}(j)$.
Then there are weak homotopy equivalences, natural in ${\mathbf f}$,
\begin{eqnarray*}
\hocolim_{\mathcal{J}} \hocofib({\mathbf f}) & \xrightarrow{\simeq} & \hocofib\bigl(\hocolim_{\mathcal{J}} {\mathbf f} \bigr);
\\
\hocolim_{\mathcal{J}} \hofib({\mathbf f}) & \xrightarrow{\simeq} & \hofib\bigl(\hocolim_{\mathcal{J}} {\mathbf f} \bigr);
\end{eqnarray*}
\item \label{lem:diagrams_of_spectra:pi_and_filtered}
Suppose that $\mathcal{J}$ is filtered, i.e., for any two objects $j_0$ and $j_1$ there exists a morphism $u \colon j \to j'$ in $\mathcal{J}$
such that there exist morphisms from both $j_0$ and $j_1$ to $j$ and for any two morphisms
$u_0 \colon j_0 \to j$ and $u_1 \colon j_1 \to j$ we have $u \circ u_0 = u \circ u_1$. Then the canonical map
\[
\colim_{\mathcal{J}} \pi_i({\mathbf E}(j)) \xrightarrow{\cong} \pi_i\bigl(\hocolim_{\mathcal{J}} {\mathbf E}(j)\bigr)
\]
is bijective for all $i \in \mathbb{Z}$.
\end{enumerate}
\end{lemma}
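For instance, the poset $(\mathbb{N},\le)$ is filtered, so that
assertion~\ref{lem:diagrams_of_spectra:pi_and_filtered} specializes to the identification
\[
\colim_{n \to \infty} \pi_i({\mathbf E}[n]) \xrightarrow{\cong} \pi_i({\mathbf E}[\infty])
\]
already used in the proof of Theorem~\ref{the:Main_property_of_the_delooping_construction}; we record this special case only for the reader's convenience.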
\begin{proof}[Proof of Theorem~\ref{the:Compatibility_of_the_delooping_construction_with_homotopy_colimits}]
Let ${\mathbf E}$ be a $\mathcal{J}$-diagram in $\funcc(\matheurm{Add\text{-}Cat},\matheurm{Spectra})$.
We have by definition the equalities
\begin{eqnarray*}
{\mathbf Z} \bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr) & = & \hocolim_{\mathcal{J}} ({\mathbf Z}{\mathbf E});
\\
{\mathbf Z}_{\pm} \bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr) & = & \hocolim_{\mathcal{J}} ({\mathbf Z}_{\pm}{\mathbf E}).
\end{eqnarray*}
We obtain from Lemma~\ref{lem:diagrams_of_spectra}~\ref{lem:diagrams_of_spectra:smash}
and~\ref{lem:diagrams_of_spectra:homotopy_fibers}
natural weak homotopy equivalences
\begin{eqnarray*}
\hocolim_{\mathcal{J}} ({\mathbf N}_{\pm}{\mathbf E})
& \xrightarrow{\simeq} &
{\mathbf N}_{\pm} \bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr);
\\
\hocolim_{\mathcal{J}} \bigl({\mathbf E} \wedge (S^1)_+\bigr)
& \xrightarrow{\simeq} &
\bigl(\hocolim_{\mathcal{J}} {\mathbf E} \bigr) \wedge (S^1)_+,
\end{eqnarray*}
and thus by Lemma~\ref{lem:diagrams_of_spectra}~\ref{lem:diagrams_of_spectra:vee}
a natural weak homotopy equivalence
\begin{eqnarray*}
\hocolim_{\mathcal{J}} ({\mathbf B}_r{\mathbf E})
& \xrightarrow{\simeq} &
{\mathbf B}_r \bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr).
\end{eqnarray*}
As ${\mathbf E}$ takes values in functors satisfying Condition~\ref{con:condition_bfu}, the maps
${\mathbf E}(j)\wedge (S^1)_+ \to {\mathbf Z}{\mathbf E}(j)$ provided by that condition are natural in $j\in \mathcal{J}$. In
this way ${\mathbf L}{\mathbf E}(j)$ also becomes a functor in $j$, and further applications of
Lemma~\ref{lem:diagrams_of_spectra} show that the induced map
\[\hocolim_\mathcal{J} {\mathbf L}{\mathbf E} \xrightarrow{\simeq} {\mathbf L}\bigl(\hocolim_\mathcal{J} {\mathbf E}\bigr)\]
is a weak equivalence. We obtain a commutative diagram
\[
\xymatrix{
\hocolim_{\mathcal{J}} {\mathbf E} \ar[d]^{\hocolim_{\mathcal{J}}{\mathbf s}} \ar[r]^{\id}
&
\bigl(\hocolim_{\mathcal{J}} {\mathbf E} \bigr) \ar[d]^{{\mathbf s}}
\\
\hocolim_{\mathcal{J}} \Omega {\mathbf L}{\mathbf E} \ar[r]^{\simeq}
&
\Omega {\mathbf L}\bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr)
}
\]
with weak homotopy equivalences as horizontal arrows.
Iterating this construction leads to a commutative diagram with weak homotopy equivalences
as vertical arrows.
\[
\xymatrix@!C= 30mm{
\hocolim_{\mathcal{J}} {\mathbf E} \ar[r]^-{\hocolim_{\mathcal{J}} {\mathbf s}[0]} \ar[d]^{\id}
& \hocolim_{\mathcal{J}} {\mathbf E}[1] \ar[r]^-{\hocolim_{\mathcal{J}} {\mathbf s}[1]} \ar[d]^{\simeq}
& \hocolim_{\mathcal{J}} {\mathbf E}[2] \ar[r]^-{\hocolim_{\mathcal{J}} {\mathbf s}[2]} \ar[d]^{\simeq }
&
\cdots
\\
\hocolim_{\mathcal{J}} {\mathbf E} \ar[r]^-{{\mathbf s}[0]}
& \bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr) [1] \ar[r]^-{{\mathbf s}[1]}
& \bigl(\hocolim_{\mathcal{J}} {\mathbf E}\bigr) [2] \ar[r]^-{{\mathbf s}[2]}
& \cdots
}
\]
After an application of Lemma~\ref{lem:diagrams_of_spectra}~\ref{lem:diagrams_of_spectra:commute},
the map induced on the homotopy colimits of the rows is the desired map $\gamma({\mathbf E})$; it is a weak equivalence since all vertical arrows are.
\end{proof}
\typeout{---------- Section 6: The Bass-Heller-Swan decomposition for additive categories with automorphisms -----------------}
\section{The Bass-Heller-Swan decomposition for additive categories \\with automorphisms}
\label{sec:The_Bass-Heller-Swan_decomposition_for_additive_categories_with_automorphisms}
In~\cite[Theorem~0.4]{Lueck-Steimle(2013twisted_BHS)} the following
twisted Bass-Heller-Swan decomposition is proved for the connective $K$-theory spectrum.
\begin{theorem}[The twisted Bass-Heller-Swan decomposition for connective $K$-theory of
additive categories]
\label{the:BHS_decomposition_for_connective_K-theory}
Let $\mathcal{A}$ be an additive category which is idempotent complete. Let $\Phi \colon \mathcal{A} \to \mathcal{A}$ be an
automorphism of additive categories.
\begin{enumerate}
\item \label{the:BHS_decomposition_for_connective_K-theory:BHS-iso}
Then there is a weak equivalence of spectra, natural in $(\mathcal{A},\Phi)$,
\[
{\mathbf a} \vee {\mathbf b}_+\vee {\mathbf b}_- \colon {\mathbf T}_{{\mathbf K}(\Phi^{-1})} \vee
{\mathbf N}K(\mathcal{A}_{\Phi}[t]) \vee {\mathbf N}K(\mathcal{A}_{\Phi}[t^{-1}]) \xrightarrow{\simeq}
{\mathbf K}(\mathcal{A}_{\Phi}[t,t^{-1}]);
\]
\item \label{the:BHS_decomposition_for_connective_K-theory:Nil}
There exist weak homotopy equivalences of spectra, natural in $(\mathcal{A},\Phi)$,
\begin{eqnarray*}
\Omega {\mathbf N}K(\mathcal{A}_{\Phi}[t]) & \xleftarrow{\simeq} & {\mathbf E}(\mathcal{A},\Phi);
\\
{\mathbf K}(\mathcal{A}) \vee {\mathbf E}(\mathcal{A},\Phi) & \xrightarrow{\simeq} &{\mathbf K}(\Nil(\mathcal{A},\Phi)).
\end{eqnarray*}
\end{enumerate}
\end{theorem}
Here ${\mathbf T}_{{\mathbf K}(\Phi^{-1})}$ is the mapping torus of the map ${\mathbf K}(\Phi^{-1})\colon
{\mathbf K}(\mathcal{A})\to {\mathbf K}(\mathcal{A})$, that is, the pushout of
\[{\mathbf K}(\mathcal{A})\wedge [0,1]_+ \xleftarrow{\id\wedge \operatorname{incl}} {\mathbf K}(\mathcal{A})\wedge \{0, 1\}_+
\xrightarrow{{\mathbf K}(\Phi^{-1}) \vee \id} {\mathbf K}(\mathcal{A}).\] The spectrum
${\mathbf N}{\mathbf K}(\mathcal{A}_\Phi[t^{\pm1}])$ is by definition the homotopy fiber of the map
\[{\mathbf K}(\mathcal{A}_\Phi[t^{\pm1}])\to {\mathbf K}(\mathcal{A})\]
induced by evaluation at 0. The category $\Nil(\mathcal{A},\Phi)$ is the exact category of
$\Phi$-nilpotent endomorphisms of $\mathcal{A}$ whose objects are morphisms
$f\colon \Phi(A)\to A$, with $A\in\ob(\mathcal{A})$ which are nilpotent in a suitable sense. For more details of the
construction of the spectra and maps appearing in the result above, we refer
to~\cite[Theorem~0.1]{Lueck-Steimle(2013twisted_BHS)}. In that paper it is also claimed that,
by the delooping construction of the present paper, Theorem~\ref{the:BHS_decomposition_for_connective_K-theory}
formally implies the following non-connective version, where
the maps ${\mathbf a}^{\infty}$, ${\mathbf b}^{\infty}_+$, and ${\mathbf b}^{\infty}_-$ are defined completely analogously to
the maps ${\mathbf a}$, ${\mathbf b}_+$, ${\mathbf b}_-$, but now for ${\mathbf K}^{\infty}$ instead of ${\mathbf K}$.
\begin{theorem}[The twisted Bass-Heller-Swan decomposition for non-connective $K$-theory of
additive categories]
\label{the:BHS_decomposition_for_non-connective_K-theory}
Let $\mathcal{A}$ be an additive category. Let $\Phi \colon \mathcal{A} \to \mathcal{A}$ be an
automorphism of additive categories.
\begin{enumerate}
\item \label{the:BHS_decomposition_for_non-connective_K-theory:BHS-iso}
There exists a weak homotopy equivalence of
spectra, natural in $(\mathcal{A},\Phi)$,
\[
{\mathbf a}^{\infty} \vee {\mathbf b}^{\infty}_+\vee {\mathbf b}^{\infty}_- \colon
{\mathbf T}_{{\mathbf K}^{\infty}(\Phi^{-1})} \vee {\mathbf N}{\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t]) \vee {\mathbf N}{\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t^{-1}])
\xrightarrow{\simeq} {\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t,t^{-1}]);
\]
\item \label{the:BHS_decomposition_for_non-connective_K-theory:Nil}
There exist weak homotopy equivalences of spectra, natural in $(\mathcal{A},\Phi)$,
\begin{eqnarray*}
\Omega{\mathbf N}{\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t]) & \xleftarrow{\simeq} & {\mathbf E}^{\infty}(\mathcal{A},\Phi);
\\
{\mathbf K}^{\infty}(\mathcal{A}) \vee {\mathbf E}^{\infty} (\mathcal{A},\Phi) & \xrightarrow{\simeq} & {\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi),
\end{eqnarray*}
where ${\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi)$ is a specific spectrum whose connective cover is the spectrum ${\mathbf K}\bigl(\Nil(\mathcal{A},\Phi)\bigr)$.
\end{enumerate}
\end{theorem}
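For orientation we record two standard observations about the mapping torus term; they are not needed in what follows. For $\Phi = \id_{\mathcal{A}}$ the mapping torus of the identity is ${\mathbf K}^{\infty}(\mathcal{A}) \wedge (S^1)_+$, and the decomposition specializes, up to the obvious identifications, to the untwisted statement of Theorem~\ref{the:Bass-Heller-Swan-Theorem_for_non-connective_algebraic_K-theory}. Moreover, the pushout description of the mapping torus given above yields a cofiber sequence
${\mathbf K}^{\infty}(\mathcal{A}) \xrightarrow{\id - {\mathbf K}^{\infty}(\Phi^{-1})} {\mathbf K}^{\infty}(\mathcal{A}) \to {\mathbf T}_{{\mathbf K}^{\infty}(\Phi^{-1})}$
and hence a Wang-type long exact sequence
\[
\cdots \to K_i(\mathcal{A}) \xrightarrow{\id - K_i(\Phi^{-1})} K_i(\mathcal{A}) \to \pi_i\bigl({\mathbf T}_{{\mathbf K}^{\infty}(\Phi^{-1})}\bigr) \to K_{i-1}(\mathcal{A}) \to \cdots.
\]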
To deduce Theorem~\ref{the:BHS_decomposition_for_non-connective_K-theory} from
Theorem~\ref{the:BHS_decomposition_for_connective_K-theory}, we will think of all functors
appearing as functors in the pair $(\mathcal{A}, \Phi)$ and apply a variation of the delooping
construction to these functors. In particular, the spectrum ${\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi)$ is
obtained by delooping the functor
\[
(\mathcal{A},\Phi)\mapsto {\mathbf K}(\Nil(\mathcal{A},\Phi)).
\]
The next remark formalizes how to deloop functors in the variable $(\mathcal{A},\Phi)$.
\begin{remark}[Extension of the delooping construction to diagrams]
\label{rem:Extension_of_the_delooping_construction_to_diagrams}
Let $\mathcal{C}$ be a fixed small category which will become an index category.
Let $\matheurm{Add\text{-}Cat}^{\mathcal{C}}$ be the category of $\mathcal{C}$-diagrams in $\matheurm{Add\text{-}Cat}$, i.e., objects are
covariant functors $\mathcal{C} \to \matheurm{Add\text{-}Cat}$ and morphisms are natural transformations of these.
Our delooping construction can be extended from $\matheurm{Add\text{-}Cat}$ to $\matheurm{Add\text{-}Cat}^{\mathcal{C}}$ as follows,
provided that the obvious version of Condition~\ref{con:condition_bfu}
which was originally stated for ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$, holds now
for ${\mathbf E} \colon \matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Spectra}$.
The functors $z[t]$, $z[t^{-1}]$, $z[t,t^{-1}]$ from $\matheurm{Add\text{-}Cat}$ to $\matheurm{Add\text{-}Cat}$ extend to
functors $z[t]^{\mathcal{C}}$, $z[t^{-1}]^{\mathcal{C}}$, $z[t,t^{-1}]^{\mathcal{C}} \colon \matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Add\text{-}Cat}^{\mathcal{C}}$ by composition.
Analogously the natural transformations $i_0$, $i_{\pm}$, $j_{\pm}$ and $\ev_0^{\pm}$,
originally defined for $\matheurm{Add\text{-}Cat}$, do extend to natural transformations of functors
$\matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Add\text{-}Cat}^{\mathcal{C}}$. Now the definitions of Section~\ref{sec:The_Bass-Heller-Swan_map} still make sense if we start with a functor
${\mathbf E} \colon \matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Spectra}$ and end up with functors
$\matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Spectra}$, where it is to be understood that everything is compatible with
Condition~\ref{con:condition_bfu}. Moreover, the notion of a $c$-contracted functor, the
construction of ${\mathbf E}[\infty]$, Lemma~\ref{lem:c_contracted_and_vee},
Theorem~\ref{the:Main_property_of_the_delooping_construction},
Corollary~\ref{cor:turning_a_transformation_into_a_weak_equivalence} and
Theorem~\ref{the:Compatibility_of_the_delooping_construction_with_homotopy_colimits}
carry over word by word
if we replace $\matheurm{Add\text{-}Cat}$ by $\matheurm{Add\text{-}Cat}^{\mathcal{C}}$ everywhere. From the definitions we also conclude:
\begin{lemma} \label{lem:commuting_[infty]_and_F}
Let $G \colon \matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Add\text{-}Cat}$ be a functor.
Suppose that there are natural isomorphisms in a commutative diagram
\[\xymatrix{
G \circ z[t]^{\mathcal{C}} \ar[d]^{G(j_+)} \ar[rr]^\cong && z[t] \circ G \ar[d]^{j_+}
\\
G \circ z[t,t^{-1}]^{\mathcal{C}} \ar[rr]^\cong && z[t,t^{-1}] \circ G
\\
G \circ z[t^{-1}]^{\mathcal{C}} \ar[u]_{G(j_-)} \ar[rr]^\cong && z[t^{-1}] \circ G \ar[u]_{j_-}
}\]
Let ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ be a functor
respecting Condition~\ref{con:condition_bfu}.
Then ${\mathbf E} \colon \matheurm{Add\text{-}Cat}
\to \matheurm{Spectra}$ is $c$-contracted if and only if ${\mathbf E} \circ G \colon \matheurm{Add\text{-}Cat}^{\mathcal{C}} \to \matheurm{Spectra}$
is $c$-contracted, and we have a natural isomorphism
\[
{\mathbf E}[\infty] \circ G \cong \bigl({\mathbf E} \circ G\bigr)[\infty].
\]
\end{lemma}
\end{remark}
We will always be interested in the case where $\mathcal{C}$ is the groupoid with one object
and $\mathbb{Z}$ as automorphism group of this object.
We will write $\matheurm{Add\text{-}Cat}_t$ for $\matheurm{Add\text{-}Cat}^{\mathcal{C}}$ in this case. Then
objects in $\matheurm{Add\text{-}Cat}_t$ are pairs $(\mathcal{A},\Phi)$ consisting of a small additive
category $\mathcal{A}$ and an automorphism $\Phi \colon \mathcal{A} \xrightarrow{\cong} \mathcal{A}$ and a
morphism $F \colon (\mathcal{A}_0,\Phi_0) \to (\mathcal{A}_1,\Phi_1)$ is a functor of additive
categories $F \colon \mathcal{A}_0 \to \mathcal{A}_1$ satisfying $\Phi_1 \circ F = F \circ \Phi_0$.
Our main examples for functors $G$ as appearing in Lemma~\ref{lem:commuting_[infty]_and_F} will be the functors
\begin{align*}
z_t[s^{\pm1}] \colon \matheurm{Add\text{-}Cat}_t &\to \matheurm{Add\text{-}Cat},\quad (\mathcal{A},\Phi)\mapsto \mathcal{A}_\Phi[s^{\pm1}],\\
z_t[s,s^{-1}]\colon \matheurm{Add\text{-}Cat}_t & \to \matheurm{Add\text{-}Cat}, \quad (\mathcal{A},\Phi)\mapsto
\mathcal{A}_\Phi[s,s^{-1}]. \end{align*} (The subscript ``$t$'' stands for ``twisted'',
since these functors are the obvious twisted generalizations of the functors
$z[t^{\pm1}]$ and $z[t,t^{-1}]$ from Section~\ref{sec:The_Bass-Heller-Swan_map}, with the
variable $t$ replaced by $s$ for the sake of readability.)
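For the trivial twist $\Phi = \id_{\mathcal{A}}$ these functors reduce to the untwisted ones; for instance
\[
z_t[s,s^{-1}](\mathcal{A},\id_{\mathcal{A}}) = \mathcal{A}_{\id}[s,s^{-1}],
\]
which can be identified with $\mathcal{A}[s,s^{-1}]$. We record this special case only as an illustration.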
Let us go back to the situation of Section~\ref{sec:The_Bass-Heller-Swan_map} where we
were given a functor ${\mathbf E}\colon \matheurm{Add\text{-}Cat}\to\matheurm{Spectra}$ satisfying
Condition~\ref{con:condition_bfu}. Replacing $z[t^{\pm1}]$ and $z[t,t^{-1}]$ by their twisted
versions throughout, we may define the twisted versions
\[
{\mathbf B}^t{\mathbf E}, {\mathbf N}^t_\pm{\mathbf E}, {\mathbf Z}^t_\pm{\mathbf E}, {\mathbf Z}^t{\mathbf E}\colon \matheurm{Add\text{-}Cat}_t\to \matheurm{Spectra}
\]
of the corresponding functors appearing in
Section~\ref{sec:The_Bass-Heller-Swan_map}. The role of the functor ${\mathbf E}\wedge(S^1)_+$ is now
taken by the functor
\[{\mathbf T}^t{\mathbf E}(\mathcal{A},\Phi) = {\mathbf T}_{{\mathbf E}(\Phi^{-1})}\]
given by the mapping torus of the map ${\mathbf E}(\Phi^{-1})\colon {\mathbf E}(\mathcal{A})\to {\mathbf E}(\mathcal{A})$.
Condition~\ref{con:condition_bfu} implies in this setting that there is a
natural transformation
\[{\mathbf a}^t\colon {\mathbf T}^t{\mathbf E} \to {\mathbf Z}^t {\mathbf E}.\]
(It is induced by the natural transformation
\[\id_A\cdot t\colon \Phi^{-1}(A)\to A\]
between the functors $i \circ \Phi^{-1}$ and $i$, where $i\colon \mathcal{A}\to \mathcal{A}_\Phi[t,t^{-1}]$
is the canonical inclusion.)
In these terms, the natural transformation from
Theorem~\ref{the:BHS_decomposition_for_connective_K-theory}~\ref{the:BHS_decomposition_for_connective_K-theory:BHS-iso}
is just given by the twisted version of the Bass-Heller-Swan map
\[
{\mathbf B}HS^t\colon {\mathbf T}^t{\mathbf E} \vee {\mathbf N}^t_+{\mathbf E} \vee {\mathbf N}^t_-{\mathbf E} \to {\mathbf Z}^t{\mathbf E}
\]
applied to ${\mathbf E}={\mathbf K}$.
Next we want to apply the delooping construction to the Nil-groups.
\begin{lemma}\label{lem:nil_1_contracted}
The functor $(\mathcal{A},\Phi)\mapsto {\mathbf K}(\Nil(\mathcal{A},\Phi))$ is 1-contracted.
\end{lemma}
\begin{proof}
The functors
\begin{align*}
(\mathcal{A},\Phi) & \mapsto {\mathbf K}(\Nil(\Idem \mathcal{A},\Idem\Phi)),\\
(\mathcal{A},\Phi) & \mapsto {\mathbf K}(\Idem\mathcal{A})\vee \Omega{\mathbf N}{\mathbf K}((\Idem\mathcal{A})_{\Idem\Phi}[t])
\end{align*}
are, by Theorem~\ref{the:BHS_decomposition_for_connective_K-theory}~\ref{the:BHS_decomposition_for_connective_K-theory:Nil},
naturally equivalent. In the second functor, the first summand is 0-contracted by
Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfKIdem}. The second summand is 0-contracted by
Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfK} and Lemma~\ref{lem:commuting_[infty]_and_F},
noticing that $\Omega$ decreases the degree of
contraction by one. From
Lemma~\ref{lem:c_contracted_and_vee}~\ref{lem:c_contracted_and_vee:contracted} it follows that
the second functor is 0-contracted. Hence the first functor is 0-contracted, too.
There is a natural splitting
\[
{\mathbf K}(\Nil(\mathcal{A},\Phi)) \simeq {\mathbf K}(\mathcal{A}) \vee \widetilde\Nil(\mathcal{A},\Phi)
\]
induced by the obvious projection $\Nil(\mathcal{A},\Phi)\to\mathcal{A}$ and its section $A\mapsto (A,0)$.
Next we show that the map induced by the inclusion
\[
\widetilde\Nil(\mathcal{A},\Phi)\to \widetilde\Nil(\Idem\mathcal{A},\Idem\Phi)
\]
is an equivalence of spectra.
Denote $\widetilde\Nil_i:=\pi_i\widetilde\Nil$. In the diagram
\[\xymatrix{
0 \ar[r] & K_i(\mathcal{A}) \ar[r] \ar[d] & K_i (\Nil(\mathcal{A},\Phi)) \ar[r] \ar[d] & \widetilde\Nil_i(\mathcal{A},\Phi) \ar[d] \ar[r] & 0
\\
0 \ar[r] & K_i(\Idem \mathcal{A}) \ar[r] & K_i (\Nil(\Idem \mathcal{A},\Idem \Phi)) \ar[r] & \widetilde\Nil_i(\Idem \mathcal{A},\Idem \Phi) \ar[r] & 0
}
\]
the left and middle vertical arrows are bijections for $i\geq 1$ and injections for $i=0$, by cofinality.
Since both rows are split exact and the splittings are compatible with the vertical arrows,
also the right arrow is bijective for $i\geq 1$ and injective for $i=0$.
We are left to show surjectivity for $\widetilde\Nil_0$. So let $((A,p),\phi)$
represent an element of
$\widetilde\Nil_0(\Idem\mathcal{A},\Idem\Phi)$. Then the same element is also
represented by $((A,p),\phi)\oplus ((A,1-p),0)$ which clearly has a preimage in
$\widetilde\Nil_0(\mathcal{A},\Phi)$.
Now the following diagram commutes:
\[
\xymatrix{
{\mathbf K}(\Nil(\mathcal{A},\Phi)) \ar[d] \ar[r]^\simeq & {\mathbf K}(\mathcal{A}_\Phi[t])\vee \widetilde\Nil(\mathcal{A},\Phi) \ar[d]
\\
{\mathbf K}(\Nil(\Idem \mathcal{A},\Idem \Phi)) \ar[r]^(.35)\simeq & {\mathbf K}((\Idem \mathcal{A})_{\Idem \Phi}[t])\vee \widetilde\Nil(\Idem \mathcal{A},\Idem \Phi)
}
\]
Thinking of all terms as functors in $(\mathcal{A},\Phi)$, we know that the lower left term is
0-contracted. It follows from
Lemma~\ref{lem:c_contracted_and_vee}~\ref{lem:c_contracted_and_vee:contracted} that
$\widetilde\Nil(\mathcal{A},\Phi)$ is 0-contracted. Moreover the functor $(\mathcal{A},\Phi)\mapsto
{\mathbf K}(\mathcal{A}_\Phi[t])$ is 1-contracted by Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfK}
and Lemma~\ref{lem:commuting_[infty]_and_F}. Applying
Lemma~\ref{lem:c_contracted_and_vee}~\ref{lem:c_contracted_and_vee:contracted} again proves the claim.
\end{proof}
Hence we may apply the delooping construction to the functor
\[(\mathcal{A},\Phi)\mapsto {\mathbf K}(\Nil(\mathcal{A},\Phi))\] to obtain a new functor
${\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi)$. It follows from cofinality and Lemma~\ref{lem:alpha_bijective}
that the map induced by the inclusion
\[{\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi)\to {\mathbf K}\Nil^{\infty}(\Idem\mathcal{A},\Idem\Phi)\]
is a weak equivalence.
\begin{proof}[Proof of Theorem~\ref{the:BHS_decomposition_for_non-connective_K-theory}]~
\ref{the:BHS_decomposition_for_non-connective_K-theory:BHS-iso}
As ${\mathbf K}$ satisfies Condition~\ref{con:condition_bfu}, the same is true for
${\mathbf T}^t{\mathbf K}$ and ${\mathbf N}^t_{\pm}{\mathbf K}$ and hence for their wedge. So we may apply the
delooping construction to the transformation ${\mathbf B}HS^t$; using compatibility with the
smash product, we get from
Theorem~\ref{the:BHS_decomposition_for_connective_K-theory}~\ref{the:BHS_decomposition_for_connective_K-theory:BHS-iso}
a natural homotopy equivalence
\begin{equation}\label{eq:delooped_BHS_map}
{\mathbf B}HS^t[\infty]\colon ({\mathbf T}^t{\mathbf K}) [\infty] \vee ({\mathbf N}^t_+{\mathbf K})[\infty]
\vee ({\mathbf N}^t_-{\mathbf K})[\infty] \xrightarrow{\simeq} ({\mathbf Z}^t{\mathbf K})[\infty].
\end{equation}
An application of Lemma~\ref{lem:commuting_[infty]_and_F} to the functors $z_t[s,s^{-1}]$ and $z_t[s^{\pm1}]$ implies that
\begin{equation}\label{eq:canonical_isos}
({\mathbf Z}^t{\mathbf K})[\infty] \cong {\mathbf Z}^t {\mathbf K}^{\infty}, \quad ({\mathbf N}_{\pm}^t{\mathbf K})[\infty]\cong{\mathbf N}_\pm^t{\mathbf K}^{\infty}.
\end{equation}
By definition, the mapping torus is a homotopy pushout; the compatibility of the delooping construction with homotopy colimits
(Theorem~\ref{the:Compatibility_of_the_delooping_construction_with_homotopy_colimits}) implies that the canonical transformation
\[\alpha\colon {\mathbf T}^t {\mathbf K}^{\infty} \to ({\mathbf T}^t {\mathbf K})[\infty]\]
is a weak equivalence. Thus, from \eqref{eq:delooped_BHS_map} we obtain a natural homotopy equivalence
\[
\bigl({\mathbf a}[\infty]\circ \alpha\bigr) \vee {\mathbf b}^{\infty}_+\vee {\mathbf b}^{\infty}_- \colon
{\mathbf T}_{{\mathbf K}^{\infty}(\Phi^{-1})} \vee {\mathbf N}{\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t]) \vee {\mathbf N}{\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t^{-1}])
\xrightarrow{\simeq} {\mathbf K}^{\infty}(\mathcal{A}_{\Phi}[t,t^{-1}]).
\]
It remains to show that the map ${\mathbf a}[\infty]\circ \alpha$ defined in this way agrees with the map ${\mathbf a}^{\infty}$, that is, with the map
\[{\mathbf a}^t\colon {\mathbf T}^t{\mathbf E}\to {\mathbf Z}^t{\mathbf E}\]
for ${\mathbf E}={\mathbf K}^{\infty}$, regarded as a functor which satisfies Condition~\ref{con:condition_bfu}. In fact, the diagram
\[\xymatrix{
{\mathbf T}^t (\Omega{\mathbf L}{\mathbf E}) \ar[rr]^\simeq \ar[d]^{{\mathbf a}_{\Omega{\mathbf L}{\mathbf E}}} && \Omega{\mathbf L}{\mathbf T}^t {\mathbf E} \ar[d]^{\Omega{\mathbf L}{\mathbf a}_{{\mathbf E}}}\\
{\mathbf Z}^t(\Omega{\mathbf L}{\mathbf E}) \ar[rr]^\cong && \Omega{\mathbf L}{\mathbf Z}^t{\mathbf E}
}\]
with the canonical horizontal arrows is commutative. Iterating the construction shows that
\[\xymatrix{
{\mathbf T}^t ({\mathbf E}[\infty]) \ar[rr]^\simeq \ar[d]^{{\mathbf a}_{{\mathbf E}[\infty]}} && ({\mathbf T}^t {\mathbf E})[\infty] \ar[d]^{{\mathbf a}_{{\mathbf E}}[\infty]}\\
{\mathbf Z}^t({\mathbf E}[\infty]) \ar[rr]^\cong && ({\mathbf Z}^t{\mathbf E})[\infty]
}\]
is also commutative. The lower horizontal isomorphism is the one from \eqref{eq:canonical_isos};
comparing with the proof of
Theorem~\ref{the:Compatibility_of_the_delooping_construction_with_homotopy_colimits} shows
that the upper horizontal map agrees with $\alpha$. This implies the claim.
\\[2mm]~\ref{the:BHS_decomposition_for_non-connective_K-theory:Nil} Here we use that
$K$-theory may be naturally defined on chain complexes: If $\mathcal{A}$ is an additive
category, we denote by $\Ch(\mathcal{A})$ the category of bounded chain complexes over
$\mathcal{A}$. This category is naturally a Waldhausen category where the weak equivalences are
the chain homotopy equivalences and the cofibrations are the maps which are levelwise
inclusions of direct summands. As for any Waldhausen category, the $K$-theory spectrum
${\mathbf K}(\Ch(\mathcal{A}))=:{\mathbf K}ch(\mathcal{A})$ is defined. By the Gillet-Waldhausen theorem, ${\mathbf K}ch$
is naturally equivalent to ${\mathbf K}$ \cite[Proposition~6.1]{Cardenas-Pedersen(1997)}.
Given $(A,f)$ in $\Nil(\mathcal{A},\Phi)$, denote by $\chi(A,f)$ the following 1-dimensional chain complex in $\mathcal{A}_\Phi[t]$:
\[\Phi(A) \xrightarrow{t-f} A\]
This leads to a functor $\chi\colon \Nil(\mathcal{A},\Phi)\to \Ch(\mathcal{A}_\Phi[t])$;
this functor induces a map
\[{\mathbf K}(\chi)\colon {\mathbf K}(\Nil(\mathcal{A},\Phi)) \to {\mathbf K}ch(\mathcal{A}_\Phi[t]).\]
In~\cite[Section~8]{Lueck-Steimle(2013twisted_BHS)}
it is shown that if $\mathcal{A}$ is idempotent complete, then ${\mathbf K}(\chi)$ is part of a homotopy fiber sequence
\begin{equation}\label{eq:fiber_sequence_with_nil}
{\mathbf K}(\Nil(\mathcal{A},\Phi)) \to {\mathbf K}ch(\mathcal{A}_\Phi[t]) \to {\mathbf K}ch(\mathcal{A}_\Phi[t,t^{-1}]).
\end{equation}
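(For orientation, and not needed for the argument: in the untwisted case $\Phi = \id_{\mathcal{A}}$ the composite of the two maps in \eqref{eq:fiber_sequence_with_nil} kills the class of $(A,f)$ for a transparent reason, namely the image of $\chi(A,f)$ in $\Ch(\mathcal{A}[t,t^{-1}])$ is contractible: for nilpotent $f$ the morphism $t-f$ becomes invertible there with inverse given by the finite geometric series
\[
(t-f)^{-1} = t^{-1}\sum_{k \ge 0} f^k t^{-k}.
\]
In the twisted case the analogous formula involves the twisting by $\Phi$.)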
Now each of the terms in this sequence, as a functor in $(\mathcal{A},\Phi)$, satisfies Condition~\ref{con:condition_bfu}
and the maps in the sequence are compatible transformations. Applying the delooping construction
to each of the terms leads to a sequence
\begin{equation}\label{eq:fiber_sequence_with_nil_infty}
{\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi) \to {\mathbf K}ch^{\infty}(\mathcal{A}_\Phi[t]) \to {\mathbf K}ch^{\infty}(\mathcal{A}_\Phi[t,t^{-1}]).
\end{equation}
Note that by the Gillet-Waldhausen Theorem and Lemma~\ref{lem:commuting_[infty]_and_F},
the middle and right terms are naturally homotopy equivalent to ${\mathbf K}^{\infty}(\mathcal{A}_\Phi[t])$ and ${\mathbf K}^{\infty}(\mathcal{A}_\Phi[t,t^{-1}])$, respectively.
We will show that this sequence is a fibration sequence for any $(\mathcal{A},\Phi)$.
To do this, consider the commutative diagram
\[\xymatrix{
{\mathbf K}\circ\Nil \ar[r]\ar[d] & {\mathbf K}ch\circ {\mathbf Z}_+^t \ar[r] \ar[d] & {\mathbf K}ch\circ {\mathbf Z}^t \ar[d]
\\
{\mathbf K}\circ\Nil \circ \Idem_t\ar[r]& {\mathbf K}ch\circ {\mathbf Z}_+^t\circ \Idem_t \ar[r] & {\mathbf K}ch\circ {\mathbf Z}^t\circ \Idem_t
}\]
of functors whose top line at the object $(\mathcal{A},\Phi)$ is just \eqref{eq:fiber_sequence_with_nil},
and where we define $\Idem_t(\mathcal{A},\Phi):=(\Idem(\mathcal{A}),\Idem(\Phi))$. Notice that the bottom
line of this diagram, at any object $(\mathcal{A},\Phi)$, is a fibration sequence.
We claim that all the functors in this diagram are 1-contracted. In fact, we showed in
Lemma~\ref{lem:nil_1_contracted} (and its proof) that the left terms are 1-contracted.
The middle and right upper terms are 1-contracted as the functor ${\mathbf K}\simeq {\mathbf K}ch$ is 1-contracted.
At the object $(\mathcal{A},\Phi)$, the middle vertical arrow is given by the map induced by the inclusion
\[{\mathbf K}ch(\mathcal{A}_\Phi[t])\to {\mathbf K}ch((\Idem\mathcal{A})_{\Idem(\Phi)}[t]).\]
In particular, by cofinality, it induces an isomorphism in degrees $\geq1$. When precomposed with $Z_+$, it becomes
\begin{equation}\label{eq:transformation_at_Z_plus}
{\mathbf K}ch((\mathcal{A}[s])_\Phi[t])\to {\mathbf K}ch((\Idem(\mathcal{A}[s]))_{\Idem(\Phi)}[t]).
\end{equation}
Now the idempotent completion of $(\mathcal{A}[s])_\Phi[t]$ is also an idempotent completion of
$(\Idem(\mathcal{A}[s]))_{\Idem(\Phi)}[t]$. As $\pi_n{\mathbf K}$ is invariant under idempotent
completions for $n \ge 1$, we see that \eqref{eq:transformation_at_Z_plus} induces isomorphisms in homotopy
groups of degree $\geq 1$.
The same argument works with $Z_+$ replaced by $Z_-$ and $Z$. We conclude that the
restricted Bass-Heller-Swan maps for the upper and the lower middle terms in the diagram
are isomorphic in degree $\geq1$. Smashing with $(S^1)_+$ preserves connectivity; so the
unrestricted Bass-Heller-Swan maps for the middle terms are also isomorphic in degrees
$\geq2$. Thus, to show that the lower middle term is 1-contracted, we are left to show
split injectivity of the restricted Bass-Heller-Swan map in degree 0.
This map is given by
\begin{multline*}
\pi_0 {\mathbf K}(\Idem(\mathcal{A})_\Phi[t])\oplus \pi_0 {\mathbf K}(\Idem(\mathcal{A}[s])_\Phi[t]) \oplus \pi_0 {\mathbf K}(\Idem(\mathcal{A}[s^{-1}])_\Phi[t]) \\
\to \pi_0 {\mathbf K}(\Idem(\mathcal{A}[s,s^{-1}])_\Phi[t]).
\end{multline*}
Split injectivity holds as $\pi_0{\mathbf K}(\mathcal{A})\cong\pi_0{\mathbf K}(\mathcal{A}_\Phi[t])$ for any
$(\mathcal{A},\Phi)$ (as in the proof of Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfK}),
and ${\mathbf K}idem$ is 0-contracted.
Thus the lower middle term is 1-contracted and the middle vertical map induces an
isomorphism in degrees $\geq1$. The corresponding statements hold for the right terms in
the diagram, by the very same arguments.
Applying the delooping construction to the whole diagram, we obtain a new diagram whose
top line, at the object $(\mathcal{A},\Phi)$, is given by \eqref{eq:fiber_sequence_with_nil_infty}, and
whose bottom line is still a fiber sequence by
Theorem~\ref{the:Compatibility_of_the_delooping_construction_with_homotopy_colimits}.
By Lemma~\ref{lem:alpha_bijective} all the vertical maps are weak homotopy equivalences. We
conclude that the upper line, and hence \eqref{eq:fiber_sequence_with_nil_infty}, is a fibration
sequence, as claimed.
In~\cite[Section~3]{Lueck-Steimle(2013twisted_BHS)} it is shown that part~\ref{the:BHS_decomposition_for_connective_K-theory:Nil} of
Theorem~\ref{the:BHS_decomposition_for_connective_K-theory} follows formally
from~\eqref{eq:fiber_sequence_with_nil}. The same arguments apply to prove that
part~\ref{the:BHS_decomposition_for_non-connective_K-theory:Nil} of
Theorem~\ref{the:BHS_decomposition_for_non-connective_K-theory} follows from
\eqref{eq:fiber_sequence_with_nil_infty}.
\end{proof}
\begin{remark}[Schlichting's non-connective $K$-theory spectrum for exact categories]
\label{rem:Schlichting}
Notice that $\Nil(\mathcal{A},\Phi)$ is an exact category whose exact structure does not come
from the structure of an additive category.
Schlichting~\cite{Schlichting(2004deloop_exact)} has defined non-connective $K$-theory
for exact categories. It is very likely that Schlichting's non-connective $K$-theory
applied to the exact category $\Nil(\mathcal{A},\Phi)$ is weakly homotopy equivalent to our non-connective
version ${\mathbf K}\Nil^{\infty}(\mathcal{A},\Phi)$ in a natural way. This would follow from
Corollary~\ref{cor:turning_a_transformation_into_a_weak_equivalence}
if Schlichting's non-connective $K$-theory of $\Nil(\mathcal{A},\Phi)$ is $\infty$-contracted, or, equivalently,
has a Bass-Heller-Swan decomposition.
It is conceivable that the twisted Bass-Heller-Swan decomposition for connective $K$-theory, which is described
in Theorem~\ref{the:BHS_decomposition_for_connective_K-theory} and whose proof is given
in~\cite{Lueck-Steimle(2013twisted_BHS)}, can be extended directly to the
non-connective setting described in
Theorem~\ref{the:BHS_decomposition_for_non-connective_K-theory} using Schlichting's
non-connective version of $K$-theory for exact categories
and his localization theorem instead of Waldhausen's approximation and fibration theorems.
\end{remark}
\typeout{---------- Section 7: Filtered colimits -----------------}
\section{Filtered colimits}
\label{sec:Filtered_colimits}
Suppose that the small category $\mathcal{J}$ is filtered, i.e., for any two objects $j_0$ and
$j_1$ there exists a morphism $u \colon j \to j'$ in $\mathcal{J}$ with the following property:
There exist morphisms from both $j_0$ and $j_1$ to $j$, and for any two morphisms
$u_0 \colon j_0 \to j$ and $u_1 \colon j_1 \to j$ we have $u \circ u_0 = u \circ u_1$. Given a
functor $\mathcal{A} \colon \mathcal{J} \to \matheurm{Add\text{-}Cat}$, its \emph{colimit} $\colim \mathcal{A}$ in the
category of small categories exists and is in a natural way an additive category.
(We do not need an explicit description, but one can see that
\[\ob(\colim \mathcal{A}) = \colim (\ob(\mathcal{A}))\]
and that the abelian group of morphisms from $A$ to $B$ is given by
\[(\colim\mathcal{A})(A,B) = \colim_{j\in \mathcal{J}}\biggl( \bigoplus_{A_j, B_j} \mathcal{A}(j)(A_j, B_j)\biggr)\]
where the coproducts range over all objects $A_j, B_j\in\mathcal{A}(j)$ projecting to $A$ resp.~$B$.)
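To illustrate the description above in a standard special case (included only as an illustration): let $(R_j)_{j \in \mathcal{J}}$ be a filtered system of rings and let $\mathcal{A}(j)$ be the additive category with objects the natural numbers $m \ge 0$, thought of as the free modules $R_j^m$, and with $\mathcal{A}(j)(m,n) = M_{n,m}(R_j)$, the structure functors being induced by the ring homomorphisms. Then $\ob(\colim\mathcal{A}) = \{0,1,2,\ldots\}$ and the description above yields
\[
(\colim_{\mathcal{J}}\mathcal{A})(m,n) \cong \colim_{j \in \mathcal{J}} M_{n,m}(R_j) \cong M_{n,m}\bigl(\colim_{j \in \mathcal{J}} R_j\bigr),
\]
so that $\colim_{\mathcal{J}}\mathcal{A}$ is equivalent to the category of finitely generated free modules over $\colim_{j \in \mathcal{J}} R_j$.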
For a functor ${\mathbf E}\colon\matheurm{Add\text{-}Cat}\to\matheurm{Spectra}$, we say that \emph{${\mathbf E}$ commutes with filtered
colimits} if for any $\mathcal{A}\colon \mathcal{J}\to\matheurm{Add\text{-}Cat}$ the canonical map
\[
\hocolim {\mathbf E} \circ \mathcal{A} \to {\mathbf E}(\colim\mathcal{A})\]
is a weak homotopy equivalence. By
Lemma~\ref{lem:diagrams_of_spectra}~\ref{lem:diagrams_of_spectra:pi_and_filtered}
this is equivalent to saying that the canonical map
\[
\colim \pi_* ({\mathbf E}\circ\mathcal{A}) \to \pi_*{\mathbf E}(\colim\mathcal{A})
\]
is bijective.
\begin{proposition}\label{prop:compatibility_with_filtered_colimits}
Suppose that ${\mathbf E}$ commutes with filtered colimits and satisfies
Condition~\ref{con:condition_bfu}. Then ${\mathbf E}[\infty]$ commutes with filtered colimits.
\end{proposition}
Quillen's (or Waldhausen's) connective $K$-theory spectrum ${\mathbf K}$ commutes with filtered
colimits~\cite[(9) on page~20]{Quillen(1973)}. Letting $K^{\infty}_i(\mathcal{A}) := \pi_i \bigl({\mathbf K}^{\infty}(\mathcal{A})\bigr)$, we conclude:
\begin{corollary}\label{cor:K_upper_infty_commutes_with_colimits}
If $\mathcal{J}$ is filtered and $\mathcal{A} \colon \mathcal{J} \to \matheurm{Add\text{-}Cat}$ is a covariant functor, then the canonical homomorphism
\[
\colim_{\mathcal{J}} K^{\infty}_i(\mathcal{A}) \to K^{\infty}_i\bigl(\colim_{\mathcal{J}} \mathcal{A}\bigr)
\]
is bijective for all $i \in \mathbb{Z}$.
\end{corollary}
Schlichting proves compatibility of negative $K$-theory with filtered colimits
in~\cite[Corollary~5]{Schlichting(2006)}.
\begin{proof}[Proof of Proposition~\ref{prop:compatibility_with_filtered_colimits}]
Let $\mathcal{A}\colon \mathcal{J}\to\matheurm{Add\text{-}Cat}$. It follows from the definition of the categories
$\mathcal{A}[t^{\pm1}]$ and $\mathcal{A}[t,t^{-1}]$ that the canonical functors
\[\colim_j (\mathcal{A}(j)[t^{\pm1}])\to (\colim \mathcal{A})[t^{\pm1}], \quad \colim_j
(\mathcal{A}(j)[t,t^{-1}])\to (\colim \mathcal{A})[t,t^{-1}]\] are isomorphisms. It follows that the
functors ${\mathbf Z}_\pm {\mathbf E}$ and ${\mathbf Z} {\mathbf E}$ commute with filtered colimits, too.
Lemma~\ref{lem:diagrams_of_spectra} then implies that for ${\mathbf F}\in\{{\mathbf N}_\pm, {\mathbf B}_r, {\mathbf B}, {\mathbf L}\}$ the functor ${\mathbf F} {\mathbf E}$
commutes with filtered colimits. The square
\[\xymatrix{
{\hocolim {\mathbf E}\circ\mathcal{A}} \ar[rr]^{\hocolim {\mathbf s}} \ar[d]^\simeq
&& {\hocolim \Omega{\mathbf L} {\mathbf E}\circ\mathcal{A}} \ar[d]^\simeq
\\
{\mathbf E}(\colim \mathcal{A}) \ar[rr]^{{\mathbf s}}
&& \Omega {\mathbf L}{\mathbf E}(\colim\mathcal{A})
}\]
with canonical vertical arrows is commutative. Iterating this construction and applying
Lemma~\ref{lem:diagrams_of_spectra}~\ref{lem:diagrams_of_spectra:commute}, we see that
\[\hocolim {\mathbf E}[\infty]\circ\mathcal{A} \to {\mathbf E}[\infty](\colim\mathcal{A})\]
is a weak equivalence.
\end{proof}
\begin{remark}
In~\cite[Theorem~1.8~(i) on page~43]{Bartels-Echterhoff-Lueck(2008colim)} it is shown
that the Farrell-Jones Isomorphism Conjecture is inherited under filtered colimits of
groups (with not necessarily injective structure maps), but only for rings as
coefficients. The same statement remains true if one allows coefficients in additive
categories, as stated in~\cite[Corollary~0.8]{Bartels-Lueck(2009coeff)}. The proof
of~\cite[Theorem~1.8~(i) on page~43]{Bartels-Echterhoff-Lueck(2008colim)} in the $K$-theory case
carries over directly as soon as one has Corollary~\ref{cor:K_upper_infty_commutes_with_colimits}
available; it is needed in the extension of~\cite[Lemma~6.2 on page~61]{Bartels-Lueck(2009coeff)}
to additive categories. The analog of Corollary~\ref{cor:K_upper_infty_commutes_with_colimits} for $L$-theory
is obvious since the $L$-groups of an additive category with involution can be defined elementwise
instead of referring to the homotopy groups of a spectrum.
\end{remark}
\typeout{---------- Section 8: Homotopy K-theory -----------------}
\section{Homotopy $K$-theory}
\label{sec:Homotopy_K-theory}
Let ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ be a (covariant) functor and let
$F_+ \colon \matheurm{Add\text{-}Cat} \to \matheurm{Add\text{-}Cat}$ be the functor sending $\mathcal{A}$ to $\mathcal{A}[t]$. Denote by
${\mathbf F}_+{\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ the composite ${\mathbf E} \circ F_+$. The natural inclusion
$i_+ \colon \mathcal{A} \to \mathcal{A}[t]$, which sends a morphism $f \colon A \to B$ to
$f \cdot t^0 \colon A \to B$, induces a natural transformation ${\mathbf i}_+ \colon {\mathbf E} \to {\mathbf F}_+{\mathbf E}$
of functors $\matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$. Define inductively the functor
${\mathbf F}_+^n{\mathbf E}\colon \matheurm{Add\text{-}Cat} \to\matheurm{Spectra}$ by ${\mathbf F}^n_+ {\mathbf E} := {\mathbf F}_+({\mathbf F}^{n-1}_+ {\mathbf E})$, starting with
${\mathbf F}_+^0{\mathbf E} := {\mathbf E}$. Define inductively ${\mathbf i}^n_+ \colon {\mathbf F}^{n-1}_+{\mathbf E} \to {\mathbf F}^n_+{\mathbf E}$ to be
the transformation ${\mathbf i}_+$ associated to the functor ${\mathbf F}^{n-1}_+{\mathbf E}$, starting with ${\mathbf i}^1_+ := {\mathbf i}_+$. Thus we obtain a sequence of
transformations of functors $\matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$
\[
{\mathbf E} = {\mathbf F}^0_+ {\mathbf E} \xrightarrow{{\mathbf i}_+^1} {\mathbf F}_+^1{\mathbf E} \xrightarrow{{\mathbf i}_+^2} {\mathbf F}_+^2 {\mathbf E}
\xrightarrow{{\mathbf i}_+^3} \cdots.
\]
\begin{definition}[Homotopy stabilization ${\mathbf H}{\mathbf E}$]
\label{def:homotopy_stabilization}
Define the \emph{homotopy stabilization}
\[{\mathbf H}{\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}\]
of ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ to be the homotopy
colimit of the sequence above. Let
\[
{\mathbf h} \colon {\mathbf E} \to {\mathbf H}{\mathbf E}
\]
be given by the zero-th structure map of the homotopy colimit.
We call ${\mathbf E}$ \emph{homotopy stable} if ${\mathbf h}$ is a weak equivalence.
\end{definition}
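Unravelling the definitions (recorded only for orientation; compare the proof of Lemma~\ref{lem:Compatibility_of_bfH_and[infty]} below), we have ${\mathbf F}^n_+{\mathbf E}(\mathcal{A}) = {\mathbf E}\bigl((\mathcal{A}[t_1])\cdots[t_n]\bigr)$, which we abbreviate to ${\mathbf E}(\mathcal{A}[t_1,\ldots,t_n])$, and, since the poset $(\mathbb{N},\le)$ is filtered, Lemma~\ref{lem:diagrams_of_spectra}~\ref{lem:diagrams_of_spectra:pi_and_filtered} gives
\[
\pi_i\bigl({\mathbf H}{\mathbf E}(\mathcal{A})\bigr) \cong \colim_{n} \pi_i\bigl({\mathbf E}(\mathcal{A}[t_1,\ldots,t_n])\bigr).
\]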
This construction has the following basic properties.
Let $\ev_0^+ \colon \mathcal{A}_{\Phi}[t] \to \mathcal{A}$ be the functor sending $\sum_{i \ge 0} f_i \cdot t^i$ to $f_0$.
\begin{lemma}\label{lem:properties_of_bfHbfE}
Let ${\mathbf E} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ be a covariant functor.
\begin{enumerate}
\item \label{lem:properties_of_bfHbfE:bfHbfE_is_homotopy_stable}
${\mathbf H}{\mathbf E}$ is homotopy stable;
\item \label{lem:properties_of_bfHbfE:twisted_homotopy_stable}
Suppose that ${\mathbf E}$ is homotopy stable. Let $\mathcal{A}$ be any additive category with
an automorphism $\Phi \colon \mathcal{A} \xrightarrow{\cong} \mathcal{A}$. Then the maps
\begin{eqnarray*}
{\mathbf E}(\ev_0^+) \colon {\mathbf E}(\mathcal{A}_{\Phi}[t]) & \xrightarrow{\simeq} & {\mathbf E}(\mathcal{A});
\\
{\mathbf E}(i_+) \colon {\mathbf E}(\mathcal{A}) & \xrightarrow{\simeq} & {\mathbf E}(\mathcal{A}_{\Phi}[t]),
\end{eqnarray*}
are weak homotopy equivalences.
\end{enumerate}
\end{lemma}
\begin{proof}~\ref{lem:properties_of_bfHbfE:bfHbfE_is_homotopy_stable}
This follows from the definitions since
the obvious map from the homotopy colimit of
\[
{\mathbf F}^0_+ {\mathbf E} \xrightarrow{{\mathbf i}^1_+} {\mathbf F}^1_+{\mathbf E} \xrightarrow{{\mathbf i}^2_+} {\mathbf F}^2_+ {\mathbf E}
\xrightarrow{{\mathbf i}^3_+} \cdots
\]
to
\[
{\mathbf F}_+^1 {\mathbf E} \xrightarrow{{\mathbf i}_+^2} {\mathbf F}_+^2{\mathbf E} \xrightarrow{{\mathbf i}_+^3} {\mathbf F}^3_+{\mathbf E}
\xrightarrow{{\mathbf i}^4_+} \cdots
\]
given by applying ${\mathbf i}_+$ in each degree is a weak homotopy equivalence.
\\[2mm]~\ref{lem:properties_of_bfHbfE:twisted_homotopy_stable}
Consider an additive category $\mathcal{A}$ with an automorphism $\Phi \colon \mathcal{A} \xrightarrow{\cong} \mathcal{A}$.
Define a functor $j_s \colon \mathcal{A}_{\Phi}[t] \to \bigl(\mathcal{A}_{\Phi}[t]\bigr)[s]$ by sending $\sum_{i \ge 0 } f_i \cdot t^i$
to $\sum_{i \ge 0} \bigl(f_i \cdot t^i\bigr) \cdot s^i$. Let
$\ev_{s = 0}$ and $\ev_{s=1}$ respectively be the functors $\bigl(\mathcal{A}_{\Phi}[t]\bigr)[s] \to \mathcal{A}_{\Phi}[t]$
sending a morphism $\sum_{i \ge 0} \bigl(\sum_{j_i \ge 0} f_{j_i,i} \cdot t^{j_i}\bigr) \cdot s^i \colon A \to B$ to
$\sum_{j_0 \ge 0} f_{j_0,0} \cdot t^{j_0} \colon A \to B$ and
$\sum_{i \ge 0} \sum_{j_i \ge 0} f_{j_i,i} \cdot t^{j_i} \colon A \to B$ respectively.
Recall that $i_+ \colon \mathcal{A} \to \mathcal{A}_{\Phi}[t]$ is the obvious inclusion.
Then
\[\ev_{s = 0} \circ j_s=i_+ \circ \ev_0^+, \quad \ev_{s = 1} \circ j_s=\id_{\mathcal{A}_{\Phi}[t]}.\]
The composite of both $\ev_{s = 0}$ and $\ev_{s = 1}$ with the canonical inclusion
$k_+ \colon \mathcal{A}_{\Phi}[t] \to \bigl(\mathcal{A}_{\Phi}[t]\bigr)[s]$ is the identity. Since ${\mathbf E}$
is homotopy stable by assumption, the map
${\mathbf E}(k_+) \colon {\mathbf E}\bigl(\mathcal{A}_{\Phi}[t]\bigr) \to {\mathbf E}\bigl((\mathcal{A}_{\Phi}[t])[s]\bigr)$
is a weak equivalence. Hence the functors $\ev_{s = 0}$ and $\ev_{s = 1}$
induce the same homomorphism after applying $\pi_i \circ {\mathbf E}$ for $i \in \mathbb{Z}$. This implies
that the composite
\[\pi_i\bigl({\mathbf E}(\mathcal{A}_{\phi}[t])\bigr) \xrightarrow{\pi_i({\mathbf E}(\ev_0^+))} \pi_i\bigl({\mathbf E}(\mathcal{A})\bigr)
\xrightarrow{\pi_i({\mathbf E}(i_+))} \pi_i\bigl({\mathbf E}(\mathcal{A}_{\phi}[t])\bigr)\]
is the identity. Since $\ev_0^+ \circ i_+ = \id_{\mathcal{A}}$, also the composite
\[ \pi_i\bigl({\mathbf E}(\mathcal{A})\bigr) \xrightarrow{\pi_i({\mathbf E}(i_+))}
\pi_i\bigl({\mathbf E}(\mathcal{A}_{\phi}[t])\bigr) \xrightarrow{\pi_i({\mathbf E}(\ev_0^+))} \pi_i\bigl({\mathbf E}(\mathcal{A})\bigr)
\]
is the identity. Hence both ${\mathbf E}(\ev_0^+) \colon {\mathbf E}(\mathcal{A}_{\Phi}[t]) \to {\mathbf E}(\mathcal{A})$ and ${\mathbf E}(i_+) \colon {\mathbf E}(\mathcal{A}) \to {\mathbf E}(\mathcal{A}_{\Phi}[t])$ are weak homotopy equivalences.
\end{proof}
\begin{remark}[Universal property of ${\mathbf H}$]
\label{rem:universal_property_of_ngH}
Notice that
Lemma~\ref{lem:properties_of_bfHbfE}~\ref{lem:properties_of_bfHbfE:bfHbfE_is_homotopy_stable}
says that up to weak homotopy equivalence the transformation ${\mathbf h} \colon {\mathbf E}\to {\mathbf H}{\mathbf E}$
is universal (from the left) among transformations ${\mathbf f} \colon {\mathbf E} \to {\mathbf F}$ to homotopy stable
functors ${\mathbf F} \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ since we obtain a commutative square whose lower
horizontal arrow is a weak homotopy equivalence:
\[
\xymatrix{{\mathbf E} \ar[r]^{{\mathbf h}} \ar[d]^{{\mathbf f}} & {\mathbf H}{\mathbf E} \ar[d]^{{\mathbf H}{\mathbf f}}
\\
{\mathbf F} \ar[r]_{{\mathbf h}}^{\simeq} & {\mathbf H}{\mathbf F}
}
\]
\end{remark}
Lemma~\ref{lem:properties_of_bfHbfE}~\ref{lem:properties_of_bfHbfE:twisted_homotopy_stable}
essentially says that homotopy stability automatically implies homotopy stability in the twisted case.
\begin{definition}[Homotopy $K$-theory]
\label{homotopy_K-theory}
Define the homotopy $K$-theory functors
\[
{\mathbf H}{\mathbf K}, {\mathbf H}{\mathbf K}infty \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}
\]
to be the homotopy stabilization in the sense of
Definition~\ref{def:homotopy_stabilization} of the functors ${\mathbf K}, {\mathbf K}infty \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$.
\end{definition}
\begin{lemma} \label{lem:Compatibility_of_bfH_and[infty]}
The functor ${\mathbf H}{\mathbf K}$ is $1$-contracted and there is a weak equivalence
\[
{\mathbf H}{\mathbf K}[\infty] \xrightarrow{\simeq} {\mathbf H}{\mathbf K}infty.
\]
\end{lemma}
\begin{proof}
As
\[\pi_* {\mathbf H}{\mathbf E}(\mathcal{A}) \cong \colim_n \pi_* {\mathbf E}(\mathcal{A}[t_1, \dots, t_n])\]
we conclude that ${\mathbf H}{\mathbf E}$ is $c$-contracted provided that ${\mathbf E}$ is $c$-contracted.
Applying Theorem~\ref{the:Bass-Heller-Swan_Theorem_for_bfK} we see that ${\mathbf H}{\mathbf K}$ is 1-contracted. Also ${\mathbf H}{\mathbf K}infty$ is $\infty$-contracted.
The claim now follows from Lemma~\ref{lem:alpha_bijective}.
\end{proof}
\begin{lemma} [Bass-Heller-Swan for homotopy $K$-theory]
\label{lem_Bass-Heller-Swan_for_homotopy_K-theory}
Let $\mathcal{A}$ be an additive category with an automorphism $\Phi \colon \mathcal{A} \xrightarrow{\cong} \mathcal{A}$.
Then there is a weak homotopy equivalence
\[
{\mathbf a}infty \colon
{\mathbf T}_{{\mathbf H}{\mathbf K}infty(\Phi^{-1})}
\xrightarrow{\simeq} {\mathbf H}{\mathbf K}infty(\mathcal{A}_{\Phi}[t,t^{-1}]).
\]
\end{lemma}
\begin{proof}
We conclude from
Theorem~\ref{the:BHS_decomposition_for_non-connective_K-theory}~
\ref{the:BHS_decomposition_for_non-connective_K-theory:BHS-iso}
and the fact that the Bass-Heller-Swan map is compatible with homotopy colimits in the
spectrum variable and ${\mathbf H}{\mathbf K}infty$ is defined as a homotopy colimit in terms of
${\mathbf K}infty$ that there is a weak equivalence of spectra, natural in $(\mathcal{A},\Phi)$,
\[
{\mathbf a} \vee {\mathbf b}_+\vee {\mathbf b}_- \colon {\mathbf T}_{{\mathbf H}{\mathbf K}infty(\Phi^{-1})} \vee
{\mathbf N}{\mathbf H}{\mathbf K}infty(\mathcal{A}_{\Phi}[t]) \vee {\mathbf N}{\mathbf H}{\mathbf K}infty(\mathcal{A}_{\Phi}[t^{-1}]) \xrightarrow{\simeq}
{\mathbf H}{\mathbf K}infty(\mathcal{A}_{\Phi}[t,t^{-1}]).
\]
Since all the homotopy groups of the terms
${\mathbf N}{\mathbf H}{\mathbf K}infty(\mathcal{A}_{\Phi}[t])$ and ${\mathbf N}{\mathbf H}{\mathbf K}infty(\mathcal{A}_{\Phi}[t^{-1}])$ vanish by
Lemma~\ref{lem:properties_of_bfHbfE}~\ref{lem:properties_of_bfHbfE:twisted_homotopy_stable},
Lemma~\ref{lem_Bass-Heller-Swan_for_homotopy_K-theory} follows.
\end{proof}
\begin{remark}[Identification with Weibel's definition]
\label{rem:Identification_with_Weibel's_definition}
Weibel has defined a version of homotopy $K$-theory for a ring $R$ by a simplicial construction
in~\cite{Weibel(1989)}. It is not hard to
check using Remark~\ref{rem:universal_property_of_ngH}, which applies also to the
constructions of~\cite{Weibel(1989)} instead of
${\mathbf H}$, that $\pi_i({\mathbf H}{\mathbf K}idem(\mathcal{R}))$ and $\pi_i({\mathbf H}{\mathbf K}infty(\mathcal{R}))$ can be
identified with the corresponding homotopy $K$-groups defined in~\cite{Weibel(1989)}, if
$\mathcal{R}$ is a skeleton of the category of finitely generated free $R$-modules.
\end{remark}
\typeout{---------- Section 8: The Farrell-Jones Conjecture for homotopy K-theory -----------------}
\section{The Farrell-Jones Conjecture for homotopy $K$-theory}
\label{sec:The_Farrell-Jones_Conjecture_for_homotopy_K-theory}
The Farrell-Jones Conjecture for (non-connective) homotopy $K$-theory has been treated for rings
in~\cite{Bartels-Lueck(2006)}. Meanwhile it has turned out to be useful to formulate the
Farrell-Jones Conjecture for additive categories as coefficients since then one has much
better inheritance properties, see for instance~\cite{Bartels-Lueck(2009coeff)}
and~\cite{Bartels-Reich(2007coeff)}. The Farrell-Jones Conjecture for (non-connective)
$K$-theory for additive categories is true for a group $G$, if for any additive
$G$-category $\mathcal{A}$ the assembly map
\[
H_n^G(\EGF{G}{{\mathcal{VC}}};{\mathbf K}infty_{\mathcal{A}}) \to H_n^G(\pt;{\mathbf K}infty_{\mathcal{A}}) = K_n(\int_G \mathcal{A})
\]
is bijective for all $n \in \mathbb{Z}$, where $\EGF{G}{{\mathcal{VC}}}$ is
the classifying space for the family of virtually cyclic groups.
If one replaces ${\mathbf K}infty \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$
by the functor ${\mathbf H}{\mathbf K}infty \colon \matheurm{Add\text{-}Cat} \to \matheurm{Spectra}$ and $\EGF{G}{{\mathcal{VC}}}$
by the classifying space $\EGF{G}{{\mathcal{F}\text{in}}}$ for the family of finite subgroups,
one obtains the Farrell-Jones Conjecture for
algebraic (non-connective) homotopy $K$-theory with coefficients in additive categories.
It predicts the bijectivity of
\[
H_n^G(\EGF{G}{{\mathcal{F}\text{in}}};{\mathbf H}{\mathbf K}infty_{\mathcal{A}}) \to H_n^G(\pt;{\mathbf H}{\mathbf K}infty_{\mathcal{A}}) = K\!H_n(\int_G \mathcal{A})
\]
for all $n \in \mathbb{Z}$, where $K\!H_n(\mathcal{B})$ denotes $\pi_n\bigl({\mathbf H}{\mathbf K}infty(\mathcal{B})\bigr)$
for an additive category $\mathcal{B}$.
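To illustrate the statement in a familiar special case (a specialization only, consistent with the diagram in Remark~\ref{rem:Implications_of_the_homotopy_K-theory_version_to_the_K-theory_version} below): for constant coefficients given by a ring $R$, the prediction specializes to the bijectivity of
\[
H_n^G(\EGF{G}{{\mathcal{F}\text{in}}};{\mathbf H}{\mathbf K}infty_{R}) \to H_n^G(\pt;{\mathbf H}{\mathbf K}infty_{R}) = K\!H_n(RG),
\]
which is exactly the lower horizontal arrow appearing in that diagram.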
For the following result, denote by $\mathrm{FJH}(G)$ the
statement ``The Farrell-Jones Conjecture for algebraic homotopy $K$-theory with coefficients in additive categories holds for $G$''.
\begin{theorem}[Farrell-Jones Conjecture for homotopy $K$-theory]
\label{the:Farrell-Jones_Conjecture_for_homotopy_K-theory}
\
\begin{enumerate}
\item \label{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:extensions}
Let $1 \to K \to G \xrightarrow{p} Q \to 1$ be an extension of groups.
If $\mathrm{FJH}(Q)$ and $\mathrm{FJH}(p^{-1}(H))$ hold for every finite subgroup $H \subseteq Q$, then $\mathrm{FJH}(G)$ holds;
\item \label{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:trees} If $G$
acts on a tree $T$ such that $\mathrm{FJH}(G_x)$ holds for every stabilizer group $G_x$ of $x \in T$, then $\mathrm{FJH}(G)$ holds;
\item \label{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:From_K_to_HK}
If $G$ satisfies the Farrell-Jones Conjecture for algebraic $K$-theory with coefficients in
additive categories, then $\mathrm{FJH}(G)$.
\end{enumerate}
\end{theorem}
\begin{proof}~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:extensions} This follows
from~\cite[Corollary~4.4]{Bartels-Lueck(2006)}.
\\[1mm]~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:trees} It is easy to check
that the arguments in Bartels-L\"uck~\cite{Bartels-Lueck(2006)} carry over from rings to
additive categories since they are on the level of equivariant homology theories.
\\[1mm]~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:From_K_to_HK}
It follows from~\cite[Remark~1.6]{Davis-Quinn-Reich(2011)} that the assembly map
\[
H_n^G(\EGF{G}{{\mathcal{VC}}_I};{\mathbf K}infty_{\mathcal{A}})\to H_n^G(\pt;{\mathbf K}infty_{\mathcal{A}}) = K_n(\int_G \mathcal{A})
\]
is bijective for $n \in \mathbb{Z}$, where we have replaced ${\mathcal{VC}}$ by the smaller family of
subgroups ${\mathcal{VC}}_I$ of virtually cyclic subgroups of type $I$, i.e., of subgroups which
are either finite or admit an epimorphism to $\mathbb{Z}$ with finite kernel. Since
${\mathbf H}{\mathbf K}infty$ is given by a specific homotopy colimit, the assembly map is required to
be bijective for all additive $G$-categories $\mathcal{A}$, and the assembly map is compatible with homotopy
colimits in the spectrum variable, we conclude that
\[
H_n^G(\EGF{G}{{\mathcal{VC}}_I};{\mathbf H}{\mathbf K}infty_{\mathcal{A}}) \to H_n^G(\pt;{\mathbf H}{\mathbf K}infty_{\mathcal{A}}) = K\!H_n(\int_G \mathcal{A})
\]
is bijective for $n \in \mathbb{Z}$. In order to replace ${\mathcal{VC}}_I$ by ${\mathcal{F}\text{in}}$, we have to show
in view of the Transitivity Principle, see for
instance~\cite[Theorem~1.11]{Bartels-Farrell-Lueck(2011cocomlat)}
or~\cite[Theorem~1.5]{Bartels-Lueck(2007ind)},
that for any virtually cyclic group $V$ of type $I$ the assembly map
\[
H_n^G(\EGF{V}{{\mathcal{F}\text{in}}};{\mathbf H}{\mathbf K}infty_{\mathcal{A}}) \to H_n^G(\pt;{\mathbf H}{\mathbf K}infty_{\mathcal{A}}) = K\!H_n(\int_V \mathcal{A})
\]
is bijective for all $n \in \mathbb{Z}$. This follows for $V = \mathbb{Z}$ from
Lemma~\ref{lem_Bass-Heller-Swan_for_homotopy_K-theory} since the assembly map appearing
above can be identified with the map appearing in
Lemma~\ref{lem_Bass-Heller-Swan_for_homotopy_K-theory}. The general case of an extension
$1 \to F \to V \to \mathbb{Z} \to 1$ with finite kernel $F$ can be reduced to the case $V = \mathbb{Z}$ by
assertion~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:extensions}.
\end{proof}
\begin{remark}[Wreath products]
\label{rem:wreath_products}
We can also consider the versions ``with finite wreath products'', i.e., we require for
a group $G$ that the Farrell-Jones Conjecture is not only satisfied for $G$ itself, but
for all wreath products of $G$ with finite groups, see for instance~\cite{Farrell-Roushon(2000)}.
The advantage of this version is that it is
inherited to overgroups of finite
index. This follows from the fact that an overgroup
$H$ of finite index of $G$ can be embedded into a wreath product $G \wr F$
for a finite group $F$, see~\cite[Section~2.6]{Dixon-Mortimer(1996)}.
Theorem~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory} remains true for
the version with finite wreath products, where
assertion~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:extensions} can be
reduced to the statement that for an extension $1 \to K \to G \to Q \to 1$ the Farrell-Jones
Conjecture with wreath products holds for $G$ if it holds for $K$ and $Q$.
\end{remark}
\begin{remark}[Status of the Farrell-Jones Conjecture for homotopy $K$-theory]
Because of assertion~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:extensions}
and~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:trees} of
Theorem~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory}, the class of groups
for which the Farrell-Jones Conjecture for homotopy algebraic $K$-theory is known is
larger than the class for the Farrell-Jones Conjecture for algebraic $K$-theory. For
instance, elementary amenable groups satisfy the version for homotopy $K$-theory, just
adapt the argument in~\cite[Theorem~1.3~(i),
Lemma~2.12]{Bartels-Lueck-Reich(2008appl)}. On the other hand, it is not known whether
the Farrell-Jones Conjecture for algebraic $K$-theory holds for the semi-direct product
$\mathbb{Z}[1/2] \rtimes \mathbb{Z}$, where $\mathbb{Z}$ acts by multiplication with $2$ on $\mathbb{Z}[1/2]$.
To summarize, the Farrell-Jones Conjecture for homotopy $K$-theory for coefficients in
additive categories with finite wreath products has the following properties:
\begin{itemize}
\item It is known for elementary amenable groups, hyperbolic groups, CAT(0)-groups, $GL_n(R)$
for a ring $R$ whose underlying abelian group is finitely generated, arithmetic groups
over number fields, arithmetic groups over global fields, cocompact lattices in almost connected Lie groups,
fundamental groups of (not necessarily compact) $3$-manifolds (possibly with boundary), and one-relator groups;
\item It is closed under taking subgroups;
\item It is closed under taking finite direct products and finite free products;
\item It is closed under directed colimits (with not necessarily injective structure maps);
\item It is closed under extensions as explained in Remark~\ref{rem:wreath_products};
\item It has the tree property, see
Theorem~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory}
~\ref{the:Farrell-Jones_Conjecture_for_homotopy_K-theory:trees};
\item It is closed under passing to overgroups of finite index.
\end{itemize}
This follows from the results above
and~\cite{Bartels-Farrell-Lueck(2011cocomlat)},~\cite{Bartels-Lueck(2012annals)},~\cite{Bartels-Lueck-Reich(2008hyper)},
~\cite{Bartels-Lueck-Reich-Rueping(2012)}, and~\cite{Rueping(2013)}.
\end{remark}
\begin{remark}[Implications of the homotopy $K$-theory version to the $K$-theory version]
\label{rem:Implications_of_the_homotopy_K-theory_version_to_the_K-theory_version}
We have already seen above that the Farrell-Jones Conjecture for $K$-theory
with coefficients in additive categories implies the Farrell-Jones Conjecture for homotopy $K$-theory
with coefficients in additive categories. Next we discuss some cases, where
the Farrell-Jones Conjecture for homotopy $K$-theory with coefficients in the ring $R$
gives implications for the injectivity part of the Farrell-Jones Conjecture for $K$-theory
with coefficients in the ring $R$. These all follow by inspecting for a ring $R$ the following commutative diagram
\[
\xymatrix{H_n^G(\EGF{G}{{\mathcal{VC}}};{\mathbf K}infty_R) \ar[r]
&
H_n^G(\pt;{\mathbf K}infty_{R}) = K_n(RG) \ar[dd]^{K\!H({\mathbf h}^{\infty})}
\\
H_n^G(\EGF{G}{{\mathcal{F}\text{in}}};{\mathbf K}infty_R) \ar[u]^f \ar[d]_{h^{\infty}}
&
\\
H_n^G(\EGF{G}{{\mathcal{F}\text{in}}};{\mathbf H}{\mathbf K}infty_R) \ar[r]
&
H_n^G(\pt;{\mathbf H}{\mathbf K}infty_{R}) = K\!H_n(RG)
}
\]
where the two vertical arrows pointing downwards are induced by the transformation
${\mathbf h}^{\infty} \colon {\mathbf K}infty \to {\mathbf H}{\mathbf K}infty$, the map $f$ is induced by the
inclusion of families ${\mathcal{F}\text{in}} \subseteq {\mathcal{VC}}$ and the two horizontal arrows are the
assembly maps for $K$-theory and homotopy $K$-theory.
Suppose that $R$ is regular and the order of any finite subgroup of $G$ is invertible in $R$.
Then the two left vertical arrows are known to be bijections. This follows
for $f$ from~\cite[Proposition~2.6 on page~686]{Lueck-Reich(2005)}
and for $h^{\infty}$ from~\cite[Lemma~4.6]{Davis-Lueck(1998)} and the fact that $RH$ is regular for all finite subgroups
$H$ of $G$ and hence $K_n(RH) \to K\!H_n(RH)$ is bijective for all $n \in \mathbb{Z}$.
Hence the (split) injectivity
of the lower horizontal arrow implies the (split) injectivity
of the upper horizontal arrow.
Suppose that $R$ is regular. Then the two left vertical arrows are rational bijections. This follows
for $f$ from~\cite[Theorem~0.3]{Lueck-Steimle(2013splitasmb)}.
To show it for $h^{\infty}$ it suffices
because of~\cite[Lemma~4.6]{Davis-Lueck(1998)} to
show that $K_n(RH) \to K\!H_n(RH)$ is rationally bijective for each finite group $H$ and $n
\in \mathbb{Z}$. By the version of the spectral sequence appearing in~\cite[1.3]{Weibel(1989)}
for non-connective $K$-theory, it remains to show that $N^pK_n(RH)$ vanishes rationally
for all $n \in \mathbb{Z}$ and all $p \ge 1$. Since $R[t]$ is regular if $R$ is, this boils down to showing that
$N\!K_n(RH)$ is rationally trivial for any regular ring $R$ and any finite group $H$.
This reduction can also be shown by proving directly that the structure maps
of the system of spectra appearing in the Definition~\ref{def:homotopy_stabilization}
of ${\mathbf H}{\mathbf K}$ are weak homotopy equivalences.
The proof that $N\!K_n(RH)$ is rationally trivial for any regular ring $R$ and any finite group $H$
can be found for instance in~\cite[Theorem~9.4]{Lueck-Steimle(2013splitasmb)}.
Hence the upper horizontal arrow is rationally injective if the lower horizontal arrow is rationally injective.
\end{remark}
\typeout{----------------------- References ------------------------}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Existence of at least $k$ solutions to a fractional $p$-Kirchhoff problem involving singularity and critical exponent}
{\author{Sekhar Ghosh\fnref{label2}}
\ead{[email protected]} }
{\author{Debajyoti Choudhuri\fnref{label2}}
\ead{[email protected]}}
\fntext[label2]{Department of Mathematics, National Institute of Technology Rourkela, India}
\begin{abstract}
We prove the existence of $k$ nonnegative solutions, $k$ being arbitrary, to the following nonlocal elliptic partial differential equation involving a singularity.
\begin{align}
\mathfrak{M}\left(\int_{Q}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}dxdy\right)(-\Delta)_{p}^{s} u&=\frac{\lambda}{|u|^{\gamma-1}u}+|u|^{p_s^*-2}u~\text{in}~\Omega,\nonumber\\
u&=0~\text{in}~\mathbb{R}^N\setminus\Omega,\nonumber
\end{align}
where $\Omega\subset\mathbb{R}^N,\, N\geq2$ is a bounded domain with Lipschitz boundary, $\lambda>0$, $N>sp$, $0<s,\gamma<1$, $(-\Delta)_{p}^{s}$ is the fractional $p$-Laplacian operator for $1<p<\infty$ and $p_s^*=\frac{Np}{N-sp}$ is the critical Sobolev exponent. We will employ a {\it cut-off} argument to obtain the existence of arbitrarily many solutions. Further, by using the Moser iteration technique, we will prove a uniform $L^{\infty}(\bar{\Omega})$ bound for the solutions.
\end{abstract}
\begin{keyword}
Fractional $p$-Laplacian, Critical Exponent, Concentration Compactness, Genus, Symmetric Mountain Pass Theorem, Singularity.
\MSC[2020] 35R11 \sep 35J60 \sep 35J75.
\end{keyword}
\end{frontmatter}
\section{Introduction}
\noindent The aim of this paper is to study the following nonlocal Kirchhoff type elliptic partial differential equation involving a singularity and a critical exponent.
\begin{align*}\label{main p}
\mathfrak{M}\left(\int_{Q}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}dxdy\right)(-\Delta)_{p}^{s} u&=\frac{\lambda}{|u|^{\gamma-1}u}+|u|^{p_s^*-2}u~\text{in}~\Omega,\\
u&=0~\text{in}~\mathbb{R}^N\setminus\Omega,\tag{P}
\end{align*}
where $\Omega\subset\mathbb{R}^N$ is a bounded domain with Lipschitz boundary, $\lambda>0$, $sp<N$, $0<s, \gamma<1$ and $p_s^*=\frac{Np}{N-ps}$ is the critical Sobolev exponent. The Kirchhoff function $\mathfrak{M}$ is supposed to satisfy the following conditions.
\begin{itemize}
\item[$(\mathfrak{m}_1)$] The function $\mathfrak{M}:\mathbb{R}^+\rightarrow\mathbb{R}^+$ is continuous and there exists $\theta>1$ such that $p<p\theta<p_s^*$ and $t\mathfrak{M}(t)\leq\theta\mathcal{M}(t)$ for all $t\geq 0$, where $\mathcal{M}(t)=\int_0^t\mathfrak{M}(s)ds$.
\item[$(\mathfrak{m}_2)$] $\underset{t\geq0}{\inf}\{\mathfrak{M}(t)\}=\mathfrak{m}_0>0$.
\end{itemize}
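For instance (a standard model case, included here only for illustration), the function $\mathfrak{M}(t)=a+b\,t^{\theta-1}$ with $a>0$, $b\geq0$ and $1<\theta<\frac{p_s^*}{p}$ satisfies $(\mathfrak{m}_1)$-$(\mathfrak{m}_2)$: indeed $\mathcal{M}(t)=at+\frac{b}{\theta}t^{\theta}$, so
\[
t\,\mathfrak{M}(t)=at+b\,t^{\theta}\leq\theta\Bigl(at+\frac{b}{\theta}t^{\theta}\Bigr)=\theta\,\mathcal{M}(t)\quad\text{for all }t\geq0,
\]
and $\underset{t\geq0}{\inf}\{\mathfrak{M}(t)\}=a>0$.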
Here, $\mathfrak{M}$ is a degenerate Kirchhoff function if $\mathfrak{M}(0)=0$, otherwise $\mathfrak{M}$ is said to be a non-degenerate Kirchhoff function. We define the fractional $p$-Laplacian as follows.
\begin{eqnarray}
(-\Delta_p)^su(x)&=&C_{N,s}\lim_{\epsilon\rightarrow 0}\int_{\mathbb{R}^N\setminus B_{\epsilon}(x)}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}}dy,\quad\text{for}~x\in\mathbb{R}^N.\nonumber
\end{eqnarray}
Of late, a lot of attention has been paid to elliptic problems involving local/nonlocal operators with singularities. These works are not only important from the mathematical point of view but also have many applications, {\it viz.} in thin obstacle problems \cite{Silvestre2007}, problems on minimal surfaces \cite{Caffarelli2010}, fractional quantum mechanics \cite{Laskin2000}, etc. (refer to \cite{Papageorgiou2019, Bisci2016, Nezza2012} and the references therein). ``{\it The problem draws its motivation from the model presented by Kirchhoff in 1883 as a generalization of the D'Alembert wave equation
$$\rho\frac{\partial^2u}{\partial t^2}-\left(a+b\int_{0}^{l}\left|\frac{\partial u}{\partial x}\right|^2dx\right)\frac{\partial^2u}{\partial x^2}=g(x,u),$$
where $a, b, \rho$ are positive constants and $l$ is the length of the string, the integral term accounting for the change in length due to the vibrations.}'' A detailed account of the fractional counterpart can be found in \cite{Pucci2016}, \cite[Appendix A]{Fiscella2014} and the references therein. For further details on practical applications, one may refer to \cite{Alves2005,Caffarelli2012, Bisci2014} and the references therein. Elliptic problems with a singularity have not only been important but have also posed a tough challenge to the mathematical community. The roots of the problem can be traced back to a celebrated work due to Lazer and McKenna \cite{Lazer1991}, where the authors considered the following problem.
\begin{eqnarray}
-\Delta u&=&p(x)u^{-\gamma},~\text{in}~\Omega\nonumber\\
u&=&0,~\text{on}~\partial\Omega\nonumber
\end{eqnarray}
where $\Omega\subset\mathbb{R}^N$ is a sufficiently regular domain and $p$ is
a sufficiently regular function which is positive in $\overline{\Omega}$. The solution $u$ belongs to $W_0^{1,2}(\Omega)$ if and only if $\gamma<3$. The authors in \cite{Lazer1991} proved that if $\gamma >1$, then $u$ is not in $C^1(\overline{\Omega})$, whereas for $0<\gamma<1$ the solution obtained is a classical solution. Thereafter, a large amount of work on elliptic problems involving a singularity has appeared, in which existence and multiplicity results have been investigated. For a selection of pioneering studies of problems involving a purely singular term, one may refer to \cite{Crandall1977,Canino2017, Boccardo2009} and the references therein. With the development of newer tools in functional analysis, the problems with a singularity also became richer. One such instance is a problem investigated by Giacomoni et al. \cite{Giacomoni2007}. The problem is as follows.
\begin{eqnarray}\label{giaco}
-\Delta_p u&=&\lambda u^{-\gamma}+u^q,~\text{in}~\Omega\nonumber\\
u&=&0,~\text{on}~\partial\Omega\nonumber\\
u&>&0,~\text{in}~\Omega
\end{eqnarray}
where $1<p-1<q\leq p^*-1$, $\lambda>0$ and $0<\gamma<1$. Here the authors proved the existence of two positive solutions. Similar results on the existence and multiplicity of (finitely many) solutions can be found in \cite{Saoudi2017, Haitao2003, Giacomoni2009,Saoudi2019, Mukherjee2016} and the references therein. Recently, Saoudi et al. \cite{Saoudi2019} considered a fractional $p$-Laplacian version of \eqref{giaco} and proved the existence of two solutions to it by using variational methods. Besides this, the authors in \cite{Saoudi2019} used Moser's iteration method to prove that the solutions are in $L^{\infty}$. A $W_0^{s,p}$ versus $C^1$ analysis has also been discussed there.\\
We now focus on some of the works which bear the Kirchhoff term, with or without singularities, and then turn our attention to problems involving a critical exponent. If one considers $\mathfrak{M}\equiv1$ and $\lambda=0$ in \eqref{main p}, then the problem reduces to the following.
\begin{align}\label{crit_prob1}
(-\Delta_p)^s u&=|u|^{p_s^*-2}u,~\text{in}~\Omega\nonumber\\
u&=0~\text{in}~\mathbb{R}^N\setminus\Omega.
\end{align}
The main hurdle with problems involving a critical exponent is the lack of the compact embedding $W_0^{s,p}(\Omega)\hookrightarrow L^{p_s^*}(\Omega)$. Such problems are tackled by the concentration-compactness principle due to Lions \cite{Lions1985, Lions1985a}. The literature pertaining to this type of problem is too vast to be discussed completely in this section. However, the readers may refer to \cite{Mosconi2016,Chu2018,Song2018,Miyagaki2020,Zhang2018} and the references therein. As for the existence of infinitely many solutions to problems with a critical exponent, one may refer to Azorero and Alonso \cite{Garcia1991}, who studied the following problem.
\begin{align}\label{crit_prob2}
-\Delta_p u&=|u|^{p^*-2}u+\lambda |u|^{q-2}u,~\text{in}~\Omega\\
u&=0,~\text{on}~\partial\Omega,
\end{align}
for $1<q<p$ and $\lambda>0$. Here the authors used the Lusternik-Schnirelman theory to guarantee the existence of infinitely many solutions. The problem in \cite{Garcia1991} was further generalized by Li and Zhang \cite{Li2009} with the driving operator being $-\Delta_p-\Delta_q$. The reader may also refer to the work due to Figueiredo \cite{Figueiredo2013}. We now throw some light on the $p$-Kirchhoff problems of the following type, which have been discussed by Khiddi and Sbai \cite{Khiddi2020}.
\begin{align*}
\mathfrak{M}\left(\iint_{\mathbb{R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}dxdy\right)(-\Delta)_p^su&=\lambda H(x)|u|^{q-2}u+ |u|^{p_s^*-2}u,~\text{in}~\Omega\\
u&=0,~\text{in}~\mathbb{R}^N\setminus\Omega,
\end{align*}
where $\lambda>0$, $1<q<p<p_s^*<\infty$. The authors in \cite{Khiddi2020} guaranteed the existence of infinitely many solutions. Such problems have led to generalizations of a few classical results for the case $\mathfrak{M}\equiv1$. In \cite{Xiang2015a}, the authors employed the Fountain and the dual Fountain theorems to guarantee the existence of infinitely many solutions for a symmetric subcritical Kirchhoff problem with a non-degenerate $\mathfrak{M}$ and $p\geq 2$. In \cite{Fiscella2016}, the authors dealt with the case $p=2$ and used the notion of Krasnoselskii's genus (refer to \cite{Rabinowitz1986}) to obtain the existence of infinitely many solutions. Further, in \cite{Xiang2016} the authors reached a similar conclusion, but for a system of PDEs with a subcritical degenerate Kirchhoff function. This is in no way a complete picture of the literature developed so far, as it is vast. What we can do at this point is to direct the attention of the reader to the works which prompted us to take up this problem. The motivation was drawn from the results due to Azorero et al. \cite{Garcia1991} and Khiddi-Sbai \cite{Khiddi2020}. The literature on the existence of infinitely many solutions mainly deals with concave-convex data, which may be sublinear as well as superlinear. Recently, the study in \cite{Ghosh2019}, the first of its kind, guarantees the existence of infinitely many solutions to a problem involving a singularity. Motivated by the above studies, in this article we will prove, by employing a cut-off technique, that the problem \eqref{main p} possesses a sequence of solutions whose space norms converge to zero. It is noteworthy to mention here that the symmetric mountain pass theorem plays a key role in studying the existence of infinitely many solutions to a PDE. The symmetric mountain pass theorem has two types of conclusions producing a sequence of solutions: one is for sublinear data, in which the space norms of the solutions converge to zero, and the other is for superlinear data, in which the space norms of the solutions go to infinity. The major hurdles for us were to figure out a way to tackle the singular term as well as the critical exponent term, which is superlinear in the problem \eqref{main p}, and then to show that the sequence of solutions converges to zero in the space norm. To add to the difficulties, the functional also fails to be coercive. The main result proved in this article is the following.
\begin{theorem}\label{thm main}
Let $\mathfrak{m}_1$-$\mathfrak{m}_2$ hold and $0<\gamma<1$. Then for any $k\in\mathbb{N}$ (arbitrarily large), there exists $\Lambda>0$ such that whenever $0<\lambda<\Lambda$, the problem \eqref{main p} has at least $k$ non-negative weak solutions $\{u_1,u_2,\cdots, u_k,\cdots\}$ such that $J_{\lambda}(u_n)<0$ for all $n=1,2,\cdots,k,\cdots$. In addition, as $k$ increases, both $|J_{\lambda}(u_k)|$ and $\|u_k\|$ decrease. Further, each solution to \eqref{main p} belongs to $L^{\infty}(\bar{\Omega})$.
\end{theorem}
\begin{remark}
We will now make the following two remarks.
\begin{enumerate}
\item The conclusion of Theorem \ref{thm main} holds true even if we consider a subcritical but superlinear exponent instead of $p_s^*$.
\item It will be interesting to study whether the conclusion of Theorem \ref{thm main} holds true if we take a degenerate Kirchhoff function, i.e. if $\mathfrak{m}_0=0$.
\end{enumerate}
\end{remark}
\section{Preliminaries and Weak Formulations.}
\noindent In this section we will first recall some properties of the fractional Sobolev spaces. Let $\Omega$ be a bounded domain in $\mathbb{R}^N, N\geq2$ with Lipschitz boundary and define $Q=\mathbb{R}^{2N}\setminus((\mathbb{R}^N\setminus\Omega)\times(\mathbb{R}^N\setminus\Omega))$. Consider the Banach space $(X, \|\cdot\|_X)$ such that
\begin{eqnarray}
X&=&\left\{u:\mathbb{R}^N\rightarrow\mathbb{R}~\text{is measurable}, u|_{\Omega}\in L^p(\Omega) ~\text{and}~\frac{|u(x)-u(y)|}{|x-y|^{\frac{N+ps}{p}}}\in L^{p}(Q)\right\}
\end{eqnarray}
with respect to the well known Gagliardo norm
\begin{eqnarray}
\|u\|_X&=&\|u\|_{L^p(\Omega)}+\left(\int_{Q}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}dxdy\right)^{\frac{1}{p}}.\nonumber
\end{eqnarray}
Let $X_0$ be the subspace of $X$ defined as
\begin{eqnarray}
X_0&=&\left\{u\in X: u=0 ~\text{a.e. in}~ \mathbb{R}^N\setminus\Omega\right\}.\nonumber
\end{eqnarray}
Then the space $(X_0, \|\cdot\|)$ is a Banach space \cite{Servadei2012,Servadei2013} with respect to the norm
\begin{eqnarray}
\|u\|&=&\left(\int_{Q}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}dxdy\right)^{\frac{1}{p}}.\nonumber
\end{eqnarray}
\noindent The best Sobolev constant is defined as
\begin{equation}\label{sobolev const}
S=\underset{u\in X_0\setminus\{0\}}{\inf}\cfrac{\int_{Q}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}dxdy}{\left(\int_\Omega|u|^{p_s^*}dx\right)^{\frac{p}{p_s^*}}}.
\end{equation}
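In particular, directly from \eqref{sobolev const} one obtains, for every $u\in X_0$,
\[
\left(\int_\Omega|u|^{p_s^*}dx\right)^{p/p_s^*}\leq S^{-1}\|u\|^{p},
\qquad\text{equivalently}\qquad
\int_\Omega|u|^{p_s^*}dx\leq S^{-p_s^*/p}\|u\|^{p_s^*},
\]
an elementary consequence which is used repeatedly in the estimates below.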
\noindent The next lemma, due to \cite{Servadei2012,Servadei2013}, states the embedding properties of the space $X_0$ into Lebesgue spaces.
\begin{lemma}\label{embedding}
If $\Omega$ has a Lipschitz boundary and $N>ps$, then the embedding $X_0 \hookrightarrow L^{q}(\Omega) $ for $q\in[1,p_s^*]$ is continuous and is compact for $q\in[1,p_s^*)$, where $p_s^*=\frac{Np}{N-ps}$ is the critical Sobolev exponent.
\end{lemma}
\noindent A function $u\in X_0$ is a weak solution to the problem (\ref{main p}), if $\varphi u^{-\gamma}\in L^1(\Omega)$ and
\begin{align}\label{weak p}
\mathfrak{M}(\|u\|^p)\int_{Q}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+ps}}dxdy-\int_{\Omega}\frac{\lambda}{|u|^{\gamma-1}u}\varphi-\int_{\Omega}|u|^{p_s^*-2}u\varphi=0
\end{align}
for each $\varphi\in X_0$. The energy functional $J_\lambda\colon X_0\rightarrow\mathbb{R}$ associated to the problem \eqref{main p} is defined as
\begin{align}\label{energy p}
J_{\lambda}(u) =
\frac{1}{p}\mathcal{M}(\|u\|^p)-\frac{\lambda}{1-\gamma}\int_{\Omega}|u|^{1-\gamma}dx-\frac{1}{p_s^*}\int_{\Omega}|u|^{p_s^*}dx.
\end{align}
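For orientation, the following formal computation (carried out where $J_{\lambda}$ is differentiable) shows how \eqref{energy p} is linked to the weak formulation \eqref{weak p}: by the chain rule,
\[
\frac{d}{d\tau}\Bigl[\frac{1}{p}\mathcal{M}\bigl(\|u+\tau\varphi\|^{p}\bigr)\Bigr]_{\tau=0}
=\frac{1}{p}\,\mathfrak{M}(\|u\|^{p})\,\frac{d}{d\tau}\Bigl[\|u+\tau\varphi\|^{p}\Bigr]_{\tau=0}
=\mathfrak{M}(\|u\|^{p})\int_{Q}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+ps}}\,dxdy,
\]
which is precisely the Kirchhoff term appearing in \eqref{weak p}; the remaining two terms in \eqref{weak p} arise analogously from the last two integrals in \eqref{energy p}.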
Observe that the usual variational techniques do not apply directly to problems with a singular term and a critical Sobolev exponent: the former restricts the differentiability of $J_{\lambda}$ and the latter causes the lack of a compact embedding (refer to Lemma \ref{embedding}). Hence we will use a cut-off technique to accomplish our goal. Precisely, we will first construct a $C^1$ functional $I_{\lambda}$ such that the critical points of the functionals $J_{\lambda}$ and $I_{\lambda}$ coincide. Then, on applying the concentration compactness principle, we will obtain a certain range of {\it energy} in which the functional satisfies the Palais-Smale condition. Finally, by using Kajikiya's symmetric mountain pass theorem \cite{Kajikiya2005}, the existence of $k$ solutions will be achieved. One can prove the G\^{a}teaux differentiability of $J_{\lambda}$ with a slight modification of \cite[Lemma 6.2]{Saoudi2019}.\\
Let us now consider the singular PDE
\begin{align}\label{squasinna}
{\mathfrak{m}_{0}}(-\Delta_p)^s w&=\frac{\lambda}{|w|^{\gamma-1}w}~\text{in}~\Omega,\nonumber\\
w&=0~\text{in}~\mathbb{R}^N\setminus\Omega.
\end{align}
The existence of a weak solution to this PDE can be obtained as a SOLA (Solution Obtained as Limit of Approximations) by considering a sequence of perturbed PDEs. We now have the following lemma due to \cite{Canino2017}.
\begin{lemma}
Let $\Omega\subset\mathbb{R}^N$ be a bounded domain with Lipschitz boundary. Then for $0<\gamma<1$ and $\lambda>0$ the problem \eqref{squasinna} possesses a unique solution, $\underline{u}_{\lambda}\in W_0^{s, p}(\Omega)$, such that for every $K\subset\subset\Omega$, $\underset{K}{\operatorname{ess\,inf}}\,\underline{u}_\lambda>0$.
\end{lemma}
\noindent We now consider the following cut-off problem
\begin{align*}\label{cut p}
\mathfrak{M}\left(\int_{Q}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}dxdy\right)(-\Delta)_{p}^{s} u&=\lambda g(u)+f(u)~\text{in}~\Omega,\\
u&=0~\text{in}~\mathbb{R}^N\setminus\Omega,\tag{$P'$}
\end{align*}
\noindent where, the functions $g$ and $f$ are defined as below. Let $\underline{u}_{\lambda}$ be the solution to \eqref{squasinna}, then define
\[
{g}(t) =
\begin{cases}
\frac{1}{|t|^{\gamma-1}t}, &~\text{if}~|t|>\underline{u}_{\lambda}\\
\frac{1}{|\underline{u}_{\lambda}|^{\gamma-1}\underline{u}_{\lambda}},&~\text{if}~|t|\leq \underline{u}_{\lambda}
\end{cases}\] and
\[
{f}(t) =
\begin{cases}
|t|^{p_s^*-2}t, &~\text{if}~|t|>\underline{u}_{\lambda}\\
\underline{u}_{\lambda}^{p_s^*-2}\underline{u}_{\lambda},&~\text{if}~|t|\leq \underline{u}_{\lambda}.
\end{cases}\]
Let $G(s)=\int_{0}^{s}{g}(t)dt$ and $F(s)=\int_{0}^{s}{f}(t)dt$. Therefore, the associated cut-off functional $I_\lambda\colon X_0\rightarrow\mathbb{R}$ corresponding to \eqref{cut p} is defined as
\begin{align}\label{cut p energy}
I_{\lambda}(u) =
\frac{1}{p}\mathcal{M}(\|u\|^p)-\lambda\int_{\Omega}G(u)dx-\int_{\Omega}F(u)dx.
\end{align}
Note that the functional $I_{\lambda}$ is even. Moreover, one can proceed along similar lines as in \cite[Lemma 6.4]{Saoudi2019} to conclude that $I_\lambda$ is $C^1$. Since the functional $J_{\lambda}$ is G\^{a}teaux differentiable, the set of critical points of $I_{\lambda}$ coincides with that of $J_{\lambda}$.
We aim to guarantee that the problem \eqref{cut p} possesses arbitrarily many solutions.\\
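For later use we record the following elementary bounds, which follow directly from the definition of $g$ (here $\underline{u}_{\lambda}=\underline{u}_{\lambda}(x)>0$ is understood pointwise): since $|g(t)|=|t|^{-\gamma}$ for $|t|>\underline{u}_{\lambda}$ and $|g(t)|=\underline{u}_{\lambda}^{-\gamma}\leq|t|^{-\gamma}$ for $0<|t|\leq \underline{u}_{\lambda}$, one has
\[
|g(t)\,t|\leq|t|^{1-\gamma}
\qquad\text{and}\qquad
|G(t)|\leq\int_{0}^{|t|}\tau^{-\gamma}\,d\tau=\frac{|t|^{1-\gamma}}{1-\gamma}
\qquad\text{for all }t\in\mathbb{R}.
\]
Bounds of this type are what allow the terms involving $g$ and $G$ to be estimated by $|u|^{1-\gamma}$ in the computations below.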
A function $u\in X_0$ is said to be a weak solution to the problem (\ref{cut p}), if $\varphi u^{-\gamma}\in L^1(\Omega)$ and
\begin{align}\label{cut p weak}
\mathfrak{M}(\|u\|^p)\int_{Q}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+ps}}dxdy-\lambda\int_{\Omega}g(u)\varphi-\int_{\Omega}f(u)\varphi=0
\end{align}
for each $\varphi\in X_0$. Now for any $u\in X_{0}$, we have
$$
\langle I_{\lambda}^{'}(u),\varphi\rangle=
\mathfrak{M}(\|u\|^{p})\int_{Q}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+sp}}dxdy-\lambda\int_{\Omega}g(u)\varphi dx-\int_{\Omega}f(u)\varphi dx,$$
for every $\varphi\in X_{0}$. Thus, the weak solutions of problem \eqref{cut p} are precisely the critical points of the energy functional $I_{\lambda}$.
\section{Sufficient energy range for the Palais-Smale Condition.}
\noindent The aim of this section is to see whether the functional $I_{\lambda}$ satisfies the Palais-Smale $(PS)_c$ condition up to some finite energy level $c$ or not. We also aim to apply the symmetric mountain pass theorem to the functional $I_{\lambda}$. Observe that, due to the presence of the critical exponent, the functional $I_{\lambda}$ is not bounded from below. Moreover, due to the lack of a compact embedding of $X_0$ in $L^{p^*_s}(\Omega)$, one cannot verify the $(PS)$ condition directly. However, we will look for an energy range $(-\infty,c]$ such that the $(PS)_c$ condition holds true. Before we venture out to hunt for the energy range $(-\infty,c]$, we first state the $(PS)_c$ condition for $I_{\lambda}$.
\begin{definition}[$(PS)_c$ condition for $I_{\lambda}$] Let $c\in \mathbb{R}$. A sequence $\{u_{n}\}\subset X_0$ such that $I_{\lambda}(u_{n})\rightarrow c$ and $I_{\lambda}^{'}(u_{n})\rightarrow0$ in $X_0^{*}$, where $X_0^{*}$ refers to the dual of $X_0$, is said to be a $(PS)_c$ sequence for $I_{\lambda}$. Moreover, $I_{\lambda}$ satisfies the $(PS)_{c}$ condition if every $(PS)_{c}$ sequence for $I_{\lambda}$ possesses a subsequence which converges strongly in $X_0$.
\end{definition}
\begin{lemma}\label{ps limit}
Let the assumptions $\mathfrak{m}_{1}$-$\mathfrak{m}_{2}$ hold true. Then there exist $\alpha$ with $1-\gamma<p\theta<\alpha<p_{s}^{*}$ and $K>0$ such that, for every $\lambda>0$, the functional $I_{\lambda}$ satisfies the $(PS)_c$ condition for every $c<c_{*}$, where $c_{*}=\left(\frac{1}{\alpha}-\frac{1}{p_{s}^*}\right)(\mathfrak{m}_{0}S)^{N/ps}-K\lambda^{p_{s}^{*}/(p_{s}^{*}-1+\gamma)}$.
\end{lemma}
\begin{proof}
Consider a sequence $\{u_{n}\}\subset X_0$ such that
\begin{align}\label{ps 1}
I_{\lambda}(u_{n})\rightarrow c~\text{and}~\langle I_{\lambda}^{'}(u_{n}), v\rangle\rightarrow0
\end{align}
for every $v\in X_0$. Now choose $\alpha$ such that $1-\gamma<p\theta<\alpha<p_{s}^{*}$. Then by the Sobolev inequality together with $\mathfrak{m}_{1}$-$\mathfrak{m}_{2}$ we get
\begin{align}\label{ps bdd}
c+o(\| u_{n}\|)&=I_{\lambda}(u_{n})-\frac{1}{\alpha}\langle I_{\lambda}^{'}(u_{n}),u_{n}\rangle\nonumber\\
&\geq\frac{1}{p}\mathcal{M}(\|u_{n}\|^{p})-\frac{1}{\alpha}\mathfrak{M}(\|u_{n}\|^{p})\|u_{n}\|^{p}-\lambda\int_{\Omega}\Bigl(G(u_n)-\frac{1}{\alpha}g(u_n)u_n\Bigr)-\int_{\Omega}\Bigl(F(u_n)-\frac{1}{\alpha}f(u_n)u_n\Bigr)\nonumber\\
&\geq\left(\frac{1}{\theta p}-\frac{1}{\alpha}\right)\mathfrak{M}(\|u_{n}\|^{p})\|u_{n}\|^{p}-\lambda\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)\int_{\Omega}|u_n|^{1-\gamma}-\left(\frac{1}{p_s^*}-\frac{1}{\alpha}\right)\int_{\Omega}|u_n|^{p_s^*}\nonumber\\
&\geq\left(\frac{1}{\theta p}-\frac{1}{\alpha}\right)\mathfrak{M}(\|u_{n}\|^{p})\|u_{n}\|^{p}-\lambda\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)S^{-\frac{1-\gamma}{p_s^*}}\|u_n\|^{1-\gamma}+\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)\|u_n\|^{p_s^*}\nonumber\\
&\geq\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)\|u_n\|^{p_s^*}-\lambda\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)S^{-\frac{1-\gamma}{p_s^*}}\|u_n\|^{1-\gamma}
\end{align}
Since $p_{s}^{*}>1>1-\gamma$, it is clear from the last inequality in \eqref{ps bdd} that the sequence $\{u_{n}\}$ is bounded in $X_0$. Since the space $X_0$ is reflexive, there exists a subsequence of $\{u_{n}\}$ (still denoted by $\{u_{n}\}$) and $u\in X_0$ such that
\begin{align*}
&u_{n}\rightharpoonup u ~\text{weakly in}~ X_0,\\
&u_{n}\rightarrow u ~\text{strongly in}~ L^{r}(\Omega) ~\text{for}~ 1\leq r<p_{s}^{*},\\
&u_{n}(x)\rightarrow u(x) ~\text{a. e. in}~ \Omega.
\end{align*}
Moreover, by the weak convergence we have
\begin{align*}
\lim\limits_{n\rightarrow+\infty}&\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(v(x)-v(y))}{|x-y|^{N+sp}}dxdy\\
&=\int_{Q}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+sp}}dxdy
\end{align*}
Again, on applying Lemma \ref{embedding}, one can obtain
\begin{align*}
\lim\limits_{n\rightarrow+\infty}\int_{\Omega}f(u_n)vdx=\int_{\Omega}f(u)vdx
\end{align*}
We now claim that
\begin{align}\label{conv sing}
\lim\limits_{n\rightarrow+\infty}\int_{\Omega}g(u_n)vdx=\int_{\Omega}g(u)vdx
\end{align}
{\bf{Proof of the claim:}} To prove the claim we first show that
\begin{align*}
\lim\limits_{n\rightarrow+\infty}\int_{\Omega}vu_n^{-\gamma}dx=\int_{\Omega}vu^{-\gamma}dx.
\end{align*}
\noindent Let us denote the set $A_n=\{x\in\Omega:u_n(x)=0\}$. Since $\varphi u_n^{-\gamma}\in L^1(\Omega)$, we have that the Lebesgue measure of $A_n$ is zero, i.e. $|A_n|=0$. Thus, by the sub-additivity of the Lebesgue measure, $|\bigcup_n A_n|=0$. Let $x\in\Omega\setminus D$ be such that $u(x)=0$, where $D$ is a set with $|D|<\delta$, obtained from Egorov's theorem, outside of which $u$ is the uniform limit of (a subsequence of $\{u_n\}$, still denoted by $\{u_n\}$) $u_n$. Further, define $$A_{m,n}=\{x\in\Omega\setminus D:|u_n(x)|<\frac{1}{m}\}.$$
Note that due to the uniform convergence, for a fixed $n$ we have $|A_{m,n}|\rightarrow 0$ as $m\rightarrow\infty$. Now consider
$$\underset{m,n\in\mathbb{N}}{\bigcup}A_{m,n}=\underset{n\geq 1}{\bigcup}\underset{m\geq n}{\bigcap}A_{m,n}.$$
Observe that, for a fixed $n$, $$\Bigl|\underset{m\geq n}{\bigcap}A_{m,n}\Bigr|=\underset{m\rightarrow\infty}{\lim}|A_{m,n}|=0.$$
The above argument is true for each fixed $n$ and thus $$|\underset{m,n\in\mathbb{N}}{\bigcup}A_{m,n}|=0.$$
Therefore, $|\{x\in\Omega\setminus D: u_n(x)\rightarrow u(x)=0\}|=0$. Hence the claim.\\\\
Therefore, we obtain $\langle I_{\lambda}^{'}(u_{n}), \varphi\rangle\rightarrow 0$ for every $\varphi\in X_0$. Hence on passing the limit as $n\rightarrow+\infty$, we get
$$\langle I_{\lambda}^{'}(u), \varphi\rangle=0$$
for every $\varphi\in X_0$. Thus $u$ is a weak solution to the problem \eqref{cut p}.
Now from the concentration-compactness principle \cite{Lions1985,Lions1985a} there exists a pair of Radon measures $\mu, \nu$ such that
\begin{align*}
\int_{\mathbb{R}^N}\frac{|u_{n}(x)-u_{n}(y)|^{p}}{|x-y|^{N+sp}}dy&\rightharpoonup \mu~\text{and}~ |u_{n}|^{p_{s}^{*}}\rightharpoonup\nu
\end{align*}
Moreover, there exist an at most countable set $\Upsilon$, points $\{x_i\}_{i\in\Upsilon}\subset\Omega$ and numbers $\mu_{x_{i}}, \nu_{x_{i}}\in(0,\infty)$ such that
\begin{align}\label{concentration}
\begin{split}
\mu&\geq\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}dy+\sum_{i\in\Upsilon}\mu_{x_{i}}\delta_{x_{i}},\\
\nu&=|u|^{p_{s}^{*}}+\sum_{i\in\Upsilon}\nu_{x_{i}}\delta_{x_{i}},\\
\mu_{x_{i}}&\geq S\nu_{x_i}^{p/p_{s}^{*}},
\end{split}
\end{align}
where $\delta_{x_i}$ is the Dirac mass at $x_{i}\in \mathbb{R}^{N}.$ Fix an $i\in\Upsilon$ and let $\epsilon>0$ be given. Now consider the following smooth family of cut-off functions $\varphi_{i,\epsilon}$ such that $0\leq\varphi_{i,\epsilon}\leq 1, \varphi_{i,\epsilon}=1$ for $|x-x_{i}|\leq\epsilon, \varphi_{i,\epsilon}=0$ for $|x-x_{i}|\geq2\epsilon$ and $|\nabla\varphi_{i,\epsilon}|\leq\frac{2}{\epsilon}.$ Clearly, the sequence $\{u_{n}\varphi_{i,\epsilon}\}$ is bounded in $X_0$. Therefore, by taking $\varphi=u_{n}\varphi_{i,\epsilon}$ as the test function in $\langle I_{\lambda}^{'}(u_{n}), \varphi\rangle\rightarrow 0$, we get
\begin{align}\label{concent 1}
\mathfrak{M}(\|u_{n}\|^{p})&\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(u_n(x)\varphi_{i,\epsilon}(x)-u_n(y)\varphi_{i,\epsilon}(y))}{|x-y|^{N+sp}}dxdy\nonumber\\
&=\lambda\int_{\Omega}g(u_n)u_n\varphi_{i,\epsilon}dx+\int_{\Omega}f(u_n)u_n\varphi_{i,\epsilon}dx+o_{n}(1)
\end{align}
We will now estimate every term in \eqref{concent 1}. Observe that
\begin{align}\label{concent 2}
&\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(u_n(x)\varphi_{i,\epsilon}(x)-u_n(y)\varphi_{i,\epsilon}(y))}{|x-y|^{N+sp}}dxdy\nonumber\\
&=\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p}\varphi_{i,\epsilon}(x)}{|x-y|^{N+sp}}dxdy+\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\epsilon}(x)-\varphi_{i,\epsilon}(y))u_n(y)}{|x-y|^{N+sp}}dxdy
\end{align}
On applying \eqref{concentration}, the first term in \eqref{concent 2} gives that
\begin{align*}
\lim_{n\rightarrow+\infty}\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p}\varphi_{i,\epsilon}(x)}{|x-y|^{N+sp}}dxdy=\int_{\Omega}\varphi_{i,\epsilon}d\mu\geq\int_{Q}\frac{|u(x)-u(y)|^{p}\varphi_{i,\epsilon}(x)}{|x-y|^{N+sp}}dxdy+\mu_{x_{i}}
\end{align*}
Now, on taking the limit $\epsilon\rightarrow0$, we get
\begin{align}\label{concent 2.1}
\lim_{\epsilon\rightarrow0}[\lim_{n\rightarrow+\infty}\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p}\varphi_{i,\epsilon}(x)}{|x-y|^{N+sp}}dxdy]\geq\mu_{x_{i}}
\end{align}
Again, from the H\"{o}lder inequality, we have
\begin{align}\label{concent 2.2}
&\left|\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\epsilon}(x)-\varphi_{i,\epsilon}(y))u_n(y)}{|x-y|^{N+sp}}dxdy\right|\nonumber\\
&\leq\left(\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^p}{|x-y|^{N+sp}}dxdy\right)^{\frac{p-1}{p}}\left(\int_{Q}\frac{|\varphi_{i,\epsilon}(x)-\varphi_{i,\epsilon}(y)|^p|u_{n}(y)|^p}{|x-y|^{N+sp}}dxdy\right)^{\frac{1}{p}}\nonumber\\
&\leq{C}\left(\int_{Q}\frac{|\varphi_{i,\epsilon}(x)-\varphi_{i,\epsilon}(y)|^p|u_{n}(y)|^p}{|x-y|^{N+sp}}dxdy\right)^{\frac{1}{p}}
\end{align}
Now from \cite[Lemma 2.3]{Xiang2017}, we have
\begin{align*}
\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow+\infty}\int_{Q}\frac{|\varphi_{i,\epsilon}(x)-\varphi_{i,\epsilon}(y)|^p|u_{n}(y)|^p}{|x-y|^{N+sp}}dxdy=0
\end{align*}
Hence, we can conclude that
\begin{align}\label{concent 2.3}
\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow+\infty}\int_{Q}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\epsilon}(x)-\varphi_{i,\epsilon}(y))u_n(y)}{|x-y|^{N+sp}}dxdy=0
\end{align}
Further from \eqref{concentration}, we have
\begin{align*}
\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow+\infty}\int_{\Omega}|u_{n}|^{p_{s}^{*}}\varphi_{i,\epsilon}dx&=\lim_{\epsilon\rightarrow0}[\int_{\Omega}|u|^{p_{s}^{*}}\varphi_{i,\epsilon}dx+\nu_{x_{i}}]=\nu_{x_{i}}.
\end{align*}
This implies
\begin{align}\label{concent 3}
\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow+\infty}\int_{\Omega}f(u_n)u_{n}\varphi_{i,\epsilon}dx&=\lim_{\epsilon\rightarrow0}\left[\int_{\Omega}f(u)u\varphi_{i,\epsilon}dx+\nu_{x_{i}}\right]=\nu_{x_{i}}.
\end{align}
Now on using $\mathfrak{m}_1$-$\mathfrak{m}_2$ and $u_{n}\rightarrow u$ in $L^{r}(\Omega)$ for $1\leq r<p_{s}^{*}$ we obtain
$$\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow+\infty}\int_{\Omega}|u_n|^{1-\gamma}\varphi_{i,\epsilon}dx=0.$$
From this one can obtain
\begin{align}\label{concent 3.1}
\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow+\infty}\int_{\Omega}g(u_n)u_n\varphi_{i,\epsilon}dx=0.
\end{align}
Again, on using the boundedness of $\{u_{n}\}$, one can say that, up to a subsequence, $\|u_{n}\|\rightarrow t_{0}$ for some $t_0\geq0$. Therefore, by using the continuity of the function $\mathfrak{M}$, we conclude that $\mathfrak{M}(\|u_{n}\|^{p})\rightarrow\mathfrak{M}(t_{0}^{p})$. Hence, from the equations \eqref{concent 1}-\eqref{concent 3.1}, we obtain
\begin{align}\label{concent 4}
0\geq \mathfrak{M}(t_{0}^{p})\mu_{x_{i}}-\nu_{x_{i}}.
\end{align}
Moreover, we have, $\mu_{x_{i}}\geq S\nu_{x_{i}}^{p/p_{s}^{*}}$ from \eqref{concentration} and $\mathfrak{M}(t_{0}^{p})\geq \mathfrak{m}_{0}$. Thus we deduce that either one of the following must hold.
\begin{align}\label{concent 5}
\nu_{x_{i}}=0~\text{or}~\nu_{x_{i}}^{ps/N}\geq\mathfrak{m}_{0}S.
\end{align}
If $\nu_{x_{i}}=0$ then we are done. Let us assume
\begin{align}\label{concent 6}
\nu_{x_{i}}^{ps/N}\geq\mathfrak{m}_{0}S
\end{align}
for some $i$. Then we have
\begin{align}\label{concent 6.1}
c+o(1)&=I_{\lambda}(u_{n})-\frac{1}{\alpha}I_{\lambda}^{'}(u_{n})u_{n}\nonumber\\
&\geq\mathfrak{m}_{0}\left(\frac{1}{\theta p}-\frac{1}{\alpha}\right)\|u_{n}\|^{p}-\lambda\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)\int_{\Omega}|u_n|^{1-\gamma}+\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)\int_{\Omega}|u_n|^{p_s^*}\nonumber\\
&\geq\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)\left[\int_{\Omega}|u|^{p_{s}^{*}}dx+\sum_{i\in\Upsilon}\nu_{x_i}\right]-\lambda\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)\int_{\Omega}|u_n|^{1-\gamma}\nonumber\\
&\geq\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)|u|_{p_s^*}^{p_{s}^{*}}-\lambda\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)|u_n|_{p_s^*}^{1-\gamma}+\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)\sum_{i\in\Upsilon}\nu_{x_i}
\end{align}
Now consider the function $\psi:x\mapsto ax^{p_{s}^{*}}-\lambda bx^{1-\gamma}$, where $a=\left(\frac{1}{\alpha}-\frac{1}{p_s^*}\right)$ and $b=\left(\frac{1}{1-\gamma}-\frac{1}{\alpha}\right)$. The function $\psi$ attains its absolute minimum over $x>0$ at $x_0=\left(\frac{b\lambda(1-\gamma)}{ap_s^*}\right)^{\frac{1}{p_s^*-1+\gamma}}>0$ and $\psi(x)\geq-K\lambda^{p_{s}^{*}/(p_{s}^{*}-1+\gamma)}$ for all $x>0$, where $$K=b^{\frac{p_s^*}{p_s^*-1+\gamma}}a^{\frac{\gamma-1}{p_s^*-1+\gamma}}\left(\left(\frac{1-\gamma}{p_s^*}\right)^{\frac{1-\gamma}{p_s^*-1+\gamma}}-\left(\frac{1-\gamma}{p_s^*}\right)^{\frac{p_s^*}{p_s^*-1+\gamma}}\right)>0.$$
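To spell out the computation behind these formulas (a routine calculus check for the auxiliary function $\psi$ just introduced): $\psi'(x)=ap_{s}^{*}x^{p_{s}^{*}-1}-\lambda b(1-\gamma)x^{-\gamma}$ vanishes exactly when $x^{p_{s}^{*}-1+\gamma}=\frac{\lambda b(1-\gamma)}{ap_{s}^{*}}$, which gives the stated $x_0$, and substituting back,
\[
\psi(x_0)=x_0^{1-\gamma}\Bigl(a\,x_0^{p_s^*-1+\gamma}-\lambda b\Bigr)
=-\lambda b\,x_0^{1-\gamma}\Bigl(1-\frac{1-\gamma}{p_s^*}\Bigr)
=-K\lambda^{p_{s}^{*}/(p_{s}^{*}-1+\gamma)},
\]
since $x_0^{1-\gamma}=\bigl(\frac{\lambda b(1-\gamma)}{ap_s^*}\bigr)^{(1-\gamma)/(p_s^*-1+\gamma)}$ and $1+\frac{1-\gamma}{p_s^*-1+\gamma}=\frac{p_s^*}{p_s^*-1+\gamma}$.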
Hence, on passing the limit as $n\rightarrow+\infty$ in \eqref{concent 6.1}, we have
$$c\geq\left(\frac{1}{\alpha}-\frac{1}{p_{s}^*}\right)\sum_{i\in\Upsilon}\nu_{x_i}-K\lambda^{p_{s}^{*}/(p_{s}^{*}-1+\gamma)}.$$
Now from \eqref{concent 6} we get
$$c\geq\left(\frac{1}{\alpha}-\frac{1}{p_{s}^*}\right)(\mathfrak{m}_{0}S)^{N/ps}-K\lambda^{p_{s}^{*}/(p_{s}^{*}-1+\gamma)}.$$
This is a contradiction to our assumption that
$$c<c_{*}=\left(\frac{1}{\alpha}-\frac{1}{p_{s}^*}\right)(\mathfrak{m}_{0}S)^{N/ps}-K\lambda^{p_{s}^{*}/(p_{s}^{*}-1+\gamma)}.$$
Hence, the set $\Upsilon$ is empty, i.e. there does not exist any concentration point $x_i$ in the decomposition \eqref{concentration}, and we have
$$\int_{\Omega}|u_{n}|^{p_{s}^{*}}dx\rightarrow\int_{\Omega}|u|^{p_{s}^{*}}dx.$$
From \eqref{ps 1} with $v=u_n$, we deduce that
\begin{align}\label{concent 7}
\lim_{n\rightarrow+\infty}\mathfrak{M}(\|u_{n}\|^{p})\|u_{n}\|^{p}=\lambda\int_{\Omega}g(u)udx+\int_{\Omega}f(u)udx
\end{align}
and
\begin{align}\label{concent 8}
\mathfrak{M}(t_{0}^{p})\|u\|^{p}=\lambda\int_{\Omega}g(u)udx+\int_{\Omega}f(u)udx
\end{align}
Thus from \eqref{concent 7} and \eqref{concent 8}, we get
$$\lim_{n\rightarrow+\infty}\mathfrak{M}(\|u_{n}\|^{p})\|u_{n}\|^{p}=\mathfrak{M}(t_{0}^{p})\|u\|^{p}.$$
Hence, we can conclude that $\|u_{n}\|^{p}\rightarrow\|u\|^{p}$ and that $u_{n}\rightarrow u$ strongly in $X_0$. This completes the proof.
\end{proof}
\section{Auxiliary Results and Proof of Main Theorem. }
\noindent In this section, we will establish the existence of arbitrarily many solutions to the problem \eqref{cut p}. Prior to that let us define some useful tools to be used to guarantee the existence of solutions.
\begin{definition}[\bf{Genus}\cite{Rabinowitz1986}]\label{genus}
Let $X$ be a Banach space and $A\subset X$. A set $A$ is said to be symmetric if $u\in A$ implies $(-u)\in A$. Let $A$ be a closed, symmetric subset of $X$ such that $0\notin A$. We define the genus $\gamma(A)$ of $A$ as the smallest integer $k$ such that there exists an odd continuous mapping from $A$ to $\mathbb{R}^{k}\setminus\{0\}$. We define $\gamma(A)=\infty$, if no such $k$ exists.
\end{definition}
\noindent The next lemma, due to \cite{Rabinowitz1986}, collects some properties of the genus.
Let $\Gamma$ denote the family of all closed subsets of $X\setminus\{0\}$ which are symmetric with respect to the origin.
\begin{lemma}\label{lemma genus}
Let $A, B\in\Gamma$. Then
\begin{enumerate}
\item $A\subset B\Rightarrow\gamma(A)\leq\gamma(B)$.
\item If $A$ and $B$ are homeomorphic via an odd map, then $\gamma(A)=\gamma(B)$.
\item $\gamma(S^{N-1})=N$, where $S^{N-1}$ is the sphere in $\mathbb{R}^{N}$.
\item $\gamma(A\cup B)\leq \gamma(A)+\gamma(B)$.
\item $\gamma(A)<\infty\Rightarrow\gamma(A\setminus B)\geq\gamma(A)-\gamma(B)$.
\item For every compact subset $A$ of $X$, $\gamma(A)<\infty$ and there exists $\delta>0$ such that $\gamma(A)=\gamma(N_{\delta}(A))$, where $N_{\delta}(A)=\{x\in X:d(x,A)\leq\delta\}.$
\item If $Y\subset X$ is a subspace of $X$ such that $\operatorname{codim}(Y)=k$ and $\gamma(A)>k$, then $A\cap Y\neq\emptyset.$
\end{enumerate}
\end{lemma}
\noindent We will use the following version of the symmetric Mountain Pass Theorem due to Kajikiya \cite{Kajikiya2005}.
\begin{theorem}\label{sym mountain}
Let $X$ be an infinite dimensional Banach space and $I\in C^1(X,\mathbb{R})$ satisfies the following
\begin{itemize}
\item[(i)] $I$ is even, bounded below, $I(0)=0$ and $I$ satisfies the $(PS)_c$ condition.
\item[(ii)] For each $n\in\mathbb{N}$, there exists an $A_n\in\Gamma_n$ such that $\sup\limits_{u\in A_n}I(u)<0$, where $\Gamma_n$ denotes the family of closed symmetric subsets $A$ of $X\setminus\{0\}$ with $\gamma(A)\geq n$.
\end{itemize}
Then either $(1)$ or $(2)$ below holds.
\begin{itemize}
\item[(1)] There exists a sequence $\{u_n\}$ such that $I'(u_n)=0$, $I(u_n)<0$ and $u_n\longrightarrow0$ in $X$.
\item[(2)] There exist two sequences $\{u_n\}$ and $\{v_n\}$ such that $I'(u_n)=0$, $I(u_n)=0$, $u_n\neq0$, $\lim\limits_{n\rightarrow\infty}u_n=0$; $I'(v_n)=0$, $I(v_n)<0$, $\lim\limits_{n\rightarrow\infty}I(v_n)=0$ and $\{v_n\}$ converges to a non-zero limit.
\end{itemize}
\end{theorem}
\begin{remark}
It is important here to mention that in either of the cases from Theorem \ref{sym mountain}, we obtain a sequence $\{u_n\}$ of critical points such that $I'(u_n)=0$, $I(u_n)\leq0$, $u_n\neq0$ and $\lim\limits_{n\rightarrow\infty}u_n=0$.
\end{remark}
\noindent Observe that $\lim\limits_{t\rightarrow+\infty}I_{\lambda}(tu)=-\infty$ for every $u\neq0$. Therefore the functional $I_{\lambda}$ is not bounded from below. Hence, we will use a technique from \cite{Garcia1991} to overcome this difficulty. Let $u\in X_0$. Since $1-\gamma<1<p<p\theta<p_{s}^{*}$, on using the Sobolev and H\"{o}lder inequalities we deduce that
\begin{align}\label{main 1}
I_{\lambda}(u)&=\frac{1}{p}\mathcal{M}(\|u\|^p)-\lambda\int_{\Omega}G(u)dx-\int_{\Omega}F(u)dx\nonumber\\
&\geq\frac{\mathfrak{m}_{0}}{p\theta}\|u\|^{p}-\lambda\frac{S^{(\gamma-1)/p}}{1-\gamma}\|u\|^{1-\gamma}-\frac{{S}^{-p_{s}^{*}/p}}{p_{s}^*}\|u\|^{p_{s}^{*}}
\end{align} Let us look at the following function
\begin{equation}\label{defn h}
h(x)=\frac{\mathfrak{m}_{0}}{p\theta}x^{p}-\lambda\frac{S^{(\gamma-1)/p}}{1-\gamma}x^{1-\gamma}-\frac{{S}^{-p_{s}^{*}/p}}{p_{s}^*}x^{p_{s}^{*}}
\end{equation}
Now, we may choose $\Lambda>0$ sufficiently small such that for all $\lambda\in(0,\Lambda)$ the function $h$ has exactly two positive roots $r_0<r_1$. More precisely, $h(t)>0$ for $r_0<t<r_1$ and $h(t)<0$ for $0<t<r_0$ and $t>r_1$, with $h(r_0)=h(r_1)=0$. Moreover, $h$ attains its positive maximum at some point $r\in(r_0,r_1)$.
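One way to see this (a brief sketch; the auxiliary function $\eta$ below is introduced only for this explanation): write $h(x)=x^{1-\gamma}\eta(x)$ with
\[
\eta(x)=\frac{\mathfrak{m}_{0}}{p\theta}x^{p-1+\gamma}-\lambda\frac{S^{(\gamma-1)/p}}{1-\gamma}-\frac{S^{-p_{s}^{*}/p}}{p_{s}^*}x^{p_{s}^{*}-1+\gamma}.
\]
Since $p-1+\gamma<p_{s}^{*}-1+\gamma$, the function $\eta$ is increasing on $(0,\bar{x})$ and decreasing on $(\bar{x},\infty)$ for some $\bar{x}>0$ which does not depend on $\lambda$; moreover $\eta(0)=-\lambda\frac{S^{(\gamma-1)/p}}{1-\gamma}<0$, $\eta(x)\to-\infty$ as $x\to\infty$, and $\eta(\bar{x})>0$ once $\lambda$ is small enough. Hence $\eta$, and therefore $h$, has exactly two positive zeros $r_0<r_1$, with the stated sign pattern.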
Let us now truncate the functional $I_{\lambda}$ as follows.
\begin{align}\label{main 3}
\overline{I}_{\lambda}(u)=\frac{1}{p}\mathcal{M}(\| u\|^{p})-\lambda\int_{\Omega}G(u)dx-\int_{\Omega}F(u)\tau(\|u\|)dx
\end{align}
where $\tau:\mathbb{R}^{+}\rightarrow[0,1]$ is a non-increasing, $C^{\infty}(\mathbb{R}^{+})$ function such that
\begin{align}\label{main 2}
\begin{split}
\tau(x)&=1~\text{if}~x\leq r_{0}\\
\tau(x)&=0~\text{if}~x\geq r_{1}
\end{split}
\end{align}
Now define
\begin{equation}\label{defn h bar}
\overline{h}(x)=\frac{\mathfrak{m}_{0}}{p\theta}x^{p}-\lambda\frac{S^{(\gamma-1)/p}}{1-\gamma}x^{1-\gamma}-\frac{{S}^{-p_{s}^{*}/p}}{p_{s}^*}x^{p_{s}^{*}}\tau(x)
\end{equation}
From \eqref{main 1}, it is clear that
\begin{align}\label{main 4}
\overline{I}_{\lambda}(u)\geq\overline{h}(\| u\|)
\end{align}
One can easily conclude that $\overline{h}(x)\geq h(x)$ whenever $x\geq0$, $\overline{h}(x)=h(x)$ for $0\leq x\leq r_{0}$ and $\overline{h}(x)\geq 0$ for $r_{0}<x\leq r_{1}$. Moreover, if $x>r_{1}$, then $\overline{h}(x)>0$, since for $x\geq r_{1}$ we have $\overline{h}(x)= x^{1-\gamma}\left(\frac{\mathfrak{m}_{0}}{p\theta}x^{p-1+\gamma}-\lambda\frac{S^{(\gamma-1)/p}}{1-\gamma}\right)$, which is strictly increasing there. Therefore, $\overline{h}(x)\geq 0$ for all $x\geq r_{0}.$\\
We now prove the following auxiliary Lemma for the truncated functional $\overline{I}_{\lambda}$ to apply the symmetric mountain pass theorem.
\begin{lemma}\label{lemma smpt}
There exists $\lambda_{0}>0$ such that for all $\lambda\in(0,\lambda_{0})$, we have
\begin{enumerate}[label=(\roman*)]
\item $\overline{I}_{\lambda}\in C^{1}(X_0,\mathbb{R})$ is even, $\overline{I}_{\lambda}(0)=0$.
\item $\overline{I}_{\lambda}$ is coercive and bounded from below.
\item $\|u\|<r_{0}$ whenever $\overline{I}_{\lambda}(u)<0$. In addition, there exists $\delta>0$ such that $\overline{I}_{\lambda}(v)=I_{\lambda}(v)$ for all $v\in N_{\delta}(u)$.
\item For every $c<0$, the functional $\overline{I}_{\lambda}$ satisfies a local $(PS)_c$-condition.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item Clearly, $\overline{I}_{\lambda}(0)=0$. By the definition of $\tau$, the functional $\overline{I}_{\lambda}$ is even and $\overline{I}_{\lambda}\in C^{1}(X_0,\mathbb{R})$.
\item Again from the definition of $\tau$, we get $\overline{I}_{\lambda}(u)\rightarrow+\infty$ as $\|u\|\rightarrow+\infty$. This implies that $\overline{I}_{\lambda}$ is coercive and bounded from below.
\item Suppose that $\overline{I}_{\lambda}(u)<0$ and the conclusion fails to hold, i.e. $\|u\|\geq r_0$. Then from \eqref{main 4} and the succeeding calculations, we can conclude that $\overline{I}_{\lambda}(u)\geq\overline{h}(\|u\|)\geq0$. This gives a contradiction to our assumption. Therefore, $\|u\|<r_0$. In addition, by using the fact that $\overline{h}(x)=h(x)$ for $0\leq x\leq r_{0}$, one may guarantee that there exists a $\delta>0$, small enough, such that $\overline{I}_{\lambda}(v)=I_{\lambda}(v)$ for all $v\in N_{\delta}(u)$.
\item Let $c<0$ and $\{u_{n}\}\subset X_0$ be a $(PS)_{c}$ sequence for the functional $\overline{I}_{\lambda}$. Then, for $n$ large enough, $\overline{I}_{\lambda}(u_{n})<0$, and $\overline{I}_{\lambda}^{'}(u_{n})\rightarrow 0$. From the conclusion in $(ii)$, using the coerciveness of $\overline{I}_{\lambda}$, we get that the sequence $\{u_{n}\}$ is bounded in $X_0$. Thus from $(iii)$ we get $\|u_{n}\|<r_{0}$ and hence $\overline{I}_{\lambda}(u_{n})=I_{\lambda}(u_{n})$ and $\overline{I}_{\lambda}^{'}(u_{n})=I_{\lambda}^{'}(u_{n})$. Finally, from Lemma \ref{ps limit}, we can obtain that there exists $\lambda_{0}>0$ such that for all $0<\lambda<\lambda_{0}$, the functional $\overline{I}_{\lambda}$ satisfies the $(PS)_{c}$-condition.
\end{enumerate}
\end{proof}
\noindent We now prove the following technical property of $\overline{I}_{\lambda}$, which guarantees the existence of a subset of $X_0$ of genus at least $n$ for every $n\in\mathbb{N}$.
\begin{lemma}\label{lemma genus n}
For every $\lambda>0$ and $k\in\mathbb{N}$ there exists $\epsilon=\epsilon(\lambda,k)<0$ such that
$$\gamma(\{u\in X_0: \overline{I}_{\lambda}(u)\leq\epsilon\})\geq k.$$
\end{lemma}
\begin{proof}
Let $k\in \mathbb{N}$ and $\lambda>0$ be given. Let $Y^k$ be a $k$-dimensional subspace of $X_0$. Now for each $u\in Y^k$ such that $0<\|u\|\leq r_0$ we have
\begin{align}\label{main 9}
\overline{I}_{\lambda}(u)&\leq\frac{1}{p}\mathcal{M}(\|u\|^{p})-\frac{1}{p_{s}^*}\int_{\Omega}|u|^{p_{s}^{*}}dx-\frac{\lambda}{1-\gamma}\int_{\Omega}|u|^{1-\gamma}dx\nonumber\\
&\leq\frac{M}{p}\|u\|^p-\frac{\lambda}{1-\gamma}\int_{\Omega}|u|^{1-\gamma}dx.
\end{align}
where $M=\max\limits_{0\leq t\leq r_{0}}\mathfrak{M}(t)<\infty$. Since $0<1-\gamma<1$, we lose the advantage of having a concave-convex nonlinearity. More precisely, we cannot use an immediate equivalence of norms over a finite dimensional space to conclude that $\overline{I}_{\lambda}(u)<0$. We instead use the following route to overcome this difficulty. Since $\dim(Y^k)=k$, there exists $c>0$ such that $$c^{-1}\|u\|_{\infty}\leq\|u\|\leq{c}\|u\|_{\infty}\quad\text{for all}~u\in Y^{k}.$$
Therefore, whenever $\|u\|\leq r_0$, we get $\|u\|_{\infty}\leq cr_0$. We now take $r_0<1$ small enough that also $cr_0<1$; then $\|u\|\leq r_0<1$ and $\|u\|_{\infty}\leq cr_0<1$. Since $|u|\leq\|u\|_{\infty}<1$ a.e. in $\Omega$, we can easily obtain
\begin{align}\label{main 10}
\int_{\Omega}|u|^{1-\gamma}dx\geq\int_{\Omega}|u|dx\geq C\|u\|
\end{align}
where we have used the fact that any two norms over a finite dimensional normed linear space are topologically equivalent.
Therefore, on using \eqref{main 9} and \eqref{main 10}, we get
\begin{align}\label{main 11}
\overline{I}_{\lambda}(u)&\leq\frac{1}{p}\mathcal{M}(\|u\|^{p})-\frac{\lambda}{1-\gamma}\int_{\Omega}|u|^{1-\gamma}dx\nonumber\\
&\leq\frac{M}{p}\|u\|^p-\frac{C\lambda}{1-\gamma}\|u\|.
\end{align}
Now choose $\rho>0$ and $R>0$ such that
$$0<\rho<R<\min\left\{r_0,\left[\frac{Cp\lambda}{M(1-\gamma)}\right]^{\frac{1}{p-1}}\right\}$$ and consider the subset of $Y^k$ defined as $A_k=\{u\in Y^k:\|u\|=\rho\}$. Thus for any $u\in A_k$, we obtain
\begin{align}\label{main 12}
\overline{I}_{\lambda}(u)&\leq\rho\left(\frac{M}{p}\rho^{p-1}-\frac{C\lambda}{1-\gamma}\right)\nonumber\\
&\leq\rho\left(\frac{M}{p}R^{p-1}-\frac{C\lambda}{1-\gamma}\right)\nonumber\\
&<0,
\end{align}
since $R<\left[\frac{Cp\lambda}{M(1-\gamma)}\right]^{\frac{1}{p-1}}$ makes the last bracket negative. Hence, setting $\epsilon:=\rho\left(\frac{M}{p}R^{p-1}-\frac{C\lambda}{1-\gamma}\right)<0$, we have
$$\overline{I}_{\lambda}(u)\leq\epsilon<0\quad\text{for every}~u\in A_k.$$
Therefore, $A_k\subset\{u\in X_0: \overline{I}_{\lambda}(u)\leq\epsilon\}.$ Finally by Lemma \ref{lemma genus}, one can conclude that
$$\gamma(\{u\in X_0: \overline{I}_{\lambda}(u)\leq\epsilon\})\geq\gamma(A_k\cap Y^k)\geq k.$$
This completes the proof.
\end{proof}
\noindent We now introduce the following notation. For each $n\in\mathbb{N}$, define $$\Gamma_n=\{A\subset X_0: A~\text{is closed and symmetric},~ 0\notin A~\text{and}~ \gamma(A)\geq n\},$$ $$K_{c}=\{u\in X_0:\overline{I}'_{\lambda}(u)=0,\overline{I}_{\lambda}(u)=c\}~\text{and}~c_{n}=\inf_{A\in\Gamma_{n}}\sup_{u\in A}\overline{I}_{\lambda}(u).$$
\begin{lemma}\label{lemma cn}
For $\lambda\in(0,\lambda_{0})$ and each $n\in\mathbb{N}$, the energy $c_n$ is a critical value of $\overline{I}_{\lambda}$. Moreover, $c_n<0$ and $\lim\limits_{n\rightarrow\infty}c_n=0.$
\end{lemma}
\begin{proof}
Let $\lambda\in(0,\lambda_{0})$ as in Lemma \ref{lemma smpt}.
By using Lemma \ref{lemma smpt}(ii) and Lemma \ref{lemma genus n}, we conclude that
\begin{equation}\label{main pf 1}
-\infty<c_n<0.
\end{equation}
Again, for all $n\in\mathbb{N}$, $c_n\leq c_{n+1}$, since $\Gamma_{n+1}\subset\Gamma_n$. Therefore, from \eqref{main pf 1} we get $\lim_{n\rightarrow+\infty}c_n=c\leq0.$ Proceeding similarly to \cite[Proposition 9.33]{Rabinowitz1986}, we will show that $c=0$. Suppose, by contradiction, that $c<0$. Then the functional $\overline{I}_{\lambda}$ satisfies the $(PS)_c$-condition, and therefore the set $K_c$ is compact. From Lemma \ref{lemma genus}, there exists $\delta>0$ such that $\gamma(N_{\delta}(K_c))=\gamma(K_c)=m<\infty$. Again, since $c<0$, by the deformation lemma \cite{Willem1997} there exist $\epsilon>0$ with $\epsilon+c<0$ and an odd homeomorphism $\eta:X_0\rightarrow X_0$ such that
\begin{equation}\label{deform}
\eta(A^{c+\epsilon}\setminus N_{\delta}(K_c))\subset A^{c-\epsilon},
\end{equation}
where $A^{c}=\{u\in X_0:\overline{I}_{\lambda}(u)\leq c\}$. Now, the sequence $\{c_n\}$ is monotonically increasing and $\lim_{n\rightarrow+\infty}c_n=c$. Thus there exists $n\in\mathbb{N}$ such that $c_n>c-\epsilon$ and $c_{n+m}<c$. We now choose $A\in\Gamma_{n+m}$ such that $\sup_{u\in A}\overline{I}_{\lambda}(u)<c+\epsilon$; in other words, $A\subset A^{c+\epsilon}$. Then from Lemma \ref{lemma genus}, we have
$$\gamma(\overline{A\setminus N_{\delta}(K_c)})\geq\gamma(A)- \gamma(N_{\delta}(K_c))\geq n~\text{and}~\gamma(\eta(\overline{A\setminus N_{\delta}(K_c)}))\geq n.$$
This implies that $\eta(\overline{A\setminus N_{\delta}(K_c)})\in\Gamma_n$. Hence, we obtain $$\sup\limits_{u\in\eta(\overline{A\setminus N_{\delta}(K_c)})}\overline{I}_{\lambda}(u)\geq c_n>c-\epsilon.$$
This gives a contradiction, and hence we conclude that $c=0$, i.e. $c_n\rightarrow0$. Finally, arguing as in \cite{Rabinowitz1986}, it is easy to prove that for each $n\in\mathbb{N}$ the energy $c_n$ is a critical value of $\overline{I}_{\lambda}$. This completes the proof.
\end{proof}
\begin{proof}[{\it Proof of Theorem \ref{thm main}:}]
\noindent Observe that from Lemma \ref{lemma smpt} we have $\overline{I}_{\lambda}(u)=I_{\lambda}(u)$ whenever $\overline{I}_{\lambda}(u)<0$. We now have all the ingredients required for the symmetric mountain pass theorem: from Lemma \ref{lemma genus}, Lemma \ref{lemma smpt}, Lemma \ref{lemma genus n} and Lemma \ref{lemma cn}, one can easily verify that all the hypotheses of Theorem \ref{sym mountain} are satisfied. Therefore, we conclude that there exist infinitely many critical points of the functional $I_{\lambda}$. However, since $u_n\rightarrow 0$ in $X_0$ as $n\rightarrow\infty$, the $u_n$ may become smaller than $\underline{u}_{\lambda}$ after finitely many terms. Hence, on choosing $\lambda$ suitably small, one can obtain $k$ solutions to the problem \eqref{main p} for arbitrarily large $k$. Moreover, since $\overline{I}_{\lambda}(u)=\overline{I}_{\lambda}(|u|)$, we can choose the solutions to be nonnegative.
\end{proof}
\section{Regularity of solutions}
\noindent This section is devoted to obtaining some regularity results for solutions to \eqref{main p}. We begin with the following comparison principle, whose idea is borrowed from \cite{Choudhuri2020}.
\begin{lemma}[Weak Comparison Principle]\label{weak comparison}
Let $u, v\in X_0$. Suppose that $\mathfrak{M}(\|v\|^p)(\Delta_{p})^sv-\frac{\lambda}{|v|^{\gamma-1}v}\geq\mathfrak{M}(\|u\|^p)(\Delta_{p})^su-\frac{\lambda}{|u|^{\gamma-1}u}$ holds weakly, with $v=u=0$ in $\mathbb{R}^N\setminus\Omega$.
Then $v\geq u$ in $\mathbb{R}^N.$
\end{lemma}
\begin{proof}
Since $\mathfrak{M}(\|v\|^p)(\Delta_{p})^sv-\frac{\lambda}{|v|^{\gamma-1}v}\geq\mathfrak{M}(\|u\|^p)(\Delta_{p})^su-\frac{\lambda}{|u|^{\gamma-1}u}$ holds weakly, with $v=u=0$ in $\mathbb{R}^N\setminus\Omega$, we have
{\small\begin{align}\label{compprinci}
\langle\mathfrak{M}(\|v\|^p)(\Delta_{p})^sv,\phi\rangle-\int_{\Omega}\frac{\lambda\phi}{|v|^{\gamma-1}v}dx&\geq\langle\mathfrak{M}(\|u\|^p)(\Delta_{p})^su,\phi\rangle-\int_{\Omega}\frac{\lambda\phi}{|u|^{\gamma-1}u}dx\quad\text{for all}~\phi\in X_0,~\phi\geq 0.
\end{align}}
Define $\mathfrak{m}(t)=(a+bt^p)\geq a>0$ for $t\geq 0$ and $$\mathfrak{M}(t)=\int_{0}^{t}\mathfrak{m}(s)ds.$$ Suppose that $S=\{x\in\Omega:u(x)>v(x)\}$ is a set of non-zero measure. Over this set $S$, we have
{\small\begin{align}\label{compprinci1}
&(a+b\|v\|^p)(\Delta_{p})^sv-(a+b\|u\|^p)(\Delta_{p})^su\geq\lambda\left(\frac{1}{v^{\gamma}}-\frac{1}{u^{\gamma}}\right)\geq 0.
\end{align}}
\noindent We will now show that the operator $\mathfrak{M}(\cdot)(\Delta_{p})^s(\cdot)$ is a {\it monotone} operator. By the Cauchy-Schwarz and Young inequalities we have
\begin{eqnarray}\label{csineq}
|(u(x)-u(y))(v(x)-v(y))|&\leq& |u(x)-u(y)||v(x)-v(y)|\nonumber\\
& \leq &\frac{|u(x)-u(y)|^2+|v(x)-v(y)|^2}{2}.\end{eqnarray}
Consider $I_1=\langle \mathcal{M}'(u),u\rangle-\langle \mathcal{M}'(u),v\rangle-\langle \mathcal{M}'(v),u\rangle+\langle \mathcal{M}'(v),v\rangle$ and assume first that $|u(x)-u(y)| \geq|v(x)-v(y)|$.
Therefore using \eqref{csineq} we get
\begin{eqnarray}\label{first_ineq}
I_1&=&p \mathfrak{M}(\|u\|^p)\left(\int_{Q}|u(x)-u(y)|^{p-2}\{|u(x)-u(y)|^2-(u(x)-u(y))(v(x)-v(y))\}dxdy\right)\nonumber\\
& &+p \mathfrak{M}(\|v\|^p)\left(\int_{Q}|v(x)-v(y)|^{p-2}\{|v(x)-v(y)|^2-(u(x)-u(y))(v(x)-v(y))\}dxdy\right)\nonumber\\
&\geq&\frac{p}{2} \mathfrak{M}(\|u\|^p)\left(\int_{Q}|u(x)-u(y)|^{p-2}\{|u(x)-u(y)|^2-|v(x)-v(y)|^2\}dxdy\right)\nonumber\\
& &+\frac{p}{2} \mathfrak{M}(\|v\|^p)\left(\int_{Q}|v(x)-v(y)|^{p-2}\{|v(x)-v(y)|^2-|u(x)-u(y)|^2\}dxdy\right)\nonumber\\
&\geq&p\mathfrak{M}(\|u\|^p)\left(\int_{Q}(|u(x)-u(y)|^{p-2}-|v(x)-v(y)|^{p-2})(|u(x)-u(y)|^2-|v(x)-v(y)|^2)dxdy\right)\nonumber\\
&\geq&p\mathfrak{m}_0\left(\int_{Q}(|u(x)-u(y)|^{p-2}-|v(x)-v(y)|^{p-2})(|u(x)-u(y)|^2-|v(x)-v(y)|^2)dxdy\right).
\end{eqnarray}
On the other hand, when $|u(x)-u(y)| \leq|v(x)-v(y)|$, we interchange the roles of $u$, $v$ to get
\begin{eqnarray}\label{second_ineq}
I_1&\geq&p\mathfrak{m}_0\left(\int_{Q}(|u(x)-u(y)|^{p-2}-|v(x)-v(y)|^{p-2})(|u(x)-u(y)|^2-|v(x)-v(y)|^2)dxdy\right).
\end{eqnarray}
Thus, we deduce that
\begin{eqnarray}
\langle \mathcal{M}'(u)-\mathcal{M}'(v),u-v\rangle&=&I_1\geq 0.
\end{eqnarray}
Thus $\mathfrak{M}(\cdot)(-\Delta_p)^s(\cdot)$ is a monotone operator; this monotonicity is sufficient for our purposes.\\
Coming back to \eqref{compprinci1}, the monotonicity of $\mathfrak{M}(\cdot)(-\Delta_p)^s(\cdot)$ just established implies that $v\geq u$ in $S$. This contradicts the definition of $S$ unless $S$ has measure zero, and hence $v\geq u$ a.e. in $\Omega$.
\end{proof}
\noindent We now recall three essential results due to \cite{Brasco2016} to obtain an $L^{\infty}(\bar{\Omega})$ bound. Consider the monotone increasing function $J_{p}(t):=|t|^{p-2}t$ for every $1<p<\infty$.
\begin{lemma}\label{beta convex}
For every $\beta>0$ and $1\leq p<\infty$ we have
$$\left(\frac{1}{\beta}\right)^{\frac{1}{p}}\left(\frac{p+\beta-1}{p}\right)\geq 1.$$
\end{lemma}
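\noindent For completeness, we note that the inequality in Lemma \ref{beta convex} is equivalent to $1+\frac{\beta-1}{p}\geq\beta^{\frac{1}{p}}$, which follows from the concavity of $t\mapsto t^{\frac{1}{p}}$: the tangent line at $t=1$ gives $t^{\frac{1}{p}}\leq 1+\frac{t-1}{p}$ for every $t>0$.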
\begin{lemma}\label{l infty 1}
Assume $1<p<\infty$ and let $f: \mathbb{R}\rightarrow \mathbb{R}$ be a $C^{1}$ convex function. Then for any $\tau\geq 0$
\begin{equation}\label{bdd est1}
J_{p}(a-b)\big[AJ_{p,\tau}(f'(a))-BJ_{p,\tau}(f'(b))\big]\geq(\tau(a-b)^{2}+(f(a)-f(b))^{2})^{\frac{p-2}{2}}(f(a)-f(b))(A-B),
\end{equation}
for every $a, b\in \mathbb{R}$ and every $A, B\geq 0$, where $J_{p,\tau}(t)=(\tau+|t|^{2})^{\frac{p-2}{2}}t,~ t\in \mathbb{R}.$ Moreover, for $\tau=0$, we get
\begin{equation}\label{bdd est1 remark}
J_{p}(a-b)\big[AJ_{p}(f'(a))-BJ_{p}(f'(b))\big]\geq|f(a)-f(b)|^{p-2}(f(a)-f(b))(A-B),
\end{equation}
for every $a, b\in \mathbb{R}$ and every $A, B\geq 0$.
\end{lemma}
\begin{lemma}\label{l infty 2}
Assume $1<p<\infty$ and let $h:\mathbb{R}\rightarrow \mathbb{R}$ be an increasing function. Define
$$G(t)=\int_{0}^{t}h'(\tau)^{\frac{1}{p}}\:\! \mathrm{d}\tau, t\in \mathbb{R},$$
then we have
\begin{equation}\label{bdd est2}
J_{p}(a-b)(h(a)-h(b))\geq|G(a)-G(b)|^{p}.
\end{equation}
\end{lemma}
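\noindent In the proof of Lemma \ref{bounded} below, Lemma \ref{l infty 2} is applied with $h(t)=(\min\{(t-1)^{+},k\}+\delta)^{\beta}$, for parameters $k,\delta,\beta>0$ introduced there. In this case one computes
$$G(t)=\beta^{\frac{1}{p}}\,\frac{p}{\beta+p-1}\left[(\min\{(t-1)^{+},k\}+\delta)^{\frac{\beta+p-1}{p}}-\delta^{\frac{\beta+p-1}{p}}\right],$$
which explains both the exponent $\frac{\beta+p-1}{p}$ and the constant $\frac{(\beta+p-1)^{p}}{\beta p^{p}}$ appearing in \eqref{bound est 4}.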
\noindent We now prove the uniform boundedness of solutions to the problem \eqref{main p}. The proof is based on the Moser iteration technique.
\begin{lemma}\label{bounded}
Let $u\in X_0$ be a positive weak solution to the problem in \eqref{main p}, then $u\in L^{\infty}(\bar{\Omega}).$
\end{lemma}
\begin{proof}
We will prove this lemma by obtaining a more general result for any $r\in(p-1,p_s^*]$ in place of the critical exponent $p_s^*$. The argument is taken from \cite{Brasco2016} with appropriate modifications. We work with the smooth, convex and Lipschitz functions $g_{\epsilon}(t)=(\epsilon^2+t^2)^{\frac{1}{2}}$, $\epsilon>0$, which satisfy $g_{\epsilon}(t)\rightarrow|t|$ as $\epsilon\rightarrow0$ and $|g'_{\epsilon}(t)|\leq1.$ Let $0\leq\psi\in C_c^{\infty}(\Omega)$ and choose $\varphi=\psi |g'_{\epsilon}(u)|^{p-2}g'_{\epsilon}(u)$ as the test function in \eqref{weak p}, with exponent $r\in(p-1,p_s^*]$ in place of $p_s^*$. The following estimate then follows by putting $a=u(x)$, $b=u(y)$, $A=\psi(x)$ and $B=\psi(y)$ in Lemma \ref{l infty 1}: for all nonnegative $\psi\in C_c^{\infty}(\Omega)$, we obtain
\begin{align}\label{bound est 2.0}
\mathfrak{M}(\|u\|^p)\int_{Q}&\cfrac{|g_{\epsilon}(u(x))-g_{\epsilon}(u(y))|^{p-2}(g_{\epsilon}(u(x))-g_{\epsilon}(u(y)))(\psi(x)-\psi(y))}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\nonumber\\
&\leq\int_\Omega\left(\left|\frac{\lambda}{|u|^{\gamma-1}u}+|u|^{r-2}u\right|\right)|g_{\epsilon}(u)|^{p-1}\psi \:\! \mathrm{d} x
\end{align}
Now on passing to the limit $\epsilon\rightarrow0$ together with Fatou's Lemma, we obtain
\begin{align}\label{bound est 2}
\mathfrak{M}(\|u\|^p)\int_{Q}\cfrac{||u(x)|-|u(y)||^{p-2}(|u(x)|-|u(y)|)(\psi(x)-\psi(y))}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\leq\int_\Omega\left(\left|\frac{\lambda}{|u|^{\gamma-1}u}+|u|^{r-2}u\right|\right)\psi \:\! \mathrm{d} x
\end{align}
Note that, by density, \eqref{bound est 2} remains true for every nonnegative $\psi\in X_0$. Define $u_k=\min\{(u-1)^+, k\}\in X_0$ for each $k>0$. Let $\beta>0$ and $\delta>0$ be given. Putting $\psi=(u_k+\delta)^{\beta}-\delta^{\beta}$ in \eqref{bound est 2}, we get
\begin{align*}
\mathfrak{M}(\|u\|^p)\int_{Q}&\cfrac{||u(x)|-|u(y)||^{p-2}(|u(x)|-|u(y)|)((u_k(x)+\delta)^{\beta}-(u_k(y)+\delta)^{\beta})}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\\
&\leq\int_\Omega\left|\frac{\lambda}{|u|^{\gamma-1}u}+|u|^{r-2}u\right|((u_k+\delta)^{\beta}-\delta^{\beta}) \:\! \mathrm{d} x
\end{align*}
On setting $h(u)=(u_k+\delta)^{\beta}$ in Lemma \ref{l infty 2}, we obtain
\begin{align}\label{bound est 4}
&\mathfrak{M}(\|u\|^p)\int_{Q}\cfrac{|((u_k(x)+\delta)^{\frac{\beta+p-1}{p}}
-(u_k(y)+\delta)^{\frac{\beta+p-1}{p}})|^{p}}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\nonumber\\
&\leq\mathfrak{M}(\|u\|^p)\int_{Q}\left(\cfrac{(\beta+p-1)^{p}}{{\beta}p^{p}}\right)\cfrac{||u(x)|-|u(y)||^{p-2}(|u(x)|-|u(y)|)((u_k(x)+\delta)^{\beta}-(u_k(y)+\delta)^{\beta})}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\nonumber\\
&\leq\left(\cfrac{(\beta+p-1)^{p}}{\beta p^{p}}\right)\mathfrak{M}(\|u\|^p) \int_{Q}\cfrac{||u(x)|-|u(y)||^{p-2}(|u(x)|-|u(y)|)((u_k(x)+\delta)^{\beta}-(u_k(y)+\delta)^{\beta})}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\nonumber\\
&\leq\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\int_{\Omega}\left(\left|\frac{\lambda}{|u|^{\gamma-1}u}\right|+||u|^{r-2}u|\right)\left((u_k+\delta)^{\beta}-\delta^{\beta}\right) \:\! \mathrm{d} x\nonumber\\
&=\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\left[\int_{\{u\geq1\}}\lambda|u|^{-\gamma}\left((u_k+\delta)^{\beta}-\delta^{\beta}\right) \:\! \mathrm{d} x+\int_{\{u\geq1\}}|u|^{r-1}\left((u_k+\delta)^{\beta}-\delta^{\beta}\right) \:\! \mathrm{d} x\right]\nonumber\\
&\leq{C}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\left[\int_{\{u\geq1\}}\left(1+|u|^{r-1}\right)\left((u_k+\delta)^{\beta}-\delta^{\beta}\right) \:\! \mathrm{d} x\right]\nonumber\\
&\leq{2C}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\left[\int_{\Omega}|u|^{r-1}\left((u_k+\delta)^{\beta}-\delta^{\beta}\right) \:\! \mathrm{d} x\right]\nonumber\\
&\leq{C'}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right){\|u\|^{r-1}_{p_s^*}}\|(u_k+\delta)^{\beta}\|_{q},
\end{align}
where $q=\frac{p_s^*}{p_s^*-r+1}$ and $C=\max\{1,|\lambda|\}.$ From the Sobolev inequality \cite{Nezza2012} we get
\begin{align}\label{bound est 5}
&\mathfrak{M}(\|u\|^p)\int_{Q}\cfrac{|((u_k(x)+\delta)^{\frac{\beta+p-1}{p}}
-(u_k(y)+\delta)^{\frac{\beta+p-1}{p}})|^{p}}{|x-y|^{N+sp}}\:\! \mathrm{d} x\:\! \mathrm{d} y\geq{C}\mathfrak{m}_0\left\|(u_k+\delta)^{\frac{\beta+p-1}{p}}-\delta^{\frac{\beta+p-1}{p}}\right\|_{p_{s}^*}^{p}
\end{align}
where $C>0$. Again from triangle inequality and $(u_k+\:\! \mathrm{d}elta)^{\beta+p-1}\geq\:\! \mathrm{d}elta^{p-1}(u_k+\:\! \mathrm{d}elta)^{\beta}$ we have
\begin{align}\label{bound est 6}
\left[\int_{\Omega}\left((u_k+\:\! \mathrm{d}elta)^{\frac{\beta+p-1}{p}}
-\:\! \mathrm{d}elta^{\frac{\beta+p-1}{p}}\right)^{p_s^*}\:\! \mathrm{d} x\right]^{\cfrac{p}{p_s^*}}\geq\left(\frac{\:\! \mathrm{d}elta}{2}\right)^{p-1}&\left[\int_{\Omega}(u_k+\:\! \mathrm{d}elta)^{\frac{p_s^*\beta}{p}}\right]^{\cfrac{p}{p_s^*}}-\:\! \mathrm{d}elta^{\beta+p-1}|\Omega|^{\cfrac{p}{p_s^*}}.
\end{align}
Therefore, by using \eqref{bound est 6} in \eqref{bound est 5} and then from \eqref{bound est 4}, we obtain
\begin{align}\label{bdd1}
\left\|(u_k+\delta)^{\frac{\beta}{p}}\right\|^{p}_{p_s^*}
\leq\frac{C'}{\mathfrak{m}_0}\left[C\left(\frac{2}{\delta}\right)^{p-1}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\|u\|_{p_s^*}^{r-1}\|(u_k+\delta)^{\beta}\|_{q}+\delta^{\beta}|\Omega|^{\cfrac{p}{p_s^*}}\right].
\end{align}
Therefore, on using \eqref{bdd1}, Lemma \ref{beta convex} and \eqref{bound est 4}, we can deduce that
\begin{align}\label{bound est 7}
\left\|(u_k+\delta)^{\frac{\beta}{p}}\right\|^{p}_{p_s^*}
&\leq\frac{C'}{\mathfrak{m}_0}\left[\frac{1}{\beta}\left(\cfrac{\beta+p-1}{p}\right)^{p}\left\|(u_k+\delta)^{\beta}\right\|_{q}\left(\frac{C\|u\|_{p_s^*}^{r-1}}{\delta^{p-1}}+|\Omega|^{\cfrac{p}{p_s^*}-\cfrac{1}{q}} \right)\right].
\end{align}
\noindent Now choose $\delta>0$ such that $\delta^{p-1}=C\|u\|_{p_s^*}^{r-1}\left(|\Omega|^{\frac{p}{p_s^*}-\frac{1}{q}}\right)^{-1}$ and $\beta\geq1$ with $\left(\frac{\beta+p-1}{p}\right)^{p}\leq\beta^{p}.$ Further, by setting $\eta=\cfrac{p_s^*}{pq}>1$ and $\tau=q\beta$, we can rewrite the inequality \eqref{bound est 7} as
\begin{align}\label{bound est 8}
\left\|(u_k+\delta)\right\|_{\eta\tau}\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{q}}\right)^{\frac{q}{\tau}}\left(\frac{\tau}{q}\right)^{\frac{q}{\tau}}\left\|(u_k+\delta)\right\|_{\tau}.
\end{align}
Set $\tau_0=q$ and $\tau_{m+1}=\eta\tau_m=\eta^{m+1}q$. Then after performing $m$ iterations, the inequality \eqref{bound est 8} reduces to
\begin{align}\label{bound est 9}
\left\|(u_k+\delta)\right\|_{\tau_{m+1}}&\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{q}}\right)^{\left(\sum\limits_{i=0}^{m}\frac{q}{\tau_i}\right)}\left(\prod\limits_{i=0}^{m}\left(\frac{\tau_i}{q}\right)^{\frac{q}{\tau_i}}\right)^{p-1}\left\|(u_k+\delta)\right\|_{q}\nonumber\\
&=\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{q}}\right)^{\frac{\eta}{\eta-1}}\left(\eta^{\frac{\eta}{(\eta-1)^2}}\right)^{p-1}\left\|(u_k+\delta)\right\|_{q}
\end{align}
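\noindent For clarity, we record the elementary computation behind the limiting exponents in \eqref{bound est 9}--\eqref{bound est 10}: since $\tau_i=\eta^{i}q$ with $\eta>1$,
$$\sum_{i=0}^{\infty}\frac{q}{\tau_i}=\sum_{i=0}^{\infty}\eta^{-i}=\frac{\eta}{\eta-1}
\qquad\text{and}\qquad
\prod_{i=0}^{\infty}\left(\frac{\tau_i}{q}\right)^{\frac{q}{\tau_i}}=\eta^{\sum_{i=0}^{\infty}i\eta^{-i}}=\eta^{\frac{\eta}{(\eta-1)^{2}}}.$$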
Therefore, on passing to the limit as $m\rightarrow\infty$, we get
\begin{equation}\label{bound est 10}
\left\|u_k\right\|_{\infty}\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{q}}\right)^{\frac{\eta}{\eta-1}}\left(C'\eta^{\frac{\eta}{(\eta-1)^2}}\right)^{p-1}\left\|(u_k+\delta)\right\|_{q}.
\end{equation}
Furthermore, by applying the triangle inequality together with the fact $u_k\leq(u-1)^+$ in \eqref{bound est 10} and then letting $k\rightarrow\infty$, we obtain
\begin{equation}\label{bound est 11}
\left\|(u-1)^+\right\|_{\infty}=\lim_{k\rightarrow\infty}\left\|u_k\right\|_{\infty}\leq{C}\left(\eta^{\frac{\eta}{(\eta-1)^2}}\right)^{p-1}\left(|\Omega|^{\frac{p}{p_s^*}-\frac{1}{q}}\right)^{\frac{\eta}{\eta-1}}\left(\left\|(u-1)^+\right\|_q+\delta|\Omega|^{\frac{1}{q}}\right)
\end{equation}
Hence, we have $u\in L^{\infty}(\bar{\Omega}).$ In particular, by choosing $r=p_s^*$, we conclude that if $u\in X_0$ is a solution to \eqref{main p}, then $u\in L^{\infty}(\bar{\Omega}).$
\end{proof}
\section*{Data availability statement}
\noindent Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\section*{References}
\end{document}
|
\begin{document}
\title{Noise induced loss of entanglement}
\author{Kovid Goyal}
\email{[email protected]}
\homepage{http://kovidgoyal.cjb.net}
\affiliation{St. Xavier's College, Mumbai 400001}
\keywords{noise, entanglement}
\begin{abstract}
The disentangling effect of repeated applications of the bit flip
channel ($\1\otimes\sigma_x$) on bipartite qubit systems is
analyzed. It is found that the rate of loss of entanglement
is not uniform over all states. The distillable entanglement of
maximally entangled states decreases faster than that of less
entangled states. The analysis is also generalized to noise channels
of the form $\hat{n}\cdot\vec{\sigma}$.
\end{abstract}
\maketitle
\allowdisplaybreaks
\section{Introduction}
The storage/transmission of classical data is subject to various noise
processes that reduce the integrity of the data over time. One such
noise process is the \textit{binary symmetric channel}
(\Fref{fig:bin-symm-chann}), which flips a bit with a given probability
$1-p$. There exist many successful strategies for dealing with this noise process
\cite{Welsh:1988}.
\begin{figure}
\caption{The binary symmetric channel}
\label{fig:bin-symm-chann}
\end{figure}
\textit{Entanglement} is a quantum resource, essential to many
applications such as teleportation, super-dense coding, etc. As such,
the ability to combat noise during the storage of entanglement is
essential. In this paper, we consider an instance of the binary
symmetric channel, applied to bipartite qubit systems. We analyze the
disentangling effect of this channel on singlet (maximally entangled)
states. The choice of a qubit system is dictated by the existence of a
mathematically tractable measure of entanglement for bipartite qubit
systems \cite{Wootters:1998}.
\section{The Quantum Bit Flip Channel}
The generalization of the symmetric bit flip channel to the case of a
single qubit is straightforward. Choose the computational basis
($\{\ket{0},\ket{1}\}$) of the Hilbert space $\op{H}_2$. Let $\rho$ be
any density matrix acting on this space. Then the quantum bit flip
channel can be defined as
\begin{equation}
\label{eq:sq-bf-defn}
\rho' = p\rho + (1-p)\sigma_x\rho\sigma_x.
\end{equation}
In order to study the effect of this channel on entanglement, this
definition needs to be extended for bipartite systems. We make the
choice that only one of the two subsystems is affected by the
noise. Then for $\rho \in \op{H}_2\otimes\op{H}_2$,
\begin{align}
\label{eq:bipart-bf-defn}\notag
\rho' &= p\rho + (1-p)\op{X}(\rho),\\
\op{X}(\rho) :&= (\1\otimes\sigma_x)\rho(\1\otimes\sigma_x).
\end{align}
Since conjugation by $\1\otimes\sigma_x$ is completely positive and trace preserving, $\rho'$ is also a
density matrix in $\op{H}_2\otimes\op{H}_2$. We are interested in the
disentangling effect of this channel on the maximally entangled
singlet state, defined as
\begin{equation}
\label{eq:sing-defn}
\ensuremath{\op{\rho}_+} = \frac{1}{2}\sum_{i,j=0}^1 \ketbra{i}{j}\otimes\ketbra{i}{j}.
\end{equation}
After a single application of the channel, the resulting density
matrix \nrho{1} has the form
\begin{equation}
\label{eq:rho1-defn}
\nrho{1} = p\ensuremath{\op{\rho}_+} + (1-p)\op{X}(\ensuremath{\op{\rho}_+}).
\end{equation}
The entanglement of this state should be a function of $p$, which
completely parameterizes the bit flip channel.
\subsection{Entanglement of Formation}
In order to calculate the
entanglement of formation \cite{Wootters:1998}, the following
definitions are required
\begin{align}
\label{eq:ent-defs}
\tilde{\rho} &= (\sigma_y\otimes\sigma_y) \rho^* (\sigma_y\otimes\sigma_y), \\
\op{C}(\rho) &= \max \{0, \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\},
\end{align}
where $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \lambda_4$ are the
eigenvalues of the matrix
$\sqrt{\sqrt{\rho}\tilde{\rho}\sqrt{\rho}}$. Then the entanglement of
formation $\E(\rho)$ is given by
\begin{align}\notag
\label{eq:eof-defn}
\E(\rho) &= h\left(\frac{1+\sqrt{1-\op{C}(\rho)^2}}{2}\right),\\
h(x) &= -x\log_2x - (1-x)\log_2(1-x).
\end{align}
For \nrho{1} the concurrence is found to be
\begin{equation}
\label{eq:rho1-con}
\op{C}(\nrho{1}) = 2\left|p-\frac{1}{2}\right|.
\end{equation}
This gives an entanglement of formation
\begin{equation}
\label{eq:form-defn}
\E_F(\nrho{1}) = h\left(\frac{1}{2} + \sqrt{p(1-p)}\right).
\end{equation}
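Indeed, since $\op{C}(\nrho{1})=|2p-1|$, we have $1-\op{C}(\nrho{1})^2=4p(1-p)$, so that
\begin{equation*}
\frac{1+\sqrt{1-\op{C}(\nrho{1})^2}}{2} = \frac{1}{2}+\sqrt{p(1-p)},
\end{equation*}
and \Eref{eq:form-defn} follows from \Eref{eq:eof-defn}.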
An outline of the calculations is presented in
\Sref{sec:gen}. \Fref{fig:rho-one-stage} shows how the entanglement
varies as a function of $p$.
\begin{figure}
\caption{Entanglement of $\rho_{(1)}$ as a function of $p$.}
\label{fig:rho-one-stage}
\end{figure}
\subsection{Distillable Entanglement}
While there is no general method for calculating the
distillable entanglement of an arbitrary density matrix, we are
fortunate in that \nrho{1} can be written in the Bell diagonal form,
as
\begin{equation}
\label{eq:rho1-bell-diag}
\nrho{1} = p\ketbra{\Phi^+}{\Phi^+} + (1-p)\ketbra{\Psi^+}{\Psi^+}.
\end{equation}
Using the one-way hashing protocol \cite{BDSW:1996} for distillation, it is possible to obtain a
lower bound of $1-h(p)$ on the distillable entanglement.
The distillable entanglement is bound above by the relative
entropy of entanglement \cite{Vedral/Plenio:1998}, which for \nrho{1}
is also $1-h(p)$ \cite{VPRK:1997}.
Combining the two bounds, we have
\begin{equation}
\label{eq:dist-defn}
\E_D(\nrho{1}) = 1 - h(p).
\end{equation}
\subsection{Multiple Applications}
We now ask the question, what effect do multiple applications of the
channel have on the singlet state? In order to answer it,
we need to know the form of the singlet state after $n$ applications,
denoted by \nrho{n}. Proceeding from \Eref{eq:rho1-defn},
\begin{align}\notag
\nrho{2} &= p\nrho{1} +(1-p)\op{X}(\nrho{1})\\\label{eq:rho2-form}
&= (p^2 + (1-p)^2)\ensuremath{\op{\rho}_+} + (p(1-p) + (1-p)p)\op{X}(\ensuremath{\op{\rho}_+})\\\notag
&= P_2\ensuremath{\op{\rho}_+} + (1-P_2)\op{X}(\ensuremath{\op{\rho}_+});\\\notag
P_2 &= p^2 + (1-p)^2.
\end{align}
The identity $\sigma_x^2=\1$ was used to arrive at
\Eref{eq:rho2-form}. Thus \nrho{2} has exactly the same form as
\nrho{1}; repeated applications of the channel will not change this
form. All that remains is to find an expression for $P_n$. By
calculating \nrho{n} for the first few $n$ explicitly, we have
\begin{align*}
&\begin{split}
P_0 &= 1, \\ P_1 &= p,
\end{split}
&\begin{split}
P_2 &= p^2 + (1-p)^2, \\ P_3 &= p^3 + 3p(1-p)^2.
\end{split}
\end{align*}
Evidently, $P_n$ is the sum of the terms containing even powers of $(1-p)$ in the
binomial expansion of $(p+(1-p))^n$.
\begin{align}\notag
\therefore P_n &= \frac{(p+(1-p))^n + (p-(1-p))^n}{2}\\\label{eq:P_n-defn}
&= \frac{1}{2} + 2^{n-1}\left(p-\frac{1}{2}\right)^n.
\end{align}
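Equivalently, $P_n$ obeys the recursion $P_{n+1}=pP_n+(1-p)(1-P_n)$, i.e.\ $P_{n+1}-\frac{1}{2}=(2p-1)\left(P_n-\frac{1}{2}\right)$, whose solution with $P_0=1$ is precisely \Eref{eq:P_n-defn}.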
Now that we have obtained a general expression for $P_n$, we can
calculate the entanglements as,
\begin{align}
\label{eq:ents-fin}\notag
\E_F(\nrho{n}) &= h\left(\frac{1}{2}+\sqrt{P_n(1-P_n)}\right)\\
\E_D(\nrho{n}) &= 1-h(P_n).
\end{align}
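For concreteness, the quantities in \Eref{eq:ents-fin} are straightforward to evaluate numerically; the short Python sketch below (illustrative only, with sample parameter values) computes them.
\begin{verbatim}
# Illustrative sketch: entanglement of rho_(n) as a function of n and p.
import math

def h(x):
    # binary entropy, with h(0) = h(1) = 0 by continuity
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def P(n, p):
    # closed form derived above: P_n = 1/2 + 2^(n-1) (p - 1/2)^n
    return 0.5 + 2 ** (n - 1) * (p - 0.5) ** n

def E_F(n, p):          # entanglement of formation of rho_(n)
    Pn = P(n, p)
    return h(0.5 + math.sqrt(Pn * (1 - Pn)))

def E_D(n, p):          # distillable entanglement of rho_(n)
    return 1 - h(P(n, p))

for n in (1, 2, 5, 10):
    print(n, round(E_F(n, 0.9), 4), round(E_D(n, 0.9), 4))
\end{verbatim}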
\Fref{fig:eof-n} shows how the distillable entanglement
decreases with $n$ for different values of $\left|p-\frac{1}{2}\right|$.
\begin{figure}
\caption{Entanglement of $\rho_{(n)}$ as a function of $n$.}
\label{fig:eof-n}
\end{figure}
\subsection{Combating the Disentanglement}
The form of the curves in \Fref{fig:eof-n} suggests that perhaps,
states further along the curves lose entanglement slower than the
singlet. In order to test this, first we define the fractional loss of
entanglement of the state \nrho{k} after $r$ applications of the channel as
\begin{equation}
\label{eq:frac-defn}
F(p, k, r) = - \frac{\E(\nrho{k+r}) - \E(\nrho{k})}{\E(\nrho{k})};
\end{equation}
where $\E(\rho)$ is a measure of the entanglement of $\rho$. Then the
fractional loss of entanglement of the singlet state after $r$
applications of the channel is given by $F(p, 0, r)$. In order to
compare the loss of entanglement of the singlet state with that of
\nrho{k}, define
\begin{equation}
\label{eq:frac-adv-defn}
R(p, k, r) = \frac{F(p, k, r)}{F(p, 0, r)}.
\end{equation}
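As a numerical aid only (taking the distillable entanglement $\E_D$ as the measure and sample parameter values), $F$ and $R$ can be evaluated directly from \Eref{eq:ents-fin}:
\begin{verbatim}
# Illustrative sketch: fractional loss F and ratio R for the
# distillable entanglement E_D.
import math

def h(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def E_D(n, p):
    Pn = 0.5 + 2 ** (n - 1) * (p - 0.5) ** n
    return 1 - h(Pn)

def F(p, k, r):
    # fractional loss of E_D of rho_(k) after r further applications
    return (E_D(k, p) - E_D(k + r, p)) / E_D(k, p)

def R(p, k, r):
    return F(p, k, r) / F(p, 0, r)

print(R(0.9, 5, 1))   # < 1: rho_(5) loses E_D slower than the singlet
\end{verbatim}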
\begin{center}
\begin{widetext}
\hspace{1.9cm}
\begin{figure}
\caption{Graphs showing the dependence of $R(p, k, r)$ on $p, k$ and
$r$. It is seen that $R$ behaves differently for different
measures of entanglement.}
\label{fig:disen-stuff}
\end{figure}
\end{widetext}
\end{center}
\Fref{fig:disen-stuff} illustrates the behavior of $R(p,k,r)$. The
most striking feature of the graphs is that the entanglement of
formation and the distillable entanglement behave in a qualitatively
different manner with regard to the rate of loss of entanglement of
\nrho{k}. The rate of loss of entanglement of formation is higher for
\nrho{k} than for the singlet state. The reverse is true for the
distillable entanglement.
It is the distillable entanglement that is of greater practical
interest, and the fact that \nrho{k} loses it slower than the singlet
suggests a simple tactic to combat the disentangling action
of this channel. Rather than storing entanglement as a few
singlets, it should be stored as a larger number of less
entangled states of the form of \nrho{k}. Since the fractional loss of
entanglement for these states is less than for the singlet, there will
be a smaller net loss of entanglement over time, \textit{provided}
that the distillable entanglement for these states is additive, that
is
\begin{equation}
\label{eq:additive}
\E_D(\nrho{k}^{\otimes N}) = N\E_D(\nrho{k}).
\end{equation}
This will ensure that the entanglement spread over $N$ copies of these
states can be efficiently concentrated into singlet form again.
The second graph in \Fref{fig:disen-stuff} shows that the advantage
obtained by storing the entanglement in dilute form is lost if the
system is exposed to noise repeatedly. While this does impose a limit
on the savings that can be made, if a sufficiently large $k$ is
chosen and $r$ is bounded, there can still be significant gains.
The final graph shows, rather predictably, that the less severe the
noise, the greater the gains that can be made, for a given $k$ and $r$.
\section{Generalization}
\label{sec:gen}
Although most of the results in this paper are derived for
the bit flip channel, a number of them hold for more general noise
processes as well. In this section, we will analyze the general noise
process
\begin{align}
\label{eq:rho1-gen-defn}\notag
\nrho{1} &= p\ensuremath{\op{\rho}_+} + (1-p)\op{N}(\ensuremath{\op{\rho}_+})\\\notag
\op{N}(\rho) :&= (\1\otimes\hat{n}\cdot\vec{\sigma})\rho(\1\otimes\hat{n}\cdot\vec{\sigma})\\
\op{N}(\ensuremath{\op{\rho}_+}) & = \frac{1}{2}\sum_{i,j=0}^1\sum_{a,b=1}^3n_an_b\ket{i}\bra{j}\otimes\sigma_a \ket{i}\bra{j}\sigma_b.
\end{align}
where $\hat{n}\in\mathbb{R}^3$ is an arbitrary unit vector. For $\hat{n} = (1,0,0),
(0,1,0)$ and $(0,0,1)$, this channel reduces to the bit flip,
bit-phase flip and phase flip channels respectively \cite{Nielsen/Chuang:2000}.
\subsection{Entanglement of Formation}
Here we explicitly calculate the entanglement of formation of
\nrho{1}, defined in \Eref{eq:rho1-gen-defn}. First we need to
evaluate $\tilde{\nrho{1}} =
(\sigma_y\otimes\sigma_y)\nrho{1}^*(\sigma_y\otimes\sigma_y)$.
The following identity \cite{Jozsa:1994}, comes in handy
\begin{equation}
\label{eq:jozsa-ident}
(\1\otimes M)\ensuremath{\op{\rho}_+}(\1\otimes M^\dagger) = (M^T\otimes\1)\ensuremath{\op{\rho}_+}(M^*\otimes\1);
\end{equation}
where $M$ is any matrix. As a result of \Eref{eq:jozsa-ident} we get
\begin{align}\label{eq:inv1}\notag
(\sigma_y&\otimes\sigma_y)\ensuremath{\op{\rho}_+}^*(\sigma_y\otimes\sigma_y)\\\notag
&= (\sigma_y\otimes\1)(\1\otimes\sigma_y)\ensuremath{\op{\rho}_+}^*(\1\otimes\sigma_y)(\sigma_y\otimes\1)\\
&= \ensuremath{\op{\rho}_+}^* = \ensuremath{\op{\rho}_+}.
\end{align}
Define $\hat{n}' = (n_x, -n_y, n_z)$. Then, for the second term in \nrho{1}
\begin{align}
\label{eq:inv2}\notag
(\sigma_y&\otimes\sigma_y)\op{N}(\ensuremath{\op{\rho}_+})^*(\sigma_y\otimes\sigma_y)\\\notag
&=(\sigma_y\otimes\sigma_y)(\1\otimes\hat{n}'\cdot\vec{\sigma})
\ensuremath{\op{\rho}_+}(1\otimes\hat{n}'\cdot\vec{\sigma})(\sigma_y\otimes\sigma_y)\\\notag
&=\frac{1}{2}\sum_{a,b,i,j}\sigma_y\ket{a}\bra{b}\sigma_y\otimes
n_i'\sigma_y\sigma_i\ket{a}\bra{b}n_j'\sigma_j\sigma_y\\\notag
&=\frac{1}{2}\sum_{a,b,i,j}\sigma_y\ket{a}\bra{b}\sigma_y\otimes
(-n_i)\sigma_i\sigma_y\ket{a}\bra{b}(-n_j)\sigma_y\sigma_j\\\notag
&=(\1\otimes\hat{n}\cdot\vec{\sigma})(\sigma_y\otimes\sigma_y)\ensuremath{\op{\rho}_+}(\sigma_y\otimes\sigma_y)(\1\otimes\hat{n}\cdot\vec{\sigma})\\
&=\op{N}(\ensuremath{\op{\rho}_+}).
\end{align}
\Eref{eq:inv1} and \Eref{eq:inv2} together imply that
$\tilde{\nrho{1}} = \nrho{1}$, and hence $\sqrt{\sqrt{\nrho{1}}\,\tilde{\nrho{1}}\,\sqrt{\nrho{1}}}=\nrho{1}$. Thus in order to calculate the
concurrence of \nrho{1} we need to know only its eigenvalues. The
matrix is
\begin{widetext}
\begin{align}
\label{eq:gen-rho1-defn}
&\nrho{1} = \frac{1-p}{2}\left[
\begin{matrix}
r + n_z^2 & (n_x-in_y)n_z & (n_x + in_y)n_z & r-n_z^2 \\
(n_x+in_y)n_z & n_x^2 + n_y^2 & (n_x + in_y)^2 & -(n_x+in_y)n_z\\
(n_x-in_y)n_z & (n_x-in_y)^2 & n_x^2 + n_y^2 & -(n_x-in_y)n_z \\
r-n_z^2 & -(n_x-in_y)n_z & -(n_x+in_y)n_z & r+n_z^2
\end{matrix}\right];
&r = \frac{p}{1-p}.
\end{align}
\end{widetext}
Amazingly enough, the eigenvalues of this matrix are $\{p, 1-p,0,0\}$
giving a concurrence
\begin{equation}
\label{eq:gen-con}
\op{C} = |2p-1|.
\end{equation}
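This can also be checked numerically. The following sketch (illustrative only; the value of $p$ and the random $\hat{n}$ are arbitrary) builds \nrho{1} for a random unit vector and confirms the spectrum $\{p,1-p,0,0\}$.
\begin{verbatim}
# Numerical check of the spectrum of rho_(1) for the n.sigma channel.
import numpy as np

p = 0.3
n = np.random.randn(3)
n /= np.linalg.norm(n)                     # random unit vector
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |Phi+>
rho_plus = np.outer(psi, psi.conj())
U = np.kron(np.eye(2), n[0] * sx + n[1] * sy + n[2] * sz)  # 1 (x) n.sigma
rho1 = p * rho_plus + (1 - p) * U @ rho_plus @ U.conj().T

print(np.round(np.linalg.eigvalsh(rho1), 6))  # -> 0, 0, min(p,1-p), max(p,1-p)
\end{verbatim}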
This is the same result as was obtained for the bit flip channel in
\Eref{eq:rho1-con}. The fact that $(\hat{n}\cdot\vec{\sigma})^2 = \1$ ensures that
\begin{equation}
\label{eq:gen-n-form}
\nrho{n} = P_n\ensuremath{\op{\rho}_+} + (1-P_n)\op{N}(\ensuremath{\op{\rho}_+}).
\end{equation}
It can easily be demonstrated that this $P_n$ is the same as was obtained
for the bit flip channel in \Eref{eq:P_n-defn}. Thus, the analysis
carries over entirely for the $\hat{n}\cdot\vec{\sigma}$ channel, in the case of
entanglement of formation.
For the distillable entanglement, the situation is complicated by the
absence of any method for calculating the entanglement for an
arbitrary density matrix. However, for the special cases of
$\hat{n}=(1,0,0), (0,1,0)$ or $(0,0,1)$ \nrho{1} remains in Bell
diagonal form. As a result its distillable entanglement is easily
calculated to be $1-h(p)$, as in \Eref{eq:dist-defn}.
\section{Conclusion}
Noise reduces bipartite entanglement (of a singlet) exponentially, at a rate that
depends on how non-uniform the noise probability is. The greater the
distance of the noise probability $p$ from $1/2$, the less severe the
noise. While the noise never totally destroys the entanglement, it
does make it negligible very quickly.
Interestingly, noise seems to affect states differently. The
distillable entanglement of the singlet reduces faster than that of
\nrho{k}. Theoretically, this is interesting behavior in
itself. There seems to be no \textit{a priori} reason why the singlet
should be more fragile than its less entangled counterparts.
Practically, it is of importance as it suggests that
entanglement should not be stored in the form of singlets.
The rate of loss of entanglement of formation was found to be the same
for the generalized $\hat{n}\cdot\vec{\sigma}$ channel as that for the
$\1\otimes\sigma_x$ channel. The rate of loss of distillable
entanglement for the special cases $\hat{n}=(1,0,0),(0,1,0)$ and
$(0,0,1)$ was uniformly $1-h(p)$. It is conjectured that this is the
rate of loss of distillable entanglement for arbitrary $\hat{n}$.
\begin{acknowledgments}
I would like to thank Dr. R. Simon for his advice and for many
stimulating discussions. I would also like to thank Dr. Ajay
Patwardhan for his support and guidance over the years. I also
acknowledge the support in the form of a Summer Fellowship from the
Indian Academy of Sciences and Institutional support from the
Institute of Mathematical Sciences, Chennai, without which this
paper would never have been written.
\end{acknowledgments}
\end{document}
|
\begin{document}
\title[Solutions of 3D NSE in $B^{-1}_{\infty,\infty}$]
{On the regularity of weak solutions of the 3D Navier-Stokes
equations in $B^{-1}_{\infty,\infty}$}
\author{A. Cheskidov}
\address[A. Cheskidov]
{Department of Mathematics\\
University of Chicago\\
5734 S. University Avenue\\
Chicago, IL 60637}
\email{[email protected]}
\author{R. Shvydkoy}
\thanks{The work of R. Shvydkoy was partially supported by NSF grant
DMS--0604050}
\address[R. Shvydkoy]
{Department of Mathematics, Stat. and Comp. Sci.\\
University of Illinois\\
Chicago, IL 60607}
\email{[email protected]}
\begin{abstract}
We show that if a Leray-Hopf solution $u$ to the 3D Navier-Stokes
equation belongs to $C((0,T]; B^{-1}_{\infty,\infty})$ or its jumps
in the $B^{-1}_{\infty,\infty}$-norm do not exceed a constant
multiple of viscosity, then $u$ is regular on $(0,T]$. Our method
uses frequency local estimates on the nonlinear term, and yields an
extension of the classical Ladyzhenskaya-Prodi-Serrin criterion.
\end{abstract}
\keywords{Navier-Stokes equation, Leray-Hopf solutions,
Ladyzhenskaya-Prodi-Serrin criterion, Besov spaces}
\subjclass[2000]{Primary: 76D03 ; Secondary: 35Q30}
\maketitle
\section{Introduction}
We study the 3D incompressible Navier-Stokes equations (NSE)
\begin{equation} \label{NSE}
\left\{
\begin{aligned}
&\p_t u - \nu \Delta u + (u \cdot \nabla)u + \nabla p = 0, \qquad x \in \mathbb{R}^3, t\geq 0,\\
&\nabla \cdot u =0,\\
&u(0)=u_0,
\end{aligned}
\right.
\end{equation}
where $u(x,t)$, the velocity, and $p(x,t)$, the pressure, are unknowns,
$u_0 \in L^2(\mathbb{R}^3)$ is the initial condition,
and $\nu>0$ is the kinematic viscosity coefficient of the fluid.
In 1934 Leray \cite{Leray} proved that
for every divergence-free initial data $u_0 \in L^2(\mathbb{R}^3)$
there exists a weak solution to the system \eqref{NSE} on $[0,\infty)$
with $u(0) = u_0$. Moreover, one can find a weak solution satisfying the
energy inequality (see Section~\ref{preliminaries}), called the Leray-Hopf solution.
A famous open question, equivalent to one of the millennium
problems, is whether all Leray-Hopf solutions to \eqref{NSE} on
$[0,\infty)$ with smooth initial data are regular. A weak solution
$u(t)$ is called regular if $\|u(t)\|_{H^1}$ is continuous. Even
though the regularity problem is far from been solved, numerous
regularity criteria were proved since the work of Leray. The first
result of this kind is due to Leray in the $\ensuremath{\mathbb{R}}^3$ case
(see also Ladyzhenskaya, Serrin, and Prodi \cite{L,S,P} in the case of a
bounded domain). It states that every
Leray-Hopf solution $u$ to \eqref{NSE} with $u \in L^{r}((0,T);
L^{s})$ is regular on $(0,T]$ provided $2/r+3/s = 1$, $s \in (3,
\infty]$. The limit case $r=\infty$ appeared to be more challenging.
First, Leray \cite{L} showed that Leray-Hopf solutions in
$L^{\infty}((0,T); L^p)$ are regular provided $p>3$. Later von Wahl
\cite{W} and Giga \cite{G} increased the regularity space to
$C((0,T]; L^{3})$, and, finally,
Escauriaza, Seregin, and \v{S}ver\'ak \cite{ESS} increased it further
to $L^{\infty}((0,T); L^{3})$.
Due to this remarkable result, if a solution loses regularity, then
its $L^3$-norm blows up. The space $L^3$, as well as all the other
critical spaces for the 3D NSE (see Cannone \cite{C}), is
continuously embedded in the Besov space $B^{-1}_{\infty,\infty}$.
Hence, a natural question is whether $B^{-1}_{\infty,\infty}$-norm
has to blow up if a solution loses regularity. In this paper we
prove a slightly weaker statement. Namely, if a Leray-Hopf solution
$u$ to \eqref{NSE} is continuous in $B^{-1}_{\infty,\infty}$ on the
interval $(0,T]$, then $u$ is regular on $(0,T]$. In fact we will
prove that regularity of the solution is guaranteed even if $u$ is
discontinuous but all its jumps are smaller than a constant proportional to
the viscosity (see \thm{t:main}). Thus, discontinuities of a limited
size are allowed. As a consequence, we conclude that Leray-Hopf
solutions with $B^{-1}_{\infty,\infty}$-norm bounded by $c\nu$ are
regular. This last result will also be shown in the ``almost
everywhere'' version (see Corollary \ref{c:ae}).
A related direction in the study of the 3D NSE concerns solutions
with small initial data. The best present result due to Koch and
Tataru \cite{KT} states that if the initial data is small in
$BMO^{-1}$, then there exists a global in time regular solution of
the 3D NSE with this initial data. Note that the space $BMO^{-1}$ is
continuously embedded in $B^{-1}_{\infty,\infty}$. The small
initial data theorem in this larger space is unknown, but
would be desirable to obtain.
We carry out our arguments in the case of $\ensuremath{\mathbb{R}}^3$, but they apply to
the periodic case as well.
\section{Preliminaries} \label{preliminaries}
\subsection{Besov spaces}
We will use the notation $\lambda_q = 2^q$ (in some inverse length
units). Let $B_r$ denote the ball centered at $0$ of radius $r$
in $\ensuremath{\mathbb{R}}^{3}$. Let us fix a nonnegative radial function $\chi \in {C_0^{\infty}} (B_2)$ such that $\chi(\xi)=1$ for $|\xi|\leq
1$. We further define
\begin{equation} \label{defvf}
\f(\xi) = \chi(\l_1^{-1}\xi) - \chi(\xi).
\end{equation}
and
\begin{equation} \label{defvfq}
\f_q(\xi) = \f(\l_q^{-1}\xi).
\end{equation}
For a tempered distribution vector field $u$ let us denote
\begin{align*}
u_q & = {\mathcal F}^{-1}(\f_q) \ast u, \quad \text{for} \quad q > -1,\\
u_{-1} & = {\mathcal F}^{-1}(\chi) \ast u,
\end{align*}
where ${\mathcal F}$ denotes the Fourier transform. So, we have
$$
u = \sum_{q=-1}^\infty u_q
$$
in the sense of distributions. We also use the following notation
$$
u_{\leq q} = \sum_{p=-1}^q u_p.
$$
Let us recall the definition of Besov spaces. A tempered distribution $u$ belongs to $B^s_{p,\infty}$ for some $s\in \ensuremath{\mathbb{R}}$ and $p\geq 1$ iff
$$
\|u\|_{B^s_{p,\infty}} = \sup_{q\geq -1} \l_q^s \|u_q\|_p < \infty.
$$
\subsection{Weak solutions of the 3D NSE}
In this section we recall some of the classical definitions and
results on weak solutions of the NSE.
\begin{definition}
A weak solution of \eqref{NSE} on $[0,T]$ (or $[0,\infty)$ if $T=\infty$)
is a function
$u:[0,T] \to L^2(\ensuremath{\mathbb{R}}^3)$ in the class
\[
u \in C_{\mathrm{w}}([0,T];L^2(\ensuremath{\mathbb{R}}^3)) \cap L^2_{\rm loc}([0,T];H^1(\ensuremath{\mathbb{R}}^3)),
\]
satisfying
\begin{multline}\label{weakform}
(u(t),\f(t)) + \int_0^t \left\{ - (u,\p_t \f) + \nu (\n u,\n \f) + (u\cdot \n u, \f) \right\} \, ds\\ = (u_0,\f(0)),
\end{multline}
for all $t\in[0,T]$ and all test functions
$\f \in C_0^\infty([0,T]\times \ensuremath{\mathbb{R}}^3)$ with $\n \cdot \f = 0$. Here
$(\cdot,\cdot)$ stands for the $L^2$-inner product.
\end{definition}
\begin{theorem}[Leray] \label{thm:Leray}
For every $u_0 \in L^2(\ensuremath{\mathbb{R}}^3)$, there exists a weak solution of \eqref{NSE}
on $[0,\infty)$ satisfying the following energy inequality
\begin{equation}\label{SEI}
\|u(t)\|_2^2 + 2\nu \int_{t_0}^t \|\n u(s)\|_2^2 \, ds \leq \|u(t_0)\|_2^2,
\end{equation}
for almost all $t_0 \in [0,\infty)$ including $t_0=0$, and all $t \in [t_0,\infty)$.
\end{theorem}
\begin{definition}
A Leray-Hopf solution of \eqref{NSE} on $[0,T]$ is a weak solution
on $[0,T]$ satisfying the energy inequality \eqref{SEI}
for almost all $t_0 \in (0,T)$ and all $t \in [t_0,T]$.
\end{definition}
A weak solution $u(t)$ of \eqref{NSE} on a time interval $I$ is called regular
if $\|u(t)\|_{H^1}$ is continuous on $I$.
The following is a well-known fact concerning Leray-Hopf solutions.
Note that weak solutions are not known to satisfy this property.
\begin{theorem}[Leray] \label{t:LHreg}
Let $u(t)$ be a Leray-Hopf solution of \eqref{NSE} on $[0,T]$. If
for every interval of regularity $(\alpha,\beta) \subset (0,T)$
\[
\limsup_{t \to \beta -} \|u(t)\|_{H^{s}} < \infty,
\]
for some $s > 1/2$, then $u(t)$ is regular on $(0,T]$.
\end{theorem}
\section{Main result}
In this section we state and prove our main result. Its proof is a
consequence of the limiting case of other regularity criteria in the
subcritical range of integrability parameter. We will discuss how
those criteria extend some of the classical and newly found results
on regularity.
\begin{theorem}\label{t:main}
Let $u(t)$ be a Leray-Hopf solution of \eqref{NSE} on $[0,T]$. If
$u(t)$ satisfies
\begin{equation} \label{eq:regcriterion}
\sup_{t\in(0,T]}\limsup_{t_0 \to t-} \|u(t) -
u(t_0)\|_{B^{-1}_{\infty,\infty}} < c\nu,
\end{equation}
where $c>0$ is some absolute constant, then $u(t)$ is regular on
$(0,T]$.
In particular, if $$u \in C((0,T];B^{-1}_{\infty,\infty}),$$ then
$u(t)$ is regular on $(0,T]$.
\end{theorem}
The following criterion will be a crucial ingredient in the proof of
\thm{t:main}.
\begin{lemma}\label{L:1}
Let $u(t)$ be a Leray-Hopf solution of \eqref{NSE} on $[0,T]$. There
is a constant $c>0$ such that if $u(t)$ satisfies
\begin{equation} \label{1}
\limsup_{q \to \infty} \sup_{t \in (0,T)} \l_q^{-1}
\|u_q(t)\|_{\infty} < c \nu,
\end{equation}
then $u(t)$ is regular on $(0,T]$.
\end{lemma}
\begin{proof}
Let $(\alpha, \beta) \subset (0,T]$ be an interval of regularity. Let us fix $\e\in(0,1)$ and use the weak formulation of the NSE with the test-function
$\l_q^{1+\e}(u_q)_q$. Then on $(\alpha,\beta)$ we obtain
\begin{equation}\label{nseineq-1}
\frac{1}{2}\frac{d}{dt}\l_q^{1+\e}\|u_q\|_2^2 + \nu \l_q^{3+\e}\|u_q\|_2^2
\leq \l_q^{1+\e}\int \tr[ (u\otimes u)_q \cdot \n u_q]\, dx.
\end{equation}
We can write
\[
(u \otimes u)_q = r_q(u,u) + u_q \otimes u + u \otimes u_q,
\]
for all $q>-1$, where
\[
r_q(u,u)(x) = \int_{\ensuremath{\mathbb{R}}^3} {\mathcal F}^{-1}(\f_q)(y) (u(x-y) - u(x)) \otimes (u(x-y) - u(x)) \, dy.
\]
After proper cancelations we arrive at
\begin{equation*}
\int_{\ensuremath{\mathbb{R}}^3} \tr[ (u\otimes u)_q \cdot \n u_q] \,dx = \int_{\ensuremath{\mathbb{R}}^3} r_q(u,u) \cdot \n u_q \,dx -
\int_{\ensuremath{\mathbb{R}}^3} u_q \cdot \n u_{\leq q+1} \cdot u_q \,dx.
\end{equation*}
Let us now estimate the first term using H\"{o}lder and Bernstein's inequalities
\[
\int_{\ensuremath{\mathbb{R}}^3} r_q(u,u) \cdot \n u_q \,dx \lesssim \|r_q(u,u)\|_{3/2} \l_q \|u_q\|_3.
\]
Using Littlewood-Paley decomposition we obtain as in \cite{ccfs}
\[
\begin{split}
\|r_q(u,u)\|_{3/2} &\lesssim
\int_{\ensuremath{\mathbb{R}}^3} \left |{\mathcal F}^{-1}(\f_q)(y) \right |\|u(\cdot - y) - u(\cdot)\|_3^2 \,dy \\
&\lesssim \int_{\ensuremath{\mathbb{R}}^3} \left |{\mathcal F}^{-1}(\f_q)(y) \right| \left(
\sum_{p =-1}^q |y|^2 \l_p^2 \|u_p\|_3^2 + \sum_{p=q+1}^\infty \|u_p\|_3^2\right) dy \\
&\lesssim \sum_{p = -1}^q \l_q^{-2} \l_p^2 \|u_p\|_3^2 + \sum_{p=q+1}^\infty \|u_p\|_3^2.
\end{split}
\]
Analogously,
$$
\int_{\ensuremath{\mathbb{R}}^3} u_q \cdot \n u_{\leq q+1} \cdot u_q \, dx \lesssim \|u_q\|_3^2 \sum_{p=-1}^{q+1} \l_p \|u_p\|_3.
$$
Thus, up to a constant multiple independent of $u$, the nonlinear term in
\eqref{nseineq-1} is bounded by
\[
\l_q^\e \|u_q\|_3 \sum_{p =-1}^q \l_p^2 \|u_p\|_3^2 + \l_q^{2+\e}\|u_q\|_3 \sum_{p = q+1}^\infty \|u_p\|_3^2 +
\l_q^{2+\e} \|u_q\|_3^2 \sum_{p = -1}^{q+1} \l_p \|u_p\|_3.
\]
Let us now fix $Q \in \ensuremath{\mathbb{N}}$ and sum over $q \geq Q$ in \eqref{nseineq-1} on
$(\alpha,\beta)$ obtaining
\begin{equation} \label{eq:ineq123}
\frac{1}{2}\frac{d}{dt}\sum_{q\geq Q} \l_q^{1+\e}\|u_q\|_2^2 + \nu \sum_{q = Q}^\infty \l_q^{3+\e}\|u_q\|_2^2 \lesssim
I + II + III,
\end{equation}
where
\begin{align*}
I & = \sum_{q = Q}^\infty \l_q^\e \|u_q\|_3 \sum_{p =-1}^q \l_p^2 \|u_p\|_3^2,\\
II & = \sum_{q = Q}^\infty\l_q^{2+\e}\|u_q\|_3 \sum_{p = q+1}^\infty \|u_p\|_3^2,\\
III& = \sum_{q = Q}^\infty \l_q^{1+\e} \|u_q\|_3^2 \sum_{p = -1}^{q+1} \l_p \|u_p\|_3.
\end{align*}
We will show that the right hand side of \eqref{eq:ineq123} obeys the following estimate
\begin{equation}\label{iii}
I+II+III \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3 + R(Q),
\end{equation}
where
\[
R(Q) = \sup_{t \in (0,T)} \sum_{q =-1}^{Q-1} \l_q^{2+\e} \|u_q\|_3^3.
\]
Once this is proved, the argument goes as follows. By interpolation,
$\|u_q\|_3^3 \leq \|u_q\|_2^{2} \|u_q\|_\infty$. So,
\[
\frac{1}{2}\frac{d}{dt}\sum_{q=Q}^\infty \l_q^{1+\e}\|u_q\|_2^2 + \nu \sum_{q = Q}^\infty \l_q^{3+\e}\|u_q\|_2^2 \leq
C \sum_{q = Q}^\infty \l_q^{3+\e} \|u_q\|_2^2 ( \l_q^{-1} \|u_q\|_\infty) + CR(Q),
\]
where $C>0$ is an absolute constant independent of $u$.
Let $c = \nu/C$. Choosing $Q$ so that $\l_q^{-1} \|u_q(t)\|_\infty < c$ for all $q \geq Q$ and all $t \in (0,T)$, we obtain
\[
\frac{1}{2}\frac{d}{dt}\sum_{q=Q}^\infty \l_q^{1+\e}\|u_q\|_2^2 \leq CR(Q),
\]
for any $t \in (\alpha,\beta)$. Since $u(t)$ is bounded in $L^2$ on $(0,T]$, it follows that $R(Q)<\infty$. Hence the $H^{1/2+\e}$-norm of $u(t)$ is bounded on $(\a,\b)$. Therefore $u(t)$ is regular
on $(0,T]$ due to Theorem~\ref{t:LHreg}.
Let us now prove \eqref{iii}. Indeed, using Young and Jensen's inequalities
we obtain
\begin{align*}
I & = \sum_{q = Q}^\infty \l_q^\e \|u_q\|_3 \sum_{p =-1}^q \l_p^2 \|u_p\|_3^2\\
& = \sum_{q = Q}^\infty \l_q^{\frac{2}{3} + \frac{\e}{3}} \|u_q\|_3 \sum_{p =-1}^q(\l_p\l_q^{-1})^{\frac{2}{3} - \frac{2\e}{3}} \l_p^{\frac{4}{3} + \frac{2\e}{3}} \|u_p\|_3^2 \\
& \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3 + \sum_{q = Q}^\infty \sum_{p =-1}^q(\l_p\l_q^{-1})^{\frac{2}{3} - \frac{2\e}{3}} \l_p^{2+\e} \|u_p\|_3^3\\
& \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3 + R(Q).
\end{align*}
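\noindent (In the estimate of $I$ above, Young's inequality is used with the conjugate exponents $3$ and $\frac32$, namely $ab\leq\frac13 a^3+\frac23 b^{3/2}$, applied to $a=\l_q^{\frac{2+\e}{3}}\|u_q\|_3$ and $b=\l_p^{\frac{2(2+\e)}{3}}\|u_p\|_3^2$; the factors $(\l_p\l_q^{-1})^{\frac23-\frac{2\e}{3}}\leq 1$ give a convergent geometric sum over $p\leq q$, and the terms with $p<Q$ are absorbed into $R(Q)$.)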
Similarly,
\begin{align*}
II & = \sum_{q = Q}^\infty\l_q^{2+\e}\|u_q\|_3 \sum_{p = q+1}^\infty \|u_p\|_3^2\\
& = \sum_{q = Q}^\infty \l_q^{\frac{2}{3} + \frac{\e}{3}} \|u_q\|_3 \sum_{p = q+1}^\infty (\l_q\l_p^{-1})^{\frac{4}{3}+\frac{2\e}{3}} \l_p^{\frac{4}{3}+\frac{2\e}{3}} \|u_p\|_3^2 \\
& \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3 + \sum_{q = Q}^\infty \sum_{p = q+1}^\infty(\l_q\l_p^{-1})^{\frac{4}{3}+\frac{2\e}{3}} \l_p^{2+\e} \|u_p\|_3^3 \\
& \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3.
\end{align*}
And finally,
\begin{align*}
III & = \sum_{q = Q}^\infty \l_q^{1+\e} \|u_q\|_3^2 \sum_{p = -1}^{q+1} \l_p \|u_p\|_3\\
& = \sum_{q = Q}^\infty \l_q^{\frac{4}{3}+\frac{2\e}{3}} \|u_q\|_3^2 \sum_{p = -1}^{q+1} (\l_p\l_q^{-1})^{\frac{1}{3} - \frac{\e}{3}} \l_p^{\frac{2}{3} + \frac{\e}{3}} \|u_p\|_3 \\
& \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3 + \sum_{q = Q}^\infty\sum_{p = -1}^{q+1}(\l_p\l_q^{-1})^{\frac{1}{3} - \frac{\e}{3}} \l_p^{2+\e} \|u_p\|_3^3\\
& \lesssim \sum_{q = Q}^\infty \l_q^{2+\e} \|u_q\|_3^3 + R(Q).
\end{align*}
\end{proof}
It is worth mentioning that this lemma immediately yields the
following corollary. For mild solutions in the sense of Kato a
similar result was proved in \cite{CF}.
\begin{corollary}\label{c:ae}
Let $u(t)$ be a Leray-Hopf solution of \eqref{NSE} on $[0,T]$.
There is a constant $c>0$ such that if $u(t)$ satisfies
$$\|u\|_{L^\infty((0,T);B^{-1}_{\infty,\infty})} < c \nu,$$ then
$u(t)$ is regular on $(0,T]$.
\end{corollary}
Now we are in a position to prove \thm{t:main}.
\begin{proof}[Proof of Theorem~\ref{t:main}]
Take any interval of regularity $(\alpha,\beta) \subset (0,T]$.
Since $u(t)$ is a Leray-Hopf solution, $u(t) \in
C^{\infty}(\mathbb{R}^3)$ for all $t \in (\alpha,\beta)$. Hence
\[
\phi(t):=\limsup_{q \to \infty} \l_q^{-1}\|u_q(t)\|_\infty=0, \qquad
\forall t \in (\alpha,\beta).
\]
Then, thanks to \eqref{eq:regcriterion}, $\phi(\beta) < c\nu$. This
together with \eqref{eq:regcriterion} implies that there exists $t_0
\in (\alpha, \beta)$, such that
\[
\limsup_{q \to \infty} \sup_{s \in (t_0,\beta)}
\l_q^{-1}\|u_q(s)\|_\infty < 2 c\nu.
\]
Thus, if $c$ is chosen to be half of what is in Lemma~\ref{L:1}, then that
lemma implies regularity of $u(t)$ on $(\alpha, \beta]$, i.e.,
continuity of $\|u(t)\|_{H^1}$ on $(\alpha, \beta]$. Hence $u(t)$ is
regular on $(0,T]$ in view of Theorem~\ref{t:LHreg}.
\end{proof}
\section{Extension of the Ladyzhenskaya-Prodi-Serrin criteria} \label{s:lemma:main}
We conclude with the following regularity criterion in the subcritical
range of integrability exponent.
\begin{lemma} \label{l:main}
Let $u(t)$ be a Leray-Hopf solution of \eqref{NSE} on $[0,T]$.
There is a constant $c>0$ such that if $u(t)$ satisfies
\begin{equation} \label{2}
\sup_{t \in (0,T]} \lim_{t_0 \to t-} \limsup_{q \to \infty} \int_{t_0}^t \left( \l_q^{\frac{2}{r}-1} \|u_q(s)\|_{\infty}
\right)^r \, ds< \nu^{r-1}c^r\left(\frac{r}{r-1}\right)^{r-1},
\end{equation}
for some $r\in(2,\infty)$, then $u(t)$ is regular on $(0,T]$.
\end{lemma}
\begin{proof}
Let $(\alpha, \beta) \subset (0,T]$ be an interval of regularity. We
use the weak formulation of the NSE with the test-function
$\l_q^{2r-2}(u_q)_q$. Then on $(\alpha,\beta)$ we obtain
\begin{equation}\label{nseineq}
\frac{1}{2}\frac{d}{dt}\l_q^{2r-2}\|u_q\|_2^2 \leq -\nu \l_q^{2r}\|u_q\|_2^2
+ \l_q^{2r-2}\int_{\ensuremath{\mathbb{R}}^3} \tr[ (u\otimes u)_q \cdot \n u_q]\, dx.
\end{equation}
As in the proof of Lemma~\ref{L:1},
we have the following bound on the nonlinear term
\[
\int_{\ensuremath{\mathbb{R}}^3} \tr[ (u\otimes u)_q \cdot \n u_q] \,dx \leq
C_1 \sum_{p=-1}^\infty \l_{|q-p|}^{-\frac{2}{3}} \l_p \|u_p\|_3^3,
\]
where $C_1>0$ is an absolute constant.
Thanks to Young and Jensen's inequalities,
\begin{multline*}
\l_q^{2r-2}\|u_q\|_2^{2r-2} C_1\sum_{p=-1}^\infty \l_{|q-p|}^{-\frac{2}{3}} \l_p \|u_p\|_3^3 \leq \nu \l_q^{2r} \|u_q\|_2^{2r} \\+
(rA)^{-1} \sum_{p=-1}^\infty \l_{|q-p|}^{-\frac{2}{3}} \l_p^r \|u_p\|_3^{3r},
\end{multline*}
where
\[
A = \nu^{r-1}C^{-r}\left(\frac{r}{r-1}\right)^{r-1},
\]
and $C>0$ is an absolute constant independent of $u$.
Therefore on $(\alpha,\beta)$ we have
\begin{equation} \label{eq:qw}
\begin{split}
\frac{1}{2r}\frac{d}{dt} \l_q^{2r-2} \|u_q\|_2^{2r} &\leq
-\nu \l_q^{2r} \|u_q\|_2^{2r} + \l_q^{2r-2}\|u_q\|_2^{2r-2} C_1\sum_{p=-1}^\infty \l_{|q-p|}^{-\frac{2}{3}} \l_p \|u_p\|_3^3\\
& \leq (rA)^{-1} \sum_{p=Q+1}^\infty \l_{|q-p|}^{-\frac{2}{3}} \l_p^r \|u_p\|_3^{3r} + R(Q),
\end{split}
\end{equation}
where
\[
R(Q) = (rA)^{-1} \sup_{t \in (0,T)}
\sum_{p =-1}^{Q} \l_p^r \|u_p\|_3^{3r},
\]
$r\in(2,\infty)$, and $Q\in \mathbb{N}$. Note that since $u(t)$ is bounded in $L^2$ on $(0,T]$,
we have $R(Q)<\infty$.
Now assume that \eqref{2} holds for $c=(C\sqrt{24})^{-1}$. Then there exist
$t_0 \in (\alpha, \beta)$ and $Q$ large enough, such that
\[
\sup_{q>Q} \int_{t_0}^\beta(\l_q^{\frac{2}{r}-1}\|u_q(s)\|_\infty)^r \, ds \leq
{\textstyle \frac{1}{24}} A.
\]
Let $t\in(t_0,\beta)$. Integrating \eqref{eq:qw} we obtain
\begin{multline} \label{eq:ghjk}
\sup_{s\in(t_0,t)}\l_q^{2r-2} \|u_q(s)\|_2^{2r} -
\l_q^{2r-2} \|u_q(t_0)\|_2^{2r}\\ \leq
2A^{-1} \int_{t_0}^t \sum_{p=Q+1}^\infty \l_{|q-p|}^{-\frac{2}{3}} \l_p^r \|u_p(s)\|_3^{3r} \, ds +R(Q)(t-t_0).
\end{multline}
Thanks to Levi's convergence theorem
we have
\[
\begin{split}
\int_{t_0}^t \sum_{p=Q+1}^\infty & \l_{|q-p|}^{-\frac{2}{3}} \l_p^r \|u_p(s)\|_3^{3r} \, ds\\
&\leq \sup_{p> Q} \left\{ \sup_{s\in(t_0,t)}\l_p^{2r-2} \|u_p(s)\|_2^{2r}\int_{t_0}^t(\l_p^{\frac{2}{r}-1}\|u_p(s)\|_\infty)^r \, ds
\right\} \sum_{p=1}^\infty \l_{|q-p|}^{-\frac{2}{3}}\\
&\leq 6\sup_{p> Q} \left\{\sup_{s\in(t_0,t)}\l_p^{2r-2} \|u_p(s)\|_2^{2r} \right\}\sup_{p> Q} \int_{t_0}^t(\l_p^{\frac{2}{r}-1}\|u_p(s)\|_\infty)^r \, ds
\\
&\leq {\textstyle \frac{1}{4}}A \sup_{p> Q} \sup_{s\in(t_0,t)}\l_p^{2r-2} \|u_p(s)\|_2^{2r}.
\end{split}
\]
Hence, taking the supremum of \eqref{eq:ghjk} over $q>Q$, we obtain
\[
{\textstyle \frac{1}{2}}\sup_{q> Q} \sup_{s\in(t_0,t)}\l_q^{2r-2} \|u_q(s)\|_2^{2r} \leq
\sup_{q> Q} \l_q^{2r-2} \|u_q(t_0)\|_2^{2r} + R(Q)(t-t_0).
\]
Letting $t \ra \b$ we now have
$$
\sup_{q > Q} \sup_{s \in (t_0, \b)} \l_q^{1- \frac{1}{r}} \|u_q(s)\|_2 < \infty.
$$
Thus, since $r>2$, the solution $u(t)$ is bounded in $H^{s}$
on $(t_0, \beta)$ for some $s>1/2$, which implies regularity
in view of \thm{t:LHreg}.
\end{proof}
Let us now see how this lemma
implies an extension of the classical Ladyzhenskaya-Prodi-Serrin
condition in the subcritical range of integrability exponent.
Indeed, by the Bernstein inequalities and the Littlewood-Paley theorem we
have the inclusions
\begin{align*}
B^{\frac{3}{s} + \frac{2}{r}-1}_{s,\infty} & \ss
B^{\frac{2}{r}-1}_{\infty,\infty},\\
L^s &\ss B^0_{s,\infty},
\end{align*}
for all $s\geq 1$.
On the other hand, as an easy consequence of Lemma~\ref{l:main} we
have the following theorem.
\begin{theorem}\label{c:crit}
Let $u(t)$ be a Leray-Hopf solution of \eqref{NSE} on $[0,T]$
satisfying
\begin{equation}\label{e:crit}
u \in L^{r}((0,T); B^{\frac{2}{r}-1}_{\infty,\infty}),
\end{equation}
for some $r \in (2,\infty)$. Then $u(t)$ is regular on $(0,T]$.
\end{theorem}
Let us also note that for a negative smoothness parameter as in
\eqref{e:crit}, the homogeneous version of the Besov space is smaller
than the nonhomogeneous one, i.e.,
$$
\dot{B}^{\frac{2}{r}-1}_{\infty,\infty} \ss
B^{\frac{2}{r}-1}_{\infty,\infty}, \quad \text{for all} \quad r >2.
$$
In view of this fact, Theorem~\ref{c:crit} extends the
corresponding result obtained recently by Q. Chen and Z. Zhang in
\cite{CZ}.
\end{document}
|
\begin{document}
\title{Approximation of Steiner Forest via the Bidirected Cut Relaxation}
\begin{abstract}
The classical algorithm of Agrawal, Klein and Ravi [SIAM J. Comput., 24 (1995), pp. 440-456], stated in the setting of the primal-dual schema by Goemans and Williamson [SIAM J. Comput., 24 (1995), pp. 296-317] uses the undirected cut relaxation for the Steiner forest problem. Its approximation ratio is $2-\frac{1}{k}$, where $k$ is the number of terminal pairs. A variant of this algorithm more recently proposed by K\"onemann et al. [SIAM J. Comput., 37 (2008), pp. 1319-1341] is based on the lifted cut relaxation. In this paper, we continue this line of work and consider the bidirected cut relaxation for the Steiner forest problem, which lends itself to a novel algorithmic idea yielding the same approximation ratio as the classical algorithm. In doing so, we introduce an extension of the primal-dual schema in which we run two different phases to satisfy connectivity requirements in both directions. This reveals more about the combinatorial structure of the problem. In particular, there are examples on which the classical algorithm fails to give a good approximation, but the new algorithm finds a near-optimal solution.
\end{abstract}
\section{Introduction}
The \textsf{Steiner forest} problem is one of the central problems in the field of approximation algorithms and network design. It is a natural generalization of the famous \textsf{Steiner tree} problem, and stands as the starting point for many other generalizations occupying a large fraction of network design literature. In this problem, one is given an undirected graph $G=(V,E)$, a cost function on the edges $c:E \rightarrow \mathbb{Q}^+$ and a set of terminal pairs $R=\{(s_1,t_1), \hdots, (s_k,t_k)\}$ (we set $n:=|V|$ and $m:=|E|$ throughout the paper). The objective is to find a subgraph $F$ of $G$ (which is necessarily a forest) of minimum cost $c(F) := \sum_{e \in F} c(e)$, which connects every terminal pair.
The \textsf{Steiner forest} problem had a pivotal role in the development of the fundamental techniques for the field of approximation algorithms, being the main problem of interest for the primal-dual schema with the idea of growing dual variables synchronously, which was introduced at the beginning of the 1990s. In particular, the famous approximation algorithm given by \cite{AKR} (henceforth called \texttt{AKR}), which has an approximation ratio of $2-\frac{1}{k}$, stimulated a series of results for similar problems, and in general for the area of network design and connectivity problems. This algorithm, stated in purely combinatorial terms, was later recast in the language of the primal-dual schema by \cite{GW}, who introduced a more general approach for approximating such problems. Both of these approaches make use of the \emph{undirected cut relaxation} (UCR) for the problem, which has an integrality gap of at least $2-\frac{1}{k}$.
Due to its importance in the field of approximation algorithms, the problem of finding an algorithm for \textsf{Steiner forest} with a constant approximation ratio better than $2$ was stated as one of the top ten open problems in the area in a recent textbook by \cite{Williamson-Shmoys}. However, since the appearance of the conference paper by \cite{AKR-conf}, no improved approximation algorithms have been discovered. Given how much we know about its special case, the \textsf{Steiner tree} problem, for which there are many different LP relaxations and algorithmic techniques, it is of great interest to see if there are variations in algorithmic ideas for \textsf{Steiner forest} even if they do not provide significant improvements in terms of the approximation ratio. In this respect, the fact that there had been a single constant factor approximation algorithm for the problem for a long time is also intriguing. Along these lines, a recent attempt by \cite{Gupta-Kumar} proves a constant factor approximation for a greedy algorithm, a result which does not make use of an LP relaxation. A work similar in vein, by \cite{Gross}, followed using local search.
This paper is motivated by the question of whether there are new LP relaxations for the \textsf{Steiner forest} problem, yielding novel algorithmic ideas. More relevant to this question, \cite{KLS} introduced a new LP relaxation called the \emph{lifted cut relaxation (LCR)}, motivated by a game-theoretic version of the problem. They show that LCR is stronger than UCR, and its integrality gap is at least $2-\frac{2}{k+1}$. The algorithm they present (henceforth called \texttt{KLS}), which computes a feasible dual with respect to LCR is a variant of \texttt{AKR} with a modified set of duals to be grown. Its approximation ratio is also $2-\frac{1}{k}$, although as the authors point out, the solution it returns is usually costlier than that of \texttt{AKR}.
The importance of delving more into the combinatorial structure of \textsf{Steiner forest} is also related to the more general \textsf{survivable network design} problem. Extensions of the usual approach inspired by \texttt{AKR} have only had limited success so far (\cite{GoemansGPSTW}; \cite{Williamson95}) toward the goal of a 2-approximation primal-dual algorithm for this problem.
\subsection{The results}
We introduce a new LP relaxation for the \textsf{Steiner forest} problem, which we call the \emph{bidirected cut relaxation} (BCR). This is inspired by the bidirected cut relaxation for the \textsf{Steiner tree} problem in which one replaces each edge by two arcs in both directions. It is an easy result that BCR is equivalent to UCR. We would like to stress the fact that our bidirected cut relaxation is not the same as the one introduced for Steiner tree (\cite{GoemansM93}; \cite{ChopraR94}), which can also be extended to the Steiner forest problem. Indeed, this relaxation has not been well exploited for both of the problems. In contrast, what we consider can be seen as a bidirected version of the usual undirected cut relaxation, which we use to construct \emph{two} paths between pairs in both directions.
Using our bidirected cut relaxation, we provide a new primal-dual algorithm for the \textsf{Steiner forest} problem with approximation ratio $2-\frac{1}{k}$. The algorithm is a novel extension of the primal-dual schema consisting of two phases with synchronous dual growth, one starting from the terminals $s_i$, and the other starting from the terminals $t_i$. We combine the results of these two phases followed by a standard pruning phase and a final reduction phase on certain subgraphs problematic for BCR. The proof of the approximation ratio also turns out to be quite different than the usual practice for primal-dual type algorithms. In the usual approach, the duals collide, and the set of edges they cover is considered by looking at the degrees of the duals. In our case, a set of duals growing against each other in different directions is considered.
To underline the differences between \texttt{AKR} and the new algorithm, we provide an example on which \texttt{AKR} and \texttt{KLS} give an approximation ratio arbitrarily close to $2-\frac{1}{k}$, whereas the new algorithm finds a near-optimal solution. We also provide a tight example for the new algorithm on which the approximation ratio is arbitrarily close to $2-\frac{1}{k}$.
Our approach enlarges the (small) set of $2$-approximation algorithms for the \textsf{Steiner forest} problem, which might stimulate new insights on how one can break the barrier of $2$, especially in light of the tight examples we present. More generally, given a problem with a cut-based relaxation involving terminal pairs, one can consider the bidirected version instead of the usual undirected one, thus possibly having a two-phase algorithm similar to the one in this paper. How widely applicable this is and whether it would yield improved results or new insights on a given problem is a question of interest.
\subsection{Organization}
The rest of the paper is structured as follows. In Section 2, we review the undirected cut relaxation together with \texttt{AKR} exploiting it. Section 3 introduces the bidirected cut relaxation for \textsf{Steiner forest} and the new primal-dual algorithm for \textsf{Steiner forest} using this relaxation. Section 4 provides the details of a straightforward implementation. In Section 5, we establish the approximation ratio of the new algorithm. In Section 6, we give the aforementioned tight examples.
\section{The undirected cut relaxation and \texttt{AKR}}
\begin{algorithm}[!b]
\caption{\texttt{AKR}$(G=(V,E), R, c)$}
$y \leftarrow 0$ \\
$F \leftarrow \emptyset$ \\
$\ell \leftarrow 0$
\BlankLine \BlankLine
// The augmentation phase \\
\While{not all $s_i$-$t_i$ pairs are connected in $(V,F)$} {
$\ell \leftarrow \ell+1$ \\
Let $\mathcal{C}$ be the set of all connected components $C$ of $(V,F)$ such that $|C \cap \{s_i,t_i\}| = 1$ for some $i$ \\
Increase $y_C$ for all $C \in \mathcal{C}$ uniformly until for some $e_{\ell} \in \delta(C'), C' \in \mathcal{C}, c(e_{\ell}) = \sum_{C:e_{\ell} \in \delta(C)} y_C$ \\
$F \leftarrow F \cup \{e_{\ell}\}$
}
\BlankLine \BlankLine
// The pruning phase \\
$F' \leftarrow F$ \\
\For{$j \leftarrow \ell$ downto $1$} {
\If{$F'-\{e_j\}$ is feasible} {
$F'\leftarrow F'-\{e_j\}$
}
}
\BlankLine \BlankLine
\Return $(F',y)$
\end{algorithm}
Let $\mathcal{S}$ be the set of subsets $S$ of $V$ that separate at least one terminal pair in $R$. In other words, $S \in \mathcal{S}$ if and only if there is $(s,t) \in R$ satisfying $|S \cap \{s,t\}| = 1$. We call an element in $\mathcal{S}$ a \emph{Steiner cut} or simply a \emph{cut}. Let $\delta(S)$ denote the set of edges with exactly one endpoint in $S$. The undirected cut relaxation of the problem is then as follows:
\begin{alignat*}{4}
\text{minimize} \qquad & \sum_{\substack{e \in E}} c(e) x_e & & \pushright{\text{(UCR)}} \\
\text{subject to} \qquad & \sum_{\substack{e \in \delta(S)}} x_e \geq 1, \qquad &\forall S \in \mathcal{S}, \\
& x_e \geq 0, &\forall e \in E.
\end{alignat*}
\noindent The dual of this linear program is
\begin{alignat*}{4}
\text{maximize} \qquad & \sum_{\substack{S \in \mathcal{S}}} y_S & & \pushright{\text{(UCR-D)}} \\
\text{subject to} \qquad & \sum_{\substack{S \in \mathcal{S}}: e \in \delta(S)} y_S \leq c(e), \qquad &\forall e \in E, \\
& y_S \geq 0, &\forall S \in \mathcal{S}.
\end{alignat*}
\texttt{AKR} synchronously grows dual variables corresponding to the cuts separating any pair. The sets corresponding to these cuts, which are selected to be minimal with respect to inclusion, are called \emph{minimal violated sets}. It iteratively improves the feasibility of the primal solution by taking edges whenever the corresponding constraints become tight. After arriving at a primal feasible solution, it removes the unnecessary edges, i.e. the edges whose removal does not violate feasibility, in the reverse order of their inclusion. The following is a standard result from \cite{GW}:
\begin{thm}[\cite{GW}]
\label{gw}
If $F'$ and $y$ are the set of edges and the dual variables returned by \texttt{AKR}, then
$$
\sum_{e \in F'} c(e) \leq \left(2-\frac{2}{|A|}\right) \cdot \sum_{S \subseteq V} y_S \leq \left(2-\frac{1}{k}\right) \cdot \sum_{S \subseteq V} y_S,
$$
\noindent where $|A|$ denotes the maximum number of minimal violated sets occurring during the algorithm.
\end{thm}
\section{The bidirected cut relaxation and the new primal-dual algorithm}
We first replace each edge $e=\{u,v\} \in E$ by two directed arcs $(u,v)$ and $(v,u)$ each with cost $\frac{1}{2}c(e)$. For a given cut $S \subseteq V$, we define $\delta^+(S) = \{(u,v) \in E| u \in S, v \notin S\}$, i.e. the set of arcs emanating from $S$. As usual, we let $\mathcal{S}$ be the set of cuts that separate at least one terminal pair: $S \in \mathcal{S}$ if and only if there is $(s,t) \in R$ satisfying $|S \cap \{s,t\}|=1$. Then, the following is a relaxation for the \textsf{Steiner forest} problem.
\begin{alignat*}{4}
\text{minimize} \qquad & \frac{1}{2} \sum_{e \in E} c(e) x_e & & \pushright{\text{(BCR)}} \\
\text{subject to} \qquad & \sum_{e \in \delta^+(S)} x_e \geq 1, \qquad
&\forall S \in \mathcal{S}, \\
& x_e \geq 0, &\forall e \in E.
\end{alignat*}
\noindent It is a straightforward result that (BCR) is equivalent to (UCR), i.e. they can be converted to each other with equal objective values by assigning appropriate values to the edges/arcs. In particular, for converting (UCR) to (BCR), the value of an undirected edge is assigned to both of the corresponding directed arcs. From (BCR) to (UCR), we assign the average of the values of the arcs in both directions to the corresponding undirected edge. Thus, the integrality gap of (BCR) equals that of (UCR), which is at least $2-\frac{1}{k}$.
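For concreteness, here is a sketch of the two conversions (using the fact that $S \in \mathcal{S}$ implies $V \setminus S \in \mathcal{S}$). Given a feasible solution $x$ of (UCR), setting $\tilde{x}_{(u,v)} = \tilde{x}_{(v,u)} = x_{\{u,v\}}$ for every $\{u,v\} \in E$ gives a feasible solution of (BCR) with the same objective value, since each arc carries cost $\frac{1}{2}c(e)$. Conversely, given a feasible solution $\tilde{x}$ of (BCR), setting $x_{\{u,v\}} = \frac{1}{2}\big(\tilde{x}_{(u,v)} + \tilde{x}_{(v,u)}\big)$ gives a feasible solution of (UCR) with the same objective value, because for every $S \in \mathcal{S}$ we have
\[
\sum_{e \in \delta(S)} x_e = \frac{1}{2}\left(\sum_{e \in \delta^+(S)}\tilde{x}_e + \sum_{e \in \delta^+(V \setminus S)}\tilde{x}_e\right) \geq 1.
\]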
Let us now write the dual of the linear program (BCR).
\begin{alignat*}{4}
\text{maximize} \qquad & \sum_{S \in \mathcal{S}} y_S & & \pushright{\text{(BCR-D)}} \\
\text{subject to} \qquad & \sum_{\substack{S: e \in \delta^+(S)}} y_S \leq \frac{1}{2} c(e), \qquad &\forall e \in E, \\
& y_S \geq 0, &\forall S \in \mathcal{S}.
\end{alignat*}
Similar to \texttt{AKR}, the new algorithm is also based on the primal-dual schema and growing dual variables in a synchronized fashion. However, since the underlying graph is a bidirected graph, the algorithm tries to satisfy the constraints of the primal program (BCR) by constructing a solution in both directions. This requires two distinct phases for the selection of arcs. In total, the algorithm consists of four phases to produce a feasible solution. In the first phase, we grow the dual variables starting from the terminals $s_i$, and continue the usual process of including arcs that go tight until there are directed paths from each $s_i$ to $t_i$. Note that this does not necessarily make a feasible solution as some arcs might only be taken in one direction. The solution constructed in the first phase is an input to the second phase in which we apply the same procedure, but this time starting to grow the dual variables from the terminals $t_i$. We continue until there are directed paths from each $t_i$ to $s_i$. By definition, the solution constructed in the first two phases contains bidirected paths between each terminal pair.
\begin{algorithm}[!ht]
\caption{\textsc{Bidirected-Primal-Dual($G=(V,E)$, $R$, $c$)}}
// Initialization \\
$y \leftarrow 0$ \\
$F \leftarrow \emptyset$ \\
$\ell \leftarrow 0$
\BlankLine \BlankLine
// The first augmentation phase \\
\While{there are terminal pairs in $R$ not connected by a directed $s_i$-$t_i$ path in $(V,F)$} {
$\ell \leftarrow \ell+1$ \\
Let $\mathcal{C}$ be the set of all minimal sets $C$ (w.r.t. inclusion) such that $|\delta^+(C) \cap F|=0$, and $s_i \in C$, but $t_i \notin C$ for some $i$ \\
Increase $y_C$ for all $C \in \mathcal{C}$ uniformly until for some $e_{\ell} \in \delta^+(C')$, $C' \in \mathcal{C}$, $c(e_{\ell}) = \sum_{C:e_{\ell} \in \delta^+(C)} y_C$ \\
$F \leftarrow F \cup \{e_{\ell}\}$
}
\BlankLine \BlankLine
// The second augmentation phase \\
\While{there are terminal pairs in $R$ not connected by a directed $t_i$-$s_i$ path in $(V,F)$} {
$\ell \leftarrow \ell+1$ \\
Let $\mathcal{C}$ be the set of all minimal sets $C$ (w.r.t. inclusion) such that $|\delta^+(C) \cap F|=0$, and $t_i \in C$, but $s_i \notin C$ for some $i$ \\
Increase $y_C$ for all $C \in \mathcal{C}$ uniformly until for some $e_{\ell} \in \delta^+(C')$, $C' \in \mathcal{C}$, $c(e_{\ell}) = \sum_{C:e_{\ell} \in \delta^+(C)} y_C$ \\
$F \leftarrow F \cup \{e_{\ell}\}$
}
\BlankLine \BlankLine
// The pruning phase \\
$F' \leftarrow F$ \\
\For{$j \leftarrow \ell$ downto $1$} {
\If{$F'-\{e_j\}$ is feasible} {
$F' \leftarrow F'-\{e_j\}$
}
}
$F^1 \leftarrow \{(u,v) \in F'| (v,u) \in F'\}$ \\
$F' \leftarrow F'-F^1$
\BlankLine \BlankLine
// The reduction phase \\
$F^2 \leftarrow \emptyset$ \\
Let $\{(s_i',t_i')\}$ be the set of pairs such that there are disjoint bidirected paths between $s_i'$ and $t_i'$ in $F'$ AND at least one of the following holds for $v \in \{s_i',t_i'\}$: \\
(1) $v$ is adjacent to some edge in $F^1$ \\
(2) $v \in \{s_i,t_i\}$ for some $i \in \{1,\hdots,k\}$ \\
\For{all pairs $(s_i',t_i')$} {
Let $P_s$ be the directed path $s_i'-t_i'$ \\
Let $P_t$ be the directed path $t_i'-s_i'$ \\
$P \leftarrow \arg\min_{P \in \{P_s, P_t\}} \tau(P)$ \\
Double the arcs in $P$ by adding the ones in reverse direction \\
$F^2 \leftarrow F^2 \cup P$
}
\BlankLine \BlankLine
$F^3 \leftarrow F^1 \cup F^2$ \\
\Return $(F^3, y)$
\end{algorithm}
As in the case of \texttt{AKR}, the set of dual variables that are grown at a particular phase in the new algorithm must naturally satisfy certain properties. Given a cut $S$ (synonymously a \emph{dual}) determined by a set of vertices, an already selected set of edges $F$, we say that $S$ is a \emph{minimal violated set} if there is at least one $(s,t) \in R$ such that $|S \cap \{s,t\}|=1$, $|\delta^+(S) \cap F|=0$, and $S$ is minimal with respect to inclusion. Note that the implications of these conditions are different from those of \texttt{AKR} based on the undirected cut relaxation. In that case, one can simply take the connected components $S$ of $(V,F)$ satisfying the property that $|S \cap \{s,t\}|=1$ for some $(s,t)$ which correspond to minimal violated sets. However, determining the minimal violated sets is not easy in our case since the underlying graph is directed. In particular, the minimal violated sets in the new algorithm are not necessarily disjoint (See the leftmost picture in Figure~\ref{merging}). We also make the distinction between the two different phases of the algorithm and say that a set $S$ satisfying the usual conditions stated above is an \emph{$s$-minimal violated set} if in addition it contains at least one $s_i$ but not $t_i$ for some valid $i$. In this case, we say that the corresponding dual \emph{originates from $s_i$}. Similarly, we say that it is a \emph{$t$-minimal violated set} if it contains at least one $t_i$ but not $s_i$, and we say that the corresponding dual originates from $t_i$. With this terminology, we are interested in growing the $s$-minimal violated sets in the first phase, and the $t$-minimal violated sets in the second phase.
The third phase which we call the pruning phase considers the arcs in the reverse order of their inclusion and discards an arc unless its exclusion violates the feasibility. The order is determined by the inclusion of the arcs in the first augmentation phase followed by the second augmentation phase. The arcs selected in both directions and which remain after this phase make the set $F^1$. These are in the final solution. At the end of the phase, we have the set of arcs $F'$ which are only selected in one direction.
The effect of the pruning phase of the new algorithm is quite different than that of \texttt{AKR}. In our case, it may not be clear which edges of the original input graph we should select even though the result is feasible. An example is given in Figure~\ref{bad} with the set of arcs shown in $F^1 \cup F'$, the feasible solution by the end of the pruning phase. The nontrivial duals grown are also shown in dashed lines. To see that we might have such an instance, note that there are two dual variables running on the arc with cost $1/2+\epsilon$, one on the $t_1$-side and the other on the $t_2$-side. This results in the inclusion of that arc before the arcs of cost $1$ are taken. In contrast, the arcs of cost $1$ are included in the first phase before the relevant dual covers the arcs of cost $1/2+\epsilon$ and $1/2$. The last phase is run as a final remedy for this situation.
\begin{figure}
\caption{An example on which the pruning phase does not lead to a valid solution}
\label{bad}
\end{figure}
In order to state the fourth phase, we need to consider a specific meaning for the growth of duals in the algorithm. A useful intuition for the type of primal-dual approach we utilize is to consider the growth of the duals in each iteration as a continuous process over time. Considering an arc along which a dual grows as a line segment, within a period of time $\epsilon > 0$, the dual is considered to ``cover'' a cost of $\epsilon$ of the arc, starting from the already covered part and continuously extending. An arc of unit length might go tight after half a unit of time if there are two duals growing along it. Accordingly, given any $\epsilon > 0$, if there are $d$ duals growing along an arc $e$ in the period of time $\epsilon$, we think that the partial cost of $\epsilon d \leq \frac{1}{2} c(e)$ is covered by the duals in that period. Given two vertices $u$ and $v$, and the directed path $P = u-v$ between them, we denote the period of time from the moment a dual is formed including $u$ to the moment a dual is formed including $v$ by $\tau(P)$.
The fourth phase, which we call the reduction phase determines the set of edges to be selected based on the information in $F'$. We consider all the pairs $(s_i',t_i')$ such that there are \emph{node disjoint} bidirected paths between $s_i'$ and $t_i'$ in $F'$, together with the requirement that at least one of the following holds for $v \in \{s_i',t_i'\}$.
\begin{itemize}
\item $v$ is adjacent to some edge in $F^1$ (i.e. the edges induced by the arcs taken in both directions);
\item $v \in \{s_i,t_i\}$ for some $i$ (i.e. the original set of pairs).
\end{itemize}
\noindent These are precisely the endpoints of the subgraphs that we seek a valid solution on. The algorithm considers both of the directed paths between such pairs and takes the path with a smaller $\tau$ value, i.e. the path which goes tight in a shorter period of time. It doubles the selected arcs by taking them in both directions and includes them into the solution $F^2$, which is feasible by definition. The final solution is the union of $F^1$ and $F^2$.
To give an example of the last phase, we consider again the graph given in Figure~\ref{bad}. $F'$ at the end of the pruning phase consists of the arcs forming the disjoint directed paths between the intermediate vertices. Note that there are two duals of the second phase growing along the arc of cost $1/2+\epsilon$. The time to cover the arcs directed from the $t$-side to the $s$-side is then $1/2+\frac{1}{2}(1/2+\epsilon) = 3/4+\epsilon/2$. The time to cover the arcs of cost $1$ directed from the $s$-side to the $t$-side on the other hand is $1$ since there is a single dual growing in the first phase. So the arcs of cost $1/2$ and $1/2+\epsilon$ are taken by the reduction phase.
\section{Implementation details}
We will give in this section a straightforward implementation of the algorithm. There may be faster and more compact implementations, which we leave as an open problem.
During the course of the algorithm, we explicitly store all the nodes in a given minimal violated set. Initially in both the first phase and the second phase, there are $k$ such lists and each list contains a single terminal representing a minimal violated set. By the execution of the algorithm, this number is non-increasing. Consequently, we have at most $k$ minimal violated sets at any time in the algorithm. We describe how to select the next arc and how to update the minimal violated sets for the first phase of the algorithm. The running times will be identical in the second phase.
In order to find the next tight arc, we keep a priority queue for arcs. The key values of the arcs are the times at which they will go tight. Initially, all the arcs that are not incident to the terminals might be set to $\infty$, and the key values of the immediately accessible arcs are set to their correct values examining their costs. For each arc, we also keep a list of duals growing on that arc. This is convenient in updating the key values. The initialization of the priority queue takes $O(m)$ time. At each iteration of the loop, we extract the minimum from the priority queue and update all the other arcs in the queue with the information obtained from the new set of minimal violated sets. This takes at most $O(m\log n)$ time since we consider at most $m$ arcs to update. In practice, this number might be much smaller.
Updating the minimal violated sets is the most expensive part of the algorithm. Upon inclusion of the arc in the current iteration, we update the list of nodes in the sets by performing a standard graph traversal procedure such as BFS, which takes time $O((m+n)k)=O(mk)$. Notice that not all of these sets might be minimally violated, i.e. there might be a set which is a proper subset of another. Initially declare all the sets \emph{active}, i.e. consider them as minimal violated sets. In order to determine which one of these are actual minimal violated sets, we perform the following operation starting from the smallest cardinality set (assume that the lists keep their sizes). Compare the elements in the set with all the other sets, and if another set turns out to be a strict superset of this set, declare the larger set \emph{inactive}, i.e. not a minimal violated set. Comparing sets can be performed in expected time $O(n)$ by hashing the values of one set and looping over the second set to see if they contain the same elements. Hence, for a single set, we spend $O(nk)$ time in expectation. The total time requirement for this operation is then $O(nk^2)$. If the two sets compared are identical, we \emph{merge} them into a new minimal violated set and declare it active (See Figure~\ref{merging} for an example of this procedure and merging). The number of iterations of the main loop of the algorithm is at most $O(n)$. So the execution of the whole loop takes time $O(mn\log{n}+mnk+n^2k^2)$ in expectation.
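To illustrate the bookkeeping described above, the following is a minimal Python sketch (not part of the paper's pseudocode; the names are hypothetical, and Python's built-in hashed sets replace the explicit hashing step) of how the candidate sets can be filtered down to the actual minimal violated sets, merging identical candidates along the way.
\begin{verbatim}
def filter_minimal_violated_sets(candidates):
    # candidates: list of node lists, one per candidate violated set,
    # as maintained by the graph traversal after the new arc is included.
    hashed = sorted((frozenset(c) for c in candidates), key=len)
    active = [True] * len(hashed)
    for i in range(len(hashed)):
        if not active[i]:
            continue
        for j in range(len(hashed)):
            if i == j or not active[j]:
                continue
            if hashed[i] == hashed[j]:
                active[j] = False   # identical candidates are merged into one
            elif hashed[i] < hashed[j]:
                active[j] = False   # a strict superset cannot be minimal
    return [set(s) for s, a in zip(hashed, active) if a]
\end{verbatim}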
In order to implement the pruning phase, we iterate over the arcs in $F'$. For each such arc, we check for each terminal pair if they are still connected even if the arc is discarded. This takes time $O((m+n)k)=O(mk)$ with a standard graph traversal algorithm. Since there are at most $O(n)$ arcs to consider, the total running time is then $O(mnk)$. The reduction phase amounts to re-executing the algorithm on a subgraph with an extra bookkeeping of times. Its running time is absorbed by
that of the whole algorithm. Thus, the algorithm can overall be implemented in time $O(mn\log{n}+mnk+n^2k^2)$.
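As an illustration of the pruning step just described (again a sketch with hypothetical names, not the paper's pseudocode), feasibility after removing an arc can be checked by testing directed reachability in both directions for every terminal pair:
\begin{verbatim}
from collections import defaultdict, deque

def reachable(arcs, source):
    # Vertices reachable from `source` along the directed arcs (BFS).
    adj = defaultdict(list)
    for u, v in arcs:
        adj[u].append(v)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def prune(arcs_in_order, pairs):
    # arcs_in_order: e_1, ..., e_l in order of inclusion; pairs: [(s_i, t_i)].
    F = list(arcs_in_order)
    for arc in reversed(arcs_in_order):
        candidate = [a for a in F if a != arc]
        if all(t in reachable(candidate, s) and s in reachable(candidate, t)
               for s, t in pairs):
            F = candidate
    return F
\end{verbatim}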
\begin{figure}
\caption{An example of a merging}
\label{merging-1}
\label{merging-2}
\label{merging-3}
\label{merging}
\end{figure}
The merging of minimal violated sets is more difficult in the new algorithm compared to \texttt{AKR} since they do not necessarily merge even if they reach a common vertex. However, merging of minimal violated sets can occur during the algorithm, particularly when the algorithm takes some arcs in both directions. This is illustrated in Figure~\ref{merging}. The dual growing from $s_1$ takes the arc $e_1$ in forward direction, which we denote by $e_1^+$. The list of nodes representing this dual then becomes $S_1=\{s_1,v_1\}$. The dual growing from $s_2$ takes the arcs $e_2^+$ and $e_3^+$, making its node list $S_2=\{s_2,v_1,v_2\}$. When $S_1$ continues to grow to take $e_2^-$, its node list is updated to $S_1=\{s_1,v_1,s_2,v_2\}$, the set of nodes reachable from $s_1$. However, this cannot be a minimal violated set as $S_2$ is a proper subset of this list. As a result, the only minimal violated set in this iteration is $S_2=\{s_2,v_1,v_2\}$. After some time, $S_2$ takes $e_1^-$ and adds $s_1$ to its node list. At this time, the algorithm realizes that $S_1$ and $S_2$ are the same, i.e. $\{s_1,s_2,v_1,v_2\}$, and merges them to a new minimal violated set.
\section{Proof of the approximation ratio}
Recall that, given \emph{any time}, one can see the set of duals and their positions as a snapshot of the algorithm. In the following discussion proving Propositions~\ref{against-1}-\ref{against-3} and Lemmas~\ref{dual-one}-\ref{tree-2}, we refer to the behavior of duals in an infinitesimal amount of time in which the snapshot remains the same.
With an abuse of notation, we denote by $F^3$ the set of undirected edges induced by the solution when we consider it as a subset of the original input graph. Given an iteration of the first phase and a dual $C^s$, we will consider the set of edges $\Delta(C^s) \cap F^3$, where $\Delta(C^s)$ denote the undirected set of edges induced by $\delta^+(C^s)$. We make the same definitions for a dual $C^t$ grown in an iteration of the second phase. Throughout this section, we say that duals grow along \emph{edges}, rather than arcs. This is for simplicity of discussion as we usually consider duals growing along the two arcs representing the same edge.
Given an undirected edge $e=\{u,v\} \in F^3$, we consider $e$ as a line segment defining an interval $[u=0,v=\frac{1}{2}c(e)]$. A single dual $C^t$ growing on $e^-=(v,u)$ (from $v$ to $u$) is considered to be \emph{grown against} a single dual $C^s$ growing on $e^+=(u,v)$ (from $u$ to $v$) if there is an interval $[a,b] \subseteq [u,v]$ such that both $C^s$ and $C^t$ grow on this interval. Given a dual $C^s$ grown in the first phase of the algorithm, the set of all duals grown against $C^s$ on $e$ for all $e \in \Delta(C^s) \cap F^3$ is called the set of \emph{duals grown against $C^s$}. We make similar definitions for a dual $C^t$ growing in an iteration of the second phase. Note that if $e \in F^1$, it is possible that a dual grown against $C^s$ belongs to the set of duals grown in the first phase. This happens, for instance, when two $s$-terminals are closer to each other than any other terminals. Similarly, a dual grown against $C^t$ might have grown in the second phase.
There may be multiple duals growing against each other on an edge. Accordingly, we make the following refinement over the definition above. Consider the case where a set of duals $\{C^{t_1}, \hdots, C^{t_{\beta}}\}$ is grown against $\{C^{s_1}, \hdots, C^{s_{\alpha}}\}$ on an edge $e$ within a period of time $\epsilon$. We define the \emph{graph of duals} on the edge $e$ as a single edge between two vertices. One of the vertices $C^s$ corresponds to the set $\{C^{s_1}, \hdots, C^{s_{\alpha}}\}$, and is assigned a \emph{growth speed} of $\alpha$. The other vertex $C^t$ represents the set $\{C^{t_1}, \hdots, C^{t_{\beta}}\}$ with a growth speed of $\beta$. We will later generalize the notion of graph of duals.
The proof of the performance ratio relies on the properties of the set of duals grown against each other. First, we make sure that there always exists a dual grown against another one.
\begin{prop}
\label{against-1}
(a) If $e^+=(u,v) \in \delta^+(C^s) \cap F^1$ for some $C^s$ grown in the first phase, then there is a dual grown against $C^s$ on $e=\{u,v\}$.
(b) If $e^-=(v,u) \in \delta^+(C^t) \cap F^1$ for some $C^t$ grown in the second phase, then there is a dual grown against $C^t$ on $e=\{u,v\}$.
\end{prop}
\begin{proof}
The first part of the statement holds by the definition of the algorithm since $e^-=(v,u) \in F^1$ is included into the solution either in the first phase or in the second phase. The second part is symmetric to the first one.
\end{proof}
\noindent Proposition~\ref{against-1} does not hold for the edges in $F^2$ since the edges in the source graph $F'$ are not selected in both directions. We thus properly define a new set of duals growing against each other on $F^2$, first considering a single edge. Let $P^+ = s'-t'$ and $P^- = t'-s'$ be the disjoint paths between a pair $(s',t')$ considered in the reduction phase of the algorithm. Assume without loss of generality that $\tau(P^+) \leq \tau(P^-)$, i.e. $P^+$ with doubled edges is selected by the algorithm. Let $e^+=(u,v) \in P^+$ be any edge selected via the duals grown in the first phase. Taking $e$ as a line segment $[u,v]$, let the set of duals grown along some $[a,b] \subseteq [u,v]$ be $\{C_1^s, \hdots, C_{\alpha}^s\}$ such that they originate from $s_1', \hdots, s_{\alpha}'$, respectively, and $e^+$ is taken by the reduction phase between $s_j'$ and $t_j'$ for all $1 \leq j \leq \alpha$. Note that there exists at least one such $j$ by definition. We create a set of $\alpha$ new duals, namely $\{C_1^t, \hdots, C_{\alpha}^t\}$ each growing against $C_j^s$ along the interval $[a,b]$.
Consider now defining a new set of duals described as above for all pairs $(s_i',t_i')$ connected by a path $P_i^+$ on which we select an edge $e_i^+$ and take an interval covered within a period of time $\epsilon$. We consider all these duals $y_S'$ for $F^2$ in the rest of our discussion, unless we explicitly mention the duals $y_S$ computed by the algorithm. So, considering $y_S'$, we have
\begin{prop}
\label{against-2}
(a) If $e^+=(u,v) \in \delta^+(C^s) \cap F^2$ for some $C^s$ grown in the first phase, then there is a dual grown against $C^s$ on $e=\{u,v\}$.
(b) If $e^-=(v,u) \in \delta^+(C^t) \cap F^2$ for some $C^t$ grown in the second phase, then there is a dual grown against $C^t$ on $e=\{u,v\}$.
\end{prop}
The edges in $F^2$ are now tight with respect to $y_S'$, i.e. their total cost is exactly covered by the duals in $y_S'$. Let $\alpha$ be the number of new duals defined on an interval covered within a period of time $\epsilon$, as described above. Each new dual covers a cost of $\epsilon$. This cost is compensated by one of the actual duals grown by the algorithm from some $t_j'$ to $s_j'$, particularly by covering the same portion of the cost of an edge on the path $t_j'-s_j'$. Indeed, since $e_i^+$ is taken by the algorithm on which $C_j^s$ grows, for the paths $P_j^+=s_j'-t_j'$ and $P_j^-=t_j'-s_j'$, we have $\tau(P_j^+) \leq \tau(P_j^-)$, and such a dual always exists. In particular, this implies
\begin{prop}
\label{against-3}
For every new set of $\alpha$ duals in $y_S'$ defined on an interval in $F^2$, each with value $\epsilon$, there is a distinct dual in $y_S$ of value $\epsilon$ computed by the algorithm.
\end{prop}
The following two lemmas use Proposition~\ref{against-1} and Proposition~\ref{against-2} as premises.
\begin{lem}
\label{dual-one}
Given a dual $C^s$ growing in an iteration of the first phase, let $\mathcal{C}^t$ be the set of duals grown against $C^s$. Then, for any $C^t \in \mathcal{C}^t$,
$$
|\Delta(C^s) \cap \Delta(C^t) \cap F^3| = 1.
$$
\end{lem}
\begin{proof}
If $|\Delta(C^s) \cap F^3| = 1$, there is nothing to prove. Thus, assume $|\Delta(C^s) \cap F^3| > 1$, and assume further for a contradiction that for some $C^t \in \mathcal{C}^t$, we have $|\Delta(C^s) \cap \Delta(C^t) \cap F^3| > 1$. Let $\{v_1,w_1\}$ and $\{v_2,w_2\}$ be two of the edges on which both $C^s$ and $C^t$ grow with $v_1,v_2 \in C^s$, $w_1,w_2 \in C^t$.
If $C^t$ has grown in the second phase, observe that there is an index $i$ such that $s_i \in C^s$ and $t_i \in C^t$. There is also an index $j$ such that $s_j \in C^s$ and $t_j \in C^t$. Otherwise, one of $\{v_1,w_1\}$ and $\{v_2,w_2\}$ would be redundant, resulting in a deletion in the pruning phase. Thus, we may assume without loss of generality that $\{v_1,w_1\}$ is on the path between $s_i$ and $t_i$, $\{v_2,w_2\}$ is on the path between $s_j$ and $t_j$. We further observe that there must be a bidirected path between $s_i$ and $s_j$ in $F_{\ell}$. Otherwise, it would contradict the minimality of $C^s$. Similarly, there is a bidirected path between $t_i$ and $t_j$. The existence of all these paths implies that there is a cycle in $F^3$, which is a contradiction (See Figure~\ref{dual-one-fig} for an illustration, where the cycle contains both of the terminal pairs).
If $C^t$ has grown in the first phase, then there are $s_i \in C^s$ and $s_{j_1},s_{j_2} \in C^t$. We may assume that $\{v_1,w_1\}$ is on the path between $s_i$ and $s_{j_1}$, $\{v_2,w_2\}$ is on the path between $s_i$ and $s_{j_2}$. By the minimality of $C^t$, there must also be a bidirected path between $s_{j_1}$ and $s_{j_2}$. This again induces a cycle, yielding a contradiction.
\end{proof}
\begin{figure}
\caption{An example illustrating the proof of Lemma~\ref{dual-one}}
\label{dual-one-fig}
\end{figure}
The symmetric result with a proof identical to that of Lemma~\ref{dual-one} except the interchanged roles of $C^s$ and $C^t$ is as follows.
\begin{lem}
\label{dual-one-t}
Given a dual $C^t$ growing in an iteration of the second phase, let $\mathcal{C}^s$ be the set of duals grown against $C^t$. Then, for any $C^s \in \mathcal{C}^s$,
$$
|\Delta(C^t) \cap \Delta(C^s) \cap F^3| = 1.
$$
\end{lem}
The following theorem establishes the approximation ratio of the algorithm by weak duality.
\begin{thm}
If $(F^3,y)$ is the solution returned by the new primal-dual algorithm for the \textsf{Steiner forest} problem, then
$$
\frac{1}{2} \sum_{e \in F^3} c(e) \leq \left(2-\frac{1}{k}\right) \cdot \sum_{S \subseteq V} y_S.
$$
\end{thm}
\begin{proof}
Since all the edges in $F^1$ are tight, we have
$$
\frac{1}{2} \sum_{e \in F^1} c(e) = \sum_{e \in F^1} \sum_{\substack{S:e \in \delta^+(S)}} y_S.
$$
\noindent The edges in $F^2$ are also tight with respect to $y_S'$, i.e.
$$
\frac{1}{2} \sum_{e \in F^2} c(e) = \sum_{e \in F^2} \sum_{\substack{S:e \in \delta^+(S)}} y_S'.
$$
\noindent Then, we obtain
\begin{align*}
\frac{1}{2} \sum_{e \in F^3} c(e) &= \frac{1}{2} \sum_{e \in F^1} c(e) + \frac{1}{2} \sum_{e \in F^2} c(e) \\
&= \sum_{e \in F^1} \sum_{\substack{S:e \in \delta^+(S)}} y_S + \sum_{e \in F^2} \sum_{\substack{S:e \in \delta^+(S)}} y_S'.
\end{align*}
\noindent Thus, it suffices to show
\begin{equation}
\label{main_eq}
\sum_{e \in F^1} \sum_{\substack{S:e \in \delta^+(S)}} y_S + \sum_{e \in F^2} \sum_{\substack{S:e \in \delta^+(S)}} y_S' \leq \left(\frac{2k-1}{k}\right) \cdot \sum_{\substack{S \subseteq V}} y_S.
\end{equation}
We argue by providing a procedure for covering the edges in $F^3$ in several steps. We first make some definitions. Form the graph of duals $G_{1}=(V_{1},E_{1})$ with $V_{1}$ consisting of all the duals grown in the first and the second phase by the algorithm on $F^1$, and $E_{1}$ consisting of edges of the form $(C^s,C^t)$ if $C^s$ and $C^t$ have grown against each other on an edge in $F^1$. Modify this graph by contracting duals simultaneously grown on an edge into a single dual, and let the growth speed of this dual be the number of contracted duals. Consider a component $T_1$ of $G_1$ and let its vertices be $C_1,\hdots,C_{|T_1|}$ with the corresponding growth speeds $\sigma_1,\hdots,\sigma_{|T_1|}$. Let $\sigma_{max}$ be the largest of these values with the corresponding vertex $C_{max}$. A single step of the procedure covering some portion of the edges of $T_1$ is as follows. For all components $T_1$ of $G_1$, consider increasing all the duals defining $C_{max}$ by an $\epsilon>0$ together with the increments of the neighboring vertices so that they counter the portion of the corresponding edge of $F^1$ in the reverse direction (Recall that we select $\epsilon$ small enough so that the snapshot does not change). Continuing this process in a breadth-first search fashion, consider the increments of the vertices of $T_{1}$ so that all the edges we go over are covered by the same amount in both directions.
\begin{prop}
\label{tree}
$T_{1}$ is a tree.
\end{prop}
\begin{proof}
Take a maximal tree $T$ in $T_{1}$. Let the duals in $T$ be $C_1, \hdots, C_{|T|}$. Assume for a contradiction that there is an edge $e = (C_i,C_j)$ of $T_1$, which is not in $T$. Let $P$ be the path in $F^3$ between the terminals that $C_i$ and $C_j$ originate from, which is implied by the path between $C_i$ and $C_j$ in $T$. The existence of $e$ implies that there is another path $P'$ in $F^3$ between the terminals that $C_i$ and $C_j$ originate from. In particular, the edge in $F^3$ on which $e$ is defined is distinct from the edges of $P$ by the definition of the graph of duals. This induces a cycle in $F^3$, which is a contradiction.
\end{proof}
Given $C \in V_{1}$, we define $deg_1(C)$ as the graph-theoretic degree of $C$ in $G_{1}$. It is an immediate consequence of Lemma~\ref{dual-one} and Lemma~\ref{dual-one-t} that
\begin{cor}
\label{degree-cor}
For $C \in V_{1}$, $|\Delta(C) \cap F^1| = deg_1(C)$.
\end{cor}
We now make the analogous definitions for $F^2$. Form the graph of duals $G_{2}=(V_{2},E_{2})$ with $V_{2}$ consisting of the duals $y_S'$ defined on $F^2$, and $E_{2}$ consisting of edges of the form $(C^s,C^t)$ if $C^s$ and $C^t$ have grown against each other on an edge in $F^2$. Note that in this graph, the growth speeds of all the vertices are already $1$ by definition. Note also that $V_1$ and $V_2$ might have nonempty intersection. Thus, we extend the procedure described for $G_1$ above to the vertices in $V_2$ by making sure that
\begin{itemize}
\item their increments are compatible with the ones in $V_1$ in the same step,
\item if an increased dual in $V_2$ belongs to the set of duals that we have defined (rather than the ones actually grown by the algorithm), we increase the values of all such duals defined on the current interval together with the duals grown against them.
\end{itemize}
\noindent The second condition is enforced to make sure that there is at least one dual computed by the algorithm corresponding to the duals we have defined in $V_2$, i.e. we can use Proposition~\ref{against-3}.
\begin{prop}
\label{tree-2}
The edges in $E_{2}$ do not share any common vertex.
\end{prop}
\begin{proof}
By the definition of $y_S'$ on $F^2$, there is a distinct dual grown against each dual grown by the algorithm on an edge. Thus, a dual in $y_S'$ cannot be growing against another two duals.
\end{proof}
Given $C \in V_{2}$, we define $deg_2(C)$ as the graph-theoretic degree of $C$ in $G_{2}$. Combining Proposition~\ref{tree-2} with Lemma~\ref{dual-one} and Lemma~\ref{dual-one-t}, we have
\begin{cor}
\label{degree-cor-2}
For $C \in V_{2}$, $|\Delta(C) \cap F^2| = deg_2(C) = 1$.
\end{cor}
After performing a single step of the procedure for all the trees in $G_1$ together with all the duals in $G_2$ affected by their increments, we continue to cover the uncovered portion of the edges in $F^1$ and $F^2$ in the same fashion by recomputing $G_1$ and $G_2$ on the residual graphs. This process terminates since the set of duals is finite and we always select $\epsilon > 0$. We first assume that the uncovered part of $F^1$ is nonempty until the end of the procedure, and argue by induction on the number of steps of the procedure with this assumption. At the beginning of the first step, the values of all the duals are $0$. Thus, the inequality (\ref{main_eq}) vacuously holds. Assume that it holds at the beginning of some step. Let $T_1$ be a tree of $G_1$ with $|T_1|$ vertices. By Corollary~\ref{degree-cor}, the degree of a vertex in $G_{1}$ coincides with the number of edges the corresponding dual is incident to in $F^1$. Noting that we have $|T_1|-1$ edges, the amount of increase on the first term of the left hand side of (\ref{main_eq}) for $T_1$ is then
\begin{equation}
\label{1}
\epsilon \sigma_{max} \left(2(|T_1|-1)\right).
\end{equation}
\noindent Since there are $|T_1|$ vertices, the corresponding increase on the right hand side is
\begin{equation}
\label{2}
\epsilon \sigma_{max} \left(\frac{2k-1}{k}\right)|T_1|.
\end{equation}
\noindent On the other hand, we have
\begin{align*}
2(|T_1|-1) = \left(\frac{2(|T_1|-1)}{|T_1|}\right)|T_1|
&\leq \left(\frac{2(2k-1)}{2k}\right)|T_1| \\
&= \left(\frac{2k-1}{k}\right)|T_1|,
\end{align*}
\noindent where the inequality follows from the fact that the number of duals in both phases is at most $k$, i.e. $|T_1| \leq 2k$. Thus, (\ref{1}) is upper bounded by (\ref{2}).
Let $T_2 \subseteq V_2$ be the set of vertices whose values are increased due to the increments in $T_1$. By Corollary~\ref{degree-cor-2}, we have that the degrees of the duals in $T_2$ are all $1$, which is the same as the degrees in $F^2$. In particular, the number of edges in $T_{2}$ is $|T_{2}|/2$. The amount of increase on the second term of the left hand side of (\ref{main_eq}) is then
\begin{equation}
\label{3}
\epsilon \sigma_{max} \left(|T_2|\right).
\end{equation}
\noindent By Proposition~\ref{against-3}, there is at least one dual in $y_S$ for all the $|T_2|/2$ new duals we have defined in $y_S'$. Thus, the corresponding increase on the right hand side is at least
\begin{equation}
\label{4}
\epsilon \sigma_{max} \left(\frac{2k-1}{k}\right)\left(\frac{|T_2|}{2}+1\right).
\end{equation}
\noindent Similar to the previous inequality, we obtain
\begin{align*}
|T_2| = \left(\frac{|T_2|}{\frac{|T_2|}{2}+1}\right) \left(\frac{|T_2|}{2}+1\right)
&\leq \left(\frac{2k}{k+1}\right) \left(\frac{|T_2|}{2}+1\right) \\
&\leq \left(\frac{2k-1}{k}\right) \left(\frac{|T_2|}{2}+1\right),
\end{align*}
\noindent where the first inequality is due to the fact that $|T_2| \leq 2k$, and the second inequality follows since $k \geq 1$. Thus, (\ref{3}) is upper bounded by (\ref{4}). Overall, the inequality (\ref{main_eq}) remains valid at the beginning of the next step of the procedure.
If the uncovered part of $F^1$ becomes empty, for the rest of the procedure, we appropriately select at each step some $T_2 \subseteq V_2$ such that the edges in $F^2$ on which the edges in $T_2$ are defined are simultaneously covered by the algorithm, i.e. $T_2$ is implied by a snapshot of the algorithm. This ensures that $|T_2| \leq 2k$ and Proposition~\ref{against-3} holds for the duals in $T_2$. Mimicking the procedure defined for $F^1$, we increase the values of the duals in $T_2$ by an appropriate $\epsilon > 0$. Then, given a step, the amount of increase on the left hand side of (\ref{main_eq}) is upper bounded by the amount of increase on the right hand side via the exact same argument given above for $T_2$. Thus, the inequality (\ref{main_eq}) holds at the beginning of the next step of the procedure. This completes the induction and hence the proof.
\end{proof}
\section{Tight examples}
We give a tight example essentially putting a lower bound of $2-\frac{1}{k}$ for \texttt{AKR} and \texttt{KLS} in Figure~\ref{tight-AKR} on which the new algorithm finds a near-optimal solution. For this example, the set of duals grown by \texttt{KLS} is the same as that of \texttt{AKR}, and they both select all the edges of cost $1-\epsilon$ before the edges of cost $1/2$ become tight. This makes a total cost of $(2k-1)(1-\epsilon)$. The optimal solution on the other hand consists of all the edges of cost $1/2$ between the pairs of indices from $2$ to $k$ and the edge of cost $1-\epsilon$ between $s_1$ and $t_1$, with a total cost of $k-\epsilon$. In the first phase of the new algorithm, after the duals cover the edges of cost $1/2$ on the $s$-side, the number of duals grown on the edges of cost $1/2$ on the $t$-side becomes $k$. Thus, these edges are also covered within $1/(2k)$ unit of time before all the other edges are covered. Same thing happens in the second phase resulting in the selection of all edges of cost $1/2$, making a total cost of $k$.
\begin{figure}
\caption{A tight example for \texttt{AKR}}
\label{tight-AKR}
\end{figure}
\begin{figure}
\caption{A tight example for the new algorithm}
\label{tight}
\end{figure}
A tight example for the new algorithm is given in Figure~\ref{tight}, where the high degree dual is around $s_k$. In the first phase of the algorithm, the set of $k-1$ edges of cost $1$ between the set of terminals $\{s_1,\hdots,s_{k-1}\}$ and $s_k$ are covered in both directions since $s_k$ also grows. Due to this growth, the set of $k$ edges between $s_k$ and the $t$-terminals are also selected in forward direction. In the second phase, these edges are covered in the reverse direction. Thus, the total cost of the solution returned by the algorithm is $2k-1$. The direct edges of cost $1+\epsilon$ between $s_i$ and $t_i$ for $i=1,\hdots,k-1$ remain uncovered throughout the algorithm, which gives an optimal cost of $(k-1)(1+\epsilon)+1$ together with the edge $(s_k,t_k)$.
\section*{Acknowledgment}
We would like to thank David P. Williamson for answering questions on the classical primal-dual algorithm for Steiner forest during the early stages of our investigation. This work was supported by TUBITAK (Scientific and Technological Research Council of Turkey) under Project No. 112E192.
\end{document}
|
\begin{document}
\begin{abstract}
We propose a new approach to the problem of recovering a signal from frame coefficients with erasures. Such problems arise naturally from applications where some of the coefficients could be corrupted or erased during the data transmission. Provided that the erasure set satisfies the minimal redundancy condition, we construct a suitable synthesizing dual frame which enables us to perfectly reconstruct the original signal without recovering the lost coefficients. Such dual frames which compensate for erasures are described from various viewpoints.
In the second part of the paper frames robust with respect to finitely many erasures are investigated.
We characterize all full spark frames for finite-dimensional Hilbert spaces. In particular, we show that each full spark frame is generated by a matrix all of whose square submatrices are nonsingular. In addition, we provide a method for constructing totally positive matrices. Finally, we give a method, applicable to a large class of frames, for transforming general frames into Parseval ones.
\end{abstract}
\subjclass[2010]{Primary 42C15; Secondary 47A05}
\keywords{Frame, Dual frame, Erasure, Full spark, Totally positive matrix}
\title{Expansions from frame coefficients with erasures}
\section{Introduction}
Frames are often used in process of encoding and decoding signals. It is the redundancy property of frames that makes them robust to erasure corrupted data. A number of articles have been written on methods for reconstruction from frame coefficients with erasures and related problems.
Recall that a sequence $(x_n)_{n=1}^{\infty}$ in a Hilbert space $H$ is a \emph{frame} for $H$ if there exist positive constants $A$ and $B$, that are called frame bounds, such that
\begin{equation}\label{frame def}
A\|x\|^2\leq \sum_{n=1}^{\infty}|\langle x,x_n\rangle |^2\leq B\|x\|^2,\quad \forall x \in H.
\end{equation}
If $A=B$ we say that the frame is tight and, in particular, if $A=B=1$ so that
\begin{equation}
\sum_{n=1}^{\infty}|\langle x,x_n\rangle |^2=\|x\|^2,\quad\forall x \in H,
\end{equation}
we say that $(x_n)_{n=1}^{\infty}$ is a \emph{Parseval frame}.
If only the second inequality in \eqref{frame def} is satisfied, $(x_n)_{n=1}^{\infty}$ is said to be a \emph{Bessel sequence}.
For each Bessel sequence $(x_n)_{n=1}^{\infty}$ in $H$ one defines the \emph{analysis operator} $U: H \rightarrow \ell^2$ by $Ux=(\langle x,x_n\rangle)_{n},\,x\in H$. It is evident that $U$ is bounded. Its adjoint operator $U^*$, which is called the \emph{synthesis operator}, is given by $U^*((c_n)_{n})=\sum_{n=1}^{\infty}c_nx_n,\,(c_n)_{n}\in \ell^2$. Moreover, if $(x_n)_{n=1}^{\infty}$ is a frame, the analysis operator $U$ is also bounded from below, the synthesis operator $U^*$ is a surjection and the product $U^*U$ (sometimes called the \emph{frame operator}) is an invertible operator on $H$. It turns out that the sequence $(y_n=(U^*U)^{-1}x_n)_{n=1}^{\infty}$ is also a frame for $H$ that is called the \emph{canonical dual} frame and satisfies the reconstruction formula
\begin{equation}\label{defcandual}
x=\sum_{n=1}^{\infty}\langle x,x_n\rangle y_n,\quad \forall x \in H.
\end{equation}
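To fix ideas in the finite-dimensional case, the frame operator and the canonical dual can be computed numerically as in the following minimal sketch (the example data, names and the use of NumPy are our own choices and are not part of the theory developed here).
\begin{verbatim}
import numpy as np

# Columns of X are the frame vectors x_1, ..., x_M for H = R^d (here d=2, M=3).
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

S = X @ X.T                 # frame operator U*U (sum of outer products x_n x_n^T)
Y = np.linalg.solve(S, X)   # canonical dual vectors y_n = (U*U)^{-1} x_n (columns)

x = np.array([2.0, -3.0])
coeffs = X.T @ x            # frame coefficients <x, x_n>
print(Y @ coeffs)           # reconstruction sum_n <x, x_n> y_n recovers x
\end{verbatim}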
In general, the canonical dual is not the only frame for $H$ which provides us with the reconstruction in terms of the frame coefficients $\langle x,x_n\rangle$. Any frame $(v_n)_{n=1}^{\infty}$ for $H$ that satisfies
\begin{equation}\label{defaltdual}
x=\sum_{n=1}^{\infty}\langle x,x_n\rangle v_n,\quad \forall x \in H
\end{equation}
is called a \emph{dual frame} for $(x_n)_{n=1}^{\infty}$.
In the present paper we work with infinite frames for infinite-dimensional separable Hilbert spaces and our frames will be denoted as $(x_n)_n$, $(y_n)_n$, etc. Accordingly, by writing $\sum_{n=1}^{\infty}c_nx_n$, $\sum_{n=1}^{\infty}c_ny_n$, $\ldots$ with $(c_n)_n\in \ell^2$, we will indicate that the corresponding summations consist of infinitely many terms. However, all the results that follow (including the proofs) are valid for finite frames in finite-dimensional spaces. We shall explicitly indicate whenever a specific result concerns finite frames only and such frames will be typically denoted as $(x_n)_{n=1}^M, (y_n)_{n=1}^M,\ldots$ with $M\in \Bbb N$.
Frames that are not bases are overcomplete (i.e., they have proper subsets which are complete) and they possess infinitely many dual frames. The \emph{excess} of a frame is defined as the greatest integer $k$ such that $k$ elements can be deleted from the frame and still leave a complete set, or $+\infty$ if there is no upper bound to the number of elements that can be removed.
A frame is called $m$-\emph{robust}, $m\in \Bbb N$, if it is a frame whenever any $m$ of its elements are removed. In particular, we say that a frame is of \emph{uniform excess} $m$ if it is a basis whenever any $m$ of its elements are removed. Frames of uniform excess for finite-dimensional spaces are called \emph{full spark frames}.
Frames were first introduced by Duffin and Schaeffer in \cite{DS}. The readers are referred to some standard references, e.g.~\cite{CKu, Ch, HL, KC} for more information about frame theory and their applications. In particular, basic results on excesses of frames can be found in \cite{BB, BB2,BCHL,H}.
In applications, we first compute the frame coefficients $\langle x,x_n\rangle$ of a signal $x$ (analyzing or encoding $x$) and then apply \eqref{defcandual} or
\eqref{defaltdual} to reconstruct (synthesizing or decoding) $x$ using a suitable dual frame. During the processing of the frame coefficients or the data transmission, some of the coefficients may get lost. Thus, a natural question arises: how can one reconstruct the original signal in the best possible way from erasure-corrupted frame coefficients? Recently many researchers have been working on different approaches to this and related problems. In particular, we refer the readers to \cite{BO, BP, CK, GKK, HS, HP, LS, LH} and references therein.
It turns out that the perfect reconstruction is possible as long as erased coefficients are indexed by a set that satisfies the \emph{minimal redundancy condition} (\cite{LS}; see Definition~\ref{defMRC} below). Most approaches assume a pre-specified dual frame and hence aim to recover the missing coefficients using the non-erased ones. Alternatively, one may try to find an alternate dual frame, depending on the set of erased coefficients, in order to compensate for errors.
Here we use this second approach. Assuming that the set of indices $E$ for which the coefficients $\langle x,x_n\rangle$, $n\in E$, are erased is finite and satisfies the minimal redundancy condition, we show that there exists a frame $(v_n)_n$ dual to $(x_n)_n$ such that
\begin{equation}\label{approach}
v_n=0,\quad \forall n\in E.
\end{equation}
Obviously, such an ``$E^c$-supported'' frame $(v_n)_n$ (with $E^c$ denoting the complement of $E$ in the index set) enables the perfect reconstruction using \eqref{defaltdual} without knowing or recovering the lost coefficients $\langle x,x_n\rangle$, $n\in E$. Such dual frames are the central object of our study in the present article.
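One elementary way to see that such a dual exists (a sketch only, and not necessarily the construction developed in the sequel): since $E$ satisfies the minimal redundancy condition, the reduced family $(x_n)_{n\in E^c}$ is itself a frame (see Remark~\ref{first remark} below), and its canonical dual, extended by zero on $E$, is a dual frame for $(x_n)_n$ supported on $E^c$. A finite-dimensional numerical illustration (names and data are our own):
\begin{verbatim}
import numpy as np

def erasure_dual(X, E):
    # X: d x M matrix whose columns are the frame vectors x_n;
    # E: indices of erased coefficients, assumed to satisfy the
    #    minimal redundancy condition (the remaining columns span R^d).
    d, M = X.shape
    erased = set(E)
    keep = [n for n in range(M) if n not in erased]
    S_E = X[:, keep] @ X[:, keep].T      # frame operator of the reduced frame
    V = np.zeros((d, M))
    V[:, keep] = np.linalg.solve(S_E, X[:, keep])  # canonical dual of the reduced frame
    return V                              # columns v_n, with v_n = 0 for n in E

X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V = erasure_dual(X, E=[2])
x = np.array([2.0, -3.0])
print(V @ (X.T @ x))   # reconstructs x from the non-erased coefficients only
\end{verbatim}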
The paper is organized as follows.
In Section 2 we prove in a constructive way, for each finite set $E$ satisfying the minimal redundancy condition, the existence of a dual frame with property \eqref{approach}. The construction is enabled by a parametrization of dual frames by oblique projections to the range of the analysis operator that is obtained in \cite{BB}. For the construction details we refer to the second and the third proof of Theorem~\ref{main}.
Moreover, Theorem~\ref{main2} provides a concrete procedure for computing the elements of such a dual frame in terms of the canonical dual. It turns out that the computation boils down to solving a certain system of linear equations.
In Section 3 we obtain in Theorem~\ref{main2-cor1} another description of the dual frame constructed in the preceding section. Then we introduce in Theorem~\ref{thm-inverse} a finite iterative algorithm for computing the elements of the constructed dual frame. Finally, in our Theorem \ref{tm-LS} we improve a result from \cite{LS} which provides us with an alternative technique for obtaining our dual.
Section 4 is devoted to results concerning frames robust to erasures. In particular, we present a simple procedure for deriving a $1$-robust Parseval frame starting from an arbitrary Parseval frame. The second part of Section 4 is devoted to finite frames for finite-dimensional spaces. In Theorem~\ref{2N} we characterize all full spark frames. It turns out that each full spark frame is generated by a matrix all of whose square submatrices are nonsingular.
This opens up the question of finding methods for constructing such matrices. We provide in Theorem~\ref{constructing TP} a method for constructing infinite totally positive matrices (a subclass consisting of matrices all of whose minors are strictly positive).
Finally, we discuss a method for obtaining a full spark Parseval frame from a general full spark frame. In Theorem~\ref{thm-PArs_spark new} we present a finite iterative algorithm which gives us, for a positive invertible operator $F$ on a finite-dimensional space, an invertible (not necessarily positive) operator $R$ such that $RFR^*=I$. This yields a method (see Corollary~\ref{thm-PArs_spark old}) which can be applied to a large class of frames for obtaining a Parseval frame from a general one.
At the end of this introductory section we establish the rest of our notation. The linear span of a set $X$ will be denoted by $\text{span}\,X,$ and its closure by $\overline{\text{span}}\,X$. The set of all bounded operators on a Hilbert space $H$ is denoted by $\Bbb B(H)$ (or $\Bbb B(H,K)$ if two different spaces are involved). For $x,y\in H$ we denote by $\theta_{x,y}$ the rank one operator on $H$ defined by $\theta_{x,y}(v)=\langle v,y\rangle x,\,v\in H$. The null-space and range of a bounded operator $T$ will be denoted by $\text{N}(T)$ and $\text{R}(T)$, respectively. By $X\stackrel{.}{+}Y$ we will denote a direct sum of (sub)spaces $X$ and $Y$. The space of all $m\times n$ matrices will be denoted by $M_{mn};$ if $m=n$ then we write $M_n.$
Finally, we denote by $(e_n)_n$ the canonical basis in $\ell^2$.
\section{Dual frames compensating for erasures}
We begin with the definition of the minimal redundancy condition as formulated in \cite{LS}.
\begin{definition}\langlebel{defMRC}
Let $(x_n)_n$ be a frame for a Hilbert space $H$. We say that a finite set of indices $E$ satisfies the minimal redundancy condition for $(x_n)_n$ if $\overline{\text{span}}\,\{x_n:n\in E^c\}=H$.
\end{definition}
\begin{remark}\langlebel{first remark}
If a finite set $E$ satisfies the minimal redundancy condition for a frame $(x_n)_n$ for $H$, it is a non-trivial fact, though relatively easy to prove, that the reduced sequence $(x_n)_{n\in E^c}$ is again a frame for $H$. In fact, if $H$ is finite-dimensional, there is nothing to prove, since in this situation frames are just spanning sets. We refer the reader to \cite{LS} for a proof in the infinite-dimensional case.
We also note: if $E$ satisfies the minimal redundancy condition for a frame $(x_n)_n$ with the analysis operator $U$, then $E$ has the same property for all frames of the form $(Tx_n)_n$ where $T\in \Bbb B(H)$ is a surjection. In particular, this applies to the canonical dual $(y_n)_n, y_n=(U^*U)^{-1}x_n$, $n\in \Bbb N$, and to the associated Parseval frame $((U^*U)^{-\frac{1}{2}}x_n)_n$.
\end{remark}
\begin{theorem}\langlebel{main}
Let $(x_n)_n$ be a frame for a Hilbert space $H$. Suppose that a finite set of indices $E$ satisfies the minimal redundancy condition for $(x_n)_n$. Then there exists a frame $(v_n)_n$ for $H$ dual to $(x_n)_n$ such that $v_n=0$ for all $n\in E$.
\end{theorem}
\begin{proof}
By Remark~\ref{first remark}, $(x_n)_{n\in E^c}$ is a frame for $H$. Consider an arbitrary dual frame of $(x_n)_{n\in E^c}$, conveniently denoted by $(v_n)_{n\in E^c}$, and put $v_n=0$ for $n\in E.$ Then $(v_n)_n$ is a frame for $H$ with the desired property.
\end{proof}
As noted in the introduction, a dual frame $(v_n)_n$ with the property from the preceding theorem enables us to reconstruct perfectly a signal $x\in H$ even when the coefficients
$\langle x,x_n\rangle,n\in E,$ are lost. This is simply because the lost coefficients are irrelevant in the reconstruction formula
$x=\sum_{n=1}^{\infty}\langle x,x_{n}\rangle v_n$ since $v_n=0$ for $n\in E.$ However, the above proof is not satisfactory from the application point of view. What we really need is a constructive proof. To provide such a proof we first need a lemma.
\begin{lemma}\langlebel{MR equivalent}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the analysis operator $U$. Then a finite set of indices $E$ satisfies the minimal redundancy condition for $(x_n)_n$ if and only if $\text{R}(U)\cap \text{span}\,\{e_n:n\in E\}=\{0\}$.
\end{lemma}
\begin{proof}
Each element in $\text{R}(U)\cap \,\text{span}\,\{e_n:n\in E\}$ is of the form $s=(\langle x,x_n\rangle)_n$ with $x\in H$ such that
$\langle x,x_n\rangle =0$ for all $n\notin E$. Since $U$ is an injection, $s\not =0$ if and only if the corresponding $x\in H$ has the property $x\not =0$ and $x \perp x_n$ for all $n\notin E$. Clearly, this last condition is equivalent to $x\perp \overline{\text{span}}\,\{x_n:n\notin E\}$.
\end{proof}
\begin{proof}[The second proof of Theorem~\ref{main}.] Denote by $U$ the analysis operator of $(x_n)_n$. Recall from Corollary 2.4 in \cite{BB} that all dual frames of $(x_n)_n$ are parameterized by bounded oblique projections to $\text{R}(U)$ or, equivalently, by closed direct complements of $\text{R}(U)$ in $\ell^2$. More precisely, a frame $(v_n)_n$ with the analysis operator $V$ is dual to $(x_n)_n$ if and only if $V^*$ is of the form
$V^*=(U^*U)^{-1}U^*F$ where $F\in \Bbb B(\ell^2)$ is the oblique projection to $\text{R}(U)$ parallel to some closed subspace $Y$ of $\ell^2$ such that $\ell^2=\text{R}(U)\stackrel{.}{+}Y$. In particular, the canonical dual frame $(y_n)_n$ corresponds to the orthogonal projection $F=U(U^*U)^{-1}U^*$ to $\text{R}(U)$.
Hence, to obtain a dual frame $(v_n)_n$ with the required property $v_n=0$ for $n\in E,$ we only need to find a closed direct complement $Y$ of $\text{R}(U)$ in $\ell^2$ such that $e_n\in Y$ for all $n\in E$. Then we will have
$$Fe_n=0,\quad \forall n\in E$$ and, consequently,
$$v_n=V^*e_n=(U^*U)^{-1}U^*Fe_n=0,\quad \forall n\in E.$$
Since $E$ satisfies the minimal redundancy condition for $(x_n)_n$, Lemma~\ref{MR equivalent} tells us that $\text{R}(U)\cap \text{span}\,\{e_n:n\in E\}=\{0\}$. Denote by $Z$ the orthogonal complement of
$\text{R}(U)\stackrel{.}{+} \text{span}\,\{e_n:n\in E\}$.
(This is indeed a closed subspace, being the sum of a closed subspace and a finite-dimensional one.)
In other words, let
\begin{equation}\langlebel{I}
\ell^2=\left(\text{R}(U)\stackrel{.}{+} \text{span}\,\{e_n:n\in E\}\right) \oplus Z.
\end{equation}
This may be rewritten in the form
\begin{equation}\langlebel{II}
\ell^2=\text{R}(U) \stackrel{.}{+}\left(\text{span}\,\{e_n:n\in E\} \oplus Z\right).
\end{equation}
Put
\begin{equation}\langlebel{III}
Y=\text{span}\,\{e_n:n\in E\} \oplus Z.
\end{equation}
Clearly, $Y$ is a closed direct complement of $\text{R}(U)$ in $\ell^2$ with the desired property.
\end{proof}
Although the above proof provides more insight into the construction of a dual frame $(v_n)_n$ with the desired property, we still need a more concrete realization for practical purposes. Hence we provide
\begin{proof}[The third proof of Theorem~\ref{main}.] Let us keep the notations from the preceding proof. Assume, without loss of generality, that $E=\{1,2,\ldots,k\}$.
Recall that the synthesis operator of our desired dual $(v_n)_n$ is $V^*=(U^*U)^{-1}U^*F$, so
$v_n$'s are given by
\begin{equation}\langlebel{IV}
v_n=(U^*U)^{-1}U^*Fe_n,\quad \forall n\in \Bbb N.
\end{equation}
We want to express $(v_n)_n$ in terms of the canonical dual frame $(y_n)_n$. Recall that
\begin{equation}\langlebel{V}
y_n=(U^*U)^{-1}U^*e_n,\quad \forall n\in \Bbb N.
\end{equation}
Let $p_n\in \text{R}(U)$ and $a_n\in \text{R}(U)^{\perp}$ be such that
\begin{equation}\langlebel{VI}
e_n=p_n+a_n,\quad \forall n\in \Bbb N.
\end{equation}
Since $a_n\in \text{R}(U)^{\perp}=\text{N}(U^*)$, we can rewrite \eqref{V} in the form
\begin{equation}\langlebel{VII}
y_n=(U^*U)^{-1}U^*p_n,\quad \forall n\in \Bbb N.
\end{equation}
Recall now that $U(U^*U)^{-1}U^*$ is the orthogonal projection onto $\text{R}(U)$. Hence, by applying $U$ to \eqref{VII} we get
\begin{equation}\langlebel{VIII}
Uy_n=p_n,\quad \forall n\in \Bbb N.
\end{equation}
On the other hand, using \eqref{II}, we can find $r_n\in \text{R}(U), b_n \in \text{span}\,\{e_{1},e_{2},\ldots , e_{k}\}$ and $c_n\in Z$ such that
\begin{equation}\langlebel{IX}
e_n=r_n+b_n+c_n,\quad \forall n\in \Bbb N.
\end{equation}
Since $F$ is the oblique projection to $\text{R}(U)$ along $\text{span}\,\{e_{1},e_{2},\ldots , e_{k}\} \oplus Z$, we have
\begin{equation}\langlebel{X}
Fe_n=r_n,\quad \forall n\in \Bbb N.
\end{equation}
Observe that
\begin{equation}\langlebel{XI}
b_n=e_n,\ r_n=0,\ c_n=0, \quad \forall n=1,2,\ldots, k.
\end{equation}
Since each $b_n$ belongs to $\text{span}\,\{e_{1},e_{2},\ldots , e_{k}\}$, there exist coefficients $\alpha_{ni}$ such that
\begin{equation}\langlebel{XII}
b_n=\sum_{i=1}^k\alpha_{ni}e_i,\quad \forall n\in \Bbb N.
\end{equation}
Note that \eqref{XI} implies
\begin{equation}\langlebel{XIII}
\alpha_{ni}=\delta_{ni}, \quad \forall n,i=1,2,\ldots, k.
\end{equation}
We now have for all $n\in \Bbb N$
\begin{eqnarray*}
e_n&\stackrel{\eqref{IX}}{=}&r_n+b_n+c_n\\
&\stackrel{\eqref{XII}}{=}&r_n+\sum_{i=1}^k\alpha_{ni}e_i+c_n\\
&\stackrel{\eqref{VI}}{=}&r_n+\sum_{i=1}^k\alpha_{ni}(p_i+a_i)+c_n\\
&=&\left(r_n+\sum_{i=1}^k\alpha_{ni}p_i\right)+\left(\sum_{i=1}^k\alpha_{ni}a_i+c_n\right).
\end{eqnarray*}
Observe that $\left(r_n+\sum_{i=1}^k\alpha_{ni}p_i\right) \in \text{R}(U)$, while $\left(\sum_{i=1}^k\alpha_{ni}a_i+c_n\right) \in \text{R}(U)^{\perp}$. Thus, comparing this last equality with \eqref{VI} we obtain
\begin{equation}\langlebel{XIV}
r_n=p_n-\sum_{i=1}^k\alpha_{ni}p_i,\quad a_n=\sum_{i=1}^k\alpha_{ni}a_i+c_n,\quad \forall n\in \Bbb N.
\end{equation}
Finally, we conclude that for all $n\in \Bbb N$
\begin{eqnarray}\langlebel{XV}
v_n&\stackrel{\eqref{IV}}{=}&(U^*U)^{-1}U^*Fe_n \nonumber\\
&\stackrel{\eqref{X}}{=}&(U^*U)^{-1}U^*r_n \nonumber\\
&\stackrel{\eqref{XIV}}{=}&(U^*U)^{-1}U^*\left(p_n-\sum_{i=1}^k\alpha_{ni}p_i\right) \nonumber\\
&\stackrel{\eqref{VII}}{=}&y_n-\sum_{i=1}^k\alpha_{ni}y_i.
\end{eqnarray}
Note that \eqref{XV} and \eqref{XIII} show that $v_1=v_2=\ldots =v_k=0$, as required.
\end{proof}
The preceding proof describes our desired dual frame $(v_n)_n$ in terms of the canonical dual $(y_n)_n$. Obviously, to obtain $v_n$'s one has to compute all the coefficients $\alpha_{ni},\,i=1,2,\ldots,k,\,n\geq k+1$.
To do that, let us first note the following useful consequence of the preceding computation. We claim that
\begin{equation}\label{XVI}
\langle v_n,x_i\rangle=-\alpha_{ni},\quad \forall i=1,2,\ldots,k,\ \forall n\geq k+1.
\end{equation}
Indeed, for $i=1,2,\ldots,k$ and $n\geq k+1$ we have
\begin{eqnarray*}
\langle v_n,x_i\rangle&=&\langle v_n,U^*e_i\rangle\\
&\stackrel{\eqref{IV}}{=}&\langle U(U^*U)^{-1}U^*Fe_n,e_i\rangle\\
&=&\langle Fe_n,e_i\rangle\\
&\stackrel{\eqref{X}}{=}&\langle r_n,e_i\rangle\\
&\stackrel{\eqref{IX}}{=}&\langle e_n-b_n-c_n,e_i \rangle \quad (\mbox{since }i<n\mbox{ and }c_n\perp e_i)\\
&=&-\langle b_n,e_i \rangle\\
&\stackrel{\eqref{XII}}{=}&-\alpha_{ni}.
\end{eqnarray*}
For each $n\geq k+1$ we can rewrite \eqref{XVI}, using \eqref{XV}, as
$$
\left\langle y_n-\sum_{j=1}^k\alpha_{nj}y_j,x_i\right\rangle=-\alpha_{ni},\quad \forall i=1,2,\ldots, k
$$
or, equivalently,
$$
\sum_{j=1}^k\langle y_j,x_i\rangle \alpha_{nj}-\alpha_{ni}=\langle y_n,x_i\rangle,\quad \forall i=1,2,\ldots, k.
$$
The above equalities can be regarded as a system of $k$ equations in unknowns $\alpha_{n1},\alpha_{n2},\ldots,\alpha_{nk}$ that can be written in the matrix form as
\begin{equation}\label{XVII}
\left(\left[\begin{array}{cccc}
\langle y_1,x_1\rangle&\langle y_2,x_1\rangle&\ldots&\langle y_k,x_1\rangle\\
\langle y_1,x_2\rangle&\langle y_2,x_2\rangle&\ldots&\langle y_k,x_2\rangle\\
\vdots&\vdots& &\vdots\\
\langle y_1,x_k\rangle&\langle y_2,x_k\rangle&\ldots&\langle y_k,x_k\rangle
\end{array}
\right]-I\right)
\left[
\begin{array}{c}
\alpha_{n1}\\
\alpha_{n2}\\
\vdots\\
\alpha_{nk}
\end{array}
\right]=
\left[
\begin{array}{c}
\langle y_n,x_1\rangle\\
\langle y_n,x_2\rangle\\
\vdots\\
\langle y_n,x_k\rangle
\end{array}
\right],
\end{equation}
where $I$ denotes the unit $k\times k$ matrix. Note that the matrix of the above system is independent of $n$.
The following lemma should be compared to Lemma~\ref{MR equivalent}. It gives us another condition equivalent to the minimal redundancy property.
\begin{lemma}\langlebel{crossGramian}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the canonical dual $(y_n)_n$. Then a finite set of indices $E=\{n_1,n_2,\ldots,n_k\},k\in\Bbb N,$ satisfies the minimal redundancy condition for $(x_n)_n$ if and only if
$$
\left[\begin{array}{cccc}
\langle y_{n_1},x_{n_1}\rangle&\langle y_{n_2},x_{n_1}\rangle&\ldots&\langle y_{n_k},x_{n_1}\rangle\\
\langle y_{n_1},x_{n_2}\rangle&\langle y_{n_2},x_{n_2}\rangle&\ldots&\langle y_{n_k},x_{n_2}\rangle\\
\vdots&\vdots& &\vdots\\
\langle y_{n_1},x_{n_k}\rangle&\langle y_{n_2},x_{n_k}\rangle&\ldots&\langle y_{n_k},x_{n_k}\rangle
\end{array}
\right]-I
$$
is an invertible matrix.
\end{lemma}
\begin{proof}
We can assume without loss of generality that $E=\{1,2,\ldots,k\}$.
Let us first prove the lemma under additional hypothesis that $(x_n)_n$ is a Parseval frame. Note that then $(x_n)_n$ coincides with its canonical dual, i.e.~$y_n=x_n$ for all $n\in \Bbb N$. Denote $H_E=\text{span}\,\{x_1,x_2,\ldots,x_k\}.$ Then $(x_n)_{n=1}^k$ is, as a spanning set, a frame for $H_E$; let $U_E\in \Bbb B(H_E,\Bbb C^k)$ be its analysis operator.
By definition, $E$ does not satisfy the minimal redundancy condition for $(x_n)_n$ if and only if there exists $h\in H,\,h\neq 0$, such that $h \perp x_n$ for all $n\geq k+1.$ By the Parseval property of $(x_n)_n$ this is equivalent to the existence of $h\neq 0$ such that $h=\sum_{n=1}^k\langle h,x_n\rangle x_n,$ i.e.\ of $h\in H_E\setminus\{0\}$ such that $h=U_E^*U_E h.$ This means that 1 belongs to the spectrum of $U_E^*U_E$ or, equivalently, that 1 belongs to the spectrum of $U_EU_E^*.$
So, $E$ does not satisfy the minimal redundancy condition for $(x_n)_n$ if and only if
$U_EU_E^*-I$ is a non-invertible operator. Since the matrix of $U_EU_E^*-I$ in the canonical basis of $\Bbb C^k$ is precisely $$\left[\begin{array}{cccc}
\langle x_1,x_1\rangle&\langle x_2,x_1\rangle&\ldots&\langle x_k,x_1\rangle\\
\langle x_1,x_2\rangle&\langle x_2,x_2\rangle&\ldots&\langle x_k,x_2\rangle\\
\vdots&\vdots& &\vdots\\
\langle x_1,x_k\rangle&\langle x_2,x_k\rangle&\ldots&\langle x_k,x_k\rangle
\end{array}
\right]-I,
$$
the statement is proved in the Parseval case.
We now prove the general case. Denote the analysis operator of $(x_n)_n$ by $U$. Recall that $y_n=(U^*U)^{-1}x_n,\,n\in \Bbb N$. We shall also need the associated Parseval frame $(p_n)_n$, where $p_n=(U^*U)^{-\frac{1}{2}}x_n$ for all $n\in \Bbb N$. Observe that, since $(U^*U)^{-\frac{1}{2}}$ is an invertible operator, the set $E$ satisfies the minimal redundancy condition for $(x_n)_n$ if and only if it satisfies the same condition for $(p_n)_n$. Since $(p_n)_n$ is a Parseval frame, by the first part of the proof $E$ satisfies the minimal redundancy condition for $(p_n)_n$ if and only if
$$\left[\begin{array}{cccc}
\langle p_1,p_1\rangle&\langle p_2,p_1\rangle&\ldots&\langle p_k,p_1\rangle\\
\langle p_1,p_2\rangle&\langle p_2,p_2\rangle&\ldots&\langle p_k,p_2\rangle\\
\vdots&\vdots& &\vdots\\
\langle p_1,p_k\rangle&\langle p_2,p_k\rangle&\ldots&\langle p_k,p_k\rangle
\end{array}
\right]-I
$$
is an invertible matrix.
To conclude the proof one only needs to observe that $$\langle p_i,p_j\rangle=\langle (U^*U)^{-\frac{1}{2}}x_i,(U^*U)^{-\frac{1}{2}}x_j\rangle =\langle (U^*U)^{-1}x_i,x_j\rangle=\langle y_i,x_j\rangle$$ for all $i$ and $j$.
\end{proof}
Let us now turn back to the system \eqref{XVII}. By the preceding lemma the matrix of the system is invertible; hence, the system has a unique solution $(\alpha_{n1},\alpha_{n2},\ldots,\alpha_{nk})$ for each $n\geq k+1$.
Thus, we have finally obtained a concrete description of our desired frame $(v_n)_n$.
\begin{theorem}\langlebel{main2}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the canonical dual $(y_n)_n$. Suppose that a finite set of indices $E=\{n_1,n_2,\ldots,n_k\},k\in\Bbb N,$ satisfies the minimal redundancy condition for $(x_n)_n$. For each $n \in E^c$ let
$(\alpha_{n1},\alpha_{n2},\ldots,\alpha_{nk})$ be the (unique) solution of the system
\begin{equation}\langlebel{XVIII}
\left(\left[\begin{array}{cccc}
\langle y_{n_1},x_{n_1}\rangle&\langle y_{n_2},x_{n_1}\rangle&\ldots&\langle y_{n_k},x_{n_1}\rangle\\
\langle y_{n_1},x_{n_2}\rangle&\langle y_{n_2},x_{n_2}\rangle&\ldots&\langle y_{n_k},x_{n_2}\rangle\\
\vdots&\vdots& &\vdots\\
\langle y_{n_1},x_{n_k}\rangle&\langle y_{n_2},x_{n_k}\rangle&\ldots&\langle y_{n_k},x_{n_k}\rangle
\end{array}
\right]-I\right)
\left[
\begin{array}{c}
\alpha_{n1}\\
\alpha_{n2}\\
\vdots\\
\alpha_{nk}
\end{array}
\right]=
\left[
\begin{array}{c}
\langle y_n,x_{n_1}\rangle\\
\langle y_n,x_{n_2}\rangle\\
\vdots\\
\langle y_n,x_{n_k}\rangle
\end{array}
\right].
\end{equation}
Put
\begin{equation}\langlebel{XIX}
v_{n_1}=v_{n_2}=\ldots =v_{n_k}=0,\quad v_n=y_n-\sum_{i=1}^k\alpha_{ni}y_{n_i},\quad n\not =n_1,n_2,\ldots, n_k.
\end{equation}
Then $(v_n)_n$ is a frame for $H$ dual to $(x_n)_n$.
\end{theorem}
\begin{remark}
(a) Clearly, if $(x_n)_n$ is a Parseval frame, our constructed dual frame $(v_n)_n$ is expressed in terms of the original frame members $x_n$'s.
(b) Note that the matrix of the system \eqref{XVIII} is independent not only of $n$, but also of all $x\in H$. Thus, the inverse matrix can be computed in advance, without knowing for which $x$ the coefficients $\langle x,x_{n_1}\rangle$, $\langle x,x_{n_2}\rangle, \ldots, \langle x,x_{n_k}\rangle$ will be lost.
(c) The existence of frames dual to $(x_n)_n$ with the property as in Theorem~\ref{main} is also proved in \cite{LS} (Theorem 5.2) and \cite{HS} (Theorem 2.5). However, in both of these papers such dual frames are discussed only for finite frames in finite-dimensional spaces. Besides, both papers focus on recovering the lost coefficients, rather than on choosing an alternative dual frame. We also note that a special case of our Lemma~\ref{crossGramian} (namely, the same statement for finite frames) is proved in Lemma~2.3 from \cite{HS} using a different technique.
\end{remark}
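The procedure of Theorem~\ref{main2} is also easy to carry out numerically. The following sketch is purely illustrative: it assumes a finite frame for $\Bbb R^d$ given by the rows of \texttt{X}, $0$-based indexing, and an erasure set satisfying the minimal redundancy condition; the function name \texttt{erasure\_dual} is ours. It forms the matrix of the system \eqref{XVIII}, solves it for each $n\in E^c$, and assembles the dual frame \eqref{XIX}. Applied to the data of Example~\ref{ex2} below it reproduces the dual frame computed there.
\begin{verbatim}
import numpy as np

def erasure_dual(X, E):
    # rows of X: the frame vectors x_n of a finite frame for R^d
    # E: list of erased indices satisfying the minimal redundancy condition
    m, d = X.shape
    Y = X @ np.linalg.inv(X.T @ X)        # canonical dual: y_n = (U*U)^{-1} x_n
    M = X[E] @ Y[E].T - np.eye(len(E))    # matrix of the system (XVIII)
    V = np.zeros_like(X)
    for n in sorted(set(range(m)) - set(E)):
        alpha = np.linalg.solve(M, X[E] @ Y[n])   # coefficients alpha_{ni}
        V[n] = Y[n] - alpha @ Y[E]                # v_n = y_n - sum_i alpha_{ni} y_{n_i}
    return V                                      # rows v_n; zero rows exactly on E

# data of Example ex2: four vectors in R^2, erased indices E = {1,2} (here 0-based)
X = 0.5 * np.array([[1, 0], [0, 1], [1, -1], [1, 1]], dtype=float)
print(erasure_dual(X, [0, 1]))   # rows: (0,0), (0,0), (1,-1), (1,1)
\end{verbatim}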
Let us now turn to the examples.
\begin{example}\langlebel{ex1}
Let $(\epsilon_n)_n$ be an orthonormal basis for a Hilbert space $H$. Let
$$
x_1=\frac{1}{3}\epsilon_1,\ x_2=\frac{2}{3}\epsilon_1-\frac{1}{\sqrt{2}}\epsilon_2, \ x_3=\frac{2}{3}\epsilon_1+\frac{1}{\sqrt{2}}\epsilon_2, x_n=\epsilon_{n-1},\ \forall n\ge 4.
$$
One easily checks that $(x_n)_n$ is a Parseval frame for $H$; thus, here we have $y_n=x_n$ for all $n\in \Bbb N$. It is obvious that the set $E=\{1\}$ has the minimal redundancy property for $(x_n)_n$.
We shall use Theorem~\ref{main2} to construct a dual frame $(v_n)_n$ for $(x_n)_n$ such that $v_1=0$.
In this situation the system \eqref{XVIII} from Theorem~\ref{main2} reduces to a single equation
$$
-\frac{8}{9}\alpha_{n1}=\langle x_n,x_1\rangle,\quad \forall n\geq 2.
$$
Since $\langle x_2,x_1\rangle =\frac{2}{9}$, $\langle x_3,x_1\rangle =\frac{2}{9}$, and $\langle x_n,x_1\rangle =0$ for $n\geq 4$, we find
$\alpha_{21}=\alpha_{31}=-\frac{1}{4}$ and $\alpha_{n1}=0$ for $n\geq 4.$
Using \eqref{XV} we finally get
$$
v_1=0,\,v_2=x_2+\frac{1}{4}x_1,\,v_3=x_3+\frac{1}{4}x_1,\,v_n=x_n,\ \forall n\ge 4,
$$
that is,
$$
v_1=0,
v_2=\frac{3}{4}\epsilon_1-\frac{1}{\sqrt{2}}\epsilon_2,\quad v_3=\frac{3}{4}\epsilon_1+\frac{1}{\sqrt{2}}\epsilon_2,\quad v_n=\epsilon_{n-1}, \ \forall n\ge 4.
$$
\end{example}
\begin{example}\langlebel{ex2}
Let $(\epsilon_1,\epsilon_2)$ be an orthonormal basis of a $2$-dimensional Hilbert space $H$. Consider a frame $(x_n)_{n=1}^4$ where
$$
x_1=\frac{1}{2}\epsilon_1,\,x_2=\frac{1}{2}\epsilon_2,\,x_3=\frac{1}{2}\epsilon_1-\frac{1}{2}\epsilon_2,\,x_4=\frac{1}{2}\epsilon_1+\frac{1}{2}\epsilon_2.
$$
One easily verifies that $(x_n)$ is a tight frame with $U^*U=\frac{3}{4}I,$ so the members of the canonical dual $(y_n)_{n=1}^4$ are given by
$$
y_1=\frac{2}{3}\epsilon_1,\,y_2=\frac{2}{3}\epsilon_2,\,y_3=\frac{2}{3}\epsilon_1-\frac{2}{3}\epsilon_2,\,y_4=\frac{2}{3}\epsilon_1+\frac{2}{3}\epsilon_2.
$$
Obviously, the set $E=\{1,2\}$ satisfies the minimal redundancy condition for $(x_n)_{n=1}^4$. To obtain the dual frame $(v_n)_{n=1}^4$ from Theorem~\ref{main2}
we have to solve the system
$$
\left(\left[\begin{array}{cc}\langle y_1,x_1\rangle&\langle y_2,x_1\rangle\\\langle y_1,x_2\rangle&\langle y_2,x_2\rangle\end{array}\right]-I\right)\left[\begin{array}{c}\alpha_{n1}\\\alpha_{n2}\end{array}\right]=
\left[\begin{array}{c}\langle y_n,x_1\rangle\\\langle y_n,x_2\rangle\end{array}\right],\quad n=3,4.
$$
Since
$$
\left[\begin{array}{cc}\langle y_1,x_1\rangle&\langle y_2,x_1\rangle\\\langle y_1,x_2\rangle&\langle y_2,x_2\rangle\end{array}\right]-I=\left[\begin{array}{rr}-\frac{2}{3}&0\\0&-\frac{2}{3}
\end{array}
\right],
$$
we have
$$
\left[\begin{array}{c}\alpha_{n1}\\
\alpha_{n2}\end{array}\right]=\left[\begin{array}{rr}-\frac{3}{2}&0\\
0&-\frac{3}{2}\end{array}
\right]
\left[\begin{array}{c}\langle y_n,x_1\rangle\\\langle y_n,x_2\rangle\end{array}\right],\quad n=3,4.
$$
From this one easily finds
$$
\alpha_{31}=-\frac{1}{2},\,\alpha_{32}=\frac{1}{2},\,\alpha_{41}=-\frac{1}{2},\,\alpha_{42}=-\frac{1}{2}.
$$
Theorem~\ref{main2} gives us now
$$
v_1=0,\,v_2=0,\, v_3=y_3+\frac{1}{2}y_1-\frac{1}{2}y_2,\,v_4=y_4+\frac{1}{2}y_1+\frac{1}{2}y_2,
$$
that is,
$$
v_1=0,\,v_2=0,\,v_3=\epsilon_1-\epsilon_2,\,v_4=\epsilon_1+\epsilon_2.
$$
\end{example}
We end this section with a proposition that provides another characterization of the minimal redundancy property of a finite set of indices.
\begin{prop}\langlebel{some dual}
Let $(x_n)_n$ be a frame for a Hilbert space $H$, and let $E=\{n_1,n_2,\ldots, n_k\}$, $k\in \Bbb N$, be a finite set of indices. Then $E$ satisfies the minimal redundancy condition for $(x_n)_n$ if and only if there exists a frame $(z_n)_n$ for $H$ dual to $(x_n)_n$ such that
\begin{equation}\langlebel{matrica_x_z}
\left[\begin{array}{cccc}
\langle z_{n_1},x_{n_1}\rangle&\langle z_{n_2},x_{n_1}\rangle&\ldots&\langle z_{n_k},x_{n_1}\rangle\\
\langle z_{n_1},x_{n_2}\rangle&\langle z_{n_2},x_{n_2}\rangle&\ldots&\langle z_{n_k},x_{n_2}\rangle\\
\vdots&\vdots& &\vdots\\
\langle z_{n_1},x_{n_k}\rangle&\langle z_{n_2},x_{n_k}\rangle&\ldots&\langle z_{n_k},x_{n_k}\rangle
\end{array}
\right]-I
\end{equation}
is an invertible matrix.
\end{prop}
\begin{proof} If $E$ satisfies the minimal redundancy condition for $(x_n)_n$ then, by Lemma~\ref{crossGramian}, the canonical dual frame of $(x_n)_n$ can be taken for $(z_n)_n.$
Let us prove the converse. We again assume that $E=\{1,2,\ldots ,k\}$. Denote by $W$ the analysis operator of $(z_n)_n$. Recall that $W^*=(U^*U)^{-1}U^*Q$ where $Q\in \Bbb B(\ell^2)$ is some oblique projection onto $\text{R}(U)$. Observe that $UW^*=U(U^*U)^{-1}U^*Q=Q$, since $U(U^*U)^{-1}U^*$ is the orthogonal projection onto $\text{R}(U)$. Notice also that $UW^*e_j=(\langle z_j,x_n\rangle)_n$ for each $j\in \Bbb N$.
Let $P_k\in \Bbb B(\ell^2)$ denote the orthogonal projection to $\text{span}\,\{e_1,e_2,\ldots,e_k\}.$
Then $\langle P_kUW^*P_ke_j,e_n\rangle=\langle W^*e_j,U^*e_n\rangle = \langle z_j,x_n\rangle$ for all $j,n=1,\ldots,k,$ so the matrix of the compression $P_kUW^*P_k$ of $UW^*$ to $\text{span}\,\{e_1,e_2,\ldots,e_k\}$ with respect to the basis $(e_n)_{n=1}^k$ has the form
$$
[P_kUW^*P_k]_{(e_n)_{n=1}^k}=
\left[\begin{array}{cccc}
\langle z_1,x_1\rangle&\langle z_2,x_1\rangle&\ldots&\langle z_k,x_1\rangle\\
\langle z_1,x_2\rangle&\langle z_2,x_2\rangle&\ldots&\langle z_k,x_2\rangle\\
\vdots&\vdots& &\vdots\\
\langle z_1,x_k\rangle&\langle z_2,x_k\rangle&\ldots&\langle z_k,x_k\rangle
\end{array}
\right].
$$
Suppose that $E$ does not satisfy the minimal redundancy condition. By Lemma~\ref{MR equivalent} there is $h\in H$ such that
$Uh=(\langle h,x_1\rangle, \ldots ,\langle h,x_k\rangle,0,0,\ldots)\not =0$. Since $Q$ is a projection onto $\text{R}(U)$ and $Uh\in \text{R}(U)$, we have
$$
UW^*Uh=QUh=Uh.
$$
Since $Uh \in \text{span}\,\{e_1,e_2,\ldots,e_k\}$, the last equality can be written as
$$
P_kUW^*P_kUh=Uh.
$$
Thus, 1 is in the spectrum of $P_kUW^*P_k$, which means that $[P_kUW^*P_k]_{(e_n)_{n=1}^k}-I$ is a non-invertible matrix, a contradiction.
\end{proof}
\begin{remark}
In the light of the preceding proposition and Lemma~\ref{crossGramian} one may ask: if the set $E=\{n_1,n_2,\ldots, n_k\}$ satisfies the minimal redundancy condition for a frame $(x_n)_n$ can we conclude that the matrix from \eqref{matrica_x_z}
is invertible for \emph{each} dual frame $(z_n)_n$?
The answer is negative, as demonstrated by the following simple example. Take an orthonormal basis $\{\epsilon_1,\epsilon_2\}$ of a two-dimensional space $H$ and consider the frame $(x_n)_{n=1}^3=(\epsilon_1, \epsilon_1+\epsilon_2,\epsilon_2)$ and its dual $(\epsilon_1,0,\epsilon_2)$. Then the set $E=\{1\}$ satisfies the minimal redundancy condition for $(x_n)_{n=1}^3$, but the corresponding matrix from the preceding proposition is equal to $0$.
Let us also mention that in general, if $E$ satisfies the minimal redundancy property for $(x_n)_n,$ there are dual frames to $(x_n)_n$ different from the canonical dual frame for which the matrix from \eqref{matrica_x_z} is invertible. For example, this matrix is invertible for every dual $(v_n)_n$ as in Theorem~\ref{main}, i.e. such that $v_n=0$ for all $n\in E$ (and, unless $(x_n)_n$ is of a very special form, such dual frames differ from the canonical dual frame associated to $(x_n)_n$).
\end{remark}
\section{Computing a dual frame optimal for erasures}
In this section we give an alternative way of computing the constructed dual.
We first show in Theorem~\ref{main2-cor1} that the dual frame from Theorem~\ref{main2} can be written in another form. That naturally leads to the question of inverting operators of the form $I-G$, where $G$ has finite rank.
In practice, $G$ will be of the form $G=\sum_{i=1}^k\theta_{y_{n_i},x_{n_i}},$ where the $x_{n_i}$ are the frame members indexed by an erasure set $E=\{n_1,n_2,\ldots,n_k\}$, and the $y_{n_i}$ are the corresponding members of the canonical dual. (Recall from the introduction that $\theta_{y,x}$ denotes the rank one operator defined by $\theta_{y,x}(v)=\langle v,x\rangle y,\,v\in H$.) Here we provide a (finite) iterative process for computing $(I-\sum_{i=1}^k\theta_{y_{n_i},x_{n_i}})^{-1}$ with the number of iterations equal to $k$.
At the end of this section we show that this inverse operator can also be computed
by applying the formula given in Theorem~6.2 in \cite{LS}. This theorem is proved for invertible operators of the form $I-\sum_{i=1}^k\theta_{y_i,x_i},$ where $y_1,\ldots,y_k$ are linearly independent. Since we cannot expect our frame elements $y_{n_1},\ldots,y_{n_k}$ to be linearly independent, in order to apply this theorem to our situation we prove in Theorem~\ref{tm-LS} that the conclusion of Theorem~6.2 from \cite{LS} holds even without the linear independence assumption.
Consider an arbitrary frame $(x_n)_n$ for $H$ with the analysis operator $U$ and a finite set of indices $E=\{n_1,n_2,\ldots , n_k\}$. Obviously, sequences $(x_n)_{n\in E^c}$ and $(x_n)_{n\in E}$ are Bessel. ($E^c$ denotes the complement of $E$ in the index set.) Denote the corresponding analysis operators by $U_{E^c}$ and $U_E$, respectively. Notice that $(x_n)_{n\in E}$ is finite, so $U_E$ takes values in $\Bbb C^k$. It is evident that the corresponding frame operators satisfy $U_{E^c}^*U_{E^c}x=\sum_{n\in E^c}\langle x,x_n\rangle x_n$, $U_{E}^*U_{E}x=\sum_{n\in E}\langle x,x_n\rangle x_n$, $x\in H$, and hence
\begin{equation}\langlebel{the reduced frame operator}
U_{E^c}^*U_{E^c}=U^*U-U_{E}^*U_{E}.
\end{equation}
Further, if $(y_n)_n$ is the canonical dual of $(x_n)_n,$ its analysis operator $V$ is of the form $V=U(U^*U)^{-1}$. The analysis operators of Bessel sequences $(y_n)_{n\in E^c}$ and $(y_n)_{n\in E}$ will be denoted by $V_{E^c}$ and $V_E$, respectively. Observe that $V_E=U_E(U^*U)^{-1}$ and $V_{E^c}=U_{E^c}(U^*U)^{-1}$. Now the equality $x=\sum_{n=1}^{\infty}\langle x,x_n\rangle y_n,x\in H,$ can be written as
\begin{equation}\langlebel{the reduced dual_operators}
V_{E^c}^*U_{E^c}=I-V_{E}^*U_{E}.
\end{equation}
The following lemma should be compared to Lemma~\ref{MR equivalent} and Lemma~\ref{crossGramian}. It gives us another characterization of finite sets satisfying the minimal redundancy condition.
\begin{lemma}\langlebel{bivsa lema 2 u sec 4}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the analysis operator $U$ and the canonical dual $(y_n)_n.$ If $E$ is any finite set of indices the following statements are mutually equivalent:
\begin{enumerate}
\item[(a)] $E$ satisfies the minimal redundancy condition for $(x_n)_n;$
\item[(b)] $I-V_{E}^*U_{E}$ is invertible;
\item[(c)] $I-V_{F}^*U_{F}$ is invertible for every $F\subseteq E.$
\end{enumerate}
\end{lemma}
\begin{proof}
Since $U_{E^c}$ is the analysis operator of a Bessel sequence $(x_n)_{n\in E^c}$,
we know that $(x_n)_{n\in E^c}$ is a frame for $H$ if and only if $U_{E^c}^*U_{E^c}$ is invertible, i.e., by \eqref{the reduced frame operator}, if and only if $U^*U-U_{E}^*U_{E}$ is invertible.
Since $U^*U$ is invertible and $$U^*U-U_{E}^*U_{E}=U^*U(I-(U^*U)^{-1}U_{E}^*U_{E})= U^*U(I-V_{E}^*U_{E}),$$ this is equivalent to the invertibility of $I-V_{E}^*U_{E}.$ Thus, we have proved (a)$\Leftrightarrow$(b).
The implication (c)$\Rightarrow$(b) is obvious.
For (a)$\Rightarrow$(c) we only need to observe that, if $E$ satisfies the minimal redundancy condition for $(x_n)_n$, then so does every subset of $E;$ thus, we can apply the implication (a)$\Rightarrow$(b) to $F.$
\end{proof}
Suppose that a finite set $E$ has the minimal redundancy property for a frame $(x_n)_n$ for $H$. If $(y_n)_n$ is its canonical dual, then by \eqref{the reduced dual_operators} we have
$$
(I-V_E^*U_E)x=\sum_{n\in E^c}\langle x,x_n \rangle y_n,\quad \forall x\in H.
$$
By the above lemma, $I-V_E^*U_E$ is an invertible operator, so we get from the preceding equality that
$$
x=\sum_{n\in E^c}\langle x,x_n \rangle (I-V_E^*U_E)^{-1}y_n,\quad \forall x\in H.
$$
This shows that the sequence $((I-V_E^*U_E)^{-1}y_n)_{n\in E^c}$ is a frame for $H$ that is dual to $(x_n)_{n\in E^c}$. As noted in the first proof of Theorem~\ref{main}, its trivial extension (with null-vectors as frame members indexed by $n\in E$) serves as a frame for $H$ dual to the original frame $(x_n)_n$ that provides perfect reconstruction without the coefficients indexed by the erasure set $E$.
Moreover, it turns out that $((I-V_E^*U_E)^{-1}y_n)_{n\in E^c}$ is precisely the dual frame from our construction that is given by formula \eqref{XIX} in Theorem~\ref{main2}.
\begin{theorem}\langlebel{main2-cor1}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the analysis operator $U$ and the canonical dual $(y_n)_n.$ Suppose that a finite set $E=\{n_1,n_2,\ldots,n_k\}$, $k\in \Bbb N$, satisfies the minimal redundancy condition for $(x_n)_n$. If $(v_n)_n$ is a dual frame for $(x_n)_n$ defined by \eqref{XIX} then
\begin{equation}\langlebel{nova formula za veove}
v_n=(I-V_E^*U_E)^{-1}y_n,\quad \forall n\in E^c.
\end{equation}
\end{theorem}
\begin{proof}
Observe first that $I-V_E^*U_E$ is invertible by Lemma~\ref{bivsa lema 2 u sec 4}, so \eqref{nova formula za veove} makes sense.
Let $\alpha_n:=\begin{bmatrix}\alpha_{n1} & \ldots & \alpha_{nk}\end{bmatrix}^T$, $n\in E^c.$
Then \eqref{XIX} can be written as
\begin{equation}\langlebel{XIX-Pars1}
v_n=y_n-V_E^*\alpha_n,\quad \forall n\in E^c,
\end{equation}
while \eqref{XVIII} becomes
\begin{equation}\langlebel{XVIII-Pars1}
(U_EV_E^*-I)\alpha_n=U_Ey_n,\quad \forall n\in E^c.
\end{equation}
By Lemma~\ref{crossGramian} the operator $U_EV_E^*-I$ is invertible so \eqref{XVIII-Pars1} becomes
\begin{equation}\langlebel{XVIII-Pars2}
\alpha_n=(U_EV_E^*-I)^{-1}U_Ey_n,\quad \forall n\in E^c.
\end{equation}
Observe now the equality $(U_EV_E^*-I)U_E=U_E(V_E^*U_E-I)$.
Multiplying by $(U_EV_E^*-I)^{-1}$ from the left and by $(V_E^*U_E-I)^{-1}$ from the right we obtain
\begin{equation}\langlebel{medjukorak}
(U_EV_E^*-I)^{-1}U_E=U_E(V_E^*U_E-I)^{-1}.
\end{equation}
We now have, for all $n\in E^c$,
\begin{eqnarray*}
v_n&\stackrel{\eqref{XIX-Pars1}}{=}&y_n-V_E^*\alpha_n\\
&\stackrel{\eqref{XVIII-Pars2}}{=}&y_n-V_E^*(U_EV_E^*-I)^{-1}U_Ey_n\\
&\stackrel{\eqref{medjukorak}}{=}&y_n-V_E^*U_E(V_E^*U_E-I)^{-1}y_n\\
&=&y_n-\left((V_E^*U_E-I)+I\right)(V_E^*U_E-I)^{-1}y_n\\
&=&y_n-y_n-(V_E^*U_E-I)^{-1}y_n\\
&=&(I-V_E^*U_E)^{-1}y_n.
\end{eqnarray*}
\end{proof}
Recall from Theorem~\ref{main2} that the $v_n$'s can be obtained by solving a system of linear equations. The preceding theorem provides us with another possibility: $(v_n)_{n\in E^c}$ is identified as the image of $(y_n)_{n\in E^c}$
under the action of $(I-V_E^*U_E)^{-1}$. Put $E=\{n_1,n_2,\ldots, n_k\}$, $k\in \Bbb N$. Then $V_E^*U_E$ is of the form $$V_E^*U_Ex=\sum_{i=1}^k\langle x,x_{n_i}\rangle y_{n_i}=\sum_{i=1}^k\theta_{y_{n_i},x_{n_i}}(x),\quad x\in H.$$
Thus, in applications we need an efficient procedure for computing $(I-\sum_{i=1}^k\theta_{y_{n_i},x_{n_i}})^{-1}$.
The case $k=1$ is easy. Here we need the following simple observation: if $x,y\in H$ are such that $I-\theta_{y,x}$ is invertible, then $\langle y,x\rangle\neq 1$. (Indeed, $\langle y,x\rangle=1$ would imply $(I-\theta_{y,x})y=y-\langle y,x\rangle y=0$ and, by the invertibility of $I-\theta_{y,x}$, we would have $y=0$, which contradicts the equality $\langle y,x\rangle= 1$.)
Conversely, one easily verifies that if $\langle y,x\rangle\neq 1$, then $I-\theta_{y,x}$ is invertible and its inverse is given by
\begin{equation}\langlebel{cijeli inverz}
\left(I-\theta_{y,x}\right)^{-1}=I+\frac{1}{1-\langle y,x\rangle}\theta_{y,x}.
\end{equation}
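Indeed, since $\theta_{y,x}^2=\langle y,x\rangle\,\theta_{y,x}$, a direct computation gives
$$
(I-\theta_{y,x})\left(I+\frac{1}{1-\langle y,x\rangle}\theta_{y,x}\right)
=I+\left(\frac{1}{1-\langle y,x\rangle}-1-\frac{\langle y,x\rangle}{1-\langle y,x\rangle}\right)\theta_{y,x}=I,
$$
and the same computation applies with the factors in the reverse order.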
\begin{cor}\langlebel{main2-cor2}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the analysis operator $U$ and the canonical dual $(y_n)_n.$ Suppose that a set $E=\{m\}$ satisfies the minimal redundancy condition for $(x_n)_n$.
Let $v_m=0$ and
\begin{equation}\langlebel{XIX-cor}
v_n= y_n +\frac{\langle y_n,x_m\rangle}{1-\langle y_m,x_m\rangle}y_{m},\quad \forall n\neq m.
\end{equation}
Then $(v_n)_n$ is a frame for $H$ dual to $(x_n)_n$.
\end{cor}
\begin{proof} If $E=\{m\}$ then $I-V_E^*U_E=I-\theta_{y_m,x_m}.$ Now \eqref{cijeli inverz} and Theorem~\ref{main2-cor1} give
\eqref{XIX-cor}.
\end{proof}
Suppose now that $E$ has $k\geq 2$ elements and satisfies the minimal redundancy condition for a frame $(x_n)_n$ with the analysis operator $U.$ Assume for notational simplicity that $E=\{1,2,\ldots, k\}$. As before, we denote by $(y_n)_n$ the canonical dual of $(x_n)_n$ and by $V$ its analysis operator. Observe that $I-V_E^*U_E=I-\sum_{i=1}^k\theta_{y_i,x_i}$. Recall from Lemma~\ref{bivsa lema 2 u sec 4} that $I-V_F^*U_F$ is invertible for all $F\subseteq E$. In particular, by taking $F=\{1\}, F=\{1,2\}, F=\{1,2,3\}, \ldots$ we conclude that $I-\sum_{i=1}^n\theta_{y_i,x_i}$ is an invertible operator for all $n=1,2,\ldots,k$.
In the theorem that follows we demonstrate an iterative procedure for computing all $(I-\theta_{y_1,x_1})^{-1}$, $(I-\sum_{i=1}^2\theta_{y_i,x_i})^{-1}$, $(I-\sum_{i=1}^3\theta_{y_i,x_i})^{-1}$, $\ldots$ $(I-\sum_{i=1}^k\theta_{y_i,x_i})^{-1}$ such that each $(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}$, $n=1,2, \ldots,k$, is expressed as a product of exactly $n$ simple inverses $(I-\theta_{y,x})^{-1}$ which one obtains using \eqref{cijeli inverz}.
\begin{theorem}\langlebel{thm-inverse}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the analysis operator $U$ and the canonical dual $(y_n)_n.$ Suppose that a set $E=\{1,2,\ldots,k\}$, $k\in \Bbb N$, satisfies the minimal redundancy condition for $(x_n)_n$. Let $\overline{y}_1,\ldots,\overline{y}_k$ be defined as
\begin{eqnarray}\langlebel{definicija krnjih nizova}
\overline{y}_1&=&y_1, \nonumber\\
\overline{y}_n&=&(I-\theta_{\overline{y}_{n-1},x_{n-1}})^{-1}\ldots(I-\theta_{\overline{y}_1,x_1})^{-1}y_n,\quad n=2,\ldots,k.
\end{eqnarray}
Then $\overline{y}_1,\ldots,\overline{y}_k$ are well defined and
\begin{equation}\langlebel{inverzi}
(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}=(I-\theta_{\overline{y}_{n},x_{n}})^{-1}\dots(I-\theta_{\overline{y}_1,x_1})^{-1},\quad n=1,\ldots, k.
\end{equation}
\end{theorem}
\begin{proof}
In order to see that $\overline{y}_1,\ldots,\overline{y}_k$ are well defined we have to prove that the operators $I-\theta_{\overline{y}_n,x_n}$ for $n=1,\ldots,k$ are invertible.
Note that, as already observed, $I-\sum_{i=1}^n\theta_{y_i,x_i}$ are invertible for all $n=1,2,\ldots, k$.
We proceed by induction on $n$.
For $n=1$ we have $I-\theta_{\overline{y}_1,x_1}=I-\theta_{y_1,x_1}$ which is an invertible operator by Lemma~\ref{bivsa lema 2 u sec 4} (applied to $F=\{1\}$), and formula \eqref{inverzi} is trivially satisfied.
Assume now that for some $n<k$ the operators $I-\theta_{\overline{y}_1,x_1},\ldots,I-\theta_{\overline{y}_n,x_n}$ are invertible and that \eqref{inverzi} is satisfied.
Observe the equality
\begin{equation}\langlebel{product rule}
T\theta_{y,x}=\theta_{Ty,x}
\end{equation}
which holds for all $x,y\in H$ and $T\in \Bbb B(H)$.
Now we have
\begin{eqnarray*}\langlebel{inverz-n}
I-\theta_{\overline{y}_{n+1},x_{n+1}}&\stackrel{\eqref{definicija krnjih nizova}}{=}&I-\theta_{(I-\theta_{\overline{y}_n,x_n})^{-1}\cdots(I-\theta_{\overline{y}_1,x_1})^{-1}y_{n+1},\,x_{n+1}}\\
&\stackrel{\eqref{product rule}}{=} &I-(I-\theta_{\overline{y}_n,x_n})^{-1}\cdots(I-\theta_{\overline{y}_1,x_1})^{-1}\theta_{y_{n+1},x_{n+1}}\\
&\stackrel{\eqref{inverzi} }{=}&I-(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}\theta_{y_{n+1},x_{n+1}}\\
&=&(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}(I-\sum_{i=1}^n\theta_{y_i,x_i})-(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}\theta_{y_{n+1},x_{n+1}}\\
&=&(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}(I-\sum_{i=1}^n\theta_{y_i,x_i}-\theta_{y_{n+1},x_{n+1}})\\
&=&(I-\sum_{i=1}^n\theta_{y_i,x_i})^{-1}(I-\sum_{i=1}^{n+1}\theta_{y_i,x_i})\\
&\stackrel{\eqref{inverzi}}{=}&(I-\theta_{\overline{y}_{n},x_{n}})^{-1}\cdots(I-\theta_{\overline{y}_1,x_1})^{-1}(I-\sum_{i=1}^{n+1}\theta_{y_i,x_i}).
\end{eqnarray*}
This proves that $I-\theta_{\overline{y}_{n+1},x_{n+1}}$ is invertible (as a product of invertible operators).
Also, it follows from the final equality that
$$(I-\sum_{i=1}^{n+1}\theta_{y_i,x_i})^{-1}=
(I-\theta_{\overline{y}_{n+1},x_{n+1}})^{-1}(I-\theta_{\overline{y}_{n},x_{n}})^{-1}\cdots(I-\theta_{\overline{y}_1,x_1})^{-1}.$$
\end{proof}
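For a numerical illustration (with arbitrary synthetic vectors; the function names are ours and nothing here is specific to frames), the iteration can be sketched as follows: at the $n$-th step one applies the inverse accumulated so far to $y_n$ to obtain $\overline{y}_n$ as in \eqref{definicija krnjih nizova}, inverts the rank-one perturbation $I-\theta_{\overline{y}_n,x_n}$ via \eqref{cijeli inverz}, and multiplies as in \eqref{inverzi}.
\begin{verbatim}
import numpy as np

def rank_one_inverse(y, x):
    # (I - theta_{y,x})^{-1} = I + theta_{y,x}/(1 - <y,x>), valid when <y,x> != 1
    return np.eye(len(y)) + np.outer(y, np.conj(x)) / (1.0 - np.vdot(x, y))

def iterative_inverse(ys, xs):
    # returns (I - sum_i theta_{y_i,x_i})^{-1} as a product of k rank-one inverses
    inv = np.eye(len(xs[0]))
    for x, y in zip(xs, ys):
        y_bar = inv @ y                        # \bar{y}_n from the previous steps
        inv = rank_one_inverse(y_bar, x) @ inv
    return inv

# synthetic check in R^5 with k = 3 (generic data, so every step is invertible)
rng = np.random.default_rng(0)
xs, ys = list(rng.standard_normal((3, 5))), list(rng.standard_normal((3, 5)))
A = np.eye(5) - sum(np.outer(y, np.conj(x)) for x, y in zip(xs, ys))
print(np.allclose(iterative_inverse(ys, xs) @ A, np.eye(5)))   # True
\end{verbatim}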
We conclude this section with a result that improves Theorem~6.2 from \cite{LS} by removing the linear independence assumption. In this way we provide a closed-form formula for the inverse $(I-\sum_{n=1}^{k}\theta_{y_n,x_n})^{-1}$ which can (alternatively) be used, via Theorem~\ref{main2-cor1}, for obtaining our dual frame $(v_n)_n$.
\begin{theorem}\langlebel{tm-LS}
Let $x_1,\ldots,x_k$ and $y_1,\ldots,y_k$ be vectors in a Hilbert space $H$ such that the operator $R=I-\sum_{j=1}^k\theta _{y_j,x_j}\in\Bbb B(H)$ is invertible.
Then
$$R^{-1}=I+\sum_{i,j=1}^kc_{ij}\theta_{y_i,x_j},$$
where the coefficient matrix $C:=(c_{ij})$ is given by
\begin{equation}\langlebel{opet_minus}
C=-\left[
\begin{array}{cccc}
\langle y_1,x_1\rangle-1 & \langle y_2,x_1\rangle&\ldots & \langle y_k,x_1\rangle \\
\langle y_1,x_2\rangle & \langle y_2,x_2\rangle-1&\ldots & \langle y_k,x_2\rangle \\
\vdots & \vdots& & \vdots \\
\langle y_1,x_k\rangle & \langle y_2,x_k\rangle&\ldots & \langle y_k,x_k\rangle-1 \\
\end{array}
\right]^{-1}.
\end{equation}
\end{theorem}
\begin{proof}
Let $U,V:H\to \Bbb C^k$ be the analysis operators of the Bessel sequences $(x_n)_{n=1}^k$, $(y_n)_{n=1}^k,$ respectively. Then $R=I-V^*U,$ and since $R$ is invertible, $UV^*-I$ is also an invertible operator (recall that $I-AB$ is invertible if and only if $I-BA$ is).
Let $(e_n)_{n=1}^k$ be the canonical basis for $\Bbb C^k$. Since
$$(UV^*-I)_{ij}=\langle (UV^*-I)e_j,e_i\rangle=\langle y_j,x_i\rangle-\delta_{ij},$$ ($\delta_{ij}$ is the Kronecker delta) the matrix representation of the operator $UV^*-I: \Bbb C^k\to \Bbb C^k$ with respect to the basis $(e_n)_{n=1}^k$ is precisely the matrix
$$\left[
\begin{array}{cccc}
\langle y_1,x_1\rangle-1 & \langle y_2,x_1\rangle&\ldots & \langle y_k,x_1\rangle \\
\langle y_1,x_2\rangle & \langle y_2,x_2\rangle-1&\ldots & \langle y_k,x_2\rangle \\
\vdots & \vdots& & \vdots \\
\langle y_1,x_k\rangle & \langle y_2,x_k\rangle&\ldots & \langle y_k,x_k\rangle-1 \\
\end{array}
\right].$$
As a matrix representation of an invertible operator, this is an invertible matrix.
Now one proceeds exactly as in the proof of Theorem~6.2 from \cite{LS}.
\end{proof}
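The formula of Theorem~\ref{tm-LS} is equally simple to use in computations. The sketch below is illustrative only (the function name is ours); it assembles the matrix from \eqref{opet_minus} and returns $R^{-1}=I+\sum_{i,j}c_{ij}\theta_{y_i,x_j}$.
\begin{verbatim}
import numpy as np

def inverse_via_coefficients(xs, ys):
    # R = I - sum_j theta_{y_j,x_j}; returns R^{-1} = I + sum_{i,j} c_{ij} theta_{y_i,x_j}
    k, d = len(xs), len(xs[0])
    G = np.array([[np.vdot(xs[i], ys[j]) for j in range(k)]   # entry (i,j) = <y_j, x_i>
                  for i in range(k)])
    C = -np.linalg.inv(G - np.eye(k))                         # the matrix in (opet_minus)
    return np.eye(d) + sum(C[i, j] * np.outer(ys[i], np.conj(xs[j]))
                           for i in range(k) for j in range(k))

# quick synthetic check, same data layout as before
rng = np.random.default_rng(1)
xs, ys = list(rng.standard_normal((3, 5))), list(rng.standard_normal((3, 5)))
R = np.eye(5) - sum(np.outer(y, np.conj(x)) for x, y in zip(xs, ys))
print(np.allclose(inverse_via_coefficients(xs, ys) @ R, np.eye(5)))   # True
\end{verbatim}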
\section{Uniformly redundant frames}
Recall from the introduction that a frame $(x_n)_n$ is said to be $M$-robust, $M\in \Bbb N$, if any set of indices of cardinality $M$ satisfies the minimal redundancy condition for $(x_n)_n$. Obviously, such frames are resistant to erasures of any $M$ frame coefficients. A special subclass consists of $M$-robust frames with the property that after the removal of any $M$ elements the reduced sequence makes up a basis for the ambient space; we say that such frames are of uniform excess $M$. In particular, if a frame $(x_n)_{n=1}^{N+M}$ for an $N$-dimensional space $H$ is of uniform excess $M$, any $N$ of its members make up a basis for $H$. Such frames are sometimes called \emph{full spark frames} (see \cite{ACM}) or \emph{maximally robust frames} (as in \cite{PK}).
Here we provide some results on such frames. Let us start with a simple proposition that is certainly known. For completeness we include a proof.
\begin{prop}\langlebel{bounded from below}
Let $(x_n)_n$ be a frame for a Hilbert space $H$ with the lower frame bound $A$. If $\|x_m\|<\sqrt{A}$ for some index $m$, then the set $E=\{m\}$ satisfies the minimal redundancy condition for $(x_n)_n$.
\end{prop}
\begin{proof}
Let $(y_n)_n$ be the canonical dual of $(x_n)_n$. Denote again by $U$ and $V$ the analysis operators of $(x_n)_n$ and $(y_n)_n$, respectively.
Since $\|(U^*U)^{-1}\|\le \frac{1}{A}$ it follows that
$$\langle y_m,x_m\rangle=\langle (U^*U)^{-1}x_m,x_m\rangle\le \| (U^*U)^{-1}\| \|x_m\|^2<1.$$ Then $I+\frac{1}{1-\langle y_m,x_m\rangle}\theta_{y_m,x_m}$ is well defined and, by \eqref{cijeli inverz}, it is the inverse of
$I-\theta_{y_m,x_m}=I-V_E^*U_E.$ By Lemma~\ref{bivsa lema 2 u sec 4}, $E$ satisfies the minimal redundancy condition for $(x_n)_n$.
\end{proof}
\begin{cor}\langlebel{1-robust}
Suppose that $(x_n)_n$ is a frame for a Hilbert space $H$ with the lower frame bound $A$ such that $\|x_n\|<\sqrt{A}$ for all $n\in \Bbb N$. Then $(x_n)_n$ is $1$-robust.
\end{cor}
Consider now a Parseval frame $(x_n)_n$ for a Hilbert space $H$. By Proposition~\ref{bounded from below}, a sequence that is obtained from $(x_n)_n$ by removing any $x_n$ such that $\|x_n\|<1$ is again a frame for $H$. On the other hand, it is a well-known fact that each $x_m$ such that $\|x_m\|=1$ is orthogonal to all $x_n,\,n\not=m$. By combining these two facts we obtain
\begin{cor}\langlebel{Parseval 1-robust}
Let $(x_n)_n$ be a Parseval frame for a Hilbert space $H$. The following two conditions are mutually equivalent:
\begin{enumerate}
\item $(x_n)_n$ is $1$-robust;
\item $\|x_n\|<1$ for all $n\in \Bbb N$.
\end{enumerate}
\end{cor}
Note in passing that a similar statement can be proved for an arbitrary frame $(x_n)_n$ with the analysis operator $U$ using the preceding corollary applied to the associated Parseval frame $((U^*U)^{-\frac{1}{2}}x_n)_n$. We omit the details.
Next we show that each Parseval frame that is not an orthonormal basis can be converted to a $1$-robust Parseval frame. We first need a lemma which is proved using a nice maneuver from \cite{BP}.
\begin{lemma}\langlebel{Paulsen}
Let $(x_n)_n$ be a Parseval frame for a Hilbert space $H$ that is not an orthonormal basis. Suppose that $\|x_i\|=1$ for some $i\in \Bbb N$.
Then there exists $j\in \Bbb N$, $j\not =i$, such that $\|x_j\|<1$ and a Parseval frame $(x_n^{\prime})_n$ for $H$ with the properties $\|x_i^{\prime}\|<1$, $\|x_j^{\prime}\|<1$, and
$x_n^{\prime}=x_n$ for all $n\not=i,j$.
\end{lemma}
\begin{proof}
First, since $(x_n)_n$ is not an orthonormal basis, there exists at least one index $j\in \Bbb N$, $j\not =i$, such that $\|x_j\|<1$. Find such $j$, take a real number $\varphi$, $0<\varphi<\frac{\pi}{2}$, and define
$$x_n^{\prime}=\left\{
\begin{array}{cl}
\cos\varphi\, x_i+\sin\varphi\, x_j, & \hbox{if $n=i$;} \\
-\sin\varphi\, x_i+\cos\varphi\, x_j, & \hbox{if $n=j$;} \\
x_n, & \hbox{if $n\not=i,j$.}
\end{array}
\right.$$
A direct verification shows that $(x_n^{\prime})_n$ is a Parseval frame for $H$. Further, $\|x_i\|=1$ implies that $x_i\perp x_n$ for all $n\not =i$.
Using this, we find
$$\|x_i^{\prime}\|^2=\|x_i\|^2\cos^2 \varphi + \|x_j\|^2\sin^2\varphi<1$$
and
$$\|x_j^{\prime}\|^2= \|x_i\|^2\sin^2 \varphi+ \|x_j\|^2\cos^2\varphi<1.$$
\end{proof}
\begin{remark}\langlebel{producing 1-robust}
Suppose we are given a Parseval frame $(x_n)_n$ for a Hilbert space $H$ that is not an orthonormal basis. If $\|x_n\|<1$ for all $n\in \Bbb N$, Corollary~\ref{Parseval 1-robust} guarantees that $(x_n)_n$ is $1$-robust.
If, on the other hand, there exists $i\in \Bbb N$ such that $\|x_i\|=1$, we can apply the preceding lemma to obtain another Parseval frame $(x_n^{\prime})_n$ with fewer elements that are orthogonal to all other members of the frame. By repeating the procedure we eventually obtain a $1$-robust Parseval frame.
\end{remark}
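A sketch of this procedure (illustrative only; the function name is ours, the rows of \texttt{X} are the members of a real Parseval frame that is not an orthonormal basis and has only finitely many unit-norm members, and the angle $\varphi=\pi/4$ is one admissible choice):
\begin{verbatim}
import numpy as np

def make_one_robust(X, tol=1e-12):
    # repeatedly mix a unit-norm member with a member of norm < 1 (Lemma "Paulsen");
    # each step preserves the Parseval property and removes one unit-norm member
    X = X.copy()
    c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
    for _ in range(len(X)):
        norms = np.linalg.norm(X, axis=1)
        unit = np.where(norms >= 1 - tol)[0]
        if unit.size == 0:
            break                          # 1-robust by Corollary "Parseval 1-robust"
        i, j = unit[0], np.argmin(norms)   # ||x_j|| < 1 since X is not an ONB
        xi, xj = X[i].copy(), X[j].copy()
        X[i], X[j] = c * xi + s * xj, -s * xi + c * xj
    return X
\end{verbatim}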
It is now natural to ask if there is a similar technique that would produce a $2$-robust Parseval frame starting from a $1$-robust Parseval frame.
A related question is: is there some necessary and sufficient condition (possibly stronger than $\|x_n\|<1$) on the norms of the frame members which would ensure $2$-robustness? The following example shows that the answer to the latter question is negative.
\begin{example} Let $\varepsilon\in\left(0,\frac{1}{2}\right).$ Let $(\epsilon_n)_n$ be an orthonormal basis for a Hilbert space $H$.
Let
$$x_n=\left\{
\begin{array}{ll}
\sqrt{\varepsilon}\epsilon_{k}, & \hbox{if $n=3k-2$ or $n=3k-1$;} \\
\sqrt{1-2\varepsilon}\epsilon_{k}, & \hbox{if $n=3k$;}
\end{array}
\right.$$
The sequence $(x_n)_n$ is obviously a $2$-robust Parseval frame. Observe that $\|x_{3k}\|= \sqrt{1-2\varepsilon}$ for all $k,$ so
we can choose $\varepsilon$ such that the norms $\|x_{3k}\|$ are arbitrarily close to $1$.
\end{example}
In the rest of the paper we turn to finite-dimensional spaces and their finite frames. Our goal is to characterize full spark frames (i.e.~finite frames of uniform excess). Let us begin with two simple examples.
\begin{example}\langlebel{ex_N+1_full_spark}
Suppose $(x_1,x_2,\ldots,x_N)$ is a basis for a Hilbert space $H$. Choose arbitrary $\lambda_1, \lambda_2,\ldots, \lambda_N$ such that $\lambda_i \not =0$ for all $i=1,2,\ldots,N$, and define
$$
x_{N+1}=\lambda_1 x_1+\lambda_2 x_2+\ldots + \lambda_N x_N.
$$
Then it is easy to verify that $(x_n)_{n=1}^{N+1}$ is a full spark frame for $H$.
\end{example}
\begin{example} \langlebel{ex_N+2_full_spark}
Take again a basis $(x_1,x_2,\ldots,x_N)$ for $H.$ Choose arbitrary $\lambda_i,\mu_i, i=1,\ldots,N$ such that
$\lambda_i,\mu_i \neq 0$ for all $i=1,2,\ldots,N$, and $\frac{\lambda_i}{\mu_i}\not =\frac{\lambda_j}{\mu_j}$ for $i\not =j$. Let
\begin{eqnarray*}
x_{N+1}&=&\lambda_1x_1+\lambda_2x_2+\ldots+\lambda_Nx_N, \\
x_{N+2}&=&\mu_1x_1+\mu_2x_2+\ldots+\mu_Nx_N.
\end{eqnarray*}
Then $(x_n)_{n=1}^{N+2}$ is a full spark frame for $H$.
To see this, we must show that any subsequence consisting of exactly $N$ vectors makes up a basis for $H$. For example, consider $(x_{N+1},x_{N+2},x_3,\ldots,x_N).$ The matrix representation of these $N$ vectors in the basis $(x_1,x_2,\ldots,x_N)$ is
$$A=\left[ \begin{array}{ccccc}
\lambda_1 & \mu_1 & 0 & \ldots& 0 \\
\lambda_2 & \mu_2 & 0 & \ldots& 0 \\
\lambda_3 & \mu_3 & 1 & \ldots& 0 \\
\vdots & \vdots & \vdots & & \vdots \\
\lambda_N & \mu_N & 0 & \ldots& 1 \\
\end{array}
\right].$$
Since $\det A=\lambda_1\mu_2-\lambda_2\mu_1\neq 0,$ $A$ is invertible and $(x_{N+1},x_{N+2},x_3,\ldots,x_N)$ is a basis for $H.$
Again, we omit the further details.
\end{example}
The preceding two examples are just special cases of the following theorem.
\begin{theorem}\langlebel{2N}
Let $(x_n)_{n=1}^N$, $N\in \Bbb N$, be a basis of a Hilbert space $H$ and let $T=(t_{ij})\in M_{NM}$, $M\in \Bbb N$, be a matrix with the property that each square submatrix of $T$ is invertible. Define $x_{N+1},x_{N+2},\ldots,x_{N+M}\in H$ by
\begin{equation}\langlebel{full spark vectors}
x_{N+j}=\sum_{i=1}^Nt_{ij}x_i,\quad \forall j=1,2,\ldots,M.
\end{equation}
Then $(x_n)_{n=1}^{N+M}$ is a full spark frame for $H$.
Conversely, each full spark frame for $H$ is of this form. More precisely, if $(x_n)_{n=1}^{N+M}$ is a full spark frame for $H$, then there is a matrix $T=(t_{ij})\in M_{NM}$, all of whose square submatrices are invertible, such that
$x_{N+1},x_{N+2},\ldots,x_{N+M}$ are of the form \eqref{full spark vectors}.
\end{theorem}
\begin{proof}
Suppose that we are given a basis $(x_n)_{n=1}^N$ for $H$ and a matrix $T=(t_{ij})\in M_{NM}$ whose all square submatrices are invertible. Consider $(x_n)_{n=1}^{N+M}$ with $x_{N+1},x_{N+2},\ldots,x_{N+M}\in H$ defined by \eqref{full spark vectors}.
Let $k$ be a natural number such that $1\leq k\leq \min\{N,M\}$. Consider two arbitrary sets of indices of cardinality $k$, $I=\{i_1,i_2,\ldots,i_k\}$ and
$J=\{j_1,j_2,\ldots,j_k\}$, and let $I^c=\{1,2,\ldots,N\}\setminus I$. We must prove that the reduced sequence
\begin{equation}\langlebel{2Nfull spark proof}
(x_n)_{n\in I^c} \cup (x_{N+j})_{j\in J}
\end{equation}
is a basis for $H$.
(Note that the case $k=0$ is trivial.)
Let $C\in M_N$ denote the matrix that is obtained by representing our reduced sequence \eqref{2Nfull spark proof} in the basis $(x_n)_{n=1}^N$. It suffices to show that $C$ is an invertible matrix. We shall show, using an argument from the proof of Theorem 6 in \cite{ACM}, that $\det C\not =0$. By suitable changes of rows and columns of $C$, where only the first $N-k$ columns of $C$ are involved, we get a block-matrix $C^{\prime}$ of the form
$$
C^{\prime}=\left[\begin{array}{cc}I_{N-k}&T^{\prime}\\0&T^{\prime \prime}\end{array}\right]
$$
where $I_{N-k} \in M_{N-k}$ is the unit matrix while $T^{\prime}\in M_{N-k,k}$ and $T^{\prime \prime}\in M_k$ are some submatrices of $T$. By the hypothesis, $T^{\prime \prime}$ is invertible. Hence, $\det C^{\prime}=\det I_{N-k} \cdot \det T^{\prime \prime}=
\det T^{\prime \prime} \not =0$ and this obviously implies $\det C\not =0$.
To prove the converse, suppose that $(x_n)_{n=1}^{N+M}$ is an arbitrary full spark frame for $H$. In particular, $(x_n)_{n=1}^{N}$ is a basis for $H$, so there exist numbers $t_{ij}$ such that $x_{N+1},x_{N+2},\ldots,x_{N+M}$ are of the form \eqref{full spark vectors}. We must prove that each square submatrix of $T=(t_{ij})\in M_{NM}$ is invertible.
Consider two sets of indices $I=\{i_1,i_2,\ldots,i_k\}$ and
$J=\{j_1,j_2,\ldots,j_k\}$ with $1\leq k\leq \min\{N,M\}$, and the corresponding $k\times k$ submatrix $T_{I,J}=(t_{ij})_{i\in I,j\in J}$ of $T$.
Denote again $I^c=\{1,2,\ldots,N\}\setminus I$ and consider a reduced sequence
$$
(x_n)_{n\in I^c} \cup (x_{N+j})_{j\in J}.
$$
By the assumption, these $N$ vectors make up a basis for $H$. Denote by $C$ the matrix representation of this basis with respect to $(x_n)_{n=1}^N$ and notice that $C$ is an invertible matrix. In particular, the rows of $C$ are linearly independent. Observe now that a $k\times N$ submatrix $C_I$ of $C$ that corresponds to the rows indexed by $I$ is a block-matrix of the form
$$
C_I=\left[\begin{array}{ccc}0&|&T_{I,J}\end{array} \right].
$$
In particular, the rows of $C_I$ are linearly independent. This immediately implies that the rows of $T_{I,J}$ are linearly independent. Thus, $T_{I,J}$ is invertible.
\end{proof}
\begin{remark} Let $(x_n)_{n=1}^{N+M}$ be a full spark frame for an $N$-dimensional Hilbert space $H$. Consider a matrix $T$ introduced by
\eqref{full spark vectors} as in Theorem~\ref{2N}.
(a) Let $M=1.$ Then
$T=\left[\begin{array}{c}
t_{11}\\t_{21}\\\vdots\\t_{N1}
\end{array}
\right].$
Obviously, the property that each square submatrix of $T$ is invertible means that $t_{i1}\neq 0$ for all $i=1,\ldots,N.$
Therefore, all full spark frames for an $N$-dimensional Hilbert space $H$ consisting of $N+1$ elements are of the form given in Example~\ref{ex_N+1_full_spark}.
(b) Let $M=2.$ Then
$T=\left[\begin{array}{cc}
t_{11}&t_{12}\\t_{21}&t_{22}\\\vdots&\vdots\\t_{N1}&t_{N2}
\end{array}
\right].$
Then each square submatrix of $T$ will be invertible if and only if $t_{ij}\neq 0$ for $i=1,\ldots,N$, $j=1,2,$ and $t_{i1}t_{j2}\neq t_{i2}t_{j1},$ i.e.\ $\frac{t_{i1}}{t_{j1}}\neq \frac{t_{i2}}{t_{j2}},$ for all $i,j=1,\ldots,N,i\neq j.$ In other words, every full spark frame for an $N$-dimensional Hilbert space $H$ with $N+2$ elements is as in Example~\ref{ex_N+2_full_spark}.
(c) Let $M=N.$ Recall that a matrix $T\in M_N$ is said to be totally nonsingular if each minor of $T$ is different from zero (i.e.~if each $k \times k$ submatrix of $T$, for all $k=1,2,\ldots, N$, is nonsingular). Therefore, a sequence $(x_n)_{n=1}^{2N}$ is a full spark frame for $H$ if and only if $T$ is totally nonsingular.
\end{remark}
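To make the criterion of Theorem~\ref{2N} concrete: the sketch below (illustrative only; the function names are ours, and the brute-force test is feasible only for small $N$ and $M$) checks that every square submatrix of a given $T\in M_{NM}$ is nonsingular and builds the frame \eqref{full spark vectors} from a basis. The sample matrix corresponds to Example~\ref{ex_N+2_full_spark} with the arbitrary choice $N=3$, $\lambda=(1,1,1)$, $\mu=(1,2,3)$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def all_square_submatrices_nonsingular(T, tol=1e-10):
    # brute-force test of the condition in Theorem 2N
    N, M = T.shape
    return all(abs(np.linalg.det(T[np.ix_(rows, cols)])) > tol
               for k in range(1, min(N, M) + 1)
               for rows in combinations(range(N), k)
               for cols in combinations(range(M), k))

def full_spark_frame(basis, T):
    # rows of `basis`: a basis x_1,...,x_N of R^N; appends x_{N+j} = sum_i t_{ij} x_i
    return np.vstack([basis, T.T @ basis])

T = np.array([[1., 1.], [1., 2.], [1., 3.]])
print(all_square_submatrices_nonsingular(T))   # True
X = full_spark_frame(np.eye(3), T)             # 5 vectors in R^3; any 3 form a basis
\end{verbatim}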
In the rest of the paper we discuss some applications of Theorem~\ref{2N}. Let us begin by providing examples of totally nonsingular matrices.
Recall from \cite{FZ} that a square matrix $T$ is called {\em totally positive} if all its minors are positive real numbers. Clearly, each totally positive matrix is totally nonsingular. We shall construct infinite totally positive symmetric matrices which can be used, via Theorem~\ref{2N}, for producing new examples of full spark frames. The construction that follows may be of independent interest.
For a matrix $T=(t_{ij})\in M_n$ and two sets of indices $I,J\subseteq \{1,2,\ldots, n\}$ of the same cardinality we denote by $\Delta(T)_{I,J}$ the corresponding minor; i.e.~ the determinant of a submatrix $T_{I,J}=(t_{ij})_{i\in I,j\in J}$. A minor $\Delta(T)_{I,J}$ is called {\em solid} if both $I$ and $J$ consist of consecutive indices. More specifically, a minor $\Delta(T)_{I,J}$ is called {\em initial} if it is solid and $1\in I\cup J$.
Observe that each matrix entry is the lower-right corner of exactly one initial minor. In our construction we will make use of the following efficient criterion for total positivity which was proved by M. Gasca and J.M. Pe\~{n}a in \cite{GP} (see also Theorem 9 in \cite{FZ}): a square matrix is totally positive if and only if all its initial minors are positive.
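In computational terms the criterion is very convenient: instead of all minors one tests only the $n^2$ initial minors, one per matrix entry. A small illustrative sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def is_totally_positive(T, tol=1e-10):
    # Gasca-Pena criterion: T is totally positive iff all initial minors are positive.
    # The initial minor with lower-right corner (i, j) (0-based) is the solid
    # k x k minor with k = min(i, j) + 1, touching the first row or first column.
    n = T.shape[0]
    for i in range(n):
        for j in range(n):
            k = min(i, j) + 1
            if np.linalg.det(T[i - k + 1:i + 1, j - k + 1:j + 1]) <= tol:
                return False
    return True

print(is_totally_positive(np.array([[1., 1.], [1., 2.]])))   # True
print(is_totally_positive(np.array([[1., 2.], [1., 1.]])))   # False (the 2x2 minor is -1)
\end{verbatim}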
To describe our construction we need to introduce one more notational convention. Given an infinite matrix $T=(t_{ij})_{i,j=1}^{\infty}$ and $n\in \Bbb N$, we denote by $T^{(n)}$ a submatrix in the upper-left $n \times n$ corner of $T$, that is $T^{(n)}=(t_{ij})_{i,j=1}^{n}$. Its minors will be denoted by $\Delta(T^{(n)})_{I,J}$.
\begin{theorem}\langlebel{constructing TP}
Let $(a_n)_n$ and $(b_n)_n$ be sequences of natural numbers such that $b_1=a_2$ and $a_nb_{n+1}-b_na_{n+1}=1$ for all $n \in \Bbb N$. There exists an infinite matrix $T=(t_{ij})_{i,j=1}^{\infty}$ with the following properties:
\begin{enumerate}
\item $t_{ij}\in \Bbb N,\,\forall i,j \in \Bbb N$;
\item $t_{ij}=t_{ji},\,\forall i,j\in \Bbb N$;
\item $t_{1n}=t_{n1}=a_n,\,\forall n\in \Bbb N$, and $t_{2n}=t_{n2}=b_n,\,\forall n\in \Bbb N$.
\item all minors of $T^{(n)}$ are positive (i.e. $T^{(n)}$ is totally positive), for each $n\in \Bbb N$ ;
\item For each $n \in \Bbb N$, it holds
\noindent
$\Delta(T^{(n)})_{\{n\},\{1\}}=t_{n1}=a_n$,
\noindent
$\Delta(T^{(n)})_{\{n,n-1\},\{1,2\}}=1$,
\noindent
$\Delta(T^{(n)})_{\{n,n-1,n-2\},\{1,2,3\}}=1$,
\noindent
$\Delta(T^{(n)})_{\{n,n-1,n-2,n-3\},\{1,2,3,4\}}=1$,
\noindent
$\ldots$
\noindent
$\Delta(T^{(n)})_{\{n,n-1,\ldots,1\},\{1,2,\ldots,n\}}=\det T^{(n)}=1$
\noindent
(i.e. all solid minors of $T^{(n)}$ with the lower-left corner coinciding with the lower-left corner of $T^{(n)}$, except possibly $\Delta(T^{(n)})_{\{n\},\{1\}}$, are equal to $1$).
\end{enumerate}
\end{theorem}
\begin{proof}
We shall construct $T$ by induction starting from $T^{(1)}=\left[ a_1 \right]$. Observe that $T^{(2)}=\left[\begin{array}{cc}a_1&b_1\\a_2&b_2\end{array}\right]$; note that $T^{(2)}$ is symmetric since by assumption we have $b_1=a_2$. Moreover, $\det T^{(2)}=a_1b_2-b_1a_2=1$ by assumption and all entries of $T^{(2)}$ are natural numbers, so $T^{(2)}$ is totally positive and satisfies conditions (1)-(5).
Suppose that we have a symmetric totally positive matrix with integer coefficients $T^{(n)}\in M_n$ which satisfies the above conditions (1)-(5),
$$
T^{(n)}=\left[\begin{array}{ccccc}
a_1&a_2&a_3&\ldots&a_n\\
a_2&b_2&b_3&\ldots&b_n\\
a_3&b_3&t_{33}&\ldots&t_{3n}\\
\vdots&\vdots&\vdots& &\vdots\\
a_{n-1}&b_{n-1}&t_{n-1,3}&\ldots&t_{n-1,n}\\
a_n&b_n&t_{n3}&\ldots&t_{nn}
\end{array}\right].
$$
Put
\begin{equation}\langlebel{Te en plus jedan}
T^{(n+1)}=\left[\begin{array}{cccccc}
a_1&a_2&a_3&\ldots&a_n&a_{n+1}\\
a_2&b_2&b_3&\ldots&b_n&b_{n+1}\\
a_3&b_3&t_{33}&\ldots&t_{3n}&x_3\\
\vdots&\vdots&\vdots& &\vdots&\vdots\\
a_{n-1}&b_{n-1}&t_{n-1,3}&\ldots&t_{n-1,n}&x_{n-1}\\
a_n&b_n&t_{n3}&\ldots&t_{nn}&x_n\\
a_{n+1}&b_{n+1}&x_3&\ldots&x_n&x_{n+1}
\end{array}\right].
\end{equation}
Note that, by the hypothesis on sequences $(a_n)_n$ and $(b_n)_n$, we have
$$
\det
\left[\begin{array}{cc}
a_n&b_n\\
a_{n+1}&b_{n+1}
\end{array}
\right]=1.
$$
We must find numbers $x_3,x_4,\ldots,x_n,x_{n+1}$ such that $T^{(n+1)}$ satisfies (1)-(5).
Consider a $3 \times 3$ minor in the lower-left corner of $T^{(n+1)}$:
$$
\Delta(T^{(n+1)})_{\{n+1,n,n-1\},\{1,2,3\}}=\det
\left[\begin{array}{ccc}
a_{n-1}&b_{n-1}&t_{n-1,3}\\
a_n&b_n&t_{n3}\\
a_{n+1}&b_{n+1}&x_3
\end{array}
\right].
$$
We can compute $\Delta(T^{(n+1)})_{\{n+1,n,n-1\},\{1,2,3\}}$ by the Laplace expansion along the third row. By the assumption on sequences $(a_n)_n$ and $(b_n)_n$ we have $\det \left[\begin{array}{cc}a_{n-1}&b_{n-1}\\a_n&b_n\end{array}\right]=1$; hence, there exists a unique integer $x_3$ such that
$\Delta(T^{(n+1)})_{\{n+1,n,n-1\},\{1,2,3\}}=1;$ choose this $x_3$ and put $t_{n+1,3}=x_3.$
Consider now
$$
\Delta(T^{(n+1)})_{\{n+1,n,n-1,n-2\},\{1,2,3,4\}}=\det \,
\left[\begin{array}{cccc}
a_{n-2}&b_{n-2}&t_{n-2,3}&t_{n-2,4}\\
a_{n-1}&b_{n-1}&t_{n-1,3}&t_{n-1,4}\\
a_n&b_n&t_{n3}&t_{n4}\\
a_{n+1}&b_{n+1}&t_{n+1,3}&x_4
\end{array}
\right].
$$
Note that the only unknown entry in this minor is $x_4$.
We again use the Laplace expansion along the bottom row. By the induction hypothesis we know that
$$
\det \,
\left[\begin{array}{ccc}
a_{n-2}&b_{n-2}&t_{n-2,3}\\a_{n-1}&b_{n-1}&t_{n-1,3}\\a_n&b_n&t_{n3}
\end{array}\right]=
\Delta(T^{(n)})_{\{n,n-1,n-2\},\{1,2,3\}}=1;
$$
hence, there is a unique $x_4\in \Bbb Z$ such that $\Delta(T^{(n+1)})_{\{n+1,n,n-1,n-2\},\{1,2,3,4\}}=1$. Put $t_{n+1,4}=x_4$.
We proceed in the same fashion to obtain $x_5,\ldots,x_{n+1}$ in order to achieve the above condition (5) for $T^{(n+1)}$.
By construction, all initial minors of $T^{(n+1)}$ with the lower-right corner in the last row are equal to $1$ (except possibly $\Delta(T^{(n+1)})_{\{n+1\},\{1\}}=a_{n+1}$, which is positive). Since $T^{(n+1)}$ is symmetric, all its initial minors with the lower-right corner in the last column are also equal to $1$. By the induction hypothesis $T^{(n)}$ is totally positive, so all initial minors of $T^{(n+1)}$ with the lower-right corner in the $i$th row and $j$th column such that $i,j\leq n$ are positive. Thus, we can apply the above mentioned result of M. Gasca and J.M. Pe\~{n}a (Theorem 9 in \cite{FZ}) to conclude that $T^{(n+1)}$ is totally positive. In particular, the integers $x_3,x_4,\ldots,x_n,x_{n+1}$ that we have computed along the way are all positive. This concludes the induction step.
\end{proof}
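The inductive step above is easy to carry out numerically. The following Python sketch (ours) extends a block $T^{(n)}$ to $T^{(n+1)}$ by solving, one entry at a time, the linear conditions that force the solid minors anchored at the lower-left corner to equal $1$; determinants are computed in floating point, which is adequate for small blocks.
\begin{verbatim}
import numpy as np

def extend(T, a_next, b_next):
    """Extend a symmetric block T^(n) to T^(n+1), given a_{n+1} and b_{n+1}."""
    n = T.shape[0]
    S = np.zeros((n + 1, n + 1))
    S[:n, :n] = T
    S[n, 0] = S[0, n] = a_next        # prescribed first row/column entry
    S[n, 1] = S[1, n] = b_next        # prescribed second row/column entry
    # Choose the remaining last-row entries so that, for k = 3, ..., n+1, the
    # k x k solid minor with rows {n-k+2,...,n+1} and columns {1,...,k} equals 1.
    for k in range(3, n + 2):
        rows = list(range(n - k + 1, n + 1))   # 0-based row indices
        cols = list(range(k))                  # 0-based column indices
        def minor(x):
            S[n, k - 1] = x
            S[k - 1, n] = x                    # keep the matrix symmetric
            return np.linalg.det(S[np.ix_(rows, cols)])
        d0, d1 = minor(0.0), minor(1.0)
        minor((1.0 - d0) / (d1 - d0))          # the minor is affine in the unknown
    return S

# With a_n = 1 and b_n = n this reproduces the upper-left corner of the
# Pascal-type matrix of the example that follows:
T = np.array([[1.0]])
for n in range(1, 6):
    T = extend(T, 1.0, float(n + 1))
print(np.rint(T).astype(int))
\end{verbatim}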
\begin{example}\langlebel{TPFibonacci}
Let us take $a_n=1$ and $b_n=n$, for all $n\in \Bbb N$. Clearly, the sequences $(a_n)_n$ and $(b_n)_n$ defined in this way satisfy the conditions from Theorem~\ref{constructing TP}. Thus, an application of that theorem gives us a totally positive matrix
$$
T=\left[\begin{array}{rrrrrrr}
1&1&1&1&1&1&\ldots\\
1&2&3&4&5&6&\ldots\\
1&3&6&10&15&21&\\
1&4&10&20&35&56&\\
1&5&15&35&70&126&\\
1&6&21&56&126&252&\\
\vdots&\vdots& & & &
\end{array}
\right]
$$
By the construction, the coefficients of $T$ in the first two rows and columns are determined in advance. One can prove that all other coefficients of $T$, those that need to be computed by an inductive procedure as described in the preceding proof, are given by
\begin{equation}\langlebel{matrix fibonacci}
t_{i,j+1}=t_{ij}+t_{i-1,j+1},\quad \forall i\geq 3,\,\forall j\geq 2.
\end{equation}
This means that $T$ is in fact the well known Pascal matrix; i.e. the $t_{ij}$'s are given by $t_{ij}=\binom{i+j-2}{j-1}$ for all $i,j \geq 1$.
A verification of (\ref{matrix fibonacci}) serves as an alternative proof of Theorem~\ref{constructing TP} with this special choice of $(a_n)_n$ and $(b_n)_n$. The key observation is the equality that one obtains by subtracting each row in $\Delta(T^{(n+1)})_{\{n+1,n,\ldots,n-j+1\},\{1,2,\ldots,j+1\}}$ from the next one and then using \eqref{matrix fibonacci}:
$$
\Delta(T^{(n+1)})_{\{n+1,n,\ldots,n-j+1\},\{1,2,\ldots,j+1\}}=\left|\begin{array}{cc}
1&t_{n-j+1,2}\quad t_{n-j+1,3}\quad \ldots \quad t_{n-j+1,j+1}\\
\begin{array}{c}0\\0\\\vdots\\0\end{array}&\Delta(T^{(n+1)})_{\{n+1,n,\ldots,n-j+2\},\{1,2,\ldots,j\}}
\end{array}\right|.
$$
We omit the details.
\end{example}
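The recursion \eqref{matrix fibonacci} and the binomial formula above can be cross-checked on a finite window with a few lines of Python (ours):
\begin{verbatim}
import math

def t(i, j):
    return math.comb(i + j - 2, j - 1)

# Verify t_{i,j+1} = t_{ij} + t_{i-1,j+1} on a finite range of indices.
print(all(t(i, j + 1) == t(i, j) + t(i - 1, j + 1)
          for i in range(3, 15) for j in range(2, 15)))   # expected: True
\end{verbatim}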
Another example of an infinite totally positive symmetric matrix is obtained by a different choice of sequences $(a_n)_n$ and $(b_n)_n$.
\begin{example}\langlebel{TP example 2}
Let $a_n=n$ and $b_n=3n-1$, for all $n\in \Bbb N$. Evidently, these two sequences satisfy the required conditions; namely, $b_1=a_2$ and $a_nb_{n+1}-b_na_{n+1}=1$ for all $n\in \Bbb N$. An application of Theorem~\ref{constructing TP} gives us a totally positive matrix
$$
T=\left[
\begin{array}{rrrrrrrrr}
1&2&3&4&5&6&7&8&\ldots\\
2&5&8&11&14&17&20&23&\ldots\\
3&8&14&21&29&38&48&59&\ldots\\
4&11&21&35&54&79&\cdot&\cdot&\\
5&14&29&54&94&\cdot&\cdot&\cdot&\\
6&17&38&79&\cdot&\cdot&\cdot&\cdot&\\
7&20&48&\cdot&\cdot&\cdot&\cdot&\cdot&\\
8&23&59&\cdot&\cdot&\cdot&\cdot&\cdot&\\
\vdots&\vdots&\vdots& & & & & &
\end{array}
\right]
$$
\end{example}
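The displayed corner can be reproduced with the inductive sketch given after the proof of Theorem~\ref{constructing TP} (ours; floating-point determinants are rounded at the end):
\begin{verbatim}
import numpy as np

# Reuses extend() from the sketch after the proof of the theorem.
T = np.array([[1.0]])                                     # T^(1) = [a_1] with a_1 = 1
for n in range(1, 8):
    T = extend(T, float(n + 1), float(3 * (n + 1) - 1))   # a_{n+1}, b_{n+1}
print(np.rint(T).astype(int))                             # compare with the corner above
\end{verbatim}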
\begin{remark}\langlebel{generating all TP}
Obviously, by choosing suitable sequences $(a_n)_n$ and $(b_n)_n$ one can generate in the same fashion many other totally positive symmetric matrices with integer coefficients.
It is also clear from the proof of Theorem~\ref{constructing TP} that, by applying a similar inductive procedure, one can construct any infinite totally positive matrix (not necessarily symmetric) with coefficients merely in $\Bbb R^+$, with a prescribed first column or first row.
\end{remark}
Suppose now again that we work in an $N$-dimensional Hilbert space $H$. Recall that Theorem 6 from \cite{ACM} describes a procedure for producing full spark unit norm tight frames using the Chebotar\"{e}v theorem on the $N\times N$ discrete Fourier transform matrix with $N$ prime. Another technique for producing full spark equal norm tight frames can be found in \cite{PK}.
We can now provide, using Theorem~\ref{2N} and the preceding two examples, further examples of full spark frames for $H$ of arbitrary length. To do that, we only need to fix some $M\in\Bbb N$ and choose arbitrary sets of indices $I=\{i_1,i_2,\ldots,i_N\}$ and $J=\{j_1,j_2,\ldots, j_M\}$. Then we can take a totally positive matrix $T$ from Example~\ref{TPFibonacci} or Example~\ref{TP example 2}, its submatrix $T_{I,J}\in M_{NM}$ and apply Theorem~\ref{2N}. In this way we obtain a full spark frame for $H$ consisting of $N+M$ elements.
\begin{example}\langlebel{novi genericki Fib}
Denote by $(x_n)_{n=1}^N$ the canonical basis for $\Bbb C^N$, $N\in \Bbb N$. Take arbitrary $M\in \Bbb N$ and the upper left $N\times M$ corner of the matrix from Example \ref{TPFibonacci}. An application of Theorem~\ref{2N} gives us a full spark frame $(x_n)_{n=1}^{N+M}$ for $\Bbb C^N$ whose members are represented in the basis $(x_n)_{n=1}^N$ by the matrix
$$
F_{NM}=\left[\begin{array}{cccccccccc}
1&0&0&\cdots&0&1&1&1&\cdots&1\\
0&1&0&\cdots&0&1&2&3&\cdots&M\\
0&0&1&\cdots&0&1&3&t_{33}&\cdots&t_{3M}\\
0&0&0&\cdots&0&1&4&t_{43}&\cdots&t_{4M}\\
\vdots&\vdots&\vdots& &\vdots&\vdots&\vdots&\vdots& &\vdots\\
0&0&0&\cdots&1&1&N&t_{N3}&\cdots&t_{NM}
\end{array}
\right]
$$
with
$$
t_{ij}=\binom{i+j-2}{j-1},\quad i=1,2,\ldots,N,\ j=1,2,\ldots,M.
$$
\end{example}
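For concreteness, the following short Python sketch (ours) assembles the synthesis matrix of this example for $N=3$, $M=2$ and verifies the full spark property by brute force.
\begin{verbatim}
import itertools
import math
import numpy as np

N, M = 3, 2
T = np.array([[math.comb(i + j - 2, j - 1) for j in range(1, M + 1)]
              for i in range(1, N + 1)], dtype=float)
F = np.hstack([np.eye(N), T])       # synthesis matrix [I | T]
print(F)
print(all(abs(np.linalg.det(F[:, c])) > 1e-10
          for c in itertools.combinations(range(N + M), N)))   # expected: True
\end{verbatim}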
It would be useful to construct classes of full spark frames with some additional properties. Given a frame $(x_n)_n$ for a Hilbert space $H$ with the analysis operator $U$, it is well known that $((U^*U)^{-\frac{1}{2}}x_n)_n$ is a Parseval frame for $H$. Since $(U^*U)^{-\frac{1}{2}}$ is an invertible operator, this transformation preserves the full spark property.
Unfortunately, computing the inverse of the positive square root of the frame operator is costly. However, the class of full spark frames described as in Theorem~\ref{2N} in terms of an orthonormal basis is easier to handle.
Observe (as in the above example) that, if $(x_n)_{n=1}^{N+M}$ is a full spark frame and if
$T$ is a matrix introduced by
\eqref{full spark vectors} as in the Theorem~\ref{2N}, then the matrix representation of $(x_n)_{n=1}^{N+M}$ with respect to a basis $(x_n)_{n=1}^N$
is (written in the form of a block matrix) $\left[ \begin{array}{ccc}I&|&T\end{array} \right]$. Moreover, this is precisely the matrix of the synthesis operator $U^*$ of our frame $(x_n)_{n=1}^{N+M}$. If, additionally, $(x_n)_{n=1}^N$ is an orthonormal (e.g.~canonical) basis, this implies that $U^*U=I+TT^*$. An easy computation (which we omit) shows that, in this situation, $TT^*$ is nothing else than $\sum_{n=1}^M\theta_{x_{N+n},x_{N+n}}$. Thus,
\begin{equation}\langlebel{specijalni oblik}
U^*U=I+\sum_{n=1}^M\theta_{x_{N+n},x_{N+n}}.
\end{equation}
In the case $M=1$ there is a simple formula for $(I+\theta_{x,x})^{-\frac{1}{2}}$. The reader can easily check that
\begin{equation}\langlebel{inverz}
\left(I+\theta_{x,x}\right)^{-\frac{1}{2}}=I+\frac{1}{\|x\|^2}\left(\frac{1}{\sqrt{1+\|x\|^2}}-1\right) \theta_{x,x},\quad \forall x\in H.
\end{equation}
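Formula \eqref{inverz} is easy to verify numerically; the following Python sketch (ours, with randomly generated data) checks that the displayed operator squares to $(I+\theta_{x,x})^{-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
theta = np.outer(x, x.conj())      # theta_{x,x} y = <y, x> x
s = np.vdot(x, x).real             # ||x||^2
R = np.eye(4) + (1.0 / s) * (1.0 / np.sqrt(1.0 + s) - 1.0) * theta
print(np.allclose(R @ R @ (np.eye(4) + theta), np.eye(4)))   # expected: True
\end{verbatim}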
For $M>1$ we do not have a formula for the inverse square root of $U^*U$ where
$U : \Bbb C^N \rightarrow \Bbb C^{N+M}$ is such that $U^*U$ is of the form \eqref{specijalni oblik}. However,
it suffices for our purposes to find any invertible operator $R$ on $\Bbb C^N$, not necessarily positive, such that $UR^*$ is an isometry. Then $(Rx_n)_n$ will be a Parseval full spark frame for $H$. Indeed, this is immediate from the observation that the analysis operator of $(Rx_n)_n$ is precisely $UR^*$.
In the theorem that follows we give a finite iterative procedure for constructing such an operator $R$. It turns out that $R$ is a finite product of operators of the form \eqref{inverz}.
\begin{theorem}\langlebel{thm-PArs_spark new}
Let $U : \Bbb C^N \rightarrow \Bbb C^{N+M}$ be an operator such that $U^*U=I+\sum_{n=1}^M\theta_{f_n,f_n}$ for some $f_1,f_2,\ldots,f_M\in \Bbb C^N$, $M\in \Bbb N$. Let
$$f_n^{(0)}=f_n,\quad \forall n=1,\ldots, M, $$
and, for $k=1,\ldots, M$,
\begin{equation}\langlebel{rekurzija_Pars}
f_n^{(k)}=\left(I+\theta_{f_{k}^{(k-1)},f_{k}^{(k-1)}}\right)^{-\frac{1}{2}}f_n^{(k-1)},\quad \forall n=1,\ldots, M.
\end{equation}
Consider the operators
\begin{equation}\langlebel{T_Nk_Mk}
R_{k}^{(k-1)}=\left(I+\theta_{f_{k}^{(k-1)},f_{k}^{(k-1)}}\right)^{-\frac{1}{2}},\quad \forall k=1,\ldots,M,
\end{equation}
and
\begin{equation}\langlebel{operator_T}
R= R_{M}^{(M-1)}R_{M-1}^{(M-2)}\cdots R_{1}^{(0)}.
\end{equation}
Then $R$ is an invertible operator such that $UR^*$ is an isometry.
\end{theorem}
\begin{proof}
Observe first that the recursion formulae \eqref{rekurzija_Pars} can be written as
\begin{equation}\langlebel{rekurzija_Pars_1}
f_n^{(k)}=R_{k}^{(k-1)}f_n^{(k-1)},\quad \forall n=1,\ldots,M,\quad \forall k=1,\ldots, M.
\end{equation}
Notice that the assumed equality $U^*U=I+\sum_{n=1}^M\theta_{f_n,f_n}$ can be written as
$$U^*U\stackrel{\eqref{T_Nk_Mk} }{=}(R_{1}^{(0)})^{-2}+\theta_{f_{2},f_{2}}+\ldots+\theta_{f_{M},f_{M}}.$$
Multiplying from both sides by $R_{1}^{(0)}$ and taking into account that
$$R_{1}^{(0)}\theta_{f_{k},f_{k}}R_{1}^{(0)}=\theta_{R_{1}^{(0)}f_{k},R_{1}^{(0)}f_{k}}\stackrel{\eqref{rekurzija_Pars_1} }{=}\theta_{f_{k}^{(1)},f_{k}^{(1)}}$$
for all $k=2,\ldots,M$, we get
$$R_{1}^{(0)}U^*UR_{1}^{(0)}=I+\theta_{f_{2}^{(1)},f_{2}^{(1)}}+\ldots+\theta_{f_{M}^{(1)},f_{M}^{(1)}}.$$
Again, we write this equality as
$$R_{1}^{(0)}U^*UR_{1}^{(0)}=(R_{2}^{(1)})^{-2}+\theta_{f_{3}^{(1)},f_{3}^{(1)}}+\ldots+\theta_{f_{M}^{(1)},f_{M}^{(1)}}.$$
Now we multiply from both sides by $R_{2}^{(1)}$ and, as above, we get
$$R_{2}^{(1)}R_{1}^{(0)}U^*UR_{1}^{(0)}R_{2}^{(1)} =I+\theta_{f_{3}^{(2)},f_{3}^{(2)}}+\ldots+\theta_{f_{M}^{(2)},f_{M}^{(2)}}.$$
After $M$ steps we get
$$R_{M}^{(M-1)}\cdots R_{2}^{(1)}R_{1}^{(0)}U^*UR_{1}^{(0)}R_{2}^{(1)}\cdots R_{M}^{(M-1)} =I,$$
that is, by \eqref{operator_T},
$$
RU^*UR^*= I.
$$
In other words, $(UR^*)^*(UR^*)=RU^*UR^*=I$, so $UR^*$ is an isometry; moreover, $R$ is invertible as a product of the invertible operators $R_{k}^{(k-1)}$.
\end{proof}
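The proof is constructive, and the following Python sketch (ours) mirrors it: each factor $R_{k}^{(k-1)}$ is built from formula \eqref{inverz}, and the resulting product $R$ is checked to satisfy $RU^*UR^*=I$ on a frame $\left[ \begin{array}{ccc}I&|&T\end{array} \right]$ as discussed before the theorem (here with the $N=3$, $M=2$ data of Example~\ref{novi genericki Fib}).
\begin{verbatim}
import numpy as np

def sqrt_inv_rank_one(f):
    """(I + theta_{f,f})^{-1/2}, computed via formula (inverz)."""
    n = f.shape[0]
    s = np.vdot(f, f).real
    return np.eye(n) + (1.0 / s) * (1.0 / np.sqrt(1.0 + s) - 1.0) * np.outer(f, f.conj())

def parseval_factor(fs):
    """Given f_1, ..., f_M as the columns of fs, return R = R_M^(M-1) ... R_1^(0)."""
    n, M = fs.shape
    f = [fs[:, k].astype(complex) for k in range(M)]
    R = np.eye(n, dtype=complex)
    for k in range(M):
        Rk = sqrt_inv_rank_one(f[k])    # built from the current f_{k+1}^{(k)}
        f = [Rk @ g for g in f]         # update all vectors
        R = Rk @ R
    return R

# Check on the frame [I | T], so that U*U = I + T T*:
T = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
S = np.hstack([np.eye(3), T])           # synthesis matrix; the analysis operator is S*
R = parseval_factor(T)
print(np.allclose(R @ (S @ S.conj().T) @ R.conj().T, np.eye(3)))   # R U*U R* = I
# The columns of R S then form a Parseval (and still full spark) frame.
\end{verbatim}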
As an immediate consequence we get:
\begin{cor}
\langlebel{thm-PArs_spark old}
Let $(x_n)_{n=1}^{N+M}$ be a frame for a Hilbert space $H$ such that $(x_n)_{n=1}^{N}$ is an orthonormal basis for $H.$
Let
\begin{equation}\langlebel{rekurzija_Pars old1}
x_n^{(0)}=x_n,\quad \forall n=1,\ldots, N+M,
\end{equation}
and for $k=1,\ldots, M$
\begin{equation}\langlebel{rekurzija_Pars old2}
x_n^{(k)}=\left(I+\theta_{x_{N+k}^{(k-1)},x_{N+k}^{(k-1)}}\right)^{-\frac{1}{2}}x_n^{(k-1)},\quad \forall n=1,\ldots, N+M.
\end{equation}
The sequence $(x_n^{(M)})_{n=1}^{N+M}$ is a Parseval frame for $H.$ If $(x_n)_{n=1}^{N+M}$ is full spark, then $(x_n^{(M)})_{n=1}^{N+M}$ is also a full spark frame.
\end{cor}
\begin{proof}
Denote by $U$ the analysis operator of $(x_n)_{n=1}^{N+M}$. As we already observed in the discussion preceding Theorem~\ref{thm-PArs_spark new}, the fact that $(x_n)_{n=1}^{N}$ is an orthonormal basis implies that the frame operator $U^*U$ is given by $U^*U=I+\sum_{n=1}^M\theta_{x_{N+n},x_{N+n}}$. We now apply the preceding theorem with $f_n=x_{N+n},\,n=1,2,\ldots,M$.
Having defined $x_n^{(k)}$, $k=0,1,\ldots, M$, $n=1,2\ldots, N+M$, by \eqref{rekurzija_Pars old1} and \eqref{rekurzija_Pars old2} we can write (as in the preceding proof)
\begin{equation}\langlebel{rekurzija_Pars old3}
x_n^{(k)}=R_k^{(k-1)}x_n^{(k-1)},\,\,\forall k=1,2,\ldots,M,\quad\forall n=1,2,\ldots,N+M,
\end{equation}
where
\begin{equation}\langlebel{T_Nk_Mk old}
R_{k}^{(k-1)}=\left(I+\theta_{x_{N+k}^{(k-1)},x_{N+k}^{(k-1)}}\right)^{-\frac{1}{2}},\quad \forall k=1,\ldots,M.
\end{equation}
Put again
$$
R= R_{M}^{(M-1)}R_{M-1}^{(M-2)}\cdots R_{1}^{(0)}.
$$
Then \eqref{rekurzija_Pars old3} can be rewritten as
$$
x_n^{(M)}=R_M^{(M-1)}x_n^{(M-1)}=\ldots=R_M^{(M-1)}R_{M-1}^{(M-2)} \ldots R_1^{(0)}x_n^{(0)}
$$
i.e.
$$
x_n^{(M)}=Rx_n,\,\,\forall n=1,2,\ldots ,N+M.
$$
Since $R$ is surjective, $(x_n^{(M)})_{n=1}^{N+M}$ is a frame for $H$. Since its analysis operator $UR^*$ is an isometry, $(x_n^{(M)})_{n=1}^{N+M}$ is Parseval. Finally, since $R$ is invertible, if $(x_n)_{n=1}^{N+M}$ is a full spark frame,
$(x_n^{(M)})_{n=1}^{N+M}$ is also full spark.
\end{proof}
Here we note that the idea of transforming a general finite frame into a Parseval frame by a finite iterative process is not new. A different approach, inspired by the Gram-Schmidt orthogonalization procedure, can be found in \cite{CKut}.
\begin{example}
Consider a full spark frame $(x_n)_{n=1}^5$ for $\Bbb C^3$ from Example \ref{novi genericki Fib} with $N=3$ and $M=2$.
Recall that the matrix of its synthesis operator $U^*$ in the canonical pair of bases is
$$
U^*=F_{32}=\left[\begin{array}{ccccc}
1&0&0&1&1\\
0&1&0&1&2\\
0&0&1&1&3
\end{array}
\right]
$$
After applying the procedure described in the preceding corollary, we get the following sequence
\begin{eqnarray*}
x_1^{(2)}&=&\frac{5}{6}x_1+\frac{-4-\sqrt{6}}{60}x_2+\frac{2-2\sqrt{6}}{60}x_3\\
x_2^{(2)}&=&-\frac{1}{6}x_1+\frac{44+\sqrt{6}}{60}x_2+\frac{-22+2\sqrt{6}}{60}x_3\\
x_3^{(2)}&=&-\frac{1}{6}x_1+\frac{-28+3\sqrt{6}}{60}x_2+\frac{14+6\sqrt{6}}{60}x_3\\
x_4^{(2)}&=&\frac{1}{2}x_1+\frac{4+\sqrt{6}}{20}x_2+\frac{-1+\sqrt{6}}{10}x_3\\
x_5^{(2)}&=&\frac{\sqrt{6}}{6}x_2+\frac{2\sqrt{6}}{6}x_3,
\end{eqnarray*}
which makes up a full spark Parseval frame for $\Bbb C^3.$
\end{example}
\section*{Concluding remarks}
We have constructed, for every finite set of indices $\{n_1,n_2,\ldots,n_k\}$ that satisfies the minimal redundancy condition for a frame $(x_n)_n$, a dual frame $(v_n)_n$ which satisfies $v_n=0$ for all $n=n_1,n_2,\ldots,n_k$.
This dual frame enables us to reconstruct each signal $x$, using the reconstruction formula $x=\sum_{n=1}^{\infty}\langle x,x_n\rangle v_n$, without recovering (possibly) corrupted coefficients $\langle x,x_{n_1}\rangle$, $\langle x,x_{n_2}\rangle,
\ldots,\langle x,x_{n_k}\rangle$.
The elements of the dual frame $(v_n)_n$ are computed in terms of the canonical dual $(y_n)_n$. In Theorem \ref{main2} each $v_n$, $n\not =n_1,n_2,\ldots, n_k$,
is obtained in the form
$v_n=y_n-\sum_{i=1}^k\alpha_{ni}y_{n_i}$, where the $k$-tuple $(\alpha_{n1}, \alpha_{n2},\ldots,\alpha_{nk})$ is a unique solution of a system of $k$ linear equations in $k$ unknowns, whose matrix is independent of $n$.
In Theorem~\ref{main2-cor1} we have proved that $v_n=(I-\sum_{i=1}^k\theta_{y_{n_i},x_{n_i}})^{-1}y_n$, $n\not =n_1,n_2,\ldots, n_k$.
Related results, Theorem \ref{thm-inverse} and Theorem \ref{tm-LS}, may be of independent interest. In Theorem \ref{thm-inverse} we demonstrated a finite iterative procedure for computing the inverse $(I-\sum_{i=1}^k\theta_{y_{n_i},x_{n_i}})^{-1}$. Theorem \ref{tm-LS}, improving a result from \cite{LS}, provides a closed-form formula for the same inverse operator.
Here we also mention that our dual frame $(v_n)_n$ (as any other dual frame) arises from an oblique projection $F$ to the range of the analysis operator of the original frame $(x_n)_n$. One can derive an explicit formula for $F$ in which the above coefficients $\alpha_{ni}$'s again play a role. One can also prove that there is a related oblique projection $Q$ whose infinite matrix with respect to the canonical basis of $\ell^2$ transforms $(x_n)_n$ into $(v_n)_n$ in the sense of Theorem 4 from \cite{A}. These two results and their proofs might be of some interest, but are omitted since they are not essential for the presentation. The details will appear elsewhere.
We have also discussed properties of frames robust to erasures. In Theorem \ref{2N} we have proved that all full spark frames for finite-dimensional Hilbert spaces are generated by matrices in which all square submatrices are nonsingular. In this light, Theorem \ref{constructing TP} serves as a useful tool for generating infinite totally positive matrices (those that have all minors strictly positive) and, consequently, for generating full spark frames.
Finally, we have obtained in Theorem \ref{thm-PArs_spark new} and Corollary \ref{thm-PArs_spark old} a finite iterative procedure that can be used for transforming general frames to Parseval ones. Remarkably, full spark property is preserved under that transformation. It would be interesting and useful for applications to refine this procedure in such a way that when applied (at least to some specific classes of frames) gives rise to Parseval full spark frames with some additional properties (e.g.~frames with all elements having equal norms).
\end{document}
\begin{document}
\author{Zhao-Ming Wang$^{1}$\footnote{
Email address: [email protected]}, C. Allen Bishop$^{2}$,
Yong-Jian Gu$^{1}$\footnote{
Email address: [email protected]}, and Bin Shao$
^{3}$}
\affiliation{$^{1}$ Department of
Physics, Ocean University of China, Qingdao, 266100, China}
\affiliation{$^{2}$ Department of Physics, Southern Illinois
University, Carbondale, Illinois 62901-4401}
\affiliation{$^{3}$
Department of Physics, Beijing Institute of Technology, Beijing,
100081, China}
\title{Duplex quantum communication through a spin chain}
\begin{abstract}
Data multiplexing within a quantum computer can allow for the simultaneous
transfer of multiple streams of information over a shared medium
thereby minimizing the number of channels needed for requisite data transmission.
Here, we investigate a two-way quantum communication protocol using a spin
chain placed in an external magnetic field. In our scheme, Alice and
Bob each play the role of a sender and a receiver as
two states $\cos (\frac{\theta
_{1}}{2})\left\vert 0\right\rangle+\sin (\frac{\theta
_{1}}{2})e^{i\phi _{1}}\left\vert 1\right\rangle$, $\cos
(\frac{\theta _{2}}{2})\left\vert 0\right\rangle+\sin (\frac{\theta
_{2}}{2})e^{i\phi _{2}}\left\vert 1\right\rangle$ are transferred
through one channel simultaneously. We find that the transmission fidelity
at each end of a spin chain can usually be enhanced by the presence of a
second party. This is an important result for establishing the viability of duplex
quantum communication through spin chain networks.
\end{abstract}
\pacs{03.67.Hk,75.10.Jm}
\maketitle
\section{Introduction}
In classical electronic communications, full-duplex transmission capabilities
allow for the simultaneous sending and receiving of data to and from some
remote host or process. In instances where real-time information transfer
is required between two parties, for example in voice communications and
high-performance distributed computing, full-duplex transmission is desired \cite{tan}.
While two physical twisted-pairs of wires per cable provide the avenue for
duplex transmission classically, there has been no counterpart for such communications in quantum machines. It will be shown, however, that spin chains can be
used as foundational elements of quantum duplex communications. In principle, full-duplex information transfer can be achieved during quantum
computations using the interactions which naturally occur between
neighboring sites of a spin chain.
It was Bose
who first suggested
using an unmodulated spin chain to serve as
a mediator for quantum information
transfer \cite{Bose2003}. The basic idea goes like this: An arbitrary qubit
state is encoded at one end of the chain which then evolves naturally
under spin
dynamics. Later, at some time $\tau$, the state can be received
at the other end with some probability. Although his original proposal
offers the advantage of simplicity, it does not allow for a perfect
state transfer in most situations. In order to improve the fidelity of
transmission an extensive investigation has been made regarding
state or entanglement transfer through
permanently coupled spin chains \cite{Bose2003,Giovanetti2006,Gualdi2008,Wang2007,Wang2009,Allen2010,Durgarth2005,Durgarth2007}.
The transmission fidelity (entanglement) can be
significantly enhanced by means of introducing phase shifts, energy
currents \cite{Wang2007}, or by properly encoding the state over
more than one site \cite{Wang2009,Allen2010}.
There are also methods using two parallel spin chains which allow for a
perfect state transfer (PST) \cite{Durgarth2005,Durgarth2007}. In this
case PST is achieved using measurements at the end
of the chain. Other methods require a single local on-off switch
actuator \cite{Schirmer2009}, a single-spin optimal
control \cite{XiaotingWang2010}, or employ certain classes of random
unpolarized spin chains \cite{Yao2011}. PST in a strongly coupled
antiferromagnetic spin chain has been reported in Ref.~\cite{Oh2011} which
requires weakly coupled external qubits. Furthermore, PST \cite{Wu20091} or perfect function
transfer \cite{Wu20092} can also be realized in a variety of interacting media, including, but not
limited to, the spin chain model.
In most scenarios considered in the literature the communication is assumed to occur
in one direction, i.e. if Alice sends the information Bob plays the role of the receiver.
This type of communication resembles a form of broadcasting where one party sends a signal while the second party simply
``listens''. Although broadcasting quantum information is certainly an important
method of communication, it is by no means the only method needed
for quantum computation. Full scale quantum computing will undoubtedly require multiplexing
between multiple processes and therefore an analysis of the effects of state transmission in
two directions is warranted. Here we study a duplex quantum communication protocol using a spin chain
placed in an external magnetic field. In this case
Alice and Bob each play the role of a sender and a receiver. Unlike the current trend in spin
chain research, our intention is not to improve the quality of state transfer
but rather to investigate how the presence of a second party affects
the other sender's transmission. We focus on the least technically
challenging spin chain model and simply require
local control over the first and last site for the preparation and reception of the
states. We find that in most cases the presence of a second party can significantly enhance the
fidelity of state transmission from the other party thereby allowing for reliable two-way communication.
The paper is organized as follows. In Section II we will describe the
physical model we consider and derive expressions for the communication fidelity. The
numerical results obtained from these expressions will be presented in Section III. Finally,
we will conclude with a summary of our findings in Section IV.
\section{The model}
We depict our scheme in Fig.~1. Alice and Bob are situated at opposite
ends of a one-dimensional array of $N$ spin-1/2 systems. We assume
the chain has been cooled to the ground state
$\left\vert \downarrow_1 \downarrow_2 \hdots \downarrow_N \right\rangle$
prior to the
encoding process, where we have defined the eigenstates of the Pauli
operator $\sigma_z$ to be
$\left\vert \downarrow \right\rangle \equiv \left\vert 0 \right\rangle$ and
$\left\vert \uparrow \right\rangle \equiv \left\vert 1 \right\rangle$.
Alice and Bob then respectively prepare the states $\left\vert \varphi _{1}\right\rangle $ and $\left\vert
\varphi _{2}\right\rangle $ which they intend to send. To simplify matters,
we will assume that both encodings take place simultaneously.
\begin{figure}
\caption{Schematic illustration of our communication protocol.
Alice and Bob respectively encode the
states $\left\vert \varphi _{1}\right\rangle$ and $\left\vert \varphi _{2}\right\rangle$ at opposite ends of the spin chain.}
\label{fig:1}
\end{figure}
\begin{figure}
\caption{(Color online.) The maximum fidelity $F_{\max}$ at Bob's end of an $N=10$ site chain, taken over the interval $t\in[10,50]$, as a function of $\theta_{1}$ and $\theta_{2}$ with $\phi_1=\phi_2=0$, for (a) $h=0$, (b) $h=0.1$ and (c) $h=1.0$.}
\label{fig:2}
\end{figure}
After the states of the spin systems at sites 1 and $N$ have been prepared
the system as a whole will then be allowed to evolve. This evolution
will be generated by
nearest-neighbor XY-type interactions and an externally applied
magnetic field
\begin{equation}
H=-\frac{J}{2}\sum\limits_{i=1}^{N-1}(\sigma _{i}^{x}\sigma
_{i+1}^{x}+\sigma _{i}^{y}\sigma _{i+1}^{y})-h\sum\limits_{i=1}^{N}\sigma
_{i}^{z}.
\end{equation}
We assume a ferromagnetic coupling and take the interaction constant
to be $J=1.0$
throughout. The constant $h$ represents the external magnetic field strength
of a field applied along the $z$ direction and $
\sigma _{i}^{x,y,z}$ denote the Pauli operators acting on spin $i$. We
consider an open ended chain which is perhaps the most natural geometry for a
channel. This Hamiltonian can be
diagonalized by means of the Jordan-Wigner
transformation that maps spins to one-dimensional spinless fermions with
creation operators defined by $c_{l}^{\dag }=\left(\prod\limits_{s=1}^{l-1}(-\sigma _{s}^{z})\right)\sigma _{l}^{+}$. Here $\sigma _{l}^{+}=\frac{1}{2}(\sigma
_{l}^{x}+i\sigma _{l}^{y})$ denotes the spin raising operator at site $l$.
The action of $c_{l}^{\dag }$ is to flip the spin at site $l$ from down to
up. For indices $l$ and $m$, the operators $c_{l}$ and $c_{m}^{\dag }$ satisfy
the anticommutation relations $
\{c_{l},c_{m}^{\dag }\}=\delta _{lm}$. The z-component of the total spin is
a conserved quantity implying the conservation of the total number of
excitations $M = \sum_{l}c_{l}^{\dag }c_{l}$ in the system. The evolution
of the creation operator $c_{j}^{\dag }$ is given by \cite{Amico}
\begin{equation}
c_{j}^{\dag }(t)=\sum_{l}f_{j,l}(t)c_{l}^{\dag },
\end{equation}
where
\begin{equation}
f_{j,l}=\frac{2}{N+1}\sum\limits_{m=1}^{N}\sin (q_{m}j)\sin
(q_{m}l)e^{-iE_{m}t},
\end{equation}
with $E_{m}=2h-2J\cos (q_{m})$ and $q_{m}=\pi m/(N+1).$ When the number of magnon
excitations is more than one, the time evolution of the creation operators is
\cite{Wichterich}
\begin{equation}
\label{Eq:multiple}
\prod\limits_{m=1}^{M}c_{j_{m}}^{\dag }(t)= \!\!\!\!\!\!\!
\sum\limits_{l_{1}<...<l_{M}} \!\!\!\!\! \det
\left\vert
\begin{array}{cccc}
f_{j_{1},l_{1}} & f_{j_{1},l_{2}} & ... & f_{j_{1},l_{M}} \\
f_{j_{2},l_{1}} & f_{j_{2},l_{2}} & ... & f_{j_{2},l_{M}} \\
... & ... & ... & ... \\
f_{j_{M},l_{1}} & f_{j_{M},l_{2}} & ... & f_{j_{M},l_{M}}
\end{array}
\right\vert \prod\limits_{m=1}^{M}c_{l_{m}}^{\dag }
\end{equation}
where $M$ gives the number of excitations. The set \{$j_{1},j_{2},...,j_{M}$
\} denotes the sites where the excitations are created and \{$
l_{1},l_{2},...,l_{M}$\} denotes an ordered combination of $M$ different
indices from $\{1,2,...,N\}$. In this paper we consider
chains which carry no more than two excitations, so we set $M=2$
in the situations when Eq.~(\ref{Eq:multiple}) is used.
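As a small numerical aid (ours, not part of the original text), the single-particle amplitudes $f_{j,l}(t)$ defined above can be evaluated directly; the time used in the example below is the transfer time quoted for an isolated $N=10$ chain in Sec.~III.
\begin{verbatim}
import numpy as np

def f_amp(j, l, t, N, J=1.0, h=0.0):
    """Amplitude f_{j,l}(t) for an open chain of N sites."""
    q = np.pi * np.arange(1, N + 1) / (N + 1)
    E = 2.0 * h - 2.0 * J * np.cos(q)
    return (2.0 / (N + 1)) * np.sum(np.sin(q * j) * np.sin(q * l) * np.exp(-1j * E * t))

# Probability that a single excitation created at site 1 is found at site N:
N = 10
print(abs(f_amp(1, N, 29.2, N)) ** 2)
\end{verbatim}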
To proceed, let us assume that Alice prepares an arbitrary qubit
state $\left\vert \varphi _{1}\right\rangle =$
$\alpha _{1}\left\vert 0\right\rangle +\beta _{1}\left\vert 1\right\rangle $
at the first site while Bob prepares the state $\left\vert \varphi _{2}\right\rangle =$ $\alpha
_{2}\left\vert 0\right\rangle +\beta _{2}\left\vert 1\right\rangle $ at the
$N$th site. The
initial state of the chain is then
\begin{equation}
\label{Eq:initial}
\left\vert \Phi (t=0)\right\rangle =(\alpha _{1}\left\vert 0\right\rangle
+\beta _{1}\left\vert 1\right\rangle )\otimes \left\vert \mathbf{0}
\right\rangle \otimes
(\alpha _{2}\left\vert 0\right\rangle +\beta
_{2}\left\vert 1\right\rangle ).
\end{equation}
The time evolution of $\left\vert \Phi (0)\right\rangle $ will be
\begin{equation}
\left\vert \Phi (t)\right\rangle =[\alpha _{1}\alpha
_{2}+\sum\limits_{i=1}^{N}A_{i}(t)c_{i}^{\dag
}+\sum\limits_{i<i^{\prime}}B_{i,i^{\prime }}(t)c_{i}^{\dag }c_{i^{\prime
}}^{\dag }]\left\vert \mathbf{0}\right\rangle, \label{eq:evo}
\end{equation}
where $A_{i}=\alpha _{1}\beta _{2}f_{N,i}+\beta _{1}\alpha _{2}f_{1,i}$, $
B_{i,i^{\prime }}=\beta _{1}\beta _{2}(f_{1,i}f_{N,i^{\prime
}}-f_{1,i^{\prime }}f_{N,i})$. Although we have used
$\left\vert \mathbf{0}\right\rangle$ interchangeably in these last
two equations, it should be understood that in Eq.~(\ref{Eq:initial})
the notation $\left\vert \mathbf{0}\right\rangle$ is used to refer
to the state
$\left\vert \downarrow_2 \downarrow_3 \hdots \downarrow_{N-1} \right\rangle$
of the channel spins between the first and last site, while in
Eq.~(\ref{eq:evo}) it is used to represent the state
$\left\vert \downarrow_1 \downarrow_2 \hdots \downarrow_{N} \right\rangle$
of the entire chain. We see from Eq.~(\ref{eq:evo}) that the
excitations which are created at the ends of the chain begin to
spread over time, resulting in a probability distribution of appearing
over all sites. Unlike the original one way communication
protocol where at most one excitation exists in the chain, here we can
find excitations at sites $i$ and $i^{\prime}$ ($i \neq i^{\prime}$)
with probability $\left\vert B_{i,i^{\prime }}(t)\right\vert ^{2}$.
Clearly, when $\alpha _{2}=1.0$ and $\beta _{2}=0.0,$ our two-way
communication scheme reduces to the original scenario \cite{Bose2003}.
Now consider the dynamics of the system. The fidelity between the
received state and initial state $\left\vert \varphi_i\right\rangle $
$(i=1,2)$ is defined by $F=\sqrt{\left\langle \varphi_i\right\vert \rho (t)\left\vert \varphi_i\right\rangle },$ where
$\rho (t)$ is the reduced density matrix associated with
the spin state at the receiving position.
In what follows we will assume that both parties measure
their states simultaneously since the measurement process
will necessarily disturb the
state of the system. It would be interesting
to examine the effect of non-simultaneous measurements on state transmission and will be
a subject of later work. For now, with the assumption of simultaneous measurements,
the fidelity at Bob's end $F_{N}$ and Alice's end $F_{1}$ are calculated to be
\begin{widetext}
\begin{equation}
F_{N}=\left[\left\vert \left\vert \alpha _{1}\right\vert ^{2}\alpha _{2}+\beta _{1}^{\ast }A_{N}\right\vert ^{2}+\sum\limits_{i=1}^{N-1}\left\vert \alpha
_{1}^{\ast }A_{i}+\beta _{1}^{\ast }B_{i,N}\right\vert ^{2}+\sum\limits_{i<(i^{\prime }\neq N)}\left\vert \alpha _{1}\right\vert ^{2}\left\vert
B_{i,i^{\prime }}\right\vert ^{2}\right]^{1/2}
\end{equation}
\begin{equation} \label{eq:alice}
F_{1}=\left[\left\vert \left\vert \alpha _{2}\right\vert ^{2}\alpha _{1}+\beta _{2}^{\ast }A_{1}\right\vert ^{2}+\sum\limits_{i=2}^{N}\left\vert \alpha
_{2}^{\ast }A_{i}+\beta _{2}^{\ast }B_{1,i}\right\vert ^{2}+\sum\limits_{(i\neq 1)<i^{\prime }}\left\vert \alpha _{2}\right\vert ^{2}\left\vert
B_{i,i^{\prime }}\right\vert ^{2}\right]^{1/2}
\end{equation}
\end{widetext}
We will analyze the behavior of duplex quantum communication in terms
of these fidelity measures next. We will show that the transmission fidelity
at each end can be significantly enhanced by the presence of the other party.
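These expressions are straightforward to evaluate numerically. The following Python sketch (ours, written under the same simultaneous-measurement assumption) computes $F_N$ and $F_1$ for given Bloch angles and scans the time window $t\in[10,50]$ used below; the quoted comparison value refers to the discussion in Sec.~III.
\begin{verbatim}
import numpy as np

def f_amp(j, l, t, N, J=1.0, h=0.0):
    """Single-particle amplitude f_{j,l}(t) (repeated here for self-containment)."""
    q = np.pi * np.arange(1, N + 1) / (N + 1)
    E = 2.0 * h - 2.0 * J * np.cos(q)
    return (2.0 / (N + 1)) * np.sum(np.sin(q * j) * np.sin(q * l) * np.exp(-1j * E * t))

def fidelities(theta1, phi1, theta2, phi2, t, N, J=1.0, h=0.0):
    """Return (F_N, F_1) as given by the two expressions above."""
    a1, b1 = np.cos(theta1 / 2), np.sin(theta1 / 2) * np.exp(1j * phi1)
    a2, b2 = np.cos(theta2 / 2), np.sin(theta2 / 2) * np.exp(1j * phi2)
    f1 = np.array([f_amp(1, l, t, N, J, h) for l in range(1, N + 1)])
    fN = np.array([f_amp(N, l, t, N, J, h) for l in range(1, N + 1)])
    A = a1 * b2 * fN + b1 * a2 * f1                        # A_i(t)
    B = b1 * b2 * (np.outer(f1, fN) - np.outer(fN, f1))    # B[i, j] = B_{i+1, j+1}(t)
    FN = abs(abs(a1) ** 2 * a2 + np.conj(b1) * A[N - 1]) ** 2
    FN += sum(abs(np.conj(a1) * A[i] + np.conj(b1) * B[i, N - 1]) ** 2
              for i in range(N - 1))
    FN += sum(abs(a1) ** 2 * abs(B[i, j]) ** 2
              for i in range(N) for j in range(i + 1, N) if j != N - 1)
    F1 = abs(abs(a2) ** 2 * a1 + np.conj(b2) * A[0]) ** 2
    F1 += sum(abs(np.conj(a2) * A[i] + np.conj(b2) * B[0, i]) ** 2
              for i in range(1, N))
    F1 += sum(abs(a2) ** 2 * abs(B[i, j]) ** 2
              for i in range(1, N) for j in range(i + 1, N))
    return np.sqrt(FN), np.sqrt(F1)

# Maximal F_N over t in [10, 50] when both parties encode theta/pi = 0.6, phi = 0,
# on an N = 10 chain with h = 0 (compare with the value quoted in Sec. III):
ts = np.linspace(10.0, 50.0, 2001)
print(max(fidelities(0.6 * np.pi, 0.0, 0.6 * np.pi, 0.0, t, 10)[0] for t in ts))
\end{verbatim}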
\section{Results and discussion}
We assume that Alice and Bob respectively prepare the arbitrary qubit
states $\alpha
_{1}\left\vert 0\right\rangle +\beta _{1}\left\vert 1\right\rangle $ and $
\alpha _{2}\left\vert 0\right\rangle +\beta _{2}\left\vert 1\right\rangle$
with each individual state being represented by a
point on a Bloch sphere with $\alpha _{i}=\cos (\frac{\theta
_{i}}{2})$, and $\beta _{i}=\sin (\frac{\theta _{i}}{2})e^{i\phi
_{i}}(i=1,2)$. First we investigate the effect of the polar angles
$\theta_i $ on the transmission fidelity and let
$\phi _{1}=\phi _{2}=0$. In Fig. 2, we plot the maximal fidelity which
can be obtained at
Bob's side of a $N=10$ site chain in a time interval $t\in \lbrack
10,50]$ as a function of the parameters $\theta _{1}$ and $\theta _{2}$.
We exclude the time interval $[0,10]$
because, if Alice and Bob encode similar states, the fidelity is
trivially high near $t=0$ regardless of any actual state transfer. Throughout
the paper, the maximum fidelity $F_{\max }$ is found in the
same time interval. When the chain is isolated from an external field
($h=0$) we find that when $\theta_{1}=\theta_{2}$, i.e. when the two
states are identically prepared, $F_{\max}$ is higher than in the other cases
(see Fig. 2(a)). Although the fidelity is maximized when both
senders encode similar states, any nonzero $
\theta _{2}$ will enhance the fidelity of Alice's transmission when
$h=0$ and $\phi _{1}=\phi _{2}=0$.
For example, when $
\theta _{1}/\pi=0.6$, $F_{\max }=0.67$ for $\theta _{2}=0,$ while $F_{\max
}=0.91$ for $\theta _{2}/\pi=0.6$. Since the chain has
been initialized to the ground state $\left\vert \downarrow_1 \downarrow_2 \hdots \downarrow_N \right\rangle$ before Alice and Bob encode their
states, a $\theta _{2}=0$ encoding at Bob's end amounts to the
same thing as if he were not even present. We can therefore
infer that duplex quantum communication has a positive
impact on a sender's ability to transfer information, at least in the
case where $h=0$ and $\phi _{1}=\phi _{2}=0$. In a way,
Bob's encoding resembles an ``activator'' used in chemistry.
Let us now consider the influence of the external magnetic field.
Fig. 2(b) and (c)
exemplify the weak field $(h=0.1)$ and strong field ($h=1.0$) regimes. In
Fig. 2(b), the behavior of $F_{\max}$ with $\theta_{1}$ and
$\theta_{2}$ is similar to that given in Fig.~2(a), but Fig.~2(c) shows
that the presence of a strong
field can hinder the aforementioned properties as $F_{\max}$ is only
slightly enhanced for some range of $\theta _{2}$. For some values of
$\theta _{2}$, the fidelity of Alice's transmission can actually
decrease, though the decrease is small. For
example, when $\theta _{1}/\pi=0.8$, $F_{\max }=0.79$ for
$\theta_{2}=0$ while $F_{\max }=0.75$ for $\theta _{2}/\pi=0.35$.
The fidelity of Bob's transmission can be explicitly calculated
from Eq.~(\ref{eq:alice}). The results will be similar to those above due to
symmetry hence we only consider Alice's state transfer here.
\begin{figure}
\caption{(Color online.) The maximum fidelity $F_{\max}$ of Alice's state transfer as a function of the phase difference $\delta\phi=\phi_2-\phi_1$ for fixed $\theta_{1}$ and $\theta_{2}$, for (a) $h=0.0$, (b) $h=0.1$ and (c) $h=1.0$.}
\label{fig:3}
\end{figure}
We now consider the effect of the phase angles $\phi_i$. A numerical calculation shows that
$F_{\max }$ only depends on the difference of the parameters $\phi
_{1}$, and $\phi _{2}$. In Fig.~3, we plot the maximum fidelity $F_{\max }$ as a function of the difference $\delta \phi = \phi_2 -\phi_1$
for fixed parameters $\theta_{1}$ and $\theta_{2}$. Again, we consider the
vanishing field ($h=0.0$), weak field ($h=0.1$), and strong field ($h=1.0$) regimes.
We find that for an isolated chain ($h=0.0$) the fidelity of Alice's state transfer
will be greatest when the difference $\delta \phi \approx k\pi$ for $(k=-2,-1,0,1,2)$. When
the difference $\delta \phi/ \pi \approx -1.5,-0.5,0.5,1.5$ the maximum fidelity which can be obtained
in the time interval $t\in[10,50]$ will be minimized with respect to $\delta \phi$.
When an external field is applied to the chain it is more difficult
to assess the behavior of state transfer as can be seen in
Fig. 3(b) and (c). However, in all three cases we find that if Alice and
Bob both choose $\theta _{1}=\theta _{2}$ the fidelity will be greater when
compared to other values of the parameters $\theta_i$.
\begin{figure}
\caption{(Color online.) The maximum fidelity $F_{\max}$ of Alice's state transfer, with and without Bob's encoding, as a function of the chain length $N$ for (a) $h=0.0$ and (b) $h=1.0$. Alice's state is $\frac{1}{2}\left\vert 0\right\rangle + \frac{\sqrt{3}}{2}\left\vert 1\right\rangle$ in both panels.}
\label{fig:4}
\end{figure}
In the analysis above, we have only considered a $N=10$ site chain.
We now study the length dependence of the maximal fidelity. In Fig.~4
we compare the success of Alice's state transfer with and without
Bob's encoding for various chain lengths. In the figures, the horizontal axes represent the number of sites
$N$ and we have selected Alice's state to be $1/2 \left\vert 0\right\rangle + \sqrt{3}/2 \left\vert 1\right\rangle$ in
both figures.
For a given $N$, the maximum fidelity $F_{\max }$ is determined numerically in a range $
\theta _{2}\in \lbrack 0,\pi ]$ and $\phi _{2}\in \lbrack 0,2\pi ]$. Fig.~4(a) and (b) correspond
to the cases $h=0.0$ and $h=1.0$, respectively. The plots reveal several interesting features.
First of all, we find that the maximum fidelity of Alice's state transfer is generally enhanced
when Bob encodes an appropriate state, regardless of the presence or absence of an external field.
Our numerical results show that when $N=5$, a near
perfect state transfer ($F_{\max }\approx 1$) can be obtained in many different cases. Secondly, when $h=0$ (Fig. 4(a)),
there are particular chain lengths for which the maximum fidelity is independent of Bob's presence, namely
chains which have $N= 4n+5$ $(n=0,1,2,...)$ sites (except for the slight deviation at $N=13$). This
property is lost however when an external field is applied. For instance, when $h=1.0$ Alice can
obtain a higher quality state transfer with Bob's presence for all chain lengths we consider except
for $N=18, 21$ (see Fig.~4(b)). Finally, for
any practical communication protocol it is important to know the
time $\tau$ at which the fidelity gains its maximum. As an example, for an isolated ($h=0.0$) chain
consisting of $N=10$ sites we find that Alice's transmitted state reaches a maximum fidelity
at time $\tau = 23.1$ when Bob encodes a similar state while $\tau = 29.2$ when Bob
is not present. When $h=1.0$,
$\tau = 23.0$ with Bob's encoding, and $\tau = 29.1$ without
Bob's encoding. Thus when Bob encodes an appropriate state Alice can obtain both a higher
transmission fidelity as well as an increased transfer speed.
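The comparison underlying Fig.~\ref{fig:4} can be mimicked with the fidelity sketch given at the end of Sec.~II; the following Python fragment (ours, with deliberately coarse grids over $\theta_2$, $\phi_2$ and $t$ to keep the run short) contrasts Alice's best fidelity with and without Bob's encoding for a single chain length.
\begin{verbatim}
import numpy as np

# Reuses fidelities() from the sketch at the end of Sec. II.
theta1 = 2 * np.pi / 3                  # Alice's state (1/2)|0> + (sqrt(3)/2)|1>
N, h = 10, 0.0
ts = np.linspace(10.0, 50.0, 401)
no_bob = max(fidelities(theta1, 0.0, 0.0, 0.0, t, N, h=h)[0] for t in ts)
with_bob = max(fidelities(theta1, 0.0, th2, ph2, t, N, h=h)[0]
               for th2 in np.linspace(0.0, np.pi, 13)
               for ph2 in np.linspace(0.0, 2 * np.pi, 13)
               for t in ts)
print(no_bob, with_bob)
\end{verbatim}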
\section*{CONCLUSION}
In conclusion, we have investigated the effects of duplex quantum communication
through an unmodulated spin chain.
A sophisticated quantum computer will undoubtedly rely on data multiplexing
so it is an important task to establish potential avenues for two-way
communication. We have shown that spin chains are indeed viable candidates
for this purpose. Specifically, we have shown that the transmission fidelity
at each end of a spin chain can usually be enhanced by the presence of a second party.
Our initiative opens the door for a broad investigation of multi-party communication through
spin networks and may find applications in many experimental
systems such as quantum dots \cite{Petta2005}, optical
lattices \cite{Simon2011}, or NMR \cite{Zhang2005,Cappellaro2007}.
\section*{ACKNOWLEDGMENTS}
This material is based upon work supported by the National Science
Foundation under Grant No. 11005099 and by ``the
Fundamental Research Funds for the Central Universities'' under Grant
No. 201013037.
\begin{references}
\bibitem{tan} A.S. Tanenbaum, {\it{Computer Networks, 3rd Ed.}}, (Prentice Hall PTR, 1996).
\bibitem{Bose2003} S. Bose, Phys. Rev. Lett. \textbf{91}, 207901 (2003); Contemp. Phys. \textbf{48}, 13 (2007).
\bibitem{Giovanetti2006} V. Giovannetti and D. Burgarth, Phys. Rev. Lett. \textbf{96}, 030501 (2006).
\bibitem{Gualdi2008} G. Gualdi, V. Kostak, I. Marzoli, and P. Tombesi, Phys. Rev. A
\textbf{78}, 022325 (2008).
\bibitem{Wang2007}Z.-M. Wang, B. Shao, P. Chang and J. Zou, J. Phys. A, \textbf{40} 9067 (2007); Physica A
\textbf{387}, 2197 (2008).
\bibitem{Wang2009}Z.-M. Wang, C.A. Bishop, M.S. Byrd, B. Shao, and J. Zou, Phys. Rev. A \textbf{80}, 1 (2009).
\bibitem{Allen2010} C.A. Bishop, Y.-C. Ou, Z.-M. Wang, and M.S. Byrd,
Phys. Rev. A \textbf{81}, 042313 (2010).
\bibitem{Durgarth2005} D. Burgarth, S. Bose, Phys. Rev. A \textbf{71}, 052315 (2005).
\bibitem{Durgarth2007} D. Burgarth, V. Giovannetti, and S. Bose, Phys. Rev. A \textbf{75}, 062327 (2007).
\bibitem{Schirmer2009} S.G. Schirmer and P.J. Pemberton-Ross, Phys.
Rev. A \textbf{80}, 030301(R) (2009).
\bibitem{XiaotingWang2010} X. Wang, A. Bayat, S.G. Schirmer, and S. Bose, Phys. Rev. A \textbf{81}, 032312 (2010).
\bibitem{Yao2011} N.Y. Yao, L. Jiang, A.V. Gorshkov, Z.-X. Gong, A. Zhai, L.-M. Duan, M.D.
Lukin, Phys. Rev. Lett. \textbf{106}, 040505 (2011).
\bibitem{Oh2011} S. Oh, L.-A. Wu, Y.P. Shim, M. Friesen, and X. Hu,
arXiv:1102.0762.
\bibitem{Wu20091} L.-A. Wu, Y.-X. Liu, and F. Nori, Phys. Rev. A \textbf{80},
042315(2009).
\bibitem{Wu20092} L.-A. Wu, A. Miranowicz, X.B. Wang, Y.-X. Liu and F. Nori, Phys. Rev. A \textbf{80}, 012332(2009).
\bibitem{Amico} L. Amico and A. Osterloh, J. Phys. A \textbf{37}, 291 (2004);L. Amico
and A. Osterloh, Phys. Rev. A \textbf{69}, 022304 (2004).
\bibitem{Wichterich} H. Wichterich and S. Bose, Phys. Rev. A \textbf{79}, 060302(R) (2009).
\bibitem{Petta2005} J.R. Petta et al., Science \textbf{309}, 2180 (2005).
\bibitem{Simon2011} J. Simon et al., arXiv:1103.1372.
\bibitem{Zhang2005} J. Zhang, G.L. Long, W. Zhang, Z. Deng, W. Liu, and Z. Lu, Phys. Rev. A \textbf{72}, 012331 (2005).
\bibitem{Cappellaro2007} P. Cappellaro, C. Ramanathan, and D.G. Cory, Phys. Rev. Lett. \textbf{99}, 250506 (2007).
\end{references}
\end{document}
\begin{document}
\title{Fast Quantum Computing with Buckyballs}
\author{Maria Silvia Garelli and Feodor V Kusmartsev }
\date{18.07.2005}
\affiliation{Department of Physics, Loughborough University, LE11 3TU, UK}
\begin{abstract}
We have found that spin-carrying atoms encapsulated in fullerene molecules can be used for fast quantum computing. We describe the scheme for performing quantum computations, going through the preparation of the qubit state and the realization of a two-qubit quantum gate. When we apply a static magnetic field to each encased spin, we obtain the ideal design for the preparation of the quantum state. Then, adding a time dependent magnetic field to our system, we can perform a phase gate. The operational time related to a $\pi$-phase gate is of the order of $ns$. This finding shows that, during the decoherence time, which is of the order of $\mu s$, we can perform many thousands of gate operations. In addition, the two-qubit state which arises after a $\pi$-gate is characterized by a high degree of entanglement. This opens a new avenue for the implementation of fast quantum computation.
\end{abstract}
\pacs{03.67.-a, 03.67.Lx, 61.48.+c}
\maketitle
\section{Introduction}
In the study of systems for the realization of quantum gates, a great interest is addressed to encoding qubits in spins. The most remarkable property of both nuclear and electronic spin is their long decoherence time, which allows them to be the most promising objects for quantum manipulations. The first studies about quantum computing via spin-spin interaction were based on the Nuclear Magnetic Resonance (NMR) technique \cite{Gershenfeld,Schmidt,Leibfried,Nielsen},
but these systems show a very limited scalability in the number of qubits.
Solid state devices were found to be suitable for building up scalable spin-based quantum computers \cite{Kane,Loss,Div}.
In this work we focus on the realization of a spin-based quantum gate, considering a system composed of endohedral fullerene molecules. The qubit is encoded in the electron spin of the encased atom in each fullerene molecule. Many studies about the physics of endohedrally doped fullerenes have been performed \cite{Harneit,Harneit1,Feng}. In our study we borrow many ideas from these previous papers, but we use a different approach for performing the gate operation and for creating entangled states. Our main target is to achieve a very short operational time for such tasks.
The chemical and physical properties of an endohedral fullerene molecule, see Refs. \cite{Greer,Harneit,Suter,Twam}, are very remarkable. Any charge inside these molecules is completely screened, and the fullerene can be considered as a Faraday cage which traps the encased atom. Moreover, considering the $N@C_{60}$, the nitrogen atom sits in the center of the fullerene cage and it preserves all the characteristics of the free nitrogen together with a lower reactivity. Indeed, this buckyball is stable also at room temperature. In addition, the mutual interaction between two spins in adjacent buckyballs, is dominated by the spin dipole-dipole interaction, while the exchange interaction vanishes. The most relevant feature which is required for a reliable quantum computation, is the long decoherence time of spins trapped in fullerenes.
These endohedral systems are
typically characterized by two relaxation times. The first is
$T_1$, which is due to the interactions between a spin and the
surrounding environment. The second relaxation time is $T_2$ and it is due to
the dipolar interaction between the qubit encoding spin and the
surrounding endohedral spins randomly distributed in the sample. While
$T_1$ is dependent on temperature, $T_2$ is practically
independent of it. The experimental measure of the two relaxation
times shows that $T_1$ increases with decreasing temperature from
about $100\mu s$ at $T=300 K$ to several seconds below $T=5K$, and
that the value of the other relaxation time, $T_2$, remains constant, that is $T_2\simeq 20\mu s$ \cite{Knorr1,Knorr2}. It is thought that the value of $T_2$ can be
increased, if it is possible to design a careful experimental
architecture, which could screen the interaction of the spins with the
surrounding magnetic moments.
Actually, the \emph{peapod} is the most promising setup for trapping buckyballs \cite{Briggs1}, and for getting better decoherence times \cite{Briggs,site1,site2}. In such a peapod, different ordered phases of fullerenes, which could be relevant to quantum computation, have been observed \cite{Briggs2}.
We studied the realization of a fast quantum computation. We focused on the implementation of a two-qubit
quantum \emph{$\pi$-gate}, which is a generalization of the
\emph{phase gate}. The theoretical study related to the realization of the $\pi-$gate is treated in our previous work, \cite{Garelli}. In contrast with our previous work here we found a new setup which allows us to perform many quantum operations characterized by a very short operational time. We show that during such short time we were able to create a two-qubit operation like a $\pi-$gate and a highly entangled state. We have also designed the ideal preparatory and final setup needed before and after the realization a two-qubit gate.
In this setup, before performing the two-qubit operation, we initially apply a static magnetic field in the z-direction in order to lift the energy degeneracy of each of the two spin-$\frac{1}{2}$ particles. Moreover, this design is suitable as a starting configuration for the following two-qubit operation. This configuration can also be adopted at the end of the gate operation, in order to preserve the result obtained through its computation. To perform the gate, we apply an additional microwave magnetic field, again in the z-direction. The spin dipole-dipole interaction of the two-qubit system, controlled by the added static and microwave fields, enables the system to realize the desired $\pi$-gate. As mentioned above, the main result of our study
is the gate time, i.e. the time required by the system in order
to complete a $\pi$-gate operation. The value obtained for the gate time through numerical computation is
$\tau\simeq1.6\,ns$, which is about four orders of magnitude smaller than
the shortest relaxation time, $T_2$ \cite{Knorr1,Knorr2}. Comparing $\tau$ and $T_2$, we found that it
is theoretically possible to realize many thousands of basic gate
operations before the system decoheres. We also found that the two-qubit state arising during the gate operation is characterized by a high degree of entanglement. Evaluating the \emph{concurrence} of the two-qubit state, see Ref. \cite{Wootters}, we see that, at the gate time, $\tau$, the corresponding value of concurrence is $C\simeq 0.9$. This value is close to the maximum value for the concurrence, i.e. $C=1$, therefore our two-qubit state reaches a good degree of entanglement at the end of the $\pi-$gate operation.
Therefore, we can perform a $\pi-$gate in a remarkable short time, and, during this time interval, the two- qubit state acquires a high level of entanglement, which allows it to be a good candidate for carrying quantum information.
\section{Realization of a $\pi-$gate}
Our system consists of two spins, which interact with a static
magnetic field. By applying a static magnetic field oriented in the
$z$ direction we get splitting of the
spin z component into the spin-up and spin-down components, which is due to the Zeeman effect. The
energy difference between these two levels give the resonance
frequency of the particle. If we apply a static magnetic
field to the whole sample, all the particles will have the same
resonance frequency. If we put these buckyballs in a frequency resonator, each buckyball must have its own resonance frequency, in order to be individually addressed and manipulated. This arrangement leads to the most relevant
experimental disadvantage for systems composed by arrays of
buckyballs, which is the difficulty in the individual addressing
of each qubit. Although the single addressing of each qubit is not strictly relevant for the realization of our two-qubit gate, it is useful however for performing single qubit operations. Indeed, for the realization of this type of operation, we need to be able to distinguish qubits, in order to act on them independently.
In order to overcome the problem of addressing a single qubit, external field gradients, which
can shift the electronic resonance frequency of the qubit-encoding
spins, can be used \cite{Suter}. With the use of atom chips, thin wires
can carry a current density of more than $10^7 A/cm^2$ \cite{Groth}, therefore we can realize the desirable magnetic field gradients \cite{Harneit1,Garelli}.
The values of the arising magnetic field amplitudes on our two spins are $B_{g_1}=3.73\times 10^{-5}T$ and $B_{g_2}=-3.73\times 10^{-5}T$, for the left and right spin respectively.
Choosing a static magnetic field along the z direction, and neglecting the exchange interaction between the two spins \cite{Greer,Waiblinger,Harneit}, the Hamiltonian of our system is the following
\begin{equation}\label{hamtind}
\begin{array}{ll}
H&=g(r)[\vec{\hat\sigma}_1\cdot
\vec{\hat\sigma}_2-3(\vec{\hat\sigma}_1\cdot\vec n)(\vec{\hat\sigma}_2\cdot\vec
n)]\\
&-\mu_B[((B_{z}+B_{g_1})\hat\sigma_{z_1})\otimes
I_2\\
&+I_1\otimes((B_{z}+B_{g_2})\hat\sigma_{z_2})],
\end{array}
\end{equation}
where $g(r)=\gamma_1 \gamma_2 \frac{\mu_0 \mu_{B}^{2}}{4 \pi r^3}$, $\mu_0$ is the vacuum permeability, $\mu_B$ is the Bohr magneton and $\vec{\hat\sigma}_{1,2}$ are the spin matrices. Choosing $\gamma_{1}=\gamma_2=2$ for the gyromagnetic ratio, we obtain $g(r)=\frac{\mu_0 \mu_{B}^{2}}{\pi r^3}$.
Whenever we are performing a quantum gate, we have to be able to stop the gate operation. Since our gate is due to the time evolution of the system, we have to find a way to slow down the interaction process after a $\pi-$gate is performed. Static magnetic fields cannot be switched off immediately. Because of the circuit inductance, the static field vanishes slowly in comparison to the gate time we are looking for. On the other hand, time dependent oscillating fields, like microwave fields, can be switched off promptly. Therefore, we have to look for the best microwave field, which will dominate the time evolution of the quantum state of the system.
\subsection{Preparatory Configuration}
At first, we consider the case where, in addition to terms $B_{g_1}$ and $B_{g_2}$, we only apply a static magnetic field, $B_z$, in the z direction to our system. To check the time evolution of the phase $\theta$ we need to solve the Schr{\"o}dinger equation, in which the Hamiltonian is given by Eq. (\ref{hamtind}), and the time evolved wave function of the system is
\begin{equation}\label{wave}
\begin{array}{lll}
\mid \psi(t)\rangle&=c1(t)\mid 00\rangle+c2(t)\mid 01\rangle\\
&+c3(t)\mid 10\rangle+c4(t)\mid 11\rangle.
\end{array}
\end{equation}
Therefore, we obtain the following system of four differential equations
\begin{eqnarray}\label{systemTindip1}
\dot c1(t)&=&-\imath[(g(r)+m_1)c1(t)-3g(r)c4(t)];\\
\dot c2(t)&=&-\imath[(-g(r)+m_2)c2(t)-g(r)c3(t)];\\
\dot c3(t)&=&-\imath[
-g(r)c2(t)+(-g(r)-m_2)c3(t)];\\\label{systemTindip4} \dot
c4(t)&=&-\imath[-3g(r) c1(t)+(g(r)-m_1)c4(t)],
\end{eqnarray}
whose solution gives the phases acquired by each computational basis state during the time evolution. According to the formula which allows us to evaluate a phase-gate, see Ref. \cite{Garelli}, we can evaluate phase $\theta$ as follows
\begin{equation}\label{theta1}
\begin{array}{ll}
\theta &=Arg(c1(t))-Arg(c2(t))\\
&-Arg(c3(t))+Arg(c4(t)),
\end{array}
\end{equation}
where $Arg(ci(t))$, $i=1,\ldots,4$, are the phases of the complex coefficients related to each basis state.
Choosing $B_{z}=5\times 10^{-4}T$, we obtain the result of the numerical computation for phase $\theta$, see Fig. \ref{static}.
\begin{figure}
\caption{Evolution of the phase $\theta$ with time, $t$. The two-spin system is subjected to the static field $B_z$ and to the two amplitudes $B_{g_1}$ and $B_{g_2}$.}
\label{static}
\end{figure}
In this picture the time interval extends up to $t=20\,ns$, and we can see that in this time range the phase $\theta$ shows small oscillations about zero and remains very far from $\pi$. These phase oscillations are negligible in comparison with a $\pi$-gate, and we can consider the behaviour of the phase $\theta$ as approximately constant during this time. We also checked the concurrence of the two-qubit state. Using the following formula
\begin{equation}\label{concnorm}
C(\psi)=\frac{2\mid c2^*c3^*-c1^*c4^*\mid}{\mid c1\mid^2+\mid
c2\mid^2+\mid c3\mid^2+\mid c4\mid^2}
\end{equation}
whose derivation can be found in Ref. \cite{Garelli}, we performed a numerical evaluation of the concurrence. The concurrence is a non-negative, time-dependent function taking values between $0$ and $1$, which quantifies the degree of entanglement of a two-qubit state. When the concurrence is minimal, $C=0$, the corresponding quantum state is separable. When it reaches its maximum value, $C=1$, the corresponding state is maximally entangled. If each qubit in a maximally entangled state undergoes a spin-flip transformation, the total state does not change at all; therefore such a state is the best candidate for carrying quantum information. In Fig. \ref{ctind}, the time evolution of the concurrence over a time interval of $20\,ns$ is presented.
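Continuing the numerical sketch above, the concurrence of Eq. (\ref{concnorm}) can be evaluated along the computed trajectory with a small helper of the following form (again an illustrative snippet, not our actual code):
\begin{verbatim}
# Hedged sketch: concurrence of Eq. (concnorm) from the coefficients c1..c4.
import numpy as np

def concurrence(c1, c2, c3, c4):
    num = 2.0 * np.abs(np.conj(c2)*np.conj(c3) - np.conj(c1)*np.conj(c4))
    den = np.abs(c1)**2 + np.abs(c2)**2 + np.abs(c3)**2 + np.abs(c4)**2
    return num / den

# e.g. C = concurrence(*sol.y) gives C(t) along the trajectory computed above.
\end{verbatim}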
\begin{figure}
\caption{Dependence of concurrence, $C$, on time, $t$. The time range is up to $20 ns$.}
\label{ctind}
\end{figure}
We can see that in this time range, not only does the phase $\theta$ show negligible oscillations, but the concurrence is also characterized by small deviations about zero. Therefore, the concurrence is well approximated by a constant function nearly equal to zero. This finding enables us to conclude that the two-qubit state acquires a negligible degree of entanglement during the time evolution, when the system is subjected only to the chosen static field $B_z$.
This setup is well suited as the starting and final configuration, corresponding to the state before and after the realization of the $\pi-$gate. Indeed, we need the two-qubit state not to be entangled before the two-qubit gate operation is started. Moreover, this setup can also be applied at the end of the gate operation. Indeed, when the system is subjected to the static field $B_z$ and to the two amplitudes $B_{g_1}$ and $B_{g_2}$, the phase $\theta$ remains approximately constant and the two-qubit state preserves its degree of entanglement. This is a way to preserve the result obtained through the computation of the two-qubit operation. It is important to note that if the two-buckyball system is allowed to evolve without any applied magnetic field, the interaction reduces to the mutual dipole-dipole spin interaction alone. In this arrangement, the time evolution yields a phase $\theta$ constantly equal to zero; therefore the quantum state which describes the system never becomes entangled.
\subsection{Setup for Performing a $\pi-$gate}
We will now show the setup for the realization of the $\pi-$gate, and the way in which we can create an entangled state. We keep the configuration of the previous setup, but we also apply a microwave field or a time oscillating magnetic field oriented in the z direction, of the form $B(t)=B_t \cos (\omega t)$. Therefore, the Hamiltonian of the system is
\begin{equation}\label{hamtdip}
\begin{array}{ll}
H&=g(r)[\vec{\hat\sigma}_1\cdot
\vec{\hat\sigma}_2-3(\vec{\hat\sigma}_1\cdot\vec n)(\vec{\hat\sigma}_2\cdot\vec
n)]\\
&-\mu_B[((B_{z}(t)+B_{g_1})\hat\sigma_{z_1})\otimes
I_2\\
&+I_1\otimes((B_{z}(t)+B_{g_2})\hat\sigma_{z_2})],
\end{array}
\end{equation}
where $B_{z}(t)=B_z+B(t)$ is the total magnetic field in the z direction. Solving the Schr{\"o}dinger equation with the Hamiltonian given by Eq. (\ref{hamtdip}), the phases related to each basis state are derived as in the time independent case. Manipulating these phases as shown in Ref. \cite{Garelli}, through the requirement
\begin{equation}
\theta=\pm\pi,
\end{equation}
the gate time (i.e. the time taken by phase $\theta$ to reach a value equal to $-\pi$ or $+\pi$) is evaluated.
After many numerical trials, the optimal choice for the amplitude and the frequency of the time dependent field is $B_t=2\times 10^{-1}T$ and $\omega=15.5 GHz$, respectively. The evolution of $\theta$ is shown in Fig. \ref{microz}. The gate time, corresponding to $\theta=-\pi$, is $\tau=1.56 ns$. In Fig. \ref{microz}, the phase evolution is not smooth, but shows small jumps.
\begin{figure}
\caption{$\pi-$gate: time evolution of phase $\theta$. The gate time, corresponding to $\theta=-\pi$, is $\tau\simeq1.56 ns$.}
\label{microz}
\end{figure}
These small jumps are due to the oscillation period of frequency $\omega$ of the microwave field.
The most interesting result is finding the time evolution of the concurrence, see Fig. \ref{cmicroz}.
\begin{figure}
\caption{Time evolution of the concurrence. At the gate time, $\tau\simeq 1.56 ns$, the concurrence value is $C\simeq0.90$.}
\label{cmicroz}
\end{figure}
The concurrence is $C(t)=0.90$ at the gate time, $\tau$. Since the concurrence is close to its maximum value, we can say that, at the end of the gate operation, the two-qubit state is strongly entangled. Therefore, the final state of our $\pi-$gate is reliable for carrying quantum information. The time evolution of the concurrence shows small jumps, always due to the oscillation period of $\omega$, as in the case of phase evolution.
\section{Conclusions}
Our study has been focused on the realization of a high speed $\pi-$gate, which is a particular choice of a two-qubit phase gate. We have also looked for the best configuration of the system, before and after the implementation of the $\pi-$gate.
Our quantum computation is performed through the evolution of a system composed of two spin-$\frac{1}{2}$ particles.
First, we have investigated the optimal setup for the preparatory configuration. To encode the qubit in each spin, we applied static magnetic fields oriented in the z direction. This procedure causes each spin to undergo the Zeeman splitting of the spin z-component. The two-level systems, which arise as a result, represent individual qubits. By applying small amplitude magnetic fields, as discussed in the previous section, we showed that the concurrence value of the two-qubit preparatory state is approximately $C\simeq 0$, see Fig. \ref{ctind}. This result shows that our preparatory state does not become entangled when it is subjected only to static fields, at least for the characteristic time of interest. However, it could get entangled after a very long time. Therefore, we can assume that the starting state in the computation of the $\pi-$gate is approximately not entangled. This is the ideal initial condition for the implementation of the two-qubit gate.
Next, we investigated the realization of a quantum phase-gate. The phase gate is performed simply by switching on a microwave field oriented in the z direction, which has some optimal amplitude. Since the time dependent field can be turned on and off promptly, the phase gate operation can be controlled and stopped at any time. Therefore, any phase gate can be realized, e.g. $\pm\frac{\pi}{4}$, $\pm\frac{\pi}{2}$, $\pm\pi$. The only quantity of interest in the computation of a phase gate is the phase $\theta$, see Eq. (\ref{theta1}). For the realization of the $\pi-$gate, we require $\theta=-\pi$. The action of only static fields on the system results in a practically constant phase $\theta$, see Fig. \ref{static}. Therefore the implementation of the $\pi-$gate is limited to the time range between the switching on and off of the time dependent magnetic field. The operational time needed to perform the $\pi-$gate is $\tau\simeq 1.6\,ns$, which is remarkably short.
At the end of the gate operation we should be able to preserve its result, that is $\theta=-\pi$. When we turn off the time dependent field and allow the system to evolve only under the action of static fields, the phase $\theta$ again shows a constant behaviour, as it did before the phase gate was applied. This is a convenient and reliable arrangement to preserve the final result obtained in the gate operation, and it serves as a good preparatory state for the next quantum computation. Moreover, this setup enables the two-qubit state to retain the characteristics it acquired at the end of the gate operation. Since the $\pi-$gate operation enables the two-qubit state to become strongly entangled, see Fig. \ref{cmicroz}, by switching off the time dependent field and subjecting the state to the same static fields only, it will preserve its degree of entanglement in time.
Finally, our scheme for a quantum computation will allow the system, consisting of two spins, to perform a fast two-qubit gate and to create a very robust entangled state. If we have a system with more than two spins, we can create an entangled state between all spins by applying this two-qubit gate in sequence to any pair of spins. Furthermore, such a state could be used in other quantum computations, e.g. quantum error correction codes.
\section{Acknowledgments}
The authors are grateful and thankful to Andrew Briggs and Jason Twamley for their very helpful discussions. Many thanks to Debbie Dalton and Neil Lindsey for their help with corrections and computer related troubles. MSG is pleased to thank Giuseppe Giordano for his never-ending and much valuable moral support and friendship.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We prove explicit upper and lower bounds for the Poisson hierarchy, the averaged $L^1$-moment spectra $\{\dfrac{\mathcal{A}_k\left(B_R^M\right)}{\text{vol}\left(S_R^M\right)}\}_{k=1}^\infty$, and the torsional rigidity $\mathcal{A}_1(B^M_R)$ of a geodesic ball $B^M_R$ in a Riemannian manifold $M^n$ which satisfies that the mean curvatures of the geodesic spheres $S^M_r$ included in it, (up to the boundary $S^M_R$), are controlled by the radial mean curvature of the geodesic spheres $S^\omega_r(o_\omega)$ with the same radius centered at the center $o_\omega$ of a rotationally symmetric model space $M^n_\omega$. As a consequence, we prove a first Dirichlet eigenvalue $\lambda_1(B^M_R)$ comparison theorem and show that equality with the bound $\lambda_1(B^\omega_R(o_\omega))$, (where $B^\omega_r(o_\omega)$ is the geodesic $r$-ball in $M^n_\omega$), characterizes the $L^1$-moment spectrum $\{\mathcal{A}_k(B^M_R)\}_{k=1}^\infty$ as the sequence $\{\mathcal{A}_k(B^\omega_R)\}_{k=1}^\infty$ and vice-versa.
\end{abstract}
\section{Introduction}\label{sec:intro}\
Let $(M^n,g)$ be a complete Riemannian manifold. We shall consider the Brownian motion $X_t$ in $M$ and, given $x \in M$, its associated family of probability measures $\mathbb{P}^x$ in the space of Brownian paths emanating from a point $x \in M$.
Given a smoothly bounded precompact domain $D \subseteq M$, the first exit time from $D$ is given by the quantity
$$\tau_D:= \inf \{t \geq 0 : X_t \notin D\}\, .$$
Given $x \in D$, the function $E_D: D \rightarrow \mathbb{R}$ that assigns to the point $x$ the expectation of the first exit time $\tau_D$ with respect to $\mathbb{P}^x$ is the {\em mean exit time function from $x$}, $E_D(x)$. We have the following characterization, (see \cite{Dy}), of this function as the solution of a second order PDE with Dirichlet boundary data:
\begin{equation}\label{eq:moments1}
\begin{split}
\Delta^M E_D+1 &=0,\,\text{ in }D,\\
\left.E_D\right|_{\partial D} & =0,
\end{split}
\end{equation}
\noindent where $\Delta^M$ denotes the Laplace-Beltrami operator on $(M^n,g)$.
The mean exit time function is the first in a sequence of functions $\{E=u_{1,D}\;, u_{2,D}\;, \dots\}$ defined on $D \subseteq M$ inductively as follows
\begin{equation}\label{poisson}
\begin{split}
\Delta^M u_{1,D}+1 & =0,\,\text{ on }D,\\
u_{1,D}\lvert_{_{\partial D}} & = 0,
\end{split}
\end{equation}
\noindent and, for $k\geq 2$,
\begin{equation}\label{poissonk}
\begin{split}
\Delta^M u_{k,D}+ku_{k-1,D} & = 0,\,\, \text{on }\,\,D,\\
u_{k,D}\lvert_{_{\partial D}} & = 0.
\end{split}
\end{equation}
\noindent This sequence is the so-called, (see \cite{DLD}), {\em Poisson hierarchy for} $D$.
The Poisson hierarchy of the domain $D$ determines the $L^p$-moment spectrum of $D$, which can be defined as the following sequence of integrals, (see e.g. \cite{Mc} and references therein for a more detailed exposition about these concepts):
$$\mathcal{A}_{p,k}(D):= \bigg(\int_D(u_{k,D}(x))^p dV\bigg)^{\frac{1}{p}}\, , \,\,k=1,2,...,\infty.$$
We are going to focus our study on the $L^1$-moment spectrum of $D$, $\{\mathcal{A}_{1,k}(D)\}_{k=1}^\infty$, which we denote as $\{\mathcal{A}_{k}(D)\}_{k=1}^\infty$, and, in particular, on its first value, $\mathcal{A}_1(D)$, called the \emph{torsional rigidity of $D$}, which is defined as the integral
\begin{equation}\label{eq:Apk}
\mathcal{A}_1(D)=\int_D E_D(x)\,d\sigma,
\end{equation}
\noindent where $E_D$ is the smooth solution of the Dirichlet-Poisson equation (\ref{eq:moments1}).
The name \lq\lq torsional rigidity" comes from the fact that, when $D \subseteq \mathbb{R}^2$ is a plane domain, the quantity
$\mathcal{A}_1(D)$ represents the torque required when twisting an elastic beam of uniform cross section $D$, (see \cite{PS}). A natural question consists in optimizing this quantity among all the domains having the same given area/volume in a fixed space, or under some other geometrical setting. This problem is known as a {\em Saint-Venant type problem}.
The study of this variational problem in the general context of Riemannian manifolds involves the establishment of bounds on the torsional rigidity of a given domain $D \subseteq M$, together with the determination of the domains, and of the spaces that shelter them, where these bounds are attained, in a way analogous to the study of the Rayleigh conjecture for the fundamental tone. The techniques involved in this analysis encompass the use of the notion of Schwarz symmetrization as well as the isoperimetric inequalities satisfied by the domains in question.
From the intrinsic point of view, the establishment of bounds for the $L^p$-moment spectrum and the study of the relationship between the torsional rigidity (and, more generally, the $L^1$-moment spectrum) of a domain $D \subseteq M$ in a Riemannian manifold and its Dirichlet spectrum have been explored over the last years in a number of papers, (see, among others, \cite{BBC}, \cite{BG}, \cite{BGJ}, \cite{BFNT}, \cite{DLD}, \cite{KD}, \cite{KDM}, \cite{HuMP1}, \cite{HuMP2}, \cite{HuMP3}, \cite{Mc}, \cite{Mc2}, \cite{McMe}, \cite{MP4} and the references therein). Related to this issue, and in the line of the classical Kac's question, we have the isospectrality problem, namely, to see to what extent the $L^1$-moment spectrum of a domain determines it up to isometry, (see \cite{CD} and \cite{CKD}).
From the viewpoint of submanifold theory, we can find in the papers \cite{MP4}, \cite{HuMP1}, and \cite{HuMP2} upper and lower bounds for the $L^1$-moment spectrum of extrinsic balls $B^M_R\cap \Sigma$, (let us denote as $B^M_R$ the geodesic $R$-ball in the manifold $M$), in submanifolds $\Sigma^m \subseteq M^n$ with controlled mean curvature $H_\Sigma$ immersed in ambient Riemannian manifolds $(M,g)$ with radial sectional curvatures $K_{(M, g)}(\frac{\partial}{\partial r}, \, )$ bounded from above or from below.
These bounds were given, on the basis of previously established isoperimetric inequalities, by the corresponding values for the torsional rigidity of the Schwarz symmetrization of the geodesic balls in rotationally symmetric spaces with a pole, which are warped products of the form $M^n_w=[0, \infty)\times_{w} \mathbb{S}^{n-1}_1$ and which we call the {\em model spaces}. As we shall see in subsection 2.2, the model spaces $M^n_w$ are rotationally symmetric generalizations of the real space forms with constant sectional curvature $b \in \mathbb{R}$, denoted as $M^n_{w_b}=\mathbb{R}^n, \mathbb{S}^n(b),\, \text{or}\, \mathbb{H}^n(b)$, with
\begin{equation*}
\omega_b(r)=\begin{cases}
\dfrac{1}{\sqrt{b}}\sin\left(\sqrt{b}\,r\right),& \quad\text{if }\,b>0\\
r, & \quad\text{if }\,b=0\\
\dfrac{1}{\sqrt{-b}}\sinh\left(\sqrt{-b}\,r\right), & \quad\text{if }\,b<0
\end{cases}
\end{equation*}
\noindent We shall denote as $B^\omega_r(o_\omega)$ and as $S^\omega_r(o_\omega)$ the geodesic $r$-ball centered at $o_\omega$ and the geodesic $r$-sphere, respectively, in $M^n_\omega$.
Moreover, in \cite{MP4}, \cite{HuMP1} and \cite{HuMP2}, the geometry of the situations where equality with the bounds is attained was characterized. On the other hand, in these papers {\em intrinsic} upper and lower bounds for the torsional rigidity of geodesic balls $B^M_R$ in the ambient manifold were also given under the assumption that $\Sigma=M$, so that the extrinsic distance becomes the intrinsic distance and only bounds on the radial sectional curvatures of the ambient manifold $M$ are assumed.
To summarize the intrinsic results obtained in \cite{MP4}, \cite{HuMP1} and \cite{HuMP2} in a couple of statements, we need the following context and notation: let us consider a complete Riemannian manifold $(M,g)$, and a geodesic $R$-ball $B^M_R(o)$ centered at $o \in M$. Let us denote as $ K_{(M, g)}(\frac{\partial}{\partial r}, \, )$ its radial, (from the center $o$) sectional curvatures, (namely, the sectional curvatures of the planes containing the radial vector field $\frac{\partial}{\partial r}$, where $r$ denotes the distance function from the point $o$). With all these notions in hand, we have the following two results. The first concerns the so-called, (see \cite{HuMP1}), {\em averaged $L^1$-moment spectrum} of a geodesic ball:
\begin{theoremA}\label{teoSec}[see \cite{MP4} and \cite{HuMP1}]
Let $(M,g)$ be a complete Riemannian manifold. Let us consider $M^n_w$ a rotationally symmetric model space and let us suppose that
$$ K_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \, )\, \geq\, (\leq) \,K_{(M, g)}(\frac{\partial}{\partial r}, \, )\,, $$
\noindent where $K_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \, )$ denotes the radial sectional curvatures of $M^n_w$ from its center point $o_\omega \in M^n_\omega$.
Then the {\em averaged} $L^1$-moments, $\{\dfrac{\mathcal{A}_k\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}\}_{k=1}^\infty$ are bounded as follows
\begin{equation}\label{momentscomp1}
\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_R^\omega(o_\omega)\right)}\geq\,(\leq)\,\dfrac{\mathcal{A}_k\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}\, .
\end{equation}
\noindent Equality in inequality (\ref{momentscomp1}) for some $k_0 \geq 1$ implies that ${\rm B}^M_R(o)$ and ${\rm B}^\omega_R(o_\omega)$ are isometric.
\end{theoremA}
Concerning now the Torsional Rigidity, we need to assume, in addition, that the model space $M^n_w$ is {\em balanced from above}, namely, that the isoperimetric quotient given by $$q_\omega(r)=\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)}$$
\noindent is a non-decreasing function of $r$. This condition is satisfied by a wide range of spaces, in particular by all real space forms of constant sectional curvature. We then obtain the following
\begin{theoremA}\label{teoSec2}[see \cite{MP4} and \cite{HuMP1}]
Let $(M,g)$ be a complete Riemannian manifold. Let us consider $M^n_w$ a rotationally symmetric model space, balanced from above, and let us suppose that
$$ K_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \, )\, \geq\, (\leq) \,K_{(M, g)}(\frac{\partial}{\partial r}, \, ) \, ,$$
\noindent where $K_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \, )$ denotes the radial sectional curvatures of $M^n_w$ from its center point.
Then the torsional rigidity $\mathcal{A}_1\left({\rm B}_R^M(o)\right)$ is bounded as follows
\begin{equation}\label{torsrigcomp1}
\mathcal{A}_1\left({\rm B}_{s(R)}^\omega(o_\omega)\right)\geq\,(\leq)\,\mathcal{A}_1\left({\rm B}_R^M(o)\right)\, ,
\end{equation}
\noindent where ${\rm B}_{s(R)}^\omega(o_\omega)$ is the Schwarz symmetrization of ${\rm B}_R^M(o)$ in the model space $(M_\omega^n,g_\omega)$.
\noindent Equality in inequality (\ref{torsrigcomp1}) implies that $s(R)=R$ and that ${\rm B}^M_R(o)$ and ${\rm B}^\omega_R(o_\omega)$ are isometric.
\end{theoremA}
As a consequence of the bounds for the $L^1$-moment spectrum stated in Theorem \ref{teoSec}, and of the proof of Theorem 1.1 in \cite{McMe}, (where a formula for the first Dirichlet eigenvalue $\lambda_1(D)$ of a precompact domain $D$ in a Riemannian manifold $M$ was presented in terms of its $L^1$-moment spectrum, $\{\mathcal{A}_k(D)\}_{k=1}^\infty$), the following version of Cheng's eigenvalue comparison theorem was obtained:
\begin{theoremA}\label{ChengSec}[see \cite{HuMP3},\cite{Cheng75-1}, \cite{Cheng75-2}]
Let $(M,g)$ be a complete Riemannian manifold. Let us denote as $ K_{(M, g)}(\frac{\partial}{\partial r}, \, )$ its radial sectional curvatures and as $ Ricc_{(M, g)}(\frac{\partial}{\partial r}, \frac{\partial}{\partial r})$ its radial Ricci curvatures at any point.
Let us consider $M^n_w$ a rotationally symmetric model space and let us suppose that
\begin{equation*}
\begin{aligned}
K_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \, )\, &\geq \, K_{(M, g)}(\frac{\partial}{\partial r}, \, )\, ,\\
\bigg( \text{or that} \, \,\,Ricc_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \frac{\partial}{\partial r})\,&\leq \, Ricc_{(M, g)}(\frac{\partial}{\partial r}, \frac{\partial}{\partial r})\bigg)\, ,
\end{aligned}
\end{equation*}
\noindent where $K_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \, )$ and $Ricc_{(M_w, g_{w})}(\frac{\partial}{\partial r}, \frac{\partial}{\partial r})$ denote the radial sectional and Ricci curvatures of $M^n_w$ at its center point.
Then
$$\lambda_1(B^{\omega}_R(o_\omega)) \,\leq\,(\geq) \, \lambda_1(B^M_R(o)).$$
\noindent for all $R < inj(o) \leq inj(o_w)$.
Equality in any of these inequalities holds if and only if the geodesic balls ${\rm B}^M_R(o)$ and ${\rm B}^\omega_R(o_\omega)$ are isometric.
\end{theoremA}
On the other hand, in the paper \cite{Mc2}, P. McDonald showed that, given a precompact domain $D \subseteq M$ in a complete Riemannian manifold $M$ which satisfies the inequalities $\mathcal{A}_k(D) \leq \mathcal{A}_k(D^*)$, where $D^*$ is the Schwarz symmetrization of $D$ in a constant curvature space form $M_{\omega_b}$, then the inequality $\lambda_1(D^*) \leq \lambda_1(D)$ holds, (see Theorem 1 in \cite{Mc2}).
Continuing with versions of Cheng's result, in the paper \cite{BM} the authors proved that Cheng's eigenvalue comparison is still valid assuming bounds on the mean curvature of (intrinsic) distance spheres, a weaker hypothesis (as we shall see below) than the bounds on the sectional curvatures of the manifold:
\begin{theoremA}[see \cite{BM}]\label{teoPacelli}
Let $B^M_R \subseteq M^n$ and $B^{w_{b}}_R$ be geodesic $R$-balls in a Riemannian manifold $(M,g)$ and in the real space form with constant sectional curvatures $b\in \mathbb{R}$, $M^n_{w_b}$, respectively, both within the cut locus of their centers and let $(t, \overline{\theta}) \in (0,R]\times\mathbb{S}^{n-1}_1$ be the polar coordinates for $B^M_R$ and $B^{w_b}_R$.
Then, if $H_{S^M_t}(t,\overline{\theta})$ and $H_{S^{w_b}_t}(t)$ are the, (pointed inward), mean curvatures of the distance spheres $S^M_t$ in $M$ and $S^{w_b}_t$ in the real space form of constant curvature $M^{w_b}$ respectively and we assume that
$$H_{S^{w_b}_t}(t) \, \leq\, (\geq) \, H_{S^M_t}(t,\overline{\theta})\,\,\forall t \leq R\,\,\forall \overline{\theta} \in \mathbb{S}^{n-1}_1$$
\noindent we have that
$$\lambda_1(B^{\omega_b}_R(o_\omega)) \, \leq\, (\geq) \, \lambda_1(B^M_R(o)).$$
Equality in any of these inequalities holds if and only if $H_{S^M_t}(t,\overline{\theta}) = H_{S^{w_b}_t}(t)\,\,\forall t \leq R\,\,\forall \overline{\theta} \in \mathbb{S}^{n-1}_1.$
\end{theoremA}
Its proof relies on Barta's Lemma and the expression of the Laplacian of the first Dirichlet eigenfunction in polar coordinates. It is precisely from this intrinsic expression that the use, as hypotheses, of bounds on the mean curvature of distance spheres comes.
Therefore, it can be said that the results we are going to present in this paper are inspired, on the one hand, by the intrinsic bounds for the torsional rigidity and the $L^1$-moment spectrum of geodesic balls and by the estimation of $\lambda_1(B^M_R)$ obtained in the papers \cite{MP4}, \cite{HuMP1}, \cite{HuMP2} and \cite{HuMP3}, and, on the other hand, by the weaker restrictions on the mean curvatures of geodesic spheres assumed in \cite{BM}, as well as by the comparisons for the $L^1$-moment spectrum and the first Dirichlet eigenvalue given in \cite{Mc} and \cite{Mc2}.
\subsection{A glimpse at our results}\
Throughout this paper we consider a complete Riemannian manifold $(M^n,g)$ and a rotationally symmetric model space $(M_\omega^n, g_w)$ with center $o_w$, and we shall assume that, given a point $o \in M$, the injectivity radius of $o$ satisfies $inj(o) \leq {{\rm inj}\,}(o_w)$. Let us fix $R < inj(o) \leq inj(o_w)$, assuming that the pointed inward mean curvatures of the metric $r$-spheres satisfy
\begin{equation*}
\begin{aligned}
{\rm H}_{{\rm S}_r^\omega(o_\omega)}\,&\leq\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R\\
\bigg( \text{or that }\,\,\,\,\,{\rm H}_{{\rm S}_r^\omega(o_\omega)}\,&\geq\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R\bigg)
\end{aligned}
\end{equation*}
These hypotheses are the same as the conditions assumed in \cite{BM}, and they constitute a more general assumption than the bounds for the sectional and the Ricci curvatures in Theorems \ref{teoSec} and \ref{ChengSec}, as we shall see in Subsection \ref{ex} below. On the other hand, they imply, in their turn, the following isoperimetric conditions satisfied by the geodesic $r$-balls with $r \leq R$ in the complete Riemannian manifold $M$,
\begin{equation} \label{hypisop}
\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)}\geq\,(\leq)\,\dfrac{\operatorname{Vol}\left({\rm B}_r^M(o)\right)}{\operatorname{Vol}\left({\rm S}_r^M(o)\right)}\quad\text{for all}\quad 0<r \leq R.
\end{equation}
Concerning the use of isoperimetric inequalities, (not exactly those given in (\ref{hypisop})), in the study of the relation between the moment spectrum and the Dirichlet spectrum, we refer to the paper \cite{Mc}.
Under these restrictions on the mean curvatures of geodesic spheres we have obtained all the results in this paper, the most important of which are Proposition \ref{prop3.2}, Theorem \ref{teo:MeanComp} and Corollary \ref{cor:MeanComp} in Section \ref{sec:MeanComp}, Theorem \ref{teo:TowMomComp}, Corollary \ref{isoptors} and Theorem \ref{teo:2.1} in Section \ref{sec:TorRidCom1}, and Theorem \ref{th_const_below2} and Corollary \ref{th_const_below3} in Section \ref{sec:TorRidCom2}.
We are going to present, in the statements of Theorem \ref{newresult1}, Theorem \ref{newresult2} and Theorem \ref{ChengMean} below, summarized versions of some of our results concerning bounds on the Poisson hierarchy, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the geodesic balls $B^M_R$ in Sections \ref{sec:TorRidCom1} and \ref{sec:TorRidCom2}, in order to see that they are a generalization of those presented in Theorem \ref{teoSec} and in Theorem \ref{ChengSec}.
The techniques used in the proof of these results are basically the same as in the cited papers \cite{MP4}, \cite{HuMP1}, and \cite{HuMP2}, but now with the intrinsic point of view as the main perspective. These techniques encompass the use of the formula for the Laplacian of the mean exit time function in polar coordinates, the application of the Maximum principle, the properties of the Schwarz symmetrization of the ball $B^M_R$ and the explicit expression of the first Dirichlet eigenvalue of a geodesic ball $B^w_R$ in a rotationally symmetric model space $M^n_w$ as a limit of the sequence given by the $L^1$-moment spectrum of this geodesic ball, $\{\mathcal{A}_{k}(B^w_R)\}_{k=1}^\infty$, obtained in the paper \cite{HuMP3}. This formula for $\lambda_1(B^\omega_R(o_\omega))$ was subsequently extended in the paper \cite{BGJ} to any precompact domain $\Omega \subseteq M$, namely
\begin{equation}\label{DirMoment2}
\begin{aligned}
\lambda_1(\Omega)=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left(\Omega\right)}{\mathcal{A}_{k}\left(\Omega\right)}.
\end{aligned}
\end{equation}
In fact, the presence of the mean curvature of geodesic spheres $H_{S^M_{t}}$ in the expression of the Laplace operator in polar coordinates has played a key role in the establishment of our hypotheses, (as is obvious), and also in the analysis of the equality with the bounds in all of our comparisons.
Concerning this analysis of the equality case, an important notion which appears in Theorems \ref{newresult1}, \ref{newresult2} and \ref{ChengMean} below, (in fact, along all the equality discussions in the paper), is the concept of {\em determination} of a Riemannian invariant defined on the geodesic balls by its $L^1$-moment spectrum, its averaged $L^1$-moment spectrum or its Torsional Rigidity, in a way which, although it is not exactly the same, has been directly inspired by the work of P. McDonald in \cite{Mc2}.
In the paper \cite{Mc2} the notion of {\em determination} of a Riemannian invariant $I(D)$ defined on the precompact domain $D\subseteq M$ by the $L^1$-moment spectrum of $D$ is presented: we say that $\{\mathcal{A}_k(D)\}_{k=1}^\infty$ {\em determines} the invariant $I(D)$ if and only if $\{\mathcal{A}_k(D)\}_{k=1}^\infty=\{\mathcal{A}_k(D')\}_{k=1}^\infty$ implies $I(D)=I(D')$. With this definition, it is proved in \cite{Mc2} that the $L^1$-moment spectrum of a precompact domain $D$ determines its heat content.
We shall see in the following Theorem \ref{newresult1} and Theorem \ref{newresult2} that, under our hypotheses, the Torsional Rigidity $\mathcal{A}_1(B^M_R)$ and any individual averaged moment $\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}$ determines the Poisson hierarchy, the volume, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the ball $B^M_R$, in the following sense:
When $\mathcal{A}_1(B^M_R(o))=\mathcal{A}_1(B^\omega_{s(R)}(o_\omega))$, or there exists $k_0 \geq 1$ such that
$$\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}=\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_R^\omega(o_\omega)\right)},$$
\noindent then $s(R)=R$ and the Poisson hierarchy, the volume, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the ball $B^M_R$ are the same as the corresponding values for the geodesic ball $B^\omega_R(o_\omega)$ in the model space $M^n_\omega$.
With all these previous considerations, we present the following:
\begin{theorem}\label{newresult1}[see Corollary \ref{isoptors}]
Let us consider a complete Riemannian manifold $(M^n,g)$ and a rotationally symmetric model space $(M_\omega^n, g_w)$ with center $o_w$, and let us assume that, given a point $o \in M$, the injectivity radius of $o$ satisfies $inj(o) \leq {{\rm inj}\,}(o_w)$. Let us fix $R < inj(o) \leq inj(o_w)$, assuming that the pointed inward mean curvatures of the metric $r$-spheres satisfy
\begin{equation}\label{eq:meancurvatureconditions}
{\rm H}_{{\rm S}_r^\omega(o_\omega)}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then, for all $k\geq 1$,
\begin{equation}\label{momentscomp2}
\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_R^\omega(o_\omega)\right)}\geq\,(\leq)\,\dfrac{\mathcal{A}_k\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}.
\end{equation}
Equality in any of the inequalities (\ref{momentscomp2}) for some $k_0 \geq 1$ implies that
$${\rm H}_{{\rm S}_r^\omega(o_\omega)}=\,{\rm H}_{{\rm S}_r^M(o)}\,\,\,\text{for all }0<r\leq R$$
\noindent and hence, we have the equalities
\begin{enumerate}
\item Equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$, and hence, equality $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $0 < r \leq R$.
\item Equalities $\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm B}_r^M(o)\right)$ and $\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm S}_r^M(o)\right)\,\,\,\text{for all }\, 0<r \leq R$.
\item Equalities $\mathcal{A}_k\left({\rm B}_r^\omega(o_\omega)\right)=\mathcal{A}_k\left({\rm B}_r^M(o)\right)$, for all $k\geq 1$ and for all $0<r \leq R$.
\item Equality $\lambda_1(B^M_r(o))=\lambda_1(B^w_r(o_\omega))$ for all $0<r \leq R$.
\noindent Namely, {\em one} value of $\dfrac{\mathcal{A}_k\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}$ for some $k \geq 1$ determines the Poisson hierarchy, the volume, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the ball $B^M_r(o)$ for all $0<r \leq R$.
\end{enumerate}
\end{theorem}
Our second result is a comparison for the Torsional Rigidity of the ball $B^M_R$, and, as in Theorem \ref{teoSec2}, we need that the model space $M^n_\omega$ used in the comparison be {\em balanced from above}.
\begin{theorem}\label{newresult2}[see Theorem \ref{teo:2.1} ]
Let us consider a complete Riemannian manifold $(M^n,g)$ and a balanced from above rotationally symmetric model space $(M_\omega^n, g_w)$ with center $o_w$, and let us assume that, given a point $o \in M$, the injectivity radius of $o$ satisfies $inj(o) \leq {{\rm inj}\,}(o_w)$. Let us fix $R < inj(o) \leq inj(o_w)$, assuming that the pointed inward mean curvatures of the metric $r$-spheres satisfy
\begin{equation}\label{eq:meancurvatureconditions}
{\rm H}_{{\rm S}_r^\omega(o_\omega)}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then
\begin{equation}\label{torsrigcomp21}
\mathcal{A}_1\left({\rm B}_{s(R)}^\omega(o_\omega)\right)\geq\,(\leq)\,\mathcal{A}_1\left({\rm B}_R^M(o)\right)\, ,
\end{equation}
\noindent where ${\rm B}_{s(R)}^\omega(o_\omega)$ is the Schwarz symmetrization of ${\rm B}_R^M(o)$ in the model space $(M_\omega^n,g_\omega)$.
Equality in any of the inequalities (\ref{torsrigcomp21}) implies the equality of the radii, $s(R)=R$, and that
$${\rm H}_{{\rm S}_r^\omega(o_\omega)}=\,{\rm H}_{{\rm S}_r^M(o)}\,\,\,\text{for all }0<r\leq R$$
\noindent and hence, we have the equalities
\begin{enumerate}
\item Equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$, and hence, equality $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $0 < r \leq R$.
\item Equalities $\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm B}_r^M(o)\right)$ and $\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm S}_r^M(o)\right)\newline \text{for all } 0<r \leq R$.
\item Equalities $\mathcal{A}_k\left({\rm B}_r^\omega(o_\omega)\right)=\mathcal{A}_k\left({\rm B}_r^M(o)\right)$, for all $k\geq 1$ and for all $0<r \leq R$.
\item Equality $\lambda_1(B^M_r(o))=\lambda_1(B^w_r(o_\omega))$ for all $0<r \leq R$.
\noindent Namely, the Torsional Rigidity determines the Poisson hierarchy, the volume, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the ball $B^M_r(o)$ for all $0<r \leq R$.
\end{enumerate}
\end{theorem}
As a consequence of the proof of Theorem 1.1 in \cite{McMe}, of Corollary \ref{isoptors}, and of the volume inequalities given in Theorem \ref{teo:MeanComp}, we obtain the following Cheng-type Dirichlet eigenvalue comparison, following \cite{BM}, (Theorem \ref{th_const_below2} in Section 5). In this case, we have proved that the first Dirichlet eigenvalue of $B^M_R$ determines its Poisson hierarchy, its volume and its $L^1$-moment spectrum.
\begin{theorem}\label{ChengMean}[see Theorem \ref{th_const_below2}]
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_w)$. Let us suppose, moreover, that the pointed inward mean curvatures of the geodesic spheres in $M$ and $M_{\omega}$ satisfy
\begin{equation}\label{eq:meancurvatureconditions12}
{\rm H}_{{\rm S}_r^\omega(o_\omega)}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then we have the inequalities
\begin{equation}\label{ineqleq_submanifold2}
\lambda_1 (B^\omega_R(o_\omega))\, \leq\,(\geq) \, \lambda_1(B^M_R(o))\, .\end{equation}
\noindent Equality in any of these inequalities implies that
$${\rm H}_{{\rm S}_r^\omega(o_\omega)}=\,{\rm H}_{{\rm S}_r^M(o)}\,\,\,\text{for all }0<r\leq R$$
\noindent and hence, we have the equalities
\begin{enumerate}
\item Equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$, and hence, equality $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $0 < r \leq R$.
\item Equalities $\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm B}_r^M(o)\right)$ and $\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm S}_r^M(o)\right) \newline \text{for all }\, 0<r \leq R$.
\item Equalities $\mathcal{A}_k\left({\rm B}_r^\omega(o_\omega)\right)=\mathcal{A}_k\left({\rm B}_r^M(o)\right)$, for all $k\geq 1$ and for all $0<r \leq R$.
\end{enumerate}
\noindent Namely, the first Dirichlet eigenvalue determines the Poisson hierarchy, the volume, and the $L^1$-moment spectrum of the ball $B^M_r(o)$ for all $0<r \leq R$.
\end{theorem}
The characterizations of equalities in both theorems are ultimately based on a rigidity property satisfied by the Poisson hierarchy for $B^M_R$, $\{u_{k,R}\}_{k=1}^\infty$ in a Riemannian manifold $M$ under the hypotheses depicted above. This rigidity property can be summarized by saying that the value at {\em one point} $ p \in B^M_R$ of one of the functions $u_{k,R}$ of the Poisson hierarchy determines it {\em entirely} on the geodesic ball $B^M_R$, ( see Proposition \ref{prop3.2} and assertions (3) and (4) in Theorem \ref{teo:TowMomComp} in Section \ref{sec:TorRidCom1}).
\subsection{Example}\label{ex}\
We remark that, under bounds on the sectional curvatures of the manifold as hypotheses, if we have equality with the corresponding bound in the model space for any of our invariants defined on the geodesic ball $B^M_R$, (namely, the Poisson hierarchy, the averaged $L^1$-moment spectrum, or the torsional rigidity), then $B^M_R$ is isometric to the corresponding geodesic ball in the model space, $B^w_R \subseteq M^n_w$. However, the equality of the mean curvature of distance spheres in the Riemannian manifold $M$ with its radial bound, given by the mean curvature of distance spheres in the model space $M_\omega$, does not imply the isometry between the geodesic balls, as in the previous case.
This observation is consistent with the fact that bounds on the sectional curvatures of the manifold imply bounds for the mean curvature of its geodesic spheres, namely, if $(M,g)$ is a Riemannian manifold with radial sectional curvatures
$$K_{sec,g}(\frac{\partial}{\partial r}, \, ) \leq \, (\geq)\,K_{sec,g_{w}}(\frac{\partial}{\partial r}, \, )=-\frac{w''(r)}{w(r)}$$
\noindent then we have that
$$H_{S^M_{r}} \geq\,(\leq)\, H_{S^w_{r}}=\frac{w'(r)}{w(r)}\, .$$
These implications follow from the observation that the mean curvature of geodesic spheres is, up to the normalizing factor $\frac{1}{n-1}$, the Laplacian of the distance function from their center in the manifold, (see Proposition \ref{lapmean}), together with the Hessian comparison analysis of the distance function as it can be found in \cite{GreW}, \cite{MM} or \cite{Pa3}.
However, in the paper \cite{BM}, the authors exhibit in Example 3.1 in Section 3, smooth complete and rotationally symmetric metrics $g$ on $\mathbb{R}^n$ with radial sectional curvatures bounded from below, $K_{sec, g}(\frac{\partial}{\partial r},\,) \geq b$ outside a compact set and such that the distance spheres $S^{(\mathbb{R}^{n}, g)}_t$ have mean curvature $H_{S^{(\mathbb{R}^{n}, g)}_{t}} \geq H_{S^{w_{b}}_{t}}$.
In the following, we are going to present a new example which shows that bounds on the mean curvature of geodesic spheres of the manifold does not imply that the sectional curvatures of the manifold are controlled.
Let $(\mathbb{R}^2,g)$ be a Riemannian manifold such that its metric tensor expressed in polar coordinates is given by $g=dr^2+\omega^2(r,\theta)d\theta^2$, where $\omega:\mathbb{R}^2\to\mathbb{R}$ is a positive smooth function given by
\begin{equation}\label{eq:metricaexemple}
\omega(r,\theta)=r\left(1+\dfrac{r^2}{1+r^2\cos^2\theta}\right).
\end{equation}
On the other hand, we consider as a model space the simply connected space form $(\mathbb{R}^2,g_{{\rm can}})$ of constant sectional curvature $b=0$.
We are going to see that the mean curvatures of the geodesic spheres ${\rm S}^{(\mathbb{R}^{2}, g)}_t(\vec{0})$ of $(\mathbb{R}^2,g)$ centered at $\vec{0}$ with radius $t$, are bounded from below by the mean curvatures of the geodesic spheres ${\rm S}_t^{\omega_0}(\vec{0})$ of $(\mathbb{R}^2,g_{\rm can})$ centered at $\vec{0}$ with the same radius, namely, that
$${\rm H}_{{\rm S}^{(\mathbb{R}^{2}, g)}_t}\geq{\rm H}_{{\rm S}_t^{\omega_0}}$$
Since ${\rm H}_{{\rm S}^{(\mathbb{R}^{2}, g)}_t}(t,\theta)=\dfrac{\frac{\partial\omega}{\partial t}(t,\theta)}{\omega(t,\theta)}$, and since
\begin{equation*}
\begin{split}
\dfrac{\partial \omega}{\partial t}(t,\theta) & =1+\dfrac{t^2}{1+t^2\cos^2\theta}+\dfrac{2t^2}{(1+t^2\cos^2\theta)^2}
\end{split}
\end{equation*}
we obtain
\begin{equation*}
{\rm H}_{{\rm S}^{(\mathbb{R}^{2}, g)}_t}(t,\theta) =\dfrac{\frac{\partial\omega}{\partial t}(t,\theta)}{\omega(t,\theta)}=
\dfrac{1}{t}+\dfrac{2t}{(1+\frac{t^2}{1+t^2\cos^2\theta})(1+t^2\cos^2\theta)^2}
\end{equation*}
\noindent But
$$\dfrac{2t}{(1+\frac{t^2}{1+t^2\cos^2\theta})(1+t^2\cos^2\theta)^2}\geq 0\,\,\text{for all}\,\,(t,\theta)\in (0,+\infty)\times[0,2\pi)$$
Hence we have that
$${\rm H}_{{\rm S}^{(\mathbb{R}^{2}, g)}_t}(t,\theta)\geq \dfrac{1}{t}={\rm H}_{{\rm S}_t^{\omega_0}}(t)\,\,\text{for all}\,\,(t,\theta)\in (0,+\infty)\times[0,2\pi).$$
Now, let us consider the unique $2$-plane tangent to a point $(t,\theta)\in\mathbb{R}^2$ generated by the coordinate vector fields $\left\{\frac{\partial}{\partial r},\frac{\partial}{\partial\theta}\right\}$. We are going to compute the sectional curvature of $(\mathbb{R}^2,g)$ at this point and we will see that it is not bounded by the corresponding sectional curvature of $(\mathbb{R}^2,g_{\rm can})$, i.e., we will show that $K_{sec,g}(t,\theta)$ is not bounded from below by $0$.
As $K_{sec,g}(t,\theta)=-\dfrac{\frac{\partial^2\omega}{\partial t^2}(t,\theta)}{\omega(t,\theta)}$ then, it is straightforward to check that
\begin{equation*}
K_{sec,g}(t,\theta) =-\dfrac{\frac{\partial^2\omega}{\partial t^2}(t,\theta)}{\omega(t,\theta)}=
\dfrac{2(t^2\cos^2\theta-3)}{(1+t^2\cos^2\theta)^2(1+t^2+t^2\cos^2\theta)}
\end{equation*}
Thus, for $\theta=0$, we have that
\begin{equation*}
K_{sec,g}(t,0)=\dfrac{2(t^2-3)}{(1+t^2)^2(1+2t^2)}
\end{equation*}
This shows that the sectional curvature of $(\mathbb{R}^2,g)$ takes both negative and positive values, (along $\theta=0$ it is negative for $t^2<3$ and positive for $t^2>3$), so it is neither bounded from below nor from above by $0$, which is the sectional curvature of $(\mathbb{R}^2,g_{{\rm can}})$.
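These computations can also be double-checked symbolically. The following short script is only an optional verification aid, not part of the argument; it recovers the expressions for ${\rm H}_{{\rm S}^{(\mathbb{R}^{2}, g)}_t}$ and $K_{sec,g}$ obtained above from the warping function (\ref{eq:metricaexemple}).
\begin{verbatim}
# Hedged sketch: symbolic check of the mean curvature and sectional curvature
# for g = dr^2 + w(r,theta)^2 dtheta^2, w = r*(1 + r^2/(1 + r^2*cos(theta)^2)).
import sympy as sp

t, th = sp.symbols('t theta', positive=True)
w = t*(1 + t**2/(1 + t**2*sp.cos(th)**2))

H = sp.simplify(sp.diff(w, t)/w)        # mean curvature of S_t
K = sp.simplify(-sp.diff(w, t, 2)/w)    # radial sectional curvature

print(sp.simplify(H - 1/t))             # nonnegative: H >= 1/t
print(sp.factor(K.subs(th, 0)))         # changes sign at t^2 = 3
\end{verbatim}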
\subsection{Outline}
After the Introduction, Section \ref{sec:preliminars} is devoted to the presentation of preliminary concepts, including the rotationally symmetric model spaces used to construct the bounds and the notion of Schwarz symmetrization based on these models. We have stated and proved, for the sake of completeness, all the properties of these symmetrizations that we need in our context. Section \ref{sec:MeanComp} deals with the properties of the mean exit time function defined on the geodesic $R$-balls in a complete Riemannian manifold satisfying our hypotheses, and its relation with their volumes and the isoperimetric inequalities satisfied by these domains, (Proposition \ref{prop3.2}, Theorem \ref{teo:MeanComp}, Corollary \ref{cor:MeanComp} and Corollary \ref{cor:MeanComp2}). In Section \ref{sec:TorRidCom1} we have established bounds for the Poisson hierarchy and the averaged $L^1$-moment spectrum of a geodesic $R$-ball under our restrictions (Theorem \ref{teo:TowMomComp} and Corollary \ref{isoptors}), and we have also bounded the Torsional Rigidity of a geodesic $R$-ball by means of its Schwarz symmetrization, (Theorem \ref{teo:2.1}). Finally, in Section \ref{sec:TorRidCom2}, we have shown a Cheng-type comparison for the first Dirichlet eigenvalue of geodesic balls, (Theorem \ref{th_const_below2}), and we have established the relation between the first Dirichlet eigenvalue of geodesic balls, their $L^1$-moment spectrum and their Poisson hierarchy in Corollary \ref{th_const_below3}.
\section{Preliminaries and comparison setting}\label{sec:preliminars}\
We are going to present some previous notions and results that will be instrumental in our work.
\subsection{Polar coordinates and the Laplacian on a Riemannian manifold}\
\begin{definition}
Let us consider a complete Riemannian manifold $(M^n, g)$ and a point $o \in M$. Let us denote as $Cut(o)$ the cut locus of $o\in M$ and as $inj(o)=dist_M(o, Cut(o))$ the injectivity radius of the point $o \in M$. We shall denote too as $\mathbb{S}^{n-1}_1 \subseteq \mathbb{R}^n$ the unit sphere with center $\vec{0} \in \mathbb{R}^n$.
We define, in the set $M \sim (Cut(o) \cup \{o\})$, the {\em polar coordinates} of any point $x \in M \sim (Cut(o) \cup \{o\})$ as the pair $(r(x), \overline{\theta}) \in (0, inj(o))\times \mathbb{S}^{n-1}_1$, where $r(x):=r_o(x)= dist_M(o,x)$ is the distance from $o$ to $x$ realized by the shortest geodesic between these points which starts at $o$ with direction $\overline{\theta} \in \mathbb{S}^{n-1}_1$.
\end{definition}
The Riemannian metric $g$ in $M \sim (Cut(o) \cup \{o\})$ has in the polar coordinates the form
$$g=dr^2+\sum_{i,j=1}^{n-1}g_{i,j}(r,\overline{\theta}) d\theta^i d\theta^j\, ,$$
\noindent where $\overline{\theta}\equiv (\theta_1,...,\theta_{n-1}) \in \mathbb{S}^{n-1}_1$ is a system of local coordinates in $\mathbb{S}^{n-1}_1$ and $g_{i,j}(r,\overline{\theta})=g\left(\left.\frac{\partial}{\partial \theta_i}\right\rvert_{(r,\overline{\theta})},\left.\frac{\partial}{\partial \theta_j}\right\rvert_{(r,\overline{\theta})}\right)$.
Thus, the matrix form of the metric $g$ in polar coordinates is a positive definite matrix given by
\begin{equation*}
\mathfrak{G}=\left(\begin{array}{@{}c|c@{}}
1 & \begin{matrix}
0 & \cdots & 0
\end{matrix} \\
\hline
\begin{matrix}
0\\ \vdots\\ 0
\end{matrix} &
G
\end{array}\right)\, ,
\end{equation*}
where $G$ is the matrix whose elements are the $g_{ij}$, \emph{i.e.}, $G=\left(g_{ij}\right)_{i,j\in\{1,\dots,n-1\}}$. Hence, for any point $(r,\overline{\theta})\in M-\left(Cut(o)\cup\{o\}\right)$, we have that
\begin{equation*}
\sqrt{\det\left(\mathfrak{G}(r,\overline{\theta})\right)}=\sqrt{\det\left(G(r,\overline{\theta})\right)}.
\end{equation*}
Then, (see for example \cite{Gri}, \cite{Cha1}), the Laplace operator of $M$ has the following expression in the polar coordinates
\begin{equation}\label{lapeq}
\Delta^M = \frac{\partial^2 }{\partial r^2}+\frac{\partial}{\partial r}\Big(\log\sqrt{\det G(r,\overline{\theta})}\Big)\frac{\partial}{\partial r}+\Delta^{S^M_r(o)}\, ,
\end{equation}
\noindent where $\Delta^{S^M_r(o)}$ is the Laplace operator on the geodesic sphere $S^M_r(o) \subseteq M$.
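As a quick sanity check of formula (\ref{lapeq}), consider the Euclidean plane with polar coordinates, $g=dr^2+r^2d\theta^2$: there $G=(r^2)$ and $\sqrt{\det G}=r$, so (\ref{lapeq}) reduces to the familiar expression
$$\Delta^{\mathbb{R}^2} = \frac{\partial^2 }{\partial r^2}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\frac{\partial^2 }{\partial \theta^2}.$$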
\begin{remark}
Along all the paper, given $o \in M$ and as long as $R <inj(o)$, we will use indistinctly the terms {\em geodesic ball}, {\em geodesic sphere}, {\em metric ball}, {\em metric sphere}, {\em distance ball} and {\em distance sphere} to name the sets $B^M_R(o)$ and $S^M_R(o)$ respectively.
\end{remark}
Using this result we have the following
\begin{proposition}\label{lapmean}
Let $(M^n,g)$ be a complete Riemannian manifold and let $o\in M$ be a point of $M$. Then the normalized mean curvature vector field of the geodesic sphere $S^M_t(o)$, is given by
$$\vec{H}_{S^M_t(o)}=-H_{S^M_t(o)}\nabla^M r\, ,$$
\noindent where
$${\rm H}_{{\rm S}_t^M}=\frac{1}{n-1}\Delta^M r(\gamma(t))=\frac{1}{n-1}\dfrac{\frac{\partial}{\partial t}\sqrt{\det G(t,\overline{\theta})}}{\sqrt{\det G(t,\overline{\theta})}}\,\,\forall t>0$$
\noindent is the pointed inward mean curvature of $S^M_t(o)$ and $\gamma(t)$ is a unit geodesic starting at the point $o \in M$.
\end{proposition}
\begin{proof}
The proof is straightforward taking $\{\vec{e}_i(t)\}_{i=1}^n$ an orthonormal basis of $T_{\gamma(t)}{\rm S}_t^M$, with $\vec{e}_n(t)=\nabla^Mr(\gamma(t))$, the unit normal to ${\rm S}_t^M$ at $\gamma(t)$, pointed outward. Then, after some computations,
\begin{equation}
\begin{aligned}
\vec{H}_{S^M_t}&=\frac{1}{n-1} \big(\operatorname{tr} L_{\nabla^M r}\big) \nabla^M r=-\frac{1}{n-1} \operatorname{div}^M(\nabla^Mr)\,\nabla^M r\\&=-\frac{1}{n-1} \Delta^M r(\gamma(t))\,\nabla^Mr(\gamma(t))
\end{aligned}
\end{equation}
\noindent so $$H_{S^M_t}=\langle \vec{H}_{S^M_t},-\nabla^Mr(\gamma(t))\rangle=\frac{1}{n-1}\Delta^Mr(\gamma(t)).$$
\noindent The result now follows using equation (\ref{lapeq}).
\end{proof}
Given a domain (connected open set) $D$ in $M$, a function $u\in C^2(D)$ is \emph{harmonic} (resp. \emph{subharmonic}) if $\Delta^M u=0$ (resp. $\Delta^M u\geq 0$) on $D$. We gather the strong maximum principle and the Hopf boundary point lemma for subharmonic functions in the next statement.
\begin{theorem}
\label{th:mp}
Let $D$ be a smooth domain of a Riemannian manifold $M$. Consider a subharmonic function $u\in C^2(D)\cap C(\overline{D})$. Then, we have:
\begin{itemize}
\item[(i)] if $u$ achieves its maximum in $D$ then $u$ is constant,
\item[(ii)] if there is $p_0\in\partial D$ such that $u(p)<u(p_0)$ for any $p\in D$ then $\frac{\partial u}{\partial \nu}(p_0)>0$, where $\nu$ denotes the outer unit normal along $\partial D$.
\end{itemize}
\end{theorem}
\begin{proof}
The proof of (i) can be found in \cite[Cor.~8.15]{Gri2}. The proof of (ii) can be derived from (i) as in the Euclidean case \cite[Lem.~3.4]{GT}.
\end{proof}
\subsection{Model Spaces}\label{subsec:Model}\
The model spaces $M_\omega^n$ are rotationally symmetric spaces defined as follows:
\begin{definition}\label{def:modelspaces}
(See \cite{Gri},\cite{GreW}) A $\omega$-model $M_\omega^n$ is a smooth warped product with base $B^1=[0,R[\subset\mathbb{R}$ (where $0<R\leq\infty$), fiber $F^{n-1}=\mathbb{S}^{n-1}_1$ (i.e., the unit $(n-1)$-sphere with standard metric), and warping function $\omega:\,[0,R[\rightarrow\mathbb{R}_+\cup\{0\}$ with $\omega(0)=0$, $\omega'(0)=1$, $\omega^{(2k)}(0)=0$ for all $k\in\mathbb{N}^*$, and $\omega(r)>0$ for all $r>0$, where $\omega^{(2k)}$ denotes the even derivatives of the warping function.
The point $o_\omega=\pi^{-1}(0)$, where $\pi$ denotes the natural projection onto $B^1$, is called \emph{center point} of the model space. If $R=+\infty$, then $o_\omega$ is a pole of $M_\omega^n$. We denote as $r=r(x)$ the distance to the pole $o_\omega$ of the point $x \in M_\omega^n$.
\end{definition}
\begin{remark}\label{prop:SpaceForm}
The simply connected space forms $M^n_{\omega_b}$ of constant sectional curvature $b$ can be constructed as $\omega$-models with any given point as center point using the warping functions
\begin{equation}\label{eq:SpaceForm}
\omega_b(r)=\begin{cases}
\dfrac{1}{\sqrt{b}}\sin\left(\sqrt{b}\,r\right),& \quad\text{if }\,b>0,\\
r, & \quad\text{if }\,b=0,\\
\dfrac{1}{\sqrt{-b}}\sinh\left(\sqrt{-b}\,r\right), & \quad\text{if }\,b<0.
\end{cases}
\end{equation}
Note that for $b>0$, the warped metric $g_{\omega_b}=dr^2+\omega_b^2(r)g_{\mathbb{S}^{n-1}_1}$ determined by the function $\omega_b(r)$ admits smooth extension to $r=\pi/\sqrt{b}$. For $b\leq 0$ any center point is a pole.
\end{remark}
In \cite{O'N}, \cite{GreW}, \cite{Gri} and \cite{MP4}, we have a complete description of these model spaces, including the computation of their sectional curvatures $K_{o_\omega,M_\omega^n}$ in the radial directions from the center point $o_\omega$. They are determined by the radial function $K_{o_\omega,M_\omega^n}(\sigma_x)=K_\omega(r)=-\frac{\omega''(r)}{\omega(r)}$. Moreover, the normalized inward mean curvature of the distance sphere $S^\omega_r(o_\omega)$ of radius $r$ from the center point, is, at the point $p=\gamma(r) \in S^\omega_r(o_\omega)$, where $\gamma(t)$ is the normal geodesic parametrized by arclength joining $o_\omega$ and $p$
\begin{equation}\label{eq:WarpMean}
H_{S^w_r}(p)=\eta_\omega(r)=\dfrac{\omega'(r)}{\omega(r)}=\dfrac{d}{dr}\ln\left(\omega(r)\right).
\end{equation}
In particular, in \cite{MP4} we introduce, for any given warping function $\omega(r)$, the \emph{isoperimetric quotient $q_\omega(r)$} for the corresponding $\omega$-model space $M_\omega^n$ as follows:
\begin{equation}\label{eq:Defq}
q_\omega(r)=\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)}=\dfrac{\int_0^r\omega^{n-1}(t)\,dt}{\omega^{n-1}(r)}.
\end{equation}
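As a quick illustration, (which we shall not need in the sequel), for the Euclidean model $\omega_0(r)=r$ one gets
$$q_{\omega_0}(r)=\dfrac{\int_0^r t^{n-1}\,dt}{r^{n-1}}=\dfrac{r}{n},$$
\noindent while for the hyperbolic model $\omega_b(r)=\frac{1}{\sqrt{-b}}\sinh\left(\sqrt{-b}\,r\right)$, $b<0$, and $n=2$ a direct computation gives $q_{\omega_b}(r)=\frac{1}{\sqrt{-b}}\tanh\left(\frac{\sqrt{-b}\,r}{2}\right)$; both quotients are non-decreasing in $r$.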
On the other hand, using equation (\ref{lapeq}), the Laplace operator in $M^n_w$ is given by
\begin{equation}\label{laplacemodel}
\Delta^{M^n_w} = \frac{\partial^2 }{\partial r^2}+(n-1)\frac{w'(r)}{w(r)}\frac{\partial}{\partial r}+\Delta^{S^{\omega}_r(o_\omega)}.
\end{equation}
Then, we have the following result concerning the mean exit time function of the geodesic $R$-ball ${\rm B}_R^\omega(o_\omega)\subseteq M_\omega^n$, (see \cite{MP4}):
\begin{proposition}\label{prop:MeanTorEqualities}
Let $E_R^\omega$ be the solution of the Poisson problem \eqref{eq:moments1}, defined on the geodesic $R$-ball ${\rm B}_R^\omega(o_\omega)$ in the model space $M_\omega^n$.
Then $E_R^\omega$ is a non-increasing radial function given by
\begin{equation}\label{eq:EwR}
E_R^\omega(x)=E_R^\omega(r_{o_\omega}(x))=\int_{r_{o_\omega}(x)}^R q_\omega(t)\,dt\, ,
\end{equation}
\noindent where $r\equiv r_{o_\omega}(x)=dist_{M^n_{\omega}}(o_\omega, x)$ denotes the distance to the center point. Hence, $E_R^\omega$ attains its maximum at $r=0$, with $(E_R^\omega)'(0)=0$ and $(E_R^\omega)'(r) < 0\,\,\,\forall r \in ]0,R]$.
\end{proposition}
\begin{proof}
Using the expression of the Laplace operator given in equation (\ref{laplacemodel}), it is straightforward to check that $E_R(r)=\int_r^R q_\omega(t)\,dt$ satisfies the
equation
$$\Delta^{M^n_w} E_R=-1$$
\noindent with boundary condition $E_R(R)=0$.
\end{proof}
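As a simple worked example, in the Euclidean model $\omega_0(r)=r$ one has $q_{\omega_0}(t)=t/n$, so formula (\ref{eq:EwR}) gives
$$E_R^{\omega_0}(x)=\int_{r(x)}^R \frac{t}{n}\,dt=\frac{R^2-r^2(x)}{2n},$$
\noindent which is the classical expression for the solution of (\ref{eq:moments1}) in a Euclidean $R$-ball.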
\subsection{Balance conditions}\
We present now a purely intrinsic condition on the general model spaces $M_\omega^n$, (see \cite{MP4}), which will play a key role in the last section of the paper:
\begin{definition}
A given $\omega$-model space $M_\omega^n$ is \emph{balanced from above} if we have the inequality
\begin{equation}\label{eq:balancedabove1}
q_\omega(r)\eta_\omega(r)\leq\dfrac{1}{n-1},\quad\text{for all } r\geq 0.
\end{equation}
\end{definition}
In \cite{MP4} the following characterization of the balance condition defined above was proved:
\begin{proposition}\label{prop:balancedequivalences}
Let us consider the $\omega$-model space $M_\omega^n$. Then $M_\omega^n$ is \emph{balanced from above} if and only if the following equivalent conditions hold:
\begin{align}
\dfrac{d}{dr}\left(q_\omega(r)\right) & \geq 0,\label{eq:balancedabovediff}\\
\omega^n(r) & \geq (n-1)\,\omega'(r)\int_0^r\omega^{n-1}(t)\,dt.\label{eq:balancedabove2}
\end{align}
\end{proposition}
Also in \cite{MP4}, several examples of balanced from above $\omega$-model spaces $M_\omega^n$ were listed. Here we enumerate some of them:
\begin{examples}\
\begin{enumerate}
\item Every $\omega_b$-model space $M^n_{\omega_b}=[0,R[\times_{\omega_b}\mathbb{S}^{n-1}_1$ of constant positive sectional curvature $b>0$ and $R <\frac{\pi}{2\sqrt{b}}$ is balanced from above. In fact, when $b >0$ and for $r>0$ we have that (\ref{eq:balancedabove2}) is a strict inequality, (see Lemma 2.4 in \cite{MP3}).
\item On the other hand, the $\omega_b$-model spaces $M^n_{\omega_b}$ of constant non-positive sectional curvature $b\leq 0$ are balanced from above too. In fact, when $b <0$, we have that inequality (\ref{eq:balancedabove2}) is equivalent to inequality
$$\int_0^r \sinh^{n-1}(\sqrt{-b}\, t)\,dt \leq \frac{\sinh^n(\sqrt{-b} r)}{\sqrt{-b}(n-1)\cosh(\sqrt{-b}r)}$$
which holds for all $r>0$ because $\tanh^2(\sqrt{-b} r) \leq 1\,\,\forall r>0$. The case $b=0$ is trivial.
\item Let us consider the $\omega$-model space $M^n_\omega$, with $\omega(t):=t+t^3$, $t\in [0, \infty)$. This model space is balanced from above, (see the computation sketched right after this list for the case $n=2$).
\end{enumerate}
\end{examples}
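\begin{remark}
As an illustration of how condition (\ref{eq:balancedabove2}) is verified in the third example, we display the computation in the lowest dimension $n=2$: for $\omega(t)=t+t^3$,
\begin{equation*}
\omega^2(r)-\omega'(r)\int_0^r\omega(t)\,dt=(r+r^3)^2-(1+3r^2)\left(\frac{r^2}{2}+\frac{r^4}{4}\right)=\frac{r^2}{2}+\frac{r^4}{4}+\frac{r^6}{4}\geq 0
\end{equation*}
for all $r\geq 0$, so $M^2_\omega$ is balanced from above.
\end{remark}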
\subsection{Symmetrization into Model Spaces}\label{subsec:Symm}\
As in \cite{MP4} we use the concept of \emph{Schwarz-symmetrization} as considered in e.g., \cite{Ba}, \cite{Po}, or, more recently, in \cite{Mc} and \cite{Cha2}. For the sake of completeness, we review and show some facts about this instrumental concept, in the context of Riemannian manifolds.
\begin{definition}\label{def:Symm}
Let $(M^n, g)$ be a complete Riemannian manifold. Suppose that ${\rm D} \subseteq M$ is a precompact open connected domain in $M^n$. Let $(M_\omega^n, g_\omega)$ be a rotationally symmetric model space, with pole $o_\omega \in M_\omega^n$. Then the \emph{$\omega$-model space symmetrization of \, ${\rm D}$} is denoted by ${\rm D}^{*_{\omega}}$ and is defined to be the unique $L(D)$-ball in $M_\omega^n$, centered at $o_\omega$
$$D^{*_{\omega}}:=B^\omega_{L(D)}(o_\omega)$$
\noindentdent satisfying
\begin{equation*}
\operatorname{Vol}(D)=\operatorname{Vol}\left(B^\omega_{L(D)}(o_\omega)\right).
\end{equation*}
In the particular case that $D$ is a geodesic $R$-ball ${\rm B}_R^M(o)$ in $M$ centered at $o \in M$, then the radius $L({\rm B}_R^M(o))$ is some increasing function $s(R)=L({\rm B}_R^M(o))$ which depends on the geometry of $M$, so we can write
\begin{equation*}
{\rm B}_R^M(o)^{*_{\omega}}=B^\omega_{s(R)}(o_\omega)
\end{equation*}
\noindentdent and this symmetrization $B^\omega_{s(R)}(o_\omega)$ satisfies
\begin{equation}\label{symball}
\operatorname{Vol}({\rm B}_R^M(o))=\operatorname{Vol}(B^\omega_{s(R)}(o_\omega)).
\end{equation}
\end{definition}
\begin{remark}
When it is clear from the context, we write ${\rm D}^{*}$ instead of ${\rm D}^{*_{\omega}}$.
Along the rest of the paper, and if there is no confusion, we shall omit the centers $o \in M$ and $o_\omega \in M^n_\omega$ when we refer to the balls ${\rm B}_{r}^M(o)$ and ${\rm B}_{r}^\omega(o_\omega)$ and to the spheres ${\rm S}_{r}^M(o)$ and ${\rm S}_{r}^\omega(o_\omega)$.
\end{remark}
Given a smooth non-negative function $f: {\rm D} \rightarrow \mathbb{R}^+$ on ${\rm D}$, we are going to introduce the notion of its {\em $\omega$-symmetrization} $f^{*_{\omega}}: {\rm D}^{*_{\omega}}\rightarrow \mathbb{R}^+$. But first, we will state some useful facts.
\begin{definition}\label{def:simfuncprevi}
Let $(M^n, g)$ be a complete Riemannian manifold, $D \subseteq M$ a precompact domain in $M$ and $f:{\rm D}\subseteq M\longrightarrow\mathbb{R}^+$ a smooth non-negative function on ${\rm D}$. For $t\geq 0$ we define the sets
\begin{equation*}
{\rm D}(t):=\{x\in{\rm D}\,|\,f(x)\geq t\}\subseteq M
\end{equation*}
\noindentdent and
\begin{equation*}
\Gamma(t):=\{x\in{\rm D}\,|\,f(x)=t\}.
\end{equation*}
\end{definition}
\begin{remark}\
\begin{enumerate}
\item The set $D(t)$ is precompact for all $t\geq 0$ and moreover, $\partial{\rm D}(t)=\Gamma(t) \subseteq D(t)$.
\item Note too that $D(0)=D$ and that if $t_1\leq t_2$ then ${\rm D}(t_2)\subseteq{\rm D}(t_1)$.
\item If $T:= \sup_{x \in D} f(x)$, then $D(t)= \emptyset \,\,\forall t > T$, and hence, $\operatorname{Vol}(D(t))=0\,\,\forall t \geq T$.
\item Therefore, we have a family of nested sets $\{D(t)\}_{t \in [0,T]}$ that covers $D$.
\end{enumerate}
\end{remark}
Now, we define the {\em symmetrization} of a function:
\begin{definition}\label{def:simfunc}
Let $(M^n, g)$ be a complete Riemannian manifold, $D \subseteq M$ a precompact domain in $M$ and $f:{\rm D}\subseteq M\longrightarrow\mathbb{R}^+$ a smooth non-negative function on ${\rm D}$. Let $(M_\omega^n, g_\omega)$ be a rotationally symmetric model space. Then the \emph{$\omega$-symmetrization of $f$} is the function $f^{*_{\omega}}:{\rm D}^{*_{\omega}}\longrightarrow\mathbb{R}$ defined, for all $x^* \in {\rm D}^{*_{\omega}}$, by
\begin{equation*}
f^{*_{\omega}}(x^*)=\sup\{t\geq 0\,|\, x^*\in {\rm D}(t)^{*_\omega}\}.
\end{equation*}
Note that the symmetrization $f^{*_{\omega}}$ ranges on $[0,T]$, namely, $f^{*_{\omega}}: D^{*_{\omega}} \rightarrow [0,T]$, where $T:= \sup_{x \in D} f(x)$.
\end{definition}
\begin{remark}\
\begin{enumerate}
\item When it is clear from the context, we write $f^{*}$ instead of $f^{*_{\omega}}$ and $D^*$ instead of $D^{*_{\omega}}$.
\item By Sard's theorem, if $D_f \subseteq D$ denotes the set of critical points of $f$, the set $S_f =f(D_f)\subseteq [0,T]$ of critical values of $f$ has null measure, and the set of regular values of $f$, $R_f=[0,T]\sim S_f$ is open and dense in $[0,T]$. In particular, for any $t \in R_f$, the set $\Gamma(t)=\{x\in{\rm D}\,|\,f(x)=t\}$ is a smooth embedded hypersurface in $D$ and $\Vert \nabla^M f\Vert$ does not vanish along $\Gamma(t)$.
\end{enumerate}
\end{remark}
With these observations in hand, we have the following
\begin{definition}\label{defR}
Let $(M^n, g)$ be a complete Riemannian manifold and let $(M_\omega^n, g_\omega)$ be a rotationally symmetric model space. Given the precompact domain $D \subseteq M$ and $f: D \subseteq M\longrightarrow\mathbb{R}^+$ a smooth non-negative function on $D$, let us define the function
$$\widetilde{r}: [0, T] \rightarrow [0, L(D)]$$
\noindentdent such that, for all $t \in [0, T]$, $\widetilde{r}(t)$ is defined as the radius of the symmetrization
$$D(t)^*={\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$$
\noindentdent satisfying
\begin{equation*}
\operatorname{Vol}\left({\rm D}(t)\right)=\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right).
\end{equation*}
\end{definition}
\begin{remark}
Note that, as $D(0)=D$, then $D(0)^*=D^*$, i.e., $\widetilde{r}(0)=L(D)$, the radius defined in Definition \ref{def:Symm}, and $D^*={\rm B}_{\widetilde{r}(0)}^\omega(o_\omega)$. On the other hand, as $\operatorname{Vol}(D(t))=0\,\,\forall t \geq T$, then $\widetilde{r}(t)=0\,\,\forall t \geq T$.
\end{remark}
Concerning this last definition, we have the following result, which will play an important r\^ole in the proof of Proposition \ref{prop:ineqpsiE}:
\begin{lemma}\label{prop:Rfuncsymm}
The function $\widetilde{r}: [0, T] \rightarrow [0, L(D)]$ is non-increasing. In particular, for all regular values $t \in R_f$, the function $\widetilde{r}\vert_{R_f}: R_f \subseteq [0, T] \rightarrow [0, L(D)]$ satisfies
\begin{equation*}
\widetilde{r}'(t)=-\dfrac{\int_{\partial D(t)}\norm{\nabla^M f}^{-1}\,d\mu_t}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)} < 0
\end{equation*}
\noindentdent so it is strictly decreasing in $R_f$, and hence, injective (and bijective onto its image).
\end{lemma}
\begin{remark}
Note that when $R_f=[0,T]$, then $\widetilde{r}: [0, T] \rightarrow [0, L(D)]$ is bijective.
\end{remark}
\begin{proof}
When $t_1\leq t_2$, then ${\rm D}(t_2)\subseteq{\rm D}(t_1)$ and hence $\operatorname{Vol}({\rm D}(t_2)) \leq \operatorname{Vol}({\rm D}(t_1))$, so $\operatorname{Vol}({\rm B}_{\widetilde{r}(t_2)}^\omega(o_\omega)) \leq \operatorname{Vol}({\rm B}_{\widetilde{r}(t_1)}^\omega(o_\omega))$ and hence $\widetilde{r}(t_2) \leq \widetilde{r}(t_1)$.
On the other hand, given $t \in R_f$, let us denote as:
\begin{equation*}
{\rm V}(t) =\operatorname{Vol}\left({\rm D}(t)\right)=\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right).
\end{equation*}
Then, \begin{equation*}
{\rm V}'(t)=\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)\widetilde{r}'(t)
\end{equation*}
\noindentdent and as $\partial{\rm D}(t)=\Gamma(t)=\{x\in{\rm D}\,|\,f(x)=t\}$, by the co-area formula (see \cite{Cha1}, \cite{Sa}), and as $t \in R_f$, we have
\begin{equation*}
\widetilde{r}'(t)=-\dfrac{\int_{\partial D(t)}\norm{\nabla^M f}^{-1}\,d\mu_t}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)} < 0
\end{equation*}
\noindentdent for all $t \in R_f$. Therefore, $\widetilde{r}\vert_{R_f}$ is strictly decreasing.
\end{proof}
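\begin{remark}
As an elementary sanity check of Lemma \ref{prop:Rfuncsymm} (a Euclidean computation included only for concreteness), let $M=M_\omega^n$ be the Euclidean model, $\omega(t)=t$, ${\rm D}={\rm B}_R^M(o)$ and $f(x)=R^2-r_o^2(x)$, so that $T=R^2$ and ${\rm D}(t)=\{x\in{\rm B}_R^M(o)\,|\,r_o(x)\leq\sqrt{R^2-t}\}$. Then $\widetilde{r}(t)=\sqrt{R^2-t}$ and, since $\norm{\nabla^M f}=2r_o=2\sqrt{R^2-t}$ on $\partial{\rm D}(t)={\rm S}_{\sqrt{R^2-t}}^M(o)$,
\begin{equation*}
-\dfrac{\int_{\partial D(t)}\norm{\nabla^M f}^{-1}\,d\mu_t}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}=-\dfrac{1}{2\sqrt{R^2-t}}=\widetilde{r}'(t)\quad\text{for all }t\in ]0,R^2[,
\end{equation*}
in agreement with the lemma. In this case, moreover, $f^{*_{\omega}}(x^*)=R^2-r_{o_\omega}^2(x^*)$, i.e., $f^{*_{\omega}}$ coincides with $f$ under the identification of the centers.
\end{remark}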
To finish this subsection, we are going to prove in Theorem \ref{prop:Propietatssymmobjects} that, given $f:{\rm D}\subseteq M\longrightarrow\mathbb{R}^+$ a non-negative function defined on the precompact domain $D$, the symmetrized function $f^*:D^{*_{\omega}}\longrightarrow\mathbb{R}$ is a radial function, and that $f$ and $f^*$ are both equimeasurable.
\begin{theorem}\label{prop:Propietatssymmobjects}
Let $(M^n, g)$ be a complete Riemannian manifold, $D \subseteq M$ a precompact domain in $M$ and $f:{\rm D}\subseteq M\longrightarrow\mathbb{R}^+$ a non-negative and smooth function on ${\rm D}$. Let $(M_\omega^n, g_\omega)$ be a rotationally symmetric model space whose center $o_\omega$ is a pole. The symmetrized objects $f^*$ and ${\rm D}^*$ satisfy the following properties:
\begin{enumerate}
\item \label{prop:Propietatssymmobjects1} The function $f^*$ depends only on the geodesic distance to the center $o_\omega$ of the ball ${\rm D}^*$ in $M_\omega^n$ and is non-increasing.
\item \label{prop:Propietatssymmobjects2} The functions $f$ and $f^*$ are equimeasurable in the sense that
\begin{equation}\label{eq:eqVol}
\operatorname{Vol}_M\left(\{x\in{\rm D}\,|\,f(x)\geq t\}\right)=\operatorname{Vol}_{M_\omega^n}\left(\{x^*\in{\rm D}^*\,|\,f^*(x^*)\geq t\}\right)
\end{equation}
for all $t\geq 0$.
\end{enumerate}
\end{theorem}
\begin{proof}
We are going to prove the first statement. Let us consider $x_1^*,x_2^*\in{\rm D}^*={\rm B}_{\widetilde{r}(0)}^\omega(o_\omega)$ such that $r_{o_\omega}(x_1^*)=r_{o_\omega}(x_2^*)$. Then it is evident that $x_1^* \in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$ if and only if $x_2^* \in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$ for all $t \in [0,T]$. Hence,
\begin{equation*}
f^*(x_1^*)=\sup\left\{t\geq 0\,|\,x_1^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right\}
= \sup\left\{t\geq 0\,|\,x_2^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right\}=f^*(x_2^*)
\end{equation*}
\noindentdent which means that $f^*$ is a radial function. Namely, $f^*$ depends only on the geodesic distance to the center $o_\omega$, $f^*(x^*)=f^*(r_{o_\omega}(x^*))$.
To see that $f^*$ is non-increasing, let us consider $x_1^*,x_2^*\in{\rm D}^*$ such that $r_{o_\omega}(x_1^*)\leq r_{o_\omega}(x_2^*)$. We are going to see that $t_1:=f^*(x_1^*) \geq t_2:=f^*(x_2^*)$.
As $$f^*(x_2^*)= \sup\left\{t\geq 0\,|\,x_2^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right\}=\sup\left\{t\geq 0\,|\,r_{o_\omega}(x_2^*) \leq \widetilde{r}(t)\right\}=t_2$$
\noindentdent then, if $t \leq t_2$, we have that $x_2^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$, so $r_{o_\omega}(x_2^*) \leq \widetilde{r}(t)\,\,\forall t \leq t_2$. In particular, $r_{o_\omega}(x_1^*)\leq r_{o_\omega}(x_2^*) \leq \widetilde{r}(t_2)$, so $x_1^* \in {\rm B}_{\widetilde{r}(t_2)}^\omega(o_\omega)$ and therefore, $t_1=f^*(x_1^*)=\sup\left\{t\geq 0\,|\,x_1^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right\} \geq t_2=f^*(x_2^*)$.
To prove the second statement, note that, for all $t>0$, we have, by Definitions \ref{def:simfunc} and \ref{defR},
\begin{equation*}
\begin{split}
{\rm D}(t)^* & ={\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)=\left\{x^*\in{\rm D}^*\,|\,f^*(x^*)\geq t\right\}.
\end{split}
\end{equation*}
\noindentdent In fact, if $x^* \in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$, then $f^*(x^*)=\sup\left\{t\geq 0\,|\,x^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right\} \geq t$ and, conversely, if $f^*(x^*)=\sup\left\{t\geq 0\,|\,x^*\in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right\} \geq t$, then $x^* \in {\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$.
Therefore, since ${\rm D}(t)=\left\{x\in{\rm D}\,|\,f(x)\geq t\right\}$, we obtain that
\begin{equation*}
\operatorname{Vol}\left(\left\{x\in{\rm D}\,|\,f(x)\geq t\right\}\right)=\operatorname{Vol}\left({\rm D}(t)\right)=\operatorname{Vol}\left({\rm D}(t)^*\right)=\operatorname{Vol}\left(\left\{x^*\in{\rm D}^*\,|\,f^*(x^*)\geq t\right\}\right).
\end{equation*}
\end{proof}
\section{Mean Exit Time Comparison}\label{sec:MeanComp}\
We start this section with the notion of {\em transplanted mean exit time}.
\begin{definition}\label{transplanted}
Let $(M,g)$ be a complete Riemannian manifold and $(M_\omega^n,g_{\omega})$ a model space with center $o_\omega$. Given $o \in M$, let us consider a geodesic $R$-ball $B^M_R(o)$, with $0 <R< inj(o)$, and the geodesic $R$-ball $B^{\omega}_R(o_\omega)$ in $M_\omega^n$ centered at $o_\omega$. Let $E_R^M$ and $E_R^\omega$ be the mean exit time functions defined on $B^M_R(o)$ and $B^{\omega}_R(o_\omega)$, respectively.
Now, we {\em transplant} the radial mean exit time function of $M_\omega^n$ to $M$ by defining the function $\mathbb{E}_R^\omega:\, {\rm B}_R^M \rightarrow \mathbb{R}$ as $\mathbb{E}_R^\omega(x):=E_R^\omega\left(r_o(x)\right)\,\,\forall x \in {\rm B}_R^M$ where $r_o$ is the distance function to $o$, the center of the ball ${\rm B}_R^M(o)$.
The function $\mathbb{E}_R^\omega$ is a radial function called the {\em transplanted mean exit time} function in ${\rm B}_R^M$.
\end{definition}
We can compare the transplanted mean exit time function $\mathbb{E}_R^\omega$ defined in a geodesic ball $B^M_R$ with the mean exit time function $E^M_R$ corresponding with this ball. Remember that, along the text, we can omit the centers $o \in M$ and $o_\omega \in M^n_\omega$ when we refer to the balls ${\rm B}_{r}^M(o)$ and ${\rm B}_{r}^\omega(o_\omega)$ and the spheres ${\rm S}_{r}^M(o)$ and ${\rm S}_{r}^\omega(o_\omega)$.
Our first result in this regard is the following:
\begin{proposition}\label{prop3.2}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a geodesic ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Then the following assertions are equivalent:
\begin{enumerate}
\item $E^M_R= \mathbb{E}_R^\omega \,\,\text{on}\,\,\,B^M_R(o)$.
\item $H_{S^\omega_r(o_\omega)}=H_{S^M_r(o)}\,\,\,\forall r \in ]0,R]$.
\end{enumerate}
\noindentdent where ${\rm H}_{{\rm S}_r^M(o)}$ denotes the mean curvature of the geodesic $r$-sphere ${\rm S}_r^M(o) \subseteq M$ and ${\rm H}_{{\rm S}_r^\omega(o_\omega)}$ is the corresponding mean curvature of the geodesic $r$-sphere ${\rm S}_r^\omega(o_\omega) \subseteq M_\omega^n$.
\end{proposition}
\begin{proof}
Using polar coordinates $(r,\overline{\theta})$ in $M\sim \left(Cut(o)\cup\{o\}\right)$, equations (\ref{lapeq}) and (\ref{laplacemodel}) and applying the Maximum Principle, the equality $E^M_R= \mathbb{E}_R^\omega \,\,\text{on}\,\,\,B^M_R(o)$ is equivalent to the equality
$$\operatorname{D}elta^M\mathbb{E}_R^\omega(r, \overline{\theta})=\operatorname{D}elta^M E^M_R(r, \overline{\theta})=-1=\operatorname{D}elta^{M^n_\omega}E^\omega_R\,\,\,\,\forall (r, \overline{\theta})$$
\noindentdent which, in turn, applying Proposition \ref{lapmean} and equations (\ref{eq:WarpMean}) and (\ref{laplacemodel}), is equivalent to the equality, for all $(r,\overline{\theta}) \in [0,R]\times \mathbb{S}^{n-1}_1$:
$$\mathbb{E}_R^\omega{''}(r)+(n-1) H_{S^M_r(o)}\,\mathbb{E}_R^\omega{'}(r)=E_R^\omega{''}(r)+(n-1)H_{S^\omega_r(o_\omega)}\,E_R^\omega{'}(r)$$
and, as $\mathbb{E}_R^\omega{''}(r)= E_R^\omega{''}(r)$ and $\mathbb{E}_R^\omega{'}(r)=E_R^\omega{'}(r)<0$ for all $r \in ]0,R]$, this last equality is equivalent to the equality
$$H_{S^M_r(o)}=H_{S^\omega_r(o_\omega)}\,\,\forall r \in ]0,R].$$
\end{proof}
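\begin{remark}
As a particular case (included only as an illustration), suppose that $M$ is itself an $\widetilde{\omega}$-model space $M^n_{\widetilde{\omega}}$ and that $o$ is its center, so that ${\rm H}_{{\rm S}_r^M(o)}=\widetilde{\omega}'(r)/\widetilde{\omega}(r)$ (compare (\ref{laplacemodel}) with Proposition \ref{lapmean}). Then assertion (2) reads $\widetilde{\omega}'/\widetilde{\omega}=\omega'/\omega$ on $]0,R]$, hence $\widetilde{\omega}=c\,\omega$ on $]0,R]$ for some constant $c>0$ and, with the usual normalization of the warping functions ($\widetilde{\omega}(0)=\omega(0)=0$, $\widetilde{\omega}'(0)=\omega'(0)=1$), $\widetilde{\omega}=\omega$ on $[0,R]$. In other words, within model spaces the mean exit time function of a geodesic $R$-ball centered at the center determines the warping function up to the radius $R$.
\end{remark}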
Now, we can state the following comparison theorem:
\begin{theorem}\label{teo:MeanComp}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that
\begin{equation}\label{eq:meancurvatureconditions3}
{\rm H}_{{\rm S}_r^\omega(o_\omega)}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R
\end{equation}
\noindentdent where ${\rm H}_{{\rm S}_r^M}$ denotes the mean curvature of the metric $r$-sphere ${\rm S}_r^M(o) \subseteq M$ and ${\rm H}_{{\rm S}_r^\omega}$ is the corresponding mean curvature of the metric $r$-sphere ${\rm S}_r^\omega(o_\omega) \subseteq M_\omega^n$.
Then, we have the inequality
\begin{equation}\label{eq:MeanComp}
\mathbb{E}_R^\omega\geq\,(\leq)\, E_R^M\quad\text{in}\quad{\rm B}_R^M (o)\, ,
\end{equation}
\noindentdent where $\mathbb{E}_R^\omega(x):=E_R^\omega\left(r_o(x)\right)$ is the {\em transplanted mean exit time} function in ${\rm B}_R^M(o)$.
Moreover, if there exists $p \in B^M_R(o)$ such that $\mathbb{E}_R^\omega(p)=E^M_R(p)$, then
$$\mathbb{E}_R^\omega=E_R^M\,\quad\text{in}\,\,\,{\rm B}_R^M(o)$$
\noindentdent and hence,
$$H_{S^\omega_r(o_\omega)}=H_{S^M_r(o)}\,\,\,\forall r \in ]0,R].$$
\end{theorem}
\begin{proof}
To prove the first assertion, let us consider polar coordinates $(r, \overline{\theta}) \in [0, inj(o))\times \mathbb{S}^{n-1}_1$ centered at the center $o\in M$ of the geodesic ball $B^M_R$, with $R < inj(o)$, (as before and along the rest of the paper, we shall omit the center point $o$ of the ball if there is no confusion). By definition of $\mathbb{E}_R^\omega$, and using equation (\ref{eq:EwR}), we have that this radial function satisfies
\begin{equation}\label{eq:MeanTransDecreixent}
\mathbb{E}_R^\omega{'}(r)=E_R^\omega{'}(r)< 0, \quad\text{for all } r\in ]0,R],
\end{equation}
\noindentdent and, since $\operatorname{D}elta^{M_\omega^n}E_R^\omega=-1\,\text{on} \, B^\omega_R$,
\begin{equation*}
\mathbb{E}_R^\omega{''}(r)=E_R^\omega{''}(r)=-1-(n-1)\,\dfrac{\omega'(r)}{\omega(r)}\,E_R^\omega{'}(r).
\end{equation*}
\noindentdent Therefore, using equation (\ref{lapeq}) and applying Proposition \ref{lapmean} and equations (\ref{eq:WarpMean}) and (\ref{laplacemodel}), we have, for all $(r,\overline{\theta}) \in ]0,R]\times \mathbb{S}^{n-1}_1$:
\begin{equation}\label{eq:MeanTransLapSubstituida}
\operatorname{D}elta^M\mathbb{E}_R^\omega(r,\overline{\theta})=-1+(n-1)\left({\rm H}_{{\rm S}_r^M}-{\rm H}_{{\rm S}_r^\omega}\right)E_R^\omega{'}(r)\, .
\end{equation}
Then, from equations \eqref{eq:MeanTransLapSubstituida} and \eqref{eq:MeanTransDecreixent}, and assuming inequality ${\rm H}_{{\rm S}_r^\omega}\,\leq\, {\rm H}_{{\rm S}_r^M}\,\,\text{for all}\, r>0$ we obtain that
\begin{equation}\label{eq:MeaniMeanTransDesigLap}
\operatorname{D}elta^M\mathbb{E}_R^\omega(r,\overline{\theta})\leq -1=\operatorname{D}elta^M E_R^M(r,\overline{\theta}),\quad\text{for all }(r,\overline{\theta})\in]0,R]\,\times\,\mathbb{S}^{n-1}_1\, .
\end{equation}
Thus
\begin{equation*}
\operatorname{D}elta^M\left(E_R^M-\mathbb{E}_R^\omega\right)(r,\overline{\theta})\geq 0\,\,\text{on}\,\, B^M_R
\end{equation*}
\noindentdent and since $\left(E_R^M-\mathbb{E}_R^\omega\right)(R)=0$ we have, applying the strong maximum principle
\begin{equation*}
\mathbb{E}_R^\omega \geq E_R^M\,\,\text{ on}\,\, B^M_R
\end{equation*}
\noindentdent as we wanted to prove.
We obtain the opposite inequalities with the same arguments, assuming that ${\rm H}_{{\rm S}_r^\omega}\geq\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad r>0$.
We are going to prove the second assertion assuming that
$${\rm H}_{{\rm S}_r^\omega(o_\omega)}\leq\, {\rm H}_{{\rm S}_r^M(o)}\quad\text{for all}\quad 0 < r \leq R\, .$$
\noindentdent For that, let us suppose that there exists $p \in B^M_R$ such that $\mathbb{E}_R^\omega(p)=E^M_R(p)$. Therefore, we have that $\operatorname{D}elta^M\left(E_R^M-\mathbb{E}_R^\omega\right) \geq 0$ on $B^M_R$ and that $E^M_R - \mathbb{E}_R^\omega \leq 0=(E_R^M-\mathbb{E}_R^\omega)(p)$ on $B^M_R$. Hence, $E_R^M-\mathbb{E}_R^\omega$ attains its maximum in $B^M_R$. Applying the strong maximum principle, the difference function $E_R^M-\mathbb{E}_R^\omega$ is constant, say equal to $C$, on $B^M_R$ and, by continuity, as $E_R^M-\mathbb{E}_R^\omega =0$ on $\partial B^M_R=S^M_R$, then $C=0$.
\end{proof}
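\begin{remark}
As an illustration, take as model the Euclidean space, $\omega(t)=t$, for which ${\rm H}_{{\rm S}_r^\omega}=\omega'(r)/\omega(r)=1/r$ (compare (\ref{laplacemodel}) with Proposition \ref{lapmean}). Theorem \ref{teo:MeanComp} then states that if every geodesic sphere ${\rm S}_r^M(o)$, $0<r\leq R$, has mean curvature at least $1/r$, then
\begin{equation*}
E_R^M(x)\leq\mathbb{E}_R^\omega(x)=\frac{R^2-r_o^2(x)}{2n}\quad\text{for all }x\in{\rm B}_R^M(o),
\end{equation*}
namely, the mean exit time is bounded above by the classical Euclidean one; the reverse bound holds when the mean curvatures are at most $1/r$.
\end{remark}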
\begin{corollary}\label{cor:MeanComp}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that
\begin{equation}\label{eq:meancurvatureconditions4}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
\noindentdent Then we have the isoperimetric inequalities
\begin{equation} \label{cor:MeanComp1}
\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)}\geq\,(\leq)\,\dfrac{\operatorname{Vol}\left({\rm B}_r^M(o)\right)}{\operatorname{Vol}\left({\rm S}_r^M(o)\right)}\quad\text{for all}\quad 0<r \leq R.
\end{equation}
\noindentdent Moreover, equality in inequalities (\ref{cor:MeanComp1}) for some radius $r_0 \in ]0,R]$ implies that $$H_{S^\omega_r(o_\omega)}=H_{S^M_r(o)}\,\,\,\forall r \in ]0,r_0].$$
\noindentdent As a consequence of inequalities (\ref{cor:MeanComp1}), for all $0 < r \leq R$, we have
\begin{equation}\label{ineqVolumes}
\begin{aligned}
\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)\leq\,&(\geq)\,\operatorname{Vol}\left({\rm B}_r^M(o)\right),\\
\operatorname{Vol}(S^\omega_r(o_\omega)) \leq\,&(\geq)\, \operatorname{Vol}(S^M_r(o)).
\end{aligned}
\end{equation}
\noindentdent Finally, equality
$$\operatorname{Vol}\left({\rm B}_{r_0}^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm B}_{r_0}^M(o)\right)$$
\noindentdent for some radius $r_0 \in ]0,R]$ implies that $$H_{S^\omega_r(o_\omega)}=H_{S^M_r(o)}\,\,\,\forall r \in ]0,r_0].$$
\end{corollary}
\begin{proof}
Let us fix one radius $r \in ]0,R]$. The proof follows the lines of the proof of Theorem 1.1 and Corollary 1.2 in \cite{Pa2}, adapting it to this intrinsic context and using the new hypotheses.
First, let us assume that ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}$, for all $ 0<r \leq R$. If we fix $r \in ]0,R]$, then we have, in particular, that ${\rm H}_{{\rm S}_s^\omega}\leq {\rm H}_{{\rm S}_s^M}$, for all $ 0<s \leq r$. We can argue as in the proof of Theorem \ref{teo:MeanComp} (see inequality (\ref{eq:MeaniMeanTransDesigLap}), with $R$ replaced by $r$) to obtain
\begin{equation*}
\operatorname{D}elta^M\mathbb{E}_{r}^\omega\leq\,\operatorname{D}elta^M E_{r}^M=-1\,\,\text{on}\,\, B^M_{r}.
\end{equation*}
\noindentdent Therefore, since $\norm{\nabla^M r}=1$, and using the \emph{Divergence Theorem}, we have
\begin{equation}\label{eqeq1}
\begin{split}
\operatorname{Vol}\left({\rm B}_{r}^M\right) &\leq \int_{{\rm B}_{r}^M}-\operatorname{D}elta^M\mathbb{E}_{r}^\omega\,d\widetilde{\sigma}
= -\int_{{\rm B}_{r}^M}\operatorname{D}iv\left(\nabla^M\mathbb{E}_{r}^\omega\right)\,d\widetilde{\sigma}\\&=-\int_{{\rm S}_{r}^M}\ep{\nabla^M\mathbb{E}_{r}^\omega,\nabla^M r}\,d\sigma
=-\mathbb{E}_{r}^\omega{'}({r})\operatorname{Vol}\left({\rm S}_{r}^M\right).
\end{split}
\end{equation}
Thus, we obtain, using Proposition \ref{prop:MeanTorEqualities},
\begin{equation*}
\operatorname{Vol}\left({\rm B}_{r}^M\right)\leq -\mathbb{E}_r^\omega{'}({r})\operatorname{Vol}\left({\rm S}_{r}^M\right)=q_\omega({r})\operatorname{Vol}\left({\rm S}_{r}^M\right)=\dfrac{\operatorname{Vol}\left({\rm B}_{r}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{r}^\omega\right)}\operatorname{Vol}\left({\rm S}_{r}^M\right)
\end{equation*}
\noindentdent and therefore
\begin{equation*}
\dfrac{\operatorname{Vol}\left({\rm B}_{r}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{r}^\omega\right)}\geq \dfrac{\operatorname{Vol}\left({\rm B}_{r}^M\right)}{\operatorname{Vol}\left({\rm S}_{r}^M\right)}.
\end{equation*}
We are going to discuss the equality assertion: we are still assuming that ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M}$, for all $ 0<r \leq R$. If there exists $r_0 \in ]0,R]$ such that we have
\begin{equation*}
\dfrac{\operatorname{Vol}\left({\rm B}_{r_0}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{r_0}^\omega\right)}= \dfrac{\operatorname{Vol}\left({\rm B}_{r_0}^M\right)}{\operatorname{Vol}\left({\rm S}_{r_0}^M\right)}
\end{equation*}
\noindentdent then all the inequalities in (\ref{eqeq1}) become equalities with the radius $r_0$.
In particular,
$$\operatorname{Vol}\left({\rm B}_{r_0}^M\right) = \int_{{\rm B}_{r_0}^M}-\operatorname{D}elta^M\mathbb{E}_{r_0}^\omega\,d\widetilde{\sigma}$$
\noindentdent and hence, as $1+\operatorname{D}elta^M\mathbb{E}_{r_0}^\omega \leq 0$ on $B^M_{r_0}$, we conclude that $1+\operatorname{D}elta^M\mathbb{E}_{r_0}^\omega = 0$ on $B^M_{r_0}$ and hence, as $\operatorname{D}elta^M\mathbb{E}_{r_0}^\omega=\operatorname{D}elta^ME^M_{r_0}$ on $B^M_{r_0}$ then, applying the maximum principle, $\mathbb{E}_{r_0}^\omega=E^M_{r_0}$ on $B^M_{r_0}$ and hence, by Proposition \ref{prop3.2}, ${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0, r_0]$.
When we assume that ${\rm H}_{{\rm S}_r^\omega}\geq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0, R]$, we argue as before, inverting all the inequalities to conclude the opposite isoperimetric inequality. The equality discussion is the same, mutatis mutandis.
To prove statement (\ref{ineqVolumes}), and as in Corollary 1.2 in \cite{Pa2}, let us define, given $0 <R < inj(o)$, the function $$G: [0,R] \rightarrow \mathbb{R}$$
\noindentdent as
\begin{equation*}
G(s):=
\begin{cases}
\ln\left(\dfrac{\operatorname{Vol}\left({\rm B}_s^M\right)}{\operatorname{Vol}\left({\rm B}_s^\omega\right)}\right),\quad\text{if } s>0,\\
0,\qquad\qquad\qquad\quad\;\;\,\text{if } s=0.
\end{cases}
\end{equation*}
Then, if ${\rm H}_{{\rm S}_s^\omega}\leq {\rm H}_{{\rm S}_s^M} \,\,\forall s\in ]0, R]$, we have, applying inequality (\ref{cor:MeanComp1}), that
\begin{equation*}
G'(s) = \dfrac{\operatorname{Vol}\left({\rm S}_s^M\right)}{\operatorname{Vol}\left({\rm B}_s^M\right)}-\dfrac{\operatorname{Vol}\left({\rm S}_s^\omega\right)}{\operatorname{Vol}\left({\rm B}_s^\omega\right)}\geq\,0\,\,\forall s\in ]0,R].
\end{equation*}
Hence, $G$ is non-decreasing in $]0,R]$. The rest of the proof follows as in \cite{Pa2}, using in this case the asymptotic expansion around $s=0$ of the volume of a geodesic $s$-ball, (see Theorem 9.12 in \cite{Gray1}), to conclude, with a straightforward computation, that $\lim_{s \to 0} G(s)=0=G(0)$; hence $G$ is continuous and $G(s) \geq G(0)\,\,\forall s \in [0, R]$, so, given $s=r \in ]0,R]$, we have
\begin{equation*}
\operatorname{Vol}\left({\rm B}_r^\omega\right)\leq \operatorname{Vol}\left({\rm B}_r^M\right)\,\,\forall r \in ]0,R].
\end{equation*}
Moreover, the isoperimetric inequality (\ref{cor:MeanComp1}), together with the above inequality, implies that
$$\operatorname{Vol}(S^\omega_r) \leq \operatorname{Vol}(S^M_r) \,\,\,\forall r\leq R.$$
We are going to discuss the equality assertion: let us assume that ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0, R]$ and that there exists $r_0 \in ]0,R]$ such that $\operatorname{Vol}\left({\rm B}_{r_0}^\omega\right)= \operatorname{Vol}\left({\rm B}_{r_0}^M\right)$. Then, $G(0)=G(r_0)=0$ and, as $G$ is non-decreasing, for all $r \in [0,r_0]$, we have
$$0=G(0) \leq G(r) \leq G(r_0)=0$$
\noindentdent so $G(r)=0 \,\,\forall r \in [0,r_0]$ and therefore, $G'(r)=0 \,\,\forall r \in [0,r_0]$ which implies that ${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0, r_0]$.
When we assume that ${\rm H}_{{\rm S}_r^\omega}\geq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0, R]$, we argue as before, inverting all the inequalities to conclude that $G$ is non-increasing in $]0,R]$ and hence
\begin{equation*}
\operatorname{Vol}\left({\rm B}_r^\omega\right)\, \geq\,\operatorname{Vol}\left({\rm B}_r^M\right)\,\,\forall r \in ]0,R]
\end{equation*}
and $\operatorname{Vol}(S^\omega_r) \geq \operatorname{Vol}(S^M_r) \,\,\,\forall r\leq R$.
The equality discussion is the same as above, mutatis mutandis.
\end{proof}
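\begin{remark}
In particular, Corollary \ref{cor:MeanComp} gives a volume comparison under a hypothesis on mean curvatures only: taking as model a space form $M^n_{\omega_b}$, if ${\rm H}_{{\rm S}_r^{\omega_b}}\leq{\rm H}_{{\rm S}_r^M(o)}$ for all $0<r\leq R$ (for instance, ${\rm H}_{{\rm S}_r^M(o)}\geq 1/r$ in the Euclidean case $b=0$), then
\begin{equation*}
\operatorname{Vol}\left({\rm B}_r^M(o)\right)\geq\operatorname{Vol}\left({\rm B}_r^{\omega_b}\right)\quad\text{and}\quad\operatorname{Vol}\left({\rm S}_r^M(o)\right)\geq\operatorname{Vol}\left({\rm S}_r^{\omega_b}\right)\quad\text{for all }0<r\leq R,
\end{equation*}
with equality of the ball volumes for some radius $r_0$ forcing ${\rm H}_{{\rm S}_r^{\omega_b}}={\rm H}_{{\rm S}_r^M(o)}$ for all $r\in ]0,r_0]$.
\end{remark}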
\begin{corollary}\label{cor:MeanComp2}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that
\begin{equation}\label{eq:meancurvatureconditions5}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then, if there exists $p \in B^M_R(o)$ such that equality $\mathbb{E}^\omega_R(p)=E^M_R(p)$ holds, we have, for all $r \in ]0,R]$:
\begin{enumerate}
\item The equalities $\mathbb{E}^\omega_r=E^M_r$ on $B^M_r(o)$.
\item The isoperimetric equalities
$$\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)\,}=\,\dfrac{\operatorname{Vol}\left({\rm B}_r^M(o)\right)}{\operatorname{Vol}\left({\rm S}_r^M(o)\right)}.$$
\item The volume equalities $\operatorname{Vol}\left({\rm B}_r^\omega\right)\, =\,\operatorname{Vol}\left({\rm B}_r^M\right)$
and $\operatorname{Vol}(S^\omega_r) = \operatorname{Vol}(S^M_r) $.
\end{enumerate}
\end{corollary}
\begin{proof}
First of all, the equality assertion in Theorem \ref{teo:MeanComp} states that, as we are assuming one of the inequalities in (\ref{eq:meancurvatureconditions5}), if there exists $p \in B^M_R(o)$ such that the equality $\mathbb{E}^\omega_R(p)=E^M_R(p)$ holds, then we can conclude the equality $\mathbb{E}^\omega_R=E^M_R$ on $B^M_R(o)$ and, applying Proposition \ref{prop3.2} to this equality, we have ${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M} \,\,\forall r\in ]0, R]$. This last equality implies that, given any fixed $r \in ]0,R]$, we have the equalities ${\rm H}_{{\rm S}_s^\omega}= {\rm H}_{{\rm S}_s^M} \,\,\forall s\in ]0, r]$ and hence, by Proposition \ref{prop3.2} again, we obtain $\mathbb{E}^\omega_r=E^M_r$ on $B^M_r(o)$.
On the other hand, equality $\mathbb{E}^\omega_R=E^M_R\,\,\text{on}\,\, B^M_R(o)$ implies that $\operatorname{D}elta^M\mathbb{E}_R^\omega=-1=\operatorname{D}elta^M E^M_R$ on $B^M_R(o)$, which implies in its turn that
$$\operatorname{Vol}\left({\rm B}_{R}^M\right) = \int_{{\rm B}_{R}^M}-\operatorname{D}elta^M\mathbb{E}_{R}^\omega\,d\sigma=-\mathbb{E}_{R}^\omega{'}({R})\operatorname{Vol}\left({\rm S}_{R}^M\right)$$
\noindentdent and hence, by Proposition \ref{prop:MeanTorEqualities}
$$\dfrac{\operatorname{Vol}\left({\rm B}_R^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_R^\omega(o_\omega)\right)\,}=\,\dfrac{\operatorname{Vol}\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}.$$
Moreover, fixing $r \in ]0,R]$, we know that, as $\mathbb{E}^\omega_R=E^M_R\,\,\text{on}\,\, B^M_R(o)$, then $\mathbb{E}^\omega_r=E^M_r\,\,\text{on}\,\, B^M_r(o)$ applying Proposition \ref{prop3.2}, and this equality implies equality
$$\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)\,}=\,\dfrac{\operatorname{Vol}\left({\rm B}_r^M(o)\right)}{\operatorname{Vol}\left({\rm S}_r^M(o)\right)}$$
\noindentdent with the same argument as above.
Finally, as the equality $\mathbb{E}^\omega_R=E^M_R\,\,\text{on}\,\, B^M_R(o)$ implies that $\mathbb{E}^\omega_r=E^M_r\,\,\text{on}\,\, B^M_r(o)\,\,\forall r\in ]0,R]$, then, if we define \begin{equation*}
G(r):=
\begin{cases}
\ln\left(\dfrac{\operatorname{Vol}\left({\rm B}_r^M\right)}{\operatorname{Vol}\left({\rm B}_r^\omega\right)}\right),\quad\text{if } r\in ]0,R],\\
0,\qquad\qquad\qquad\quad\;\;\,\text{if } r=0,
\end{cases}
\end{equation*}
\noindentdent then $G'(r)=0 \,\forall r \in ]0,R]$, and hence, $G(r)=0 \,\forall r \in ]0,R]$, so $\operatorname{Vol}\left({\rm B}_r^\omega\right)\, =\,\operatorname{Vol}\left({\rm B}_r^M\right)\,\,\forall r \in ]0,R]$ and, differentiating with respect to the parameter $r$, $\operatorname{Vol}(S^\omega_r) = \operatorname{Vol}(S^M_r) \,\,\,\forall r\leq R$.
\end{proof}
To finish this section, we present the following property satisfied by the symmetrization of the transplanted mean exit time function $\mathbb{E}_R^\omega$. This result is an intrinsic version of Theorem 4.4 in \cite{HuMP1}, and it follows directly from that theorem, (see also Section 6 in \cite{HuMP1}).
\begin{theorem}\label{cor:eqschwarz}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_w)$, and let us assume that there exists $B^\omega_{s(R)}(o_\omega)$, the Schwarz symmetrization of $B^M_R$ in $M^n_\omega$. Let $\mathbb{E}_R^\omega{^*}\,:\,{\rm B}_{s(R)}^\omega\longrightarrow\mathbb{R}$ be the symmetrization of the transplanted mean exit time function $\mathbb{E}_R^\omega\,:\,{\rm B}_R^M\longrightarrow\mathbb{R}$. Then
\begin{equation}\label{eq:corschwarz}
\int_{{\rm B}_R^M}\mathbb{E}_R^\omega\,d\sigma=\int_{{\rm B}_{s(R)}^\omega}\mathbb{E}_R^\omega{^*}\,d\widetilde{\sigma}.
\end{equation}
\end{theorem}
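\begin{remark}
Identity (\ref{eq:corschwarz}) can also be seen directly from the equimeasurability of $\mathbb{E}_R^\omega$ and $\mathbb{E}_R^\omega{^*}$ (cf. Theorem \ref{prop:Propietatssymmobjects} (\ref{prop:Propietatssymmobjects2})): since $\mathbb{E}_R^\omega\geq 0$, the layer cake formula gives
\begin{equation*}
\begin{split}
\int_{{\rm B}_R^M}\mathbb{E}_R^\omega\,d\sigma & =\int_0^\infty\operatorname{Vol}\left(\left\{x\in{\rm B}_R^M\,|\,\mathbb{E}_R^\omega(x)\geq t\right\}\right)dt\\
& =\int_0^\infty\operatorname{Vol}\left(\left\{x^*\in{\rm B}_{s(R)}^\omega\,|\,\mathbb{E}_R^\omega{^*}(x^*)\geq t\right\}\right)dt=\int_{{\rm B}_{s(R)}^\omega}\mathbb{E}_R^\omega{^*}\,d\widetilde{\sigma}.
\end{split}
\end{equation*}
\end{remark}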
\section{Moment spectrum comparison}\label{sec:TorRidCom1}\
We are going to apply the mean exit time comparisons obtained in Section \ref{sec:MeanComp} to derive estimates of the moment spectrum and of the torsional rigidity of a geodesic ball in a Riemannian manifold with bounds on the mean curvature of its geodesic spheres.
\subsection{Estimates for the Poisson hierarchy and the moment spectrum of a geodesic ball}\label{sec:MomentsComp}\
We shall start by defining the so-called Poisson hierarchy of a domain in a Riemannian manifold, (see \cite{DLD}).
\begin{definition}
Let $(M^n,g)$ be a complete Riemannian manifold and let $D\subseteq M$ be a smooth precompact domain. We define the \emph{Poisson hierarchy for $D$} as the sequence $\{u_{k,D}\}_{k=1}^\infty$ of solutions of the following recurrence of boundary value problems
\begin{equation}\label{poissonk}
\begin{split}
\operatorname{D}elta^Mu_{k,D}+ku_{k-1,D} & = 0,\,\, \text{on }\,\,D,\\
u_{k,D}\lvert_{_{\partial D}} & = 0,
\end{split}
\end{equation}
\noindentdent with $u_{0,D}=1 \,\,\text{on}\,\, D$.
Let us note that $u_{1,D}=E_D^M$, i.e. the mean exit time function from $D$.
\end{definition}
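\begin{remark}
As an explicit example (a direct computation from (\ref{poissonk}), included only for concreteness), for the Euclidean ball ${\rm B}_R\subseteq\mathbb{R}^n$ the first two elements of the Poisson hierarchy are
\begin{equation*}
u_{1,{\rm B}_R}(x)=\frac{R^2-r^2}{2n},\qquad u_{2,{\rm B}_R}(x)=\frac{r^4}{4n(n+2)}-\frac{R^2r^2}{2n^2}+\frac{(n+4)R^4}{4n^2(n+2)},
\end{equation*}
where $r=r_o(x)$; one checks directly that $\operatorname{D}elta^{\mathbb{R}^n}u_{2,{\rm B}_R}=-2u_{1,{\rm B}_R}$ and that $u_{2,{\rm B}_R}$ vanishes on $\partial{\rm B}_R$.
\end{remark}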
As we did in Definition \ref{transplanted}, we transplant the Poisson hierarchy for the geodesic balls in a model space to the geodesic balls in a Riemannian manifold in the following way:
\begin{definition}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$.
Let us consider the Poisson hierarchy for $B^\omega_R(o_\omega)$, namely, the sequence $\{u_{k,R}^\omega\}_{k=1}^\infty$ which, for $k\geq 1$, are the solutions of
\begin{equation*}
\begin{split}
\operatorname{D}elta^{M_\omega^n}u_{k,R}^\omega+ku_{k-1,R}^\omega & =0,\,\,\text{on }{\rm B}_R^\omega,\\
u_{k,R}^\omega\lvert_{_{{\rm S}_R^\omega}}& = 0,
\end{split}
\end{equation*}
\noindentdent with $u^\omega_{0,R}=1 \,\,\text{on}\,\, {\rm B}_R^\omega$.
It is known that, for all $k\geq 1$, $u_{k,R}^\omega(x)=u_{k,R}^\omega\left(r_{o_\omega}(x)\right)$, i.e. $u_{k,R}^\omega$ is radial, and that $u_{k,R}^\omega{'}\leq 0$ (see Proposition 3.1 of \cite{HuMP2}).
Thus, for all $k\geq 1$, we can transplant these functions to ${\rm B}_R^M(o) \subseteq M$ by defining
$$\bar{u}_{k,R}^{\omega}: {\rm B}_R^M(o) \rightarrow \mathbb{R}$$
\noindentdent as $\bar{u}_{k,R}^{\omega}(x):=u_{k,R}^\omega\left(r_o(x)\right)\,\,\forall x \in {\rm B}_R^M(o)$, where $r_o$ is the distance function to the center of ${\rm B}_R^M(o)$.
The sequence $\{\bar{u}_{k,R}^\omega\}_{k=1}^\infty$ is the {\em transplanted Poisson hierarchy for $B^M_R(o)$}.
\end{definition}
Associated to the Poisson hierarchy of a domain $D \subseteq M$, the exit time moment spectrum of this domain is defined in the following way:
\begin{definition}
Let $D\subseteq M$ be a smooth precompact domain. We define the \emph{moment spectrum of $D$} as the sequence of integrals $\{\mathcal{A}_k(D)\}_{k=1}^{\infty}$ given by:
\begin{equation*}
\mathcal{A}_k(D):=\int_D u_{k,D}\,d\sigma\, ,
\end{equation*}
\noindentdent where $\{u_{k,D}\}_{k=1}^\infty$ is the Poisson hierarchy for $D$.
Let us note that $\mathcal{A}_1(D)$ is the torsional rigidity of $D$.
\end{definition}
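\begin{remark}
For instance, for the Euclidean ball ${\rm B}_R\subseteq\mathbb{R}^n$, integrating $u_{1,{\rm B}_R}(x)=\frac{R^2-r^2}{2n}$ in polar coordinates gives
\begin{equation*}
\mathcal{A}_1({\rm B}_R)=\int_{{\rm B}_R}\frac{R^2-r^2}{2n}\,d\sigma=\frac{R^2}{n(n+2)}\operatorname{Vol}({\rm B}_R),
\end{equation*}
the classical value of the torsional rigidity of the Euclidean $R$-ball (with the normalization of the Poisson hierarchy used here).
\end{remark}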
We have the following comparison for the Poisson hierarchy of a geodesic ball in a Riemannian manifold:
\begin{theorem}\label{teo:TowMomComp}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_\omega^n$ satisfy
\begin{equation}\label{eq:meancurvatureconditions6}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then the Poisson hierarchy for ${\rm B}_R^M(o)\subseteq M$, $\{u_{k,R}\}_{k=1}^\infty$, and the transplanted Poisson hierarchy for $B^M_R(o)$, $\{\bar{u}_{k,R}^{\omega}\}_{k=1}^\infty$ (and, for any fixed $r \in ]0, R]$, the corresponding Poisson hierarchies for $B^M_r(o)$), satisfy
\begin{enumerate}
\item \label{teo:TowMomComp1} $\bar{u}_{1,R}^{\omega}\geq\,(\leq)\,u_{1,R}$ on ${\rm B}_R^M$.
\item \label{teo:TowMomComp2} For all $k\geq 2$, $\bar{u}_{k,R}^{\omega}\geq\,(\leq)\,u_{k,R}$ on ${\rm B}_R^M$.
\item If there exists $p \in B^M_R$ and $k_0 \geq 1$ such that $\bar{u}_{k_0, R}^{\omega}(p)=u_{k_0,R}(p)$, then
$${\rm H}_{{\rm S}_r^\omega}={\rm H}_{{\rm S}_r^M} \, \forall r \in ]0,R]$$
\noindentdent and
$$\bar{u}_{k,R}^{\omega}=u_{k,R}\,\text{ in} \,\,\,B^M_R\, \,\forall \, k \geq 1.$$
\item If there exists $p \in B^M_R$ and $k_0 \geq 1$ such that $\bar{u}_{k_0, R}^{\omega}(p)=u_{k_0,R}(p)$, then
$$\bar{u}_{k,r}^{\omega}=u_{k,r}\,\text{ in} \,\,\,B^M_r\, \,\forall \, k \geq 1\,\,\,\text{and}\,\,\,\forall r \in ]0,R]$$
\noindentdent and hence,
$$\mathcal{A}_k(B^\omega_r)=\mathcal{A}_k(B^M_r)\,\,\forall r \in ]0,R]\,\,\text{and}\,\,\, k \geq 1\, .$$
\end{enumerate}
\end{theorem}
\begin{proof}
Statement \eqref{teo:TowMomComp1} is proved in Theorem \ref{teo:MeanComp}.
The proof of statement \eqref{teo:TowMomComp2} follows using induction on $k$, as it is done in \cite{HuMP2}. Indeed, assuming that ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]$ and as $\bar{u}_{k,R}^{\omega}{'}(r)\leq 0\,\,\forall r \in ]0,R]$, we have that
\begin{equation*}
\bar{u}_{k,R}^{\omega}{'}(r){\rm H}_{{\rm S}_r^\omega}\geq\bar{u}_{k,R}^{\omega}{'}(r){\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]
\end{equation*}
and then, by equations (\ref{eq:WarpMean}) and (\ref{laplacemodel}) and Proposition \ref{lapmean}, we have, for all $k\geq 2$:
\begin{equation}\label{eq:TowMomLap}
\begin{aligned}
\operatorname{D}elta^M\bar{u}_{k,R}^{\omega} & =\bar{u}_{k,R}^{\omega}{''}(r)+(n-1){\rm H}_{{\rm S}_r^M}\,\bar{u}_{k,R}^{\omega}{'}(r)\leq\bar{u}_{k,R}^{\omega}{''}(r)+(n-1){\rm H}_{{\rm S}_r^\omega}\bar{u}_{k,R}^{\omega}{'}(r)\\& =\operatorname{D}elta^{M_\omega^n}u_{k,R}^\omega=-ku_{k-1}^\omega(r)=-k\bar{u}_{k-1}^{\omega}(r)\,\,\forall r \in ]0,R].
\end{aligned}
\end{equation}
Now remember that $\bar{u}_{1,R}^{\omega}\geq u_{1,R}$ on ${\rm B}_R^M$ and let us suppose, as induction hypothesis, that
$$\bar{u}_{k,R}^{\omega}\geq u_{k,R} \,\,\text{on} \,\,{\rm B}_R^M.$$
Then, for $k+1$, using equation (\ref{eq:TowMomLap}) and the induction hypothesis, we have that
\begin{equation}\label{eq:TowMomCompLapDes}
\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega}\leq-(k+1)\bar{u}_{k,R}^{\omega}\leq-(k+1)u_{k,R}=\operatorname{D}elta^M u_{k+1,R}\,\,\text{on}\,\,\, {\rm B}_R^M.
\end{equation}
Thus, $\operatorname{D}elta^M\left(u_{k+1,R}-\bar{u}_{k+1,R}^{\omega}\right) \geq 0$ on ${\rm B}_R^M$ and, applying the Maximum Principle, we obtain that
$$\bar{u}_{k+1,R}^{\omega}\geq u_{k+1,R}\, .$$
When we assume that ${\rm H}_{{\rm S}_r^\omega}\geq\,{\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]$, the argument is exactly the same, inverting all the inequalities. All this proves \eqref{teo:TowMomComp2}.
We are going to prove assertion (3). Let us suppose that, as hypothesis, ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]$, and that there exists $p \in B^M_R$ and $k_0 \geq 1$ such that $$\bar{u}_{k_0,R}^{\omega}(p)=u_{k_0,R}(p).$$
We know that, for all $k \geq 1$, $\bar{u}_{k,R}^{\omega}\geq u_{k,R} \,\,\text{on} \,\,{\rm B}_R^M$. Then, as, on ${\rm B}^M_R(o)$, $\operatorname{D}elta^M \bar{u}_{k,R}^{\omega} \leq -k\bar{u}_{k-1,R}^{\omega}\leq -k u_{k-1,R}=\operatorname{D}elta^M u_{k,R}$ for all $k \geq 1$, we have, in particular,
$$\operatorname{D}elta^M\left(u_{k_0,R}-\bar{u}_{k_0,R}^{\omega}\right) \geq 0$$
\noindentdent on ${\rm B}_R^M(o)$.
Moreover, as $\bar{u}_{k_0,R}^{\omega}\geq u_{k_0,R}$ on ${\rm B}_R^M(o)$, then $u_{k_0,R}-\bar{u}_{k_0,R}^{\omega} \leq 0$ on ${\rm B}_R^M(o)$ and there exists $p \in B^M_R$ such that $(u_{k_0,R}-\bar{u}_{k_0,R}^{\omega})(p)=0$. Then, applying the strong maximum principle, $\bar{u}_{k_0,R}^{\omega}= u_{k_0,R}$ on ${\rm B}_R^M$, because $u_{k_0,R}-\bar{u}_{k_0,R}^{\omega}$ is constant on ${\rm B}^M_R(o)$, continuous in $\overline{{\rm B}^M_R(o)}$ and $u_{k_0,R}-\bar{u}_{k_0,R}^{\omega}=0$ on ${\rm S}^M_R(o)$.
On the other hand, as $\bar{u}_{k_0-1}^{\omega}\geq u_{k_0-1}$ on ${\rm B}_R^M$, we have, on $B^M_R(o)$:
\begin{equation}
\begin{split}
\operatorname{D}elta^M\bar{u}_{k_0,R}^{\omega} &=\operatorname{D}elta^M u_{k_0,R}=-k_0 u_{k_0-1,R}\geq\\& -k_0 \bar{u}_{k_0-1,R}^{\omega}=-k_0u_{k_0-1,R}^\omega=\operatorname{D}elta^{M_\omega^n}u_{k_0,R}^\omega
\end{split}
\end{equation}
\noindentdent so, for all $r \in ]0,R]$:
\begin{equation}
\bar{u}_{k_0,R}^{\omega}{''}(r)+(n-1){\rm H}_{{\rm S}_r^M}\,\bar{u}_{k_0,R}^{\omega}{'}(r) \geq u_{k_0,R}^{\omega}{''}(r)+(n-1){\rm H}_{{\rm S}_r^\omega}u_{k_0,R}^{\omega}{'}(r).
\end{equation}
As $\bar{u}_{k_0,R}^{\omega}{''}(r)=u_{k_0,R}^{\omega}{''}(r)$ and $\bar{u}_{k_0,R}^{\omega}{'}(r)=u_{k_0,R}^{\omega}{'}(r)$ for all $r \in ]0,R]$, we conclude that
$$ {\rm H}_{{\rm S}_r^M}\bar{u}_{k_0,R}^{\omega}{'}(r) \geq {\rm H}_{{\rm S}_r^\omega} u_{k_0,R}^{\omega}{'}(r) \,\,\forall r \in ]0,R]$$
\noindentdent and hence, as $u_{k_0,R}^{\omega}{'}(r) < 0\,\,\forall r \in ]0,R]$, then
$${\rm H}_{{\rm S}_r^\omega}\geq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R].$$
As, by hypothesis, ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]$, we have finally that
$${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R].$$
Now, to prove that $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$, we argue as follows: as we know that ${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]$, let us apply Proposition \ref{prop3.2} to obtain that $\bar{u}_{1,R}^{\omega}=u_{1,R}$ on $B^M_R(o)$, and we proceed by induction: let us suppose that $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$. Let us see that $\bar{u}_{k+1,R}^{\omega}=u_{k+1,R}$ on $B^M_R(o)$. For that, we compute
\begin{equation}
\begin{split}
\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega} & =\bar{u}_{k+1,R}^{\omega}{''}(r)+(n-1){\rm H}_{{\rm S}_r^M}\,\bar{u}_{k+1,R}^{\omega}{'}(r) =\bar{u}_{k+1,R}^{\omega}{''}(r)+(n-1){\rm H}_{{\rm S}_r^\omega}\bar{u}_{k+1,R}^{\omega}{'}(r)\\& =\operatorname{D}elta^{M_\omega^n}u_{k+1,R}^\omega=-(k+1)u_{k,R}^\omega=-(k+1)\bar{u}_{k,R}^{\omega}\\&=-(k+1)u_{k,R}=\operatorname{D}elta^M u_{k+1,R}\,\,\text{on}\,\,B^M_R(o).
\end{split}
\end{equation}
Hence $\operatorname{D}elta^M (\bar{u}_{k+1,R}^{\omega}-u_{k+1,R})=0$ on $B^M_R(o)$ and, as $\bar{u}_{k+1,R}^{\omega}-u_{k+1,R}=0$ on $S^M_R(o)$, then, applying the Maximum Principle again, we conclude that $\bar{u}_{k+1,R}^{\omega}=u_{k+1,R}$ on $B^M_R(o)$.
Finally, to prove assertion (4), let us assume that ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R]$, and that there exists $p \in B^M_R$ and $k_0 \geq 1$ such that $$\bar{u}_{k_0,R}^{\omega}(p)=u_{k_0,R}(p).$$
\noindentdent As before, we conclude that
$${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M}\,\,\forall r \in ]0,R],$$
\noindentdent and hence, fixing $r \in ]0,R]$, that
$${\rm H}_{{\rm S}_s^\omega}= {\rm H}_{{\rm S}_s^M}\,\,\forall s \in ]0,r].$$
\noindentdent Now, to prove that $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$, we argue as in the proof of (3): as we know that ${\rm H}_{{\rm S}_s^\omega}= {\rm H}_{{\rm S}_s^M}\,\,\forall s \in ]0,r]$, let us apply Proposition \ref{prop3.2} to obtain that $\bar{u}_{1,r}^{\omega}=u_{1,r}$ on $B^M_r(o)$, and we proceed by induction, as in the proof of assertion (3).
\end{proof}
As a consequence of Theorem \ref{teo:TowMomComp} we have the following result, where it is proved that, under our hypotheses, any of the averaged moments of a geodesic ball determines its first Dirichlet eigenvalue:
\begin{corollary}\label{isoptors}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_\omega^n$ satisfy
\begin{equation}\label{eq:meancurvatureconditions7}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then, for all $k\geq 1$,
\begin{equation}\label{RigIsop}
\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}\geq\,(\leq)\,\dfrac{\mathcal{A}_k\left({\rm B}_R^M\right)}{\operatorname{Vol}\left({\rm S}_R^M\right)}.
\end{equation}
Equality in any of the inequalities (\ref{RigIsop}) for some $k \geq 1$ implies that
$${\rm H}_{{\rm S}_r^\omega(o_\omega)}=\,{\rm H}_{{\rm S}_r^M(o)}\,\,\,\text{for all }0<r\leq R$$
\noindentdent and hence, we have the equalities
\begin{enumerate}
\item Equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$, and hence, equality $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $0 < r \leq R$.
\item Equalities $\operatorname{Vol}\left({\rm B}_r^\omega\right)=\operatorname{Vol}\left({\rm B}_r^M\right)$ and $\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)\,\,\,\text{for all }\, 0<r \leq R$.
\item Equalities $\mathcal{A}_k\left({\rm B}_r^\omega\right)=\mathcal{A}_k\left({\rm B}_r^M\right)$, for all $k\geq 1$ and for all $0<r \leq R$.
\item Equalities
$\lambda_1 (B^\omega_{r})=\lambda_1(B^M_{r})$ for all $0<r \leq R$.
\noindentdent Namely, {\em one} value of $\dfrac{\mathcal{A}_k\left({\rm B}_R^M(o)\right)}{\operatorname{Vol}\left({\rm S}_R^M(o)\right)}$ for some $k \geq 1$ determines the Poisson hierarchy, the volume, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the ball $B^M_r(o)$ for all $0<r \leq R$.
\end{enumerate}
\end{corollary}
\begin{proof}
In the model spaces we have that $\operatorname{D}elta^{M_{\omega}} u_{k+1,R}^{\omega}=-(k+1) u_{k,R}^{\omega}$ on the geodesic ball ${\rm B}^{M_{\omega}}_R(o_{\omega})$, so, applying the Divergence theorem in this setting, we obtain
\begin{equation*}
\mathcal{A}_k\left({\rm B}_R^\omega\right)=\int_{{\rm B}_R^\omega} u_{k,R}^\omega\,d\widetilde{\sigma}=-\dfrac{1}{k+1}\int_{{\rm B}_R^\omega}\operatorname{D}elta^{M_\omega^n}u_{k+1,R}^\omega\,d\widetilde{\sigma}=-\dfrac{1}{k+1}\,u_{k+1,R}^\omega{'}(R)\operatorname{Vol}\left({\rm S}_R^\omega\right).
\end{equation*}
\noindentdent Therefore, for all $k \geq 1$,
\begin{equation}\label{eq:TowMomDerviadaMW}
-\dfrac{1}{k+1}u_{k+1,R}^\omega{'}(R)=\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}.
\end{equation}
Assuming now as hypothesis one of the inequalities in (\ref{eq:meancurvatureconditions7}), we obtain correspondingly the inequalities
$$\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega}\leq\,(\geq)\,\operatorname{D}elta^M u_{k+1,R}\,\,\text{on}\,\,\, {\rm B}_R^M.$$
Then, using the Divergence theorem and that $\bar{u}_{k+1,R}^{\omega}$ is radial in ${\rm B}_R^M$, we have
\begin{equation}\label{ineqisopTors}
\begin{split}
\mathcal{A}_k\left({\rm B}_R^M\right) & =\int_{{\rm B}_R^M}u_{k,R}\,d\sigma=-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M u_{k+1,R}d\sigma\\&
\leq\,(\geq)\,-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega}\,d\sigma\\&=-\dfrac{1}{k+1}\int_{{\rm S}_R^M}\ep{\nabla^M\bar{u}_{k+1,R}^{\omega},\nabla^Mr}\,d\sigma_r \\&= -\dfrac{1}{k+1}\,\bar{u}_{k+1,R}^{\omega}{'}(R)\operatorname{Vol}\left({\rm S}_R^M\right).
\end{split}
\end{equation}
Then, using equation \eqref{eq:TowMomDerviadaMW}, and that $\bar{u}_{k+1,R}^\omega{'}(R)=u_{k+1,R}^\omega{'}(R)$, we finally obtain that
\begin{equation*}
\mathcal{A}_k\left({\rm B}_R^M\right)\leq\,(\geq)\,\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}\,\operatorname{Vol}\left({\rm S}_R^M\right).
\end{equation*}
We are going to discuss the equality case: assuming that ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M},\, \text{for all }0<r\leq R$, equality
\begin{equation}
\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}=\,\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^M\right)}{\operatorname{Vol}\left({\rm S}_R^M\right)}
\end{equation}
\noindentdent for some $k_0 \geq 1$ implies that all inequalities in (\ref{ineqisopTors}) are equalities for this fixed $k_0$; in particular $\int_{{\rm B}_R^M}\operatorname{D}elta^M\left(u_{k_0+1,R}-\bar{u}_{k_0+1,R}^{\omega}\right)d\sigma=0$ and, since the integrand is non-negative by (\ref{eq:TowMomCompLapDes}), $\operatorname{D}elta^M u_{k_0+1,R}=\operatorname{D}elta^M\bar{u}_{k_0+1,R}^{\omega}$ on $B^M_R(o)$; as both functions vanish on $S^M_R(o)$, the Maximum Principle gives $\bar{u}_{k_0+1,R}^{\omega}=u_{k_0+1,R}$ on $B^M_R(o)$.
Applying assertion (3) in Theorem \ref{teo:TowMomComp}, we have that ${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M}\,\,\,\text{for all }0<r \leq R$ and that $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$. In particular, $\bar{u}_{1,R}^{\omega}=u_{1,R}$ on $B^M_R(o)$, so, by Corollary \ref{cor:MeanComp2},
$\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)$ and $\operatorname{Vol}\left({\rm B}_r^\omega\right)=\operatorname{Vol}\left({\rm B}_r^M\right)$ for all $r \in ]0, R]$ and, hence, for all $k \geq 1$,
\begin{equation}\label{ineqisopTors2}
\begin{split}
\mathcal{A}_k\left({\rm B}_R^M\right) & =\int_{{\rm B}_R^M}u_{k,R}\,d\sigma=-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M u_{k+1,R}d\sigma
\\&=\,-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega}\,d\sigma= -\dfrac{1}{k+1}\,\bar{u}_{k+1,R}^{\omega}{'}(R)\operatorname{Vol}\left({\rm S}_R^M\right)\\ &=\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}\,\operatorname{Vol}\left({\rm S}_R^M\right)= \mathcal{A}_k\left({\rm B}_R^\omega\right).
\end{split}
\end{equation}
Moreover, applying assertion (4) in Theorem \ref{teo:TowMomComp}, from equality $\bar{u}_{k_0+1,R}^{\omega}=u_{k_0+1,R}$ on $B^M_R(o)$ we can deduce that $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $r \in ]0,R]$, so given $r \in ]0,R]$, and for all $k \geq 1$,
\begin{equation}\label{ineqisopTors3}
\begin{split}
\mathcal{A}_k\left({\rm B}_r^M\right) & =\int_{{\rm B}_r^M}u_{k,r}\,d\sigma=-\dfrac{1}{k+1}\int_{{\rm B}_r^M}\operatorname{D}elta^M u_{k+1,r}d\sigma
\\&=\,-\dfrac{1}{k+1}\int_{{\rm B}_r^M}\operatorname{D}elta^M\bar{u}_{k+1,r}^{\omega}\,d\sigma= -\dfrac{1}{k+1}\,\bar{u}_{k+1,r}^{\omega}{'}(r)\operatorname{Vol}\left({\rm S}_r^M\right)\\ &=\dfrac{\mathcal{A}_k\left({\rm B}_r^\omega\right)}{\operatorname{Vol}\left({\rm S}_r^\omega\right)}\,\operatorname{Vol}\left({\rm S}_r^M\right)= \mathcal{A}_k\left({\rm B}_r^\omega\right).
\end{split}
\end{equation}
Finally, to prove the last assertion of the corollary, we know that, assuming that ${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\,\,\text{for all}\,\,0 < r \leq R$, the equality
\begin{equation*}
\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}=\dfrac{\mathcal{A}_{k_0}\left({\rm B}_R^M\right)}{\operatorname{Vol}\left({\rm S}_R^M\right)}
\end{equation*}
\noindentdent implies equalities $\mathcal{A}_k\left({\rm B}_r^\omega\right)=\mathcal{A}_k\left({\rm B}_r^M\right)$, for all $k\geq 1$, and for all $r \in ]0,R]$.
Then, for any geodesic ball $B^M_r \subseteq M$ in a Riemannian manifold $(M,g)$, we have (see \cite{HuMP3} and \cite{BGJ}):
\begin{equation}\label{gim2}
\begin{aligned}
\lambda_1(B^M_r)&=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left({\rm B}_r^M\right)}{\mathcal{A}_{k}\left({\rm B}_r^M\right)}\\&=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left({\rm B}_r^\omega\right)}{\mathcal{A}_{k}\left({\rm B}_r^\omega\right)}=\lambda_1(B^\omega_r).
\end{aligned}
\end{equation}
\end{proof}
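\begin{remark}
As an illustration of (\ref{RigIsop}), take the Euclidean model, $\omega(t)=t$, so that ${\rm H}_{{\rm S}_r^\omega}=1/r$, and $k=1$. If ${\rm H}_{{\rm S}_r^M(o)}\geq 1/r$ for all $0<r\leq R$ then, using that in the Euclidean model $\mathcal{A}_1({\rm B}_R^\omega)=\frac{R^2}{n(n+2)}\operatorname{Vol}({\rm B}_R^\omega)$ and $\operatorname{Vol}({\rm B}_R^\omega)/\operatorname{Vol}({\rm S}_R^\omega)=R/n$, we obtain
\begin{equation*}
\mathcal{A}_1\left({\rm B}_R^M\right)\leq\frac{R^3}{n^2(n+2)}\operatorname{Vol}\left({\rm S}_R^M\right),
\end{equation*}
with equality forcing the rigidity conclusions of Corollary \ref{isoptors}.
\end{remark}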
\subsection{An estimate for the torsional rigidity of a geodesic $R$-ball}\label{subsec:TRcomp}\
We are going to bound the torsional rigidity of a metric ball ${\rm B}_R^M$ in a Riemannian manifold $(M,g)$ in Theorem \ref{teo:2.1}, assuming that the mean curvature of the geodesic spheres in this Riemannian manifold is bounded from above or from below by the corresponding mean curvature of the geodesic spheres in a symmetric model space $(M_\omega^n, g_{\omega})$ which is balanced from above.
This result can be considered as a continuation of the intrinsic comparison done in Section 6 of the paper \cite{HuMP1}. In that paper, upper and lower bounds for the torsional rigidity of a metric ball ${\rm B}_R^M(o)$ in a Riemannian manifold $(M,g)$ with a pole $o \in M$ were obtained under more restrictive conditions, namely, assuming that the radial sectional curvatures were bounded above or below by the corresponding radial sectional curvatures of a suitable model space.
To do that, let us consider a symmetric model space rearrangement of the metric ball ${\rm B}_R^M$ as it has been described in Definition \ref{def:Symm} and Definition \ref{def:simfunc}, namely, a symmetrization of ${\rm B}_R^M$ which is a geodesic $s(R)$-ball in the model space $M_\omega^n$ such that $\operatorname{Vol}\left({\rm B}_R^M(o)\right)=\operatorname{Vol}\left({\rm B}_{s(R)}^\omega(o_\omega)\right)$, together with the symmetrization $\mathbb{E}_R^\omega{^*}\,:\,{\rm B}_{s(R)}^\omega\longrightarrow\mathbb{R}$ of the transplanted mean exit time function $\mathbb{E}_R^\omega\,:\,{\rm B}_R^M\longrightarrow\mathbb{R}$. It is evident that Proposition \ref{prop:ineqpsiE}, Theorem \ref{teo:2.1} and Corollary \ref{cor:2.1} make sense for those geodesic balls $B^M_R(o)$ which possess a Schwarz symmetrization ${\rm B}_{s(R)}^\omega(o_\omega)$.
Then, we have the following comparison. Its proof follows closely the lines of the proofs of Propositions 5.2 and 5.4 in \cite{HuMP1}; we have included it because of the changes due to its intrinsic character, the different assumptions on the curvatures made here, and the new analysis of the equality case that we present.
\begin{proposition}\label{prop:ineqpsiE}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$, balanced from above. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_{\omega}^n$ satisfy
\begin{equation}\label{eq:meancurvatureconditions8}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then
\begin{equation}\label{eq:ineqpsiE}
\mathbb{E}_R^\omega{^*}{'}(\widetilde{r})\geq\,(\leq)\,E_{s(R)}^\omega{'}(\widetilde{r})\quad\text{for all }\,\,\widetilde{r}\in(0,s(R))
\end{equation}
\noindentdent and hence,
\begin{equation}\label{eq:ineqpsiE2}
\mathbb{E}_R^\omega{^*}(\widetilde{r})\leq\,(\geq)\,E_{s(R)}^\omega(\widetilde{r})\quad\text{for all }\,\,\widetilde{r}\in[0,s(R)].
\end{equation}
\noindentdent Equality in any of the inequalities (\ref{eq:ineqpsiE2}) implies the equality of the radii, $s(R)=R$, and the equality
$${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M},\;\,\text{for all }0<r\leq R$$
\noindentdent and hence, we have the equalities of the volumes $\operatorname{Vol}\left({\rm B}_{r}^\omega \right)=\operatorname{Vol}\left({\rm B}_{r}^M\right)\,\,\forall r \in ]0,R]$ and $\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)\,\,\,\text{for all }\, 0<r \leq R$.
\end{proposition}
\begin{proof}
We are going to analyze first the symmetrization $\mathbb{E}_R^\omega{^*}$. The transplanted function
$$\mathbb{E}_R^\omega:B_R^M(o)\longrightarrow\mathbb{R}$$
\noindent satisfies that $\mathbb{E}_R^\omega \in C^\infty ( B_R^M(o) \sim \{o\}) \cap C^0(\overline{B}_R^M(o))$, and, moreover, that $\mathbb{E}_R^\omega\vert_{S^M_R(o)}=0$.
Let us consider the radial function $\psi=E_R^\omega$ defined on the interval $[0,R]$ in equation (\ref{eq:EwR}) of Proposition \ref{prop:MeanTorEqualities}. Let us denote by $T=\max_{[0,R]}\psi$. Thus, as $\psi$ is monotone, (strictly decreasing, with $\psi(0)=T$ and $\psi(R)=0$), we have that $\frac {d}{dr}\psi <0$ on $]0,R]$ and that $\psi:[0,R]\longrightarrow [0,T]$ is bijective.
Now, let us define the function $a:[0,T]\longrightarrow[0,R]$ as $a(t):=\psi^{-1}(t)$, satisfying $a(0)=\psi^{-1}(0)=R$ and $a(T)=\psi^{-1}(T)=0$. We know that
$$a'(t)=\frac{1}{\psi'(a(t))} <0\,\,\forall t \in (0,T)$$
\noindent so $a(t)$ is strictly decreasing in $(0,T)$.
Let us denote, for all $x \in B^M_R(o)$,
$$\varphi(x)=\mathbb{E}^\omega_R(x):=E^\omega_R(r_o(x))=\psi(r_o(x)).$$
We have that $\varphi(B^M_R(o))=\psi([0,R])=[0,T]$, so the function $\varphi: B^M_R(o) \rightarrow [0,T]$ satisfies $\Vert \nabla^M \varphi\Vert=\vert \frac {d}{dr}\psi\vert \Vert \nabla^M r_o\Vert \neq 0$ for all $x \in B^M_R(o) -\{o\}$. Therefore, the set of regular values of $\varphi$ is $R_{\varphi}=(0,T)$.
On the other hand, and given $t\in[0,T]$, let us consider the sets
\begin{equation*}
\begin{aligned}
D(t)&=\left\{x\in{\rm B}_R^M\,|\,\varphi(x)\geq t\right\}=\left\{x\in{\rm B}_R^M\,|\,\mathbb{E}^\omega_R(x)\geq t\right\}\\&=\left\{x\in{\rm B}_R^M\,|\,r_o(x)\leq \psi^{-1}(t)\right\}={\rm B}_{a(t)}^M
\end{aligned}
\end{equation*}
and
\begin{equation*}
\Gamma(t)=\left\{x\in{\rm B}_R^M\,|\,\varphi(x)= t\right\}=\left\{x\in{\rm B}_R^M\,|\,\psi\left(r_o(x)\right)=t\right\}={\rm S}_{a(t)}^M.
\end{equation*}
We also have that $D(0)=B_{a(0)}^M=B_R^M$ and $D(T)=B_{a(T)}^M=\{o\}$, where $o$ is the center of the geodesic ball $B_R^M$.
We consider the symmetrization in $M_\omega^n$ of the sets ${\rm D}(t)=B^M_{a(t)}\subseteq{\rm B}_R^M\subseteq M$, namely, the geodesic balls ${\rm D}(t)^*={\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)$ in $M_\omega^n$ such that
\begin{equation*}
\operatorname{Vol}\left({\rm D}(t)\right)=\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega(o_\omega)\right)\, .
\end{equation*}
For each $t \in [0,T]$, let us consider the function $\widetilde{r}(t)$, defined in Definition \ref{defR}. Then, in this particular context, we have that $\widetilde{r}\,:\,[0,T]\longrightarrow[0,s(R)]$ is strictly decreasing and hence, bijective. In fact, note that if $t_1, t_2 \in [0,T]$ such that $t_1 < t_2$, then, as $a(t)$ is strictly decreasing, $a(t_1) > a(t_2)$, so
$$\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t_1)}^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm B}_{a(t_1)}^M\right)>\operatorname{Vol}\left({\rm B}_{a(t_2)}^M\right)=\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t_2)}^\omega(o_\omega)\right)$$
\noindent and hence $\widetilde{r}(t_1)> \widetilde{r}(t_2)$.
On the other hand, applying Lemma \ref{prop:Rfuncsymm}, we have that for all $t \in R_\varphi=(0,T)$,
\begin{equation}\label{eq3.11}
\widetilde{r}{'}(t)=\dfrac{-1}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}\int_{\Gamma(t)}\norm{\nabla^M\varphi}^{-1}\,d\sigma_t.
\end{equation}
The inverse of $\widetilde{r}$ is the decreasing function
\begin{equation*}
\phi\,:\,[0,s(R)]\longrightarrow [0,T];\quad \phi:=\phi(\widetilde{r}),
\end{equation*}
\noindent such that $\phi'\left(\widetilde{r}(t)\right)=\frac{1}{\widetilde{r}'(t)}$ for all $t\in [0,T]$, $\phi(0)=T$ and $\phi(s(R))=0$.
With all this background, we can say now that, in accordance with Definition \ref{def:simfunc} and Theorem \ref{prop:Propietatssymmobjects}, the symmetrization of $\varphi=\mathbb{E}^\omega_R:\,{\rm B}_R^M\longrightarrow\mathbb{R}$ is a radial function $\varphi^*=\mathbb{E}_R^{\omega^{*}}\,:\,{\rm B}_{s(R)}^\omega\longrightarrow\mathbb{R}$ which satisfies the following equality
\begin{equation}\label{eq:Psi}
\varphi^*(x^*)=\mathbb{E}_R^{\omega^{*}}(x^*)=\mathbb{E}_R^{\omega^{*}}\left(r_{o_\omega}(x^*)\right)
=t_0=\phi\left(\widetilde{r}(t_0)\right)=\phi\left(\widetilde{r}\right).
\end{equation}
To see equation (\ref{eq:Psi}), we argue as follows: given $x^*\in B^M_R(o)^*={\rm B}_{s(R)}^\omega(o_\omega)=\cup_{t\in[0,T]}{\rm S}_{\widetilde{r}(t)}^\omega(o_\omega)$, (concerning the second equality, recall that $\widetilde{r}: [0,T] \rightarrow [0,s(R)]$ is bijective), there exists a largest value $t_0$ such that $r_{o_\omega}(x^*)=\widetilde{r}(t_0)$ and, hence, $x^*\in{\rm B}_{\widetilde{r}(t_0)}^\omega ={\rm D}(t_0)^*$. We then have that
\begin{equation}\label{eq:Psi2}
\varphi^*(x^*)=\varphi^*(r_{o_{\omega}}(x^*))=\sup\{t \geq 0 \,:\, x^* \in B^\omega_{\widetilde{r}(t)}(o_\omega)\}=t_0=\phi\left(\widetilde{r}(t_0)\right)
\end{equation}
\noindent and hence, for all $t\in(0,T)$, $\varphi^*\equiv \varphi^*(\widetilde{r}(t))$ and we have, applying equation \eqref{eq3.11}:
\begin{equation}\label{eq:1}
\begin{aligned}
\frac{d}{d\widetilde{r}}\vert_{\widetilde{r}=\widetilde{r}(t)} \varphi^*(\widetilde{r})=\varphi^{*}{'}(\widetilde{r}(t))&=\mathbb{E}_R^{\omega^{*}}{'}(\widetilde{r}(t))=\phi{'}(\widetilde{r}(t))\\&=\dfrac{1}{\widetilde{r}{'}(t)}=-\frac{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}{ \int_{\Gamma(t)}\norm{\nabla^M\varphi}^{-1}\,d\sigma_t }.
\end{aligned}
\end{equation}
But, as $\Vert \nabla^M \varphi(x)\Vert=\vert \psi'(r_o(x))\vert \neq 0$ for all $x \in B^M_R(o) -\{o\}$ and $\Gamma(t)={\rm S}_{a(t)}^M$ for all $t\in R_\varphi = (0,T)$, we conclude that
\begin{equation}\label{eq4.2}
\int_{\Gamma(t)}\norm{\nabla^M\varphi}^{-1}\,d\sigma_t = \dfrac{1}{\vert\psi{'}(a(t))\vert}\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)
\end{equation}
\noindent and hence, using equation (\ref{eq4.2}) and the fact that $\psi=E^\omega_R$, equation (\ref{eq:1}) becomes the following expression, for all $t \in [0,T]$:
\begin{equation}\label{eq:psisymmprimavolums}
\begin{split}
\varphi^*{'}(\widetilde{r}(t)) & =-\modul{\psi{'}(a(t))}\dfrac{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)}\\
& = -\dfrac{\operatorname{Vol}\left({\rm B}_{a(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^\omega\right)}\,\dfrac{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)}.
\end{split}
\end{equation}
On the other hand, let us assume that ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M}$, for all $r \in (0,R]$. Then by Corollary \ref{cor:MeanComp} we know that $\operatorname{Vol}\left({\rm B}_r^\omega\right)\,\leq\,\operatorname{Vol}\left({\rm B}_r^M\right)$ for all $r \in [0,R]$. Therefore,
\begin{equation}\label{eq4.14}
\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right)=\operatorname{Vol}\left({\rm B}_{a(t)}^M\right)\,\geq\,\operatorname{Vol}\left({\rm B}_{a(t)}^\omega\right), \quad\text{for all }t\in[0,T].
\end{equation}
Then, since $\operatorname{Vol}\left({\rm B}_r^\omega\right)$ is an increasing function, because $\frac{d}{dr}\operatorname{Vol}\left({\rm B}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^\omega\right)\geq 0$, we have that
\begin{equation}\label{eq4.151}
\widetilde{r}(t)\,\geq\,a(t),\quad\text{for all }t\in[0,T].
\end{equation}
\noindent so, since $M_\omega^n$ is balanced from above, namely $q_\omega{'}(r)\geq 0$, we obtain:
\begin{equation}\label{eq4.15}
\dfrac{\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}\,\geq\,\dfrac{\operatorname{Vol}\left({\rm B}_{a(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^\omega\right)}, \quad\text{for all }t\in[0,T].
\end{equation}
Therefore, using equation \eqref{eq:psisymmprimavolums} and the fact that $\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega \right)=\operatorname{Vol}\left({\rm B}_{a(t)}^M\right)$, we have
\begin{equation}\label{eq.4.16}
\begin{split}
\mathbb{E}_R^{\omega *}{'}(\widetilde{r}(t))=\varphi^*{'}(\widetilde{r}(t)) \geq\,& -\dfrac{\operatorname{Vol}\left({\rm B}_{a(t)}^M\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)},\quad\text{for all }t\in[0,T].
\end{split}
\end{equation}
Now, we apply Proposition \ref{prop:MeanTorEqualities}, the isoperimetric inequality (\ref{cor:MeanComp1}) of Corollary \ref{cor:MeanComp}, the fact that $\widetilde{r}(t)\geq\,a(t)$, and that $q_\omega{'}\geq 0$, to obtain finally
\begin{equation}\label{eq4.17}
\mathbb{E}_R^{\omega *}{'}(\widetilde{r}(t)) \geq\,-\dfrac{\operatorname{Vol}\left({\rm B}_{a(t)}^M\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)}
\geq\,-\dfrac{\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}=E_{s(R)}^\omega{'}(\widetilde{r}(t))\,\,\forall t \in (0,T).
\end{equation}
Now, as $\mathbb{E}_R^{\omega *}{'}(\widetilde{r}) \geq\,E_{s(R)}^\omega{'}(\widetilde{r})\,\,\forall \widetilde{r} \in (0,s(R))$, we have, integrating along $[0,s(R)]$, and taking into account that $\mathbb{E}_R^{\omega *}(s(R))=E_{s(R)}^\omega(s(R))=0$,
\begin{equation}
\begin{aligned}
-\mathbb{E}_R^{\omega *}(\widetilde{r})&=\int_{\widetilde{r}}^{s(R)} \mathbb{E}_R^{\omega *}{'}(u)du \geq \\&
\int_{\widetilde{r}}^{s(R)} E_{s(R)}^\omega{'}(u)du=-E_{s(R)}^\omega(\widetilde{r})\,\,\,\forall \widetilde{r} \in [0,s(R)]
\end{aligned}
\end{equation}
\noindent so
$$\mathbb{E}_R^{\omega *}(\widetilde{r}) \leq\,E_{s(R)}^\omega(\widetilde{r})\,\,\,\forall \widetilde{r} \in [0,s(R)].$$
If we assume that ${\rm H}_{{\rm S}_r^\omega}\geq\,{\rm H}_{{\rm S}_r^M}$, for all $r \in [0,R]$, we use the same argument, changing all the inequalities, to obtain
\begin{equation}
\begin{aligned}
\mathbb{E}_R^{\omega *}{'}(\widetilde{r}(t)) &\leq\,E_{s(R)}^\omega{'}(\widetilde{r}(t))\,\,\forall t \in (0,T)\,\,\text{and hence}\\
\mathbb{E}_R^{\omega *}(\widetilde{r}) &\geq\,E_{s(R)}^\omega(\widetilde{r})\,\,\,\forall \widetilde{r} \in [0,s(R)].
\end{aligned}
\end{equation}
We are going to study the case of equality when we assume the hypothesis ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M}$, for all $r \in [0,R]$ (the discussion of equality if we assume ${\rm H}_{{\rm S}_r^\omega}\geq\,{\rm H}_{{\rm S}_r^M}$, for all $r \in [0,R]$, is the same, mutatis mutandis).
Equality $\mathbb{E}_R^{\omega *}(\widetilde{r})=E_{s(R)}^\omega(\widetilde{r}) \,\,\forall \widetilde{r} \in ]0,s(R)]$ implies equality $\mathbb{E}_R^{\omega *}{'}(\widetilde{r}) =\,E_{s(R)}^\omega{'}(\widetilde{r})\,\,\forall \widetilde{r} \in (0,s(R))$, which in turn implies that the inequalities in (\ref{eq4.17}), and hence in (\ref{eq.4.16}) and (\ref{eq4.15}), become equalities for all $t \in [0,T]$. In particular, from the equality in (\ref{eq4.15}) and inequality (\ref{eq4.14}), we deduce that
\begin{equation}\label{eq4.18}
\dfrac{\operatorname{Vol}\left({\rm S}_{a(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}\,\leq\,1, \quad\text{for all }t\in[0,T].
\end{equation}
On the other hand, using again the equality in (\ref{eq4.15}) and taking into account that, as we assume ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M}$ for all $r \in [0,R]$, the isoperimetric inequality (\ref{cor:MeanComp1}) holds, we obtain:
\begin{equation}\label{eq4.19}
\dfrac{\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}\,=\,\dfrac{\operatorname{Vol}\left({\rm B}_{a(t)}^\omega\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^\omega\right)}\,\geq\,\dfrac{\operatorname{Vol}\left({\rm B}_{a(t)}^M\right)}{\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)} , \quad\text{for all }t\in[0,T],
\end{equation}
\noindent and hence, as $\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right)=\operatorname{Vol}\left({\rm B}_{a(t)}^M\right)$ and using (\ref{eq4.18}):
\begin{equation}\label{eq4.20}
\operatorname{Vol}\left({\rm S}_{a(t)}^\omega\right) \leq \operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right) \leq \operatorname{Vol}\left({\rm S}_{a(t)}^M\right).
\end{equation}
Now, differentiating the equality
\begin{equation}\label{eq4.21}
\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega\right)=\operatorname{Vol}\left({\rm B}_{a(t)}^M\right),\quad\text{for all }t\in[0,T],
\end{equation}
\noindent we obtain
\begin{equation}\label{eq4.22}
\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)\widetilde{r'}(t)=\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)a'(t),\quad\text{for all }t\in(0,T),
\end{equation}
\noindent and hence, using inequality (\ref{eq4.20}),
\begin{equation}\label{eq4.23}
\dfrac{\widetilde{r'}(t)}{a'(t)}\,=\,\dfrac{\operatorname{Vol}\left({\rm S}_{a(t)}^M\right)}{\operatorname{Vol}\left({\rm S}_{\widetilde{r}(t)}^\omega\right)}\,\geq\,1, \quad\text{for all }t\in(0,T),
\end{equation}
\noindent so $\widetilde{r'}(t) \geq a'(t)\,\,\forall t \in (0,T)$, and therefore, as $\widetilde{r}(T)=a(T)=0$, we finally obtain, integrating along $[t,T]$, that $\widetilde{r}(t)\leq a(t)\,\,\forall t \in [0,T]$. Hence, as we know (see inequality (\ref{eq4.151})) that $\widetilde{r}(t)\geq a(t)\,\,\forall t \in [0,T]$, we obtain
$$\widetilde{r}(t)= a(t)\,\,\forall t \in [0,T].$$
Therefore, $s(R)=\widetilde{r}(0)= a(0)=R$\, and, moreover, $\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^\omega \right)=\operatorname{Vol}\left({\rm B}_{\widetilde{r}(t)}^M\right)\,\,\text{for all }t\in[0,T]$, so
$$\operatorname{Vol}\left({\rm B}_{r}^\omega \right)=\operatorname{Vol}\left({\rm B}_{r}^M\right)\,\,\forall r \in [0,R]$$
\noindent and hence
$$\operatorname{Vol}\left({\rm S}_{r}^\omega \right)=\operatorname{Vol}\left({\rm S}_{r}^M\right)\,\,\forall r \in [0,R].$$
Moreover, we apply the equality assertion in Corollary \ref{cor:MeanComp} to conclude that ${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M},\;\,\text{for all }0< r \leq R$.
\end{proof}
As a consequence of Proposition \ref{prop:ineqpsiE} we have the following result, where it is proved that, under our hypotheses, the torsional rigidity of a geodesic ball determines its first Dirichlet eigenvalue:
\begin{theorem}\label{teo:2.1}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$, balanced from above. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_{\omega}$ satisfy
\begin{equation}\label{eq:meancurvatureconditions9}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then
\begin{equation}\label{torsrigcomp}
\mathcal{A}_1\left({\rm B}_{s(R)}^\omega\right)\geq\,(\leq)\,\mathcal{A}_1\left({\rm B}_R^M\right)
\end{equation}
\noindent where ${\rm B}_{s(R)}^\omega$ is the Schwarz symmetrization of ${\rm B}_R^M$ in the model space $(M_\omega^n,g_\omega)$.
Equality in any of the inequalities (\ref{torsrigcomp}) implies the equality of the radii $s(R)=R$ and that
$${\rm H}_{{\rm S}_r^\omega(o_\omega)}=\,{\rm H}_{{\rm S}_r^M(o)}\,\,\,\text{for all }0<r\leq R$$
\noindent and hence, we have the equalities
\begin{enumerate}
\item Equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$, and hence, equality $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $0 < r \leq R$.
\item Equalities $\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm B}_r^M(o)\right)$ and $\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)=\operatorname{Vol}\left({\rm S}_r^M(o)\right)\,\,\,\text{for all }\, 0<r \leq R$.
\item Equalities $\mathcal{A}_k\left({\rm B}_r^\omega(o_\omega)\right)=\mathcal{A}_k\left({\rm B}_r^M(o)\right)$, for all $k\geq 1$ and for all $0<r \leq R$.
\item Equality $\lambda_1(B^\omega_r(o_\omega))=\lambda_1(B^M_r(o))$ for all $0<r \leq R$.
\noindent Namely, the torsional rigidity determines the Poisson hierarchy, the volume, the $L^1$-moment spectrum and the first Dirichlet eigenvalue of the ball $B^M_r(o)$ for all $0<r \leq R$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let us consider a symmetric model space rearrangement of the metric ball ${\rm B}_R^M$ as described in Definition \ref{def:Symm} and Definition \ref{def:simfunc}, namely, a symmetrization of ${\rm B}_R^M$ which is a geodesic $s(R)$-ball in the model space $M_\omega^n$ such that $\operatorname{Vol}\left({\rm B}_R^M\right)=\operatorname{Vol}\left({\rm B}_{s(R)}^\omega\right)$, together with the symmetrization $\mathbb{E}_R^\omega{^*}\,:\,{\rm B}_{s(R)}^\omega\longrightarrow\mathbb{R}$ of the transplanted mean exit time function $\mathbb{E}_R^\omega\,:\,{\rm B}_R^M\longrightarrow\mathbb{R}$.
Applying Theorems \ref{teo:MeanComp} and \ref{cor:eqschwarz} and Proposition \ref{prop:ineqpsiE}, we have that
\begin{equation}\label{eq:4}
\begin{aligned}
\mathcal{A}_1\left({\rm B}_R^M\right)&=\int_{{\rm B}_R^M}E_R^M\,d\sigma\leq\,(\geq)\,\int_{{\rm B}_R^M}\mathbb{E}_R^\omega\,d\sigma\\&=\int_{{\rm B}_{s(R)}^\omega}\mathbb{E}_R^\omega{^*}\,d\widetilde{\sigma} \leq\,(\geq)\,\int_{{\rm B}_{s(R)}^\omega}E_{s(R)}^\omega\,d\widetilde{\sigma}=\mathcal{A}_1\left({\rm B}_{s(R)}^\omega\right)
\end{aligned}
\end{equation}
\noindent and the Theorem is proved.
We are going to study the case of equality when we assume the hypothesis ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M}$, for all $r \in [0,R]$ (the discussion of equality if we assume ${\rm H}_{{\rm S}_r^\omega}\geq\,{\rm H}_{{\rm S}_r^M}$, for all $r \in [0,R]$, is the same, mutatis mutandis).
Equality in (\ref{eq:4}) implies that all the inequalities contained in this expression become equalities. In particular, we have that $\int_{{\rm B}_R^M}E_R^M\,d\sigma=\int_{{\rm B}_R^M}\mathbb{E}_R^\omega\,d\sigma$ and that $\int_{{\rm B}_{s(R)}^\omega}\mathbb{E}_R^\omega{^*}\,d\widetilde{\sigma}=\int_{{\rm B}_{s(R)}^\omega}E_{s(R)}^\omega\,d\widetilde{\sigma}$.
From this second equality and inequality (\ref{eq:ineqpsiE2}) in Proposition \ref{prop:ineqpsiE}, we have that $\mathbb{E}_R^\omega{^*}=E_{s(R)}^\omega$ on $[0,s(R)]$. Applying again Proposition \ref{prop:ineqpsiE}, we deduce that $s(R)=R$, and that ${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M}\,\,\,\text{for all }0<r \leq R$.
On the other hand, equality $\int_{{\rm B}_R^M}E_R^M\,d\sigma=\int_{{\rm B}_R^M}\mathbb{E}_R^\omega\,d\sigma$ implies, using Theorem \ref{teo:MeanComp}, that $E_R^M=\mathbb{E}_R^\omega$ on ${\rm B}_R^M$. Hence we conclude equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$ using assertion (3) in Theorem \ref{teo:TowMomComp} and that $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $r \in [0,R]$ using assertion (4) in Theorem \ref{teo:TowMomComp}.
Moreover, equality $E_R^M=\mathbb{E}_R^\omega$ on ${\rm B}_R^M$ implies, using equality conclusions in Corollary \ref{cor:MeanComp2}, that, for all $r \in ]0,R]$,
\begin{equation}
\begin{aligned}
\dfrac{\operatorname{Vol}\left({\rm B}_r^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_r^\omega(o_\omega)\right)\,}&=\,\dfrac{\operatorname{Vol}\left({\rm B}_r^M(o)\right)}{\operatorname{Vol}\left({\rm S}_r^M(o)\right)},\\
\operatorname{Vol}\left({\rm B}_r^\omega\right)\, &=\,\operatorname{Vol}\left({\rm B}_r^M\right),\\
\operatorname{Vol}(S^\omega_r) &= \operatorname{Vol}(S^M_r).
\end{aligned}
\end{equation}
Hence, as we are assuming that $\mathcal{A}_1\left({\rm B}_R^M\right)=\mathcal{A}_1\left({\rm B}_{s(R)}^\omega\right)$ and we have deduced $s(R)=R$, then we obtain the equality $$\dfrac{\mathcal{A}_1\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}=\dfrac{\mathcal{A}_1\left({\rm B}_R^M\right)}{\operatorname{Vol}\left({\rm S}_R^M\right)}$$ and hence, by Corollary \ref{isoptors}, $\mathcal{A}_k\left({\rm B}_r^M\right)=\mathcal{A}_k\left({\rm B}_{r}^\omega\right)$ for all $k \geq 1$ and for all $0 <r \leq R$.
Finally, as equality $\mathcal{A}_1\left({\rm B}_R^M\right)=\mathcal{A}_1\left({\rm B}_{s(R)}^\omega\right)$ implies equalities $\mathcal{A}_k\left({\rm B}_r^\omega\right)=\mathcal{A}_k\left({\rm B}_r^M\right)$, for all $k\geq 1$ and for all $r \in ]0,R]$, we have that, given $B^M_r \subseteq M$ in a Riemannian manifold $(M,g)$ with $r \in ]0,R]$, (see \cite{HuMP3} and \cite{BGJ}):
\begin{equation}\label{gim}
\begin{aligned}
\lambda_1(B^M_r)&=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left({\rm B}_r^M\right)}{\mathcal{A}_{k}\left({\rm B}_r^M\right)}\\&=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left({\rm B}_r^\omega\right)}{\mathcal{A}_{k}\left({\rm B}_r^\omega\right)}=\lambda_1(B^\omega_r).
\end{aligned}
\end{equation}
\end{proof}
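As a numerical illustration of the limit formula (\ref{gim}) (a rough sketch under simplifying assumptions, and not part of the argument), one can compute the Poisson hierarchy of the elementary one-dimensional example, the interval $(-R,R)\subset\mathbb{R}$ with the Dirichlet Laplacian, by finite differences and check that $k\,\mathcal{A}_{k-1}/\mathcal{A}_{k}$ approaches $\lambda_1=\left(\pi/(2R)\right)^{2}$:
\begin{verbatim}
# Illustrative check (not part of the proof): on the interval (-R, R) solve
# the Poisson hierarchy  Delta u_{k} = -k u_{k-1},  u_k = 0 on the boundary,
# u_0 = 1, by finite differences, and compare k*A_{k-1}/A_k with (pi/(2R))^2.
import numpy as np

R, n = 1.0, 400                      # half-length and number of interior nodes
h = 2 * R / (n + 1)
# second-order finite-difference Dirichlet Laplacian in dimension one
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

u, A_prev, ratio = np.ones(n), None, None
for k in range(1, 31):
    u = np.linalg.solve(L, -k * u)   # u_k solves  L u_k = -k u_{k-1}
    A_k = np.sum(u) * h              # A_k = integral of u_k (moment spectrum)
    if A_prev is not None:
        ratio = k * A_prev / A_k     # should approach lambda_1 as k grows
    A_prev = A_k

print(ratio, (np.pi / (2 * R))**2)   # both close to pi^2/4 ~ 2.4674
\end{verbatim}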
\begin{corollary}\label{cor:2.1}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$, balanced from above. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_{\omega}$ satisfy
\begin{equation}\label{eq:meancurvatureconditions10}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then
\begin{equation}\label{torsrigcomp2}
\mathcal{A}_1\left({\rm B}_R^M\right)\,\leq E_{s(R)}^\omega(0) \operatorname{Vol}({\rm B}_{R}^M).
\end{equation}
\end{corollary}
\begin{proof}
Assuming that ${\rm H}_{{\rm S}_r^\omega}\leq\ {\rm H}_{{\rm S}_r^M}\,\,\text{for all}\,\, 0 < r \leq R$, we use equation (\ref{eq:4}) to obtain, taking into account that $E_{s(R)}^\omega(r) \leq E_{s(R)}^\omega(0)\,\forall r \in ]0,s(R)]$,
\begin{equation}
\begin{aligned}
\mathcal{A}_1\left({\rm B}_R^M\right)&=\int_{{\rm B}_R^M}E_R^M\,d\sigma\, \leq \,\int_{{\rm B}_R^M}\mathbb{E}_R^\omega\,d\sigma\\&=\int_{{\rm B}_{s(R)}^\omega}\mathbb{E}_R^\omega{^*}d\widetilde{\sigma}\leq\,\int_{{\rm B}_{s(R)}^\omega}E_{s(R)}^\omega\,d\widetilde{\sigma}\leq E_{s(R)}^\omega(0) \operatorname{Vol}({\rm B}_{R}^M).
\end{aligned}
\end{equation}
\end{proof}
\begin{remark}
As $(M_\omega^n,g_\omega)$ is balanced from above, $\dfrac{d}{dr}\left(q_\omega(r)\right) \geq 0$, so $q_\omega(r)$ is non-decreasing in $r$. Then, as $E_{s(R)}^\omega(r(x))=\psi(r(x))=\int_{r(x)}^{s(R)} q_\omega(t)\,dt$, we have that
$$E_{s(R)}^\omega(0)=\int_{0}^{s(R)} q_\omega(t)\,dt \leq s(R) q_\omega(s(R))=s(R) \dfrac{\operatorname{Vol}\left({\rm B}_{s(R)}^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_{s(R)}^\omega(o_\omega)\right)}$$
\noindent so
$$\mathcal{A}_1\left({\rm B}_R^M\right)\,\leq E_{s(R)}^\omega(0) \operatorname{Vol}({\rm B}_{R}^M) \leq s(R) \dfrac{\operatorname{Vol}\left({\rm B}_{s(R)}^\omega(o_\omega)\right)}{\operatorname{Vol}\left({\rm S}_{s(R)}^\omega(o_\omega)\right)} \operatorname{Vol}({\rm B}_{R}^M).$$
\end{remark}
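As a numerical illustration of this remark (a simple sketch; we assume here the Euclidean model space $w_\omega(r)=r$ with $n=3$, which is balanced from above since $q_\omega(r)=r/n$), one can compute $q_\omega$, the value $E_{s(R)}^\omega(0)=\int_0^{s(R)}q_\omega(t)\,dt$ and the bound $s(R)\,q_\omega(s(R))$ as follows:
\begin{verbatim}
# Illustration in the Euclidean model space (assumed here): w_omega(r) = r,
# n = 3, so q_omega(r) = Vol(B_r)/Vol(S_r) = r/n and
# E_{s(R)}(0) = int_0^{s(R)} q_omega = s(R)^2/(2n) <= s(R)*q_omega(s(R)).
import numpy as np

n, sR = 3, 2.0                              # dimension and symmetrized radius s(R)
r = np.linspace(1e-6, sR, 4001)
dr = r[1] - r[0]
vol_S = r**(n - 1)                          # Vol(S_r), up to the constant omega_{n-1}
vol_B = np.cumsum(vol_S) * dr               # Vol(B_r); the constant cancels in q
q = vol_B / vol_S                           # isoperimetric quotient q_omega(r)

E0 = np.sum((q[1:] + q[:-1]) / 2) * dr      # E_{s(R)}(0) = int_0^{s(R)} q_omega
print(E0, sR**2 / (2 * n), sR * q[-1])      # E0 ~ 0.667 = s(R)^2/(2n) <= 4/3
\end{verbatim}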
\section{Cheng's first Dirichlet eigenvalue comparison and the determination of the moment spectrum of a geodesic ball}\label{sec:TorRidCom2}
Finally, as a corollary of the previous results, we obtain in Theorem \ref{th_const_below2} a Cheng-type first Dirichlet eigenvalue comparison (see \cite{BM}). On the other hand, in Corollary \ref{th_const_below3}, we show that, under our hypotheses, the first Dirichlet eigenvalue of a geodesic ball determines its exit time moment spectrum and its Poisson hierarchy.
\begin{theorem}\label{th_const_below2}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_{\omega}$ satisfy
\begin{equation}\label{eq:meancurvatureconditions12}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then we have the inequalities
\begin{equation}\label{ineqleq_submanifold2}
\lambda_1 (B^\omega_R) \leq\,(\geq)\,\lambda_1(B^M_R)\end{equation}
\noindent where $B^\omega_R$ is the geodesic ball in $M^n_\omega$.
\noindent Equality in any of these inequalities implies that
$${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M}\,\,\,\text{for all }0<r\leq R$$
\noindent and hence, we have the equalities
\begin{enumerate}
\item Equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$, and hence, equality $\bar{u}_{k,r}^{\omega}=u_{k,r}$ on $B^M_r(o)$ for all $k\geq 1$ and for all $0 < r \leq R$.
\item Equalities $\operatorname{Vol}\left({\rm B}_r^\omega\right)=\operatorname{Vol}\left({\rm B}_r^M\right)$ and $\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)\,\,\,\text{for all }\, 0<r \leq R$.
\item Equalities $\mathcal{A}_k\left({\rm B}_r^\omega\right)=\mathcal{A}_k\left({\rm B}_r^M\right)$, for all $k\geq 1$ and for all $0<r \leq R$.
\noindent Namely, the first Dirichlet eigenvalue determines the Poisson hierarchy, the volume, and the $L^1$-moment spectrum of the ball $B^M_r(o)$ for all $0<r \leq R$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof follows the lines of the proofs of Theorems 6 and 7 in \cite{HuMP3}. This technique is based on the description of the first Dirichlet eigenvalue of a smooth precompact domain $D$ in a Riemannian manifold
given by P. McDonald and R. Meyers in \cite{McMe}.
When $D=B^M_R$, we have
\begin{equation}\label{desc}
\lambda_1(B^M_R)= \sup \left\{\eta \geq 0 \,:\, \lim_{k \to \infty}
\sup
\left(\frac{\eta}{2}\right)^k\frac{\mathcal{A}_{k}(B^M_R)}{\Gamma(k+1)}
<\infty\right\} \quad .
\end{equation}
Let us assume first that ${\rm H}_{{\rm S}_r^\omega}\leq\,{\rm H}_{{\rm S}_r^M},\,\,\text{for all }0<r\leq R$. Then, we have, by Corollary \ref{isoptors}, that
\begin{equation}\label{ineqspec2}
\frac{\mathcal{A}_{k}(B^M_R)}{\operatorname{Vol}(S^M_R)} \leq
\frac{\mathcal{A}_{k}(B^\omega_{R})}{\operatorname{Vol}(S^\omega_{R})}\,\,\,\,\textrm{for all}\,\, k \in \mathbb{N}.
\end{equation}
On the other hand, by Corollary \ref{cor:MeanComp}:
\begin{equation}\label{meyer1}
\frac{\operatorname{Vol}(S^M_R)}{\operatorname{Vol}(S^\omega_{R})} \geq \frac{\operatorname{Vol}(B^M_R)}{\operatorname{Vol}(B^\omega_{R})}\geq 1.
\end{equation}
Then, using inequality (\ref{ineqspec2}) the set $$\mathcal{F}_2:=\{\eta \geq 0 \,:\, \lim_{k \to \infty} \sup \left(\frac{\eta}{2}\right)^k\frac{\mathcal{A}_{k}(B^\omega_{R})}{\Gamma(k+1)} \frac{\operatorname{Vol}(S^M_R)}{\operatorname{Vol}(S^\omega_{R})}<\infty\}$$ is included in the set $$\mathcal{F}_1:= \{\eta \geq 0 \,:\, \lim_{k \to \infty} \sup \left(\frac{\eta}{2}\right)^k\frac{\mathcal{A}_{k}(B^M_R)}{\Gamma(k+1)} <\infty\}\quad ,$$
\noindent so we have, using this last observation and inequality (\ref{meyer1}),
\begin{equation}\label{meyer2}
\begin{aligned}
\lambda_1(B^M_R) & = \sup \{\eta \geq 0 \,:\, \lim_{k \to \infty} \sup \left(\frac{\eta}{2}\right)^k\frac{\mathcal{A}_{k}(B^M_R)}{\Gamma(k+1)} <\infty\} \\
& \geq
\sup \{\eta \geq 0 \,:\, \lim_{k \to \infty} \sup \left(\frac{\eta}{2}\right)^k\frac{\mathcal{A}_{k}(B^\omega_{R})}{\Gamma(k+1)} \frac{\operatorname{Vol}(S^M_R)}{\operatorname{Vol}(S^\omega_{R})}<\infty\} \\
& =
\frac{\operatorname{Vol}(S^M_R)}{\operatorname{Vol}(S^\omega_{R})}\sup \{\eta \geq 0 \,:\, \lim_{k \to \infty} \sup \left(\frac{\eta}{2}\right)^k\frac{\mathcal{A}_{k}(B^\omega_{R})}{\Gamma(k+1)} <\infty\}\\
& =
\frac{\operatorname{Vol}(S^M_R)}{\operatorname{Vol}(S^\omega_{R})} \lambda_1(B^\omega_{R})\geq \lambda_1(B^\omega_{R}).
\end{aligned}
\end{equation}
If we assume ${\rm H}_{{\rm S}_r^\omega}\geq\,{\rm H}_{{\rm S}_r^M},\,\text{for all }0<r\leq R$, then we obtain $\lambda_1 (B^\omega_R) \geq \lambda_1(B^M_R)$ with the same argument, inverting all the inequalities.
Finally, equality $\lambda_1 (B^\omega_{R})=\lambda_1(B^M_{R})$ implies that all the inequalities in (\ref{meyer2}) are equalities, so we have the equality in the inequality (\ref{meyer1}), (namely, the equality in the isoperimetric inequality (\ref{cor:MeanComp1}) in Corollary \ref{cor:MeanComp}), and moreover the equality between the volumes $ \operatorname{Vol}( B^\omega_{R})=\operatorname{Vol}(B^M_{R})$ and $\operatorname{Vol}\left({\rm S}_R^\omega\right)=\operatorname{Vol}\left({\rm S}_R^M\right)$. Hence, we have, by Corollary \ref{cor:MeanComp}, the equalities
$${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M}\,\,\,\text{for all }0<r\leq R$$
\noindent and, in turn, the equalities $ \operatorname{Vol}( B^\omega_{r}) =\operatorname{Vol}(B^M_{r})$ and $\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)\,\,\forall r \in ]0,R]$. Assertions (1) and (2) follow from Proposition \ref{prop3.2} and Theorem \ref{teo:TowMomComp}.
\end{proof}
We finish the paper with a consequence of Theorems \ref{th_const_below2} and \ref{teo:TowMomComp} which summarizes the relation between the first Dirichlet eigenvalue, the $L^1$-moment spectrum and the Poisson hierarchy of the geodesic balls $B^M_R(o)$ of a Riemannian manifold which satisfies our restriction on the mean curvatures of the geodesic spheres included in it, $S^M_r(o)$, $r \leq R$.
\begin{corollary}\label{th_const_below3}
Let $(M^n,g)$ be a complete Riemannian manifold and let $(M_\omega^n,g_\omega)$ be a rotationally symmetric model space with center $o_\omega \in M_\omega^n$. Let $o \in M$ be a point in $M$ and let us suppose that $inj(o) \leq inj(o_\omega)$. Let us consider a metric ball $B^M_R(o)$, with $R < inj(o) \leq inj(o_\omega)$. Let us suppose moreover that the mean curvatures of the geodesic spheres in $M$ and $M_{\omega}$ satisfy
\begin{equation}\label{eq:meancurvatureconditions13}
{\rm H}_{{\rm S}_r^\omega}\leq\,(\geq)\, {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.
\end{equation}
Then, the following equalities are equivalent:
\begin{enumerate}
\item \label{cor:5.2.1} $ \lambda_1 (B^\omega_R)=\lambda_1(B^M_R)$.
\item \label{cor:5.2.2} $\mathcal{A}_k(B^\omega_R)=\mathcal{A}_k(B^M_R)\,\,\forall k \geq 1$.
\item \label{cor:5.2.3} $\bar{u}^\omega_{k,R}=u_{k,R} \,\,\forall k \geq 1$ in $B^M_R$.
\end{enumerate}
Moreover, equality ${\rm H}_{{\rm S}_r^\omega}= {\rm H}_{{\rm S}_r^M}\,\,\, \text{for all}\, \,\,0 < r \leq R$ implies any (and hence all) of the equalities $(1)$, $(2)$ and $(3)$.
\end{corollary}
\begin{proof}
We are going to prove these equivalences. We first assume that
$${\rm H}_{{\rm S}_r^\omega}\leq {\rm H}_{{\rm S}_r^M}\quad\text{for all}\quad 0 < r \leq R.$$
We see first that equality \eqref{cor:5.2.1} implies equalities \eqref{cor:5.2.3}, namely, that the first Dirichlet eigenvalue of the geodesic ball $B^M_R$ determines its Poisson hierarchy. To do that, we start with the last observation in Theorem \ref{th_const_below2}, namely, that equality $\lambda_1(B^\omega_{R}) = \lambda_1 (B^M_{R})$ implies that all the inequalities in (\ref{meyer2}) are equalities, so we have the equality in the inequality (\ref{meyer1}), (namely, the equality in the isoperimetric inequality (\ref{cor:MeanComp1}) in Corollary \ref{cor:MeanComp}), and moreover the equality between the volumes $ \operatorname{Vol}(B^\omega_{r}) = \operatorname{Vol}( B^M_{r})$ and $\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)$ for all $r \in [0,R]$. Hence, we have, by Corollary \ref{cor:MeanComp}, the equalities
$${\rm H}_{{\rm S}_r^\omega}=\,{\rm H}_{{\rm S}_r^M}\,\,\,\text{for all }0<r\leq R.$$
Then, by Proposition \ref{prop3.2}, we have that $\bar{u}^\omega_{1,R}=u_{1,R}$ on $B^M_R$. Hence we conclude the equality $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$ using assertion (3) in Theorem \ref{teo:TowMomComp}. We have concluded that \eqref{cor:5.2.1} implies \eqref{cor:5.2.3}.
To see that equality \eqref{cor:5.2.1} implies equalities \eqref{cor:5.2.2}, we compute now as in Corollary \ref{isoptors}: for all $k \geq 1$, we have that, as $\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)$ for all $r \in [0,R]$,
\begin{equation}
\begin{split}
\mathcal{A}_k\left({\rm B}_R^M\right) & =\int_{{\rm B}_R^M}u_{k,R}\,d\sigma=-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M u_{k+1,R}d\sigma
\\&=\,-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega}\,d\sigma= -\dfrac{1}{k+1}\,\bar{u}_{k+1,R}^{\omega}{'}(R)\operatorname{Vol}\left({\rm S}_R^M\right)\\ &=\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}\,\operatorname{Vol}\left({\rm S}_R^M\right)= \mathcal{A}_k\left({\rm B}_R^\omega\right)
\end{split}
\end{equation}
\noindent and hence we have equalities \eqref{cor:5.2.2}.
To see that equalities \eqref{cor:5.2.2} imply equality \eqref{cor:5.2.1}, i.e., that the exit time moment spectrum of $B^M_R$ determines its first Dirichlet eigenvalue, we compute, using Theorem A in \cite{HuMP3} and the fact that $\mathcal{A}_k\left({\rm B}_R^M\right)= \mathcal{A}_k\left({\rm B}_R^\omega\right)\,\forall k \geq 1$:
\begin{equation}\label{gim2}
\begin{aligned}
\lambda_1(B^M_R)&=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left({\rm B}_R^M\right)}{\mathcal{A}_{k}\left({\rm B}_R^M\right)}\\&=\lim_{k \to \infty}\dfrac{k\mathcal{A}_{k-1}\left({\rm B}_R^\omega\right)}{\mathcal{A}_{k}\left({\rm B}_R^\omega\right)}=\lambda_1(B^\omega_R).
\end{aligned}
\end{equation}
To see that equalities \eqref{cor:5.2.3} imply equality \eqref{cor:5.2.1}, namely, that the Poisson hierarchy of the ball $B^M_R$ determines its first Dirichlet eigenvalue, we will see first that equalities \eqref{cor:5.2.3} imply equalities \eqref{cor:5.2.2}. Assuming that \eqref{cor:5.2.3} is satisfied, we have that $\bar{u}_{k,R}^{\omega}=u_{k,R}$ on $B^M_R(o)$ for all $k\geq 1$. In particular, $\bar{u}_{1,R}^{\omega}=u_{1,R}$ on $B^M_R(o)$, so, by Corollary \ref{cor:MeanComp2},
$\operatorname{Vol}\left({\rm S}_r^\omega\right)=\operatorname{Vol}\left({\rm S}_r^M\right)$ and $\operatorname{Vol}\left({\rm B}_r^\omega\right)=\operatorname{Vol}\left({\rm B}_r^M\right)$ for all $r \in ]0, R]$ and, hence, given $r \in ]0,R]$, and for all $k \geq 1$,
\begin{equation}\label{ineqisopTors2}
\begin{split}
\mathcal{A}_k\left({\rm B}_R^M\right) & =\int_{{\rm B}_R^M}u_{k,R}\,d\sigma=-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M u_{k+1,R}d\sigma
\\&=\,-\dfrac{1}{k+1}\int_{{\rm B}_R^M}\operatorname{D}elta^M\bar{u}_{k+1,R}^{\omega}\,d\sigma= -\dfrac{1}{k+1}\,\bar{u}_{k+1,R}^{\omega}{'}(R)\operatorname{Vol}\left({\rm S}_R^M\right)\\ &=\dfrac{\mathcal{A}_k\left({\rm B}_R^\omega\right)}{\operatorname{Vol}\left({\rm S}_R^\omega\right)}\,\operatorname{Vol}\left({\rm S}_R^M\right)= \mathcal{A}_k\left({\rm B}_R^\omega\right)
\end{split}
\end{equation}
\noindent so we have equalities \eqref{cor:5.2.2}. Now, we use equation (\ref{gim2}) to obtain \eqref{cor:5.2.1}.
\end{proof}
\end{document}
\begin{document}
\title{Non-demolishing measurement of a spin qubit state via Fano resonance}
\author{V. Vyurkov$^{{\rm 1}}$}
\author{L. Gorelik$^{{\rm 2}}$}
\author{A. Orlikovsky$^{{\rm 1}}$}
\affiliation{$^{{\rm 1}}$ Institute of Physics and Technology of
the Russian Academy of Sciences, Nakhimovsky Avenue 34, Moscow
117218, Russia \\ $^{{\rm 2}}$ Chalmers University of Technology
and G\"{o}teborg University,SE-412 96 G\"{o}teborg, Sweden}
\date{\today}
\begin{abstract}
Fano resonances are proposed to perform a measurement of a spin
state (whether it is up or down) of a single electron in a quantum
dot via a spin-polarized current in an adjacent quantum wire.
Rashba-like spin-orbit interaction in a quantum dot prohibits
spin-flip events (Kondo-like phenomenon). That ensures the
measurement to be non-demolishing.
\end{abstract}
\keywords{quantum computer, spin qubit, quantum wire, quantum dot}
\maketitle
\section{\label{sec:level1}Introduction}
Solid state structures seem quite promising for implementing
quantum computers. The first proposal of a solid state quantum
computer based on quantum dots was put forward in 1998 by D. Loss
and D. P. DiVincenzo \cite{loss}. The quantum calculations were to be
performed on single electron spins placed in quantum dots. To read
out the result one should measure the state of the quantum dot
register consisting of single spins. Several possibilities were
proposed there. In particular, two of them exploited a spin
blockade regime. One could use spin-dependent tunneling of a
target electron into a quantum dot with a definite spin
orientation of a reference electron. The final charge state of the
dot (whether tunneling occurred or not) could be tested by a
single electron transistor (SET), which is quite sensitive to the
environment charge.
In the same year Kane proposed a solid state quantum computer based
on $^{{\rm 3}{\rm 1}}$P atoms embedded in a Si substrate
\cite{kane}. The computation was to be performed on the
$^{{\rm 3}{\rm 1}}$P nuclear spins mediated by outer shell electrons.
An inventive procedure to transfer the resulting nuclear spin state
into an electron spin state was proposed. The latter could be
measured by single electron tunneling into the reference atom,
i.e. almost by the same means as in Ref. \cite{loss}. This
idea was developed further in Ref. \cite{recher}, where it was
proposed to test a single electron spin in a quantum dot by a
spin-polarized current passing through this dot sandwiched between
leads. An inspiring experiment demonstrating such a readout of a
single electron spin placed in an open quantum dot by detecting the
current passing through the dot was presented in \cite{giorga}.
Long before the first solid state quantum computer implementations
were put forward, a series of scanning tunneling microscopy experiments
had begun to observe the evolution of a single spin with the help of a
spin-polarized current emanating from a magnetic tip \cite{S1, S2, S3, S4}.
In general, most proposals of spin-based quantum computers
relate to electron spins, although their relaxation is much faster
than that of nuclear spins. The reason is that electron spins are
relatively easier and faster to operate upon, and that the measurement
of an individual electron spin state is possible. All suggestions
for individual electron spin measurement made so far are based on
the exchange interaction between a target electron and a reference
electron whose spin orientation is known.
One of the publications on the topic \cite{engel} concerns the
problem of how to perform a particular measurement of spin states via
tunneling in adjacent quantum dots, namely, to make clear whether
the spins are parallel or antiparallel. This kind of measurement could
be employed for quantum calculations instead of organizing an
interaction between qubits, which can hardly be controlled with
sufficiently high accuracy. The recent paper \cite{sarovar} discusses a
possibility to test a single spin via spin-dependent scattering
inside a field effect transistor channel. However, spin-flip
(Kondo-like) phenomena were unreasonably ignored there.
Here we examine an opportunity to employ a quantum wire carrying
a spin-polarized current to measure a single electron spin placed
in a nearby quantum dot. Firstly, it was pointed out in Ref. \cite{vyurkov1}
that a current passing through a quantum wire could be quite sensitive
to a nearby charge due to Coulomb blockade.
Further, the sensitivity of a quantum wire with a spin-polarized current
to the spin state of an electron in an adjacent quantum dot was clarified
in Ref. \cite{vyurkov2}, where the possibility of a spin blockade of the
current was elucidated. In this paper we propose to employ Fano resonances
in a quantum wire to make the measurement much more sensitive.
We also address the question of how to make the measurement
non-demolishing by virtue of spin-orbit interaction.
\section{\label{sec:level1}Quantum wire to measure a single spin state}
We suggest a measurement which allows one, by detecting the
current, to conclude whether the electron in the quantum dot has the
same spin orientation as the electrons in the quantum wire or the
opposite one.
\begin{figure*}
\caption{\label{fig:wide}}
\end{figure*}
The structure under consideration is schematically depicted in
Fig. 1. It consists of a quantum wire transmitting a spin-polarized
current, coupled to a nearby quantum dot via tunneling.
The current through the wire is determined by the well-known Landauer
formula
\begin{equation}
\label{eq1} I(V_{sd} ) = {\frac{{e}}{{h}}}{\sum\limits_{i =
0}^{\infty} {}} \int {dE \ T_{i} (E){\left[ {f_{s} (E) - f_{d}
(E)} \right]}} ,
\end{equation}
\noindent where the summation is performed over all modes of
transverse quantization in the wire, $T_{i}(E)$ is the transmission
coefficient for the i-th mode, dependent on the total energy $E$, the
factor $e/h$ arises from the conductance quantum $e^{2}/h$ for a
spin-polarized current in a ballistic wire, and $f(E)$ is the Fermi-Dirac
distribution function
\begin{equation}
\label{eq2} f(E) = {\frac{{1}}{{1 + \exp {\frac{{E - \mu}}
{{kT}}}}}},
\end{equation}
\noindent provided that the chemical potentials in the source contact
$\mu _{{\rm s}}$ and in the drain contact $\mu _{{\rm d}}$ are
shifted by the bias: $\mu _{{\rm s}}-\mu _{{\rm d}}=e V_{{\rm s}{\rm
d}}$. The transmission coefficients $T_{i}(E)$ may be different
for different spin states of the quantum dot electron with respect
to the spin polarization of the electrons in the quantum wire. This
results in a different current. The measurement is controlled by the
potentials on the gate electrodes.
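As a simple numerical illustration of equations (\ref{eq1}) and (\ref{eq2}) (a sketch with assumed parameter values, given here only for orientation), the Landauer integral for a single, perfectly transmitting spin-polarized mode reproduces the conductance quantum:
\begin{verbatim}
# Minimal sketch of the Landauer formula for one spin-polarized mode
# (all parameter values below are illustrative assumptions). For a perfectly
# transmitting mode, T(E) = 1, the integral gives I = (e^2/h) V_sd.
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34      # SI constants
kT = 0.1e-3 * e                             # assumed thermal energy, 0.1 meV
V_sd = 1.0e-3                               # assumed source-drain bias, 1 mV
mu_s, mu_d = 0.5 * e * V_sd, -0.5 * e * V_sd

def fermi(E, mu):
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

E = np.linspace(-5e-3 * e, 5e-3 * e, 40001)
T0 = np.ones_like(E)                        # ideal ballistic lowest mode
integrand = T0 * (fermi(E, mu_s) - fermi(E, mu_d))
I = (e / h) * np.sum((integrand[1:] + integrand[:-1]) / 2) * (E[1] - E[0])
print(I, (e**2 / h) * V_sd)                 # both about 3.9e-8 A
\end{verbatim}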
Recently, proposals appeared to exploit Fano resonances for the
measurement of an individual electron spin state with the help of a
quantum wire or a quantum constriction, which in fact can be
regarded as merely a short quantum wire \cite{mourokh,vyurkov3}.
Indeed, Fano resonances make such a measurement more sensitive.
The Fano effect in the considered structure means the following. An
electron moving along the quantum wire can partially penetrate
into the quantum dot due to tunneling. The interference between
two routes, one of which passes through the dot while the other
does not, determines the transmission coefficient of an electron
through the wire. In other words, the discrete energy spectrum in
the dot interferes with a continuum in the wire. This interference
becomes destructive when the energy of an electron in the wire
coincides with that in the dot. This results in backscattering, which
can be detected as a dip in the current-voltage curve, the so-called
Fano antiresonance.
The transmission coefficient $T$ for an electron with energy
detuning $\varepsilon$ from the resonance is given by
the expression
\begin{equation}
\label{eq0} T = {\frac{{{\left| {\varepsilon + q\Gamma}
\right|}^{2}}}{{\varepsilon ^{2} + \Gamma ^{2}}}},
\end{equation}
\noindent where $\Gamma$ stands for the level broadening and $q$
is the Fano asymmetry factor. In general, $q$ is a complex number
depending on scattering and relaxation in the system. For a
ballistic quantum wire and a quantum dot without relaxation $q
\approx 0$. Here we assume that this is the case.
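A minimal numerical sketch of the lineshape (\ref{eq0}) under the assumption $q=0$ adopted here shows the antiresonance dip explicitly:
\begin{verbatim}
# Sketch of the Fano lineshape for the assumed case q = 0: the transmission
# vanishes at zero detuning (the antiresonance dip) and tends to 1 far from it.
import numpy as np

def T_fano(eps, Gamma, q=0.0):
    return np.abs(eps + q * Gamma)**2 / (eps**2 + Gamma**2)

Gamma = 1.0                                   # level broadening (arbitrary units)
eps = np.linspace(-5.0, 5.0, 11) * Gamma
print(np.round(T_fano(eps, Gamma), 3))        # 0 at eps = 0, ~0.96 at |eps| = 5*Gamma
\end{verbatim}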
However, the model put forward in \cite{mourokh} does not take
into account the possibility of a spin flip, i.e. a Kondo-like
phenomenon. Indeed, the spin exchange between an incident electron
and the electron in the quantum dot takes the same time as is required
for an incident electron to be backscattered. Therefore, the initial spin
state of the measured electron becomes demolished after only one
incident electron is backscattered. Here we endeavor to circumvent
this shortcoming by introducing a spin-orbit interaction into the
system.
\begin{figure*}
\caption{\label{fig:wide}}
\end{figure*}
However, for the sake of clarity we first discard the spin-orbit
interaction to highlight the disadvantages of such an approach to the
measurement. The general scheme of the measurement is as follows. A
conducting electron (an electron at the Fermi level) in the
quantum wire tunnels to an excited state in the quantum dot which is
split with respect to the total spin $S$, because it is the total spin
that determines the exchange energy. We also suppose that the state
$S=1$ has a smaller exchange energy than the state $S=0$ (Fig. 2).
This is valid, at least, for the excited state with orbital moment
$L=1$. It should be noted that the Lieb-Mattis theorem is not
valid for excited states. This theorem only claims that the ground
state definitely has $S=0$ for an even number of electrons. Anyhow,
this assumption is not crucial for the measurement.
We suppose that the initial state of the system is the following.
For the sake of definiteness, the spin of the electron in the quantum wire
is always directed up ($\uparrow $) and the spin of the target
electron in the ground state of the quantum dot is directed up
($\uparrow$) or down ($\downarrow$). Evidently, only the mutual spin
orientation of the two electrons matters. Due to tunneling this state
evolves into some excited state of two electrons in the quantum
dot.
The Hamiltonian describing tunneling of an electron from the
quantum wire into an exited level in the quantum dot in absence of
spin-orbit interaction reads
\begin{equation}
\label{eq3} H_{0} = \varepsilon _{0} + (\varepsilon _{1} + U_{C} -
J\vec {S}_{0} \vec {S}_{1} )a_{1}^{ +} a_{1} +
\\{\sum\limits_{j}
{}} \{T_{j} (a_{1}^{ +} b_{j} + b_{j}^{ +} a_{1} ) + \varepsilon
_{j} b_{j}^{ +} b_{j}\},
\end{equation}
\noindent where $\varepsilon _{{\rm 0}}$ is the ground level energy
of a single electron in the quantum dot, $\varepsilon _{{\rm 1}}$
is the excited level energy, $U_{C}$ is the direct Coulomb
interaction between an electron in the ground state and an
electron in the excited state, $a_{1}^{ +}$ and $a_{1}^{}$ are the
operators of creation and annihilation of an electron in the
excited level in the quantum dot, $b_{j}^{ +}$ and $b_{j}$ are the
operators of creation and annihilation of an electron in the
quantum wire in the j-th state (in general, the longitudinal momentum
and the transversal quantization subband (mode) number are
ascribed to the index $j$), $J$ is the strength of the exchange
interaction, and $\vec {S}_{0}$ and $\vec {S}_{1}$ are the spin operators
for the electron in the ground level and that in the excited level,
respectively. In general, the sign of the exchange energy (and therefore
the sign of $J$) could be positive or negative. It is only known
for sure that the ground state of the system composed of two
electrons definitely corresponds to the total spin $S=0$, owing to the
Lieb-Mattis theorem.
It is convenient to present the spin Hamiltonian in the form
\begin{equation}
\label{eq4} \vec {S}_{0} \vec {S}_{1} = S_{0}^{z} S_{1}^{z} +
\frac{1}{2}\left(S_{0}^{ +} S_{1}^{ -} + S_{0}^{ -} S_{1}^{ +}\right) ,
\end{equation}
\noindent where $S_{0}^{z}$ and $S_{1}^{z}$ are the operators of the
$z$-projection, and $S_{i}^{ \pm}$ are the corresponding raising and
lowering operators. The second term describes
spin-flip processes, i.e. Kondo-like phenomena.
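The role of the transverse (spin-flip) part of the exchange operator in (\ref{eq3})-(\ref{eq4}) can be made explicit by a short numerical illustration (a sketch with the spin operators represented by standard spin-$1/2$ matrices; it is not tied to the specific parameters of the device):
\begin{verbatim}
# Illustrative check of the exchange operator S_0.S_1 entering the tunneling
# Hamiltonian above: built from spin-1/2 matrices, its eigenvalue is -3/4 on
# the singlet (S = 0) and +1/4 on the triplet (S = 1), and acting on |up,down>
# it mixes in |down,up>, i.e. a spin flip of the dot electron.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

S0S1 = sum(np.kron(s, s) for s in (sx, sy, sz))
print(np.linalg.eigvalsh(S0S1).round(3))     # [-0.75, 0.25, 0.25, 0.25]

up_down = np.kron([1, 0], [0, 1])            # |up, down> two-electron spin state
print((S0S1 @ up_down).round(3))             # -1/4 |up,down> + 1/2 |down,up>
\end{verbatim}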
Hereafter, we take into consideration only the three lowest
states of two electrons in the quantum dot: the ground state with
total spin $S=0$ and two excited states corresponding to the same
space wave function but different total spin, $S=0$ and $S=1$.
The energies of these excited states differ due to the exchange
interaction.
We suppose that the tunneling coupling $T$ of both
states with the quantum wire continuum is the same, as the space wave
functions are the same. Hereafter, we employ designations
analogous to those in Ref. \cite{engel}.
There are two triplet states with $S=1$ which can originate after
an electron with spin up tunnels from a quantum wire into a
quantum dot
\begin{equation}
\label{eq-1} \uparrow _{D} \uparrow _{D}
\end{equation}
\noindent for parallel spins and
\begin{equation}
\label{eq-2}{{{\left[ { \uparrow _{D} \downarrow _{D} + \downarrow
_{D} \uparrow _{D}} \right]}} \mathord{\left/ {\vphantom {{{\left[
{ \uparrow _{D} \downarrow _{D} + \downarrow _{D} \uparrow _{D}}
\right]}} {\sqrt {2}}} } \right. \kern-\nulldelimiterspace} {\sqrt
{2}}}
\end{equation}
\noindent for antiparallel spins. There could also occur a singlet
state with $S=0$
\begin{equation}
\label{eq-3}{{{\left[ { \uparrow _{D} \downarrow _{D} - \downarrow
_{D} \uparrow _{D}} \right]}} \mathord{\left/ {\vphantom {{{\left[
{ \uparrow _{D} \downarrow _{D} - \downarrow _{D} \uparrow _{D}}
\right]}} {\sqrt {2}}} } \right. \kern-\nulldelimiterspace} {\sqrt
{2}}}
\end{equation}
\noindent for antiparallel spins. The above formulae are sensitive
to the position: the first place corresponds to the ground state.
Tunneling into the ground state from the quantum wire seems
inappropriate for the measurement because the target electron may
escape from the quantum dot in the same way as the reference
electron.
Therefore, tunneling into the excited state is preferable, as it
leaves the target electron in the dot.
First of all, we must discuss the initial state of the system, when
only the target electron is situated in the ground state of the
quantum dot and there is a reference electron in the quantum wire.
For a parallel spin orientation the state is
\begin{equation}
\label{eq-4} \uparrow _{D} \uparrow _{W}
\end{equation}
This state corresponds to a definite value of the total spin, $S=1$, and
there is no entanglement between the spin and space coordinates.
For antiparallel spins the state is
\begin{equation}
\label{eq-5} \downarrow _{D} \uparrow _{W} = {{{\left[ {
\downarrow _{D} \uparrow _{W} - \uparrow _{D} \downarrow _{W}}
\right]}} \mathord{\left/ {\vphantom {{{\left[ { \downarrow _{D}
\uparrow _{W} - \uparrow _{D} \downarrow _{W}} \right]}} {\sqrt
{2}}} } \right. \kern-\nulldelimiterspace} {\sqrt {2}}} +
{{{\left[ { \uparrow _{D} \downarrow _{W} + \downarrow _{D}
\uparrow _{W}} \right]}} \mathord{\left/ {\vphantom {{{\left[ {
\uparrow _{D} \downarrow _{W} + \downarrow _{D} \uparrow _{W}}
\right]}} {\sqrt {2}}} } \right. \kern-\nulldelimiterspace} {\sqrt
{2}}}
\end{equation}
Indeed, this state is a superposition of $S=0$ and $S=1$;
moreover, there exists an entanglement of the space and spin
coordinates. It is worth noting that the two space components of this
state do not interfere because they are orthogonal with respect to the
total spin.
When the conditions shown in Fig. 2 are realized and electrons
from the Fermi level in the quantum wire can tunnel to the state with
$S=1$ in the quantum dot, the reflection coefficient for
antiparallel spins (10) is exactly twice smaller than that for
parallel spins (9). This constitutes the basis of a spin state
measurement.
Actually, it is an illusion that the foregoing procedure is already
satisfactory for the measurement. In reality, Kondo-like phenomena,
i.e. spin-flip events, should be taken into consideration. The
spin flip occurs within the time $\tau \sim (J / \hbar
)^{ - 1}$. The broadening of the levels with $S=0$ and $S=1$ depends
on the tunneling rate, so that $\Gamma \approx T$. The requirement that
these levels be clearly distinguished implies $J \gg T$. It follows
that the spin flip process must be much faster than the tunneling.
We propose to employ the spin-orbit interaction inside the quantum dot
to prevent spin flips. Hereafter, we analyze the simplest case when
a coin-like quantum dot is formed from a quantum well. It is
sketched in Fig. 3. The arrows there indicate the interface
electric field at different interfaces caused by the conduction (or
valence) band discontinuity. For instance, this kind of dot could
be fabricated by etching with a subsequent overgrowth of a host
semiconductor with a wider gap. The Rashba spin-orbit interaction
\cite{rashba} originates in the interface electric field and the strong
conduction and valence band coupling in narrow-gap semiconductors
\cite{zakharova,wissinger}. This mechanism of spin-orbit
interaction turns out to be several orders of magnitude stronger than
that based on the electric field in a one-band model. It seems to be
the most suitable for achieving the goal of a non-demolishing
measurement of a spin qubit state.
The original Rashba Hamiltonian describing spin-orbit interaction
in a two-dimensional electron gas (2DEG) reads \cite{rashba}:
\begin{equation}
\label{eq5} \hat {H}_{R} = \alpha _{R} [\hat {\vec {S}}\times \hat
{\vec {k}}]\vec {\upsilon} \equiv \alpha _{R} [\hat {\vec
{k}}\times \vec {\upsilon} ]\hat {\vec {S}},
\end{equation}
\noindent where $\alpha _{{\rm R}}$ is the Rashba constant, $\vec {k}
= - i{\frac{{\partial}} {{\partial \vec {r}}}}$ is the operator of
the in-plane momentum, $\vec {S}$ is the spin operator, and the unit vector
$\vec {\upsilon}$ is directed perpendicular to the 2DEG plane. The
Rashba constant $\alpha _{{\rm R}}$ is non-zero for a 2DEG in a
non-symmetric quantum well. Unfortunately, the Rashba
Hamiltonian (\ref{eq5}) results in an entanglement of the spin and
space variables for an electron state in a quantum dot cut out of a
quantum well. It occurs even in the ground state. This kind of
quantum dot is not suitable for a spin qubit application. The
appropriate quantum dot should be made of a symmetric quantum well
with a vanishing original Rashba term (\ref{eq5}).
The Rashba Hamiltonian (\ref{eq5}) is widely used for a 2DEG
originating at a single interface in a heterostructure, for
example, at the common GaAs/AlGaAs interface. We adopt this
description to introduce a spin-orbit interaction caused by the
interface field at the side wall of the dot. To that end, we propose a
model Rashba-like Hamiltonian
\begin{equation}
\label{eq6} \hat {H}_{RS} = \beta [\hat {\vec {S}}\times \hat
{\vec {k}}]\hat {\vec {r}} = \beta [\hat {\vec {k}}\times \hat
{\vec {r}}]\hat {\vec {S}} = \beta \hat {\vec {L}}\hat {\vec {S}},
\end{equation}
\noindent where $\hat {\vec {L}} = [\hat {\vec {k}}\times \hat
{\vec {r}}]$ is the operator of the angular momentum of an electron in
the dot, which appears after the permutation, $\hat {\vec {S}}$ is the
spin operator, and $\beta $ is a coefficient analogous to the Rashba
constant; it can be roughly evaluated as $\beta \approx \alpha
_{{\rm R}}/D$, where $D$ is the dot diameter.
\begin{figure}
\caption{\label{fig:epsart}}
\end{figure}
\begin{figure*}
\caption{\label{fig:wide}}
\end{figure*}
For the measured electron in the ground state with $L=0$ the spin-orbit
interaction equals zero. The electron can possess any orientation
of spin. Therefore, this is a good quantum dot for spin qubit
manipulation. For the excited state with angular momentum $L=1$
the Hamiltonian (\ref{eq6}) looks like
\begin{equation}
\label{eq7} H_{RS} = \beta L_{1Z} S_{1Z} ,
\end{equation}
\noindent where $L_{1Z} = \pm 1$ is the z-projection of the orbital
moment and $S_{1Z} = \pm 1 / 2$ is the z-projection of the electron spin in
the excited state. The spin-orbit term (\ref{eq6}) should be added
to the Hamiltonian (\ref{eq3}) to obtain the resultant
Hamiltonian
\begin{equation}
\label{eq8} H = H_{0} + H_{RS} .
\end{equation}
The eigenstates of the Hamiltonian (\ref{eq8}) relevant to two
electrons in the dot are depicted in Fig. 4. Tunneling of an
electron from the quantum wire to the quantum dot occurs only if this
electron has the same spin orientation as the measured electron in
the dot (the arrow in Fig. 4 marks the proper level).
The spin-orbit splitting makes the spin flip impossible due to the energy
conservation law. As reported in Ref. \cite{nitta}, the Rashba
spin-orbit splitting in $A_{{\rm I}{\rm I}{\rm I}}B_{{\rm V}}$
heterostructures may attain several meV. At the same time, the
Zeeman energy splitting proposed in Ref. \cite{engel} for the same
purpose of suppressing spin-flip processes in a dot approximately
equals $0.3\,meV$ even in a rather big magnetic field of $5\,T$.
In accordance with expression (3), the mean reflection coefficient
$R=1-T$ in the range $-\Gamma<\varepsilon<+\Gamma$ is
approximately $1/3$. It means that the relative decrease in
current for parallel spins is around $1/3$ if the bias $V \le
\Gamma/e$. For antiparallel spins it is half as large, i.e.\ around
$1/6$. For the sake of better sensitivity, the optimal bias $V$ for
a ballistic quantum wire should be chosen around $V=\Gamma/e$.
Then the absolute value of the current can be roughly estimated for
a single-mode quantum wire as $I=G_{0}V$, where $G_{0}=e^{2}/h=(26\ {\rm k\Omega})^{-1}$ is the conductance quantum for a
spin-polarized current in the wire. The possible level broadening
$\Gamma $ is restricted only by the spin-orbit splitting. Supposing
the latter to be several meV, we are able to choose the broadening as
$1$ meV. Substituting $V=1$ mV, one arrives at a current equal to
$4\cdot10^{-8}$ A, which can easily be measured by
up-to-date equipment. Moreover, this current exceeds that in a
single-electron transistor (SET). The greater the current, the
faster its measurement. One more significant advantage of a
quantum wire is that it can be emptied during the computation and,
therefore, unlike a SET, the quantum wire does not introduce
additional decoherence into the system during that time.
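For orientation, the numbers above combine as follows (a back-of-the-envelope check; the exact prefactor depends on the transmission profile of the resonance):
\[
I = G_{0}V = \frac{e^{2}}{h}\,V \approx \frac{1\ {\rm mV}}{26\ {\rm k\Omega}} \approx 3.8\cdot 10^{-8}\ {\rm A},
\]
consistent with the value $4\cdot 10^{-8}$ A quoted above; a relative current change of $1/3$ ($1/6$) then corresponds to a signal of roughly $1.3\cdot 10^{-8}$ A ($0.6\cdot 10^{-8}$ A).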
It is worth noting that a quantum wire allows one to perform a partial
measurement of the state of two adjacent spin qubits, as in
\cite{engel}, or even of distant ones: whether they are parallel or
antiparallel. This also provides a possibility of quantum
computation without organizing a perfectly controllable
interaction between qubits.
If for some reason the reflection coefficient is too low, the
sensitivity may hopefully be augmented when $N$ identical
qubits are placed in series along the wire. When the qubits are
situated randomly and, therefore, interference does not matter, the
sensitivity may rise as $\sim N$. When there is an order in the qubit
positions and the interference is significant, the sensitivity
increases as $\sim N^{2}$.
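This scaling can be made plausible as follows (a heuristic estimate, assuming weak scattering with a single-qubit reflection amplitude $r$, $|r|\ll 1$): adding intensities for random positions versus adding amplitudes for ordered ones gives
\[
R_{\rm incoh} \approx N|r|^{2},
\qquad
R_{\rm coh} \approx \left|\sum_{j=1}^{N} r\,e^{i\phi _{j}}\right|^{2} \le N^{2}|r|^{2},
\]
with the upper bound reached when the phases $\phi _{j}$ accumulated between the scatterers are matched, e.g.\ for a periodic arrangement at the Bragg condition.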
The opportunity to perform the proposed measurement is confirmed
by the findings of Ref. \cite{sato}. Schematically, the structure was
almost the same as that in Fig. 1. The combined Fano-Kondo
anti-resonances were observed in the $I$-$V$ curve and exploited to
test relaxation in a multi-electron quantum dot. In principle, this
setup could serve as a prototype of our proposal.
\section{\label{sec:level1}Conclusion}
We have examined the possibility of using Fano-Rashba resonances for a
non-demolishing measurement of the spin state (whether it is up or
down) of a single electron in a quantum dot (spin qubit) via a
spin-polarized current in an adjacent quantum wire. The spin-orbit
interaction in the quantum dot prohibits spin-flip events
(a Kondo-like phenomenon). This makes the measurement
non-demolishing.
\begin{acknowledgments}
The research was supported by NIX Computer Company
([email protected]), grant F793/8-05, via the grant of The Royal
Swedish Academy of Sciences, and also by Russian Basic Research
Foundation, grants \# 08-07-00486-a-a and \# 06-01-00097-a.
\end{acknowledgments}
\end{document}
|
\begin{document}
\title{McCool groups of toral relatively hyperbolic groups}
\author{Vincent Guirardel and Gilbert Levitt}
\date{\today}
\maketitle
\begin{abstract}
The outer automorphism group $\Out(G)$ of a group $G$ acts on the set of conjugacy classes of elements of $G$.
McCool proved that the stabilizer ${\mathrm{Mc}}(\calc)$ of a finite set of conjugacy classes is finitely presented when $G$ is free. More generally, we consider the group ${\mathrm{Mc}}(\calh)$ of outer automorphisms $\Phi$ of $G$ acting trivially on a family of subgroups $H_i$, in the sense that $\Phi$ has representatives $\alpha_i$ with $\alpha_i$ equal to the identity on $H_i$.
When $G$ is a toral relatively hyperbolic group, we show that these two definitions lead to the same subgroups of $\Out(G)$, which
we call ``McCool groups'' of $G$.
We prove that such McCool groups are of type VF (some finite index subgroup has a finite classifying space). Being of type VF also holds for the group of automorphisms
of $G$ preserving a splitting of $G$ over abelian groups.
We show that McCool groups satisfy a uniform chain condition: there is a bound, depending only on $G$, for the length of a strictly decreasing sequence of McCool groups of $G$. Similarly, fixed subgroups of automorphisms of $G$ satisfy a uniform chain condition.
\end{abstract}
\section{Introduction}
Mapping class groups of punctured surfaces may be viewed as subgroups of $\Out(F_n)$ for some $n$ (with $F_n$ denoting the free group of rank $n$). Indeed, they consist of automorphisms of $F_n$ fixing conjugacy classes corresponding to punctures. More generally, the group of automorphisms of $F_n$ fixing a finite number of conjugacy classes was studied by McCool, who proved in particular that such groups are finitely presented \cite{McCool_fp}.
We therefore define:
\begin{dfn}\label{mcc1}
Let $G$ be a group. Let $\calc$ be a set of conjugacy classes $[c_i]$ of elements of $G$.
We denote by ${\mathrm{Mc}}(\calc)$ the subgroup of $\Out(G)$ consisting of outer automorphisms fixing each $[c_i]$.
If $\calc$ is finite, we say that ${\mathrm{Mc}}(\calc)$ is an \emph{elementary McCool group} of $G$ (or of $\Out(G)$).
\end{dfn}
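A classical illustration (recalled here only for orientation): for the once-punctured torus one has $\pi_1\cong F_2=\langle a,b\rangle$, the puncture corresponds to the conjugacy class of the commutator $c=aba^{-1}b^{-1}$, and by Nielsen every automorphism of $F_2$ sends $c$ to a conjugate of $c^{\pm1}$, the sign being the determinant under the isomorphism $\Out(F_2)\cong GL(2,{\mathbb {Z}})$. Thus
$${\mathrm{Mc}}(\{[c]\})\cong SL(2,{\mathbb {Z}}),$$
the (orientation-preserving) mapping class group of the once-punctured torus.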
Work on automorphisms suggests a more general definition:
\begin{dfn}\label{gmc}
Let $G$ be a group. Let $\calh=\{H_i \}$ be an arbitrary family of subgroups of $G$.
We say that $\varphi\in\Aut(G)$, and its image $\Phi\in\Out(G)$, \emph{act trivially on $\calh$} if $\varphi$ acts on each $H_i$ as conjugation by some $g_i\in G$. Note that $\Phi $ acts trivially if and only if it has representatives $\varphi_i\in\Aut(G)$ with $\varphi_i$ equal to the identity on $H_i$.
We denote by ${\mathrm{Mc}}(\calh)$, or ${\mathrm{Mc}}_G(\calh)$, the subgroup of $\Out(G)$ consisting of all $\Phi$ acting trivially on $\calh$.
If $\calh$ is a finite family of finitely generated subgroups,
we say that ${\mathrm{Mc}}(\calh)$
is a \emph{McCool group} of $ G$ (or of $\Out(G)$).
\end{dfn}
Elementary McCool groups correspond to McCool groups with $\calh$ a finite family of cyclic groups. ${\mathrm{Mc}}(\calh)$ does not change if we replace
the $H_i$'s by conjugate subgroups, so it is really associated to a family of conjugacy classes of subgroups.
For a topological analogy,
one may think of ${\mathrm{Mc}}(\calh)$ as the group of automorphisms of $G=\pi_1(X)$ induced by homeomorphisms of $X$ equal to the identity on subspaces $Y_i$ with $\pi_1(Y_i)=H_i$.
McCool groups are relevant for automorphisms for the following reason (see \cite{GL6}).
Consider a splitting of a group $\hat G$ as a graph of groups in which $G$ is a vertex group, and
the $H_i$'s
are
the incident edge groups.
Then any element of ${\mathrm{Mc}}_G(\calh)$
extends ``by the identity'' to an automorphism of $\hat G$.
Topologically, if $X$ is a vertex space in a graph of spaces $\hat X$, and edge spaces are attached to subspaces $Y_i\subset X$, then
any homeomorphism of $X$ equal to the identity on the $Y_i$'s extends to $\hat X$ by the identity.
In this paper we will consider McCool groups when $G$ is a \emph{toral relatively hyperbolic group}:
$G$ is torsion-free, and hyperbolic relative to a finite set of finitely generated abelian subgroups. This includes in particular torsion-free hyperbolic groups, limit groups, and groups acting freely on $\R^n$-trees.
We will show (Corollary \ref{genmc}) that in this case
any ${\mathrm{Mc}}(\calh)$ is an elementary McCool group ${\mathrm{Mc}}(\calc)$; in other words,
it is equivalent for a subgroup of $\Out(G)$ to be
an elementary McCool group ${\mathrm{Mc}}(\calc)$, or to be a McCool group ${\mathrm{Mc}}(\calh)$ with $\calh$ a finite family of finitely generated groups, or to be ${\mathrm{Mc}}(\calh)$ with $\calh$ arbitrary.
We will not always make the distinction in the statements given below.
It was proved by McCool \cite{McCool_fp} that (elementary) McCool groups of a free group are finitely presented.
Culler-Vogtmann \cite[Corollary 6.1.4]{CuVo_moduli} proved that they are of \emph{type VF}: they have a finite index subgroup with a finite classifying space (i.e.\ there exists a classifying space which is a finite complex). We proved in \cite{GL6} that $\Out(G)$ is of type VF if $G$ is toral relatively hyperbolic (in particular, $\Out(G)$ is virtually torsion-free). Our first main results extend this to certain naturally defined subgroups of $\Out(G)$.
\begin{thm} \label{mc}
If $G$ is a toral relatively hyperbolic group,
then any McCool group ${\mathrm{Mc}}(\calh)\subset\Out(G)$ is of type VF.
\end{thm}
\begin{thm} \label{mct}
If $G$ is a toral relatively hyperbolic group, and $T$ is a simplicial tree on which $G$ acts with abelian edge stabilizers, then the group of automorphisms $\Out(T)\subset\Out(G)$ leaving $T$ invariant is of type VF.
\end{thm}
Our most general result in this direction (Corollary \ref{thm_general}) combines these two theorems; it implies in particular that \emph{${\mathrm{Mc}}(\calh)\cap\Out(T)$ is of type VF if $T$ is as above and $\calh$ is any family of subgroups, each of which fixes a point in $T$.}
\begin{rem*}
Some of these results may be extended to groups which are hyperbolic relative to virtually polycyclic subgroups, but with the weaker conclusion that the automorphism groups are of type $F_\infty$ (see \cite{GL_extension}). On the other hand, one can show that, if there exists a hyperbolic group which is not residually finite,
then there exists a hyperbolic group $G$ with $\Out(G)$ not virtually torsion-free (hence not VF).
\end{rem*}
Our second main result is the following:
\begin{thm} \label{mccc}
Let $G$ be a toral relatively hyperbolic group.
McCool groups of $G$ satisfy a \emph{uniform chain condition:}
there exists $C=C(G)$ such that, if
$${\mathrm{Mc}}(\calh_0)\subsetd {\mathrm{Mc}}(\calh_1)\subsetd\dots\subsetd{\mathrm{Mc}}(\calh_p ) $$
is a strictly decreasing chain of McCool groups in $\Out(G)$, then $p\le C$.
\end{thm}
This is based, among other things, on the vertex finiteness proved in \cite{GL_vertex}:
if $G$ is toral relatively hyperbolic, then all vertex groups occurring in splittings of $G$ over abelian groups
lie in finitely many isomorphism classes.
The chain condition, proved in Section \ref{pfcc} for McCool groups ${\mathrm{Mc}}(\calh)$ with $\calh$ a finite family of finitely generated groups, implies:
\begin{cor}\label{genmc}
Let $G$ be a toral relatively hyperbolic group.
If $\calh $ is a (possibly infinite) family of (possibly infinitely generated) subgroups $H_i\subset G$,
there exists a finite set of conjugacy classes $\calc$
such that ${\mathrm{Mc}}(\calh)={\mathrm{Mc}}(\calc)$. In particular, any ${\mathrm{Mc}}(\calh)$ is a McCool group, and any McCool group is an elementary McCool group ${\mathrm{Mc}}(\calc)$.
\end{cor}
The chain condition also implies that
no McCool group ${\mathrm{Mc}}(\calh)\subset\Out(G)$ is conjugate to a proper subgroup. Note, however, that McCool groups may fail to be co-Hopfian (they may be isomorphic to proper subgroups). To illustrate the variety of McCool groups, we show:
\begin{prop} \label{infmc}
$\Out(F_n)$ contains infinitely many non-isomorphic McCool groups if $n\ge4$; it contains infinitely many non-conjugate McCool groups if $n\ge3$.
\end{prop}
It may be shown that the bounds on $n$ are sharp (see the appendix).
We will also show in the appendix that, if $G$ is a torsion-free \emph{one-ended} hyperbolic group, then $\Out(G)$ only contains finitely many McCool groups up to conjugacy.
Say that $J\subset G$ is \emph{a fixed subgroup} if there is a family of automorphisms $\alpha_i\in\Aut(G)$ such that $J=\cap_i{\mathrm{Fix\,}}\alpha_i$, with ${\mathrm{Fix\,}}\alpha=\{g\in G \mid \alpha(g)=g\}$. The chain condition also implies:
\begin{thm}\label{uccfix}
Let $G$ be a toral relatively hyperbolic group. There is a constant $c=c(G)$ such that, if $J_0\subsets J_1\subsets\dots\subsets J_p$ is a strictly ascending chain of fixed subgroups, then $p\le c$.
\end{thm}
This was proved by Martino-Ventura
\cite{MaVe_fixed}
for $G$ free, with $c(F_n)=2n$. In \cite{GL7}, we will apply Theorems \ref{mc} and \ref{uccfix} to the study of stabilizers for the action of $\Out(G)$ on spaces of $\R$-trees.
As explained above, one does not get new groups by allowing the set $\calc$ in Definition \ref{mcc1} to be infinite, or by considering arbitrary subgroups as in Definition \ref{gmc}.
The following definition provides a genuine generalization.
\begin{dfn}
Let $G$ be a group, and $\calc$ a finite set of conjugacy classes $[c_i]$. We write $\calc\m$ for the set of classes $[c_i\m]$. Let $\wh{\mathrm{Mc}}(\calc)$ be the subgroup of $\Out(G)$ consisting of automorphisms leaving $\calc\cup\calc\m$ globally invariant; it contains ${\mathrm{Mc}}(\calc)$ as a normal subgroup of finite index.
We say that $\wh{\mathrm{Mc}}(\calc)$ is an \emph{extended elementary McCool group} of $G$.
\end{dfn}
More generally, if $\calh $ is a finite family of subgroups,
one can define
finite extensions of ${\mathrm{Mc}}(\calh)$ by allowing the $H_i$'s to be permuted, or the action on $H_i$ to be only ``almost'' trivial.
\begin{prop} \label{mccet}
Given a toral relatively hyperbolic group $G$, there exists a number $C$ such that, if a subgroup $\wh M\subset \Out(G)$ contains a group $ {\mathrm{Mc}}(\calh)$ with finite index, then
the index $[\wh M:{\mathrm{Mc}}(\calh)]$ is bounded by $C$.
In particular, for $\calc$ finite, the index of ${\mathrm{Mc}}(\calc)$ in $\wh{\mathrm{Mc}}(\calc)$ is bounded by a constant depending only on $G$.
\end{prop}
It follows that extended elementary McCool groups satisfy a uniform chain condition as in Theorem \ref{mccc} (see Corollary \ref{eucc}). We also have:
\begin{cor} \label{rless}
Let $G$ be a toral relatively hyperbolic group. Let $A$ be any subgroup of $\Out(G)$, and let $\calc_A$ be the (possibly infinite) set of conjugacy classes of $G$ whose $A$-orbit is finite. The image of $A$ in the group of permutations of $\calc_A$ is finite, and its order is bounded by a constant depending only on $G$. In other words, there is a subgroup $A_0\subset A$ of bounded finite index such that every conjugacy class in $G$ is fixed by $A_0$ or has infinite orbit under $A_0$.
\end{cor}
When $G$ is free, one may take for $A_0$ the intersection of $A$ with a fixed finite index subgroup of $\Out(G)$ (independent of $A$) \cite{HaMo_announcement}.
One may also consider subgroups of $\Aut(G)$.
\begin{dfn}
Let $\calh$ be a family of (conjugacy classes of) subgroups, and $H_0<G$ another subgroup.
Let $
{\mathrm{Ac}}(\calh, H_0)\subset\Aut(G)$ be the group of automorphisms acting trivially on $\calh$ (in the sense of Definition \ref{gmc}) and fixing the elements of $H_0$.
\end{dfn}
\begin{prop} \label{mcaut}
If $G$ is a non-abelian toral relatively hyperbolic group,
then $
{\mathrm{Ac}}(\calh, H_0)$ is an extension $$1\to K\to
{\mathrm{Ac}}(\calh, H_0)\to {\mathrm{Mc}}(\calh')\to 1$$ where ${\mathrm{Mc}}(\calh')\subset\Out(G)$ is a McCool group, and $K $ is the centralizer of $H_0$ (isomorphic to $G$ or to ${\mathbb {Z}}^n$ for some $n\ge0$).
\end{prop}
\begin{cor} \label{mca}
Theorems \ref{mc} and \ref{mccc}
also hold in $\Aut(G)$: groups of the form $
{\mathrm{Ac}}(\calh, H_0)$ are of type VF
and satisfy a uniform chain condition.
\end{cor}
Theorems \ref{mc} and \ref{mct} are proved in Section \ref{clas}, and Theorem \ref{mccc} is proved in Section \ref{pfcc}. All other results are proved in Section \ref{pfcor}.
\paragraph{Acknowledgements}
The first author acknowledges support from ANR-11-BS01-013, the Institut Universitaire de France, and from the Lebesgue center of mathematics.
The second author acknowledges support from ANR-10-BLAN-116-03.
\section{Preliminaries}
In this paper, $G$ will always denote a toral
relatively hyperbolic group.
Any non-trivial abelian subgroup $A$ of $G$ is contained in a unique maximal abelian subgroup.
The maximal abelian subgroups are malnormal ($G$ is CSA), finitely generated, and there are finitely many non-cyclic ones up to conjugacy. Two subgroups of $A$ which are conjugate in $G$ are equal.
The center of a group $H$ will be denoted by $Z(H)$. We write $N_K(H)$ for the normalizer of a group $H$ in a group $K$, with $N(H)=N_G(H)$. Centralizers are denoted by $Z_K(H)$.
We say that $\Phi\in\Out(G)$ preserves a subgroup $H$, or leaves $H$ invariant, if its representatives $\varphi\in\Aut(G)$ map $H$ to a conjugate. If $\varphi\in\Aut(G)$ equals the identity on $H$, we say that it fixes $H$.
\begin{dfn} \label{pres}
If $\calh$ is a family of subgroups, we let $ \Out(G;\calh)\subset\Out(G)$ be the group of automorphisms preserving each $H\in \calh$, and $\widehat\Out(G;\calh)$ the group of automorphisms preserving $\calh$ globally (possibly permuting groups in $\calh$).
We denote by $$\Out(G;\mk\calh)={\mathrm{Mc}}(\calh)\subset \Out(G)$$ the group of automorphisms acting trivially on groups in $\calh$ (as in Definition \ref{gmc}).
We write $$\Out(G;\mk\calh,\calk):=\Out(G;\mk\calh )\cap\Out(G; \calk),$$ and $$\Out(G;\calh,\calk):=\Out(G;\calh \cup\calk). $$
\end{dfn}
\begin{rem*}
$\Out(G;\mk\calh)$ and ${\mathrm{Mc}}(\calh)$ denote the same group. The notation $\Out(G;\mk\calh)$ is more flexible and will be convenient in Section \ref{clas}.
We will often view a set of conjugacy classes $\calc=\{[c_i]\}$ as a family of cyclic subgroups $\calh=\{\langle c_i\rangle\}$,
since ${\mathrm{Mc}}(\calc)={\mathrm{Mc}}(\calh)$. Note
that $\Out(G;\calh)$ is larger than ${\mathrm{Mc}}(\calc)={\mathrm{Mc}}(\calh)$
since $c_i$ may be sent
to a conjugate of $c_i\m$.
\end{rem*}
For example, suppose that $H<G=\bbZ^n$ is the subgroup generated by the first $k$ basis elements, and $\calh=\{H\}$. Then $\Out(G)=GL(n,{\mathbb {Z}})$; the group $\Out(G;\calh)$ consists of block triangular matrices,
and $\Out(G;\mk\calh)={\mathrm{Mc}}(\calh)$ is the group of matrices fixing the first $k$ basis vectors.
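In block form (with matrices acting on column vectors with respect to this basis, $A\in GL(k,{\mathbb {Z}})$, $D\in GL(n-k,{\mathbb {Z}})$, and $B$ an arbitrary $k\times (n-k)$ integer matrix), this reads
$$\Out(G;\calh)=\left\{\left(\begin{array}{cc} A & B\\ 0 & D\end{array}\right)\right\},
\qquad
\Out(G;\mk\calh)={\mathrm{Mc}}(\calh)=\left\{\left(\begin{array}{cc} I_k & B\\ 0 & D\end{array}\right)\right\}.$$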
There are inclusions $\Out(G;\mk\calh)\subset \Out(G;\calh)\subset \wh\Out(G;\calh)$.
Note that $\Out(G;\mk\calh)$ has finite index in $ \Out(G;\calh)$ and $\widehat\Out(G;\calh)$ if $\calh$ is a finite family of cyclic groups.
Given a family $\calh$ and a subgroup $J$, we denote by $\calh_{ | J}$ the $J$-conjugacy classes of subgroups of $J$ conjugate to a group of $\calh$. We view $\calh_{ | J}$ as a family of subgroups of $J$, each defined up to conjugacy in $J$. In the next subsection
we will define a closely related notion $\calh_{||J}$ when $J=G_v$ is a vertex stabilizer in a tree.
If $\calc$ is a set of conjugacy classes $[c_i]$, viewed as a set of cyclic subgroups,
$\calc_{ | J}$ is the set of $J$-conjugacy classes of elements of $J$ representing elements in $\calc$.
Now suppose that
subgroups of $J$ which are conjugate in $G$ are conjugate in $J$;
this holds for instance if $J$ is malnormal (in particular if $J$ is a free factor), and also if $J$ is abelian.
In this case
we may view $\calh_{ | J} $ as a subset of $\calh$; it is finite if $\calh$ is.
\subsection{Trees and splittings} \label{tre}
A tree will be a simplicial tree $T$ with an action of $G$ without inversions.
A tree $T$ is \emph{relative to $\calh$} (resp.\ to $\calc$) if any group in $\calh$ (resp.\ any element representing a class in $\calc$) fixes a point in $T$.
Two trees are considered to be the same if there is a $G$-equivariant isomorphism between them. In this paper, all trees will have abelian edge stabilizers.
Unless mentioned otherwise, we
assume that the action is \emph{minimal} (there is no proper invariant subtree). We usually assume that there is \emph{no redundant vertex} (if $T\setminus \{x\}$ has two components, some $g\in G$ interchanges them). If a finitely generated subgroup $H\subset G$ acts on $T$ with no global fixed point, there is a smallest $H$-invariant subtree called the \emph{minimal subtree} of $H$.
The tree $T$ is \emph{trivial} if there is a global fixed point (minimality then implies that $T$ is a point).
An element of $G$, or a subgroup, is \emph{elliptic} if it fixes a point in $T$. Conjugates of elliptic subgroups are elliptic, so we also consider elliptic conjugacy classes.
An action of $G$ on a tree $T$ gives rise to a splitting of $G$, i.e.\ a decomposition of $G$ as the fundamental group of the quotient graph of groups $\Gamma=T/G$. Conversely, $T$ is the Bass-Serre tree of $\Gamma$.
All definitions given here apply to both splittings and trees. In particular, a splitting is relative to $\calh$ if every $H\in\calh$ has a conjugate contained in a vertex group.
Minimality implies that the graph $\Gamma$ is finite. There is a one-to-one correspondence between vertices (resp.\ edges) of $\Gamma$ and $G$-orbits of vertices (resp.\ edges) of $T$.
We denote by $V$ the set of vertices of $\Gamma$, and by $G_v$ the group carried by a vertex $v\in V$. We also view $v$ as a vertex of $T$ with stabilizer $G_v$. Similarly, we denote by $e$ an edge of $\Gamma$ or $T$, by $G_e$ the corresponding group (always abelian in this paper), and by $E$ the set of non-oriented edges of $\Gamma$.
Edge groups being abelian, hence relatively quasiconvex, every vertex group $G_v$ is toral relatively hyperbolic (see for instance \cite{GL6}).
The edge groups carried by edges of $\Gamma$ incident to a given vertex $v$ will be called the \emph{incident edge groups} of $G_v$. We denote by $\mathrm{Inc}_v$ the family of incident edge groups (we view it as a finite family of subgroups of $G_v$, each well-defined up to conjugacy).
If $\calh$ is a finite family of subgroups of $G$, and $v$ is a vertex stabilizer of $T$, we denote by $\calh_{||G_v}$ the family of subgroups $H\subset G_v$ which are conjugate to a group of $\calh$ and fix no other point in $T$. Two such groups are conjugate in $G_v$ if they are conjugate in $G$ (\cite{GL6}, Lemma 2.2 where the notation $\calh_{|G_v}$ is used instead),
so we may also view $\calh_{||G_v}$ as a subset of $\calh$ (it contains some of the groups of $\calh$ having a conjugate in $G_v$), or as a finite family of subgroups of $G_v$,
each well-defined up to conjugacy
($\calh_{||G_v}$ may be smaller than $\calh_{ | G_v}$ because we do not include subgroups of edge groups).
Any splitting of $G_v$ relative to $\mathrm{Inc}_v $ extends to a splitting of $G$. If $T$ is relative to $\calh$, any splitting of $G_v$ relative to $\mathrm{Inc}_v\cup \calh_{||G_v}$ is relative to $\calh_{|G_v}$ and extends to a splitting of $G$ relative to $\calh$.
If $\calc$ is a set of conjugacy classes, we view
$\calc_{ | | G_v}$
as the subset of $\calc$ consisting of classes having a representative that fixes $v$ and no other vertex.
In particular, $\calc_{ | | G_v}$ is finite if $\calc$ is.
A tree $T'$ is a \emph{collapse} of $T$ if it is obtained from $T$ by collapsing
each edge in a certain $G$-invariant
collection
to a point; conversely, we say that $T$ \emph{refines} $T'$.
In terms of graphs of groups, one passes from $\Gamma=T/G$ to $\Gamma'=T'/G$ by collapsing edges;
for each vertex $v'\in\Gamma'$,
the vertex group $G_{v'}$ is the fundamental group of the graph of groups $\Gamma_{v'}$ occurring as the preimage of $v'$ in $\Gamma$.
All maps between trees will be $G$-equivariant.
Given two trees $T$ and $T'$, we say that $T$ \emph{dominates} $T'$ if there is a
map $f:T\to T'$, or equivalently if every subgroup which is elliptic in $T$ is also elliptic in $T'$; in particular, $T$ dominates any collapse $T'$. We sometimes say that $f$ is a \emph{domination map.}
Minimality implies that it is onto.
Two trees belong to the same \emph{deformation space} if they dominate each other. In other words, a deformation space $\cald$ is the set of all trees having a given family of subgroups as their elliptic subgroups. We say that $\cald$ dominates $\cald'$ if trees in $\cald$ dominate those in $\cald'$.
\subsection{JSJ decompositions \cite{GL3a,GL3b}}\label{jsj}
Let $\calh$ be a family of subgroups of $G$.
Recall that a tree $T$ is \emph{relative} to $\calh$
if all groups of $\calh$
are elliptic in $T$.
We denote
by $\hp$ the family obtained by adding to $\calh$ all non-cyclic abelian subgroups of $G$.
The group $G$ is \emph{freely indecomposable} relative to $\calh$ if it does not split over the trivial group relative to $\calh$; equivalently,
$G$ cannot be written non-trivially as $A*B$ with every group of $\calh$ contained in a conjugate of $A$ or $B$ (if $\calh$ is trivial, we also require $G\ne{\mathbb {Z}}$, as we consider ${\mathbb {Z}}$ as freely decomposable).
Non-cyclic abelian groups being one-ended, being {freely indecomposable} relative to $\calh$
is the same as relative to $\hp$.
Let $\cala$ be another family of subgroups (in this paper, $\cala$
consists of the trivial group or is the family of all abelian subgroups).
Once $\calh $ and $\cala$ are fixed, we only consider trees relative to $\calh$, with edge stabilizers in $\cala$. We also assume that trees are minimal.
A tree $T$ (with edge stabilizers in $\cala$, relative to $\calh$) is \emph{universally elliptic} (with respect to $\calh$) if its edge stabilizers are elliptic in every tree. It is a \emph{JSJ tree} if, moreover, it dominates every universally elliptic tree.
The set of JSJ trees is called the \emph{JSJ deformation space} (over $\cala$ relative to $\calh$). All JSJ trees have the same vertex stabilizers, provided one restricts to stabilizers not in $\cala$.
When $\cala$ consists of the trivial group, the JSJ deformation space is called the \emph{Grushko deformation space} (relative to $\calh$). The group $G$ has a relative Grushko decomposition $G=G_1*\dots*G_n*F_p$, with $F_p$ free, every $H\in\calh$ contained in some $G_i$ (up to conjugacy), and $G_i$ freely indecomposable relative to $\calh_{ | G_i}$.
Vertex stabilizers of the relative Grushko deformation space $\cald$ are precisely conjugates of the $G_i$'s.
The deformation space
is trivial (it only contains the trivial tree) if and only if $G$ is freely indecomposable relative to $\calh$.
Writing $\calg=\{G_1,\dots, G_n\}$, note that $\Out(G;\calh\cup\calg)$ has finite index in $\Out(G;\calh )$, because automorphisms in $\Out(G;\calh )$ leave $\cald$ invariant and therefore permute the $G_i$'s (up to conjugacy).
Now suppose that $\cala$ consists of all abelian subgroups, and $G$ is freely indecomposable relative to a family $\calh$. Then \cite[11.1]{GL3b} the
JSJ deformation space relative to $\hp$
contains a preferred tree $T_{\mathrm{can}}$; this tree is invariant under $\widehat\Out(G;\calh)$ (the group of automorphisms preserving $\calh$).
It is obtained as a \emph{tree of cylinders}. We describe this construction in the case that will be needed here (see Proposition 6.3 of \cite{GL4} for details).
Let $T$ be any tree with non-trivial abelian edge stabilizers, relative to all non-cyclic abelian subgroups. Say that two edges $e,e'$ belong to the same cylinder if their stabilizers commute. Cylinders are subtrees intersecting in at most one point.
The tree of cylinders $T_c$ is defined as follows. It is bipartite, with vertex set $\V_0\cup \V_1$. Vertices in $\V_0$ are vertices of $T$ belonging to at least two cylinders. Vertices in $\V_1$ are cylinders of $T$. A vertex $v\in \V_0$ is joined to a vertex
$ Y\in \V_1$ if $v$ (viewed as a vertex of $T$) belongs to $Y$ (viewed as a subtree of $T$). Equivalently, one obtains $T_c$ from $T$ by replacing each cylinder $Y$ by the cone on its boundary (points of $Y$ belonging to at least one other cylinder).
The tree $T_c$ only depends on the
deformation space $\cald$ containing $T$, and it belongs to $\cald$. Like $T$, it has non-trivial abelian edge stabilizers, and is relative to all non-cyclic abelian subgroups.
It is minimal if $T$ is minimal, but vertices in $\V_1$ may be redundant vertices.
The stabilizer of a vertex $v_1\in \V_1 $ is a maximal abelian subgroup.
The stabilizer of a vertex in $\V_0$ is non-abelian and is the stabilizer of a vertex of $T$. The stabilizer of an edge $v_0v_1$ with $v_i\in \V _i$ is an infinite abelian subgroup; it is a maximal abelian subgroup of $G_{v_0}$ (but it is not always maximal abelian in $G_{v_1}$).
The $\widehat\Out(G;\calh)$-invariant tree $T_{\mathrm{can}}$ mentioned above is the tree of cylinders of JSJ trees relative to $\hp$. It is a JSJ tree, and
the tree of cylinders of $T_{\mathrm{can}}$ is $T_{\mathrm{can}}$ itself.
Let $\Gamma_{\mathrm{can}}=T_{\mathrm{can}}/G$ be the quotient graph of groups, and let $v\in \V_0/G$ be a vertex with $G_v$ non-abelian.
If $G_v$ does not split over an abelian group relative to incident edge groups and
to $\calh_{ | | G_v}$, it is universally elliptic (with respect to both $\calh$ and $\hp$); we say that $G_v$ (or $v$) is \emph{rigid}. Otherwise it is \emph{flexible}.
A key fact here is that every flexible vertex $v$ of $\Gamma_{\mathrm{can}}$ is \emph{quadratically hanging (QH)}.
The group $G_v$ is the fundamental group of a compact (possibly non-orientable) surface $\Sigma$, and incident edge groups are boundary subgroups of $\pi_1(\Sigma)$ (i.e.\ fundamental groups of boundary components of $\Sigma$); in particular, incident edge groups are cyclic. At most one incident edge group is attached to a given boundary component (groups carried by distinct incident edges are non-conjugate in $G_v$). If $H$ is conjugate to a group of $\calh$, then $H\cap G_v$ is contained in a boundary subgroup. Conversely, every boundary subgroup is an incident edge group or has a finite index subgroup which is conjugate to a group of $\calh$.
As in \cite{Szepietowski_presentation},
we denote by $\calp\calm^+(\Sigma)$ the group of isotopy classes of homeomorphisms of $\Sigma$ mapping each boundary component to itself in an orientation-preserving way. We view $\calp\calm^+(\Sigma)$ as a subgroup of $\Out(\pi_1(\Sigma))=\Out(G_v)$, indeed $\calp\calm^+(\Sigma)=\Out(G_v;\mk\mathrm{Inc}_v ,\mk\calh_{||G_v} )$.
\subsection{Automorphisms of trees}\label{automs}
There is a natural action of
$\Out(G)$ on the set of trees, given by precomposing the action on $T$ with an automorphism of $G$. We denote by $\Out(T)$ the stabilizer of a tree $T$. We write $\Out(T,\calh)$ for $\Out(T)\cap\Out(G;\calh)$, and so on.
If $T$ is a point, $\Out(T)=\Out(G)$. If $G$ is abelian and $T$ is not a point, then $T$ is a line on which $G$ acts by
integral translations, and $\Out(T)$ is the group of automorphisms of $G$ preserving the kernel of the action.
We now study $\Out(T)$ in the general case, following \cite{Lev_automorphisms}.
We always assume that edge stabilizers are abelian.
This implies that all vertex or edge stabilizers $H$ have the property that the normalizer $N(H)$ acts on $H$ by inner automorphisms: indeed, $N(H)$ is abelian if $H$ is abelian, equal to $H$ if $H$ is not abelian.
One first considers the action of $\Out(T)$ on the finite graph $\Gamma=T/G$. We always denote by $\Out^0(T)$ the finite index subgroup consisting of automorphisms acting trivially.
We study it through the natural map $$\rho=\prod_{v\in V}\rho_v:\Out^0(T)\to\prod_{v\in V}\Out(G_v)$$ recording the action of automorphisms on vertex groups (see Section 2 of \cite{Lev_automorphisms}); recall that $V$ is the vertex set of $\Gamma$.
Since $N(G_v)$ acts on $G_v$ by inner automorphisms,
$\rho_v(\Phi)$ is simply defined as the class of $\alpha_{ | G_v}$, where $\alpha\in\Aut(G)$ is any representative of $\Phi\in\Out^0(T)$ leaving $G_v$ invariant.
The image of $\rho$ is contained in $\prod_{v\in V} \Out(G_v;\mathrm{Inc}_v)$ (the family of incident edge groups at a given $v$ is preserved). It contains the subgroup $\prod_{v\in V}\Out(G_v;\mk\mathrm{Inc}_v)$
because automorphisms of $G_v$ acting trivially on incident edge groups extend ``by the identity'' to automorphisms of $G$ preserving $T$.
The kernel of $\rho$ is the \emph{group of twists} $\calt$, a finitely generated abelian group when no edge group is trivial
(bitwists as defined in \cite{Lev_automorphisms} belong to $\calt$ because the normalizer of an abelian subgroup is its centralizer). We therefore have an exact sequence
$$1\to\calt\to \Out^0(T)\xra{\ \rho\ }\prod_{v\in V} \Out(G_v;\mathrm{Inc}_v).$$
Now suppose that $T$ is relative to families $\calh$ and $\calk$ (\ie each $H_i$, $K_j$ fixes a point in $T$).
A trivial but important remark is that $\calt\subset\Out(G ;\mk \calh, \mk\calk )$. As pointed out in Lemma 2.10 of \cite{GL6}, we have
$$\prod_{v\in V}\Out(G_v;\mk\mathrm{Inc}_v ,\mk\calh_{||G_v}, \calk_{||G_v}) \subset \rho\biggl (\Out^0(T)\cap\Out(G;\mk\calh, \calk)\biggr)
\subset \prod_{v\in V} \Out(G_v;\mathrm{Inc}_v,\mk\calh_{||G_v}, \calk_{||G_v})
$$
(see Subsection \ref{tre} for the definition of $\calh_{||G_v}$; groups of $\calh_{||G_v}$ that are conjugate
in $G$ are necessarily conjugate in $G_v$).
The fact noted above that the image of $\Out^0(T)$ by $\rho$ contains $\prod_{v\in V} \Out(G_v;\mk\mathrm{Inc}_v)$ expresses that \emph{automorphisms $\Phi_v\in\Out(G_v)$ acting trivially on incident edge groups may be combined into a global $\Phi\in\Out(G)$}. In Subsection \ref{finpf} we will need a more general result, where we only assume that the $\Phi_v$'s have compatible actions on edge groups.
Given an edge $e$ of $\Gamma$,
there is a natural map $\rho_e:\Out^0(T)\to \Out(G_e)$, defined in the same way as $\rho_v$ above. If $v$ is an endpoint of $e$, the inclusion of $G_e$ into $G_v$ induces a homomorphism ${\rho_{v,e}}:\Out(G_v;\mathrm{Inc}_v)\to\Out(G_e)$ with $\rho_e={\rho_{v,e}}\circ\rho_v$
(it is well-defined because the normalizer $N_{G_v}(G_e)$ acts on $G_e$ by inner automorphisms).
\begin{lem} \label{modif}
Consider a family of automorphisms $\Phi_v\in\Out(G_v;\mathrm{Inc}_v)$ such that, if $e=vw$ is any edge of $\Gamma$, then $\rho_{v,e}(\Phi_v)=\rho_{w,e}(\Phi_w)$. There exists $\Phi\in\Out^0(T)$ such that $\rho_v(\Phi)=\Phi_v$ for every $v$.
\end{lem}
We leave the proof to the reader.
The lemma applies
to any
graph of groups such that, for every vertex or edge group $H$, the normalizer $N(H)$ acts on $H$ by inner automorphisms.
$\Phi$ is not unique; it may be composed with any element of $\calt$.
In Subsection \ref{finpf} we will have a family of automorphisms $\Phi_e\in\Out(G_e)$, and we will want $\Phi\in\Out^0(T)$ such that $\rho_e(\Phi)=\Phi_e$ for every $e$. By the lemma, it suffices to find automorphisms $\Phi_v\in\Out(G_v;\mathrm{Inc}_v)$ inducing the $\Phi_e$'s.
\subsection{Rigid vertices}
We now specialize to the case when
$T=T_{\mathrm{can}}$ is the canonical JSJ decomposition relative to $\hp$ discussed in
Subsection \ref{jsj}.
If $v$ is a QH vertex, the image of $\Out^0(T)\cap\Out(G;\mk\calh)$ in $\Out(G_v)$ contains
$\calp\calm^+(\Sigma)=\Out(G_v;\mk\mathrm{Inc}_v ,\mk\calh_{||G_v} )$ with finite index (see \cite{GL6}, Proposition 4.7).
If $v$ is a rigid vertex, then $G_v$ does not split over an abelian group relative to $\mathrm{Inc}_v\cup\calh_{||G_v}$.
By the Bestvina-Paulin method and Rips theory, one deduces that the image of $\Out^0(T)\cap\Out(G;\mk\calh)$ in $\Out(G_v)$ is finite if $\calh$ is a finite family of finitely generated subgroups (see \cite{GL6}, Theorem 3.9 and Proposition 4.7).
\begin{lem}
\label{lem_fini}
Let $\calh,\calk$ be finite families of
finitely generated subgroups, with each group in $\calk$ abelian.
Assume that $G$ is one-ended relative to $\calh\cup\calk$,
and let $T_{\mathrm{can}}$ be the canonical JSJ tree relative to $(\calh\cup\calk)^{+ab}$.
The image of $$\Out^0(T)\cap\Out(G;\mk\calh,\calk)$$ by $\rho_v:\Out^0(T)\to\Out(G_v)$ is finite if $v$ is a rigid vertex of $T_{\mathrm{can}}$. Its image
by $\rho_e:\Out^0(T)\to\Out(G_e)$ is finite if $e$ is any edge.
\end{lem}
\begin{proof}
Define $\calk_{\mathbb {Z}}$ by removing all non-cyclic groups from $\calk$.
Being freely indecomposable relative to $\calh\cup\calk$ is the same as being freely indecomposable relative to $\calh\cup\calk_{\mathbb {Z}}$, and a tree is relative to $ (\calh\cup\calk)^{+ab}$ if and only if it is relative to $(\calh\cup\calk_{\mathbb {Z}})^{+ab}$. We may therefore view $T_{\mathrm{can}}$ as the canonical JSJ tree relative to $(\calh \cup\calk_{\mathbb {Z}})^{+ab}$.
Let $v$ be a rigid vertex.
The group $\Out(G;\mk\calh,\calk)$ is contained in $\Out(G;\mk\calh,\calk_{\mathbb {Z}})$, which contains
$\Out(G;\mk\calh,\mk\calk_{\mathbb {Z}})$
with finite index.
As explained above,
the image of $\Out^0(T)\cap\Out(G;\mk\calh,\mk\calk_{\mathbb {Z}})$ in $\Out(G_v)$ is finite (\cite{GL6}, Prop.\ 4.7).
The first assertion of the lemma follows.
Since $T_{\mathrm{can}}$ is bipartite, every edge $e$ is incident to a vertex $v$ which is QH or rigid.
In the first case $G_e$ is cyclic, so there is nothing to prove. In the second case the map $\rho_e:\Out^0(T)\to\Out(G_e)$ factors through $\Out(G_v)$, and the second assertion follows from the first.
\end{proof}
\section{Finite classifying space}\label{clas}
In this section, we prove that
McCool groups of a toral relatively hyperbolic group have type VF (Theorem \ref{mc}),
and that so does the stabilizer of a splitting (Theorem \ref{mct}).
In the course of the proof, we will
describe the automorphisms of a given maximal abelian subgroup which are restrictions of an automorphism of $G$ belonging to a given McCool group (Proposition \ref{image}).
We start by recalling some standard facts
about groups of type VF.
A group has type F if it has a finite classifying space, type VF if some finite index subgroup is of type F. A key tool for proving that groups have type F is the following statement:
\begin{thm}[see for instance {\cite[Th.\ 7.3.4]{Geoghegan_topological}}] \label{extF}
Suppose that $G$ acts simplicially and cocompactly on a contractible simplicial complex $X$. If all point stabilizers have type F, so does $G$.
In particular, being of type F is stable under extensions. \qed
\end{thm}
If $G$ has a finite index subgroup acting as in the theorem, then $G$ has type VF. In particular:
\begin{cor}\label{cor_extensionVF}
Given an exact sequence $1\ra N\ra G\ra Q \ra 1$, suppose that $Q$ has type VF, and $G$ has a finite index subgroup $G_0<G$
such that $G_0\cap N$ has type F. Then $G$ has type VF. \qed
\end{cor}
\begin{rem} \label{subt}
Suppose that $G$ acts on $X$ as in Theorem \ref {extF}. If point stabilizers are only of type VF,
one cannot claim that $G$ has type VF,
even if $G$ is torsion-free. This subtlety was overlooked in Theorem 5.2 of \cite{GL1} (we will give a corrected statement in Corollary \ref{corr}), and it introduces technical complications
(which would not occur if we only wanted to prove that the groups under consideration have
type $F_\infty$).
In particular, to study the stabilizer of a tree with non-cyclic edge stabilizers in Subsection \ref{sec_edge}, we have to prove more precise versions of certain results (such as the ``moreover'' in Theorem \ref{mcga}).
\end{rem}
\subsection{McCool groups are VF}
In this subsection we prove the following strengthening of Theorem \ref{mc}.
\begin{thm} \label{mcga}
Let $G$ be a toral relatively hyperbolic group.
Let $\calh$ and $\calk$ be two finite families of finitely generated subgroups,
with each group in $\calk$ abelian.
Then $\Out(G;\mk\calh,\calk) $ is of type VF.
Moreover, if groups in $\calh$ are also abelian, there exists a finite index subgroup \break $\Out^1(G;\calh ,\calk) \subset\Out(G;\calh ,\calk) $ such that $\Out^1(G;\calh ,\calk) \cap \Out(G;\mk\calh,\calk) $ is of type $F$.
\end{thm}
Recall (Definition \ref{pres}) that $\Out(G;\mk\calh ,\calk) $ consists of classes of automorphisms acting trivially on each group $H_i\in \calh$ (i.e.\ as conjugation by some $g_i\in G$),
and leaving each $K_j\in\calk$ invariant up to conjugacy.
It will follow from
Corollary \ref{genmc}
that the main assertion of Theorem \ref{mcga} holds
if $\calh$ is an arbitrary family of subgroups
(see Corollary \ref{thm_general}), but finiteness is needed at this point in order to apply Lemma \ref{lem_fini}.
\begin{con} \label{1} In this section, a superscript $-^1$, as in $\Out^1(G;\calh ,\calk)$, always indicates a subgroup of finite index. The superscript $-^0$ refers to a trivial action on a quotient graph of groups (see Section \ref{automs}).
\end{con}
\subsubsection{The abelian case}
The following lemma deals with the case when $G={\mathbb {Z}}^n$.
\begin{lem} \label{arithm}
Let $\calh$ and $\calk$ be finite families of subgroups of ${\mathbb {Z}}^n$. Let $A=\Out(\bbZ^n;\mk\calh ,\calk) $ be the subgroup of $GL(n,{\mathbb {Z}})$ consisting of matrices acting as the identity on groups $H_i\in\calh$ and leaving each $K_j\in\calk$ invariant. Then $A$ is of type VF. More precisely, every torsion-free subgroup of finite index $A'\subset A$ is of type F.
\end{lem}
Recall that $GL(n,{\mathbb {Z}})$ is virtually torsion-free, so groups such as $A'$ exist.
\begin{proof}
The set of endomorphisms of ${\mathbb {Z}}^n$ acting as the identity on $H_i$ and preserving $K_j$ is a linear subspace defined by linear equations with rational coefficients.
It follows that the groups $A$ and $A'$ are arithmetic: they are commensurable with a subgroup of $GL(n,\bbZ)$
defined by $\Q$-linear equations.
By Borel-Serre \cite{BorelSerre_corners}, every torsion-free arithmetic subgroup of $GL(n,\Q)$ is of type F.
\end{proof}
To deduce Theorem \ref{mcga} when $G$ is abelian, we simply define $\Out^1(G;\calh ,\calk)$ as any torsion-free finite index subgroup of $\Out(G;\calh ,\calk) $.
If $G$ is not abelian, we shall distinguish two cases.
\subsubsection{The one-ended case} \label{oe}
We
first assume that $G$ is freely indecomposable relative to $\calh\cup\calk$:
one cannot write $G=A*B$ with each group of $\calh\cup\calk$ contained in a conjugate of $A$ or $B$.
We then consider the canonical tree $T_{\mathrm{can}}$ as in Subsection \ref{jsj} (it is a JSJ tree relative to $\calh$, $\calk$, and to non-cyclic abelian subgroups).
It is invariant under
$\Out(G;\calh,\calk)$, so $\Out(G; \calh,\calk)\subset\Out(T_{\mathrm{can}})$.
We write $\Out^0(T_{\mathrm{can}})$ for the finite index subgroup consisting of automorphisms acting trivially on the finite graph $\Gamma_{\mathrm{can}}=T_{\mathrm{can}}/G$,
and $$\Out^0(G; \calh,\calk)=\Out (G; \calh,\calk)\cap\Out^0(T_{\mathrm{can}});$$ it has finite index in $\Out (G; \calh,\calk)$.
Recall that non-abelian vertex stabilizers $G_v$ of $T_{\mathrm{can}}$ (or vertex groups of $\Gamma_{\mathrm{can}}$) are rigid or QH. Also recall from Subsection \ref{automs} that, for each vertex $v$, there is a map $\rho_v:\Out^0(T_{\mathrm{can}})\to\Out(G_v; \mathrm{Inc}_v)$, with
$\mathrm{Inc}_v$ the family of incident edge groups (see Subsection \ref{tre}).
We define a subgroup $\Out^r(G;\calh ,\calk)\subset\Out (G;\calh ,\calk)$
by restricting to automorphisms $\Phi\in\Out^0(G;\calh,\calk)$,
and imposing conditions on the image of $\Phi$ by the maps $\rho_v$.
$\bullet$ If $G_v$ is rigid, we ask that $\rho_v(\Phi)$ be trivial.
$\bullet$ If $G_v$ is abelian, we fix a torsion-free subgroup of finite index $\Out^1(G_v)\subset \Out(G_v)$, and
we ask that $\rho_v(\Phi)$ belong to $\Out^1(G_v)$.
$\bullet$ If $G_v$ is QH, it is the fundamental group of a compact surface $\Sigma$.
Each boundary component is associated to an incident edge or a group in $\calh\cup\calk$ (see Subsection \ref{jsj}), so $\rho_v(\Phi)$ preserves the peripheral structure of $\pi_1(\Sigma)$ and may therefore be represented by a homeomorphism of $\Sigma$. Since groups in $\calh\cup\calk$, and their conjugates, only intersect $G_v$ along boundary subgroups, the image of $\Out^0(G;\calh,\calk)$ by $\rho_v$ contains the mapping class group $\calp\calm^+(\Sigma)= \Out(G_v;\mk\mathrm{Inc}_v ,\mk\calh_{||G_v}, \mk \calk_{||G_v})$ (see Subsection \ref{jsj}); the index is finite. We fix a finite index subgroup $\calp\calm^{+,1}(\Sigma)$ of type F, and we require $\rho_v(\Phi)\in \calp\calm^{+,1}(\Sigma)$. In particular, $\Phi$ acts trivially on all boundary subgroups of $\Sigma$.
Let $\Out^r(G;\calh ,\calk)$ consist of automorphisms $\Phi\in\Out^0(G;\calh,\calk)$ whose images $\rho_v(\Phi)$ satisfy the conditions stated above. These automorphisms act trivially on edge stabilizers.
It follows from Lemma \ref{lem_fini} that
$\Out^r(G;\calh ,\calk) \cap \Out(G;\mk\calh,\calk) $ always has finite index in $\Out(G;\mk\calh,\calk) $. If groups in $\calh$ are abelian, then $\Out^r(G;\calh ,\calk)$ has finite index in $\Out (G;\calh ,\calk)$.
It therefore suffices to prove that $$O:=\Out^r(G;\calh ,\calk) \cap \Out(G;\mk\calh,\calk) $$ is of type $F$ (this argument, based on Lemma \ref{lem_fini}, is the only place where we use the assumptions on $\calh$ and $\calk$).
Every edge of $T_{\mathrm{can}}$ has an endpoint $v$ with $G_v $ rigid or QH, so elements of $O$ act trivially on edge stabilizers of $T_{\mathrm{can}}$. Consider an abelian vertex stabilizer $G_v$. Elements in $\rho_v(O)$ are the identity on incident edge groups and groups in $\calh_{||G_v}$, and leave groups in $\calk_{||G_v}$ invariant. By Lemma \ref{arithm} these conditions define a group $B_v\subset\Out(G_v)$ which is of type VF, and $C_v:=B_v\cap\Out^1(G_v)$ is a group of type F containing $\rho_v(O)$.
Recall from Subsection \ref{automs} the exact sequence
$$1\ra \calt \ra \Out^0(T_{\mathrm{can}})
\xra{\ \rho\ }\prod_{v\in V} \Out(G_v;\mathrm{Inc} _v).
$$
We claim that the image of $O$ by $\rho$ is a direct product $\prod_{v\in V} C_v$, with
$C_v$ as above
if $G_v$ is abelian, $C_v=\calp\calm^{+,1}(\Sigma)$ if $v$ is QH, and $C_v$ trivial if $ v$ is rigid. The image is contained in the product. Conversely, given a family $(\Phi_v)_{v\in V}$, with $\Phi_v\in C_v$, the automorphisms $\Phi_v$ act trivially on incident edge groups, so there is $\Phi\in\Out^0(T_{\mathrm{can}})$ with $\rho_v(\Phi)=\Phi_v$.
Since $\Phi_v$ acts trivially on $\mathrm{Inc}_v\cup\calh_{||G_v}$ and preserves $\calk_{||G_v}$, this automorphism is in $O$. This proves the claim.
It follows that $\rho(O)$ is of type F.
The group of twists $\calt$ is contained in $O$, because twists act trivially on vertex groups and $T$ is relative to $\calh\cup\calk$, so we can conclude that $O$ is of type F by Theorem \ref {extF} if we know that $\calt$ is of type F.
The group $\calt$ is a finitely generated abelian group.
It is torsion-free, hence of type F, as shown in Section 4 of \cite{GL6} (alternatively, one can replace $\Out^r(G;\calh ,\calk)$ by its intersection with a torsion-free finite index subgroup of $\Out(G)$, which exists by Corollary 4.4 of \cite{GL6}).
This proves Theorem \ref{mcga} in the freely indecomposable case. To prove it in general,
we need to study automorphisms of free products.
\subsubsection{Automorphisms of free products}
In this subsection, $G$ does not have to be relatively hyperbolic.
Let $\calg=\{G_i\}$ be a family of subgroups of $G$. We have defined $\Out(G;\calg )$ as automorphisms leaving the conjugacy class of each $G_i$ invariant, and $\Out(G;\mk\calg)$ as automorphisms acting trivially on each $G_i$.
More generally, consider a group of automorphisms $\bQ_i\subset \Out(G_i)$
and $\bQ=\{\bQ_i\}$. We
would like to define $\Out(G;\mk[\bQ]\calg )\subset\Out (G;\calg )$ as the automorphisms $\Phi$ acting on each $G_i$ as an element of $\bQ_i$. To be precise, given $\Phi\in\Out (G;\calg )$, choose representatives $\varphi_i$ of $\Phi$ in $\Aut(G)$ with $\varphi_i(G_i)=G_i$. We say that $\Phi$ belongs to $\Out(G;\mk[\bQ]\calg)$ if every $\varphi_i$ represents an element of $\bQ_i$. This is well-defined (independent of the chosen $\varphi_i$'s) if each $G_i$ is a free factor (more generally, if the normalizer of $G_i$ acts on $G_i $ by inner automorphisms).
The goal of this subsection is to show:
\begin{prop} \label{fp}
Let $G=G_1*\dots*G_n*F_p$, with $F_p $ free of rank $p$, and let $\calg=\{G_i\}$. Assume that all groups $G_i$ and $G_i/Z(G_i)$ have type F.
Let $\bQ=\{\bQ_i\}$ be a family of subgroups $\bQ_i\subset\Out(G_i)$. If every $\bQ_i$ is of type VF,
then $\Out(G;\mk[\bQ]\calg )$ has type VF.
More precisely, there exists a finite index subgroup $\Out^1(G;\calg)\subset\Out(G;\calg)$, independent of $\bQ$,
such that, if every $\bQ_i$ is of type F,
then $\Out^1(G;\calg)\cap\Out(G;\mk[\bQ]\calg)$ has type F.
\end{prop}
The ``more precise'' assertion implies the first one, since $\Out(G;\mk[\bQ']\calg)$ has finite index in $\Out(G;\mk[\bQ]\calg)$ if every $\bQ'_i $ is a finite index subgroup of $ \bQ_i$.
Assume that $G_i$ and $G_i/Z(G_i)$ have type F. The proposition says in particular that the Fouxe-Rabinovitch group $\Out (G;\mk\calg)$ is of type VF, and that $\Out (G;\calg ) $ is of type VF if every $ \Out(G_i)$ is.
If we consider the Grushko decomposition of $G$, then $\Out (G;\calg )$ has finite index in $\Out(G)$ and we get:
\begin{cor} [Correcting \cite{GL1}, Theorem 5.2] \label{corr}
Let $G=G_1*\dots*G_n*F_p$, with $F_p $ free, $G_i$ non-trivial, not isomorphic to ${\mathbb {Z}}$, and not a free product. If every $G_i$ and $G_i/Z(G_i)$ has type F, and every $\Out(G_i)$ has type VF, then $\Out(G)$ has type VF. \qed
\end{cor}
\begin{proof}[Proof of Proposition \ref{fp}]
We prove the ``more precise'' assertion, so we assume that $\bQ_i\subset \Out(G_i)$ has type F.
We shall apply Theorem \ref{extF} to the action of $\Out(G;\mk[\bQ]\calg )$ on
the outer space defined in \cite{GL1}. We let $\cald$ be
the Grushko deformation space
relative to $\calg$, \ie the JSJ deformation space of $G$ over the trivial group relative to $\calg$ (see Subsection \ref{jsj}). Trees in $\cald$ have trivial edge stabilizers, and non-trivial vertex stabilizers are conjugates of the $G_i$'s.
Like ordinary outer space \cite{CuVo_moduli}, the projectivization $\hat \cald$ of $\cald$ is a complex consisting of simplices with missing faces, and the spine of $\hat \cald$ is a simplicial complex. It is
contractible for the weak topology \cite{GL2}.
The group $\Out (G;\calg )$ acts on $\cald$, hence on the spine,
and the action of the Fouxe-Rabinovitch group $\Out (G;\mk\calg)\subset \Out(G;\mk[\bQ]\calg )$ is cocompact because there are finitely many possibilities for the quotient graph $T/G$, for $T\in\cald$. In order to apply Theorem \ref{extF}, we just need to show that stabilizers are of type F.
$\Out (G;\calg )$ also acts on the free group (isomorphic to $F_p$) obtained from $G$ by killing all the $G_i$'s (it may be viewed as the topological fundamental group of $\Gamma=T/G$, for any $T\in\cald$).
In other words, there is a natural map $\Out (G;\calg )\to \Out(F_p)$. We fix a torsion-free finite index subgroup $\Out^1(F_p)\subset\Out(F_p)$, and we define $\Out ^1 (G;\calg )\subset\Out (G;\calg )$ as the pullback of $\Out^1(F_p)$.
Given $T\in\cald$, we let $S$ be its stabilizer for the action of $\Out ^1 (G;\calg )$, and $S_\bQ$ its stabilizer for the action of $ \Out^1(G;\calg)\cap\Out(G;\mk[\bQ]\calg )$. We complete the proof by showing that $S_\bQ$ has type F.
We first claim that $S$ equals $\Out^0(T)$, the group of automorphisms of $G$ leaving $T$ invariant and acting trivially on $\Gamma=T/G$. Clearly $\Out^0(T)\subset S$. Conversely, we have to show that any $\Phi\in S$ acts as the identity on $\Gamma$. First, $\Phi$ fixes all vertices of $\Gamma$ carrying a non-trivial group $G_v$, because $G_v$ is a $G_i$ (up to conjugacy) and the $G_i$'s are not permuted. In particular,
by minimality of $T$, all terminal vertices of $\Gamma$ are fixed. Also, by our definition of $\Out ^1 (G;\calg )$, the image of $\Phi$ in $\Out(\pi_1(\Gamma))$ is trivial or has infinite order. The claim follows because any non-trivial symmetry of $\Gamma$ fixing all terminal vertices maps to a non-trivial element of finite order in $\Out(\pi_1(\Gamma))$ if $\Gamma$ is not a circle.
The map $\rho$ (see Subsection \ref{automs}) maps $S$ onto $\prod_i\Out(G_i)$, and the image of $S_\bQ$ is $\prod_i\bQ_i$, a group of type F. The kernel is the group of twists $\calt$, which is contained in $S_\bQ$, so it suffices to check that $\calt$ has type F. Since edge stabilizers are trivial, $\calt$ is a direct product $\prod_iK_i$, with $K_i=G_i^{n_i}/Z(G_i)$; here $n_i$ is the valence of the vertex carrying $G_i$ in $\Gamma$, and the center $Z(G_i)$ is embedded diagonally (see \cite{Lev_automorphisms}). There are exact sequences $$1\to G_i^{n_i-1}\to G_i^{n_i}/Z(G_i)\to G_i /Z(G_i)\to1,$$ so the assumptions of the proposition ensure that $\calt$ is of type F.
\end{proof}
\subsubsection{The infinitely-ended case}
We can now prove Theorem \ref{mcga} in full generality. We let $G=G_1*\dots*G_n*F_p$ be the Grushko decomposition of $G$ relative to $\calh\cup\calk$ (see Subsection \ref{jsj}), and $\calg=\{G_i\}$.
Each $G_i$ is toral relatively hyperbolic, so has type F by \cite{Dah_classifying}.
Its center is trivial if $G_i$ is nonabelian, so $G_i/Z(G_i)$ also has type F. This will allow us to use Proposition \ref{fp}.
\begin{lem} \label{gloloc}
Let $\bQ=\{\bQ_i\}$, with $\bQ_i=\Out(G_i;\mk\calh_{|G_i},\calk_{|G_i})$, and let $\bG=\{\bG_i\}$ with $\bG_i=\Out(G_i;\calh_{|G_i},\calk_{|G_i})$.
Then $$\Out(G;\mk[\bQ]\calg)=\Out(G;\mk \calh,\calk)\cap\Out(G;\calg),$$ and $$\Out(G;\mk[\bG]\calg)=\Out(G;\calh,\calk)\cap\Out(G;\calg).$$
Moreover, $\Out(G;\mk[\bQ]\calg)$ has finite index in $\Out(G;\mk \calh,\calk)$, and $\Out(G;\mk[\bG]\calg)$ has finite index in $\Out(G; \calh,\calk)$.
\end{lem}
\begin{proof} If $\Phi$ belongs to $\Out(G;\mk[\bQ]\calg)$, it belongs to $\Out(G;\mk \calh,\calk)$ because every group in $\calh\cup\calk$ has a conjugate contained in some $G_i$. Conversely, automorphisms in $\Out(G;\mk \calh,\calk)$ preserve the Grushko deformation space relative to $\calh\cup\calk$ and therefore permute the $G_i$'s, so $\Out(G;\calg)\cap\Out(G;\mk \calh,\calk)$ has finite index in $\Out(G;\mk \calh,\calk)$. If $\varphi\in\Aut(G)$ leaves $G_i$ invariant and maps a non-trivial $H\subset G_i$ to a conjugate $gHg\m$, then $g\in G_i$ because $G_i$ is a free factor. This shows $$\Out(G;\mk \calh,\calk)\cap\Out(G;\calg)\subset\Out(G;\mk[\bQ]\calg),$$ completing the proof for $\Out(G;\mk[\bQ]\calg)$. The proof for $\Out(G;\mk[\bG]\calg)$ is similar.
\end{proof}
The first assertion of Theorem \ref{mcga} now follows immediately from the one-ended case together with Proposition \ref{fp}, since $\Out(G;\mk \calh,\calk)$ contains $\Out(G;\mk[\bQ]\calg)$ with finite index.
There remains to prove the ``moreover''.
Each $G_i$ is freely indecomposable relative to $\calh_{|G_i}\cup\calk_{|G_i}$, so we may apply the ``moreover'' of Theorem \ref{mcga} to $G_i$.
We get a finite index subgroup $\bG_i^1\subset \bG_i$
such that $\bQ_i^1:=\bG_i^1\cap \bQ_i$
has type F. Let $\bG^1=\{\bG_i^1\}$ and $\bQ^1=\{\bQ_i^1\}$.
By
Proposition \ref{fp}, there is a finite index subgroup
$\Out^1(G;\calg)\subset\Out(G;\calg)$ such that
$\Out^1(G;\calg)\cap\Out (G;\mk[\bQ^1]\calg)$ has type $F$.
Now write
$$\Out^1(G;\calg)\cap\Out (G;\mk[\bQ^1]\calg)=\Out^1(G;\calg)\cap\Out (G;\mk[\bG^1]\calg)\cap\Out (G;\mk[\bQ ]\calg).$$
By Lemma \ref{gloloc}, we may replace the last term $\Out (G;\mk[\bQ ]\calg)$ by $\Out(G;\mk \calh,\calk)$.
Defining $$\Out^1(G;\calh,\calk):=\Out^1(G;\calg)\cap\Out (G;\mk[\bG^1]\calg),$$ we have shown that $\Out^1(G;\calh,\calk)\cap\Out(G;\mk\calh,\calk )$ has type F. There remains to check that $\Out^1(G;\calh,\calk)$ is a finite index subgroup of $\Out(G;\calh,\calk)$.
Since $\Out^1(G;\calg)$ has finite index in $\Out (G;\calg)$, and $\bG_i^1$ is a finite index subgroup of $\bG_i$,
the group $\Out^1(G;\calh,\calk)$ has finite index in $\Out (G;\calg)\cap\Out (G;\mk[\bG]\calg)$, which equals $\Out (G;\mk[\bG]\calg)$ and has finite index in $\Out(G;\calh,\calk)$ by Lemma \ref{gloloc}.
This completes the proof of Theorem \ref{mcga}.
\subsubsection{The action on abelian groups} \label{aag}
We study the action of $\Out(G)$ on abelian subgroups.
The result of this subsection (Proposition \ref{image}) will be needed in Subsection \ref{finpf}.
A toral relatively hyperbolic group has finitely many conjugacy classes of non-cyclic maximal abelian subgroups. Fix a representative $A_j$ in each class. Automorphisms of $G$ preserve the set of $A_j$'s (up to conjugacy), so some finite index subgroup of $\Out(G)$ maps to $\prod_j\Out(A_j)$.
We shall show in particular that \emph{the image of a suitable finite index subgroup $\Out'(G)\subset\Out(G)$ is
a product of McCool groups $ \prod_j\Out(A_j; \mk{\{F_j\}} )\subset\prod_j\Out(A_j)$}.
This product structure expresses the fact that automorphisms of non-conjugate maximal non-cyclic abelian subgroups do not interact. Indeed, consider a family of elements $\Phi_j\in\Out(A_j)$, and suppose that each $\Phi_j$, taken individually, extends to an automorphism $\widehat\Phi_j\in\Out'(G)$; then there is $\Phi\in\Out'(G)$ inducing all $\Phi_j$'s simultaneously.
In fact, we will work with two (possibly empty) finite families $\calh$, $\calk$ of abelian
subgroups, and we will restrict to $\Out(G;\mk\calh,\calk)$.
We shall therefore define a finite index subgroup $\Out'(G;\mk\calh,\calk)\subset\Out(G;\mk\calh,\calk)$.
First assume that $G$ is freely indecomposable relative to $\calh\cup\calk$. As in Subsection \ref{oe}, we consider the canonical JSJ tree $T_{\mathrm{can}}$, we restrict to automorphisms $\Phi\in\Out(G;\mk\calh,\calk)$ acting trivially on $\Gamma_{\mathrm{can}}=T_{\mathrm{can}}/G$, and we define $\Out'(G;\mk\calh,\calk)$ by imposing conditions on the action on non-abelian vertex groups $G_v$: if $G_v$ is QH, the action should be trivial on all boundary subgroups of $\Sigma$ (i.e.\ $\rho_v(\Phi)\in\calp\calm^+(\Sigma)$); if $G_v$ is
rigid, then $\rho_v(\Phi)$ should be trivial. We have explained in Subsection \ref{oe} why this defines a subgroup of finite index $\Out'(G;\mk\calh,\calk)$ in $\Out(G;\mk\calh,\calk)$. Note that
$\Out'(G;\mk\calh,\calk)$ acts trivially on edge groups of $T_{\mathrm{can}}$.
If $G$ is not freely indecomposable relative to $\calh\cup\calk$, let $G=G_1*\dots*G_n*F_p$ be the relative Grushko decomposition. To define $\Out'(G;\mk\calh,\calk)$, we require that $\Phi$ maps $G_i$ to $G_i$ (up to conjugacy), and the induced automorphism belongs to $\Out'(G_i;\mk\calh_{ | G_i},\calk_{ | G_i})$ as defined above.
Elements of $\Out'(G;\mk\calh,\calk)$ leave every $A_j$ invariant (up to conjugacy), and we denote by
$$\theta:\Out'(G;\mk\calh,\calk)\to\prod_j\Out(A_j)$$ the natural map.
We can now state:
\begin{prop} \label{image}
Let $\calh$, $\calk$ be two finite families of abelian
subgroups, and let
$\Out'(G;\mk\calh,\calk)$ be the finite index subgroup of $\Out(G;\mk\calh,\calk)$ defined above.
There exist subgroups $F_j\subset A_j$ such that the image of $\theta:\Out'(G;\mk\calh,\calk)\to\prod_j\Out(A_j)$ equals $\prod_j\Out(A_j; \mk{\{F_j\}},\calk_{ | A_j})$.
\end{prop}
Recall that the $A_j$'s are representatives of conjugacy classes of non-cyclic maximal abelian subgroups.
\begin{proof}
The $A_j$'s are contained (up to conjugacy) in factors $G_i$ of the Grushko decomposition relative to $\calh\cup\calk$, and the $G_i$'s are invariant under $\Out'(G;\mk\calh,\calk)$.
Since any family
of automorphisms $\Phi_i\in\Out'(G_i;\mk\calh_{|G_i},\calk_{|G_i})$
extends to an automorphism
$\Phi\in\Out'(G;\mk\calh,\calk)$,
we may assume that $G$ is freely indecomposable relative to $\calh\cup\calk$.
Consider $T_{\mathrm{can}}$ as above. If $A_j$ is contained in a rigid vertex stabilizer, then $\Out'(G;\mk\calh,\calk)$ acts trivially on $A_j$ and we define $F_j=A_j$. If not, $A_j$ is a vertex stabilizer $G_v$. The stabilizers of vertices adjacent to $v$ are rigid or QH, and because of the way we defined $\Out'(G;\mk\calh,\calk)$, it leaves $A_j$ invariant and acts trivially on incident edge groups. It also acts trivially on the groups belonging to $\calh _{ | A_j}$.
Defining $F_j$ as the subgroup of $A_j$ generated by incident edge groups and groups in $\calh _{ | A_j}$, we have proved that the image of $\theta$ is contained in $\prod_j\Out(A_j; \mk{\{F_j\}},\calk_{ | A_j})$.
Conversely, choose a family $\Phi_j\in\Out(A_j; \mk{\{F_j\}},\calk_{ | A_j})$. As explained in Subsection \ref{automs}, there exists $\Phi\in\Out^0(T_{\mathrm{can}})$ acting trivially on cyclic, rigid, and QH vertex stabilizers, and inducing $\Phi_j $ on $A_j$.
We check that $\Phi$ acts trivially on any $H\in\calh$.
Such a group $H$ fixes a vertex $v\in T_{\mathrm{can}}$. If $G_v$ is cyclic, rigid, or QH, the action of $\Phi$ on $H$ is trivial. If not, $G_v$ is (conjugate to) an $A_j$ and the action is trivial because $H\subset F_j$.
A similar argument shows that $\Phi$ preserves $\calk$ up to conjugacy, so $\Phi\in \Out(G;\mk\calh,\calk)$.
Since $\Phi$ acts trivially on rigid
and QH vertex stabilizers, $\Phi\in \Out'(G;\mk\calh,\calk)$.
\end{proof}
\subsection{Automorphisms preserving a tree}
We now study the stabilizer of a tree. The following theorem clearly implies Theorem \ref{mct}.
\begin{thm} \label{mcgg}
Let $G$ be a toral relatively hyperbolic group. Let $T$ be a simplicial tree on which $G$ acts with abelian edge stabilizers. Let
$\calk$ be a finite family of abelian subgroups of $G$, each of which fixes a point in $T$.
Then $\Out(T,\calk)=\Out(T)\cap\Out(G;\calk)$
is of type VF.
\end{thm}
The group $\Out(T,\calk)$ is the subgroup of $\Out(G)$ consisting of automorphisms leaving $T$ invariant and mapping each group of $\calk$ to a conjugate (in an arbitrary way).
The tree $T$ is assumed to be minimal, but it may be a point, it may have trivial edge stabilizers, and non-cyclic abelian subgroups need not be elliptic.
Theorem \ref{mcga} proves Theorem \ref{mcgg} when $T$ is a point.
Also note that, if $G$ is abelian and $T$ is not a point, then $T$ is a line on which
$G$ acts by integral translations,
and $\Out(T,\calk)$ is of type VF because it equals $\Out(G;\calk\cup \{N\})$, with
$N$ the kernel of the action of $G$ on $T$.
Thus, we assume from now on that $G$ is not abelian.
We will prove Theorem \ref{mcgg} when $T$ has cyclic edge stabilizers before treating the general case. This special case is much easier because $\Out(G_e)$ is finite for every edge stabilizer $G_e$, and we may apply Proposition 2.3 of \cite{Lev_automorphisms}.
\subsubsection{Cyclic edge stabilizers} \label{cyc}
In this subsection we prove Theorem \ref{mcgg} when all edge stabilizers $G_e$ of $T$ are cyclic (possibly trivial); this happens in particular if $G$ is hyperbolic.
As in Subsection \ref{automs}, we consider the exact sequence
$$1\to\calt\to\Out^0(T)\xra{\ \rho\ }\prod_{v\in V}\Out(G_v;\mathrm{Inc}_v) .$$
The image of $\rho$ contains $\prod_{v\in V}\Out(G_v;\mk\mathrm{Inc}_v)$, and the index is finite because
all groups $\Out(G_e)$ are finite (see \cite{Lev_automorphisms}, where $\Out(G_v;\mk\mathrm{Inc}_v)$ is denoted $PMCG(G_v)$).
The preimage of $\prod_{v\in V}\Out(G_v;\mk\mathrm{Inc}_v)$ is thus a finite index subgroup $\Out^1(T)\subset\Out(T)$.
We want to prove that
$\Out (T,\calk)$ is of type VF, so we restrict the preceding discussion to $\Out (T,\calk)$.
Let
$$\Out^1(T,\calk)=\Out^1(T)\cap\Out(G;\calk),$$ a finite index subgroup of $\Out(T,\calk)$. We show that $\Out^1(T,\calk)$ is of type VF (this will not use the assumption that edge stabilizers are cyclic).
The image of $\Out^1(T,\calk) $ by $\rho$ is contained in $\prod_{v\in V}\Out(G_v; \mk\mathrm{Inc}_v, \calk_{||G_v})$, with $\calk_{||G_v}$ as in Subsection \ref{tre}, and arguing as in Subsection \ref{automs} one sees that equality holds.
On the other hand, $\Out^1(T,\calk)$
contains $\calt$ because twists act trivially on vertex stabilizers, hence on $\calk$ since groups of $\calk$ are elliptic in $T$.
We therefore have an exact sequence
$$1\to\calt\to\Out^1(T,\calk)\to\prod_{v\in V}\Out(G_v; \mk\mathrm{Inc}_v, \calk_{||G_v})\to1 .$$
Vertex stabilizers are toral relatively hyperbolic, so the product is of type VF by Theorem \ref{mcga} applied to the $G_v$'s.
We conclude the proof by showing that $\calt$ is of type F. This will imply that $\Out^1 (T,\calk)$, hence $\Out (T,\calk)$, is VF.
We claim that $\calt$ is isomorphic to the direct product of a finitely generated abelian group and a finite number of copies of non-abelian
vertex groups $G_v$.
We use the presentation of $\calt$ given in Proposition 3.1 of \cite{Lev_automorphisms}.
It says that
$\calt$ can be written as a quotient $$\calt=\prod_{e,v} Z_{G_v}(G_e)/\grp{\calr_V,\calr_E},$$
the product being taken over all pairs $(e,v)$ where $e$ is an edge incident to $v$;
here $\calr_E=\prod_e Z(G_e)$ is the group of edge relations,
and $\calr_V=\prod_{v} Z(G_v)$ is the group of vertex relations, both embedded naturally in $\prod_{e,v} Z_{G_v}(G_e)$.
Every group $Z_{G_v}(G_e)$ is abelian, unless $G_e$ is trivial and $G_v$ is non-abelian.
In this case $Z_{G_v}(G_e)=G_v$, and it is not affected by the edge and vertex relations since
both $Z(G_v)$ and $Z(G_e)$ are trivial.
Our claim follows.
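For instance (a special case recorded only as an illustration), if $\Gamma$ has a single edge, with trivial edge group and two non-abelian vertex groups $A$ and $B$ (so $G=A*B$), then $\prod_{e,v} Z_{G_v}(G_e)=A\times B$ and the edge and vertex relations are trivial, so the presentation gives $\calt\simeq A\times B$.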
It follows that $\calt$
is of type F provided that it is torsion-free. One may show that this is always the case, but it is simpler to replace $\Out^1(T,\calk)$ by its intersection with a torsion-free finite index subgroup of $\Out(G)$.
\subsubsection{Changing $T$}
We shall now prove Theorem \ref{mcgg} in the general case.
The first step, carried out in this subsection, is to replace $T$ by a better tree $\hat T$ (satisfying the second assertion of the lemma below). When all edge stabilizers are non-trivial, $\hat T$ may be viewed as the smallest common refinement (called lcm in \cite{GL3b}) of $T$ and its tree of cylinders (see Subsection \ref{jsj}).
Here is the
construction of $\hat T$.
Consider edges of $T$ with non-trivial stabilizer. We say that two such edges belong to the same cylinder if their stabilizers commute. Cylinders are subtrees and meet in at most one point. A vertex $v$ with all incident edge groups trivial belongs to no cylinder. Otherwise $v$ belongs to one cylinder if $G_v$ is abelian, to infinitely many cylinders if $G_v$ is not abelian.
To define $\hat T$, we shall refine $T$ at vertices $x$ belonging to infinitely many cylinders.
Given such an $x$, let $S_x$ be the set of cylinders $Y$ such that $x\in Y$. We replace $x$ by the cone $T_x$ on $S_x$: there is a central vertex, again denoted by $x$, and vertices $(x, s_Y)$ for $Y\in S_x$, with an edge between $x$ and $(x,s_Y)$. Edges $e$ of $T$ incident to $x$ are attached to $T_x$ as follows: if the stabilizer of $e$ is trivial, we attach it to the central vertex $x$; if not, $e$ is contained in a cylinder $Y$ and we attach $e$ to the vertex $(x,s_Y)$, noting that $G_e$ leaves $Y$ invariant.
Performing this operation at each $x$ belonging to infinitely many cylinders yields a tree $\hat T$. The construction being canonical, there is a natural action of $G$ on $\hat T$, and $\Out(T)\subset\Out(\hat T)$.
\begin{lem}\label{norm}
\begin{enumerate}
\item Edge stabilizers of $\hat T$ are abelian, $\hat T$ is dominated by $T$, and $\Out(\hat T)=\Out(T)$.
\item Let $G_v$ be a non-abelian vertex stabilizer of $\hat T$. Non-trivial incident edge stabilizers $G_e$ are maximal abelian subgroups of $G_v$. If $e_1$, $e_2$ are edges of $\hat T$ incident to $v$ with $G_{e_1}$, $G_{e_2}$ equal and non-trivial, then $e_1=e_2$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $Y$ be a cylinder in $S_x$ (viewed as a subtree of $T$).
The setwise stabilizer $G_Y$ of $Y$ is the maximal abelian subgroup of $G$ containing stabilizers of edges of $Y$. The stabilizer of the vertex $(x,s_Y)$ of $\hat T$, and also of the edge between $(x,s_Y)$ and $x$, is $G_x\cap G_{Y}$; it is non-trivial (it contains the stabilizers of the edges of $Y$ incident to $x$), and is a maximal abelian subgroup of $G_x$. This proves that edge stabilizers of $\hat T$ are abelian, since the other edges have the same stabilizer as in $T$.
Every vertex stabilizer of $T$ is also a vertex stabilizer of $\hat T$, so $T$ dominates $\hat T$. Edges of $\hat T$ which are not edges of $ T$ (those between $(x,s_Y)$ and $x$) are characterized as those having non-trivial stabilizer and having an endpoint $v$ with $G_v$ non-abelian. One recovers $T$ from $\hat T$ by collapsing these edges, so $\Out(\hat T)\subset\Out(T)$.
Consider two edges $e_1$ and $e_2$ incident to $v$ in $\hat T$, with the same non-trivial stabilizer. They join $v$ to vertices $(v,s_{Y_i})$, and we have seen that $G_{e_1}=G_{e_2}$ is maximal abelian in $G_v$. The groups $G_{Y_1}$ and $G_{Y_2}$ are equal because they both contain $G_{e_1}=G_{e_2}$. Edges of $Y_1$ and $Y_2$ have stabilizers contained in the abelian group $G_{Y_1}=G_{Y_2}$, so all these stabilizers commute. Thus $Y_1=Y_2$, so $e_1=e_2$.
\end{proof}
\begin{rem}\label{nonconj}
If $G_{e_1}$, $G_{e_2}$ are conjugate in $G_v$, rather than equal, we conclude that $e_1$ and $e_2$ belong to the same $G_v$-orbit. On the other hand, edges belonging to different $G_v$-orbits may have stabilizers which are conjugate in $G$ (but not in $G_v$).
\end{rem}
\subsubsection {The action on edge groups}
\label{sec_edge}
In Subsection \ref{cyc} we could neglect the action of $\Out^0(T)$ on edge groups because all groups $\Out(G_e)$ were finite. We now allow edge stabilizers of arbitrary rank, so we must take these actions into account.
We denote $\Out^0(T,\calk)=\Out^0(T)\cap\Out(G;\calk)$.
Recall that, for each edge $e$ of $\Gamma=T/G$,
there is a natural map $\rho_e:\Out^0(T)\ra \Out(G_e)$ (see Section \ref{automs}).
The collection of all these maps defines a map
$$\psi:\Out^0(T,\calk)\to\prod_{e\in E}\Out(G_e),$$
the product being over all non-oriented edges of $\Gamma$.
We denote by $Q$ the image of $\Out^0(T,\calk)$ by
$\psi$, so that we have the exact sequence
$$1\ra \ker\psi\ra \Out^0(T,\calk)\ra Q\ra 1.$$
\begin{lem} \label{pt2}
If $T$ satisfies the second assertion of Lemma \ref{norm},
then the group $Q$ is of type VF.
\end{lem}
This lemma will be proved in the next subsection.
We first explain how to deduce Theorem \ref{mcgg} from it. The first assertion of Lemma \ref{norm} implies that the theorem holds for $T$ if it holds for $\hat T$, so we may assume that $T$ satisfies the second assertion of Lemma \ref{norm}.
The kernel of $\psi$ is the group discussed in Subsection \ref {cyc} under the name $\Out^1(T, \calk)$, but now (contrary to Convention \ref{1}) $\Out^1(T, \calk)$ may be of infinite index in $\Out(T,\calk)$; indeed,
$\Out(T,\calk)$ is virtually an extension of
$\Out^1(T ,\calk)$ by $Q$. To avoid confusion, we use the notation $\ker \psi$ rather than $\Out^1(T ,\calk)$.
We proved in Subsection \ref {cyc}
that $\ker \psi$ is of type VF, and $Q$ is of type VF by the lemma, but this is not quite sufficient (see Remark \ref{subt}).
We shall now
construct a finite index subgroup $\Out^2(T,\calk)\subset \Out^0(T,\calk)$
such that $\ker\psi\cap\Out^2(T,\calk)$ has type $F$. Applying Corollary \ref{cor_extensionVF} to $\Out^0(T,\calk)$ then completes the proof of Theorem \ref{mcgg}.
We argue as in Subsection \ref{cyc}. Recall from Subsection \ref{automs}
the exact sequence
$$1\to\calt\to\Out^0(T,\calk)\xra{\ \rho\ }\prod_{v\in V}\Out(G_v; \mathrm{Inc}_v, \calk_{||G_v}) $$
whose restriction to $\ker \psi$ is the exact sequence
$$1\to\calt\to\ker \psi\xra{\ \rho\ }\prod_{v\in V}\Out(G_v; \mk\mathrm{Inc}_v, \calk_{||G_v})\ra 1. $$
Using the ``more precise'' statement of Theorem \ref{mcga}
we get, for each $v\in V$,
a finite index subgroup
$\Out^1(G_v; \mathrm{Inc}_v, \calk_{||G_v})\subset\Out(G_v; \mathrm{Inc}_v, \calk_{||G_v})$
such that
$$\Out^1(G_v; \mathrm{Inc}_v, \calk_{||G_v})\cap\Out(G_v; \mk\mathrm{Inc}_v, \calk_{||G_v})$$ is of type F.
Define the finite index subgroup $\Out^2(T,\calk)\subset \Out^0(T,\calk)$
as the preimage of $\prod_{v\in V}\Out^1(G_v; \mathrm{Inc}_v, \calk_{||G_v})$ under $\rho$,
intersected with a torsion-free finite index subgroup of $\Out(G)$.
Restricting the exact sequence above,
we get an exact sequence
$$1\to\calt'\to\ker\psi\cap\Out^2(T,\calk)\xra{\ \rho\ } L \ra 1$$
where $L$ has finite index in the product of the groups
$$\Out^1(G_v; \mathrm{Inc}_v, \calk_{||G_v})\cap \Out(G_v; \mk\mathrm{Inc}_v, \calk_{||G_v}) ,$$ hence has type F.
The group $\calt'$ is a torsion-free subgroup of finite index of $\calt$, so
has type F as in Subsection \ref{cyc}. We conclude that $\ker\psi\cap\Out^2(T,\calk)$ has type F.
As explained above, this completes the proof of Theorem \ref{mcgg} (assuming Lemma \ref{pt2}).
\subsubsection {Proof of Lemma \ref{pt2}} \label{finpf}
There remains to prove Lemma \ref{pt2}. We let $E_j$ be representatives of conjugacy classes of maximal abelian subgroups containing a non-trivial edge stabilizer.
Note that $E_j$ is allowed to be cyclic, and maximal abelian subgroups of $G$
containing no non-trivial $G_e$ are not included.
Inside each $E_j$ we let $B_j$ be the smallest direct factor containing all edge groups included in $E_j$ (it equals $E_j$ if $E_j$ is cyclic). It is elliptic in $T$, because it is an abelian group generated (virtually) by elliptic subgroups.
Each automorphism $\Phi\in \Out^0(T,\calk)$ induces an automorphism of $E_j$, which preserves $B_j$ and all the
edge groups it contains.
This defines a map
$$\psi':\Out^0(T,\calk)\ra \prod_j \Out(B_j)$$
having the same kernel as the map $\psi:\Out^0(T,\calk)\to\prod_{e\in E}\Out(G_e)$ defined in Subsection \ref{sec_edge}.
Thus, it suffices to prove that the image of $\Out^0(T,\calk)$ by $\psi'$ is of type VF. We do so by finding a finite index subgroup
$ \Out^1(T,\calk)$ (not the same as in Subsection \ref{cyc}) whose image is a product $\prod_j Q_j$ with each $Q_j$ of type VF.
Consider a non-abelian vertex group $G_v$. Define $\mathrm{Inc}_{v,{\mathbb {Z}}}\subset\mathrm{Inc}_v$ by keeping only the incident edge groups which are infinite cyclic, and denote by $E_{nc}(v)$ the set of edges $e$ of $\Gamma$ with origin $v$ and $G_e$ non-cyclic (if $e$ is a loop, we subdivide it so that it counts twice in $E_{nc}(v)$). By Lemma \ref{norm} and Remark \ref{nonconj}, the edge groups $G_e$, for $e\in E_{nc}(v)$, are non-conjugate maximal abelian subgroups of $G_v$.
We apply Proposition \ref{image}, describing the action on non-cyclic maximal abelian subgroups, to $\Out(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v}) $.
We get a subgroup of finite index $\Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v})$,
and a subgroup $F_e^v \subset G_e$ for each edge $e\in E_{nc}(v)$,
such that
the image of $\Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v})$ in $\prod_{e\in E_{nc}(v)}\Out(G_e)$ is
the product $\prod_{e\in E_{nc}(v)} \Out(G_e;\mk {\{F_e^v\}},\calk_{|G_e})$.
We let $\Out^1(T,\calk)\subset\Out^0(T,\calk)$ be the subgroup consisting of automorphisms acting trivially on cyclic edge stabilizers, and acting on non-abelian vertex stabilizers as an element of $\Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v}) $. It has finite index because
$$\Out(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v}) \subset \rho_v(\Out^0(T,\calk))\subset\Out(G_v;\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v} ),$$ with all indices finite.
We now define $Q_j\subset\Out(B_j)$ as consisting of automorphisms $\Phi_j$ such that:
\begin{enumerate}
\item if $G_e$ is a cyclic edge stabilizer contained in $B_j$, then $\Phi_j$ acts trivially on $G_e$;
\item if $B_j$ contains a non-cyclic $G_e$, and $v$ is an endpoint of $e$ with $G_v$ non-abelian, then $\Phi_j$ acts trivially on $F_e^v$;
\item non-cyclic edge stabilizers, and abelian vertex stabilizers, contained in $B_j$ are $\Phi_j$-invariant;
\item $\Phi_j$ extends to an automorphism of $E_j$ leaving $\calk_{ | E_j}$ invariant; in particular, subgroups of $B_j$ conjugate to a group of $\calk$ are $\Phi_j$-invariant.
\end{enumerate}
This definition was designed so that the image of $\Out^1(T,\calk)$ by $\psi'$ is contained in $\prod_j Q_j $. We claim that equality holds:
\begin{lem}
The image of $\Out^1(T,\calk)$ by $\psi'$ equals $\prod_j Q_j $.
\end{lem}
\begin{proof}
We fix automorphisms $\Phi_j\in Q_j\subset\Out(B_j)$, and we have to construct an automorphism $\Phi\in\Out^1(T,\calk)$. By items 1 and 3 above, the $\Phi_j$'s induce automorphisms $\Phi_e$ of edge stabilizers
(each non-trivial edge group $G_e$ lies in a unique $E_j$, so there is no ambiguity in the definition of $\Phi_e$).
As explained after Lemma \ref{modif}, it suffices to find automorphisms $\Phi_v$ of vertex groups inducing the $\Phi_e$'s.
We distinguish several cases.
If $G_v$ is contained in some $B_j$ (up to conjugacy), it is $\Phi_j$-invariant by item 3, so we let $\Phi_v$ be the restriction.
If $G_v$ is abelian, but not contained in any $B_j$, we may assume that some incident $G_e$ is non-cyclic (otherwise we let $\Phi_v$ be the identity).
This $G_e$ is contained in some $B_j$, and $G_v\subset E_j$. In fact $G_v=E_j$: since $G_v$ is not contained in $ B_j$,
it fixes only $v$, and $E_j$ fixes $v$ because it commutes with $G_v$. We may thus extend $\Phi_j$ to $G_v$ using item 4.
If $G_v$ is not abelian,
we construct $\Phi_v$ in $\Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v} ) $ as follows.
If $e\in E_{nc}(v)$,
the automorphism $\Phi_e$ acts trivially on $F_e^v$ by item 2, and preserves $\calk_{|G_e}$ by item 4.
Thus, the collection of automorphisms $\Phi_e$
lies in
$\prod_{e\in E_{nc}(v)} \Out(G_e;\mk {\{F_e^v\}},\calk_{|G_e})$.
Proposition \ref{image} guarantees that
$\Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v} ) $
contains an automorphism $\Phi_v $
inducing $\Phi_e$ for all $e\in E_{nc}(v)$ (and acting trivially on all cyclic incident edge groups).
We have now constructed automorphisms $\Phi_v\in\Out(G_v)$ inducing the $\Phi_e$'s, so Lemma \ref{modif} provides an automorphism $\Phi\in\Out^0(T)$ whose image in $\prod_j\Out(B_j)$ is the product of the $\Phi_j$'s because $B_j$ is virtually generated by edge stabilizers.
We show $\Phi\in\Out^1(T,\calk)$. By construction it acts trivially on cyclic edge groups and acts on non-abelian vertex stabilizers as an element of $\Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v}) $. We just have to check that $\Phi$ leaves any $K\in\calk $ invariant.
The group $K$ is contained in some $G_v$.
If $K$ is contained in some $B_j$, it is $\Phi$-invariant by item 4.
Otherwise, $K$ fixes no edge.
If $G_v$ is abelian, we have seen that either all incident edge groups are cyclic (and $\Phi_v$ is the identity), or $G_v$ equals some $E_j$ and our choice of $\Phi_v$ using item 4 guarantees that $K$ is invariant.
If $G_v$ is not abelian, then $K$ belongs to $\calk_{||G_v}$ because it fixes no edge.
It is invariant because we chose $\Phi_v\in \Out'(G_v;\mk\mathrm{Inc}_{v,{\mathbb {Z}}},\calk_{||G_v}) $.
\end{proof}
We have seen that the group $Q$ of Lemma \ref{pt2} is isomorphic to the image of $\Out^0(T,\calk)$ by $\psi'$, hence contains $\prod_j Q_j$ with finite index.
To show that $Q$ is of type VF, there remains to show that each $Q_j$ is of type VF.
We defined $Q_j$ inside $\Out(B_j)$ by four conditions. As in Lemma \ref{arithm}, the first three define an arithmetic group.
To deal with the fourth one, we consider the group $\tilde Q_j$ consisting of automorphisms of $E_j$ leaving $B_j$ and $\calk_{ | E_j}$ invariant, with the restriction to $B_j$ satisfying the first three conditions. This is an arithmetic group.
It consists of block-triangular matrices, and one obtains $Q_j$ by considering the upper left blocks of matrices in $\tilde Q_j$. It follows that $Q_j$ is arithmetic, as the image of an arithmetic group by a rational homomorphism \cite[Th.\ 6]{Borel_density},
hence of type VF by Lemma \ref{arithm}.
This completes the proof of Lemma \ref{pt2}, hence of Theorem \ref{mcgg}.
\section{A finiteness result for trees}
The goal of this section is Proposition \ref{edstab}, which gives a uniform bound for the size of certain sets of relative JSJ decompositions of $G$.
This is an essential ingredient in the proof of the chain condition for McCool groups.
We will have to restrict to root-closed (RC) trees, which are introduced in Definitions \ref{rct} and \ref{rcj}
(they are closely related to the primary splittings of \cite{DaGr_isomorphism}).
\begin{dfn} \label{rootcl} Let $H $ be a subgroup of a group
$ G$. Its \emph{root closure} $e(H,G)$, or simply $e(H)$, is the set of elements of $G$ having a power in $H$. If $e(H)=H$, we say that $H$ is \emph{root-closed}.
\end{dfn}
If $G$ is toral relatively hyperbolic and $H$ is abelian, $e(H)$ is a direct factor of the maximal abelian subgroup containing $H$, and $H$ has finite index in $e(H)$.
Also note that, given $h\in G$ and $n\ge2$, there exists at most one element $g$ such that $g^n=h$.
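To illustrate the definition on a toy example: in $G={\mathbb {Z}}^2$, take $H=2{\mathbb {Z}}\times\{0\}$; an element has a power in $H$ exactly when it lies on the first axis, so
$$e(H)={\mathbb {Z}}\times\{0\},$$
a direct factor of ${\mathbb {Z}}^2$ in which $H$ has index $2$.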
The following fact is completely general.
\begin{lem} \label{ev}
Let $T$ be a tree with an action of an arbitrary group. The following are equivalent:
\begin{itemize}
\item Vertex stabilizers of $T$ are root-closed.
\item Edge stabilizers of $T$ are root-closed.
\end{itemize}
\end{lem}
\begin{proof} If $g^n$ fixes an edge $e=vw$, it fixes $v$ and $w$. If vertex stabilizers are root-closed, $g$ fixes $v$ and $w$, hence fixes $e$, so edge stabilizers are root-closed.
Conversely, if $g^n$ fixes a vertex $v$, then $g$ is elliptic hence fixes a vertex $w$. Edges between $v$ and $w$ (if any) are fixed by $g^n$, hence by $g$ if edge stabilizers are root-closed. Thus $g$ fixes $v$.
\end{proof}
We now go back to a toral relatively hyperbolic group $G$.
\begin{dfn} \label{rct}
A tree $T$ is an \emph{$\RC$-tree} if:
\begin{itemize}
\item all non-cyclic abelian subgroups fix a point in $T$;
\item edge stabilizers of $T$ are abelian and root-closed.
\end{itemize}
\end{dfn}
When $G$ is hyperbolic, $\RC$-trees are the $\calz_{\mathrm{max}}$-trees of \cite{DG2}: non-trivial edge stabilizers are maximal cyclic subgroups.
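As an illustration (these splittings are only meant as examples), let $G=F_2=\langle a,b\rangle$. The Bass--Serre tree of the free splitting $\langle a\rangle*\langle b\rangle$ is an $\RC$-tree: edge stabilizers are trivial, hence root-closed, and $F_2$ has no non-cyclic abelian subgroup. On the other hand, the Bass--Serre tree of the splitting $\langle a,b^2\rangle*_{\langle b^2\rangle}\langle b\rangle$ is not an $\RC$-tree, since $\langle b^2\rangle$ is not root-closed ($b$ has a power in $\langle b^2\rangle$ but does not belong to it).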
\begin{lem} \label{ecl}
\begin{enumerate}
\item Let $T$ be an $\RC$-tree with all edge stabilizers non-trivial. Its
tree of cylinders $T_c$ (see Subsection \ref{jsj})
is an $\RC$-tree belonging to the same deformation space as $T$.
\item
If $T_1$ and $T_2$ are $\RC$-trees relative to some family $\calh$, and edge stabilizers of $T_1$ are elliptic in $T_2$, there is
an $\RC$-tree $\hat T_1$ relative to $\calh$ which refines $T_1$ and dominates $T_2$. Moreover, the stabilizer of any edge of $\hat T_1$ fixes an edge in $T_1$ or in $T_2$.
\end{enumerate}
\end{lem}
\begin{proof}
Non-triviality of edge stabilizers ensures that $T_c$ is defined.
The vertex stabilizers of $T_c$ are vertex stabilizers of $T$ or maximal abelian subgroups, so are root-closed. The deformation space does not change because $T$ is relative to non-cyclic abelian subgroups (see Proposition 6.3 of \cite{GL4}). This proves 1.
We define a refinement $\hat T_1$ of $T_1$ dominating $T_2$ as in Lemma 3.2 of \cite{GL3a}, by blowing up each vertex $v$ of $T_1$ into a $G_v$-invariant subtree of $T_2$. We just have to check that its edge stabilizers are root-closed.
As in the proof of Lemma 4.9 of \cite{DG2}, an edge stabilizer of $\hat T_1$ is an edge stabilizer of $T_1$ or is the intersection of a vertex stabilizer of $T_1$ with an edge stabilizer of $T_2$, so is root-closed.
\end{proof}
\begin{prop} \label{access}
Let $G$ be toral relatively hyperbolic.
In each of the following two cases, there is a bound for the number of orbits of edges of a minimal tree $T$ with abelian edge stabilizers:
\begin{enumerate}
\item $T$ is bipartite: each edge has exactly one endpoint with abelian stabilizer (redundant vertices are allowed);
\item $T$ is an $\RC$-tree with no redundant vertex.
\end{enumerate}
\end{prop}
Here, and below, the bound has to depend only on $G$ (it is independent of the trees under consideration).
Case 1 applies in particular to trees of cylinders.
\begin{proof} We cannot apply Bestvina-Feighn's accessibility theorem \cite{BF_bounding} directly because $T$ does not have to be reduced in the sense of \cite{BF_bounding}: $\Gamma=T/G$ may have a vertex $v$ of valence 2 such that an incident
edge carries the same group as $v$. We say that such a $v$ is a non-reduced vertex. The assumptions
rule out the possibility that $\Gamma$ contains long segments consisting of non-reduced vertices (as in the example on top of page 450 in \cite{BF_bounding}).
If $T$ is bipartite, consider all non-reduced vertices of $\Gamma$ and collapse exactly one of the incident edges. This yields a reduced graph of groups, and at most half of the edges of $\Gamma$ are collapsed, so \cite{BF_bounding} gives a bound.
If $T$ is an $\RC$-tree with no redundant vertex, every non-reduced vertex $v$ of $\Gamma=T/G$ has exactly two adjacent edges $e_v$ and $f_v$, whose groups satisfy $G_{e_v}\subsets G_v=G_{f_v}$.
Among all edges incident to a non-reduced vertex, consider the set $E_m$ consisting of those with $G_e$ of minimal rank.
No two edges of $E_m$ are adjacent at a non-reduced vertex, because $T$ is an $\RC$-tree. Now collapse the edges in $E_m$.
If $I=e_1\cup e_2\cup \dots \cup e_k$ is a maximal segment in the complement of the set of vertices of $\Gamma$ having degree at least 3 or carrying a non-abelian group, we never collapse adjacent edges $e_i,e_{i+1}$ (and we do not collapse $e_1$ if $k=1$; we may collapse $e_1$ and $e_3$ if $k=3$).
It follows that
at least one third of the edges of $\Gamma$ remain after the collapse.
Repeat the process.
Denote by $M$ the maximal rank of abelian subgroups of $G$.
After at most $M$
steps one obtains a graph of groups which is reduced in the sense of \cite{BF_bounding}, hence has at most $N$ edges for some fixed $N$. The number of edges of $\Gamma$ is bounded by $3^MN$.
\end{proof}
\begin{prop} \label{dom2}
Given a toral relatively hyperbolic group $G$,
there exists a number $M$ such that, if $T_1\to T_2\to\dots\to T_p$ is a sequence of maps between $\RC$-trees
belonging to distinct deformation spaces, then $p\le M$.
\end{prop}
\begin{proof}
There are two steps.
$\bullet$ The first step is to reduce to the case when no edge stabilizer is trivial.
Consider the tree $\bar T_i$ (possibly a point) obtained from $T_i$ by collapsing all edges with non-trivial stabilizer. A map $ T_i\to T_{i+1}$ cannot send an arc with non-trivial stabilizer to the interior of an edge with trivial stabilizer, so $\bar T_i$ dominates $\bar T_{i+1}$. Vertex stabilizers of $\bar T_i$ are free factors, so there are finitely many possibilities for their isomorphism type.
Using Scott's complexity, it is shown in Section 2.2 of \cite{Gui_actions} that the number of times that the deformation space $\cald_i$ of $\bar T_i$ differs from that of $\bar T_{i+1}$ is uniformly bounded. We may therefore assume that $\cald=\cald_i$ is independent of $i$.
Let
$H_1,\dots,H_k$ be representatives of conjugacy classes of non-trivial vertex stabilizers of trees in $\cald$.
They
are free factors of $G$, hence toral relatively hyperbolic,
and $k$ is bounded.
Consider the action of $H_j$ on its minimal subtree $T^j_i\subset T_i$ (we let $T^j_i$ be any fixed point if the action is trivial). It is an $\RC$-tree, and no edge stabilizer is trivial. The deformation space of $T_i$ is completely determined by $\cald
$ and the deformation spaces $\cald ^j_i$ of the trees $T^j_i$ (viewed as trees with an action of $H_j$). It therefore suffices to bound (by a constant depending only on $H_j$) the
number of times that $\cald ^j_i$ changes in a
sequence $T_1^j\to T_2^j\to\dots\to T_p^j$, so we may continue the proof under the additional assumption that the $T_i$'s have non-trivial edge stabilizers.
$\bullet$ Now that edge stabilizers are non-trivial, the tree of cylinders of $T_i$ is defined. By the first assertion of Lemma \ref{ecl},
we may assume that it equals $T_i$.
Since all trees are trees of cylinders, Proposition 4.11 of \cite{GL4}
lets us assume that all
domination maps $ T_i\to T_{i+1}$ send vertex to vertex, and map an edge to either a point or an edge. Such a map may collapse an edge to a point, or identify edges belonging to different orbits, or identify edges in the same orbit. The first two phenomena are easy to control since they decrease the number of orbits of edges; controlling the third one requires more care (and restricting to $\RC$-trees).
We associate a complexity $(n,-s)$ to each $T_i$, with $n$ the number
of edges of $T_i/G$, and $s$ the sum of the ranks of its edge groups;
complexities are ordered lexicographically. We claim that the complexity of $T_{i+1}$ is strictly smaller than that of $T_i$.
This gives the required uniform bound on $p$, since $n$ (hence also $s$) is bounded
by the first case of Proposition \ref{access}.
Let $f_i:T_i\to T_{i+1}$ be a domination map as above. Complexity clearly cannot increase when passing from $T_i$ to $T_{i+1}$. If $n$ does not decrease, no edge of $T_i$ is collapsed in $T_{i+1}$. Since $ T_i$ and $T_{i+1}$ belong to distinct deformation spaces, there exist distinct edges $e,e'$ identified by $f_i$.
They have to belong to the same orbit (otherwise $n$ decreases),
so $e'=ge$ for some $g\in G$.
The group $\grp{g,G_e}$ fixes the edge $f_i(e)=f_i(e')$ of $T_{i+1}$, so is abelian.
It has rank bigger than the rank of $G_e$ because $G_e$ is root-closed and $g\notin G_e$.
Thus $s$ increases, and the complexity decreases.
\end{proof}
Let $\cala $ be the family of all abelian subgroups. Let $\calh$ be a family of subgroups of $G$.
A JSJ tree (over $\cala$) relative to $\calh $ may be defined as a tree $T$ such that
$T$ is relative to $\calh$, edge stabilizers of $T$ are
elliptic in every tree which is relative to $\calh$, and $T$ dominates every tree satisfying the previous conditions (all trees
are assumed to have abelian edge stabilizers).
This motivates the following definition, where we require that $T$ be an $\RC$-tree (compare Section 4.4 of \cite{DG2}). Recall that $\hp$ is obtained by adding
all non-cyclic abelian subgroups to $\calh$.
\begin{dfn} \label {rcj}
Let $G$ be a toral relatively hyperbolic group,
and $\calh$ a family of subgroups.
A tree $T$ is an \emph{$\RC$-JSJ tree relative to $\hp$} if:
\begin{enumerate}
\item $T$ is relative to $\hp$, and is an $\RC$-tree;
\item edge stabilizers of $T$ are
elliptic in every tree with abelian edge stabilizers (not necessarily an $\RC$-tree) which is relative to $\hp$;
\item $T$ dominates every tree satisfying 1 and 2.
\end{enumerate}
\end{dfn}
We will construct $\RC$-JSJ trees in Section \ref{pfcc}. Note that non-cyclic edge stabilizers always satisfy 2.
\begin{prop} \label{edstab}
Let $G$ be a toral relatively hyperbolic group.
Let $\calh_1\subset\dots\subset \calh_i\subset\dots$ be an increasing sequence (finite or infinite) of families of subgroups, with $G$ freely indecomposable relative to $\calh_1$.
For each $i$, let $U_i$ be an $\RC$-JSJ tree relative to $\calh_i^{+ab}$.
There exists a number $q$, depending only on $G$, such that
the trees $U_i$ belong to at most $q$ distinct deformation spaces.
\end{prop}
\begin{proof}
Let $U_i$ be as in the proposition.
Note that $U_i$ satisfies condition 1 of Definition \ref{rcj}
with respect to $\calh_j^{+ab}$ if $j\le i$, and condition 2 with respect to $\calh_j^{+ab}$ if $j\ge i$. But cyclic edge stabilizers of $U_i$ do not necessarily satisfy 2 with respect to $\calh_j^{+ab}$ if $j< i$.
In general, there is no domination map $U_i\ra U_{i+1}$, so we cannot apply Proposition \ref{dom2} directly.
The easy case is when, for each $i$, every cyclic edge stabilizer of $U_{i+1}$
is contained in an edge stabilizer of $U_i$.
Indeed, this implies that $U_{i+1}$
satisfies condition 2 with respect to $\calh_i^{+ab}$ (not just to $\calh_{i+1}^{+ab}$).
By condition 3,
$U_i$ dominates $U_{i+1}$,
so Proposition \ref{dom2} applies.
Next, assume that there is an $\RC$-tree $T$ relative to $\calh_1$ such that, for all $i$,
there is a domination map $T\ra U_i$ that collapses no edge.
Each cyclic edge stabilizer $G_e$ of $U_{i+1} $ contains an edge stabilizer $G_{e'}$ of $T$ (take for $e'$ any edge whose image contains a subarc of $e$). Since $G$ is freely indecomposable relative to $\calh_1$, and $T$ is relative to $\calh_1$, one has $G_{e'}\neq 1$, and
$G_{e'}=G_e$ because $G_{e'}$ is root-closed.
Since the map $T\ra U_{i}$ collapses no edge, $G_e$ fixes an edge in $U_i$, and we conclude as above.
We now construct such a tree $T$.
By condition 2 of Definition \ref{rcj}, edge stabilizers of $U_1$ are elliptic in $U_2$,
so by Lemma \ref{ecl} there is an $\RC$-tree $T_1$ relative to $\calh_1$
which refines $U_1$ and dominates $U_2$; we remove redundant vertices of $T_1$ if needed.
Edge stabilizers of $T_1$ fix an edge in $U_1$ or $U_2$, so are elliptic in $U_3$ and one may iterate. One obtains $\RC$-trees $T_i$ relative to $\calh_1$ such that $T_i$ refines $T_{i -1}$ and dominates $U_{i+1}$. By Proposition \ref{access}, all trees $T_i$ for $i$ large enough are equal to a fixed $\RC$-tree $T$.
We have no control over how large $i$ has to be, but we have a uniform bound for the number of orbits of edges of $T$.
By construction, there are domination maps $f_i:T\to U_i$, but $f_i$ may collapse some $G$-invariant set of edges.
There are only a bounded number of possibilities for the set $E_i$ of edges of $T$ that are collapsed by $f_i$,
so we
may assume that $E=E_{i}$ is independent of $i$. Collapsing all edges of $E$ then gives
a tree $T$ as wanted.
\end{proof}
\section{The chain condition}\label{pfcc}
\renewcommand {\calc} {{\mathcal {H}}}
We prove Theorem \ref{mccc}. In this section we only consider groups of the form $\Out(G;\mk\calh)$, so we use the simpler notation ${\mathrm{Mc}}(\calc )$.
Since we do not yet know that every ${\mathrm{Mc}}(\calc )$ is a McCool group, we assume that every $\calc_i$ is a finite set of finitely generated subgroups (this is needed to apply Lemma \ref{lem_fini}).
Since ${\mathrm{Mc}}(\calc')={\mathrm{Mc}}(\calc\cup\calc')$ if ${\mathrm{Mc}}(\calc)\supset {\mathrm{Mc}}(\calc')$, we may assume $\calc_i\subset\calc_{i+1}$.
We will use the following procedure several times. We associate an invariant to each family $\calc_i$, and we show that, as $i$ varies, the number of distinct values of the invariant is bounded (by which we mean that there is a bound depending only on $G$). We then continue the proof under the additional assumption that the value of the invariant is independent of $i$.
$\bullet$
The first invariant is the Grushko deformation space $\cald_i$ relative to $\calc_i$ (see Subsection \ref{jsj}).
The assumption $\calc_i\subset\calc_{i+1}$ implies that $\cald_i$ dominates $\cald_{i+1}$.
As in the proof of Proposition \ref{dom2}, it follows from
\cite{Gui_actions} that the number of times that $\cald_i$ changes is bounded. We may therefore assume that $\cald_i$ is constant.
Let $G_1,\dots,G_n$ be the free factors in a Grushko decomposition $G=G_1*\dots*G_n*F_p$ relative to $\calc_i$
(they do not depend on $i$ up to conjugation since $\cald_i$ is constant).
The subgroup of ${\mathrm{Mc}}(\calc_i)$ consisting of automorphisms sending each factor $G_j$ to a conjugate has bounded index,
and it is determined by the McCool groups
${\mathrm{Mc}}_{G_j}(\calc_i{}_{ | G_j})$, so we are reduced to the case when $G$ is freely indecomposable relative to $\calc_i$.
$\bullet$
We then consider the canonical JSJ tree $T_i$ (over abelian subgroups) relative to $\cip$, i.e.\ to $\calc_i$ and all non-cyclic abelian subgroups (see Subsection \ref{jsj}); it is ${\mathrm{Mc}}(\calc_i)$-invariant. We cannot use Proposition \ref{edstab} to say that the number of distinct $T_i$'s is bounded, because they are not $\RC$-trees, so we shall now replace $T_i$
by an $\RC$-JSJ tree $U_i$.
Any edge $e$ of $T_i$ joins a vertex ${v_1}$ whose stabilizer is a maximal abelian subgroup to a vertex ${v_0}$ with non-abelian stabilizer. The group $G_e$ is a maximal abelian subgroup of $G_{v_0}$, but not necessarily of $G_{v_1}$. Let $\bar G_e$ be the root-closure of $G_e$ in $G_{v_1}$ (hence also in $G$). As in Section 4.3 of \cite{DG2}, we can fold all edges in the $\bar G_e$-orbit of $e$ together. Doing this for all edges of $T_i$ yields an $\RC$-tree $U_i$ which is ${\mathrm{Mc}}(\calc_i)$-invariant.
This construction may also be described in terms of graphs of groups, as follows. We now view
$e=v_0v_1$ as an edge of $T_i/G$. Subdivide it by adding a midpoint $u$ carrying $\bar G_e$. This creates two edges $v_0u$ and $uv_1$, carrying $G_e$ and $\bar G_e$ respectively. Do this for every edge $e$ of $T_i/G$. Collapsing all edges $uv_1$ yields $T_i/G$, whereas collapsing all edges $v_0u$ yields $U_i/G$.
The quotient graph $U_i/G$ is the same as $T_i/G$, but labels are different. Edge groups are replaced by their root-closure, and non-abelian vertex groups have become larger (roots have been adjoined: each fold replaces some
$G_{v_0}$ by $G_{v_0}*_{G_e}\bar G_e$). Just like $T_i$, the tree $U_i$ is equal to its tree of cylinders because folding only occurs within cylinders; in particular, $U_i$ is determined by its deformation space.
Note that $U_i$ may have redundant vertices, and is not necessarily minimal (this happens if $T_i/G$ has a terminal vertex carrying an abelian group, and the incident edge group has finite index in the vertex group). In this case we replace $U_i$ by its minimal subtree.
We claim that $U_i$ is an $\RC$-JSJ tree relative to $\calh_i^{+ab}$, in the sense of Definition \ref{rcj}. It satisfies conditions 1 and 2 since its edge stabilizers are finite extensions of edge stabilizers of $T_i$.
Any tree satisfying these two conditions is dominated by $T_i$ because $T_i$ is a JSJ tree. But any $\RC$-tree dominated by $T_i$ is also dominated by $U_i$ (with notations as above, $e$ and $ge$ must have the same image if $g\in \bar G_e$).
$\bullet$
Proposition \ref{edstab} lets us assume that $U_i$ is a fixed tree $U$. It is invariant under every ${\mathrm{Mc}}(\calc_i)$.
We let $\Out^0(U)$ be the finite index subgroup of $\Out(U)$ consisting of automorphisms preserving $U$ and acting trivially on $\Gamma=U/G$.
The number of edges of $\Gamma$ is uniformly bounded by Proposition \ref{access}, so the index of $\Out^0(U)$ in $\Out(U)$
is bounded, and it is enough to prove the chain condition for ${\mathrm{Mc}}^0(\calc_i):={\mathrm{Mc}}(\calc_i)\cap\Out^0(U)$.
Let $V$ be the set of vertices of $\Gamma $.
As recalled in Subsection \ref{automs}, there are maps $\rho_v:\Out^0(U)\to \Out(G_v)$ and a product map $\rho:\Out^0(U)\to\prod_{v\in V}\Out(G_v)$. Since $U$ is relative to $\calc_i$, the group of twists $\calt=\ker\rho$ is contained in ${\mathrm{Mc}}^0(\calc_i)$.
\begin{lem} \label{outun}
There exist subgroups $\Out^1(G_v)\subset \Out(G_v)$, independent of $i $, such that:
\begin{enumerate}
\item $\prod_{v\in V}
\Out^1(G_v)$ is contained in $\rho({\mathrm{Mc}}^0(\calc_i))$ for every $i$;
\item the index of $\Out^1(G_v)$ in $\rho_v({\mathrm{Mc}}^0(\calc_i))$ is uniformly bounded.
\end{enumerate}
\end{lem}
This lemma implies Theorem \ref{mccc} because ${\mathrm{Mc}}^0(\calc_i)$
contains $\rho\m(\prod_{v\in V}
\Out^1(G_v))$ with bounded index.
\begin{proof}[Proof of Lemma \ref{outun}]
Let $\calc_{i,v}:=(\calc_{i})_{||G_v}$ be
the set of (conjugacy classes of) subgroups of $G_v$ which are conjugate to an element of $\calc_i$, and which fix no other point in $U$ (see Subsection \ref{tre}).
Since two such subgroups are conjugate in $G_v$ if and only if they are conjugate in $G$,
we may view $\calc_{i,v}$ as a subset of $\calc_i$.
Since $\rho ({\mathrm{Mc}}^0(\calc_i))$ contains $\prod_{v\in V}{\mathrm{Mc}}(\mathrm{Inc}_v\cup\calc_{i,v} )$, as explained in Subsection \ref{automs},
it suffices
to fix $v\in V$ and to construct $\Out^1(G_v)$, \emph{with $\Out^1(G_v)\subset {\mathrm{Mc}}(\mathrm{Inc}_v\cup\calc_{i,v})$ and the index of $\Out^1(G_v)$ in $\rho_v({\mathrm{Mc}}^0(\calc_i))$ uniformly bounded.} We distinguish several cases.
$\bullet$ First suppose that $G_v\simeq {\mathbb {Z}}^k$ is abelian, so $\Out(G_v)=\Aut(G_v)=GL(k,{\mathbb {Z}})$.
Let $A_i$ be the root-closure of the subgroup of $G_v$ generated by incident edge groups and subgroups in $\calc_{i,v} $. It is a direct factor and increases with $i$, so we may assume that it is independent of $i$. We define $\Out^1(G_v)\subset \Out(G_v)$ as the subgroup consisting of automorphisms equal to the identity on $A_i$. It is equal to ${\mathrm{Mc}}(\mathrm{Inc}_v\cup\calc_{i,v})$ and contained in $\rho_v({\mathrm{Mc}}^0(\calc_i))$. We must show that the index is bounded.
The group $A_i$ is invariant under $\rho({\mathrm{Mc}}^0(\calc_i))$, and we have to bound the order of the image of ${\mathrm{Mc}}^0(\calc_i)$ in $\Out(A_i)$. Any incident edge group $\bar G_e$ of $G_v$ contains an edge stabilizer $G_e$ of $T_i$ with finite index, and
the image of the map $\rho_e: {\mathrm{Mc}}^0(\calc_i)\to \Out(G_e)$ is finite by Lemma \ref{lem_fini}. Since $A_i$ is generated by incident edge groups and elements which are fixed by ${\mathrm{Mc}}^0(\calc_i)$, this implies that the image of ${\mathrm{Mc}}^0(\calc_i)$ in $\Out(A_i)$ is finite. Its cardinality is uniformly bounded because there is a bound for the order of finite subgroups of $GL(k,{\mathbb {Z}})$, so the index of $\Out^1(G_v)$ in $\rho_v({\mathrm{Mc}}^0(\calc_i))$ is bounded.
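To make the last bound concrete (a classical fact, recalled only for illustration): for $m\ge 3$ the kernel of the reduction map $GL(k,{\mathbb {Z}})\to GL(k,{\mathbb {Z}}/m{\mathbb {Z}})$ is torsion-free, so every finite subgroup of $GL(k,{\mathbb {Z}})$ embeds into $GL(k,{\mathbb {Z}}/3{\mathbb {Z}})$, and its order is at most
$$|GL(k,{\mathbb {Z}}/3{\mathbb {Z}})|=\prod_{j=0}^{k-1}\bigl(3^k-3^j\bigr).$$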
$\bullet$
We now consider a non-abelian vertex stabilizer $G_v$.
It follows from the way $U_i$ was constructed that
$G_v$ is, for each $i$, the fundamental group of a graph of groups $\Lambda_{i,v}$. This graph is a tree. It has a central vertex $v_i$, which may be viewed as a vertex of $
T_i/G$ with $G_{v_i}$ non-abelian. All edges $e$ join $v_i$ to a vertex $u_e$ carrying a root-closed abelian group, and the index of $G_e$ in $G_{u_e}$ is finite.
The graph of groups $\Lambda_{i,v}$ is invariant under the action of $ {\mathrm{Mc}}^0(\calc_i)$ on $G_v$.
We say that $G_v$ (or $v$) is \emph{rigid with sockets}, or \emph{QH with sockets}, depending on the type of $v_i$ as a vertex of $T_i$ (since the number of vertices of $T_i/G$ is bounded, we may assume that this type is independent of $i$).
$\bullet$ If $G_v$ is rigid with sockets, we define
$\Out^1(G_v)$ as the trivial group, and we have to explain why $\rho_v({\mathrm{Mc}}^0(\calc_i))$ is a finite group of bounded order.
Assume first that $U=T_i$ (i.e.\ $U$ is also a JSJ tree in the usual sense).
Lemma \ref{lem_fini} then implies
that $\rho_v({\mathrm{Mc}}^0(\calc_i))$ is a finite subgroup of $\Out(G_v)$,
but we need to bound its order only in terms of $G$ (independently of the sequence $\calc_i$).
To get this uniform bound, we note that there are only finitely many possibilities for $G_{v}$ up to isomorphism
by \cite{GL_vertex}. Moreover $\Out(G_{v})$ is virtually torsion-free by \cite[Cor 4.5]{GL6}, so there is a bound for the order of its finite subgroups.
In general (\ie without assuming $U=T_i$),
we study $\rho_v({\mathrm{Mc}}^0(\calc_i))$ through its action on the graph of groups $\Lambda_{i,v}$ as in Subsection \ref{automs} (note that edges are not permuted).
The group of twists is trivial because edge groups are maximal abelian in $G_{v_i}$ and terminal vertex groups are abelian (see Proposition 3.1 of \cite{Lev_automorphisms}), so we only have to control the action of $ {\mathrm{Mc}}^0(\calc_i) $ on vertex groups of $\Lambda_{i,v}$.
Applying Lemma \ref{lem_fini} to the JSJ decomposition $T_i$,
we get finiteness of the image of $ {\mathrm{Mc}}^0(\calc_i )$ in $\Out(G_{v_i})$,
and in $\Out(G_e)$ for every edge $e$ of $T_i$, and hence of $\Lambda_{i,v}$.
The action of an automorphism on the edge groups of $\Lambda_{i,v}$ determines the action on the abelian vertex groups
because they contain the incident edge group with finite index.
This proves that $\rho_v({\mathrm{Mc}}^0(\calc_i))$ is finite, and boundedness follows as above.
$\bullet$ There remains the case when $G_v$ is QH with sockets. The group
$G_{v_i}$ is then isomorphic to the fundamental group of a compact surface $\Sigma_i$, and incident edge groups are boundary subgroups. The topology of $\Sigma_i$ may vary with $i$, but the number of boundary components of $\Sigma_i$ is bounded (by a simple accessibility argument, or because the rank of $G_{v_i}$ as a free group is bounded by \cite{GL_vertex}).
If $J$ is a subgroup of $G$, denote by $\calu_i(J)$ the set of elements of $J$ that are
$\cip$-universally elliptic (\ie elliptic in every $G$-tree with abelian edge stabilizers which is relative to $\calc_i$ and non-cyclic abelian subgroups). We view it as a union of $J$-conjugacy classes. Since $\calc_i\subset\calc_{i+1}$, we have $\calu_i(J)\subset\calu_{i+1}(J)$. We shall show that the sequence $\calu_i(G_v)$ stabilizes.
We first study $\calu_i(G_{v_i})$: we claim that $\calu_i(G_{v_i})$ is the
union of the conjugacy classes
of boundary subgroups of $G_{v_i}=\pi_1(\Sigma_i)$.
Indeed, any boundary subgroup is an incident edge group of $v_i$ (up to conjugacy)
or
has
a finite index subgroup conjugate to a group in $\calc_i$
(otherwise, $G$ would be freely decomposable relative to $\calc_i$, see Proposition 7.5 of \cite{GL3a}).
It follows that $\calu_i(G_{v_i})$ contains all boundary subgroups
(incident edge groups are $\cip$-universally elliptic because $T_i$ is a JSJ tree relative to $\cip$).
Conversely, by Proposition 7.6 of \cite{GL3a}, any $g\in\calu_i(G_{v_i})$ is contained in a boundary subgroup of $\pi_1(\Sigma_i)$. This proves our claim, and shows in particular that $\calu_i(G_{v_i})$ is the union of a bounded number of conjugacy classes of maximal cyclic subgroups $L_j(i)$ of $G_{v_i}$.
We now consider $\calu_i(G_{v})$.
The $\cip$-universally elliptic elements of $G_v$ are contained (up to conjugacy) in $G_{v_i}$ or in one of the terminal vertex groups of $\Lambda_{i,v}$, so $\calu_i(G_v)$ is the union of the conjugates of the root-closures (in $G_v$) of the groups $L_j(i)$.
Since $\calc_i\subset\calc_{i+1}$, we have $\calu_i(G_v)\subset\calu_{i+1}(G_v)$. As $\calu_i(G_v)$ is the union of the conjugates of a bounded number of cyclic subgroups,
we may assume that $\calu_i(G_v)=\calu(G_v)$ does not depend on $i$.
Elements of $\rho_v({\mathrm{Mc}}^0(\calc_i))$ send each cyclic group in $\calu(G_v)$ to a conjugate (conjugacy classes are not permuted because the action on $T_i/G$ is trivial). They act trivially on
groups in $\calc_{i,v}$, but they may map an element $g$ belonging to a terminal vertex group of $\Lambda_{i,v}$
to $g\m$ (geometrically, they correspond to homeomorphisms of $\Sigma_i$ which may reverse orientation on boundary components).
We define $\Out^1(G_v)\subset\Out (G_v)$ as the group of automorphisms acting trivially on each cyclic group in $\calu(G_v)$
(geometrically, we restrict to homeomorphisms of $\Sigma_i$ equal to the identity on the boundary). It is contained in ${\mathrm{Mc}}(\mathrm{Inc}_v\cup\calc_{i,v})$, because $\calu_i(G_v)$ contains the incident edge groups of $G_v$ in $U$,
and hence it is contained in $\rho_v({\mathrm{Mc}}^0(\calc_i))$; the index is bounded in terms of the number of conjugacy classes of cyclic subgroups in $\calu(G_v)$.
\end{proof}
\begin{rem} \label{ccab}
Groups of the form $\Out(G;\calh)$, with $\calh$ a finite family of abelian groups, do not satisfy the descending chain condition: consider $G={\mathbb {Z}}^2=\langle x,y\rangle$, and $\calh_i=\{\langle x,y^{2^i}\rangle\}$. The chain $\Out(G;\calh_i)$ is strictly descending, as the automorphism fixing $y$ and sending $x$ to $xy^{2^i}$ preserves $\langle x,y^{2^i}\rangle$ but not $\langle x,y^{2^{i+1}}\rangle$.
\end{rem}
\section{Proof of the other results
} \label{pfcor}
\renewcommand {\calc} {{\mathcal {C}}}
We first note the following consequence of the chain condition:
\begin{prop} \label{cinf}
If $\calc$ is an infinite family of conjugacy classes, there exists a finite subfamily $\calc'\subset\calc$ such that ${\mathrm{Mc}}(\calc)={\mathrm{Mc}}(\calc')$.
\end{prop}
Recall that ${\mathrm{Mc}}(\calc)$ is the group of outer automorphisms fixing all conjugacy classes belonging to $\calc$.
\begin{proof} Write $\calc$ as an increasing union of finite families $\calc_i$ and note that ${\mathrm{Mc}}(\calc)$ is the intersection of the descending chain ${\mathrm{Mc}}(\calc_i)$.
\end{proof}
To prove Corollary \ref{genmc}, saying in particular that every McCool group
is an elementary McCool group,
we need the following fact:
\begin{lem} \label {minos}
Let $G$ be a toral relatively hyperbolic group. Let $H$ be a subgroup, and $\alpha\in\Aut(G)$. If $\alpha(h)$ and $h$ are conjugate in $G$ for every $h\in H$, then $\alpha$ acts on $H$ as conjugation by some $g\in G$.
\end{lem}
\begin{proof} We may assume that there is a non-trivial $h\in H$ such that $\alpha(h)=h$. If $H$ is
abelian, malnormality of maximal abelian subgroups implies that $\alpha$ is the identity on $H$. If not, the result follows from Lemma 5.2 of \cite{MiOs_normal} (which is valid for any homomorphism $\varphi:H\to G$, not just automorphisms of $H$), see also Corollary 7.4 of
\cite{AMS_commensurating}.
\end{proof}
\begin{cor*}[Corollary \ref{genmc}]
Let $G$ be a toral relatively hyperbolic group.
If $\calh $ is any
family of
subgroups of $G$,
there exists a finite set $\calc$ of conjugacy classes
such that ${\mathrm{Mc}}(\calh)={\mathrm{Mc}}(\calc)$.
\end{cor*}
Recall that ${\mathrm{Mc}}(\calh)$ is also denoted $ \Out(G;\mk\calh)$. We favor the notation ${\mathrm{Mc}}(\calh)$ in this section.
\begin{proof}
Given an arbitrary family $\calh$, let $\calc_\calh$ be the set of all conjugacy classes having a representative belonging to some group of $\calh$. By Lemma \ref{minos}, $
{\mathrm{Mc}}(\calh)={\mathrm{Mc}}(\calc_\calh)$. We apply
Proposition \ref {cinf} to get $
{\mathrm{Mc}}(\calh)={\mathrm{Mc}}(\calc)$ with $\calc$ finite.
\end{proof}
Together with Theorem \ref{mcgg}, this implies our most general finiteness result.
\begin{cor} \label{thm_general}
Let $G$ be a toral relatively hyperbolic group.
Let $\calh$ be an arbitrary collection of subgroups of $G$.
Let $\calk$ be a finite
collection of abelian subgroups of $G$.
Let $T$ be a simplicial tree on which $G$ acts with abelian edge stabilizers,
with each group in $\calh\cup \calk$ fixing a point.
Then the group $\Out(T,\mk\calh,\calk)=\Out(T)\cap\Out(G;\mk\calh,\calk)$ of automorphisms leaving $T$ invariant,
acting trivially on each group of $\calh$, and sending each $K\in \calk$ to a conjugate (in an arbitrary way),
is of type VF.
\end{cor}
\begin{proof}
By Corollary \ref{genmc}, we may write $\Out(G;\mk\calh)={\mathrm{Mc}}(\calc)$ for some finite family of conjugacy classes $[c_i]$, with each $c_i$ belonging to a group of $\calh$ hence elliptic in $T$. Defining $\call=\{\langle c_i\rangle\}$, we see that ${\mathrm{Mc}}(\calc)$ is a finite index subgroup of $ \Out(G;\call)$, so $\Out(T,\mk\calh,\calk)$ is a finite index subgroup of $\Out(T,\calk\cup\call)$.
By Theorem \ref{mcgg}, this group has type VF,
and therefore so does $\Out(T,\mk\calh,\calk)$.
\end{proof}
Proposition \ref{infmc} and Theorem \ref{uccfix} will be proved at the end of the section.
\begin{prop*}[Proposition \ref{mccet}]
Given a toral relatively hyperbolic group $G$, there exists a number $C$ such that,
if a subgroup $\wh M\subset \Out(G)$ contains a group $ {\mathrm{Mc}}(\calh)$ with finite index, then
the index $[\wh M:{\mathrm{Mc}}(\calh)]$ is bounded by $C$.
\end{prop*}
\begin{proof} [Proof of Proposition \ref{mccet}]
By Corollary \ref{genmc}, we may write
$ {\mathrm{Mc}}(\calh)={\mathrm{Mc}}(\calc')$ for some finite set $\calc'$.
Let $\calc$ be the orbit of $\calc'$ under $\wh M$. Since ${\mathrm{Mc}}(\calc')$ fixes $\calc'$ and has finite index in $\wh M$,
this is a finite $\wh M$-invariant collection of conjugacy classes.
We thus have $${\mathrm{Mc}}(\calc)\subset {\mathrm{Mc}}(\calc')\subset \wh M\subset \wh{\mathrm{Mc}}(\calc),$$
and it suffices to bound the index $[\wh{\mathrm{Mc}}(\calc):{\mathrm{Mc}}(\calc)]$.
As in the beginning of Section \ref{pfcc},
let $G=G_1*\dots *G_n*F_r$ be a Grushko decomposition of $G$ relative to $\calc$, and $\calg=\{G_1,\dots, G_n\}$.
The group $\wh{\mathrm{Mc}}(\calc)$ permutes the conjugacy classes of the groups in $\calg$.
Since the cardinality of $\calg$ is bounded, and $G$ has finitely many free factors up to isomorphism, we may assume that $G$ is one-ended relative to $\calc$.
We now consider the JSJ decomposition $T_{\mathrm{can}}$ over abelian groups relative to $\calc$ and non-cyclic abelian groups. It is invariant under $\wh{\mathrm{Mc}}(\calc)$, so we may study $\wh{\mathrm{Mc}}(\calc)$ through its action on $T_{\mathrm{can}}$ (see Subsection \ref{automs}).
The number of edges of $\Gamma_{\mathrm{can}}=T_{\mathrm{can}}/G$ being bounded by the first case of Proposition \ref{access}, we may replace $\wh{\mathrm{Mc}}(\calc)$ and ${\mathrm{Mc}}(\calc)$ by their subgroups $\wh{\mathrm{Mc}}^0(\calc)$ and ${\mathrm{Mc}}^0(\calc)$ acting trivially on $\Gamma_{\mathrm{can}}$. The group of twists $\calt$ is contained in ${\mathrm{Mc}}^0(\calc)$, so
as in the proof of Lemma \ref{outun} it suffices to construct $\Out^1(G_v)\subset{\mathrm{Mc}}_{G_v}(\mathrm{Inc}_v\cup\calc_{ | | G_v} )$ with the index of $\Out^1(G_v)$ in $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$ uniformly bounded.
We distinguish the same cases as in
the proof of Lemma \ref{outun}.
If $G_v$ is abelian, isomorphic to ${\mathbb {Z}}^k$ with $k\ge2$, let $H<G_v$ be
the set of elements whose orbit under $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$ is finite. This is
a subgroup of $G_v$, isomorphic to some ${\mathbb {Z}}^p$, which is invariant under $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$
and contains the incident edge groups by Lemma \ref{lem_fini}. We define $\Out^1(G_v)={\mathrm{Mc}}_{G_v}(\{H\})$. It is contained in ${\mathrm{Mc}}_{G_v}(\mathrm{Inc}_v\cup\calc_{ | | G_v} )$.
The image of $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$ in $\Aut(H)=GL(p,{\mathbb {Z}})$ is finite, and its order bounds the index of $\Out^1(G_v)$ in $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$.
This concludes the proof in this case since there is a bound for the order of finite subgroups of $GL(p,{\mathbb {Z}})$.
If $G_v$ is rigid, we let $\Out^1(G_v)$ be trivial. The image of $\wh{\mathrm{Mc}}^0(\calc)$ in $\Out(G_v)$ is finite by Lemma \ref{lem_fini}, and bounded by \cite{GL_vertex} as in the proof of Lemma \ref{outun}.
If $G_v=\pi_1(\Sigma)$ is QH, we define $\Out^1(G_v)=\calp\calm^+(\Sigma)={\mathrm{Mc}}_{G_v}(\mathrm{Inc}_v \cup\calc_{||G_v} )$. Elements of $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$ may reverse orientation, or permute boundary components of $\Sigma$; since the number of boundary components of $\Sigma$ is bounded (as in the proof of Lemma \ref{outun}), the index of $\Out^1(G_v)$ in $\rho_v(\wh{\mathrm{Mc}}^0(\calc))$ is bounded.
\end{proof}
\begin{cor}\label{eucc}
Extended elementary McCool groups $\wh{\mathrm{Mc}}(\calc)$ of $G$ satisfy a uniform chain condition.
\end{cor}
\begin{proof}
Given a descending chain $\wh{\mathrm{Mc}}(\calc_i)$, define $\calc'_i=\calc_0\cup\dots\cup \calc_i$ and note that
$${\mathrm{Mc}}(\calc'_ i)=\cap_{j\le i} {\mathrm{Mc}}(\calc_j)\subset \wh{\mathrm{Mc}}(\calc_i)=\cap_{j\le i}\wh{\mathrm{Mc}}(\calc_j)\subset \wh{\mathrm{Mc}}(\calc'_i).$$
The corollary follows from Theorem \ref{mccc}, since by Proposition \ref{mccet} the index of ${\mathrm{Mc}}(\calc'_ i)$ in $\wh{\mathrm{Mc}}(\calc'_i)$ is bounded.
\end{proof}
We now prove Corollary \ref{rless} stating that, for any $A<\Out(G)$, there is a subgroup $A_0<A$ of bounded finite index such that, for the action of $A_0$ on the set of conjugacy classes of $G$, every orbit is a singleton or is infinite.
\begin{proof} [Proof of Corollary \ref{rless}]
Let $\calc_A$ be the (possibly infinite) set of conjugacy classes of $G$ whose $A$-orbit is finite.
Partition $\calc_A$ into $A$-orbits, and let $\calc_p$ be the union of the first $p$ orbits. The image of $A$ in the group of permutations of $\calc_p$ is contained in that of $\wh{\mathrm{Mc}}(\calc_p)$, so by Proposition \ref{mccet} its order is bounded by some fixed $C$. This $C$ also bounds the order of the image of $A$ in the group of permutations of $\calc_A$.
\end{proof}
Recall that $
{\mathrm{Ac}}(\calh, H_0)\subset\Aut(G)$ is the group of automorphisms acting trivially on $\calh$ (in the sense of Definition \ref{gmc}, i.e.\ by conjugation) and fixing the elements of $H_0$.
Proposition \ref{mcaut}
states that, if $G$ is non-abelian, then $
{\mathrm{Ac}}(\calh, H_0)$ is an extension $$1\to K\to
{\mathrm{Ac}}(\calh, H_0)\to {\mathrm{Mc}}(\calh')\to 1$$ with ${\mathrm{Mc}}(\calh')\subset\Out(G)$ a McCool group, and $K $ the centralizer of $H_0$.
Corollary \ref{mca}
states that the groups $ {\mathrm{Ac}}(\calh, H_0)$ are of type VF
and satisfy a uniform chain condition.
\begin{proof} [Proof of Proposition \ref{mcaut}]
Let $\calh'=\calh\cup\{H_0\}$.
Map
$
{\mathrm{Ac}}(\calh, H_0)\subset \Aut(G)$
to $\Out(G)$. The image is ${\mathrm{Mc}}(\calh')$.
The kernel $K$ is the set of inner automorphisms equal to the identity on $H_0$. Since $G$ has trivial center, it is isomorphic to the centralizer of $H_0$.
\end{proof}
\begin{proof} [Proof of Corollary \ref{mca}]
The group ${\mathrm{Mc}}(\calh')$ has type VF by Theorem \ref{mc}. The group $K$ is abelian or equal to $G$, so has type F because $G$ does \cite{Dah_classifying}. Proposition \ref{mcaut} and Corollary \ref{cor_extensionVF} imply that
$
{\mathrm{Ac}}(\calh, H_0)$
has type VF.
Moreover, a chain of centralizers has length at most 2 since the centralizer of $H_0$ is trivial, $G$, or a maximal abelian subgroup. The uniform chain condition for McCool groups (Proposition \ref{mccc}) then implies the uniform chain condition for groups of the form $
{\mathrm{Ac}}(\calh, H_0)$.
\end{proof}
We now deduce the bounded chain condition for fixed subgroups.
\begin{proof} [Proof of Theorem \ref{uccfix}] Let $J_0\subsetneq J_1\subsetneq\dots\subsetneq J_p$ be a strictly ascending chain of fixed subgroups.
Let ${\mathrm{Ac}}(\emptyset,J_i)$ be the subgroup of $\Aut(G)$ consisting of automorphisms equal to the identity on $J_i$.
Since $J_i$ is a fixed subgroup, ${\mathrm{Ac}}(\emptyset, J_i)\supsetneq {\mathrm{Ac}}(\emptyset, J_{i+1})$.
Corollary \ref{mca} then gives a bound on the length of the chain.
\end{proof}
\begin{rem*} One can adapt the arguments of Section \ref{pfcc} to prove Theorem \ref{uccfix} directly (without passing through McCool groups).
\end{rem*}
We now prove Proposition \ref{infmc} saying that $\Out(F_n)$ contains infinitely many non-isomorphic McCool groups
for $n\geq 4$, and infinitely many non-conjugate McCool groups for $n\geq 3$.
\begin{proof}[Proof of Proposition \ref{infmc}]
Let $H$ be the free group on three generators $a,b,c$. Given a non-trivial element $w\in\langle a,b\rangle$, let $P_w$ be the cyclic HNN extension $P_w=\langle a,b,c,t\mid tct\m=w\rangle$. It is free of rank 3, with basis $a,b,t$. Let $\varphi_w$ be the automorphism of $P_w$ fixing $a$ and $b$,
and mapping $t$ to $wt$
(it equals the identity on $H$ since it fixes $c=t\m wt$).
The image $\Phi_w$ of $\varphi_w$ in $\Out(P_w)$ preserves the Bass-Serre tree $T$ of the HNN extension (it belongs to its group of twists $\calt$).
We apply this construction with $w=a^kb^k$, for $k$ a positive integer. As $k$ varies, the
cyclic subgroups $\langle\Phi_w\rangle$
are pairwise non-conjugate in $\Out(P_w)\simeq\Out(F_3)$, as seen by considering the action on the
abelianization.
We shall now prove the second assertion of the proposition for $n=3$,
by showing that
$\langle\Phi_w\rangle$ is a McCool group of $P_w$,
namely $\langle\Phi_w\rangle={\mathrm{Mc}}_{P_w}(\{H\})\subset \Out(F_3)$.
The extension to $n>3$ is straightforward, by adding generators to $H$.
Consider splittings of $P_w$ over abelian (i.e.\ cyclic) subgroups relative to $H$.
The tree $T$
is a JSJ tree because its vertex stabilizers are universally elliptic \cite[lemma 4.7]{GL3a}; in particular, $P_w$ is freely indecomposable relative to $H$. Moreover, $T$ equals its tree of cylinders (up to adding redundant vertices) because $w$ is not a proper power, so $T$ is the canonical JSJ tree $T_{\mathrm{can}}$. The McCool group ${\mathrm{Mc}}_{P_w}(\{H\})$ therefore leaves $T$ invariant, and it is easily checked using \cite{Lev_automorphisms} that
${\mathrm{Mc}}_{P_w}(\{H\})=\calt= \langle\Phi_w\rangle$.
To prove the first assertion of the proposition, consider $R_w=P_w*\langle d\rangle\simeq F_4$, the family $\calh=\{H,\langle d\rangle\}$, and the McCool group
${\mathrm{Mc}}_{R_w}(\calh)\subset \Out(F_4)$.
The decomposition $R_w=P_w*\langle d\rangle$ is a Grushko decomposition of $R_w$ relative to
$\calh$ because $P_w$ is freely indecomposable relative to $H$.
This decomposition is invariant under ${\mathrm{Mc}}_{R_w}(\calh)$
because it is a one-edge splitting (see \cite{For_deformation}, cor 1.3).
The stabilizer $\Out(T)$ of the Bass-Serre tree $T$ in $\Out(R_w)$ is naturally isomorphic to
$$\Aut(P_w)\times \Aut(\grp{d})\simeq \Aut(P_w)\times{\mathbb {Z}}/2{\mathbb {Z}}$$
(see \cite{Lev_automorphisms});
the natural map $\Out(T)\ra \Out(P_w)$ kills the factor ${\mathbb {Z}}/2{\mathbb {Z}}$ and
coincides with the quotient map $\Aut(P_w) \ra \Out(P_w)$ on the other factor.
The McCool group ${\mathrm{Mc}}_{R_w}(\calh)$ is isomorphic to
the preimage of ${\mathrm{Mc}}_{P_w}(\{H\})=\grp{\Phi_w}$ in $\Aut(P_w)$,
hence to the mapping torus
$$Q_w= \langle a,b,t,u\mid ua=au, ub=bu, utu\m= a^kb^kt \rangle.$$
The abelianization of $Q_w$ is ${\mathbb {Z}}^3\times {\mathbb {Z}}/k{\mathbb {Z}}$, so the isomorphism type of $Q_w$ changes when $k$ varies. This proves the first assertion of the proposition for $n=4$. The extension to larger $n$ is again straightforward.
\end{proof}
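\begin{rem*} For the reader's convenience, here is the computation of the abelianization of $Q_w$. In $Q_w^{\mathrm{ab}}$ the relations $ua=au$ and $ub=bu$ become trivial, while $utu\m=a^kb^kt$ becomes $k\bar a+k\bar b=0$, so that $Q_w^{\mathrm{ab}}\simeq {\mathbb {Z}}^4/\langle (k,k,0,0)\rangle$ (with basis the images of $a,b,t,u$). Since $(k,k,0,0)=k\,(1,1,0,0)$ and $(1,1,0,0)$ is part of a basis of ${\mathbb {Z}}^4$, this quotient is isomorphic to ${\mathbb {Z}}^3\times {\mathbb {Z}}/k{\mathbb {Z}}$.
\end{rem*}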
\section{Appendix: groups with finitely many McCool groups
}
In this appendix we
describe cases when $\Out(G)$ only contains finitely many McCool subgroups. In particular, we
show that the values of $n$ given in Proposition \ref{infmc} are optimal.
\begin{prop} \label{mccoolfini} If $G$ is a torsion-free one-ended hyperbolic group, then $\Out(G)$ only contains finitely many McCool groups up to conjugacy.
\end{prop}
\begin{prop} \label{infmc2}
$\Out(F_2)$ only contains finitely many McCool groups up to conjugacy.
\end{prop}
\begin{prop} \label{infmc3} $\Out(F_3)$ only contains finitely many McCool groups up to isomorphism.
\end{prop}
The proof of Proposition
\ref{mccoolfini} requires the fact that $\Out(G)$, and more generally extended McCool groups $\widehat{\mathrm{Mc}}(\calc)$, only contain finitely many conjugacy classes of finite subgroups. This will appear in \cite{GL_extension}.
\begin{proof}[Proof of Proposition
\ref{mccoolfini}]
We assume that $\Out(G)$ contains infinitely many non-conjugate elementary McCool groups ${\mathrm{Mc}}(\calc_i)$, and we derive a contradiction
(this implies the proposition by Corollary \ref{genmc}).
It is proved in \cite[Corollary 4.9]{Sela_acylindrical}
that there are only finitely many minimal actions of $G$ on trees with cyclic edge stabilizers, up to the action of $\Out(G)$, so we may assume
that the canonical cyclic JSJ tree relative to $\calc_i$ (the tree $T_{\mathrm{can}}$ of Subsection \ref{jsj}) is a given tree $T$. This tree is invariant under all groups ${\mathrm{Mc}}(\calc_i)$, so ${\mathrm{Mc}}(\calc_i)\subset \Out(T)$. In this proof, we cannot restrict to $\Out^0(T)$.
Given a vertex $v$ of $T$, we define $\calc_{i,v}$
as the restriction $\calc_i{}_{ | G_v}$ if $G_v$ is cyclic,
as $\calc_i{}_{ | | G_v}$ if $G_v$ is not cyclic (recall from Subsection \ref{tre} that conjugacy classes represented by elements fixing an edge of $T$ do not belong to $\calc_i{}_{ || G_v}$). The tree being bipartite, $\calc_i$ is the disjoint union of the $\calc_{i,v}$'s.
We say that $v$ is used if
$\calc_{i,v}$ is non-empty. Since there are finitely many $G$-orbits of vertices, we may assume that usedness is independent of $i$; we let $V_u$ be a set of representatives of orbits of used vertices.
We may also assume that the type of vertices with non-cyclic stabilizer (rigid or QH) is independent of $i$ (QH vertices with $\Sigma$ a pair of pants are rigid; we do not consider them as QH).
We claim that QH vertices $G_v$ of $T$ are not used. Indeed,
any boundary subgroup of $G_v$ is an incident edge stabilizer
of $T$: otherwise, $G_v$ would split as a free product relative to $\mathrm{Inc}_v$, contradicting one-endedness of $G$. Elements in $\calc_i$ are universally elliptic (relative to $\calc_i$), and the only universally elliptic subgroups of $G_v$ are contained in boundary subgroups of $G_v$ because $G_v$ is flexible (see \cite{GL3a}, Proposition 7.6),
so $\calc_{i||G_v}$ is empty.
For $v\in V_u$, define $\Out_i(G_v)\subset\Out(G_v)$ as the set of automorphisms which fix each conjugacy class in $\calc_{i,v}$
and leave the set of incident edge stabilizers globally invariant.
Any automorphism in ${\mathrm{Mc}}(\calc_i)$
is an automorphism of $T$ which leaves $G_v$ invariant (up to conjugacy),
and induces an automorphism belonging to $\Out_i(G_v)$.
Conversely, any automorphism of $T$ satisfying these properties for every $v\in V_u$ lies in ${\mathrm{Mc}}(\calc_i)$.
This means that
${\mathrm{Mc}}(\calc_i)$ is completely determined by the knowledge of the groups $\Out_i(G_v)$, for $v\in V_u$.
We complete the proof by
showing that there are only finitely many possibilities for each $\Out_i(G_v)$.
This is clear if $G_v$ is cyclic, and QH vertices are not used, so it remains to consider the case where $G_v$ is rigid.
In this case, $\Out_i(G_v)$ is finite by Lemma \ref{lem_fini} (otherwise $G_v$ would have a cyclic splitting relative to $\mathrm{Inc}_v$ and $\calc_{i,v}$, contradicting rigidity).
Since $G_v$ is hyperbolic, $\Out(G_v)$ has finitely many conjugacy classes of finite subgroups \cite{GL_extension}.
We deduce that there are finitely many possibilities for $\Out_i(G_v)$, up to conjugacy in $\Out(G_v)$. Unfortunately, this is not enough to get finiteness for ${\mathrm{Mc}}(\calc_i)$ up to conjugacy in $\Out(G)$, because the conjugator may fail to extend to an automorphism of $G$.
To remedy this, we consider ${\mathrm{Mc}}(\mathrm{Inc}_v)$ and $\wh{\mathrm{Mc}}(\mathrm{Inc}_v)$, with $\mathrm{Inc}_v$ the family of incident edge groups as in Subsection \ref{tre}, and $\wh{\mathrm{Mc}}(\mathrm{Inc}_v)=\wh\Out(G_v;\mathrm{Inc}_v)$ the set of outer automorphisms of $G_v$ preserving $\mathrm{Inc}_v$ (see Definition \ref{pres}; edge groups may be permuted, and the generator of an edge group may be mapped to its inverse).
The group $\Out_i(G_v)\subset\Out(G_v)$ is finite and contained in
$\wh{\mathrm{Mc}}(\mathrm{Inc}_v)$ (but not necessarily in ${\mathrm{Mc}}(\mathrm{Inc}_v)$).
By \cite{GL_extension}, $\wh{\mathrm{Mc}}(\mathrm{Inc}_v)$ has only finitely many conjugacy classes of finite subgroups.
It follows that there are only finitely many possibilities for $\Out_i(G_v)$ up to conjugation by an element of $\wh{\mathrm{Mc}}(\mathrm{Inc}_v)$,
hence also up to conjugation by an element of ${\mathrm{Mc}}(\mathrm{Inc}_v)$
since ${\mathrm{Mc}}(\mathrm{Inc}_v)$ has finite index in $\wh{\mathrm{Mc}}(\mathrm{Inc}_v)$.
We may therefore assume that
$\Out_i(G_v)$ is independent of $i$ if $G_v$ is cyclic and $v\in V_u$, and that all groups $\Out_i(G_v)$ are conjugate by elements of ${\mathrm{Mc}}(\mathrm{Inc}_v)$ if $v\in V_u$ is rigid.
Any element of ${\mathrm{Mc}}(\mathrm{Inc}_v)$ extends ``by the identity'' to an automorphism of $G$ which leaves $T$ invariant and acts trivially
(as conjugation by an element of $G$) on $G_w$ if $w$ is not in the orbit of $v$.
Since
${\mathrm{Mc}}(\calc_i)$ is determined by the groups $\Out_i(G_v)$ for $v\in V_u$, we conclude that all groups ${\mathrm{Mc}}(\calc_i)$ are conjugate in $\Out(G)$.
\end{proof}
\renewcommand {\calc} {{\mathcal {H}}}
\begin{proof}[Proof of Proposition \ref{infmc2}]
We view $\Out(F_2)\simeq GL(2,{\mathbb {Z}})$ as the mapping class group of a punctured torus $\Sigma$ (with orientation-reversing maps allowed). Let $c$ be a peripheral conjugacy class (representing the commutator of basis elements of $F_2$).
We consider a McCool group ${\mathrm{Mc}}(\calc)\subset\Out(F_2)$. We may assume that ${\mathrm{Mc}}(\calc)$ is infinite.
By the classification of elements of $GL(2,{\mathbb {Z}})$, or by the Bestvina-Paulin method and Rips theory, $F_2$ then splits over a cyclic group relative to $\calc$ and $c$ (see for instance Theorem 3.9 of \cite{GL6}). Such a splitting is dual to a non-peripheral simple closed curve $\gamma \subset \Sigma$.
If there are two different splittings, they are dual to curves $\gamma,\gamma'$
whose union fills $\Sigma$, so $\calc$ only contains peripheral subgroups.
It follows that ${\mathrm{Mc}}(\calc)$ is either $\Out(F_2)\simeq GL(2,{\mathbb {Z}})$ or $SL(2,{\mathbb {Z}})$.
If the splitting is unique, ${\mathrm{Mc}}(\calc)$ fixes $\gamma$ (viewed as an unoriented curve up to isotopy).
Since the splitting dual to $\gamma$ is relative to $\calc$, the Dehn twist $T_\gamma$ around $\gamma$ is contained in
${\mathrm{Mc}}(\calc)$.
The stabilizer $\Stab(\gamma)$ of $\gamma$ in the mapping class group
contains
$\langle T_\gamma\rangle$ with finite index (the index is 4 because a
homeomorphism may reverse the orientation of $\Sigma$ and/or
of $\gamma$).
We thus have $\langle T_\gamma\rangle\subset {\mathrm{Mc}}(\calc)\subset \Stab(\gamma)$, with both indices finite.
Finiteness of ${\mathrm{Mc}}(\calc)$ up to conjugacy follows, since $\gamma$ is unique up to the action of the mapping class group.
\end{proof}
The remainder of this appendix is devoted to the proof of Proposition \ref{infmc3}.
We first record a few useful facts.
\begin{lem}\label{vr}
Fix $n$.
Up to isomorphism, $\Out(F_n)$ only contains finitely many virtually solvable subgroups.
\end{lem}
\begin{proof}
Virtually solvable subgroups are virtually abelian (\cite{Ali_translation,BFH_solvable}). More precisely, they contain ${\mathbb {Z}}^k$ with $k\le2n-3$ as a subgroup of bounded index (see \cite{BFH_solvable}, proof of Theorem 1.1 page 94). This implies finiteness, for instance by \cite[Th.\ 6 p.\ 176]{Segal_livre}.
\end{proof}
\begin{lem} \label{coh}
Let $A$ be virtually cyclic, and $B$ be virtually $F_n$ for some $n$. Up to isomorphism, there are only finitely many groups which are extensions of $A$ by $B$.
\end{lem}
\begin{proof}
This follows from standard extension theory (\cite{Brown_cohomology}, sections III.10 and IV.6), noting that $\Out(A)$ is finite and $B$ has a finite index subgroup with trivial $H^2$.
\end{proof}
Now consider a McCool group ${\mathrm{Mc}}(\calc)\subset\Out(F_3)$. The first step is to reduce to the case where $F_3$ is freely indecomposable relative to $\calc$. If this does not hold, let $\Gamma$ be a Grushko decomposition relative to $\calc$ (see Subsection \ref{jsj}). It is not unique, we choose one with as few edges as possible.
If all vertex groups are cyclic, groups
in $\calc$ are
generated (up to conjugacy) by powers of elements belonging to some fixed basis of $F_3$, and finiteness holds. Otherwise, there is a vertex group $G_v\simeq F_2$. Our choice of $\Gamma$ implies that $\Gamma$ has a single edge (it is an HNN extension, or an amalgam $F_2*{\mathbb {Z}}$ with a finite index subgroup of ${\mathbb {Z}}$ belonging to $\calc$). It follows that $\Gamma$ is ${\mathrm{Mc}}(\calc)$-invariant (\cite{For_deformation,Lev_rigid}), and ${\mathrm{Mc}}(\calc)$ is determined by its image in $\Out(F_2)$. This image is the McCool group ${\mathrm{Mc}}(\calc_{ | F_2})$, so finiteness follows from Proposition \ref{infmc2}.
We continue the proof under the assumption that $F_3$ is freely indecomposable relative to $\calc$. Let $\Gamma_{\mathrm{can}}$ be the canonical ${\mathrm{Mc}}(\calc)$-invariant cyclic JSJ decomposition relative to $\calc$
(see Subsection \ref{jsj}). Vertex groups $G_v$ are cyclic, rigid, or QH.
One easily checks the formula $\sum_v (\mathrm{rk}\,G_v -1) =2$.
In particular, $\mathrm{rk}\, G_v\leq 3$ for all $v$, and if some $G_v$ is isomorphic to $F_3$, then all other
vertex groups are cyclic.
If $G_v\simeq\pi_1(\Sigma)$ is a QH vertex group, it is isomorphic to $F_2$ or $F_3$, so there are 9 possibilities for the compact surface $\Sigma$:
\setlist{topsep=0mm,itemsep=.0mm,parsep=.5mm}
\begin{enumerate}
\item pair of pants
\item sphere with 4 boundary components
\item projective plane with 2 boundary components
\item projective plane with 3 boundary components
\item torus with 1 boundary component
\item torus with 2 boundary components
\item Klein bottle with 1 boundary component
\item Klein bottle with 2 boundary components
\item non-orientable surface of genus 3 with 1 boundary component.
\end{enumerate}
Each incident edge group $G_e$ is (up to conjugacy) a boundary subgroup of $\pi_1(\Sigma)$.
Conversely, there are two possibilities for a boundary subgroup $C$. If it is an incident edge group, it equals $G_e$ for a unique incident edge. If not, we say that the corresponding boundary component of $\Sigma$ is \emph{free}; in this case some finite index subgroup of $C$ belongs to $\calc$.
As in Subsection \ref{automs}, the finite index subgroup ${\mathrm{Mc}}^0(\calc)$ of ${\mathrm{Mc}}(\calc)$ acting trivially on $\Gamma_{\mathrm{can}}$ maps to $\prod_v \Out(G_v)$ with kernel the group of twists $\calt$. The image in $\Out(G_v)$ is finite if $G_v$ is cyclic or rigid, virtually the mapping class group of $\Sigma$ if $G_v$ is QH, and $\calt$ is isomorphic to some ${\mathbb {Z}}^k$ (see Subsection 4.3 of \cite{GL6}).
By mapping class group, we mean the group of isotopy classes of homeomorphisms of a compact surface $\Sigma$ mapping each boundary component to itself in an orientation-preserving way.
We denote it by $\calp\calm^+(\Sigma)$ as in Subsection \ref{jsj}.
By Lemma \ref{vr}, we may assume that there is a QH vertex $v$ with $\calp\calm^+(\Sigma)$ non-solvable. As explained above, there are 9 possibilities for $\Sigma$.
Cases 1, 3, 7 are ruled out
because $\calp\calm^+(\Sigma)$ is virtually cyclic (see \cite{Szepietowski_presentation}, or argue as in the proof of Proposition \ref{infmc2}, noting that a finite index subgroup of $\calp\calm^+(\Sigma)$ fixes a conjugacy class of $F_2$ which is not a power of the commutator).
If $\Gamma_{\mathrm{can}}$ is trivial (i.e.\ if the QH subgroup $G_v $ is the whole group),
${\mathrm{Mc}}(\calc)$ is the mapping class group of $\Sigma$. We therefore assume that $\Gamma_{\mathrm{can}}$ is non-trivial.
\begin{lem} \label{ol}
If $G_v$ has rank 3,
then $\Sigma$ has a free boundary component.
\end{lem}
\begin{proof}
This follows from Lemma 4.1 of \cite{BF_outer}, a generalization of the standard fact that a cyclic amalgam $A*_{\langle c\rangle}B$ of free groups is free only if $c$ belongs to a basis in $A$ or $B$.
\end{proof}
This lemma rules out case 9.
Now suppose that all vertices of $\Gamma_{\mathrm{can}}$ other than $v$ are terminal vertices carrying ${\mathbb {Z}}$ (by Lemma \ref{ol}, this holds in cases 6 and 8). In this case the group of twists $\calt$ is trivial (see Proposition 3.1 of \cite{Lev_automorphisms}). The group ${\mathrm{Mc}}(\calc)$ contains $\calp\calm^+(\Sigma)$ with finite index, and there are finitely many possibilities: they depend on whether edges of $\Gamma_{\mathrm{can}}$ may be permuted, and whether elements in edge groups may be mapped to their inverse.
We must now deal with cases 2, 4, 5. We start with 4. The only possibility left is that $\Gamma_{\mathrm{can}}$ has two vertices $v,w$ joined by 2 edges, with $G_w$
cyclic.
Every automorphism leaving $\Gamma_{\mathrm{can}}$ invariant maps $G_v$ to itself (up to conjugacy), and
we consider the natural map from ${\mathrm{Mc}}(\calc)$ to $\Out(G_v)$. As above, the image contains $\calp\calm^+(\Sigma)$ with finite index, and there are finitely many possibilities. The kernel is the group of twists $\calt$, which is isomorphic to $ {\mathbb {Z}}$.
Since $\calp\calm^+(\Sigma)$ is isomorphic to $F_3$ by Theorem 7.5 of \cite{Szepietowski_presentation}, we conclude by Lemma \ref{coh}.
The argument in case 2 is similar. Besides $v$ and $w$, there may be another vertex $w'$, with $G_{w'}$ cyclic and a single edge between $v$ and $w'$. The group $\calp\calm^+(\Sigma)$ is again free, it is isomorphic to $F_2$ (see for instance \cite{FaMa_primer}, 4.2.4).
In case 5 (once-punctured torus), there is a single edge incident to $v$. Collapsing all other edges yields an ${\mathrm{Mc}}(\calc)$-invariant decomposition as an amalgam $F_3=G_v*_{\langle a\rangle}G_w$ with $G_w\simeq F_2$. By the standard fact recalled above, $a$ belongs to a basis of $G_w$ (and is equal to a commutator in $G_v$). The group ${\mathrm{Mc}}(\calc)$ acts trivially on the graph underlying this amalgam, and
the map $\rho$ (see Subsection \ref{automs}) maps ${\mathrm{Mc}}(\calc)$ to $\Out(G_v)\times \Out(G_w)$, with kernel the group of twists $\calt$, isomorphic to ${\mathbb {Z}}$. The image in $\Out(G_v)$ is isomorphic to $GL(2,{\mathbb {Z}})$ or $SL(2,{\mathbb {Z}})$.
We now consider the image $L$ of ${\mathrm{Mc}}(\calc)$ in $\Out(G_w)$.
It preserves the conjugacy class of $\grp{a}$.
If $L$ is finite (necessarily of order $\le 6$), then ${\mathrm{Mc}}(\calc)$ maps onto $GL(2,{\mathbb {Z}})$ or $SL(2,{\mathbb {Z}})$ with virtually cyclic kernel $K$; there are finitely many possibilities for $K$ up to isomorphism (it maps to $L$ with cyclic kernel), and we conclude by Lemma \ref{coh}.
As explained in the proof of Proposition \ref{infmc2}, if $L$ is infinite, it
is virtually cyclic, contains a ``Dehn twist'' $T_a$, and has index at most 4 in the stabilizer of the conjugacy class of $\grp{a}$ in $\Out(G_w)$.
Since ${\mathrm{Mc}}(\calc)$ is determined by its image in $\Out(G_v)\times \Out(G_w)$, and this image contains $SL(2,{\mathbb {Z}})\times \langle T_a\rangle$, this leaves only finitely many possibilities.
\small
\begin{flushleft}
Vincent Guirardel\\
Institut de Recherche Math\'ematique de Rennes\\
Membre de l'institut universitaire de France\\
Universit\'e de Rennes 1 et CNRS (UMR 6625)\\
263 avenue du G\'en\'eral Leclerc, CS 74205\\
F-35042 RENNES C\'edex\\
\emph{e-mail:} \texttt{[email protected]}\\[8mm]
Gilbert Levitt\\
Laboratoire de Math\'ematiques Nicolas Oresme\\
Universit\'e de Caen et CNRS (UMR 6139)\\
BP 5186\\
F-14032 Caen Cedex\\
France\\
\emph{e-mail:} \texttt{[email protected]}\\
\end{flushleft}
\end{document}
\begin{document}
\title{Isospectral domains for discrete elliptic operators\small\footnote{
This work has been partially supported by GNCS-INDAM (Gruppo Nazionale
Calcolo Scientifico - Istituto Nazionale di Alta Matematica).
}}
\author{Lorella Fatone$^{1}$, Daniele Funaro$^{2}$}
\date{}
\maketitle
\centerline{$^{1}$\small Dipartimento di Matematica e Informatica }
\centerline{\small Universit\`a di Camerino, Via Madonna delle Carceri 9, 62032
Camerino (Italy)}
\centerline{$^{2}$\small Dipartimento di Fisica, Informatica e Matematica }
\centerline{\small Universit\`a di Modena e Reggio Emilia, Via Campi 213/B, 41125
Modena (Italy)}
\begin{abstract}
Concerning the Laplace operator with homogeneous Dirichlet
boundary conditions, the classical notion of {\sl isospectrality} assumes
that two domains are related when they give rise to the same spectrum. In two dimensions, non isometric,
isospectral domains exist. It is not known however if all the eigenvalues relative
to a specific domain can be preserved under suitable continuous deformation of its geometry.
We show that this is possible when the 2D Laplacian is replaced by a finite dimensional
version and the geometry is modified by respecting certain constraints. The
analysis is carried out in a very small finite dimensional space, but it can
be extended to more accurate finite-dimensional representations of the 2D Laplacian, with an increase of
computational complexity. The aim of this paper is to introduce the preliminary
steps in view of more serious generalizations.
\end{abstract}
\section{Introduction}\label{sec1}
Consider the Laplace problem in 2D with homogeneous
Dirichlet boundary conditions defined in an open set with regular boundary
(see \cite{grebe} for a general overview).
It is known that there exist distinct (non isometric) domains for which all the (infinitely many) eigenvalues of the Laplace operator coincide (see, e.g., Fig.~\ref{fig1}).
For this reason,
these are called isospectral domains. It can be shown that two isospectral
domains have the same area. It is not known however
if it is possible to connect with continuity two isospectral domains
through a sequence of domains, by preserving the whole spectrum.
Partial answers can be given by working in a finite dimensional environment.
Here, we take a suitable approximation of the Laplace operator corresponding
to a negative-definite matrix. By varying the domain,
we are interested in detecting those deformations that preserve the entire set of eigenvalues (which are now finite in number). At the same time, not
all the possible deformations are allowed, but only those belonging
to a finite dimensional space of parameters. The problem turns out to
be far from easy. Indeed, already at dimension 4, things get rather involved.
The question examined here concerns the deformation
of quadrilateral domains, with the aim of preserving the eigenvalues
of discrete operators obtained from collocation of the Laplace problem
using polynomials of degree 3 in each variable. By imposing homogeneous Dirichlet boundary conditions, the discretization matrix ends up being only of dimension $4\times 4$.
The vertices of the domains are then suitably moved by maintaining the
magnitude of the four corresponding eigenvalues. The results show that,
at least in these simplified circumstances, families of isospectral domains
exist and can be connected by continuous transformations.
The most straightforward (but extremely expensive) approach is to try all the possible allowed configurations and to sort them out by comparing the spectra thus obtained.
Upgraded versions consist in moving the vertices along curves, whose tangent
is obtained as an application of the {\sl Implicit Function} theorem due to U. Dini (see, for example, \cite{rudin}).
Here, one computes
with the help of symbolic manipulation the partial derivatives, with respect to the
various parameters, of the coefficients of the
characteristic polynomial of the matrix representing the discrete operator.
Different, more or less efficient, approaches have been tested.
In any case, despite its simple formulation, the problem looks rather complex, and the
extension to higher dimensions or to more complicated domains looks
at the moment quite unrealistic.
\section{Preliminary settings}\label{sec2}
For the convenience of the reader we briefly review some results about the classical eigenvalue problem for
the Laplace operator:
$\Delta= \frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}$,
in an open set $\Omega$, when homogeneous Dirichlet boundary conditions are imposed on the
boundary $\partial \Omega$. The problem is formulated as follows:
\begin{eqnarray}
& -\Delta u(x,y)=\lambda u(x,y), \qquad \ &( x, y) \ \in \ \Omega, \label{lapl1} \\
& u(x,y)=0 ,\qquad \ &( x, y) \ \in \ \partial \Omega. \label{laplBC1}
\end{eqnarray}
It is known (see, e.g.: \cite{grebe}) that the spectrum of minus the Laplace operator
is discrete, that the eigenvalues are non negative and can be ordered in
ascending order to form a divergent sequence:
\begin{eqnarray} \label{aut1}
&& 0 < \lambda_{1} < \lambda_{2} \le \lambda_{3} \le \ldots \nearrow \infty ,
\end{eqnarray}
with possible multiplicities. When $\Omega=Q=]0,1[\times ]0,1[$ the eigenvalues are:
\begin{eqnarray} \label{aut1Q}
&& \lambda =\pi^{2} (m^2+n^2), \quad m,n=1,2,3,4, \ldots.
\end{eqnarray}
Weyl's law establishes an estimate of the
$m$-th eigenvalue in terms of the area $\mu_2 (\Omega )$ of the domain $\Omega$
where the eigenvalue problem is defined:
\begin{eqnarray}
\frac{\lambda_m}{m} \rightarrow \frac{4\pi }{\mu_2 (\Omega )}, \quad {\rm for } \ m \rightarrow \infty .
\label{weyl}
\end{eqnarray}
This relation leads us to the following consequence. For a given $\lambda$, one
denotes by $N(\lambda )$ the number of eigenvalues smaller than $\lambda$.
Then, Weyl's theorem states that:
\begin{eqnarray}
N( \lambda) \approx \frac{\mu_2 (\Omega )}{4\pi}\lambda.
\label{weyl2}
\end{eqnarray}
It turns out that, if two distinct domains produce the same set of
eigenvalues (i.e., they are isospectral), they must have the same area.
A refinement of property (\ref{weyl2}) leads to the relation:
\begin{eqnarray}
N( \lambda) \approx \frac{\mu_2 (\Omega )}{4\pi}\lambda -\frac{\mu_1 (\partial\Omega )}
{4\pi}\sqrt{\lambda} ,
\label{weyl3}
\end{eqnarray}
known as Weyl's conjecture, which also involves the length $\mu_1 (\partial\Omega )$ of the boundary $\partial\Omega$.
This means that, provided the conjecture is verified, isospectral domains
also share the same perimeter.
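As a quick numerical illustration, for the unit square $Q$ one has $\mu_2 (Q)=1$ and $\mu_1 (\partial Q)=4$, and the counting function obtained from (\ref{aut1Q}) can be compared with the predictions (\ref{weyl2}) and (\ref{weyl3}). A minimal Python sketch, given here only as a sanity check, is the following:
\begin{verbatim}
import numpy as np

lam = 1.0e5                                   # a moderately large value of lambda
# eigenvalues of -Delta on the unit square: pi^2 (m^2 + n^2), m, n >= 1
mmax = int(np.sqrt(lam) / np.pi) + 1
m, n = np.meshgrid(np.arange(1, mmax + 1), np.arange(1, mmax + 1))
N = np.count_nonzero(np.pi**2 * (m**2 + n**2) <= lam)

weyl1 = lam / (4 * np.pi)                         # leading term, as in (weyl2)
weyl2 = lam / (4 * np.pi) - np.sqrt(lam) / np.pi  # two-term law (weyl3), perimeter = 4

print(N, weyl1, weyl2)   # N is close to weyl1, and even closer to weyl2
\end{verbatim}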
The existence of (non isometric) isospectral domains has been established quite
recently. The problem was first formulated in \cite{kac}, where the author
was wondering if the eigenspectrum of the Laplacian was sufficient to detect
the shape of the domain (put in other words: if one can {\sl hear} the shape of a drum).
In the 2D plane, the negative answer appeared in \cite{gordon}, where distinct domains (see, for example, Fig.~\ref{fig1}) exhibiting identical sets of eigenvalues were proposed. Preliminary results in this direction were
investigated in \cite{sunada}.
Successive examples have been discussed for instance in \cite{buser} and
\cite{chapman}. Nowadays, wide classes of isospectral domains are available.
For the sake of brevity, we address the reader to the specialized literature for more insight.
Some facts are still not known however, as for instance the existence of 2D isospectral
domains of convex type, or the possibility to vary with continuity the shape
of a given domain maintaining at the same time its entire spectrum. This last
property is the one we are going to investigate in this paper, when the Laplace operator
is substituted by a very rough finite-dimensional version. In fact, in this paper we want to see
if, under suitable simplified hypotheses, it is possible to connect with continuity two isospectral domains
through a sequence of domains, by preserving the whole spectrum.
Our first goal is to define a general set of quadrilateral domains. Afterwards,
starting from a given member of the family, we will try to find
other members that are isospectral to it. The notion of isospectrality is
decided according to a finite-dimensional elliptic operator that is
going to be introduced in Sect.~\ref{sec3}. First of all, we need to exclude, as much as possible, the chance of having isometric domains in the family, i.e.,
quadrilaterals that can be related through elementary operations, such
as translation, rotation and symmetry. Of course, two isometric domains
are automatically isospectral and we would like to avoid such trivial connections.
\begin{figure}
\caption{Two isospectral domains in $\mathbb{R}^2$.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{(a) The unit square $Q$. (b) The generic quadrilateral $\hat{Q}$.}
\label{fig2}
\end{figure}
From now on we assume that one of the sides of our quadrilaterals is ``nailed'' to the $x$-axis of the plane. Moreover, in what follows we shall avoid those cases in which a pair of sides intersects at some internal point.
Let $n$ be a positive integer, $ \mathbb{R}$ be the set of real numbers, $ \mathbb{R}^{n}$ be the $n$-dimensional real Euclidean space.
Let $Q \subset \mathbb{R}^{2}$ be the unit square in the space $\mathbb{R}^{2}$, i.e.,
$Q$ is the quadrilateral of vertices $V_{1}=(0,0)$, $V_{2}=(1,0)$, $V_{3}=(0,1)$, $V_{4}=(1,1)$
(see Fig.~\ref{fig2}).
Given the real parameters $ \alpha ,\beta, \gamma , \delta$, let
$\hat{Q} \subset \mathbb{R}^{2}$ be the generic quadrilateral of our family, having
vertices $\hat{V}_{1}=(0,0)$, $\hat{V}_{2}=(1,0)$, $\hat{V}_{3}=(\alpha ,\beta)$, $\hat{V}_{4}=(\gamma ,\delta)$
(see again Fig.~\ref{fig2}). The domains $Q$ and $ \hat{Q}$ are open.
In the plane with coordinates $(\hat{x},\hat{y})$, we now focus our attention on the eigenvalue problem
for the Laplace operator defined in $\Omega =\hat{Q}$, with homogeneous Dirichlet boundary conditions on the piecewise
smooth boundary $\partial \hat{Q}$. Translating into formulas, we have:
\begin{eqnarray}
& -{\Delta} \hat{u} (\hat x, \hat y) ={\lambda} \hat{u} (\hat x, \hat y), \qquad \ & (\hat x, \hat y) \ \in \ \hat{Q}, \label{lapl2} \\
&\hat{u} (\hat x, \hat y)=0 , \qquad \ & (\hat x, \hat y) \ \in \ \partial \hat{Q}, \label{laplBC2}
\end{eqnarray}
where $\displaystyle{\Delta}= \frac{\partial^{2}}{\partial \hat{x}^{2}}+\frac{\partial^{2}}
{\partial \hat{y}^{2}}$.
For convenience, let us map problem (\ref{lapl2}), (\ref{laplBC2}) into the following
modified version, defined in the square $Q$:
\begin{eqnarray}
& L u ( x, y) ={\lambda} u ( x, y) , \qquad \ &( x, y) \ \in \ Q, \label{lapl3} \\
& u ( x, y) =0 , \qquad \ &( x, y) \ \in \ \partial Q, \label{laplBC3}
\end{eqnarray}
where $L$ turns out to be a suitable positive-definite elliptic operator that we are going to define
here below.
To this end, let us examine the transformation $\hat x=\theta_1 (x,y)$, $\hat y=\theta_2 (x,y)$ that
allows us to bring the operator $\Delta$ into $L$.
First of all let us transform a general quadrilateral $ \hat{Q}$
into the reference square $Q$. We use a classical invertible mapping $\theta: Q \rightarrow \hat{Q}$
consisting of polynomials of degree one in each variable. This relates the ordered points
$V_{1}$, $V_{2}$, $V_{3}$, $V_{4}$ of $Q$ with the ordered points
$\hat{V}_{1}$, $\hat{V}_{2}$, $\hat{V}_{3}$, $\hat{V}_{4}$ of $ \hat{Q}$ (see Fig.~\ref{fig2}).
The two components of the transformation $\theta=(\theta_{1},\theta_{2})$ are given by:
\begin{eqnarray}
&& \theta_{1}: \hskip1.2truecm \hat{x}=x+\alpha y +(\gamma-1-\alpha) x y, \label{trasf1} \\
&& \theta_{2}: \hskip1.2truecm \hat{y}=\beta y +(\delta-\beta) x y.\label{trasf2}
\end{eqnarray}
Thus, a function $\hat u$ defined in $\hat{Q}$ is associated with the function $u=\hat u(\theta)$ defined in $Q$.
By applying the change of variables to the Laplace operator $\Delta$ we arrive at
the eigenvalue problem (\ref{lapl3}), (\ref{laplBC3}) where the operator $L$ is defined as follows (see \cite{miolibro2}):
\begin{eqnarray}\label{defL}
&&\hskip-1truecm L =f_{1} \frac{\partial^{2}}{\partial x^{2}} + f_{2} \frac{\partial^{2}}{\partial x \partial y} + f_{3} \frac{\partial^{2}}{\partial y^{2}} +
f_{4} \frac{\partial }{\partial x} + f_{5} \frac{\partial }{\partial y},
\end{eqnarray}
and the coefficients of $L$ in (\ref{defL}) are given by:
\begin{eqnarray}\label{fi}
&& \hskip-0.7truecm f_{1}= -\frac{1}{\sigma^{2}} \left[ \left( \frac{\partial\theta_{1}}{\partial y} \right)^{2}+
\left( \frac{\partial \theta_{2}}{\partial y} \right)^{2}
\right], \quad
f_{2}= \frac{2}{\sigma^{2}} \left[ \frac{\partial\theta_{1}}{\partial x} \frac{\partial\theta_{1}}{\partial y} +
\frac{\partial\theta_{2}}{\partial x} \frac{\partial\theta_{2}}{\partial y}
\right], \nonumber\\[3mm]
&&\hskip-0.7truecm f_{3}= -\frac{1}{\sigma^{2}} \left[ \left( \frac{\partial\theta_{1}}{\partial x} \right)^{2}+
\left( \frac{\partial \theta_{2}}{\partial x} \right)^{2}
\right],\quad
f_{4}= \frac{ f_{2}}{\sigma} \left[ \frac{\partial\theta_{1}}{\partial y} \frac{\partial^{2}\theta_{2}}{\partial x \partial y} -
\frac{\partial\theta_{2}}{\partial y} \frac{\partial^{2}\theta_{1}}{\partial x \partial y}
\right],\\[3mm]
&& \hskip-0.7truecm f_{5}= \frac{ f_{2}}{\sigma} \left[ \frac{\partial\theta_{2}}{\partial x} \frac{\partial^{2}\theta_{1}}{\partial x \partial y} -
\frac{\partial\theta_{1}}{\partial x} \frac{\partial^{2}\theta_{2}}{\partial x \partial y}
\right], \nonumber
\end{eqnarray}
where
\begin{eqnarray}
&& \sigma= \frac{\partial\theta_{1}}{\partial x} \frac{\partial\theta_{2}}{\partial y} - \frac{\partial\theta_{1}}{\partial y} \frac{\partial\theta_{2}}{\partial x},
\end{eqnarray}
is the determinant of the Jacobian of $\theta$.
In what follows, instead of solving the eigenvalue problem directly on $\hat{Q}$, we will find it more convenient to deal with the transformed eigenvalue problem defined in $Q$. That is, instead of problem (\ref{lapl2}), (\ref{laplBC2}), we consider problem (\ref{lapl3}), (\ref{laplBC3}), where $L$ is defined in (\ref{defL}). It is straightforward to check that $L=-\Delta$ when $\hat{Q}=Q$.
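The coefficients (\ref{fi}) are easily generated by symbolic manipulation. The following minimal \texttt{sympy} sketch simply transcribes (\ref{trasf1}), (\ref{trasf2}) and (\ref{fi}), and checks that for $\alpha=0$, $\beta=\gamma=\delta=1$ (i.e., $\hat{Q}=Q$) one indeed recovers $f_1=f_3=-1$ and $f_2=f_4=f_5=0$:
\begin{verbatim}
import sympy as sp

x, y, a, b, g, d = sp.symbols('x y alpha beta gamma delta')

theta1 = x + a*y + (g - 1 - a)*x*y              # (trasf1)
theta2 = b*y + (d - b)*x*y                      # (trasf2)

sigma = (sp.diff(theta1, x)*sp.diff(theta2, y)
         - sp.diff(theta1, y)*sp.diff(theta2, x))

f1 = -(sp.diff(theta1, y)**2 + sp.diff(theta2, y)**2) / sigma**2
f2 = 2*(sp.diff(theta1, x)*sp.diff(theta1, y)
        + sp.diff(theta2, x)*sp.diff(theta2, y)) / sigma**2
f3 = -(sp.diff(theta1, x)**2 + sp.diff(theta2, x)**2) / sigma**2
f4 = (f2/sigma)*(sp.diff(theta1, y)*sp.diff(theta2, x, y)
                 - sp.diff(theta2, y)*sp.diff(theta1, x, y))
f5 = (f2/sigma)*(sp.diff(theta2, x)*sp.diff(theta1, x, y)
                 - sp.diff(theta1, x)*sp.diff(theta2, x, y))

# identity configuration Q-hat = Q: the operator L reduces to -Delta
subs0 = {a: 0, b: 1, g: 1, d: 1}
print([sp.simplify(f.subs(subs0)) for f in (f1, f2, f3, f4, f5)])
# expected output: [-1, 0, -1, 0, 0]
\end{verbatim}
For a generic choice of the parameters, the same script returns the coefficients $f_i$ as rational functions of $x$, $y$, $\alpha$, $\beta$, $\gamma$, $\delta$, ready to be evaluated at the collocation points.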
As explained with more details in Sect.~\ref{sec4}, it is also convenient
to introduce a new parameter $c>0$. Through the homothety centered at the origin $(0,0)$, we associate
a generic point $\hat p$ of the plane $(\hat x, \hat y)$ to the point $\sqrt{c} \ \hat p$.
In this way, any given eigenvalue $\lambda$ of $\Delta$ in the domain $\hat{Q}$ takes
the form $\lambda /c$ in the new domain $\sqrt{c} \ \hat{Q}$. By this trick, without too much effort, we can
enlarge the set of quadrilaterals at our disposition.
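Indeed, if $-\Delta \hat{u}=\lambda \hat{u}$ on $\hat{Q}$ and we set $\hat{v} (\hat p)=\hat{u} (\hat p/\sqrt{c})$ on $\sqrt{c} \ \hat{Q}$, then
\begin{eqnarray*}
&& -\Delta \hat{v} (\hat p)=-\frac{1}{c} \, (\Delta \hat{u})(\hat p/\sqrt{c})=\frac{\lambda}{c} \, \hat{v} (\hat p),
\end{eqnarray*}
so that the spectrum of $\sqrt{c} \ \hat{Q}$ is exactly the spectrum of $\hat{Q}$ divided by $c$.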
\section{The discrete operator}\label{sec3}
In the referring domain $Q$ we now build a discrete elliptic operator
corresponding to a very rough approximation of the operator $L$ defined
in (\ref{defL}). Indeed,
we do not want to handle too many eigenvalues. The reason is that the
family of quadrilateral domains introduced in the previous section only
depends on five degrees of freedom (that is $\alpha$, $\beta$, $\gamma$, $\delta$, $c$).
Therefore, the isospectrality will be
judged on the basis of very few eigenvalues. Our discretized operators
correspond to $4\times 4$ matrices, so that we can only count on four eigenvalues. This is the maximum number that allows us to find
continuously connected subfamilies of isospectral domains. In fact,
having more eigenvalues to handle can lead to an unsolvable problem, since
we do not have enough parameters to deform the domains. Thus, we
will look for a curve in the five-dimensional space, in such a way that four nonlinear equations (expressing the coincidence of the eigenvalues with those of an initial given quadrilateral) are simultaneously satisfied.
A first basic approach consists in implementing the classical finite difference method to approximate
the operator $L$ defined in (\ref{defL}) and related to the
eigenvalue problem (\ref{lapl3}), (\ref{laplBC3}). We recall that, when the quadrilateral
$\hat{Q}$ coincides with the square $Q$, one has $L=-\Delta$, $\hat u=u$ and the corresponding
eigenvalue problem is given by (\ref{lapl1}), (\ref{laplBC1}).
We take a uniform grid with step size $h$ in the unit square $Q\cup\partial Q$, both in the $x$ and $ y$ directions.
In particular, we divide the interval $[0,1]$ into three equal subintervals of length $\frac13$.
The corresponding grid points are defined by:
\begin{equation}\label{gridDF}
G: \quad \quad ( x_{i}, y_{j})=(hi ,hj) = (\textstyle{\frac13} i, \textstyle{\frac13}j), \quad i,j=0,1,2,3.
\end{equation}
Let $p_{1}=\left(\frac13, \frac13 \right)$, $p_{2}=\left(\frac23, \frac13 \right)$,
$p_{3}=\left(\frac13, \frac23 \right)$, $p_{4}=\left(\frac23, \frac23 \right)$ be the internal points of the grid $G$.
In these circumstances using centered finite differences and taking into account the vanishing Dirichlet boundary condition
on the boundary of $Q$ (see (\ref{laplBC3})),
we have that the differential
operators $\frac{\partial^2}{\partial x^2}$, $\frac{\partial^2}{\partial x \partial y}$, $\frac{\partial^2}{\partial y^2}$,
$\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$,
can be, respectively, approximated with the discrete operators
$D_{xx,G}$,
$D_{xy,G}$,
$D_{yy,G}$,
$D_{x,G}$ and
$D_{y,G}$
given by the following $4\times 4$ matrices:
\begin{eqnarray}
&&\hskip-0.8truecm D_{xx,G} =
9 \begin{pmatrix}
-2 & \ 1 \ & 0 & 0 \\
\ 1 \ & -2 & 0 & 0 \\
0 & 0 & -2 & \ 1 \ \\
0 & 0 & \ 1 \ & -2 \\
\end{pmatrix}, \qquad
D_{xy,G}=
\frac94 \begin{pmatrix}
0 & 0 & 0 & \ 1 \ \\
0 & 0 & \ -1 \ & 0 \\
0 & \ -1 \ & 0 & 0 \\
\ 1 \ & 0 & 0 & 0 \\
\end{pmatrix}, \nonumber
\end{eqnarray}
\begin{eqnarray} \label{matrici}
&&\hskip-0.8truecm
D_{yy,G}=
9 \begin{pmatrix}
-2 & 0 &\ 1 \ & 0 \\
0 & -2 & 0 & \ 1 \ \\
\ 1 \ & 0 & -2 & 0 \\
0 & \ 1 \ & 0&- 2 \\
\end{pmatrix}, \qquad
D_{x,G}=
\frac32 \begin{pmatrix}
0 & \ \ 1 \ \ & 0 & 0 \\
\ -1 \ & 0 & 0 & 0 \\
0 & 0 & 0 & \ \ 1 \ \ \\
0 & 0 & \ -1 \ & 0 \\
\end{pmatrix},
\end{eqnarray}
\begin{eqnarray}
&&
D_{y,G}=
\frac32 \begin{pmatrix}
0 & 0& \ \ 1 \ \ & 0 \\
0 & 0 & 0 & \ \ 1 \ \ \\
\ -1 \ & 0 & 0 & 0 \\
0 & \ -1 \ & 0 & 0 \\
\end{pmatrix}.\nonumber
\end{eqnarray}
Finally, given the functions $f_{i}$, $i=1,2,3,4,5,$ in (\ref{fi}) and defined the five $4\times 4$ matrices:
\begin{eqnarray} \label{fiMatrici}
&&F_{i}=
\begin{pmatrix}
f_{i}(p_{1}) & 0& 0 & 0 \\
0 & f_{i}(p_{2}) & 0 & 0 \\
0 & 0 & f_{i}(p_{3}) & 0 \\
0 & 0 & 0 & f_{i}(p_{4}) \\
\end{pmatrix}, \qquad i=1,2,3,4,5,
\end{eqnarray}
we have that the finite differences operator $L_{fd}$ approximating the differential
operator $L$ on the grid $G$ can be written as follows:
\begin{eqnarray}\label{approxLsuG}
&& \hskip-1truecm L_{fd} = F_{1} \cdot D_{xx,G} + F_{2} \cdot D_{xy,G} +F_{3}\cdot D_{yy,G} +F_{4}
\cdot D_{x,G}+F_{5} \cdot D_{y,G},
\end{eqnarray}
where $\cdot$ denotes the usual matrix multiplication.
In this way the discrete eigenvalue problem associated to problem (\ref{lapl3}), (\ref{laplBC3}) can be written as:
\begin{equation}\label{eig3}
L_{fd} \, \vec v =\lambda_{fd} \, \vec v ,
\end{equation}
where the eigenvector $\vec v$ belongs to $\mathbb{R}^4$.
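For completeness, here is a minimal NumPy sketch of (\ref{approxLsuG}), restricted to the case $\hat{Q}=Q$, where $f_1=f_3=-1$ and $f_2=f_4=f_5=0$ at every grid point; for a general quadrilateral one would simply fill the diagonal matrices (\ref{fiMatrici}) with the values $f_i(p_j)$ computed from (\ref{fi}):
\begin{verbatim}
import numpy as np

# finite-difference matrices (matrici) on the 2x2 interior grid, h = 1/3
Dxx = 9 * np.array([[-2, 1, 0, 0], [1, -2, 0, 0], [0, 0, -2, 1], [0, 0, 1, -2]])
Dyy = 9 * np.array([[-2, 0, 1, 0], [0, -2, 0, 1], [1, 0, -2, 0], [0, 1, 0, -2]])
Dxy = 2.25 * np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]])
Dx = 1.5 * np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
Dy = 1.5 * np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])

# diagonal coefficient matrices F_i; here Q-hat = Q, so f1 = f3 = -1, f2 = f4 = f5 = 0
I4 = np.eye(4)
F1, F2, F3, F4, F5 = -I4, 0*I4, -I4, 0*I4, 0*I4

Lfd = F1 @ Dxx + F2 @ Dxy + F3 @ Dyy + F4 @ Dx + F5 @ Dy    # (approxLsuG)
print(np.sort(np.linalg.eigvals(Lfd).real))                 # -> [18. 36. 36. 54.]
\end{verbatim}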
Better discretizations of $ \Delta$ and $L$ are obtained by spectral collocation. In this simple circumstance one can
use polynomials of degree three in each variable. They must satisfy the vanishing constraint on the
boundary of $Q$, so that one easily checks that the dimension of the approximating space is reduced
to four degrees of freedom. Thus, we collocate equation (\ref{lapl3}) at the four
inner grid points in order to close the system.
Indeed, we can also generalize the grid by introducing a new
parameter $\kappa$, with $0<\kappa<\frac12$, and by considering the following set of points in the unit square $Q\cup\partial Q$:
\begin{equation}\label{gridDFn}
\bar{G}: \quad \quad (\bar{x}_{i}, \bar{y}_{j}), \quad i,j=0,1,2,3,
\end{equation}
where
\begin{equation}\label{gridDFpoint}
(\bar{x}_{0},\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})=(\bar{y}_{0},\bar{y}_{1},\bar{y}_{2},\bar{y}_{3})=(0,\kappa, 1-\kappa,1) .
\end{equation}
Of course, the internal nodes are:
$\bar{p}_{1}=\left(\kappa, \kappa \right)$, $\bar{p}_{2}=\left(1-\kappa, \kappa \right)$,
$\bar{p}_{3}=\left(\kappa, 1-\kappa \right)$, $\bar{p}_{4}=\left(1-\kappa, 1-\kappa \right)$.
They correspond to the classical uniform grid when $\kappa =1/3$. Another suitable choice is
to set $\kappa =\frac12-\frac{1}{2\sqrt5}$. In this way the values $2\kappa -1$ and $1-2\kappa$
are the zeros of the first derivative of the Legendre polynomial $P_3$.
Let us now consider a generic polynomial of degree 3 in each of the two variables $x$ and $y$:
\begin{equation}\label{pol}
{\cal P}(x,y)=a_{11} l_{1}(x) l_{1}(y) + a_{12} l_{1}(x) l_{2}(y)+a_{21} l_{2}(x) l_{1}(y)+a_{22} l_{2}(x) l_{2}(y),
\end{equation}
for some coefficients $a_{ij} \in \mathbb{R}$, $i,j=1,2$.
In (\ref{pol}) $l_{1}$, $l_{2}$ are the one-dimensional Lagrange polynomials of degree 3 with respect
to the nodes: $0,\kappa, 1-\kappa,1$. In particular $l_1$ and $l_2$ vanish at the endpoints, and we have:
\begin{equation}\label{Lag}
l_{1}(x)=\frac{x(x-1)(x+\kappa-1)}{\kappa (\kappa-1) (2\kappa-1)}, \quad \quad l_{2}(x)=-\frac{x(x-1)(x-\kappa)}{\kappa (\kappa-1) (2\kappa-1)}.
\end{equation}
By explicitly calculating the derivatives of ${\cal P}$ and collocating at the internal points of the grid (\ref{gridDFn}), one obtains the $4\times 4$ matrices:
$D_{xx,\bar{G}}$,
$D_{xy,\bar{G}}$,
$D_{yy,\bar{G}}$,
$D_{x,\bar{G}}$ and
$D_{y,\bar{G}}$,
approximating, respectively, the differential operators
$\frac{\partial^2}{\partial x^2}$, $\frac{\partial^2}{\partial x \partial y}$, $\frac{\partial^2}{\partial y^2}$,
$\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$.
Finally, as in (\ref{approxLsuG}), a spectral discretization $L_{sp}$ of the operator $L$ in the domain $Q$ takes the
form:
\begin{eqnarray}\label{approxLsuGbar}
&&\hskip-1truecm L_{sp} = F_{1} \cdot D_{xx,\bar{G}} + F_{2} \cdot D_{xy,\bar{G}} +F_{3} \cdot D_{yy,\bar{G}} +F_{4} \cdot
D_{x,\bar{G}}+F_{5} \cdot D_{y,\bar{G}},
\end{eqnarray}
where the matrices $F_{i}$, $ i=1,2,3,4,5,$ are defined in (\ref{fiMatrici}). This leads us
to the eigenvalue problem:
\begin{equation}\label{eig4}
L_{sp} \, \vec v =\lambda_{sp} \, \vec v ,
\end{equation}
where the eigenvector $\vec v$ belongs to $\mathbb{R}^4$.
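The spectral matrices can be generated, for instance, by differentiating the basis (\ref{Lag}) symbolically and collocating at the internal nodes of $\bar{G}$. The following sketch, again restricted to $\hat{Q}=Q$ (so that only $D_{xx,\bar{G}}$ and $D_{yy,\bar{G}}$ enter), reproduces the eigenvalues reported below for $\kappa=\frac13$ and $\kappa =\frac12-\frac{1}{2\sqrt5}$:
\begin{verbatim}
import numpy as np
import sympy as sp

xs, k = sp.symbols('x kappa')
den = k*(k - 1)*(2*k - 1)
l1 = xs*(xs - 1)*(xs + k - 1)/den               # Lagrange basis (Lag)
l2 = -xs*(xs - 1)*(xs - k)/den

def coll_matrix(basis, order, kval):
    """1D collocation matrix of the derivative of given order
       at the internal nodes kappa and 1-kappa."""
    nodes = [kval, 1 - kval]
    return np.array([[float(sp.diff(l, xs, order).subs({xs: node, k: kval}))
                      for l in basis] for node in nodes])

for kval in (sp.Rational(1, 3), sp.Rational(1, 2) - 1/(2*sp.sqrt(5))):
    Mxx = coll_matrix([l1, l2], 2, kval)        # 1D second derivative
    Dxx = np.kron(np.eye(2), Mxx)               # D_{xx,G-bar}
    Dyy = np.kron(Mxx, np.eye(2))               # D_{yy,G-bar}
    Lsp = -(Dxx + Dyy)                          # L_sp for Q-hat = Q
    print(np.sort(np.linalg.eigvals(Lsp).real))
# kappa = 1/3               ->  [18. 36. 36. 54.]
# kappa = 1/2 - 1/(2*sqrt5) ->  [20. 40. 40. 60.]
\end{verbatim}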
We now discuss some practical cases. Suppose that $\hat{Q}=Q$, i.e., we are dealing with problem (\ref{lapl1}),
(\ref{laplBC1}). From (\ref{aut1Q}) we have that the first four exact eigenvalues of minus the Laplace operator
$(-\Delta )$ in the square are:
$(2\pi^2, 5\pi^2, 5\pi^2, 8\pi^2)\approx (19.74, 49.35, 49.35, 78.96)$. Regarding the
discretization of $-\Delta$ by finite differences we find:
$(\lambda_{fd,1}, \lambda_{fd,2}, \lambda_{fd,3}, \lambda_{fd,4}) =(18,36,36,54)$. These values coincide
with those of the spectral approximation for $\kappa=\frac13$.
By using the spectral approximation with $\kappa =\frac12-\frac{1}{2\sqrt5}$ we obtain instead $(\lambda_{sp,1},
\lambda_{sp,2}, \lambda_{sp,3}, \lambda_{sp,4}) =(20,40,40,60)$.
Taking into account the quadrilateral $\hat{Q}$ with
vertices $\hat{V}_{1}=(0,0)$, $\hat{V}_{2}=(1,0)$, $\hat{V}_{3}=(-0.2 ,1.1)$, $\hat{V}_{4}=(1.2 ,1.3)$ shown in
Fig.~\ref{fig2}
we find that the solution of (\ref{eig3}) produces the eigenvalues
$({\lambda}_{fd,1},{\lambda}_{fd,2}, {\lambda}_{fd,3}, {\lambda}_{fd,4}) =(12.54, 24.79, 25.43, 38.30)$.
Approximating $L$ with (\ref{approxLsuGbar}) and $\kappa=\frac13$ and solving (\ref{eig4}) we find the following
set of positive eigenvalues: $({\lambda}_{sp,1}, {\lambda}_{sp,2}, {\lambda}_{sp,3}, {\lambda}_{sp,4}) =
(12.52, 24.63, 25.98, 38.05)$, while using the spectral approximation (\ref{eig4}) with $\kappa =\frac12-\frac{1}{2\sqrt5}$
we obtain $(\lambda_{sp,1}, \lambda_{sp,2}, $ $ \lambda_{sp,3}, \lambda_{sp,4}) =(13.92, 27.30,28.59,43.11)$.
\section{First attempts}\label{sec4}
Given the family of quadrilateral domains $\hat{Q} \subset \mathbb{R}^{2}$ of vertices
$\hat{V}_{1}=(0,0)$, $\hat{V}_{2}=(1,0)$, $\hat{V}_{3}=(\alpha ,\beta)$, $\hat{V}_{4}=(\gamma ,\delta)$,
we begin our study by fixing an initial quadrilateral $\hat{Q}^{*}$ of vertices $\hat{V}_{1}^{*}=\hat{V}_{1}=(0,0)$,
$\hat{V}_{2}^{*}=\hat{V}_{2}=(1,0)$, $\hat{V}_{3}^{*}=(\alpha^{*} ,\beta^{*})$, $\hat{V}_{4}^{*}=(\gamma^{*} ,\delta^{*})$
with $c=1$ (see Fig.~\ref{fig3}).
From now on the superscript $*$
is used to point out that the domain $\hat{Q}^{*}$ has been fixed, and we would like to examine
what happens in its neighborhood in terms of isospectrality.
As shown in Sect.~\ref{sec3}, the quadrilateral $\hat{Q}^{*}$ is mapped to the referring square $Q$ producing
the new operator $L$. By using the finite difference approximation (\ref{approxLsuG}) or spectral collocation
as in (\ref{approxLsuGbar}), the discrete operator can be either represented by
the $4\times 4$ matrix $ L_{fd} $ or by $ L_{sp}$. Correspondingly, problems (\ref{eig3}) or (\ref{eig4}) must
be solved.
From now on, we only work with $L_{sp}$ with $\kappa=\frac13$ and, by solving (\ref{eig4}), we find four
positive eigenvalues, that will be denoted by
${\lambda}_{sp,1}^{*}, {\lambda}_{sp,2}^{*}, {\lambda}_{sp,3}^{*}, {\lambda}_{sp,4}^{*}$.
For example, as anticipated in the previous section,
by taking the quadrilateral $\hat{Q}^{*}$ of vertices
\begin{equation}\label{verticiQ*}
\hat{V}_{1}^{*}=(0,0), \ \ \hat{V}_{2}^{*}=(1,0), \ \ \hat{V}_{3}^{*}=(-0.2 ,1.1), \ \ \hat{V}_{4}^{*}=(1.2 ,1.3),
\end{equation}
shown in Fig.~\ref{fig2} (see also Fig.~\ref{fig3}) and solving
(\ref{eig4}) with $\kappa=\frac13$, we obtain the following outcome:
\begin{equation}\label{autovreali}
({\lambda}_{sp,1}^{*}, {\lambda}_{sp,2}^{*}, {\lambda}_{sp,3}^{*}, {\lambda}_{sp,4}^{*}) =( 12.52, 24.63, 25.98, 38.05).
\end{equation}
\begin{figure}
\caption{(a) The quadrilateral $\hat{Q}^{*}$ of vertices (\ref{verticiQ*}). (b) The two little squares of side $l$ centered at $(\alpha^{*} ,\beta^{*})$ and $(\gamma^{*} ,\delta^{*})$.}
\label{fig3}
\end{figure}
Our goal is to see if there exist other quadrilaterals leading to the same
set of eigenvalues and if these can be connected by a curve.
The reason why we did not start our analysis with the unit square $Q$
and its isospectral companions will be clarified later on in Sect.~\ref{sec7}.
Since we do not have at the moment any theoretical result, a rough way to have an
idea of what happens is to check methodically a great number of
quadrilaterals in the neighborhood of the initial one. To this purpose,
we construct two little squares with sides of length $l$ centered at the points $(\alpha^{*} ,\beta^{*})$ and
$(\gamma^{*} ,\delta^{*})$ as shown in Fig.~\ref{fig3}. After defining appropriate grids of given size $h$
in these two squares, we try all the possible combinations. For each
couple of grid-points (one in the first square and one in the second square)
we have a quadrilateral (recall that $\hat{V}_{1}^{*}=(0,0)$ and $\hat{V}_{2}^{*}=(1,0)$ have
been fixed). From this we deduce a $4\times 4$ matrix, and finally
four eigenvalues. We call the set of all quadrilaterals obtained in this fashion the $h$-range.
We then select those configurations in the $h$-range displaying the same
eigenvalues given in (\ref{autovreali}), up to a prescribed error $\epsilon$. We can call
these special domains $\epsilon$-isospectral. The sizes of $h$
and $\epsilon$ have to be set up with the aim of finding reasonable
outcomes. In fact, if $\epsilon$ is too large we may end up with too many
$\epsilon$-isospectral domains, whereas for $\epsilon$ too small
we could discover that the only acceptable domain is the starting one $\hat{Q}^{*}$.
Similar situations could also occur by selecting $h$ inappropriately.
The procedure is quite costly, especially for small $h$ and $\epsilon$.
This is the reason why in the next sections we look for something more
convenient from the numerical viewpoint.
Unfortunately, the results of this analysis are not encouraging. Indeed,
it seems that there are not enough degrees of freedom to play with, and
this is the reason why in Sect.~\ref{sec2} we introduced the new parameter $c$. This is
an amplification (or reduction) factor that allows us to include
in the set of possible $\epsilon$-isospectral candidates other
quadrilaterals. These are obtainable through a suitable homothety centered
in $(0,0)$.
Through this homothety we associate
a generic point $\hat p$ of the plane $(\hat x, \hat y)$ to the point $\sqrt{c} \ \hat p$.
In this way, any given eigenvalue $\lambda_{sp}$ of $L_{sp}$ relative to a generic domain $\Omega$ takes
the form $\lambda_{sp} /c$ in the new domain $\sqrt{c} \ \Omega$. By this trick, without too much effort, we can
enlarge the set of quadrilaterals at our disposition.
\begin{figure}
\caption{Shape of some of the $\epsilon$-isospectral domains related to the initial quadrilateral $\hat{Q}^{*}$.}
\label{fig4}
\end{figure}
Thus, we argue as follows. For a given domain in the $h$-range,
we will also accept eigenvalues that are proportional to those
in (\ref{autovreali}) through a multiplicative constant depending on $c$.
To this end we set $ c={\lambda}_{sp,1}/{\lambda}_{sp,1}^{*}$. Afterwards
the
domains are going to be selected according to the formula:
\begin{equation}\label{Errautov}
\left(\frac{ \sum_{k=1}^4 ({\lambda}_{sp,k}- c \ {\lambda}_{sp,k}^{*})^{2}}
{ \sum_{k=1}^4 ({\lambda}_{sp,k}^{*})^{2} }\right)^{1/2}
\leq\epsilon .
\end{equation}
If the above inequality is satisfied for $c=1$, the corresponding
quadrilateral is directly $\epsilon$-isospectral to the starting one.
If $c\not =1$, then the actual $\epsilon$-isospectral quadrilateral is
obtained by multiplying the coordinates by the constant $\sqrt{c}$.
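In practice, the selection step (\ref{Errautov}) is straightforward to implement. In the minimal sketch below, the eigenvalue quadruples are assumed to be already available (computed, for each candidate quadrilateral of the $h$-range, as in Sect.~\ref{sec3}); the routine returns the acceptance flag together with the corresponding value of $c$:
\begin{verbatim}
import numpy as np

def is_eps_isospectral(lam, lam_star, eps=1.0e-4):
    """Selection criterion (Errautov): lam and lam_star are the sorted
       eigenvalue quadruples of a candidate domain and of Q-hat-star."""
    lam, lam_star = np.asarray(lam, float), np.asarray(lam_star, float)
    c = lam[0] / lam_star[0]                 # c = lambda_{sp,1} / lambda_{sp,1}^*
    err = np.sqrt(np.sum((lam - c*lam_star)**2) / np.sum(lam_star**2))
    return err <= eps, c

lam_star = [12.52, 24.63, 25.98, 38.05]      # reference eigenvalues (autovreali)
print(is_eps_isospectral(lam_star, lam_star))                 # (True, 1.0)
print(is_eps_isospectral([2*v for v in lam_star], lam_star))  # accepted, c = 2
\end{verbatim}
When the flag is true and $c\not=1$, the accepted quadrilateral is rescaled by $\sqrt{c}$ as explained above.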
For example, starting from the quadrilateral
$\hat{Q}^{*}$ in (\ref{verticiQ*}) shown in Fig.~\ref{fig3} (whose eigenvalues are listed in (\ref{autovreali})),
we adopted the above procedure based on the parameters: $l=0.1$, $h=0.0036$ and $\epsilon=10^{-4}$.
A family of 47 $\epsilon$-isospectral domains in the $h$-range was obtained. Some of these
are displayed in Fig.~\ref{fig4}. Concerning the operator $L_{fd}$ in (\ref{approxLsuG}), preliminary results of this type
were found in \cite{codeluppi}.
We checked areas and perimeters of the 47 $\epsilon$-isospectral domains.
Within a tolerance of $10^{-3}$, 46 out of
47 quadrilaterals (including $\hat{Q}^{*}$) share the same area, and 29 have the
same perimeter. Note that, in our discrete case, we cannot rely on a result
similar to that of Weyl for the continuous case. Nevertheless, the discovery that areas
are (almost) preserved, besides being in agreement with predictions, is an excellent tool to
decide a priori if a domain is appropriate. Indeed, before directly computing the
eigenvalues, one can filter those domains that, up to a certain accuracy, share
with the initial one the same area. This preliminary control saves a lot of computational
time.
Note that the vertex $\hat{V}_{1}=(0,0)$ remains fixed, while
the vertex $\hat{V}_{2}$ shifts horizontally. This means that, except for the initial configuration
where the parameter $c$ is equal to one, the values of $c$ are in general different from one.
The plots in Fig.~\ref{fig5} are zooms of the two little squares with sides of length $l=0.1$ of Fig.~\ref{fig3} centered at
the points $(\alpha^{*} ,\beta^{*})=(-0.2 ,1.1)$ and $(\gamma^{*} ,\delta^{*})=(1.2 ,1.3)$.
From these images we can conjecture the existence of a continuous path joining the
various $\epsilon$-isospectral domains. Although only based on heuristic considerations,
the guess seems to be confirmed by further tests, where $h$ and $\epsilon$ are
conveniently taken smaller and smaller. We will be more precise in the coming sections, where appropriate strategies will be developed to prove the existence of these curves and to detect them.
\begin{figure}
\caption{Zoom of the little squares of Fig.~\ref{fig3}.}
\label{fig5}
\end{figure}
Similar results are obtained by varying the configuration of the
initial quadrilateral $\hat{Q}^{*}$. We suggest however to set the initial parameters
$ \alpha^{*} ,\beta^{*}, \gamma^{*} , \delta^{*}$ in order to stay away from some critical
situations that will be analyzed more in detail in Sect.~\ref{sec7}.
\section{Implementation of the Implicit Function theorem}\label{sec5}
From the experiments of the previous section, our guess is that a curve
joining isospectral quadrilaterals actually exists. It is a one-parameter
function embedded in the 5 dimensional space spanned by the
parameters $\alpha$, $\beta$, $\gamma$, $\delta$, $c$. We can track it
by observing that it is implicitly characterized through four functional
equations. We can follow the curve by computing locally its tangent
vector and this can be done with the help of the Implicit Function theorem (see, for example, \cite{rudin}).
Actually, in circumstances in which such a theorem is applicable, we
automatically have an existence result, at least at local level.
We recall that a generic quadrilateral
$\hat{Q}$ has vertices of the form $\hat{V}_{1}=(0,0)$,
$\hat{V}_{2}=(1,0)$, $\hat{V}_{3}=(\alpha ,\beta)$, $\hat{V}_{4}=(\gamma ,\delta)$.
Afterwards, for $i=1,2,3,4,$ let
$\lambda_{sp,i}$ be the eigenvalues in ascending order of the discrete problem
(\ref{eig4}). Clearly, each one of these quantities depends on the parameters
$ \alpha ,\beta, \gamma , \delta$, i.e.:
\begin{eqnarray}\label{lambdaabcd}
&& \lambda_{sp,i}=\lambda_{sp,i} ( \alpha ,\beta, \gamma , \delta) , \quad i=1,2,3,4.
\end{eqnarray}
Now, the eigenvalues (\ref{lambdaabcd}) can be seen as the roots
of the following characteristic polynomial:
\begin{eqnarray}\label{charpolyQ}
&& q(z)=z^{4} +\xi_{3} z^{3} +\xi_{2} z^{2} +\xi_{1} z +\xi_{0} ,
\end{eqnarray}
where the coefficients $\xi_{k}$, $k=0,1,2,3$, are computed from the entries of the matrix $L_{sp}$ defined in (\ref{approxLsuGbar}). As a consequence these coefficients also depend on $ \alpha ,\beta, \gamma , \delta$, that is
$\xi_{k}=\xi_{k} ( \alpha ,\beta, \gamma , \delta)$, $k=0,1,2,3$.
As done in the previous section, we can work around an initial domain $\hat{Q}^{*}$
of fixed vertices $\hat{V}_{1}^{*}=\hat{V}_{1}=(0,0)$,
$\hat{V}_{2}^{*}=\hat{V}_{2}=(1,0)$, $\hat{V}_{3}^{*}=(\alpha^{*} ,\beta^{*})$, $\hat{V}_{4}^{*}=(\gamma^{*} ,\delta^{*})$
with $c=1$. For instance, we can start from the quadrilateral
$\hat{Q}^{*}$ specified in (\ref{verticiQ*}) and shown in Fig.~\ref{fig3}, whose corresponding eigenvalues are listed in (\ref{autovreali}). Accordingly, $\lambda^{*}_{sp,i}$, $i=1,2,3,4,$ are
the roots of the following characteristic polynomial:
\begin{eqnarray}\label{charpolyQ*}
&& q^{*}(z)=z^{4} +\xi_{3}^{*} z^{3} +\xi_{2}^{*} z^{2} +\xi_{1}^{*} z +\xi_{0}^{*},
\end{eqnarray}
where now
the coefficients $\xi_{k}^{*}$, $k=0,1,2,3$, are given real numbers.
For example, starting from the quadrilateral
$\hat{Q}^{*}$ in (\ref{verticiQ*}), we have:
\begin{eqnarray}
(\xi_{3}^{*}, \xi_{2}^{*}, \xi_{1}^{*},\xi_{0}^{*}) =( 101.18 , 3675.65, 56468.45, 304819.78).
\end{eqnarray}
Thus, a certain quadrilateral
$\hat{Q} \not =\hat{Q}^{*}$ (with vertices $\hat{V}_{1}=(0,0)$,
$\hat{V}_{2}=(1,0)$, $\hat{V}_{3}=(\alpha ,\beta)$, $\hat{V}_{4}=(\gamma ,\delta)$)
is isospectral to $\hat{Q}^{*}$ if and only if $q$ and $q^{*}$ have the same roots, i.e.,
one has $\xi_k=\xi_{k}^{*}$, $k=0,1,2,3$. We can weaken this condition by introducing the
parameter $c>0$. Indeed, we can also accept situations where the eigenvalues are proportionally related
as follows:
\begin{eqnarray}\label{charpolyQprimoequal0}
c=\frac{\lambda_{sp,i} }{\lambda_{sp,i}^{*}} , \quad i=1,2,3,4.
\end{eqnarray}
If $c=1$, $\hat{Q}$ turns out to be directly isospectral to $\hat{Q}^{*}$.
If $c\not =1$, the new domain $\sqrt{c} \ \hat{Q}$, obtained by homothety, is also isospectral to
$\hat{Q}^{*}$ (note that in this case the second vertex $\hat{V}_{2}$ becomes $(\sqrt{c},0)$). We can now translate condition
(\ref{charpolyQprimoequal0}) in terms of polynomial coefficients, obtaining:
\begin{eqnarray}\label{polcof}
c^{4-k}=\frac{\xi_{k} }{\xi_{k}^{*}} , \quad k=0,1,2,3.
\end{eqnarray}
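For completeness, we note that (\ref{polcof}) is a direct consequence of Vieta's formulas: each coefficient of (\ref{charpolyQ}) is, up to sign, an elementary symmetric function of the roots, so the proportionality (\ref{charpolyQprimoequal0}) propagates to the coefficients as
\begin{equation*}
\xi_{k}=(-1)^{4-k}\, e_{4-k}(\lambda_{sp,1},\ldots ,\lambda_{sp,4})
=(-1)^{4-k}\, c^{4-k}\, e_{4-k}(\lambda_{sp,1}^{*},\ldots ,\lambda_{sp,4}^{*})
=c^{4-k}\,\xi_{k}^{*},\qquad k=0,1,2,3,
\end{equation*}
where $e_{m}$ denotes the elementary symmetric polynomial of degree $m$ (each of its monomials gains a factor $c^{m}$ when all the roots are multiplied by $c$).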
In the end, we propose to introduce the four functions ${\cal F}_{k}={\cal F}_{k}(\alpha, \beta, \gamma, \delta, c)$, $ k=0,1,2,3,$ of the five variables $\alpha, \beta, \gamma, \delta, c$, defined as follows:
\begin{eqnarray}\label{Funzionik}
{\cal F}_{k}(\alpha, \beta, \gamma, \delta, c)= \xi_{k} -c^{4-k} \ \xi_{k}^{*} , \qquad k=0,1,2,3,
\end{eqnarray}
and look for values such that:
\begin{eqnarray}\label{Funzionikzero}
{\cal F}_{k}=0, \qquad k=0,1,2,3.
\end{eqnarray}
If we are lucky, there
is a local curve $\Phi (t)\in \mathbb{R}^5$, $t\in [-T, T]$, $T>0$, described by the functions
$\alpha (t)$, $\beta (t)$, $\gamma (t)$, $\delta (t)$, $c(t)$, passing through the point
${\bf P}^*$ of coordinates $\alpha^{*}$,
$\beta^{*}$, $\gamma^{*}$, $\delta^{*}$, $c=1$ (i.e., the parameters identifying the initial
quadrilateral $\hat{Q}^{*}$ as in (\ref{verticiQ*})) and connecting isospectral domains.
In order to follow such a curve $\Phi $, we need to find its local tangent vector. We can
express the various parameters as functions of one of them.
We fix for example $\beta (t)=\beta^{*}+t$, with $t\in [-T, T]$, so that $\beta (0)=\beta^{*}$.
We then differentiate with
respect to $t$ the four equations in (\ref{Funzionikzero}), arriving at the system:
\begin{eqnarray}
&&
\begin{pmatrix}
\ \ \displaystyle\frac{\partial {\cal F}_0}{\partial \alpha} \ \ & \ \ \displaystyle \frac{\partial {\cal F}_0}{\partial \gamma} \ \
& \ \ \displaystyle\frac{\partial {\cal F}_0}{\partial \delta} \ \ & \ \ \displaystyle\frac{\partial {\cal F}_0}{\partial c} \ \ \\[4mm]
\displaystyle\frac{\partial {\cal F}_1}{\partial \alpha} & \displaystyle \frac{\partial {\cal F}_1}{\partial \gamma} & \displaystyle\frac{\partial {\cal F}_1}{\partial \delta} & \displaystyle\frac{\partial {\cal F}_1}{\partial c} \\[4mm]
\displaystyle\frac{\partial {\cal F}_2}{\partial \alpha} & \displaystyle \frac{\partial {\cal F}_2}{\partial \gamma} & \displaystyle\frac{\partial {\cal F}_2}{\partial \delta} & \displaystyle\frac{\partial {\cal F}_2}{\partial c} \\[4mm]
\displaystyle\frac{\partial {\cal F}_3}{\partial \alpha} & \displaystyle \frac{\partial {\cal F}_3}{\partial \gamma} & \displaystyle\frac{\partial {\cal F}_3}{\partial \delta} & \displaystyle\frac{\partial {\cal F}_3}{\partial c} \\
\end{pmatrix}
\begin{pmatrix}
\displaystyle\frac{d \alpha}{d t} \\[4mm]
\displaystyle\frac{d \gamma}{d t} \\[4mm]
\displaystyle\frac{d \delta}{d t} \\[4mm]
\displaystyle\frac{d c}{d t} \\
\end{pmatrix}
=
\begin{pmatrix}
- \displaystyle\frac{\partial {\cal F}_0}{\partial \beta} \\[4mm]
- \displaystyle\frac{\partial {\cal F}_1}{\partial \beta} \\[4mm]
- \displaystyle\frac{\partial {\cal F}_2}{\partial \beta} \\[4mm]
- \displaystyle\frac{\partial {\cal F}_3}{\partial \beta} \\
\end{pmatrix} ,\label{problemFk}
\end{eqnarray}
in the unknowns $\alpha^{\prime} (t)=\frac{d \alpha}{d t}$,
$\gamma^{\prime} (t)=\frac{d \gamma}{d t}$, $\delta^{\prime} (t)=
\frac{d \delta}{d t}$ and $c^{\prime} (t)=\frac{d c}{d t}$.
At this point, it is important to observe that the functions ${\cal F}_{k}$, $k=0,1,2,3$, in (\ref{Funzionik})
and their derivatives are explicitly known, although their expressions may turn out to be rather
complicated. In fact, starting from $\alpha$, $\beta$, $\gamma $, $\delta$, one
can build the coefficients of the mapping onto the reference square. Subsequently,
still as functions of these parameters, one writes the matrix $L_{sp}$ defined in (\ref{approxLsuGbar}). Finally,
a closed form is also known for the coefficients of the characteristic polynomial
(this is not true instead for the eigenvalues). Of course, from the practical viewpoint, these computations
can only be carried out with the help of a software running with symbolic manipulation.
Going through this calculation, we find that the determinant of the matrix in
(\ref{problemFk}) is different from zero at ${\bf P}^*$. Therefore, one is able to theoretically detect a
value $T>0$, such that a curve $\Phi (t)\in \mathbb{R}^5$, $t\in [-T, T]$, described by the parameters
$\alpha (t)$, $\beta (t)=\beta^{*}+t$, $\gamma (t)$, $\delta (t)$, $c(t)$, actually exists
in the interval $t\in [-T, T]$. For $t=0$, we have $\Phi (0)=(\alpha^*,\beta^*, \gamma^*,
\delta^*,1)={\bf P}^*$.
The following first-order explicit iteration method can be adopted in order to
follow the curve, at least locally.
Recall that, given $T>0$, we have chosen $\beta (t)=\beta^{*}+t$, $t\in [-T, T]$.
For simplicity let us fix our attention on positive values of the variable $t$ (indeed
similar arguments also apply when $t$ is negative)
and, given an integer $M>0$, let us discretize the interval $[\beta^{*},\beta^{*}+T]$ with a uniform grid
$t_{m}$, $m=0,\ldots, M$, with step size $\delta t=\frac{T}{M}$, that is:
\begin{equation}\label{grigliaBeta}
t_{m}=\beta^{*} +m \delta t=\beta^{*} +m \frac{T}{M}, \quad m=0,\ldots, M.
\end{equation}
Our curve $\Phi$ is going to be approximated by the quantities $\Phi_m \approx \Phi (t_m)$,
$m=0,\ldots, M$. In particular $\Phi_0 =\Phi (0) ={\bf P}^*$. At each step, the exact
derivatives $\alpha^{\prime} (t_m)$, $\gamma^{\prime} (t_m)$, $\delta^{\prime} (t_m)$
and $c^{\prime} (t_m)$, $m=0,\ldots, M$, are computed by solving (\ref{problemFk}) (note that $\beta^\prime
(t_m)=1$, $m=0,\ldots, M$). These exact
derivatives are organized in the correcting vector $\Psi_m$, $m=0,\ldots, M$. The algorithm is
then based on the following iteration:
\begin{equation}\label{iter}
\Phi_{m+1}=\Phi_m +\delta t \Psi_m \qquad m=0,\ldots,M-1.
\end{equation}
A similar procedure can be used to approximate the curve $\Phi $ in the interval $ [-T, 0]$.
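The whole tracking procedure can be summarized by the short sketch below (the routine \texttt{tangent\_vector}, which assembles and solves (\ref{problemFk}) at the current point, is assumed to be available, for instance through the symbolic computations described above; it is an assumption of the sketch, not part of the algorithm itself):
\begin{verbatim}
import numpy as np

def follow_curve(P_star, tangent_vector, T=0.06, M=100):
    # P_star         : starting point (alpha*, beta*, gamma*, delta*, 1)
    # tangent_vector : user-supplied routine returning the exact derivatives
    #                  (alpha', gamma', delta', c') obtained from (problemFk)
    dt = T / M
    Phi = [np.asarray(P_star, dtype=float)]
    for m in range(M):
        alpha, beta, gamma, delta, c = Phi[-1]
        da, dg, dd, dc = tangent_vector(alpha, beta, gamma, delta, c)
        Psi_m = np.array([da, 1.0, dg, dd, dc])   # beta is explicit: beta' = 1
        Phi.append(Phi[-1] + dt * Psi_m)          # iteration (iter)
    return np.array(Phi)

# Calling follow_curve(P_star, tangent_vector, T=-0.06, M=100) tracks the
# curve for negative values of t.
\end{verbatim}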
For example, we discuss the case of the quadrilateral
$\hat{Q}^{*}$ in (\ref{verticiQ*}) shown in Fig.~\ref{fig3} and we take $\beta (t)=\beta^{*}+t=1.1+t$, $t\in [-T, T]= [-0.06,0.06]$.
We apply (\ref{iter}) with $M=100$, both for positive and negative values of $t$.
Fig.~\ref{fig6} and Fig.~\ref{fig7} show, respectively, the projections of $\Phi_{m}$, $m=0,\ldots, M$, in the planes
$(\alpha ,\beta )$, $(\gamma ,\delta )$ and the graph $(t_m,c(t_m))$, $m=0,\ldots, M$.
These results confirm what was predicted in the previous section.
As expected, the shapes of the projections of the curve turn out to be exactly the same if, instead of the parameter $\beta$, another
parameter is assumed to be explicit.
\begin{figure}
\caption{Projections of the curve $\Phi (t)\in \mathbb{R}^{5}$ in the planes $(\alpha ,\beta )$ and $(\gamma ,\delta )$.}
\label{fig6}
\end{figure}
\begin{figure}
\caption{Plot of the points $(t_m,c(t_m))$, $m=0,\ldots, M$, corresponding
to the curves of Fig.~\ref{fig6}.}
\label{fig7}
\end{figure}
\section{Other approaches}\label{sec6}
The use of symbolic manipulation to compute the exact tangent vector to
the curve $\Phi$ at a given point is certainly expensive, ill-conditioned and
difficult to generalize. It has however been an important theoretical tool
to establish the local existence of the curve.
In this section we propose an alternative numerical procedure, far cheaper but
with similar performances.
In line with the arguments invoked in Sect.~\ref{sec5} and using the same notation we
have that, if the curve $\Phi$ has to connect isospectral domains, the eigenvalues
$ \lambda_{sp,i}/c$, $ i=1,2,3,4$, in (\ref{charpolyQprimoequal0}) must remain the same
as $t$ varies. In particular we have: $\lambda_{sp,i}^{*}=
\lambda_{sp,i}/c$, $ i=1,2,3,4$.
That is, having in mind (\ref{lambdaabcd}), the four functions ${\cal G}_{i}={\cal G}_{i}(\alpha, \beta,
\gamma, \delta, c)$, $ i=1,2,3,4,$ of the five variables $\alpha, \beta, \gamma, \delta, c$, defined as follows:
\begin{eqnarray}\label{FunzionikG}
{\cal G}_{i} (\alpha, \beta, \gamma, \delta, c)= \frac{\lambda_{sp,i} }{c} , \quad i=1,2,3,4,
\end{eqnarray}
must be constant, i.e., their derivatives with respect to $t$ are zero.
By differentiating (\ref{FunzionikG}) with respect to $t$ and by expressing the
various parameters as functions
of one of them, we find the approximate local tangent vector to the curve $\Phi $.
As in Sect.~\ref{sec5} let us fix for example $\beta (t)=\beta^{*}+t$, with $t\in [-T, T]$, so that $\beta (0)=\beta^{*}$.
The new linear system takes the form:
\begin{eqnarray}
&&
\begin{pmatrix}
\ c \ \displaystyle\frac{\partial \lambda_{sp,1}}{\partial \alpha} \ & \ c \ \displaystyle\frac{\partial \lambda_{sp,1}}{\partial \gamma} \ &
\ c \ \displaystyle\frac{\partial \lambda_{sp,1}}{\partial \delta} \ & \ \ - \lambda_{sp,1} \ \\[4mm]
c \ \displaystyle\frac{\partial \lambda_{sp,2}}{\partial \alpha} & c \ \displaystyle\frac{\partial \lambda_{sp,2}}{\partial \gamma} &
c \ \displaystyle\frac{\partial \lambda_{sp,2}}{\partial \delta} & - \lambda_{sp,2} \\[4mm]
c \ \displaystyle\frac{\partial \lambda_{sp,3}}{\partial \alpha} & c \ \displaystyle\frac{\partial \lambda_{sp,3}}{\partial \gamma} &
c \ \displaystyle\frac{\partial \lambda_{sp,3}}{\partial \delta} & - \lambda_{sp,3} \\[4mm]
c \ \displaystyle\frac{\partial \lambda_{sp,4}}{\partial \alpha} & c \ \displaystyle\frac{\partial \lambda_{sp,4}}{\partial \gamma} &
c \ \displaystyle\frac{\partial \lambda_{sp,4}}{\partial \delta} & - \lambda_{sp,4} \\
\end{pmatrix}
\begin{pmatrix}
\displaystyle\frac{d \alpha}{d t} \\[4mm]
\displaystyle\frac{d \gamma}{d t} \\[4mm]
\displaystyle\frac{d \delta}{d t} \\[4mm]
\displaystyle\frac{d c}{d t} \\
\end{pmatrix}
=
\begin{pmatrix}
- c \ \displaystyle\frac{\partial \lambda_{sp,1}}{\partial \beta} \\[4mm]
- c \ \displaystyle\frac{\partial \lambda_{sp,2}}{\partial \beta} \\[4mm]
- c \ \displaystyle\frac{\partial \lambda_{sp,3}}{\partial \beta} \\[4mm]
- c \ \displaystyle\frac{\partial \lambda_{sp,4}}{\partial \beta} \\
\end{pmatrix} ,\label{problemGk}
\end{eqnarray}
in the unknowns $\alpha^{\prime} (t)=\frac{d \alpha}{d t}$,
$\gamma^{\prime} (t)=\frac{d \gamma}{d t}$, $\delta^{\prime} (t)=
\frac{d \delta}{d t}$ and $c^{\prime} (t)=\frac{d c}{d t}$.
Since now the explicit expression of the partial derivatives in (\ref{problemGk}) is not available,
the entries of the matrix are approximated by classical finite differences, that is, for example, given an
increment $d\alpha$, we can compute $\partial \lambda_{sp,i} /\partial \alpha$, $i=1,2,3,4$, as follows:
\begin{eqnarray}\label{derivAlpha}
\displaystyle\frac{\partial \lambda_{sp,i}}{\partial \alpha} \approx
\frac{\lambda_{sp,i}(\alpha+d\alpha, \beta, \gamma, \delta, c)-\lambda_{sp,i}(\alpha, \beta, \gamma, \delta, c)}{d\alpha} , \quad i=1,2,3,4.
\end{eqnarray}
Similar formulae can be used to approximate the other entries of the matrix in (\ref{problemGk}).
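As a sketch, the incremental ratio (\ref{derivAlpha}) can be coded as follows; the routine \texttt{sorted\_eigenvalues}, which builds $L_{sp}$ for given $(\alpha ,\beta ,\gamma ,\delta )$ and returns its eigenvalues in ascending order, is assumed to be supplied by the user and is not reproduced here:
\begin{verbatim}
import numpy as np

def d_lambda_d_alpha(sorted_eigenvalues, alpha, beta, gamma, delta,
                     d_alpha=1e-6):
    # Forward finite differences for the four derivatives
    # d(lambda_sp,i)/d(alpha), i = 1,...,4, as in (derivAlpha).
    lam0 = np.asarray(sorted_eigenvalues(alpha, beta, gamma, delta))
    lam1 = np.asarray(sorted_eigenvalues(alpha + d_alpha, beta, gamma, delta))
    return (lam1 - lam0) / d_alpha

# Analogous calls (with increments in beta, gamma, delta) fill the columns of
# the matrix in (problemGk); the resulting 4x4 linear system can then be
# solved with numpy.linalg.solve to get the approximate tangent vector.
\end{verbatim}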
Finally, we use the uniform grid (\ref{grigliaBeta}) when $t$ is positive (or its opposite when $t$ is negative).
At step $m$, we evaluate the approximations of
$\alpha^{\prime} (t_m)$, $\gamma^{\prime} (t_m)$, $\delta^{\prime} (t_m)$
and $c^{\prime} (t_m)$, $m=0,\ldots, M$ (note that $\beta^\prime
(t_m)=1$, $m=0,\ldots, M$), by solving a system similar to that in (\ref{problemGk}), where partial
derivatives have been replaced with incremental ratios. We collect the values thus
obtained in the correcting vector $ \bar{\Psi}_m$, $m=0,\ldots, M$. Thus, a rough approximation of $\Phi (t_m)$
is given by the quantities $\bar{\Phi}_m $, $m=0,\ldots, M$, recovered by recursion
from the following explicit iteration method:
\begin{equation}\label{iter2}
\bar{\Phi}_{m+1}=\bar{\Phi}_m +\delta t \ \bar{\Psi}_m \qquad m=0,\ldots,M-1,
\end{equation}
starting from $\bar{\Phi}_0 =\Phi (0) ={\bf P}^*$.
Given the quadrilateral $\hat{Q}^{*}$ corresponding to (\ref{verticiQ*}) and shown in Fig.~\ref{fig3},
by taking $\beta (t)=\beta^{*}+t=1.1+t$, $t\in [-T, T]= [-0.06,0.06]$,
we apply the iteration method (\ref{iter2}), both for positive and negative values of $t$. Different
values of the step size $\delta t=\frac{T}{M}$ are used. In particular we choose $M=10, 30, 50$.
In Fig.~\ref{fig8} we show the projections of $\bar{\Phi}_{m}$, $m=0,\ldots, M$, in
the planes $(\alpha ,\beta )$, $(\gamma ,\delta )$.
As $\delta t$ diminishes, the graphs of Fig.~\ref{fig8} approach those of Fig.~\ref{fig6}.
Also in this case the shape of the projections of the curve turn out to be qualitatively the same if
another parameter is assumed to be explicit (instead of the parameter $\beta$).
This convergence behavior is a pleasant surprise. Indeed, such a property should not be taken for granted. In fact,
through (\ref{derivAlpha}) and similar formulae, we replaced partial derivatives by a first-order approximation,
and we introduced this correction into the first-order algorithm (\ref{iter}). This double
discretization does not necessarily lead to a convergent scheme.
\begin{figure}
\caption{Projections of the curve $\Phi (t)\in \mathbb{R}^{5}$ in the planes $(\alpha ,\beta )$ and $(\gamma ,\delta )$, obtained with the iteration (\ref{iter2}) for $M=10, 30, 50$.}
\label{fig8}
\end{figure}
A sort of convergence analysis can be carried out by examining the history of the quadrilateral areas
in comparison to the area of $\hat{Q}^{*}$, given by $\mu_2 (\hat{Q}^{*})=1.44$. The results of
some tests are reported in Table \ref{tavola}. For a fixed $\delta t$ the error
grows linearly with the distance from the quadrilateral $\hat{Q}^{*}$.
Nevertheless, there is no substantial decay of the error as $\delta t$ diminishes.
It should also be noted, however, that in the discrete case we do not have any theoretical
result ensuring that the areas of isospectral domains must be preserved.
We finally observe
that the results obtained so far do not change significantly if other values of the
parameter $\kappa$ introduced in (\ref{gridDFpoint}) are taken into account (we recall that
in the experiments reported in this paper we used $\kappa =\frac13$).
\begin{table}
\caption{Relative errors of the areas in relation to the initial
position.}
\label{tavola}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
$\beta^{*}+t$ & $M=5$ and $\delta t=0.012$ & $M=10$ and $\delta t=0.006$ & $M=20$ and $\delta t=0.003$\\
\hline\noalign{\smallskip}
1.040 & 11.70e-04 & 10.30e-04 & 9.65e-04\\
1.052 & 9.91e-04 & 8.69e-04 & 8.12e-04\\
1.064 & 7.83e-04 & 6.84e-04 & 6.37e-04\\
1.076 & 5.48e-04 & 4.76e-04 & 4.43e-04\\
1.088 & 2.87e-04 & 2.48e-04 & 2.30e-04\\
1.100 & 0 & 0 & 0\\
1.112 & 1.53e-04 & 1.89e-04 & 2.07e-04\\
1.124 & 3.09e-04 & 3.85e-04 & 4.25e-04\\
1.136 & 4.67e-04 & 5.89e-04 & 6.54e-04\\
1.148 & 6.26e-04 & 8.00e-04 & 8.92e-04\\
1.160 & 7.86e-04 & 10.19e-04 & 11.42e-04\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{The case of the square}\label{sec7}
Here we discuss the case of the unit square $Q$, i.e. the quadrilateral of vertices $V_{1}=(0,0)$, $V_{2}=(1,0)$, $V_{3}=(0,1)$, $V_{4}=(1,1)$, and the case of other domains $\hat{Q}$ of
vertices $\hat{V}_{1}=(0,0)$, $\hat{V}_{2}=(1,0)$, $\hat{V}_{3}=(\alpha ,\beta)$, $\hat{V}_{4}=(\gamma ,\delta)$ that
are symmetric with respect to the straight line $y=x$, that is domains $\hat{Q}$ with $\alpha =0$, $\beta
=1$ and $\gamma =\delta$. Of course, when $\alpha =0$, $\beta
=1$, $\gamma =\delta=1$ we have $\hat{Q}=Q$.
In these situations using the approach proposed in Sect.~\ref{sec5}, one can check that the determinant of the matrix
in (\ref{problemFk}) is always zero.
We recall that in Sect.~\ref{sec5} we chose $\beta (t) =1+t$, $t\in [-T, T]$, but the determinant remains zero
independently of which parameter is made explicit.
Moreover, for the square $Q$, the rank of the
matrix in (\ref{problemFk}) is just one, i.e. all the rows of the matrix are proportional to one another.
We are therefore not in a situation that allows us to use the
Implicit Function theorem. Nevertheless, experimentally, one can find
in the neighborhood of the unit square $Q$ many other quadrilaterals isospectral to
it.
An analysis similar to that of Sect.~\ref{sec4}
reveals configurations as the ones shown in Fig.~\ref{fig9}.
In this case we choose: $l=0.1$, $h=0.0036$ and $\epsilon=5\times 10^{-4}$, and, as in Sect.~\ref{sec4}, we
considered $c={\lambda}_{sp,1}/{\lambda}_{sp,1}^{*}$.
In the neighborhood
of the vertex $V_{3}=(0,1)$ on the left of Fig.~\ref{fig9} the distribution of selected points does not reveal specific
patterns, while for the vertex $V_{4}=(1 ,1)$ on the
right of Fig.~\ref{fig9} there is a superimposition of several curves, as will become evident from the
discussion that follows.
\begin{figure}
\caption{Zoom of the vertices of the $\epsilon$-isospectral domains when the initial quadrilateral
is the unit square $Q$.}
\label{fig9}
\end{figure}
\begin{figure}
\caption{Deformations of the unit square $Q$ obtained by moving independently
each single parameter. The resulting quadrilaterals are isometric.}
\label{fig10}
\end{figure}
A heuristic explanation of the non-applicability of the Implicit Function
theorem is suggested by Fig.~\ref{fig10}, where the four parameters
$-\alpha$,
$\beta$, $\gamma$, $\delta$, are varied independently (one has to imagine
that these deformations are infinitesimal). One gets four configurations
corresponding to quadrilaterals that are mutually isometric (by rotations
or reflections), and therefore isospectral. This somehow explains why the
Jacobian matrix in (\ref{problemFk}) has rank equal to one. Analogous
conclusions hold for other initial domains presenting symmetries.
\begin{figure}
\caption{Zoom of the vertices of the $\epsilon$-isospectral domains starting from the unit square $Q$ and imposing $\alpha=0$ in the $h$-range.}
\caption{Zoom of the vertices of the $\epsilon$-isospectral domains starting from the unit square $Q$ and imposing $\beta=1+\alpha$ in the $h$-range.}
\label{fig11}
\label{fig12}
\end{figure}
\begin{figure}
\caption{Shapes of the $\epsilon$-isospectral domains obtained by departing from the initial unit square
$Q$, and imposing $\alpha=0$ (case (a)) or $\beta=1+\alpha$ (case (b)). }
\label{fig13}
\end{figure}
The above considerations do not prevent however the existence of curves connecting isospectral domains.
Actually, at the parameters associated with the unit square $Q$
we have a bifurcation point. A deeper study shows that different curves of isospectral quadrilaterals are
obtained. For instance, we can have those of the family associated either with the points
of Fig.~\ref{fig11} or with the points of Fig.~\ref{fig12}. In particular, Fig.~\ref{fig13} shows the
quadrilaterals corresponding to the two ways of deforming the square $Q$. These two branches
of isospectral domains departing from the unit square are very neat. Nevertheless, their
identification is not easy if one only examines Fig.~\ref{fig9}. Of course, when we started
our analysis we (erroneously) thought that the case of the square was the easiest one; only later we realized
that this was far from being true.
We conclude this section with a few more experiments. We would like to figure out what
happens to the curve connecting isospectral domains when the initial quadrilateral $ \hat{Q}^{*}$
is modified. For example, we can start from the quadrilateral $ \hat{Q}^{*}$ associated with the
vertices in (\ref{verticiQ*}). The corresponding isospectral family forms a curve characterized
by the projections shown in Fig.~\ref{fig6} (also reported in Fig.~\ref{fig14}). Now, we deform
such initial domain $ \hat{Q}^{*}$ by slowly approaching the unit square $Q$. To this purpose, we fix an integer $S>1$.
For $j=0,\ldots, S$, the vertices of the
transitory quadrilaterals $\hat Q_j$ are chosen according to the law:
\begin{eqnarray} \label{legge}
&& \hat{V}_{1,j}=V_{1}=(0,0), \quad \hat{V}_{2,j}=V_{2}=(1,0), \nonumber\\
&& \hat{V}_{3,j}=(1-s_j)\hat V_{3}^{*} +s_j{V}_{3}, \quad \hat{V}_{4,j}=(1-s_j)\hat V_{4}^{*} +s_j{V}_{4}, \nonumber\\
&&\hskip6truecm \ s_j=j/S, \, \, j=0,\ldots, S.
\end{eqnarray}
In this way, for $j=0$ in (\ref{legge}) we have the initial quadrilateral $ \hat{Q}^{*}$, i.e.
$\hat Q_0=\hat{Q}^{*}$, while
for $j=S$ we obtain the unit square $Q$, i.e. $\hat Q_S=Q$.
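A minimal sketch of the interpolation (\ref{legge}) is reported below (the function name is ours):
\begin{verbatim}
def transitory_vertices(V3_star, V4_star, S=10):
    # Vertices of the quadrilaterals Q_j interpolating between Q* (j = 0)
    # and the unit square Q (j = S), according to (legge).
    V3_square, V4_square = (0.0, 1.0), (1.0, 1.0)
    quads = []
    for j in range(S + 1):
        s = j / S
        V3 = tuple((1 - s) * a + s * b for a, b in zip(V3_star, V3_square))
        V4 = tuple((1 - s) * a + s * b for a, b in zip(V4_star, V4_square))
        quads.append([(0.0, 0.0), (1.0, 0.0), V3, V4])
    return quads

quads = transitory_vertices((-0.2, 1.1), (1.2, 1.3), S=10)
\end{verbatim}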
\begin{figure}
\caption{The quadrilaterals $\hat{Q}_{j}$ obtained through (\ref{legge}), starting from the quadrilateral $\hat{Q}^{*}$ with vertices (\ref{verticiQ*}).}
\caption{Projections of the local curves joining the
corresponding isospectral domains for each of the quadrilaterals of Fig.~\ref{fig14}.}
\label{fig14}
\label{fig15}
\end{figure}
Since we know
that, for the unit square $Q$, the determinant of the Jacobian in (\ref{problemFk})
is zero, the use of the Implicit Function theorem becomes more and more restrictive as $j$ increases.
For this reason we need to limit the range $[-T_j, T_j]$ of definition of the curve.
Given $T_0 >0$ we propose to control this range by defining the extremes
$ T_j$, $j=0,\ldots, S-1$, of the interval according to the rule:
\begin{equation}\label{legget}
T_j=\frac{T_0}{1+2s_j}, \quad \quad j=0,\ldots, S-1.
\end{equation}
We excluded the value $j=S$ in (\ref{legget}) because in this case the procedure is not applicable.
Starting from the quadrilateral
$\hat{Q}^{*}$ in (\ref{verticiQ*}), taking $S=10$ in (\ref{legge}) and $T_0=0.06$ in (\ref{legget}), we obtain the various quadrilaterals
displayed in Fig.~\ref{fig14}. For $j=0,\ldots, S-1$, we locally compute the curve of the
isospectral domains related to $\hat Q_j$, indifferently with the approach proposed in Sect.~\ref{sec5} or with that of Sect.~\ref{sec6}.
The projections of these curves in the planes
$(\alpha ,\beta )$ and $(\gamma ,\delta )$ are shown in Fig.~\ref{fig15}.
Finally, we run the same tests by using a different set of vertices $\hat V_k^*$, $k=1,2,3,4$, of the initial quadrilateral $ \hat{Q}^{*}$, namely:
\begin{equation}\label{verticiQ*2}
\hat{V}_{1}^{*}=(0,0), \ \ \hat{V}_{2}^{*}=(1,0), \ \ \hat{V}_{3}^{*}=(0.2 ,1.1), \ \ \hat{V}_{4}^{*}=(1.2 ,1.3).
\end{equation}
In Fig.~\ref{fig16} and Fig.~\ref{fig17} the reader can see the results of the tests obtained by
taking $S=10$ in (\ref{legge}) and $T_0=0.06$ in (\ref{legget}).
Here the curve twists in a more complicated fashion. Scattered
dots may appear in the plots. They are a consequence of the breakdown of the algorithm when
approaching the unit square $Q$. In order to get rid of them it is necessary to further
reduce the interval $[-T_j, T_j]$ as $j$ gets close to $S$.
\begin{figure}
\caption{The quadrilaterals $\hat{Q}_{j}$ obtained through (\ref{legge}), starting from the quadrilateral $\hat{Q}^{*}$ with vertices (\ref{verticiQ*2}).}
\caption{Projections of the local curves joining the
corresponding isospectral domains for each of the quadrilaterals of Fig.~\ref{fig16}.}
\label{fig16}
\label{fig17}
\end{figure}
\section{Conclusions}\label{sec8}
Even remaining in finite dimension, the extension of our analysis
to general domains depending on more degrees of freedom can be a severe
numerical task. The main difficulty is how to choose the family of admissible
domains. They may have, for instance, a polygonal boundary and the
approximation
of the continuous operator can be performed by finite elements. If we are
far from special symmetric configurations (as the case of the unit square
considered in Sect.~\ref{sec7}), the local application of the Implicit Function theorem
should guarantee isospectral deformations in a small neighborhood.
The one-dimensional curve now belongs to a space of larger dimension and
its graphical representation can be rather troublesome. For visualization,
the best way
is probably to show animations of the movement of the isospectral
domains. Note that the implementation costs can become drastically higher as
the number of degrees of freedom increases.
An alternative to the work presented here is to try to preserve a certain
number of eigenvalues of the exact operator. For example, concerning the
family of quadrilaterals (such as those examined so far), instead of introducing
a discretization based on matrices of dimension $4\times 4$, one can evaluate the first
4 eigenvalues of the exact Laplacian and move the domains with the aim of
preserving their values. We expect the domains obtained in this way to be slightly
different from the ones found in this paper. Unfortunately, the exact eigenvalues of
$-\Delta$ on a general
quadrilateral domain are not explicitly available, so that the computation
should be accompanied by an appropriate discretization on a very fine
grid. Again, the complexity of the algorithm may become unaffordable
as more degrees of freedom are introduced.
The problem of finding families of isospectral domains, connected by continuity,
which preserve
\underline{all} the eigenvalues of the \underline{exact} Laplace operator is certainly harder than the
experiments attempted in this paper, and represents a stimulating theoretical
challenge. We hope however that our little contribution may be the starting
point for future ideas.
\end{document}
\begin{document}
\title[Short Title]{On the affine representations of the trefoil knot group}
\author[H.Hilden]{Hugh M. Hilden}
\address[H.Hilden]{ Department of Mathematics, University of Hawaii,
Honolulu, HI 96822, USA}
\author[M.T.Lozano]{ Mar\'{\i}a Teresa Lozano*}
\address[M.T.Lozano]{ IUMA, Departamento de Matem\'{a}ticas, Universidad de
Zaragoza, Zaragoza 50009, Spain }
\thanks{*This research was supported by grant MTM2007-67908-C02-01}
\author[J.M.Montesinos]{ Jos\'{e} Mar\'{\i}a Montesinos-Amilibia**}
\address[J.M.Montesinos]{Departamento de Geometr\'{\i}a y Topolog\'{\i}a,
Universidad Complutense, Madrid 28040, Spain}
\thanks{**This research was supported by grant MTM2006-00825}
\date{March, 2010}
\subjclass[2000]{Primary 57M25, 57M60; Secondary 20H15}
\keywords{quaternion algebra, representation, knot group, crystallographic group}
\dedicatory{}
\begin{abstract}
The complete classification of representations of the Trefoil knot group $G$ in $S^{3}$ and $SL(2,\mathbb{R})$, their affine deformations, and some
geometric interpretations of the results, are given. Among other results, we also obtain
the classification up to conjugacy of the non cyclic groups of affine
Euclidean isometries generated by two isometries $\mu $ and $\nu $ such that
$\mu ^{2}=\nu ^{3}=1$ , in particular those which
are crystallographic. We also prove that
there are no affine crystallographic groups in the three dimensional
Minkowski space which are quotients of $G$.
\end{abstract}
\maketitle
\section{ Introduction}
The representation of a knot group in the group of isometries of a geometric
manifold is important in order to obtain invariants of the knot and also in
order to relate geometric structures with the knot.
In \cite{HLM2009} the varieties $V(\mathcal{I}_{G}^{c})$ and $V(\mathcal{I}_{aG}^{c})$
of c-representations and affine c-representations (resp.) of a
two-generator group in a quaternion algebra are defined. A c-representation is a representation in which the images of the generators are conjugate elements. We gave there the
c-representation associated to each point in the varieties and also the
complete classification of c-representations of $G$ in $S^{3}$ and $SL(2,\mathbb{R})$.
In this article we apply the results of \cite{HLM2009} to the group $G$ of
the Trefoil knot, giving the complete classification of representations of $G$ in $S^{3}$ and $SL(2,\mathbb{R})$. We classify their affine deformations and we give some
geometric interpretations of the results.
We also obtain as a consequence
the classification up to conjugacy of the non cyclic groups of affine
Euclidean isometries which are quotients of $G$, indeed those which are generated by two isometries $\mu $ and $\nu $ such that
$\mu ^{2}=\nu ^{3}=1$ (Theorem \ref{tposiblesH}), in particular those which
are crystallographic (Theorem \ref{tposibleseucli}). We also prove that
there are no affine crystallographic groups in the three dimensional
Minkowski space which are quotients of $G$, by using Mess's theorem (\cite{Mess1990} ,\cite{GM2000}) and
Margulis invariant (\cite{Mar1983}, \cite{Mar1984}).
This paper is organized as follows. In Sec. 2 we recall some concepts contained in \cite{HLM2009} about quaternion algebras and the varieties $V(\mathcal{I}_{G}^{c})$ and $V(\mathcal{I}_{aG}^{c})$
of c-representations and affine c-representations of a group $G$ in a quaternion algebra. We include as Theorem \ref{teorema4} the result in \cite{HLM2009} giving explicitly the complete classification of c-representations of $G$ in $S^{3}$ and $SL(2,\mathbb{R})$. In Sec. 3 we apply Theorem \ref{teorema4} to the Trefoil knot group $G(3_{1})$ and we describe the five occurring cases of representations associated to the real points of the algebraic variety $V(\mathcal{I}_{G(3_{1})})$. We give also a geometric interpretation of the image of the representation in each of these five cases as the holonomy of a 2-dimensional geometric cone-manifold. The geometry of these cone-manifolds is spherical, Euclidean or hyperbolic. In Sec. 4 we obtain all the representations of the Trefoil knot group $G(3_{1})$ in the affine isometry group $A(H)$ of a quaternion algebra $H$. In particular we obtain the representations in the 3-dimensional affine Euclidean (and Lorentz) isometry group. Finally we study the Euclidean and Lorentz crystallographic groups which are images of representations of $G(3_{1})$ and we deduce some interesting consequences.
\section{Preliminaries}
\subsection{Quaternion algebras}
Recall that the quaternion algebra $H=\left( \frac{\mu ,\nu }{k}\right) $ is
the $k$-algebra on two generators $i,j$ with the defining relations:
\begin{equation*}
i^{2}=\mu ,\qquad j^{2}=\nu \qquad \text{and}\qquad ij=-ji.
\end{equation*}
Then $H$ is also a four-dimensional vector space over $k$, with basis $\left\{ 1,i,j,ij\right\} $. Given a quaternion $A=\alpha +\beta i+\gamma
j+\delta ij$, $A\in H=\left( \frac{\mu ,\nu }{k}\right) =\langle 1,\quad
i,\quad j,\quad ij\rangle $, we use the notation $A^{+}=\alpha $, and $A^{-}=\beta i+\gamma j+\delta ij$. Then, $A=A^{+}+A^{-}$.
The \emph{conjugate} of $A$ is, by definition, the quaternion $\overline{A}:=A^{+}-A^{-}=\alpha -\beta i-\gamma j-\delta ij.$
The \emph{trace} of $A$ is $T(A)=A+\overline{A}$. Therefore $T(A)=2A^{+}=2\alpha \in k.$
The \emph{norm} of $A$ is $N(A):=A\overline{A}=\overline{A}A$. Observe that
the norm can be considered as a quadratic form on $H:$
\begin{equation*}
N(A)=(\alpha +\beta i+\gamma j+\delta ij)(\alpha -\beta i-\gamma j-\delta
ij)=\alpha ^{2}-\beta ^{2}\mu -\gamma ^{2}\nu +\delta ^{2}\mu \nu \in k.
\end{equation*}
We denote by $(H,N)$ the quadratic structure in $H$ defined by the norm.
In the quaternion algebra $M(2,k\mathbb{)=}\left( \frac{-1,1}{k}\right) $
the trace is the usual trace of the matrix, and the norm is the determinant
of the matrix.
There are two important subsets in the quaternion algebra $H=\left( \frac{\mu ,\nu }{k}\right) $: the pure quaternions $H_{0}=\left\{ A\in
H:A^{+}=0\right\} $ (a 3-dimensional vector space over $k$ generated by $\left\{ i,j,ij\right\} $), and the unit quaternions $U_{1}$ (the
multiplicative group of quaternions with norm 1). The norm induces a
quadratic structure on the pure quaternions $(H_{0},N).$
There exists a homomorphism $c:U_{1}\longrightarrow SO(H_{0},N)$ such that $c(A)$ acts on $H_{0}$ by conjugation: $c(A)(B^{-})=AB^{-}\overline{A}$. This
homomorphism permits us to associate to each representation $\rho
:G\longrightarrow U_{1}$ a linear isometry $c\circ \rho $ of the metric
space $(H_{0},N).$
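A quick numerical sanity check of this construction can be carried out, for instance in the Hamilton quaternions $\mathbb{H}=\left( \frac{-1,-1}{\mathbb{R}}\right) $, with the short Python sketch below: the multiplication routine encodes the defining relations $i^{2}=\mu $, $j^{2}=\nu $, $ij=-ji$ in the basis $\{1,i,j,ij\}$, and the example verifies that, for a unit quaternion $A$ and a pure quaternion $P$, the element $c(A)(P)=AP\overline{A}$ is again pure and has the same norm.
\begin{verbatim}
def qmul(q1, q2, mu, nu):
    # Product in the quaternion algebra (mu,nu / k), coordinates w.r.t. 1,i,j,ij.
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 + mu*b1*b2 + nu*c1*c2 - mu*nu*d1*d2,
            a1*b2 + b1*a2 - nu*c1*d2 + nu*d1*c2,
            a1*c2 + c1*a2 + mu*b1*d2 - mu*d1*b2,
            a1*d2 + d1*a2 + b1*c2 - c1*b2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def norm(q, mu, nu):
    a, b, c, d = q
    return a*a - mu*b*b - nu*c*c + mu*nu*d*d

mu = nu = -1.0                        # Hamilton quaternions: U_1 = S^3
A = (0.5, 0.5, 0.5, 0.5)              # a unit quaternion, N(A) = 1
P = (0.0, 1.0, 2.0, 3.0)              # a pure quaternion in H_0
cP = qmul(qmul(A, P, mu, nu), conj(A), mu, nu)   # c(A)(P) = A P conj(A)
print(cP[0])                          # ~0: the image is again pure
print(norm(P, mu, nu), norm(cP, mu, nu))         # equal: c(A) preserves N
\end{verbatim}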
The \emph{equiform group} or \emph{group of similarities} $\mathcal{E}q(H)$
of a quaternion algebra $H$, is the semidirect product $H_{0}\rtimes U$,
where $U$ is the multiplicative group of invertible elements in $H.$ This is
the group whose underlying space is $H_{0}\times U$ and the product is
\begin{equation*}
\begin{array}{lll}
\mathcal{E}q(H)\times \mathcal{E}q(H) & \longrightarrow & \mathcal{E}q(H) \\
((v,A),(w,B)) & \rightarrow & (v+c(A)(w),AB)
\end{array}
\end{equation*}
The group of \emph{affine isometries} $\mathcal{A}(H)$ of a quaternion
algebra is the subgroup of $\mathcal{E}q(H)\ $which is the semidirect
product $H_{0}\rtimes U_{1}$.
The group $\mathcal{A}(H)$ defines a left action on the 3-dimensional vector
space $H_{0}$,
\begin{equation*}
\begin{array}{llll}
\Psi : & \mathcal{A}(H_{0})\times H_{0} & \longrightarrow & H_{0} \\
& ((v,A),u) & \rightarrow & (v,A)u:=v+c(A)(u)
\end{array}
\end{equation*}
For an element $(v,A)\in \mathcal{E}q(H)$, $A$ is the \emph{linear part} of $(v,A)$, $N(A)$ is the \emph{homothetic factor}, and $v$ is the \emph{translational part}. For an element $(v,A)\in \mathcal{A}(H)$, the vector $v\in H_{0}$ can be decomposed in a unique way as the orthogonal sum of two
vectors, one of them in the $A^{-}$ direction:
\begin{equation*}
v=sA^{-}+v^{\perp },\quad \qquad \left\langle v^{\perp },A^{-}\right\rangle
=0
\end{equation*}
Then
\begin{equation*}
(v,A)=(v^{\perp },1)(sA^{-},A)
\end{equation*}
The element $(v^{\perp },1)$ is a translation in $H_{0}$. The restriction of
the action of $(sA^{-},A)$ on the line generated by $A^{-}$ is a translation
with vector $sA^{-}$. We define $sA^{-}$ as the \emph{vector shift of the
element }$(v,A)$. The length $\sigma $ of the vector shift will be called
the \emph{shift of the element }$(v,A).$
The action of $(v,A)$ leaves (globally) invariant an affine line parallel
to $A^{-}$, and its action on this line is a translation with vector the
shift $sA^{-}$. This invariant affine line will be called \emph{the axis of} $(v,A)$. Then the action of $(v,A)$ on the axis of $(v,A)$ is a translation
by $\sigma $.
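In the Euclidean case $\mathbb{H}=\left( \frac{-1,-1}{\mathbb{R}}\right) $, where the bilinear form attached to $N$ on $H_{0}$ is the standard dot product, the decomposition $v=sA^{-}+v^{\perp }$ is just an orthogonal projection; a minimal numeric sketch (the names are ours, and $A^{-}\neq 0$ is assumed):
\begin{verbatim}
import numpy as np

def vector_shift(v, A_minus):
    # Decompose v = s*A_minus + v_perp with <v_perp, A_minus> = 0
    # (Euclidean case: <.,.> is the ordinary dot product on H_0).
    v, A_minus = np.asarray(v, float), np.asarray(A_minus, float)
    s = np.dot(v, A_minus) / np.dot(A_minus, A_minus)
    return s * A_minus, v - s * A_minus

shift, v_perp = vector_shift([1.0, 2.0, 3.0], [0.0, 0.0, 1.0])
print(shift, v_perp, np.dot(v_perp, [0.0, 0.0, 1.0]))  # (0,0,3), (1,2,0), 0
\end{verbatim}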
The following Table contains three important examples, which are the only
quaternion algebras over $\mathbb{R}$ and $\mathbb{C}$ up to isomorphism.
\begin{tabular}{|c|c|c|c|}
\hline
$H=\left( \frac{\mu ,\nu }{k}\right) $ & $U_{1}$ & $H_{0}$ & $(H_{0},N)$ \\
\hline \hline
$M(2,\mathbb{C})=\left( \frac{-1,1}{\mathbb{C}}\right) $ & $SL(2,\mathbb{C)}$
& $\mathbb{C}^{3}$ & $\mathbb{C}^{3}$ Complex Euclidean space \\ \hline
$M(2,\mathbb{R})=\left( \frac{-1,1}{\mathbb{R}}\right) $ & $SL(2,\mathbb{R)}$
& $\mathbb{R}^{3}$ & $E^{2,1}$ Minkowski space \\ \hline
$\mathbb{H}=\left( \frac{-1,-1}{\mathbb{R}}\right) $ & $S^{3}=SU(2,\mathbb{R})$ & $\mathbb{R}^{3}$ & $E^{3}$ (Real) Euclidean space \\ \hline
\end{tabular}
In fact, $M(2,\mathbb{R})$ and the Hamilton quaternions $\mathbb{H}=\left(
\frac{-1,-1}{\mathbb{R}}\right) $ are each isomorphic to an $\mathbb{R}$-subalgebra of $M(2,\mathbb{C}).$
\subsection{The varieties $V(\mathcal{I}_{G}^{c})$ and $V(\mathcal{I}_{aG}^{c})$}
Let $G$ be a group given by the presentation
\begin{equation*}
G=\left\vert a,b:w(a,b)\right\vert
\end{equation*}
where $w$ is a word in $a$ and $b$. A homomorphism
\begin{equation*}
\rho :G\longrightarrow U_{1}
\end{equation*}
such that $\rho (a)=A$ and $\rho (b)=B$ are conjugate elements in $U_{1}$ is
called here a \textit{c-representation}. Then, since $A$ and $B$ are
conjugate quaternions, $A^{+}=B^{+}$. Set
\begin{equation*}
x=A^{+}=B^{+}\quad \text{and \quad }y=-(A^{-}B^{-})^{+}.
\end{equation*}
We say that $\rho :G\longrightarrow U_{1}$ realizes $(x,y).$ It is proven in
\cite{HLM2009} that if $U_{1}$ is the group of unit quaternions in $M(2,\mathbb{C})=\left( \frac{-1,1}{\mathbb{C}}\right) ,$ then there is an
algorithm to construct an ideal $\mathcal{I}_{G}^{c}$ generated by four
polynomials
\begin{equation*}
\left\{ p_{1}(x,y),p_{2}(x,y),p_{3}(x,y),p_{4}(x,y)\right\}
\end{equation*}
with integer coefficients defining the \emph{algebraic variety of
c-representations} of $G$: $V(\mathcal{I}_{G}^{c})$. It is characterized as
follows:
\begin{enumerate}
\item [-]The set of points $\{(x,y)\in V(\mathcal{I}_{G}^{c}):y^{2}\neq
(1-x^{2})^{2}\}$ coincides with the pairs $(x,y)$ for which there exists an
irreducible c-representation $\rho :G\longrightarrow U_{1}$, unique up to
conjugation in $U_{1}$, realizing $(x,y)$.
\item [-]The set of points $\{(x,y)\in V(\mathcal{I}_{G}^{c}):y^{2}=(1-x^{2})^{2},\quad x^{2}\neq 1\}$ coincides with the pairs $(x,y)$ for which there exists an almost-irreducible c-representation $\rho
:G\longrightarrow U_{1}$, unique up to conjugation in $U_{1}$, realizing $(x,y)$.
\item [-]The set of points $\{(x,y)\in V(\mathcal{I}_{G}^{c}):y=0,\quad
x^{2}=1\}$ coincides with the pairs $(x,y)$ for which neither irreducible
nor almost-irreducible c-representations $\rho :G\longrightarrow U_{1}$
realizing $(x,y)$ exist.
\end{enumerate}
The real points in $V(\mathcal{I}_{G}^{c})$ , excepting the cases $\{(x,\pm
(1-x^{2}))|x^{2}\leq 1\}$, correspond to irreducible c-representations in $S^{3}$ and irreducible (or almost-irreducible) c-representations in $SL(2,\mathbb{R})$ according to \cite[Th.4]{HLM2009}:
\begin{theorem}[\cite{HLM2009}]
\label{teorema4}Let $G$ be a group given by the presentation
\begin{equation*}
G=\left\vert a,b:w(a,b)\right\vert
\end{equation*}
where $w$ is a word in $a$ and $b$. If $(x_{0}$, $y_{0})$ is a real point of
the algebraic variety $V(\mathcal{I}_{G}^{c})$ we distinguish two cases:
\begin{enumerate}
\item If
\begin{equation*}
1-x_{0}^{2}>0,\quad (1-x_{0}^{2})^{2}>y_{0}^{2}
\end{equation*}
there exists an irreducible c-representation $\rho :G\longrightarrow U_{1}$,
$U_{1}=S^{3}\subset \mathbb{H}$,~unique up to conjugation in $U_{1}=S^{3}$,
realizing $(x_{0},y_{0})$. Namely:
\begin{equation*}
\fbox{$
\begin{array}{l}
A=x_{0}+\frac{1}{+\sqrt{1-x_{0}^{2}}}\left( y_{0}i+\sqrt{(1-x_{0}^{2})^{2}-y_{0}^{2}}j\right) \\
B=x_{0}+\sqrt{1-x_{0}^{2}}i,\quad \sqrt{1-x_{0}^{2}}>0
\end{array}
$}
\end{equation*}
\begin{equation*}
(A^{-}B^{-})^{-}=-\sqrt{(1-x^{2})^{2}-y^{2}}ij
\end{equation*}
\item The remaining cases. Then excepting the case
\begin{equation*}
1-x_{0}^{2}>0,\quad (1-x_{0}^{2})^{2}=y_{0}^{2}
\end{equation*}
and the case
\begin{equation*}
x_{0}^{2}=1,\quad y_{0}=0
\end{equation*}
there exists an irreducible (or almost-irreducible) c-representation $\rho
:G\longrightarrow U_{1}$, $U_{1}=SL(2,\mathbb{R})\subset \left( \frac{-1,1}{\mathbb{R}}\right) $ realizing $(x_{0},y_{0})$. Moreover two such
homomorphisms are equal up to conjugation in the group of quaternions with
norm $\pm 1$, $U_{\pm 1}$. Specifically:
\item[(2.1)] If
\begin{equation*}
1-x_{0}^{2}>0,\quad (1-x_{0}^{2})^{2}<y_{0}^{2}
\end{equation*}
set
\begin{equation*}
\fbox{$
\begin{array}{c}
A=x_{0}+\sqrt{1-x_{0}^{2}}I,\quad \sqrt{1-x_{0}^{2}}>0 \\
B=x_{0}+\frac{1}{+\sqrt{1-x_{0}^{2}}}\left( y_{0}I+\sqrt{
y_{0}^{2}-(1-x_{0}^{2})^{2}}J\right)
\end{array}
$}
\end{equation*}
Then $\rho :G\longrightarrow U_{1}$ is irreducible, and
\begin{equation*}
(A^{-}B^{-})^{-}=+\sqrt{y_{0}^{2}-(1-x_{0}^{2})^{2}}IJ
\end{equation*}
\item[(2.2)] If
\begin{equation*}
1-x_{0}^{2}<0,\quad (1-x_{0}^{2})^{2}<y_{0}^{2}
\end{equation*}
there are two subcases:
\begin{enumerate}
\item[(2.2.1)] $y_{0}<0$. Set
\begin{equation*}
\fbox{$
\begin{array}{c}
A=x_{0}+\sqrt{x_{0}^{2}-1}J,\quad \sqrt{x_{0}^{2}-1}>0 \\
B=x_{0}-\frac{1}{+\sqrt{x_{0}^{2}-1}}\left( \sqrt{y_{0}^{2}-(x_{0}^{2}-1)^{2}
}I+y_{0}J\right)
\end{array}
$}
\end{equation*}
\begin{equation*}
(A^{-}B^{-})^{-}=-\sqrt{y_{0}^{2}-(x_{0}^{2}-1)^{2}}IJ
\end{equation*}
\item[(2.2.2)] $y_{0}>0$. Set
\begin{equation*}
\fbox{$
\begin{array}{c}
A=x_{0}+\sqrt{x_{0}^{2}-1}J,\quad \sqrt{x_{0}^{2}-1}>0 \\
B=x_{0}+\frac{1}{+\sqrt{x_{0}^{2}-1}}\left( \sqrt{y_{0}^{2}-(x_{0}^{2}-1)^{2}
}I-y_{0}J\right)
\end{array}
$}
\end{equation*}
\begin{equation*}
(A^{-}B^{-})^{-}=+\sqrt{y_{0}^{2}-(x_{0}^{2}-1)^{2}}IJ
\end{equation*}
In both subcases $\rho :G\longrightarrow U_{1}$ is irreducible.
\end{enumerate}
\item[(2.3)] If
\begin{equation*}
1-x_{0}^{2}<0,\quad (1-x_{0}^{2})^{2}>y_{0}^{2}
\end{equation*}
set
\begin{equation*}
\fbox{$
\begin{array}{c}
A=x_{0}+\sqrt{x_{0}^{2}-1}J,\quad \sqrt{x_{0}^{2}-1}>0 \\
B=x_{0}+\frac{1}{+\sqrt{x_{0}^{2}-1}}\left( -y_{0}J+\sqrt{
(x_{0}^{2}-1)^{2}-y_{0}^{2}}IJ\right)
\end{array}
$}
\end{equation*}
Then $\rho :G\longrightarrow U_{1}$ is irreducible and
\begin{equation*}
(A^{-}B^{-})^{-}=-\sqrt{(x_{0}^{2}-1)^{2}-y_{0}^{2}}IJ
\end{equation*}
\item[(2.4)] If
\begin{equation*}
1-x_{0}^{2}<0,\quad (1-x_{0}^{2})^{2}=y_{0}^{2}
\end{equation*}
set
\begin{equation*}
\fbox{$
\begin{array}{c}
A=x_{0}+\sqrt{x_{0}^{2}-1}J,\quad \sqrt{x_{0}^{2}-1}>0 \\
B=x_{0}+(IJ+I)-\frac{_{y_{0}}}{\sqrt{x_{0}^{2}-1}}J
\end{array}
$}
\end{equation*}
Then $\rho :G\longrightarrow U_{1}$ is almost-irreducible, and
\begin{equation*}
(A^{-}B^{-})^{-}=-\sqrt{(x_{0}{}^{2}-1)}(I+IJ).
\end{equation*}
\item[(2.5)] If
\begin{equation*}
1-x_{0}^{2}=0,\quad y_{0}\neq 0
\end{equation*}
set
\begin{equation*}
\fbox{$
\begin{array}{c}
A=x_{0}+I+J \\
B=x_{0}+\frac{y_{0}}{2}(I-J)
\end{array}
$}
\end{equation*}
Then $\rho :G\longrightarrow U_{1}$ is irreducible, and
\begin{equation*}
(A^{-}B^{-})^{-}=-y_{0}IJ
\end{equation*}
\end{enumerate}
\qed
\end{theorem}
We also want to study the \emph{c-representations of }$G$\emph{\ in the affine
group }$\mathcal{A}(H)$ of a quaternion algebra $H$, that is representations
of $G$ in the affine group $\mathcal{A}(H)$ of a quaternion algebra $H$ such
that the generators $a$ and $b$ go to conjugate elements, up to conjugation
in $\mathcal{E}q(H)$. Given a such c-representation $\rho :G\longrightarrow
\mathcal{A}(H)$, if $\rho (a)$, $\rho (b)$ has translational part different
from $0$, we can suppose that
\begin{equation*}
\begin{array}{llll}
\rho : & G & \longrightarrow & \mathcal{A}(H) \\
& a & \rightarrow & (sA^{-},A) \\
& b & \rightarrow & (sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
where $(A,B)$ is an irreducible pair of conjugate unit quaternions. (A pair $(A,B)$ is irreducible
if $\{A^{-},B^{-},(A^{-}B^{-})^{-}\}$ is a basis of $H_{0}$.)
Because $\rho $ is a homomorphism of the semidirect product $H_{0}\rtimes
U_{1}=\mathcal{A}(H),$ we have
\begin{equation}
\rho (w(a,b))=(\frac{\partial w}{\partial a}|_{\phi }\circ v+\frac{\partial w
}{\partial b}|_{\phi }\circ u,w(A,B))=(0,I) \label{erelfox}
\end{equation}
where $\frac{\partial w}{\partial a}|_{\phi }$ is the Fox derivative of the
word $w(a,b)$ with respect to $a$, and evaluated by $\phi $ such that $\phi
(a)=A$, $\phi (b)=B$. (See \cite{CF1963}.) The equation \ref{erelfox} yields
two relations between the parameters
\begin{eqnarray*}
x &=&A^{+}=B^{+} \\
y &=&-(A^{-}B^{-})^{+} \\
s &=&vector\ shift\ parameter
\end{eqnarray*}
the relations are
\begin{equation}
w(A,B)=I \label{erel1}
\end{equation}
\begin{equation}
\frac{\partial w}{\partial a}|_{\phi }\circ v+\frac{\partial w}{\partial b}
|_{\phi }\circ u=0 \label{erel2}
\end{equation}
The relation \eqref{erel1} yields the ideal $\mathcal{I}_{G}^{c}=
\{p_{i}(x,y)\mid i\in \left\{ 1,2,3,4\right\} \}$, and it defines $V(
\mathcal{I}_{G}^{c})$ the algebraic variety of c-representations of $G$ in $
SL(2,\mathbb{C})$.
The relation \eqref{erel2} produces four polynomials in $x,y,s$: $\{q_{j}(x,y,s)\mid j\in \left\{ 1,2,3,4\right\} \}.$ The ideal
\begin{equation*}
\mathcal{I}_{aG}^{c}=\{p_{i}(x,y),q_{j}(x,y,s)\mid i,j\in \left\{
1,2,3,4\right\} \}
\end{equation*}
defines an algebraic variety, that we call $V_{a}(\mathcal{I}_{aG}^{c})$ the
\emph{variety of affine c-representations of }$G$\emph{\ in }$A(H)$ up to
conjugation in $\mathcal{E}q(H)$.
\subsection{The group of the trefoil knot}
Consider the standard presentation of the group of the trefoil knot $3_{1}$
as the 2-bridge knot $3/1$ (see Figure \ref{ftrebol} and \cite{BZ}):
\begin{equation*}
G(3_{1})=|a,b;aba=bab|
\end{equation*}
where $a$ and $b$ are meridians. This knot is also the toroidal knot $
\{3,2\} $. As such, it has the following presentation:
\begin{equation}
G(3_{1})=|F,D;F^{3}=D^{2}| \label{genfd}
\end{equation}
where $F=ab$ and $D=aba$. It is easy to show that the element $
C:=F^{3}=D^{2} $ belongs to the center of $G(3_{1})$.
\begin{figure}
\caption{The trefoil knot $3_{1}$.}
\label{ftrebol}
\end{figure}
\section{The representations of $G(3_{1})$ in $U_{1}$}
Let
\begin{equation*}
\begin{array}{cccc}
\rho : & G(3_{1})=|a,b;aba=bab| & \longrightarrow & U_{1} \\
& a & \rightarrow & \rho (a)=A \\
& b & \rightarrow & \rho (b)=B
\end{array}
\end{equation*}
denote a representation of $G=G(3_{1})$ in the unit group of a quaternion
algebra $H$. This always is a c-representation because $a$ and $b$ are
conjugate elements in $G$. Moreover if $\rho $ is irreducible (that is, if
$\{A^{-},B^{-},(A^{-}B^{-})^{-}\}$ is a basis of $H_{0}$, the 3-dimensional
vector space of pure quaternions), then $\{A,B\}$ generates $H$ as an
algebra. Since $\rho (C)$ commutes with $A$ and $B$, it belongs to the
center of $H$. Hence $\rho (C)=\pm 1.$ Applying the algorithm of \cite[Th.3]{HLM2009}
to the presentation
\begin{equation*}
G(3_{1})=|a,b;aba=bab|
\end{equation*}
we obtain $4$ polynomials, two of which are zero and the other two coincide
up to sign with
\begin{equation}
2x^{2}-2y-1. \label{epoly1}
\end{equation}
Therefore $\mathcal{I}_{G}=(2x^{2}-2y-1)$. The real part of the algebraic
variety $V(\mathcal{I}_{G})$ is the parabola $y=\frac{2x^{2}-1}{2}$ depicted
in Figure \ref{fcurvas}.
\begin{figure}
\caption{The real curve $V(\mathcal{I}_{G})$.}
\label{fcurvas}
\end{figure}
Theorem \ref{teorema4} establishes the different cases of representations
associated to real points of the algebraic variety $V(\mathcal{I}_{G}).$
Figure \ref{fregiones} shows a pattern on the real plane with coordinates $x$
and $\frac{y}{1-x^{2}}$: the plane is divided into several labeled regions by
labeled segments. Each label corresponds to the case described in Theorem \ref
{teorema4}. Points in the segments between region (1) and (2.3), which have
no label, correspond to almost-irreducible representations in $SL(2,\mathbb{C
})$. Therefore, to apply Theorem \ref{teorema4} to the algebraic variety $V(
\mathcal{I}_{G})$ it suffices to study the graph of $\frac{y}{1-x^{2}}$ as a
function of $x$ over the pattern.
\begin{figure}
\caption{The pattern for different cases.}
\label{fregiones}
\end{figure}
Figure \ref{fysobreuutrebol}
shows $\frac{y}{1-x^{2}}$ as a function of $x$ for the algebraic variety $V(\mathcal{I}_{G})$ of the Trefoil knot.
\begin{figure}
\caption{The function $\frac{y}{1-x^{2}}$ for the algebraic variety $V(\mathcal{I}_{G})$ of the Trefoil knot.}
\label{fysobreuutrebol}
\end{figure}
Then, according to Theorem \ref{teorema4}, there are several
cases:
\subsection{Case 1}
Region (1).
\[x\in (\frac{-\sqrt{3}}{2},\frac{\sqrt{3}}{2}
)\Longleftrightarrow \left\{ 1-x^{2}>0,(1-x^{2})^{2}>y^{2}\right\}, \] where
\begin{equation*}
y=\frac{2x^{2}-1}{2},\quad u=1-x^{2}
\end{equation*}
There exists an irreducible c-representation $\rho
_{x}:G(3_{1})\longrightarrow S^{3}$ realizing $(x,y)$, unique up to
conjugation in $S^{3}$, such that
\begin{equation}
\begin{array}{l}
\rho _{x}(a)=A=x+\frac{2x^{2}-1}{2\sqrt{1-x^{2}}}i+\frac{1}{2}\sqrt{\frac{
3-4x^{2}}{1-x^{2}}}j \\
\rho _{x}(b)=B=x+\sqrt{1-x^{2}}i,\quad \sqrt{1-x^{2}}>0
\end{array}
\label{ecase1}
\end{equation}
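The formulas (\ref{ecase1}) can be checked numerically for any admissible value of $x$; a short sketch using the standard Hamilton product, with coordinates taken with respect to $\{1,i,j,ij\}$:
\begin{verbatim}
import numpy as np

def qmul(q1, q2):
    # Product of Hamilton quaternions, coordinates w.r.t. 1, i, j, ij.
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 + c1*a2 - b1*d2 + d1*b2,
                     a1*d2 + d1*a2 + b1*c2 - c1*b2])

x = 0.3                               # any x with -sqrt(3)/2 < x < sqrt(3)/2
u = 1.0 - x**2
A = np.array([x, (2*x**2 - 1)/(2*np.sqrt(u)), 0.5*np.sqrt((3 - 4*x**2)/u), 0.0])
B = np.array([x, np.sqrt(u), 0.0, 0.0])

print(np.allclose(qmul(qmul(A, B), A), qmul(qmul(B, A), B)))  # A B A = B A B
print(np.allclose([A @ A, B @ B], 1.0))                       # N(A) = N(B) = 1
\end{verbatim}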
The composition of $\rho _{x}$ with $c:S^{3}\rightarrow SO(3)$, where $c(X)$
, $X\in S^{3}$, acts on $P\in H_{0}\cong E^{3}$ by conjugation, defines the
representation $\rho _{x}^{\prime }=c\circ \rho _{x}:G(3_{1})\longrightarrow
SO(3)$. In linear notation, where $\left\{ X,Y,Z\right\} $ is the coordinate
system in $E^{3}$ associated to the basis $\left\{ -ij,j,i\right\} $ we have
\begin{equation*}
\begin{array}{l}
\rho _{x}^{\prime }(a)=m_{x}(a)\left(
\begin{array}{c}
X \\
Y \\
Z
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime }
\end{array}
\right) \\
\rho _{x}^{\prime }(b)=m_{x}(b)\left(
\begin{array}{c}
X \\
Y \\
Z
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime }
\end{array}
\right)
\end{array}
\end{equation*}
where
\begin{equation*}
m_{x}(a)=\left(
\begin{array}{ccc}
2x^{2}-1 & \frac{x-2x^{3}}{\sqrt{1-x^{2}}} & x\sqrt{\frac{4x^{2}-3}{x^{2}-1}}
\\
\frac{x(2x^{2}-1)}{\sqrt{1-x^{2}}} & \frac{-4x^{4}+2x^{2}+1}{2-2x^{2}} &
\frac{(1-2x^{2})\sqrt{3-4x^{2}}}{2(x^{2}-1)} \\
-x\sqrt{\frac{4x^{2}-3}{x^{2}-1}} & \frac{(1-2x^{2})\sqrt{3-4x^{2}}}{
2(x^{2}-1)} & \frac{1-2x^{2}}{2x^{2}-2}
\end{array}
\right)
\end{equation*}
and
\begin{equation*}
m_{x}(b)=
\begin{pmatrix}
2x^{2}-1 & -2x\sqrt{1-x^{2}} & 0 \\
2x\sqrt{1-x^{2}} & 2x^{2}-1 & 0 \\
0 & 0 & 1
\end{pmatrix}
.
\end{equation*}
The maps $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ are right
rotations of angle $\alpha $ around the axes $A^{-}$ and $B^{-}$ where $
x=A^{+}=B^{+}=\cos \frac{\alpha }{2}$. Moreover, $\rho _{x}^{\prime }(C)$ is
the identity. Hence $\rho _{x}^{\prime }:G(3_{1})\longrightarrow SO(3)$
factors through the group $C_{2}\ast C_{3}=|F,D;F^{3}=D^{2}=1|$. Thus $\rho
_{x}^{\prime }(F)=\rho _{x}^{\prime }(ab)$ is a $3$-fold rotation and $\rho
_{x}^{\prime }(D)=\rho _{x}^{\prime }(aba)$ is a $2$-fold rotation.
The angle $\omega $ between the axes of $\rho _{x}^{\prime }(a)$ and $\rho
_{x}^{\prime }(b)$ is given by
\begin{equation*}
\cos \omega =\frac{y}{u}=\frac{x^{2}-\frac{1}{2}}{1-x^{2}}
\end{equation*}
The geometrical meaning of the representation $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO(3)$ for $\frac{-\sqrt{3}}{2}<x<\frac{\sqrt{3}}{2
}$ is the following.
\begin{theorem}
\label{tgenso3}The image of $\rho _{x}^{\prime }:G(3_{1})\longrightarrow
SO(3)$ , $-\frac{\sqrt{3}}{2}<x<\frac{\sqrt{3}}{2}$, is isomorphic to the
holonomy of the 2-dimensional spherical cone-manifold $S_{\frac{2\pi }{2},
\frac{2\pi }{3},\alpha }^{2}$, where $x=\cos \frac{\alpha }{2}$ ($\frac{
10\pi }{6}>\alpha >\frac{2\pi }{6}$).
\begin{proof}
The 2-dimensional spherical cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3}
,\alpha }^{2}$ is the result of identifying the edges of the spherical
triangle $T(\frac{2\pi }{3},\frac{\alpha }{2},\frac{\alpha }{2})$ as in the
Figure \ref{s23alfa}. The holonomy of $S_{\frac{2\pi }{2},\frac{2\pi }{3}
,\alpha }^{2}$ is generated by rotations of angle $\alpha $ in the vertices $P$ and $Q$. The distance $r$ between $P$ and $Q$ is calculated by spherical
trigonometry:
\begin{equation*}
\cos r=\frac{\cos ^{2}\frac{\alpha }{2}+\cos \frac{2\pi }{3}}{\sin ^{2}\frac{
\alpha }{2}}=\frac{x^{2}-\frac{1}{2}}{1-x^{2}}=\cos \omega
\end{equation*}
Then $r=\omega $. Therefore the points $P$ and $Q$ are the intersection with
$S^{2}$ of the axes $A^{-}$ and $B^{-}$ of the two generators $\rho
_{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ of the subgroup $\rho
_{x}^{\prime }(G(3_{1})).$
\end{proof}
\end{theorem}
\begin{figure}
\caption{The spherical cone-manifold $S_{\pi ,2\pi /3,\alpha }^{2}$.}
\label{s23alfa}
\end{figure}
\begin{remark}
The point $R$ (resp. $M$ the mid-point between $P$ and $Q$) is the
intersection with $S^{2}$ of the axis of the $3$-fold (resp. $2$-fold)
rotation $\rho _{x}^{\prime }(F)=\rho _{x}^{\prime }(ab)$ (resp. $\rho
_{x}^{\prime }(D)=\rho _{x}^{\prime }(aba)$).
\end{remark}
\subsection{Case 2} Points in the no-label segments between region (1) and region (2.1).
\[(x,y)=(\pm \frac{\sqrt{3}}{2},\frac{1}{4})\Longleftrightarrow \left\{
1-x^{2}>0,(1-x^{2})^{2}=y^{2}\right\}.\]
There exists an almost-irreducible c-representation $\rho
_{x}:G(3_{1})\longrightarrow U_{1}\subset \left( \frac{-1,1}{\mathbb{C}}
\right) $ realizing $(x,y)$, unique up to conjugation in $U_{1}$, such that:
\begin{eqnarray}
\rho _{\pm \sqrt{3}/2}(a) &=&A=\frac{\pm \sqrt{3}}{2}+\frac{\sqrt{-1}}{2}IJ
\label{ecaso2} \\
\rho _{\pm \sqrt{3}/2}(b) &=&B=\frac{\pm \sqrt{3}}{2}-\frac{1}{2}I+\frac{1}{2
}J+\frac{\sqrt{-1}}{2}IJ \notag
\end{eqnarray}
This representation cannot be conjugated to any real representation. Under
the isomorphism $U_{1}\approx SL(2,\mathbb{C})$ we have
\begin{equation*}
\begin{array}{cccc}
\rho _{\pm \sqrt{3}/2}: & G(3_{1}) & \longrightarrow & SL(2,\mathbb{C}) \\
& a & \rightarrow & A=
\begin{pmatrix}
\frac{\pm \sqrt{3}}{2}+\frac{\sqrt{-1}}{2} & 0 \\
0 & \frac{\pm \sqrt{3}}{2}-\frac{\sqrt{-1}}{2}
\end{pmatrix}
\\
& b & \rightarrow & B=
\begin{pmatrix}
\frac{\pm \sqrt{3}}{2}+\frac{\sqrt{-1}}{2} & 0 \\
1 & \frac{\pm \sqrt{3}}{2}-\frac{\sqrt{-1}}{2}
\end{pmatrix}
\end{array}
\end{equation*}
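These matrices can be checked directly; the following short sketch (written for the choice $x=+\sqrt{3}/2$) verifies the relation $ABA=BAB$ and the identity $\rho (F)^{3}=\rho (D)^{2}=\rho (C)$:
\begin{verbatim}
import numpy as np

p = np.sqrt(3)/2 + 0.5j                    # p = exp(i pi/6)
A = np.array([[p, 0], [0, np.conj(p)]])
B = np.array([[p, 0], [1, np.conj(p)]])

print(np.allclose(A @ B @ A, B @ A @ B))   # the relation aba = bab holds
F, D = A @ B, A @ B @ A                    # images of F = ab and D = aba
print(np.allclose(np.linalg.matrix_power(F, 3),
                  np.linalg.matrix_power(D, 2)))   # F^3 = D^2 (both = -Id)
\end{verbatim}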
The composition of $\rho _{\sqrt{3}/2}$ with $c:U_{1}\rightarrow PSL(2,
\mathbb{C})$, defines the representations $\rho _{\sqrt{3}/2}^{\prime
}=c\circ \rho _{\sqrt{3}/2}:G(3_{1})\longrightarrow PSL(2,\mathbb{C})$ where
(up to conjugation with $w=\frac{-1}{z}$) $\rho _{\sqrt{3}/2}^{\prime }(a)$
is the $\frac{2\pi }{6}$-rotation of $\mathbb{C}P^{1}$around the point $0$:
\begin{equation*}
w=e^{i\frac{2\pi }{6}}z
\end{equation*}
and $\rho _{\sqrt{3}/2}^{\prime }(b)$ is the $\frac{2\pi }{6}$-rotation of $
\mathbb{C}P^{1}$around the point $i$:
\begin{equation*}
w=e^{i\frac{2\pi }{6}}z+e^{i\frac{\pi }{6}}
\end{equation*}
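One can check directly that these two affine rotations of $\mathbb{C}$ satisfy the defining relation of $G(3_{1})$; a short sketch with complex arithmetic:
\begin{verbatim}
import cmath

w = cmath.exp(1j * cmath.pi / 3)          # rotation by the angle 2*pi/6
tau = cmath.exp(1j * cmath.pi / 6)

def rho_a(z):                             # rotation about 0
    return w * z

def rho_b(z):                             # rotation about i
    return w * z + tau

z = 0.37 - 1.21j                          # an arbitrary test point
lhs = rho_a(rho_b(rho_a(z)))              # image of z under rho'(aba)
rhs = rho_b(rho_a(rho_b(z)))              # image of z under rho'(bab)
print(abs(lhs - rhs) < 1e-12)             # True: rho'(aba) = rho'(bab)
\end{verbatim}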
\begin{figure}
\caption{Case 2.}
\label{orbifoldS236}
\end{figure}
We see, in fact, that
\begin{theorem}
The image of $\rho _{\sqrt{3}/2}^{\prime }:G(3_{1})\longrightarrow PSL(2,\mathbb{C})$ acts on the Euclidean plane $\mathbb{C}$ as the holonomy of the
Euclidean crystallographic orbifold $S_{2,3,6}^{2}$.
\end{theorem}
\begin{remark}
The barycenter of the triangle $\{0,i,e^{i\frac{\pi }{6}}\}$ is the center
of the $3$-fold rotation $\rho _{\sqrt{3}/2}^{\prime
}(F)=\rho _{\sqrt{3}/2}^{\prime }(ab)$ and the point $\frac{i}{2}$ is the center of the $2$-fold rotation $
\rho _{\sqrt{3}/2}^{\prime }(D)=\rho _{\sqrt{3}/2}^{\prime }(aba)$. See Figure \ref
{orbifoldS236}(a).
\end{remark}
\begin{remark}
The image of $\rho _{\sqrt{3}/2}^{\prime }:G(3_{1})\longrightarrow PSL(2,\mathbb{C})$ can be interpreted as the holonomy of the Euclidean
crystallographic cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3},\frac{10\pi }{6}}^{2}$. See Figure \ref{orbifoldS236}(b).
\end{remark}
\subsection{Case 3}
Region (2.1).
\[x\in (-1,\frac{-\sqrt{3}}{2})\cup (\frac{\sqrt{3}}{2}
,1)\Longleftrightarrow \left\{ 1-x^{2}>0,(1-x^{2})^{2}<y^{2}\right\} .\]
There exists an irreducible c-representation $\rho
_{x}:G(3_{1})\longrightarrow SL(2,\mathbb{R})=U_{1}\subset \left( \frac{-1,1
}{\mathbb{R}}\right) $ realizing $(x,y)$, unique up to conjugation in $SL(2,
\mathbb{R})$, such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=A=x+\sqrt{1-x^{2}}i,\quad \sqrt{1-x^{2}}>0 \\
\rho _{x}(b)=B=x+\frac{2x^{2}-1}{2\sqrt{1-x^{2}}}i+\frac{1}{2}\sqrt{\frac{
-3+4x^{2}}{1-x^{2}}}j
\end{array}
\end{equation*}
The composition of $\rho _{x}$ with $c:SL(2,\mathbb{R})\rightarrow
SO^{0}(1,2)\cong Iso^{+}(\mathbb{H}^{2})$, where $c(X)$, $X\in SL(2,\mathbb{R})$, acts on $P\in H_{0}\cong E^{1,2}$ by conjugation, defines the
representation $\rho _{x}^{\prime }=c\circ \rho _{x}:G(3_{1})\longrightarrow
SO^{0}(1,2)$. In linear notation, where $\left\{ X,Y,Z\right\} $ is
the coordinate system associated to the basis $\left\{ -ij,j,i\right\} $, we have
\begin{equation*}
\begin{array}{l}
\rho _{x}^{\prime }(a)=m_{x}(a)\left(
\begin{array}{c}
X \\
Y \\
Z
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime }
\end{array}
\right) \\
\rho _{x}^{\prime }(b)=m_{x}(b)\left(
\begin{array}{c}
X \\
Y \\
Z
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime }
\end{array}
\right)
\end{array}
\end{equation*}
such that the matrices of $\rho _{x}^{\prime }(a)$ and $\rho
_{x}^{\prime }(b)$ are respectively:
\begin{equation*}
m_{x}(a)=\left(
\begin{array}{ccc}
2x^{2}-1 & -2x\sqrt{1-x^{2}} & 0 \\
2x\sqrt{1-x^{2}} & 2x^{2}-1 & 0 \\
0 & 0 & 1
\end{array}
\right)
\end{equation*}
and
\begin{equation*}
m_{x}(b)=
\begin{pmatrix}
2x^{2}-1 & \frac{x-2x^{3}}{\sqrt{1-x^{2}}} & x\sqrt{\frac{3-4x^{2}}{x^{2}-1}}
\\
\frac{x(2x^{2}-1)}{\sqrt{1-x^{2}}} & \frac{1+2x^{2}-4x^{4}}{2-2x^{2}} &
\frac{\sqrt{4x^{2}-3}(2x^{2}-1)}{2(1-x^{2})} \\
x\sqrt{\frac{3-4x^{2}}{x^{2}-1}} & \frac{\sqrt{4x^{2}-3}(2x^{2}-1)}{
2(1-x^{2})} & \frac{1-2x^{2}}{2x^{2}-2}
\end{pmatrix}.
\end{equation*}
The maps $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ are right
(spherical) rotations on $H_{0}\cong E^{1,2}$ of angle $\alpha $ around the
time-like axes $A^{-}$ and $B^{-}$ where $x=A^{+}=B^{+}=\cos \frac{\alpha }{2
}$. See Figure \ref{fambmi}.
\begin{figure}
\caption{Case 3.}
\label{fambmi}
\end{figure}
Moreover, $\rho _{x}^{\prime }(C)$ is the identity. Hence $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO^{0}(1,2)$ factors through the group $C_{2}\ast
C_{3}=|F,D;F^{3}=D^{2}=1|$. Thus $\rho _{x}^{\prime }(F)=\rho _{x}^{\prime
}(ab)$ is a $3$-fold rotation and $\rho _{x}^{\prime }(D)=\rho _{x}^{\prime
}(aba)$ is a $2$-fold rotation.
The distance $d$ (measured in the hyperbolic plane) between the axes of $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ is given by
\begin{equation*}
\cosh d=\frac{y}{u}=\frac{x^{2}-\frac{1}{2}}{-x^{2}+1}
\end{equation*}
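Here the second equality uses the character-variety relation $2y=2x^{2}-1$ (cf. \eqref{epoly1}) together with $u=N(A^{-})=N(B^{-})=1-x^{2}$:
\begin{equation*}
\cosh d=\frac{y}{u}=\frac{\frac{2x^{2}-1}{2}}{1-x^{2}}=\frac{x^{2}-\frac{1}{2}}{1-x^{2}}.
\end{equation*}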
The geometrical meaning of the representation $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO^{0}(1,2)\cong Iso^{+}(\mathbb{H}^{2})$ for $x\in (\frac{\sqrt{3}}{2},1)$ is the following.
\begin{figure}
\caption{The hyperbolic cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3},\alpha }^{2}$.}
\label{s23alfahy}
\end{figure}
\begin{theorem}
\label{tgenso12}The image of $\rho _{x}^{\prime }$, $\frac{\sqrt{3}
}{2}<x<1$, is a subgroup of $SO^{0}(1,2)$ isomorphic to the holonomy of the
2-dimensional hyperbolic cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3}
,\alpha }^{2}$, where $x=\cos \frac{\alpha }{2}$ ($\frac{2\pi }{6}>\alpha >0$
).
\begin{proof}
The 2-dimensional hyperbolic cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3
},\alpha }^{2}$ is the result of identifying the edges of the hyperbolic
triangle $T(\frac{2\pi }{3},\frac{\alpha }{2},\frac{\alpha }{2})$ as in
Figure \ref{s23alfahy}. The holonomy of $S_{\frac{2\pi }{2},\frac{2\pi }{3}
,\alpha }^{2}$ is generated by rotations of angle $\alpha $ in the vertices $
P$ and $Q$. The distance $d^{\prime }$ between $P$ and $Q$ is calculated by
hyperbolic trigonometry:
\begin{equation*}
\cosh d^{\prime }=\frac{\cos ^{2}\frac{\alpha }{2}+\cos \frac{2\pi }{3}}{
\sin ^{2}\frac{\alpha }{2}}=\frac{x^{2}-\frac{1}{2}}{-x^{2}+1}=\cosh d
\end{equation*}
Thus $d^{\prime }=d$. Therefore the points $P$ and $Q$ are the intersections
with the upper sheet of the two-sheeted hyperboloid (the model of the
2-dimensional hyperbolic plane) of the axes $A^{-}$ and $B^{-}$ of the two
generators of the subgroup $\rho _{x}^{\prime }(G(3_{1}))$.
\end{proof}
\end{theorem}
\begin{remark}
The point $R$ (resp. $M$ the mid-point between $P$ and $Q$) is the
intersection with $\mathbb{H}^{2}$ of the axis of the $3$-fold (resp. $2$
-fold) rotation $\rho _{x}^{\prime }(F)=\rho _{x}^{\prime }(ab)$ (resp. $\rho _{x}^{\prime }(D)=\rho _{x}^{\prime }(aba)$).
\end{remark}
\subsection{Case 4}
Segment (2.5).
\[(x,y)=(\pm 1,\frac{1}{2})\Longleftrightarrow \left\{
1-x^{2}=0,(1-x^{2})^{2}<y^{2}\right\}. \]
There exists an irreducible c-representation $\rho
_{x}:G(3_{1})\longrightarrow SL(2,\mathbb{R})=U_{1}\subset \left( \frac{-1,1}{\mathbb{R}}\right) $ realizing $(x,y)$, unique up to conjugation in $SL(2,\mathbb{R})$, such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=A=\pm 1+i+j \\
\rho _{x}(b)=B=\pm 1+\frac{1}{4}(i-j)
\end{array}
\end{equation*}
The composition of $\rho _{x}$ with $c:SL(2,\mathbb{R})\rightarrow
SO^{0}(1,2)\cong Iso^{+}(\mathbb{H}^{2})$, defines the representation $\rho
_{x}^{\prime }=c\circ \rho _{x}:G(3_{1})\longrightarrow SO^{0}(1,2)$ such
that the matrices of $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$
are respectively:
\begin{equation*}
m_{x}(a)=m(-1,1;\pm 1,1,1,0)=\left(
\begin{array}{ccc}
1 & -2 & 2 \\
2 & -1 & 2 \\
2 & -2 & 3
\end{array}
\right)
\end{equation*}
and
\begin{equation*}
m_{x}(b)=m(-1,1;\pm 1,\frac{1}{4},-\frac{1}{4},0)=
\begin{pmatrix}
1 & -\frac{1}{2} & -\frac{1}{2} \\
\frac{1}{2} & \frac{7}{8} & -\frac{1}{8} \\
-\frac{1}{2} & \frac{1}{8} & \frac{9}{8}
\end{pmatrix}
\end{equation*}
where $\left\{ X,Y,Z\right\} $ is the coordinate system associated to the
basis $\left\{ -ij,j,i\right\} $. The maps $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ are parabolic rotations on $H_{0}\cong E^{1,2}$
around the light-like axes $A^{-}$ and $B^{-}$. See Figure \ref{fambmiii}.
\begin{figure}
\caption{Case 4. }
\label{fambmiii}
\end{figure}
Therefore the image of $\rho
_{1}^{\prime }:G(3_{1})\longrightarrow Iso^{+}(\mathbb{H}^{2})$ is conjugate
to the action of the modular group $PSL(2,\mathbb{Z})$ in $\mathbb{H}^{2}$.
Thus
\begin{figure}
\caption{The hyperbolic orbifold $S_{2,3,\infty }^{2}$.}
\label{s23alfay4}
\end{figure}
\begin{theorem}
The image of $\rho _{1}^{\prime }:G(3_{1})\longrightarrow SO^{0}(1,2)$ acts
on the hyperbolic plane $\mathbb{H}^{2}$ as the holonomy of the hyperbolic
orbifold $S_{2,3,\infty }^{2}$.
\end{theorem}
\subsection{Case 5}
Region (2.2).
\[x\in (-\infty ,-1)\cup (1,\infty )\Longleftrightarrow \left\{
1-x^{2}<0,(1-x^{2})^{2}<y^{2}\right\}\quad ( y>0).\]
There exists an irreducible c-representation $\rho
_{x}:G(3_{1})\longrightarrow SL(2,\mathbb{R})=U_{1}\subset \left( \frac{-1,1}{\mathbb{R}}\right) $ realizing $(x,y)$, unique up to conjugation in $SL(2,\mathbb{R})$, such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=A=x+\sqrt{x^{2}-1}j,\quad \sqrt{x^{2}-1}>0 \\
\rho _{x}(b)=B=x-\frac{1}{2}\sqrt{\frac{4x^{2}-3}{x^{2}-1}}i-\frac{2x^{2}-1}{
2\sqrt{x^{2}-1}}j
\end{array}
\end{equation*}
The composition of $\rho _{x}$ with $c:SL(2,\mathbb{R})\rightarrow
SO^{0}(1,2)\cong Iso^{+}(\mathbb{H}^{2})$, defines the representation $\rho
_{x}^{\prime }=c\circ \rho _{x}:G(3/1)\longrightarrow SO^{0}(1,2)$ such that
the matrices of $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ are
respectively:
\begin{equation*}
m_{x}(a)=\left(
\begin{array}{ccc}
2x^{2}-1 & 0 & 2x\sqrt{x^{2}-1} \\
0 & 1 & 0 \\
2x\sqrt{x^{2}-1} & 0 & 2x^{2}-1
\end{array}
\right)
\end{equation*}
\begin{equation*}
m_{x}(b)=
\begin{pmatrix}
2x^{2}-1 & x\sqrt{\frac{-3+4x^{2}}{x^{2}-1}} & -\frac{x(2x^{2}-1)}{\sqrt{
x^{2}-1}} \\
-x\sqrt{\frac{-3+4x^{2}}{x^{2}-1}} & \frac{1-2x^{2}}{2x^{2}-2} & \frac{\sqrt{
-3+4x^{2}}(2x^{2}-1)}{2(x^{2}-1)} \\
-\frac{x(2x^{2}-1)}{\sqrt{x^{2}-1}} & \frac{\sqrt{-3+4x^{2}}(2x^{2}-1)}{
2(x^{2}-1)} & \frac{1+2x^{2}-4x^{4}}{2-2x^{2}}
\end{pmatrix}
\end{equation*}
The maps $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ are
hyperbolic rotations on $H_{0}\cong E^{1,2}$ moving $\delta $ along the
polars of the space-like vectors $A^{-}$ and $B^{-}$ where $
x=A^{+}=B^{+}=\cosh \frac{\delta }{2}$. See Figure \ref{fcaseii}.
\begin{figure}
\caption{Case 5.}
\label{fcaseii}
\end{figure}
Moreover, $\rho _{x}^{\prime }(C)$ is the identity. Hence $\rho
_{x}^{\prime }:G(3_{1})\longrightarrow SO^{0}(1,2)$ factors through the
group $C_{2}\ast C_{3}=|F,D;F^{3}=D^{2}=1|$. Thus $\rho _{x}^{\prime
}(F)=\rho _{x}^{\prime }(ab)$ is a $3$-fold rotation and $\rho _{x}^{\prime
}(D)=\rho _{x}^{\prime }(aba)$ is a $2$-fold rotation.
The distance $d$ (measured in the hyperbolic plane) between the polars of
the axes of $\rho _{x}^{\prime }(a)$ and $\rho _{x}^{\prime }(b)$ is given
by
\begin{equation*}
\cosh d=\frac{y}{x^{2}-1}=\frac{x^{2}-\frac{1}{2}}{x^{2}-1}
\end{equation*}
The geometrical meaning of the representation $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO^{0}(1,2)\cong Iso^{+}(\mathbb{H}^{2})$ for $x\in (1,\infty )$ is the following.
\begin{figure}
\caption{The hyperbolic orbifold $O_{2,3,\delta }^{2}$.}
\label{s23alfay5}
\end{figure}
\begin{theorem}
\label{tgenso122}The image of $\rho _{x}^{\prime }$, $1<x<\infty $,
is a subgroup of $SO^{0}(1,2)$ isomorphic to the holonomy of the
2-dimensional hyperbolic orbifold $O_{2,3,\delta }^{2}$, where $O^{2}$ is an
open disk, $\delta $ is the length of the closed geodesic at the end of $O^{2}$, and $x=\cosh \frac{\delta }{2}$.
\begin{proof}
The 2-dimensional hyperbolic cone-manifold $O_{2,3,\delta }^{2}$ is the
result of identifying the edges of the hyperbolic triangle $T(\frac{2\pi }{3},\frac{\delta }{2},\frac{\delta }{2})$ as in Figure \ref{s23alfay5}, where $P$ and $Q$ are ultrainfinite points such that the lengths of the segments $T(\frac{2\pi }{3},\frac{\delta }{2},\frac{\delta }{2})\cap P^{\perp }$ and $T(\frac{2\pi }{3},\frac{\delta }{2},\frac{\delta }{2})\cap Q^{\perp }$ are
both $\frac{\delta }{2}$ ($P^{\perp }$ denotes the polar of $P$). The
holonomy of $O_{2,3,\delta }^{2}$ is generated by hyperbolic rotations of
length $\delta $ around the vertices $P$ and $Q$. The distance $r$ between $P^{\perp }$ and $Q^{\perp }$ is calculated by hyperbolic trigonometry:
\begin{equation*}
\cosh r=\frac{\cosh ^{2}\frac{\delta }{2}+\cos \frac{2\pi }{3}}{\sinh ^{2}
\frac{\delta }{2}}=\frac{x^{2}-\frac{1}{2}}{x^{2}-1}=\cosh d
\end{equation*}
Hence $r=d$.
\end{proof}
\end{theorem}
\begin{remark}
The point $R$ (resp. the mid-point between $P^{\perp }$ and $Q^{\perp }$) is
the intersection with $\mathbb{H}^{2}$ of the axis of the $3$-fold (resp. $2$-fold) rotation.
\end{remark}
\subsection{Some particular cases}
\begin{description}
\item[Case 1] We have seen that the image of $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO(3)$, $0\leqslant x<\frac{\sqrt{3}}{2}$, is
isomorphic to the holonomy of the 2-dimensional spherical cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3},\alpha }^{2}$. The orbifolds among these
cone-manifolds are $S_{232}$, $S_{233}$, $S_{234}$, $S_{235}$. The
fundamental groups of these orbifolds are the group of direct symmetries of
the equilateral triangle (in $E^{3}$); the tetrahedron; the octahedron; and
the icosahedron, respectively. These groups are isomorphic to, respectively,
$\Sigma _{3}$, $A_{4}$, $\Sigma _{4}$ and $A_{5}$.
\item[Case 2] The image of the almost-irreducible c-representation $\rho_{\frac{\sqrt{3}}{2}}^{\prime }:G(3_{1})\longrightarrow PSL(2,\mathbb{C})$
acts on the Euclidean plane $\mathbb{C}$ as the holonomy of the Euclidean
crystallographic orbifold $S_{236}$.
\item[Case 3] The image of $\rho _{x}^{\prime }$, $\frac{\sqrt{3}}{2}<x<1$,
is a subgroup of $SO^{o}(1,2)$ isomorphic to the holonomy of the
2-dimensional hyperbolic cone-manifold $S_{\frac{2\pi }{2},\frac{2\pi }{3}
,\alpha }^{2}$, where $x=\cos \frac{\alpha }{2}$. The orbifolds among these
cone-manifolds are $S_{23p}$, for all $p\geq 7$. The orbifold $S_{237}$ is
the (orientable) hyperbolic orbifold of minimal area.
\item[Case 4] The image of $\rho _{1}^{\prime }:G(3_{1})\longrightarrow
SO^{0}(1,2)$ acts on the hyperbolic plane $\mathbb{H}^{2}$ as the holonomy
of the hyperbolic open orbifold (with finite volume) $S_{23\infty }$.
\item[Case 5] The image of $\rho _{x}^{\prime }$, $1<x<\infty $, is a
subgroup of $SO^{0}(1,2)$ isomorphic to the holonomy of the 2-dimensional
open hyperbolic orbifold (with infinite volume) $O_{23\delta }$ where $O$ is
an open disk and $\delta $ is the length of the closed geodesic at the end
of $O$, where $x=\cosh \frac{\delta }{2}$.
\end{description}
\begin{theorem}
\label{tsubgrupolibre} Every image of $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO^{0}(1,2)$, where $x\geq 1$, contains a two-generator
free subgroup of finite index.
\end{theorem}
\begin{proof}
Theorem \ref{tgenso122} shows that the image of $\rho _{x}^{\prime
}:G(3_{1})\longrightarrow SO^{0}(1,2)$, $1<x<\infty $, is a subgroup of $
SO^{0}(1,2)$ isomorphic to the holonomy of the 2-dimensional open hyperbolic
orbifold (with infinite volume) $O_{23\delta }$ where $O$ is an open disk
and $\delta $ is the length of the closed geodesic at the end of $O$, where $x=\cosh \frac{\delta }{2}$; see Figure \ref{s23alfay5}. The image of $\rho
_{1}^{\prime }:G(3_{1})\longrightarrow SO^{0}(1,2)$, is a subgroup of $SO^{0}(1,2)$ isomorphic to the holonomy of the 2-dimensional open hyperbolic
orbifold (with finite volume) $S_{\frac{2\pi }{2},\frac{2\pi }{3},\infty
}^{2}$ with underlying space a punctured 2-sphere; see Figure \ref{s23alfay4}.
Let us denote $S_{\frac{2\pi }{2},\frac{2\pi }{3},\infty }^{2}$ by $O_{230}$
to unify notation.
Consider the homomorphism
\begin{equation*}
\begin{array}{cccc}
\Omega : & \pi _{1}^{o}(O_{23\delta })=|F,D;F^{3}=D^{2}=1| & \longrightarrow
& \Sigma _{6} \\
& F & \rightarrow & (123)(456) \\
& D & \rightarrow & (15)(24)(36)
\end{array}
\end{equation*}
where $\delta \geq 0$. It defines a covering of orbifolds
\begin{equation*}
p_{\Omega }:O_{2\delta ,2\delta ,2\delta }\overset{6:1}{\longrightarrow }
O_{23\delta }
\end{equation*}
where $O_{2\delta ,2\delta ,2\delta }$ is a 2-sphere with 3 holes and the
length of the closed geodesic at every hole is $2\delta $ if $\delta >0$ and
$O_{0,0,0}$ is a 3-punctured 2-sphere; see Figure \ref{fo2d2d2d}.
\begin{figure}
\caption{The hyperbolic manifold $O_{2\delta ,2\delta ,2\delta }$.}
\label{fo2d2d2d}
\end{figure}
The fundamental group of $O_{2\delta
,2\delta ,2\delta }$, $\pi _{1}^{o}(O_{2\delta ,2\delta ,2\delta })$, is a
free subgroup of $\rho _{x}^{\prime }(G(3_{1}))$ generated by $g_{1}=F^{-2}DFD^{-1}=FDFD$ and $g_{2}=F^{-1}DF^{2}D^{-1}=F^{-1}DF^{-1}D$. We
can write the two generators of $\pi _{1}^{o}(O_{2\delta ,2\delta ,2\delta
}) $ in terms of $\rho _{x}^{\prime }(a)=A$ and $\rho _{x}^{\prime }(b)=B$ ,
because $F=AB$ and $D=ABA=BAB$. Then
\begin{eqnarray*}
g_{1} &=&FDFD=\underset{B^{-1}}{\underbrace{ABABA}}\,\underset{B^{-1}}{
\underbrace{ABABA}}=B^{-1}B^{-1} \\
g_{2} &=&F^{-1}DF^{-1}D=B^{-1}A^{-1}ABAB^{-1}A^{-1}ABA=AA
\end{eqnarray*}
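The key simplification used twice above is that in $C_{2}\ast C_{3}$ one has $F^{3}=(AB)^{3}=1$, so that
\begin{equation*}
ABABA=(AB)^{3}B^{-1}=B^{-1},
\end{equation*}
which is exactly what the underbraces indicate.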
We conclude that the subgroup of $\rho _{x}^{\prime }(G(3_{1}))$ generated
by $\{\rho _{x}^{\prime }(b^{-2}),$ $\rho _{x}^{\prime }(a^{2})\}$ is an
index 6 free subgroup of rank 2 generated by $\{B^{-2},A^{2}\}$.
\end{proof}
\section{The representations of $G(3_{1})$ in $A(H)$}
In this section we obtain all the representations of the trefoil knot group
in the affine isometry group $A(H)$ of a quaternion algebra $H$. They
include all the representations in the 3-dimensional affine Euclidean
isometries $\mathcal{E}(E^{3})$ and all the representations in the
3-dimensional affine Lorentz isometries $\mathcal{L}(E^{1,2})$. The
representations in the affine isometry group of a quaternion algebra are
affine deformations of representations in the group of unit quaternions of $H$,
computed in the previous section.
Let
\begin{equation*}
\begin{array}{llll}
\rho : & G & \longrightarrow & A(H) \\
& a & \rightarrow & \rho (a)=(sA^{-},A) \\
& b & \rightarrow & \rho (b)=(sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
be a representation of $G$ in the affine group of a quaternion algebra $H$.
The composition of $\rho $ with the projection $\pi _{2}$ on the second
factor of $A(H)=H_{0}\rtimes U_{1}$ gives the linear part of $\rho $, which
is a representation $\widehat{\rho }$ in the group $U_{1}$ of unit quaternions.
\begin{equation*}
\begin{array}{llll}
\widehat{\rho }=\pi _{2}\circ \rho : & G & \longrightarrow & U_{1} \\
& a & \rightarrow & A \\
& b & \rightarrow & B
\end{array}
\end{equation*}
The composition of $\rho $ with the projection $\pi _{1}$ on the first
factor of $A(H)=H_{0}\rtimes U_{1}$ gives the translational part of $\rho :$
\begin{equation*}
\begin{array}{llll}
v_{\rho }=\pi _{1}\circ \rho : & G & \longrightarrow & H_{0} \\
& a & \rightarrow & sA^{-} \\
& b & \rightarrow & sB^{-}+(A^{-}B^{-})^{-}
\end{array}
\end{equation*}
Therefore $\widehat{\rho }$ corresponds to a point in the character variety $V(\mathcal{I}_{G})$ of representations of $G$ and it is determined by the
characters $x$ and $y$. We are interested in the classes of affine
deformations up to conjugation in the equiform group $\mathcal{E}
q(H)=H_{0}\rtimes U$, where $U$ is the group of invertible elements. Each
of these classes is determined by the parameter $s$.
The relation \eqref{erel2} in the case of the Trefoil knot group shows that
the parameter $s$ satisfies the equation
\begin{equation}
4x^{2}+4sx-3=0 \label{epoly2}
\end{equation}
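Solving \eqref{epoly2} for $s$ (which requires $x\neq 0$) gives
\begin{equation*}
s=\frac{3-4x^{2}}{4x},
\end{equation*}
the value that will be used repeatedly in the rest of this section.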
\begin{theorem}
Every representation $\rho _{x}:G(3/1)\longrightarrow
A(H)=H_{0}\rtimes U_{1}$ defined by
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=(sA^{-},A) \\
\rho _{x}(b)=(sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
where $A$ and $B$ are independent unit quaternions, factors through the
group $C_{2}\ast C_{3}.$
\end{theorem}
\begin{proof}
The group $G(3_{1})$ also has the presentation $\left\vert
F,D;F^{3}=D^{2}\right\vert $ (recall \eqref{genfd}).
The group $C_{2}\ast C_{3}$ is a quotient of $G(3_{1})=\left\vert
F,D;F^{3}=D^{2}\right\vert $:
\begin{equation*}
q:G(3/1)=\left\vert F,D;F^{3}=D^{2}\right\vert \longrightarrow C_{2}\ast
C_{3}=\left\vert F,D;F^{3}=D^{2}=1\right\vert
\end{equation*}
Then to prove that there exists a homomorphism
\begin{equation*}
\lambda _{x}:C_{2}\ast C_{3}=\left\vert F,D;F^{3}=D^{2}=1\right\vert
\longrightarrow A(H)
\end{equation*}
such that $\lambda _{x}\circ q=\rho _{x}$, it is enough to show that $\rho
_{x}(D^{2})=(0,1)$. Recall that the element $C=D^{2}=F^{3}$ belongs to the
center of the group $G(3_{1})$, therefore $\rho _{x}(C)$ commutes with every
element of $\rho _{x}(G(3_{1})),$ in particular with $\rho _{x}(a)$ and $
\rho _{x}(b)$. Let us consider first the linear part $\widehat{\rho }_{x}=\pi
_{2}\circ \rho _{x}:G(3_{1})\longrightarrow U_{1}$. Because $A$ and $B$ are
independent, $A^{-}\neq \pm B^{-}$, the only element of $U_{1}$ commuting
with $A$ and $B$ is the identity 1. Therefore $\widehat{\rho }_{x}(C)=1$,
and $\rho _{x}(C)$ is a translation. If $\rho _{x}(C)=(v,1)$ and $\rho
_{x}(a)=(sA^{-},A)$ commute then
\begin{eqnarray*}
(v,1)(sA^{-},A) &=&(sA^{-},A)(v,1)\Longrightarrow
(v+sA^{-},A)=(sA^{-}+A\circ v,A) \\
&\Longrightarrow &v=A\circ v=AvA^{-1}
\end{eqnarray*}
we deduce that $v=0$ or $v$ and $A^{-}$ are dependent. An analogous
computation with $\rho _{x}(b)$ yields $v=0$ or $v$ and $B^{-}$ are
dependent. As $A$ and $B$ are independent, $v=0.$ We have proved that $\rho
_{x}(C)=(0,1)$.
\end{proof}
Moreover, every homomorphism $\lambda :C_{2}\ast C_{3}=\left\vert
F,D;F^{3}=D^{2}=1\right\vert \longrightarrow A(H)$ induces a representation $
\rho :G(3/1)\longrightarrow A(H)$ such that $\rho =\lambda \circ q$.
Therefore we can apply the results on representations of $G(3_{1})$ in $A(H)$
to representations of $C_{2}\ast C_{3}$ in $A(H).$
\begin{corollary}
\label{cor1}Every non cyclic subgroup $S$ in $A(H)$ generated by two
isometries $\mu \neq 1,$ $\upsilon \neq 1$ such that $\mu ^{2}=\upsilon
^{3}=1$ is necessarily generated by two conjugate elements in $A(H)$.
\end{corollary}
\begin{proof}
The subgroup $S$ is the image of a representation
\begin{equation*}
\begin{array}{cccc}
\lambda : & C_{2}\ast C_{3}=\left\vert F,D;F^{3}=D^{2}=1\right\vert &
\longrightarrow & A(H) \\
& D & \rightarrow & \mu \\
& F & \rightarrow & \upsilon
\end{array}
\end{equation*}
Then it defines a representation $\rho =\lambda \circ
q:G(3/1)\longrightarrow A(H)$ with the same image $S.$ As $G(3_{1})$ is
generated by two conjugate elements $a$ and $b$, the group $S$ is generated by
the two conjugate elements $\rho (a)$ and $\rho (b)$.
\end{proof}
\subsection{The Hamilton quaternion algebra $\mathbb{H}=\left( \frac{-1,-1}{\mathbb{R}}\right) $}
Let us study first the affine deformations of representations corresponding to
points in the character variety belonging to Case 1: $x\in (\frac{-\sqrt{3}}{
2},\frac{\sqrt{3}}{2})\Longleftrightarrow \left\{
1-x^{2}>0,(1-x^{2})^{2}>y^{2}\right\} ,$ where the quaternion algebra is the
Hamilton quaternion algebra $\mathbb{H}=\left( \frac{-1,-1}{\mathbb{R}}
\right) $. Therefore $U_{1}=S^{3}$, $H_{0}=E^{3}$.
Recall that in this case, the geometrical meaning of parameters $
x,y,u,s,\sigma $ is the following.
\begin{figure}
\caption{The geometrical meaning of parameters in $E^{3}$.}
\label{fdelta}
\end{figure}
Let $\alpha $ be the angle of the right rotation around the axis $A^{-}$
corresponding to the action of $A=A^{+}+A^{-}$ in $H_{0}\cong E^{3}$. Let $
\omega $ be the angle defined by $A^{-}$ and $B^{-}$. See Figure \ref{fdelta}.
Then
\begin{eqnarray}
x &=&A^{+}=B^{+}=\cos \frac{\alpha }{2} \notag \\
u &=&N(A^{-})=N(B^{-})=1-x^{2}=\sin ^{2}\frac{\alpha }{2} \notag \\
y &=&-(A^{-}B^{-})^{+}=\left\langle A^{-},B^{-}\right\rangle =u\cos \omega
\label{eomega}
\end{eqnarray}
The shift $\sigma $ of the element $(sA^{-},A)$ or any element of $A(H)$
conjugate to $(sA^{-},A)$ is given by
\begin{equation}
\sigma =s\sqrt{u}=s\sqrt{1-x^{2}} \label{esigma}
\end{equation}
The distance $\delta $ between the axes of $(sA^{-},A)$ and $
(sB^{-}+(A^{-}B^{-})^{-},B)$ is half of the length of the vector $
(A^{-}B^{-})^{-}$. See Figure \ref{fdelta}.
\begin{equation}
N((A^{-}B^{-})^{-})=-y^{2}+u^{2}\quad \Longrightarrow \quad \delta =\frac{
\sqrt{u^{2}-y^{2}}}{2} \label{edelta}
\end{equation}
The following theorem gives the affine deformations of the subgroups of $
SO(3,\mathbb{R})$ which are images of representations of $G(3_{1})$, or
equivalently the affine deformations of the holonomies of the spherical
cone-manifolds $S_{\frac{2\pi }{2},\frac{2\pi }{3},\alpha }^{2}$, $\frac{2\pi
}{6}<\alpha <\frac{10\pi }{6}$.
\begin{theorem}
\label{tgeneucl} For each $x\in (-\sqrt{3}/2,\sqrt{3}/2),$ there exists a
representation $\rho _{x}:G(3/1)\longrightarrow \mathcal{E}(E^{3})$ unique
up to conjugation in $\mathcal{E}(E^{3})$ such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=(sA^{-},A) \\
\rho _{x}(b)=(sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
where
\begin{equation*}
\begin{array}{l}
A=x+\frac{2x^{2}-1}{2\sqrt{1-x^{2}}}i+\frac{1}{2}\sqrt{\frac{3-4x^{2}}{
1-x^{2}}}j \\
B=x+\sqrt{1-x^{2}}i,\quad \sqrt{1-x^{2}}>0
\end{array}
\end{equation*}
In affine linear notation, where $\left\{ X,Y,Z\right\} $ is the coordinate
system associated to the basis $\left\{ -ij,j,i\right\} $
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=M_{x}(a)\left(
\begin{array}{c}
X \\
Y \\
Z \\
1
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime } \\
1
\end{array}
\right) \\
\rho _{x}(b)=M_{x}(b)\left(
\begin{array}{c}
X \\
Y \\
Z \\
1
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime } \\
1
\end{array}
\right)
\end{array}
\end{equation*}
where
\begin{equation}
M_{x}(a)=\left(
\begin{array}{cccc}
2x^{2}-1 & \frac{x-2x^{3}}{\sqrt{1-x^{2}}} & x\sqrt{\frac{4x^{2}-3}{x^{2}-1}}
& 0 \\
\frac{x(2x^{2}-1)}{\sqrt{1-x^{2}}} & \frac{-4x^{4}+2x^{2}+1}{2-2x^{2}} &
\frac{(1-2x^{2})\sqrt{3-4x^{2}}}{2(x^{2}-1)} & \frac{\sqrt{\left(
3-4x^{2}\right) ^{3}}}{8x\sqrt{1-x^{2}}} \\
-x\sqrt{\frac{4x^{2}-3}{x^{2}-1}} & \frac{(1-2x^{2})\sqrt{3-4x^{2}}}{
2(x^{2}-1)} & \frac{1-2x^{2}}{2x^{2}-2} & \frac{-8x^{4}+10x^{2}-3}{8x\sqrt{
1-x^{2}}} \\
0 & 0 & 0 & 1
\end{array}
\right) \label{emxa}
\end{equation}
\begin{equation}
M_{x}(b)=
\begin{pmatrix}
2x^{2}-1 & -2x\sqrt{1-x^{2}} & 0 & -\frac{1}{2}\sqrt{3-4x^{2}} \\
2x\sqrt{1-x^{2}} & 2x^{2}-1 & 0 & 0 \\
0 & 0 & 1 & \frac{(3-4x^{2})\sqrt{1-x^{2}}}{4x} \\
0 & 0 & 0 & 1
\end{pmatrix}
\label{emxb}
\end{equation}
The distance $\delta $ and the angle $\omega $ between the axes of $\rho
_{x}(a)$ and $\rho _{x}(b)$ are given by
\begin{eqnarray*}
\delta &=&\frac{\sqrt{3-4x^{2}}}{4} \\
\cos \omega &=&\frac{2x^{2}-1}{2-2x^{2}}
\end{eqnarray*}
The shift $\sigma $ is
\begin{equation*}
\sigma =(\frac{3}{4x}-x)\sqrt{1-x^{2}}
\end{equation*}
\begin{proof}
The values of $A$ and $B$ are given by \eqref{ecase1}. From the polynomial \eqref{epoly2}
we obtain the value of $s$
\begin{equation*}
s=\frac{3-4x^{2}}{4x}
\end{equation*}
The associated representation $\rho _{x}:G(3/1)\longrightarrow \mathcal{E}
(E^{3})$ such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=(sA^{-},A) \\
\rho _{x}(b)=(sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
is unique up to conjugation in $\mathcal{E}(E^{3})$ (\cite[Th.5]{HLM2009}) and $(A^{-}B^{-})^{-}=-\sqrt{(1-x^{2})^{2}-y^{2}}ij$. The matrices $M_{x}(a)$ and $M_{x}(b)$
as affine transformations are given by
\begin{equation*}
M_{x}(a)=
\begin{pmatrix}
m_{x}(-1,-1;x,\frac{y}{\sqrt{1-x^{2}}},\frac{\sqrt{(1-x^{2})^{2}-y^{2}}}{
\sqrt{1-x^{2}}},0) &
\begin{array}{c}
0 \\
s\frac{\sqrt{(1-x^{2})^{2}-(y)^{2}}}{\sqrt{1-x^{2}}} \\
s\frac{y}{\sqrt{1-x^{2}}}
\end{array}
\\
\begin{array}{ccc}
\quad 0\quad & \quad 0\quad & \quad 0\quad
\end{array}
& 1
\end{pmatrix}
\end{equation*}
\begin{equation*}
M_{x}(b)=
\begin{pmatrix}
m_{x}(-1,-1;x,\sqrt{1-x^{2}},0,0) &
\begin{array}{c}
-\sqrt{(1-x^{2})^{2}-y^{2}} \\
0 \\
s\sqrt{1-x^{2}}
\end{array}
\\
\begin{array}{ccc}
\quad 0\quad & \quad 0\quad & \quad 0\quad
\end{array}
& 1
\end{pmatrix}
\end{equation*}
The values of $\delta $, $\cos \omega $ and $\sigma $ are given by \eqref{edelta}, \eqref{eomega} and \eqref{esigma}.
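Explicitly, substituting $u=1-x^{2}$, $y=\frac{2x^{2}-1}{2}$ and $s=\frac{3-4x^{2}}{4x}$ into \eqref{edelta}, \eqref{eomega} and \eqref{esigma} one obtains
\begin{eqnarray*}
u^{2}-y^{2} &=&(1-x^{2})^{2}-\frac{(2x^{2}-1)^{2}}{4}=\frac{3-4x^{2}}{4}\quad \Longrightarrow \quad \delta =\frac{\sqrt{3-4x^{2}}}{4}, \\
\cos \omega &=&\frac{y}{u}=\frac{2x^{2}-1}{2-2x^{2}},\qquad \sigma =s\sqrt{1-x^{2}}=\left( \frac{3}{4x}-x\right) \sqrt{1-x^{2}},
\end{eqnarray*}
which are the values stated in the theorem.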
\end{proof}
\end{theorem}
\begin{remark}
\label{rro0}The representation
\begin{equation*}
\begin{array}{cccc}
\widehat{\rho }_{0}: & G(3/1) & \longrightarrow & S^{3} \\
& a & \rightarrow & -\frac{1}{2}i+\frac{\sqrt{3}}{2}j \\
& b & \rightarrow & i
\end{array}
\end{equation*}
does not have any affine deformation, because the polynomial \eqref{epoly2} is
never zero for $x=0$.
\end{remark}
The following theorem states the classification up to conjugation of the non
cyclic subgroups of $\mathcal{E}(E^{3})$ generated by two isometries $\mu
\neq 1,$ $\upsilon \neq 1$ such that $\mu ^{2}=\upsilon ^{3}=1$.
\begin{theorem}
\label{tposiblesH} Every non cyclic subgroup $S$ in the group of direct
Euclidean isometries $\mathcal{E}(E^{3})$ generated by two isometries $\mu
\neq 1,$ $\upsilon \neq 1$ such that $\mu ^{2}=\upsilon ^{3}=1$ is one of
the following
\begin{enumerate}
\item [1.] If $S$ has a fixed point, then $S$ is conjugate in $\mathcal{E}(E^{3})$
to the holonomy of the 2-dimensional spherical cone-manifold $S_{\frac{2\pi }{
2},\frac{2\pi }{3},\alpha }^{2}$, $\frac{2\pi }{6}<\alpha <\frac{10\pi }{6}$.
\item [2.] If $S$ has no fixed points, then three cases are possible
\begin{enumerate}
\item [2.a)] $S$ is conjugate in $\mathcal{E}(E^{3})$ to the subgroup generated by $
M_{x}(a)$ and $M_{x}(b)$, $x\in (0,\sqrt{3}/2).$ See \eqref{emxa} and \eqref{emxb}.
\item [2.b)]$S$ is conjugate in $\mathcal{E}(E^{3})$ to the natural extension to $
\mathcal{E}(E^{3})$ of the holonomy of the 2-dimensional Euclidean orbifold $
S_{236}^{2}$.
\item [2.c)]$S$ is conjugate in $\mathcal{E}(E^{3})$ to the Euclidean
crystallographic group $P6_{1}$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
If $S$ has a fixed point, the translational part of $S$ is $0.$ Then $
S\subset SO(3)$, and therefore it is conjugate to the image of
\begin{equation*}
\widehat{\rho }_{x}^{\prime }=c\circ \widehat{\rho }_{x}:G(3/1)
\longrightarrow SO(3)
\end{equation*}
for some $x\in (-\sqrt{3}/2,\sqrt{3}/2)$. Theorem \ref{tgenso3} shows that $
\widehat{\rho }_{x}^{\prime }\left( G(3/1)\right) $ is isomorphic to the
holonomy of the 2-dimensional spherical conemanifold $S_{\frac{2\pi }{2},
\frac{2\pi }{3},\alpha }^{2}$, where $x=\cos \frac{\alpha }{2}$.
If $S$ has no fixed points, we analyze the relative position of the axes
of the generators $\rho (a)$ and $\rho (b)$. If these axes are not parallel,
then $S$ is conjugate to the image of
\begin{equation*}
\rho _{x}:G(3/1)\longrightarrow \mathcal{E}(E^{3})
\end{equation*}
for some $x\in \lbrack 0,\sqrt{3}/2)$. Theorem \ref{tgeneucl} gives the
generators $M_{x}(a)$ and $M_{x}(b)$.
If the axes are parallel, we show in the following that $S$ is conjugate to
the image of an affine deformation $\rho :G(3/1)\longrightarrow \mathcal{E}
(E^{3})$ of a representation
\begin{equation*}
\pi _{2}\circ \rho =\widehat{\rho }^{\prime }=c\circ \widehat{\rho }
:G(3/1)\longrightarrow SO(3)
\end{equation*}
Observe that the linear part of $S$, $\pi _{2}(S)$ is generated by two
conjugate rotations $(\pi _{2}\circ \rho )(a)$ and $(\pi _{2}\circ \rho )(b)$
with the same axis. Therefore $\widehat{\rho }$ is a reducible
representation.
\begin{equation*}
C_{2}\ast C_{3}\overset{\lambda }{\longrightarrow }\mathcal{E}(E^{3})\overset
{\pi _{2}}{\longrightarrow }SO(3)
\end{equation*}
Then $\pi _{2}\circ \lambda $ factors through the abelianized group $C_{6}$
of $C_{2}\ast C_{3}$. Therefore $\pi _{2}(S)$ is a cyclic group of order
dividing $6$. The elements $(\pi _{2}\circ \rho )(a)$ and $(\pi _{2}\circ
\rho )(b)$ are both elements of order $1,2,3$ or $6$. Up to similarity we
assume that the axis of $\rho (a)$ is the $Z$ axis and the axis of $\rho (b)$
is the line parallel to the $Z$ axis through the point $(1,0,0)$. In
affine notation
\begin{equation*}
\rho (a)=
\begin{pmatrix}
\cos \alpha & -\sin \alpha & 0 & 0 \\
\sin \alpha & \cos \alpha & 0 & 0 \\
0 & 0 & 1 & \sigma \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
\begin{equation*}
\rho (b)=
\begin{pmatrix}
\cos \alpha & -\sin \alpha & 0 & 1-\cos \alpha \\
\sin \alpha & \cos \alpha & 0 & -\sin \alpha \\
0 & 0 & 1 & \sigma \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
where $\alpha $ is the angle of rotation around the axis and $\sigma $ is
the shift.
The relation $aba=bab$ implies
\begin{eqnarray*}
\rho (aba)-\rho (bab) &=&
\begin{pmatrix}
0 & 0 & 0 & (1-2\cos \alpha )^{2}\sin \alpha \\
0 & 0 & 0 & (1-2\cos \alpha )^{2}\sin \alpha \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
\\
&\Longrightarrow &\quad \cos \alpha =\frac{1}{2}\quad \Longrightarrow \quad
\alpha =\frac{2\pi }{6}\quad \Longrightarrow \quad x=\frac{\sqrt{3}}{2}
\end{eqnarray*}
If the parameter $\sigma $ is $0$, then $S$ is conjugate to the holonomy of
the Euclidean 2-dimensional orbifold $S_{236}$.
If the parameter $\sigma $ is different from $0,$ we assume up to similarity
that $\sigma =1/6.$ Then $S$ is conjugate to the Euclidean crystallographic
group $P6_{1}.$ To prove this, we compute some elements and their axes.
\begin{equation}
\mathbf{A}=\rho (a)=
\begin{pmatrix}
\frac{1}{2} & -\frac{\sqrt{3}}{2} & 0 & 0 \\
\frac{\sqrt{3}}{2} & \frac{1}{2} & 0 & 0 \\
0 & 0 & 1 & \frac{1}{6} \\
0 & 0 & 0 & 1
\end{pmatrix}
,\mathbf{B}=\rho (b)=
\begin{pmatrix}
\frac{1}{2} & -\frac{\sqrt{3}}{2} & 0 & \frac{3}{2} \\
\frac{\sqrt{3}}{2} & \frac{1}{2} & 0 & -\frac{\sqrt{3}}{2} \\
0 & 0 & 1 & \frac{1}{6} \\
0 & 0 & 0 & 1
\end{pmatrix}
\label{eroarob}
\end{equation}
The element $\mathbf{A}^{6}=\mathbf{B}^{6}$ is a translation by vector $
(0,0,1)$.
\begin{equation*}
\mathbf{ABA}=
\begin{pmatrix}
-1 & 0 & 0 & \frac{3}{2} \\
0 & -1 & 0 & \frac{\sqrt{3}}{2} \\
0 & 0 & 1 & \frac{1}{2} \\
0 & 0 & 0 & 1
\end{pmatrix}
,\quad (\mathbf{ABA})^{2}=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
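For clarity, write an affine isometry as a pair (linear part, translation vector) and let $R$ denote the rotation by $\frac{2\pi }{6}$ about the $Z$-axis. The linear part of $\mathbf{ABA}$ is $R^{3}$, the rotation by $\pi $ about the $Z$-axis, and each of the three factors adds $\frac{1}{6}$ to the $Z$-coordinate, so
\begin{equation*}
\mathbf{ABA}=\left( R^{3},\left( \frac{3}{2},\frac{\sqrt{3}}{2},\frac{1}{2}\right) \right) ,\qquad (\mathbf{ABA})^{2}=\left( \mathrm{Id},(0,0,1)\right) .
\end{equation*}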
Then $\mathbf{ABA}$ is a rotation by $\pi $ with shift $(0,0,\frac{1}{2}).$
The axis of $\mathbf{ABA}$ is obtained by solving the equation
\begin{equation*}
\mathbf{ABA}
\begin{pmatrix}
x_{1} \\
x_{2} \\
x_{3} \\
1
\end{pmatrix}
-
\begin{pmatrix}
x_{1} \\
x_{2} \\
x_{3} \\
1
\end{pmatrix}
=
\begin{pmatrix}
0 \\
0 \\
\frac{1}{2} \\
0
\end{pmatrix}
\end{equation*}
Then $x_{1}=\frac{3}{4},$ $x_{2}=\frac{\sqrt{3}}{4}.$ The axis of $\mathbf{
ABA}$ is the line $\left( \frac{3}{4},\frac{\sqrt{3}}{4},t\right) $.
\begin{equation*}
\mathbf{AB}=
\begin{pmatrix}
-\frac{1}{2} & -\frac{\sqrt{3}}{2} & 0 & \frac{3}{2} \\
\frac{\sqrt{3}}{2} & -\frac{1}{2} & 0 & \frac{\sqrt{3}}{2} \\
0 & 0 & 1 & \frac{1}{3} \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
The isometry $\mathbf{AB}$ is a rotation by $\frac{2\pi }{3}$ with shift $
(0,0,\frac{1}{3}).$ The axis of $\mathbf{AB}$ is the line $\left( \frac{1}{2}
,\frac{\sqrt{3}}{2},t\right) $. These data correspond to the
crystallographic group $P6_{1}$ number 169 in the Tables \cite{Tables}.
The group $P6_{1}$ is also $\pi _{1}^{o}(\widehat{S}_{2_{1}3_{1}6_{1}})$,
the fundamental group of the 3-dimensional Euclidean orbifold $E^{3}/P6_{1}$
denoted by $\widehat{S}_{2_{1}3_{1}6_{1}}$, see Figure \ref{fp61}.
\begin{figure}
\caption{The crystallographic group $P6_{1}$.}
\label{fp61}
\end{figure}
\end{proof}
\begin{theorem}
The homomorphism
\begin{equation*}
\begin{array}{cccc}
\rho : & G(3/1) & \longrightarrow & P6_{1}\subset \mathcal{E}(E^{3}) \\
& a & \rightarrow & \mathbf{A} \\
& b & \rightarrow & \mathbf{B}
\end{array}
\end{equation*}
where $\mathbf{A}$ and $\mathbf{B}$ are given by \eqref{eroarob}, factors
through $\pi _{1}(M_{0})$, where $M_{0}$ is the 3-manifold obtained from $
S^{3}$ by 0-surgery on the trefoil knot.
\end{theorem}
\begin{proof}
The manifold $M_{0}$ is the result of pasting a solid torus $T$ to the
exterior of the trefoil knot $K$ such that the meridian of the torus $T$ is
mapped to the canonical longitude $l_{c}$ of $K$. The element of $G(3_{1})$
represented by $l_{c}$ is $a^{-4}baab$. Then
\begin{equation*}
\pi _{1}(M_{0})=\left\vert a,b;aba=bab,\ a^{-4}baab=1\right\vert
\end{equation*}
The image $\rho (l_{c})=\mathbf{A}^{-4}\mathbf{BAAB}=I_{4\times 4},$
therefore the homomorphism $\rho $ factors through $\pi _{1}(M_{0})$: $\rho
= $ $\rho ^{\prime }\circ \eta $
\begin{equation*}
G(3/1)\overset{\eta }{\longrightarrow }\pi _{1}(M_{0})\overset{\rho ^{\prime
}}{\longrightarrow }P6_{1}
\end{equation*}
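For the reader's convenience, here is a sketch of the computation $\mathbf{A}^{-4}\mathbf{BAAB}=I_{4\times 4}$, writing each affine matrix of \eqref{eroarob} as a pair (linear part, translation vector) and denoting by $R$ the rotation by $\frac{2\pi }{6}$ about the $Z$-axis:
\begin{eqnarray*}
\mathbf{A}^{2}\mathbf{B} &=&\left( R^{3},\left( 0,\sqrt{3},\frac{1}{2}\right) \right) ,\qquad \mathbf{BAAB}=\left( R^{4},\left( 0,0,\frac{2}{3}\right) \right) , \\
\mathbf{A}^{-4} &=&\left( R^{-4},\left( 0,0,-\frac{2}{3}\right) \right) \quad \Longrightarrow \quad \mathbf{A}^{-4}\mathbf{BAAB}=\left( \mathrm{Id},(0,0,0)\right) .
\end{eqnarray*}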
\end{proof}
\begin{corollary}
The exterior of the trefoil knot has a Euclidean structure whose completion
gives a complete Euclidean structure in $M_{0}$.
\end{corollary}
The following Theorem gives more information on this manifold $M_{0}$.
\begin{theorem}
The manifold $M_{0}$ is the spherical tangent bundle of the 2-dimensional
Euclidean orbifold $S_{2,3,6}$.
\end{theorem}
\begin{proof}
The sphere $S^{3}$ has the Seifert manifold structure $(Ooo|0;(3,-1),(2,1))$,
with the trefoil knot $K$ as a general fibre. The result of 0-surgery on a
general fibre produces a Seifert manifold $(Ooo|0;(3,-1),(2,1),(\alpha
,\beta ))$, where the pair $(\alpha ,\beta )$ can be easily computed as
follows. Let $Q$, $Q_{1}$ and $Q_{2}$ be simple closed curves in $S^{2}$, the
base of the Seifert fibration $(Ooo|0;(3,-1),(2,1))$, which are meridians of
a general fibre $H$, the exceptional fibre $(3,-1)$, and the exceptional
fibre $(2,1)$, respectively. Then, the first homology group of $(S^{3}\setminus K)$ has the following presentation
\begin{equation*}
H_{1}(S^{3}\setminus
K)=|Q,Q_{1},Q_{2},H;3Q_{1}-H=0,2Q_{2}+H=0,Q+Q_{1}+Q_{2}=0|
\end{equation*}
Let $l$ be the canonical longitude of $K=H$, then in $M_{0}$ we have that
\begin{equation*}
l=\alpha Q+\beta H=\alpha (-Q_{1}-Q_{2})+3\beta Q_{1}=Q_{1}(3\beta -\alpha
)-\alpha Q_{2}=0
\end{equation*}
This implies that
\begin{equation*}
(3\beta -\alpha )Q_{1}=\alpha Q_{2}\quad \Rightarrow \quad -\alpha
Q_{1}=\alpha Q_{2}+2\beta Q_{2}
\end{equation*}
But also $2Q_{2}=-3Q_{1}$. Therefore
\begin{equation*}
\frac{3\beta -\alpha }{\alpha }=-\frac{3}{2}\quad \Rightarrow \quad 6\beta
-2\alpha =-3\alpha \quad \Rightarrow \quad 6\beta =-\alpha \quad \Rightarrow
\quad \alpha =6,\,\beta =-1.
\end{equation*}
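One can check directly that this pair kills the longitude: using $H=3Q_{1}=-2Q_{2}$,
\begin{equation*}
l=6Q-H=6(-Q_{1}-Q_{2})-3Q_{1}=-9Q_{1}-6Q_{2}=-3(3Q_{1}+2Q_{2})=0.
\end{equation*}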
Using Seifert signature equivalence, we have that
\begin{equation*}
M_{0}=(Ooo|0;(3,-1),(2,1),(6,-1))=(Ooo|1;(3,-1),(2,-1),(6,-1))
\end{equation*}
which is orientation reversing equivalent to
\begin{equation*}
(Ooo|1;(3,1),(2,1),(6,1))=ST(S_{2,3,6})
\end{equation*}
the spherical tangent bundle of the Euclidean orbifold $S_{2,3,6}$, since
\begin{equation*}
\chi =-1+\frac{1}{2}+\frac{1}{3}+\frac{1}{6}=0
\end{equation*}
\end{proof}
\begin{remark}
The manifold $M_{0}$ is a torus bundle over $S^{1}$ with periodic monodromy
of order 6 (\cite{Z1965}).
\end{remark}
Theorem \ref{tposiblesH} can be used to identify all the 3-dimensional
orientation preserving Euclidean crystallographic groups generated by one
element of order two and another of order three.
\begin{theorem}
\label{tposibleseucli}The only 3-dimensional orientation preserving
Euclidean crystallographic groups generated by one element of order two and
another of order three, up to similarity, are $I2_{1}3$, $P4_{1}32$ ($P4_{3}32$)
and $P6_{1}$.
\end{theorem}
\begin{proof}
Assume $S$ is conjugate to the subgroup $\rho _{x}(G(3/1))$ generated by $
M_{x}(a)$ and $M_{x}(b)$, $x=\cos \frac{\alpha }{2}\in (0,\sqrt{3}/2)$,
where $M_{x}(a)$ and $M_{x}(b)$ are given by \eqref{emxa} and \eqref{emxb}.
A necessary condition for $S$ to be a Euclidean crystallographic group is $\alpha
\in \left\{ \frac{2\pi }{2},\frac{2\pi }{3},\frac{2\pi }{4},\frac{2\pi }{6}
\right\} \Longleftrightarrow x\in \left\{ 0,\frac{1}{2},\frac{\sqrt{2}}{2},
\frac{\sqrt{3}}{2}\right\} $.
\begin{enumerate}
\item [Case $x=0.$] This case is impossible. (Remark \ref{rro0}.)
\item [Case $x=\frac{1}{2}$.] Then
\begin{equation*}
\alpha =\frac{2\pi }{3},\quad \cos \omega =-\frac{1}{3},\quad \sigma =\frac{
\sqrt{3}}{2},\quad \delta =\frac{\sqrt{2}}{4}
\end{equation*}
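These values follow from the formulas of Theorem \ref{tgeneucl} evaluated at $x=\frac{1}{2}$ (so that $\cos \frac{\alpha }{2}=\frac{1}{2}$):
\begin{equation*}
\cos \omega =\frac{2x^{2}-1}{2-2x^{2}}=-\frac{1}{3},\qquad \sigma =\left( \frac{3}{4x}-x\right) \sqrt{1-x^{2}}=\frac{\sqrt{3}}{2},\qquad \delta =\frac{\sqrt{3-4x^{2}}}{4}=\frac{\sqrt{2}}{4}.
\end{equation*}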
We will prove that $\rho _{\frac{1}{2}}(G(3/1))$ is the Euclidean
crystallographic group $I2_{1}3$ (number 199 in Tables \cite{Tables}).
\begin{enumerate}
\item $\rho _{\frac{1}{2}}(G(3/1))\leqslant I2_{1}3$. To compare both groups,
we conjugate $\rho _{\frac{1}{2}}(G(3/1))$ by a similarity so that
\begin{equation*}
\alpha =\frac{2\pi }{3},\quad \cos \omega =-\frac{1}{3},\quad \sigma =\frac{
\sqrt{3}}{6},\quad \delta =\frac{\sqrt{2}}{12}
\end{equation*}
The axes $\rho _{\frac{1}{2}}(a)$ and $\rho _{\frac{1}{2}}(b)$ are depicted
in Figure \ref{fabI23}. It can be checked that the two axes in Figure
\ref{fabI23} have distance $\frac{\sqrt{2}}{12}=\delta $ and the angle $
\omega $ is
\begin{equation*}
\cos \omega =2\cos ^{2}\frac{\omega }{2}-1=2\left( \frac{1}{\sqrt{3}}\right)
^{2}-1=-\frac{1}{3}
\end{equation*}
The shift $\sigma =\frac{\sqrt{3}}{6}$ is the length of the diagonal of the
cube with edge length $\frac{1}{6}$. Therefore $\rho _{\frac{1}{2}}(a^{-1})$
and $\rho _{\frac{1}{2}}(b)$ are the screw axes in $I2_{1}3$ (Figure \ref
{fi213}) denoted by $\mathbf{A}^{-1}$ and $\mathbf{B}$.
\begin{figure}
\caption{The axes of $\rho _{\frac{1}{2}}(a)$ and $\rho _{\frac{1}{2}}(b)$.}
\label{fabI23}
\end{figure}
\begin{figure}
\caption{The crystallographic group $I2_{1}3$.}
\label{fi213}
\end{figure}
\item $\rho _{\frac{1}{2}}(G(3/1))=I2_{1}3$. The group $I2_{1}3$ satisfies
the following short exact sequence
\begin{equation*}
0\longrightarrow T^{3}\longrightarrow I2_{1}3\overset{p}{\longrightarrow }
S_{233}\longrightarrow 1
\end{equation*}
where $T^{3}$ is the translational subgroup of a cube with edge length $1$
as fundamental domain and $S_{233}$, the linear quotient, is the holonomy of
the 2-dimensional spherical orbifold denoted also by $S_{233}$. It is clear
that $p(\rho _{\frac{1}{2}}(a^{-1}))=\rho _{\frac{1}{2}}^{\prime }(a^{-1})$
and $p(\rho _{\frac{1}{2}}(b))=\rho _{\frac{1}{2}}^{\prime }(b)$ generate $
S_{233}$ by Theorem \ref{tgenso3}. Therefore it suffices to prove that $
T^{3}\leqslant \rho _{\frac{1}{2}}(G(3/1))$ and to find enough elements: the
element $\mathbf{A}^{3}$ is a translation with vector $(\frac{1}{2},\frac{1}{
2},\frac{1}{2})$. Figure \ref{fi213} shows the elements $\mathbf{AB}$, $
\mathbf{BA}$ (3-fold rotations), $\mathbf{AAB}$ (2-fold rotation), $\mathbf{
ABA}$, $\mathbf{BA}^{-1}$ (2-screw rotations).
\end{enumerate}
\item [Case $x=\frac{\sqrt{2}}{2}$.] Then
\begin{equation*}
\alpha =\frac{2\pi }{4},\quad \cos \omega =0,\quad \sigma =\frac{1}{4},\quad
\delta =\frac{1}{4}
\end{equation*}
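Again these values come from the formulas of Theorem \ref{tgeneucl}, now evaluated at $x=\frac{\sqrt{2}}{2}$:
\begin{equation*}
\cos \omega =\frac{2x^{2}-1}{2-2x^{2}}=0,\qquad \sigma =\left( \frac{3}{4x}-x\right) \sqrt{1-x^{2}}=\frac{1}{4},\qquad \delta =\frac{\sqrt{3-4x^{2}}}{4}=\frac{1}{4}.
\end{equation*}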
The group $P4_{1}32$ is depicted in Figure \ref{fi4132}. Up to similarity we assume that
the axes $\rho _{\frac{\sqrt{2}}{2}}(a)$ and $\rho _{\frac{\sqrt{2}}{2}}(b)$
are the ones depicted in Figure \ref{fabP432}, so they are the $4_{1}$
axes denoted by $\mathbf{A}$ and $\mathbf{B}$ in Figure \ref{fi4132}.
\begin{figure}
\caption{The axes of $\rho _{\frac{\sqrt{2}}{2}}(a)$ and $\rho _{\frac{\sqrt{2}}{2}}(b)$.}
\label{fabP432}
\end{figure}
\begin{figure}
\caption{The crystallographic group $I4_{1}32$.}
\label{fi4132}
\end{figure}
In affine notation
\begin{equation}
\mathbf{A}=\rho _{\frac{\sqrt{2}}{2}}(a)=
\begin{pmatrix}
0 & -1 & 0 & \frac{1}{4} \\
1 & 0 & 0 & -\frac{1}{4} \\
0 & 0 & 1 & \frac{1}{4} \\
0 & 0 & 0 & 1
\end{pmatrix}
,\quad \mathbf{B}=\rho _{\frac{\sqrt{2}}{2}}(b)=
\begin{pmatrix}
0 & 0 & 1 & -\frac{1}{4} \\
0 & 1 & 0 & \frac{1}{4} \\
-1 & 0 & 0 & \frac{1}{4} \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation}
This proves that $\rho _{\frac{\sqrt{2}}{2}}(G(3/1))\leqslant P4_{1}32$,
because $\rho _{\frac{\sqrt{2}}{2}}(G(3/1))$ is generated by $\mathbf{A}$ and
$\mathbf{B}$, both elements of $P4_{1}32$. To prove the equality, we can
compute the other elements of $P4_{1}32$. For instance, the element
\begin{equation*}
\mathbf{BA}=
\begin{pmatrix}
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
is a rotation by $2\pi /3$ with axis $(t,t,t)$. The element
\begin{equation*}
\mathbf{ABA}=
\begin{pmatrix}
-1 & 0 & 0 & \frac{1}{4} \\
0 & 0 & 1 & -\frac{1}{4} \\
0 & 1 & 0 & \frac{1}{4} \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
is a rotation by $\pi $ with axis $(1/8,t,t+1/4)$. Observe that $\mathbf{A}
^{4}$ and $\mathbf{B}^{4}$ are translations. The subgroup generated by $
\mathbf{A}^{4}$ and $\mathbf{B}^{4}$ has a cube as fundamental domain, with edge length one.
\item [Case $x=\frac{\sqrt{3}}{2}$.] Theorem \ref{tposiblesH} 2.c) shows that $
S$ is conjugate to $P6_{1}$.
\end{enumerate}
\end{proof}
\begin{remark}
The space $E^{3}/I2_{1}3$ is the Euclidean orbifold $Q_{1}$ with underlying
space $S^{3}$ and singular set the rational link $10/3$ whose two components
have isotropies of orders $2$ and $3$. See Figure \ref{forbeuclideas}.
\end{remark}
\begin{remark}
The space $E^{3}/P4_{1}32$ is the Euclidean orbifold $Q_{2}$ with underlying
space $S^{3}$ and singular set the graph depicted in Figure \ref
{forbeuclideas}. This orbifold is 2-fold covered by the Euclidean orbifold $
E^{3}/P2_{1}3$ with underlying space $S^{3}$ and singular set the Figure
Eight knot with isotropy of order $3$.
\begin{figure}
\caption{Euclidean orbifolds.}
\label{forbeuclideas}
\end{figure}
\end{remark}
\subsection{Case 2: $|x|=\frac{\protect\sqrt{3}}{2}$}
The almost-irreducible representations $\rho _{\pm \frac{\sqrt{3}}{2}
}:G(3/1)\longrightarrow SL(2,\mathbb{C})$ given by \eqref{ecaso2} do not
have a proper affine deformation, because the polynomial \eqref{epoly2} in
this case gives $s=0$. But, by Theorem \ref{tposiblesH}, there exists an
affine deformation of a reducible representation $\rho _{\frac{\sqrt{3}}{2}
}:G(3/1)\longrightarrow SO(3)$, whose image is the crystallographic group $
P6_{1}$.
\subsection{The quaternion algebra $\left( \frac{-1,1}{\mathbb{R}}\right) $.}
Next we analyze the affine deformations of representations corresponding to
points in the character variety belonging to Cases 3, 4 and 5: $|x|>\frac{
\sqrt{3}}{2}$, where the quaternion algebra is $M(2,\mathbb{R})=\left( \frac{
-1,1}{\mathbb{R}}\right) $. Therefore $U_{1}=SL(2,\mathbb{R})$, $
H_{0}=E^{1,2}$, the Minkowski space, and $A(H)=\mathcal{L}(E^{1,2})$.
The meaning of the parameters depends on the sign of $x-1$.
\begin{enumerate}
\item [Case 3]: $\frac{\sqrt{3}}{2}<x<1\Longrightarrow x-1<0$, then $A^{-}$ is
a vector inside the nullcone (a time-like vector) and $A$ acts as a right
spherical rotation around the axis $A^{-}$ with angle $\alpha $, where $
x=\cos \frac{\alpha }{2}$. Let $d$ be the hyperbolic distance between the
projections of $A^{-}$ and $B^{-}$ on the hyperbolic plane (pure unit
quaternions in the upper component), then $\cosh d=\frac{y}{1-x^{2}}$.
\begin{eqnarray}
x &=&A^{+}=B^{+}=\cos \frac{\alpha }{2} \notag \\
u &=&N(A^{-})=N(B^{-})=1-x^{2}=\sin ^{2}\frac{\alpha }{2} \notag \\
y &=&u\cosh d
\end{eqnarray}
\item [Case 4]: $x=1$, then $A^{-}$ belongs to the nullcone (a light-like
vector) and $A$ acts as a parabolic transformation around the axis $A^{-}$.
\item [Case 5]: $1<x\Leftrightarrow $ $x-1>0$, then $A^{-}$ is a vector
outside the nullcone (a space-like vector) and $A$ acts as a right
hyperbolic rotation around the axis $A^{-}$ with distance $\partial $, where
$x=\cosh \frac{\partial }{2}$. The meaning of the parameter $y$ depends on
the sign of $y^{2}-(x^{2}-1)^{2}$.
\begin{enumerate}
\item If $y^{2}-(x^{2}-1)^{2}>0$, $y^{2}=(x^{2}-1)^{2}\cosh ^{2}d$, where $d$
is the distance between the polars of $A^{-}$ and $B^{-}$.
\item If $y^{2}-(x^{2}-1)^{2}=0$ then $y^{2}=(x^{2}-1)^{2}$.
\item If $y^{2}-(x^{2}-1)^{2}<0$, $y^{2}=(x^{2}-1)^{2}\cos ^{2}\theta $,
where $\theta $ is the angle between the polars of $A^{-}$ and $B^{-}$.
Therefore
\begin{eqnarray}
x &=&A^{+}=B^{+}=\cosh \frac{\partial }{2} \notag \\
y^{2} &>&(x^{2}-1)^{2}\Longrightarrow y^{2}=(x^{2}-1)^{2}\cosh ^{2}d \\
y^{2} &<&(x^{2}-1)^{2}\Longrightarrow y^{2}=(x^{2}-1)^{2}\cos ^{2}\theta
\end{eqnarray}
The shift $\sigma $ of the element $(sA^{-},A)$ and the distance $\delta $
between the axes of $(sA^{-},A)$ and $(sB^{-}+(A^{-}B^{-})^{-},B)$ are as in
Case 1, that is
\begin{equation*}
\sigma =s\sqrt{u}=s\sqrt{1-x^{2}}.
\end{equation*}
\begin{equation}
N((A^{-}B^{-})^{-})=-y^{2}+u^{2}\quad \Longrightarrow \quad \delta =\frac{
\sqrt{u^{2}-y^{2}}}{2}
\end{equation}
\end{enumerate}
\end{enumerate}
\begin{theorem}
\label{tgenlorentz} For each $x\in (\sqrt{3}/2,\infty ),$ there exists a
representation $\rho _{x}:G(3/1)\longrightarrow \mathcal{L}(E^{1,2})$ unique
up to conjugation in $\mathcal{L}(E^{1,2})$ such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=(sA^{-},A) \\
\rho _{x}(b)=(sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
where the values of $A,B\in SL(2,\mathbb{R})$ are the following
\begin{enumerate}
\item For $\frac{\sqrt{3}}{2}<x<1$.
\begin{equation*}
\begin{array}{c}
A=x+\sqrt{1-x^{2}}i,\quad \sqrt{1-x^{2}}>0 \\
B=x+\frac{2x^{2}-1}{2\sqrt{1-x^{2}}}i+\frac{1}{2}\sqrt{\frac{4x^{2}-3}{
1-x^{2}}}j
\end{array}
\end{equation*}
\item For $x=1\Longrightarrow $ $y=\frac{1}{2}$
\begin{equation*}
\begin{array}{c}
A=1+i+j \\
B=1+\frac{1}{4}(i-j)
\end{array}
\end{equation*}
\item For $x>1$, $y=\frac{2x^{2}-1}{2}>0$
\begin{equation*}
\begin{array}{c}
\widehat{\rho }_{x}(a)=A=x+\sqrt{x^{2}-1}j,\quad \sqrt{x^{2}-1}>0 \\
\widehat{\rho }_{x}(b)=B=x-\frac{1}{2}\sqrt{\frac{4x^{2}-3}{x^{2}-1}}i-\frac{
2x^{2}-1}{2\sqrt{x^{2}-1}}j
\end{array}
\end{equation*}
\end{enumerate}
In affine linear notation, where $\left\{ X,Y,Z\right\} $ is the coordinate
system associated to the basis $\left\{ -ij,j,i\right\} $
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=M_{x}(a)\left(
\begin{array}{c}
X \\
Y \\
Z \\
1
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime } \\
1
\end{array}
\right) \\
\rho _{x}(b)=M_{x}(b)\left(
\begin{array}{c}
X \\
Y \\
Z \\
1
\end{array}
\right) =\left(
\begin{array}{c}
X^{\prime } \\
Y^{\prime } \\
Z^{\prime } \\
1
\end{array}
\right)
\end{array}
\end{equation*}
where,
\begin{enumerate}
\item For $\frac{\sqrt{3}}{2}<x<1$
\begin{equation}
M_{x}(a)=\left(
\begin{array}{cccc}
2x^{2}-1 & -2x\sqrt{1-x^{2}} & 0 & 0 \\
2x\sqrt{1-x^{2}} & 2x^{2}-1 & 0 & 0 \\
0 & 0 & 1 & \frac{\left( 3-4x^{2}\right) \sqrt{1-x^{2}}}{4x} \\
0 & 0 & 0 & 1
\end{array}
\right) \label{emacase3}
\end{equation}
\begin{equation}
M_{x}(b)=
\begin{pmatrix}
2x^{2}-1 & \frac{x-2x^{3}}{\sqrt{1-x^{2}}} & x\sqrt{\frac{3-4x^{2}}{x^{2}-1}}
& -\frac{1}{2}\sqrt{4x^{2}-3} \\
\frac{x(2x^{2}-1)}{\sqrt{1-x^{2}}} & \frac{1+2x^{2}-4x^{4}}{2-2x^{2}} &
\frac{\sqrt{4x^{2}-3}(2x^{2}-1)}{2(1-x^{2})} & \frac{\sqrt{(4x^{2}-3)^{3}}}{
8x\sqrt{1-x^{2}}} \\
x\sqrt{\frac{3-4x^{2}}{x^{2}-1}} & \frac{\sqrt{4x^{2}-3}(2x^{2}-1)}{
2(1-x^{2})} & \frac{1-2x^{2}}{2x^{2}-2} & \frac{-3+10x^{2}-9x^{4}}{8x\sqrt{
1-x^{2}}} \\
0 & 0 & 0 & 1
\end{pmatrix}
\label{embcase3}
\end{equation}
\item For $x=1\Longrightarrow $ $y=\frac{1}{2}$
\begin{equation}
M_{x}(a)=\left(
\begin{array}{cccc}
1 & -2 & 2 & 0 \\
2 & -1 & 2 & -\frac{1}{4} \\
2 & -2 & 3 & -\frac{1}{4} \\
0 & 0 & 0 & 1
\end{array}
\right) \label{emacase4}
\end{equation}
\begin{equation}
M_{x}(b)=
\begin{pmatrix}
1 & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\
\frac{1}{2} & \frac{7}{8} & -\frac{1}{8} & \frac{1}{16} \\
-\frac{1}{2} & \frac{1}{8} & \frac{9}{8} & -\frac{1}{16} \\
0 & 0 & 0 & 1
\end{pmatrix}
\label{embcase4}
\end{equation}
\item For $x>1$, $y=\frac{2x^{2}-1}{2}>0$
\begin{equation}
M_{x}(a)=\left(
\begin{array}{cccc}
2x^{2}-1 & 0 & 2x\sqrt{x^{2}-1} & 0 \\
0 & 1 & 0 & \frac{\left( 3-4x^{2}\right) \sqrt{x^{2}-1}}{4x} \\
2x\sqrt{x^{2}-1} & 0 & 2x^{2}-1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right) \label{emacase5}
\end{equation}
\begin{equation}
M_{x}(b)=
\begin{pmatrix}
2x^{2}-1 & x\sqrt{\frac{-3+4x^{2}}{x^{2}-1}} & -\frac{x(2x^{2}-1)}{\sqrt{
x^{2}-1}} & -\frac{1}{2}\sqrt{4x^{2}-3} \\
-x\sqrt{\frac{-3+4x^{2}}{x^{2}-1}} & \frac{1-2x^{2}}{2x^{2}-2} & \frac{\sqrt{
-3+4x^{2}}(2x^{2}-1)}{2(x^{2}-1)} & \frac{3-10x^{2}+8x^{4}}{8x\sqrt{x^{2}-1}}
\\
-\frac{x(2x^{2}-1)}{\sqrt{x^{2}-1}} & \frac{\sqrt{-3+4x^{2}}(2x^{2}-1)}{
2(x^{2}-1)} & \frac{1+2x^{2}-4x^{4}}{2-2x^{2}} & -\frac{\sqrt{(-3+4x^{2})^{3}
}}{8x\sqrt{x^{2}-1}} \\
0 & 0 & 0 & 1
\end{pmatrix}
\label{mbcase5}
\end{equation}
The distance $\delta $ and the angle $\omega $ between the axes of $\rho
_{x}(a)$ and $\rho _{x}(b)$ are given by
\begin{eqnarray*}
\delta &=&\frac{\sqrt{3-4x^{2}}}{4} \\
\cos \omega &=&\frac{2x^{2}-1}{2-2x^{2}}
\end{eqnarray*}
The shift $\sigma $ is
\begin{equation*}
\sigma =(\frac{3}{4x}-x)\sqrt{1-x^{2}}
\end{equation*}
\end{enumerate}
\begin{proof}
Theorem \ref{teorema4} gives the values of $A$ and $B$ as a function of $x$
and $y$ and also the values of $(A^{-}B^{-})^{-}$ in each case:
\begin{eqnarray*}
\frac{\sqrt{3}}{2} &<&x<1\Longrightarrow (A^{-}B^{-})^{-}=\sqrt{
y^{2}-(1-x^{2})^{2}}ij \\
x &=&1\Longrightarrow (A^{-}B^{-})^{-}=-yij \\
x &>&1\Longrightarrow (A^{-}B^{-})^{-}=\sqrt{y^{2}-(x^{2}-1)^{2}}ij
\end{eqnarray*}
From the polynomial defining the character variety \eqref{epoly1}
\begin{equation*}
2y-(2x^{2}-1)=0
\end{equation*}
we obtain the value of $y$
\begin{equation*}
y=\frac{2x^{2}-1}{2}
\end{equation*}
From the polynomial \eqref{epoly2} we obtain the value of $s$
\begin{equation*}
s=\frac{3-4x^{2}}{4x}
\end{equation*}
By \cite[Th.5]{HLM2009}, these data define the representation $\rho
_{x}:G(3/1)\longrightarrow \mathcal{L}(E^{1,2})$, unique up to conjugation in
$\mathcal{L}(E^{1,2})$, such that
\begin{equation*}
\begin{array}{l}
\rho _{x}(a)=(sA^{-},A) \\
\rho _{x}(b)=(sB^{-}+(A^{-}B^{-})^{-},B)
\end{array}
\end{equation*}
\end{proof}
\end{theorem}
As in the Euclidean case we are interested in representations in $\mathcal{L}
(E^{1,2})$ whose image is an \emph{affine crystallographic group}. This
concept is defined by Fried and Goldman \cite{FG1983} as a subgroup of the affine group $
A(H)$ acting properly discontinuously and with compact orbit space. An
affine crystallographic group is the fundamental group of a flat affine
manifold. The following theorem analyzes some examples in case 1 $(x<1)$ of
Theorem \ref{tgenlorentz}, where generators go to elements with linear part $
A$ and $B$ such that $A^{-}$ and $B^{-}$ are timelike vectors.
\begin{theorem}
The image of the representation $\rho _{x}:G(3/1)\longrightarrow \mathcal{L}
(E^{1,2})$, where $x=\cos \frac{2\pi }{2n}$, $n\geq 7$, is not a properly
discontinuous subgroup of $\mathcal{L}(E^{1,2})$.
\end{theorem}
\begin{proof}
The linear quotient of $\rho _{x}(G(3/1))$ is the group $S_{23n}\subset
SO(1,2)$. The group $S_{23n}$, being the holonomy group of the 2-dimensional
hyperbolic orbifold denoted by the same name $S_{23n}$, is a cocompact
group; therefore Mess's Theorem (\cite{Mess1990}, \cite{GM2000}) says that
$Im(\rho _{x})$ is not a properly discontinuous subgroup of $\mathcal{
L}(E^{1,2})$.
\end{proof}
To analyze the discrete condition of representations in case 3 $(x>1)$ of
Theorem \ref{tgenlorentz}, we can use the Margulis invariant
(\cite{Mar1983}, \cite{Mar1984}, \cite{GM2000}).
Recall that an element of $O(1,2)$ is hyperbolic if it has three distinct
real eigenvalues. A subgroup $G\subset O(1,2)$ is purely hyperbolic if
every element other than the identity is hyperbolic.
\begin{theorem}
Every image of $\rho _{x}:G(3/1)\longrightarrow \mathcal{L}(E^{1,2})$, where
$x>1$, contains an affine deformation of a purely hyperbolic subgroup of
finite index.
\end{theorem}
\begin{proof}
Theorem \ref{tsubgrupolibre} shows that the image of the linear quotient of $
\rho _{x}$, $\rho _{x}^{\prime }=\pi _{2}\circ \rho
_{x}:G(3/1)\longrightarrow SO^{0}(1,2)$, $1<x<\infty $, is an index 6 free
subgroup generated by $\{\rho _{x}^{\prime }(b^{-2}),$ $\rho _{x}^{\prime
}(a^{2})\}$. All the elements of this subgroup are hyperbolic
transformations.
We conclude that the subgroup of $\rho _{x}(G(3/1))$ generated by $\{\rho
_{x}(b^{-2}),$ $\rho _{x}(a^{2})\}$ is an affine deformation of the purely
hyperbolic free subgroup of rank 2 generated by $\{B^{-2},A^{2}\}$.
\end{proof}
Margulis defines an invariant $\alpha _{\phi }:G\longrightarrow \mathbb{R}$
of an affine deformation $\phi $ of a purely hyperbolic subgroup $G\subset
SO^{0}(1,2)$ as follows. Every element $g\in G$ has three distinct positive
real eigenvalues $\lambda (g)<1<\lambda (g)^{-1}$. Choose an eigenvector $
x^{-}(g)$ for $\lambda (g)$ and an eigenvector $x^{+}(g)$ for $\lambda
(g)^{-1}$, both in the same component $\mathcal{N}_{+}$ of the complement of
$0$ in the nullcone. Consider the unique eigenvector $x^{0}(g)$ for $g$ with
eigenvalue $1$ such that $|x^{0}(g)|=-1$ and $\{x^{-}(g),x^{+}(g),x^{0}(g)\}$
is a positively oriented basis of $E^{1,2}$.
If $\phi $ is an affine deformation of $G$, then $\alpha _{\phi }$ is
defined as
\begin{equation*}
\begin{array}{cccc}
\alpha _{\phi }: & G & \longrightarrow & \mathbb{R} \\
& g & \rightarrow & Q(x^{0}(g),\phi (g)(x)-x)
\end{array}
\end{equation*}
for any $x\in E^{1,2}$, where $Q(-,-)$ is the symmetric bilinear form
defining the Minkowski metric. It has been proven (\cite{DG2001}) that $\alpha $
is a complete invariant of the conjugacy class of the affine deformation. The
following theorem of Margulis can be used to check the
proper condition of an affine deformation.
\begin{theorem}[Margulis] Let $G$ be a purely hyperbolic subgroup of $SO^{0}(1,2)$, and $
\phi :G\longrightarrow \mathcal{L}(E^{1,2})$ an affine deformation. If there
exist $g_{1},g_{2}\in G$ such that $\alpha _{\phi }(g_{1})>0>\alpha _{\phi
}(g_{2})$, then $\phi $ is not proper.
\end{theorem}
\begin{theorem}
The image of the representation $\rho _{x}:G(3/1)\longrightarrow \mathcal{L}
(E^{1,2})$, where $x>1$, is not a properly discontinuous subgroup of $
\mathcal{L}(E^{1,2})$.
\begin{proof}
We compute the Margulis invariants of the elements $g_{2}=\widehat{\rho }
_{x}^{\prime }(a^{2})$ and $g_{3}=\widehat{\rho }_{x}^{\prime }(a^{2}b^{2})$
in the purely hyperbolic group $\pi _{1}^{o}(O_{2\delta ,2\delta ,2\delta
})\subset \widehat{\rho }_{x}^{\prime }(G(3/1))$. Observe that the element
\begin{equation*}
g_{2}=\widehat{\rho }_{x}^{\prime }(a^{2})=
\begin{pmatrix}
1-8x^{2}+8x^{4} & 0 & 4x\sqrt{x^{2}-1}(2x^{2}-1) \\
0 & 1 & 0 \\
4x\sqrt{x^{2}-1}(2x^{2}-1) & 0 & 1-8x^{2}+8x^{4}
\end{pmatrix}
\end{equation*}
has the following eigenvalues
\begin{eqnarray*}
\lambda (g_{2}) &=&1-8x^{2}+8x^{4}-\sqrt{(1-8x^{2}+8x^{4})^{2}-1}<1 \\
1 &<&\left( \lambda (g_{2})\right) ^{-1}=1-8x^{2}+8x^{4}+\sqrt{
(1-8x^{2}+8x^{4})^{2}-1}.
\end{eqnarray*}
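As a quick check of this choice, write $c=1-8x^{2}+8x^{4}$ and $d=4x\sqrt{x^{2}-1}(2x^{2}-1)$ for the entries of the $(X,Z)$-block of $g_{2}$; then
\begin{equation*}
g_{2}\begin{pmatrix} \mp 1 \\ 0 \\ 1 \end{pmatrix}=(c\mp d)\begin{pmatrix} \mp 1 \\ 0 \\ 1 \end{pmatrix},\qquad (c-d)(c+d)=c^{2}-d^{2}=1,
\end{equation*}
so $\{-1,0,1\}$ and $\{1,0,1\}$ are eigenvectors for $\lambda (g_{2})$ and $\lambda (g_{2})^{-1}$ respectively.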
We choose the eigenvector $x^{-}(g_{2})=\{-1,0,1\},$ $x^{+}(g_{2})=\{1,0,1\}.
$ Therefore the vector $x^{0}(g_{2})=\{0,1,0\}$ is the unique eigenvector
such that $|x^{0}(g)|=-1$ and $\{x^{-}(g),x^{+}(g),x^{0}(g)\}$ is a
positively oriented basis of $E^{1,2}$. The Margulis invariant $\alpha
_{\phi _{x}}(g_{2})=Q(x^{0}(g),\rho _{x}(a^{2})(x)-x)$ for any $x\in E^{1,2}$
is
\begin{equation*}
\alpha _{\phi _{x}}(g_{2})=(0,1,0)
\begin{pmatrix}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
s_{1} \\
s_{2} \\
s_{3}
\end{pmatrix}
=-s_{2}
\end{equation*}
where $\{s_{1},s_{2},s_{3}\}$ are the $\{X,Y,Z\}-$coordinates of $\rho
_{x}(a^{2})(x)-x$ for $x=\{t_{1},t_{2},t_{3}\}$. The computation gives
\begin{equation*}
s_{2}=\frac{(3-4x^{2})\sqrt{x^{2}-1}}{2x}.
\end{equation*}
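Indeed, since the second row of $M_{x}(a)$ in \eqref{emacase5} is $\left( 0,1,0,\frac{(3-4x^{2})\sqrt{x^{2}-1}}{4x}\right) $, the $Y$-coordinate of $\rho _{x}(a^{2})(x)-x$ does not depend on the point $x$ and equals twice that affine entry:
\begin{equation*}
s_{2}=2\cdot \frac{(3-4x^{2})\sqrt{x^{2}-1}}{4x}=\frac{(3-4x^{2})\sqrt{x^{2}-1}}{2x}.
\end{equation*}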
Therefore
\begin{equation*}
\alpha _{\phi }(g_{2})=-\frac{(3-4x^{2})\sqrt{x^{2}-1}}{2x}.
\end{equation*}
This value vanishes only for $x=\pm \frac{\sqrt{3}}{2},\pm 1$, hence it has
the same sign for all $x>1$. For $x=2$ it equals $\frac{13\sqrt{3}}{4}>0$. Then
\begin{equation*}
\alpha _{\phi _{x}}(g_{2})>0,\quad x>1.
\end{equation*}
We have used the computer program Mathematica to carry out an analogous, but much
more complicated, computation for $g_{3}=\widehat{\rho }_{x}^{\prime
}(a^{2}b^{2})$. We found
\begin{multline*}
\alpha _{\phi _{x}}(g_{3})=\\
\frac{3+x^{2}(-1-22x^{2}+36x^{4}-16x^{6}-8x\sqrt{
x^{2}-1}(9+4x^{2}(-21+63x^{2}-76x^{4}+32x^{6})))}{8x\sqrt{(x^{2}-1)^{5}}}
\end{multline*}
which is never zero for $x>1$, and for $x=2$ it is equal to $-\frac{
143(15+7616\sqrt{3})}{144\sqrt{3}}<0$. Therefore $\alpha _{\phi
_{x}}(g_{3})<0$ for all $x>1$. We conclude that $\alpha _{\phi
_{x}}(g_{2})>0>\alpha _{\phi _{x}}(g_{3})$ for all $x>1$. Then we apply
Margulis's Theorem above to deduce that $\rho _{x}(G(3/1))$ contains a
subgroup whose action is not proper.
\end{proof}
\end{theorem}
For case 2 $(x=1)$ in Theorem \ref{tgenlorentz}, where generators go to
elements with linear part $A$ and $B$ such that $A^{-}$ and $B^{-}$ are null
vectors, we know by Theorem \ref{tsubgrupolibre} that the image of $\rho
_{1}:G(3_{1})\longrightarrow \mathcal{L}(E^{1,2})$ contains an affine
deformation of a free subgroup of index 6 generated by the two parabolic
elements $\{B^{-2},A^{2}\}$. To analyze the discrete condition of the
representation $\rho _{1}:G(3_{1})\longrightarrow \mathcal{L}(E^{1,2})$ one
could use the generalization of the Margulis invariant method obtained in \cite{CD2005}
for subgroups
generated by two parabolic elements. But in this case it does not work,
because the generalized Margulis invariant of the parabolic elements $A^{2}$
and $B^{-2}$ is $0$, as is the Margulis invariant of the hyperbolic elements $
A^{2n}B^{2}$. The reason is that each of these elements has a line of
fixed points. Moreover, this property for some element already implies that the
action is not properly discontinuous. Therefore we can establish the following theorem.
\begin{theorem}
The image of the representation $\rho _{1}:G(3_{1})\longrightarrow \mathcal{L
}(E^{1,2})$, is not a properly discontinuous subgroup of $\mathcal{L}
(E^{1,2})$.
\end{theorem}
\begin{proof}
The elements
\begin{equation*}
\rho _{1}(a^{n})=\left(
\begin{array}{cccc}
1 & -2 & 2 & 0 \\
2 & -1 & 2 & -\frac{1}{4} \\
2 & -2 & 3 & -\frac{1}{4} \\
0 & 0 & 0 & 1
\end{array}
\right) ^{n}
\end{equation*}
fix each point in the line $(1/8,t,t)$, and the action in a neighborhood of
this line is not discontinuous.
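Indeed, solving $\rho _{1}(a)p=p$ coordinate-wise with the matrix above gives
\begin{equation*}
p_{2}=p_{3},\qquad 2p_{1}+2p_{3}-\frac{1}{4}=2p_{2}\quad \Longrightarrow \quad p_{1}=\frac{1}{8},
\end{equation*}
so the fixed-point set of $\rho _{1}(a)$, and hence of every power $\rho _{1}(a^{n})$, is exactly the line $(1/8,t,t)$.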
\end{proof}
\begin{corollary}
\label{cor20}There is no affine crystallographic group in $\mathcal{L}
(E^{1,2})$ which is a quotient of $G(3_{1}).$
\end{corollary}
\begin{corollary}
There is no affine crystallographic group in $\mathcal{L}(E^{1,2})$
generated by two isometries $\mu $ and $\nu $ such that $\mu ^{2}=\nu ^{3}=1.
$
\end{corollary}
\begin{proof}
This is a consequence of Corollary \ref{cor20} and Corollary \ref{cor1}.
\end{proof}
\end{document}